Dataset Preview
id: string
asked_at: string
author_name: string
author_rep: string
score: int32
title: string
tags: sequence
body: string
comments: sequence
answers: sequence
973753
2014-10-14 18:36:59Z
Sujaan Kunalan
10.5k
3
List the elements of $\langle\frac{1}{2}\rangle$ in $(\mathbb{Q},+)$ and in $(\mathbb{Q}^*,\times)$.
[ "abstract-algebra", "group-theory" ]
List the elements of $\langle\frac{1}{2}\rangle$ in $(\mathbb{Q},+)$ and in $(\mathbb{Q}^*,\times)$. where $\mathbb{Q}^*:=\mathbb{Q}\setminus\{0\}$ My attempt: Well, I know that $\langle a\rangle=\{a^n:n\in\mathbb{Z}\}$, so $\langle \dfrac{1}{2}\rangle=\left\{\left( \frac{1}{2}\right)^n:n\in\mathbb{Z}\right\}$ Well, for $(\mathbb{Q},+), \langle\frac{1}{2}\rangle= n\cdot \frac{1}{2}$, since the group is under addition, so the elements are: $$\left\{\ldots,-\frac{3}{2},-1,-\frac{1}{2},0,\frac{1}{2},1,\frac{3}{2},\ldots\right\}$$ As for $(\mathbb{Q}^*,\times), \langle \frac{1}{2}\rangle=\left(\frac{1}{2}\right)^n$ the group is under multiplication so the elements are: $$\left\{\ldots,8,4,2,\frac{1}{2},\frac{1}{4},\frac{1}{8},\ldots\right\}$$ I was just wondering if this was correct.
{ "id": [ "1997265", "1997282" ], "body": [ "Yes, that is essentially correct", "In $(\\mathbb{Q}^*,\\times)$, you need $\\left(\\frac{1}{2}\\right)^{0}=1$ included. Every group needs an identity element." ], "at": [ "2014-10-14 18:49:27Z", "2014-10-14 18:55:27Z" ], "score": [ "1", "5" ], "author": [ "Manolito Pérez", "Cole Hansen" ], "author_rep": [ "1211", "335" ] }
{ "id": [ "974323" ], "body": [ "\nYour enumeration for $\\left(\\mathbb{Q}, +\\right)$ is correct. In a group whose operation is addition, you must consider all multiples of the generator, including inverses (negative values). \nHowever, in the second example of $\\left(\\mathbb{Q}^{*}, \\times \\right)$, you forgot the case of $\\left(\\frac{1}{2}\\right)^{0}=1$. A group with an operation of multiplication implies that you need to take all powers of the generator, including inverses (reciprocals). Thus, the generator raised to the $0$th power must be considered. Additionally, every group must have an identity element. Since the operation is multiplication, the group needs the multiplicative identity, which is $1$. So, the second example — $\\left(\\mathbb{Q}^{*}, \\times \\right)$ — requires that $1$ be an element.\n" ], "score": [ 2 ], "ts": [ "2014-10-15 02:39:55Z" ], "author": [ "Sujaan Kunalan" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
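The two generated subgroups discussed in this row can be enumerated with exact rational arithmetic; a minimal sketch (an editorial illustration, not part of the dataset) using Python's `fractions` module:

```python
from fractions import Fraction

half = Fraction(1, 2)

# In (Q, +), <1/2> consists of the integer multiples n * (1/2).
additive = [n * half for n in range(-3, 4)]

# In (Q*, x), <1/2> consists of the integer powers (1/2)**n; taking
# n = 0 produces the identity element 1 that the comments insist on.
multiplicative = [half ** n for n in range(-3, 4)]

print(additive)
print(multiplicative)
```

Note that `half ** 0 == 1`, matching the correction made in the comments and the accepted answer.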
973510
2014-10-14 15:45:10Z
user010010001
861
3
Calculating the covariance of two random variables
[ "probability", "covariance" ]
I have 4 random variables: $X\sim Pois(6)$ $Y \sim Geom (\frac{1}{4})$ $Z=6X-Y$ $U=2X-1$ What is the covariance of X and Y if Cov(Z,U)=0? What I did: $Cov(X,Y)=E(XY)-E(X)E(Y)$, I know $E(X)$ and $E(Y)$ as well, I only need $E(XY)$ From $Cov(Z,U)=E(ZU)-E(Z) E(U)=0$, and I know $E(Z)=32$ and $E(U)=11$ $E(ZU)=E([6X-Y][2X-1])=32\cdot 11$ I expanded the expression and got $E(XY)=60$ So $Cov(X,Y)=60-24=36$ Is that right? I generated such X,Y, U and Z in R, but didn't get this 36 covariance.
{ "id": [ "1996922", "1996932", "1996942" ], "body": [ "Did you happen to simulate X and Y as draws from their respective distributions? If so, then that is not correct, as it implicitly assumes that X and Y are independent, when in fact they are not.", "BTW: Your calculation of the covariance is correct.", "@Eupraxis1981 yes, I simulated them from their distributions. Thanks!" ], "at": [ "2014-10-14 16:17:45Z", "2014-10-14 16:22:37Z", "2014-10-14 16:26:23Z" ], "score": [ "", "", "" ], "author": [ "user76844", "user76844", "user010010001" ], "author_rep": [ null, null, "861" ] }
{ "id": [ "973581" ], "body": [ "\nPer your response to my comment, your simulation did not capture the true behavior of X and Y, since you modeled them as independent when they are not independent. This is the source of your numerical error. The theoretically calculated covariance is correct though (36). Simulation of correlated variables with arbitrary marginals is an intermediate/advanced topic, involving copulas and other techniques for inducing the correct relationships.\n" ], "score": [ 2 ], "ts": [ "2014-10-14 16:33:42Z" ], "author": [ "user010010001" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
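The question's calculation can be checked symbolically. This editorial sketch assumes only bilinearity of covariance (additive constants drop out, so $\mathrm{Cov}(6X-Y,\,2X-1)=12\,\mathrm{Var}(X)-2\,\mathrm{Cov}(X,Y)$) and $\mathrm{Var}(X)=6$ for $X\sim\mathrm{Pois}(6)$:

```python
import sympy as sp

var_x, cov_xy = sp.symbols('var_x cov_xy')

# Bilinearity of covariance (constants drop out):
# Cov(6X - Y, 2X - 1) = 12 Var(X) - 2 Cov(X, Y)
cov_zu = 12 * var_x - 2 * cov_xy

# X ~ Pois(6) gives Var(X) = 6; impose Cov(Z, U) = 0 and solve for Cov(X, Y).
solution = sp.solve(sp.Eq(cov_zu.subs(var_x, 6), 0), cov_xy)
print(solution)  # [36]
```

This reproduces the value 36 without computing $E(XY)$ at all, confirming the comment that the theoretical answer is correct and only the (independence-assuming) simulation was off.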
973213
2014-10-14 12:27:41Z
Luca
1596
3
Order of groups and elements
[ "group-theory", "finite-groups" ]
(related to this question: Finite Group and normal Subgroup) Let $d,m\in \mathbb{Z}$ with $d,m\geq 1$ and $\gcd(d,m)=1$. Let $G$ be a group of order $dm$ We define the set $X:= \{g\in G | g^d=1\}$ and $H$ a subgroup of $G$. If $H\subseteq X$ then $|H|$ divides $d$. I tried to show it the other way around: If $\#H\not | d$ then $H\not\subseteq X$. For every element $h\in H$ we have $ord(h)| \# H$. Let $d:= q_1*...*q_k$ and $\# H= p_1, ..., p_r$ with all $p_i\not=q_j$ since $\#H\not| d$. Then we have that $ord(h)=p_{i_1}*...*p_{i_l}$ for some of the $p_i$ in the product of $\# H$. But $h^d=1$ iff $ord(h)|d$. But since all $p_{i_\alpha}\not= q_j$ we have $h^d\not=1$ for all $h\in H$. I feel that I am missing something. Does this way make sense? Best, Luca
{ "id": [ "1996393", "1996403" ], "body": [ "$H$ is actually a subgroup?", "Sorry, yes. I forgot to write that." ], "at": [ "2014-10-14 12:58:17Z", "2014-10-14 13:01:05Z" ], "score": [ "", "" ], "author": [ "Nicky Hekster", "Luca" ], "author_rep": [ "45391", "1596" ] }
{ "id": [ "973248" ], "body": [ "\nI don't think all $p_i \\neq q_j$ necessarily follows from $|H| \\nmid d$. That would require $\\gcd(|H|,d) = 1$ which is a stronger condition.\nI'm guessing $H$ has to be a subgroup. Then note that $|H| \\mid |G|$ and for $h \\in H$, $h^d = e$. Suppose that $p$ is a prime which divides $|H|$. By Cauchy's theorem, there exists an element of order $p$ in $H$, so $p \\mid d$ and $p \\nmid m$. Therefore, we have that $\\gcd(|H|,m) = 1$. Furthermore, $|H| \\mid |G| = dm$ so $|H| \\mid d$.\n" ], "score": [ 2 ], "ts": [ "2014-10-14 13:11:46Z" ], "author": [ null ], "author_rep": [ "1" ], "accepted": [ true ], "comments": [ { "id": [ "1996437" ], "body": [ "Thank you! Also for explaining my mistake :)" ], "at": [ "2014-10-14 13:19:21Z" ], "score": [ "" ], "author": [ "Luca" ], "author_rep": [ "1596" ] } ] }
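The statement can be sanity-checked on concrete groups. The sketch below (editorial, not from the thread) uses cyclic groups $\mathbb{Z}_{dm}$ because their subgroups are easy to enumerate; the coprime pair $d=4,\ m=9$ is an arbitrary choice:

```python
from math import gcd

d, m = 4, 9
assert gcd(d, m) == 1
n = d * m

# X = {g in Z_n : d*g = 0 (mod n)}, the additive analogue of {g : g^d = 1}.
X = {g for g in range(n) if (d * g) % n == 0}

# Subgroups of Z_n are exactly the cyclic groups generated by n // k,
# one for each divisor k of n (giving the unique subgroup of order k).
def subgroups(n):
    for k in range(1, n + 1):
        if n % k == 0:
            gen = n // k
            yield {(gen * i) % n for i in range(k)}

for H in subgroups(n):
    if H <= X:
        assert d % len(H) == 0  # |H| divides d, as claimed
print(sorted(X))
```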
973153
2014-10-14 11:16:34Z
user184036
null
3
Prove $g(x)=\sqrt{f(x)}$ is regulated
[ "calculus", "real-analysis", "analysis", "functional-analysis" ]
Let $f:[a,b]→\mathbb{R}$ be regulated and non-negative. Prove that $g:[a,b]→\mathbb{R}$ defined by $g(x)=\sqrt{f(x)}$ is regulated. A function $f:[a,b]\to\Bbb R$ is a regulated function if $\forall$ $\varepsilon>0$ $\exists$ a step function $\varphi:[a,b]\to\Bbb R$ such that $\Vert f-\varphi\Vert<\varepsilon$. I've tried to use the definition of a regulated function but haven't been able to make any progress. Is there a way of using the fact that a linear combination of regulated functions is regulated? Or am I not even close? @Arthur Is this attempt at all correct or at least along the lines of what you mean: We have a step function $\varphi_f$ for $f$ and $\varepsilon_f$. Let $\sqrt{\varphi_f}=\varphi_g$. We know $\Vert f-\varphi_f\Vert<\varepsilon_f \implies \Vert f-\varphi_g^2\Vert<\varepsilon_f$ $\implies \Vert (\sqrt{f}+\varphi_g)(\sqrt{f}-\varphi_g)\Vert<\varepsilon_f$ $\implies \Vert (g+\varphi_g)\Vert \Vert(g-\varphi_g)\Vert<\varepsilon_f $ $\implies \Vert(g-\varphi_g)\Vert < (\varepsilon_f)/\Vert (g+\varphi_g)\Vert $ So letting $\varepsilon_g = (\varepsilon_f)/\Vert (g+\varphi_g)\Vert $ proves that $\Vert(g-\varphi_g)\Vert < \varepsilon_g$ and hence $g(x)$ is regulated.
{ "id": [ "1996205" ], "body": [ "I am a bit naive perhaps... Could you use $\\phi=\\sqrt{\\psi}$ as step function for $g$, for each $\\epsilon$ you have, since $\\phi$ will obviously be a step function as well?" ], "at": [ "2014-10-14 11:22:55Z" ], "score": [ "" ], "author": [ "Martigan" ], "author_rep": [ "5654" ] }
{ "id": [ "973159", "973167" ], "body": [ "\nUse the fact that $f([a,b]) \\subset [0,R]$ for some $R > 0$ and that $[0,R] \\to \\Bbb{R}, x \\mapsto \\sqrt{x}$ is uniformly continuous.\nFinally, you should use that if $\\varphi$ is a (non-negative) step function, then so is $\\sqrt{\\varphi}$.\nOf course, you will have to show that the step function $\\varphi$ approximating $f$ can be taken to be nonnegative, but this is easy.\n", "\nGiven an $\\varepsilon_g$ for $g$, you need to transform it into a fitting $\\varepsilon_f$ for $f$. Then, since $f$ is regulated, there is a step function $\\varphi_f$ for $f$ and $\\varepsilon_f$. This step function can then be transformed into a step function $\\varphi_g$ for $g$ and $\\varepsilon_g$.\nThese are the general lines along which you should write your proof. Details about exactly what transformations are involved, along with a final confirmation that $\\varphi_g$ actually works must be supplied.\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-14 11:23:31Z", "2014-10-14 11:31:55Z" ], "author": [ "", "" ], "author_rep": [ null, null ], "accepted": [ false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1998690", "1998817", "1998829" ], "body": [ "is my new attempt close? I'm a bit unsure I'm allowed to do what I did.", "@john.smith It's close, but in some sense you've got it backward. You cannot start with $\\varepsilon_f$ and $\\varphi_f$, and from there say what $\\varepsilon_g$ should be. You have to start with an unspecified $\\varepsilon_g$, and then say what $\\varepsilon_f$ must be for it all to work out. Also remember that there is no $\\varphi_f$ until you've decided on what $\\varepsilon_f$ should be, which means that no $\\varphi$ of any kind should appear in the definition of $\\varepsilon_f$.", "That being said, while working backwards won't give you a working proof, it can in many cases give hints about what values do work. 
In many cases you see a proof that takes some random definition seemingly out of thin air, but somehow it magically works out in the end. Working backwards is one way of getting at those definitions, although it's not mentioned in the final proof." ], "at": [ "2014-10-15 08:24:01Z", "2014-10-15 10:05:07Z", "2014-10-15 10:12:06Z" ], "score": [ "", "", "" ], "author": [ "user184036", "Arthur", "Arthur" ], "author_rep": [ null, "192589", "192589" ] } ] }
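The uniform-continuity hint in the first answer can be made quantitative via the elementary inequality $|\sqrt{a}-\sqrt{b}|\le\sqrt{|a-b|}$ for $a,b\ge 0$: a nonnegative step function within $\varepsilon^2$ of $f$ gives a square-rooted step function within $\varepsilon$ of $g$. A numerical spot check of that inequality (an editorial sketch, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 10.0, size=10_000)
b = rng.uniform(0.0, 10.0, size=10_000)

# |sqrt(a) - sqrt(b)| <= sqrt(|a - b|) for nonnegative a, b
lhs = np.abs(np.sqrt(a) - np.sqrt(b))
rhs = np.sqrt(np.abs(a - b))
print(bool(np.all(lhs <= rhs + 1e-12)))  # True
```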
972905
2014-10-14 06:24:27Z
Anixx
8417
3
Fourier transform of exponent?
[ "complex-numbers", "fourier-analysis", "dirac-delta" ]
Mathematica fails to find a Fourier transform of exponent. Yet according to this page $$\mathcal{F}[e^{2\pi iat}]=\delta(t-a)$$ and via substitution, $$\mathcal{F}[e^{at}]=\delta\left(t-\frac a{2\pi i}\right)$$ Yet this does not make much sense because inverse Fourier transform of this will give 0. Thus my question is what one can do so to fix this and make Fourier transform of exponent useful and revertible. Possibly some modification of Dirac Delta for complex argument? Or taking Fourier integral over complex plane? Or taking two integrals, one over reals the other over imaginary axis?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "1103456" ], "body": [ "\nMathematica $10.0.2.0$ can find the Fourier transform of $f(t) = e^{2\\pi i at}$.\n\nMathematica defines the Fourier transform as \n$$\n\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}f(t)e^{i\\omega t}dt\n$$\nNow we can derive the solution shown by Mathematica and the website.\n\\begin{align}\n\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}e^{2\\pi iat}e^{i\\omega t}dt &=\n\\frac{1}{\\sqrt{2\\pi}}\\lim_{x\\to\\infty}\\int_{-x}^{x}e^{it(2\\pi a+\\omega)}dt\\\\\n&= \\sqrt{2\\pi}\\lim_{x\\to\\infty}\\frac{\\sin[(2\\pi a+\\omega)x]}{(2\\pi a+\\omega)\\pi}\n\\end{align}\nLet $y=2\\pi a+\\omega$ and $\\epsilon=\\frac{1}{x}$. Then\n\\begin{align}\n\\sqrt{2\\pi}\\lim_{\\epsilon\\to 0}\\frac{\\sin[y/\\epsilon]}{y\\pi} &= \\sqrt{2\\pi}\\delta(y)\\\\\n&= \\sqrt{2\\pi}\\delta(\\omega + 2\\pi a)\n\\end{align}\nNow, we can take the inverse Fourier transform $F(\\omega) = \\sqrt{2\\pi}\\delta[\\omega - (-2\\pi a)]$ so \n$$\n\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}\\sqrt{2\\pi}\\delta[\\omega - (-2\\pi a)]e^{-i\\omega t}d\\omega = e^{2\\pi iat}\\tag{1}\n$$\nwhich occurs by the sifting property of the Dirac Delta function.\n" ], "score": [ 2 ], "ts": [ "2015-01-14 01:09:30Z" ], "author": [ null ], "author_rep": [ "1031" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }

972574
2014-10-14 00:50:02Z
Avi Stiefel
31
3
Number of Curvature Maxima of a 2D Cubic Bezier curve
[ "geometry", "parametric", "curvature", "bezier-curve" ]
I am trying to prove that a standard cubic Bezier curve can only have at most 2 curvature maxima over $t \in [0,1]$. Assuming that no 3 adjacent control points are collinear, the curvature will either have 2 true local maxima, and the curvature at the endpoints will not be locally maximum, or else the curvature will have 0 or 1 true local maxima, but 2 or 1 endpoints will be a local maximum. Intuitively this appears to be true, and experimentally this holds, but I cannot figure out how to go about proving this. Any direction would be of great help.
{ "id": [ "1996140" ], "body": [ "If I understand you correctly, then your postulated result is not true. I can certainly produce a Bezier curve whose curvature increases monotonically from one end to the other. So, there is only one local maximum, and it's located at one end of the curve." ], "at": [ "2014-10-14 10:53:57Z" ], "score": [ "" ], "author": [ "bubba" ], "author_rep": [ "41760" ] }
{ "id": [ "972637", "973133" ], "body": [ "\nHave you tried writing out the formula for the curvature directly in terms of the polynomials? Note that a standard Bezier curve is really just a way of writing a general cubic curve, so \"Bezier\" is a red herring here: you're really asking if an arbitrary cubic curve can have more than two curvature extrema on its whole domain. \nThe curvature formula is something like $(\\ddot{x}\\dot{y} - \\ddot{y} \\dot{x})/(\\dot{x}^2 + \\dot{y}^2)^\\frac{3}{2}$. The numerator is therefore a quadratic, and the denominator's the $3/2$ power of a quadratic. I'm not certain whether there's anything useful to drag out of that, but it might be worth writing out in terms of the actual coeffs of $x$ and $y$. \n", "\nIf I understand you correctly, then your postulated result is not true. I can certainly produce a Bezier curve whose curvature increases monotonically from one end to the other. So, there is only one local maximum, and it's located at one end of the curve. \nTake, for example, the curve with $\\mathbf{P}_0 = (0,0)$, $\\mathbf{P}_1 = (4,1)$, $\\mathbf{P}_2 = (7,1)$, $\\mathbf{P}_3 = (9,0)$. Its curvature increases monotonically from $t=0$ to $t=1$. The only local maximum is at $t=1$.\nAnother example: $\\mathbf{P}_0 = (0,0)$, $\\mathbf{P}_1 = (4,1)$, $\\mathbf{P}_2 = (5,1)$, $\\mathbf{P}_3 = (9,0)$. Its curvature increases monotonically from $t=0$ to $t=0.5$, and then decreases monotonically from $t=0.5$ to $t=1$. 
The only local maximum is at $t=0.5$.\nAnd, just to make things more confusing, here's a curve with three local maxima: \n$\\mathbf{P}_0 = (0,0)$, $\\mathbf{P}_1 = (7.5,0.5)$, $\\mathbf{P}_2 = (1.5,0.5)$, $\\mathbf{P}_3 = (9,0)$.\nTo analyse curvature, the suggestion given by @John seems reasonable: write the curve in polynomial form, and use a computer algebra system like Maple or Mathematica to calculate the curvature $\\kappa$ function and its derivative.\nIf you do this, you will find that the numerator of $d\\kappa/dt$ is a polynomial of degree 5 in $t$. From there, I don't immediately see how to proceed.\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-14 01:43:34Z", "2014-10-14 12:46:10Z" ], "author": [ null, null ], "author_rep": [ "11239", "11239" ], "accepted": [ false, false ], "comments": [ { "id": [ "1996118", "1996121" ], "body": [ "A Bezier curve is a bounded portion of a parametric cubic curve. Asking about curvature maxima on a compact interval is different from asking about maxima on the entire real line (it seems to me).", "But I agree that working with the polynomial form of the curve (rather than the Bernstein form) will make the algebra simpler, so it's a good idea." ], "at": [ "2014-10-14 10:39:00Z", "2014-10-14 10:41:22Z" ], "score": [ "", "" ], "author": [ "bubba", "bubba" ], "author_rep": [ "41760", "41760" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
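The answer's first counterexample can be checked numerically. This editorial sketch converts the control points to power-basis coefficients and samples the curvature $\kappa = |x'y'' - y'x''|/(x'^2+y'^2)^{3/2}$; for $P_0=(0,0)$, $P_1=(4,1)$, $P_2=(7,1)$, $P_3=(9,0)$ the maximum indeed lands at the $t=1$ endpoint:

```python
import numpy as np

def bezier_curvature(P, ts):
    # Power-basis coefficients of a cubic Bezier: B(t) = P0 + c1 t + c2 t^2 + c3 t^3
    P = np.asarray(P, dtype=float)
    c1 = 3 * (P[1] - P[0])
    c2 = 3 * (P[2] - 2 * P[1] + P[0])
    c3 = P[3] - 3 * P[2] + 3 * P[1] - P[0]
    t = ts[:, None]
    d1 = c1 + 2 * c2 * t + 3 * c3 * t ** 2      # B'(t)
    d2 = 2 * c2 + 6 * c3 * t                    # B''(t)
    cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    speed = np.hypot(d1[:, 0], d1[:, 1])
    return np.abs(cross) / speed ** 3

ts = np.linspace(0.0, 1.0, 2001)
kappa = bezier_curvature([(0, 0), (4, 1), (7, 1), (9, 0)], ts)
print(int(np.argmax(kappa)) == len(ts) - 1)  # True: curvature peaks at t = 1
```

(For this particular control polygon the cubic coefficient `c3` happens to vanish, so the curve is a parabola with constant cross term and strictly decreasing speed, which makes the monotone-curvature claim easy to see.)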
972370
2014-10-13 21:53:25Z
Giiovanna
3187
3
Find the sum $\sum_{n=1}^\infty \frac{n}{(1+x)^{2n+1}}$
[ "sequences-and-series", "power-series" ]
Find the sum $$\sum_{n=1}^\infty \frac{n}{(1+x)^{2n+1}}.$$ Indicating the interval of convergence for $x$. My attempt: Let $ t=\frac{1}{x+1}$. Then, applying the root test, $$\lim_{n\to \infty} \{n t^{2n+1}\}^{1/n} = |t|^2 < 1 \iff |t| <1.$$ Then, we have that $|x+1| >1 \iff x < -2 \text{ or } x >0$. Now, consider the series $$\sum_{n=1}^\infty n t^n = \frac{t}{(t-1)^2}, \quad |t|<1.$$ So, since $|t|< 1 \Rightarrow |t^2| <1$, $$\sum_{n=1}^\infty n t^{2n }= \frac{t^2}{(t^2-1)^2}, \quad |t|<1.$$ Multiplying by $t,$ $$\sum_{n=1}^\infty n t^{2n +1 }= \frac{t^3}{(t^2-1)^2}, \quad |t|<1.$$ If we substitute back, we have what we want. I want to know if my steps are correct. I have doubts about the interval of convergence part. Thanks for your effort!
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "972463", "972406" ], "body": [ "\nFirst recall the basic formula for the sum of a geometric series:\n$$\\sum_{n=0}^{\\infty}z^n=\\frac{1}{1-z}.$$\nDifferentiating, we obtain:\n$$\\sum_{n=1}^{\\infty}nz^{n-1}=\\frac{1}{(1-z)^2}.$$\nMultiplying both sides by $z$ yields:\n$$\\sum_{n=1}^{\\infty}nz^{n}=\\frac{z}{(1-z)^2}.$$\n\nNow, we can rewrite the series $S(x)=\\sum_{n=1}^{\\infty}\\frac{n}{(1+x)^{2n+1}}$ as a finite sum of series that are summable via the formulas given above as follows:\n$$\\begin{align}\nS(x)\n&=\\sum_{n=1}^{\\infty}\\frac{n}{(1+x)^{2n+1}}\\\\\n&=\\frac12\\sum_{n=1}^{\\infty}\\frac{2n}{(1+x)^{2n+1}}\\\\\n&=\\frac12\\sum_{n=1}^{\\infty}\\frac{2n+1-1}{(1+x)^{2n+1}}\\\\\n&=\\frac12\\sum_{n=1}^{\\infty}\\frac{2n+1}{(1+x)^{2n+1}}-\\frac12\\sum_{n=1}^{\\infty}\\frac{1}{(1+x)^{2n+1}}\\\\\n&=\\frac12\\sum_{n=1}^{\\infty}\\frac{2n}{(1+x)^{2n}}+\\frac12\\sum_{n=1}^{\\infty}\\frac{2n+1}{(1+x)^{2n+1}}-\\frac12\\sum_{n=1}^{\\infty}\\frac{1}{(1+x)^{2n}}\\\\\n&~~~~~ -\\frac12\\sum_{n=1}^{\\infty}\\frac{1}{(1+x)^{2n+1}}-\\frac12\\sum_{n=1}^{\\infty}\\frac{2n}{(1+x)^{2n}}+\\frac12\\sum_{n=1}^{\\infty}\\frac{1}{(1+x)^{2n}}\\\\\n&=\\frac12\\sum_{n=2}^{\\infty}\\frac{n}{(1+x)^{n}}-\\frac12\\sum_{n=2}^{\\infty}\\frac{1}{(1+x)^{n}}-\\sum_{n=1}^{\\infty}\\frac{n}{(1+x)^{2n}}+\\frac12\\sum_{n=1}^{\\infty}\\frac{1}{(1+x)^{2n}}.\n\\end{align}$$\nIn the last line above, let 
$z=\\frac{1}{1+x}$:\n$$\\begin{align}\nS(x)\n&=\\frac12\\sum_{n=2}^{\\infty}\\frac{n}{(1+x)^{n}}-\\frac12\\sum_{n=2}^{\\infty}\\frac{1}{(1+x)^{n}}-\\sum_{n=1}^{\\infty}\\frac{n}{(1+x)^{2n}}+\\frac12\\sum_{n=1}^{\\infty}\\frac{1}{(1+x)^{2n}}\\\\\n&=\\frac12\\sum_{n=2}^{\\infty}nz^n-\\frac12\\sum_{n=2}^{\\infty}z^n-\\sum_{n=1}^{\\infty}nz^{2n}+\\frac12\\sum_{n=1}^{\\infty}z^{2n}\\\\\n&=\\frac12\\left[-z+\\sum_{n=1}^{\\infty}nz^n\\right]-\\frac12\\left[-1-z+\\sum_{n=0}^{\\infty}z^n\\right]-\\sum_{n=1}^{\\infty}n(z^2)^{n}+\\frac12\\sum_{n=0}^{\\infty}(z^2)^{n}-\\frac12\\\\\n&=\\frac12\\left[-z+\\frac{z}{(1-z)^2}\\right]-\\frac12\\left[-1-z+\\frac{1}{1-z}\\right]-\\frac{z^2}{(1-z^2)^2}+\\frac12\\frac{1}{1-z^2}-\\frac12\\\\\n&=\\frac{z^2}{2(1-z)^2}+\\frac{1-3z^2}{2(1-z^2)^2}-\\frac12\\\\\n&=\\frac{z^3}{(1-z^2)^2}\\\\\n&=\\frac{x+1}{x^2(x+2)^2}.\n\\end{align}$$\nThe interval of convergence corresponds to $|z|<1$, or $\\frac{1}{|1+x|}<1\\iff (x>0)\\lor(x<-2)$.\n", "\nStarting with the series\n\\begin{align}\n\\sum_{n=1}^{\\infty} n \\, t^{n} = \\frac{t}{(1-t)^{2}}\n\\end{align}\nthen it is seen that\n\\begin{align}\n\\sum_{n=1}^{\\infty} \\frac{n}{(1+x)^{2n+1}} = \\frac{1}{(1+x)^{3}} \\cdot \\frac{(1+x)^{4}}{[(1+x)^{2}-1]^{2}} = \\frac{1+x}{x^{2} \\, (2+x)^{2}}.\n\\end{align}\nThe series does not converge for $x \\in\\{0, -2\\}$.\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-13 23:07:41Z", "2014-10-13 22:23:11Z" ], "author": [ null, null ], "author_rep": [ "2256", "2256" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
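Both answers arrive at the same closed form, which can be spot-checked against partial sums; a small editorial sketch (sample points chosen to satisfy $|1+x|>1$):

```python
import numpy as np

def partial_sum(x, N=400):
    n = np.arange(1, N + 1)
    return np.sum(n / (1 + x) ** (2 * n + 1))

def closed_form(x):
    # (x + 1) / (x^2 (x + 2)^2), the value both answers derive
    return (x + 1) / (x ** 2 * (x + 2) ** 2)

for x in (0.5, 1.0, 3.0, -3.5):  # all with |1 + x| > 1
    print(x, np.isclose(partial_sum(x), closed_form(x)))
```

At $x=1$ both sides equal $2/9$, a convenient hand-checkable value.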
971936
2014-10-13 16:39:11Z
Kestrel
247
3
Proving a Recursion Using Induction
[ "induction", "recursion" ]
I am trying to prove the following recursion. $$a(n) = \left\{\begin{matrix} n(a(n-1)+1) & \text{if } n \geq 1\\ 0 & \text{if } n = 0 \end{matrix}\right.$$ is the recursive definition of $a(n)$. Using this, I need to prove that $$ a(n) = n!\bigg(\frac{1}{0!} + \frac{1}{1!} + \cdots + \frac{1}{(n-1)!}\bigg)$$ for $n \geq 1$ by induction on $n$. I've found that, for the first 5 values of $n$, the terms are $2,5,16,65,326$. I think now I need to find a formula that describes these terms, and therefore $a(n)$. The problem is, I don't know where to start. Can anyone give me a hand?
{ "id": [ "1994007" ], "body": [ "You were given the formula for $a$ already. You need to prove that $\\forall n\\in \\mathbb N\\left(a(n)=n!\\sum \\limits_{k=0}^{n-1}\\left(\\dfrac 1{k!}\\right)\\right)$." ], "at": [ "2014-10-13 16:43:30Z" ], "score": [ "2" ], "author": [ "Git Gud" ], "author_rep": [ "30993" ] }
{ "id": [ "971949", "971943" ], "body": [ "\nYour induction step is:\n\\begin{align}\nn \\cdot a_{n-1} + n & = n(n-1)!\\bigg(\\frac{1}{0!} + \\frac{1}{1!} + \\cdots + \\frac{1}{(n-2)!}\\bigg) + n \\\\\n& = n!\\bigg(\\frac{1}{0!} + \\frac{1}{1!} + \\cdots + \\frac{1}{(n-2)!}\\bigg) + n! \\cdot \\frac{1}{(n-1)!} \\\\\n& = n!\\bigg(\\frac{1}{0!} + \\frac{1}{1!} + \\cdots + \\frac{1}{(n-1)!}\\bigg) \\\\\n& = a_n\n\\end{align}\nThe base is easy. Then you're done. Read up on how induction works.\n", "\nFor $m\\ge1,$\nIf $a(m)=m!\\left(\\sum_{r=0}^{m-1}\\frac1{r!}\\right)$\n\\begin{align}\na(m+1) & =(m+1)[a(m)+1] \\\\[6pt]\n& =(m+1)[m!\\left(\\sum_{r=0}^{m-1}\\frac1{r!}\\right)+1] \\\\[6pt]\n& =(m+1)!\\left(\\sum_{r=0}^{m-1}\\frac1{r!}\\right)+m+1 \\\\[6pt]\n& =(m+1)!\\left(\\sum_{r=0}^{m-1}\\frac1{r!}\\right)+\\frac{(m+1)!}{m!} \\\\[6pt]\n& =(m+1)!\\left(\\sum_{r=0}^{m}\\frac1{r!}\\right)\n\\end{align}\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-13 16:52:00Z", "2014-10-13 16:45:57Z" ], "author": [ null, null ], "author_rep": [ "1", "1" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
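The recurrence and the closed form are easy to check against each other in exact arithmetic; an editorial sketch. Note that it produces $1, 4, 15, 64, 325$, each one less than the $2, 5, 16, 65, 326$ listed in the question, which suggests the asker tabulated $a(n)+1$:

```python
from math import factorial
from fractions import Fraction

def a_rec(n):
    # a(0) = 0, a(n) = n * (a(n-1) + 1)
    return 0 if n == 0 else n * (a_rec(n - 1) + 1)

def a_closed(n):
    # n! * (1/0! + 1/1! + ... + 1/(n-1)!), kept exact with Fractions
    return factorial(n) * sum(Fraction(1, factorial(k)) for k in range(n))

for n in range(1, 12):
    assert a_rec(n) == a_closed(n)
print([a_rec(n) for n in range(1, 6)])  # [1, 4, 15, 64, 325]
```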
971758
2014-10-13 14:41:31Z
Jamil_V
1683
3
Finding the Discriminant of $f(x)=x^n+ax+b$ Using Differentiation
[ "number-theory", "algebraic-number-theory" ]
Greetings fellow Mathematics enthusiasts. I was hoping someone could offer me some advice on proving the following statement about the discriminant of a polynomial with degree $n$. Let $f(x)=x^n+ax+b$ be irreducible over $\mathbb{Q}$ and $\alpha$ a root of $f$, where $a,b\in \mathbb{Q}, a \neq 0, n \geq 2$. Prove that disc$(\alpha) = (-1)^\frac{n(n-1)}{2}\left( (-1)^{1-n}(n-1)^{n-1}a^n+n^nb^{n-1}\right)$. My professor left us a hint that reads: Show that $f'(\alpha)= \frac{-((n-1)a \alpha+nb)}{\alpha}$. Now find $N((n-1)a \alpha+nb)$ by noticing that it is a root of $\left( \frac{x-nb}{(n-1)a}\right)^n + a\left( \frac{x-nb}{(n-1)a}\right)+b$. This power-reducing technique was introduced in class to find the discriminant above, but with $n=3$. As for the Norm calculation, we are told to use the constant term of the minimal polynomial. Here is my work so far: Given $f'(\alpha)=\frac{-((n-1)a \alpha+nb)}{\alpha}$, let $\beta=(n-1)a\alpha+nb$. So we have $\alpha = \frac{\beta-nb}{(n-1)a}$. Back substituting into the original equation, $\left( \frac{\beta-nb}{(n-1)a}\right)^n + a\left(\frac{\beta-nb}{(n-1)a}\right) + b \Rightarrow \left(\beta-nb \right)^n + \left((n-1)a\right)^n\frac{\beta-nb}{n-1} + \left((n-1)a\right)^nb=0$. Upon simplifying a bit, we are left with $\left(\beta-nb\right)^n + \left((n-1)\right)^{n-1}a^n(\beta-nb) + \left((n-1)a\right)^nb=0$. Using the Binomial Theorem, we expand the first term and obtain $\left[\beta^n-nb\beta^{n-1} + \ldots + (-nb)^n\right] + \left((n-1)\right)^{n-1}a^n(\beta-nb) + \left((n-1)a\right)^nb=0$. Could somebody please explain to me why this approach is or is not correct? Thanks in advance, any suggestions would be greatly appreciated.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "972315" ], "body": [ "\nUndoubtedly you have seen the formula\n$$\n\\operatorname{disc}(\\alpha)=(-1)^{n(n-1)/2}\\prod_{i=1}^nf'(\\alpha_i),\n$$\nwhere $\\alpha_i$, $i=1,2,\\ldots,n,$ are the conjugates of $\\alpha$. \nThe relations arising from the factorization\n$$\nf(x)=\\prod_{i=1}^n(x-\\alpha_i)\\qquad(*)\n$$\nwill also play a role.\nThe norm calculation that your professor talked about is equivalent to my use of $f(q)$ below. Basically it means that for a rational number $q$ we have $f(q)=N(q-\\alpha)$. You may have done related tricks in class, so I cannot tell how familiar you are with this technique. \nI first describe how I would do this (this sounds familiar actually - I'm fairly sure I have done this exercise at some point). Then at the bottom I make a few comments about the material you posted. I'm afraid I'm not sure that I will answer your questions.\n\nThe relations\n$$\nf'(\\alpha_i)=\\frac{-n(a\\alpha_i+b)+a\\alpha_i}{\\alpha_i}\n$$\nthat hold for all $i$ are the key. We get\n$$\n\\prod_{i=1}^nf'(\\alpha_i)=\\prod_{i=1}^n\\frac{a(1-n)\\alpha_i-nb}{\\alpha_i}.\\qquad(**)\n$$\nHere the product of the denominators is $\\prod_{i=1}^n\\alpha_i=(-1)^nb$, because that product emerges as the constant term of the minimal polynomial $f$ (= the norm of $\\alpha$ up to a sign). In the numerator let's write\n$$\na(1-n)\\alpha_i-nb=-a(1-n)\\left(\\frac{nb}{a(1-n)}-\\alpha_i\\right)\n$$\nHere the fraction $q=nb/(a(1-n))$ is independent of $i$. Thus the factorization $(*)$ tells us that\n$$\n\\begin{aligned}\n\\prod_{i=1}^n(a(1-n)\\alpha_i-nb)&=(-1)^n(a(1-n))^n\\prod_{i=1}(q-\\alpha_i)\\\\\n&=(-1)^na^n(1-n)^nf(q).\n\\end{aligned}\n$$\nCombining this with $(**)$ tells us that\n$$\n\\prod_{i=1}^nf'(\\alpha_i)=\\frac{a^n(1-n)^n}{b}f(q).\n$$\nI'm sure you can take it from here. 
\n\nYou can also use the fact that you mentioned:\n$$g(x):=\\left( \\frac{x-nb}{(n-1)a}\\right)^n + a\\left( \\frac{x-nb}{(n-1)a}\\right)+b$$\nis the minimal polynomial of $(n-1)a\\alpha +nb=a(1-n)(q-\\alpha)$. You need to first scale $g(x)$ so that it becomes monic. Then you need to expand and find the constant term of that scaled $g$. That can be used much the same way as I used $f(q)$ (that I didn't bother to calculate!). If you pick the terms that do not contain $\\beta$ from the left hand side of the equation on the third line from the bottom, you do get this. I'm not sure why you used that $\\beta$ though.\n" ], "score": [ 2 ], "ts": [ "2014-10-13 21:31:38Z" ], "author": [ "Jamil_V" ], "author_rep": [ "1683" ], "accepted": [ true ], "comments": [ { "id": [ "1994914" ], "body": [ "Thanks!! This answer was very helpful and descriptive." ], "at": [ "2014-10-13 22:48:53Z" ], "score": [ "" ], "author": [ "Jamil_V" ], "author_rep": [ "1683" ] } ] }
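Since $\operatorname{disc}(\alpha)$ equals the discriminant of the minimal polynomial, the target formula can be verified for small $n$ with a computer algebra system; an editorial sketch using sympy's `discriminant`:

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

def claimed_disc(n):
    # (-1)^(n(n-1)/2) * ((-1)^(1-n) (n-1)^(n-1) a^n + n^n b^(n-1))
    sign = sp.Integer(-1) ** (n * (n - 1) // 2)
    return sign * (sp.Integer(-1) ** (1 - n) * (n - 1) ** (n - 1) * a ** n
                   + n ** n * b ** (n - 1))

for n in range(2, 6):
    f = x ** n + a * x + b
    assert sp.expand(sp.discriminant(f, x) - claimed_disc(n)) == 0
print("formula matches sympy.discriminant for n = 2..5")
```

For $n=3$ this recovers the familiar $-4a^3-27b^2$.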
971738
2014-10-13 14:13:04Z
Burak
153
3
How to prove Big-Oh Equation e.g. $O({2}^{2n}) = O(2^n)$
[ "computational-complexity" ]
I am taking a course on complexity theory, but I have some trouble proving a Big-Oh equation like this: $O(2^{2n}) = O(2^n)$. $O(g(n))$ is a set of functions that fulfill the following definition: The function $f(n)$ is an element of $O(g(n))$ if there are positive constants $c$ and $n_0$ such that $f(n) \leq c \cdot g(n)$ for all $n \geq n_0$. I have already done some exercises of the form $f(n) = O(g(n))$, but never $O(f(n)) = O(g(n))$. Is there any other approach to proving equations like that? I read in a book that when some $O(g(n))$ term occurs in an exponent, as in $2^{O(g(n))}$, it is the same as $2^{c\cdot g(n)}$. So, can I do the same for equations like the one above? My approach is this: $O(2^{2n}) \in O(2^n)$ $2^{2n} \leq c \cdot 2^n$ Using $\log_2$ I can write $2n \cdot \log_2(2) \leq c \cdot n \cdot \log_2(2)$ And because $\log_2(2) = 1$: $2n \leq c \cdot n$ Now I can set $c = 2$ and $n_0 = 1$, which shows that $O(2^{2n}) = O(2^n)$. Is this correct? The book I read is "Introduction to the Theory of Computation" by Michael Sipser.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "973369" ], "body": [ "\n$O(f (n)) = O(g (n))$ is a shortcut for \"every function in the set $O (f (n))$ is also a member of the set $O(g (n))$\". Now if you proved that f (n) is in $O(g (n))$, you can show quite easily that every element of the set $O (f (n))$ is also a member of the set $O(g (n))$. \nYou can't just take the logarithm on both sides. That might prove that $\\log f (n)$ is an element of $O (\\log g (n))$, but that's absolutely not the same as $f (n) = O (g (n))$. \nLooking at the problem \"$O(2^{2n})=O(2^n)$\", you first need to decide whether you want to prove or disprove it. To me, $2^{2n}$ looks a lot bigger the $2^n$. And indeed, no matter how large you pick c, you can pick an n such that $2^{2n} = 2^n 2^n > c 2^n$; this is the case as soon as $2^n > c$. \n" ], "score": [ 2 ], "ts": [ "2014-10-14 14:26:46Z" ], "author": [ null ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
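The answer's disproof is constructive: $2^{2n}/2^n = 2^n$ eventually exceeds any fixed $c$. A trivial editorial sketch of finding such a witness ($c = 1000$ is an arbitrary choice):

```python
def ratio(n):
    # 2**(2n) / 2**n simplifies to 2**n, so the quotient is unbounded.
    return 2 ** (2 * n) // 2 ** n

c = 1000
witness = next(n for n in range(1, 100) if ratio(n) > c)
print(witness, ratio(witness))  # 10 1024
```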
971713
2014-10-13 13:55:35Z
ChloeKim
31
3
How to show that $f: (0, \infty)\longrightarrow\Bbb R$ defined by $f(x)= 1/x$ is not Lipschitz continuous?
[ "real-analysis", "lipschitz-functions" ]
How to show that $f: (0, \infty)\longrightarrow\Bbb R$ defined by $f(x)= 1/x$ is not Lipschitz continuous? If $K$ is a Lipschitz constant, I got $$K\ge \frac{|f(x)-f(y)|}{|x-y|}=\frac1{|xy|}$$ and then I don't know what to do. I thought maybe $xy\to 0$ then there is no $K$ because $|1/xy|\to\infty$. Is it right?
{ "id": [ "1993588" ], "body": [ "What? I don't understand what you're trying to do in that attempt you posted. Can you explain your reasoning a bit more?" ], "at": [ "2014-10-13 13:59:57Z" ], "score": [ "1" ], "author": [ "Adam Hughes" ], "author_rep": [ "36029" ] }
{ "id": [ "971723" ], "body": [ "\nBy definition,\n$$ f\\text{ Lipschitz}\\implies |f(x)-f(y)|\\le K|x-y|.$$\nBut take $x=1/n$, $y=1/2n$, $n\\in\\Bbb N$.\n$$|f(x)-f(y)|=|1/(1/n) - 1/(1/2n)|=n$$\nAnd now...\n" ], "score": [ 2 ], "ts": [ "2014-10-13 14:18:39Z" ], "author": [ null ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
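The accepted answer's witness pair makes the difference quotient explicit: with $x = 1/n$ and $y = 1/(2n)$ it equals $2n^2$, which is unbounded. An editorial sketch in exact arithmetic:

```python
from fractions import Fraction

def f(x):
    return 1 / x

def quotient(n):
    # |f(x) - f(y)| / |x - y| along x = 1/n, y = 1/(2n); this equals 2*n**2
    x, y = Fraction(1, n), Fraction(1, 2 * n)
    return abs(f(x) - f(y)) / abs(x - y)

print([int(quotient(n)) for n in (1, 10, 100)])  # [2, 200, 20000]
```

Since no single $K$ bounds all these quotients, $f$ is not Lipschitz on $(0,\infty)$.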
971419
2014-10-13 08:42:16Z
math110
91.6k
3
How to prove the triangle inequality $\rho{(a,b)}\le \rho{(a,c)}+\rho{(c,b)}$
[ "inequality" ]
Question: let $a,b,c$ be complex numbers with $$|a|<1,\ |b|<1,\ |c|<1,$$ and let $$\rho{(x,y)}=\left|\dfrac{x-y}{1-\overline{x}y}\right|.$$ Show that $$\rho{(a,b)}\le \rho{(a,c)}+\rho{(c,b)}.$$ It suffices to prove $$\left|\dfrac{a-b}{1-\overline{a}b}\right|\le \left|\dfrac{a-c}{1-\overline{a}c}\right|+\left|\dfrac{c-b}{1-\overline{c}b}\right|,\quad |a|,|b|,|c|<1,$$ but from here I am stuck. I have found this book, page 38, which says this triangle inequality is less obvious, and I cannot find a solution anywhere. Can you help me? Thank you
{ "id": [ "1993243" ], "body": [ "google.com/…" ], "at": [ "2014-10-13 11:16:48Z" ], "score": [ "2" ], "author": [ "Bumblebee" ], "author_rep": [ "16961" ] }
{ "id": [ "974852" ], "body": [ "\nReference is the book you referred :\nStep 1 : Let $$ f(x,y) :=\\frac{x-y}{1-\\overline{x} y } $$\nFrom a routine computation we have $$ |f( f(x,y),f(x,z)) | = |f(y,z)\n|$$\nSince $$\n |f(a,b)| = |f(f(c,a),f(c,b))| = \\bigg| \\frac{ f(c,a) -\n f(c,b)}{1-\\overline{f(c,a)} f(c,b)} \\bigg| $$ we have a claim $$ \\bigg| \\frac{ f(c,a) -\n f(c,b)}{1-\\overline{f(c,a)} f(c,b)} \\bigg|\\leq | f(c,a) | +\n |\n f(c,b) |\\ (1)$$\nStep 2 : $$|x|,\\ |y| < 1 \\Rightarrow |f(x,y)| <1$$\nProof : It is followed from a direct computation.\nStep 3 : That is if\n$v:= f(c,a),\\ w:= f(c,b) $ then we have\n$$\n |v-w|\\leq |v-|v|^2 w| +|w-|w|^2v |\\ (2) \\Leftrightarrow (1)$$\nIf $v:=v_1+iv_2,\\ w:=w_1+iw_2$ then $$ \\sqrt{(v_1-w_1)^2 +\n (v_2-w_2)^2} \\leq \\sqrt{(v_1-|v|^2w_1)^2 +\n (v_2-|v|^2w_2)^2} $$ $$+\\sqrt{ (w_1-|w|^2v_1)^2 +\n (w_2-|w|^2v_2)^2 } \\ (3)\\Leftrightarrow (2) $$\nCase 1 : $v_1w_1 \\geq 0$ Then\n$$ 0\\leq |w|^4 v_1^2 + |v|^4 w_1^2- 2v_1w_1 (|v|^2+\n |w|^2-1)\\Leftrightarrow $$\n$$0\\leq (|w|^2 v_1 - |v|^2 w_1)^2 + 2v_1w_1 (1-|v|^2)(1- |w|^2)\n$$\n$$\n\\Rightarrow\n (v_1-w_1)^2 \\leq (v_1-|v|^2w_1)^2 +\n(w_1-|w|^2v_1)^2\n$$\nCase 2 : $v_1=-nw_1,\\ n>0,\\ v_2w_2 \\geq 0$ Then $$\n (v_1-w_1)^2=(n+1)^2w_1^2, $$ $$ (v_1-|v|^2w_1)^2 +\n(w_1-|w|^2v_1)^2 +2 |v_1-|v|^2w_1 || w_1-|w|^2v_1 | \\geq $$ $$\n(n+n^2w_1^2)^2w_1^2 +(1+nw_1^2)^2w_1^2 +\n2(n+n^2w_1^2)(1+nw_1^2)w_1^2\n$$\nThat is, we proved the following :\n$$ (v_1-w_1)^2 \\leq (v_1-|v|^2w_1)^2 + (w_1-|w|^2v_1)^2 + 2\n|v_1-|v|^2w_1 || w_1-|w|^2v_1 | $$\nCase 3 : $v_1=-nw_1,\\ v_2=-mw_2,\\ n,\\ m >0$\nNote that\n$$\nnw_1^2+ mw_2^2 \\leq \\sqrt{n^2w_1^4 + (n^2+m^2)w_1^2 w_2^2 +\nm^2w_2^4} $$\nIf we replace $ v_1=-nw_1,\\ v_2=-mw_2$ in (3), then we have (3) from direct\ncomputation. 
\nSo we complete the proof.\n" ], "score": [ 2 ], "ts": [ "2014-10-15 11:51:44Z" ], "author": [ "math110" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
970903
2014-10-12 23:04:02Z
sm81095
135
3
Number of Distinct Regular n-gons, given n
[ "geometry", "elementary-number-theory" ]
Is there a formula to find the number of distinct regular n-gons possible, given n? And by distinct, I mean disregarding anything like reflections or rotations. Working it out, I find the following for the first few n's: n = 3, 1 unique (triangle) n = 4, 1 unique (square) n = 5, 2 unique (regular pentagon, 5-point star) n = 6, 1 unique (regular hexagon) n = 7, 3 unique (regular heptagon, 2 kinds of regular stars, see pic) n = 8, 2 unique (regular octagon, 8-point star) etc. But I cannot seem to find the formula that dictates this number, or even whether such a formula exists in the first place. This lack of a formula also makes it kind of hard to properly tag this question (under number theory or combinatorics), so my apologies for that.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "970936" ], "body": [ "\nSuppose you have $n$ points distributed uniformly on a circle. There is a shape corresponding to each value $r$ which is relatively prime to $n$: given a starting point and $r$, just move forward by $r$ points, draw a line between the starting point and this point, and continue on.\nThe number of values relatively prime to $n$ is given by Euler's totient function $\\phi(n)$. However, both $r$ and $n - r$ produce the same figure, so we must divide by two to avoid double-counting. This means that the number of figures with $n$ sides is $\\phi(n)/2$.\n" ], "score": [ 2 ], "ts": [ "2014-10-12 23:33:27Z" ], "author": [ "sm81095" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "1992125" ], "body": [ "You posted this just as I started to look at applying Euler's totient. Many thanks for the straightforward answer." ], "at": [ "2014-10-12 23:39:57Z" ], "score": [ "" ], "author": [ "sm81095" ], "author_rep": [ "135" ] } ] }
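The $\phi(n)/2$ formula from the accepted answer can be checked against the counts listed in the question ($1, 1, 2, 1, 3, 2$ for $n = 3$ through $8$). A short Python sketch, not part of the original thread:

```python
from math import gcd

def phi(n):
    """Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# phi(n) // 2 reproduces the question's counts for n = 3..8
print([phi(n) // 2 for n in range(3, 9)])
```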
970615
2014-10-12 19:16:52Z
Kris M
33
3
Showing that a sequence $a_n=(-4)^n$ is a solution of the recurrence relation $a_n = -3a_{n-1} + 4a_{n-2}$
[ "sequences-and-series", "discrete-mathematics", "recurrence-relations" ]
I'm having some trouble with showing that a sequence $a_n$ is a solution to the recurrence relation $a_n = -3a_{n-1} + 4a_{n-2}$. (See image below). The sequence is given by $a_n = (-4)^n$. I'm given the answer in the solutions manual, but I have absolutely no clue what is going on between step II and III. How did they get rid of the $n-1$ exponent? What did they multiply/divide/subtract/add to the equation? \begin{align*} -3a_{n-1}+4a_{n-2} & =-3(-4)^{n-1}+4(-4)^{n-1} \\ & =(-4)^{n-2}\bigl((-3)(-4)+4\bigr) \\ & =(-4)^{n-2}\cdot16 \\ & =(-4)^{n-2}(-4)^2 \\ & =(-4)^n \\ & =a_n \end{align*}
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "970628", "970625", "970636" ], "body": [ "\nSince \n$$n-1=1+(n-2)$$\nthey have\n$$\\begin{align}-3(-4)^{n-1}+4(-4)^{n-2}&=-3\\cdot (-4)^{1+(n-2)}+4(-4)^{n-2}\\\\&=-3\\cdot(-4)^1\\cdot (-4)^{n-2}+4(-4)^{n-2}\\\\&=(-4)^{n-2}(-3\\cdot (-4)+4).\\end{align}$$\n", "\nHe didn't get \"rid of it\". Remember that\n$$(-4)(-4)^{-1} = 1$$\nThen $$ -3(-4)^{n-1} + 4(-4)^{n-2} = -3(-4)^{n-1}(-4)(-4)^{-1} + 4(-4)^{n-2} = $$\n$$ -3(-4)^{n-2}(-4) + 4(-4)^{n-2}$$\nAnd step III follows by the distributive of multiplication.\n", "\nSo we start with $a_n=(-4)^n$ for all $n$, which means that $a_{n-1}=(-4)^{n-1}$ and $a_{n-2}=(-4)^{n-2}$.\nWe then substitute these two values into the recurrence $a_n=-3a_{n-1}+4a_{n-2}$ to obtain $$a_n=-3\\times(-4)^{n-1}+4\\times(-4)^{n-2} $$\nThe rest is just dealing with $a^r\\cdot a^s=a^{r+s}$, where $a=-4$ and $bc+bd=b(c+d)$ where $b$ is a power of $-4$. You just have to be careful about the signs.\n" ], "score": [ 2, 0, 0 ], "ts": [ "2014-10-12 19:23:27Z", "2014-10-12 19:21:42Z", "2014-10-12 19:29:02Z" ], "author": [ null, null, null ], "author_rep": [ "6539", "6539", "6539" ], "accepted": [ true, false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
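The algebraic verification above can also be sanity-checked by direct computation for several values of $n$. A tiny Python sketch, not part of the original thread:

```python
# a_n = (-4)**n should satisfy a_n = -3*a_{n-1} + 4*a_{n-2} for all n >= 2
a = lambda n: (-4) ** n
for n in range(2, 12):
    assert a(n) == -3 * a(n - 1) + 4 * a(n - 2)
print("recurrence holds for n = 2..11")
```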
970596
2014-10-12 19:04:56Z
user183800
null
3
$\sum_{n=1}^\infty \frac{n+1}{\sqrt{n^3+1}}$ convergent/divergent?
[ "sequences-and-series", "analysis", "convergence-divergence" ]
Please could someone help prove $$\sum_{n=1}^\infty \frac{n+1}{\sqrt{n^3+1}}$$ converges/diverges? Thank you.
{ "id": [ "1991455", "1991461", "1991465", "1991467" ], "body": [ "have you tried the condition convergence implies $a_n\\Rightarrow 0$??", "@PraphullaKoushik: Of course $a_n\\to 0$ !! indeed $$...=\\frac{1+\\frac{1}{n}}{\\sqrt n\\sqrt{1+\\frac{1}{n^3}}}\\to 0\\text{ if }n\\to\\infty $$", "@idm : Of course i know that... I was expecting reply from OP...", "I'm afraid that with your last edit, nobody can help you. The sum is divergent." ], "at": [ "2014-10-12 19:06:49Z", "2014-10-12 19:10:47Z", "2014-10-12 19:12:03Z", "2014-10-12 19:12:16Z" ], "score": [ "1", "", "", "1" ], "author": [ "user87543", "idm", "user87543", "Daniel Fischer" ], "author_rep": [ null, "11484", null, "202399" ] }
{ "id": [ "970606", "970614", "970613" ], "body": [ "\n$n+1\\sim n$ and $\\sqrt{n^3+1}\\sim n^{3/2}$ hence $$\\dfrac{n+1}{\\sqrt{n^3+1}}\\sim \\dfrac{n}{n^{3/2}}=\\dfrac{1}{\\sqrt n}$$\nSince $\\sum \\dfrac{1}{\\sqrt n}$ diverge, then $\\sum \\dfrac{n+1}{\\sqrt{n^3+1}}$ diverge.\n", "\nFor $n>0$ you have:\n$$\\frac{n+1}{\\sqrt{n^3+1}} \\geq \\frac{n}{\\sqrt{n^3+1}} \\geq \\frac{n}{\\sqrt{2n^3}} = \\sqrt{\\frac{1}{2}}\\frac{1}{\\sqrt{n}}$$\nBut the series $\\sum_{n=1}^{\\infty}\\frac{1}{\\sqrt{n}\\sqrt{2}}$ is divergent, so series $\\sum_{n=1}^\\infty \\frac{n+1}{\\sqrt{n^3+1}}$ is also divergent.\n", "\n\n$\\displaystyle n^{3}+n^{2} \\geq n^{3}+1 \\implies \\frac{1}{n^{3}+n^{2}} \\leq \\frac{1}{n^{3}+1}$\n$\\displaystyle \\sum_{n=0}^{\\infty} \\frac{1}{\\sqrt{n+1}}=\\sum_{n=0}^{\\infty}\\frac{n}{n\\sqrt{n+1}} \\leq \\sum_{n=0}^{\\infty} \\frac{n+1}{n \\sqrt{n+1}} \\leq \\sum_{n=0}^{\\infty} \\frac{n+1}{\\sqrt{n^{3}+1}}$\n$\\displaystyle\\sum_{n=0}^{\\infty} \\frac{1}{\\sqrt{n+1}}$ diverges\n\n" ], "score": [ 2, 0, 0 ], "ts": [ "2014-10-12 19:12:20Z", "2014-10-12 19:16:02Z", "2014-10-12 19:15:22Z" ], "author": [ "", "", "" ], "author_rep": [ null, null, null ], "accepted": [ true, false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
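The limit comparison with $1/\sqrt{n}$ used in the accepted answer can be illustrated numerically: the ratio of the general term to $1/\sqrt{n}$ tends to $1$. A small Python sketch, not part of the original thread:

```python
from math import sqrt

def term(n):
    return (n + 1) / sqrt(n ** 3 + 1)

# term(n) / (1/sqrt(n)) -> 1, so the series diverges with sum(1/sqrt(n))
for n in (10, 100, 10_000):
    print(n, term(n) * sqrt(n))
```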
970578
2014-10-12 18:54:11Z
B. Lee
1655
3
Trigonometry equation with arctan
[ "trigonometry" ]
Solve the following equation: $\arctan x + \arctan (x^2-1) = \frac{3\pi}{4}$. What I did Let $\arctan x = \alpha, \arctan(x^2-1) = \beta$, $\qquad\alpha+\beta = \frac{3\pi}{4}$ $\tan(\alpha+\beta) = \tan(\frac{3\pi}{4}) = -1$ $$\frac{\tan\alpha + \tan\beta}{1-\tan\alpha\tan\beta} = \frac{x+x^2-1}{1-x(x^2-1)} = -1$$ $\begin{align} x^2+x-1 &= -(1-(x^3-x)) = -1+x^3-x \\ \iff x^2 + x &= x^3-x \\ \iff x(x+1) &= x(x^2-1) \qquad\implies \boxed{x_1 = 0}\\ \implies x+1 &= x^2-1 \\ \iff x^2-x-2 &= 0 \\ \end{align}$ $\therefore x_1 = 0,\quad x_2 = 2,\quad x_3 = -1$ However, the equation only works for $x=2$. I wonder: Did I do this in an efficient manner? Is there an easy way to find $x$ without producing extraneous solutions?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "970616", "971650" ], "body": [ "\nYour three answers all look good to me. You did this in a pretty efficient manner, although I would've just jumped straight to the identity $$\\arctan(A)+\\arctan(B) = \\arctan\\left(\\frac{A+B}{1-AB} \\right)$$ without making the substitution $\\arctan(x) = \\alpha$, etc. \nAnyway, you are confident that the equation works for $x=2$ (which it does.) But for $x=0$ note that a calculator will tell us that $$\\arctan(0)+\\arctan(0^2-1) = 0+\\arctan(-1) = \\frac{-\\pi}{4}$$ However, this is because $$\\tan\\left( \\frac{3\\pi}{4}\\right) =\\tan\\left( \\frac{-\\pi}{4} \\right)$$ where there is a discrepancy with the calculator due to the fact that Cosine is negative and Sine is positive in the quadrant where $\\frac{3\\pi}{4}$ lies, and Cosine is positive while Sine is negative in the quadrant where $\\frac{-\\pi}{4}$ lies. Long story short, the calculator doesn't know the difference in the calculation and defaults to $\\frac{-\\pi}{4}$. The same exact thing happens for $x = -1$. All three of your answers are right though.\n", "\nLike Show that $2\\tan^{-1}(2) = \\pi - \\cos^{-1}(\\frac{3}{5})$, \n$$\\arctan x+\\arctan(x^2-1)=\\begin{cases} \\arctan\\left(\\dfrac{x+x^2-1}{1-x(x^2-1)}\\right) &\\mbox{if } x(x^2-1)<1 \\\\\\pi+ \\arctan\\left(\\dfrac{x+x^2-1}{1-x(x^2-1)}\\right) & \\mbox{if } x(x^2-1)> 1. \\end{cases} $$ \nNow, $-\\dfrac\\pi2\\le\\arctan(z)\\le\\dfrac\\pi2$\n$\\implies-\\dfrac\\pi2\\le\\arctan x+\\arctan(x^2-1)\\le\\dfrac\\pi2$ if $x(x^2-1)<1$ which is true if $x=0,-1$\nThen $\\arctan x+\\arctan(x^2-1)\\ne\\dfrac{3\\pi}4$\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-12 19:16:55Z", "2017-04-13 12:21:40Z" ], "author": [ "B. Lee", "B. Lee" ], "author_rep": [ "1655", "1655" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1993431" ], "body": [ "@XMLParsing, How about this?"
], "at": [ "2014-10-13 12:55:47Z" ], "score": [ "" ], "author": [ "lab bhattacharjee" ], "author_rep": [ "270589" ] } ] }
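The extraneous-root discussion in these answers can be confirmed by substituting the three candidate roots back into the original equation. A short Python sketch, not part of the original thread:

```python
from math import atan, pi, isclose

# Substitute each candidate root back into arctan(x) + arctan(x^2 - 1).
target = 3 * pi / 4
solutions = [x for x in (0, 2, -1)
             if isclose(atan(x) + atan(x ** 2 - 1), target)]
print(solutions)  # only x = 2 satisfies the original equation
```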
970448
2014-10-12 17:27:35Z
taninamdar
2568
3
Explanation about an identity involving inverse binomial coefficients.
[ "sequences-and-series", "binomial-coefficients" ]
Now, I was solving this problem. It asks for the summation $$\sum\limits_{k =0}^\infty\dfrac{1}{{n+k \choose n}}$$ I solved it using this answer; the answer turns out to be $$\dfrac{n}{n-1}$$ However, can someone provide an explanation of how to go about proving this? Note: The answer is public, although the contest hasn't ended yet. If this is against the site policy, then moderators can block the question right now and unblock it after one hour (which is when the contest ends).
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "970497", "1028547", "970518", "970501" ], "body": [ "\n\\begin{align}\\sum^{\\infty}_{k=0}\\frac{1}{n+k\\choose n}=\\sum^{\\infty}_{k=0}\\frac{k!\\cdot n!}{(n+k)!}&=\\sum^{\\infty}_{k=0}\\frac{\\Gamma{(n+1)}\\cdot \\Gamma{(k+1)}}{\\Gamma(n+k+2)}\\cdot (n+k+1)\\\\&=\\sum^{\\infty}_{k=0}(n+k+1)\\cdot B(n+1,k+1)\\end{align}\nWhere $B(x,y)$ is the Beta function defined as \n$$B(x,y)=\\int^{1}_{0}u^{x-1}(1-u)^{y-1}\\,du$$\nwhere $x,y>0$. So we can rewrite the last result as \n\\begin{align}\\sum^{\\infty}_{k=0}(n+k+1)\\cdot B(n+1,k+1)&=(n+1)\\sum^{\\infty}_{k=0}B(n+1,k+1)+\\sum^{\\infty}_{k=0}kB(n+1,k+1)\\\\\n&=(n+1)\\sum^{\\infty}_{k=0}\\int^{1}_{0}u^{k}(1-u)^{n}\\,du+\\sum^{\\infty}_{k=0}k\\int^{1}_{0}u^{k}(1-u)^{n}\\,du\\\\\n&=(n+1)\\int^{1}_{0}(\\sum^{\\infty}_{k=0}u^{k})(1-u)^{n}\\,du+\\int^{1}_{0}u(\\sum^{\\infty}_{k=1}ku^{k-1})(1-u)^{n}\\,du\\\\&\n=(n+1)\\int^{1}_{0}\\frac{1}{1-u}(1-u)^{n}\\,du+\\int^{1}_{0}u\\frac{1}{(1-u)^2}(1-u)^{n}\\,du\\\\&\n=(n+1)\\frac{1}{n}+B(2,n-1)\\\\&\n=\\frac{n+1}{n}+\\frac{1}{n(n-1)}\\\\&\n=\\frac{(n+1)(n-1)+1}{n(n-1)}\\\\&\n=\\frac{n^2-1+1}{n(n-1)}=\\frac{n}{n-1}\\end{align}\n", "\nHere is a calculation using a different integral for the beta function,\nalso very simple. Suppose we seek to evaluate\n$$\\sum_{n\\ge 0} {n+q\\choose q}^{-1}.$$\nThis is\n$$\\sum_{n\\ge 0} \\frac{q! 
\\times n!}{(n+q)!}\n= \\sum_{n\\ge 0} \n\\frac{\\Gamma(q+1) \\times \\Gamma(n+1)}{\\Gamma(n+q+1)}\n\\\\ = \\sum_{n\\ge 0} (n+q+1)\n \\frac{\\Gamma(q+1) \\times \\Gamma(n+1)}{\\Gamma(n+q+2)}\n= \\sum_{n\\ge 0} (n+q+1) \\mathrm{B}(q+1, n+1).$$\nRecall the beta function integral\n$$\\mathrm{B}(x,y)\n= \\int_0^\\infty \\frac{t^{x-1}}{(1+t)^{x+y}} dt.$$\nThis gives for the sum the representation\n$$\\int_0^\\infty \\sum_{n\\ge 0} (n+q+1)\n\\frac{t^{q}}{(1+t)^{n+q+2}} dt\n= \\int_0^\\infty \\frac{t^q}{(1+t)^{q+2}}\n\\sum_{n\\ge 0} (n+q+1)\n\\frac{1}{(1+t)^n} dt\n\\\\ = \\int_0^\\infty \\frac{t^q}{(1+t)^{q+2}} \\frac{1+t}{t^2} dt\n+ (q+1) \\int_0^\\infty \\frac{t^q}{(1+t)^{q+2}} \\frac{1+t}{t} dt\n\\\\ = \\int_0^\\infty \\frac{t^{q-2}}{(1+t)^{q+1}} dt\n+ (q+1) \\int_0^\\infty \\frac{t^{q-1}}{(1+t)^{q+1}} dt.$$\nConverting back from the beta functions that have appeared\nwe get\n$$\\mathrm{B}(q-1, 2) + (q+1) \\mathrm{B}(q, 1)\n= \\frac{\\Gamma(q-1)\\Gamma(2)}{\\Gamma(q+1)}\n+ (q+1) \\frac{\\Gamma(q)\\Gamma(1)}{\\Gamma(q+1)}\n\\\\ = \\frac{1}{q(q-1)} + (q+1)\\frac{1}{q} = \\frac{q}{q-1}.$$\n", "\nYou could evaluate it using the Gauss hypergeometric Function $_2F_1$. 
We have:\n$$\n_2F_1(a,b,c;z)=\\sum_{k=0}^{\\infty}\\frac{\\Gamma(a+k)\\Gamma(b+k)\\Gamma({c})}{\\Gamma(a)\\Gamma(b)\\Gamma(c+k)\\Gamma(k+1)}\\cdot z^k\n$$\nAnd:\n$$\n_2F_1(a,b,c;1)=\\sum_{k=0}^{\\infty}\\frac{\\Gamma(a+k)\\Gamma(b+k)\\Gamma({c})}{\\Gamma(a)\\Gamma(b)\\Gamma(c+k)\\Gamma(k+1)}=\\frac{\\Gamma({c})\\Gamma(c-a-b)}{\\Gamma(c-a)\\Gamma(c-b)}\n$$\nRewriting your sum yields:\n$$\n\\sum_{k=0}^{\\infty} \\frac{1}{\\binom{n+k}{n}}=\\sum_{k=0}^{\\infty} \\frac{n!\\cdot k!}{(n+k)!}=\\sum_{k=0}^{\\infty} \\frac{\\Gamma(n+1)\\Gamma(k+1)}{\\Gamma(n+k+1)}=\\sum_{k=0}^{\\infty} \\frac{n!\\cdot k!}{(n+k)!}=\\sum_{k=0}^{\\infty} \\frac{\\Gamma(k+1)\\Gamma(k+1)\\Gamma(n+1)}{\\Gamma(1)\\Gamma(1)\\Gamma(n+k+1)\\Gamma(k+1)}=_2F_1(1,1,n+1;1)=\\frac{\\Gamma({n+1})\\Gamma(n+1-1-1)}{\\Gamma(n+1-1)\\Gamma(n+1-1)}=\\frac{\\Gamma({n+1})\\Gamma(n-1)}{\\Gamma(n)\\Gamma(n)}=\\frac{n\\Gamma({n})\\Gamma(n-1)}{\\Gamma(n)\\cdot n\\Gamma(n-1)}=\\frac{n}{n-1}\n$$\n", "\nLet U(n,k) be the term of your series.\nYou can prove that $U(n,k)=\\frac{1}{(1+\\frac{n}{1})...(1+\\frac{n}{k})}$\nyou have: $U(n,k) = U(n,k-1)*\\frac{k}{n+k} $\nThen let $V(n,k) = k*U(n,k)$\nYou can prove that : $n*U(n,k) -U(n,k-1) = V(n,k) - V(n,k-1)$\nLet $S(n,N)$ be the partial sum of the $U(n,k)$:\n$S(n,N) = \\sum_{k=0}^N U(n,k) = \\sum\\limits_{k =0}^N\\dfrac{1}{{n+k \\choose n}} $\nBy summing the relation (on the index k, from 1 to N) above you get:\n$(n-1)*S(n,N) = n + V(n,N) - U(n,N)$\n$U(n,N) \\rightarrow 0$ ; $V(n,N) \\rightarrow 0$ , when $ N \\rightarrow \\infty$\nSo you get: $L = \\frac{n}{n-1} $\n" ], "score": [ 1, 1, 0, 0 ], "ts": [ "2014-10-12 18:01:59Z", "2014-11-19 01:25:05Z", "2014-10-12 18:15:05Z", "2014-10-28 09:20:46Z" ], "author": [ null, null, null, null ], "author_rep": [ "1", "1", "1", "1" ], "accepted": [ false, false, false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], 
"author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
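All of the derivations above arrive at $n/(n-1)$, and because the terms decay like $k^{-n}$ the partial sums converge quickly, which makes the value easy to confirm numerically. A small Python sketch, not part of the original thread:

```python
from math import comb

def partial_sum(n, terms=1000):
    # sum of 1 / C(n+k, n) over k = 0 .. terms-1
    return sum(1 / comb(n + k, n) for k in range(terms))

for n in (3, 5, 10):
    print(n, partial_sum(n), n / (n - 1))  # partial sum vs. n/(n-1)
```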
970279
2014-10-12 15:23:15Z
user183776
null
3
How to find the N control points of a Bezier curve with N+1 points on the curve
[ "bezier-curve" ]
I have the set of points my curve has to pass through; 2 of those are the start and end points. I'm looking for a way to find the control points of my Bezier curve (mostly quadratic and cubic) by using points on the curve. ex: I have 4 points: start, end and 2 points on the curve. The first and last control points are the start and end points, but how do I determine the middle control point? Can I do the same with a cubic Bezier curve and 5 points? (start, end and 3 on the curve) I cannot use spline interpolation because the tool I'm using only allows for Bezier curves. Thank you all. EDIT: so far I got: I have 4 points on the curve, [$C_0,C_1,C_2,C_3$] since the first and last are control points: $[Q_0=C_0, Q_2=C_3]$ Bezier equation: $B(t)=(1-t)^2*Q_0+2(1-t)*t*Q_1+t^2*Q_2$ From this I can make 2 equations: $C_1=(1-t_1)^2*Q_0+2(1-t_1)*t_1*Q_1+t_1^2*Q_2$ $C_2=(1-t_2)^2*Q_0+2(1-t_2)*t_2*Q_1+t_2^2*Q_2$ which is 2 equations in 3 unknowns (infinitely many possibilities); adding more points from the curve would give me n equations with n+1 unknowns. Maybe I'm wrong and there's no way to calculate the control points from just points on the curve (and no t value, the percentage along the curve at which they are located). EDIT 2: Is there a way to find the control points WITHOUT EVER specifying the $t$ values? I can provide as many points on the curve as needed, but assigning $t$ values to these points would be very imprecise. (I'm trying to model curves of a real-life object.) Unless I'm wrong, there is only 1 set of control points that generates a certain curve.
{ "id": [ "1999252" ], "body": [ "Regarding your edit2: see the last paragraph of my answer, plus the other answer that it links to." ], "at": [ "2014-10-15 13:43:22Z" ], "score": [ "" ], "author": [ "bubba" ], "author_rep": [ "41760" ] }
{ "id": [ "971536", "971285" ], "body": [ "\nThe simplest approach is to use $N+1$ points to construct a curve of degree $N$. You have to assign a parameter ($t$) value to each point. So, to construct a cubic curve through four given points $\\mathbf{P}_0$, $\\mathbf{P}_1$, $\\mathbf{P}_2$, $\\mathbf{P}_3$, you need four parameter values, $t_0, t_1, t_2, t_3$. Then, as @fang said in his answer, you can construct a set of four linear equations and solve for the four control points of the curve. Two of the equations are trivial, so actually you only have to solve two equations.\nThe simplest approach is to just set \n$$t_0 = 0 \\quad , \\quad\nt_1 = \\tfrac13 \\quad , \\quad\nt_2 = \\tfrac23 \\quad , \\quad\nt_3 = 1$$ \nThen the matrix in the system of linear equations is fixed, and you can just invert it once, symbolically. You can get explicit formulae for the control points, as given in this question. But this only works if the given points $\\mathbf{P}_0$, $\\mathbf{P}_1$, $\\mathbf{P}_2$, $\\mathbf{P}_3$ are spaced fairly evenly. \nTo deal with points whose spacing is highly uneven, the usual approach is to use chord-lengths to calculate parameter values. So, you set \n$$c_0 = d(\\mathbf{P}_0, \\mathbf{P}_1) \\quad ; \\quad\nc_1 = d(\\mathbf{P}_1, \\mathbf{P}_2) \\quad ; \\quad\nc_2 = d(\\mathbf{P}_2, \\mathbf{P}_3)$$\nThen put $c = c_0+c_1+c_2$, and\n$$t_0 = 0 \\quad , \\quad\nt_1 = \\frac{c_0}{c} \\quad , \\quad\nt_2 = \\frac{c_0+c_1}{c} \\quad , \\quad\nt_3 = 1$$\nThen, again, provided $t_0 < t_1 < t_2 < t_3$, you can set up a system of linear equations, and solve. \nIf you're willing to do quite a bit more work, you can actually construct a cubic curve passing through 6 points. Though, in this case, you can't specify the parameter values, of course. For details, see my answer to this question.\n", "\nIn general, you can find the Bezier curve of degree N passing through given (N+1) distinct points. 
You have to assign proper parameters to each point first and solve a linear equation set. However, differnet parameter assignments will generate different result. If you have more points than the order of the Bezier curve ( order = degree + 1), then you can find a Bezier curve that comes close to the given points using methods such as least square fitting.\n" ], "score": [ 1, 1 ], "ts": [ "2017-04-13 12:20:55Z", "2014-10-13 05:18:13Z" ], "author": [ "", "" ], "author_rep": [ null, null ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1995559", "1995696", "1996516" ], "body": [ "Good answer, except for a small quibble: the $N+1$ given points don't need to be distinct.", "@bubba: If you somehow can assign distinct parameters to the points with same coordinates, then the points do not need to be distinct. But most good parametrization scheme are derived from distance between points (such as chord-length or centripental parametrization). In this sense, you do need distinct points to get good parametrization.", "Yes, it's the parameter values that need to distinct." ], "at": [ "2014-10-14 04:41:54Z", "2014-10-14 06:03:31Z", "2014-10-14 13:51:59Z" ], "score": [ "", "", "" ], "author": [ "bubba", "fang", "bubba" ], "author_rep": [ "41760", "3480", "41760" ] } ] }
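For the quadratic case discussed in these answers, once a parameter value is assigned to the interior on-curve point, the middle control point follows from one linear solve per coordinate. A minimal Python sketch (the helper names and the choice $t = 0.5$ are illustrative assumptions, not taken from the original answers):

```python
def bezier2(q0, q1, q2, t):
    # quadratic Bezier: (1-t)^2 Q0 + 2(1-t)t Q1 + t^2 Q2, per coordinate
    return tuple((1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
                 for a, b, c in zip(q0, q1, q2))

def solve_q1(q0, q2, c, t):
    # invert the Bezier equation for Q1, given on-curve point c at parameter t
    b0, b1, b2 = (1 - t) ** 2, 2 * (1 - t) * t, t ** 2
    return tuple((ci - b0 * p0 - b2 * p2) / b1
                 for p0, p2, ci in zip(q0, q2, c))

q0, q1, q2 = (0.0, 0.0), (1.0, 2.0), (2.0, 0.0)
c = bezier2(q0, q1, q2, 0.5)     # a known interior point on the curve
print(solve_q1(q0, q2, c, 0.5))  # recovers the middle control point
```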
969675
2014-10-12 04:31:17Z
Ruvi Lecamwasam
1932
3
Complicated integral, where $\int\coth(x)dx$ is somehow written in terms of $\int |x|e^{ix}dx$
[ "integration", "definite-integrals", "hyperbolic-functions" ]
In Gardiner's Quantum Noise the following integral equality is used (eq 3.3.10, 3.3.14): $$\int_0^{\infty}d\omega \omega\mathrm{coth}\left(\frac{a\omega}{2}\right)\cos(\omega(t-t'))=\int_0^{\infty}d\omega \left[\omega\mathrm{coth}\left(a\omega\right)-\omega\right]\cos(\omega(t-t'))$$ $$+\frac{1}{2}\int_{-\infty}^{\infty}d\omega|\omega|e^{i\omega(t-t')}$$ The aim of this is to write the divergent integral on the left as the sum of convergent and divergent parts. I have been trying to see where this comes from. So far I have noted that: $$\coth(x)=\coth(2x)+\frac{2}{e^{2x}-e^{-2x}}$$ which lets me write: $$=\int_0^\infty d\omega\left[\omega\coth(a\omega)-\omega\right]\cos(\omega(t-t'))+\int_0^{\infty}d\omega\omega\cos(\omega(t-t'))$$ $$+\int_0^{\infty}d\omega\frac{2\omega}{e^{a\omega}-e^{-a\omega}}\cos(\omega(t-t'))$$ Furthermore we have: $$\int_0^{\infty}d\omega\omega\cos(\omega(t-t'))=\frac{1}{2}\int_{-\infty}^{\infty}d\omega|\omega|\cos(\omega(t-t'))$$ $$\int_0^{\infty}d\omega\frac{2\omega}{e^{a\omega}-e^{-a\omega}}\cos(\omega(t-t'))=\frac{1}{2}\int_{-\infty}^{\infty}d\omega\frac{2\omega}{e^{a\omega}-e^{-a\omega}}\cos(\omega(t-t'))$$ From here though I can't see how to proceed, and am especially confused as to where the complex exponential could possibly have come from. Any ideas at all would be welcome.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "971565" ], "body": [ "\n\\begin{align}\n\\int_{-\\infty}^{\\infty}d\\omega|\\omega|\\cos(\\omega(t-t'))=\\int_{-\\infty}^{\\infty}d\\omega|\\omega|e^{i\\omega(t-t')}\n\\end{align}\nbecause $i \\int_{-\\infty}^{\\infty}d\\omega|\\omega|\\sin(\\omega(t-t'))=0$,\n we integrate an odd function over an interval symmetric about the origin.\nEdit:\nI don't see why this second integral should vanish mathematically. \nIs this maybe one of the standard physics arguments that one just absorbs some finite value in the redefinition of a physical quantity? \n" ], "score": [ 2 ], "ts": [ "2014-10-13 12:46:47Z" ], "author": [ "Ruvi Lecamwasam" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "1999194" ], "body": [ "Thanks, that's quite helpful. I didn't see an argument of that type made in the text but maybe it was done implicitly, I will give it another look." ], "at": [ "2014-10-15 13:23:20Z" ], "score": [ "" ], "author": [ "Ruvi Lecamwasam" ], "author_rep": [ "1932" ] } ] }
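The odd/even symmetry argument in this answer — the $\sin$ part of $e^{i\omega(t-t')}$ integrates to zero over a symmetric interval against an even weight — can at least be illustrated on truncated integrals, where everything is finite. A rough numerical Python sketch (the cutoff and grid are illustrative assumptions, not from the original thread):

```python
from math import sin, cos

# Truncated integrals of |w|*sin(w*s) and |w|*cos(w*s) over [-L, L],
# via a simple midpoint rule; the sin integrand is odd, so it cancels.
def truncated(s, L=50.0, steps=200_000):
    h = 2 * L / steps
    odd = even = 0.0
    for i in range(steps):
        w = -L + (i + 0.5) * h
        odd += abs(w) * sin(w * s) * h
        even += abs(w) * cos(w * s) * h
    return odd, even

odd_part, even_part = truncated(0.7)
print(odd_part, even_part)  # odd part is ~0; even part is finite but cutoff-dependent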
969262
2014-10-11 20:17:16Z
Soham
2034
3
Doubt about Probability of arranging identical balls
[ "probability", "probability-theory" ]
There are four boxes and 12 balls. The boxes are numbered and hence distinguishable, but the balls are identical. What is the probability that a random arrangement would result in 10 balls in box 1, 2 balls in box 2, and the remaining boxes empty? My attempt was solving $x_1+x_2+x_3+x_4 = 12$, where only one case is favourable. My friend's attempt - assume the balls are numbered; the total number of arrangements is $4^{12}$, of which $\frac{12!}{10!2!}$ are favourable. Whose method is correct? EDIT Okay, if we choose to throw the balls, then the probability comes out as $$\frac{\frac{12!}{10!2!}}{4^{12}}.$$ Now if the balls are distinguishable, then the answer is also $\frac{\frac{12!}{10!2!}}{4^{12}}$. How is that happening?
{ "id": [ "1989241", "1989245", "1989300", "1989677" ], "body": [ "So you came to the solution of $\\frac{1}{{12+4-1\\choose 4-1}}$? Since solving $x_1 + x_2 + x_3 + x_4 = 12$ for the number of integer-valued non-negative values equals $12+4-1\\choose 4-1$ and $10 + 2 + 0 + 0 = 12$ represents one such solution. Your solution seems right to me.", "@KermittheHermit yes,but what is wrong with his ?", "The difficult question is whether your ${15 \\choose 3}$ solutions are equally probable. Does $12+0+0+0$ have a probability of $\\dfrac{1}{455}$ or $\\dfrac{1}{4^{12}}$? If the latter then your friend has a good approach. To test your approach, you have to write down all the possibilities and then choose one; your friend can just throw balls at boxes.", "I think your solution is only correct if the question asked for the probability that a solution to the equation $x_1+x_2+x_3+x_4=12$ chosen at random from the solution set happens to be the solution $(x_1,x_2,x_3,x_4)=(10,2,0,0)$. That’s not the probability that a random assignment of the balls to boxes happens to correspond to that solution." ], "at": [ "2014-10-11 20:40:17Z", "2014-10-11 20:43:32Z", "2014-10-11 21:01:10Z", "2014-10-12 00:39:35Z" ], "score": [ "", "", "", "" ], "author": [ "Kermit the Hermit", "Soham", "Henry", "Steve Kass" ], "author_rep": [ "1487", "2034", "147709", "14413" ] }
{ "id": [ "969440", "969432", "969683" ], "body": [ "\nConsider having the 12 identical balls, and three identical \"dividers\" arranged in a straight line. (Up to the first divider is Box #1, between the first and second dividers is Box #2, and so on. No need for an end-of-Box #4 marker)\nHow many ways can you arrange these 15 items?\nHow many of them have the first divider after the tenth ball, and the rest after the twelfth ball?\n", "\nAs I see it, and correct me if I misunderstood the question, the number of balls in each box is absolutely random. This means that each box has the same probability of having any number of balls between $0$ and $12$. With this in mind we now have to take into account that the total number of balls we have is $12$. Your approach is correct, but brute forcing it by finding all positive integer solutions that satisfy the equation is hard to do by hand. (You could write a computer program instead. I'll do it when I have a moment and edit the answer.)\nHowever, I think we can break the problem in two parts:\nFirst we'll try to get the probability of getting $10$ balls in box one and then add it to the probality of getting $2$ balls in box $2$.\nIf the arrangement is random, in the first box we could get any number of balls ranging from $0$ to $12$, so the possibility of getting $10$ balls is:\n$P(X_1=10)=\\frac{1}{13}$\nAfter that, getting them in box 1 means that $P(X=10)$ gets reduced by $4$ as just $1$ in $4$ cases is favourable.\nSo:\n$P(X_1=10)=\\frac{1}{13·4}$\nNow we have to add the probability of getting two balls in box $2$. We will assume that we got $10$ balls on box $1$, so the probability then is:\n$P(Y_2=2)=\\frac{1}{3}·\\frac{1}{3}$\nAdding the two probabilities together we get that the total probability is:\n$P(X_1 and Y_2)=\\frac{1}{13·4}·\\frac{1}{9}=\\frac{1}{468}=0.002137$\nEdit:\nI brute forced the answer with a Python program which resolved the problem in the way you suggested. 
The code is as follows:\ndef prob():\n results=[] #stored valid solutions\n Total=0 #total of iterarions\n Valid=0 #count valid solutions\n for i in range(13): #ranging from 0 to 12 balls in box 1\n for j in range(13): #ranging from 0 to 12 balls in box 2\n for k in range(13): #ranging from 0 to 12 balls in box 3\n for l in range(13): #ranging from 0 to 12 balls in box 4\n if i+j+k+l==12: #check if it is a valid solution\n results.append((i,j,k,l)) #if it is, append to solutions\n Total+=1;Valid+=1 #keep count of valid solutions\n else:\n Total+=1 #total iterations\n return results,Total,Valid\n\nThe program should run $13⁴=28561$ iterations (which does) and check how many of them are valid solutions. Once we get this number, as there is only one valid solution the probability must be $P(x_1andY_2)=\\frac{1}{\\text{solutions}}$. \n\nEach $(i,j,k,l)$ term stands for a valid solution and the two numbers at the end are total iterations and valid solutions.\nThe program outputs $455$ solutions, hence the probability we are looking for is:\n$P(X_1andY_2)=\\frac{1}{455}=0.002198$ which is a bit higher (13 cases up) than the one I previously calculated.The point is I don't know what I am missing.\n", "\nThere can be disagreement about the appropriate model. The model I would favour has us \"throwing\" the balls one at a time towards the boxes, with a ball equally likely to fall in any of the boxes, and the results of the $12$ throws independent. It is useful to imagine that the balls have ID numbers $1$ to $12$ on them, if you prefer in invisible ink. These ID numbers do not affect the probabilities.\nSo the appropriate sample space under this model has $4^{12}$ equally likely outcomes. \nNow we count the \"favourables.\" Call the boxes A, B, C, D. The favourables are the words of length $12$ that have $10$ A's and $2$ B's. The location of the $10$ A's can be chosen in $\\binom{12}{10}$ ways, and now the location of the B's is determined. 
\nYour friend is clearly using the same \"throwing\" model as the one used in this answer. \nNote that the answer we get is quite different from the one given by an analysis that makes all solutions of $x_1+x_2+x_3+x_4=12$ in non-negative integers equally likely. That model seems quite unsuitable, for example, in the standard application of this sort of balls and boxes problem to hashing. \n" ], "score": [ 2, 0, 0 ], "ts": [ "2014-10-11 23:07:30Z", "2014-10-12 00:49:20Z", "2014-10-12 04:41:40Z" ], "author": [ "Soham", "Soham", "Soham" ], "author_rep": [ null, null, null ], "accepted": [ true, false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1989682" ], "body": [ "FWIW, your BF value of 1/455 matches the value produced by the method in my answer..." ], "at": [ "2014-10-12 00:41:29Z" ], "score": [ "" ], "author": [ "DJohnM" ], "author_rep": [ "3520" ] }, { "id": [ "1990254", "1990267", "1990376" ], "body": [ "but what is wrong with my approach ?", "I think the problem comes from how the balls get into the boxes. If there is a person who (unknown to you) is determinig by hand the configuration of each box wouldn't the $x_1+x_2+x_3+x_4=12$ approach be okay?", "@Joannes: Precisely. If we were told that such a thing happened, counting numbers of solutions would be the right approach. But it is a \"physically\" implausible model, one to be chosen only if some explicit thing is pointing us towards it." ], "at": [ "2014-10-12 08:13:43Z", "2014-10-12 08:28:09Z", "2014-10-12 09:52:20Z" ], "score": [ "", "", "1" ], "author": [ "Soham", "Ioannes", "André Nicolas" ], "author_rep": [ "2034", "376", "497664" ] } ] }
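The two models debated in this thread give genuinely different numbers, which a few lines of arithmetic make concrete. A Python sketch, not part of the original thread:

```python
from math import comb

# Friend's "throwing" model: 12 distinguishable throws, each landing
# uniformly in one of 4 boxes, so 4**12 equally likely outcomes.
throwing = comb(12, 10) / 4 ** 12

# Stars-and-bars model: all C(15, 3) = 455 solutions of
# x1 + x2 + x3 + x4 = 12 treated as equally likely.
stars_and_bars = 1 / comb(12 + 4 - 1, 4 - 1)

print(throwing, stars_and_bars)  # the two models disagree
```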
968156
2014-10-11 18:05:40Z
Massimo Franceschetti
131
3
Kolmogorov n-width of N+1 dimensional ball
[ "functional-analysis", "metric-spaces", "approximation-theory" ]
For a normed linear space $\mathscr{X}$, let $\mathscr{A}\subset\mathscr{X}$ and $\mathscr{X}_N$ any $N$-dimensional subspace of $\mathscr{X}$. Define the $N$-width of $\mathscr{A}$ in $\mathscr{X}$ as $$d_N(\mathscr{A}, \mathscr{X}) = \inf_{\mathscr{X}_N \subset \mathscr{X}} \sup_{f \in \mathscr{A}} \inf_{g \in \mathscr{X}_N} \|f-g\|.$$ This represents the extent to which $\mathscr{A}$ can be approximated by $N$-dimensional subspaces of $\mathscr{X}$. Consider now the $(N+1)$-dimensional ball $U_{N+1}$ defined by $$g(t) = \sum_{n=0}^N a_n \psi_n(t), \;\; \|g\|\leq r.$$ I want to prove that $$d_N(U_{N+1}, \mathscr{X}) = r.$$ The rough idea is that the approximating hyperplane for which the "worst" point on the ball is "best" approximated by a point on the plane should pass through the center of the ball, so that the distance to the farthest point is $r$. Any ideas on how to make this rigorous?
{ "id": [ "1990177" ], "body": [ "The introduction of $\\psi_n$ seems unnecessary." ], "at": [ "2014-10-12 06:49:25Z" ], "score": [ "1" ], "author": [ "user147263" ], "author_rep": [ null ] }
{ "id": [ "969795" ], "body": [ "\nGiven $\\mathscr X_N$ as above, pick $y\\notin \\mathscr X_N$ and let $z$ be a point of $\\mathscr X_N$ that minimizes the distance to $y$. Let $u=(y-z)/\\|y-z\\|$; this is a unit vector. \nSince $u$ is not parallel to $\\mathscr X_N$, there is $t\\in \\mathbb R$ such that $tu\\in \\mathscr X_N$. Let $$w=\\begin{cases} -ru\\quad &\\text{if }t\\ge 0 \\\\ ru\\quad &\\text{if }t< 0 \\end{cases}$$ \nand observe that \n$$\\operatorname{dist}(w,\\mathscr X_N)= \\operatorname{dist}(w-tu,\\mathscr X_N-tu)=\\|w-tu\\| = r+|t|$$\n" ], "score": [ 2 ], "ts": [ "2014-10-12 06:59:50Z" ], "author": [ "Massimo Franceschetti" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
968155
2014-10-11 18:03:08Z
Lover09
31
3
Conditional expectation of a random vector taking values in convex sets
[ "probability", "probability-theory", "convex-analysis", "convex-optimization", "conditional-probability" ]
On a probability space $(\Omega, \mathcal{F},\mathbb{P})$ I have a random vector $X\in L^1_{\mathbb{P}}(\mathbb{R}^n)$ (integrable with values in $\mathbb{R}^n$), such that $\mathbb{P}$-a.s. $$X\in C$$ where $C$ is a convex subset of $\mathbb{R}^n$. I want to know if this implies that $\mathbb{P}$-a.s. $$E[X| \mathcal{F}_0] \in C$$ for any sigma-algebra $\mathcal{F}_0\subset \mathcal{F}$. It seems natural and intuitive, but I am not able to write a clean proof explicitly in the general case. Do we need to put some further assumptions on $C$? Do you have any idea of how we can write the proof explicitly? Thank you.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "968235", "1001864" ], "body": [ "\nIf the convex set $C$ is closed, then $C$ is an intersection of half spaces $H_u^c=\\{x\\mid \\langle x,u\\rangle\\leqslant c\\}$ for some family $\\mathcal H$ of pairs $(u,c)$ in $\\mathbb R^n\\times\\mathbb R$. For every such pair $(u,c)$ in $\\mathcal H$, $\\langle X,u\\rangle\\leqslant c$ almost surely and $\\langle E(X\\mid \\mathcal F_0),u\\rangle=E(\\langle X,u\\rangle\\mid\\mathcal F_0)$ almost surely hence $\\langle E(X\\mid \\mathcal F_0),u\\rangle\\leqslant c$ almost surely. If $\\mathcal H$ is at most countable, this implies that $\\langle E(X\\mid \\mathcal F_0),u\\rangle\\leqslant c$ almost surely for every $(u,c)$ in $\\mathcal H$, that is, $P(E(X\\mid \\mathcal F_0)\\in C)=1$.\nThe space $\\mathbb R^n$ is separable hence, if $C$ is a closed convex set, then $C$ is constructible, that is, $C$ is the intersection of countably many halfspaces. \nAll this proves that if $X$ is in a convex set $C$ almost surely then $E(X\\mid\\mathcal F_0)$ is in $\\bar C$ almost surely.\n", "\nMore can be said when $C$ is an open interval, say $C=(0,1)$. Let $A$ be the event on which\n$E(X|{\\mathcal F}_0) = 0$. Then \n$$\n0=E(1_AE(X|{\\mathcal F}_0))=E(1_AX),\n$$\nwhere $1_A$ is the indicator of the event $A$. Consequently,\n$$\nP(A\\cap\\{X>1/n\\})\\le nE(1_AX) =0.\n$$\nfor $n=1,2,3,\\ldots$. It follows that $P(A\\cap\\{X>0\\})=0$. Because $P(X>0)=1$, we must have $P(A)=0$. That is, $P(E(X|{\\mathcal F}_0)=0)=0$. Likewise, $P(E(X|{\\mathcal F}_0)=1)=0$.\nSo $P(E(X|{\\mathcal F}_0)\\in C)=1$ in case $C$ is an open interval.\nFor open convex $C\\subset R^n$ I need to use regular conditional distributions. Let $(\\Omega,{\\mathcal F},P)$ be the probability space on which $X$ is defined. 
There is a kernel $\\mu(\\omega,B)$ ($\\omega\\in\\Omega$, $B$ in the Borel subsets of $R^n$) that is an ${\\mathcal F}$-measurable function of $\\omega$ for each fixed $B$ and a probability measure as a function of $B$ for each fixed $\\omega\\in\\Omega$, such that\n$$\nE(1_A f(X)) =\\int_A\\left[\\int_{R^n}\\,\\mu(\\omega,dx)f(x)\\right]\\,P(d\\omega),\n$$\nfor all $A\\in{\\mathcal F}_0$ and Borel $f:R^n\\to R$ such that $f(X)$ is integrable. \nIn other words, for $f$ as above, $\\omega\\mapsto \\int_{R^n}\\mu(\\omega,dx)f(x)$ is a version of the random variable $E(f(X)|{\\mathcal F}_0)$.\nFor example, the choice $f=1_C$ yields\n$$\nP(A) = \\int_A \\mu(\\omega,C)\\,P(d\\omega),\n$$\nand thereby the knowledge that $\\mu(\\omega,C)=1$ for $P$-a.e. $\\omega\\in\\Omega$. I shall write $G$ for $\\{\\omega\\in\\Omega:\\mu(\\omega,C)=1\\}$.\nLet's now focus on $E(X|{\\mathcal F}_0)$. I claim the following: If $\\mu$ is a probability measure concentrated on the open convex set $C$, then the barycenter \n$x^*:=\\int_C x\\,\\mu(dx)$ is an element of $C$. (The integrability of $x$ with respect to $\\mu$ is assumed.) By Didier's argument provided above, $x^*$ is necessarily an element of $\\overline C$. Arguing by contradiction, suppose that $x^*$ is in the boundary of $C$; that is, $x^*\\in\\overline C\\setminus C$. The convex set $C$ admits a support hyperplane at $x^*$, embodied by an affine function $f:R^n\\to R$ of the form $f(x)=a+b\\cdot x$ ($a\\in R, b\\in R^n$) such that $f(x^*)=0$ but $f(x)<0$ for all $x\\in C$ (because $C$ is open). We have\n$$\n\\int_C f(x)\\,\\mu(dx) =a+b\\cdot\\left[\\int_C x\\,\\mu(dx)\\right]=a+b\\cdot x^*=f(x^*)=0.\n$$\nOn the other hand, $\\int_C f(x)\\,\\mu(dx)<0$ because $f(x)<0$ for all $x\\in C$, and we have our contradiction. 
Finally, if $\\omega\\in G$ (defined at the end of the last paragraph) then the preceding discussion applies to $\\mu(\\omega,\\cdot)$, and so $E(X|{\\mathcal F}_0)(\\omega) =\\int_C x\\,\\mu(\\omega,dx)\\in C$, for $\\omega\\in G$, hence a.s.\nIt may be that a refinement of this argument yields the desired result for general convex $C$.\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-11 19:32:58Z", "2014-11-02 17:33:26Z" ], "author": [ "Lover09", "Lover09" ], "author_rep": [ "31", "31" ], "accepted": [ false, false ], "comments": [ { "id": [ "1989343" ], "body": [ "Thank you for this deep answer. I am not a specialist of convex spaces. Your (impressive) proof holds for random vectors with values in separable Hilbert spaces and it shows that C must be both convex and closed. Can we do better than this? Is it somehow related to the fact that a set which is both closed and convex for the weak topology is closed for the strong topology?" ], "at": [ "2014-10-11 21:23:21Z" ], "score": [ "" ], "author": [ "Lover09" ], "author_rep": [ "31" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
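The open-interval case in this record can be illustrated on a finite probability space, where conditioning on a partition is just block-wise averaging (a toy sketch; the distribution and partition below are made up for illustration):

```python
# X takes values in the convex set C = (0, 1); the conditional expectation
# given the sigma-algebra generated by a finite partition is, on each block,
# a convex combination of values of X, hence lies in C again.
probs = [0.1, 0.2, 0.3, 0.4]   # P on Omega = {0, 1, 2, 3}
X = [0.2, 0.9, 0.5, 0.7]       # X(omega) in (0, 1)
blocks = [[0, 1], [2, 3]]      # partition generating F_0
cond_exp = [
    sum(probs[w] * X[w] for w in B) / sum(probs[w] for w in B)
    for B in blocks
]
assert all(0.0 < e < 1.0 for e in cond_exp)
print(cond_exp)
```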
967900
2014-10-11 14:06:21Z
Joe
143
3
Show that for any integer a, a^2 + 5 is not divisible by 4.
[ "discrete-mathematics" ]
My solution is: Assume by contradiction that there is at least one integer $a$ such that $a^2+5$ is divisible by $4$. Then $a$ is either odd or even. Consider the case when $a$ is odd. Then $a=2k+1$ for some integer $k$. Then $a^2=4k^2+4k+1$, so $a^2$ is odd, and hence $a^2+5$ is an even number. In order for $a^2+5$ to be divisible by $4$, we need $a^2+5=4p$ for some integer $p \geq 1$; that is, $4p-5$ must be a perfect square. Since $a^2$ is odd, $4p-5$ must be odd, so it is the square of an odd integer: $4p-5=(2n+1)^2=4n^2+4n+1$. Thus $4p=4n^2+4n+6$, so $p=n^2+n+1.5$, which shows that $p$ is not an integer; therefore $4p-5$ cannot be a perfect square, and $a^2+5$ cannot be divisible by $4$. Consider the case when $a$ is even. Then $a=2k$ for some integer $k$, so $a^2=4k^2$ and $a^2$ is divisible by $4$. But then $a^2+5$ is odd, and clearly it is not divisible by $4$. Thus, in all cases we reach a contradiction. Therefore, $a^2+5$ is not divisible by $4$ for any integer $a$. Am I correct?
{ "id": [ "1987432" ], "body": [ "The concept of your proof is fine, but the proof as written suffers from various faults which makes it turn into nonsense at a certain point. For starters, you have two different and clashing uses of $k$: \"$a=2k+1$ for some integer $k$\" and \"$a^2+4=4k$ for $k \\ge 1$\". You also have two different expressions for the same number: \"$a^2+5=4k$\" and \"$4k-5=(2n+1)^2$\" so that $a$ and $2n+1$ are the same number. You'll be very close to a perfectly correct proof if you clean up those issues." ], "at": [ "2014-10-11 14:18:51Z" ], "score": [ "" ], "author": [ "Lee Mosher" ], "author_rep": [ "108798" ] }
{ "id": [ "967905", "967913" ], "body": [ "\nYour idea is fine. More compactly, any number is congruent to $0,1,2,3$ modulo $4$. Squaring gives $0,1,0,1$. But $5=-1$ modulo $4$, and it cannot be the case that $a^2=-1\\mod 4$ for any $a$, by the above.\n", "\nWell, there is a little notation failure. First, you say that $a=2k+1$, and later you say that $a^2+5=4k$. Obviously, you don't mean that the first $k$ and the second $k$ are the same number, so you should rename one of them.\nBesides that, your proof is correct, but somewhat long.\nIt is clear that you don't know how to, or don't want to, use congruences. But even with no congruences, the proof can be shortened, and clarified, I think.\nI'll write a proof of the hard part of the problem, that is, the case in which $a$ is odd.\n\nSince $a$ is odd, there exists some integer $k$ such that $a=2k+1$. Squaring and adding $5$, we get $a^2+5=4k^2+4k+6=4(k^2+k+1)+2$, which is not a multiple of $4$.\n\nI hope it helps. Nevertheless, the best way to learn how to write proofs is writing (and reading) proofs.\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-11 14:13:41Z", "2014-10-11 14:29:01Z" ], "author": [ "Joe", "Joe" ], "author_rep": [ null, null ], "accepted": [ false, false ], "comments": [ { "id": [ "1987414", "1987447" ], "body": [ "Is my solution correct? I am using a basic method to solve this question. I know about modulo but at my level I do not want to try it yet.", "\"Your idea is fine.\" I am just looking at the possibilities $n=4k,4k+1,4k+2,4k+3$. In each case, I can write $n^2$ as $4k$ or $4k+1$, hence the conclusion." ], "at": [ "2014-10-11 14:14:09Z", "2014-10-11 14:22:20Z" ], "score": [ "", "" ], "author": [ "Joe", "Pedro" ], "author_rep": [ "143", "118854" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
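The mod-4 argument in this record can be checked exhaustively over a window of integers, since $(a^2+5)\bmod 4$ depends only on $a\bmod 4$ (a small sketch):

```python
# Exhaustive check of (a**2 + 5) % 4 over a window of integers; the value
# depends only on a % 4, so any window of length >= 4 covers every case.
residues = {(a * a + 5) % 4 for a in range(-50, 51)}
assert 0 not in residues        # a^2 + 5 is never divisible by 4
print(residues)                 # squares are 0 or 1 mod 4, so this is {1, 2}
```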
967308
2014-10-10 22:44:56Z
Kevin Carroll
242
3
Propositional Logic: Conditions for a sequence to be an element of $\mathcal{L_0}$
[ "logic", "propositional-calculus" ]
Let $\mathcal{L_0}$ be the smallest set $L$ of finite sequences of $\textit{logical symbols}= \{(\enspace)\enspace\neg\enspace\to\}$ and $\textit{propositional symbols}=\{A_n|n\in\mathbb{N}\}$ satisfying the following properties: (1) For each propositional symbol $A_n$ with $n\in\mathbb{N}$, \begin{multline} A_n \in L. \end{multline} (2) For each pair of finite sequences $s$ and $t$, if $s$ and $t$ belong to $L$, then \begin{multline} (\neg s) \in L \end{multline} and \begin{multline} (s \to t) \in L. \end{multline} Show that $\phi$ is an element of $\mathcal{L_0}$ if and only if there is a finite sequence of sequences $\langle\phi_1,\dots,\phi_n\rangle$ such that $\phi_n = \phi$, and for each $i$ less than or equal to $n$ either there is an $m$ such that $\phi_i = \langle A_m \rangle$, or there is a $j$ less than $i$ such that $\phi_i = (\neg \phi_j)$ or there are $j_1$ and $j_2$ less than $i$ such that $\phi_i = (\phi_{j_1} \to \phi_{j_2})$. I'm a little confused about what this sequence of sequences is. Like for example, let $\phi = ((A_1 \to (\neg A_2)) \to A_3)$. Now when we talk about $\langle\phi_1,\dots,\phi_n\rangle$ where $\phi_n = ((A_1 \to (\neg A_2)) \to A_3)$, what are the $\phi_i$? Is $\phi_{n-1} = ((A_1 \to (\neg A_2)) \to A_3$? Is $\phi_{n-2} = ((A_1 \to (\neg A_2)) \to$? At this point, formulas have not yet been defined. What are the $\phi_i$? How can we prove that any sequence must be such a sequence of sequences to be in $\mathcal{L_0}$?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "967407" ], "body": [ "\n\nLike for example, let $\\phi = ((A_1 \\to (\\neg A_2)) \\to A_3)$. Now when we talk about $\\langle\\phi_1,\\dots,\\phi_n\\rangle$ where $\\phi_n = ((A_1 \\to (\\neg A_2)) \\to A_3)$, what are the $\\phi_i$? \nIs $\\phi_{n-1} = ((A_1 \\to (\\neg A_2)) \\to A_3$? Is $\\phi_{n-2} = ((A_1 \\to (\\neg A_2)) \\to$? At this point, formulas have not yet been defined. What are the $\\phi_i$?\n\nOne important thing to note on the statement on the RHS of the equivalence is that it is of the form 'there exists a natural number $n$ and there exists a sequence of sequences of propositional and logical symbols with length $n$ and certain other properties'.\nSo in your example you should first specify what $n$ is. There are infinitely many possibilities for $n$; to help you through this example I'll choose $n=6$.\nSet \n$$\\begin{align}\n&\\phi _1=A_1,\\\\ \n&\\phi_2=A_2,\\\\ \n&\\phi _3=(\\neg A_2),\\\\ \n&\\phi _4=(A_1\\to (\\neg A_2)),\\\\ \n&\\phi_5=A_3,\\\\ \n&\\phi _6=((A_1 \\to (\\neg A_2)) \\to A_3)\n\\end{align}$$ and consider the finite sequence $\\langle \\phi_1, \\phi_2, \\phi_3, \\phi_4, \\phi_5, \\phi_6\\rangle$.\nLet's check if $\\langle \\phi_1, \\phi_2, \\phi_3, \\phi_4, \\phi_5, \\phi_6\\rangle$ satisfies what is asked of you to prove.\nSo $i\\in \\{1,2,3,4,5,6\\}$.\nIf $i=1$, set $m=1$ to get $\\phi _1=A_1$ (by the way, I believe the $\\langle \\rangle$ enclosing $A_1$ in the first paragraph of this answer are not meant to be there).\nIf $i=2$, set $m=2$ to get $\\phi _2=A_2$.\nIf $i=3$, set $j=2$ to get $\\phi _3=(\\neg \\phi_2)=(\\neg A_2)$.\nIf $i=4$, set $j_1=1$ and $j_2=3$ to get $\\phi _4=(\\phi _1\\to \\phi _3)=(A_1\\to (\\neg A_2))$.\nIf $i=5$, set $m=3$ to get $\\phi _5=A_3$.\nIf $i=6$, set $j_1=4$ and $j_2=5$ to get $\\phi=\\phi _6=(\\phi _4\\to \\phi_5)=((A_1\\to (\\neg A_2))\\to A_3)$.\nFormulas have been defined, maybe they haven't been named, but formulas are the names one gives to the elements of $\\mathcal L_0$.\n\nHow 
can we prove that any sequence must be such a sequence of sequences to be in $\\mathcal{L_0}$?\n\nI think you're not asking what you want to ask. Can you rephrase it, please?\n\nA couple of remarks.\nI said there were infinitely many possibilities for $n$; that's because you can add superfluous $\\phi _k$ as you please. You can add them and not use them, nothing wrong with that.\nWhat the problem is asking of you is basically to prove that every element of $\\mathcal L_0$ exists by being built from previously defined elements.\n\nEdit: \nI said that formulas are what one calls the elements of $\\mathcal L_0$. There is a 'but'. Formulas are any finite sequences of propositional and logical symbols. The elements of $\\mathcal L_0$ are often called well-formed formulas or wffs, but most of the time in logic we're interested only in wffs, so the term formula is often used to refer to wff after the audience is past the introduction to this sort of thing.\nNow regarding the problem itself.\nGiven a formula (not necessarily a wff) $\\phi$, abbreviate $\\phi\\in \\mathcal L_0$ by $P(\\phi)$ and let $Q(\\phi)$ abbreviate \"there exists a positive natural number $n$ and a finite sequence $s$ of sequences of propositional and logical symbols such that $s=\\langle\\phi_1,\\dots,\\phi_n\\rangle, \\phi_n=\\phi$ and $\\forall i\\in \\mathbb N\\left(0<i\\leq n\\implies R(i,n)\\right)$\", where $R(i,n)$ is the predicate $$\\exists m\\in \\mathbb N(\\phi _i=A_m)\\lor \\exists j\\in \\mathbb N(j<i\\land \\phi_i=(\\neg \\phi_j))\\lor \\exists j_1, j_2\\in \\mathbb N(j_1,j_2<i\\land \\phi_i=(\\phi_{j_1}\\to \\phi_{j_2})).$$\nThe statement to prove is 'for all formulas $\\phi$ the equivalence $P(\\phi)\\iff Q(\\phi)$\nholds'. \nI'd rather reformulate this a bit. Let $\\mathcal F$ be the set of formulas that satisfy $Q$. You want to prove that $\\mathcal L_0=\\mathcal F$.\n$\\boxed{\\subseteq}$\nFor this inclusion, prove that $\\mathcal F$ satisfies (1) and (2). 
Since $\\mathcal L_0$ is the smallest set of formulas that does this, the inclusion follows.\n$\\boxed{\\supseteq}$\nProve by complete induction on $m$ the following statement: $\\forall m\\in \\mathbb N\\forall \\phi\\in \\mathcal F\\left(\\langle \\phi_1, \\ldots ,\\phi_m\\rangle\\text{ 'generates' }\\phi \\implies \\phi\\in \\mathcal L_0\\right)$ (I suppose you can guess what I mean by 'generating' here).\n" ], "score": [ 2 ], "ts": [ "2014-10-11 18:09:28Z" ], "author": [ null ], "author_rep": [ "30993" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
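The "construction sequence" characterization in this record can be made concrete with a small checker; here ASCII `~` stands in for $\neg$, `->` for $\to$, and the function name is purely illustrative:

```python
def is_construction_sequence(seq):
    """Check the right-hand side of the equivalence: every entry is an atom
    'A<m>', the negation of an earlier entry, or an implication built from
    two earlier entries.  ASCII '~' plays the role of the negation symbol
    and '->' the role of the arrow."""
    for i, phi in enumerate(seq):
        earlier = seq[:i]
        if phi.startswith("A") and phi[1:].isdigit():
            continue  # a propositional symbol A_m
        if any(phi == "(~" + p + ")" for p in earlier):
            continue  # negation of some phi_j with j < i
        if any(phi == "(" + p + "->" + q + ")" for p in earlier for q in earlier):
            continue  # implication between two earlier entries
        return False
    return True

# The n = 6 sequence from the answer, generating ((A1 -> (~A2)) -> A3):
seq = ["A1", "A2", "(~A2)", "(A1->(~A2))", "A3", "((A1->(~A2))->A3)"]
assert is_construction_sequence(seq)
assert not is_construction_sequence(["(~A1)"])  # nothing earlier to negate
```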
967080
2014-10-10 19:26:24Z
GrayOnGray
254
3
Derivation of normal equations for minimization of Frobenius norm least squares error
[ "regression", "matrix-equations", "matrix-calculus", "least-squares" ]
I'm having a hard time understanding the most efficient sequence of steps for deriving the normal equations for Frobenius norm least squares minimization. Here I want to minimize the norm of a matrix directly, rather than arguing I can do it row-by-row, because I want to improve my facility with matrix calculations. "We observe pairs $(x_i,y_i)$ with $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}^n$. We’ll let $i$ range from $1$ to $L$. Let $X = [x_1,\ldots ,x_L]$ denote the matrix whose columns are the examples $x_i$. Similarly, let $Y = [y_1,\ldots , y_L]$. We will try to fit a function of the form $$f(y) = Wy + v$$ where $W \in \mathbb{R}^{d\times n}$ and $v \in \mathbb{R}^d$. Let $\mathbf{1}_L$ denote the vector of length $L$ with ones in all entries." So we want to minimize $$ \|WY + v \mathbf{1}^T_L - X\|_F^2 \, .$$ My confusion is that the correct normal equations seem to be \begin{align} & (WY + v \mathbf{1}^T_L - X) \mathbf{1}_L = 0 \\ & (WY + v \mathbf{1}^T_L - X) Y^T = 0 \end{align} but I do not obtain this directly by differentiation, setting the appropriate Jacobians to zero. At least not when I think I'm being careful. Rather, I obtain it through a way-too-complicated sequence of steps. Can someone clarify where I'm going wrong? No need to read all of the following (carefully) if you just know the answer. Maybe there's an obvious choice of partials one should take, in an easily generalizable manner? First, to optimize w.r.t. $v$, I employ the chain rule for Jacobians. We note that $\| A(v) \|_F^2 = \sum_{ij} A(v)_{ij}^2$, and by the chain rule, \begin{align*} J_v \bigl[ \| WY + v \mathbf{1}^T_L - X \|_F^2 \bigr] =& \ J_{(WY + v \mathbf{1}^T_L - X)}\bigl( \|\cdot\|_F^2\bigr) J_v\bigl(WY + v \mathbf{1}^T_L - X\bigr)\ \dot{=}\ 0 \in \mathbb{R}^{1 \times d} \end{align*} must be satisfied. 
(I will sometimes refer to $WY + v \mathbf{1}^T_L - X$ as `$A$.') By $J_{(WY + v \mathbf{1}^T_L - X)}\bigl( \|\cdot\|_F^2\bigr)$ I mean the Jacobian of the Frobenius norm, evaluated at $WY + v \mathbf{1}^T_L - X$. This would appear to be a $(1 )\times (d \times L)$ tensor, since we're looking at the derivative of the function $\| \cdot \|_F^2: \mathbb{R}^{d\times L} \to \mathbb{R}$. This first Jacobian $J \| A\|_F^2 $ has elements $J_{1,i,j} = \frac{\partial}{\partial A_{ij}} \sum_{kl} A_{kl}^2 = 2 A_{ij} = 2(WY + v \mathbf{1}^T_L - X\bigr)_{ij}$. Next we need the Jacobian $J_v\bigl(WY + v \mathbf{1}^T_L - X\bigr) = J_v (v \mathbf{1}^T_L) $. At first I had written that $\frac{\partial}{\partial v} v \mathbf{1} = \mathbf{1}$. This gives the correct normal equation: $$(WY + v \mathbf{1}^T_L - X) \mathbf{1}_L = 0 \, . $$ However, $\mathbf{1}^T_L$ is not the Jacobian of $v \mathbf{1}^T_L$, since the latter is rank one $d \times L$ matrix. Probably the answer lies in finding what the previous expression is the derivative of. Instead, this Jacobian $ J_v (v \mathbf{1}^T_L)$ is a $(d\times L) \times (d)$ tensor, since it's the matrix of the derivative of a function $\mathbb{R}^d \to \mathbb{R}^{d \times L}$. Since $v \mathbf{1}^T_L$ is linear in $v$, the $i$th partial derivative can be calculated by replacing $v$ with $e_i$, getting that the $d\times L$ matrix we denote as $J_v(v \mathbf{1}^T)_{:,:,i}$ is $e_i \mathbf{1}^T_L$. Then, when we multiply these tensors, we'll get a $1 \times d$ Jacobian, the transpose of the gradient w.r.t. $v$, and set that equal to $0$. How do we write that multiplication? Summing over the matching $(d \times L)$ indices, the composite derivative given by the chain rule equals (?) $\frac{1}{2}D_{1,j} = \sum_{k,l} A_{kl} (e_j \mathbf{1}^T_L)_{kl} = a_j^T \mathbf{1}$, the $j$th row sum of $A = WY + v \mathbf{1}^T_L - X \ \dot{=}\ 0$. 
Since that's true for every $j$, this system satisfies \begin{align} (WY + v \mathbf{1}^T_L - X) \mathbf{1} = 0 \end{align} which is the formula in my professor's notes. From this we follow the notes and get that $$v = \mu_x^{\text{emp}} - W \mu_y^\text{emp} \, .$$ (The $\mu$s are empirical averages, over the $L$ instances.) At this point, the professor ``plugs this back in and solves for $W$.'' To repeat the steps above we need the Jacobian $J_W(WY)$. This is a $(d \times L) \times (d \times n)$ tensor. The first order condition with respect to $W$ is a condition on the $1\times (d \times n)$ composite Jacobian of the Frobenius norm with respect to $W$. The $(i,j)$ partial, $(i,j) \in (1,\ldots, d) \times (1, \ldots, n)$, (the effect of varying the $(i,j)$ component of $W$) is the matrix $e_{i,j} Y = e_i y_j^T$, which places the $j$th row of $Y$ in the $i$th place. (By $e_{i,j}$ I denote the matrix with a $1$ in the $(i,j)$ place and zeros elsewhere.) So again applying the chain rule we sum out the $(d \times L)$ indices \begin{align} & \ \sum_{k\leq d,l \leq L} (WY + v \mathbf{1}^T_L - X)_{kl} (e_i y_j^T)_{kl} = 0^{d \times n} \\ \Leftrightarrow & \ \sum_{k,l} \bigl(WY + (\mu_x^{\text{emp}} - W \mu_y^\text{emp}) \mathbf{1}^T_L - X \bigr)_{kl} (e_i y_j^T)_{kl} = 0^{d \times n} \\ \Rightarrow & \ \sum_{l} \bigl(WY + (\mu_x^{\text{emp}} - W \mu_y^\text{emp}) \mathbf{1}^T_L - X \bigr)_{il} (e_i y_j^T)_{il} = 0^{d \times n}. \end{align} Where in the last step we ignored multiplications by zero. The last term is the dot product between the $j$th row of $Y$ and the $i$th row of $\bigl(WY + (\mu_x^{\text{emp}} - W \mu_y^\text{emp}) \mathbf{1}^T_L - X \bigr)$. 
Since this holds for all $i$ and $j$, in particular, we can write \begin{align*} &\ \bigl(WY + (\mu_x^{\text{emp}} - W \mu_y^\text{emp}) \mathbf{1}^T_L - X \bigr) Y^T = 0^{d \times n} \\ \Rightarrow & \ W \bigl[ Y Y^T - \mu_y^\text{emp} \mathbf{1}^T Y^T \bigr] = X Y^T - \mu_x^\text{emp}\mathbf{1}^T Y^T \end{align*} The top equation is exactly what we would have thought if we said that the Jacobian of $WY$ is $Y^T$. The rest of the post is less important. Note that $\mathbf{1}^T Y^T = (Y \mathbf{1})^T = L (\mu_y^\text{emp})^T$. Let's drop those "emp" superscripts. \begin{align*} \Rightarrow & \ W \bigl[ Y Y^T - L \mu_y \mu_y^T \bigr] = X Y^T - L \mu_x \mu_y^T \, . \end{align*} In retrospect perhaps it would have been best to just acknowledge that the problem is separable in the rows of $W$, following e.g. the Willsky, Wornell and Shapiro notes, ch 3 p. 126. Or perhaps we could have directly derived the normal equations using an orthogonality argument with respect to the trace inner product? It seems intuitive, since $Y$ and $\mathbf{1}$ are ... they're like the span of something.... That would give: \begin{align*} & \ \text{Tr}\bigl( (W Y + v \mathbf{1}^T_L - X)^T \mathbf{1}^T \bigr) = 0 \\ \Rightarrow &\ \mathrm{Tr}\Bigl[ \mathbf{1}^T WY + \mathbf{1}^T v \mathbf{1}^T - \mathbf{1}^T X^T \Bigr] = 0 \\ \end{align*} I don't see it.
{ "id": [ "4825391" ], "body": [ "You can simplify notation by including $v$ in $W$ (as the final column) and adding a component that is equal to $1$ at the end of each vector $y_i$. Then, the normal equations are what you get when you set the gradient equal to $0$, so just come the gradient of your objective function with respect to $W$ and set it equal to $0$." ], "at": [ "2017-07-02 01:56:45Z" ], "score": [ "" ], "author": [ "littleO" ], "author_rep": [ "49486" ] }
{ "id": [ "2343543" ], "body": [ "\nI'll assume our model is $y_i \\approx W x_i + v$, and we want to compute $W$ and $v$. We can simplify notation by including $v$ in $W$ (as a final column) and adding a component equal to $1$ at the end of each vector $x_i$. Our goal is to minimize\n$$\nf(W) = \\frac12 \\| W X - Y \\|_F^2.\n$$\nWe can do this by simply solving the equation $\\nabla f(W) = 0$, but first we need to evaluate $\\nabla f(W)$.\nNotice that\n\\begin{align}\nf(W + \\Delta W) &= \\frac12 \\| WX - Y + \\Delta W X \\|_F^2 \\\\\n&=\\frac12 \\underbrace{\\| WX - Y \\|_F^2}_{f(W)} + \\underbrace{\\langle WX - Y, \\Delta W X \\rangle}_{\\text{Tr}((WX - Y)^T \\Delta W X)} + \\underbrace{\\frac12 \\| \\Delta W X \\|_F^2}_{\\text{negligible}} \\\\\n&\\approx f(W) + \\text{Tr}((WX - Y)^T \\Delta W X) \\\\\n&= f(W) + \\text{Tr}(X (WX - Y)^T \\Delta W) \\\\\n&= f(W) + \\langle (WX - Y) X^T, \\Delta W \\rangle.\n\\end{align}\nIn the second to last step we used the fact that $\\text{Tr}(AB) = \\text{Tr}(BA)$. The inner product denotes the Frobenius inner product, and the trace expression $\\langle A, B \\rangle = \\text{Tr}(A^T B)$ is a standard formula for the Frobenius inner product.\nComparing with the equation\n$$\nf(W + \\Delta W) \\approx f(W) + \\langle \\nabla f(W), \\Delta W \\rangle,\n$$\nwe discover that\n$$\n\\nabla f(W) = (WX - Y)X^T.\n$$\nSo, the normal equations are\n$$\n(WX - Y) X^T = 0,\n$$\nor equivalently $W X X^T = Y X^T$.\n" ], "score": [ 2 ], "ts": [ "2017-07-02 02:29:13Z" ], "author": [ "GrayOnGray" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
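The normal equations $(WX-Y)X^T=0$ from the accepted answer specialize, for scalar data with the bias kept separate, to the familiar line-fit formulas; a minimal sketch with made-up data:

```python
# Scalar instance (d = n = 1, bias handled separately) of the normal
# equations: minimizing sum_i (w*x_i + v - y_i)**2 and setting both partial
# derivatives to zero gives the classic slope/intercept formulas -- the 1-D
# case of (WX - Y)X^T = 0 with an all-ones row appended to X.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 2x + 1, so the fit is exact
L = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
w = (L * sxy - sx * sy) / (L * sxx - sx * sx)
v = (sy - w * sx) / L
assert abs(w - 2.0) < 1e-12 and abs(v - 1.0) < 1e-12
print(w, v)
```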
967064
2014-10-10 19:11:16Z
Nathan McKenzie
957
3
Calculating the limit $\lim_{c\rightarrow 1+}\sum_{j=0}^{\lfloor\frac{\log n}{\log c}\rfloor}(-1)^j\binom{z}{j}c^j$
[ "sequences-and-series", "limits" ]
Does anyone know how to calculate, for constant values of $n$ and $z$, this limit? $$\lim_{c\rightarrow 1+}\sum_{j=0}^{\lfloor\frac{\log n}{\log c}\rfloor}(-1)^j\binom{z}{j}c^j$$ Thanks!
{ "id": [ "1985823", "1985848", "1986918" ], "body": [ "Numerically, I find that the limit goes to zero, independently of n and z.", "NicoDean: Is that true for negative z as well for you? For me, if I try -1 or -2 for z, it looks like it diverges. Oh, and obviously if n = 1, the sum equals 1, regardless of z or c.", "only tried positive constants, but robjohn has solved it now, so everything is clear." ], "at": [ "2014-10-10 19:22:27Z", "2014-10-10 19:51:43Z", "2014-10-11 08:31:18Z" ], "score": [ "", "", "" ], "author": [ "Mario Krenn", "Nathan McKenzie", "Mario Krenn" ], "author_rep": [ "904", "957", "904" ] }
{ "id": [ "967105" ], "body": [ "\nFor positive integer values of $z$, when $1\\lt c\\le n^{1/{\\large z}}$ the sum is $(1-c)^{\\large z}$. Thus, as $c\\to1^+$, the sum tends to $0$.\nFor negative values of $z$, the alternation disappears and we are left with the sum\n$$\n\\sum_{j=0}^{\\left\\lfloor\\frac{\\log(n)}{\\log(c)}\\right\\rfloor}\\binom{j-z-1}{j}c^j\n\\ge\\sum_{j=0}^{\\left\\lfloor\\frac{\\log(n)}{\\log(c)}\\right\\rfloor}\\binom{j-z-1}{j}\n$$\nWith no alternation and the terms on the right being integers bigger than $1$ (generally, much bigger), as $c\\to1^+$, the sum will be the sum of $\\left\\lfloor\\frac{\\log(n)}{\\log(c)}\\right\\rfloor\\to\\infty$ terms bigger than $1$. Thus, the sum diverges for negative $z$.\n" ], "score": [ 2 ], "ts": [ "2014-10-10 20:05:56Z" ], "author": [ "Nathan McKenzie" ], "author_rep": [ "957" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
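The accepted answer's observation for positive integer $z$ (once $1<c\le n^{1/z}$, the truncated sum collapses to $(1-c)^z$, which tends to $0$) is easy to verify numerically; a sketch relying on `math.comb` returning $0$ for $j>z$:

```python
from math import comb, floor, log

def partial_sum(n, z, c):
    """Sum_{j=0}^{floor(log n / log c)} (-1)**j * C(z, j) * c**j,
    for non-negative integer z (math.comb(z, j) is 0 once j > z)."""
    top = floor(log(n) / log(c))
    return sum((-1) ** j * comb(z, j) * c ** j for j in range(top + 1))

# For positive integer z and 1 < c <= n**(1/z) the sum equals (1 - c)**z,
# so it tends to 0 as c -> 1+:
n, z = 100, 3
for c in (1.5, 1.1, 1.01, 1.001):
    assert abs(partial_sum(n, z, c) - (1 - c) ** z) < 1e-9
print("sum -> 0 as c -> 1+")
```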
966664
2014-10-10 12:56:47Z
steindijr
105
3
Taylor approximation of Gaussian pdf around the origin
[ "real-analysis" ]
Let $\phi(x)$ be the standard Gaussian pdf, i.e. $\phi(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$. Can a constant $K$ be found such that $$|\phi(x+y)-\phi(x)-\phi(y)+\phi(0)|\leq K|xy|$$ for all $|x|,|y|\leq 1$?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "966694" ], "body": [ "\nWe need to find a constant $C$ such that\n$$\\left|e^{-(x+y)^2/2}-e^{-x^2/2}-e^{-y^2/2}+1\\right|\\leq C\\,|xy| $$\nfor any $|x|,|y|\\leq 1$. Since, by setting $A=e^{-x^2/2},B=e^{-y^2/2}$:\n$$ AB e^{-xy}-A-B+1 = (A-1)(B-1)+AB(e^{-xy}-1) $$\nit follows that:\n$$\\left|e^{-(x+y)^2/2}-e^{-x^2/2}-e^{-y^2/2}+1\\right|\\leq \\left((e^{-1/2}-1)^2+(e-1)\\right)|xy|\\tag{1}$$\nsince for any $z\\in[-1,1]$ we have:\n$$e^{-z^2}\\leq 1,\\quad \\left|\\frac{e^{-z^2/2}-1}{z}\\right|\\leq(1-e^{-1/2}),\\quad \\left|\\frac{e^{-z}-1}{z}\\right|\\leq e-1,$$\nso the triangle inequality simply gives:\n$$\\left|\\phi(x+y)-\\phi(x)-\\phi(y)+\\phi(0)\\right|\\leq \\frac{1}{\\sqrt{2\\pi}}\\left(\\frac{1}{e}+e-\\frac{2}{\\sqrt{e}}\\right)|xy|\\tag{2}$$\nand the initial inequality holds with $K=0.74725876555468\\ldots$ or just with $K=\\frac{3}{4}.$\n\nHowever, the optimal constant seems to be just $K=\\frac{1}{\\sqrt{2\\pi}}$, since the graph of $g(x,y)=\\frac{\\phi(x+y)-\\phi(x)-\\phi(y)+\\phi(0)}{xy}$ is very well behaved on $[-1,1]^2$:\n$\\hspace1in$\n" ], "score": [ 2 ], "ts": [ "2014-10-10 13:48:13Z" ], "author": [ "steindijr" ], "author_rep": [ "105" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
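A grid search over $[-1,1]^2$ supports the remark in this record that the optimal constant is close to $\phi(0)=1/\sqrt{2\pi}\approx 0.3989$, comfortably below the proved bound $K=3/4$ (a numerical sketch):

```python
from math import exp, pi, sqrt

def phi(x):
    # standard Gaussian pdf
    return exp(-x * x / 2) / sqrt(2 * pi)

# Grid search for sup |phi(x+y) - phi(x) - phi(y) + phi(0)| / |xy| on
# [-1,1]^2, skipping the axes where the quotient is a removable 0/0:
pts = [k / 100 for k in range(-100, 101) if k != 0]
sup = max(
    abs(phi(x + y) - phi(x) - phi(y) + phi(0)) / abs(x * y)
    for x in pts
    for y in pts
)
assert sup <= 0.75          # well below the crude bound K = 3/4
print(sup)                  # close to 1/sqrt(2*pi) ~ 0.3989, near the origin
```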
966494
2014-10-10 09:46:59Z
S. Pitchai Murugan
782
3
Subgroup of a direct product of center of a group
[ "abstract-algebra", "group-theory" ]
Let $G$ be a group and $Z(G)$ be its center. For $n\in \mathbb{N}$, define $$J_n=\{(g_1,g_2,...,g_n)\in Z(G)\times Z(G)\times\cdots\times Z(G): g_1g_2\cdots g_n=e\}.$$ Then $J_n$ is (1) not necessarily a subgroup, (2) a subgroup but not necessarily a normal subgroup, (3) a normal subgroup, (4) isomorphic to $Z(G)\times Z(G)\times\cdots\times Z(G)$ $(n-1)$ times. I proved $J_n$ is a normal subgroup. I cannot conclude whether my option 4 is correct or not.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "966691" ], "body": [ "\nIt is a subgroup, the question is of what group? And following your notation we look at the direct product of $\\;G\\;$ with itself $\\;n\\;$ times:\n$$G^{\\times n}:=\\overbrace{G\\times G\\times\\ldots\\times G}^{n\\;\\text{times}}\\implies J_n\\lhd \\left(Z(G)\\right)^{\\times n}$$\nAbout the last part: define\n$$\\phi:J_n\\to \\left(Z(G)\\right)^{\\times(n-1)}\\;\\;,\\;\\;\\phi(g_1,...,g_n):=(g_1,...,g_{n-1})$$\nNote the above is well defined since for $\\;g_1,...,g_n\\in Z(G)\\;$ , we have\n$$(g_1,...,g_{n-1},g_n)\\in J_n\\iff g_n=(g_1\\cdot...\\cdot g_{n-1})^{-1}$$\n(Perhaps this is what the other answer tried to convey)\nProve now the above is an isomorphism.\n" ], "score": [ 2 ], "ts": [ "2015-06-07 12:59:08Z" ], "author": [ null ], "author_rep": [ "1" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
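The counting behind option (4) in this record can be checked directly for an abelian $Z(G)$; modeling $Z(G)$ by the cyclic group $\mathbb{Z}_m$ (a toy sketch, with $m$ and $n$ chosen arbitrarily):

```python
from itertools import product

# Model Z(G) by the cyclic group Z_m (any abelian group behaves the same)
# and form J_n = { (g_1,...,g_n) : g_1 + ... + g_n = 0 mod m }:
m, n = 4, 3
J = [t for t in product(range(m), repeat=n) if sum(t) % m == 0]

# Dropping the last coordinate is a bijection onto Z_m^(n-1), since g_n is
# forced to equal -(g_1 + ... + g_{n-1}); hence |J_n| = m**(n-1):
assert len(J) == m ** (n - 1)
assert len({t[:-1] for t in J}) == len(J)   # the projection is injective
print(len(J))
```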
966472
2014-10-10 09:19:14Z
EulerRamanujan121
99
3
Evaluating a double integral
[ "multivariable-calculus" ]
I have to evaluate this double integral: $$\int_0^1\int_0^1\cos\ (\max \ \{x^3,y^{\frac{3}{2}} \} )\ dxdy$$ I have a hint that this is to be done with the help of Green's theorem, but I don't know how to start. Please help me with this. Thanks.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "971336" ], "body": [ "\nHint: divide the integration domain along the curve $x^3=y^{3/2}$. In each subdomain the integrand is function of only one variable:\n$$\\int_0^1\\int_0^1\\cos(\\max \\{x^3,y^{\\frac{3}{2}} \\} )\\,dxdy=\n\\iint_{D_1}\\cos(x^3)\\,dxdy+\\iint_{D_2}\\cos(y^{3/2})\\,dxdy.\n$$\nCan you continue?\n" ], "score": [ 2 ], "ts": [ "2014-10-13 06:23:21Z" ], "author": [ null ], "author_rep": [ "19485" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
966392
2014-10-10 07:36:27Z
felipeuni
4978
3
$f(z)$ is a polynomial $\Longleftrightarrow$ $\lim_{z\to \infty}f(z)=\infty$
[ "complex-analysis" ]
Let $f:\mathbb{C}\longrightarrow\mathbb{C}$ be an entire function such that $$\lim_{z\to \infty}f(z)=\infty$$ How can one prove that $f(z)$ is a polynomial using the maximum modulus principle (without the use of Laurent series)? Any hint would be appreciated.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "966473" ], "body": [ "\n$\\lim_{x\\to\\infty}f(z)=\\infty$ means that $\\infty$ is a pole of $f$ (and not an essential singularity as would be the case for $f(z)=\\exp z$).\nHence for suitable $k\\in\\mathbb N$, $g(z):=z^nf(1/z)$ is entire. Can you take it from here?\n" ], "score": [ 2 ], "ts": [ "2014-10-10 09:20:14Z" ], "author": [ "felipeuni" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "7878603" ], "body": [ "$k$ or $n$ ?..." ], "at": [ "2020-09-10 01:16:30Z" ], "score": [ "" ], "author": [ "Twnk" ], "author_rep": [ "2336" ] } ] }
966360
2014-10-10 06:54:52Z
boaten
1665
3
Growth of ratio of binomials polynomial or exponential?
[ "asymptotics", "binomial-coefficients" ]
Is the growth of $$ \dfrac{\binom{2n}{\sqrt{n}}}{\binom{n}{\sqrt{n}}} $$ polynomial or exponential (or some other kind of growth) in $n$? I tried using Stirling's approximation, which gives (ignoring the constants): $$2^{2n}n^n\cdot\dfrac{\sqrt{n-\sqrt{n}}\cdot(n-\sqrt{n})^{n-\sqrt{n}}}{\sqrt{2n-\sqrt{n}}\cdot(2n-\sqrt{n})^{2n-\sqrt{n}}}$$ but how can one estimate the growth of this?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "966388" ], "body": [ "\nWithout Stirling: $$ R_n=\\frac{\\binom{2n}{\\sqrt{n}}}{\\binom{n}{\\sqrt{n}}}=\\prod_{k=0}^{\\sqrt{n}}\\frac{2n-k}{n-k}. $$\nFor every $k\\leqslant\\sqrt{n}$, $$2\\leqslant\\frac{2n-k}{n-k}\\leqslant\\frac{2n}{n-\\sqrt{n}},$$ hence $$2^{\\sqrt{n}}\\leqslant R_n\\leqslant2^{\\sqrt{n}}\\left(1-1/\\sqrt{n}\\right)^{-\\sqrt{n}}.$$\nThe last factor on the RHS converges to $\\mathrm e$ hence this is enough to show that $$R_n=\\Theta\\left(2^{\\sqrt{n}}\\right).$$ More generally, if $x_n=O(\\sqrt{n})$, then $$\\frac{\\binom{2n}{x_n}}{\\binom{n}{x_n}}=\\Theta\\left(2^{x_n}\\right).$$\n" ], "score": [ 2 ], "ts": [ "2014-10-10 07:30:24Z" ], "author": [ "boaten" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "1984678", "1984691" ], "body": [ "Shouldn't it be $R_n = \\prod_{k=0}^{\\sqrt{n}-1}\\frac{2n-k}{n-k}$?", "@BenFrankel Anyway $\\sqrt{n}$ is not an integer, \"most of the time\", hence one should add integer parts everywhere. I chose to skip these details, which do not change the result." ], "at": [ "2014-10-10 07:44:03Z", "2014-10-10 07:54:53Z" ], "score": [ "", "" ], "author": [ "Ben Frankel", "Did" ], "author_rep": [ "634", "274750" ] } ] }
966275
2014-10-10 04:53:06Z
Jan
31
3
Suppose that $L(\alpha)/L/K$ and that $[K(\alpha):K]$ and $[L:K]$ are relatively prime.
[ "abstract-algebra", "field-theory" ]
Suppose that $L(\alpha)/L/K$ and that $[K(\alpha):K]$ and $[L:K]$ are relatively prime. Show that the minimal polynomial of $\alpha$ over $L$ has its coefficients in $K$. I tried an approach but I got stuck: We have the following field extensions: $L(\alpha)/L$ and $L/K$, and we have the fact that $[L:K]$ and $[K(\alpha):K]$ are relatively prime. From this, it is safe to assume that $[K(\alpha): K] < \infty$, thus $\alpha$ is algebraic over $K$. Also: $[L(\alpha): K] = [L(\alpha): L][L : K]$ and $[L:K] = [L:K(\alpha)][K(\alpha):K]$ (I feel like this isn't true because $[L:K]$ and $[K(\alpha):K]$ are relatively prime.) Now I've tried several things but couldn't progress anywhere. How would I show that the minimal polynomial of $\alpha$ over $L$ has coefficients in $K$?
{ "id": [ "1984470", "1984488" ], "body": [ "you can't use the symbol $ [L:K(\\alpha)]$ unless $\\alpha \\in L$ which is not assumed to be the case.", "Thank you for the clarifying that for me." ], "at": [ "2014-10-10 05:05:06Z", "2014-10-10 05:18:12Z" ], "score": [ "", "" ], "author": [ "David Holden", "Jan" ], "author_rep": [ "17756", "31" ] }
{ "id": [ "966302", "966289" ], "body": [ "\nthe basic numerical law for finite extensions is:\n$$\n[L(\\alpha):K] = [L(\\alpha):K(\\alpha)][K(\\alpha):K] = [L(\\alpha):L][L:K]\n$$\nthus because of the data on relative primality:\n$$\n[K(\\alpha):K]|[L(\\alpha):L]\n$$\nand in particular, therefore:\n$$\n[K(\\alpha):K] \\le [L(\\alpha):L] \\le [L(\\alpha):K]\n$$\nwhich makes the required point, since, because $K \\subset L$ we must have \n$$\n[L(\\alpha):L] \\le [K(\\alpha):K]\n$$\n", "\nAssume $[L:K]$ and $[K(\\alpha):K]$ are finite and relatively prime. The minimal polynomial $p$ of $\\alpha$ over $L$ divides the minimal polynomial $q$ of $\\alpha$ over $K$. The degree of $q$ is $[K(\\alpha):K]$, so the degree of $p$ divides the degree of $q$. Furthermore, $\\deg p=[L(\\alpha):L]$, and since $K(\\alpha)\\subset L(\\alpha),\\deg q$ divides $[L(\\alpha):K]$. Set $n=\\deg q/\\deg p$. Then $n$ divides $[L(\\alpha):K]/[L(\\alpha):L]=[L(\\alpha):K]/\\deg p=[L:K].$ But $n$ also divides $\\deg q$, which is relatively prime to $[L:K]$, so $n=1$. \n" ], "score": [ 2, 0 ], "ts": [ "2014-10-10 05:25:44Z", "2014-10-10 05:21:01Z" ], "author": [ null, null ], "author_rep": [ "1", "1" ], "accepted": [ true, false ], "comments": [ { "id": [ "1985431", "1985488", "1985490" ], "body": [ "I'm lost with this: I don't get how if $K \\subset L$ implies that $[L(\\alpha): L] \\le [K(\\alpha): K]$ or how you get $[K(\\alpha):K] \\le [L(\\alpha):L] \\le [L(\\alpha):K]$ from the 2nd step either.", "(a) since $K \\subset L$ any polynomial with coefficients in $K$ is ipso facto a polynomial with coefficients in $L$.", "(b) if $a | b$ then a fortiori $a \\le b$" ], "at": [ "2014-10-10 15:36:28Z", "2014-10-10 16:14:47Z", "2014-10-10 16:15:25Z" ], "score": [ "", "", "" ], "author": [ "Jan", "David Holden", "David Holden" ], "author_rep": [ "31", "17756", "17756" ] }, { "id": [ "1985376", "1985635", "7021082" ], "body": [ "Hey, I just fully read through your solution. 
I see that if $n = 1$, they are the same degree. But how does that imply that $\\alpha$ over $L$ has coefficient in $K$? The only thing I can work my way around this is if the minimal polynomial of $L$ is the same as the minimal polynomial of $K$.", "Ah, yes-$p/q$ must have degree $0$, thus be some $\\beta\\in L$. Then $\\beta^{-1} p=q$ is also a minimal polynomial of $\\alpha$ over $L$: minimal polynomials are only defined up to a unit. (Alternatively, if your minimal polynomials are monic, then $\\beta$ must be $1$.) So the point is indeed that $\\alpha$ has the same minimal polynomial over $L$ as over $K$.", "You say that $p(x) \\mid q(x)$ $\\implies \\operatorname{deg} p(x) \\mid \\operatorname{deg} q(x)$ but is that actually true?" ], "at": [ "2014-10-10 15:10:41Z", "2014-10-10 17:39:51Z", "2019-10-29 15:23:22Z" ], "score": [ "", "", "" ], "author": [ "Jan", "Kevin Arlin", "eatfood" ], "author_rep": [ "31", "50259", "2264" ] } ] }
966149
2014-10-10 02:31:55Z
yurnero
10.3k
3
A simple dual problem in economics: profit vs. cost
[ "optimization", "economics" ]
The setup is simple but a bit lengthy. Please bear with me. Suppose that I have a production function $F(K,L)$ that is: constant returns to scale; increasing in each factor: $F_K>0$, $F_L>0$ (these are partials); satisfying diminishing returns: $F_{KK}<0$ and $F_{LL}<0$. Define an auxiliary function $f(k)=F(k,1)$. Consider the following 2 problems. (A) Profit maximization: with $W,R>0$ given, $$ \max_{K,L}(F(K,L)-WL-RK) $$ Solving this yields an increasing relationship between $w=\frac{W}{R}$ and $k=\frac{K}{L}$ described by $$ w=\frac{f(k)}{f'(k)}-k. $$ We can invert this relationship to obtain an increasing function $k(w)$. (B) Unit-cost minimization: $$ \min_{a_K,a_L}(Ra_K+Wa_L)\quad\text{s.t.}\quad F(a_K,a_L)=1. $$ Solving this yields the optimal $a_K(w)$ and $a_L(w)$ where as earlier $w=\frac{W}{R}$. My question: my instructor claimed in class that $\frac{a_K(w)}{a_L(w)}=k(w)$. How can I show this? Thank you for your help. I have been struggling with this for the past 2 hours.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "969495" ], "body": [ "\nDue to constant return to scale, the cost minimization problem subject to $F(K,L)=q$ yields the optimal choices $K=qa_K(w)$ and $L=qa_L(w)$. Thus, the profit optimization problem can be written as\n$$\n\\max_q[q-R(qa_K(w))-W(qa_L(w))].\\tag{C}\n$$\nRegardless of which $q$ solves (C), the optimal $K$ and $L$ for (C) (which are the same optimal $K$ and $L$ for (A)) are proportional to $a_K(w)$ and $a_L(w)$ by a common factor (i.e. the optimal $q$). It follows that\n$$\n\\frac{a_K(w)}{a_L(w)}=\\frac{K}{L}=k=k(w).\n$$\n" ], "score": [ 2 ], "ts": [ "2014-10-12 05:21:17Z" ], "author": [ "yurnero" ], "author_rep": [ "10250" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
966075
2014-10-10 01:30:16Z
user137452
773
3
How do I evaluate $\lim_{x \to -1} \frac {x^2+2x+1}{x^2+4}$?
[ "algebra-precalculus" ]
I have determined so far that this is equal to $$\lim_{x \to -1} \frac {(x+1)(x+1)}{(x+\sqrt [4] {1})(x-\sqrt [4]{1})(x^2+\sqrt{1})}.$$ However, my numerator becomes $0$ if I substitute the limit. What am I doing wrong?
{ "id": [ "1984210", "1984216", "1984217", "1984240" ], "body": [ "Why does it matter if the numerator becomes $0$? Currently your denominator becomes $0$ as well, but you can cancel with the numerator to fix this.", "The expression in the body of the post isn't equivalent to the one in the title.", "A $0$ in the numerator is never a problem. It's the $0$ in the denominator that causes issues.", "Thus, the only time that I need to factor the numerator is when the denominator equals 0, other than that, if the denominator equals a number > 0 I can calculate the given quotient, as long as the denominator is not 0." ], "at": [ "2014-10-10 01:36:40Z", "2014-10-10 01:37:49Z", "2014-10-10 01:38:30Z", "2014-10-10 01:56:40Z" ], "score": [ "", "2", "", "" ], "author": [ "user71641", "Travis Willse", "J126", "user137452" ], "author_rep": [ null, "87566", "17133", "773" ] }
{ "id": [ "966083", "966168" ], "body": [ "\nThe expression whose limit you're taking is a function continuous at $x = -1$, so the limit is just given by evaluating, which you've essentially already done:\n$$\\lim_{x \\to -1} \\frac{x^2 + 2x + 1}{x^2 + 4} = \\frac{(-1)^2 + 2(-1) + 1}{(-1)^2 + 4} = 0.$$\n", "\nThe limit of this function is zero because when you substitute -1 in the function the denominator does not come to be zero.When ever the denominator comes to be zero then only we need to simplify the function,or else not. \n" ], "score": [ 1, 1 ], "ts": [ "2014-10-10 01:36:52Z", "2014-10-10 02:52:15Z" ], "author": [ null, null ], "author_rep": [ "87566", "87566" ], "accepted": [ false, false ], "comments": [ { "id": [ "1984222", "1984225", "1984229", "1984241" ], "body": [ "I guess the better question is, when is 0 an acceptable answer. for example, lim -->5 $\\frac {x^2-6x+5}{x-5}\\ $ one factors the numerator to exclude the denominator. Do you understand what I am seeking to understand?", "Yes, but if this is your real question, you should ask that instead or, at this point, perhaps as a new question. If you post it I'm sure you'll get useful replies.", "$0$ is always an acceptable answer if you achieve it by working legitimately. Is there any particular reason you think $0$ is bad/different from other numbers ?", "Thus, the only time that I need to factor the numerator is when the denominator equals 0, other than that, if the denominator equals a number > 0 I can calculate the given quotient, as long as the denominator is not 0." ], "at": [ "2014-10-10 01:41:01Z", "2014-10-10 01:43:29Z", "2014-10-10 01:45:14Z", "2014-10-10 01:57:03Z" ], "score": [ "", "", "", "" ], "author": [ "user137452", "Travis Willse", "AgentS", "user137452" ], "author_rep": [ "773", "87566", "12005", "773" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
965663
2014-10-09 19:33:14Z
the_candyman
13.6k
3
Are the Taylor polynomials of a function the results of a minimization problem?
[ "optimization", "taylor-expansion", "approximation" ]
Here is an example to better explain my question. Consider the function $f(x) = \cos(x)$. I want to approximate it on the set $[-\pi; \pi]$ using a polynomial $g(x) = a + bx + cx^2$ of order $2$. First approach - Taylor polynomials $$f(x) \simeq g(x) = 1 - \frac{x^2}{2} $$ Second approach - Minimization of Euclidean Norm I look for $g(x)$ such that: $$[a, b, c] = \arg \min_{a,b,c} \Phi(a,b,c),$$ where $$\Phi(a,b,c) = \int_{-\pi}^{+\pi}[f(x)-g(x)]^2\text{d}x$$ I have to evaluate the first derivatives with respect to $a$, $b$ and $c$ and impose that they are $0$: $$\frac{\partial \Phi}{\partial a} = \int_{-\pi}^{+\pi}\frac{\partial }{\partial a}[f(x)-g(x)]^2\text{d}x = -2\int_{-\pi}^{+\pi}[f(x)-g(x)]\text{d}x = 0$$ $$\frac{\partial \Phi}{\partial b} = \int_{-\pi}^{+\pi}\frac{\partial }{\partial b}[f(x)-g(x)]^2\text{d}x = -2\int_{-\pi}^{+\pi}x[f(x)-g(x)]\text{d}x = 0$$ $$\frac{\partial \Phi}{\partial c} = \int_{-\pi}^{+\pi}\frac{\partial }{\partial c}[f(x)-g(x)]^2\text{d}x = -2\int_{-\pi}^{+\pi}x^2[f(x)-g(x)]\text{d}x = 0$$ At the end, I get: $$g(x) = \frac{15}{2\pi^2} - \frac{45x^2}{2\pi^4}$$ Other approaches - One can use a different norm. The question - Are the Taylor polynomials of a function the results of a minimization problem?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "965685" ], "body": [ "\nI'm not sure how satisfying this answer will be, but here's something:\nThe $n$th degree Taylor (MacLaurin) polynomial $g(x)$ of a function $f(x)$ is the unique degree $n$ polynomial such that\n$$\n\\lim_{x \\to 0} \\left(\\frac{f(x) - g(x)}{x^n}\\right)^2 = 0\n$$\nIs there some corresponding quantity that has been minimized? I'm not sure\n" ], "score": [ 2 ], "ts": [ "2014-10-09 19:47:18Z" ], "author": [ "the_candyman" ], "author_rep": [ "13626" ], "accepted": [ true ], "comments": [ { "id": [ "1983512", "1983803", "4078343" ], "body": [ "Since the square is always non-negative, $0$ is the minimum (or infimum), doesn't it?", "Ah, yes, it does! Well there you go", "The answer to a related question (math.stackexchange.com/questions/1978990/…) implies that this is a very non-smooth optimization problem; I think the quantity above, viewed as a function of the coefficients of $g$, is infinity almost everywhere, and is $0$ precisely at the solution. I think it's continuous nowhere, so optimization as I know it doesn't apply at all, i.e., differentiating w.r.t. the polynomial coefficients and setting equal to zero gets you nowhere because those derivatives are undefined everywhere." ], "at": [ "2014-10-09 19:48:54Z", "2014-10-09 22:00:10Z", "2016-10-26 20:14:11Z" ], "score": [ "", "", "" ], "author": [ "the_candyman", "Ben Grossmann", "rajb245" ], "author_rep": [ "13626", "214332", "4635" ] } ] }
965641
2014-10-09 19:20:31Z
George Apriashvili
1013
3
Inequality proof of integers
[ "algebra-precalculus" ]
My question is from Apostol's Vol. 1 One-variable calculus with introduction to linear algebra textbook. Page 36. Exercise 7. Let $n_1$ be the smallest positive integer $n$ for which the inequality $(1+x)^n>1+nx+nx^2$ is true for all $x>0$. Compute $n_1$, and prove that the inequality is true for all integers $n\ge n_1$. The attempt at a solution: I solved the first question asked, which was to find the value of $n_1$; it is equal to $3$. For the second part, I am assuming that I have to prove the inequality by induction, since the chapter is about induction. Here's my attempt: $$(1+x)^{n+1}=(1+x)^n(1+x)>(1+nx+nx^2)(1+x)=nx^2(x+2)+(n+1)x+1$$ which gets me nowhere. What am I doing wrong?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "965659", "965660" ], "body": [ "\n$$1+(n+1)x+(n+1)x^2=(1+nx+nx^2)+(x+x^2)<(1+x)^n+x(1+x)<$$\n(inductive hypothesis for first inequality)\n$$<(1+x)^n+x(1+x)^n=(1+x)^{n+1}$$\n($x>0$ for second inequality)\n", "\nWith $n,x>0$,\n$$\nnx^2(x+2)-(1+n)x^2=nx^2(x+1)-x^2=x^2(n(x+1)-1)>0\\\\\n\\implies nx^2(x+2)>(1+n)x^2.\n$$\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-09 19:31:52Z", "2014-10-09 19:31:52Z" ], "author": [ "George Apriashvili", "George Apriashvili" ], "author_rep": [ "1013", "1013" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1983523", "1983575" ], "body": [ "How are your ankles?", "Fine, thanks. Lil' sis is taking care of things. :)" ], "at": [ "2014-10-09 19:54:05Z", "2014-10-09 20:17:53Z" ], "score": [ "", "" ], "author": [ "vadim123", "Kim Jong Un" ], "author_rep": [ "81944", "14546" ] } ] }
964974
2014-10-09 08:50:14Z
user182048
null
3
If $f(n) = O(g(n))$ and $f(n) \not\in o(g(n))$, does $f(n) = \Theta(g(n))$?
[ "algorithms", "asymptotics" ]
If $f(n) = O(g(n))$ and $f(n) \not\in o(g(n))$, does $f(n) = \Theta(g(n))$? From the assumptions, $g(n)$ seems to be an asymptotic tight upper bound for $f(n)$, but do they make $g(n)$ an asymptotic tight lower bound for $f(n)$ as well?
{ "id": [ "1982231", "1982248" ], "body": [ "No of cource. just follow definitions", "Please read this to learn how to properly format your questions." ], "at": [ "2014-10-09 08:59:40Z", "2014-10-09 09:07:49Z" ], "score": [ "1", "" ], "author": [ "Leox", "Najib Idrissi" ], "author_rep": [ "7674", "52368" ] }
{ "id": [ "964985" ], "body": [ "\n$f(n) = O(g(n))$ means that $|f(n)| \\leq C |g(n)|$, but $f(n) \\not\\in o(g(n))$ means that you can't make the $C$ as small as you want. If $f(n) = \\Theta(g(n))$, it would mean that $|g(n)| \\leq D |f(n)|$ for some other bound. What happens if you try $g(n) = 1$ and $f(n) = 0$ if $n$ is even, $f(n) = 1$ if $n$ is odd?\n" ], "score": [ 2 ], "ts": [ "2014-10-09 09:06:34Z" ], "author": [ "" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "5294070", "5294158" ], "body": [ "@yi416 Where intuition come from but this might be the case? This example might look a little weird but conflicting examples like this is extremely straightforward simply from looking at the definition of the relevant terms. If this result surprises you should probably spend a lot more time practicing with big-o and little-o", "I’m sorry if that comes across as rude or condescending. I’m seriously asking what your intuition is about this because I don’t really see any reason to not find this result intuitive." ], "at": [ "2017-12-13 16:09:30Z", "2017-12-13 16:38:16Z" ], "score": [ "", "" ], "author": [ "Stella Biderman", "Stella Biderman" ], "author_rep": [ "30617", "30617" ] } ] }
964616
2014-10-09 01:51:01Z
Elle Najt
20.1k
3
Over an algebraically closed field, is it possible to factor a symmetric invertible matrix $A$ as $X^T X$?
[ "linear-algebra" ]
I know that this result is true when the ground field is $\mathbb{C}$. (Though I don't remember why.) Does it also hold for algebraically closed fields? Can someone give me a hint as to why this is true?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "964639" ], "body": [ "\nYour factorization is an immediate consequence of Takagi's factorization, which itself is a corollary of the following theorem:\nFrom Horn and Johnson: (p. 203, first edition)\n\nLet $A \\in M_n$ be given. There exists a unitary $U \\in M_n$ and an upper triangular $\\Delta$ such that $A = U\\Delta U^T$ if and only if all the eigenvalues of $A \\overline A$ are real and non-negative. Under this condition, the main diagonal entries of $\\Delta$ may be chosen to be non-negative.\n\nIt seems that the answer lies behind this theorem. The question is fundamentally whether it can be extended to handle $A \\in M_n(\\Bbb F)$ for arbitrary (algebraically closed fields) $\\Bbb F$.\n" ], "score": [ 2 ], "ts": [ "2014-10-09 02:41:23Z" ], "author": [ "Elle Najt" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "1981647" ], "body": [ "It looks like this might work; Schur decomposition does seem to extend nicely, along with the notion of an inner product." ], "at": [ "2014-10-09 02:36:24Z" ], "score": [ "" ], "author": [ "Ben Grossmann" ], "author_rep": [ "214332" ] } ] }
964082
2014-10-08 18:20:18Z
mathjacks
3564
3
Derive branch cuts for $\log(\sqrt{1-z^2} + iz)$ as $(-\infty,-1)$ and $(1,\infty)$?
[ "complex-analysis", "branch-cuts" ]
Attempt: First, we examine $\sqrt{1-z^2}$. Note that it can be written $\sqrt{1-z}\sqrt{1+z}$, so the appropriate branch cuts are $(-\infty,-1)$ and $(1,\infty)$ for the inner square root term. Next, we look at $\log(w)$ and note that we can define the cut for $\log(w)$ as $(-\infty,0)$. But now what? I tried setting $w= \sqrt{1-z^2} + iz$, solving for the branch point where $w=0$, but this results in $1=-z^2+z^2=0$, so I think this is the wrong approach. What is the correct way to understand this?
{ "id": [ "1980653", "1980654", "1980660" ], "body": [ "I did this to show that the branch cuts for the square root term are $(-\\infty,-1)$ and $(1,\\infty)$.", "You don't need a branch-cut for the logarithm. Only the one for the square root. $\\mathbb{C}\\setminus \\{t\\in\\mathbb{R} : \\lvert t\\rvert \\geqslant 1\\}$ is simply connected, and $\\sqrt{1-z^2}+iz$ is never $0$ there.", "Thanks, Daniel for your help with this question and my previous one. What is the reasoning for not needing a branch cut for the logarithm? (also: can you recommend a book that covers these concepts in more detail at an introductory level?)" ], "at": [ "2014-10-08 18:26:28Z", "2014-10-08 18:26:38Z", "2014-10-08 18:28:51Z" ], "score": [ "", "1", "" ], "author": [ "mathjacks", "Daniel Fischer", "mathjacks" ], "author_rep": [ "3564", "202399", "3564" ] }
{ "id": [ "964151", "964158" ], "body": [ "\nIt may be useful to consider $w=\\sqrt{1-z^2}+iz$. Isolating the square root and squaring, you may note that the $z^2$ terms cancel, and you end up with\n$$z=\\frac1{2i}\\Bigl(w-\\frac1w\\Bigr),$$\nand then subsituting this into the equation $\\sqrt{1-z^2}=w-iz$ you also find\n$$\\sqrt{1-z^2}=\\frac12\\Bigl(w+\\frac1w\\Bigr).$$\nThus both $z$ and $\\sqrt{1-z^2}$ are single-valued functions of $w$, so it should be easier to analyze the given function as $\\log w$.\nThere are two values of $w$ for each value of $z$: Replacing $w$ by $-1/w$ leaves $z$ unchanged, and flips the sign of $\\sqrt{1-z^2}$.\nNote that imaginary $w$ gives real $z$: Your proposed branch cuts in the $z$ plane go along the imaginary axis in the $w$ plane, from $\\pm i$ to infinity in opposite directions, but also (if you pick the other branch of the square root) from $\\pm i$ to $0$. Thus your branch cuts divides the $w$ plane in two halves along the imaginary axis, and you end up with not having to pick further branch cuts for the logarithm.\n(In my first edition of this answer I got a little confused because I was thinking of getting the full Riemann surface for the given function. I hope I managed to fix this before confusing anybody else too much. The analysis I give here is perhaps better for understanding the Riemann surface; it could well be overkill for the branch cut question. 
Oh well …)\n", "\nWe don't need a branch-cut for the logarithm here.\nGenerally, if $U$ is a simply connected domain and $f\\colon U\\to \\mathbb{C}$ holomorphic without zeros, then $f$ has a holomorphic logarithm on $U$, that is, there exists a holomorphic $g\\colon U\\to \\mathbb{C}$ with $e^{g(z)} = f(z)$ for all $z\\in U$ (of course, $f$ has infinitely many logarithms on $U$, any two differing by an integer multiple of $2\\pi i$).\nSuch a $g$ is conventionally denoted by $g = \\log f$, without (necessarily) meaning that $g$ is globally the composition of a branch of the logarithm with $f$.\nObviously, $g$ is locally, in a (small enough) neighbourhood $V$ of each $z\\in U$, the composition of a branch of the logarithm on $f(V)$ with $f$, but, if $f$ is not injective, $f(z_1)=f(z_2)$ for some $z_1\\neq z_2$, then different branches of the logarithm can be used on $f(V_1)$ and $f(V_2)$, where $V_1$ is a small neighbourhood of $z_1$ and $V_2$ one of $z_2$.\nHere, however, we can write $\\log (\\sqrt{1-z^2} + iz)$ as the composition $\\log \\circ f$ of a branch of the logarithm with $f(z) = \\sqrt{1-z^2}+iz$, since $f$ maps $U := \\mathbb{C}\\setminus \\{t\\in\\mathbb{R}:\\lvert t\\rvert\\geqslant 1\\}$ to a domain where a branch of the logarithm exists.\nThe - in my opinion - easiest way to see that is to follow the mapping of $\\sin$. Starting from the strip $S = \\left\\{z\\in\\mathbb{C} : \\lvert \\operatorname{Re} z\\rvert < \\frac{\\pi}{2}\\right\\}$, from the familiar behaviour of the exponential function, we see that $z\\mapsto e^{iz}$ maps the strip biholomorphically to the right half-plane. Now the map $h\\colon w \\mapsto \\frac{1}{2i}\\left(w-\\frac{1}{w}\\right)$ is a rational function of order $2$, hence attains each value in the sphere $\\widehat{\\mathbb{C}}$ exactly twice (counting multiplicity) in $\\widehat{\\mathbb{C}}$. 
It is easily seen that $h\\left(-\\frac{1}{w}\\right) = h(w)$, so it follows that $h$ is injective on the right half-plane, and\n$$h\\left(\\{z : \\operatorname{Re} z > 0\\}\\right) = \\widehat{\\mathbb{C}} \\setminus h\\left(i\\mathbb{R}\\cup \\{\\infty\\}\\right) = \\mathbb{C}\\setminus \\{t\\in\\mathbb{R} : \\lvert t\\rvert \\geqslant 1\\}.$$\nSo altogether, $\\sin$ maps $S$ biholomorphically to $U$. Then you just need to check that $h$ and $f(z) = \\sqrt{1-z^2} +iz$ are inverses of each other to see that $f$ maps $U$ to the right half-plane.\n" ], "score": [ 1, 1 ], "ts": [ "2014-10-08 19:16:16Z", "2014-10-08 19:13:18Z" ], "author": [ "mathjacks", "mathjacks" ], "author_rep": [ "3564", "3564" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "7480885", "7525261", "7841393" ], "body": [ "Just curious … How would one evaluate the integral $\\oint_C \\log(iz+\\sqrt{1-z^2})\\,dz$ where $C$ is any rectifiable curve that encloses (one time) the branch points at $z=−1$ and $z=−1$?", "Hi Daniel. I hope that you are staying safe and healthy. Just curious … How would one evaluate the integral $\\oint_{C}\\log\\left(iz+\\sqrt{1-z^2}\\right)\\,dz$ where $C$ is any rectifiable curve that encloses (one time) the branch points at $z=-1$ and $z=1$? It seems that $\\arg\\left(z+\\sqrt{z^2-1}\\right)$ and $\\arg(z)$ lie in the same quadrant when we cut the plane from $-1$ to $\\infty$ and from $1$ to $\\infty$. Then, on $|z|=r>1$, as $\\arg(z)$ goes from $0$ to $2\\pi$ the argument of $z+\\sqrt{z^2-1}$ does likewise. This would imply that there is a branch point of the logarithm ($z=0$).", "@MarkViola I think you should post that as a separate question." ], "at": [ "2020-04-23 16:58:35Z", "2020-05-06 15:41:17Z", "2020-08-26 23:09:25Z" ], "score": [ "", "", "" ], "author": [ "Mark Viola", "Mark Viola", "BIRA" ], "author_rep": [ "172978", "172978", "209" ] } ] }
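The mapping claims in this thread can be probed numerically with the principal branch of the square root (which is holomorphic off the cuts, since $1-z^2$ lands on $(-\infty,0]$ only for real $z$ with $|z|\geq 1$); a Python sketch (the sample region and tolerances are my own choices):

```python
import cmath
import random

# f(z) = sqrt(1 - z^2) + i*z should map C minus the cuts (-inf,-1] and
# [1,inf) into the right half-plane, and h(w) = (w - 1/w)/(2i) should invert
# it: indeed 1/f(z) = sqrt(1 - z^2) - i*z, so h(f(z)) = z identically.
def f(z):
    return cmath.sqrt(1 - z * z) + 1j * z

def h(w):
    return (w - 1 / w) / 2j

random.seed(0)
samples = []
while len(samples) < 500:
    z = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    if abs(z.imag) > 1e-6 or abs(z.real) < 1:   # stay off the branch cuts
        samples.append(z)

in_right_half_plane = all(f(z).real > 0 for z in samples)
h_inverts_f = all(abs(h(f(z)) - z) < 1e-9 for z in samples)
```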
964016
2014-10-08 17:23:40Z
Felix Y.
663
3
Reference request for Homology Gysin sequence.
[ "reference-request", "algebraic-topology", "homological-algebra", "homology-cohomology", "exact-sequence" ]
I am trying to study the Homology Gysin sequence (not cohomology). I am interested in finding references that either use, or explain the Homology Gysin sequence, especially if it gives descriptions for the maps in the sequence.
{ "id": [ "4763830" ], "body": [ "does it exists?" ], "at": [ "2017-06-08 16:30:23Z" ], "score": [ "" ], "author": [ "Luigi M" ], "author_rep": [ "3807" ] }
{ "id": [ "2317341" ], "body": [ "\nLet $\\xi\\colon E\\to B$ be an orientable $n$-bundle. You can always consider the l.e.s. for the pair $(E,E_0)$ (where $E_0:= E\\setminus B$ via the zero section). In order to avoid pathologies, assume $B$ being a CW-complex. \nNotice that $H_k(E,E_0;\\Bbb Z)\\cong H_{k-n}(E)\\cong H_{k-n}(B)$ via the homological Thom Iso plus the fact $\\xi_*$ is an isomorphism in homology. If you use these identification you will end up with your Gysin sequence where the only non-obvious map will be cap-product with the Euler class.\n" ], "score": [ 2 ], "ts": [ "2017-06-10 14:47:18Z" ], "author": [ "Felix Y." ], "author_rep": [ "663" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
963871
2014-10-08 15:11:11Z
Dongryul Kim
896
3
A theorem of Szemeredi in Erdos's paper
[ "number-theory", "reference-request" ]
In his paper "A survey of problems in combinatorial number theory", on page 110, Erdos writes: Graham conjectured: Let $1 \le a_1 < a_2 < \cdots < a_n$ be $n$ integers. Then $$ \max_{i,j} a_j/(a_i, a_j) \ge n. $$ Szemeredi proved this recently. The proof is not yet published. I tried to find the proof among Szemeredi's publications, but failed. Can anyone provide me with any reference (or maybe the proof itself) to this theorem?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "972940" ], "body": [ "\nYou might be interested in this paper : \"On a conjecture of R. L. Graham\" by R. Balasubramanian\nand K. Soundararajan. They prove Graham's conjecture with the additional condition $(a_1,a_2,\\cdots,a_n)=1$ a condition which can be obtained without the loss of generality. (So yes the link is indeed the proof of the conjecture.)\nIn the introduction, the authors say that Szemeredi gave a proof for $n=p$. So it may be the case in which there was some communication error(?) between Szemeredi and Erdos.\nCheers! :-)\n" ], "score": [ 2 ], "ts": [ "2014-10-14 07:14:02Z" ], "author": [ null ], "author_rep": [ "274750" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
963345
2014-10-08 05:53:54Z
Caleb Makela
33
3
Standard Deviation of a Magic Square
[ "standard-deviation", "magic-square" ]
I'm not sure if this is the right stackexchange to post this question to, but I was just wondering if someone had the answer to an interesting observation I've made. I've written a program that generates a 6th order magic square. Then it finds the standard deviation of each of the columns. Here are a few screenshots of generated squares and the deviations (first, second, and third runs; screenshots not shown here). Now, what I was wondering is why are the highest deviations always at column 1 or 2 and the lowest at 5 or 6? And why are both never found at 3 or 4? It comes out like this no matter how many times I run the program. I thought my standard deviation math was wrong at first, but I checked it by hand and it all checks out. Does this have something to do with magic squares in general or is it just a coincidence? EDIT: @taninamdar I generate the magic square by randomly picking one of the 8 possible 3x3 magic squares. Then I expand each number of it to be a 2x2 section, making the base 6x6 square. Then I create another 6x6 magic square consisting of 9 Medjig squares. Once that is in order with every row and column adding up to 9, I loop through each square of the grid using the equation grid[x, y] = grid[x, y] + 9 * medjig_grid[x, y] which generates the final magic square you see in the screenshots.
{ "id": [ "1979283" ], "body": [ "It might have something to do with how you're generating magic squares as well." ], "at": [ "2014-10-08 05:56:16Z" ], "score": [ "1" ], "author": [ "taninamdar" ], "author_rep": [ "2568" ] }
{ "id": [ "963363" ], "body": [ "\nYour magic squares are generated in a way it is ignored that the diagonals must have the same sum as the rows and columns.\nFor this \"subclass\" of magic squares we can generate a different magic square by exchanging columns and/or rows.\nExchanging the columns 1/2 or 5/6 with 3/4 disproves your assumption.\nEither your observation was by chance, or @taninamdar is right and it is in the way you generate the squares.\n" ], "score": [ 2 ], "ts": [ "2014-10-08 06:18:47Z" ], "author": [ "Caleb Makela" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [ "1979337" ], "body": [ "Oh my goodness, it is always the small things. I completely forgot to add a check for the diagonals. Thank you and thank you @taninamdar." ], "at": [ "2014-10-08 06:24:47Z" ], "score": [ "" ], "author": [ "Caleb Makela" ], "author_rep": [ "33" ] } ] }
963032
2014-10-08 00:51:03Z
guest
33
3
How to solve this linear first order differential equation?
[ "calculus", "ordinary-differential-equations" ]
$$\frac{1}{N}\frac{dN}{dt} + 1 = te^{t+2}$$ The equation is separable and so is easily solvable. However doing so gives me the following: $$\int \frac{1}{N}dN = \int(te^{t+2} - 1)dt$$ Simplifying gives: $$|N| = e^{-t + te^{t+2} - e^{t+2} + c}$$ How do I proceed from here?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "963156" ], "body": [ "\nHere are the steps \n$$ \\frac{1}{N}\\frac{d}{dt}N+1= te^{t+2}$$\n$$ \\frac{1}{N}\\frac{d}{dt}N= te^{t+2}-1$$\n$$ \\frac{1}{N}dN= te^{t+2}-1\\ dt $$\n$$ \\int \\frac{1}{N}dN= \\int te^{t+2}-1\\ dt $$\n$$ \\ln|N|+C_1= e^2\\int te^t\\ dt-\\int dt $$\n$$ \\ln|N|+C_1= e^2 e^t(t-1)+C_2-t+C_3 $$\n$$ \\ln|N|= e^{t+2}(t-1)-t+C $$\n$$ e^{\\ln|N|}= e^{e^{t+2}(t-1)-t+C} $$\n$$ N= e^{e^{t+2}(t-1)-t+C} $$\n" ], "score": [ 2 ], "ts": [ "2014-10-08 02:46:56Z" ], "author": [ "guest" ], "author_rep": [ "33" ], "accepted": [ true ], "comments": [ { "id": [ "1979010", "1979022" ], "body": [ "Doesn't $e^{\\ln |N|} = |N|$ instead of $N$?", "@guest, have a look at this question and you'll understand." ], "at": [ "2014-10-08 03:00:10Z", "2014-10-08 03:08:36Z" ], "score": [ "", "1" ], "author": [ "guest", "k170" ], "author_rep": [ "33", "8727" ] } ] }
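Not part of the original thread: the closed form in the accepted answer, $N(t)=e^{e^{t+2}(t-1)-t+C}$, can be sanity-checked numerically with a central difference (the test points and step size below are arbitrary choices):

```python
import math

def N(t, C=0.0):
    # Candidate solution from the answer: N = exp(e^{t+2}(t-1) - t + C)
    return math.exp(math.exp(t + 2) * (t - 1) - t + C)

def residual(t, h=1e-6):
    # Residual of the ODE (1/N) dN/dt + 1 - t*e^{t+2}, with dN/dt estimated
    # by a central difference; it should be close to 0 if N solves the ODE.
    dN = (N(t + h) - N(t - h)) / (2 * h)
    return dN / N(t) + 1 - t * math.exp(t + 2)

print(residual(0.5), residual(1.0))  # both should be close to 0
```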
962843
2014-10-07 21:52:11Z
Peter
80.6k
3
Planar graphs and connectivity
[ "graph-theory", "connectedness", "planar-graphs" ]
How many edges must a planar graph with $n$ nodes have to guarantee that it is a) connected b) biconnected c) triconnected? In particular, are all planar graphs with $n$ nodes and $3n-6$ edges ($n\ge 4$) triconnected? I tried to use Euler's formula, but it only holds for connected planar graphs. And in the case that the graph is connected but not biconnected, the faces need not be bounded by a cycle. How can I deal with this case? Since a planar graph with $n$ nodes and $3n-9$ edges ($n\ge 4$) need not be connected (see comment below), at least $3n-8$ edges are required.
{ "id": [ "1979682" ], "body": [ "A planar graph with $n$ nodes and $3n-9$ edges ($n\\ge 4$) need not be connected : Add an isolated vertex to a maximal planar graph with $n-1$ nodes. So, no number of edges can make sure that the graph is $4$-connected." ], "at": [ "2014-10-08 10:35:27Z" ], "score": [ "" ], "author": [ "Peter" ], "author_rep": [ "80641" ] }
{ "id": [ "962941" ], "body": [ "\nHint for the connected case: if $f(k)$ is the largest possible number of edges for a planar graph with $k$ nodes, then for any positive integers $j$ and $k$ there is a disconnected planar graph with $j+k$ nodes and $f(j)+f(k)$ edges.\nSo to conclude a graph with $n$ nodes and $e$ edges is connected, we need \n$e > \\max \\{f(j) + f(n-j) \\mid j = 1 \\ldots n-1\\}$.\n" ], "score": [ 2 ], "ts": [ "2014-10-07 23:31:43Z" ], "author": [ "Peter" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
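Not part of the original post: a small script evaluating the threshold from the hint, $e > \max_j f(j)+f(n-j)$, assuming the standard bound that a planar graph on $k\ge 3$ nodes has at most $3k-6$ edges. It reproduces the $3n-8$ figure stated in the question:

```python
def f(k):
    """Maximum number of edges of a planar graph on k nodes."""
    if k == 1:
        return 0
    if k == 2:
        return 1
    return 3 * k - 6

def connectivity_threshold(n):
    # Smallest edge count e not ruled out by the disconnected construction
    # in the hint (a disjoint union of two maximal planar graphs).
    return max(f(j) + f(n - j) for j in range(1, n)) + 1

for n in (4, 5, 10, 20):
    print(n, connectivity_threshold(n), 3 * n - 8)  # last two columns agree
```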
962657
2014-10-07 19:06:49Z
Alex
31
3
Show $\int_0^{\infty} \frac{e^{-x}-e^{-xw}}{x} dx = \ln{w}$ for $\operatorname{Re}({w})>0$
[ "complex-analysis", "complex-integration" ]
I want to show that for $\operatorname{Re}({w})>0$, $$\int_0^{\infty} \frac{e^{-x}-e^{-xw}}{x} dx = \ln{w}.$$ I've tried setting the problem up as: $$\int_\gamma \frac{e^{-z}}{z} dz = 0,$$ where $\gamma$ is the path around the quadrilateral with vertices $a,b,bw,aw$ for some $a,b \in \mathbb{R}$ where $0<a<b<\infty$, but I'm not sure if I am parametrizing the paths between these points correctly.
{ "id": [ "1978085", "1978446", "1978460" ], "body": [ "There's some way to do this with differentiation under the integral sign. Also: do you know Frullani's integral?", "I didn't know it, but I see how it could be useful here. Thanks for sharing.", "Yeah, for real $w$ anyway, you just have to dominate the partial derivative of the integrand w.r.t. $w$ uniformly by some integrable $g(x)$. Since said derivative is $e^{-xw}$ with $w,x>0$, not too hard. For complex $w$ (real part $>0$), I'm not sure if those arguments carry over." ], "at": [ "2014-10-07 19:15:40Z", "2014-10-07 22:09:02Z", "2014-10-07 22:15:15Z" ], "score": [ "", "", "" ], "author": [ "Akiva Weinberger", "Alex", "BaronVT" ], "author_rep": [ "21110", "31", "13403" ] }
{ "id": [ "962669" ], "body": [ "\nConsider the integral\n\\begin{align}\n\\int_{1}^{w} e^{-x u} \\, du = \\left[ -\\frac{1}{x} \\, e^{-x u} \\right]_{1}^{w} = \\frac{e^{-x} - e^{-w x}}{x}.\n\\end{align}\nNow,\n\\begin{align}\nI &= \\int_{0}^{\\infty} \\frac{e^{-x} - e^{-w x}}{x} \\, dx = \\int_{0}^{\\infty} \\, \\int_{1}^{w} e^{-x u} \\, du \\, dx \\\\\n&= \\int_{1}^{w} \\left[ \\int_{0}^{\\infty} e^{-x u} \\, dx \\right] \\, du = \\int_{1}^{w} \\frac{du}{u} = [ \\ln(u) ]_{1}^{w} \\\\\n&= \\ln(w). \n\\end{align}\nHence \n\\begin{align}\n\\int_{0}^{\\infty} \\frac{e^{-x} - e^{-w x}}{x} \\, dx = \\ln(w).\n\\end{align}\n" ], "score": [ 2 ], "ts": [ "2014-10-07 19:17:34Z" ], "author": [ "Alex" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
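Not part of the original answer: a quick numerical check of the identity for real values of $w$. The integration limits and grid size are ad-hoc; the tiny lower limit stands in for $0$, where the integrand tends to the finite value $w-1$:

```python
import math

def frullani_lhs(w, a=1e-9, b=60.0, n=200_000):
    """Trapezoidal estimate of the integral of (e^{-x} - e^{-w x})/x over (0, inf)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        fx = (math.exp(-x) - math.exp(-w * x)) / x
        total += fx if 0 < i < n else fx / 2
    return total * h

print(frullani_lhs(3.0), math.log(3.0))  # the two values should agree to ~4 decimals
```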
962622
2014-10-07 18:31:35Z
user115608
3255
3
A problem about a finite extension field
[ "abstract-algebra", "extension-field" ]
This is the problem: Suppose that $K$ is an infinite field and $E|K$ is an extension with degree $n>1$. Prove that the quotient group $E^*/K^*$ is infinite. I assume that the quotient group $E^*/K^*$ is finite, say its elements are $b_1K^*,\dots,b_mK^*$. We can also assume $E=K(a_1,a_2,\dots,a_n)$. From linear algebra we know that $E$, considered as a vector space over $K$, can't be written as a finite union of its proper subspaces. I want to reach a contradiction... Any hint is welcome.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "962646", "962652" ], "body": [ "\nIf $A$ is a set of coset representatives of $K^\\ast$, then $E^\\ast = \\bigcup_{a\\in A} aK^\\ast$. Furthermore, $aK^\\ast \\cup \\{0\\}$ is the span of $a$ if we view $E$ as a vector space over $K$. Thus, we have that\n$$\n\\bigcup_{a\\in A} \\operatorname{span}\\{a\\} = \\bigcup_{a \\in A} (aK^\\ast \\cup \\{0\\}) = E^\\ast \\cup \\{0\\} = E.\n$$\nBecause $E \\neq K$, $\\operatorname{span}\\{a\\}$ must be a proper subspace of $E$. Because a vector space over an infinite field is not a finite union of proper subspaces, $A$ cannot be finite. Thus there is no finite set of coset representatives of $K^\\ast$, so $E^\\ast/K^\\ast$ has infinite order.\n", "\nHint. Let $\\alpha \\in E-K$. Consider the elements $x + \\alpha$, for $x \\in K$\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-08 14:11:45Z", "2014-10-07 19:02:45Z" ], "author": [ null, null ], "author_rep": [ "1", "1" ], "accepted": [ true, false ], "comments": [ { "id": [ "1978096" ], "body": [ "Very well said, Bruce!" ], "at": [ "2014-10-07 19:23:09Z" ], "score": [ "" ], "author": [ "Georges Elencwajg" ], "author_rep": [ "145629" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
962450
2014-10-07 16:17:49Z
Big B
33
3
Approximation of a ratio
[ "taylor-expansion" ]
Is this approximation true? If so, why? $$\frac{1+x}{1+y}\approx 1+x-y$$ I think it has something to do with $x$ and $y$ being close to zero, so that the ratio of the two is approximately equal to zero, and therefore cancels out. Thanks in advance!
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "962467", "962473" ], "body": [ "\nYou can check that $$(1+x-y)(1+y)=1+x-y+y+xy-y^2=1+x+xy-y^2\\approx1+x.$$\nThe second order terms ($xy$ and $y^2$) are negligible for small $y$.\n$x$ needn't be small.\n", "\n$$f(x,y) = \\frac{1+x}{1+y}$$\nLet's try to derive the 1st order Taylor expansion around $0$ of $f$;\n$$f(x,y) \\approx f(0,0) + f_x(0,0)x + f_y(0,0) y = 1 + x - y$$\nwhere\n$$f_x(x,y) = \\frac{\\partial f}{\\partial x} = \\frac{1}{1+y} \\Rightarrow f_x(0,0) = 1$$\nand\n$$f_y(x,y) = \\frac{\\partial f}{\\partial y} = -\\frac{1+x}{(1+y)^2} \\Rightarrow f_y(0,0) = -1$$\nSo, for $x$ and $y$ sufficiently small, this approximation is good.\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-07 16:42:14Z", "2014-10-07 16:29:21Z" ], "author": [ null, null ], "author_rep": [ "13626", "13626" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
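Not part of the original answers: the exact error of the approximation is $\frac{1+x}{1+y}-(1+x-y)=\frac{y^2-xy}{1+y}$, so shrinking $x$ and $y$ by a factor of $10$ should shrink the error by roughly $100$. A quick check (the sample points are arbitrary):

```python
def err(x, y):
    # Absolute error of the first-order approximation (1+x)/(1+y) ≈ 1 + x - y
    return abs((1 + x) / (1 + y) - (1 + x - y))

e1 = err(0.1, 0.05)
e2 = err(0.01, 0.005)
print(e1, e2, e1 / e2)  # the ratio is near 100: the error is second order
```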
962155
2014-10-07 12:15:02Z
Ethan
133
3
How to Derive Population Variance of AR(1) Process
[ "stochastic-processes" ]
If I have a process of the form $Y_t=\mu+\phi Y _{t-1} + \epsilon_t$, how is the population variance derived? Assuming that $\epsilon_t$ has a zero mean and a variance of 2.
{ "id": [ "1977154" ], "body": [ "Go down recursively to find an expression of Yt." ], "at": [ "2014-10-07 12:27:26Z" ], "score": [ "1" ], "author": [ "Somabha Mukherjee" ], "author_rep": [ "2460" ] }
{ "id": [ "962172" ], "body": [ "\nProvided that $|\\phi|<1$, you can do backwards substitution to arrive at\n$$\nY_t=\\mu(1+\\phi+\\phi^2+\\cdots)+\\epsilon_t+\\phi\\epsilon_{t-1}+\\cdots=\\frac{\\mu}{1-\\phi}+\\sum_{j=0}^\\infty\\phi^j\\epsilon_{t-j}.\n$$\nFrom here, you can compute\n$$\n\\text{Var}(Y_t)=E\\left(\\sum_{j=0}^\\infty\\phi^j\\epsilon_{t-j}\\sum_{j=0}^\\infty\\phi^j\\epsilon_{t-j}\\right)=\\sum_{j=0}^\\infty\\phi^{2j}E(\\epsilon_{t-j}^2)=\\frac{\\text{Var}{\\epsilon}}{1-\\phi^2}=\\frac{2}{1-\\phi^2}.\n$$\nThe second equality uses the fact that when the indices don't match, the expectation vanishes. Finally, if your process doesn't start at infinity in the past, you need to define $\\text{Var}(Y_0)$ as $\\frac{\\text{Var}{\\epsilon}}{1-\\phi^2}$.\n" ], "score": [ 2 ], "ts": [ "2014-10-07 12:28:18Z" ], "author": [ "Ethan" ], "author_rep": [ "133" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
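Not part of the original answer: a Monte Carlo sketch of the stationary-variance formula $\operatorname{Var}(Y_t)=\frac{2}{1-\phi^2}$. The particular $\mu$, $\phi$, Gaussian innovations, burn-in, and sample size below are all arbitrary choices:

```python
import random
import statistics

random.seed(0)
mu, phi, var_eps = 0.5, 0.6, 2.0
sd_eps = var_eps ** 0.5

# Since |phi| < 1, the influence of the starting value Y_0 dies out, so a
# burn-in period approximates sampling from the stationary distribution.
y = 0.0
burn, n = 2_000, 200_000
samples = []
for t in range(burn + n):
    y = mu + phi * y + random.gauss(0.0, sd_eps)
    if t >= burn:
        samples.append(y)

print(statistics.variance(samples), var_eps / (1 - phi ** 2))  # theory: 3.125
```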
962148
2014-10-07 12:03:35Z
Hajar Elhammouti
161
3
singularity of $\frac{z}{\sinh(z)}$
[ "complex-analysis", "functions" ]
I was wondering why $0$ is not a singularity of $\frac{z}{\sinh(z)}$. Thank you for your feedback.
{ "id": [ "1977133", "1977152" ], "body": [ "$z/\\sinh (z)=2z/(e^z-e^{-z})$ tends to $\\lim_{z\\to 0}2/2e^z=1$ by L'Hopital.", "Alright, thank you for helping :)" ], "at": [ "2014-10-07 12:11:57Z", "2014-10-07 12:26:44Z" ], "score": [ "", "" ], "author": [ "Dietrich Burde", "Hajar Elhammouti" ], "author_rep": [ "123585", "161" ] }
{ "id": [ "962151", "962154", "962153" ], "body": [ "\nWith the theory of power expansions, this is easier to see from the inverted function $\\frac{\\sinh z}{z}$. We have\n$$\n\\sinh z = \\frac{e^z - e^{-z}}{2} = \\sum_{i = 0}^{\\infty}\\frac{1}{i!}\\frac{z^i - (-z)^i}{2} = \\sum_{i = 1}^{\\infty}\\frac{1}{i!}\\frac{z^i - (-z)^i}{2}\n$$\nsince the constant term vanishes. We then get that\n$$\n\\frac{\\sinh z}{z} = \\sum_{i = 1}^{\\infty}\\frac{1}{i!}\\frac{z^{i-1} + (-z)^{i-1}}{2}\n$$\nwhere the right-hand side is defined and equal to $1$ for $z = 0$. Of course, the function itself is undefined for $z = 0$, but taking the limit of the above equation as $z \\to 0$ shows that this is a removable singularity.\n", "\n$0$ is a removable discontinuity, $$\\lim_{z\\rightarrow0}\\frac{z}{\\sinh(z)}=\\lim_{z\\rightarrow0}\\frac{2z}{e^{z}-e^{-z}}=\\lim_{z\\rightarrow0}\\frac{2}{e^{z}+e^{-z}}=1\\neq\\infty\n $$\n", "\nTake the limit (using De L'Hospital), and you'll find out that it is evaluated to $1$.\n" ], "score": [ 2, 0, 0 ], "ts": [ "2014-10-07 12:18:14Z", "2014-10-07 12:23:48Z", "2014-10-07 12:25:54Z" ], "author": [ null, null, null ], "author_rep": [ "192589", "192589", "192589" ], "accepted": [ true, false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1977151" ], "body": [ "That answers my question, thank you for all :))" ], "at": [ "2014-10-07 12:26:06Z" ], "score": [ "" ], "author": [ "Hajar Elhammouti" ], "author_rep": [ "161" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
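Not part of the original answers: numerically, $z/\sinh(z)$ approaches $1$ as $z\to 0$, consistent with a removable singularity; since $z/\sinh z = 1 - z^2/6 + O(z^4)$, the distance to $1$ shrinks quadratically. The diagonal approach direction below is an arbitrary choice:

```python
import cmath

def f(z):
    return z / cmath.sinh(z)

for t in (1e-1, 1e-3, 1e-5):
    z = t * (1 + 1j)          # approach 0 along the diagonal of the complex plane
    print(z, abs(f(z) - 1))   # distance to 1 shrinks like |z|^2
```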
962146
2014-10-07 11:55:44Z
Redundant Aunt
11.9k
3
Inequality: $ \frac{a}{1+9bc+k(b-c)^2}+\frac{b}{1+9ca+k(c-a)^2}+\frac{c}{1+9ab+k(a-b)^2}\geq\frac{1}{2} $
[ "inequality" ]
I was trying to solve this inequality, but I wasn't able to do so: Find the maximum number $k\in\mathbb R$ such that: $$ \frac{a}{1+9bc+k(b-c)^2}+\frac{b}{1+9ca+k(c-a)^2}+\frac{c}{1+9ab+k(a-b)^2}\geq\frac{1}{2} $$ Holds for all $a,b,c\ge0$ with $a+b+c=1$ Any help is highly appreciated.
{ "id": [ "1977206" ], "body": [ "Note that at $a=b=c=\\frac 13$ the inequality holds for every value of $k\\in\\Bbb R$. Another point of interest is $a=0,b=c=\\frac12$ which means $k\\le 4$." ], "at": [ "2014-10-07 12:55:18Z" ], "score": [ "" ], "author": [ "abiessu" ], "author_rep": [ "8050" ] }
{ "id": [ "963260" ], "body": [ "\nPlugging, $a=1/2,b=1/2,c=0 \\implies k \\le 4$.\nI will show that $k=4$. \n$$ \\frac{a}{1+9bc+4(b-c)^2}+\\frac{b}{1+9ca+4(c-a)^2}+\\frac{c}{1+9ab+4(a-b)^2}\\ge \\frac{1}{2} \\\\ \\Longleftrightarrow \\frac{a^2}{a+9abc+4a(b-c)^2}+\\frac{b^2}{b+9abc+4b(c-a)^2}+\\frac{c^2}{c+9abc+4c(a-b)^2} \\ge \\frac{1}{2}$$\nBy Cauchy–Schwarz it suffices to show that, $$ \\frac{(a+b+c)^2}{a+b+c+3abc+4(a^2b+b^2c+c^2a+ab^2+bc^2+ca^2)} \\ge \\frac{1}{2} \\\\ \\Longleftrightarrow \\frac{1}{1+3abc+4(a^2b+b^2c+c^2a+ab^2+bc^2+ca^2)} \\ge \\frac{1}{2} \\\\ \\Longleftrightarrow 1 \\ge 3abc+4(a^2b+b^2c+c^2a+ab^2+bc^2+ca^2) \\\\ \\Longleftrightarrow (a+b+c)^3 \\ge 3abc+4(a^2b+b^2c+c^2a+ab^2+bc^2+ca^2) \\\\ \\Longleftrightarrow a(a-b)(a-c)+b(b-c)(b-a)+c(c-a)(c-b) \\ge 0 $$ \nThe last inequality is true by Schur. $\\Box$\n" ], "score": [ 2 ], "ts": [ "2014-10-08 04:38:41Z" ], "author": [ "Redundant Aunt" ], "author_rep": [ "11890" ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
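Not part of the original answer: a quick numerical spot-check of the inequality at $k=4$, including the two equality cases mentioned in the thread. Sampling the simplex $a+b+c=1$ via normalized exponentials is an arbitrary choice:

```python
import random

random.seed(1)

def lhs(a, b, c, k=4.0):
    return (a / (1 + 9*b*c + k*(b - c)**2)
            + b / (1 + 9*c*a + k*(c - a)**2)
            + c / (1 + 9*a*b + k*(a - b)**2))

print(lhs(1/3, 1/3, 1/3))  # equality case: 1/2
print(lhs(0.5, 0.5, 0.0))  # equality case forcing k <= 4: 1/2

def simplex_point():
    # A random point with a, b, c >= 0 and a + b + c = 1
    x = [random.expovariate(1.0) for _ in range(3)]
    s = sum(x)
    return [v / s for v in x]

worst = min(lhs(*simplex_point()) for _ in range(10_000))
print(worst)  # stays >= 1/2
```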
961670
2014-10-07 02:01:38Z
A. Thomas Yerger
17.1k
3
Sylow's Theorem Explanation [closed]
[ "abstract-algebra", "group-theory", "sylow-theory" ]
Closed. This question does not meet Mathematics Stack Exchange guidelines. It is not currently accepting answers. Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc. Closed last year. Can someone explain it to me? I've been working out of Gallian's Contemporary Abstract Algebra this semester, but came into possession of a copy of Dummit and Foote's book, which I am aware is substantially more advanced. It was there I stumbled upon Sylow's Theorem, and although I followed the harder book well up to that point, I don't get the theorem at all. Any help would be appreciated. A statement of the theorem in question: If $P$ is a Sylow $p$-subgroup of $G$ and $Q$ is any $p$-subgroup of $G,$ then there exists $g \in G$ such that $Q \leq gPg^{-1}.$ In particular, any two Sylow $p$-subgroups of $G$ are conjugate in $G$.
{ "id": [ "1976344", "1976539", "1977253" ], "body": [ "Are you having trouble with the statement or the proof of this? Related, but probably not terribly helpful: What does \"the conjugacy part of Sylow's Theorems\" denote?", "I think I understand the conjugation, but the proof is what troubles me most here.", "It should be noted that $G$ is a finite group. Hence @Gage is able to appeal to a pigeonhole argument to get the conclusion." ], "at": [ "2014-10-07 02:10:50Z", "2014-10-07 03:52:00Z", "2014-10-07 13:11:07Z" ], "score": [ "1", "", "" ], "author": [ "hardmath", "A. Thomas Yerger", "hardmath" ], "author_rep": [ "35878", "17056", "35878" ] }
{ "id": [ "961682", "961940" ], "body": [ "\nSo it would definitely be helpful if you could clarify what part of this theorem doesn't make sense to you. As a broad overview it is saying if $\vert G \vert = p^m r$ and $p$ doesn't divide $r$ then if you have a subgroup $P$ with $\vert P \vert = p^m$ and any subgroup $Q$ where $\vert Q \vert = p^{m-k}$ then there is an element of the conjugacy class of $Q$ that is a subgroup of $P$. Using this and the pigeonhole principle it then follows that any Sylow $p$-subgroups are conjugates.\nEDIT:\nSeeing that you were having trouble with the proof and not the concept I went to my copy of Dummit and Foote to see what they do. The basic structure of their proof of this part of Sylow's Theorem is that they assume it is false and derive a contradiction. The crucial facts they use are that if $\mathcal{O}_i$ denotes an orbit of $P_i$, a conjugate of some Sylow $p$-subgroup $P$, under the action of some subgroup $Q$ of $G$ by conjugation then $$\vert \mathcal{O}_i \vert = \vert Q : P_i \cap Q \vert$$ and that the sum of the orders of all the orbits $r$ is $1 \pmod p$.\nFor the proof we assume that $Q$ is not contained in any $P_i$ (or isn't in a conjugate of $P$) and this immediately tells us that $\vert Q : P_i \cap Q \vert > 1$ for every $i$ (because if it wasn't then $Q$ would be contained in $P_i$). We know that (since these are finite groups) $$\vert Q : P_i \cap Q \vert = \frac{ \vert Q \vert}{\vert P_i \cap Q \vert }$$ and so (since $Q$ is a $p$ group and this fraction isn't $1$) $p$ divides the index of $P_i \cap Q$ in $Q$ and also the size of the orbit $\mathcal{O}_i$. Now since this is true for every $\mathcal{O}_i$ $p$ also divides the sum $$ r = \vert\mathcal{O}_1\vert + \cdots + \vert\mathcal{O}_n\vert$$ however this contradicts the fact that $r \equiv 1 \pmod p$. Therefore the assumption that $Q$ is not contained in a conjugate of $P$ must be false.\n", "\nLet $Q$ act by left-multiplication on $\mathcal{S}$, the set of left cosets of $P$. Note that the cardinality of $\mathcal{S}$ is index$[G:P]$ and hence not divisible by $p$. The length of the orbit of $gP$ is index$[Q:Q \cap P^{g^{-1}}]$, a $p$-power. Since $|\mathcal{S}|$ is the sum of these orbit lengths and $p \nmid |\mathcal{S}|$, there must be an orbit of length $1$, which means $Q \subseteq P^{g^{-1}}$ for some $g \in G$.\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-08 02:05:42Z", "2014-10-07 07:25:52Z" ], "author": [ "A. Thomas Yerger", "A. Thomas Yerger" ], "author_rep": [ "17056", "17056" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
961583
2014-10-07 00:16:26Z
user181407
550
3
When are extensional equivalence classes still sets?
[ "logic", "set-theory" ]
Let $\sim$ denote extensional equivalence. That is, $y\sim x \Leftrightarrow \forall z(z\in y \leftrightarrow z\in x)$. Given a set $x$, let $[[x]] := \lbrace y:y\sim x\rbrace$. Clearly, $\textrm{ZF}$ proves that these classes are sets, for by extensionality they are just singletons. With a little bit more effort, we can see that $\textrm{ZF}$ without the axiom of extensionality still proves that they are sets: If $y\sim x$, then $y\subseteq x$ and so $y\in \mathcal{P}(x)$, where $\mathcal{P}(x)$ is any powerset of $x$. Applying comprehension to $\mathcal{P}(x)$ then gives us some desired set with the same elements as $[[x]]$. Question: Does $\textrm{ZF}$ without the axiom of extensionality and the axiom of powerset still prove that $[[x]]$ is a set? Edit: The intended meaning of "$[[x]]$ is a set" is $\exists y \forall z (z\in y \leftrightarrow z\sim x)$.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "962013" ], "body": [ "\nThe answer is no. Here's one way to see this. Let $\\kappa$ be a regular uncountable cardinal. By recursion we can define a series of structures $\\langle A_\\alpha, \\in_\\alpha\\rangle$ for $\\alpha\\leq\\kappa$. First, we let $A_0$ and $\\in_0$ be empty. Then we make $\\langle A_{\\alpha+1}, \\in_{\\alpha+1}\\rangle$ relate something new to the elements of every less than $\\kappa$ sized subset of $A_\\alpha$. For instance, suppose $x\\subseteq A_\\alpha$ has size less than $\\kappa$. Then we pick some $y\\not\\in \\bigcup_{\\beta\\leq \\alpha} A_\\beta$ and define $\\in_{\\alpha+1}$ such that $z\\in_{\\alpha+1} y$ just in case $z\\in x$. At limits we take unions. \nIt is straightforward to check that $\\langle A_\\kappa, \\in_\\kappa\\rangle$ satisfies ZF minus Powerset minus Extensionality. But because we add a new empty set at each successor stage, there will be $\\kappa$ many empty sets and thus no set containing all of them. \n" ], "score": [ 2 ], "ts": [ "2014-10-07 10:19:00Z" ], "author": [ "user181407" ], "author_rep": [ null ], "accepted": [ true ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
960340
2014-10-06 05:48:11Z
user157279
157
3
probability of a limiting sum
[ "probability", "probability-theory" ]
Suppose that $X_i$ are uniformly distributed on (0,1) and are independent. For the family $J$ of all possible increasing index sets, I am trying to show that $P\left(\bigcap_{j\in J} \left\{\lim_{n \rightarrow \infty} \frac{\sum_{k=1}^n X_{j_k}}{n} = \frac{1}{2}\right\}\right) = 0.$ My attempt is as follows: denote by $A_i = \{\omega \in \Omega:\lim_{n \rightarrow \infty} \frac{\sum_{k=1}^n X_{i_k}}{n} = \frac{1}{2}\}$. Then instead of showing that $P(\cap A_i) = 0,$ I can show that $P(\cup A_i^c) = 1.$ However, $P(\cup A_i^c) \le \sum P(A_i^c)$. Now, there's a theorem which says that if $X_i$'s are iid, $E[X_i] = \mu$ and $E[X_i^4] < \infty$ then $S_n = X_1 + \dots + X_n$ has the property that $S_n/n \rightarrow \mu$ a.s. In the above case, $P(A_i) = 1$ because of the above theorem implying that $P(A_i^c) = 0$. This contradicts what I am supposed to prove. Any thoughts or hints? One hint given is to use the fact that $\{X_1,X_2,\dots\}$ are dense in $(0,1)$ save for a set of measure 0. But I'm stuck.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "960356" ], "body": [ "\nThe idea is simpler to explain when $(X_i)$ is i.i.d. Bernoulli with $P(X_i=0)=P(X_i=1)=\\frac12$. For every $j$ in $J$, let $S_n^j=X_{j_1}+X_{j_2}+\\cdots+X_{j_n}$ and $$A_j=\\{\\lim\\limits_{n}\\tfrac1nS_n^j=1/2\\}.$$ Let $\\omega$ in $\\Omega$ and $Z(\\omega)=\\{n\\mid X_n(\\omega)=0\\}\\subseteq\\mathbb N$.\n\nEither $Z(\\omega)$ is infinite then $j=Z(\\omega)$ yields $S_n^j(\\omega)=0$ for every $n$ hence $\\omega$ is not in $A_j$ because $\\lim\\limits_{n}\\tfrac1nS_n^j(\\omega)=0$.\nOr $Z(\\omega)$ is finite then $j=\\mathbb N$ yields $S_n^j(\\omega)\\geqslant n-|Z(\\omega)|$ for every $n$ hence $\\omega$ is not in $A_j$ because $\\lim\\limits_{n}\\tfrac1nS_n^j(\\omega)=1$.\n\nThis proves that, for every $\\omega$ in $\\Omega$, there exists $j$ in $J$ such that $\\omega$ is not in $A_j$, that is, $$\\bigcap_{j\\in J}A_j=\\varnothing.$$\nIn particular, the set on the LHS is an event and its probability is zero. (Note that the appearance of the empty set on the RHS of this identity is fortunate since one cannot know a priori that the set on the LHS, being an uncountable intersection of events, is an event.) \nCan you adapt this to the case when $(X_i)$ is i.i.d. uniform on $(0,1)$?\n" ], "score": [ 2 ], "ts": [ "2014-10-06 07:43:11Z" ], "author": [ "user157279" ], "author_rep": [ "157" ], "accepted": [ true ], "comments": [ { "id": [ "1974664", "1974671", "1974797", "1975043" ], "body": [ "I think I got the idea, thanks Did. But what's wrong with my solution above? Why did it give me something contradictory to what I'm supposed to prove? I can't find the flaw to it...", "You applied additivity to the uncountable family $(A_j^c)_{j\\in J}$ and, for that, Kolmogorov might strike down upon thee with great vengeance and furious anger... :-)", "Did, while thinking about it, I realized it's not straightforward to adapt the above to the U(0,1) case. It doesn't make sense to talk about $X_j = 0.5$ or something because these are continuous RVs. Is this where I invoke the fact that the $X_k$'s are dense in $(0,1)$?", "Right, one does not copy the above, one adapts it, but frankly, the adaptation is not that difficult." ], "at": [ "2014-10-06 12:55:57Z", "2014-10-06 12:59:35Z", "2014-10-06 14:09:01Z", "2014-10-06 16:19:51Z" ], "score": [ "", "2", "", "" ], "author": [ "user157279", "Did", "user157279", "Did" ], "author_rep": [ "157", "274750", "157", "274750" ] } ] }
959694
2014-10-05 19:43:38Z
MangoPirate
315
3
If $f$ is Lebesgue integrable on [0,1] show $g(x)=\int_{[x,1]} f(t)t^{-1}dt$ is Lebesgue integrable on [0,1]
[ "real-analysis", "lebesgue-integral" ]
Also want to show $\int_{[0,1]}g(x)dx = \int_{[0,1]}f(x)dx$. So since $f \in \mathcal{L}([0,1]), f=u-v$ where $u$ and $v$ are upper functions. Then I need to show $\int_{(x,1]} u(t)t^{-1}dt$ is an upper function because that will give $g \in \mathcal{L}([0,1])$. Not really sure how to go about doing this. I have to show this satisfies the properties of being an upper function but that's what I'm having trouble starting.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "959823" ], "body": [ "\n$\\int_0^1|g(x)|dx \\leq \\int_0^1 dx\\int_x^1 \\left|\\dfrac{f(t)}{t}\\right|dt = \\int_0^1 dt \\int_0^t \\left|\\dfrac{f(t)}{t}\\right|dx = \\int_0^1 \\left|f(t)\\right|dt$\n" ], "score": [ 2 ], "ts": [ "2014-10-05 21:13:10Z" ], "author": [ "MangoPirate" ], "author_rep": [ "315" ], "accepted": [ true ], "comments": [ { "id": [ "1973272", "1973938", "1974000" ], "body": [ "This doesn't seem to answer the question. If I'm mistaken, why/how does this answer the question I asked?", "This is a valid answer to the question. The problem probably is that you are using an unusual definition of the integral (please tell us more about your definition, e.g. what is an \"upper function\"?), or tell us what you do not understand about the solution above (it uses Fubini's theorem, do you know it?).", "No, I'm not familiar with that theorem. My teacher is following a set of online notes, where I believe the main idea is to teach Lebesgue Integration without really using measure theory. The integral is defined in this chapter here, along with upper functions and other related things rutherglen.science.mq.edu.au/wchen/lnilifolder/ili04.pdf" ], "at": [ "2014-10-05 22:30:53Z", "2014-10-06 04:50:36Z", "2014-10-06 05:48:33Z" ], "score": [ "", "", "" ], "author": [ "MangoPirate", "PhoemueX", "MangoPirate" ], "author_rep": [ "315", "34284", "315" ] } ] }
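Not part of the original answer: the interchange of integrals behind that estimate also gives the second claim, $\int_0^1 g = \int_0^1 f$, which can be checked numerically for a smooth example such as $f(t)=t^2$ (the example integrand and grid sizes are arbitrary choices):

```python
def trapezoid(func, a, b, n=500):
    # Composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = (func(a) + func(b)) / 2 + sum(func(a + i * h) for i in range(1, n))
    return s * h

def f(t):
    return t ** 2  # example integrand; f(t)/t = t is bounded near 0

def g(x):
    # g(x) = integral of f(t)/t over [x, 1]
    return trapezoid(lambda t: 0.0 if t == 0 else f(t) / t, x, 1.0)

int_f = trapezoid(f, 0.0, 1.0)
int_g = trapezoid(g, 0.0, 1.0)
print(int_f, int_g)  # both should be close to 1/3
```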
959203
2014-10-05 12:43:12Z
Leo
7500
3
What is the difference between $\mathbb E[Z|\mathcal G]=Y$ and $\mathbb E[Z|\mathcal G]\stackrel{\text{a.s.}}{=}Y$?
[ "measure-theory", "probability-theory", "random-variables", "martingales", "conditional-expectation" ]
I'm somewhat confused by the definition of martingale: Let $(\Omega, \mathcal F, \mathcal F_n, \mathbb P)$ be a filtered probability space. We call $(X_n)_{n\in\mathbb N}$ martingale if for all $n\in\mathbb N$ holds: $X_n \in \mathcal L^1(\mathbb P)$ $X_n$ is $\mathcal F_n$-measurable $\mathbb E[X_{n+1}|\mathcal F_n]\stackrel{\text{a.s.}}{=}X_n$ Why do we need "a.s." in the 3rd part? Isn't $\mathbb E[Z|\mathcal G]$ always unique only up to "a.s." anyway? How would we check the 3rd part without knowing that the 2nd part holds? (I.e. doesn't the 3rd imply the 2nd?) Why do we separate the points 2 and 3? Wouldn't it be enough to just ask for $\mathbb E[X_{n+1}|\mathcal F_n]=X_n$ instead?
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "959246", "2806396" ], "body": [ "\nIn general, 3 does not imply 2\nSuppose $g$ is $\\mathcal{F}$-measurable and $h=g$ almost surely, it does not imply $h$ is also $\\mathcal{F}$-measurable.\nTake a set $A$ which is contained in $B \\in \\mathcal{F}$ with $P(B) = 1$, but $A$ is not itself in $\\mathcal{F}$. By definition, $h = 1_{A}$ is almost surely equal to the constant variable $g \\equiv 0$. But $h$ is not $\\mathcal{F}$-measurable, since $h^{-1}(\\{1\\}) = A \\not\\in \\mathcal{F}$.\nIf $\\mathcal{F}$ has been completed and the filtration is augmented by $\\mathbb{P}$, then (2) is indeed unnecessary. That's one of the reasons for completion and augmentation: to make sure modification on a null set doesn't cause measurability problems.\n", "\nLet's generalise how 3 doesn't imply 2:\n\nAll versions of a conditional expectation are almost surely equal to each other, but any version is free to be equal to another random variable that is not a version of said conditional expectation.\n\nPrecisely:\n\nLet $(\\Omega,\\mathcal F, \\mathbb P)$ be a probability space with random variable $X \\in \\mathscr L^1$ and a sub-$\\sigma$-algebra $\\mathcal G$. Versions of $\\mathbb E[X|\\mathcal G]$ are almost surely equal to each other, but any version is free to be equal to another random variable, $Y$, that is not $\\mathcal G$-measurable.\n\nConsider $X=c$, $Y=c1_A$, $P(A)=1$ and $A \\notin \\mathcal G$. Then\n$$\\mathbb E[X|\\mathcal G] := \\mathbb E[c|\\mathcal G] = c \\stackrel{\\text{a.s.}}{=}c1_A=:Y,$$ but $Y$ is not $\\mathcal G$-measurable. You could say that $Y$ is a non-$\\mathcal G$-measurable version of $\\mathbb E[X|\\mathcal G]$, i.e. it satisfies $E[Y1_G] = E[X1_G] \\ \\forall G \\ \\in \\ \\mathcal G$ but not $\\sigma(Y) \\subseteq \\mathcal G$, if that's even a thing in probability.\n" ], "score": [ 3, -1 ], "ts": [ "2014-10-05 18:37:30Z", "2018-06-03 11:00:41Z" ], "author": [ null, null ], "author_rep": [ "1", "1" ], "accepted": [ true, false ], "comments": [ { "id": [ "3772649", "3773757", "3773789", "3773834" ], "body": [ "Is there something like a 2 a.s.? Like $\\forall A \\in \\sigma(X_n) \\cap \\mathscr F_n^C$, $P(A) = 0$?", "@BCLC I don't quite get your question...", "like measurable almost surely?", "@BCLC Of course one can give a definition like measurable almost surely, i.e. measurable up to \"almost sure\" modification. But it is essentially similar to completion and augmentation, thus just an additional definition which is not really necessary" ], "at": [ "2016-06-30 01:51:02Z", "2016-06-30 13:51:07Z", "2016-06-30 14:08:02Z", "2016-06-30 14:24:32Z" ], "score": [ "", "2", "", "2" ], "author": [ "BCLC", "Petite Etincelle", "BCLC", "Petite Etincelle" ], "author_rep": [ "12667", "14481", "12667", "14481" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
959183
2014-10-05 12:27:53Z
user151465
null
3
unreduced suspension
[ "algebraic-topology" ]
Is the definition $SX=\frac{(X\times [a,b])}{(X\times\{a\}\cup X\times \{b\})}$ of the unreduced suspension the standard definition? If I consider $X=$ point, the suspension of $X$ is a circle. But I saw another definition of the unreduced suspension such that the suspension of a point should be an interval. Regards.
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "959226", "959486", "959299" ], "body": [ "\nI would have said that the suspension was $X \\times [-1, 1]$ modulo the relation that \n$$\n(x, a) \\sim (x', a') \n$$\nif and only iff \n\n$a = a' = 1$ or \n$a = a' = -1$, or\n$a = a'$ and $x = x'$. \n\nWikipedia seems to agree with me. It looks as if your author was a little glib, and failed to mention that the \"bottom\" and \"top\" sets of equivalent points were not supposed to be made equivalent to each other. \n", "\nOne should have a picture and here is one, taken from the e-version of Topology and Groupoids showing the suspension $SX$ as a union of two cones:\n\nNote the special case when $X$ is a circle $S^1$ when this gives the $2$-sphere $S^2$ as the union of two hemispheres. \n", "\nIn fact in my experience the unreduced suspension is commonly written in the way you did it, although it is wrong. What it means is: collapse one side to a point and also the other side. But NOT both to the same point. So this would be only right when quotiening is associative, i.e.\n$$\nSX = (X \\times I/ X\\times \\{0\\} )/ (X \\times \\{1\\}) \"=\" X \\times I / X\\times 0 \\cup X\\times 1\n$$\nTo answer your question clearly: no this definition is wrong (or at least not standard)!\n" ], "score": [ 1, 1, 0 ], "ts": [ "2014-10-05 12:57:55Z", "2020-05-09 10:24:09Z", "2014-10-05 14:19:18Z" ], "author": [ null, null, null ], "author_rep": [ "5205", "5205", "5205" ], "accepted": [ false, false, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1972121", "1972167", "1972180" ], "body": [ "thank you. Ok, you mean \"=\" is not true in general and the definition on the left site of the \"equality\" (\"=\" ) is the standarddefinition?", "Yes precisely! 
The LHS is equivalent to the standard definition (with $X\\times 1$ I mean the image of this subset in the quotient) and explains why many people abuse the notation.", "ok, thank you, I understand." ], "at": [ "2014-10-05 14:58:30Z", "2014-10-05 15:15:05Z", "2014-10-05 15:20:35Z" ], "score": [ "", "", "" ], "author": [ "user151465", "Daniel Valenzuela", "user151465" ], "author_rep": [ null, "6133", null ] } ] }
958988
2014-10-05 08:47:39Z
curious
295
3
Conservation of the weak topology by homeomorphism
[ "functional-analysis" ]
I have some questions about Brezis book. We know that M is reflexive, so there exists an homeomorphism $J:(M,\|\|_M) \to (M'',\|\|_{L(M',R)})$ between the "strong" topology of M and M''. (Moreover J is isometric). So i would like to know why $B_M=B_{M''}=\{\xi\in M'':\|\xi\|_{L(E',R)}\leq 1\}$ and why $(M,\sigma(M,M')=(M,\sigma(M'',M')$ when the ball $B_M$ is metrizable in the weak topology $(M,\sigma(M,M')$ by consequence $B_{M''}$ is metrizable for the weak-star topology $(M,\sigma(M'',M')$ ? Translation : Theorem III.27 : Let $E$ be a reflexive Banach space, $(x_n)$ a bounded sequence in $E$. Then there exists a sub-sequence extracted from $(x_n)$ which converges for the $\sigma(E,E')$ topology Demonstration : Let $M_0$ be the vector space generated by the $(x_n)$, et $M=\overline{M_0}$. M is a separable space (see III.23). Furthermore, M is reflexive (see III.17). Therefore $B_M$ is a compact, metrizable space for the topology $\sigma(M,M')$. Indeed, $M'$ is separable (see III.24) hence $B_{M''}(=B_M)$ is metrizable for $\sigma(M'',M')(=\sigma(M,M'))$ (see III.25). We can then extract a sub-sequence $(x_{n_k})$ which converges for $\sigma(M,M')$. We conclude that $(x_n)$ converges too for $\sigma(E,E')$ (by restricting $M$ to linear forms on, $E$).
{ "id": [ "1971573" ], "body": [ "It's not just any old isometric isomorphism, it's the canonical embedding of $M$ into $M''$ that is an isometric embedding. The balls are the same only when identifying the two spaces via the canonical isometry (they lie in different spaces otherwise), but under this identification it is clear that we have equality, since $J$ is an isometry. Without the identification, it reads $J(B_M) = B_{M''}$. The same for the topologies induced by $M'$. They are only equal when identifying $M$ and $M''$ via $J$, otherwise $J$ is a homeomorphism $(M;\\sigma(M,M'))\\to (M'',\\sigma(M'',M'))$." ], "at": [ "2014-10-05 09:01:24Z" ], "score": [ "" ], "author": [ "Daniel Fischer" ], "author_rep": [ "202399" ] }
{ "id": [ "959027" ], "body": [ "\nWe don't actually have equality, but we have a canonical identification between the two spaces. This canonical identification is habitually left implicit for notational simplicity (at the cost of temporarily confusing beginners).\nWith the identification made explicit, the assertions are\n\n$J(B_M) = B_{M''}$, which immediately follows from the fact that $J$ is an isometric isomorphism, and\n$J\\colon (M,\\sigma(M,M')) \\to (M'',\\sigma(M'',M'))$ is a topological isomorphism. In particular, the restriction of $J$ to $B_M$ is a homeomorphism between $B_M$ and $B_{M''}$, where both are endowed with the subspace topology induced by $\\sigma(M,M')$ and $\\sigma(M'',M')$ respectively.\n\nIt is clear that if two spaces are homeomorphic, each is compact resp. metrisable if and only if the other is.\nTo see that $J\\colon (M,\\sigma(M,M'))\\to (M'',\\sigma(M'',M'))$ is a topological isomorphism, consider the standard neighbourhood bases of $0$ in these topologies: Given $\\mu_1,\\dotsc,\\mu_k\\in M'$, we have\n$$\\begin{aligned}\nJ\\left(\\{ x \\in M : \\lvert \\mu_\\kappa(x)\\rvert < 1 \\text{ for } 1 \\leqslant \\kappa\\leqslant k\\}\\right)\n&= J\\left(\\{x \\in M : \\lvert J(x)(\\mu_\\kappa)\\rvert < 1 \\text{ for } 1 \\leqslant \\kappa\\leqslant k\\}\\right)\\\\\n&= \\{ J(x)\\in M'' : \\lvert J(x)(\\mu_\\kappa)\\rvert < 1 \\text{ for } 1 \\leqslant \\kappa\\leqslant k\\}\\\\\n&= \\{\\varphi\\in M'' : \\lvert \\varphi(\\mu_\\kappa)\\rvert < 1 \\text{ for } 1 \\leqslant \\kappa\\leqslant k\\},\n\\end{aligned}$$\nso $J$ induces a bijection between the two neighbourhood bases, and that implies that $J$ is a homeomorphism, since $J$ is linear.\n" ], "score": [ 2 ], "ts": [ "2014-10-05 09:33:39Z" ], "author": [ null ], "author_rep": [ "7542" ], "accepted": [ true ], "comments": [ { "id": [ "1971989", "1972026", "1972132" ], "body": [ "Dear Daniel Fisher, thank you very much for your really useful and exhaustive answer! 
I just would like to be sure i well understood:", "1-using properties of bijectivity and isometry are suficient to show equallity $J(B_M)=B_{M''}$ ? 2- Since the properties of bijectivity and linearity are not affected by the changement of topology we just have to show bicontinuity. To show the fact that any element of the base of neigbourhood of the weak topology of M is sent in an element of the base of neighbourhood of M'' for the weak topology you focused on 0 center neighbourhood (translation) and you took the reciproque image of 1 radius balls (homotétie) (instead of any $\\varepsilon>0$:is it because of linearity of our objects $\\mu_k$?", "1. Yes, bijectivity and isometry are the conditions that ensure $J(B_M) = B_{M''}$. 2. Yes, since vector space topologies are translation invariant, one needs only consider the neighbourhoods of $0$. Generally, one needs only consider \"balls\" with radius $1$ for any seminorm because of the homothety-invariance, although here we can scale the $\\mu_\\kappa$ to achieve the same effect. And yes, it's the linearity of the $\\mu_\\kappa$ that allows that." ], "at": [ "2014-10-05 13:50:05Z", "2014-10-05 14:08:12Z", "2014-10-05 15:05:23Z" ], "score": [ "", "", "" ], "author": [ "curious", "curious", "Daniel Fischer" ], "author_rep": [ "295", "295", "202399" ] } ] }
958573
2014-10-04 23:21:18Z
SLM
153
3
A stricter Fermat's little theorem: when does $a^n\equiv 1$ (mod $p$) for $n < p$?
[ "number-theory" ]
By Fermat's little theorem we know that $a^{p-1} \equiv 1 \pmod{p}$ for all primes p. But it is often possible to find $x$ such that $a^{x} \equiv 1 \pmod{p}$ and x < p - 1. Is there anyway to predict when such an $x$ exists or what it is? I wrote a program to generate the minimal such $x$ for all $a$ less than a prime $p$, but I can't figure out any pattern.
{ "id": [ "1970864", "1970871", "1970877" ], "body": [ "The smallest such (positive) $x$ is called the order of $a$; it will necessarily divide $p-1$, so you just need to test all the divisors of $p-1$.", "I know it must be a divisor of p - 1, but is there no way of knowing if an x smaller than p - 1 exists besides testing all divisors of p - 1?", "Not that I'm aware of, except for some obvious special cases (like $a=1$ or $a=-1$)." ], "at": [ "2014-10-04 23:27:33Z", "2014-10-04 23:32:41Z", "2014-10-04 23:36:23Z" ], "score": [ "3", "", "" ], "author": [ "Hayden", "SLM", "Hayden" ], "author_rep": [ "16382", "153", "16382" ] }
{ "id": [ "958588", "958592" ], "body": [ "\nnote that the numbers $1,2,...,p-1$ form a cyclic group whose operation is multiplication followed by reduction mod $p$\nif you find a generator, $\\alpha$, then $\\alpha^k$ for $k=1,...,p-1$ gives all the elements of the group. the order of $\\alpha^k$ is $\\frac{n}{(n,k)}$\nfor example look at $F_7^{\\times}$ whose elements are $1,2,3,4,5,6$ you can see that $3$ is a generator:\n$$\n3^2 \\equiv_7 2 \\\\\n3^3 \\equiv_7 6 \\\\\n3^4 \\equiv_7 4 \\\\\n3^5 \\equiv_7 5 \\\\ \n3^6 \\equiv_7 1 \\\\ \n$$\ncheck out e.g. that $\\frac6{(6,3)} = 2$ so $6^2 \\equiv_7 1$. you can make up many examples to check. this is a good introduction to the study of finite fields\n", "\nOne can observe that $\\forall$ x such that $x|p-1$ $\\exists$ a such that $a^x\\equiv 1 (modp)$\nThis by cauchy's theorem for finite group http://en.wikipedia.o/wiki/Cauchy%27s_theorem_%28group_theory%29 In our case , the Group is $\\mathbb{Z}_{p}^*$.\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-04 23:40:57Z", "2014-10-04 23:35:23Z" ], "author": [ null, null ], "author_rep": [ "2381", "2381" ], "accepted": [ true, false ], "comments": [ { "id": [ "1970914" ], "body": [ "Wow, thanks! That's really cool." ], "at": [ "2014-10-05 00:06:50Z" ], "score": [ "" ], "author": [ "SLM" ], "author_rep": [ "153" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
958514
2014-10-04 22:23:06Z
user174318
199
3
Borel measurable functions in measure theory
[ "measure-theory" ]
Suppose that $f$ is a function on $\mathbb{R}\times \mathbb{R^k}$ such that $f(x,\cdot)$ is Borel measurable for each $x\in \mathbb{R}$ and $f(\cdot,y)$ is continuous for each $y\in \mathbb{R^k}$. For $n\in \mathbb{N}$, define $f_n$ as follows. For $i\in \mathbb{Z}$, let $\displaystyle a_i=\frac{i}{n}$, and for $a_i\leq x \leq a_{i+1}$ let $\displaystyle f_n(x,y)=\frac{f(a_{i+1},y)(x-a_i)-f(a_i,y)(x-a_{i+1})}{a_{i+1}-a_i}$. Show that $f_n$ is Borel measurable on $\mathbb{R}\times \mathbb{R^k}$ and $f_n \rightarrow f$ pointwise. Hence show that $f$ is Borel measurable on $\mathbb{R}\times \mathbb{R^k}$. Concluse by induction that every function on $ \mathbb{R^n}$ that is continuous in each variable separately is Borel measurable.
{ "id": [ "1970751" ], "body": [ "Can anyone give me a hint. I cannot imagine how to start the solution..." ], "at": [ "2014-10-04 22:23:50Z" ], "score": [ "" ], "author": [ "user174318" ], "author_rep": [ "199" ] }
{ "id": [ "959032" ], "body": [ "\nNotice that \n$$f_n(x,y)=n\\sum_{i\\in\\mathbb Z}\\left[f(a_{i+1},y)(x-a_i)-f(a_i,y)(x-a_{i+1})\\right]\\chi\\left\\{\\left[\\frac in,\\frac{i+1}n\\right)\\right\\}(x).$$\nDefine \n$$f_{n,i}(x,y):=\\left[f(a_{i+1},y)(x-a_i)-f(a_i,y)(x-a_{i+1})\\right]\\chi\\left\\{\\left[\\frac in,\\frac{i+1}n\\right)\\right\\}(x).$$\nThis function is Borel measurable on $\\mathbb R\\times\\mathbb R^k$ a sum and product of such functions.\nFor the pointwise convergence, we have to use the continuity with respect to the first variable and the fact that \n$$f(x,y)=f(a_i,y)+n(x-a_i)(f(a_{i+1},y)-f(a_i,y)).$$\n" ], "score": [ 2 ], "ts": [ "2014-10-05 09:42:37Z" ], "author": [ "user174318" ], "author_rep": [ "199" ], "accepted": [ true ], "comments": [ { "id": [ "1973351", "1976821" ], "body": [ "How did you derive the expression for $f(x,y)$ like this?", "I translate the condition $a_i\\leqslant x\\lt a_{i+1}$ in term of characteristic function." ], "at": [ "2014-10-05 23:18:52Z", "2014-10-07 08:29:34Z" ], "score": [ "", "" ], "author": [ "user174318", "Davide Giraudo" ], "author_rep": [ "199", "165060" ] } ] }
958491
2014-10-04 21:57:28Z
Mathy Person
1525
3
Graph theory/pigeonhole question.
[ "graph-theory" ]
In a waiting room, there are 100 people, each of whom knows 67 others among the 100. Prove that there exist 4 people in the waiting room who all know each other (that is, each know the other 3). You assume that knowing is mutual (if A knows B, then B knows A). I understand that if there are 4 people in the waiting room who all know each other, that makes K_4, or a connected graph. However, the "67 others among the 100" part stumps me. Can someone give a hint/hints? Please do not solve the problem for me or give me the answer. I only need a hint/hints to get me started. Thanks for all of your help!!
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "958503", "1699409" ], "body": [ "\nHere the hint you want:\nYou're trying to find 4 people who know each other? Well, take one person at random, and ask yourself \"where can I find the second person?\", and you when you find the 2°, ask \"where can I find the third?\", and the same for the fourth.\nBonus You can try to prove a stronger question: prove that for every three people that know each other there exists a person that knows all of them\n", "\nAnother way you can do it is by letting this be a graph since you tagged this problem graph theory. \nConsider the waiting room as a a graph, where each vertex corresponds to a member, and a pair of vertices is connected by an edge if the corresponding members know each other. So the degree of every vertex is between $67$ and $99$ inclusive, so there are $33$ different possible degrees. We need to show that $4$ vertices have the same degree.\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-04 22:36:59Z", "2016-03-15 23:03:43Z" ], "author": [ null, null ], "author_rep": [ "61934", "61934" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
958381
2014-10-04 20:06:37Z
Guest
43
3
How to find the number of squares formed by given lattice points?
[ "geometry", "euclidean-geometry", "analytic-geometry", "polygons", "integer-lattices" ]
Let us say that we are N integer coordinates (x, y) - what would our approach be if we were supposed to find the number of squares we could make from those given n points? Additionally, if we were to figure out how many, minimally, more points should we add that we manage to form at least ONE square from the points, how would we go about that?
{ "id": [ "1974027" ], "body": [ "No help at all? -_-" ], "at": [ "2014-10-06 06:15:25Z" ], "score": [ "" ], "author": [ "Guest" ], "author_rep": [ "43" ] }
{ "id": [ "960429", "969946" ], "body": [ "\nYou could iterate over all pairs of points $(x_1,y_1), (x_2,y_2)$, rotate the difference vector by 90° and add that rotated difference to either point to obtain two more points which would form a square:\n\\begin{align*}\nx_3 &= x_1 + (y_2 - y_1) &\nx_4 &= x_2 + (y_2 - y_1) \\\\\ny_3 &= y_1 + (x_1 - x_2) &\ny_4 &= y_2 + (x_1 - x_2)\n\\end{align*}\nCheck whether these extra points $(x_3,y_3)$ and $(x_4,y_4)$ are contained in your input. Make sure you iterate over ordered pairs, i.e. for every pair of distinct points, do the above computation for both possible orders, so that you get the squares on either side of the line joining these. In the end, you'd have counted each square four times (once for every edge), so divide the total count by four.\nThis can be adapted to check whether there are any squares which lack only a single point, so it can be used to determine the number of points you have to add to form at least one square.\nThe above approach might be far from efficient, but if you are looking for efficient algorithms, you might be better off asking on Stack Overflow. There is very little math in this question, and a lot of algorithm.\n", "\nI dont have the reputation to comment so I am writing it in answer. This problem seems to be taken from http://www.codechef.com/OCT14/problems/CHEFSQUA live contest! please do remove it. Please respect the code of honor.\nAnd more over you are complaining of no help.\n" ], "score": [ 2, 0 ], "ts": [ "2017-05-23 12:39:35Z", "2014-10-12 10:16:16Z" ], "author": [ null, null ], "author_rep": [ null, null ], "accepted": [ true, false ], "comments": [ { "id": [ "1977357", "1977364" ], "body": [ "What do you mean by \"add the rotated difference to either point\"", "@AbhishekKaushik: I expanded my answer." 
], "at": [ "2014-10-07 14:11:37Z", "2014-10-07 14:17:35Z" ], "score": [ "", "1" ], "author": [ "Abhishek Kaushik", "MvG" ], "author_rep": [ "133", "40500" ] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
958236
2014-10-04 17:56:42Z
colorfly
33
3
Solving $\int_{-\infty}^{+\infty}e^{-2\alpha|x|}\cos^2(x)\,dx$
[ "integration" ]
I'd like some help solving the integral $$ \int_{-\infty}^{\infty} e^{-2 \, \alpha \, |x|} \cdot \cos^2(x) \; \, dx $$ with $\alpha > 0$ I just assumed 'integration-by-parts' was the way to go, but the first part of the product alone ($e^{-2\alpha|x|}$) gets quite confusing. Is there any trick to making it stay manageable? The absolute-value in the exponent is confusing me, too - do I have to differentiate between cases every time?
{ "id": [ "1970267", "1970271", "1970286" ], "body": [ "Use the symmetry to get rid of the absolute value.", "what is $\\alpha$?", "Sorry, α > 0. It's just a parameter -> I want the solution in terms of α." ], "at": [ "2014-10-04 17:58:10Z", "2014-10-04 17:59:35Z", "2014-10-04 18:08:06Z" ], "score": [ "3", "", "" ], "author": [ "Daniel Fischer", "Dr. Sonnhard Graubner", "colorfly" ], "author_rep": [ "202399", "94750", "33" ] }
{ "id": [ "958244", "958243" ], "body": [ "\nWe have:\n$$ I = \\int_{-\\infty}^{+\\infty}e^{-2\\alpha|x|}\\cos^2(x)\\,dx = 2\\int_{0}^{+\\infty}e^{-2\\alpha x}\\cos^2 x\\,dx =\\int_{0}^{+\\infty}e^{-2\\alpha x}(1+\\cos(2x))\\,dx$$\nhence:\n$$ I = \\frac{1}{2\\alpha}+\\frac{\\alpha}{2(\\alpha^2+1)}$$\nsince:\n$$\\int_{0}^{+\\infty}e^{-\\beta x}\\,dx = \\frac{1}{\\beta},\\qquad \\int_{0}^{+\\infty}e^{-\\beta x}\\cos x\\,dx = \\frac{\\beta}{1+\\beta^2}.$$\n", "\nSince it is an even function, you can only do it for x > 0. Cos^2 ( x ) = ( 1 - cos2x )/ 2, and exp( -2ax )* cos (2x) is the real part of exp( -2ax + 2ix ). Remain is easy.\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-04 18:02:49Z", "2014-10-04 18:02:46Z" ], "author": [ "colorfly", "colorfly" ], "author_rep": [ null, null ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] } ] }
958193
2014-10-04 17:14:28Z
Sky
347
3
Infinite product of probability measures is a premeasure
[ "measure-theory", "product-space" ]
This is an exercise from Real Analysis by Stein and Shakarchi (Chapter 6, Exercise 15). Given infinitely many measure spaces $(X_i, \mathcal M_i, m_i)$, each of which has measure 1, one can define an algebra on the product space consisting of all finite unions of the “cylinders”, by which we mean rectangles of the form $E_1 \times E_2 \times \cdots$, where $E_i$ belong to $\mathcal M_i$ and all but finitely many of $E_i$ are equal to $X_i$. Then define $m(E_1 \times E_2 \times \cdots) = m_1(E_1)\,m_2(E_2)\cdots$. How does one prove that $m$ is a premeasure on the algebra defined above? One only needs to check the equality in the definition of premeasure, but it seems a subtle problem of the exchange of summation and limit progress is involved, which can be easily ignored without carefulness. I would like some hints or any reference book about it.
{ "id": [ "1991179" ], "body": [ "The previous exercise (Exercise 14) about finite products can be solved by adapting what was done for the product of two spaces in Section 3 and using induction. Perhaps we're supposed to use that in some manner here. But it's not clear to me how. The problem I'm running into is that if $E = \\bigcup_{n=1}^\\infty E^{(n)}$ where $E$ and $E^{(n)}$ are cylinders, there may be infinite components $i$ for which there exists an $n_i$ such that $E^{(n_i)}_i \\neq X_i$." ], "at": [ "2014-10-12 17:07:44Z" ], "score": [ "" ], "author": [ "epimorphic" ], "author_rep": [ "3189" ] }
{ "id": [ "1074266", "972362" ], "body": [ "\nThis is proven as Theorem 3.5.1 in Measure Theory, Volume 1 by Vladimir Bogachev (WorldCat link if you want to find a library copy).\nAn overarching theme of the proof is that each cylinder set is essentially an element of finite product $\\sigma$-algebras; the countably infinitely many factors $X_i$ tacked on at the end act essentially as filler.\nHint.\nFirst prove the following lemma: Let $(X,\\mathcal M, \\mu)$ and $(Y,\\mathcal N, \\nu)$ be measure spaces. Suppose $A \\in \\mathcal M \\otimes \\mathcal N$ satisfies $(\\mu \\times \\nu)(A) > t > 0$. For $x \\in X$, we have the sections/slices $A^x := \\{y \\in Y : (x,y) \\in A\\}$. Let $B := \\{x \\in X : \\nu(A^x) > t/2\\}$. Then $B \\in \\mathcal M$ and $\\mu(B) > t/2$.\nTo establish the main result, it suffices to show that if $(A_k)_{k=1}^\\infty$ is a sequence of cylinder sets for which there exists a constant $\\epsilon > 0$ such that $m(A_k) > \\epsilon$ for all $k$, then $\\bigcap_{k=1}^\\infty A_k \\subset \\prod_{j=1}^\\infty X_j$ has at least one element. The lemma is helpful here.\n\nThere's another proof outlined in Sam Drury's class notes for his Math 355 course at McGill under Section 4.6, \"Infinite products of probability spaces\". You can find my commentary on the proof in an old revision of this answer.\n", "\nThe trick is that we know the union must have only finitely many non-trivial co-ordinates and that the unions are increasing.\nTo prove $m$ is a pre-measure, you only need to prove the summation condition for disjoint events $\\{A_i\\}_{i \\ge 1}$ where $\\cup_i A_i$ lies in the algebra. 
Only way $\\cup_i A_i$ lies in the algebra is if $$\\cup_i A_i = E_{i_1} \\times E_{i_2 } \\times \\ldots \\times E_{i_k}$$\nLet $E_{i_j}^{(N)}$ be the $i_j$th coordinate for $\\cup_{i=1}^N A_i$.\nwhich means for all $\\epsilon >0$, for all large enough $N$ , $$m (E_{i_j} \\setminus E_{i_j}^{(N)}) < \\epsilon$$\nfor all $j=1,\\ldots, k$.Hence \n$$ m(\\cup_iA_i) - m(\\cup_{i=1}^N A_i) = m (\\cup_iA_i \\setminus \\cup_{i=1}^N A_i) < \\prod_{j=1}^k m(E_{i_j} \\setminus E_{i_j}^{(N)}) < \\epsilon^k$$\nsince in $\\cup_iA_i \\setminus \\cup_{i=1}^N A_i$, there are finitely many non-trivial co-ordinates and we can replace the measures of all co-ordinates except $i_1,\\ldots, i_k$ by $1$ for the upperbound.\n" ], "score": [ 3, -1 ], "ts": [ "2017-04-13 12:21:40Z", "2014-10-13 21:44:13Z" ], "author": [ null, null ], "author_rep": [ "3189", "3189" ], "accepted": [ true, false ], "comments": [ { "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }, { "id": [ "1995303", "1995320", "1998762", "1999166" ], "body": [ "(1) It suffices to consider the case where $\\bigcup_{n=1}^\\infty A_n$ and each $A_n$ are cylinders, but the partial unions $\\bigcup_{n=1}^N A_n$ won't be cylinders in general. Rather, they are finite unions of cylinders. (2) The nontrivial coordinates of $\\bigcup_{n=1}^N A_n$ need not be the same as those of $\\bigcup_{n=1}^\\infty A_n$. One can construct examples where for each coordinate $i \\in \\mathbb Z_{>0}$ there exists an $N(i) \\in \\mathbb Z_{>0}$ such that $\\pi_i\\bigl(\\bigcup_{n=1}^{N(i)} A_n\\bigr) \\neq X_i$. Here $\\pi_i$ is the projection onto $X_i$.", "(3) I don't think the step $m\\bigl(\\prod E_{i_j} \\setminus \\prod E_{i_j}^{(N)}\\bigr) < \\prod m_{i_j}\\bigl(E_{i_j} \\setminus E_{i_j}^{(N)}\\bigr)$ in the last display block is correct. Counterexample: $m\\bigl([0,2]^2 \\setminus [0,1]^2\\bigr) = 3 \\not\\lt 1 = \\bigl(m(1,2]\\bigr)^2$.", "1) Finite union of rectangles are still rectangles. 
2) I never said the non-trivial co-ordinates of $\\cup_{i=0}^\\infty A_i$ and $\\cup_{i=0}^N A_i$ are the same. Rather I only compared the co-ordinates of $\\cup_{i=0}^\\infty A_i$ with that of $\\cup_{i=0}^N A_i$. For the rest of the coordinates, the measure is bounded by $1$. 3) In your counterexample, I am assuming you are using Lebesgue measure? But measure of $[0,2]$ is $2$. But you mentioned in your question, each $X_i$ has measure $1$, so your counterexample is invalid. I apologise if things were not clear.", "(1) Something like the black part of this where $X_1 = X_2 = [0,1]$ with Lebesgue measures and $X_i = \\{0\\}$ with counting measures for $i>2$ isn't a rectangle. (3) It's trivial to fix the counterexample by rescaling. There's nothing special about the example, either; it is always the case that $\\prod E_{i_j} \\setminus \\prod E^{(n)}_{i_j} \\supset \\prod \\bigl(E_{i_j} \\setminus E^{(n)}_{i_j}\\bigr)$, meaning that the inequality you wrote is always false." ], "at": [ "2014-10-14 02:09:56Z", "2014-10-14 02:19:24Z", "2014-10-15 09:13:32Z", "2014-10-15 13:12:05Z" ], "score": [ "", "", "", "" ], "author": [ "epimorphic", "epimorphic", "gmath", "epimorphic" ], "author_rep": [ "3189", "3189", "1345", "3189" ] } ] }
957605
2014-10-04 05:50:26Z
Leucippus
25.3k
3
Sums with squares of binomial coefficients multiplied by a polynomial
[ "sequences-and-series", "summation", "binomial-coefficients", "closed-form" ]
It has long been known that \begin{align} \sum_{n=0}^{m} \binom{m}{n}^{2} = \binom{2m}{m}. \end{align} What is being asked here are the closed forms for the binomial series \begin{align} S_{1} &= \sum_{n=0}^{m} \left( n^{2} - \frac{m \, n}{2} - \frac{m}{8} \right) \binom{m}{n}^{2} \\ S_{2} &= \sum_{n=0}^{m} n(n+1) \binom{m}{n}^{2} \\ S_{3} &= \sum_{n=0}^{m} (n+2)^{2} \binom{m}{n}^{2}. \end{align}
{ "id": [], "body": [], "at": [], "score": [], "author": [], "author_rep": [] }
{ "id": [ "957647", "957611" ], "body": [ "\nLemma:\n$$\n\\begin{align}\n\\sum_{n=0}^m\\binom{n}{k}\\binom{m}{n}^2\n&=\\sum_{n=0}^m\\binom{n}{k}\\binom{m}{n}\\binom{m}{m-n}\\tag{1}\\\\\n&=\\sum_{n=0}^m\\binom{m}{k}\\binom{m-k}{n-k}\\binom{m}{m-n}\\tag{2}\\\\\n&=\\binom{m}{k}\\binom{2m-k}{m-k}\\tag{3}\\\\\n&=\\binom{m}{k}\\binom{2m-k}{m}\\tag{4}\\\\\n&=\\binom{2m-k}{k}\\binom{2m-2k}{m-k}\\tag{5}\n\\end{align}\n$$\nExplanation:\n$(1)$: $\\binom{m\\vphantom{k}}{n}=\\binom{m\\vphantom{k}}{m-n}$\n$(2)$: $\\binom{n\\vphantom{k}}{k}\\binom{m\\vphantom{k}}{n}=\\binom{m\\vphantom{k}}{k}\\binom{m-k}{n-k}$\n$(3)$: Vandermonde's Identity$\\vphantom{\\binom{k}{n}}$\n$(4)$: $\\binom{m\\vphantom{k}}{n}=\\binom{m\\vphantom{k}}{m-n}$\n$(5)$: $\\binom{n\\vphantom{k}}{k}\\binom{m\\vphantom{k}}{n}=\\binom{m\\vphantom{k}}{k}\\binom{m-k}{n-k}$\n\nApply the Lemma to\n$$\n\\color{#C00000}{n^2}-\\frac{m}2\\color{#00A000}{n}-\\frac{m}8\\color{#0000FF}{1}=\\color{#C00000}{2\\binom{n}{2}+\\binom{n}{1}}-\\frac{m}2\\color{#00A000}{\\binom{n}{1}}-\\frac{m}8\\color{#0000FF}{\\binom{n}{0}}\n$$\nand\n$$\nn(n+1)=2\\binom{n}{2}+2\\binom{n}{1}\n$$\nand\n$$\n(n+2)^2=2\\binom{n}{2}+5\\binom{n}{1}+4\\binom{n}{0}\n$$\n", "\nHINT:\nAs $\\displaystyle n\\binom mn=m\\binom{m-1}{n-1},$\n$\\displaystyle\\sum_{n=0}^mn\\binom mn^2=m\\sum_{n=0}^m\\binom mn\\binom{m-1}{n-1}=m\\binom{2m-1}m$ \ncomparing the coefficient of $x^{2m-1}$ in $(1+x)^m(1+x)^{m-1}=(1+x)^{2m-1}$\n" ], "score": [ 2, 0 ], "ts": [ "2014-10-04 06:54:35Z", "2014-10-04 06:04:21Z" ], "author": [ null, null ], "author_rep": [ null, null ], "accepted": [ true, false ], "comments": [ { "id": [ "1970497" ], "body": [ "The Lemma provided yields the results desired, namely: \\begin{align} S_{1} &= \\frac{1}{4(m-1)} \\binom{2m-2}{m-2} \\\\ S_{2} &= 2 (m^{2} + 2m -1) \\binom{2m-3}{m-2} \\\\ S_{3} &= \\frac{2}{m} \\, (m^{3} + 8 m^{2} + 12 m - 8) \\binom{2m-3}{m-2} \\end{align}." 
], "at": [ "2014-10-04 19:46:43Z" ], "score": [ "" ], "author": [ "Leucippus" ], "author_rep": [ "25299" ] }, { "id": [ "1969090", "1969093" ], "body": [ "I think the last expression should be $m{{2m-1}\\choose m}$.", "@Slade, Thanks for your feedback" ], "at": [ "2014-10-04 06:01:30Z", "2014-10-04 06:05:39Z" ], "score": [ "", "" ], "author": [ "Slade", "lab bhattacharjee" ], "author_rep": [ "29666", "270589" ] } ] }
End of preview.

Mathematics StackExchange Dataset

This dataset contains questions and answers from Mathematics StackExchange (math.stackexchange.com). The data was collected using the Stack Exchange API and comprises 465,295 questions in total.

Data Format

The dataset is provided in JSON Lines format, with one JSON object per line. Each object contains the following fields:

  • id: the unique ID of the question
  • asked_at: the timestamp when the question was asked
  • author_name: the name of the author who asked the question
  • author_rep: the reputation of the author who asked the question
  • score: the score of the question
  • title: the title of the question
  • tags: a list of tags associated with the question
  • body: the body of the question
  • comments: a list of comments on the question, where each comment is represented as a dictionary with the following fields:
    • id: the unique ID of the comment
    • body: the body of the comment
    • at: the timestamp when the comment was posted
    • score: the score of the comment
    • author: the name of the author who posted the comment
    • author_rep: the reputation of the author who posted the comment
  • answers: a list of answers to the question, where each answer is represented as a dictionary with the following fields:
    • id: the unique ID of the answer
    • body: the body of the answer
    • score: the score of the answer
    • ts: the timestamp when the answer was posted
    • author: the name of the author who posted the answer
    • author_rep: the reputation of the author who posted the answer
    • accepted: whether the answer has been accepted
    • comments: a list of comments on the answer, where each comment is represented as a dictionary with the following fields:
      • id: the unique ID of the comment
      • body: the body of the comment
      • at: the timestamp when the comment was posted
      • score: the score of the comment
      • author: the name of the author who posted the comment
      • author_rep: the reputation of the author who posted the comment

Preprocessing

No preprocessing was done; this dataset contains raw, unfiltered data. As a result, body fields may contain redundant line breaks or spacing.
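Since the card warns about redundant line breaks and spacing, consumers may want to normalize whitespace in the body fields themselves. A minimal sketch follows; the function name and exact cleanup rules are assumptions, not part of this dataset, and more aggressive cleaning should be validated against the LaTeX markup inside the bodies.

```python
import re

def normalize_whitespace(text):
    """Collapse common extraction artifacts: trailing spaces and blank-line runs."""
    text = re.sub(r"[ \t]+\n", "\n", text)   # drop trailing spaces before newlines
    text = re.sub(r"\n{3,}", "\n\n", text)   # collapse 3+ newlines to one blank line
    return text.strip()
```

These rules only touch literal whitespace, so inline math such as `$\frac{1}{2}$` is left intact.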

License

This dataset is released under the WTFPL license.

Contact

For any questions or comments about the dataset, please contact nurik040404@gmail.com.
