qid (int64) | question (large_string) | author (large_string) | author_id (int64) | answer (large_string)
---|---|---|---|---|
1,038,060 | <p>Can anyone help me with this question? I knew the answer once, but I have tried to solve it myself and didn't succeed.
What is the regular expression for this language: $L =$ all words that contain $00$ or $11$ but not both?</p>
<p>Thank you!</p>
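<p>For reference, a minimal membership checker for $L$ over the alphabet $\{0,1\}$ (a sketch: a word is in $L$ iff it contains 00 or 11, but not both):</p>
<pre><code>def in_L(w: str) -> bool:
    # w is in L iff it contains "00" or "11", but not both
    return ("00" in w) != ("11" in w)

assert in_L("100")        # contains 00 only
assert in_L("0110")       # contains 11 only
assert not in_L("0011")   # contains both
assert not in_L("0101")   # contains neither
</code></pre>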
| Community | -1 | <p>Your proof looks <strong>simple</strong> because you assumed the not-so-<strong>simple</strong> result that $A_n$ is <strong>simple</strong> for $n\geq 5$...</p>
<p>Actually something more is true... </p>
<p>Suppose that $H\leq S_n$ has index $m$ with $m< n$. Then the action of $S_n$ on the cosets of $H$ gives a homomorphism $\eta: S_n \rightarrow S_m$.</p>
<p>As $Ker(\eta)$ is a normal subgroup of $S_n$ we should have $Ker(\eta)=1$ or $Ker(\eta)=A_n$.</p>
<p>Suppose $Ker(\eta)=(1)$; then $S_n$ would embed in $S_m$, which is not possible as $m<n$.</p>
<p>So $Ker(\eta)=A_n$, and we know that $Ker(\eta)\subset H$, i.e., $A_n\leq H< S_n$.</p>
<p>As $A_n$ is a maximal subgroup of $S_n$, we have $H=A_n$.</p>
<p>So, do you see what I am concluding? (Namely: for $n\geq 5$, the only subgroup of $S_n$ of index strictly between $1$ and $n$ is $A_n$.)</p>
|
1,290,316 | <p>Let $F_1,\dots,F_n$ be a finite family of closed sets; then we know that $\bigcup_{i=1}^nF_i$ is closed.</p>
<p>Proving that statement is equivalent to proving:</p>
<blockquote>
<p>If $p$ is a limit point of $\bigcup_{i=1}^nF_i$ then $p\in\bigcup_{i=1}^nF_i$</p>
</blockquote>
<p>It is easy to prove the contrapositive: if $p\notin\bigcup_{i=1}^nF_i$ then $p$ is not a limit point of $\bigcup_{i=1}^nF_i$</p>
<p>However, I tried the following direct proof, which I am sure is wrong because it does not use the finite nature of the union. I want to know where I am making a mistake in the following chain of reasoning:</p>
<p>If $p$ is a limit point of $\bigcup_{i=1}^nF_i$ then in every neighbourhood there is a point $q\neq p$, such that $q\in \bigcup_{i=1}^nF_i$. Since $q\in \bigcup_{i=1}^nF_i$ then $q$ belongs to at least one $F_i$, then (and this is what I suspect is false) $p$ is a limit point of $F_i$. Since $F_i$ is closed, then $p\in F_i$, then $p\in\bigcup_{i=1}^nF_i$.</p>
<p>Therefore: If $p$ is a limit point of $\bigcup_{i=1}^nF_i$ then $p\in\bigcup_{i=1}^nF_i$.</p>
<p>Thank you very much in advance.</p>
| DanielWainfleet | 254,665 | <p>Use the infinite pigeon-hole principle: if an infinite set is presented as the union of finitely many subsets, at least one subset is infinite. So if a sequence $p(n)$ converges to $p$, where each $p(n)$ belongs to some $F(j)$, then, for at least one $j$, there are infinitely many $n$ for which $p(n)$ belongs to $F(j)$, so $p$ belongs to this $F(j)$.</p>
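<p>In symbols (a sketch of the same step, in a metric space where limit points are reached by sequences): if $p_n \to p$ with each $p_n \in \bigcup_{j=1}^{k}F_j$, then</p>
<p>$$\exists\, j_0:\ p_{n_i}\in F_{j_0} \text{ for infinitely many } i \;\Longrightarrow\; p=\lim_i p_{n_i} \in \overline{F_{j_0}} = F_{j_0}\subseteq \bigcup_{j=1}^{k}F_j.$$</p>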
|
4,465,504 | <p>Take an invertible formal series <span class="math-container">$f\in \mathbb{Z}_p[[T]]$</span> with inverse <span class="math-container">$g\in \mathbb{Z}_p[[T]]$</span>, and let <span class="math-container">$x\in \mathbb{Z}_p$</span> be such that the value of <span class="math-container">$f$</span> evaluated at <span class="math-container">$x$</span> exists. I have a few questions:</p>
<ol>
<li>Is <span class="math-container">$f(x)$</span> a <span class="math-container">$p$</span>-adic integer? Since <span class="math-container">$\mathbb{Z}_p$</span> is a closed set I thought it was the case, but I have some doubts.</li>
<li>Does <span class="math-container">$g(x)$</span> also exist?</li>
<li>If <span class="math-container">$g(x)$</span> does exist, is it the inverse of <span class="math-container">$f(x)$</span>? I was thinking of evaluating the identity <span class="math-container">$f(T) g(T) =1$</span> at <span class="math-container">$x$</span>, but I am still not sure of myself.</li>
</ol>
<p>I'm new to the p-adic world and to the formal series world, so thanks a lot.</p>
| reuns | 276,986 | <p>You meant the <strong>multiplicative</strong> inverse; the <em>compositional</em> inverse is a different notion that also often exists for formal series.</p>
<p>If the series <span class="math-container">$f(x)$</span> converges then it does so to an element of <span class="math-container">$\Bbb{Z}_p$</span>, yes.</p>
<p>For question 2, try <span class="math-container">$f(T)=1-T$</span> and <span class="math-container">$x=1$</span>: here <span class="math-container">$g(T)=\sum_{n\ge 0}T^n$</span>, and <span class="math-container">$g(1)$</span> does not converge, since its terms do not tend to <span class="math-container">$0$</span>.</p>
<p>If <span class="math-container">$g(x)$</span> converges as well then <span class="math-container">$f(x)g(x)=1$</span>, yes (consider the truncated series, change the order of summation to make <span class="math-container">$\sum_n a_n b_{k-n}=0$</span> appear, show that the remainder is small).</p>
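<p>A quick numeric sanity check of the last point, with <span class="math-container">$p=5$</span>, <span class="math-container">$f(T)=1-T$</span>, <span class="math-container">$g(T)=\sum_{n\ge 0}T^n$</span>, and <span class="math-container">$x=5$</span>, where both series converge (a sketch):</p>
<pre><code># verify f(x) * g(x) = 1 in Z_5, working modulo 5^K with a truncated series
p, K = 5, 8
M = p**K
x = p                                          # x in p*Z_p, so sum x^n converges
fx = (1 - x) % M                               # f evaluated at x
gx = sum(pow(x, n, M) for n in range(K)) % M   # truncation of g; the tail is O(p^K)
assert fx * gx % M == 1
</code></pre>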
|
3,972,907 | <p>Reviewing trig, I came across this problem:
<span class="math-container">$\text{Solve for all real $x$ such that } 2\sqrt{2} \cos\left(\frac{x}{2}\right)=\cos(x) + 2.$</span></p>
<p>The first thing I did was use the cosine half-angle identity to get this form:</p>
<p><span class="math-container">$$2\sqrt{2}\sqrt{\frac{\cos(x)+1}{2}}=\cos(x)+2$$</span></p>
<p>I then squared both sides of the equation (I believe this is an error but can't pinpoint why).</p>
<p>The rest looks like this:</p>
<p><span class="math-container">$$8\frac{\cos(x)+1}{2}=\cos^2(x)+4\cos(x)+4$$</span></p>
<p><span class="math-container">$$4\cos(x)+4=\cos^2(x)+4\cos(x)+4$$</span></p>
<p><span class="math-container">$$0=\cos^2(x)$$</span></p>
<p>Then <span class="math-container">$x = \left\{\pm \frac{\pi}{2}+2\pi n \ \middle|\ n \in\mathbb Z \right\}$</span></p>
<p>Now, I believe this set does contain all <span class="math-container">$x$</span> which satisfy the original equation, but it definitely contains invalid solutions as well.</p>
<p>What specifically did I do wrong here and, if possible, are there any hard and fast rules about when algebraic operations on trig equations will change the solution set?</p>
| José Carlos Santos | 446,262 | <p>If you are trying to solve an equation of the type <span class="math-container">$f(x)=g(x)$</span>, it is perfectly fine to pass to <span class="math-container">$f(x)^2=g(x)^2$</span>. The solutions of the first equation will also be solutions of the second one. But there is a real possibility of creating new ones. An extreme case of this is the equation <span class="math-container">$x=-x$</span>, whose only solution is <span class="math-container">$x=0$</span>. But <em>every</em> real number is a solution of the equation <span class="math-container">$x^2=(-x)^2$</span>.</p>
<p>In your specific case, take all solutions that you got and check which ones are solutions of the original equation.</p>
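<p>A quick numeric filter for that check (a sketch in Python; the candidate set is the one derived in the question):</p>
<pre><code>import math

def lhs(x): return 2*math.sqrt(2)*math.cos(x/2)
def rhs(x): return math.cos(x) + 2

# candidates from cos^2(x) = 0, i.e. x = ±π/2 + 2πn
for n in range(-2, 3):
    for s in (1, -1):
        x = s*math.pi/2 + 2*math.pi*n
        print(round(x/math.pi, 1), math.isclose(lhs(x), rhs(x), abs_tol=1e-9))
# only x ≡ ±π/2 (mod 4π) satisfy the original equation; the rest came from squaring
</code></pre>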
|
3,972,907 | <p>Reviewing trig, I came across this problem:
<span class="math-container">$\text{Solve for all real $x$ such that } 2\sqrt{2} \cos\left(\frac{x}{2}\right)=\cos(x) + 2.$</span></p>
<p>The first thing I did was use the cosine half-angle identity to get this form:</p>
<p><span class="math-container">$$2\sqrt{2}\sqrt{\frac{\cos(x)+1}{2}}=\cos(x)+2$$</span></p>
<p>I then squared both sides of the equation (I believe this is an error but can't pinpoint why).</p>
<p>The rest looks like this:</p>
<p><span class="math-container">$$8\frac{\cos(x)+1}{2}=\cos^2(x)+4\cos(x)+4$$</span></p>
<p><span class="math-container">$$4\cos(x)+4=\cos^2(x)+4\cos(x)+4$$</span></p>
<p><span class="math-container">$$0=\cos^2(x)$$</span></p>
<p>Then <span class="math-container">$x = \left\{\pm \frac{\pi}{2}+2\pi n \ \middle|\ n \in\mathbb Z \right\}$</span></p>
<p>Now, I believe this set does contain all <span class="math-container">$x$</span> which satisfy the original equation, but it definitely contains invalid solutions as well.</p>
<p>What specifically did I do wrong here and, if possible, are there any hard and fast rules about when algebraic operations on trig equations will change the solution set?</p>
| Michael Hardy | 11,667 | <p><span class="math-container">$$2\sqrt{2} \cos\left(\frac{x}{2}\right)=\cos(x) + 2.$$</span></p>
<p>By the double-angle formula for the cosine, which says <span class="math-container">$\cos(2\theta)= 2\cos^2\theta-1,$</span> applied in the case where <span class="math-container">$\theta = x/2,$</span> we get:</p>
<p><span class="math-container">$$ 2\sqrt{2} \cos\left(\frac{x}{2}\right)= \left[ 2\cos^2\left(\frac x 2\right) - 1\right] + 2. $$</span></p>
<p><span class="math-container">$$
2\sqrt 2 \, u = 2u^2 + 1.
$$</span></p>
<p>This is a quadratic equation. Solve it for <span class="math-container">$u$</span> and then write <span class="math-container">$\cos\dfrac x 2 = \text{the solution for $u$}$</span> and solve that for <span class="math-container">$x.$</span></p>
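<p>To check the algebra with a CAS (a sketch using sympy):</p>
<pre><code>import sympy as sp

u, x = sp.symbols('u x', real=True)
print(sp.solve(sp.Eq(2*sp.sqrt(2)*u, 2*u**2 + 1), u))   # [sqrt(2)/2], a double root
# back-substitute u = cos(x/2); the full solution family is x = ±π/2 + 4πn
print(sp.solveset(sp.Eq(sp.cos(x/2), sp.sqrt(2)/2), x, sp.S.Reals))
</code></pre>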
|
2,461,918 | <p>Often a function (real, say) is written without mentioning its domain and co-domain; just the rule $y=f(x)$ is given. In that case, how does one determine the domain, co-domain and range? For example, consider $f(x)=1/x$.</p>
| nonuser | 463,553 | <p>If you want to calculate the domain, it is usually a problem with an even root or with a denominator. </p>
<p>Say you have $f:\mathbb{R}\to \mathbb{R}$ and </p>
<p>a) $f(x) = {x+3\over x-2}$, then $D_f = \mathbb{R}\setminus \{2\}$ </p>
<p>b) $f(x) = \sqrt[4]{x-6}$, then $D_f = [6,\infty)$. </p>
<p>The hard part is the range. Take the first example: let's see if $2$ is in the range. Then you have to find an $x$ such that $f(x)=2$, so you must solve the equation:</p>
<p>$$ {x+3\over x-2}=2$$
which is solvable ($x=7$), and thus $2$ is in the range. But you can't really do that for each number directly. You must solve a more general equation: take any $a$ and see for which $a$ the equation $f(x)=a$ is solvable. So we have:
$$ {x+3\over x-2} =a\;\; \Longrightarrow \;\; x= {2a+3\over a-1}$$
and thus it is solvable iff $a\ne 1$, so the range is $\mathbb{R}\setminus\{1\}$.</p>
<p>Actually, if you are familiar with inverse functions, finding the range amounts to finding the domain of the inverse function. </p>
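<p>The "solve $f(x)=a$" recipe is easy to automate with a CAS, e.g. (a sketch):</p>
<pre><code>import sympy as sp

x, a = sp.symbols('x a', real=True)
print(sp.solve(sp.Eq((x + 3)/(x - 2), a), x))   # [(2*a + 3)/(a - 1)], defined iff a != 1
</code></pre>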
|
1,363,882 | <p>I am aware that the area under the curve of $\frac{1}{x}$ is infinite yet the area under the curve of $\frac{1}{x^2}$ is finite. </p>
<p>Calculus- and series-wise, I understand what is going on, but I can't seem to get a good geometric intuition for the problem.
Both curves can be shown to converge to $0$ (the curves themselves, not the area), and on the interval from $1$ to infinity the two curves look qualitatively the same.
Can someone please provide me with a good geometric intuition of what's going on? I can't find anything on the web; people seem to not want to explain it geometrically.</p>
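<p>For a purely numerical contrast (an illustration of the claim above): $\int_1^T \frac{dx}{x} = \ln T$ grows without bound, while $\int_1^T \frac{dx}{x^2} = 1 - \frac1T \to 1$.</p>
<pre><code>import math

# partial areas on [1, T]: ln T is unbounded, 1 - 1/T tends to 1
for T in (10, 100, 10**4, 10**8):
    print(T, math.log(T), 1 - 1/T)
</code></pre>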
| ZenoCosini | 254,598 | <p>I do not think there is any geometric intuition behind the convergence you mentioned, any more than there is geometric intuition that resolves Zeno's paradox of the Tortoise and Achilles. </p>
<p>Indeed, the convergence is due to how the integral is defined - in terms of Riemann sums (here I assume you are talking about Riemann integration). </p>
<p>Perhaps it is no accident that convergence of series was rigorously studied (e.g., by Abel) exactly when analytic methods took over from geometric ones (see M. Kline, Mathematical Thought from Ancient to Modern Times, chapter 40). </p>
|
141,101 | <p>I am given a high-dimensional input $x \in \mathbb{R}^m$ where $m$ is a big number. Linear regression can be applied, but in general it is expected that a lot of these dimensions are actually irrelevant.</p>
<p>I ought to find a method to model the function $y = f(x)$ and at the same time uncover which dimensions contribute to the output. The overall hint is to apply the <strong>$L_1$-norm Lasso regularization</strong>.</p>
<p>$$L^{\text lasso}(\beta) = \sum_{i=1}^n (y_i - \phi(x_i)^T \beta)^2 + \lambda \sum_{j = 1}^k | \beta_j |$$</p>
<p>Minimizing $L^{\text lasso}$ is in general hard; for that reason I should apply gradient descent. My approach so far is the following:</p>
<p>In order to minimize the term, I chose to compute the gradient and set it to $0$, i.e.</p>
<p>$$\frac{\partial}{\partial \beta} L^{\text lasso}(\beta) = -2 \sum_{i = 1}^n \phi(x_i)(y_i - \beta)$$</p>
<p>Since this cannot be solved in closed form, I want to apply gradient descent here. Gradient descent takes a function $f(x)$ as input, and $L(\beta)$ will take its place.</p>
<p>My problems are:</p>
<ul>
<li><p>is the gradient I computed correct? I left out the regularization term at the end, which I guess is a problem, but I also have trouble computing the gradient of that term.</p>
<p>In addition, I may replace $| \beta_j |$ by the smoothed function $l(x)$, where $l(x) = |x| - \varepsilon/2$ if $|x| \geq \varepsilon$, else $x^2/(2 \varepsilon)$</p></li>
<li><p>one step in the gradient descent is: $\beta \leftarrow \beta - \alpha \cdot \frac{g}{|g|}$, where $g = \frac{\partial}{\partial \beta}f(x))^T$</p>
<p>I don't get this one: when I transpose the gradient I have computed, the term cannot be resolved due to a dimension mismatch</p></li>
</ul>
| shyamupa | 111,694 | <p>Have you tried using the shooting algorithm for optimizing the lasso-regularized loss instead of gradient descent?</p>
<p>It involves a coordinate-wise search for the minimizer. This <a href="http://gautampendse.com/software/lasso/webpage/pendseLassoShooting.pdf" rel="nofollow">link</a> has some details and also a link to their code.</p>
<p>Hope this helps.</p>
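<p>For reference, a minimal sketch of the shooting update in the textbook coordinate-descent form with soft-thresholding (my reconstruction, not necessarily identical to the code linked above):</p>
<pre><code>import numpy as np

def soft_threshold(z, g):
    # S(z, g) = sign(z) * max(|z| - g, 0)
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_shooting(X, y, lam, n_iter=100):
    # Coordinate descent for min_b ||y - X b||^2 + lam * ||b||_1
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # warm start: least squares
    for _ in range(n_iter):
        for j in range(k):
            r = y - X @ b + X[:, j] * b[j]     # residual with feature j removed
            z = X[:, j] @ r
            b[j] = soft_threshold(z, lam / 2) / (X[:, j] @ X[:, j])
    return b
</code></pre>
<p>Each coordinate update is exact, so no step size is needed; the $\lambda/2$ threshold comes from the squared-loss convention used in the question.</p>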
|
1,318,552 | <blockquote>
<p>Let $f$ be a twice differentiable function on $\left[0,1\right]$ satisfying $f\left(0\right)=f\left(1\right)=0$. Additionally $\left|f''\left(x\right)\right|\leq1$ in $\left(0,1\right)$. Prove that $$\left|f'\left(x\right)\right|\le\frac{1}{2},\quad\forall x\in\left[0,1\right]$$</p>
</blockquote>
<p>The hint we were given is to expand into a first order Taylor polynomial at the minimum of $f'$. So I tried doing that:</p>
<p>As $f'$ is differentiable, it is continuous, and attains a minimum on $\left[0,1\right]$. Thus we can denote the minimum point by $x_{0}$, and expanding into a first-order Taylor polynomial around it gives us, for some $c$ between $x$ and $x_0$:</p>
<p>$$T_{x_{0}}\left(x\right)=f\left(x_{0}\right)+f'\left(x_{0}\right)\left(x-x_{0}\right)+\frac{f''\left(c\right)}{2}\left(x-x_{0}\right)^{2}$$</p>
<p>Now at $x=0$ we have
$$T_{x_{0}}\left(0\right)=f\left(x_{0}\right)-x_{0}f'\left(x_{0}\right)+x_{0}^{2}\frac{f''\left(c\right)}{2}=0$$</p>
<p>And at $x=1$ we have $$T_{x_{0}}\left(1\right)=f\left(x_{0}\right)+\left(1-x_{0}\right)f'\left(x_{0}\right)+\left(1-x_{0}\right)^{2}\frac{f''\left(c\right)}{2}=0$$</p>
<p>and I'm pretty much stuck here..</p>
<p>So I tried a different approach using the Mean Value Theorem directly. By Rolle's theorem I know the derivative is $0$ somewhere (as $f(0)=f(1)$), say at $x_0$; so by the mean value theorem $\frac{f'(x)}{x-x_0} =f''(c)\leq1$ for some $c$, and hence $f'(x)\le x-x_0$ for $x>x_0$.</p>
<p>But using this approach as well I'm not sure how to proceed, as this gives me the desired property only in a $1/2$-neighborhood of $x_0$...</p>
<p>Any help?</p>
| Ted Shifrin | 71,348 | <p>You were on the right track, but, as I suggested, the hint should have been to expand about the point $x_0$ where $|f'(x_0)|$ is a maximum.</p>
<p>Fix any $x_0\in [0,1]$ and, using Taylor's Theorem, write
$$f(x)=f(x_0) + f'(x_0)(x-x_0) + \frac12 f''(c)(x-x_0)^2\quad\text{for some $c$ between $x_0$ and $x$.}$$
Plugging in $x=0$ and $x=1$ respectively, we arrive at
$$f'(x_0) = \frac12\big(f''(c_1)x_0^2 - f''(c_2)(1-x_0)^2\big) \quad\text{for some $c_1$ and $c_2$}.$$
Therefore $|f'(x_0)|\le \dfrac12\big(x_0^2 + (1-x_0)^2\big) \le \dfrac12$, since $x_0\in [0,1]$ is arbitrary.</p>
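<p>As a sanity check, the bound is sharp:</p>
<p>$$f(x)=\tfrac12 x(x-1):\quad f(0)=f(1)=0,\quad |f''|\equiv 1,\quad f'(x)=x-\tfrac12,\quad |f'(0)|=|f'(1)|=\tfrac12 .$$</p>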
|
1,905,308 | <p>In the book "A Course in Metric Geometry" (By Dmitri Burago, Yuri Burago Sergei Ivanov), there is a short proof of lower semicontinuity of length induced by a metric: (Prop 2.3.4, <a href="https://books.google.com/books?id=dRmIAwAAQBAJ&pg=PA35" rel="nofollow noreferrer">pg 35</a>)</p>
<p>Let $\gamma_j:[a,b] \to (X,d)$ converge pointwise to $\gamma$. Fix $\epsilon >0$. Let $Y=\{y_1,...,y_N \}$ be a partition of $[a,b]$ such that $\Sigma(Y) \ge L(\gamma)-\epsilon$, where $$\Sigma(Y)=\sum_i d(\gamma(y_i),\gamma(y_{i+1})).$$
Define $\Sigma_j(Y)=\sum_i d(\gamma_j(y_i),\gamma_j(y_{i+1}))$, and choose $j$ large enough so that the inequality $d(\gamma(y_i),\gamma_j(y_i))< \epsilon$ holds for all $y_i \in Y$. Then:</p>
<p>$$ L(\gamma) \le \Sigma(Y) + \epsilon \le \Sigma_j(Y) + 2N\epsilon +\epsilon \le L(\gamma_j)+(2N+1)\epsilon$$ where the $2N\epsilon$ term comes from going through $\gamma(y_i) \to \gamma_j(y_i) \to \gamma_j(y_{i+1}) \to \gamma(y_{i+1})$ on each of the $N$ segments.</p>
<p>The authors then say we take $\epsilon \to 0$ and finish. </p>
<p><strong>My problem:</strong></p>
<p>$N$ is in fact a function of $\epsilon$. How can I be sure that $N(\epsilon)\epsilon \to 0$ when $\epsilon \to 0$?</p>
<p>More formally, we can think of $N(\epsilon)$ as the <strong>minimal</strong> size of a partition which induces an $\epsilon$-approximation of $\gamma$. Clearly this quantity increases as $\epsilon$ decreases, and without any knowledge of how "wild" $\gamma$ is I see no way to gain control over it.</p>
| Asaf Shachar | 104,576 | <p>Yes, there is a mistake; see the errata <a href="http://www.pdmi.ras.ru/~svivanov/papers/bbi-errata.pdf" rel="nofollow">here</a>.</p>
<p>The solution is to choose $j$ large enough so that $d(\gamma(y_i),\gamma_j(y_i))< \frac{\epsilon}{N}$. With this choice the middle term becomes $2N\cdot\frac{\epsilon}{N}=2\epsilon$, so $L(\gamma)\le L(\gamma_j)+3\epsilon$ with no dependence on $N$, and letting $\epsilon\to 0$ finishes the proof.</p>
|
159,438 | <p>Can it be easily proved that the following series converges/diverges?</p>
<p>$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$</p>
<p>I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.</p>
| Sangchul Lee | 9,340 | <p>Let $\mu$ be the <em><a href="http://mathworld.wolfram.com/IrrationalityMeasure.html">irrationality measure</a></em> of $\pi^{-1}$. Then for any given $s < \mu$, we have sequences $(p_n)$ and $(q_n)$ of integers such that $0 < q_n \uparrow \infty$ and </p>
<p>$$\left| \frac{1}{\pi} - \frac{2p_n + 1}{2q_n} \right| \leq \frac{1}{q_n^{s}}.$$</p>
<p>Rearranging, we have</p>
<p>$$ \left| \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right| \leq \frac{\pi}{q_n^{s-1}}.$$</p>
<p>This shows that</p>
<p>$$ \left|\tan q_n\right| = \left| \tan \left( \frac{\pi}{2} + \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right) \right| \gg \frac{1}{\left| \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right|} \gg q_n^{s-1}, $$</p>
<p>hence</p>
<p>$$ \left| \frac{\tan q_n}{q_n} \right| \geq C q_n^{s-2}.$$</p>
<p>Therefore the series diverges if $\mu > 2$. But as far as I know, there is no known result for lower bounds of $\mu$, and indeed we cannot exclude the possibility that $\mu = 2$.</p>
<p>P.S. A similar consideration shows that, for $r > s > \mu$, we have</p>
<p>$$ \left| \frac{\tan k}{k^{r}} \right| \leq \frac{C}{k^{r+1-s}}.$$</p>
<p>Thus if $r > \mu$, then</p>
<p>$$ \sum_{k=1}^{\infty} \frac{\tan k}{k^r} $$</p>
<p>converges absolutely!</p>
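<p>Numerics prove nothing here, but the erratic behaviour is easy to see (a quick experiment; note that floating-point $\tan k$ is itself delicate for large $k$):</p>
<pre><code>import math

s = 0.0
for k in range(1, 200001):
    s += math.tan(k)/k
    if k % 40000 == 0:
        print(k, s)
# partial sums jump whenever k lands unusually close to an odd multiple of π/2
</code></pre>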
|
765,738 | <p>I am trying to prove the following inequality:</p>
<p>$$(\sqrt{a} - \sqrt{b})^2 \leq \frac{1}{4}(a-b)(\ln(a)-\ln(b))$$</p>
<p>for all $a>0, b>0$.</p>
<p>Does anyone know how to prove it?</p>
<p>Thanks a lot in advance!</p>
| J. J. | 3,776 | <p>Since the inequality is homogeneous and invariant upon swapping the variables, we may assume that $b=1$ and $a \ge 1$. Then it remains to show that
$$f(a) = \frac{1}{4}(a-1)\log(a) - (\sqrt{a} - 1)^2 \ge 0.$$
Notice that $f(1) = 0$. Therefore we are done if we can show that $f$ is increasing.
Differentiating gives
$$f'(a) = \frac{1}{4} \log(a) + \frac{1}{4} \frac{a-1}{a} - \frac{\sqrt{a} - 1}{\sqrt{a}}$$
and $f'(1) = 0$. Thus we are done if we can show that $f'$ is increasing. Differentiating once more gives
$$f''(a) = \frac{1}{4a} + \frac{1}{4a^2} - \frac{1}{2a\sqrt{a}}.$$
Now $f''(a) \ge 0 \Leftrightarrow a + 1 - 2 \sqrt{a} \ge 0 \Leftrightarrow (a+1)^2 \ge 4a \Leftrightarrow (a-1)^2 \ge 0$, which is true.</p>
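<p>A quick random check of the original inequality (both sides are symmetric in $a,b$, and the right side is nonnegative since $a-b$ and $\ln a-\ln b$ share a sign):</p>
<pre><code>import math, random

for _ in range(10**5):
    a = random.uniform(1e-3, 1e3)
    b = random.uniform(1e-3, 1e3)
    lhs = (math.sqrt(a) - math.sqrt(b))**2
    rhs = 0.25*(a - b)*(math.log(a) - math.log(b))
    assert lhs <= rhs + 1e-9*(1 + rhs)   # small tolerance for rounding
print("no counterexample found")
</code></pre>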
|
870,174 | <p>If I have $6$ children and $4$ bedrooms, how many ways can I arrange the children if I want a maximum of $2$ kids per room?</p>
<p>The problem is that there are two empty slots, and these empty slots are not unique.</p>
<p>So, I assumed there are $8$ objects, $6$ kids and $2$ empties.</p>
<p>$$C_2^8 \cdot C_2^6 \cdot C_2^4 \cdot C_2^2 = 2520.$$</p>
<p>Subtract off combinations where empties are together:</p>
<p>$$2520 - 4 \cdot C_2^6 \cdot C_2^4 \cdot C_2^2 = 2160$$</p>
<p>Divide by $2!$ to get rid of identical combinations due to identical empties and I get $1080$.</p>
<p>Is this right? </p>
| awkward | 76,172 | <p>Here is an approach via exponential generating functions. More generally, let's say the number of ways to place $r$ children in the four rooms is $a_r$. Define $$f(x) = \sum_{r=0}^{\infty} \frac{a_r}{r!}x^r$$
In the problem where no room may remain empty, it is evident (after a little thought) that
$$f(x) = \left(x + \frac{1}{2!} x^2 \right)^4$$ Expanding the product, we find the coefficient of $x^6$ is $3/2$, so $a_6 = 6! \cdot (3/2) = 1080$.</p>
<p>In the problem where rooms may remain empty, the generating function is just a little different:$$f(x) = \left( 1+ x + \frac{1}{2!} x^2 \right)^4$$
In this case the coefficient of $x^6$ on expansion is $2$, so $a_6 = 6! \cdot 2 = 1440$.</p>
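<p>Both counts are small enough to verify by brute force ($4^6 = 4096$ assignments):</p>
<pre><code>from itertools import product

rooms = range(4)
ok = [a for a in product(rooms, repeat=6)              # a room for each child
      if all(a.count(r) <= 2 for r in rooms)]          # at most 2 per room
print(len(ok))                                          # 1440: empty rooms allowed
print(sum(all(r in a for r in rooms) for a in ok))      # 1080: no room empty
</code></pre>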
|
1,593,007 | <p>Maybe this is a stupid question, but I have the following expression:</p>
<p>$ 10^{-18}(e^{50.9702078 \cdot 0.75}) = 10^{-18}(4\cdot 10^{16}) $</p>
<p>How would I go about simplifying the big exponential on the left to what's on the right, with the use of a calculator?</p>
<p>Thanks a lot! </p>
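<p>For instance, evaluating the exponent first: $50.9702078 \cdot 0.75 = 38.22765585$ and $e^{38.22765585} \approx 4.0 \cdot 10^{16}$; in Python:</p>
<pre><code>import math

print(50.9702078 * 0.75)            # 38.22765585
print(math.exp(50.9702078 * 0.75))  # ≈ 4.000e+16, i.e. about 4·10^16
</code></pre>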
| Rory Daulton | 161,807 | <p><strong>SHORT ANSWER:</strong> In your first equation, $x=-1,\ y=0$ seems to be a solution. It would be, if $y'$ were also defined. However, if $y=0$ for any $x$ then $y'$ will be undefined there, so that is not actually a solution. The only restrictions on your second equation are $y=0$ and both $y$ and $y'$ are defined, so the solutions for your equations are indeed the same.</p>
<p><strong>SOME DETAILS:</strong> You already know that separation of variables give us the solution for the first equation:</p>
<p>$$y=\pm\sqrt{x^2+2x+C}$$</p>
<p>for some real constant $C$. The derivative of this is</p>
<p>$$y'=\pm\frac{x+1}{\sqrt{x^2+2x+C}}=\pm\frac{x+1}y$$</p>
<p>So we see that for any $x$, if $y=0$ then $y'$ is undefined. Depending on the values of $x$ and $C$ the value of $y$ itself may not be defined, and we see the same conditions for both $y$ and $y'$ to exist are also conditions for your second equation to make sense. Therefore the solutions to those equations are exactly the same.</p>
<p>Here is another way to look at it. It seems that $x=-1,\ y=0$ should be a solution to your first equation. However, as soon as $x$ moves away from $-1$ then $y$ can no longer be zero. As we look at how quickly $y$ must move away from zero, we see that the demands on the rate of change of $y$ are too contradictory to satisfy your first equation. So our apparent solution $x=-1,\ y=0$ was not actually a solution.</p>
<p>Let's look more at $y'$. If we substitute $x=-1,\ y=0$ into our solution $y=\pm\sqrt{x^2+2x+C}$ we get $C=1$. So the function(s) is/are actually</p>
<p>$$y=\pm\sqrt{x^2+2x+1}=\pm\sqrt{(x+1)^2}=\pm|x+1|$$</p>
<p>I'm sure you know that the absolute value function has no derivative at zero, since the left-hand and right-hand derivatives there are not equal. As I said, the demands on the rate of change of $y$ where $y=0$ are inconsistent, so $y'$ does not exist there.</p>
|
2,359,408 | <p>Question:</p>
<p>Let $f: \mathbb{R} \to \mathbb{R} \times \mathbb{R}$ via $ f(x) = (x+2, x-3)$. Is $f$ injective? Is $f$ surjective?</p>
<p>I was able to prove that $f$ is injective. However, I am not quite sure whether $f$ is surjective. If it is surjective, could someone please tell me how to prove that? If not, could someone provide a counterexample?</p>
<p>Thanks </p>
| Furrane | 373,901 | <p>I think you need to understand the intuition behind surjectivity:</p>
<p>A function is surjective if it fully "fills" its codomain (the "end space"), here $\mathbb{R} \times \mathbb{R}$. </p>
<p>But since it takes $x$ values in $\mathbb{R}$, it won't be able to "fill" $\mathbb{R} \times \mathbb{R}$.</p>
<p>Here your function $f(x) = (x+2, x-3)$ maps $x$ to a point on the line of equation $y=x-5$ in $\mathbb{R} \times \mathbb{R}$, so everything outside of that line is not reached, hence $f$ is not surjective. For instance, $(0,0)$ is never attained: $x+2=0$ forces $x=-2$, but then $x-3=-5\neq 0$.</p>
<p>Another thing you should keep in mind: for a <em>linear or affine</em> map to be bijective (injective and surjective), the starting space and ending space need to have the same dimension, so an affine $f:\mathbb{R} \to \mathbb{R} \times \mathbb{R}$ can be injective but never surjective, since $\dim(\mathbb{R}) \ne \dim(\mathbb{R} \times \mathbb{R})$. (For arbitrary maps of sets this dimension argument does not apply; set-theoretic bijections between $\mathbb{R}$ and $\mathbb{R} \times \mathbb{R}$ do exist.)</p>
<p>I hope those quick tips helped you get a better understanding of those notions.</p>
|
1,910,085 | <p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use proof by induction to solve this problem. The base case is obviously $n=0$: $4^0 + 1 = 2$, and $2$ is not divisible by $3$.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
| MathIsNice1729 | 274,536 | <p>The statement is true for $n=0$. Now, let it be true for $n=k$. Also, if possible, let it be false for $n=k+1$. Then, $4^{k+1} \equiv -1 \pmod{3} \implies 4 \cdot 4^k \equiv -1 \pmod{3} \implies 4^k \equiv -4 \pmod{3} \equiv -1 \pmod{3}$ (since $4^{-1} \equiv 4 \pmod{3}$). So, $3 \mid 4^k+1$, contradicting the induction hypothesis. Hence, the statement is true for $n=k+1$, which completes the proof. </p>
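<p>The underlying fact is simply $4 \equiv 1 \pmod 3$, so $4^n + 1 \equiv 2 \pmod 3$ for every $n$; a one-line check:</p>
<pre><code>for n in range(20):
    assert (4**n + 1) % 3 == 2    # the remainder is always 2, never 0
print("4^n + 1 is not divisible by 3 for n = 0..19")
</code></pre>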
|
1,791,990 | <p>I have to prove that the integral</p>
<p>$I = \int_{0}^{+\infty}\sin(t^2)dt$ is convergent. Could you tell me if my argument is OK?</p>
<p>Let $t^2=u$ then $dt=\frac{du}{2\sqrt{u}}$</p>
<p>Now $$I = \int_{0}^{+\infty}\frac{\sin(u)du}{2\sqrt{u}}$$</p>
<p>Which is equal to $$\int_{0}^{1}\frac{\sin(u)du}{2\sqrt{u}} + \int_{1}^{+\infty}\frac{\sin(u)du}{2\sqrt{u}}$$</p>
<p>The first of these is convergent because the integrand extends continuously to $u=0$, by the limit</p>
<p>$$\lim_{u\to 0}\frac{\sin(u)}{2\sqrt{u}} = 0$$</p>
<p>The second is convergent by the Dirichlet test.</p>
<p>Is it correct?
Also, how does one find the value of this integral ($\sqrt{\frac{\pi}{8}}$)?</p>
| Mark Viola | 218,419 | <p>To evaluate the integral, we analyze the closed-contour integral $I$ given by</p>
<p>$$I=\oint_C e^{iz^2}\,dz$$</p>
<p>where $C$ is comprised of (i) the line segment from $0$ to $R$, (ii) the circular arc from $R$ to $R(1+i)/\sqrt{2}$, and (iii) the line segment from $R(1+i)/\sqrt{2}$ to $0$. </p>
<p>Since $e^{iz^2}$ is analytic in and on $C$, Cauchy's Integral Theorem guarantees that $I=0$. Then, we have</p>
<p>$$\int_0^R e^{ix^2}\,dx+\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}\,d\phi-\frac{1+i}{\sqrt{2}}\int_0^R e^{-x^2}\,dx=0 \tag 1$$</p>
<p>Letting $R\to \infty$, the second integral on the left-hand side of $(1)$ approaches zero. Therefore, we find that </p>
<p>$$\begin{align}
\int_0^\infty e^{ix^2}\,dx&=\frac{1+i}{\sqrt{2}}\int_0^\infty e^{-x^2}\,dx\\\\&=\frac{1+i}{\sqrt{2}}\frac{\sqrt{\pi}}{2} \tag 2
\end{align}$$</p>
<p>Finally, equating real and imaginary parts of $(2)$, we obtain</p>
<p>$$\int_0^\infty \sin(x^2)\,dx=\sqrt{\frac{\pi}{8}}$$</p>
<p>and</p>
<p>$$\int_0^\infty \cos(x^2)\,dx=\sqrt{\frac{\pi}{8}}$$</p>
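<p>Both values can be confirmed with a CAS; for instance, sympy evaluates these integrals in closed form (a quick check):</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
S = sp.integrate(sp.sin(x**2), (x, 0, sp.oo))
C = sp.integrate(sp.cos(x**2), (x, 0, sp.oo))
print(S, C)                                 # sqrt(2)*sqrt(pi)/4 each
print(sp.simplify(S - sp.sqrt(sp.pi/8)))    # 0
</code></pre>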
|
4,465,150 | <p>Let <span class="math-container">$A_1,A_2,…,A_n$</span> be events in a probability space <span class="math-container">$(\Omega,\Sigma,P)$</span>.</p>
<p>If <span class="math-container">$A_1,A_2,…,A_n$</span> are independent then <span class="math-container">$A_1^c,A_2^c,…,A_n^c$</span> are also independent, (where <span class="math-container">$A^c = \Omega \setminus A$</span>).</p>
<p>I have found a proof by induction for this exercise; however, I have not been able to understand the conclusion of the proof, which I have marked in red. That is, why can it be immediately concluded that <span class="math-container">$A_1^c , A_2^c ,..., A_{k+1}^c$</span> are independent? I would really appreciate it if someone could give me a clear explanation of what happens in that conclusion.</p>
<p><strong>Proof by induction.</strong></p>
<p><em>Basis for the Induction</em>.</p>
<p>If <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> are independent then <span class="math-container">$A_1^c$</span> and <span class="math-container">$A_2^c$</span> are independent.</p>
<p>Assume <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> are independent. Then
<span class="math-container">\begin{align*}
P(A_1^c \cap A_2^c)
&= 1 - P(A_1 \cup A_2) \\
&= 1 - P(A_1) - P(A_2) + P(A_1 \cap A_2) \\
&= 1 - P(A_1) - P(A_2) + P(A_1)P(A_2) \\
&= (1-P(A_1))(1-P(A_2)) \\
&= P(A_1^c)P(A_2^c).
\end{align*}</span></p>
<p><em>Induction Hypothesis.</em></p>
<p>This is our induction hypothesis:</p>
<p>If <span class="math-container">$A_1,A_2,…,A_k$</span> are independent then <span class="math-container">$A_1^c,A_2^c,…,A_k^c$</span> are independent.</p>
<p>Then we need to show:</p>
<p>If <span class="math-container">$A_1,A_2,…,A_{k+1}$</span> are independent then <span class="math-container">$A_1^c,A_2^c,…,A_{k+1}^c$</span> are independent.</p>
<p><em>Induction Step</em>.</p>
<p>This is our induction step.</p>
<p>Suppose <span class="math-container">$A_1,A_2,…,A_{k+1}$</span> are independent.</p>
<p>Then:
<span class="math-container">\begin{align}
P\left( {\bigcap_{i = 1}^{k + 1} A_i}\right) &= P\left( \bigcap_{i=1}^{k}A_i \cap A_{k+1} \right) \\
&= \prod_{i=1}^{k}P(A_i) \cdot P(A_{k+1})\\
&= P\left(\bigcap_{i=1}^{k}A_i\right) \cdot P(A_{k+1})
\end{align}</span></p>
<p>So we see that <span class="math-container">$\bigcap_{i=1}^{k}A_i$</span> and <span class="math-container">$A_{k+1}$</span> are independent.</p>
<p>So <span class="math-container">$\bigcap_{i=1}^{k}A_i$</span> and <span class="math-container">$A_{k+1}^c$</span> are independent.</p>
<p><span class="math-container">$\color{red}{\text{So, from the above results, we can see that} A_1^c,A_2^c,…,A_{k+1}^c \text{are independent}}.$</span></p>
| angryavian | 43,949 | <p>Partial attempt:</p>
<p>Let <span class="math-container">$B_1 := \bigcap_{i=1}^k A_i^c$</span> and <span class="math-container">$B_2 := A_{k+1}^c$</span>.</p>
<p><span class="math-container">\begin{align}
P\left(\bigcap_{i=1}^{k+1} A_i^c\right)
&= P(B_1 \cap B_2)
\\
&= 1 - P(B_1^c \cup B_2^c)
\\
&= 1 - P(B_1^c) - P(B_2^c) + P(B_1^c \cap B_2^c)
\\
&\overset{*}{=} 1 - P(B_1^c) - P(B_2^c) + P(B_1^c) P(B_2^c)
\\
&= P(B_1) P(B_2)
\\
&= P\left(\bigcap_{i=1}^{k} A_i^c\right) P(A_{k+1}^c)
\\
&= \prod_{i=1}^{k+1} P(A_i^c).
\end{align}</span></p>
<p>It remains to verify the starred equality <span class="math-container">$P(B_1^c \cap B_2^c) = P(B_1^c) P(B_2^c)$</span>.</p>
<p><span class="math-container">\begin{align}
P(B_1^c \cap B_2^c)
&= P\left(
\left(\bigcap_{i=1}^k A_i^c\right)^c
\cap A_{k+1}
\right)
\\
&= P\left(
\left(\bigcup_{i=1}^k A_i\right)
\cap A_{k+1}
\right)
\\
&\overset{?}{=} P\left(\bigcup_{i=1}^k A_i \right) P(A_{k+1})
\\
&= P(B_1^c) P(B_2^c).
\end{align}</span></p>
<p>For the "?" equality, I think you can show this by writing <span class="math-container">$\bigcup_{i=1}^k A_i$</span> as the disjoint union of intersections of <span class="math-container">$A_1, \ldots, A_k, A_1^c, \ldots, A_k^c$</span>.</p>
|
3,928,429 | <p>As titled, I was considering the minimization problem where <span class="math-container">$y(x)$</span> has both endpoints fixed. That is, minimizing <span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx$$</span></p>
<p>where <span class="math-container">$$\ y(a)=m, y(b)=n $$</span>for all <span class="math-container">$y(x)$</span>.</p>
<p>If the Euler-Lagrange equation is satisfied identically for all functions <span class="math-container">$y(x)$</span>, I think the original integral <span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx$$</span> is constant over all such <span class="math-container">$y(x)$</span>.</p>
<p>Which means if
<span class="math-container">$$\frac {d}{dx} \frac {\partial L}{\partial y'} = \frac {\partial L}{\partial y}$$</span> for all <span class="math-container">$y(x)$</span>, then
<span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx = C$$</span> for some constant <span class="math-container">$C$</span>.</p>
<p>But I don't know how to prove it. I started by using integration by parts, writing <span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx$$</span> as <span class="math-container">$$\left.\ x\,L(x,y(x),y'(x))\right|_a^b - \int_a^b x\,\frac {d}{dx}L(x,y(x),y'(x)) \, dx$$</span>
But I don't know how to continue from here; maybe I shouldn't use integration by parts at all.</p>
<p>Any help will be appreciated.</p>
| Qmechanic | 11,127 | <ol>
<li><p>More generally, one may show that</p>
<ul>
<li>if Euler-Lagrange (EL) equations are always satisfied, and</li>
<li>if the <span class="math-container">$x$</span>- and <span class="math-container">$y$</span>-spaces are <a href="https://en.wikipedia.org/wiki/Contractible_space" rel="nofollow noreferrer">contractible spaces</a>,</li>
</ul>
<p>then the Lagrangian density is a total divergence, i.e. the action functional is a boundary term (a minimal illustration follows this list), cf. e.g. Refs. 1-3.</p>
</li>
<li><p>If furthermore the boundary is fixed by boundary conditions, then the action functional is a constant, as OP already suspected.</p>
</li>
<li><p>See also <a href="https://physics.stackexchange.com/q/131925/2451">this</a> related Phys.SE post.</p>
</li>
</ol>
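<p>A minimal illustration of point 1: the null Lagrangian <span class="math-container">$L=y'$</span> satisfies the EL equation identically, and with fixed endpoints its action is constant:</p>
<p><span class="math-container">$$\frac{d}{dx}\frac{\partial L}{\partial y'}-\frac{\partial L}{\partial y}=\frac{d}{dx}(1)-0=0\quad\text{for every } y(x),\qquad \int_a^b y'\,dx=y(b)-y(a)=n-m.$$</span></p>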
<p>References:</p>
<ol>
<li><p>P.J. Olver, <em>Applications of Lie Groups to Differential Equations,</em> 1993.</p>
</li>
<li><p>I. Anderson, <em>Introduction to <a href="https://en.wikipedia.org/wiki/Variational_bicomplex" rel="nofollow noreferrer">variational bicomplex</a>,</em> Contemp. Math. 132 (1992) 51.</p>
</li>
<li><p>G. Barnich, F. Brandt & M. Henneaux, <em>Local BRST cohomology in gauge theories,</em> Phys. Rep. 338 (2000) 439, <a href="http://arxiv.org/abs/hep-th/0002245" rel="nofollow noreferrer">arXiv:hep-th/0002245</a>.</p>
</li>
</ol>
|
4,173,308 | <p><a href="https://i.stack.imgur.com/G55Op.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G55Op.png" alt="triangle of area 0.5 on a lattice grid" /></a></p>
<p>I'm trying to find the area of this triangle using the <span class="math-container">$\frac{1}{2} \times b \times h$</span> formula, but for some reason, it isn't quite working out. My workings:</p>
<h2>My working:</h2>
<p><a href="https://i.stack.imgur.com/VPlsA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VPlsA.png" alt="working" /></a></p>
<p><span class="math-container">$$\alpha = \sqrt{1^{2}+1^{2}} = \sqrt{2}$$</span>
<span class="math-container">$$\beta = \sqrt{2^{2} + 2^{2}} = 2\sqrt{2}$$</span></p>
<p><span class="math-container">$$\frac{1}{2} \times \sqrt{2} \times 2\sqrt{2} = 2?$$</span></p>
<p>I know the area is supposed to be 0.5 units^2, so what am I doing wrong here?</p>
| DanielC | 129,267 | <p>It should be doable by a chain of substitutions, as soon as you complete the square:</p>
<p><span class="math-container">$$I=\int_0^z \sqrt{\frac{z^2}{4} -\left(x-\frac{z}{2}\right)^2} ~ dx $$</span></p>
<p>Then <span class="math-container">$ u = x-\frac{z}{2}$</span> and obviously <span class="math-container">$ dx = du$</span>.</p>
<p>Now</p>
<p><span class="math-container">$$ I = \int_{-\frac{z}{2}}^{\frac{z}{2}} \sqrt{\left(\frac{z}{2}\right)^2 - u^2} ~ du$$</span></p>
<p>Can you take it from here?</p>
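<p>For the record, the last integral is the area of a semicircle of radius <span class="math-container">$z/2$</span> (via the substitution <span class="math-container">$u=\frac{z}{2}\sin\theta$</span>, or by geometry):</p>
<p><span class="math-container">$$I=\int_{-z/2}^{z/2}\sqrt{\left(\tfrac{z}{2}\right)^2-u^2}\,du=\frac{1}{2}\,\pi\left(\frac{z}{2}\right)^2=\frac{\pi z^2}{8}.$$</span></p>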
|