Columns: qid (int64), question (string), author (string), author_id (int64), answer (string)
2,165,759
<p>I am solving the following question</p> <p>$$\int\frac{\sin x}{\sin^{3}x + \cos^{3}x}dx.$$</p> <p>I have been able to reduce it to the following form by dividing the numerator and denominator by $\cos^{3}x$ and then substituting $t$ for $\tan x$, which gives the integral below. Should I use partial fractions to integrate it further, or is there another way?</p> <p>$$\int\frac{t}{t^3 + 1}dt.$$</p>
mrnovice
416,020
<p>Hint: $\frac{t}{t^{3}+1} = \frac{A}{t+1} + \frac{Bt + C}{t^{2}-t+1}$, where $A$, $B$, and $C$ are constants to be found.</p> <p>Can you solve it now?</p> <p>In case you get stuck:</p> <p>$A = -\frac{1}{3}$, $B = \frac{1}{3}$, $C = \frac{1}{3}$</p> <p>Then we get $I = \int -\frac{1}{3(t+1)} + \frac{t}{3(t^{2}-t+1)} + \frac{1}{3(t^{2}-t+1)}\,dt$</p>
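As a quick numerical sanity check (an editorial addition using only plain Python floats, not part of the original answer), the decomposition with $A=-\frac13$, $B=C=\frac13$ agrees with $\frac{t}{t^3+1}$:

```python
# Check t/(t^3+1) == -1/(3(t+1)) + (t/3 + 1/3)/(t^2 - t + 1) on a grid of points.
for i in range(1, 50):
    t = 0.1 * i + 0.05          # positive sample points, safely away from t = -1
    lhs = t / (t**3 + 1)
    rhs = -1 / (3 * (t + 1)) + (t / 3 + 1 / 3) / (t**2 - t + 1)
    assert abs(lhs - rhs) < 1e-12
```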
2,222,215
<p>Determine whether the difference of the following two series is convergent or not, and prove your answer: $$ \sum_{n=1}^\infty \frac{1}{n} $$ and $$\sum_{n=1}^\infty \frac{1}{2n-1} $$</p> <p>What I tried: I said that the difference of the two series is divergent. My proof is as follows. Find the difference of the two series to get $$\sum_{n=1}^\infty \frac{1}{n} -\sum_{n=1}^\infty \frac{1}{2n-1} = \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$$ But it is difficult to prove directly that $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is divergent. So I tried proving it by contradiction: assuming that it is convergent and rearranging the above equation, we have $\sum_{n=1}^\infty \frac{1}{n} =\sum_{n=1}^\infty \frac{1}{2n-1} + \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$, and since $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is convergent by our assumption and $\sum_{n=1}^\infty \frac{1}{2n-1}$ is also convergent (this needs to be proven), the sum of both series also has to be convergent, contradicting the fact that $\sum_{n=1}^\infty \frac{1}{n}$ is divergent and thus proving the statement. Is my proof correct, and is there a better proof? Could anyone explain the proof to me? Thanks</p>
Sarvesh Ravichandran Iyer
316,409
<p>A few things need correction in your proof. First, let me compliment you on the idea of trying a proof by contradiction. Unfortunately, the idea falls flat due to a mistake:</p> <blockquote> <p>$\sum_{n=1}^\infty \frac{1}{2n-1}$ is <strong>not</strong> convergent!</p> </blockquote> <p>To do the question, we need an observation about the difference. First, write down the first few terms of the sum: $$ \sum_{n=1}^\infty \left(\frac 1{2n-1}\right) = 1 + \frac 13 + \frac 15 + \frac 17 + ... $$</p> <p>You can see that we are precisely summing over the <em>reciprocals of odd numbers</em> here. </p> <p>In the series $\sum \frac 1n$, we are summing over <em>reciprocals of all numbers</em>. Hence, the difference of these two series is the series whose terms are the <em>reciprocals of all even numbers</em>. This should be clear.</p> <p>Once this is clear, we see that every even number is of the form $2k$, where $k$ is a natural number, and vice versa, so that: $$ \sum \frac 1n - \sum \frac 1{2n- 1} = \sum \frac{1}{2n} $$ </p> <p>(Note that the above has nothing to do with the convergence of any series; since we are merely combining series, it is mathematically sound.)</p> <p>Now, note that $\sum \frac{1}{2n}$ cannot converge, for if it did, say with sum $M$, then $\sum \frac 1n = 2M$, which is a contradiction as this series doesn't converge.</p> <p>Hence, the difference does not converge.</p> <p>To see that the sum of the reciprocals of odd numbers does not converge, just do: $$ 1 + \frac 13 + \frac 15 + \frac 17... &gt; \frac 12 + \frac 14 + \frac 16 + \frac 18... $$</p> <p>(Compare term by term)</p> <p>Hence, by the comparison test this series is also not convergent.</p> <hr> <p>EDIT : As the comments below point out, a rearrangement of the series $\sum \frac 1n - \sum \frac1{2n-1}$ may converge. However, the original problem does not address this explicitly, and in that case it is up to me to decide which rearrangement is being referred to: from the computations made above (where the difference was taken explicitly), I concluded that no rearrangement of the terms was done while taking the difference. Nevertheless, this is a non-trivial point that was not raised in the question.</p>
2,222,215
<p>Determine whether the difference of the following two series is convergent or not, and prove your answer: $$ \sum_{n=1}^\infty \frac{1}{n} $$ and $$\sum_{n=1}^\infty \frac{1}{2n-1} $$</p> <p>What I tried: I said that the difference of the two series is divergent. My proof is as follows. Find the difference of the two series to get $$\sum_{n=1}^\infty \frac{1}{n} -\sum_{n=1}^\infty \frac{1}{2n-1} = \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$$ But it is difficult to prove directly that $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is divergent. So I tried proving it by contradiction: assuming that it is convergent and rearranging the above equation, we have $\sum_{n=1}^\infty \frac{1}{n} =\sum_{n=1}^\infty \frac{1}{2n-1} + \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$, and since $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is convergent by our assumption and $\sum_{n=1}^\infty \frac{1}{2n-1}$ is also convergent (this needs to be proven), the sum of both series also has to be convergent, contradicting the fact that $\sum_{n=1}^\infty \frac{1}{n}$ is divergent and thus proving the statement. Is my proof correct, and is there a better proof? Could anyone explain the proof to me? Thanks</p>
Laars Helenius
112,790
<p>HINT: Observe that $$\sum_{n=1}^\infty\frac{n-1}{n(2n-1)}\ge\sum_{n=1}^\infty\frac{n-1}{n(2n)}=\sum_{n=1}^\infty\frac{1}{2n}-\sum_{n=1}^\infty\frac{1}{2n^2}.$$</p>
2,555,815
<p><strong>Problem</strong></p> <p>Let $a_{0}(n) = \frac{2n-1}{2n}$ and $a_{k+1}(n) = \frac{a_{k}(n)}{a_{k}(n+2^k)}$ for $k \geq 0.$</p> <p>The first several terms of the sequence $a_k(1)$ for $k \geq 0$ are:</p> <p>$$\frac{1}{2}, \, \frac{1/2}{3/4}, \, \frac{\frac{1}{2}/\frac{3}{4}}{\frac{5}{6}/\frac{7}{8}}, \, \frac{\frac{1/2}{3/4}/\frac{5/6}{7/8}}{\frac{9/10}{11/12}/\frac{13/14}{15/16}}, \, \ldots$$</p> <p>What limit do the values of these fractions approach?</p> <p><strong>My idea</strong></p> <p>I have computed the sequence using a recursive C program, and it turns out that for $k \geq 8$, the first several digits of $a_k(1)$ are $ 0.7071067811 \ldots,$ so I guess that the limit exists and equals $\frac{1}{\sqrt{2}}$.</p>
mercio
17,445
<p>Let $f_0(z) = z$ and $f_{n+1}(z) = f_n(z) / f_n(z+2^n)$</p> <p>One can show that when $z \to \infty$, the rational fractions $f_n$ for $n \ge 1$ have asymptotic expansions at infinity that converge for $|z| &gt; n$, such that $f_n(z) = 1 + O(z^{-n})$ and $f_n'(z) = O(z^{-n-1})$</p> <p>Call $s(k) = +1,-1,-1,+1,\cdots$ the Thue-Morse sequence.<br> Then the sequence $\prod_{k = 0}^n f_2(z+4k)^{s(k)}$ converges pointwise, and the convergence is uniform on half-spaces of the form $\Re(z) \ge A$ for $A &gt; -1$. The sequence of their derivatives also converges uniformly on those open sets, so the limit is holomorphic on $\Re(z)&gt; -1$.</p> <p>Since the sequence $(f_n(z))$ for $n\ge 2$ is a subsequence of this sequence, it also converges to the same limit $f(z)$.</p> <p>Next, notice that $f_n(z)/f_n(z+ \frac 12) = f_{n+1}(2z)$.</p> <p>Taking the limit as $n \to \infty$, this gives the functional equation $f(z)/f(z+ \frac 12) = f(2z)$, or $f(z + \frac 12)f(2z) = f(z)$ (and incidentally, $f$ has a meromorphic continuation to the whole complex plane with zeroes and poles at $0$ and the negative integers).</p> <p>Let $k$ be the order of $f$ at $0$, so that $f(z) \sim az^k$ for some nonzero $a \in \Bbb C$ as $z \to 0$.<br> Looking at the functional equation as $z \to 0$, one gets $f(\frac 12) = 2^{-k}$.</p> <p>And finally, evaluating the functional equation at $z = \frac 12$, $f(1)f(1) = f(\frac 12)$, and so $f(1) = 2^{- \frac 12k}$</p> <p>Then one simply needs to evaluate $f(1)$ (or $f(\frac 12)$) to enough precision to determine that the order of $f$ at $0$ is $1$ and not anything larger.</p> <p>So indeed, $f(1) = 2^{- \frac 12}$</p>
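The claimed numerics are easy to reproduce. Here is a short Python sketch (an editorial addition mirroring the question's recursive C program) of the recursion $a_0(n)=\frac{2n-1}{2n}$, $a_{k+1}(n)=a_k(n)/a_k(n+2^k)$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(k, n):
    # a_0(n) = (2n-1)/(2n);  a_{k+1}(n) = a_k(n) / a_k(n + 2^k)
    if k == 0:
        return (2 * n - 1) / (2 * n)
    return a(k - 1, n) / a(k - 1, n + 2 ** (k - 1))

# The question reports a_k(1) = 0.7071067811... for k >= 8, i.e. 2^(-1/2).
assert abs(a(8, 1) - 2 ** -0.5) < 1e-6
```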
1,768,317
<p>Show that $\sin(x) &gt; \ln(x+1)$ when $x \in (0,1)$. </p> <p>I'm expected to use the Maclaurin series (the Taylor series when $a=0$).</p> <p>So if I understand it correctly, I need to show that: </p> <p>$$\sin(x) = \lim\limits_{n \rightarrow \infty} \sum_{k=1}^{n} \frac{(-1)^{k-1}}{(2k-1)!} \cdot x^{2k-1} &gt; \lim\limits_{n \rightarrow \infty} \sum_{k=1}^{n} \frac{(-1)^{k-1}}{k} \cdot x^k = \ln(x+1)$$</p> <p>I tried to show that for any $k$ the general term in the bigger sum is greater than the corresponding term in the smaller sum, but it's not true :(.</p> <p>$$\frac{(-1)^{k-1} \cdot x^{2k-1}}{(2k-1)!} &gt; \frac{(-1)^{k-1} \cdot x^k}{k}$$</p> <p>when $k$ is odd we get:</p> <p>$$\frac{x^{k-1}}{(2k-1)!} &gt; \frac{1}{k}$$</p> <p>and this is a contradiction since:</p> <p>$x^{k-1} &lt; 1$ for any $x \in (0,1)$ and $k &gt; 1$, and $(2k-1)! &gt; k $ </p> <p>so for any $k &gt; 1 $ we have $\frac{x^{k-1}}{(2k-1)!} &lt; \frac{1}{k}$, and if $k=1$ then $\frac{x^{k-1}}{(2k-1)!} = \frac{1}{k}$.</p> <p>What am I doing wrong, and how am I supposed to prove it?</p> <p>Thanks in advance for your help.</p>
Hagen von Eitzen
39,174
<p>As the series are alternating with summands strictly decreasing in absolute value (at least for $0&lt;x&lt;1$), we have $$ \sin x&gt;x-\frac16x^3$$ and $$ \ln(1+x)&lt;x-\frac12x^2+\frac13x^3$$ Hence the difference is $$ \sin x-\ln(1+x)&gt;\frac12x^2-\frac12x^3=\frac12x^2(1-x)&gt;0.$$</p>
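A numerical spot check (an editorial addition; the proof above does not depend on it) of both Taylor bounds and the resulting inequality on $(0,1)$:

```python
import math

for i in range(1, 20):
    x = i / 20.0                                         # x in (0, 1)
    assert math.sin(x) > x - x**3 / 6                    # sin x > x - x^3/6
    assert math.log1p(x) < x - x**2 / 2 + x**3 / 3       # ln(1+x) < x - x^2/2 + x^3/3
    assert math.sin(x) > math.log1p(x)                   # the claimed inequality
```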
569,103
<blockquote> <blockquote> <p>How can I calculate the first partial derivative $P_{x_i}$ and the second partial derivative $P_{x_i x_i}$ of the function: $$ P(x,y):=\frac{1-\Vert x\rVert^2}{\Vert x-y\rVert^n}, x\in B_1(0)\subset\mathbb{R}^n,y\in S_1(0)? $$</p> </blockquote> </blockquote> <p>I ask this with regard to <a href="https://math.stackexchange.com/questions/568453/show-that-the-poisson-kernel-is-harmonic-as-a-function-in-x-over-b-10-setminu">Show that the Poisson kernel is harmonic as a function in x over $B_1(0)\setminus\left\{0\right\}$</a>.</p> <p>I think it makes sense to ask this in a separate question in order to give the details of my calculations.</p> <hr> <p><strong>First partial derivative:</strong></p> <p>I use the quotient rule. To do so I set $$ f(x,y):=1-\lVert x\rVert^2,~~~~~g(x,y)=\Vert x-y\rVert^n. $$ Then I have to calculate $$ \frac{f_{x_i}g-fg_{x_i}}{g^2}. $$ OK, I start with $$ f_{x_i}=(1-\lVert x\rVert^2)_{x_i}=(1)_{x_i}-(\sum_{i=1}^n x_i^2)_{x_i}=-2x_i. $$ Next is to use the chain rule: $$ g_{x_i}=((\sum_{i=1}^{n}(x_i-y_i)^2)^{\frac{n}{2}})_{x_i}=\frac{n}{2}\lVert x-y\rVert^{n-2}(2x_i-2y_i) $$</p> <p>So all in all I get $$ P_{x_i}=\frac{-2x_i\cdot\Vert x-y\rVert^n-(1-\lVert x\rVert^2)\cdot\frac{n}{2}\lVert x-y\rVert^{n-2}(2x_i-2y_i)}{\Vert x-y\rVert^{2n}} $$</p> <p>Is that correct? Can one simplify that?</p> <p>I stop here. If you say it is correct I will continue with calculating $P_{x_i x_i}$.</p>
Michael
155,065
<p>Here is my work for the first limit: Let $f(x) = \frac{num(x)}{den(x)}$. We get $\infty/\infty$ so we can use L'Hopital: </p> <p>\begin{align*} \frac{num'(x)}{den'(x)}&amp;=\frac{3(1+\sec(x))\log \sec x}{(\tan(x))(x + \log(\sec x + \tan x)) + (\log \sec (x)) (1 + \sec(x)) }\\ &amp;= \frac{3(1+\sec(x))\log \sec x}{(\tan(x))[x + (\log \sec x) +\log(1+\sin(x))] + (\log \sec x)(1+\sec(x)) )}\\ &amp;= \frac{3}{\left(\frac{\tan(x)}{1+\sec(x)}\right)\left(1 + \frac{x + \log(1 + \sin(x))}{\log \sec x}\right)+ 1} \end{align*} Taking a limit as $x\rightarrow \pi/2^-$ gives $3/2$.</p>
287,597
<p>Can anyone explain why <a href="http://en.wikipedia.org/wiki/Quaternion#Matrix_representations">the matrix representation of the quaternions using real matrices</a> is constructed as such?</p>
Thomas
26,188
<p>I am not sure exactly what you mean by asking "... why the ...". "Why" questions can be hard to answer satisfactorily in math.</p> <p>The claim is that the Quaternions $\mathbb{H}$ are isomorphic (as $\mathbb{R}$-algebras) to the given set of matrices. The isomorphism looks like this:</p> <p>$$ \phi: a + bi + cj + dk \longmapsto \begin{pmatrix}a &amp; b &amp; c &amp; d \\ -b &amp; a &amp; -d &amp; c \\ -c &amp;d &amp;a&amp; - b\\ -d&amp; -c &amp; b&amp; a\end{pmatrix}. $$ To "understand" why this is true, you "simply" check that this is an isomorphism. </p> <p>You check for example that $\phi$ is bijective, which is clear from the construction.</p> <p>Then you check that $\phi$ is an algebra homomorphism, so you need for $x,y\in \mathbb{H}$ and $\lambda \in \mathbb{R}$:</p> <ol> <li>$\phi(xy) = \phi(x)\phi(y)$ for $x,y\in\mathbb{H}$</li> <li>$\phi(x+y) = \phi(x) + \phi(y)$</li> <li>$\phi(\lambda x) = \lambda\phi(x)$</li> </ol> <p>The last two are not difficult to check. The first one requires a bit of work.</p> <p>Even though this does not explain why the minus signs are where they are in the matrix, I highly recommend that you try to prove that $\phi$ is a homomorphism. This exercise will make you more familiar with the Quaternions.</p> <p>But note that if you check property $1$ above, you would need (as a special case) $$ \phi((bi)(bi)) = \phi(bi)\phi(bi). $$ That is, you would need $$ \begin{pmatrix} -b^2 &amp; 0 &amp; 0&amp; 0 \\ 0 &amp; -b^2 &amp; 0 &amp; 0 \\ 0 &amp;0 &amp;-b^2&amp; 0\\ 0&amp; 0 &amp; 0&amp; -b^2\end{pmatrix} = \begin{pmatrix} 0 &amp; b &amp; 0&amp; 0 \\ -b &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp;0 &amp;0&amp; -b\\ 0&amp; 0 &amp; b&amp; 0\end{pmatrix}\begin{pmatrix} 0 &amp; b &amp; 0&amp; 0 \\ -b &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp;0 &amp;0&amp; -b\\ 0&amp; 0 &amp; b&amp; 0\end{pmatrix}. $$ So here you can see that you "need" the minus signs on all the $b$'s. In this case it comes down to the fact that $i^2 = -1$.</p>
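A brute-force check (an editorial addition in plain Python, not part of the answer) that $\phi$ is multiplicative, using the Hamilton product on coefficient 4-tuples:

```python
def phi(a, b, c, d):
    # The 4x4 real matrix representing a + bi + cj + dk, as in the answer.
    return [[ a,  b,  c,  d],
            [-b,  a, -d,  c],
            [-c,  d,  a, -b],
            [-d, -c,  b,  a]]

def matmul(A, B):
    # Plain 4x4 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def qmul(p, q):
    # Hamilton product of quaternions written as (a, b, c, d).
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

x, y = (1, 2, -3, 4), (2, -1, 5, 3)
assert matmul(phi(*x), phi(*y)) == phi(*qmul(x, y))                  # phi(x)phi(y) = phi(xy)
assert matmul(phi(0, 1, 0, 0), phi(0, 1, 0, 0)) == phi(-1, 0, 0, 0)  # i^2 = -1
```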
287,597
<p>Can anyone explain why <a href="http://en.wikipedia.org/wiki/Quaternion#Matrix_representations">the matrix representation of the quaternions using real matrices</a> is constructed as such?</p>
G Cab
317,234
<p>It is a simple consequence of the multiplication rules of the quaternions. </p> <p>So $$ \eqalign{ &amp; \left( {a + bi + cj + dk} \right) \cdot 1 = \left( {\matrix{ a &amp; b &amp; c &amp; d \cr { - b} &amp; a &amp; { - d} &amp; c \cr { - c} &amp; d &amp; a &amp; { - b} \cr { - d} &amp; { - c} &amp; b &amp; a \cr } } \right)^{\,T} \left( {\matrix{ 1 \cr 0 \cr 0 \cr 0 \cr } } \right) = \left( {\matrix{ 1 \cr 0 \cr 0 \cr 0 \cr } } \right)^{\,T} \left( {\matrix{ a &amp; b &amp; c &amp; d \cr { - b} &amp; a &amp; { - d} &amp; c \cr { - c} &amp; d &amp; a &amp; { - b} \cr { - d} &amp; { - c} &amp; b &amp; a \cr } } \right) = \left( {\matrix{ a \cr b \cr c \cr d \cr } } \right) \cr &amp; \left( {a + bi + cj + dk} \right) \cdot i = \left( {\matrix{ a &amp; b &amp; c &amp; d \cr { - b} &amp; a &amp; { - d} &amp; c \cr { - c} &amp; d &amp; a &amp; { - b} \cr { - d} &amp; { - c} &amp; b &amp; a \cr } } \right)^{\,T} \left( {\matrix{ 0 \cr 1 \cr 0 \cr 0 \cr } } \right) = \left( {\matrix{ { - b} \cr a \cr { - d} \cr c \cr } } \right) \cr} $$ and so on</p>
2,027,044
<p>Prove: $$ (a+b)^\frac{1}{n} \le a^\frac{1}{n} + b^\frac{1}{n}, \qquad \forall n \in \mathbb{N} $$ I have tried using the triangle inequality $ |a + b| \le |a| + |b| $, without any success.</p>
Bart Michels
43,288
<p><strong>Hint:</strong> (Assuming $a,b$ positive) Binomial theorem.</p> <blockquote class="spoiler"> <p> let $c=a^{1/n}$, $d=b^{1/n}$. We want $$c^n+d^n\leq(c+d)^n.$$ By the <a href="https://en.wikipedia.org/wiki/Binomial_theorem" rel="nofollow noreferrer">binomial theorem</a>, the RHS contains the LHS plus more positive terms.</p> </blockquote>
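A quick numerical spot check (editorial addition) of the hint's equivalent form $c^n + d^n \le (c+d)^n$ and of the original inequality, for positive $a,b$:

```python
# Sample a few positive values of a, b and exponents n.
for n in range(1, 10):
    for a in (0.3, 1.0, 2.0, 7.5):
        for b in (0.4, 1.0, 5.0):
            c, d = a ** (1 / n), b ** (1 / n)
            assert c**n + d**n <= (c + d) ** n + 1e-9            # binomial-theorem form
            assert (a + b) ** (1 / n) <= a ** (1 / n) + b ** (1 / n) + 1e-9
```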
348,748
<p>Find the solution of $Ax=0$ for the following $3 \times 3$ matrix:</p> <p>$$\begin{pmatrix}3 &amp; 2&amp; -3\\ 2&amp; -1&amp;1 \\ 1&amp; 1&amp; 1\end{pmatrix}$$</p> <p>I found the row reduced form of that matrix, which was </p> <p>$$\begin{pmatrix}1 &amp; 2/3&amp; -1\\ 0&amp; 1&amp;-9/7 \\ 0&amp; 0&amp; 1\end{pmatrix}$$</p> <p>I'm not sure what I'm supposed to do next to find the "unique" solution besides $x=0$. Do I further reduce that matrix to the identity matrix?</p>
Pedro
23,350
<p>You have shown that the matrix is nonsingular (invertible), so the only solution, as you state, is ${\bf x}=0$.</p>
2,464,649
<p>Find all primes $p$ such that $p+1$ is a perfect square.</p> <p>All primes except for 2 (3 is not a perfect square, so we can exclude that case) are odd, so we can express them as $2n+1$ for some $n\in\mathbb{Z}_{+}$. Let's express the perfect square as $a^2$, where $a\in\mathbb{Z}_{+}$. Since we are interested in a number that is one more than $2n+1$, we know that our perfect square can also be expressed as $2n+1+1=2(n+1)$.</p> <p>$2n+2=a^2$</p> <p>$2(n+1)=a^2$</p> <p>So we know that our perfect square must be even, as it has a factor of 2 in it (in fact $2\cdot2$).</p> <p>It is my strong intuition that we get a perfect square only if $n=1$, and therefore $p=3$ and $p+1=a^2=2\cdot2=4$, but how should I continue with this proof? It seems to me that whatever factor we have on the LHS we need to have it twice on the RHS (since $a$ must be an integer), but how do I continue from there?</p>
velut luna
139,981
<p>If a prime $p$ is of the form of $n^2-1$, then</p> <p>$$p=(n+1)(n-1)$$</p> <p>and so $n-1$ must be 1.</p>
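The factorization makes a computer search redundant, but a short brute-force confirmation (editorial addition) is easy:

```python
def is_prime(m):
    # Trial division; fine for small m.
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

# p + 1 = n^2 means p = (n - 1)(n + 1); only n = 2 can give a prime.
hits = [n * n - 1 for n in range(2, 1000) if is_prime(n * n - 1)]
assert hits == [3]
```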
2,464,649
<p>Find all primes $p$ such that $p+1$ is a perfect square.</p> <p>All primes except for 2 (3 is not a perfect square, so we can exclude that case) are odd, so we can express them as $2n+1$ for some $n\in\mathbb{Z}_{+}$. Let's express the perfect square as $a^2$, where $a\in\mathbb{Z}_{+}$. Since we are interested in a number that is one more than $2n+1$, we know that our perfect square can also be expressed as $2n+1+1=2(n+1)$.</p> <p>$2n+2=a^2$</p> <p>$2(n+1)=a^2$</p> <p>So we know that our perfect square must be even, as it has a factor of 2 in it (in fact $2\cdot2$).</p> <p>It is my strong intuition that we get a perfect square only if $n=1$, and therefore $p=3$ and $p+1=a^2=2\cdot2=4$, but how should I continue with this proof? It seems to me that whatever factor we have on the LHS we need to have it twice on the RHS (since $a$ must be an integer), but how do I continue from there?</p>
Community
-1
<p>I believe that if we have $p+1=k^2$ we can see $p=(k-1)(k+1)$ and hence $3$ is the only such prime.</p>
2,464,649
<p>Find all primes $p$ such that $p+1$ is a perfect square.</p> <p>All primes except for 2 (3 is not a perfect square, so we can exclude that case) are odd, so we can express them as $2n+1$ for some $n\in\mathbb{Z}_{+}$. Let's express the perfect square as $a^2$, where $a\in\mathbb{Z}_{+}$. Since we are interested in a number that is one more than $2n+1$, we know that our perfect square can also be expressed as $2n+1+1=2(n+1)$.</p> <p>$2n+2=a^2$</p> <p>$2(n+1)=a^2$</p> <p>So we know that our perfect square must be even, as it has a factor of 2 in it (in fact $2\cdot2$).</p> <p>It is my strong intuition that we get a perfect square only if $n=1$, and therefore $p=3$ and $p+1=a^2=2\cdot2=4$, but how should I continue with this proof? It seems to me that whatever factor we have on the LHS we need to have it twice on the RHS (since $a$ must be an integer), but how do I continue from there?</p>
Deniz Tuna Yalçın
471,430
<p>If our prime $p$ is of the form $n^2-1,\quad n\in\mathbb{Z}$, then it can also be written as $p=(n-1)(n+1)$. If $p$ is a prime, then it cannot be written as the product of two integers other than $1$ and itself (this is essentially the definition of a prime). If $n-1=1$ then $n=2$ and the other factor is $n+1=3=p$. The other possibility is that $n+1=1$; however, that doesn't give us a prime for $p$.</p> <p>So there is only one prime $p$ that makes $p+1$ a perfect square, and it is $3$.</p>
24,055
<p>Running this code:</p> <pre><code>Histogram[{RandomVariate[NormalDistribution[1/4,0.12],100], RandomVariate[NormalDistribution[3/4, 0.12], 100]}, Automatic, "Probability", PlotRange -&gt; {{0, 1}, {0, 1}}, Frame -&gt; True, PlotRangeClipping -&gt; True, FrameLabel -&gt; {Style["x axis", 15], Style["probability", 15]} ] </code></pre> <p>Gives me the following plot:</p> <p><img src="https://i.stack.imgur.com/jYSLN.png" alt="enter image description here"></p> <p>As you can see, the label on the right ("probability") is not printed correctly. The character "y" is missing. What's going on here?</p> <p>I am using Mathematica 9.0.0.0. I ran this on two laptops, one with Windows 7 and the other with Windows 8.</p> <p><strong>Update</strong>: Judging by the comments, this seems to be a bug. So now the question becomes: <strong>Is there a workaround?</strong></p> <p><strong>Update</strong>: This seems to be bug, so I'll tag as such. In the meantime, see the answers for workarounds.</p>
Silvia
17
<p>This issue happened for me in 9.0.1 and also in some earlier versions.</p> <p>My crude yet working workaround is to add some spaces after <strong><em>y</em></strong> to force it to be displayed entirely, and to add corresponding spaces before <strong><em>p</em></strong> to keep the word centered.</p> <p>Hope this helps.</p>
1,242,001
<p>The following is the notation for Fermat's Last Theorem </p> <p>$\neg\exists_{\{a,b,c,n\},(a,b,c,n)\in(\mathbb{Z}^+)\color{blue}{^4}\land n&gt;2\land abc\neq 0}a^n+b^n=c^n$ </p> <p>I understand everything in the notation besides the 4 highlighted in blue. Can someone explain to me what this means?</p>
Gregory Grant
217,398
<p>The only $x$ that makes that zero is $x=-2$. Divide both sides by the $e^{\dots}$ part which you know is never zero.</p>
723,570
<p>In the proof of Theorem 6.11, $\varphi$ is uniformly continuous and hence for arbitrary $\epsilon &gt; 0$ we can pick $\delta &gt; 0$ s.t. $\left|s-t\right| \leq \delta$ implies $\left|\varphi\left(s\right)-\varphi\left(t\right)\right|&lt;\epsilon$. However, I do not understand why he claims that $\delta &lt; \epsilon$. Help?</p> <p>Statement of the theorem: Suppose $f\colon\left[a,b\right]\rightarrow \mathbb{R}$ is Riemann integrable on $\left[a,b\right]$, $m\leq f \leq M$ (for $m,M\in\mathbb{R}$), $\varphi\colon \left[m,M\right]\rightarrow \mathbb{R}$ is continuous on $\left[m,M\right]$. Let $h\equiv\varphi\circ f$. Then $h$ is Riemann integrable on $\left[a,b\right]$.</p>
Cameron Williams
22,551
<p>It should be easy to figure out how many even numbers there are between $100$ and $400$. If you can't see it immediately, consider the amount of even numbers between $0$ and, say, $20$ and draw some conclusions from that.</p> <p>Then note that $400-100=300$. This should give a good indicator of the number of numbers divisible by $3$ and $5$.</p> <p>I'll let you figure $7$ out.</p> <p>As noted by Steven, this method double, triple and quadruple counts, so you also need to count how many times <em>products</em> of these numbers appear as well.</p> <p>There is, what I think, a better way. By the fundamental theorem of arithmetic, every positive integer can be uniquely decomposed as a product of prime numbers. Another fact (which you may or may not know, but it's not hard to see, at least at an intuitive level) is that every composite positive integer $N$ has a prime factor less than or equal to $\sqrt{N}$. In this case, our upper bound is $400$, so we want to look at all primes less than or equal to $\sqrt{400} = 20$. The primes satisfying this are $2,3,5,7,11,13,17,19$. We want to count the numbers that <em>are</em> divisible by $2,3,5,7$. This is equivalent to counting the numbers that are <em>not</em> divisible by any of $2,3,5,7$ and subtracting from the total. If you invoke the fundamental theorem of arithmetic, this problem becomes very easy at this point.</p>
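For the counting part, inclusion–exclusion can be checked directly (editorial addition; I assume the range is the inclusive interval $[100, 400]$ — adjust the endpoints if the original question meant otherwise):

```python
from itertools import combinations

def count_divisible(lo, hi, primes):
    # Inclusion-exclusion: count integers in [lo, hi] divisible by at least one of `primes`.
    total = 0
    for r in range(1, len(primes) + 1):
        for combo in combinations(primes, r):
            prod = 1
            for p in combo:
                prod *= p
            # multiples of prod in [lo, hi], signed by inclusion-exclusion
            total += (-1) ** (r + 1) * (hi // prod - (lo - 1) // prod)
    return total

# Cross-check against brute force.
brute = sum(1 for n in range(100, 401) if any(n % p == 0 for p in (2, 3, 5, 7)))
assert count_divisible(100, 400, (2, 3, 5, 7)) == brute
```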
1,640,383
<p>We have the function $f:\mathbb{R}\rightarrow \mathbb{R}$ with $$f\left(x\right)=\frac{1}{3x+1}\:$$ $$x\in \left(-\frac{1}{3},\infty \right)$$ Write the Maclaurin series for this function.</p> <p>Alright, so from what I learned in class, the Maclaurin series is basically the Taylor series for $x_0=0$, with the remainder written in the Lagrange form. It has this shape: $f\left(x\right)=\left(T_n f\right)\left(x\right)+\left(R^L_n f\right)\left(x\right)=\sum _{k=0}^n\frac{f^{\left(k\right)}\left(0\right)}{k!}x^k+\frac{f^{\left(n+1\right)}\left(c\right)}{\left(n+1\right)!}x^{n+1}$</p> <p>So when I compute derivatives of my function I can see that the form they take is: $$f^{\left(n\right)}\left(x\right)=\left(-1\right)^n\cdot \frac{3^n\cdot n!}{\left(3x+1\right)^{n+1}}$$</p> <p>Does that mean that the Maclaurin series is basically: $$f\left(x\right)=1-3x+3^2x^2-3^3x^3+....+\left(-1\right)^n\cdot 3^n\cdot x^n$$ ?</p> <p>But what about that remainder in Lagrange form? I don't get that part. We didn't really have examples in class, so I've no idea if what I'm doing is correct. Can someone help me with this a bit?</p>
Aaron Maroja
143,413
<p><strong>Hint:</strong> Remember that </p> <p>$$\frac{1}{1 + x} = \sum_{n=0}^{\infty} (-1)^n x^n \quad \text{for} \quad |x| &lt;1$$</p> <p>Then $$\frac{1}{1 +3x} = \ldots$$</p> <p>for $|3x| &lt; 1 \implies |x| &lt; \frac{1}{3}$</p>
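A partial-sum check (editorial addition) that the resulting coefficients are $(-3)^n$, i.e. $\frac{1}{1+3x}=\sum_{n\ge 0} (-3)^n x^n$ inside $|x|<\frac13$:

```python
x = 0.1                                        # inside |x| < 1/3
partial = sum((-3) ** n * x ** n for n in range(50))
assert abs(partial - 1 / (1 + 3 * x)) < 1e-12  # geometric tail is negligible
```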
242,097
<p>I need to show that the function $f(n) = n^2$ is not of $\mathcal{O}(n)$. If I am correct, I should prove that there are no constants $c, n_0 \geq 0$ such that $n^2 \leq cn$ for all $n &gt; n_0$. How do I do that?</p>
Community
-1
<p>I assume that you want to show that $n^2 \notin \mathcal{O}(n)$. By definition, if $f(n) \in \mathcal{O}(n)$, then there exists $c&gt;0$ such that for all $n &gt; n_0$ we have $$f(n) \leq c n$$ If there were to exist constants $n_0$ and $c$ such that $$n^2 \leq cn$$ what happens for $n &gt; \max \left\{n_0, c \right \}$?</p>
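The witness in the last line can be made concrete (editorial addition): for any candidate constants $c$ and $n_0$, taking $n = \max(n_0, c) + 1$ violates $n^2 \leq cn$:

```python
for c in (1, 10, 1000):
    for n0 in (1, 50, 10**6):
        n = max(n0, c) + 1            # n > n0 and n > c
        assert n > n0 and n * n > c * n
```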
233,915
<p>$a,b$ are elements of the group $G$.</p> <p>I have no idea how to even start. I was thinking of defining $a,b$ as two square matrices and using the non-commutativity of matrix multiplication, but I'm not sure if that's the way to go...</p>
André Nicolas
6,312
<p>Consider the multiplicative subgroup of the <a href="http://en.wikipedia.org/wiki/Quaternion" rel="nofollow">quaternions</a> consisting of $\pm 1$, $\pm i$, $\pm j$, and $\pm k$. Let $a=i$ and $b=j$. We have $ij^2=-i$ and $j^2i=-i$ but $ij\ne ji$.</p>
1,289,626
<blockquote> <p>find the Range of $f(x) = |x-6|+x^2-1$</p> </blockquote> <p>$$ f(x) = |x-6|+x^2-1 =\left\{ \begin{array}{c} x^2+x-7,&amp; x&gt;0 .....(b) \\ 5,&amp; x=0 .....(a) \\ x^2-x+5,&amp; x&lt;0 ......(c) \end{array} \right. $$</p> <p>from eq (b) I got $$f(x)= \left(x+\frac12\right)^2-\frac{29}4 \ge-\frac{29}4$$<br> and from eq (c) I got $$f(x)= \left(x-\frac12\right)^2+\frac{19}4 \ge\frac{19}4$$<br></p> <p>and eq (a) tells me that it also passes through $5$; generalizing all this, I found its range to be $\left[-\frac{29}4 , \infty\right)$</p> <p>but the graph says its range is $(5, \infty)$</p>
egreg
62,967
<p>No derivatives are necessary. For $x\ge6$ the function is $$ f(x)=x^2+x-7 $$ The graph is an arc of a parabola with its axis at $x=-1/2$, so on this interval the function is increasing, with its minimum at $6$: $f(6)=35$.</p> <p>For $x&lt;6$ the function is $$ f(x)=x^2-x+5 $$ whose graph is an arc of a parabola with its axis at $x=1/2$. Since $1/2&lt;6$ the minimum of $f$ is reached at $1/2$: $f(1/2)=19/4$.</p> <p>Thus the range is $[19/4,\infty)$ because the minimum is attained and, clearly, $\lim_{x\to\infty}f(x)=\infty$.</p>
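A grid search (editorial addition) agrees: the minimum value $19/4$ is attained at $x=1/2$:

```python
f = lambda x: abs(x - 6) + x * x - 1
xs = [i / 1000.0 for i in range(-10000, 10001)]   # grid on [-10, 10]
m = min(f(x) for x in xs)
assert f(0.5) == 19 / 4                            # minimum attained at x = 1/2
assert abs(m - 19 / 4) < 1e-3                      # grid minimum agrees
```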
3,375,181
<p>How do I graph $f(x)=1/(1+e^{1/x})$ other than by plugging numbers in for $x$? Also, I found this picture of the answer online: <a href="https://i.stack.imgur.com/gZb7X.png" rel="nofollow noreferrer">enter image description here</a>, and I do not understand why $x = 0$ appears on this graph.</p>
Certainly not a dog
691,550
<p>Noting the form of the LHS, it is instructive to choose the following path:</p> <p><span class="math-container">$${yy''-2{y'}^2\over y^2} = 1$$</span> <span class="math-container">$$\frac{d\frac{y'}y}{dx} = 1+{\biggl(\frac {y'}y\biggr)}^2$$</span> <span class="math-container">$$x+c = \arctan{\frac{y'}y}$$</span> <span class="math-container">$$y'-y\tan{(x+c)}=0$$</span></p> <p>To get the solution for <span class="math-container">$(x,y)$</span> this linear D.E. must be solved. For some values of <span class="math-container">$x$</span>, as mentioned in the comments by @WE, there is indeed a solution.</p> <p><span class="math-container">$$y'(\cos {(x+c)})-y(\sin {(x+c)}) = 0$$</span> <span class="math-container">$$(y\cos{(x+c)})' = 0$$</span> <span class="math-container">$$\implies y\cos{(x+c)} = k$$</span> <span class="math-container">$$\boxed{y = k\sec{(x+c)}}$$</span></p>
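A numerical check (editorial addition; the derivatives of $k\sec(x+c)$ are written out by hand) that the boxed solution satisfies $yy''-2y'^2=y^2$:

```python
import math

def residual(k, x, c):
    # y = k sec(x+c); y' = k sec u tan u; y'' = k (sec u tan^2 u + sec^3 u)
    u = x + c
    s, t = 1 / math.cos(u), math.tan(u)
    y  = k * s
    y1 = k * s * t
    y2 = k * (s * t * t + s ** 3)
    return y * y2 - 2 * y1 ** 2 - y ** 2   # should vanish identically

for k in (1.0, 2.5, -3.0):
    for x in (0.1, 0.5, 1.0):
        assert abs(residual(k, x, 0.2)) < 1e-8
```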
902,313
<p>The Wikipedia page on clopen sets says "Any clopen set is a union of (possibly infinitely many) connected components." </p> <p>I thought any topological space is the union of its connected components? Why is this singled out here for clopen sets?</p> <p>Does it have something to do with the fact that if $x\in C$ for a clopen subset $C$ of a space $X$, then $C$ actually contains the entire connected component of $x$ in $X$?</p>
Jamie Walton
99,825
<p>Take the space $\mathbb{R}$. Then the subset $\mathbb{R}_{&gt;0}$ of positive elements is a subset of $\mathbb{R}$, but is not a union of connected components of $\mathbb{R}$ (the only connected component of $\mathbb{R}$ is $\mathbb{R}$ itself). But then that's okay, because $\mathbb{R}_{&gt;0}$ isn't clopen in $\mathbb{R}$ (it is only open).</p> <p>What you seem to be doing here (I think) is to pass down to a subset of a topological space but then change the notion of connected component (perhaps to the connected components of the subspace with the subspace topology). What the above is saying is that if $U \subset X$ is a clopen subset of the space $X$, then it is a union of connected components <strong>of $\bf{X}$</strong> (not of $U$).</p>
407,890
<p>Here comes a second, more sophisticated version of my conjecture, as critics pointed out that the <a href="https://math.stackexchange.com/questions/407812/conjecture-on-combinate-of-positive-integers-in-terms-of-primes">first version</a> was trivial.</p> <p>Theorem <a href="https://oeis.org/A226233" rel="nofollow noreferrer">2</a></p> <p>For a given prime $p$ and a given power $m$, the representation of any positive integer $n\in \Bbb N$ in the form $$ n=(a_u p - b_u) \; p^m$$ is <em>unique</em>, with the coefficient pairs (OEIS: <a href="https://oeis.org/A226233" rel="nofollow noreferrer">A226233</a>, <a href="https://oeis.org/A226236" rel="nofollow noreferrer">A226236</a>) $$ \left\{ \begin{array}{l l} \langle a_u \rangle=1+\left\lfloor\frac{u-1}{p-1}\right\rfloor=\frac{(p-1)+u-1-((u-1)\bmod(p-1))}{p-1}\\ \langle b_u \rangle=u-(p-1)\left\lfloor\frac{u-1}{p-1}\right\rfloor=1+((u-1)\bmod(p-1)) \end{array} \right.$$</p> <p>while $a_u,b_u,u\in \Bbb N$ and $m\in \Bbb N_0$.</p> <p>Can anyone help with the proof? Or connect it to another existing unsolved conjecture?</p> <p>Note: $\lfloor\cdot \rfloor$ denotes the floor function.</p> <p><a href="https://oeis.org/A226233" rel="nofollow noreferrer">2</a>: Theorem to be cited Vaseghi 2013</p>
Jyrki Lahtonen
11,619
<p>If I have understood the question correctly, the claim can be seen to be correct as follows. </p> <p>The definition of the integer $b_u$ tells us that $b_u$ is the unique integer in the range $1\le b_u\le p-1$ that is congruent to $u$ modulo $p-1$. So choosing $u$ from the correct residue class modulo $p-1$ allows us to choose $b$ to be anything we wish in that range.</p> <p>On the other hand, if $u\le p-1$, then $a_u=1$. But also if we replace $u$ with $u'=u+k(p-1)$ we replace $a_u$ with $a_u+k$. As we saw in the preceding paragraph, $u$ and $u'$ give rise to the same value of $b_u$. All this means that a choice of $u$ allows us to assign the parameter $a_u$ to be any positive integer that we wish, and to assign the parameter $b_u$ any integer in the range $1\le b_u\le p-1$. The mapping $u\mapsto (a_u,b_u)$ is clearly a bijection from $\mathbb{Z}_+$ to $\mathbb{Z}_+\times \{1,2,\ldots,p-1\}$.</p> <p>The claim follows from this. We are given the values of $n$ and $p$. As the factor $a_up-b_u$ is never divisible by $p$, we are forced to select $m$ in such a way that $p^m$ is the highest power of $p$ dividing $n$. So we write $$ n=n'p^m, $$ where $n'$ is not a multiple of $p$. This determines $n'$ and $m$ uniquely. Then as $n'$ is not a multiple of $p$, there exists a unique integer $r$ in the range $1\le r \le p-1$ such that $n'+r$ is a multiple of $p$. So $n'+r=\ell p$ for some positive integer $\ell$. Now the previous paragraph says that there exists a unique $u$ such that $a_u=\ell$ and $b_u=r$. Clearly $a_up-b_u=\ell p-r=n'$ and the claimed equation then holds. No freedom remains in the choice of $u$.</p> <p>Let me illustrate this with the example $p=3$. 
I table some values of $a_u,b_u,a_up-b_u$ as functions of $u$ $$ \begin{array}{c|c|c|c} u&amp;a_u&amp;b_u&amp;a_up-b_u\\ \hline 1&amp;1&amp;1&amp;2\\ 2&amp;1&amp;2&amp;1\\ 3&amp;2&amp;1&amp;5\\ 4&amp;2&amp;2&amp;4\\ 5&amp;3&amp;1&amp;8\\ 6&amp;3&amp;2&amp;7\\ 7&amp;4&amp;1&amp;11\\ 8&amp;4&amp;2&amp;10 \end{array} $$ It is clear that the last column will contain all the positive integers that are not multiples of $p=3$. All such numbers appear exactly once.</p> <p>So for example when $n=63=7\cdot9=7\cdot3^2$, we are forced to select $m=2$ and $u=6$ (see the table).</p>
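The bijection argument is easy to confirm by brute force. Here is a short sketch of my own (not part of the answer) for $p=3$; the function names and search bounds `u_max`, `m_max` are ad-hoc choices, large enough for the range of $n$ tested.

```python
# Brute-force check of the unique representation n = (a_u*p - b_u)*p^m for
# p = 3.  An illustration of my own; names are not from the answer.

def coeffs(u, p):
    a = 1 + (u - 1) // (p - 1)          # a_u
    b = 1 + (u - 1) % (p - 1)           # b_u
    return a, b

def representations(n, p, u_max=200, m_max=10):
    """All (u, m) within the search bounds with n == (a_u*p - b_u)*p^m."""
    reps = []
    for m in range(m_max + 1):
        for u in range(1, u_max + 1):
            a, b = coeffs(u, p)
            if (a * p - b) * p**m == n:
                reps.append((u, m))
    return reps

for n in range(1, 100):                  # every n has exactly one representation
    assert len(representations(n, 3)) == 1, n

print(representations(63, 3))            # the worked example: 63 = 7*3^2 -> [(6, 2)]
```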
197,441
<p>I have a list,</p> <pre><code>l1 = {{a, b, 3, c}, {e, f, 5, k}, {n, k, 12, m}, {s, t, 1, y}} </code></pre> <p>and want to take differences of the third parts of consecutive sublists and collect the parts to the right of the numerals.</p> <p>My result should be</p> <pre><code>l2 = {{2, c, k}, {7, k, m}, {-11, m, y}} </code></pre> <p>I tried Map and MapAt, but I could not get anywhere. I could work around it by splitting things up and connecting them again. But is there a better way to do it?</p>
kglr
125
<p>You can also use <a href="https://reference.wolfram.com/language/ref/BlockMap.html" rel="nofollow noreferrer"><code>BlockMap</code></a> as follows:</p> <pre><code>BlockMap[{#[[3]].{-1, 1}, ## &amp; @@ Flatten@#[[4 ;;]]} &amp;@* Transpose, l1, 2, 1] </code></pre> <blockquote> <p>{{2, c, k}, {7, k, m}, {-11, m, y}} </p> </blockquote> <p>or</p> <pre><code>BlockMap[{#[[1]].{-1, 1}, ## &amp; @@ Flatten@ #[[2 ;;]]} &amp;@*Transpose, l1[[All, 3 ;;]], 2, 1] </code></pre> <blockquote> <p>{{2, c, k}, {7, k, m}, {-11, m, y}} </p> </blockquote>
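Not Mathematica, but as a cross-check of the intended transformation, the same operation in plain Python (my own illustration; symbolic entries stand in as strings):

```python
# Cross-check of the transformation (illustration only): pair consecutive
# rows, take the difference of the third entries, keep the tails.
l1 = [["a", "b", 3, "c"], ["e", "f", 5, "k"], ["n", "k", 12, "m"], ["s", "t", 1, "y"]]

l2 = [[y[2] - x[2]] + x[3:] + y[3:] for x, y in zip(l1, l1[1:])]
print(l2)   # [[2, 'c', 'k'], [7, 'k', 'm'], [-11, 'm', 'y']]
```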
649,495
<p>Hello, I'm having trouble understanding the factorization of the polynomial </p> <p>$$x^4-8x$$</p> <p>After that, I turned it into $$x(x^3-8)$$</p> <p>But I don't quite understand the process by which it's factored as </p> <p>$$x(x−2)(x^2+2x+4)$$</p> <p>Thanks!</p>
taue2pi
112,397
<p>You can learn "difference of cubes" $$(x^3-y^3)=(x-y)(x^2+xy+y^2)$$ or the general form $$(x^n-y^n)=(x-y)(x^{n-1}y^0+x^{n-2}y^1+...+x^1y^{n-2}+x^0y^{n-1})$$</p> <p>Here $y=2$: $x^3-8=x^3-2^3=(x-2)(x^2+2x+4)$.</p>
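A quick numeric spot-check of the difference-of-cubes identity with $y=2$ (my own illustration, not part of the answer):

```python
# Spot-check of x^3 - 8 == (x - 2)(x^2 + 2x + 4) at a few integer points.
for x in [-3, -1, 0, 1, 2, 5, 10]:
    assert x**3 - 8 == (x - 2) * (x**2 + 2*x + 4)
print("x^3 - 8 == (x - 2)(x^2 + 2x + 4) at all sample points")
```

Since both sides are cubic polynomials, agreement at four or more points already forces the identity to hold everywhere.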
1,080,858
<p>Why do we have</p> <ul> <li>$u_n=\dfrac{1}{\sqrt{n^2-1}}-\dfrac{1}{\sqrt{n^2+1}}=O\left(\dfrac{1}{n^3}\right)$</li> <li>$u_n=e-\left(1+\frac{1}{n}\right)^n\sim \dfrac{e}{2n}$</li> </ul> <p>any help would be appreciated</p>
DeepSea
101,504
<p><strong>Hint:</strong> $\left(\dfrac{1}{n^2-1}\right)^{1/2} = \left(n^2-1\right)^{-1/2} = \dfrac{1}{n}\left(1-\frac{1}{n^2}\right)^{-1/2} = \dfrac{1}{n} + \mathcal{O}\left(\frac{1}{n^3}\right)$</p>
1,080,858
<p>Why do we have</p> <ul> <li>$u_n=\dfrac{1}{\sqrt{n^2-1}}-\dfrac{1}{\sqrt{n^2+1}}=O\left(\dfrac{1}{n^3}\right)$</li> <li>$u_n=e-\left(1+\frac{1}{n}\right)^n\sim \dfrac{e}{2n}$</li> </ul> <p>any help would be appreciated</p>
RE60K
67,609
<p>Using Binomial Expansion:</p> <blockquote> <p>$$u_n=\dfrac{1}{\sqrt{n^2-1}}-\dfrac{1}{\sqrt{n^2+1}}\\ =(n^2-1)^{-1/2}-(n^2+1)^{-1/2}\\ =h[(1-h^2)^{-1/2}-(1+h^2)^{-1/2}]\quad h:=n^{-1}\\ =h[(1+h^2/2+...)-(1-h^2/2+...)]\\ =h[h^2+...]\\ =O(h^3)=O(n^{-3})$$</p> </blockquote> <p>And using the $e$ and $\ln$-series:</p> <blockquote> <p>$$e-\left(1+\frac1n\right)^n\\ =e-e^{\frac{\ln(1+1/n)}{1/n}}\\ =e-e^{1-1/(2n)+...}=e[1-e^{-1/(2n)+...}]\\ =e[1-(1+(-1/2n)+...)]\\ \sim e/(2n)$$</p> </blockquote>
1,080,858
<p>Why do we have</p> <ul> <li>$u_n=\dfrac{1}{\sqrt{n^2-1}}-\dfrac{1}{\sqrt{n^2+1}}=O\left(\dfrac{1}{n^3}\right)$</li> <li>$u_n=e-\left(1+\frac{1}{n}\right)^n\sim \dfrac{e}{2n}$</li> </ul> <p>any help would be appreciated</p>
Clement C.
75,808
<p>For the first one: $$\begin{align} \frac{1}{\sqrt{n^2-1}}-\frac{1}{\sqrt{n^2+1}} &amp;= \frac{1}{n}\left(\frac{1}{\sqrt{1-\frac{1}{n^2}}}-\frac{1}{\sqrt{1+\frac{1}{n^2}}}\right) \\ &amp;= \frac{1}{n}\left(\frac{1}{1-\frac{1}{2n^2}+o\!\left(\frac{1}{n^2}\right)}-\frac{1}{1+\frac{1}{2n^2}+o\!\left(\frac{1}{n^2}\right)}\right) \\ &amp;=\frac{1}{n}\left(\left(1+\frac{1}{2n^2}+o\!\left(\frac{1}{n^2}\right)\right)-\left(1-\frac{1}{2n^2}+o\!\left(\frac{1}{n^2}\right)\right)\right) \\ &amp;=\frac{1}{n}\left(\frac{1}{2n^2}+o\!\left(\frac{1}{n^2}\right)+\frac{1}{2n^2}+o\!\left(\frac{1}{n^2}\right)\right) \\ &amp;=\frac{1}{n}\left(\frac{1}{n^2}+o\!\left(\frac{1}{n^2}\right)\right) \\ &amp;=\frac{1}{n^3}+o\!\left(\frac{1}{n^3}\right) \end{align}$$ using the Taylor expansions:</p> <ul> <li>$\sqrt{1+x} \operatorname*{=}_{x\to0} 1+\frac{x}{2} +o(x)$</li> <li>$\frac{1}{1+x} \operatorname*{=}_{x\to0} 1-x+x^2-\cdots+x^k +o(x^k)$</li> <li>$\frac{1}{1-x} \operatorname*{=}_{x\to0} 1+x+x^2+\cdots+x^k +o(x^k)$</li> </ul> <hr> <p>For the second: $$\begin{align} e - \left(1+\frac{1}{n}\right)^n &amp;= e - e^{n\ln\left(1+\frac{1}{n}\right)} = e - e^{n\left(\frac{1}{n}-\frac{1}{2n^2} + o\!\left(\frac{1}{n^2}\right)\right)} \\ &amp;= e^1 - e^{1-\frac{1}{2n} + o\!\left(\frac{1}{n}\right)} \\ &amp;= e\cdot\left( 1 - e^{-\frac{1}{2n} + o\!\left(\frac{1}{n}\right)}\right) \\ &amp;= e\cdot\left( 1 - \left( 1-\frac{1}{2n} + o\!\left(\frac{1}{n}\right)\right)\right) \\ &amp;= e\cdot\left( \frac{1}{2n} + o\!\left(\frac{1}{n}\right)\right) \\ \end{align}$$</p> <p>using this time the Taylor expansions:</p> <ul> <li>$\ln(1+x) \operatorname*{=}_{x\to0} x - \frac{x^2}{2} +o(x^2)$</li> <li>$e^x \operatorname*{=}_{x\to0} 1+x+o(x)$</li> </ul>
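Both asymptotics are easy to confirm numerically. A sketch of my own (not from the answer; `u1`, `u2` are my names): $n^3 u_n \to 1$ for the first sequence and $2n\,u_n/e \to 1$ for the second.

```python
import math

# Numeric confirmation of both asymptotics (illustration only).
def u1(n):
    return 1 / math.sqrt(n**2 - 1) - 1 / math.sqrt(n**2 + 1)

def u2(n):
    return math.e - (1 + 1/n)**n

for n in [10**2, 10**3, 10**4]:
    print(n, n**3 * u1(n), 2*n*u2(n)/math.e)   # both products approach 1
```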
702,804
<p>I just need a sanity check, been thinking about this all morning.</p> <p>If we use the Mean Value Theorem on a function over the infinite interval (suppose the function's domain is unbounded), i.e.</p> <p>$$M=\lim\limits_{T \to \infty} \dfrac{1}{2T}\int_{-T}^{T} \text{dt} f(t)$$</p> <p>There is no way that M can be finite right? My intuition tells me it's either zero or infinite, but I wanted another opinion; oddly enough, I wasn't able to <a href="http://www.google.com/" rel="nofollow">google</a> it.</p> <p>Thanks!</p>
John Gowers
26,267
<p>The binomial series expansion that you used - </p> <p>$$ \frac1{1-y}=\sum_{n=0}^{\infty} y^n $$</p> <p>is valid only when $|y|&lt;1$. Otherwise, the series diverges. In your solution, you are substituting in $(x-1)$ for $y$, so your solution is valid for $|x-1|&lt;1$, so your power series is in some sense centred at $1$. Indeed, your power series is not a power series in $x$, but a power series in $x-1$. Normally, we want to find a series in $x$ (centred at $0$), and this is what the textbook solution is doing. If we substitute in $\frac{x}{2}$ for $y$, then we get a series valid for $|x/2|&lt;1$, or for $|x|&lt;2$. </p> <p>Note, however, that you can transform from one series to the other. The following is non-rigorous, but can be tidied up if you want to: </p> <p>\begin{align} \sum_{n=0}^\infty (x-1)^n &amp;= \sum_{n=0}^\infty \sum_{k=0}^n {n \choose k} x^k(-1)^{n-k}\\ &amp;=\sum_{k=0}^\infty x^k \sum_{n=k}^\infty {n\choose k} (-1)^{n-k} \end{align}</p> <p>At this point, the series $\sum_{n=k}^\infty {n\choose k} (-1)^{n-k}$ doesn't actually converge, but if you extend the notion of summation in a special way (called 'Abel summation', I think) then you can get it to come out as $2^{-n-1}$. Not at all rigorous: has anyone got a better way of doing this?</p>
25,137
<p>I want to find an intuitive analogy to explain how binary addition (more precisely: an adder circuit in a computer) works. The point here is to explain the abstract process of <em>adding</em> something by comparing it to something that isn't abstract itself.</p> <p>In principle: an everyday object or an action that is structured like or functionally resembles an adder.</p> <p>Think of a thing that can belong to any number of categories x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, x<sub>5</sub>, x<sub>6</sub>, x<sub>7</sub>, x<sub>8</sub> for which the property holds that if you put two objects together/perform two actions simultaneously, and both the objects/actions are of the same category, you automatically create an object or perform an action that is of the next higher category that the object doesn't yet belong to, the whole thing therefore implementing the basic functionality of an adder.</p> <p>(Categories are changing here analogous to the bits in the circuit: 00000001 (1) + 00000001 (1) together, adds up to 00000010 (2).)</p> <p>But I just can't think of such a situation or an object where this pattern would occur. Whatever analogy I create, with an increasing number of categories the way these categories transform becomes increasingly harder to explain, and the metaphor becomes overly specific and unwieldy.</p> <p>Hence the question:</p> <p><strong>What's an everyday object that resembles an adder in its basic functionality?</strong></p>
Steven Gubkin
117
<p>Why do you need an analogy?</p> <p>I think just having a bunch of beans to count, and grouping the beans into groups of size 1, 2, 4, 8, 16, etc is intuitive enough without needing an analogy. The comparison with base ten arithmetic, where we group into 1, 10, 100, 1000, etc is clear.</p> <p>Addition is straightforward: combine the piles and regroup.</p>
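The combine-and-regroup idea can be mechanized directly; a small sketch of my own (purely illustrative, not from the answer):

```python
# "Combine the piles and regroup": pile i holds beans worth 2^i; two beans in
# a pile regroup into one bean one pile up, exactly like a carry in an adder.

def to_piles(n, width=8):
    return [(n >> i) & 1 for i in range(width)]

def add_by_regrouping(a_piles, b_piles):
    piles = [x + y for x, y in zip(a_piles, b_piles)]   # combine the piles
    piles.append(0)                                      # room for the last carry
    for i in range(len(piles) - 1):
        piles[i + 1] += piles[i] // 2                    # regroup pairs upward
        piles[i] %= 2
    return piles

result = add_by_regrouping(to_piles(1), to_piles(1))     # 00000001 + 00000001
print(result)                                            # pile 1 holds the regrouped bean: 2
assert sum(c << i for i, c in enumerate(result)) == 2
```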
3,325,340
<p>Show that <span class="math-container">$$ \lim\limits_{(x,y)\to(0,0)}\dfrac{x^2y^2}{x^2+y^2}=0$$</span> My try: We know that, <span class="math-container">$$ x^2\leq x^2+y^2 \implies x^2y^2\leq (x^2+y^2)y^2 \implies x^2y^2\leq (x^2+y^2)^2$$</span> Then, <span class="math-container">$$\dfrac{x^2y^2}{x^2+y^2}\leq x^2+y^2 $$</span> So we chose <span class="math-container">$\delta=\sqrt{\epsilon}$</span></p>
user0102
322,814
<p><strong>HINT</strong> <span class="math-container">\begin{align*} 0\leq x^{2} \leq x^{2} + y^{2} \Longleftrightarrow 0\leq \frac{x^{2}}{x^{2}+y^{2}} \leq 1 \Longleftrightarrow 0\leq \frac{x^{2}y^{2}}{x^{2}+y^{2}}\leq y^{2} \end{align*}</span></p> <p>Then apply the squeeze theorem.</p>
3,325,340
<p>Show that <span class="math-container">$$ \lim\limits_{(x,y)\to(0,0)}\dfrac{x^2y^2}{x^2+y^2}=0$$</span> My try: We know that, <span class="math-container">$$ x^2\leq x^2+y^2 \implies x^2y^2\leq (x^2+y^2)y^2 \implies x^2y^2\leq (x^2+y^2)^2$$</span> Then, <span class="math-container">$$\dfrac{x^2y^2}{x^2+y^2}\leq x^2+y^2 $$</span> So we chose <span class="math-container">$\delta=\sqrt{\epsilon}$</span></p>
MafPrivate
695,001
<p><strong>Tips</strong></p> <p><span class="math-container">$\lim\limits_{\left(x,y\right)\rightarrow\left(0,0\right)} \dfrac{x^2 y^2}{x^2+y^2} = \lim\limits_{\left(x,y\right)\rightarrow\left(0,0\right)} \dfrac{1}{\frac{1}{x^2}+\frac{1}{y^2}}$</span></p>
501,660
<p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p> <p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p> <p>Or, if you could redirect me to a place that explains how to do it, please do so.</p> <p>My dad said there isn't, but I just had to make sure.</p> <p>Thanks.</p>
Eric Stucky
31,888
<p>Congratulations! You've stumbled into a very interesting question!</p> <p>In higher mathematics, we often notice that some things which are really easy to talk about but difficult to express rigorously have a property which is really easy to express rigorously but something that we probably wouldn't have thought of to begin with.</p> <p>The trig functions are one of these things. With (a lot of) effort, you can show that </p> <p>$$\sin x = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \frac{x^9}{362880} - \cdots $$</p> <p>where the patterns of increasing the powers of $x$ by $2$, and switching between $+$ and $-$ signs continues forever. (The denominators also have a pattern: take the power that $x$ is raised to in the term and multiply it by all of the smaller numbers down to $1$; that is the number in the denominator). Note that you have to use radians for this exact formula to work; of course you could come up with one for degrees as well.</p> <p>When you start realizing that circles are actually quite tricky objects to define, formulas like that one start to look more appealing. I have had multiple mathematics textbooks take this infinitely long expression as the <em>definition</em> of the sine function. (It turns out to be the same thing as the circle definition, but… well, circles get complicated.)</p> <p>Of course, we can't sit around multiplying and adding for the rest of our lives just to compute $\sin 1$, but we can just cut off the operations after a couple terms. If you go out to the $x^7$ term, you can guarantee that your answer is accurate to at least 3 decimal places as long as you use angles between $-\frac{\pi}{2}$ and $\frac\pi 2$. 
(These are the only angles you really need, if you get rid of multiples of $\pi$ properly.)</p> <p>The cosine formula, in case you are interested, is similar: $$\cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720}+ \frac{x^8}{40320}-\cdots$$</p> <p>The internet has formulas for the other trig functions, but you can always just combine these.</p> <p>As copper.hat says, there are also these large books where people did the calculations once and wrote them down so that nobody would have to do them again. Of course, these were made long before computers existed; nobody makes them anymore! But somebody from your parents' or grandparents' generation probably still has one sitting in their house.</p>
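A quick sketch of my own (not from the answer) of the series truncated at the $x^7$ term, checking the accuracy claim on $[-\pi/2, \pi/2]$:

```python
import math

# Sine via the series through x^7; denominators are 3!, 5!, 7!.
def sin_series(x):
    return x - x**3/6 + x**5/120 - x**7/5040

xs = [k * math.pi / 200 - math.pi / 2 for k in range(201)]   # grid on [-pi/2, pi/2]
worst = max(abs(sin_series(x) - math.sin(x)) for x in xs)
print(worst)   # roughly 1.6e-4 at the endpoints, so 3 decimal places everywhere
```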
501,660
<p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p> <p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p> <p>Or, if you could redirect me to a place that explains how to do it, please do so.</p> <p>My dad said there isn't, but I just had to make sure.</p> <p>Thanks.</p>
Michael Hardy
11,667
<p>Long before there were power series, in the second century A.D., Ptolemy, a man who wrote in Greek and probably lived in Alexandria, created a table of values of what amounts to the sine function.</p> <p>See <a href="https://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords" rel="nofollow noreferrer">this page</a>.</p> <blockquote> <p>Chapter 10 of Book I of the ''Almagest'' presents geometric theorems used for computing chords. Ptolemy used geometric reasoning based on Proposition 10 of Book XIII of Euclid's <em>Elements</em> to find the chords of <span class="math-container">$72^\circ$</span> and <span class="math-container">$36^\circ.$</span> That Proposition states that if an equilateral pentagon is inscribed in a circle, then the area of the square on the side of the pentagon equals the sum of the areas of the squares on the sides of the hexagon and the decagon inscribed in the same circle. <p><p>He used Ptolemy's theorem on quadrilaterals inscribed in a circle to derive formulas for the chord of a half-arc, the chord of the sum of two arcs, and the chord of a difference of two arcs. The theorem states that for a quadrilateral inscribed in a circle, the product of the lengths of the diagonals equals the sum of the products of the two pairs of lengths of opposite sides. The derivations of trigonometric identities rely on a cyclic quadrilateral in which one side is a diameter of the circle.<p><p> To find the chords of arcs of <span class="math-container">$1^\circ$</span> and <span class="math-container">$\left(\tfrac 1 2\right)^\circ$</span> he used approximations based on Aristarchus's inequality. 
The inequality states that for arcs <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta,$</span> if <span class="math-container">$0&lt;\beta&lt;\alpha&lt;90^\circ,$</span> then <span class="math-container">$$ \frac{\sin \alpha}{\sin \beta} &lt; \frac\alpha\beta &lt; \frac{\tan\alpha}{\tan\beta}.$$</span> <p><p>Ptolemy showed that for arcs of <span class="math-container">$1^\circ$</span> and <span class="math-container">$\left(\tfrac 1 2 \right)^\circ,$</span> the approximations correctly give the first two sexagesimal places after the integer part.</p> </blockquote>
2,791,204
<p>I am trying to understand whether or not the product of two positive semidefinite matrices is also positive semidefinite. This topic has already been discussed in the past <a href="https://math.stackexchange.com/q/113859">here</a>. For me $A$ is positive definite" means $x^T A x &gt; 0$ for all nonzero real vectors $x$, in this case @RobertIsrael gives a counterexample: $$ A = \pmatrix{ 1 &amp; 2\cr 2 &amp; 5\cr},\ B = \pmatrix{1 &amp; -1\cr -1 &amp; 2\cr},\ AB = \pmatrix{-1 &amp; 3\cr -3 &amp; 8\cr},\ (1\ 0) A B \pmatrix{1\cr 0\cr} = -1$$ However, then proceeds to prove that for $A$, $B$, positive semidefinite real symmetric matrices the result holds. </p> <p>The proof is very short, quoting the answer: "Then $A$ has a positive semidefinite square root, which I'll write as $A^{1/2}$. Now $A^{1/2} B A^{1/2}$ is symmetric and positive semidefinite, and $AB = A^{1/2} (A^{1/2} B)$ and $A^{1/2} B A^{1/2}$ have the same nonzero eigenvalues."</p> <p>So then the question is: what does it mean to be positive semidefinite real symmetric and why are $A$, $B$ not of this type?</p>
Riccardo Sven Risuleo
419,249
<p>The problem is not in $A$ and $B$, but in their product $AB$: the product is not symmetric; hence, there is no clear definition of <em>positive definiteness</em>.</p> <p>In standard parlance, a <em>Hermitian</em> (or <em>symmetric</em>) matrix $M$ is <em>positive definite</em> if $x^T M x &gt; 0$ for all nonzero $x$ (and this corresponds to $M$ having only positive eigenvalues). If $M$ is not symmetric, then $x^T M x$ may be zero or negative, even though $M$ has only positive eigenvalues (as in the example by RobertIsrael).</p> <p>So, for positive definiteness to hold, you need that $M = AB$ is Hermitian.</p>
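The counterexample quoted in the question can be checked in a few lines; a sketch of my own with hand-rolled $2\times 2$ arithmetic (no libraries assumed):

```python
# Robert Israel's counterexample: A, B symmetric positive definite, yet AB is
# not symmetric and x^T (AB) x can be negative.
A = [[1, 2], [2, 5]]
B = [[1, -1], [-1, 2]]

AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(AB)           # [[-1, 3], [-3, 8]] -- note AB is not symmetric

x = [1, 0]
q = sum(x[i] * AB[i][j] * x[j] for i in range(2) for j in range(2))
print(q)            # -1: a negative quadratic form despite positive eigenvalues
```

(The eigenvalues of $AB$ are indeed positive here: its trace is $7$ and its determinant is $1$.)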
153,217
<blockquote> <p>Let $$f(x)=\frac{2x+1}{\sin(x)}$$ Find $f'(x).$ </p> </blockquote> <p>I used Quotient Rule <br> $$\begin {align*}\frac{\sin(x)2-(2x+1)\cos(x)}{\sin^2(x)}\\ =\frac{3-2x\cos(x)}{\sin(x)} \end {align*}$$</p> <p>Is that right? I don't know how to get the answer. Please help me out, thanks.</p>
DonAntonio
31,254
<p>You had (or should have had) $$\frac{2\sin x-2x\cos x-\cos x}{\sin^2x}$$ and this can't possibly equal what you wrote (where does that $\,3\,$ come from?)</p>
625,821
<p>$$\int^\infty_0\frac{1}{x^3+1}\,dx$$</p> <p>The answer is $\frac{2\pi}{3\sqrt{3}}$.</p> <p>How can I evaluate this integral?</p>
lab bhattacharjee
33,337
<p>Using <a href="http://mathworld.wolfram.com/PartialFractionDecomposition.html" rel="nofollow">Partial Fraction Decomposition</a>,</p> <p>$$\frac1{1+x^3}=\frac A{1+x}+\frac{Bx+C}{1-x+x^2}$$</p> <p>Multiply both sides by $(1+x)(1-x+x^2)$ and compare the coefficients of the different powers of $x$ to find $A,B,C$</p> <p>As $\displaystyle1-x+x^2=\frac{4x^2-4x+4}4=\frac{(2x-1)^2+3}4$</p> <p>Using <a href="http://en.wikipedia.org/wiki/Trigonometric_substitution" rel="nofollow">Trigonometric substitution</a>, set $\displaystyle 2x-1=\sqrt3\tan\phi$ in the second part</p>
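The stated value can be checked numerically before doing the partial fractions; a rough quadrature sketch of my own (the cutoff `X` and the tail estimate are my own choices):

```python
import math

# Simpson's rule on [0, X] plus the crude tail estimate 1/(2 X^2), since the
# integrand behaves like x^-3 for large x.  (Illustration only.)
def f(x):
    return 1 / (1 + x**3)

X, N = 200.0, 200000                      # N even
h = X / N
s = (f(0) + f(X)
     + 4 * sum(f((2*k - 1) * h) for k in range(1, N//2 + 1))
     + 2 * sum(f(2*k * h) for k in range(1, N//2)))
approx = s * h / 3 + 1 / (2 * X**2)

print(approx, 2 * math.pi / (3 * math.sqrt(3)))   # both about 1.2092
```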
1,986,247
<p>The $n$th Catalan number is: $$C_n = \frac {1} {n+1} \times {2n \choose n}$$ Problem 12-4 of CLRS asks to show that: $$C_n = \frac {4^n} { \sqrt {\pi} n^{3/2}} (1+ O(1/n)) $$ And Stirling's approximation is: $$n! = \sqrt {2 \pi n} {\left( \frac {n}{e} \right)}^{n} {\left( 1+ \Theta \left(\frac {1} {n}\right) \right)} $$ So, the $n$th Catalan number becomes: $$C_n = \frac {(2n)!}{(n+1)(n!)^2} $$ That, after applying Stirling's approximation, becomes: $$C_n = \left( \frac {1}{1+n} \right) \left( \frac {4^n}{\sqrt{\pi n}} \right) \frac {1}{\left( 1+\Theta \left(1/n\right) \right)}$$ And then it becomes hopeless. The asymptotic bound comes in the denominator, not in the numerator.<br> What should be done now? </p> <p>Any help appreciated.<br> Moon </p>
Micah
30,836
<p>Use the binomial approximation for $(1+y)^k$:</p> <p>$$ (1+y)^k=1+ky+\Theta(y^2) $$ as $y \to 0$.</p> <p>In your case, you can take $k=-1$ to show that any function which is $\frac{1}{1+\Theta(1/n)}$ is also $1+\Theta(\frac{1}{n})$.</p>
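The $O(1/n)$ bound is also visible numerically; a sketch of mine (not from the answer), working in logs via `lgamma` to avoid overflow, using $\ln C_n = \ln(2n)! - 2\ln n! - \ln(n+1)$:

```python
import math

# Check C_n ~ 4^n / (sqrt(pi) n^{3/2}) with relative error O(1/n).
def log_catalan(n):
    # ln C_n = ln (2n)! - 2 ln n! - ln(n+1)
    return math.lgamma(2*n + 1) - 2*math.lgamma(n + 1) - math.log(n + 1)

for n in [100, 1000, 10000]:
    log_approx = n*math.log(4) - 0.5*math.log(math.pi) - 1.5*math.log(n)
    ratio = math.exp(log_catalan(n) - log_approx)
    print(n, ratio, n * (ratio - 1))   # n*(ratio - 1) stays bounded, so O(1/n)
```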
1,611,560
<p>Reading through the first half of Baby Rudin again before taking an Analysis class, I came across the assertion that "it is also easy to show that k-cells are convex". </p> <p>Previously it gave the example of open/closed balls being convex, and the proof is obvious and easy to understand. That being said, and it's probably very easy, but I can't for the life of me produce something that shows that all k-cells are convex as well.</p> <p>I understand convex to sort of say "all points 'between' two points in a given set are also in the set" in Layman's terms, but I'm not sure where to proceed from there. I either got this on my first read and can't remember for the life of me, or I took it for granted and went with it. </p> <p>Any help leading me in the right direction?</p>
Myo Nyunt
828,003
<p>If <span class="math-container">$x,y$</span> are two points of a k-cell, <span class="math-container">$C=\{(x_1,\ldots,x_k )\mid a_i\le x_i\le b_i \}$</span>, then for <span class="math-container">$1\le i\le k$</span>, <span class="math-container">$a_i\le x_i\le b_i , a_i\le y_i\le b_i$</span> and, for <span class="math-container">$0\le\lambda\le 1$</span>, <span class="math-container">$ \lambda a_i+(1-\lambda) a_i\le \lambda x_i+(1-\lambda) y_i\le\lambda b_i+(1-\lambda) b_i$</span>, that is, <span class="math-container">$a_i\le \lambda x_i+(1-\lambda) y_i\le b_i$</span>; so <span class="math-container">$\lambda x+(1-\lambda)y$</span> is in <span class="math-container">$C$</span>; hence <span class="math-container">$C$</span> is convex.</p>
2,293,147
<p>I was trying to solve this ODE $\frac{dy}{dx} = c_{1} + c_{2}y + \frac{c_{3}}{y} , y(0) = c , c &gt;0$,</p> <p>where $c_{1},c_{2},c_{3}$ are three real numbers, say $c_{1} &lt; 0$ and $c_{2},c_{3} &gt; 0$.</p> <p>I thought of using separation of variables, giving me $x = \int\frac{y}{c_{1}y+c_{2}y^2+c_{3}}\,dy + c$.</p> <p>Next I am trying to rewrite the denominator in completed-square form $(a + by)^2 + k$, so completing the square in $c_{1}y + c_{2}y^2 + c_{3}$ we get</p> <p>$(c_{1}y + c_{2}y^2 + c_{3}) = \left(\frac{c_{1}}{2\sqrt{c_{2}}} + \sqrt{c_{2}}\,y\right)^2 + \left(c_{3} - \frac{c_{1}^2}{4c_{2}}\right)$</p> <p>thus $x = \int\frac{y}{\left(\frac{c_{1}}{2\sqrt{c_{2}}} + \sqrt{c_{2}}\,y\right)^2 + \left(c_{3} - \frac{c_{1}^2}{4c_{2}}\right)}\, dy + c$.</p> <p>Now I am stuck at this point. It also makes me wonder whether an analytic solution to this ODE exists.</p>
Jaideep Khare
421,580
<p>Instead of doing that, why don't you just multiply the whole expression by $y$ and then let $y^2=t$.</p>
1,363,144
<p>Given a cubic polynomial $f(x) = ax^{3} + bx^{2} + cx +d$ with arbitrary real coefficients and $a\neq 0$. Is there an easy test to determine when all the real roots of $f$ are negative?</p> <p>The Routh-Hurwitz Criterion gives a condition for roots lying in the open left half-plane for an arbitrary polynomial with complex coefficients which helps a little, but this criterion doesn't help me when the complex roots lie in the right half plane.</p>
Daniel Fischer
83,702
<p>We can assume that $a = 1$, since dividing by $a$ doesn't change the zeros. Then we know that $\lim\limits_{x\to +\infty} f(x) = +\infty$. We want to check whether $f$ has a zero in $[0,+\infty)$.</p> <p>If $d = 0$, we have $f(0) = 0$. There can be circumstances when that should count as negative. Then we are reduced to checking a quadratic polynomial, whose zeros we know to find.</p> <p>If $f(0) = d &lt; 0$, then $f$ has a zero in $(0,+\infty)$ by the intermediate value theorem.</p> <p>If $d &gt; 0$, we check the derivative,</p> <p>$$f'(x) = 3x^2 + 2bx + c = 3\biggl(x+\frac{b}{3}\biggr)^2 - \frac{b^2-3c}{3}.$$</p> <p>If $f'$ has no positive zero, or a double zero, then $f$ is strictly increasing on $[0,+\infty)$, so then $f$ has no non-negative real zero. If $b^2 &lt; 3c$, then $f'$ has two non-real zeros, and if $b^2 = 3c$, $f'$ has a double zero at $-\frac{b}{3}$. If $b^2 &gt; 3c$, then $f'$ has two distinct real zeros. The larger of these is</p> <p>$$\zeta = \frac{\sqrt{b^2-3c}-b}{3}.$$</p> <p>We have $\zeta &gt; 0$ if and only if $b \leqslant 0$ or $b &gt; 0$ and $c &lt; 0$. Then $f$ has no positive zero if and only if $f(\zeta) &gt; 0$.</p>
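The case analysis above translates directly into a test; a sketch of mine for monic cubics $x^3+bx^2+cx+d$ (my transcription of the answer's procedure, not a vetted library routine):

```python
import math

# Decision procedure: does x^3 + b x^2 + c x + d have a real zero in [0, inf)?
def has_nonnegative_root(b, c, d):
    if d < 0:
        return True                            # sign change on (0, inf)
    if d == 0:
        return True                            # f(0) = 0
    if b*b <= 3*c:
        return False                           # f' has no two distinct real zeros
    zeta = (math.sqrt(b*b - 3*c) - b) / 3      # larger critical point of f
    if zeta <= 0:
        return False                           # f is increasing on [0, inf)
    return zeta**3 + b*zeta**2 + c*zeta + d <= 0

def from_roots(r1, r2, r3):                    # coefficients from known roots
    return -(r1 + r2 + r3), r1*r2 + r1*r3 + r2*r3, -r1*r2*r3

assert not has_nonnegative_root(*from_roots(-1, -2, -3))   # all roots negative
assert has_nonnegative_root(*from_roots(2, -1, -3))        # one positive root
print("procedure agrees with the constructed examples")
```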
1,289,868
<p>EDIT (<em>now asking how to write $F$ as distributions, instead of writing the integral in terms of distributions</em>): </p> <p>Let $F$ be the distribution defined by its action on a test function $\phi$ as </p> <p>\begin{equation*} F(\phi)=\int_{\pi}^{2\pi}x\phi(x)dx. \end{equation*}</p> <p>How would you write $F$ in terms of the delta distribution, Heaviside distribution, and a regular distribution $R$ defined by its action on a test function $\phi$ as </p> <p>\begin{equation*} R(\phi)=\int_{-\infty}^{\infty}g(x)\phi(x)dx \end{equation*}</p> <p>for a continuous function $g$?</p> <p>Edit: Q1)b) in this link <a href="https://www.maths.ox.ac.uk/system/files/legacy/3422/B5a_13.pdf" rel="nofollow">https://www.maths.ox.ac.uk/system/files/legacy/3422/B5a_13.pdf</a></p>
paul garrett
12,291
<p>Although I think @NikitaEvseev's answer is the most natural, perhaps the small exercise of rewriting integration over an interval $[a,b]$ as a distribution is worthwhile. That is, consider $$ u(\varphi) \;=\; \int_a^b \varphi(x)\;dx $$ Edit: simpler than what I wrote before: $$ u(\varphi) \;=\; \int_a^b \varphi(x)\,dx \;=\; \int_{-\infty}^\infty (H(x-a) - H(x-b))\cdot \varphi(x)\;dx $$ That is, if <em>translates</em> and reflections of the Heaviside function are allowed, then it's just this (with $\varphi(x)$ replaced by $x\cdot \varphi(x)$).</p> <p>But then I'm confused as to why the posing of the question in the linked-to source made it seem as though something more complicated should happen.</p>
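Numerically the two expressions agree, of course; a tiny quadrature sketch of my own with $a=\pi$, $b=2\pi$ (the interval from the question) and a sample test function $\varphi$ of my choosing:

```python
import math

# Check that the window (H(x-a) - H(x-b)) reproduces integration over [a, b]
# for a sample test function, by midpoint sums on a window [-L, L].
a, b, L, N = math.pi, 2*math.pi, 20.0, 200000

def H(t):
    return 1.0 if t >= 0 else 0.0

def phi(x):
    return math.exp(-x*x/20)               # smooth and rapidly decaying

h = 2*L/N
lhs = sum((H(x - a) - H(x - b)) * phi(x)
          for x in (-L + (k + 0.5)*h for k in range(N))) * h
rhs = sum(phi(a + (k + 0.5)*(b - a)/N) for k in range(N)) * (b - a)/N
print(lhs, rhs)                            # the two midpoint sums agree closely
```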
481,421
<p>Find the limit of: $$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
njguliyev
90,209
<p>$$\lim_{x \to \infty} \frac{\cos \frac1x - 1}{\cos \frac2x - 1} = \lim_{x \to \infty} \frac{-2\sin^2 \frac{1}{2x}}{-2\sin^2 \frac{1}{x}} = \lim_{x \to \infty} \frac{\sin^2 \frac{1}{2x}}{\sin^2 \frac{1}{x}} = \lim_{x \to \infty} \frac14 \frac{\sin^2 \frac{1}{2x}}{\frac{1}{(2x)^2}} \frac{\frac{1}{x^2}}{\sin^2 \frac{1}{x}} = \frac14.$$</p>
481,421
<p>Find the limit of: $$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
minar
86,791
<p>Another option: let $u=\exp(i/(2x))$. Then $\cos(1/x)-1=(u^2+1/u^2)/2-1=(u-1/u)^2/2$ and $\cos(2/x)-1=(u^4+1/u^4)/2-1=(u^2-1/u^2)^2/2$.</p> <p>Thus: $$\frac{\cos(1/x)-1}{\cos(2/x)-1} =\left(\frac{u-1/u}{u^2-1/u^2}\right)^2 =\left(\frac{1}{u+1/u}\right)^2. $$</p> <p>Since $u$ tends to $1$ as $x$ goes to infinity, you can conclude that the desired limit is $1/4$.</p>
2,195,739
<p>For $$ f(x) = \begin{cases} x^2 &amp; \text{if $x\in\mathbb{Q}$,} \\[4px] x^3 &amp; \text{if $x\notin\mathbb{Q}$} \end{cases} $$</p> <p>What I did was examine each of the limits at $0$ of $\displaystyle\lim_{x\to0} \frac{f(x)-f(a)}{x-a}$ for each case but I am not sure </p>
egreg
62,967
<p>You need to compute $$ \lim_{x\to0}\frac{f(x)-f(0)}{x-0}=\lim_{x\to0}\frac{f(x)}{x} $$ But you can write, for $x\ne0$, $$ \frac{f(x)}{x}=\begin{cases} x &amp; \text{if $x\in\mathbb{Q}$}\\[4px] x^2 &amp; \text{if $x\notin\mathbb{Q}$} \end{cases} $$ For $0&lt;|x|&lt;1$, you have $\left|\dfrac{f(x)}{x}\right|\le |x|$. Then…</p>
2,195,739
<p>For $$ f(x) = \begin{cases} x^2 &amp; \text{if $x\in\mathbb{Q}$,} \\[4px] x^3 &amp; \text{if $x\notin\mathbb{Q}$} \end{cases} $$</p> <p>What I did was examine each of the limits at $0$ of $\displaystyle\lim_{x\to0} \frac{f(x)-f(a)}{x-a}$ for each case but I am not sure </p>
complexCreature
424,614
<p>Maybe you prefer the definition. You must prove that: <br> $(\forall \epsilon\gt0)(\exists\delta\gt0) (|x-0|\lt\delta \Rightarrow \frac{|f(x)-f(0)|}{|x-0|}\lt\epsilon$) , that is: <br> $(\forall \epsilon\gt0)(\exists\delta\gt0) (|x|\lt\delta \Rightarrow \frac{|f(x)|}{|x|}\lt\epsilon$)<br> Since we are interested only in small $\epsilon$, let us take $\epsilon\in\langle0,1\rangle$ arbitrarily.<br> $\qquad$ Now define $\delta:= \epsilon$, obviously $\delta\gt0$ <br> $\qquad$ Let us assume $|x|\lt\delta$ , we have two cases:<br> case 1: $x\in\mathbb Q$<br> $\qquad$ $\frac{|f(x)|}{|x|}=\frac{|x^2|}{|x|}=|x|\lt\delta=\epsilon$<br> case 2: $x\notin\mathbb Q$<br> $\qquad$ $\frac{|f(x)|}{|x|}=\frac{|x^3|}{|x|}=|x|^2\lt\delta^2=\epsilon^2\lt\epsilon$<br> In both cases we have shown that:<br> $|x|\lt\delta \Rightarrow \frac{|f(x)|}{|x|}\lt\epsilon$<br> $\qquad$ which is what we needed to prove.</p>
82,716
<p>There seem to be two competing(?) formalisms for specifying theories: <a href="http://ncatlab.org/nlab/show/sketch" rel="noreferrer">sketches</a> (as developed by Ehresmann and students, and expanded upon by Barr and Wells in, for example, <a href="http://www.tac.mta.ca/tac/reprints/articles/12/tr12.pdf" rel="noreferrer">Toposes, Triples and Theories</a>), and the setting of <a href="http://cseweb.ucsd.edu/~goguen/pps/nel05.pdf" rel="noreferrer">institutions</a>. </p> <p>But I sometimes get a glimpse that sketches are really a very nice way to specify a good category of <em>signatures</em>, while institutions are much more model-theoretic. But in works on institutions, the category of signatures is usually highly under-specified (which is quite ironic, really).</p> <p>So my question really is: what is the relation between Sketches and Institutions? </p> <p>A subsidiary question is, why do I find a lot of work relying on institutions, but comparatively less on sketches? [I am talking volume here, not quality.] Did sketches somehow not prove to be effective?</p>
Zinovy Diskin
19,786
<p>So, saying that sketches compete with institutions is not correct because the former is an instance of the latter. The actual content of the question is probably this.</p> <p>There are two types/styles/paradigms of predicate logic: elementwise (eg, ordinary FOL) and sortwise, or categorical, logic. In the latter, predicates are just sorts, whose intended internal structure (set of tuples) is given by a family of projection arrows pi: P-->Ai (i=1,2..n for n-ary predicate P \subset A1x..xAn) declared to be jointly monic to exclude duplication of tuples. Importantly, a theory interpretation t: T1-->T2 can map sorts Ai in T1 to predicates Qj in T2. Such "mixing" theory interpretations are not considered in the elementwise logic. Thus, the competition is between two styles of predicate logic: sortwise vs. elementwise. Below I'll comment on this, but first let's clarify the institution part of the question. </p> <p>There are many elementwise logics (Horn, FOL, etc), each one forms an institution. There are many sortwise logics (formed by sketches of different types), each one also forms an institution (models are sketch morphisms m:T-->U from a theory sketch T to some "semantic" sketch U, usually extracted from a "semantic" category like Set). The actual Jacques' question is (Q1): why a typical work on institution theory is normally motivated by elementwise logic examples, and never by sketches? The dual of this question is (Q2): why institutions are so rarely mentioned/used in the categorical logic literature? </p> <p>(Q1). Institutions were invented by Goguen and Burstall to unify the diversity of elementwise logics; I think that they never considered sortwise logics in their papers. The institution theory has been heavily used by the algebraic specification community, whose main motivating example is algebras considered elementwise (like in universal algebra rather than in categorical algebra). Sortwise logics simply did not appear in their contexts. </p> <p>(Q2). 
For a categorical logician, the institution framework may seem unnecessarily abstract. I mean that the abstract institution functors, sen: Sig-->Set and mod:Sig^op-->Cat, have quite concrete origin for a sketch logic. Since both sentences (diagrams) and models are arrows, functors sen and mod are given by, respectively, post- and pre-composition of these arrows with signature morphisms. On the other hand, the institution theory does not take into account the concreteness of models (they have underlying sets), which is crucial for "real" model theory (and algebraic logic a la Henkin-Monk-Tarski). </p> <p>Now about the "competition" sortwise vs. elementwise logic. Different contexts may need one or the other, or both. For example, in my application area -- model-driven software engineering -- the sortwise setting is very convenient and actually widely spread in practice (of course, implicitly :). However, only considering universal predicates (limits, colimits,...) would be too heavy and very inconvenient. What is needed is the possibility to consider arbitrary sortwise predicates in syntax, but specify their semantics elementwise (with FOL or the like) rather than by universal properties. This idea leads to a version of Makkai's generalized sketches described in [1]. The paper discusses advantages of generalized sketches over Ehresmann's sketches in software engineering applications, and shows that generalized sketches form an institution.</p> <p>[1] Diskin and Wolter, A Diagrammatic Logic for Object-Oriented Visual Modeling. ENTCS, Volume 203 Issue 6, November, 2008 </p>
4,646,498
<p>I have an assignment for university and I’m a bit confused as to how I should translate the following sentence:</p> <p>Neither Ana nor Bob can do every exercise but each can do some.</p> <p>I've identified the atomic sentences A=Ana can do every exercise and B=Bob can do every exercise and managed to translate the first part into ~A &amp; ~B but I don't know how to go about &quot;each can do some&quot;. Any help would be greatly appreciated!</p>
A. P.
1,027,216
<p>Using reduction of order, one substitutes <span class="math-container">$p=y', p'=y''$</span>, and then <span class="math-container">$y''+4x=0$</span> can be written as <span class="math-container">$p'+4x=0$</span>, which is first order; integrating gives <span class="math-container">$p=a-2x^2$</span>. Substituting back gives <span class="math-container">$y'=a-2x^2$</span>, and integrating again gives <span class="math-container">$y=Ax+B-\frac{2}{3}x^{3}$</span>. It should be noted that this is exactly the same as the solution written by Angelo, only that I have made ''the substitution'' <span class="math-container">$p=y'$</span> explicit here. Perhaps it is not a good example to show the importance of order reduction using a substitution.</p>
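As a quick sanity check (a sketch using sympy; the constants A and B are arbitrary), one can verify that the general solution obtained above really satisfies the ODE:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*x + B - sp.Rational(2, 3)*x**3   # the general solution found above

# plug back into the ODE: y'' + 4x should vanish identically
residual = sp.simplify(sp.diff(y, x, 2) + 4*x)
print(residual)  # 0
```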
2,573,458
<p>Given $n$ prime numbers $p_1, p_2, p_3,\ldots,p_n$, the number $p_1p_2p_3\cdots p_n+1$ is not divisible by any of the primes $p_i, i=1,2,3,\ldots,n.$ I don't understand why. Can somebody give me a hint or an explanation? Thanks.</p>
szw1710
130,298
<p>A simpler argument is that dividing by any $p_k$ we get remainder $1$.</p>
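The remainder argument is easy to check numerically (a sketch; the prime list here is just an example, the argument works for any finite list):

```python
from math import prod

primes = [2, 3, 5, 7, 11]   # any finite list of primes
N = prod(primes) + 1        # the "Euclid number" of the list

# dividing N by each p_k leaves remainder 1, so no p_k divides N
assert all(N % p == 1 for p in primes)
print(N)  # 2311
```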
2,877,080
<p>Let A denote a commutative ring and let e denote an element of A such that $e^2 = e$. How can one prove that $eA \times (1 - e)A \simeq A$? I thought that $\phi: A \to eA \times (1 - e)A, \ \phi(a) = (ea, (1-e)a)$ is an isomorphism but I don't know how to prove that $\phi$ is a bijection.</p>
paf
333,517
<p>It depends on the audience. I assume you talk to first year undergraduate math students. </p> <p>Of course, you must include the precise statement of the theorems (in particular, insist on the condition $f(a)=f(b)$ in Rolle's theorem) and geometric interpretation (except maybe for Cauchy's MVT), maybe with their proofs (depending on the audience: it's necessary for math students, probably not for some other students) and you must give examples so that students can understand how to apply these results in practice. Maybe you can give counter-examples. </p> <p>One of the main consequences of MVT is the theorem linking variations of a differentiable function and sign of its derivative. If you have time, you may state (and prove) it. </p> <p>Finally, beginning by including some motivation can't be bad, to say the least. </p>
264,770
<p>If we have a vector in $\mathbb{R}^3$ (or any Euclidian space I suppose), say $v = (-3,-6,-9)$, then:</p> <ol> <li>May I always "factor" out a constant from a vector, as in this example like $(-3,-6,-9) = -3(1,2,3) \implies (1,2,3)$ or does the constant always go along with the vector?</li> <li>If yes on question 1, then if I want to compute the norm, is the correct computation the following: $||v|| = |-3|\sqrt{14} = 3\sqrt{14}$ ? If so, is the only reason that we take the absolute value of -3 because we don't want a negative length?</li> </ol> <p>I'm sorry if things are obvious but I just want to make sure I actually get this correctly.</p> <p>Best regards</p>
WhitAngl
11,897
<p>Starting with your question 2, the general relation $\|\lambda v\| = |\lambda| \|v\|$ always holds. From that you can answer your first question positively (even when $\lambda$ is negative). </p>
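A quick numerical illustration of $\|\lambda v\| = |\lambda| \|v\|$ with the vector from the question (a sketch using numpy):

```python
import numpy as np

v = np.array([-3.0, -6.0, -9.0])
w = np.array([1.0, 2.0, 3.0])

# v = -3 * w, and the norm is absolutely homogeneous:
# ||v|| = ||-3 w|| = |-3| * ||w|| = 3 * sqrt(14)
assert np.allclose(v, -3 * w)
assert np.isclose(np.linalg.norm(v), 3 * np.sqrt(14))
```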
264,770
<p>If we have a vector in $\mathbb{R}^3$ (or any Euclidian space I suppose), say $v = (-3,-6,-9)$, then:</p> <ol> <li>May I always "factor" out a constant from a vector, as in this example like $(-3,-6,-9) = -3(1,2,3) \implies (1,2,3)$ or does the constant always go along with the vector?</li> <li>If yes on question 1, then if I want to compute the norm, is the correct computation the following: $||v|| = |-3|\sqrt{14} = 3\sqrt{14}$ ? If so, is the only reason that we take the absolute value of -3 because we don't want a negative length?</li> </ol> <p>I'm sorry if things are obvious but I just want to make sure I actually get this correctly.</p> <p>Best regards</p>
Fly by Night
38,495
<p>You are perfectly entitled to "factorise" a vector, as you say $(-3,-6,-9) = -3(1,2,3).$ The important thing here is that this factorisation shows that the vectors $(-3,-6,-9)$ and $(1,2,3)$ are <a href="http://mathworld.wolfram.com/LinearlyDependentVectors.html" rel="nofollow">linearly dependent</a>. In the case of two vectors, this means that they are parallel vectors.</p> <p>In general, the vectors $(\lambda x, \lambda y, \lambda z)$ and $\lambda(x,y,z)$ are <em>identical</em>. They represent the same vector. Moreover, the vector $(\lambda x, \lambda y, \lambda z)$ is exactly $\lambda$-times the vector $(x,y,z)$.</p> <p>This is a nice way to simplify the calculations because:</p> <p>$$||(\lambda x, \lambda y, \lambda z)|| = |\lambda| \cdot ||(x,y,z)|| \, . $$</p> <p>This question hints towards a very interesting topic: projective space. The <a href="https://en.wikipedia.org/wiki/Projective_plane" rel="nofollow">projective plane</a>, denoted by $\mathbb{RP}^2$, can be defined using an <a href="https://en.wikipedia.org/wiki/Equivalence_relation" rel="nofollow">equivalence relation</a>: $\mathbb{RP}^2 = (\mathbb{R}^3\setminus\{0\})/\sim$ where </p> <p>$$(x_1,y_1,z_1) \sim (x_2,y_2,z_2) \iff \exists \ \lambda \neq 0 : (x_1,y_1,z_1) = \lambda(x_2,y_2,z_2) \, . $$</p> <p>The <a href="https://en.wikipedia.org/wiki/Equivalence_class" rel="nofollow">equivalence classes</a> are denoted by <a href="https://en.wikipedia.org/wiki/Homogeneous_coordinates" rel="nofollow">homogeneous coordinates</a> $(x_1:y_1:z_1)$.</p>
2,332,277
<p>First of all, note that $\frac{n^{n+1}}{(n+1)^n} \sim \frac{n}{e}$. </p> <p><em>Question</em>: Is there $n&gt;1$ such that $n^{n+1} \equiv 1 \mod (n+1)^n$?</p> <p>There is an OEIS sequence for $n^{n+1}\mod (n+1)^n$: <a href="https://oeis.org/A176823" rel="nofollow noreferrer">https://oeis.org/A176823</a>. </p> <blockquote> <p>$0, 1, 8, 17, 399, 73, 44638, 1570497, 5077565, 486784401, 22187726197, 166394893969, 13800864889148, 762517292682713, 9603465430859099, 803800832678655745, 3180753925351614970, 947615093635545799201$</p> </blockquote>
kotomord
382,886
<p>Use the previous answer for $n = 4k + 1$:</p> <p>$n^{n+1}-1 = k \cdot (n-1)$</p> <p>For $n=2k$:</p> <p>$n-1$ and $n+1$ are coprime, so $n^{n+1}-1 = (n-1)(n+1)^n \cdot k$. By the asymptotic equation from the comments, $k = 0$.</p> <p>For $n=4k+3$: $\frac{n-1}{2}$ and $n+1$ are coprime, so $n^{n+1}-1 = k \cdot \frac{(n-1)(n+1)^n}{2}$. By the asymptotic equation from the comments, $k = 0$.</p>
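As a sanity check, the OEIS values quoted in the question are easy to recompute directly (a sketch; three-argument pow keeps the intermediate numbers small):

```python
# recompute n^(n+1) mod (n+1)^n and compare against the first few
# terms of OEIS A176823 quoted in the question
expected = [0, 1, 8, 17, 399, 73, 44638, 1570497]
computed = [pow(n, n + 1, (n + 1) ** n) for n in range(8)]
assert computed == expected
print(computed)
```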
284,809
<p>$F$ is a field and $F[X^2, X^3]$ is a subring of $F[X]$, the polynomial ring. I need to show that nonzero prime ideals of $F[X^2, X^3]$ are maximal.</p> <p>A classmate suggested taking a nonzero prime ideal $\mathfrak{p}$ of $F[X^2, X^3]$ and embedding $F[X^2,X^3]/\mathfrak{p} \hookrightarrow F[X]/(\mathfrak{p})$ and ultimately showing that $R/\mathfrak{p}$ was a field, but the proof we discussed was quite convoluted and it's not even apparent to me that that is an injective map. I'm inclined to think there's a shorter, more direct approach.</p> <p>Does anyone see one? It's not imperative that I find a shorter or nicer proof, but seeing another approach can't hurt and I'm sure the grader would appreciate an easy-to-follow proof.</p> <p>Thank you kindly.</p>
Fabian
7,266
<p>For problems of this kind it is often convenient to treat $\mathbf{x}$ and $\mathbf{x}^H$ as independent variables (in the end they are linearly related to the real and imaginary parts of $\mathbf{x}$). So we want to minimize $$E= \mathbf{x}^H Q \mathbf{x} - (\mathbf{x}^H \mathbf{b} + \mathbf{b}^H \mathbf{x}) +1.$$</p> <p>In this case, as $E$ is a real function (as it should be; otherwise minimization does not make much sense), the equations $$ \nabla_{\mathbf{x}} E =0 $$ and $$ \nabla_{\mathbf{x}^H} E=0$$ are complex conjugates of each other. So it is enough to solve one of these. We choose $$ \nabla_{\mathbf{x}^H} E = Q \mathbf{x} - \mathbf{b} =0$$ with the solution $$\mathbf{x} = Q^{-1} \mathbf{b}.$$</p>
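A quick numerical check of the stationary point (a sketch; Q here is a randomly generated Hermitian positive-definite matrix, which is an assumption guaranteeing the stationary point is actually the minimum):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# build a Hermitian positive-definite Q and a complex b (example data)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q = M.conj().T @ M + n * np.eye(n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

def E(x):
    # E = x^H Q x - (x^H b + b^H x) + 1; real-valued for Hermitian Q
    return (x.conj() @ Q @ x).real - 2 * (x.conj() @ b).real + 1

x_star = np.linalg.solve(Q, b)   # the stationary point x = Q^{-1} b

# E only increases when we move away from x_star in any direction
d = rng.normal(size=n) + 1j * rng.normal(size=n)
assert E(x_star + 1e-3 * d) >= E(x_star)
assert E(x_star - 1e-3 * d) >= E(x_star)
```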
430,629
<p>The <strong>full linear monoid</strong> <span class="math-container">$M_N(k)$</span> of a field <span class="math-container">$k$</span> is the set of <span class="math-container">$N \times N$</span> matrices with entries in <span class="math-container">$k$</span>, made into a monoid with matrix multiplication. A <strong>representation</strong> of <span class="math-container">$M_N(k)$</span> on a vector space <span class="math-container">$V$</span> over <span class="math-container">$k$</span> is a monoid homomorphism</p> <p><span class="math-container">$$ \rho \colon M_N(k) \to \textrm{End}(V). $$</span></p> <p>What is the classification of representations of the full linear monoid?</p> <p>(Note: this is different than asking about representations of <span class="math-container">$M_N(k)$</span> viewed as an <em>algebra.</em>)</p> <p>We get a bunch of irreducible representations of <span class="math-container">$M_N(k)$</span> from Young diagrams with <span class="math-container">$\le N$</span> rows. The vector space</p> <p><span class="math-container">$$ (k^N)^{\otimes n}= \underbrace{k^N\otimes \cdots\otimes k^N}_{\mbox{$n$ copies}} $$</span></p> <p>is a representation of <span class="math-container">$M_N(k)$</span> in an obvious way. 
Given a Young diagram with <span class="math-container">$n$</span> boxes and <span class="math-container">$\le N$</span> rows, we get a minimal central idempotent in <span class="math-container">$S_n$</span>, and we can use this to project <span class="math-container">$(k^N)^{\otimes n}$</span> down to a subspace that is a representation of <span class="math-container">$M_N(k)$</span>.</p> <p>I believe these are all the <strong>polynomial</strong> irreducible representations <span class="math-container">$\rho$</span> of <span class="math-container">$M_N(k)$</span>: that is, those where the matrix entries of <span class="math-container">$\rho(T)$</span> are polynomials in the matrix entries of <span class="math-container">$T \in M_N(k)$</span>.</p> <ol> <li>Is this correct?</li> </ol> <p>We get more irreducible representations using the absolute Galois group of <span class="math-container">$k$</span>. Any field automorphism of <span class="math-container">$k$</span> gives an automorphism <span class="math-container">$\alpha$</span> of the monoid <span class="math-container">$M_n(k)$</span>, and composing this with a polynomial irreducible representation <span class="math-container">$\rho \colon M_n(k) \to \mathrm{End}(V)$</span> we get a new representation <span class="math-container">$\rho \circ \alpha$</span> which is still irreducible, but not polynomial unless <span class="math-container">$\alpha$</span> is the identity.</p> <p>But when <span class="math-container">$N = 1$</span>, at least, there are even more irreducible representations of the full linear monoid! 
Then <span class="math-container">$M_1(k)$</span> is the multiplicative monoid of <span class="math-container">$k$</span>, and it has a 1-dimensional irreducible representation sending <span class="math-container">$x \in k$</span> to multiplication by <span class="math-container">$x^n$</span> for any <span class="math-container">$n \ge 0$</span>.</p> <p>The point here is that the multiplicative monoid of <span class="math-container">$k$</span> has endomorphisms that don't come from field automorphisms. There can also be others that combine raising to a power with field automorphisms: e.g. <span class="math-container">$M_1(\mathbb{C})$</span> has a 1-dimensional irreducible representation sending <span class="math-container">$z \in \mathbb{C}$</span> to multiplication by <span class="math-container">$z^4 \overline{z}^3$</span>.</p> <p>Since endomorphisms of the multiplicative monoid of <span class="math-container">$k$</span> not arising from field automorphisms don't preserve addition, I don't see how to use them to get extra representations of <span class="math-container">$M_N(k)$</span> for <span class="math-container">$N &gt; 1$</span>.</p> <p>So:</p> <ol start="2"> <li><p>Are all the irreducible representations of <span class="math-container">$M_N(k)$</span> for <span class="math-container">$N &gt; 1$</span> of the form <span class="math-container">$\rho \circ \alpha$</span> where <span class="math-container">$\rho$</span> comes from a Young diagram with at most <span class="math-container">$N$</span> rows and <span class="math-container">$\alpha$</span> comes from a field automorphism of <span class="math-container">$k$</span>, or are there more?</p> </li> <li><p>Do all the irreducible representations of <span class="math-container">$M_N(k)$</span> for <span class="math-container">$N = 1$</span> come from endomorphisms of the multiplicative monoid of <span class="math-container">$k$</span>?</p> </li> <li><p>Are all finite-dimensional representations of <span 
class="math-container">$M_N(k)$</span> completely reducible?</p> </li> </ol>
Benjamin Steinberg
15,934
<p>At least over an algebraically closed field, the polynomial representations are precisely the representations of <span class="math-container">$M_n(k)$</span> as an algebraic monoid and I don’t see immediately why that would change over a non-algebraically closed field, but I am not an expert.</p> <p>To classify irreducible representations of <span class="math-container">$M_n(k)$</span> over any field <span class="math-container">$F$</span> is the same as to classify irreducible representations of <span class="math-container">$GL_r(k)$</span> for <span class="math-container">$0\leq r\leq n$</span> over <span class="math-container">$F$</span> (where <span class="math-container">$GL_0(k)$</span> is the trivial group).</p> <p>This follows from old results of Munn and Ponizovskii on representations of finite monoids that were eventually extended to monoids satisfying certain finiteness conditions that <span class="math-container">$M_n(k)$</span> satisfies. I don’t know the classification for <span class="math-container">$GL_r(k)$</span> but somebody else might.</p> <p>The argument is like this. A principal ideal in a monoid <span class="math-container">$M$</span> is a set of the form <span class="math-container">$MaM$</span>. The principal ideals of <span class="math-container">$M_n(k)$</span> are well known to be of the form <span class="math-container">$I_r$</span> where <span class="math-container">$I_r$</span> consists of all matrices of rank at most <span class="math-container">$r$</span> (with <span class="math-container">$0\leq r\leq n$</span>). Let <span class="math-container">$e_r$</span> be the diagonal row echelon form rank <span class="math-container">$r$</span> idempotent. Note every rank <span class="math-container">$r$</span> idempotent is conjugate to <span class="math-container">$e_r$</span>. 
Note that <span class="math-container">$e_rM_n(k)e_r\cong M_r(k)$</span> and so we can identify its group of units <span class="math-container">$G_r$</span> with <span class="math-container">$GL_r(k)$</span>.</p> <p>Let <span class="math-container">$V$</span> be an irreducible <span class="math-container">$M_n(k)$</span> module over a field <span class="math-container">$F$</span>. Then there is a least <span class="math-container">$r$</span> with <span class="math-container">$I_rV\neq 0$</span>. Then one easily checks that <span class="math-container">$e_rV$</span> is an irreducible representation of <span class="math-container">$e_rM_n(k)e_r\cong M_r(k)$</span> that is annihilated by all elements not belonging to <span class="math-container">$G_r\cong GL_r(k)$</span> and so is an irreducible representation of <span class="math-container">$G_r$</span>.</p> <p>Conversely, given an irreducible representation <span class="math-container">$W$</span> of <span class="math-container">$GL_r(k)$</span> (identified with <span class="math-container">$G_r$</span>), there is a unique irreducible representation <span class="math-container">$V$</span> of <span class="math-container">$M_n(k)$</span> with <span class="math-container">$I_{r-1}V=0$</span> and <span class="math-container">$e_rV\cong W$</span> as representations of <span class="math-container">$G_r$</span>. You can construct <span class="math-container">$V$</span> as <span class="math-container">$((FM_n(k)/FI_{r-1})\otimes_{FG_r}W)/N$</span> where <span class="math-container">$N$</span> is the largest submodule annihilated by <span class="math-container">$e_r$</span>. 
This kind of argument is a mixture of Munn-Ponizovskii’s argument for finite semigroups and Green’s condensation technique, which coincidentally appears in his book on polynomial representations (he also introduced the study of principal ideals in structural semigroup theory).</p> <p>The details of how this works can be found in my book on representation theory of finite monoids. Finiteness can be replaced with taking a monoid <span class="math-container">$M$</span> with a finite unrefinable chain of two-sided ideals such that <span class="math-container">$eMe$</span> is Dedekind finite for all idempotents <span class="math-container">$e$</span>. I think this kind of result is in the generality you need in Okninski’s book Semigroups of Matrices.</p> <p>As to complete reducibility, this I do not know much about for the infinite case. For <span class="math-container">$k$</span> finite and <span class="math-container">$F$</span> any field it was first shown by Putcha and Okninski that <span class="math-container">$FM_n(k)$</span> is semisimple whenever <span class="math-container">$F$</span> is of characteristic <span class="math-container">$0$</span>. They showed that this is true for any finite monoid of Lie type (the finite analogues of reductive algebraic monoids). Kovacs then showed <span class="math-container">$FM_n(k)$</span> is Frobenius if and only if the characteristic of <span class="math-container">$k$</span> is not the same as that of <span class="math-container">$F$</span>. You can find his proof in my book. 
In fact, Kovacs showed if <span class="math-container">$R$</span> is any commutative ring in which the characteristic of <span class="math-container">$k$</span> is invertible, then <span class="math-container">$RM_n(k)$</span> is isomorphic to a direct product of <span class="math-container">$M_{n_r}(RGL_r(k))$</span> with <span class="math-container">$0\leq r\leq n$</span> where <span class="math-container">$n_r$</span> is the number of <span class="math-container">$r$</span>-dimensional subspaces of <span class="math-container">$k^n$</span>. In particular, every representation of <span class="math-container">$M_n(k)$</span> over <span class="math-container">$F$</span> is completely reducible iff the characteristic of <span class="math-container">$F$</span> does not divide the order of <span class="math-container">$GL_n(k)$</span>. I have a recent paper giving explicitly the Frobenius form for <span class="math-container">$M_n(k)$</span> in good characteristic, which is easier than Kovacs’s proof since you can check by hand that the form is Frobenius. The direct product decomposition follows from <span class="math-container">$M_n(k)$</span> being von Neumann regular and having a Frobenius algebra by a result you can find in my book.</p> <p>The algebra <span class="math-container">$kM_n(k)$</span> is not self-injective if <span class="math-container">$k$</span> is finite.</p> <p>I’d like to hope that if <span class="math-container">$k$</span> is a field of characteristic <span class="math-container">$0$</span>, then each finite-dimensional representation of <span class="math-container">$M_n(k)$</span> is completely reducible, but I don’t know. Let me add that Putcha and Okninski showed if <span class="math-container">$k$</span> is any field and <span class="math-container">$F$</span> is a field of characteristic <span class="math-container">$0$</span>, then <span class="math-container">$FM_n(k)$</span> has trivial Jacobson radical.</p>
1,513,373
<p>Let M be a cardinal with the following properties:<br> - M is regular<br> - $\kappa &lt; M \implies 2^\kappa &lt; M$<br> - $\kappa &lt; M \implies s(\kappa) &lt; M$ where $s(\kappa)$ is the smallest strongly inaccessible cardinal strictly greater than $\kappa$ </p> <p>My question is: Is M a Mahlo cardinal? If so, how does the definition above connect to the usual definition in terms of stationary sets?</p> <p>Motivation: My intuition about a Mahlo cardinal is that you cannot reach it by taking unions, power sets or "the next inaccessible cardinal", which is my definition above. My worry is, I might have arrived at something much smaller than Mahlo.</p>
Wojowu
127,263
<p>No, your condition doesn't imply Mahloness.</p> <p>First, note that your first two conditions simply state that $M$ is inaccessible, and the third one gives that $M$ is a limit of inaccessibles. Now consider the first cardinal $M_0$ which is an inaccessible limit of inaccessibles. That is, no inaccessible below $M_0$ is a limit of inaccessibles. Now consider $C$ consisting of the limit points of the set $I$ of inaccessible cardinals smaller than $M_0$. By choice of $M_0$, $C$ contains no inaccessibles below $M_0$, so if we show $C$ is a club below $M_0$, we will get that $M_0$ is not Mahlo.</p> <p>$C$ is closed: this is true because the set of limit points of any set is closed. For the sake of completeness, I will sketch the argument: suppose that each $\alpha_i$ is a limit point of $I$, where $(\alpha_i)_{i&lt;\kappa}$ is an increasing sequence of cardinals with limit $A$. Let $\beta&lt;A$. This means that there is some $\alpha_i$ between $\beta$ and $A$. Since $\beta&lt;\alpha_i$, there is some element of $I$ which is between $\beta$ and $\alpha_i$, hence between $\beta$ and $A$. So $A$ is a limit point of $I$, so $A\in C$. Hence $C$ is closed.</p> <p>$C$ is unbounded below $M_0$: Let $\alpha&lt;M_0$. Then $s(\alpha),s(s(\alpha)),s(s(s(\alpha))),...$ is an increasing sequence in $I$ of length $\omega$. Since the cofinality of $M_0$ is (clearly) greater than $\omega$, the limit of this sequence, which is an element of $C$, lies between $\alpha$ and $M_0$. Hence $C$ is unbounded below $M_0$.</p> <p>Indeed, the condition "inaccessible limit of inaccessibles" (or simply "regular limit of inaccessibles") is known as 1-inaccessibility. It can be generalized: we can speak of 2-inaccessibles, which are regular limits of 1-inaccessibles, 3-inaccessibles, which are regular limits of 2-inaccessibles, and so on. This can obviously be extended to successor ordinals, and for a limit ordinal $\alpha$ (e.g. 
$\alpha=\omega$) we say that a cardinal is $\alpha$-inaccessible if it is $\beta$-inaccessible for all $\beta&lt;\alpha$ (this isn't a circular definition; rather, it proceeds by transfinite induction).</p> <p>For every ordinal $\alpha$, the notion of $\alpha+1$-inaccessibility is strictly stronger than that of $\alpha$-inaccessibility. As can be shown, every Mahlo cardinal $\kappa$ is not only 1-inaccessible, 2-inaccessible, and $\omega$-inaccessible, but even <em>$\kappa$-inaccessible</em>. The notion of Mahloness is <em>even stronger</em> than that, but for that, I'd recommend reading <a href="https://en.wikipedia.org/wiki/Inaccessible_cardinal#.CE.B1-inaccessible_cardinals_and_hyper-inaccessible_cardinals" rel="nofollow">this</a> section of the Wikipedia article.</p>
2,134,928
<p>Let <span class="math-container">$ \ C[0,1] \ $</span> stand for the real vector space of continuous functions <span class="math-container">$ \ [0,1] \to [0,1] \ $</span> on the unit interval with the usual subspace topology from <span class="math-container">$\mathbb{R}$</span>. Let <span class="math-container">$$\lVert f \rVert_1 = \int_0^1 |f(x)| \ dx \qquad \text{ and } \qquad \lVert f \rVert_{\infty} = \max_{x \in [0,1]} |f(x)|$$</span> be the usual norms defined on that space. Let <span class="math-container">$ \ \Delta : C[0,1] \to C[0,1] \ $</span> be the diagonal function, ie, <span class="math-container">$ \ \Delta f=f \ $</span>, <span class="math-container">$\forall f \in C[0,1]$</span>. Then <span class="math-container">$$ \Delta = \big\{ (f,g) \in C[0,1] \times C[0,1] \ : \ g=f \ \big\} \ . $$</span> My questions are</p> <blockquote> <p><strong>(1)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta \ $</span> a closed set of <span class="math-container">$ \ C[0,1] \times C[0,1] \ $</span>, with respect to the product topology induced by these norms?</p> <p><strong>(2)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> continuous?</p> <p><strong>(3)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> map closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span>?</p> <p><strong>(4)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> continuous?</p> <p><strong>(5)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> map closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span>?</p> </blockquote> <p>A question about terminology as well: what exactly is meant when one says that &quot;<span class="math-container">$\Delta \ $</span> is closed&quot;, that &quot;<span class="math-container">$\Delta \ $</span> is a closed map&quot;, or that &quot;<span class="math-container">$\Delta \ $</span> is a closed operator&quot;?</p> <p>Thanks in advance.</p>
Daniel Pietrobon
17,824
<p>Suppose she had 6 correct answers. That's $6 \times 3 - 4 = 18-4 = 14$. Nope too low!</p> <p>What about 7? $7 \times 3 - 3 = 21-3 = 18$. Oh, we got it.</p>
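The guess-and-check above can be automated (a sketch; it assumes 10 questions, +3 points per correct answer, −1 per wrong answer, and a target score of 18, which is what the arithmetic above implies):

```python
# score with c correct answers out of 10: 3*c - (10 - c)
solutions = [c for c in range(11) if 3 * c - (10 - c) == 18]
print(solutions)  # [7]
```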
1,849,577
<p>I recently asked for <a href="https://math.stackexchange.com/questions/1848739/a-topology-on-the-set-of-lines">natural topologies on the set of lines</a> in $\mathbb R^2$. Now I'm aiming for a similar question on the set $S_p$ of conic sections in $\mathbb R^2$ sharing the same focus $p$ (but not necessarily having the same major axis). The situation is an idealization of the ecliptic plane and everything in the solar system. Are there natural topologies on this set?</p>
MvG
35,416
<p>It depends a lot on what exactly you want to call a conic. My first impulse was along the same lines as what N.H. wrote in a comment (and later in an answer): six numbers to describe a conic, but scalar multiples describe the same conic, so this looks like $\mathbb P^5$. But then N.H. went on to exclude degenerate conics, while I'd take the opposite approach of actually including degenerate conics. So in a first approximation, I'd say a conic is defined as a set of points $\{p\in\mathbb P^2\;|\;p^TAp=0\}$, described by a symmetric matrix</p> <p>$$A=\begin{pmatrix}a&amp;c&amp;d\\c&amp;b&amp;e\\d&amp;e&amp;f\end{pmatrix}$$</p> <p>(Some of my coefficients differ from the N.H.'s formula by a factor of two, but that makes no difference for the topology.) The matrix must not be the zero matrix, but apart from that I'd include everything else: the case of $\det(A)=0$ where the conic degenerates to a pair of lines, and $\operatorname{rank}(A)=1$ where these lines coincide to form a double line, and also cases like the identity matrix where the only solutions have complex coordinates. But that's my personal view, and the choice here will have a huge impact on the resulting topology.</p> <p>Actually, the scenario just described is just half the truth. Via projective duality, it makes equal sense to describe a conic not in terms of incident points, but in terms of tangent lines. You'd say a line $l$ is a tangent to a conic if it satisfies $l^TBl=0$. In the non-degenerate case, $B$ would simply be any multiple of the inverse of $A$. But in the degenerate case, that definition breaks down. Instead one requires $(A,B)$ to form a primal-dual pair, i.e. $A\cdot B=\lambda\mathbb 1$ for some $\lambda$ which may even be zero. If $A$ has rank two, then $B$ can still be computed as the adjoint of $A$. 
But if $A$ has rank one (a double line), then $B$ can no longer be derived from it, since <em>any</em> pair of (real, complex, distinct or coinciding) points on that double line can serve as the components which make up $B$. I honestly don't know how to express this space of all conics in terms of topology.</p> <p>I do know however how to express foci. A point $p$ is a focus if both the lines connecting it to the ideal circle points $(1,\pm i,0)$ are tangents to the conic. Since for the sake of topology the actual location of the origin is irrelevant, I'd consider $p=(0,0,1)$. Joining that to the circle points you get</p> <p>$$t_{1,2}=\begin{pmatrix}0\\0\\1\end{pmatrix}\times\begin{pmatrix}1\\\pm i\\0\end{pmatrix}=\begin{pmatrix}\mp i\\1\\0\end{pmatrix}$$</p> <p>Now it is easier to look at the dual matrix. In order to avoid confusion, I'll use different letters for its entries.</p> <p>$$B=\begin{pmatrix} g &amp; k &amp; l \\ k &amp; h &amp; m \\ l &amp; m &amp; n \end{pmatrix} \qquad t_{1,2}^TBt_{1,2} = (\mp i,1,0)\cdot \begin{pmatrix} g &amp; k &amp; l \\ k &amp; h &amp; m \\ l &amp; m &amp; n \end{pmatrix}\cdot \begin{pmatrix}\mp i\\1\\0\end{pmatrix} =(h-g)\mp i(2k) $$</p> <p>So if the point $p$ should be a focus, then the two equations above have to be satisfied. If all your coefficients are real, that means you need $(h-g)=0$ and $k=0$ for the dual matrix. So in this view, you are essentially imposing a restriction to the intersection of two hyperplanes in this space of all conics.</p> <p>It is possible to translate the conditions back into equivalent conditions for the primal matrix, at least for the non-degenerate case. 
If $B=\operatorname{adj}(A)$ then you have</p> <p>$$ 0=h-g= \begin{vmatrix}a&amp;d\\d&amp;f\end{vmatrix}- \begin{vmatrix}b&amp;e\\e&amp;f\end{vmatrix}= (a-b)f-d^2+e^2 \qquad 0=k= -\begin{vmatrix}c&amp;e\\d&amp;f\end{vmatrix}= de-cf $$</p> <p>This looks more complicated, but since the space of dual conics should have the same topology as the space of primal conics (unless you include degenerate conics but consider degenerate conics with the same primal matrix to be the same no matter the dual), this should still essentially be a restriction to two subspaces as in the dual view, at least topologically speaking.</p> <p>If you want to exclude degenerate conics, you could express that using either of the following inequalities:</p> <p>$$0\neq\det(A)=abf+2cde-ae^2-bd^2-fc^2\\ 0\neq\det(B)=ghn+2klm-gm^2-hl^2-nk^2$$</p> <p>Last but not least, if you want to exclude conics with no real points on them, you'd look at the eigenvalues of the matrix. If two of the eigenvalues have one sign and a third eigenvalue has the opposite sign, then you have a real and non-degenerate conic. If all three signs are the same, you have a non-degenerate conic with no real solutions. If one or two eigenvalues are zero, then you have a degenerate conic. I don't know how to translate these eigenvalue considerations into topology.</p>
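The focus condition $h-g=0$, $k=0$ on the dual matrix is easy to check numerically. A sketch (the ellipse $b^2(x-c)^2 + a^2y^2 = a^2b^2$ with $c^2 = a^2 - b^2$ is my own example here; it has a focus at the origin):

```python
import numpy as np

a, b = 5.0, 3.0
c = np.sqrt(a**2 - b**2)   # c = 4, so one focus of the ellipse is the origin

# primal matrix A of b^2 (x - c)^2 + a^2 y^2 - a^2 b^2 = 0
A = np.array([[b**2,       0.0, -b**2 * c],
              [0.0,       a**2,       0.0],
              [-b**2 * c,  0.0, b**2 * c**2 - a**2 * b**2]])

# dual matrix as the adjugate of A (adj(A) = det(A) * inv(A) for invertible A)
B = np.linalg.det(A) * np.linalg.inv(A)

g, h, k = B[0, 0], B[1, 1], B[0, 1]
assert np.isclose(h - g, 0.0, atol=1e-6)   # h - g = 0
assert np.isclose(k, 0.0, atol=1e-6)       # k = 0, so the origin is a focus
```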
211,803
<p>I ended up with a differential equation that looks like this: $$\int\frac{d^2y}{dx^2} + \frac 1 x \frac{dy}{dx} - \frac{ay}{x^2} + \left(b -\frac c x - e x \right )y = 0.$$ I tried with Mathematica but could not get a sensible answer. Could you help me solve it, or point me to some references I can go over? Thanks.</p>
Pragabhava
19,532
<p>Series solution around zero:</p> <p>The point $x= 0$ is a regular singular point. Taking the ansatz $$ y(x) = \sum_{n=0}^\infty q_n x^{n+s} $$ we have $$ \sum_{n=0}^\infty [(n+s)^2 - a]q_n x^{n+s-2} -\sum_{n=0}^\infty c q_n x^{n+s-1} + \sum_{n=0}^\infty b q_n x^{n+s} - \sum_{n=0}^\infty e q_n x^{n+s+1} = $$ \begin{multline} q_0(s^2 - a)x^{s-2} + \big[q_1\big((s+1)^2 - a\big) - c q_0\big] x^{s-1} + \\ \big[q_2\big((s+2)^2 - a\big) -c q_1 + b q_0\big]x^s + \sum_{n=0}^\infty \Big(...\Big) \end{multline}</p> <p>The indicial equation is $s^2-a=0$, hence $s = \pm\sqrt{a}$.</p> <p><strong>Case</strong> $s = \sqrt{a}$</p> <p>In this case \begin{align} q_1(2\sqrt{a} + 1) -c q_0 &amp;=0,\\ q_2(4\sqrt{a} + 4) -c q_1 + b q_0 &amp;=0, \end{align}</p> <p>and the recurrence relation is $$ q_{m+3} = \frac{c q_{m+2} - b q_{m+1} + e q_m}{(m+3)(m + 3 + 2\sqrt{a})}. $$</p> <p>This, assuming all my algebra is right. </p> <p><strong>What's</strong> important here is that $$ y(x) \sim x^\sqrt{a} z(x), $$</p> <p>which, combined with the form of $F(u)$ obtained by <a href="https://math.stackexchange.com/a/211807/19532">Peter Tamaroff</a>, might help to propose a solution of the type $$ y(x) = x^\sqrt{a} \exp[\sqrt{F(u)} x] v(x), $$ in a similar way as it is done when solving the <a href="http://www.physics.drexel.edu/~tim/open/hydrofin/hyd.pdf" rel="nofollow noreferrer">Hydrogen Atom</a> or the <a href="http://www.fisica.net/quantica/quantum_harmonic_oscillator_lecture.pdf" rel="nofollow noreferrer">Quantum Harmonic Oscillator</a> as modern physics textbooks do (see Eisberg <em>Fundamentals of Modern Physics</em>, Hydrogen atom solution), which can lead to a <em>relatively simple</em> form for $v(x)$.</p> <hr> <p><strong>Edit 1</strong></p> <p>Taking the change of variables $$ y(x) = \frac{z(x)}{\sqrt{x}}, $$ you end up with the equation $$ z'' + \left\{\frac{1-4\alpha}{4x^2} + \beta - \frac{\gamma}{x} - \epsilon x\right\}z = 0 $$ (where I've changed the constants to Greek letters to avoid the
$e$ confusion).</p> <p>If $\epsilon = 0$ then, as <a href="https://math.stackexchange.com/a/211823/19532">Robert Israel</a> points out, it reduces to the <a href="http://mathworld.wolfram.com/WhittakerDifferentialEquation.html" rel="nofollow noreferrer">Whittaker Differential Equation</a> (using the proper rescaling). Also, for $\alpha =1/4$ and $\gamma = 0$, you have the <a href="http://mathworld.wolfram.com/AiryDifferentialEquation.html" rel="nofollow noreferrer">Airy Differential Equation</a>.</p> <p>Using the <a href="http://en.wikipedia.org/wiki/WKB_approximation" rel="nofollow noreferrer">WKB approximation</a>, $$ z(x) \sim A f^{-1/4} e^{\int f^{1/2} dx} + B f^{-1/4} e^{-\int f^{1/2} dx} $$ where $$ f(x) = \frac{1-4\alpha}{4x^2} + \beta -\frac{\gamma}{x} -\epsilon x, $$ you can try to find the asymptotic behavior of $z$, which I believe is $$ z(x) \sim \frac{e^{-\frac{2}{3}(\epsilon^{1/3} x)^{3/2}}}{(\epsilon^{1/3} x)^{1/4}} $$ for $x &gt; 0$, and play with the differential equation resulting from taking $$ z(x) = \frac{e^{-\frac{2}{3}(\epsilon^{1/3} x)^{3/2}}}{(\epsilon^{1/3} x)^{1/4}} v(x). $$</p>
1,460,488
<p>There are $4$ girls and $3$ boys but there are only $5$ seats. How many ways can you seat the $3$ boys together?</p> <p>The order of the seat matters, for example: there's the order $B_1$ $B_2$ $B_3$ $G_2$ $G_4$ and there's $B_2$ $B_3$ $B_1$ $G_2$ $G_4$</p> <p>Here's my answer: There are $3!$ ways to seat the $3$ boys. The $2$ remaining seats are to be occupied by $2$ out of the $4$ girls, so $^4P_2$. So we now have $3! \cdot $ $^4P_2$. </p> <p>Lastly, there are $3$ ways to make that arrangement, <BR> $1)$ two girls on the left, <BR> $2)$ two girls on the right, and <BR> $3)$ a girl on both ends.</p> <p>So my final equation is $3! \cdot $ $^4P_2$ $\cdot 3 = 216$</p> <p>But then again, that was just a guess, I'm not really sure how to get it. So please confirm if my answer is right, and if it's wrong, please tell me how to get it.</p>
Tejus
274,219
<p>First, take all of the boys as a single block, say $A$, so $A$ contains $B_1$, $B_2$ and $B_3$. Then select two girls from the four: $^4C_2$ ways. We now have the $2$ girls plus the block $A$, i.e. $3$ units filling all $5$ seats. The boys $B_1$, $B_2$, $B_3$ can be arranged inside $A$ in $3!$ ways, and the block $A$ together with the $2$ girls can be arranged in $3!$ ways. Hence the answer is $^4C_2 \cdot 3! \cdot 3! = 6 \cdot 6 \cdot 6 = 216.$ </p>
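<p>A quick brute-force check in Python (just an illustration; the labels are mine) confirms the count:</p>

```python
from itertools import permutations

boys = {"B1", "B2", "B3"}
people = sorted(boys) + ["G1", "G2", "G3", "G4"]

def boys_together(seating):
    pos = [i for i, p in enumerate(seating) if p in boys]
    return len(pos) == 3 and pos[2] - pos[0] == 2  # all three, in consecutive seats

# ordered seatings of 5 of the 7 people with the boys together
count = sum(boys_together(s) for s in permutations(people, 5))
print(count)  # 216
```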
944,840
<p>For vectors u, w, and v in a vector space V, I am trying to prove:</p> <p>If $u + w = v + w$ then $u = v$</p> <p><strong>without</strong> using the additive inverse and only using the 8 axioms which define a vector space. I am coming up short. I don't see how to do this without assuming that if $u + w = v + w$ then I can just add something to both sides as in $(u+w) + w' = (v+w) + w'$.</p> <p>Thank you.</p>
Berci
41,488
<p>You will just have to add a $w'$ indeed, namely, try $w':=(-1)\cdot w$ and use $$(\alpha+\beta)w=\alpha w+\beta w$$ for certain $\alpha,\beta$ scalars.</p>
1,286,306
<p>Suppose that $a_1,...,a_n,b_1,...,b_n ∈ F $ are such that $\sum a_ib_i = 1_F$. </p> <p>Let $J : F^n → F^n $ be the linear transformation whose standard matrix has $ij^{th}$ entry $a_ib_j$. </p> <p>Prove that $J^2 = J$.</p> <p>So I think I've figured out that the index in the matrix $F^2$ given by </p> <p>$u_{ij} = a_ib_j = \sum_{c=1}^{n}\sum_{d=1}^{n} (a_ib_c)(b_ja_d)$</p> <p>given that</p> <p>$ \sum (a_ib_i) = 1 $</p> <p>I think I got the above right, not 100% sure, but I still don't know where to go from this point onwards.</p> <p>Could someone help me out with that.</p> <p>Thanks.</p>
Lost in a Maze
77,255
<p>Some remarks for proceeding:</p> <ul> <li>To show $H$ is complete, note that $[0,1]^\infty = \{(x_1, x_2,\ldots)\,:\, x_i \in [0,1]\}$ is complete. Therefore, it suffices to show that $H$ is closed (a closed bounded subset of a complete space is also complete).</li> <li>To show $H$ is totally bounded, given $\epsilon &gt; 0$, choose $N$ such that $\frac{1}{2^n} &lt; \epsilon$ for all $n \geq N$, and consider the subset $E$ of $H$ defined by $E = \{y \in H\,:\, y_n = 0\,\text{ for } n\geq N\}$. You should be able to show that there is a finite number of $y\in E$ such that $H \subset \bigcup_{\text{finite } y} B(y,\epsilon)$, where $B(y,\epsilon)$ denotes the ball centered at $y$ of radius $\epsilon$.</li> </ul>
2,485,529
<p>The integral is $$\int{\left[\frac{\sin^8(x) - \cos^8(x)}{1 - 2 \sin^2(x)\cos^2(x)}\right]}dx$$</p>
Humam
470,705
<p>This integral can be reduced by factoring the numerator and using $\sin^2(x)+\cos^2(x)=1$, finishing with the double angle formula for cosine.</p> <p>Let $$ I=\int \frac{\sin^8(x)-\cos^8(x)}{1-2\sin^2(x)\cos^2(x)}dx $$ $$ I=\int \frac{(\sin^4(x)+\cos^4(x))(\sin^4(x)-\cos^4(x))}{1-2\sin^2(x)\cos^2(x)}dx $$ $$ I=\int \frac{(\sin^4(x)+2\sin^2(x)\cos^2(x)+\cos^4(x)-2\sin^2(x)\cos^2(x))(\sin^4(x)-\cos^4(x))}{1-2\sin^2(x)\cos^2(x)}dx $$ $$ I=\int \frac{\big((\sin^2(x)+\cos^2(x))^2-2\sin^2(x)\cos^2(x)\big)(\sin^2(x)+\cos^2(x))(\sin^2(x)-\cos^2(x))}{1-2\sin^2(x)\cos^2(x)}dx $$ $$ I=\int \frac{(1-2\sin^2(x)\cos^2(x))(\sin^2(x)-\cos^2(x))}{1-2\sin^2(x)\cos^2(x)}dx $$ $$ I=\int \sin^2(x)-\cos^2(x)\, dx=\frac{-1}{2}\int 2\cos(2x)\, dx =\frac{-1}{2}\sin(2x)+C $$</p>
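<p>A quick numeric check: differentiating $-\frac{1}{2}\sin(2x)$ should reproduce the integrand (Python sketch, names mine; note the denominator $1-2\sin^2 x\cos^2 x\geq\frac12$ never vanishes):</p>

```python
import math

def integrand(x):
    s, c = math.sin(x), math.cos(x)
    return (s**8 - c**8) / (1 - 2 * s**2 * c**2)

def antiderivative(x):
    return -0.5 * math.sin(2 * x)

# compare a central difference of the antiderivative with the integrand
h = 1e-6
max_err = max(
    abs((antiderivative(x + h) - antiderivative(x - h)) / (2 * h) - integrand(x))
    for x in (0.1, 0.5, 1.0, 2.0, 3.0)
)
print(max_err)  # only finite-difference error remains
```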
137,501
<p>P. P. Palfy proved that a primitive solvable subgroup of $S_n$ has order bounded by $24^{-1/3} n^{3.24399\dots}$ (in: Pálfy, P. P. A polynomial bound for the orders of primitive solvable groups. J. Algebra 77 (1982), no. 1, 127–137. )</p> <p>This is sharp (and notice that for $n$ prime, the affine group of the line (all transformations of the form $x \rightarrow a x + b$ is solvable, and is of order $O(n^2)$</p> <p>I seem to be unable to find any sort of analogue for nilpotent groups (there are results on large nilpotent groups, but these are presumably [by Palfy's theorem!) NOT primitive, nor even transitive). </p>
Nick Gill
801
<p>This answer fleshes out observations made in comments above. The result below is an analogue of Palfy's theorem for nilpotent primitive groups, as requested.</p> <blockquote> <p><strong>Prop</strong>. Let $N&lt;S_n$ be a nilpotent primitive group. Then $n$ is prime and $N$ is cyclic of order $n$.</p> </blockquote> <p><strong>Proof</strong>. Since $N$ is nilpotent, a minimal normal subgroup $E$ of $N$ is elementary-abelian of order $p^a$. Since $N$ is primitive it must be transitive and, since $E$ is abelian, it must act regularly - so $n=p^a$. Indeed $E$ must be the unique minimal normal subgroup of $N$ because if there were another, $E'$ say, then $E'$ would centralize $E$ and $EE'$ would be a transitive abelian subgroup of $S_n$ of order greater than $n$, which is impossible.</p> <p>Now, since $E$ is unique, we conclude that $C_N(E)=E$. In particular, if $g$ is an element of order coprime to $p$, then $g\not\in C_N(E)$. But this contradicts the fact that $N$ is nilpotent. Thus $N$ is a $p$-group. Then $N$ acts on $E$ via linear transformations and so $N/E$ is a $p$-subgroup of $GL_a(p)$ and, in particular, fixes a 1-dimensional subspace of $E$. This subspace is normal in $N$ and hence, since $N$ is primitive, is transitive, i.e. $E$ is itself 1-dimensional, i.e. $E=C_p$ with $n=p$. But, since $N$ is a $p$-group inside $S_p$ we conclude that $N=E=C_p$ as required. <strong>QED</strong></p> <p>In fact, rather than primitivity, all I've used here is that $N$ has no intransitive normal subgroups. This property is called <em>quasiprimitivity</em> - it is a little weaker than primitivity.</p> <p>There is an interesting sort of strong converse to this result which also sheds light on the original question.</p> <blockquote> <p><strong>Prop</strong>. Let $n$ be a prime and $N&lt;S_n$ be a nilpotent transitive group. Then $N$ is cyclic of order $n$.</p> </blockquote> <p><strong>Proof</strong>. Since $N$ is transitive it contains a cyclic subgroup $C$ of order $n$. 
But $C_{S_n}(C)=C$ and so $N$ must be an $n$-group. But then $N=C$, as required. <strong>QED</strong></p> <p>This result, along with the example given above in the comments - a Sylow $2$-subgroup when $n=2^k$ - demonstrate that bounding the order of a nilpotent transitive group is strongly dependent on the prime factorization of $n$. It's not clear to me whether there is a natural stronger condition than transitivity that will hold for any $n$...</p>
186,638
<p>$f(x)=\max(2x+1,3-4x)$, where $x \in \mathbb{R}$. what is the minimum possible value of $f(x)$.</p> <p>when, $2x+1=3-4x$, we have $x=\frac{1}{3}$</p>
copper.hat
27,978
<p>The function $f$ is convex and its subdifferential is given by $$\partial f(x) = \begin{cases} \{-4\}, &amp; x&lt;\frac{1}{3}, \\ \, [-4,2], &amp; x = \frac{1}{3}, \\ \{2\}, &amp; x&gt;\frac{1}{3}. \end{cases}$$Since $f$ is convex, then $\hat{x}$ minimizes $f$ iff $0 \in \partial f (\hat{x})$. It follows that the minimizing $\hat{x}$ is $\frac{1}{3}$, and hence the minimum value of $f$ is $\frac{5}{3}$.</p>
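<p>A simple grid search confirms the minimizer and the minimum value (illustrative Python; grid and tolerances are my own choices):</p>

```python
# f is piecewise linear and convex; the minimum should sit at x = 1/3 with value 5/3
def f(x):
    return max(2 * x + 1, 3 - 4 * x)

xs = [i / 10000 - 2 for i in range(40001)]  # grid on [-2, 2], step 1e-4
x_best = min(xs, key=f)
print(x_best, f(x_best))  # close to 1/3 and 5/3
```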
3,863,495
<p>This is my solution to an old exam problem that I'd appreciate some feedback on. The problem:</p> <blockquote> <p>Let <span class="math-container">$f:[0,\infty)\to\mathbb{R}$</span>, <span class="math-container">$f\geq 0$</span> and <span class="math-container">$\int _0^{\infty} f(x) dx=L&lt;\infty;$</span> that is, <span class="math-container">$f$</span> is Riemann integrable in any finite interval <span class="math-container">$[0,R]$</span> and <span class="math-container">$\lim\limits_{R\to\infty} \int _0^R f(x) dx$</span> exists. Show that <span class="math-container">$$ \lim_{R\to \infty}\frac{\int_0^R x f(x)dx}{R}=0. $$</span></p> </blockquote> <p>Proof: Note that if <span class="math-container">$f\equiv 0$</span> there's nothing to prove. Else, we would like to apply L'Hôpital's Rule, but the problem is the integrand is not necessarily continuous, hence the integral is not differentiable in <span class="math-container">$R.$</span> To that end, we use Fubini's Theorem to rewrite as a double-integral: <span class="math-container">$$ \int _0^R x f(x)\,dx = \int _0^R \int _0^x f(x)\,dydx $$</span><span class="math-container">$$ = \int _0^R \int _y^R f(x)\; dxdy $$</span>Define <span class="math-container">$F(x):=\int _0^x f(t)\;dt;$</span> then <span class="math-container">$F$</span> is continuous. We have <span class="math-container">$$ \int _0^R \int _y^R f(x)\; dxdy=\int _0^R F(R)-F(y)\;dy; $$</span>then we have <span class="math-container">$$ \frac{\int_0^R x f(x)dx}{R}=\frac{\int _0^R F(R)-F(y)\;dy}{R} = F(R)-\frac{\int _0^R F(y)\,dy}{R} $$</span> Here's the tricky part:</p> <p>Since <span class="math-container">$f$</span> is integrable and non-negative, by assumption <span class="math-container">$F$</span> approaches <span class="math-container">$L$</span>. 
This allows us to split up the limit: <span class="math-container">$$ \lim_{R\to\infty}F(R) - \frac{\int _0^R F(y)\;dy}{R} =\lim_{R\to\infty}F(R) - \lim_{R\to\infty}\frac{\int _0^R F(y)\;dy}{R} $$</span> <span class="math-container">$$ =L - \lim_{R\to\infty}\frac{\int _0^R F(y)\;dy}{R} $$</span> In this case, the remaining integral approaches <span class="math-container">$LR$</span>. <em>Now</em> we can apply L'Hôpital's Rule: <span class="math-container">$$ L-\lim_{R\to\infty} \frac{\int _0^R F(y)\;dy}{R} = L-\lim_{R\to \infty} F(R)=L-L=0. \square $$</span>Any comments/suggestions?</p>
copper.hat
27,978
<p>Pick <span class="math-container">$\epsilon&gt;0$</span> and choose <span class="math-container">$T$</span> such that <span class="math-container">$\int_T^\infty f(x)dx &lt; {1 \over 2} \epsilon$</span>. Suppose <span class="math-container">$R &gt;T$</span>.</p> <p><span class="math-container">\begin{eqnarray} \int_0^R {x \over R} f(x)dx &amp;\le&amp; \int_0^T {x \over R} f(x)dx + \int_T^\infty {x \over R} f(x)dx \\ &amp;\le&amp; {T \over R}\int_0^T {x \over T} f(x)dx + {1 \over 2} \epsilon \\ &amp;\le&amp; {T \over R} L + {1 \over 2} \epsilon \end{eqnarray}</span> Now choose <span class="math-container">$R&gt;{2 TL\over \epsilon}$</span> to get the desired result.</p>
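<p>To illustrate the statement (not the proof) numerically, one can take $f(x)=e^{-x}$, for which $\int_0^R x e^{-x}\,dx = 1-(R+1)e^{-R}$ in closed form, so the quotient can be evaluated directly:</p>

```python
import math

def q(R):
    # closed form of (1/R) * ∫_0^R x e^{-x} dx for f(x) = e^{-x}
    return (1 - (R + 1) * math.exp(-R)) / R

vals = [q(R) for R in (10.0, 100.0, 1000.0)]
print(vals)  # decreasing toward 0, roughly like 1/R
```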
80,456
<p>Given an array <code>sel</code> and an index position <code>i0</code>, how can I find the position of the nearest (left or right) nonzero element? I'm able to do it with a loop and a couple of awful If's, but I was looking for a functional way...</p> <pre><code> lr=Length[sel]; For[i = 0, i &lt;= lr, i++, If[1 &lt;= i0 + i &lt;= lr &amp;&amp; sel[[i0 + i]] == 1, Print[i0+i]; Break[], If[1 &lt;= i0 - i &lt;= lr &amp;&amp; sel[[i0 - i]] == 1, Print[i0-i];Break[]]]] </code></pre>
Jinxed
24,763
<p>Try this one:</p> <pre><code>nearestNonNull[lst_, i_] := First@MinimalBy[ Select[MapIndexed[Flatten@{#1 != 0, #2} &amp;, lst], TrueQ@First@# &amp;][[All, 2]], Abs[i - #] &amp;] sel = RandomInteger[{0, 10}, 10^4]; nearestNonNull[sel, 1234]; </code></pre>
80,456
<p>Given an array <code>sel</code> and an index position <code>i0</code>, how can I find the position of the nearest (left or right) nonzero element? I'm able to do it with a loop and a couple of awful If's, but I was looking for a functional way...</p> <pre><code> lr=Length[sel]; For[i = 0, i &lt;= lr, i++, If[1 &lt;= i0 + i &lt;= lr &amp;&amp; sel[[i0 + i]] == 1, Print[i0+i]; Break[], If[1 &lt;= i0 - i &lt;= lr &amp;&amp; sel[[i0 - i]] == 1, Print[i0-i];Break[]]]] </code></pre>
LLlAMnYP
26,956
<p>Here's a generalization of the above to work with arrays (lists) of arbitrary depth. Also avoids checking the element at your specified position (something which may or may not be desired).</p> <pre><code>nearestNZP = Function[{array, i}, MinimalBy[ Flatten[MapIndexed[ If[#1 != 0 &amp;&amp; #2 != i, #2, Unevaluated[Sequence[]]] &amp;, array, {Length@i}], Length@i - 1], Norm[# - i] &amp;]]; </code></pre> <p>Define array (5x5 matrix in my example):</p> <pre><code>ar = RandomInteger[{-3, 3}, {5, 5}] </code></pre> <blockquote> <p><code>{{-3, 3, -2, -2, 2}, {-1, -2, 0, -3, -3}, {-1, -1, -2, 1, 1}, {2, 3, -1, 0, 1}, {-3, 0, -3, 1, -2}}</code></p> </blockquote> <pre><code>nearestNZP[ar, {2, 2}] </code></pre> <blockquote> <p><code>{{1, 2}, {2, 1}, {3, 2}}</code></p> </blockquote> <p>The depth at which the function works is determined by the length of the list of numbers specifying the index position. For a 1D array the index should be specified as <code>nearestNZP[array, {1234}]</code> rather than <code>nearestNZP[array, 1234]</code>.</p>
4,428,142
<p>Applying integration by parts splits the integral into 3 integrals, <span class="math-container">$\displaystyle \begin{aligned}I&amp;=\int_{0}^{1} \frac{\sin ^{-1} x \ln (1+x)}{x^{2}} d x\\&amp;=-\int_{0}^{1} \sin ^{-1} x \ln (1+x) d\left(\frac{1}{x}\right) \\&amp;=-\left[\frac{\sin ^{-1} x \ln (1+x)}{x}\right]_{0}^{1}+\underbrace{\int_{0}^{1} \frac{\ln (1+x)}{x \sqrt{1-x^{2}}}}_{K} +\underbrace{\int_{0}^{1}\frac{\sin ^{-1} x}{x}}_{L} d x-\underbrace{\int_{0}^{1} \frac{\sin ^{-1} x}{1+x}}_{M} d x \end{aligned} \tag*{} $</span> Letting <span class="math-container">$x= \cos \theta$</span> for <span class="math-container">$K$</span> and <span class="math-container">$\sin^{-1}x \mapsto x$</span> for <span class="math-container">$L$</span> and <span class="math-container">$M$</span>, yields <span class="math-container">$\displaystyle I=-\frac{\pi}{2} \ln 2 +\underbrace{\int_{0}^{\frac{\pi}{2}} \frac{\ln (1+\cos \theta)}{\cos \theta} d \theta}_{K}+\underbrace{\int_{0}^{\frac{\pi}{2}} \frac{x\cos x }{\sin x} d x}_{L}-\underbrace{\int_{0}^{\frac{\pi}{2}} \frac{x\cos x }{1+\sin x} d x }_{M}\tag*{} $</span></p> <hr /> <p>For the integral <span class="math-container">$ K,$</span>putting <span class="math-container">$ a=1$</span> in my <a href="https://math.stackexchange.com/a/4299808/732917">post</a> yields <span class="math-container">$\displaystyle \boxed{K=\frac{\pi^{2}}{8}}\tag*{} $</span></p> <hr /> <p>For the integral <span class="math-container">$ L,$</span> integration by parts yields <span class="math-container">$\displaystyle \begin{aligned}L &amp;=\int_{0}^{\frac{\pi}{2}} x d \ln (\sin x) \\&amp;=[x \ln (\sin x)]_{0}^{\frac{\pi}{2}}-\int_{0}^{\frac{\pi}{2}} \ln (\sin x) d x \\&amp;=\boxed{\frac{\pi}{2} \ln 2}\end{aligned}\tag*{} $</span></p> <hr /> <p>For the integral <span class="math-container">$ M,$</span> integration by parts yields <span class="math-container">$\displaystyle \begin{aligned}M &amp;=\int_{0}^{\frac{\pi}{2}} x d \ln (1+\sin x)\\&amp;=[x \ln 
(1+\sin x)]_{0}^{\frac{\pi}{2}}-\int_{0}^{\frac{\pi}{2}} \ln (1+\sin x) d x \\&amp;=\frac{\pi}{2} \ln 2-\underbrace{\int_0^{\frac{\pi}{2} }\ln (1+\sin x) d x}_{N}\end{aligned}\tag*{} $</span> For the integral <span class="math-container">$ N,$</span> using my <a href="https://www.quora.com/How-do-we-find-the-integral-displaystyle-int_-0-frac-pi-4-ln-cos-x-d-x-tag/answer/Lai-Johnny" rel="nofollow noreferrer">post </a> in the second last step yields <span class="math-container">$\displaystyle \begin{aligned}N \stackrel{x\mapsto\frac{\pi}{2}-x}{=} &amp;\int_{0}^{\frac{\pi}{2}} \ln (1+\cos x) d x \\=&amp;\int_{0}^{\frac{\pi}{2}} \ln \left(2 \cos ^{2} \frac{x}{2}\right) d x \\=&amp;\frac{\pi}{2} \ln 2+2 \int_{0}^{\frac{\pi}{2}} \ln \left(\cos \frac{x}{2}\right) d x \\=&amp;\frac{\pi}{2} \ln 2+4 \int_{0}^{\frac{\pi}{4}} \ln (\cos x) d x \\=&amp;\frac{\pi}{2} \ln 2+4\left(\frac{1}{4}(2 G-\pi \ln 2)\right) \\=&amp;\boxed{-\frac{\pi}{2} \ln 2+2 G}\end{aligned}\tag*{} $</span> where <span class="math-container">$ G$</span> is the Catalan’s Constant.</p> <hr /> <p>Putting them together yields <span class="math-container">$\displaystyle \boxed{I=-\pi \ln 2+\frac{\pi^{2}}{8}+2G} \tag*{} $</span></p> <hr /> <p><em><strong>Question:</strong></em> Is there any shorter solution?</p>
Quanto
686,284
<p>A self-contained solution <span class="math-container">\begin{align} I=&amp;\int_{0}^{1} \frac{\sin ^{-1} x \ln (1+x)}{x^{2}} d x =\int_{0}^{1} \sin ^{-1}x \&gt;d\left( \ln x -\frac{1+x}x \ln (1+x)\right)\\ \overset{ibp} =&amp;\&gt; -\pi \ln 2-{\int_{0}^{1} \frac{\ln \frac x{1+x}-\frac1x \ln(1+x)}{\sqrt{1-x^{2}}}}\&gt; \overset{x=\frac{2t}{1+t^2}}{dx}\\ =&amp;\&gt;-\pi\ln2 + \int_0^1 \underset{=K}{\frac{\ln\frac{(1+t)^2}{1+t^2}}{t}}dt -2\int_0^1 \underset{=J}{\frac{\ln\frac{2t}{(1+t)^2}}{1+t^2}}dt \end{align}</span> with <span class="math-container">\begin{align} K=&amp;\&gt;\int_0^1 \frac{\ln (1+t)^2}{t}dt -\int_0^1 \frac{\ln (1+t^2)}{t}\overset{t^2\to t}{dt} =\frac32 \int_0^1 \frac{\ln (1+t)}{t}dt =\frac{\pi^2}8\\ J=&amp; \int_0^1 \frac{\ln t}{1+t^2}dt +\int_0^1 \frac{\ln \frac2{(1+t)^2}}{1+t^2}\overset{t\to\frac{1-t}{1+t}}{dt}= \int_0^1 \frac{\ln t}{1+t^2}dt=-G\\ \end{align}</span> Thus <span class="math-container">$$I=-\pi \ln 2+\frac{\pi^{2}}{8}+2G$$</span></p>
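<p>A numeric cross-check: after the substitution $x=\sin t$ the integrand $\dfrac{t\ln(1+\sin t)\cos t}{\sin^2 t}$ is smooth on $[0,\pi/2]$ (the $t\to0$ singularity is removable, with limit $1$), so composite Simpson's rule reproduces the closed form (Python sketch; function names are mine):</p>

```python
import math

G = 0.915965594177219  # Catalan's constant

def g(t):
    # integrand after x = sin t; removable singularity at t = 0 with limit 1
    if t == 0.0:
        return 1.0
    s = math.sin(t)
    return t * math.log(1 + s) * math.cos(t) / s**2

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

approx = simpson(g, 0.0, math.pi / 2, 2000)
exact = -math.pi * math.log(2) + math.pi**2 / 8 + 2 * G
print(approx, exact)
```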
3,074,900
<h2>Problem</h2> <p>When proving one result in the statistical learning theory course, the instructor uses <span class="math-container">$$ \mathbb{E}[\mathbb{E}[X\vert Y,Z]\vert Z]=\mathbb{E}[X\vert Z] $$</span> but I am not sure why this is true.</p> <h2>What I Have Done</h2> <p>I know I could do the following <span class="math-container">$$ \mathbb{E}[X\vert Y]=\int xf_{X\vert Y}(x\vert y)dx $$</span> But when <span class="math-container">$X$</span> becomes complicated like <span class="math-container">$\mathbb{E}[X\vert Y,Z]$</span> (sorry for the abuse of variable name), I do not know how to proceed.</p> <p>Could someone help me, thank you in advance.</p>
Masoud
653,056
<p>By the tower property: if <span class="math-container">$F_1 \subset F_2$</span> then <span class="math-container">$E(E(X|F_2)|F_1)=E(X|F_1)$</span>.</p> <p>Now</p> <p><span class="math-container">$\mathbb{E}[\mathbb{E}[X\vert Y,Z]\vert Z]=\mathbb{E}[\mathbb{E}[X\vert \sigma (Y,Z)]\vert \sigma(Z)]$</span></p> <p><span class="math-container">$=\mathbb{E}[\mathbb{E}[X\vert F_2]\vert F_1]=E(X|F_1)$</span></p> <p><span class="math-container">$=\mathbb{E}[X\vert \sigma(Z)]=\mathbb{E}[X\vert Z]$</span></p> <p>since <span class="math-container">$F_1=\sigma(Z) \subset F_2=\sigma (Y,Z)$</span>.</p> <p>This proof is valid for every type of random variable (continuous, discrete, or mixed), so no assumption about the type of random variable is needed.</p>
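<p>For a concrete check, one can verify the identity by direct enumeration on a small hand-made joint distribution (Python sketch; the pmf below is an arbitrary example of mine):</p>

```python
from itertools import product

# an arbitrary strictly positive joint pmf on {0,1}^3, normalized
weights = {(x, y, z): 1 + x + 2 * y + 3 * z + x * y * z
           for x, y, z in product([0, 1], repeat=3)}
total = sum(weights.values())
p = {k: v / total for k, v in weights.items()}

def cond_exp_X(pred):
    """E[X | {(Y,Z) satisfies pred}] by direct summation."""
    num = sum(x * q for (x, y, z), q in p.items() if pred(y, z))
    den = sum(q for (x, y, z), q in p.items() if pred(y, z))
    return num / den

results = []
for z0 in (0, 1):
    p_z = sum(q for (x, y, z), q in p.items() if z == z0)
    # E[E[X|Y,Z] | Z=z0]: average E[X|Y=y0,Z=z0] over the law of Y given Z=z0
    lhs = sum(
        cond_exp_X(lambda y, z, y0=y0: (y, z) == (y0, z0))
        * sum(q for (x, y, z), q in p.items() if (y, z) == (y0, z0)) / p_z
        for y0 in (0, 1)
    )
    rhs = cond_exp_X(lambda y, z: z == z0)  # E[X | Z=z0]
    results.append((lhs, rhs))
print(results)  # each pair coincides
```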
3,574,460
<p>Suppose <span class="math-container">${X_0, X_1, . . . , }$</span> forms a Markov chain with state space S. For any n ≥ 1 and <span class="math-container">$i_0, i_1, . . . , ∈ S$</span>, which conditional probability, <span class="math-container">$P(X_0 = i_0|X_1 = i_1)$</span> or <span class="math-container">$P(X_0 = i_0|X_n = i_n)$</span>, is equal to <span class="math-container">$P(X_0 = i_0|X_1 = i_1, . . . , X_n = i_n)$</span>?</p> <p>I think it is the second one?? I do know the Markov property but I am not sure on how it applies to the initial state? </p>
oso_hormiguero
1,053,142
<p><span class="math-container">\begin{aligned} P(X_0 = i_0|X_1 = i_1, . . . , X_n = i_n) &amp;= \frac{P(X_0 = i_0, X_1 = i_1, . . . , X_n = i_n)}{P(X_1 = i_1, . . . , X_n = i_n)} &amp; \text{(conditional probability)}\\ &amp;= \frac{P(X_0 = i_0)P( X_1 = i_1 |X_0 = i_0)\cdots P(X_n = i_n | X_{n-1} = i_{n-1})}{P(X_1 = i_1)P(X_2 = i_2 | X_1 = i_1) \cdots P(X_n = i_n | X_{n-1} = i_{n-1})} &amp; \text{(Markov chain joint distribution)}\\ &amp;= \frac{P(X_0 = i_0)P( X_1 = i_1 |X_0 = i_0)}{P(X_1 = i_1)} &amp;\\ &amp;= \frac{P(X_0 = i_0, X_1 = i_1)}{P(X_1 = i_1)} &amp;\\ &amp;= P(X_0 = i_0 | X_1 = i_1) \end{aligned}</span></p>
1,035,877
<p>I'd really appreciate if someone could help me so I could get going on these problems, but this is confusing me... and it's been holding me up for the last couple hours. </p> <p>How can I find the volume of the solid when revolving the region bounded by $y=1-\frac{1}{2}x$, $y=0$, and $x=0$ about the line $ x=-1$? How could I set it up? </p> <p>I'd REALLY appreciate if someone could take the time to answer this so I don't spend all night on one problem.</p>
Mr. Math
70,964
<p>Hint</p> <p>The axis of revolution is the vertical line $x=-1$, so use cylindrical shells: the shell at position $x$ has radius $x+1$ and height $y=1-\frac{1}{2}x$, giving $$\text{volume}=\int_0^2 2\pi(x+1)\left(1-\frac{1}{2}x\right)dx.$$</p> <p>(The region meets the $x$-axis at $x=2$, so the limits are $x=0$ and $x=2$.)</p>
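<p>As a cross-check of the shell-method setup about $x=-1$, namely $V=\int_0^2 2\pi(x+1)\left(1-\frac{x}{2}\right)dx$ (one way to do this problem; the exact value works out to $\frac{10\pi}{3}$), a quick midpoint-rule evaluation:</p>

```python
import math

def shell(x):
    # 2π · (radius about x = -1) · (height of the region at x)
    return 2 * math.pi * (x + 1) * (1 - x / 2)

n = 100000
h = 2 / n
V = sum(shell((i + 0.5) * h) for i in range(n)) * h  # midpoint rule on [0, 2]
print(V, 10 * math.pi / 3)  # agree
```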
2,609,252
<p>like the title said i'm looking for the best way for me(a 15 year old) to go about learning calculus, thank you :)</p>
Ski Mask
503,445
<p>Go straight to Khan Academy and start with their beginner courses. It's by far one of the most effective ways to learn things. I'm a CS student in university and keep coming back to Khan Academy for help with things like limits or L'Hôpital's Rule. So for someone who wants to start learning calculus, Khan Academy is the way to go. </p> <p>Also maybe borrow a book from your library to get an idea of the things you need to study and make sort of a layout. Best of luck!</p>
2,609,252
<p>like the title said i'm looking for the best way for me(a 15 year old) to go about learning calculus, thank you :)</p>
Wessel de Zeeuw
160,433
<p>You could take a look at the online Pre-Calculus course of the Technical University in Delft. <a href="https://online-learning.tudelft.nl/courses/pre-university-calculus/" rel="nofollow noreferrer">https://online-learning.tudelft.nl/courses/pre-university-calculus/</a> I think this will fit your academic and enthusiasm level perfectly!</p> <p>Cheers! Wessel</p>
322,134
<p>$$2e^{-x}+e^{5x}$$</p> <p>Here is what I have tried: $$2e^{-x}+e^{5x}$$ $$\frac{2}{e^x}+e^{5x}$$ $$\left(\frac{2}{e^x}\right)'+(e^{5x})'$$</p> <p>$$\left(\frac{2}{e^x}\right)' = \frac{-2e^x}{e^{2x}}$$ $$(e^{5x})'=5xe^{5x}$$</p> <p>So the answer I got was $$\frac{-2e^x}{e^{2x}}+5xe^{5x}$$</p> <p>I checked my answer online and it said that it was incorrect but I am sure I have done the steps correctly. Did I approach this problem correctly?</p>
lsp
64,509
<p>derivative of $2e^{-x} = -2e^{-x}$</p> <p>derivative of $e^{5x} = 5e^{5x}$</p> <p><strong>In general the derivative of $e^{ax} = ae^{ax}$, where $a$ is a constant.</strong></p>
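<p>A numeric confirmation via central differences (illustrative Python; sample points and tolerance are mine):</p>

```python
import math

def f(x):
    return 2 * math.exp(-x) + math.exp(5 * x)        # the original function

def df(x):
    return -2 * math.exp(-x) + 5 * math.exp(5 * x)   # claimed derivative

h = 1e-6
errs = [abs((f(x + h) - f(x - h)) / (2 * h) - df(x)) / max(1.0, abs(df(x)))
        for x in (-1.0, 0.0, 0.5, 1.0)]
print(max(errs))  # tiny: only numerical-differentiation error remains
```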
3,218,525
<p>Let <span class="math-container">$f:[0,1] \to [0, \infty)$</span> is a non-negative continuous function so that <span class="math-container">$f(0)=0$</span> and for all <span class="math-container">$x \in [0,1]$</span> we have <span class="math-container">$$f(x) \leq \int_{0}^{x} f(y)^2 dy$$</span><br> Now consider the set <span class="math-container">$$A=\{x∈[0,1] : \text{for all } y∈[0,x] \text{ we have }f(y)≤1/2\}$$</span> Prove that <span class="math-container">$A=[0,1]$</span>.<br> Since <span class="math-container">$f$</span> is bounded of <span class="math-container">$[0,x]$</span>, I think <span class="math-container">$f$</span> may be <span class="math-container">$0$</span>. But I am not able to do this. Please help me to solve this.</p>
Henk
334,507
<p>A bit of a different approach I believe also to be correct: since <span class="math-container">$f(x)\leq\int_0^xf(y)^2dy$</span> for all <span class="math-container">$x\in[0,1]$</span>, we can say that <span class="math-container">$f$</span> is bounded from above (i.e. <span class="math-container">$f(x)\leq\bar{f}(x)$</span>) on <span class="math-container">$[0,1]$</span> by a function <span class="math-container">$\bar{f}$</span> defined by <span class="math-container">\begin{align} \bar{f}(x)&amp; = \int_0^x\bar{f}(y)^2dy,\quad\text{for all }x\in[0,1],\\ \bar{f}(0) &amp;= 0. \end{align}</span> This is an ODE for <span class="math-container">$\bar{f}$</span> with solution <span class="math-container">$\bar{f}(x)=(-x+c)^{-1}$</span>. Using <span class="math-container">$\bar{f}(0)=0$</span>, we find that <span class="math-container">$$ \bar{f}(0)=\frac{1}{-0+c}=0\implies c\rightarrow\infty. $$</span> Since <span class="math-container">$x\in[0,1], \bar{f}(x)=\lim_{c\rightarrow\infty}(-x+c)^{-1}=0$</span> for all <span class="math-container">$x$</span>. Thus, <span class="math-container">$f(x)\leq0\leq1/2$</span> for all <span class="math-container">$x\in[0,1]$</span> and <span class="math-container">$A=[0,1]$</span>.</p>
142,993
<p>I'm challenging myself to figure out the mathematical expression of the number of possible combinations for certain parameters, and frankly I have no idea how.</p> <p>The rules are these:</p> <p>Take numbers 1...n. Given m places, and with <em>no repeated digits</em>, how many combinations of those numbers can be made?</p> <p>AKA</p> <ul> <li>1 for n=1, m=1 --> 1</li> <li>2 for n=2, m=1 --> 1, 2</li> <li>2 for n=2, m=2 --> 12, 21</li> <li>3 for n=3, m=1 --> 1,2,3</li> <li>6 for n=3, m=2 --> 12,13,21,23,31,32</li> <li>6 for n=3, m=3 --> 123,132,213,231,312,321</li> </ul> <p>I cannot find a way to express the left hand value. Can you guide me in the steps to figuring this out?</p>
Christian Blatter
1,303
<p>Assume that $(0,0)$ is a white square. Call a white square <em>even</em> if it has even coordinates and <em>odd</em>, if it has odd coordinates; there are $4^2=16$ of each. The first square you can choose in $32$ ways; assume you pick an even one. The second square either is even, and there are $3^2=9$ left of these, or it is odd, and there are still $16$ of these. In the first case for the third choice $2^2=4$ even and $16$ odd squares remain, in the second case $9$ even and $9$ odd squares remain. Since the order in which the squares are selected is irrelevant there are $${32\bigl(9\cdot(4+16)+16\cdot(9+9)\bigr)\over 6}=2496$$ possibilities in all.</p>
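<p>A brute-force check in Python (assuming, as in the count above, that we choose $3$-element sets of white squares with no two in the same row or column, taking $(0,0)$ white):</p>

```python
from itertools import combinations

# white squares of an 8x8 board with (0,0) white: those with x + y even
white = [(x, y) for x in range(8) for y in range(8) if (x + y) % 2 == 0]

# 3-element sets with pairwise distinct rows and pairwise distinct columns
count = sum(
    1
    for trio in combinations(white, 3)
    if len({s[0] for s in trio}) == 3 and len({s[1] for s in trio}) == 3
)
print(count)  # 2496
```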
160,801
<p>Here is a vector </p> <p>$$\begin{pmatrix}i\\7i\\-2\end{pmatrix}$$</p> <p>Here is a matrix</p> <p>$$\begin{pmatrix}2&amp; i&amp;0\\-i&amp;1&amp;1\\0 &amp;1&amp;0\end{pmatrix}$$</p> <p>Is there a simple way to determine whether the vector is an eigenvector of this matrix?</p> <p>Here is some code for your convenience.</p> <pre><code>h = {{2, I, 0 }, {-I, 1, 1}, {0, 1, 0}}; y = {I, 7 I, -2}; </code></pre>
Carl Woll
45,431
<p>You could use <a href="http://reference.wolfram.com/language/ref/MatrixRank" rel="noreferrer"><code>MatrixRank</code></a>. Here is a function that does this:</p> <pre><code>eigenvectorQ[matrix_, vector_] := MatrixRank[{matrix . vector, vector}] == 1 </code></pre> <p>For your example:</p> <pre><code>eigenvectorQ[h, y] </code></pre> <blockquote> <p>False</p> </blockquote>
3,014,085
<p>I am trying to isolate y in this equation: <span class="math-container">$$-4/3·\ln⁡(|y-60|)=x+c$$</span></p> <p>If I use a cas-tool to isolate <span class="math-container">$y$</span>, I get:</p> <p><span class="math-container">$$60.-(2.71828182846)^{−0.75*x-0.75*c}=y$$</span></p> <p>If I try isolating <span class="math-container">$y$</span> by hand I get:</p> <p><a href="https://i.stack.imgur.com/vEPyV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vEPyV.png" alt="enter image description here" /></a></p> <hr /> <p>These two are not the same, is the cas-tool right or am I right? What are the rules to isolate something when the absolute value is taken of it as in this case.</p> <hr /> <p>Proof they are not equal: (black is my result, red is cas-tool's result)</p> <p><a href="https://i.stack.imgur.com/oqD35.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oqD35.png" alt="enter image description here" /></a></p>
user
505,767
<p><strong>HINT</strong></p> <p>A trivial case is when <span class="math-container">$z$</span> is real.</p> <p>Excluding the trivial case, let consider different cases, for example for <span class="math-container">$z$</span> in the first quadrant we need that <span class="math-container">$z-2$</span> is in the second quadrant and</p> <p><span class="math-container">$$\arg(z+2) + \arg(z-2) = \pi \iff \arctan\left(\frac{y}{x+2}\right)+\arctan\left(\frac{y}{x-2}\right)+\pi=\pi$$</span></p>
353,480
<p>Is $f(x)=\ln(x)$ uniformly continuous on $(1,+\infty)$? If so, how to show it?</p> <p>I know how to show that it is not uniformly continuous on $(0,1)$, by taking $x=\frac{1}{\exp(n)}$ and $y = \frac{1}{\exp(n+1)}$.</p> <p>Also, on which interval does $\ln(x)$ satisfy the Lipschitz condition?</p>
Community
-1
<p>Let $x,y\in(1,+\infty)$; then by the mean value theorem there is $z$ between $x$ and $y$ (so $z&gt;1$) such that $$\log(x)-\log(y)=\frac{1}{z}(x-y),$$ hence $$|\log(x)-\log(y)|\leq\sup_{z&gt;1}\frac{1}{z}\,|x-y|\leq|x-y|,$$ so the function $\log$ is uniformly continuous on the interval $(1,+\infty)$, since it satisfies the Lipschitz condition.</p>
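<p>A quick numeric spot-check of the Lipschitz bound (illustrative Python; the sample range is an arbitrary choice):</p>

```python
import math, random

random.seed(0)
# |log x - log y| <= |x - y| should hold for all x, y > 1
ok = all(
    abs(math.log(x) - math.log(y)) <= abs(x - y) + 1e-15  # slack for rounding
    for _ in range(10000)
    for x, y in [(1 + 100 * random.random(), 1 + 100 * random.random())]
)
print(ok)  # True
```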
2,354,036
<p>$\require{AMScd}\def\colim{\text{colim}}$I need this result in less generality, but I'd be happy to know this stronger version holds.</p> <p>Let $\{\alpha_c : Fc \to Gc\}$ be arrows in a category $D$, indexed by the objects of a category $C$, for two functors $F,G: C\to D$. </p> <p>Let $A\subseteq C$ be a dense subcategory (meaning that $i : A \hookrightarrow C$ is a dense functor). Then $\alpha$ is a natural transformation <em>if and only if it is natural on the components $c\in A$ (and with respect to morphisms of $A$)</em>.</p> <p><strong>Edit</strong>: This is not what I wanted to know (and false, see below): instead, I'm trying to prove that there is a unique extension of a natural transformation $Fi\to Gi$ to a natural transformation $F\to G$.</p> <p>Example: for a functor $K: [I°,Set] \to [I°,Set]$, a natural transformation $\alpha : 1_{[I°,Set]}\to K$ is such if and only if it is natural when restricted to representables.</p> <p>It is easy to see that the right universal property gives that if $P = \colim y(X_i)$ is a colimit of representables, then there is $$ P \cong \colim\; y(X_i) \xrightarrow{\colim \alpha_{X_i}} \colim\; Ky(X_i) \to K\big(\colim\; y(X_i)\big) \cong KP $$ It seems then that all boils down to the fact that the morphism $\colim\; KX\to K(\colim\; X)$ is "natural", i.e. $$ \begin{CD} KX @&gt;&gt;&gt; K(\colim_I X_i)\\ @VVV @VVV\\ KY @&gt;&gt;&gt; K(\colim_J Y_j) \end{CD} $$ commutes (but how do you induce the right-vertical arrow?).</p>
Eric Wofsey
86,856
<p>This is very very false. You can simply take any natural transformation $Fi\to Gi$, and then define $\alpha_c$ for each $c\not\in A$ to be any map $Fc\to Gc$ at all. It would be an enormous coincidence if the latter satisfied naturality.</p> <p>For a simple example, let $C=D=Set$, $F=G=1_{Set}$, and let $A$ consist of just a singleton. Any $\alpha$ is natural when restricted to $A$, but it's certainly not true that any collection of maps $X\to X$ for each set $X$ defines a natural transformation $1_{Set}\to 1_{Set}$.</p>
1,556,747
<p>$$\text{a)} \ \ \sum_{k=0}^{\infty} \frac{5^{k+1}+(-3)^k}{7^{k+2}}\qquad\qquad\qquad\text{b)} \ \ \sum_{k=1}^{\infty}\log\bigg(\frac{k(k+2)}{(k+1)^2}\bigg)$$</p> <p>I am trying to determine whether these series converge and, if so, their values. I tried partial sums and got stuck, so I am thinking of the comparison test. Help!</p>
Mark Viola
218,419
<p>HINTS:</p> <p>$$\frac{5^{k+1}+(-3)^{k}}{7^{k+2}}=\frac5{49}\left(\frac57\right)^k+\frac1{49}\left(-\frac{3}{7}\right)^k$$</p> <p>$$\log\left(\frac{k(k+2)}{(k+1)^2}\right)=(\log (k)-\log (k+1))+(\log (k+2)-\log (k+1))$$</p> <p>Then, telescope the two series.</p>
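Not part of the original answer: a quick numerical sanity check of the two hints above, sketched in Python. The closed forms used below, $13/35$ for (a) and $-\log 2$ for (b), follow from summing the two geometric series and telescoping as indicated.

```python
import math
from fractions import Fraction

# (a) two geometric series: 5/49 * (5/7)^k and 1/49 * (-3/7)^k
a_exact = Fraction(5, 49) / (1 - Fraction(5, 7)) + Fraction(1, 49) / (1 - Fraction(-3, 7))
a_partial = sum((5**(k + 1) + (-3)**k) / 7**(k + 2) for k in range(200))
assert a_exact == Fraction(13, 35)
assert abs(a_partial - float(a_exact)) < 1e-12

# (b) telescoping: the partial sum up to N equals log((N+2)/(2(N+1))) -> -log 2
N = 10**5
b_partial = sum(math.log(k * (k + 2) / (k + 1)**2) for k in range(1, N + 1))
assert abs(b_partial - math.log((N + 2) / (2 * (N + 1)))) < 1e-9
assert abs(b_partial + math.log(2)) < 2e-5
```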
438,166
<p>Let $X$ be a bounded connected open subset of the $n$-dimensional real Euclidean space. Consider the Laplace operator defined on the space of infinitely differentiable functions with compact support in $X$. </p> <p>Does the closure of this operator generate a strongly continuous semigroup on $C_0(X)$ endowed with the supremum norm?</p> <p>I think it is equivalent to the following question: </p> <blockquote> <p>Is the space of infinitely differentiable functions with compact support in $X$ a core for the Dirichlet Laplacian on $X$?</p> </blockquote>
DeM
113,757
<p>I completely agree with the previous answer but I would like to add that in general there are even <em>infinitely many</em> extensions of the Laplacian - not only Dirichlet and Neumann. E.g., the Laplacian with all Robin-type boundary conditions $$ \frac{\partial u}{\partial n}=pu_{|\partial X} $$ will do the job.</p>
1,129,712
<p>So I'm completely stuck with something. I know the following statements are true (or at least they seem to be, from the results I got messing around a bit in MATLAB), but I don't understand why they are true or how to show it. Let $A$ be an $m\times n$ matrix. Show that:</p> <p>a) if $x \in N(A^TA)$ then $Ax$ is in both $R(A)$ and $N(A^T)$. </p> <p>For this one I messed around with my own examples and I got $Ax=0$, therefore satisfying the statement, but I don't understand what's actually going on. </p> <p>b) $N(A^TA)=N(A)$</p> <p>again, makes sense when I see the results in MATLAB, but I don't understand why it works.</p> <p>c) $A$ and $A^TA$ have the same rank</p> <p>d) If $A$ has linearly independent columns, then $A^TA$ is nonsingular.</p> <p>For the last two I have no idea how to even start showing the relationship. I feel like I'm missing some crucial relationship between $A$ and $A^TA$, but I'm just not seeing it. </p> <p>I would greatly appreciate any help or suggestions on how to show that these statements are true. </p> <p>Thank you very much. </p>
egreg
62,967
<p>a) By definition $Ax\in R(A)$; on the other hand, $A^TAx=0$, by assumption, so $Ax\in N(A^T)$.</p> <p>b) It is clear that $N(A)\subseteq N(A^TA)$. Suppose $x\in N(A^TA)$; then $x^TA^TAx=0$ as well, so $(Ax)^T(Ax)=0$, which implies $Ax=0$.</p> <p>c) The rank-nullity theorem says that, if $B$ is an $m\times n$ matrix, then $\dim R(B)+\dim N(B)=n$. Let $A$ be $m\times n$ and apply the rank nullity theorem to $A$ and $A^TA$: \begin{align} n&amp;=\dim R(A)+\dim N(A) \\ n&amp;=\dim R(A^TA)+\dim N(A^TA) \end{align} Since by part b we have $\dim N(A)=\dim N(A^TA)$, we conclude $\dim R(A)=\dim R(A^TA)$.</p> <p>d) If $A$ has linearly independent columns, its rank is $n$; so also $A^TA$ has rank $n$, hence it is invertible.</p>
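A brute-force check of part (c), not from the original answer: a Python sketch over exact rational arithmetic, with an ad hoc `rank` helper (Gaussian elimination over `Fraction`s, so no floating-point rank issues).

```python
from fractions import Fraction
import random

def rank(M):
    """Rank of a matrix via Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

random.seed(0)
for _ in range(50):
    m, n = random.randint(1, 5), random.randint(1, 5)
    A = [[random.randint(-3, 3) for _ in range(n)] for _ in range(m)]
    AtA = matmul(transpose(A), A)
    assert rank(A) == rank(AtA)  # part (c): ranks agree
```

The key step in part (b), $x^TA^TAx=(Ax)^T(Ax)=0\implies Ax=0$, also works over the rationals, which is why exact arithmetic suffices here.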
2,120,510
<blockquote> <p>Out of <span class="math-container">$180$</span> students, <span class="math-container">$72$</span> have Windows, <span class="math-container">$54$</span> have Linux, <span class="math-container">$36$</span> have both Windows and Linux and the rest (<span class="math-container">$18$</span>) have OS X. What's the probability that out of <span class="math-container">$15$</span> randomly picked students:</p> <p>i) At most two won't have Windows;</p> <p>ii) At least one will have OS X.</p> </blockquote> <p>So for the first point I just thought that the probability is the sum of probabilities that &quot;none will have Windows&quot; plus &quot;one will have Windows&quot; plus &quot;two will have windows&quot;. Is this a correct train of thought? How do I calculate the probabilities though?</p> <p>If it were just <span class="math-container">$1$</span> student picked, the probability that the student didn't have windows would be <span class="math-container">$\left(\frac{72}{180}\right)$</span>, right? But I pick <span class="math-container">$15$</span>, is it <span class="math-container">$\frac{72}{180}\cdot \frac{71}{179}\cdot\dots$</span> ?</p>
Kanwaljit Singh
401,635
<p>Hint -</p> <p>Case 1 -</p> <p>Sum of probabilities that no one has windows, exactly one and exactly two students don't have windows.</p> <p>Case 2 -</p> <p>At least one have OS X = 1 - No one have OS X</p> <p>$= 1 - \frac{\binom{162}{15}}{\binom{180}{15}}$</p>
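For Case 2 the count can be verified exactly (a sketch, not from the original answer): with $162$ students not running OS X, the complement probability $\binom{162}{15}/\binom{180}{15}$ equals the product of the step-by-step drawing probabilities, as hinted at in the question.

```python
from fractions import Fraction
from math import comb

# P(at least one OS X) = 1 - C(162,15)/C(180,15)
p = 1 - Fraction(comb(162, 15), comb(180, 15))

# the same quantity written as sequential draws without replacement
q = Fraction(1)
for j in range(15):
    q *= Fraction(162 - j, 180 - j)

assert p == 1 - q
assert Fraction(3, 4) < p < Fraction(9, 10)
```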
2,120,510
<blockquote> <p>Out of <span class="math-container">$180$</span> students, <span class="math-container">$72$</span> have Windows, <span class="math-container">$54$</span> have Linux, <span class="math-container">$36$</span> have both Windows and Linux and the rest (<span class="math-container">$18$</span>) have OS X. What's the probability that out of <span class="math-container">$15$</span> randomly picked students:</p> <p>i) At most two won't have Windows;</p> <p>ii) At least one will have OS X.</p> </blockquote> <p>So for the first point I just thought that the probability is the sum of probabilities that &quot;none will have Windows&quot; plus &quot;one will have Windows&quot; plus &quot;two will have windows&quot;. Is this a correct train of thought? How do I calculate the probabilities though?</p> <p>If it were just <span class="math-container">$1$</span> student picked, the probability that the student didn't have windows would be <span class="math-container">$\left(\frac{72}{180}\right)$</span>, right? But I pick <span class="math-container">$15$</span>, is it <span class="math-container">$\frac{72}{180}\cdot \frac{71}{179}\cdot\dots$</span> ?</p>
Dove
402,662
<ol> <li><p>For your first question, it will be the sum of probabilities that no one has windows, exactly one student doesn't have windows, and exactly two students don't have windows.</p></li> <li><p>P=1- P(no one has X)</p></li> </ol>
2,604,093
<p>I would like to study the convergence of the series:</p> <p>$$\sum_{n=1}^\infty \frac{\log n}{n^2}$$</p> <p>I could compare the generic element $\frac{\log n}{n^2}$ with $\frac{1}{n^2}$ and say that $$\frac{1}{n^2}&lt;\frac{\log n}{n^2}$$ and $\frac{1}{n^2}$ converges but nothing more about.</p>
José Carlos Santos
446,262
<p>By <a href="https://en.wikipedia.org/wiki/Cauchy_condensation_test" rel="nofollow noreferrer">Cauchy's condensation test</a>, your series converges if and only if$$\sum_{n=1}^\infty\frac{2^n\log2^n}{2^{2n}}$$converges. But this series is equal to$$\sum_{n=1}^\infty\frac{n\log2}{2^n}$$which clearly converges.</p>
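Numerically (not part of the original answer; a Python sketch): the condensed series sums to exactly $2\log 2$, since $\sum_{n\ge1} nx^n = x/(1-x)^2$, and the partial sums of the original series stabilize, consistent with convergence.

```python
import math

# condensed series: sum_{n>=1} n*log(2)/2^n; with sum n x^n = x/(1-x)^2 this is 2*log 2
condensed = sum(n * math.log(2) / 2**n for n in range(1, 200))
assert abs(condensed - 2 * math.log(2)) < 1e-12

# partial sums of sum log(n)/n^2 increase, but change very little once n is large
s1 = sum(math.log(n) / n**2 for n in range(1, 10**5))
s2 = sum(math.log(n) / n**2 for n in range(1, 2 * 10**5))
assert 0 < s2 - s1 < 2e-4
```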
2,604,093
<p>I would like to study the convergence of the series:</p> <p>$$\sum_{n=1}^\infty \frac{\log n}{n^2}$$</p> <p>I could compare the generic element $\frac{\log n}{n^2}$ with $\frac{1}{n^2}$ and say that $$\frac{1}{n^2}&lt;\frac{\log n}{n^2}$$ and $\frac{1}{n^2}$ converges but nothing more about.</p>
zhw.
228,045
<p>Repeat after me: $\ln n\to \infty$ more slowly than any positive power of $n.$ In other words,</p> <p>$$\frac{\ln n}{n^p} \to 0\,\text { for any } p&gt; 0.$$</p> <p>Once you have absorbed this, you'll know such things as</p> <p>$$\frac{\ln n}{n^2}&lt; \frac{n^{1/2}}{n^2} = \frac{1}{n^{3/2}}$$</p> <p>for large $n.$ Which in the case of your series proves convergence by the comparison test.</p>
4,537,050
<p>Question 2 of Chapter 14 in Spivak's <em>Calculus</em> reads as follows:</p> <blockquote> <p>For each of the following <span class="math-container">$f$</span>, if <span class="math-container">$F(x)=\int_0^xf$</span>, at which points <span class="math-container">$x$</span> is <span class="math-container">$F'(x)=f(x)$</span>?</p> </blockquote> <p>Part (viii) of Question 2 uses the function:</p> <blockquote> <p><span class="math-container">$f(x)=1$</span> if <span class="math-container">$x=\frac{1}{n}$</span> for some <span class="math-container">$n$</span> in <span class="math-container">$\mathbb N$</span>, <span class="math-container">$f(x)=0$</span> otherwise.</p> </blockquote> <p>The solution manual for this problem reads as:</p> <blockquote> <p>All <span class="math-container">$x$</span> not of the form <span class="math-container">$\frac{1}{n}$</span> for some natural number <span class="math-container">$n$</span> [...are points where <span class="math-container">$F'(x)=f(x)$</span>]</p> </blockquote> <p>From this proposed solution, Spivak suggests that <span class="math-container">$F'(0)=f(0)$</span> (more specifically, it should be <span class="math-container">$F'^+(0)=f(0)$</span>...but for his problems, this is usually implicit).</p> <p>However, I believe <span class="math-container">$f$</span> is discontinuous at <span class="math-container">$0$</span> (because there is no right limit) and, moreover, the intermediate value property is not upheld for any <span class="math-container">$\delta_n=\frac{1}{n} \gt 0$</span> on the interval <span class="math-container">$[0,\delta_n]$</span>. Therefore, I do not think that <span class="math-container">$0$</span> should be included in the list of points that exhibit the feature of <span class="math-container">$F'(x)=f(x)$</span>.</p> <p>Is this correct?</p>
Matt Werenski
733,040
<p>Having not read the book, I can't speak to what the &quot;rules&quot; are for integration, but it probably falls into one of the two camps below.</p> <p><strong>Riemann integral rules</strong> If the book is using Riemann integration then we can construct a sequence of step functions <span class="math-container">$(\phi_i,\psi_i)$</span> such that for every <span class="math-container">$x$</span> <span class="math-container">\begin{align} \phi_i(z) \leq f(z) \leq \psi_i(z) \text{ for all } z \in [0,x] \\ \lim_{i \rightarrow \infty} \int_0^x\psi_i(z)-\phi_i(z) dz = 0 \\ \lim_{i\rightarrow \infty} \int_0^x\psi_i(z)dz = \lim_{i\rightarrow \infty} \int_0^x\phi_i(z)dz = 0 \end{align}</span> This will show that <span class="math-container">$F(x) = \int_0^x f(z) dz = 0$</span> for every <span class="math-container">$x$</span>, and therefore since <span class="math-container">$F$</span> is constant the derivative of <span class="math-container">$F$</span> is <span class="math-container">$F'(x) = 0$</span> for every <span class="math-container">$x$</span>, in particular for <span class="math-container">$x = 0$</span> we have <span class="math-container">$F'(0) = 0 = f(0)$</span>.</p> <p>I'll construct the sequence now; it's a bit messy to do, and a good exercise to try for yourself first.</p> <p>If <span class="math-container">$x \leq 0$</span> then just set <span class="math-container">$\phi_i = \psi_i = 0$</span> and all of the properties are satisfied.</p> <p>So let <span class="math-container">$x &gt; 0$</span> and let <span class="math-container">$n_0$</span> be the smallest positive integer such that <span class="math-container">$1/n_0 \leq x$</span>. Set <span class="math-container">$\phi_i = 0$</span> for every <span class="math-container">$i$</span>. 
Set <span class="math-container">\begin{equation} \psi_i(x) = \mathbb{1}\left [x \leq \frac{1}{n_0+i+1}\right ] + \sum_{k = n_0}^{n_0+i} \mathbb{1}\left[\frac{1}{k} - \frac{1/k -1/(k+1)}{2i} \leq x \leq 1/k \right] \end{equation}</span> This function may look like a beast but there is an intuition to its construction. The <span class="math-container">$n_0$</span> trick makes it so we can ignore the bad points of <span class="math-container">$f$</span> which are outside of <span class="math-container">$[0,x]$</span>. After that each term in the sum handles exactly one point <span class="math-container">$1/(n_0 + k) \in [0,x]$</span> where <span class="math-container">$f$</span> is badly behaved. The leading term handles all the points near the origin with bad behavior. Incrementing <span class="math-container">$i$</span> does two things. First it moves one point from the catch-all interval near the origin into the finite sum. Second, it makes the intervals around the bad points even tighter.</p> <p>One can check that <span class="math-container">$\psi$</span> is an appropriate upper bound on <span class="math-container">$f$</span> and also <span class="math-container">\begin{align} \int_0^x \psi_i(z) &amp;= \frac{1}{n_0+i+1}+ \sum_{k=n_0}^{n_0+i} \frac{1}{k} - \left ( \frac{1}{k} - \frac{1/k -1/(k+1)}{2i} \right ) \\ &amp;=\frac{1}{n_0 + i + 1} + \frac{1}{2i}\sum_{k=n_0}^{n_0 + i} \frac{1}{k(k+1)} \\ &amp;\leq \frac{1}{n_0 + i + 1} + \frac{1}{2i}\sum_{k=1}^\infty \frac{1}{k^2} \\ &amp;= \frac{1}{n_0 + i + 1} + \frac{C}{2i} \end{align}</span> where <span class="math-container">$C = \sum_{k=1}^\infty 1/k^2$</span> is a finite constant. 
From this last bound we see that indeed <span class="math-container">\begin{equation} 0 \leq \lim_{i\rightarrow \infty} \int_0^x \psi_i(z)dz \leq \lim_{i \rightarrow \infty} \frac{1}{n_0 + i + 1} + \frac{C}{2i} = 0 \end{equation}</span></p> <p><strong>Measure theory rules</strong> If the book is using measure theory to define integrals and we're talking about Lebesgue integration then we have, letting <span class="math-container">$S = \{1/n : n\in \mathbb{N}\}$</span> and <span class="math-container">$L$</span> be the Lebesgue measure, that <span class="math-container">\begin{align} F(x) = \int_0^x f dL &amp;= \int_{S} fdL + \int_{[0,x] \setminus S} fdL \\ &amp;= \int_S 1 dL + \int_{[0,x] \setminus S} 0dL \\ &amp;= 1 \cdot L[S] + 0 \cdot L[[0,x] \setminus S] \\ &amp; = 1\cdot 0 + 0 \cdot x = 0 \end{align}</span> which shows that <span class="math-container">$F(x) = 0$</span> and since this holds for all <span class="math-container">$x$</span> we have <span class="math-container">$F'(x) = 0$</span> for all <span class="math-container">$x$</span>, in particular <span class="math-container">$F'(0) = 0 = f(0)$</span>.</p>
886,243
<p>Evaluate</p> <p><img src="https://latex.codecogs.com/gif.latex?%0A%24%245050%20%5Cfrac%20%7B%5Cleft(%20%5Csum%20_%7Br%3D0%7D%5E%7B100%7D%20%5Cfrac%20%7B%7B100%5Cchoose%20r%7D%7D%7B50r%2B1%7D%5Ccdot%20(-1)%5Er%5Cright)%20-%201%7D%7B%5Cleft(%20%5Csum%20_%7Br%3D0%7D%5E%7B101%7D%5Cfrac%7B%7B101%5Cchoose%20r%7D%7D%7B50r%2B1%7D%20%5Ccdot%20(-1)%5Er%5Cright)%20-%201%7D%24%24" alt="image"></p> <p>There seemed to be some problem with Stack Exchange's math rendering, but Ian corrected whatever error was there in the expression. Thanks.</p> <p>$$5050 \frac {\left( \sum _{r=0}^{100} \frac {{100\choose r}}{50r+1}\cdot (-1)^r\right) - 1}{\left( \sum _{r=0}^{101}\frac{{101\choose r}}{50r+1} \cdot (-1)^r\right) - 1}$$</p> <p>The original definite integral that led to this was </p> <p>$$5050\frac{\int_0^1(1-x^{50})^{100} dx}{\int_0^1(1-x^{50})^{101} dx} $$</p> <p>I used the binomial theorem to arrive at the above expression.</p> <p>I already know a technique by definite integration but I want one by sequence and series.</p> <blockquote class="spoiler"> <p> The answer is 5051</p> </blockquote>
RE60K
67,609
<p><strong>A purely sequence-and-series approach is not recommended; a possible route is outlined below, but it is very long and unwieldy. The trouble is the bounded factor $50r$ (rather than a free $r$) in the denominator: with a free $r$, the standard absorption identities below would finish the calculation easily.</strong></p> <h2>Brute Force</h2> <p>$$\frac{{100\choose r}}{50r+1}={100\choose r}-50\cdot\frac{r\,{100\choose r}}{50r+1}$$ Use: $$r{100\choose r}=100\,{99\choose{r-1}}$$ Then $$\frac{{99\choose{r-1}}}{50r+1}={99\choose{r-1}}-50\cdot\frac{r\,{99\choose {r-1}}}{50r+1}$$ Again use: $$(r-1){99\choose {r-1}}=99\,{98\choose{r-2}}\quad\text{ after writing }\quad\frac{r\,{99\choose {r-1}}}{50r+1}=\frac{(r-1)\,{99\choose {r-1}}}{50r+1}+\frac{{99\choose {r-1}}}{50r+1}$$ and so on...</p> <hr> <p>The way you are mentioning, definite integration via the $\beta$-function (though you may not be realising it is the $\beta$-function), is the best method in my opinion.</p> <hr> <p>You should use these identities (preferred, but not necessary) while summing binomial coefficients: $$r{n\choose r}=n{{n-1}\choose{r-1}};\qquad\frac1{r+1}{n\choose r}=\frac1{n+1}{{n+1}\choose{r+1}}$$ Secondly, use partial fractions.</p>
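Independently of the route chosen, the value can be confirmed by exact arithmetic (a sketch, not from the original answer). Writing $I_n=\int_0^1(1-x^{50})^n\,dx=\sum_{r=0}^n\binom nr\frac{(-1)^r}{50r+1}$, integration by parts gives the recurrence $I_n=\frac{50n}{50n+1}I_{n-1}$, so $5050\,I_{100}/I_{101}=5051$ exactly; the extra "$-1$" terms in the transcribed sums appear to be transcription artifacts, since the sums themselves already equal the integrals.

```python
from fractions import Fraction
from math import comb

def I(n):
    # I_n = ∫_0^1 (1 - x^50)^n dx, expanded by the binomial theorem
    return sum(Fraction((-1)**r * comb(n, r), 50 * r + 1) for r in range(n + 1))

assert I(1) == Fraction(50, 51)        # ∫_0^1 (1 - x^50) dx = 1 - 1/51
assert 5050 * I(100) / I(101) == 5051  # the claimed value, exactly
```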
3,968,905
<p>I am trying to prove this:</p> <p><span class="math-container">$\bullet$</span> Prove that <span class="math-container">$\Delta(\varrho_\epsilon \star u) = \varrho_\epsilon \star f $</span> in the sense of distributions, if <span class="math-container">$\Delta u = f$</span> in the sense of distributions, <span class="math-container">$ u \in L^1_{loc}(\mathbb{R}^n)$</span>.</p> <p>Can anybody help me? Thank you in advance and I take advantage of the situation to wish you a happy new year :D</p>
RicardoMM
730,135
<p>Using Wolfram Alpha, all the solutions are given by the expression: <span class="math-container">$$\begin{cases}y=\frac{x-1}{x+1}, x\neq -1 \\y = \frac {x+3}{x+1}, x\neq -1 \end{cases}$$</span> For example, the solutions for <span class="math-container">$x=1$</span> are <span class="math-container">$y=0$</span> and <span class="math-container">$y=2$</span>. You could try to find these solutions by hand, solving the quadratic equation with respect to <span class="math-container">$y$</span>. If you expand the original equation you get: <span class="math-container">$$x^2 y^2 - 2 x^2 y + x^2 + 2 x y^2 + 2 x + y^2 - 2 y + 1 = 4 x y + 4$$</span> <span class="math-container">$$\Leftrightarrow (x^2 +2x+1) y^2 + (-2x^2-4x-2)y + (x^2+2x+1-4)=0 $$</span></p> <p>Which you can solve using the quadratic formula.</p>
3,968,905
<p>I am trying to prove this:</p> <p><span class="math-container">$\bullet$</span> Prove that <span class="math-container">$\Delta(\varrho_\epsilon \star u) = \varrho_\epsilon \star f $</span> in the sense of distributions, if <span class="math-container">$\Delta u = f$</span> in the sense of distributions, <span class="math-container">$ u \in L^1_{loc}(\mathbb{R}^n)$</span>.</p> <p>Can anybody help me? Thank you in advance and I take advantage of the situation to wish you a happy new year :D</p>
tiredsoldat
868,820
<p>The problem is that you are trying to solve this by simplifying instead of substituting values for $x$ and $y$. A hint for this problem: first look at its graph, on Desmos perhaps. From this you can see that the only integer solutions will ever be found on the axes, hence where $x=0$ or where $y=0$. This is the easiest method, of course. Doing this we get:</p> <p>When $x=0$: $(y^2+1)-2y=4$, i.e. $y^2-2y-3=0$. Use the quadratic formula to solve and get $y=3$ and $y=-1$, so the two solutions with $x=0$ are $(0,3)$ and $(0,-1)$.</p> <p>Now for the solutions when $y=0$, same process: $(x^2+1)+2x=4$, i.e. $x^2+2x-3=0$. Follow the same steps as earlier to get $x=1$ and $x=-3$, so the solutions with $y=0$ are $(1,0)$ and $(-3,0)$.</p> <p>Basically you had the right idea; you simply needed to set one variable to zero first, repeating the process with each variable. Hopefully this clears things up. Explaining why there aren't any other integer solutions besides these intercept solutions would be a little more difficult.</p> <p>Just remember that it asked for positive solutions. Therefore $(0,3)$ and $(1,0)$ are the only cases that work here.</p>
2,120,763
<p>I've been given this problem:</p> <p>Prove that a subordinate matrix norm is a matrix norm, i.e. </p> <p>if $\left \|. \right \|$ is a vector norm on $\mathbb{R}^{n}$, then $\left \| A \right \|=\max_{\left \| x \right \|=1}\left \| Ax \right \|$ is a matrix norm</p> <p>I don't even understand the question, and a explanation on what the problem ask me to do would be very appreciated, thanks in advance.</p> <p>specific what does $\max_{\left \| x \right \|=1}\left \| Ax \right \|$ mean</p>
lab bhattacharjee
33,337
<p>Let the highest power of prime $p$ that divides $a,b,c$ be $A,B,C$ respectively.</p> <p>So, the highest power of prime $p$ that divides the GCD will be min$(A,B,C)$,</p> <p>and the highest power of prime $p$ that divides the LCM will be max$(A,B,C)$.</p> <p>We need min$(A,B,C)+$max$(A,B,C)=A+B+C$ for every prime that divides at least one of $a,b,c$.</p> <p>WLOG $A\le B\le C$, so min$(A,B,C)=A$ and max$(A,B,C)=C\implies A+C=A+B+C\iff B=0$; and since $0\le A\le B$, also $A=0$.</p> <p>So each prime divides at most one of $a,b,c$; that is, $a,b,c$ must be pairwise coprime (in particular $(a,b,c)=1$) for LCM$(a,b,c)\cdot$GCD$(a,b,c)=abc$.</p> <p>For the two-integer case the identity always holds, since trivially min$(A,B)+$max$(A,B)=A+B$.</p>
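The prime-exponent argument above can be brute-force checked (a sketch, not from the original answer): over small triples, the identity holds exactly when $a,b,c$ are pairwise coprime.

```python
from math import gcd

def lcm3(a, b, c):
    # lcm of three integers via the pairwise lcm formula
    l = a * b // gcd(a, b)
    return l * c // gcd(l, c)

def gcd3(a, b, c):
    return gcd(gcd(a, b), c)

for a in range(1, 25):
    for b in range(1, 25):
        for c in range(1, 25):
            identity = lcm3(a, b, c) * gcd3(a, b, c) == a * b * c
            pairwise = gcd(a, b) == gcd(b, c) == gcd(a, c) == 1
            assert identity == pairwise
```

Note that $\gcd(a,b,c)=1$ alone is not sufficient: for $a=b=2$, $c=3$ the GCD is $1$ but $\mathrm{lcm}\cdot\gcd = 6 \neq 12 = abc$.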
1,624
<p>For example, to change the color of each pixel to the mean color of the three channels, I tried</p> <pre><code>i = ExampleData[{"TestImage", "Lena"}]; Mean[i] </code></pre> <p>but it just remains unevaluated:</p> <p><img src="https://i.stack.imgur.com/K1RRR.png" alt="enter image description here"></p> <p>How can I read the colors of an image into a list or matrix and change the color codes and save it back to an image?</p>
cormullion
61
<p>The ImageApply function applies any suitable Mathematica function to every pixel in an image. You just have to specify the transformation you want to make. I recently asked this: <a href="https://mathematica.stackexchange.com/questions/207/image-levels-how-to-alter-exposure-of-dark-and-light-areas">Image levels: how to alter &#39;exposure&#39; of dark and light areas?</a> question - and got many good answers...</p>
1,624
<p>For example, to change the color of each pixel to the mean color of the three channels, I tried</p> <pre><code>i = ExampleData[{"TestImage", "Lena"}]; Mean[i] </code></pre> <p>but it just remains unevaluated:</p> <p><img src="https://i.stack.imgur.com/K1RRR.png" alt="enter image description here"></p> <p>How can I read the colors of an image into a list or matrix and change the color codes and save it back to an image?</p>
Brett Champion
69
<p>There are also several built-in effects available through <code>ImageEffect</code>. For example:</p> <pre><code>ImageEffect[ExampleData[{"Image","Lena"}], #]&amp; /@ {"Charcoal", {"OilPainting", 10}, {"Posterization", 5}} </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/y18SQ.png" alt="Mathematica graphics"></p> </blockquote>
244,492
<p>Find $m \in \mathbb R$ for which the equation $|x-1|+|x+1|=mx+1$ has only one unique solution. When does a absolute value equation have only 1 solution?</p> <p>I solved for $x$ in all 4 cases and got $x=\frac{1}{-m-2},x=\frac{1}{2-m},x=\frac{1}{m},x=-\frac{3}{m}$</p>
Clive Newstead
19,542
<p><strong>Method 1:</strong> Draw a graph to find the answer, and then prove that your answer holds.</p> <p><strong>Method 2:</strong> Solve the equation in the separate cases</p> <ul> <li>$x \le -1$ (so that $|x+1|=-(x+1)$ and $|x-1|=-(x-1)$)</li> <li>$-1 \le x \le 1$ (so that etc.)</li> <li>$x \ge 1$ (etc.)</li> </ul> <p>Then play around with the values of $m$ to force only one solution to be valid.</p>
90,263
<p>Let $\mathcal{E} = \lbrace v^1 ,v^2, \dotsm, v^m \rbrace$ be the set of right eigenvectors of $P$ and let $\mathcal{E^*} = \lbrace \omega^1 ,\omega^2, \dotsm, \omega^m \rbrace$ be the set of left eigenvectors of $P.$ Given any two vectors $v \in \mathcal{E}$ and $ \omega \in \mathcal{E^*}$ which correspond to the eigenvalues $\lambda_1$ and $\lambda_2$ respectively. If $\lambda_1 \neq \lambda_2$ then $\langle v, \omega^\tau\rangle = 0.$ </p> <p>Proof. For any eigenvector $v\in \mathcal{E}$ and $ \omega \in \mathcal{E^*}$ which correspond to the eigenvalues $\lambda_1$ and $\lambda_2$ where $\lambda_1 \neq \lambda_2$ we have, \begin{equation*} \begin{split}\langle\omega,v\rangle = \frac{1}{\lambda_2}\langle \lambda_2 \omega, v\rangle = \frac{1}{\lambda_2} \langle P^ \tau \omega ,v\rangle = \frac{1}{\lambda_2}\langle\omega,P v\rangle = \frac{1}{\lambda_2} \langle \omega,\lambda_1v\rangle = \frac{\lambda_1}{\lambda_2}\langle \omega,v\rangle .\end{split} \end{equation*} This implies $(\frac{\lambda_1}{\lambda_2} - 1)\langle\omega,v\rangle = 0.$ If $\lambda_1 \neq \lambda_2$ then $\langle \omega,v\rangle = 0.$</p> <p>My question: what if $ \lambda_2 = 0 \neq \lambda_1,$ how can I include this case in my proof.</p>
Neal
20,569
<p>Culture affects the student's learning style, classroom expectations, attitude toward learning, and is even correlated with their mathematical background. Teaching a classroom full of poor, mostly black or hispanic kids who are first-generation college students is very different from teaching a classroom full of upper-middle class, mostly suburban white kids! If you're a good teacher, the cultures in your classroom will provide information that you can use to tailor your presentation, anecdotes, attitude, and examples to maximize the knowledge they get out of the class. It will also affect how you treat each individual student.</p> <p>Mathematical principles are timeless, raceless, classless, and culture-less. The people who learn them are not!</p>
1,605,281
<p><a href="https://i.stack.imgur.com/43uoh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/43uoh.png" alt="The question about finding the exact value of the sine of the angle between (PQ) and the plane"></a></p> <p>I have done part (a). For part (b), I know the principle of how to do it: I tried to use the cross product to find the exact value of the sine of the angle. </p> <p>So I found PQ, which is $$\begin{pmatrix}7-2\\-1-1\\2-6\end{pmatrix}$$ </p> <p>$$=\begin{pmatrix}5\\-2\\-4\end{pmatrix}$$</p> <p>Then I take the cross product of $$\begin{pmatrix}5\\-2\\-4\end{pmatrix}$$ and $$\begin{pmatrix}5\\-3\\-1\end{pmatrix},$$ which is the normal vector of the given plane equation,</p> <p>and using $$||{\bf a} \times {\bf b}|| = ||{\bf a}|| \, ||{\bf b}|| \sin \theta,$$ I got the answer to be $\sqrt{490}/\sqrt{490}$, which makes $\sin\theta =1$, but the answer states that $\sin\theta = \sqrt 7/3$. Where did I go wrong? </p> <p>Thank you and sorry in advance for any wrong tags and title labelling. </p>
Brian M. Scott
12,042
<p>You have the essential idea, but you’ve stated it rather badly. To show that $f$ is not continuous, you need only find an open set $U$ such that $f^{-1}[U]$ is not open. You are correct in thinking that taking $U=[a,b)$ will work, but you need to say that that’s what you’re doing:</p> <blockquote> <p>Let $U=[a,b)$, where $a,b\in\Bbb R$ with $a&lt;b$; by definition $U\in\mathcal{T}_\ell$.</p> </blockquote> <p>That is, you want $U$ to be specifically $[a,b)$: you don’t want $[a,b)$ just to be some open subset of $U$, which is what you’ve actually said. </p> <blockquote> <p>Then $f^{-1}[U]=\{-x:a\le x&lt;b\}=\{x:a\le -x&lt;b\}=\{x:-a\ge x&gt;-b\}=(-b,-a]$.</p> </blockquote> <p>Note that it is <em>not</em> true that this is $[-a,-b)$: by definition $[-a,-b)=\{x\in\Bbb R:-a\le x&lt;-b\}=\varnothing$, since $-a&gt;-b$.</p> <p>Now you have to show that $(-b,-a]\notin\mathcal{T}_\ell$, and you haven’t actually done this. The fact that $(-b,-a]$ is not of the form $[c,d)$ is <em>not</em> enough to ensure that it’s not in $\mathcal{T}_\ell$: the intervals of the form $[c,d)$ are only a <em>base</em> for the topology $\mathcal{T}_\ell$, which contains many sets not of this form. For instance, every subset of $\Bbb R$ that is open in the usual Euclidean topology is also open in the lower limit topology; if you’ve not already done so, you should try to prove this. This means, for instance, that the set $(-b,-a)$ <em>is</em> in $\mathcal{T}_\ell$.</p> <p>As it happens, $(-b,-a]$ is not in $\mathcal{T}_\ell$, but you have to do a little work to show this. Since $(-b,-a)$ is in $\mathcal{T}_\ell$, the point $-a$ must be the one that’s causing trouble. Does $(-b,-a]$ contain any basic open nbhd of $-a$? No: such a nbhd would have the form $[-a,-a+\epsilon)$ for some $\epsilon&gt;0$, and then the point $-a+\frac{\epsilon}2$ would be in $[-a,-a+\epsilon)\setminus(-b,-a]$, showing that $[-a,-a+\epsilon)\nsubseteq(-b,-a]$. 
Thus, $-a$ has no open nbhd contained in $(-b,-a]$, and therefore $(-b,-a]$ cannot be open in the lower limit topology.</p>
3,856,370
<p>This is a result in the book (Discrete Mathematics and Its Applications) I was reading.</p> <ol> <li><span class="math-container">$n^d\in O(b^n)$</span></li> </ol> <p>where <span class="math-container">$b&gt;1$</span> and <span class="math-container">$d$</span> is positive,</p> <p>and</p> <ol start="2"> <li><span class="math-container">$(\log_b(n))^c\in O(n^d)$</span></li> </ol> <p>where <span class="math-container">$b&gt;1$</span> and <span class="math-container">$d,c$</span> are positive.</p> <p>But I am having trouble understanding how that's possible. For example, in the second, if you have <span class="math-container">$c$</span> as some big number like <span class="math-container">$10^{100}$</span> and <span class="math-container">$d$</span> as <span class="math-container">$10^{-100}$</span>, both are positive and <span class="math-container">$b$</span> can be <span class="math-container">$1.0000001$</span>; how can we find a <span class="math-container">$C$</span> and <span class="math-container">$k$</span> witnessing the big-O definition?</p> <p>And in the first one, if we compare the functions by taking logs on both sides (ignoring the <span class="math-container">$d$</span> here, as we can always multiply by it on the other side),</p> <p>we will have</p> <p><span class="math-container">$\log(n)$</span> and <span class="math-container">$n\cdot \log(b)$</span>;</p> <p>now which one of them is bigger, as <span class="math-container">$b$</span> can be some number like <span class="math-container">$1.00000000001$</span>?</p> <p>Can someone give me a proof of these two results? 
Thanks.</p> <p>Edit: in the first one it's not <span class="math-container">$d\cdot n$</span> but <span class="math-container">$n^d$</span>.</p> <p>Update: I found the same question from another user: <a href="https://math.stackexchange.com/questions/2687525/how-to-prove-that-nd-is-obn-from-n-is-o2n-given-that-d0-b1">identical question</a>.</p> <p>Its answer cleared things up a little for me but still left me confused, since according to the OP's accepted answer <span class="math-container">$d&lt;\log_2(b)$</span>, where <span class="math-container">$\log_2(b)&gt;0$</span>.</p> <p>But the result from the book is</p> <blockquote> <p><span class="math-container">$n^d\in O(b^n)$</span></p> </blockquote> <blockquote> <p>This tells us that every power of n is big-O of every exponential function of n with a base that is greater than one</p> </blockquote> <p>There is no such constraint on <span class="math-container">$d$</span> (as the statement says for every power of n).</p>
Claude Leibovici
82,404
<p>If you like special functions <span class="math-container">$$I=\int\left(\frac{\sin x}{x^3}-\frac1{x^2}\right)\,dx=-\frac{\text{Si}(x)}{2}-\frac{\sin (x)}{2 x^2}+\frac{1}{x}-\frac{\cos (x)}{2 x}$$</span> <span class="math-container">$$J(p)=\int_0^p\left(\frac{\sin x}{x^3}-\frac1{x^2}\right)\,dx=-\frac{\text{Si}(p)}{2}-\frac{\sin (p)}{2 p^2}+\frac{1}{p}-\frac{\cos (p)}{2 p}$$</span> and the limit of <span class="math-container">$\text{Si}(p)$</span> is <span class="math-container">$\frac \pi 2$</span>, so <span class="math-container">$J(p)\to-\frac{\pi}{4}$</span> as <span class="math-container">$p\to\infty$</span>.</p>
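The closed form can be sanity-checked numerically (a sketch, not part of the original answer), comparing it against direct quadrature of the integrand; `simpson` and the $\mathrm{Si}$ approximation below are ad hoc helpers, not library calls.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def si(x):
    # Si(x) = ∫_0^x sin(t)/t dt; the integrand tends to 1 at t = 0
    return simpson(lambda t: math.sin(t) / t if t else 1.0, 0.0, x)

def J_formula(p):
    return -si(p) / 2 - math.sin(p) / (2 * p * p) + 1 / p - math.cos(p) / (2 * p)

def g(x):
    # integrand sin(x)/x^3 - 1/x^2, with limit -1/6 at x = 0
    return math.sin(x) / x**3 - 1 / x**2 if x else -1 / 6

for p in (1.0, 5.0):
    assert abs(simpson(g, 0.0, p) - J_formula(p)) < 1e-7

# and J(p) -> -pi/4 as p -> infinity
assert abs(J_formula(500.0) + math.pi / 4) < 0.01
```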
129,295
<p>$$\int{\sqrt{x^2 - 2x}}$$</p> <p>I think I should be doing trig substitution, but which? I completed the square giving </p> <p>$$\int{\sqrt{(x-1)^2 -1}}$$</p> <p>But the closest I found is for</p> <p>$$\frac{1}{\sqrt{a^2 - (x+b)^2}}$$ </p> <p>So I must add a $-$, but how? </p>
Kns
27,579
<p>I don't know whether I am right or wrong, but I can do this example without a trigonometric substitution in the following way: \begin{align*} \int\sqrt{x^{2}-2x}\,dx &amp;=\int\sqrt{x^{2}-2x+1-1}\,dx\\ &amp;=\int\sqrt{(x-1)^{2}-1^{2}}\,dx\\ &amp;=\frac{x-1}{2}\sqrt{(x-1)^{2}-1}-\frac{1}{2}\log \left|x-1+\sqrt{(x-1)^{2}-1}\right|+c. \end{align*} I used the following formula of integration, $\int\sqrt{x^{2}-a^{2}}\,dx=\frac{x}{2}\sqrt{x^{2}-a^{2}}-\frac{a^{2}}{2}\log |x+\sqrt{x^{2}-a^{2}}|+c,$ with $x$ replaced by $x-1$ and $a=1$.</p>
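A quick numerical check of this antiderivative (not part of the original answer): on $x&gt;2$, where $\sqrt{x^2-2x}$ is real and smooth, a central difference of the antiderivative should reproduce the integrand.

```python
import math

def F(x):
    # (x-1)/2 * sqrt((x-1)^2 - 1) - (1/2) * log(x - 1 + sqrt((x-1)^2 - 1))
    u = x - 1
    return 0.5 * u * math.sqrt(u * u - 1) - 0.5 * math.log(u + math.sqrt(u * u - 1))

def f(x):
    return math.sqrt(x * x - 2 * x)

h = 1e-6
for x in (2.5, 4.0, 10.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-6
```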