2,781,153
<p>I have a right triangle inscribed in a circle with radius $r$: the hypotenuse of the triangle is equal to the diameter of the circle, and the two other sides of the triangle are equal to each other.</p> <blockquote> <p>Prove that when you divide the area of the circle by the area of the triangle, you get $\pi$.</p> </blockquote> <p>This is what I did:</p> <p>The area of a triangle is $\frac{\text{height}\times \text{width}}{2}$ and the area of a circle is $\pi r^2$. Now I do not know how to continue.</p>
user061703
515,578
<p>The problem states "the right triangle" and "two other sides of the triangle are equal to each other". This means it is a right isosceles triangle, i.e. a $45-45-90$ triangle.</p> <p>Assume that this is the $45-45-90$ triangle $ABC$, right-angled and isosceles at $A$, and draw the altitude $AH$ (perpendicular to $BC$); then we can prove that $HA=HB=HC=\dfrac{BC}{2}=r$. So the area of $\Delta{ABC}$ is:</p> <p>$\dfrac{\text{base}\times \text{height}}{2}=\dfrac{2r\times r}{2}=r^2$</p> <p>Alternatively, you can also use the Pythagorean theorem to calculate $AB$ and $AC$, then use the standard formula for the area of a right triangle (which in this case is also isosceles).</p>
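A quick numerical sanity check of this answer (not part of the original post): with base $2r$ and height $r$ the triangle area is $r^2$, so the ratio of the areas is $\pi$ for any radius. A minimal sketch in Python:

```python
import math

def circle_area(r):
    return math.pi * r ** 2

def triangle_area(r):
    # base = hypotenuse = 2r, height = distance from the right angle to it = r
    return (2 * r) * r / 2   # = r**2

for r in (1.0, 2.5, 7.0):
    assert math.isclose(circle_area(r) / triangle_area(r), math.pi)
```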
223,642
<p>$z\cdot e^{1/z}\cdot e^{-1/z^2}$ at $z=0$.</p> <p>My answer is removable singularity. $$ \lim_{z\to0}\left|z\cdot e^{1/z}\cdot e^{-1/z^2}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|=0. $$ But someone says it is an essential singularity. I don't know why.</p>
TTY
19,412
<p>For $z\cdot e^{\frac{-1}{z^2}}$, note that if you approach the origin along the imaginary axis, say $z=ih$, you get $ihe^{\frac{-1}{(ih)^2}}=ihe^{\frac{1}{h^2}}$, which obviously does not tend to zero as $h \to 0$.</p>
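The blow-up along the imaginary axis is easy to confirm numerically; a small sketch (values chosen to stay within floating-point range):

```python
import cmath

def g(z):
    # numeric look at z * e^{-1/z^2} near the origin
    return z * cmath.exp(-1 / z**2)

# along the real axis e^{-1/x^2} is tiny, so |g| -> 0
assert abs(g(0.1)) < 1e-40
# along the imaginary axis z = ih we have (ih)^2 = -h^2, so e^{-1/z^2} = e^{1/h^2}
assert abs(g(0.1j)) > 1e40
```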
1,137,079
<p>I'm new to the concept of the complex plane. I found this exercise:</p> <blockquote> <p>Let $z,z_1,z_2\in\mathbb C$ such that $z=z_1/z_2$. Show that the length of $z$ is the quotient of the lengths of $z_1$ and $z_2$.</p> </blockquote> <p>If $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$ then $|z_1|=\sqrt{x_1^2+y_1^2}$ and $|z_2|=\sqrt{x_2^2+y_2^2}$, which yields $|z_1|/|z_2|=\sqrt{\dfrac{x_1^2+y_1^2}{x_2^2+y_2^2}}$.</p> <p>Now, $z=\dfrac{z_1}{z_2}=\dfrac{x_1+iy_1}{x_2+iy_2} $. The first issue is to try and separate the imaginary part from the real one. I did this by: $$\dfrac{x_1+iy_1}{x_2+iy_2}=\dfrac{x_1+iy_1}{x_2+iy_2}\times\frac{x_2-iy_2}{x_2-iy_2}=\frac{x_1x_2-ix_1y_2+ix_2y_1+y_1y_2}{x_2^2+y_2^2}\\ =\frac{x_1x_2+y_1y_2}{x_2^2+y_2^2}+i\frac{x_2y_1-x_1y_2}{x_2^2+y_2^2}.$$ Hence $|z|=\sqrt{\left(\dfrac{x_1x_2+y_1y_2}{x_2^2+y_2^2}\right)^2+\left(\dfrac{x_2y_1-x_1y_2}{x_2^2+y_2^2}\right)^2}=\sqrt{\dfrac{(x_1x_2+y_1y_2)^2+(x_2y_1-x_1y_2)^2}{(x_2^2+y_2^2)^2}}$. Continuing I get $$\sqrt{\frac{(x_1x_2)^2+(y_1y_2)^2+(x_2y_1)^2+(x_1y_2)^2}{(x_2^2+y_2^2)^2}},$$ but I can't see any way forward from here. Is there a mistake? How can I simplify the expression for $|z|$? I appreciate your help.</p>
egreg
62,967
<p>If you know that $|ab|=|a|\,|b|$, then use $a=z_1/z_2$ and $b=z_2$: $$ |z_1|=\left|\frac{z_1}{z_2}z_2\right|=|ab|=|a|\,|b|= \left|\frac{z_1}{z_2}\right|\,|z_2| $$ Now, easily, $$ \left|\frac{z_1}{z_2}\right|=\frac{|z_1|}{|z_2|}. $$</p> <p>Proving that $|ab|=|a|\,|b|$ is much simpler, even with the definition: if $a=x_1+iy_1$ and $b=x_2+iy_2$, then $$ ab=(x_1x_2-y_1y_2)+i(x_1y_2+x_2y_1) $$ and it's quite easy to verify that $$ (x_1x_2-y_1y_2)^2+(x_1y_2+x_2y_1)^2=(x_1^2+y_1^2)(x_2^2+y_2^2) $$</p> <p>However, it's much easier with the definition $$ |z|=\sqrt{z\bar{z}}. $$</p>
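All three identities used here ($|z_1/z_2|=|z_1|/|z_2|$, $|ab|=|a|\,|b|$, and $|z|=\sqrt{z\bar z}$) can be spot-checked numerically with Python's built-in complex type:

```python
import math

z1, z2 = 3 + 4j, 1 - 2j
# |z1/z2| = |z1| / |z2|
assert math.isclose(abs(z1 / z2), abs(z1) / abs(z2))
# |ab| = |a| |b|
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))
# |z| = sqrt(z * conj(z)); the product z*conj(z) is real and nonnegative
assert math.isclose(abs(z1), math.sqrt((z1 * z1.conjugate()).real))
```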
4,608,805
<p>Suppose that I have a class of 35 students whose average grade is 90. I randomly picked 5 students whose average came out to be 85. Assume their grades are i.i.d. and normal <span class="math-container">$N(\mu, \sigma^2)$</span>. From the examples I have seen, <span class="math-container">$\mu$</span> is usually called the population mean and should be equal to <span class="math-container">$90$</span>. The sample mean is usually referred to as <span class="math-container">$\frac{\sum{X_i}}{5}$</span>. When we do hypothesis testing we can ask whether the sample mean is equal to <span class="math-container">$90$</span>.</p> <ol> <li><p>Is the sample mean <span class="math-container">$\frac{\sum X_i}{5}$</span> or <span class="math-container">$85$</span>?</p> </li> <li><p>The population mean mathematically should be <span class="math-container">$\mu$</span>, but I think people also say that <span class="math-container">$90$</span> is the population mean. This does not make sense to me since it is not obvious why <span class="math-container">$90 = \mu$</span>. <span class="math-container">$90$</span> is calculated as the sum of the grades divided by 35, whereas <span class="math-container">$\mu$</span> is equal to some integral. I just do not see how they can be equal to each other.</p> </li> </ol>
William M.
396,761
<p>You are assuming that the grade of each student follows a normal distribution <span class="math-container">$\mathsf{Norm}(\mu, \sigma^2)$</span> <strong>and</strong> you assume <span class="math-container">$\mu = 90.$</span> You may raise the question of whether it is sensible to model grades with a normal distribution, since grades usually come either as points (e.g. 90 out of 100) or as fractions (e.g. 8.7 out of 10). Either way, the normal distribution may not seem a good choice. However, in many circumstances the normal distribution will provide a fit good enough to make <em>inferences</em> and <em>generalisations</em> about the <em>average</em> or other aggregate measure of the population. Of course, there are tools to assess how well the assumption of normality holds on a given data set, such as QQ plots, histograms and statistical goodness-of-fit tests.</p> <p>Under the assumptions, each student is then &quot;modelled&quot; as a random variable <span class="math-container">$X_i \sim \mathsf{Norm}(90; \sigma^2).$</span> If you pick 5 at random, then <span class="math-container">$(X_1 + \ldots + X_5)/5 \sim \mathsf{Norm}(90; \sigma^2 / 5).$</span> Note that <span class="math-container">$90$</span> is assumed to be the &quot;population mean&quot;, which is just the statistician being pedantic in expressing that this is the number assumed to be the mean of the distribution we are using to model the current scenario (the term &quot;population&quot; is just fanciful and means nothing, although some statistics books do take the term quite literally and go into a rabbit hole of philosophising). In contrast, if you have a sample (i.e. observations that came at random) then the &quot;sample mean&quot; is just the (usual) average of the observed numbers. In your problem, it is assumed that this observed average is 85. Now the sensible question would be: Is 85 consistent with the proposed model? And the statistical way to answer this question is to study how likely it is for a random variable <span class="math-container">$\mathsf{Norm}(90; \sigma^2/5)$</span> to be less than or equal to 85, i.e. we calculate <span class="math-container">$\mathbf{P}(X &lt; 85)$</span> using standardisation and tables. (Of course, you also need to assume the value of <span class="math-container">$\sigma.$</span>) If this probability is too small, you have a degree of certainty that this model is not consistent with the observed data; but if the probability is not so small (e.g. it would occur once every 5, 10 or 20 times), then maybe the model is <em>sufficiently good.</em> In fact, a nice point raised a while ago is not to consider only those numbers that would surprise us on the same side of 85, but rather <em>all numbers that would surprise us</em> (i.e. cause suspicion). In this case, those are numbers that are far from the average. Since you observed a distance of 5 from the average, you would be wondering: &quot;What is the probability that, assuming the current proposed model, the observation differs from the mean by at least 5?&quot; Mathematically, you would calculate <span class="math-container">$\mathbf{P}(|X-90|&gt;5).$</span> The same idea as before: too small a number raises a lot of suspicion that the model may be inadequate.</p> <p>Final note: there is no way to find the &quot;true&quot; mean, and no model or method can eliminate all uncertainty. You always have to allow a degree of error (i.e. you have to accept that any choice you make may be erroneous, ideally only under very rare circumstances).</p>
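The two-sided probability described at the end can be computed with the standard library; a sketch, where $\sigma = 10$ is an assumed value (the problem does not specify it):

```python
from math import sqrt
from statistics import NormalDist

mu, n = 90.0, 5
sigma = 10.0                                  # assumed value; the problem gives no sigma
xbar_dist = NormalDist(mu, sigma / sqrt(n))   # distribution of the mean of 5 grades
# two-sided "surprise" probability P(|Xbar - 90| > 5)
p = 2 * (1 - xbar_dist.cdf(mu + 5))
assert 0.25 < p < 0.28   # roughly 0.26: not small enough to reject the model
```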
2,087,724
<p>Let $\Omega $ be a smooth domain of $\mathbb R^d$ ($d\geq 2$) and $f\in \mathcal C(\overline{\Omega })$. Let $u\in \mathcal C^2(\overline{\Omega })$ be a solution of $$-\Delta u(x)+f(x)u(x)=0\ \ \text{in}\ \ \Omega .$$ Assume that $f(x)\geq 0$ for $x\in \Omega $. Prove that $$\int_{B(x,r)}|\nabla u|^2\leq \frac{C}{r^2}\int_{B(x,2r)}|u|^2,$$ for all $x\in \Omega $ and $r&gt;0$ with $B(x,2r)\subset \subset \Omega $, for some $C\geq 0$ independent of $u,f,x$ and $r$.</p> <p><strong>My attempts</strong></p> <p>Using the divergence theorem and that $\Delta u=fu$ in $\Omega $, I have that $$\int_{B(x,r)}|\nabla u|^2=\int_{B(x,r)}\text{div}(u\nabla u)-\int_{B(x,r)}u\Delta u=\int_{\partial B(x,r)}u\nabla u\cdot \nu-\int_{B(x,r)}fu^2.$$</p> <p>But I can't do better. Any help would be welcome.</p>
xpaul
66,420
<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\xi$ to have the following property $$ \xi=\left\{\begin{array}{lcr}1,&amp;\text{if }x\in B_r\\ \ge 0,&amp;\text{if }x\in B_{2r}\setminus B_r\\ 0,&amp;\text{if }x\in\Omega\setminus B_{2r}, \end{array} \right. $$ satisfying $|\nabla \xi|\le \frac{C}{r}$. Using $u\xi^2$ as a test function in the equation, one has $$ \int_{\Omega}\nabla u\nabla(u\xi^2)dx=-\int_{\Omega}fu^2\xi^2dx\le0. $$ Thus $$ \int_{\Omega}\xi^2|\nabla u|^2dx\le-2\int_{\Omega}(\xi\nabla u)(u\nabla\xi) dx. $$ By using $2ab\le \frac{1}{2}a^2+2b^2$, one has $$ \int_{\Omega}\xi^2|\nabla u|^2dx\le \frac12\int_{\Omega}\xi^2|\nabla u|^2dx+2\int_{\Omega}|u\nabla\xi|^2 dx $$ and hence $$ \int_{\Omega}\xi^2|\nabla u|^2dx\le 4\int_{\Omega}|u\nabla\xi|^2 dx. $$ So using the property of $\xi$, one has $$ \int_{B_r}|\nabla u|^2dx\le \frac{C}{r^2}\int_{B_{2r}}|u|^2 dx. $$</p>
3,013,529
<p>Suppose that the function <span class="math-container">$f$</span> is:</p> <p>1) Riemann integrable (not necessarily continuous) function on <span class="math-container">$\big[a,b \big]$</span>;</p> <p>2) <span class="math-container">$\forall n \geq 0$</span> <span class="math-container">$\int_{a}^{b}{f(x) x^n} = 0$</span> (in particular, it means that the function is orthogonal to all polynomials).</p> <p>Prove that <span class="math-container">$f(x) = 0$</span> in all points of continuity <span class="math-container">$f$</span>.</p>
Disintegrating By Parts
112,478
<p>If you know something about Fourier analysis, you can use the Fejer kernel and the following to conclude that <span class="math-container">$f=0$</span> at all points of continuity: <span class="math-container">$$ \int_{a}^{b}f(x)e^{-isx}dx = \sum_{n=0}^{\infty}\frac{(-is)^n}{n!}\int_{a}^{b} f(x)x^n dx = 0,\;\;\; s\in\mathbb{R}. $$</span></p>
3,013,529
<p>Suppose that the function <span class="math-container">$f$</span> is:</p> <p>1) Riemann integrable (not necessarily continuous) function on <span class="math-container">$\big[a,b \big]$</span>;</p> <p>2) <span class="math-container">$\forall n \geq 0$</span> <span class="math-container">$\int_{a}^{b}{f(x) x^n} = 0$</span> (in particular, it means that the function is orthogonal to all polynomials).</p> <p>Prove that <span class="math-container">$f(x) = 0$</span> in all points of continuity <span class="math-container">$f$</span>.</p>
zhw.
228,045
<p>To keep the notation simple, suppose <span class="math-container">$f$</span> is Riemann integrable on <span class="math-container">$[-1,1],$</span> <span class="math-container">$f$</span> is continuous at <span class="math-container">$0,$</span> and <span class="math-container">$ \int_{-1}^1 p(x)f(x)\, dx =0$</span> for all polynomials <span class="math-container">$p.$</span> We want to show <span class="math-container">$f(0)=0.$</span></p> <p>Suppose, to reach a contradiction, that this fails. Then WLOG <span class="math-container">$f(0)&gt;0.$</span> By the continuity of <span class="math-container">$f$</span> at <span class="math-container">$0,$</span> there exists <span class="math-container">$0&lt;\delta &lt; 1$</span> such that <span class="math-container">$f&gt;f(0)/2$</span> in <span class="math-container">$[-\delta,\delta].$</span></p> <p>Define <span class="math-container">$p_n(x) = \sqrt n(1-x^2)^n.$</span> Then, with <span class="math-container">$M$</span> a bound for <span class="math-container">$|f|$</span> on <span class="math-container">$[-1,1],$</span></p> <p><span class="math-container">$$|\int_{\delta}^1 fp_n|\le M\sqrt n(1-\delta^2)^n.$$</span></p> <p>The right hand side <span class="math-container">$\to 0$</span> as <span class="math-container">$n\to \infty.$</span> Same thing for the integral over <span class="math-container">$[-1,-\delta].$</span></p> <p>On the other hand, for large <span class="math-container">$n$</span> we have </p> <p><span class="math-container">$$\int_{-\delta}^{\delta} fp_n \ge (f(0)/2)\int_{-\delta}^{\delta} \sqrt n(1-x^2)^n\, dx \ge (f(0)/2)\sqrt n\int_{0}^{1/\sqrt n} (1-x^2)^n\, dx$$</span> <span class="math-container">$$ = (f(0)/2)\int_0^1 (1-y^2/n)^n\,dy \to (f(0)/2)\int_0^1 e^{-y^2}\,dy &gt;0.$$</span></p> <p>This proves that <span class="math-container">$\int_{-1}^1 p_n(x)f(x)\, dx &gt;0 $</span> for large <span class="math-container">$n,$</span> and we have our contradiction.</p>
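Both limits used in this argument can be checked numerically; a rough sketch with $\delta = 1/2$ and a midpoint Riemann sum:

```python
import math

def tail_bound(n, delta=0.5, M=1.0):
    # bound on the tail integral:  M * sqrt(n) * (1 - delta^2)^n  -> 0
    return M * math.sqrt(n) * (1 - delta**2) ** n

def central_integral(n, steps=100_000):
    # midpoint Riemann sum for  ∫_0^1 (1 - y^2/n)^n dy
    h = 1.0 / steps
    return sum((1 - ((i + 0.5) * h) ** 2 / n) ** n * h for i in range(steps))

limit = math.sqrt(math.pi) / 2 * math.erf(1.0)  # ∫_0^1 e^{-y^2} dy ≈ 0.7468
assert tail_bound(200) < 1e-20
assert abs(central_integral(5000) - limit) < 1e-3
```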
1,002,777
<p>I want to convert this rational function to partial fractions.</p> <p>$$ \frac{x^2-2x+2}{x(x-1)} $$</p> <p>I proceed like this: $$ \frac{x^2-2x+2}{x(x-1)} = \frac{A}{x} + \frac{B}{x-1} $$ Solving, $$ A=-2,B=1 $$ But this does not make sense. What is going wrong?</p>
Timbuc
118,527
<p>An idea I didn't see in the other answers:</p> <p>$$\frac{x^2-2x+2}{x(x-1)}=\frac{(x-1)^2+1}{x(x-1)}=\frac{x-1}x+\frac1{x(x-1)}=1-\frac1x+\frac1{x(x-1)}$$</p> <p>And now either directly: $\;\frac1{x(x-1)}=\frac1{x-1}-\frac1x\;$ ,or by partial fractions, so that finally</p> <p>$$\frac{x^2-2x+2}{x(x-1)}=1-\frac2x+\frac1{x-1}$$</p>
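If sympy is available, the final decomposition can be verified directly (the point of the original confusion is that the fraction is improper, so the constant term $1$ must be extracted first):

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 - 2*x + 2) / (x * (x - 1))
decomp = sp.apart(expr, x)
# expected decomposition: 1 - 2/x + 1/(x - 1)
assert sp.simplify(decomp - (1 - 2/x + 1/(x - 1))) == 0
assert sp.simplify(decomp - expr) == 0
```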
2,136,024
<p>I am having problems with this linear algebra proof:</p> <blockquote> <p>Let $ A $ be a square matrix of order $ n $ that has exactly one nonzero entry in each row and each column. Let $ D $ be the diagonal matrix whose $ i^{th} $ diagonal entry is the nonzero entry in the $i^{th}$ row of $A$</p> <p>For example:</p> <p>$A = \begin{bmatrix}0 &amp; 0 &amp; a_1 &amp; 0\\a_2 &amp; 0 &amp; 0 &amp; 0\\0 &amp; 0 &amp; 0 &amp; a_3 \\0 &amp; a_4 &amp; 0 &amp; 0 \end{bmatrix} \quad $ $D = \begin{bmatrix}a_1 &amp; 0 &amp; 0 &amp; 0\\0 &amp; a_2 &amp; 0 &amp; 0\\0 &amp; 0 &amp; a_3 &amp; 0\\0 &amp; 0 &amp; 0 &amp; a_4 \end{bmatrix}$</p> <p>A permutation matrix, P, is defined as a square matrix that has exactly one 1 in each row and each column</p> <p>Please prove that:</p> <ol> <li>$ A = DP $ for a permutation matrix $ P $</li> <li>$ A^{-1} = A^{T}D^{-2} $</li> </ol> </blockquote> <p>My attempt:</p> <p>For 1, I tried multiplying elementary matrices to $ D $ to transform it into $ A $:</p> <p>$$ A = D * E_1 * E_2 * \cdots * E_k $$</p> <p>Since I am performing post multiplication with elementary matrices, the effect would be a column wise operation on D. But I can't see how this swaps the elements of $ D $ to form $A$. I also cannot prove that the product of the elementary matrices will be a permutation matrix.</p> <p>For 2, my attempt is as follows (using a hint that $PP^{T} = I$):</p> <p>$$ \begin{aligned} A^{T}D^{-2} &amp;= (DP)^{T}D^{-2} \\ &amp;= (P^{T})(D^{T})(D^{-1})(D^{-1}) \\ &amp;= (P^{-1})(D^{T})(D^{-1})(D^{-1}) \end{aligned} $$</p> <p>I am not sure how to complete the proof since I cannot get rid of the term $D^{T}$.</p> <p>Could someone please advise me on how to solve this problem?</p>
Joshua Ruiter
399,014
<p>For part one, you have the right idea. Try multiplying $A$ on the right by various permutation matrices, and just see what happens. For example, multiplying on the right by this matrix swaps columns 1 and 3. \begin{equation} \begin{bmatrix} 0 &amp; 0 &amp; a_1 &amp; 0 \\ a_2 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; a_3 \\ 0 &amp; a_4 &amp; 0 &amp; 0 \end{bmatrix} \begin{bmatrix} 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix} =\begin{bmatrix} a_1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; a_2 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; a_3 \\ 0 &amp; a_4 &amp; 0 &amp; 0 \end{bmatrix} \end{equation} You can find similar matrices to do more column swaps to get to $D$. You don't need to prove in general that these column swap matrices are necessarily permutation matrices (although they are), since all this problem asks you to do is write down a matrix $P$.</p>
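A numeric spot-check of both parts on the $4\times 4$ example, with arbitrary nonzero values $a_i$ (a sketch assuming numpy):

```python
import numpy as np

a = [3.0, -2.0, 5.0, 7.0]
A = np.array([[0,    0, a[0], 0],
              [a[1], 0, 0,    0],
              [0,    0, 0,    a[2]],
              [0, a[3], 0,    0]])
D = np.diag(a)                       # i-th diagonal entry = nonzero entry of row i
P = (A != 0).astype(float)           # a 1 wherever A has its nonzero entry
assert np.allclose(A, D @ P)                         # part 1: A = DP
assert np.allclose(P @ P.T, np.eye(4))               # P is orthogonal: P P^T = I
D_inv2 = np.diag([1.0 / x**2 for x in a])            # D^{-2}
assert np.allclose(np.linalg.inv(A), A.T @ D_inv2)   # part 2: A^{-1} = A^T D^{-2}
```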
3,979,371
<p>So I have been struggling with this question for a while. Suppose <span class="math-container">$X$</span> is uniformly distributed over an interval <span class="math-container">$(a, b)$</span> and <span class="math-container">$Y$</span> is uniformly distributed over <span class="math-container">$(-\sigma, \sigma)$</span>, and <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent. Consider a random variable <span class="math-container">$Z = X + Y$</span>. What would be the <em>conditional</em> distribution of <span class="math-container">$X | Z$</span>, i.e. <span class="math-container">$F_{X|Z}(x)$</span>?</p> <p>I know that <span class="math-container">$F_X(x) = \frac{x - a}{b-a}$</span> and <span class="math-container">$F_Y(y) = \frac{y + \sigma}{2\sigma}$</span> but I am having trouble figuring out <span class="math-container">$F_{X|Z}(x)$</span>. Could anyone help me? My gut says that <span class="math-container">$X|Z$</span> <em>should</em> be uniformly distributed over <span class="math-container">$(Z - \sigma, Z + \sigma)$</span> but I can't figure out how to prove this.</p>
Acccumulation
476,070
<p><span class="math-container">$X|Z$</span> can't, in general, be uniformly distributed over <span class="math-container">$(z-\sigma,z+\sigma)$</span> because there are values of <span class="math-container">$z$</span> such that <span class="math-container">$(z-\sigma,z+\sigma)$</span> is not a subset of <span class="math-container">$(a,b)$</span>. (Note that <span class="math-container">$Z$</span> is a random variable and <span class="math-container">$z$</span> is a real number drawn from its distribution. <span class="math-container">$Z-\sigma$</span> is not a real number, because <span class="math-container">$Z$</span> is a random variable, not a real number. A random variable over the real numbers is different from a real number.)</p> <p>Draw a rectangle with horizontal dimension <span class="math-container">$(a,b)$</span> and vertical <span class="math-container">$(-\sigma,\sigma)$</span>. Then draw diagonal lines with slope <span class="math-container">$-1$</span> in the rectangle. Each diagonal line represents a constant value of <span class="math-container">$z$</span>. For a particular <span class="math-container">$z_0$</span>, <span class="math-container">$X$</span> will be uniformly distributed along the horizontal extension of the corresponding diagonal line. To find <span class="math-container">$X|Z$</span>, you have to take several cases depending on whether <span class="math-container">$z&lt;a$</span>, <span class="math-container">$z&gt;b$</span>, <span class="math-container">$z&lt;-\sigma$</span>, <span class="math-container">$z&gt;\sigma$</span>.</p>
4,309,247
<p>My question comes from an exercise in Shilov's <em>Linear Algebra</em>. His hint is to use induction, but I'm struggling to get anywhere. I looked through the book and couldn't find any theorem that seemed useful, so I'm guessing there is some sort of manipulation I must be missing? A good first step to take would be much appreciated.</p> <p>Thank you for any help you can provide!</p>
esoteric-elliptic
425,395
<p>As the author suggests, we shall use induction.</p> <p>The base case, i.e. <span class="math-container">$m = 1$</span> follows from the hypothesis. Suppose for <span class="math-container">$k\in \mathbb N$</span>, <span class="math-container">$$A^kB - BA^k= kA^{k-1}$$</span> Multiplying throughout by <span class="math-container">$A$</span>, we have <span class="math-container">$$A^{k+1} B - ABA^{k} = kA^k$$</span> but <span class="math-container">$AB = BA + I$</span>, so <span class="math-container">$$A^{k+1} B - (BA + I)A^k = kA^k$$</span> which on simplification yields, <span class="math-container">$$A^{k+1} B - BA^{k+1} = (k+1)A^k$$</span> Therefore, by the principle of (weak) mathematical induction, we have <span class="math-container">$$A^mB - BA^m = mA^{m-1}$$</span> for all <span class="math-container">$m\in \Bbb N$</span>.</p>
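The identity can be spot-checked with the classic model $A=\frac{d}{dx}$, $B=$ multiplication by $x$, acting on polynomials, which satisfies $AB-BA=I$ (no finite matrices can, by the trace argument); a sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x')
p = x**5 + 2*x**3 - x + 4            # arbitrary test polynomial

def A(q):                            # A = d/dx
    return sp.diff(q, x)

def B(q):                            # B = multiplication by x
    return x * q

# sanity check: (AB - BA) p = p, i.e. AB - BA = I on polynomials
assert sp.expand(A(B(p)) - B(A(p)) - p) == 0

m = 4
lhs = sp.diff(B(p), x, m) - B(sp.diff(p, x, m))   # (A^m B - B A^m) p
rhs = m * sp.diff(p, x, m - 1)                    # m A^{m-1} p
assert sp.expand(lhs - rhs) == 0
```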
355,296
<p>How can we evaluate $$\displaystyle\int \frac{x^2 + x+3}{x^2+2x+5} dx$$ </p> <p>To be honest, I'm embarrassed. I decomposed it and know what the answer should be but<br> I can't get the right answer. </p>
lab bhattacharjee
33,337
<p>$$I=\int \frac{x^2 + x+3}{x^2+2x+5} dx=\int \frac{x(x+1)+3}{(x+1)^2+2^2} dx$$ </p> <p>Putting $x+1=2\tan\theta,dx=2\sec^2\theta d\theta,$</p> <p>$$I= \int\frac{2\tan\theta(2\tan\theta-1)+3}{4\sec^2\theta}\,2\sec^2\theta\, d\theta=\frac12\int\left(2\tan\theta(2\tan\theta-1)+3\right)d\theta$$</p> <p>$$2\tan\theta(2\tan\theta-1)+3=4\tan^2\theta-2\tan\theta+3=4\sec^2\theta-2\tan\theta-1$$</p> <p>$$\text{So, } I=\frac12\int(4\sec^2\theta-2\tan\theta-1)d\theta=2\tan\theta-\log|\sec\theta|-\frac{\theta}{2}+C $$ where $C$ is an arbitrary constant of integration.</p> <p>As $\tan\theta=\frac{x+1}2,$</p> <p>$\sec^2\theta =1+\left(\frac{x+1}2\right)^2=\frac{x^2+2x+5}4$</p> <p>$\theta=\arctan \left(\frac{x+1}2\right)$</p>
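Back-substituting $\theta=\arctan\frac{x+1}{2}$, and using $\int\tan\theta\,d\theta=\log|\sec\theta|$, the antiderivative in terms of $x$ is $F(x)=(x+1)-\frac12\log\frac{x^2+2x+5}{4}-\frac12\arctan\frac{x+1}{2}$; differentiating confirms it (a sketch assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (x**2 + x + 3) / (x**2 + 2*x + 5)
# antiderivative written back in terms of x, using
# 2*tan(theta) = x + 1  and  sec^2(theta) = (x^2 + 2x + 5)/4
F = (x + 1) - sp.Rational(1, 2) * sp.log((x**2 + 2*x + 5) / 4) \
    - sp.Rational(1, 2) * sp.atan((x + 1) / 2)
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```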
109,423
<p>Let $f$ be an isometry (<em>i.e</em> a diffeomorphism which preserves the Riemannian metrics) between Riemannian manifolds $(M,g)$ and $(N,h).$ </p> <p>One can argue that $f$ also preserves the induced metrics $d_1, d_2$ on $M, N$ from $g, h$ resp. that is, $d_1(x,y)=d_2(f(x),f(y))$ for $x,y \in M.$ Then, it's easy to show that $f$ sends geodesics on $M$ to geodesics on $N,$ using the length minimizing property of geodesics and that $f$ is distance-preserving. </p> <p>My question, </p> <blockquote> <p>Is it possible to derive the result without using the distance-preserving property of isometries, by merely the definition?</p> </blockquote> <p>What I have found so far;</p> <p>Let $\gamma : I \to M$ be a geodesic on $M$, i.e. $\frac{D}{dt}(\frac{d\gamma}{dt})=0,$ where $\frac{D}{dt}$ is the covariant derivative and $ \frac{d\gamma}{dt}:=d\gamma(\frac{d}{dt}).$ Let $t_0 \in I,$ we have to show that $\frac{D}{dt}(\frac{d(f \circ \gamma)}{dt})=0$ at $t=t_0,$ or $\frac{D}{dt}(df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0}))=0.$</p> <p>We also know that </p> <p>$$ \langle \frac{d \gamma}{dt}|_{t=t_0},\frac{d \gamma}{dt}|_{t=t_0}\rangle_{\gamma(t_0)}= \langle df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0}), df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0})\rangle_{f(\gamma(t_0))}.$$</p> <p>Since $\frac{d}{dt} \langle \frac{d \gamma}{dt}|_{t=t_0},\frac{d \gamma}{dt}|_{t=t_0}\rangle_{\gamma(t_0)}=2 \langle \frac{D}{dt}(\frac{d \gamma}{dt}|_{t=t_0}),\frac{d \gamma}{dt}|_{t=t_0}\rangle_{\gamma(t_0)}=0,$ therefore</p> <p>$$\frac{d}{dt} \langle df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0}), df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0})\rangle_{f(\gamma(t_0))}=$$</p> <p>$$2\langle \frac{D}{dt}(df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0})), df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0})\rangle _{f(\gamma(t_0))}=0.$$</p> <p>How can I conclude from $\langle\frac{D}{dt}(df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0})), df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0}) \rangle 
_{f(\gamma(t_0))}=0$ that $\frac{D}{dt}(df_{\gamma(t_0)}(\frac{d \gamma}{dt}|_{t=t_0}))=0?$</p>
Yuri Vyatkin
2,002
<p>Your calculation looks like an attempt to prove the naturality of the Levi-Civita connection, the fact that @Zhen Lin implicitly points to. In the settings of the question it can be stated as $$ \nabla^g_X{Y}=f^* \left( \nabla^{(f^{-1})^* g}_{\operatorname{d}f(X)} \operatorname{d}f(Y) \right) $$</p> <p>Notice also that in fact you are using two different connections: one for vector fields along $\gamma \colon I \rightarrow M$ induced from $\nabla^g$ on $M$, and another one for vector fields along $f \circ \gamma \colon I \rightarrow N$ induced from $\nabla^h$ on $N$. Due to the naturality property they agree, and it may be helpful to distinguish $D^g_t:=\nabla^g_{\frac{d}{dt}{\gamma}}$ and $D^h_t:=\nabla^h_{\frac{d}{dt}(f \circ \gamma)}$ in the present calculation. Indeed, using $$\frac{\operatorname{d}}{\operatorname{d}t}(f\circ \gamma)=\operatorname{d}f(\frac{\operatorname{d}}{\operatorname{d}t}\gamma) $$ we get $$ D^h_t{\frac{\operatorname{d}(f\circ\gamma)}{\operatorname{d}t}} = f_* \left( D^g_t{\frac{\operatorname{d}\gamma}{\operatorname{d}t}} \right) =0 $$</p>
1,073,681
<p>Given two half-integers $a,b\in\mathbb Z+\frac 1 2$, the integers nearest to each is given by $N_a=\{a-0.5,\ a+0.5\}$ and $N_b=\{b-0.5,\ b+0.5\}$.</p> <p>Is there a general method to find, for the two given half-integers, the values $n\in N_a,m\in N_b$ where $\left|nm-ab\right|$ is minimal?</p> <hr> <p>The motivation for this post came when I considered $1.5\times1.5=2.25$. Intuition told me that I could round either (or both) number either way and the result would be the same. Such a symmetry in fact does not exist. After doing a few other examples I simply could not find a pattern.</p>
vadim123
73,324
<p>We compute $$nm=ab\pm \frac{a}{2} \pm \frac{b}{2} \pm \frac{1}{4}$$ and hence we wish to minimize $$|nm-ab|=|\pm \frac{a}{2} \pm \frac{b}{2} \pm \frac{1}{4}|$$ In this expression, the three $\pm$ are not all freely chosen; there must be an even number of $-$'s chosen. Let's assume without loss that $a\ge b\ge 0$ (if $a&lt;b$, swap the roles of $a,b$). For convenience, set $a'=\frac{a}{2}, b'=\frac{b}{2}$, with $a'\ge b'$. Now there are four cases to consider: $$|a'+b'+.25|,~ |a'-b'-.25|,~ |-a'+b'-.25|,~ |-a'-b'+.25|$$ Of these, it's clear that the first and last will never be smaller than the others. Now, if $a'=b'$ then the middle two agree; otherwise $|a'-b'-.25|$ will be the smallest. Hence, we have proved that the second one will always be smallest or tied for smallest. This corresponds to $$n=a-0.5, ~~m=b+0.5$$ (keeping in mind that $a\ge b$). </p> <p>I leave the cases of one (or two) of $a,b$ negative open.</p>
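The conclusion ($n=a-0.5$, $m=b+0.5$ for $a\ge b\ge 0$) is cheap to verify by brute force over all four roundings; a sketch using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

HALF = Fraction(1, 2)

def best_rounding_error(a, b):
    # try all four ways of rounding the half-integers a, b to nearest integers
    combos = [(a + s, b + t) for s, t in product((-HALF, HALF), repeat=2)]
    return min(abs(n * m - a * b) for n, m in combos)

# claim: for a >= b >= 0, round the larger down and the smaller up
for i in range(20):
    for j in range(i + 1):
        a, b = i + HALF, j + HALF          # half-integers with a >= b >= 0
        claimed = abs((a - HALF) * (b + HALF) - a * b)
        assert claimed == best_rounding_error(a, b)
```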
1,073,681
<p>Given two half-integers $a,b\in\mathbb Z+\frac 1 2$, the integers nearest to each is given by $N_a=\{a-0.5,\ a+0.5\}$ and $N_b=\{b-0.5,\ b+0.5\}$.</p> <p>Is there a general method to find, for the two given half-integers, the values $n\in N_a,m\in N_b$ where $\left|nm-ab\right|$ is minimal?</p> <hr> <p>The motivation for this post came when I considered $1.5\times1.5=2.25$. Intuition told me that I could round either (or both) number either way and the result would be the same. Such a symmetry in fact does not exist. After doing a few other examples I simply could not find a pattern.</p>
Heimdall
191,910
<p>What you're trying to do is to find a number</p> <p>$$(a\pm{1\over2})(b\pm{1\over2})$$</p> <p>that's the closest to $ab$.</p> <p>For a start, let's assume $a$ and $b$ are both positive.</p> <p>If the two signs are both $+$ or both $-$, it's easy to see you end up further from $ab$ than with one $+$ and one $-$. To see which factor should get the $+$ and which the $-$, calculate both products: since $(a+{1\over2})(b-{1\over2})=ab-{a-b\over2}-{1\over4}$ and $(a-{1\over2})(b+{1\over2})=ab+{a-b\over2}-{1\over4}$, they are the same distance from $ab-{1\over4}$. One is above it and one below (unless $a=b$, in which case they both equal $ab-{1\over4}$). Since $ab-{1\over4}$ lies below $ab$, the larger of the two products is then closer to $ab$; with $a\ge b$ that is $(a-{1\over2})(b+{1\over2})$.</p> <p>If one (or both) is negative, work with $|a|$ and $|b|$ first. It's easy to figure out the rest.</p>
4,376,076
<p>i) the matrix P has only real elements</p> <p>ii) 2+i is an eigenvalue of matrix P</p> <p>I got that the zeros have to be <span class="math-container">$(x-(2+i))(x-(2-i))$</span>, which is equal to <span class="math-container">$(x-2)^2 -i^2$</span>, so the characteristic polynomial is equal to <span class="math-container">$x^2-4x+4+1 =x^2-4x+5$</span>. How can I find a matrix with this characteristic polynomial?</p>
Herrpeter
190,056
<p>So the zeros are <span class="math-container">$(x-(2+i))(x-(2-i))$</span> =</p> <p><span class="math-container">$$=((x-2)+i)((x-2)-i)$$</span> <span class="math-container">$$=(x-2)^2-i^2 = x^2-4x+4+1=x^2-4x+5$$</span></p> <p>So we need the characteristic polynomial to come out as <span class="math-container">$(2-x)(2-x)+1=x^2-4x+5$</span>, where the extra <span class="math-container">$+1$</span> is supplied by off-diagonal entries <span class="math-container">$1$</span> and <span class="math-container">$-1$</span>, which gives us the determinant:</p> <p><span class="math-container">\begin{vmatrix} 2-x &amp; 1 \\ -1 &amp; 2-x \end{vmatrix}</span></p> <p>the corresponding matrix is: <span class="math-container">\begin{pmatrix} 2 &amp; 1\\ -1 &amp; 2 \end{pmatrix}</span></p> <p>or the determinant</p> <p><span class="math-container">\begin{vmatrix} 2-x &amp; -1 \\ 1 &amp; 2-x \end{vmatrix}</span></p> <p>the corresponding matrix is: <span class="math-container">\begin{pmatrix} 2 &amp; -1\\ 1 &amp; 2 \end{pmatrix}</span></p>
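Either candidate matrix can be checked to have eigenvalues $2\pm i$ (equivalently, trace $4$ and determinant $5$); a quick numeric sketch assuming numpy:

```python
import numpy as np

for M in (np.array([[2.0, 1.0], [-1.0, 2.0]]),
          np.array([[2.0, -1.0], [1.0, 2.0]])):
    eigs = np.linalg.eigvals(M)
    assert np.allclose(sorted(eigs, key=lambda z: z.imag), [2 - 1j, 2 + 1j])
    # characteristic polynomial x^2 - 4x + 5  <=>  trace 4, determinant 5
    assert np.isclose(np.trace(M), 4.0) and np.isclose(np.linalg.det(M), 5.0)
```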
2,265,782
<p>Number of twenty-one digit numbers such that the product of the digits is divisible by $21$.</p> <p>Since the product is divisible by $21$, the number should contain the digits $3,6,7,9$. But I am unable to decide how to proceed... can I have a hint?</p>
Bram28
256,001
<p>Here is a method called the 'short truth-table' method. As the name implies, it is not a full truth-table.</p> <p>The idea is that you try to make the statement false ... and if you find that you cannot do that, then that means that the statement <em>must</em> be true, i.e. it is a tautology.</p> <p>OK, so let's see what it would take for $(p \land q) \rightarrow p$ to be False. Well, a conditional can only be false when the antecedent is true, and the consequent is false. So, in order for $(p \land q) \rightarrow p$ to be false, we need that $p \land q$ is True, and that $p$ is False. OK, but for $p \land q$ to be True, both $p$ and $q$ need to be True. But that means we have a problem, because now we have that $p$ must both be True and False. So ... we conclude that it is impossible for $(p \land q) \rightarrow p$ to be False ... meaning it is a tautology.</p> <p>Here is that same method, but simply annotating the statement (the indices show the order in which I place the truth-values, and the red shows the contradiction):</p> <p>\begin{array}{ccccccc} ( &amp; p &amp; \land &amp; q &amp; ) &amp; \rightarrow &amp; p\\ \hline &amp; \color{red}T_4 &amp; T_2 &amp; T_5 &amp;&amp; F_1 &amp; \color{red}F_3\\ \end{array}</p> <p>Another thing you can do is a formal proof: if you can derive $(p \land q) \rightarrow p$ from no premises at all, then that means the statement is a tautology. Here is a proof using a proof system called Fitch (there are many different proof systems):</p> <p><a href="https://i.stack.imgur.com/fY9B7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fY9B7.png" alt="enter image description here"></a></p>
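The short truth-table argument can also be confirmed by the full (brute-force) truth table, which is tiny here; a sketch:

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# full truth table: (p AND q) -> p holds under all four assignments, a tautology
assert all(implies(p and q, p) for p, q in product((False, True), repeat=2))
# by contrast, p -> (p AND q) fails when p is True and q is False
assert not all(implies(p, p and q) for p, q in product((False, True), repeat=2))
```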
8,997
<p>I have a set of data points in two columns in a spreadsheet (OpenOffice Calc):</p> <p><img src="https://i.stack.imgur.com/IPNz9.png" alt="enter image description here"></p> <p>I would like to get these into <em>Mathematica</em> in this format:</p> <pre><code>data = {{1, 3.3}, {2, 5.6}, {3, 7.1}, {4, 11.4}, {5, 14.8}, {6, 18.3}} </code></pre> <p>I have Googled for this, but what I find is about importing the entire document, which seems like overkill. Is there a way to kind of cut and paste those two columns into <em>Mathematica</em>? </p>
Mr.Wizard
121
<p>Copying <a href="https://superuser.com/a/371320/70766">my answer from SuperUser:</a></p> <p>I am not familiar with the Excel clipboard format, but there is a lovely suite of tools for pasting tabular data in the form of a Palette. See <a href="http://szhorvat.net/pelican/pasting-tabular-data-from-the-web.html" rel="nofollow noreferrer">this page</a> for the code. When you evaluate that block of code, you will get a Palette with three buttons for different formats. I think there is a good probability that one of the three will do what you want.</p> <p>You can save the Palette to your user <code>Mathematica\SystemFiles\FrontEnd\Palettes</code> directory and it will appear in the <code>Palettes</code> menu.</p> <hr> <p><strong>How to paste from Excel in practice</strong></p> <p>An important thing to know about the Windows clipboard is that it can hold data in several formats simultaneously. When you copy from Excel, the data gets copied in several formats, so it can be pasted into many different applications. Unfortunately, when you paste into Mathematica, the wrong format gets automatically chosen. It is not possible to remedy this from Mathematica directly.</p> <p>The workaround is to first paste into <a href="http://en.wikipedia.org/wiki/Notepad_%28software%29" rel="nofollow noreferrer">Notepad</a>, select all the text again (<kbd>CTRL</kbd>-<kbd>A</kbd>), then re-copy it in <a href="http://en.wikipedia.org/wiki/Plain_text" rel="nofollow noreferrer">plain text</a> format only. Now you can paste it into Mathematica using the palette's TSV or Table buttons.</p>
481,167
<p>Let $V$ be a $\mathbb{R}$-vector space. Let $\Phi:V^n\to\mathbb{R}$ be a multilinear symmetric operator.</p> <p>Is it true, and how do we show, that for any $v_1,\ldots,v_n\in V$, we have:</p> <p>$$\Phi[v_1,\ldots,v_n]=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1&lt;\cdots&lt;j_k\leq n} (-1)^{n-k}\phi (v_{j_1}+\cdots+v_{j_k}),$$ where $\phi(v)=\Phi(v,\ldots,v)$.</p> <p>My question comes from the fact that I have seen this formula when I was reading about mixed volume, and also when I was reading about mixed Monge-Ampère measure. The setting was not exactly the one of a vector space $V$, but I think the formula is true here, and I am interested in having this property shown outside the specific context of Monge-Ampère measures or volumes. I have done some work in the other direction, <em>i.e.</em> starting from an operator $\phi:V\to\mathbb{R}$ satisfying some condition and obtaining a multilinear operator $\Phi$; below are the results I have seen in this direction.</p> <p>I already know that if $\phi':V\to\mathbb{R}$ is such that for any $v_1,\ldots,v_n\in V$, $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ is a homogeneous polynomial of degree $n$ in the variables $\lambda_i$, then there exists a unique multilinear symmetric operator $\Phi':V^n\to\mathbb{R}$ such that $\Phi'(v,\ldots,v)=\phi'(v)$ for any $v\in V$.
Moreover $\Phi'(v_1,\ldots,v_n)$ is the coefficient of the symmetric monomial $\lambda_1\cdots\lambda_n$ in $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ (see <a href="https://math.stackexchange.com/questions/469342/symmetric-multilinear-form-from-an-homogenous-form">Symmetric multilinear form from an homogenous form.</a>).</p> <p>I also know that if $\phi'(\lambda v)=\lambda^n \phi'(v)$ and we define $$\Phi''(v_1,\ldots,v_n)=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1&lt;\cdots&lt;j_k\leq n} (-1)^{n-k}\phi' (v_{j_1}+\cdots+v_{j_k}),$$ then $\Phi''(v,\ldots,v)=\frac{1}{n!} \sum_{k=1}^n (-1)^{n-k} \binom{n}{k} k^n \phi'(v)=\phi'(v)$ (see <a href="https://math.stackexchange.com/questions/465172/show-this-equality-the-factorial-as-an-alternate-sum-with-binomial-coefficients">Show this equality (The factorial as an alternate sum with binomial coefficients).</a>). It is clear that $\Phi''$ is symmetric, but I don't know if $\Phi''$ is multilinear.</p> <p>Formula for $n=2$: $$\Phi[v_1,v_2]=\frac12 [\phi(v_1+v_2)-\phi(v_1)-\phi(v_2)].$$</p> <p>Formula for $n=3$: $$\Phi[v_1,v_2,v_3]=\frac16 [\phi(v_1+v_2+v_3)-\phi(v_1+v_2)-\phi(v_1+v_3)-\phi(v_2+v_3)+\phi(v_1)+\phi(v_2)+\phi(v_3)].$$</p>
Ken
544,921
<p>(This is just the summary of the answers by Ewan and Anthony, but a lot simpler.)</p> <p>Let <span class="math-container">$S_n$</span> denote the symmetric group. Also, let us write <span class="math-container">$X=\{1,\dots,n\}$</span>. We compute <span class="math-container">$$\begin{eqnarray}\sum_{k=1}^{n}\sum_{1\leq j_{1}&lt;\cdots&lt;j_{k}\leq n}(-1)^{k}\phi(v_{j_{1}}+\cdots+v_{j_{k}})&amp;=&amp;\sum_{A\subset X}(-1)^{|A|}\phi(\sum_{a\in A}v_{a})\\&amp;=&amp;\sum_{A\subset X}(-1)^{|A|}\sum_{f:X\to A}\Phi(v_{f(1)},\dots,v_{f(n)})\\&amp;=&amp;\sum_{f:X\to X}\Phi(v_{f(1)},\dots,v_{f(n)})\sum_{f(X)\subset A\subset X}(-1)^{|A|}\\&amp;=&amp;\sum_{f\in S_{n}}\Phi(v_{f(1)},\dots,v_{f(n)})(-1)^{n}\\&amp;=&amp;(-1)^{n}n!\Phi(v_{1},\dots,v_{n}),\end{eqnarray}$$</span> where the fourth equality follows from the binomial formula.</p>
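<p>The identity is easy to sanity-check numerically. Below is a small Python sketch (my own addition, not from the original posts) that applies the alternating-sum formula to the diagonal $\phi(t)=t^3$ of the symmetric trilinear form $\Phi(u,v,w)=uvw$ on $\mathbb R$, and to a quadratic form for $n=2$:</p>

```python
from itertools import combinations
from math import factorial, isclose

def polarize(phi, vs):
    """Recover Phi(v_1,...,v_n) from the diagonal phi via the
    alternating-sum (polarization) formula in the question."""
    n = len(vs)
    total = 0.0
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            total += (-1) ** (n - k) * phi(sum(vs[j] for j in idx))
    return total / factorial(n)

# n = 3: Phi(u, v, w) = u*v*w has diagonal phi(t) = t**3.
phi = lambda t: t ** 3
a, b, c = 1.5, -2.0, 0.25
assert isclose(polarize(phi, [a, b, c]), a * b * c)

# n = 2: Phi(u, v) = u*v has diagonal phi(t) = t**2.
assert isclose(polarize(lambda t: t ** 2, [3.0, 7.0]), 21.0)
```

<p>For vector-valued $v_i$ the same function works once $\phi$ accepts vectors and the sums of the $v_{j}$ are formed with vector addition.</p>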
3,027,450
<p>I'm learning about group homomorphisms and I'm confused about what the <span class="math-container">$\phi$</span> transformation is exactly. </p> <p>If we have some group homomorphism <span class="math-container">$\phi : G\rightarrow H$</span> what exactly does <span class="math-container">$\phi(G)$</span> mean? </p> <p>Thanks for your time.</p>
Robert Israel
8,508
<p><span class="math-container">$\phi(G)$</span> is the image of <span class="math-container">$G$</span> under the mapping <span class="math-container">$\phi$</span>, i.e. the set of all <span class="math-container">$\phi(x)$</span> for <span class="math-container">$x \in G$</span>.</p>
3,027,450
<p>I'm learning about group homomorphisms and I'm confused about what the <span class="math-container">$\phi$</span> transformation is exactly. </p> <p>If we have some group homomorphism <span class="math-container">$\phi : G\rightarrow H$</span> what exactly does <span class="math-container">$\phi(G)$</span> mean? </p> <p>Thanks for your time.</p>
Alekos Robotis
252,284
<p><span class="math-container">$\phi(G)$</span> denotes the image of the group <span class="math-container">$G$</span>. This is actually a purely set-theoretic definition. Given a map of sets <span class="math-container">$f:X\to Y$</span>, <span class="math-container">$f(X)$</span> denotes the image of <span class="math-container">$f$</span>. That is, <span class="math-container">$f(X)\subseteq Y$</span> is <span class="math-container">$\{y\in Y: y=f(x)\:\text{for some}\:x\in X\}$</span>.</p>
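<p>A concrete toy computation may help (my own illustration; the particular map and modulus are assumptions, not from the posts above). Take $\phi:\mathbb Z_{12}\to\mathbb Z_{12}$, $x\mapsto 3x \bmod 12$, a homomorphism of additive groups:</p>

```python
def image(phi, G):
    """phi(G): the set of all phi(x) for x in G."""
    return {phi(x) for x in G}

G = range(12)                  # Z/12Z under addition
phi = lambda x: (3 * x) % 12   # a group homomorphism

print(sorted(image(phi, G)))   # [0, 3, 6, 9]

# phi(G) is a subgroup of the target: closed under the operation.
img = image(phi, G)
assert all((a + b) % 12 in img for a in img for b in img)
```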
1,109,552
<p>So the norm for an element $\alpha = a + b\sqrt{-5}$ in $\mathbb{Z}[\sqrt{-5}]$ is defined as $N(\alpha) = a^2 + 5b^2$, and so I argue by contradiction: assume there exists $\alpha$ such that $N(\alpha) = 2$, so $a^2+5b^2 = 2$; however, since $a^2$ and $5b^2$ are nonnegative integers, we must have $b=0$ and hence $a^2=2$, i.e. $a=\sqrt{2}$; but $a$ must be an integer, so no such $\alpha$ exists. The same goes for $3$.</p> <p>I already proved that </p> <ol> <li>$N(\alpha\beta) = N(\alpha)N(\beta)$ for all $\alpha,\beta\in\mathbb{Z}[\sqrt{-5}]$.</li> <li>if $\alpha\mid\beta$ in $\mathbb{Z}[\sqrt{-5}]$, then $N(\alpha)\mid N(\beta)$ in $\mathbb{Z}$.</li> <li>$\alpha\in\mathbb{Z}[\sqrt{-5}]$ is a unit if and only if $N(\alpha)=1$.</li> <li>Show that there are no elements in $\mathbb{Z}[\sqrt{-5}]$ with $N(\alpha)=2$ or $N(\alpha)=3$. (I proved it above)</li> </ol> <p>Now I need to prove that $2$, $3$, $1+ \sqrt{-5}$, and $1-\sqrt{-5}$ are irreducible.</p> <p>So I also argue by contradiction: assume $1 + \sqrt{-5}$ is reducible; then there must exist non-unit elements $\alpha,\beta \in \mathbb{Z}[\sqrt{-5}]$ such that $\alpha\beta = 1 + \sqrt{-5}$, and so $N(\alpha\beta) =N(\alpha)N(\beta)= N(1 + \sqrt{-5}) = 6$. But we already know that $N(\alpha) \neq 2$ or $3$, and so $N(\alpha) = 6$ and $N(\beta) = 1$ or vice versa; in either case this contradicts the fact that $\alpha$ and $\beta$ are both non-units. I just want to make sure I am on the right track here. And how can I prove that $2$, $3$, $1+ \sqrt{-5}$, and $1-\sqrt{-5}$ are not associate to each other?</p>
Robert Soupe
149,436
<p>There are only two units in $\mathbb{Z}[\sqrt{-5}]$: 1 and $-1$. To obtain the associate of a number, you multiply that number by a unit other than 1, and in this domain there is only one choice: $-1$. So, for example, the associate of 2 is $-2$, the associate of 3 is $-3$, the associate of $1 - \sqrt{-5}$ is $-1 + \sqrt{-5}$ and the associate of $1 + \sqrt{-5}$ is $-1 - \sqrt{-5}$.</p> <p>You've already proven 2, 3, $1 - \sqrt{-5}$ and $1 + \sqrt{-5}$ are irreducible. The arguments by norm were sufficient. However, in some other domains, like $\mathbb{Z}[\sqrt{10}]$, it becomes very important to watch out for associates.</p> <p>I would like to close with some examples of "composite" numbers in $\mathbb{Z}[\sqrt{-5}]$:</p> <ul> <li>$2 \sqrt{-5}$</li> <li>$5 + 5 \sqrt{-5}$</li> <li>21</li> <li>29</li> </ul>
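<p>These norm computations are easy to check by machine. The following sketch (my own addition) encodes $a+b\sqrt{-5}$ as a pair of integers and verifies the facts used above: no element has norm $2$ or $3$, the norm is multiplicative, and $6$ factors in two genuinely different ways:</p>

```python
from itertools import product

def norm(a, b):          # N(a + b*sqrt(-5)) = a^2 + 5 b^2
    return a * a + 5 * b * b

def mul(x, y):           # multiply elements a + b*sqrt(-5)
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

# No element has norm 2 or 3.  A small search range suffices:
# a^2 + 5 b^2 <= 3 already forces |a| <= 1 and b = 0.
norms = {norm(a, b) for a, b in product(range(-3, 4), repeat=2)}
assert 2 not in norms and 3 not in norms

# The norm is multiplicative: N(xy) = N(x) N(y).
x, y = (1, 2), (3, -1)
assert norm(*mul(x, y)) == norm(*x) * norm(*y)

# Two genuinely different factorizations of 6:
assert mul((2, 0), (3, 0)) == (6, 0)     # 2 * 3
assert mul((1, 1), (1, -1)) == (6, 0)    # (1+sqrt(-5))(1-sqrt(-5))
```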
3,472,151
<p>I find two main sources on how to compute the half-derivative of <span class="math-container">$e^x$</span>. Both make sense to me, but they give different answers.</p> <p>First, people argue that <span class="math-container">$$\begin{align} \frac{\mathrm{d}}{\mathrm{d} x} e^{k x} &amp;= k e^{k x} \\[4pt] \frac{\mathrm{d}^2}{\mathrm{d} x^2} e^{k x} &amp;= k^2 e^{k x} \\[4pt] \frac{\mathrm{d}^n}{\mathrm{d} x^n} e^{k x} &amp;= k^n e^{k x} \end{align}$$</span></p> <p>Therefore, it seems very reasonable that <span class="math-container">$$\frac{\mathrm{d}^{1/2}}{\mathrm{d} x^{1/2}} e^{k x} = \sqrt{k} e^{k x}$$</span></p> <p>But this is not what the usual formula gives: <span class="math-container">$$ \frac{\mathrm{d}^{1/2}}{\mathrm{d} x^{1/2}} e^{k x} = \frac{1}{\Gamma (1/2)} \frac{\mathrm{d}}{\mathrm{d} x} \int \limits_0^x \mathrm{d} t \frac{e^{k t}}{\sqrt{x-t}} = \frac{1}{\Gamma (1/2)} \frac{\mathrm{d}}{\mathrm{d} x} e^{k x} \int \limits_0^x \mathrm{d} u \frac{e^{- k u}}{\sqrt{u}} = \\ = \frac{1}{\Gamma (1/2)} \frac{\mathrm{d}}{\mathrm{d} x} \frac{2 e^{k x}}{\sqrt{k}} \int \limits_0^{\sqrt{k x}} \mathrm{d} s \, e^{-s^2} $$</span></p> <p>We can already see that this is not equal to <span class="math-container">$\sqrt{k} e^{k x}$</span>.</p> <p>So who is right?</p> <p>Why do we use the latter formula in almost all cases but somehow settle for the simpler formula when it comes to the exponential?</p> <hr> <p>Note that both satisfy that if we apply the half-derivative twice, we get the usual first derivative. (In the first case, it's simply because <span class="math-container">$\sqrt{k} \sqrt{k} = k$</span>; in the second case, there's a proof on Wiki using the properties of beta and gamma functions - plus I verified this numerically even though I couldn't express the integrals in a closed-form.)
</p> <p>I also have a hard time accepting this second, complicated formula, mainly because any integer derivative of the exponential gives back the exponential, but for the half-derivative we get this weird monstrosity. On the other hand, it should be consistent with the formulas for the half-derivatives of all powers of <span class="math-container">$x$</span> when it's put together in the infinite series for <span class="math-container">$e^{k x}$</span>.</p> <p>Can anyone shed some light on this issue for me, please?</p>
Conifold
152,568
<p>Both are right. Think of it like this: what is "the" square root of <span class="math-container">$i$</span>? Even for <span class="math-container">$1$</span> there are two of them, but we forget because of the habit to take the positive root. For <span class="math-container">$i$</span> there is no habit available. Things get even more plural when we ask for square roots of <span class="math-container">$I$</span>, the <span class="math-container">$n\times n$</span> identity matrix. There are <span class="math-container">$2^n$</span> square roots just among diagonal matrices, and if <span class="math-container">$A$</span> is a square root so is <span class="math-container">$SAS^{-1}$</span> for <em>any</em> invertible matrix <span class="math-container">$S$</span>. So we should not expect that square root of <span class="math-container">$D=\frac{d}{d x}$</span> is a single something either.</p> <p>A better analogy here is not derivative but antiderivative, which is, after all <span class="math-container">$D^{-1}$</span>. There is no "the" antiderivative, there is always a family of them. Again, we tend to overlook that, because they only differ by constants of integration. And we <em>could</em> force uniqueness by declaring the definite integral <span class="math-container">$\int_0^xf(x)\,dx$</span> to be "the" antiderivative of <span class="math-container">$f(x)$</span>. But if we do "the" antiderivative of <span class="math-container">$e^x$</span> will be not our favorite <span class="math-container">$e^x$</span>, but <span class="math-container">$e^x-1$</span>. To get <span class="math-container">$e^x$</span>, we should choose <span class="math-container">$\int_{-\infty}^xf(x)\,dx$</span> to be "the" antiderivative instead, but then this choice won't work for functions that do not quickly vanish at <span class="math-container">$-\infty$</span>. There just is no perfect choice. 
Just as antiderivatives need a reference value to become definitive (<span class="math-container">$x_0=0$</span> or <span class="math-container">$-\infty$</span> above) so do fractional derivatives more generally. The case of the derivative, and its integer powers, is an exception rather than a rule.</p> <p>For half-derivatives <span class="math-container">$D^{\frac12}$</span> the integral is a bit more involved, in one of the forms</p> <p><span class="math-container">$$D^{\frac12}f(x)=\frac{1}{\Gamma\left(\frac{1}{2}\right)}\frac{d}{dx}\int \frac{f(u)}{(x-u)^\frac12}du,$$</span></p> <p>I left out the limits of integration. As shown in <a href="https://www.mathpages.com/home/kmath616/kmath616.htm" rel="nofollow noreferrer">Fractional Calculus on Mathpages</a>, if we choose the limits from <span class="math-container">$0$</span> to <span class="math-container">$x$</span> the answer is <span class="math-container">$e^x\left(\operatorname{erf}\sqrt{x} + \frac{e^{-x}}{\sqrt{\pi x}}\right)$</span>. On the other hand, with the already familiar choice from <span class="math-container">$-\infty$</span> to <span class="math-container">$x$</span> the answer is the desired <span class="math-container">$e^x$</span>. Moreover, then <span class="math-container">$D^\nu(e^{kx})=k^\nu e^{kx}$</span> for any <span class="math-container">$k$</span> and <span class="math-container">$\nu$</span>. This is easier to check by using the inverted formula:</p> <p><span class="math-container">$$D^{-\frac12}f(x)=\frac{1}{\Gamma\left(\frac{1}{2}\right)}\int_{-\infty}^x \frac{f(u)}{(x-u)^\frac12}du.$$</span></p> <p>The difference in the answers is more visible now, because it is not by an additive constant of integration, but the principle is the same - we get a whole <span class="math-container">$1$</span>-parameter family of half-derivatives.</p>
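<p>The $(-\infty, x]$ convention can be spot-checked numerically (my own illustration, not part of the original answer). The sketch below evaluates $D^{-\frac12}e^{kx}=\frac{1}{\Gamma(1/2)}\int_{-\infty}^x \frac{e^{ku}}{\sqrt{x-u}}\,du$ after the substitution $u = x - s^2$, which removes the endpoint singularity, and confirms the scaling rule $D^{-\frac12}e^{kx}=k^{-1/2}e^{kx}$:</p>

```python
import math

def half_antiderivative_exp(k, x, h=1e-3, smax=12.0):
    """D^{-1/2} e^{kx} with base point -infinity:
    (1/Gamma(1/2)) * integral_{-inf}^x e^{ku} / sqrt(x-u) du.
    The substitution u = x - s**2 turns this into
    (2 e^{kx} / sqrt(pi)) * integral_0^inf e^{-k s**2} ds,
    approximated here with the midpoint rule."""
    total = 0.0
    s = h / 2
    while s < smax:
        total += math.exp(-k * s * s) * h
        s += h
    return 2.0 * math.exp(k * x) / math.sqrt(math.pi) * total

for k in (1.0, 2.0, 5.0):
    expected = math.exp(k * 0.7) / math.sqrt(k)   # k^{-1/2} e^{kx}
    got = half_antiderivative_exp(k, 0.7)
    assert abs(got - expected) / expected < 1e-6, (k, got, expected)
```

<p>Applying the corresponding $D^{\frac12}$ once more recovers the derivative, consistent with $D^\nu(e^{kx})=k^\nu e^{kx}$.</p>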
526,820
<p>How do I integrate the inner integral on the 2nd line? </p> <p><img src="https://i.stack.imgur.com/uIxQX.png" alt="enter image description here"></p> <hr> <p>$$\int^\infty_{-\infty} x \exp\{ -\frac{1}{2(1-\rho^2)} (x-y\rho)^2 \} \, dx$$</p> <p>I know I can use integration by substitution, let $u = \frac{x-y\rho}{\sqrt{1-\rho^2}}$ resulting in</p> <p>$$\sqrt{1-\rho^2}\int^{\infty}_{-\infty} [u\sqrt{1-\rho^2} + y\rho] e^{-u^2/2} \; du$$</p> <p>That's the 3rd line in the image, but how do I proceed? </p>
Felix Marin
85,343
<p>\begin{align} &amp;\phantom{=\,\,}\int^\infty_{-\infty}x \exp\left(% -\,\frac{\left[x - y\rho\right]^{2}}{2\left[1 - \rho^{2}\right]} \right)\,dx \tag{1} \\[3mm]&amp;= \int^\infty_{-\infty}\left(x + y\rho\right) \exp\left(-\,\frac{x^{2}}{2\left[1 - \rho^{2}\right]}\right) \, dx \tag{2} \\[3mm]&amp;= y\rho\,\sqrt{2\pi\,}\,\sqrt{1 - \rho^{2}\,}\ \underbrace{\quad\left[% {1 \over \sqrt{2\pi\,}\,\sqrt{1 - \rho^{2}\,}}\int^\infty_{-\infty} \exp\left(-\,\frac{x^{2}}{2(1-\rho^2)}\right) \, dx \right]\quad}_{{\LARGE =\ 1}} \tag{3} \\[3mm]&amp;= \color{#ff0000}{\large y\rho\,\,\sqrt{\,2\pi\left(1 - \rho^{2}\right)\,\,\,}} \end{align}</p>
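<p>A quick numerical check of the final result (my own addition, pure Python, with arbitrarily chosen values of $y$ and $\rho$):</p>

```python
import math

def lhs(y, rho, h=1e-3, L=12.0):
    """integral_{-L}^{L} x * exp(-(x - y*rho)**2 / (2*(1 - rho**2))) dx
    by the midpoint rule; the tails beyond |x| = L are negligible here."""
    total, x = 0.0, -L + h / 2
    while x < L:
        total += x * math.exp(-(x - y * rho) ** 2 / (2 * (1 - rho ** 2))) * h
        x += h
    return total

y, rho = 1.3, 0.6
expected = y * rho * math.sqrt(2 * math.pi * (1 - rho ** 2))
assert abs(lhs(y, rho) - expected) / expected < 1e-6
```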
136,021
<p>There is an equivalence relation between inclusion of finite groups coming from the world of <a href="http://en.wikipedia.org/wiki/Subfactor" rel="noreferrer">subfactors</a>:</p> <p><strong>Definition</strong>: <span class="math-container">$(H_{1} \subset G_{1}) \sim(H_{2} \subset G_{2})$</span> if <span class="math-container">$(R^{G_{1}} \subset R^{H_{1}})\cong(R^{G_{2}} \subset R^{H_{2}})$</span> as subfactors.</p> <p>Here, <span class="math-container">$R$</span> is the hyperfinite <span class="math-container">$II_1$</span> factor (a particular von Neumann algebra), and the groups <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> act by outer automorphisms. The notation <span class="math-container">$R^G$</span> refers to the fixed-point algebra.</p> <p><strong>Theorem</strong>: Let <span class="math-container">$(H \subset G)$</span> be a subgroup and let <span class="math-container">$K$</span> be a normal subgroup of <span class="math-container">$G$</span>, contained in <span class="math-container">$H$</span>, then:<br /> <span class="math-container">$(H \subset G) \sim (H/K \subset G/K)$</span>. 
In particular, if <span class="math-container">$H$</span> is itself normal: <span class="math-container">$(H \subset G) \sim (\{1\} \subset G/H) $</span><br /> <strong>Theorem</strong> : <span class="math-container">$(\{1\} \subset G_{1}) \sim(\{1\} \subset G_{2})$</span> iff <span class="math-container">$G_1 \simeq G_2$</span> as groups.</p>
In his <a href="http://www.ams.org/books/memo/0237/" rel="noreferrer">thesis</a>, Vaughan Jones shows that, for every finite group <span class="math-container">$G$</span>, this action exists and is unique (up to outer conjugacy, see <a href="https://perswww.kuleuven.be/%7Eu0018768/artikels/bourbaki-popa.pdf" rel="noreferrer">here</a> p8), and the subfactor <span class="math-container">$R^{G} \subset R$</span> completely characterizes the group <span class="math-container">$G$</span>. See the book <a href="http://www.cambridge.org/us/academic/subjects/mathematics/abstract-analysis/introduction-subfactors" rel="noreferrer"><em>Introduction to subfactors</em></a> (1997) by Jones-Sunder.</p>
Owen Sizemore
5,732
<p>This is somewhere between a comment and an answer. But it is too long for a comment, so I put it here.</p> <p>To me the natural thing to look at is the action $G\curvearrowright G/H$. $|G/H|$ captures the index, which you surely want, and the action should in some sense capture the position of $H$ inside $G$. The issue then would be to try to come up with an appropriate notion of equivalence of these two objects. Here are 3 possibilities:</p> <p>So let $H\subset G$ and $\Lambda\subset \Gamma$ be two inclusions.</p> <p>1.) We can say $H\subset G \simeq_1 \Lambda\subset\Gamma$ if there is an isomorphism of groups $\Phi:G\rightarrow\Gamma$ such that $\Phi(H)=\Lambda$. This is the strongest notion of equivalence and would certainly imply that the two actions on the coset spaces are the "same". The issue is that it requires the groups involved to be isomorphic, which you certainly don't want.</p> <p>2.) We can forget about the acting group per se and only consider the action on the coset space (in particular look at the orbit equivalence relation). We define this equivalence relation by saying $g_1H\simeq g_2H\Leftrightarrow \exists g\in G$ such that $gg_1H=g_2H$. We do a similar thing on $\Gamma/\Lambda$. Then we say $H\subset G \simeq_2 \Lambda\subset\Gamma$ if there exists a bijection $\Phi:G/H\rightarrow \Gamma/\Lambda$ that takes equivalence classes to equivalence classes.</p> <p>3.) The last one is exactly what you suggest for subfactors.</p> <p>Just a quick final few comments. $1\Rightarrow 2$ but I'm not sure if $2\Rightarrow 3$. The motivation for these comes from three notions of equivalence you can give to a measure preserving action of a discrete group. In this case (for the analogous three equivalences) we have $1\Rightarrow 2\Rightarrow3$ and none of the implications is reversible, in general. Much of Popa's deformation/rigidity program is in trying to reverse these arrows in certain cases.
In the ergodic theory case, $1\Rightarrow 2$ is obvious and $2\not\Rightarrow 1$ is not too hard. $2\Rightarrow 3$ isn't too hard either, but $3\not\Rightarrow 2$ was quite difficult.</p>
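<p>The Kodiyalam-Sunder example quoted in the question can be checked with a short script (my own illustration; permutations are written $0$-based, as tuples of images). Both subgroups of $S_4$ have order $4$ and index $6$, yet they are not conjugate, since they are not even isomorphic:</p>

```python
from itertools import permutations

S4 = set(permutations(range(4)))

def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def generate(gens):
    """Subgroup generated by gens, by closing under composition."""
    H = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return H
        H |= new

# <(1234)> and <(13),(24)>, translated to 0-based notation.
c4 = generate([(1, 2, 3, 0)])                # the 4-cycle (0 1 2 3)
v4 = generate([(2, 1, 0, 3), (0, 3, 2, 1)])  # (0 2) and (1 3)

assert len(c4) == len(v4) == 4     # same order ...
assert len(S4) // len(c4) == 6     # ... hence same index 6

def order(p):
    q, n = p, 1
    while q != tuple(range(4)):
        q, n = compose(q, p), n + 1
    return n

# Not conjugate: one is cyclic of order 4, the other is Klein's V4.
assert max(order(p) for p in c4) == 4
assert max(order(p) for p in v4) == 2
```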
136,021
<p>There is an equivalence relation between inclusion of finite groups coming from the world of <a href="http://en.wikipedia.org/wiki/Subfactor" rel="noreferrer">subfactors</a>:</p> <p><strong>Definition</strong>: <span class="math-container">$(H_{1} \subset G_{1}) \sim(H_{2} \subset G_{2})$</span> if <span class="math-container">$(R^{G_{1}} \subset R^{H_{1}})\cong(R^{G_{2}} \subset R^{H_{2}})$</span> as subfactors.</p> <p>Here, <span class="math-container">$R$</span> is the hyperfinite <span class="math-container">$II_1$</span> factor (a particular von Neumann algebra), and the groups <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> act by outer automorphisms. The notation <span class="math-container">$R^G$</span> refers to the fixed-point algebra.</p> <p><strong>Theorem</strong>: Let <span class="math-container">$(H \subset G)$</span> be a subgroup and let <span class="math-container">$K$</span> be a normal subgroup of <span class="math-container">$G$</span>, contained in <span class="math-container">$H$</span>, then:<br /> <span class="math-container">$(H \subset G) \sim (H/K \subset G/K)$</span>. 
In particular, if <span class="math-container">$H$</span> is itself normal: <span class="math-container">$(H \subset G) \sim (\{1\} \subset G/H) $</span><br /> <strong>Theorem</strong> : <span class="math-container">$(\{1\} \subset G_{1}) \sim(\{1\} \subset G_{2})$</span> iff <span class="math-container">$G_1 \simeq G_2$</span> as groups.</p>
In his <a href="http://www.ams.org/books/memo/0237/" rel="noreferrer">thesis</a>, Vaughan Jones shows that, for every finite group <span class="math-container">$G$</span>, this action exists and is unique (up to outer conjugacy, see <a href="https://perswww.kuleuven.be/%7Eu0018768/artikels/bourbaki-popa.pdf" rel="noreferrer">here</a> p8), and the subfactor <span class="math-container">$R^{G} \subset R$</span> completely characterizes the group <span class="math-container">$G$</span>. See the book <a href="http://www.cambridge.org/us/academic/subjects/mathematics/abstract-analysis/introduction-subfactors" rel="noreferrer"><em>Introduction to subfactors</em></a> (1997) by Jones-Sunder.</p>
Dave Penneys
351
<p>For finite groups, the answer was given by Izumi in his paper "Characterization of isomorphic group-subgroup subfactors" (MR1920326). There he looks at the crossed product subfactor, but you can always take duals.</p> <p>Edit after @Andre's comment:</p> <p>The actual condition between the two pairs of subgroups is quite technical, and it would basically require reproducing an entire page of a 10 page article. Here is a link to the article: <a href="http://imrn.oxfordjournals.org/content/2002/34/1791.short" rel="nofollow">http://imrn.oxfordjournals.org/content/2002/34/1791.short</a> </p> <p>See also <a href="https://www.youtube.com/watch?v=I52MOU9F-sg&amp;index=5&amp;list=LLhntpxxSKIETTxsQMN8nywg" rel="nofollow">this video</a> (27:30) of a talk of M. Izumi on this subject, at the <a href="https://www.youtube.com/playlist?list=PL706CCF11D806FAA0" rel="nofollow">Sunder Fest 2012</a>.</p>
1,285,014
<p>Let $R,S$ be commutative rings with identity.</p> <p>Proving that $X \sqcup Y$ is an affine scheme is the same as proving that $Spec(R) \sqcup Spec(S) = Spec(R \times S)$.</p> <p>I proved that if $R,S$ are rings, then the ideals of $R \times S$ are exactly of the form $P \times Q$, where $P$ is an ideal of $R$ and $Q$ is an ideal of $S$.</p> <p>However, for prime ideals this is not true in general.</p> <p>If $I$ is a prime ideal of $R \times S$, then $I = \mathfrak{p} \times \mathfrak{q}$, where $\mathfrak{p}$ is a prime ideal of $R$ and $\mathfrak{q}$ is a prime ideal of $S$.</p> <p>But if $\mathfrak{p}$ is a prime ideal of $R$ and $\mathfrak{q}$ is a prime ideal of $S$, it is not true in general that $\mathfrak{p} \times \mathfrak{q}$ is a prime ideal of $R \times S$.</p> <p>Then, $Spec(R \times S) \subseteq Spec(R) \times Spec(S)$ and the reverse inclusion is false in general.</p> <p>My question is, what is $Spec(R) \sqcup Spec(S)$ set-theoretically, in order to use what I proved above?</p>
Sam Walls
181,802
<p>If you write out the truth table for your formula...</p> <pre><code>p q r   p → ¬(q∨r)
0 0 0   1
0 0 1   1
0 1 0   1
0 1 1   1
1 0 0   1
1 0 1   0
1 1 0   0
1 1 1   0
</code></pre> <p>To get the DNF, look at every result that is True; those lines directly correspond to the following clauses in DNF: $$(\lnot p\land\lnot q\land\lnot r)\lor(\lnot p\land\lnot q\land r)\lor(\lnot p\land q\land \lnot r)\lor(\lnot p\land q\land r)\lor(p\land \lnot q\land \lnot r)$$ This is a complete description of the formula in DNF - look at how the zeros in the table inputs result in a negation of a literal, and ones result in plain literals.</p> <p>To get the CNF, look at every result that is False; those lines directly correspond to the following clauses in CNF: $$(\lnot p\lor q\lor \lnot r)\land(\lnot p\lor \lnot q\lor r)\land(\lnot p\lor \lnot q\lor \lnot r)$$ Again, this is a complete description of the formula in CNF - but note that here the polarities are flipped: ones in the table inputs give negated literals and zeros give plain literals, so that each clause rules out exactly one falsifying row.</p> <p>It follows then, that if you also wish to convert a DNF to a CNF then what you need to do is reverse engineer this process. Say for instance you had a DNF:</p> <ul> <li>map each clause in the DNF to a line in the truth table that results in <strong>True</strong></li> <li>map all other combinations of inputs to <strong>False</strong></li> <li>then follow the process above to get the CNF from the resulting truth table</li> </ul> <p>This process can be followed to convert CNF to DNF as well if we swap True and False, DNF and CNF etc..</p>
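<p>The truth-table procedure is entirely mechanical, so it can be automated. The sketch below (my own addition) reads the DNF off the true rows and the CNF off the false rows, flipping the literal polarities for the CNF, and then checks both normal forms against the original formula exhaustively:</p>

```python
from itertools import product

def formula(p, q, r):                 # p -> not (q or r)
    return (not p) or not (q or r)

VARS = ("p", "q", "r")

def dnf_cnf(f):
    dnf, cnf = [], []
    for bits in product((0, 1), repeat=len(VARS)):
        lits_true  = [v if b else "~" + v for v, b in zip(VARS, bits)]
        lits_false = ["~" + v if b else v for v, b in zip(VARS, bits)]
        if f(*bits):
            dnf.append("(" + " & ".join(lits_true) + ")")   # true row
        else:
            cnf.append("(" + " | ".join(lits_false) + ")")  # false row
    return " | ".join(dnf), " & ".join(cnf)

dnf, cnf = dnf_cnf(formula)
print(dnf)
print(cnf)

def ev(expr, env):
    py = expr.replace("&", " and ").replace("|", " or ").replace("~", " not ")
    return eval(py, {}, env)

# Exhaustive check that both normal forms agree with the formula.
for bits in product((0, 1), repeat=3):
    env = dict(zip(VARS, (bool(b) for b in bits)))
    assert ev(dnf, env) == ev(cnf, env) == formula(*bits)
```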
1,537,881
<p>Find the values of $a$ and $b$ if $$ \lim_{x\to0} \dfrac{x(1+a \cos(x))-b \sin(x)}{x^3} = 1 $$ I think I should use L'Hôpital's rule, but it did not work.</p>
Idris Addou
192,045
<p>Use the equivalent expansions near $0$: \begin{eqnarray*} \cos x &amp;\approx &amp;1-\frac{x^{2}}{2} \\ \sin x &amp;\approx &amp;x-\frac{x^{3}}{6} \end{eqnarray*} \begin{eqnarray*} \frac{x(1+a\cos x)-b\sin x}{x^{3}} &amp;\approx &amp;\frac{x(1+a\left( 1-\frac{x^{2}}{2}\right) )-b\left( x-\frac{x^{3}}{6}\right) }{x^{3}} \\ &amp;=&amp;\frac{x+ax-a\frac{x^{3}}{2}-bx+\frac{bx^{3}}{6}}{x^{3}} \\ &amp;=&amp;\frac{(1+a-b)x+x^{3}(\frac{b-3a}{6})}{x^{3}} \\ &amp;=&amp;\frac{(1+a-b)}{x^{2}}+\frac{b-3a}{6} \end{eqnarray*} It suffices to choose $a$ and $b$ such that \begin{equation*} 1+a-b=0\quad\text{and}\quad b-3a=6, \end{equation*} that is, \begin{equation*} a=-\frac{5}{2}\quad\text{and}\quad b=-\frac{3}{2}. \end{equation*}</p>
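<p>With these values the limit can be confirmed numerically (my own check, not part of the original answer); the leftover error is of order $x^2$:</p>

```python
import math

a, b = -5 / 2, -3 / 2

def f(x):
    return (x * (1 + a * math.cos(x)) - b * math.sin(x)) / x ** 3

# As x -> 0 the ratio approaches 1; the next term is O(x**2).
for x in (0.1, 0.01, 0.001):
    assert abs(f(x) - 1) < x * x
```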
1,537,881
<p>Find the values of $a$ and $b$ if $$ \lim_{x\to0} \dfrac{x(1+a \cos(x))-b \sin(x)}{x^3} = 1 $$ I think I should use L'Hôpital's rule, but it did not work.</p>
Bernard
202,857
<p>You can, unless using L'Hospital's rule repeatedly, which is not the alpha and omega of limit computations. The simplest way to go is to use <em>Taylor's polynomial</em>:</p> <p>$$ \cos x=1-\dfrac{x^2}2+o(x^2),\quad \sin x=x-\frac{x^3}6+o(x^3),$$ whence $$x(1+a\cos x)-b\sin x=(1+a-b)x+\Bigl(\frac b6-\frac a2\Bigr)x^3+o(x^3) $$ $$\frac{x(1+a\cos x)-b\sin x}{x^3}= \frac{(1+a-b)x+\Bigl(\dfrac b6-\dfrac a2\Bigr)x^3+o(x^3)}{x^3}. $$ For the limit to be equal to $1$, the following equations must be satisfied: $$\begin{cases} a-b+1=0,\\\dfrac b6-\dfrac a2=1. \end{cases}$$</p>
48,989
<p>How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?</p>
Jyrki Lahtonen
11,619
<p>Hint: Show that rows of $AB$ are linear combinations of rows of $B$. Transpose this hint.</p>
48,989
<p>How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?</p>
Listing
3,123
<p>Surely vectors that are in the kernel of <span class="math-container">$B$</span> are also in the kernel of <span class="math-container">$AB$</span>. Vectors that are in the kernel of <span class="math-container">$A^T$</span> are also in the kernel of <span class="math-container">$(AB)^T=B^TA^T$</span>. Therefore, combining the fact that Rank(<span class="math-container">$A$</span>)=Rank(<span class="math-container">$A^T$</span>) with the rank-nullity theorem, which relates the rank of a matrix to the dimension of its kernel, you are done.</p>
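<p>The inequality is easy to test empirically. Here is a small pure-Python sketch (my own addition) that computes ranks exactly over the rationals by Gaussian elimination and checks $\text{Rank}(AB)\leq\min(\text{Rank}(A),\text{Rank}(B))$ on random integer matrices:</p>

```python
from fractions import Fraction
import random

def rank(M):
    """Row-reduce over the rationals; the rank is the number of pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    rk, col, rows = 0, 0, len(M)
    ncols = len(M[0]) if M else 0
    while rk < rows and col < ncols:
        piv = next((r for r in range(rk, rows) if M[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for r in range(rk + 1, rows):
            factor = M[r][col] / M[rk][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[rk])]
        rk, col = rk + 1, col + 1
    return rk

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

random.seed(0)
for _ in range(200):
    A = [[random.randint(-2, 2) for _ in range(3)] for _ in range(4)]
    B = [[random.randint(-2, 2) for _ in range(5)] for _ in range(3)]
    assert rank(matmul(A, B)) <= min(rank(A), rank(B))
```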
1,828,097
<p>If we construct two straight lines as shown:<a href="https://i.stack.imgur.com/8K5Eo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8K5Eo.png" alt="enter image description here"></a></p> <p>Then join them so as to complete a triangle. <a href="https://i.stack.imgur.com/Uvtnw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uvtnw.png" alt="enter image description here"></a></p> <p>It is taught that we can find infinitely many points on a straight line. So there are infinitely many points on $DE$ and $BC$. </p> <p>If we join $A$ with $BC$ as shown:<a href="https://i.stack.imgur.com/HuWos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HuWos.png" alt="enter image description here"></a> we can find one point on $DE$ and a corresponding point on $BC$. So the points on $DE$ and $BC$ are in one-to-one correspondence.</p> <p>Hence, can we say that $\infty=\infty$? But then why is $\infty - \infty \neq 0$?</p> <p>I'm not sure whether this makes any sense or not; your suggestions are appreciated.</p>
Qwerty
290,058
<p>Take this example. Pick <strong>any</strong> point in $(1,\infty)$, invert it, and you will get a point in $(0,1)$. Since there are infinitely many points you can choose, does this, by your reasoning, mean that $\infty=\infty$, i.e. that $(0,1)=(1,\infty)$? </p>
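<p>The inversion in this example is easy to play with (my own illustration):</p>

```python
# x -> 1/x pairs each point of (1, oo) with a point of (0, 1):
# infinitely many points on both sides, yet the two sets differ.
samples = [1.5, 2.0, 10.0, 1000.0]
images = [1 / x for x in samples]
assert all(0 < y < 1 for y in images)

# The map is its own inverse, so the correspondence is one-to-one.
assert all(abs(1 / y - x) < 1e-9 for x, y in zip(samples, images))
```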
788,995
<p>I need to prove that the function $\mathbb R × \mathbb R → \mathbb R $ : $f(x,y) = \frac{|x-y|}{1 + |x-y|}$ is a metric on $\mathbb R$. The first two axioms are trivial; it's the triangle inequality which is a pain. $\frac{|x-y|}{1 + |x-y|}$ + $\frac{|y-z|}{1 + |y-z|} ≥ \frac{|x-z|}{1 + |x-z|} ⇒ \frac{|x-y| + |y-z| + 2|(x-y)(y-z)|}{1 + |x-y| + |y-z| + |(x-y)(y-z)|}≥ \frac{|x-z|}{1 + |x-z|}$, but then I am stuck. Can somebody show me a way out of this?</p>
Hagen von Eitzen
39,174
<p>Have a look at $g(x)=\frac x{1+x}$. All we need is that for $u,v\ge 0$ and $w\le u+v$, we have $$\tag1 g(w)\le g(u)+g(v).$$ For if this is the case, we can let $u=|x-y|$, $v=|y-z|$, $w=|x-z|$; then $w\le u+v$ by the ordinary triangle inequality, and then $(1)$ is the desired triangle inequality for $f$.</p> <p>To show $(1)$, it is sufficient to show $$\tag2 w\le w'\implies g(w)\le g(w')$$ and $$\tag3 g(u+v)\le g(u)+g(v).$$ (Indeed, if $(2)$ and $(3)$ hold, we obtain $(1)$ by letting $w'=u+v$). We quickly check that $g$ has the following properties: $$ \tag4 g\text{ is differentiable and }g'(x)\ge 0\text{ for all }x\ge 0$$ which by the MVT implies that $g$ is increasing, i.e. $(2)$. Moreover we find that $g''(x)\le 0$ for all $x\ge 0$, so that $$\tag5 g'(x)\text{ is decreasing}.$$ Therefore $$\tag6g(u+v)-g(v)=\int_0^u g'(v+t)\,\mathrm dt\le \int_0^u g'(t)\,\mathrm dt=g(u)-g(0)$$ and using $$\tag7 g(0)=0 $$ this immediately gives us $(3)$. We conclude that $$f(x,y)=g(|x-y|) $$ obeys the triangle inequality whenever $g\colon[0,\infty)\to\mathbb R$ has the properties $(4)$, $(5)$, and $(7)$. To ensure that this is a metric, only very little more than this is needed about the function $g$, namely that $g(x)=0$ implies $x=0$.</p>
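<p>As a quick numerical sanity check (not a substitute for the proof above), one can sample random triples and confirm that no counterexample to the triangle inequality for $f$ turns up; the sampling range, seed, and tolerance below are arbitrary choices:</p>

```python
import random

def f(x, y):
    """The candidate metric f(x, y) = |x - y| / (1 + |x - y|)."""
    d = abs(x - y)
    return d / (1 + d)

random.seed(1)
for _ in range(10_000):
    x, y, z = (random.uniform(-100, 100) for _ in range(3))
    # Triangle inequality, with a small slack for floating-point error.
    assert f(x, z) <= f(x, y) + f(y, z) + 1e-12
print("no counterexample found")
```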
612,827
<p>I'm self studying with Munkres's topology and he uses the uniform metric several times throughout the text. When I looked in Wikipedia I found that there's this concept of a <a href="http://en.wikipedia.org/wiki/Uniform_space" rel="nofollow">uniform space</a>.</p> <p>I'd like to know what are it's uses (outside point set topology) and whether it's an important thing to learn on a first run on topology? </p>
Matt E
221
<p>The concept of <em>uniform space</em> is designed to abstract the notions of <em>uniform continuity</em>, <em>Cauchy sequences</em>, and <em>completeness</em> from metric space theory. </p> <p>What these concepts all have in common is that they require us to be able to talk about a pair of points $x$ and $y$ being "sufficiently close", where the notion of "sufficiently close" doesn't make explicit reference to either $x$ or $y$.</p> <p>In an arbitrary topological space, we can't do this: we can use the system of neighborhoods (n.h.s) around a fixed point $x$ to measure "closeness to $x$", but we can't make sense of the statement "$y$ is as close to $x$ as $y'$ is to $x'$" (if $x$ and $x'$ are different points) because there is no mechanism for comparing the size of a n.h. of $x$ to the size of a n.h. of $x'$.</p> <p>In a metric space, we <em>can</em> do this, because we can talk about the $\epsilon$ n.h.s of <em>any</em> point, for fixed values of $\epsilon$.</p> <p>In a topological group, we can also do this, because we can use translation by group elements to compare the system of n.h.s around any point to the system of n.h.s around some chosen base-point (usually the identity).</p> <p>As Daniel Fischer indicates in his comment, the concept of uniform space provides a language which generalizes these two examples. I did find it useful to learn, since it clarifies results such as the theorem "closed and bounded implies compact" for subsets of Euclidean space, and since it provides a useful background language when studying topological vector spaces (and these sometimes come up in my research). However, it is not used much in other areas of mathematics (in my experience), e.g. essentially not at all in algebraic topology or differential topology, and so I wouldn't say there is any real need to learn it unless you feel highly motivated to do so.</p>
772,391
<p>The formula for the chi-squared test statistic is the following:</p> <p>$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$</p> <p>where $O_i$ is the observed value and $E_i$ is the expected one.</p> <p>I'm curious why it depends on the absolute values. For example, if we change the units we're measuring in, we'll get a different statistic. Suppose we're performing a test on apple weights. One of the samples weighs 165 grams, and we expect it to be 182 grams; then the corresponding term of the formula will be:</p> <p>$\frac{(165 - 182)^2}{182} \sim 1.58791$</p> <p><a href="http://en.wikipedia.org/wiki/Pearson&#39;s_chi-squared_test" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Pearson's_chi-squared_test</a></p> <p>Now suppose we're living in a country where precision is a top priority. We use milligrams for everything and we get the same results in different units: 165000 milligrams and 182000, respectively. The statistic:</p> <p>$\frac{(165000 - 182000)^2}{182000} \sim 1587.91$</p> <p>So our conclusion will be different based on the units we used. Why? What am I missing, and why are the values not normalized in the chi-squared test?</p>
2'5 9'2
11,123
<p>In the version of this test that I am familiar with, individual data is <em>categorical</em>, not quantitative like your examples. And the expected and observed values should be <em>frequencies</em> of some category (a count of how many times it occurs), not some individual's quantitative measurement. The numbers that go in to the $E_i$ and $O_i$ positions are unitless, as they are just counts.</p> <p>So for example, in a box with mixed fruit, maybe 12 pieces were bananas, but you were expecting 15 to be bananas. You will have the term $$\frac{(12-15)^2}{15}$$ and there is no way to rescale units as you did. Writing $$\frac{(12000-15000)^2}{15000}$$ would correspond to a <em>very</em> different scenario. There you would have seen 12000 bananas when you were expecting 15000. And the corresponding $P$ value <em>should</em> be a lot smaller, because it <em>should</em> be a lot less likely to be off by 3000 out of 15000 than 3 out of 15, when you consider the variance from one piece of fruit to the next on its chances to be a banana. So $\chi^2$ <em>should</em> be a lot larger in the latter case.</p>
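<p>To make the point concrete, here is a tiny snippet (the helper name is just for illustration) computing one term of the statistic for the two count scenarios; being off by 3000 out of 15000 really does yield a much larger term than being off by 3 out of 15:</p>

```python
def chi_term(observed, expected):
    """One term (O - E)^2 / E of Pearson's chi-squared statistic."""
    return (observed - expected) ** 2 / expected

print(chi_term(12, 15))        # 0.6   -- off by 3 bananas out of 15
print(chi_term(12000, 15000))  # 600.0 -- off by 3000 out of 15000
```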
3,444,556
<blockquote> <p>Let <span class="math-container">$f:[-1,1] \to \mathbb{R}$</span> be continuous on <span class="math-container">$[-1,1]$</span>.</p> <p>Assume <span class="math-container">$\displaystyle \int_{-1}^{1}f(x)x^ndx = 0$</span> for <span class="math-container">$n = 0,1,2,...$</span> </p> <p>Then show <span class="math-container">$f(x)=0, \ \forall x \in [-1,1]$</span></p> </blockquote> <p>I would like to use Weierstrass Approximation Theorem (WAT) to prove this. Here is my incomplete attempt:</p> <p><span class="math-container">$f$</span> is continuous on <span class="math-container">$[-1,1]$</span> so by WAT <span class="math-container">$\exists$</span> a sequence polynomials <span class="math-container">$p_1, p_2, p_3,...,p_n,...$</span> s.t. </p> <p><span class="math-container">$\displaystyle \lim_{n \to \infty} |p_n - f| = 0$</span>, so <span class="math-container">$\displaystyle \lim_{n \to \infty} \int_{-1}^{1}|f-p_n| = 0$</span>.</p> <p>Now, I would like to show <span class="math-container">$\int_{-1}^{1} f^2 = 0$</span> and from there I would like to conclude that <span class="math-container">$f=0$</span>. So, <span class="math-container">\begin{align} \int_{-1}^{1} (f(x))^2 dx &amp; = \int_{-1}^{1} f(x)f(x) \\ &amp; = \int_{-1}^{1} \left[ f(x)f(x)-f(x)p_n(x)+f(x)p_n(x) \right] dx \\ &amp; = \int_{-1}^{1} f(x) (f(x) - p_n(x)) dx + \int_{-1}^{1}f(x) p_n(x)dx \end{align}</span></p> <p><span class="math-container">$\int_{-1}^{1} f(x) p_n(x) dx = 0$</span> by the hypothesis since <span class="math-container">$\int_{-1}^{1} f(x) x^n dx = 0$</span> for <strong>all</strong> <span class="math-container">$n \in \mathbb{N}$</span>.</p> <p>So that leaves me to show,</p> <p><span class="math-container">$\int_{-1}^{1}f(x)(f(x)-p_n(x)) = 0$</span>. I know <span class="math-container">$\lim_{n \to \infty}|f-p_n|=0$</span>, but I am not clear how I can use this to show the integral is <span class="math-container">$0$</span>. 
First of all, we are working with a fixed <span class="math-container">$n$</span>, and second of all the limit of absolute difference is <span class="math-container">$0$</span>.</p> <p>Should I start the proof with limits and show the integral is arbitrarily small?</p> <p>Thanks.</p>
user284331
284,331
<p><span class="math-container">\begin{align*} \int_{-1}^{1}|f(x)||f(x)-p_{n}(x)|dx&amp;\leq\sup_{x\in[-1,1]}|f(x)-p_{n}(x)|\int_{-1}^{1}|f(x)|dx\\ &amp;=K\sup_{x\in[-1,1]}|f(x)-p_{n}(x)|, \end{align*}</span> where <span class="math-container">$K$</span> denotes the integral, which is a finite number; now the supremum can be made arbitrarily small by the Weierstrass theorem.</p>
83,336
<p>Sorry if this is a naive question-- I'm trying to learn this stuff (cross-posted from <a href="https://math.stackexchange.com/questions/89248/induced-representations-of-topological-groups">https://math.stackexchange.com/questions/89248/induced-representations-of-topological-groups</a>)</p> <p>If $G$ is a group with subgroup $H$, then we have the restriction functor $\operatorname{Res}$ from $G-\operatorname{mod}$ to $H-\operatorname{mod}$. We also have this idea of induction, a functor $\operatorname{Ind}^G_H$ from $H-\operatorname{mod}$ to $G-\operatorname{mod}$. These are adjoints, which means (I think) that $\operatorname{Hom}_G(\operatorname{Ind}^G_H(V), U) \cong \operatorname{Hom}_H(V,\operatorname{Res}(U))$ naturally, for $G$-modules $U$ and $H$-modules $V$.</p> <p>For locally compact groups, there is a theory worked out by MacKey and others. Actually, I have only read Rieffel's work on the subject (as I come from a functional Analysis background). For a locally compact $G$ and closed subgroup $H$, there is a very satisfactory notion of the functor $\operatorname{Ind}^G_H$ (where we consider "Hermitian modules", i.e. unitary representations on Hilbert spaces). What I don't see is how (or even if) this relates to the restriction functor?</p> <blockquote> <p>In the topological setting, are $\operatorname{Ind}^G_H$ and $\operatorname{Res}$ in any sense adjoints?</p> </blockquote> <p>A slightly vague rider-- if (as I suspect) the answer is "no", can we be more precise about <em>why</em> the answer is no?</p>
Marc Palm
10,400
<p>In fact the right relation is $$ Hom_G ( - , Ind_H^G - ) = Hom_H( Res - , -).$$ For compact groups, it does not really matter by the Peter-Weyl theorem, but it is essential as soon as you omit compactness. </p> <p>I want to add that Frobenius reciprocity usually boils down to tensor-hom adjointness, so essentially the nuclearity of the module category is necessary. For a positive answer, you have to be more restrictive.</p> <p>For Mackey's theory of induced representations and its variants of Frobenius reciprocity, have a look at Barut &amp; Raczka "Theory of group representations and applications" pg. 549.</p> <p>For locally profinite groups, there is a really nice treatment in Bushnell-Henniart "Local Langlands on GL(2)" on pg. 18ff.</p> <p>To add upon Jesse Peterson's example: If $G$ is locally compact, then $$ Hom_G( L^1 G , 1_G) \cong \mathbb{C} \cong End_H(1_H).$$ (Hint: The Haar integral is the only element on the left hand side.) So the category of Hilbert spaces is not for all purposes the preferred one to work in.</p>
2,400,336
<p>My first try was to set the whole expression equal to $a$ and square both sides. $$\sqrt{6-\sqrt{20}}=a \Longleftrightarrow a^2=6-\sqrt{20}=6-\sqrt{4\cdot5}=6-2\sqrt{5}.$$</p> <p>Multiplying by conjugate I get $$a^2=\frac{(6-2\sqrt{5})(6+2\sqrt{5})}{6+2\sqrt{5}}=\frac{16}{2+\sqrt{5}}.$$</p> <p>But I still end up with an ugly radical expression.</p>
Stefan4024
67,746
<p>Here's a useful formula for this kind of problems:</p> <p>$$\sqrt{a \pm \sqrt{b}} = \sqrt{\frac{a + \sqrt{a^2 - b}}{2}} \pm \sqrt{\frac{a - \sqrt{a^2 - b}}{2}}$$</p> <p>where we have $a,b \ge 0$ and $a^2 &gt; b$</p>
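<p>For the original problem ($a=6$, $b=20$) the formula denests $\sqrt{6-\sqrt{20}}$ to $\sqrt{5}-1$, and a few lines of code can confirm this numerically (the function name and the spot check are just illustrative):</p>

```python
import math

def denest(a, b, sign):
    """sqrt(a + sign*sqrt(b)) via the denesting formula; assumes a, b >= 0 and a*a > b."""
    c = math.sqrt(a * a - b)
    return math.sqrt((a + c) / 2) + sign * math.sqrt((a - c) / 2)

lhs = math.sqrt(6 - math.sqrt(20))
rhs = denest(6, 20, -1)
print(lhs, rhs, math.sqrt(5) - 1)  # all three agree to machine precision
```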
3,552,219
<p>I come across an explanation of recursion complexity. This screenshot is in question:</p> <p><a href="https://i.stack.imgur.com/ySKdo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ySKdo.png" alt="a"></a></p> <p>How do you get this?</p> <pre><code>T(n) = 3T(n/4) + n </code></pre> <p>The <span class="math-container">$log_n^4$</span> shown seems to be base 4, and this baffles me. What does the <code>n</code> superscript stand for? I am inclined to think the base of this log should be 3. Can someone explain this to me?</p> <hr> <p>Another example was provided where the tree expands at an exponent of 2: <a href="https://i.stack.imgur.com/gP5tc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gP5tc.png" alt="b"></a></p> <p>The formula given is:</p> <pre><code>T(n) = 2T(n/2) + 2 </code></pre> <p>The <code>log(n)</code> given is said to be of base 2. This makes sense to me, but not the base 4 in given by the first picture.</p>
Karl
279,914
<p>In this example, we're considering an algorithm that, given a problem of size <span class="math-container">$n$</span>, does 1 unit of "local" work and divides the remaining work into 3 smaller problems of the same kind, where each of these subproblems has size <span class="math-container">$n/4$</span>. (Instead of 3 and 4 we could have any numbers; this is just an example.) So if <span class="math-container">$T(n)$</span> is the total amount of work done in solving a problem of size <span class="math-container">$n$</span>, we have the equation <span class="math-container">$T(n)=3T(n/4)+1$</span> by assumption.</p> <p>The tree figure shows a way of thinking about the algorithm and understanding the growth rate of <span class="math-container">$T$</span>. At <span class="math-container">$k$</span> levels below the root, we have <span class="math-container">$3^k$</span> subproblems and each subproblem has size <span class="math-container">$n/4^k$</span>. Since our algorithm stops splitting the problem up when it reaches a base case (i.e. a problem of size 1), the tree has <span class="math-container">$\log_4n$</span> levels, and the number of base cases at the bottom level is <span class="math-container">$3^{\log_4n}$</span>.</p>
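<p>A short script makes the bookkeeping above checkable (it assumes the base case $T(1)=1$, which the figure does not specify): for $n$ a power of $4$ the total work is the geometric sum $1+3+3^2+\dots+3^{\log_4 n}$, and the bottom level indeed holds $3^{\log_4 n}$ base cases:</p>

```python
import math

def T(n):
    """Work for T(n) = 3*T(n/4) + 1, assuming the base case T(1) = 1."""
    if n <= 1:
        return 1
    return 3 * T(n // 4) + 1

n = 4 ** 6
levels = round(math.log(n, 4))              # levels below the root
base_cases = 3 ** levels                    # equals n ** log_4(3)
closed_form = (3 ** (levels + 1) - 1) // 2  # geometric sum 1 + 3 + ... + 3**levels
print(T(n), closed_form, base_cases)        # 1093 1093 729
```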
3,210,791
<p>The question is as follows:</p> <p>Consider the following partial differntial equation (PDE)</p> <p><span class="math-container">$2\frac{\partial^2u}{\partial x^2}+2\frac{\partial^2u}{\partial y^2} = u$</span></p> <p>where <span class="math-container">$u=u(x,y)$</span> is the unknown function.</p> <p>Define the following functions:</p> <p><span class="math-container">$u_1(x,y):=xy^2, u_2(x,y)=\sin(xy)$</span> and <span class="math-container">$u_3(x,y)=e^{\frac{1}{3}(x-y)}$</span></p> <p>Which of these functions are solutions to the above PDE?</p> <p>Any walkthroughs, description of methods, links to resources would be highly appreciated.</p>
Community
-1
<ul> <li><p><span class="math-container">$2\cdot0+2\cdot2x=xy^2$</span> ?</p></li> <li><p><span class="math-container">$2(-y^2\sin(xy))+2(-x^2\sin(xy))=\sin(xy)$</span> ?</p></li> <li><p><span class="math-container">$2\dfrac{e^{(x-y)/3}}9+2\dfrac{e^{(x-y)/3}}9=e^{(x-y)/3}$</span> ?</p></li> </ul>
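<p>The three checks above can also be run numerically with central differences: the residual $2u_{xx}+2u_{yy}-u$ must vanish identically for a genuine solution, so a nonzero value at a single point already rules a candidate out. This is a sanity check, not a proof, and the sample point $(1,1)$ and step size are arbitrary choices:</p>

```python
import math

def residual(u, x, y, h=1e-4):
    """Central-difference approximation of 2*u_xx + 2*u_yy - u at (x, y)."""
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    return 2 * uxx + 2 * uyy - u(x, y)

u1 = lambda x, y: x * y**2
u2 = lambda x, y: math.sin(x * y)
u3 = lambda x, y: math.exp((x - y) / 3)

for name, u in [("u1", u1), ("u2", u2), ("u3", u3)]:
    print(name, residual(u, 1.0, 1.0))  # none of the residuals is close to 0
```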
68,438
<p>I recall reading somewhere that if a conformal class contains an Einstein metric then that metric is the unique metric with constant scalar curvature in its conformal class, with the exception of the case of the round sphere. Does this sound right? If it is true: where can I find the proof of this result? </p>
Renato G. Bettiol
15,743
<p>The first proof of the statement "Einstein metrics are the unique metrics with constant scalar curvature in their conformal class, except for round spheres" is due to Obata in 1971, see <a href="http://www.ams.org/mathscinet-getitem?mr=0303464" rel="noreferrer">MR0303464 M. Obata, The conjectures on conformal transformations of Riemannian manifolds. J. Differential Geometry 6 (1971/72), 247–258.</a> In the beginning of the paper he lists many previous works with partial results in this direction as well.</p>
1,281,507
<blockquote> <p>$$x*y = 3xy - 3x - 3y + 4$$</p> <p>We know that $*$ is associative and has neutral element, $e$.</p> <p>Find $$\frac{1}{1017}*\frac{2}{1017}*\cdots *\frac{2014}{1017}.$$</p> </blockquote> <p>I did find that $e=\frac{4}{3}$, and, indeed, $x*y = 3(x-1)(y-1)+1$. Also,it is easy to check that the law $*$ is commutative.</p> <p>How can I solve this?</p>
Travis Willse
155,629
<p><strong>Hint</strong> What is $1 \ast y$?</p> <blockquote class="spoiler"> <p><strong>Additional hint</strong> Note that $1$ occurs in the $2014$-fold product, namely as $\frac{1017}{1017}$.</p> </blockquote>
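<p>If you want to double-check whatever closed form the hints lead you to, exact rational arithmetic can evaluate the full $2014$-fold product by brute force (this is a verification, not the intended pencil-and-paper argument):</p>

```python
from functools import reduce
from fractions import Fraction

def star(x, y):
    """The operation x*y = 3xy - 3x - 3y + 4."""
    return 3 * x * y - 3 * x - 3 * y + 4

terms = [Fraction(k, 1017) for k in range(1, 2015)]
result = reduce(star, terms)  # exact, no rounding error
print(result)
```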
132,238
<p>I'm trying to solve a maximization problem that apparently is too complicated (it's a convex function) and NMaximize just runs endlessly.</p> <p>I'd like to have an approximate result, though. How can I tell <code>NMaximize</code> to just give up after $n$ seconds and give me the best it has found so far?</p>
Lukas
21,606
<p>This is how I usually deal with this kind of problem. Keywords to the solution are <code>TimeConstrained</code>, <code>AbortProtect</code>, <code>Throw</code> and <code>Catch</code>.</p> <p>Consider the two target functions: </p> <pre><code>fun[x_] := (Pause[1]; x^4 - 3 x^2 - x); fun2[x_, y_] := (Pause[1]; x + y); </code></pre> <p>Now we define our own optimization routine, that is <code>TimeConstrained</code> by some <code>timeLimit</code>.</p> <pre><code>timedOptimization[timeLimit_, vars_, target_, constraints___] := Module[{objective, bestVal = 100, symbolicVars = HoldForm /@ vars, tempVal}, objective[v_ /; VectorQ[v, NumericQ]] := Module[{val}, val = target@@v; If[val &lt; bestVal, bestVal = val; tempVal = ReleaseHold@v;]; val ]; AbortProtect[Catch@TimeConstrained[ NMinimize[ {objective[vars],constraints}, vars, MaxIterations -&gt; 400 ], timeLimit, Print["Too slow! Aborting with value ", bestVal, " for parameter ", Thread[symbolicVars -&gt; tempVal]]; Throw[{bestVal, Flatten[Thread[symbolicVars -&gt; tempVal]]}] ] ] ]; </code></pre> <p>This should be quite clear. Mainly, <code>AbortProtect</code> is needed to protect against the <code>Abort</code> generated by <code>TimeConstrained</code> as soon as <code>timeLimit</code> is reached. The current best value and the parameter are <code>Throw</code>n within the fail expression for <code>TimeConstrained</code>, and the outer <code>Catch</code> is needed to, well, catch these values. </p> <p>Due to the <code>BlankNullSequence</code>, <code>constraints</code> is an optional argument which does not need to be specified. <code>vars</code> is a list with all parameters, even if it is a single one. See below for examples of how to use it with one/multiple arguments and with/without constraints. 
Please note that the <code>MaxIterations</code> option is actually not needed and just included for demonstration purposes (it exists).</p> <pre><code>AbsoluteTiming@timedOptimization[10, {x}, fun] </code></pre> <blockquote> <p>Too slow! Aborting with value -2.8916 for parameter x->0.965034</p> <p>{10.001186, {-2.8916, x -> 0.965034}}</p> </blockquote> <pre><code>AbsoluteTiming@timedOptimization[10, {x}, fun, x &gt; 1] </code></pre> <blockquote> <p>Too slow! Aborting with value -3.24616 for parameter {x->1.09146}</p> <p>{10.001160, {-3.24616, {x -> 1.09146}}}</p> </blockquote> <pre><code>AbsoluteTiming@timedOptimization[10, {x,y}, fun2] </code></pre> <blockquote> <p>Too slow! Aborting with value -0.672799 for parameter {x->-0.535769,y->-0.13703}</p> <p>{10.000125, {-0.672799, {x -> -0.535769, y -> -0.13703}}}</p> </blockquote> <pre><code>AbsoluteTiming@timedOptimization[10, {x,y}, fun2, {x, y} \[Element] Disk[]] </code></pre> <blockquote> <p>Too slow! Aborting with value -0.70015 for parameter {x->-0.351705,y->-0.348445} {10.000125, {-0.70015, {x -> -0.351705, y ->-0.348445}}}</p> </blockquote>
850,390
<p>Let $f(x)$ be differentiable function from $\mathbb R$ to $\mathbb R$, If $f(x)$ is even, then $f'(0)=0$. Is it always true?</p>
Marm
159,661
<p>Given: $f(x)=f(-x)$.</p> <p>Differentiating both sides (using the chain rule on the right) we obtain $f'(x)=-f'(-x)$. Setting $x=0$ gives $f'(0)=-f'(0) \iff 2f'(0)=0$, hence $f'(0)=0$.</p>
104,626
<p>I encountered the following differential equation when I tried to derive the equation of motion of a simple pendulum:</p> <p>$\frac{\mathrm d^2 \theta}{\mathrm dt^2}+g\sin\theta=0$</p> <p>How can I solve the above equation?</p>
user14717
24,355
<p>Start with $$ \frac{1}{2}\frac{\mathrm{d}\dot{\theta}^{2}}{\mathrm{d}\theta} = \dot{\theta}\frac{\mathrm{d}\dot{\theta}}{\mathrm{d}\theta} = \frac{\mathrm{d}\theta}{\mathrm{d}t}\frac{\mathrm{d}\dot{\theta}}{\mathrm{d}\theta} = \frac{\mathrm{d}\dot{\theta}}{\mathrm{d}t} = \ddot{\theta} $$ Then your equation becomes $$ \frac{1}{2}\frac{\mathrm{d}\dot{\theta}^{2}}{\mathrm{d}\theta} = -\frac{g}{\ell}\sin(\theta) $$ or $$ \mathrm{d}\dot{\theta}^{2} = -\frac{2g}{\ell}\mathrm{d}\sin(\theta) \implies \dot{\theta}^{2} = \frac{2g}{\ell}\cos(\theta) + c_{1} $$ It's a bit easier if we assume initial conditions, say $\dot{\theta}(t_{0}) = \dot{\theta}_{0}$ and $\theta(t_{0}) = \theta_{0}$, so that $$ \dot{\theta}^{2} = \frac{2g}{\ell}\left[\cos(\theta)- \cos(\theta_{0}) + \frac{\ell\dot{\theta}_{0}^{2}}{2g} \right] $$ Then $$ \frac{\mathrm{d}\theta}{\mathrm{d}t} = \sqrt{ \frac{2g}{\ell}}\sqrt{\cos(\theta) - \cos(\theta_{0}) + \frac{\ell\dot{\theta}_{0}^{2}}{2g} } $$ so that $$ \mathrm{d}t = \sqrt{\frac{\ell}{2g}}\frac{\mathrm{d}\theta}{ \sqrt{\cos(\theta) - \cos(\theta_{0}) + \frac{\ell\dot{\theta}_{0}^{2}}{2g}}} $$ or $$ t_{f} - t_{0} = \sqrt{\frac{\ell}{2g}}\int_{\theta_{0}}^{\theta_{f}}\frac{\mathrm{d}\theta}{ \sqrt{\cos(\theta) - \cos(\theta_{0}) + \frac{\ell\dot{\theta}_{0}^{2}}{2g}}} $$ This equation is of the form $t = f(\theta)$. Your solution is given by $\theta = f^{-1}(t)$. That's about as much as you need to know, since it's more efficient to just solve the original equation numerically.</p> <p>If you really need a closed form for $f$, Mathematica will give you one, in terms of the function <code>EllipticF</code>.</p>
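<p>Following the closing remark, here is a minimal pure-Python RK4 integration of $\ddot{\theta} = -\frac{g}{\ell}\sin\theta$ (the parameter values are illustrative). As a consistency check it monitors the first integral $\dot{\theta}^2 - \frac{2g}{\ell}\cos\theta$ derived above, which should stay constant along the trajectory:</p>

```python
import math

def rk4_pendulum(theta0, omega0=0.0, g=9.81, ell=1.0, dt=1e-3, steps=10_000):
    """Integrate theta'' = -(g/ell)*sin(theta) with the classic RK4 scheme."""
    def deriv(th, om):
        return om, -(g / ell) * math.sin(th)
    th, om = theta0, omega0
    states = [(th, om)]
    for _ in range(steps):
        k1 = deriv(th, om)
        k2 = deriv(th + 0.5 * dt * k1[0], om + 0.5 * dt * k1[1])
        k3 = deriv(th + 0.5 * dt * k2[0], om + 0.5 * dt * k2[1])
        k4 = deriv(th + dt * k3[0], om + dt * k3[1])
        th += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        om += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        states.append((th, om))
    return states

g, ell = 9.81, 1.0
states = rk4_pendulum(theta0=1.0, g=g, ell=ell)
# First integral from the derivation: omega^2 - (2g/ell)*cos(theta) is conserved.
invariant = [om * om - (2 * g / ell) * math.cos(th) for th, om in states]
print(max(invariant) - min(invariant))  # tiny: conserved up to integration error
```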
2,766,879
<p>Show that there is no primitive Pythagorean triple $(x,y,z)$ with $z\equiv -1 \pmod 4$. </p> <p>I have once proven that, for all integers $a,b$, the sum $a^2 + b^2$ is congruent to $0$, $1$, or $2$ modulo $4$. I feel it is enough to conclude by taking $a=x$, $b=y$ with $\gcd(x,y)=1$. But I am not completely sure that this is the way the proof should end.</p>
Dr. Sonnhard Graubner
175,066
<p>Hint: you have to prove that $$\frac{x^2+x+2}{x-1}-\frac{\frac{1}{x^2}+\frac{1}{x}+2}{\frac{1}{x}-1}\geq 8$$ and this is equivalent to $$\frac{(x+1)^3}{x(x-1)}\geq 8$$</p>
1,638,051
<p>$$\int\frac{dx}{(x^{2}-36)^{3/2}}$$</p> <p>My attempt:</p> <p>the factor in the denominator implies</p> <p>$$x^{2}-36=x^{2}-6^{2}$$</p> <p>substituting $x=6\sec\theta$, noting that $dx=6\tan\theta \sec\theta$ </p> <p>$$x^{2}-6^{2}=6^{2}\sec^{2}\theta-6^{2}=6^{2}\tan^{2}\theta$$</p> <p>$$\int\frac{dx}{(x^{2}-36)^{3/2}}=\int\frac{6\tan\theta \sec\theta}{36\tan^{2}\theta}=\frac{1}{6}\int\frac{\sec\theta}{\tan\theta}$$</p> <p>using trig identities: $$\frac{1}{6}\int\frac{\sec\theta}{\tan\theta}=\frac{1}{6}\int \sin^{-1}\theta$$</p> <p>now using integration by parts: $$\frac{1}{6}\int \sin^{-1}\theta$$ $$u=\sin^{-1}\theta, du=\frac{1}{\sqrt{1-\theta^{2}}}, dv=1, v=\theta$$ using $uv-\int{vdu}$</p> <p>$$\frac{1}{6}\bigg(\theta \sin^{-1}\theta-\int{\frac{\theta}{\sqrt{1-\theta^{2}}}}d\theta\bigg)$$</p> <p>now using simple substitution:$$z=1-\theta^{2}, dz=-2\theta d\theta, -\frac{1}{2}du=\theta d\theta$$</p> <p>it is apparent that</p> <p>$$\frac{1}{6}\bigg(\theta \sin^{-1}\theta-\bigg(-\frac{1}{2}\int{\frac{dz}{\sqrt{z}}}\bigg)\bigg)$$</p> <p>$$=\frac{1}{6}\bigg(\theta \sin^{-1}\theta-\bigg(-\frac{1}{2}(2\sqrt{z})\bigg)\bigg)=\frac{1}{6}\bigg(\theta \sin^{-1}\theta+\sqrt{1+\theta^{2}}\bigg)$$</p> <p>$$=\frac{1}{6}\theta \sin^{-1}\theta+\frac{1}{6}\sqrt{1+\theta^{2}}+C$$</p> <p>I have the following questions:</p> <p>1.This integral seems tricky and drawn out to me, is there another method that reduces the steps/ methods of integration? I had to use trig substitution, integration by parts, and substitution in order to solve the integral, what can I do to find easier ways to complete integrals of this type?</p> <p>2.Is this solution even correct? wolfram alpha says the solution to this integral is $-\frac{x}{36\sqrt{x^{2}-36}}+C$ how can i determine equivalence?</p>
egreg
62,967
<p>Be careful: with the substitution you have $$ (x^2-36)^{3/2}=(36\tan^2\theta)^{3/2}=216\tan^3\theta $$ (at least in an interval where $\tan\theta$ is positive) so your integral becomes $$ \int\frac{6\tan\theta\sec\theta}{216\tan^3\theta}\,d\theta= \frac{1}{36}\int\frac{\cos\theta}{\sin^2\theta}\,d\theta= -\frac{1}{36}\frac{1}{\sin\theta}+C $$</p> <p>If instead you set $x=6\cosh u$, you get $dx=6\sinh u\,du$ and the identity $\cosh^2u-1=\sinh^2u$ brings the integral in the form $$ \frac{1}{36}\int\frac{1}{\sinh^2u}\,du= -\frac{1}{36}\frac{\cosh u}{\sinh u}+C= -\frac{1}{36}\frac{x}{\sqrt{x^2-36}}+C $$</p>
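<p>As for determining equivalence with a closed form: one quick check is to differentiate the candidate antiderivative $-\frac{x}{36\sqrt{x^2-36}}$ numerically and compare it with the integrand (a spot check at a few arbitrary points $x&gt;6$, with an arbitrary step size):</p>

```python
import math

def F(x):
    """Candidate antiderivative -x / (36*sqrt(x^2 - 36))."""
    return -x / (36 * math.sqrt(x * x - 36))

def integrand(x):
    return (x * x - 36) ** -1.5

# Central differences: F'(x) should match the integrand for x > 6.
h = 1e-6
for x in (7.0, 10.0, 25.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    print(x, deriv, integrand(x))
```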
2,683,326
<p>I have a function $f(x)$ whose second order Taylor expansion is represented by $f_2(x)$. Is it true that $$f(x)&gt;f_2(x)$$ for all $x$? Any help in this regard will be much appreciated. Thanks in advance.</p>
Graham Kemp
135,106
<p>Rather than breaking a stick of length 1 into six parts with five breaks, break a circle of circumference 1 into six parts with six breaks and then pick one of those breaks as the "original ends". </p> <p>So, what is the probability that all six breaks are not on the same semicircle?</p> <p>Well, WLOG we pick one point as a reference. The other five points will be uniformly distributed over $[-1/2,1/2]$ relative to that point. You want the probability that the distance between the least and greatest order statistics of those five points is more than 1/2.</p> <p>$$\int_{1/2}^0\int_{x+1/2}^1 f_{X_{(1)},X_{(5)}}(x,y)~\mathsf d y~\mathsf d x$$</p>
2,837,683
<p>I have to solve the integral $$\int_D \sqrt{x^2+y^2} dx dy$$ where $D=\{(x,y)\in\mathbb{R^2}: x^{2/3}+y^{2/3}\le1\}$.</p> <p>I am not able to find a parameterization that suits the integrand. I tried with $$\cases{x=(r\cos t)^3\\y=(r\sin t)^3}$$ in order to reduce the domain to a circle but then the integral becomes $$\int_0^1\int_0^{2\pi} r^3\sqrt{(\cos t)^6+(\sin t)^6}\cdot9r^5\cos^2 t\sin^2 t dt dr$$ and then I cannot proceed to solve the integral.</p> <p>Should I change the parametrization? </p>
MR ASSASSINS117
546,265
<p>With Polar Coordinates and with the Jacobian Transformation</p> <p>$$\int_D\sqrt{x^2+y^2}\ dS\implies\iint_{\mathbb R^2} r\cdot rdS\implies\int_0^1\int_0^{2\pi}r^2dtdr$$</p>
1,176,958
<p>I've been struggling with this for over an hour now and I still have no good results, the question is as follows:</p> <blockquote> <p>What's the probability of getting all the numbers from $1$ to $6$ by rolling $10$ dice simultaneously?</p> </blockquote> <p>Can you give any hints or solutions? This problem seems really simple but I feel like I'm blind to the solution.</p>
Victor
142,550
<p>The way I see this problem, I'd consider two finite sets, namely the set comprising the $10$ dice (denoted by $\Theta$) and the set of all possible outcomes (denoted by $\Omega$), which in this particular situation is the six faces of the dice.</p> <p>Therefore, the apparent ambiguity of the problem is significantly reduced by considering all possible mappings of the form $f:\Theta \to \Omega$ that are surjective. Why exactly surjectivity?</p> <p>Recall that the definition of surjection requires that every value in the codomain be hit at least once by the mapping of elements in the domain. Under the circumstances, that is exactly what we're interested in, since we want to count precisely those instances in which all $6$ possible faces appear.</p> <p>The number of surjective mappings can be found using the following identity:</p> <p>$$m^n - \binom {m} {1}(m-1)^n +\binom {m} {2}(m-2)^n-\binom {m} {3}(m-3)^n + \cdots $$</p> <p>(<em>where $m$ denotes the number of elements in $\Omega$ and $n$ the number of elements in $\Theta$</em>).</p> <p>Let $A$ represent the desired event; given the facts I've presented, you should be able to get: $$Pr(A)\approx0.27$$ </p>
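<p>Both the inclusion-exclusion count and the final figure are easy to verify in a few lines (the Monte Carlo part is just an independent cross-check; the trial count and seed are arbitrary):</p>

```python
import math
import random

def p_all_faces(n_dice=10, faces=6):
    """P(all faces appear) = (# surjections from n_dice rolls onto the faces) / faces**n_dice."""
    surjections = sum((-1) ** k * math.comb(faces, k) * (faces - k) ** n_dice
                      for k in range(faces + 1))
    return surjections / faces ** n_dice

exact = p_all_faces()
print(round(exact, 4))  # 0.2718

random.seed(0)
trials = 100_000
hits = sum(len({random.randint(1, 6) for _ in range(10)}) == 6 for _ in range(trials))
print(hits / trials)  # close to the exact value
```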
387,749
<p>This comes from Guillemin and Pollack's book Differential Topology. The book claims that one cannot parametrize a unit circle by a single map. I thought we could (by a single angle $\theta$). </p> <p>I think one possible answer might be the fact that if we let $\theta$ in [0, 2$\pi$), when $\theta$ approaches 2$\pi$, $f(\theta)$ approaches $f(0)$. But is this a problem?</p> <p>Also for $n$-spheres, why is there always a point that can't be covered by a single map (I know we can always cover it by two maps, one being the stereographic projection)?</p> <p>Thanks a lot for your help!</p>
GCD
74,703
<p>Remember that parametrizations are supposed to be homeomorphisms onto their images.</p> <p>The angle map $\theta\rightarrow e^{i\theta}$ (to use complex coordinates), as a map from $[0,2\pi)$ onto the circle $S^1$ is continuous and bijective. The issue is that it is not a homeomorphism. You can see this explicitly by noting that the inverse map is not continuous. If you approach $(1,0)\in S^1$ from "above", the angle approaches $0$, but if you approach from "below", the angle approaches $2\pi$.</p> <p>There is a more abstract way to see this: $S^1$ is compact, so it cannot be homeomorphic to $[0,2\pi)$, which is not.</p> <p>This also shows that you cannot parametrize the $n$-sphere $S^n$ by a single chart. $\phi:S^n\rightarrow\mathbb{R}^n$. If you could, the image $\phi(S^n)$ would be compact and thus closed in $\mathbb{R}^n$. On the other hand, as $\phi$ is a homeomorphism and $S^n$ is open in itself, the image $\phi(S^n)$ would also be open in $\mathbb{R}^n$. By connectedness, a closed and open set in $\mathbb{R}^n$ is all of $\mathbb{R}^n$, so this would mean the sphere is homeomorphic to the full Euclidean space. But again, one is compact and the other is not.</p> <p>This is a very common type of argument in topology.</p>
2,154,608
<blockquote> <p>Let $a$, $b$ and $c$ be non-negative numbers such that $a^3+b^3+c^3=3$. Prove that: $$a^4b+b^4c+c^4a\leq3$$</p> </blockquote> <p>This inequality similar to the following.</p> <blockquote> <p>Let $a$, $b$ and $c$ be non-negative numbers such that $a^2+b^2+c^2=3$. Prove that: $$a^3b+b^3c+c^3a\leq3,$$ which follows from the following identity. $$(a^2+b^2+c^2)^2-3(a^3b+b^3c+c^3a)=\frac{1}{2}\sum_{cyc}(a^2-b^2-ab-ac+2bc)^2.$$</p> </blockquote> <p>I tried Rearrangement.</p> <p>Let $\{a,b,c\}=\{x,y,z\}$, where $x\geq y\geq z$.</p> <p>Hence, $$a^4b+b^4c+c^4a=a^3\cdot ab+b^3\cdot bc+c^3\cdot ca\leq x^3\cdot xy+y^3\cdot xz+z^3\cdot yz=$$ $$=y(x^4+y^2xz+z^4)$$ and I don't see what is the rest.</p> <p>Thank you!</p>
Parcly Taxel
357,390
<p>Numberphile made <a href="https://youtu.be/2s4TqVAbfz4" rel="nofollow noreferrer">a video</a> on exactly this topic a while back. The idea is to consider the dihedral angle between adjacent faces for each of the five Platonic solids &ndash; a polychoron will consist of instances of one such solid. For the polychoron to be a valid polychoron, at least three cells must meet at an edge and the sum of dihedral angles in 3-dimensional space must be strictly less than 360°.</p> <ul> <li><p><strong>Tetrahedron</strong> (dihedral angle 70.5°): three, four or five tetrahedra can share an edge in 3D without overlapping. Bent into 4D space, these configurations give the 5-cell, 16-cell and 600-cell respectively.</p></li> <li><p><strong>Cube</strong> (90°): three cubes around an edge yield the 8-cell or tesseract. Four around an edge, however, fill it up completely, yielding the ordinary cubic honeycomb and not any polychoron.</p></li> <li><p><strong>Octahedron</strong> (109.5°): only three octahedra can fit around an edge, yielding the 24-cell.</p></li> <li><p><strong>Dodecahedron</strong> (116.6°): the situation is the same as for the octahedron, and yields the 120-cell.</p></li> <li><p><strong>Icosahedron</strong> (138.2°): since this is larger than 120°, three of these cannot fit around an edge in the first place, so no regular polychoron has icosahedral cells.</p></li> </ul>
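<p>The dihedral angles quoted above have standard closed forms, so the whole case analysis can be reproduced in a few lines (the search cutoff of $10$ cells per edge is an arbitrary but more than sufficient bound):</p>

```python
import math

# Standard closed forms for the dihedral angles of the Platonic solids.
dihedral = {
    "tetrahedron": math.degrees(math.acos(1 / 3)),
    "cube": 90.0,
    "octahedron": math.degrees(math.acos(-1 / 3)),
    "dodecahedron": math.degrees(math.acos(-1 / math.sqrt(5))),
    "icosahedron": math.degrees(math.acos(-math.sqrt(5) / 3)),
}

# A valid polychoron needs k >= 3 cells around an edge with k * angle < 360 degrees.
cells_per_edge = {name: [k for k in range(3, 10) if k * angle < 360]
                  for name, angle in dihedral.items()}
for name, ks in cells_per_edge.items():
    print(f"{name} ({dihedral[name]:.1f} deg): {ks}")
```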
1,356,783
<p>What kind of mathematical object is this substitution (is it a function, or something else)? We assume a set of variables exists.</p>
dtldarek
26,306
<p>Let us construct a toy language with terms defined recursively as</p> <p>$$\mathcal{T} = x \mid y \mid \mathtt{f(}t_1\mathtt{)} \mid \mathtt{g(}t_2\mathtt{,}t_3\mathtt{)} $$ where $x,y \in \mathcal{V}$ are variables and $t_1, t_2, t_3 \in \mathcal{T}$ are terms. Now we could define substitution for our toy language as any function $\phi : \mathcal{V} \to \mathcal{T}$ and extend it to the whole of $\mathcal{T}$ as follows: \begin{align} \phi'(x) &amp;= \phi(x)\\ \phi'(y) &amp;= \phi(y)\\ \phi'\Big(\mathtt{f(}t_1\mathtt{)}\Big) &amp;= \mathtt{f(}\phi'(t_1)\mathtt{)} \\ \phi'\Big(\mathtt{g(}t_2\mathtt{,}t_3\mathtt{)}\Big) &amp;= \mathtt{g(}\phi'(t_2)\mathtt{,}\phi'(t_3)\mathtt{)} \end{align}</p> <p>As our terms are finite, it's easy to see that $\phi' : \mathcal{T} \to \mathcal{T}$ is a properly defined function that transforms terms into some other terms.</p> <p>People usually don't distinguish between $\phi$ and $\phi'$ unless they need to be extra formal or extra cautious, and both of these functions are called "substitution". In other words, a substitution is any function $\mathcal{V} \to \mathcal{T}$ or its extension to some bigger set of terms.</p> <p>Still, this is only one particular approach; there are other ways to define substitution, and there are some complications when you introduce quantifiers and other constructs, but I hope you got the general meaning.</p> <p>Also, here I use "function", but in many texts on logic there are various levels of meta-languages, and to avoid confusion functions on one level are called just "functions" while on some other levels they may be called "transforms", "mappings", etc. 
For example, if for whatever reason we would like to call the symbol $\mathtt{f}$ a function, then we may call $\phi$ and $\phi'$ transformations.</p> <p><strong>Edit:</strong></p> <p>Let me define $\mathcal{T}$ and $\phi'$ more precisely (the index $i$ below is often called <em>nesting depth</em>).</p> <p>\begin{align} T_0 &amp;= \mathcal{V} \\ T_{i} &amp;= \Big\{\mathtt{f(}t'\mathtt{)} \ \Big|\ t' \in T_j, j &lt; i \Big\}\\ &amp;\hspace{2pt}\cup \Big\{\mathtt{g(}t''\mathtt{,}t'''\mathtt{)} \ \Big|\ t'' \in T_j, j &lt; i, t''' \in T_k, k &lt; i \Big\} \\ \mathcal{T} &amp;= \bigcup_{i \in \mathbb{N}}T_i \end{align} Now, take arbitrary function $\phi : \mathcal{V} \to \mathcal{T}$, define $\phi'_i : \left(\bigcup_{j = 0}^{i}T_j\right) \to \mathcal{T}$ as follows \begin{align} \phi'_0(t) &amp;= \phi(t)\\ \phi'_i(t) &amp;= \phi'_{i-1}(t) &amp; \text{ for any }t \in \mathcal{T}_{i-1}\\ \phi'_i\Big(\mathtt{f(}t_1\mathtt{)}\Big) &amp;= \mathtt{f(}\phi'_{i-1}(t_1)\mathtt{)} &amp; \text{ otherwise}\\ \phi'_i\Big(\mathtt{g(}t_2\mathtt{,}t_3\mathtt{)}\Big) &amp;= \mathtt{g(}\phi'_{i-1}(t_2)\mathtt{,}\phi'_{i-1}(t_3)\mathtt{)} &amp;\text{ otherwise} \end{align} and finally set $\phi' = \bigcup_{i \in \mathbb{N}} \phi'_i$. Of course, there are some things to consider, moreover, please be aware that this is not the only way to define things (for example, not all recursive definitions allow such simple interpretation). On the other hand, the above constitutes a precise definition. I hope the inductive scheme behind it (which works because the nesting depth is a well-order) is apparent now, perhaps this will clear your doubts.</p> <p><strong>Edit 2:</strong></p> <p>In response to:</p> <blockquote> <p>But there is problem I think. In theorem 8.4 function $ρ$ needs to be defined from natural number to a set $A$ (as you defined). But then function $h$ we get is also from $\mathbb{N}$ to $A$. 
What is range of $\mathcal{F}$ here and what will be the range of final function $h$ (here $T_i$)?</p> </blockquote> <p>where I defined $\mathcal{F}$ as </p> <p>$$\mathcal{F}(B) = \{\mathtt{f(}t'\mathtt{)} \mid t' \in B\} \cup \{\mathtt{g(}t''\mathtt{,}t'''\mathtt{)} \mid t'', t''' \in B\}.$$</p> <p>What is the symbol $\mathtt{f}$ in the definition of $\mathcal{T}$ (or for that matter $\mathcal{F}$)? Is there any axiom that says I can use it? In the standard approach there are only sets or things defined using sets like $\mathbb{N}$. In other words, we have to somehow define $\mathtt{f}$ as a set, number or something similar. What we do is create an informal mapping between terms and numbers; for example $$x \to 7, y \to 1, \mathtt{(} \to 2, \mathtt{)} \to 3, \mathtt{,} \to 4, \mathtt{f} \to 5, \mathtt{g} \to 6$$ would imply that $\mathtt{g(}x\mathtt{,f(}y\mathtt{))}$ is $627452133$ (recall that $\mathtt{f}$ is unary in our toy language). Then, when we talk about terms, we talk about specifically structured numbers, but <em>think</em> about them as terms. If we were to use the above mapping we get $T : \mathbb{N} \to 2^\mathbb{N}$, $\mathcal{T} \subset \mathbb{N}$ and $\mathcal{F} : 2^\mathbb{N} \to 2^\mathbb{N}$. To be compatible with Munkres (compare with the second example after theorem 8.4) set</p> <p>\begin{align} \rho(f) &amp;= \mathcal{F}\big(f(m)\big) \quad \text{ where } f : \{0,\ldots,m\} \to \mathbb{N} \\ T(0) &amp;= \mathcal{V} \\ T(i) &amp;= \rho\Big(T\big|_{\{0,\ldots,i-1\}}\Big) \end{align}</p> <p><strong>Edit 3:</strong></p> <p>In response to:</p> <blockquote> <p>But we were defining terms. 
How do you first define $\mathcal{F}$ on $\mathcal{T}$ without knowing $\mathcal{T}$?</p> </blockquote> <p>We do not need to know $\mathcal{T}$ to define $\mathcal{F}$, we only need to know that $\mathcal{T}⊂\mathbb{N}$.</p> <p>Let $\bullet_{10}$ be the base-10 concatenation of numbers, that is, $x \bullet_{10} y = x\cdot 10^{|y|_{10}}+ y,$ where $|y|_{10}$ is the length of $y$ in base-10 notation; for example $123 \bullet_{10} 456 = 123456$. Now define $\mathcal{F} : 2^\mathbb{N} \to 2^\mathbb{N}$ as</p> <p>$$\mathcal{F}(B) = \{52\bullet_{10} t' \bullet_{10} 3 \mid t' \in B\} \cup \{62\bullet_{10} t''\bullet_{10} 4 \bullet_{10} t'''\bullet_{10} 3 \mid t'', t''' \in B\}.$$</p> <p>I hope this helps $\ddot\smile$</p>
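<p>The extension $\phi \mapsto \phi'$ described above is easy to mirror in code. Here is a minimal sketch (the representation and names are my choice) in which a variable is a string, a compound term is a tuple, and unmapped variables are left fixed:</p>

```python
# Terms: a variable is a string; f(t) is ("f", t); g(t, u) is ("g", t, u).
def subst(phi, term):
    """Extend the variable map phi to all terms, like phi' in the answer."""
    if isinstance(term, str):              # a variable: apply phi directly
        return phi.get(term, term)         # identity on unmapped variables
    head, *args = term                     # a compound term: recurse on arguments
    return (head, *(subst(phi, t) for t in args))

phi = {"x": ("f", "y"), "y": "x"}
print(subst(phi, ("g", "x", ("f", "y"))))  # ('g', ('f', 'y'), ('f', 'x'))
```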
2,468,329
<p>Let $F$ be a field and choose an element $u \in F$. Consider the function $\epsilon_u:F[x]\rightarrow F$ given by $$\epsilon_u(a_nx^n+...+a_0)=a_nu^n+...+a_0$$</p> <p>I am asked to show that this is surjective but not injective, and to find its kernel.</p> <p>My idea is that this function will just send every constant polynomial in $F$ to itself, hence the surjectivity. Since this subset of the domain already maps onto the entire range, the function cannot possibly be injective. I am not sure if this is the right idea or how to formalize it, and I am also not sure how to find the kernel.</p>
Community
-1
<p>Surjective: consider what happens to CONSTANT polynomials (polynomials of degree zero) under your mapping. Not injective: Consider the two polynomials $p(x) = x$ (of degree $1$) and $q(x) = u$ (a polynomial of degree zero, i.e. a constant). Are $p$ and $q$ equal? What are their images under the mapping?</p> <p>Kernel: a polynomial $p(x)$ is mapped to $0$ when $p(u) = 0$. This is equivalent to $p(x)$ being divisible by $x-u$. The kernel is the principal ideal generated by $x-u$.</p>
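<p>The hint can be checked with a small computation; this sketch (the function and variable names are mine) represents polynomials as coefficient lists, lowest degree first:</p>

```python
from fractions import Fraction

def ev(p, u):
    """Evaluate a polynomial [a0, a1, ..., an] at u by Horner's scheme."""
    acc = Fraction(0)
    for coeff in reversed(p):
        acc = acc * u + coeff
    return acc

u = Fraction(3)
p = [Fraction(0), Fraction(1)]   # p(x) = x
q = [u]                          # q(x) = u, a constant polynomial
print(ev(p, u), ev(q, u))        # both give 3, so the map is not injective
# Surjectivity: any c in F is hit by the constant polynomial [c].
# Kernel: p(u) = 0 exactly when (x - u) divides p (remainder theorem).
```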
719,055
<p>I'm trying to show that if solid tori $T_1, T_2$, with $T_i=S^1 \times D^2$, are glued by homeomorphisms between their respective boundaries, then the homeomorphism type of the identification space depends on the choice of homeomorphism up to, I think, isotopy (please forgive the rambling; I'm trying to put together a lot of different things from different sources, and I don't yet have a very coherent general picture). I first thought of Lens spaces, but the gluing here is not done by a homeomorphism.</p> <p>I have some fuzzy ideas here that I would like to make precise: I know this has to do with Heegaard splittings; specifically, this is a genus-1 splitting (actually, a genus-1 gluing) and the gluing may be determined by a mapping in $SL(2,\mathbb Z)$, which determines the induced map on the top homology, and different induced maps would result in different homeomorphism types for the glued spaces.</p> <p>I think we can also see this from the perspective of Dehn surgery (please feel free to correct anything I write here), where we remove a link $L$ and a tubular neighborhood $T(L)$ of $L$, and then glue another torus. I know then an $n$-framing is equivalent to removing a solid torus, twisting $n$ times and then regluing. But it's obvious from the post that I don't know how to show that the homeomorphism class of the space glued along $h: \partial T_1 \rightarrow \partial T_2$ depends on $h$.</p> <p>Thanks, and sorry for the rambling (not my fault, I was born a rambling man.)</p>
Community
-1
<p>In terms of differentials (in the single-variable case), $\frac{dy}{dx}$ is the unique scalar with the property that $\frac{dy}{dx}dx = dy$.</p> <p>$\frac{dy}{du} \frac{du}{dx}$ therefore has the property that</p> <p>$$\frac{dy}{du} \frac{du}{dx} dx = \frac{dy}{du} du = dy $$</p> <p>hence $\frac{dy}{du}\frac{du}{dx} = \frac{dy}{dx}$.</p> <p>You can't really justify the result by rearranging the expression like you would with fractions (e.g. by combining them into a single 'fraction'), which is what people mean when they say things like "you can't just cancel them". However, you can still prove (again this only makes sense in the single-variable case) that rearrangements are equal: e.g.</p> <p>$$ \frac{dw}{dx} \frac{dy}{dz} = \frac{dw}{dz} \frac{dy}{dx} $$</p> <p>(note that you could use this identity to prove your identity, because $\frac{du}{du} = 1$)</p>
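<p>The displayed relations can be checked numerically with finite differences; the concrete functions below are my own choice, not part of the answer:</p>

```python
import math

def deriv(f, x, h=1e-6):
    """Symmetric finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

u = lambda x: x * x            # u = x^2
y = lambda x: math.sin(x * x)  # y = sin(u), written as a function of x

x0 = 0.7
dy_dx = deriv(y, x0)
dy_du = math.cos(u(x0))        # d(sin u)/du, evaluated at u(x0)
du_dx = deriv(u, x0)
print(dy_dx, dy_du * du_dx)    # the two values agree closely
```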
719,055
<p>I'm trying to show that if solid tori $T_1, T_2$, with $T_i=S^1 \times D^2$, are glued by homeomorphisms between their respective boundaries, then the homeomorphism type of the identification space depends on the choice of homeomorphism up to, I think, isotopy (please forgive the rambling; I'm trying to put together a lot of different things from different sources, and I don't yet have a very coherent general picture). I first thought of Lens spaces, but the gluing here is not done by a homeomorphism.</p> <p>I have some fuzzy ideas here that I would like to make precise: I know this has to do with Heegaard splittings; specifically, this is a genus-1 splitting (actually, a genus-1 gluing) and the gluing may be determined by a mapping in $SL(2,\mathbb Z)$, which determines the induced map on the top homology, and different induced maps would result in different homeomorphism types for the glued spaces.</p> <p>I think we can also see this from the perspective of Dehn surgery (please feel free to correct anything I write here), where we remove a link $L$ and a tubular neighborhood $T(L)$ of $L$, and then glue another torus. I know then an $n$-framing is equivalent to removing a solid torus, twisting $n$ times and then regluing. But it's obvious from the post that I don't know how to show that the homeomorphism class of the space glued along $h: \partial T_1 \rightarrow \partial T_2$ depends on $h$.</p> <p>Thanks, and sorry for the rambling (not my fault, I was born a rambling man.)</p>
DanielV
97,045
<p>The relation $$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$$</p> <p>requires that $u$ has a nonzero relationship with $x$ and $y$. This doesn't have to be explicitly stated when the chain rule is written as $$D_x f(g(x)) = f'g(x) \cdot g'(x)$$</p> <p>Example:</p> <p>Consider using $g = \text{gravitational force}$ and $r = \text{distance}$. You might know:</p> <p>$$g = G\frac{m_1m_2}{r^2}$$</p> <p>What if $u$ is your temperature? Perhaps your temperature does not change based on your location. $$\frac{du}{dr} = 0$$ $$\frac{dg}{du} = 0$$</p> <p>If you applied the chain rule without realizing that $u$ is an independent variable, you'd get:</p> <p>$$\begin{align}\frac{dg}{dr} &amp;= \frac{dg}{du} \cdot \frac{du}{dr}\\ &amp;= 0 \cdot 0 \end{align}$$</p> <p>...and I think we can agree that gravitational force is not universally constant. So when using the differential form of the chain rule, make sure you are not using an independent variable.</p>
340,575
<p>My exam is on Thursday, and I have just a few questions left. I would appreciate help a lot! Can anyone please help me solve this task? You can see the picture below. The task is to find the two radii. I thought about working with chords (for example, that the chord AC has the same length as some other chord), but I couldn't really find anything useful. <img src="https://i.stack.imgur.com/nOGlt.png" alt="enter image description here"></p>
Adi Dani
12,848
<p>$AM_1M_2B$ is a right-angled trapezoid; from the figure we can see that $$\frac{r_1+r_2}{2} AB=\frac {AB+r_1+r_2}{2}r_1=AM_1C+BM_2C+ABC $$ where $AM_1C$, $BM_2C$ and $ABC$ denote the areas of the corresponding triangles, and $$AB=\sqrt{12^2+9^2}=15$$ $$AM_1C=6\sqrt{r_1^2-6^2}$$ $$BM_2C=9/2\sqrt{r_2^2-(9/2)^2}$$ $$ABC=54$$ so we get the system</p> <p>$$\frac{r_1+r_2}{2} 15=\frac {15+r_1+r_2}{2}r_1$$ $$\frac {15+r_1+r_2}{2}r_1=6\sqrt{r_1^2-6^2}+9/2\sqrt{r_2^2-(9/2)^2}+54 $$</p> <p>$$15(r_1+r_2)=(15+r_1+r_2)r_1$$ $$(15+r_1+r_2)r_1=12\sqrt{r_1^2-6^2}+9\sqrt{r_2^2-(9/2)^2}+108 $$</p>
1,116,022
<p>I've always had this doubt. It's perfectly reasonable to say that, for example, 9 is bigger than 2.</p> <p>But does it ever make sense to compare a real number and a complex/imaginary one?</p> <p>For example, could one say that $5+2i&gt; 3$ because the real part of $5+2i $ is bigger than the real part of $3$? Or is it just a senseless statement?</p> <p>Can it be stated that, say, $20000i$ is bigger than $6$ or does the fact that one is imaginary and the other is natural make it impossible to compare their 'sizes'?</p> <p>It would seem that the 'sizes' of numbers of any type (real, rational, integer, natural, irrational) can be compared, but once imaginary and complex numbers come into the picture, it becomes a bit counter-intuitive for me.</p> <p>So, does it ever make sense to talk about a real number being 'more than' or 'less than' a complex/imaginary one?</p>
Ross Millikan
1,827
<p>You can put (partial) orders on the complex numbers. One choice is to compare the real parts and ignore the complex ones. Another is to use the lexicographic order, comparing the real parts and then comparing the imaginary ones if the real parts are equal. Another is to use the modulus. There are many more. The distinction with the order on the reals (or subsets of the reals) is that the order relation is compatible with addition and multiplication. You can't do that in the complex numbers. The simple proof is to ask whether $i$ is greater or less than $0$. In either case, $i^2=-1$ should be greater than zero.</p>
1,116,022
<p>I've always had this doubt. It's perfectly reasonable to say that, for example, 9 is bigger than 2.</p> <p>But does it ever make sense to compare a real number and a complex/imaginary one?</p> <p>For example, could one say that $5+2i&gt; 3$ because the real part of $5+2i $ is bigger than the real part of $3$? Or is it just a senseless statement?</p> <p>Can it be stated that, say, $20000i$ is bigger than $6$ or does the fact that one is imaginary and the other is natural make it impossible to compare their 'sizes'?</p> <p>It would seem that the 'sizes' of numbers of any type (real, rational, integer, natural, irrational) can be compared, but once imaginary and complex numbers come into the picture, it becomes a bit counter-intuitive for me.</p> <p>So, does it ever make sense to talk about a real number being 'more than' or 'less than' a complex/imaginary one?</p>
dustin
78,317
<p>Since $\mathbb{R}\subset\mathbb{C}$, every $x\in\mathbb{R}$ can be written as $x + i\cdot 0$. Now if we prescribe the lexicographical (dictionary) ordering, we can compare them. </p> <p>Let $z,w\in\mathbb{C}$ and $z = x+iy$ and $w=a+bi$. Then the lexicographical ordering is $z &lt; w$ if $x&lt;a$ or $x=a$ and $y&lt;b$, $z = w$ if $x=a$ and $y=b$, and $z&gt;w$ otherwise.</p>
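<p>As a concrete illustration (this sketch is mine), the lexicographic ordering is exactly a sort by the key $(\operatorname{Re} z, \operatorname{Im} z)$:</p>

```python
def lex_key(z):
    """Order complex numbers by real part, then by imaginary part."""
    return (z.real, z.imag)

zs = [2 + 1j, 1 + 5j, 2 - 3j, 1 + 0j]
print(sorted(zs, key=lex_key))  # [(1+0j), (1+5j), (2-3j), (2+1j)]
```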
1,116,022
<p>I've always had this doubt. It's perfectly reasonable to say that, for example, 9 is bigger than 2.</p> <p>But does it ever make sense to compare a real number and a complex/imaginary one?</p> <p>For example, could one say that $5+2i&gt; 3$ because the real part of $5+2i $ is bigger than the real part of $3$? Or is it just a senseless statement?</p> <p>Can it be stated that, say, $20000i$ is bigger than $6$ or does the fact that one is imaginary and the other is natural make it impossible to compare their 'sizes'?</p> <p>It would seem that the 'sizes' of numbers of any type (real, rational, integer, natural, irrational) can be compared, but once imaginary and complex numbers come into the picture, it becomes a bit counter-intuitive for me.</p> <p>So, does it ever make sense to talk about a real number being 'more than' or 'less than' a complex/imaginary one?</p>
fleablood
280,126
<p>Simple answer. No. The complex numbers can not be an ordered field. [if $a \ge 0$ then $a^2 = a*a \ge 0$. If $a &lt; 0$ then $a^2 = a*a &gt; 0$ so $a^2 \ge 0$ for all $a$ so $1 = 1^2 &gt; 0$ and $-1 &lt; 0$. If $\mathbb C$ were an ordered field, $i^2 &gt; 0$ so $-1 &gt; 0$. Impossible. $\mathbb C$ can not be an ordered field.]</p> <p>But $\mathbb C$ can be ordered without holding to the field axioms. One simple way is the "dictionary" ordering. $a + bi &gt; c + di$ if $a &gt; c$ or $a = c$ and $b &gt; d$. This is consistent with the order on R. But we can't do much with it. It doesn't follow that if $z &lt; w$ and $v &gt; 0$ that $zv &lt; wv$. It doesn't follow <em>at all</em>.</p> <p>Or we can have partial orders. $z &gt; w$ if $|z| &gt; |w|$. But this isn't total order. $z &lt; w$, $z &gt; w$, $z = w$ are not exhaustive and mutually exclusive; we can have cases where none of the three apply.</p>
1,116,022
<p>I've always had this doubt. It's perfectly reasonable to say that, for example, 9 is bigger than 2.</p> <p>But does it ever make sense to compare a real number and a complex/imaginary one?</p> <p>For example, could one say that $5+2i&gt; 3$ because the real part of $5+2i $ is bigger than the real part of $3$? Or is it just a senseless statement?</p> <p>Can it be stated that, say, $20000i$ is bigger than $6$ or does the fact that one is imaginary and the other is natural make it impossible to compare their 'sizes'?</p> <p>It would seem that the 'sizes' of numbers of any type (real, rational, integer, natural, irrational) can be compared, but once imaginary and complex numbers come into the picture, it becomes a bit counter-intuitive for me.</p> <p>So, does it ever make sense to talk about a real number being 'more than' or 'less than' a complex/imaginary one?</p>
Abhishek Choudhary
452,208
<p>We can compare complex numbers just like we compare two-digit numbers.</p> <p>For example, <span class="math-container">$53 &gt; 42$</span> can be seen as <span class="math-container">$5 + 3i &gt; 4 + 2i$</span>.</p> <p>So, for <span class="math-container">$Z_{1} = a_{1} + ib_{1}$</span> and <span class="math-container">$Z_{2} = a_{2} + ib_{2}$</span> choose a <span class="math-container">$k$</span> such that <span class="math-container">$k&gt;a_{1}$</span>, <span class="math-container">$k&gt;b_{1}$</span>, <span class="math-container">$k&gt;a_{2}$</span>, <span class="math-container">$k&gt;b_{2}$</span>, and then the complex numbers can be encoded as the real numbers <span class="math-container">$ka_{1} + b_{1}$</span> and <span class="math-container">$ka_{2} + b_{2}$</span>; now compare these just as you would compare real numbers.</p>
2,651,394
<p>I am attempting to create a function in Matlab which turns all matrix elements in a matrix to '0' if the element is not symmetrical. However, the element appears to not be reassigning.</p> <pre><code>function [output_ting] = maker(a) [i,j] = size(a); if i ~= j disp('improper input!') else end c = 1; b = a.'; while c &lt; length(a) + 1 if a(c) == b(c) c = c + 1; continue else a(c) = 0; c = c + 1; end end disp(a) end </code></pre>
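<p>Language aside, the intended transformation (keep entries that equal their transposed counterparts, zero out the rest) can be sketched as follows; this is my own Python illustration of the logic, not the asker's code. Note also that in Matlab, <code>length(a)</code> of a square matrix is the side length rather than the element count (<code>numel(a)</code>), so the loop above only visits the first column's worth of linear indices.</p>

```python
def symmetrize_mask(a):
    """Zero out entries a[i][j] that differ from a[j][i]; assumes a is square."""
    n = len(a)
    return [[a[i][j] if a[i][j] == a[j][i] else 0 for j in range(n)]
            for i in range(n)]

a = [[1, 2, 3],
     [2, 5, 0],
     [9, 0, 7]]
print(symmetrize_mask(a))  # [[1, 2, 0], [2, 5, 0], [0, 0, 7]]
```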
Gwopmeat
532,852
<p>Your answer is correct, but the formal definition of a derivative is the ugliness provided in the picture above, and not the nice and neat cot(x) you described. Both methods, the chain rule and the formal definition of a derivative, will always give the same result: the derivative of a univariate function.</p>
4,331,790
<blockquote> <p><strong>Question 23:</strong> Which one of the following statements holds true if and only if <span class="math-container">$n$</span> is a prime number? <span class="math-container">$$ \begin{alignat}{2} &amp;\text{(A)} &amp;\quad n|(n-1)!+1 \\ &amp;\text{(B)} &amp;\quad n|(n-1)!-1 \\ &amp;\text{(C)} &amp;\quad n|(n+1)!-1 \\ &amp;\text{(D)} &amp;\quad n|(n+1)!+1 \end{alignat} $$</span></p> </blockquote> <p>I ruled out choices C and D because <span class="math-container">$n$</span> is a natural factor of <span class="math-container">$(n+1)!$</span>, so <span class="math-container">$n$</span> won't divide <span class="math-container">$(n+1)!-1$</span> or <span class="math-container">$(n+1)!+1$</span> (that could be the case if <span class="math-container">$n$</span> were <span class="math-container">$1$</span>, but this question concerns only primes). <br>I would like to know how to choose between choices A and B. Alternative ways to approach the problem are equally welcome.</p>
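<p>Wilson's theorem (for $n \ge 2$, $n$ is prime if and only if $n \mid (n-1)!+1$) is what singles out option (A); a brute-force comparison over small $n$ (my own sketch) agrees:</p>

```python
from math import factorial

def holds_A(n):
    """Option (A): n divides (n-1)! + 1."""
    return (factorial(n - 1) + 1) % n == 0

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Option (A) matches primality exactly (Wilson's theorem).
for n in range(2, 20):
    assert holds_A(n) == is_prime(n)
print("(A) agrees with primality for 2 <= n < 20")
```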
Diger
427,553
<p>Calculating the generating function for the LHS has already been done following your link. It suffices to calculate the generating function for the RHS i.e.</p> <p><span class="math-container">$$\sum_{n=0}^\infty \mathcal{J}_n x^n = e\sum_{n=0}^\infty x^n \sum_{k=0}^\infty \frac{(-1)^k}{k!} \binom{n+k}{k} = e \sum_{k=0}^\infty \frac{(-1)^k}{k!} \sum_{n=0}^\infty \binom{n+k}{k} x^n \\ = \frac{e}{1-x} \sum_{k=0}^\infty \frac{(-1)^k}{k!} \frac{1}{(1-x)^k} = \frac{e}{1-x} \, e^{-\frac{1}{1-x}} \, .$$</span></p> <p>Here we used the fact that <span class="math-container">$$\sum_{n=0}^\infty \binom{n+k}{k} x^n = \frac{1}{(1-x)^{k+1}} \,.$$</span></p> <hr /> <p>Interchanging summation and integration is justified, since <span class="math-container">$$\left|\sum_{k=0}^n \frac{(-t)^k}{k!^2} \right| \leq \sum_{k=0}^n \frac{t^k}{k!^2} \leq \sum_{k=0}^\infty \frac{t^k}{k!^2} \leq \left( \sum_{k=0}^\infty \frac{t^{k/2}}{k!} \right)^2 = e^{2\sqrt{t}}$$</span> for any integer <span class="math-container">$n$</span>. You can then make use of the DCT.</p>
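<p>The resulting closed form can be sanity-checked numerically by truncating both the $k$-sum and the power series; the truncation limits and the sample point $x=0.1$ below are my own choice:</p>

```python
from math import comb, factorial, e, exp

def J(n, kmax=120):
    """Truncated version of the series defining J_n."""
    return e * sum((-1)**k / factorial(k) * comb(n + k, k) for k in range(kmax))

x = 0.1
series = sum(J(n) * x**n for n in range(40))   # truncated power series
closed = e / (1 - x) * exp(-1 / (1 - x))       # claimed generating function
print(series, closed)
```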
549,299
<p>I'm searching (I searched this site first) for an example of fields $F \subseteq K \subseteq L$ where $L/K$ and $K/F$ are normal but $L/F$ is not normal. Presenting just $F$ or $L$, instead of all three fields, would help me too. Thanks for your attention.</p>
Community
-1
<p><strong>Hint</strong>: Consider $$\Bbb{Q} \subseteq \cdots \subseteq \Bbb{Q}[\sqrt[4]{2}]$$</p>
3,204,950
<p>This question arose from Physics, where the force on an object attached to a spring is proportional to the displacement from the equilibrium (that is, the rest position). Also, if the displacement from the equilibrium is positive, the force will be negative, as it tries to pull the object back (i.e. if you pull a spring, the force is opposite to your direction of pull).</p> <p>Therefore, it can be said that:</p> <p><span class="math-container">$$F \propto -x$$</span> where <span class="math-container">$F$</span> is the force and <span class="math-container">$x$</span> is the displacement from equilibrium.</p> <p>Is this the same as: <span class="math-container">$$F \propto x$$</span> According to the relation of proportionality, it should be, but my friend says that writing the second one is not correct.</p> <p>Both are equivalent, right?</p>
John Doe
399,334
<p>Yes, proportionality does not tell you anything about the sign of the proportionality constant. This is probably done in physics so that the proportionality constant <span class="math-container">$k$</span> can always be taken to be positive. The constant has a physical interpretation, so it is convenient for it to be positive.</p>
4,417,901
<p>In the first chapter of &quot;Differential Equations, Dynamical Systems and an Introduction to Chaos&quot; by Hirsch, Smale and Devaney, the authors mention the first-order equation <span class="math-container">$x'(t)=ax(t)$</span> and assert that the only general solution to it is <span class="math-container">$x(t)=ke^{at}$</span>. The assertion is proven by differentiating <span class="math-container">$u(t)e^{-at}$</span> to show that it is the constant <span class="math-container">$k$</span> in the general solution mentioned.</p> <p>My question is: how did the authors initially arrive at the asserted solution? They didn't explain it in the book.</p>
Dr. Sundar
1,040,807
<p>Let us define <span class="math-container">$$ I_n = \int\limits_{x = 0}^\infty \ x^n \, e^{-\lambda x} \ dx \tag{1} $$</span></p> <p><strong>Method 1: Using Gamma Functions</strong></p> <p>Use the substitution <span class="math-container">$$ \lambda x = t \ \ \mbox{or} \ \ x = {t \over \lambda} \tag{2} $$</span></p> <p>Then <span class="math-container">$$ {dx \over dt} = {1 \over \lambda} $$</span></p> <p>Using the substitution (2), we can express <span class="math-container">$I_n$</span> in (1) as <span class="math-container">$$ I_n = \int\limits_{t = 0}^\infty \ \left( {t^n \over \lambda^n} \right) \ e^{-t} \ {dt \over \lambda} = {1 \over \lambda^{n + 1}} \ \int\limits_{t = 0}^\infty \ t^n e^{-t} \ dt $$</span></p> <p>It is easy to note that <span class="math-container">$$ I_n = {1 \over \lambda^{n + 1}} \ \int\limits_{t = 0}^\infty \ t^{(n + 1) -1} \ e^{-t} \ dt = {\Gamma(n + 1) \over \lambda^{n + 1}} \ $$</span></p> <p>Hence, the integral is evaluated as <span class="math-container">$$ I_n = {\Gamma(n + 1) \over \lambda^{n + 1}} $$</span> where <span class="math-container">$\Gamma(\cdot)$</span> is the Gamma function.</p> <p>If <span class="math-container">$n$</span> is a non-negative integer, then we deduce that <span class="math-container">$$ I_n = {n! 
\over \lambda^{n + 1}} $$</span> because <span class="math-container">$\Gamma(n+1) = n!$</span> for <span class="math-container">$n \in \mathbf{N}$</span>.</p> <p><strong>Method 2: Using Integration by Parts</strong></p> <p><span class="math-container">$$ I_n = \int\limits_{x = 0}^\infty \ x^n \, e^{-\lambda x} \ dx \tag{1} $$</span></p> <p>Here, we express the integral <span class="math-container">$I_n$</span> as <span class="math-container">$$ I_n = \int\limits_{x = 0}^\infty \ x^n \ d\left[ {e^{-\lambda x} \over -\lambda} \right] $$</span></p> <p>Using Integration by Parts, we evaluate <span class="math-container">$I_n$</span> as <span class="math-container">$$ I_n = \left[ x^n \left( {e^{-\lambda x} \over -\lambda} \right) \right]_0^\infty - \int\limits_0^\infty \left( {e^{-\lambda x} \over -\lambda} \right) \ \left( n x^{n - 1} \right) \ dx $$</span> which can be simplified as <span class="math-container">$$ I_n = \left[ 0 - 0 \right] + {n \over \lambda} \ \int\limits_0^\infty \ x^{n - 1} e^{-\lambda x} \ dx $$</span></p> <p>That is, <span class="math-container">$$ I_n = {n \over \lambda} \ I_{n - 1} $$</span> which is a useful recurrence relation.</p> <p>Proceeding recursively, we obtain <span class="math-container">$$ I_n = {n \over \lambda} {(n - 1) \over \lambda} \cdots {1 \over \lambda} \ I_0 $$</span></p> <p>That is, <span class="math-container">$$ I_n = {n! \over \lambda^n} \ {1 \over \lambda} = {n! \over \lambda^{n + 1}} $$</span> because <span class="math-container">$I_0 = {1 \over \lambda}$</span>, as computed below.</p> <p>Note that <span class="math-container">$$ I_0= \int\limits_{0}^\infty \ e^{-\lambda x} \ dx = \left[ {e^{-\lambda x} \over - \lambda} \right]_0^\infty = {1 \over \lambda}. $$</span></p> <p>Hence, both methods yield the same value for <span class="math-container">$I_n$</span>. <span class="math-container">$\ \ \ \blacksquare$</span></p>
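<p>Both derivations can be cross-checked against direct numerical integration; this is a minimal sketch using composite Simpson's rule, with $n=3$ and $\lambda=2$ as my own sample parameters:</p>

```python
import math

def simpson(f, a, b, steps=8000):
    """Composite Simpson's rule on [a, b]; steps must be even."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

n, lam = 3, 2.0
# The integrand decays fast, so [0, 40] captures the improper integral.
numeric = simpson(lambda x: x**n * math.exp(-lam * x), 0.0, 40.0)
exact = math.factorial(n) / lam**(n + 1)  # n! / lambda^(n+1) = 0.375
print(numeric, exact)
```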
33,582
<p>My code finding <a href="http://en.wikipedia.org/wiki/Narcissistic_number">Narcissistic numbers</a> is not that slow, but it's not in functional style and lacks flexibility: if $n \neq 7$, I have to rewrite my code. Could you give some good advice?</p> <pre><code>nar = Compile[{$}, Do[ With[{ n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g, n2 = a^7 + b^7 + c^7 + d^7 + e^7 + f^7 + g^7}, If[n == n2, Sow@n]; ], {a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}], RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C" ]; Reap[nar@0][[2, 1]] // AbsoluteTiming (*{0.398023, {1741725, 4210818, 9800817, 9926315}}*) </code></pre>
chyanog
2,090
<p>Dynamically generated <code>Do</code> loops:)</p> <pre><code>cnar = With[{n = 7}, With[{var = Array[Unique["x"] &amp;, n]}, With[{n1 = FromDigits@var, n2 = Total[var^n]}, Compile[{Null}, Do[If[n1 == n2, Sow@n1], ##], RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C" ] &amp; @@ MapAt[1 &amp;, Thread[{var, 0, 9}], {1, 2}] ] ] ]; Reap[cnar@0][[2, 1]] // AbsoluteTiming (* CompiledFunction[{$$}, Do[If[10 (10 (10 (10 (10 (10 x3 + x4) + x5) + x6) + x7) + x8) + x9 == x3^7 + x4^7 + x5^7 + x6^7 + x7^7 + x8^7 + x9^7, Sow[10 (10 (10 (10 (10 (10 x3 + x4) + x5) + x6) + x7) + x8) + x9]], {x3, 1, 9}, {x4, 0, 9}, {x5, 0, 9}, {x6, 0, 9}, {x7, 0, 9}, {x8, 0, 9}, {x9, 0, 9}], "-CompiledCode-"] {0.358024, {1741725, 4210818, 9800817, 9926315}} *) </code></pre>
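<p>The list can be cross-checked outside Mathematica as well. This Python sketch (my own) enumerates sorted digit multisets instead of all seven-digit numbers, which keeps the search tiny:</p>

```python
from itertools import combinations_with_replacement

def narcissistic(ndigits):
    """Numbers with ndigits digits equal to the sum of ndigits-th powers of their digits."""
    found = set()
    for combo in combinations_with_replacement(range(10), ndigits):
        s = sum(d**ndigits for d in combo)
        # s is narcissistic iff it has ndigits digits and its digit
        # multiset equals the combination we started from.
        if 10**(ndigits - 1) <= s < 10**ndigits \
                and sorted(int(c) for c in str(s)) == list(combo):
            found.add(s)
    return sorted(found)

print(narcissistic(7))  # [1741725, 4210818, 9800817, 9926315]
```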
4,159,341
<p>There are <span class="math-container">$4$</span> coins in a box. One is a two-headed coin, there are <span class="math-container">$2$</span> fair coins, and the fourth is a biased coin that comes up <span class="math-container">$H$</span> (heads) with probability <span class="math-container">$3/4$</span>.</p> <p>If we randomly select and flip <span class="math-container">$2$</span> coins (without replacement), what is the probability of getting <span class="math-container">$HH$</span>?</p> <p>So I was thinking about this question by using the normal way that dealing with each cases of probability and add them together, but there will be several cases that needed to be calculate, is there a better way to solve this problem?</p>
Ittay Weiss
30,953
<p>Firstly, I'll assume that by &quot;randomly select [and flip] 2 coins&quot; you mean &quot;uniformly select 2 coins&quot;. Now, naively there are <span class="math-container">$12$</span> possible ordered pairs of coins to consider. However, there are only three types of coins. The outcome of <span class="math-container">$HH$</span> though is the same for an <span class="math-container">$xy$</span> pair as for a <span class="math-container">$yx$</span> pair, so you only need to analyse unordered pairs of types. Then you just need to correctly count the occurrences of each type combination. It shouldn't take too long. I don't think there is a very clever way to avoid a direct computation.</p>
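<p>Following the suggestion above, the enumeration is short; this sketch (mine) models the draw as a uniform choice of one of the six unordered pairs of coins:</p>

```python
from fractions import Fraction
from itertools import combinations

# Heads probabilities: two-headed coin, two fair coins, biased coin.
coins = [Fraction(1), Fraction(1, 2), Fraction(1, 2), Fraction(3, 4)]

pairs = list(combinations(coins, 2))   # 6 equally likely unordered pairs
p_hh = sum(p * q for p, q in pairs) / len(pairs)
print(p_hh)  # 11/24
```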
2,148,861
<p>In one of my junior classes, my Mathematics teacher, while teaching Mensuration, told us that <strong>metres square</strong> and <strong>square metres</strong> differ, that <strong>metres cube</strong> and <strong>cubic metres</strong> differ too, and that we should not mix them up. When I asked her the reason behind them being different, she told me that she would discuss it later, but she forgot and I too forgot to remind her. Now I remember all this. Why are they different, and what is the difference between them? I have searched the internet but could not find anything valuable.</p>
Ian Miller
278,461
<p>They have the same meaning.</p> <p>What is sometimes confused is saying "four metres squared" versus "four square metres". The first is squaring the quantity four metres so you get an answer of $16m^2$ while the second is just a way of saying $4m^2$.</p>
82,770
<p>I know of two places where $K_{*}(\mathbb{Z}\pi_{1}(X))$ (the algebraic $K$-theory of the group ring of the fundamental group) makes an appearance in algebraic topology. </p> <p>The first is the Wall finiteness obstruction. We say that a space $X$ is finitely dominated if $id_{X}$ is homotopic to a map $X \rightarrow X$ which factors through a finite CW complex $K$. The Wall finiteness obstruction of a finitely dominated space $X$ is an element of $\tilde{K_{0}}(\mathbb{Z}\pi_{1}(X)) $ which vanishes iff $X$ is actually homotopy equivalent to a finite CW complex.</p> <p>The second is the Whitehead torsion $\tau(W,M)$, which lives in a quotient of $K_{1}(\mathbb{Z}\pi_{1}(W))$. According to the s-cobordism theorem, if $(W; M, M')$ is a cobordism with $H_{*}(W, M) = 0$, then $W$ is diffeomorphic to $M \times [0, 1]$ if and only if the Whitehead torsion $\tau(W, M)$ vanishes.</p> <p>For more details, see the following:</p> <p><a href="http://arxiv.org/abs/math/0008070" rel="noreferrer">http://arxiv.org/abs/math/0008070</a> (A survey of Wall's finiteness obstruction)</p> <p><a href="http://www.maths.ed.ac.uk/~aar/books/surgery.pdf" rel="noreferrer">http://www.maths.ed.ac.uk/~aar/books/surgery.pdf</a> (Algebraic and Geometric Surgery. See Ch. 8 on Whitehead Torsion)</p> <p>My question is twofold.</p> <p>First, is there a high-concept defense of $K_{*}(\mathbb{Z}\pi_{1}(X))$ as a reasonable place for obstructions to topological problems to appear? I realize that $\mathbb{Z}\pi_{1}(X)$ appears because the (cellular, if $X$ is a cell complex) chain groups of the universal cover $\tilde{X}$ are modules over $\mathbb{Z}\pi_{1}(X)$. Is it the case that when working with chain complexes of $R$-modules, we expect obstructions to appear in $K_{*}(R)$?</p> <p>Second, is there an enlightening explanation of the formal similarity between these two obstructions? (Both appear from considering the cellular chain complex of a universal cover and taking an alternating sum.)</p>
Tim Porter
3,502
<p>First a slight quibble: the Whitehead group is the <i>origin</i> of algebraic K-theory, and predates the general stuff by quite a time, so your wording is not quite fair to Whitehead! </p> <p>On your first question, one direction to look is at Waldhausen's K-theory. (The original paper is worth studying, but you should also look at the more recent stuff relating to the connections between that and model categories.) Waldhausen has a section on the links between his groups and the Whitehead simple homotopy theory. (I can provide some references if you need them. There are some useful comments in the nLab if you search on 'Waldhausen'.)</p> <p>For the second question, it is probably a good idea to look at some books on Simple Homotopy Theory and to take a slightly historical perspective (also to glance at the original articles, not just surveys). Whitehead's and then Milnor's papers released a set of tools for studying finite CW-complexes. The role of the chains on the universal cover can be viewed in various ways, but both constructions are part of the general idea at the time of making homotopy theory more 'constructive', and the taming of the fundamental group action in non-Abelian cases was a first step.</p> <p>The origins of simple homotopy theory are pre-World War II with Reidemeister, and note that Whitehead's two-part paper in 1949 was called 'Combinatorial Homotopy Theory' and was intended to mirror 'Combinatorial Group Theory'. </p> <p>This does not give the enlightenment on the 'reasons' for the similarity but may help you to gain some knowledge of the origins of that stuff, and sometimes that helps to see what side alleys have been left unexplored and to provide an overview of the area. 
I have a sneaky idea that there is a K-theory for homotopy 2-types (and beyond) and that the Waldhausen construction is a way into that, but I may be wrong on that.</p> <p>On a completely different tack, have a look at Kapranov and Saito's article:</p> <p>Hidden Stasheff polytopes in algebraic K-theory and in the space of Morse functions , in Higher homotopy structure in topology and mathematical physics (Poughkeepsie, N.Y. 1996) , volume 227 of Contemporary Mathematics , 191–225</p> <p>This links up the Steinberg group (which has a neat motivation from linear algebra) with a lot of homotopical and homological algebra. I will not say more on this as this is already getting a bit long, but do ask if you find these references useful but need further pointers.</p>
376,600
<p>$$\lim_{n\to\infty} \int_{-\infty}^{\infty} \frac{1}{(1+x^2)^n}\,dx $$</p> <p>Mathematica tells me the answer is 0, but how can I go about actually proving it mathematically?</p>
robjohn
13,854
<p><strong>Hint:</strong> Apply <a href="http://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="nofollow">Dominated Convergence</a>.</p> <blockquote> <p><strong>Dominated Convergence:</strong> Suppose $|f_n(x)|\le g(x)$ where $f_n$ is measurable and $g(x)$ is a non-negative integrable function. If $\lim\limits_{n\to\infty}f_n(x)=f(x)$ ($f$ is the pointwise limit of $f_n\,$), then $$ \lim_{n\to\infty}\int f_n\,\mathrm{d}x=\int f\,\mathrm{d}x $$</p> </blockquote>
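As a numerical sanity check of the claimed limit, the integrals can be evaluated directly; a sketch assuming SciPy is available (the dominating envelope here is the integrable function $1/(1+x^2)$):

```python
from scipy.integrate import quad

def I(n):
    # I_n = integral of (1+x^2)^(-n) over the whole real line.
    val, _ = quad(lambda x: (1 + x * x) ** (-n), -float('inf'), float('inf'))
    return val

# I_1 = pi, and the values decay monotonically toward 0 with n,
# as Dominated Convergence predicts (the integrand tends to 0 for x != 0).
values = [I(n) for n in (1, 2, 5, 20, 100)]
print(values)
```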
149,872
<p>How would I show that $|\sin(x+iy)|^2=\sin^2x+\sinh^2y$? </p> <p>Im not sure how to begin, does it involve using $\sinh z=\frac{e^{z}-e^{-z}}{2}$ and $\sin z=\frac{e^{iz}-e^{-iz}}{2i}$?</p>
user59776
59,776
<p>Since $\cos(iy)=\cosh y$ and $\sin(iy)=i\sinh y$, the addition formula gives $\sin(x+iy)=\sin x\cosh y+i\cos x\sinh y$, so \begin{align} |\sin(z)|^2 &amp;= (\sin x \cosh y)^2 +(\cos x \sinh y)^2 \\ &amp;=\sin^2x \cosh^2y+\cos^2x \sinh^2y \\ &amp;= \sin^2x (1+\sinh^2y )+(1-\sin^2x ) \sinh^2y \\ &amp;=\sin^2 x+\sin^2 x \sinh^2y+ \sinh^2y-\sin^2x \sinh^2y \\ &amp;=\sin^2x+\sinh^2y \end{align}</p>
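A floating-point spot-check of the identity at a few points (standard library only):

```python
import cmath
import math

# Spot-check |sin(x+iy)|^2 == sin^2(x) + sinh^2(y) at sample points.
def lhs(x, y):
    return abs(cmath.sin(complex(x, y))) ** 2

def rhs(x, y):
    return math.sin(x) ** 2 + math.sinh(y) ** 2

for x, y in [(0.3, -1.2), (2.0, 0.5), (-1.0, 3.0)]:
    assert abs(lhs(x, y) - rhs(x, y)) < 1e-9 * max(1.0, rhs(x, y))
print("identity holds at the sample points")
```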
595,280
<p>Let $V$ be a vector space over $\Bbb F$, and let $x\not=0,y\not=0 $ be two elements in $V$. </p> <p>I want to show that $x\otimes_{_F} y=y\otimes_{_F} x$ iff $x=ay$ where $a\in \Bbb F$.</p> <p>I know the second direction, so only want to see the first direction (If case).</p>
Shuchang
91,982
<p>Take a basis $e_1,\ldots,e_n$ of $V$ and write $x=\sum_{i}x^ie_i,~y=\sum_{j}y^je_j$; then $$x\otimes y=\sum_{i,j}(x^ie_i)\otimes(y^je_j)=\sum_{i,j}x^iy^je_i\otimes e_j$$ $$y\otimes x=\sum_{i,j}y^jx^ie_j\otimes e_i$$ Since the $e_i\otimes e_j$ form a basis of $V\otimes V$, the symmetry implies $$x^iy^j=x^jy^i$$ That is, $$\frac{x^i}{y^i}=\frac{x^j}{y^j}=a$$ for some constant $a$ (taking the quotients over the indices with nonzero denominator, which exist since $y\ne 0$).</p>
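In coordinates, $x\otimes y$ is the outer product $xy^{\mathsf T}$; a numpy spot-check (library assumed) that the symmetry holds exactly in the proportional case and fails for a generic pair:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
y = 2.5 * x                      # proportional vectors: x = (1/2.5) * y
# x (x) y corresponds to the outer product matrix with entries x_i * y_j.
assert np.allclose(np.outer(x, y), np.outer(y, x))

z = rng.normal(size=4)           # a generic, non-proportional vector
assert not np.allclose(np.outer(x, z), np.outer(z, x))
print("outer products are symmetric only in the proportional case")
```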
125,317
<p>Consider a (locally trivial) fiber bundle $F\to E\overset{\pi}{\to} B$, where $F$ is the fiber, $E$ the total space and $B$ the base space. If $F$ and $B$ are compact, must $E$ be compact? </p> <p>This certainly holds if the bundle is trivial (i.e. $E\cong B\times F$), as a consequence of Tychonoff's theorem. It also holds in all the cases I can think of, such as where $E$ is the Möbius strip, Klein bottle, a covering space and in the more complicated case of $O(n)\to O(n+1)\to \mathbb S^n$ which prompted me to consider this question. I am fairly certain it holds in the somewhat more general case where $F,B$ are closed manifolds. However, I can't seem to find a proof of the general statement. My chief difficulty lies in gluing together the local homeomorphisms to transfer finite covers of $B\times F$ to $E$. Any insight would be appreciated.</p>
Mariano Suárez-Álvarez
274
<p>By local triviality, there is an open covering $\mathcal U$ of $B$ such that for each $U\in\mathcal U$ the open subset $\pi^{-1}(U)$ of $E$ is homeomorphic to $U\times F$ in a way compatible with the projection to $U$. It follows that there is a subbase $\mathcal S$ of the topology of $E$ consisting of open sets each of which is contained in one of these $\pi^{-1}(U)$ and corresponding under those homeomorphisms to an open subset of $U\times F$ of the form $V\times W$ with $V\subseteq U$ open in $B$ and $W\subseteq F$ open in $F$.</p> <p>To prove compactness, it is enough to show that every covering of $E$ by subsets of $\mathcal S$ contains a finite subcovering; this is called <em>Alexander's subbase lemma</em> and is used in one of the proofs of Tychonoff's theorem (for example, in Kelley's book, iirc). Do that!</p>
3,121,103
<p>The integral surface of the first order partial differential equation <span class="math-container">$$2y(z-3)\frac{\partial z}{\partial x}+(2x-z)\frac{\partial z}{\partial y} = y(2x-3)$$</span> passing through the curve <span class="math-container">$x^2+y^2=2x, z = 0$</span> is</p> <ol> <li><span class="math-container">$x^2+y^2-z^2-2x+4z=0$</span> </li> <li><span class="math-container">$x^2+y^2-z^2-2x+8z=0$</span> </li> <li><span class="math-container">$x^2+y^2+z^2-2x+16z=0$</span> </li> <li><span class="math-container">$x^2+y^2+z^2-2x+8z=0$</span></li> </ol> <p>My effort:</p> <p>I find the mulipliers <span class="math-container">$x, 3y, -z$</span> and get the solution <span class="math-container">$x^2+3y^2-z^2=c_1$</span>. How to proceed further? Please help.</p>
JJacquelin
108,514
<p>Four equations are proposed and it is asked to find which one satisfies both properties :</p> <ul> <li><p>First : <span class="math-container">$x^2+y^2=2x$</span> at <span class="math-container">$z=0$</span></p></li> <li><p>Second : Is a solution of the PDE.</p></li> </ul> <p>HINT :</p> <p>By inspection, it is obvious that the four equations satisfy the first property.</p> <p>So, the question is to find which one of the four proposed equations satisfies the PDE.</p> <p>Why not simply put the equations, one after the other, into the PDE and see if it agrees or not ?</p> <p>I suppose that you can do it. </p> <p>Nevertheless, the following calculation might help you.</p> <p>For example, with the equation <span class="math-container">$\quad x^2+y^2+z^2-2x+4z=0 \quad$</span>. (This is not one of the four proposed equations, but another example).</p> <p><span class="math-container">$$2xdx+2ydy+2zdz-2dx+4dz=0$$</span> <span class="math-container">$$dz=\frac{-2x+2}{2z+4}dx+\frac{-2y}{2z+4}dy$$</span> <span class="math-container">$$\frac{\partial z}{\partial x}=\frac{-2x+2}{2z+4}\quad;\quad \frac{\partial z}{\partial y}=\frac{-2y}{2z+4}$$</span></p> <p><span class="math-container">$$2y(z-3)\frac{-2x+2}{2z+4} +(2x-z)\frac{-2y}{2z+4} - y(2x-3) =$$</span> <span class="math-container">$$=\frac{2yz(3-2x)}{z+2}$$</span> This is not equal to <span class="math-container">$0$</span>. Thus the equation <span class="math-container">$x^2+y^2+z^2-2x+4z=0$</span> is not a solution of the PDE.</p> <p>No need to solve the PDE with the method of characteristics to answer the question. Thanks to this simple method of checking, you will easily find that equation 1 is the right answer.</p>
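The substitution check described in this answer can be automated; a sympy sketch (library assumed), using implicit differentiation to obtain $z_x,z_y$ from each candidate $F(x,y,z)=0$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The four candidate surfaces F(x, y, z) = 0, options 1-4.
candidates = [
    x**2 + y**2 - z**2 - 2*x + 4*z,
    x**2 + y**2 - z**2 - 2*x + 8*z,
    x**2 + y**2 + z**2 - 2*x + 16*z,
    x**2 + y**2 + z**2 - 2*x + 8*z,
]

def pde_residual(F):
    # Implicit differentiation of F(x, y, z(x, y)) = 0:
    # z_x = -F_x / F_z and z_y = -F_y / F_z.
    zx = -sp.diff(F, x) / sp.diff(F, z)
    zy = -sp.diff(F, y) / sp.diff(F, z)
    return sp.cancel(2*y*(z - 3)*zx + (2*x - z)*zy - y*(2*x - 3))

# For option 1 the residual vanishes identically (not merely on the
# surface), so no reduction modulo F is even needed here.
results = [pde_residual(F) == 0 for F in candidates]
print(results)  # [True, False, False, False]
```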
2,483,611
<p>I believe the answer is 13 * $13\choose4$ * $48\choose9$.</p> <p>There are $13\choose4$ ways to draw 4 of the same cards, and multiply by 13 for each possible rank (A, 2, 3, ..., K). Then there are $48\choose9$ ways to choose the remaining cards.</p> <p>One thing I am not certain of is whether this accounts for the possibility of having two 4-of-a-kinds or three 4-of-a-kinds, but I believe it does, since having two or three means you have one.</p>
Claude Leibovici
82,404
<p>May be, you could consider the function $$P(x)=x^4 -4(m+2)x^2 + m^2$$ $$P'(x)=4x^3 -8(m+2)x$$ $$P''(x)=12x^2 -8(m+2)$$</p> <p>The first derivative cancels at $$x_1=-\sqrt{4+2m} \qquad x_2=0\qquad x_3=\sqrt{4+2m}$$ For these points $$P(x_1)=-3 m^2-16 m-16 \qquad P(x_2)=m^2 \qquad P(x_3)=-3 m^2-16 m-16$$ </p> <p>So, in order to have four <em>distinct</em> roots, from the values of $P(x_i)$ we need to have $m \neq 0$ and $3m^2+16m+16&gt;0$. The last condition excludes the range $-4 \leq m \leq -\frac 43$.</p> <p>Since $x_1$ and $x_3$ must be minimum points and $x_2$ a maximum point, we also need $m&gt;-2$ since $$P''(x_1)=16(m+2) \qquad P''(x_2)=-8(m+2) \qquad P''(x_3)=16(m+2)$$ This seems to define quite well the range of acceptable values of $m$.</p>
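As a numerical cross-check of this root analysis (numpy assumed), one can compare a value of $m$ inside the admissible range found above with one outside it:

```python
import numpy as np

def real_roots(m):
    # Real roots of P(x) = x^4 - 4(m+2)x^2 + m^2.
    r = np.roots([1, 0, -4 * (m + 2), 0, m**2])
    return sorted(v.real for v in r if abs(v.imag) < 1e-9)

# m = 1 lies in the admissible range (m > -4/3, m != 0): four distinct real roots.
r1 = real_roots(1)
print(r1)

# m = -2 is excluded: P(x) = x^4 + 4 has no real roots at all.
print(real_roots(-2))  # []
```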
2,025,934
<p>Let $V$ be an $n$-dimensional vector space with $\dim (V) =: n \ge 2$.</p> <p>We shall prove that there are infinitely many $k$-dimensional subspaces of $V$, $\forall k \in \{1, 2, ..., n-1\}$.</p> <p>So first, I thought about using induction. The base step is not that hard: for $n=2$ we take two linearly independent vectors, say $a$ and $b$, and define infinitely many 1-dimensional subspaces as span$\{a+jb\}$ for $j \in \mathbb N$.</p> <p>It is easy to see those vector spaces are not all equal, but I kinda realised that induction is not the way to go, as I think $n$ is fixed.</p> <p>Anyhow, then I thought about using finiteness of a basis for $V$ to try to construct those subspaces (using vectors from the basis). I failed to do so, so I'm just asking for a hint or any useful advice where to start with this.</p>
Djura Marinkov
361,183
<p>Subspaces: $\operatorname{span}\{a_1, a_2, \ldots, a_{k-1}, a_k+ma_{k+1}\}$ for $m\in \mathbb N$, where $a_1,\ldots,a_{k+1}$ are linearly independent vectors; these subspaces are pairwise distinct, giving infinitely many of dimension $k$.</p>
1,619,292
<p>Let $\mathbf C$ be an abelian category containing arbitrary direct sums and let $\{X_i\}_{i\in I}$ be a collection of objects of $\mathbf C$. </p> <p>Consider a subobject $Y\subseteq \bigoplus_{i\in I}X_i$ and put $Y_i:=p_i(Y)$ where $p_i:\bigoplus_{i\in I}X_i\longrightarrow X$ is the obvious projection. </p> <p>Is $Y$ a subobject of $\bigoplus_{i\in I}Y_i$?</p> <p>This seems so obvious, but I can't seem to be able to prove it. </p>
Martín-Blas Pérez Pinilla
98,199
<p>Popular-science level:</p> <p><a href="http://rads.stackoverflow.com/amzn/click/0631232516" rel="noreferrer">Does God Play Dice? The New Mathematics of Chaos</a> by Ian Stewart.</p> <p>A bit more advanced:</p> <p><a href="http://rads.stackoverflow.com/amzn/click/0521477476" rel="noreferrer">Explaining Chaos</a> by Peter Smith.</p> <p>College level:</p> <p><a href="http://www.springer.com/cn/book/9783540609346" rel="noreferrer">Nonlinear Differential Equations and Dynamical Systems</a> by Ferdinand Verhulst.</p>
2,581,135
<blockquote> <p>Find: $\displaystyle\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}.$</p> </blockquote> <p>Question from a book on preparation for math contests. All the tricks I know to solve this limit are not working. Wolfram Alpha struggled to find $1$ as the solution, but the solution process presented is not understandable. The answer is $1$.</p> <p>Hints and solutions are appreciated. Sorry if this is a duplicate.</p>
Mr Pie
477,343
<p>Let $\Lambda$ denote the limit we need to find. <strong>Claim</strong>: $\Lambda = 1$.</p> <hr> <p><em>Proof</em>: We will use the following <em>Lemma</em>, a "denesting" identity valid whenever $a,b\ge 0$ and $a^2\ge b$ (both sides are non-negative, and squaring each side gives $a+\sqrt b$). $$\sqrt{a + \sqrt{b}} = \sqrt{\frac{a + \sqrt{a^2 - b}}{2}} + \sqrt{\frac{a - \sqrt{a^2 - b}}{2}}.\tag1$$ Substitute $a = x$ and $b = x + \sqrt{x}$ into the Lemma: $$\sqrt{x + \sqrt{x + \sqrt{x}}} = \sqrt{\frac{x + \sqrt{x^2 - x - \sqrt{x}}}{2}} + \sqrt{\frac{x - \sqrt{x^2 - x - \sqrt{x}}}{2}}.$$ Now examine the two fractions under the roots as $x\to\infty$. For the first, $$\frac{x + \sqrt{x^2 - x - \sqrt{x}}}{2} = x\cdot\frac{1 + \sqrt{1 - \tfrac1x - x^{-3/2}}}{2} = x\,(1+o(1)),$$ while for the second, rationalising gives $$\frac{x - \sqrt{x^2 - x - \sqrt{x}}}{2} = \frac{x + \sqrt{x}}{2\left(x + \sqrt{x^2 - x - \sqrt{x}}\right)} \to \frac14.$$ Hence $$\sqrt{x + \sqrt{x + \sqrt{x}}} = \sqrt{x}\,(1+o(1)) + O(1),$$ and therefore $$\Lambda = \lim_{x\to\infty}\frac{\sqrt{x}}{\sqrt{x}\,(1+o(1)) + O(1)} = 1.\tag*{$\blacksquare$}$$</p>
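Whatever route one takes, the claimed value can be checked numerically; a small Python sketch:

```python
import math

def f(x):
    # The ratio sqrt(x) / sqrt(x + sqrt(x + sqrt(x))) from the question.
    return math.sqrt(x) / math.sqrt(x + math.sqrt(x + math.sqrt(x)))

# The values increase toward 1, roughly like 1 - 1/(2*sqrt(x)).
vals = [f(10.0**k) for k in (2, 4, 6, 8)]
print(vals)
```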
2,828,205
<p><a href="https://i.stack.imgur.com/JJRaZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JJRaZ.png" alt="enter image description here"></a></p> <p>First we take the identity element from the set, which is the identity matrix, so S=I, for which b(σ(x),σ(y))=b(x,y); this is the identity transformation in O(V,b), so the kernel becomes trivial, and so it is surjective since it is finite dimensional. Is this correct or not? Please guide me regarding this.</p>
Federico Fallucca
531,470
<p>We want to prove the surjectivity of the map $\Psi$.</p> <p>We have fixed a basis $\beta:=\{v_1,...,v_n\}$ of $V$, and the map $\Psi$ sends every $\sigma\in O(V,b)$ to the matrix $A_\sigma$ associated to it in the basis $\beta$.</p> <p>If $A=(a_{ij})$ is a matrix satisfying $A^tBA=B$, we can define a function</p> <p>$\sigma’: \beta\to V$</p> <p>that maps every $v_i\in \beta$ to $ \sigma’(v_i):=\sum_{k=1}^{n}a_{ki}v_k$</p> <p>In this way there exists a unique linear map $\sigma: V\to V$ that extends $\sigma’$. </p> <p>Now we want to prove that $\sigma\in O(V,b)$.</p> <p>The matrix associated to $\sigma$ in the basis $\beta$ is the matrix that has in its $j$-th column the coefficients of $\sigma(v_j)$ written in the basis $\beta$; so, by construction, the matrix associated to $\sigma$ is $A$. Now for all $x,y\in V$</p> <p>$b(\sigma(x),\sigma(y))=(Ax)^tB(Ay)=$</p> <p>$=x^t(A^tBA)y=x^tBy=b(x,y)$</p> <p>So $\sigma \in O(V,b)$ and obviously $\Psi(\sigma)=A$. </p> <p>In other words, the map $\Psi$ is surjective. </p>
276,987
<p>I want to visualize the following set in Maple:</p> <blockquote> <p>$\lbrace (x+y,x-y) \vert (x,y)\in (-\frac{1}{2},\frac{1}{2})^{2} \rbrace$ </p> </blockquote> <p>Which commands should I use? Is it even possible?</p>
Brian M. Scott
12,042
<p>Here’s a characterization of maximal non-discrete topologies.</p> <blockquote> <p><strong>Lemma.</strong> Let $\tau$ be a non-discrete topology on a set $X$, and let $N=\big\{x\in X:\{x\}\notin\tau\big\}\ne\varnothing$. Then $\tau$ is maximal non-discrete iff </p> <ol> <li>$\tau$ induces the discrete topology on each $A\in\wp(X)\setminus\tau$, and </li> <li>$A\supseteq N$ whenever $A\in\wp(X)\setminus\tau$.</li> </ol> <p><strong>Proof.</strong> Assume first that $\tau$ is maximal non-discrete. Let $A\in\wp(X)\setminus\tau$. If $\tau_A$ is the topology generated by the subbase $\tau\cup\{A\}$, it’s not hard to check that $$\tau_A=\big\{U\cup(A\cap V):U,V\in\tau\big\}\;,$$ so every subset of $X$ must be expressible in the form $U\cup(A\cap V)$ for some $U,V\in\tau$. In particular, for each $x\in X$ there must be $U_x,V_x\in\tau$ such that $\{x\}=U_x\cup(A\cap V_x)$. If $\{x\}\in\tau$ we may take $U_x=\{x\}$ and $V_x=\varnothing$; if, however, $\{x\}\notin\tau$, we must have $U_x=\varnothing$ and $A\cap V_x=\{x\}$. Thus, $\tau$ induces the discrete topology on $A$.</p> <p>Suppose that $A\nsupseteq N$ for some $A\in\wp(X)\setminus\tau$, and fix $x\in N\setminus A$. Then $\tau_A$ is strictly finer than $\tau$, but $\{x\}\notin\tau_A$, contradicting the maximality of $\tau$.</p> <p>Now suppose that $\tau$ satisfies (1) and (2), let $\tau'$ be a topology strictly finer than $\tau$, and let $U\in\tau'\setminus\tau$. By hypothesis $U\supseteq N$ and $\tau$ induces the discrete topology on $U$, so $\{x\}\in\tau'$ for each $x\in N$, and $\tau'$ is the discrete topology on $X$. Thus, $\tau$ is maximal non-discrete. $\dashv$</p> </blockquote> <p>But now we have an easy</p> <blockquote> <p><strong>Theorem.</strong> Let $\tau$ be a topology on a set $X$. Then $\tau$ is maximal non-discrete iff $X$ has a unique non-isolated point.</p> <p><strong>Proof.</strong> In the notation of the lemma this just says that $\tau$ is maximal non-discrete iff $N$ is a singleton. 
If $N$ is a singleton, then (1) and (2) are clearly satisfied, so $\tau$ is maximal non-discrete. Suppose now that there are distinct $x,y\in N$; then $\{x\}\in\wp(X)\setminus\tau$, but $\{x\}\nsupseteq N$, so $\tau$ is not maximal non-discrete. $\dashv$</p> </blockquote> <p>In Michael Greinecker’s example the unique non-isolated point is of course $y$, and its nbhd filter is the fixed filter generated by the set $\{x,y\}$. Among the nicer examples are $\omega+1$ (i.e., a simple sequence with its limit point) and $\{p\}\cup\omega$ for any $p\in\beta\omega\setminus\omega$.</p> <p>(I could probably have done this more simply, but this is how I actually discovered the result, so I thought that I’d let it stand as is.)</p> <p><strong>Added:</strong> I realized somewhat belatedly that these spaces can be described even more precisely. Let $X$ be a set with more than one element, and fix $p\in X$. Let $Y=X\setminus\{p\}$, let $\mathscr{F}$ be any filter on $Y$, and let $\tau=\big\{\{p\}\cup F:F\in\mathscr{F}\big\}\cup\wp(Y)$. Then $\tau$ is a maximal non-discrete topology on $X$, and all such topologies are obtained in this way.</p>
636,730
<p>Let $G$ be a group of infinite order . Does there exist an element $x$ belonging to $G$ such that $x$ is not equal to $e$ and the order of $x$ is finite?</p>
Amr
29,267
<p>No. The only element of $\mathbb{Z}$ that has finite order is $0$</p>
636,730
<p>Let $G$ be a group of infinite order . Does there exist an element $x$ belonging to $G$ such that $x$ is not equal to $e$ and the order of $x$ is finite?</p>
fkraiem
118,488
<p>Sometimes no, for example in $(\mathbf{Z},+)$. Sometimes yes, for example in $(\mathbf{R}^*,\times)$.</p>
1,618,411
<p>I'm learning the fundamentals of <em>discrete mathematics</em>, and I have been requested to solve this problem:</p> <p>According to the set of natural numbers</p> <p>$$ \mathbb{N} = {0, 1, 2, 3, ...} $$</p> <p>write a definition for the less than relation.</p> <p>I wrote this:</p> <p>$a &lt; b$ if $a + 1 &lt; b + 1$</p> <p>Is it correct?</p>
Nephente
96,393
<p>A way to think about the natural numbers is in terms of the <a href="https://en.wikipedia.org/wiki/Peano_axioms" rel="nofollow">Peano Axioms</a>. There exists a "successor" map </p> <p>$$ S: \mathbb{N}\rightarrow \mathbb{N} $$ such that in particular</p> <ul> <li>$S(0) = 1 $</li> <li>$0\notin S(\mathbb{N}) $</li> </ul> <p>The action of $S$ is usually written as $S(n) =: n+1$ The ordering of $\mathbb{N}$ may then be defined as</p> <p>$$ a\leq b :\Longleftrightarrow \exists k\in\mathbb{N}: S^k(a) = b$$</p> <p>where the $k$-th power is understood as $k$ fold application of $S$.</p> <p>This is essentially the same answer already given by Solitary.</p>
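The successor-based definition can be transcribed directly; a toy Python sketch (the integer bound in the loop is just my way of keeping the search finite, not part of the axioms):

```python
def leq(a, b):
    # a <= b iff S^k(a) == b for some k >= 0; since the successor only
    # increases its argument, trying k = 0, 1, ..., b suffices.
    x = a
    for _ in range(b + 1):
        if x == b:
            return True
        x += 1  # one application of the successor map S
    return False

assert leq(0, 5) and leq(3, 3) and not leq(7, 2)
print("successor-based ordering agrees with the usual <= on the samples")
```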
4,474,806
<p>I use the following method to calculate <span class="math-container">$b$</span>, which is <span class="math-container">$a$</span> <strong>increased</strong> by <span class="math-container">$x$</span> percent:</p> <p><span class="math-container">$\begin{align} a = 200 \end{align}$</span></p> <p><span class="math-container">$\begin{align} x = 5\% \text{ (represented as } \frac{5}{100} = 0.05 \text{)} \end{align}$</span></p> <p><span class="math-container">$\begin{align} b = a \cdot (1 + x) \ = 200 \cdot (1 + 0.05) \ = 200 \cdot 1.05 \ = 210 \end{align}$</span></p> <p>Now I want to calculate <span class="math-container">$c$</span>, which is also <span class="math-container">$a$</span> but <strong>decreased</strong> by <span class="math-container">$x$</span> percent.</p> <p>My instinct is to preserve the method, but to use division instead of multiplication (being the inverse operation):</p> <p><span class="math-container">$ \begin{align} c = \frac{a}{1 + x} \ = \frac{200}{1 + 0.05} \ = \frac{200}{1.05} \ = 190.476190476 \ \end{align} $</span></p> <p>The result looks a bit off? But also interesting as I can multiply it by the percent and I get back the initial value (<span class="math-container">$190.476190476 \cdot 1.05 = 200$</span>).</p> <p>I think the correct result should be 190 (without any decimal), using:</p> <p><span class="math-container">$ \begin{align} c = a \cdot (1 - x) \ = 200 \cdot (1 - 0.05) \ = 200 \cdot 0.95 \ = 190 \end{align} $</span></p> <p>What's the difference between them? What I'm actually calculating?</p>
insipidintegrator
1,062,486
<p>Notice that what you have done is basically exploit the first-order approximation of the Taylor series of <span class="math-container">$\frac{1}{1-x}$</span>: <span class="math-container">$$\frac{1}{1-x}=1+x+x^2+x^3+\cdots \quad\text{for } |x|\lt1$$</span> <span class="math-container">$\approx 1+x$</span> for <span class="math-container">$x\ll 1$</span>.</p>
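A quick numeric illustration of the two operations in the question and the size of the higher-order gap between them:

```python
a, x = 200.0, 0.05

decrease = a * (1 - x)        # the true "a decreased by 5%": 190.0
undo_increase = a / (1 + x)   # the value that, increased by 5%, gives back a
print(decrease, undo_increase)

# The gap comes from the higher-order terms of 1/(1+x) = 1 - x + x^2 - ...;
# the leading correction is about a*x^2 = 0.5 here.
print(undo_increase - decrease, a * x**2)
```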
1,046,066
<p>Is this series $$\sum_{n\geq 1}\left(\prod_{k=1}^{n}k^k\right)^{\!-\frac{4}{n^2}} $$ convergent or divergent?</p> <p>My attempt was to use the comparison test, but I'm stuck at finding the behaviour of $\displaystyle \prod_1^n k^k$ as $n$ goes to infinity. Thanks in advance.</p>
user 1591719
32,016
<p>By <strong>Cesaro-Stolz theorem</strong>, we have that $$\lim_{n\to\infty}\frac{1}{n^2 \log(n)}\sum_{k=1}^{n} k\log(k)=\frac{1}{2}$$ and then we see that for $n$ large, we have that $$\frac{1}{n^2}\sum_{k=1}^{n} k\log(k)\approx\frac{\log(n)}{2}\Rightarrow \frac{-4}{n^2}\sum_{k=1}^{n} k\log(k)\approx -2\log(n)$$and thus $$\left(\prod_{k=1}^{n}k^k\right)^{\!-\frac{4}{n^2}}\approx \frac{1}{n^2}$$ whence we conclude the series converges.</p> <p><strong>Q.E.D.</strong> (an elementary way , without integrals)</p>
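A numeric sanity check of the $\approx 1/n^2$ behaviour (Python, computing the product through logarithms to avoid overflow):

```python
import math

def term(n):
    # (prod_{k=1}^n k^k)^(-4/n^2), computed via logs to avoid huge integers.
    s = sum(k * math.log(k) for k in range(1, n + 1))
    return math.exp(-4.0 * s / n**2)

# Multiplying by n^2 stays bounded (it slowly approaches a constant),
# matching the comparison with the convergent series sum 1/n^2.
print([term(n) * n**2 for n in (10, 100, 1000)])
```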
3,165,937
<p>Are orthogonal groups Lie groups? I think the parameter-space points corresponding to elements with determinant $-1$ break the analytic property of Lie groups. What is the general condition to check whether a group is a Lie group or not?</p>
Dietrich Burde
83,966
<p>Orthogonal groups can be defined over arbitrary fields <span class="math-container">$K$</span> as the subgroup of the general linear group <span class="math-container">$GL_n(K)$</span> given by <span class="math-container">$$ \operatorname {O} (n,K)=\left\{Q\in \operatorname {GL} (n,K)\;\left|\;Q^{\mathsf {T}}Q=QQ^{\mathsf {T}}=I\right.\right\}. $$</span> Now for example, for a <em>finite</em> field we obtain a finite group, which is also a compact discrete Lie group of dimension <span class="math-container">$0$</span>.</p> <p>Real orthogonal groups are real linear groups and hence real Lie groups.</p>
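A small numpy illustration (library assumed) touching the asker's determinant $-1$ worry: both a rotation and a reflection satisfy the defining condition $Q^{\mathsf T}Q=I$, and together the two components still form one (disconnected) Lie group:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation in O(2, R)
assert np.allclose(Q.T @ Q, np.eye(2))

R = np.diag([1.0, -1.0])                          # a reflection, determinant -1
assert np.allclose(R.T @ R, np.eye(2))

print(np.linalg.det(Q), np.linalg.det(R))  # +1 and -1: two components of O(2)
```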
2,658,195
<p>I have the following problem which I cannot solve. I have a very large population of birds, e.g. 10 000. There are only 8 species of birds in this population. The size of each species is the same.</p> <p>I would like to calculate how many birds I have to catch to be 80% sure that I have caught at least one bird of each species.</p>
farruhota
425,072
<p>It is a binomial experiment. Let $X$ is a number of UPs. When die is rolled three times, i.e. $n=3$, then: $$P(X=0)=\frac18; \ P(X=1)=\frac38; \ P(X=2)=\frac38; \ P(X=3)=\frac18.$$ So selecting one or two UPs have higher chances. </p> <p>Similarly if $n=4$, the combinations are: $$C(4,0)=1; \ C(4,1)=4; \ C(4,2)=6; \ C(4,3)=4; \ C(4,4)=1.$$ So the winning choice is to have two UPs and two DOWNs.</p>
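The probabilities and coefficients above can be reproduced exactly; a short Python sketch using rational arithmetic:

```python
from fractions import Fraction
from math import comb

def binom_pmf(n, k, p):
    # P(X = k) for X ~ Binomial(n, p), kept exact via Fractions.
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = Fraction(1, 2)
# n = 3: probabilities 1/8, 3/8, 3/8, 1/8.
print([binom_pmf(3, k, p) for k in range(4)])
# n = 4: binomial coefficients 1, 4, 6, 4, 1, so two UPs is the modal outcome.
print([comb(4, k) for k in range(5)])
```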
546,572
<p><img src="https://i.stack.imgur.com/aJ2t5.jpg" alt="enter image description here"></p> <p>Could anyone tell me how to solve 9b and 10? I've been thinking for five hours, I really need help.</p>
bof
97,206
<p>$$f(x)=\cos x^2, A=f^{-1}(1),B=f^{-1}(-1)$$</p>
2,196,413
<blockquote> <p>Let $R$ be a commutative ring. Denote by $R^*$ the group of invertible elements (this is a group w.r.t multiplication.) Suppose $R^*\cong \mathbb{Z}$. I need to show that $1+1=0$ in $R$.</p> </blockquote> <p>I have no clue about why such statement should be true. I don't even have an example for a ring that satisfies these assumptions, so I'd be glad to see one.</p> <p>Hints (or partial solutions) will be welcomed. Thank you!</p>
lhf
589
<p><em>Hint 1:</em> $-1$ is a unit and so is a power of $u$, where $u$ is a generator of $R^\times$.</p> <p><em>Hint 2:</em> $(-1)^2=1$. What are the elements of finite order in $\mathbb Z$ ?</p>
2,414,492
<p>Check the convergence of $$\sum_{k=0}^\infty{2^{-\sqrt{k}}}$$ I have tried all other tests (ratio test, integral test, root test, etc.) but none of them got me anywhere. Pretty sure the way to do it is to check the convergence by comparison, but not sure how.</p>
farruhota
425,072
<p>Referring to Miguel's comment on integral test: $$\begin{align} \int_{0}^{\infty} \frac{1}{2^{\sqrt{x}}}dx &amp;=\int_{0}^{\infty} \frac{1}{2^t}\cdot 2tdt\\ &amp;=2\left(t\cdot \frac{-2^{-t}}{\ln{2}}\bigg{|}_{0}^{\infty}+\int_{0}^{\infty} \frac{2^{-t}}{\ln{2}}dt\right)\\ &amp;=2\left(-\frac{2^{-t}}{\ln^2{2}}\bigg{|}_{0}^{\infty}\right)=\frac{2}{\ln^2{2}}\\&amp;&lt;\infty. \end{align} $$</p>
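A numerical cross-check of the integral's value (SciPy assumed):

```python
import math
from scipy.integrate import quad

# Integral of 2^(-sqrt(x)) over [0, infinity), compared with 2/ln(2)^2.
val, err = quad(lambda x: 2.0 ** (-math.sqrt(x)), 0, float('inf'))
print(val, 2 / math.log(2) ** 2)  # both approximately 4.1627
```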
1,335,950
<p>I have the following sum ($n\in \Bbb N)$: $$ \frac {1}{1 \times 4} + \frac {1}{4 \times 7} + \frac {1}{7 \times 10} +...+ \frac {1}{(3n - 2)(3n + 1)} \tag{1} $$ It can be proved that the sum is equal to $$ \frac{n}{3n + 1} \tag{2}$$ My question is, how do I get the equality? I mean, if I hadn't knew the formula $(2)$, how would I derive it?</p>
please delete me
249,542
<p>Write $\frac{1}{(3n-2)(3n+1)}=\frac{1}{3}(\frac{1}{3n-2}-\frac{1}{3n+1})$ before summing.</p>
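A quick check of the resulting closed form $S_n=\frac{n}{3n+1}$ with exact rationals:

```python
from fractions import Fraction

def partial_sum(n):
    return sum(Fraction(1, (3*k - 2) * (3*k + 1)) for k in range(1, n + 1))

# The telescoping identity gives S_n = n/(3n+1) exactly, for every n.
for n in (1, 2, 10, 50):
    assert partial_sum(n) == Fraction(n, 3*n + 1)
print(partial_sum(10))  # 10/31
```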
2,732,202
<p>$Z_1, Z_2, Z_3,...$ are independent and identically distributed R.V.s s.t. $E(Z_i)^- &lt; \infty$ and $E(Z_i)^+ = \infty$. Prove that $$\frac {Z_1+Z_2+Z_3+\cdots+Z_n} n \to \infty$$ almost surely.</p> <p>What do $E(Z_i)^+$ and $E(Z_i)^-$ mean? I believe it is integrating from negative infinity to zero and from zero to positive infinity for $-$ and $+$ respectively. But the LLN won't apply here since the expectation doesn't exist, i.e. $E|Z_i|=\infty$.</p> <p>p.s. another post I made is similar but different: there it is $E(Z)_i^+$, which stands for max and min for the minus sign.</p>
Davide Giraudo
9,849
<p>If $x$ is a real number, $x^+$ denotes the positive part of $x$, that is, $x^+=\max\{x,0\}$ and $x^-=\max\{x,0\}-x$ so that $x=x^+-x^-$. Write $Z_i=Z_i^+-Z_i^-$. An application of the strong law of large numbers to $\left(Z_i^-\right)_{i\geqslant 1}$ shows that $n^{-1}\sum_{i=1}^nZ_i^-\to \mathbb E\left[Z_1^-\right] $ almost surely. For the part involving $Z_i^+$, let $Y_{i,M}:=Z_i^+\mathbf 1\left\{Z_i^+\leqslant M\right\}$. Then for all $n$ and $M$, $$ \frac 1n\sum_{i=1}^nZ_i^+\geqslant\frac 1n\sum_{i=1}^n Y_{i,M} $$ hence by the strong law of large numbers applied to the i.i.d. sequence $\left(Y_{i,M}\right)_{i\geqslant 1}$, $$ \liminf_{n\to +\infty}\frac 1n\sum_{i=1}^nZ_i^+\geqslant\liminf_{n\to +\infty}\frac 1n\sum_{i=1}^n Y_{i,M}=\mathbb E\left[Y_{1,M}\right], $$ and by monotone convergence (using $\mathbb E\left[Z_1^+\right]=\infty$), $\mathbb E\left[Y_{1,M}\right]\to +\infty$ as $M$ goes to infinity.</p>
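A simulation illustrating the statement (Python sketch; the heavy-tailed distribution below is my own choice of an example with $E[Z^-]<\infty$ and $E[Z^+]=\infty$):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10**6
# Positive heavy-tailed sample: P(Z > t) = t^(-1/2) for t >= 1,
# so E[Z^-] = 0 < infinity while E[Z^+] = infinity.
Z = (1.0 - rng.random(n)) ** -2.0
running_mean = np.cumsum(Z) / np.arange(1, n + 1)
# The running mean drifts upward without bound instead of settling down.
print(running_mean[999], running_mean[-1])
```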
1,047,544
<p>I'm doing some research and I'm trying to compute a closed form for $ \mathbb{E}[ X \mid X &gt; Y] $ where $X$, $Y$ are independent normal (but not identical) random variables. Is this known?</p>
bijection
35,625
<p>yes, $\mathbb{E}(X|X &gt; y)$ has a closed form as the expectation of a truncated normal. However, integrating that expression times the pdf of $Y$, is difficult. The normal r.v. are arbitrary. Does anyone have a satisfactory answer or is there no known closed form? </p>
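For the inner conditional expectation mentioned here, the truncated-normal formula can be checked numerically (SciPy assumed); the outer integration over $Y$ is the part with no obvious closed form and is not attempted:

```python
from scipy.stats import norm
from scipy.integrate import quad

def truncated_mean(mu, sigma, c):
    # E[X | X > c] for X ~ N(mu, sigma^2), via the standard
    # truncated-normal formula mu + sigma * phi(a) / (1 - Phi(a)).
    alpha = (c - mu) / sigma
    return mu + sigma * norm.pdf(alpha) / norm.sf(alpha)

# Cross-check against direct numerical integration at sample parameters.
mu, sigma, c = 1.0, 2.0, 0.5
num, _ = quad(lambda t: t * norm.pdf(t, mu, sigma), c, float('inf'))
den = norm.sf(c, mu, sigma)
print(truncated_mean(mu, sigma, c), num / den)
```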
3,468,336
<p>Consider the multiplication operator <span class="math-container">$A \colon D(A) \to L^2(\mathbb R)$</span> defined by <span class="math-container">$$\forall f \in D(A): \quad(Af)(x) = (1+\lvert x \rvert^2)f(x),$$</span> where <span class="math-container">$$D(A) := \left \{f\in L^2(\mathbb R): (1+\lvert x \rvert^2)f\in L^2(\mathbb R), \int_\mathbb R f(x) \, dx = 0\right \}.$$</span> I want to show that <span class="math-container">$A$</span> is densely defined, symmetric and closed, but not self-adjoint. However I am struggling at some places.</p> <p><em>Proof.</em> Since <span class="math-container">$x\mapsto (1+\lvert x \rvert^2)$</span> is real-valued, <span class="math-container">$A$</span> is symmetric. Now let <span class="math-container">$(g_n)_{n\in \mathbb N} \subseteq L^2(\mathbb R)$</span> converging in <span class="math-container">$\lVert \cdot \rVert_{L^2(\mathbb R)}$</span> to some <span class="math-container">$g \in L^2(\mathbb R)$</span> and <span class="math-container">$Ag_n \to h \in L^2(\mathbb R)$</span>. Passing to a subsequence we see that <span class="math-container">$$g_n(x) \to g(x), \quad (1+\lvert x \vert^2)g_n(x) \to h(x)$$</span> for almost every <span class="math-container">$x\in \mathbb R$</span>, from which we see that <span class="math-container">$$g(x) = \frac{h(x)}{1+\lvert x \rvert^2}$$</span> for almost every <span class="math-container">$x\in \mathbb R$</span>. Also since <span class="math-container">$g_n \to g$</span> in <span class="math-container">$L^1(\mathbb R)$</span> we have <span class="math-container">$$\int_{\mathbb R} g(x) \, dx = \lim_{n\to \infty} \int_{\mathbb R} g_n(x) \, dx = 0,$$</span> so <span class="math-container">$g\in D(A), Ag = h$</span> and <span class="math-container">$A$</span> is closed. Now consider the function <span class="math-container">$f(x) = e^{-x^2}$</span>. 
Then letting <span class="math-container">$\psi_f(x) = (1+\lvert x \rvert^2) f(x)$</span> we have <span class="math-container">$ \psi_f \in L^2(\mathbb R)$</span> and for each <span class="math-container">$g\in D(A):$</span> <span class="math-container">$$\langle f, Ag \rangle_{L^2} = \langle \psi_f, g \rangle_{L^2},$$</span> so <span class="math-container">$f\in D(A^*)$</span>. But since <span class="math-container">$\int_{\mathbb R} f(x) \, dx = \sqrt{\pi} \neq 0, f \notin D(A)$</span> and hence <span class="math-container">$A$</span> is not self-adjoint.</p> <p>Is this correct so far? Also, I struggled showing <span class="math-container">$A$</span> is densely defined. It was hard for me to construct an approximating sequence under the constraint that the integral should vanish; also I could not compute the orthogonal complement. Any hints?</p>
Clarinetist
81,560
<p>Observe <span class="math-container">$$\left(\sum_{i=1}^{n}X_i\right)^2 = \sum_{i=1}^{n}\sum_{j=1}^{n}X_iX_j = \sum_{i = j}X_iX_j + \sum_{i \neq j}X_iX_j = \sum_{i=1}^{n}X_i^2 + \sum_{i\neq j}X_iX_j$$</span> yielding an expected value of <span class="math-container">$$n(\lambda^2 + \lambda) + \lambda^2(n^2 - n) = n\lambda+n^2\lambda^2\tag{*}$$</span> and when divided by <span class="math-container">$n^2$</span> yields <span class="math-container">$$\dfrac{1}{n}\lambda + \lambda^2$$</span> achieving the desired result.</p> <p>To understand why (*) is true, <span class="math-container">$\mathbb{E}[X_i^2] = \lambda^2+\lambda$</span> which is summed <span class="math-container">$n$</span> times. </p> <p>Due to independence, <span class="math-container">$\mathbb{E}[X_iX_j] = \mathbb{E}[X_i]\mathbb{E}[X_j] = \lambda^2$</span>. There are <span class="math-container">$n \cdot n = n^2$</span> total <span class="math-container">$(i, j)$</span> pairs, of which <span class="math-container">$n$</span> of them have <span class="math-container">$i = j$</span>, so there are <span class="math-container">$n^2 - n$</span> pairs which have <span class="math-container">$i \neq j$</span>.</p>
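The algebra in (*) can be verified symbolically; a sympy sketch (library assumed):

```python
import sympy as sp

n, lam = sp.symbols('n lambda', positive=True)

# E[X] = lam and E[X^2] = lam^2 + lam for X ~ Poisson(lam); combine as in (*).
second_moment = n * (lam**2 + lam) + lam**2 * (n**2 - n)

# (*) simplifies to n*lam + n^2*lam^2 ...
assert sp.simplify(second_moment - (n * lam + n**2 * lam**2)) == 0

# ... and dividing by n^2 leaves lam/n + lam^2.
print(sp.simplify(second_moment / n**2 - (lam / n + lam**2)))  # 0
```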
1,203,922
<p>Show that it is possible to divide the set of the first twelve cubes $\left(1^3,2^3,\ldots,12^3\right)$ into two sets of size six with equal sums.</p> <p>Any suggestions on what techniques should be used to start the problem?</p> <p>Also, when the question is phrased like that, are you to find a general case that always satisfies the condition? Or do they instead want you to find a specific example, since if an example exists then of course the division is possible?</p> <p>Thanks!</p> <p><strong>Edit</strong>: I finally found an answer :) $$1^3 + 2^3 + 4^3 + 8^3 + 9^3 + 12^3 = 3^3 + 5^3 + 6^3 + 7^3 + 10^3 + 11^3 = 3042$$ My approach to solving this problem was an extension of David's suggestion below. For six cubes to sum to an even number (3042), there must be either 0, 2, 4, or 6 odd cubes. There cannot be, say, 3 odd numbers, because the sum of 3 odd numbers is odd, while the remaining 3 numbers would be even perfect cubes with an even sum. </p> <p>Thus you would have an even + odd = odd sum, but 3042 is even, not odd. Following this logic, a set of six cubes that sum to 3042 can only have 0 odds, 2 odds, 4 odds, or 6 odds (note that the 0 odd case is the complementary set to the 6 odd case, and the 2 odd case is the complementary set to the 4 odd case).</p> <p>Checking the 0 or 6 odd case is simple. The sum of all the odd cubes does not equal 3042, so the two sets cannot be composed of all odds or no odds.</p> <p>Hence one set of six cubes must have 2 odds, and the other set must have 4 odds.</p> <p>Now it is simply a matter of guess and check, checking all the pairs of odd cubes from </p> <p>($1^3,3^3$) $\rightarrow$ ($9^3,11^3$). Fortunately, ($1^3,9^3$) works, so we don't have to try too many cases.</p> <p>Also, if anyone has any other solution method, please let me know :)</p>
barak manos
131,263
<p>First, you've noticed that each side of the equation has to sum up to $3042$.</p> <p>Since $11^3+12^3=3059&gt;3042$, each of them has to be in a different group.</p> <p>Since $3042$ is even, each group must contain an even number of odd numbers.</p> <p>So each group must contain $0$ or $2$ or $4$ or $6$ odd numbers.</p> <p>With $0$ odd numbers in one group (all six even cubes), that group sums to $3528&gt;3042$.</p> <p>With $6$ odd numbers in one group (all six odd cubes), that group sums to $2556&lt;3042$.</p> <p>So the remaining options are:</p> <ul> <li>"$12$" with $3$ out of $5$ other even numbers and $2$ out of $5$ odd numbers other than "$11$"</li> <li>"$12$" with $1$ out of $5$ other even numbers and $4$ out of $5$ odd numbers other than "$11$"</li> </ul> <p>The total number of combinations left to check is therefore $\binom53\cdot\binom52+\binom51\cdot\binom54=125$.</p>
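The case analysis above can be confirmed by brute force (my addition, not part of the original answer; a short Python sketch):

```python
from itertools import combinations

cubes_total = sum(i**3 for i in range(1, 13))  # 6084
target = cubes_total // 2                      # 3042, each group's sum

# Enumerate 6-element subsets containing 12 (fixing 12 avoids counting
# each split twice, since the complement is the other group).
splits = [c for c in combinations(range(1, 13), 6)
          if 12 in c and sum(i**3 for i in c) == target]

print(target, len(splits))
print(splits[0])  # (1, 2, 4, 8, 9, 12) -- the split found in the question
```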
1,245,651
<p>In algebra, I learned that if <span class="math-container">$\lambda$</span> is an eigenvalue of a linear operator <span class="math-container">$T$</span>, I can have <span class="math-container">\begin{equation} Tx = \lambda x \tag{1} \end{equation}</span> for some <span class="math-container">$x\neq 0$</span>, which is equivalent to <span class="math-container">$\lambda I-T$</span> not being invertible.</p> <p>In functional analysis, it is said that if <span class="math-container">$\lambda$</span> is an element of the spectrum of the linear operator <span class="math-container">$T$</span>, then <span class="math-container">$\lambda I - T$</span> is not invertible. However, my professor never mentioned <span class="math-container">$(1)$</span>.</p> <p>Is the definition/concept in functional analysis the same as <span class="math-container">$(1)$</span> in linear algebra? Can I use <span class="math-container">$(1)$</span> in functional analysis too? Does it depend on which spaces we are in?</p> <p>For example, suppose <span class="math-container">$\lambda$</span> is in the spectrum of <span class="math-container">$T$</span>, where <span class="math-container">$T$</span> is a linear operator on <span class="math-container">$E$</span>, a Banach space. I want to show <span class="math-container">$\lambda^n$</span> is in the spectrum of <span class="math-container">$T^n$</span>. Would this problem be equivalent to showing that if <span class="math-container">$\lambda$</span> is an eigenvalue of a linear operator <span class="math-container">$T$</span>, then <span class="math-container">$\lambda^n$</span> is an eigenvalue of <span class="math-container">$T^n$</span>?</p> <p>Thank you.</p>
Disintegrating By Parts
112,478
<p>The multiplication operator <span class="math-container">$(Mf)(x)=xf(x)$</span> on <span class="math-container">$L^{2}[0,1]$</span> is a classical example of an operator with no eigenvalues, but its spectrum is <span class="math-container">$[0,1]$</span>.</p> <p><span class="math-container">$M$</span> has no eigenvalue because <span class="math-container">$Mf=\lambda f$</span> gives <span class="math-container">$(x-\lambda)f=0$</span>, which forces <span class="math-container">$f(x)=0$</span> a.e.</p> <p>To see that <span class="math-container">$[0,1]\subseteq\sigma(M)$</span>, note that the constant function <span class="math-container">$1$</span> is not in the range of <span class="math-container">$(M-\lambda I)$</span> for any <span class="math-container">$\lambda \in [0,1]$</span> because <span class="math-container">$(x-\lambda)g = 1$</span> would force <span class="math-container">$g = 1/(x-\lambda)$</span> a.e., which is not in <span class="math-container">$L^{2}$</span> for such <span class="math-container">$\lambda$</span>. For any other <span class="math-container">$\lambda$</span>, <span class="math-container">$(M-\lambda I)$</span> is invertible.</p>
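A finite-dimensional picture can make this concrete (my own illustration, not part of the answer; it assumes NumPy). Discretizing M on a grid gives a diagonal matrix whose eigenvalues are the grid points, and these fill out [0, 1] ever more densely as the grid is refined, even though the limiting operator has no eigenvectors in L²:

```python
import numpy as np

# Discretize (Mf)(x) = x f(x) on n grid points in [0, 1]: the matrix is
# diagonal with entries x_i, so its eigenvalues are exactly the grid
# points.  As n grows they become dense in [0, 1], mirroring
# sigma(M) = [0, 1] for the continuum operator.
n = 1000
x = np.linspace(0.0, 1.0, n)
eigenvalues = np.linalg.eigvalsh(np.diag(x))

print(eigenvalues.min(), eigenvalues.max())  # ≈ 0.0 and 1.0
gaps = np.diff(np.sort(eigenvalues))
print(gaps.max())  # ≈ 1/(n-1): the spectrum fills [0, 1] densely
```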
2,911,187
<p>Lines (with equal angular spacing between them) radiating outward from a point and intersecting a line:</p> <p><a href="https://i.stack.imgur.com/52HY4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/52HY4.png" alt="Intersection Point Density Distribution"></a></p> <p>This is the density distribution of the points on the line:</p> <p><a href="https://i.stack.imgur.com/CqbWF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/CqbWF.jpg" alt="Plot line Graph"></a></p> <p><sup>I used a Python script to calculate this. The angular interval is 0.01 degrees. X = tan(deg), rounded to the nearest tenth, so many points share the same x. Y is the number of points for its X. You can see on the plot that ~5,700 points were between -0.05 and 0.05 on the line.</sup></p> <p>What is this graph curve called? What's the function equation?</p>
Andreas
317,854
<p>The curve is a density function. The idea is the following. From your first picture, assume that the angles of the rays are spaced evenly, the angle between two rays being $\alpha$. I.e. the nth ray has angle $n \cdot \alpha$. The nth ray's intersection point $x$ with the line then follows $\tan (n \cdot \alpha) = x/h$ where h is the distance of the line to the origin. So from 0 to $x$, n rays cross the line. </p> <p>Now you are interested in the density $p(x)$, i.e. how many rays intersect the line at $x$, per line interval $\Delta x$. In the limit of small $\alpha$, you have $\int_0^x p(x') dx' =c n = \frac{c}{\alpha}\arctan (x/h)$ and correspondingly, $p(x) = \frac{d}{dx}\frac{c}{\alpha}\arctan (x/h) = \frac{c h}{\alpha (x^2+h^2)}$. The constant $c$ is determined since the integral over the density function must be $1$ (in probability sense), hence </p> <p>$p(x)= \frac{h}{\pi (x^2+h^2)}$.</p> <p>This curve is called a Cauchy distribution.</p> <p>Obviously, $p(x)$ can be multiplied with a constant $K$ to give an expectation value distribution $E(x) = K p(x)$ over $x$, instead of a probability distribution. This explains the large value of $E(0) = 5700$ or so in your picture. The value $h$ is also called a <em>scale parameter</em>, it specifies the half-width at half-maximum (HWHM) of the curve and can be roughly read off to be $1$. If we are really "counting rays", then with angle spacing $\alpha$, in total $\pi/\alpha$ many rays intersect the line and hence we must have $$ \pi/\alpha = \int_{-\infty}^{\infty}E(x) dx = \int_{-\infty}^{\infty}K p(x) dx = K $$ So the expectation value distribution of the number of rays intersecting one unit of the line at position $x$ is $$ E(x) = \frac{\pi}{\alpha} p(x)= \frac{h}{\alpha (x^2+h^2)} $$ as we already had with the constant $c=1$. 
Reading off approximately $E(0) = 5700$ and $h=1$ gives $ E(x) = \frac{5700}{x^2+1} $ and $\alpha = 1/5700$ (in rads), or in other words, $\pi/\alpha \simeq 17900$ rays (lower half plane) intersect the line. </p>
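The derivation can be compared directly with a numerical experiment like the one in the question (a sketch, not the asker's original script; it assumes NumPy and the same 0.01° spacing):

```python
import numpy as np

h = 1.0                      # distance from the ray origin to the line
alpha = np.radians(0.01)     # 0.01-degree angular spacing, as in the question
theta = np.arange(-np.pi / 2 + alpha, np.pi / 2, alpha)
x = h * np.tan(theta)        # intersection points on the line

# Empirical density near x = 0: hits in [-0.05, 0.05] divided by the width.
density_at_zero = np.sum(np.abs(x) <= 0.05) / 0.1

# The answer's prediction E(x) = h / (alpha * (x^2 + h^2)), evaluated at 0.
predicted = h / (alpha * h**2)
print(density_at_zero, predicted)  # both close to 5730, matching E(0) ≈ 5700
```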
3,583,330
<p>I've approached the problem the following way : </p> <p>Out of the 7 dice, I select any 6 which will have distinct numbers : 7C6.</p> <p>In the 6 dice, there can be 6! ways in which distinct numbers appear.</p> <p>And lastly, the last dice will have 6 possible ways in which it can show a number.</p> <p>So the required answer should be : 7C6 * 6! * 6/(6^7) which on simplifying becomes : 70/(6^3 * 3).</p> <p>However, the answer given is 35/(6^3 * 3).</p> <p>Where exactly am I going wrong?</p>
Mathsmerizing
757,478
<p>We need 2 alike and 5 distinct numbers on the dice, chosen from {<span class="math-container">$1,2,3,4,5,6$</span>}, which can be selected in C(6,1).C(5,5) ways. Now 2 alike and 5 distinct numbers can be arranged in <span class="math-container">$\frac{7!}{2!}$</span> ways. The total number of points in the sample space is, as you said, <span class="math-container">$6^7$</span>.</p> <p>So the required probability is <span class="math-container">$ \frac {C(6,1).C(5,5)\frac{7!}{2!}}{6^7}$</span>, which is <span class="math-container">$\frac{35}{6^3.3}$</span>.</p> <p>In your case, when you are counting the arrangements, you need to divide by 2!, as permuting the 2 alike numbers does not produce a new configuration.</p>
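Since 6^7 = 279936 is small, the result can be verified by exhaustive enumeration (a check I've added; it is not part of the original answer):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Count rolls of 7 dice whose value pattern is 2+1+1+1+1+1:
# one value appears twice and five other values appear once each.
favourable = sum(
    1 for roll in product(range(1, 7), repeat=7)
    if sorted(Counter(roll).values()) == [1, 1, 1, 1, 1, 2]
)
prob = Fraction(favourable, 6**7)
print(favourable, prob)  # 15120 35/648, and 35/648 = 35/(6^3 * 3)
```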
4,279,076
<p>I have seen on Wikipedia that irrational numbers have infinite continued fractions, but I also found <span class="math-container">$$1=\frac{2}{3-\frac{2}{3-\ddots}}$$</span> so my question is: does that mean <span class="math-container">$1$</span> is irrational, because it can be written as an infinite continued fraction?</p>
L0Ludde0
744,946
<p>No, it does not, since the logic behind the statement is</p> <p><span class="math-container">$x$</span> irrational <span class="math-container">$\implies$</span> <span class="math-container">$x$</span> can be written as an infinite continued fraction.</p> <p>However, this does not necessarily mean that rationals <em>cannot</em> have an infinite continued fraction. (The standard theorem concerns <em>simple</em> continued fractions, where every numerator is <span class="math-container">$1$</span>: a number is irrational if and only if its simple continued fraction is infinite. The expression above, with numerators equal to <span class="math-container">$2$</span>, is a generalized continued fraction, so the theorem does not apply to it.)</p>
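A small numerical illustration (my addition, not part of the answer): the displayed fraction is the limit of iterating x → 2/(3 − x), and that iteration really does converge to 1, so the infinite expression is a perfectly valid representation of a rational number.

```python
# The truncations of 2/(3 - 2/(3 - ...)) are produced by iterating
# x -> 2 / (3 - x).  The map has fixed points x = 1 and x = 2; the fixed
# point 1 is attracting (|f'(1)| = 1/2), so the iteration -- and hence
# the continued fraction -- converges to 1.
x = 0.0
for _ in range(60):
    x = 2.0 / (3.0 - x)
print(x)  # ≈ 1.0
```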