How do you calculate the volume of a tetrahedron given the lengths of all its edges?
So, what's unnatural about the complex differential equation... $f : \mathbb{C} \to \mathbb{C}$ satisfying $f'(z) = f(z)$ and $f(0)=1$?
***Some Assumptions*** I will assume that you are ok with power series being used, just not Taylor's theorem. I will also assume you will allow us to observe a solution to a DE, since you used one in your derivation. **Defn** A series of the form $\sum_{n=0}^{\infty}c_n\left(z-z_0\right)^n$ for $c_n,z,z_0\in\mathbb{C}$ is called a *power series*. **Thm** There is some $R\in[0,\infty]$ such that the power series above converges absolutely for all $z\in\mathbb{C}$ with $|z-z_0| < R$ and uniformly in $D\left(z_0,\rho\right)$ for all $\rho < R$. Further, the terms are unbounded for all $z$ with $|z-z_0| > R$. **pf** Use the geometric series' convergence. **Lemma** Inside the disk of convergence, $\sum_{n=1}^{\infty}nc_n\left(z-z_0\right)^{n-1}$ is the derivative of the power series. ***Construction*** For power series $y(z)=\sum c_nz^n$, can we find a unique solution to $y'(z)=y(z)$ in $\mathbb{C}$? We can observe that this implies $nc_n=c_{n-1}$. Hence **Defn** Let $E\left(z\right):=\sum_{n=0}^{\infty}\frac{1}{n!}z^n$. **Thm** 1) $E'=E$ 2) $E\left(z_1+z_2\right)=E\left(z_1\right)E\left(z_2\right)$ 3) $E_{\mid_{\mathbb{R}}}$ is strictly increasing and $E(\mathbb{R})=(0,\infty)$ 4) $x\mapsto E(ix)$ sends $\mathbb{R}$ onto $\mathbb{T}$. 5) There is some real $\pi>0$ such that $E\left(\frac{\pi}{2}i\right)=i$ and, for all $z_1, z_2\in\mathbb{C}$, $E\left(z_1\right)=E\left(z_2\right)$ iff $\frac{z_1-z_2}{2\pi i}\in\mathbb{Z}$ 6) $E\left(\mathbb{C}\right)=\mathbb{C}\setminus\left\lbrace 0\right\rbrace$. Note that all of these can be shown purely at the level of power series, using no complex analysis. Further, the proofs are not hard; if you want more details here, let me know. **Corollary** $E$ is a homomorphism of the additive group $\mathbb{C}$ onto the multiplicative group $\mathbb{C}\setminus\left\lbrace 0\right\rbrace$. ***Application*** If you care about loops (which I think you do!), let's observe that $\gamma(t)=e^{it}$ for $t\in[0,2\pi]$ traces out $\mathbb{T}$ (note here that $\pi$ is simply the real number we found before, not some existential thing!). We could now push this to get winding numbers. Or you could use this definition of the exponential, and the power series definition of its inverse, to get a (branchless) logarithm. In particular, you can show that the derivative of that fella is $\frac{1}{x}$. So we don't have to *define* it that way. :) ***Comments*** I agree with you that the definitions of complex exponentials feel contrived, and the logarithm is even worse. The branched logarithm is the only part of Palka that I dislike as a complex book. These definitions I can stomach, as they require no Deus ex Machina.
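To spell out the Construction step: writing $y(z) = \sum_{n\ge 0} c_n z^n$ and differentiating term by term via the Lemma, $y'=y$ forces (with the normalization $c_0 = y(0) = 1$, which is my addition here)

$$y'(z)=\sum_{n\ge 0}(n+1)c_{n+1}z^n = \sum_{n\ge 0}c_n z^n \;\Longrightarrow\; c_{n+1}=\frac{c_n}{n+1} \;\Longrightarrow\; c_n=\frac{1}{n!},$$

which is exactly the series defining $E$.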
Let f and g be two periodic functions over R with the following property: If T is a period of f, and S is a period of g, then T/S is irrational. **Conjecture**: f+g is *not* periodic. Could you give a proof or a counter example? It is easier if we assume continuity. But is it true for arbitrary real valued functions?
Suppose a finite group has the property that for every $x, y$, it follows that $(xy)^3 = x^3 y^3$. How do you prove that it is abelian? ---------- Edit: I recall that the correct exercise needed in addition that the order of the group is not divisible by 3.
Why are noncommutative, noncocommutative Hopf algebras called quantum groups? This seems to be a purely mathematical notion, and prima facie there is no quantum anywhere in it.
Why are noncommutative "algebraic" groups called quantum groups?
Discrete valuations <-> points on a curve

For a nonsingular projective curve over an algebraically closed field, there is a one-one correspondence between the points on it and the discrete valuations of its function field (i.e. the field of meromorphic functions on the curve). The correspondence is: point P -> the valuation that sends a function f to the order of the zero/pole of f at P.

Maximal ideals <-> points on a curve

At least for varieties (common zeros of several polynomials) over an algebraically closed field, there is a one-one correspondence between points on the variety and the maximal ideals of its coordinate ring; for $k^n$ itself, these are the maximal ideals of $k[x_1,\cdots,x_n]$. The correspondence is: point $P = (a_1,\cdots,a_n)$ -> the ideal of polynomials vanishing at P, which turns out to be $(x_1-a_1,\cdots,x_n-a_n)$. This is true not only for curves, but for varieties in general. (Hilbert's Nullstellensatz)

So putting these together, for nonsingular projective curves over an algebraically closed field, there is a one-one correspondence between the maximal ideals (think of them as points) and the discrete valuations of the function field. Now the situation here is analogous. You consider a "curve" whose coordinate ring is $\mathbb{Z}$, with function field $\mathbb{Q}$. The nonarchimedean valuations correspond to discrete valuations in this case. So they should capture orders of zeros/poles at some "points". What are the points? They should correspond to the maximal ideals of $\mathbb{Z}$, which are exactly the primes here.

As for $K(x)$, look at it as the function field of $K\mathbb{P}^1$. Just like the usual real/complex projective spaces, you have two affine pieces here. Let's say $K[x]$ corresponds to the piece where the second coordinate is nonzero, so the corresponding homogeneous coordinates there look like $[x,1]$. We know there is one point missing, which is $[1,0]$. For this, we change coordinates $[x,1] \to [1,1/x]$, so the piece where the first coordinate is nonzero corresponds to $K[1/x]$. The missing point corresponds to the ideal $(1/x - 0) = (1/x)$, and this is why the infinite place corresponds to $(1/x)$. Of course, a more straightforward interpretation is that for a rational function, you divide both numerator and denominator by a sufficiently high power of $x$ so that both become polynomials in $1/x$ with nonzero constant terms, times an extra factor (some power of $x$). The infinite place measures this power.
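To make "order of zero/pole at a prime" concrete, here is a minimal sketch in Python (my own illustration; the helper name `v_p` is mine, not standard notation in code):

    from fractions import Fraction

    def v_p(q, p):
        """p-adic valuation of a nonzero rational q: the order of the zero
        (if positive) or pole (if negative) of q 'at the point p' of Spec Z."""
        num, den, v = q.numerator, q.denominator, 0
        while num % p == 0:
            num //= p
            v += 1
        while den % p == 0:
            den //= p
            v -= 1
        return v

    print(v_p(Fraction(45, 8), 3))  # 2: a double zero at 3, since 45 = 3^2 * 5
    print(v_p(Fraction(45, 8), 2))  # -3: a triple pole at 2, since 8 = 2^3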
What are the differences between a (discrete) cosine transform and a (discrete) Fourier transform? I know the former is used in JPEG encoding while the latter plays a big part in signal and image processing; how related are they?
Let A and B be two matrices which can be multiplied. Then **rank(AB) <= min(rank(A), rank(B))**. I proved **rank(AB) <= rank(B)** by interpreting AB as a composition of linear maps, observing that $\ker(B) \subseteq \ker(AB)$, and using the kernel-image dimension formula. This also provides, in my opinion, a nice interpretation: unless things stabilize, under subsequent compositions the kernel can only get bigger and the image can only get smaller, in a sort of _loss of information_. How can I handle **rank(AB) <= rank(A)**? Is there a nice interpretation like the previous one?
Here is a simpler example. I claim that the function $h(x) = \sin x + \sin \pi x$ cannot possibly be periodic. Why? Suppose an equation of the form $\sin x + \sin \pi x = \sin (x+T) + \sin \pi (x+T)$ held for all $x$ and some $T > 0$. Take the second derivative of both sides with respect to $x$ (and negate) to get $\sin x + \pi^2 \sin \pi x = \sin (x+T) + \pi^2 \sin \pi(x+T).$ Subtracting the original equation and dividing by $\pi^2 - 1$ gives $\sin \pi x = \sin \pi(x+T)$, and hence also $\sin x = \sin (x+T)$. But the first of these forces $T$ to be a multiple of $2$, while the second forces $T$ to be a multiple of $2\pi$, and no positive $T$ is both, since $\pi$ is irrational. (Or is the question whether the sum _can_ be periodic?)
Suppose you want to put a probability distribution on the natural numbers for the purpose of doing number theory. What properties might you want such a distribution to have? Well, if you're doing number theory then you want to think of the prime numbers as acting "independently": knowing that a number is divisible by $p$ should give you no information about whether it's divisible by $q$. That quickly leads you to the following realization: you should choose the exponent of each prime in the prime factorization independently. So how should you choose these? It turns out that the probability distribution on the non-negative integers with maximum entropy and a given mean is a geometric distribution, as explained for example by Keith Conrad <a href="https://docs.google.com/viewer?url=http://www.math.uconn.edu/~kconrad/blurbs/analysis/entropypost.pdf">here</a>. So let's take the probability that the exponent of $p$ is $k$ to be equal to $(1 - r_p) r_p^k$ for some constant $r_p$. This gives the probability that a positive integer $n = p_1^{e_1} ... p_k^{e_k}$ occurs as $\displaystyle C \prod_{i=1}^{k} r_{p_i}^{e_i}$ where $C = \prod_p (1 - r_p)$. So we need to choose $r_p$ such that this product converges. Now, we'd like the probability that $n$ occurs to be monotonically decreasing as a function of $n$. It turns out (and this is a nice exercise) that this is true if and only if $r_p = p^{-s}$ for some $s > 1$ (since $C$ has to converge), which gives the probability that $n$ occurs as $\frac{ \frac{1}{n^s} }{ \zeta(s)}$ where $\zeta(s)$ is the zeta function. One way of thinking about this argument is that $\zeta(s)$ is the partition function of a statistical-mechanical system called the <a href="http://en.wikipedia.org/wiki/Primon_gas">Riemann gas</a>. As $s$ gets closer to $1$, the temperature of this system increases until it would require infinite energy to make $s$ equal to $1$. But this limit is extremely important to understand: it is the limit in which the probability distribution above gets closer and closer to uniform. So it's not surprising that you can deduce statistical information about the primes by studying the behavior as $s \to 1$ of this distribution.
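Here is a small simulation sketch of this construction (my own illustration, not part of the original argument): truncate to the first few primes, draw each exponent from a geometric distribution with ratio $r_p = p^{-s}$, and the sampled integers appear with frequency proportional to $n^{-s}$:

    import random
    from collections import Counter

    def sample_n(s=2.0, primes=(2, 3, 5, 7, 11, 13)):
        """Choose each prime's exponent independently: P(exponent = k) = (1-r) r^k."""
        n = 1
        for p in primes:
            r = p ** -s
            while random.random() < r:  # each success bumps the exponent by one
                n *= p
        return n

    counts = Counter(sample_n() for _ in range(200_000))
    # frequencies should scale like n^(-2): P(2)/P(1) ~ 1/4, P(3)/P(1) ~ 1/9
    print(counts[1], counts[2], counts[3])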
Let $S$ be a set of size $n$. There is an easy way to count the number of subsets with an even number of elements. Algebraically, it comes from the fact that $\displaystyle \sum_{k=0}^{n} {n \choose k} = (1 + 1)^n$ while $\displaystyle \sum_{k=0}^{n} (-1)^k {n \choose k} = (1 - 1)^n$. It follows that $\displaystyle \sum_{k=0}^{n/2} {n \choose 2k} = 2^{n-1}$. A direct combinatorial proof is as follows: fix an element $s \in S$. If a given subset has $s$ in it, take it out; otherwise, add it in. This defines a bijection between the subsets with an even number of elements and the subsets with an odd number of elements. The analogous formulas for the subsets with a number of elements divisible by $3$ or $4$ are more complicated, and divide into cases depending on the residue of $n \bmod 6$ and $n \bmod 8$, respectively. The algebraic derivations of these formulas are as follows (with $\omega$ a primitive third root of unity): observe that $\displaystyle \sum_{k=0}^{n} \omega^k {n \choose k} = (1 + \omega)^n = (-\omega^2)^n$ while $\displaystyle \sum_{k=0}^{n} \omega^{2k} {n \choose k} = (1 + \omega^2)^n = (-\omega)^n$ and that $1 + \omega^k + \omega^{2k} = 0$ if $k$ is not divisible by $3$ and equals $3$ otherwise. (This is a special case of the discrete Fourier transform.) It follows that $\displaystyle \sum_{k=0}^{n/3} {n \choose 3k} = \frac{2^n + (-\omega)^n + (-\omega^2)^n}{3}.$ $-\omega$ and $-\omega^2$ are sixth roots of unity, so this formula splits into six cases (or maybe three). Similar observations about fourth roots of unity show that $\displaystyle \sum_{k=0}^{n/4} {n \choose 4k} = \frac{2^n + (1+i)^n + (1-i)^n}{4}$ where $1+i = \sqrt{2} e^{ \frac{\pi i}{4} }$ is a scalar multiple of an eighth root of unity, so this formula splits into eight cases (or maybe four). **Question:** Does anyone know a direct combinatorial proof of these identities?
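Not a combinatorial proof, but here is a quick numerical check of the divisible-by-3 formula with $\omega = e^{2\pi i/3}$ (a throwaway sketch I added):

    from cmath import exp, pi
    from math import comb

    w = exp(2j * pi / 3)  # primitive third root of unity
    for n in range(1, 20):
        direct = sum(comb(n, k) for k in range(0, n + 1, 3))
        closed = (2 ** n + (-w) ** n + (-w ** 2) ** n) / 3
        assert abs(direct - closed) < 1e-6, (n, direct, closed)
    print("formula checks out for n = 1..19")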
Disclaimer: I am not a finitist --- but as a theoretical computer scientist, I have a certain sympathy for finitism. The following is the result of me openly speculating about what an "official" finitist response would be, based on grounds of computability. The short version is this: **(a)** It depends on what you mean by a 'number', but there's a reasonable approach which makes it reasonable to talk about finitistic approaches to real numbers; **(b)** What you can do finitistically with numbers, real, rational, or otherwise, depends on how you represent those numbers.

1. **What is a number?** Is −1 a number? Is sqrt(2) a number? Is *i* = sqrt(−1) a number? What about quaternions? --- I'm going to completely ignore this question and suggest a pragmatic, formalist approach: a "number" is an element of a "number system"; and a "number system" is a collection of expressions which you can transform or describe properties of in some given ways (*i.e.* certain given arithmetic operations) and test for certain properties (*e.g.* tests for equality, ordering, *etc.*). These expressions don't have to have a meaningful interpretation in terms of quantities or magnitudes as far as I'm concerned; *you* get to choose which operations/tests you care about.

   A finitist would demand that any operation or property be described by an algorithm which provably terminates. That is, it isn't sufficient to prove existence or universality *a la* classical logic; existence proofs must be finite constructions --- of a "number", that is, a representation in some "number system" --- and universality must be shown by a computable test.

2. **Representation of numbers:** How we represent the numbers matters. A finitist should have no qualms about rational numbers: ratios which ultimately boil down to ordered pairs. Despite this, the decimal expansions of these numbers may be infinitely long: 1/3 = 0.33333... What's going on here?

   Well, the issue is that we have two representations of the same number, one of which is finite in length (and allows us to perform computations) and another which is not finite in length. However, the decimal expansion can be easily expressed as a function: for all *k*, the *k*<sup>th</sup> decimal place after the point is '3'; so you can still characterize it precisely in terms of a finite rule.

   What's important is that there exists **some** finite way to express the number. But the way in which we choose to *define* the number (as part of a system of numbers, using some way of expressing numbers) will affect what we can do with it... there is now a question of what operations we can perform.

   --- For rationals-as-ratios, we can add/subtract, multiply/divide, and test order/equality. So this representation is a very good one for rationals.

   --- For rationals-as-decimal-expansions, we can still add/subtract and multiply/divide, by defining a new digit-function which describes how to compute the result from the decimal expansions; these will be messier than the representations as ratios. Order comparisons are still possible for *distinct* rationals; but you cannot test equality for arbitrary decimal-expansion representations, because you cannot necessarily verify that all decimal places of the difference |*a*−*b*| are 0. The best you can do in general is test "equality up to precision ε", wherein you show that |*a*−*b*| < ε, for some desired precision ε.
This is a number system which, informally, we may say has a certain amount of "vagueness"; but it is in principle completely specified --- there's nothing wrong with it in principle. It's just a matter of how you wish to define your system of arithmetic.

3. **What representation of reals?** Obviously, because there are uncountably many real numbers, you cannot represent all real numbers even if you *aren't* a finitist. But we can still express some of them. The same is true if you're a finitist: you just don't have access to as many, and/or you're restricted in what you can do with them, according to what your representation can handle.

   --- Algebraic irrational numbers such as sqrt(2) can be expressed simply like that: "sqrt(2)". There's nothing wrong with the expressions "sqrt(2) − 1" or "[1 + sqrt(5)]/2" --- they express quantities perfectly well. You can perform arithmetic operations on them perfectly well; and you can also perform ordering/equality tests by transforming them into a normal form of the type "[sum of integers and roots of integers]/[positive integer]"; if the difference of two quantities is zero, the normal form of the difference will just end up being '0'. For order comparisons, we can compute enough decimal places of each term in the sum to determine whether the result is positive or negative, a process which is guaranteed to terminate.

   --- Numbers such as π and e can be represented by decimal expansions, and computed with, in this form, as with the rational numbers. The decimal expansions can be obtained from classical equalities (*e.g.* "infinite" series, except computing only *partial* sums; a number such as e may be expressed by some finite representation of such an 'exact' formula, together with a computable function which describes how many terms of the series are required to get a correct evaluation of the first *k* decimal places). Of course, what you can do finitistically with these representations is limited in the same way as described above for the rationals; specifically, you cannot always test equality.
The [Burnside Lemma][1] looks like it should have an intuitive explanation. Does anyone have one? [1]: http://en.wikipedia.org/wiki/Burnside%27s_lemma
I ran into a problem dividing by imaginary numbers recently. I was trying to simplify: $2 \over i$ I came up with two methods, which produced different results: Method 1: ${2 \over i} = {2i \over i^2} = {2i \over -1} = -2i$ Method 2: ${2 \over i} = {2 \over \sqrt{-1}} = {\sqrt{4} \over \sqrt{-1}} = \sqrt{4 \over -1} = \sqrt{-4} = 2i$ I know from using the formula from [this Wikipedia article][1] that method 1 produces the correct result. My question is: **why does method 2 give the incorrect result**? What is the invalid step? [1]: http://en.wikipedia.org/wiki/Complex_numbers#Operations
What calculation shortcuts exist to help or speed-up mental (or paper) calculations?
What are the most effective ways of teaching kids times tables?
Art Benjamin is your man! He has many tricks to speed up mental calculation and other fun mathemagical tricks. He also wrote two books on the subject! Here is a video of him in action: http://www.youtube.com/watch?v=M4vqr3_ROIk Here is his new book: http://www.amazon.com/Secrets-Mental-Math-Mathemagicians-Calculation/dp/0307338401
**To square a number ending in 5:** Remove the ending 5. Let the resulting number be n, and compute n(n+1). Append 25 to the end of n(n+1) and that's your answer. **Example:** 85<sup>2</sup>. Here, we drop the last digit to get 8, compute 8*9 = 72, so 85<sup>2</sup> = 7225. Similarly, we can compute 115<sup>2</sup>. Here, we drop the last digit to get 11, compute 11*12 = 132, so 115<sup>2</sup> = 13225. **How does this work?:** Note that (10n + 5)<sup>2</sup> = 100n<sup>2</sup> + 100n + 25 = 100 * n(n+1) + 25.
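A two-line check of the trick (my own sketch, not part of the original answer):

    def square_ending_in_5(m):
        n = m // 10                      # drop the trailing 5
        return int(f"{n * (n + 1)}25")   # append 25 to n(n+1)

    assert all(square_ending_in_5(m) == m * m for m in range(5, 10005, 10))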
The Wikipedia article on <a href="http://en.wikipedia.org/wiki/Honeycomb_(geometry)#Space-filling_polyhedra.5B2.5D">honeycombs</a> has several examples of 3d tilings by a single polyhedron which aren't vertex-transitive. I am not positive if they are vertex-uniform, though, since (again) I don't really understand your definition and can't find a good one online. Is the following a correct definition: a tiling by copies of a single polyhedron is vertex-transitive if every vertex can be sent to every other vertex by an automorphism of the _tiling_, and is vertex-uniform if every vertex can be sent to every other vertex by an automorphism of the _polyhedron_?
I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs!

> Problem:
> -----
> For any natural number n , n<sup>3</sup> + 2n is divisible by 3.
> `This makes sense`
>
> Proof:
> --
> **Basis Step:** If n = 0, then n<sup>3</sup> + 2n = 0<sup>3</sup> + 2*0 = 0. So it is divisible by 3.
>
> **Induction Hypothesis:** Assume that for an arbitrary natural number n, n<sup>3</sup> + 2n is divisible by 3.
>
> **Inductive Step:** To prove this for n+1, first try to express ( n + 1 )<sup>3</sup> + 2( n + 1 ) in terms of n<sup>3</sup> + 2n and use the induction hypothesis. `Got it`
>
> - ( n + 1 )<sup>3</sup> + 2( n + 1 ) = ( n<sup>3</sup> + 3n<sup>2</sup> + 3n + 1 ) + ( 2n + 2 ) `Just some simplifying`
> - = ( n<sup>3</sup> + 2n ) + ( 3n<sup>2</sup> + 3n + 3 ) `simplifying and regrouping`
> - = ( n<sup>3</sup> + 2n ) + 3( n<sup>2</sup> + n + 1 ) `factored out the 3`
>
> which is divisible by 3, because ( n<sup>3</sup> + 2n ) is divisible by 3 by the induction hypothesis. `What?`

Can someone explain that last part? I don't see how you can claim ( n<sup>3</sup> + 2n ) + 3( n<sup>2</sup> + n + 1 ) is divisible by 3.
Are there 2 subsets, say, $A$ and $B$, of the naturals such that $$\sum_{n\in A} f(n) = \sum_{n\in B} f(n)$$ where $f(n)=1/n^2$? If $f(n)=1/n$ then there are many counterexamples, which is probably a consequence of the fact that the harmonic series diverges: $$\frac23 = \frac12 + \frac16 = \frac14+\frac13+\frac1{12}$$ And if $f(n)=b^{-n}$ for some base $b$ then it is true because for all $M$, $\sum_{n>M} f(n) < f(M)$. (This is just the base-$b$ representation of a real number. The case $b=2$ gives a surjection $2^{\mathbb{N}} \to [0,1]$.) So we have sort of an in-between case here. Also, what if $A$, $B$:

- are required to be finite sets?
- are required to be infinite and disjoint?
Why don't you just test the validity of this using modular arithmetic? I.e. take $n \equiv 1 \pmod 3$: then $n^3 + 2n \equiv 1 + 2 = 3 \equiv 0$. If you try $n \equiv 2 \pmod 3$, you get $8 + 4 = 12 \equiv 0$, and the case $n \equiv 0$ is immediate, so you're done. No ugly inductions (although this particular case is not so dirty).

A useful idea when thinking of induction is to think of dominoes. If you know something is true for one fixed tile, and if you know that its being true for one tile means that it's true for the neighbour on the right, then it's like knocking one over knocks them all over.
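If you want to see the residue cases checked mechanically, here is a throwaway snippet (mine, not the answerer's):

    # n^3 + 2n mod 3 depends only on n mod 3, so three cases suffice;
    # check them, plus a brute-force sweep for good measure.
    assert all((n ** 3 + 2 * n) % 3 == 0 for n in (0, 1, 2))
    assert all((n ** 3 + 2 * n) % 3 == 0 for n in range(10_000))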
I know that a superalgebra is a Z/2Z graded algebra and that it behaves nicely. I know very little physics though, so even though I know *that* the super- prefix is related to supersymmetry, I don't know what that means; is there a compelling *mathematical* reason to consider superalgebras?
Why are superalgebras so important?
I just came up with this problem yesterday.

**Problem**: Assume there is an important segment of straight line `AB` that needs to be watched at all times. A watchdog can see in one direction in front of itself and must walk at a constant *non-zero* speed at all times. (The watchdogs need not all have the same speed.) When it reaches an end of the segment, it must turn around (the turn takes no time) and keep watching the line. How many watchdogs are needed to guarantee that the line segment is watched at all times? And how (initial positions and speeds of the dogs)?

**Note**: It's clear that two dogs are not enough. I conjecture that four will suffice and three will not. For example, the configuration below fails at 7.5 seconds if `AB`'s length is 10 meters.

    Dog 1 at A walks to the right with speed 1.0 m/s
    Dog 2 between A and B walks to the right with speed 1.0 m/s
    Dog 3 at B walks to the left with speed 1.0 m/s

Or it can be illustrated as:

    A ---------------------------------------- B
    0.0 sec  1 -->            2 -->            <-- 3
    2.5 sec        1 -->                <-- 32 -->
    5.0 sec            <-- 31 -->              <-- 2
    7.5 sec  <-- 3            <-- 21 -->

Please provide your solutions, hints, or related problems, especially in higher dimensions or with looser conditions (watchdogs can walk with acceleration, etc.)
I can summarize one really basic reason, which is actually the reason I originally got interested in the definition. Take a finite-dimensional vector space $V$ of dimension $n$, and let $\left( {n \choose k} \right) = {n+k-1 \choose k}$ denote the number of multisets of size $k$ on a set of size $n$. (Multisets are like subsets except that more than one copy of a given element is possible.) Then the symmetric powers of $V$ have dimensions $\displaystyle \left( {n \choose 1} \right), \left( {n \choose 2} \right), ... $ whereas the exterior powers of $V$ have dimensions $\displaystyle {n \choose 1}, {n \choose 2}, ....$ Now here is a funny identity: it is not hard to see that ${n \choose k} = (-1)^k \left( {-n \choose k} \right)$. One way we might interpret this identity is that the $k^{th}$ exterior power of a vector space of dimension $n$ is like the $k^{th}$ symmetric power of a vector space of dimension $-n$, whatever that means. So what could that possibly mean? The answer (and I'll let you work this out for yourself, because it's fun) is to work in the category of supervector spaces! A supervector space $V$ is a direct sum $V_0 \oplus V_1$, and while one notion of dimension is to take $\dim V_0 + \dim V_1$, another (which can be motivated by thinking of $\mathbb{Z}/2\mathbb{Z}$-graded vector spaces as the category of representations of $\mathbb{Z}/2\mathbb{Z}$) is to take $\dim V_0 - \dim V_1$. So while a purely even vector space has positive dimension, a purely odd vector space has negative dimension. Then: graded-commutativity implies that the symmetric power of a purely odd vector space is the exterior power of the corresponding purely even vector space. More generally, the $k^{th}$ symmetric power of a vector space of dimension $n$ (for all integers $n$) has dimension $\left( {n \choose k} \right)$. (And, of course, the symmetric algebra of a supervector space is naturally a graded-commutative superalgebra.) (The physics connection is that symmetric powers = bosons, exterior powers = fermions, and there is a duality between the two.)
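Here is a quick numerical check (my own sketch) of the identity ${n \choose k} = (-1)^k \left( {-n \choose k} \right)$; `multichoose` is a hypothetical helper that reads the multiset coefficient through its falling-factorial formula, so that negative $n$ makes sense:

    from math import comb, factorial

    def multichoose(n, k):
        """((n choose k)) = C(n+k-1, k), computed as a product so n may be negative."""
        num = 1
        for i in range(k):
            num *= n + k - 1 - i
        return num // factorial(k)  # exact: the product is always divisible by k!

    for n in range(1, 8):
        for k in range(8):
            assert multichoose(n, k) == comb(n + k - 1, k)           # multiset count
            assert comb(n, k) == (-1) ** k * multichoose(-n, k)      # the identity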
Roots behave strangely over [complex numbers][1]. Given this, how do non-integer powers behave over negative numbers? More specifically:

- Can we define fractional powers such as $(-2)^{-1.5}$?
- Can we define irrational powers $(-2)^\pi$?

[1]: http://math.stackexchange.com/questions/1183/division-by-imaginary-number/1185#1185
Can we find f(x) given that 1-f(x) = f(-x) for all real x? I start by rearranging to: f(-x) + f(x) = 1. I can find an example such as f(x) = abs(x) that works for some values of x, but not all. Is there a method here? Is this possible?
According to [Wikipedia][1], a characteristic function completely determines the properties of a probability distribution. This means it must be unique. However, the definition given is $\varphi_X(t) = E[e^{itX}]$. Now $e^{ix}$ repeats for every $2\pi$ increase in $x$. So how can it be unique? [1]: http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29
I'm trying to read a document that applies Riemann-Roch left, right and center. I don't know this theorem or the theory it comes from, so I need to build up a bit more background before I can tackle it. Can you please recommend good books or (online) lecture notes which cover "multi-valued functions", Riemann surfaces, and similar material up to (at least) Riemann-Roch? (I also want to pick up a bit about modular forms and the relation between lattices and elliptic curves.) (I've done a bit of complex calculus, but it was all by analogy to real analysis, so I am not sure that it will really give me any head start here. Also, apologies for being so vague, but I don't know enough about this subject to be any more precise.)
Consider instead the functions g that satisfy the identity -g(x)=g(-x) for all x. If (x,g(x)) is a point of the graph of g, then (-x,-g(x)) is also a point (since g(-x)=-g(x)). Therefore, every such function g is symmetric when rotated by 180 degrees about the point (0,0). How do things change for the identity 1-f(x)=f(-x)? We merely shift the point of symmetry to (0,1/2): here the point (x,f(x)) implies the point (-x,1-f(x)).

> The function f satisfies the identity 1-f(x)=f(-x) for all real numbers x if and only if it is symmetric when rotated about the point (0,1/2) by 180 degrees.

There are going to be many of these functions; some of them will be polynomials, some of them will not.
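Equivalently, f satisfies the identity exactly when f(x) = 1/2 + g(x) for some odd function g. A tiny check of one member of that family (snippet mine):

    f = lambda x: 0.5 + x ** 3        # g(x) = x^3 is odd
    assert all(abs(1 - f(x) - f(-x)) < 1e-12 for x in (-2.0, -0.3, 0.0, 1.7))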
I am assuming that you need the error function only for real values. For complex arguments there are other approaches, more complicated than what I will be suggesting. If you're going the Taylor series route, the best series to use is formula [7.1.6][1] in Abramowitz and Stegun. It is not as prone to subtractive cancellation as the series derived from integrating the power series for e<sup>-x²</sup>. This is good only for "small" arguments. For large arguments, you can use either the asymptotic series or the continued fraction representations. Otherwise, may I direct you to [these][2] [papers][3] by S. Winitzki that give nice approximations to the error function. [1]: http://www.math.ucla.edu/~cbm/aands/page_297.htm [2]: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.6481 [3]: http://homepages.physik.uni-muenchen.de/~Winitzki/erf-approx.pdf
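For concreteness, here is my reading of the A&S 7.1.6 series, $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} e^{-x^2} \sum_{n\ge 0} \frac{2^n x^{2n+1}}{1\cdot 3\cdots(2n+1)}$, as a sketch; the term count is a tunable assumption, and you should double-check the formula against your copy:

    from math import exp, pi, sqrt, erf

    def erf_series(x, terms=40):
        total, term = 0.0, x                 # n = 0 term is x/1
        for n in range(terms):
            total += term
            term *= 2 * x * x / (2 * n + 3)  # ratio of consecutive terms
        return 2 / sqrt(pi) * exp(-x * x) * total

    assert abs(erf_series(1.0) - erf(1.0)) < 1e-12  # sanity check vs math.erf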
It's just a Fourier transform. $E[\cdot]$ is an integral against the probability distribution. The characteristic function itself is unique; the value inside the expectation integral is periodic, but so what?
I don't have enough mojo to comment on Greg's answer.

1. Greg made a silly calculational mistake: the transfer function $A(\omega)$ should be $c/(1-(1-c)e^{-i\omega})$.
2. What you want is the modulus of $A(\omega)$. Note that $\sin \omega n$ is precisely the imaginary part of $e^{i\omega n}$. Because the relation between input and output is linear, the response to $\sin\omega n$ will be the imaginary part of $A(\omega)e^{i\omega n}$. That's going to be a sinusoid with some shifting and the amplitude $|A(\omega)|$. [Here][1] is a plot for $c=1/2$.
3. To read more about this sort of thing, google "IIR filter" or "infinite impulse response".

[1]: http://www.wolframalpha.com/input/?i=plot+|.5/(1-.5+exp(-ix))|
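To make point 2 concrete, here is a short sketch (mine) evaluating the corrected transfer function and its modulus with NumPy:

    import numpy as np

    c = 0.5
    w = np.linspace(0, np.pi, 513)
    A = c / (1 - (1 - c) * np.exp(-1j * w))  # transfer function from point 1
    gain = np.abs(A)                          # response amplitude to sin(w n)
    print(gain[0])    # 1.0 at w = 0: DC passes unchanged
    print(gain[-1])   # c/(2-c) ~ 0.333 at w = pi: high frequencies damped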
Is this version of the Towers of Hanoi problem NP-complete?
How have experts estimated the amount of oil that was shooting out of that pipe in the Gulf? I bet there's some neat math or physics involved here, and some interesting assumptions considering how little concrete data are available.
How do you estimate the flow rate of one fluid into another, as with the Deepwater Horizon oil leak?
A fun question I ask students or interviewees (in engineering) is: > **This is not my question, this is an example:** > Using only what you know now, how many > cans of soda would you estimate are > produced per day (on average) in the > United States? For this question, the result doesn't matter so much as the process you use. In this theme of estimation, what's your favorite question?
What is your favorite estimation exercise?
This was really inspired by Solitaire, but a few people reacted with "oh, it's like the Towers of Hanoi, isn't it?" so I'll try to pose the problem in terms of discs here. Let's start.

On the floor in front of you are n wooden disks of sizes 1, 2, 3, ..., and n. They are placed in a line and the distances between them are integer numbers. (They are *not* placed in any particular order or at particular positions. For example, if n=3, the disk of size 1 might be at position 1000, the one of size 2 at position -2, and the one of size 3 at position 57.) Your goal is to make a tower with all n discs, consuming as little energy as possible in the process. You are allowed to move a tower whose base is a disk of size k only on top of the disk of size k+1 (which may be the top of another mini-tower). The energy you consume to perform such a move is the distance traveled by the moved mini-tower.

Now, you'd like to write a program that tells you whether the energy you have is enough to perform the task. It just needs to say Y or N. (If the answer is Y, then clearly the list of moves is proof enough that the answer is correct, so the problem is in NP. If the answer is N, there's no point in even attempting the task---you are too tired.) What's the fastest such program you can find? Is the problem NP-complete?

Here's an upper bound: $O(n2^n)$. Represent the initial state by listing the (integer) positions of the n disks. A move affects the state by deleting a number in this list, and its cost is the absolute difference between the deleted number and the number coming after it. Clearly there are $2^n$ states with at most $n$ moves each. (See [my blog post][1] for an example run of this algorithm if it's not clear. The description there is in terms of cards.)

[1]: http://rgrig.blogspot.com/2010/07/solitaire.html
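To make the upper bound concrete, here is a straightforward transcription (my sketch, following the state/move description above) of the $O(n2^n)$ dynamic program as a bitmask memoization; `min_energy` returns the cheapest total cost, so the Y/N question is just a comparison against your energy budget:

    from functools import lru_cache

    def min_energy(pos):
        """pos[i] = position of the disk of size i+1. A move deletes a non-last
        surviving entry at a cost equal to its distance to the next survivor."""
        n = len(pos)

        @lru_cache(maxsize=None)
        def best(alive):  # bitmask of entries still in the list
            idx = [i for i in range(n) if alive >> i & 1]
            if len(idx) == 1:
                return 0
            return min(
                abs(pos[idx[j]] - pos[idx[j + 1]]) + best(alive & ~(1 << idx[j]))
                for j in range(len(idx) - 1)
            )

        return best((1 << n) - 1)

    print(min_energy([1000, -2, 57]))  # the n = 3 example from the statement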
"How many estimation questions are asked in interviews across the world during a typical 24h period?"
The generalized mean (power mean) with exponent $p$ of $n$ numbers $x_1, x_2, \ldots, x_n$ is defined as $$ \bar x = \left(\frac{1}{n} \sum_{i=1}^n x_i^p\right)^{1/p}. $$ This is equivalent to the harmonic mean, arithmetic mean, and root mean square for $p = -1$, $p = 1$, and $p = 2$, respectively. Also, its limit at $p = 0$ is equal to the geometric mean. When should the different means be used? I know the harmonic mean is useful when averaging speeds, and the plain arithmetic mean is certainly used most often, but I've never seen any uses explained for the geometric mean or root mean square. (Although the standard deviation is the root mean square of the deviations from the arithmetic mean for a list of numbers.)
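For reference, a small sketch (mine) computing the named special cases from the definition:

    from math import prod

    def power_mean(xs, p):
        n = len(xs)
        if p == 0:                      # limiting case: the geometric mean
            return prod(xs) ** (1 / n)
        return (sum(x ** p for x in xs) / n) ** (1 / p)

    xs = [2.0, 8.0]
    for p in (-1, 0, 1, 2):             # HM, GM, AM, RMS
        print(p, power_mean(xs, p))     # 3.2, 4.0, 5.0, ~5.83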
Which average to use? (RMS vs. AM vs. GM vs. HM)
WolframAlpha provides a solution to this (and many other) functional equations: http://www.wolframalpha.com/input/?i=1-f(x)+%3D+f(-x)
I am in the process of trying to learn algebraic geometry via schemes and am wondering if there are simple motivating examples of why you would want to consider these structures. I think my biggest issue is the following: I understand (and really like) the idea of passing from a space to functions on a space. In passing from $k^n$ to $R:=k[x_1,\ldots,x_n]$, we may recover the points by looking at the maximal ideals of $R$. But why consider $\operatorname{Spec} R$ instead of $\operatorname{MaxSpec} R$? Why is it helpful to have non-closed points that have no analog to points in $k^n$? A Wikipedia article mentions that the Italian school used a (vague) notion of a generic point to prove things. Is there a (relatively) simple example where we can see the utility of non-closed points?
To start off, the [discussion][1] at Sbseminar has comments from lots of people who actually know algebraic geometry, and if anything I say contradicts something they say, please trust them and not me.

One reason is that you lose the functoriality of $\operatorname{Spec}$ if you stick to $\operatorname{MaxSpec}$: the inverse image of a maximal ideal is not necessarily maximal. Nevertheless, if you stick to schemes of finite type over a field, this is true (it's basically a version of the Nullstellensatz). In particular, in Serre's FAC paper he defines a "variety" by gluing together regular affine algebraic sets in the sense of classical algebraic geometry. But this is less general.

One natural example of a scheme which is not of finite type over a field is simply $\operatorname{Spec} \mathbb{Z}$. Then given a scheme $X$ over it (well, admittedly every scheme $X$ is a scheme over $\operatorname{Spec} \mathbb{Z}$ in a canonical way), the fibers at the closed points of $\operatorname{Spec} \mathbb{Z}$ are still interesting and basically amount to studying polynomial equations over finite fields (when $X \to \operatorname{Spec} \mathbb{Z}$ is of finite type).

As a (simple) example of how generic points can be used, one can prove that a coherent sheaf on a noetherian integral scheme is free on a dense open subset. Why? Because it must be free at the generic point (since the local ring there is a field), and it is a general fact that two coherent sheaves whose stalks at a point are isomorphic are isomorphic in a neighborhood of that point. (This is true actually for sheaves of finite presentation over a ringed space.)

[1]: http://sbseminar.wordpress.com/2009/08/06/algebraic-geometry-without-prime-ideals/
In my geometry class last year, I remember putting down the statement in a column proof that "all isosceles triangles are always and only similar to other isosceles triangles". I do not remember what I was trying to prove, but I do remember that I was stressed, and that was the only thing I could think of, so I made a guess, thinking I would probably get the proof wrong on my test. Funnily enough, I didn't get the proof wrong, and I was wondering if anyone could show a proof of why this would be true. I mean, it makes sense, but I do not see any way to prove it. Could you please explain how this is true?
Are isosceles triangles always and only similar to other isosceles triangles?
See my answer [here](http://math.stackexchange.com/questions/942/meaning-of-closed-points-of-a-scheme/984#984) for a brief discussion of how points that are closed in one optic (rational solutions to a Diophantine equation, which are closed points on the variety over $\mathbb{Q}$ attached to the Diophantine equation) become non-closed in another optic (when we clear denominators and think of the Diophantine equation as defining a scheme over $\mathbb{Z}$). In terms of rings (and connecting to Qiaochu's answer), under the natural map $\mathbb{Z}[x_1,\ldots,x_n] \to \mathbb{Q}[x_1,\ldots,x_n]$, the preimages of maximal ideals are prime, but not maximal.

These examples may give the impression that non-closed points are most important in arithmetic situations, but actually that is not the case. The ring $\mathbb{C}[t]$ behaves much like $\mathbb{Z}$, and so one can have the same discussion with $\mathbb{Z}$ and $\mathbb{Q}$ replaced by $\mathbb{C}[t]$ and $\mathbb{C}(t)$. Why would one do this? Well, suppose you have an equation (like $y^2 = x^3 + t$) which you want to study, where you think of $t$ as a parameter. To study the generic behaviour of this equation, you can think of it as a variety over $\mathbb{C}(t)$. But suppose you want to study the geometry for one particular value $t_0$ of $t$. Then you need to pass from $\mathbb{C}(t)$ to $\mathbb{C}[t]$, so that you can apply the homomorphism $\mathbb{C}[t] \to \mathbb{C}$ given by $t \mapsto t_0$ (specialization at $t_0$). This is completely analogous to the situation considered in my linked answer, of taking integral solutions to a Diophantine equation and then reducing them mod $p$.

What is the upshot? Basically, any serious study of varieties in families (whether arithmetic families, i.e. schemes over $\mathbb{Z}$, or geometric families, i.e. parameterized families of varieties) requires scheme-theoretic techniques and the consideration of non-closed points. (Of course, serious such studies were made by the Italian geometers, by Lefschetz, by Igusa, by Shimura, and by many others before Grothendieck's invention of schemes, but the whole point of schemes is (among other things) to clarify what came before and to give a precise and workable theory that encompasses all of the contexts considered in the "old days", one that is also more systematic and more powerful than the older techniques.)
There are many types of numbers, though the natural numbers, the integers, the rationals, the decimals, the reals, and the complex numbers form a nice self-complete expository whole. Hopefully, the following block(s) of text aren't too poorly formatted.

1) The set of **natural numbers** consists of the numbers with which we count: {0,1,2,3,...}. As noted in some of the other answers, some people think that 0 is not a natural number (see [one of my desktop backgrounds][1]). Whether or not it is, is a matter of taste. What is not a matter of taste are the defining properties of the natural numbers. In particular, there is a (binary) operation called +, which takes two numbers a and b and spits out a third number a+b, the sum of a and b. It satisfies the usual properties that you would expect from counting: it is commutative (the order of the summands doesn't matter, i.e. a+b=b+a) and associative (the order in which you add summands to each other doesn't matter, i.e. (a+b)+c=a+(b+c), so you can always write a+b+c for a specific number). There is also an order relation < such that for any two different numbers a and b either a<b or b<a, and we never have a<a. The order plays nicely with addition in that a < b implies a+c < b+c. We also have a cancellation property: if a < c, then there exists a number b such that a+b=c. The order also satisfies one unobvious property, called the well-ordering principle, which states that if you take any (nonempty) collection of natural numbers, there is a smallest one, i.e. a number that is smaller than all the other numbers in the collection. The well-ordering principle, together with cancellation, implies that there is a smallest number, and that any two consecutive numbers differ by the same number. From this it follows either that the smallest number is 0 (in the sense that 0 is the number such that 0+a=a+0=a) or that it is 1, where 1 is the common difference between any two consecutive numbers. Choosing one or the other of the two possibilities defines the natural numbers uniquely as either {0,1,2,...} or {1,2,...}. It is a fun exercise to show that the above axioms are equivalent to the axioms of Peano (note that the axioms of Peano state nothing about addition, so in fact BOTH {0,1,2,...} and {1,2,...} satisfy the axioms; how we define addition with respect to the smallest element is what specifies one of the two sets).

2) The set of **integers**, also known as the whole numbers, is {..., -3, -2, -1, 0, 1, 2, 3, ...}. It arises as we try to "complete" the arithmetic of the natural numbers in the following sense. For any two numbers a < b we have a number b-a such that (b-a)+a=b; thus we have subtraction of a smaller number from a bigger number. We do not have such numbers b-a if a>b (i.e. there is no natural number which when you add a to it you get b if a>b), but from experience we see that numbers such as ..., -3, -2, -1 seem to be sensible to add and subtract as we try to solve equations and whatnot. But while numbers such as b-a when a < b are defined, the same number can be represented in different ways. For example, 2 is both 5-3 and 8-6. So we need to answer the question of when b-a=d-c for a < b, c < d, and by algebra the answer is that they're equal whenever b+c=d+a. Hence, we can define all the integers, positive and negative, by taking them to be pairs of natural numbers (a,b), with the caveat that (a,b)=(c,d) whenever a+d=b+c. For example, the integer -2 consists of (among others) the pairs (5,3), (8,6), (2,0).
This all seems like much formal ado about intuitive nothings, but that is only because we can represent any integer either as the pair (a,0), which we write as -a and think of as the integer which when you add a to it you get 0, or as the pair (0,a), which we write simply as a and think of as the natural number a. Hence what we actually do arithmetic with are the pairs {..., (3,0), (2,0), (1,0), (0,0), (0,1), (0,2), (0,3), ...}, but the fact that we only need to throw in a negative sign to make the arithmetic meaningful is precisely because these special representative pairs exist. Exercise: define the addition and the order on the pairs of natural numbers for the above to work. (We lose, of course, the property that any set of integers has a smallest integer [e.g. the set of negative integers], but the principle still holds for sets of positive integers.)

3) The **rational numbers (or fractions)** are obtained by the exact same process of completion as above, except with respect to multiplication. The fraction a/b, where a and b are integers, corresponds to the number which when multiplied by b gives you a. Note that a/b=c/d if and only if ad=bc, which mirrors the fact that a-b=c-d if and only if a+d=b+c.

4) The real numbers are best understood as coming from **decimals (or decimal expansions)**, so what are decimals and where do they come from? Rational numbers as fractions are hard to compare to one another. Is 7/5 bigger or smaller than 4/3? It is bigger, because 7*3 > 4*5. In general, given a/b and c/d (with positive denominators), we have a/b < c/d, a/b = c/d, or a/b > c/d according to whether ad < bc, ad = bc, or ad > bc. But that's a tedious process; is there some way of writing numbers down so that it is clearer which is bigger than which? Better, can the process of writing them down be easier than the process of checking who's bigger than who by cross-multiplication? The answer is yes, and we only need to consider the usual way in which we write natural numbers. We can express the number 1729 compactly based on the distributive properties of addition and exponentiation: 1729=1000+700+20+9=10^3+7*10^2+2*10+9, where the digits 1,2,3,4,5,6,7,8,9 are the first nine successors of zero, and zero is denoted by 0. So we can write rational numbers with denominator 10^n as a.b=a+b/10^n, where a and b are natural numbers and b is smaller than 10^n. For example, 1729/100 can be written as 17.29, where 17 and 29 are natural numbers with 29<100. Now, for any rational number c/d and any denominator 10^n, there exists a rational number with denominator 10^n that's closest to and no bigger than c/d (this is intuitively obvious and not that hard to formalize). Then we can say that c/d ~ a.b where a+b/10^n is that rational number. For example, the closest such number with denominator 10 to 1/3 is 3/10=.3, with denominator 100 it is 33/100=.33, and so on. Then the statement that the infinite string 0.3333... equals 1/3 means that the closest rational number less than or equal to 1/3 with denominator 10^n is 333...3/10^n, where you have n 3's. From this, one can prove that rational numbers have periodic expansions, i.e. that the string a.b has a repeating sequence of digits from some point onward. (Even better, we can compute decimal expansions by the usual process of long division by just adding a decimal point, and we can efficiently compare numbers by looking at the expansions up to the first point of difference.)

5) Real numbers come from taking arbitrary infinite sequences of digits as decimals.
From the above, we know that decimals actually encode sequences of rational numbers with denominators 1, 10, 10^2, 10^3, ... When should two such sequences be equivalent? Consider the standard question of whether .999... = 1. The decimal expansion of 1 is just 1.000..., because the sequence of rational numbers with denominators 10, 10^2, 10^3, ... has to be a sequence of rational numbers less than or equal to 1. If we drop the requirement that the numbers be less than or equal to 1, keep the requirement that the sequence of rational approximations is non-decreasing, and allow each approximation to be either the closest number less than 1 or 1 itself, then we obtain the other expansion 1 = .999... If you had a measuring instrument that was accurate to only n decimal places, then upon measuring .999... and 1.000... you would get the same reading. So it turns out that with finite-precision measuring you can't tell the difference between certain decimal expansions, so we might as well say those two decimal expansions are equal. This is the process of completion by Cauchy sequences put into words. And those expansions then are the real numbers, and they have addition, multiplication, subtraction, division and ordering just like the rational numbers, except that they're complete in that every Cauchy sequence of real numbers converges.

6) **Complex numbers.** It turns out that not every polynomial with real coefficients has a real root: -1 does not have a real square root, so we wish to throw a root of -1 in to algebraically complete the real numbers, and thus get the complex numbers. The algebraic construction is best done by explaining concepts such as rings and ideals and quotients and homomorphisms, which is too much machinery. Instead, consider Euclidean geometry. It is a(n advanced) fact that you can coordinatize it with real numbers, i.e. you can represent points by pairs of real numbers (a,b), lines by the solutions of the two-variable equation ax+by=c, lengths by the square root of (a-c)^2+(b-d)^2, etc. Adding the pairs (a,b) and (c,d) gives you (a+c, b+d), so addition of pairs corresponds to translations. Another way of interpreting a pair (a,b) is in polar form (r, theta), where r is the distance of (a,b) from the origin and theta is the angle from the x-axis to (a,b). Then we can think of (a,b) ~ (r, theta) as the dilation-and-rotation with center (0,0) sending (1,0) to (a,b), i.e. a dilation by a factor of r and a rotation by an angle of theta. But this then defines a multiplication of vectors, which turns out to distribute over addition. Clearly we have inverse rotations/dilations and inverse translations, giving us subtraction and division. Finally, we can identify the real numbers with the pairs (r,0), which are only a dilation without any rotation. These pairs are then our complex numbers, and you can deduce all their properties from the above description. Note that the vector (0,1) multiplied by (0,1) gives (-1,0), since it corresponds to a rotation of 90 degrees followed by a rotation of 90 degrees, i.e. a rotation of 180 degrees, which takes (1,0) to (-1,0). See Qiaochu's answer for more details.

[1]: http://www.marriedtothesea.com/031107/zero.gif
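Here is a minimal runnable sketch (entirely my own, not from the answer) of the pairs-of-naturals construction from 2), including the addition and order asked for in the exercise:

```python
class Int:
    """An integer as a pair (a, b) of naturals, thought of as a - b."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):
        # (a, b) = (c, d)  iff  a + d = b + c
        return self.a + other.b == self.b + other.a

    def __add__(self, other):
        # (a, b) + (c, d) = (a + c, b + d)
        return Int(self.a + other.a, self.b + other.b)

    def __lt__(self, other):
        # (a, b) < (c, d)  iff  a + d < b + c
        return self.a + other.b < self.b + other.a

minus_two = Int(3, 5)
print(minus_two == Int(6, 8), minus_two == Int(0, 2))  # True True
print(Int(3, 5) + Int(8, 6) == Int(0, 0))              # (-2) + 2 == 0: True
```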
I've solved for it by writing a computer program, but I was wondering whether there is a mathematical equation that you could use to solve for the nth prime?
Is there a known mathematical equation to find the nth prime?
* $\forall x,\ \frac{0}{x} = 0 \Rightarrow \frac{0}{0} = 0$?
* $\forall x,\ \frac{x}{x} = 1 \Rightarrow \frac{0}{0} = 1$?
* Moreover, since `2 * 0 = 0` and `3 * 0 = 0`, if `0/0` had a single definite value `k`, then dividing through by 0 would give `2 = k = 3`. Since there is no reasonable value `0/0` can have, `0/0` must be left undefined.
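Making the third bullet fully explicit (a worked version of the same argument): if $\frac{0}{0} = k$ for a single fixed $k$, then

$$2 \cdot 0 = 3 \cdot 0 \;\Rightarrow\; \frac{2 \cdot 0}{0} = \frac{3 \cdot 0}{0} \;\Rightarrow\; 2k = 3k \;\Rightarrow\; k = 0,$$

while the second bullet would force $k = 1$. No single value is consistent, so $0/0$ is left undefined.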
Below is my geometric understanding of why irrational powers of negatives are difficult to define. As such it is probably not rigorous and may be wrong.

The way irrational powers of the real numbers are usually defined is by limits of fractional powers. For complex numbers, the same is true, except the limiting process is more complicated. We can of course coast as usual on our real-numbers result and see that we only need to define irrational powers on the unit circle, since every other point is some positive real multiple of a point on the unit circle.

Now in general z -> z^n wraps the circle around itself n times. What does z -> z^(1/n) do? Well, it's not clear, since each point has n possible points it could have come from; in particular, if you partition the circle into n arcs of length 2pi/n, each of those gets mapped to the full circle. Once you choose a starting arc though, z -> z^m maps the starting arc to other arcs in the following way: partition your starting arc of length 2pi/n into arcs of length 2pi/(nm), and then each of the little arcs gets mapped to a big arc that is 2pi*m/n away from the previous big arc. The reason you choose arcs, as opposed to weirdly distributed discrete sets, is that you want exponentiation to be continuous, and hence you want the inverse-image sets to be connected, as the whole circle is.

There is no problem with fractional powers of -1: you have n choices for the starting arc. But if you want your exponentiation to have some semblance of continuity with respect to the exponent, then you have to be choosing branches (arcs) for z -> z^(1/n) that cohere, i.e. such that for very large n the roots get closer and closer together. This is done by requiring that the nth root of 1 is always 1, which makes all of the arcs into neighborhoods of 1. But this means that all the points with argument between 0 and pi get mapped into the half-arc above 1, and all the points with argument between pi and 2pi get mapped into the half-arc below 1. Hence if you approach -1 from above and from below, the two limits of the nth root will be different, and hence you cannot have continuous exponentiation at -1. As a result (you can do some more visualization if you wish), the irrational powers of -1 cannot even be defined as limits of rational powers.
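A quick numerical illustration of that last point (my own sketch): Python's `cmath` uses the principal branch (the one fixing 1), so cube roots computed just above and just below $-1$ disagree.

```python
import cmath

eps = 1e-9
above = complex(-1, eps)   # approach -1 from the upper half-plane
below = complex(-1, -eps)  # approach -1 from the lower half-plane

# Principal cube roots: exp(log(z)/3) with the branch fixing 1.
r_above = cmath.exp(cmath.log(above) / 3)
r_below = cmath.exp(cmath.log(below) / 3)

print(r_above)  # ~ exp(+i*pi/3) = 0.5 + 0.866...j
print(r_below)  # ~ exp(-i*pi/3) = 0.5 - 0.866...j
```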
The generalized mean (power mean) with exponent $p$ of $n$ numbers $x_1, x_2, \ldots, x_n$ is defined as $$ \bar x = \left(\frac{1}{n} \sum x_i^p\right)^{1/p}. $$ This is equivalent to the harmonic mean, arithmetic mean, and root mean square for $p = -1$, $p = 1$, and $p = 2$, respectively. Also its limit at $p = 0$ is equal to the geometric mean. When should the different means be used? I know harmonic mean is useful when averaging speeds and the plain arithmetic mean is certainly used most often, but I've never seen any uses explained for the geometric mean or root mean square. (Although standard deviation is the root mean square of the deviations from the arithmetic mean for a list of numbers.)
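As a concrete companion (my own sketch, not from the question), here is the power mean for several $p$, with $p = 0$ handled as the geometric-mean limit:

```python
import math

def power_mean(xs, p):
    """Generalized (power) mean of positive numbers xs with exponent p.

    p = -1: harmonic mean, p = 1: arithmetic mean, p = 2: root mean square;
    the limit p -> 0 is the geometric mean, handled as a special case.
    """
    n = len(xs)
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** p for x in xs) / n) ** (1 / p)

xs = [40.0, 60.0]
for p in (-1, 0, 1, 2):
    print(p, power_mean(xs, p))
# -1: 48.0 (harmonic), 0: ~48.99 (geometric), 1: 50.0, 2: ~50.99 (RMS)
```

For what it's worth, the standard examples are: harmonic mean for averaging rates over a fixed distance, geometric mean for averaging growth factors, and RMS for quantities that enter quadratically (power, standard deviation).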
Let $A$ be a commutative ring. Suppose $P \subset A$ is a minimal prime ideal. Then it is a theorem that $P$ consists of zero-divisors. This can be proved using localization, when $A$ is noetherian: $A_P$ is local artinian, so every element of $PA_P$ is nilpotent. Hence every element of $P$ is a zero-divisor. (As Matt E has observed, when $A$ is nonnoetherian, one can still use a similar argument: $PA_P$ is the only prime in $A_P$, hence is the radical of $A_P$ by elementary commutative algebra.) Can this be proved without using localization?
As other posters have indicated, the problem is that the complex logarithm isn't well-defined on $\mathbb{C}$. This is related to my comments in <a href="http://math.stackexchange.com/questions/1183/division-by-imaginary-number">a recent question</a> about the square root not being well-defined (since of course $\sqrt{z} = e^{ \frac{\log z}{2} }$). One point of view is that the complex exponential $e^z : \mathbb{C} \to \mathbb{C}$ does not really have domain $\mathbb{C}$. Due to periodicity it really has domain $\mathbb{C}/2\pi i \mathbb{Z}$. So one way to define the complex logarithm is not as a function with range $\mathbb{C}$, but as a function with range $\mathbb{C}/2\pi i \mathbb{Z}$. Thus for example $\log 1 = 0, 2 \pi i, - 2 \pi i, ...$ and so forth. So what are we doing when we don't do this? Well, let us suppose that for the time being we have decided that $\log 1 = 0$. This is how we get other values of the logarithm: using power series, we can define $\log (1 + z)$ for any $z$ with $|z| < 1$. We can now pick any number in this circle and take a power series expansion about _that_ number to get a different power series whose circle of convergence is somewhere else. And by repeatedly changing the center of our power series, we can compute different values of the logarithm. This is called _analytic continuation_, and typically it proceeds by choosing a (say, smooth) path from $1$ to some other complex number and taking power series around different points in that path. The problem you quickly run into is that the value of $\log z$ depends on the choice of path from $1$ to $z$. For example, the path $z = e^{2 \pi i t}, 0 \le t \le 1$ is a path from $1$ to $1$, and if you analytically continue the logarithm on it you will get $\log 1 = 2 \pi i$. And that is not what you wanted. (This is essentially the same as the contour integral of $\frac{1}{z}$ along this contour.) One way around this problem is to arbitrarily choose a ray from the origin and declare that you are not allowed to analytically continue the logarithm through this ray. This is called choosing a branch cut, and it is not canonical, so I don't like it. There is another way to resolve this situation, which is to consider the <a href="http://en.wikipedia.org/wiki/Riemann_surface">Riemann surface</a> $(z, e^z) \subset \mathbb{C}^2$ and to think of the logarithm as the projection to the first coordinate from this surface to $\mathbb{C}$. So all the difficulties we have encountered above have been due to the fact that we have been trying to pretend that this projection has certain properties that it doesn't have. A closed path like $z = e^{2\pi i t}$ in which the logarithm starts and ends with different values corresponds to a path on this surface which starts and ends at different _points_, so there is no contradiction. This was Riemann's original motivation for defining Riemann surfaces, and it is this particular Riemann surface that powers things like the residue theorem.
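To see that closed-path value appear numerically (my own sketch, separate from the answer): integrating $1/z$ once around the unit circle recovers $2\pi i$, the amount by which $\log$ jumps.

```python
import cmath

# Numerically integrate dz/z around the unit circle z = e^{2*pi*i*t}.
N = 10000
total = 0.0 + 0.0j
for k in range(N):
    t0, t1 = k / N, (k + 1) / N
    z0 = cmath.exp(2j * cmath.pi * t0)
    z1 = cmath.exp(2j * cmath.pi * t1)
    zm = cmath.exp(2j * cmath.pi * (t0 + t1) / 2)  # midpoint rule
    total += (z1 - z0) / zm

print(total)          # ~ 6.283...j
print(2j * cmath.pi)  # 2*pi*i for comparison
```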
In part of my research, the following problem has come up. Consider the system of equations (in complex numbers)

z^b w^c = 1
z^d w^e = 1.

I am interested in the solution set when we restrict both z and w to be ath roots of unity, for some positive integer a. Of course, one immediately sees that (z,w) = (1,1) is a solution.

>What are some nice necessary and sufficient conditions on a, b, c, d, and e which guarantee that (z,w) = (1,1) is the ONLY solution?

To give an idea of the flavor of answer I'd be most happy with, one must have gcd(a,b,d) = gcd(c,e) = 1, because if z is any gcd(a,b,d)th root of 1 (which is necessarily an ath root of 1), then (z,1) is a solution to both equations. It also turns out that z and w must both be gcd(a, be-cd)th roots of 1. I'd love to have an answer like "gcd(a, be-cd) = gcd(a,b,d) = gcd(c,e) = 1 is necessary and sufficient", with, perhaps, a few more estimates on gcd terms.

This problem can also be generalized (and I am interested in that case as well). Suppose you are given 3 equations

z^a w^b = 1
z^c w^d = 1
z^e w^f = 1,

with z and w complex numbers of modulus 1.

>What are necessary and sufficient conditions on a, b, c, d, e and f which guarantee that the only simultaneous solution is (z,w) = (1,1)?

The previous problem is a special case of this (which comes from setting b = 0: clearly then z must be an ath root of unity, and it turns out that if b is 0, using the fact that gcd(d,f)=1, one can show w must also be an ath root of unity). And please feel free to retag as appropriate! Thank you in advance.
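For experimenting with candidate gcd conditions, here is a brute-force checker (my own sketch): writing $z = e^{2\pi i j/a}$ and $w = e^{2\pi i k/a}$, the first system says exactly that $bj + ck \equiv dj + ek \equiv 0 \pmod a$, so the search is pure integer arithmetic.

```python
def only_trivial_solution(a, b, c, d, e):
    """True iff (z, w) = (1, 1) is the only pair of a-th roots of unity
    with z^b w^c = 1 and z^d w^e = 1.

    With z = exp(2*pi*i*j/a) and w = exp(2*pi*i*k/a), the equation
    z^b w^c = 1 holds iff a divides b*j + c*k, similarly for the other.
    """
    for j in range(a):
        for k in range(a):
            if (b * j + c * k) % a == 0 and (d * j + e * k) % a == 0:
                if (j, k) != (0, 0):
                    return False
    return True

# Scan small parameters to test a conjectured gcd criterion:
print(only_trivial_solution(5, 1, 0, 0, 1))  # True: z = w = 1 is forced
print(only_trivial_solution(6, 2, 0, 0, 1))  # False: z = -1, w = 1 also works
```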
Your question: given that X and Z are independent, X is Gaussian (I'll use "normal"), and Y = X+Z, prove that Y is normal iff Z is normal. Right?

As you observed, one direction is easy: *if* Z is normal, then so is Y=X+Z. So for the other direction, assume that Y is normal. We need to prove that Z is normal too. Perhaps there's an even easier way, but it's straightforward to use [characteristic functions](http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29), which completely characterise distributions. Because X and Z are independent, $ \varphi_Y(t) = E[e^{itY}] = E[e^{it(X+Z)}] = E[e^{itX}]E[e^{itZ}]$, and so, $ \varphi_Z(t) = E[e^{itZ}] = E[e^{itY}]/E[e^{itX}] $. This means that Z has exactly the right characteristic function for a normal variable, and hence it's normal.

---

More interestingly and much more generally, there is a theorem of Cramér (e.g. see [here](http://books.google.com/books?id=B7Ch-c2G21MC&pg=PA205&dq=cramer+theorem)) which says that if X and Z are independent and X+Z is normally distributed, then both X and Z are!
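As an empirical companion (my own sketch; assumes `numpy` is available): estimate $\varphi_Y/\varphi_X$ from samples of a known case and compare it with the characteristic function of the normal it should equal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.normal(0.0, 1.0, n)   # X ~ N(0, 1)
Z = rng.normal(0.5, 2.0, n)   # Z ~ N(0.5, 4), independent of X
Y = X + Z                     # hence Y ~ N(0.5, 5)

for t in (0.2, 0.5, 1.0):
    phi_X = np.mean(np.exp(1j * t * X))
    phi_Y = np.mean(np.exp(1j * t * Y))
    ratio = phi_Y / phi_X                        # should estimate phi_Z(t)
    exact = np.exp(1j * t * 0.5 - 4 * t**2 / 2)  # cf of N(0.5, 4)
    print(t, ratio, exact)
```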
According to [Wikipedia][1], a characteristic function completely determines the properties of a probability distribution, which means it must be unique. However, the definition given is $\varphi_X(t) = E[e^{itX}]$. Now $e^{iz}$ repeats for every $2\pi$ increase in $z$. So how can it be unique?

[1]: http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29
See my answer [here](http://math.stackexchange.com/questions/942/meaning-of-closed-points-of-a-scheme/984#984) for a brief discussion of how points that are closed in one optic (rational solutions to a Diophantine equation, which are closed points on the variety over Q attached to the Diophantine equation) become non-closed in another optic (when we clear denominators and think of the Diophantine equation as defining a scheme over Z). In terms of rings (and connecting to Qiaochu's answer), under the natural map Z[x_1,...,x_n] --> Q[x_1,...,x_n], the preimages of maximal ideals are prime, but not maximal.

These examples may give the impression that non-closed points are most important in arithmetic situations, but actually that is not the case. The ring C[t] behaves much like Z, and so one can have the same discussion with Z and Q replaced by C[t] and C(t). Why would one do this? Well, suppose you have an equation (like y^2 = x^3 + t) which you want to study, where you think of t as a parameter. To study the generic behaviour of this equation, you can think of it as a variety over C(t). But suppose you want to study the geometry for one particular value t_0 of t. Then you need to pass from C(t) to C[t], so that you can apply the homomorphism C[t] --> C given by t |--> t_0 (specialization at t_0). This is completely analogous to the situation considered in my linked answer, of taking integral solutions to a Diophantine equation and then reducing them mod p.

What is the upshot? Basically, any serious study of varieties in families (whether arithmetic families, i.e. schemes over Z, or geometric families, i.e. parameterized families of varieties) requires scheme-theoretic techniques and the consideration of non-closed points. (Of course, serious such studies were made by the Italian geometers, by Lefschetz, by Igusa, by Shimura, and by many others before Grothendieck's invention of schemes, but the whole point of schemes is to clarify what came before and to give a precise and workable theory that encompasses all of the contexts considered in the "old days", and is also more systematic and more powerful than the older techniques.)
What is the name for a shape that is like a capsule, but with two different radii?
Here is a counterexample. Let $a, b, c \in \mathbb{R}$ be linearly independent over $\mathbb{Q}$. Let $\text{span}(x, y, z, ...)$ be the $\mathbb{Q}$-vector space in $\mathbb{R}$ spanned by $x, y, z, ...$. Let $AB = \text{span}(a, b), BC = \text{span}(b, c), AC = \text{span}(a, c)$. And for a subset $S$ of $\mathbb{R}$, let $\chi_S$ denote the characteristic function of $S$. Now define $\displaystyle f(x) = \chi_{AB} - 2 \chi_{BC}$ and $\displaystyle g(x) = 3 \chi_{AC} + 2 \chi_{BC}.$ Then $f$ has period set $\text{span}(b)$, $g$ has period set $\text{span}(c)$, and $f + g$ has period set $\text{span}(a)$. (I am not sure if the coefficients are necessary; they're just precautions.) Are you still interested in the continuous case?

---

(Old answer below. I slightly misunderstood the question when I wrote this.)

Here is a simpler example. I claim that the function $h(x) = \sin x + \sin \pi x$ cannot possibly be periodic. Why? Suppose an equation of the form $\sin x + \sin \pi x = \sin (x+T) + \sin \pi (x+T)$ held for all $x$ and some $T > 0$. Take the second derivative of both sides with respect to $x$ to get $\sin x + \pi^2 \sin \pi x = \sin (x+T) + \pi^2 \sin \pi(x+T).$ Subtracting the first equation from this one gives $(\pi^2 - 1) \sin \pi x = (\pi^2 - 1) \sin \pi(x+T)$, hence $\sin \pi x = \sin \pi(x+T)$, and then $\sin x = \sin (x+T)$ as well. So $T$ would be a common period of $\sin x$ and $\sin \pi x$, i.e. a positive multiple of both $2\pi$ and $2$, which is impossible since $\pi$ is irrational. (Or is the question whether the sum _can_ be periodic?)
I found this question a while ago on a SAT practice exam or something, can't quite remember. So given an acute triangle $ABC$ with $P$ a point inside it and $AP$, $BP$, and $CP$ meeting the opposite sides at $D$, $E$, and $F$ respectively: ![alt text][1] How can you find the area of triangle $ABC$ given the areas of triangles $x$, $y$, and $z$? [1]: https://i.stack.imgur.com/vgHvS.png
Here is an article at Mathworld on [Circle-Circle Tangents][1]. Perhaps "CirclesWithTangents"? ![Circle-Circle Tangents][2] [1]: http://mathworld.wolfram.com/Circle-CircleTangents.html [2]: http://mathworld.wolfram.com/images/eps-gif/CircleCircleTangentGeneral_1000.gif "Circle-Circle Tangents"
One that I remember from some book (I think it was Innumeracy by John Allen Paulos) was "How fast does your hair grow, in miles per hour?"
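A rough worked version of the arithmetic (my own numbers, assuming hair grows roughly 15 cm per year):

$$\frac{0.15\ \text{m/yr}}{(1609\ \text{m/mi})(8766\ \text{hr/yr})} \approx 1.1 \times 10^{-8}\ \text{mph},$$

about a hundredth of a micro-mile per hour.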
The "[sum and difference][1]" formulas often come in handy, but it's not immediately obvious that they would be true. ![sin(a +/- b) = ...][2] ![cos(a +/- b) = ...][3] So what I want to know is, 1. How can I prove that these formulas are correct? 2. More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to [Euler's formula][4], although such answers are still encouraged, for completeness. [1]: http://en.wikipedia.org/wiki/Angle_addition_formula#Angle_sum_and_difference_identities [2]: http://upload.wikimedia.org/math/6/b/5/6b56777608a427303f0277c8c248dd0f.png [3]: http://upload.wikimedia.org/math/3/8/e/38e6763ebbf185e9cff1a63138da69a9.png [4]: http://en.wikipedia.org/wiki/Euler%27s_formula
How can I understand and prove the "sum and difference formulas" in trigonometry? ($\cos(a \pm b) = \ldots$, etc.)
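Not a proof, but a quick numerical sanity check of both identities (my own sketch) before hunting for the geometric argument:

```python
import math
import random

for _ in range(5):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    # sin(a + b) = sin a cos b + cos a sin b
    assert abs(math.sin(a + b)
               - (math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))) < 1e-12
    # cos(a + b) = cos a cos b - sin a sin b
    assert abs(math.cos(a + b)
               - (math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))) < 1e-12
print("identities hold numerically")
```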
This was really inspired by Solitaire, but a few people reacted with "oh, it's like the Towers of Hanoi, isn't it?" so I'll try to pose the problem in terms of discs here. Let's start.

There are n disks on the real line, one of size 1 at position $x_1$, one of size 2 at position $x_2$, ..., and one of size n at position $x_n$. Your goal is to make a tower with all n discs, consuming as little energy as possible in the process. You are allowed to move a tower whose base is a disk of size k only on top of the disk with size k+1 (which may be the top of another mini-tower). The energy you consume to perform such a move is the distance traveled by the moved mini-tower. For example, the energy consumed by the first move is $|x_k-x_{k+1}|$.

Now, you'd like to write a program that tells you whether the energy you have is enough to perform the task. It just needs to say Y or N. (If the answer is Y, then clearly the list of moves is proof enough that the answer is correct, so the problem is in NP. If the answer is N, there's no point in even attempting the task---you are too tired.) What's the fastest such program you can find? Is the problem NP-complete? If it helps, consider simplifications that restrict the $x_k$'s to be rational, integers, integers in a certain range, etc.

Here's an upper bound: $O(n 2^n)$. Represent the initial state by the list $x_1$, $x_2$, ..., $x_n$. A move affects the state by deleting a number in this list, and the energy consumed by the move is the absolute difference between the deleted number and the number coming after it. Clearly there are $2^n$ states with at most $n$ moves each. (See [my blog post][1] for an example run of this algorithm if it's not clear. The description there is in terms of cards.)

Note: Some of the comments below refer to older versions of the question. See the history of edits if they seem confusing.

[1]: http://rgrig.blogspot.com/2010/07/solitaire.html
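Here is the $O(n 2^n)$ upper-bound algorithm described above as runnable code (my own rendering of it, so the details are a best guess at the intended model): a state is the set of positions already deleted, a move deletes a position, and its cost is the distance to the next surviving position.

```python
from functools import lru_cache

def min_energy(xs):
    """Minimum total energy to stack all discs, O(n * 2^n) states.

    State: bitmask of positions already deleted.  The last index is
    never deleted (everything ends up on top of disc n).  Deleting
    index i costs |xs[i] - xs[j]| where j is the smallest surviving
    index > i (the disc the tower based at i moves onto).
    """
    n = len(xs)

    @lru_cache(maxsize=None)
    def best(deleted):
        if deleted == (1 << (n - 1)) - 1:   # all but the last deleted
            return 0.0
        result = float("inf")
        for i in range(n - 1):
            if deleted & (1 << i):
                continue
            j = i + 1
            while deleted & (1 << j):       # next surviving position
                j += 1
            cost = abs(xs[i] - xs[j])
            result = min(result, cost + best(deleted | (1 << i)))
        return result

    return best(0)

# 5.0: merge disc 2 onto disc 3 first (cost 4), then disc 1 (cost 1).
print(min_energy([0.0, 5.0, 1.0]))
```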
How to test if a point is inside the convex hull of two circles?
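One way to test this (my own sketch, not from the question): the convex hull of two disks $(c_1, r_1)$ and $(c_2, r_2)$ is exactly the union of the interpolated disks centered at $(1-t)c_1 + tc_2$ with radius $(1-t)r_1 + tr_2$ for $t \in [0,1]$ (any convex combination of points of the two disks lands in one of these, and conversely). Since $f(t) = |p - c(t)| - r(t)$ is convex in $t$, a ternary search finds its minimum.

```python
import math

def in_hull_of_two_circles(p, c1, r1, c2, r2, iters=100):
    """True iff point p lies in the convex hull of the two disks.

    f(t) = |p - c(t)| - r(t), with c(t) and r(t) linear in t, is
    convex, so minimize it over [0, 1] by ternary search.
    """
    def f(t):
        cx = (1 - t) * c1[0] + t * c2[0]
        cy = (1 - t) * c1[1] + t * c2[1]
        r = (1 - t) * r1 + t * r2
        return math.hypot(p[0] - cx, p[1] - cy) - r

    lo, hi = 0.0, 1.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return f((lo + hi) / 2) <= 1e-9

print(in_hull_of_two_circles((2.0, 0.9), (0, 0), 1.0, (4, 0), 1.0))  # True
print(in_hull_of_two_circles((2.0, 1.1), (0, 0), 1.0, (4, 0), 1.0))  # False
```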
Let $A$ be a valuation ring of rank two. Then $A$ gives an example of a commutative ring such that $Spec(A)$ is a noetherian topological space, but $A$ is nonnoetherian. (Indeed, otherwise $A$ would be a discrete valuation ring.) Is there a concrete example of such a ring $A$?
I have seen and studied the class equation for a finite group acting on itself by conjugation. The only applications I know are Cauchy's theorem and Sylow's theorems. Are there more?
Are there more rational numbers than integers?
Mathematicians have very precise definitions for terms like "infinite" and "same size". The single unambiguous correct answer to this question is that, using the standard mathematical definitions, the rationals have the "same size" as the integers. First, here are the definitions:

1. Define "0" = emptyset, "1" = {0}, "2" = {0,1}, "3" = {0,1,2}, etc. So, the number "n" is really a set with n elements in it.
2. A set A is called "finite" iff there is some n and a function f:A->n which is bijective.
3. A set A is called "infinite" iff it is not finite. (Note that this notion says nothing about "counting never stops" or anything like that.)
4. Two sets A and B are said to have the "same size" if there is some function f:A->B which is a bijection. Note that we do NOT require that ALL functions be bijections, just that there is SOME bijection.

Once one accepts these definitions, one can prove that the rationals and integers have the same size. One just needs to find a particular bijection between the two sets. If you don't like the one you mentioned in your post, may I suggest the Calkin-Wilf enumeration of the rationals? (Simply google "Calkin Wilf counting rationals"; the first .pdf has what I'm talking about.) Of course, these give bijections between the naturals (without 0) and the rationals, but once you have a bijection like this, it's easy to construct a bijection from the integers to the rationals by composing with a bijection from the naturals to the integers.
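For the curious, the Calkin-Wilf enumeration is also easy to generate (my own sketch; the recurrence $q_{n+1} = 1/(2\lfloor q_n \rfloor + 1 - q_n)$ is Newman's formula):

```python
from fractions import Fraction
from math import floor

def calkin_wilf(count):
    """Yield the first `count` terms of the Calkin-Wilf sequence,
    which enumerates every positive rational exactly once."""
    q = Fraction(1, 1)
    for _ in range(count):
        yield q
        q = 1 / (2 * floor(q) + 1 - q)

print([str(q) for q in calkin_wilf(10)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4', '4/3', '3/5']
```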
Suppose you have a normal distribution with mean = 0 and stdev = 1, so the expected value is 0. Now suppose you limit the outcomes, such that no values can be below 0. So 50% of values now equal 0, and the rest of the distribution is still normal. Running 1000000 trials, I come out with an expected value of .4. My question is: how can I get this expected value through calculation? Thanks
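For reference, the exact value can be computed directly; this is the standard calculation for a normal censored at zero (a worked derivation, not part of the question). The outcome is $\max(X, 0)$ with $X \sim N(0,1)$, so

$$E[\max(X,0)] = \int_0^\infty x\,\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx = \frac{1}{\sqrt{2\pi}}\Big[-e^{-x^2/2}\Big]_0^\infty = \frac{1}{\sqrt{2\pi}} \approx 0.3989,$$

which matches the simulated $0.4$.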
In mathematics a set is called *infinite* if it can be put into a 1-1 correspondence with a proper subset of itself, and *finite* if it is not infinite. (I know it seems crazy to have the concept of infinite as primitive and finite as a derivative, but it's simpler to do this, since otherwise you must assume that the integers exist before saying that a set is finite.)

As for your remarks:

- with your method (if you don't forget to throw out fractions like 4/6, which is equal to 2/3) you actually **counted** the rationals, since for each number you have a function which associates it to a natural number. It's true that you cannot count ALL rationals, or all integers; but you cannot draw a whole straight line either, can you?

- with infinite sets you may build infinitely many mappings, but you just need a single 1-1 mapping to show that two sets have the same size.
My favorite is [Elementary Number Theory][1] by Rosen, which combines computer programming with number theory, and is accessible at a high school level. [1]: http://www.amazon.com/Elementary-Number-Theory-Kenneth-Rosen/dp/0321500318/ref=sr_1_1?s=books&ie=UTF8&qid=1280607379&sr=1-1
You may not be very satisfied with this answer, but I'll try to explain anyway.

**Countability.** We're not really talking about whether you can "count all of the rationals", using some finite process. Obviously, if there is an infinite number of elements, you cannot count them in a finite amount of time using any reasonable process. The question is whether there is *the same number* of rationals as there are positive integers; this is what it means for a set to be "countable" --- for there to *exist* a one-to-one mapping from the positive integers to the set in question. You have described such a mapping, and therefore the rationals are "countable". (You may disagree with the terminology, but this does not affect whether the concept that it labels is coherent.)

**Alternative mappings.** You seem to be dissatisfied with the fact that, unlike the case of a finite set, you can define an injection from the natural numbers to the rationals which is not surjective --- that you can in fact define a more general *relation* in which each integer is related to infinitely many rationals, but no two integers are related to the same rational numbers. Well, two can play at that game: you can define a relation in which every rational number is related to infinitely many integers, and no two rationals are related to the same integers! Just define the relation that each positive rational a/b is related to all numbers which are divisible by 2<sup>a</sup> but not 2<sup>a+1</sup>, and by 3<sup>b</sup> but not 3<sup>b+1</sup>; or more generally respectively 2<sup>ka</sup> and 3<sup>kb</sup> for any positive integer k. (There are, as you say, sign issues, but these can be smoothed away.)

You might complain that the relation I've defined isn't "natural". Perhaps you have in mind the fact that the integers are a *subset* of the rationals --- a subgroup, in fact, taking both of them as additive groups --- and that the factor group $\mathbb{Q}/\mathbb{Z}$ is infinite. Well, this is definitely interesting, and it's a natural sort of structure to be interested in. But it's more than what the issue of "mere cardinality" is trying to get at: set theory is interested in size regardless of structure, and so we don't restrict to maps which have one or another kind of "naturalness" about them. Of course, if you are interested in mappings which respect some sort of structure, you can build theories of size based on that: this is what is done in measure theory (with measure), linear algebra (with dimension), and indeed group theory (with index). So if you don't like cardinality as set theorists conceive it, you can look at more structured measures of size that you find more interesting!

**Immediate predecessors.** A somewhat unrelated (but still important) complaint that you make is this: "*If it were possible to count to infinity, it would be possible to count one step less and stop at count infinity-1 which must be different to infinity.*" The question is: **why** would you necessarily be able to stop at 'infinity minus one'? This is true for finite collections, but it does not necessarily hold that anything which is true of finite collections is true also for infinite ones. (In fact, obviously, some things necessarily will fail.)
---

This is important if you study ordinals, which mirror the process of counting itself in some ways (labelling things as being "first", "second", "third", and so forth), because of the concept of a limit ordinal: the first "infinitieth" element of a well-ordering doesn't have any immediate predecessors! Again, you are free to say that these are concepts that you are not interested in exploring personally, but this does not mean that they are necessarily incoherent.

**To summarize:** the set theorists measure "the size of a set" using a simple definition which doesn't care about structure, and which may violate your intuitions if you like to take the structure of the integers (and the rational numbers) very seriously, and also want to preserve your intuitions about finite sets. There are two solutions to this: try to stretch your intuition to accommodate the ideas of the set theorists, or study a different branch of math which you find more interesting!
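To make the relation in the middle paragraph concrete, here is a small script (my own illustration): the pair of 2-adic and 3-adic valuations recovers which rational an integer is related to, so the sets attached to different rationals are disjoint.

```python
def valuation(n, p):
    """Largest k with p^k dividing n (for n > 0)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

# Each positive rational a/b is related to the integers n with
# valuation(n, 2) == a and valuation(n, 3) == b.  Distinct pairs (a, b)
# therefore get disjoint, infinite sets of integers.
def related_rational(n):
    return (valuation(n, 2), valuation(n, 3))  # read as the pair (a, b)

for n in (12, 24, 60, 84):  # 12 = 2^2*3, 24 = 2^3*3, 60 = 2^2*3*5, 84 = 2^2*3*7
    print(n, related_rational(n))
# 12, 60 and 84 all relate to the rational 2/1; 24 relates to 3/1.
```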
How do I calculate expected value of partial normal distribution?
I've been told that there are precisely the same number of rationals as there are of integers. The set of rationals is countably infinite, therefore every rational can be associated with a positive integer, therefore there are the same number of rationals as integers. I've ignored sign-related issues, but these are easily handled.

To count the rationals, consider sets of rationals where the denominator and numerator are positive and sum to some constant. If the constant is 2 there's 1/1. If the constant is 3, there's 1/2 and 2/1. If the constant is 4 there's 1/3, 2/2 and 3/1. So far we have counted out 6 rationals, and if we continue long enough, we will eventually count to any specific rational you care to mention.

The trouble is, I find this very hard to accept. I have two reasons. First, this logic seems to assume that infinity is a finite number. You can count to and number any rational, but you cannot number all rationals. You can't even count all positive integers. Infinity is code for "no matter how far you count, you have never counted enough". If it were possible to count to infinity, it would be possible to count one step less and stop at count infinity-1 which must be different to infinity.

The second reason is that it's very easy to construct alternative mappings. Between zero and one there are infinitely many rational numbers, between one and two there are infinitely many rational numbers, and so on. To me, this seems a much more reasonable approach, implying that there are infinite rational numbers for every integer. But even then, this is just one of many alternative ways to map between ranges of rationals and ranges of integers. Since you can count the rationals, you can equally count stepping by any amount for each rational. You can use 1..10 for the first rational and 11..20 for the second etc. Or 1..100 and 101..200 etc, or 1..1000 and 1001..2000 etc. You can map a finite range of integers of any size to each rational this way and, since there is no finite upper bound to the stepping amount, you could argue there are potentially infinite integers for every single rational.

So... can anyone convince me that there is a single unambiguous correct answer to this question? Are there more rational numbers than integers, or not?

**EDIT** Although I've already accepted an answer, I'll just add some extra context. My reason for questioning this relates to the Hilbert space-filling curve. I find this interesting because of applications to multi-dimensional indexing data structures in software. However, I found Hilbert's claim that the Hilbert curve literally filled a multi-dimensional space hard to accept. As mentioned in a comment below, a one-meter line segment and a two-meter line segment can both be seen as sets of points, and (by the logic in the answers below) those two sets are the same size (cardinality). Yet we would not claim the two line segments are the same size. The lengths are finite and different. Going beyond this, we most certainly wouldn't claim that the size of *any* finite straight line segment is equal to the size of a one-meter-by-one-meter square. The Hilbert curve reasoning makes sense now - the set of points in the curve is equal to the set of points in the space it fills. Previously, I was thinking too much about basic geometry, and couldn't accept the size of a curve as being equal to the size of a space. However, this isn't based on a fallacious counting-to-infinity argument - it's a necessary consequence of an alternative line of reasoning.
The two constructs are equal because they both represent the same set of points. The area/volume/etc of the curve follows from that.
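Since the edit brings up Hilbert curves for multi-dimensional indexing, here is the standard iterative index-to-coordinates conversion on a $2^k \times 2^k$ grid (a sketch of the well-known algorithm; the variable names are mine):

```python
def hilbert_d2xy(n, d):
    """Convert distance d along the Hilbert curve to (x, y) on an
    n x n grid, where n is a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

print([hilbert_d2xy(4, d) for d in range(16)])
# Consecutive indices map to adjacent cells -- the locality property
# that makes the curve useful for spatial indexing.
```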
Is there an atlas of Algebraic Groups and corresponding Coordinate rings?
What is a primitive polynomial? I was looking into some random number generation algorithms and 'primitive polynomial' came up a sufficient number of times that I decided to look into it in more detail. I'm unsure of what a primitive polynomial is, and why it is useful for these random number generators. I'd find it particularly helpful if an example of a primitive polynomial could be provided.
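For a concrete taste of why these matter for random number generation (my own illustration; that $x^4 + x + 1$ is primitive over $\mathbb{F}_2$ is a standard fact): a linear feedback shift register whose taps come from a primitive polynomial cycles through all $2^n - 1$ nonzero states before repeating, the longest period possible.

```python
def lfsr_period(taps, nbits, seed=1):
    """Period of a Fibonacci LFSR whose feedback XORs the tapped bits.

    taps are 1-indexed bit positions; taps [4, 1] correspond to the
    polynomial x^4 + x + 1 (up to the usual reversal convention).
    """
    state = seed
    steps = 0
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        steps += 1
        if state == seed:
            return steps

print(lfsr_period([4, 1], 4))  # 15 == 2^4 - 1: maximal, polynomial is primitive
print(lfsr_period([4, 2], 4))  # 6: x^4 + x^2 + 1 = (x^2 + x + 1)^2 is not
```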