This could be a trivial question, but what exactly is the difference between these two expressions? Am I correct that both can be used interchangeably whenever I need to express an approximation of pi? I'm a bit confused, as [here][1] it states that pi can be expressed by ≒ since it's not a rational number, but pi can also be expressed by an (asymptotic) series, so it should be ≈ as well.
π ≈ 3.14..
π ≒ 3.14..
[1]: http://en.wikipedia.org/wiki/Approximation#Mathematics
[2]: http://en.wikipedia.org/wiki/Pi#Estimating_.CF.80 |
Consider the following assertion from [Scott Aaronson's blog][1]:
> Supposing you do prove the Riemann
> Hypothesis, it’s possible to convince
> someone of that fact, without
> revealing anything other than the fact
> that you proved it. It’s also possible
> to write the proof down in such a way
> that someone else could verify it,
> with very high confidence, having only
> seen 10 or 20 bits of the proof.
Can anyone explain where this result comes from?
[1]: http://scottaaronson.com/blog/?p=152# |
Here is another result from Scott Aaronson's blog:
> If every second or so your computer’s
> memory were wiped completely clean,
> except for the input data; the clock;
> a static, unchanging program; and a
> counter that could only be set to 1,
> 2, 3, 4, or 5, it would still be
> possible (given enough time) to carry
> out an arbitrarily long computation —
> just as if the memory weren’t being
> wiped clean each second. This is
> almost certainly not true if the
> counter could only be set to 1, 2, 3,
> or 4. The reason 5 is special here is
> pretty much the same reason it’s
> special in Galois’ proof of the
> unsolvability of the quintic equation.
Does anyone have idea of how to show this? |
Say I have an image, with pixels that can be either 0 or 1. For simplicity, assume it's a 2D image (though I'd be interested in a 3D solution as well).
A pixel has 8 neighbors (if that's too complicated, we can drop to 4-connectedness). Two neighbouring pixels with value 1 are considered to be connected.
If I know the probability `p` that an individual pixel is 1, and if I can assume that all pixels are independent, how many groups of at least `k` connected pixels should I expect to find in an image of size `n-by-n`?
What I really need is a good way of calculating the probability of `k` pixels being connected given the individual pixel probabilities. I have started to write down a tree to cover all the possibilities up to `k=3`, but even then, it becomes really ugly really fast. Is there a more clever way to go about this? |
I think that complex analysis is hard because graphs of even basic functions are 4-dimensional. Does anyone have any good visual representations of basic complex functions, or know of any tools for generating them? |
There is a very simple expression for the inverse of Fourier transform. What is the easiest known expression for the inverse Laplace transform?
Moreover, what is the easiest way to prove it? |
Is there any connection between exact differential equations and forms, or is the similarity in name just an accident?
|
Why are noncommutative, nonassociative Hopf algebras called quantum groups? This seems to be a purely mathematical notion, and prima facie there is no quantum anywhere in it.
|
Why are noncommutative groups called quantum groups? |
For an integer $n > 1$, there are well-known formulas for the volume of the $n$-dimensional ball.
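For reference, the standard formula is $V_n(R) = \frac{\pi^{n/2}}{\Gamma(n/2+1)} R^n$; a quick sketch (the helper name is mine):

```python
import math

def ball_volume(n, R=1.0):
    """Volume of the n-dimensional ball of radius R:
    V_n(R) = pi^(n/2) / Gamma(n/2 + 1) * R^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n
```

This recovers the familiar cases: `ball_volume(2)` is π and `ball_volume(3)` is 4π/3.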
What is the analogous statement in a Banach space/Hilbert Space? |
Open two browser windows side-by-side and use [wolfram alpha][1]. Recast your function f(z) as f(x+iy) and plot the [real part][2] in one window and the [imaginary part][3] in another. My example links provide plots for z<sup>3</sup>.
Not sure what I'm doing wrong, but two of my links keep vanishing (I'm guessing markdown doesn't like the parameter value):
http://www.wolframalpha.com/input/?i=plot+real((x+iy)^3),x%3D-2..2,+y%3D-2..2
http://www.wolframalpha.com/input/?i=plot+imag((x+iy)^3),x%3D-2..2,+y%3D-2..2
[1]: http://www.wolframalpha.com/
[2]: http://www.wolframalpha.com/input/?i=plot+real((x+iy)^3),x%3D-2..2,+y%3D-2..2
[3]: http://www.wolframalpha.com/input/?i=plot+imag((x+iy)^3),x%3D-2..2,+y%3D-2..2 |
Open two browser windows side-by-side and use [wolfram alpha][1]. Recast your function f(z) as f(x+iy) and plot the [real part][2] in one window and the [imaginary part][3] in another. My example links provide plots for z<sup>3</sup>.
Not sure what I'm doing wrong, but two of my links keep vanishing (I'm guessing markdown doesn't like the parameter value):
http://www.wolframalpha.com/input/?i=plot+real((x%2Biy)^3),x%3D-2..2,+y%3D-2..2
http://www.wolframalpha.com/input/?i=plot+imag((x%2Biy)^3),x%3D-2..2,+y%3D-2..2
[1]: http://www.wolframalpha.com/
[2]: http://www.wolframalpha.com/input/?i=plot+real((x%2Biy)^3),x%3D-2..2,+y%3D-2..2
[3]: http://www.wolframalpha.com/input/?i=plot+imag((x%2Biy)^3),x%3D-2..2,+y%3D-2..2 |
Ok, Ok, I know that in fact the discriminant is defined (up to sign) as a product of differences of the roots of the polynomial.
But why does it then have integral coefficients, if the polynomial you started with had integer coefficients? |
Why does the discriminant of an integral polynomial have integral coefficients? |
This looks a bit like percolation theory to me. In the 4-neighbour case, if you look at the dual of the image, the chance that an edge is connected (runs between two pixels of the same colour) is `1-2p+2p^2`.
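A minimal Monte Carlo sketch along these lines, counting 4-connected groups of at least `k` pixels (the function names and parameter choices are just illustrative):

```python
import random
from collections import deque

def count_groups(grid, k):
    """Count 4-connected groups of 1-pixels of size >= k."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    groups = 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1 and not seen[i][j]:
                seen[i][j] = True
                size, queue = 0, deque([(i, j)])
                while queue:  # BFS over this component
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and grid[u][v] == 1 and not seen[u][v]:
                            seen[u][v] = True
                            queue.append((u, v))
                if size >= k:
                    groups += 1
    return groups

def estimate_groups(n, p, k, trials=500, seed=0):
    """Monte Carlo estimate of E[number of groups of >= k connected pixels]
    in an n-by-n image with independent pixels that are 1 with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        grid = [[1 if rng.random() < p else 0 for _ in range(n)] for _ in range(n)]
        total += count_groups(grid, k)
    return total / trials
```

Averaging `count_groups` over many random images gives an empirical answer where the closed form gets ugly.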
I don't think you can get nice closed-form answer for your question, but maybe a computer can help with some Monte Carlo simulation? |
This is somewhere between an answer and commentary. As others have said, the question is equivalent to showing: for any prime $p > 3$, $p^2 \equiv 1 \pmod 3$ and $p^2 \equiv 1 \pmod 8$. Both of these statements are straightforward to show by just looking at the $\varphi(3) = 2$ reduced residue classes modulo $3$ and the $\varphi(8) = 4$ reduced residue classes modulo $8$. But what is their significance?
For a positive integer $n$, let $U(n) = (\mathbb{Z}/n\mathbb{Z})^{\times}$ be the multiplicative group of units ("reduced residues") modulo $n$. Like any abelian group $G$, we have a squaring map
$[2]: G \rightarrow G$, $g \mapsto g^2$,
the image of which is the set of squares in $G$. So, the question is equivalent to: for $n = 3$ and also $n = 8$, the subgroup of squares in $U(n)$ is the trivial group.
The group $U(3) = \{ \pm 1\}$ has order $2$; since $(-1)^2 = 1$, the fact that the subgroup of squares is equal to $1$ is pretty clear. But more generally, for any odd prime $p$, the squaring map $[2]$ on $U(p)$ is two-to-one onto its image -- an element of a field has no more than two square roots -- so that precisely half of the elements of $U(p)$ are squares. It turns out that when $p = 3$, half of $p-1$ is $1$, but of course this is somewhat unusual: it doesn't happen for any other odd prime $p$.
The group $U(8) = \{1,3,5,7\}$ has order $4$. By analogy to the case of $U(p)$, one might expect the squaring map to be two-to-one onto its image so that exactly half of the elements are squares. But that is not what is happening here: indeed
$1^2 \equiv 3^2 \equiv 5^2 \equiv 7^2 \equiv 1 \pmod 8$,
so the subgroup of squares is again trivial. What's different? Since $\mathbb{Z}/8\mathbb{Z}$ is not a field, it is legal for a given element to have more than two square roots, but a more insightful answer comes from the structure of the groups $U(n)$. For any odd prime $p$, the group $U(p)$ is *cyclic* of order $p-1$ ("existence of primitive roots"). It is easy to see that in any cyclic group of even order, exactly half of the elements are squares. So $U(8)$ must not be cyclic, so it must be the other abelian group of order $4$, i.e., isomorphic to the Klein $4$-group $C_2 \times C_2$.
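All of this is easy to confirm by direct computation; a small sketch (the helper names are mine):

```python
from math import gcd

def units(n):
    """The unit group U(n): reduced residues modulo n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def squares(n):
    """The subgroup of squares inside U(n)."""
    return {a * a % n for a in units(n)}
```

Here `squares(3)` and `squares(8)` both come out as `{1}`, while for an odd prime such as 7 exactly half of the six units are squares.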
More generally, if $p$ is an odd prime number and $a$ is a positive integer, then
$U(p^a)$ is cyclic of order $p^{a-1}(p-1)$ hence isomorphic to $C_{p^{a-1}} \times C_{p-1}$, whereas for any $a \geq 2$, the group $U(2^a)$ is isomorphic to $C_{2^{a-2}} \times C_2$. This is one of the first signs in number theory that "there is something odd about the prime $2$".
<b>Added</b>: Note that the above considerations allow us to answer the more general question: "What is the largest positive integer $N$ such that for all primes $p$ with $\operatorname{gcd}(p,N) = 1$, $N$ divides $(p^2-1)$?" (Answer: $N = 24$.) |
The function which is 1 on the interval `[0;1]` and 0 elsewhere is a non-continuous probability distribution function. So is the function which is 3 on `[0;1]` and -1 on `(1;3]`, and so on. What kind of answer do you want? What kind of properties do you want your functions to have?
There really are too many functions to list, since multiplying **any** function by a $C^\infty$ function with compact support and then applying Kenny's trick gives you an answer. |
What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus?
Specifically, I am interested in the following areas:
* Untyped lambda calculus
* Simply-typed lambda calculus
* Other typed lambda calculi
* Church's Theory of Types (I'm not sure where this fits in).
(As I understand, this should provide a solid basis for the understanding of type theory.)
Any advice and suggestions would be appreciated. |
As Scott himself states in the comments section of [the post in question][1] (comment #9):
> (4) Width-5 branching programs can compute NC1 (Barrington 1986); corollary pointed out by Ogihara 1994 that width-5 bottleneck Turing machines can compute PSPACE
Unfortunately, I don't have any idea of how this is proved.
[1]: http://scottaaronson.com/blog/?p=152 |
This is still a work in progress. There are a few missing details, but I think it's better than nothing. Feel free to edit in the missing details.
Given an instance of `SUBSET-SUM`: we have a set `A`={a<sub>1</sub>,a<sub>2</sub>,...,a<sub>n</sub>} of numbers, and another number `s`. The question we're seeking an answer to is whether or not there's a subset of `A` whose sum is `s`.
I'm assuming that the 24-game allows you to use rational numbers. Even if it doesn't, I think that it is possible to emulate rational numbers up to denominator of size `p` with integers.
We know that `SUBSET-SUM` is NP-complete even for integers only. I think the `SUBSET-SUM` problem remains `NP`-hard even if each a<sub>i</sub> also appears negated, that is, even if `A` is of the form `A`={a<sub>1</sub>,-a<sub>1</sub>,a<sub>2</sub>,-a<sub>2</sub>,...,a<sub>n</sub>,-a<sub>n</sub>}. This is still a wrinkle I need to iron out in this reduction.
Obviously, if there's a subset of `A` with sum `s`, then there's a solution to the `24`-problem for reaching `s` using `A`, namely the one using only the `+` sign.
The problem is, what happens if there's no solution which only uses the `+` sign, but there is a solution which uses other arithmetic operations.
Let us consider the following reduction. Take a prime `p` which is larger than `n`, the total number of elements in `A`. Given an oracle which solves the `24`-problem, and a `SUBSET-SUM` instance with `A`={a<sub>1</sub>,a<sub>2</sub>,...,a<sub>n</sub>} and `s`, we'll ask the oracle to solve the `24`-problem on
> `A`={a<sub>1</sub>+(1/p),a<sub>2</sub>+(1/p),...,a<sub>n</sub>+(1/p)}
for the following values:
> s<sub>1</sub>=s+1/p,s<sub>2</sub>=s+2/p,...,s<sub>n</sub>=s+n/p.
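To make the bookkeeping concrete, here is a small brute-force check of the intended `+`-only direction on a made-up instance (the numbers are hypothetical, chosen only for illustration):

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical SUBSET-SUM instance (illustration only): A = {3, 5, 8}, s = 11.
# Pick a prime p = 5 > n = 3 and perturb each element by 1/p.
p, A, s = 5, [3, 5, 8], 11
eps = Fraction(1, p)
perturbed = [a + eps for a in A]                       # a_i + 1/p
targets = [s + i * eps for i in range(1, len(A) + 1)]  # s + i/p, i = 1..n

# A '+'-only solution that uses i of the perturbed elements sums to
# (sum of the corresponding a_i) + i/p, so it hits s + i/p exactly
# when the original subset sums to s.
hits = [subset
        for r in range(1, len(perturbed) + 1)
        for subset in combinations(perturbed, r)
        if sum(subset) in targets]
```

Here the only hit is the subset {3+1/5, 8+1/5}, which sums to 11+2/5 = s<sub>2</sub>, matching the original solution {3, 8}.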
If the solution includes multiplication, we will have a denominator larger than `p` in the end result, and thus we will not be able to reach any s<sub>i</sub>.
Given an arithmetic expression that contains a<sub>i</sub>a<sub>j</sub>=x+(1/p<sup>2</sup>), it is impossible for the denominator p<sup>2</sup> to "cancel" out, since there are at most `n` elements in the summation, and thus the numerator can never reach `p`, since `p>n`.
THIS IS NOT QUITE RIGHT! The expression a<sub>i</sub>a<sub>j</sub>-a<sub>k</sub>a<sub>l</sub> will be an integer, and therefore our oracle might return an answer which includes two multiplications, one negative and one positive.
What about division? How can we be sure no division will occur? Find another prime `q`, different from `p` and larger than `n` times the largest a<sub>i</sub>. Multiply all numbers by `q`. The set `A` will be
> `A`={qa<sub>1</sub>+(1/p),qa<sub>2</sub>+(1/p),...,qa<sub>n</sub>+(1/p)}
We will look for the following values:
> s<sub>1</sub>=qs+1/p,s<sub>2</sub>=qs+2/p,...,s<sub>n</sub>=qs+n/p.
In that case, a<sub>i</sub>/a<sub>j</sub> will be smaller than the minimal element in `A`, and therefore an end result containing a<sub>i</sub>/a<sub>j</sub> will never be one of the s<sub>i</sub> we're looking for. |
There is the notion of class number from algebraic number theory. Why is such a notion defined and what good comes out of it?
It is nice if it is $1$; we have unique factorization of all ideals; but otherwise?
|
I would read a book about Perelman's proof of the Poincaré conjecture (or even the papers themselves). Oh, you mean the book had to be written when I was starting? |
I am not a mathematician but [Flatland: A Romance of Many Dimensions][1] blew my mind. I read it when I was a college student in a class on Special Relativity and wish I had read it way earlier.
[1]: http://books.google.com/books?id=R6E0AAAAMAAJ&dq=flatland&printsec=frontcover&source=bn&hl=en&ei=eE9QTLKVF5G5rAex79zVDQ&sa=X&oi=book_result&ct=result&resnum=11&ved=0CEQQ6AEwCg#v=onepage&q&f=false |
Closed points should be thought of as being "actual points", whereas non-closed points can correspond to all sorts of different things: subvarieties, "fat points", generic points, etc. You might be interested in reading [this blog post][1] about Mumford's drawing of $\operatorname{Spec} \mathbb{Z}[x]$.
One possible way to justify the claim that closed points are the "actual points" is the fact that if we have, for instance, a smooth variety over $\mathbb{C}$, then its [analytification][2] will be a complex manifold. The closed points of the former will then correspond exactly to the points of the latter.
[1]: http://www.neverendingbooks.org/index.php/mumfords-treasure-map.html
[2]: http://books.google.com/books?id=ZhzXJHUgcRUC&lpg=PA67&ots=aVQoeMkBwc&dq=analytification&pg=PA67#v=onepage&q=analytification&f=false |
Let A and B be two matrices which can be multiplied.
Then **rank(AB) <= min(rank(A), rank(B))**
I proved **rank(AB) <= rank(B)** by interpreting AB as a composition of linear maps, observing that ker(B) \subseteq ker(AB) and using the kernel-image dimension formula. This also provides, in my opinion, a nice interpretation: under subsequent compositions, the kernel can only get bigger and the image can only get smaller, a sort of _loss of information_.
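As a sanity check (not a proof), the inequality is easy to verify on small examples with exact rational Gaussian elimination; a sketch with helper names of my own:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    """Ordinary matrix product AB."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]
```

For instance, with A = [[1,2],[2,4]] (rank 1) and B the identity, rank(AB) is 1, matching the bound min(rank(A), rank(B)).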
How can I handle **rank(AB) <= rank(A)**? Is there a nice interpretation like the previous one? |
How to prove and interpret rank(AB) <= min(rank(A), rank(B)) ? |
For Moebius transformations, check out [this nice YouTube video][1].
[1]: http://www.youtube.com/watch?v=JX3VmDgiFnY |
What's the difference between open and closed sets? |
Kevin Lin's answer regarding the meaning of closed points is quite reasonable, especially in the case when the scheme in question underlies a classical variety. I want to add some additional remarks and examples for thinking about more general schemes.
Here are some tautological remarks: recall that a point x in a scheme X is called a specialization of y if x lies in the Zariski closure of y (and y is called a generalization of x). So tautologically, a closed point is one that cannot be specialized any further (just as a generic point cannot be generalized any further). What does specialization really mean?
Ring-theoretically, it means taking the image under a homomorphism; so if p and q are prime
ideals of a ring A, then q is a specialization of p in Spec A if and only if q contains p, i.e. if A/p surjects onto A/q. It is perhaps best to think of an example: say A is C[x,y],
p is the prime ideal gen'd by (x-1) and q is the prime (actually maximal ideal) gen'd by (x-1,y). Then in A/p, we have "specialized" the value of x to equal 1 (because we have
declared x-1 = 0) but y is still a free variable. When we pass to the further quotient A/q, we have specialized both x and y: x is specialized to 1 and y is specialized to 0. At this point, we can't specialize any more; technically, this is because q is a maximal ideal of A,
so a closed point of Spec A; intuitively, it is because both x and y have now both been "specialized" to actual numbers, and so we can't specialize any further.
But suppose now we set B = Z[x,y], and take p and q to be the same, i.e. gen'd by x-1 and by
(x-1,y) respectively. Then q is *not* maximal; there is more capacity for specialization.
How is this? Well, x and y are now taking values in Z (rather than the field C) and so we can also reduce both x and y modulo some prime, say 5; this gives a prime ideal r = (x-1,y,5) in B containing q. Now r *is* maximal, and so we are done specializing.
So if you have a scheme that is finite type over Z, the closed points will correspond to
"actual points", in Kevin's terminology, but defined over finite fields. The points of the scheme whose coordinates are integers, say, will *not* be closed. One has the choice of thinking them of them as "actual points" which nevertheless can be specialized further by reducing modulo primes, or as subvarieties rather than "actualy points", by identifying them
with their Zariski closures (for a picture of this, see the drawing of Mumford that Kevin links to, which is actually of Spec Z[x] --- a more interesting space than Spec Z ). |
What is the smallest area of a parking lot in which a car (idealized as a line segment) can perform a complete turn (that is, rotate 360 degrees)?
(This is obviously the Kakeya Needle Problem. Fairly easy to explain, models an almost reasonable real-life scenario, and has a very surprising answer as you probably know - the lot can have as small an area as you'd like).
[Wikipedia entry: Kakeya Set][1].
[1]: http://en.wikipedia.org/wiki/Kakeya_set |
I have lifted this from [Mathoverflow](http://mathoverflow.net/questions/11669/) since it belongs here.
Hi,
Currently, I'm taking matrix theory, and our textbook is Strang's Linear Algebra. Besides matrix theory, which all engineers must take, there exists linear algebra I and II for math majors. What is the difference, if any, between matrix theory and linear algebra?
Thanks!
kolistivra |
What is the difference between matrix theory and linear algebra? |
Lifted from [Mathoverflow](http://mathoverflow.net/questions/2446):
I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best.
Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc.
One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
|
Best Algebraic Geometry text book? (other than Hartshorne) |
Another vague question which didn't quite fit at [Mathoverflow](http://mathoverflow.net/questions/446):
So ... what is the Fourier transform? What does it do? Why is it useful (both in math and in engineering, physics, etc)?
(Answers at any level of sophistication are welcome.) |
Using only precalculus mathematics (including the fact that the area of the triangle with vertices at the origin, (x1,y1), and (x2,y2) is half the absolute value of the determinant of the 2x2 matrix with rows (x1,y1) and (x2,y2), i.e. 1/2 * |x1*y2 - x2*y1|), how can one prove that [the shoelace method][1] works for all non-self-intersecting polygons?
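For reference, a direct implementation of the method in question (a sketch; the function name is mine):

```python
def shoelace_area(vertices):
    """Area of a simple (non-self-intersecting) polygon with vertices
    given in order, via the shoelace formula:
    A = (1/2) * |sum_i (x_i * y_{i+1} - x_{i+1} * y_i)|."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2
```

Each term is (twice) the signed area of the triangle formed by the origin and one edge, which is exactly the precalculus fact quoted above.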
[1]: http://en.wikipedia.org/wiki/Shoelace_Method |
Kevin Lin's answer regarding the meaning of closed points is quite reasonable, especially in the case when the scheme in question underlies a classical variety. I want to add some additional remarks and examples for thinking about more general schemes.
Here are some tautological remarks: recall that a point x in a scheme X is called a specialization of y if x lies in the Zariski closure of y (and y is called a generalization of x). So tautologically, a closed point is one that cannot be specialized any further (just as a generic point cannot be generalized any further). What does specialization really mean?
Ring-theoretically, it means taking the image under a homomorphism; so if p and q are prime
ideals of a ring A, then q is a specialization of p in Spec A if and only if q contains p, i.e. if A/p surjects onto A/q. It is perhaps best to think of an example: say A is C[x,y],
p is the prime ideal gen'd by (x-1) and q is the prime (actually maximal ideal) gen'd by (x-1,y). Then in A/p, we have "specialized" the value of x to equal 1 (because we have
declared x-1 = 0) but y is still a free variable. When we pass to the further quotient A/q, we have specialized both x and y: x is specialized to 1 and y is specialized to 0. At this point, we can't specialize any more; technically, this is because q is a maximal ideal of A,
so a closed point of Spec A; intuitively, it is because both x and y have now both been "specialized" to actual numbers, and so we can't specialize any further.
But suppose now we set B = Z[x,y], and take p and q to be the same, i.e. gen'd by x-1 and by
(x-1,y) respectively. Then q is *not* maximal; there is more capacity for specialization.
How is this? Well, x and y are now taking values in Z (rather than the field C) and so we can also reduce both x and y modulo some prime, say 5; this gives a prime ideal r = (x-1,y,5) in B containing q. Now r *is* maximal, and so we are done specializing.
So if you have a scheme that is finite type over Z, the closed points will correspond to
"actual points", in Kevin's terminology, but defined over finite fields. The points of the scheme whose coordinates are integers, say, will *not* be closed. One has the choice of thinking them of them as "actual points" which nevertheless can be specialized further by reducing modulo primes, or as subvarieties rather than "actual points", by identifying them
with their Zariski closures (for a picture of this, see the drawing of Mumford that Kevin links to, which is actually of Spec Z[x] --- a more interesting space than Spec Z ). |
Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where? |
How many ways are there to color faces of a cube with N colors if
two colorings are the same if it's possible to rotate the cube such that
one coloring goes to another?
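The standard approach is Burnside's lemma: average, over the 24 rotations of the cube, the number of colorings each rotation fixes (identity: N^6; three 180° face turns: N^4 each; six 90°/270° face turns and six edge turns: N^3 each; eight vertex turns: N^2 each). A short sketch:

```python
def cube_colorings(n):
    """Face colorings of a cube with n colors, up to rotation,
    by Burnside's lemma: average the number of colorings fixed
    by each of the 24 rotations of the cube."""
    return (n**6 + 3 * n**4 + 12 * n**3 + 8 * n**2) // 24
```

For example, `cube_colorings(2)` gives 10, the classic two-colour count.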
|
I have lifted this from [Mathoverflow](http://mathoverflow.net/questions/11669/) since it belongs here.
Hi,
Currently, I'm taking matrix theory, and our textbook is Strang's Linear Algebra. Besides matrix theory, which all engineers must take, there exists linear algebra I and II for math majors. What is the difference, if any, between matrix theory and linear algebra?
Thanks!
kolistivra |
I wish I'd understood the importance of inequalities earlier. I wish I'd carefully gone through the classic book [Inequalities][1] by Hardy, Littlewood, and Poyla early on. Another good book is [The Cauchy-Schwarz Masterclass][2].
You can study inequalities as a subject in their own right, often without using advanced math. But they're critical techniques for advanced math.
[1]: http://www.amazon.com/gp/product/0521358809?ie=UTF8&tag=theende-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0521358809
[2]: http://www.amazon.com/gp/product/052154677X?ie=UTF8&tag=theende-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=052154677X |
Is the derivative notation d/dx strictly for symbolic manipulation purposes?
I remember being confused when I first saw the notation for derivatives - it looks vaguely like there's some division going on and there are some fancy 'd' characters that are added in... I recall thinking that it was a lot of characters to represent an action with respect to one variable. Of course, once you start moving the dx around it makes a little more sense as to why they exist - but is this the only reason?
Any history lesson or examples where this notation is helpful or unhelpful is appreciated. |
Why are derivatives specified as d/dx? |
Closed points should be thought of as being "actual points", whereas non-closed points can correspond to all sorts of different things: subvarieties, "fat points", generic points, etc. You might be interested in reading [this blog post][1] about Mumford's drawing of $\operatorname{Spec} \mathbb{Z}[x]$.
One possible way to justify the claim that closed points are the "actual points" is the fact that if we have, for instance, a smooth variety over $\mathbb{C}$, then its [analytification][2] will be a complex manifold. The closed points of the former will then correspond exactly to the points of the latter.
[1]: http://www.neverendingbooks.org/index.php/mumfords-treasure-map.html
[2]: http://books.google.com/books?id=ZhzXJHUgcRUC&lpg=PA67&ots=aVQoeMkBwc&dq=analytification&pg=PA61#v=onepage&q=analytification&f=false |
Kevin Lin's answer regarding the meaning of closed points is quite reasonable, especially in the case when the scheme in question underlies a classical variety. I want to add some additional remarks and examples for thinking about more general schemes.
Here are some tautological remarks: recall that a point x in a scheme X is called a specialization of y if x lies in the Zariski closure of y (and y is called a generalization of x). So tautologically, a closed point is one that cannot be specialized any further (just as a generic point cannot be generalized any further). What does specialization really mean?
Ring-theoretically, it means taking the image under a homomorphism; so if p and q are prime
ideals of a ring A, then q is a specialization of p in Spec A if and only if q contains p, i.e. if A/p surjects onto A/q. It is perhaps best to think of an example: say A is C[x,y],
p is the prime ideal gen'd by (x-1) and q is the prime (actually maximal ideal) gen'd by (x-1,y). Then in A/p, we have "specialized" the value of x to equal 1 (because we have
declared x-1 = 0) but y is still a free variable. When we pass to the further quotient A/q, we have specialized both x and y: x is specialized to 1 and y is specialized to 0. At this point, we can't specialize any more; technically, this is because q is a maximal ideal of A,
so a closed point of Spec A; intuitively, it is because both x and y have now both been "specialized" to actual numbers, and so we can't specialize any further.
But suppose now we set B = Z[x,y], and take p and q to be the same, i.e. gen'd by x-1 and by
(x-1,y) respectively. Then q is *not* maximal; there is more capacity for specialization.
How is this? Well, x and y are now taking values in Z (rather than the field C) and so we can also reduce both x and y modulo some prime, say 5; this gives a prime ideal r = (x-1,y,5) in B containing q. Now r *is* maximal, and so we are done specializing.
So if you have a scheme that is finite type over Z, the closed points will correspond to
"actual points", in Kevin's terminology, but defined over finite fields. The points of the scheme whose coordinates are integers, say, will *not* be closed. One has the choice of thinking them of them as "actual points" which nevertheless can be specialized further by reducing modulo primes, or as subvarieties rather than "actual points", by identifying them
with their Zariski closures (for a picture of this, see the drawing of Mumford that Kevin links to). |
I have some polygons I would like to map onto the face of a cone.
I can see from [this page](http://www.math.montana.edu/frankw/ccp/multiworld/multipleIVP/cylindrical/body.htm#skip3) that I can convert the points of the polygon to cylindrical coordinates, which is almost what I want.
How do I go about modifying the formulas to work for conical coordinates? |
How do I convert from cartesian to conical coordinates? |
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse".
Later on, we learn the power series definitions of sin and cos.
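The partial sums of those series can be compared numerically against the built-in functions; a quick sketch (function names are mine):

```python
import math

def sin_series(x, terms=20):
    """Partial sum of sin(x) = sum_k (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=20):
    """Partial sum of cos(x) = sum_k (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))
```

This is only numerical evidence, of course; the actual equivalence proof needs the geometric definitions made precise (e.g. via arc length) and an argument like Taylor's theorem with remainder.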
How can one prove that these two definitions are equivalent? |
I have some polygons I would like to map onto the face of a cone.
I can see from [this page](http://www.math.montana.edu/frankw/ccp/multiworld/multipleIVP/cylindrical/body.htm#skip3) that I can convert the points of the polygon to cylindrical coordinates, which is almost what I want.
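For concreteness, here is the cylindrical conversion together with one common way to handle the cone surface (this parameterization, by half-angle α with the apex at the origin and axis along z, is an assumption, not the only convention): a surface point is determined by its slant distance from the apex and its angle, and the cone can be "unrolled" flat.

```python
import math

def cartesian_to_cylindrical(x, y, z):
    """(x, y, z) -> (r, theta, z), the cylindrical coordinates."""
    return math.hypot(x, y), math.atan2(y, x), z

def cylindrical_to_cone_unrolled(r, theta, alpha):
    """For a point lying ON a cone of half-angle alpha (apex at the
    origin, axis along z, so the surface satisfies r = z * tan(alpha)),
    return the planar coordinates (s, phi) after developing the cone
    into a flat sector:
      s   = slant distance from the apex  (r = s * sin(alpha)),
      phi = theta * sin(alpha)            (arc length is preserved)."""
    s = r / math.sin(alpha)
    phi = theta * math.sin(alpha)
    return s, phi
```

Mapping a flat polygon onto the cone is then the inverse trip: lay the polygon out in the unrolled (s, phi) sector and push it back to 3D via r = s·sin(α), z = s·cos(α), θ = phi / sin(α).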
How do I go about modifying the formulas to work for conical coordinates? |
How do I convert from cartesian to conical coordinates? |
A vague question of Kevin Lin which didn't quite fit at [Mathoverflow](http://mathoverflow.net/questions/446):
So ... what is the Fourier transform? What does it do? Why is it useful (both in math and in engineering, physics, etc)?
(Answers at any level of sophistication are welcome.) |
Isn't it as simple as $\sin^2(\theta)+\cos^2(\theta)=1$? Here we have a right triangle. |
Suppose a finite group has an automorphism of order 3. How do you prove that it is abelian? |
Suppose we model traffic flow between two points with a directed graph. Each route has either a constant travel time or one that increases linearly with traffic. We assume that each driver wishes to minimise their own travel time, and we assume that the drivers settle into a Nash equilibrium. Can removing a route ever decrease the average travel time?
Note that the existence of multiple Nash equilibria makes this question a bit complicated. To clarify, I am looking for a route removal that will guarantee a decrease in the average traveling time regardless of the Nash equilibria that are chosen before and after. |
Here is another result from [Scott Aaronson's blog][1]:
> If every second or so your computer’s
> memory were wiped completely clean,
> except for the input data; the clock;
> a static, unchanging program; and a
> counter that could only be set to 1,
> 2, 3, 4, or 5, it would still be
> possible (given enough time) to carry
> out an arbitrarily long computation —
> just as if the memory weren’t being
> wiped clean each second. This is
> almost certainly not true if the
> counter could only be set to 1, 2, 3,
> or 4. The reason 5 is special here is
> pretty much the same reason it’s
> special in Galois’ proof of the
> unsolvability of the quintic equation.
Does anyone have idea of how to show this?
[1]: http://scottaaronson.com/blog/?p=152 |
The calculus of relations is an algebra of operations over sets of pairs of individuals, where for any relation R, we can express the relation in the usual infix manner: x R y iff (x,y) ∈ R. This allows all of the properties of relations to be expressed in equational form.
There are four fundamental relations we want to define, each of which gives a non-trivial example of three of your five properties of relations, plus one other useful property, irreflexivity:
+ Eq = {(x,x) | for each individual x}. This is the diagonal or equality relation that holds only between something and itself. It is reflexive, symmetric and transitive, which is to say it is an equivalence relation.
+ All = {(x,y) | for all individuals x,y}: the universal relation, which holds between any pair of individuals. It is also an equivalence relation.
+ Neq = {(x,y) | for all individuals x,y for which x ≠ y}: the inequality relation that holds between anything different. It is symmetric and irreflexive, but not transitive.
+ Never is the empty set. It is symmetric and transitive, and irreflexive.
Three basic binary operations on relations, and one unary operation:
+ R∩S = {(x,y) | (x,y) ∈ R and (x,y) ∈ S};
+ R–S = {(x,y) | (x,y) ∈ R and (x,y) ∉ S};
+ R⋅S = {(x,z) | there is some y such that (x,y) ∈ R and (y,z) ∈ S} (composition);
+ tr(R) = {(y,x) | (x,y) ∈ R} (transpose).
Observe that Never = All–All, and Neq = All–Eq.
We say R -> S when S holds whenever R holds. This is the same as saying either that R is a subset of S, or that R∩S=R, or that R-S=Never. So Never -> Eq, and Eq -> All.
Then we can express your five relations:
1. R is reflexive when Eq -> R; dually we have an additional property of a relation, where R is irreflexive (also called anti-reflexive) when R -> Neq;
2. R is symmetric when R -> tr(R);
3. R is anti-symmetric when R ∩ tr(R) -> Eq, or equivalently when R ∩ tr(R) ∩ Neq = Never;
4. R is asymmetric when R ∩ tr(R)=Never: a relation is asymmetric when it is anti-symmetric and anti-reflexive;
5. R is transitive when R⋅R -> R.
*Also, how can a relation be a- and antisymmetrical at the same time? Don't they cancel each other out?* — Look at Eq: it is both symmetric and anti-symmetric, although it is not asymmetric. In fact, all asymmetric relations are anti-symmetric, but not vice versa: the difference is that asymmetric and anti-symmetric differ in what they assert about the diagonal: anti-symmetric doesn't care about what pairs there might be along the diagonal, whilst asymmetric insists that there are no pairs along the diagonal, which is irreflexivity.
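These definitions are easy to sanity-check by modeling relations as sets of pairs over a small finite domain. A sketch (the names mirror the relations and operations above; this is my illustration, not part of the original answer):

```python
# Model relations as Python sets of pairs over a small finite domain,
# mirroring the definitions above.
dom = range(4)
Eq    = {(x, x) for x in dom}                   # the diagonal
All   = {(x, y) for x in dom for y in dom}      # the "whenever" relation
Neq   = All - Eq                                # inequality
Never = set()                                   # the empty relation

def tr(R):
    """Transpose: swap each pair."""
    return {(y, x) for (x, y) in R}

def comp(R, S):
    """Composition R.S."""
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}

def implies(R, S):
    """R -> S means R is a subset of S."""
    return R <= S

# Eq is an equivalence relation: reflexive, symmetric, transitive.
assert implies(Eq, Eq) and implies(Eq, tr(Eq)) and implies(comp(Eq, Eq), Eq)
# Neq is symmetric and irreflexive, but not transitive.
assert implies(Neq, tr(Neq))
assert not (Neq & Eq)
assert not implies(comp(Neq, Neq), Neq)
```

Running the same checks against `All` and `Never` confirms the remaining claims, e.g. that `Never = All - All` is symmetric, transitive and irreflexive.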
|
Suppose a finite group has the property that for every $x, y$, it follows that $(xy)^3 = x^3 y^3$.
How do you prove that it is abelian? |
Conic sections should definitely be retained. If you don't cover conic sections, then what other examples can you cover?
Lines? Too simple. General curves? Insufficiently concrete.
Examples are very important for illustrating the general theory and techniques.
Also, in a multivariable calculus course, typical examples will involve quadric surfaces. Here conic sections will come into play, since hyperplane sections (or "level curves") of quadric surfaces are conic sections. |
From [Scott Aaronson's blog][1]:
"There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down."
He later commented to explain where he got this from: "3-dimensional bin-packing is NP-complete."
I don't see how these two are related.
Another question inspired by the same article is [here][2].
[1]: http://scottaaronson.com/blog/?p=152
[2]: http://math.stackexchange.com/questions/946/computation-with-a-memory-wiped-computer |
All the integrals I'm familiar with have the form:
> int f(x) dx.
And I understand these as the sum of infinitely many tiny rectangles, each with area f(x_i) * dx.
Is it valid to have integrals that do not have a differential, such as dx, or that have the differential elsewhere than as a factor? Let me give a couple of examples of what I'm thinking of:
> int 1
If this is valid notation, I'd expect it to sum infinitely many ones together, and thus to diverge to infinity.
> int e^dx
Again, I'd expect this to go to infinity as e^0 = 1, assuming the notation is valid.
> int (e^dx - 1)
This I could potentially imagine to have a finite value.
Are any such integrals valid? If so, are there any interesting / enlightening examples of such integrals?
|
From [Scott Aaronson's blog][1]:
> There’s a finite (and not
> unimaginably-large) set of boxes, such
> that if we knew how to pack those
> boxes into the trunk of your car, then
> we’d also know a proof of the Riemann
> Hypothesis. Indeed, every formal proof
> of the Riemann Hypothesis with at most
> (say) a million symbols corresponds to
> some way of packing the boxes into
> your trunk, and vice versa.
> Furthermore, a list of the boxes and
> their dimensions can be feasibly
> written down.
He later commented to explain where he got this from: "3-dimensional bin-packing is NP-complete."
I don't see how these two are related.
Another question inspired by the same article is [here][2].
[1]: http://scottaaronson.com/blog/?p=152
[2]: http://math.stackexchange.com/questions/946/computation-with-a-memory-wiped-computer |
There is another proof that the derivative of sine is cosine that doesn't use the sandwich theorem mentioned by Qiaochu and Akhil above. Instead, one can use the definition of arcsine and the standard calculus formula for arc length in terms of an integral to show that arcsine = the integral of (1 - x^2)^(-.5). It follows that the derivative of arcsine is (1 - x^2)^(-.5), and (by the chain rule) one can use this fact to prove that the derivative of sine is cosine.
In fact, I'm not sure why this proof is presented less frequently than the one via the sandwich theorem. The unit circle definition of sine is based on arc length, and in calculus we learn a formula for arc length based on integration. Why not connect these two concepts for a natural proof that the derivative of sine is cosine? |
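For concreteness, a sketch of that route (using the arc-length definition of arcsine):

```latex
\arcsin x = \int_0^x \frac{dt}{\sqrt{1-t^2}}
\quad\Longrightarrow\quad
\frac{d}{dx}\arcsin x = \frac{1}{\sqrt{1-x^2}} .
```

Then by the inverse function rule, if $y = \sin x$ with $x \in (-\pi/2, \pi/2)$, we get $\frac{dy}{dx} = \sqrt{1-y^2} = \sqrt{1-\sin^2 x} = \cos x$, and the chain rule extends this to all $x$.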
1) Why does a small number of states suffice?
Regardless of whether the constant is 5 or 500, it's still very surprising. Thankfully, it's fairly easy to prove if you allow the counter to be `{1...8}` instead of `{1...5}`. (This proof is due to Ben-Or and Cleve.) Start by representing the computation as a circuit, and ignore the whole wiping-clean thing.
Define a register machine as follows: It has 3 registers `(R_1,R_2,R_3)`, each of which holds a single bit. At each step, the program performs some computation on the registers of the form `R_i <-- R_a + x_b R_c` or `R_i <-- R_a + x_b R_c + R_d` (where `x_1...x_n` is the input and all arithmetic is mod 2).
Initially, set `(R_1,R_2,R_3) = (1,0,0)`. The machine should end in the state `(R_1, R_2, R_3 + f R_1)`, where `f` is the output of the circuit. We'll simulate the circuit using a register machine.
We now proceed by induction on the depth of the circuit. If the circuit has depth 0, then we just copy the appropriate bit: `R_3 <-- R_3 + x_i R_1`.
For the induction, we have 3 cases, according to whether the final gate is NOT, AND, or OR.
Suppose that the circuit is `NOT(f)`. By induction, we can compute `f`, yielding the state `(R_1,R_2,R_3 + f R_1)`. We can therefore perform the instruction `R_3 <-- R_3 + R_1` to get the desired output.
If the circuit is `AND(f_1,f_2)`, then life is a tad more complicated. Using the induction hypothesis (each line below stands for running the sub-program that adds `f_i` times one register into another, with the roles of the registers permuted appropriately), we execute the following 4 instructions
`R_2 <-- R_2 + f_1 R_1` <br>
`R_3 <-- R_3 + f_2 R_2` <br>
`R_2 <-- R_2 + f_1 R_1` <br>
`R_3 <-- R_3 + f_2 R_2`
Assuming I haven't made any typos, we are left with the state `(R_1,R_2,R_3+f_1f_2R_1)`, as desired. OR works similarly.
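No typos, as it turns out: the four AND instructions can be checked by brute force over all bit values (a sketch over GF(2), with `f1`, `f2` treated as already-computed bits):

```python
# Verify that the four AND instructions map (R1, R2, R3) to
# (R1, R2, R3 + f1*f2*R1) mod 2, for every choice of bits.
from itertools import product

for R1, R2, R3, f1, f2 in product([0, 1], repeat=5):
    r1, r2, r3 = R1, R2, R3
    r2 = (r2 + f1 * r1) % 2        # R_2 <-- R_2 + f_1 R_1
    r3 = (r3 + f2 * r2) % 2        # R_3 <-- R_3 + f_2 R_2
    r2 = (r2 + f1 * r1) % 2        # R_2 <-- R_2 + f_1 R_1
    r3 = (r3 + f2 * r2) % 2        # R_3 <-- R_3 + f_2 R_2
    assert (r1, r2, r3) == (R1, R2, (R3 + f1 * f2 * R1) % 2)
```

Note that the third instruction restores `R_2` to its original value, which is what lets the induction go through.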
QED.
Take a moment to process what just happened. It's a slick proof that you have to read 2 or 3 times before it begins to sink in. What we've shown is that we can simulate a circuit by applying a fixed program that stores only 3 bits of information at any time.
To convert this into Aaronson's version, we encode the three registers into the counter (that's why we needed the extra 3 spaces). The simple program uses the input and the clock to determine how far we've made it through the computation and then applies the appropriate change to the counter.
2) But what's the deal with 5?
To get from 8 states down to 5, you use a similar argument, but are much more careful about exactly how much information needs to be propagated between stages and how it can be encoded. A formal proof requires lots of advanced group theory. |
Let f and g be two periodic functions over R with the following property: If T is a period of f, and S is a period of g, then T/S is irrational.
Can we say that f + g is not periodic? Could you give a proof or a counter example? |
Here is a simpler example. I claim that the function $h(x) = \sin x + \sin \pi x$ cannot possibly be periodic. Why? Suppose an equation of the form
$$\sin x + \sin \pi x = \sin (x+T) + \sin \pi (x+T)$$
held for all $x$ and some $T > 0$. Take the second derivative of both sides with respect to $x$ and negate to get
$$\sin x + \pi^2 \sin \pi x = \sin (x+T) + \pi^2 \sin \pi(x+T).$$
This implies that $\sin x = \sin (x+T)$ and that $\sin \pi x = \sin \pi(x+T)$ (take suitable linear combinations of the two displayed equations), which is impossible: the first forces $T$ to be an integer multiple of $2\pi$, while the second forces $T$ to be an even integer, and no positive $T$ is both, since $\pi$ is irrational.
(Or is the question whether the sum _can_ be periodic?) |
Before Hartshorne's book there was [Mumford's Red Book of Varieties][1]. I think it is a great introductory textbook to modern algebraic geometry (scheme theory).
I found that Mumford is quite good at motivating new concepts; in particular I really enjoy his development of nonsingularity and the sheaf of differentials. I think another great aspect about this book is that it emphasizes how to define things intrinsically (i.e. without reference to a closed or open immersion into affine space) but also explains how to make local arguments (i.e. using immersion into affine space). A classic example of the above:
(non intrinsic tangent space): Say X is a variety and p is a point of X. Choose an affine neighborhood so that p corresponds to the origin. Then this affine neighborhood is spec k[x1, ..., xn]/I for some ideal. Let I' be all the linear terms of I (i.e. if I = (x,y^2), then I' = (x)). Then the tangent space at p is spec k[x1,...,xn]/I'.
(intrinsic tangent space): Let m be the maximal ideal of the local ring of the structure sheaf at p, then the tangent space is the dual of the vector space m/m^2.
Taking spec of the symmetric algebra of the latter gives you the former.
Some drawbacks: this book doesn't cover nearly as much as Hartshorne's book, and it doesn't have that many exercises. The notation is slightly different; integral finite type schemes are called pre-varieties, and you can remove the 'pre' if the scheme is also separated. Nevertheless I think it's a great complement to reading Hartshorne.
[1]: http://www.amazon.com/Red-Book-Varieties-Schemes-Mathematics/dp/354063293X/ref=sr_1_1/182-6808689-1883238?ie=UTF8&s=books&qid=1280363994&sr=8-1 |
The wiki [article][1] on eigenvectors offers the following geometrical interpretation:
> Each application of the matrix to an arbitrary vector yields a result which will have rotated towards the eigenvector with the largest eigenvalue.
Qn 1: Is there any other geometrical interpretation, particularly in the context of a covariance matrix?
The wiki also discusses the difference between left and right eigenvectors.
Qn 2: Do the above geometrical interpretations hold irrespective of whether they are left or right eigenvectors?
[1]: http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
|
Is there a geometrical interpretation to the notion of eigenvector and eigenvalues? |
Where is the flaw in this argument of a proof that 1=2? |
Are there any well-studied analogs to the Mandelbrot set using functions other than f(z)=z^2+c? |
How many ways are there to tile a 2 x n grid with 1 x 2 dominoes?
This problem can be solved using Fibonacci numbers. Let S<sub>n</sub> be the number of valid tilings of the 2 x n grid. Each such tiling has either a vertical 1 x 2 domino or two horizontal dominoes on the right. Therefore, each tiling of a 2 x (n-1) grid or a 2 x (n-2) grid generates a tiling of the 2 x n grid, and hence we have the recurrence relation S<sub>n</sub> = S<sub>n-1</sub> + S<sub>n-2</sub>. This is precisely the recurrence relation of the Fibonacci numbers. Checking our base cases, we see that there is one way to tile a 2 x 1 grid and two ways to tile a 2 x 2 grid, so S<sub>1</sub> = 1 and S<sub>2</sub> = 2. Therefore, the number of tilings is precisely the Fibonacci sequence. |
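The recurrence above is easy to check by direct computation (a sketch; `tilings` is my name for S<sub>n</sub>):

```python
# Compute S(n), the number of domino tilings of a 2 x n grid,
# via S(n) = S(n-1) + S(n-2) with S(1) = 1, S(2) = 2.
def tilings(n):
    if n == 1:
        return 1
    a, b = 1, 2            # S(1), S(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b
```

For example, `tilings(4)` returns 5, matching the five hand-countable tilings of a 2 x 4 grid.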
Another example: [Euler's sum of powers conjecture](http://en.wikipedia.org/wiki/Euler%27s_sum_of_powers_conjecture), a generalization of Fermat's Last Theorem. It states:
If the equation ∑<sub>i=1</sub><sup>k</sup> x<sub>i</sub><sup>n</sup> = z<sup>n</sup> has a solution in positive integers, then n ≤ k (unless k=1). Fermat's Last Theorem is the k=2 case of this conjecture.
A counterexample for n=5 was found in 1966: it's
61917364224 = 27<sup>5</sup> + 84<sup>5</sup> + 110<sup>5</sup> + 133<sup>5</sup> = 144<sup>5</sup>.
The smallest counterexample for n=4 was found in 1988:
31858749840007945920321 = 95800<sup>4</sup> + 217519<sup>4</sup> + 414560<sup>4</sup> = 422481<sup>4</sup>.
This example used to be even more useful in the days before FLT was proved, as an answer to the question "Why do we need to prove FLT if it has been verified for thousands of numbers?" :-) |
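Both counterexamples are easy to verify directly with exact integer arithmetic:

```python
# Lander and Parkin's 1966 counterexample for n = 5:
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5 == 61917364224

# The 1988 minimal counterexample for n = 4:
assert 95800**4 + 217519**4 + 414560**4 == 422481**4 == 31858749840007945920321
```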
I understand the mechanics of the proof of Ostrowski's Theorem, but I'm a little unclear on why one should expect valuations to be related to primes. Is this a special property of number fields and function fields, or do primes of K[x,y] correspond to valuations on K(x,y) in the same way?
I'm hoping for an answer that can explain what exactly the algebraic analogues of archimedean valuations are, and how to use them - for example, I've heard that the infinite place on K(x) corresponds to the "prime (1/x)" - how does one take a polynomial in K[x] "mod (1/x)" rigorously?
Thanks in advance. |
Why should one expect valuations to be related to primes? How to treat an infinite place algebraically? |
I just got out of my Math and Logic class with my friend. During the lecture, a well-known math/logic puzzle was presented:
>The King has $1000$ wines, $1$ of which is poisoned. He needs to identify the poisoned wine as soon as possible, and with the least resources, so he hires the protagonist, a Mathematician. The king offers you his expendable servants to help you test which wine is poisoned.
>The poisoned wine is very potent, so much that one molecule of the wine will cause anyone who drinks it to die. However, it is slow-acting. The nature of the slow-acting poison means that there is only time to test one "drink" per servant. (A drink may be a mixture of any number of wines) (Assume that the King needs to know within an hour, and that any poison in the drink takes an hour to show any symptoms)
>What is the minimum amount of servants you would need to identify the poisoned wine?
With enough time and reasoning, one can eventually see that this requires at most **ten** ($10$) servants (in fact, you could test 24 more wines on top of that 1000 before requiring an eleventh servant). The proof/procedure is left to the reader.
My friend and I, however, was not content with resting upon this answer. My friend added the question:
>What would be different if there were $2$ wines that were poisoned out of the 1000? What is the new minimum then?
We eventually generalized the problem to this:
> Given $N$ bottles of wine ($N \gt 1$) and, of those, $k$ poisoned wines ($0 \lt k \lt N$), what is the optimum method to identify the all of the poisoned wines, and how many servants are required ($s(N,k)$)?
After some mathsing, my friend and I managed to find some (possibly unhelpful) lower and upper bounds:
$\log_2 {N \choose k} \le s(N,k) \le N-1$
This is because a strategy with $s$ servants has only $2^s$ possible outcomes, which must be enough to distinguish the $N \choose k$ possible configurations of $k$ poisoned wines among $N$ total wines.
Can anyone help us find an optimum strategy? Besides the trivial one requiring $N-1$ servants. How about a possible approach to start?
Would this problem be any different if you were only required to find a strategy guaranteed to identify some wine that is **not** poisoned, instead of identifying all of the poisoned wines? (other than the slightly trivial solution of $k$ servants) |
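As a possible starting point: the standard $k=1$ strategy behind the answer of ten servants numbers the wines in binary and has servant $i$ sip from every wine whose $i$-th bit is set; the pattern of deaths spells out the index of the poisoned wine. A sketch (function and variable names are mine):

```python
# k = 1 strategy: servant i drinks from every wine whose i-th bit is 1.
# The set of servants who die spells out the poisoned wine's index in binary.
def find_poisoned(n_wines, poisoned):
    s = max(1, (n_wines - 1).bit_length())            # servants needed
    dies = [(poisoned >> i) & 1 == 1 for i in range(s)]
    index = sum(1 << i for i, died in enumerate(dies) if died)
    return s, index
```

With 1000 wines this uses $\lceil\log_2 1000\rceil = 10$ servants, and since $2^{10} = 1024$ it could handle 24 more wines, matching the count above. The open part of the question is how to beat the trivial bounds when $k > 1$.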
While it is certainly true that with the proper definition there is now 'wrong' notation, perhaps it should be mentioned that some notation is more suggestive and/or easier to work with than others, e.g. Arabic numeral vs. Roman numerals, the various symbols for the derivative, and countless others. The actual symbols are arbitrary, but good notation can certainly promote the flow of ideas more easily.
Also, do I remember correctly that Feynman gave up trying to invent more efficient notation for simple math when he was quite young because nobody could understand what he was doing?
A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher.
--Bertrand Russell |
Where is the flaw in this argument of a proof that 1=2? |
Is it true that there are infinitely many nonprime integers $n$ such that $3^{n-1} - 2^{n-1}$ is a multiple of $n$? |
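A brute-force search is a natural first step (a sketch; note that every prime $p > 3$ satisfies the congruence automatically by Fermat's little theorem, since then $3^{p-1} \equiv 2^{p-1} \equiv 1 \pmod p$, so the interesting solutions are the composite ones):

```python
def divides(n):
    """Does n divide 3^(n-1) - 2^(n-1)?"""
    return (pow(3, n - 1, n) - pow(2, n - 1, n)) % n == 0

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Composite solutions found by brute force (if any exist in this range):
composite_solutions = [n for n in range(2, 5000)
                       if divides(n) and not is_prime(n)]
```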
Discrete valuations <-> points on a curve
For a nonsingular projective curve over an algebraically closed field, there is a one-one correspondence between the points on it and the discrete valuations of its function field (i.e. the field of meromorphic functions on the curve). The correspondence sends a point P to the valuation that maps a function f to the order of the zero/pole of f at P.
Maximal ideals <-> points on a curve
At least for varieties (common zero sets of several polynomials) over an algebraically closed field, there is a one-one correspondence between the points on the variety and the maximal ideals in $k[x_1,\cdots,x_n]$ containing its ideal. The correspondence sends a point $P = (a_1,\cdots,a_n)$ to the ideal of polynomials vanishing at $P$, which turns out to be $(x_1-a_1,\cdots,x_n-a_n)$. This is true not only for curves, but for varieties in general. (Hilbert's Nullstellensatz)
So putting these together, for nonsingular projective curves over an algebraically closed field, you know that there is a one-one correspondence between the maximal ideals (think them as points) and the discrete valuations of the function field. Now the situation here is analogous. You consider a "curve", whose coordinate ring is $\mathbb{Z}$, with function field $\mathbb{Q}$. The nonarchimedean valuations correspond to discrete valuations in this case. So they should capture order of zeros/poles at some "points". What are the points? They should correspond to the maximal ideals of $\mathbb{Z}$, which are exactly the primes here.
As for $K(x)$, look at it as the function field of $K\mathbb{P}^1$. Just like the usual real/complex projective spaces, you have two affine pieces here. Let's say $K[x]$ corresponds to the piece where the second coordinate is nonzero, so the homogeneous coordinates there look like $[x,1]$. We know there is one point missing, which is $[1,0]$. For this, we change coordinates $[x,1] \to [1,1/x]$, so the piece where the first coordinate is nonzero is $K[1/x]$. The missing point corresponds to the ideal $(1/x - 0) = (1/x)$, and this is why the infinite place corresponds to $(1/x)$. Of course, a more straightforward interpretation is that for a rational function, you divide both numerator and denominator by a sufficiently high power of $x$ so that they both become polynomials in $1/x$ with nonzero constant term, times an extra factor ($x$ to some power). The infinite place measures this power.
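Concretely, the valuation at the infinite place of $K(x)$ can be written down as:

```latex
v_\infty\!\left(\frac{p(x)}{q(x)}\right) \;=\; \deg q - \deg p ,
```

so, e.g., $v_\infty(x) = -1$: the function $x$ has a simple pole at infinity, which is exactly the order of vanishing of $1/x$ at that point.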
|
How many ways can a rectangle be partitioned by either vertical or horizontal lines into $ n $ sub-rectangles? At first I thought it would be:
$f(n)=4f(n-1)-2f(n-2)$ where $f(0)=1$ and $f(1)=1$
but the recurrence relation only counts the cases in which at least one side (either top, bottom, left or right of the original rectangle) is not split into sub-rectangles. There are many other partitions that don't belong to those simple cases like
http://img228.imageshack.us/img228/6572/partitions.png
(Now I cannot post image; I will edit it later.)
Any other related problem suggestions are welcome. Also it is nice to know how to traverse this partition efficiently. |
How many ways can a rectangle be partitioned by either vertical or horizontal lines into $ n $ sub-rectangles? At first I thought it would be:
$f(n)=4f(n-1)-2f(n-2)$ where $f(0)=1$ and $f(1)=1$
but the recurrence relation only counts the cases in which at least one side (either top, bottom, left or right of the original rectangle) is not split into sub-rectangles. There are many other partitions that don't belong to those simple cases like
![sample partitions][1]
Any other related problem suggestions are welcome. Also it is nice to know how to traverse this partition efficiently.
[1]: http://img228.imageshack.us/img228/6572/partitions.png |
How many ways can a rectangle be partitioned by either vertical or horizontal lines into `n` sub-rectangles? At first I thought it would be:
f(n) = 4f(n-1) - 2f(n-2)
where f(0) = 1
and f(1) = 1
but the recurrence relation only counts the cases in which at least one side (either top, bottom, left or right of the original rectangle) is not split into sub-rectangles. There are many other partitions that don't belong to those simple cases like
http://img228.imageshack.us/img228/6572/partitions.png
(Now I cannot post image; I will edit it later.)
Any other related problem suggestions are welcome. Also it is nice to know how to traverse this partition efficiently. |
How can one find all natural X such that X^2 + (X+1)^2 is a perfect square?
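A brute-force search is one way to start (a sketch; the names are mine):

```python
import math

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

# All natural X below 1000 with X^2 + (X+1)^2 a perfect square:
solutions = [x for x in range(1, 1000)
             if is_square(x * x + (x + 1) * (x + 1))]
```

This finds 3, 20, 119, 696 (e.g. $119^2 + 120^2 = 169^2$), which suggests the recurrence $x_{k+1} = 6x_k - x_{k-1} + 2$; indeed, substituting $y = 2X+1$ and $X^2+(X+1)^2 = z^2$ gives the Pell equation $y^2 - 2z^2 = -1$.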
|
I'd like to characterise the functions that 'have square roots' in the function composition sense. That is, can a given function f be written as f = g.g (where . is function composition)?
For instance, the function f(x) = x+10 has a square root g(x) = x+5.
Similarly, the function f(x) = 9x has a square root g(x) = 3x.
I don't know if the function f(x) = x^2 + 1 has a square root, but I couldn't think of any.
Is there a way to determine which functions have square roots? To keep things simpler, I'd be happy just to consider functions f: R -> R. |
Characterising functions f that can be written as f = g.g? |
How many ways can a rectangle be partitioned by either vertical or horizontal lines into `n` sub-rectangles? At first I thought it would be:
f(n) = 4f(n-1) - 2f(n-2)
where f(0) = 1
and f(1) = 1
but the recurrence relation only counts the cases in which at least one side (either top, bottom, left or right of the original rectangle) is not split into sub-rectangles. There are many other partitions that don't belong to those simple cases like
![sample partitions][1]
Any other related problem suggestions are welcome. Also it is nice to know how to traverse this partition efficiently.
[1]: http://img228.imageshack.us/img228/6572/partitions.png |
I'd like to characterise the functions that 'have square roots' in the function composition sense. That is, can a given function f be written as f = g.g (where . is function composition)?
For instance, the function f(x) = x+10 has a square root g(x) = x+5.
Similarly, the function f(x) = 9x has a square root g(x) = 3x.
I don't know if the function f(x) = x^2 + 1 has a square root, but I couldn't think of any.
Is there a way to determine which functions have square roots? To keep things simpler, I'd be happy just to consider functions f: R -> R. |
Are there 2 subsets, say, $A$ and $B$, of the naturals such that
$$\sum_{n\in A} f(n) = \sum_{n\in B} f(n)$$
where $f(n)=1/n^2$?
If $f(n)=1/n$ then there are many counterexamples, which is probably a consequence of the fact that the harmonic series diverges:
$$\frac23 = \frac12 + \frac16 = \frac14+\frac13+\frac1{12}$$
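(A quick check of the two decompositions above with exact rational arithmetic:)

```python
from fractions import Fraction as F

# Two different subsets of unit fractions 1/n summing to 2/3:
assert F(1, 2) + F(1, 6) == F(2, 3)
assert F(1, 4) + F(1, 3) + F(1, 12) == F(2, 3)
```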
And if $f(n)=b^{-n}$ for some base b then it is true because for all $M$, $\sum_{n>M} f(n) < f(M)$. (This is just the base-b representation of a real number. The case $b=2$ gives a bijection $2^{\mathbb{N}} \to [0,1]$).
So we have sort of an in-between case here.
Also, what if $A$, $B$:

- are required to be finite sets?
- are required to be infinite and disjoint? |
I know the volume-preserving diffeomorphisms of the 2-sphere form a group, sdiff(S2). I would like to know if it is a Lie group, which I assume would make interpolation easier (as with rotations).
So that is one question, is it a Lie group?
My next question is: how can I interpolate between two elements of the group in a way that stays within the group (every intermediate map preserves the volume)?
These are subjects I know very little about. I apologize if I'm phrasing it in a way that sounds ridiculous. |
I just came up with this problem yesterday.
**Problem**:
Assume there is an important segment of straight line `AB` that needs to be watched at all times. A watchdog can see in one direction in front of itself and must walk at a constant speed at all times. (The watchdogs need not all have the same speed.) When it reaches an end of the segment, it must turn around instantaneously and keep watching the line.
How many watchdogs are needed to guarantee that the line segment is watched at all times? And how (initial positions, directions, and speeds of the dogs)?
**Note**:
It's clear that two dogs are not enough. I conjecture that four will suffice and three will not. For example, the configuration below fails starting at 7.5 seconds if `AB`'s length is 10 meters.
Dog 1 at A walks to the right with speed 1.0 m/s
Dog 2 at a point between A and B walks to the right with speed 1.0 m/s
Dog 3 at B walks to the left with speed 1.0 m/s
Or it can be illustrated as:
A ---------------------------------------- B
0.0 sec 1 --> 2 --> <-- 3
2.5 sec 1 --> <-- 32 -->
5.0 sec <-- 31 --> <-- 2
7.5 sec <-- 3 <-- 21 -->
Please provide your solutions, hints, or related problems especially in higher dimensions or looser conditions (watchdogs can walk with acceleration, etc.) |
I know the volume-preserving diffeomorphisms of the 2-sphere form a group, sdiff(S2). I would like to know if it is a Lie group, which I assume would make interpolation easier (as with rotations).
So that is one question, is it a Lie group?
Also, is the group path-connected? If so, how can I interpolate between two elements of the group?
These are subjects I know very little about. I apologize if I'm phrasing it in a way that sounds ridiculous. |
Okay, so hopefully this isn't too hard or off-topic. Let's say I have a very simple lowpass filter (something that smooths out a signal), and the filter object has a position variable and a cutoff variable (between 0 and 1). In every step, a value is put into the following bit of pseudocode as "input": `position = position*(1-c)+input*c`, or more mathematically, `y[n] = y[n-1]*(1-c) + x[n]*c`. The output is the value of "position." Basically, it moves a percentage of the distance between the current position and the input value, stores this value internally, and returns it as output. It's intentionally simplistic, since the project I'm using this for is going to have way too many of these in sequence processing audio in real time.
Given the filter design, how do I construct a function that takes input frequency (where 1 means a sine wave with a wavelength of 2 samples, and .5 means a sine wave with wavelength 4 samples, and 0 is a flat line), and cutoff value (between 1 and 0, as shown above) and outputs the amplitude of the resulting sine wave? Sine wave comes in, sine wave comes out, I just want to be able to figure out how much quieter it is at any input and cutoff frequency combination. |
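One way to get a feel for the answer before deriving it analytically: drive the filter with a sine wave and read off the steady-state peak (a sketch; `filter_gain` is my name, not part of the original code):

```python
import math

def filter_gain(freq, c, n=8000):
    """Empirical gain of `position = position*(1-c) + input*c` for a sine
    of normalized frequency `freq` (1 -> wavelength 2 samples, i.e. w = pi)."""
    w = math.pi * freq
    position = 0.0
    peak = 0.0
    for i in range(n):
        position = position * (1 - c) + math.sin(w * i) * c
        if i > n // 2:                 # ignore the initial transient
            peak = max(peak, abs(position))
    return peak
```

The closed form this should converge to is the standard one-pole gain $|H(e^{j\omega})| = c/\sqrt{1 - 2(1-c)\cos\omega + (1-c)^2}$ with $\omega = \pi \cdot \text{freq}$, obtained from the transfer function $H(z) = c/(1-(1-c)z^{-1})$ of the recurrence above.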
Suppose we have some function $f(x)$ with local extrema at $x_1, x_2, \dots$, and a second function $g(x)$ which is continuous, strictly increasing and non-zero everywhere over the range of the $x_i$. Will $g(f(x))$ have its local extrema at the same $x_i$ and no others?
If so, are there any obvious loosenings of the constraints on $g$ for which this will remain true?
(I'm really thinking of this in the context of signal processing, looking at transformations that preserve the visual structure of an image, but it seems like a general question that must have been trivially proved by someone 250 years ago...) |
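(For what it's worth, the key observation is that a strictly increasing $g$ preserves order: $g(f(x_1)) > g(f(x_2))$ iff $f(x_1) > f(x_2)$, so $f$ and $g \circ f$ have the same local maxima and minima. A quick numerical illustration, taking $f = \sin$ and $g = \exp$:)

```python
import math

# Sample f(x) = sin x on a grid and compare extremum locations with
# g(f(x)) where g = exp is strictly increasing.
xs = [i * 0.01 for i in range(-300, 301)]
f_vals  = [math.sin(x) for x in xs]
gf_vals = [math.exp(v) for v in f_vals]    # g = exp

# The (first) max and min occur at the same grid points.
assert f_vals.index(max(f_vals)) == gf_vals.index(max(gf_vals))
assert f_vals.index(min(f_vals)) == gf_vals.index(min(gf_vals))
```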
How can I show with a heuristic argument based on a Taylor expansion that for Stratonovich
stochastic calculus the chain rule takes the form of the classical (Newtonian) one?
Concerning Ito calculus: the fact that dX^2 = dt results, via a Taylor expansion, in Ito's lemma. This fact should stay the same with Stratonovich calculus, but the extra term should somehow cancel out there - I just don't know how...
Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log).
Notice that every deduction above follows from a natural question. We never need to guess anything to proceed.
Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows:
Derive the real exponential by some method (as the inverse function of the natural log, which is the integral of $1/t$ on the interval $[1,x)$; by Bourbaki's method; or by some other derivation). Then show that it is analytic with infinite radius of convergence (converging uniformly and absolutely on compact sets), which means that it is equal to its Taylor series at 0, which in turn means that we can, by a general result of complex analysis, extend it to an entire function on the complex plane.
This derivation doesn't seem natural to me in the same sense as Bourbaki's derivation of the real exponential, since it requires that we notice some analytic properties of the function, instead of relying on its unique algebraic and topological properties.
Does anyone know of a derivation similar to Bourbaki's for the complex exponential? |
Natural derivation of the complex exponential function? |
The term "volume preserving" sounds a bit ambiguous to me: do you mean that your map preserves the total volume or do you mean that its differential at every point preserves volume (i.e. has determinant 1)? The former is weaker than the latter, and gives you more room for interpolation.
In any case, there is a famous invariant of continuous maps $S^2\to S^2$ called the *degree*. Any two maps with the same degree are homotopic to each other. Being volume preserving (in the former sense) implies that the degree is $1$ (taking orientation into account!), so you can interpolate between any two volume-preserving maps. *However*, the intermediate maps in this line of reasoning are only continuous, not necessarily diffeomorphisms. I'm confident that with a standard "approximate continuous functions by differentiable ones" argument you can get them to be differentiable, but I don't know about the "is a diffeomorphism" and "is locally volume preserving" parts. |
If and only if, which direction is which? |
Can you give a definition of the Conway base-13 function better than the one currently on Wikipedia ([here][1]), which isn't clear? Maybe with some examples?
[1]: http://en.wikipedia.org/wiki/Conway_base_13_function |
I know that this is meant to explain variance, but the description on Wikipedia stinks and it is not clear how you can explain variance using this technique.
Can anyone explain it in a simple way? |