I'm not a real Mathematician, just an enthusiast. I'm often in the situation where I want to learn some interesting Maths through a good book, but not through an actual Maths textbook. I'm also often trying to give people good Maths books to get them "hooked".
So the question: What is a good book, for laymen, which teaches interesting Mathematics, but actually does it in a "real" way? For example, "Fermat's Enigma" doesn't count, since it doesn't actually feature any Maths, just a story, and most textbooks don't count, since they don't feature a story.
My favorite example of this is "[Journey Through Genius][1]", which is a brilliant combination of interesting storytelling and large amounts of actual Mathematics. It took my love of Maths to a whole other level.
**Edit:**
A few more details on what I'm looking for.
The audience of "laymen" should be anyone who has the ability (and desire) to understand actual mathematics, but does not want to learn from a textbook. Obviously I'm thinking about myself here: as a programmer who loves mathematics, I love being exposed to real maths, but I'm not going to get into it seriously. That's why books that show actual maths, but give a lot more exposition (and much clearer explanations, especially of what the intuition should be), are great.
When I say "real maths", I'm talking about actual proofs, formulas, or other mathematical theories. Specifically, I'm not talking about philosophy, nor am I talking about books which **only** talk about the history of maths (Simon Singh style), since they only talk *about* maths; they don't actually show anything. William Dunham's books and Paul J. Nahin's books are good examples.
[1]: http://www.amazon.com/Journey-through-Genius-Theorems-Mathematics/dp/014014739X/ref=sr_1_1?ie=UTF8&s=books&qid=1279696241&sr=8-1
|
One of the main ways that sine and cosine come up is as two solutions to the differential equation y'' = -y. Why is this an important differential equation? Well, by Newton's second law it's the differential equation that says "the force is proportional and opposite to the position." For example, this is what happens with a spring! Now that's a second-order differential equation, so it has a 2-dimensional space of solutions. How to pick a nice basis of that space? Well, one way would be to pick f and g such that f' = i f and g' = -i g. However, that involves too many imaginary numbers, so another option is f' = g and g' = -f.
Thus if you're trying to find two functions which explain oscillatory motion you're naturally led to picking functions that have f' = g, g' = -f, etc.
(On the other hand it's totally unclear from this point of view why Sine and Cosine should have anything to do with triangles...)
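To spell out the last step (nothing here beyond the standard computation): if f' = g and g' = -f, then f'' = g' = -f and g'' = -f' = -g, so both f and g solve y'' = -y; with the initial data f(0) = 0, f'(0) = 1 and g(0) = 1, g'(0) = 0, these are exactly sine and cosine.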
|
I have been learning some functional programming recently and I so I have come across monads. I understand what they are in programming terms, but I would like to understand what they are mathematically. Can anyone explain what a monad is using as little category theory as possible?
|
There are two descriptions that I know of. The first can easily be found by looking at wiki under Monad. The second is more interesting in my opinion.
I will assume that you don't know the definition of a monoidal action, if you do, just skip ahead.
A monoidal action is a functor from a monoid to the category of endofunctors on a category satisfying two coherence relations. These two coherence relations simply verify that your monoidal product is the same as composition in the target, and that the identity object behaves well with the action. The relations are normally written as diagrams, but without LaTeX available here, I won't type them out.
To get an idea of a monoidal action, consider a group action, and formulate it a little more categorically, by writing the two axioms as diagrams. These diagrams, when converted to the language of monoidal categories, are exactly those of a monoidal action.
Now the best part: once you have the notion of a monoidal action, the category of monads on a category is simply the category of monoidal actions from the trivial monoidal category to your category. The monadic coherence relations come for free from your monoidal action coherence relations.
So, my simple explanation?
**In this way, we can formulate monads functorially as "representations" of the trivial monoidal category.**
One can readily show the two definitions are the same.
|
Another book that is more elementary, not requiring any algebraic topology for motivation, and formulating the basics through a question and answer approach is:
http://www.amazon.com/Conceptual-Mathematics-First-Introduction-Categories/dp/052171916X/ref=sr_1_5?ie=UTF8&s=books&qid=1279755366&sr=8-5
An added benefit is that it is written by an expert!
|
There are two descriptions that I know of. The first can easily be found by looking at wiki under Monad. The second is more interesting in my opinion.
I will assume that you don't know the definition of a monoidal action, if you do, just skip ahead.
A monoidal action is a functor from a monoid to the category of endofunctors on a category satisfying two coherence relations. These two coherence relations simply verify that your monoidal product is the same as composition in the target, and that the identity object behaves well with the action. The relations are normally written as diagrams, but without LaTeX available here, I won't type them out.
To get an idea of a monoidal action, consider a group action, and formulate it a little more categorically, by writing the two axioms as diagrams. These diagrams, when converted to the language of monoidal categories, are exactly those of a monoidal action.
Now the best part: once you have the notion of a monoidal action, the category of monads on a category is simply the category of monoidal actions from the trivial monoidal category to your category. Note here that the trivial monoidal category is the monoidal category with one object, one morphism, and all the other monoidal data trivially determined. The monadic coherence relations come for free from your monoidal action coherence relations.
So, my simple explanation?
**In this way, we can formulate monads functorially as "representations" of the trivial monoidal category.**
One can readily show the two definitions are the same.
|
Say we have a function $f(z)=\sum_{n=0}^\infty a_n z^n$ with radius of convergence $R>0$. Why is the radius of convergence only $R$? Can we conclude that there must be a pole, branch cut or discontinuity for some $z_0$ with $|z_0|=R$? What does that mean for functions like
$f(z)=\begin{cases} 0 & \text{for } z=0 \\ e^{-\frac{1}{z^2}} & \text{for } z \neq 0 \end{cases}$
that have a radius of convergence 0?
|
Why do complex functions have a finite radius of convergence?
|
There are two descriptions that I know of. The first can easily be found by looking at wiki under Monad or consulting Harry's nice summary. The second is more interesting in my opinion.
I will assume that you don't know the definition of a monoidal action, if you do, just skip ahead.
A monoidal action is a functor from a monoid to the category of endofunctors on a category satisfying two coherence relations. These two coherence relations simply verify that your monoidal product is the same as composition in the target, and that the identity object behaves well with the action. The relations are normally written as diagrams, but without LaTeX available here, I won't type them out.
To get an idea of a monoidal action, consider a group action, and formulate it a little more categorically, by writing the two axioms as diagrams. These diagrams, when converted to the language of monoidal categories, are exactly those of a monoidal action.
Now the best part: once you have the notion of a monoidal action, the category of monads on a category is simply the category of monoidal actions from the trivial monoidal category to your category. Note here that the trivial monoidal category is the monoidal category with one object, one morphism, and all the other monoidal data trivially determined. The monadic coherence relations come for free from your monoidal action coherence relations.
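For readers who want the first description spelled out: in the wiki-style formulation (standard material, not specific to this answer), a monad on a category C is an endofunctor T together with natural transformations $\eta : \mathrm{Id} \Rightarrow T$ (the unit) and $\mu : T^2 \Rightarrow T$ (the multiplication) satisfying $\mu \circ T\mu = \mu \circ \mu T$ and $\mu \circ T\eta = \mu \circ \eta T = \mathrm{id}_T$; these equations are exactly what the monoidal-action coherence diagrams encode.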
So, my simple explanation?
**In this way, we can formulate monads functorially as "representations" of the trivial monoidal category.**
One can readily show the two definitions are the same.
|
We define Lie algebras abstractly as algebras whose multiplication satisfies anti-commutativity and the Jacobi identity. A particular instance of this is an associative algebra equipped with the commutator bracket [a,b] = ab - ba. However, the notation suggests that this bracket is the one we think about. Additionally, the right adjoint to the functor I just mentioned creates the universal enveloping algebra by quotienting the tensor algebra by the tensor version of this bracket; but we could always start with some arbitrary Lie algebra with some other satisfactory bracket and apply this functor.
My question is
> "why the commutator bracket?"
Is it purely from a historical standpoint (and if so, could you explain why)? Or is there a result that says any Lie algebra is essentially one with the commutator bracket (maybe something about the faithfulness of the functor from above)?
I know of (a colleague told me about) a proof that the Jacobi identity is also an artifact of the right adjoint to the universal enveloping algebra. He can show that it is the necessary identity for the universal enveloping algebra to be associative (if someone knows of this in the literature, I would also appreciate a link to it!)
I hope this question is clear, if not, I can revise and try to make it a bit more specific.
|
It appears to me that only Triangles, Squares, and Pentagons are able to "tessellate" (is that the proper word in this context?) to become regular 3D convex polytopes.
What property of those regular polygons themselves allows them to be faces of a regular convex polyhedron? Is it something in their angles? Their number of sides?
And why can some (i.e., the triangle, which can be used in three) be used in more regular polyhedra than others?
Similarly, is this the same property that allows certain Platonic Solids to be used as "faces" of regular polychoron (4D polytopes)?
|
What property of regular polygons allows them to be faces of the Platonic Solids?
|
Two questions:
1. Find a bijective function from (0,1) to [0,1]. I haven't found a solution to this since I saw it a few days ago. It strikes me as odd--mapping an open set onto a closed set.
2. S is countable. It's trivial to find a bijective function f: N -> N-S (when |N| = |N-S|): f(n) = the nth smallest number in N-S. Are there any analogous trivial solutions for f: R -> R-S?
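To make that second construction concrete, here is a minimal sketch (the function name and the example set are mine, not from the question):
<pre>
def nth_smallest_not_in(S, n):
    # the n-th smallest natural number not in S (0-indexed); assumes the
    # complement of S in N is infinite, so the search always terminates
    count, k = 0, 0
    while True:
        if k not in S:
            if count == n:
                return k
            count += 1
        k += 1

S = {i * i for i in range(100)}   # e.g. S = the squares (enough of them here)
print([nth_smallest_not_in(S, n) for n in range(10)])
# [2, 3, 5, 6, 7, 8, 10, 11, 12, 13]
</pre>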
|
I just got my statistics test back and I am totally confused about one of the questions!
> A study was done that took a simple random sample of 40 people and measured whether
> the subjects were right-handed or
> left-handed, as well as their ages.
> The study showed that the proportion
> of left-handed people and the ages had
> a strong negative correlation. What
> can we conclude? Explain your answer.
I know that we can't conclude that getting older causes people to become right-handed. We learned that correlation is not causation. Something else might be causing it, not the age. So I wrote:
> We can conclude that many people
> become right-handed as they grow
> older, but we cannot tell why.
My teacher marked me wrong! What mistake did I make? Is 40 too small of a sample size to make any conclusions?
|
Why can't we conclude anything from correlation?
|
What can we conclude from correlation?
|
It appears to me that only Triangles, Squares, and Pentagons are able to "tessellate" (is that the proper word in this context?) to become regular 3D convex polytopes.
What property of those regular polygons themselves allows them to be faces of a regular convex polyhedron? Is it something in their angles? Their number of sides?
Also, why are there more Triangle-based Platonic Solids (three) than Square- and Pentagon- based ones? (one each)
Similarly, is this the same property that allows certain Platonic Solids to be used as "faces" of regular polychoron (4D polytopes)?
|
Just an observation, but it appears the "study" was conducted under false pretenses. Handedness is a false dichotomy; people can also be ambidextrous.
Also, you cannot necessarily conclude that people become right handed. What if left handed people have a shorter life expectancy? I do not believe that there is anything you can conclude from the facts given, other than what they directly state.
|
I've sort of gotten a grasp on the Chain rule with one variable. If you hike up a mountain at 2 feet an hour, and the temperature decreases at 2 degrees per foot, the temperature would be decreasing for you at 2*2 = 4 degrees per hour.
But I'm having a bit more trouble understanding the Chain Rule as applied to multiple variables. Even the case of 2 dimensions:
$z = f(x,y)$ where $x = g(t)$ and $y = h(t)$.
$\frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{dx}{dt} + \frac{\partial z}{\partial y}\frac{dy}{dt}$
Now, this is easy enough to "calculate" (and figure out what goes where). My teacher taught me a neat tree-based graphical method for figuring out partial derivatives using chain rule. All-in-all, it was rather hand-wavey. However, I'm not sure exactly how this works, intuitively.
Why, intuitively, is the equation above true? Why **addition**? Why not multiplication, like the other chain rule? Why are some multiplied and some added?
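For what it's worth, here is a quick numerical sanity check of that formula (the particular f, g and h are my own made-up examples, not anything canonical):
<pre>
import math

# z = f(x, y) = x^2 * y,  x = g(t) = cos(t),  y = h(t) = t^3
f = lambda x, y: x**2 * y
g = lambda t: math.cos(t)
h = lambda t: t**3

def dz_dt_chain(t):
    # partial derivatives of f, ordinary derivatives of g and h
    x, y = g(t), h(t)
    df_dx, df_dy = 2 * x * y, x**2
    dx_dt, dy_dt = -math.sin(t), 3 * t**2
    return df_dx * dx_dt + df_dy * dy_dt   # the multivariable chain rule

def dz_dt_numeric(t, eps=1e-6):
    z = lambda s: f(g(s), h(s))
    return (z(t + eps) - z(t - eps)) / (2 * eps)

print(dz_dt_chain(1.3), dz_dt_numeric(1.3))   # the two values agree
</pre>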
|
Intuitive reasoning behind the Chain Rule in multiple variables?
|
You cannot necessarily conclude that people become right handed. What if left handed people have a shorter life expectancy? I do not believe that there is anything you can conclude from the facts given, other than what is already directly stated.
Also, just an observation, but it appears the "study" was conducted under false pretenses. Handedness is a false dichotomy; people can also be ambidextrous.
|
I just got my statistics test back and I am totally confused about one of the questions!
> A study was done that took a simple random sample of 40 people and measured whether
> the subjects were right-handed or
> left-handed, as well as their ages.
> The study showed that the proportion
> of left-handed people and the ages had
> a strong negative correlation. What
> can we conclude? Explain your answer.
I know that we can't conclude that getting older causes people to become right-handed. Something else might be causing it, not the age. If two things are correlated, we can only conclude association, not causation. So I wrote:
> We can conclude that many people
> become right-handed as they grow
> older, but we cannot tell why.
That's exactly what association means, but my teacher marked me wrong! What mistake did I make? Is 40 too small of a sample size to make any conclusions?
|
You cannot necessarily conclude that people become right handed. What if left handed people have a shorter life expectancy? Or perhaps there was a spike in the birth rate of right handed people in the past. Thus, I do not believe that there is anything you can conclude from the facts given, other than what is already directly stated.
Also, just an observation, but it appears the "study" was conducted under false pretenses. Handedness is a false dichotomy; people can also be ambidextrous.
|
The [discriminant][1] $\Delta = 18abcd - 4b^3d + b^2c^2 - 4ac^3 - 27a^2d^2$ of the cubic polynomial $ax^3 + bx^2 + cx + d$ [indicates not only if there are repeated roots when $\Delta$ vanishes][2], but also that there are three distinct, real roots if $\Delta > 0$, and that there is one real root and two complex roots (complex conjugates) if $\Delta < 0$.
Why does $\Delta < 0$ indicate complex roots? I understand that because of the way that the discriminant is defined, it indicates that there is a repeated root if it vanishes, but why does $\Delta$ greater than 0 or less than 0 have special meaning, too?
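As a side check (this uses Python and numpy, which the question does not mention, so treat the helper below as my own illustration), the sign of $\Delta$ really does track the root pattern:
<pre>
import numpy as np

def cubic_discriminant(a, b, c, d):
    # discriminant of a*x^3 + b*x^2 + c*x + d
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

for coeffs in [(1, -6, 11, -6),   # (x-1)(x-2)(x-3): three distinct real roots
               (1, 0, 0, -1),     # x^3 - 1: one real root, two complex
               (1, -2, 1, 0)]:    # x(x-1)^2: a repeated root
    print(coeffs, cubic_discriminant(*coeffs), np.round(np.roots(coeffs), 3))
</pre>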
[1]: http://en.wikipedia.org/wiki/Discriminant
[2]: http://en.wikipedia.org/wiki/Cubic_equation#The_nature_of_the_roots
|
Why does the discriminant of a cubic polynomial being less than 0 indicate complex roots?
|
[This answer][1] suggests that there are explicit polynomial equations for which the existence
(or nonexistence) of integer solutions is unprovable. How can this be?
[1]: http://math.stackexchange.com/questions/250/a-challenge-by-r-p-feynman-give-counter-intuitive-theorems-that-can-be-transl/260#260
|
How can there be explicit polynomial equations for which the existence of integer solutions is unprovable?
|
-1 is not 1, so where is the mistake?
|
An equivalence relation is defined by three properties: reflexivity, symmetry and transitivity.
Don't symmetry and transitivity imply reflexivity? Consider the following argument.
For any a and b,
a R b implies b R a by symmetry. Using transitivity, we have a R a.
|
Why isn't reflexivity redundant in the definition of equivalence relation?
|
1/7 = 0.(142857)...
with the digits in the parentheses repeating.
I understand that the reason it's a repeating fraction is because 7 and 10 are coprime. But this...cyclical nature is something that is not observed by any other reciprocal of any natural number that I know of (besides multiples of 7). (if I am wrong, I hope that I may find others through this question)
By "cyclical," i mean:
<pre>
1/7 = 0.(142857)...
2/7 = 0.(285714)...
3/7 = 0.(428571)...
4/7 = 0.(571428)...
5/7 = 0.(714285)...
6/7 = 0.(857142)...
</pre>
Where all of the repeating digits are the same string of digits, but shifted. Not just a simple "they are all the same digits re-arranged", but the same digits **in the same order**, but shifted.
Or perhaps more strikingly, from the [wikipedia article](http://en.wikipedia.org/wiki/142857_(number)):
<pre>
1 × 142,857 = 142,857
2 × 142,857 = 285,714
3 × 142,857 = 428,571
4 × 142,857 = 571,428
5 × 142,857 = 714,285
6 × 142,857 = 857,142
</pre>
What is it about the number 7 in relation to the base 10 (and its prime factorization 2*5?) that allows its reciprocal to behave this way? Is it (and its multiples) unique in having this property?
|
Why is the decimal representation of 1/7 "cyclical"?
|
1/7 = 0.(142857)...
with the digits in the parentheses repeating.
I understand that the reason it's a repeating fraction is because 7 and 10 are coprime. But this...cyclical nature is something that is not observed by any other reciprocal of any natural number that I know of (besides multiples of 7). (if I am wrong, I hope that I may find others through this question)
By "cyclical," i mean:
<pre>
1/7 = 0.(142857)...
2/7 = 0.(285714)...
3/7 = 0.(428571)...
4/7 = 0.(571428)...
5/7 = 0.(714285)...
6/7 = 0.(857142)...
</pre>
Where all of the repeating digits are the same string of digits, but shifted. Not just a simple "they are all the same digits re-arranged", but the same digits **in the same order**, but shifted.
Or perhaps more strikingly, from the [wikipedia article](http://en.wikipedia.org/wiki/142857_(number)):
<pre>
1 × 142,857 = 142,857
2 × 142,857 = 285,714
3 × 142,857 = 428,571
4 × 142,857 = 571,428
5 × 142,857 = 714,285
6 × 142,857 = 857,142
</pre>
What is it about the number 7 in relation to the base 10 (and its prime factorization 2*5?) that allows its reciprocal to behave this way? Is it (and its multiples) unique in having this property?
[Wikipedia](http://en.wikipedia.org/wiki/Cyclic_number) has an article on this subject, and gives a form for deriving them and constructing arbitrary ones, but does little to show the "why", or how to find which numbers have cyclic reciprocals.
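A small brute-force check of this (my own sketch; "period" below just means the length of the repeating block of 1/n in base 10):
<pre>
def decimal_period(n):
    # length of the repeating block of 1/n in base 10, for n coprime to 10:
    # the multiplicative order of 10 modulo n
    r, k = 10 % n, 1
    while r != 1:
        r, k = (r * 10) % n, k + 1
    return k

# 1/n is "cyclical" in the sense above exactly when the period is n - 1,
# i.e. when 10 is a primitive root mod the prime n. 7 is the smallest example.
for n in [7, 11, 13, 17, 19, 23]:
    print(n, decimal_period(n))
# 7, 17, 19 and 23 give period n - 1; 11 gives 2 and 13 gives 6.
</pre>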
|
Let's say, I have 4 yellow and 5 blue balls. How do I calculate in how many different orders I can place them? And what if I also have 3 red balls?
|
Perhaps it's not an entirely practical application, but Fibonacci numbers can be used to [convert from miles to kilometers and vice versa][1]:
> Take two consecutive Fibonacci
> numbers, for example 5 and 8. And
> you're done converting. No kidding -
> there are 8 kilometers in 5 miles. To
> convert back just read the result from
> the other end - there are 5 miles in 8
> km!
But why does it work?
> Fibonacci numbers have a property that
> the ratio of two consecutive numbers
> tends to the [Golden ratio][2] as
> numbers get bigger and bigger. The
> Golden ratio is a number and it
> happens to be approximately 1.618.
>
> Coincidentally, there are 1.609
> kilometers in a mile, which is within
> 0.5% of the Golden ratio.
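A tiny sketch of the trick (the constant and the helper below are mine, nothing beyond what the quoted article claims):
<pre>
def fib_pairs(count):
    # yield consecutive Fibonacci pairs (a, b) with a < b
    a, b = 1, 2
    for _ in range(count):
        yield a, b
        a, b = b, a + b

MILE_IN_KM = 1.609344
for miles, approx_km in fib_pairs(8):
    print(miles, "mi ~", approx_km, "km  (exact:", round(miles * MILE_IN_KM, 1), "km)")
# e.g. 5 mi ~ 8 km (exact 8.0), 13 mi ~ 21 km (exact 20.9)
</pre>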
[1]: http://www.catonmat.net/blog/using-fibonacci-numbers-to-convert-from-miles-to-kilometers
[2]: http://mathworld.wolfram.com/GoldenRatio.html
|
I just came back from an intense linear algebra lecture which showed that linear transformations could be represented by transformation matrices; with more generalization, it was later shown that affine transformations (linear + translation) could be represented by matrix multiplication as well.
This got me to thinking about all those other transformations I've picked up over the past years I've been studying mathematics. For example, polar transformations -- transforming `x` and `y` to two new variables `r` and `theta`.
If you mapped `r` to the `x` axis and `theta` to the `y` axis, you'd basically have a coordinate transformation. A rather warped one, at that.
Is there a way to represent this using a transformation matrix? I've tried fiddling around with the numbers but everything I've tried to work with has fallen apart quite embarrassingly.
More importantly, is there a way to, given a specific non-linear transformation, construct a transformation matrix from it?
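To make the contrast concrete, here is a small numpy sketch (my own example, not from the lecture): an affine map written as one matrix in homogeneous coordinates, next to the polar map, which has no such matrix because it is not linear.
<pre>
import numpy as np

# affine map: rotate by 30 degrees, then translate by (2, 1),
# written as a single 3x3 matrix acting on homogeneous coordinates
theta = np.deg2rad(30)
A = np.array([[np.cos(theta), -np.sin(theta), 2.0],
              [np.sin(theta),  np.cos(theta), 1.0],
              [0.0,            0.0,           1.0]])
print(A @ np.array([1.0, 0.0, 1.0]))     # image of the point (1, 0)

# polar "transformation": (x, y) -> (r, theta). Doubling the input doubles r
# but leaves theta unchanged, so no fixed matrix M can satisfy M @ v = (r, theta).
def to_polar(x, y):
    return np.hypot(x, y), np.arctan2(y, x)

print(to_polar(1.0, 1.0), to_polar(2.0, 2.0))
</pre>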
|
Can non-linear transformations be represented as Transformation Matrices?
|
I like to use physical demonstrations when teaching mathematics (putting physics in the service of mathematics, for once, instead of the other way around), and it'd be great to get some more ideas to use.
I'm looking for nontrivial ideas in abstract mathematics that can be demonstrated with some contraption, construction or physical intuition.
For example, one can restate Euler's proof that \sum (1/n^2) = pi^2/6 in terms of the flow of an incompressible fluid with sources at the integer points in the plane.
Or, consider the problem of showing that, for a convex polyhedron whose i^th face has area A_i and outward facing normal vector n_i, \sum A_i*n_i = 0. One can intuitively show this by pretending the polyhedron is filled with gas at uniform pressure. The force the gas exerts on the i_th face is proportional to A_i*n_i, with the same proportionality for every face. But the sum of all the forces must be zero; otherwise this polyhedron (considered as a solid) could achieve perpetual motion.
For an example showing less basic mathematics, consider "showing" the double cover of SO(3) by SU(2) by needing to rotate your hand 720 degrees to get it back to the same orientation.
Anyone have more demonstrations of this kind?
|
I like to use physical demonstrations when teaching mathematics (putting physics in the service of mathematics, for once, instead of the other way around), and it'd be great to get some more ideas to use.
I'm looking for nontrivial ideas in abstract mathematics that can be demonstrated with some contraption, construction or physical intuition.
For example, one can restate Euler's proof that \sum (1/n^2) = pi^2/6 in terms of the flow of an incompressible fluid with sources at the integer points in the plane.
Or, consider the problem of showing that, for a convex polyhedron whose i^th face has area A_i and outward facing normal vector n_i, \sum A_i*n_i = 0. One can intuitively show this by pretending the polyhedron is filled with gas at uniform pressure. The force the gas exerts on the i_th face is proportional to A_i*n_i, with the same proportionality for every face. But the sum of all the forces must be zero; otherwise this polyhedron (considered as a solid) could achieve perpetual motion.
For an example showing less basic mathematics, consider "showing" the double cover of SO(3) by SU(2) by needing to rotate your hand 720 degrees to get it back to the same orientation.
Anyone have more demonstrations of this kind?
|
Are there any interesting and natural examples of semigroups that are not monoids (that is, they don't have an identity element)?
To be a bit more precise, I guess I should ask if there are any interesting examples of semigroups (X, *) for which there is not a monoid (X, *, e) where e is in X. I don't consider an example like the set of real numbers greater than 10 (considered under addition) to be a sufficiently 'natural' semigroup for my purposes; if the domain can be extended in an obvious way to include an identity element then that's not what I'm after.
|
Are there any interesting semigroups that aren't monoids?
|
I'm trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure on the nature of the sort of systems they use. Are they simply based on higher-order logics with Henkin semantics, extended to higher orders using type theory?
Proof verification programs such as [HOL Light](http://en.wikipedia.org/wiki/HOL_Light) and [Coq](http://en.wikipedia.org/wiki/Calculus_of_constructions) give some idea, but these pages contain limited/unclear information. There are so many variations on formal logics/systems used in proof theory that I'm not sure quite what the base ideas of these systems are.
Any clarifications, more precise descriptions of these systems, and notes about their shortcomings in particular would be much appreciated.
|
How do proof verifiers work?
|
I'm trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure on the nature of the sort of systems they use. Are they simply based on higher-order logics with Henkin semantics? Though I'm mainly looking for a general answer with useful examples, here are a few specific questions:
* What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative.
* Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics?
* Where does typed lambda calculus come into proof verification?
* Are there any other approaches than higher order logic to proof verification?
* What are the limitations of existing proof verification systems (see below)?
Proof verification programs such as [HOL Light](http://en.wikipedia.org/wiki/HOL_Light) and [Coq](http://en.wikipedia.org/wiki/Calculus_of_constructions) give some idea, but these pages contain limited/unclear information. There are so many variations on formal logics/systems used in proof theory that I'm not sure quite what the base ideas of these systems are.
Any clarifications, more precise descriptions of these systems, and notes about their shortcomings in particular would be much appreciated.
|
I'm trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure on the exact nature and construction of the sort of systems/proof calculi they use. Are they simply based on higher-order logics that use Henkin semantics, or is there something more to it? Though I'm mainly looking for a general answer with useful examples, here are a few specific questions:
* What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative.
* Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics?
* Where does typed lambda calculus come into proof verification?
* Are there any other approaches than higher order logic to proof verification?
* What are the limitations/shortcomings of existing proof verification systems (see below)?
Proof verification programs such as [HOL Light](http://en.wikipedia.org/wiki/HOL_Light) and [Coq](http://en.wikipedia.org/wiki/Calculus_of_constructions) give some idea, but these pages contain limited/unclear information. There are so many variations on formal logics/systems used in proof theory that I'm not sure quite what the base ideas of these systems are.
Perhaps a good way of answering this, certainly one I would appreciate, would be a brief guide (albeit with some technical detail/specifics) on how one might go about generating a complete proof calculus (proof verification system) from scratch?
|
I'm currently trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure on the exact nature and construction of the sort of systems/proof calculi they use. Are they essentially based on higher-order logics that use Henkin semantics, or is there something more to it? As I understand, extending Henkin semantics to higher-order logic does not render the formal system any less sound, though I am not too clear on that.
Though I'm mainly looking for a general answer with useful examples, here are a few specific questions:
* What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative.
* Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics?
* Where does typed lambda calculus come into proof verification?
* Are there any other approaches than higher order logic to proof verification?
* What are the limitations/shortcomings of existing proof verification systems (see below)?
The Wikipedia pages on proof verification programs such as [HOL Light](http://en.wikipedia.org/wiki/HOL_Light), [Coq](http://en.wikipedia.org/wiki/Calculus_of_constructions), and [Metamath](http://us.metamath.org/) give some idea, but these pages contain limited/unclear information, and there are rather few specific high-level resources elsewhere. There are so many variations on formal logics/systems used in proof theory that I'm not sure quite what the base ideas of these systems are - what is required or optimal and what is open to experimentation.
Perhaps a good way of answering this, certainly one I would appreciate, would be a brief guide (albeit with some technical detail/specifics) on how one might go about generating a complete proof calculus (proof verification system) from scratch? Any other information in the form of explanations and examples would be great too, however.
|
The question is more profound than is initially seems, and is really about algebraic structures. The first question you have to ask yourself is *where you're working*:
In general, addition and multiplication are defined on a *structure*, which in this case is a *set* (basically a collection of "things") with two *operators* we call *addition* (marked +) and *multiplication* (marked · or × or * or whatever). If this structure holds some properties, which are sometimes called *axioms*, then it is called a [unit ring][1]. The properties are:
1. The set is closed under the operator +. That is, if a and b are in R, then a+b is also in R.
2. The set has a member which we mark as 0. It has the properties that for every a in R, a+0 = a and 0+a = a.
3. The operation + is commutative: a+b = b+a.
4. The operation + is associative: (a+b)+c = a+(b+c).
5. Every member has an additive inverse: for every a in R there is some b in R such that a+b = 0 (we mark b as -a).
6. The set is closed under the operator *. That is, if a and b are in R, then a*b is also in R.
7. The set has a member which we mark as 1. It has the properties that for every a in R, a*1 = a and 1*a = a.
8. The operation * is associative: (a*b) * c = a * (b*c).
9. Multiplication is distributive over addition: a * (b+c) = a*b + a*c and (a+b) * c = a*c + b*c.
While this is a long list, and introduces the operator + which is not even explicitly mentioned in the question, these properties are quite natural. For example, the integers {..., -2, -1, 0, 1, 2, ...} we all know and love indeed form a ring. The real numbers also form a ring (in fact they form a [field][2], which means they hold even more properties).
In regard to your question, the identity x * 1 = x (I assume that's what you meant) is in fact an axiom - it is axiom 7. However, the other two identities are results of the other axioms.
**First identity:** We use axioms 2 and 9 to get 0 * x = (0+0) * x = 0*x + 0*x and then by adding -(0*x) (the additive inverse of 0*x, from axiom 5) to both sides, 0 = 0*x
**Second identity:** As stated in axiom 5, -(-x) is just a notation used which means "the additive inverse of -x". To show that -(-x) = x we need to show that x is in fact the additive inverse of -x, or in other words that x + -x = 0 and -x + x = 0. But that's just what axiom 5 says, so we're done.
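A quick sanity check of those two derived identities in a concrete ring (the finite ring of integers mod n, where the additive inverse of x is (n - x) % n; this choice is just for illustration):
<pre>
def neg(x, n):
    return (n - x) % n                 # additive inverse in Z/nZ

def check_ring_identities(n):
    for x in range(n):
        assert (0 * x) % n == 0        # first identity: 0 * x = 0
        assert neg(neg(x, n), n) == x  # second identity: -(-x) = x
    return True

print(all(check_ring_identities(n) for n in [2, 5, 12, 97]))   # True
</pre>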
Last point: You might be wondering why we had to go and introduce addition to answer a question about multiplication. Well, it so happens that without addition the other two identities are simply not true. For example, if we look at the positive integers {1, 2, 3, ...} with only multiplication, then there is no 0 there! Simply put, this is because the positive integers do not form a ring.
[1]: http://en.wikipedia.org/wiki/Unit_ring
[2]: http://en.wikipedia.org/wiki/Field_%28mathematics%29
|
What's the meaning of this symbol in mathematical notation? :
> ⊧
|
What is the meaning of this symbol?
|
When I tried to approximate $\int_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7} dx$, I kept getting answers that were really close to 0, so I think it might be true. But why? When I [ask Mathematica][1], I get a bunch of symbols I don't understand!
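For the record, here is one way to do the numerical check described above (this uses scipy, which the question does not mention; just an illustration):
<pre>
from scipy.integrate import quad

f = lambda x: (1 - x**7) ** (1 / 5) - (1 - x**5) ** (1 / 7)
value, abs_err = quad(f, 0.0, 1.0)
print(value, abs_err)   # value comes out as 0 to within the error estimate
</pre>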
[1]: http://integrals.wolfram.com/index.jsp?expr=%281-x%5E7%29%5E%281%2F5%29+-+%281-x%5E5%29%5E%281%2F7%29&random=false
|
Why is $\int_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7} dx=0$?
|
Many years ago, I had read a book entitled "Think of a Number" by Malcolm E. Lines, and it was an eminently readable and thought provoking book. In the book, there were topics like Fibonacci numbers (along with the live examples from the nature) and Golden Section. Now I'm looking for a similar book. Can anyone recommend me one?
|
The set of all continuous functions $\mathbb{R} \to \mathbb{R}$ has cardinality [**c**](http://en.wikipedia.org/wiki/Cardinality_of_the_continuum). How do we show that? Is there a bijection between $\mathbb{R}^n$ and the set of continuous functions?
|
Just to enlarge on Harry's answer:
Your symbol denotes one of two closely related notions of implication in formal logic:
$\vdash$ - the **turnstile** - denotes **syntactic** implication (syntactic here means related to syntax, the structure of a sentence), where the 'algebra' of the logical system in play (for example [sentential calculus][1]) allows us to 'rearrange and cancel' the stuff we know on the left into the thing we want to prove on the right.
An example might be the classic "all men are mortal $\wedge$ Socrates is a man $\vdash$ Socrates is mortal" ($\wedge$ of course here just means 'and'). You can almost imagine cancelling out the 'man bit' on the left to just give the sentence on the right (although the truth may be more complex...).
----------
$\vDash$ - the **double turnstile** - on the other hand, is not so much about algebra as about meaning (formally it denotes **semantic** implication): it means that any interpretation that makes the stuff we know on the left true must also make the thing we want to prove on the right true.
An example would be if we had an infinite set of sentences $\Gamma$ := {"1 is lovely", "2 is lovely", ...} in which all numbers appear, and the sentence A = "the natural numbers are precisely {1,2,...}", listing all numbers. Any interpretation would then give us B = "all natural numbers are lovely". So $\Gamma$, A $\vDash$ B.
-----
Now, the goal of any logician trying to set up a formal system is to have $\Gamma \vdash A \iff \Gamma \vDash A$, meaning that the 'algebra' must line up with the interpretation, and this is not something we can take as given. Take the second example above - can we be sure that algebraic operations can 'parse' those infinitely many sentences and produce the simple sentence on the right (this is to do with a property called **compactness**)?
The goal can be split into two distinct subgoals:
**Soundness:** $A \vdash B \implies A \vDash B$
**Completeness:** $A \vDash B \implies A \vdash B$
Where the first stops you proving things that aren't true when we interpret them and the second means that everything we know to be true on interpretation, we must be able to prove.
Sentential calculus, for example, can be proved complete (and was, in Godel's lesser-known but celebrated completeness theorem), but for other systems Godel's incompleteness theorem gives us a terrible choice between the two.
-----
**In summary:** The interplay of meaning and axiomatic machine mathematics, captured by the difference between $\vDash$ and $\vdash$, is a subtle and interesting thing.
[1]: http://en.wikipedia.org/wiki/Sentential_calculus
|
Did you try the books of Eastaway and Wyndham? *Why Do Buses Come in Threes?* and *How Long Is a Piece of String?*
|
When I used to compete in Olympiad competitions back in high school, a decent number of the easier geometry questions were solvable by what we called a geometry bash. Basically, you'd label every angle in the diagram with a variable, then use a limited set of basic geometry operations to find relations between the elements and eliminate equations, and eventually you'd get the result. It seems like the kind of thing you could program a computer to do. So, I'm curious, does there exist any software to do this? I know there is lots of software for solving equations, but is there anything that lets you actually input a geometry problem without manually converting it to equations? I'm not looking for anything too advanced; even seeing just an attempt would be interesting. If there is anything decent, I think it'd be rather interesting to run it on various competitions and see how many of the questions it solves.
|
I have read a few proofs that $\sqrt{2}$ is irrational.
I have never, however, been able to really grasp what they were talking about.
Is there a simplified proof that $\sqrt{2}$ is irrational?
|
Did you try the books of Eastaway and Wyndham? *Why Do Buses Come in Threes?* and *How Long Is a Piece of String?*
Blurb for the first one says "An amusing explanation of how maths is relevant to almost everything in life. Citing many examples of the way mathematics can explain common phenomena"; for the second, "This title is for anyone wanting to remind themselves - or discover for the first time - that maths is relevant to almost everything we do. Dating, cooking, travelling by car, gambling and ranking sportsmen all have links with intriguing mathematical problems that are explained in this book".
|
If we replace the axiom that 'there exists an infinite set' with 'all sets are finite', what would mathematics be like? My guess is that all the theory that has practical importance would still show up, but everything would be very, very unreadable for humans. Is that true?
We would have the natural numbers, although the class of all natural numbers would not be a set. In the same sense, we could have the rational numbers. But could we have the real numbers? Can the standard constructions be adapted to this setting?
|
If all sets were finite, what would mathematics be like?
|
If we replace the axiom that 'there exists an infinite set' with 'all sets are finite', what would mathematics be like? My guess is that all the theory that has practical importance would still show up, but everything would be very, very unreadable for humans. Is that true?
We would have the natural numbers, although the class of all natural numbers would not be a set. In the same sense, we could have the rational numbers. But could we have the real numbers? Can the standard constructions be adapted to this setting?
Also, I guess that this kind of axiom system must have been studied, so do you know any references?
|
If all sets were finite, could the real numbers be constructed?
|
There is a fragment of mathematics that is given by a set of axioms known as the [Peano axioms](http://en.wikipedia.org/wiki/Peano_axioms). Using these rules you can carry out a vast amount of mathematics relating to natural numbers. For example you can prove lots of theorems in number theory using these axioms. The Peano axioms make no reference to sets at all, whether finite or infinite. The only things that exist in this theory are naturals. You can't even form the set of all integers. You can only talk about the naturals themselves. So a vast amount of mathematics would work absolutely fine.
Even though Peano's axioms are about naturals, you can already use them to talk about finite sets. The idea is that any finite set could be encoded as a finite sequence of symbols which in turn could be represented as naturals using [Godel numbering](http://en.wikipedia.org/wiki/G%C3%B6del_numbering). So questions like "is this set a subset of that one?" could be turned into purely arithmetical statements about Godel numbers.
So I'm pretty sure that declaring that there is no infinite set would make little difference to people working within the system defined by Peano's axioms. We'd still have all of the natural numbers to work with, we just wouldn't be able to assemble them into a single entity, the set of all natural numbers.
On the other hand, there are theorems that make essential use of an infinite set. Like [Goodstein's theorem](http://en.wikipedia.org/wiki/Goodstein%27s_theorem). Without infinite sets (or a substitute of some sort) it would be impossible to prove this result.
So the overall result would be, I think, that you could still do lots of mathematics fine. The mathematics you could do wouldn't be all that weird. And you'd simply be depriving yourself of a useful proof technique.
By the way, you'd still be able to say many things about real numbers. A real number can be thought of as a Cauchy sequence. A [Cauchy sequence](http://en.wikipedia.org/wiki/Cauchy_sequence) is a certain type of sequence of rational numbers. So many statements about real numbers, when unpacked, are really statements about rationals, and hence naturals, in disguise.
|
If we replace the axiom that 'there exists an infinite set' with 'all sets are finite', what would mathematics be like? My guess is that all the theory that has practical importance would still show up, but everything would be very, very unreadable for humans. Is that true?
We would have the natural numbers, although the class of all natural numbers would not be a set. In the same sense, we could have the rational numbers. But could we have the real numbers? Can the standard constructions be adapted to this setting?
(Edit: In the standard construction, 0 is defined to be the empty set, 1 is {0} and 2 is {0,{0}}, and it goes on like this. Then we define ordered pairs, and equivalence relations, using only sets and the axioms. The rational numbers are constructed with an equivalence relation over the ordered natural number pairs. I think these can easily be adapted to the finitistic setting. But how could the definition of real numbers follow? In particular, what would be the definition of $e$, or $\sqrt{2}$?)
Also, I guess that this kind of axiom system must have been studied, so do you know any references?
|
There is a fragment of mathematics that is given by a set of axioms known as the [Peano axioms](http://en.wikipedia.org/wiki/Peano_axioms). Using these rules you can carry out a vast amount of mathematics relating to natural numbers. For example you can prove lots of theorems in number theory using these axioms. The Peano axioms make no reference to sets at all, whether finite or infinite. The only things that exist in this theory are naturals. You can't even form the set of all integers. You can only talk about the naturals themselves. So a vast amount of mathematics would work absolutely fine.
Even though Peano's axioms are about naturals, you can already use them to talk about finite sets. The idea is that any finite set could be encoded as a finite sequence of symbols which in turn could be represented as naturals using [Godel numbering](http://en.wikipedia.org/wiki/G%C3%B6del_numbering). So questions like "is this set a subset of that one?" could be turned into purely arithmetical statements about Godel numbers.
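A concrete illustration of "finite sets as single naturals" (this is the usual bitmask-style coding, not necessarily the exact Godel numbering the answer has in mind):
<pre>
def encode(s):
    # encode a finite set of naturals as one natural: the sum of 2**k for k in s
    return sum(2 ** k for k in s)

def decode(n):
    s, k = set(), 0
    while n:
        if n % 2:
            s.add(k)
        n, k = n // 2, k + 1
    return s

def is_subset(code_a, code_b):
    # "is A a subset of B?" as a purely arithmetical statement about the codes
    return code_a & code_b == code_a

a, b = encode({1, 3}), encode({0, 1, 3, 4})
print(decode(a), decode(b), is_subset(a, b))   # {1, 3} {0, 1, 3, 4} True
</pre>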
So I'm pretty sure that declaring that there is no infinite set would make little difference to people working within the system defined by Peano's axioms. We'd still have all of the natural numbers to work with, we just wouldn't be able to assemble them into a single entity, the set of all natural numbers.
On the other hand, there are theorems that make essential use of an infinite set. Like [Goodstein's theorem](http://en.wikipedia.org/wiki/Goodstein%27s_theorem). Without infinite sets (or a substitute of some sort) it would be impossible to prove this result.
So the overall result would be, I think, that you could still do lots of mathematics fine. The mathematics you could do wouldn't be all that weird. And you'd simply be depriving yourself of a useful proof technique.
By the way, you'd still be able to say many things about real numbers. A real number can be thought of as a Cauchy sequence. A [Cauchy sequence](http://en.wikipedia.org/wiki/Cauchy_sequence) is a certain type of sequence of rational numbers. So many statements about real numbers, when unpacked, are really statements about rationals, and hence naturals, in disguise.
Update: Uncovering precisely what parts of mathematics you need in order to prove things is a field known as [reverse mathematics](http://en.wikipedia.org/wiki/Reverse_mathematics). Hilbert, and other mathematicians, were interested in trying to prove as much mathematics as possible using finite methods. Although it was ultimately shown that you can't carry out all mathematics using finite methods, it's surprising how much you can. [Here](http://www.andrew.cmu.edu/user/avigad/Papers/elementary.pdf)'s a paper that talks about a system called EA which has no infinite sets. Amazingly, we can use results from [analytic number theory](http://en.wikipedia.org/wiki/Analytic_number_theory) in EA. This is because propositions about analytic functions can be interpreted as statements about natural numbers.
|
Can there be two distinct, continuous functions that are equal at all rationals?
|
I have read a few proofs that $\sqrt{2}$ is irrational.
I have never, however, been able to really grasp what they were talking about.
Is there a simplified proof that $\sqrt{2}$ is irrational?
|
I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.
I'm sure that everyone here is familiar with it; it describes an operation on a natural number -- n/2 if it is even, 3n+1 if it is odd.
The conjecture states that if this operation is repeated, all numbers will eventually wind up at 1 (or rather, in an infinite loop of 1-4-2-1-4-2-1).
I fired up Python and ran a quick test on this for all numbers up to 5.76 × 10^18 (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at 1.
Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid)
I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"
To which I said, "No, you are wrong! In fact I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"
And he said, "It is my conjecture that there are none! (and if any, they are rare)"
Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's) One, out of the many thousands (I presume) of conjectures. Are there any more famous examples?
|
Conjectures that have been disproved with extremely large counterexamples?
|
I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.
I'm sure that everyone here is familiar with it; it describes an operation on a natural number -- n/2 if it is even, 3n+1 if it is odd.
The conjecture states that if this operation is repeated, all numbers will eventually wind up at 1 (or rather, in an infinite loop of 1-4-2-1-4-2-1).
I fired up Python and ran a quick test on this for all numbers up to 5.76 × 10^18 (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at 1.
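For anyone who wants to reproduce a (much smaller) version of that test, a minimal sketch of the check (plain Python, no cloud computing or memoization):
<pre>
def reaches_one(n, max_steps=10**6):
    # follow the Collatz map from n and report whether the orbit hits 1
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > max_steps:
            return False             # give up instead of looping forever
    return True

print(all(reaches_one(n) for n in range(1, 10**5)))   # True for this range
</pre>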
Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid)
I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"
To which I said, "No, you are wrong! In fact I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"
And he said, "It is my conjecture that there are none! (and if any, they are rare)"
Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's) One, out of the many thousands (I presume) of conjectures. It's also one whose finer points are hard to explain to the layman. Are there any more famous or accessible examples?
|
Let $A$ be a commutative ring. Suppose $P \subset A$ is a minimal prime ideal. Then it is a theorem that $P$ consists of zero-divisors.
This can be proved using localization: $PA_P$ is the unique prime ideal of the local ring $A_P$, so every element of $PA_P$ is nilpotent. Hence every element of $P$ is a zero-divisor.
Can this be proved without using localization?
|
Why is it that a minimal prime ideal consists of zerodivisors?
|
The group:
$G = \langle x, y \mid x^2 = y^3 = (xy)^7 = 1 \rangle$
is infinite, or so I've been told. How would I go about proving this? (To prove finiteness of a finitely presented group, I could do a coset enumeration, but I don't see how this helps if I want to prove that it's infinite.)
|
How do you prove that a group specified by a presentation is infinite?
|
Why does a minimal prime ideal consist of zerodivisors?
|
Are x*0 = 0 and x*1 = x and -(-x) = x axioms?
|
If we replace the axiom that 'there exists an infinite set' with 'all sets are finite', what would mathematics be like? My guess is that all the theory that has practical importance would still show up, but everything would be very, very unreadable for humans. Is that true?
We would have the natural numbers, although the class of all natural numbers would not be a set. In the same sense, we could have the rational numbers. But could we have the real numbers? Can the standard constructions be adapted to this setting?
(Edit: In the standard construction, 0 is defined to be the empty set, 1 is {0} and 2 is {0,{0}}, and it goes on like this. Then we define ordered pairs, and equivalence relations, using only sets and the axioms. The rational numbers are constructed with an equivalence relation over the ordered natural number pairs. I think these can easily be adapted to the finitistic setting. But how could the definition of real numbers follow? In particular, what would be the definition of $e$, or $\sqrt{2}$?)
Also, I guess that this kind of axiom system must have been studied, so do you know any references?
Edit: The question has been closed for being "too localized". I think that is complete nonsense. I voted to reopen, and started a meta discussion: http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite .
|
Okay, I burned a lot of reputation points (at least for me) on MathOverflow to gain clarity on how to give some intuition into this problem, so hopefully this answer will at least be somewhat illuminating.
To gain a deeper understanding of what is going on, first we need to answer the question, "What is a number?"
There are a lot of ways to define numbers, but in general numbers are thought of as symbols that represent sets.
This is easy for things like the natural numbers. So 10 would correspond to the set with ten things -- like a bag of ten stones. Pretty straight forward.
The tricky part is that when we consider ten a subset of the real numbers, we actually redefine it. This is not emphasized even in higher mathematics classes, like real analysis; it just happens when we define the real numbers.
So what is 10 when constructed in the real numbers? Well, at least with the Dedekind cut version of the real numbers, **all** real numbers correspond to a set with an infinite amount of elements. This makes 10 under the hood look drastically different, although in practice it operates exactly the same.
So let's return to the question: Why is 10 the same as 9.99999? Because the real numbers have this completely surprising quality, where there is no next real number. So when you have two real numbers that are as close together as possible, they are the same. I can't think of any physical object that has this quality, but it's how the real numbers work (makes "real" seem ironic).
With integers (bag of stones version) this is not the same. When you have two integers as close to each other as possible they are still different, and they are distance one apart.
The bottom line is that the real numbers have these tricky edge cases that are hard to understand intuitively. Don't worry, your intuition is not really failing you. :)
I didn't feel confident answering until I got this Terence Tao link: http://www.google.com/buzz/114134834346472219368/RarPutThCJv/In-the-foundations-of-mathematics-the-standard.
Put another way, 10 as a bag of stones is not the same as 9.9999999, but 10 the natural number, viewed as a member of the real numbers, is.
|
Okay, I burned a lot of reputation points (at least for me) on MathOverflow to gain clarity on how to give some intuition into this problem, so hopefully this answer will at least be somewhat illuminating.
To gain a deeper understanding of what is going on, first we need to answer the question, "What is a number?"
There are a lot of ways to define numbers, but in general numbers are thought of as symbols that represent sets.
This is easy for things like the natural numbers. So 10 would correspond to the set with ten things -- like a bag of ten stones. Pretty straight forward.
The tricky part is that when we consider ten a subset of the real numbers, we actually redefine it. This is not emphasized even in higher mathematics classes, like real analysis; it just happens when we define the real numbers.
So what is 10 when constructed in the real numbers? Well, at least with the Dedekind cut version of the real numbers, **all** real numbers correspond to a set with an infinite amount of elements. This makes 10 under the hood look drastically different, although in practice it operates exactly the same.
So let's return to the question: Why is 10 the same as 9.99999? Because the real numbers have this completely surprising quality, where there is no next real number. So when you have two real numbers that are as close together as possible, they are the same. I can't think of any physical object that has this quality, but it's how the real numbers work (makes "real" seem ironic).
With integers (bag of stones version) this is not the same. When you have two integers as close to each other as possible they are still different, and they are distance one apart.
Put another way, 10 as a bag of stones is not the same as 9.9999999, but 10 the natural number, viewed as a member of the real numbers, is.
The bottom line is that the real numbers have these tricky edge cases that are hard to understand intuitively. Don't worry, your intuition is not really failing you. :)
I didn't feel confident answering until I got this Terence Tao link: http://www.google.com/buzz/114134834346472219368/RarPutThCJv/In-the-foundations-of-mathematics-the-standard.
|
Okay, I burned a lot of reputation points (at least for me) on MathOverflow to gain clarity on how to give some intuition into this problem, so hopefully this answer will at least be somewhat illuminating.
To gain a deeper understanding of what is going on, first we need to answer the question, "What is a number?"
There are a lot of ways to define numbers, but in general numbers are thought of as symbols that represent sets.
This is easy for things like the natural numbers. So 10 would correspond to the set with ten things -- like a bag of ten stones. Pretty straight forward.
The tricky part is that when we consider ten a subset of the real numbers, we actually redefine it. This is not emphasized even in higher mathematics classes, like real analysis; it just happens when we define the real numbers.
So what is 10 when constructed in the real numbers? Well, at least with the Dedekind cut version of the real numbers, **all** real numbers correspond to a set with an infinite amount of elements. This makes 10 under the hood look drastically different, although in practice it operates exactly the same.
So let's return to the question: Why is 10 the same as 9.99999? Because the real numbers have this completely surprising quality, where there is no next real number. So when you have two real numbers that are as close together as possible, they are the same. I can't think of any physical object that has this quality, but it's how the real numbers work (makes "real" seem ironic).
With integers (bag of stones version) this is not the same. When you have two integers as close to each other as possible they are still different, and they are distance one apart.
Put another way, 10 bag of stones are not the same as 9.9999999 but 10 the natural number, where natural numbers are a subset of the real numbers is.
The bottom line is that the real numbers have these tricky edge cases that are hard to understand intuitively. Don't worry, your intuition is not really failing you. :)
I didn't feel confident answering until I got this Terence Tao link: http://www.google.com/buzz/114134834346472219368/RarPutThCJv/In-the-foundations-of-mathematics-the-standard.
|
Polygons are, in this question, defined as non-unique if they are similar to another (by rotation, reflection, translation, or scaling).
Would this answer be any different if similar but non-identical polygons were allowed? And what if only rotations/translations by rational coefficients were allowed?
Would this answer be any different if we constrained the length and internal angles of all polygons to rational numbers?
|
Is the set of all unique polygons countable? If so, by what bijection to the natural numbers?
|
I want to find the least squares solution to **Ax** = **b** where **A** is a highly sparse square matrix.
I found two methods that look like they might lead me to a solution: [QR factorization](http://en.wikipedia.org/wiki/QR_decomposition), and [singular value decomposition](http://en.wikipedia.org/wiki/Singular_value_decomposition#Pseudoinverse). Unfortunately, I haven't taken linear algebra yet, so I can't really understand most of what those pages are saying. I can calculate both in Matlab though, and it looks like the SVD gave me a smaller squared error. Why did that happen? How can I know which one I should be using in the future?
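A minimal NumPy sketch of the two routes, on stand-in data (the matrix, its size, and the seed are placeholders, not from the question):

```python
import numpy as np

# Stand-in data: the question's A is large and sparse; a small dense
# square A is enough to show the two solution routes.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)

# Route 1: QR factorization. A = QR with Q orthogonal and R upper
# triangular, so Ax = b reduces to the triangular system Rx = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Route 2: SVD via the pseudoinverse. Tiny singular values can be
# truncated, which keeps the solution stable when A is nearly rank
# deficient; that robustness is one reason its residual can be smaller.
x_svd = np.linalg.pinv(A) @ b

print(np.linalg.norm(A @ x_qr - b), np.linalg.norm(A @ x_svd - b))
```

On a well-conditioned square system the two residuals agree; differences tend to show up when A is ill-conditioned or rank deficient.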
|
How can I tell which matrix decomposition to use for OLS?
|
Why are the only (associative) division algebras over the real numbers the real numbers, the complex numbers, and the quaternions?
Here a division algebra is an associative algebra in which every nonzero element is invertible (like a field, but without assuming commutativity of multiplication).
This is an old result proved by Frobenius, but I can't remember how the argument goes. Anyone have a quick proof?
|
Why are the only division algebras over the real numbers the real numbers, the complex numbers, and the quaternions?
|
In intro number theory a key lemma is that if a and b are relatively prime integers, then there exist integers x and y such that ax+by=1. In a more advanced course instead you would use the theorem that the integers are a PID, i.e. that all ideals are principal. Then the old lemma can be used to prove that "any ideal generated by two elements is actually principal." Induction then says that any finitely generated ideal is principal. But what if all finitely generated ideals are principal but there are some ideals that aren't finitely generated? Can that happen?
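For reference, here is the induction step being invoked, written out (my notation): once every two-generator ideal is principal, and $(a_1,\dots,a_{n-1}) = (d)$ by the inductive hypothesis, then
$$(a_1,\dots,a_n) = (a_1,\dots,a_{n-1}) + (a_n) = (d) + (a_n) = (d, a_n) = (d')$$
for some $d'$, so every finitely generated ideal is principal.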
|
Can you find a domain where ax+by=1 has a solution for all a and b relatively prime, but which is not a PID?
|
I know there's something called "ultrafinitism", a very radical form of constructivism; I've heard it said that ultrafinitists don't believe that really large integers actually exist. Could someone make this a little bit more precise? Are there good reasons for taking this point of view? Can you actually get math done from that perspective?
|
What is "ultrafinitism" and why do people believe it?
|
The philosophy is explained in Doron Zeilberger's [article][1].
Basically, it's the belief that there is a largest natural number!
I've heard a funny story (on Scott Aaronson's blog) about someone who was an ultrafinitist.
- Do you believe in 1?
- Yes, he responded immediately.
- Do you believe in 2?
- Yes, he responded after a brief pause.
- Do you believe in 3?
- Yes, he responded after a slightly longer pause.
- Do you believe in 4?
- Yes, after several seconds.
It soon became clear that he would take twice as long to answer the next question as the previous one. (I believe [Alexander Esenin-Volpin][2] was the person.)
[1]: http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf
[2]: http://en.wikipedia.org/wiki/Alexander_Esenin-Volpin
|
I know some references where I can find this, but they seem tedious (both Hartshorne and Ueno cover this).
I am wondering if there is an elegant way to describe these. If this task is too difficult in general, how about just $\mathbb{P}^n$?
Thanks!
|
There is a little triviality that has been referred to as the <a href="http://www.johndcook.com/blog/2010/01/13/soft-maximum">"soft maximum"</a> over on <a href="http://www.johndcook.com/blog/">John Cook's Blog</a> that I find to be fun, at the very least.
The idea is this: given a list of values, say $x_1, x_2, \ldots, x_n$, the function
$g(x_1, x_2, \ldots, x_n) = \log(\exp(x_1) + \exp(x_2) + \cdots + \exp(x_n))$
returns a value very near the maximum in the list.
About this, John Cook says: "The soft maximum approximates the hard maximum but it also rounds off the corners." This couldn't really be said any better.
I recall trying to cleverly construct sequences for proofs in advanced calculus where not-everywhere-differentiable operations would have been great to use if they didn't have that pesky non-differentiable trait. I can't recall a specific instance where I was tempted to use $\max(x_i)$, but it seems at least plausible that it would have come up.
Has anyone used this before or have a scenario off hand where it would be useful?
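A small Python sketch of the idea, for anyone who wants to play with it (the function name and the max-shift for numerical stability are my additions, not from Cook's post):

```python
import numpy as np

def soft_max(xs):
    """Log-sum-exp 'soft maximum' of a list of values.

    Subtracting the hard maximum before exponentiating avoids overflow;
    the shift cancels algebraically, so the returned value is unchanged.
    """
    xs = np.asarray(xs, dtype=float)
    m = xs.max()
    return m + np.log(np.exp(xs - m).sum())

print(max([1.0, 2.0, 5.0]))       # 5.0, the hard maximum
print(soft_max([1.0, 2.0, 5.0]))  # ~5.066, slightly above it
```

The gap above the hard maximum shrinks as the largest value pulls away from the others, since their exponential terms become negligible.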
|
![complex plot of the zeros][1]
(Diagram and setup from UCSMP _Precalculus and Discrete Mathematics_, 3rd ed.)
Above is a partial plot of the zeros of $p_c(x)=4x^4+8x^3-3x^2-9x+c$. The text stops after showing the diagram and does not discuss the shape of the locus of the zeros or describe the resulting curves. Are the curves in the locus some specific (named) type of curve? Is there a simple way to describe the curves (equations)?
The question need not be limited to the specific polynomial given--a similar sort of locus is generated by the zeros of nearly any quartic polynomial as the constant term is varied.
[1]: http://farm5.static.flickr.com/4081/4820253654_ae3f4a89a2.jpg
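A quick numerical way to reproduce a plot like the one above (the range of $c$, the sample count, and the use of matplotlib are my choices, not from the text):

```python
import numpy as np
import matplotlib.pyplot as plt

# Zeros of p_c(x) = 4x^4 + 8x^3 - 3x^2 - 9x + c as c varies.
cs = np.linspace(-10, 10, 600)
roots = np.concatenate([np.roots([4, 8, -3, -9, c]) for c in cs])

plt.scatter(roots.real, roots.imag, s=2)
plt.xlabel("Re(x)")
plt.ylabel("Im(x)")
plt.title("Zeros of $p_c$ as $c$ varies")
plt.show()
```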
|
How does one prove that a simple (steps of length 1 in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in 1 or 2 dimensions returns to the origin with probability 1?
|
I'm not sure if I get the question but I'll wager a guess.
I'll do 1D.
A 1D walk builds a binary string: 010101, etc.
Say you take six steps. Then 111111 is just as likely as 101010.
However, how many of the possible sequences have six ones? Just one. How many of the possible sequences have three ones and three zeros? Many more: twenty, in fact.
That number is called the multiplicity, and it grows mighty fast. In the limit its logarithm per step becomes the Shannon entropy.
Sequences are equally likely, but combinations are not.
In the limit the combinations with maximum entropy are going to dominate all the rest. So the walk is going to have gone an equal number of right and left steps...almost surely.
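To make the multiplicity count quantitative (a standard estimate, not part of this answer): the chance a 1D walk is back at the origin after $2n$ steps is
$$P(S_{2n}=0) = \binom{2n}{n}\,2^{-2n} \sim \frac{1}{\sqrt{\pi n}},$$
so the expected number of returns $\sum_n P(S_{2n}=0)$ diverges, which is the standard criterion for returning with probability 1. In 2D the terms behave like $1/(\pi n)$ and the sum still diverges, while in 3D they are $O(n^{-3/2})$ and it converges.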
|