Do primes become more or less frequent as you go further out on the number line? That is, are there more or fewer primes between 1 and 1,000,000 than between 1,000,000 and 2,000,000?
A proof or pointer to a proof would be appreciated. |
Distribution of primes? |
I keep looking at this picture and it's driving me crazy. How can the smaller circle travel the same distance when its circumference is less than that of the entire wheel?
![demonstration image][1]
[1]: http://mathworld.wolfram.com/images/gifs/AristotlesWheel.gif |
How does the wheel paradox work? |
Since constructive mathematics allows us to avoid things like Russell's Paradox, why don't constructive proofs replace traditional ones? How do we know the "regular" kind of mathematics is free of paradox without a proof construction? |
Aren't constructive math proofs more "sound"? |
Prime numbers are numbers with no factors other than one and themselves.
Factors of a number are always less than or equal to that number; so, the larger the number is, the larger the pool of "possible factors" that number might have.
So the larger a number is, the less likely it seems to be prime.
Surely there must be a number where, simply, every number above it has some other factors. A "critical point" where every number larger than it will always have some factors other than one and itself.
Has there been any research into finding this critical point, or has it been proven not to exist? That is, for any `n`, is there always guaranteed to be a number higher than `n` that has no factors other than one and itself? |
Is there possibly a largest prime number? |
What exactly does it mean for a function to be "well-behaved"? |
I am learning geometric algebra, and it is incredible how much it helps me understand other branches of mathematics. I wish I had been exposed to it earlier.
Additionally I feel the same way about enumerative combinatorics.
What are some less popular mathematical subjects that you think should be more popular?
|
Often in my studies (economics) the assumption of a "well-behaved" function will be invoked. I don't know exactly what that entails (I think twice continuous differentiability is one of the requirements), nor do I know why this is necessary (though I imagine the why will depend on each case).
Can someone explain it to me, and if there is an explanation of the why as well, I would be grateful. Thanks!
**EDIT**: To give one example where the term appears, check this Wikipedia entry for utility functions, which says at one point:
> In order to simplify calculations, various assumptions have been made of utility functions.
>
> - CES (constant elasticity of substitution, or isoelastic) utility
> - Exponential utility
> - Quasilinear utility
> - Homothetic preferences
>
> Most utility functions used in modeling or theory are **well-behaved**. They are usually monotonic, quasi-concave, continuous and globally non-satiated.
I might be wrong, but I don't think "well-behaved" means monotonic, quasi-concave, continuous and globally non-satiated. What about twice differentiable? |
Okay, so hopefully this isn't too hard or off-topic. Let's say I have a very simple LP filter, with a position variable and a cutoff variable (between 0 and 1). So, in every step, `position = position*(1-c)+input*c`. Basically, it moves a percentage of the distance between the current position and the input value, stores this value internally, and returns it as output. It's intentionally simplistic, since the project I'm using this for is going to have way too many of these in sequence processing audio in real time.
So anyway, given the filter design, how would I get a function that takes cutoff and input frequency (between 0 and 1, where 1 represents the sample rate and 0 is flat) as arguments, and returns either the decibel reduction, or what percent of the original amplitude it will be, or whatever type of output (in this vein) makes sense in the context of the solution? |
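For reference, a minimal sketch of one way to answer this (my own illustration, not from the post; function names are hypothetical): the filter above is the one-pole lowpass `y[n] = (1-c)*y[n-1] + c*x[n]`, whose transfer function is `H(z) = c / (1 - (1-c)*z^-1)`; evaluating `|H|` on the unit circle gives the gain at each normalized frequency.

```python
import math

def lp_gain(c: float, f: float) -> float:
    """Amplitude ratio (output/input) of y[n] = (1-c)y[n-1] + c x[n]
    at normalized frequency f, where f = 1 means the sample rate."""
    a = 1.0 - c
    # |H(e^{i 2 pi f})|^2 = c^2 / (1 - 2 a cos(2 pi f) + a^2)
    return c / math.sqrt(1.0 - 2.0 * a * math.cos(2.0 * math.pi * f) + a * a)

def lp_gain_db(c: float, f: float) -> float:
    """The same gain expressed as a decibel reduction."""
    return 20.0 * math.log10(lp_gain(c, f))

print(lp_gain_db(0.1, 0.0))    # 0 dB at DC: the filter passes DC unchanged
print(lp_gain_db(0.1, 0.25))   # attenuation of a tone at 1/4 the sample rate
```

A sanity check on the formula: at `f = 0` the denominator is `(1-a)^2 = c^2`, so the gain is exactly 1, as a lowpass should give at DC.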
Top Prime's Divisors'
Product (Plus one)'s factors are...?
Q.E.D Bitches
[XKCD][1]
[1]: http://xkcd.com/622/ |
Are real numbers *"real"*? It's not even computationally possible to compare two real numbers for equality!
Interestingly enough, it is shown in Abstract Algebra courses that the idea of complex numbers arises naturally from the idea of real numbers - you could not say, for instance, that the real numbers are *valid* but the complex numbers aren't (whatever your definition of *valid* is...) |
Okay, so hopefully this isn't too hard or off-topic. Let's say I have a very simple LP filter, with a position variable and a cutoff variable (between 0 and 1). So, in every step, `position = position*(1-c)+input*c`. Basically, it moves a percentage of the distance between the current position and the input value, stores this value internally, and returns it as output. It's intentionally simplistic, since the project I'm using this for is going to have way too many of these in sequence processing audio in real time.
So anyway, given the filter design, how would I get a function that takes cutoff and input frequency (between 0 and 1, where 1 represents the sample rate and 0 is flat) as arguments, and returns either the decibel reduction, or what percent of the original amplitude it will be, or whatever type of output (in this vein) makes sense in the context of the solution? |
*The sum of two independent Gaussian variables is another Gaussian.*
It seems natural, but I could not find a proof using Google.
What's a short way to prove this?
Thanks! |
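For reference, one short route (assuming, as the standard statement does, that $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$ are independent) is via characteristic functions:

$$\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) = e^{i\mu_1 t - \sigma_1^2 t^2/2}\; e^{i\mu_2 t - \sigma_2^2 t^2/2} = e^{i(\mu_1+\mu_2)t - (\sigma_1^2+\sigma_2^2)t^2/2},$$

which is the characteristic function of $N(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2)$, and characteristic functions determine distributions uniquely.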
The Weyl equidistribution theorem states that the sequence of fractional parts $\{n \xi\}$, $n = 0, 1, 2, \dots$ is uniformly distributed for $\xi$ irrational.
This can be proved using a bit of ergodic theory, specifically the fact that an irrational rotation is uniquely ergodic with respect to Lebesgue measure. It can also be proved by simply playing with trigonometric polynomials (i.e., polynomials in $e^{2\pi i k x}$ for $k$ an integer) and using the fact they are dense in the space of all continuous functions with period 1. In particular, one shows that if $f(x)$ is a continuous function with period 1, then for any $t$, $\int_0^1 f(x) dx = \lim \frac{1}{N} \sum_{i=0}^{N-1} f(t+i \xi)$. One shows this by checking this (directly) for trigonometric polynomials via the geometric series. This is a very elementary and nice proof.
The general form of Weyl's theorem states that if $p$ is a monic integer-valued polynomial, then the sequence $\{p(n \xi)\}$ for $\xi$ irrational is uniformly distributed modulo 1. I believe this can be proved using extensions of these ergodic theory techniques -- it's an exercise in Katok and Hasselblatt. I'd like to see an elementary proof.
Can the general form of Weyl's theorem be proved using the same elementary techniques as in the basic version? |
How do you prove that $p(n \xi)$ for $\xi$ irrational and $p$ a polynomial is uniformly distributed modulo 1? |
The normal train of logic goes like this:
- Prime numbers have two divisors.
- 1 has only one divisor.
- Therefore, 1 is not prime.
There are some more complex subtleties to that, but for most purposes, that reasoning will do. This comes from the accepted definition of a prime number.
Why is this the accepted definition?
Counting 1 as a prime would make prime factorization very, very, very (infinitely) messy. |
Suppose there are two cards, each with a positive real number on it, with one number twice the other, and each card worth the number written on it. You are given one of the cards and an opportunity to swap. If you choose to swap, you are just getting another random number, and so your expected gain should be 0. However, the other card has a 50% chance of being half and a 50% chance of being double, so using expected value, with your current value as x, we get 0.5*0.5x + 0.5*2x = 1.25x, and so it seems better to swap. Can anyone explain this apparent contradiction? |
Given a point's coordinates (x,y), what is the procedure for determining if it lies within a polygon whose vertices are (x1,y1), (x2,y2), ..., (xn,yn)? |
How do you determine if a point sits inside a polygon? |
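For reference, a minimal sketch of one standard answer (my own illustration; the function name is hypothetical), the ray-casting (crossing-number) test: shoot a horizontal ray from the point and count edge crossings; an odd count means inside. Points lying exactly on an edge are not handled specially here.

```python
def point_in_polygon(x, y, vertices):
    """Ray-casting test. vertices: list of (xi, yi) in order around
    the polygon; returns True if (x, y) is strictly inside."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):              # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                   # crossing lies to the right
                inside = not inside
    return inside

print(point_in_polygon(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(point_in_polygon(1.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # False
```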
When I was that age, I discovered Raymond Smullyan's classic logic puzzle books in the library (such as *What is the name of this book?*), and really got into it. I remember my amazement when I first understood how a complicated logic puzzle could become trivial, just symbolic manipulation really, with the right notation. |
I remember hearing several times the advice that we should avoid using a proof by contradiction if it is simple to convert it to a direct proof or a proof by contrapositive. Could you explain the reason? Do logicians think that proofs by contradiction are somewhat weaker than direct proofs?
Are the "proofs by contradiction" weaker than other proofs? |
What I really don't like about all the above answers is the underlying assumption that `1/3=0.3333...`. How do you know that? It seems to me like assuming something equivalent to the fact to be proven.
A proof I really like is:
0.9999... x 10 = 9.9999...
0.9999... x 9 + 0.9999... = 9.9999...
0.9999... x 9 = 9.9999... - 0.9999... = 9
0.9999... x 9 = 9
0.9999... = 1
The only things I need to assume are that `9.9999... - 0.9999... = 9` and that `0.9999... x 10 = 9.9999...`. These seem to me intuitive enough to take for granted.
The proof is from an old highschool level math book of the Open University in Israel. |
The following is a quote from *Surely you're joking, Mr. Feynman*. The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.)
> Then I got an idea. I challenged
> them: "I bet there isn't a single
> theorem that you can tell me - what
> the assumptions are and what the
> theorem is in terms I can understand -
> where I can't tell you right away
> whether it's true or false."
>
> It often went like this: They would
> explain to me, "You've got an orange,
> OK? Now you cut the orange into a
> finite number of pieces, put it back
> together, and it's as big as the sun.
> True or false?"
>
> "No holes."
>
> "Impossible!"
>
> "Ha! Everybody gather around! It's
> So-and-so's theorem of immeasurable
> measure!"
>
> Just when they think they've got
> me, I remind them, "But you said an
> orange! You can't cut the orange peel
> any thinner than the atoms."
>
> "But we have the condition of
> continuity: We can keep on cutting!"
>
> "No, you said an orange, so I
> assumed that you meant a real orange."
>
> So I always won. If I guessed it
> right, great. If I guessed it wrong,
> there was always something I could
> find in their simplification that they
> left out. |
I remember hearing several times the advice that we should avoid using a proof by contradiction if it is simple to convert it to a direct proof or a proof by contrapositive. Could you explain the reason? Do logicians think that proofs by contradiction are somewhat weaker than direct proofs?
Edit: To clarify the question: I am wondering if there is any reason one would still continue looking for a direct proof of some theorem, although a proof by contradiction has already been found. I don't mean improvements in terms of elegance or exposition; I am asking about logical reasons. For example, in the case of the axiom of choice, there is obviously reason to look for a proof that does not use the axiom of choice. Is there a similar case for proofs by contradiction? |
Can someone give a simple explanation for why the series 1 + 1/2 + 1/3 + ... doesn't converge, but just grows very slowly?
I'd prefer an easily comprehensible explanation rather than a rigorous proof of the type I could get from an undergraduate text book. |
Why does the series 1/1 + 1/2 + 1/3 + ... not converge? |
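For reference, the classical grouping argument shows both halves of this at once:

$$1 + \frac12 + \underbrace{\frac13 + \frac14}_{>\,1/2} + \underbrace{\frac15 + \cdots + \frac18}_{>\,1/2} + \underbrace{\frac19 + \cdots + \frac1{16}}_{>\,1/2} + \cdots$$

Each block ends at the next power of two and contributes more than $1/2$ (it has $2^k$ terms, each at least $1/2^{k+1}$), so the partial sums pass any bound; but you must *double* the number of terms to gain each extra $1/2$, which is exactly why the growth is so slow (logarithmic).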
Let's say I know 'X' is a Gaussian Variable.
Moreover, I know 'Y' is a Gaussian Variable and Y=X+Z.
Let's say X and Z are independent.
How can I prove Z is a Gaussian Variable?
It's easy to show the other way around (X, Z Orthogonal and Normal hence create a Gaussian Vector hence any Linear Combination of the two is a Gaussian Variable).
Thanks |
Let's say I know 'X' is a Gaussian Variable.
Moreover, I know 'Y' is a Gaussian Variable and Y=X+Z.
Let's say X and Z are independent.
How can I prove Y is a Gaussian Random Variable if and only if Z is a Gaussian R.V.?
It's easy to show the other way around (X, Z Orthogonal and Normal hence create a Gaussian Vector hence any Linear Combination of the two is a Gaussian Variable).
Thanks |
If you could go back in time and tell yourself to read a specific book at the beginning of your career as a mathematician, which book would it be? |
What is the single most influential book every mathematician should read? |
Is there a way of taking a number known to limited precision (e.g. 1.644934) and finding out an "interesting" real number (e.g. $\pi^2/6$) that's close to it?
I'm thinking of something like Sloane's Online Encyclopedia of Integer Sequences, only for real numbers.
The intended use would be: write a program to calculate an approximation to $\sum_{n=1}^\infty 1/n^2$, look up the answer ("looks close to $\pi^2/6$") and then use the likely answer to help find a proof that the sum really is $\pi^2/6$.
Does such a thing exist?
|
Is there a real number lookup algorithm or service? |
Try [Wolfram Alpha][1]. It actually does sequences as well.
[1]: http://www.wolframalpha.com/input/?i=1.644934 |
The following is a quote from *Surely you're joking, Mr. Feynman*. The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.)
> Then I got an idea. I challenged
> them: "I bet there isn't a single
> theorem that you can tell me - what
> the assumptions are and what the
> theorem is in terms I can understand -
> where I can't tell you right away
> whether it's true or false."
>
> It often went like this: They would
> explain to me, "You've got an orange,
> OK? Now you cut the orange into a
> finite number of pieces, put it back
> together, and it's as big as the sun.
> True or false?"
>
> "No holes."
>
> "Impossible!"
>
> "Ha! Everybody gather around! It's
> So-and-so's theorem of immeasurable
> measure!"
>
> Just when they think they've got
> me, I remind them, "But you said an
> orange! You can't cut the orange peel
> any thinner than the atoms."
>
> "But we have the condition of
> continuity: We can keep on cutting!"
>
> "No, you said an orange, so I
> assumed that you meant a real orange."
>
> So I always won. If I guessed it
> right, great. If I guessed it wrong,
> there was always something I could
> find in their simplification that they
> left out. |
I'm not a real Mathematician, just an enthusiast. I'm often in the situation where I want to learn some interesting Maths through a good book, but not through an actual Maths textbook. I'm also often trying to give people good Maths books to get them "hooked".
So the question: What is a good book, for laymen, which teaches interesting Mathematics, but actually does it in a "real" way? For example, "Fermat's Last Enigma" doesn't count, since it doesn't actually feature any Maths, just a story, and most textbooks don't count, since they don't feature a story.
My favorite example of this is "[Journey Through Genius][1]", which is a brilliant combination of interesting storytelling and large amounts of actual Mathematics. It took my love of Maths to a whole other level.
[1]: http://www.amazon.com/Journey-through-Genius-Theorems-Mathematics/dp/014014739X/ref=sr_1_1?ie=UTF8&s=books&qid=1279696241&sr=8-1 |
[Journey Through Genius][1]
![alt text][2]
A brilliant combination of interesting storytelling and large amounts of actual Mathematics. It took my love of Maths to a whole other level.
[1]: http://www.amazon.com/Journey-through-Genius-Theorems-Mathematics/dp/014014739X/ref=sr_1_1?ie=UTF8&s=books&qid=1279696241&sr=8-1
[2]: http://ecx.images-amazon.com/images/I/519AarC3x0L._BO2,204,203,200_PIsitb-sticker-arrow-click,TopRight,35,-76_AA300_SH20_OU01_.jpg |
Is there a consensus in the mathematical community, or some accepted authority, to determine whether zero should be classified as a natural number?
It seems as though formerly 0 was considered part of the set of natural numbers, but now it seems more common to see definitions saying that the natural numbers are precisely the positive integers.
|
Is 0 a natural number? |
Below is a visual proof (!) that 32.5 = 31.5. How could that be?
![alt text][1]
[1]: http://farm1.static.flickr.com/48/152036443_ca28c8d2a1_o.png |
Is 32.5 = 31.5 ? |
John D Cook writes <a href="http://www.johndcook.com/blog/">The Endeavor</a>
One of the MathWorks blogs: <a href="http://blogs.mathworks.com/loren/">Loren on the Art of Matlab</a>
... a few more:
<a href="http://unimodular.net/blog/?p=185">eon</a>
<a href="http://cameroncounts.wordpress.com/">Peter Cameron's Blog</a>
<a href="http://www.walkingrandomly.com/">Walking Randomly</a>
<a href="http://topologicalmusings.wordpress.com/">Todd and Vishal's Blog</a>
(Check their blogrolls for more) |
Simple answer: sometimes yes, sometimes no; it's usually stated (or implied by notation). From the [Wikipedia article](http://en.wikipedia.org/wiki/Natural_number):
> In mathematics, there are two
> conventions for the set of natural
> numbers: it is either the set of
> positive integers {1, 2, 3, ...}
> according to the traditional
> definition; or the set of non-negative
> integers {0, 1, 2, ...} according to a
> definition first appearing in the
> nineteenth century.
That said, more often than not I've seen the natural numbers only representing the 'counting numbers' (i.e. excluding zero). This was the traditional historical definition, and makes more sense to me. Zero is in many ways the 'odd one out' - indeed, historically it was not discovered (described?) until some time after the natural numbers. |
Are there any simple methods for calculating the digits of Pi? Computers are able to calculate billions of digits, so there must be an algorithm for computing them. Is there a simple algorithm that can be computed by hand in order to compute the first few digits? |
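For reference, a minimal sketch of one hand-friendly approach (my own illustration, not necessarily what production software uses): Machin's formula $\pi = 16\arctan(1/5) - 4\arctan(1/239)$, with the arctangents computed from their Taylor series in scaled-integer arithmetic.

```python
def atan_inv(x: int, digits: int) -> int:
    """Return atan(1/x) scaled by 10**digits, via the Taylor series
    atan(1/x) = 1/x - 1/(3 x^3) + 1/(5 x^5) - ..."""
    scale = 10 ** (digits + 5)      # five guard digits against rounding
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x              # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total // 10 ** 5

def pi_digits(digits: int) -> str:
    scaled = 16 * atan_inv(5, digits) - 4 * atan_inv(239, digits)
    s = str(scaled)
    return s[0] + "." + s[1:]       # the last digit or two may be off

print(pi_digits(30))                # 3.14159265358979323846...
```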
The arithmetic hierarchy defines the $\Pi_1$ formulae of arithmetic to be formulae that are provably equivalent to a formula in [prenex normal form][1] that only has universal quantifiers, and $\Sigma_1$ if it is provably equivalent to a prenex normal form with only existential quantifiers.
A formula is $\Delta_1$ if it is both $\Pi_1$ and $\Sigma_1$. These formulae are often called recursive: why?
[1]: http://en.wikipedia.org/wiki/Prenex_normal_form |
Why are Delta-1 sentences of arithmetic called recursive? |
Any homomorphism φ between the rings Z_18 and Z_15 is completely defined by φ(1). So from
0 = φ(0) = φ(18) = φ(18 * 1) = 18 * φ(1) = 15 * φ(1) + 3 * φ(1) = 3 * φ(1)
we get that 3 * φ(1) = 0 in Z_15, so φ(1) must be 0, 5 or 10. But how can I prove or disprove that these candidates give valid homomorphisms? |
What are all the homomorphisms between the rings Z_18 and Z_15? |
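For reference, a brute-force check (my own sketch, not from the post; the function name is hypothetical): since any candidate is determined by k = φ(1) via φ(x) = k*x mod 15, just test each k against both ring operations.

```python
def is_hom(k: int) -> bool:
    """Does phi(x) = k*x mod 15 define a ring homomorphism Z_18 -> Z_15?"""
    phi = lambda x: (k * x) % 15
    add_ok = all(phi((a + b) % 18) == (phi(a) + phi(b)) % 15
                 for a in range(18) for b in range(18))
    mul_ok = all(phi((a * b) % 18) == (phi(a) * phi(b)) % 15
                 for a in range(18) for b in range(18))
    return add_ok and mul_ok

print([k for k in range(15) if is_hom(k)])   # prints [0, 10]
```

Running this shows that only φ(1) = 0 (the zero map, which also satisfies 3 * φ(1) = 0) and φ(1) = 10 respect multiplication as well as addition.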
For basic mathematics, [mathpage][1] has a long course, including the list below. Don't be put off by the elementary-school-style lesson names; it does go into some depth with each one.
<pre>
Lesson 1 Reading and Writing Whole Numbers
Lesson 2 The Meaning of Decimals
Lesson 3 Multiplying and Dividing
Lesson 4 More Elementary Addition
Lesson 5 Adding Whole Numbers and Decimals
Lesson 6 The Meaning of Subtraction
Lesson 7 Subtracting Whole Numbers and Decimals
Lesson 8 The Meaning of Multiplication
Lesson 9 Multiplying Whole Numbers
Lesson 10 The Meaning of Division
Lesson 11 Short Division
Lesson 12 Dividing Decimals
Lesson 13 Percent with a Calculator
Lesson 14 Parts of Natural Numbers 1
Lesson 15 Parts of Natural Numbers 2
Lesson 16 Ratio and Proportion 1
Lesson 17 Ratio and Proportion 2
Lesson 18 Proportionality
Lesson 19 Proper Fractions, Mixed Numbers
Lesson 20 Unit Fractions
Lesson 21 Equivalent Fractions
Lesson 22 Lowest Common Multiple
Lesson 23 Fractions into Decimals
Lesson 24 Adding and Subtracting Fractions
Lesson 25 Multiplying Fractions
Lesson 26 The Meaning of Multiplying Fractions
Lesson 27 Percents are Ratios
Lesson 28 Percent of a Number
Lesson 29 What Percent?
Lesson 30 Percent Increase or Decrease
Lesson 31 Prime Numbers
Lesson 32 Greatest Common Divisor
</pre>
[1]: http://www.themathpage.com/Arith/arithmetic.htm |
How come 32.5 = 31.5? |
The arithmetic hierarchy defines the Π_1 formulae of arithmetic to be formulae that are provably equivalent to a formula in [prenex normal form][1] that only has universal quantifiers, and Σ_1 if it is provably equivalent to a prenex normal form with only existential quantifiers.
A formula is Δ_1 if it is both Π_1 and Σ_1. These formulae are often called recursive: why?
[1]: http://en.wikipedia.org/wiki/Prenex_normal_form |
How can I show that (n-1)! is congruent to -1 (mod n) iff n is prime?
Thanks. |
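For reference, a quick numeric sanity check of the statement (not a proof; my own sketch), comparing Wilson's criterion against trial-division primality for small n:

```python
from math import factorial

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 20):
    wilson = factorial(n - 1) % n == n - 1   # (n-1)! == -1 (mod n)
    assert wilson == is_prime(n), n

print("Wilson's criterion matches primality for n = 2..19")
```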
In category theory, a subobject of X is defined as an object Y with a monomorphism, from Y to X. If A is a subobject of B, and B a subobject of A, are they isomorphic? It is not true in general that having monomorphisms going both ways between two objects is sufficient for isomorphy, so it would seem the answer is no.
I ask because I'm working through the exercises in Geroch's Mathematical Physics, and one of them asks you to prove that the relation "is a subobject of" is reflexive, transitive and antisymmetric. But it can't be antisymmetric if I'm right... |
If A is a subobject of B, and B a subobject of A, are they isomorphic? |
I've read about about [higher-order logics](http://en.wikipedia.org/wiki/Higher_order_logic) (i.e. those that build on first-order predicate logic) but am not too clear on their applications. While they are capable of expressing a greater range of proofs (though never *all*, by Godel's Incompleteness theorem), they are often said to be less "well-behaved".
Mathematicians generally seem to steer clear of such logics when possible, yet they are certainly necessary for proving some more complicated concepts/theorems, as I understand. (For example, it seems the reals can only be constructed using at least 2nd order logic.) Why is this? What makes them less well-behaved or less useful with respect to logic/proof theory/other fields? |
When I tried to approximate $\int_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7}\, dx$, I kept getting answers that were really close to 0, so I think it might be true. But why? When I [ask Mathematica][1], I get a bunch of symbols I don't understand!
[1]: http://integrals.wolfram.com/index.jsp?expr=%281-x%5E7%29%5E%281%2F5%29+-+%281-x%5E5%29%5E%281%2F7%29&random=false |
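For reference, a quick numeric check of the kind the poster describes, as a sketch using scipy:

```python
from scipy.integrate import quad

f = lambda x: (1 - x**7) ** (1 / 5) - (1 - x**5) ** (1 / 7)
value, err = quad(f, 0, 1)
print(value, err)   # value is 0 to within the reported quadrature error
```

The conceptual reason: $y = (1-x^7)^{1/5}$ and $y = (1-x^5)^{1/7}$ both describe the curve $x^7 + y^5 = 1$ (with the roles of the variables swapped), so on $[0,1]$ they are inverse functions of each other; their graphs are reflections across the line $y = x$, enclose equal areas, and the two integrals cancel exactly.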
Why is $\int_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7}\, dx = 0$? |
Why is $\int_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7}\, dx = 0$? |
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain **to a non-mathematician** that complex numbers are necessary and meaningful, in the same way that real numbers are? |
Are complex numbers quantities? |
What is a unital homomorphism? Why are they important? |
Which is the single best book for [Number Theory][1] that everyone who loves Mathematics should read?
[1]: http://en.wikipedia.org/wiki/Number_theory |
**Background:** Many (if not all) of the transformation matrices used in 3D computer graphics are 4x4, including the three values for `x`, `y` and `z`, plus an additional term which usually has a value of 1.
Given the extra computing effort required to multiply 4x4 matrices instead of 3x3 matrices, there must be a substantial benefit to including that extra fourth term, even though 3x3 matrices *should* (?) be sufficient to describe points and transformations in 3D space.
**Question:** Why is the inclusion of a fourth term beneficial? I can guess that it makes the computations easier in some manner, but I would really like to know *why* that is the case. |
Why are 3D transformation matrices [4]x[4] instead of [3]x[3]? |
> even though 3x3 matrices should (?) be sufficient to describe points and transformations in 3D space.
No, they aren't enough! Suppose you represent points in space using 3D vectors. You can transform these using 3x3 matrices. But if you examine the definition of matrix multiplication you should see immediately that multiplying a zero 3D vector by a 3x3 matrix gives you another zero vector. So simply multiplying by a 3x3 matrix can never move the origin. But translations (and rotations about points other than the origin) do need to move the origin. So 3x3 matrices are not enough.
I haven't tried to explain exactly how 4x4 matrices are used. But I hope I've convinced you that 3x3 matrices aren't up to the task and that something more is needed. |
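For reference, a minimal sketch of the homogeneous-coordinates trick the 4x4 convention is based on (assuming column vectors with a final component of 1; the function name is my own):

```python
import numpy as np

def translation(tx: float, ty: float, tz: float) -> np.ndarray:
    """4x4 matrix that translates homogeneous points by (tx, ty, tz)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

origin = np.array([0.0, 0.0, 0.0, 1.0])   # the origin in homogeneous form
print(translation(1, 2, 3) @ origin)       # [1. 2. 3. 1.] -- the origin moved
```

The extra coordinate is what lets a single matrix multiply encode the additive part of an affine map, which is exactly what the answer above says a 3x3 matrix cannot do.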
Similar to the Monty Hall problem, but trickier: at the latest Gathering 4 Gardner, Gary Foshee asked
> I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?
We are assuming that births are equally distributed during the week, that every child is a boy or girl with probability 1/2, and that there is no dependence relation between sex and day of birth.
Answer: 13/27. This was in the news a lot recently, see for instance [BBC News][1].
[1]: http://news.bbc.co.uk/2/hi/programmes/more_or_less/8735812.stm |
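For reference, the 13/27 can be checked by brute-force enumeration of the (sex, weekday) sample space, under exactly the independence and uniformity assumptions stated above:

```python
from itertools import product
from fractions import Fraction

children = list(product("BG", range(7)))        # (sex, weekday): 14 equally likely types
pairs = list(product(children, children))       # 196 equally likely families

tuesday_boy = ("B", 1)                          # label Tuesday as day 1 (arbitrary)
cond = [p for p in pairs if tuesday_boy in p]   # the condition of the puzzle
both_boys = [p for p in cond if p[0][0] == "B" and p[1][0] == "B"]
print(Fraction(len(both_boys), len(cond)))      # 13/27
```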
Wikipedia has a closed-form expression called "Binet's formula".
http://en.wikipedia.org/wiki/Fibonacci_number#Relation_to_the_golden_ratio
![$F\left(n\right) = {{\varphi^n-(1-\varphi)^n} \over {\sqrt 5}}$][1]
This is based on the Golden Ratio.
[1]: http://chart.apis.google.com/chart?cht=tx&chl=F%5Cleft(n%5Cright)%20%3D%20%7B%7B%5Cvarphi%5En-(1-%5Cvarphi)%5En%7D%20%5Cover%20%7B%5Csqrt%205%7D%7D |
It's not clear what the question is asking. Is it asking for logical relations among the properties? E.g. a relation that is transitive and symmetric MUST be reflexive on every element that is related to anything: if xRy, then yRx by symmetry, and then xRx by transitivity.
Or is it asking for mathematical relations that have these properties? ">" is transitive, asymmetric and total or complete (for all distinct x,y either xRy or yRx). It is a strict total order. "greater or equal" is transitive, complete, antisymmetric, reflexive. This is a weak total order.
"x is a subset of y" is reflexive, antisymmetric and transitive, but not symmetric in general (although it can be symmetric if you restrict yourself to the right sort of sets). "x is a proper subset of y" is asymmetric, irreflexive (for all x, it is not the case that xRx), and transitive.
Note that in both these cases, one can define one of each pair of relations in terms of the other (plus a notion of identity or non-identity).
In logic the notion of "x entails y" is transitive, reflexive.
The relation "x has the same integer part as y" over the real numbers is transitive, reflexive, symmetric. This is called an equivalence relation. |
[Godel's proof][1] is one I enjoyed. It was a little hard to understand, but there is nothing in this book that makes it inaccessible to someone without a strong math background.
Keeping with Godel in the title, [Godel, Escher, Bach: An Eternal Golden Braid][2], while not just about math, was a good read (a bit long ;p).
[The Music of the Primes: Searching to Solve the Greatest Mystery in Mathematics][3] describes the Riemann Hypothesis and the people who were involved with it. My favorite part was learning about the people who attempted to solve it; many I had never heard of before this book. (Side note: I'll have to read pguertin's suggestion; it sounds like a similar but more profound book.)
[1]: http://www.amazon.com/Godels-Proof-Ernest-Nagel/dp/0814758371/ref=sr_1_1?ie=UTF8&s=books&qid=1279728534&sr=8-1
[2]: http://www.amazon.com/Godel-Escher-Bach-Eternal-Golden/dp/0465026567/ref=pd_sim_b_4
[3]: http://www.amazon.com/Music-Primes-Searching-Greatest-Mathematics/dp/0060935588/ref=sr_1_15?ie=UTF8&s=books&qid=1279728739&sr=8-15 |
Ones that were suggested to me by my Calculus teacher in High School. Even my wife liked them, and she hates math now:
- [The Education of T.C. Mits: What
modern mathematics means to you][1]
- [Infinity: Beyond the Beyond the
Beyond][1]
Written and illustrated (pictures are great ;p) by a couple, Lillian R. Lieber and Hugh Gray Lieber. These books were hard to find before because they went out of print, but I have this new version and like it a lot. The books explain profound topics in a way that is graspable by anyone, without being dumbed down.
[Godel's proof][2] is one I enjoyed. It was a little hard to understand, but there is nothing in this book that makes it inaccessible to someone without a strong math background.
Keeping with Godel in the title, [Godel, Escher, Bach: An Eternal Golden Braid][3], while not just about math, was a good read (a bit long ;p).
[The Music of the Primes: Searching to Solve the Greatest Mystery in Mathematics][4] describes the Riemann Hypothesis and the people who were involved with it. My favorite part was learning about the people who attempted to solve it; many I had never heard of before this book. (Side note: I'll have to read pguertin's suggestion; it sounds like a similar but more profound book.)
[1]: http://www.amazon.com/Infinity-Beyond-Lillian-R-Lieber/dp/1589880366/ref=pd_bxgy_b_img_b
[2]: http://www.amazon.com/Godels-Proof-Ernest-Nagel/dp/0814758371/ref=sr_1_1?ie=UTF8&s=books&qid=1279728534&sr=8-1
[3]: http://www.amazon.com/Godel-Escher-Bach-Eternal-Golden/dp/0465026567/ref=pd_sim_b_4
[4]: http://www.amazon.com/Music-Primes-Searching-Greatest-Mathematics/dp/0060935588/ref=sr_1_15?ie=UTF8&s=books&qid=1279728739&sr=8-15 |
I'm very interested in Computer Science (computational complexity, etc.). I've already finished a University course in the subject (using Sipser's "Introduction to the Theory of Computation").
I know the basics, i.e. Turing Machines, Computability (Halting problem and related reductions), Complexity classes (time and space, P/NP, L/NL, a little about BPP).
Now, I'm looking for a good book to learn about some more advanced concepts. Any ideas? |
I am vaguely familiar with the broad strokes of the development of group theory, first when ideas of geometric symmetries were studied in concrete settings without the abstract notion of a group available, and later as it was formalized by Cayley, Lagrange, etc (and later, infinite groups being well-developed). In any case, it's intuitively easy for me to imagine that there was substantial lay, scientific, and artistic interest in several of the concepts well-encoded by a theory of groups.
I know a few of the corresponding names for who developed the abstract formulation of rings initially (Wedderburn etc.), but I'm less aware of the ideas and problems that might have given rise to interest in ring structures. Of course, now they're terribly useful in lots of math, and $\mathbb{Z}$ is a natural model for elementary properties of commutative rings, and I'll wager number theorists had an interest in developing the concept. And if I wanted noncommutative models, matrices are a good place to start looking. But I'm not even familiar with what the state of knowledge and formalization of things like matrices/linear operators was at the time rings were developed, so maybe these aren't actually good examples for how rings might have been motivated.
Can anyone outline or point me to some basics on the history of the development of basic algebraic structures besides groups? |
There's some of the history here in Bourbaki's _Commutative Algebra,_ in the appendix. Basically, a fair bit of ring theory was developed for algebraic number theory. This in turn was because people were trying to prove Fermat's last theorem.
Why's this? Let $p$ be a prime. Then the equation $x^p + y^p = z^p$ can be written as $\prod (x+\zeta_p^i y) = z^p$ for $\zeta_p$ a primitive $p$th root of unity. All these quantities are elements of the ring $Z[\zeta_p]$. So if $p>3$ and there is unique factorization in the ring $Z[\zeta_p]$, it isn't terribly hard to show that this is impossible, at least in the basic case where $p$ does not divide $xyz$ (the argument can be found, for instance, in Borevich-Shafarevich's book on number theory).
Lamé actually thought he had a proof of FLT via this argument. But he was wrong: these rings generally don't admit unique factorization. So, it became a problem to study these "generalized integers" $Z[\zeta_p]$, which of course are basic examples of rings. It wasn't until Dedekind that the right notion of unique factorization -- namely, factorization of ideals -- was found. In fact, the case of FLT I just mentioned generalizes to the case where $p$ does not divide the class number of $Z[\zeta_p]$ (the class number is the invariant that measures how far it is from being a UFD). And, according to this [article][1], Dedekind was the first to define a ring.
The article I linked to, incidentally, has a fair bit of additional interesting history.
[1]: http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Ring_theory.html |
Ones that were suggested to me by my Calculus teacher in High School. Even my wife liked them, and she hates math now:
- [The Education of T.C. Mits: What
modern mathematics means to you][1]
- [Infinity: Beyond the Beyond the
Beyond][1]
Written and illustrated (pictures are great ;p) by a couple, Lillian R. Lieber and Hugh Gray Lieber. These books were hard to find before because they went out of print, but I have this new version and like it a lot. The books explain profound topics in a way that is graspable by anyone, without being dumbed down.
[Godel's proof][2] is one I enjoyed. It was a little hard to understand, but there is nothing in this book that makes it inaccessible to someone without a strong math background.
Keeping with Godel in the title, [Godel, Escher, Bach: An Eternal Golden Braid][3], while not just about math, was a good read (a bit long ;p).
[The Music of the Primes: Searching to Solve the Greatest Mystery in Mathematics][4] describes the Riemann Hypothesis and the people who were involved with it. My favorite part was learning about the people who attempted to solve it; many I had never heard of before this book. (Side note: I'll have to read pguertin's suggestion; it sounds like a similar but more profound book.)
[1]: http://www.amazon.com/Infinity-Beyond-Lillian-R-Lieber/dp/1589880366/ref=pd_bxgy_b_img_b
[2]: http://www.amazon.com/Godels-Proof-Ernest-Nagel/dp/0814758371/ref=sr_1_1?ie=UTF8&s=books&qid=1279728534&sr=8-1
[3]: http://www.amazon.com/Godel-Escher-Bach-Eternal-Golden/dp/0465026567/ref=pd_sim_b_4
[4]: http://www.amazon.com/Music-Primes-Searching-Greatest-Mathematics/dp/0060935588/ref=sr_1_15?ie=UTF8&s=books&qid=1279728739&sr=8-15 |
[Paul Nahin][1] has a number of accessible mathematics books written for non-mathematicians, the most famous being
* [An Imaginary Tale: The Story of $\sqrt{-1}$][2]
* [Dr. Euler's Fabulous Formula (Cures Many Mathematical Ills!)][3]
[Professor Ian Stewart][4] also has many books which give laymen explanations of various surprising mathematical results
* [Professor Stewart's Cabinet of Mathematical Curiosities][5]
* [Professor Stewart's Hoard of Mathematical Treasures][6]
* [Cows in the Maze: And Other Mathematical Explorations][7]
* [Does God Play Dice? The New Mathematics of Chaos][8]
[1]: http://www.amazon.com/Paul-J.-Nahin/e/B001HCS1XI/ref=sr_ntt_srch_lnk_1?_encoding=UTF8&qid=1279734093&sr=8-1
[2]: http://www.amazon.com/Imaginary-Tale-Princeton-Library-Science/dp/0691146004/ref=ntt_at_ep_dpt_1
[3]: http://www.amazon.com/Dr-Eulers-Fabulous-Formula-Mathematical/dp/0691118221/ref=ntt_at_ep_dpt_4
[4]: http://www.amazon.com/Ian-Stewart/e/B000APQ9NM/ref=ntt_athr_dp_pel_1
[5]: http://www.amazon.com/Professor-Stewarts-Cabinet-Mathematical-Curiosities/dp/0465013023/ref=sr_1_1?ie=UTF8&s=books&qid=1279734223&sr=1-1
[6]: http://www.amazon.com/Professor-Stewarts-Hoard-Mathematical-Treasures/dp/0465017754/ref=sr_1_2?ie=UTF8&s=books&qid=1279734223&sr=1-2
[7]: http://www.amazon.com/Cows-Maze-Other-Mathematical-Explorations/dp/0199562075/ref=ntt_at_ep_dpt_6
[8]: http://www.amazon.com/Does-Play-Dice-Mathematics-Chaos/dp/0631232516/ref=ntt_at_ep_dpt_2 |
I'm looking to find out if there's any easy way to calculate the number of ways to tile a $3 \times 2n$ rectangle with dominoes. I was able to do it with the two codependent recurrences
f(0) = g(0) = 1
f(n) = f(n-1) + 2g(n-1)
g(n) = f(n) + g(n-1)
where $f(n)$ is the actual answer and $g(n)$ is a helper function that represents the number of ways to tile a $3 \times 2n$ rectangle with two extra squares on the end (the same as a $3 \times 2n+1$ rectangle missing one square).
By combining these and doing some algebra, I was able to reduce this to
f(n) = 4f(n-1) - f(n-2)
which shows up as sequence [A001835](http://www.research.att.com/~njas/sequences/A001835), confirming that this is the correct recurrence.
The number of ways to tile a $2 \times n$ rectangle is the Fibonacci numbers because every rectangle ends with either a vertical domino or two horizontal ones, which gives the exact recurrence that the Fibonacci numbers satisfy. My question is, **is there a similar simple explanation for this recurrence for tiling a $3 \times 2n$ rectangle**? |
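For reference, a quick consistency check (my own sketch; the function name is hypothetical) that iterates the coupled recurrences and verifies the second-order recurrence against them:

```python
def tilings(n: int) -> int:
    """Number of domino tilings of a 3 x 2n rectangle, via f and g."""
    f, g = 1, 1                   # f(0) = g(0) = 1
    for _ in range(n):
        f = f + 2 * g             # f(n) = f(n-1) + 2 g(n-1)
        g = f + g                 # g(n) = f(n) + g(n-1)
    return f

vals = [tilings(n) for n in range(10)]
assert all(vals[n] == 4 * vals[n - 1] - vals[n - 2] for n in range(2, 10))
print(vals)   # 1, 3, 11, 41, 153, ... (A001835)
```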
I'm looking to find out if there's any easy way to calculate the number of ways to tile a $3 \times 2n$ rectangle with dominoes. I was able to do it with the two codependent recurrences
f(0) = g(0) = 1
f(n) = f(n-1) + 2g(n-1)
g(n) = f(n) + g(n-1)
where $f(n)$ is the actual answer and $g(n)$ is a helper function that represents the number of ways to tile a $3 \times 2n$ rectangle with two extra squares on the end (the same as a $3 \times 2n+1$ rectangle missing one square).
By combining these and doing some algebra, I was able to reduce this to
f(n) = 4f(n-1) - f(n-2)
which shows up as sequence [A001835](http://www.research.att.com/~njas/sequences/A001835), confirming that this is the correct recurrence.
The number of ways to tile a $2 \times n$ rectangle is the Fibonacci numbers because every rectangle ends with either a vertical domino or two horizontal ones, which gives the exact recurrence that the Fibonacci numbers satisfy. My question is, **is there a similar simple explanation for this recurrence for tiling a $3 \times 2n$ rectangle**? |
[Paul Nahin][1] has a number of accessible mathematics books written for non-mathematicians, the most famous being
* [An Imaginary Tale: The Story of $\sqrt{-1}$][2]
* [Dr. Euler's Fabulous Formula (Cures Many Mathematical Ills!)][3]
[Professor Ian Stewart][4] also has many books which each give laymen overviews of various fields or surprising mathematical results
* [Professor Stewart's Cabinet of Mathematical Curiosities][5]
* [Professor Stewart's Hoard of Mathematical Treasures][6]
* [Cows in the Maze: And Other Mathematical Explorations][7]
* [Does God Play Dice? The New Mathematics of Chaos][8]
[1]: http://www.amazon.com/Paul-J.-Nahin/e/B001HCS1XI/ref=sr_ntt_srch_lnk_1?_encoding=UTF8&qid=1279734093&sr=8-1
[2]: http://www.amazon.com/Imaginary-Tale-Princeton-Library-Science/dp/0691146004/ref=ntt_at_ep_dpt_1
[3]: http://www.amazon.com/Dr-Eulers-Fabulous-Formula-Mathematical/dp/0691118221/ref=ntt_at_ep_dpt_4
[4]: http://www.amazon.com/Ian-Stewart/e/B000APQ9NM/ref=ntt_athr_dp_pel_1
[5]: http://www.amazon.com/Professor-Stewarts-Cabinet-Mathematical-Curiosities/dp/0465013023/ref=sr_1_1?ie=UTF8&s=books&qid=1279734223&sr=1-1
[6]: http://www.amazon.com/Professor-Stewarts-Hoard-Mathematical-Treasures/dp/0465017754/ref=sr_1_2?ie=UTF8&s=books&qid=1279734223&sr=1-2
[7]: http://www.amazon.com/Cows-Maze-Other-Mathematical-Explorations/dp/0199562075/ref=ntt_at_ep_dpt_6
[8]: http://www.amazon.com/Does-Play-Dice-Mathematics-Chaos/dp/0631232516/ref=ntt_at_ep_dpt_2 |
I know that the Fibonacci sequence can be described via the Binet's formula.
However, I was wondering if there was a similar formula for $n!$.
**Is this possible? If not, why not?** |
Is there a closed-form equation for $n!$? If not, why not? |
What is the best book or script on category theory? |
Are x × 0 = 0 and x × 1 = x and -(-x) = x axioms? |
Are x × 0 = 0 and x × 1 = x and -(-x) = x axioms? |
When it comes to textbooks, the Kenneth Rosen text [Discrete Mathematics and its Applications][1] is highly recommended. I was first introduced to it at my university, but I've seen it cited in several places.
[1]: http://www.amazon.com/gp/product/0073229725/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0072899050&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=184DG0PP8BR7S62457K7 |
The question is more profound than is initially seems, and is really about algebraic structures. The first question you have to ask yourself is *where you're working*:
In general, addition and multiplication are defined on a *structure*, which in this case is a *set* (basically a collection of "things") with two *operators* we call *addition* (marked +) and *multiplication* (marked · or × or * or whatever). If this structure holds some properties, which are sometimes called *axioms*, then it is called a [ring][1]. The properties are:
1. The set is closed under the operator +. That is, if a and b are in G, then a+b is also in G.
2. The set has a member which we mark as 0. It has the properties that for every a in G, a+0 = a and 0+a = a.
3. The operation + is commutative: a+b = b+a.
4. The operation + is associative: (a+b)+c = a+(b+c).
5. Every member has an additive inverse: for every a in G there is some b such that a+b = 0 (we mark b as -a).
6. The set is closed under the operator *. That is, if a and b are in G, then a*b is also in G.
7. The set has a member which we mark as 1. It has the properties that for every a in G, a*1 = a and 1*a = a.
8. The operation * is associative: (a*b) * c = a * (b*c).
9. Multiplication is distributive over addition: a * (b+c) = a*b + a*c and (a+b) * c = a*c + b*c.
While this is a long list, and introduces the operator + which is not even explicitly mentioned in the question, these properties are quite natural. For example, the integers {..., -2, -1, 0, 1, 2, ...} we all know and love indeed form a ring. The real numbers also form a ring (in fact they form a [field][2], which means they hold even more properties).
In regard to your question, the identity x * 1 = x (I assume that's what you meant) is in fact an axiom - it is axiom 7. However, the other two identities are results of the other axioms:
**First identity:** We use axioms 2 and 9 to get
0 * x = (0+0) * x = 0*x + 0*x
and then by adding -(0*x) (the additive inverse of 0*x, from axiom 5) to both sides,
0 = 0*x
**Second identity:** As stated in axiom 5, -(-x) is just to notation used which means "the additive inverse of -x". To show that -(-x) = x we need to show that x is in fact the additive inverse of -x, or in other words that x + -x = 0 and -x + x = 0. But that's just what axiom 5 says, so we're done.
Last point: You might be wondering why did we have to go and introduce addition to answer a question about multiplication? Well, it so happens that without addition the other two identities are simply not true. For example, if we look at the positive integers {1, 2, 3, ...} with only multiplication, then there is no 0 there! Simply put, this is because the positive integers do not form a ring.
[1]: http://en.wikipedia.org/wiki/Ring_%28mathematics%29
[2]: http://en.wikipedia.org/wiki/Field_%28mathematics%29 |
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after `n` generations if they each produce a new pair every generation. Is there any *other* reason you would care about the Fibonacci sequence? |
What is the best book or lecture notes on category theory? |
I know that the harmonic series 1 + 1/2 + 1/3 + 1/4 + ... diverges. I also know that the sum of the inverse of prime numbers 1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ... diverges too, even more slowly since it's O(log log n).
But I think I read that if we consider the numbers whose decimal representation does not contain a certain digit (say, 7) and sum the inverses of these numbers, the sum is finite (usually between 19 and 20; it depends on the missing digit). Does anybody know the result, and some way to prove that the sum is finite? |
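For reference, the standard counting bound that makes the convergence plausible: there are at most $8 \cdot 9^{n-1}$ numbers with exactly $n$ digits that avoid a fixed nonzero digit (at most 8 choices for the leading digit, 9 for each of the rest), and each such number is at least $10^{n-1}$, so

$$\sum_{k \text{ avoids the digit}} \frac{1}{k} \;\le\; \sum_{n=1}^{\infty} \frac{8 \cdot 9^{n-1}}{10^{n-1}} \;=\; \frac{8}{1 - 9/10} \;=\; 80.$$

Almost all large numbers contain every digit somewhere, so the surviving terms thin out geometrically.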
I know that the harmonic series 1 + 1/2 + 1/3 + 1/4 + ... diverges. I also know that the sum of the inverse of prime numbers 1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ... diverges too, even if really slowly since it's O(log log n).
But I think I read that if we consider the numbers whose decimal representation does not contain a certain digit (say, 7) and sum the inverses of these numbers, the sum is finite (usually between 19 and 20; it depends on the missing digit). Does anybody know the result, and some way to prove that the sum is finite? |
One of the first things ever taught in a differential calculus class:
- The derivative of sin(x) is cos(x)
- The derivative of cos(x) is -sin(x)
This leads to a rather neat (and convenient?) chain of derivatives:
<pre>
sin(x)
cos(x)
-sin(x)
-cos(x)
sin(x)
...
</pre>
An analysis of the shape of their graphs confirms *some* points; for example, when sin(x) is at a maximum, cos(x) is zero and moving downwards; when cos(x) is at a maximum, sin(x) is zero and moving upwards. But these "matching points" only work for multiples of pi/2.
Let us move back towards the original definition(s) of sine and cosine:
At the most basic level, sin(x) is defined as -- for a right triangle with internal angle x -- the length of the side opposite the angle divided by the length of the hypotenuse of the triangle.
To generalize this to the domain of all real numbers, sin(x) was then defined as the y coordinate of a point on the unit circle that is an angle x from the positive x axis.
The definition of cos(x) was then made the same way, but with adj/hyp and the x coordinate, as we all know.
Is there anything about this **basic** definition that allows someone to look at these definitions, alone, and think, "Hey, the derivative of the sine function with respect to angle is the cosine function!"
That is, from **the unit circle definition alone**. Or, even more amazingly, the **right triangle definition alone**. Ignoring graphical analysis of their plot.
In essence, I am asking: "Intuitively, *why* is the derivative of the sine the cosine?"
(Excluding analysis of their plot as an intuitive reason) |
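For reference, the standard limit computation sits one step away from the unit-circle picture: the angle-addition formula plus the two limits $\lim_{h\to 0} \frac{\sin h}{h} = 1$ and $\lim_{h\to 0} \frac{\cos h - 1}{h} = 0$ (both of which are read off the circle geometrically) give

$$\frac{\sin(x+h) - \sin x}{h} = \cos x \cdot \frac{\sin h}{h} + \sin x \cdot \frac{\cos h - 1}{h} \;\longrightarrow\; \cos x.$$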
There's some of the history here in Bourbaki's _Commutative Algebra,_ in the appendix. Basically, a fair bit of ring theory was developed for algebraic number theory. This in turn was because people were trying to prove Fermat's last theorem.
Why's this? Let $p$ be a prime. Then the equation $x^p + y^p = z^p$ can be written as $\prod (x+\zeta_p^iy) = z^p$ for $\zeta_p$ a primitive $p$th root of unity. All these quantities are elements of the ring $Z[\zeta_p]$. So if $p>3$ and there is unique factorization in the ring $Z[\zeta_p]$, it isn't terribly hard to show that this is impossible at least in the basic case where $p $ does not divide $xyz$ (and can be found, for instance, in Borevich-Shafarevich's book on number theory).
Lamé actually thought he had a proof of FLT via this argument. But he was wrong: these rings generally don't admit unique factorization. So, it became a problem to study these "generalized integers" $Z[\zeta_p]$, which of course are basic examples of rings. It wasn't until Dedekind that the right notion of unique factorization -- namely, factorization of ideals -- was found. In fact, the case of FLT I just mentioned generalizes to the case where $p$ does not divide the class number of $Z[\zeta_p]$ (the class number is the invariant that measures how far it is from being a UFD). And, according to this [article][1], Dedekind was the first to define a ring.
The article I linked to, incidentally, has a fair bit of additional interesting history.
[1]: http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Ring_theory.html |
As a Physics Major, I would like to propose an answer that comes from my understanding of seeing sine and cosine in the real world.
In doing this, I will examine uniform circular motion.
Because of the point-on-a-unit-circle definition of sine and cosine, we can say that:
r(t) = < cos(t), sin(t) >
is a proper parametric function to describe a point moving along the unit circle.
Let us consider what the first derivative, in a physical context, should be. The first derivative of position *should* represent, ideally, the *velocity* of the point.
In a physical context, we would expect the velocity to be the line tangent to the direction of motion at a given time `t`. Following from this, it would be tangent to the circle at angle `t`. Also, because the angular velocity is constant, the magnitude of the velocity should be constant as well.
r'(t) = < -sin(t), cos(t) >
|r'(t)|^2 = (-sin(t))^2 + cos(t)^2
|r'(t)|^2 = sin(t)^2 + cos(t)^2
|r'(t)|^2 = 1
|r'(t)| = 1
As expected, the velocity is constant, so the derivatives of sine and cosine are behaving as they should.
We can also think about what the **direction** of the velocity would be, as well, compared to the position vector.
I'm not sure if this is "cheating" by the bounds of the question, but by visualizing the graph we can see that the velocity, by nature of being tangent to the circle, must be perpendicular to the position vector.
If this is true, then position * velocity = 0 (dot product).
r(t) * r'(t) = 0
< cos(t), sin(t) > * < -sin(t), cos(t) > = 0
( cos(t) * -sin(t) ) + ( sin(t) * cos(t) ) = 0
-sin(t)cos(t) + sin(t)cos(t) = 0
0 = 0
Life is good. If we assume that the derivative of cos(t) is -sin(t) and that the derivative of sin(t) is cos(t), we find physical behavior **exactly** like expected: a constant velocity that is always perpendicular to the position vector.
We can take this further and look at the acceleration. In Physics, we would call this the restoring force. In a circle, what acceleration would have to exist in order to keep a point moving in a circle?
More specifically, in what direction would this acceleration have to be?
It takes little thought to arrive at the idea that the acceleration would have to be center-seeking, pointing towards the center. So, if we can find that the acceleration is **in the opposite direction** from the position vector, then we can be almost certain about the derivatives of sine and cosine. That is, their internal angle should be `pi`.
r(t) * r''(t) = |r(t)| * |r''(t)| * cos(pi)
r(t) * r''(t) = |r(t)| * |r''(t)| * -1
< cos(t), sin(t) > * < -cos(t), -sin(t) > = |<cos(t),sin(t)>| * |<-cos(t),-sin(t)>| * -1
-cos(t)^2 + -sin(t)^2 = 1 * 1 * -1
-1 * (cos(t)^2 + sin(t)^2) = -1
-1 * 1 = -1
-1 = -1
QED |
As a Physics Major, I would like to propose an answer that comes from my understanding of seeing sine and cosine in the real world.
In doing this, I will examine uniform circular motion.
Because of the point-on-a-unit-circle definition of sine and cosine, we can say that:
r(t) = < cos(t), sin(t) >
is a proper parametric function to describe a point moving along the unit circle.
Let us consider what the first derivative, in a physical context, should be. The first derivative of position *should* represent, ideally, the *velocity* of the point.
In a physical context, we would expect the velocity to be the line tangent to the direction of motion at a given time `t`. Following from this, it would be tangent to the circle at angle `t`. Also, because the angular velocity is constant, the magnitude of the velocity should be constant as well.
r'(t) = < -sin(t), cos(t) >
|r'(t)|^2 = (-sin(t))^2 + cos(t)^2
|r'(t)|^2 = sin(t)^2 + cos(t)^2
|r'(t)|^2 = 1
|r'(t)| = 1
As expected, the velocity is constant, so the derivatives of sine and cosine are behaving as they should.
We can also think about what the **direction** of the velocity would be, as well, compared to the position vector.
I'm not sure if this is "cheating" by the bounds of the question, but by visualizing the graph we can see that the velocity, by nature of being tangent to the circle, must be perpendicular to the position vector.
If this is true, then position * velocity = 0 (dot product).
r(t) * r'(t) = 0
< cos(t), sin(t) > * < -sin(t), cos(t) > = 0
( cos(t) * -sin(t) ) + ( sin(t) * cos(t) ) = 0
-sin(t)cos(t) + sin(t)cos(t) = 0
0 = 0
Life is good. If we assume that the derivative of cos(t) is -sin(t) and that the derivative of sin(t) is cos(t), we find physical behavior **exactly** like expected: a constant velocity that is always perpendicular to the position vector.
We can take this further and look at the acceleration. In Physics, we would call this the restoring force. In a circle, what acceleration would have to exist in order to keep a point moving in a circle?
More specifically, in what direction would this acceleration have to be?
It takes little thought to arrive at the idea that the acceleration would have to be center-seeking, pointing towards the center. So, if we can find that the acceleration is **in the opposite direction** from the position vector, then we can be almost certain about the derivatives of sine and cosine. That is, their internal angle should be `pi`.
r(t) * r''(t) = |r(t)| * |r''(t)| * cos(pi)
r(t) * r''(t) = |r(t)| * |r''(t)| * -1
< cos(t), sin(t) > * < -cos(t), -sin(t) > = |<cos(t),sin(t)>| * |<-cos(t),-sin(t)>| * -1
-cos(t)^2 + -sin(t)^2 = 1 * 1 * -1
-1 * (cos(t)^2 + sin(t)^2) = -1
-1 * 1 = -1
-1 = -1 |
0^x = 0 and x^0 = 1
are both true when x > 0.
What happens when x = 0? It's undefined, because there is no way to choose one definition over the other.
Some people define 0^0 = 1 in their books, like Knuth, because 0^x is less 'useful' than x^0. |
We will call the set of all positive even numbers `E` and the set of all positive integers `N`.
At first glance, it seems obvious that `E` is smaller than `N`, because `E` is basically `N` with half of its terms taken out. The size of `E` is the size of `N` divided by two.
You could see this as: for every item in `E`, two items in `N` can be matched (the items x and x-1). This implies that `N` is **twice as large as `E`**.
On second glance though, it seems less obvious. Each item in `N` can be mapped to *one* item in `E` (the item x*2).
Which is larger, then? Or are they both equal in size? Why?
(My background in set theory is extremely scant) |
What is larger -- the set of all positive even numbers, or the set of all positive integers? |
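For reference, the pairing in the "second glance" is the whole story under the standard (Cantor) definition of "same size": two sets have equal cardinality exactly when some one-to-one, onto pairing exists between them, and n ↦ 2n is such a pairing.

```python
# The pairing n -> 2n matches every positive integer with exactly one
# positive even number, with nothing left over on either side:
pairs = [(n, 2 * n) for n in range(1, 11)]
print(pairs)   # [(1, 2), (2, 4), (3, 6), ..., (10, 20)]
```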
One of the first things ever taught in a differential calculus class:
- The derivative of sin(x) is cos(x)
- The derivative of cos(x) is -sin(x)
This leads to a rather neat (and convenient?) chain of derivatives:
<pre>
sin(x)
cos(x)
-sin(x)
-cos(x)
sin(x)
...
</pre>
An analysis of the shape of their graphs confirms *some* points; for example, when sin(x) is at a maximum, cos(x) is zero and moving downwards; when cos(x) is at a maximum, sin(x) is zero and moving upwards. But these "matching points" only work for multiples of pi/2.
Let us move back towards the original definition(s) of sine and cosine:
At the most basic level, sin(x) is defined as -- for a right triangle with internal angle x -- the length of the side opposite the angle divided by the length of the hypotenuse of the triangle.
To generalize this to the domain of all real numbers, sin(x) was then defined as the y coordinate of a point on the unit circle that is an angle x from the positive x axis.
The definition of cos(x) was then made the same way, but with adj/hyp and the x coordinate, as we all know.
Is there anything about this **basic** definition that allows someone to look at these definitions, alone, and think, "Hey, the derivative of the sine function with respect to angle is the cosine function!"
That is, from **the unit circle definition alone**. Or, even more amazingly, the **right triangle definition alone**. Ignoring graphical analysis of their plot.
In essence, I am asking: "Intuitively, *why* is the derivative of the sine the cosine?" |