I'm not sure if I get the question but I'll wager a guess. I'll do 1D. 1D walks are building binary strings, 010101, etc. Say we take six steps. Then 111111 is just as likely as 101010. However, how many of the possible sequences have six ones? 1. How many of the possible sequences have three ones and three zeros? Many more: $\binom{6}{3} = 20$. That number is called multiplicity, and it grows mighty fast. In the limit its log becomes Shannon entropy. Sequences are equally likely, but combinations are not. In the limit the combinations with maximum entropy are going to dominate all the rest. So the walk is going to have gone an equal number of right and left steps...almost surely.
There is a little triviality that has been referred to as the <a href="http://www.johndcook.com/blog/2010/01/13/soft-maximum">"soft maximum"</a> over on <a href="http://www.johndcook.com/blog/">John Cook's Blog</a> that I find to be fun, at the very least. The idea is this: given a list of values, say $x_1,x_2,\ldots,x_n$, the function $g(x_1,x_2,\ldots,x_n) = \log(\exp(x_1) + \exp(x_2) + \cdots + \exp(x_n))$ returns a value very near the maximum in the list. This happens because the exponentiation exaggerates the differences between the $x_i$ values. For the largest $x_i$, $\exp(x_i)$ will be *really* large. This largest exponential will significantly outweigh all of the others combined. Taking the logarithm, i.e. undoing the exponentiation, we essentially recover the largest of the $x_i$'s. (Of course, if two of the values are very near one another, we aren't guaranteed to get the true maximum, but it won't be far off!) About this, John Cook says: "The soft maximum approximates the hard maximum but it also rounds off the corners." This couldn't really be said any better. I recall trying to cleverly construct sequences for proofs in advanced calculus where not-everywhere-differentiable operations would have been great to use if they didn't have that pesky non-differentiable trait. I can't recall a specific instance where I was tempted to use $\max(x_i)$, but it seems at least plausible that it would have come up. Has anyone used this before, or have a scenario offhand where it would be useful?
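For anyone who wants to play with it, here is a minimal sketch in Python (the shift-by-the-maximum trick is the standard way to keep $\exp$ from overflowing; everything else is straight from the definition above):

```python
import math

def soft_max(xs):
    """Log-sum-exp: a smooth approximation to max(xs).

    Subtracting the largest element before exponentiating avoids
    overflow; the shift cancels exactly when we add it back.
    """
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

xs = [1.0, 2.0, 9.0, 9.5]
print(max(xs))       # 9.5
print(soft_max(xs))  # ~9.97: near the max, with the corners rounded off
```

The overestimate is at most $\log n$ (attained when all the $x_i$ are equal), which is one way to quantify the "rounded corners".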
How does one prove that a simple (steps of length 1 in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in 1 or 2 dimensions returns to the origin with probability 1? ***edit***: note that while returning to the origin is guaranteed (p = 1) in 1 and 2 dimensions, it is not guaranteed in higher dimensions; this means that something in a correct justification for the 1- or 2-d case must fail to extend to 3-d (or fail when the probability for each direction drops from 1/4 to 1/6).
**EDIT**: Given the update, this proof must be wrong. I wonder where the error is though? I can prove the 1 dimensional case a bit more formally than [Jonathan][1]. First we only look at the absolute values. Let us try to calculate the probability of this never exceeding x. The probability of 2x+1 consecutive moves all being the same is some p>0. If this ever occurs, then the absolute value will exceed x. Consider n groups of 2x+1 moves; the probability that at least one of these is all the same is 1-(1-p)^n, which approaches 1. So the probability of reaching each absolute value x is 1. Furthermore, after we have reached an absolute value x, the probability of reaching it again is 1. So we reach each absolute value an infinite number of times. Due to symmetry, we can expect to reach each x an infinite number of times. Similar reasoning will work in the 2d case. Instead of using symmetry, we simply note that each point with the same Manhattan distance from 0 has an expected value that is a non-zero proportion of the expected value of all points a particular Manhattan distance away. [1]: http://math.stackexchange.com/questions/536/proving-that-1-and-2-d-simple-symmetric-random-walks-return-to-the-origin-with-p/537#537
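Not a proof either way, but a quick Monte Carlo sketch (all parameters here are arbitrary) makes the dimension dependence visible: the empirical return fraction creeps toward 1 in 1D and 2D as you allow more steps, but stalls well below 1 in 3D:

```python
import random

def returns_to_origin(dim, steps):
    """One simple symmetric walk; does it revisit the origin within `steps`?"""
    pos = [0] * dim
    for _ in range(steps):
        axis = random.randrange(dim)          # pick an axis uniformly
        pos[axis] += random.choice((-1, 1))   # step +1 or -1 along it
        if not any(pos):
            return True
    return False

trials, steps = 1000, 10000
for dim in (1, 2, 3):
    hits = sum(returns_to_origin(dim, steps) for _ in range(trials))
    print(f"{dim}D: {hits / trials:.3f}")
```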
Polygons are, in this question, defined as non-unique if they are similar to another (by rotation, reflection, translation, or scaling). Would this answer be any different if similar but non-identical polygons were allowed? What if polygons only count as the same when one maps to the other by rotations/translations with rational coefficients? Would this answer be any different if we constrained the side lengths and internal angles of all polygons to rational numbers? Assume the number of sides is finite but unbounded, and greater than two.
Is the set of all unique (convex) polygons countable? If so, by what bijection to the natural numbers?
In layman's terms, as much as possible: What is the Riemann-Zeta function, and why does it come up so often with relation to prime numbers?
What is the Riemann-Zeta function?
Which books would you recommend about Recreational Mathematics?
What is an intuitive explanation of a Markov chain, and how does one work? Please provide at least one practical example.
What is a Markov Chain?
(This was asked due to the comments and downvotes on this [Stackoverflow](http://stackoverflow.com/questions/3236489/why-is-0-divided-by-0-an-error/3236541#3236541) answer. I am not that good at maths, so was wondering if I had made any basic mistakes) Ignoring limits, I would like to know if this is a valid explanation for why 0/0 is undefined:

> x = 0/0
> x * 0 = 0
> Hence there are an infinite number of values for x, as anything multiplied by 0 is 0.

However, it seems to have received comments with two general themes. One is that you lose the values of x by multiplying by 0. The other is that the last line is `x * 0 = 0/0 * 0`, as it involves a division by 0. Is there any merit to either argument? More to the point, are there any major flaws in my explanation, and is there a better way of showing why 0/0 is undefined?
Quasi-coherent sheaves on affine schemes (say $Spec(A)$) are obtained by taking an $A$-module $M$ and the associated sheaf (by localizing $M$). This gives an equivalence of categories between $A$-modules and q-c sheaves on $Spec(A)$. Let $R$ be a graded ring, $R = R_0 + R_1 + \dots$ (direct sum). Then we can, given a *graded* $R$-module $M$, consider its associated sheaf $\tilde{M}$. The stalk of this at a homogeneous prime ideal $P$ is defined to be the localization $M_{(P)}$, which is defined as generated by quotients $m/s$ for $s$ homogeneous of the same degree as $m$ and not in $P$. In short, we get sheaves of modules on the affine scheme just as we get the normal sheaves of rings. We get sheaves of modules on the projective scheme in the same homogeneous localization way as we get the sheaf of rings. However, it's no longer an equivalence of categories. Why? Say you had a graded module $M= M_0 + M_1 + \dots$ (in general, we allow negative gradings as well). Then it is easy to check that the sheaves associated to $M$ and $M' = M_1 + M_2 + \dots$ are exactly the same. Nevertheless, it is possible to get every sheaf on $Proj(R)$ for $R$ a graded ring in this way. See Proposition II.5.15 in Hartshorne.
Here is another attempt at an explanation. We know that the sum of the reciprocals of the positive integers, 1 + 1/2 + 1/3 + ..., diverges. Euler showed that the sum of the reciprocals of the squares, 1/(1<sup>2</sup>) + 1/(2<sup>2</sup>) + 1/(3<sup>2</sup>) + ..., has a finite sum, namely &pi;<sup>2</sup>/6. Mathematicians love to generalize things, so they thought of the function ![f(x) = sum for n=1 to infinity of (1/n^x)][1] which is defined for x > 1. But this was not enough: they decided that the variable could be a complex number and not a real one. There is a standard technique ([analytic continuation][2]) which allows us to extend the function to nearly all of the complex plane. So we now have a function which formally is ![zeta(s) = sum for n=1 to infinity of (1/n^s)][3] (the variable being s and not x to show that we are dealing with complex numbers) but is not computed in this way. Just to give an example, &zeta;(0) = -1/2, and the sum of infinitely many ones is certainly not -1/2 :-) It may be shown that &zeta;(s) = 0 for s = -2n (n a positive integer). But there are infinitely many other points s' = (x,y) where &zeta;(s') = 0. For all of these points, 0 &lt; x &lt; 1; Riemann's hypothesis says that for all such points x = 1/2. If it were true, we would have the best possible asymptotic expression for &pi;(n), that is, the number of primes below n. Why does the function pop up when we talk about primes? The deepest reason I can point to is Euler's product formula, valid for real s > 1: ![sum of 1/(n^s) = product over all primes of 1/(1-p^(-s))][4] Maybe this could be a good start. [1]: http://mathurl.com/35scyna.png [2]: http://mathworld.wolfram.com/AnalyticContinuation.html [3]: http://mathurl.com/5euwuy.png [4]: http://mathurl.com/34s664v.png
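To make Euler's identity concrete, here is a small numerical check at s = 2 (a sketch; the truncation points are arbitrary), comparing the partial Dirichlet sum with a partial Euler product, both creeping up on &pi;<sup>2</sup>/6 &asymp; 1.644934:

```python
from sympy import primerange

s = 2
partial_sum = sum(1 / n**s for n in range(1, 100001))
partial_product = 1.0
for p in primerange(2, 1000):
    partial_product *= 1 / (1 - p**-s)  # one factor per prime

print(partial_sum, partial_product)  # both ~1.64493
```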
Okay, so this question was bound to come up sooner or later- the hope was to ask it well before someone asked it badly... ###We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of sudoku for example is hidden in [latin squares][1]). Mathematicians and puzzles get on, it seems, rather well. ###But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wadeathon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify for answerhood- to do so it must - **Not be widely known:** If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw that *hilarious* scene in the film 21, where Kevin Spacey explains the Monty Hall paradox badly and want to share, don't do so here. Anyone found posting the liar/truth-teller riddle will be immediately disemvowelled. - **Be mathematical:** as much as possible- true: logic *is* mathematics, but puzzles beginning 'There is a street where everyone has a different coloured house...' are much of a muchness and tedious as hell. Note: there is a happy medium between this and trig substitutions. - **Not be too hard:** any level is cool but if the answer requires more than two sublemmas, you are misjudging your audience - **Actually have an answer:** crank questions will not be appreciated! You can post the answers/hints in [Rot-13][2] underneath as comments as on MO if you fancy. And should - **Ideally include where you found it:** so we can find more cool stuff like it - **Have that indefinable spark that makes a puzzle awesome:** a situation that seems familiar, requiring unfamiliar thought... For ease of voting- one puzzle per post is bestest. ###Some examples to set the ball rolling > Simplify $\sqrt{2+\sqrt{3}}$ **From:** problem solving magazine **Hint:** GEL N GJB GREZ FBYHGVBA > Can one make an equilateral triangle with all vertices at integer coordinates? **From:** Durham distance maths challenge 2010 **Hint:** GUVF VF RDHVINYRAG GB GUR ENGVBANY PNFR > $n\times n$ [Magic squares][3] form a vector space over $\mathbb{R}$. Prove this, and by way of a linear transformation, derive the dimension of this vector space. **From:** Me, I made this up (you can tell, can't you!) **Hint:** NCCYL GUR ENAX AHYYVGL GURBERZ Happy puzzling! [1]: http://en.wikipedia.org/wiki/Latin_squares [2]: http://en.wikipedia.org/wiki/Rot_13 [3]: http://en.wikipedia.org/wiki/Magic_squares
Ultrafinitism is basically resource-bounded constructivism: proofs have constructive content, and what you get out of these constructions isn't much more than you put in. Looking at the universal and existential quantifier should help clarify things. Constructively, a universally quantified sentence means that if I am given a parameter, I can construct something that satisfies the quantified predicate. Ultrafinitistically, the thing you give back won't be much bigger: typically there will be a polynomial bound on the size of what you get back. For existentially quantified statements, the constructive content is a pair of the value of the parameter, and the construction that satisfies the predicate. Here the resource is the size of the proof: the size of the parameter and construction will be related to the size of the proof of the existential. Typically, addition and multiplication are total functions, but exponentiation is not. [Self-verifying theories][1] are more extreme: addition is total in the strongest of these theories, but multiplication cannot be. So the resource bound is linear for these theories, not polynomial. A foundational problem with ultrafinitism is that there aren't nice ultrafinitist logics that support an attractive formulae-as-types correspondence in the way that intuitionistic logic does. This makes ultrafinitism a less comfy kind of constructivism than intuitionism. **Why do people believe it?** For the same kinds of reasons people believe in constructivism: they want mathematical claims to be backed up by something they can regard as concrete. Just as an intuitionist might be bothered by the idea of cutting a ball into 10^50 pieces and putting it back together into two balls, so too an ultrafinitist might be concerned about the idea that towers of exponentials are meaningful ways of constructing numbers. Wittgenstein argued this point in his "Lectures on the Foundations of Mathematics". **Can you actually get math done from that perspective?** Yes. If intuitionism is the mathematics of the computable, ultrafinitism is the mathematics of the feasibly computable. But the difference in ease of working between ultrafinitism and intuitionism is much bigger than that between intuitionism and classical mathematics. [1]: http://en.wikipedia.org/wiki/Self-verifying_theories
unknown source: Could the plane be colored with two different colors (say, red and blue) so that there is no equilateral triangle whose vertices are all of the same color?
What is the optimum angle of projection when throwing a stone off a cliff?
What does it mean (in a mathematically rigorous way) to claim something is "generic?" How does this coincide with the Zariski topology?
Given a curve, how do you intuitively construct the picture of its projective dual? I know points map to lines and lines map to points, but for something like the swallowtail this is not really obvious.
According to Wikipedia, Gödel's incompleteness theorem states: > No consistent system of axioms whose > theorems can be listed by an > "effective procedure" (essentially, a > computer program) is capable of > proving all facts about the natural > numbers. This obviously includes our current system. So has anyone proposed any additional axioms that seem credible?
Has anyone ever proposed additional axioms?
You are standing on a cliff at a height *h* above the sea. You are capable of throwing a stone with velocity *v* at any angle *a* between horizontal and vertical. What is the value of *a* when the horizontal distance travelled *d* is at a maximum? On level ground, when *h* is zero, it's easy to show that *a* needs to be midway between horizontal and vertical, and thus π/4 or 45°. As *h* increases, however, we can see by heuristic reasoning that *a* decreases to zero, because you can put more of the velocity into the horizontal component as the height of the cliff begins to make up for the loss in the vertical component. For small negative values of *h* (throwing up onto a platform), *a* will actually be greater than 45°. Is there a fully-solved, closed-form expression for the value of *a* when *h* is not zero?
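For what it's worth, I believe a closed form does exist: the optimum satisfies $\tan a = v/\sqrt{v^2 + 2gh}$, which reduces to 45° at $h = 0$, decreases as $h$ grows, and exceeds 45° for small negative $h$, matching the heuristics above. A brute-force check against that formula, with all numbers picked arbitrarily:

```python
import math

V, H, G = 10.0, 20.0, 9.81  # throw speed, cliff height, gravity (arbitrary)

def distance(a):
    """Horizontal distance travelled when throwing at angle a (radians)."""
    vx, vy = V * math.cos(a), V * math.sin(a)
    t = (vy + math.sqrt(vy**2 + 2 * G * H)) / G  # flight time down to the sea
    return vx * t

grid = [i * (math.pi / 2) / 100000 for i in range(100001)]
best = max(grid, key=distance)
closed_form = math.atan(V / math.sqrt(V**2 + 2 * G * H))
print(math.degrees(best), math.degrees(closed_form))  # agree to ~0.001 degrees
```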
Most of us know that, being deterministic, computers cannot generate true [random numbers](http://en.wikipedia.org/wiki/Pseudorandom_number_generator). However, let's say you have a box which generates truly random binary numbers, but is biased: it's more likely to generate either a `1` or a `0`, but you don't know the exact probabilities, or even which is more likely *(both probabilities are > 0 and sum to 1, obviously)* Can you use this box to create an unbiased random generator of binary numbers?
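Yes — I believe the classic trick is due to von Neumann: query the box twice, output on `01`/`10` (which are equally likely, each with probability $p(1-p)$), and discard `00`/`11`. A sketch, with the bias $p$ made up purely for testing:

```python
import random

def biased_bit(p=0.7):
    """Stand-in for the box: 1 with unknown probability p, else 0."""
    return 1 if random.random() < p else 0

def unbiased_bit():
    """Von Neumann extractor: 01 and 10 each occur with probability
    p*(1-p), so keeping only unequal pairs removes the bias."""
    while True:
        a, b = biased_bit(), biased_bit()
        if a != b:
            return a

sample = [unbiased_bit() for _ in range(100000)]
print(sum(sample) / len(sample))  # ~0.5 no matter what p is
```

The price is throughput: a pair is kept with probability $2p(1-p)$, so the more biased the box, the more flips you burn per output bit.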
Assuming you have unlimited time and cash, is there a strategy that's guaranteed to win at roulette?
**Fork in the road 1** You're on a path on an island and come to a fork in the road. Both paths lead to villages of natives; the entire village either always tells the truth or always lies *(both villages could be truth-telling or lying villages, or one of each)*. There are two natives at the fork - they could both be from the same village, or from different villages *(so both could be truth-tellers, both liars, or one of each)*. One path leads to safety, the other to doom. You're allowed to ask only one question to each native to figure out which path is which. What do you ask?
**Fork in the road 2** You're once again at a fork in the road, and again, one path leads to safety, the other to doom. There are three natives at the fork. One is from a village of truth-tellers, one from a village of liars, one from a village of random answerers. Of course you don't know which is which. Moreover, the natives answer "pish" and "posh" for yes and no, but you don't know which means "yes" and which means "no." You're allowed to ask only two yes-or-no questions, each question being directed at one native. What do you ask?
Adding additional axioms would make more truths provable. But it wouldn't make all truths provable (unless the new axiom were inconsistent with the already given ones, in which case all falsehoods would be provable too). So adding additional axioms isn't going to help make all the truths provable. I guess you could just add, say, Goldbach's conjecture and the Riemann hypothesis as extra axioms and carry on doing mathematics-PLUS! but why would you want to?
What functions can be represented as power series?
How do we know if a particular function can be represented as a power series? And once we have come up with a power series representation, how does one figure out its radius of convergence? Edit: In particular, I was thinking of Taylor series representations here.
When I look at the Taylor series for $e^x$ and the volume formula for oriented simplexes, it makes $e^x$ look like it is, at least almost, the sum of simplex volumes for $n$ from zero to infinity. Does anyone know of a stronger relationship beyond, "they sort of look similar"? Here are some links: Volume formula http://en.wikipedia.org/wiki/Simplex#Geometric_properties Taylor series http://en.wikipedia.org/wiki/E_%28mathematical_constant%29#Complex_numbers
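If it helps sharpen the question: the "corner" simplex $\{0 \le t_1 \le t_2 \le \cdots \le t_n \le x\}$ has volume $x^n/n!$ (it is one of the $n!$ congruent pieces into which the orderings of the coordinates cut the cube $[0,x]^n$), so at least formally

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = \sum_{n=0}^{\infty} \operatorname{vol}\bigl(\{0 \le t_1 \le \cdots \le t_n \le x\}\bigr),$$

i.e. $e^x$ really is a sum of simplex volumes, not just something that looks like one.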
Is there a relationship between e and the sum of n-simplexes volumes?
It is a theorem in elementary number theory that if $p$ is a prime and congruent to 1 mod 4, then it is the sum of two squares. Apparently there is a trick involving arithmetic in the Gaussian integers that lets you prove this quickly. Can anyone explain it?
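Not the Gaussian-integer argument itself, but if a sanity test of the statement is useful, here is a brute-force check for small odd primes (everything naive on purpose):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def sum_of_two_squares(p):
    """Return (a, b) with a*a + b*b == p, or None if no such pair exists."""
    for a in range(int(p**0.5) + 1):
        b = round((p - a * a) ** 0.5)
        if a * a + b * b == p:
            return (a, b)
    return None

for p in range(3, 500, 2):
    if is_prime(p):
        assert (sum_of_two_squares(p) is not None) == (p % 4 == 1), p
print("verified for odd primes below 500")
```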
How do you prove that a prime is the sum of two squares iff it is congruent to 1 mod 4?
What is the most efficient way to determine if a matrix is invertible?
I'm learning Linear Algebra using MIT's Open Courseware [Course 18.06][1] Quite often, the professor says "... assuming that the matrix is invertible ...". Somewhere in the lecture he says that using a determinant on an $n \times n$ matrix is on the order of $O(n!)$ operations, where an operation is a multiplication and a subtraction. **Is there a more efficient way?** If the aim is to get the inverse, rather than just determine the invertibility, what is the most efficient way to do this? [1]: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2005/
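To make the contrast concrete: the $O(n!)$ figure is for cofactor expansion, while Gaussian elimination (LU factorization) gives both the determinant and the inverse in $O(n^3)$, which is what numerical libraries actually do. A sketch with numpy (the matrix and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))

sign, logdet = np.linalg.slogdet(A)  # LU-based, O(n^3); log form avoids overflow
A_inv = np.linalg.inv(A)             # also O(n^3); raises LinAlgError if singular

# In floating point, invertibility is a matter of degree: the condition
# number estimates how much accuracy you lose solving systems with A.
print(sign != 0, np.linalg.cond(A))
```

If you only need to *solve* $Ax = b$, `np.linalg.solve(A, b)` is cheaper and more accurate than forming the inverse explicitly.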
Given n distinct objects, there are n! permutations of the objects and n!/n "circular permutations" of the objects (orientation of the circle matters, but there is no starting point, so 1234 and 2341 are the same, but 4321 is different). Given n objects of k types (where the objects within each type are indistinguishable), $r_i$ of the ith type, there are $\frac{n!}{r_1!r_2!\cdots r_k!}$ permutations. How many circular permutations are there of such a set?
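For reference, I believe the standard Burnside-style answer is $\frac{1}{n}\sum_{d \mid \gcd(r_1,\ldots,r_k)} \varphi(d)\,\frac{(n/d)!}{(r_1/d)!\cdots(r_k/d)!}$ (only rotations whose order divides every $r_i$ can fix an arrangement). A sketch that checks this formula against brute force on small cases:

```python
from functools import reduce
from itertools import permutations
from math import factorial, gcd

def phi(d):
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def necklaces(counts):
    """Circular permutations (rotations identified) of a multiset."""
    n, g = sum(counts), reduce(gcd, counts)
    total = 0
    for d in (d for d in range(1, g + 1) if g % d == 0):
        m = factorial(n // d)
        for r in counts:
            m //= factorial(r // d)
        total += phi(d) * m
    return total // n

def brute(counts):
    word = "".join(chr(65 + i) * r for i, r in enumerate(counts))
    n = len(word)
    reps = {min("".join(p[i:] + p[:i]) for i in range(n))  # canonical rotation
            for p in set(permutations(word))}
    return len(reps)

for counts in ([2, 2], [3, 3], [3, 3, 2], [4, 2, 1]):
    print(counts, necklaces(counts), brute(counts))
```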
From a standard 52-card deck, how many ways are there to pick a hand of $k$ cards that includes one card from all four suits? I know that for any specific $k$, it's possible to break it up into cases based on the partitions of $k$ into four parts. For example, if I want to choose a hand of six cards, I can break it up into two cases based on whether there are (1) three cards from one suit and one card from each of the other three or (2) two cards from each of two suits and one card from each of the other two. Is there a simpler, more general solution that doesn't require splitting the problem into many different cases?
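Yes — inclusion-exclusion over which suits are *missing* collapses all the cases into one sum: $\sum_{j=0}^{4}(-1)^{j}\binom{4}{j}\binom{52-13j}{k}$. A quick check of the $k=4$ case, where the answer must be $13^4$:

```python
from math import comb

def hands_with_all_suits(k):
    """k-card hands from a 52-card deck that include all four suits,
    counted by inclusion-exclusion over the set of missing suits."""
    return sum((-1)**j * comb(4, j) * comb(52 - 13 * j, k) for j in range(5))

print(hands_with_all_suits(4), 13**4)  # 28561 28561
print(hands_with_all_suits(6))         # should equal the two-case count
```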
What transformations of the plane are geometrically constructible (compass & straight edge)?
For a long time I've eschewed bulky and inelegant calculators for the use of my trusty trig/log-log slide rule. For those unfamiliar, here is a simple [slide rule simulator using Javascript](http://www.antiquark.com/sliderule/sim/virtual-slide-rule.html). To demonstrate, find the LL_3 scale, which is on the back of the virtual one. Let's say we want to solve `3^n`. First, you would move the cursor (the red line) over where `3` is on the LL_3 scale. Then, you would *slide* the middle slider until the `1` on the C scale is lined up with the cursor. And voila, your slide rule is set up to find `3^n` for any arbitrary `n`! For example, to find `3^2`, move the cursor to `2` on the C scale, and your answer is what the cursor is on on the LL_3 scale (`9`). Move your cursor to `3` on C, and it should be lined up with `27` on LL_3. To `4` on C, it is on `81` on LL_3. You can even do this for non-integer exponents (1.3, etc.). You can also do this for **exponents less than one**, by using the LL_2 scale. For example, to do `3^0.5`, you would find `5` on the C scale, and look at where the cursor is lined up on the LL_2 scale (which is about `1.732`). Anyways, I was wondering if anyone could explain to me how this all works? It works, but...why? What property of logarithms and exponents (and logarithms of logarithms?) allows this to work? I already understand how the basics of the slide rule work (`ln(m) + ln(n) = ln(mn)`) for plain multiplication, but this exponentiation eludes me.
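My understanding (hedged, since I'm reconstructing the scale layout): an LL scale places $x$ at position $\log_{10}(\ln x)$, while C places $n$ at $\log_{10} n$. Sliding physically *adds* positions, and $\log(\ln x) + \log n = \log(n\ln x) = \log(\ln(x^n))$, so the cursor lands on $x^n$. The LL_2 scale is the same idea for $0 < \ln x < 1$, where $\log_{10}(\ln x)$ is negative — which is why it handles exponents below one. A numeric sketch of that bookkeeping:

```python
import math

def ll_position(x):
    """Where x sits on a log-log scale: log10 of ln(x)."""
    return math.log10(math.log(x))

def c_position(n):
    """Where n sits on the C scale: log10(n)."""
    return math.log10(n)

# Cursor over 3 on LL, C's index slid to the cursor, then read under n:
for n in (2, 3, 4, 1.3, 0.5):
    assert abs(ll_position(3) + c_position(n) - ll_position(3**n)) < 1e-9
print("position(3) + position(n) = position(3^n) for every n tried")
```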
Why does the log-log scale on my Slide Rule work?
Grigory has already answered your particular question. However, I wanted to point out that your question "How do you prove that a group specified by a presentation is infinite?" has no good answer *in general*. Indeed, in general the question of whether a group presentation defines *the trivial group* is undecidable.
Congruence transformations (isometries) and similarity transformations (isometries + dilations) should be constructible. What about other affine transformations? Other conformal mappings? ***edit***: by constructible, I mean given the defining information for the transformation in a geometric way (e.g. a dilation requires a center and a ratio, so the given could be a point and two segments), can you construct the image of a point under the transformation from its preimage?
Assuming I can play forever, what are my chances of coming out ahead in a coin flipping series? Let's say I want "heads"...then if I flip once, and get heads, then I win, because I've reached a point where I have more heads than tails (1-0). If it was tails, I can flip again. If I'm lucky, and I get two heads in a row after this, this is another way for me to win (2-1). Obviously, if I can play forever, my chances are probably pretty decent. They are at least greater than 50%, since I can get that from the first flip. After that, though, it starts getting sticky. I've drawn a tree graph to try to get to the point where I could start to see the formula hopefully dropping out, but so far it's eluding me. Your chances of coming out ahead after 1 flip are 50%. Fine. Assuming you don't win, you have to flip at least twice more. This step gives you 1 chance out of 4. The next level would be after 5 flips, where you have an additional 2 chances out of 12, followed by 7 flips, giving you 4 out of 40. I suspect I may be able to work through this given some time, but I'd like to see what other people think...is there an easy way to approach this? Is this a known problem?
Given enough time, what are the chances I can come out ahead in a coin toss contest?
If you're after Olympiad-level books, get [The IMO Compendium][1] which is a collection of problems from the International Math Olympiad, 1959-2004. You can find similar books with national Olympiad problems by going to Amazon and searching for "mathematical Olympiad". Two books that offer collections of techniques useful for olympiad-level contests are Paul Zeitz's [The Art and Craft of Problem Solving][2] and Arthur Engel's [Problem Solving Strategies][3]. There are lots of other books with similar titles and descriptions. Just follow Google Books's suggestions. [1]: http://books.google.com/books?id=rVvr2p76cKQC&printsec=frontcover&dq=mathematical+olympiad&hl=en&ei=tiJKTKnAIcL-8AbAgs0z&sa=X&oi=book_result&ct=result&resnum=4&ved=0CEAQ6AEwAw#v=onepage&q=mathematical%20olympiad&f=false [2]: http://books.google.com/books?id=Go_iAAAACAAJ&dq=paul+zeitz&hl=en&ei=0SNKTO_MFIL48AaZ8PE3&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCUQ6AEwAA [3]: http://books.google.com/books?id=B3EYPeKViAwC&printsec=frontcover&dq=paul%20zeitz&source=gbs_slider_thumb#v=onepage&q&f=false
Are there any results from algebraic geometry that have led to an interesting "real world" application?
What are some applications outside of mathematics for algebraic geometry?
How to determine annual payments on a partially repaid loan?
Broadly speaking, algebraic geometry is used a lot in some areas of robotics and mechanical engineering. Real algebraic geometry, for example, is important to the development of CAD systems (think NURBS, computing intersections of primitives, etc.) And AG comes up in robotics when it is important to figure out, say, what motions a robotic arm in a given configuration is capable of, or to construct some kind of linkage that draws a prescribed curve. Something specific in that vein: Kempe's Universality Theorem gives that any bounded algebraic curve in $\mathbb{R}^2$ is the locus of some linkage. The "locus of a linkage" being the path drawn out by all the vertices of a graph, where the edge lengths are all specified and one or more vertices remains still. Interestingly, Kempe's original proof of the theorem was flawed, and more recent proofs have been more involved. However, <a href="http://erikdemaine.org/theses/tabbott.pdf">Timothy Abbott's MIT masters thesis</a> gives a simpler proof that gives a working linkage for a given curve, and makes for interesting reading concerning the problem in general. Edit: The NURBS connection is, in part, that one can construct a B-spline that approximates a given real algebraic curve, which is crucial in displaying intersection curves, for example. See <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.3585">here</a> for more details (I'm afraid I don't know much more about this.)
Suppose I have a loan of M dollars. At the end of each year, I am charged interest at rate R and make a repayment of P. The loan is repaid after n years. 1. How long (n) does it take to repay the loan if I am given the other variables? 2. How large are the repayments P if I am given the other variables? 3. Suppose that the payments were at the start of the year. How would this change the problem?
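Assuming end-of-year payments and compound interest (my reading of the setup), the balance after $t$ years is $M(1+R)^t - P\frac{(1+R)^t - 1}{R}$; setting it to zero and solving gives closed forms for both questions, and payments at the start of each year simply divide the required payment by $(1+R)$. A sketch:

```python
from math import log

def payment(M, R, n):
    """Level end-of-year payment that repays loan M at rate R in n years."""
    return M * R / (1 - (1 + R) ** -n)

def years(M, R, P):
    """Years to repay M at rate R paying P per year (requires P > M*R)."""
    return -log(1 - M * R / P) / log(1 + R)

M, R = 100_000, 0.05
P = payment(M, R, 20)
print(P, years(M, R, P))   # round trip: n comes back as 20.0
print(P / (1 + R))         # equivalent payment if paid at the start of each year
```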
*The Problem:* Two trains travel on the same track towards each other, each going at a speed of 50 kph. They start out 50km apart. A fly starts at the front of one train and flies at 75 kph to the front of the other; when it gets there, it turns around and flies back towards the first. It continues flying back and forth til the two trains meet and it gets squashed (the least of our worries, perhaps). How far did the fly travel before it got squashed? *Attempt at a solution:* I can do this by summing the infinite series of the fly's distance for each leg. I get an answer of 37.5 km: but that's so nice! There must be a more intuitive way...is there?
Representation theory is a subject I want to like (it can be fun finding the representations of a group), but it's hard for me to see it as a subject that arises naturally or why it is important. I can think of two mathematical reasons for studying it: 1) The character table of a group packs a lot of information about the group into a concise form. 2) It is practically/computationally nice to have explicit matrices that model a group. But there are surely deeper things that I am missing. I can understand why one would want to study group actions (the axioms for a group beg you to think of elements as operators), but why look at group actions on vector spaces? Is it because linear algebra is so easy/well-known (when compared to just modules, say)? I am also told that representation theory is important in quantum mechanics. For example, physics should be SO(3)-invariant, and when we represent this on a Hilbert space of wave-functions, we are led to information about angular momentum. But this seems to only trivially invoke representation theory, since we already start with a subgroup of GL(n) and then extend it to act on wave functions by $\psi(x,t) \mapsto \psi(Ax,t)$ for A in SO(n). This Wikipedia article, http://en.wikipedia.org/wiki/Particle_physics_and_representation_theory, claims that if our physical system has G as a symmetry group, then there is a correspondence between particles and representations of G. I'm not sure if I understand this correspondence, since it seems to be saying that if we act on a state corresponding to some particle by an element of G, then this new state also corresponds to the same particle. So a particle is an orbit of the G-action? Anyone know of good sources that talk about this?
Why is the volume of a cone one third of the volume of a cylinder?
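One standard way to see it (a calculus sketch, not the only route): slice a cone of base radius $r$ and height $h$ into thin disks. At distance $z$ from the apex the cross-section has radius $rz/h$, so

$$V = \int_0^h \pi\left(\frac{rz}{h}\right)^2 dz = \frac{\pi r^2}{h^2}\cdot\frac{h^3}{3} = \frac{1}{3}\pi r^2 h,$$

one third of the circumscribing cylinder's $\pi r^2 h$. The $\frac{1}{3}$ is exactly the $\int_0^1 u^2\,du$ hiding in the slicing, and the same computation works for a pyramid over any base.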
In my AP chem class, I often have to balance chemical equations like the following: Al + O<sub>2</sub> ---> Al<sub>2</sub>O<sub>3</sub> The goal is to make both sides of the arrow have the same number of each kind of atom by putting coefficients in front of the compounds on each side. A solution is 4Al + 3O<sub>2</sub> ---> 2Al<sub>2</sub>O<sub>3</sub> When the subscripts become really large, or there are a lot of atoms involved, trial and error is impossible unless performed by a computer. What if some chemical equation cannot be balanced (do such equations exist)? I tried one for a long time only to realize the problem was wrong. My teacher said trial and error is the only way. Are there other methods?
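One alternative: balancing is exactly solving a homogeneous linear system — one equation per element, one unknown coefficient per compound — so a null-space computation replaces trial and error. A sketch with sympy for the aluminum example (rows count Al and O; products are entered with negative signs):

```python
from sympy import Matrix, lcm

# Columns: Al, O2, Al2O3 (products negated). Rows: Al balance, O balance.
M = Matrix([[1, 0, -2],
            [0, 2, -3]])

v = M.nullspace()[0]                       # a rational solution, here (2, 3/2, 1)
coeffs = v * lcm([term.q for term in v])   # clear denominators
print(list(coeffs))                        # [4, 3, 2]: 4Al + 3O2 -> 2Al2O3
```

This also answers the existence question: an equation cannot be balanced precisely when the only solution is the zero vector, i.e. when the null space is trivial.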
Balance chemical equations without trial and error?
When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle. Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$. Is this just a coincidence, or is there some deep explanation for why we should expect this?
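One standard picture (the "onion" argument): growing $r$ by $\Delta r$ adds a thin shell whose size is boundary measure times thickness, so

$$A(r+\Delta r) - A(r) \approx 2\pi r\,\Delta r \quad\Longrightarrow\quad A'(r) = 2\pi r,$$

and identically $V'(r) = 4\pi r^2$ for the ball. A caveat showing the pattern is not automatic: parametrize a square by its side $s$ and $\frac{d}{ds}(s^2) = 2s$ is only half the perimeter; parametrize it by the half-side $r = s/2$ and $\frac{d}{dr}(4r^2) = 8r$ is the perimeter again. The derivative equals the boundary measure exactly when the parameter measures the distance from the center to the boundary.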
Why is the derivative of a circle's area its perimeter (and similarly for spheres)?
1. The representation theory of finite groups can be used to prove results about finite groups themselves that are otherwise much harder to prove by "elementary" means. For instance, the proof of [Burnside's theorem][1] (that a group of order $p^a q^b$ is solvable). A lot of the classification proof of finite simple groups relies on representation theory (or so I'm told, I haven't read the proof...). 2. Mathematical physics. Lie algebras and Lie groups definitely come up here, but I'm not familiar enough to explain anything. In addition, the classification of complex simple Lie algebras relies on the root space decomposition, which is a significant (and nontrivial) fact about the representation theory of semisimple Lie algebras. 3. Number theory. The nonabelian version of L-functions (Artin L-functions) rely on the representations of the Galois group (in the abelian case, these just correspond to sums of 1-dimensional characters). For instance, the proof that Artin L-functions are meromorphic in the whole plane relies on (I think) Artin's theorem that any irreducible character is a rational combination of induced characters from cyclic subgroups -- this is in Serre's _Linear Representations of Finite Groups._ Much more importantly, the Langlands program studies representations of groups $GL_n(\mathbb{A}_K)$ for $\mathbb{A}_K$ the adele ring of a global field. This is a generalization of standard "abelian" class field theory (when $n=1$ and one is determining the character group of the ideles). 4. Combinatorics. The representation theory of the symmetric group has a lot of connections to combinatorics, because you can parametrize the irreducibles explicitly (via Young diagrams), and this leads to the problem of determining how these Young diagrams interact. For instance, what does the tensor product of two Young diagrams look like when decomposed as a sum of Young diagrams? What is the dimension of the irreducible representation associated to a Young diagram? These problems have a combinatorial flavor. I should add the disclaimer that I have not formally studied representation theory, and these are likely to be an unrepresentative sample of topics (some of which I have only vaguely heard about). [1]: http://en.wikipedia.org/wiki/Burnside's_theorem
> A 20-year loan of $1000 is repaid with payments at the end of each year. The lender charges interest at an annual effective rate of 10%. Each of the first ten payments is 150% of the amount of interest due. Each of the last ten payments is X. Calculate X.

I came across this practice question while studying for my actuary exam. I tried it on my own and got stuck: $1000 will earn $100 interest each year, so each of the first 10 payments must be $150. Then after 10 years, a total of $1500 has been repaid. In 10 years, I can find the accumulated debt by saying `PV`=1000, `I/Y`=10, `N`=10, giving me `FV`=$2593.74. So the balance would be $2593.74-$1500=$1093.74. Now I am stuck! How do I find out what the last payments should be? I know I can't just divide $1093.74/10=$109.374, because the lender is still charging interest while the borrower pays off this remaining debt, so there would still be the interest left over. This situation isn't mentioned anywhere in my calculator manual! Can one of you give me some explanation about what is going on so that I can do it by hand?

----------

I tried working on it some more, and came up with a really great idea! Since the payments are 1.5x the 10% interest, it's just like paying off 5% of the principal each year! This saves me a lot of time, because I can just set `I/Y=-5` and get `FV=598.74` on my calculator. I did it the long way by calculating the future value of each interest payment (turns out they were not all $150, because the outstanding principal got smaller), and they were the same. Is this always going to work, or did I just get lucky here?

----------

Another update! I think I solved it. All I needed to do was to set `FV=0`, `I/Y=10`, `N=10`, `PV=598.74`, and then I got `PMT=97.44`. I never used the `PMT` button before, though, so is there some other way I can check the answer is right?
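Not a calculator trick, but one more way to check `PMT=97.44`: simulate the amortization directly, year by year (a Python sketch of the same arithmetic as above):

```python
loan, rate = 1000.0, 0.10

# First ten years: each payment is 150% of the interest due,
# so the principal shrinks by 5% per year.
balance = loan
for _ in range(10):
    balance *= 1 - 0.5 * rate      # pay the interest plus 5% of principal
print(round(balance, 2))           # 598.74

# Last ten years: the level payment X that clears the balance.
X = balance * rate / (1 - (1 + rate) ** -10)
print(round(X, 2))                 # 97.44

# Sanity check: apply the ten payments and confirm we land on zero.
for _ in range(10):
    balance = balance * (1 + rate) - X
print(round(balance, 6))           # 0.0
```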
The 24 game is as follows. Four numbers are drawn; the player's objective is to make 24 from the four numbers using the four basic arithmetic operations (in any order) and parentheses however one pleases. Consider the following generalization. Given $n+1$ numbers, determine whether the last one can be obtained from the first $n$ using elementary arithmetical operations as above. This problem admits succinct certificates, so it is in $NP$. Is it $NP$-complete?
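For small $n$ the certificate can simply be found by exhaustive search, which also makes the exponential blow-up easy to feel. A sketch using exact rational arithmetic (so solutions like $8/(3-8/3)$ don't fall to rounding error):

```python
from fractions import Fraction
from itertools import permutations

def reachable(nums):
    """Every value obtainable from nums via +, -, *, / and parentheses."""
    if len(nums) == 1:
        return {nums[0]}
    out = set()
    for i, j in permutations(range(len(nums)), 2):   # ordered: - and / care
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        x, y = nums[i], nums[j]
        for v in (x + y, x - y, x * y) + ((x / y,) if y else ()):
            out |= reachable(rest + [v])
    return out

nums = [Fraction(n) for n in (3, 3, 8, 8)]
print(Fraction(24) in reachable(nums))  # True: 8 / (3 - 8/3) = 24
```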
Is the 24 game NP-complete?
I am working on computing phase diagrams for alloys. These are blueprints for a material that show what phase, or combination of phases, a material will exist in for a range of concentrations and temperatures (see <a href="http://web.cos.gmu.edu/~tstephe3/talks/SIAMMaterialsScience2010.pdf">this pdf presentation</a>). The crucial step in drawing the boundaries that separate one phase from another on these diagrams involves minimizing a free energy function subject to basic physical conservation constraints. I am going to leave out the chemistry/physics and hope that we can move forward with the minimization using Lagrange multipliers. The free energy that is to be minimized is this: \begin{equation} \widetilde{G}(x_1, x_2) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2), \end{equation} subject to: \begin{align} &f^{(1)}x_1 + f^{(2)}x_2 = c_1, \nonumber \\ &x_1 + x_2 = 1, \nonumber\\ &f^{(1)} + f^{(2)} = 1 \nonumber \end{align} (and also that the $x_{i} > 0$ and $f^{(i)} > 0$, for $i=1,2$). The Lagrange formulation is: \begin{align} L(x_1,x_2,f^{(1)},f^{(2)},\lambda_1, \lambda_2, \lambda_3) =& f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2) \\ &- \lambda_{1}(f^{(1)}x_1 + f^{(2)}x_2 - c_1) \nonumber \\ & - \lambda_{2}(x_1 + x_2 - 1) \nonumber \\ &- \lambda_{3}(f^{(1)} + f^{(2)} - 1) \nonumber \end{align} The minimization of $\widetilde{G}$ follows from finding the $x_{i}$'s that satisfy $\nabla L = 0$: \begin{align} &\frac{\partial L}{\partial x_{1}} = f^{(1)}G'(x_1) - \lambda_{1}f^{(1)} = 0 &&\Rightarrow\quad (*)\ f^{(1)}\left[G'(x_1) - \lambda_1 \right] = 0, \nonumber\\ &\frac{\partial L}{\partial x_2} = f^{(2)}G'(x_2) - \lambda_{1}f^{(2)} = 0 &&\Rightarrow\quad (**)\ f^{(2)}\left[G'(x_2) - \lambda_1 \right] = 0, \nonumber\\ &\frac{\partial L}{\partial f^{(1)}} = G(x_1) - \lambda_{1}x_{1} - \lambda_3 = 0 &&\Rightarrow\quad (***)\ G(x_1) - G(x_2) = \lambda_1 \left[ x_1 - x_2\right], \nonumber\\ &\frac{\partial L}{\partial f^{(2)}} = G(x_2) - \lambda_{1}x_{2} - \lambda_3 = 0. \nonumber \end{align} Because $f^{(1)}$ and $f^{(2)}$ are not to be zero, from (*) and (**) we have that \begin{equation} G'(x_1) = G'(x_2) = \lambda_{1}. \end{equation} And a manipulation of equation (***) gives \begin{equation} \frac{G(x_1) - G(x_2)}{x_1 - x_2} = \lambda_{1}. \end{equation} Now, think of $G$ as an even-degree polynomial (which it isn't, but its graph sometimes resembles one) in the plane. Let the points $x_1$ and $x_2$ be locations along the x-axis that lie roughly below the minima of this curve. Together, these two conditions say that the line drawn between $(x_1,G(x_1))$ and $(x_2,G(x_2))$ forms a common tangent to the "wells" of the curve. It is these points $x_1$ and $x_2$, which represent concentrations of pure components in our alloy, that become mapped onto a phase diagram. It is essentially by repeating this procedure for many temperatures that we can trace out the boundaries in the desired phase diagram. **The question is:** Looking at this from a purely analytic geometry perspective, **how would one derive the "variational" approach to find a common tangent line that we seem to have found using the above Lagrangian?** (warning: I don't really know how to model things using variational methods.) **And, secondly:** I have presented a model of a binary alloy, meaning two variables to keep track of representing concentrations. 
I have been working on ternary alloys, where this free energy $\widetilde{G}$ is a function of three variables (two independent: $x_1,x_2,x_3$, where $x_3 = 1- x_1 - x_2$) and is therefore a surface over a Gibbs triangle. Then $\nabla L = 0$ produces partial derivatives that no longer "speak geometry" to me, although the solution is a common tangent plane. (I have attempted to characterize a common tangent plane based purely in analytic geometry - completely disregarding the Lagrangian - and have come up with several relations between directional derivatives... **How might directional derivatives relate to the optimality conditions set forth by the Lagrangian?**)
I am working on computing phase diagrams for alloys. These are blueprints for a material that show what phase, or combination of phases, a material will exist in for a range of concentrations and temperatures (see <a href="http://web.cos.gmu.edu/~tstephe3/talks/SIAMMaterialsScience2010.pdf">this pdf presentation</a>). The crucial step in drawing the boundaries that separate one phase from another on these diagrams involves minimizing a free energy function subject to basic physical conservation constraints. I am going to leave out the chemistry/physics and hope that we can move forward with the minimization using Lagrange multipliers. The free energy that is to be minimized is this: \begin{equation} \widetilde{G}(x_1, x_2) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2), \end{equation} subject to: \begin{align} &f^{(1)}x_1 + f^{(2)}x_2 = c_1, \nonumber \\ &x_1 + x_2 = 1, \nonumber\\ &f^{(1)} + f^{(2)} = 1. \nonumber \end{align} (and also that the $x_{i} > 0$ and $f^{(i)} > 0$, for $i=1,2$.) The Lagrange formulation is: \begin{align} L(x_1,x_2,f^{(1)},f^{(2)},\lambda_1, \lambda_2, \lambda_3) =& f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2) \\ &- \lambda_{1}(f^{(1)}x_1 + f^{(2)}x_2 - c_1) \nonumber \\ & - \lambda_{2}(x_1 + x_2 - 1) \nonumber \\ &- \lambda_{3}(f^{(1)} + f^{(2)} - 1) \nonumber \end{align} \begin{tabular}{l|l} The minimization of $\widetilde{G}$ follows & \\ from finding the $x_{i}$'s that satisfy & which yields:\\ $\nabla L = 0:$ & \\ \hline & \\ $\frac{\partial L}{\partial x_{1}} = f^{(1)}G'(x_1) - \lambda_{1}x_{1} = 0$ & $(*) f^{(1)}\left[G'(x_1) - \lambda_1 \right] = 0$ \\ & \\ $\frac{\partial L}{\partial x_2} = f^{(2)}G'(x_2) - \lambda_{1}x_{2} = 0$ & $(**) f^{(2)}\left[G'(x_2) - \lambda_1 \right]= 0 $ \\ & \\ $\frac{\partial L}{\partial f^{(1)}} = G(x_1) - \lambda_{1}x_{1} - \lambda_3 = 0$ & $(***) G(x_1) - G(x_2) = \lambda_1 \left[ x_1 - x_2\right]$ \\ & \\ $\frac{\partial L}{\partial f^{(2)}} = G(x_2) - \lambda_{1}x_{2} - \lambda_3 = 0$ & \\ \end{tabular} \\ \\ Because $f^{(1)}$ and $f^{(2)}$ are not to be zero, from (*) and (**) we have that \begin{equation} G'(x_1) = G'(x_2) = \lambda_{1}. \end{equation} And, a manipulation of equation (***) looks like \begin{equation} \frac{G(x_1) -G(x_2)}{x_1 - x_2} = \lambda_{1}. \end{equation} Now, think of $G$ as an even degree polynomial (which it isn't, but it's graph sometimes resembles one) in the plane. Let the points $x_1$ and $x_2$ be locations along the x-axis that lie roughly below the minima of this curve. The constraints () and () describe the condition that the line drawn between $(x_1,G(x_1))$ and $(x_2,G(x_2))$ form a common tangent to the "wells" of the curve. It is these points $x_1$ and $x_2$, which represent concentrations of pure components in our alloy, that become mapped onto a phase diagram. It is essentially by repeating this procedure for many temperatures that we can trace out the boundaries in the desired phase diagram. **The question is:** Looking at this from a purely analytic geometry perspective, **how would one derive the "variational" approach to find a common tangent line that we seem to have found using the above Lagrangian?** (warning: I don't really know how to model things using variational methods.) **And, secondly:** I have presented a model of a binary alloy, meaning two variables to keep track of representing concentrations. 
I have been working on ternary alloys, where this free energy $\widetilde{G}$ is a function of three variables (two independent: $x_1,x_2,x_3$, where $x_3 = 1- x_1 - x_2$) and is therefore a surface over a Gibbs triangle. Then $\nabla L = 0$ produces partial derivatives that no longer "speak geometry" to me, although the solution is a common tangent plane. (I have attempted to characterize a common tangent plane based purely in analytic geometry - completely disregarding the Lagrangian - and have come up with several relations between directional derivatives... **How might directional derivatives relate to the optimality conditions set forth by the Lagrangian?**)
I am working on computing phase diagrams for alloys. These are blueprints for a material that show what phase, or combination of phases, a material will exist in for a range of concentrations and temperatures (see <a href="http://web.cos.gmu.edu/~tstephe3/talks/SIAMMaterialsScience2010.pdf">this pdf presentation</a>). The crucial step in drawing the boundaries that separate one phase from another on these diagrams involves minimizing a free energy function subject to basic physical conservation constraints. I am going to leave out the chemistry/physics and hope that we can move forward with the minimization using Lagrange multipliers. The free energy that is to be minimized is this: $\widetilde{G}(x_1, x_2) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2),$ subject to: $f^{(1)}x_1 + f^{(2)}x_2 = c_1,$ $x_1 + x_2 = 1, $ $f^{(1)} + f^{(2)} = 1. $ (and also that the $x_{i} > 0$ and $f^{(i)} > 0$, for $i=1,2$.) The Lagrange formulation is: $L(x_1,x_2,f^{(1)},f^{(2)},\lambda_1, \lambda_2, \lambda_3) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2)$ $- \lambda_{1}(f^{(1)}x_1 + f^{(2)}x_2 - c_1)$ $- \lambda_{2}(x_1 + x_2 - 1) $ $- \lambda_{3}(f^{(1)} + f^{(2)} - 1) $ \begin{tabular}{l|l} The minimization of $\widetilde{G}$ follows & \\ from finding the $x_{i}$'s that satisfy & which yields:\\ $\nabla L = 0:$ & \\ \hline & \\ $\frac{\partial L}{\partial x_{1}} = f^{(1)}G'(x_1) - \lambda_{1}x_{1} = 0$ & $(*) f^{(1)}\left[G'(x_1) - \lambda_1 \right] = 0$ \\ & \\ $\frac{\partial L}{\partial x_2} = f^{(2)}G'(x_2) - \lambda_{1}x_{2} = 0$ & $(**) f^{(2)}\left[G'(x_2) - \lambda_1 \right]= 0 $ \\ & \\ $\frac{\partial L}{\partial f^{(1)}} = G(x_1) - \lambda_{1}x_{1} - \lambda_3 = 0$ & $(***) G(x_1) - G(x_2) = \lambda_1 \left[ x_1 - x_2\right]$ \\ & \\ $\frac{\partial L}{\partial f^{(2)}} = G(x_2) - \lambda_{1}x_{2} - \lambda_3 = 0$ & \\ \end{tabular} \\ \\ Because $f^{(1)}$ and $f^{(2)}$ are not to be zero, from (*) and (**) we have that \begin{equation} G'(x_1) = G'(x_2) = \lambda_{1}. \end{equation} And, a manipulation of equation (***) looks like \begin{equation} \frac{G(x_1) -G(x_2)}{x_1 - x_2} = \lambda_{1}. \end{equation} Now, think of $G$ as an even degree polynomial (which it isn't, but it's graph sometimes resembles one) in the plane. Let the points $x_1$ and $x_2$ be locations along the x-axis that lie roughly below the minima of this curve. The constraints () and () describe the condition that the line drawn between $(x_1,G(x_1))$ and $(x_2,G(x_2))$ form a common tangent to the "wells" of the curve. It is these points $x_1$ and $x_2$, which represent concentrations of pure components in our alloy, that become mapped onto a phase diagram. It is essentially by repeating this procedure for many temperatures that we can trace out the boundaries in the desired phase diagram. **The question is:** Looking at this from a purely analytic geometry perspective, **how would one derive the "variational" approach to find a common tangent line that we seem to have found using the above Lagrangian?** (warning: I don't really know how to model things using variational methods.) **And, secondly:** I have presented a model of a binary alloy, meaning two variables to keep track of representing concentrations. I have been working on ternary alloys, where this free energy $\widetilde{G}$ is a function of three variables (two independent: $x_1,x_2,x_3$, where $x_3 = 1- x_1 - x_2$) and is therefore a surface over a Gibbs triangle. 
Then $\nabla L = 0$ produces partial derivatives that no longer "speak geometry" to me, although the solution is a common tangent plane. (I have attempted to characterize a common tangent plane based purely in analytic geometry - completely disregarding the Lagrangian - and have come up with several relations between directional derivatives... **How might directional derivatives relate to the optimality conditions set forth by the Lagrangian?**)
I am working on computing phase diagrams for alloys. These are blueprints for a material that show what phase, or combination of phases, a material will exist in for a range of concentrations and temperatures (see <a href="http://web.cos.gmu.edu/~tstephe3/talks/SIAMMaterialsScience2010.pdf">this pdf presentation</a>). The crucial step in drawing the boundaries that separate one phase from another on these diagrams involves minimizing a free energy function subject to basic physical conservation constraints. I am going to leave out the chemistry/physics and hope that we can move forward with the minimization using Lagrange multipliers.

The free energy that is to be minimized is this: $\widetilde{G}(x_1, x_2) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2),$ subject to: $f^{(1)}x_1 + f^{(2)}x_2 = c_1,$ $x_1 + x_2 = 1,$ $f^{(1)} + f^{(2)} = 1$ (and also that the $x_{i} > 0$ and $f^{(i)} > 0$, for $i=1,2$).

The Lagrange formulation is: $L(x_1,x_2,f^{(1)},f^{(2)},\lambda_1, \lambda_2, \lambda_3) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2) - \lambda_{1}(f^{(1)}x_1 + f^{(2)}x_2 - c_1) - \lambda_{2}(x_1 + x_2 - 1) - \lambda_{3}(f^{(1)} + f^{(2)} - 1).$

The minimization of $\widetilde{G}$ follows from finding the $x_{i}$'s that satisfy $\nabla L = 0$: $\frac{\partial L}{\partial x_{1}} = f^{(1)}G'(x_1) - \lambda_{1}f^{(1)} = 0$, $\frac{\partial L}{\partial x_2} = f^{(2)}G'(x_2) - \lambda_{1}f^{(2)} = 0$, $\frac{\partial L}{\partial f^{(1)}} = G(x_1) - \lambda_{1}x_{1} - \lambda_3 = 0$, $\frac{\partial L}{\partial f^{(2)}} = G(x_2) - \lambda_{1}x_{2} - \lambda_3 = 0$, which yields: $(*)\ f^{(1)}\left[G'(x_1) - \lambda_1 \right] = 0$, $(**)\ f^{(2)}\left[G'(x_2) - \lambda_1 \right] = 0$, $(***)\ G(x_1) - G(x_2) = \lambda_1 \left[ x_1 - x_2\right]$.

Because $f^{(1)}$ and $f^{(2)}$ are not to be zero, from $(*)$ and $(**)$ we have that $G'(x_1) = G'(x_2) = \lambda_{1}.$ And a manipulation of equation $(***)$ looks like $\frac{G(x_1) -G(x_2)}{x_1 - x_2} = \lambda_{1}.$

Now, think of $G$ as an even-degree polynomial (which it isn't, but its graph sometimes resembles one) in the plane. Let the points $x_1$ and $x_2$ be locations along the x-axis that lie roughly below the minima of this curve. Conditions $(*)$, $(**)$, and $(***)$ describe the condition that the line drawn between $(x_1,G(x_1))$ and $(x_2,G(x_2))$ forms a common tangent to the "wells" of the curve. It is these points $x_1$ and $x_2$, which represent concentrations of pure components in our alloy, that become mapped onto a phase diagram. It is essentially by repeating this procedure for many temperatures that we can trace out the boundaries in the desired phase diagram.

**The question is:** Looking at this from a purely analytic geometry perspective, **how would one derive the "variational" approach to find a common tangent line that we seem to have found using the above Lagrangian?** (warning: I don't really know how to model things using variational methods.)

**And, secondly:** I have presented a model of a binary alloy, meaning there are two concentration variables to keep track of. I have been working on ternary alloys, where this free energy $\widetilde{G}$ is a function of three variables (two independent: $x_1,x_2,x_3$, where $x_3 = 1- x_1 - x_2$) and is therefore a surface over a Gibbs triangle. Then $\nabla L = 0$ produces partial derivatives that no longer "speak geometry" to me, although the solution is a common tangent plane. (I have attempted to characterize a common tangent plane based purely on analytic geometry - completely disregarding the Lagrangian - and have come up with several relations between directional derivatives... **How might directional derivatives relate to the optimality conditions set forth by the Lagrangian?**)
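(To make the common-tangent picture concrete numerically, here is a minimal sketch, an illustration rather than part of the original question: it solves the two derived conditions $G'(x_1) = G'(x_2) = \frac{G(x_1)-G(x_2)}{x_1-x_2}$ for a toy double-well $G$. The quartic below and the starting guesses are assumptions of the sketch.)

```python
from scipy.optimize import fsolve

# A toy double-well standing in for the free energy G (an assumption of this
# sketch, not the physical model): minima near x = 0.2 and x = 0.8.
G  = lambda x: (x - 0.2)**2 * (x - 0.8)**2
dG = lambda x: 2*(x - 0.2)*(x - 0.8)**2 + 2*(x - 0.2)**2*(x - 0.8)

def common_tangent(v):
    x1, x2 = v
    slope = (G(x1) - G(x2)) / (x1 - x2)  # slope of the secant line
    # Tangency at both points: each local slope must equal the secant slope.
    return [dG(x1) - slope, dG(x2) - slope]

# Starting guesses kept apart so x1 - x2 stays away from zero.
x1, x2 = fsolve(common_tangent, [0.1, 0.9])
print(x1, x2)  # approximately 0.2 and 0.8 for this symmetric toy G
```

The solver is just reproducing the geometry read off from $(*)$ through $(***)$: equal slopes at the two contact points, and that common slope equal to the slope of the chord joining them.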
The knight's tour is a sequence of 64 squares on a chess board, where each square is visited exactly once, and each subsequent square can be reached from the previous by a knight's move. Tours can be cyclic, if the last square is a knight's move away from the first, and acyclic otherwise. There are several symmetries among knight's tours. Both acyclic and cyclic tours have the eight symmetries of the board (rotations and reflections), and cyclic tours additionally have symmetries arising from starting at any square in the cycle, and from running the sequence backwards. Is it known how many knight's tours there are, up to all these symmetries?
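(Not an answer to the counting question, but for readers who want to experiment, here is a hedged sketch of Warnsdorff's heuristic for *finding* a single acyclic tour: always move to the unvisited square with the fewest onward moves. It usually succeeds on the $8\times 8$ board, though it is not guaranteed to, and it says nothing about enumeration.)

```python
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(start=(0, 0), n=8):
    visited = {start}
    path = [start]

    def onward(sq):
        # Unvisited squares a knight can reach from sq.
        return [(sq[0] + dr, sq[1] + dc) for dr, dc in MOVES
                if 0 <= sq[0] + dr < n and 0 <= sq[1] + dc < n
                and (sq[0] + dr, sq[1] + dc) not in visited]

    while len(path) < n * n:
        candidates = onward(path[-1])
        if not candidates:
            return None  # heuristic got stuck; another tie-break may succeed
        nxt = min(candidates, key=lambda sq: len(onward(sq)))  # Warnsdorff's rule
        visited.add(nxt)
        path.append(nxt)
    return path

t = knights_tour()
print("stuck" if t is None else "found a tour visiting %d squares" % len(t))
```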
How many knight's tours are there?
I just got out from my Math and Logic class with my friend. During the lecture, a well-known math/logic puzzle was presented:

>The King has $1000$ wines, $1$ of which is poisoned. He needs to identify the poisoned wine immediately, so he hires the protagonist, a Mathematician. The king offers you his expendable servants to help you test which wine is poisoned.

>The poisoned wine is very potent, so much so that one molecule of the wine will cause anyone who drinks it to die. However, it is slow-acting. The nature of the slow-acting poison means that there is only time to test one "drink" per servant. (A drink may be a mixture of any number of wines.) (Assume that the King needs to know within an hour, and that any poison in the drink takes an hour to show any symptoms.)

>What is the minimum number of servants you would need to identify the poisoned wine?

Upon hearing this problem, the astute mathematics student will see that this requires at most **ten** ($10$) servants (in fact, you could test $24$ more wines on top of that $1000$ before requiring an eleventh servant). The proof/procedure is left to the reader.

My friend and I, however, were not content with resting upon this answer. My friend added the question:

>What would be different if there were $2$ wines that were poisoned out of the $1000$? What is the new minimum then?

We eventually generalized the problem to this:

>Given $N$ bottles of wine ($N \gt 2$) and, of those, $k$ poisoned wines ($0 \lt k \lt N$), what is the optimum method to identify all of the poisoned wines, and how many servants are required ($s(N,k)$)?

After some mathsing, my friend and I managed to find some (possibly unhelpful) lower and upper bounds: $\log_2 {N \choose k} \le s(N,k) \le N-1.$ This is because $\log_2 {N \choose k}$ is the minimum number of servants needed to distinguish the ${N \choose k}$ possible configurations of $k$ poisoned wines among $N$ total wines.

Can anyone help us find an optimum strategy? Besides the trivial one requiring $N-1$ servants. How about a possible approach to start? Would this problem be any different if you were only required to find a strategy guaranteed to find a wine that is **not** poisoned, instead of identifying all of the poisoned wines?
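(For the $k=1$ case, the ten-servant procedure alluded to above is the classic binary-labeling trick, sketched below; the code is illustrative and says nothing about the general $s(N,k)$.)

```python
# Label the wines 0..999 in binary; servant i sips from every wine whose i-th
# bit is set. The set of servants who die spells out the poisoned wine's
# index in binary -- 10 servants cover 2**10 = 1024 wines.
def drinks_for(servant, num_wines=1000):
    return [w for w in range(num_wines) if (w >> servant) & 1]

def identify(poisoned_wine, num_servants=10, num_wines=1000):
    dead = [s for s in range(num_servants)
            if poisoned_wine in drinks_for(s, num_wines)]  # who shows symptoms
    return sum(1 << s for s in dead)  # reassemble the index from the deaths

assert identify(421) == 421  # works for any index below 1024
```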
Can someone help me with a math/logic problem my friend and I are struggling with?
When I studied complex analysis, I could never understand how once-differentiable complex functions could possibly be infinitely differentiable. After all, this doesn't hold for functions from $\mathbb{R}^2$ to $\mathbb{R}^2$. Can anyone explain what is different about complex numbers?
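(For reference, the usual route to this fact is Cauchy's integral formula: if $f$ is holomorphic on and inside a simple closed curve $\gamma$ enclosing $a$, then $$f^{(n)}(a) = \frac{n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{(z-a)^{n+1}}\, dz,$$ and the right-hand side makes sense for every $n$, with the differentiation happening under the integral sign. No comparable integral representation is available for merely differentiable maps from $\mathbb{R}^2$ to $\mathbb{R}^2$.)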
Why are differentiable complex functions infinitely differentiable?
If we replace the axiom that 'there exists an infinite set' with 'all sets are finite', what would mathematics be like? My guess is that all the theory that has practical importance would still show up, but everything would be very, very unreadable for humans. Is that true?

We would have the natural numbers, although the class of all natural numbers would not be a set. In the same sense, we could have the rational numbers. But could we have the real numbers? Can the standard constructions be adapted to this setting? (Edit: In the standard construction, $0$ is defined to be the empty set, $1$ is $\{0\}$, and $2$ is $\{0,\{0\}\}$, and it goes on like this. Then we define ordered pairs and equivalence relations, using only sets and the axioms. The rational numbers are constructed with an equivalence relation over the ordered natural number pairs. I think these can easily be adapted to the finitist setting. But how could the definition of real numbers follow? In particular, what would be the definition of $e$, or $\sqrt{2}$?) Also, I guess that this kind of axiom system must have been studied, so do you know any references?

Edit: The question has been closed for being "too localized". I think it is complete nonsense. I voted to reopen, and started a meta discussion: http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite .

Another clarification: After doing a little more research, I think what I mentioned in this question is relevant to finitism, which is an extreme form of constructivism. There were very important names who defended such a system, such as Kronecker. In such a system, obviously we have no hope of getting the set of real numbers, which is uncountably infinite. But we must have definitions for the real numbers that one is likely to encounter, like $e$ or $\sqrt{2}$, in order to do meaningful mathematics. In standard constructions, real numbers are defined as Dedekind cuts, or Cauchy sequences, which are actually sets of infinite cardinality. But here is an alternative definition that I found in Wikipedia, which I think may be used as a finitist definition, with a proper definition of function: http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis

So with the help of this Wikipedia example, my question boils down to this: How can we define a function $f$ over the natural numbers in a finitist axiom system? (If we can get 2 more reopen votes before the current ones expire, maybe someone can answer this question here.)
If all sets were finite, how could the real numbers be defined?
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need to have definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In standard constructions, real numbers are defined as Dedekind cuts, or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is, how would real numbers like these be defined in a finitist axiom system? (Of course we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)

After doing a little research I found a constructivist definition in Wikipedia, http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis , but we need a finitist definition of a function for this definition to work. (Because in the standard system, a function over the set of natural numbers is actually an infinite set.) So my question boils down to this: How can we define a function $f$ over the natural numbers in a finitist axiom system?

--This question is substantially edited. Please see history if you would like to see the older versions.--
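(One concrete reading of that Wikipedia definition: a real number is an algorithm, itself a finite piece of text, that, given $n$, outputs a rational within $1/n$ of the intended value. Here is a sketch of $\sqrt{2}$ in this style, offered as an illustration rather than a formal finitist development.)

```python
from fractions import Fraction

def sqrt2(n):
    """A 'finitist real': given n, return a rational within 1/n of sqrt(2)."""
    x = Fraction(3, 2)                      # rational starting guess
    # Since x stays above 1.4, |x*x - 2| <= 1/n forces |x - sqrt(2)| < 1/n.
    while abs(x * x - 2) > Fraction(1, n):
        x = (x + 2 / x) / 2                 # Babylonian step, in exact rationals
    return x

print(sqrt2(10**6))  # a fraction within one millionth of sqrt(2)
```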
Chicken is a famous game where two people drive on a collision course straight towards each other. Whoever swerves is considered a 'chicken' and loses, but if nobody swerves, they will both crash. So the payoff matrix looks something like this:

                  B swerves          B straight
    A swerves     tie                A loses, B wins
    A straight    B loses, A wins    both lose

But I have heard of another situation called the prisoner's dilemma, where two prisoners are each given the choice to testify against the other, or remain silent. The payoff matrix for the prisoner's dilemma also looks like:

                  B silent           B testify
    A silent      tie                A loses, B wins
    A testify     B loses, A wins    both lose

I remember hearing that in the prisoner's dilemma, it was always best for both prisoners to testify. But that makes no sense if you try to apply it to chicken: both drivers would crash every time, whereas in real life someone almost always ends up swerving. What's the difference between the two situations?
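(One concrete way to see the difference is to attach numbers to the outcomes and look for pure-strategy Nash equilibria. The payoffs below are assumptions chosen for illustration: in particular, that crashing is far worse than chickening out, while testifying is better for a prisoner no matter what the other does.)

```python
# Pure-strategy Nash equilibria of two 2x2 games; payoffs are illustrative.
chicken = {('swerve',   'swerve'):   ( 0,   0),
           ('swerve',   'straight'): (-1,   1),
           ('straight', 'swerve'):   ( 1,  -1),
           ('straight', 'straight'): (-10, -10)}  # a crash hurts both badly

pd = {('silent',  'silent'):  (-1,  -1),
      ('silent',  'testify'): (-10,  0),
      ('testify', 'silent'):  ( 0, -10),
      ('testify', 'testify'): (-5,  -5)}          # testifying dominates silence

def pure_nash(game):
    a_moves = {a for a, _ in game}
    b_moves = {b for _, b in game}
    return [(a, b) for (a, b) in game
            if game[a, b][0] == max(game[x, b][0] for x in a_moves)    # A can't do better
            and game[a, b][1] == max(game[a, y][1] for y in b_moves)]  # B can't do better

print(pure_nash(chicken))  # two equilibria: exactly one driver swerves
print(pure_nash(pd))       # one equilibrium: both testify
```

With these numbers, chicken has no dominant strategy and two asymmetric equilibria, while the dilemma has a dominant strategy (testify) and a single equilibrium, which is why the "always testify" logic does not carry over.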