What Does it Really Mean to Have Different Kinds of Infinities? Can someone explain to me how there can be different kinds of infinities? I was reading "The man who loved only numbers" by Paul Hoffman and came across the concept of countable and uncountable infinities, but they're only words to me. Any help would be appreciated.
This is an answer to the following question, marked as a duplicate that redirects here: "I've known for some time that infinite cardinalities can differ in order, such as the integers (countable) and the real numbers (uncountable). I read that you can always find a higher order of infinity given any order of infinity. Since infinity is the limit of the natural numbers under the successor function, I would like to know if there is a similar concept for orders of infinity under taking power sets: is there a sort of "super-infinity", a limit to the orders of infinity?" Yes, there is such a concept: the smallest strongly inaccessible cardinal. Roughly, it is the smallest uncountable infinity that cannot be reached by taking either unions or power sets of infinities below it; see http://en.wikipedia.org/wiki/Limit_cardinal. The existence of such cardinals is widely believed to be independent of the standard axioms of set theory (ZFC); in other words, it can neither be proved nor disproved from them. However, there are many works where people postulate the existence of strongly inaccessible cardinals and see what they can derive from it. Of course, even with such a postulate you still don't get an "infinity of all infinities"; such a concept is self-contradictory by the Russell paradox. But the smallest strongly inaccessible cardinal stands in a similar relation to the cardinals below it, with respect to power sets, as the countable cardinal does with respect to successors and unions.
How can you prove that the square root of two is irrational? I have read a few proofs that $\sqrt{2}$ is irrational. I have never, however, been able to really grasp what they were talking about. Is there a simplified proof that $\sqrt{2}$ is irrational?
There is also a proof of this theorem that uses the well-ordering property of the set of positive integers: every non-empty set of positive integers has a least element. The proof follows the approach of proof by contradiction, with the well-ordering principle supplying the contradiction :) - Assume $\sqrt{2}$ is rational, so it can be written as $\sqrt{2}=a/b$ for positive integers $a$ and $b$. Consider the set $S = \{k\sqrt{2} \mid k \text{ and } k\sqrt{2}\text{ are positive integers}\}$. This is a non-empty set of positive integers; it is non-empty because $a = b\sqrt{2}$ belongs to it. By the well-ordering principle, $S$ has a smallest element, say $s = t\sqrt{2}$. Now an interesting thing happens if we compute $s\sqrt{2} - s = (s-t)\sqrt{2} = s(\sqrt{2} - 1)$. This number is a positive integer (both $s\sqrt{2} = 2t$ and $s$ are integers), it is of the form $(s-t)\sqrt{2}$ with $s-t$ a positive integer, so it lies in $S$, and since $0 < \sqrt{2}-1 < 1$ it is smaller than $s$ itself, contradicting the choice of $s$ as the smallest element. Hence $\sqrt{2}$ is irrational. I know the proof, but I am still amazed at how the author came up with that set. Sometimes such constructions make you feel kinda dumb :). If anyone has some insight regarding how to come up with such constructions, kindly post it in the comments; otherwise I would just assume that it was a workaround.
Is it true that $0.999999999\ldots=1$? I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
\begin{align} x &= 0.999... \\ 10x &= 9.999... \\ &= 9 + 0.999... \\ &= 9 + x \\ 10x - x &= (9 + x) - x \\ (10 - 1)x &= 9 + (x - x) \\ 9x &= 9 \\ x &= 1 \end{align}
How do you calculate the semi-minor axis of an ellipsoid? Given the semi-major axis and a flattening factor, is it possible to calculate the semi-minor axis?
Let $a$ = transverse radius = semi-major axis (for an ellipse/oblate spheroid); $b$ = conjugate radius = semi-minor axis (likewise); $oe$ = angular eccentricity = $\arccos(\frac{b}{a})$; $f$ = flattening = $\frac{a-b}{a} = 1 - \frac{b}{a} = 1 - \cos(oe) = 2\sin^{2}(\frac{oe}{2})$; then $b = a\cos(oe) = a(1-f)$.
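As a quick sketch of that relation in code (the WGS84 reference-ellipsoid values below are a standard illustration of my own choosing, not part of the question):

```python
import math

def semi_minor_axis(a, f):
    """Semi-minor axis from semi-major axis a and flattening f, via b = a*(1 - f)."""
    return a * (1.0 - f)

# Illustrative values: the WGS84 reference ellipsoid
a = 6378137.0            # semi-major axis in metres
f = 1 / 298.257223563    # flattening
b = semi_minor_axis(a, f)

oe = math.acos(b / a)    # angular eccentricity, arccos(b/a)
```

Note that $b = a\cos(oe)$ holds automatically, since $\cos(oe) = b/a$ by definition.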
List of Interesting Math Blogs I follow one or two interesting math blogs in my feed reader. It would be interesting to compile a list of math blogs that are worth reading and do not require research-level math skills. I'll start with my entries:

* Division By Zero
* Tanya Khovanova’s Math Blog
Timothy Gowers' blog is excellent. Like Terence Tao, he is both a Fields medalist and an excellent writer. Together their blogs were my first real introduction into how professional mathematicians think, and their writing has taught me a lot, both about mathematics and about mathematical writing. If you are a serious student of mathematics you will find all the blogs you need by scrolling through their blogrolls.
Online resources for learning Mathematics Not sure if this is the place for it, but there are similar posts for podcasts and blogs, so I'll post this one. I'd be interested in seeing a list of online resources for mathematics learning. As someone doing a non-maths degree in college I'd be interested in finding some resources for learning more maths online, most resources I know of tend to either assume a working knowledge of maths beyond secondary school level, or only provide a brief summary of the topic at hand. I'll start off by posting MIT Open Courseware, which is a large collection of lecture notes, assignments and multimedia for the MIT mathematics courses, although in many places it's quite incomplete.
A useful one for undergraduate level maths is Mathcentre. It has useful background material for people studying maths, or who need some maths background for other courses.
How would you describe calculus in simple terms? I keep hearing about this weird type of math called calculus. I only have experience with geometry and algebra. Can you try to explain what it is to me?
One of the greatest achievements of human civilization is Newton's laws of motion. The first law says that unless a force is acting, the velocity (not the position!) of an object stays constant, while the second law says that forces act by causing an acceleration (though heavy objects require more force to accelerate). However, to make sense of those laws and to apply them to real life you need to understand how to move between the following three notions:

* Position
* Velocity (the rate of change of position)
* Acceleration (the rate of change of velocity)

Moving down that list is called "taking the derivative," while moving up that list is called "taking the integral." Calculus is the study of derivatives and integrals. In particular, if you want to figure out how objects move under some force, you need to be able to integrate twice. This requires understanding a lot of calculus! In a first-semester class you usually learn about derivatives and integrals of functions of one variable; that is what you need to understand physics in 1 dimension. To understand the actual physics of the world you need derivatives and integrals in 3 dimensions, which requires several courses.
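The position/velocity/acceleration picture can be made concrete numerically. A sketch (the example functions are made up for illustration) of "moving down the list" by finite differences and "moving up" by a Riemann sum:

```python
def derivative(f, t, h=1e-6):
    """Approximate rate of change of f at t (moving DOWN the list)."""
    return (f(t + h) - f(t - h)) / (2 * h)

def integral(f, a, b, n=100_000):
    """Approximate accumulated change of f from a to b (moving UP the list)."""
    dt = (b - a) / n
    return sum(f(a + (i + 0.5) * dt) for i in range(n)) * dt

position = lambda t: t ** 2                    # example: position t^2
velocity = lambda t: derivative(position, t)   # its rate of change is 2t

# velocity at t = 3 should be about 6, and integrating the velocity
# from 0 to 3 should recover the net change in position, which is 9
```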
Why is the volume of a sphere $\frac{4}{3}\pi r^3$? I learned that the volume of a sphere is $\frac{4}{3}\pi r^3$, but why? The $\pi$ kind of makes sense because its round like a circle, and the $r^3$ because it's 3-D, but $\frac{4}{3}$ is so random! How could somebody guess something like this for the formula?
I am nowhere near as proficient in math as any of the people who answered this before me, but nonetheless I would like to add a simplified version. A cylinder's volume is: $$\pi r^2h$$ A cone's volume is $\frac{1}{3}$ that of a cylinder of equal height and radius: $$\frac{1}{3}\pi r^2h$$ A sphere's volume equals that of two cones, each with the sphere's radius and the sphere's height: $$\frac{1}{3}\pi r^2h + \frac{1}{3}\pi r^2h$$ The height of the sphere is equal to its diameter $(r + r)$, so the earlier expression can be rewritten as $$\frac{1}{3}\pi r^2(r + r) + \frac{1}{3}\pi r^2(r + r)$$ If we simplify it: $$\frac{1}{3}\pi r^2(2r) + \frac{1}{3}\pi r^2(2r)$$ Following the math convention of numbers before letters it changes to: $$\frac{1}{3}2\pi r^2r + \frac{1}{3}2\pi r^2r$$ Combining like terms, $$r^2\cdot r= r^3$$ and $$\frac{1}{3}\cdot 2 = \frac{2}{3},$$ the expression becomes $$\frac{2}{3}\pi r^3 + \frac{2}{3}\pi r^3$$ Again adding the like terms, namely the $\frac{2}{3}$'s, $$\frac{2}{3} + \frac{2}{3} = \frac{4}{3},$$ we finally see how $\frac{4}{3}$ enters the formula: $$\frac{4}{3}\pi r^3$$
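A quick numerical check of the identity above (this verifies the arithmetic coincidence the answer exploits; it is not a derivation):

```python
import math

r = 2.5                                    # any positive radius works

def cone_volume(radius, height):
    return math.pi * radius ** 2 * height / 3

two_cones = 2 * cone_volume(r, 2 * r)      # two cones of height 2r (the diameter)
sphere = 4 / 3 * math.pi * r ** 3          # the standard sphere formula
```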
Real world uses of hyperbolic trigonometric functions I covered hyperbolic trigonometric functions in a recent maths course. However I was never presented with any reasons as to why (or even if) they are useful. Is there any good examples of their uses outside academia?
Velocity addition in (special) relativity is not linear, but becomes linear when expressed in terms of hyperbolic tangent functions. More precisely, if you add two motions in the same direction, such as a man walking at velocity $v_1$ on a train that moves at $v_2$ relative to the ground, the velocity $v$ of the man relative to ground is not $v_1 + v_2$; velocities don't add (otherwise by adding enough of them you could exceed the speed of light). What does add is the inverse hyperbolic tangent of the velocities (in speed-of-light units, i.e., $v/c$). $$\tanh^{-1}(v/c)=\tanh^{-1}(v_1/c) + \tanh^{-1}(v_2/c)$$ This is one way of deriving special relativity: assume that a velocity addition formula holds, respecting a maximum speed of light and some other assumptions, and show that it has to be the above.
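A minimal sketch of this rapidity-addition rule (in units where $c = 1$ unless passed explicitly):

```python
import math

def add_velocities(v1, v2, c=1.0):
    """Relativistic velocity addition: inverse hyperbolic tangents (rapidities) add."""
    rapidity = math.atanh(v1 / c) + math.atanh(v2 / c)
    return c * math.tanh(rapidity)

# A man walking at 0.5c on a train moving at 0.5c moves at 0.8c, not 1.0c.
v = add_velocities(0.5, 0.5)
```

This agrees with the more familiar composition law $(v_1+v_2)/(1+v_1 v_2/c^2)$, since expanding $\tanh$ of a sum gives exactly that expression.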
Do complex numbers really exist? Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
The argument isn't worth having, as you disagree about what it means for something to 'exist'. There are many interesting mathematical objects which don't have an obvious physical counterpart. What does it mean for the Monster group to exist?
Do complex numbers really exist? Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
In the end they exist as a consistent definition; you cannot be agnostic about it.
Is there possibly a largest prime number? Prime numbers are numbers with no factors other than one and themselves. Factors of a number are always less than or equal to that number; so, the larger the number is, the larger the pool of "possible factors" that number might have. So the larger the number, the less likely it seems to be prime. Surely there must be a number where, simply, every number above it has some other factors: a "critical point" where every number larger than it will always have some factors other than one and itself. Has there been any research into finding this critical point, or has it been proven not to exist — that for any $n$ there is always guaranteed to be a number higher than $n$ that has no factors other than one and itself?
Another proof is: Consider the numbers $$9^{2^n} + 1, \qquad n = 1,2,\dots$$ Each of these is even but not divisible by $4$ (since $9^{2^n} \equiv 1 \pmod 4$), so consider odd primes. If $$9^{2^n} + 1 \equiv 0 \pmod p$$ then we have, for $m > n$, that $$9^{2^m} + 1 = (9^{2^n})^{2^{m-n}} + 1 \equiv (-1)^{2^{m-n}} + 1 = 1+1 = 2 \pmod p.$$ Thus if one term of the sequence is divisible by an odd prime, none of the later terms are divisible by that prime, i.e. if you write out the factorizations of the terms of the sequence, each term gives rise to a prime not seen before! As a curiosity, it can be shown that each number in the sequence has at least one prime factor greater than $40$. See this question on this very site: Does $9^{2^n} + 1$ always have a prime factor larger than $40$?
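A small empirical check of the argument. Each term is exactly twice an odd number, so the bit trick `t & -t` strips the single factor of 2, and the resulting odd parts should be pairwise coprime:

```python
import math
from itertools import combinations

terms = [9 ** (2 ** n) + 1 for n in range(1, 6)]

# each term is even but not divisible by 4; t & -t extracts its power of 2
odd_parts = [t // (t & -t) for t in terms]

# the proof says the odd parts share no prime factors
all_coprime = all(math.gcd(x, y) == 1 for x, y in combinations(odd_parts, 2))
```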
Proof that the sum of two Gaussian variables is another Gaussian The sum of two Gaussian variables is another Gaussian. It seems natural, but I could not find a proof using Google. What's a short way to prove this? Thanks! Edit: Provided the two variables are independent.
I posted the following in response to a question that got closed as a duplicate of this one: It looks from your comment as if the meaning of your question is different from what I thought at first. My first answer assumed you knew that the sum of independent normals is itself normal. You have $$ \exp\left(-\frac12 \left(\frac{x}{\alpha}\right)^2 \right) \exp\left(-\frac12 \left(\frac{z-x}{\beta}\right)^2 \right) = \exp\left(-\frac12 \left( \frac{\beta^2x^2 + \alpha^2(z-x)^2}{\alpha^2\beta^2} \right) \right). $$ Then the numerator is $$ \begin{align} & (\alpha^2+\beta^2)x^2 - 2\alpha^2 xz + \alpha^2 z^2 \\ \\ = {} & (\alpha^2+\beta^2)\left(x^2 - 2\frac{\alpha^2}{\alpha^2+\beta^2} xz\right) + \alpha^2 z^2 \\ \\ = {} & (\alpha^2+\beta^2)\left(x^2 - 2\frac{\alpha^2}{\alpha^2+\beta^2} xz + \frac{\alpha^4}{(\alpha^2+\beta^2)^2}z^2\right) + \alpha^2 z^2 - \frac{\alpha^4}{\alpha^2+\beta^2}z^2 \\ \\ = {} & (\alpha^2+\beta^2)\left(x - \frac{\alpha^2}{\alpha^2+\beta^2}z\right)^2 + \alpha^2 z^2 - \frac{\alpha^4}{\alpha^2+\beta^2}z^2, \end{align} $$ and then remember that you still have the $-1/2$ and the $\alpha^2\beta^2$ in the denominator, all inside the "exp" function. (What was done above is completing the square.) The factor of $\exp\left(\text{a function of }z\right)$ does not depend on $x$ and so is a "constant" that can be pulled out of the integral. The remaining integral does not depend on "$z$" for a reason we will see below, and thus becomes part of the normalizing constant. If $f$ is any probability density function, then $$ \int_{-\infty}^\infty f(x - \text{something}) \; dx $$ does not depend on "something", because one may write $u=x-\text{something}$ and then $du=dx$, and the bounds of integration are still $-\infty$ and $+\infty$, so the integral is equal to $1$. Now look at $$ \alpha^2z^2 - \frac{\alpha^4}{\alpha^2+\beta^2} z^2 = \frac{z^2}{\frac{1}{\beta^2} + \frac{1}{\alpha^2}}. 
$$ This was to be divided by $\alpha^2\beta^2$, yielding $$ \frac{z^2}{\alpha^2+\beta^2}=\left(\frac{z}{\sqrt{\alpha^2+\beta^2}}\right)^2. $$ So the density is $$ (\text{constant})\cdot \exp\left( -\frac12 \left(\frac{z}{\sqrt{\alpha^2+\beta^2}}\right)^2 \right) . $$ Where the standard deviation belongs we now have $\sqrt{\alpha^2+\beta^2}$.
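As an empirical sanity check of the conclusion (not part of the proof above): sampling sums of independent normals with standard deviations $\alpha$ and $\beta$ should give a standard deviation close to $\sqrt{\alpha^2+\beta^2}$.

```python
import random
import statistics

random.seed(0)                      # deterministic run
alpha, beta = 1.5, 2.0
n = 200_000

samples = [random.gauss(0, alpha) + random.gauss(0, beta) for _ in range(n)]
sd = statistics.pstdev(samples)     # should be near sqrt(1.5^2 + 2.0^2) = 2.5
```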
Why does the series $\sum_{n=1}^\infty\frac1n$ not converge? Can someone give a simple explanation as to why the harmonic series $$\sum_{n=1}^\infty\frac1n=\frac 1 1 + \frac 12 + \frac 13 + \cdots $$ doesn't converge, on the other hand it grows very slowly? I'd prefer an easily comprehensible explanation rather than a rigorous proof regularly found in undergraduate textbooks.
Let's group the terms as follows: $$A=\frac11+\frac12+\frac13+\frac14+\cdots$$ $$A=\underbrace{\left(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{9}\right)}_{\color{red}{9\text{ terms}}}+\underbrace{\left(\frac{1}{10}+\frac{1}{11}+\frac{1}{12}+\cdots+\frac{1}{99}\right)}_{\color{red}{90\text{ terms}}}+\underbrace{\left(\frac{1}{100}+\frac{1}{101}+\frac{1}{102}+\cdots+\frac{1}{999}\right)}_{\color{red}{900\text{ terms}}}+\cdots$$ Every term in each group exceeds the reciprocal of the next power of ten, so $$A>9 \times\frac{1}{10}+(99-10+1)\times \frac{1}{100}+(999-100+1)\times \frac{1}{1000}+\cdots = \frac{9}{10}+\frac{90}{100}+\frac{900}{1000}+\cdots$$ $$\Rightarrow\quad A>\underbrace{\frac{9}{10}+\frac{9}{10}+\frac{9}{10}+\cdots}_{\color{red}{m\text{ groups},\ m\to \infty}} \to \infty$$ Showing that $A$ diverges by grouping terms.
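The slow growth asked about in the question is easy to see numerically: the partial sums track $\ln n$ plus the Euler–Mascheroni constant ($\approx 0.5772$), so a million terms still sum to only about $14.4$.

```python
import math

def harmonic(n):
    """Partial sum H(n) = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

h = harmonic(10 ** 6)
# unbounded, but growing only logarithmically:
# H(n) - ln(n) approaches the Euler–Mascheroni constant
gap = h - math.log(10 ** 6)
```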
What is the single most influential book every mathematician should read? If you could go back in time and tell yourself to read a specific book at the beginning of your career as a mathematician, which book would it be?
There are so many, and I've already seen three that I would mention. Two more of interest to lay readers: The Man Who Knew Infinity by Robert Kanigel. Excellently written, ultimately a tragedy, but a real source of inspiration. Goedel's Proof by Nagel & Newman. Really, a beautiful and short exposition of the nature of proof, non-euclidean geometry, and the thinking that led Goedel to his magnificent proof.
Simple numerical methods for calculating the digits of $\pi$ Are there any simple methods for calculating the digits of $\pi$? Computers are able to calculate billions of digits, so there must be an algorithm for computing them. Is there a simple algorithm that can be computed by hand in order to compute the first few digits?
The first method that I applied successfully with a function calculator was approximation of the circle by a $2^k$-gon, with each side touching the circle at one point and the corners lying outside the circle. I started with the unit circle approximated by a square and the relation $\tan(2^{-k} \pi/4) \approx 2^{-k} \pi/4$, which gives $\pi \approx \frac{8}{2} = 4$ for $k=0$. I iterated the half-angle formula for the tangent, which I solved by applying the quadratic formula to the tangent sum formula. I obtained the sequence $\pi \approx 8 \cdot 2^k \tan(2^{-k} \pi /4)/2$. The problem is that the quadratic formula involves a square root, which is difficult to calculate by hand. That's why I kept searching for a simple approximation method that uses only addition, subtraction, multiplication and division of integers. I ended up with the following calculation. This method applies a Machin-like formula and was first published by C. Hutton.
\begin{eqnarray} \pi & = & 4 \frac{\pi}{4} = 4 \arctan(1) = 4 \arctan\Bigg(\frac{\frac{5}{6}}{\frac{5}{6}}\Bigg) = 4 \arctan\Bigg(\frac{\frac{1}{2}+\frac{1}{3}}{1-\frac{1}{2}\frac{1}{3}}\Bigg) \\ & = & 4 \arctan\Bigg(\frac{\tan(\arctan(\frac{1}{2}))+\tan(\arctan(\frac{1}{3}))}{1-\tan(\arctan(\frac{1}{2}))\tan(\arctan(\frac{1}{3}))}\Bigg) \\ & = & 4 \arctan\Big(\tan\Big(\arctan\Big(\frac{1}{2}\Big)+\arctan\Big(\frac{1}{3}\Big)\Big)\Big) \\ & = & 4 \Big(\arctan\Big(\frac{1}{2}\Big)+\arctan\Big(\frac{1}{3}\Big)\Big) \\ & = & 4 \Big(\Big\vert_0^\frac{1}{2} \arctan(x) + \Big\vert_0^\frac{1}{3} \arctan(x)\Big) \\ & = & 4 \bigg(\int_0^\frac{1}{2} \frac{1}{1+x^2} dx + \int_0^\frac{1}{3} \frac{1}{1+x^2} dx\bigg) \\ & = & 4 \bigg(\int_0^\frac{1}{2} \sum_{k=0}^\infty (-x^2)^k dx + \int_0^\frac{1}{3} \sum_{k=0}^\infty (-x^2)^k dx \bigg) \\ & = & 4 \bigg(\sum_{k=0}^\infty \int_0^\frac{1}{2} (-x^2)^k dx + \sum_{k=0}^\infty \int_0^\frac{1}{3} (-x^2)^k dx \bigg) \\ & = & 4 \bigg(\sum_{k=0}^\infty \int_0^\frac{1}{2} (-1)^k x^{2k} dx + \sum_{k=0}^\infty \int_0^\frac{1}{3} (-1)^k x^{2k} dx \bigg) \\ & = & 4 \bigg(\sum_{k=0}^\infty \bigg\vert_0^\frac{1}{2} \frac{(-1)^k}{2k+1} x^{2k+1} + \sum_{k=0}^\infty \bigg\vert_0^\frac{1}{3} \frac{(-1)^k}{2k+1} x^{2k+1} \bigg) \\ & = & 4 \bigg(\sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \frac{1}{2^{2k+1}} + \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \frac{1}{3^{2k+1}} \bigg) \\ & = & 4 \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \bigg(\frac{1}{2^{2k+1}} + \frac{1}{3^{2k+1}}\bigg) \\ & = & \sum_{k=0}^\infty \frac{4(-1)^k}{2k+1} \bigg(\frac{1}{2^{2k+1}} + \frac{1}{3^{2k+1}}\bigg). \end{eqnarray} It is the most beautiful in practice numerically applicable method I have found so far.
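The final series converges quickly (each term shrinks by at least a factor of four), so it is easy to sum directly. Here is a sketch in floating point, rather than the by-hand integer arithmetic the answer describes:

```python
import math

def hutton_pi(terms=40):
    """pi = sum over k of 4*(-1)^k/(2k+1) * (1/2^(2k+1) + 1/3^(2k+1))."""
    total = 0.0
    for k in range(terms):
        coeff = 4 * (-1) ** k / (2 * k + 1)
        total += coeff * (0.5 ** (2 * k + 1) + (1.0 / 3.0) ** (2 * k + 1))
    return total
```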
Calculating the probability of two dice getting at least a $1$ or a $5$ So you have $2$ dice and you want to get at least a $1$ or a $5$ (on the dice not added). How do you go about calculating the answer for this question. This question comes from the game farkle.
The other way to visualise this would be to draw a probability tree, like the one I sketched here: http://img.skitch.com/20100721-xwruwx7qnntx1pjmkjq8gxpifs.gif (apologies for my poor standard of drawing :) )
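The tree's answer can also be checked by enumerating all 36 equally likely outcomes (a sketch; the event is "at least one die shows a 1 or a 5"):

```python
from itertools import product

# count outcomes of two dice where at least one die shows a 1 or a 5
hits = sum(1 for d1, d2 in product(range(1, 7), repeat=2)
           if d1 in (1, 5) or d2 in (1, 5))
prob = hits / 36

# complement view: both dice must avoid {1, 5}, giving 4*4 = 16 misses,
# so the probability is 1 - 16/36 = 20/36 = 5/9
```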
Applications of the Fibonacci sequence The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
Suppose you're writing a computer program to search a sorted array for a particular value. Usually the best method is a binary search. But binary search assumes it costs the same to read from anywhere in the array. If there is a cost to move from one array element to another, proportional to how many elements you skip over to get from one read to the next, then Fibonacci search works better. This applies to situations like searching through arrays that don't fit entirely in your computer's cache, where it's generally cheaper to read nearby elements than distant ones.
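A sketch of the search itself (this is the standard textbook formulation; the variable names are mine): probes are placed at Fibonacci-number offsets, so consecutive reads tend to land near each other in the array.

```python
def fibonacci_search(arr, target):
    """Return an index of target in sorted arr, or -1 if absent."""
    n = len(arr)
    fib2, fib1 = 0, 1            # F(k-2), F(k-1)
    fib = fib1 + fib2            # F(k): grown to the smallest Fibonacci number >= n
    while fib < n:
        fib2, fib1 = fib1, fib
        fib = fib1 + fib2
    offset = -1                  # everything up to offset is already excluded
    while fib > 1:
        i = min(offset + fib2, n - 1)   # probe at a Fibonacci offset
        if arr[i] < target:
            fib, fib1 = fib1, fib2      # discard the left part, step down one
            fib2 = fib - fib1
            offset = i
        elif arr[i] > target:
            fib = fib2                  # discard the right part, step down two
            fib1 = fib1 - fib2
            fib2 = fib - fib1
        else:
            return i
    if fib1 and offset + 1 < n and arr[offset + 1] == target:
        return offset + 1
    return -1
```

For example, `fibonacci_search([1, 3, 5, 7, 9, 11], 7)` probes indices 2 and 4 before landing on index 3.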
What is larger -- the set of all positive even numbers, or the set of all positive integers? We will call the set of all positive even numbers E and the set of all positive integers N. At first glance, it seems obvious that E is smaller than N, because E is basically N with half of its terms taken out. The size of E would be the size of N divided by two. You could see this as: for every item in E, two items in N can be matched (the items x and x-1). This suggests that N is twice as large as E. On second glance, though, it seems less obvious. Each item in N can be mapped to one item in E (the item x*2). Which is larger, then? Or are they both equal in size? Why? (My background in set theory is extremely scant.)
They are both the same size, the size being 'countable infinity' or 'aleph-null'. The reasoning behind it is exactly that which you have already identified - you can assign each item in E to a single value in N. This is true for the Natural numbers, the Integers, the Rationals but not the Reals (see the Diagonal Slash argument for details on this result). -- Added explanation from comment -- The first reasoning is invalid because the cardinality of infinite sets doesn't follow 'normal' multiplication rules. If you multiply a set with cardinality of aleph-0 by 2, you still have aleph-0. The same is true if you divide it, add to it, subtract from it by any finite amount.
Why $\sqrt{-1 \cdot {-1}} \neq \sqrt{-1}^2$? I know there must be something unmathematical in the following but I don't know where it is: \begin{align} \sqrt{-1} &= i \\ \frac1{\sqrt{-1}} &= \frac1i \\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \sqrt{\frac1{-1}} &= \frac1i \\ \sqrt{\frac{-1}1} &= \frac1i \\ \sqrt{-1} &= \frac1i \\ i &= \frac1i \\ i^2 &= 1 \\ -1 &= 1 \quad !!? \end{align}
Isaac's answer is correct, but the error can be hard to spot if you don't know the relevant rules well. Problems like this are generally easy to diagnose if you examine them line by line and simplify both sides. $$\begin{align*} \sqrt{-1} &= i & \mathrm{LHS}&=i, \mathrm{RHS}=i \\ 1/\sqrt{-1} &= 1/i & \mathrm{LHS}&=1/i=-i, \mathrm{RHS}=-i \\ \sqrt{1}/\sqrt{-1} &= 1/i & \mathrm{LHS}&=1/i=-i, \mathrm{RHS}=-i \\ \textstyle\sqrt{1/-1} &= 1/i & \mathrm{LHS}&=\sqrt{-1}=i, \mathrm{RHS}=-i \end{align*}$$ We can then see that the error must be in assuming $\textstyle\sqrt{1}/\sqrt{-1}=\sqrt{1/-1}$.
Good Physical Demonstrations of Abstract Mathematics I like to use physical demonstrations when teaching mathematics (putting physics in the service of mathematics, for once, instead of the other way around), and it'd be great to get some more ideas to use. I'm looking for nontrivial ideas in abstract mathematics that can be demonstrated with some contraption, construction or physical intuition. For example, one can restate Euler's proof that $\sum \frac{1}{n^2} = \frac{\pi^2}{6}$ in terms of the flow of an incompressible fluid with sources at the integer points in the plane. Or, consider the problem of showing that, for a convex polyhedron whose $i^{th}$ face has area $A_i$ and outward facing normal vector $n_i$, $\sum A_i \cdot n_i = 0$. One can intuitively show this by pretending the polyhedron is filled with gas at uniform pressure. The force the gas exerts on the $i_th$ face is proportional to $A_i \cdot n_i$, with the same proportionality for every face. But the sum of all the forces must be zero; otherwise this polyhedron (considered as a solid) could achieve perpetual motion. For an example showing less basic mathematics, consider "showing" the double cover of $SO(3)$ by $SU(2)$ by needing to rotate your hand 720 degrees to get it back to the same orientation. Anyone have more demonstrations of this kind?
I cannot resist mentioning the waiter's trick as a physical demonstration of the fact that $SO(3)$ is not simply connected. For those who don't know it, it is the following: you can hold a dish on your hand and perform two turns (one over the elbow, one below) in the same direction and come back in the original position. I guess one can find it on youtube if it is not clear. To see why the two things are related, I borrow the following explanation by Harald Hanche-Olsen on MathOverflow: Draw a curve through your body from a stationary point, like your foot, up the leg and torso and out the arm, ending at the dish. Each point along the curve traces out a curve in SO(3), thus defining a homotopy. After you have completed the trick and ended back in the original position, you now have a homotopy from the double rotation of the dish with a constant curve at the identity of SO(3). You can't stop at the halfway point, lock the dish and hand in place, now at the original position, and untwist your arm: This reflects the fact that the single loop in SO(3) is not null homotopic.
Conjectures that have been disproved with extremely large counterexamples? I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture. I'm sure that everyone here is familiar with it; it describes an operation on a natural number – $n/2$ if it is even, $3n+1$ if it is odd. The conjecture states that if this operation is repeated, all numbers will eventually wind up at $1$ (or rather, in an infinite loop of $1-4-2-1-4-2-1$). I fired up Python and ran a quick test on this for all numbers up to $5.76 \times 10^{18}$ (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at $1$. Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.) I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?" To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!" And he said, "It is my conjecture that there are none! (and if any, they are rare)". Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?
Another class of examples arise from diophantine equations with huge minimal solutions. Thus the conjecture that such an equation is unsolvable in integers has only huge counterexamples. Well-known examples arise from Pell equations, e.g. the smallest solution to the classic Archimedes Cattle problem has 206545 decimal digits, namely 77602714 ... 55081800.
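As an illustration (my own sketch, using the standard continued-fraction algorithm rather than anything stated in the answer above): even the modest-looking Pell equation $x^2 - 61y^2 = 1$ has a surprisingly large fundamental solution, which is the phenomenon behind such "only huge counterexamples" conjectures.

```python
import math

def pell_fundamental(D):
    """Smallest positive solution (x, y) of x^2 - D*y^2 = 1,
    found via the continued-fraction expansion of sqrt(D)."""
    a0 = math.isqrt(D)
    if a0 * a0 == D:
        return None                      # D a perfect square: no solution
    m, d, a = 0, 1, a0
    x1, x = 1, a0                        # convergent numerators
    y1, y = 0, 1                         # convergent denominators
    while x * x - D * y * y != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        x1, x = x, a * x + x1
        y1, y = y, a * y + y1
    return x, y

# x^2 - 61 y^2 = 1 already needs a 10-digit x
x, y = pell_fundamental(61)
```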
Can you find a domain where $ax+by=1$ has a solution for all $a$ and $b$ relatively prime, but which is not a PID? In Intro Number Theory a key lemma is that if $a$ and $b$ are relatively prime integers, then there exist integers $x$ and $y$ such that $ax+by=1$. In a more advanced course instead you would use the theorem that the integers are a PID, i.e. that all ideals are principal. Then the old lemma can be used to prove that "any ideal generated by two elements is actually principal." Induction then says that any finitely generated ideal is principal. But, what if all finitely generated ideals are principal but there are some ideals that aren't finitely generated? Can that happen?
If I'm not mistaken, the integral domain of holomorphic functions on a connected open set $U \subset \mathbb{C}$ works. It is a theorem (in Chapter 15 of Rudin's Real and Complex Analysis, and essentially a corollary of the Weierstrass factorization theorem) that every finitely generated ideal in this domain is principal. This implies that if $a,b$ have no common factor, they generate the unit ideal. However, for instance, the ideal of holomorphic functions on the unit disk that vanish at all but finitely many of the points $1-\frac{1}{n}$ is nonprincipal.
What is a Markov Chain? What is an intuitive explanation of Markov chains, and how they work? Please provide at least one practical example.
A Markov chain is a discrete random process with the property that the next state depends only on the current state (Wikipedia). So $P(X_n \mid X_1, X_2, \dots X_{n-1}) = P(X_n \mid X_{n-1})$. An example could be modelling the weather, under the assumption that today's weather can be predicted using only knowledge of yesterday's. Say the states are Rainy and Sunny. When it is rainy on one day, the next day is sunny with probability $0.3$. When it is sunny, the probability of rain the next day is $0.4$. Now, if today is sunny, we can predict the weather for the day after tomorrow by calculating the probability of rain tomorrow times the probability of sun after rain, plus the probability of sun tomorrow times the probability of sun after sun. In total, the probability that the day after tomorrow is sunny is $P(R\mid S) \cdot P(S\mid R) + P(S\mid S) \cdot P(S\mid S) = 0.4 \cdot 0.3+0.6 \cdot 0.6 = 0.48$.
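The two-step calculation above is just a matrix product. A minimal sketch with the same numbers (the state ordering Sunny, Rainy is my choice):

```python
# P[i][j] = probability of moving from state i to state j; 0 = Sunny, 1 = Rainy
P = [[0.6, 0.4],
     [0.3, 0.7]]

def step(dist, P):
    """Advance a probability distribution over states by one day."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

today = [1.0, 0.0]              # certainly sunny today
tomorrow = step(today, P)       # [0.6, 0.4]
day_after = step(tomorrow, P)   # sunny with probability 0.48
```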
What is the optimum angle of projection when throwing a stone off a cliff? You are standing on a cliff at a height $h$ above the sea. You are capable of throwing a stone with velocity $v$ at any angle $a$ between horizontal and vertical. What is the value of $a$ when the horizontal distance travelled $d$ is at a maximum? On level ground, when $h$ is zero, it's easy to show that $a$ needs to be midway between horizontal and vertical, and thus $\large\frac{\pi}{4}$ or $45°$. As $h$ increases, however, we can see by heuristic reasoning that $a$ decreases to zero, because you can put more of the velocity into the horizontal component as the height of the cliff begins to make up for the loss in the vertical component. For small negative values of $h$ (throwing up onto a platform), $a$ will actually be greater than $45°$. Is there a fully-solved, closed-form expression for the value of $a$ when $h$ is not zero?
I don't have a complete solution, but I attempted to solve this problem using calculus. We have $x'=v \cos a$, $y''= -g$, and at $t=0$, $y'= v \sin a$. So $y'= v \sin a -gt$. Since $x_0=0$, $x=vt \cos a$. Since $y_0=h$, integrating with respect to $t$ gives $y=vt \sin a - \frac12 gt^2+h$. The stone will hit the water when $y=0$. This is as far as I got, but it appears that you can find a closed solution after all. I originally tried solving the quadratic for $t$ and substituting that into $x$, but it seems to work much better to do the substitution the other way round. I will leave this solution here in case anyone wants to see how to derive the basic equations for $x$ and $y$.
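As a numerical sanity check of these equations (not a derivation), one can scan over launch angles and compare with the candidate closed form $\tan a = v/\sqrt{v^2+2gh}$; the values of $v$ and $h$ here are arbitrary:

```python
import math

def flight_range(v, h, a, g=9.81):
    """Horizontal distance travelled before y returns to 0,
    using y = v t sin(a) - g t^2 / 2 + h and x = v t cos(a)."""
    t = (v * math.sin(a) + math.sqrt((v * math.sin(a))**2 + 2 * g * h)) / g
    return v * math.cos(a) * t

def best_angle(v, h, g=9.81, steps=200000):
    """Brute-force scan of angles in (0, pi/2)."""
    angles = (k * math.pi / 2 / steps for k in range(1, steps))
    return max(angles, key=lambda a: flight_range(v, h, a, g))

v, h, g = 20.0, 30.0, 9.81
a = best_angle(v, h, g)
candidate = math.atan(v / math.sqrt(v * v + 2 * g * h))  # proposed closed form
print(math.degrees(a), math.degrees(candidate))
```

For $h = 0$ the candidate reduces to $\arctan 1 = 45°$, and for $h > 0$ it is below $45°$, matching the heuristic in the question.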
Why are differentiable complex functions infinitely differentiable? When I studied complex analysis, I could never understand how once-differentiable complex functions could be possibly be infinitely differentiable. After all, this doesn't hold for functions from $\mathbb R ^2$ to $\mathbb R ^2$. Can anyone explain what is different about complex numbers?
When one uses the complex plane to represent the set of complex numbers ${\bf C}$, $z=x+iy$ looks very similar to the point $(x,y)$ in ${\bf R}^2$. However, there is a difference between them which is not that obvious. A linear transformation in ${\bf R}^2$ can be represented by a $2\times 2$ matrix once one chooses a basis in ${\bf R}^2$, and conversely, any $2\times 2$ matrix defines a linear transformation via the matrix multiplication $A(x,y)^{T}$. On the other hand, a linear transformation on ${\bf C}$ is different. Let $f:{\bf C}\to{\bf C}$ where $f(z)=pz$, $p \in{\bf C}$. If one writes $p=a+ib$ and $z=x+iy$, this transformation can be written as $$ \begin{bmatrix} x\\ y \end{bmatrix}\to \begin{bmatrix} a &-b\\ b &a \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} $$ when one views it in the complex plane. Hence, not every $2\times 2$ matrix defines a ${\bf C}$-linear transformation $f:{\bf C}\to{\bf C}$: only those of this special form do. The derivative, which can be regarded as a "linear transformation", is also different for $f:{\bf R}^2\to {\bf R}^2$ and $f:{\bf C}\to{\bf C}$. In the real case $$ f \left( \begin{bmatrix} x\\ y \end{bmatrix} \right) = \begin{bmatrix} f_1(x,y)\\ f_2(x,y) \end{bmatrix} $$ the components $f_1$ and $f_2$ can be chosen "independently" and $f$ can still be differentiable, while in the complex case $f_1$ and $f_2$ have to satisfy the Cauchy-Riemann equations. The relationship between $f:{\bf R}^2\to{\bf R}^2$ and $f:{\bf C}\to{\bf C}$ is also discussed here.
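One way to see the Cauchy-Riemann constraint concretely is to check it with finite differences: $z^2$ passes, while $\bar z$ (perfectly smooth as a map ${\bf R}^2\to{\bf R}^2$) fails:

```python
def cr_residual(u, v, x, y, eps=1e-6):
    """Finite-difference check of the Cauchy-Riemann equations
    u_x = v_y and u_y = -v_x at the point (x, y)."""
    ux = (u(x + eps, y) - u(x - eps, y)) / (2 * eps)
    uy = (u(x, y + eps) - u(x, y - eps)) / (2 * eps)
    vx = (v(x + eps, y) - v(x - eps, y)) / (2 * eps)
    vy = (v(x, y + eps) - v(x, y - eps)) / (2 * eps)
    return abs(ux - vy) + abs(uy + vx)

# f(z) = z^2 = (x^2 - y^2) + i(2xy): holomorphic
r1 = cr_residual(lambda x, y: x * x - y * y, lambda x, y: 2 * x * y, 1.3, 0.7)

# f(z) = conj(z) = x - iy: real-differentiable everywhere, not holomorphic
r2 = cr_residual(lambda x, y: x, lambda x, y: -y, 1.3, 0.7)

print(r1, r2)   # r1 near 0; r2 = |u_x - v_y| = |1 - (-1)| = 2
```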
What's an intuitive way to think about the determinant? In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $n \times n$ matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?
If you have a matrix $H$, then you can calculate the correlation matrix $G = H H^H$ (where $H^H$ denotes the complex-conjugate transpose of $H$). If you do an eigenvalue decomposition of $G$, you get eigenvalues $\lambda$ and eigenvectors $v$ that together describe the same space. Now there is the following equation: $\det(H H^H)$ = product of all eigenvalues $\lambda$. I.e., if you have a $3\times3$ matrix $H$, then $G$ is $3\times3$ too, giving us three eigenvalues. The product of these eigenvalues gives us the volume of a cuboid, and with every extra dimension/eigenvalue the cuboid gets an extra dimension.
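A quick sketch of the determinant-as-product-of-eigenvalues claim (the example matrix $H$ here is arbitrary):

```python
import numpy as np

H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
G = H @ H.conj().T                      # the "correlation matrix" G = H H^H

eigenvalues = np.linalg.eigvalsh(G)     # G is Hermitian, so eigenvalues are real
print(np.linalg.det(G), np.prod(eigenvalues))   # these agree
```

Since $\det(G) = \det(H)\overline{\det(H)} = |\det(H)|^2$, this also connects back to the volume interpretation of $\det(H)$ itself.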
How many circles of a given radius can be packed into a given rectangular box? I've just came back from my Mathematics of Packing and Shipping lecture, and I've run into a problem I've been trying to figure out. Let's say I have a rectangle of length $l$ and width $w$. Is there a simple equation that can be used to show me how many circles of radius $r$ can be packed into the rectangle, in the optimal way? So that no circles overlap. ($r$ is less than both $l$ and $w$) I'm rather in the dark as to what the optimum method of packing circles together in the least amount of space is, for a given shape. An equation with a non-integer output is useful to me as long as the truncated (rounded down) value is the true answer. (I'm not that interested in how the circles would be packed, as I am going to go into business and only want to know how much I can demand from the packers I hire to pack my product)
I had an answer before, but I looked into it a bit more and my answer was incorrect, so I removed it. This link may be of interest: Circle Packing in a Square (wikipedia) It was suggested by KennyTM that there may not be an optimal solution yet to this problem in general. Further digging into this has shown me that this is probably correct. Check out this page: Circle Packing - Best Known Packings. As you can see, solutions up to only 30 circles have been found and proven optimal. (Other higher numbers of circles have been proven optimal, but 31 hasn't.) Note that although the problem defined on the wikipedia page and the other link is superficially different from the question asked here, the same fundamental question is being asked, which is "what is the most efficient way to pack circles in a square/rectangle container?". ...And it seems the answer is "we don't really know" :)
Sum of the alternating harmonic series $\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k} = \frac{1}{1} - \frac{1}{2} + \cdots $ I know that the harmonic series $$\sum_{k=1}^{\infty}\frac{1}{k} = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots + \frac{1}{n} + \cdots \tag{I}$$ diverges, but what about the alternating harmonic series $$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k} = \frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots + \frac{(-1)^{n+1}}{n} + \cdots \text{?} \tag{II}$$ Does it converge? If so, what is its sum?
It converges, but not absolutely (that is, if you are allowed to reorder the terms you may end up with whatever number you fancy). If you fix the order of summation and consider the sequence of partial sums of the terms from $1$ to $n$, that sequence converges to $\ln(2)$. See Wikipedia.
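The partial sums can be watched converging numerically; by the alternating-series error bound, the partial sum up to $1/n$ is within about $1/(n+1)$ of the limit:

```python
import math

def alternating_harmonic(n):
    """Partial sum 1 - 1/2 + 1/3 - ... up to the 1/n term."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

s = alternating_harmonic(10**6)
print(s, math.log(2))   # the partial sums approach ln(2)
```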
Online Math Degree Programs Are there any real online mathematics (applied math, statistics, ...) degree programs out there? I'm full-time employed, thus not having the flexibility of attending an on campus program. I also already have a MSc in Computer Science. My motivation for a math degree is that I like learning and am interested in the subject. I've studied through number of OCW courses on my own, but it would be nice if I could actually be able to have my studying count towards something. I've done my share of Googling for this, but searching for online degrees seems to bring up a lot of institutions that (at least superficially) seem a bit shady (diploma mills?).
* Penn State has a Master of Applied Statistics.
* Stanford has Computational and Mathematical Engineering.

From what I've read, many universities will not make a distinction on the degree stating whether it was earned online or not.
Least wasteful use of stamps to achieve a given postage You have sheets of $42$-cent stamps and $29$-cent stamps, but you need at least $\$3.20$ to mail a package. What is the least amount you can make with the $42$- and $29$-cent stamps that is sufficient to mail the package? A contest problem such as this is probably most easily solved by tabulating the possible combinations, using $0$ through ceiling(total/greater value) of the greater-value stamp and computing the necessary number of the smaller stamp and the total postage involved. The particular example above would be solved with a $9$-row table, showing the minimum to be $\$3.23$, made with seven $42$-cent stamps and one $29$-cent stamp. Is there a better algorithm for solving this kind of problem? What if you have more than two values of stamps?
A similar problem is known in mathematics. It is called the "Postage-Stamp" problem, and usually asks which postal values can be realized and which cannot. Dynamic programming is a common but not polynomial-time solution for this. A polynomial time solution using continued fractions in the case of two stamp denominations exists, and more complex algorithms exist for three or more stamp denominations. Look up "Frobenius Number" for more technical information. For the given problem, I would roughly estimate the Frobenius number, and based on the estimate make an easy guess as to a near-solution and then use a dynamic programming/tabular solution, depending on the amount of time I had to solve the problem.
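A minimal dynamic-programming sketch for the original problem (the function name is mine; it grows the set of exactly realizable totals until one reaches the target):

```python
def least_sufficient_postage(target, denominations):
    """Smallest exactly-realizable total that is >= target (all in cents)."""
    reachable = {0}
    total = 0
    while True:
        total += 1
        # total is realizable iff removing one stamp leaves a realizable total
        if any(total >= d and (total - d) in reachable for d in denominations):
            reachable.add(total)
            if total >= target:
                return total

print(least_sufficient_postage(320, [42, 29]))  # 323: seven 42s plus one 29
```

This handles any number of denominations, though as noted it is not a polynomial-time method.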
Why does Benford's Law (or Zipf's Law) hold? Both Benford's Law (if you take a list of values, the distribution of the most significant digit is roughly proportional to the logarithm of the digit) and Zipf's Law (given a corpus of natural language utterances, the frequency of any word is roughly inversely proportional to its rank in the frequency table) are not theorems in a mathematical sense, but they work quite well in real life. Does anyone have an idea why this happens? (see also this question)
My explanation comes from a storage perspective: Uniformly distributed high-precision numbers that range over many powers are very expensive to store. For example, storing 9.9999989 and 99999989 is much more expensive than storing 10^1 and 10^8 assuming "they" are okay with the anomaly. The "they" refers to our simulators ;-) Yes, I'm referring to the possibility that we are in a simulation. Instead of truly random numbers, using small randomness just around whole powers might result in huge cost savings. This type of pseudo-precision probably works really well in producing "realistic" simulations while keeping storage costs to a minimum.
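Whatever the explanation, the logarithmic leading-digit distribution itself is easy to observe, e.g. on powers of $2$ (a classic Benford example):

```python
import math

N = 3000
counts = [0] * 10
v = 1
for n in range(1, N + 1):
    v *= 2
    counts[int(str(v)[0])] += 1          # most significant digit of 2^n

for d in range(1, 10):
    # observed frequency vs. Benford's prediction log10(1 + 1/d)
    print(d, counts[d] / N, math.log10(1 + 1 / d))
```

About 30% of the powers lead with a $1$, close to $\log_{10} 2 \approx 0.301$.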
Separation of variables for partial differential equations What class of Partial Differential Equations can be solved using the method of separation of variables?
There is an extremely beautiful Lie-theoretic approach to separation of variables, e.g. see Willard Miller's book [1] (freely downloadable). I quote from his introduction: This book is concerned with the relationship between symmetries of a linear second-order partial differential equation of mathematical physics, the coordinate systems in which the equation admits solutions via separation of variables, and the properties of the special functions that arise in this manner. It is an introduction intended for anyone with experience in partial differential equations, special functions, or Lie group theory, such as group theorists, applied mathematicians, theoretical physicists and chemists, and electrical engineers. We will exhibit some modern group-theoretic twists in the ancient method of separation of variables that can be used to provide a foundation for much of special function theory. In particular, we will show explicitly that all special functions that arise via separation of variables in the equations of mathematical physics can be studied using group theory. These include the functions of Lamé, Ince, Mathieu, and others, as well as those of hypergeometric type. This is a very critical time in the history of group-theoretic methods in special function theory. The basic relations between Lie groups, special functions, and the method of separation of variables have recently been clarified. One can now construct a group-theoretic machine that, when applied to a given differential equation of mathematical physics, describes in a rational manner the possible coordinate systems in which the equation admits solutions via separation of variables and the various expansion theorems relating the separable (special function) solutions in distinct coordinate systems.
Indeed for the most important linear equations, the separated solutions are characterized as common eigenfunctions of sets of second-order commuting elements in the universal enveloping algebra of the Lie symmetry algebra corresponding to the equation. The problem of expanding one set of separable solutions in terms of another reduces to a problem in the representation theory of the Lie symmetry algebra. [1] Willard Miller. Symmetry and Separation of Variables. Addison-Wesley, Reading, Massachusetts, 1977 (out of print)
Cotangent bundle This may be a poorly phrased question - please let me know of it - but what is the correct way to think of the cotangent bundle? It seems odd to think of it as the dual of the tangent bundle (I am finding it odd to reconcile the notions of "maps to the ground field" with this object).
I'm not completely sure what you mean by this: "It seems odd to think of it as the dual of the tangent bundle (I am finding it odd to reconcile the notions of "maps to the ground field" with this object)," but maybe the following will help you see why it is natural to consider the dual space of the tangent bundle. Given a function $f$ on our manifold, we want to associate something like the gradient of $f$. Well, in calculus, what characterized the gradient of a function? It's the vector field such that when we take its dot product with a vector $v$ at some point $p$, we get the directional derivative, at $p$, of $f$ along $v$. In a general manifold we don't have a dot product (which is a metric), but we can form a covector field (something which gives an element of the cotangent bundle at any point) such that, when applied to a vector $v$, we get the directional derivative of $f$ along $v$. This covector field is denoted $df$ and is called the exterior derivative of $f$.
Correct usage of the phrase "In the sequel"? History? Alternatives? While I feel quite confident that I've inferred the correct meaning of "In the sequel" from context, I've never heard anyone explicitly tell me, so first off, to remove my niggling doubts: What does this phrase mean? (Someone recently argued to me that "sequel" was actually supposed to refer to a forthcoming second part of a paper, which I found highly unlikely, but I'd just like to make sure.) My main questions: At what points in the text, and for what kinds of X, is it appropriate to use the phrase "In the sequel, X" in a paper? In a book? Is it ever acceptable to introduce definitions via "In the sequel, we introduce the concept of a "blah", which is a thing satisfying ..." at the start of a paper or book without a formal "Definition. A "blah" is a thing, satisfying ..." in the main text of the paper or book? Finally, out of curiosity, I'm wondering how long this phrase has been around, if it's considered out of date or if it's still a popular phrase, and what some good alternatives are.
I would never write that (mostly because it really sounds like the definition will be in another paper...). I'm pretty sure I'd write «In what follows, ...».
Probability to find connected pixels Say I have an image, with pixels that can be either $0$ or $1$. For simplicity, assume it's a $2D$ image (though I'd be interested in a $3D$ solution as well). A pixel has $8$ neighbors (if that's too complicated, we can drop to $4$-connectedness). Two neighboring pixels with value $1$ are considered to be connected. If I know the probability $p$ that an individual pixel is $1$, and if I can assume that all pixels are independent, how many groups of at least $k$ connected pixels should I expect to find in an image of size $n\times n$? What I really need is a good way of calculating the probability of $k$ pixels being connected given the individual pixel probabilities. I have started to write down a tree to cover all the possibilities up to $k=3$, but even then, it becomes really ugly really fast. Is there a more clever way to go about this?
This looks a bit like percolation theory to me. In the 4-neighbour case, if you look at the dual of the image, the chance that an edge is connected (runs between two pixels of the same colour) is $p^2 + (1-p)^2 = 1-2p+2p^2$. I don't think you can get a nice closed-form answer for your question, but maybe a computer can help with some Monte Carlo simulation?
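A rough Monte Carlo sketch along those lines, using 4-connectedness and a flood fill to measure components (the parameters are arbitrary):

```python
import random

def count_clusters(grid, k):
    """Number of 4-connected groups of 1-pixels with size >= k."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    groups = 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1 and not seen[i][j]:
                stack, size = [(i, j)], 0          # flood fill this component
                seen[i][j] = True
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and grid[u][v] == 1 and not seen[u][v]:
                            seen[u][v] = True
                            stack.append((u, v))
                if size >= k:
                    groups += 1
    return groups

random.seed(0)
p, n, k, trials = 0.3, 50, 3, 100
avg = sum(count_clusters([[1 if random.random() < p else 0 for _ in range(n)]
                          for _ in range(n)], k)
          for _ in range(trials)) / trials
print(avg)   # Monte Carlo estimate of the expected number of such groups
```

Switching to 8-connectedness only means enlarging the neighbour offsets to include the diagonals.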
What's the difference between open and closed sets? What's the difference between open and closed sets? Especially with relation to topology - rigorous definitions are appreciated, but just as important is the intuition!
An open set is a set $S$ for which, given any of its elements $A$, you can find a ball centered at $A$ whose points are all in $S$. A closed set is a set $S$ for which, if you have a sequence of points in $S$ that tends to a limit point $B$, then $B$ is also in $S$. Intuitively, a closed set is a set which contains its own boundary, while an open set is one you cannot leave by moving just a little bit in any direction.
Usefulness of Conic Sections Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where?
If you're a physics sort of person, conic sections clearly come up when you study how Kepler figured out what the shapes of orbits are, and some of their synthetic properties give useful shortcuts to things like proving "equal area swept out in equal time" that need not involve calculus. The other skills you typically learn while studying conic sections in analytic geometry - polar parametrization of curves, basic facts about various invariants related to triangles and conics, rotations and changing coordinate systems (so as to recognize the equation of a conic in general form as some sort of transformation of a standard one) - are all extremely useful in physics. I'd say that plane analytic geometry was the single most useful math tool for me in solving physics problems until I got to fluid dynamics stuff (where that is replaced by complex analysis). Relatedly, independent of their use in physics, I think they're a great way to show the connections between analytic and synthetic thinking in math, which will come up over and over again for people who go on to study math (coordinate-based versus intrinsic perspectives, respectively).
Mandelbrot-like sets for functions other than $f(z)=z^2+c$? Are there any well-studied analogs to the Mandelbrot set using functions other than $f(z)= z^2+c$ in $\mathbb{C}$?
Here's the Mandelbrot set on the Poincaré Disk. I made it by replacing all the usual operations in the iteration $$z_{n+1} = z_n^2+c$$ by "hyperbolic" equivalents. Adding a constant was interpreted as translating in the plane, and the hyperbolic equivalent is then $$z \mapsto \frac{z+c}{\bar{c}z+1}$$ For the squaring operation, that meant I used angle doubling plus rescaling of the distance by a factor two based on the distance formula for the Poincaré Disk: $$d(z_1,z_2)=\tanh^{-1}\left|\frac{z_1-z_2}{1-z_1\bar{z_2}}\right|$$
If and only if, which direction is which? I can never figure out (because the English language is imprecise) which part of "if and only if" means which implication. ($A$ if and only if $B$) = $(A \iff B)$, but is the following correct: ($A$ only if $B$) = $(A \implies B)$ ($A$ if $B$) = $(A \impliedby B)$ The trouble is, one never comes into contact with "$A$ if $B$" or "$A$ only if $B$" using those constructions in everyday common speech.
The explanation in this link clearly and briefly differentiates the meanings and the inference direction of "if" and "only if". In summary, $A \text{ if and only if } B$ is mathematically interpreted as follows: * *'$A \text{ if } B$' : '$A \Leftarrow B$' *'$A \text{ only if } B$' : '$\neg A \Leftarrow \neg B$' which is the contrapositive (hence, logical equivalent) of $A \Rightarrow B$
Proof that $n^3+2n$ is divisible by $3$ I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs! Problem: For any natural number $n , n^3 + 2n$ is divisible by $3.$ This makes sense Proof: Basis Step: If $n = 0,$ then $n^3 + 2n = 0^3 +$ $2 \times 0 = 0.$ So it is divisible by $3.$ Induction: Assume that for an arbitrary natural number $n$, $n^3+ 2n$ is divisible by $3.$ Induction Hypothesis: To prove this for $n+1,$ first try to express $( n + 1 )^3 + 2( n + 1 )$ in terms of $n^3 + 2n$ and use the induction hypothesis. Got it $$( n + 1 )^3+ 2( n + 1 ) = ( n^3 + 3n^2+ 3n + 1 ) + ( 2n + 2 ) \{\text{Just some simplifying}\}$$ $$ = ( n^3 + 2n ) + ( 3n^2+ 3n + 3 ) \{\text{simplifying and regrouping}\}$$ $$ = ( n^3 + 2n ) + 3( n^2 + n + 1 ) \{\text{factored out the 3}\}$$ which is divisible by $3$, because $(n^3 + 2n )$ is divisible by $3$ by the induction hypothesis. What? Can someone explain that last part? I don't see how you can claim $(n^3+ 2n ) + 3( n^2 + n + 1 )$ is divisible by $3.$
Given the $n$th case, you want to consider the $(n+1)$th case, which involves the number $(n+1)^3 + 2(n+1)$. If you know that $n^3+2n$ is divisible by $3$, you can prove $(n+1)^3 + 2(n+1)$ is divisible by $3$ if you can show the difference between the two is divisible by $3$. So find the difference, and then simplify it, and then consider how to prove it's divisible by $3$.
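Both the base fact and the key identity behind the induction step, $f(n+1)-f(n)=3(n^2+n+1)$ for $f(n)=n^3+2n$, can be spot-checked mechanically:

```python
def f(n):
    return n**3 + 2 * n

for n in range(1000):
    assert f(n) % 3 == 0                            # the claim itself
    assert f(n + 1) - f(n) == 3 * (n**2 + n + 1)    # the induction step
print("checked n = 0..999")
```

Of course this is evidence, not a proof; the induction argument is what covers all $n$.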
Which average to use? (RMS vs. AM vs. GM vs. HM) The generalized mean (power mean) with exponent $p$ of $n$ numbers $x_1, x_2, \ldots, x_n$ is defined as $$ \bar x = \left(\frac{1}{n} \sum x_i^p\right)^{1/p}. $$ This is equivalent to the harmonic mean, arithmetic mean, and root mean square for $p = -1$, $p = 1$, and $p = 2$, respectively. Also its limit at $p = 0$ is equal to the geometric mean. When should the different means be used? I know harmonic mean is useful when averaging speeds and the plain arithmetic mean is certainly used most often, but I've never seen any uses explained for the geometric mean or root mean square. (Although standard deviation is the root mean square of the deviations from the arithmetic mean for a list of numbers.)
One possible answer is for defining unbiased estimators of probability distributions. Often times you want some transformation of the data that gets you closer to, or exactly to, a normal distribution. For example, products of lognormal variables are again lognormal, so the geometric mean is appropriate here (or equivalently, the additive mean on the natural log of the data). Similarly, there are cases where the data are naturally reciprocals or ratios of random variables, and then the harmonic mean can be used to get unbiased estimators. These show up in actuarial applications, for example.
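A small sketch of the power mean for a few values of $p$, illustrating that it is nondecreasing in $p$ (so HM $\le$ GM $\le$ AM $\le$ RMS); the sample data is arbitrary:

```python
def power_mean(xs, p):
    """Generalized mean with exponent p; p = 0 is the geometric mean (the limit)."""
    n = len(xs)
    if p == 0:
        prod = 1.0
        for x in xs:
            prod *= x
        return prod ** (1.0 / n)
    return (sum(x**p for x in xs) / n) ** (1.0 / p)

xs = [1.0, 4.0, 4.0, 9.0]
hm, gm, am, rms = (power_mean(xs, p) for p in (-1, 0, 1, 2))
print(hm, gm, am, rms)     # nondecreasing in p
```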
How can I understand and prove the "sum and difference formulas" in trigonometry? The "sum and difference" formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, * *How can I prove that these formulas are correct? *More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula, although such answers are still encouraged, for completeness.
You might take refuge in complex numbers and use the Euler relation $\exp(i\phi)=\cos(\phi)+i\sin(\phi)$ together with the fundamental property of the $\exp$ function: \begin{align} \cos(\alpha+\beta)+i\sin(\alpha+\beta) &= \exp(i(\alpha+\beta)) = \exp(i\alpha)\cdot\exp(i\beta) \\ &= (\cos\alpha+i\sin\alpha)\cdot(\cos\beta+i\sin\beta) \\ &= (\cos\alpha\cos\beta-\sin\alpha\sin\beta)+i(\cos\alpha\sin\beta+\sin\alpha\cos\beta) \end{align} Finally, compare the real and imaginary parts separately. This gives both trigonometric addition theorems. --- rk
Prove: $(a + b)^{n} \geq a^{n} + b^{n}$ Struggling with yet another proof: Prove that, for any positive integer $n: (a + b)^n \geq a^n + b^n$ for all $a, b > 0:$ I wasted $3$ pages of notebook paper on this problem, and I'm getting nowhere slowly. So I need some hints. $1.$ What technique would you use to prove this (e.g. induction, direct, counter example) $2.$ Are there any tricks to the proof? I've seen some crazy stuff pulled out of nowhere when it comes to proofs...
Induction. For $n=1$ it is trivially true Assume true for $n=k$ i.e. $$(a + b)^k \ge a^k + b^k$$ Consider case $n=k+1$ \begin{align}&(a+b)^{k+1} =(a+b)(a+b)^k\\ &\ge(a+b)(a^k+b^k)\\ &=a^{k+1}+b^{k+1}+ab^k+ba^k\\ &\ge a^{k+1}+b^{k+1}\end{align}
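The inequality is also easy to spot-check numerically over random positive $a, b$ and small $n$ (evidence, not a proof; the induction above is the proof):

```python
import random

random.seed(1)
for _ in range(1000):
    a = random.uniform(0.01, 10.0)
    b = random.uniform(0.01, 10.0)
    n = random.randint(1, 8)
    assert (a + b)**n >= a**n + b**n   # holds, with equality only at n = 1
print("inequality held in every sampled case")
```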
Why $PSL_3(\mathbb F_2)\cong PSL_2(\mathbb F_7)$? Why are groups $PSL_3(\mathbb{F}_2)$ and $PSL_2(\mathbb{F}_7)$ isomorphic? Update. There is a group-theoretic proof (see answer). But is there any geometric proof? Or some proof using octonions, maybe?
Can't leave comments yet, but the details of there being only one simple group of order 168, and why PSL(2,7) and PSL(3,2) are order 168 and simple, are spelled out on pages 141-147 in Smith and Tabachnikova's "Topics in Group Theory". Steve
Finding an addition formula without trigonometry I'm trying to understand better the following addition formula: $$\int_0^a \frac{\mathrm{d}x}{\sqrt{1-x^2}} + \int_0^b \frac{\mathrm{d}x}{\sqrt{1-x^2}} = \int_0^{a\sqrt{1-b^2}+b\sqrt{1-a^2}} \frac{\mathrm{d}x}{\sqrt{1-x^2}}$$ The term $a\sqrt{1-b^2}+b\sqrt{1-a^2}$ can be derived from trigonometry (since $\sin(t) = \sqrt{1 - \cos^2(t)}$) but I have not been able to find any way to derive this formula without trigonometry, how could it be done? edit: fixed a mistake in my formula.
Replace the first integral by the same thing from $-a$ to $0$, and consider the points W,X,Y,Z on the unit circle above $-a,0,b$ and $c = a\sqrt{1-b^2} + b \sqrt{1-a^2}$. Draw the family of lines parallel to XY (and WZ). This family sets up a map from the circle to itself; through each point, draw a parallel and take the other intersection of that line with the circle. Your formula says that this map [edit: or rather the map it induces on the $x$-coordinates of points on the circle] is a change of variables converting the integral on $[-a,0]$ to the same integral on $[b,c]$. Whatever differentiation you perform in the process of proving this, will be the verification that $dx/y$ is a rotation-invariant differential on the circle $x^2 + y^2 = 1$. [The induced map on x-coordinates is: $x \to$ point on semicircle above $x \to$ corresponding point on line parallel to XY $\to x$-coordinate of the second point. Here were are just identifying $[-1,1]$ with the semicircle above it.]
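In $\arcsin$ language, the addition formula says $\arcsin a + \arcsin b = \arcsin\!\big(a\sqrt{1-b^2}+b\sqrt{1-a^2}\big)$ whenever the left side stays in $[-\pi/2,\pi/2]$; a numerical spot-check on a range where that holds:

```python
import math, random

random.seed(0)
for _ in range(1000):
    # with a, b in [0, 0.7], asin(a) + asin(b) stays below pi/2
    a = random.uniform(0.0, 0.7)
    b = random.uniform(0.0, 0.7)
    c = a * math.sqrt(1 - b * b) + b * math.sqrt(1 - a * a)
    assert abs(math.asin(a) + math.asin(b) - math.asin(c)) < 1e-9
print("addition formula verified on sampled a, b")
```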
Is this a counter example to Stromquist's Theorem? Stromquist's Theorem: If the simple closed curve J is "nice enough" then it has an inscribed square. "Nice enough" includes polygons. Read more about it here: www.webpages.uidaho.edu/~markn/squares An "inscribed square" means that the corners of the square lie on the curve. I would like to suggest a counter-example: the curve connected by the points $$ (0.2,0),\ (1,0),\ (1,1),\ (0,1),\ (0,0.2),\ (-0.2, -0.2),\ (0.2,0).$$ Link to plot: http://www.freeimagehosting.net/uploads/5b289e6824.png Does this curve have an inscribed square? (An older version of this question had another example: a triangle on top of a square, without their mutual side.)
Regarding your edit: (0.2, 0) — (1, 0.2) — (0.8, 1) — (0, 0.8) (and many others) http://www.imgftw.net/img/326639277.png
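Those four points do form a square with each vertex on an edge of the proposed polygon; the side-length and perpendicularity checks are quick to script:

```python
square = [(0.2, 0.0), (1.0, 0.2), (0.8, 1.0), (0.0, 0.8)]

# consecutive side vectors of the quadrilateral
sides = [(square[(i + 1) % 4][0] - square[i][0],
          square[(i + 1) % 4][1] - square[i][1]) for i in range(4)]

lengths = [dx * dx + dy * dy for dx, dy in sides]          # squared lengths
dots = [sides[i][0] * sides[(i + 1) % 4][0] +
        sides[i][1] * sides[(i + 1) % 4][1] for i in range(4)]

print(lengths)  # all equal: a rhombus
print(dots)     # all zero: consecutive sides perpendicular, hence a square
```

One can also check membership by eye: $(0.2,0)$ is a vertex of the curve, $(1,0.2)$ lies on the segment from $(1,0)$ to $(1,1)$, $(0.8,1)$ on the top edge, and $(0,0.8)$ on the left edge.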
Non-completeness of the space of bounded linear operators If $X$ and $Y$ are normed spaces I know that the space $B(X,Y)$ of bounded linear functions from $X$ to $Y$, is complete if $Y$ is complete. Is there an example of a pair of normed spaces $X,Y$ s.t. $B(X,Y)$ is not complete?
Let $X = \mathbb{R}$ with the Euclidean norm and let $Y$ be a normed space which is not complete. You should find that $B(X, Y) \simeq Y$.
Indefinite summation of polynomials I've been experimenting with the summation of polynomials. My line of attack is to treat the subject the way I would for calculus, but not using limits. By way of a very simple example, suppose I wish to add the all numbers between $10$ and $20$ inclusive, and find a polynomial which I can plug the numbers into to get my answer. I suspect its some form of polynomial with degree $2$. So I do a integer 'differentiation': $$ \mathrm{diff}\left(x^{2}\right)=x^{2}-\left(x-1\right)^{2}=2x-1 $$ I can see from this that I nearly have my answer, so assuming an inverse 'integration' operation and re-arranging: $$ \frac{1}{2}\mathrm{diff}\left(x^{2}+\mathrm{int}\left(1\right)\right)=x $$ Now, I know that the 'indefinite integral' of 1 is just x, from 'differentiating' $x-(x-1) = 1$. So ultimately: $$ \frac{1}{2}\left(x^{2}+x\right)=\mathrm{int}\left(x\right) $$ So to get my answer I take the 'definite' integral: $$ \mathrm{int}\left(x\right):10,20=\frac{1}{2}\left(20^{2}+20\right)-\frac{1}{2}\left(9^{2}+9\right)=165 $$ (the lower bound needs decreasing by one) My question is, is there a general way I can 'integrate' any polynomial, in this way? Please excuse my lack of rigour and the odd notation.
You seem to be reaching for the calculus of finite differences, once a well-known topic but rather unfashionable these days. The answer to your question is yes: given a polynomial $f(x)$ there is a polynomial $g(x)$ (of degree one greater than $f$) such that $$f(x)=g(x)-g(x-1).$$ This polynomial $g$ (like the integral of $f$) is unique save for its constant term. Once one has $g$ then of course $$f(a)+f(a+1)+\cdots+f(b)=g(b)-g(a-1).$$ When $f(x)=x^n$ is a monomial, the coefficients of $g$ involve the endlessly fascinating Bernoulli numbers.
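A sketch of computing such a $g$ mechanically with SymPy, peeling off one leading term at a time exactly as in the question's example (the helper name is mine):

```python
import sympy as sp

x = sp.symbols('x')

def antidifference(f):
    """Return a polynomial g with g(x) - g(x - 1) = f(x) (constant term 0),
    built by matching one leading term at a time."""
    g = sp.Integer(0)
    r = sp.expand(f)
    while r != 0:
        d = sp.degree(r, x)
        c = sp.LC(r, x)
        # the leading term of x^(d+1) - (x-1)^(d+1) is (d+1) x^d
        g += c * x**(d + 1) / (d + 1)
        r = sp.expand(f - (g - g.subs(x, x - 1)))
    return sp.expand(g)

g = antidifference(x)
print(g)                                  # x**2/2 + x/2, i.e. x(x+1)/2
print(g.subs(x, 20) - g.subs(x, 9))       # sum of 10..20 = 165
```

Each pass lowers the degree of the remainder, so the loop terminates; applied to $x^n$ it reproduces the Bernoulli-number formulas mentioned above.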
Is the function $(x, a) \mapsto (F(x,a), a)$ continuous whenever $F$ is? Let $A$, $X$, and $Y$ be arbitrary topological spaces. Let $F:X\times A\rightarrow Y$ be a continuous function. Let $P$ be the map from $X\times A$ to $Y\times A$ taking $(x,a)$ to $(F(x,a),a)$. Does it follow from continuity of $F$ that $P$ is continuous?
Yes. This follows from the fact that a function $U \to V \times W$ is continuous if and only if its component functions $U \to V, U \to W$ are, and from the fact that the projection maps $V \times W \to V$ and $V \times W \to W$ are continuous. Both of these facts in turn follow from the universal property of the product topology.
Why is Gimbal Lock an issue? I understand what the problem with Gimbal Lock is, such that at the North Pole, all directions are south, there's no concept of east and west. But what I don't understand is why this is such an issue for navigation systems? Surely if you find you're in Gimbal Lock, you can simply move a small amount in any direction, and then all directions are right again? Why does this cause such a problem for navigation?
I don't imagine that this is a practical issue for navigation any longer, given the advent of GPS technology. However, it is of practical concern in 3-d animation and robotics. To get back to your navigation example, suppose that I have a mechanical gyro mounted in an airplane flying over the North Pole. If the gyro is only mounted on three gimbals, one of the gimbals will freeze because moving smoothly to the proper orientation would require at least one of the gimbals to flip 180 degrees instantaneously. The gimbal lock problem can be countered by adding a redundant degree of freedom in the form of an extra gimbal, an extra joint in a robotic arm, etc. As you pointed out, it's the singularity at the poles of the representation that's the problem. Having a redundant degree of freedom helps because you can have enough information at the pole to move the correct direction. In 3-d graphics, if an axis-angle representation (Euler axis and angle) or quaternions are used instead of a triple-axis representation (Euler angles), then weird rotation artifacts due to gimbal lock are eliminated (performing a YouTube search for "gimbal lock" yields several visual demonstrations of the problem).
How to find eigenvectors/eigenvalues of a matrix where each diagonal entry is scalar $d$ and all other entries are $1$ How would you find eigenvalues/eigenvectors of a $n\times n$ matrix where each diagonal entry is scalar $d$ and all other entries are $1$ ? I am looking for a decomposition but cannot find anything for this. For example: $\begin{pmatrix}2&1&1&1\\1&2&1&1\\1&1&2&1\\1&1&1&2\end{pmatrix}$
The matrix is $(d-1)I + J$ where $I$ is the identity matrix and $J$ is the all-ones matrix, so once you have the eigenvectors and eigenvalues of $J$ the eigenvectors of $(d-1)I + J$ are the same and the eigenvalues are each $d-1$ greater. (Convince yourself that this works.) But $J$ has rank $1$, so it has eigenvalue $0$ with multiplicity $n-1$. The last eigenvalue is $n$, and it's quite easy to write down all the eigenvectors.
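A quick numerical sanity check of this decomposition (Python with numpy — my own illustration, not part of the original answer), using the $4\times 4$ example with $d=2$:

```python
import numpy as np

# the example matrix: d = 2 on the diagonal, 1 elsewhere (n = 4),
# written as (d-1)I + J exactly as in the answer
d, n = 2, 4
M = (d - 1) * np.eye(n) + np.ones((n, n))

# expected: eigenvalue d-1 with multiplicity n-1, plus d-1+n once
eigenvalues = np.sort(np.linalg.eigvalsh(M))
```

For $d=2$, $n=4$ this gives eigenvalues $1,1,1,5$, matching "eigenvalues of $J$ shifted up by $d-1$".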
Methods to see if a polynomial is irreducible Given a polynomial over a field, what are the methods to see it is irreducible? Only two come to mind now. First is the Eisenstein criterion. Another is that if a polynomial is irreducible mod $p$ then it is irreducible. Are there any others?
One method for polynomials over $\mathbb{Z}$ is to use complex analysis to say something about the location of the roots. Often Rouche's theorem is useful; this is how Perron's criterion is proven, which says that a monic polynomial $x^n + a_{n-1} x^{n-1} + ... + a_0$ with integer coefficients is irreducible if $|a_{n-1}| > 1 + |a_{n-2}| + ... + |a_0|$ and $a_0 \neq 0$. A basic observation is that knowing a polynomial is reducible places constraints on where its roots can be; for example, if a monic polynomial with prime constant coefficient $p$ is reducible, one of its irreducible factors has constant term $\pm p$ and the rest have constant term $\pm 1$. It follows that the polynomial has at least one root inside the unit circle and at least one root outside. An important thing to keep in mind here is that there exist irreducible polynomials over $\mathbb{Z}$ which are reducible modulo every prime. For example, $x^4 + 16$ is such a polynomial. So the modular technique is not enough in general.
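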
How to get inverse of this matrix using least amount of space? I'm working on a problem from a past exam and I'm stuck, so I'm asking for help. Here it is: $A = \frac12 \left[\begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{array}\right]$ find $\mathbf A^{-1}$. My problem isn't the inverse matrix itself. We just get the determinant, see if it's zero or not, get the adjoint matrix and divide it by the determinant. My problem is space. As you can see, it's a 4x4 matrix, meaning that I'd have to do sixteen 3x3 determinants to get the adjoint matrix plus 2 3x3 determinants to get the determinant of the matrix. Now we get one A3 piece of paper for 6 problems. The problems are printed on one side and the other side is blank. This and the fact that the inverse matrix is $A = \frac12 \left[\begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{array}\right]$ led me to believe that there's some catch that I do not see. Any ideas what it could be? Also if someone could edit these matrices from MATLAB format into something that this site will parse would be great! EDIT Unfortunately it seems that TeX code for matrices doesn't work here. Here's the matrix in MATLAB form, if anyone wants it A=(1/2)*[1,1,1,1;1,1,-1,-1;1,-1,1,-1;1,-1,-1,1]; EDIT 2 Answer by Jack Schmidt contains code for matrices.
Gauss/Jordan elimination will do it. It'll let you find $A^{-1}$ without the bother of finding the determinant. Just augment your original matrix with the identity and let her rip. On an aside, you can still deduce the determinant from the inverse: since $A^{-1} = \frac{1}{\det A}\operatorname{adj}(A)$, the determinant is equal to the lowest common denominator of all of the elements of the inverse.
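A quick check with numpy (my own illustration, not part of the answer — `np.linalg.inv` is LU-based rather than literal Gauss-Jordan, but the result is the same). It also reveals the "catch" the asker suspected: this matrix is its own inverse, since $A^2 = I$.

```python
import numpy as np

A = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1, -1, -1],
                    [1, -1, 1, -1],
                    [1, -1, -1, 1]], dtype=float)

A_inv = np.linalg.inv(A)   # numerically invert
product = A @ A            # A is involutory: A @ A should be the identity
```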
Does there exist a bijective $f:\mathbb{N} \to \mathbb{N}$ such that $\sum f(n)/n^2$ converges? We know that $\displaystyle\zeta(2)=\sum\limits_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$ and it converges. * *Does there exist a bijective map $f:\mathbb{N} \to \mathbb{N}$ such that the sum $$\sum\limits_{n=1}^{\infty} \frac{f(n)}{n^2}$$ converges? If the exponent $s=2$ is not fixed, can we have a function such that $\displaystyle \sum\limits_{n=1}^{\infty} \frac{f(n)}{n^s}$ converges?
We show that for any bijective $f: \mathbb{N} \rightarrow \mathbb{N}$ this series is not Cauchy. Suppose it is: then for any given $\epsilon > 0$ there exists $N$ such that $\sum_{n=N}^{2N} \frac{f(n)}{n^2} < \epsilon$. But since $f$ is a bijection, the values $f(N),\dots,f(2N)$ are $N+1$ distinct positive integers, so $\sum_{n=N}^{2N} \frac{f(n)}{n^2} \geq \frac{1}{(2N)^2}\sum_{n=N}^{2N}f(n)\geq \frac{1}{(2N)^2} \cdot \frac{N(N+1)}{2}=\frac{N+1}{8N}=\frac{1}{8}+\frac{1}{8N}$. If we choose $\epsilon < \frac{1}{8}$ we get a contradiction.
Division of Factorials [binomal coefficients are integers] I have a partition of a positive integer $(p)$. How can I prove that the factorial of $p$ can always be divided by the product of the factorials of the parts? As a quick example $\frac{9!}{(2!3!4!)} = 1260$ (no remainder), where $9=2+3+4$. I can nearly see it by looking at factors, but I can't see a way to guarantee it.
If you believe (:-) in the two-part Newton case, then the rest is easily obtained by induction. For instance (to motivate you to write a full proof): $$\frac{9!}{2! \cdot 3! \cdot 4!}\ =\ \frac{9!}{5!\cdot 4!}\cdot \frac{5!}{2!\cdot 3!}$$
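A quick arithmetic check of the example and of the telescoping identity in the answer (Python, my own illustration):

```python
from math import factorial, comb

# the example from the question: 9! / (2! 3! 4!) = 1260
multinomial = factorial(9) // (factorial(2) * factorial(3) * factorial(4))

# the induction step: 9!/(2!3!4!) = [9!/(5!4!)] * [5!/(2!3!)] = C(9,4) * C(5,2),
# a product of binomial coefficients, each of which is an integer
via_binomials = comb(9, 4) * comb(5, 2)
```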
For any $n$, is there a prime factor of $2^n-1$ which is not a factor of $2^m-1$ for $m < n$? Is it guaranteed that there will be some $p$ such that $p\mid2^n-1$ but $p\nmid 2^m-1$ for any $m<n$? In other words, does each $2^x-1$ introduce a new prime factor?
No. $2^6-1 = 3^2 \cdot 7$. But we have that $3|2^2-1$ and $7|2^3-1$
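A brute-force check (Python, my own illustration) that $n=6$ is the only failure in a small range — and in fact, by Zsygmondy's theorem, it is the only failure for $n > 1$ at all:

```python
def prime_factors(n):
    """Set of prime divisors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

# which n in 2..20 fail to introduce a new prime factor of 2^n - 1?
seen, no_new_prime = set(), []
for n in range(2, 21):
    ps = prime_factors(2 ** n - 1)
    if not ps - seen:
        no_new_prime.append(n)
    seen |= ps
```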
Proof that $1+2+3+4+\cdots+n = \frac{n\times(n+1)}2$ Why is $1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2$ $\space$ ?
You can also prove it by induction, which is nice and easy, although it doesn't give much intuition as to why it works. It works for $1$ since $\frac{1\times 2}{2} = 1$. Let it work for $n$. $$1 + 2 + 3 + \dots + n + (n + 1) = \frac{n(n+1)}{2} + (n + 1) = \frac{n(n+1) + 2(n+1)}{2} = \frac{(n+1)(n+2)}{2}.$$ Therefore, if it works for $n$, it works for $n + 1$. Hence, it works for all natural numbers. For the record, you can see this by applying the formula for the sum of an arithmetic progression (a sequence formed by constantly adding a rate to each term, in this case $1$). The formula is reached pretty much using the method outlined by Carl earlier in this post. Here it is, in all its glory: $$S_n = \frac{(a_1 + a_n)\cdot n}{2}$$ ($a_1$ = first term, $a_n$ = last term, $n$ = number of terms being added).
Applications of algebraic topology What are some nice applications of algebraic topology that can be presented to beginning students? To give examples of what I have in mind: Brouwer's fixed point theorem, Borsuk-Ulam theorem, Hairy Ball Theorem, any subgroup of a free group is free. The deeper the methods used, the better. All the above can be proved with just the fundamental group. More involved applications would be nice.
How about the ham sandwich theorem? Or is that Ham and Cheese? Given 3 regions in space, there is a plane which bisects all 3 of them simultaneously.
How to prove $(f \circ\ g) ^{-1} = g^{-1} \circ\ f^{-1}$? (inverse of composition) I'm doing exercise on discrete mathematics and I'm stuck with question: If $f:Y\to Z$ is an invertible function, and $g:X\to Y$ is an invertible function, then the inverse of the composition $(f \circ\ g)$ is given by $(f \circ\ g) ^{-1} = g^{-1} \circ\ f^{-1}$. I've no idea how to prove this, please help me by give me some reference or hint to its solution.
Use the definition of an inverse and associativity of composition to show that the right hand side is the inverse of $(f \circ g)$.
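To see the identity in action (my own illustration, not part of the hint), here is a small check with permutations of $\{0,1,2\}$ represented as dicts — note also that composing the inverses in the *same* order gives the wrong answer:

```python
f = {0: 1, 1: 0, 2: 2}   # a transposition
g = {0: 1, 1: 2, 2: 0}   # a 3-cycle

def compose(f, g):
    """(f o g)(x) = f(g(x))"""
    return {x: f[g[x]] for x in g}

def inverse(f):
    return {v: k for k, v in f.items()}

lhs = inverse(compose(f, g))                 # (f o g)^{-1}
rhs = compose(inverse(g), inverse(f))        # g^{-1} o f^{-1}
wrong_order = compose(inverse(f), inverse(g))
```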
Definition of Algebraic Objects How did the definitions of algebraic objects like group, ring and field come up? When groups were first introduced, were they given the 4 axioms as we give now? And what made mathematicians think of something like this?
Let me address the question: "what made Mathematicians ... think of something like this"? The answer is: Galois, in studying the problem of factorization of polynomials, realized that reasoning about the symmetries of the roots could be more powerful a tool than studying explicit formulas (which, loosely speaking, had been the basic method in the theory of equations up to that time). As this structural/conceptual point of view began to reveal its power in solving difficult concrete problems, more and more mathematicians began to think in this way. The structural concepts were then isolated from their concrete settings, and this is how they are taught today. But the motivation was, and for many remains, the applications of these abstract notions to concrete problems. (A standard but helpful example is the pivotal role that group theory, Galois theory, cohomology, and ring theory played in the proof of Fermat's Last Theorem.) Furthermore, nowadays we train ourselves and our students to recognize any hint of structure in a problem, and to exploit such structure to the hilt. So these concepts have become basic tools in the problem-solving toolkits of contemporary mathematicians.
Formal notation for number of rows/columns in a matrix Is there a generally accepted formal notation for denoting the number of columns a matrix has (e.g to use in pseudocode/algorithm environment in LaTeX)? Something I could use in the description of an algorithm like: if horizontaldim(V) > x then end if or if size(V,2) > x then end if or should I just use a description like if number of columns in V > x then end if
None that I know of, but I've seen numerical linear algebra books (e.g. Golub and Van Loan) just say something like $V\in\mathbb{R}^{m\times n}$ for a matrix V with m rows and n columns, and then use m and n in the following algorithm description. MATLAB notation, which some other people use, just says rows(V) and columns(V).
Solving a quadratic inequality $x^2-3x-10>0$ I am solving the following inequality, please look at it and tell me whether am I correct or not. This is an example in Howard Anton's book and I solved it on my own as given below, but the book has solved it differently! I want to confirm that my solution is also valid.
If you graph the function $y=x^2-3x-10$, you can see that the solution is $x<-2$ or $x>5$.
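The picture can also be backed up algebraically by factoring (a worked version of the same reasoning): $$x^2 - 3x - 10 = (x-5)(x+2) > 0.$$ A product of two factors is positive exactly when both factors have the same sign: both positive gives $x > 5$, and both negative gives $x < -2$, which is exactly where the graph lies above the $x$-axis.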
Separability of $ L_p $ spaces I would like to know if the Lebesgue spaces $L_p$ with $ 0 < p < 1 $ are separable or not. I know that this is true for $1 \leq p < + \infty$, but I do not find any references for the case $ 0 < p < 1 $. Thank you
Please refer to this article. It talks about $L_{p}$ spaces for $0 < p \leq 1$. Link: http://www.jstor.org/stable/2041603?seq=2 Look at the step functions, the ones that take rational values and whose steps have rational endpoints; there are only countably many of those. And then you can perhaps apply the same argument you use to prove it for $L_{p}$ spaces for $1 < p < \infty$.
probability and statistics: Does having little correlation imply independence? Suppose there are two correlated random variables having a very small correlation coefficient (on the order of $10^{-1}$). Is it valid to approximate them as independent random variables?
It depends on what else you know about the relationship between the variables. If the correlation coefficient is the full extent of your information, then the approximation is unsafe, as Noldorin points out. If, on the other hand, you have good external evidence that the coefficient adequately captures the level of a small linear relationship (e.g., a slight dependence on some third quantity that is not germane to your analysis), then it may well be valid to approximate them as independent for some purposes. RVs about which you know nothing are useful abstractions -- and this is, after all, the maths site -- but real world data often exist in less of a vacuum. If you're analysing in the context of a model, that may help you to work out what approximations you can get away with.
What is the importance of the Collatz conjecture? I have been fascinated by the Collatz problem since I first heard about it in high school. Take any natural number $n$. If $n$ is even, divide it by $2$ to get $n / 2$, if $n$ is odd multiply it by $3$ and add $1$ to obtain $3n + 1$. Repeat the process indefinitely. The conjecture is that no matter what number you start with, you will always eventually reach $1$. [...] Paul Erdős said about the Collatz conjecture: "Mathematics is not yet ready for such problems." He offered $500 USD for its solution. QUESTIONS: How important do you consider the answer to this question to be? Why? Would you speculate on what might have possessed Paul Erdős to make such an offer? EDIT: Is there any reason to think that a proof of the Collatz Conjecture would be complex (like the FLT) rather than simple (like PRIMES is in P)? And can this characterization of FLT vs. PRIMES is in P be made more specific than a bit-length comparison?
Aside from the mathematical answers provided by others, the computational verification of the Collatz problem is a good exercise for programmers. There are many optimization opportunities (e.g., time-space trade-off using lookup tables, parallelism), many pitfalls (e.g., integer type overflow), possibilities of exploiting various implementation tricks (e.g., count trailing zeros instructions available in modern hardware), etc. It is a simple task where you can practice many basic programming constructions (branching the program, do-while loops, recursion). And for these reasons, this is arguably the most common task you can find in many online or university courses (e.g., Harvard University's CS50 course).
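A minimal sketch of the basic loop (Python, my own illustration — the answer doesn't fix a language; note Python's arbitrary-precision integers sidestep the overflow pitfall mentioned above):

```python
def collatz_steps(n: int) -> int:
    """Number of steps for n to reach 1 under the 3n+1 map."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For example, $27$ famously takes $111$ steps to reach $1$, despite climbing as high as $9232$ along the way.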
When is $n^2+n+p$ prime? Possible Duplicate: Behaviour of Polynomials in a PID! Prove: if $p$ is a prime, and if $n^2+n+p$ is prime for $0\leq n \leq \sqrt{p/3}$, then it is also prime for $0 \leq n \leq p-2$. This appeared on reddit recently, but no proof was posted. With $p=41$, it is Euler's famous prime-generating polynomial.
This follows by employing in Rabinowitsch's proof a Gauss bound, e.g. see Theorem 9.1 here.
Companions to Rudin? I'm starting to read Baby Rudin (Principles of mathematical analysis) now and I wonder whether you know of any companions to it. Another supplementary book would do too. I tried Silvia's notes, but I found them a bit too "logical" so to say. Are they good? What else do you recommend?
There is a set of notes and additional exercises due to George Bergman. See his web page... http://math.berkeley.edu/~gbergman/ug.hndts/
How to tell if a line segment intersects with a circle? Given a line segment, denoted by it's $2$ endpoints $(X_1, Y_1)$ and $(X_2, Y_2)$, and a circle, denoted by it's center point $(X_c, Y_c)$ and a radius $R$, how can I tell if the line segment is a tangent of or runs through this circle? I don't need to be able to discern between tangent or running through a circle, I just need to be able to discern between the line segment making contact with the circle in any way and no contact. If the line segment enters but does not exit the circle (if the circle contains an endpoint), that meets my specs for it making contact. In short, I need a function to find if any point of a line segment lies in or on a given circle. EDIT: My application is that I'm using the circle as a proximity around a point. I'm basically testing if one point is within R distance of any point in the line segment. And it must be a line segment, not a line.
There are too many answers already, but since no one mentioned this, perhaps this might still be useful. You can consider using Polar Coordinates. Translate so that the center of the circle is the origin. The equation of a line in polar coordinates is given by $r = p \sec (\theta - \omega)$ See the above web page for what $\omega$ is. You can compute $p$ and $\theta$ by using the two endpoints of the segment. If R is the radius of the circle, you need to find all $\theta$ in $[0, 2\pi]$ such that $R = p \sec (\theta - \omega)$ Now all you need to check is if this will allow the point to fall within the line segment.
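Since the asker wanted "a function", here is a sketch in Python (my own illustration, and it uses the common closest-point-on-segment approach rather than the polar-coordinate method described above): project the circle's center onto the segment, clamp the projection parameter to $[0,1]$, and compare the resulting distance to $R$.

```python
import math

def segment_hits_circle(x1, y1, x2, y2, xc, yc, r):
    """True if any point of the segment lies in or on the circle
    (so an endpoint inside the circle counts, per the question's spec)."""
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:                      # degenerate segment: a point
        return math.hypot(x1 - xc, y1 - yc) <= r
    # parameter of the projection of the center onto the line, clamped to [0,1]
    t = ((xc - x1) * dx + (yc - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(x1 + t * dx - xc, y1 + t * dy - yc) <= r
```

Tangency counts as contact here because of the `<=`; change it to `<` to exclude tangents.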
Show $\sqrt 3$ is irrational using $3p^2=q^2$ implies $3|p$ and $3|q$ This is a problem from "Introduction to Mathematics - Algebra and Number Systems" (specifically, exercise set 2 #9), which is one of my math texts. Please note that this isn't homework, but I would still appreciate hints rather than a complete answer. The problem reads as follows: If $3p^2 = q^2$, where $p,q \in \mathbb{Z}$, show that 3 is a common divisor of $p$ and $q$. I am able to show that 3 divides $q$, simply by rearranging for $p^2$ and showing that $$p^2 \in \mathbb{Z} \Rightarrow q^2/3 \in \mathbb{Z} \Rightarrow 3|q$$ However, I'm not sure how to show that 3 divides $p$. Edit: Moron left a comment below in which I was prompted to apply the solution to this question as a proof of $\sqrt{3}$'s irrationality. Here's what I came up with... [incorrect solution...] ...is this correct? Edit: The correct solution is provided in the comments below by Bill Dubuque.
Try writing out the prime factorization of both sides. Now compare the exponent of 3 on the left and on the right: on the right-hand side it is even (twice the exponent of 3 in $q$), while on the left-hand side it is odd. Contradiction.
Why are $x$ and $y$ such common variables in today's equations? How did their use originate? I can understand how the Greek alphabet came to be prominent in mathematics as the Greeks had a huge influence in the math of today. Certain letters came to have certain implications about their meaning (i.e. $\theta$ is almost always an angle, never a function). But why did $x$ and $y$ come to prominence? They seem like $2$ arbitrary letters for input and output, and I can't think why we began to use them instead of $a$ and $b$. Why did they become the de facto standard for Cartesian coordinates?
This question has been asked previously on MathOverflow, and answered (by Mariano Suárez-Alvarez). You can follow this link, and I quote his response below. You'll find details on this point (and precise references) in Cajori's History of mathematical notations, ¶340. He credits Descartes in his La Géometrie for the introduction of x, y and z (and more generally, usefully and interestingly, for the use of the first letters of the alphabet for known quantities and the last letters for the unknown quantities) He notes that Descartes used the notation considerably earlier: the book was published in 1637, yet in 1629 he was already using x as an unknown (although in the same place y is a known quantity...); also, he used the notation in manuscripts dated earlier than the book by years. It is very, very interesting to read through the description Cajori makes of the many, many other alternatives to the notation of quantities, and as one proceeds along the almost 1000 pages of the two volume book, one can very much appreciate how precious are the notations we so much take for granted!
Which one result in mathematics has surprised you the most? A large part of my fascination in mathematics is because of some very surprising results that I have seen there. I remember one I found very hard to swallow when I first encountered it, was what is known as the Banach Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius! So I ask you which are your most surprising moments in maths? * *Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
I was very surprised to learn about the Cantor set, and all of its amazing properties. The first one I learnt is that it is uncountable (I would never have guessed), and that it has measure zero. I was shown this example as a freshman undergraduate, as an example of a function that is Riemann-integrable but whose set of points of discontinuity is uncountable (equivalently, that this set has measure zero). This came more as a shock to me, since I had already studied some basic integrals in high school, and we had defined the integral only for continuous functions. Later, after learning topology and when learning measure theory, I was extremely shocked to see that this set can be modified to a residual set of measure zero! I think the existence of such sets and the disconnectedness of topology and measure still gives me the creeps...
Which one result in mathematics has surprised you the most? A large part of my fascination in mathematics is because of some very surprising results that I have seen there. I remember one I found very hard to swallow when I first encountered it, was what is known as the Banach Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius! So I ask you which are your most surprising moments in maths? * *Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
Rather basic, but it was surprising for me: For any matrix, column rank = row rank.
Which one result in mathematics has surprised you the most? A large part of my fascination in mathematics is because of some very surprising results that I have seen there. I remember one I found very hard to swallow when I first encountered it, was what is known as the Banach Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius! So I ask you which are your most surprising moments in maths? * *Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
The Chinese Magic Square: $$\begin{array}{ccc} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{array}$$ It adds up to 15 in every direction! Awesome! And the Chinese evidently thought so too, since they incorporated it into their religious writings.
Could you explain why $\frac{d}{dx} e^x = e^x$ "intuitively"? As the title implies, It is seems that $e^x$ is the only function whoes derivative is the same as itself. thanks.
Suppose $\frac{d}{dx}f(x)=f(x)$. Then for small $h$, $f(x+h)=f(x)+hf(x)=f(x)(1+h)$. If we do this for a lot of small intervals of length $h$, we see $f(x+a)=(1+h)^{a/h}f(x)$. (Does this ring a bell already?) Setting $x=0$ in the above, and fixing $f(0)=1$, we then have $f(1)=(1+h)^{1/h}$, which in limit as $h\rightarrow 0$ goes to $e$. And continuing $f(x)=(1+h)^{x/h}$, which goes to $e^x$.
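A quick numeric version of this compounding argument (Python, my own illustration): starting from $f(0)=1$ and applying $f(x+h)\approx f(x)(1+h)$ over a million tiny steps lands us at $(1+h)^{1/h}$, which is already very close to $e$.

```python
import math

# compound f(x + h) ~ f(x)(1 + h) over n tiny steps from f(0) = 1 up to f(1)
n = 1_000_000
h = 1.0 / n
approx_e = (1 + h) ** n   # this is (1 + h)^(1/h)
```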
Find the coordinates in an isosceles triangle Given: $A = (0,0)$ $B = (0,-10)$ $AB = AC$ Using the angle between $AB$ and $AC$, how are the coordinates at C calculated?
edit (to match revised question): Given your revised question, there is still the issue of C being on either side of the y-axis, but you have specified that AB=AC and that you are given $\mathrm{m}\angle BAC$ (the angle between AB and AC), so as in my original answer (below), the directed (trigonometric) measure of the angle from the positive x-axis to AC is $\mathrm{m}\angle BAC-90^{\circ}$ and AC=AB=10, so C has coordinates $(10\cos(\mathrm{m}\angle BAC-90^{\circ}),10\sin(\mathrm{m}\angle BAC-90^{\circ}))$. (This matches up to one of the answers in Moron's solution; the other corresponds to the other side of the y-axis.) original answer (when it was not specified that AB=AC and when the given angle was C): As suggested in the comments, there are several cases. First, C could be on either side of the y-axis; let's assume that C has positive x-coordinate (leaving the case where it has negative x-coordinate for you to solve). Second, ABC could be isosceles with AB=AC, AB=BC, or AC=BC. In the first case, $\angle B\cong \angle C$ (which cannot happen unless C is acute) and $\mathrm{m}\angle BAC=180^{\circ}-2\mathrm{m}\angle C$, so the directed (trigonometric) measure of the angle from the positive x-axis to AC is $90^{\circ}-2\mathrm{m}\angle C$ and AC=AB=10, so C has coordinates $(10\cos(90^{\circ}-2\mathrm{m}\angle C),10\sin(90^{\circ}-2\mathrm{m}\angle C))$. The second case is similar to the first (so it's left for you to solve). In the third case, C is equidistant from A and B, so C must lie on the perpendicular bisector of AB (as in J. Mangaldan's comment), and by symmetry this perpendicular bisector of AB also bisects $\angle ACB$; from there, you can use right triangle trigonometry to determine the coordinates of C (left for you to solve). The cases where AB=AC (blue), AB=BC (red), and AC=BC (green) (lighter versions on the left side of the y-axis) are shown below for measures of angle C between 0 and 180°.
Einstein notation - difference between vectors and scalars From Wikipedia: First, we can use Einstein notation in linear algebra to distinguish easily between vectors and covectors: upper indices are used to label components (coordinates) of vectors, while lower indices are used to label components of covectors. However, vectors themselves (not their components) have lower indices, and covectors have upper indices. I am trying to read the Wikipedia article, but I am constantly getting confused between what represents a vector/covector and what represents a component of one of these. How can I tell?
A vector component is always written with 1 upper index $a^i$, while a covector component is written with 1 lower index $a_i$. In Einstein notation, if the same index variable appear in both upper and lower positions, an implicit summation is applied, i.e. $$ a_i b^i = a_1 b^1 + a_2 b^2 + \dotsb \qquad (*) $$ Now, a vector is constructed from its component as $$ \mathbf a = a^1 \mathbf{\hat e}_1 + a^2 \mathbf{\hat e}_2 + \dotsb $$ where $\mathbf{\hat e}_i$ are the basis vectors. But this takes the form like (*), so if we make basis vectors to take lower indices, we will get $$ \mathbf a = a^i \mathbf{\hat e}_i $$ This is likely what Wikipedia means.
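As a concrete numerical illustration of the implicit summation in $(*)$ (my own addition — numpy's `einsum` uses exactly this convention, though with no up/down distinction):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # components a_i of a covector
b = np.array([4.0, 5.0, 6.0])   # components b^i of a vector

# the repeated index i is summed over, giving a_1 b^1 + a_2 b^2 + a_3 b^3
contraction = np.einsum('i,i->', a, b)
```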
How to convert a hexadecimal number to an octal number? How can I convert a hexadecimal number, for example 0x1A03 to its octal value? I know that one way is to convert it to decimal and then convert it to octal 0x1A03 = 6659 = 0o15003 * *Is there a simple way to do it without the middle step (conversion to decimal or conversion to binary)? *Why do we tend to convert it to Base10 every time?
A simpler way is to go through binary (base 2) instead of base 10. 0x1A03 = 0001 1010 0000 0011 Now group the bits in bunches of 3 starting from the right 0 001 101 000 000 011 This gives 0 1 5 0 0 3 Which is your octal representation.
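The same grouping trick, mechanized (Python, my own illustration; `oct` is shown only as a cross-check):

```python
x = 0x1A03

# pad the binary form to a multiple of 3 bits, then read off groups of 3
bits = format(x, 'b')
bits = bits.zfill(-(-len(bits) // 3) * 3)   # round length up to a multiple of 3
octal_digits = ''.join(str(int(bits[i:i + 3], 2))
                       for i in range(0, len(bits), 3))
```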
What does "only" mean? I understand the technical and logical distinction between "if" and "only if" and "if and only if". But I have always been troubled by the phrase "only if" even though I am able to parse and interpret it. Also in my posts on this and other sites I have frequently had to make edits to migrate the term "only", sometimes across multiple structural boundaries of a sentence, which is empirical evidence to myself that I don't intuitively know the meaning of the word. Is there any simple rule that I can use to determine whether or not it is appropriate to use this word in a particular context in order to achieve more clarity? In mathematical discourse, what are some other common lexical contexts, meaningful or not, in which appears the word "only"? Why do I often write "only" in the wrong place?
I think analogies in plain English are the way to internalize this... so here's one: Given that you want to wear socks with your shoes, put your shoes on only if you have already put your socks on. The idea is that there is no other way to arrive at the state of having your socks and shoes on (aside from the ridiculous possibility of placing your socks over your shoes).
What is $\sqrt{i}$? If $i=\sqrt{-1}$, is $\large\sqrt{i}$ imaginary? Is it used or considered often in mathematics? How is it notated?
With a little bit of manipulation you can make use of the quadratic equation, since you are really looking for the solutions of $x^2 - i = 0$; unfortunately if you apply the quadratic formula directly you gain nothing new, but... Since $i^2 = -1$, multiply both sides of our original equation by $i$ and you will have $ix^2 +1 =0$; now both equations have exactly the same roots, and so will their sum. $$(1+i)x^2 + (1-i) = 0 $$ Apply the quadratic formula to this last equation, simplify, and you will get $x=\pm\frac{\sqrt{2}}{2}(1+i)$.
How many ways are there to define sine and cosine? Sometimes there are many ways to define a mathematical concept, for example the natural base logarithm. How about sine and cosine? Thanks.
An interesting construction is given by Michael Spivak in his book Calculus, chapter 15. The steps are basically the following: $1.$ We define what a directed angle is. $2.$ We define a unit circle by $x^2+y^2=1$, and show that every angle between the $x$-axis and a ray from $(0,0)$ defines a point $(x,y)$ in that circle. $3.$ We define $x = \cos \theta$ and $y = \sin \theta$. $4.$ We note that the area of the circular sector corresponding to the angle $\theta$ is always $\theta/2$, so maybe we can define these functions explicitly with this fact: $5.$ We define $\pi$ as the area of the unit circle, this is: $$\pi = 2 \int_{-1}^1 \sqrt{1-x^2} dx$$ $6.$ We give an explicit formula for the area of the circular sector, namely: $$A(x) = \frac{x\sqrt{1-x^2}}{2}+\int_x^1 \sqrt{1-t^2}\,dt$$ and show that it is continuous, and takes all values from $0$ to $\pi/2$. We may also plot it, since we can show that $2A(x)$ is actually the inverse of $\cos x$. $7.$ We define $\cos x$ as the only number in $[-1,1]$ such that $$A(\cos x) = \frac{x}{2}$$ and thus define $$\sin x = \sqrt{1-\cos^2x}$$ $8.$ We show that for $0<x<\pi$ $$\cos(x)' = - \sin(x)$$ $$\sin(x)' = \cos(x)$$ $9.$ We finally extend the functions to all real values by showing that for $\pi \leq x \leq 2\pi$, $$-\sin(2\pi-x) = \sin x$$ $$\cos(2\pi-x) = \cos x$$ and then that $$\cos(2\pi k+x) = \cos x$$ $$\sin(2\pi k+x) = \sin x$$
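A numerical sanity check of step 6's claim that $2A(x)$ inverts $\cos$ (Python, my own illustration — the integral is approximated with a simple midpoint rule, which is plenty accurate here):

```python
import math

def A(x, steps=200_000):
    """The sector-area function A(x) from step 6, via the midpoint rule."""
    h = (1.0 - x) / steps
    integral = h * sum(math.sqrt(1.0 - (x + (i + 0.5) * h) ** 2)
                       for i in range(steps))
    return x * math.sqrt(1.0 - x * x) / 2.0 + integral
```

Evaluating at $x=\cos\theta$, we should recover $2A(\cos\theta)=\theta$.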
When does the product of two polynomials = $x^{k}$? Suppose $f$ and $g$ are are two polynomials with complex coefficents (i.e $f,g \in \mathbb{C}[x]$). Let $m$ be the order of $f$ and let $n$ be the order of $g$. Are there some general conditions where $fg= \alpha x^{n+m}$ for some non-zero $\alpha \in \mathbb{C}$
We don't need the strong property of UFD. If $\rm D$ is a domain, then $\rm x$ is prime in $\rm D[x]$ (by $\rm D[x]/x \cong D$ a domain), and products of primes factor uniquely in every domain (same simple proof as in $\Bbb Z$). In particular, the only factorizations of the prime power $\rm x^i$ are $\rm \,x^j x^k,\ i = j+k\ $ (up to associates as usual). This fails over non-domains, e.g. $\,\rm x = (2x+3)(3x+2) \in \mathbb Z/6[x].$
Rules for rounding (positive and negative numbers) I'm looking for clear mathematical rules on rounding a number to $n$ decimal places. Everything seems perfectly clear for positive numbers. Here is for example what I found on math.about.com : Rule One Determine what your rounding digit is and look to the right side of it. If that digit is $4, 3, 2, 1,$ or $0$, simply drop all digits to the right of it. Rule Two Determine what your rounding digit is and look to the right side of it. If that digit is $5, 6, 7, 8,$ or $9$ add $1$ to the rounding digit and drop all digits to the right of it. But what about negative numbers ? Do I apply the same rules as above ? For instance, what is the correct result when rounding $-1.24$ to $1$ decimal place ? $-1.3$ or $-1.2$ ?
"Round to nearest integer" is completely unambiguous, except when the fractional part of the number to be rounded happens to be exactly $\frac 1 2$. In that case, some kind of tie-breaking rule must be used. Wikipedia (currently) lists six deterministic tie-breaking rules in more or less common use: * *Round $\frac 1 2$ up *Round $\frac 1 2$ down *Round $\frac 1 2$ away from zero *Round $\frac 1 2$ towards zero *Round $\frac 1 2$ to nearest even number *Round $\frac 1 2$ to nearest odd number Of these, I'm personally rather fond of "round $\frac 1 2$ to nearest even number", also known as "bankers' rounding". It's also the default rounding rule for IEEE 754 floating-point arithmetic as used by most modern computers. According to that rule, $$\begin{aligned} 0.5 &\approx 0 & 1.5 &\approx 2 & 2.5 &\approx 2 & 3.5 &\approx 4 \\ -0.5 &\approx 0 & -1.5 &\approx -2 & -2.5 &\approx -2 & -3.5 &\approx -4. \\ \end{aligned}$$
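As it happens, Python 3's built-in `round()` implements exactly this round-half-to-even rule, so the table above can be checked directly (my own illustration; the `.5` values below are exactly representable as floats, so there is no floating-point surprise):

```python
# Python 3's round() is round-half-to-even ("bankers' rounding"),
# matching the IEEE 754 default mentioned above
positives = [round(x) for x in (0.5, 1.5, 2.5, 3.5)]
negatives = [round(x) for x in (-0.5, -1.5, -2.5, -3.5)]
```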
$F_{\sigma}$ subsets of $\mathbb{R}$ Suppose $C \subset \mathbb{R}$ is of type $F_{\sigma}$. That is, $C$ can be written as the union of sets $F_{n}$, where each $F_{n}$ is closed. Can we then prove that each point of $C$ is a point of discontinuity of some $f: \mathbb{R} \to \mathbb{R}$? I referred to this link on Wikipedia: http://en.wikipedia.org/wiki/Thomae%27s_function and in the follow-up subsection they give this result. I would like somebody to explain it more precisely.
I believe you're looking for something like the construction mentioned here.
Beta function derivation How do I derive the Beta function using the definition of the beta function as the normalizing constant of the Beta distribution and only common sense random experiments? I'm pretty sure this is possible, but can't see how. I can see that $$\newcommand{\Beta}{\mathrm{Beta}}\sum_{a=0}^n {n \choose a} \Beta(a+1, n-a+1) = 1$$ because we can imagine that we are flipping a coin $n$ times. The $2^n$ unique sequences of flips partition the probability space. The Beta distribution with parameters $a$ and $n-a$ can be defined as the prior over the coin's bias probability $p$ given the observation of $a$ heads and $n-a$ tails. Since there are ${n \choose a}$ such sequences for any $n$ and $a$, that explains the scaling factor, and we know that it all sums to unity since the sequences partition the probability space, which has total measure 1. What I can't figure out is why: $${n \choose a} \Beta(a+1, n-a+1) = \frac{1}{n+1} \qquad \forall n \ge 0,\quad a \in \{0, \dots, n\}.$$ If we knew that, we could easily see that $$\Beta(a + 1,n - a + 1) = \frac{1}{(n+1){n \choose a}} = \frac{a!(n-a)!}{(n+1)!}.$$
The multinomial generalization mentioned by Qiaochu is conceptually simple but getting the details right is messy. The goal is to compute $$\int_0^1 \int_0^{1-t_1} \ldots \int_0^{1-t_1-\ldots-t_{k-2}} t_1^{n_1} t_2^{n_2} \ldots t_{k-1}^{n_{k-1}} t_k^{n_k} dt_1 \ldots dt_{k-1},$$ where $t_k = 1 - t_1 - \ldots - t_{k-1},$ for nonnegative integers $n_1, \ldots, n_k$. Draw $k-1 + \sum_{i = 1}^{k}n_i$ numbers $X_1, \ldots, X_{k-1 + \sum_{i = 1}^{k}n_i}$ independently from a uniform $[0,1]$ distribution. Define $X_0 = 0$ for convenience. Let $E$ be the event that the numbers $X_1$ through $X_{k-1}$ are in ascending order and that, for $j = 1, \ldots, k$, the numbers $X_{k + \sum_{i = 1}^{j-1} n_i}$ through $X_{k + \sum_{i = 1}^{j}n_i - 1}$ lie between $X_{j-1}$ and $X_j$ (with the upper endpoint interpreted as $1$ when $j = k$). Define a linear transformation from $(X_1, \ldots, X_{k-1}) \to (T_1, \ldots, T_{k-1})$ by $T_i = X_i - X_{i-1}$ for $i = 1, \ldots, k-1$. Note that the determinant of this linear transformation is 1 and it is therefore measure-preserving.
Given values of $X_1$ through $X_{k-1}$, the conditional probability of $E$ is $$\mathbb{P}[E|(X_1, \ldots, X_{k-1}) = (x_1, \ldots, x_{k-1})] = \prod_{i = 1}^{k}(x_i - x_{i-1})^{n_i} \mathbf{1}_{\{x_i > x_{i-1}\}},$$ where $x_0 = 0$ and $x_k = 1$. Marginalizing with respect to the distribution of $X_1 \times \ldots \times X_{k-1}$ gives $$\begin{aligned} \mathbb{P}[E] &= \int_{0}^1 \ldots \int_{0}^1 \prod_{i = 1}^{k}(x_i - x_{i-1})^{n_i} \mathbf{1}_{\{x_i > x_{i-1}\}} p_{X_1 \times \ldots \times X_{k-1}}(x_1, \ldots, x_{k-1}) dx_{k-1} \ldots dx_{1} \\ &= \int_{0}^1 \int_{-t_1}^{1-t_1} \ldots \int_{-t_1 - \ldots - t_{k-1}}^{1 -t_1 - \ldots - t_{k-1}} \prod_{i = 1}^{k} t_i^{n_i} \mathbf{1}_{\{t_i > 0\}} p_{T_1 \times \ldots \times T_{k-1}}(t_1, \ldots, t_{k-1}) dt_{k-1} \ldots dt_{1} \\ &= \int_0^1 \int_0^{1-t_1} \ldots \int_0^{1-t_1-\ldots-t_{k-2}} t_1^{n_1} \ldots t_{k-1}^{n_{k-1}} t_k^{n_k} dt_{k-1} \ldots dt_{1}, \end{aligned}$$ so if we can compute $\mathbb{P}[E]$ combinatorially we will have evaluated the desired integral. Let $\{R_i\}_{i \in \{1, \ldots, k-1 + \sum_{i = 1}^{k}n_i\}}$ be the ranks that the numbers $\{X_i\}_{i \in \{1, \ldots, k-1 + \sum_{i = 1}^{k}n_i\}}$ would have if sorted in ascending order. (Note that the numbers are all distinct with probability 1.) Since the numbers were drawn independently from a uniform distribution, the ranks are a random permutation of the integers $1$ through $k-1 + \sum_{i = 1}^{k}n_i$. Note that $E$ is exactly the event that $R_j = j + \sum_{i = 1}^j n_i$ for $j \in \{1, \ldots, k-1\}$ and that for each $l \in \{1, \ldots, k\}$, $$R_j \in \{l + \sum_{i = 1}^{l-1} n_i, \ldots, l + \sum_{i=1}^{l}n_i - 1\}$$ for $$j \in \{k+\sum_{i = 1}^{l-1}n_i, \ldots, k + \sum_{i = 1}^{l}n_i - 1\}.$$ There are $n_1!\ldots n_k!$ possible permutations which satisfy these conditions out of $(\sum_{i=1}^{k}n_i+k-1)!$ total possible permutations, so $$\mathbb{P}[E] = \frac{n_1!\ldots n_k!}{(\sum_{i=1}^{k}n_i+k-1)!}.$$
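Stepping back to the one-dimensional identity from the question, here is a quick exact check in Python (a verification sketch, not a derivation): expand $(1-t)^{n-a}$ by the binomial theorem and integrate term by term with rational arithmetic.

```python
from fractions import Fraction
from math import comb

def beta_integral(a, n):
    """Exact value of the integral of t^a (1-t)^(n-a) over [0, 1],
    computed by binomial expansion and term-by-term integration."""
    return sum(Fraction(comb(n - a, k) * (-1) ** k, a + k + 1)
               for k in range(n - a + 1))

# Check  C(n,a) * B(a+1, n-a+1) = 1/(n+1)  for small n.
for n in range(13):
    for a in range(n + 1):
        assert comb(n, a) * beta_integral(a, n) == Fraction(1, n + 1)
```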
Why does this sum mod out to 0? In making up another problem today I came across something odd. I've been thinking it over and I can't exactly place why it's true, but after running a long Python script to check, I haven't yet found a counter example. Why is $\sum_{n=1}^{m}{n^m}\equiv 0\mod m$ true for all odd $m \ge 3$? The script showed me that each term for odd $m$ is equivalent to $n$ when taken $\mod m$ (until term $m$), and so the sum would be $\frac{m(m-1)}{2}$ which is obviously $0 \mod m$. What I am unable to understand is why $n^m\equiv n \mod m$ only for odd $m$.
If $m$ is odd, each residue $n$ can be paired with $m-n \equiv -n \pmod m$, and since $m$ is odd, $n^m+(-n)^m=n^m-n^m=0$. So the terms cancel in pairs modulo $m$ (the final term $m^m \equiv 0 \pmod m$ stands alone), and the whole sum is $\equiv 0 \pmod m$.
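The pairing argument is easy to confirm by brute force (a quick sanity check, not a proof):

```python
def power_sum_mod(m):
    """sum_{n=1}^{m} n^m, reduced mod m."""
    return sum(pow(n, m, m) for n in range(1, m + 1)) % m

# The sum vanishes mod m for every odd m >= 3 ...
assert all(power_sum_mod(m) == 0 for m in range(3, 200, 2))
# ... but not for every even m, e.g. m = 2: 1^2 + 2^2 = 5 is odd.
assert power_sum_mod(2) != 0
```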
Finding the $N$-th derivative of $f(x)=\frac {x} {x^2-1}$ I'm practicing some problems from past exams and found this one: Find the n-th derivative of this function: $$f(x)=\frac {x} {x^2-1}$$ I have no idea how to start solving this problems. Is there any theorem for finding nth derivative?
To add to Derek's hint: you will have to show the validity of the formula $\frac{\mathrm{d}^k}{\mathrm{d}x^k}\frac1{x}=\frac{(-1)^k k!}{x^{k+1}}$
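By partial fractions, $\frac{x}{x^2-1}=\frac12\left(\frac1{x-1}+\frac1{x+1}\right)$, so the hint's formula suggests $$f^{(n)}(x)=\frac{(-1)^n\, n!}{2}\left(\frac1{(x-1)^{n+1}}+\frac1{(x+1)^{n+1}}\right).$$ As a sanity check, the sketch below differentiates $f$ exactly with the quotient rule on Fraction-coefficient polynomials and compares against this closed form (the helper names are ad hoc):

```python
from fractions import Fraction as F
from math import factorial

def polymul(a, b):
    out = [F(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyderiv(a):
    return [F(k) * a[k] for k in range(1, len(a))] or [F(0)]

def polyaxpy(a, c, b):  # a + c*b, padded to a common length
    n = max(len(a), len(b))
    a, b = a + [F(0)] * (n - len(a)), b + [F(0)] * (n - len(b))
    return [x + c * y for x, y in zip(a, b)]

def polyeval(a, x):
    return sum(c * x**k for k, c in enumerate(a))

def nth_derivative_at(n, x0):
    """Exact f^(n)(x0) for f = x/(x^2-1), using
    d/dx (p/q^m) = (p'q - m p q') / q^(m+1)."""
    p, q = [F(0), F(1)], [F(-1), F(0), F(1)]  # numerator x, denominator x^2 - 1
    for m in range(1, n + 1):
        p = polyaxpy(polymul(polyderiv(p), q), F(-m), polymul(p, polyderiv(q)))
    return polyeval(p, x0) / polyeval(q, x0) ** (n + 1)

x0 = F(2)
for n in range(8):
    closed = F((-1) ** n * factorial(n), 2) * ((x0 - 1) ** -(n + 1) + (x0 + 1) ** -(n + 1))
    assert nth_derivative_at(n, x0) == closed
```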
Finding the Heavy Coin by weighing twice Suppose you have $100$ coins. $96$ of them are heavy and $4$ of them are light. Nothing is known regarding the proportion of their weights. You want to find at least one genuine (heavy) coin. You are allowed to use a weight balance twice. How do you find it? Assumptions: Heavy coins all have the same weight; same for the light coins. The weight balance compares the weight of two sides on the balance instead of giving numerical measurement of weights.
I think this works. Divide the coins into three groups: $A$ with $33$ coins, $B$ with $33$ coins and $C$ with $34$ coins. Weigh $A$ and $B$ against each other. Now if $A$ is heavier than $B$, then $A$ cannot have two or more light coins, as in that case, $A$ would be lighter (or equal to $B$). Now split $A$ into groups of $16$ plus one odd coin. Weigh the groups of $16$ against each other. If they are the same, then any of those coins is heavy. If not, then any of the heavier 16 coins is heavy. Consider the case when $A$ and $B$ are equal. The possibilities for $A$, $B$ and $C$ are: +-----------+------------+-----------+ | A | B | C | +-----------+------------+-----------+ | 33H | 33H | 30H + 4L | | | | | | 32H + L | 32H + L | 32H + 2L | | | | | | 31H + 2L | 31H + 2L | 34H | +-----------+------------+-----------+ Now move one coin from $A$ to $B$ (call the resulting set $B'$) and weigh it against $C$. If $B' > C$, then the coin you moved from $A$ is a heavy coin. If $B' = C$, then the coin you moved from $A$ is a light coin and the remaining coins in $A$ are heavy. If $B' < C$, then all the coins in $C$ are heavy.
Does a section that vanishes at every point vanish? Let $R$ be the coordinate ring of an affine complex variety (i.e. finitely generated, commutative, reduced $\mathbb{C}$ algebra) and $M$ be an $R$ module. Let $s\in M$ be an element, such that $s\in \mathfrak{m}M$ for every maximal ideal $\mathfrak{m}$. Does this imply $s=0$?
Not in general, no. For example, if $R = \mathbb C[T]$ and $M$ is the field of fractions of $R$, namely $\mathbb C(T)$, then (a) every maximal ideal of $R$ is principal; (b) every element of $M$ is divisible by every non-zero element of $R$. Putting (a) and (b) together we find that $M = \mathfrak m M$ for every maximal ideal $\mathfrak m$ of $R$, but certainly $M \neq 0.$ Here is a finitely generated example: again take $R = \mathbb C[T]$, and take $M = \mathbb C[T]/(T^2).$ Then $s = T \bmod T^2 \in \mathfrak m M$ for every $\mathfrak m$, because $\mathfrak m M = M$ if $\mathfrak m$ is a maximal ideal other than $(T)$, and this is clear from the choice of $s$ if $\mathfrak m = (T)$. The answer is yes if $M$ is finitely generated and torsion free. For let $S$ be the total quotient ring of $R$ (i.e. the product of function fields $K(X)$ for each irreducible component $X$ of the variety attached to $R$). Then $M$ embeds into $S\otimes_R M$ (this is the torsion free condition), which in turn embeds into a finite product of copies of $S$ (since it is finite type over $S$, which is just a product of fields). Clearing denominators, we find that in fact $M$ then embeds into $R^n$ for some $n$. Thus it suffices to prove the result for $M = R^n$, and hence for $R$, in which case it follows from the Nullstellensatz, together with the fact that $R$ is reduced. Finally, note that for any finitely generated $R$-module, if $M = \mathfrak m M$ for all $\mathfrak m$ then $M = 0$ (since Nakayama then implies that $M_{\mathfrak m} = 0$ for all $\mathfrak m$). Thus if $M$ is non-zero it can't be that every section lies in $\mathfrak m M$ for all $\mathfrak m$.
If $AB = I$ then $BA = I$ If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$. I do not understand anything more than the following:

* Elementary row operations.
* Linear dependence.
* Row reduced forms and their relations with the original matrix.

If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem? P.S.: Please avoid using the transpose and/or inverse of a matrix.
Since inverse/transpose are not allowed we start by writing $$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} \end{bmatrix}$$ and similarly $$B = \begin{bmatrix} b_{11} & b_{12} & b_{13} & \dots & b_{1n} \\ b_{21} & b_{22} & b_{23} & \dots & b_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & b_{n3} & \dots & b_{nn} \end{bmatrix}$$ Since $AB = I$, using matrix multiplication definition we can write the elements of AB as: $$ \sum_{i,i'} \sum_{j,j'} a_{i,i'}b_{j,j'} = I_{i,j'}= \begin{cases} 1, & \text{if}\ i=j'\space and \space j=i' \\ 0, & \text{otherwise} \end{cases} $$ such that when $i = j'$ and $j = i'$ we get the diagonal elements and off diagonal otherwise. But note that since $a_{i,i'}b_{j,j'}$ are scalar and commute we can write. $$ \sum_{i,i'} \sum_{j,j'} b_{j,j'}a_{i,i'} = \sum_{i,i'} \sum_{j,j'} a_{i,i'}b_{j,j'} = \begin{cases} 1, & \text{if}\ i=j'\space and \space j=i' \\ 0, & \text{otherwise} \end{cases} $$ Now we observe that the elements of $BA$ can be written as... $$ \sum_{i,i'} \sum_{j,j'} b_{j,j'}a_{i,i'} $$ such that when $i = j'$ and $j = i'$ we get the diagonal elements and off diagonal otherwise. But we showed that $$ \sum_{i,i'} \sum_{j,j'} b_{j,j'}a_{i,i'} = \begin{cases} 1, & \text{if}\ i=j'\space and \space j=i' \\ 0, & \text{otherwise} \end{cases} $$ thus $BA = I$
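As a concrete sanity check of the statement (not of the argument above), one can build a right inverse $B$ of a specific integer matrix $A$ by Gauss-Jordan elimination with exact rational arithmetic and confirm that $BA = I$ comes out as well; the helper names below are ad hoc:

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def right_inverse(A):
    """Solve A B = I by Gauss-Jordan elimination with exact Fractions."""
    n = len(A)
    M = [[F(x) for x in A[i]] + [F(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = right_inverse(A)
I = [[F(1 if i == j else 0) for j in range(3)] for i in range(3)]
assert matmul(A, B) == I   # B is a right inverse ...
assert matmul(B, A) == I   # ... and a left inverse as well
```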
Find the Frequency Components of a Time Series Graph For a periodic (and not so periodic) function, it is always possible to use Fourier series to find out the frequencies contained in the function. But what about functions that cannot be expressed in mathematical terms? For example, this graph (accelerogram): Is there any way to apply a sort of Fourier series, in order to find out the frequencies contained in it?
Quinn and Hannan's The estimation and tracking of frequency is dedicated to this topic. I can highly recommend it.
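For a sampled signal like an accelerogram, the discrete Fourier transform is the standard computational tool. A minimal standard-library sketch on a synthetic signal (a real recording would normally call for an FFT routine plus windowing, which this toy version omits):

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform (O(N^2); fine for a demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A toy "recording": two sinusoids at 5 and 12 cycles per record.
N = 64
signal = [math.sin(2 * math.pi * 5 * n / N) + 0.5 * math.sin(2 * math.pi * 12 * n / N)
          for n in range(N)]

spectrum = dft(signal)
mags = [abs(c) for c in spectrum[:N // 2]]                 # one-sided spectrum
peaks = sorted(range(len(mags)), key=mags.__getitem__)[-2:]
assert sorted(peaks) == [5, 12]                            # both frequencies recovered
```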
A $1-1$ function is called injective. What is an $n-1$ function called? A $1-1$ function is called injective. What is an $n-1$ function called? I'm thinking about homomorphisms. So perhaps homojective? Onto is surjective. $1-1$ and onto is bijective. What about $n-1$ and onto? Projective? Polyjective? I think $n-m$ and onto should be hyperjective, as in hypergroups.
I will:

* suggest some terminology for three related concepts, and
* suggest that $n$-to-$1$ functions probably aren't very interesting.

Terminology. Let $f : X \rightarrow Y$ denote a function. Recall that $f$ is called a bijection iff for all $y \in Y$, the set $f^{-1}(y)$ has precisely $1$ element. So define that $f$ is a $k$-bijection iff for all $y \in Y$, the set $f^{-1}(y)$ has precisely $k$ elements. We have: The composite of a $j$-bijection and a $k$-bijection is a $(j \times k)$-bijection. There is also a sensible notion of $k$-injection. Recall that $f$ is called an injection iff for all $y \in Y$, the set $f^{-1}(y)$ has at most $1$ element. So define that $f$ is a $k$-injection iff for all $y \in Y$, the set $f^{-1}(y)$ has at most $k$ elements. We have: The composite of a $j$-injection and a $k$-injection is a $(j \times k)$-injection. There is also a sensible notion of $k$-surjection, obtained by replacing "at most" with "at least." We have: The composite of a $j$-surjection and a $k$-surjection is a $(j \times k)$-surjection. A criticism. I wouldn't advise thinking too hard about "$k$ to $1$ functions." There's a couple of reasons for this:

* Their definition is kind of arbitrary: we require that $f^{-1}(y)$ has either $k$ elements, or $0$ elements. Um, what?
* We can't say much about their composites.
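For what it's worth, the composition rule is easy to sanity-check on tiny finite examples (the functions below are made-up toys, chosen only to have the right fibre sizes):

```python
from collections import Counter

def fibre_sizes(f, domain):
    """Multiset of preimage sizes |f^{-1}(y)| over the image of f."""
    return Counter(f(x) for x in domain).values()

f = lambda n: n // 2      # a 2-bijection from {0,...,5} onto {0,1,2}
g = lambda n: n // 3      # a 3-bijection from {0,1,2} onto {0}
h = lambda n: g(f(n))     # should be a (2*3)-bijection onto {0}

assert set(fibre_sizes(f, range(6))) == {2}
assert set(fibre_sizes(g, range(3))) == {3}
assert set(fibre_sizes(h, range(6))) == {6}
```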
CAS with a standard language I hope this question is suitable for the site. I recently had to work with Mathematica, and the experience was, to put it kindly, unpleasing. I do not have much experience with similar programs, but I remember not liking much Matlab or Maple either. The result is that I am a mathematician who likes programming, but I never managed to learn how to work with a computer algebra system. Does there exist a CAS which can be programmed using a standard language? I guess the best thing would be just an enormous library of mathematical algorithms implemented for C or Python or whatever. I know SAGE is based on Python, but as far as I understand (which is not much) it just collects preexisting open source software, so (I assume) one has to learn how to use a new tool for every different problem.
Considering that Maxima is developed in Common Lisp and accepts Common Lisp syntax, this system might suit your requirements.
Group as the Union of Subgroups We know that a group $G$ cannot be written as the set theoretic union of two of its proper subgroups. Also $G$ can be written as the union of 3 of its proper subgroups if and only if $G$ has a homomorphic image, a non-cyclic group of order 4. In this paper http://www.jstor.org/stable/2695649 by M.Bhargava, it is shown that a group $G$ is the union of its proper normal subgroups if and only if its has a quotient that is isomorphic to $C_{p} \times C_{p}$ for some prime $p$. I would like to make the condition more stringent on the subgroups. We know that Characteristic subgroups are normal. So can we have a group $G$ such that , $$G = \bigcup\limits_{i} H_{i}$$ where each $H_{i}$'s are Characteristic subgroups of $G$?
One way to ensure this happens is to have every maximal subgroup be characteristic. To get every maximal subgroup normal, it is a good idea to check p-groups first. To make sure the maximal subgroups are characteristic, it makes sense to make sure they are simply not isomorphic. To make sure there are not too many maximal subgroups, it makes sense to take p=2 and choose a rank 2 group. In fact the quasi-dihedral groups have this property. Their three maximal subgroups are cyclic, dihedral, and quaternion, so each must be fixed by any automorphism. So a specific example is QD16, the Sylow 2-subgroup of GL(2,3). Another small example is 4×S3. It has three subgroups of index 2, a cyclic, a dihedral, and a 4 acting on a 3 with kernel 2. Since these are pairwise non-isomorphic, they are characteristic too. It also just so happens (not surprisingly, by looking in the quotient 2×2) that every element is contained in one of these maximal subgroups.
Intuitive explanation of Cauchy's Integral Formula in Complex Analysis There is a theorem that states that if $f$ is analytic in a domain $D$, and the closed disc {$ z:|z-\alpha|\leq r$} contained in $D$, and $C$ denotes the disc's boundary followed in the positive direction, then for every $z$ in the disc we can write: $$f(z)=\frac{1}{2\pi i}\int\frac{f(\zeta)}{\zeta-z}d\zeta$$ My question is: What is the intuitive explanation of this formula? (For example, but not necessary, geometrically.) (Just to clarify - I know the proof of this theorem, I'm just trying to understand where does this exact formula come from.)
Expanding on my comment, this result can be translated into: "A surface in $\mathbb{R}^3$ which satisfies the Maximum-Modulus principle is uniquely determined by specifying its boundary." To see this, write the holomorphic function $f(z)$ in terms of its real and imaginary parts: $$ f(z) = f(x,y) = g(x,y) + ih(x,y)$$ Then since $f$ is holomorphic, the functions $g(x,y)$ and $h(x,y)$ are both real valued harmonic functions which satisfy the maximum modulus principle. By interpreting the value of $g$ or $h$ as the height of a surface in $\mathbb{R}^3$, we can see that according to Cauchy's theorem such surfaces are uniquely determined by specifying their boundary.
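The formula is also pleasant to test numerically: parametrizing the circle by $\zeta = e^{i\theta}$ turns the contour integral into an ordinary integral over $[0, 2\pi]$, which a plain Riemann sum approximates extremely well for analytic integrands. A sketch (with $f(z) = e^z$ chosen purely as an example):

```python
import cmath, math

def cauchy_value(f, z, n=4096):
    """Approximate (1/2*pi*i) * integral of f(zeta)/(zeta - z) d(zeta)
    over the unit circle, via zeta = e^{i*theta}, d(zeta) = i*zeta d(theta)."""
    total = 0j
    for k in range(n):
        zeta = cmath.exp(2j * math.pi * k / n)
        total += f(zeta) * zeta / (zeta - z)
    return total / n

z = 0.3 + 0.2j                        # a point inside the unit circle
approx = cauchy_value(cmath.exp, z)
assert abs(approx - cmath.exp(z)) < 1e-9
```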
How do you show monotonicity of the $\ell^p$ norms? I can't seem to work out the inequality $(\sum |x_n|^q)^{1/q} \leq (\sum |x_n|^p)^{1/p}$ for $p \leq q$ (which I'm assuming is the way to go about it).
For completeness I will add this as an answer (it is a slight adaptation of the argument from AD.): For $a\in[0,1]$ and any $y_i\geq 0, i\in\mathbb N$, with at least one $y_i\neq0$ and the convention that $y^0=1$ for any $y\geq0$, \begin{equation}\label{*}\tag{*}\sum_{i=1}^\infty \frac{y_i^a}{\left(\sum_{j=1}^\infty y_j\right)^a}=\sum_{i=1}^\infty \left(\frac{y_i}{\sum_{j=1}^\infty y_j}\right)^a\geq \sum_{i=1}^\infty \frac{y_i}{\sum_{j=1}^\infty y_j}=1,\end{equation} where I have used $y^a\geq y$ whenever $y\in[0,1]$ and $a\in[0,1]$. (This can be derived for instance from the concavity of $y\mapsto y^a$.) For $p=q$, there is nothing to prove. For $1\le p< q\le\infty$ and $x=(x_i)_{i\in\mathbb N}\in \ell^q$, set $a\overset{\text{Def.}}=\frac pq\in[0,1]$ and $y_i\overset{\text{Def.}}=\lvert x_i\rvert^q\ge0$. Then \eqref{*} yields \begin{equation*} \sum_{i=1}^\infty \lvert x_i\rvert^p\geq\left(\sum_{i=1}^\infty \lvert x_i\rvert^{q}\right)^{\frac pq}, \end{equation*} i.e. \begin{equation*} \lVert x\rVert_{\ell^q}\le\lVert x\rVert_{\ell^p}. \end{equation*}
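A quick numerical spot-check of the resulting monotonicity (finite vectors, a handful of exponents; obviously not a proof):

```python
def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [3.0, -1.0, 2.0, 0.5, -0.25]
exponents = [1, 1.5, 2, 3, 7]
norms = [lp_norm(x, p) for p in exponents]
# ||x||_p is non-increasing in p:
assert all(a >= b - 1e-12 for a, b in zip(norms, norms[1:]))
```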
Beautiful identity: $\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$ Let $m,n\ge 0$ be two integers. Prove that $$\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$$ where $\delta_{mn}$ stands for the Kronecker's delta (defined by $\delta_{mn} = \begin{cases} 1, & \text{if } m=n; \\ 0, & \text{if } m\neq n \end{cases}$). Note: I put the tag "linear algebra" because i think there is an elegant way to attack the problem using a certain type of matrices. I hope you will enjoy. :)
This follows easily from the Multinomial Theorem, I believe. $$ 1 = 1^n = (1 - x + x)^n$$ $$ = \sum_{a+b+c=n} {n \choose a,b,c} 1^a \cdot (-x)^b \cdot x^c$$ $$ = \sum_{m=0}^{n} \sum_{k=m}^{n} {n \choose m,k-m,n-k} 1^{m} \cdot (-x)^{k-m} \cdot x^{n-k} $$ $$ = \sum_{m=0}^{n} \left[ \sum_{k=m}^{n} (-1)^{k-m} {k \choose m}{n \choose k} \right] x^{n-m}$$ Comparing coefficients now gives the result immediately.
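The identity is also cheap to confirm by brute force over small $m$ and $n$:

```python
from math import comb

def alternating_sum(m, n):
    return sum((-1) ** (k - m) * comb(k, m) * comb(n, k)
               for k in range(m, n + 1))

assert all(alternating_sum(m, n) == (1 if m == n else 0)
           for n in range(12) for m in range(n + 1))
```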
What is the $x$ in $\log_b x$ called? In $b^a = x$, $b$ is the base, a is the exponent and $x$ is the result of the operation. But in its logarithm counterpart, $\log_{b}(x) = a$, $b$ is still the base, and $a$ is now the result. What is $x$ called here? The exponent?
Another name (that I've only ever seen when someone else asked this question) is "logarithmand". From page 36 of The Spirit of Mathematical Analysis by Martin Ohm, translated from the German by Alexander John Ellis, 1843:
Alternate definition of prime number I know the definition of prime number when dealing with integers, but I can't understand why the following definition also works: A prime is a quantity $p$ such that whenever $p$ is a factor of some product $a\cdot b$, then either $p$ is a factor of $a$ or $p$ is a factor of $b$. For example, take $4$ (which clearly is not a prime): it is a factor of $16=8\cdot 2$, so I should check that either $4\mid 8$ or $4\mid 2$. But $4\mid 8$ is true. So $4$ is a prime, which is absurd. Please note that English is not my first language, so I may have easily misunderstood the above definition. Edit: Let me try to formalize the definition as I understood it: $p$ is prime if and only if $\exists a\exists b(p\mid a\cdot b)\rightarrow p\mid a\lor p\mid b$.
As far as I know, your definition, "A prime is an element $p$ such that whenever $p$ divides $ab$, then either $p$ divides $a$ or $p$ divides $b$," is the true definition of "prime". The usual one, "... an element $p$ which cannot be expressed as a product of non-unit elements," is the definition of an irreducible element. Now, in every integral domain all primes are also irreducible, but the converse is in general not true. In other words, there are rings where the two definitions are not equivalent. One case in which they are indeed equivalent is when the ring is a unique factorization domain (the statement of the so-called "fundamental theorem of arithmetic" is that the integers form one such ring).
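The flaw in the question's example is that the definition quantifies over all products $ab$, not just one: a single bad product such as $4 \mid 2\cdot 6$ with $4\nmid 2$ and $4\nmid 6$ already disqualifies $4$. A small brute-force sketch over $\mathbb{Z}$ illustrates this (the search bound is an arbitrary stand-in for the universal quantifier):

```python
def is_prime_by_divisibility(p, bound=50):
    """Check the 'p | ab implies p | a or p | b' condition over
    all products a*b with 1 <= a, b <= bound."""
    return all(a % p == 0 or b % p == 0
               for a in range(1, bound + 1)
               for b in range(1, bound + 1)
               if (a * b) % p == 0)

assert is_prime_by_divisibility(5)
assert not is_prime_by_divisibility(4)   # witness: 4 | 2*6 but 4 divides neither factor
```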
Are all algebraic integers with absolute value 1 roots of unity? If we have an algebraic number $\alpha$ with (complex) absolute value $1$, it does not follow that $\alpha$ is a root of unity (i.e., that $\alpha^n = 1$ for some $n$). For example, $(3/5 + 4/5 i)$ is not a root of unity. But if we assume that $\alpha$ is an algebraic integer with absolute value $1$, does it follow that $\alpha$ is a root of unity? I know that if all conjugates of $\alpha$ have absolute value $1$, then $\alpha$ is a root of unity by the argument below: The minimal polynomial of $\alpha$ over $\mathbb{Z}$ is $\prod_{i=1}^d (x-\alpha_i)$, where the $\alpha_i$ are just the conjugates of $\alpha$. Then $\prod_{i=1}^d (x-\alpha_i^n)$ is a polynomial over $\mathbb{Z}$ with $\alpha^n$ as a root. It also has degree $d$, and all roots have absolute value $1$. But there can only be finitely many such polynomials (since the coefficients are integers with bounded size), so we get that $\alpha^n=\sigma(\alpha)$ for some Galois conjugation $\sigma$. If $\sigma^m(\alpha) = \alpha$, then $\alpha^{n^m} = \alpha$. Thus $\alpha^{n^m - 1} = 1$.
Let me first mention an example from character theory. Let $G$ be a finite group of order $n$ and let $\rho$ be a representation with character $\chi:=\chi_\rho$, defined by $\chi(g)=\operatorname{Tr}(\rho(g))$. Since $G$ is a finite group, facts from linear algebra show that $\chi(g)\in\mathbb{Z}[\zeta_n]$. For abelian groups it is easy to see that $\chi(g)$ is a root of unity when $\chi$ is irreducible, but what about non-abelian groups? In other words, if $|\chi(g)|=1$, what can we say about $\chi(g)$? This relates to your question. Now let $K/\mathbb{Q}$ be an abelian Galois extension inside $\mathbb{C}$, and take an algebraic integer $\alpha\in\mathcal{O}_K$ such that $|\alpha|=1$. For any $\sigma\in Gal(K/\mathbb{Q})$ we have $$ |\sigma(\alpha)|^2=\sigma(\alpha)\overline{\sigma(\alpha)}. $$ Since $K/\mathbb{Q}$ is abelian, complex conjugation (restricted to $K$) commutes with $\sigma$, i.e. $\overline{\sigma(\alpha)}=\sigma(\overline{\alpha})$, so $$ |\sigma(\alpha)|^2=\sigma(\alpha\overline{\alpha})=\sigma(|\alpha|^2)=1. $$ Thus every conjugate of $\alpha$ has absolute value one, so $\alpha$ must be a root of unity by the argument in the question. This answers the question that was posed: since $\chi(g)$ lies in the abelian extension $\mathbb{Q}(\zeta_n)$, if $|\chi(g)|=1$ then $\chi(g)$ is a root of unity.
