Solving (quadratic) equations of iterated functions, such as $f(f(x))=f(x)+x$ In this thread, the question was to find an $f: \mathbb{R} \to \mathbb{R}$ such that $$f(f(x)) = f(x) + x$$ (which was revealed in the comments to be solved by $f(x) = \varphi x$ where $\varphi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$). Having read about iterated functions shortly before, though, I came up with this train of thought: $$f(f(x)) = f(x) + x$$ $$\Leftrightarrow f^2 = f^1 + f^0$$ $$\Leftrightarrow f^2 - f - f^0 = 0$$ where $f^n$ denotes the $n$-th iterate of $f$. Now I solved the resulting quadratic equation much as I would with plain numbers $$f = \frac{1}{2} \pm \sqrt{\frac{1}{4} + 1}$$ $$f = \frac{1 \pm \sqrt{1+4}}{2} = \frac{1 \pm \sqrt{5}}{2}\cdot f^0$$ And finally the solution $$f(x) = \frac{1 \pm \sqrt{5}}{2} x .$$ Now my question is: **Is it somehow allowed to work with functions in that way?** I know that in the above there are notational ambiguities, as $1$ is actually treated as $f^0 = \mathrm{id}$ ... But since the result is correct, there seems to be something correct about this approach. So can I actually solve certain functional equations like this? And if so, what would the correct notation for the above be?
One way to think about this is that you are assuming that $f(x) = cx$ and then solving for the value of $c$. But there is something much more interesting going on; you have started doing some abstract algebra without knowing it. $f$ is not a real number, but it does live in something called an algebra over a field, which is a special type of ring. In particular, $f$ lives in the algebra of continuous functions $\mathbb{R} \to \mathbb{R}$. In this algebra there is a multiplicative identity $f(x) = x$ which plays the role of the zeroth power, an addition, a multiplication, and a scalar multiplication by real numbers. Many manipulations which are possible with ordinary real numbers are possible in rings and algebras; in particular, the first half of the proof of the quadratic formula carries through totally abstractly (the part where you complete the square). Unfortunately, the second half does not. In other words, it is not true that there are only two solutions to the equation $f^2 = a$ in a general algebra. This is because algebras are not in general integral domains. There may be none or infinitely many! However, in this special case $a$ is a positive real multiple of the identity, so we know it has at least two square roots (even though there may be more). These are the solutions that you found, and this method for finding them is perfectly valid. This is a very important technique. It is often used in the case where $f$ is a differential operator as a concise way to solve linear homogeneous ODEs.
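If it helps to see it concretely, here is a small sympy sketch (my own, not part of the original answer) checking that both roots of $t^2 = t + 1$ give linear functions $f(x) = tx$ satisfying the functional equation:

```python
import sympy as sp

x, t = sp.symbols('x t')
roots = sp.solve(t**2 - t - 1, t)              # the golden ratio and its conjugate
for r in roots:
    f = lambda u, r=r: r * u                   # candidate f(x) = r*x
    print(r, sp.simplify(f(f(x)) - (f(x) + x)))  # both print 0
```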
Boy Born on a Tuesday - is it just a language trick? The following probability question appeared in an earlier thread: I have two children. One is a boy born on a Tuesday. What is the probability I have two boys? The claim was that it is not actually a mathematical problem and it is only a language problem. If one wanted to restate this problem formally the obvious way would be like so: Definition: Sex is defined as an element of the set $\{\text{boy},\text{girl}\}$. Definition: Birthday is defined as an element of the set $\{\text{Monday},\text{Tuesday},\text{Wednesday},\text{Thursday},\text{Friday},\text{Saturday},\text{Sunday}\}$. Definition: A Child is defined to be an ordered pair: (sex $\times$ birthday). Let $(x,y)$ be a pair of children. Define an auxiliary predicate $H(s,b) :\!\!\iff s = \text{boy} \text{ and } b = \text{Tuesday}$. Calculate $P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y))$ I don't see any other sensible way to formalize this question. To actually solve this problem now requires no thought (in fact it is thinking which leads us to guess incorrect answers); we just compute $$ \begin{align*} & P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y)) \\ =& \frac{P(x\text{ is a boy and }y\text{ is a boy and }(H(x)\text{ or }H(y)))} {P(H(x)\text{ or }H(y))} \\ =& \frac{P((x\text{ is a boy and }y\text{ is a boy and }H(x))\text{ or }(x\text{ is a boy and }y\text{ is a boy and }H(y)))} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\ =& \frac{\begin{aligned} &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday}) \\ + &P(x\text{ is a boy and }y\text{ is a boy and }y\text{ born on Tuesday}) \\ - &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday and }y\text{ born on Tuesday}) \end{aligned}} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\ =& \frac{1/2 \cdot 1/2 \cdot 1/7 + 1/2 \cdot 1/2 \cdot 1/7 - 1/2 \cdot 1/2 \cdot 1/7 \cdot 1/7} {1/2 \cdot 1/7 + 1/2 \cdot 1/7 - 1/2 \cdot 1/7 \cdot 1/2 \cdot 1/7} \\ =& 13/27 \end{align*} $$ Now what I am wondering is, does this refute the claim that this puzzle is just a language problem or add to it? Was there a lot of room for misinterpreting the question which I just missed?
The Tuesday part is a red herring. It's stated as a fact, thus the probability is 1. Also, it doesn't say "only one boy is born on a Tuesday". But indeed, this could be a language thing. With 2 children you have the following possible combinations: 1. two girls 2. a boy and a girl 3. a girl and a boy 4. two boys If at least 1 is a boy we only have to consider the last three combinations. That gives us one in three that both are boys. The error which is often made is to consider 2. and 3. as a single combination. Edit: I find it completely counter-intuitive that the outcome is influenced by the day, and I simulated the problem for one million families with 2 kids. And lo and behold, the outcome is about 12.99 in 27, i.e. close to 13/27. I was wrong.
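For anyone who wants to reproduce the simulation mentioned in the edit, here is a rough Monte Carlo sketch of my own (not the answerer's code); it should give a ratio close to $13/27 \approx 0.4815$:

```python
import random

trials = 1_000_000
hits = both_boys = 0
for _ in range(trials):
    kids = [(random.choice("BG"), random.randrange(7)) for _ in range(2)]
    # condition: at least one child is a boy born on a Tuesday (day 2, say)
    if any(sex == "B" and day == 2 for sex, day in kids):
        hits += 1
        if all(sex == "B" for sex, _ in kids):
            both_boys += 1
print(both_boys / hits, 13 / 27)
```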
Fixed point Fourier transform (and similar transforms) The Fourier transform can be defined on $L^1(\mathbb{R}^n) \cap L^2(\mathbb{R}^n)$, and we can extend this to $X:=L^2(\mathbb{R}^n)$ by a density argument. Now, by Plancherel we know that $\|\widehat{f}\|_{L^2(\mathbb{R}^n)} = \|f\|_{L^2(\mathbb{R}^n)}$, so the Fourier transform is an isometry on this space. My question now is, what is a theorem that guarantees that the Fourier transform has a fixed point on $L^2$? I know the Gaussian is a fixed point. I'm also interested in other integral transforms, but I just take the Fourier transform as an example. The Banach Fixed Point Theorem does not work here since we don't have a contraction (operator norm $< 1$). Can we apply the Tychonoff fixed point theorem? Then we would need to show that there exists a non-empty compact convex set $C \subset X$ such that the Fourier transform restricted to $C$ is a mapping from $C$ to $C$. Is this possible? If we have a fixed point, what would be a way to show it is unique? By linearity we obviously have infinitely many fixed points if we have at least two of them.
My Functional Analysis Fu has gotten a bit weak lately, but I think the following should work: The Schauder fixed point theorem says that a continuous function on a compact convex set in a topological vector space has a fixed point. Because it is an isometry, the Fourier transform maps the unit ball in $L^2$ to itself. Owing to the Banach–Alaoglu theorem, the unit ball in $L^2$ is compact with respect to the weak topology. The Fourier transform is continuous in the weak topology, because if $( f_n, \phi ) \to (f, \phi)$ for all $\phi \in L^2$, then $$ (\hat{f}_n, \phi) = (f_n, \hat{\phi}) \to (f, \hat{\phi}) = (\hat{f}, \phi). $$
Does contractibility imply contractibility with a basepoint? Let $X$ be a contractible space. If $x_0 \in X$, it is not necessarily true that the pointed space $(X,x_0)$ is contractible (i.e., it is possible that any contracting homotopy will move $x_0$). An example is given in 1.4 of Spanier: the comb space. However, this space is contractible as a pointed space if the basepoint is in the bottom line. Is there a contractible space which is not contractible as a pointed space for any choice of basepoint? My guess is that this will have to be some kind of pathological space, because for CW complexes, we have the Whitehead theorem. (So I'm not completely sure that the Whitehead theorem is actually a statement about the pointed homotopy category, but hopefully I'm right.)
Yes. See exercise 7 here.
proof by contradiction: a composite $c$ has a nontrivial factor $\le \sqrt c$ Let $c$ be a positive integer that is not prime. Show that there is some positive integer $b$ such that $b \mid c$ and $b \leq \sqrt{c}$. I know this can be proved by contradiction, but I'm not sure how to approach it. Usually I write the proof in the form $P \rightarrow Q$, and then if we can prove $P \land \neg Q$ is false, $P \rightarrow Q$ must be true. In this case, I wrote it as: If $c$ is a composite, positive integer, then $b \mid c$ and $b \leq \sqrt{c}$, for some positive integer $b$. I'm guessing that as long as I assume that $b \nmid c$ or $b > \sqrt{c}$, then this is still valid as $\neg Q$; that is, I don't have to assume the converse of both parts of $Q$? Moving on, if $b > \sqrt{c}$, and $b \mid c$, then $br=c$ for some integer $r$, which means $r < \sqrt{c}$. And this is where I get stuck.
What you want is to assume that every b that divides c is "too large" and derive a contradiction; however, you don't need proof by contradiction here at all. If b divides c and is too large, then it's easy to show directly that $c/b$ also divides c and is small enough. This also can be phrased with contradiction: assume $c/b > \sqrt{c}$ and $b > \sqrt{c}$ then $c=(c/b)\cdot b > \sqrt{c}\cdot\sqrt{c}=c$ - contradiction.
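A quick empirical illustration of the claim (my own sketch, not part of the answer): for every composite $c$ in a range, the smallest nontrivial divisor is indeed at most $\sqrt{c}$.

```python
import math

for c in range(4, 2000):
    divisors = [b for b in range(2, c) if c % b == 0]
    if divisors:  # c is composite
        assert min(divisors) <= math.isqrt(c), c
print("checked")
```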
Why are quadratic equations called quadratics? The word "quad" generally means 4. Quadratics don't have 4 of anything. Can anyone explain where the name comes from?
From MathWorld: The Latin prefix quadri- is used to indicate the number 4, for example, quadrilateral, quadrant, etc. However, it is also very commonly used to denote objects involving the number 2. This is the case because quadratum is the Latin word for square, and since the area of a square of side length $x$ is given by $x^2$, a polynomial equation having exponent two is known as a quadratic ("square-like") equation. By extension, a quadratic surface is a second-order algebraic surface.
How do I find a function from a differential equation? Hey, I'm looking for a guide on how to find $Q$ given the following, where $a$ and $b$ are constants: $$ \frac{dQ}{dt} = \frac{a + Q}{b} $$ I have the answer and working for the specific case I'm trying to solve but do not understand the steps involved. A guide on how I can solve this, with an explanation of each step, would be much appreciated.
Yet another method for solving this differential equation is to look at it as a linear differential equation, whose general form is: $$ y'(x) = a(x) y(x) + b(x) \ , \qquad\qquad\qquad [1] $$ where $a(x), b(x)$ are arbitrary functions depending on the variable $x$. In your case: $$ x = t \ , y(x) = Q(t) \ , b(x) = \frac{a}{b} \quad \text{and}\quad a(x) = \frac{1}{b} \ . $$ A general procedure for solving [1] is the following: 1. First, try to solve the associated homogeneous linear differential equation $$ y' = a(x)y \ . \qquad\qquad\qquad [2] $$ This is easy: the general solution is $$ y = K e^{A(x)} \ , \qquad\qquad\qquad [3] $$ where $K\in \mathbb{R}$ is an arbitrary constant and $A(x) = \int a(x)dx$ is a primitive function of $a(x)$. 2. Once you have the general solution [3] of [2], you apply variation of constants; that is, you look for solutions of [1] of the following kind: $$ y = K(x) e^{A(x)} \ . \qquad \qquad \qquad [4] $$ Here, we have replaced the arbitrary constant $K$ by an arbitrary unknown function $K(x)$ (hence the name "variation of constants") to be determined. How? By imposing that [4] be a solution of our first differential equation [1]. It goes like this: if you substitute $y$ from [4] into [1], you get $$ K'(x) e^{A(x)} + K(x) A'(x) e^{A(x)} = a(x) K(x) e^{A(x)} + b(x) \ . $$ Since $A'(x) = a(x) $, this is the same as $$ K'(x) e^{A(x)} = b(x) \ . $$ So $$ K(x) = \int b(x)e^{-A(x)}dx + C \ , $$ where $C \in \mathbb{R}$ is an arbitrary constant. Now you put this $K(x)$ into [4] and get the general solution of your differential equation: $$ y(x) = Ce^{A(x)} + e^{A(x)}\int b(x) e^{-A(x)}dx \ . \qquad\qquad\qquad [5] $$ Since I've never been able to remember formula [5], I usually repeat the whole process for each particular linear differential equation, which is not hard, and you can do it for yours.
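If you just want something to check your own working against, a computer algebra system gives the answer directly; here is a small sympy sketch of mine (with $a$, $b$ treated as nonzero constants):

```python
import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', nonzero=True)
Q = sp.Function('Q')
sol = sp.dsolve(sp.Eq(Q(t).diff(t), (a + Q(t)) / b), Q(t))
print(sol)   # Q(t) = C1*exp(t/b) - a  (possibly written in an equivalent form)
```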
Simplification of expressions containing radicals As an example, consider the polynomial $f(x) = x^3 + x - 2 = (x - 1)(x^2 + x + 2)$ which clearly has a root $x = 1$. But we can also find the roots using Cardano's method, which leads to $$x = \sqrt[3]{\sqrt{28/27} + 1} - \sqrt[3]{\sqrt{28/27} - 1}$$ and two other roots. It's easy to check numerically that this expression is really equal to $1$, but is there a way to derive it algebraically which isn't equivalent to showing that this expression satisfies $f(x) = 0$?
There are very general algorithms known for radical denesting. Below is the structure theorem which lies at the foundation of these algorithms. It widely generalizes the heuristic employed by Qiaochu in his answer. It may be employed heuristically - in a similar manner as Qiaochu - to perform complicated denestings, without requiring much comprehension of the underlying theory. In Bloemer's papers FOCS '91 & FOCS '92 & Algorithmica 2000 you will find polynomial-time algorithms for radical denesting. Informally, the key Denesting Structure Theorem says that if a radical $\rm\, r^{1/d} \,$ denests in any radical extension $\rm\, F' \,$ of its base field $\rm\, F \,$, then a suitable multiple $\rm\, q b\:\!\:\! r \,$ of the radicand $\rm\:\! r\:\! $ must already denest in the field $\rm\, F' \,$ defined by the radicand. More precisely Denesting Structure Theorem$\,\, \,$ Let $\rm\, F \,$ be a real field and $\rm\, F' = F(q_1^{1/d1},\ldots,q_k^{1/dk}) \,$ be a real radical extension of $\rm\, F \,$ of degree $\rm\, n \,$. Let $\rm\, B = \{b_0,\ldots, b_{n-1}\}$ be the standard basis of $\rm\, F' \,$ over $\rm\, F \,$. If $\rm\, r \,$ is in $\rm\, F' \,$ and $\rm\, d \,$ is a positive integer such that $\rm\, r^{1/d} \,$ denests over $\rm\, F \,$ using only real radicals, that is, $\rm\, r^{1/d} \,$ is in $\rm\, F(a_1^{1/t_1},\ldots,a_m^{1/t_m}) \,$ for some positive integers $\rm\, t_i \,$ and positive $\rm\, a_i \in F \,$, then there exists a nonzero $\rm\, q \in F \,$ and a $\rm\, b \in B \,$ such that $\rm\, (q b r)^{1/d}\! \in F' \,$. I.e. multiplying the radicand by a $\rm\, q \,$ in the base field $\rm\, F \,$ and a power product $\rm\, b = q_1^{e_1/d_1}\cdots q_k^{e_k/d_k} \,$ we can normalize any denesting so that it denests in the field defined by the radicand. E.g. $$ \sqrt{\sqrt[3]5 - \sqrt[3]4} \,\,=\, \frac{1}3 (\sqrt[3]2 + \sqrt[3]{20} - \sqrt[3]{25})$$ normalises to $$\qquad \sqrt{18\ (\sqrt[3]10 - 2)} \,\,=\, 2 + 2\ \sqrt[3]{10} - \sqrt[3]{10}^2\,\in\,\Bbb Q(\sqrt[3]{10}) $$ An example with nontrivial $\rm\,b$ $$ \sqrt{12 + 5\ \sqrt 6} \,\,=\, (\sqrt 2 + \sqrt 3)\ 6^{1/4}\qquad\quad $$ normalises to $$ \sqrt{\frac{1}3 \sqrt{6}\, (12 + 5\ \sqrt 6)} \,\,=\, 2 + \sqrt{6}\,\in\, \Bbb Q(\sqrt 6)\qquad\qquad\ \ \ $$ Here $\rm\, F=\mathbb Q,\ F' = \mathbb Q(\sqrt 6),\ n=2,\ B = \{1,\sqrt 6\},\ d=2,\ q=1/3,\ b= \sqrt 6\,$. The structure theorem also hold for complex fields except that in this case one has to assume that $\rm\, F \,$ contains enough roots of unity (which may be computationally expensive in practice, to wit doubly-exponential complexity). Note that the complexity of even simpler problems involving radicals is currently unknown. For example, no polynomial time algorithm is known for determining the sign of a sum of real radicals $\rm\, \sum{c_i\, q_i^{1/r_i}} \,$ where $\rm\, c_i,\, q_i \,$ are rational numbers and $\rm\, r_i \,$ is a positive integer. Such sums play an important role in various geometric problems (e.g. Euclidean shortest paths and traveling salesman tours). Even though testing whether such a sum of radicals is zero can be decided in polynomial time, this is of no help in determining the sign, it only shows that if sign testing is in $\rm\, NP \,$ then it is already in $\rm\, NP \cap \text{co-NP} \,$.
Wedge product and cross product - any difference? I'm taking a course in differential geometry, and have here been introduced to the wedge product of two vectors defined (in Differential Geometry of Curves and Surfaces by Manfredo Perdigão do Carmo) by: Let $\mathbf{u}$, $\mathbf{v}$ be in $\mathbb{R}^3$. $\mathbf{u}\wedge\mathbf{v}$ in $\mathbb{R}^3$ is the unique vector that satisfies: $(\mathbf{u}\wedge\mathbf{v})\cdot\mathbf{w} = \det\;(\mathbf{u}\;\mathbf{v}\;\mathbf{w})$ for all $\mathbf{w}$ in $\mathbb{R}^3$ And to clarify, $(\mathbf{u}\;\mathbf{v}\;\mathbf{w})$ is the 3×3 matrix with $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$ as its columns, in that order. My question: is there any difference between this and the regular cross product or vector product of two vectors, as long as we stay in $\mathbb{R}^3$? And if there is no difference, then why introduce the wedge? Cheers!
There is a difference. Both products take two vectors in $\mathbb{R}^3$. The cross product gives a vector in the same $\mathbb{R}^3$ and the wedge product gives a vector in a different $\mathbb{R}^3$. The two output vector spaces are indeed isomorphic and if you choose an isomorphism you can identify the two products. However this isomorphism is a choice, or to put it another way depends on fixing a convention. In higher dimensions the wedge product gives a vector in a vector space of higher dimension and so no identification is possible.
Grid of overlapping squares I have a grid made up of overlapping $3\times 3$ squares like so: The numbers on the grid indicate the number of overlapping squares. Given that we know the maximum number of overlapping squares ($9$ at the middle), and the size of the squares ($3\times 3$), is there a simple way to calculate the rest of the number of overlaps? e.g. I know the maximum number of overlaps is $9$ at point $(2,2)$ and the square size is $3\times 3$ . So given point $(3,2)$ how can I calculate that there are $6$ overlaps at that point?
If you are just considering $3\times 3$ squares, then the number of overlapping squares at cell $(i,j)$ is the number of cells in its $3\times 3$ neighbourhood (including the cell itself) which are internal, i.e. neighbouring cells which are not on the edge of the grid; exactly those cells can serve as the centre of a $3\times 3$ square covering $(i,j)$.
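Here is a small brute-force sketch of my own (with an assumed 5-by-5 grid of unit cells, so that the centre cell is covered by all nine possible $3\times 3$ squares, matching the question's maximum of 9 at $(2,2)$):

```python
N = 5          # assumed grid of N x N unit cells
K = 3          # squares are K x K
counts = [[0] * N for _ in range(N)]
for r in range(N - K + 1):              # top-left corner of each K x K square
    for c in range(N - K + 1):
        for i in range(r, r + K):
            for j in range(c, c + K):
                counts[i][j] += 1
for row in counts:
    print(row)
print(counts[3][2])   # 6, as in the question's example
```

Equivalently, the count at $(i,j)$ factors as a product of row and column contributions: the number of valid top-left rows is $\min(i, N-3) - \max(0, i-2) + 1$, and likewise for columns.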
Why does a circle enclose the largest area? In this Wikipedia article, http://en.wikipedia.org/wiki/Circle#Area_enclosed, it's stated that the circle is the closed curve which has the maximum area for a given arc length. First of all, I would like to see different proofs of this result. (If there are any elementary ones!) One interesting observation which comes to mind while seeing this problem is: how does one propose such a type of problem? Does anyone take all closed curves and calculate their areas to come to this conclusion? I don't think that's the right intuition.
As Qiaochu Yuan pointed out, this is a consequence of the isoperimetric inequality that relates the length $L$ and the area $A$ for any closed curve $C$: $$ 4\pi A \leq L^2 \ . $$ Taking a circumference of radius $r$ such that $2\pi r = L$, you obtain $$ A \leq \frac{L^2}{4\pi} = \frac{4 \pi^2 r^2}{4\pi} = \pi r^2 \ . $$ That is, the area $A$ enclosed by the curve $C$ is smaller than the area enclosed by the circumference of the same length. As for the proof of the isoperimetric inequality, here is the one I've learnt as undergraduate, which is elementary and beautiful, I think. Go round your curve $C$ counterclockwise. For a plane vector field $(P,Q)$, Green's theorem says $$ \oint_{\partial D}(Pdx + Qdy) = \int_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dxdy\ . $$ Apply it for the vector field $(P,Q) = (-y,x)$ and when $D$ is the region enclosed by your curve $C = \partial D$. You obtain $$ A = \frac{1}{2} \oint_{\partial D} (-ydx + xdy) \ . $$ Now, parametrize $C= \partial D$ with arc length: $$ \gamma : [0,L] \longrightarrow \mathbb{R}^2 \ ,\qquad \gamma (s) = (x(s), y(s)) \ . $$ Taking into account that $$ 0= xy \vert_0^L = \int_0^L x'yds + \int_0^L xy'ds \ , $$ we get $$ A = \int_0^L xy'ds = -\int_0^L x'yds \ . $$ So enough for now with our curve $C$. Let's look for a nice circumference to compare with! First of all, $[0,L]$ being compact, the function $x: [0,L] \longrightarrow \mathbb{R}$ will have a global maximum and a global minimum. Changing the origin of our parametrization if necessary, me may assume the minimum is attained at $s=0$. Let the maximum be attained at $s=s_0 \in [0,L]$. Let $q = \gamma (0)$ and $p = \gamma (s_0)$. (If there are more than one minimum and more than one maximum, we choose one of each: the ones you prefer.) Since $x'(0) = x'(s_0) = 0$, we have vertical tangent lines at both points $p,q$ of our curve $C$. Draw a circumference between these parallel lines, tangent to both of them (a little far away of $C$ to avoid making a mess). So the radius of this circumference will be $r = \frac{\| pq \|}{2}$. Let's take the origin of coordinates at the center of this circumference. We parametrize it with the same $s$, the arc length of $C$: $$ \sigma (s) = (\overline{x}(s), \overline{y}(s)) \ , \quad s \in [0, L] \ . $$ Of course, $\overline{x}(s)^2 + \overline{y}(s)^2 = r^2$ for all $s$. If we choose $\overline{x}(s) = x(s)$, this forces us to take $ \overline{y}(s) = \pm \sqrt{r^2 - \overline{x}(s)^2}$. In order that $\sigma (s)$ goes round all over our circumference counterclockwise too, we choose the minus sign if $0\leq s \leq s_0$ and the plus sign if $s_0 \leq s \leq L$. We are almost done, just a few computations left. Let $\overline{A}$ denote the area enclosed by our circumference. So, we have $$ A = \int_0^L xy'ds = \int_0^L \overline{x}y'ds \qquad \text{and} \qquad \overline{A}= \pi r^2 = -\int_0^L\overline{y}\overline{x}'ds = -\int_0^L\overline{y} x'ds \ . $$ Hence, $$ \begin{align} A + \pi r^2 &= A + \overline{A} = \int_0^L (\overline{x}y' - \overline{y}x')ds \\\ &\leq \int_0^L \vert \overline{x}y' - \overline{y}x'\vert ds \\\ &= \int_0^L \vert (\overline{x}, \overline{y})\cdot (y', -x')\vert ds \\\ &\leq \int_0^L \sqrt{\overline{x}^2 + \overline{y}^2} \cdot \sqrt{(y')^2+ (-x')^2}ds \\\ &= \int_0^L rds = rL \ . \end{align} $$ The last inequality is Cauchy-Schwarz's one and the last but one equality is due to the fact that $s$ is the arc-length of $C$. Summing up: $$ A + \pi r^2 \leq rL \ . 
$$ Now, since the geometric mean is never larger than the arithmetic one, $$ \sqrt{A\pi r^2} \leq \frac{A + \pi r^2}{2} \leq \frac{rL}{2} \ . $$ Thus $$ A \pi r^2 \leq \frac{r^2L^2}{4} \qquad \Longrightarrow \qquad 4\pi A \leq L^2 \ . $$
Inverse of an invertible triangular matrix (either upper or lower) is triangular of the same kind How can we prove that the inverse of an upper (lower) triangular matrix is upper (lower) triangular?
Let $$ A=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1,n-1} & a_{1,n}\\ & a_{22} & \cdots & a_{2,n-1} & a_{2,n}\\ & & \ddots & \vdots & \vdots\\ & & & a_{n-1,n-1} & a_{n-1,n}\\ & & & & a_{n,n} \end{pmatrix}. $$ Let $i,j$ be two integers such that $i,j\in\{1,\dots,n\} $ and $i<j$. Let $A_{i,j}$ be an $n-1\times n-1$ matrix which is obtained by crossing out row $i$ and column $j$ of $A$. Then, $A_{i,j}$ is $$ \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1,i-1} & a_{1,i} &a_{1,i+1}&a_{1,i+2}&\cdots&a_{1,j-1}&a_{1,j+1}&a_{1,j+2}&\cdots&a_{1n}\\ & a_{22} & \cdots & a_{2,i-1} & a_{2,i} &a_{2,i+1}&a_{2,i+2}&\cdots&a_{2,j-1}&a_{2,j+1}&a_{2,j+2}&\cdots&a_{2n}\\ & & \ddots & \vdots & \vdots &\vdots&\vdots&\cdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ & & & a_{i-1,i-1} & a_{i-1,i} &a_{i-1,i+1}&a_{i-1,i+2}&\cdots&a_{i-1,j-1}&a_{i-1,j+1}&a_{i-1,j+2}&\cdots&a_{i-1,n}\\ & & & & 0 & a_{i+1,i+1}&a_{i+1,i+2}&\cdots&a_{i+1,j-1}&a_{i+1,j+1}&a_{i+1,j+2}&\cdots&a_{i+1,n}\\ & & & & & 0&a_{i+2,i+2}&\cdots&a_{i+2,j-1}&a_{i+2,j+1}&a_{i+2,j+2}&\cdots&a_{i+2,n}\\ & & & & & &0&\cdots&\vdots&\vdots&\vdots&\cdots&\vdots\\ & & & & & &&\ddots&a_{j-1,j-1}&a_{j-1,j+1}&a_{j-1,j+2}&\cdots&a_{j-1,n}\\ & & & & & &&&0&a_{j,j+1}&a_{j,j+2}&\cdots&a_{j,n}\\ & & & & & &&&&a_{j+1,j+1}&a_{j+1,j+2}&\cdots&a_{j+1,n}\\ & & & & & &&&&&a_{j+2,j+2}&\cdots&a_{j+2,n}\\ & & & & & &&&&&&\ddots&\vdots\\ & & & & & &&&&&&&a_{n,n}\\ \end{pmatrix}. $$ So, $\det A_{i,j}=0$ if $i,j$ are two integers such that $i,j\in\{1,\dots,n\} $ and $i<j$. Let $C_{i,j}$ be the $(i,j)$-cofactor of $A$. Then, $C_{i,j}=(-1)^{i+j}\det A_{i,j}=0$ if $i,j$ are two integers such that $i,j\in\{1,\dots,n\} $ and $i<j$. So, $$A^{-1}=\frac{1}{\det A}\begin{pmatrix}C_{11}&C_{21}&\cdots&C_{n,1}\\ C_{12}&C_{22}&\cdots&C_{n,2}\\ \vdots&\vdots&&\vdots\\ C_{1n}&C_{2n}&\cdots&C_{n,n}\\ \end{pmatrix}=\frac{1}{\det A}\begin{pmatrix}C_{11}&C_{21}&\cdots&C_{n,1}\\ 0&C_{22}&\cdots&C_{n,2}\\ \vdots&\vdots&&\vdots\\ 0&0&\cdots&C_{n,n}\\ \end{pmatrix}.$$
What is wrong with my reasoning? The Questions $70\%$ of all vehicles pass inspection. Assuming vehicles pass or fail independently. What is the probability: a) exactly one of the next $3$ vehicles passes b) at most $1$ of the next $3$ vehicles passes The answer to a) is $.189.$ The way I calculated it was: $P(\text{success}) \cdot P(\text{fail}) \cdot P(\text{fail})\\ + P(\text{fail}) \cdot P(\text{success}) \cdot P(\text{fail})\\ + P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{success})\\ =.7\cdot.3\cdot.3 + .3\cdot.7\cdot.3 + .3\cdot.3\cdot.7\\ = .189$ I summed the $3$ possible permutations of $1$ success and $2$ failures. For b) the answer is $.216.$ To get that answer you take your answer to a) and add the probability of exactly $0$ successes which is $P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail}) = .189 + .3\cdot.3\cdot.3 = .216$ What I don't understand is why the probability of exactly $0$ successes doesn't follow the pattern of exactly $1$ success. Why doesn't the "formula" work: $P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail})\\ + P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail})\\ + P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail})\\ = .3\cdot.3\cdot.3+.3\cdot.3\cdot.3+.3\cdot.3\cdot.3\\ = .081$ $\Rightarrow .189 + .081 = .27$ (not $.216$) Now I'm wondering if I calculated the answer to a) the wrong way, and it was merely a coincidence that I got the right answer!
In part a, there are exactly three ways for one out of three cars to pass, they are the three possibilities that you added. But there is only one way for all the cars to fail! The first car must fail, the second car must fail, and the third car must fail. Since there is only one way for this to happen you only consider this one probability. Btw there are also three ways for exactly two cars to pass and only one way for all three of them to pass. Hope this helps.
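A quick numeric cross-check (a sketch of my own, using the binomial distribution directly):

```python
from math import comb

p = 0.7
exactly = lambda k: comb(3, k) * p**k * (1 - p)**(3 - k)
print(exactly(1))                    # 0.189  (part a: exactly one passes)
print(exactly(0) + exactly(1))       # 0.216  (part b: at most one passes)
```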
What are we solving with row reduction? Why are row reductions so useful in linear algebra? It is easy to get lost in the mechanical solving of equations. I know I can get a matrix into reduced row echelon form. But what are the outcomes of this? What can this mean?
Well, Dan and muad have already told you everything, but let me stress the two main points (in my opinion): * *Row reduction is just the same as solving one equation for one unkown in terms of the others and then plugging the obtained expression into the remaining equations. (But without having to write down the unkowns all the time.) In a nutshell, row reduction = solving your system. *Row reduction doesn't change the solution of the system of linear equations. (So the target in performing row reduction is to obtain a simpler system, with the same solutions as the original one, but such that solutions can be read directly from it.) The second statement can be proved formally, but I think that, if I can convince you that (1) is true, you won't need this formal proof, will you? (Because, when you solve your system, you're sure that, if you don't make mistakes, the solution you arrive at is, well, the solution. You don't change the solution of a system when you perform your operations in order to solve it, do you? So, if row reduction = solving your system and you don't change the solutions of your system while solving it, then row reduction doesn't change the solutions of your system.) So let's see how row reduction = solving your system. Say you're trying to solve the following system: $$ \begin{align} 5x - 3y &= 3 \\ 2x + 4y &= 2 \end{align} $$ You realize that the second equation looks nicer than the first one, so you decide to start with it and interchange both equations: $$ \begin{align} 2x + 4y &= 2 \\ 5x - 3y &= 3 \end{align} \qquad \qquad \qquad \qquad \textbf{Step one} $$ Then you solve the first equation for $x$. You divide by two your, now, first equation $$ x + 2y = 1 \qquad \qquad \qquad \qquad \textbf{Step two} $$ solve it for $x$, $x = 1 -2y$, and plug it into the second one, that is $5(1-2y) - 3y = 3$. You obtain: $$ -13y = -2 \qquad \qquad \qquad \qquad \textbf{Step three} $$ Then, you solve the second equation for $y$, dividing by $-13$ $$ y = \frac{2}{13} \qquad \qquad \qquad \qquad \textbf{Step four} $$ and you perform back subtitution, that is you plug this $y$ into the first equation, $x = 1- 2\frac{2}{13}$, getting $$ x = \frac{9}{13} \qquad \qquad \qquad \qquad \textbf{Step five} $$ So you know the solution of your system is: $$ \begin{align} x &= \frac{9}{13} \\ y &= \frac{2}{13} \end{align} $$ Now, we are going to do exactly the same, but with row reduction. Our system of equations is the same as its augmented matrix: $$ \left( \begin{array}{rr|r} 5 & -3 & 3 \\ 2 & 4 & 2 \end{array} \right) $$ In Step one, we've interchanged both equations. Now, we interchange the two rows: $$ \left( \begin{array}{rr|r} 2 & 4 & 2 \\ 5 & -3 & 3 \end{array} \right) \qquad \qquad \qquad \qquad \textbf{Step one} $$ In Step two, we divided by two the first equation. Now, we divide by two the first row: $$ \left( \begin{array}{rr|r} 1 & 2 & 1 \\ 5 & -3 & 3 \end{array} \right) \qquad \qquad \qquad \qquad \textbf{Step two} $$ In Step three, we plugged $x = 1 -2y$ into the second equation. Now, we substract five times the first equation from the second: $$ \left( \begin{array}{rr|r} 1 & 2 & 1 \\ 0 & -13 & -2 \end{array} \right) \qquad \qquad \qquad \qquad \textbf{Step three} $$ In Step four, we divided the second equation by $-13$. Now, we divide the second row by $-13$: $$ \left( \begin{array}{rr|r} 1 & 2 & 1 \\ 0 & 1 & \frac{2}{13} \end{array} \right) \qquad \qquad \qquad \qquad \textbf{Step four} $$ In Step five, we performed back substitution. 
Now, we subtract $2$ times the second row from the first one: $$ \left( \begin{array}{rr|r} 1 & 0 & \frac{9}{13} \\ 0 & 1 & \frac{2}{13} \end{array} \right) \qquad \qquad \qquad \qquad \textbf{Step five} $$ Now, the solution of your system is in the third column, because the system that corresponds to this matrix is: $$ \begin{align} x &= \frac{9}{13} \\ y &= \frac{2}{13} \end{align} $$
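If you want to see the same row reduction carried out by a computer algebra system, here is a small sympy sketch of my own reproducing the example above:

```python
import sympy as sp

M = sp.Matrix([[5, -3, 3],
               [2,  4, 2]])          # augmented matrix of the system
rref, pivots = M.rref()
print(rref)    # Matrix([[1, 0, 9/13], [0, 1, 2/13]])
```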
Partitioning Integers using only elements from a specific set I know how to find the number of partitions $p(x)$ using a generating function, as my textbook on discrete mathematics explains it in detail. However, I want to know whether it is possible to restrict the source elements of the partition. So, given that I want to find the partitions of $x$, is it possible to do it using only elements from $S=\{y_1, y_2, y_3, \ldots, y_p : y_i \in \mathbb{Z}\ \forall i\}$? An example might be partitioning any number using only 4 and 9. If so, how? This is a homework question of sorts but this is not the actual homework question (because I can solve that - I just can't find a general answer for it yet and this is a way I think I can generalise (possibly)).
It's the same generating function method. If $p_S(n)$ denotes the number of partitions of $n$ using only positive integers in some set $S \subset \mathbb{N}$, then $$\sum_{n \ge 0} p_S(n) x^n = \prod_{s \in S} \frac{1}{1 - x^s}.$$ A popular choice is $S = \{ 1, 5, 10, 25 \}$ (the problem of making change). When $S$ is finite the above function is rational and it is possible to give a closed form for $p_S$. If the question is about whether such a partition exists at all, for sufficiently large $n$ this is possible if and only if the greatest common divisor of the elements of $S$ divides $n$. For small $n$ there are greater difficulties; see the Wikipedia article on the Frobenius problem. When $|S| = 2$ (say $S = \{ a, b \}$) and the elements of $S$ are relatively prime, the largest $n$ for which no such partition exists is known to be $ab - a - b$. When $a = 4, b = 9$ this gives $23$, so for any $n \ge 24$ such a partition always exists.
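Here is a short dynamic-programming sketch (my own code) that computes $p_S(n)$, i.e. the coefficients of $\prod_{s\in S} \frac{1}{1-x^s}$, for the example $S=\{4,9\}$; it also illustrates the Frobenius bound $ab-a-b = 23$ quoted above:

```python
def partitions_with_parts(n_max, parts):
    # p[n] = number of partitions of n using only the given parts
    p = [0] * (n_max + 1)
    p[0] = 1
    for s in parts:                      # incorporate one factor 1/(1 - x^s) at a time
        for n in range(s, n_max + 1):
            p[n] += p[n - s]
    return p

p = partitions_with_parts(30, [4, 9])
print([n for n in range(24, 31) if p[n] == 0])   # []  : every n >= 24 is representable
print(p[23])                                     # 0   : 23 = 4*9 - 4 - 9 has no such partition
```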
Given coordinates of hypotenuse, how can I calculate coordinates of other vertex? I have the Cartesian coordinates of the hypotenuse 'corners' in a right angle triangle. I also have the length of the sides of the triangle. What is the method of determining the coordinates of the third vertex where the opposite & adjacent sides meet. Thanks, Kevin.
Let $AB$ be the hypotenuse, let vector $\vec c=\overrightarrow{OB}-\overrightarrow{OA}$, its length $c$, the right angle at $C$, $DC=h$ the height of the triangle, $a$ and $b$ the given lengths of the legs, and $q$ the length of $AD$, as usual. Define $J\colon \mathbb{R}^2\rightarrow \mathbb{R}^2$, $(v_1,v_2)\mapsto (-v_2,v_1)$, the rotation by 90 degrees. We know by Euclid that $q=b^2/c$ and elementarily that $ab=ch$, so $h=ab/c$. Then $\vec c/c$ is the unit vector of $\vec c$, thus one solution is $$\overrightarrow{OC}=\overrightarrow{OA}+\frac{b^2}{c}\frac{\vec c}{c}+\frac{ab}{c}J\Bigl(\frac{\vec c}{c}\Bigr)=\overrightarrow{OA}+\frac{b}{c^2}\bigl(b\vec c+ aJ(\vec c)\bigr).$$ Can you find the second solution? Moral: Avoid coordinates! Michael
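Here is a small numeric sketch of that construction (my own code; the variable names follow the answer, and the 3-4-5 example triangle is an assumption, not from the question):

```python
import math

def third_vertex(A, B, a, b, sign=+1):
    """Right angle at C; b = |AC|, a = |BC|, hypotenuse AB; sign picks one of the two solutions."""
    cx, cy = B[0] - A[0], B[1] - A[1]          # vector c = B - A
    c = math.hypot(cx, cy)
    q, h = b * b / c, a * b / c                # |AD| and the height, as in the answer
    ux, uy = cx / c, cy / c                    # unit vector along c
    jx, jy = -uy, ux                           # J(u): rotation by 90 degrees
    return (A[0] + q * ux + sign * h * jx,
            A[1] + q * uy + sign * h * jy)

# assumed example: legs 3 and 4, hypotenuse from (0,0) to (5,0)
print(third_vertex((0, 0), (5, 0), a=4, b=3))           # (1.8, 2.4)
print(third_vertex((0, 0), (5, 0), a=4, b=3, sign=-1))  # (1.8, -2.4), the second solution
```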
What structure does the alternating group preserve? A common way to define a group is as the group of structure-preserving transformations on some structured set. For example, the symmetric group on a set $X$ preserves no structure: or, in other words, it preserves only the structure of being a set. When $X$ is finite, what structure can the alternating group be said to preserve? As a way of making the question precise, is there a natural definition of a category $C$ equipped with a faithful functor to $\text{FinSet}$ such that the skeleton of the underlying groupoid of $C$ is the groupoid with objects $X_n$ such that $\text{Aut}(X_n) \simeq A_n$? Edit: I've been looking for a purely combinatorial answer, but upon reflection a geometric answer might be more appropriate. If someone can provide a convincing argument why a geometric answer is more natural than a combinatorial answer I will be happy to accept that answer (or Omar's answer).
(This is, essentially, just a «repackaging» of your answer. Still, I find this version somewhat more satisfying — at least, it avoids even mentioning total orders.) For a finite set $X$ consider projection $\pi\colon X^2\to S^2 X$ (where $S^2 X=X^2/S_2$ is the symmetric square). To a section $s$ of the projection one can associate a polynomial $\prod\limits_{i\neq j,\,(i,j)\in\operatorname{Im}s}(x_i-x_j)$ — and since any two such products coincide up to a sign, this gives a partition of the set $\operatorname{Sec}(\pi)$ into two parts. Now, $A_X$ is the subgroup of $S_X$ preserving both elements of this partition. (I.e. the structure is choice of one of two elements of the described partition of $\operatorname{Sec}(\pi)$.)
Vivid examples of vector spaces? When teaching abstract vector spaces for the first time, it is handy to have some really weird examples at hand, or even some really weird non-examples that may illustrate the concept. For example, a physicist friend of mine uses "color space" as a (non) example, with two different bases given essentially {red, green, blue} and {hue, saturation and brightness} (see http://en.wikipedia.org/wiki/Color_space). I say this is a non-example for a number of reasons, the most obvious being the absence of "negative color". Anyhow, what are some bizarre and vivid examples of vector spaces you've come across that would be suitable for a first introduction?
The solutions of the differential equation $y''+p y' +q y=0$ on some interval $I\subset{\mathbb R}$ form a vector space $V$ of functions $f:I\to{\mathbb R}$. What is the dimension of this space? Physical intuition or the fundamental existence and uniqueness theorem for differential equations tell you that this dimension is 2: Consider the two initial problems $y(0)=1, y'(0)=0$ and $y(0)=0, y'(0)=1$. The two corresponding solutions $y_1(\cdot)$, $y_2(\cdot)$ form a basis of $V$. Now comes the upshot: You can "guess" explicit solutions of the form $y(t):=e^{\lambda t}$ for suitable $\lambda$'s (apart from special cases), and in this way you obtain a completely different basis of $V$. Of course this is not "weird", but it is an instance of a finite-dimensional vector space which does not have a "natural" basis to begin with.
Evaluating the integral $\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$? A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that $$\int\limits_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$ Well, can anyone prove this without using Residue theory? I actually thought of using the series representation of $\sin t$: $$\int\limits_0^\infty \frac{\sin t} t \,\mathrm dt = \lim\limits_{n \to \infty} \int\limits_0^n \frac{1}{t} \left( t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots \right) \,\mathrm dt$$ but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.
By the definition of the Laplace transform: \begin{equation*} F(s)=L\left[\frac{\sin(t)}{t}\right]=\int_{0}^{\infty}\frac{\sin(t)}{t}e^{-st}dt=\arctan\left(\frac{1}{s}\right) \end{equation*} Then, for $s=0$, \begin{equation*} F(0)=\int_{0}^{\infty}\frac{\sin(t)}{t}dt = \lim_{s\to0}\arctan\left(\frac{1}{s}\right)=\lim_{u\to\infty}\arctan(u)=\frac{\pi}{2} \end{equation*}
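For what it's worth, a computer algebra system confirms both the transform and the value; a small sympy sketch of my own:

```python
import sympy as sp

t, s, x = sp.symbols('t s x', positive=True)
print(sp.integrate(sp.sin(x) / x, (x, 0, sp.oo)))    # pi/2
print(sp.laplace_transform(sp.sin(t) / t, t, s))     # atan(1/s) (up to an equivalent form), plus convergence info
```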
Determine whether a number is prime How do I determine if a number is prime? I'm writing a program where a user inputs any integer and from that the program determines whether the number is prime, but how do I go about that?
The algorithm posted by jericson is the best for basic purposes. IMHO, for programming competitions and for practical purposes randomized algorithms are best. Rabin-Miller is my favorite. Take a look at the Rabin-Miller primality testing algorithm code @ TopCoder. As primes are in P, there is a deterministic, polynomial-time algorithm called the AKS primality test.
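For reference, here is a rough sketch of a Miller–Rabin test in Python (my own code, following the standard algorithm; for cryptographic or competition use you would tune the number of rounds or use fixed witness sets):

```python
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a witnesses that n is composite
    return True                    # n is probably prime

print([n for n in range(2, 40) if is_probable_prime(n)])
```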
What's the generalisation of the quotient rule for higher derivatives? I know that the product rule is generalised by Leibniz's general rule and the chain rule by Faà di Bruno's formula, but what about the quotient rule? Is there a generalisation for it analogous to these? Wikipedia mentions both Leibniz's general rule and Faà di Bruno's formula for the product and the chain rule, but rather nothing for the quotient rule.
The answer is: $\frac{d^n}{dx^n} \left(\frac{f(x)}{g(x)} \right) = \sum_{k=0}^n {(-1)^k \tbinom{n}{k} \frac{d^{n-k}\left(f(x)\right)}{dx^{n-k}}}\frac{A_k}{g(x)^{k+1}} $ where $A_0=1$ and $A_n=n\frac{d\left(g(x)\right)}{dx}\ A_{n-1}-g(x)\frac{d\left(A_{n-1}\right)}{dx}$. For example, let $n=3$: $\frac{d^3}{dx^3} \left (\frac{f(x)}{g(x)} \right ) =\frac{1}{g(x)} \frac{d^3\left(f(x)\right)}{dx^3}-\frac{3}{g^2(x)}\frac{d^2\left(f(x)\right)}{dx^2}\left[\frac{d\left(g(x)\right)}{d{x}}\right] + \frac{3}{g^3(x)}\frac{d\left(f(x)\right)}{d{x}}\left[2\left(\frac{d\left(g(x)\right)}{d{x}}\right)^2-g(x)\frac{d^2\left(g(x)\right)}{dx^2}\right]-\frac{f(x)}{g^4(x)}\left[6\left(\frac{d\left(g(x)\right)}{d{x}}\right)^3-6g(x)\frac{d\left(g(x)\right)}{d{x}}\frac{d^2\left(g(x)\right)}{dx^2}+g^2(x)\frac{d^3\left(g(x)\right)}{dx^3}\right]$ Relation with Faà di Bruno coefficients: The $A_n$ also have a combinatorial form, similar to the Faà di Bruno coefficients (ref http://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno). An explanation via an example (where for shortness $g'=\frac{d\left(g(x)\right)}{dx}$, $g''=\frac{d^2\left(g(x)\right)}{dx^2}$, etc.): Suppose we want to find $A_4$. The partitions of 4 are: $1+1+1+1, 1+1+2, 1+3, 4, 2+2$. Now for each partition we can use the following pattern: $1+1+1+1 \leftrightarrow C_1g'g'g'g'=C_1\left(g'\right)^4$ $1+1+2+0 \leftrightarrow C_2g'g'g''g=C_2g\left(g'\right)^2g''$ $1+3+0+0 \leftrightarrow C_3g'g'''gg=C_3\left(g\right)^2g'g'''$ $4+0+0+0 \leftrightarrow C_4g''''ggg=C_4\left(g\right)^3g''''$ $2+2+0+0 \leftrightarrow C_5g''g''gg=C_5\left(g\right)^2\left(g''\right)^2$ with $C_i=(-1)^{(4-t)}\frac{4!t!}{m_1!\,m_2!\,m_3!\,\cdots 1!^{m_1}\,2!^{m_2}\,3!^{m_3}\,\cdots}$ (ref. closed-form of the Faà di Bruno coefficients) where $t$ is the number of parts different from $0$, and $m_i$ is the number of parts equal to $i$. We have $C_1=24$ (with $m_1=4, t=4$), $C_2=-36$ (with $m_1=2, m_2=1, t=3$), $C_3=8$ (with $m_1=1, m_3=1, t=2$), $C_4=-1$ (with $m_4=1, t=1$), $C_5=6$ (with $m_2=2,t=2$). Finally $A_4$ is the sum of the formula found for each partition, i.e. $A_4=24\left(g'\right)^4-36g\left(g'\right)^2g''+8\left(g\right)^2g'g'''-\left(g\right)^3g''''+6\left(g\right)^2\left(g''\right)^2$
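If you want to double-check the formula symbolically, here is a small sympy sketch of my own that builds the $A_k$ from the stated recursion and compares against direct differentiation:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

def A(n):
    # A_0 = 1,  A_n = n*g'*A_{n-1} - g*A_{n-1}'
    if n == 0:
        return sp.Integer(1)
    prev = A(n - 1)
    return n * sp.diff(g, x) * prev - g * sp.diff(prev, x)

n = 3
claimed = sum((-1)**k * sp.binomial(n, k) * sp.diff(f, x, n - k) * A(k) / g**(k + 1)
              for k in range(n + 1))
direct = sp.diff(f / g, x, n)
print(sp.simplify(claimed - direct))   # 0
```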
Covering of a topological group is a topological group If we have a covering $p:H\rightarrow G$ where $G$ is a topological group, then $H$ is also a topological group. The multiplication function can be defined as follows. Consider the map $f:H\times H \rightarrow G$ which is a composition of the map $p\times p$ and the multiplication function on $G$. Choose $h\in p^{-1}(e)$ where $e$ is the identity element of $G$. If $$f_* (\pi_1(H\times H,(h,h))) \subset p_*(\pi_1(H,h)),$$ then $f$ can be lifted to a map $g:H\times H \rightarrow H$ such that $p\circ g = f$ and $g(h,h) = h$. Suppose we have shown the "if" part, then $g$ should function as our multiplication map on $H$. But given any $x\in H$, why do we know that $g(x,h) = x$ and that $g(x,h)$ does not equal any other element of $p^{-1}(p(x))$?
Consider the map $k: H\to H$ given by $k(x) = g(x,h)$. Then for any $x\in H$, we have $$p\circ k(x) = p\circ g(x,h) = m\circ (p\times p)(x,h) = m(p(x),e) = p(x),$$ which implies that $k$ is a lift of $p\colon H\to G$. Note also that $k(h) = g(h,h) = h$. Thus $k$ and the identity are both lifts of $p$ that agree at a point, so they are equal. This implies $g(x,h)=x$ for all $x$.
Perfect numbers, the pattern continues The well known formula for perfect numbers is $$ P_n=2^{n-1}(2^{n}-1). $$ This formula is obtained by observing some patterns on the sum of the perfect number's divisors. Take for example $496$: $$ 496=1+2+4+8+16+31+62+124+248 $$ one can see that the first pattern is a sequence of powers of $2$ that stops at $16$, the second pattern starts with a prime number, in this case $31$, the rest of them are multiples of $31$, e.g. $2\cdot 31, 4\cdot 31$ and $8\cdot 31$. But I found that the pattern proceeds after $31$, for example $31=2^5-2^0$, $62=2^6-2^1$, $124=2^7-2^2$ and finally $248=2^8-2^3$, so the perfect number can be written as $$ 496=1+2+4+8+16+(2^5-2^0)+(2^6-2^1)+(2^7-2^2)+(2^8-2^3) $$ or $$ 496=(1+2+4+8+16+32+64+128+256) -(1+2+4+8). $$ So the formula follows very naturally from this. I've searched but didn't find this formulation anywhere. Well, is this something new? Has anyone seen this somewhere?
You've observed that $P_5 = 2^4(2^5-1) = 496$ can also be written as the sum of the first 9 powers of two minus the sum of the first four powers of two. Sums of powers of two Powers of two written in binary look like $1, 10, 100, 1000, \cdots$ but you can also write them like this $1, 1+1, 11+1, 111+1, \cdots$. This explains why (now in decimal notation) $1 + 2 + 4 + 8 = 15$ and the general sums of this form. Nine and four Well obvious $4 = 5-1$, and $9 = 2\cdot 5 - 1$, but where do these come from? Well just multiply out the formula! $2^{n-1}(2^n-1) = 2^{2n-1}-2^{n-1}$. This explains why the form $(1 + 2 + 4 + 8 + \cdots) - (1 + 2 + \cdots)$ works.
How is the codomain for a function defined? Or, in other words, why aren't all functions surjective? Isn't any subset of the codomain which isn't part of the image rather arbitrary?
Yes it is arbitrary, and in a sense this is exactly the problem. When we have a function we would like to be able to talk about the inverse of the function (assuming it is 1-1.) Consider the following: I have some function $f$ that maps $\mathbb{N} \rightarrow \mathbb{R}$ such that $$f(x) = (e^\sqrt{3.7x}+137.2)^3$$ The point here is that it is much easier to say that my codomain is $\mathbb{R}$ and my function is not surjective than to explicitly list the image of my function and say that $f$ is surjective on this subset of $\mathbb{R}$. I know that every output will be a real number, but it is hard to say exactly which numbers I will get out of $f$ and which real numbers I can never get. To get to the point, if my function was surjective I know that I could plug any real number into $f^{-1}$. But I know it's not and I therefore have to be very careful with my inputs to $f^{-1}$, restricting my attention only to $f$'s image (otherwise the inverse is undefined.) In this case, we say that $f$ does not have the 'nice' property of surjectivity.
Find thickness of a coin This is one of the questions asked in a written test conducted by a company. The question sounded stupid to me. Maybe it's not. "Given the area of the coin to be 'A'. If the probability of getting a tail, head and the edge are the same, what is the thickness of the coin?"
I assumed that the probability of getting a head, tail or edge depended on the angle from the centre of the coin that the side lies in. So the head, tail and edge must each occupy 120 degrees when viewed along the axis of rotation. In the diagram above the angles at the centre are all (meant to be) 60 degrees and the radius of each face is $\sqrt{A/\pi}$. A small amount of trigonometry later and I found the edge length to be $\sqrt{\frac{4A}{3\pi}}$.
Geometric Progression If $S_1$, $S_2$ and $S$ are the sums of $n$ terms, $2n$ terms and to infinity of a G.P. Then, find the value of $S_1(S_1-S)$. PS: Nothing is given about the common ratio.
I change your notation from S1, S2 and S to $S_{n},S_{2n}$ and $S$. The sum of $n$ terms of a geometric progression of ratio $r$ $u_{1},u_{2},\ldots ,u_{n}$ is given by $S_{n}=u_{1}\times \dfrac{1-r^{n}}{1-r}\qquad (1)$. Therefore the sum of $2n$ terms of the same progression is $S_{2n}=u_{1}\times \dfrac{1-r^{2n}}{1-r}\qquad (2)$. Assuming that the sum $S$ exists, it is given by $S=\lim S_{n}=u_{1}\times \dfrac{1}{1-r}\qquad (3)$. Since the "answer is S(S1-S2)", we have to prove this identity $S_{n}(S_{n}-S)=S(S_{n}-S_{2n})\qquad (4).$ Plugging $(1)$, $(2)$ and $(3)$ into $(4)$ we have to prove the following equivalent algebraic identity: $u_{1}\times \dfrac{1-r^{n}}{1-r}\left( u_{1}\times \dfrac{1-r^{n}}{1-r}% -u_{1}\times \dfrac{1}{1-r}\right) $ $=u_{1}\times \dfrac{1}{1-r}\left( u_{1}\times \dfrac{1-r^{n}}{1-r}-u_{1}\times \dfrac{1-r^{2n}}{1-r}\right) \qquad (5)$, which, after simplifying $u_1$ and the denominator $1-r$, becomes: $\dfrac{1-r^{n}}{1}\left( \dfrac{1-r^{n}}{1}-\dfrac{1}{1}\right) =\left( \dfrac{% 1-r^{n}}{1}-\dfrac{1-r^{2n}}{1}\right) \qquad (6)$. This is equivalent to $\left( 1-r^{n}\right) \left( -r^{n}\right) =-r^{n}+r^{2n}\iff 0=0\qquad (7)$. Given that $(7)$ is true, $(5)$ and $(4)$ are also true.
Prove that the interior of the set of all orthogonal vectors to "a" is empty I made a picture of the problem here: If the link does not work, read this: Let $a$ be a non-zero vector in $\mathbb{R}^n$. Let S be the set of all orthogonal vectors to $a$ in $\mathbb{R}^n$. I.e., for all $x \in \mathbb{R}^n$, $a\cdot x = 0$ Prove that the interior of S is empty. How can I show that for every point in S, all "close" points are either in the complement of S or in S itself? This is what I attempted: Let $u\in B(r,x) = \\{ v \in \mathbb{R}^n : |v - x| < r \\} $ So $|u - x| < r$ Then, $|a||u - x| < |a|r$. By Cauchy-Schwarz, $|a\cdot(u-x)| \leq |a||u - x|$. Then, $|a\cdot u - a\cdot x| < |a|r$. If $u\in S$, then either $u\in S^{\text{int}}$ or $u\in \delta S$. ($\delta$ denotes boundary). If $u\in S^{\text{int}}$, then $B(r,x) \subset S$, and $a\cdot u = 0$. But then the inequality becomes $|a|r > 0$ which implies $B(r,x) \subset S \forall r > 0$, but this is impossible since it would also imply that $S = \mathbb{R}^n$ and $S^c$ is empty, which is false. Therefore, if $u\in S$, then $u\in\delta S$. Hence, $\forall u\in B(r, x)$ such that $u\in S$, $u\in\delta S$. Thus $S^{\text{int}}$ is empty.
The condition that $x$ be orthogonal to $a$, i.e. that $x$ lies in $S$, is that $x \cdot a = 0$. Imagine perturbing $x$ by a small amount, say to $x'$. If $x$ were in the interior, then one would have $x' \cdot a = 0$ as well, provided that $x'$ is very close to $x$. Think about whether this is possible for every $x'$. (Hint: $x'$ has to be close to $x$, i.e. $x - x'$ has to be small. But it can point in any direction!)
Showing $G$ is the product of groups of prime order Let $G$ be a (not necessarily finite) group with the property that for each subgroup $H$ of $G$, there exists a `retraction' of $G$ to $H$ (that is, a group homomorphism from $G$ to $H$ which is the identity on $H$). Then we claim: (1) $G$ is abelian; (2) each element of $G$ has finite order; (3) each element of $G$ has square-free order. Let $g$ be a nontrivial element of $G$ and consider a retraction $T : G \to \langle g\rangle$ which is the identity on $\langle g\rangle$. As $G/Ker(T)$ is isomorphic to $\operatorname{Im}(T) = \langle g\rangle$, it is cyclic and so it is abelian. Other than this I don't know how to prove the other claims of the problem. Moreover, a similar problem was asked in the Berkeley Ph.D. exam in the year 2006, which actually asks us to prove that: If $G$ is finite and there is a retraction for each subgroup $H$ of $G$, then $G$ is a product of groups of prime order.
Let $g$ be a nontrivial element of $G$ and consider a retraction $T : G \to \langle g\rangle$ which is the identity on $\langle g\rangle$. As $G/Ker(T)$ is isomorphic to $\operatorname{Im}(T) = \langle g\rangle$, it is cyclic, and so it is abelian. Thus $[G,G]$ is contained in $Ker(T)$. Since $g \notin Ker(T)$, $g \notin [G,G]$. As $g$ is an arbitrary nontrivial element of $G$, this means that $[G,G] = \{e\}$; that is, $G$ is abelian. Look at any element $g \in G$ and consider a retraction $T:G \to \langle g^2 \rangle$. That $T(g)$ is in $\langle g^2 \rangle$ means $T(g) = g^{2r}$ for some $r$. Also, $T(g^2)=g^2$ then means that $g^{4r}=g^2$; that is, $g^{4r-2} = e$. As $4r-2$ is not zero, we get that $g$ has finite order.
Characterizing continuous functions based on the graph of the function I had asked this question: Characterising Continuous functions some time back, and this question is more or less related to that question. Suppose we have a function $f: \mathbb{R} \to \mathbb{R}$ and suppose the set $G = \{ (x,f(x)) : x \in \mathbb{R}\}$ is connected and closed in $\mathbb{R}^{2}$. Does it imply that $f$ is continuous?
Yes, I think so. First, observe that such $f$ must have the intermediate value property. For suppose not; then there exist $a < b$ with (say) $f(a) < f(b)$ and $y \in (f(a),f(b))$ such that $f(x) \ne y$ for all $x \in (a,b)$. Then $A = (-\infty,a) \times \mathbb{R} \cup (-\infty,b) \times (-\infty,y)$ and $B = (b, +\infty) \times \mathbb{R} \cup (a,+\infty) \times (y,+\infty)$ are disjoint nonempty open subsets of $\mathbb{R}^2$ whose union contains $G$, contradicting connectedness. (Draw a picture.) Now take some $x \in \mathbb{R}$, and suppose $f(x) < y < \limsup_{t \uparrow x} f(t) \le +\infty$. Then there is a sequence $t_n \uparrow x$ with $f(t_n) > y$ for each $n$. By the intermediate value property, for each $n$ there is $s_n \in (t_n, x)$ with $f(s_n) = y$. So $(s_n, y) \in G$ and $(s_n,y) \to (x,y)$, so since $G$ is closed $(x,y) \in G$ and $y = f(x)$, a contradiction. So $\limsup_{t \uparrow x} f(t) \le f(x)$. Similarly, $\limsup_{t \downarrow x} f(t) \le f(x)$, so $\limsup_{t \to x} f(t) \le f(x)$. Similarly, $\liminf_{t \to x} f(t) \ge f(x)$, so that $\lim_{t \to x} f(t) = f(x)$, and $f$ is continuous at $x$.
Proving ${n \choose p} \equiv \Bigl[\frac{n}{p}\Bigr] \ (\text{mod} \ p)$ This is an exercise from Apostol, which I have been struggling with for a while. Given a prime $p$, how does one show that $${n \choose p} \equiv \biggl[\frac{n}{p}\biggr] \ (\text{mod} \ p)$$ Note that $\Bigl[\frac{n}{p}\Bigr]$ denotes the integral part of $\frac{n}{p}$. I would also like to know how one tries to solve this problem. Well, what we need to show is that whenever one divides ${n \choose p}$ by a prime $p$, the remainder is the integral part of $\frac{n}{p}$ modulo $p$. Now, $${ n \choose p} = \frac{n!}{p! \cdot (n-p)!}$$ Now $n!$ can be written as $$n!= n \cdot (n-1) \cdot (n-2) \cdots (n-p) \cdots 2 \cdot 1$$ But I am really struggling in getting the integral part.
You can see the solution for the case $p=7$ here: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=1775313.
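In case the link is not accessible, here is a quick brute-force check of the statement for small values (a sketch of my own, not a proof):

```python
from math import comb

for p in (2, 3, 5, 7, 11):
    for n in range(0, 500):
        assert comb(n, p) % p == (n // p) % p, (n, p)
print("verified for small cases")
```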
Finding a line that satisfies three conditions Given lines $\mathbb{L}_1 : \lambda(1,3,2)+(-1,3,1)$, $\mathbb{L}_2 : \lambda(-1,2,3)+(0,0,-1)$ and $\mathbb{L}_3 : \lambda(1,1,-2)+(2,0,1)$, find a line $\mathbb{L}$ such that $\mathbb{L}$ is parallel to $\mathbb{L}_1$, $\mathbb{L}\cap\mathbb{L}_2 \neq \emptyset$ and $\mathbb{L}\cap\mathbb{L}_3 \neq \emptyset$. Since $\mathbb{L}$ must be parallel to $\mathbb{L}_1$, then $\mathbb{L}:\lambda(1,3,2)+(x,y,z)$ but I can't figure out how to get that (x,y,z) point. I'd like to be given just a slight nod because I'm sure the problem is really easy. Thanks a lot!
I'd like to add the following to Agustí Roig's splendid answer. The equation of the general line passing through both $\mathbb{L}_2$ and $\mathbb{L}_3$ is given by $\underline{r} = \underline{a}+ \mu(\mathbb{L}_2 - \mathbb{L}_3),$ where $\underline{a}$ is some point on this line and its direction is $\mathbb{L}_2 - \mathbb{L}_3.$ Now we know that the direction is (1,3,2), so set $\lambda (1,3,2) = \mathbb{L}_2 - \mathbb{L}_3.$ That is $$\lambda (1,3,2) = (0,0,-1)+\lambda_2(-1,2,3) - (2,0,1) - \lambda_3(1,1,-2).$$ Hence we obtain $\lambda_2 = -2$, $\lambda_3 = 2$ and $\lambda = -2.$ Putting $\lambda_2 = -2$ in the equation of $\mathbb{L}_2$ gives the point (2,-4,-7), which is equivalent to w=0 in my comment to Agustí's answer. I hope that this adds some value for you.
Integral solutions to $y^{2}=x^{3}-1$ How to prove that the only integral solution to the equation $$y^{2}=x^{3}-1$$ is $x=1, y=0$. I rewrote the equation as $y^{2}+1=x^{3}$ and then we can factorize $y^{2}+1$ as $$y^{2}+1 = (y+i) \cdot (y-i)$$ in $\mathbb{Z}[i]$. Next I claim that the factors $y+i$ and $y-i$ are coprime. But I am not able to show this. Any help would be useful. Moreover, I would also like to see different proofs of this question. Extending Consider the equation $$y^{a}=x^{b}-1$$ where $a,b \in \mathbb{Z}$ and $(a,b)=1$ and $a < b$. Is there any result which says something about the nature of the solutions to this equation?
Let $\alpha\in\mathbb{Z}[i]$ be a divisor of $y+i$ and $y-i$. Then $\alpha\mid 2=i\bigl((y-i)-(y+i)\bigr)$ and $\alpha\mid(y-i)(y+i)=x^3$. Note that $x$ is odd: if $x$ were even, then $x^3\equiv 0 \pmod 4$, while $y^2+1\equiv 1$ or $2 \pmod 4$. Hence $x^3$ is odd and therefore, by Bezout, there exist $A,B\in\mathbb{Z}$ such that $Ax^3+2B=1$, so $\alpha\mid 1$, implying $\alpha\in\mathbb{Z}[i]^{\times}$. We conclude that $y+i$ and $y-i$ are coprime.
Calculating total error based on error of variables So I have to find the maximum possible error $dR$ in calculating equivalent resistance for three resistors, $\displaystyle\frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}$ if the maximum error in each is 0.5%; $R_1=25\ \Omega$, $R_2=45\ \Omega$, $R_3=50\ \Omega$. Now, originally I did $dR_1=dR_2=dR_3=0.005$, and then did $\displaystyle\frac{dR}{R^2}=\frac{dR_1}{R_1^2}+\frac{dR_2}{R_2^2}+\frac{dR_3}{R_3^2}$ and solved for $dR$... now I realize now that that doesn't make any sense. I recall when doing an example problem we took the derivative like normal except when doing the chain rule, replacing it with the partial; for example, when $A=\ell w$, $dA = \frac{\partial A}{\partial \ell} d\ell+\frac{\partial A}{\partial w} dw$ (finding error in area of rectangle) and plugged in what I know. How would that work here? Was I close in my original attempt? I feel like I'm not sure where to put all the partials now that there's a bunch of reciprocals everywhere.
This is easy, since $R$ is monotonic as a function of $R_1,R_2,R_3$ (which is obvious from physical considerations: increasing one of the resistances can only increase the total resistance). Thus, to compute error bounds for $R$, it suffices to consider the minimum and maximum possible values for your three resistors: the minimum corrected value of $R$ is $f(0.995R_1, 0.995R_2, 0.995R_3)$, and the maximum corrected value is $f(1.005R_1, 1.005R_2, 1.005R_3)$, where $f(x,y,z) = (x^{-1} + y^{-1} + z^{-1})^{-1}$. But a further simplification is that $f$ is homogeneous of degree 1, i. e., $f(cx, cy, cz) = cf(x,y,z)$ (again, this is obvious intuitively: choosing different units for your resistances multiplies both the individual and total resistances by the same proportionality constant). Thus, $f(0.995R_1, 0.995R_2, 0.995R_3) = 0.995R$, and $f(1.005R_1, 1.005R_2, 1.005R_3) = 1.005R$. Thus, the maximum relative error in the total resistance is also 0.5%.
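Numerically (a quick sketch of my own):

```python
def parallel(*rs):
    return 1 / sum(1 / r for r in rs)

R = parallel(25, 45, 50)
print(R)                        # about 12.16 ohms
print(0.995 * R, 1.005 * R)     # min and max corrected values, given 0.5% error on each resistor
print(0.005 * R)                # maximum absolute error dR, about 0.06 ohms
```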
Derivative of Integral I'm having a little trouble with the following problem: Calculate $F'(x)$: $F(x)=\int_{1}^{x^{2}}(t-\sin^{2}t) dt$ It says we have to use substitution but I don't see why the answer can't just be: $x-\sin^{2}x$
Well according to me the answer is $$F'(x) = \frac{d}{dx}(x^{2}) \cdot \bigl[x^{2}-\sin^{2}(x^{2})\bigr] - \frac{d}{dx}(1) \cdot \bigl[1-\sin^{2}(1)\bigr] = 2x \cdot \Bigl[x^{2} -\sin^{2}(x^{2})\Bigr] - 0:$$ you evaluate the integrand at each limit of integration and multiply by the derivative of that limit; since the lower limit $1$ is constant, its term vanishes.
Calculate combinations of characters My first post here... not really a math expert, but certainly enjoy the challenge. I am writing a random string generator and would like to know how to calculate how many possible combinations there are for a particular pattern. I am generating a string of 2 numbers followed by 2 letters (lowercase), e.g. 12ab. I think the calculation would be (breaking it down): number combinations 10*10=100, letter combinations 26*26=676. So the number of possible combinations is 100*676=67600, but this seems a lot to me so I'm thinking I am off on my calculations!! Could someone please point me in the right direction? Thx
You are right. That is the most basic/fundamental procedure for counting in combinatorics. It's sometimes called the Rule of product, or multiplication principle or fundamental counting principle, and it can be visualized as a tree
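If it helps to convince yourself, here is a tiny brute-force check (a throwaway Python sketch) that simply enumerates every string of that shape and counts them:

```python
from itertools import product
from string import ascii_lowercase, digits

strings = [''.join(s) for s in product(digits, digits, ascii_lowercase, ascii_lowercase)]
print(len(strings))   # 67600 = 10 * 10 * 26 * 26, so the original calculation was right
```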
Matrices commute if and only if they share a common basis of eigenvectors? I've come across a paper that mentions the fact that matrices commute if and only if they share a common basis of eigenvectors. Where can I find a proof of this statement?
An elementary argument. Summary: show that each eigenspace of $A$ has a basis such that each basis vector is contained in one of the eigenspaces of $B$. This basis is then the simultaneous common basis we are looking for. Suppose $A,B$ are both diagonalizable and they commute. Now let $E_{\lambda_i}$ be eigenspaces of $A$ for each distinct eigenvalue $\lambda_i$ of $A$. Now let $F_{s_i}$ be eigenspaces of $B$ for each distinct eigenvalue $s_i$ of $B$. Now I claim that $E_{\lambda_i}$ (of say dimension $m$) has a basis $v_1^i,...,v_m^i\in E_{\lambda_i}$ such that each $v_r^i$ is in one of $B$'s eigenspaces $F_{s_j}$--this would imply these $v_r^i$ are eigenvectors of $B$ and $A$ simultaneously. Apply this to all eigenspaces $E_{\lambda_i}, i=1,...,n$. The collection of all $v_r^i$ then becomes a common basis for $A$ and $B$ as required. To show this claim, first pick an arbitrary basis $w_1,...,w_m$ of $E_{\lambda_i}$. Each $w_i$ can be written as a sum of vectors where each vector is in one of $B$'s eigenspaces $F_{s_j}$. This is a subtle point so let me repeat: for each $i=1,...,m,$ $w_i=z_1^i+...+z_{l_i}^i, l_i\le m$ and $z_k^i\in F_{s_j}$ for some $j$. This is trivially true because the direct sum of $B$'s eigenspaces is the entire space. Now we make a second claim that all $z_k^i\in E_{\lambda_i}$. Then the collection of all $z_k^i$ spans $E_{\lambda_i}$ and thus the collection can be reduced to a basis $v_1,...,v_m$ where each $v_j$ is contained in $E_{\lambda_i}$ as required by the first claim. Note that $E_{\lambda_i}$ is invariant under $B$ since $A,B$ commute. The second claim follows from: if $\sum_{i=1}^N z_i \in S$, where the $z_i$ are eigenvectors of $B$ with distinct eigenvalues and $S$ is a subspace invariant under $B$, then $z_i\in S$ for all $i$. We check this by induction on $N$. It is trivially true for $N=1$. Then suppose $Bz_1=\lambda z_1$. Since $\lambda(z_1+...+z_N)\in S$ and $B(z_1+...+z_N)\in S$, we have $B(z_1+...+z_N)-\lambda(z_1+...+z_N)=a_2z_2+...+a_Nz_N\in S$ for some constants $a_i\neq 0$--the constants are non-zero because we assumed the $z_i$ all have distinct eigenvalues. Then apply the inductive hypothesis to get $z_2,...,z_N\in S$. This would imply $z_1\in S$ as well. This finishes the proof.
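Not a proof of anything, but a small numerical illustration of the easy case (distinct eigenvalues), in which every eigenvector of $A$ is automatically an eigenvector of $B$; a rough Python/NumPy sketch with the example matrices chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
M = M + M.T                        # symmetric, so diagonalizable (generically distinct eigenvalues)
A = M @ M + 2 * M                  # A and B are polynomials in the same M, hence they commute
B = 3 * M - np.eye(4)
print(np.allclose(A @ B, B @ A))   # True

_, vecs = np.linalg.eigh(A)        # columns are orthonormal eigenvectors of A
for v in vecs.T:
    w = B @ v
    lam = v @ w                    # Rayleigh quotient: the would-be eigenvalue of B
    print(np.allclose(w, lam * v)) # True: each eigenvector of A is also an eigenvector of B
```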
Is $[0,1]$ a countable disjoint union of closed sets? Can you express $[0,1]$ as a countable disjoint union of closed sets, other than the trivial way of doing this?
The answer to the question as stated is no, as others have explained. However, if we relax the hypothesis from disjoint to non-overlapping, then the answer is yes. Two intervals $I_1$ and $I_2$ are non-overlapping if $I_1^{\circ}\cap I_2^{\circ}=\emptyset$; that is, if their interiors are disjoint. If the intervals are closed and non-overlapping, then they intersect at most in their boundaries. For example, in $\mathbb{R}$, the intervals $\left[0,\frac{1}{2}\right]$ and $\left[\frac{1}{2},1\right]$ are non-overlapping, but clearly not disjoint as they share the point $\frac{1}{2}$.
The name for a subobject(subgroup) which is annihilated by action I know this question is easy, but for the life of me, I cannot remember what we call this thing. Googling for this has offered no help. Consider an object $A$ and a second object $B$ (let them be groups if you so choose). We wish to consider an action of $A$ on $B$. Moreover there is a subobject $C \hookrightarrow B$ (subgroup) which is annihilated by the action of $A$, i.e. the restriction of the action of $A$ on $B$ to $C$ sends $C$ to the zero object (the zero in $B$ which corresponds to the trivial group). I thought it would be the kernel of the action, but this term is reserved for something else (in particular those objects which fix everything). I think that this should be referred to as Torsion, and in particular, in the back of my mind, I keep thinking it is called the $A$-Torsion of $B$. But I am not sure. Does anyone know what this has been called in the past?
In linear algebra, the subspace annihilated by a linear mapping $A$ is the null space (kernel) of $A$; its dimension is called the nullity of $A$.
Bounded operator Hardy space Let $T_f g = f \cdot g$ where $f, g, f \cdot g$ are in $H^2(\mathbb{D})$ (where $H^2$ is the Hardy space on the open unit disk). Now $T_f$ is a bounded operator. I want to show this by showing that $f \in H^\infty$. So I try to write $f = G_1 h_1$ and $g = G_2 h_2$ where $G_i$ are outer functions and $h_i$ inner functions. So, what I need to do is if $G_1 G_2$ is in $H^2$ for all $G_2$ outer, then $G_1$ is in $H^\infty$. Does someone have a hint how I could obtain this?
This isn't the same approach you had in mind, but you can show that $T_f$ is bounded using the closed graph theorem and the fact that evaluation at a point in the open disk is bounded on $H^2$. You can then show that $f$ is in $H^\infty$ by showing that the complex conjugates of elements of its image on the disk are eigenvalues for the adjoint of $T_f$. Here is an elaboration on the last sentence. For each $w\in\mathbb{D}$, define $k_w:\mathbb{D}\to\mathbb{C}$ by $k_w(z)=\frac{1}{1-\overline{w}z}=\sum_{k=0}^\infty \overline{w}^k z^k$. Each $k_w$ is in $H^\infty$ and thus in $H^2$. Using the second expression for $k_w$ and the characterization of the inner product on $H^2$ in terms of the $\ell^2$ sequences of Maclaurin coefficients, notice that $\langle g,k_w\rangle=g(w)$ for all $w\in\mathbb{D}$ and all $g\in H^2$. It then follows that for all $w$ and $z$ in $\mathbb{D}$, $$(T_f^*k_w)(z)=\langle T_f^* k_w,k_z\rangle=\overline{\langle T_f k_z,k_w\rangle}=\overline{f(w)k_z(w)}$$ $$=\overline{f(w)}\overline{\langle k_z,k_w\rangle}=\overline{f(w)}\langle k_w,k_z\rangle=\overline{f(w)}k_w(z).$$ Since $z$ was arbitrary, this shows that $T_f^*k_w=\overline{f(w)}k_w$, so $\overline{f(w)}$ is an eigenvalue for $T_f^*$ with eigenvector $k_w$. Thus, $\|f\|_\infty\leq \|T_f^*\|=\|T_f\|<\infty$. This is a standard fact about reproducing kernel Hilbert spaces, and only the particular form of the function $k_w$ is special to the Hardy space. The way I have presented this, it might seem that $k_w$ was summoned by magic, but in fact one could rediscover them without too much work. The important point is that there exist elements of $H^2$ whose corresponding inner product functionals are point evaluations. These exist by Riesz's lemma using continuity of the point evaluations, which can be shown by other means. You don't need to know what these elements are for the argument to carry through. However, if you did want to discover them, then "working backwards" and considering Maclaurin series would lead you to the second expression for $k_w$ given above.
Determining n in sigma ($\Sigma_{x=0}^n$) Referred here by https://mathoverflow.net/questions/41750/determining-n-in-sigma-x0n I'm not entirely sure if this question falls under MathOverflow but neither of my Calculus AP teachers in high school could help me with this: Given $\Sigma_{x=0}^n {f(x)\over2}$ and the output of the summation, how would you find $n$? I've learned how to determine the $n$ given an arithmetic or geometric sequence, but not for an arbitrary function. Specifically, when $f(x) = 40 + 6\sqrt{x}$. 12 Oct 2010. Edit: It seems like I need to explain the entire situation for finding $n$, the number of trapezoids, for trapezoidal rule. It started on a simple review question for Calc AP and a TI-83 program that my calc teacher gave to me to solve the definite integral with trapezoidal rules. Aiming to major in Computer Science, I took it a bit further and completely took apart the program resulting in my original question on StackOverflow: https://stackoverflow.com/questions/3886899/determining-the-input-of-a-function-given-an-output-calculus-involved Since there were tumbleweeds for a response, I took it as a personal challenge to reverse engineer the trapezoidal program into an algebraic form with my notes found on my forum: http://www.zerozaku.com/viewtopic.php?f=19&t=6041 After reverse engineering the code into some algebra, I derived the formula: $$TrapRule(A, B, N) = {(B-A)\over N}({F(A)\over2}+\sum_{k=1}^NF(k)+{F(B)\over2})$$ Given the values of A and B are constant for the definite integral, I should be able to isolate and solve for $N$. The problem, however, was determining $N$ in $\sum_{k=1}^N$ and I came to the conclusion that it was an issue that I called recursive complexity because it was impossible to determine without recursively adding for the summation. Eventually, I found MathOverflow and they referred me here. I was hoping only to get help on the issue for a summation because it's beyond my skill as a high school student. Now that others have proposed other solutions for my dilemma, I guess I can throw out my thesis Dx Thanks for the help though, I'll definitely be returning for more.
An approximation to $\displaystyle \sum_{k=1}^{n} \sqrt{k}$ can be found here (on this very site): How closely can we estimate $\sum_{i=0}^n \sqrt{i}$ It was shown that $\displaystyle \frac{2n\sqrt{n}}{3} + \frac{\sqrt{n}}{2} -\frac{2}{3} < \sum_{k=1}^{n} \sqrt{k} < \frac{2n\sqrt{n}}{3} + \frac{\sqrt{n}}{2}$ Thus if $\displaystyle S = \sum_{k=0}^{n} f(k)/2$ with $f(k) = 40 + 6\sqrt{k}$, then $$\sum_{k=1}^{n} \sqrt{k} = \frac{S - 20(n+1)}{3}$$ Using the approximation above $$\frac{2n\sqrt{n}}{3} + \frac{\sqrt{n}}{2} \approx \frac{S - 20(n+1)}{3}$$ This is a cubic (third degree) equation in $\sqrt{n}$ which can be solved exactly (closed formula) in terms of $S$ and would give you a value close to the true $n$. My guess is that taking the integer part of the square of an appropriate root will be sufficient to give $n$ or $n-1$ (and so a formula might exist, after all!) If you are not looking for a formula, but a procedure, you can always try binary search. Hope that helps.
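Since the partial sums are strictly increasing (the terms are positive), the binary-search suggestion is straightforward to implement; a rough Python sketch, assuming $f(x)=40+6\sqrt{x}$ as in the question:

```python
from math import sqrt

def f(x):
    return 40 + 6 * sqrt(x)

def partial_sum(n):
    return sum(f(x) / 2 for x in range(n + 1))

def find_n(S):
    """Smallest n with partial_sum(n) >= S (the sums increase strictly with n)."""
    lo, hi = 0, 1
    while partial_sum(hi) < S:      # grow an upper bracket first
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if partial_sum(mid) < S:
            lo = mid + 1
        else:
            hi = mid
    return lo

print(find_n(partial_sum(137)))     # recovers 137
```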
Different proofs of $\lim\limits_{n \rightarrow \infty} n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx= 2$ It can be shown that $$ n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx = \sum_{k=0}^{n-1} {n-1 \choose k}^{-1}$$ (For instance see my answer here.) It can also be shown that $$\lim_{n \to \infty} \ \sum_{k=0}^{n-1} {n-1 \choose k}^{-1} = 2$$ (For instance see Qiaochu's answer here.) Combining those two shows that $$ \lim_{n \to \infty} \ n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx = 2$$ Is there a different, (preferably analytic) proof of this fact? Please do feel free to add a proof which is not analytic.
Let $x=(1+s)/2$, so that the expression becomes $$ \frac{n}{2^{n+1}} \int_{-1}^1 \frac{(1+s)^n-(1-s)^n}{s} ds = \frac{n}{2^{n}} \int_{0}^1 \frac{(1+s)^n-(1-s)^n}{s} ds. $$ (The integrand is an even function.) Fix some $c$ between 0 and 1, say $c=1-\epsilon$. Then the integral from 0 to $c$ will be small in comparison to $2^n$, since the integrand is bounded by a constant times $(1+c)^n$, so as far as the limit is concerned it is enough to look at the integral from $c$ to 1, and in that integral we can neglect the term $(1-s)^n$ since it will also contribute something much smaller than $2^n$. The surviving contribution therefore comes from $$ \int_c^1 \frac{(1+s)^n}{s} ds, $$ which lies between $$ \int_c^1 \frac{(1+s)^n}{1} ds $$ and $$ \int_c^1 \frac{(1+s)^n}{c} ds, $$ that is, $$ \frac{2^{n+1} - (1+c)^{n+1}}{n+1} < \int_c^1 \frac{(1+s)^n}{s} ds < \frac{1}{c} \frac{2^{n+1} - (1+c)^{n+1}}{n+1} .$$ Multiplying by $n/2^n$ and letting $n\to\infty$ shows that the liminf is at least 2 and the limsup is at most $2/c$. But since this holds for any $c$ between 0 and 1, it follows that liminf=limsup=2, hence the limit is 2.
Order of a Group from its Presentation Let $G$ be a group with generators and relations. I know that in general it is difficult to determine what a group is from its generators and relations. I am interested in learning about techniques for figuring out the order of a group from the given information. For example, I know that if the number of generators exceeds the number of relations then the group has infinite order. If the number of generators equals the number of relations then the group is cyclic or has infinite order. Let $G= \langle x, y \mid x^2 = y^3 = (xy)^4 = 1\rangle$. My hunch is that G has finite order because $(xy)^4$ is somehow independent of $x^2$ and $y^3$. But if the exponent on $xy$ were bigger, say $(xy)^6=1$ that relation becomes redundant. My question is: is this sort of thinking correct? Furthermore: my method will only tell me if $G$, or its modification, is finite (or infinite). If $G$ is finite how can I figure out the order of the group? I know that the orders of elements divide the order of the group, but I am looking for a specific number.
While somewhat "dated" at this point, you might want to look at the book of Coxeter and Moser, Generators and Relations, for work in this area. http://www-history.mcs.st-and.ac.uk/Extras/Coxeter_Moser.html
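For a concrete presentation like the one in the question you can also let a computer do Todd-Coxeter coset enumeration. A minimal sketch using SymPy's finitely presented groups module (assuming a reasonably recent SymPy; systems such as GAP do the same thing), which reports order 24 for this presentation:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, x, y = free_group("x, y")
G = FpGroup(F, [x**2, y**3, (x*y)**4])   # the presentation from the question
print(G.order())                          # coset enumeration gives 24
```

Note that coset enumeration is only guaranteed to terminate when the group is finite, so it settles the finite case but cannot by itself certify that a group is infinite.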
Show convex combination If I have a bounded set $F$ in $N$ dimensional space and another set $G$ where every element $g$ in $G$ has $h'g=c$ and also must exist in $F$. Here $h$ is a vector in the $N$ dimensional space and $c$ is a constant $1\times 1$ matrix (scalar). How can I prove every extreme point of $G$ lies on the boundary of $F$? That is to say, if $x$ and $y$ are extreme points in $F$ then $\lambda x + (1-\lambda)y = g$
I'll prove that every point of $G$ in the interior of $F$ is not an extreme point of $G$. I'll assume that $N>1$. LEMMA. There is a vector $v\neq 0$ such that $h'v=0$. Proof. Since $N>1$ there is a vector $u$ which is not a multiple of $h$. Let $$v=u-\left({{h'u}\over{h'h}}\right)h$$ Then $h'v=0$. Since $u$ is not a multiple of $h$, $v\neq 0$. Answer. Suppose $x \in G$ and $x$ is also in the interior of $F$. Let $v$ be as in the lemma above. Then there is a small enough $\lambda$ such that $(x \pm \lambda v) \in F$. But $h'(x \pm \lambda v)=h'x$ so $(x \pm \lambda v) \in G$. But, $$x= {(x + \lambda v) \over 2} + {(x - \lambda v) \over 2}$$ So $x$ is not an extreme point of $G$.
Motivation behind the definition of complete metric space What is the motivation behind the definition of a complete metric space? Intuitively, a metric space is complete if there are no points missing from it. How does the definition of completeness (in terms of convergence of Cauchy sequences) capture that?
This answer only applies to the order version of completeness rather than the metric version, but I've found it quite a nice way to think about what completeness means intuitively: consider the real numbers. There the completeness property is what guarantees that the space is connected. The rationals can be split into disjoint non-empty open subsets, for example the set of all positive rationals whose squares are greater than two, and its complement, and the reason this works is because, roughly speaking, there is a "hole" in between the two sets which lets you pull them apart. In the reals this is not possible; there are always points at the ends of intervals, so whenever you partition the reals into two non-empty subsets, one of them will always fail to be open.
Why is two to the power of zero equal to binary one? Probably a simple question and possibly not asked very well. What I want to know is... In binary, a decimal value of 1 is also 1. It can be expressed as $x = 1 \times 2^0$ Question: Why is two to the power of zero equal to one? I get that two to the power of one is equal to two, or binary 10, but why is two to the power of zero equal to one? Is this a math convention? Is there a link I could read?
The definition $\ 2^0 = 1\ $ is "natural" since it makes the arithmetic of exponents have the same structure as $\mathbb N$ (or $\mathbb Z\:$ if you extend to negative exponents). In more algebraic language: the definition is the canonical extension of the powering homomorphism from $\rm\ \mathbb N_+\: $ to $\rm \mathbb N\ $ (or $\rm\: \mathbb Z\:$),$\ $ viz. $\rm\ 2^n\ =\ 2^{n+0}\ =\ 2^n\ 2^0\ $ $\rm\Rightarrow\ 2^0 = 1\:$. It's just a special case of the fact that the identity element must be preserved by structure preserving maps of certain multiplicative structures (e.g. commutative cancellative monoids). It may be viewed as a special case of adjoining an identity element to a commutative semigroup. And it proves very convenient to do so, for the same reason it proves convenient to adjoin the identity element 0 to the positive natural numbers, e.g. it allows every element to be viewed as a sum, so one can write general formulas for sums that work even in extremal cases where an element is indecomposable (e.g. by writing $ 1 = 1 + 0 $ vs. having to separate a special case for the sum-indecomposable element $1$ or, in $\,2^{\Bbb N},\,$ for $\, 2 = 2\cdot 1 $). Empty sums and products prove quite handy for naturally founding inductions and terminating recursive definitions.
Change of limits in derivation of Riemann-Liouville (Fractional) Derivative I'm having difficulty justifying the change of limits in the derivation of the Riemann-Liouville derivative at xuru.org. What I don't undestand is how $\int_0^{t_2}$ becomes $\int_{t_1}^x$ in the following statement, $\int_0^x \int_0^{t_2} f(t_1) dt_1 dt_2 = \int_0^x \int_{t_1}^x f(t_1) dt_2 dt_1$
You can use integration by parts, following the well known formula: \begin{align*} \int_a^b f(x) \frac{dg(x)}{dx} dx = [f(x)g(x)]_a^b - \int_a^b \frac{df(x)}{dx} g(x) dx \end{align*} Setting $g(x)=x$ and taking $F(x)=\int_a^x f(\xi) d\xi$ in place of $f$ (so that $F'(x)=f(x)$), you have your result :)
Principal and Annuities Suppose you want to accumulate $12\,000$ in a $5 \%$ account by making a level deposit at the beginning of each of the next $9$ years. Find the required level payment. So this seems to be an annuity due problem. I know the following: $ \displaystyle \sum_{k=1}^{n} \frac{A}{(1+i)^{k}} = \frac{A}{1+i} \left[\frac{1- \left(\frac{1}{1+i} \right)^{n}}{1- \left(\frac{1}{1+i} \right)} \right] = P$. So in this problem, we are trying to solve for $P$? Just plug in the numbers? Or do we need to calculate the discount rate $d = i/(i+1)$ since the annuity is being paid at the beginning of the year?
The problem statement is missing the time when you want to have the 12,000. If it is at the end of the ninth year, a deposit made at the beginning of year $n$ will have grown by a factor of $1.05^{10-n}$ by then. So if $A$ is the deposit you have $\displaystyle \sum_{k=1}^{9} A\cdot 1.05^{10-k}=12000$. Solve for $A$.
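Under that reading (accumulation measured at the end of year 9), the arithmetic is a one-liner; a small Python sketch:

```python
i = 0.05
factor = sum((1 + i) ** (10 - k) for k in range(1, 10))  # growth factors of the 9 deposits
A = 12_000 / factor
print(round(A, 2))   # roughly 1036.46 under these assumptions
```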
System of Non-linear ODEs -- Analytic Solution As part of my solution to a problem, I come to a point where I need to find the solutions to $-2\partial_{T}B\left(T\right)+\frac{3}{4}B\left(T\right)\left(A\left(T\right)^{2}+B\left(T\right)^{2}\right)=0$ $2\partial_{T}A\left(T\right)+\frac{3}{4}A\left(T\right)\left(B\left(T\right)^{2}+A\left(T\right)^{2}\right)=0$ where $\partial_{T}(f)$ is the derivative with respect to $T$. It is possible that I made a mistake in the steps leading to this because I am supposed to be able to get a not-so-ugly solution for $A(T)$ and $B(T)$. Is there one that exists and I don't see it? I've tried the following:
You can make the second terms in both equations vanish by multiplying the first by $A(T)$, the second by $B(T)$, and subtracting. The resulting equation is readily solved for the product $A(T)B(T)$, reducing the system to a single ODE which is directly integrable.
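Spelling the hint out a little (my own working, so treat it as a sketch): multiplying the first equation by $A$, the second by $B$, and subtracting gives $$-2\bigl(AB'+A'B\bigr) = -2\,(AB)' = 0 \;\Longrightarrow\; A(T)\,B(T) = C$$ for some constant $C$; substituting $B = C/A$ into the second equation then leaves the single separable ODE $$2A' + \frac{3}{4}A\left(\frac{C^2}{A^2} + A^2\right) = 0.$$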
Prove that the sequence $c_1 = 1$, $c_{n+1} = 4/(1 + 5c_n) $ , $ n \geq 1$ is convergent and find its limit Prove that the sequence $c_{1} = 1$, $c_{(n+1)}= 4/(1 + 5c_{n})$ , $n \geq 1$ is convergent and find its limit. Ok so up to now I've worked out a couple of things. $c_1 = 1$ $c_2 = 2/3$ $c_3 = 12/13$ $c_4 = 52/73$ So the odd $c_n$ are decreasing and the even $c_n$ are increasing. Intuitively, it's clear that the two sequences for odd and even $c_n$ are decreasing/increasing less and less. Therefore it seems like the sequence may converge to some limit $L$. If the sequence has a limit, let $L=\underset{n\rightarrow \infty }{\lim }c_{n}.$ Then $L = 4/(1+5L).$ So we yield $L = 4/5$ and $L = -1$. But since the even sequence is increasing and >0, then $L$ must be $4/5$. Ok, here I am stuck. I'm not sure how to go ahead and show that the sequence converges to this limit (I tried using the definition of the limit but I didn't manage) and am not sure, about the separate sequences, how I would go about showing their limits. A few notes: I am in 2nd year calculus. This is a bonus question, but I enjoy the challenge and would love the extra marks. Note: Once again I apologize I don't know how to use the HTML code to make it nice.
Here's one way to prove it: let $f(x) = 4/(1+5x)$. Say $|x-4/5| \le C$ for some constant $C$. Can you find $C$ and some constant $0 \le k < 1$ so that if $|x-4/5| \le C$, then $|f(x)-4/5| \le k|x-4/5|$? If you do this, then you can iterate to get $|f^j(x)-4/5| \le k^j |x-4/5|$, for all $j$, and so if you make $j$ large enough then you can get $f^j(x)$ as close to $4/5$ as you like.
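To see the contraction numerically (not a proof, just a quick Python check of the behaviour the hint describes):

```python
def f(c):
    return 4 / (1 + 5 * c)

c = 1.0
for n in range(1, 11):
    c = f(c)
    print(n, c, abs(c - 0.8))   # the distance to 4/5 shrinks roughly geometrically
```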
Generators and Relations for $A_4$ Let $G=\langle x,y \mid x^2=y^3=(xy)^3=1\rangle$. I would like to show that $G$ is isomorphic to $A_4.$ Let $f:\mathbf{F}_{2} \to G$ be a surjective homomorphism from the free group on two elements to $G$. Let $f$ map $x \mapsto (12)(34)$ and $y \mapsto (123)$. I'm not sure how to show that these elements generate the kernel of $f$. If they do generate the kernel, how do I conclude that the order of $G$ is $12?$ Once I have that the group is of order 12 then I can show that $G$ contains $V$ (the Klein four group) as a subgroup, or that $A_4$ is generated by the image of $x$ and $y$.
Perhaps this answer will use too much technology. Still, I think it's pretty. Consider $A_4$ as the group of orientation-preserving symmetries of a tetrahedron $S$. The quotient $X=S/A_4$ is a 2-dimensional orbifold. Let's try to analyse it. Two-dimensional orbifolds have three different sorts of singularities that set them apart from surfaces: cone points, reflector lines and corners where reflector lines meet. Because $A_4$ acts preserving orientation, all the singularities of $X$ are cone points, and we can write them down: they're precisely the images of the points of $S$ that are fixed by non-trivial elements of $A_4$, and to give $X$ its orbifold structure you just have to label them with their stabilisers. So what are these points? There are the vertices of $S$, which are fixed by a rotation of order 3; there are the midpoints of the edges of $S$, which are fixed by a rotation of order 2; and finally, the midpoints of faces, which are fixed by a rotation of order 3. A fundamental domain for the action of $A_4$ is given by a third of one of the faces, and if you're careful about which sides get identified you can check that $X$ is a sphere with three cone points, one labelled with the cyclic group $C_2$ and the other two labelled with the cyclic group $C_3$. Finally, we can compute a presentation for $A_4$ by thinking of it as the orbifold fundamental group of $X$ and applying van Kampen's Theorem. This works just as well for orbifolds, as long as you remember to consider each cone point as a space with fundamental group equal to its label. The complement of the cone points is a 3-punctured sphere, whose fundamental group is free on $x,y$. The boundary loops correspond to the elements $x$, $y$ and $xy$. Next, we take account of each cone point labelled $C_n$ by inserting a relation that makes the $n$th power of the appropriate boundary loop equal to $1$. So we get the presentation $\langle x,y\mid x^2=y^3=(xy)^3=1\rangle$ as required.
Find all points with a distance less than d to a (potentially not convex) polygon I have a polygon P, that may or may not be convex. Is there an algorithm that will enable me to find the collection of points A that are at a distance less than d from P? Is A in turn always a polygon? Does the solution change materially if we try to solve the problem on the surface of a sphere instead of on a Euclidean plane?
It will not be a polygon. If you think about the original polygon being a square of side s, the set A is a square of side s+2d, but with the corners rounded. The corners become quarter circles with radius d and centered on the original corners of the square. For a general polygon the situation is much the same. Draw a parallel to the sides offset by d. Then round the outer corners with a circular arc of radius d centered on the original corners and tangent to the new parallels. The meeting points will be the intersection of the new line parallel to one side and the extension of the other side of the corner. The inner corners will stay corners but get less deep and eventually disappear if d is large enough.
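If you need this computationally rather than exactly, computational-geometry libraries call this operation a buffer (a Minkowski sum with a disk of radius d). A small sketch using the Python shapely package (assuming it is installed); note that shapely approximates the rounded corners by many short segments, consistent with the point above that the exact region is not a polygon:

```python
from shapely.geometry import Polygon

P = Polygon([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)])   # a non-convex polygon
d = 0.5
A = P.buffer(d)          # all points within distance d of P (interior included)

print(A.geom_type)       # 'Polygon': a polygonal approximation of the true region
print(P.area, A.area)    # the buffered region is strictly larger
```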
Help me understand linear separability in a binary SVM I have a question pertaining to linear separability with hyperplanes in a support vector machine. According to Wikipedia: ...formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The linear separation of classes by hyperplanes intuitively makes sense to me. And I think I understand linear separability for two-dimensional geometry. However, I'm implementing an SVM using a popular SVM library (libSVM) and when messing around with the numbers, I fail to understand how an SVM can create a curve between classes, or enclose central points in category 1 within a circular curve when surrounded by points in category 2 if a hyperplane in an n-dimensional space V is a "flat" subset of dimension n − 1, or for two-dimensional space - a 1D line. Here is what I mean: That's not a hyperplane. That's circular. How does this work? Or are there more dimensions inside the SVM than the two-dimensional 2D input features?
As mentioned, the kernel trick embeds your original points in a higher dimensional space (in fact, in some cases infinite dimensional - but of course the linear subspace generated by your actual points is finite dimensional). As an example, using the embedding $(x,y) \mapsto (x,y,x^2,y^2)$ (that actually corresponds to a quadratic kernel, I think), the equation of an arbitrary ellipse becomes linear.
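A concrete version of that remark as a small NumPy sketch: points labelled by whether they lie outside the unit circle are not linearly separable in the plane, but after the embedding $(x,y)\mapsto(x,y,x^2,y^2)$ the circle itself becomes a flat hyperplane:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(1000, 2))
labels = pts[:, 0]**2 + pts[:, 1]**2 > 1.0     # class = "outside the unit circle"

phi = np.column_stack([pts, pts**2])           # embed (x, y) -> (x, y, x^2, y^2)

w = np.array([0.0, 0.0, 1.0, 1.0])             # in the embedded space the separator is linear:
b = -1.0                                       # w . phi(p) + b > 0  <=>  x^2 + y^2 > 1
pred = phi @ w + b > 0
print(np.all(pred == labels))                  # True: a hyperplane separates the two classes
```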
Best book of topology for beginner? I am a graduate student of math right now but I was not able to get a topology subject in my undergrad... I just would like to know if you guys know the best one..
As an introductory book, "Topology without tears" by S. Morris. You can download the PDF for free, but you might need to obtain a key from the author to read the file. (He wants to make sure it will be used for self-studying.) Note: The version of the book at the link given above is not printable. Here is the link to the printable version but you will need to get the password from the author by following the instructions he has provided here. Also, another great introductory book is Munkres, Topology. At graduate level (non-introductory books) there are Kelley and Dugundji. Munkres said when he started writing his Topology, there wasn't anything accessible at undergrad level, and neither Kelley nor Dugundji was really an undergrad book. He wanted to write something any undergrad student with an appropriate background (like the first 6-7 chapters of Rudin's Principles of Analysis) can read. He also wanted to focus on topological spaces and deal with metric spaces mostly from the perspective of "whether a topological space is metrizable". That's the first half of the book. The second part is a nice introduction to Algebraic Topology. Again, quoting Munkres, at the time he was writing the book he knew very little of Algebraic Topology; his speciality was General (point-set) topology. So, he was writing that second half as he was learning some basics of algebraic topology. So, as he said, "think of this second half as an attempt by someone with a general topology background to explore Algebraic Topology."
Finding all complex zeros of a high-degree polynomial Given a large univariate polynomial, say of degree 200 or more, is there a procedural way of finding all the complex roots? By "roots", I mean complex decimal approximations to the roots, though the multiplicity of the root is important. I have access to MAPLE and the closest function I've seen is: with(RootFinding): Analytic(Z,x,-(2+2*I)..2+2*I); but this chokes if Z is of high degree (in fact it fails to complete even if deg(Z)>15).
Everyone's first starting point when dealing with the polynomial rootfinding problem should be a look at J.M. McNamee's excellent bibliography and book. Now, it is a fact that polynomials of very high degree tend to make most polynomial rootfinders choke. Even the standard blackbox, the Jenkins-Traub algorithm, can choke if not properly safeguarded. Eigenmethods, while they can have nice accuracy, can be very demanding of space and time (O(n²) space and O(n³) operations for a problem with only O(n) inputs!) My point is that unless you are prepared to devote some time and extra precision, this is an insoluble problem. Having been pessimistic in those last few sentences, one family of methods you might wish to look at (and I have had personal success with) are the so-called "simultaneous iteration" methods. The simplest of them, (Weierstrass-)Durand-Kerner, is essentially an application of Newton's method to the Vieta formulae, treated as n equations in n unknowns (the assumption taken by (W)DK is that your polynomial is monic, but that is easily arranged). If you wish for more details and references, the book by McNamee I mentioned earlier is a good start.
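For reference, here is a bare-bones, unsafeguarded sketch of the (Weierstrass-)Durand-Kerner iteration in Python/NumPy; for degree-200 polynomials you would want extra precision and the safeguards mentioned above, so treat this only as an illustration of the idea:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=1000):
    """Approximate all roots simultaneously; coeffs in highest-degree-first order."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                          # make the polynomial monic
    n = len(c) - 1
    z = (0.4 + 0.9j) ** np.arange(n)      # standard non-symmetric starting points
    for _ in range(max_iter):
        p = np.polyval(c, z)
        diff = z[:, None] - z[None, :]    # z_i - z_j
        np.fill_diagonal(diff, 1.0)
        delta = p / diff.prod(axis=1)     # Weierstrass correction p(z_i) / prod_{j!=i}(z_i - z_j)
        z = z - delta
        if np.max(np.abs(delta)) < tol:
            break
    return z

print(np.sort_complex(durand_kerner([1, -6, 11, -6])))   # roots of (x-1)(x-2)(x-3)
```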
Distribution of Functions of Random Variables In general, how would one find the distribution of $f(X)$ where $X$ is a random variable? Or consider the inverse problem of finding the distribution of $X$ given the distribution of $f(X)$. For example, what is the distribution of $\max(X_1, X_2, X_3)$ if $X_1, X_2$ and $X_3$ have the same distribution? Likewise, if one is given the distribution of $ Y = \log X$, then the distribution of $X$ is deduced by looking at $\text{exp}(Y)$?
Qiaochu is right. There isn't a magic wand. That said, there is a set of common procedures that can be applied to certain kinds of transformations. One of the most important is the cdf (cumulative distribution function) method that you are already aware of. (It's the one used in your previous question.) Another is to do a change of variables, which is like the method of substitution for evaluating integrals. You can see that procedure and others for handling some of the more common types of transformations at this web site. (Some of the other examples there include finding maxes and mins, sums, convolutions, and linear transformations.)
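For the specific max example, the cdf method gives $P(\max(X_1,X_2,X_3)\le t)=F(t)^3$ when the $X_i$ are independent with common cdf $F$; here is a quick Monte Carlo check (a Python sketch using an exponential distribution purely as an example):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(size=(200_000, 3))   # three iid Exp(1) variables per row
m = x.max(axis=1)

t = 1.3
empirical = np.mean(m <= t)
F = 1 - np.exp(-t)                       # cdf of a single Exp(1) variable
print(empirical, F**3)                   # the two numbers should nearly agree
```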
Problems on combinatorics The following questions come from a recent combinatorics paper I attended: 1. 27 people are to travel by a bus which can carry 12 inside and 15 outside. In how many ways can the party be distributed between inside and outside if 5 people refuse to go outside and 6 will not go inside? The solution given is C(16,7); I have no clue how they got it! 2. The number of functions f from the set A = {0, 1, 2} into the set B = {1, 2, 3, 4, 5, 6, 7} such that $f(i) \le f(j) $ for $i \lt j $ and $i,j$ in A is... The solution given is C(8,3). I didn't really understand this one. 3. The number of ordered pairs $(m, n)$, with $m, n$ in {1, 2, …, 100}, such that $7^m + 7^n$ is divisible by 5 is... The solution given is 2500, but how? 4. The coefficient of $x^{20}$ in the expansion of $(1 + 3x + 3x^2 + x^3)^{20}$ is? How to solve this one elegantly? 5. An eight digit number divisible by 9 is to be formed by using 8 digits out of the digits 0, 1, …, 9 without replacement. The number of ways in which this can be done is: Now this one seems impossible for me to solve in 1 minute, or is it? The given solution is 36(7!).
1. Five people refuse to go outside, and therefore will go inside. Six people refuse to go inside, so will go outside. That means that you still have $27 - (5+6)=16$ people to accommodate. There are 12 spots for people inside, but five are already "taken" by those who refuse to be outside. That leaves 7 seats inside to assign. So you need to choose which seven people, out of the 16 that are left, will go inside. The number of ways of doing this is precisely $\binom{16}{7}$.
2. Edit: I misread the question as saying that $f(i)\lt f(j)$ if $i\lt j$. That answer follows: since you require the values of the function to increase (the condition just says that $f(0)\lt f(1)\lt f(2)$), if you know the three values of $f(0)$, $f(1)$, and $f(2)$ you know which one corresponds to which ($f(0)$ is the smallest value, $f(1)$ is the middle value, and $f(2)$ is the largest value). So all you need to do in order to determine a function is to pick three values from $B$. There are $7$ possibilities, you need to pick $3$, so there are $\binom{7}{3}$ ways of doing it. Now, it seems I misread the question. It actually says that $f(i)\leq f(j)$ if $i\lt j$. I gave the number of functions in which all inequalities are strict. There are $\binom{7}{1}$ functions in which all inequalities are actually equalities (just one value). Now, to count the number of functions in which $f(0)=f(1)\lt f(2)$, you just need to pick two values from the set $B$, which there are $\binom{7}{2}$ ways of doing; the same holds for the case in which you have $f(0)\lt f(1)=f(2)$. So the total is $\binom{7}{3}+2\binom{7}{2} + \binom{7}{1}$. Since $\binom{n}{k}+\binom{n}{k-1} = \binom{n+1}{k}$, this total is equal to $$\left(\binom{7}{3} + \binom{7}{2}\right) + \left(\binom{7}{2}+\binom{7}{1}\right) = \binom{8}{3}+\binom{8}{2} = \binom{9}{3}.$$ It seems to me, then, that the answer you give is incorrect, or perhaps you mistyped it. (If you know the formula for combinations with repetitions, then there is a more direct way of getting this result: simply pick $3$ out of the $7$ possible images, with repetitions allowed; smallest value is $f(0)$, middle value is $f(1)$, largest value is $f(2)$ (equality allowed). The total for this is $\binom{7+3-1}{3} = \binom{9}{3}$, as above).
3. Assume $m\leq n$; then $7^m+7^n = 7^m(1 + 7^{n-m})$. The product is divisible by $5$ if and only if $1 + 7^{n-m}$ is divisible by $5$ (since $5$ is prime and never divides a power of $7$). For $1+7^{n-m}$ to be divisible by $5$, you need $7^{n-m}$ to have a remainder of $4$ when divided by $5$. If you run over the powers of $7$ and see the remainder when divided by $5$, you will notice that they go $2$, $4$, $3$, $1$, and repeat. So basically, you need $n-m$ to be two more than a multiple of $4$. That is, you need $n-m = 4k+2$. Note in particular that if the pair has $n=m$, then it does not satisfy the condition. So count how many pairs there are where the two differ by a multiple of four plus two.
4. One possibility is the Multinomial theorem. You would need to figure out all the ways in which you can obtain $x^{20}$ as products of powers of $x$, $x^2$, and $x^3$, and add the appropriate coefficients. Edit: But the intended answer is almost certainly the one given by Larry Denenberg.
5. In order for the number to be divisible by $9$, the digits must add up to a multiple of $9$. The digits $0$ through $9$ add up to $45$, which is a multiple of $9$. 
So if you omit two of them, they must add up to $9$: thus, if you omit $0$, then you must also omit $9$; if you omit $1$, then you must also omit $8$; etc. So you only have five possible pairs of numbers that you can omit. So pick which of the five pairs you will omit. If you omit $0$ and $9$, then the remaining $8$ digits can be arranged in any order, giving $8!$ possibilities. In all other cases, you cannot place $0$ in the first position, but otherwise can place the rest in any order. That gives $7(7!)$ possible ways of ordering the numbers. Thus, you have one choice that leads to $8!$ numbers, and four choices that lead each to $7(7!)$ numbers. Adding them up gives $8!+(4\times 7)(7!) = 8(7!)+28(7!) = 36(7!)$.
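If you want to double-check item 2 above by brute force (the answer disputes the quoted solution $\binom{8}{3}$), a three-line Python enumeration settles it:

```python
from itertools import product
from math import comb

count = sum(1 for f in product(range(1, 8), repeat=3) if f[0] <= f[1] <= f[2])
print(count, comb(9, 3), comb(8, 3))   # 84, 84, 56 -- matching C(9,3), not C(8,3)
```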
How to prove $\cos \frac{2\pi }{5}=\frac{-1+\sqrt{5}}{4}$? I would like to find the apothem of a regular pentagon. It follows from $$\cos \dfrac{2\pi }{5}=\dfrac{-1+\sqrt{5}}{4}.$$ But how can this be proved (geometrically or trigonometrically)?
How about combinatorially? This follows from the following two facts. * *The eigenvalues of the adjacency matrix of the path graph on $n$ vertices are $2 \cos \frac{k \pi}{n+1}, k = 1, 2, ... n$. *The number of closed walks from one end of the path graph on $4$ vertices to itself of length $2n$ is the Fibonacci number $F_{2n}$. The first can be proven by direct computation (although it also somehow falls out of the theory of quantum groups) and the second is a nice combinatorial argument which I will leave as an exercise. I discuss some of the surrounding issues in this blog post.
finding the minima and maxima of some tough functions ok so I did all the revision problems and noted the ones I couldn't do today and I'm posting them together, hope that's not a problem with the powers that be? I have exhibit A: $e^{-x} -x + 2 $ So I differentiate to find where the derivative hits $0:$ $-e^{-x} -1 = 0 $ Now HOW do I figure when this hits zero!? $-1 = e^{-x} $ $\ln(-1) = \ln(e^{-x})$ ??? More to come ... as one day rests between me and my final exam/attempt at math!
HINT $\rm\ e^{-x}\:$ and $\rm\: -x\: $ are both strictly decreasing on $\:\mathbb R\:$, hence so is their sum + 2.
Combinatorics and Rolling Dice Similarity? Define a function $F(A, B, C)$ as the number of ways you can roll $B$ $C$-sided dice to sum up to $A$, counting different orderings (rolling a $2$, $2$, and $3$ with three dice is different from rolling a $2$, $3$, and $2$). Example: With three $5$-sided dice, the list of $F(A, B, C)$ values in the domain of the possible values of $A$ for $B = 3$ and $C = 5$ is: $$F(3, 3, 5), F(4, 3, 5), F(5, 3, 5), F(6, 3, 5), ... , F(15, 3, 5)$$ is evaluated to: $$1, 3, 6, 10, 15, 18, 19, 18, 15, 10, 6, 3, 1$$ Call this list $L_1$. Let $s$ be the number of sides on each die, let $n$ be the number of dice, and let $v$ be the total value to roll from the $n$ dice. Let $L_2$ be the list of ${v - 1}\choose{v - n}$ in the domain of $v$ values for $n = 3$. Then $L_2$ is: $${{3 - 1}\choose{3 - 3}}, {{4 - 1}\choose{4 - 3}}, {{5 - 1}\choose{5 - 3}}, {{6 - 1}\choose{6 - 3}}, ... , {{15 - 1}\choose{15 - 3}}$$ Which is evaluated to: $$1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91$$ Comparing $L_1$ with $L_2$, we see that only the first $s$ values of the lists are equal: $$1, 3, 6, 10, 15$$ I have observed that this property holds with other values of $s$, $v$, and $n$, and $A$, $B$, and $C$. Can someone please explain why $L_1$ and $L_2$ share the first $s$ values?
Refer to answers to Rolling dice problem , because this is the same as finding a $B$-tuple, with values in the range $1..C$, summing up to $A$, i.e. with values in the range $0..C-1$, summing up to $A-B$. So $$N_{\,b} (s,r,m) = \text{No}\text{. of solutions to}\;\left\{ \begin{gathered} 0 \leqslant \text{integer }x_{\,j} \leqslant r \hfill \\ x_{\,1} + x_{\,2} + \cdots + x_{\,m} = s \hfill \\ \end{gathered} \right.$$ with $m=B,\ r=C-1,\ s=A-B$. The formula for $Nb(s,r,m)$ is given in the answers to the question linked above.
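A small Python check of the observation, computing $F(A,B,C)$ by direct recursion and comparing with $\binom{v-1}{v-n}$; the agreement holds exactly while $v-n<s$, because then no single die can exceed its cap, so the restricted count equals the unrestricted stars-and-bars count:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def F(A, B, C):
    """Ordered ways to roll B dice with faces 1..C summing to A."""
    if B == 0:
        return 1 if A == 0 else 0
    return sum(F(A - face, B - 1, C) for face in range(1, C + 1))

n, s = 3, 5
for v in range(n, n * s + 1):
    print(v, F(v, n, s), comb(v - 1, v - n))   # the first s rows agree, later ones diverge
```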
Comparing $\pi^e$ and $e^\pi$ without calculating them How can I compare (without calculator or similar device) the values of $\pi^e$ and $e^\pi$ ?
Let $$f(x) = e^x$$ $$G(x) = x^e$$ We can simply show that $$f(e)=G(e)$$ $$f'(e)=G'(e)$$ For $x > e$, $f(x)$ grows faster than $G(x)$, since $f'(x) = e^x > e\,x^{e-1} = G'(x)$ whenever $x > e$. Then $$e^{\pi} > \pi^{e}$$
Can someone please explain the Riemann Hypothesis to me... in English? I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?
In very layman's terms it states that there is some order in the distribution of the primes (which seem to occur totally chaotic at first sight). Or to say it like Shakespeare: "Though this be madness, yet there is method in 't." If you want to know more there is a new trilogy about that topic where the first volume has just arrived: http://www.secretsofcreation.com/volume1.html It is a marvelous and easy to understand book from a number theorist who knows his stuff!
Windows lightweight Math Software I'm looking for lightweight, free, Windows, Math software. Something I can put an expression in and get an answer, or graph it. I tried Euler, but it is quite complicated and HUGE. Basic needs:
* Expression based
* Supports variables
* Supports functions, user defined and auto loaded
* Supports graphs, 2D (not really needing 3D)
* Supports history
What do you use? What do you recommend?
I have also found SpeQ Mathematics. It is very lightweight, starts quickly and has some good functions.
A problem on progression If $a,b,c$ are in arithmetic progression, $p,q,r$ are in harmonic progression and $ap,bq,cr$ are in geometric progression, then $\frac{p}{r}+\frac{r}{p} = $ ? EDIT: I have tried to use the basic/standard properties of the respective progressions to get the desired result, but I am not yet successful.
Notice that $\rm\:\ \quad\displaystyle \frac{p}r+\frac{r}p\ =\ \frac{(p+r)^2}{pr} - 2$ But we have that$\rm\quad\displaystyle p\:r\ =\ \frac{(bq)^2}{ac}\ \ $ via $\rm\ ap,\:bq,\:cr\ $ geometric and we have $\rm\quad\ \ \displaystyle p+r\ =\ \frac{2pr}q\quad\ \ \ $ via $\rm\ p,q,r\ $ harmonic
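For completeness, one way to finish from those two facts (my own working, using $2b=a+c$ from the arithmetic progression): $$\frac{p}{r}+\frac{r}{p}\ =\ \frac{(p+r)^2}{pr}-2\ =\ \frac{4pr}{q^2}-2\ =\ \frac{4b^2}{ac}-2\ =\ \frac{(a+c)^2-2ac}{ac}\ =\ \frac{a}{c}+\frac{c}{a}.$$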
A short way to say f(f(f(f(x)))) Is there a short way to say $f(f(f(f(x))))$? I know you can use recursion: $g(x,y)=\begin{cases} f(g(x,y-1)) & \text{if } y > 0, \ \newline x & \text{if } y = 0. \end{cases}$
I personally prefer $f^{\circ n} = f \circ f^{\circ n-1} = \dotsb = \kern{-2em}\underbrace{f \circ \dotsb \circ f}_{n-1\text{ function compositions}}$
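If you ever need the same thing in code rather than in notation, the $n$-fold composition is easy to build once and reuse; a tiny Python sketch:

```python
def iterate(f, n):
    """Return the n-fold composition of f with itself (n = 0 gives the identity)."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

f4 = iterate(lambda x: x**2 + 1, 4)
print(f4(0))   # f(f(f(f(0)))) = 26
```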
How to write the equation of a line in $\mathbb C^n$? I want to write the equation of a line in $\mathbb C^n$ passing through a point $(z_1,z_2,...,z_n)$. Actually I have a set of points and I suspect they all lie on the same line which passes through this point and I want a convenient way to check it. Thank you
It doesn't matter if you work with complex, real numbers, or elements of any field $\mathbb{K}$: if you have a point $p = (z_1, \dots , z_n) \in \mathbb{K}^n$, or any $\mathbb{K}$-vector space $V$, an equation for a straight line in $\mathbb{K}^n$ (or in $V$) passing through $p$ may always be written, for instance, as $$ p + \lambda v \ , $$ with $v= (v_1, \dots , v_n) \in \mathbb{K}^n$ (or $v\in V$) and $\lambda \in \mathbb{K}$.
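For the practical part of the question (checking whether a given finite set of points lies on one complex line), one convenient test is that the differences $p_k-p_0$ must span a subspace of dimension at most one; a small NumPy sketch (the sample points are made up for illustration):

```python
import numpy as np

def on_one_line(points, tol=1e-9):
    """True if all the given points of C^n lie on a single complex line."""
    pts = np.asarray(points, dtype=complex)
    diffs = pts[1:] - pts[0]                       # candidate direction vectors
    return np.linalg.matrix_rank(diffs, tol=tol) <= 1

p = np.array([1 + 1j, 2, 0])
v = np.array([1j, 1 - 1j, 3])
pts = [p + t * v for t in (0, 0.5, 2, -1 + 1j)]    # the parameter t may itself be complex
print(on_one_line(pts))                            # True
print(on_one_line(pts + [p + np.array([1, 0, 0])]))  # False: the extra point is off the line
```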
Logistic function passing through two points? Quick formulation of the problem: Given two points: $(x_l, y_l)$ and $(x_u, y_u)$ with: $x_l < x_u$ and $y_l < y_u$, and given lower asymptote=0 and higher asymptote=1, what's the logistic function that passes through the two points? Explanatory image: Other details: I'm given two points in the form of Pareto 90/10 (green in the example above) or 80/20 (blue in the example above), and I know that the upper bound is one and the lower bound is zero. How do I get the formula of a sigmoid function (such as the logistic function) that has a lower asymptote on the left and higher asymptote on the right and passes via the two points?
To elaborate on the accepted answer, if we have a logistic function using the common notation: $$f(x) = \frac{1}{1 + e^{-k(x-x_0)}}$$ ... and we want to solve for $k$ and $x_0$ given two points, $(x_l, y_l)$ and $(x_u, y_u)$: First we can group the unknowns in a single term $b \equiv k(x-x_0)$. So: $$y = \frac{1}{1 + e^{-b}}$$ $$y(1 + e^{-b}) = 1$$ $$e^{-b} = \frac{1-y}{y}$$ $$-b = \ln\left(\frac{1-y}{y}\right)$$ $$ b = \ln\left(\frac{y}{1-y}\right)$$ Now we expand b: $$k(x-x_0) = \ln\left(\frac{y}{1-y}\right)$$ ... which gives us a linear system to solve for $k$ and $x_0$ given the values of two $(x, y)$ coordinates.
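Turning that linear system into code (a minimal Python sketch; the two sample points are just an example, not the Pareto values from the question):

```python
import numpy as np

def logistic_from_two_points(xl, yl, xu, yu):
    """Solve for k and x0 in f(x) = 1/(1 + exp(-k (x - x0))) from two points on the curve."""
    bl = np.log(yl / (1 - yl))      # b = k (x - x0) at the lower point
    bu = np.log(yu / (1 - yu))      # ... and at the upper point
    k = (bu - bl) / (xu - xl)
    x0 = xl - bl / k
    return k, x0

k, x0 = logistic_from_two_points(0.2, 0.1, 0.8, 0.9)
f = lambda x: 1 / (1 + np.exp(-k * (x - x0)))
print(f(0.2), f(0.8))               # recovers 0.1 and 0.9
```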
How many even positive integers are there that are divisors of 720? How many even positive integers are there that are divisors of 720 ? I know how to compute the number of divisors but how to compute the number of even or odd positive divisors of a number ? If we list the divisors of 720 (using mathematica) : {1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 30, 36, 40, 45,48, 60, 72, 80, 90, 120, 144, 180, 240, 360, 720} among these only 24 are even,I am looking for some tricks that can be used in solving similar kinds of problems during exam (under a minute solution).
There is a very simple trick for this: first compute the prime factorization of $720$, which is $2^4 \times 3^2 \times 5$. The total number of factors is $(4+1)(2+1)(1+1) = 30$, and the number of odd factors (the factors of the odd part $3^2 \times 5$) is $(2+1)(1+1) = 6$; subtracting gives the number of even factors, $24$. This method works for any number. NOTE: If the number has no odd prime factors, i.e., the prime factorization is of the form $2^a$, then the number of even factors is $a$ and the number of odd factors is $1$.
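And if you just want to confirm the count (outside exam conditions), a one-line brute force in Python does it:

```python
evens = [d for d in range(1, 721) if 720 % d == 0 and d % 2 == 0]
print(len(evens))   # 24
```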
Discriminant of a monic irreducible integer polynomial vs. discriminant of its splitting field Let $f\in\mathbb{Z}[x]$ be monic and irreducible, let $K=$ splitting field of $f$ over $\mathbb{Q}$. What can we say about the relationship between $disc(f)$ and $\Delta_K$? I seem to remember that one differs from the other by a multiple of a square, but I don't know which is which. On a more philosophical note: why are these quantities related at all? Is there an explanation for why they can be different, i.e. some information that one keeps track of that the other doesn't?
The two are the same if the roots of $f$ form an integral basis of the ring of integers of $\mathbb{Q}[x]/f(x)$ (e.g. if $f$ is a cyclotomic polynomial) because then, well, they're defined by the same determinant (see Wikipedia), but in general they don't. In the general case $\mathbb{Z}[\alpha_1, ... \alpha_n]$ is an order in $\mathcal{O}_K$ so one can write the $\alpha_i$ as an integer linear combination of an integral basis, so the matrices whose determinants define the two discriminants are related by a change-of-basis matrix with integral entries, hence integral determinant, and the discriminants differ by the square of that determinant. In fact the quotient of the two discriminants is precisely the square of the index of $\mathbb{Z}[\alpha_1, ... \alpha_n]$ in $\mathcal{O}_K$ as lattices. In any case, since the discriminant of the field is defined in terms of $\mathcal{O}_K$ it is the "right" choice for carrying information about, for example, ramification. One can see this even in the quadratic case: if $d \equiv 1 \bmod 4$ then the discriminant of $x^2 - d$ is $4d$ but the discriminant of $\mathbb{Q}(\sqrt{d})$ is $d$, and the latter is the "right" choice because $2$ doesn't ramify in $\mathbb{Z} \left[ \frac{1 + \sqrt{d}}{2} \right]$.
The Basel problem As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$$ However, Euler was Euler and he gave other proofs. I believe many of you know some nice proofs of this, can you please share it with us?
I really like this one. Consider $f(x)=\pi^2-x^2$. Compute its Fourier expansion to obtain $$f(x)=\frac{2}{3}\pi^2-4\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\cos nx.$$ Now let $x=\pi$, then it quickly follows that $$4\zeta(2)=\frac{2}{3}\pi^2\implies \zeta(2)=\frac{\pi^2}{6}.$$
The Basel problem As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$$ However, Euler was Euler and he gave other proofs. I believe many of you know some nice proofs of this, can you please share it with us?
There is a simple way of proving that $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$ using the following well-known series identity: $$\left(\sin^{-1}(x)\right)^{2} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{(2x)^{2n}}{n^2 \binom{2n}{n}}.$$ Substituting $\sin(x)$ for $x$, we have that $$\left(\sin^{-1}(\sin x)\right)^{2} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{(2 \sin(x))^{2n}}{n^2 \binom{2n}{n}}.$$ Since $\sin^{-1}(\sin x)$ equals $x$ on $[0,\pi/2]$ and $\pi-x$ on $[\pi/2,\pi]$, we have $\int_{0}^{\pi} \left(\sin^{-1}(\sin x)\right)^{2} dx = 2\int_{0}^{\pi/2} x^{2} dx = \frac{\pi^3}{12}$, and integrating the series term by term thus gives: $$\frac{\pi^3}{12} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{\int_{0}^{\pi} (2 \sin(x))^{2n} dx}{n^2 \binom{2n}{n}}.$$ Since $$\int_{0}^{\pi} \left(\sin(x)\right)^{2n} dx = \frac{\sqrt{\pi} \ \Gamma\left(n + \frac{1}{2}\right)}{\Gamma(n+1)},$$ we thus have that: $$\frac{\pi^3}{12} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{ 4^{n} \frac{\sqrt{\pi} \ \Gamma\left(n + \frac{1}{2}\right)}{\Gamma(n+1)} }{n^2 \binom{2n}{n}}.$$ Simplifying the summand, we have that $$\frac{\pi^3}{12} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{\pi}{n^2},$$ and we thus have that $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$ as desired.
$|G|>2$ implies $G$ has non trivial automorphism Well, this is an exercise problem from Herstein which sounds difficult: How does one prove that if $|G|>2$, then $G$ has a non-trivial automorphism? The only thing I know which connects a group with its automorphisms is the theorem, $$G/Z(G) \cong \mathcal{I}(G)$$ where $\mathcal{I}(G)$ denotes the Inner-Automorphism group of $G$. So for a group with $Z(G)=(e)$, we can conclude that it has a non-trivial automorphism, but what about groups with non-trivial center?
The other two answers assume the axiom of choice: * *Arturo Magidin uses choice when he forms the direct sum ("...it is isomorphic to a (possibly infinite) sum of copies of $C_2$...") *HJRW uses choice when he fixes a basis (the proof that every vector space has a basis requires the axiom of choice). If we do not assume the axiom of choice then it is consistent that there exists a group $G$ of order greater than two such that $\operatorname{Aut}(G)$ is trivial. This is explained in this answer of Asaf Karagila.
Nth term of the series where sign toggles after a triangular number What could be the possible way to find the Nth term of following series where the sign toggles after each triangular number? 1 -2 -3 4 5 6 -7 -8 -9 -10 11 12 13 14 15 -16 -17 .... The series cannot be in a Geometric Progression because there are 4 distinct triangular numbers in the above series.
Using the formula for the triangular numbers we note that if $m \in I = [2n^2+n+1,2n^2+3n+1]$ for some $n=0,1,2,\ldots$ then $f(m)=m,$ otherwise $f(m)=-m.$ The only possible choice of $n$ is $ \lfloor \sqrt{m/2} \rfloor,$ since if we write $l(n) = 2n^2+n+1$ and $u(n) = 2n^2+3n+1$ by writing $\sqrt{m/2} = N + r,$ where $N$ is an integer and $0 \le r < 1$ we have $$u \left( \lfloor \sqrt{m/2} \rfloor - 1 \right) = 2N^2 - N < 2N^2+4Nr+r^2 < m,$$ and so $m \notin I.$ Similarly $$l \left( \lfloor \sqrt{m/2} \rfloor + 1 \right) > m,$$ so $m \notin I.$ Hence we have $$f(m) = m \textrm{ when } m \in [2t^2+t+1,2t^2+3t+1] \textrm{ for } t = \lfloor \sqrt{m/2} \rfloor,$$ otherwise $f(m)=-m.$
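A quick Python check that this closed form agrees with the direct "toggle after each triangular number" definition (just a sanity test, not part of the argument):

```python
from math import floor, sqrt

def closed_form(m):
    n = floor(sqrt(m / 2))
    return m if 2*n*n + n + 1 <= m <= 2*n*n + 3*n + 1 else -m

def direct(m):
    b = 0
    while (b + 1) * (b + 2) // 2 < m:   # find the block index b with T_b < m <= T_{b+1}
        b += 1
    return m if b % 2 == 0 else -m      # the sign toggles at every triangular number

print(all(closed_form(m) == direct(m) for m in range(1, 10_000)))   # True
```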
Best Cities for Mathematical Study This may sound silly, but... Suppose an aspiring amateur mathematician wanted to plan to move to another city... What are some cities that are home to some of the largest number of the brightest mathematicians? I'm sure this may depend on university presence, or possibly industry presence, or possibly something surprising. Wondering where the best place to take a non-faculty job at a university and try to make friends with some sharp minds in the computer lab or at the nearby pub might be.
Without sounding biased in any way, I would say Cambridge/Boston is a good choice for you. In the particular order of funded research/department size there are Harvard University, MIT, Boston University, Boston College, Northeastern University, Brandeis University, Tufts University, Bentley University, University of Massachusetts at Boston, Curry College, Eastern Nazarene College, Pine Manor College, Hellenic College, Lesley University, Wheelock College, Lasell College, Simmons University, Cambridge College and Bunker Hill Community College (and many, many more) within the metropolis. See http://en.wikipedia.org/wiki/List_of_colleges_and_universities_in_metropolitan_Boston for a complete list. A number of these institutions offer extension programs (with open enrollment and classes in the evening or weekends) suitable for life-long learners and aspiring amateur mathematicians. For example, the Masters for Mathematics Teaching Program at Harvard University offers courses in all major mathematics subject areas, taught by many instructors who hold separate positions in the university (like adjunct/junior faculty, preceptors, senior lecturers, post-doctoral or teaching fellows and even a senior graduate student).
Explicit solutions to this nonlinear system of two differential equations I am interested in a system of differential equations that is non-linear, but it doesn't seem to be too crazy. I'm not very good at non-linear stuff, so I thought I'd throw it out there. The actual equations I'm looking at have several parameters that I'd like to tweak eventually: $$q' = k - m / r$$ $$r' = i - n r - j q$$ where $i, j, k, m$ and $n$ are all real-valued constants. I'm guessing that this system would be cyclical in nature, but I'm not sure if it has any explicit solution, so I have produced a version of it with the constants removed to see if that can be solved: $$q' = 1 - 1 / r$$ $$r' = 1 - r - q$$ Anyone know if either of these are solvable and what kind of techniques would be needed to solve them if so? The first equation is based on a polar coordinate system where $q$ (or $\theta$) is the angle and $r$ is the radius, and I've made a number of simplifications to make it somewhat tractable.
Taking the second equation, $r' = i - nr - jq$, and differentiating gives $r'' = -nr' - jq' = -nr' - j(k-\frac{m}{r})$ or in other words $r'' + ar' + \frac{b}{r} = c$ which is a much simpler differential equation, in only one variable. I think that you could probably solve this with power series or clever guessing, but it needs to be worked out.
Proving Gauss' polynomial theorem (Rational Root Test) Let $P \in \mathbb{Z}[x], P(x) = \displaystyle\sum\limits_{j=0}^n a_j x^j, a_n \neq 0$ and $a_0 \neq 0$; if $p/q$ is a root of P (with p and q coprimes) then $p|a_0$ and $q|a_n$ I've managed to prove the first part ($p|a_0$) and I suppose I'm not far from proving the second, though I'd really like some feedback since I'm just starting with making proofs of my own. Proof: $P(x) = a_n(x-p/q)\displaystyle\prod\limits_{j=2}^n (x-r_j)$, with $r_j$ being the other n-1 roots of P(x). It follows that $a_0 = a_n(-p/q)\displaystyle\prod\limits_{j=2}^n (-r_j)$ Then, $-p/q|a_0$ and obviously $p/q|a_0$. Rephrasing, $a_0 = l\frac{p}{q} = \frac{l}{q} p$ with $l \in \mathbb{Z}$. This implies $p|a_0$ if $l/q \in \mathbb{Z}$, but this is trivial since $q|lp$ and q and p are coprimes, so $q|l$. Therefore, $p|a_0$. As for the second part, we want to see that $q_i|a_n \forall i \leq n$. We define $d$ as the least common multiple of $\{q_1, q_2,...,q_n\}$. Then, $q_i|a_n \forall i \leq n \iff d|a_n$. Also, it follows that $d|\displaystyle\prod\limits_{j=1}^n q_j$, so we want to see that $a_n = l \displaystyle\prod\limits_{j=1}^n q_j$ with $l \in \mathbb{Z}$. Here's where I have my doubts with the proof as I have no way to show that l is indeed an integer. Rearranging the previously given equation for $a_0$: $a_n = a_0 \displaystyle\prod\limits_{j=1}^n \frac{-q_j}{p_j}$ Using the previous reasoning, as $p_i|a_0 \forall i \leq n$, then $a_0 = k \displaystyle\prod\limits_{j=1}^n p_j$. Replacing $a_0$: $a_n = k \displaystyle\prod\limits_{j=1}^n -q_j$, which is equivalent to $q|a_n$ as shown earlier.
If you know how to prove the first part, just apply it to the polynomial $t^n P(t^{-1})$.
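To spell the hint out slightly (this step is implicit in the answer): if $P(x)=\sum_{j=0}^n a_j x^j$, then $$t^n P(t^{-1}) = a_0 t^n + a_1 t^{n-1} + \cdots + a_{n-1} t + a_n,$$ and $q/p$ is a root of this reversed polynomial whenever $p/q$ is a root of $P$; since its constant term is $a_n$, the first part gives $q \mid a_n$.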
What are the three cube roots of -1? What are the three cube roots of $-1$? Not sure if this is a trick question, but I have been asked this. One of the answers is $-1$; what are the other two?
Write $-1$ in polar form as $e^{i\pi}$. In general, the cube roots of $r e^{i\theta}$ are given by $r^{1/3}e^{i\theta/3}$, $r^{1/3}e^{i(\theta/3 + 2\pi /3)}$ and $r^{1/3}e^{i(\theta/3 + 4\pi /3)}$. In your case $r = 1$ and $\theta = \pi$, so your cube roots are $e^{i\pi / 3}$, $e^{i\pi}$, and $e^{i 5\pi/ 3}$. Put back into rectangular form, they are ${1 \over 2} + i{\sqrt{3} \over 2}$, $-1$, and ${1 \over 2} - i{\sqrt{3} \over 2}$.
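As a quick check (my addition), cubing one of the non-real roots directly: $$\left(\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i\right)^2 = -\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i, \qquad \left(-\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i\right)\left(\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i\right) = -\tfrac{1}{4}-\tfrac{3}{4} = -1.$$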
Why does the polynomial equation $1 + x + x^2 + \cdots + x^n = S$ have at most two solutions in $x$? Américo Tavares pointed out in his answer to this question that finding the ratio of a geometric progression only from knowledge of the sum of its first $n+1$ terms $S = 1+x+x^2+\cdots+x^n$ amounts to solving a polynomial of degree $n$. This suggested to me that there might be up to $n$ real solutions of $x$ for a given sum, but I could not find any. In fact, it turned out that the following fact is true: For $n \ge 1$ and $S \in \mathbb{R}$, the polynomial equation $x^n + x^{n-1} + \cdots + x + 1 = S$ has at most two real solutions. A corollary is that if $n$ is odd, there is exactly one real solution. I was only able to prove this using a rather contrived geometric argument based on the shape of the graph of $y = x^{n+1}$. Is there a simple, direct (and ideally, intuitive) proof of this fact?
The roots are also roots of $x^{n+1} - Sx + S - 1 = 0$ which we get by multiplying your equation by $x-1$. This polynomial ($x^{n+1} - Sx + S-1$), as we move from $x = -\infty$ to $x = \infty$ is either * *Monotonically increasing, and thus has at most one real root. *Monotonically decreasing, and then monotonically increasing and hence can have at most two real roots. *Monotonically increasing, then decreasing and then again increasing (happens only when $n$ is even). In which case there are at most three real roots, one of which is $1$. So for $S \ne n+1$, the original equation does not have more than two solutions. If $S=n+1$ and $n$ is even, then the turning points are $-1$ and $1$ and the value of the polynomial at $-1$ is positive. So the only roots are $1$ and a root which is $< -1$. This can be seen by looking at its derivative, which is an increasing function for odd $n$, and for even $n$, it is positive, then possibly negative (depending on $S$) and then positive again, as we move from $x = -\infty$ to $x = \infty$.
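For completeness (a small step not spelled out above), the derivative in question is $$\frac{d}{dx}\left(x^{n+1} - Sx + S - 1\right) = (n+1)x^{n} - S,$$ which is increasing in $x$ when $n$ is odd (giving the second case), is nonnegative when $n$ is even and $S \le 0$ (the first case), and is positive, then negative, then positive when $n$ is even and $S > 0$ (the third case).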
Funny identities Here is a funny exercise $$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?
$$ \int_{-\infty}^{\infty}{\sin\left(x\right) \over x}\,{\rm d}x = \pi\int_{-1}^{1}\delta\left(k\right)\,{\rm d}k $$
derivative of characteristic function I came across an interesting problem but am unable to see how to approach it. How do I use the dominated convergence theorem (LDCT) to show that the first derivative of the characteristic function of the probability distribution at $t = 0$ is $\phi^′(0)=iE[X]$? Any ideas? References: http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory) http://en.wikipedia.org/wiki/Dominated_convergence_theorem
You need to show that if $(a_n)$ is a sequence of nonzero numbers which tends to zero, that $(\phi(a_n)-\phi(0))/a_n\to iE[X]$. Now $$\frac{\phi(a)-\phi(0)}{a}=E\left(\frac{e^{iaX}-1}{a}\right).$$ You need some hypothesis on $X$ for the result to work, for instance that $|X|$ has finite expectation. Certainly $(e^{iaX}-1)/a\to iX$ so to apply dominated convergence we need a function $f(X)$ with finite expectation and with $|(e^{iaX}-1)/a|\le f(X)$ at least for $a$ in a deleted neighbourhood of $0$. Does $f(X)=|X|$ work?
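One way to settle the closing question (my addition, not part of the original answer): for real $\theta$ we have $|e^{i\theta}-1| = 2|\sin(\theta/2)| \le |\theta|$, so $$\left|\frac{e^{iaX}-1}{a}\right| \le \frac{|aX|}{|a|} = |X|,$$ hence $f(X)=|X|$ does dominate, and $E|X| < \infty$ is exactly the hypothesis under which dominated convergence applies.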
Solving the equation $-2x^3 +10x^2 -17x +8=(2x^2)(5x -x^3)^{1/3}$ I want to know how to solve this equation: $-2x^3 +10x^2 -17x +8=(2x^2)(5x -x^3)^{1/3}$ I have some trouble doing that and I'd be glad of any help I may get.
The algebraic $\frac{1}{12}(17 + \sqrt{97})$ is not a root of the equation \begin{eqnarray} -2 x^3 + 10 x^2 - 17 x + 8 = (2 x^2) (5 x - x^3)^{1/3} \end{eqnarray} Plugging it in, you find that the left hand side is real and equal to \begin{eqnarray} \tfrac{1}{216}(-149 - 37 \sqrt{97}) = -2.37689 \dots \end{eqnarray} The right side is \begin{eqnarray} \tfrac{1}{432} \left( \tfrac{1}{2}( 595 - 61 \sqrt{97}) \right)^{1/3} (17 + \sqrt{97})^2 = 1.18844 \dots + i \, 2.05845 \dots \end{eqnarray} Note: $595 < 61 \sqrt{97}$. I think the ambiguity lies in the fact that we have not used the third-roots of unity. Numerical computations aside, just plot the two functions. The RHS is a positive function defined only in the I and II quadrants. The LHS is cubic. There is only one real intersection point.
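A quick numerical confirmation of the two values above (my own sketch; the principal complex cube root is used, matching the computation in the answer):

```python
from math import sqrt

x = (17 + sqrt(97)) / 12           # the candidate root
lhs = -2 * x**3 + 10 * x**2 - 17 * x + 8
# 5x - x^3 is negative here, so take the principal complex cube root
rhs = 2 * x**2 * complex(5 * x - x**3) ** (1 / 3)
print(lhs)   # about -2.37689
print(rhs)   # about (1.18844 + 2.05845j)
```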
If $F$ is strictly increasing with closed image, then $F$ is continuous Let $F$ be a strictly increasing function on $S$, a subset of the real line. If you know that $F(S)$ is closed, prove that $F$ is continuous.
Let $f$ be any strictly increasing function on $S$. To show that $f$ is continuous on $S$, it is enough to show that it is continuous at $x$ for every $x \in S$. If $x$ is an isolated point of $S$, every function is continuous at $x$, so assume otherwise. The key here is that monotone functions can only be discontinuous in a very particular, and simple, way. Namely, the one-sided limits $f(x-)$ and $f(x+)$ always exist (or rather, the first exists when $x$ is not left-isolated and the second exists when $x$ is not right-isolated): it is easy to see for instance that $f(x-) = \sup_{y < x, \ y \in S} f(y)$. Therefore a discontinuity occurs when $f(x-) \neq f(x)$ or $f(x+) \neq f(x)$. In the first case we have that for all $y < x$, $f(y) < f(x-)$ and for all $y \geq x$, $f(y) > f(x-)$. Therefore $f(x-)$ is not in $f(S)$. But by the above expression for $f(x-)$, it is certainly a limit point of $f(S)$. So $f(S)$ is not closed. The other case is similar. Other nice, related properties of monotone functions include: a monotone function has at most countably many points of discontinuity and a monotone function is a regulated function in the sense of Dieudonné. In particular the theoretical aspects of integration are especially simple for such functions. Added: As Myke notes in the comments below, the conclusion need not be true if $f$ is merely increasing (i.e., $x_1 \leq x_2$ implies $f(x_1) \leq f(x_2)$). A counterexample is given by the characteristic function of $[0,\infty)$.
Probability of Fire The probability that a fire will occur is $0.001$. If there is a fire, the amount of damage, $X$, will have a Pareto distribution given by $P(X>x) = \left(\frac{2(10)^6}{2(10)^6+x} \right)^2$. An insurance policy will pay the excess of the loss over a deductible of $100,000$. For this coverage the one-time insurance premium will be $110 \%$ of the expected payment. Calculate the premium. So the expected payment is $E[W]$ where $W$ denotes the payment. Then $E[W] = E[W| \text{fire}]P(\text{fire})+E[W| \text{no fire}]P(\text{no fire})$. To calculate $E[W| \text{fire}]$, we could use $\int_{0.1}^{\infty} [1-F(x)] \ dx$? This would be: $\int_{0.1}^{\infty} 1-\left[1-\left(\frac{2(10)^6}{2(10)^6+x} \right)^2\right] \ dx$ which equals $\int_{0.1}^{\infty} \left(\frac{2(10)^6}{2(10)^6+x} \right)^2 \ dx$?
So $E[W| \text{fire}] = \int_{0.1}^{\infty} [1-F(x)] \ dx = 4 \int_{0.1}^{\infty} (2+x)^{-2} \ dx = 4/2.1$. Thus $E[W] = (4/2.1)(0.001)+0 = 0.00190476$. So the premium is $(0.0019047)(10^6)(1.1) =2,095$.
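Spelling out the arithmetic (the integral is in units of $10^6$, so the deductible is $0.1$ and the Pareto scale is $2$): $$\int_{0.1}^{\infty}\frac{4}{(2+x)^{2}}\,dx = \left[-\frac{4}{2+x}\right]_{0.1}^{\infty} = \frac{4}{2.1}\approx 1.905, \qquad \text{premium} = 1.1 \times 0.001 \times 1.905 \times 10^{6} \approx 2095.$$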
What happens to the 0 element in a Finite Group? So, I'm relearning Group Theory. And I got the axioms down, I think. So let's make a concrete example: * *The collection of positive integers less than 7: 1,2,3,4,5,6 *The • operation will be multiplication mod 7. *Associativity holds. *The Identity e is 1. *Every element has an inverse: * *1*? mod 7 = 1 --> 1 *2*? mod 7 = 1 --> 4 *3*? mod 7 = 1 --> 5 *4*? mod 7 = 1 --> 2 *5*? mod 7 = 1 --> 3 *6*? mod 7 = 1 --> 6 But! What is the order of the group?! I thought the order would be 7. But there are 6 elements! So maybe I was wrong and 0 should be in the group. But 0 does not have an inverse! There is no x such that 0*x mod 7 = 1. So what am I misunderstanding here? Is it the definition of order? Is it some other trick about groups?
The only error is your belief that the order "should" be 7. The order of a finite group is just the number of elements in the group. Your group consists of the positive integers that are smaller than, and relatively prime to, 7. There are six of them, so your group has order 6. (I'm not sure why you thought the order should be 7...) Indeed, you cannot add $0$ to the mix and still have a group. If you consider the numbers $0,1,\ldots,6$ under multiplication modulo $7$ you do not get a group, you get a semigroup. Added: Ah, Jonas Meyer's reply suggests what is going on; since you say you are relearning Group Theory, you might have vague memories of the "group of integers modulo $n$" as having order $n$. The group of integers modulo $n$ under addition has order $n$; but the multiplicative group modulo $n$ consists of the positive integers less than, and relatively prime to, $n$, with the operation being multiplication modulo $n$, and has $\varphi(n)$ elements (Euler's phi function). When $n=7$ (the case you are looking at), the group has $\varphi(7)=6$ elements, as you observed.
Finding the fixed points of a contraction Banach's fixed point theorem gives us a sufficient condition for a function in a complete metric space to have a fixed point, namely it needs be a contraction. I'm interested in how to calculate the limit of the sequence $x_0 = f(x), x_1 = f(x_0), \ldots, x_n = f(x_{n-1})$ for a fixed $x$. I couldn't figure out a way to do this limit with ordinary limits calculations. The only thing I have at my disposal is the proof of the theorem, from which we see that the sequence $x_n$ is a Cauchy sequence; from this, I'm able to say, for example, that $\left|f(f(f(x))) - f(f(f(f(x))))\right| \leq \left|f(x_0)-f(x_1)\right| ( \frac{k^3}{1-k})$, where $k$ is the contraction constant, but I can't get any further in the calculations. My question is: how should I proceed to calculate this limit exactly, if there is a non-numerical (read: analytical) way to do this? Remark: I'm interested in functions $\mathbb{R} \rightarrow \mathbb{R}$ (as it can be seen from my use of the euclidean metric in $\mathbb{R}$)
@Andy (in reply to your comment/question "Could you provide some example that has a closed form and explain if (and how) it is possible to find the fixed point without solving x = f(x) but trying to calculate the limit of x_n?"): I believe that you would be hard-pressed to achieve this, since your function $f$ is a continuous function (being a contraction map in the first place); and if you then take limits of both sides of $x_n = f(x_{n-1})$, you will get: $$\lim_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} f(x_{n-1})$$ which (by continuity) leads to: $$\lim_{n \rightarrow \infty} x_n = f (\lim_{n \rightarrow \infty} x_{n-1})$$ or $$l = f(l)$$ with $l = \lim_{n \rightarrow \infty} x_n$ This means that you will have to solve $l = f(l)$, which was what you wanted to avoid in the first place!
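For what it's worth, here is a small sketch (mine, not from the answer) of the Banach iteration for a concrete contraction on $\mathbb{R}$, namely $f(x) = \tfrac{1}{2}\cos x$ with Lipschitz constant $\tfrac{1}{2}$; numerically one only ever approximates the limit, which illustrates the point that the exact value still comes from solving $l = f(l)$:

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive terms differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: 0.5 * math.cos(x)   # a contraction on R with constant 1/2
print(fixed_point(f, 1.0))        # approximately 0.450
```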
Probability of cumulative dice rolls hitting a number Is there a general formula to determine the probability of unbounded, cumulative dice rolls hitting a specified number? For Example, with a D6 and 14: 5 + 2 + 3 + 4 = 14 : success 1 + 1 + 1 + 6 + 5 + 4 = 17 : failure
Assuming the order matters (i,e 1+2 is a different outcome from 2+1) The probability of getting the sum $n$ with dice numbered $1,2,\dots,6$ is the coefficient of $x^n$ in $$\sum_{j=0}^{\infty}(\frac{x+x^2+x^3+x^4+x^5+x^6}{6})^j = \frac{6}{6-x-x^2-x^3-x^4-x^5-x^6}$$ Writing it as partial fractions (using roots of $6-x-x^2-x^3-x^4-x^5-x^6=0$) or using Cauchy's integral formula to find the coefficient of $x^n$, Taylor series, etc should work.
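The same coefficient can be extracted by a short dynamic program (my own addition): the probability $p_n$ that the running total ever equals $n$ satisfies $p_0 = 1$ and $p_n = \frac{1}{6}\sum_{k=1}^{6} p_{n-k}$ (with $p_m = 0$ for $m < 0$), which is just the recurrence hiding behind the generating function above.

```python
def hit_probability(target, sides=6):
    """Probability that the running total of repeated fair die rolls ever equals target."""
    p = [0.0] * (target + 1)
    p[0] = 1.0
    for n in range(1, target + 1):
        p[n] = sum(p[n - k] for k in range(1, sides + 1) if n - k >= 0) / sides
    return p[target]

print(hit_probability(14))   # about 0.284; the long-run value is 1/3.5 = 2/7
```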
Can't Solve an Integral According to the solution manual: $\int \frac{x}{\sqrt{1-x^{4}}}dx = \frac{1}{2}\arcsin x^{2}+C$ My solution doesn't seem to be working. I know another way of solving it (setting $u=x^{2}$) but the fact that this way of solving it doesn't work bothers me. $$\text{set }u=1-x^{4}\text{ so } dx=\frac{du}{-4x^{3}} $$ $$ \begin{align*} \int \frac{x}{\sqrt{1-x^{4}}}dx &= \int \frac{x}{\sqrt{u}}dx \\ &= \int \frac{xdu}{-4x^{3}\sqrt{u}} \\ &= -\frac{1}{4} \int \frac{du}{x^{2}\sqrt{u}} \\ \end{align*} $$ $$ \text{set } v=\sqrt{u} \text{ so }du=2\sqrt{u}\,dv $$ \begin{align*} -\frac{1}{4} \int \frac{du}{x^{2}\sqrt{u}} &= -\frac{1}{2} \int \frac{dv}{x^{2}} \\ &= -\frac{1}{2} \int \frac{dv}{\sqrt{1-v^{2}}} \\ &= -\frac{1}{2} \arcsin (v) + C \\ &= -\frac{1}{2} \arcsin (\sqrt {1-x^{4}}) + C \\ \end{align*} I'll be happy to clarify any steps I took. Thanks!
Your solution is an antiderivative of the original function. You can always check whether your solution is correct by taking its derivative. This also implies that the book solution and your solution differ by a constant. For this specific problem, imagine the right triangle with sides $x^2$ and $\sqrt{1-x^4}$ and hypotenuse $1$. Then $\arcsin\sqrt{1-x^4} = \frac{\pi}{2} - \arcsin x^2$, and it should be easy to see from there how both solutions are related.
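Checking the asker's antiderivative by differentiation, as the answer suggests: $$\frac{d}{dx}\left[-\tfrac{1}{2}\arcsin\sqrt{1-x^{4}}\right] = -\frac{1}{2}\cdot\frac{1}{\sqrt{1-(1-x^{4})}}\cdot\frac{-4x^{3}}{2\sqrt{1-x^{4}}} = \frac{x^{3}}{x^{2}\sqrt{1-x^{4}}} = \frac{x}{\sqrt{1-x^{4}}},$$ using $\sqrt{1-(1-x^{4})} = \sqrt{x^{4}} = x^{2}$, so both answers are indeed antiderivatives of the original integrand.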
Number of terms in a trinomial expansion According to Wikipedia, the number of terms in $(x+y+z)^{30}$ is $496$. I'm assuming this is before like terms are added up. How many terms would there be if like terms were combined? How would I go about figuring that out?
No, the 496 is the number of terms after like terms are combined. Before like terms are combined there are $3^{30}$ terms. This is because you have 30 different factors, and so the number of terms you get before combining is the number of ways to choose 30 elements when there are three choices for each. Zaricuse's answer is hinting at how to derive the formula on the Wikipedia page. Here's another way to look at the formula on the Wikipedia page: The number of terms in the expansion of $(x+y+z)^n$ after combining is the number of ways to choose $n$ elements with replacement (since you can choose $x,y,z$ more than once) in which order does not matter from a set of 3 elements. This formula is known to be $$\binom{3+n-1}{n} = \binom{n+2}{n} = \frac{(n+1)(n+2)}{2}.$$ See, for example, MathWorld's entry on Ball Picking.
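Plugging in $n=30$ recovers the figure quoted from Wikipedia: $$\binom{30+2}{2} = \frac{32\cdot 31}{2} = 496.$$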
Limit of integral - part 2 Inspired by the recent post "Limit of integral", I propose the following problem (hoping it will not turn out to be too easy). Suppose that $g:[0,1] \times [0,1] \to {\bf R}$ is continuous in both variables separately. Is it true that, for all $x_0 \in [0,1]$, $$ \lim \limits_{x \to x_0 } \int_0^1 {g(x,y)\,{\rm d}y} = \int_0^1 {g(x_0 ,y)\,{\rm d}y} . $$
I think I have another counterexample. Define $f(x)=\int_{0}^x e^{-1/t}dt$ for $x\gt0$. This is chosen because it goes to zero as $x$ goes to zero from the right and because experimentation led me to the differential equation $\frac{f''(x)}{f'(x)}=\frac{1}{x^2}$ as a sufficient condition for the following to work. Define $$g(x,y) = \left\{ \begin{array}{lr} \frac{xy}{f(x)+y^2} & \text{if } x\gt0, \\ 0 & \text{if } x=0, \end{array} \right.$$ for $(x,y)$ in $[0,1]\times[0,1]$, and let $x_0=0$. The right hand side of your tentative equation is obviously $0$. The left hand side is $$\lim_{x\to0+}\frac{x}{2}\log\left(1+\frac{1}{f(x)}\right),$$ which comes out to $\frac{1}{2}$ after $2$ applications of l'Hôpital's rule, if I did it correctly. (I started by playing with the standard example of a discontinuous but separately continuous function on $\mathbb{R}^2$, $f(x,y)=\frac{xy}{x^2+y^2}$ when $x$ or $y$ is nonzero, $f(x,y)=0$ when $x=y=0$. Then I tried to see how the $x^2$ in the denominator could be modified to give a counterexample here, by replacing it with an unknown $f(x)$ that goes to $0$ at $0$ and seeing what further properties of $f(x)$ would make it work. As I mentioned, this led in particular to the sufficient condition $\frac{f''(x)}{f'(x)}=\frac{1}{x^2}$, and hence to this example. Unfortunately, I can't offer any real intuition.) Added I decided to look at this a little more, and came up with an example simpler than the other one I gave. With $$g(x,y)=\frac{-y}{\log(x/2)(x+y^2)}$$ for $x\gt0$ and $0$ otherwise, the same result as above holds.
How many ways can I make six moves on a Rubik's cube? I am writing a program to solve a Rubik's cube, and would like to know the answer to this question. There are 12 ways to make one move on a Rubik's cube. How many ways are there to make a sequence of six moves? From my project's specification: up to six moves may be used to scramble the cube. My job is to write a program that can return the cube to the solved state. I am allowed to use up to 90 moves to solve it. Currently, I can solve the cube, but it takes me over 100 moves (which fails the objective)... so I ask this question to figure out if a brute force method is applicable to this situation. If the number of ways to make six moves is not overly excessive, I can just make six random moves, then check to see if the cube is solved. Repeat if necessary.
$12^6$ is just under $3$ million. So it would probably not work to randomly try six unscrambles. But it wouldn't be too hard to make a data file of all the positions and their unscramble twists if you can find a reasonable way to search it, like some hash function on a description of the position.
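A rough sketch of the exhaustive alternative (my own; `apply_move` and `is_solved` stand in for whatever cube representation the program already has, and are not defined here): iterative deepening over all move sequences of length at most 6, on the order of the $12^6$ sequences mentioned above, returning a shortest unscramble.

```python
MOVES = range(12)   # placeholder labels for the 12 face turns

def solve(cube, apply_move, is_solved, max_depth=6):
    """Return a move list (length <= max_depth) solving `cube`, or None.

    `apply_move(cube, m)` must return the new state and `is_solved(cube)` a bool;
    both are assumed to be supplied by the rest of the program.
    """
    def dfs(state, depth, path):
        if is_solved(state):
            return path
        if depth == 0:
            return None
        for m in MOVES:
            result = dfs(apply_move(state, m), depth - 1, path + [m])
            if result is not None:
                return result
        return None

    for depth in range(max_depth + 1):   # iterative deepening finds a shortest solution
        result = dfs(cube, depth, [])
        if result is not None:
            return result
    return None
```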
Black Scholes PDE and its many solutions I know the general Black-Scholes formula for Option pricing theory (for calls and puts), however I want to know the other solutions to the Black-Scholes PDE and its various boundary conditions. Can someone start from the B-S PDE and derive its various solutions based on different boundary conditions? Even if you could provide some links/sources where it is done, I'll appreciate that. The point is that I want to know various other solutions and their boundary conditions which are derived from Black-Scholes PDE. Thank you.
Wikipedia has a fairly good explanation of this. In particular, look at http://en.wikipedia.org/wiki/Black%E2%80%93Scholes#Derivation The relevant problems (1 and 2) are in Stein and Shakarchi's Fourier Analysis text (they derive the fundamental solution to the heat equation via the Fourier transform within the chapter): http://books.google.com/books?id=FAOc24bTfGkC&pg=PA169 Finally, John Hull's Options, Futures, and Other Derivatives text has a derivation of the Black-Scholes formulas in the appendix to Chapter 13.
Finding subgroups of a free group with a specific index How many subgroups with index two are there of a free group on two generators? What are their generators? All I know is that the subgroups should have $(2 \times 2) + 1 - 2 = 3$ generators.
I like to approach this sort of problem using graphs. The free group on two generators is the fundamental group of a wedge of two circles $R_2$, which I picture as a red oriented circle and a black oriented circle. A subgroup of index $ k$ corresponds to a covering map $G\to R_2$ of index $k$. $G$ can be pictured as a (Edit: basepointed) $k$-vertex connected graph with red and black oriented edges such that at every vertex there is one incoming and one outgoing edge of each color. In the case $k=2$, it's not hard to write down all such graphs. I count three myself.
Applications of the Mean Value Theorem What are some interesting applications of the Mean Value Theorem for derivatives? Both the 'extended' or 'non-extended' versions as seen here are of interest. So far I've seen some trivial applications like finding the number of roots of a polynomial equation. What are some more interesting applications of it? I'm asking this as I'm not exactly sure why MVT is so important - so examples which focus on explaining that would be appreciated.
There are several applications of the Mean Value Theorem. It is one of the most important theorems in analysis and is used all the time. I've listed $5$ important results below. I'll provide some motivation for their importance if you request. $1)$ If $f: (a,b) \rightarrow \mathbb{R}$ is differentiable and $f'(x) = 0$ for all $x \in (a,b)$, then $f$ is constant. $2)$ Leibniz's rule: Suppose $ f : [a,b] \times [c,d] \rightarrow \mathbb{R}$ is a continuous function with $\partial f/ \partial x$ continuous. Then the function $F(x) = \int_{c}^d f(x,y)dy$ is differentiable with derivative $$ F'(x) = \int_{c}^d \frac{\partial f}{\partial x} (x,y)dy.$$ $3)$ L'Hospital's rule $4)$ If $A$ is an open set in $\mathbb{R}^n$ and $f:A \rightarrow \mathbb{R}^m$ is a function with continuous partial derivatives, then $f$ is differentiable. $5)$ Symmetry of second derivatives: If $A$ is an open set in $\mathbb{R}^n$ and $f:A \rightarrow \mathbb{R}$ is a function of class $C^2$, then for each $a \in A$, $$\frac{\partial^2 f}{\partial x_i \partial x_j} (a) = \frac{\partial^2 f}{\partial x_j \partial x_i} (a)$$
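As a sample of how the theorem gets used, the first item follows from it in one line: for any $x < y$ in $(a,b)$ the Mean Value Theorem provides $\xi \in (x,y)$ with $$f(y) - f(x) = f'(\xi)\,(y-x) = 0,$$ so $f$ takes the same value at any two points of $(a,b)$.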
Are they isomorphic? $G$ and $G \times G$ where $G = \Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_2 \times\cdots$ The answer says yes but I cannot figure out what homomorphism function I could use.
Think of $$G = \mathbb{Z_{2_1}}\times \mathbb{Z_{2_2}} \times \mathbb{Z_{2_3}} \times \mathbb{Z_{2_4}} \times \mathbb{Z_{2_5}} \times \ldots$$ and $$G \times G= (\mathbb{Z_{2_1}}\times \mathbb{Z_{2_3}} \times \mathbb{Z_{2_5}} \times \ldots) \times (\mathbb{Z_{2_2}}\times \mathbb{Z_{2_4}} \times \mathbb{Z_{2_6}} \times \ldots)$$
Finding roots of polynomials, negative square root The formula for finding the roots of a quadratic polynomial is as follows $$x = \frac {-b \pm \sqrt{ b^2 - 4ac }}{2a} $$ What happens if you want to find the roots of a polynomial like this simplified one $$ 3x^2 + x + 24 = 0 $$ then the square root value becomes $$ \sqrt{ 1^2 - 4\cdot3\cdot24 } $$ $$ = \sqrt{ -287 } $$ which is the square root of a negative number, which isn't allowed. What do you do in this case? I know there are other methods, i.e. factorisation and completing the square, but does this mean that this formula can only be used in specialised cases or have I gone wrong somewhere along the path?
100% correct, and good observation. To solve this, we define $\sqrt{-1}=i$, where $i$ is the imaginary unit. Then $\sqrt{-287}=\sqrt{287}i$, and we can solve as per the general quadratic formula. Numbers of the form $a+bi$ are known as complex numbers and are extremely useful. In general the term $b^2-4ac$ is known as the discriminant of the quadratic equation. It should be clear that if $b^2-4ac>0$ there exist two real solutions, if $b^2-4ac=0$ there is one solution (the repeated root) and if $$b^2-4ac \lt 0$$ there are two complex solutions. The quadratic formula is the most general way to solve the quadratic equation - so you are doing the right thing.
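For the quadratic in the question this gives $$x = \frac{-1 \pm \sqrt{1 - 288}}{6} = \frac{-1 \pm i\sqrt{287}}{6},$$ a pair of complex-conjugate roots, consistent with the negative discriminant.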
Reference for matrix calculus Could someone provide a good reference for learning matrix calculus? I've recently moved to a more engineering-oriented field where it's commonly used and don't have much experience with it.
Actually the books cited above by Sivaram are excellent for numerical stuff. If you want "matrix calculus" then the following books might be helpful: * *Matrix Differential Calculus with Applications in Statistics and Econometrics by Magnus and Neudecker *Functions of Matrices by N. Higham *Calculus on Manifolds by Spivak Some classic, but very useful material can also be found in * *Introduction to Matrix Analysis by Bellman. As a simple example, the books will teach (unless you already know it) how to compute, say, the derivative of $f(X) = \log\det(X)$ for an invertible matrix $X$.
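For reference (this identity is not worked out in the post, but it is the standard result such books establish): for invertible $X$, $$d\,\log\det X = \operatorname{tr}\!\left(X^{-1}\,dX\right), \qquad \text{equivalently} \qquad \frac{\partial}{\partial X}\log\det X = (X^{-1})^{\mathsf T},$$ which reduces to $X^{-1}$ when $X$ is symmetric.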
Why is negative times negative = positive? Someone recently asked me why a negative $\times$ a negative is positive, and why a negative $\times$ a positive is negative, etc. I went ahead and gave them a proof by contradiction like so: Assume $(-x) \cdot (-y) = -xy$ Then divide both sides by $(-x)$ and you get $(-y) = y$ Since we have a contradiction, then our first assumption must be incorrect. I'm guessing I did something wrong here. Since the conclusion of $(-x) \cdot (-y) = (xy)$ is hard to derive from what I wrote. Is there a better way to explain this? Is my proof incorrect? Also, what would be an intuitive way to explain the negation concept, if there is one?
One way to picture this is to imagine a number line. Then rotate it $180^{\circ}$. Each number will now be superimposed over its negative: $-1$ will be where $+1$ was; $+2$ will be where $-2$ was. Rotation of the number line by $180^{\circ}$ is the equivalent of multiplying by $-1$. Now do the rotation twice. The number line is unchanged. So, multiplying by $-1$ twice is the same as multiplying by $+1$. This approach has applications with Complex numbers. In these scenarios, the number line is rotated $90^{\circ}$ counter clockwise to multiply by $i$. But that's another story.
Why is negative times negative = positive? Someone recently asked me why a negative $\times$ a negative is positive, and why a negative $\times$ a positive is negative, etc. I went ahead and gave them a proof by contradiction like so: Assume $(-x) \cdot (-y) = -xy$ Then divide both sides by $(-x)$ and you get $(-y) = y$ Since we have a contradiction, then our first assumption must be incorrect. I'm guessing I did something wrong here. Since the conclusion of $(-x) \cdot (-y) = (xy)$ is hard to derive from what I wrote. Is there a better way to explain this? Is my proof incorrect? Also, what would be an intuitive way to explain the negation concept, if there is one?
Why a negative times a negative is positive can be reduced to the question of why $(-1)\times(-1)=1$. The reason for that is because it is forced upon you by the other rules of arithmetic. $1 + (-1) = 0$ because of the definition of $-1$ as the additive inverse of $1$. Now multiply both sides by $-1$ to get $-1\,(1+(-1)) = 0$, because $0$ times anything is $0$. Use the distributive law to get: $(-1)\cdot 1 + (-1)\cdot(-1) = 0$. Now $(-1)\cdot 1 = -1$ because $1$ is the multiplicative identity. So we have $-1 + (-1)\cdot(-1) = 0$. Put $-1$ on the other side by adding $1$ to both sides to get $(-1)\cdot(-1) = 1$. So $-1 \times -1 = 1$. Now for any other negative numbers $x, y$ we have $x = (-1)|x|$ and $y= (-1)|y|$. So $x y = (-1)|x| \cdot (-1)|y| = (-1)(-1)\,|x|\,|y| = |xy|$, which is positive. Now that you know the reason it really doesn't make much difference in understanding. This question is not really that important. It's like asking why is $1$ raised to the $0$ power equal to $1$? Because that's forced upon you by other rules of exponents, etc. A lot of time is wasted on this. This is not the kind of problem kids should be thinking about.
Complex inequality $||u|^{p-1}u - |v|^{p-1}v|\leq c_p |u-v|(|u|^{p-1}+|v|^{p-1})$ How does one show for complex numbers u and v, and for p>1 that \begin{equation*} ||u|^{p-1}u - |v|^{p-1}v|\leq c_p |u-v|(|u|^{p-1}+|v|^{p-1}), \end{equation*} where $c_p$ is some constant dependent on p. My intuition is to use some version of the mean value theorem with $F(u) = |u|^{p-1}u$, but I'm not sure how to make this work for complex-valued functions. Plus there seems to be an issue with the fact that $F$ may not smooth near the origin. For context, this shows up in Terry Tao's book Nonlinear Dispersive Equations: Local and Global Analysis on pg. 136, where it is stated without proof as an "elementary estimate".
Suppose without loss of generality that $|u| \geq |v| > 0$. Then you can divide the equation through by $|v|^p$ and your task is to prove $||w|^{p-1}w - 1| \leq c_p|w - 1|(|w|^{p-1} + 1)$, where $w = u/v$. Note that $$||w|^{p-1}w - 1| = ||w|^{p-1}w - |w|^{p-1} + |w|^{p-1} - 1| $$ $$\leq ||w|^{p-1}w - |w|^{p-1}| + ||w|^{p-1} - 1|$$ Note the first term, which is $|w|^{p-1}|w - 1|$, is automatically bounded by your right hand side. So you're left trying to show that $||w|^{p-1} - 1|$ is bounded by your right hand side. For this it suffices to show that $$||w|^{p-1} - 1| \leq c_p\,\big||w| - 1\big|\, (|w|^{p-1} + 1)$$ (which is enough because $\big||w| - 1\big| \leq |w - 1|$). Since $|w| \geq 1$ by the assumption that $|u| \geq |v|$, it suffices to show that for all real $r \geq 1$ one has $$r^{p-1} - 1 \leq c_p(r - 1)(r^{p-1} + 1)$$ Now use the mean value theorem as you originally wanted to.
If $(x_{k})\to L$ and $\forall x_{i}\in (x_{k})$, $x_{i}$ is a subsequential limit of $a_{n}$ then I want to prove that: If $(x_{k})\to L$ and $\forall x_{i}\in (x_{k})$, $x_{i}$ is a subsequential limit of $a_{n}$ then $L$ is also a subsequential limit of $a_{n}$. I came up with the following: Let $\epsilon\gt0$; if $(x_{k})\to L$ then we simply pick $x_{i}\in(L-\epsilon, L+\epsilon)$ and because $x_{i}$ is a subsequential limit of $a_{n}$ we know that in every neighborhood of $L$ there are infinitely many elements of $a_{n}$, and we conclude that $L$ is also a subsequential limit of $a_{n}$. This seems a bit clumsy; is there a better way to show this? Perhaps with Bolzano-Weierstrass?
I'm as mystified as Jonas Meyer on why you think this is "clumsy". It follows exactly along the intuition: I can get arbitrarily close to $L$ using the $x_i$, and I can find subsequences of $(a_n)$ that get arbitrarily close to the $x_i$, so I can find subsequences that get arbitrarily close to things that get arbitrarily close. But perhaps what you want is some idea of which subsequence that might be? Well, we can get it done as follows: There is an $N_1$ such that if $k\geq N_1$, then $|x_k-L|\lt 1$. And since $x_{N_1}$ is the limit of a subsequence of $(a_n)$, there is an $n_1$ such that $|a_{n_1}-x_{N_1}|\lt 1$. In particular, $|a_{n_1}-L|\lt 2$. Now, there is an $N_2\gt N_1$ such that for all $k\geq N_2$, $|x_k-L|\lt\frac{1}{2}$. Since $x_{N_2}$ is the limit of a subsequence of $(a_n)$, there is an $n_2$, $n_2\gt n_1$, such that $|a_{n_2}-x_{N_2}|\lt \frac{1}{2}$; in particular, $|a_{n_2}-L|\lt 1$. Continue this way; assume that we have found $N_k$, $N_k\gt\cdots\gt N_1$ such that $|x_{N_i}-L|\lt \frac{1}{2^{i-1}}$, and $n_1\lt n_2\lt\cdots\lt n_k$ with $|a_{n_i}-x_{N_i}|\lt \frac{1}{2^{i-1}}$, so $|a_{n_i}-L|\lt \frac{1}{2^{i-2}}$. Then there is an $N_{k+1}\gt N_k$ such that for all $j\geq N_{k+1}$, $|x_{j}-L|\lt \frac{1}{2^k}$. Since $x_{N_{k+1}}$ is the limit of a subsequence of $(a_n)$, there is an $n_{k+1}\gt n_k$ such that $|a_{n_{k+1}}-x_{N_{k+1}}|\lt \frac{1}{2^k}$, and in particular $|a_{n_{k+1}}-L|\lt \frac{1}{2^{k-1}}$. Inductively, we get a subsequence $(a_{n_k})$ of $(a_n)$. I claim this subsequence converges to $L$. Let $\epsilon\gt 0$; find $k$ such that $0\lt \left(\frac{1}{2}\right)^{k-2}\lt \epsilon$. Then for all $\ell\geq k$ we have \begin{equation*} |a_{n_{\ell}} - L|\lt \frac{1}{2^{\ell-2}} \leq \frac{1}{2^{k-2}}\lt \epsilon. \end{equation*} Thus, the sequence converges to $L$, as claimed. QED Personally, I don't think this is particularly "elegant", but I don't think it is clumsy either. It is exactly the intuition: get very close to $L$ using the $x_i$, then get very close to $x_i$ using some $a_j$, and this gives you an $a_j$ that is very close to $L$. Just keep doing it and you get a subsequence converging to $L$.
Continuous function of one variable Let $f(x)$ be a continuous function on $\mathbb{R}$ which takes both positive and negative values. Prove that there exists an arithmetic progression $a, b, c$ ($a<b<c$) such that $f(a)+f(b)+f(c)=0$.
Let's argue like this: at some point $x$ we have $f(x)>0$; therefore, in the vicinity of this point there is an increasing arithmetic progression $a_{0}, \ b_{0}, \ c_{0}$ such that $f(a_{0})+f(b_{0})+f(c_{0})>0$. In the same way one can find an increasing arithmetic progression $a_{1}, \ b_{1}, \ c_{1}$ such that $f(a_{1})+f(b_{1})+f(c_{1})<0$. For every value of the parameter $t\in[0,1]$ consider the arithmetic progression $a(t), \ b(t), \ c(t)$, where $a(t)=a_{0}(1-t)+a_{1}t$, $b(t)=b_{0}(1-t)+b_{1}t$, $c(t)=c_{0}(1-t)+c_{1}t$ (a convex combination of two increasing arithmetic progressions is again an increasing arithmetic progression). The function $F(t)=f(a(t))+f(b(t))+f(c(t))$ depends continuously on $t$; at $t=0$ we have $F(t)>0$, and at $t=1$ we have $F(t)<0$. By the intermediate value theorem $F(t)=0$ for some $t$, and the corresponding progression $a(t), \ b(t), \ c(t)$ is the one required.