What is the difference between topological and metric spaces?
A metric space gives rise to a topological space on the same set (generated by the open balls in the metric). Different metrics can give the same topology. A topology that arises in this way is a metrizable topology. Using the topology we can define notions that are purely topological, like convergence, compactness, continuity, connectedness, dimension etc. Using the metric we can talk about other things that are more specific to metric spaces, like uniform continuity, uniform convergence and stuff like Hausdorff dimension, completeness etc, and other notions that do depend on the metric we choose. Different metrics that yield the same topology on a set can induce different notions of Cauchy sequences, e.g., so that the space is complete in one metric, but not in the other. In analysis e.g. one often is interested in both of these types of notions, while in topology only the purely topological notions are studied. In topology we can in fact characterize those topologies that are induced from metrics. Such topologies are quite special in the realm of all topological spaces. So in short: all metric spaces are also topological spaces, but certainly not vice versa.
Alchemist's problem Consider an alchemist who has many ($N$) sorts of ingredients in his possession. The initial amount of each ingredient is given by the vector $C^0=(C_1, C_2, \dots, C_N)$. The alchemist knows several ($M$) recipes for ingredient transmutation, expressed as a set $R=\{ R^1, R^2, \dots, R^M\}$. Each recipe $R^i$ is a vector that describes the reagents and the products: $R^i=(R^i_1, R^i_2, \dots, R^i_N)$ where, for $i \in [1 \dots M], j \in [1 \dots N]$, if $R^i_j$ is zero the $j$-th ingredient is not used in the transmutation; if it is positive, the ingredient appears in that quantity as a product of the transmutation; and if it is negative, that quantity of it is consumed. Thus, a single transmutation can be expressed as a vector sum: $C^1=C^0+R^i$, where $C^0$ is the supply before the transmutation, $C^1$ the supply after it, and $R^i$ the recipe used. Suppose also that there is a market where ingredients are traded. Market prices are fixed and described by a value vector $v=(v_1, v_2, \dots, v_N)$, so the value of the alchemist's supplies at the $k$-th step is the dot product $V^k=(C^k \cdot v)$. Question: given the initial supply of ingredients $C^0$, the book of recipes $R$ and the market prices $v$, how can the alchemist find a sequence of $L$ transmutations $S=(S^1, S^2, \dots, S^L), \forall t : S^t \in R$, such that the value $V^L=(C^L \cdot v)$ of the final set of products $C^L=C^0+S^1+S^2+\dots + S^L$ is maximal?
Since the market prices are fixed and $V^L = C^0 \cdot v + S^1 \cdot v + S^2 \cdot v + \ldots + S^L \cdot v$, you can also assign a value $R^i \cdot v$ to each recipe and use Dijkstra's algorithm. The exact implementation works as follows:

* A vertex in the graph is represented by some choice of $n \leq L$ recipes. (You can identify two vertices if the same recipes were chosen in a possibly different order.)

* Two vertices are connected by a directed edge if it is possible to go from one vertex to the other by applying one recipe. At this point, one has to take into account that $C^i_k \geq 0$: if we run out of an ingredient, a recipe consuming it cannot be used.

* Each edge gets a weight of $O - S \cdot v$, where $O$ is an offset such that all weights are positive (otherwise Dijkstra won't work here) and $S$ is the recipe applied along the edge.

* Identify all vertices that are $L$ steps away from $C^0$ and make them the target vertices.

* Let Dijkstra's algorithm find the shortest way. Its length is $L \cdot O + C^0 \cdot v - V^L$, so the shortest path yields the maximal $V^L$.
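For small instances the search is easy to sanity-check by brute force. Below is a minimal Python sketch with made-up toy recipes and prices (all data is hypothetical) that enforces the non-negativity constraint discussed above; for realistic sizes you would replace the exhaustive enumeration by the graph formulation above or by dynamic programming over reachable supply vectors.

```python
from itertools import product

# Hypothetical toy data: 3 ingredients, 2 recipes, fixed market prices.
C0 = (4, 0, 0)                       # initial supplies
recipes = [(-2, 1, 0), (0, -1, 3)]   # per-ingredient change of each recipe
v = (1, 2, 5)                        # market prices
L = 3                                # number of transmutations allowed

def value(c):
    """Market value of a supply vector: the dot product C . v."""
    return sum(ci * vi for ci, vi in zip(c, v))

best_value, best_seq = value(C0), ()
for seq in product(range(len(recipes)), repeat=L):
    c = list(C0)
    feasible = True
    for idx in seq:
        c = [ci + ri for ci, ri in zip(c, recipes[idx])]
        if any(ci < 0 for ci in c):   # ran out of an ingredient
            feasible = False
            break
    if feasible and value(c) > best_value:
        best_value, best_seq = value(c), seq

print("best final value:", best_value, "via recipe sequence", best_seq)
```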
Arrangement of six triangles in a hexagon You have six triangles. Two are red, two are blue, and two are green. How many truly different hexagons can you make by combining these triangles? I have two possible approaches to solving this question:

* In general, you can arrange $n$ objects, of which $a$ are of type one, $b$ are of type two, and $c$ are of type three, in $\frac{n!}{a! \cdot b! \cdot c!}$ ways. In this case, $n = 6$, and $a = b = c = 2$. There are 90 possible ways to arrange the six triangles. However, the triangles are in a circle, which means that six different arrangements are really one truly different arrangement. Division by six results in 15 possible hexagons.

* It is possible to enumerate every different arrangement, and count how many truly different arrangements you can make. There are six different ways to arrange the triangles with the two red triangles next to each other. There are six different ways to arrange the triangles with one triangle between the two red triangles. There are four different ways to arrange the triangles with two triangles between the two red triangles. This results in 16 possible hexagons.

I also wrote a simple computer program that tries every possible combination and counts how many are different, and it confirms the answer 16. It turns out that the second approach is the right one, and 16 is the right answer. I can enumerate 16 different hexagons that are all different. Now my question is, what is wrong with the first approach? Where is the error? Remarks: When you arrange the 16 different hexagons in lines, you can create six different arrangements for each hexagon, but this results in doubles. There are fewer than 96 different arrangements in a line. This does not contradict the first approach, in which there are no doubles.
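For reference, a brute-force count of this kind can look like the following Python sketch; it treats two hexagons as the same exactly when one is a rotation of the other (reflections count as different, matching the division by six in the first approach):

```python
from itertools import permutations

def canonical(arrangement):
    """Smallest rotation of the tuple, so all rotated copies compare equal."""
    n = len(arrangement)
    return min(arrangement[i:] + arrangement[:i] for i in range(n))

triangles = ('R', 'R', 'B', 'B', 'G', 'G')
distinct = {canonical(p) for p in permutations(triangles)}
print(len(distinct))  # 16
```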
Determine which of the following mappings F are linear I'm having a really hard time understanding how to figure out if a mapping is linear or not. Here is my homework question: Determine which of the following mappings F are linear. (a) $F: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $F(x,y,z) = (x, z)$ (b) $F: \mathbb{R}^4 \to \mathbb{R}^4$ defined by $F(X) = -X$ (c) $F: \mathbb{R}^3 \to \mathbb{R}^3$ defined by $F(X) = X + (0, -1, 0)$ Sorry about my formatting. I'm not sure how to write exponents and the arrow showing that the mapping is from R^n to R^m. Any help is greatly appreciated!!
To check if a mapping is linear in general, all you need to do is verify the two properties:

* $f(x+y) = f(x) + f(y)$

* $f(\alpha x) = \alpha f(x)$

The above two can be combined into one property: $f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$. Edit For instance, if we want to show that $F(x) = f(x_1,x_2,x_3) = x_1 - 4x_2 + x_3$ is linear, where $x = (x_1,x_2,x_3)$, then all you need to do is as follows. \begin{align*} F(\alpha x + \beta y) & = f(\alpha x_1 + \beta y_1,\alpha x_2 + \beta y_2,\alpha x_3 + \beta y_3) \\ & = (\alpha x_1 + \beta y_1) - 4(\alpha x_2 + \beta y_2) + (\alpha x_3 + \beta y_3)\\ & = (\alpha x_1 - 4 \alpha x_2 + \alpha x_3) + (\beta y_1 - 4 \beta y_2 + \beta y_3)\\ & = \alpha (x_1 - 4 x_2 + x_3) + \beta (y_1 - 4 y_2 + y_3)\\ & = \alpha f(x_1,x_2,x_3) + \beta f(y_1,y_2,y_3)\\ & = \alpha F(x) + \beta F(y)\\ \end{align*} Hence the above function is linear. EDIT As Arturo points out, problem (c) is not a linear map because of the constant hanging around. Such maps are called affine maps. Affine maps are those for which $f(x) - f(0)$ is a linear map.
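If you want a quick numerical sanity check of the three maps from the question, a sketch like the following works; note that a random test can only expose non-linearity, it can never prove linearity:

```python
import numpy as np

rng = np.random.default_rng(0)

F_a = lambda X: np.array([X[0], X[2]])           # (x, y, z) -> (x, z)
F_b = lambda X: -X                                # X -> -X
F_c = lambda X: X + np.array([0.0, -1.0, 0.0])    # X -> X + (0, -1, 0)

def looks_linear(F, dim, trials=100):
    """Test F(a*x + b*y) == a*F(x) + b*F(y) on random inputs."""
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        a, b = rng.normal(), rng.normal()
        if not np.allclose(F(a * x + b * y), a * F(x) + b * F(y)):
            return False
    return True

print(looks_linear(F_a, 3))  # True
print(looks_linear(F_b, 4))  # True
print(looks_linear(F_c, 3))  # False: the constant term makes it affine, not linear
```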
Using congruences, show $\frac{1}{5}n^5 + \frac{1}{3}n^3 + \frac{7}{15}n$ is an integer for every $n$ Using congruences, show that the following is always an integer for every integer value of $n$: $$\frac{1}{5}n^5 + \frac{1}{3}n^3 + \frac{7}{15}n.$$
Let's show that $P(n)=3n^5+5n^3+7n$ is divisible by 15 for every $n$; since the given expression equals $P(n)/15$, this is exactly what is needed. To do this, we will show that it is divisible by $3$ and by $5$ for every $n$. Recall that for a prime $p$, $x^p\equiv x \pmod{p}$ (Fermat's Little Theorem). Then, looking modulo 5 we see that $$P(n)\equiv 3n^5+7n\equiv 3n+7n=10n\equiv 0.$$ Now looking modulo 3 we see that $$P(n)\equiv 5n^3+7n\equiv 5n+7n=12n\equiv 0.$$ Thus $P(n)$ is divisible by 15 for every $n$, as desired.
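A quick numerical check of the statement, just to build confidence (the Fermat argument above is the actual proof):

```python
# 3n^5 + 5n^3 + 7n divisible by 15 is the same as n^5/5 + n^3/3 + 7n/15 being an integer.
assert all((3 * n**5 + 5 * n**3 + 7 * n) % 15 == 0 for n in range(-1000, 1001))
print("holds for all |n| <= 1000")
```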
Can a contractible subspace be ignored/collapsed when computing $\pi_n$ or $H_n$? Motivation: I took this for granted for a long time, as I thought collapsing a contractible subspace does not change the homotopy type. Now it seems that this is only true for a CW pair...
Let me note a general fact: if the inclusion $A \hookrightarrow X$ (for $A$ a closed subspace) is a cofibration, and $A$ is contractible, then the map $X \to X/A$ is a homotopy equivalence. See Corollary 5.13 in chapter 1 of Whitehead's "Elements of homotopy theory."
Is there a definition of determinants that does not rely on how they are calculated? In the few linear algebra texts I have read, the determinant is introduced in the following manner: "Here is a formula for what we call $\det A$. Here are some other formulas. And finally, here are some nice properties of the determinant." For example, in very elementary textbooks it is introduced by giving the co-factor expansion formula. In Axler's "Linear Algebra Done Right" it is defined, for $T\in L(V)$, to be $(-1)^{\dim V}$ times the constant term in the characteristic polynomial of $T$. However I find this somewhat unsatisfactory. It's like the real definition of the determinant is hidden. Ideally, wouldn't the determinant be defined in the following manner: "Given a matrix $A$, let $\det A$ be an element of $\mathbb{F}$ such that x, y and z." Then one would proceed to prove that this element is unique, and derive the familiar formulae. So my question is: Does a definition of the latter type exist, i.e., is there some minimal set of properties sufficient to define what a determinant is? If not, can you explain why?
Let $B$ be a basis of a vector space $E$ of dimension $n$ over $\Bbbk$. Then $\det_B$ is the unique alternating $n$-multilinear form on $E$ with $\det_B(B) = 1$. An $n$-multilinear form is a map from $E^n$ to $\Bbbk$ which is linear in each variable. An alternating $n$-multilinear form is a multilinear form which satisfies, for all $i,j$, $$ f(x_1,x_2,\dots,x_i,\dots, x_j, \dots, x_n) = -f(x_1,x_2,\dots,x_j,\dots, x_i, \dots, x_n) $$ In plain English, the sign of the form changes when you swap two arguments. This explains why the closed formula for the determinant is the big sum over permutations.
Meaning of $\mathbf{C}^{0}$? My book introduces $\mathbf{C}^{\infty}$ as the subspace of $F(\mathbb{R},\mathbb{R})$ that consists of 'smooth' functions, that is, functions that are differentiable infinitely many times. It then asks me to tell whether or not $\mathbf{C}^{0}=\{f\in F(\mathbb{R},\mathbb{R})$ such that $f$ is continuous$\}$ is a subspace. Is $\mathbf{C}^{0}$ a collection of non-differentiable (because of the 0 as opposed to $\infty$) functions that are continuous? Does it have any elements then? Thanks a lot for clarifying the confusion!
Typically, $C^{0}(\Omega)$ denotes the space of functions which are continuous over $\Omega$. The higher derivatives may or may not exist. $C^{\infty}(\Omega) \subset C^{0}(\Omega)$ since if the function is infinitely "smooth" it has to be continuous. Typically, people use the notation $C^{(n)}(\Omega)$ where $n \in \mathbb{N}$. $f(x) \in C^{(n)}(\Omega)$ means that $f(x)$ has $n$ derivatives in the entire domain ($\Omega$ denotes the domain of the function) and the $n^{th}$ derivative of $f(x)$ is continuous, i.e. $f^{(n)}(x)$ is continuous. By convention, $C^{(0)}(\Omega)$ denotes the space of continuous functions. $f(x) \in C^{(\infty)}(\Omega)$ if the function is differentiable any number of times. For instance, $e^{x} \in C^{(\infty)}(\mathbb{R})$. An example to illustrate is the following function $f: \mathbb{R} \rightarrow \mathbb{R}$: $f(x) = 0$ for $x \leq 0$, $f(x) = x^2$ for $x>0$. This function is in $C^{(1)}(\mathbb{R})$ but not in $C^{(2)}(\mathbb{R})$. Also, when the domain of the function is the largest set over which the function definition makes sense, then we omit $\Omega$ and write $f \in C^{(n)}$, the domain being understood as the largest set over which the function definition makes sense. Also, note the obvious embedding $C^{(n)} \subseteq C^{(m)}$ whenever $n>m$. For "most" functions, if a function is differentiable $n$ times it is $C^{(n)}$. However, there are functions for which the derivative might exist but the derivative is not continuous. Some people might argue that the ramp function, $f(x) = 0$ when $x<0$ and $f(x) = x$ when $x \geq 0$, has a derivative everywhere but the derivative is not continuous. That is incorrect: the derivative doesn't even exist at $x=0$, so the ramp function is not even differentiable in the first place. Let us take a look at the function $f(x) = x^2 \sin(\frac{1}{x})$. The first question is "Is the function even in $C^{(0)}$?" The answer is "not yet", since the function is ill-defined at the origin. However, if we define $f(0) = 0$, then yes, the function is in $C^0$. This can be seen from the fact that $\sin(\frac{1}{x})$ is bounded and hence the function is bounded above by $x^2$ and below by $-x^2$. So as we go towards $0$, the function is bounded by functions which themselves tend to $0$. Hence the limit is $0$ and thereby the function is continuous. Now, the next question: "Is the function differentiable everywhere?" It is obvious that the function is differentiable everywhere except at $0$. At $0$, we need to pay a little attention. If we were to blindly differentiate $f(x)$ using the conventional formulas, we would get $g(x) = f'(x) = 2x \sin(\frac{1}{x}) + x^2 \times \frac{-1}{x^2} \cos(\frac{1}{x})$. Now $g(x)$ is ill-defined for $x=0$. Further, $\displaystyle \lim_{x \rightarrow 0} g(x)$ doesn't exist. This is what we get if we use the formula. So can we say that $f(x)$ is not differentiable at the origin? Well, no! All we can say is that $g(x)$ is discontinuous at $x=0$. So what about the derivative at $x=0$? Well, as I always prefer to do, get back to the definition of $f'(0)$. $f'(0) = \displaystyle \lim_{\epsilon \rightarrow 0} \frac{f(\epsilon) - f(0)}{\epsilon} = \displaystyle \lim_{\epsilon \rightarrow 0} \frac{\epsilon^2 \sin(\frac{1}{\epsilon})}{\epsilon} = \displaystyle \lim_{\epsilon \rightarrow 0} \epsilon \sin(\frac{1}{\epsilon}) = 0$ (since $|\sin(\frac{1}{\epsilon})| \leq 1$, so it is bounded).
So we find that the function $f(x)$ has a derivative at the origin whereas the function $g(x) = f'(x)$, $\forall x \neq 0$ is not continuous or even well-defined at the origin. So we have this function whose derivative exists everywhere but then $f(x) \notin C^{(1)}$ since the derivative is not continuous at the origin.
Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has the intermediate value property and is discontinuous everywhere? I think it is probable because we can consider $$ y = \begin{cases} \sin \left( \frac{1}{x} \right), & \text{if } x \neq 0, \\ 0, & \text{if } x=0. \end{cases} $$ This function has the intermediate value property but is discontinuous at $x=0$. Inspired by this example, let $(r_n)$ be an enumeration of the rational numbers, and define $$ y = \begin{cases} \sum_{n=1}^{\infty} \frac{1}{2^n} \left| \sin \left( \frac{1}{x-r_n} \right) \right|, & \text{if } x \notin \mathbb{Q}, \\ 0, & \mbox{if }x \in \mathbb{Q}. \end{cases} $$ It is easy to see this function is discontinuous if $x$ is not a rational number. But I can't verify its intermediate value property.
Sure. The class of functions satisfying the conclusion of the Intermediate Value Theorem is actually vast and well-studied: such functions are called Darboux functions in honor of Jean Gaston Darboux, who showed that any derivative is such a function (the point being that not every derivative is continuous). A standard example of an everywhere discontinuous Darboux function is Conway's base 13 function. (Perhaps it is worth noting that the existence of such functions is not due to Conway: his is just a particularly nice, elementary example. I believe such functions were already well known to Rene Baire, and indeed possibly to Darboux himself.)
Solving short trigo equation with sine - need some help! From the relation $M=E-\epsilon\cdot\sin(E)$, I need to find the value of E, knowing the two other parameters. How should I go about this? This is part of a computation which will be done quite a number of times per second. I hope there's a quick way to get E out of this equation. Thank you very much, MJ
I assume $\epsilon$ is a small quantity and propose one of the following: (a) Write your equation in the form $E=M+\epsilon \sin(E)=: f(E)$ and consider this as a fixed point problem for the function $f$. Starting with $E_0:=M$ compute numerically successive iterates $E_{n+1}:=f(E_n)$; these will converge to the desired solution of the given equation. (b) $E$ depends in an analytic way on the parameter $\epsilon$. Make the "Ansatz" $E:=M +\sum_{k=1}^\infty a_k \epsilon^k$ and determine the coefficients $a_k$ recursively. You will find $a_1=\sin(M)$, $a_2=\cos(M)\sin(M)$ and so on.
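Approach (a) is only a few lines of code; this is Kepler's equation, and in the Python sketch below the tolerance and iteration cap are arbitrary choices of mine:

```python
import math

def solve_E(M, eps, tol=1e-12, max_iter=100):
    """Solve M = E - eps*sin(E) for E by fixed-point iteration E <- M + eps*sin(E)."""
    E = M                              # starting guess E_0 = M
    for _ in range(max_iter):
        E_next = M + eps * math.sin(E)
        if abs(E_next - E) < tol:
            return E_next
        E = E_next
    return E                           # for small eps it converges well before max_iter

E = solve_E(M=1.0, eps=0.1)
print(E, E - 0.1 * math.sin(E))        # the second number reproduces M = 1.0
```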
Chain rule for multi-variable functions So I have been studying the multi-variable chain rule. Most importantly, and this is what I must have overlooked, is it's not always clear to me how to see which variables are functions of other variables, so that you know when to use the chain rule. For example, if you have: $$ x^2+y^2-z^2+2xy=1 $$ $$ x^3+y^3-5y=8 $$ In general, say we want to find $\frac{dz}{dt}$ but $z$ is a function of $x$, then we get: $$ \frac{dz}{dt} = \frac{dz}{dx} \frac{dx}{dt} .$$ And if $z$ is a function of both $y$ and $t$, we get: $$ \frac{dz}{dt} = \frac{dz}{dx} \frac{dx}{dt} + \frac{dz}{dy} \frac{dy}{dt}$$ In this case, we have two equations. One involving all three variables $x,y,z$ and one involving just $x,y$. Say we want to find $\frac{dz}{dx}$. What does this mean for this case? How should we interpret this rule in general?
If we have an explicit function $z = f(x_1,x_2,\ldots,x_n)$, then $$\displaystyle \frac{dz}{dt} = \frac{\partial z}{\partial x_1} \frac{dx_1}{dt} + \frac{\partial z}{\partial x_2} \frac{dx_2}{dt} + \cdots +\frac{\partial z}{\partial x_n} \frac{dx_n}{dt}$$ If we have an implicit function $f(z,x_1,x_2,\ldots,x_n) = 0$, then $$\displaystyle \frac{\partial f}{\partial z} \frac{dz}{dt} + \frac{\partial f}{\partial x_1} \frac{dx_1}{dt} + \frac{\partial f}{\partial x_2} \frac{dx_2}{dt} + \cdots +\frac{\partial f}{\partial x_n} \frac{dx_n}{dt} = 0$$ $$\displaystyle \frac{dz}{dt} = - \frac{ \frac{\partial f}{\partial x_1} \frac{dx_1}{dt} + \frac{\partial f}{\partial x_2} \frac{dx_2}{dt} + \cdots +\frac{\partial f}{\partial x_n} \frac{dx_n}{dt}}{\frac{\partial f}{\partial z} }$$ In the first example, \begin{align*} \displaystyle x^2 + y^2 - z^2 + 2xy & = 1\\ \displaystyle 2x\frac{dx}{dt} + 2y\frac{dy}{dt} - 2z\frac{dz}{dt} + 2y\frac{dx}{dt} + 2x\frac{dy}{dt} & = 0\\ \displaystyle (x+y)\frac{dx}{dt} + (x+y)\frac{dy}{dt} & = z\frac{dz}{dt}\\ \displaystyle \frac{dz}{dt} & = \frac{(x+y)\left(\frac{dx}{dt} + \frac{dy}{dt}\right)}{z} \end{align*} In the second example, \begin{align*} \displaystyle x^3 + y^3 - 5y & = 8\\ \displaystyle 3x^2 \frac{dx}{dt} + 3y^2 \frac{dy}{dt} - 5 \frac{dy}{dt} & = 0\\ \displaystyle \frac{dy}{dt} & = \frac{3x^2}{5-3y^2} \frac{dx}{dt} \end{align*}
Statistics: Predict 90th percentile with small sample set I have a quite small data set (on the order of 8-20) from an essentially unknown system and would like to predict a value that will be higher than the next number generated by the same system 90% of the time. Both underestimation and overestimation are problematic. What is the mathematically "correct" way to do this? If I could also generate a level-of-confidence estimate, it would wow my manager. Also, let me say I'm not a math major, so thanks for any help, however remedial it may be :)
This is where the technique of "Bootstrap" comes in extremely handy. You do not need to know anything about the underlying distribution. Your question fits in perfectly for a good example of "Bootstrap" technique. The bootstrap technique would also let you determine the confidence intervals. Bootstrap is very elementary to implement on computer and can be done really quick. The typical number of bootstrap samples you take is around $100-200$. Go through the wiki page and let me know if you need more information on "Bootstrap" technique and I am willing to help you out. The book by Bradley Efron covers this technique from an application point of view in great detail. The bootstrap algorithm for estimating standard errors is explained on Page $47$, Algorithm $6.1$. You can use this algorithm to construct confidence intervals and finding the quantiles.
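For concreteness, a rough sketch of what the percentile bootstrap can look like in Python/NumPy; the data below is made up, and the choices of 200 resamples and of the sample 90th percentile as the statistic are assumptions you can adjust:

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.array([12.1, 9.8, 14.3, 11.0, 10.5, 13.7,
                 12.9, 9.2, 15.1, 11.8, 10.9, 13.2])   # your 8-20 observations

B = 200                                                 # number of bootstrap resamples
boot_stats = np.array([
    np.percentile(rng.choice(data, size=len(data), replace=True), 90)
    for _ in range(B)
])

estimate = np.percentile(data, 90)                      # point estimate from the raw sample
lo, hi = np.percentile(boot_stats, [5, 95])             # crude 90% confidence interval
print(f"90th percentile ~ {estimate:.2f}, bootstrap 90% CI ({lo:.2f}, {hi:.2f})")
```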
What should this kind of probability be called? I have $m$ consecutive integer points on a line, and I select $n$ points uniformly at random from the $m$ points without replacement. Order the points ascendingly. Let the random variable $A_i$ be the position (coordinate on the line) of the $i$th point. So, $$P(A_i=k)=\frac{{k-1\choose i-1} {m-k \choose n-i}}{{m \choose n}} $$ How do I derive a tail inequality for this probability? The tail probability should look something like this: $$P(|A_i - E(A_i)| > t) < \sigma$$ I want the bound ($\sigma$) to be as tight as possible. The Chebyshev inequality is too loose. Updated: Some supplement about the question: http://www.randomservices.org/random/urn/OrderStatistics.pdf
Edit: See Didier's comment below. The binomial coefficients are "upside down" and so what's written below is meaningless. It is worthwhile, however, to see which tools are used to obtain tail estimates on the hypergeometric distribution, to get some ideas. Perhaps all they do is use Stirling's approximation and integrate it. Your distribution is very close to a hypergeomtric distribution (as noted in an earlier version of the question). In fact, it is related to it via a factor of $i/k$. So tail estimates for it should transfer to tail estimates for your distribution.
Simultaneous equations, trig functions and the existence of solutions Came across this conundrum while going over the proof that $$A \cdot \sin(bx) + B \cdot \cos(bx) = C \cdot \sin(bx + k)$$ for some numbers $C$ and $k$. ($A$, $B$ and $b$ are known.) The usual method is to expand the RHS using the compound angle identity \begin{align} C \cdot \sin(bx + k) &= C \cdot \bigl( \sin(bx)\cos(k) + \cos(bx)\sin(k) \bigl) \\ &= C\cos(k) \cdot \sin(bx) + C\sin(k) \cdot \cos(bx) \end{align} and thus set \begin{align} C\cos(k) &= A \\ C\sin(k) &= B \end{align} My trouble comes with what happens at this point - we then proceed to divide the second equation by the first, obtaining $$ \tan(k) = \frac{B}{A} $$ which we then solve to obtain $k$, etc. etc. My question is: how do we know that this is "legal"? We have reduced the original two-equation system to a single equation. How do we know that the values of $k$ that satisfy the single equation are equal to the solution set of the original system? While thinking about this I drew up this other problem: \begin{align} \text{Find all }x\text{ such that} \\ \sin(x) &= 1 \\ \cos(x) &= 1 \end{align} Obviously this system has no solutions ($\sin x$ and $\cos x$ are never equal to $1$ simultaneously). But if we apply the same method we did for the earlier example, we can say that since $\sin(x) = 1$ and $\cos(x) = 1$, let's divide $1$ by $1$ and get $$ \tan(x) = 1 $$ which does have solutions. So how do we know when it's safe to divide simultaneous equations by each other? (If ever?)
In general, you don't. You know that all of the solutions of the pair of equations you started with are solutions of the single equation you ended up with (barring division-by-zero issues), but you generally don't know the converse. In this case, the reason you can get away with the converse is that you can choose $C$. Knowing $\tan k$ is the same as knowing $(\cos k, \sin k)$ up to a multiplicative constant; draw a unit circle if you don't believe this. In general, the only way you know whether it is "legal" to do anything is to prove or disprove that you can do it.
Prove that if $A^2=0$ then $A$ is not invertible Let $A$ be $n\times n$ matrix. Prove that if $A^2=\mathbf{0}$ then $A$ is not invertible.
Well I've heard that the more ways you can prove something, the merrier. :) So here's a sketch of the proof that immediately came to mind, although it may not be as snappy as some of the other good ones here: Let's prove the contrapositive, that is if $A$ is invertible then $A^2 \neq 0$. If $A$ is invertible then we can write it as a product of elementary matrices, $$A = E_n...E_1I$$ Then $A^2$ can be written as $$AA = (E_n...E_1I)(E_n...E_1I) = (E_n...E_1 E_n...E_1)I$$ which is a sequence of elementary row operations on the identity matrix. But this will never produce the zero matrix $0$. QED.
Reaching all possible simple directed graphs with a given degree sequence with 2-edge swaps Starting with a given simple, directed graph G, I define a two-edge swap as:

* select two edges u->v and x->y such that (u!=x) and (v!=y) and (u!=y) and (x!=v)

* delete the two edges u->v and x->y

* add edges u->y and x->v

Is it guaranteed that I can reach any simple directed graph with the original (in- and out-) degree sequence in some finite number of 2-edge swaps? If we need some sort of 3-edge swaps, what are they? Background: I intend to use this as MCMC steps to sample random graphs, but over at the Networkx Developer site, there is a discussion that Theorem 7 of the paper P Erdos et al., "A simple Havel–Hakimi type algorithm to realize graphical degree sequences of directed graphs", Combinatorics 2010 implies that we need 3-edge swaps to sample the whole space.
The question is whether a triple swap is necessary or not. One of the examples in the paper is the directed cycle on three nodes (i->j), (j->k), (k->i). Obviously, another graph with the same degree sequence is the one in which all directions are reversed: (i <- j), (j <- k), (k <- i). It is, however, not possible to get from the first to the second graph if you do not allow for self-loops: there are no two edges whose swap is allowed under this condition. At first I thought that there cannot be an example of this in larger graphs, but actually there are arbitrarily large graphs with the same problem (under the condition of no multiple edges and self-loops): again, start with the directed triangle; add any number of nodes that are connected to all other nodes by bi-directional edges. Thus, the only edges that are flexible are the ones in the triangle and, again, all of their edges can be reversed to give a graph with the same degree sequences, but no sequence of edge-swaps can achieve it. It is obvious that the family of graphs described here is very much constrained, but there may be others with similar problems. Thus: there are directed graphs which need the triple-swap so that all graphs with the same degree sequences but without multiple edges and self-loops can be sampled u.a.r.
Proof by contradiction: $r - \frac{1}{r} =5\Longrightarrow r$ is irrational? Prove that any positive real number $r$ satisfying $r - \frac{1}{r} = 5$ must be irrational. Assuming for contradiction that $r$ is rational, we set $r= a/b$, where $a,b$ are positive integers, and substitute: $\begin{align*} &\frac{a}{b} - \frac{1}{a/b}\\ &\frac{a}{b} - \frac{b}{a}\\ &\frac{a^2}{ab} - \frac{b^2}{ab}\\ &\frac{a^2-b^2}{ab} \end{align*}$ I am unsure what to do next?
To complete your solution, note that you can, without loss of generality, take $a$ and $b$ to be coprime. Setting $\frac{a^2-b^2}{ab}=5$ gives $a^2=b^2+5ab=b(b+5a)$. Hence $a$ divides $b(b+5a)$. Euclid's lemma now tells you that $a$ divides $b+5a$ (because $a$ and $b$ are coprime). But then $a$ must divide $b$, which together with coprimality forces $a=1$; then $r=1/b$ and the equation becomes $1-b^2=5b$, which has no solution in positive integers. This is the desired contradiction. Here's an alternative: transform the equation into $r^2-5r-1=0$. What are the real (if any) roots of this equation? The quadratic formula gives you: $r_{1,2}=5/2\pm \sqrt{29}/2$. Since there are at most two different roots for a quadratic polynomial in $\mathbb{R}$, these are the roots. So your problem comes down to showing that $\sqrt{29}$ is irrational. (In fact, you can prove that the square root of any non-perfect-square integer is irrational. This is the number-theoretic part.)
Algorithm complexity in for loop I have an algorithm and I would like to know how many times each line is called. I wrote down the counts for the lines I understand, and some lines are left.

    j := 1                      ---- 1 time
    while j < n do              ---- n times
        x := B[j]               ---- n-1 times
        k := j                  ---- n-1 times
        for i := j + 1 to n do  ---- ?
            if B[i] < x then    ---- ?
                x := B[i]       ---- ?
                k := i          ---- ?
        B[k] := B[j]            ---- n-1 times
        B[j] := x               ---- n-1 times
        j := j + 1              ---- n-1 times

Note: k := i means that line is counted once per execution, but for i := j + 1 to n do counts twice per pass: the assignment and the comparison.
Hint: You enter the for loop n-1 times, as shown by the count on the line above it. Then how many iterations do you do? It should be something like n-j, but you have to figure out the ends: there may be a +1 or -1 or something. Does your definition of for execute the body with i=n or not? So for each of the n-1 values of j, the for line is executed 2*(something like n-j) times; sum these to get the total. Given the if statement, you can't say for sure how many times the lines within it are executed. All you can do is find a maximum, assuming the if always evaluates as true.
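If it helps, you can also count the executions empirically. The Python sketch below instruments the pseudocode from the question; the for-line counter counts one entry into the body per iteration, so double it if you follow the question's convention of counting the assignment and the comparison separately:

```python
from collections import Counter

def count_lines(B):
    n = len(B) - 1                 # 1-based indexing: the data lives in B[1..n]
    hits = Counter()
    j = 1;                         hits['j := 1'] += 1
    while True:
        hits['while j < n'] += 1   # counts the final failing test too, hence n in total
        if not j < n:
            break
        x = B[j];                  hits['x := B[j]'] += 1
        k = j;                     hits['k := j'] += 1
        for i in range(j + 1, n + 1):
            hits['for i'] += 1
            hits['if B[i] < x'] += 1
            if B[i] < x:
                x = B[i];          hits['x := B[i]'] += 1
                k = i;             hits['k := i'] += 1
        B[k] = B[j];               hits['B[k] := B[j]'] += 1
        B[j] = x;                  hits['B[j] := x'] += 1
        j += 1;                    hits['j := j + 1'] += 1
    return hits

print(count_lines([None, 5, 3, 4, 1, 2]))   # n = 5: the for body runs 4+3+2+1 = 10 times
```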
How to Prove the following Function Properties Definition: F is a function iff F is a relation and $(x,y) \in F$ and $(x,z) \in F \implies y=z$. I'm reading Introduction to Set Theory by Monk, J. Donald (James Donald), 1930, and I came across Theorem 4.10. Theorem 4.10 (ii) $0:0 \to A$; if $F : 0 \to A$, then $F=0$. (iii) If $F:A\to 0$, then $A=F=0$. The book has just explained the concept of a function and is now stating these properties. I am stuck on what this actually means and how to prove it. Maybe you can give me a hint. Thanks ahead.
Consider the function $F\colon 0\to A$, suppose there is some $\langle x,y\rangle\in F$. This means that $x\in dom F$, since we have $dom F = 0$ then $x\in 0$ which is a contradiction. Therefore there are no ordered pairs in $F$, from the fact that it is a function we know that there are not other elements in $F$. If so, we proved $F=0$. The same proof goes for the other statement. Edit: An alternative method is by cardinal arithmetic: $|F|=|dom F| \le |dom F|\times|rng F|$ The first equality is simply by projection $\langle x,F(x)\rangle \mapsto x$, where the second is by the identity map. From this, suppose $dom F = 0$ then $F=0$ and suppose $rng F=0$ then $F=0$ and $dom F=0$.
What are the steps to solve this simple algebraic equation? This is the equation that I use to calculate a percentage margin between cost and sales prices, where $x$ = sales price and $y$ = cost price: \begin{equation} z=\frac{x-y}{x}*100 \end{equation} This can be solved for $x$ to give the following equation, which calculates sales price based on cost price and margin percentage: \begin{equation} x=\frac{y}{1-(\frac{z}{100})} \end{equation} My question is, what are the steps involved in solving the first equation for $x$? It's been 11 years since I last did algebra at school and I can't seem to figure it out. I'm guessing the first step is to divide both sides by $100$ like so: \begin{equation} \frac{z}{100}=\frac{x-y}{x} \end{equation} Then what? Do I multiply both sides by $x$? If so how to I reduce the equation down to a single $x$?
$$ z = 100 \cdot \frac{x-y}{x}$$ $$ zx = 100(x-y)$$ $$zx - 100x = -100y$$ $$x(z-100) = -100y$$ $$x = -\frac{100y}{z-100}$$ Then divide both numerator and denominator by $-100$ to get $$x = \frac{y}{1-(\frac{z}{100})}$$
What kind of matrices are orthogonally equivalent to themselves? A matrix $A \in R^{n\times n}$ is said to be orthogonally equivalent to $B\in R^{n\times n}$ if there is an orthogonal matrix $U\in R^{n\times n}$, $U^T U=I$, such that $A=U^T B U$. My question is what kind of matrices are orthogonally equivalent to themselves? i.e., $A=U^T A U$ A similar interesting question is: if $$U^T \Lambda U=\Lambda $$ where $\Lambda$ is a diagonal matrix and $U$ is a orthogonal matrix, are the diagonal entries of $\Lambda$ equal? That is whether $\Lambda=kI$. Look forward to your opinion. Thank you very much. Shiyu
The family of matrices $U^{T}BU$, where $B$ is a fixed, positive definite matrix in $\mathbb{R}^{n\times n}$ and $U$ varies over the orthogonal group $O(n)$, is obtained by rigidly rotating and reflecting the eigenvectors of $B$. The matrix $B$ is invariant under such a transformation iff its eigenspaces are preserved. Even if there are $n$ distinct eigenvalues (so that all eigenspaces are $1$-dimensional), there are $2^n$ discrete choices for $U$ that preserve $B$: namely, reflections of any subset of the eigenvectors. Note that these form a discrete subgroup of $O(n)$ under matrix multiplication: it can be represented as $O(1)^n$. When eigenvalues are degenerate, additional orthogonal transformations of the higher-dimensional eigenspaces will preserve the matrix $B$. In general, if the eigenspaces of $B$ associated with eigenvalues $\lambda_1 < \lambda_2 < ... < \lambda_k$ have dimensions $d_1,d_2,...,d_k$, with $d_1+d_2+...+d_k=n$, then the subgroup of $O(n)$ that preserves $B$ is isomorphic to $O(d_1)\times O(d_2) \times ... \times O(d_k)$.
Cartesian product set difference I know how to handle the 2d case: http://www.proofwiki.org/wiki/Set_Difference_of_Cartesian_Products But I am having trouble simplifying the following: Let $X=\prod_{1}^\infty X_i, A_i \subset X_i$ How can I simplify/rewrite $X - (A_1 \times A_2 \times \cdots A_n \times X_{n+1} \times X_{n+2} \cdots)$ with unions/intersections?
Try writing $$\prod_{k=n+1}^{\infty} X_k = X'$$ then you want the difference of $$(X_1 \times X_2 \times \cdots \times X_n \times X') - (A_1 \times A_2 \times\cdots \times A_n \times X')$$ You can use the rule that you linked inductively to this difference. Then note that in some parts of the expression you will get $X' - X' = \emptyset$.
Nondeterministic Finite Automata to Deterministic Finite Automata? I am unfamiliar with the general process of converting NFA to DFA. I have general understanding of the theory, but I don't have the method established. Please help explain the process required to transform an NFA to DFA. Thank you.
Suppose the original NFA had state set $S$, initial state $q \in S$, and accepting states $F \subset S$. The DFA is going to keep track of what possible states the NFA could get into reading the input so far. Therefore each state of the DFA corresponds to a subset of $S$, viz. the possible states the NFA could get into. The initial state is composed of $q$ and all possible states reachable from $q$ via epsilon transitions. The accepting states are all those containing a state from $F$. The transitions are defined in such a way that the interpretation of states in the DFA conforms to what's written above. In order to see what happens in state $\sigma$ upon reading input $a$, consider for all $s \in \sigma$ all states (if any) reachable by following $a$ and then epsilon transitions; collect all of these, for all $s \in \sigma$, in a set $\tau$, which is the target of the arrow labeled $a$ emanating from $\sigma$. For examples and more formal definitions, check the various textbooks and lecture notes detailing this topic (the latter are available online, just google "NFA to DFA").
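A compact sketch of the subset construction in Python may make the procedure concrete; the NFA representation here (a dict keyed by (state, symbol), with None standing for an epsilon transition) is just one possible choice:

```python
from collections import deque

def eps_closure(states, delta):
    """All states reachable from `states` by epsilon (None) transitions."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in delta.get((s, None), set()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def nfa_to_dfa(alphabet, delta, start, accepting):
    d_start = eps_closure({start}, delta)
    d_states, d_delta = {d_start}, {}
    queue = deque([d_start])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            moved = set()
            for s in S:                       # follow `a` from every NFA state in S ...
                moved |= delta.get((s, a), set())
            T = eps_closure(moved, delta)     # ... then close under epsilon transitions
            d_delta[(S, a)] = T
            if T not in d_states:
                d_states.add(T)
                queue.append(T)
    d_accepting = {S for S in d_states if S & accepting}
    return d_states, d_delta, d_start, d_accepting

# Tiny example: NFA over {a, b} accepting strings that end in "ab".
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
states, trans, start, acc = nfa_to_dfa({'a', 'b'}, delta, 0, {2})
print(len(states), "DFA states; accepting:", acc)
```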
Number of inner nodes in relation to the leaf number N I am aware that if there is a bifurcating tree with N leaves, then there are (N-1) internal nodes (branching points) with a single root node. How is this relationship proved? Best,
Here is an approach considering a directed binary tree: Let there be $k$ internal nodes (note that we consider the root to be an internal node as well). Since we consider a binary tree, the $k$ internal nodes contribute 2 outgoing edges each, so there are $2k$ edges in the tree, which gives a total degree of $4k$ over all edges (each edge contributes to the degree of two nodes). Now, for each edge, one endpoint (precisely, the tail of every outgoing edge of an internal node) is an internal node; these account for $2k$ of the total degree. The other endpoint may be an internal node or a leaf. Hence we are left with $2k$ degree that must be accounted for by the nodes which have an incoming edge. We also know that each node has exactly one incoming edge except the root. Thus, $k-1+N$ nodes have an incoming edge. Thus, $$2k = k-1+N \implies k = N-1$$ Thus, we proved that there are $N-1$ internal nodes for a directed, singly-rooted binary tree with $N$ leaves. Remark: The above argument can be easily adapted for the undirected version.
Simple limit, wolframalpha doesn't agree, what's wrong? (Just the sign of the answer that's off) $\begin{align*} \lim_{x\to 0}\frac{\frac{1}{\sqrt{4+x}}-\frac{1}{2}}{x} &=\lim_{x\to 0}\frac{\frac{2}{2\sqrt{4+x}}-\frac{\sqrt{4+x}}{2\sqrt{4+x}}}{x}\\ &=\lim_{x\to 0}\frac{\frac{2-\sqrt{4+x}}{2\sqrt{4+x}}}{x}\\ &=\lim_{x\to 0}\frac{2-\sqrt{4+x}}{2x\sqrt{4+x}}\\ &=\lim_{x\to 0}\frac{(2-\sqrt{4-x})(2+\sqrt{4-x})}{(2x\sqrt{4+x})(2+\sqrt{4-x})}\\ &=\lim_{x\to 0}\frac{2 \times 2 + 2\sqrt{4-x}-2\sqrt{4-x}-((\sqrt{4-x})(\sqrt{4-x})) }{2 \times 2x\sqrt{4+x} + 2x\sqrt{4+x}\sqrt{4-x}}\\ &=\lim_{x\to 0}\frac{4-4+x}{4x\sqrt{4+x} + 2x\sqrt{4+x}\sqrt{4-x}}\\ &=\lim_{x\to 0}\frac{x}{x(4\sqrt{4+x} + 2\sqrt{4+x}\sqrt{4-x})}\\ &=\lim_{x\to 0}\frac{1}{(4\sqrt{4+x} + 2\sqrt{4+x}\sqrt{4-x})}\\ &=\frac{1}{(4\sqrt{4+0} + 2\sqrt{4+0}\sqrt{4-0})}\\ &=\frac{1}{16} \end{align*}$ wolframalpha says it's negative. What am I doing wrong?
Others have already pointed out a sign error. One way to avoid such is to first simplify the problem by changing variables. Let $\rm\ z = \sqrt{4+x}\ $ so $\rm\ x = z^2 - 4\:.\:$ Then $$\rm \frac{\frac{1}{\sqrt{4+x}}-\frac{1}{2}}{x}\ =\ \frac{\frac{1}z - \frac{1}2}{z^2-4}\ =\ \frac{-(z-2)}{2\:z\:(z^2-4)}\ =\ \frac{-1}{2\:z\:(z+2)}$$ In this form it is very easy to compute the limit as $\rm\ z\to 2\:$.
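For what it's worth, a one-line cross-check with SymPy, just to confirm the sign:

```python
from sympy import symbols, sqrt, Rational, limit

x = symbols('x')
print(limit((1 / sqrt(4 + x) - Rational(1, 2)) / x, x, 0))   # -1/16
```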
Geometrical construction for Snell's law? Snell's law from geometrical optics states that the ratio of the angles of incidence $\theta_1$ and of the angle of refraction $\theta_2$ as shown in figure1, is the same as the opposite ratio of the indices of refraction $n_1$ and $n_2$. $$ \frac{\sin\theta_1}{\sin \theta_2} = \frac{n_2}{n_1} $$ (figure originally from wikimedia) Now let $P$ be a point in one medium (with refraction index $n_1$) and $Q$ a point in the other one as in the figure. My question is, is there is a nice geometrical construction (at best using only ruler and compass) to find the point $O$ in the figure such that Snell's law is satisfied. (Suppose you know the interface and $n_2/n_1$)? Edit A long time ago user17762 announced to post a construction. However until now no simple construction was given by anybody. So, does anybody know how to do this?
Yes, right. When entering a denser medium light slows down. Adding another answer, keeping only to the construction method. Drawn in Geogebra (axes and grid removed) to trace a ray into a medium of higher refractive index $\mu$. Choose an arbitrary point X such that $$ \mu= \dfrac{XP}{XQ} \left( =\dfrac{n_2}{n_1}=\dfrac{v_1}{v_2} \right) $$ which is possible with ruler and compass using suitable segment lengths. From $X$ draw the bisector of angle $PXQ$, cutting the interface at $O$, which is the required point of incidence. Due to refraction, the point of incidence $O$ always shifts to the right (when the start point $P$ is at top left) compared to where the straight, unrefracted ray $PQ$ would meet the interface. The ratio of sides and of the times spent in each medium is maintained due to the constant-bisector property of the Apollonian circle (no need to draw it). The diagram represents a light ray refracting into a medium of refractive index 1.5.
Questions about determining local extremum by derivative

* Second derivative test in Wikipedia says that: For a real function of one variable: If the function f is twice differentiable at a stationary point x, meaning that $\ f^{\prime}(x) = 0$ , then: If $ f^{\prime\prime}(x) < 0$ then $f$ has a local maximum at $x$. If $f^{\prime\prime}(x) > 0$ then $f$ has a local minimum at $x$. If $f^{\prime\prime}(x) = 0$, the second derivative test says nothing about the point $x$, a possible inflection point. For a function of more than one variable: Assuming that all second order partial derivatives of $f$ are continuous on a neighbourhood of a stationary point $x$, then: if the eigenvalues of the Hessian at $x$ are all positive, then $x$ is a local minimum. If the eigenvalues are all negative, then $x$ is a local maximum, and if some are positive and some negative, then the point is a saddle point. If the Hessian matrix is singular, then the second derivative test is inconclusive. My question is why in the multivariate case, the test requires the second order partial derivatives of $f$ to be continuous on a neighbourhood of $x$, while in the single variable case, it does not need the second derivative to be continuous around $x$? Do both also require that the first derivative and the function itself to be continuous around $x$?

* Similarly, does first derivative test for $f$ at $x$ need $f$ to be continuous and differentiable in a neighbourhood of $x$?

* For higher order derivative test, it doesn't mention if $f$ is required to be continuous and differentiable in a neighbourhood of some point $c$ up to some order. So does it only need that $f$ is differentiable at $c$ up to order $n$? Thanks for clarification!
Actually, the continuity of the partials is not needed, twice total differentiability at the point is sufficient, so the one and the higher dimensional cases are totally analogous. But the existence of the second order partials is insufficient for twice total differentiability, so the Hessian is not necessarily the second total derivative just because the second partials exist, and you can compute the Hessian. But their continuity is a simple, sufficient condition for twice differentiability.
Paying off a mortgage twice as fast? My brother has a 30 year fixed mortgage. He pays monthly. Every month my brother doubles his principal payment (so every month, he pays a little bit more, according to how much more principal he's paying). He told me he'd pay his mortgage off in 15 years this way. I told him I though it'd take more than 15 years. Who's right? If I'm right (it'll take more than 15 years) how would I explain this to him? CLARIFICATION: He doubles his principal by looking at his statement and doubling the "amount applied to principal this payment" field.
Let's look at two scenarios: two months of payment $P$ vs. one month of payment $2P$. Start with the second scenario. Assume the total amount to be paid is $X$ and the rate is $r > 1$; the total amount to be paid after one month would be $$r(X-2P).$$ Under the first scheme, the total amount to be paid after two months would be $$r(r(X-P)-P) = r(rX - (1+r)P).$$ Most of the time, $X$ is much larger than $P$, and so $X-2P$ is significantly smaller than $rX - (1+r)P$ (remember $r \approx 1$). So it should take your brother less than 15 years. Note I first subtract the payment and then take interest, but it shouldn't really matter. This all assumes the payments are fixed, but looking at the other answers this is not really the case... I guess my banking skills are lacking. Too young to take loans.
Limit of function of a set of intervals labeled i_n in [0,1] Suppose we divide the interval $[0,1]$ into $t$ equal intervals labeled $i_1$ up to $i_t$, then we make a function $f(t,x)$ that returns $1$ if $x$ is in $i_n$ and $n$ is odd, and $0$ if $n$ is even. What is $\lim_{t \rightarrow \infty} f(t,1/3)$? What is $\lim_{t \rightarrow \infty} f(t,1/2)$? What is $\lim_{t \rightarrow \infty} f(t,1/\pi)$? What is $\lim_{t \rightarrow \infty} f(t,x)$? joriki's clarification in the comments is correct: does $\lim_{t \rightarrow \infty} f(t,1/\pi)$ exist, is it 0 or 1 or (0 or 1) or undefined? Is it incorrect to say that it is (0 or 1)? Is there a way to express this: $K=\lim_{t \rightarrow \infty} f(t,x)$, without the limit operator? I think to say K is simply undefined is an easy way out. Something undefined can't have properties. Does K have any properties? Is K a concept?
There is no limit for any $0<x<1$.

(a) $f(t, 1/3) = 1$ for $t$ of the form $6n+1$ or $6n+2$ and $0$ for $t$ of the form $6n+4$ or $6n+5$, and on the boundary in other cases. For example $\frac{2n}{6n+1} < \frac{1}{3} < \frac{2n+1}{6n+1}$ and $\frac{2n+1}{6n+4} < \frac{1}{3} < \frac{2n+2}{6n+4} .$

(b) $f(t, 1/2) = 1$ for $t$ of the form $4n+1$ and $0$ for $t$ of the form $4n+3$, and on the boundary in other cases.

(c) If $f(t, 1/\pi) = 1$ then $f(t+3, 1/\pi) = 0$ or $f(t+4, 1/\pi) = 0$ and similarly if $f(t, 1/\pi) = 0$ then $f(t+3, 1/\pi) = 1$ or $f(t+4, 1/\pi) = 1$.

(d) If $f(t, x) = 1$ then $f\left(t+\lfloor{1/x}\rfloor , x \right) = 0$ or $f\left(t+\lceil{1/x}\rceil , x \right) = 0$ and similarly if $f(t, x) = 0$ then $f\left(t+\lfloor{1/x}\rfloor , x \right) = 1$ or $f\left(t+\lceil{1/x}\rceil , x \right) = 1$.

So there is no convergence and so no limit. If instead you explicitly gave boundary cases the value $1/2$ (only necessary for rational $x$) and took the partial average of $f(s,x)$ over $1 \le s \le t$, then the limit of the average as $t$ increases would be $1/2$.
A basic question about finding ideals of rings and proving that these are all the ideals I am a student taking a "discrete maths" course. Teacher seems to jump from one subject to another rapidly and this time he covered ring theory, Z/nZ, and polynomial rings. It is hard for me to understand anything in his class, and so the reports he gives become very hard. I did my best to find answers using google, but I just couldn't find it. Specifically he asked us to find all ideals of Z/6Z, and prove that these are in fact all of them. He also asked us to find all ideals of F[X]/(X^3-1) where F stands for Z/2Z. I understand the idea behind ideals, like I can see why {0,3} is ideal of Z/6Z, but how do I find ALL the ideals? And regarding polynomials, is there some kind of a mapping between polynomials and Z/nZ? Because otherwise I have no idea how to find ideals of polynomials.
Since $\mathbb{Z}/6\mathbb{Z}$ is finite, it is not difficult to try to find all ideals: you've got $\{0\}$ and you've got $\mathbb{Z}/6\mathbb{Z}$. Suppose the ideal contains $a\neq 0$. Then it must also contain $a+a$, $a+a+a$, and so on. Check the possibilities. No, there generally is no mapping between $F[X]/(p(x))$ and a $\mathbb{Z}/n\mathbb{Z}$. But first, notice that by doing long division, every polynomial $p(x)$ in $F[X]$ can be written as $p(x) = q(x)(x^3-1) + r(x)$, where $r(x)=0$ or else $\deg(r)\lt 3$. That means that every element of $F[x]/(x^3-1)$ corresponds to one of the "remainders", and there are only $8$ possible remainders (the remainder must be of the form $a+bx+cx^2$, with $a,b,c\in\mathbb{Z}/2\mathbb{Z}$), so again $F[x]/(x^3-1)$ is finite, and you can check the possibilities. Here adding an element to itself is not going to help much (because $p+p=0$ for all $p$) but you can instead consider a given $p(x)$ and all $8$ multiples of it that you get when you multiply by elements of $F[x]$. Alternatively, the ideals of $R/I$ correspond to ideals of $R$ that contain $I$. So the ideals of $\mathbb{Z}/6\mathbb{Z}$ correspond to ideals of $\mathbb{Z}$ that contain $6\mathbb{Z}$, and ideals of $F[X]/(x^3-1)$ correspond to ideals of $F[x]$ that contain $(x^3-1)$. Notice that $(a)$ contains $(b)$ if and only if $a$ divides $b$.
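Since both rings are finite, a few lines of Python can even do the checking for you; this brute-force sketch lists the ideals of $\mathbb{Z}/6\mathbb{Z}$ by testing every subset containing 0 for closure under addition and under multiplication by arbitrary ring elements, and the same pattern works for the 8 residues of $F[x]/(x^3-1)$ once you encode them:

```python
from itertools import combinations

n = 6

def is_ideal(s):
    closed_add = all((a + b) % n in s for a in s for b in s)
    closed_mul = all((r * a) % n in s for r in range(n) for a in s)
    return closed_add and closed_mul

ideals = [set(c) | {0}
          for k in range(n)
          for c in combinations(range(1, n), k)
          if is_ideal(set(c) | {0})]
print(ideals)   # [{0}, {0, 3}, {0, 2, 4}, {0, 1, 2, 3, 4, 5}]
```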
Covering and Cycles Let $G = (V, E)$ and $G' = (V', E')$ be two graphs, and let $f: V \rightarrow V'$ be a surjection. Then $f$ is a covering map from $G$ to $G'$ if for each $v \in G$, the restriction of $f$ to the neighbourhood of $v$ is a bijection onto the neighbourhood of $f(v) \in V'$ in $G'$. My question (homework) is how to easily prove that if there exists a cycle in $G$, there also exists a cycle in $G'$? I have a proof based on the size of the preimage of each vertex of $G'$. But, it seems to complicate. I would like to know your point of view. Thanks a lot in advance.
If $u,v\in V$ and $(u,v)\in E$ then since $u$ is in the neighborhood of $v$ then the condition on the local bijection gives you that $(f(v),f(u))\in E'$. Suppose C is a cycle in G. Take the subgraph C' of G' with the vertices $f(v)$ such that $v\in C$ and the edges $(f(v),f(u))$ such that $(v,u)\in C$. If $u\in C$ then it has two different neighbors $v,w\in C$ so by the local bijection $f(u)$ has two different neighbors $f(v),f(w)$ in C'. By the definition of C' these are all the neighbors $f(u)$ has. So C' must be a cycle. (you can show that C' is connected, but even if it isn't you can say that every connected component is a cycle)
Question about total derivative If $z=f(x,y)$, then total derivative is $\mathrm{d}z=\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y$. If $\mathrm{d} z=0$, how do you show that $z$ is a constant?
If $df = 0$, then $\frac{\partial f}{\partial x} \ dx = -\frac{\partial f}{\partial y} \ dy$. I guess one could solve for $f(x,y)$ to get $f(x,y) = g(x-y)$ since $(f_x+f_y)g(x-y) = g'(x-y)+(-1)g'(x-y) = 0$ identically for some $g \in C^1$.
Question on proof in "Primer on MCGs" This is a question about the proof of Proposition 1.4 in Farb and Margalit's "Primer on Mapping Class Groups" (in v. 5.0, it is on page 37 in the PDF, which you can download here). The proposition states Let $\alpha$ be a non-nullhomotopic simple closed curve on the (hyperbolic) surface $S$; then $[\alpha]\in\pi_1(S)$ is primitive. Most of the proof I'm OK with, except right in the beginning, when they write ...let $\phi\in\text{Isom}^+(\mathbb{H}^2)$ be the hyperbolic isometry corresponding to some element of the conjugacy class of $\alpha$. Two questions: 1) What do they mean by the hyperbolic isometry $\phi$? Don't different elements of $\pi_1(S)$ correspond to different elements of $\text{Isom}^+(\mathbb{H}^2)$? (Here $\pi_1(S)$ is acting as deck transformations on $\mathbb{H}^2$.) 2) Why is there a hyperbolic isometry corresponding to $\alpha$? For example, if $\alpha$ is a simple loop around a puncture point, then shouldn't any such $\phi$ be parabolic?
For (1): They mean pick any element of the conjugacy class, and look at the corresponding $\phi$. It doesn't matter which one you look at because being primitive is a conjugacy invariant.
Prove that if $a^{k} \equiv b^{k} \pmod m $ and $a^{k+1} \equiv b^{k+1} \pmod m $ and $\gcd( a, m ) = 1$ then $a \equiv b \pmod m $ My attempt: Since $a^{k} \equiv b^{k}( \text{mod}\ \ m ) \implies m|( a^{k} - b^{k} )$ and $a^{k+1} \equiv b^{k+1}( \text{mod}\ \ m ) \implies m|( a^{k+1} - b^{k+1} ) $ Using the factorization of a difference of powers, we have: $$a^{k} - b^{k} = ( a - b )( a^{k - 1} + a^{k - 2}b + a^{k - 3}b^{2} + \dots + ab^{k - 2} + b^{k - 1} )$$ $$a^{k + 1} - b^{k + 1} = ( a - b )( a^{k} + a^{k - 1}b + a^{k - 2}b^{2} + \dots + ab^{k - 1} + b^{k} )$$ Now there are two cases: 1. If $m|(a - b)$, we're done. 2. Else $m|( a^{k - 1} + a^{k - 2}b + a^{k - 3}b^{2} + \dots + ab^{k - 2} + b^{k - 1} )$ and $m|( a^{k} + a^{k - 1}b + a^{k - 2}b^{2} + \dots + ab^{k - 1} + b^{k} )$ And I was stuck from here, since I could not deduce anything from these two observations. I still have $(a, m) = 1$, and I guess this condition is used to prevent $m$ from dividing the two right-hand factors above. A hint would be greatly appreciated. Thanks, Chan
* $a\equiv b\pmod{n} \quad\textrm{and}\quad b\equiv c\pmod{n}\Rightarrow a\equiv c\pmod{n}$

* $a\equiv b\pmod{n}\Leftrightarrow b\equiv a\pmod{n}$

* $a\equiv b\pmod{n}\Rightarrow ac\equiv bc\pmod{n}$ for any $c$

This list can be enough to get $$a^kb\equiv a^ka\pmod{m}.$$ I would also like to mention a theorem I just learned from Hardy's An Introduction to the Theory of Numbers: THEOREM 54. If $(k, m) = d$, then $$kx\equiv ky\pmod{m}\Rightarrow x\equiv y\pmod{\frac{m}{d}},$$ and conversely. This theorem tells you when and how you can "cancel" something in a congruence. Now it suffices to show that $(a^k,m)=1$, which can be done by reductio ad absurdum (proof by contradiction). When one solves a problem about congruences, he/she had better have a list of the properties of congruence in hand, or at least in mind. This is convenient for your thinking. As Terence Tao said in his "Solving Mathematical Problems", putting everything down on paper helps in three ways:

* you have an easy reference later on;

* the paper is a good thing to stare at when you are stuck;

* the physical act of writing down what you know can trigger new inspirations and connections.
How to solve 700 = 7x + 6y for a range of values? Last time I did any maths was A-Level (some time ago!). This is a programming/layout problem where I need to display 7 items across a page, with 6 gaps between them. I have a fixed width, but need to determine values of x and y that are integers where x > y. This 'feels' like something I could plot on a graph, hence solve with calculus but I need a pointer in the right direction. Ideally I'd like to solve it programatically but I need an understanding of how to solve it first. Further down the line, I'd like to be able to vary the number of items to get different values. Thanks
$$700 = 7x + 6y\implies y = \frac{-7(x - 100)}{6}$$ Experimentation in a spreadsheet shows that [integer] $x,y$ values increase/decrease by amounts corresponding to their opposite coefficients. For example, a valid $x$-value occurs only every $6$ integers and a resulting $y$-value occurs every $7$ integers. Here is a sample to show the effect. $$(-14,133)\quad (-8,126)\quad (-2,119)\quad (4,112)\quad (10,105)\quad (16,98)\quad $$ A little math shows the predictable values and relationships to be as follows. $$x = 6 n + 4 \qquad y = 112 - 7 n \qquad n \in\mathbb{Z}$$ Now, if we have a known $x$-range, we can find $n$ by plugging $x$-lo and $x$-hi into $$n = \frac{(x - 4)}{6}$$ and rounding up or down to get the integer values desired and needed.
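Since the end goal is a program anyway, the simplest thing is often to enumerate the admissible layouts directly; in this Python sketch the extra constraints $x > y$ and $y \ge 0$ are assumptions based on the layout problem, so adjust them as needed:

```python
total = 700  # fixed width: 7 items of width x plus 6 gaps of width y

solutions = []
for x in range(total // 7 + 1):
    rest = total - 7 * x
    if rest % 6 == 0:
        y = rest // 6
        if y >= 0 and x > y:
            solutions.append((x, y))

print(solutions)
# [(58, 49), (64, 42), (70, 35), (76, 28), (82, 21), (88, 14), (94, 7), (100, 0)]
# x steps by 6 and y drops by 7, exactly as the formulas x = 6n + 4, y = 112 - 7n predict.
```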
Question on Solving a Double Summation $$ \sum_{i=0}^{n-2}\left(\sum_{j=i+1}^{n-1} i\right) $$ Formulas in my book give me equations to memorize and solve simple questions like $$ \sum_{i=0}^{n} i $$ ... However, For the question on top, how would I go about solving it by hand without a calculator? WolfRamAlpha seems to give the equation of 1/6[(n-2)(n-1)n]. Any suggestions would be appreciated. It's not a homework question, but I am studying for a test. I wrote the mathematical version of two nested for-loops for code that checks to see if a number in an array is unique or not. Thank you.
Since the sum $$\sum_{j=i+1}^{n-1} i $$ does not depend on $j$ we see $$\sum_{j=i+1}^{n-1} i = i\cdot\sum_{j=i+1}^{n-1} 1= i(n-1-i) $$ Then you have to find $$\sum_{i=0}^{n-2} \left( i(n-1-i)\right)=\sum_{i=0}^{n-2} \left( (n-1)i-i^2)\right)=(n-1) \sum_{i=0}^{n-2} i- \sum_{i=0}^{n-2} i^2$$ Can you solve it from here? Hope that helps, Edit: Perhaps you wanted a $j$. In other words, lets evaluate $$ \sum_{i=0}^{n-2}\left(\sum_{j=i+1}^{n-1} j\right) $$ Reversing the order of summation yields: $$ \sum_{j=1}^{n-1}\left(\sum_{i=j-1}^{n-2} j\right) $$ which can be solved by the exact same method presented above.
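A quick numerical cross-check of the closed form against the literal double sum (a sanity check, not a proof):

```python
def double_sum(n):
    return sum(i for i in range(n - 1) for j in range(i + 1, n))

def closed_form(n):
    return (n - 2) * (n - 1) * n // 6   # the formula reported by WolframAlpha

assert all(double_sum(n) == closed_form(n) for n in range(2, 50))
print("the two expressions agree for n = 2..49")
```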
Number Theory - Proof of divisibility by $3$ Prove that for every positive integer $x$ of exactly four digits, if the sum of digits is divisible by $3$, then $x$ itself is divisible by 3 (i.e., consider $x = 6132$, the sum of digits of $x$ is $6+1+3+2 = 12$, which is divisible by 3, so $x$ is also divisible by $3$.) How could I approach this proof? I'm not sure where I would even begin.
Actually, this is true for an integer with any number of digits. The proof is quite easy. Let's denote the integer by $\overline{a_n a_{n-1} \ldots a_1}$. Suppose the sum of its digits $\sum_{i=1}^n{a_i}$ is divisible by 3. Then $\sum_{i=1}^n{(1+\overline{9...9}_{i-1})\cdot a_i}$ is divisible by 3 too, because it differs from the digit sum by $\sum_{i=1}^n{\overline{9...9}_{i-1}\cdot a_i}$, which is a multiple of 3 (indeed of 9). Here $\overline{9...9}_{i-1}$ denotes the integer consisting of $i-1$ nines. But since $1+\overline{9...9}_{i-1}=10^{i-1}$, this second sum is just the original number $\overline{a_n a_{n-1} \ldots a_1}$ written out in its decimal expansion.
Isomorphism in coordinate ring Let $x_{1},x_{2},...,x_{m}$ be elements of $\mathbb{A}^{n}$, where $\mathbb{A}^{n}$ is the n-affine space over an algebraically closed field $k$. Now define $X=\{x_{1},x_{2},...,x_{m}\}$. Why is the coordinate ring $A(X)$, isomorphic to $\oplus_{j=1}^{m} k = k^{m}$?
For each $i$, $A(\{x_i\}) = k[x]/I(x_i) \cong k$, so each $I(x_i)$ is a maximal ideal of $k[x]$. I assume the points $x_1,\ldots,x_n$ are distinct, from which it follows easily that the ideals $I(x_1),\ldots,I(x_n)$ are distinct maximal ideals. Thus they are pairwise comaximal and the Chinese Remainder Theorem -- see e.g. $\S 4.3$ of these notes -- applies. I leave it to you to check that it gives the conclusion you want.
Largest known integer Does there exist a property which is known to be satisfied by only one integer, but such that this property does not provide a means by which to compute this number? I am asking because this number could be unfathomably large. I was reading Conjectures that have been disproved with extremely large counterexamples?; does there exist a conjecture that is known to have a counterexample, but which has not been found, and where there is no "bound" on the expected magnitude of this integer? Is anything known about how the largest integer that is expressible in n symbols grows with n?
One can easily generate "conjectures" with large counterexamples using Goodstein's theorem and related results. For example, if we conjecture that the Goodstein sequence $\rm\:4_k\:$ never converges to $0$ then the least counterexample is $\rm\ k = 3\ (2^{402653211}-1) \approx 10^{121210695}\:$. For much further discussion of Goodstein's theorem see my sci.math post of Dec 11 1995
Example for Cyclic Groups and Selecting a generator In Cryptography, I find it commonly mentioned: Let G be cyclic group of Prime order q and with a generator g. Can you please exemplify this with a trivial example please! Thanks.
In fact, if you take the group $(\mathbb{Z}_{p},+)$ for a prime number $p$, then every nonzero element is a generator. For a generic example, take $G= \{e, a ,a^{2}, \cdots, a^{q-1}\}$ with $a^{q}=e$. Now $|G|=q$ and $G = \langle a\rangle$, which means that $G$ is generated by $a$.
How exactly do differential equations work? My textbook says that solutions for the equation $y'=-y^2$ must always be 0 or decreasing. I don't understand—if we're solving for y', then wouldn't it be more accurate to say it must always be 0 or negative. Decreasing seems to imply that we're looking at a full graph, even though the book is talking about individual solutions. Can someone explain this? Secondly, it gives that the family $y=\frac{1}{x+C}$ as solutions for the equation. It then tells me that 0 is a solution for y in the original equation that doesn't match the family, but I don't quite understand that. How can we know that y' will equal 0 if we're specifically looking outside of the family of solutions it gives?
* *Remember if $y'(x) > 0$, $y(x)$ is increasing; if $y'(x) < 0$, $y(x)$ is decreasing; if $y'(x) = 0$ then $y(x)$ is constant. In our case $y'(x) \le 0$ which means $y(x)$ is always constant or decreasing. *You can verify yourself: if $y(x) = 0$ for all $x$, then $y'(x) = 0$ so it is true that $y' = -y^2$ therefore $y(x) = 0$ is a solution but it isn't in the form $\frac{1}{x + C}$.
If $a\mathbf{x}+b\mathbf{y}$ is an element of the non-empty subset $W$, then $W$ is a subspace of $V$ Okay, so my text required me to actually prove both sides; The non-empty subset $W$ is a subspace of a $V$ if and only if $a\mathbf{x}+b\mathbf{y} \in W$ for all scalars $a,b$ and all vectors $\mathbf{x},\mathbf{y} \in W$. I figured out one direction already (that if $W$ is a subspace, then $a\mathbf{x}+b\mathbf{y}$ is an element of $W$ since $a\mathbf{x}$ and $b\mathbf{y}$ are in $W$ and thus so is their sum), but I'm stuck on the other direction. I got that if $a\mathbf{x}+b\mathbf{y} \in W$, then $c(a\mathbf{x}+b\mathbf{y}) \in W$ as well since we can let $a' = ca$ and $b' = cb$ and we're good, so $W$ is closed under scalar multiplication. But for closure under addition, my text states that I can "cleverly choose specific values for $a$ and $b$" such that $W$ is closed under addition as well but I cannot find any values that would work. What I'm mostly confused about is how choosing specific values for $a$ and $b$ would prove anything, since $a, b$ can be any scalars and $\mathbf{x},\mathbf{y}$ can be any vectors, so setting conditions like $a = b$, $a = -b$, $a = 0$ or $b = 0$ don't seem to prove anything. Also something I'm not sure about is if they're saying that $a\mathbf{x}+b\mathbf{y} \in W$, am I to assume that that is the only form? So if I'm testing for closure under addition, I have to do something like $(a\mathbf{x}+b\mathbf{y})+(c\mathbf{z}+d\mathbf{w})$?
Consider $a=b=1$: the hypothesis then says $1\mathbf{x}+1\mathbf{y}=\mathbf{x}+\mathbf{y} \in W$, which is exactly closure under addition. Similarly, $a=1,\ b=-1$ gives $\mathbf{x}-\mathbf{y}\in W$, and $a=b=0$ gives $\mathbf{0}\in W$.
Numerical approximation of an integral I read a problem to determine the integral $\int_1^{100}x^xdx$ with error at most 5% from the book "Which way did the bicycle go". I was a bit disappointed to read the solution which used computer or calculator. I was wondering whether there is a solution to the problem which does not use computers or calculators. In particular, is there way to prove that the solution given in the book has a mistake because it claims that $$\frac{99^{99}-1}{1+\ln 99}+\frac{100^{100}-99^{99}}{1+\ln 100}\leq \int_1^{100}x^xdx$$ gives a bound $1.78408\cdot 10^{199}\leq \int_1^{100}x^xdx$ but I think the LHS should be $1.78407\cdot 10^{199}\leq \int_1^{100}x^xdx$? I checked this by Sage and Wolfram Alpha but I was unable to do it by pen and paper.
Because your integrand grows so fast, the whole integral is dominated by the region $x\approx 100$. We can write $x^x = \exp[ x \ln(x)]$ and then expand $x \ln(x) = 100 \ln(100) + [1+ \ln(100)] (x- 100) + \cdots$ around $x = 100$ (note that it is important to expand inside the exponent). The integral can therefore be estimated as $$\int_1^{100} dx \, x^x \approx 100^{100} \int_{-\infty}^{100} dx\, e^{[1+ \ln(100)] (x- 100) } = \frac{100^{100}}{1 + \ln (100)}.$$ Numerics shows that this result is off by a relative error of about $3\times 10^{-4}$.
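To settle the $1.78407$ versus $1.78408$ question without doing the arithmetic by hand, one can evaluate both the asymptotic estimate above and the book's claimed lower bound with high-precision arithmetic. A rough sketch, assuming the mpmath library is available:

    from mpmath import mp, ln, power, nstr

    mp.dps = 60                      # work with 60 significant digits

    estimate = power(100, 100) / (1 + ln(100))
    lower = (power(99, 99) - 1) / (1 + ln(99)) \
          + (power(100, 100) - power(99, 99)) / (1 + ln(100))

    print(nstr(estimate, 10))        # the asymptotic estimate from this answer
    print(nstr(lower, 10))           # the book's lower-bound expression

This of course still uses a computer, but only for plain arithmetic, not for the quadrature itself.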
Graph coloring problem (possibly related to partitions) Given an undirected graph I'd like to color each node either black or red such that at most half of every node's neighbors have the same color as the node itself. As a first step, I'd like to show that this is always possible. I believe this problem is the essence of the math quiz #2 in Communications of the ACM 02/2011 where I found it, so you might consider this a homework-like question. The quiz deals with partitions but I found it more natural to formulate the problem as a graph-coloring problem. Coming from computer science with some math interest I'm not sure how to approach this and would be glad about some hints. One observation is that any node of degree 1 forces its neighbor to be of the opposite color. This could lead to a constructive proof (or a greedy algorithm) that provides a coloring. However, an existence proof would be also interesting.
Hint: Start with a random colouring and try to increase the number of edges which have differently coloured endpoints. Spoiler: Pick a node which has more than half of its neighbours the same colour as itself and flip its colour. Now show that, as a result, the number of edges with differently coloured endpoints increases by at least 1. Repeat.
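Since you mentioned wanting a greedy algorithm, here is the flipping argument written out as a small Python sketch (the graph is a plain adjacency-list dict; the function name is mine, not from the magazine):

    import random

    def balanced_two_colouring(adj):
        """adj maps each node to a list of its neighbours (undirected graph)."""
        colour = {v: random.choice((0, 1)) for v in adj}
        changed = True
        while changed:                       # terminates: every flip increases the
            changed = False                  # number of bichromatic edges by >= 1
            for v in adj:
                same = sum(colour[u] == colour[v] for u in adj[v])
                if 2 * same > len(adj[v]):   # more than half the neighbours agree
                    colour[v] ^= 1
                    changed = True
        return colour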
Stock Option induction problem Can anyone help me solve this problem. I have no idea where to even start on it. Link inside stock option problem
You are asked to prove that $$ \int_{ - \infty }^\infty {V_{n - 1} (s + x)dF(x)} > s - c, $$ for all $n \geq 1$. For $n=1$, substituting from the definition of $V_0$, you need to show that $$ \int_{ - \infty }^\infty {\max \lbrace s + x - c,0\rbrace dF(x)} > s - c. $$ For this purpose, first note that $$ \max \lbrace s + x - c,0\rbrace \ge s + x - c. $$ Then, by linearity of the integral, you can consider the sum $$ \int_{ - \infty }^\infty {(s - c)dF(x)} + \int_{ - \infty }^\infty {xdF(x)} , $$ from which the assertion for $n=1$ follows. To complete the inductive proof, substituting from the definition of $V_n$, you need to show that $$ \int_{ - \infty }^\infty {\max \bigg\lbrace s + x - c,\int_{ - \infty }^\infty {V_{n - 1} (s + x + u)dF(u)} \bigg\rbrace dF(x)} > s - c, $$ under the induction hypothesis that, for any $s \in \mathbb{R}$, $$ \int_{ - \infty }^\infty {V_{n - 1} (s + x)dF(x)} > s - c. $$ For this purpose, recall the end of the proof for $n=1$.
Is $2^{340} - 1 \equiv 1 \pmod{341} $? Is $2^{340} - 1 \equiv 1 \pmod{341} $? This is one of my homework problem, prove the statement above. However, I believe it is wrong. Since $2^{10} \equiv 1 \pmod{341}$, so $2^{10 \times 34} \equiv 1 \pmod{341}$ which implies $2^{340} - 1 \equiv 0 \pmod{341}$ Any idea? Thanks,
What you wrote is correct. $$2^{340}\equiv 1\pmod {341}$$ This is the smallest example of a pseudoprime to the base two ($341 = 11\cdot 31$ is composite). See Fermat pseudoprime.
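If you ever want to double-check a congruence like this quickly, modular exponentiation does it instantly; for instance, in Python:

    print(pow(2, 340, 341))              # prints 1, so 2^340 is congruent to 1 (mod 341)
    print((pow(2, 340, 341) - 1) % 341)  # prints 0, so 2^340 - 1 is congruent to 0, not 1 (mod 341)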
Some questions about $\mathbb{Z}[\zeta_3]$ I am giving a talk on Euler's proof that $X^3+Y^3=Z^3$ has no solutions in positive integers. Some facts that I believe to be true are the following. For some I give proof. Please verify that my reasoning is correct and make any pertinent comments. I use the notation $\zeta=\zeta_3$. (a) $\mathbb{Q}(\sqrt{-3})=\mathbb{Q}(\zeta)$ Proof: Note that $\{1, \zeta\}$ and $\{1, \frac{1+\sqrt{-3}}{2}\}$ are bases for $\mathbb{Z}[\zeta]$, $\mathbb{Z}[\sqrt{-3}]$, respectively. If $\alpha \in \mathbb{Z}[\zeta]$, then, $\alpha=a+b\zeta=a-b+b+b\zeta=(a-b)+b(1+\zeta)=(a-b)+b(\frac{1+\sqrt{-3}}{2})\in \mathbb{Z}[\sqrt{-3}]$. Similarly, if $\alpha \in \mathbb{Z}[\sqrt{-3}]$, then $\alpha=a+b(\frac{1+\sqrt{-3}}{2})=a+b-b+b(\frac{1+\sqrt{-3}}{2})=(a+b)+b\zeta \in \mathbb{Z}[\zeta]$. (b) $O=\mathbb{Z}[\frac{1+\sqrt{-3}}{2}]$. Proof: This follows from (a). (c) Ord$_3(1-\zeta)=1/2$ makes sense and is well defined. Here is my reasoning for this. $1=$Ord$_3(3)=$Ord$_3((1-\zeta)(1-\zeta'))=$Ord$_3(1-\zeta)+$Ord$_3(1-\zeta')$. There is a way around using this in the proof, but this is kinda cool so Im interested. (d) The conjugate of $a-\zeta b$ is $a-\zeta'b$. Proof: Since the conjugate of products is the product of conjugates and the conjugate of sums is the sum of the conjugates, this follows.
b) does not follow from a). By definition, for $K$ a number field, $\mathcal{O}_K$ is the set of algebraic integers in $K$. You have shown, at best, that the ring of algebraic integers in $K = \mathbb{Q}(\zeta)$ contains $\mathbb{Z}[\zeta]$, but you have not shown that this is all of $\mathcal{O}_K$. Here is a complete proof of b) although, again, this is unnecessary since all you need for this proof is that $\mathbb{Z}[\zeta]$ has unique prime factorization. Let $a + b \zeta$ (with $a, b \in \mathbb{Q}$) be an algebraic integer in $K$. Then its conjugate is $a + b \zeta^2$, hence its trace is $2a - b$ and its norm is $a^2 - ab + b^2$, both of which must be integers. After multiplying by $\zeta$ we get $a \zeta + b \zeta^2 = -b + (a-b) \zeta$. Since this is also an algebraic integer, its trace $-2b - (a-b) = -(a+b)$ must also be an integer. From $2a - b \in \mathbb{Z}$ and $a + b \in \mathbb{Z}$ we get $3a, 3b \in \mathbb{Z}$; writing $a = p/3$, $b = q/3$ with $p, q \in \mathbb{Z}$, the norm condition says $9 \mid p^2 - pq + q^2 = (p+q)^2 - 3pq$, which forces $3 \mid p+q$ and then $3 \mid pq$, hence $3 \mid p$ and $3 \mid q$, so $a, b$ are both integers. On the other hand any element of $K$ of the form $a + b \zeta, a, b \in \mathbb{Z}$ is an algebraic integer, so $\mathcal{O}_K = \mathbb{Z}[\zeta]$ as desired. c) is not a good idea. The correct definition is this: for a prime ideal $P$ in a Dedekind domain $D$, there is a discrete valuation $\nu_P$ defined as follows: if $x \in \mathcal{O}_K$, then let $\nu_P(x)$ be the greatest power of $P$ that divides the ideal $(x)$. So in this case the relevant prime ideal is $P = (1 - \zeta)$ since $(3)$ is not a prime ideal, and one has $\nu_P(1 - \zeta) = 1$. Associated to any discrete valuation is an absolute value $2^{-\nu_P(x)}$ on $K$, and the reason you do not want to try defining valuations with respect to non-prime ideals is that the corresponding absolute value will not satisfy the triangle inequality. In this case you are lucky because $3$ ramifies (it is the square of a prime ideal) but, for example, $7 = (2 - \zeta)(2 - \zeta^2)$ does not ramify and the valuations $\nu_{2 - \zeta}, \nu_{2 - \zeta^2}$ do not coincide, so the naive definition of $\nu_7$ will not satisfy the triangle inequality. d) depends on your definition of "conjugate."
Big $O$ vs Big $\Theta$ I am aware of the big theta notation $f = \Theta(g)$ if and only if there are positive constants $A, B$ and $x_0 > 0$ such that for all $x > x_0$ we have $$ A|g(x)| \leq |f(x)| \leq B |g(x)|. $$ What if the condition is the following: $$ C_1 + A|g(x)| \leq |f(x)| \leq C_2 + B |g(x)| $$ where $C_1, C_2$ are possibly negative? Certainly more can be said than just $f = O(g)$. Is there a generalized $\Theta$ notation which allows shifts (by, say $C_1, C_2$)? In particular, I'm interested in the special case: \begin{eqnarray} -C \leq f(x) - g(x) \leq C \end{eqnarray} for some positive $C$. How does $f$ compare to $g$ in this case? If $f$ and $g$ are positive functions of $x$ which both diverge to $\infty$, is it true that $f(x) = -C + g(x) + \Theta(1)$? What is the appropriate asymptotic notation in this case? Update Thanks for the clarifying answers. Now here is a slightly harder question. Suppose $f$ is discrete and $g$ is continuous. Suppose further that as $x \to \infty$, the difference $f(x) - g(x)$ is asymptotically bounded in the interval $[-C,C]$ but does not necessarily converge to $0$. Does $f \sim g$ still make sense? Would it be more appropriate to use $\liminf_{x \to \infty} f(x) - g(x) = - C$ and $\limsup_{x \to \infty} f(x) - g(x) = C$?
If $g(x)$ and $f(x)$ tends to $\infty$, then there is a value $x_0$ such that for $x > x_0$, $g(x)$ and $f(x)$ are strictly positive. Therefore, if $-C \leq f(x) - g(x) \leq C$, then for $x > x_0$, we have $$ \frac{-C}{g(x)} \leq \frac{f(x)}{g(x)} -1 \leq \frac{C}{g(x)}. $$ Taking limits, you see that $$ \lim_{x \to \infty} \frac{f(x)}{g(x)} = 1, $$ if the limit exists. In this case, you can write $f \sim g$. Update: To answer your second question, $f \sim g$ may not be appropriate here as $\displaystyle\lim_{x \to \infty} \frac{f(x)}{g(x)}$ may or may not exist. If the limit does exist, then you can write $f \sim g$ as before. If not, then the situation is trickier, and it must be dealt with individually, depending on the functions $f$ and $g$. You should just make the statement that best exemplifies what you are trying to say between the relationship of $f(x)$ and $g(x)$. The big-Oh (or big-Theta) notation may not be the best fit here. Hope this is helpful.
Prove that $f(n) = 2^{\omega(n)}$ is multiplicative where $\omega(n)$ is the number of distinct primes Prove that $f(n) = 2^{\omega(n)}$ is multiplicative where $\omega(n)$ is the number of distinct primes. My attempt: Let $a = p_1p_2\cdots p_k$ and $b = q_1q_2\cdots q_t$ where $p_i$ and $q_j$ are prime factors, and $p_i \neq q_j$ for all $1 \leq i \leq k$ and $1 \leq j \leq t$. We will show that $2^{\omega(ab)} = 2^{\omega(a)} \times 2^{\omega(b)}$ Indeed, $\omega(a) = k$ and $\omega(b) = t$. Then $2^{\omega(a)} \times 2^{\omega(b)} = 2^{k + t}$ Where $2^{\omega(ab)} = 2^{k + t}$ $\therefore 2^{\omega(ab)} = 2^{\omega(a)} \times 2^{\omega(b)}$ Am I in the right track? Thanks,
Hint: Notice $$2^{\omega(a)}\times 2^{\omega(b)}=2^{\omega(a)+\omega(b)}$$ so try to relate $\omega(a)+\omega(b)$ and $\omega(ab)$. To simplify things, you need only show it holds for $a=p^r$, $b=q^t$ where $q,p$ are prime numbers.
Find the average of a collection of points (in 2D space) I'm a bit rusty on my math, so please forgive me if my terminology is wrong or I'm overlooking extending a simple formula to solve the problem. I have a collection of points in 2D space (x, y coordinates). I want to find the "average" point within that collection. (Centroid, center of mass, barycenter might be better terms.) Is the average point just that whose x coordinate is the average of the x's and y coordinate is y the average of the y's?
There are different types of averages. Only the average of numbers is unambiguous. The average you are looking for depends on what you want to use it for. If you take the avg. x and y coordinates separately, that will give you the center of mass (of equal point masses placed at the points).
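In code this is just a component-wise mean. A minimal sketch in Python (the points are assumed to be (x, y) tuples):

    def centroid(points):
        """Arithmetic mean of a non-empty list of (x, y) points."""
        n = len(points)
        return (sum(x for x, _ in points) / n,
                sum(y for _, y in points) / n)

    print(centroid([(0, 0), (2, 0), (2, 2), (0, 2)]))   # (1.0, 1.0)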
Stronger than strong-mixing I have the following exercise: "Show that if a measure-preserving system $(X, \mathcal B, \mu, T)$ has the property that for any $A,B \in \mathcal B$ there exists $N$ such that $$\mu(A \cap T^{-n} B) = \mu(A)\mu(B)$$ for all $n \geq N$, then $\mu(A) = 0$ or $1$ for all $A \in \mathcal B$" Now the back of the book states that I should fix $B$ with $0 < \mu(B) < 1$ and then find $A$ using the Baire Category Theorem. Edit: I'm now pretty sure that this "$B$" is what "$A$" is in the required result. Edit: This stopped being homework so I removed the tag. Any approach would be nice. I have some idea where I approximate $A$ with $T^{-n} B^C$ where the $n$ will be an increasing sequence and then taking the $\limsup$ of the sequence. I'm not sure if it is correct. I will add it later on. My attempt after @Did's comment: "proof": First pick $B$ with $0 < \mu(B) < 1$. Then set $A_0 = B^C$ and determine the smallest $N_0$ such that $$\mu(A_0 \cap T^{-N_0} B) = \mu(A_0) \mu(B)$$ Continue like this and set $$A_k = T^{-N_{k - 1}} B^C$$ Now we note that the $N_k$ are a strictly increasing sequence, since suppose not, say $N_{k} \leq N_{k - 1}$ then $$\mu \left ( T^{-N_{k - 1}} B^C \cap T^{-N_{k - 1}} B \right ) = 0 \neq \mu(B^C) \mu(B) > 0$$ Set $A = \limsup_n A_n$, then note that \begin{align} \sum_n \mu(A_n) = \sum_n \mu(B^C) = \infty \end{align} So $\mu(A) = 1$, by the Borel-Cantelli lemma. Well, not yet, because we are also required to show that the events are independent, so it is sufficient to show that $\mu(A_{k + 1} \cap A_k) = \mu(A_{k + 1 })\mu(A_k)$ We know that $\mu(T^{N_k} B^C \cap T^{N_{k + 1}} B) = \mu(B^C)\mu(B)$. So does a similar result now hold if we replace $B$ with $B^C$ in the second part? Note: \begin{align} \mu(A \cap T^{-M} B^C) &= \mu(A \setminus (T^{-M} B \cap A))\\\ &= \mu(A) - \mu(A)\mu(B) \\\ &= \mu(A) - \mu(A \cap T^{-M} B)\\\ &= \mu(A)\mu(B^C) \end{align} which is what was required. For this $A$ and $B$ we can find an $M$ and a $k$ such that $N_k \leq M < N_{k + 1}$. Now note that $\limsup_n A \cap T^{-M} B = \limsup_n (A \cap T^{-M} B)$. Further, $$\sum_n \mu(A_n \cap T^{-N_{k +1}}) = \mu(A_0 \cap T^{-N_{k + 1}}) + \ldots + \mu(A_{k + 1} \cap T^{N_{k + 1}}) < \infty$$ So again by the Borel-Cantelli Lemma we have $\mu(\limsup_n A_n \cap T^{-M} B) = 0$. Thus we get $$\mu(A) \mu(B) = \mu(B) = \mu(A \cap T^{-M} B) = 0$$ which is a contradiction since $\mu(B) > 0$. So, such $B$'s violate the condition. Added: Actually the metric on the space of events $d(A,B) = \mu(A \Delta B)$ can work together with Baire's Category Theorem.
Hint: what happens if $A=T^{-N}B$?
Why isn't $\mathbb{CP}^2$ a covering space for any other manifold? This is one of those perhaps rare occasions when someone takes the advice of the FAQ and asks a question to which they already know the answer. This puzzle took me a while, but I found it both simple and satisfying. It's also great because the proof doesn't use anything fancy at all but it's still a very nice little result.
Euler characteristic is multiplicative, so (since $\chi(P^2)=3$ is a prime number) if $P^2\to X$ is a cover, $\chi(X)=1$ and $\pi_1(X)=\mathbb Z/3\mathbb Z$ (in particular, X is orientable). But in this case $H_1(X)$ is torsion, so (using Poincare duality) $\chi(X)=1+\dim H_2(X)+1>1$.
Is there a gap in the standard treatment of simplicial homology? On MO, Daniel Moskovich has this to say about the Hauptvermutung: The Hauptvermutung is so obvious that it gets taken for granted everywhere, and most of us learn algebraic topology without ever noticing this huge gap in its foundations (of the text-book standard simplicial approach). It is implicit every time one states that a homotopy invariant of a simplicial complex, such as simplicial homology, is in fact a homotopy invariant of a polyhedron. I have to admit I find this statement mystifying. We recently set up the theory of simplicial homology in lecture and I do not see anywhere that the Hauptvermutung needs to be assumed to show that simplicial homology is a homotopy invariant. Doesn't this follow once you have simplicial approximation and you also know that simplicial maps which are homotopic induce chain-homotopic maps on simplicial chains?
One doesn't need to explicitly compare with singular homology to get homotopy invariance (although that is certainly one way to do it), and one certainly doesn't need the Hauptvermutung (thankfully, since it is false in general). Rather, as you say, one can simplicially approximate a continuous map between simplicial complexes, and thus make simplicial homology a functor on the category of simplicial complexes and continuous maps (which, as you observe, will then factor through the homotopy category). Just thinking it over briefly, it seems to me that the hardest step in this approach will be to show that the induced map on homology really is independent of the choice of a simplicial approximation. Presumably this was covered in the lectures you are attending. (In fact, I guess this amounts to the fact that homotopic simplicial maps induce the same map on homology, which you say you have proved.)
Cross product: problem with breakdown of vector to parallel + orthogonal components Vector $\vec{a}$ can be broken down into its components $\vec{a}_\parallel$ and $\vec{a}_\perp$ relative to $\vec{e}$. * *$\vec{a}_\parallel = (\vec{a}\vec{e})\vec{e}$ and * *$\vec{a}_\perp = \vec{e} \times (\vec{a} \times \vec{e})$ (f1) The orthogonal part can be found via application of the triple product: * *$\vec{a}_\perp = \vec{a} - \vec{a}_\parallel = \vec{a}(\vec{e}\vec{e}) - \vec{e}(\vec{e}\vec{a}) = \vec{e} \times (\vec{a} \times \vec{e})$ (f2) This one causes me problems. I tried to use some values for the formulas and disaster strikes: $\vec{a} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix}$ and $\vec{e} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ makes $\vec{a}_\parallel = \begin{pmatrix} 6 \\ 6 \\ 6 \end{pmatrix}$. I thought to follow the values "all the way" through the last formula f2. So I calculate for $\vec{a}_{\perp subtraction} = \vec{a} - \vec{a}_\parallel = \begin{pmatrix} -3 \\ -4 \\ -5 \end{pmatrix}$ but I find for $\vec{a}_{\perp cross} = \vec{e} \times (\vec{a} \times \vec{e}) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \times \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix} =\begin{pmatrix} 3 \\ 0 \\ -3 \end{pmatrix}$. Can someone point out my error? Must be some mis-calculation, as $\vec{a}_{\perp cross} + \vec{a}_\parallel \neq \vec{a}$ and I do not really find the two $\perp$-vectors parallel.
To get the projection along $\vec{e}$, i.e. $\vec{a_{||}}$, you need to project your vector $\vec{a}$ along the unit vector in the direction of $\vec{e}$. Similarly, if you want the component of $\vec{a}$ perpendicular to $\vec{e}$, $\vec{a_{\perp}} = \frac{\vec{e}}{\|\vec{e}\|_2} \times \left( \vec{a} \times \frac{\vec{e}}{\|\vec{e}\|_2} \right)$. Hence, $\vec{a_{||}} = \left( \vec{a} \cdot \frac{\vec{e}}{\|\vec{e}\|_2} \right) \frac{\vec{e}}{\|\vec{e}\|_2}$ and $\vec{a_{\perp}} = \vec{a} - \vec{a_{||}} = \frac{\vec{e}}{\|\vec{e}\|_2} \times \left( \vec{a} \times \frac{\vec{e}}{\|\vec{e}\|_2} \right)$. In your case, $\vec{a_{||}} = \frac{6}{\sqrt{3}} \frac1{\sqrt{3}} \left(1,1,1\right)^T = \left( 2,2,2 \right)^T$ and hence $\vec{a_{\perp}} = \left( 3,2,1 \right)^T - \left( 2,2,2 \right)^T = \left( 1,0,-1 \right)^T = \frac{\vec{e}}{\|\vec{e}\|_2} \times \left( \vec{a} \times \frac{\vec{e}}{\|\vec{e}\|_2} \right)$
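If you want to sanity-check the corrected numbers, a short NumPy sketch (array names are mine):

    import numpy as np

    a = np.array([3.0, 2.0, 1.0])
    e = np.array([1.0, 1.0, 1.0])
    e_hat = e / np.linalg.norm(e)                   # the normalisation that was missing

    a_par = np.dot(a, e_hat) * e_hat                # [2. 2. 2.]
    a_perp = np.cross(e_hat, np.cross(a, e_hat))    # [ 1.  0. -1.]

    print(a_par, a_perp, a_par + a_perp)            # the two parts sum back to a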
Elementary proof that $\mathbb{R}^n$ is not homeomorphic to $\mathbb{R}^m$ It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m>1$: subtract a point and use the fact that connectedness is a homeomorphism invariant. Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m>2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary. However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem. Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?
Consider the one point compactifications, $S^n$ and $S^m$, respectively. If $\mathbb R^n$ is homeomorphic to $R^m$, their one-point compactifications would be, as well. But $H_n(S^n)=\mathbb Z$, whereas $H_n(S^m)=0$, for $n\ne m,0$.
Find radius of the Smallest Circle that contains this figure A two dimensional silo shaped figure is formed by placing a semi-circle of diameter 1 on top of a unit square, with the diameter coinciding with the top of the square. How do we find the radius of the smallest circle that contains this silo?
Draw the line of length 1.5 that cuts both the square and the half circle into two identical pieces. (Starting from the middle of the base of the square, go straight up.) Notice that the center of the larger circle must lie somewhere on this line. Say that the center point is distance $x$ from the base of the square, and that this circle has radius $r$. Then the distance from the center point to a bottom corner of the square should be $r$, so by Pythagoras we find $$r^2 = x^2 +(1/2)^2$$. Now, the distance to the top of the half circle from this center point is $1.5-x$ and this should also be the radius. Hence $$1.5-x=r.$$ Thus you have two equations and two unknowns, which you can solve from here.
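Carrying the algebra through, for completeness: $$\left(\tfrac{3}{2} - x\right)^2 = x^2 + \tfrac{1}{4} \;\Longrightarrow\; \tfrac{9}{4} - 3x = \tfrac{1}{4} \;\Longrightarrow\; x = \tfrac{2}{3}, \qquad r = \tfrac{3}{2} - \tfrac{2}{3} = \tfrac{5}{6}.$$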
Nasty examples for different classes of functions Let $f: \mathbb{R} \to \mathbb{R}$ be a function. Usually when proving a theorem where $f$ is assumed to be continuous, differentiable, $C^1$ or smooth, it is enough to draw intuition by assuming that $f$ is piecewise smooth (something that one could perhaps draw on a paper without lifting your pencil). What I'm saying is that in all these cases my mental picture is about the same. This works most of the time, but sometimes it of course doesn't. Hence I would like to ask for examples of continuous, differentiable and $C^1$ functions, which would highlight the differences between the different classes. I'm especially interested in how nasty differentiable functions can be compared to continuously differentiable ones. Also if it is the case that the one dimensional case happens to be uninteresting, feel free to expand your answer to functions $\mathbb{R}^n \to \mathbb{R}^m$. The optimal answer would also list some general minimal 'sanity-checks' for different classes of functions, which a proof of a theorem concerning a particular class would have to take into account.
The Wikipedia article http://en.wikipedia.org/wiki/Pompeiu_derivative gives one example of how bad a non-continuous derivative can be. One can show that any set whose complement is a dense intersection of countably many open sets is the point of discontinuities for some derivative. In particular a derivative can be discontinuous almost everywhere and on a dense set. See the book "Differentiation of Real Functions" by Andrew Bruckner for this and much more.
Equality of polynomials: formal vs. functional Given two polynomials $A = \sum_{0\le k<n} a_k x^k$ and $B =\sum_{0\le k<n} b_k x^k$ of the same degree $n$, which are equal for all $x$, is it always true that $\ a_k = b_k\ $ for all $0\le k<n?$. All Coefficients and $x$ are complex numbers. Edit: Sorry, formulated the question wrong.
In general (over an arbitrary ground field) the answer is no; if the ground field is infinite, as in your case $\mathbb{C}$, it is true. For a counterexample over a finite field, consider the polynomials $X^2$ and $X$ in the polynomial algebra ${(\mathbb{Z}/2\mathbb{Z})}[X]$: they take the same value at every element of $\mathbb{Z}/2\mathbb{Z}$, yet they are different elements of ${(\mathbb{Z}/2\mathbb{Z})}[X]$.
More help with implicit differentiation Given $(5x+5y~)^3= 125x^3+125y^3$, find the derivative. Using the chain rule and power rule, I came up with $3(5x+5y)^2 \cdot (\frac{d}{dx}5x+\frac{dy}{dx}5y)= 3 \cdot 125x^2 +3 \cdot 125y^2$ Now, the derivative of $5x~$ is 5, but what about the derivative of $5y~$? I know that $\frac{dy}{dx}5y~$ turns to $5(\frac{dy}{dx}(y~))$ What happens after that? When I plugged the formula into Wolfram Alpha to double check my steps, it says that $\frac{dy}{dx}(y~)=0$ What is the reasoning behind that?
This seems settled; I'll address how a problem like this would be handled in Mathematica (unfortunately the functionality doesn't work in Wolfram Alpha). You'll first want to express your given in the form $f(x,y)=0$, like so: expr = (5x + 5y)^3 - 125x^3 - 125y^3; The key is to remember that Mathematica supports two sorts of derivatives: the partial derivative D[] and the total derivative Dt[]. As your y depends on x, Dt[] is what's appropriate here (D[] treats y as a constant, so D[y, x] gives 0). Evaluating Dt[expr, x] then returns -375 x^2 - 375 y^2 Dt[y, x] + 3 (5 x + 5 y)^2 (5 + 5 Dt[y, x]). From there, you can use Solve[] to solve for Dt[y, x] like so: Dt[y, x] /. Solve[Dt[expr, x] == 0, Dt[y, x]]
proof of inequality $e^x\le x+e^{x^2}$ Does anybody have a simple proof this inequality $$e^x\le x+e^{x^2}.$$ Thanks.
Consider $f(x) = x+e^{x^2}-e^x$. Then $f'(x) = 1+ 2xe^{x^2}-e^x$. Find the critical points: $x=0$ is one, with $f(0)=0$ and $f'(0)=0$, and checking that $f'(x)<0$ for $x<0$ and $f'(x)>0$ for $x>0$ shows it is the global minimum. So the minimum of $f(x)$ is $0$, which implies that $e^x \leq x+e^{x^2}$.
Proving that $\gcd(ac,bc)=|c|\gcd(a,b)$ Let $a$, $b$ an element of $\mathbb{Z}$ with $a$ and $b$ not both zero and let $c$ be a nonzero integer. Prove that $$(ca,cb) = |c|(a,b)$$
Let $d = (ca,cb)$ and $d' = |c|(a,b)$. Show that $d|d'$ and $d'|d$.
Guessing a subset of $\{1,...,N\}$ I pick a random subset $S$ of $\{1,\ldots,N\}$, and you have to guess what it is. After each guess $G$, I tell you the number of elements in $G \cap S$. How many guesses do you need?
This can be solved in $\Theta(N/\log N)$ queries. First, here is a lemma: Lemma: If you can solve $N$ in $Q$ queries, where one of the queries is the entire set $\{1,\dots,N\}$, then you can solve $2N+Q-1$ in $2Q$ queries, where one of the queries is the entire set. Proof: Divide $\{1,\dots,2N+Q-1\}$ into three sets, $A,B$ and $C$, where $|A|=|B|=N$ and $|C|=Q-1$. By assumption, there exist subsets $A_1,\dots,A_{Q-1}$ such that you could find the unknown subset of $A$ alone by first guessing $A$, then guessing $A_1,\dots,A_{Q-1}$. Similarly, there exist subsets $B_1,\dots,B_{Q-1}$ for solving $B$. Finally, write $C=\{c_1,\dots,c_{Q-1}\}$. The winning strategy is: * *Guess the entire set, $\{1,\dots,2N+Q-1\}$. *Guess $B$. *For each $i\in \{1,\dots,Q-1\}$, guess $A_i\cup B_i$. *For each $i\in \{1,\dots,Q-1\}$, guess $A_i\cup (B\setminus B_i)\cup \{c_i\}$. Using the parity of the the sum of the guesses $A_i\cup B_i$ and $A_i\cup (B\setminus B_i)\cup \{c_i\}$, you can determine whether or not $c_i\in S$. Then, using these same guesses, you get a system of equations which lets you solve for $|A_i \cap S|$ and $|B_i\cap S|$ for all $i$. This gives you enough info to determine $A\cap S$ and $B\cap S$, using the assumed strategy.$\tag*{$\square$}$ Let $\def\Opt{\operatorname{Opt}}\Opt(N)$ be the fewest number of guesses you need for $\{1,\dots,N\}$. Using the lemma and induction, you can show that $$ \Opt(k2^{k-1}+1)\le 2^k\qquad \text{for all }k\in \{0,1,2,\dots\} $$ Note that when $N=k2^{k-1}+1$, we have $\Opt(N)\le 2^k$, and $$\frac N{\frac12\log_2 N}=\frac{k2^{k-1}+1}{\frac12\log_2(k2^{k-1}+1)}= 2^k(1+o(1))$$ It follows that $\Opt(N)\in O(N/\log N)$ when $N$ is of the form $k2^{k-1}+1$. Since $\Opt(N+1)\le \Opt(N)+1$, this extends to all $N$. Combined with the entropy argument, we get $\Opt(N)\in \Theta(N/\log N)$.
Cardinality of sets of functions with well-ordered domain and codomain I would like to determine the cardinality of the sets specified bellow. Nevertheless, I don't know how to approach or how to start such a proof. Any help will be appreciated. If $X$ and $Y$ are well-ordered sets, then determine the cardinality of: * *$\{f : f$ is a function from $X$ to $Y\}$ *$\{f : f$ is an order-preserving function from $X$ to $Y\}$ *$\{f : f$ is a surjective and order-preserving function from $X$ to $Y\}$
* *The cardinality of the set of functions from $X$ to $Y$ is the definition of the cardinal $Y^X$. *The number of order-preserving functions from $X$ to $Y$, given that well-orders of each set have been fixed, depends on the nature of those orders. For example, there are no such orders in the case that the order type of $X$ is longer than the order type of $Y$. If $X$ and $Y$ are finite, then there is some interesting combinatorics involved to give the right answer. For example, if both are finite of the same size, there is only one order-preserving function. If $Y$ is one bigger, then there are $Y$ many (you can put the hole anywhere). And so on. If $Y$ is infinite, of size at least $X$, then you get $Y^X$ again, since you can code any function into the omitted part, by leaving gaps of a certain length. *A surjective order-preserving map is an isomorphism, and for well-orders, this is unique if it exists at all, so the answer is either 0 or 1, depending on whether the orders are isomorphic or not.
Homogeneous topological spaces Let $X$ be a topological space. Call $x,y\in X$ swappable if there is a homeomorphism $\phi\colon X\to X$ with $\phi(x)=y$. This defines an equivalence relation on $X$. One might call $X$ homogeneous if all pairs of points in $X$ are swappable. Then, for instance, topological groups are homogeneous, as well as discrete spaces. Also any open ball in $\mathbb R^n$ is homogeneous. On the other hand, I think, the closed ball in any dimension is not homogeneous. I assume that these notions have already been defined elsewhere. Could you please point me to that? Are there any interesting properties that follow for $X$ from homogeneity? I think for these spaces the group of homeomorphisms of $X$ will contain a lot of information about $X$.
Googling "topological space is homogeneous" brings up several articles that use the same terminology, for example this one. It is also the terminology used in the question Why is the Hilbert cube homogeneous?. The Wikipedia article on Perfect space mentions that a homogeneous space is either perfect or discrete. The Wikipedia article on Homogeneous space, which uses a more general definition, may also help.
Is it standard to say $-i \log(-1)$ is $\pi$? I typed $\pi$ into Wolfram Alpha and in the short list of definitions there appeared $$ \pi = -i \log(-1)$$ which really bothered me. Multiplying on both sides by $2i$: $$ 2\pi i = 2 \log(-1) = \log(-1)^2 = \log 1= 0$$ which is clearly false. I guess my error is $\log 1 = 0$ when $\log$ is complex-valued. I need to use $1 = e^{2 \pi i}$ instead. So my question: is it correct for WA to say $\pi = -i \log(-1)$? Or should be they specifying "which" $-1$ they mean? Clearly $ -1 = e^{i \pi}$ is the "correct" value of $-1$ here.
The (principal value) of the complex logarithm is defined as $\log z = \ln |z| + i Arg(z)$. Therefore, $$\log(-1) = \ln|-1| + i Arg(-1) = 0 + i \pi.$$ and then, one simply gets $$ -i \log(-1) = -i (i \pi) = \pi. $$
Squaring across an inequality with an unknown number This should be something relatively simple. I know there's a trick to this, I just can't remember it. I have an equation $$\frac{3x}{x-3}\geq 4.$$ I remember being shown at some point in my life that you could could multiply the entire equation by $(x-3)^2$ in order to get rid of the divisor. However, $$3x(x-3)\geq 4(x-3)^2.$$ Really doesn't seem to go anywhere. Anyone have any clue how to do this, that could perhaps show me/ point me in the right direction?
That method will work, but there's actually a simpler more general way. But first let's finish that method. $\:$ After multiplying through by $\rm\: (x-3)^2\: $ (squared to preserve the sense of the inequality) we obtain $\rm\ 3\:x\:(x-3) \ge\: 4\:(x-3)^2\:.\:$ Putting all terms on one side and factoring out $\rm\ x-3\ $ we obtain $\rm\: p(x) = (x-3)\:(a\:x-b) \ge 0\ $ for some $\rm\:a\:,\:b\:.\:$ The (at most) two roots partition the real line into (at most) three intervals where $\rm\ p(x)\ $ has constant sign. So the answer follows by simply testing these few intervals to determine where $\rm\:p(x)\:$ has the desired sign. But multiplying through by $\rm\: (x-3)^2\: $ creates more work than necessary. Instead we can simply multiply through by $\rm\: x-3\: $ and worry about the signs later, since we really only need to know the other root (besides $\rm\ x = 3\:$) of the quadratic $\rm\: p(x)\:.\:$ Indeed, if we multiply by $\rm\ x-3\ $ we get either $\rm\ 3x \ge 4\:x-12\ \Rightarrow\ 12 \ge x\ $ or we get the reverse $\rm\ 12 \le x\:,\:$ depending on the sign of $\rm\ x-3\:.\: $ Either way we get the same root $\rm\: x = 12\:,\: $ so we obtain the same partition of the real line into intervals where the function has constant sign. This method is simpler since one works with lower degree polynomials, here linear vs. quadratic in the first method above. Similar remarks hold true for arbitrary rational functions. Suppose that $\rm\: A,B,C,D\: $ are polynomials in $\rm\:x\:$ and we seek to determine where $\rm\: A/B\ >\ C/D\:.\: $ The first method converts to polynomials by multiplying it by $\rm\: (BD)^2\: $ resulting in $\rm\: ABD^2 >\: CDB^2\ $ or $\rm\ BD\ (AD-BC) > 0\:.\:$ For the second method, we partition the real line by the roots of the denominators $\rm\: B,D\ $ and also by the roots of the polynomial obtained by multiplying through by $\rm\:BD\:,\:$ i.e. the roots of $\rm\: AD-BC\:.\:$ However the union of these root sets is precisely the same as the roots of $\rm\: BD\ (AD-BC)\: $ from the first method. Since both methods partition the real line into the same constant-sign intervals, they're equivalent. If you study rational (and algebraic) functions and their Riemann surfaces you'll see that this is just a special case of how their roots (including poles = roots at $\infty$) serve to characterize them. From another viewpoint, the above method can be viewed as a very special case of Tarski's quantifier elimination algorithm for deciding arbitrary real polynomial inequalities - which, like above, works by decomposing $\rm\: \mathbb R^n\: $ into a finite number constant-sign (semi-)algebraic "cylinders".
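For the record, carrying either method through for the inequality at hand: the partition points are $x=3$ (where the expression is undefined) and $x=12$, and testing one point in each of the three resulting intervals gives $$\frac{3x}{x-3}\ \ge\ 4 \quad\Longleftrightarrow\quad 3 < x \le 12.$$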
Trig identity proof help I'm trying to prove that $$ \frac{\cos(A)}{1-\tan(A)} + \frac{\sin(A)}{1-\cot(A)} = \sin(A) + \cos(A)$$ Can someone help me to get started? I've done other proofs but this one has me stumped! Just a start - I don't need the whole proof. Thanks.
I would try multiplying the numerator and denominator both by (for the first term) $1+\tan{(A)}$ and for the second $1+\cot{(A)}$. From there it should just be a little bit of playing with Pythagorean identities ($\sin^2{(A)}+\cos^2{(A)}=1$, $\tan^2{(A)}+1=\sec^2{(A)}$, and $1+\cot^2{(A)}=\csc^2{(A)}$) and writing $\tan{(A)}$ and $\cot{(A)}$ in terms of $\sin$ and $\cos$.
In a unital $R$-module $M$, if $\forall M_1\!\lneq\!M\;\;\exists M_2\!\lneq\!M$, such that $M_1\!\cap\!M_2\!=\!\{0\}$, then $M$ is semisimple PROBLEM: Let $R$ be a ring with $1$ and $M$ be a unital $R$-module (i.e. $1x=x$). Let there for each submodule $M_1\neq M$ exist a submodule $M_2\neq M$, such that $M_1\cap M_2=\{0\}$. How can I prove, that $M$ is semisimple? DEFINITIONS: A module $M$ is semisimple iff $\exists$ simple submodules $M_i\leq M$, such that $M=\bigoplus_{i\in I}M_i$. A module $M_i$ is simple iff it has no submodules (other than $\{0\}$ and $M$). KNOWN FACTS: $M$ is semisimple $\Leftrightarrow$ $\exists$ simple submodules $M_i\leq M$, such that $M=\sum_{i\in I}M_i$ (the sum need not be direct) $\Leftrightarrow\forall\!M_1\!\leq\!M\;\exists M_2\!\leq\!M$ such that $M_1\oplus M_2=M$ (i.e. every submodule is a direct sumand).
Let $M$ be your module, and let $M_1$ be a submodule. Consider the set $\mathcal S$ of all submodules $N$ of $M$ such that $M_1\cap N=0$, and order $\mathcal S$ by inclusion. It is easy to see that $\mathcal S$ satisfies the hypothesis of Zorn's Lemma, so there exists an element $M_2\in\mathcal S$ which is maximal. We have $M_1\cap M_2=0$ and we want to show that $M_1+M_2=M$. If that were not the case, your hypothesis would provide a submodule $P\subseteq M$ such that $(M_1+M_2)\cap P=0$. Can you see how to reach a contradiction now?
$T(1) = 1 , T(n) = 2T(n/2) + n^3$? Divide and conquer $T(1) = 1 , T(n) = 2T(n/2) + n^3$? Divide and conquer; I need help, I don't know how to solve it.
Use Akra-Bazzi which is more useful than the Master Theorem. Using Akra-Bazzi, I believe you get $$T(x) = \Theta(x^3)$$ You can also use Case 3 of the Master theorem in the wiki link above. (Note: That also gives $\Theta(x^3)$.)
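For what it's worth, a quick empirical check of the $\Theta(n^3)$ claim: the sketch below (Python) evaluates the recurrence at powers of two and watches $T(n)/n^3$ settle towards a constant (it tends to $4/3$):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        return 1 if n == 1 else 2 * T(n // 2) + n ** 3

    for k in range(1, 21):
        n = 2 ** k
        print(n, T(n) / n ** 3)    # ratio approaches 4/3, consistent with Theta(n^3)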
Why every polynomial over the algebraic numbers $F$ splits over $F$? I read that if $F$ is the field of algebraic numbers over $\mathbb{Q}$, then every polynomial in $F[x]$ splits over $F$. That's awesome! Nevertheless, I don't fully understand why it is true. Can you throw some ideas about why this is true?
Consider some polynomial $$x^n = \sum_{i=0}^{n-1} c_i x^i,$$ where the $c_i$ are algebraic numbers. Thus for each $i$ we have a similar identity $$c_i^{n_i} = \sum_{j=0}^{n_i-1} d_{i,j} c_i^j,$$ where this time the $d_{i,j}$ are rationals. Suppose that $\alpha$ is a root of the original polynomial. By using the above identities, every power of $\alpha$ can be expressed as a linear combination with rational coefficients of terms of the form $$\alpha^m c_0^{m_0} \cdots c_{n-1}^{m_{n-1}},$$ where $$0 \leq m < n \text{ and } 0 \leq m_i < n_i.$$ Putting all these $N = nn_0\cdots n_{n-1}$ elements into a vector $v$, we get that there are rational vectors $u_k$ such that $$\alpha^k = \langle u_k,v \rangle.$$ Among the first $N+1$ vectors $u_k$ there must be some non-trivial rational linear combination that vanishes: $$\sum_{k=0}^N t_k u_k = 0.$$ Therefore $$\sum_{k=0}^N t_k \alpha^k = 0,$$ and so $\alpha$ is also algebraic. This proof is taken from these lecture notes, but it's pretty standard.
Factorial of 0 - a convenience? If I am correct in stating that a factorial of a number ( of entities ) is the number of ways in which those entities can be arranged, then my question is as simple as asking - how do you conceive the idea of arranging nothing ? Its easy to conceive of a null element in the context of arrays, for example - so you say that there is only one way to present a null element. But, in layman terms - if there are three humans $h1, h2, h3$ that need to be arranged to sit on three chairs $c1, c2, c3$ - then how do you conceive of a) a null human, and b) to arrange those ( that ? ) null humans ( human ? ) on the three chairs ? Please note that referral to humans is just for easy explanation - not trying to be pseudo-philosophical. Three balls to be arranged on three corners of a triangle works just fine. So basically, how do you conceive of an object that doesn't exist, and then conceive of arranging that object ? So, in essence ... is $0! = 1$, a convenience for mathematicians ? Not that its the only convenience, but just asking. Of course, there are many. If yes, then its a pity that I can't find it stated like so anywhere. If not, then can anybody suggest resources to read actual, good proofs ?
There is exactly one way to arrange nothing: the null arrangement. You've misused your chair analogy: when you arrange null humans, you do it on null chairs, and there is exactly one way to do this. Perhaps the following alternate definition will make things clearer. Suppose I have $n$ cards labeled $1, 2, ... n$ in order, and I shuffle them. Then $\frac{1}{n!}$ is the probability that they will stay in the original order. If $n = 0$, this probability is $1$ since $0$ cards can only be arranged in one possible order.
How to prove that square-summable sequences form a Hilbert space? Let $\ell^2$ be the set of sequences $x = (x_n)_{n\in\mathbb{N}}$ ($x_n \in \mathbb{C}$) such that $\sum_{k\in\mathbb{N}} \left|x_k\right|^2 < \infty$, how can I prove that $\ell^2$ is a Hilbert space (with dot-product $\left(x,y\right) = \sum_{k\in\mathbb{N}} x_k\overline{y_k}$). This is a standard textbook exercise: apparently this is easy and, even to me, it seems self-evident. However, I don't know what to do with the infinite sum.
This is more of an addendum for later, when you dig deeper into functional analysis: Complex analysis tells us that every holomorphic function can be represented by its Taylor series, locally. Actually, the space of square-summable sequences is, as a Hilbert space, isomorphic to the space of all holomorphic functions on the unit disc $D := \{ z \in \mathbb{C}, |z| < 1 \}$ that are square integrable: $$ \int_D |f(z)|^2 dz < \infty $$ with the obvious scalar product $$ (f, g) = \int_D \bar f (z) g(z) dz $$ via the correspondence $$ f(z) \leftrightarrow \sum_{i = 0}^{\infty} a_i z^i $$ (after a suitable normalization of the coefficients) and vice versa. It may help to compare the proof of the Hilbert space axioms for the space of sequences with the proof of the axioms for the function space (the latter needs some "advanced" knowledge of complex calculus). Some may seem to be easier, some more involved. For example it is easy to see that the scalar product actually is a scalar product, without the need to handle infinite sums.
How to compute the following formulas? $\sqrt{2+\sqrt{2+\sqrt{2+\dots}}}$ $\dots\sqrt{2+\sqrt{2+\sqrt{2}}}$ Why are they different?
Suppose that the first converges to some value $x$. Because the whole expression is identical to the first inner radical, $\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}=x=\sqrt{2+x}$ and solving for $x$ gives $x=2$. Of course, I haven't justified that it converges to some value. The second can be thought of as starting with $\sqrt{2}$ and repeatedly applying the function $f(x)=\sqrt{2+x}$. Trying this numerically suggests that the values converge to 2. Solving $f(x)=x$ shows that $2$ is a fixed point of that function. Looking at the second expression is actually how I'd justify (though it is perhaps not a rigorous proof) that the first expression converges.
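If you want to watch the second interpretation converge numerically, a few lines of Python suffice:

    from math import sqrt

    x = sqrt(2)
    for _ in range(10):
        x = sqrt(2 + x)
        print(x)       # climbs towards 2: 1.847..., 1.961..., 1.990..., ...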
What is the most general mathematical object that defines a solution to an ordinary differential equation? What is the most general object that defines a solution to an ordinary differential equation? (I don't know enough to know if this question is specific enough. I am hoping the answer will be something like "a function", "a continuous function", "a piecewise continuous function" ... or something like this.)
(All links go to Wikipedia unless stated otherwise.) Elaborating on joriki's answer: The most general spaces where it makes sense to talk about differential equations are certain classes of topological vector spaces; it is, for example, rather straightforward to formulate the concept of a differential equation in Banach spaces. (Here the solution to an equation is a mapping of topological vector spaces.) For differential equations in $\mathbb{R}^n$, the solutions themselves are elements of certain topological vector spaces. The most general topological vector spaces that are considered are AFAIK Sobolev spaces: these are function spaces such that each individual function is normable with respect to a prescribed $L^p$ norm, has generalized derivatives in the sense of distributions up to the order that is necessary to formulate the weak formulation (see also the Azimuth wiki) of the equation one would like to solve, such that the generalized derivatives are again normable with respect to the $L^p$ norm. Some Sobolev spaces have characterizations like "all piecewise continuous functions that..." which are a little bit more complicated than that, and differ from space to space, so I'd rather refer to the extensive literature instead of repeating that here :-(
Formula to Move the object in Circular Path I want to move one object (a dot) along a circular path, by updating the x and y position of that object. Thanks.
There are a few ways to choose from, but a nice one that doesn't require per-step trig functions (so can be calculated by a computer very quickly) is the midpoint circle algorithm. Otherwise, you can use x = cos(theta)*radius, y = sin(theta)*radius for 0 <= theta < 2*pi (in radians; add the circle's center coordinates if it isn't at the origin).
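As a concrete sketch of the second (parametric) approach in Python, stepping the angle a little each frame (the centre, radius and step size are just placeholders):

    import math

    def circle_positions(cx, cy, radius, step=0.05):
        """Yield successive (x, y) points moving around a circle with the given centre and radius."""
        theta = 0.0
        while True:
            yield cx + radius * math.cos(theta), cy + radius * math.sin(theta)
            theta = (theta + step) % (2 * math.pi)

    path = circle_positions(100.0, 100.0, 50.0)
    for _ in range(5):
        print(next(path))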
Intuitive Understanding of the constant "$e$" Potentially related-questions, shown before posting, didn't have anything like this, so I apologize in advance if this is a duplicate. I know there are many ways of calculating (or should I say "ending up at") the constant e. How would you explain e concisely? It's a rather beautiful number, but when friends have asked me "what is e?" I'm usually at a loss for words -- I always figured the math explains it, but I would really like to know how others conceptualize it, especially in common-language (say, "English"). related but not the same: Could you explain why $\frac{d}{dx} e^x = e^x$ "intuitively"?
Professor Ghrist of the University of Pennsylvania would say that $e^x$ is the sum of the infinite series $$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!},$$ so that $e$ itself is $\sum_{k=0}^{\infty} \frac{1}{k!}$. If you are interested in Euler's number then you should not miss his Calculus of a Single Variable course on Coursera.
Why eliminate radicals in the denominator? [rationalizing the denominator] Why do all school algebra texts define simplest form for expressions with radicals to not allow a radical in the denominator. For the classic example, $1/\sqrt{3}$ needs to be "simplified" to $\sqrt{3}/3$. Is there a mathematical or other reason? And does the same apply to exponential notation -- are students expected to "simplify" $3^{-1/2}$ to $3^{1/2}/3$ ?
The form with neither denominators in radicals nor radicals in denominators and with only squarefree expressions under square-root signs, etc., is a canonical form, and two expressions are equal precisely if they're the same when put into canonical form. When are two fractions equal? How do you know that $\dfrac{51}{68}$ is the same as $\dfrac{39}{52}$? They're both the same when reduced to lowest terms. How do you know that $\dfrac{1}{3+\sqrt{5}}$ is the same as $\dfrac{3\sqrt{5}-5}{4\sqrt{5}}$? Again, they're the same when put into canonical form. How do you know that $\dfrac{16+7i}{1+2i}$ is the same as $\dfrac{61}{6+5i}$? Same thing: both come out to $6-5i$.
Center of gravity of a self intersecting irregular polygon I am trying to calculate the center of gravity of a polygon. My problem is that I need to be able to calculate the center of gravity for both regular and irregular polygons and even self intersecting polygons. Is that possible? I've also read that: http://paulbourke.net/geometry/polyarea/ But this is restricted to non self intersecting polygons. How can I do this? Can you point me to the right direction? Sub-Question: Will it matter if the nodes are not in order? if for example you have a square shape and you name the top right point (X1Y1) and then the bottom right point (X3Y3)? In other words if your shape is like 4-1-2-3 (naming the nodes from left to right top to bottom) Note: Might be a stupid question but I'm not a maths student or anything! Thanks
I think your best bet will be to convert the self-intersecting polygon into a set of non-self-intersecting polygons and apply the algorithm that you linked to to each of them. I don't think it's possible to solve your problem without finding the intersections, and if you have to find the intersections anyway, the additional effort of using them as new vertices in a rearranged vertex list is small compared to the effort of finding them. To answer your subquestion: Yes, the order of the nodes does matter, especially if the polygon is allowed to be self-intersecting since in that case the order is an essential part of the specification of the polygon and different orders specify different polygons -- for instance, the "square" with the ordering you describe would be the polygon on the right-hand side of the two examples of self-intersecting polygons that the page you linked to gives (rotated by $\pi/2$). P.S.: I just realized that different orders can also specify different non-self-intersecting (but not convex) polygons, so the only case where you could specify a polygon by its vertices alone is if you know it's convex. But even then you have to use the vertices in the right order in the algorithm you linked to.
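For each non-self-intersecting piece, the area-weighted centroid formula from the page you linked looks roughly like this in Python (a sketch; it assumes the vertices are given in order, in either orientation, and that the piece really is simple):

    def polygon_centroid(pts):
        """Centroid of a simple polygon given as an ordered list of (x, y) vertices."""
        a = cx = cy = 0.0
        n = len(pts)
        for i in range(n):
            x0, y0 = pts[i]
            x1, y1 = pts[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        a *= 0.5
        return cx / (6.0 * a), cy / (6.0 * a)

    print(polygon_centroid([(0, 0), (4, 0), (4, 2), (0, 2)]))   # (2.0, 1.0)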
Disjoint convex sets that are not strictly separated Question 2.23 out of Boyd & Vanderberghe's Convex Optimization: Give an example of two closed convex sets that are disjoint but cannot be strictly separated. The obvious idea is to take something like unbounded sets which are disjoint but approach each other in the limit. For example, $f(x) = \frac1x$ and $g(x) = -\frac1x$. But isn't $x=0$ a strictly separating hyperplane here?
Take $X = \{(x,y) \mid xy\geq 1, x,y>0\}$ and $Y = \{(x,y) \mid x\leq 0\}$.
Graph - MST in O(v+e) $G=(V,E)$, with weights on the edges that can be only $a$ or $b$ (where $a<b$). I need to find an MST of the graph in $O(V+E)$. I think to put all the edges in an array and then scan the array: first check only the edges of weight $a$, and afterwards those of weight $b$. The algorithm is like Kruskal's: check for every edge whether it forms a cycle. But I'm not sure that this takes $O(V+E)$. Thank you!
The running time of Kruskal's algorithm is dominated by the time needed to sort the edges according to their weights. In your case you can do this in linear time: there are only two possible weights, so a single pass that puts the $a$-edges before the $b$-edges suffices. The rest of Kruskal's algorithm runs in (essentially) linear time as well; the union-find structure only adds a near-constant inverse-Ackermann factor. So you get linear running time after all.
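A sketch of that idea in Python (the union-find helper is the standard one, nothing specific to this problem):

    def mst_two_weights(n, edges):
        """edges: list of (u, v, w) with w taking only two values; vertices are 0..n-1."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        a = min(w for _, _, w in edges)
        ordered = [e for e in edges if e[2] == a] + [e for e in edges if e[2] != a]

        mst = []
        for u, v, w in ordered:                 # Kruskal on the pre-bucketed edges
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, w))
        return mst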
Why is the generalized quaternion group $Q_n$ not a semi-direct product? Why is the generalized quaternion group $Q_n$ not a semidirect product?
How many elements of order 2 does a generalized quaternion 2-group have? How many elements of order 2 must each factor in the semi-direct product have? Note that dicyclic groups (generalized quaternion groups that are not 2-groups) can be semi-direct products. The dicyclic group of order 24 is a semi-direct product of a quaternion group of order 8 acting on a cyclic group of order 3.
Proving continuous image of compact sets are compact How to prove: Continuous function maps compact set to compact set using real analysis? i.e. if $f: [a,b] \rightarrow \mathbb{R}$ is continuous, then $f([a,b])$ is closed and bounded. I have proved the bounded part. So now I need some insight on how to prove $f([a,b])$ is closed, i.e. $f([a,b])=[c,d]$. From Extreme Value Theorem, we know that $c$ and $d$ can be achieved, but how to prove that if $c < x < d$, then $x \in f([a,b])$ ? Thanks!
Lindsay, what you need is the intermediate value theorem; its proof is given on Wikipedia.
How can I re-arrange this equation? I haven't used my algebra skills much for years and they seem to have atrophied significantly! I'm having real trouble working out how to re-arrange a formula I've come across to get $x$ by itself on the left hand side. It looks like this: $\frac{x}{\sqrt{A^{2}-x^{2}}}=\frac{B+\sqrt{C+Dx}}{E+\sqrt{F+G\sqrt{A^{2}-x^{2}}}}$ I've tried every method I can remember but I can't get rid of those pesky square roots! Any ideas?
I would start by multiplying the numerator and denominator on the right by $E-\sqrt{F+G\sqrt{A^2-x^2}}$: $$\frac{x}{\sqrt{A^2 - x^2}} = \frac{\left(B + \sqrt{C + Dx}\right)\left(E - \sqrt{F + G\sqrt{A^2 - x^2}}\right)}{E^2-F-G\sqrt{A^2 - x^2}}$$ It may also help the manipulation to set $y = \sqrt{A^2 - x^2}$ for a while.
A sufficient condition for $U \subseteq \mathbb{R}^2$ such that $f(x,y) = f(x)$ I have another short question. Let $U \subseteq \mathbb{R}^2$ be open and $f: U \rightarrow \mathbb{R}$ be continuously differentiable. Also, $\partial_y f(x,y) = 0$ for all $(x,y) \in U$. I want to find a sufficient condition for $U$ such that $f$ only depends on $x$. Of course, the condition shouldn't be too restrictive. Is it sufficient for $U$ to be connected? Thanks a lot for any help.
It's enough for $U$ to have the property that, whenever $(x,y_1)$ and $(x,y_2)$ are in $U$, so is the line segment between them. This can be proved by applying the mean value theorem to the function $g(y)=f(x,y)$. It is not enough for $U$ to be connected. For example, take $U=\mathbb{R}^2 \setminus \{(x,0)\mid x \le 0 \}$. Let $f(x,y)=0$ for $x \ge 0$, $f(x,y)=x^2$ for $x<0, y<0$ and $f(x,y)=-x^2$ for $x<0, y>0$.
Question regarding Hensel's Lemma Hensel's Lemma Suppose that f(x) is a polynomial with integer coefficients, $k$ is an integer with $k \geq 2$, and $p$ a prime. Suppose further that $r$ is a solution of the congruence $f(x) \equiv 0 \pmod{p^{k-1}}$. Then, If $f'(r) \not\equiv 0 \pmod{p}$, then there is a unique integer t, $0 \leq t < p$, such that $$f(r + tp^{k-1}) \equiv 0 \pmod{p^k}$$ given by $$t \equiv \overline{-f'(r)}\frac{f(r)}{p^{k-1}} \pmod{p}$$ where $\overline{-f'(r)}$ is an inverse of f'(r) modulo p. If $f'(r) \equiv 0 \pmod{p}$ and $f(r) \equiv 0 \pmod{p^k}$, then $f(r+tp^{k-1}) \equiv 0 \pmod{p^k}$ $\forall$ integers t. If $f'(r) \equiv 0 \pmod{p}$ and $f(r) \not\equiv 0 \pmod{p^k}$, then $f(x) \equiv 0 \pmod{p^k}$ has no solutions with $x \equiv r \pmod{p^{k-1}}$ I'm practicing solving congruence equation using Hensel's Lemma, however, I was a little confused with the last two cases. To be clear, I use this congruence equation as an example $f(x) = x^4 + 4x^3 + 2x^2 + 2x + 12 \equiv 0 \pmod{625}$ My attempt was: By inspecting all remainders of $5$, i.e $0, 1, 2, 3, 4$ \ We can see that $ x \equiv 3 \pmod{5}$ is the solution to $f(x) \equiv 0 \pmod{5}$ \ Apply Hensel's Lemma for $5^2 = 25$, we have: $$f'(x) = 4x^3 + 12x^2 + 4x + 2$$ And, $$f'(3) = 4.3^3 + 12.3^2 + 4.3 + 2 = 230 \equiv 0 \pmod{5}$$ $$f(3) = 3^4 + 4.3^3 + 2.3^2 + 2.3 + 12 = 225 \equiv 0 \pmod{5^2}$$ Hence, $$x \equiv 3 \pmod{5^2}$$ Apply Hensel's Lemma for $5^3 = 125$, we have: $$f(3) = 225 \not\equiv 0 \pmod{5^3}$$ So $f(x) \equiv 0 \pmod{5^3}$ has no solutions with $x \equiv 3 \pmod{5^2}$. \ Therefore, there are no solutions to $f(x) = x^4 + 4x^3 + 2x^2 + 2x + 12 \equiv 0 \pmod{625}$ What I understood about Hensel's Lemma is, it let us lift up the solution from $p^k$ to $p^{k + 1}$ each time we found a solution of a current $k$. But, in the case 2, when it said for all integers t, I was confused. Does it mean I can use the previous solution with $p^k$. By that I mean, if I have $x \equiv 3 \pmod{5}$, then if case 2 satisfies, then $x \equiv 3 \pmod{25}$? If the question is vague, please let me know. I will try my best to rewrite it again. Sorry for my poor English writing. Thanks,
You go off track at the word "Hence". If $f'(3)\equiv 0\pmod 5$ and $f(3)\equiv 0 \pmod{25}$ (I assume you've done this correctly; I didn't check), that means that $x\equiv 3,8,13,18,23 \pmod{25}$ are all solutions modulo 25. You only verified that there are no solutions modulo 125 which are 3 modulo 25. There may still be solutions which are 8, 13, 18, or 23 modulo 25.
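Since the question is about one concrete congruence, a brute-force check settles the factual part immediately. The sketch below (plain Python, using only the polynomial from the question) lists every solution modulo $5$, $25$, $125$ and $625$, so you can see which residues modulo $25$ actually lift and whether the "no solutions" conclusion was premature.

```python
# Brute-force enumeration of the solutions of f(x) = 0 modulo 5, 25, 125, 625.
def f(x):
    return x**4 + 4*x**3 + 2*x**2 + 2*x + 12

for modulus in (5, 25, 125, 625):
    sols = [x for x in range(modulus) if f(x) % modulus == 0]
    print(modulus, sols)
```

Whatever the output, the point of the answer stands: ruling out the lifts of $3 \pmod{25}$ alone does not rule out the other residues modulo $25$.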
Non-associative, non-commutative binary operation with an identity Can you give me a few examples of a binary operation that is not associative and not commutative but has an identity element?
Here's an example of a commutative, non-associative operation with an identity and inverses, inspired by an answer to this question. Consider the game of rock-paper-scissors: $R$ is rock, $P$ is paper, $S$ is scissors, and $1$ is fold/draw/indeterminate. Let $\ast$ be the binary operation "play". \begin{array}{r|cccc} \ast & 1 & R & P & S\\ \hline 1& 1 & R & P & S \\ R & R& 1 & P & R\\ P & P& P & 1 & S\\ S & S& R & S & 1 . \end{array} The multiplication table above defines a set of elements, with a binary operation, that is commutative and non-associative. Also, each element has an inverse (itself), and the identity exists.
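For readers who want to verify the table mechanically, here is a short brute-force check (the dictionary below is just the table above transcribed; nothing else is assumed):

```python
# Checks that 1 is a two-sided identity, every element is its own inverse,
# * is commutative, and exhibits triples witnessing non-associativity.
from itertools import product

E = ["1", "R", "P", "S"]
T = {  # row * column, copied from the table
    ("1","1"):"1", ("1","R"):"R", ("1","P"):"P", ("1","S"):"S",
    ("R","1"):"R", ("R","R"):"1", ("R","P"):"P", ("R","S"):"R",
    ("P","1"):"P", ("P","R"):"P", ("P","P"):"1", ("P","S"):"S",
    ("S","1"):"S", ("S","R"):"R", ("S","P"):"S", ("S","S"):"1",
}
op = lambda a, b: T[(a, b)]

assert all(op("1", a) == a == op(a, "1") for a in E)          # identity
assert all(op(a, a) == "1" for a in E)                        # each element self-inverse
assert all(op(a, b) == op(b, a) for a, b in product(E, E))    # commutative
bad = [(a, b, c) for a, b, c in product(E, E, E)
       if op(op(a, b), c) != op(a, op(b, c))]
print("non-associative triples:", bad[:3], "...")             # e.g. (R*P)*S != R*(P*S)
```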
Is it Variation? Counting elements Let's assume that we have an element which can take a value from $1$ to $n$ (let's set $n = 20$ to make it easier). And we have a set that consists of objects, each consisting of three elements $\langle e_1, e_2, e_3 \rangle$. We also have one rule for the objects in the set: $e_1 \geq e_2 \geq e_3$ - examples of good objects: $\langle n, n, n\rangle$, $\langle n, n-1, n-1\rangle$, $\langle 20, 19, 18\rangle$, $\langle 3, 2, 1\rangle$, $\langle 3, 3, 3\rangle$, $\langle 3, 2, 2\rangle$. - examples of bad objects: $\langle n, n+1, n\rangle$, $\langle 2, 3, 2\rangle$, $\langle 3, 2, 4\rangle$. Now the question: how do we count the number of all good objects, i.e. those that belong to this set (don't violate the rule)? Can you give me any hints? I can solve this with a brute force method, but probably there is a shorter way.
If the first number is $k$, and the second number is $j$, where $j \leq k$ then the last number has $j$ choices. So the number of favorable cases is $$\sum_{k=1}^n \sum_{j=1}^k j = \sum_{k=1}^n \frac{k(k+1)}{2} = \frac{n(n+1)(n+2)}{6}$$ In general, if you have elements from $1$ to $n$ and want to choose an $m$ element set with the ordering you want the answer is $$\binom{n+m-1}{m}$$ which can be seen by induction on $m$ or by a combinatorial argument as proved here.
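A quick brute-force comparison against both closed forms (a throwaway sketch; `n = 20` matches the example in the question):

```python
# Count non-increasing triples directly and compare with the closed forms.
from itertools import product, combinations_with_replacement
from math import comb

n = 20
brute = sum(1 for e1, e2, e3 in product(range(1, n+1), repeat=3) if e1 >= e2 >= e3)
print(brute, n*(n+1)*(n+2)//6)          # both 1540 for n = 20

# General case: non-increasing m-tuples from {1,...,n} are exactly the
# m-element multisets, counted by C(n+m-1, m).
for n, m in [(20, 3), (7, 4), (5, 6)]:
    brute = sum(1 for _ in combinations_with_replacement(range(1, n+1), m))
    print(brute == comb(n + m - 1, m))
```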
How to check if a derivative equation is correct? I can calculate the derivative of a function using the product rule, chain rule or quotient rule. When I find the resulting derivative function, however, I have no way to check if my answer is correct! How can I check whether the calculated derivative is correct (i.e. that I haven't made a mistake factorising, or with one of the rules)? I have a graphics calculator. Thanks!
For any specific derivative, you can ask a computer to check your result, as several other answers suggest. However, if you want to be self-sufficient in taking derivatives (for an exam or other work), I recommend lots of focused practice. Most calculus textbooks include answers to the odd-numbered problems in the back of the book, and if you search for "derivative worksheet" you'll find lots of problem lists online. Work through a list of at least 20 problems, and check your answers-- if you get less than 80% or 90% right, you know you need more practice. Here's the most important part: Track down your mistakes. Watch out for them in the future, and be sure you understand the right way to go. Pay attention to simplifying your answers, too, because a lot of people make algebra mistakes after getting the calculus right. The rules you have are the best way to take these derivatives, you just have to be able to use them accurately.
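If you do want a machine check in addition to practice, a central-difference comparison is usually enough. The function below is only a placeholder example; substitute your own $f$ and your hand-computed derivative.

```python
# Compare a hand-computed derivative with a central-difference quotient.
import math

f  = lambda x: x**2 * math.sin(3*x)                        # example function
df = lambda x: 2*x*math.sin(3*x) + 3*x**2*math.cos(3*x)    # derivative computed by hand

h = 1e-5
for x in (-2.0, -0.3, 0.7, 1.9):
    numeric = (f(x + h) - f(x - h)) / (2*h)
    print(x, abs(numeric - df(x)))                         # should be tiny (~1e-8 or less)
```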
Embedding of finite groups It is well known that any finite group can be embedded in the symmetric group $S_n$ and in $GL(n,q)$ ($q=p^m$) for some $m,n,q\in \mathbb{N}$. Can we embed any finite group in $A_n$, or in $SL(n,q)$, for some $n,q\in \mathbb{N}$?
Yes. The symmetric group $Sym(n)$ is generated by $\{(1,2), (2,3),\ldots, (n-1,n)\}$. You can embed $Sym(n)$ into $Alt(n+2)$ as the group generated by $\{(1,2)(n+1,n+2), (2,3)(n+1,n+2), \ldots, (n-1,n)(n+1,n+2)\}$. This embedding takes a permutation $\pi\in Sym(n)$ and sends it to $\pi\cdot(n+1,n+2)^{\operatorname{sgn}(\pi)}$, where $\operatorname{sgn}(\pi)\in\{0,1\}$ is the parity of the permutation. In other words, $G\le Sym(n)\le Alt(n+2)$ embeds any group into a (slightly larger) alternating group. The general linear group $GL(n,q)$ embeds in the special linear group $SL(n+1,q)$ using a determinant trick. We just add a new coordinate to cancel out the determinant of the matrix from $GL(n,q)$ so the result lands in $SL(n+1,q)$. $$\operatorname{GL}(n,q) \cong \left\{ \begin{bmatrix} A & 0 \\ 0 & 1/\det(A) \end{bmatrix} : A \in \operatorname{GL}(n,q) \right\} \le \operatorname{SL}(n+1,q)$$ In other words, $G\le GL(n,q)\le SL(n+1,q)$ embeds any group into a (slightly larger) special linear group.
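Here is a small computational sanity check of the $Sym(n)\hookrightarrow Alt(n+2)$ embedding for $n=3$, written with plain tuples (the helper functions are mine, not part of the answer): it confirms the map is an injective homomorphism whose image consists of even permutations.

```python
# Check the Sym(3) -> Alt(5) embedding pi -> pi * (n+1, n+2)^sgn(pi).
from itertools import permutations

n = 3
def compose(p, q):                 # (p*q)(i) = p(q(i)); permutations as tuples, perm[i] = image of i
    return tuple(p[q[i]] for i in range(len(p)))

def parity(p):                     # 0 = even, 1 = odd, via inversion count
    return sum(1 for i in range(len(p)) for j in range(i+1, len(p)) if p[i] > p[j]) % 2

# transposition of the two extra points (0-based; it is (n+1, n+2) in the answer's 1-based notation)
swap = tuple(list(range(n)) + [n + 1, n])

def embed(p):                      # extend p to n+2 points, then fix the parity
    q = tuple(list(p) + [n, n + 1])
    return compose(q, swap) if parity(p) == 1 else q

imgs = {p: embed(p) for p in permutations(range(n))}
assert all(parity(img) == 0 for img in imgs.values())           # lands in Alt(n+2)
assert len(set(imgs.values())) == len(imgs)                     # injective
assert all(imgs[compose(p, q)] == compose(imgs[p], imgs[q])     # homomorphism
           for p in imgs for q in imgs)
print("embedding of Sym(3) into Alt(5) checked")
```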
Equation of the complex locus: $|z-1|=2|z +1|$ This question requires finding the Cartesian equation for the locus: $|z-1| = 2|z+1|$ that is, where the modulus of $z -1$ is twice the modulus of $z+1$ I've solved this problem algebraically (by letting $z=x+iy$) as follows: $\sqrt{(x-1)^2 + y^2} = 2\sqrt{(x+1)^2 + y^2}$ $(x-1)^2 + y^2 = 4\big((x+1)^2 + y^2\big)$ $x^2 - 2x + 1 + y^2 = 4x^2 + 8x + 4 + 4y^2$ $3x^2 + 10x + 3y^2 = -3$ $x^2 + \frac{10}{3}x + y^2 = -1$ $(x + \frac{5}{3})^2 +y^2 = -1 + \frac{25}{9}$ therefore, $(x+\frac{5}{3})^2 + y^2 = \frac{16}{9}$, which is a circle. However, I was wondering if there is a method, simply by inspection, of immediately concluding that the locus is a circle, based on some relation between the distance from $z$ to $(1,0)$ on the plane being twice the distance from $z$ to $(-1,0)$?
Just to add on to Aryabhata's comment above. The map $f(z) = \frac{1}{z}$ for $ z \in \mathbb{C} -\{0\}$, $f(0) = \infty$ and $f(\infty) = 0$ is a circle preserving homeomorphism of $\bar{\mathbb{C}}$. To see this, one needs to prove that it is continuous on $\bar{\mathbb{C}}$, and since $f(z)$ is an involution proving this would mean that its inverse is continuous as well. It is also not hard to show that $f(z)$ is bijective. Lastly use the general equation of a circle in $\bar{\mathbb{C}}$ to see that circles in $\bar{\mathbb{C}}$ are preserved under this map.
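As a tiny numerical illustration tying the two posts together (not part of either argument): points on the circle $(x+\frac{5}{3})^2+y^2=\frac{16}{9}$ derived in the question do satisfy $|z-1|=2|z+1|$, and their images under $f(z)=1/z$ satisfy the same relation, so this particular Apollonius circle is carried onto itself by the inversion.

```python
# Sample points on the derived circle and check the Apollonius ratio,
# before and after applying f(z) = 1/z.
import cmath, math

center, radius = -5/3, 4/3
for k in range(8):
    z = center + radius * cmath.exp(2j * math.pi * k / 8)
    w = 1 / z
    print(abs(z - 1) / abs(z + 1), abs(w - 1) / abs(w + 1))   # both ratios print as 2.0
```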
Prove a 3x3 system of linear equations with arithmetic progression coefficients has infinitely many solutions How can I prove that a 3x3 system of linear equations of the form: $\begin{pmatrix} a&a+b&a+2b\\ c&c+d&c+2d\\ e&e+f&e+2f \end{pmatrix} \begin{pmatrix} x\\ y\\ z \end{pmatrix} =\begin{pmatrix} a+3b\\ c+3d\\ e+3f \end{pmatrix}$ for $a,b,c,d,e,f \in \mathbb Z$ will always have infinitely many solutions, and that the three planes intersect along the line $ r= \begin{pmatrix} -2\\3\\0 \end{pmatrix} +\lambda \begin{pmatrix} 1\\-2\\1 \end{pmatrix}$?
First, consider the homogeneous system $$\left(\begin{array}{ccc} a & a+b & a+2b\\ c & c+d & c+2d\\ e & e+f & e+2f \end{array}\right)\left(\begin{array}{c}x\\y\\z\end{array}\right) = \left(\begin{array}{c}0\\0\\0\end{array}\right).$$ If $(a,c,e)$ and $(b,d,f)$ are not scalar multiples of each other, then the coefficient matrix has rank $2$, so the solution space has dimension $1$. The vector $(1,-2,1)^T$ is clearly a solution, so the solutions are all multiples of $(1,-2,1)^T$. That is, the solutions to the homogeneous system are $\lambda(1,-2,1)^T$ for arbitrary $\lambda$. Therefore, the solutions to the inhomogeneous system are all of the form $\mathbf{x}_0 + \lambda(1,-2,1)^T$, where $\mathbf{x}_0$ is a particular solution to this system. Since $(-2,3,0)$ is always a particular solution, all solutions have the described form. If one of $(a,c,e)$ and $(b,d,f)$ is a multiple of the other, though, then there are other solutions: the matrix has rank $1$, so the nullspace has dimension $2$. Say $(a,c,e) = k(b,d,f)$ with $k\neq 0$; then there is another solution: $(-1-\frac{1}{k},1,0)$ would also be a solution to the system, so that the solutions to the inhomogeneous system would be of the form $$r = \left(\begin{array}{r}-2\\3\\0\end{array}\right) + \lambda\left(\begin{array}{r}1\\-2\\1\end{array}\right) + \mu\left(\begin{array}{r}-1-\frac{1}{k}\\1\\0\end{array}\right).$$ This includes the solutions you have above, but also others. (If $k=0$, then you can use $(0,-2,1)$ instead of $(-1-\frac{1}{k},1,0)$.) If $(b,d,f)=(0,0,0)\neq (a,c,e)$, then $(1,0,-1)$ can be used instead of $(-1-\frac{1}{k},1,0)$ to generate all solutions. And of course, if $(a,c,e)=(b,d,f)=(0,0,0)$, then every vector is a solution. In all cases, you have an infinite number of solutions that includes all the solutions you give (but there may be solutions that are not on that line).
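A symbolic sanity check of the generic statements above, using sympy (the symbol names mirror the answer); it confirms that $(-2,3,0)$ is always a particular solution and $(1,-2,1)$ is always a homogeneous solution:

```python
# Verify the particular and homogeneous solutions symbolically.
from sympy import symbols, Matrix

a, b, c, d, e, f = symbols('a b c d e f')
A   = Matrix([[a, a + b, a + 2*b],
              [c, c + d, c + 2*d],
              [e, e + f, e + 2*f]])
rhs = Matrix([a + 3*b, c + 3*d, e + 3*f])

print((A * Matrix([-2, 3, 0]) - rhs).expand())   # zero vector: particular solution
print((A * Matrix([1, -2, 1])).expand())         # zero vector: homogeneous solution
```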
Understanding a proof by descent [Fibonacci's Lost Theorem] I am trying to understand the proof in Carmichael's book Diophantine Analysis but I have got stuck at one point in the proof where $w_1$ and $w_2$ are introduced. The theorem it is proving is that the system of diophantine equations: * *$$x^2 + y^2 = z^2$$ *$$y^2 + z^2 = t^2$$ cannot simultaneously be satisfied. The system is easily seen to be algebraically equivalent to * *$$t^2 + x^2 = 2z^2$$ *$$t^2 - x^2 = 2y^2$$ and this is what will be worked on. We are just considering the case where the numbers are pairwise relatively prime. That implies that $t,x$ are both odd (they cannot be both even). Furthermore $t > x$ so define $t = x + 2 \alpha$. Clearly the first equation $(x + 2\alpha)^2 + x^2 = 2 z^2$ is equivalent to $(x + \alpha)^2 + \alpha^2 = z^2$ so by the characterization of primitive Pythagorean triples there exist relatively prime $m,n$ such that $$\{x+\alpha,\alpha\} = \{2mn,m^2-n^2\}.$$ Now the second equation $t^2 - x^2 = 4 \alpha (x + \alpha) = 8 m n (m^2 - n^2) = 2 y^2$ tells us that $y^2 = 2^2 m n (m^2 - n^2)$. By coprimality and unique factorization it follows that each of those factors is a square, so define $u^2 = m$, $v^2 = n$ and $w^2 = m^2 - n^2 = (u^2 - v^2)(u^2 + v^2)$. It is now said that from the previous equation either * *$u^2 + v^2 = 2 {w_1}^2$, $u^2 - v^2 = 2 {w_2}^2$ or * *$u^2 + v^2 = w_1^2$, $u^2 - v^2 = w_2^2$ but $w_1$ and $w_2$ have not been defined and I cannot figure out what they are supposed to be. Any ideas what this last part could mean? For completeness, if the first case occurs we have our descent and if the second case occurs $w_1^2 + w_2^2 = 2 u^2$, $w_1^2 - w_2^2 = 2 v^2$ gives the descent. This finishes the proof.
$u^2$ and $v^2$ are $m$ and $n$, respectively, which are coprime. Then since $(u^2+v^2)+(u^2-v^2)=2u^2$ and $(u^2+v^2)-(u^2-v^2)=2v^2$, the only factor that $u^2+v^2$ and $u^2-v^2$ can have in common is a single factor of $2$. Since their product is the square $w^2$, that leaves the two possibilities given.
Seeking a textbook proof of a formula for the number of set partitions whose parts induce a given integer partition Let $t \geq 1$ and $\pi$ be an integer partition of $t$. Then the number of set partitions $Q$ of $\{1,2,\ldots,t\}$ for which the multiset $\{|q|:q \in Q\}=\pi$ is given by \[\frac{t!}{\prod_{i \geq 1} \big(i!^{s_i(\pi)} s_i(\pi)!\big)},\] where $s_i(\pi)$ denotes the number of parts $i$ in $\pi$. Question: Is there a book that contains a proof of this? I'm looking to cite it in a paper and would prefer not to include a proof. I attempted a search in Google books, but that didn't help too much. A similar result is proved in "Combinatorics: topics, techniques, algorithms" by Peter Cameron (page 212), but has "permutation" instead of "set partition" and "cycle structure" instead of "integer partition".
These are the coefficients in the expansion of power-sum symmetric functions in terms of augmented monomial symmetric functions. I believe you will find a proof in: Peter Doubilet. On the foundations of combinatorial theory. VII. Symmetric functions through the theory of distribution and occupancy. Studies in Appl. Math., 51:377–396, 1972. See also MacMahon http://name.umdl.umich.edu/ABU9009.0001.001
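If a quick independent check of the formula is wanted before citing it, a brute-force comparison is easy; the sketch below (pure Python, helper names are mine) enumerates all set partitions of $\{0,\dots,t-1\}$ and compares the count for a few block-size multisets against $t!/\prod_i \big(i!^{s_i} s_i!\big)$.

```python
# Brute-force check of the set-partition counting formula from the question.
from math import factorial
from collections import Counter

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # put `first` into an existing block, or into a new singleton block
        for i in range(len(partition)):
            yield partition[:i] + [partition[i] + [first]] + partition[i+1:]
        yield partition + [[first]]

def formula(pi):                       # pi is the integer partition, e.g. (2, 2, 1)
    t = sum(pi)
    s = Counter(pi)                    # s[i] = number of parts equal to i
    denom = 1
    for i, si in s.items():
        denom *= factorial(i)**si * factorial(si)
    return factorial(t) // denom

for pi in [(2, 2, 1), (3, 1, 1), (4, 2), (1, 1, 1, 1)]:
    t = sum(pi)
    count = sum(1 for Q in set_partitions(list(range(t)))
                if sorted(len(q) for q in Q) == sorted(pi))
    print(pi, count, formula(pi))      # the two counts agree
```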
Eigenvalues of the differentiation operator I have a linear operator $T_1$ which acts on the vector space of polynomials in this way: $$T_1(p(x))=p'(x).$$ How can I find its eigenvalues and how can I know whether it is diagonalizable or not?
Take the derivative of $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ (with $a_n\neq 0$), and set it equal to $\lambda a_nx^n+\cdots+\lambda a_0$. Look particularly at the equality of the coefficients of $x^n$ to determine what $\lambda$ must be. Once you know what the eigenvalues are, consider which possible diagonalized linear transformations have that eigenvalue set, and whether such linear transformations can be similar to differentiation.
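A finite-dimensional illustration may help (this is not the infinite-dimensional space of all polynomials from the question, just the restriction to degree $\leq n$): the matrix of differentiation in the basis $1,x,\dots,x^n$ is strictly upper triangular, so its only eigenvalue is $0$, its $0$-eigenspace consists of the constants, and it is nilpotent, hence not diagonalizable for $n\geq 1$.

```python
# Matrix of d/dx on polynomials of degree <= n in the basis 1, x, ..., x^n.
import numpy as np

n = 5
D = np.zeros((n + 1, n + 1))
for k in range(1, n + 1):
    D[k - 1, k] = k                  # d/dx (x^k) = k x^(k-1)

print(np.linalg.eigvals(D))                          # all zeros
print(np.linalg.matrix_rank(D))                      # n, so the 0-eigenspace is 1-dimensional
print(np.allclose(np.linalg.matrix_power(D, n + 1),  # nilpotent: D^(n+1) = 0
                  np.zeros_like(D)))
```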
What are good books to learn graph theory? What are some of the best books on graph theory, particularly directed towards an upper division undergraduate student who has taken most the standard undergraduate courses? I'm learning graph theory as part of a combinatorics course, and would like to look deeper into it on my own. Thank you.
I learned graph theory from the inexpensive duo of Introduction to Graph Theory by Richard J. Trudeau and Pearls in Graph Theory: A Comprehensive Introduction by Nora Hartsfield and Gerhard Ringel. Both are excellent despite their age and cover all the basics. They aren't the most comprehensive of sources and they do have some age issues if you want an up to date presentation, but for the basics they can't be beat. There are lots of good recommendations here, but if cost isn't an issue, the most comprehensive text on the subject to date is Graph Theory And Its Applications by Jonathan Gross and Jay Yellen. This massive, beautifully written and illustrated tome covers just about everything you could possibly want to know about graph theory, including applications to computer science and combinatorics, as well as the best short introduction to topological graph theory you'll find anywhere. If you can afford it, I would heartily recommend it. Seriously.
How many countable graphs are there? Even though there are uncountably many subsets of $\mathbb{N}$ there are only countably many isomorphism classes of countably infinite - or countable, for short - models of the empty theory (with no axioms) over one unary relation. How many isomorphism classes of countable models of the empty theory over one binary relation (a.k.a. graph theory) are there? I.e.: How many countable unlabeled graphs are there? A handwaving argument might be: Since the number of unlabeled graphs with $n$ nodes grows (faster than) exponentially (as opposed to growing linearly in the case of a unary relation), there must be uncountably many countable unlabeled graphs. (Analogously to the case of subsets: the number of subsets of finite sets grows exponentially, thus (?) there are uncountably many subsets of a countably infinite set.) How is this argument to be made rigorous?
I assume you mean by countable graph one that is countably infinite. I will also assume that your relation can be an arbitrary binary relation and not just symmetric, since you seem to be interested in that case. In this case there are uncountably many. For, a special case of a binary relation is a total order. We do not need to add anything to the theory to have a total order; it's just a special case, and ordering will be preserved by isomorphism. There are uncountably many pairwise nonisomorphic total orders on a countably infinite set. Indeed let $N$ be the natural numbers. Let $S$ be a subset of $N$. Replace each $s\in S$ by a copy of $\mathbb{Q}\cap [0,1]$ where $\mathbb{Q}$ are the rationals. It's easy to show that if $S\not = T$ then you will get nonisomorphic orders this way. But there are uncountably many subsets of $N$. So there are uncountably many orders on a countable set.
fair value of a hat-drawing game I've been going through a problem solving book, and I'm a little stumped on the following question: At each round, draw a number 1-100 out of a hat (and replace the number after you draw). You can play as many rounds as you want, and the last number you draw is the number of dollars you win, but each round costs an extra $1. What is a fair value to charge for entering this game? One thought I had was to suppose I only have N rounds, instead of an unlimited number. (I'd then let N approach infinity.) Then my expected payoff at the Nth round is (Expected number I draw - N) = 50.5 - N. So if I draw a number d at the (N-1)th round, my current payoff would be d - (N-1), so I should redraw if d - (N-1) < 50.5 - N, i.e., if d < 49.5. So my expected payoff at the (N-1)th round is 49(50.5-N) + 1/100*[(50 - (N-1)) + (51 - (N-1)) + ... + (100 - (N-1))] = 62.995 - N (if I did my calculations correctly), and so on. The problem is that this gets messy, so I think I'm doing something wrong. Any hints/suggestions to the right approach?
Your expected return if you draw a number on the last round is 49.5 (because it costs a dollar to make the draw). On round N-1, you should keep what you have if it is greater than 49.5, or take your chances if it is less. The expected value if N=2 is then $\frac {51}{100}\frac {100+50}{2} -1 + \frac {49}{100}49.5=61.505$ where the first term is the chance that you keep the first draw times the expectation of that draw (assuming you will keep it), the second is the cost of the first draw, and the third is the chance that you will decline the first draw and take your chances on the second times the expectation of the second draw. Added: As Yuval makes more explicit, your strategy will be to quit when you get a number at least $X$. The gain then is $\frac{100+X}{2}-\frac{100}{101-X}$ where the first is the payoff and the second is the cost of the expected number of plays. As he says, this is maximized at X=87 with value $\frac{1209}{14}=86.3571$. I'll have to think where I was a bit off.
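A short script makes the threshold analysis in the added paragraph concrete (the Monte Carlo part is just an extra confirmation, not part of the answer):

```python
# Expected gain of the rule "keep the first draw that is at least X".
gain = lambda X: (100 + X) / 2 - 100 / (101 - X)
best = max(range(1, 101), key=gain)
print(best, gain(best))                # 87  86.357... (= 1209/14)

# Monte Carlo confirmation of the same strategy.
import random
def play(threshold, trials=200_000):
    total = 0
    for _ in range(trials):
        cost = 0
        while True:
            cost += 1
            d = random.randint(1, 100)
            if d >= threshold:
                break
        total += d - cost
    return total / trials

print(play(87))                        # close to 86.36
```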
Krylov-like method for solving systems of polynomials? To iteratively solve large linear systems, many current state-of-the-art methods work by finding approximate solutions in successively larger (Krylov) subspaces. Are there similar iterative methods for solving systems of polynomial equations by finding approximate solutions on successively larger algebraic sets?
Sort of, the root-finding problem is equivalent to the eigenvalue problem associated with the companion matrix. Nonsymmetric eigenvalue methods such as "Krylov-Schur" can be used here. Notes: * *Root-finding in the monomial (power) basis is extremely ill-conditioned, so a better-conditioned polynomial basis is mandatory for moderate to high degree. *The companion matrix is already in Hessenberg form.
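A minimal illustration of the companion-matrix connection (the example polynomial is arbitrary): the eigenvalues of the companion matrix of a monic polynomial are its roots, which is essentially what `numpy.roots` computes internally.

```python
# Companion-matrix eigenvalues vs. polynomial roots.
import numpy as np

# p(x) = x^4 - 10x^3 + 35x^2 - 50x + 24 = (x-1)(x-2)(x-3)(x-4)
a = np.array([24.0, -50.0, 35.0, -10.0])      # a_0, ..., a_{n-1} of the monic polynomial
n = len(a)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)                    # ones on the subdiagonal
C[:, -1] = -a                                 # last column: -a_0, ..., -a_{n-1}

print(np.sort(np.linalg.eigvals(C)))          # approximately [1, 2, 3, 4]
print(np.sort(np.roots([1.0, -10.0, 35.0, -50.0, 24.0])))
```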
zeroes of holomorphic function I know that zeroes of holomorphic functions are isolated, and I know that if a holomorphic function has a zero set with a limit point then it is identically the zero function. I also know a holomorphic function can have a countable zero set. Does there exist a holomorphic function which is not identically zero and has an uncountable number of zeroes?
A holomorphic function on a connected open set that is not identically zero cannot have uncountably many zeros. Open subsets of $\mathbb{C}$ are $\sigma$-compact, so if $G$ is the domain, then there is a sequence $K_1,K_2,\ldots$ of compact subsets of $G$ such that $G=K_1\cup K_2\cup\cdots$. (It is not hard to construct $K_n$; e.g., if $G$ is the whole plane, you can take $K_n=\{z:|z|\leq n\}$. Otherwise you can take $\{z\in G:|z|\leq n\text{ and }d(z,\partial G)\geq \frac{1}{n}\}$.) An uncountable subset of $G$ must have uncountable intersection with one of the $K_n$s, because a countable union of countable sets is countable. An infinite subset of $K_n$ has a limit point in $K_n$ by compactness. The rest follows from the result you mentioned that a holomorphic function that is not identically zero cannot have a limit point of its zero set in the connected open set on which it is defined.
Isoperimetric inequalities of a group How do you transfer isoperimetric inequalities of a group to the setting of Riemann integrals of functions of the form $f\colon \mathbb{R}\rightarrow G$, where $G$ is a metric group, so that being $\delta$-hyperbolic in the sense of Gromov is expressible via Riemann integration? In other words, how do you define "being a $\delta$-hyperbolic group" by using integrals in metric groups? (Note: I am not interested in the "Riemann" part, so you are free to take commutative groups with Lebesgue integration etc.)
You can do this using metric currents in the sense of Ambrosio-Kirchheim. This is a rather new development of geometric measure theory, triggered by Gromov and really worked out only in the last decade. I should warn you that this is rather technical stuff and nothing for the faint-hearted. Urs Lang has a set of nice lecture notes, where you can find most of the relevant references, see here. My friend Stefan Wenger has done quite a bit of work on Gromov hyperbolic spaces and isoperimetric inequalities, his Inventiones paper Gromov hyperbolic spaces and the sharp isoperimetric constant seems most relevant. You can find a link to the published paper and his other work on his home page, the ArXiV-preprint is here. I should add that I actually prefer to prove that a linear (or subquadratic) isoperimetric inequality implies $\delta$-hyperbolicity using a coarse notion of area (see e.g. Bridson-Haefliger's book) or using Dehn functions, the latter can be found in Bridson's beautiful paper The geometry of the word problem.