diff --git "a/stack-exchange/math_stack_exchange/shard_1.txt" "b/stack-exchange/math_stack_exchange/shard_1.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_1.txt" +++ /dev/null @@ -1,28240 +0,0 @@ -TITLE: Connectedness of sets in the plane with rational coordinates and at least one irrational -QUESTION [10 upvotes]: Can someone please let me know if my solution is correct: -Define: -1) Let $A = \{x \in \mathbb{R}^{2}: \text{all coordinates of x are rational} \}$. Show that -$\mathbb{R}^{2} \setminus A$ is connected. -My answer: just note that $A = \mathbb{Q} \times \mathbb{Q}$ so countable and there's a standard result that $\mathbb{R}^{2}$ minus a countable set is path-connected so connected, thus the result follows. -2) $B = \{x \in \mathbb{R}^{2}: \text{at least one coordinate is irrational} \}$. Show that $B$ is connected. Well isn't $B$ just $B = \mathbb{R}^{2} \setminus \mathbb{Q}^{2}$ so it is exactly the same problem as above isn't it? -This is a problem in Dugundji's book. - -REPLY [4 votes]: Removing this from the unanswered list. As the comments note the OPs reasoning is correct. -To run with the comment of a proof using the definition of connectedness. -Suppose that $\mathbb{R} \backslash A$ can be written as $U \cup V$ where $U$ and $V$ are both open and disjoint. We know that there are open sets $U'$ and $V'$ in $\mathbb{R}$ such that $U = U' \cap \mathbb{R} \backslash A$ and $V = V' \cap \mathbb{R} \backslash A$. The fact that $\mathbb{R}$ is connected tells us that $U' \cap V'$ is not empty. It is clear that $U' \cap V' \subset A$ (otherwise $U \cap V$ isn't empty) so we can choose some point $x=(x_1 , x_2)$ of $A$ in $U' \cap V'$. Now $U' \cap V'$ is open in $\mathbb{R}$ so there exists some $\epsilon$ such that $B_\epsilon(x) \subset U' \cap V'$. There exists a rational $q$ such that $\vert q - x_1 \vert < \epsilon $ and then there exists an irrational number $s$ such that $s \in (q , x_1)$ and therefore $(s,x_2) \in B_\epsilon(x)$ so that $B_\epsilon(x) \cap \mathbb{R}\backslash A \not = \emptyset$ but this would mean that $U \cap V$ is not empty which is a contradiction so $\mathbb{R} \backslash A$ must be connected.<|endoftext|> -TITLE: Category Theory with and without Objects -QUESTION [13 upvotes]: Slight Motivation: In Mac Lane and Freyd's books (the latter being a reprint of an older book called "Abelian Categories") they note that instead of defining any Objects in a category we may define an "arrows only" approach by considering the identity morphism associated to an object to be the object itself. -Question: Is it computationally or syntactically easier in category theory to consider a category as objects and morphisms instead of as just as morphisms? In short, is there a reason we keep objects around? - -REPLY [5 votes]: From a formalisation-in-a-proof-assistant point of view, I found formalising category theory without objects to be quite horrible, because it is so much easier to have objects which correspond directly to types, rather than objects which are simply special kinds of morphisms.<|endoftext|> -TITLE: Description of the set of prime ideals of the $R/m^2$ -QUESTION [6 upvotes]: Let $R$ be a commutative ring and $m\subseteq R$ be a maximal ideal. -Can you describe the set of prime ideals of the $R/m^2$. Are they all maximal ? 
- -REPLY [3 votes]: Prime $\rm\ P = I + m^2\supset m^2\ \Rightarrow\ P\supset m\ \Rightarrow\ P = m\ $ since $\rm\ m\ $ is maximal.<|endoftext|> -TITLE: Question about basic strategy in Blackjack -QUESTION [5 upvotes]: I was watching Beating Blackjack with Andy Bloch, where he runs through the basic strategy charts that outline the best strategy for playing the game. Later he also talks about methodologies for counting, but that is not relevant to my question. -He states in that video that the basic strategy is an outcome of computer simulations, which suggests to me that it is a rather weak method of deriving it. My question is two-fold: first, when is a computer solution considered good enough to be "true"? Second, surely this is a much easier problem that does not need computer simulations to solve; are there any known methods to derive the basic strategy? - -REPLY [2 votes]: The scenario that you described is a single deck game where the dealer must -stand on Soft 17, which is a disadvantage to the casino. Every casino table has its own specific rules. On some tables, the dealer will hit Soft 17; on others he will stand. -In a single deck game, there are also other factors which can have an effect -on the overall game odds. One of these is deck penetration; another is the number of "rounds" played, which is just the number of times a new hand is played from the same deck before it is shuffled. -I noticed that there is no consideration of the cases where the dealer ties the player (but this is only applicable when the player is hitting). Also, the dealer can bust on 4 or more cards. Example: dealer has 6-2 and then draws 4,10 to -form 6-2-4-10 = 22. -I have a blackjack Monte Carlo simulation program which runs -on my Apple notebook. I checked the data for the scenario described above. -In my simulation, I used a deck penetration of 65% and limited the number of rounds to 6. The results show that in the case where the player draws 10-5 and the dealer draws 6, and the player stands, the player will win 42.1858%, lose 57.8142% and push 0%. The average loss by standing is 15.6283%. If the player hits, however, the player will win 28.5349%, lose 67.1360% and push 4.3291%. -The average loss by hitting is 38.6011%. -This data was extracted from the following table for the case -that the Dealer Stands on Soft 17: -The columns represent the following, in this order: -(1) Player Card -(2) Player Card -(3) Dealer Card -(4) Strategy (Stand or Hit) -(5) Frequency (Trial size during simulation) -(6) Wins -(7) Losses -(8) Pushes (player tied dealer) -(9) Net Gain/Loss -(10) Win probability -(11) Loss probability -(12) Push probability - -10 5 6 S 2985424 1259426 1725998 0 -0.156283 0.421858 0.578142 0 -10 5 6 H 2983974 851474 2003320 129180 -0.386011 0.285349 0.671360 0.043291 -9 6 6 S 562585 230554 332031 0 -0.180376 0.409812 0.590188 0 -9 6 6 H 562917 157084 380592 25241 -0.397053 0.279054 0.676107 0.044840 -8 7 6 S 748253 305639 442614 0 -0.183060 0.408470 0.591530 0 -8 7 6 H 748480 222004 492233 34243 -0.361037 0.296606 0.657643 0.045750 - -Notice that the trial size in both of the 10-5-6-x cases is higher than in the other four cases. This is simply due to the higher number of 10-valued cards in the deck, i.e. each card in the set {10,J,Q,K} counts as 10. Also note that the sample sizes of the two 10-5-6-x cases are not identical to each other. 
This is due to the random selection of the players strategy during simulation where -the probability of the player standing is assumed to be 1/2 and probability of the player hitting is also assumed to be 1/2. -The calculated 63.26 percent chance of busting by hitting (which was posted earlier), is pretty close to the simulation loss result of 67.1360%.<|endoftext|> -TITLE: Noetherian quotient rings -QUESTION [5 upvotes]: Let $R$ be a commutative ring and $I,J$ ideals of $R$ such that $J$ is finitely generated and the rings $R/I$ and $R/J$ are Noetherian. Are the $R$-modules $R/J$, $J/IJ$, $R/IJ$ Noetherian? Is the ring $R/IJ$ is Noetherian? - -REPLY [8 votes]: Sebastian did the $R/J$ case in the comments. Now $J/IJ$ is $J\otimes R/I$ which, since $J$ is finitely generated and $R/I$ is Noetherian (as an R-module), is a Noetherian R-module. -The exact sequence -$$ 0\rightarrow J/IJ \rightarrow R/IJ\rightarrow R/J\rightarrow 0$$ -and the previous two results show $R/IJ$ is Noetherian.<|endoftext|> -TITLE: Eigenfunctions of the Helmholtz equation in Toroidal geometry -QUESTION [14 upvotes]: the Helmholtz equation -$$\Delta \psi + k^2 \psi = 0$$ -has a lot of fundamental applications in physics since it is a form of the wave equation $\Delta\phi - c^{-2}\partial_{tt}\phi = 0$ with an assumed harmonic time dependence $e^{\pm\mathrm{i}\omega t}$. -$k$ can be seen as some kind of potential - the equation is analogue to the stationary Schrödinger equation. -The existance of solutions is to my knowledge linked to the separability of the Laplacian $\Delta$ in certain coordinate systems. Examples are cartesian, elliptical and cylindrical ones. -For now I am interested in a toroidal geometry, -$$k(\mathbf{r}) = \begin{cases} k_{to} & \mathbf{r}\in T^2 \\ k_{out} & \text{else}\end{cases}$$ -where $T^2 = \left\{ (x,y,z):\, r^2 \geq \left( \sqrt{x^2 + y^2} - R\right)^2 + z^2 \right\}$ -Hence the question: - -Are there known solutions (in terms of eigenfunctions) of the Helmholtz equation for the given geometry? - -Thank you in advance -Sincerely -Robert -Edit: As Hans pointed out, there might not be any solution according to a corresponding Wikipedia article. Unfortunately, there is no reference given - does anyone know where I could find the proof? - -REPLY [9 votes]: Normally $T^2$ means the Torus, which is a 2-manifold: $T^2 \cong [0,2\pi r]\times[0,2\pi R]$, the solution to -$$ -\Delta \psi + k^2\psi = 0\tag{1} -$$ -bears the form: for $m\in \mathbb{Z}^2$, $\psi_k = e^{ i m\cdot x}$, with $|m| = \sqrt{m_1^2 +m_2^2} = k. $ -The reason behind this is that $\mathbb{T}^2 \cong \mathbb{S}^1(r)\times \mathbb{S}^1(R) $, and for (1) on $\mathbb{S}^1$ has eigenvectors $e^{imx}$ where $|m| = k$, then the Fourier expansion on product spaces use basis $\prod e^{i m_i x_i}$. -In your case it is actually a Toroid, according to the Field Theory Handbook the chapter about rotational system, the Helmholtz equation is not separable in toroidal geometry. Only Laplace equation is separable, please see section 6 in here. -By that wikipedia article about Toroidal coordinates: we make the substitution for (1) as well: -$$\psi=u\sqrt{\cosh\tau-\cos\sigma},$$ -then by the Laplacian in the toroidal geometry in that wiki entry: -\begin{align} -\Delta \psi =& -\frac{\left( \cosh \tau - \cos\sigma \right)^{3}}{a^{2}\sinh \tau} - \left[ -\sinh \tau -\frac{\partial}{\partial \sigma} -\left( \frac{1}{\cosh \tau - \cos\sigma} -\frac{\partial \Phi}{\partial \sigma} -\right) \right. \\[8pt] -& {} + -\left. 
\frac{\partial}{\partial \tau} -\left( \frac{\sinh \tau}{\cosh \tau - \cos\sigma} -\frac{\partial \Phi}{\partial \tau} -\right) + -\frac{1}{\sinh \tau \left( \cosh \tau - \cos\sigma \right)} -\frac{\partial^2 \Phi}{\partial \phi^2} -\right]. -\end{align} -(one extra thing to mention, the wiki entry failed to mention that $a^2 = R^2-r^2$) Equation (1) can be reduced as follows: -$$ -\frac{\partial^2 u }{\partial \tau^2} + \frac{\cosh \tau}{\sinh\tau}\frac{\partial u }{\partial \tau} + \frac{1}{\sinh^2 \tau} \frac{\partial^2 u}{\partial \phi^2} + \frac{\partial^2 u}{\partial \sigma^2} + \left(\frac{ (R^2-r^2)k^2}{(\cosh\tau-\cos \sigma)^2} +\frac14\right)u= 0. -$$ -For above equation, though we separate it in three variables in toroidal coordinates, we can separate the $\phi$ variable: -$$ -u = K(\tau,\sigma)\Phi(\phi). -$$ -The equation becomes: -$$ -\Delta_{\tau,\sigma} K + \frac{\cosh \tau}{\sinh\tau}\frac{\partial K }{\partial \tau} + \left(\frac{ (R^2-r^2)k^2}{(\cosh\tau-\cos \sigma)^2} +\frac14 -\frac{m^2}{\sinh^2 \tau}\right) K = 0,\tag{2} -$$ -and -$$ -\Phi'' + m^2 \Phi = 0. -$$ -Hence $u_m = K(\tau,\sigma)e^{im\theta}$, and $K$ satisfies (2). If someone knows how to proceed using analytical method for (2), I am interested in it as well.<|endoftext|> -TITLE: How to perform a Fourier transform in spherical coordinates? -QUESTION [10 upvotes]: For a function $f(r, \vartheta, \varphi)$ given in spherical coordinates, how can the Fourier transform be calculated best? Possible ideas: - -express $(r,\vartheta,\varphi)$ in cartesian coordinates, yielding a nonlinear argument of $f$ -express $\vec k,\vec r$ in the $e^{i\vec k\vec r}$ term in spherical coordinates, yielding a nonlinear exponent in $\vartheta$ and $\varphi$ -decompose $f$ into Spherical Harmonics and then change base to Fourier space, requiring the Fourier transform of the Spherical Harmonics (it is obviously not possible to calculate them using this very method..., can that be be found somewhere?) - -REPLY [2 votes]: Tobias, your notation makes it look like your function $f:\mathbb{R}^3\to\mathbb{R}$, in other words, it takes in points in three-dimensional space and spits out real numbers. In that case, as you note, it can be written as a function of $(x,y,z)$. So there doesn't seem to be any reason not to go with your first option. Maybe you could write down the function so I can see the difficulty. If on the other hand your function takes in points on the sphere $\{(x,y,z):\, x^2+y^2+z^2=1\}$, then it makes sense to use spherical harmonics. Your second option doesn't seem reasonable--if you want translation to correspond to phase shifts, then you need to integrate along lines in $\mathbb{R}^3$, and then after a change of coordinates you would be back in your first situation.<|endoftext|> -TITLE: Examples of using Green's theorem to compute one-variable integrals? -QUESTION [10 upvotes]: We all know that the complex integral calculus can be useful for computing real integrals. I was wondering if there are any similar example where we can use Green's theorem to compute one-variables integrals. -Now it is clear that if we have an integral $\int_a^b f dx$ on the real line we can view this as a curve integral in the plane of $\int P dx + Q dy$ for infinitely many choices of $P$ and $Q$ where we integrate over the line between $a$ and $b$. 
We could then integrate this vector field over some other curve, $\gamma$, with the same endpoints and try computing the difference between our original integral and the new one with Green's theorem. It is clear that for random choices of $P$, $Q$ and $\gamma$ we will not have simplified our problem. -But are there any examples where this technique is useful? - -REPLY [4 votes]: It is easy to compute $\int_{-\infty}^\infty e^{-x^2/2}dx$ by an elementary argument, but maybe you want the integral $\int_{-\infty}^\infty e^{-x^2/2}\cos(\lambda x)dx$ without having complex analysis (Cauchy's theorem) at your disposal. So you can apply Green's theorem to the vector field -$$(P,Q):=e^{(y^2-x^2)/2}\bigl(\cos(x y),\sin(x y)\bigr)$$ -and an elongated rectangle $[-a,a]\times[0,\lambda]$; then let $a\to\infty$.<|endoftext|> -TITLE: Number of integer solutions of $x^2 + y^2 = k$ -QUESTION [20 upvotes]: I'm looking for some help disproving an answer provided on a StackOverflow question I posted about computing the number of double square combinations for a given integer. -The original question is from the Facebook Hacker Cup - -Source: Facebook Hacker Cup - Qualification Round 2011 -A double-square number is an integer $X$ - which can be expressed as the sum of - two perfect squares. For example, 10 - is a double-square because $10 = 3^2 + 1^2$. - Given $X$, how can we determine the - number of ways in which it can be - written as the sum of two squares? For - example, $10$ can only be written as $3^2 + 1^2$ - (we don't count $1^2 + 3^2$ as being different). On the other hand, - $25$ can be written as $5^2 + 0^2$ or as - $4^2 + 3^2$. -You need to solve this problem for $0 \leq X \leq 2,147,483,647$. -Examples: -$10 \Rightarrow 1$ -$25 \Rightarrow 2$ -$3 \Rightarrow 0$ -$0 \Rightarrow 1$ -$1 \Rightarrow 1$ - -In response to my original question about optimizing this for F#, I got the following response which I'm unable to confirm solves the given problem correctly. - -Source: StackOverflow answer by Alexandre C. -Again, the number of integer solutions - of $x^2 + y^2 = k$ is four times the - number of prime divisors of $k$ which - are equal to $1 \bmod 4$. -Knowing this, writing a program which - gives the number of solutions is easy: - compute prime numbers up to $46341$ once - and for all. -Given $k$, compute the prime divisors of - $k$ by using the above list (test up to - $\sqrt{k}$). Count the ones which are - equal to $1 \bmod 4$, and sum. Multiply - answer by $4$. - -When I go through this algorithm for $25$, I get $8$ which is not correct. - - -For each prime factor (pf) of $25$ ($5$, $5$ are prime factors of $25$) - - -if pf % $4$ = $1$ (true for both $5$'s), add $1$ to count - -return $4$ * count (count would be $2$ here). - -So for $25$, this would be $8$ - -REPLY [3 votes]: A simpler way to look at this would be to consider the factorization of $\displaystyle n$ in Gaussian Integers: $\displaystyle \mathbb{Z}[i] = \{a + bi \ \ | \ a \in \mathbb{Z}, \ b \in \mathbb{Z}\}$. -It is well known that that - -Gaussian integers have unique factorization property (upto units $\displaystyle \pm 1, \pm i$). -Primes (of $\displaystyle \mathbb{Z}$) of the form $\displaystyle 4k+3$ are also primes in $\displaystyle \mathbb{Z}[i]$. -Primes (of $\displaystyle \mathbb{Z}$) of the form $\displaystyle 4k+1$ factorize into $\displaystyle w w'$ for a prime $\displaystyle w$ (in $\displaystyle \mathbb{Z}[i]$) and $\displaystyle w'$ is the conjugate of $\displaystyle w$. 
In fact, if $\displaystyle w = a+ib$, then the prime is of the form $\displaystyle a^2 + b^2$. -$\displaystyle 2 = (1+i)(1-i)$ and $\displaystyle 1+i$ is prime. - -Now if $\displaystyle n = x^2 + y^2$ then $\displaystyle n = (x+iy)(x-iy)$, which corresponds to a factorization of $\displaystyle n$ in $\displaystyle \mathbb{Z}[i]$, which you can get from the prime factorization of $\displaystyle n$ in $\mathbb{Z}[i]$ by choosing a subset of primes to multiply, to get $\displaystyle x + iy$. The conjugates of those primes go toward $\displaystyle x-iy$. The remaining primes need to form a perfect square, to be distributed once each into $\displaystyle x+iy$ and $\displaystyle x-iy$. Note this implies that primes of the form $\displaystyle 4k+3$ (which are also primes in $\displaystyle \mathbb{Z}[i]$) need to have an even power in the factorization. -So it basically amounts to finding the different factorizations of the perfect squares which are factors of $\displaystyle n$ (after ignoring primes of the form $\displaystyle 4k+3$). -Note: It is the units which give the multiple of $4$, if you count $(x,y), (y,x), (-x,y),(y,-x)$. -(Of course, this is not a formal proof...) -Hope this helps.<|endoftext|> -TITLE: Vector sum in spherical coordinates -QUESTION [15 upvotes]: I can't seem to come up with a simple formula for head-to-tail addition of two vectors in spherical coordinates. So I'd like to know: - -Can anybody point out a way to do it in spherical coordinates (without converting back and forth from cartesian coordinates)? -For the sake of execution speed in a computer program, is it faster to do it straight in spherical coordinates or converting back and forth from cartesian coordinates? - - -update: to clarify, I'm not talking about the trivial case in which the tails of the two vectors lie at the same point - -REPLY [5 votes]: To my knowledge you cannot add vectors in polar / spherical coordinates by adding the components as you would in Cartesian coordinates. This is because even when the tails of the two vectors lie at the same point, the unit vectors in the $r$, $\theta$ and $\phi$ directions have different directions for the two different vectors. This means that the two vectors have e.g. two different $r$-directions, and adding the two $r$ components to each other would be like adding apples and pears - something that is not allowed. -Here is a 2D example to prove my point: -Suppose you have two vectors that start at the origin of the coordinate system and respectively point to the two points A(1,0) and B(0,1) in the Cartesian coordinate system. Let's call these two vectors A and B. -In Cartesian coordinates we can add the two vectors by adding their components. Therefore, the sum of the two vectors is: -C = A + B = [1 0] + [0 1] = [1 1]. -You can draw these three vectors easily and confirm that they form a closed triangle. -Now let's convert the two points to polar coordinates and then do the same calculation. Points A and B in polar coordinates are: A(1,0) and B(1,$\frac{\pi}{2}$). If we now try to add the two vectors by adding their components, we get C = A + B = [1 0] + [1 $\frac{\pi}{2}$] = [2 $\frac{\pi}{2}$]. -Converting this back to Cartesian coordinates yields [0 2] $\neq$ [1 1]. You can further confirm that the vector obtained from the polar coordinate addition does not form a closed triangle when drawn together with the two vectors that were added to obtain it. 
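-As a quick numerical check of the 2D example above, here is a small Python sketch (the helper functions are my own, not from any particular library) that performs the addition correctly by round-tripping through Cartesian coordinates, and compares it with the naive component-wise "sum" in polar coordinates:
-
-import math
-
-def polar_to_cart(r, theta):
-    # convert polar (r, theta) to Cartesian (x, y)
-    return (r * math.cos(theta), r * math.sin(theta))
-
-def cart_to_polar(x, y):
-    # convert Cartesian (x, y) back to polar (r, theta)
-    return (math.hypot(x, y), math.atan2(y, x))
-
-# the two vectors of the example, in polar form: A = (1, 0), B = (1, pi/2)
-A = (1.0, 0.0)
-B = (1.0, math.pi / 2)
-
-# correct sum: convert to Cartesian, add components, convert back
-ax, ay = polar_to_cart(*A)
-bx, by = polar_to_cart(*B)
-print(cart_to_polar(ax + bx, ay + by))  # about (1.4142, 0.7854), i.e. (sqrt(2), pi/4)
-
-# naive "sum" obtained by adding the polar components directly
-print((A[0] + B[0], A[1] + B[1]))       # (2.0, 1.5707...), a different vector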
-It might be possible to derive a formula for adding vectors in polar / spherical coordinates, but I expect that this formula will in any case be a (quite complicated / ugly) form of a coordinate transformation. -So in conclusion, as far as I know, the simplest and safest solution is to convert the two vectors to Cartesian coordinates before adding them.<|endoftext|> -TITLE: Thue-Morse sequence cube-freeness proof from the Book -QUESTION [6 upvotes]: I'm TA-ing an intro class on theoretical CS, and this week class only covered the simplest concepts, such as words and languages. I wanted to take this chance to present some combinatorics on words, and prove the cube-freeness of the Thue-Morse sequence (which I want to define as $w_0 = a, w_i = w_{i-1}w'_{i-1}$ where $w'$ is $w$ where the $a$'s and the $b$'s are exchanged). -Salomaa's old book (http://bit.ly/hHeWbK) gives the following program: (1) Show that neither $a^3$ nor $b^3$ can appear in $w_i$, (2) Show that neither $ababa$ nor $babab$ can appear in $w_i$, (3) show than any sequence $aa$ or $bb$ appears in $w_i$ on an even position, (4) Conclude by induction. -(1-3) are pretty easy by induction, but I can't think of a solution for (4). I came to the easy conclusion that no odd-length word can appear as a cube, but I've got nothing for the even part (and I really want to avoid going through the overlap-free proof, or use the definition of the sequence using the binary expansion of natural numbers). -Any idea? -Thanks! - -REPLY [7 votes]: Use the fact that the even positions in $w_i$ are $w_{i-1}$, and the odd positions are $w'_{i-1}$ (if we start counting at zero). -Edit: let's prove this bit by induction. -Denote by $E(s)$ every second bit starting with $0$, and $O(s)$ every second bit starting with $1$. -Want to prove that for $i \geq 1$, $E(w_i) = w_{i-1}$ and $O(w_i) = w'_{i-1}$. -Base case $i=1$: Just check: $$E(w_1) = E(ab) = a = w_0$$ while $$O(w_1) = O(ab) = b = a' = w'_0.$$ -Induction step: Assume that it holds for $i$, prove it for $i+1$: $$E(w_{i+1}) = E(w_i w'_i) = E(w_i) E(w'_i) = E(w_i) E(w_i)' = w_{i-1} w'_{i-1} = w_i,$$ and similarly $$O(w_{i+1}) = O(w_i w'_i) = O(w_i) O(w'_i) = O(w_i) O(w_i)' = w'_{i-1} w_{i-1} = w_i'.$$ In both steps, we used the fact that $|w_i|$ is even (this is true for $i \geq 1$).<|endoftext|> -TITLE: Proving Lagrange Form of Remainder for Taylor Polynomial -QUESTION [6 upvotes]: So I got to the infamous "the proof is left to you as an exercise" of the book when I tried to look up how to get the Lagrange form of the remainder for a Taylor polynomial. Is this right? -Given -$R_{n}(x)=\frac{1}{n!}\int_{0}^{x}f^{n+1}(t)(x-t)^{n}dt$, -find out why -$R_{n}(x)=\frac{1}{(n+1)!}f^{n+1}(c)x^{n+1}$ for some $c\in [0,x]$ -According to FTC, -$\int_{0}^{x}f'(t)dt = f(x) - f(0)$ -Also, according to the Mean Value Theorem, there exists a $c$ such that -$f'(c)(x-0)=f(x)-f(0)$ -so -$\int_{0}^{x}f'(t)dt = f'(c)(x-0)$ -finding the derivative of both sides with respect to $x$: -$f'(x) = f'(c)$ -so -$f^{n+1}(x) = f^{n+1}(c)$ -Going back to the integral form of the remainder: -$R_{n}(x)=\frac{1}{n!}\int_{0}^{x}f^{n+1}(t)(x-t)^{n}dt$, -I replace $f^{n+1}(x)$ with $f^{n+1}(c)$ (This is the step I am most unsure of) -Since $f'(c)$ is a constant, I pull it out of the integral and integrate what's left under the integral, giving me -$R_{n}(x)=\frac{1}{(n+1)!}f^{n+1}(c)x^{n+1}$ for some $c\in [0,x]$ -If this is right, then does it mean that $f'(c)$ is the average value of $f'(x)$ from $0$ to $x$? 
-Sorry if my LaTeX/wording/proof is off. I'd appreciate any corrections/answers to be as simple (notation-wise) as possible please - 1st year undergrad here... - -REPLY [2 votes]: No, it is not right. -$$f'(c) (x-0) = f(x) - f(0)$$ is true, but $c$ depends on $x$. -So it is something like -$$f'(c_x) (x-0) = f(x) - f(0)$$ -So derivative of $f'(c) x $ is not really $f'(c)$. -And the step from -$f'(x) = f'(c)$ to $f^{n+1}(x) = f^{n+1}(c)$ is also wrong. If $c$ is constant (according to your proof), then the derivative on the right side becomes $0$.<|endoftext|> -TITLE: How to define the characteristic length scale in a downhill simplex method? -QUESTION [5 upvotes]: I am currently converting a minimization problem from Matlab to C++, using the Numerical Recipes implementation of the Nelder and Mead Downhill simplex method. The function requires me to define a constant lambda for each variable, which represents the "characteristic length scale" of the variable. -I tried to find a more formal definition of what the authors mean by this, and how to choose one, but couldn't find anything. I'm guessing it's something of a "choose something that looks like a reasonable step size for that variable and hope that it converges fast enough; otherwise, try something else". Any pointers to something a little bit more scientific? - -REPLY [4 votes]: For the purposes of the Nelder-Mead algorithm, the characteristic length scale for a particular variable is basically your best guess as to the size of the potential solution space in that variable. -(Example taken from this site.) For example, in a 3-dimensional problem, if the initial guess is the point $[0,0,0]$, and you know that the function's minimum value occurs in the interval -$-10 < x_0 < 10,$ -$-100 < x_1 < 100,$ -$-200 < x_2 < 200,$ -then you could set $\lambda_0$, $\lambda_1$, and $\lambda_2$ to 10, 100, and 200, respectively. -I suppose this quantity can be called a step size, but it's only used in the initialization part of the algorithm. It's not (which is how I normally think of step sizes) used in each iteration of the algorithm to step from one solution to the next. Those step sizes are the reflection, expansion, contraction, and shrinking parameters. -Some additional references: - -Nelder and Mead's original paper. -This paper by Lagarias, et al, contains a nice presentation of the Nelder-Mead algorithm. -A Nelder Mead User's Manual. In particular, see the discussion of different ways to form the initial simplex.<|endoftext|> -TITLE: Gaussian fixed point Fourier transform -QUESTION [6 upvotes]: We know that $\text{exp}(-\alpha |x|^2)$ is a fixed point for the unitary Fourier transform if $\text{Re } \alpha > 0$. -I know many arguments to show this (contour-integration and differentiation). -Is there a not an elegant way where we can exploit that fact that the Gaussian is rotationally symmetric? A sketch would be fine. - -REPLY [11 votes]: Here's a different argument. Take a zero-mean Gaussian random variable $X$. We know that the sum of $n$ copies of $X$, scaled down by a factor of $\sqrt{n}$, is distributed the same as $X$. On the other hand, by the characteristic function argument (which can be used to prove the central limit theorem) we know that the Fourier transform of $(X_1+\cdots+X_n)/\sqrt{n}$ converges to $\exp(- \sigma^2x^2/2)$. -Edit: an even simpler way. Again take $X$ to be a zero-mean Gaussian. We know that $(X+X)/\sqrt{2}$ is equidistributed with $X$. 
We immediately deduce that all higher-order cumulants are nil, and since the second cumulant is the variance, we get that the Fourier transform is $\exp(-\sigma^2x^2/2)$.<|endoftext|> -TITLE: Is Cross Product Defined on Vector Space? -QUESTION [8 upvotes]: In Wikipedia, a cross product between two "vectors" is defined in terms of the angle between the vectors and their magnitudes. - -As I learned the cross product in linear algebra, which I understand to be a topic about vector spaces, I now wonder if the cross product is not defined on a vector space, but instead can only be defined on an inner product space, so that the angle between the vectors and their magnitudes can make sense? -Or is there another definition of the cross product on a vector space? - -Thanks and regards! - -REPLY [12 votes]: One point of view is that the cross product is the composition of the Hodge dual and the exterior product $V \times V \to \Lambda^2 V$ in three dimensions. The Hodge dual requires additional structure to define: you need not only an inner product, but an orientation. This reflects the fact that there is a choice of handedness in the definition of the cross product. -Another point of view is that the cross product is not an operation on spatial vectors in $\mathbb{R}^3$, but the Lie bracket on the Lie algebra of $\text{SO}(3)$. This is the definition, for example, which is relevant to the physics of angular momentum. Of course, if you insist on taking the cross product of spatial vectors you need a way to identify spatial vectors with elements of the Lie algebra of $\text{SO}(3)$. Writing $\text{SO}(3)$ as $\text{Aut}(V)$ where $V$ is a real oriented 3-dimensional inner product space, the Lie algebra of $\text{SO}(3)$ is naturally isomorphic to $\Lambda^2 V$, so this identification is again the Hodge dual. - -REPLY [4 votes]: You are correct in that you need an inner product to define the cross product. A geometric picture of it is as follows: take two vectors in three dimensions. They determine a parallelogram, and the cross product is defined to be the vector perpendicular (with respect to the inner product) to the parallelogram and with magnitude equal to its area. -Note that there is something special about 3 dimensions that allows this definition to work, namely that to any plane there is a unique perpendicular direction. This is not the case in other dimensions, so this definition does not generalize. Indeed, as the wikipedia article states, the only other dimension that has an analogous cross product is 7 (dimension 1 also has a cross product, but it is trivial). What is special about dimensions 1, 3 and 7 is that there are division algebras only in dimensions 2, 4, and 8 (the complex numbers, the quaternions, and the octonions). The wikipedia article talks more about this construction.<|endoftext|> -TITLE: How to "shrink" a triangle -QUESTION [7 upvotes]: Given any triangle (vertices are known) and a distance $X$, how can I compute the triangle that is shrunk by $X$ from the original? By shrink, I mean the edges of the shrunk triangle are exactly $X$ away from the original edges. So if $X$ is large enough, the shrunk triangle doesn't exist. -EDIT: the resulting triangle needs to be inside the given triangle. -Attaching a picture for clarity. - -REPLY [8 votes]: The green triangle (result) is obviously a homothety of the given (blue) triangle with respect to a certain point $Q$ inside the triangle, and a certain coefficient $k$. As you wish to find $Q$ and $k$ providing the distance $X$ from the original edges, the directions in which the vertices move lie along the original triangle's bisectors (the proof is trivial for every vertex of the green triangle). The intersection point of the triangle's bisectors is the center of the inscribed circle. So, the homothety center $Q$ is the center of the inscribed circle. The shrunk triangle has inradius $R-X$, where $R$ is the inscribed circle radius, so the homothety ratio is $\dfrac{R-X}{R}$; equivalently, each vertex moves toward $Q$ by the fraction $k=\dfrac{X}{R}$ of its distance from $Q$. Thus, given the original triangle vertices $A, B, C$, find the inscribed circle $Q, R$, compute the scale $k=\dfrac{X}{R}$, then move the vertices: $A'=A+(Q-A)\times k$ - same for $B, C$ - in vector form.<|endoftext|> -TITLE: Is there a bijection between $(0,1)$ and $\mathbb{R}$ that preserves rationality? -QUESTION [30 upvotes]: While reading about cardinality, I've seen a few examples of bijections from the open unit interval $(0,1)$ to $\mathbb{R}$, one example being the function defined by $f(x)=\tan\pi(2x-1)/2$. Another geometric example is found by bending the unit interval into a semicircle with center $P$, and mapping a point to its projection from $P$ onto the real line. -My question is, is there a bijection between the open unit interval $(0,1)$ and $\mathbb{R}$ such that rationals are mapped to rationals and irrationals are mapped to irrationals? -I played around with mappings similar to $x\mapsto 1/x$, but found that this never really had the right range, and using google didn't yield any examples, at least none which I could find. Any examples would be most appreciated, thanks! - -REPLY [4 votes]: Consider the function $f: (0,1) \rightarrow \mathbb{R}$. -$f(x) =\left\{ -\begin{array}{ll} -\dfrac{1}{x} - 2 & \text{if}\ 0 < x \leq \dfrac{1}{2}\\ -\dfrac{1}{x-1} + 2 & \text{if} \ \dfrac{1}{2} < x < 1 -\end{array} -\right.$ -We claim that $f$ is a bijective function between the open unit interval $(0,1)$ and $\mathbb{R}$ that takes rationals to rationals and irrationals to irrationals. -First, we notice that $f$ is piece-wise defined in such a way that its domain, the open unit interval, is partitioned into two sets $S$ and $R$ defined by $S = \left \{ x \in \mathbb{R} \ | \ x \in (0,\dfrac{1}{2}] \right\}$ and $R = \left \{ x \in \mathbb{R} \ | \ x \in (\dfrac{1}{2},1) \right\}$. -Second, we show that $f$ takes rationals only to rationals and irrationals only to irrationals. There are a total of four cases to consider. -$\bf{Case \ \#1}$: Say $x \in S \cap \mathbb{Q}$. Then, by the definition of our function $f$, $\exists a,b \in \mathbb{Z}$ such that $x = \dfrac{a}{b}$ and $f(x) = \dfrac{b}{a} - 2$, with $b \neq 0$ because every rational $x$ can be represented as a ratio of two integers with non-zero denominator (and $a \neq 0$ since $x > 0$). Since rationals are closed under division and subtraction, we know that $\dfrac{b}{a} - 2$ is rational. -$\bf{Case \ \#2}$: Say $x \in R \cap \mathbb{Q}$. Then, by the definition of our function $f$, $\exists a,b \in \mathbb{Z}$ such that $x = \dfrac{a}{b}$ and $f(x) = \dfrac{1}{\dfrac{a}{b}-1} + 2$. Since rationals are closed under division, subtraction, and addition, we know that $\dfrac{1}{\dfrac{a}{b}-1} + 2$ is rational. -This shows that $\forall x \in (0,1) \cap \mathbb{Q}, f(x) \in \mathbb{Q} \subseteq \mathbb{R}$. So every rational in the open interval is mapped to a rational. -$\bf{Case \ \#3}$: Say $x \in S \cap (\mathbb{R} - \mathbb{Q}$). Then, by the definition of $f$, $\sim \exists a,b \in \mathbb{Z}$ such that $f(x) = \dfrac{b}{a} - 2$. 
Since both the subtraction and division of an irrational number by a rational number still produces an irrational number, we know that $\dfrac{b}{a} - 2$ is irrational. -$\bf{Case \ \#4}$: Say $x \in R \cap (\mathbb{R} - \mathbb{Q}$). Then, by the definition of $f$, $\sim \exists a,b \in \mathbb{Z}$ such that $f(x) = \dfrac{1}{\dfrac{a}{b}-1} + 2$. Since the subtraction, division, and addition of an irrational number with a rational number produces an irrational number, we know that $\dfrac{1}{\dfrac{a}{b}-1} + 2$ is irrational. -Therefore, $f$ maps irrationals only to irrationals. -Thirdly, we must show that $f$ is a bijective function. -To prove injectivity, there are two cases: -$\bf{Case \ \#1}$: Let $f(x) = f(y)$, where $x,y \in S \cap \mathbb{Q}$. Then, $\exists a,b,c,d \in \mathbb{Z}$, where $b,d \neq 0$, and $x = \dfrac{a}{b}$ and $y = \dfrac{c}{d}$ such that $\dfrac{b}{a} - 2 = \dfrac{d}{c} -2$. Adding both sides by two, we get $\dfrac{b}{a} = \dfrac{d}{c}$ which implies that $\dfrac{a}{b} = \dfrac{c}{d}$. So $x = y$ and $f$ is injective. -$\bf{Case \ \#2}$: Let $f(x) = f(y)$, where $x,y \in R \cap \mathbb{Q}$. Then, $\exists a,b,c,d \in \mathbb{Z}$, where $b,d \neq 0$, and $x = \dfrac{a}{b}$ and $y = \dfrac{c}{d}$ such that $\dfrac{1}{\dfrac{a}{b}-1} + 2 = \dfrac{1}{\dfrac{c}{d}-1} + 2$. Subtracting both sides by 2 we get $\dfrac{1}{\dfrac{a}{b}-1} = \dfrac{1}{\dfrac{c}{d}-1}$. This implies that $\dfrac{c}{d} - 1 = \dfrac{a}{b} -1$. Adding both sides by 1, we get $y = x$ so $f$ is injective. -To prove surjectivity, there are four cases: -$\bf{Case \ \#1}$: We show that $\forall y \in \mathbb{Q}$, $y = f(x)$ for some $x \in S \cap \mathbb{Q}$. Let $f(x) = y$; so $y = \dfrac{1}{x} - 2$. Then, $x = \dfrac{1}{y+ 2}$ and we are done. -$\bf{Case \ \#2}$: We show that $\forall y \in \mathbb{Q}$, $y = f(x)$ for some $x \in R \cap \mathbb{Q}$. Let $f(x) = y$; so $y = \dfrac{1}{x-1} + 2$. Then, $x = \dfrac{1}{y-2} + 1$ and we are done. -$\bf{Case \ \#3}$: We show that $\forall y \in (\mathbb{R} - \mathbb{Q})$, $y = f(x)$ for some $x \in S \cap (\mathbb{R} - \mathbb{Q})$. Let $f(x) = y$; so $y = \dfrac{1}{x} - 2$. Then, $x = \dfrac{1}{y+ 2}$ and we are done. -$\bf{Case \ \#4}$: We show that $\forall y \in (\mathbb{R} - \mathbb{Q})$, $y = f(x)$ for some $x \in R \cap (\mathbb{R} - \mathbb{Q})$. Let $f(x) = y$; so $y = \dfrac{1}{x-1} + 2$. Then, $x = \dfrac{1}{y-2} + 1$ and we are done. -This completes our proof that $f$ is a bijective function between the open unit interval $(0,1)$ and $\mathbb{R}$ that takes rationals to rationals and irrationals to irrationals.<|endoftext|> -TITLE: showing locally that a diagram commutes -QUESTION [5 upvotes]: When showing that a (natural family of) diagram of $R$-algebras for all rings $R$ commutes, why does it suffice to show that it commutes for all $R$ local with algebraically closed residue field? -My idea: Reduce to local rings (fpqc descent), then to strictly henselian rings (strict henselisation is faithfully flat). -Is this correct? - -REPLY [6 votes]: You don't need anything as fancy or as hard as the general fpqc descent theorem (or even the affine case, which is easier), but only basic facts about faithfully flat maps. So first of all it is fairly well-known and easy to prove (i.e., from the definition) that two maps of $R$-modules are equal iff the base-changes to the localizations $R_{\mathfrak{p}}$ for $\mathfrak{p}$ prime are equal, for every prime $\mathfrak{p}$. -This is why you can reduce to the case of a local ring. 
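-To spell out that first reduction with a standard argument: if $f, g: M \to N$ are two maps of $R$-modules which agree after localizing at every prime, set $h = f - g$. Localization is exact, so $(\operatorname{im} h)_{\mathfrak{p}} = \operatorname{im}(h_{\mathfrak{p}}) = 0$ for every prime $\mathfrak{p}$, and a module $L$ with $L_{\mathfrak{p}} = 0$ for all primes $\mathfrak{p}$ must be zero: if $x \in L$ were nonzero, its annihilator would be a proper ideal, hence contained in some maximal ideal $\mathfrak{m}$, and then $x/1 \neq 0$ in $L_{\mathfrak{m}}$. Hence $h = 0$, i.e. the two maps are equal.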
-The more interesting part of your question is why we can reduce to the case where the residue field is algebraically closed. Here I quote a lemma from EGA $0_{III}$: -Lemma: Let $(R, \mathfrak{m})$ be a noetherian local ring with residue field $k$, and let $K$ be an extension of $k$. Then there exists a flat, local noetherian $R$-algebra $R'$ such that $\mathfrak{m} R' $ is the maximal ideal of $R'$ and such that the residue field is $K$. -If $R$ is not assumed noetherian, then $R'$ will not be assumed noetherian, but the argument still works. Actually, it is not hard if you forget about noetherianness, so let me sketch it. We write $K$ as an inductive (by an ordinal whose cardinality is generally large) colimit of towers of extensions generated by one element. By transfinite induction (as a colimit of flat things is flat), we reduce to the case where $K$ is generated by one element over $k$, say some $\alpha$. If $\alpha$ is transcendental, then we can take $R' = (R[t])_{\mathfrak{m} R[t]}$. If $\alpha$ is algebraic, satisfying some monic polynomial $\overline{P} \in k[X]$ that lifts to $P \in R[X]$, then let $R' = R[X]/P$. One can check that this works (the maximal ideals of $R'$ are in correspondence with those of $R[X]/P \otimes_R k$ by Nakayama, and this last thing is clearly a field.) -In either of these cases, it is easy to check that $R'$ is flat (even free in the second case). -OK. So if $R'$ is flat over $R$ (assumed local as above, in the situation of the last paragraph), then $R'$ is faithfully flat because we are working with local rings. So if $M \rightrightarrows N$ are two maps of $R$-modules, they are equal if and only if $M \otimes_{R} R' \rightrightarrows N \otimes_R R'$. It follows that if we want to prove something about commutativity of diagrams of $R$-modules, we can reduce to proving something about $R'$-modules. -Now, as I explained above, any local ring admits a flat local homomorphism into a local ring whose residue field is algebraically closed. So if you want to prove some kind of commutativity for local rings, you can reduce to the case where the residue field is algebraically closed. And, as I explained at the very beginning, if you want to prove something about commuting diagrams of modules over a ring, you can reduce to the case of local rings.<|endoftext|> -TITLE: Is there a combinatorial proof of this congruence identity? -QUESTION [5 upvotes]: Prove that -$$\binom{2p}{p} \equiv 2\pmod{p^3},$$ -where $p\ge 5$ is a prime number. - -REPLY [8 votes]: A combinatorial proof for the congruence $\bmod p^2$ is given at this MO question but the answerer suggests the congruence $\bmod p^3$ does not have a natural combinatorial proof.<|endoftext|> -TITLE: An exercise involving characters -QUESTION [8 upvotes]: Suppose $p$ is a prime, $\chi$ and $\lambda$ are characters on $\mathbb{F}_p$. -How can I show that $\sum_{t\in\mathbb{F}_p}\chi(1-t^m)=\sum_{\lambda}J(\chi,\lambda)$ where $\lambda$ varies over all characters such that $\lambda^m=id$? -($J$ is the Jacobi sum, defined as $J(\chi,\lambda)=\sum_{a+b=1}\chi(a)\lambda(b)$ ) -The exercise is taken from A Classical Introduction to Modern Number Theory by Ireland and Rosen - page 105, ex. 8. - -REPLY [5 votes]: Write $J(\chi,\lambda) = \sum_{t \in \mathbb F_p} \chi(1-t)\lambda(t),$ -so that -$$\sum_{\lambda} J(\chi,\lambda) = \sum_t \chi(1-t)\sum_{\lambda} -\lambda(t).$$ -Now ask your self: what is the value of $\sum_{\lambda}\lambda(t)$, when $\lambda$ ranges -over all characters of order dividing $m$? 
If you sort this out, you will -have answered your question. -(One way to think about is is that the group of $\lambda$ with $\lambda^m = id$ is a subgroup of the full character group, so it is dual to a quotient of $\mathbb F_p^{\times}$. What is this quotient group explicitly?)<|endoftext|> -TITLE: How do you integrate a Bessel function? I don't want to memorize answers or use a computer, is this possible? -QUESTION [12 upvotes]: I am attempting to integrate a Bessel function of the first kind multiplied by a linear term: $\int xJ_n(x)\mathrm dx$ The textbooks I have open in front of me are not useful (Boas, Arfken, various Schaum's) for this problem. I would like to do this by hand. Is it possible? I have had no luck with expanding out $J_n(x)$ and integrating term by term, as I cannot collect them into something nice at the end. -If possible and I just need to try harder (i.e. other methods or leaving it alone for a few days and coming back to it) that is useful information. -Thanks to anyone with a clue. - -REPLY [23 votes]: At the very least, $\int u J_{2n}(u)\mathrm du$ for integer $n$ is expressible in terms of Bessel functions with some rational function factors. -To integrate $u J_0(u)$ for instance, start with the Maclaurin series: -$$u J_0(u)=u\sum_{k=0}^\infty \frac{(-u^2/4)^k}{(k!)^2}$$ -and integrate termwise -$$\int u J_0(u)\mathrm du=\sum_{k=0}^\infty \frac1{(k!)^2}\int u(-u^2/4)^k\mathrm du$$ -to get -$$\int u J_0(u)\mathrm du=\frac{u^2}{2}\sum_{k=0}^\infty \frac{(-u^2/4)^k}{k!(k+1)!}$$ -thus resulting in the identity -$$\int u J_0(u)\mathrm du=u J_1(u)$$ -For $\int u J_2(u)\mathrm du$, we exploit the recurrence relation -$$u J_2(u)=2 J_1(u)-u J_0(u)$$ -and -$$\int J_1(u)\mathrm du=-J_0(u)$$ -(which can be established through the series definition for Bessel functions) to obtain -$$\int u J_2(u)\mathrm du=-u J_1(u)-2J_0(u)$$ -and in the general case of $\int u J_{2n}(u)\mathrm du$ for integer $n$, repeated use of the recursion relation -$$J_{n-1}(u)+J_{n+1}(u)=\frac{2n}{u}J_n(u)$$ -as well as the additional integral identity -$$\int J_{2n+1}(u)\mathrm du=-J_0(u)-2\sum_{k=1}^n J_{2k}(u)$$ -should give you expressions involving only Bessel functions. -On the other hand, $\int u J_{\nu}(u)\mathrm du$ for $\nu$ not an even integer cannot be entirely expressed in terms of Bessel functions; if $\nu$ is an odd integer, Struve functions are needed ($\int J_0(u)\mathrm du$ cannot be expressed solely in terms of Bessel functions, and this is where the Struve functions come in); for $\nu$ half an odd integer, Fresnel integrals are needed, and for general $\nu$, the hypergeometric function ${}_1 F_2\left({{}\atop b}{a \atop{}}{{}\atop c}\mid u\right)$ is required.<|endoftext|> -TITLE: Integrate Form $du / (a^2 + u^2)^{3/2}$ -QUESTION [18 upvotes]: How does one integrate -$$\int \dfrac{du}{(a^2 + u^2)^{3/2}}\ ?$$ -The table of integrals here: http://teachers.sduhsd.k12.ca.us/abrown/classes/CalculusC/IntegralTablesStewart.pdf -Gives it as: $$\frac{u}{a^2 ( a^2 + u^2)^{1/2}}\ .$$ -I'm getting back into calculus and very rusty. I'd like to be comfortable with some of the proofs behind various fundamental "Table of Integrals" integrals. -Looking at it, the substitution rule seems like the method of choice. What is the strategy here for choosing a substitution? It has a form similar to many trigonometric integrals, but the final result seems to suggest that they're not necessary in this case. - -REPLY [23 votes]: A trigonometric substitution does indeed work. 
-We want to express $(a^2 + u^2)^{3/2}$ as something without square roots. We want to use some form of the Pythagorean trigonometric identity $\sin^2 x + \cos^2 x = 1$. Multiplying each side by $\frac{a^2}{\cos^2 x}$, we get $a^2 \tan^2 x + a^2 = a^2 \sec^2 x$, which is in the desired form. of (sum of two squares) = (something squared). -This suggests that we should use the substitution $u^2 = a^2 \tan^2 x$. Equivalently, we substitute $u = a \tan x$ and $du = a \sec^2 x dx$. Then -$$ -\int \frac{du}{(a^2 + u^2)^{3/2}} -= \int \frac{a \sec^2 x \, dx}{(a^2 + a^2 \tan^2 x)^{3/2}}. -$$ -Applying the trigonometric identity considered above, this becomes -$$ -\int \frac{a \sec^2 x \, dx}{(a^2 \sec^2 x)^{3/2}} -= \int \frac{dx}{a^2 \sec x} = \frac{1}{a^2} \int \cos x \, dx, -$$ -which can be easily integrated as -$$ -=\frac{1}{a^2} \sin x. -$$ -Since we set $u = a \tan x$, we substitute back $x = \tan^{-1} (\frac ua)$ to get that the answer is -$$ -=\frac{1}{a^2} \sin \tan^{-1} \frac{u}{a}. -$$ -Since $\sin \tan^{-1} z = \frac{z}{\sqrt{z^2 + 1}}$, this yields the desired result of -$$ -=\frac{u/a}{a^2 \sqrt{(u/a)^2 + 1}} -= \frac{u}{a^2 (a^2 + u^2)^{1/2}}. -$$ - -REPLY [6 votes]: Let $u = a \tan \theta$ and work from there. See this (page about trig substitutions).<|endoftext|> -TITLE: $I$-adic completion -QUESTION [7 upvotes]: Let $A$ be a commutative noetherian ring, and suppose that $A$ is $I$-adically complete with respect to some ideal $I\subseteq A$. Is it true that for any ideal $J\subseteq I$, the ring $A$ is also $J$-adically complete? - -Edit. Recall that a ring $A$ is $I$-adically complete if the canonical morphism $A\to \varprojlim A/I^n$ is an isomorphism. - -REPLY [8 votes]: The answer is "yes". -Since $A$ is Noetherian, for any $m$ the finitely generated $A$-module -$A/J^m$ is $I$-adically complete, and so $A/J^m$ is the inverse limit over $n$ -of $A/(I^n + J^m)$. Now $J^m \subset I^m,$ and so $I^n \subset I^n + J^m -\subset I^m$ when $n \geq m$. Thus the inverse limit (over $m$) of $A/J^m$ is -the same as the inverse limit (over $n$) of $A/I^n$, and we see that $A$ is $J$-adically complete. -Another way to think about it is that $A$ is $I$-adically complete (and separated, which is part of the requirement of "complete") if and only if any $I$-adic Cauchy sequence of elements of $A$ has a unique $I$-adic limit. Since a $J$-adic Cauchy sequence is also an $I$-adic sequence, a $J$-adic Cauchy $(a_n)$ sequence also has a unique $I$-adic limit, say $a$. -Now if we choose $n_0$ so that $a_m - a_n \in J^k$ if $m,n \geq n_0$, -then we see that $a - a_m = a - a_{n} + a_{n} - a_m \in J^k + I^l,$ where -$l$ can be made arbitrarily large by choosing $n$ large enough (since $a_n$ -converges to $a$ in the $I$-adic topology). Thus $a - a_m \in \cap_l J^k + I^l.$ This intersection is equal to $J^k$ (by $I$-adic completeness of $A/J^k$) -and so $a-a_m \in J^k$. Thus in fact $(a_n)$ converges to $a$ in the $J$-adic topology, and so $A$ is $J$-adically complete.<|endoftext|> -TITLE: Example of a complete, non-archimedean ordered field -QUESTION [17 upvotes]: I'm looking for a concrete example of a complete (in the sense that all Cauchy sequences converge) but non-archimedean ordered field, to see that these two properties are independent (an example of archimedean non-complete ordered field is obviously the rationals). -Thank you in advance. - -REPLY [4 votes]: The best example (in fact, provably the simplest answer to your question) is the field of formal Laurent series that Hurkyl mentioned. 
But actually, you can see that this field is complete by finding a metric that induces the order topology. (It's a surprise that this is even possible!) -The elements of this field look like $\sum_{i=n}^{\infty} a_i x^i$. Let's write $\textrm{ord}(f)$ for the index of the first nonzero term, or $\infty$ if all the terms are zero. Then the ordering is given by -$f = \sum_{i=n}^{\infty} a_i x^i \quad > \quad 0$ -when -$a_{\textrm{ord}(f)} > 0$. (Notice that this induces an ordering on the entire field, where $f > g$ iff $f - g > 0$.) -Ok, now let's give this baby a metric! You can verify on your own that -$d(f, g) = 2^{-\textrm{ord}(f-g)}$ -is a metric, and that it induces the same topology as the order on the field. -The last thing to check is that the space is complete under the given metric. This is easy once you have the right intuition. Think of it like this: each integer index is one of the display windows of a slot machine. A Cauchy sequence allows the values in each window to spin, but as you progress further down the sequence, each spinner (starting from the leftmost) eventually stops. Therefore the value to which such a sequence converges is simply the formal power series obtained by taking the coefficient of each wheel after it's already stopped. -~~~~~ -Also note that Harry Altman is right in general -- "most" nonarchimedean fields aren't second-countable, and so sequences don't suffice to characterize their topology. In this case you'd need nets or filters instead; thankfully the above field is actually metrizable, so you don't have to worry about this. (There's a nice characterization of the nonarchimedean fields that are metrizable, by the way.) -If you're interested in different notions of completeness, you'll find several (such as Cantor completeness and spherical completeness), but when it comes to ordered fields, the "right" one is Hilbert completeness.<|endoftext|> -TITLE: Complex logic puzzle -QUESTION [10 upvotes]: This is a puzzle that was sent to me a while back, I am told it is really hard, but supposedly solvable, I cant solve it, but I am interested in the solution, or any tips on how to proceed. -In front of you is an entity named Adam. Adam is a -solid block with a single speaker, through which -he hears and communicates. For all propositions -(statements that are either true or false) $p$, if -$p$ is true and logically knowable to Adam, then -Adam knows that $p$ is true. Adam is confined to his -physical form, cannot move, and only has the sense -of hearing. The only sounds Adam can make are to play -one of two pre-recorded audio messages. One message -consists of a very high note played for one second, -and the other one a very low note played for one -second. -Adam has mentally chosen a specific subset of the -Universe of ordinary mathematics. The Universe -of ordinary mathematics is defined as follows: -Let $S_0$ be the set of natural numbers: -$$S_0 = \{1,2,3,\ldots\}$$ -$S_0$ has cardinality $\aleph_0$, the smallest and only -countable infinity. -The power set of a set $X$, denoted $2^X$, is the set of all subsets -of $X$. The power set of a set always has a cardinality -larger than the set itself, $$|2^X| = 2^{|X|}$$ -Let $S_1 = S_0 \cup 2^{S_0}$. $S_1$ has cardinality $2^{\aleph_0} = \beth_1$ -Let $S_2 = S_1 \cup 2^{S_1}$. $S_2$ has cardinality $2^{\beth_1} = \beth_2$ -In general, let $S_{n+1} = S_n \cup 2^{S_n}$. 
$S_{n+1}$ has cardinality $2^{\beth_n} = \beth_{n+1}$ -The Universe of ordinary mathematics is defined as $$\bigcup_{i=0}^\infty S_i$$ -This Universe contains all sets of natural numbers, -all sets of real numbers, all sets of complex numbers, -all ordered $n$-tuples for all $n$, all functions, all -relations, all Euclidean spaces, and virtually -anything that arises in standard analysis. -The Universe of ordinary mathematics has cardinality -$\beth_\omega$. -Your goal is to determine the subset Adam is thinking -of, while Adam is trying to prevent you from doing so. -You are only allowed to ask Adam yes/no questions in -trying to accomplish your task. Adam must respond to -each question, and does so by playing a single note. -After Adam hears your question, he either chooses the -low note to mean yes and the high note to mean no, or -the high note to mean yes and the low note to mean no, -for that question only. He also decides to either tell -the truth or lie for each question after hearing it. -If at any time you ask a question which cannot be -answered by Adam without him contradicting himself, -Adam will either play the low note or the high note, -ignoring the question entirely. -Adam has given you an infinite amount of time to -accomplish your task. More specifically, the set of -both questions asked by you and notes played by Adam -can be of any cardinality. If in your strategy this -set is uncountably large, for any number of possibilities -of Adam's chosen subset, you must describe the order that -the elements of this set take place in as completely as -possible. -During your questioning, you are keeping track of -the following numbers: -$B_1 = $ The number of questions in which Adam had the option -of truthfully responding in the affirmative. (This number -and the following numbers can of course be cardinal numbers.) -$B_2 = $ The number of questions in which Adam had the option -of truthfully responding in the negative. -$B_3 = $ The number of questions in which Adam had the option -of falsely responding in the affirmative. -$B_4 = $ The number of questions in which Adam had the option -of falsely responding in the negative. -$B_5 = $ The number of questions in which Adam responded -with the high note. -$B_6 = $ The number of questions in which Adam responded -with the low note. -$B_7 = $ The number of questions. -Let $C = B_1+B_2+B_3+B_4+B_5+B_6+B_7$ -A strategy exists which will eventually allow you to -determine Adam's chosen subset. Describe such a strategy -in which $C$ is as small as possible, for all possibilities -of Adam's chosen subset. - -REPLY [4 votes]: There is an answer to this question at MO: -https://mathoverflow.net/questions/52246/seemingly-complex-logic-set-theoretic-puzzle/52303#52303<|endoftext|> -TITLE: Second order homogeneous linear difference equation with variable coefficients -QUESTION [7 upvotes]: I was wondering if you would point me to a book where the theory of second order homogeneous linear difference equation with variable coefficients is discussed. I am having difficulties in getting rigorous methods to solve some equations, see an example below. -In particular, given the recurrence relation -$X_{n+2} = \frac{3n-2}{n-1}X_{n+1} - \frac{2n}{n-1}X_n$, -two solutions are -$X(n)= n$ and $X(n) = 2^n$. -Is there an "elementary" way of arriving at these solutions? (i.e. without using transforms, etc.) -Thanks in advance. - -REPLY [8 votes]: HINT $\ $ Factor the difference operator. 
With the shift operator $\rm\ S\ X_n = X_{n+1}\ $ we have -$$\rm\ ((n-1)\ S^2 - (3\ n-2)\ S + 2\ n)\ \ X_n\ =\ ((n-1)\ S - n)\ (S - 2)\ \ X_n$$ -Now put $\rm\ Y_n = (S - 2)\ X_n = X_{n+1} - 2\ X_n\:.\ $ Then the above second-order equation reduces to $\rm\ (n-1)\ Y_{n+1} - n\ Y_n = 0\:.\ $ Solve that for $\rm\:Y_n\:$ and then plug it into the prior equation to obtain a first-order nonhomogeneous equation for $\rm\: X_n\:.$<|endoftext|> -TITLE: Why do we need to prove $e^{u+v} = e^ue^v$? -QUESTION [30 upvotes]: In this book I'm using the author seems to feel a need to prove -$e^{u+v} = e^ue^v$ -By -$\ln(e^{u+v}) = u + v = \ln(e^u) + \ln(e^v) = \ln(e^u e^v)$ -Hence $e^{u+v} = e^u e^v$ -But we know from basic algebra that $x^{a+b} = x^ax^b$. -Earlier in the chapter the author says that you should not assume $e^x$ "is an ordinary power of a base e with exponent x." -This is both a math and pedagogy question then, why does he do that? -So 2 questions really - -Do we need to prove this for such a basic property? -If we don't need to, then why does he do it? Fun? To make it memorable? Establish more neural connections? A case of wildly uncontrolled OCD? - -Also I've always taken for granted the property that $x^{a+b} = x^a x^b$. I take it as an axiom, but I actually don't know where that axiom is listed. - -REPLY [2 votes]: One definition of $e^x$ is $\displaystyle \lim_{n \rightarrow \infty}(1 + \frac{x}{n})^n$. From this definition, it doesn't automatically follow that $e^x e^y = e^{x+y}$. -In fact, it doesn't even follow immediately that $e^x = \displaystyle \lim_{n \rightarrow \infty}(1 + \frac{x}{n})^n = (\displaystyle \lim_{n \rightarrow \infty}(1 + \frac{1}{n})^n)^x = (e)^x$. What this means is $e^x$ is just a short hand notation for the limit which after some analysis we realize it as $(e)^x$. -By limit arguments, we can now show that $\displaystyle \lim_{n \rightarrow \infty}(1 + \frac{x}{n})^n = 1 + \sum_{k=1}^{\infty} \frac{x^k}{k!}$, $\forall x \in \mathbb{R}$. -Now $e^x \times e^y = (1 + \sum_{k=1}^{\infty} \frac{x^k}{k!}) \times (1 + \sum_{k=1}^{\infty} \frac{y^k}{k!})$. -Now we need to realize that we can rearrange the terms in the series and multiply terms of the two series since both of them converge absolutely. -Hence $e^x \times e^y = (1 + \sum_{k=1}^{\infty} \frac{x^k}{k!}) \times (1 + \sum_{k=1}^{\infty} \frac{y^k}{k!}) = 1 + (x+y) + (\frac{x^2 + 2xy + y^2}{2!}) + (\frac{x^3 + 3x^2y + 3xy^2 + y^3}{3!}) + \cdots$ -Now make use of the binomial theorem to get -$$e^{x} e^{y} = e^{x+y}$$ -PS: Though I have taken care to make sure the line of thought is right, you need to be careful when writing down the argument as to when you can interchange terms in an infinite series, multiply out two infinite series etc etc.<|endoftext|> -TITLE: $n$th powers in the p-adics -QUESTION [11 upvotes]: Suppose $K$ is a $p$-adic field (finite extension of the $p$-adics), and let $n$ be any integer (independent of what $p$ is). Define $U$ to be the set of all $x$ in $K$ such that $|x| = 1$ and such that $x = y^n$ for some $y$ in $K$. I would like to show that $U$ is an open set and that as a multiplicative group $U$ has finite index in the group of elements of $K$ of norm $1$. What's the best way of seeing why this is true (assuming it is)? -I pretty much have an idea why this holds.. 
in the $p$-adic case you can prove the $n$th powers are of bounded index in ${\bf Z}_{p^l}$ for each $l$ and then use an inverse limiting-type argument as $l$ goes to infinity to get this for $K = {\mathbb Q_p}$, and I think an analogous argument using powers of a uniformizer in place of powers of $p$ should work for a general $K$. But I keep thinking that this should be some well-known result or something that follows quickly from a well-known result. So I thought I'd throw this out.
-
-REPLY [9 votes]: I like Matt E's answer as an elementary and appealing way to see that $U^N$ has finite index in $U$ for all $N$. (Here I am writing $U$ for the full unit group $\mathcal{O}_K^{\times}$ of $K$ and $U^N = \{x^N \ | \ x \in U\}$.) I just want to remark that it is not much harder to give a general formula for the index $[U:U^N]$ in the general case (including local fields of positive characteristic $p$, so long as $p \nmid N$).
-The answer is that if $v$ is the normalized (i.e., $\mathbb{Z}$-valued) valuation on $K$ and $q$ is the cardinality of the residue field, then
-$$[U:U^N] = q^{v(N)} \ \# \mu_N(K).$$
-This is Theorem 12 in these notes. The treatment follows Lang's Algebraic Number Theory. (Perhaps it is worth mentioning that the argument is a bit tricky but completely elementary.)
-Let $U_n = \{x \in U \ | \ x \equiv 1 \pmod{\mathfrak{p}^n} \}$, so that the $U_n$'s are a cofinal system of open subgroups of $U$. In other words, for a subgroup of $U$ to be open, it is necessary and sufficient that it contain $U_n$ for some $n$. We want to show that $U^N$ is open. But the proof of the above theorem proceeds by showing that for all sufficiently large $r$,
-$$ U_{r+v(N)} = U_r^N \subset U^N,$$
-so indeed $U^N$ is open. Moreover, since every subgroup $H$ of $U$ of finite index $N$ contains $U^N$, it follows that every finite index subgroup of $U$ is open.
-Yet another approach is to develop the theory of the logarithm as in Exercise 5.3 of loc. cit. to give, for all sufficiently large $n$, an isomorphism of topological groups from $U_n$ to the additive group $(\mathcal{O}_K,+)$. This also implies that each $U^N$ is open and of finite index in $U$.<|endoftext|>
-TITLE: Why does a complete graph have $\frac{n(n-1)}{2}$ edges?
-QUESTION [52 upvotes]: I'm studying graphs in algorithms and complexity
-(but I'm not very good at math); as in the title:
-
-Why does a complete graph have $\frac{n(n-1)}{2}$ edges?
-
-And how is this related to combinatorics?
-
-REPLY [80 votes]: A simpler answer without binomials: A complete graph means that every vertex is connected with every other vertex. If you take one vertex of your graph, you therefore have $n-1$ outgoing edges from that particular vertex.
-Now, you have $n$ vertices in total, so you might be tempted to say that there are $n(n-1)$ edges in total, $n-1$ for every vertex in your graph. But this method counts every edge twice, because every edge going out from one vertex is an edge going into another vertex. Hence, you have to divide your result by 2. This leaves you with $n(n-1)/2$.
-
-REPLY [30 votes]: A complete graph has an edge between any two vertices. You can get an edge by picking any two vertices.
-So if there are $n$ vertices, there are $n$ choose $2$ = ${n \choose 2} = n(n-1)/2$ edges.
-Does that help?
-
-REPLY [9 votes]: $\frac{n(n-1)}{2}$ comes from a simple counting argument.
You could directly say that every edge is obtained by asking the question "how many pairs of vertices can I choose?", and this choice of vertices is counted by $C(n,2) = \frac{n(n-1)}{2}$. Or you could count the other way. Label the vertices $1,2, \ldots ,n$. The first vertex is now joined to $n-1$ other vertices. The second vertex has already been joined to vertex $1$ and hence has to be joined to the remaining $n-2$ vertices, and in general the $k^{th}$ vertex has already been joined to the previous $k-1$ vertices and hence has to be joined to the remaining $n-k$ vertices. So the total number of edges is given by $(n-1) + (n-2) + \ldots + 2 + 1 = \frac{n(n-1)}{2}$.
-
-REPLY [4 votes]: $\frac{n(n-1)}{2} = \binom{n}{2}$ is the number of ways to choose 2 unordered items from $n$ distinct items.
-In your case, you actually want to count how many unordered pairs of vertices you have, since every such pair corresponds to exactly one edge (in a simple complete graph).
-Suppose $(v,u)$ is an edge; then $v$ can be any of the vertices in the graph - you have $n$ options for this. $u$ can be any vertex that is not $v$, so you have $(n-1)$ options for this. The problem is that you counted each edge twice - one time as $(u,v)$ and one time as $(v,u)$ - so you need to divide by two, and then you get that you have $\frac {n(n-1)}{2}$ edges in a complete simple graph.<|endoftext|>
-TITLE: Using one stack to find number of permutations
-QUESTION [5 upvotes]: Suppose I have a stack and I want to find the permutations of the numbers 1,2,3,...,n.
-I can push and pop. E.g. if n=2: push,pop,push,pop gives 1,2 and push,push,pop,pop gives 2,1.
-If n=4 I can only get 14 of the 24 permutations using the stack. Does anyone know a function F(n) that gives the number of permutations the stack (only one) can produce?
-E.g. f(1)=1
-f(2)=2
-f(4)=14
-Thank you
-
-REPLY [7 votes]: The number corresponds to the Catalan Numbers, which are of the form $\displaystyle \frac{{2n \choose n}}{n+1}$.
-I believe this particular problem is an exercise in Knuth's The Art of Computer Programming, Volume I.
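-For the first few values this can be confirmed by brute force -- a minimal sketch in Python (the function name stack_outputs is ours), enumerating all push/pop schedules and collecting the distinct outputs:
-
-    from math import comb
-
-    def stack_outputs(n):
-        # depth-first search over all interleavings of pushes and pops
-        results = set()
-        def go(nxt, stack, out):
-            if len(out) == n:
-                results.add(tuple(out))
-                return
-            if nxt <= n:                      # push the next number
-                go(nxt + 1, stack + [nxt], out)
-            if stack:                         # pop the top of the stack
-                go(nxt, stack[:-1], out + [stack[-1]])
-        go(1, [], [])
-        return len(results)
-
-    for n in range(1, 7):
-        print(n, stack_outputs(n), comb(2*n, n) // (n + 1))
-    # both columns agree: 1, 2, 5, 14, 42, 132
-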
-For a hint of proof: Number of pushes $\ge$ Number of pops in any prefix of the string of pushes and pops.<|endoftext|>
-TITLE: Inverse of the sum of matrices
-QUESTION [181 upvotes]: I have two square matrices: $A$ and $B$. $A^{-1}$ is known and I want to calculate $(A+B)^{-1}$. Are there theorems that help with calculating the inverse of the sum of matrices? In the general case $B^{-1}$ is not known, but if it is necessary then it can be assumed that $B^{-1}$ is also known.
-
-REPLY [2 votes]: I know the question has been answered multiple times with great answers, but with my answer you don't need to memorize any lemmas or formulas.
-Suppose $(A+B)x=y$; then $x=(A+B)^{-1}y$. This is all we need to get. The steps are:
-(1) Start with $(A+B)x=y$.
-(2) Then $Ax=y-Bx$, so $x=A^{-1}y -A^{-1}Bx$.
-(3) Multiply $x$ in step (2) by $B$ to get
- $$Bx=BA^{-1}y -BA^{-1}Bx$$
- which is equivalent to
- $$(I+BA^{-1})Bx=BA^{-1}y $$
- or,
- $$Bx=(I+BA^{-1})^{-1}BA^{-1}y $$
-(4) Substitute this $Bx$ into the $x$ in step (2) to get
- $$x=A^{-1}y -A^{-1}(I+BA^{-1})^{-1}BA^{-1}y $$
-(5) Now factoring out the $y$ gives you the required result.
- $$x=(A^{-1} -A^{-1}(I+BA^{-1})^{-1}BA^{-1})y $$
-(6) The assumptions we have used are that $A$ and $I+BA^{-1}$ are nonsingular.
-(7) We can factor out the $A^{-1}$ to get:
-$$(A+B)^{-1}=A^{-1}(I -(I+BA^{-1})^{-1}BA^{-1})$$<|endoftext|>
-TITLE: Stalks of the tensor product presheaf of two sheaves
-QUESTION [19 upvotes]: Let $(X, \mathscr{O})$ be a ringed space and $\mathscr{F}, \mathscr{G}$ be sheaves of $\mathscr{O}$-modules on $X$.
-Define $\mathscr{H}(U) = \mathscr{F}(U) \otimes_{\mathscr{O}(U)} \mathscr{G}(U)$. I am stuck trying to prove that $\mathscr{H}_p \cong \mathscr{F}_p \otimes_{\mathscr{O}_p} \mathscr{G}_p$ as $\mathscr{O}_p$-modules.
-I know that if $X$ is a topological space, $\mathscr{F}, \mathscr{G}$ are presheaves of abelian groups on $X$ and $\mathscr{H}(U) = \mathscr{F}(U) \otimes_{\mathbb{Z}} \mathscr{G}(U)$ then $\mathscr{H}_p \cong \mathscr{F}_p \otimes_{\mathbb{Z}} \mathscr{G}_p$. This is just a consequence of the fact that tensor products commute with direct limits. But I don't know how to deal with the case when the base ring is changing.
-
-REPLY [6 votes]: Alternatively you can prove that the tensor product commutes with colimits not only in both factors, but also over the base ring. This follows easily from the adjunction.<|endoftext|>
-TITLE: What software and/or language to use to take Math lecture notes?
-QUESTION [27 upvotes]: I have terrible handwriting and I'm very good at typing, so I had an idea about taking my math lecture notes using a computer.
-I've tried using a simple syntax (using purely ASCII) but it's getting harder and harder, so I need something a bit more sophisticated.
-A friend suggested LaTeX, but said that I probably won't be able to write it fast enough to use it in "real-time". It also has quite a learning curve.
-I'm on Mac OS X. What would you suggest?
-
-REPLY [2 votes]: With pen&paper, the iPad's camera and my fingers -- but hopefully in the future with VimLatexSuite and Vim, though I am too slow in using them, although I can touch-type much faster than average people.
-
-For a very long time, I have tried to use Vim here but also looked at Emacs. However, the Vim LatexSuite developers have not responded to problems such as the one mentioned here (a script which comes with all kinds of macros, a bit like TeXShop on the Mac, but with ready macros and platform-independent), so I take notes with normal pen&paper and then photograph them with the iPad.
You can see how I take notes here with the iPad. Sometimes during lectures, the lecturer or a co-worker just mentions a good book -- ok, I will take a photo of it and add it to my notes. Sometimes during lectures I want to screenshot something, let's say from Wikipedia -- ok, I will do it. I do this kind of note-taking with the Notes Plus app on the iPad, but I am trying to use GoodNotes more because it is faster to use; I miss so much during lectures while using Notes Plus because it is slow to use. With the iPhone 4/5, you can however take these kinds of images faster because the images can be synced quickly to iCloud, making the use of Notes Plus a bit less painful -- its developer has however said that he will try to simplify the UI in the future.
-
-I use Vim most of the time, but I think inefficiently, particularly when not getting any help with problems like the one above. I try to get better at it by solving puzzles in services such as VimGolf.com here.
-
-I am still not satisfied with how I take notes; I feel there must be better ways -- still investigating.
-Highly recommended threads
-
-
-Help me to write long LaTeX equations fast with colours and possibly with other aids in Vim
-
-iPad for reading textbooks and writing math by hand?<|endoftext|>
-TITLE: What is the geometric interpretation behind the method of exact differential equations?
-QUESTION [19 upvotes]: Given an equation in the form $M(x)dx + N(y)dy = 0$ we test that the partial derivative of $M$ with respect to $y$ is equal to the partial derivative of $N$ with respect to $x$. If they are equal, then the equation is exact. What is the geometric interpretation of this?
-Furthermore, to solve the equation we may integrate $M(x) dx$ or $N(y)dy$, whichever we like better, and then add the constant of integration as a function of the other variable and solve.
-e.g. If $f(x) = 3x^2$ then $F(x) = x^3 + g(y)$.
-After we have our integral, we set its partial derivative with respect to the other variable equal to our other given derivative and solve for $g(y)$. I have done the entire homework assignment correctly, but I have no clue why I am doing these steps. What is the geometric interpretation behind this method, and how does it work?
-
-REPLY [18 votes]: Great question. The idea is that $(M(x), N(y))$ defines a vector field, and the condition you're checking is equivalent (on $\mathbb{R}^2$) to the vector field being conservative, i.e. being the gradient of some scalar function $p$ called the potential. Common physical examples of conservative vector fields include gravitational and electric fields, where $p$ is the gravitational or electric potential.
-Geometrically, being conservative is equivalent to the curl vanishing. It is also equivalent to the condition that line integrals between two points depend only on the beginning and end points and not on the path chosen. (The connection between this and the curl is Green's theorem.)
-The differential equation $M(x) \, dx + N(y) \, dy = 0$ is then equivalent to the condition that $p$ is a constant, and since this is not a differential equation it is a much easier condition to work with. The analogous one-variable statement is that $M(x) \, dx = 0$ is equivalent to $\int M(x) \, dx = \text{const}$. Geometrically, the solutions to $M(x) \, dx + N(y) \, dy = 0$ are therefore the level curves of the potential, which are always orthogonal to its gradient. The most well-known example of this is probably the diagram of the electric field and the level curves of the electrostatic potential around a dipole.
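-To make the exactness test and the recovery of the potential concrete, here is a minimal symbolic sketch in Python with sympy (the example field $M = 2xy + 1$, $N = x^2 + 3y^2$ is ours, chosen to be exact):
-
-    import sympy as sp
-
-    x, y = sp.symbols('x y')
-    M = 2*x*y + 1
-    N = x**2 + 3*y**2
-
-    # exactness test: dM/dy == dN/dx, i.e. the curl of (M, N) vanishes
-    assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0
-
-    # recover the potential p with dp/dx = M, dp/dy = N
-    p = sp.integrate(M, x)                 # x**2*y + x, up to an unknown g(y)
-    g = sp.integrate(sp.expand(N - sp.diff(p, y)), y)
-    p = p + g
-    print(p)                               # x**2*y + x + y**3
-
-The solution curves of $M \, dx + N \, dy = 0$ are then exactly the level sets $p(x,y) = c$.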
This is one way to interpret the expression $M(x) \, dx + N(y) \, dy = 0$; it is precisely equivalent to the "dot product" of $(M(x), N(y))$ and $(dx, dy)$ being zero, where you should think of $(dx, dy)$ as being an infinitesimal displacement along a level curve.
-(For those in the know, I am ignoring the distinction between vector fields and 1-forms and also the distinction between closed forms and exact forms.)<|endoftext|>
-TITLE: How should I proceed in proving this tautology?
-QUESTION [7 upvotes]: I know that the following is a tautology because I've checked its truth table. I am now attempting to prove that it is a tautology by using the rules of logic, which is more difficult. How should I proceed?
-$(p\land(p\implies q))\implies q$
-$(p\land(\lnot p \lor q))\implies q$
-$(p\land \lnot p) \lor (p\land q) \implies q$ This is the step where I'm getting stuck. I know that $(p\land \lnot p)$ is false. So it seems to me that the truth value of everything to the left of the $\implies$ operator depends on the truth value of $(p\land q)$. So what I want to do is this:
-FALSE $\lor (p\land q) \implies q$, which reduces to
-$(p\land q) \implies q$
-Is my thinking correct so far? If so, then I want to rewrite $(p\land q) \implies q$ as
-$\lnot(p \land q) \lor q$ by using the identity $p\implies q \equiv \lnot p \lor q$.
-Am I on the right track?
-
-REPLY [4 votes]: $$
-\begin{align}
-(p\land(p\rightarrow q))\rightarrow q &\Longleftrightarrow (p\land(\neg p\lor q))\rightarrow q\\
- &\Longleftrightarrow ((p\land \neg p)\lor (p\land q))\rightarrow q\\
- &\Longleftrightarrow (F\lor (p\land q))\rightarrow q & \text{Negation law}\\
- &\Longleftrightarrow \neg(F\lor (p\land q))\lor q\\
- &\Longleftrightarrow (T\land \neg(p\land q))\lor q \\
- &\Longleftrightarrow (T\land(\neg p\lor \neg q))\lor q &\text{DeMorgan's law}\\
- &\Longleftrightarrow (\neg p\lor \neg q)\lor q &\text{Identity law}\\
- &\Longleftrightarrow \neg p\lor (\neg q\lor q) \\
- &\Longleftrightarrow \neg p\lor T\\
- &\Longleftrightarrow T\\
-\end{align}
-$$<|endoftext|>
-TITLE: How can we prove Sylvester's determinant identity?
-QUESTION [40 upvotes]: Sylvester's determinant identity states that if $A$ and $B$ are matrices of sizes $m\times n$ and $n\times m$, then
-$$ \det(I_m+AB) = \det(I_n+BA)$$
-where $I_m$ and $I_n$ denote the $m \times m$ and $n \times n$ identity matrices, respectively.
-Could you sketch a proof for me, or point to an accessible reference?
-
-REPLY [12 votes]: We will calculate $\det\begin{pmatrix} I_m & -A \\ B & I_n \end{pmatrix}$ in two different ways. We have
-$$ \det\begin{pmatrix} I_m & -A \\ B & I_n \end{pmatrix}
- = \det\begin{pmatrix} I_m & 0 \\ B & I_n + BA \end{pmatrix} = \det(I_n + BA). $$
-On the other hand,
-$$ \det\begin{pmatrix} I_m & -A \\ B & I_n \end{pmatrix}
- = \det\begin{pmatrix} I_m+AB & 0 \\ B & I_n \end{pmatrix} = \det(I_m + AB). $$<|endoftext|>
-TITLE: transpose of positive matrix is positive
-QUESTION [6 upvotes]: How to prove it?
-I am talking about matrices which satisfy:
-$$( Ax , x ) > 0\quad \text{ for any}\quad \;x \neq 0.$$
-How to prove that $A^T\;$ is also positive?
-$$x^T A x = ( x^T A x )^T$$
-and what?
- 
-REPLY [8 votes]: Hint: Write the inner product as $x^T A x$ and use the fact that that expression is its own transpose (since it's a 1-by-1 matrix).<|endoftext|>
-TITLE: Coefficients for Taylor series of given rational function
-QUESTION [5 upvotes]: Looking at an earlier post Finding the power series of a rational function, I am trying to get a closed formula for the $n$th coefficient in the Taylor series of the rational function $(1-x)/(1-2x-x^3)$. Is it possible to use any of the tricks in that post to not only obtain specific coefficients, but an expression for the $n$th coefficient? If $T(x)$ is the Taylor series, I am looking at the equality $(1-x) = (1-2x-x^3)\,T(x)$ and differentiating, but I am not able to see a pattern giving me an explicit formula for the coefficients.
-
-REPLY [5 votes]: Denoting the coefficients of $T(x)$ as $t_i$ (so $T(x) = t_0 + t_1 x + ...$), consider the coefficient of $x^a$ in $(1 - 2x - x^3) T(x)$. It shouldn't be hard to show that it's $t_a - 2t_{a-1} - t_{a-3}$.
-So we have $t_0 - 2t_{-1} - t_{-3} = 1$, $t_1 - 2t_0 - t_{-2} = -1$, and $a > 1 \Rightarrow t_a - 2t_{a-1} - t_{a-3} = 0$. You can get $t_0$, $t_1$. Do you know how to solve the discrete recurrence $t_{a+3} = 2t_{a+2} + t_{a}$?
-(If not, here's a hint: suppose you had a number $\alpha$ such that $t_a = \alpha^a$ satisfied the recurrence. What constraint on $\alpha$ can you prove? Now consider a linear combination of the possible such solutions, $t_a = b_0 \alpha_0^a + ... + b_n \alpha_n^a$, and solve $n$ simultaneous equations from your base cases $t_0$, $t_1$, and use $0 = t_{-1} = t_{-2} = ...$ if necessary).<|endoftext|>
-TITLE: What math should a computer scientist take in college?
-QUESTION [16 upvotes]: I'm a computer science major and like many of us we have to take two additional sciences. These two additional science courses are in addition to three semesters of calculus, two semesters of physics, and one semester each of linear algebra and discrete math.
-I could take the easy way out and take an algebra-based astronomy course, but I want the most bang for my buck as it relates to my major. What other types of math classes should I take? I do enjoy math a lot and have always done well.
-
-REPLY [3 votes]: I would advise you to take advantage of college to study something sufficiently different, if available; it could even be advantageous from a professional point of view:
-
-biology / genomics / bio-modelization / physiology
-captors / sensors / signal analysis / astronomy / experimental engineering
-linguistics / phonology / ethnography / library science
-
-My reasoning is that you are already accumulating sufficient CS culture to train yourself in nearby areas. You will work with other people with a strong culture in C.S. and math. It's probably only in college that you will have hands-on experience with something else.
-A potential recruiter can appreciate this openness, and most of the topics I mention are developing strong interactions with computer science.<|endoftext|>
-TITLE: German sofa primes: Can both $q$ and $\frac{q^3+1}{2}$ be prime?
-QUESTION [14 upvotes]: Is there an odd prime integer $\displaystyle q$ such that $\displaystyle p= \frac{q^3+1}{2}$ is also prime?
-
-A quick search did not find any, nor a pattern in the prime factorization of p. This is a possible quick solution to the unitary and Ree cases of ME.16954.
- 
-REPLY [22 votes]: Isn't this divisible by $\displaystyle \frac{q+1}{2}$?<|endoftext|>
-TITLE: Matrices with real entries such that $(I -(AB-BA))^{n}=0$
-QUESTION [13 upvotes]: I was just trying out some problems when I couldn't solve this question:
-
-Do there exist $n \times n$ matrices $A$ and $B$ with real entries such that $$ \Bigl(I - (AB-BA)\Bigr)^{n}=0?$$
-
-I really don't know how to proceed for this question. What I did was to find some examples, but that didn't quite work. Any ideas on how one goes about thinking about such problems and how to solve them would be of great help.
-
-REPLY [7 votes]: First off, there is no point in having the exponent be equal to the dimension of the space. On the other hand one does not want that dimension to be zero, since there exists an example with $0\times 0$ matrices (one has $I=0$ there). So I'll exclude that, and prove
-
-For $\def\N{\Bbb N}n\in\N_{>0}$ and $k\in\N$, and $F$ a field of characteristic zero, there are no $A,B\in M_n(F)$ with
- $$ (I-(AB-BA))^k=0. $$
-
-Suppose such $A,B$ existed. In other words $M=I-AB+BA$ is nilpotent. This implies the characteristic polynomial of $M$ is $X^n$, and in particular the trace of $M$ (which is minus the coefficient of $X^{n-1}$ in that characteristic polynomial) is zero. But $\def\tr{\operatorname{tr}}\tr M=\tr I-\tr(AB)+\tr(BA)=\tr I=n\neq0$, contradiction.<|endoftext|>
-TITLE: Estimating population size
-QUESTION [5 upvotes]: Let's suppose there are $n$ real numbers $a_1 < ... < a_n$ uniformly selected from the interval $[0, 1)$. If one knows $k$ numbers in consecutive positions $a_i < ... < a_{i+k-1}$, how good an estimator of $n$ is $(k - 1) / (a_{i+k-1} - a_i)$? What other estimators are possible/better?
-NOTE: $n \gg k$.
-
-REPLY [4 votes]: As in the book A First Course in Order Statistics (see, in particular, Section 2.5), let us denote by $W_{i,j:n}$ the spacing $W_{i,j:n} = U_{j:n} - U_{i:n}$, $1 \leq i < j \leq n$, where $U_{1:n} < \cdots < U_{n:n}$ are $n$ order statistics from a ${\rm uniform}(0,1)$ distribution. By equation (2.5.21) of that book, the density function of $W_{i,j:n}$ is given by
-$$
-f_{W_{i,j:n} } (w) = \frac{{n!}}{{(j - i - 1)!(n - j + i)!}}w^{j - i - 1} (1 - w)^{n - j + i} ,\;\; 0 < w < 1,
-$$
-so that $W_{i,j:n}$ has a ${\rm Beta}(j - i,n - j + i + 1)$ distribution (which depends only on $j-i$ and not on $i$ and $j$ individually). Having the distribution of $W_{i,j:n}$, you can check how good $\frac{{j - i}}{{U_{j:n} - U_{i:n} }}$ is as an estimator for $n$, which is what you asked (in different notation).
-EDIT: Using the density function of $W_{i,j:n}$, one can easily calculate the bias (cf. Trevor's answer). My calculation shows that
-$$
-{\rm E}\bigg[\frac{{j - i}}{{U_{j:n} - U_{i:n} }}\bigg] = \frac{{j - i}}{{j - i - 1}}n,
-$$
-hence the estimator gets better and better as $j-i$ grows. (The expectation is infinite when $j-i=1$; cf. Dinesh's answer.)<|endoftext|>
-TITLE: Why is a general formula for Kostka numbers "unlikely" to exist?
-QUESTION [28 upvotes]: In reference to Stanley's Enumerative Combinatorics Vol. 2: right after he has defined Kostka numbers (section 7.10), he mentions that it is unlikely that a general formula for $K_{\lambda\mu}$ exists, where $K_{\lambda\mu}$ is the number of semistandard Young tableaux of shape $\lambda$ and type $\mu$ with $\lambda\vdash n$ and $\mu$ a weak composition of $n$. Why? In particular, is this an expression of something rigorous, and if so, what?
- 
-REPLY [23 votes]: This is a really good question, the kind of question I think about from time to time. The problem with this question is that it is so imprecise that it is basically open ended. Here are some variations on the way to make the question precise.
-1) By a "general formula" you mean a product of some kind of factorials. This is rather uninteresting, since it's unclear what those factorials would be. Kostka numbers tend to be chaotic, so I am sure you can find relatively small partitions with annoyingly large prime factors. What do you do next? One can also ask about asymptotic results which don't allow this, in the flavor of de Bruijn (see Section 6). But again there are too many choices to consider, and none are really enlightening.
-2) There is a formal notion of #P-completeness, a computational complexity class, loosely corresponding to hard counting problems (see WP). It is known that Kostka numbers are #P-complete (see this paper). This means that computing Kostka numbers in full generality is just as hard as computing the number of 3-colorings in graphs, 3SAT solutions, etc.
-3) Continuing with the theme: "formula" as a polynomial algorithm. Such "formulas" do exist in some special cases. For example, if the number of rows in both partitions is fixed, Kostka numbers become the number of integer points in a finite dimensional polytope (see e.g. this nice presentation), which can be computed in polynomial time (see this book).
-4) Alternatively, there is a rather weak notion of "formula" due to Wilf (see here, by subscription). Roughly, he asks for an algorithm which is asymptotically faster than trivial enumeration. But then one can use the "inverse Kostka numbers", which have their own combinatorial interpretation (see here) and are similar but perhaps slightly faster to compute. Since Wilf only asks for a little better than the trivial bound, one can compute the whole matrix of Kostka numbers, which has sub-exponential size $p(n)$, while Kostka numbers are exponential under mild conditions.
-Hope this helps.<|endoftext|>
-TITLE: Unbounded subset of $\mathbb{R}$ with positive Lebesgue outer measure
-QUESTION [10 upvotes]: The set of rational numbers $\mathbb{Q}$ is an unbounded subset of $\mathbb{R}$ with Lebesgue outer measure zero. In addition, $\mathbb{R}$ is an unbounded subset of itself with Lebesgue outer measure $+\infty$. Therefore the following question came to my mind: is there an unbounded subset of $\mathbb{R}$ with positive Lebesgue outer measure?
-If there is, can you give me an example?
-
-REPLY [11 votes]: I guess you mean with positive and finite outer measure. An easy example would be something like $[0,1]\cup\mathbb{Q}$. But perhaps you also want to have nonzero measure outside of each bounded interval? In that case, consider $[0,1/2]\cup[1,1+1/4]\cup[2,2+1/8]\cup[3,3+1/16]\cup\cdots$. If you want the set to have positive measure in each subinterval of $\mathbb{R}$, you could let $x_1,x_2,x_3,\ldots$ be a dense sequence (like the rationals) and take a union of open intervals $I_n$ such that $I_n$ contains $x_n$ and has length $1/2^n$.
-On the other hand, it is often useful to keep in mind that every set of finite measure is "nearly bounded". That is, if $m(E)<\infty$ and $\epsilon>0$, then there is an $M\gt0$ such that $m(E\setminus[-M,M])<\epsilon$. One way to see this is by proving that the monotone sequence $(m(E\cap[-n,n]))_{n=1}^\infty$ converges to $m(E)$.<|endoftext|>
-TITLE: Invariant under transformation $i\mapsto -i$ implies real?
-QUESTION [7 upvotes]: When one has an expression in terms of $i$, one can send $i$ to $-i$ and, if the expression remains unchanged, one can conclude that the expression is, in fact, real. Analogous statements hold for expressions involving radicals. Why is this?
-One admittedly trivial example is the expression
-$$\frac{1}{x-i}+\frac{1}{x+i} .$$
-
-REPLY [11 votes]: This is a basic fact in Galois theory, namely that for Galois extensions $F/K$ the fixed field of the Galois group $G = \text{Gal}(F/K)$ is precisely $K$. The quadratic extension $\mathbb{Q}(i)/\mathbb{Q}$ is an example of such an extension, as is the quadratic extension $\mathbb{C}/\mathbb{R}$. (To apply this to your example one can do a few things: either one regards the expression as a power series and applies the above to each term, or one regards the expression as a family of elements of $\mathbb{C}$ and applies the above separately to each of them.)
-It is instructive to see how this fails when the extension is not Galois. A simple example is the extension $\mathbb{Q}(\sqrt[3]{2})/\mathbb{Q}$. This extension has no nontrivial automorphisms because the other two roots of $x^3 - 2$ are complex, so the fixed field of the "Galois group" is all of $\mathbb{Q}(\sqrt[3]{2})$. To get an analogous statement to the one above one has to pass to the splitting field $\mathbb{Q}(\sqrt[3]{2}, \omega)$, which is Galois.
-
-REPLY [3 votes]: If $x+iy = x-iy$ then $y=0$.<|endoftext|>
-TITLE: Cardinality of the set of all real functions of real variable
-QUESTION [52 upvotes]: How does one compute the cardinality of the set of functions $f:\mathbb{R} \to \mathbb{R}$ (not necessarily continuous)?
-
-REPLY [6 votes]: This answer is based on, but differs slightly from, user Asaf Karagila's above.
-
-First, observe that by definition, $\{\text{all real functions of real variable}\}:= \{f: \; f: \mathbb{R}\to\mathbb{R}\} := \mathbb{R}^\mathbb{R}$.
-The question is about $|\{\text{all real functions of real variable}\}|$, so examine an arbitrary real function of real variable: $f\,\colon\,\mathbb{R}\to\mathbb{R}.$
- By inspection, $f\,\colon\,\mathbb{R}\to\mathbb{R}$ is the set $\{(r, f(r)) : r \in \mathbb{R}\} \subseteq \mathbb{R} \times \mathbb{R}$, i.e. an element of $P(\mathbb{R} \times \mathbb{R})$.
-Thus, $\color{green}{|\mathbb{R}^{\mathbb{R}}| \le |P(\mathbb{R}\times\mathbb{R})|}$.
-Before continuing, let's try to simplify $|P(\mathbb{R}\times\mathbb{R})|$. Observe that $|\mathbb{R}| = |\mathbb{R}^k| \, \forall \, k \in \mathbb{N}$. Its proof by mathematical induction requires the base case $|\mathbb{R}| = |\mathbb{R}^2|$, one proof of which is: $|\mathbb{N}| = |\mathbb{N}\times\mathbb{N}| \implies |\mathbb{R}| = |2^{\mathbb{N}}| = |2^{\mathbb{N}\times\mathbb{N}}| = |2^\mathbb{N}\times 2^\mathbb{N}| = |\mathbb{R}\times\mathbb{R}|$.
-Verily, $\mathbb{R} \neq \mathbb{R}^2$. Howbeit, for infinite sets $A,B$: $|A| = |B| \Longrightarrow \require{cancel} \cancel{\Longleftarrow} |P(A)| = |P(B)|$.
-(The converse is discussed here.)
-Thus, $|P(\mathbb{R})| = |P(\mathbb{R}\times\mathbb{R})| \implies \color{green}{|\mathbb{R}^\mathbb{R}| \le |P(\mathbb{R}\times\mathbb{R})|} = |P(\mathbb{R})|$. Now scrutinise $|P(\mathbb{R})|$:
-● $\color{#A9057D}{|P(\mathbb{R})| = |2^{\mathbb{R}}|}$, where $2^{\mathbb{R}} := \{f : \; f: \mathbb{R} \to \{0,1\}\}$,
-● Every $f: \mathbb{R} \to \{0,1\}$ is a particular case of a function from $\mathbb{R}$ to $\mathbb{R}$, thus $\color{#EC5021}{2^{\mathbb{R}} \subsetneq \mathbb{R}^\mathbb{R}}$.
-Altogether, $\color{#A9057D}{|P(\mathbb{R})| =} \color{#EC5021}{|2^\mathbb{R}| \le} \color{green}{|\mathbb{R}^\mathbb{R}| \le |P(\mathbb{R}\times\mathbb{R})|} = |P(\mathbb{R})|$
-$\implies |P(\mathbb{R})| \leq |\mathbb{R}^\mathbb{R}| \leq |P(\mathbb{R})| \implies \color{#A9057D}{\underbrace{|P(\mathbb{R})|}_{= |2^\mathbb{R}|}} = |\mathbb{R}^\mathbb{R}| $.<|endoftext|>
-TITLE: How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes?
-QUESTION [44 upvotes]: Possible Duplicate:
-Highest power of a prime $p$ dividing $N!$
-
-How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes?
-
-REPLY [7 votes]: HINT $\: $ The power of a prime $\rm\:p\:$ dividing $\rm\ n!\:$ jumps from $\rm\: p-1\:$ for $\rm\: n = p^2-1\:$ to $\rm\: p+1\: $ for $\rm\: n = p^2\:$ since there are $\rm\:p-1\:$ naturals $\rm < p^2\ $ divisible by $\rm\:p\:,\:$ viz. $\rm\ p,\ 2\:p,\:\cdots\:,\: (p-1)\:p\:.\ $ Now put $\rm\ p = 5\:.$<|endoftext|>
-TITLE: About the limit of the coefficient ratio for a power series over complex numbers
-QUESTION [14 upvotes]: This is my first question on math.SE; I hope that it is suitable here!
-I'm currently self-studying complex analysis using the book by Stein & Shakarchi, and this is one of the exercises (p.67, Q14) that I have no idea where to start.
-
-Suppose $f$ is holomorphic in an open set $\Omega$ that contains the closed unit disc, except for a pole at $z_0$ on the unit circle. Show that if $f$ has the power series expansion $\sum_{n=0}^\infty a_n z^n$ in the open unit disc, then
-$\displaystyle \lim_{n \to \infty} \frac{a_n}{a_{n+1}} = z_0$.
-
-If the limit is taken of $|\frac{a_n}{a_{n+1}}|$, and we assume the limit exists, then by the radius of convergence we know that the answer is $1$. But what can we say about the limit of the coefficient ratio itself, which is a complex number? I've tried to expand the limit directly by definition, with no luck. And I couldn't see how we can apply any of the standard theorems in complex analysis.
-I hope to get some initial directions about how we can start thinking about the problem, rather than a full answer. Thank you for the help!
-
-REPLY [7 votes]: Let's try another way to solve this problem.
-Construct a contour which consists of two parts. The first part is a circle a little bit larger than the unit circle, except near the point $z_0$. We call it $C_1$, and make it have absolute value strictly larger than $1+\delta$ for some $\delta$.
-The second part is a small circle with radius $\epsilon$.
-Suppose the pole $z_0$ has degree $k$; then $f(z)=\frac{g(z)}{(z-z_0)^k}$, where $g(z)$ is holomorphic.
-For $C_1$, we have that $|\int_{C_1}\frac{g(\zeta)}{\zeta^n}\frac{1}{(\zeta-z_0)^k}d\zeta|\leq \frac{1}{\epsilon^k}\int_{C_1}\frac{M}{(1+\delta)^n}d\zeta \to 0$
-For $C_\epsilon$, we have
-$$\int_{C_\epsilon}\frac{g(\zeta)}{\zeta^{n+1}}\frac{1}{(\zeta-z_0)^k}d\zeta=\int_{-\theta_0}^{-\pi+\theta_0} \frac{g(z_0+\epsilon e^{i\theta})}{(z_0+\epsilon e^{i\theta})^{n+1}}e^{-i\theta k} d\theta$$
-In the same way,
-$$\int_{C_\epsilon}\frac{g(\zeta)}{\zeta^{n+2}}\frac{1}{(\zeta-z_0)^k}d\zeta=\int_{-\theta_0}^{-\pi+\theta_0} \frac{g(z_0+\epsilon e^{i\theta})}{(z_0+\epsilon e^{i\theta})^{n+2}}e^{-i\theta k} d\theta$$
-By multiplying the second one by $z_0$ and computing the difference, we get
-$$ \Delta=\int_{-\theta_0}^{-\pi+\theta_0} \frac{g(z_0+\epsilon e^{i\theta})}{(z_0+\epsilon e^{i\theta})^{n+2}}e^{-i\theta k} \epsilon e^{i\theta}d\theta \to 0$$ as $\epsilon \to 0$,
-which means
-$$ \frac{\int_{C_\epsilon}\frac{g(\zeta)}{\zeta^{n+1}}\frac{1}{(\zeta-z_0)^k}d\zeta}{z_0\int_{C_\epsilon}\frac{g(\zeta)}{\zeta^{n+2}}\frac{1}{(\zeta-z_0)^k}d\zeta} \to 1 $$ as $\epsilon \to 0$.
-Combining all of these with Cauchy's integral formulas $a_0=f(0)=\frac{1}{2\pi i}\int_C\frac{f(\zeta)}{\zeta}d\zeta$ and $n!a_n=f^{(n)}(0)=\frac{n!}{2\pi i}\int_C\frac{f(\zeta)}{\zeta^{n+1}}d\zeta$, and splitting $C$ into $C_1$ and $C_\epsilon$, we only need to choose the $\epsilon$'s and $\delta$'s carefully to complete our proof.<|endoftext|>
-TITLE: Preimaging units to units
-QUESTION [12 upvotes]: I'm interested in (unity-preserving) homomorphisms $f: S \to T$ between (commutative, with-unity) rings $S$ and $T$ so that if $f(x)$ is a unit, then $x$ was a unit to start with. For example, an inclusion of fields has this property, but a nontrivial localization like $\mathbb{Z} \to \mathbb{Q}$ does not; it sends $2 \in \mathbb{Z}$, which is not a unit, to $2 \in \mathbb{Q}$, which is. I'm interested in answers to either of the following two questions:
-
-Is there a name for such $f$ that do preserve units through preimages?
-If not, is there a class of maps broadly recognized as useful that enjoy this property?
-
-I'm actually only interested in the case of a surjection $R \twoheadrightarrow R_0$ whose kernel is an ideal $I$ satisfying $I^2 = 0$ (i.e., a square-zero extension of $R_0$ by $I$). But, if this property has a name in general or if square-zero extensions occur as special types of some broadly-recognized class of maps with this property, I'd like to know so I can chat about this with other people and don't go picking a name nobody recognizes.
-I'm aware that this can also be viewed as a lifting property, if that jogs anyone's memory of useful geometry words: what kind of map should $\operatorname{spec} S \to \operatorname{spec} R$ be so that for any pair of maps $\operatorname{spec} S \to \mathbb{G}_m$ and $\operatorname{spec} R \to \mathbb{A}^1$ with commuting square $$\begin{array}{ccc} \operatorname{spec} S & \to & \mathbb{G}_m \\ \downarrow & & \downarrow \\ \operatorname{spec} R & \to & \mathbb{A}^1,\end{array}$$ there exists a lift $\operatorname{spec} R \to \mathbb{G}_m$ making both triangles commute?
-
-REPLY [7 votes]: The following result gives a characterization of when the property you ask about holds in the case of a surjective morphism.
-Claim: If $f: S \to T$ is a surjective homomorphism of rings, then $f^{-1}(T^{\times}) = S^{\times}$ if and only if the kernel of $f$ is contained in the Jacobson radical of $S$.
-Proof: If $f(s) \in T^{\times}$ and $f(s') = f(s)^{-1}$, then $f(s s') = 1$.
-Thus the claim reduces to checking that $f^{-1}(1) \subset S^{\times}$,
-i.e. that $1 + \mathrm{ker}(f) \subset S^{\times}$. This is equivalent to asking
-that $\mathrm{ker}(f)$ be contained in the Jacobson radical of $S$. QED
-In general the nilradical of $S$ is contained in the Jacobson radical,
-and so if $f$ is surjective and the kernel of $f$ is a nil ideal then the property you are interested in holds. (This generalizes your square-zero extension example.)
-If $S$ is a Jacobson ring, e.g. a finite type algebra over a field or over $\mathbb Z$,
-then the Jacobson radical of $S$ coincides with the nilradical, and so for such
-$S$ your condition holds (for $f$ surjective) precisely when the kernel of $f$ is a nil ideal.<|endoftext|>
-TITLE: Why do we use the smash product in the category of based topological spaces?
-QUESTION [17 upvotes]: I was telling someone about the smash product and he asked whether it was the categorical product in the category of based spaces and I immediately said yes, but after a moment we realized that that wasn't right. Rather, the categorical product of $(X,x_0)$ and $(Y,y_0)$ is just $(X\times Y,(x_0,y_0))$. (It seems like in any concrete category $(\mathcal{C},U)$, if we have a product (does a concrete category always have products?) then it must be that $U(X\times Y)=U(X)\times U(Y)$. But I couldn't prove it. I should learn category theory. Maybe functors commute with products or something.) Anyways, here's what I'm wondering: is the main reason that we like the smash product just that it gives the right exponential law? It's easy to see that the product $\times$ I gave above has $F(X\times Y,Z)\not\cong F(X,F(Y,Z))$ just by taking e.g. $X=Y=Z=S^0$.
-
-REPLY [3 votes]: This way $\mathbf{hCW_*}$, the pointed homotopy category of CW complexes, becomes a closed symmetric monoidal category with tensor product $\wedge$ and unit $S^0$. This is important, because once we observe that the functor
-$$\Sigma^\infty: \mathbf{hCW_*}\to \mathbf{hSp}$$
-to the stable homotopy category is lax monoidal, it would follow easily that $\Sigma^\infty$ sends monoids to monoids. In particular, since $S^0$ is obviously a commutative monoid, the sphere spectrum $\mathbb{S}$ is a commutative ring spectrum. In English, the sphere spectrum $\mathbb{S}$ defines a cohomology theory that has a skew-commutative cup product.<|endoftext|>
-TITLE: Differential Geometry of curves and surfaces: bibliography?
-QUESTION [8 upvotes]: Dear all, next year, I will probably teach a one-semester course on Differential Geometry of curves and surfaces. Its content must be something along the lines of the first four chapters of Do Carmo's famous book, and my plan is indeed to follow it. But I would like to know some other, more modern, references. Particularly some including material for Matlab or Maple. I'm already aware of the book of John Oprea: any other references?
-I would also be grateful to know about some links to webpages on this subject.
-Any hints, ideas, references, suggestions... are welcome. Thank you in advance.
-
-REPLY [7 votes]: My favorite is Andrew Pressley's "Elementary Differential Geometry." The material in it is very standard, yet the treatment is more modern than do Carmo's. However, I don't think there's much in the way of material for MatLab or Maple...<|endoftext|>
-TITLE: What is the meaning of a number's "representation" on Wolfram Alpha?
-QUESTION [14 upvotes]: When searching a number on Wolfram Alpha, one of the results is its representation.
-For example, for 8549:
-
-8549 has the representation 8549 = $5·2^6·3^3-91$.
-
-Similarly for 75290:
-
-75290 has the representation 75290 = $3·2^9·7^2+26$.
-
-What is the significance of these representations?
-
-REPLY [2 votes]: What it seems to do, when $n$ is your number, is find a $q$ in the range $q \in (n-100,n+100)$ maximizing the number of prime factors, and then write $n=q+(n-q)$. Doing this it can easily find, for example, that $513=2^9+1$. However, for the numbers you gave it is not really interesting.<|endoftext|>
-TITLE: Existence of a continuous function with pre-image of each point uncountable
-QUESTION [13 upvotes]: Does there exist a continuous function $f : [0, 1] → [0, 1]$ such that the pre-image $f^{−1}(y)$ of any point $y \in [0, 1]$ is uncountable?
-
-REPLY [21 votes]: Yes.
-One nice way to see that is to take a Peano curve $c: [0,1] \to [0,1]^{2}$ (that is, a continuous surjection) and to compose it with the projection $p(x,y) = x$. Then $f = p \circ c$ will have the desired property.
-Added. This is a folklore construction illustrating how far a continuous function can be from the graphs we can actually draw (or imagine). As mentioned by Jonas in the comments, this construction appears in at least two MO threads, namely here and here. I don't know where this example appeared first; I suspect that it can be found in Hausdorff's Mengenlehre, but Peano or Hilbert may have noticed it before that. They do not mention it in their original papers, though: Hilbert's paper and Peano's paper, links taken from the Wikipedia page on space-filling curves.<|endoftext|>
-TITLE: 1D random walk-probability to go back to origin
-QUESTION [13 upvotes]: Suppose there is a random walk starting at the origin such that the probability to move right is $\frac13$ and the probability to move left is $\frac23$.
-What is the probability to return to the origin?
-
-REPLY [4 votes]: The problem you refer to is the study of return probability for a random walk on a lattice. In your case, the lattice is the 1-D lattice of integers. This problem is very well studied on general lattices. A famous theorem by Polya says that while the probability of return is $1$ for symmetric random walks (i.e., all moves are equally likely, unlike in this question) in $1$- and $2$-dimensional lattices, it is strictly less than $1$ in all higher dimensions. See here for more details.
-The solution posted by mjqxxxx is very clever and is perfectly valid. If you are interested in other solutions, a very systematic way of studying this problem is through the use of generating functions. See these lecture notes for more information (in fact, these notes have the solution to the problem you posed).<|endoftext|>
-TITLE: Cost of Solving Linear System
-QUESTION [7 upvotes]: As most of us are aware, the cost for solving a linear system ("exactly") with Gauss Elimination and other similar methods, with a few right-hand sides and where the matrix has no structure, is $\mathcal{O}(N^3)$ where $N$ is the system size.
-I am wondering about the lower bound for solving a linear system. An obvious lower bound is $\mathcal{\Omega}(N^2)$ (since the information content is $\mathcal{O}(N^2)$). Are there lower bounds better than $\mathcal{\Omega}(N^2)$ for solving the linear system? Is there a way to prove that the lower bound of $\mathcal{\Omega}(N^2)$ can never be hit for a matrix with no special structure? (Assume that we are solving a system with only one right-hand side.)
-Also, are there other algorithms which solve these systems "exactly" whose cost is less than $\mathcal{O}(N^3)$? I am aware of Strassen's algorithm, which performs matrix multiplication in $\mathcal{O}(N^{\log_2 7})$. Can this be used to solve a linear system in $\mathcal{O}(N^{\log_2 7})$?
-(Note: The system has no special structure. Say the matrix is just made up of entries drawn out of a random number generator. I am not worried about the stability and other numerical intricacies of the method as of now. I would appreciate it if someone could point to some work done in this regard.)
-
-REPLY [6 votes]: I am assuming we are talking in terms of the number of multiplications.
-Since the de-facto algorithm for solving linear equations is Gaussian Elimination, let us only consider Gaussian Elimination.
-Note that Gaussian Elimination can invert matrices in the same complexity as it solves linear equations.
-It is well known that Matrix Inversion is as easy (or hard, if you will) as Matrix Multiplication (see this: http://www.lehigh.edu/~gi02/m242/08linstras.pdf).
-The best known lower bound for matrix multiplication is apparently $\Omega(n^2)$, I believe.
-I doubt you can find better lower bounds than that. In any case, the lower bounds you seek would very likely be the same as the lower bounds for Matrix Multiplication.
-Of course, that does not really answer your question exactly, but I hope that helps.<|endoftext|>
-TITLE: When can we exchange order of two limits?
-QUESTION [23 upvotes]: My questions are about a sequence or function with several variables.
-
-I vaguely remember that a while ago one of my teachers said taking limits of a sequence or function with respect to different variables is not exchangeable everywhere, i.e.
-$$ \lim_n \lim_m a_{n,m} \neq \lim_m \lim_n a_{n,m}, \quad \lim_x \lim_y f(x,y) \neq \lim_y \lim_x f(x,y).$$
-So my question is: what are the cases or examples when one can exchange the order of taking limits and when one cannot, to your knowledge? I would like to collect the cases together, to be aware of their differences and avoid making mistakes. If you could provide some general guidelines, that would be even nicer!
-To give you an example of what I am asking about, this is a question that confuses me: Assume $f: [0, \infty) \rightarrow (0, \infty)$ is a function satisfying $$ \int_0^{\infty} x f(x) \, dx < \infty. $$ Determine the convergence of the series $\sum_{n=1}^{\infty} \int_n^{\infty} f(x) dx$.
-The answer I saw is to exchange the order of $\sum_{n=1}^{\infty}$ and $\int_n^{\infty}$ as follows: $$ \sum_{n=1}^{\infty} \int_n^{\infty} f(x) dx = \int_1^{\infty} \sum_{n=1}^{\lfloor x \rfloor} f(x) dx = \int_1^{\infty} \lfloor x \rfloor f(x) dx \leq \int_1^{\infty} x f(x) dx < \infty, $$ where $\lfloor x \rfloor$ is the greatest integer less than or equal to $x$. In this way, the answer proves the series converges. I was wondering why these steps are valid. Is there some special meaning to the first equality? Because it looks similar to the tail sum formula for the expectation of a random variable $X$ with possible values $\{ 0,1,2,...,n\}$: $$\sum_{i=0}^n i P(X=i) = \sum_{i=1}^n P(X\geq i).$$ The formula is from Page 171 of Probability by Jim Pitman, 1993. Are they really related?
-
-Really appreciate your help!
-
-REPLY [2 votes]: $f(x,y)=\frac{x-y}{x+y}.$
-You can also check the limit at the point $(0,0)$.
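-For this example the two iterated limits can be computed symbolically -- a minimal sketch in Python with sympy:
-
-    import sympy as sp
-
-    x, y = sp.symbols('x y')
-    f = (x - y) / (x + y)
-
-    print(sp.limit(sp.limit(f, y, 0), x, 0))   # 1   (inner limit in y first)
-    print(sp.limit(sp.limit(f, x, 0), y, 0))   # -1  (inner limit in x first)
-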
A typical example from calculus.<|endoftext|>
-TITLE: Cardinality of relations set
-QUESTION [6 upvotes]: I was thinking about the cardinality of the set of all symmetric relations, for example on $\mathbb{Z}$. I know that if I have a finite set (which contains $n$ elements), there are $2^{\frac{n(n+1)}{2}}$ symmetric relations. But what happens when we are talking about infinite sets ($\mathbb{Z}$ for example)? How many symmetric relations are there on $\mathbb{Z}$ or $\mathbb{N}$? What about equivalence relations?
-I just want to know how to think about such problems. If you can, please show me some examples of similar problems (maybe with hints on how to solve them).
-
-REPLY [5 votes]: To see that the number of equivalence relations on an infinite set $A$ of size $\kappa$ is $2^{\kappa}=|{\mathcal P}(A)|$ (i.e., the largest possible size), recall that any set of size $\kappa$ can be split into $\kappa$ sets of size $\kappa$: $\kappa=\kappa\times\kappa$. Say $A$ has size $\kappa$. Fix a partition $A=\bigcup_{i\in I}A_i$, where $I$ is an index set of size $\kappa$, each $A_i$ has size $\kappa$, and $A_i\cap A_j=\emptyset$ if $i\ne j$. Also, for each $A_i$, find a partition $A_i=B_i\cup C_i$ where each $B_i,C_i$ is non-empty.
-Given $D\subseteq I$, let $E_D$ be the equivalence relation on $A$ defined as follows:
-If $i\in D$, then $B_i$ and $C_i$ are equivalence classes. If $i\in I$ is not in $D$, then the whole of $A_i$ is an equivalence class.
-More precisely, in case the above description is not clear: $a E_D b$ for $a,b\in A$, iff both $a,b$ are in the same $A_i$ (for some -unique- $i\in I$) and (if $i\in D$, then both $a,b$ are in $B_i$ or both are in $C_i$).
-Then, from $E_D$, we can reconstruct $D$, so the map $f:{\mathcal P}(I)\to {\mathcal E}$, where ${\mathcal E}$ is the set of equivalence relations on $A$, given by $f(D)=E_D$ is injective, and $|{\mathcal E}|\ge 2^\kappa$.
-(In fact, for this inequality to hold, all we needed is that $2\times\kappa=\kappa$: We could have taken each $A_i$ of size 2 and each $B_i,C_i$ a singleton. However, the argument below uses that $\kappa\times\kappa=\kappa$.)
-Since each class is a subset of $A$, and different classes are disjoint, each equivalence relation has at most $\kappa=|A|$ classes (it cannot have more classes than elements of $A$), and each class is chosen from ${\mathcal P}(A)$, so there are at most $(2^{\kappa})^{\kappa}=2^{|\kappa\times\kappa|}=2^\kappa$ equivalence relations.
-It follows that the number of equivalence relations is $2^\kappa$, as claimed.
-Since each equivalence relation is symmetric, this also shows that there are precisely $2^\kappa$ symmetric relations on $A$ if $|A|=\kappa$.
-
-Let me close with a technical remark: If $A$ is countable (so $\kappa$ is usually denoted $\aleph_0=|{\mathbb N}|=|{\mathbb Z}|$), the equality $\kappa=\kappa\times\kappa$ and the equality $\kappa 2^\kappa=2^\kappa$ can be verified without using the axiom of choice. Same if $A$ has the size of the reals (so $\kappa$ is usually denoted ${\mathfrak c}$ or $2^{\aleph_0}=|{\mathcal P}({\mathbb N})|$).
-However, for arbitrary infinite $A$, the equalities require the axiom of choice. I haven't checked what the number of equivalence relations for an arbitrary infinite $A$ is if choice fails. I do not think it is $2^{|A|}$ in general.<|endoftext|>
-TITLE: How to define Homology Functor in an arbitrary Abelian Category?
-QUESTION [39 upvotes]: In the Category of Modules over a Ring, the $i$-th Homology of a Chain Complex is defined as the Quotient
-Ker d / Im d
-where $d$ as usual denotes the differentials, with indices skipped for simplicity.
-How can this be generalized to a general Abelian Category? Do I have the notion of a Quotient there?
-
-REPLY [3 votes]: The key is to give the definition of homology in a module chain in the language of category theory, and then generalize it to an Abelian category.
-Let A→B→C be a module chain with f:A→B, g:B→C, gf=0. Then the homology at B is just
-the cokernel of the embedding i:im f→Ker g. So we want to define i in the language of category theory. Note that f=jf1, where f1:A→im f acts as f does on A and j:im f→B is the embedding. gf=0 implies gjf1=0, and then gj=0 by f1 being epi. So there is a unique morphism h: im f→Ker g such that j=kh, where k is the kernel map k: Ker g→B.
-We then show that h is just i. This is a direct computation. Given b in im f, b=j(b)=(kh)(b)=k(h(b))=h(b). The first equality holds because j is an embedding; a similar argument gives the last equality.
-Now we want to know whether the way to define h(=i) uses only the language of category theory. We first decompose f into the composition of an epimorphism followed by a monomorphism, which is not justified in an arbitrary category, but is justified in an Abelian category (this is what Abelian does!). Furthermore, the range of the epimorphism in an Abelian category is just the image of f (this can be proved, but it is not very easy), which coincides with the modular case. Then we construct h just by the universal property of the kernel of g. So we can define h in an Abelian category. But we now cannot ask whether h=i, because i makes no sense in an arbitrary category. But we have already shown that h=i in the modular case, so we can use h directly instead of i in the arbitrary case, which will give a generalization (this is why we needn't know; we just want a generalization!).
-So the definition follows: it is just the cokernel of h: im f→Ker g, where h is defined above.<|endoftext|>
-TITLE: Measurable functions with values in Banach spaces
-QUESTION [12 upvotes]: My question refers to functions with values in Banach spaces and under what conditions the limit of a sequence of measurable functions is also measurable.
-But first, let me recall some well-known results from real analysis. Let $(X,\mathcal{F})$ be a measurable space, $\bar{\mathbb{R}}$ the extended real line, and $\mathcal{B}(\bar{\mathbb{R}})$ the standard Borel $\sigma$-algebra generated by the open sets of $\bar{\mathbb{R}}$.
-(1) If a sequence of measurable functions $f_n:(X,\mathcal{F})\rightarrow(\bar{\mathbb{R}},\mathcal{B}(\bar{\mathbb{R}}))$ converges to a function $f$ pointwise in $X$, then $f$ is also measurable w.r.t. $\mathcal{F}$ and $\mathcal{B}(\bar{\mathbb{R}})$. (See, e.g., Rudin, Real and Complex Analysis, Theorem 1.14.)
-(2) This can be generalized to the case of a complete measure space $(X,\mathcal{F},\mu)$ and convergence of $f_n$ to $f$ pointwise-$\mu$-a.e. in $X$. (This is stated without proof in Hunter and Nachtergaele, Applied Analysis, Theorem 12.24. Can anyone point to a proof of this, or is it trivial?)
-Dudley (Real Analysis and Probability, Theorem 4.2.2) generalizes part (1) above to functions with values in metric spaces. Therefore (1) also holds for functions with values in a Banach space $(Y,\Vert\cdot\Vert)$, with corresponding Borel $\sigma$-algebra $\mathcal{B}(Y)$.
My question is: can part (2) also be generalized? In other words, does the following result hold:
-Let $(X,\mathcal{F},\mu)$ be a complete measure space and $(Y,\Vert\cdot\Vert)$ be a Banach space. If $f_n:(X,\mathcal{F})\rightarrow(Y,\mathcal{B}(Y))$ is a sequence of measurable functions that converges to a function $f$ pointwise-$\mu$-a.e. in $X$ (i.e., $\Vert f_n(x) - f(x)\Vert \rightarrow 0$ as $n\rightarrow\infty$, for $x$ $\mu$-a.e. in $X$), then $f$ is also measurable w.r.t. $\mathcal{F}$ and $\mathcal{B}(Y)$.
-Note, I use "measurable" in the standard sense: if $U\in\mathcal{B}(Y)$ then $f^{-1}(U)\in\mathcal{F}$.
-It seems to me that the result should hold, but I haven't been able to find a proof in the standard references. Is it trivial? Also, why should the measure space $(X,\mathcal{F},\mu)$ be complete? What fails if it is not complete? Finally, if the result holds, does this mean that standard measurability is equivalent to so-called strong or Bochner measurability? (Maybe this last question should be the topic of another post.)
-Thanks in advance!
-
-REPLY [5 votes]: Yes, it's rather trivial. You might note the following fact: if $f$ is measurable, $\mu$ is complete, and $f = g$ $\mu$-a.e., then $g$ is measurable. This is valid for functions from any complete measure space to any other measurable space. (Proving it is a good exercise.) Essentially, when you have a complete measure, you can change a function arbitrarily on a null set without breaking its measurability.
-Now suppose $f_n \to f$ $\mu$-a.e., with each $f_n$ measurable. Let $A$ be the set of $x$ such that $f_n(x) \to f(x)$. Note $A$ is a measurable set since $A^C$ is $\mu$-null and hence measurable (by completeness of $\mu$). Let $$g_n = \begin{cases}f_n(x), & x \in A \\ 0, & x \notin A\end{cases}$$ Then clearly $g_n$ is measurable. Now $g_n(x)$ converges for all $x$, so the limit $g(x) = \lim_{n \to \infty} g_n(x)$ is measurable, as you know. But $g(x) = f(x)$ on $A$, so $f = g$ a.e. By the fact above, $f$ is measurable.
-If your underlying measure space is not complete, certainly things can go wrong. For a trivial example, let $B$ be any set which is $\mu$-null but not measurable. If $f_n$ is the zero function for all $n$, we have $f_n \to 1_B$ a.e., but $1_B$ is not a measurable function.<|endoftext|>
-TITLE: Is every Compact $n$-Manifold a Compactification of $\mathbb{R}^n$?
-QUESTION [30 upvotes]: I read the result that every compact $n$-manifold is a compactification of $\mathbb{R}^n$.
-Now, for surfaces, this seems clear: we take an $n$-gon, whose interior (i.e., everything in the $n$-gon except for the edges) is homeo. to $\mathbb{R}^2$, and then we identify the edges to end up with a surface that is closed and bounded.
-We can do something similar with the $S^n$'s; by using a "1-gon" (an $n$-disk), and identifying the boundary to a point. Or we can just use the stereographic projection to show that $S^n-\{{\rm pt}\}\sim\mathbb{R}^n$; $S^n$ being compact (as the Alexandroff 1-pt. compactification of $\mathbb{R}^n$, i.e., usual open sets + complements of compact subsets of $\mathbb{R}^n$). And then some messy work helps us show that $\mathbb{R}^n$ is densely embedded in $S^n$.
-But I don't see how we can generalize this statement for any compact $n$-manifold. Can someone suggest any ideas?
-Thanks in Advance.
-
-REPLY [5 votes]: Here is the answer in the context of topological manifolds:
-
-For smooth manifolds you already have completely satisfactory answers.
-If a compact topological manifold $M$ has dimension $n\ne 4$ then it admits a handlebody decomposition (see the discussion and references here). Once you have a handlebody decomposition $M=H_1\cup_S H_2$, you use the fact that the complements of some disjoint $(n-1)$-dimensional disks $\Delta_{1,j}, \Delta_{2,j}$ in the interiors of $H_1, H_2$ are homeomorphic to $R^n$. Now, find an open $(n-1)$-disk $D\subset S$ disjoint from the boundaries of the disks $\Delta_{1,j}, \Delta_{2,j}$. Then the union
-$$
-int(H_1)\cup D \cup int(H_2)
-$$
-is homeomorphic to $R^n$. This union is clearly dense in $M$.
-There are 4-dimensional manifolds which do not admit a handle decomposition, but Frank Quinn in
-
-Ends of Maps III: Dimensions 3 and 4, Journal of Differential Geometry vol. 17 (1982)
-proved that all noncompact 4-dimensional manifolds are smoothable. Hence, removing a point from such a manifold results in a smoothable manifold which
-admits an open and dense subset homeomorphic to $R^4$.
-The conclusion is then that every compact topological $n$-manifold is homeomorphic to a compactification of $R^n$.<|endoftext|>
-TITLE: Is there a general function to derive the minimum number which can be divided by 1, 2, 3, 4 ...?
-QUESTION [5 upvotes]: 1 can be divided by 1
-2 can be divided by 1, 2
-6 can be divided by 1, 2, 3
-12 can be divided by 1, 2, 3, 4
-60 can be divided by 1, 2, 3, 4, 5
-60 can be divided by 1, 2, 3, 4, 5, 6
-420 can be divided by 1, 2, 3, 4, 5, 6, 7
-840 can be divided by 1, 2, 3, 4, 5, 6, 7, 8
-
-I just wonder whether there is a general function to derive this number.
-Thanks.
-
-REPLY [11 votes]: The question rephrased:
-How does one determine $m_k$, the smallest number which is divisible by all the positive integers from 1 to $k$?
-As a product of primes
-Well, if one starts with the product $\tilde m_k = 1 \cdot 2 \cdot \dots \cdot k$, this is divisible by $1, 2, \dots, k$, but it's not the smallest. For example $1 \cdot 2 \cdot 3 \cdot 4 = 24$, but the smallest is 12, since 4 is divisible by 2. If we remove the factor 2 we get the smallest number. This suggests a general procedure.
-Take all the prime numbers $p_1, \dots, p_l$ from $\{1, 2, \dots, k\}$ and let $q_i$ be the largest integer such that $p_i^{q_i} \leq k$ and set
-$$m_k = p_1^{q_1} p_2^{q_2} \cdots p_l^{q_l}$$
-which should be the smallest number which is divisible by $1, \dots, k$.
-Recursively
-To calculate $m_k$, let $m_{k-1}$ be given.
-
-Is $m_{k-1}$ divisible by $k$? If so, set
-$m_k = m_{k-1}$.
-Otherwise, set $m_k = \operatorname{lcm}(m_{k-1}, k)$,
-where $\operatorname{lcm}$ is the
-least common multiple.
-
-This gives a nice implementation for a computer using the following formula:
-$$\operatorname{lcm}(a,b) = \frac{|a b|}{\operatorname{gcd}(a,b)}$$
-Least common multiple
-Of course, you can just set:
-$$m_k = \operatorname{lcm}(2, 3, \dots, k)$$.<|endoftext|>
-TITLE: Guessing a hidden number on a cube
-QUESTION [7 upvotes]: You and your friend are given a list of N distinct integers and are told this:
-Six distinct integers from the list are selected at random and placed one at each side of a cube. The cube is placed in the middle of a rectangular room in front of its only door, with one face touching the floor, its 6 sides parallel to the walls of the room. Your friend must enter the room and is allowed to alter the orientation of the cube, with the restriction that afterwards it's in the same place with one face touching the floor and its 6 sides kept parallel to the walls of the room.
Your friend will then be sent away, after which you can enter the room and are allowed to observe the 5 visible sides of the cube.
-What is the largest N that guarantees you will be able to determine the number on the bottom of the cube, and what should you instruct your friend to do with the cube for that N?
-N=29 is possible, see the link below! Still in need of a strategy:
-https://mathoverflow.net/questions/52541/hard-cube-puzzle
-
-REPLY [8 votes]: A straightforward upper bound is $N=29$. For a given $N$, you are trying to select one of the possible assignments of numbers to the six sides (up to rotation of the cube), of which there are
-$$
-{N \choose 6}\frac{6!}{24} = \frac{N!}{24(N-6)!}.
-$$
-The number of different states you can see when you enter the room, on the other hand, is
-$$
-{N \choose 5}5! = \frac{N!}{(N-5)!} = \frac{N!}{(N-5)(N-6)!}.
-$$
-The number of outputs (i.e., assignments of numbers to the cube) is greater than the number of inputs (i.e., different states you can see) whenever $N-5 > 24$, and so there can't be a surjective mapping from inputs to outputs unless $N \le 29$.
-
-A lower bound of $N=18$ is given by the following strategy. Your friend will place the cube so that the largest face and its largest neighbor are visible. There are 16 such ways to orient the cube, all distinguishable by the visible faces: 4 where the largest face is on top; 4 where its largest neighbor is on top; and 8 where both the largest face and its largest neighbor are on the side. Order these 16 arbitrarily in advance, corresponding to the integers $0$ through $15$. Similarly, rank the 16 numbers that are not equal to the largest face or its largest neighbor, from $0$ to $15$. Your friend will place the cube in the position corresponding to the sum of the other four faces' ranks, modulo 16. By subtracting the ranks of the three faces you can see, you can deduce the rank, and hence the value, of the hidden face.<|endoftext|>
-TITLE: Diophantine equations solved using algebraic numbers?
-QUESTION [9 upvotes]: On Mathworld it says some Diophantine equations can be solved using algebraic numbers. I know one example, which is to factor $x^2 + y^2$ in $\mathbb{Z}[\sqrt{-1}]$ to find the Pythagorean triples.
-I would be very interested in finding some examples of harder equations (not quadratic) which are also solved easily using algebraic numbers. Thank you!
-
-REPLY [3 votes]: An interesting example which appears in section $4.9$ of Stewart and Tall's Algebraic Number Theory and Fermat's Last Theorem is the solution of the Ramanujan-Nagell equation given by
-$$x^2 + 7 = 2^n$$
-The idea is to work over the quadratic field $\mathbb{Q}(\sqrt{-7})$ and use the fact that it has unique factorization in order to compare two factorizations at the core of the argument.
-The argument needs to take care of different cases separately and does not run as "smoothly" as the arguments in some of the examples mentioned in the other answers, but that's probably in some sense due to the fact that this equation involves an exponential, which only makes the already difficult problem of solving a Diophantine equation worse.<|endoftext|>
-TITLE: Finding Value of the Infinite Product $\prod \Bigl(1-\frac{1}{n^{2}}\Bigr)$
-QUESTION [54 upvotes]: While trying some problems along with my friends we had difficulty in this question.
-
-True or False: The value of the infinite product $$\prod\limits_{n=2}^{\infty} \biggl(1-\frac{1}{n^{2}}\biggr)$$ is $1$.
-
-I couldn't do it, and my friend justified it by saying that since the terms in the product have values less than $1$, the value of the product can never be $1$. I don't know whether this justification is correct or not. But I referred to Tom Apostol's Mathematical Analysis book and found a theorem which states that
-
-The infinite product $\prod(1-a_{n})$ converges if the series $\sum a_{n}$ converges.
-
-This assures that the above product converges. Could anyone help me find out what it converges to? And,
-
-Does there exist a function $f$ on $\mathbb{N}$ (like $n^{2}$, $n^{3}$) such that $\displaystyle \prod\limits_{n=1}^{\infty} \Bigl(1-\frac{1}{f(n)}\Bigr)$ has the value $1$?
-
-REPLY [2 votes]: You could use the infinite product for the sine function, which says that:
-$$\sin(x)=x\prod_{n=1}^\infty \Bigl(1-\frac{x^2}{\pi^2n^2}\Bigr)$$
-Or:
-$$\frac{\sin(x)}{x(1-\frac{x^2}{\pi^2})}=\prod_{n=2}^\infty \Bigl(1-\frac{x^2}{\pi^2n^2}\Bigr)$$
-Then, for your problem, you can just apply $\lim_{x\rightarrow\pi}$ to the equation and solve.<|endoftext|>
-TITLE: Disjoint compact sets in a Hausdorff space can be separated
-QUESTION [30 upvotes]: I want to show that any two disjoint compact sets in a Hausdorff space $X$ can be separated by disjoint open sets. Can you please let me know if the following is correct? Not for homework, just studying for a midterm. I'm trying to improve my writing too.
-My work:
-Let $C$,$D$ be disjoint compact sets in a Hausdorff space $X$. Now fix $y \in D$ and for each $x \in C$ we can find (using Hausdorffness) disjoint open sets $U_{x}(y)$ and $V_{x}(y)$ such that $x \in U_{x}(y)$ and $y \in V_{x}(y)$. Now the collection $\{U_{x}(y): x \in C\}$ covers $C$ so by compactness we can find some natural $k$ such that
-$C \subseteq \bigcup_{i=1}^{k} U_{x_{i}}(y)$
-Now for simplicity let $U = \bigcup_{i=1}^{k} U_{x_{i}}(y)$, then $C \subseteq U$, and let $W(y) = \bigcap_{i=1}^{k} V_{x_{i}}(y)$. Then $W(y)$ is a neighborhood of $y$ and disjoint from $U$.
-Now consider the collection $\{W(y): y \in D\}$; this covers $D$ so by compactness we can find some natural $q$ such that $D \subseteq \bigcup_{j=1}^{q} W(y_{j})$.
-Finally set $V = \bigcup_{j=1}^{q} W(y_{j})$, then $U$ and $V$ are disjoint open sets containing $C$ and $D$ respectively.
-What do you think?
-
-REPLY [2 votes]: First prove the lemma:
-
-Let $X$ be Hausdorff and $A$ be compact and $p \notin A$. Then there exist open sets $U$ and $V$ such that $A \subseteq U$, $p \in V$ and $U \cap V = \emptyset$.
-
-The proof follows your idea: for each $a \in A$ we pick $U(a)$ open and $V(a)$ open and disjoint such that $a \in U(a), p \in V(a)$, by Hausdorffness.
-The sets $\{U(a): a \in A\}$ cover $A$ by construction, so by compactness of $A$ we have finitely many $a_1, \ldots a_n$ such that
-$$A \subseteq U:=\bigcup_{i=1}^n U(a_i)$$
-and as we have finitely many corresponding $V(a_i)$ as well, $p \in V:= \bigcap_{i=1}^n V(a_i)$, which is then open. And no point is in $U \cap V$, or it would be in some $U(a_i)$ (from $U$ being a union) and also in the same $V(a_i)$ ($V$ being the intersection), contradicting how they were chosen. So $U$ and $V$ are as required. QED for the lemma.
-Now apply the lemma repeatedly for $c \in C$ and $D$ when these are disjoint compact sets. We get disjoint open sets $U(c)$ and $V(c)$ containing $c$ and $D$ (!) respectively. Now compactness lets us take a finite union of the $U(c)$ as $U$, and the corresponding finite intersection of the $V(c)$'s will work again, by essentially the same argument.
-So doing it in two steps is cleanest.
Make the in-between step visible.<|endoftext|>
-TITLE: Some maps of the land of mathematics?
-QUESTION [10 upvotes]: This question is motivated by a little anecdote. I was at home teaching some secondary school math to a relative. During a break, he glanced at a book I had on the table - it was a text about analytic number theory that I had recently bought, second hand - and I explained to him that that was an area of mathematics quite arcane to me, that I'd like to learn something about it at some point in the future, but I had little hope...
-He looked puzzled for a moment, and then asked me: "But, wait a moment... You don't know all the mathematics?"
-This happened months ago, and I'm still laughing. But also (and here comes the question) I'm still thinking about how to make some picture of this issue: the vast extent of "the land of mathematics", in its diversity and ranges of depth - and the small regions that one has explored.
-I was specifically looking for some kind of bidimensional (planar?) chart, perhaps with the most basic/known math knowledge in the center, and with the main math areas as regions, placed according to their mutual relations or some kind of taxonomy.
-(I guess this should go in community wiki)
-
-REPLY [7 votes]: The Princeton Companion to Mathematics is a good resource. The mathematical atlas is good as well. The size of the bubbles is directly proportional to the amount of research activity in each area. This MO post might be useful also (looking at real world applications of arxiv areas).<|endoftext|>
-TITLE: $x$ not nilpotent implies that there is a prime ideal not containing $x$.
-QUESTION [8 upvotes]: Let $\mathscr{N}(R)$ denote the set of all nilpotent elements in a ring $R$.
-I have actually done an exercise which states that if $x \in \mathscr{N}(R)$, then $x$ is contained in every prime ideal of $R$.
-
-The converse of this statement is: if $x \notin \mathscr{N}(R)$, then there is a prime ideal which does not contain $x$.
-
-But I am not able to prove it. Can anyone provide me a proof of this result?
-
-REPLY [8 votes]: Let $f$ not be nilpotent. Consider the set $\Sigma$ of ideals $I$ such that if $n > 0$ then $f^n \notin I$. It is non-empty because $(0) \in \Sigma$. Order $\Sigma$ by inclusion and by Zorn's lemma choose a maximal element $P$. Show that $P$ is prime: let $x,y \notin P$. Then the ideals $P+(x)$ and $P+(y)$ strictly contain $P$, and so are not in $\Sigma$. Thus $f^m \in P+(x)$ and $f^n \in P+(y)$ for some $m,n$ (by definition of $\Sigma$). It follows that $f^{m+n} \in P+(xy)$, hence $P+(xy) \notin \Sigma$, that is $xy \notin P$, so $P$ is prime. Hence we have a prime ideal $P$ not containing $f$.
-(the proof is really just copied from the proof in Atiyah-Macdonald's book)
-
-REPLY [7 votes]: HINT $\ $ If $x$ isn't nilpotent then localize at the monoid generated by $x$ to deduce that some prime ideal doesn't contain $x$.<|endoftext|>
-TITLE: Online graph theory analysis tool?
-QUESTION [9 upvotes]: I don't know much (any?) graph theory, but I'm interested in learning a bit, and doing some calculations for a game. Is there a tool online where I could construct a graph (this one has 30-40 vertices, maybe 100 edges), and play around to explore its properties? Useful things to do would be describing paths, finding related paths, and letting me write formulas to calculate the value of a path.
-(By contrast with Online tool for making graphs (vertices and edges)?, I'm not interested in the presentation, I'm interested in analysis, playing, exploring, manipulating, sharing...)
-
-REPLY [5 votes]: I coded up a thing called Graphrel for this sort of stuff. Currently it supports WYSIWYG editing and an interactive d3 force layout; it also counts the number of vertices/edges, calculates connected components, as well as reflexivity/symmetry/transitivity etc. of its underlying relation. I'm still adding new features so feel free to make suggestions.<|endoftext|>
-TITLE: Tensor product of invertible sheaves
-QUESTION [22 upvotes]: Given two invertible sheaves $\mathcal{F}$ and $\mathcal{G}$, one can define their tensor product, but in this definition $\mathcal{F} \otimes \mathcal{G} (U)$ is (apparently) not simply equal to $\mathcal{F} (U) \otimes \mathcal{G}(U)$ for an open set $U$. This latter structure is a presheaf; can someone give me an example for when it is not a sheaf? Is there a characterization for when this actually does give a sheaf?
-
-REPLY [2 votes]: Sorry for resurrecting such an old thread yet again...
-The following paragraph is basically subsumed in Professor Emerton's and Smith's terrific answers, albeit in my own words. Let $p$ be a point of $\mathbb{P}^1$, let $\mathcal{O}_{\mathbb{P}^1}(p)$ denote the sheaf of functions with at worst a simple pole at $p$, and let $\mathcal{O}_{\mathbb{P}^1}(-p)$ denote the sheaf of functions which have a zero at $p$ (in other words, this is the ideal sheaf of $p$ in $\mathbb{P}^1$). Then $\mathcal{O}_{\mathbb{P}^1}(-p)$ has no nonzero global sections, since the only global sections of $\mathcal{O}_{\mathbb{P}^1}$ are constants, so that the naïve definition of the tensor product $\mathcal{O}_{\mathbb{P}^1}(p) \otimes \mathcal{O}_{\mathbb{P}^1}(-p)$ has no nonzero global sections. However, the tensor product of these two sheaves is nothing more than $\mathcal{O}_{\mathbb{P}^1}$, the sheaf of holomorphic functions (the idea being that the two conditions, pole at $p$ and zero at $p$, cancel each other), which has plenty of global sections (the constants).
-The following paragraph is not subsumed, though. For a very explicit example of where the naïve definition of the tensor product of two sheaves is not necessarily a sheaf, let $X = \{a, b\}$ be a two point space, with open sets $\{\emptyset, \{a\}, \{b\}, \{a, b\}\}$. Define $\mathcal{F}$ by$$\mathcal{F}(\{a\}) = \mathbb{Z}/2\mathbb{Z},\text{ }\mathcal{F}(\{b\}) = \mathbb{Z}/2\mathbb{Z},\text{ }\mathcal{F}(\{a, b\}) = \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z},$$and $\mathcal{G}$ by$$\mathcal{G}(\{a\}) = \mathbb{Z}/2\mathbb{Z},\text{ }\mathcal{G}(\{b\}) = \mathbb{Z}/3\mathbb{Z},\text{ }\mathcal{G}(\{a, b\}) = \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z}.$$Then the naïve definition of the tensor product would give$$(\mathcal{F} \otimes \mathcal{G})(\{a\}) = \mathbb{Z}/2\mathbb{Z},\text{ }(\mathcal{F} \otimes \mathcal{G})(\{b\}) = 0,\text{ }(\mathcal{F} \otimes \mathcal{G})(\{a, b\}) = \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z},$$which is clearly not a sheaf.<|endoftext|>
-TITLE: Do bounded permutations of N leave an initial segment invariant?
-QUESTION [5 upvotes]: Let $p$ be a permutation of $\mathbb{N}$. We say that $p$ is bounded if there exists $k$ so that $|p(i)-i| \le k$ for all $i$. If $p$ is bounded, must there exist $M>0$ such that
-$p(\{1,2,\ldots, M\}) = \{1,2,\ldots, M\}$? If so, can we bound $M$ by a function of $k$?
-
-REPLY [3 votes]: Qiaochu's example, as noted, has no finite cycles.
At the other extreme, here's an involution, that is, a product of disjoint transpositions: $$(1,5)(2,4)(3,7)\prod_{k=1}^{\infty}(8k-2,8k+2)(8k,8k+4)(8k+1,8k+5)(8k+3,8k+7)$$ Nothing gets moved by more than 4, and no initial segment is fixed.<|endoftext|>
-TITLE: Setting Up an Integral to Find A Cone's Surface Area
-QUESTION [24 upvotes]: I tried proving the formula presented here by integrating the circumferences of cross-sections of a right circular cone:
-$$\int_{0}^{h}2\pi s\,dt, \qquad\qquad s = \frac{r}{h}t$$
-so
-$$\int_{0}^{h}2\pi \frac{r}{h}t\,dt.$$
-Integrating it got me $\pi h r$, which can't be right because $h$ isn't the slant height. So adding up the areas of differential-width circular strips doesn't add up to the lateral surface area of a cone?
-EDIT: I now realize that the integral works if I set the upper limit to the slant height - this works if I think of "unwrapping" the cone and forming a portion of a circle. The question still remains though: why won't the original integral work? Won't the value of the sum of the cylinders' areas reach the area of the cone as the number of partitions approaches infinity?
-
-REPLY [6 votes]: The fundamental issue with the integral stated in the original question is that it uses the wrong shape. It uses small cylinders to approximate what are in reality frustums (except the first one, which is a small cone). As the cylinders become smaller and smaller, approaching the limit where the height of each becomes 0, those heights remain proportionally smaller than the slant heights of the frustums. As those heights get smaller, the number of cylinders proportionally increases. When you sum it all up, the error between the approximate surface area and the actual surface area remains unchanged.
-The easiest way to see this is to approximate through numerical integration. Go ahead and start with one cylinder to approximate the cone. If you use the average radius for the cone, you will get a total surface area of $\pi h r$. Then split it into $2$ cylinders and add it up. You will get the same value for the total surface of the two cylinders. As you continue to break it into smaller and smaller cylinders, the total surface area remains constant. It remains the value of the integral stated above. In essence, this is similar to multiplying two numbers together, where one number is approaching 0 and the other is approaching infinity and the rates perfectly offset each other.<|endoftext|>
-TITLE: What material is good for extra-studying
-QUESTION [13 upvotes]: I am an undergraduate math student; I find the courses that I am in too easy (calculus, linear algebra, discrete math, logic).
-I want to become a successful mathematician. I want to know what material I should study by myself. Should I just do all the extra problems in the book, or should I start reading about general problem solving, or maybe about a specific topic?
-
-REPLY [15 votes]: I find it telling that you do not list any intermediate level courses among those you have taken: especially, real analysis, topology or algebra. These are the courses which provide the theoretical foundation for all later study of mathematics (and are, as in Roy Smith's answer, arbitrarily challenging: one cannot be too good for any of them).
-Have you not taken these courses just because you are in the first or second year of your undergraduate program, or for some other reason? Is it too late in the current term for you to take one of these courses?
I think it might be worth a try to transfer into them.
-As with many times when an undergraduate looks for help from the internet masses, I also wonder: are you being properly advised? Do you have a faculty member in the math department at your institution with whom you are discussing these issues? If not, you should find one right away. "I am not being properly challenged in my classes and would like to learn more interesting mathematics" is just about the best thing a faculty member can hear from a student. It is hard for me to imagine that you will not be received with open arms.
-Added: I see from your profile that you are a first year undergraduate at a Canadian university. That actually explains a lot. Most North American undergraduates are not ready for the sort of theoretical mathematics I described above as first year students -- but some are. Accommodating both groups is a serious curricular challenge. At some universities there is a sort of "honors track" for those who are: e.g. both at UGA and at Chicago there is an honors calculus course taught out of Spivak, and at both Chicago and Harvard they have very challenging analysis courses for exceptionally strong first year students. But you have to be both well-prepared and well-informed in order to place into these courses. Moreover, I have taught at two Canadian universities, and my feeling is that they are bigger sticklers for taking courses in a fixed sequence than American universities of comparable quality. One of my closest friends started an undergrad degree at UBC at around the age of 15. He did extremely well in his classes from the very beginning, but nevertheless took a lot of "cookie cutter" math courses (e.g. differential equations without any real analysis) that such an obviously prodigious student might at a top US university be well-advised to skip. Anyway, he got to the good stuff at the advanced age of 17 or so, and he is now a successful grown-up mathematician. So be aware that that's the local culture to a certain extent. But I stand by my previous advice: contact a faculty member and let them know what you're feeling. At the very least you should be able to find someone to talk to as you work through Spivak, or Little Rudin, or Artin, or whatever.
-
-REPLY [4 votes]: Read harder books, like Courant's or Spivak's or Apostol's Calculus, and Van der Waerden or M. Artin on algebra and linear algebra. If those are too easy try Spivak's Calculus on manifolds or Fleming's Calculus of several variables, or Dieudonne's Foundations of modern analysis and Jacobson's Basic algebra I, or Chi Han Sah's Abstract Algebra, or Fulton's Algebraic curves. If those are too easy try Hatcher's or Spanier's algebraic topology and Lang's Algebra, and Arnol'd's Ordinary differential equations. Or Spivak's Differential geometry, or Rick Miranda's Algebraic Curves and Riemann surfaces, or Shafarevich's Basic algebraic geometry, or Siegel's lectures on Riemann surfaces, Gauss's Disquisitiones, ..........<|endoftext|>
-TITLE: How does the radius of convergence depend on the point about which the series is expanded?
-QUESTION [7 upvotes]: For a given analytic germ $f(z)=\sum_{n=0}^{\infty} a_n (z-z_0)^n$, and a simple curve $\gamma: [0,1]\to \mathbb{C}$ such that $\gamma(0)=z_0$, suppose one may analytically continue $f(z)$ along $\gamma$. So for each point $\tilde{z} =\gamma(t)$ there is a corresponding power series $f(z;\tilde{z})=\sum_{n=0}^{\infty} \tilde{a}_n (z-\tilde{z})^n$ with its own radius of convergence $r(\tilde{z})$.
My question is, what can be said about this function $r$? How regular is it? Is it continuous?
-What about for more than one variable?
-
-REPLY [6 votes]: The radius of convergence should be the distance to the nearest singular point. So it will be continuous, and it will be differentiable (in fact, smooth) except where its argument is equidistant from two or more singular points.<|endoftext|>
-TITLE: Number of zeros not possible in $n!$
-QUESTION [9 upvotes]: Possible Duplicate:
-How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes?
-
-
-The number of zeros which are not possible at the end of $n!$ is:
-$a) 82 \quad\quad\quad b) 73 \quad\quad\quad c) 156 \quad\quad\quad d) \text{ none of these }$
-
-I was trying to solve this problem. I know how to calculate the number of zeros in a factorial but have no idea how to work out this problem quickly.
-
-REPLY [5 votes]: Assuming there is a single answer, the answer is $73$.
-Consider $300!$: it has $300/5 + 60/5 + 10/5 = 60 + 12 + 2 = 74$ zeroes. $299!$ has $59 + 11 + 2 = 72$ zeroes.
-Got this by trial and error and luck.
-Note this is an application of Legendre's formula for the highest power of a prime $p$ dividing $n!$ being $$\displaystyle \sum_{k=1}^{\infty} \left \lfloor \frac{n}{p^k} \right \rfloor$$.
-We only need to consider powers of $5$ to get the number of zeroes.
-
-Since you asked for a method:
-For smallish numbers, you could try getting a multiple of $6 = 1+5$ close to your number, find the number of zeroes for $25/6$ times that and try to revise your estimate.
-For example, for $156 = 6*26$:
-Try $26*5*5 = 650$. $650!$ has $26*5 + 26 + 5 + 1 = 162$ zeroes.
-Since you overshot by $6$, try a smaller multiple of $6$.
-So try $25*5*5$, which gives $25*5 + 25 + 5 + 1 = 156$. Bingo!
-For $82$, try $78 = 13*6$.
-So try $13*25$, which gives $65 + 13 + 2 = 80$ zeroes.
-So try increasing the estimate, say by adding 10 (since we were short by 2).
-$13*25 + 10$ gives us $67 + 13 + 2 = 82$ zeroes.
-For $187$, try $186 = 6*31$.
-So try $31*5*5$; this gives us $31*5 + 31 + 6 + 1 = 193$ zeroes.
-Since we overshot by $7$, try reducing it, say
-$30*5*5$, which gives us $30*5 + 30 + 6 + 1 = 187$.
-For larger numbers, instead of multiples of 6, consider multiples of $1+5+25$, $1+5+25+125$, etc.
-I am pretty sure there must be a better method, but I don't expect the CAT folks to expect candidates to know that!
-Hope that helps.
-
-REPLY [3 votes]: If you know how to calculate the number of zeros at the end of $n!$, then you know that there are some values of $n$ for which the number of zeros has just increased by 2 (or more), skipping over number(s). What numbers are skipped?
-Further hint (hidden):
-
- Find the number of zeros at the end of (a) $24!$; (b) $25!$
-
-Edit, more explicitly:
-
- the factorials of 5-9 end in 1 zero, 10-14 end in 2 zeros, 15-19 end in 3 zeros, 20-24 end in 4 zeros, 25-29 end in 6 zeros, so 5 is the first number skipped. For what $n$ will the number of zeros at the end of $n!$ next skip over an integer and what is that number of zeros?<|endoftext|>
-TITLE: Consecutive birthdays probability
-QUESTION [20 upvotes]: Let $n$ be the number of people. At least two of them may be born on the same day of the year with probability: $$1-\prod_{i=0}^{n-1} \frac{365-i}{365}$$
-But what is the probability that at least two of them are born on two consecutive days of the year (considering December 31st and January 1st also consecutive)?
It seems a good approximation is: $$1-\prod_{i=0}^{n-1} \frac{365-2 \times i}{365}$$
-However, simulating pseudo-random integers with Python, the 99%-confidence intervals may be slightly different. So, do you have a closed formula?
-
-Results of the simulation with Python. Here are the 99%-confidence intervals:
-Number of people: 1 Lower bound: 0.0 Upper bound: 0.0
-Number of people: 2 Lower bound: 0.00528 Upper bound: 0.00567
-Number of people: 3 Lower bound: 0.01591 Upper bound: 0.01657
-Number of people: 4 Lower bound: 0.03185 Upper bound: 0.03277
-Number of people: 5 Lower bound: 0.0528 Upper bound: 0.05397
-Number of people: 6 Lower bound: 0.07819 Upper bound: 0.07959
-Number of people: 7 Lower bound: 0.10844 Upper bound: 0.11006
-Number of people: 8 Lower bound: 0.14183 Upper bound: 0.14364
-Number of people: 9 Lower bound: 0.17887 Upper bound: 0.18086
-Number of people: 10 Lower bound: 0.21816 Upper bound: 0.2203
-Number of people: 11 Lower bound: 0.25956 Upper bound: 0.26183
-Number of people: 12 Lower bound: 0.30306 Upper bound: 0.30544
-Number of people: 13 Lower bound: 0.34678 Upper bound: 0.34925
-Number of people: 14 Lower bound: 0.39144 Upper bound: 0.39397
-Number of people: 15 Lower bound: 0.43633 Upper bound: 0.4389
-Number of people: 16 Lower bound: 0.48072 Upper bound: 0.48331
-Number of people: 17 Lower bound: 0.52476 Upper bound: 0.52734
-
-
-I give here some results with a tweaked approximation formula, using Wolfram Alpha:
-$$\left( 1 - \frac{n-1}{2 \times 365 + n-1} \right) \times \left( 1-\prod_{i=0}^{n-1} \frac{365-2 \times i}{365} \right)$$
-However, this is just a tweak, and is clearly wrong for $n=33$ since:
-Number of people: 33 My guess: 0.91407
-Number of people: 33 Lower bound: 0.94328 Upper bound: 0.94447
-
-
-Thanks to Jacopo Notarstefano, leonbloy, and Moron, here is the (correct) formula:
-$$ 1-\sum_{k=1}^{n}\frac{1}{365^{n-k}k}\left(\prod_{i=1}^{k-1}\frac{365-\left(k+i\right)}{365\times i}\right)\sum_{j=0}^{k-1}\left(-1\right)^{j}C_{k}^{j}\left(k-j\right)^{n} $$
-And here are the results of the computations using this formula with Python:
-Number of people: 1 Probability: 0.0
-Number of people: 2 Probability: 0.005479452
-Number of people: 3 Probability: 0.016348283
-Number of people: 4 Probability: 0.032428609
-Number of people: 5 Probability: 0.053459591
-Number of people: 6 Probability: 0.079104502
-Number of people: 7 Probability: 0.108959718
-Number of people: 8 Probability: 0.14256532
-Number of people: 9 Probability: 0.179416899
-Number of people: 10 Probability: 0.218978144
-Number of people: 11 Probability: 0.260693782
-Number of people: 12 Probability: 0.304002428
-Number of people: 13 Probability: 0.34834893
-Number of people: 14 Probability: 0.393195856
-Number of people: 15 Probability: 0.438033789
-Number of people: 16 Probability: 0.482390182
-Number of people: 17 Probability: 0.525836596
-Number of people: 18 Probability: 0.567994209
-Number of people: 19 Probability: 0.608537602
-Number of people: 20 Probability: 0.647196551
-Number of people: 21 Probability: 0.683756966
-Number of people: 22 Probability: 0.718059191
-Number of people: 23 Probability: 0.749995532
-Number of people: 24 Probability: 0.779509664
-Number of people: 25 Probability: 0.806569056
-Number of people: 26 Probability: 0.831211564
-Number of people: 27 Probability: 0.853561895
-Number of people: 28 Probability: 0.873571839
-Number of people: 29 Probability: 0.892014392
-Number of people: 30 Probability: 0.906106867
-Number of people: 31 Probability: 0.919063161
-Number of people: 32 Probability: 0.928791992
-Number of people: 33 Probability: 0.944659069
-
-REPLY [6 votes]: NB: I worked earlier on this problem and came up with the following solution. The first answer that was posted made me think that I had got something wrong, and I discarded my work. Since the new answer by Moron agrees (essentially) with my previous work, here it is: a slightly different derivation of the same formula.
-Let $k$ be the number of days in a year. Let $m$ be the number of distinct birthdays among the $n$ friends. Let's assume that $k$ is big enough to have a non-trivial problem (say, $k > 2n$).
-We're interested in binary strings with these three conditions:
-
-They are of length $k$, with $m$ ones and $k-m$ zeros.
-There's at least one zero between any two ones.
-Condition 2 holds when "wrapping around" the string.
-
-Let's count them by constructing them with the following algorithm:
-
-Start with a string of $m$ ones: $11\dots 1$
-There are now three distinct cases: there's a birthday on the first day of the year, there is a birthday on the last day of the year, there's a birthday on neither.
-In the first case we have to distribute $k-m$ zeros in $m$ non-empty buckets. Those are called compositions, and one can show that there are $\binom{k-m-1}{m-1}$ such assignments.
-The second case is analogous, giving another $\binom{k-m-1}{m-1}$ possible strings.
-The third case is similar, giving instead $\binom{k-m-1}{m}$ strings.
-
-Putting this together we have: $$2\cdot \binom{k-m-1}{m-1}+\binom{k-m-1}{m}$$
-Since the $n$ friends share the $m$ birthdays we have to account for that, giving this expression:
-$$p(n,k) =\frac{\sum_{m=1}^n m! \cdot S(n,m)\cdot \Bigl(2\cdot \binom{k-m-1}{m-1}+\binom{k-m-1}{m}\Bigr)}{k^n}$$
-where $S(n,m)$ is the Stirling number of the second kind.
-@Moron: Why do we need to multiply by $m!$ here? Oh right! Stirling numbers of the second kind count unordered partitions, but we care about their order, too!<|endoftext|>
-TITLE: What are the linear isometries on $R^n$, equipped with the $l_1$ norm?
-QUESTION [5 upvotes]: Which conditions must the matrix entries satisfy, and what would be an interpretation of the row and column sums of the matrix?
-
-REPLY [5 votes]: Hint: Such an isometry would have to map the $l^1$-unit-sphere onto itself. What does this "sphere" look like?<|endoftext|>
-TITLE: Solving matrix equations of the form $XA = XB$
-QUESTION [6 upvotes]: I am trying to solve the matrix equation of the form $XA = XB$. $A$, $B$ and the solution sought $X$ are $4 \times 4$ homogeneous matrices which are composed of a rotation matrix and translation vector, like in
-$$ \begin{bmatrix} r_1 & r_2 & r_3 & tx\\ r_4 & r_5 & r_6 &ty\\ r_7 & r_8 & r_9 & tz\\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
-There are several methods to solve equations of the type AX = XB. These types of equations occur in domains like robotics, for which solutions have been proposed like the Tsai and Lenz method [IEEE 1987], for example.
-1) I feel solving the equation XA = XB is not the same as solving the known form AX = XB. Am I right?
-2) Neither can it be solved like the Sylvester equation, because even that requires $AX + XB = C$ form. What I have is $XA = XB$, a redundant set of equations. That is,
-\begin{align}
-A_1.X & = X.B_1 \\
-A_2.X & = X.B_2 \\
-& \vdots \\
-A_n.X & = X.B_n
-\end{align}
-If I am correct, these could be rewritten into another form like $AX = BX$. Should I rewrite the equation and try to solve this problem using any other existing methods?
-
-REPLY [6 votes]: For $XA = XB$, solve $X(A-B) = 0$; basically the rows of $X$ can be any vector that is in the nullspace of $(A-B)^T$.
-We can write $X(A-B) = 0 \Leftrightarrow (A-B)^TX^T = 0$. The columns of $X^T$ are the rows of $X$. Let $x_i$ be row $i$ of $X$. The $i$th column of the product $(A-B)^TX^T$ is $(A-B)^Tx_i$. For this to be a zero vector, $x_i$ has to be in the null space of $(A-B)^T$. Thus all rows of $X$ have to be in this nullspace.
-EDIT: Ok, title corrected. Rephrased a little. Short explanation added.<|endoftext|>
-TITLE: $n$th derivative of $e^{1/x}$
-QUESTION [64 upvotes]: I am trying to find the $n$th derivative of $f(x)=e^{1/x}$. When looking at the first few derivatives I noticed a pattern and eventually found the following formula
-$$\frac{\mathrm d^n}{\mathrm dx^n}f(x)=(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 n+k}$$
-I tested it for the first $20$ derivatives and it got them all. Mathematica says that it is some hypergeometric function but I don't want to use that. Now I am trying to verify it by induction but my algebra is not good enough to do the induction step.
-Here is what I tried for the induction (incomplete, maybe incorrect)
-$\begin{align*}
-\frac{\mathrm d^{n+1}}{\mathrm dx^{n+1}}f(x)&=\frac{\mathrm d}{\mathrm dx}(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 n+k}\\
-&=(-1)^n e^{1/x} \cdot \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} (-2n+k) x^{-2 n+k-1}\right)-e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 (n+1)+k}\\
-&=(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k}((-2n+k) x^{-2 n+k-1}-x^{-2 (n+1)+k})\\
-&=(-1)^{n+1} e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k}(2n x-k x+1) x^{-2 (n+1)+k}
-\end{align*}$
-I don't know how to get on from here.
-
-REPLY [2 votes]: For $i\in\mathbb{N}$ and $t\ne0$, we have
-\begin{equation}\label{exp-frac1x-expans}\tag{1}
-\bigl(\textrm{e}^{1/t}\bigr)^{(i)}=\textrm{e}^{1/t}\frac{(-1)^i}{t^{2i}}\sum_{k=0}^{i-1}\binom{i}{k}\binom{i-1}{k}{k!}t^{k}.
-\end{equation}
-To the best of my knowledge, an inductive proof of the derivative formula \eqref{exp-frac1x-expans} was published on pages 123--124, Section 2.1, Theorem 2.1 of the paper [1] below. Since then, there has been a good deal of literature on this, for example, the papers [2, 3] below and those collected at a site on wordpress, dedicated to the investigation of the function $\textrm{e}^{1/t}$, the derivative formula \eqref{exp-frac1x-expans}, and their applications to several areas in mathematics.
-References
-
-Xiao-Jing Zhang, Feng Qi, and Wen-Hui Li, Properties of three functions relating to the exponential function and the existence of partitions of unity, International Journal of Open Problems in Computer Science and Mathematics 5 (2012), no. 3, 122--127; available online at https://doi.org/10.12816/0006128.
-Siad Daboul, Jan Mangaldan, Michael Z. Spivey, and Peter J. Taylor, The Lah numbers and the $n$th derivative of $e^{1/x}$, Mathematics Magazine 86 (2013), no. 1, 39–47; available online at https://doi.org/10.4169/math.mag.86.1.039.
-Khristo N. Boyadzhiev, Lah numbers, Laguerre polynomials of order negative one, and the $n$th derivative of $\exp(1/x)$, Acta Universitatis Sapientiae Mathematica 8 (2016), no. 1, 22–31; available online at https://doi.org/10.1515/ausm-2016-0002.<|endoftext|>
-TITLE: Proving that the given two integrals are equal
-QUESTION [9 upvotes]: I am stuck on this simple problem.
-If $\alpha \cdot \beta = \pi$, then show that $$\sqrt{\alpha}\int\limits_{0}^{\infty} \frac{e^{-x^{2}}}{\cosh{\alpha{x}}} \ \textrm{dx} = \sqrt{\beta} \int\limits_{0}^{\infty} \frac{e^{-x^{2}}}{\cosh{\beta{x}}} \ \textrm{dx}$$
-I tried replacing $\cosh{x} = \frac{e^{x}+e^{-x}}{2}$ and doing some manipulations, but it's of no use. Seems to be a clever problem. Moreover, since we have $\alpha \cdot \beta = \pi$, we get $\sqrt{\alpha} = \frac{\sqrt{\pi}}{\sqrt{\beta}}$, but the $\beta$ factor is in the numerator, which bewilders me.
-
-REPLY [5 votes]: There is a very simple proof, based on some common facts.
-Consider the self-reciprocal Fourier transformations
-$$
-\sqrt{\frac{2}{\pi}}\int_0^\infty e^{-x^2/2}\cos ax dx=e^{-a^2/2}
-$$
-$$
-\sqrt{\frac{2}{\pi}}\int_0^\infty \frac{\cos ax}{\cosh \sqrt{\frac{\pi}{2}} x}dx=\frac{1}{\cosh \sqrt{\frac{\pi}{2}}a}.
-$$
-For two arbitrary self-reciprocal functions $f$ and $g$ we have
-$$
-\int_0^\infty f(x)g(\alpha x) dx=\int_0^\infty f(x)dx\cdot \sqrt{\frac{2}{\pi}}\int_0^\infty g(y) \cos(\alpha x y)dy=\\
-\int_0^\infty g(y)dy \cdot \sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\cos(\alpha x y)dx=\int_0^\infty g(y)f(\alpha y) dy=\\
-\frac{1}{\alpha}\int_0^\infty f(x)g(x/\alpha) dx.
-$$
-Substituting the above two functions into this general formula we get
-$$
-\int\limits_{0}^{\infty} \frac{e^{-x^{2}/2}}{\cosh{\sqrt{\frac{\pi}{2}}\alpha{x}}} {dx} = \frac{1}{\alpha} \int\limits_{0}^{\infty} \frac{e^{-x^{2}/2}}{\cosh{\sqrt{\frac{\pi}{2}}\frac{x}{\alpha}}}{dx},
-$$
-which by trivial arithmetic can be brought to the symmetrical form given by Hardy.
-For more information about self-reciprocal functions see my post here.<|endoftext|>
-TITLE: What's the theory in which incompleteness of PA is proved?
-QUESTION [16 upvotes]: Maybe this is a dumb question, but I have to admit that it is not really clear to me what the theory is in which incompleteness of PA and stronger theories is proved. The texts I have studied so far are often not really explicit about that.
-
-What's the (minimal) theory in which
- incompleteness of PA and stronger theories is proved?
-
-REPLY [24 votes]: Let PRA be the theory of primitive recursive arithmetic. This is a subtheory of PA, and it suffices to prove the incompleteness theorem. It is perhaps not the easiest theory to work with, but the point is that a proof of incompleteness can be carried out in a significantly weaker system than the theories to which incompleteness actually applies. It is sometimes argued that Hilbert's notion of finitism is (as precisely as possible) captured by PRA, so in that sense, this is the ideal setting for proving the incompleteness of PA or related systems.
-You may benefit from reading the nice book Metamathematics of first order arithmetic by Hájek and Pudlák, where these matters are carefully discussed, as well as some of the numerous essays by Harvey Friedman, available at his homepage.
-Let me point out that what one proves in PRA is that either PA is inconsistent or else it cannot prove its own consistency (and therefore it is incomplete). If we actually want to prove that PA is incomplete, we need to reason within a system where Con(PA), the formal statement asserting the consistency of PA, is assumed. (Of course, Con(PA) is not a theorem of PA, so in a sense this is a strong assumption.) Similarly, PRA suffices to formalize the incompleteness of much stronger systems, and even to formalize the equiconsistency results of set theory.
-Some people prefer to be able to reason with sets (i.e., semantically, rather than with formal proofs, which are syntactic objects) in a less cumbersome fashion than via coding. Then, rather than working with PRA, it is convenient to work with a weak subsystem of second order arithmetic. Typically, WKL${}_0$ (a system that allows us to prove a weak version of König's lemma: that finitely branching infinite trees have infinite branches) is the chosen system to carry out the formalization of the incompleteness proofs. A good place to learn about this and second order arithmetic in general is the wonderful book by Stephen Simpson.
-Let me mention that subsystems of second order arithmetic are understood to be the natural theories to carry out investigations on reverse mathematics, i.e., they provide precisely the setting one wants in order to study questions about what formal systems suffice to prove a theorem (such as the incompleteness of PA, in this case).<|endoftext|>
-TITLE: 'Free Vector Space' and 'Vector Space'
-QUESTION [20 upvotes]: In this very nice book, the author has defined a vector space as a set of functions $f : S \rightarrow F$ where $S$ is a finite set and $F$ is a field. It turns out that this definition closely resembles the definition of a free vector space.
-There are one or two threads in other online communities regarding this. While I try to understand it, could anyone point out some differences between vector spaces defined this way and those defined in standard ways, e.g. in Hoffman and Kunze?
-Can one give a small concrete example of constructing a vector space using this definition, and its implications? Any tutorial or link will be appreciated.
-
-REPLY [5 votes]: This is not the definition of an arbitrary vector space, for (at least) two reasons.
-1) The requirement that $S$ be finite means your vector space -- let me call it $F^{(S)}$ -- is a finite-dimensional vector space. So infinite-dimensional vector spaces are missing. For this, one can extend the construction to an arbitrary set $S$, but we need to slightly redefine $F^{(S)}$ to be the set of all functions $f: S \rightarrow F$ such that $f^{-1}(F \setminus \{0\})$ is finite (finitely nonzero functions). Note that this is precisely the definition in the planetmath article on Free Vector Spaces that you link to.
-2) This definition captures all vector spaces up to isomorphism only. It is much closer to the concept of "vector space with distinguished basis". Indeed, if $V$ is an $F$-vector space, then taking $S$ to be a basis of $V$, we get a map from $V$ to $F^{(S)}$ by mapping each basis element $e \in S$ to the characteristic (or "delta") function $1_e: S \rightarrow F$: namely, for $e' \in S$, $1_e(e') = 1$ if $e = e'$ and $0$ otherwise. This is consistent with the universal mapping property underlying the definition of "free vector space", i.e., every vector space can be viewed as (or more accurately, canonically endowed with the structure of) the free vector space on any one of its bases in this way.
-In terms of examples: for finite sets $S$, from a concrete perspective this is close to just considering the vector space $F^n$ of $n$-tuples of elements of $F$. Indeed, arguably $F^n$ is an example of this with $S = \{1,\ldots,n\}$.
In general, the only difference I can see is that the finite set $S$ does not come with a distinguished ordering, so whereas $F^n$ has a canonical ordered basis $e_1,\ldots,e_n$, $F^{(S)}$ has a canonical unordered basis $\{1_s \ | \ s \in S\}$.<|endoftext|>
-TITLE: Integral Representations of Hermite Polynomial?
-QUESTION [9 upvotes]: One of my former students asked me how to go from one presentation of the Hermite Polynomial to another. And I'm embarrassed to say, I've been trying and failing miserably. (I'm guessing this is a homework problem that he is having trouble with.)
-http://functions.wolfram.com/Polynomials/HermiteH/07/ShowAll.html
-So he has to go from the Rodrigues-type formula (written as a contour integral) to an integral on the real axis, which is the 3rd formula in the link provided above. It seems like the hint he was given was to start from the contour integral.
-Starting with the contour integral, I tried using different semi-circles (assuming that $z$ was real), but this quickly turned into something weird.
-I also tried to use a circle as the contour, then map it to the real line. That was a failure.
-I tried working backwards, from the integral on the real axis. I didn't have any luck.
-The last resort was to show that
-1) Both expressions are polynomials.
-2) The corresponding coefficients are equal. (That is, I took both functions and evaluated them and their derivatives at 0.)
-Even for 2), I couldn't see a nice way of showing that
-$\int_C \frac{e^{-z^2}}{z^{n+1}}dz = \int_{-\infty}^{\infty} z^n e^{-z^2} dz$ (Up to some missing multiplicative constants.)
-I feel like I'm missing something really easy. If someone could give me some hints without giving away the answer, that would be most appreciated.
-
-REPLY [2 votes]: \begin{align*}
-H_n(x)
-&=(-1)^n\mathrm{e}^{x^2}\partial_x^n\mathrm{e}^{-x^2} \\
-&=(-1)^n\mathrm{e}^{x^2}\partial_x^n\mathrm{e}^{-x^2}\Big(\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{-(t\pm \mathrm{i}x)^2}\mathrm{d}t\Big) \\
-&=(-1)^n\mathrm{e}^{x^2}\frac{1}{\sqrt{\pi}}\partial_x^n\int_{-\infty}^{\infty}\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\
-&=(-1)^n\mathrm{e}^{x^2}\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\partial_x^n\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\
-&=(-1)^n\mathrm{e}^{x^2}\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(\mp2\mathrm{i}t)^n\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\
-&=\frac{(\pm2\mathrm{i})^n}{\sqrt{\pi}}\mathrm{e}^{x^2}\int_{-\infty}^{\infty}t^n\mathrm{e}^{-t^2\mp 2\mathrm{i}xt}\mathrm{d}t \\
-&=\frac{(\pm2\mathrm{i})^n}{\sqrt{\pi}}\int_{-\infty}^{\infty}t^n\mathrm{e}^{-(t\pm \mathrm{i}x)^2}\mathrm{d}t
-\end{align*}
-Averaging the $+$ and the $-$ cases, we have
-$$H_n(x)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}(2t)^n\mathrm{e}^{x^2-t^2}\cos(2xt-\frac{n\pi}{2})\,\mathrm{d}t .$$<|endoftext|>
-TITLE: functoriality of blow-ups
-QUESTION [6 upvotes]: Let $f:X\to Y$ be a finite map of varieties and let $BL_Z(Y)$ be the blow-up of a subscheme $Z\subset Y$. Is there a map $$\phi:BL_{f^{-1}(Z)}(X)\to BL_{Z}(Y)?$$ If so, what can be said about $\phi$? Is it also finite?
-
-REPLY [7 votes]: You need more hypotheses for such a map to exist. For example, suppose
-that $X = pt$ and $Y = \mathbb A^2$ (over a field $k$, say) and $f$ is the map sending the point to the origin. Now take $Z$ to be the origin of $Y$. The preimage of $Z$ is all of $X$, and so the blow-up of $X$ at $f^{-1}(Z)$ is just $Z$ again. On the other hand, there is no natural map from $X$ to the blow-up of $Y$ along $Z$.
(If there were, we could pick out a distinguished point on the exceptional divisor, which we can't.)
-If $f$ is finite and flat, then unless I am blundering the blow-up of $X$ along $f^{-1}(Z)$ will precisely equal the fibre product over $Y$ of the blow-up of $Y$ along $Z$ with $X$. So in this case, there is a map $\phi$, it is just a base-change, and in particular it is again finite and flat.
-If you like, the problem is that Proj (which is what appears in the definition of blow-ups) is not functorial in the most naive way.<|endoftext|>
-TITLE: How valid is the proposed P=NP solution for the 3-SAT problem?
-QUESTION [7 upvotes]: Source link: http://romvf.wordpress.com/2011/01/19/open-letter/ (Referred from slashdot)
-
-The fact of existence of the polynomial algorithm for 3-SAT problem leads to a conclusion that P=NP.
-
-Is there validity to this argument that there is a P=NP solution? How long till this can be verified? Will this take years to verify or more likely weeks?
-Final update:
-
-http://cartesianproduct.wordpress.com/2011/02/27/no-proof-of-pnp-after-all-yet/
-Vladimir Romanov has conceded that his published "proof" of P=NP is flawed and requires further work.
-
-REPLY [2 votes]: I, like most people, don't read P=NP proofs. There seems to be something missing in the current approaches.
-I have an approach that lacks only one step to resolve the issue. I only need to determine the complexity of determining membership in a fractal set. Decades more of work by me and I am sure I can resolve the final issues. Or not.<|endoftext|>
-TITLE: $L^1$ space with values in a Banach Space
-QUESTION [7 upvotes]: I have been reading a bit about the Bochner integral and now I'm wondering the following:
-For the theory to be "nice", one would expect that
-$$L^1([0, \tau], L^1([0, \tau])) \cong L^1([0,\tau] \times [0, \tau]).$$
-Is this the case? How do we prove this (or where can I find more about this)? If we take $u$ in the set on the LHS, then we can evaluate $(u(t))(x)$ and we want to map this to some $u(t,x)$. The problem I see is that $L^1$-functions are equivalence classes modulo sets of measure zero. So for each $t\in [0,\tau]$ we have different sets of measure zero than the ones in $[0,\tau]^2$. How do we get around this?
-
-REPLY [6 votes]: There is no need to restrict to intervals; the same holds true for measure spaces $X$ and $Y$ and Banach spaces $E$ in general:
-
-$L^{1}(X \times Y, E) \cong L^{1}(X,L^{1}(Y,E))$
-
-Edit: Assume that $X$ and $Y$ are $\sigma$-finite and complete for the sake of simplicity. See also the edit further down.
-By definition each Bochner-integrable function can be approximated by simple functions. In other words, the functions of the form $[A]e$ for some measurable subset $A \subset X$ of finite measure and $e \in E$ (or out of some dense subspace of $E$, if you prefer) generate a dense subspace of $L^{1}(X,E)$. Therefore you can reduce to the case of defining a bijection $[A]\cdot ([B] \cdot e) \leftrightarrow [A \times B]\cdot e$ and observe that this is a bijection on sets generating dense subspaces of $L^{1}(X, L^{1}(Y,E))$ and $L^{1}(X \times Y, E)$ and extend it linearly. That it is well-defined and an isometry is a special case of Fubini; that it is surjective in both directions follows from density.
-The general fact underlying this is the canonical isomorphism
-
-$L^{1}(Y,E) \cong L^{1}(Y) \hat{\otimes} E$ (projective tensor product)
-
-and the isomorphism $L^{1}(X \times Y) \cong L^{1}(X) \hat{\otimes} L^{1}(Y)$ for $L^{1}$-spaces + associativity of the tensor product (and all this is proved in exactly the same way).
-Edit: For the (non-$\sigma$-finite) general case the situation is quite a bit more subtle and is discussed carefully in the exercise section 253Y of Fremlin's Measure Theory, Volume II. Here are the essential points:
-
-For a general measure space $Y$ one can prove that $L^{1}(Y,E) \cong L^1(Y)\hat{\otimes} E$ (see 253Yf (vii)).
-For a pair of measure spaces $(X,\mu)$ and $(Y,\nu)$ let $(X \times Y, \lambda)$ be the complete locally determined product measure space as defined by Fremlin, 251F. The rather deep theorem 253F in Fremlin then tells us that $L^1(X \times Y, \lambda) \cong L^1(X,\mu) \hat{\otimes} L^1(Y,\nu)$.
-
-Piecing these two results together and making use of the associativity of the projective tensor product we get
-$$\begin{align*}L^1(X \times Y, E) &
-\cong L^1(X \times Y) \hat{\otimes} E
-\cong \left(L^1(X) \hat{\otimes} L^1(Y)\right) \hat{\otimes} E & &(\text{using 1. and 2., respectively})\\ & \cong L^1(X) \hat{\otimes} \left(L^1(Y) \hat{\otimes} E\right) \cong L^1 (X) \hat{\otimes} L^1(Y,E) & &\text{(using associativity and 1.)} \\ & \cong L^1(X,L^1(Y,E)) & & (\text{using 1. again})\end{align*}$$
-as asserted in an earlier version of this answer.
-
-Finally, a remark on user3148's cautionary counterexample. There is an isomorphism $(L^{1}(X,E))^{\ast} \cong L_{w^{\ast}}^{\infty}(X,E^{\ast})$ where the latter space is defined via weak$^{\ast}$-measurability in the sense of Gelfand-Dunford. So in this sense we have $L_{w^{\ast}}^{\infty}(X \times Y, E^{\ast}) \cong L_{w^{\ast}}^{\infty}(X, L_{w^{\ast}}^{\infty}(Y,E^{\ast}))$ simply by duality theory.
-
-REPLY [2 votes]: Since I am not able to comment yet, I write this as an answer.
-Be prepared to face the fact that $L^\infty([0,T],L^\infty([0,T])) \neq L^\infty([0,T]\times[0,T])$. To see this, observe that the function
-$$
-f(x,y) = \begin{cases} 1 & x>y\\ 0 & \text{else}\end{cases}
-$$
-is not an element of the first space.<|endoftext|>
-TITLE: Why are there 12 pentagons and 20 hexagons on a soccer ball?
-QUESTION [32 upvotes]: Edge-attaching many hexagons results in a plane. Edge-attaching pentagons yields a dodecahedron.
-Is there some insight into why the alternation of pentagons and hexagons yields an approximated sphere? Is this special, or are there an arbitrary number of assorted $n$-gon sets that may be joined together to create regular sphere-like surfaces?
-
-REPLY [2 votes]: An icosahedron is a Platonic solid having 20 congruent equilateral triangular faces ($F=20$), 30 edges ($E=30$), five of which meet at each vertex, & 12 identical vertices ($V=12$) lying on a spherical surface (with a certain radius).
-When an icosahedron is (partially) truncated at all its 12 vertices, each of the 20 triangular faces becomes a hexagon & each of the 12 vertices produces a new pentagonal face. Thus a truncated icosahedron has 12 regular pentagons & 20 regular hexagons.
-The process of truncation produces $12\times 5=60$ new vertices & $12\times 5=60$ new edges in addition to the 30 original edges. Thus it has a total of $60+30=90$ edges. For a truncated icosahedron, $F=12+20=32$, $E=90$ & $V=60$, which duly satisfies Euler's formula ($F+V=E+2$). It is also called an Archimedean solid.
-A soccer ball is analogous to a truncated icosahedron. Hence it has 12 (regular) pentagons & 20 (regular) hexagons.<|endoftext|>
-TITLE: A Conjecture of Schinzel and Sierpinski
-QUESTION [18 upvotes]: Melvyn Nathanson, in his book Methods in Number Theory (Chapter 8: Prime Numbers) states the following:
-
-A conjecture of Schinzel and Sierpinski asserts that every positive rational number $x$ can be represented as a quotient of shifted primes, that $x=\frac{p+1}{q+1}$ for primes $p$ and $q$. It is known that the set of shifted primes generates a subgroup of the multiplicative group of rational numbers of index at most $3$.
-
-I would like to know what progress has been made regarding this problem and why this conjecture is important. Since it generates a subgroup, does the subgroup which it generates have any special properties?
-I had actually posed a problem which asks us to prove that given any interval $(a,b)$ there is a rational of the form $\frac{p}{q}$ ($p,q$ primes) which lies inside $(a,b)$. Does this problem have any connections with the actual conjecture?
-I actually posted this question on MO. Interested users can see this link:
-
-https://mathoverflow.net/questions/53736/on-a-conjecture-of-schinzel-and-sierpinski
-
-REPLY [2 votes]: That the numbers $p/q$, $p$, $q$ prime, are dense in the positive reals is a simple consequence of the Prime Number Theorem, q.v.<|endoftext|>
-TITLE: Why isn't the gamma function defined so that $\Gamma(n) = n! $?
-QUESTION [35 upvotes]: As a physics student, I have occasionally run across the gamma function
-$$\Gamma(n) \equiv \int_0^{\infty}t^{n-1}e^{-t} \textrm{d}t = (n-1)!$$
-when we want to generalize the concept of a factorial. Why not define the gamma function so that
-$$\Gamma(n) = n!$$
-instead?
-I realize either definition is equally good, but if someone were going to ask me to choose one, I would choose the second option. Are there some areas of mathematics where the accepted definition looks more natural? Are there some formulas that work out more cleanly with the accepted definition?
-
-REPLY [7 votes]: $$
-\Gamma(\alpha) = \int_0^\infty x^{\alpha-1} e^{-x}\,dx.
-$$
-Why $\alpha-1$ instead of $\alpha$? Here's one answer; there are probably others. Consider the probability density function
-$$
-f_\alpha(x)=\begin{cases} \dfrac{x^{\alpha-1} e^{-x}}{\Gamma(\alpha)} & \text{for }x>0 \\[12pt] 0 & \text{for }x<0 \end{cases}
-$$
-The use of $\alpha-1$ instead of $\alpha$ makes the family $\{f_\alpha : \alpha > 0\}$ a "convolution semigroup":
-$$
-f_\alpha * f_\beta = f_{\alpha+\beta}
-$$
-where the asterisk represents convolution.<|endoftext|>
-TITLE: Is there an Inverse Gamma $\Gamma^{-1} (z) $ function?
-QUESTION [17 upvotes]: Since $\Gamma$ is not one to one over the complex domain, is it possible to define some principal values (analogous to principal roots for the root function) so we can have a $\Gamma^{-1} (z)$ (inverse $\Gamma$ function)?
-
-REPLY [15 votes]: A short sketch, besides Andres' request for further specifications:
-if you use the inverse series for $ \Gamma(1+x)-1 $ (in Pari/GP: serreverse (gamma(1+x)-1) ) you get something like
-$ f(x) = -1.73245471460*x + 5.14288192485*x^2 - 22.3588922658*x^3 + 120.586032684*x^4 $
-$ - 732.743269181*x^5 + 4785.68759665*x^6 - 32793.0682929*x^7 + O(x^8) $
-By 64 terms of the series it is not clear whether the series has finite radius of convergence $ \rho $; possibly it is something like $ \rho \sim 1/9 $.
If we define $ g(x) = f(x-1)+1 $ and use Euler-/Noerlund-sums for acceleration of convergence, it seems we can arrive at meaningful values in a small range; examples: -g(1.5) = 0.595332094501 // Noerlund-sum -gamma( 0.595332094501) = 1.500000000 -g(1.8) = 0.492222531811 // Noerlund-sum -gamma( 0.492222531811) = 1.800000000 -g(2.0) = 0.442877396485 // Noerlund-sum -gamma( 0.442877396485) = 2.000000000<|endoftext|> -TITLE: is there an analogue of short exact sequences for semigroups? -QUESTION [6 upvotes]: Since semigroups don't need to have an identity element, I was wondering if there's any kind of short exact sequence for semigroups. - -REPLY [8 votes]: Suppose that you have a homomorphism of semigroups $f\colon S\to T$. The kernel of $f$ is defined to be the congruence $\mathrm{ker} f$ on $S$ defined by: -$$\mathrm{ker} f = \{ (s_1,s_2)\in S\times S\mid f(s_1)=f(s_2)\}.$$ -Note that $S/\mathrm{ker} f \cong f(S)$. -We also define the image of $f$, $\mathcal{K}_{\mathrm{Im}f}$, to be the relation on $T$ defined by: -$$\mathcal{K}_{\mathrm{Im} f} = f(S)\times f(S)\cup\{(t,t)\mid t\in T\}.$$ -Edit: The image need not be a congruence if $f(S)$ is not an ideal in $T$, because it need not be a subsemigroup of $T\times T$. But this agrees with the situation in, say, not-necessarily-abelian groups, where the image need not be a normal subgroup (and thus, not "eligible" to be equal to a kernel). -Given homomorphisms $f\colon S\to T$ and $g\colon T\to U$, we say that -$$S\stackrel{f}{\longrightarrow} T \stackrel{g}{\longrightarrow} U$$ -is exact (at $T$) if and only if $\mathrm{ker} g = \mathcal{K}_{\mathrm{Im} f}$. -So we say that $S\stackrel{f}{\longrightarrow} T\stackrel{g}{\longrightarrow} U$ is a short exact sequence if and only if -$\mathrm{ker} f = \{(s,s)\mid s\in S\}$, $\mathcal{K}_{\mathrm{Im} f} = \mathrm{ker}g$, and $\mathcal{K}_{\mathrm{Im} g} = U\times U$. -You can also see it by extending to monoids first. Given any semigroup $S$, you can always adjoin an identity element by taking some $1\notin S$, and defining the operation on $S\cup\{1\}$ by extending the multiplication by the rule $1s=s1=s$ for all $s\in S$; this monoid is denoted $S^1$ (even if $S$ already has an identity, we adjoin a new element). If $f\colon S\to T$ is a semigroup homomorphism, then this induces a monoid homomorphism $f^1\colon S^1\to T^1$ by $f^1(s)=f(s)$ for all $s\in S$, and $f^1(1) = 1$. If we do this, then note that -$$\mathrm{ker}(f^1) = \mathrm{ker}(f)\cup\{(1,1)\}$$ -is a congruence on $S^1$; and that -\begin{align*} -\mathcal{K}_{\mathrm{Im}f^1} &= f^1(S^1)\times f^1(S^1)\cup\{(t,t)\mid t\in T^1\}\\ -&= f(S)\times f(S) \cup\{(t,t)\mid t\in T\} \cup\{(1_T,1_T)\}\\ -&= \mathcal{K}_{\mathrm{Im}f}\cup\{(1_T,1_T)\}, -\end{align*} -so that if we have $S\stackrel{f}{\to}T\stackrel{g}{\to}U$, with corresponding $S^1\stackrel{f^1}{\to} T^1\stackrel{g^1}{\to} U^1$, then $\mathrm{ker} g = \mathcal{K}_{\mathrm{Im} f}$ if and only if $\mathrm{ker} g^1 = \mathcal{K}_{\mathrm{Im} f^1}$.
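To make the kernel-as-congruence definition concrete, here is a brute-force check on a toy example of my own choosing (not from the answer): the map $x \mapsto x \bmod 3$ from $(\mathbb{Z}/6\mathbb{Z},\cdot)$ to $(\mathbb{Z}/3\mathbb{Z},\cdot)$, which is a semigroup homomorphism because $3 \mid 6$:

```python
S = range(6)                       # Z/6 under multiplication mod 6
T = range(3)                       # Z/3 under multiplication mod 3
f = lambda s: s % 3                # a semigroup homomorphism (since 3 | 6)

# f is multiplicative:
assert all(f(a * b % 6) == f(a) * f(b) % 3 for a in S for b in S)

# ker f as a set of pairs:
ker = {(a, b) for a in S for b in S if f(a) == f(b)}

# ker f is a congruence: an equivalence relation compatible with multiplication.
assert all((s * a % 6, s * b % 6) in ker and (a * s % 6, b * s % 6) in ker
           for (a, b) in ker for s in S)
```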
-If we do this, then -$$S\stackrel{f}{\longrightarrow} T \stackrel{g}{\longrightarrow} U$$ -is a short exact sequence if and only if -$$1 \longrightarrow S^1 \stackrel{f^1}{\longrightarrow} T^1\stackrel{g^1}{\longrightarrow} U^1 \longrightarrow 1$$ -is exact at $S^1$, $T^1$, and $U^1$.<|endoftext|> -TITLE: what functions or classes of functions are Riemann non-integrable but Lebesgue integrable -QUESTION [7 upvotes]: I am wondering if there are some other examples of Riemann non-integrable but Lebesgue integrable functions, besides the well-known Dirichlet function. -Thanks. - -REPLY [3 votes]: For me the "canonical" example is the characteristic function of the rational numbers in $[0,1]$. The upper integral is one, the lower integral is zero. -Edit: Jonas and Moron have informed me that some people call this example the Dirichlet function (and I vaguely remember that we might have done so as well in our Analysis class).<|endoftext|> -TITLE: a question related to two competing patterns in coin tossing -QUESTION [6 upvotes]: Suppose I have a two-sided coin with probability $p$ of showing heads. I repeatedly toss it until either HTHTH or HTHH appears. Can you calculate -1) the probability that I stop with HTHTH, and -2) the expected value of the number of tosses before I stop? -Thanks. - -REPLY [5 votes]: There is a direct, and rather automatic, way to compute the probability of hitting A=HTHTH first rather than B=HTHH. -Both patterns begin with HTH, hence one can wait until HTH first appears. Then, either (1) the next letter is H, or (2) the two next letters are TH, or (3) the two next letters are TT. If (1) happens, B won. If (2) happens, A won. If (3) happens, one has to wait for more letters to know who won. The important fact in case (3) is that, since the last letters are TT, A or B must be entirely produced again. -Hence, $p_B=p_1+p_3p_B$ and $p_A=p_2+p_3p_A$, where $p_i$ for $i=$ 1, 2 and 3, is a shorthand for the conditional probability that ($i$) happens starting from the word HTH. Since $p_1=p$, $p_2=qp$ and $p_3=q^2$, one gets $p_B=p_1/(1-p_3)=p/(1-q^2)$, hence -$$ -p_B=1/(1+q),\quad p_A=q/(1+q). -$$ -Similarly, a standard, and rather automatic, way to compute the mean number of tosses before this happens is to consider a Markov chain on the state space made of the prefixes of the words one wishes to complete. -Here, the states are 0 (for the empty prefix), 1=H, 2=HT, 3=HTH, B=HTHH, 4=HTHT and A=HTHTH. The transitions are from 0 to 1 and 0, from 1 to 2 and 1, from 2 to 3 and 0, from 3 to B and 4, and from 4 to A and 0. The transitions from B and from A are irrelevant. The next step is to compute $n_s$, the mean number of tosses needed to produce A or B starting from any state $s$ amongst 0, 1, 2, 3 and 4, knowing that one is in fact only interested in $n_0$. -The $n_s$ are solutions of a linear (Cramer) system which reflects the structure of the underlying Markov chain: -$$ -n_0=1+pn_1+qn_0,\quad n_1=1+pn_1+qn_2,\quad n_2=1+pn_3+qn_0, -$$ -$$ -n_3=1+qn_4,\quad n_4=1+qn_0. -$$ -Solving this system of equations backwards, that is, going from the last equation back to the first one, yields $n_3$ in terms of $n_0$, then $n_2$ in terms of $n_0$, then $n_1$ in terms of $n_0$, and finally an equation for $n_0$ alone, which yields Mike's formula for $n_0$, namely: -$$ -n_0=\frac{1+pq+p^2q+p^2q^2}{p^3q(1+q)}.
-$$ -An accessible reference for these techniques (in the context of genomic sequence analysis) is the book DNA, Words and Models by Robin, Rodolphe and Schbath, at Cambridge UP.<|endoftext|> -TITLE: If the product of an invertible symmetric matrix and some other matrix is symmetric, is that other matrix also symmetric? -QUESTION [7 upvotes]: The thought came from the following problem: -Let $V$ be a Euclidean space. Let $T$ be an inner product on $V$. Let $f$ be a linear transformation $f:V \to V$ such that $T(x,f(y))=T(f(x),y)$ for $x,y\in V$. Let $v_1,\dots,v_n$ be an orthonormal basis, and let $A=(a_{ij})$ be the matrix of $f$ with respect to this basis. -The goal here is to prove that the $A$ is symmetric. I can prove this easily enough by saying: -Since $T$ is an inner product, $T(v_i,v_j)=\delta_{ij}$. -\begin{align*} -T(A v_j,v_i)&=T(\sum_{k=1}^n a_{kj} v_k,v_i)\\ -&=T(a_{1j} v_1,v_i) + \dots + T(a_{nj} v_n,v_i)\\ -&=a_{1j} T(v_1,v_i) + \dots + a_{nj} T(v_n,v_i)\tag{bilinearity}\\ -&=a_{ij}\tag{$T(v_i,v_j)=\delta_{ij}$}\\ -\end{align*} -By the same logic, -\begin{align*} -T(A v_j,v_i)&=T(v_j,A v_i)\\ -&=T(v_j,\sum_{k=1}^n a_{ki} v_k)\\ -&=T(v_j,a_{1i} v_1)+\dots+T(v_j,a_{ni} v_n)\\ -&=a_{1i} T(v_j,v_1)+\dots+a_{ni} T(v_j,v_n)\\ -&= a_{ji}\\ -\end{align*} -By hypothesis, $T(A v_j,v_i)=T(v_j,A v_i)$, therefore $a_{ij}=T(A v_j,v_i)=T(v_j,T v_i)=a_{ji}$. -I had this other idea though, that since $T$ is an inner product, its matrix is positive definite. -$T(x,f(y))=T(f(x),y)$ in matrix notation is $x^T T A y=(A x)^T T y$ -\begin{align*} -x^T T A y &= (A x)^T T y\\ -&=x^T A^T T y\\ -TA &= A^T T\\ -(TA)^T &= (A^T T)^T\\ -A^T T^T &= T^T A\\ -TA &= A^T T^T\tag{T is symmetric}\\ -&= (TA)^T\tag{transpose of matrix product}\\ -\end{align*} -This is where I got stuck. We know that $T$ and $TA$ are both symmetric matrices. Clearly $T^{-1}$ is symmetric. If it can be shown that $T^{-1}$ and $AT$ commute, that would show it. - -REPLY [2 votes]: I did some numerical search for higher dimensions: -$n=3:$ -$\left( -\begin{array}{ccc} - 1 & 1 & 0 \\ - 1 & 1 & 1 \\ - 0 & 1 & 1 -\end{array} -\right).\left( -\begin{array}{ccc} - -383 & 13 & -13 \\ - -36 & -445 & -36 \\ - -13 & 13 & -383 -\end{array} -\right)=\left( -\begin{array}{ccc} - -419 & -432 & -49 \\ - -432 & -419 & -432 \\ - -49 & -432 & -419 -\end{array} -\right)$ -$n=4:$ -$\left( -\begin{array}{cccc} - 1 & 1 & 0 & 0 \\ - 1 & 1 & 1 & 0 \\ - 0 & 1 & 1 & 1 \\ - 0 & 0 & 1 & 1 -\end{array} -\right).\left( -\begin{array}{cccc} - -383 & 13 & -36 & -23 \\ - 85 & -360 & 49 & -49 \\ - -49 & 49 & -360 & 85 \\ - -23 & -36 & 13 & -383 -\end{array} -\right)=\left( -\begin{array}{cccc} - -298 & -347 & 13 & -72 \\ - -347 & -298 & -347 & 13 \\ - 13 & -347 & -298 & -347 \\ - -72 & 13 & -347 & -298 -\end{array} -\right)$ -$n=5:$ -$\left( -\begin{array}{ccccc} - 1 & 1 & 0 & 0 & 0 \\ - 1 & 2 & 1 & 0 & 0 \\ - 0 & 1 & 3 & 1 & 0 \\ - 0 & 0 & 1 & 4 & 1 \\ - 0 & 0 & 0 & 1 & 5 -\end{array} -\right).\left( -\begin{array}{ccccc} - -7526 & 3158 & -9379 & 7340 & -8405 \\ - 5216 & -3477 & 6079 & -6570 & 6359 \\ - -3225 & 1486 & -3098 & 2500 & -3543 \\ - 1159 & -1300 & 905 & -1249 & 970 \\ - -641 & 414 & -841 & 186 & -656 -\end{array} -\right)=$ -$=\left( -\begin{array}{ccccc} - -2310 & -319 & -3300 & 770 & -2046 \\ - -319 & -2310 & -319 & -3300 & 770 \\ - -3300 & -319 & -2310 & -319 & -3300 \\ - 770 & -3300 & -319 & -2310 & -319 \\ - -2046 & 770 & -3300 & -319 & -2310 -\end{array} -\right)$ -All matrices have full rank. 
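The $n=3$ example is easy to verify mechanically. Here is a numpy sanity check of the displayed product (this is my own check; the search code used by the answerer is not shown):

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])           # symmetric and invertible (det = -1)
B = np.array([[-383,   13,  -13],
              [ -36, -445,  -36],
              [ -13,   13, -383]])  # NOT symmetric

P = A @ B
assert np.array_equal(P, P.T)       # the product is symmetric anyway
assert round(np.linalg.det(A)) == -1
assert np.linalg.matrix_rank(B) == 3
```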
However, for high dimensions they are quite ugly.<|endoftext|> -TITLE: Perturbation trick in the proof of Seifert-van-Kampen -QUESTION [34 upvotes]: The theorem of Seifert-Van-Kampen states that the fundamental group $\pi_1$ commutes with certain colimits. There is a beautiful and conceptual proof in Peter May's "A Concise Course in Algebraic Topology", stating the Theorem first for groupoids and then gives a formal argument how to deduce the result for $\pi_1$. However, there we need the assumption that our space is covered by path-connected open subsets, whose finite intersections are path-connected again. This assumption can be weakened using a little "perturbation" trick which is explained in the proof given by Hatcher in his book "Algebraic Topology". After doing this, we only need that triple intersections are path connected (and we cannot do better). -Question 1. Is it possible to "conceptualize" the perturbation trick, thus weakening the assumption concerning finite intersections? Maybe this is answered by one of the pure categorical proofs of Seifert-van-Kampen? -Question 2. In practice (explicit calculations of fundamental groups), do we actually often need or use that this weakening of the assumption is possible? - -REPLY [18 votes]: Question 1: One of the easiest ways to see this 3-fold intersection condition is in terms of "brick subdivisions" of the square. This is a subdivision of the square into rectangles such that each vertex of the subdivision is a corner of 3-rectangles. Such a subdivision can be taken fine enough to carry out the argument given in Hatcher's book. More details are in -R. Brown and A. Razak Salleh, ``A van Kampen theorem for unions of non-connected spaces'', Archiv. Math. 42 (1984) 85-88. -which relates this argument to the Lebesgue covering dimension. Also by using the fundamental groupoid $\pi_1(X,A)$ on a set $A$ of base points, it gives a theorem for the union of non-connected spaces. -I also prefer the argument in terms of "verifying the universal property" rather than looking at relations defining the kernel of a morphism. To see this proof in the case of the union of 2 sets, see -https://groupoids.org.uk/pdffiles/vKT-proof.pdf -One of the problems in the proof for the fundamental group as given by May is that it does not generalise to higher dimensions. -Question 2: One has to ask: where do such unions of non-connected spaces arise? The standard answer, and for me from which all this groupoid work arose, is to obtain the fundamental group of the circle $S^1$, which is after all, THE basic example in algebraic topology. Many other examples arise in geometric topology. -More answers come in applications to group theory; for example the Kurosh subgroup theorem on subgroups of free products of groups can be proved by using a covering space of a one-point union of spaces. The cover over each space of this union has usually many components. I confess I have not seen the minimal condition of 3-fold intersection used in practice, but it is always interesting to know the minimal conditions for a theorem. Just as well to know one can't in general get away with 2-fold intersections. -You can see the consistent use of groupoids in $1$-dimensional homotopy theory (van Kampen theorem, homotopy theory, covering spaces, orbit spaces) in my book Topology and Groupoids (2006) available from amazon. 
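For reference, the groupoid form of the theorem discussed above can be stated compactly as a pushout; the following is a paraphrase of the formulation in Brown's book and the Brown-Razak Salleh paper cited above, not a quotation: if $X = U \cup V$ with $U, V$ open and the set $A$ of base points meets each path component of $U$, $V$ and $U \cap V$, then the square of induced morphisms

```latex
% Pushout form of the Seifert-van Kampen theorem for fundamental groupoids
\[
\begin{array}{ccc}
\pi_1(U\cap V,\,A) & \longrightarrow & \pi_1(U,\,A) \\
\big\downarrow & & \big\downarrow \\
\pi_1(V,\,A) & \longrightarrow & \pi_1(X,\,A)
\end{array}
\]
```

is a pushout in the category of groupoids.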
-Edit Feb 17, 2014: -It may be useful to point out that a sharper result than that in May's book was published in 1984 in the above Brown-Razak Salleh paper; the idea of proving a pushout result for the full fundamental groupoid, and then retracting to $\pi_1(X,A)$, is in my 1967 paper, and subsequent book, but Peter May cleverly makes it work for infinite covers; the problem is that this proof does not generalise to higher dimensions, as far as I can see. -Also Munkres' book on "Topology" uses non path connected spaces in dealing with the Jordan Curve Theorem, and for this uses covering space rather than groupoid arguments. I do not see why books avoid $\pi_1(X,A)$, since it hardly requires any extra in the proofs to that for $\pi_1(X,a)$, given the easy definition. -Edit Feb 20, 2015: A relevant question and answer is https://mathoverflow.net/questions/40945/compelling-evidence-that-two-basepoints-are-better-than-one -Some of the relevance to the history of algebraic topology, and to further developments, is in this presentation in Galway, December, 2014. See also my preprint page. -March 5, 2015 - -I feel this "perturbation (or deformation) trick" is of quite a fundamental nature. Above is a picture of part of a deformation involved in the proof of the 2-dimensional Seifert-van Kampen type theorem, for the crossed module involving a triple $(X,A,C)$ of spaces where $C$ is a set of base points, and $C \subseteq A \subseteq X$. -The red dots denote points of $C$. -The blue lines denote paths lying in $A$. The small squares in the bottom are supposed to denote squares lying in a set $U$ of an open cover. By making connectivity assumptions you can deform a small bottom square into the top one of the right kind, and still lying in the same open set. So, by working with a double groupoid construction $\rho_2(X,A,C)$ whose compositions are more 2-directional than the usual relative homotopy groups, you can get going with the proof of a universal property involving double groupoids, and hence for crossed modules.<|endoftext|> -TITLE: Is this an inner product on $L^1$? -QUESTION [11 upvotes]: I know that $\int f(x) \overline{g(x)} dx$ is an inner product on $L^2$. But is it one on $L^1$? I think it isn't, but I am have had difficulty figuring out which defining property is violated. -Thanks in advance for any pointers! - -REPLY [3 votes]: Suppose that $g$ is a measurable function on $[0,1]$ such that $\int |fg|<\infty$ for all $f\in L^1[0,1]$. Then $g$ is essentially bounded. -If $g$ is not essentially bounded and $E_n=\{x\in[0,1]:n\leq |g(x)|\lt n+1\}$ for each positive integer $n$, then there is a subsequence $E_{n_1},E_{n_2},\ldots$ with $m(E_{n_k})\gt 0$ for all $k$. Let $\displaystyle{f=\sum_{k=1}^\infty}\frac{1}{k^2m(E_{n_k})}\chi_{E_{n_k}}$, so $f$ is in $L^1$, and $\displaystyle{\int |gf| \geq \sum_{k=1}^\infty\frac{n_k}{k^2}\geq\sum_{k=1}^\infty}\frac{1}{k}=\infty$.<|endoftext|> -TITLE: Good examples of double induction -QUESTION [60 upvotes]: I'm looking for good examples where double induction is necessary. What I mean by double induction is induction on $\omega^2$. These are intended as examples in an "Automatas and Formal Languages" course. -One standard example is the following: in order to cut an $n\times m$ chocolate bar into its constituents, we need $nm-1$ cuts. However, there is a much better proof without using induction. -Another example: the upper bound $\binom{a+b}{a}$ on Ramsey numbers. 
The problem with this example is that it can be recast as induction on $a+b$, while I want something which is inherently an induction on $\omega^2$. -Lukewarm example: Ackermann's function, which seems to be pulled out of a hat (unless we know about the primitive recursive hierarchy). -Better examples: the proof of other theorems in Ramsey theory (e.g. Van der Waerden or Hales-Jewett). While these can possibly be recast as induction on $\omega$, it's less obvious, and so intuitively we really think of these proofs as double induction. -Another example: cut elimination in the sequent calculus. In this case induction on $\omega^2$ might actually be necessary (although I'm not sure about that). -The problem with my positive examples is that they are all quite technical and complicated. So I'm looking for a simple, non-contrived example where induction on $\omega^2$ cannot be easily replaced with regular induction (or with an altogether simpler argument). Any suggestions? - -REPLY [3 votes]: I suggest the following proof for Bezout's identity. It is a double induction, in a sense, since it proves a proposition for all pairs $a,b$ of natural numbers, by using "regular" induction on $\min(a,b)$. (I propose this as a general way to prove a statement on multiple natural variables: choose a function $f\colon \mathbb{N}^k \to \mathbb{N}$ and then show that for all $n$, if $f(n_1,\ldots,n_k)= n$ then $n_1,\ldots,n_k$ satisfy the statement. This is then proved by induction on $n$.) -Theorem: Let $a$ and $b$ be natural numbers and let $d = GCD(a,b)$ be their greatest common divisor. Then there exist integers $x$ and $y$ such that $d = ax +by$. -Proof: We use induction on $b=\min\{a, b\}$. Base case: If $b=1$ then $d=1$, and, for $x=0$, $y=1$ we have $1= ax + by$. Inductive step: Assume that for each pair $a,b$, with $a\ge b$ and $b\in \{1, 2,\ldots, n-1\}$ there are integers $x,y$ such that $GCD(a,b) = ax + by$. Consider a pair $a,b$ with $a\ge b$ and $b=n$, and let $q$ and $r$ be the quotient and remainder in the division of $a$ by $b$. If $r=0$ then $GCD(a,b)=b = ax +by$ for $x=0$ and $y=1$. -Otherwise $1 \le r < b = n$, and since $d= GCD(a,b) = GCD(b,r)$, the inductive hypothesis gives integers $x'$, $y'$ such that -$d=bx' + ry'$. By replacing $r=a-qb$ we get $d=bx' +(a-qb)y' = ay' +b(x'-qy') = ax + by$ for $x=y'$ and $y=x'-qy'$. QED -Giuseppe Lancia (giulan@gmail.com)<|endoftext|> -TITLE: Is positive the same as non-negative? -QUESTION [34 upvotes]: I would assume the answer to my question is yes, but I want to make sure because my book uses both terminologies. Please also indicate where zero falls into the mix. -UPDATE: -Here is an excerpt from my book: - -The definition of $\Theta(g(n))$ requires - that every member $f(n) \in \Theta(g(n))$ be - asymptotically non-negative, that is, - that $f(n)$ be non-negative whenever $n$ - is sufficiently large. (An - asymptotically positive function is - one that is positive for all - sufficiently large $n$.) - -REPLY [45 votes]: The real numbers can be partitioned into the positive real numbers, the negative real numbers, and zero. A real number is one and only one of those three possibilities. This is called "trichotomy." Non-negative (or, correspondingly, non-positive) means not negative (not positive), so zero or positive (zero or negative). -That is, non-negative includes zero whereas positive does not. -Edit for clarity: -Non-negative means zero or positive. -Non-positive means zero or negative. -That is, non-negative includes zero whereas positive does not and vice versa.
- -REPLY [18 votes]: In mathematical English, - -positive is defined to be $> 0$ -negative is defined to be $< 0$ - -So non-negative means $\ge 0$, not the same as positive. -In mathematical French, it just happens that the word 'positif' is defined to be $\ge 0$, that is, 0 is both 'positif' and 'negatif'. -In other languages...who knows. - -REPLY [3 votes]: If we go by your edits, about the book excerpt, it looks like the book treats non-negative as $\ge 0$, and positive as $\gt 0$. -Also, from the notation it seems like you are talking about functions whose domain is $\mathbb{N}$. -For an example of an asymptotically positive function, consider -$$ f(n) = 1$$ -For an example of an asymptotically non-negative function, consider -$$f(n) = \left|\sin\left(\frac{n\pi}{2}\right)\right|$$ -For sufficiently large $\displaystyle n$, we have that $\displaystyle f(n) \ge 0$. Note that this function is not asymptotically positive, because it is zero (for even $\displaystyle n$) infinitely often. -Any asymptotically positive function is also asymptotically non-negative, but not vice-versa. -For an example of a function which is neither asymptotically non-negative, nor asymptotically positive, -$$f(n) = \sin\left(\frac{n\pi}{2}\right)$$ -This function takes the values $\displaystyle 1,-1 \ \text{and}\ 0$ infinitely often.<|endoftext|> -TITLE: Normalizing every Sylow p-subgroup versus centralizing every Sylow p-subgroup -QUESTION [12 upvotes]: Is it true that: - -If the intersection of the Sylow p-subgroups is trivial, then the intersection of their normalizers is equal to the intersection of their centralizers? - -I half remember this being true for odd p, but I cannot find the reference. I have not found a counterexample for p=2 or p=3. - -REPLY [6 votes]: I think the answer is yes. Let $K$ be the intersection of the normalizers of the Sylow $p$-subgroups of $G$, and $P$ any Sylow $p$-subgroup. Then $K$ is a normal subgroup of $G$, so $[K,P] \le K \cap P$. If $K \cap P$ is nontrivial, then a nonidentity element $g$ has order a power of $p$ and normalizes all Sylow $p$-subgroups, so it must lie in all Sylow $p$-subgroups, contradicting your assumption. So $[K,P] = 1$ and hence $K$ centralizes all Sylow $p$-subgroups.<|endoftext|> -TITLE: Never Perfect Square Permutation Polynomials -QUESTION [11 upvotes]: I had made up this problem a while back, and I think I had a tedious, uninsightful proof. Also, I am not able to reconstruct the proof I had. -Here is the problem: -Let $\displaystyle \pi: \{1,2, \dots, 13\} \to \{1,2,\dots, 13\}$ be a bijection, i.e. it is basically a permutation of the first $\displaystyle 13$ positive integers. -For each such $\displaystyle \pi$ construct the polynomial $\displaystyle P_{\pi}: \mathbb{Z} \to \mathbb{Z}$ such that -$$P_{\pi}(x) = \sum_{j=1}^{13} \ \ \pi(j) \ x^{j-1}$$ -Is there some $\displaystyle \pi$ such that for every $\displaystyle n \in \mathbb{Z}$, $\displaystyle P_{\pi}(n)$ is never a perfect square? -I believe it is true that there is such a $\displaystyle \pi$. In fact, I seem to recollect having "proved" that there are at least $7!$ such permutations. -So the questions are: -a) Is there a "slick" proof of the existence of at least one such $\displaystyle \pi$? (Any proof welcome, though). -b) What if $\displaystyle 13$ is replaced by a generic natural number $\displaystyle M$? Can we give a different (and hopefully simpler) characterization of the $\displaystyle M$ for which such a $\displaystyle \pi$ exists? 
-Please consider posting an answer even if you don't have an answer for b). - -REPLY [7 votes]: Edit: The original solution was wrong, but fortunately could be fixed. -Given $M$, we find a sufficient condition for a permutation $\pi$ to work. We use the following simple property: $m^2 \pmod{4} \in \{0,1\}$. So if we can force $P_\pi(n) \pmod{4} \in \{2,3\}$, we are done. -There are four cases to consider, according to $n \pmod{4}$. If $n \pmod{4} = 0$ then $$P_\pi(n) \equiv \pi(1) \pmod{4},$$ and so we need $\pi(1) \pmod{4} \in \{2,3\}$. If $n \pmod{4} = 2$ then $$P_\pi(n) \equiv \pi(1) + 2\pi(2) \pmod{4},$$ and so we need $\pi(2)$ to be even. If $n \pmod{4} = 1$ then $$P_\pi(n) \equiv \sum_{i=1}^M \pi(i) = \sum_{i=1}^M i = \frac{M(M+1)}{2} \pmod{4};$$ this lies in $\{2,3\}$ precisely when $M \pmod{8} \in \{2,3,4,5\}$. Finally, if $n \pmod{4} = 3$ then, since $n \equiv -1 \pmod 4$, $$P_\pi(n) \equiv \sum_{i=1}^M (-1)^{i+1} \pi(i) \equiv \frac{M(M+1)}{2} + 2\sum_{j=1}^{\lfloor M/2 \rfloor} \pi(2j) \pmod{4}.$$ Thus we need $$\sum_{j=1}^{\lfloor M/2 \rfloor} \pi(2j) \equiv 0 \pmod{2}.$$ -Claim: Whenever $M\geq 3$ satisfies $M \pmod{8} \in \{2,3,4,5\}$, we can find such a $\pi$. -Proof: Set $\pi(1) = 3$ and $\pi(2m) = 2m$, and let $\pi$ send the remaining odd positions to the remaining odd values in any way. -This method of proof can probably be generalized by replacing $4$ with other moduli.<|endoftext|> -TITLE: Riemann integral of characteristic function of Cantor set -QUESTION [13 upvotes]: Can anyone tell me how to calculate the Riemann integral of the characteristic function of the Cantor set? It's probably obvious but I don't see how to write it down. -Many thanks for your help! - -REPLY [21 votes]: Let $C$ be the Cantor set, and let $C_n$ be the closed set left after $n$ steps of removing middle thirds from $[0,1]$, so $C_n$ is a disjoint union of $2^n$ closed intervals, and the sum of the lengths of these intervals is $\left(\frac{2}{3}\right)^n$, which converges to zero. The characteristic function $\chi_{C_n}$ of $C_n$ is a step function that dominates the characteristic function of $C$, so its integral, $\left(\frac{2}{3}\right)^n$, is an upper Riemann sum for $\chi_C$. Thus the infimum of the upper Riemann sums for $\chi_C$ is at most $\inf_n\left(\frac{2}{3}\right)^n=0$. The lower Riemann sums are all greater than or equal to $0$, so this shows that the Riemann integral exists and equals $0$. - -REPLY [3 votes]: I am presuming you are talking about the Cantor set in $[0,1]$, where you remove the middle third. -Since the Cantor set is of measure zero, the Lebesgue integral of its characteristic function is $0$. -If it were Riemann integrable (which it is, as the set of points of discontinuity has measure $0$), then the value of the Riemann integral would equal the Lebesgue integral and so would be $0$.<|endoftext|> -TITLE: Automorphisms of non-abelian groups of order 27 -QUESTION [10 upvotes]: What are the automorphism groups of non-abelian groups of order 27? (there are two non-abelian groups of order 27). - -REPLY [14 votes]: The non-abelian group of order $p^3$ with no elements of order $p^2$ is the Sylow $p$-subgroup of $\operatorname{GL}(3,p)$. Its automorphism group can also be viewed as a group of $3\times3$ matrices, the affine general linear group, -$$\operatorname{AGL}(2,p) = \left\{ \begin{pmatrix}a & b& e\\ c& d& f\\ 0 & 0 & 1\end{pmatrix} : a,b,c,d,e,f \in \mathbb{Z}/p\mathbb{Z},\; ad-bc ≠ 0 \right\}, $$ -which is the semi-direct product of $\operatorname{GL}(2,p)$ on its natural module.
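In particular the group orders are easy to compute. Assuming the identification above, for $p=3$ the automorphism group of the exponent-$3$ group of order $27$ should have order $|\operatorname{AGL}(2,3)|$; a quick arithmetic check (helper names are mine):

```python
def order_GL2(p):
    # |GL(2,p)| = (p^2 - 1)(p^2 - p)
    return (p**2 - 1) * (p**2 - p)

def order_AGL2(p):
    # AGL(2,p) = GL(2,p) extended by the p^2 translations of its natural module
    return p**2 * order_GL2(p)

assert order_GL2(3) == 48
assert order_AGL2(3) == 432   # expected |Aut| for the exponent-3 group of order 27
```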
-This description is reasonably famous, especially when considering non-abelian groups of order $p^{2n+1}$ with no elements of order $p^2$ whose center and derived subgroup have order $p$. Instead of $\operatorname{GL}(2,p)$ you get a variation on $\operatorname{Sp}(2n,p)$, that simplifies to $\operatorname{GL}(2,p)$ when $n=1$. -The non-abelian group of order $p^3$ with an element of order $p^2$ and $p ≥ 3$ has as its automorphism group a semi-direct product of $\operatorname{AGL}(1,p)$ with the dual of its natural module, so you get all $3×3$ matrices -$$\left\{ \begin{pmatrix}a & b& 0\\ 0& 1& 0\\ c & d & 1\end{pmatrix} : a,b,c,d \in \mathbb{Z}/p\mathbb{Z},\; a ≠ 0 \right\}. $$ -In both cases the "module part" of the semi-direct product is the group of inner automorphisms and the quotient ( $\operatorname{GL}(2,p)$ and $\operatorname{AGL}(1,p)$ ) are the outer automorphism groups. -You can read about some of this in section A.20 of Doerk–Hawkes, or Winter (1972). - -Winter, David L. -“The automorphism group of an extraspecial p-group.” -Rocky Mountain J. Math. 2 (1972), no. 2, 159–168. -MR297859 -Doerk, Klaus; Hawkes, Trevor. -Finite soluble groups. -de Gruyter Expositions in Mathematics, 4. Walter de Gruyter & Co., Berlin, 1992. xiv+891 pp. ISBN: 3-11-012892-6 -MR1169099<|endoftext|> -TITLE: Unique characterization of convex polygons -QUESTION [10 upvotes]: Question -I am looking for a unique characterization of a convex polygon with $n$ vertices, relative to a feature point $p$ in the interior of the polygon. This characterization would be a vector of numbers. Suppose a polygon is described by its feature point and vertices $P=(p,v_1,\ldots,v_n)$, then the characterization function is $C(P)$ such that $C(P)=C(Q)$ if and only if $P$ is congruent* to $Q$. The most important thing I need is that for two polygons which are "almost" congruent, their characterization vectors should be "close" (like, say, small 2-norm difference). The question then is what is a simple definition of the characterization function $C(\cdot)$ which satisfies my requirements? -*Here I define congruent as identical up to rotation and translation (reflections are not allowed), cyclic permutation of vertex indices, as well as identical feature point locations. If $P$ is congruent to $Q$, then any cyclic permutation of the vertices of either polygon should still leave $C(P)=C(Q)$ (thus $C$ should be invariant to cyclic permutations of the vertices). If two polygons are congruent, then when they are overlaid, the feature points of both polygons must also match up (this is why I originally stated that the polygon is defined relative to the feature point). -An illustration of what I mean is shown below. The dots inside are the feature points of the surrounding polygon. - -Things that don't work -Most characterizations of polygons usually calculate the 2x2 moment of inertial tensor relative to the center of mass of the polygon. This is not good enough because first of all, the moment tensor is not enough to completely define the shape, and second, the feature point of a polygon must also match for two congruent polygons. -Ideas - -A vector of higher order moments relative to the feature point. (Is this unique?) -A vector of displacements from a regular $n$-gon vertex positions. (Does this satisfy the nearness aspect?) - -REPLY [5 votes]: As described below, any $n$-gon is a "sum" of regular $\left\lbrace\frac{n}{k}\right\rbrace$-gons, with $k = 0, 1, 2, \cdots, n-1$. 
This can give rise to a vector of complex numbers that serves to characterize the shape of the polygon. -Given polygon $P$, we start by choosing a "feature point" in the form of a distinguished starting vertex, $v_0$, as well as a preferred tracing direction --in the universe of convex polygons, we can unambiguously take that direction as "always counter-clockwise"-- to get a list of successive vertices $v_0, v_1, \dots, v_{n-1}$. Write $[P]$ for the vector whose $j$-th element is the point of the Complex Plane at which the $j$-th vertex of $P$ lies. -Define the "standard regular $\left\lbrace\frac{n}{k}\right\rbrace$-gon", $P_k$, as the polygon whose $j$-th vertex coincides with the complex number $\exp\frac{2\pi i j k}{n}$. (As shapes, $P_k$ and $P_{n-k}$ (for $k \ne 0$) are identical, but they are traced in opposing directions.) -Now, any $n$-gon is the "sum" of rotated-and-scaled images of the $P_k$s, in the sense that we can write -$$[P] = r_0 [P_0] + r_1 [P_1] + \cdots + r_{n-1} [P_{n-1}]$$ -with each complex $r_j$ effecting the corresponding rotation-and-scale. (Determine the $r_j$s by reading the above as $n$ component-wise equations. The solution is, clearly, unique.) Therefore, the vector $R(P) := (r_0, r_1, \dots, r_{n-1} )$ exactly encodes the polygon as a figure in the plane. -Note that, for $k > 0$, polygon $P_k$ is centered at the origin, while all the vertices of polygon $P_0$ coincide at the complex number $1$. Consequently, the $P_0$ component of the decomposition amounts to a translational element, identifying the centroid (average of vertex-points) of the figure. As we are concerned about shape without regard for position, we can suppress (or just ignore) the $r_0$ component of $R(P)$. Since a polygon's shape is independent of the figure's rotational orientation, we choose to normalize $R(P)$ by rotating the elements through an angle that would align $v_0$ with the positive-real axis, arriving at our $C(P)$: -$$C(P) := \frac{1}{\exp(i\arg{v_0})} R(P) = \frac{|v_0|}{v_0} (r_1,r_2,\dots,r_{n-1})$$ -If polygons $P$ and $Q$ are congruent (with compatible distinguished vertices and tracing directions), then we have $C(P) = C(Q)$. When $P$ and $Q$ are nearly-congruent, $|C(P)-C(Q)|$ will be small, and vice-versa. -Note: When $P$ and $Q$ are similar (with compatible distinguished vertices and tracing directions), we have $\frac{C(P)}{|C(P)|} = \frac{C(Q)}{|C(Q)|}$. -Edit -As noted in comments, this $C(P)$ isn't invariant under cyclic permutations of the vertices. It's worth investigating exactly what effect a cyclic permutation has. -Consider the triangle $P$ with $[P] = ( v_0, v_1, v_2 )$. The corresponding regular $P_k$ figures are given by -$$[P_0] := ( 1, 1, 1 )$$ -$$[P_1] := ( 1, w, w^2 )$$ -$$[P_2] := ( 1, w^2, w )$$ -where $w = \exp\frac{2\pi i}{3}$. -We can easily solve the decomposition equation to get -$$R(P) = (r_0, r_1, r_2) = \frac{1}{3} \left( v_0+v_1+v_2 \;,\; v_0 + v_1 w^2 + v_2 w \;,\; v_0 + v_1 w + v_2 w^2 \right)$$ -If $P'$ is identical to $P$, but with cyclically re-ordered vertices, $[P'] = ( v_1, v_2, v_0 )$, then -$$R(P') = \frac{1}{3} \left( v_1+v_2+v_0 \;,\; v_1 + v_2 w^2 + v_0 w \;,\; v_1 + v_2 w + v_0 w^2 \right) = ( r_0 \;,\; w r_1 \;,\; w^2 r_2 )$$ -Observe that $w r_1 [P_1] = r_1 ( w, w^2, 1 )$ yields the same polygon as $r_1 [P_1] = r_1 ( 1, w, w^2 )$, except that its vertices have been cyclically re-ordered. Likewise for $w^2 r_2 [P_2]$ (and $w^0 r_0 [P_0]$, for that matter). The same holds for arbitrary $n$-gons. 
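These relations are easy to verify numerically: up to normalization the $r_k$ are the discrete Fourier transform of the vertex list, so one can check both the decomposition and the cyclic-shift behaviour with numpy (minding numpy's sign convention, under which $r = \operatorname{fft}([P])/n$):

```python
import numpy as np

v = np.array([0 + 0j, 4 + 1j, 1 + 3j])      # an arbitrary triangle
n = len(v)
w = np.exp(2j * np.pi / n)

r = np.fft.fft(v) / n                        # the r_k of the decomposition
# reconstruction: v_j = sum_k r_k * w**(j*k)
V = np.array([sum(r[k] * w**(j * k) for k in range(n)) for j in range(n)])
assert np.allclose(V, v)

# cyclic re-ordering of the vertices multiplies r_k by w**k
r_shift = np.fft.fft(np.roll(v, -1)) / n     # vertices (v1, v2, v0)
assert np.allclose(r_shift, r * w ** np.arange(n))

print(np.abs(r))                             # the invariant radii |r_k|
```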
-Thus, as a family of polygonal shapes, the decomposition into regular components is independent of cyclic permutation, as is the correspondence between the vertices of the components and the vertices of the polygon. That is, in our triangles $P$ and $P'$, we have $v_0 = r_0 + r_1 + r_2$, and $v_1 = r_0 + w r_1 + w^2 r_2$, and $v_2 = r_0 + w^2 r_1 + w r_2$, regardless of where each $v_k$ appears in $[P]$ or $[P']$. Unfortunately, the $R(\cdot)$ vector doesn't suffice to capture this invariance; and $C(\cdot)$'s dependence on the distinguished vertex doesn't help matters. -$R(\cdot)$ and $C(\cdot)$ aren't entirely useless, however. The moduli, $|r_k|$, which yield the radii of the regular components, are invariants for the polygons. -Edit 2. Perhaps my $C(\cdot)$ provides a workable, comparable, characterization, after all ... with the caveat that we don't require equality between $C(P)$ and $C(Q)$ for congruent $P$ and $Q$, but, rather, an appropriate notion of equivalence. -To incorporate the feature point, we'll assume that our polygons are positioned with feature point at origin; the translational component, $P_0$, then becomes significant, so we won't suppress the corresponding element from $C(\cdot)$. -Let $r = C(P) = \frac{|u_0|}{u_0}(r_0, r_1, r_2, \dots, r_{n-1})$ and $s = C(Q) = -\frac{|v_0|}{v_0} (s_0, s_1, s_2, \dots, s_{n-1})$ be two $C$-vectors with respect to starting vertices $u_0$ and $v_0$ in polygons $P$ and $Q$, respectively. Define "$r \equiv s$" iff, for all $k$ and some fixed integer $m$, we have $\frac{|v_0|}{v_0} s_k = \frac{|u_0|}{u_0} r_k w^{km}$, where $w = \exp \frac{2 \pi i}{n}$. That is, $|s_k| = |r_k|$, and $\arg(r_k) - \arg(s_k) + 2 \pi k m/n \equiv \arg(u_0) - \arg(v_0) \mod 2 \pi$. (I suspect there's a cleaner way to express this.) Then $P \cong Q$, with compatible feature points, if and only if $C(P) \equiv C(Q)$. (If we don't need feature points, we can position our polygons with their average-of-vertices centroids at the origin and suppress the $0$-th components of $C(\cdot)$.) -With this, we just need to determine the best way to measure the degree of non-equivalence for incongruent figures.<|endoftext|> -TITLE: Convergence in $L_{\infty}$ norm implies uniform convergence -QUESTION [11 upvotes]: I'm trying to prove the following claim: -$f_n \in C_c$, $C_c$ being the set of continuous functions with compact support, then $\mathrm{lim}_{n \rightarrow \infty} || f_n - f||_{\infty} = 0$ implies $f_n(x) \rightarrow f(x)$ uniformly. -So, according to my understanding, -$$ \mathrm{lim}_{n \rightarrow \infty} || f_n - f||_{\infty} = 0 $$ -$$ \Leftrightarrow $$ -$$ \forall \varepsilon > 0 \exists N: n > N \Rightarrow |f_n(x) - f(x)| \leq || f_n - f||_{\infty} < \varepsilon$$ $\mu$-almost everywhere on $X$. -Now my problem is that I don't see how uniform convergence follows from pointwise convergence $\mu$ almost everywhere. Can someone give me a hint? I guess I have to use the fact that they have compact support but I don't see how to put this together. -Thanks for your help! - -REPLY [12 votes]: You were not very specific about your hypotheses - I assume you're working on $\mathbb{R}^{n}$ with Lebesgue measure. -Suppose there exists a point $x$ such that $|f_{n}(x) - f(x)| > \varepsilon$. Then there exists an open set $U$ containing $x$ such that for all $y \in U$ you have $|f_{n}(y) - f(y)| > \varepsilon$ by continuity. But this contradicts the a.e. statement you gave. 
-In case you don't know yet that $f$ is continuous (or rather: has a continuous representative in $L^\infty$), a similar argument shows that $(f_{n})$ is a uniform Cauchy sequence (with the sup-norm, not only the essential sup-norm), hence $f$ will be continuous (in fact, uniformly continuous). -Note that I haven't used compact support at all, just continuity. -If you're working in a more general setting (like a locally compact space), you'd have to require that the measure gives positive mass to each non-empty open set. -Finally, note that $f$ need not have compact support. It will only have the property that it will be arbitrarily small outside some compact set ("vanish at infinity" is the technical term). For instance, $\frac{1}{1+|x|}$ can easily be uniformly approximated by functions with compact support. - -REPLY [3 votes]: Just a hint: The compact support doesn't matter, but the continuity does matter. It helps you to remove null sets.<|endoftext|> -TITLE: Irrational$^\text{Irrational}$ -QUESTION [8 upvotes]: How do I compute $\text{(irrational)}^{\text{(irrational)}}$ up to a required number of decimals say m, in the fastest way ? (one way is of course compute both the irrational numbers to a precision much larger than m and then solve it... but you never know how much excess of m you will need to calculate the irrationals.. ) - -REPLY [3 votes]: This is a further development on the ideas of Doug Spoonwood and phv3773. -Exponentiation $a^b$ is a continuous function, in both arguments, so we can use "interval methods" to calculate a number to any desired precision. Now I'm guessing you are only interested in real numbers, so I guess I can assume $a>0$, and since $a^{-b}=1/a^b=(1/a)^b$, we can also restrict our attention to $a>1$ and $b>0$. For fixed $b>0$, $a^b$ (the power function) is a strictly increasing function for $a\in(0,\infty)$, and for fixed $a>1$, $a^b$ (an exponential function) is strictly increasing for $b\in\Bbb R$. -Now suppose $a\in[a_-,a_+]$ and $b\in[b_-,b_+]$. In your application this would correspond to having calculated the irrationals $a$ and $b$ to some precision, and $a_-,a_+$ are your rational upper and lower bounds on the number you have calculated. (For example, if I calculate $a=\pi\approx3.14$ to that many digits of precision, rounded correctly, then $a\in[a_-,a_+]=[3.135,3.145]$.) Because $a^b$ is increasing in both arguments in the region of discussion, I know $$a_-^{b_-}\le a_-^b\le a^b\le a_+^b\le a_+^{b_+}$$ and hence $a^b\in[a_-^{b_-},a_+^{b_+}]$. This is Doug's "interval exponentiation" (suitably simplified for the case when $a>1$ and $b>0$). -These represent pessimistic bounds on the number being calculated, but they give a guarantee that the number you want is actually in that range. - -A natural second question related to this process is which $a_-,a_+,b_-,b_+$ to choose. If you have an effective method for calculating $a$ and $b$ to whatever precision you need, then that means you can demand $a_+-a_-\le\delta$ and $b_+-b_-\le\delta'$ with your choice of $\delta,\delta'$. Our true goal is to get the range of our interval within some $\epsilon$, which is to say $a_+^{b_+}-a_-^{b_-}\le\epsilon$. A good estimate of what $\delta,\delta'$ to choose can be given by taking the partial derivatives of $a^b$ at the point of approximation: -$$(a+\alpha)^{b+\beta}\approx a^b+\alpha\frac\partial{\partial a}a^b+\beta\frac\partial{\partial b}a^b=a^b(1+\alpha \frac ba+\beta\log a)$$ -(where $|\alpha|\le\delta/2$ and $|\beta|\le\delta'/2$). 
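The bracketing procedure itself is easy to run in code, before worrying about how to balance the two error terms (which the answer takes up next). A sketch with mpmath; it is only a sketch, since truly rigorous bounds would need directed rounding or an interval type such as mpmath's `iv`, which this version sidesteps by computing at precision far beyond the bracket width:

```python
from mpmath import mp, mpf, floor, power, pi, e

def bracket_pow(get_a, get_b, eps):
    """Bound a^b within eps for a > 1, b > 0, using monotonicity in both
    arguments; get_a/get_b return a, b at the current working precision."""
    digits = 5
    while True:
        mp.dps = digits + 30                  # work far beyond bracket width
        q = mpf(10) ** digits
        a_lo = floor(get_a() * q) / q; a_hi = a_lo + 1 / q
        b_lo = floor(get_b() * q) / q; b_hi = b_lo + 1 / q
        lo, hi = power(a_lo, b_lo), power(a_hi, b_hi)
        if hi - lo < eps:
            return lo, hi
        digits *= 2                           # tighten the brackets, retry

lo, hi = bracket_pow(lambda: +pi, lambda: +e, mpf('1e-30'))
print(lo)   # pi**e, correct to roughly 30 decimals
```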
For good approximations, we want these two error terms to be comparable to one another. Thus we want $\frac{\delta b}{a}\approx\delta'\log a\approx\frac{\epsilon}{a^b}$, so that $a^b\bigl(\alpha\frac ba+\beta\log a\bigr)$ stays below $\epsilon$. (I realize that this requires the computation of $a^b$, but this is just an estimate and need not be very accurate. The true test is when you calculate $a_+^{b_+}$ and $a_-^{b_-}$ and find that the difference is within your error bounds. If it isn't, just cut $\delta,\delta'$ in half until it is.)<|endoftext|> -TITLE: proof that $\phi\circ\phi=id$ implies the existence of a diagonal matrix -QUESTION [5 upvotes]: As exam preparation we were trying to prove the following: -Let $V=\mathbb{R}^2$ and let $\phi$ be an endomorphism of $V$ with $\phi \circ \phi = id$ and $\phi \neq id$ and $\phi \neq -id$. Prove that this implies the existence of a basis $B=(b_1,b_2)$ of $V$ with $\phi(b_1) = b_1$ and $\phi(b_2)=-b_2$. -Unfortunately we aren't able to solve this and would very much appreciate some proofs and a "how to" of how to approach such problems. -Thanks for your help. - -REPLY [4 votes]: An (elementary) algebraic approach that leads to this conclusion might be as follows. Since $\phi \neq id$ there is some $x$ for which $\phi(x) - x \neq 0$. Applying $\phi$ to this vector gives -$$\phi(\phi(x) - x) = \phi \circ \phi(x) - \phi(x) = x - \phi(x)$$ -So $x - \phi(x)$ is a nonzero vector that $\phi$ takes to its negative. Similarly, since $\phi$ is not $-id$, there is some $y$ for which $\phi(y) + y$ is nonzero, and applying $\phi$ analogously, this vector is sent to itself. These two vectors are not multiples of each other since $\phi$ has opposite behaviors on the two vectors. Hence they are a basis.<|endoftext|> -TITLE: Finding the sum $f(x)=\sum_{n=2}^{\infty} \frac{x^n}{n(n-1)}$ -QUESTION [5 upvotes]: I'm trying to find $$f(x)=\sum_{n=2}^{\infty} \frac{x^n}{n(n-1)}$$ -I found the radius of convergence of the above series, which is $R=1$. Checking $x=\pm 1$ also yields a convergent series. Therefore the above series converges for all $x\in [-1, 1]$. -Using differentiation of the series term by term we get: $$f'(x)=\sum_{n=2}^{\infty} \frac{x^{n-1}}{n-1}=\sum_{n=1}^{\infty} \frac{x^{n}}{n}=-\log(1-x)$$ which also has $R=1$, and then, by integrating term by term we get: $$f(x)=\int_{0}^{x} f'(t)dt=-\int_{0}^{x} \log(1-t)dt=x-(x-1)\ln(1-x)$$ -If I understand the theorems in my textbook correctly, the above formulas are true only for $x \in (-1, 1)$. It seems the above is correct since this is also what WolframAlpha says. -I'm a bit confused though. At first, it seemed the above series converges for all $x\in [-1, 1]$, but in the end I only got $f(x)$ for all $|x|\lt 1$; something seems to be missing. What can I say about $f(-1)$ and $f(1)$? - -REPLY [3 votes]: If you rewrite $\frac{1}{n(n-1)}$ in the form $\frac{1}{n-1}-\frac{1}{n}$, then you can rewrite the series in both cases $x = \pm 1$ and compute their values directly. You can then confirm that in both cases the value you compute coincides with the value $f(\pm 1)$. (In other words, rather than appealing to Abel's theorem, as Moron suggests, in this particular case you can verify it.) -[Caveat: In the case $x = -1$, you will need to use the familiar series -for $\log 2$, and maybe the easiest way to prove this is by appealing to Abel's theorem (applied to the series for $\log (1 + x)$).
So my approach probably doesn't really avoid Abel's theorem, at least for $x = -1$.]<|endoftext|> -TITLE: Trying a Calculus question -QUESTION [5 upvotes]: One of my pals asked me to look at this question: -Let $f: [0,1] \to \mathbb{R}$ be differentiable. Suppose that $f(0) = 0$ -and $0 < f'(x) < 1$ for all $x \in (0, 1)$, where $f'(x)$ is the derivative of $f$. Prove that -$$ \left(\int_{0}^{1}f(x) \ dx \right)^{2} \geq \int_{0}^{1}(f(x))^{3} \ dx$$ -I don't really have a clue as to where to start. Any ideas will be appreciated. - -REPLY [7 votes]: This problem is given in the book - -Problems in Mathematical Analysis - III by Kaczor and Nowak - -The solution is as follows: -Set $$F(t)= \biggl(\int\limits_{0}^{t} f(x) \ \text{dx}\biggr)^{2} - \int\limits_{0}^{t} (f(x))^{3} \ \text{dx}, \quad t \in [0,1]$$ -Then $$F'(t)=f(t) \cdot \biggl(2 \int\limits_{0}^{t} f(x) \ \text{dx} - (f(t))^{2}\biggr)$$ and if $$G(t)= 2 \int\limits_{0}^{t} f(x) \ \text{dx} - (f(t))^{2},$$ then $G'(t)=2f(t) \cdot (1-f'(t)) \geq 0$ (note that $f(t) \geq 0$, since $f(0)=0$ and $f'>0$). Consequently, $G(t) \geq G(0)=0$, which gives $F'(t) \geq 0$. So, $F(t) \geq 0$, and in particular $F(1) \geq 0$. -Moreover, if $F(1)=0$, then $F(t)=0$ for $t \in [0,1]$ and therefore $F'(t)=f(t)G(t)=0$. This, in turn, implies $G'(t)=2f(t) \cdot (1-f'(t))=0$ and $1-f'(t)=0$ for $t \in (0,1)$, contradicting the hypothesis $f'(t)<1$; so in fact the inequality is strict.<|endoftext|> -TITLE: Summation involving a factorial: $1 + \sum_{j=1}^{n} j!j$ -QUESTION [8 upvotes]: $$1 + \sum_{j=1}^{n} j!j$$ -I want to find a formula for the above and then prove it by induction. The answer according to Wolfram is $(n+1)!-1$, however I have no idea how to get there. Any hints or ideas on how I should tackle this one? - -REPLY [16 votes]: This is a simple induction. Use $(n+1)! = (n+1) \cdot n! = n\cdot n! + n!$. -Added: I don't see any better way than to play around with the formula. Rearrange this and you have -$j \cdot j! = (j+1)! - j!$. But then you have -\begin{align*} -\sum_{j=1}^{n} j \cdot j! & = \sum_{j=1}^{n} [ (j+1)! - j!] \\ -& = [2! - 1!] + [3! - 2!] + \cdots + [n! - (n-1)!] + [(n+1)! - n!] -\end{align*} -and you see that everything cancels, except the terms $- 1!$ from the first summand and $(n+1)!$ from the last summand, hence the sum must be equal to $(n+1)! - 1$. - -Edit 2 -Here's the argument: -We want to prove that the following statement $T(n)$ holds for all $n \in \mathbb{N}$: -\[ -(n+1)! - 1 = \sum_{j=1}^{n} j \cdot j!. -\] -For $n = 1$ we have the statement $T(1)$: -\[ -1 = (1+1)! - 1 = \sum_{j=1}^{1} j \cdot j! = 1\cdot 1! = 1, -\] -so this is ok. Assume that $T(n)$ holds. We want to prove $T(n+1)$: -\[ -(n+2)! - 1 = \sum_{j=1}^{n+1} j \cdot j!. -\] -Start with the right hand side: -\[ -\sum_{j=1}^{n+1} j \cdot j! = (n+1)\cdot (n+1)! + \sum_{j=1}^{n} j \cdot j! -\] -But the last sum is equal to $(n+1)! - 1$ by our assumption that $T(n)$ is true, so -\begin{align*} -\sum_{j=1}^{n+1} j \cdot j! & = (n+1)\cdot (n+1)! + (n+1)! - 1 \\ -& = [(n+1) + 1]\cdot(n+1)! - 1 = (n+2) \cdot (n+1)! - 1\\ -& = (n+2)! - 1, -\end{align*} -so $T(n+1)$ holds as well. - -REPLY [4 votes]: There is a nice interpretation of this identity in terms of uniqueness of representation in factorial base.
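Both the identity and the factorial-base interpretation are easy to check by machine: in factorial base, $\sum_{j=1}^{n} j\cdot j! = (n+1)!-1$ is the number whose $j$-th digit is $j$, the analogue of a string of nines in base ten. A quick check (helper names are mine):

```python
from math import factorial

# the identity 1 + sum_{j=1}^n j*j! = (n+1)!
assert all(1 + sum(j * factorial(j) for j in range(1, n + 1)) == factorial(n + 1)
           for n in range(1, 12))

def factorial_digits(m):
    """Digits d_1, d_2, ... (least significant first) with m = sum d_j * j!
    and 0 <= d_j <= j; this representation is unique."""
    digits, base = [], 2
    while m:
        m, d = divmod(m, base)
        digits.append(d)
        base += 1
    return digits

assert factorial_digits(factorial(5) - 1) == [1, 2, 3, 4]   # the "all nines" pattern
```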
But to answer your comment to Theo Buehler's answer, telescoping sequences are just a thing that you should be aware of and try to look for, and the identity Theo used is actually equivalent to what Wolfram told you, so...<|endoftext|> -TITLE: Notation for a polynomial ring and formal polynomials -QUESTION [8 upvotes]: Given that we shouldn't say that "$f(z)$ is a function", shouldn't we also not write "$p \in k[X_1, \ldots, X_n]$ is a polynomial"? Along those lines, I usually write $p(X_1, \ldots, X_n) \in k[X_1, \ldots, X_n]$ in order to balance the "free variables" on both sides of the relation, but that gets unwieldy when you start dealing with iterated polynomial rings. My question is: Is there a notation for polynomial rings which allow us to talk about polynomials without explicitly naming the indeterminates? Consider, for an analogy, vector spaces $\mathbb{R}^n$. These have a canonical basis, but the notation $\mathbb{R}^n$ does not commit me to naming the canonical basis, unlike, say, the notation $\operatorname{span} \{ e_1, \ldots, e_n \}$. -I suppose I should fix a definition for polynomial rings. For simplicity let's work in the category $\mathbf{CRing}$ of commutative rings with 1. Let $U: \mathbf{CRing} \to \mathbf{Set}$ be the forgetful functor taking rings to their underlying sets. A polynomial ring in a set of indeterminates $\mathcal{S}$ over a ring $A$ is a ring $R$ together with an inclusion map $\iota: A \hookrightarrow R$ and a set-map $x: \mathcal{S} \hookrightarrow UR$, and has the universal property that for every ring $B$, homomorphism $\phi: A \to B$, and set-map $b: \mathcal{S} \to B$, there is a homomorphism $\epsilon: R \to B$ such that $\epsilon \circ \iota = \phi$ and $U\epsilon \circ x = b$. -If we write $A[\mathcal{S}]$ for such a ring $R$, then we could write, for instance, $A[5]$ for the ring of polynomials in 5 variables over $A$, but that would, I imagine, be extremely confusing. Yet, on the other hand, if we have a bijection $\mathcal{S} \to \mathcal{S}'$, then this lifts to an isomorphism of $A[\mathcal{S}] \to A[\mathcal{S}']$, so it is all the more tempting to write $A[\kappa]$, $\kappa = |\mathcal{S}|$, for the canonical representative of this isomorphism class. -If $\mathcal{S} = \{ 1, \ldots, n \} \subset \mathbb{N}$ and $\phi: A \to B$ is given, I write $\phi p(b_1, \ldots, b_n)$ for the image of $p \in A[\mathcal{S}]$ under $\epsilon$ for $b(m) = b_m, m \in \mathcal{S}$. When the choice of homomorphism $\phi$ is clear I'll omit it in writing. This justifies my notation $p(X_1, \ldots, X_n) \in A[X_1, \ldots, X_n]$, since I would like to regard $A[X]$ as being analogous to $\mathbb{Z}[\pi]$, i.e. it's a ring with a transcendental element adjoined so is isomorphic to a polynomial ring, but doesn't come with evaluation maps attached. But following this line of thought, how should I denote the object that $p$ itself belongs to? -I recently started attending an algebraic geometry course and at one point the lecturer wrote $k[\mathbb{A}^n]$ for the ring of polynomials in $n$ indeterminates over $k$. This seems like a reasonable solution, but there are some problems: - -It feels suspiciously like a function ring, but in general the map taking formal polynomials to polynomial functions is neither injective nor surjective. -The notation makes it look like a ring with $\mathbb{A}^n$ adjoined, but that doesn't seem to make sense. (Is there a way to make sense of it, e.g. by defining ring operations on $\mathbb{A}^n$?) -Is it standard notation? 
I have seen $k[V]$ in some algebraic geometry textbooks for the coordinate ring of the (affine) variety $V$, but never for $V = \mathbb{A}^n$. (I have similar reservations about the notation $k[V]$, but not as strongly.) -Would it make sense to write, say, $\mathbb{Z}[\mathbb{A}^n]$? - -A related problem arises from the following: let $p(X)$ and $q(X)$ be formal polynomials in $k[X]$, with $p(X) = q(X^2)$. It's clear that $\operatorname{deg} p = 2 \operatorname{deg} q$... but this shows that, in a certain sense, the degree depends on the ambient polynomial ring: if $p(X)$ were considered as a formal polynomial in $k[X^2]$, its degree would be the same as $q$, since, after all, $p(X) = q(X^2)$. It is clear that we should have $k[X^2] \subset k[X]$, but if we obviate the indeterminates and reduce polynomials to their bare skeletons, then the "inclusion" map $k[X^2] \hookrightarrow k[X]$ is no longer a set-theoretic inclusion map. Is there a coherent way of thinking about polynomials and polynomial rings which resolves this ambiguity, and what is the notation that goes with it? - -REPLY [3 votes]: The other answer is bogged down with a lot of comments, so let me give a fresh answer to the revised question. The short answer is as follows: $k[\mathbb{A}^n]$ is the notation I support because $k[V]$ is perfectly standard. The fact that it looks like adjoining $V$ is irrelevant; what you are adjoining is coordinate functions on $V$. (That is, you should think of it as analogous to the notation $C(X)$ for the ring of real-valued functions on a topological space $X$, except that $k(V)$ is already taken; it means the field of functions on $V$ when $V$ is irreducible.) Finally, the fact that it looks like a function ring is also irrelevant because, in the right category, it is a function ring. -The long answer is as follows: your concern that - -It feels suspiciously like a function ring, but in general the map taking formal polynomials to polynomial functions is neither injective nor surjective. - -comes from the fact that the functor $\text{Hom}(\text{Spec } k, -) : k\text{-Alg}^{op} \to \text{Set}$ is not faithful, but you don't have to think about this functor. You can in fact work directly in $k\text{-Alg}^{op}$, and in this category $k[V]$ is precisely the ring of functions $\text{Hom}(V, k)$ where $k$ is shorthand for the affine line $\mathbb{A}^1(k) \simeq \text{Spec } k[x]$. One need not make a distinction between polynomials and the functions they define in this case. -Here's why. Suppose $V$ is given by an ideal $I$ in $k[x_1, ... x_n]$ for some $n$, so that $k[V]$ denotes the ring $k[x_1, ... x_n]/I$. (This is by definition.) Then a morphism $\text{Hom}(V, k)$ (in $k\text{-Alg}^{op}$, which is emphatically not the naive category of affine varieties over $k$) is precisely a morphism $k[x] \to k[x_1, ... x_n]/I$ of $k$-algebras. By the universal property of $k[x]$, such a morphism is freely determined by the image of $x$, so $\text{Hom}(k[x], -)$ represents the forgetful functor $k\text{-Alg} \to \text{Set}$. In particular such morphisms are in one-to-one correspondence with elements of $k[V]$, and this may therefore be taken as a definition of $k[V]$. (The details of how to get the ring operations are discussed, as I said, in this blog post.) 
-When $k$ is algebraically closed and we restrict to the opposite of the category of finitely-generated reduced $k$-algebras, then the functor $\text{Hom}(\text{Spec } k, -)$ is faithful by the Nullstellensatz, so we can define the ring of regular functions $k[V]$ in a naive way by regarding $V$ as a set of points in $\mathbb{A}^n(k)$. The point I am trying to make above is that, as long as we switch to a more sophisticated category, we can still do this for $k$ arbitrary (and not necessarily a field) and $V$ an arbitrary $k$-scheme rather than just a variety. In particular, if $V$ is defined over $\mathbb{Z}$, then the notation $\mathbb{Z}[V]$ makes perfect sense.<|endoftext|> -TITLE: How to find $3^{1000}\bmod 7$? -QUESTION [7 upvotes]: Sorry this is a really simple problem but I couldn't figure it out. I'm trying to find the value of $3^{1000}\bmod 7$. I know the actual value is 4, but I used a calculator. How would I simplify this problem to get the correct answer without a calculator? I don't want a direct answer, but maybe a nudge in the right direction, such as tricks I could use or maybe an alternate way of writing it. - -REPLY [2 votes]: Since $3^{6}\equiv 1 \pmod 7$, -we have $3^{1000}=3^{6\cdot 166+4}=(3^{6})^{166}\cdot 3^{4}\equiv 3^{4}\equiv 4 \pmod 7$.<|endoftext|> -TITLE: Extending a positive linear functional in finite dimensions -QUESTION [7 upvotes]: Let $V$ be a vector subspace of $R^N$, and $l:V \to R$ a linear mapping such that $l(V\bigcap R_{+}^N)\subseteq R_{+}$ (i.e., $l$ is positive). -I have heard that there exists a separating hyperplane sort of argument that allows us to show that $l$ extends to a positive linear functional on $R^N$. I have my own proof, but it is complicated and uses the Bauer-Namioka condition for extension of positive linear functionals on ordered vector spaces of any dimension. -Question: What is this simple separating hyperplane argument that shows that $l$ extends to a positive linear functional on $R^N$? (I believe it should be quite straightforward, but I was not able to construct the right problem to invoke a separation argument...) A reference would be okay as well. Thanks. - -REPLY [2 votes]: (Edit: reading the comments left by Kevin above, Riesz extension requires a bit more than he assumed. But I think that the separating hyperplane argument would boil down to essentially the same trick as outlined below.) -I think you can use the following modified proof of the Riesz extension theorem. -Marcel Riesz Extension Theorem Let $W$ be a vector space (finite dimensional in the following proof; the actual theorem has no such limitations), and $V$ a subspace. Let $F$ be a convex cone in $W$ with the property that for every $w\in W$ there exist $v_+, v_- \in V$ such that $v_+ - w\in F$ and $w - v_- \in F$. Then for any $\phi$ linear on $V$ and positive on $V\cap F$, there exists an extension $\psi$ on $W$ that is positive on $F$. -Proof (sketch) Define $\psi_+(w) = \inf_{v-w\in F, v\in V}\phi(v)$ and $\psi_-(w) = \sup_{w-v\in F, v\in V}\phi(v)$. Check that $\psi_+$ is convex, $\psi_-$ is concave, and $\psi_+(w) \geq \psi_-(w)$. An application of the separating hyperplane theorem implies that there exists a linear map $\psi$ with $\psi_+ \geq \psi \geq \psi_-$. Since $\psi_+ = \psi_- = \phi$ on $V$, you have that $\psi$ is an extension. By definition $\psi_-|_F \geq 0$ (take $v=0$ in the supremum), hence $\psi|_F \geq 0$, and so you have the conclusion of the theorem.
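In the finite-dimensional setting of the question, finding a positive extension is a linear feasibility problem, so an extension can even be computed explicitly. A toy illustration with scipy, on an example of my own choosing ($V=\operatorname{span}\{(1,0,1),(0,1,0)\}\subset\mathbb{R}^3$ with $l$ sending both spanning vectors to $1$):

```python
import numpy as np
from scipy.optimize import linprog

# V = span{(1,0,1), (0,1,0)} in R^3; l is given by l(v1) = l(v2) = 1.
# Any vector in V ∩ R_+^3 has the form a*v1 + b*v2 with a, b >= 0,
# so l is positive on the cone. A positive extension is a vector c >= 0
# with <c, v1> = 1 and <c, v2> = 1.
V_basis = np.array([[1, 0, 1],
                    [0, 1, 0]])
l_values = np.array([1, 1])

res = linprog(c=np.zeros(3),                 # pure feasibility problem
              A_eq=V_basis, b_eq=l_values,
              bounds=[(0, None)] * 3)
assert res.success
print(res.x)                                 # e.g. (1, 1, 0): a positive extension
```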
-(See also this post on Terry Tao's blog for some related concepts.)<|endoftext|>
-TITLE: Exterior power of dual space
-QUESTION [7 upvotes]: Let $V$ be a vector space with basis $e_1, \ldots, e_n$ and $V^*$ be its dual space with dual basis $e_1^*, \ldots, e_n^*$. Let $k$ be an integer between $1$ and $n$. Why is $\wedge^{n-k}V=\wedge^{k}V^*$? Thank you very much.
-
-REPLY [15 votes]: This is slightly false. The two are isomorphic, but not canonically so. There is a natural pairing $\Lambda^{n-k} V \times \Lambda^k V \to \Lambda^n V$ given by exterior product, but this pairing does not identify $\Lambda^{n-k} V$ with $(\Lambda^k V)^{\ast}$ until you pick an isomorphism $\Lambda^n V \simeq k$; this implies a choice of orientation, but is slightly stronger; one might say it implies a choice of "volume form." But it does not imply a choice of inner product.
-The (canonical) isomorphism between $(\Lambda^k V)^{\ast}$ and $\Lambda^k V^{\ast}$ comes from the way duals commute with tensor products. It should look pretty straightforward with a specific basis.<|endoftext|>
-TITLE: max and min versus sup and inf
-QUESTION [51 upvotes]: What is the difference between max, min and sup, inf?
-
-REPLY [8 votes]: If we have a set $A$ and some partial ordering on $A$, then for $B$ a subset of $A$ we define the $\sup$ of $B$ (denoted $\sup B$) as the least element $a$ such that:
-
-$\forall b\in B\colon b\le a$.
-The $\inf$ of $B$ is defined dually, as the greatest element $a$ of $A$ such that $a\le b$ for all $b\in B$. The difference from $\max$ and $\min$ is that $\max B$ and $\min B$ are required to be elements of $B$ itself, while $\sup B$ and $\inf B$ need not be: for example $\sup\,(0,1)=1$ and $\inf\,(0,1)=0$, even though $(0,1)$ has neither a maximum nor a minimum.<|endoftext|>
-TITLE: Nonpositive curvature, Theorem of Cartan-Hadamard
-QUESTION [6 upvotes]: In my differential geometry course we had the following
-Theorem (Cartan-Hadamard): Let $M$ be a connected, simply connected, complete Riemannian manifold. Then the following are equivalent:
-
-$M$ has nonpositive curvature
-$|d\exp_p(v)\hat v| \ge |\hat v|$ for all $p \in M$, $v, \hat v \in T_pM$
-$d(\exp_p(v), \exp_p(w)) \ge |v - w|$ for all $p \in M$, $v, w \in T_pM$
-
-In addition $\exp_p$ is a diffeomorphism if either of these statements holds.
-Now, I understand the proof that was given, but I do not see where we have to use simple connectivity of $M$ (in the proof given by my professor the Cartan-Ambrose-Hicks Theorem was used). What's wrong with the following:
-$1. \Leftrightarrow 2.$ didn't make use of simple connectivity. (at least I don't see where)
-But now I think 2., completeness and connectedness imply that $\exp_p$ is a diffeo, and in particular $M$ is simply connected.
-2. implies 3.: Let $v, w \in T_pM$ be given, let $\gamma(t) = \exp_p(v(t))$ be a length minimizing geodesic connecting $\exp_p(v)$ and $\exp_p(w)$. Then
-$
-\begin{eqnarray}
- d(\exp_p(v), \exp_p(w)) &=& L(\gamma) \\ &=& \int_0^1 |\dot \gamma(t)| \, dt \\ &\ge& \int_0^1 |\dot v(t)| \, dt \\ &\ge& \left| \int_0^1 \dot v(t) \, dt \right| \\ &=& |v-w|
-\end{eqnarray}
-$
-Therefore $\exp_p$ is injective.
-Completeness and connectedness imply that $\exp_p$ is surjective (Hopf-Rinow).
-2. implies that $\exp_p$ is a local diffeo.
-So in conclusion $\exp_p$ is seen to be a bijective local diffeo, hence a diffeo.
-My question:
-
-Is there anything wrong with this argument?
-If yes: Could you give an example of a complete, connected manifold with nonpositive curvature, which is not simply connected?
-
-Thanks a lot!
-S.L.
-
-REPLY [8 votes]: Regarding your comment above: yes, if $M$ is complete and connected with non-positive curvature, then the universal cover $\widetilde{M}$ of $M$ inherits
-a complete connected non-positive curvature metric (just pull-back the Riemannian
-metric from $M$), and so satisfies the conditions (and so also the conclusions!)
of the Cartan--Hadamard theorem. One then finds that $\widetilde{M}$ is homeomorphic to the tangent space of any point $\widetilde{m}$ of $\widetilde{M}$ (via $\exp_{\widetilde{m}}$). If $m$ is the image of $\widetilde{m}$ in $M$, then
-the tangent space to $\widetilde{m}$ is naturally identified with the tangent space to $m$, and the exponential map $\exp_m$ is naturally identified with the composite of $\exp_{\widetilde{m}}$ and the projection $\widetilde{M} \to M$.
-Thus indeed, one finds that $\exp_m$ is a covering map.
-This circle of ideas is frequently applied, e.g. in the context of hyperbolic manifolds. If $M$ is a compact connected hyperbolic manifold, then one finds
-by Cartan--Hadamard that $\widetilde{M}$ is isometric to hyperbolic space of
-the appropriate dimension, and so $M$ is isometric to a quotient of hyperbolic space by a discrete cocompact group of isometries. (This is why hyperbolic manifold theory interacts so tightly with certain parts of group theory.)
-Also, one concludes that if $M$ is compact and connected (and positive dimensional --- i.e. not a single point!) with non-positive curvature then $\pi_1(M)$ is infinite (because, writing $M$ as a quotient of $T_m M$ via
-$\exp_m$, we see that to get something compact, we have to quotient out by an
-infinite group of diffeos).<|endoftext|>
-TITLE: Fun math outreach/social activities
-QUESTION [25 upvotes]: What are some great math social activities for students? I'm looking for things that bring people together with a "light" mathematical touch. The goal is to create a stronger mathematical community in a small college math department.
-Examples I've heard so far:
-Math Stack Exchange Party: Students get together and surf questions on this site and answer them.
-Zome Tool Party: Students get together and make cool things out of Zome tools.
-Integration Bee: Spelling bee... but with integrals!
-I'm looking for "soft" ways to attract students to the major. Thanks in advance!
-
-REPLY [8 votes]: When I was a first year undergrad, we organised a mathematical Call My Bluff evening. A game master prepares questions in the vein of "What is a happy number?", "State one interesting theorem from Ramsey theory", and so on, questions whose answers a first year undergraduate is unlikely to know but likely to understand. Then, several teams have to invent credible answers and quietly submit them on a piece of paper. The game master reads them all out, along with the correct answer and the teams have to guess which answer is right. There are some points for identifying the right answer, but even more points for every team that fell for yours.
-It's great fun, and actually quite demanding: when you invent a definition, you must take care not to define the empty set and not to define something that you already know under a different name. Accordingly, when you are evaluating the answers, you are trying to rule out those that seem to define/state something boring or something well-known under a different name. Of course, inventing answers is also very creative and makes you realise how difficult it is to ask interesting questions or introduce interesting concepts.
-
-REPLY [2 votes]: To add to Mike's Math movies/documentaries list: (unable to post links in comments)
-
-Story of Maths
-Dangerous Knowledge
-Fermat's Last Theorem
-The Importance of Mathematics<|endoftext|>
-TITLE: Quotient field of a quotient ring
-QUESTION [8 upvotes]: Given $R$ an integral domain (commutative ring with no zero divisors), and $\mathfrak P$ a prime ideal in $R$, is there a relation between the field of fractions of $R$ and the field of fractions of $R/\mathfrak P$?
-It's trivial to see that whenever $\mathfrak P$ is also maximal, then $\text{Frac}(R/\mathfrak P)\cong R/\mathfrak P$, but in general it would be nice if things worked like that:
-
-There exists at least a maximal ideal containing $\mathfrak P$
-
-There exists a maximal ideal $\mathfrak M$ containing $\mathfrak P$
-
-the field of fractions of $R/\mathfrak P$ is $R/\mathfrak M$
-
-
-but I'm not able to prove or disprove this...
-
-REPLY [13 votes]: With regard to the question in your first sentence, you may want to think about the example of $R = \mathbb Z$, $\mathfrak P = p \mathbb Z$ for a prime $p$,
-and ask yourself what relationship (if any) there is between $\mathbb Q$ (the field of fractions of $\mathbb Z$) and $\mathbb F_p = \mathbb Z/p\mathbb Z$ (the finite field of $p$ elements).
-In general, if $\mathfrak P$ is prime but not maximal, then the quotient
-$R_{\mathfrak P}/\mathfrak P R_{\mathfrak P}$ (where $R_{\mathfrak P}$ is the localization of $R$ at $\mathfrak P$) is equal to the field of fractions of $R/\mathfrak P$,
-and this is the typical method in commutative algebra for finding a link between
-the field of fractions of $R/\mathfrak P$ and the ring $R$ itself.<|endoftext|>
-TITLE: Logarithm of a complex number as intersections of two logarithmic spirals
-QUESTION [5 upvotes]: In Penrose's book "The Road to Reality" page 97 figure 5.9 he shows the values of the complex logarithm in a diagram as the intersection of two logarithmic spirals. Can you please explain how these particular spirals are derived from the definition of the logarithm?
-
-REPLY [2 votes]: If $z = x+iy$ and $w = \exp(x'+iy')$, then the set of all powers $w^z$ is given by
-$$ \{ w^z \} = \{ \exp(z\log w) \} = \{\exp(xx'-yy'-2\pi ky +i(xy'+x'y+2\pi k x)) : k \in \Bbb{Z} \}$$
-You can easily check that this set lies on the logarithmic (equiangular) spiral given in polar coordinates $(r,\theta)$ by
-$$ r(t) = \exp(\alpha_m t + \beta_m), \quad \theta(t) = t $$
-where
-$$ \alpha_m = \frac{y}{m-x},\quad \beta_m = xx' - yy' - \alpha_m(xy'+x'y) $$
-and $m \in \Bbb{Z}$ is an arbitrary integer $\neq x$. Moreover, you can also easily check that the intersection of any two such spirals for integer $m,n$ with $|m-n| = 1$ is precisely the above set of all complex powers $w^z$. The surprising thing is that this set in itself can suggest a shape (see this picture) that is more complicated than a logarithmic spiral, despite (or because of) the fact that it lies on infinitely many of them.<|endoftext|>
-TITLE: The Notorious Triangle Problem
-QUESTION [15 upvotes]: I was told this question by a friend, who said that their friend had thought about it on and off for six months without any luck. I have then had it for a while without any luck either. It is in the style of a contest question, but it is unlike any other, as really nothing seems to gain any ground.
-Here's the question:
-Let $f:[0,1]\rightarrow \mathbb{R}$ be a continuous function satisfying the following property: If $ABC$ is the equilateral triangle with side length 1, we have for any point $P$ inside $ABC$, $f(\overline{AP})+f(\overline{BP})+f(\overline{CP})=0$ where $\overline{AP}$ is the distance from point $P$ to vertex $A$. (Example: by taking $P=A$ we see $f(0)=-2f(1)$.)
-Prove or disprove: $f$ must be identically zero on its domain.
-Notice that continuity is critical since otherwise I could just take the function $f(x)=0$ whenever $x\in (0,1)$ and $f(0)=1$, $f(1)=\frac{-1}{2}$.
-
-REPLY [10 votes]: Chandru has provided the link to a complete solution, but as Eric notes, it is somewhat involved. Some months ago, being unaware of that solution, I had found a less involved proof if you assume that $f$ is not only continuous but also differentiable. I reproduce that proof below.
-Let $\triangle$ be the set of points inside or on the triangle, and $a, b, c : \triangle \to [0,1]$ be scalar fields on $\triangle$ mapping any point $P$ to the distances $\overline{AP}$, $\overline{BP}$, $\overline{CP}$ respectively. Define the scalar field $$g := f \circ a + f \circ b + f \circ c = 0.$$ It is easily shown that $\nabla a$ is a unit vector pointing away from $A$, and similarly for $b$ and $c$. Then $$\nabla g = f'(a) \nabla a + f'(b) \nabla b + f'(c) \nabla c,$$ where $f'(x) = df/dx$. But $g$ is constant, so $\nabla g = 0$.
-Take $P$ to be a point on the side $AB$. Then $\nabla a$ and $\nabla b$ are antiparallel, while $\nabla c$ is linearly independent. For their linear combination to be $0$, the coefficient of $\nabla c$, which is $f'(c)$, must be $0$. Since $P$ is an arbitrary point on $AB$, $c$ takes every value between $\sqrt{3}/2$ and $1$, so over that range $f'$ is zero, and $f$ is constant.
-Once we've shown that $f'$ is zero between $\sqrt{3}/2$ and $1$, for any other value $a$ it's fairly easy to pick a point inside the triangle that's $a$ units from $A$ and over $\sqrt{3}/2$ units from $C$. Now $\nabla g = f'(a) \nabla a + f'(b) \nabla b + f'(c) \nabla c = 0$. The last term drops out because $f'(c) = 0$. $\nabla a$ and $\nabla b$ are linearly independent in the interior of the triangle, so their coefficients $f'(a)$ and $f'(b)$ must be zero. Thus $f' = 0$ everywhere, so $f$ is constant. In particular, $f(1/\sqrt{3}) = 0$, so $f = 0$ everywhere.<|endoftext|>
-TITLE: Forcing cardinality of a set
-QUESTION [9 upvotes]: I'm studying Shelah's proof (actually written by Uri Abraham) that adding one generic real implies the existence of a Suslin tree (available at this link, I think freely for everyone.)
-The notion of forcing is the set of finite partial functions from $\omega$ to $\omega$, with stronger being "extending"; we then construct a tree on $\omega_1$ by using these functions to define new ones, ensuring that the result gives us a Suslin tree.
-At one point, the claim is that if $X$ is an uncountable anti-chain in the tree (in the generic extension, of course) then there exists $p$ such that $p\Vdash X$ is an uncountable anti-chain.
-Then, it says, we can find $q$ stronger than $p$ for which $Y=\{\alpha | q\Vdash\alpha\in X\}$ is uncountable.
-That last statement is unclear to me. I'm sensing that this is something relatively simple like a pigeonhole argument, but I'm uncertain how to deduce it.
-
-REPLY [8 votes]: Consider a fixed generic extension, obtained from a generic filter $G$ to which $p$ belongs. We have that $X$ is uncountable.
For each $\alpha\in X$ there is $q\in G$ that in $V$ forces $\alpha\in X$. We may assume $q\le p$ by further extending if necessary. This shows that $A_\alpha=\{q\mid q\le p\land q$ forces in $V$ that $\alpha\in X\}$ is nonempty for each $\alpha\in X$. Since Cohen forcing is countable, the same $q$ must be in uncountably many of these $A_\alpha$, and we are done.
-There is another way of presenting the argument that avoids talking about forcing extensions: For each $q\le p$ let $A_q$ be the set of $\alpha$ that $q$ forces to be in $X$. If each $A_q$ is countable, then (again, because Cohen forcing is countable) there is an $\alpha$ such that any ordinal forced to be in $X$ by a condition extending $p$ must be strictly below $\alpha$. But then $p$ itself must force $X$ to be contained in $\alpha$, contradicting that $p$ forces $X$ to be uncountable.<|endoftext|>
-TITLE: Compactness of the set of $n \times n$ orthogonal matrices
-QUESTION [11 upvotes]: Show that the set of all orthogonal matrices in the set of all $n \times n$ matrices endowed with any norm topology is compact.
-
-REPLY [12 votes]: Recall that a compact subset of $\mathbb{R}^{n \times n}$ is a set that is closed and bounded. One way to show closedness is to observe that the orthogonal matrices are the inverse image of the element $I$ under the continuous map $M \mapsto MM^T$. Boundedness follows for example from the fact that each column or row is a vector of magnitude $1$.<|endoftext|>
-TITLE: Is a regular ring a domain
-QUESTION [11 upvotes]: A regular local ring is a domain. Is a regular ring (a ring whose localization at every prime ideal is regular) also a domain? I am unable to find/construct a proof or a counterexample. Any help would be appreciated.
-
-REPLY [17 votes]: No. E.g. choose two regular domains and take their product; this will be regular, but not a domain.
-This is more or less the general case, as I will explain:
-In general, a Noetherian ring, all of whose localizations at its prime ideals are domains, is a finite product of domains (and of course a finite product of domains has this localization property). So a regular Noetherian ring will be a finite product of regular domains (and conversely any such product will be regular).
-Geometrically, one can think of this as follows: regularity of $A$ is a local property on Spec $A$, and Spec $A\times B$ is equal to Spec $A \coprod$ Spec $B$.
-So locally Spec $A\times B$ looks like either Spec $A$ or Spec $B$. In particular,
-local properties, such as regularity (or the condition that the localization at prime ideals be a domain) can't detect global properties (like $A$ itself
-being a domain).<|endoftext|>
-TITLE: $\frac{1}{f'(x_1)}+\frac{1}{f'(x_2)}=2$
-QUESTION [5 upvotes]: 1. Let $f$ be a real-valued differentiable function defined on $[0, 1]$. If $f(0)=0$ and
-$f(1)=1$, prove that there exist two numbers $x_1,x_2 \in [0, 1]$ such that
-$\frac{1}{f'(x_1)}+\frac{1}{f'(x_2)}=2$.
-2. Let $f$ be a real-valued differentiable function defined on $(0,\infty)$. If $\lim_{x\rightarrow0^+}f(x)=\lim_{x\rightarrow\infty}f(x)=1$, prove that there exists $c\in(0,\infty)$ such that $f'(c)=0$.
-1. From the question's statement, I wasn't sure if $x_1$ could equal $x_2$. If it can, is the following proof valid?
-Clearly, it suffices to show $f'(x)=1$ for some $x \in [0,1]$.
-We proceed by contradiction.
-Suppose $f'(x)\neq1$ for all $x \in [0,1]$.
-Then either $f'(x)>1$ or $f'(x)<1$ for all relevant $x$ (otherwise, $\exists x_1,x_2$ s.t.
$f'(x_1)>1, f'(x_2)<1$, which contradicts the IVT for derivatives).
-Assume, wlog, $f'(x)>1$ for all $x$.
-Then $1=f(1)=\int_{0}^{1}f'(x)dx>\int_{0}^{1}dx=1,$ contradiction.
-2. Again we proceed by contradiction; suppose $f'(x)\neq0$ for all $x$.
-By an identical argument to the one used in 1, either $f'(x)>0$ or $f'(x)<0$ for all $x$.
-Wlog, assume $f'(x)>0$. Then $f$ is strictly increasing $\forall x$.
-Then, if $x>1$, $1=\lim_{t\rightarrow0^+}f(t)<f(1)<f(x)<\lim_{t\rightarrow\infty}f(t)=1$ (both limits exist by hypothesis), contradiction.
-First, if my proofs are valid, are there simpler proofs? Second, can someone provide me with a proof of the case where $x_1\neq x_2$ in 1?
-
-REPLY [8 votes]: Updated for completeness
-In fact, there are an uncountable number of solutions to
-$\frac{1}{f'(x_1)} + \frac{1}{f'(x_2)} = 2$
-with $x_1 \ne x_2$. First, if the graph of $f$ coincides with the main diagonal -- i.e. if $f(x) = x$ for all $x \in [0,1]$ -- then any pair $(x_1,x_2)$ is a solution. So suppose there exists $t$ with $f(t) \ne t$. Then there must exist a point $(u,f(u))$ off the main diagonal with $f'(u) = 1$. To see this, apply the Mean Value Theorem (MVT) to the interval $(l,r)$, where $l$ is the least upper bound of those $x \lt t$ with $f(x)=x$, and $r$ is the greatest lower bound of those points $x \gt t$ with $f(x)=x$.
-Now, to fix things, suppose that $f(u) \gt u$. Then we apply the MVT to intervals $[0,u]$ and $[u,1]$, to find points $v \in (0,u)$ and $w \in (u,1)$ with
-$f'(v) = f(u)/u > 1$ and $f'(w) = (1 - f(u))/(1-u) < 1$.
-Now, given any $s \in (1,f'(v))$ and $t \in (f'(w),1)$, there exist $x_1 \in (v,u)$ and $x_2 \in (u,w)$ such that $f'(x_1) = s$ and $f'(x_2) = t$ (because a derivative, although not necessarily continuous, does obey the Intermediate Value property). There are uncountably many such pairs $(s,t)$ with the property $\frac{1}{s} + \frac{1}{t} = 2$, so there are uncountably many solutions $(x_1, x_2)$.<|endoftext|>
-TITLE: Linear independency before and after Linear Transformation
-QUESTION [20 upvotes]: If we are given some linearly dependent vectors, would the $T$ of those vectors necessarily be dependent (given a transformation from $R^n$ to $R^p$)?
-And if we are given some linearly independent vectors, would $T$ of those vectors necessarily be independent (given a transformation from $R^n$ to $R^p$)?
-The answers are presumably no and no, but I am struggling to figure out why. Thanks!
-
-REPLY [2 votes]: Just to add to Arturo Magidin's answer, if you are considering sets of vectors rather than lists, then both statements are false:
-Consider $T:\mathbb{R}^2 \to \mathbb{R}, \: T(x,y) = x+y,$ and sets $A = \{(2,0),(0,2),(1,1)\}$ and $B = \{(1,-1)\}$. Then $A$ is linearly dependent, but $T(A) = \{2\}$ is linearly independent; and $B$ is linearly independent, but $T(B) = \{0\}$ is linearly dependent.<|endoftext|>
-TITLE: Proof of a combination identity:$\sum \limits_{j=0}^n{(-1)^j{{n}\choose{j}}\left(1-\frac{j}{n}\right)^n}=\frac{n!}{n^n}$
-QUESTION [6 upvotes]: I want to ask if there is a slick way to prove:
-$$\sum_{j=0}^n{(-1)^j{{n}\choose{j}}\left(1-\frac{j}{n}\right)^n}=\frac{n!}{n^n}$$
-Edit:
-I know Yuval has given a proof, but that one is not direct. I am requesting a direct algebraic proof of this identity.
-Thanks.
-
-REPLY [2 votes]: This answer is similar to Moron's but I think is somewhat simpler. Expand $(e^x-1)^n$ using the binomial theorem to obtain $$(e^x-1)^n = \sum_{j=0}^n (-1)^{n-j} \binom{n}{j} e^{jx}.$$ Differentiate both sides $n$ times.
Every term on the left-hand side will contain an $e^x-1$ factor except for an $n!e^{nx}$ term, and the right side will be $$\sum_{j=0}^n (-1)^{n-j} \binom{n}{j} j^n e^{jx}.$$ Then substitute $x = 0$ to obtain
-$$n! = \sum_{j=0}^n (-1)^{n-j} \binom{n}{j} j^n.$$ Reindex the sum and divide by $n^n$.<|endoftext|>
-TITLE: Algebraic Proof that $\sum\limits_{i=0}^n \binom{n}{i}=2^n$
-QUESTION [22 upvotes]: I'm well aware of the combinatorial variant of the proof, i.e. noting that each formula is a different representation for the number of subsets of a set of $n$ elements. I'm curious if there's a series of algebraic manipulations that can lead from $\sum\limits_{i=0}^n \binom{n}{i}$ to $2^n$.
-
-REPLY [3 votes]: Using the binomial theorem, we find:
-$2^n = (1 + 1)^n = \sum\limits_{k=0}^{n} \binom{n}{k} 1^k \cdot 1^{n-k} = \sum\limits_{k=0}^{n} \binom{n}{k}$<|endoftext|>
-TITLE: Sums and products of bounded functions
-QUESTION [6 upvotes]: I have been on this one for hours, can't figure out how to write this in the proper format/wording.
-Let $f$ and $g$ be functions from $\mathbb{R}$ to $\mathbb{R}$. For the sum and product of $f$ and $g$, determine which statements are true. If true, provide a proof; if false, provide a counterexample.
-
-If $f$ and $g$ are bounded, then $f + g$ is bounded.
-If $f$ and $g$ are bounded, then $fg$ is bounded.
-If both $f + g$ and $fg$ are bounded, then $f$ and $g$ are bounded.
-
-Any help is appreciated, thanks :)
-
-REPLY [7 votes]: a) That $f$ is bounded means that $|f(x)| \leq C$ for all $x$. Similarly $|g(x)| \leq D$ for all $x$. Therefore the triangle inequality gives $|f(x) + g(x)| \leq |f(x)| + |g(x)| \leq C + D$ for all $x$ and hence $f+g$ is bounded by $C + D$.
-b) is similar but easier
-c) $f^2 + g^2 = |f^2 + g^2| = |(f + g)^{2} - 2fg| \leq |f + g|^2 + 2|fg|$ and the last term is bounded by assumption and a) and b).
-
-REPLY [2 votes]: Remember the definition of bounded: A function $h\colon\mathbb{R}\to\mathbb{R}$ is bounded if and only if there exist numbers $M,N\gt 0$ such that for all $r\in\mathbb{R}$, $-N \leq h(r) \leq M$.
-Intuitively, the graph of $h$ goes neither "too high" nor "too low".
-So, suppose that $f$ and $g$ are both bounded. That means that the values of $f$ don't get bigger than some $M_1\gt 0$, and the values of $g$ don't get bigger than some $M_2\gt 0$. When you take $(f+g)(r) = f(r)+g(r)$, how big can the number $f(r)+g(r)$ be, given that $f(r)\leq M_1$ and $g(r)\leq M_2$?
-How about discussing how small they can be, given that you know there are numbers $N_1$ and $N_2$ such that $f(r)\geq N_1$ and $g(r)\geq N_2$?
-How about the product? That's a bit harder because of the signs, so try working with $|fg|$, $|f|$, and $|g|$. Notice that if $-N\leq f(r)\leq M$, then $0\leq |f(r)| \leq \max\{M,N\}$ (verify this!).
-Number $3$ is a bit trickier... first question: can the sum of two functions, each of which is not bounded, be bounded? Yes: consider for instance $f(x) = x$, which is not bounded, and $g(x) = 1-x$, which is also not bounded. What is $f+g$? Of course, this example does not satisfy that $fg$ is bounded (here, $fg(x) = x-x^2$, which is unbounded). And it's easy to come up with unbounded examples with their product bounded (just make sure that whenever $f$ gets out of control, $g$ "cancels it out").
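-One such pair, with a quick numeric check in Python (a concrete instance of the hint above, chosen for illustration): $f(x)=x$ and $g(x)=1/x$ for $x\neq 0$ with $g(0)=0$ are both unbounded, yet $|fg|\leq 1$ everywhere; the sum $f+g$ stays unbounded, consistent with part c) of the first answer.
-
-def f(x): return x
-def g(x): return 1.0 / x if x != 0 else 0.0
-xs = [10.0 ** k for k in range(-8, 9)]    # sample points spread over many scales
-print(max(abs(f(x) * g(x)) for x in xs))  # 1.0   -> the product is bounded
-print(max(abs(f(x) + g(x)) for x in xs))  # ~1e8  -> the sum is not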
-Perhaps you may want to check that if $ff$ (the product of $f$ with itself) is bounded, then so is $f$, and then consider Theo Buehler's answer.<|endoftext|>
-TITLE: between Borel $\sigma$ algebra and Lebesgue $\sigma$ algebra, are there any other $\sigma$ algebra?
-QUESTION [13 upvotes]: Is there any $\sigma$-algebra that is strictly between the Borel $\sigma$-algebra and the Lebesgue $\sigma$-algebra?
-How about not in between the two, but in general, are there any other $\sigma$ algebra(s)?
-What can be concluded about measure too, e.g. is Lebesgue measure the only measure for the Lebesgue $\sigma$ algebra?
-
-REPLY [8 votes]: Your first question has been answered by Theo Buehler; for your second question, you can always take $\mathcal{P}(\mathbb{R})$ as a $\sigma$-algebra; a nontrivial example is the $\sigma$-algebra of all subsets of $\mathbb{R}$ that are either countable or co-countable (that is, all $X$ for which either $|X|\leq\aleph_0$ or $|\mathbb{R}-X|\leq\aleph_0$; it is straightforward to verify this is a $\sigma$-algebra).
-For your last question: if you place no restrictions whatsoever on the measure, then the answer is trivial: yes, there are other measures. For a trivial example, just scale the Lebesgue measure by a factor different from $1$.
-More generally, given a measurable positive function $f$, define the measure $\mu$ on the Lebesgue measurable sets by $\mu(X) = \int_X f d\lambda = \int f\chi_X d\lambda$, where $\lambda$ is the Lebesgue measure and $\chi_X$ is the characteristic function of $X$. This is a measure:
-
-If $X$ is any measurable set, then $\mu(X) = \int_X f\,d\lambda \geq 0$ because $f(x)\geq 0$ for all $x$, so $\mu$ is nonnegative.
-If $\{E_i\}$ is a countable collection of pairwise disjoint sets, then $$\mu(\cup E_i) = \int_{\cup E_i}f\,d\lambda = \sum_{i=1}^{\infty} \int _{E_i} f\,d\lambda = \sum_{i=1}^{\infty}\mu(E_i)$$
-(with measures and sum possibly infinite); and
-$\mu(\emptyset) = \int_{\emptyset}f\,d\lambda = 0$.
-
-Thus, $\mu$ is a measure on the Lebesgue $\sigma$-algebra; unless $f(x)=1$ for almost all $x$, you have $\mu\neq\lambda$. It may even be a finite measure, if $f\in\mathcal{L}^1$.<|endoftext|>
-TITLE: What is to geometric mean as integration is to arithmetic mean?
-QUESTION [41 upvotes]: The arithmetic mean of $y_1, \ldots, y_n$ is: $$\frac{1}{n}\sum_{i=1}^n~y_i $$
-For a smooth function $f(x)$, we can find the arithmetic mean of $f(x)$ from $x_0$ to $x_1$ by taking $n$ samples and using the above formula. As $n$ tends to infinity, it becomes an integration: $$\frac{\int_{x_0}^{x_1} f(x)~dx}{x_1 - x_0}$$
-On the other hand, the geometric mean of $y_1, \ldots, y_n$ is: $$\left( \prod_{i=1}^n~{y_i}\right)^{1/n}$$
-Similarly, we can find the geometric mean of $f(x)$ by taking $n$ samples.
-Here is my question: As $n$ tends to infinity, what do we call the resultant mathematical object? The geometric integration?
-The geometric mean and the arithmetic mean, along with the quadratic mean (root mean square), the harmonic mean, etc, are special cases of the generalized mean (with $p=0,1,2,-1$, respectively):
-$$\left( \frac{1}{n}\sum_{i=1}^n~y_i^p\right)^{1/p}$$
-Do we have a generalized integration for different values of $p$?
-
-REPLY [23 votes]: There is a term for this (actually, more than one). It's called the product integral or multiplicative integral, and together with the corresponding derivative you get what's called product calculus or multiplicative calculus or non-Newtonian calculus.
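-Concretely, for positive $y_i$ the identity $\left(\prod_{i=1}^n y_i\right)^{1/n} = \exp\left(\frac{1}{n}\sum_{i=1}^n \log y_i\right)$ shows that the limiting object is $\exp\left(\frac{1}{x_1-x_0}\int_{x_0}^{x_1}\log f(x)\,dx\right)$. A small numerical sketch of this limit in Python (with $f(x)=1+x$ on $[0,1]$ chosen just as a toy case, where $\int_0^1 \log(1+x)\,dx = 2\log 2-1$ is known exactly, so the limit is $4/e$):
-
-import math
-f = lambda x: 1 + x
-for n in (10, 1000, 100000):
-    ys = [f((i + 0.5) / n) for i in range(n)]        # midpoint samples on [0, 1]
-    gm = math.exp(sum(math.log(y) for y in ys) / n)  # geometric mean of the samples
-    print(n, gm)
-print("limit:", 4 / math.e)                          # exp(2*log 2 - 1) = 4/e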
In addition, your idea of obtaining different versions of calculus by considering the generalized mean is discussed in this article by H. Vic Dannon. -The idea of product integration goes back at least to Vito Volterra in the late 1800s, and there are various applications of it. The standard reference is probably Dollard and Friedman's text in the Encyclopedia of Mathematics series, Product Integration with Application to Differential Equations. (The description says, "This book shows the beautiful simplifications that can be brought to the theory of differential equations by treating such equations from the product integral viewpoint.") -However, as Rahul Narain points out, it is easy to express the product integral in terms of usual integration. For that reason, some people don't believe it is really anything new. -And for more on the product integral, see one of the references in the Wikipedia page cited or this or this survey paper I wrote a few years ago.<|endoftext|> -TITLE: Mathematical symbol for "and" -QUESTION [12 upvotes]: I have found some pretty complete lists (I think) of mathematical symbols here and here, but I don't see a symbol for the word "and" on either list. A person could easily just write the word "and" or use an ampersand, but I was wondering if there was an actual mathematical symbol for the word "and". Also, if anyone knows any lists that are more complete than the ones I have linked to please provide a link. - -REPLY [6 votes]: The ampersand & is unmistakeable and just about right in semi-formal statements where "and" would be too wordy and a comma would be not very clear. The notation $\land$ is appropriate for formal logic, but isn't used much in general mathematics.<|endoftext|> -TITLE: How to check the veracity of arXiv.org papers? -QUESTION [7 upvotes]: Suppose I found a preprint on arXiv.org (like this one: http://arxiv.org/abs/math/0309146 ), and I need to check if what's in there is actually true. Any tips? I don't want to study an article only to later find that it contained an error <_< -More generally, should I spend my time on arXiv.org preprints? How serious is the danger that I will find myself holding a bunch of false beliefs and unable to publish my own works in any serious journal because I can't reference arXiv.org? -How do I find if an arXiv.org preprint was actually published in a peer-reviewed journal later? What is the average time interval between sending an article for peer review and it being published? If the paper was not published in, like, 5 years, is this a good reason not to trust it? - -REPLY [11 votes]: The simplest way of finding out if an arXiv paper was published is to check the arXiv to see if they list a publishing date; many papers do (those of mine that have been subsequently published certainly do). Absent that, contact the author(s) and ask. (As Willie Wong notes, the paper you refer to has such a listing; that paper has appeared in print already; of course, you'll take into account the quality of the journal if this is the case). -There is risk of a paper being wrong even if it has been published in a peer-reviewed journal, of course. The odds are lower (one hopes) than for papers on the arXiv simply because third parties have taken the time to go over the paper before publication. The same is true of papers in the arXiv, though less formally, so you'll want to check to see if anybody has read the arXiv paper (check in fora such as this, or sci.math.research, or in specialist mailing lists, such as the GroupPub Forum). 
-You may want to check the author's track record; and of course, you'll want to check the paper before you start quoting it (this is true of published material too!).
-As for trusting a paper that has not appeared, there are many possible reasons for a paper not to appear that have nothing to do with its correctness. The results may have been subsumed by other papers (of the same author), or it may turn out that the results were known; or the journals it was submitted to may have considered it "correct-but-not-significant-enough-to-publish", etc.
-The average time between submission of a paper and publication varies wildly. Even assuming that you submit it once, it gets reviewed once and accepted, the time span varies. Time between final acceptance and appearance in print can range from a few months to a couple of years depending on the backlog of the journal (the AMS publishes a yearly list of the backlog of the main journals); time between review and final acceptance depends on the nature of the changes requested and how quickly the authors get to them (anything from a week to several months); time between submissions and initial review can vary as well, though it is considered impolite for a referee to take more than six months for a review. If you end up having to submit a paper to several journals (because the referees deem it "correct-but-not-important-enough", for example) the process can take several years before it finally makes it into print.
-My advice would be: check the author's track record. If there is nothing particularly bad about them (a habit of grandiose announcements that are later taken back, lots of errors, etc), then take the paper with twice as many grains of salt as you would a published paper (I assume you read published papers skeptically as well; if you don't, then you should start), and review the theorems to your satisfaction before "believing them" or quoting them.<|endoftext|>
-TITLE: Convergence of Sets? (Topology on a Powerset of a Set?)
-QUESTION [8 upvotes]: Given a sequence of sets, is there some well-defined notion of a limit of the sequence?
-In other words, given some universe set $U$, I am wondering if there is a topology on $2^U$ (the powerset of $U$) such that the usual intersection and the union limits converge in that topology.
-As an explicit example, let $U=\mathbb{N}$,
-$S_n = \{x\in \mathbb{N} | n< x \le 2n \}$,
- $T_n = \{n\}$.
-The limit of both sequences above should be the empty set by the following argument:
-\begin{align}
-S_n &\subset (n,\infty) \\
-\lim_{n\to\infty} S_n &\subset \lim_{n\to\infty} (n,\infty) = \cap_{n\in\mathbb{N}} (n,\infty) = \emptyset
-\end{align}
-(I'm not sure how to justify passing a set inclusion to the limit.)
-
-REPLY [12 votes]: The natural topology on $2^U$ is the compact-open topology, which here is the product topology. This is precisely the topology of pointwise convergence of indicator functions $U \to 2$. Thus a sequence $S_1, S_2, ...$ of sets converges in this topology if and only if, for every $u \in U$, either all but finitely many $S_i$ contain $u$ (so that $u$ is in the limit set) or all but finitely many $S_i$ do not contain $u$ (so that $u$ is not in the limit set). So both of the sequences you describe have limit the empty set as desired.
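-Here is the pointwise criterion checked directly on the two sequences from the question (a small Python sketch of my own; note $u \in S_n$ means $n < u \le 2n$, i.e. $u/2 \le n < u$, which happens for only finitely many $n$):
-
-N = 500
-for u in range(1, 100):
-    S_hits = [n for n in range(1, N) if n < u <= 2 * n]  # the n with u in S_n
-    T_hits = [n for n in range(1, N) if u == n]          # the n with u in T_n
-    assert all(n < u for n in S_hits) and T_hits == [u]  # finitely many hits in both cases
-print("every point is eventually outside S_n and T_n, so both limits are empty")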
-Equivalently (I think), one can define a sequence of sets to converge if its liminf and limsup (defined in the usual way) converge to the same set.<|endoftext|>
-TITLE: Completeness of BMO without duality to $H^1$
-QUESTION [6 upvotes]: For $f \in L^1_{loc}$, let the sharp-function be defined as
-$f^\sharp(x) := \sup_{B \ni x} |B|^{-1} \int_B |f(y) - |B|^{-1} \int_B f(z) dz | dy$
-We define the space $BMO \subset L^1_{loc}$ via
-$BMO := \{ f \in L^1_{loc} \;\;\;s.t.\;\;\; f^\sharp \in L^{\infty} \}$
-and the BMO-(semi)norm on BMO as
-$\Vert f \Vert_{BMO} = \Vert f^\sharp \Vert_{\infty}$
-The completeness of BMO can be inferred from the fact that it is exactly the dual space of the Hardy space $H^1$ with operator norm.
-Is there a more direct way to show completeness of BMO?
-
-REPLY [7 votes]: Yes.
-We have the following theorem:
-Let $f \in \text{BMO}(\mathbb R^n)$. Then for any $\epsilon > 0$ there exists a ball $B$ with center $x_0$ and radius $R$ such that
-$$R^\epsilon \int_{\mathbb R^n} \frac{|f(x) - \text{Avg}_B f|}{(R + |x - x_0|)^{n + \epsilon}} \, dx \leq C \|f\|_\text{BMO}$$
-where $C$ only depends on $n$ and $\epsilon$. (this is just estimating stuff)
-So from this theorem we can deduce that any $\text{BMO}$-Cauchy sequence is Cauchy in $L^1$ on every compact set. From this we can deduce that every Cauchy sequence converges.
-Edit: (A slight extension) My apologies for the two year delay. First remark that the $\text{BMO}$ norm is actually a seminorm, that is, $\|f\| = 0$ does not only occur when $f = 0$ but actually for all constants. This should be obvious, otherwise you better go check out a book on integration theory. Let us define an equivalence relation on this space to the end of making it a true normed space. That is, let $f \sim g$ if and only if $f - g$ is a constant. If we now form the quotient space $\text{BMO}/\{1\}$, this is a normed space and the map that sends every function to its equivalence class is continuous. Also, we have a norm on this space, $\|[f]\| = \inf_{c \in \mathbf{R}} \|f + c\|$. If we wanted to show completeness directly, we would have to cook up a function for a Cauchy sequence to converge to. That sucks, so let us not do that but use a corollary.
-
-Theorem: A normed space $X$ is a Banach space if and only if every sequence $(x_n)$ in $X$ for which $$\sum_n \|x_n\|$$ converges has $$\sum_n x_n$$ convergent in the norm.
-
-Cool. Let us use this. First of all, remark that the induced norm gives for all non-constant functions the normal $\text{BMO}$ seminorm and for the constants just the constant. As constant sequences are pretty boring and certainly convergent, we may as well not care about sequences that are "eventually constant". So let us consider the functions $g_n = f_n - \langle f_n \rangle$ as the important ones. This is cool, right? Take a sequence $(g_n)$ of such functions such that the above sum converges. Now let $B$ be a closed ball with radius $R$ and center $x_0$. Then we have
-$$R^\epsilon \int_{B} \frac{|g_n(x)|}{(R + |x - x_0|)^{n + \epsilon}} \, dx \lesssim \|g_n [B]\|_\text{BMO} \lesssim \|g_n\|_\text{BMO}.$$
-This means that for all $\epsilon > 0$ there exist $R > 0$ and $x_0$ such that
-$$R^\epsilon \sum_n \int_{B} \frac{|g_n(x)|}{(R + |x - x_0|)^{n + \epsilon}} \, dx$$
-converges. Now note that $R \leqslant R + |x - x_0| \leqslant 2R$ on $B$; hence we get that
-$$\frac1{R^d} \sum_n \|g_n\|_{L^1(B)}$$
-converges, and so the series $\sum_n g_n$ converges in $L^1(B)$ to a function $g_B$.
Now you might be afraid that these $g_B$'s might be too different for different $B$'s, but note that if you enlarge the $B$, the "$g$" should at least coincide on the intersection of the balls.
-Remark: I am too tired now, I will edit more later on. But now the idea would be to use this limit for each $B_k$ and pick a nice sequence of $B_k$'s. If the BMO sequence were to converge, one would expect that the limit should be the same as the $L^1$ version (were it restricted). My idea would be to decompose $\mathbf{R}^d$ into a sequence of shells. You start with $B$ and blow it up to twice the size. Do this again with the new ball and so on. Then make this a disjoint sequence and build your $g$ from a sum of indicators of these shells multiplied by the functions you get from the $L^1(B_n)$. Show this is in BMO and that it is the limit you want. Cheers.<|endoftext|>
-TITLE: $C^\infty$ vs. $C^\omega$ surfaces
-QUESTION [10 upvotes]: I would appreciate it if someone could explain the difference(s) between a
-$C^\infty$ and a $C^\omega$ surface embedded in $\mathbb{R}^3$.
-I ran across these terms in M. Berger's Geometry Revealed book (p.387).
-The context is: There are examples of two different $C^\infty$ compact surfaces
-that are isometric, but no known examples for "two real analytic (class $C^\omega$)"
-surfaces which are isometric. Thanks!
-Clarification. Thanks to Mariano and Willie for trying to help---I appreciate that!
-It is difficult to be clear when you are confused :-).
-Let me try two more specific questions: (1) Where does the $\omega$ enter into
-the definition of $C^\omega$? Presumably $\omega$ is the first infinite ordinal.
-(2) What I'm really after is the geometric "shape differences" between $C^\infty$ and $C^\omega$. The non-analytic but smooth functions I know smoothly join, say, an exponential to a straight line, but geometrically they look just like smooth functions. I guess I don't understand what the constraints of real-analyticity imply geometrically. Maybe that's why this isometric question Berger mentioned is unsolved?!
-Addendum. Here are Ryan's two functions:
-
-Left: $C^\infty$ but not $C^\omega$. Right: $C^\omega$.
-
-REPLY [16 votes]: To Clarification (1): I think $\omega$ is there just to indicate that the smoothness is "more than $C^\infty$", and I don't think it has any substantial connection to the first infinite ordinal.
-(2): Analytic surfaces are much more rigid than smooth surfaces. For example, you can choose any open set on a smooth surface and do surgery so that only the chosen set is affected. For analytic surfaces you knock on a point, you disturb the entire surface. Smooth surfaces are made of sheet metal, while analytic surfaces are ceramic. I reserve rubber for topological surfaces.<|endoftext|>
-TITLE: Homology of the Sphere
-QUESTION [6 upvotes]: In my algebraic topology class we showed how to calculate the homology groups of $S^n$, using the tools of singular homology; however, we did not discuss other ways of doing it. My question - is there any relatively simple way of doing this, using simplicial homology? I tried thinking about this for a bit, but couldn't see any obvious direction.
-Thanks.
-
-REPLY [7 votes]: Using simplicial homology you can triangulate the sphere as the boundary of an $(n+1)$-simplex, and work out the chain complex by hand.
-With cellular homology it is even easier since $S^n$ is the union of an $n$-cell and a $0$-cell. The chain complex has a single $\mathbb Z$ in degree $0$ and a single $\mathbb Z$ in degree $n$.
In all other degrees it is zero.
-
-REPLY [4 votes]: Another rather quick way to compute the groups is with cellular homology. Here the $n$th chain group is generated by the $n$-cells of your CW-complex. The boundary map has to do with degree maps; but in your case it is simple. An $n$-sphere is a 0-cell with an $n$-cell attached by mapping the boundary $S^{n-1}$ to the 0-cell. If $n>1$ then all the maps in the chain complex must be 0 because the chain groups are trivial except for the $n$-chains and 0-chains. The homology of such a complex is easy to compute.<|endoftext|>
-TITLE: Is this a new formula?
-QUESTION [7 upvotes]: It's late at night and I'm tired, but I just stumbled across this while doing my homework. Any chance this is new? Or, maybe, did I just somehow transform it and it is still basically the same formula? In that case, forgive me, please.
-Anyway, here it is. It is a recursive function (thus, of limited use!?) to calculate the sum of factorials:
-$f(n) = \sum\limits_{i=1}^n i! = \frac{(n+1)!}{n}+f(n-2)$
-with
-$f(0) = 0$
-and
-$f(-1) = -1$
-Is this useful at all or did I just waste my time? :)
-
-REPLY [15 votes]: The formula certainly is true. Re-arranging what you wrote you get
-$$ f(n) - f(n-2) = n! + (n-1)! = n(n-1)! + (n-1)! $$
-$$ = (n+1)\times (n-1)! = \frac{(n+1)\times n\times (n-1)!}{n} = \frac{(n+1)!}{n} $$
-but I don't think it is particularly more useful than, say, the observation
-$$ f(n) = n! + f(n-1) $$
-
-The definition of your function as a sum means that it naturally can be described recursively. I don't think the formulation you gave offers any particular advantage over the summation formula, unfortunately.<|endoftext|>
-TITLE: Every subset of $\mathbb{R}$ with finite measure is the disjoint union of a finite number of measurable sets
-QUESTION [14 upvotes]: I'm trying to prove that if $E \subset \mathbb{R}$ has finite measure and $\varepsilon \in \mathbb{R}$ such that $\varepsilon>0$, then $E$ is the disjoint union of a finite number of measurable sets, each of which has measure at most $\varepsilon$.
-Here is what I've done:
-
-Let $\varepsilon \in \mathbb{R}$ such that $\varepsilon>0$. Since $m^{*}(E) < +\infty$, there exists a countable collection $\{I_{k}\}_{k \in \mathbb{Z}^{+}}$ of open and bounded intervals (measurable sets) covering $E$ and such that $\sum \limits_{k=1}^{\infty } \ell ( I_{k}) < m^{*}(E) + \varepsilon$.<|endoftext|>
-TITLE: Intersection of powers of an ideal in a Noetherian ring
-QUESTION [11 upvotes]: Given a Noetherian ring $R$ and a proper ideal $I$ of it, is it true that $$\bigcap_{n\ge 1} I^n=0$$ as $n$ varies over all natural numbers?
-
-If not, is it true if $I$ is a maximal ideal? If not, is it true if $I$ is the maximal ideal of a local ring $R$? If not, is it true under additional assumptions on $R$ (like $R$ is regular)?
-
-REPLY [12 votes]: See the Section on the Krull Intersection Theorem (currently Section 8.12) in these notes.
-A version of the theorem valid for any ideal $I$ in a Noetherian ring $R$ is as follows: if there exists $x \in \bigcap_{n=1}^{\infty} I^n$, then $x \in xI$. From this one easily deduces that $\bigcap_{n=1}^{\infty} I^n = \{0\}$ under either of the following additional hypotheses:
-$\bullet$ $R$ is a domain and $I$ is a proper ideal, or
-$\bullet$ $I$ is contained in the Jacobson radical $J(R)$ of $R$ (i.e., the intersection of all maximal ideals).
-In particular the second condition holds for any proper ideal in a Noetherian local ring.
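-(One way to see the easy deduction, sketched out: if $x \in xI$, write $x = xa$ with $a \in I$, so that
-$$x(1-a)=0.$$
-If $R$ is a domain and $I$ is proper, then $a \neq 1$, hence $1-a \neq 0$ and $x = 0$. If instead $I \subseteq J(R)$, then $1-a$ is a unit (one of the standard characterizations of the Jacobson radical), so again $x = 0$.)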
-As Mariano remarks, some hypothesis beyond Noetherianity is needed in order to guarantee $\bigcap_n I^n = \{0\}$. I should probably add his counterexample to my notes!<|endoftext|>
-TITLE: Continuous complex function (2 variables)
-QUESTION [5 upvotes]: Let $f$ be analytic on the open set $G\subset \mathbb{C}$. What's the best way of showing that the function $\phi:G \times G \to \mathbb{C}$ defined by $\phi(z,w)=[f(z)-f(w)]/(z-w)$ for $z \neq w$ and $\phi(z,z)=f'(z)$ is continuous?
-This is obvious for $z\neq w$, but I'm having some trouble on the points $(z,z)$. I did it one way but I'm not convinced this is good.
-Edit: Rudin's Proof (for the continuity on the diagonal)
-Let $z_0\in G$ and $\epsilon>0$. Since $f$ is analytic, there exists $r>0$ such that $B(z_0,r)\subset G$ and $|f'(\zeta)-f'(z_0)|<\epsilon$ for all $\zeta \in B(z_0,r)$. Taking $z,w\in B(z_0,r)$, then $\zeta(t)=(1-t)z+tw \in B(z_0,r)$ for $0\le t\le 1$. Now just use that $\phi(z,w)-\phi(z_0,z_0)=\int_0^1[f'(\zeta(t))-f'(z_0)]dt$ and the $\epsilon$-bound to get the desired continuity.
-
-REPLY [4 votes]: I just want to note that existence of the limit of $\phi$ at $(z,z)$ is the definition of the strong (or strict) differentiability of $f$ at $z$. It is a key strengthened concept of differentiability, which already makes the inverse, and thus the implicit function theorem work. It implies that $f'$ is continuous at $z$, and if $f'$ exists on a neighbourhood of $z$ the reverse is also true, which also answers the question. This is a consequence of the mean value inequality:
-$$|f(u)-f(w)-f'(z)(u-w)|\le\sup_{0<t<1}\left|f'\big(w+t(u-w)\big)-f'(z)\right|\,|u-w|$$<|endoftext|>
-TITLE: 1-dimensional solution space of homogeneous system Ax=0?
-QUESTION [6 upvotes]: Given is an almost-square matrix $A$ with $n$ columns and $n-1$ rows with maximum rank. The solutions of the homogeneous system $Ax = 0$ form a 1-dimensional subspace of $\mathbb{R}^n$.
-I've discovered the following which I believe to be true but I can't prove: the components of the vector $x$ that spans the (1D) solution space are given by:
-$x_i = (-1)^{i-1} |A_i|$
-in which $|A_i|$ is the determinant of the square submatrix of $A$ obtained by removing the $i$-th column from $A$. For example, in $\mathbb{R^3}$, $A$ is a $2\times 3$ matrix, and $x$ as defined above turns out to be the cross product of the two row vectors of $A$.
-Is this true, and if so, how can it be proved?
-
-REPLY [5 votes]: You only need to show that the $x$ you defined is orthogonal to every row of $A$. You can see that's the case by using the row-expansion of the determinant: given any row $j$ of $A$, consider the $n \times n$ matrix obtained from $A$ by duplicating that row. This new square matrix $A'$ has determinant $0$, and you can compute its determinant by expanding along the newly created duplicate row: $$ 0={\rm det} (A') = \sum_i (-1)^{j-1} (-1)^i A_{ji} |A_i | = (-1)^{j-1} \sum_i A_{ji} x_i$$<|endoftext|>
-TITLE: Intuitive explanation of Nakayama's Lemma
-QUESTION [34 upvotes]: Nakayama's lemma states that given a finitely generated $A$-module $M$, and $J(A)$ the Jacobson radical of $A$, with $I\subseteq J(A)$ some ideal, then if $IM=M$, we have $M=0$.
-I've read the proof, and while being relatively simple, it doesn't give much insight on why this lemma should be true; for example, is there some way to see how the fact that $J(A)$ is the intersection of all maximal ideals is related to the result?
-Any intuition on the conditions and the result would be of great help.
-
-REPLY [2 votes]: There is a more primitive and natural version of Nakayama's lemma:
-
-Let $R$ be a commutative ring, and $M$ be a finitely generated $R$-module and $I\subset R$ be an ideal. If $IM=M$ then $\exists f\in I,\forall m\in M,f\cdot m=m$.
-
-The proof is a bit harder than the above version, but this pretty much says "If $IM=M$ then one of the elements of $I$ stabilizes all elements of $M$." And I think this is much more natural than the above version. Assuming this version, then we can see that the condition of "Jacobson ideal" is only used to show that "$1-f$ is a unit" which leads to "$M=0$".
-For completion, I give a proof of the above version here:
-Proof: Let $m_1,...,m_n$ be a set of generators of $M$. Since $IM=M$, we have a matrix $A'=(a_{ij})\in \textrm{M}_{n\times n}(R)$ such that $A'\underline{m}=\underline{m}$ where $\underline{m}=(m_1,...,m_n)^t$ and $a_{ij}\in I,\forall i,j\leq n$. Now let $A=\textrm{Id}-A'$ so we have $A\underline{m}=\underline{0}$.
-Note that $\det A\in R$ is well-defined and there exists a matrix $B\in \textrm{M}_{n\times n}(R)$ s.t. $BA=AB=\det A\cdot \textrm{Id}$; the proof is in Corollary 9.161 of Advanced Modern Algebra by Joseph J. Rotman. Also $\det$ commutes with the quotient map $R\rightarrow R/I$, so with $A\equiv \textrm{Id} \ (\textrm{mod }I)$ we can deduce that $\det A\in 1+I$, so $\exists f\in I$ s.t. $\det A=1-f$.
-To show $f$ stabilizes $M$ it suffices to show that $f\cdot m_i=m_i,\forall i$.
-$$\underline{0}=BA\underline{m}=\det A\cdot \underline{m}=(1-f)\cdot \underline{m}$$
-So
-$$f\cdot \underline{m}=\underline{m}$$
-The result follows. Q.E.D<|endoftext|>
-TITLE: Projective modules over a semi-local ring
-QUESTION [6 upvotes]: I need a little bit of help: I found this theorem, but the book doesn't prove it and gives a reference to another book that I don't have; does anyone have an idea?
-
-Let $R$ be a semi-local ring, and $M$ a finite projective $R$-module. Show that $M$ is free if the localizations $M_m$ have the same rank for all maximal ideals $m$ of $R$.
-
-REPLY [3 votes]: See Lemma 1.4.4 in Bruns and Herzog, Cohen-Macaulay Rings.<|endoftext|>
-TITLE: Why is cofiniteness included in the definition of direct sum of submodules?
-QUESTION [10 upvotes]: In contrast to the possibility of taking an arbitrary sequence of elements of submodules in the definition of direct product, the definition of the direct sum of submodules of a module requires the indexed elements to vanish cofinitely (i.e. at all but finitely many indices).
-More precisely, let $R$ be a ring, and $\{M_i : i ∈ I\}$ a family of left $R$-modules indexed by the set $I$. The direct sum of $\{M_i\}$ is then defined to be the set of all sequences $(α_i)$ where $\alpha_i \in M_i$ and $α_i = 0$ for cofinitely many indices $i$. (The direct product is analogous, but the indices do not need to cofinitely vanish.) (Source: Wikipedia, Direct sum of modules.) We have a similar definition for the sum of submodules.
-I have not yet understood what pathology would be incurred without assuming cofiniteness, or what simplification (if any) is gained by assuming it. Can you please explain this in simple terms (possibly, with examples)?
-Thanks.
-
-REPLY [8 votes]: To see an example of some pathology, while the direct sum of free modules is always free (with the obvious free basis), the direct product of free modules may fail to be free.
-Take $R=\mathbb{Z}$; then $M = \mathop{\oplus}\limits_{n=1}^{\infty}\mathbb{Z}$ is free abelian, with basis given by the "obvious" elements $e_i$ (which have a $1$ in the $i$th coordinate and zeros elsewhere). However, $N = \prod\limits_{n=1}^{\infty} \mathbb{Z}$ is not free abelian. For example, Specker proved (Additive Gruppen von Folgen ganzer Zahlen, Portugaliae Math. 9 (1950) 131-140) that $N$ has only countably many homomorphisms onto $\mathbb{Z}$. But since $N$ is uncountable, if it were free it would be free on uncountably many generators, and hence would have uncountably many homomorphisms onto $\mathbb{Z}$ (at least the projections). In fact, if $X$ is any infinite set, then $\mathop{\oplus}_{x\in X}\mathbb{Z}$ is free abelian of rank $|X|$, but $\prod_{x\in X}\mathbb{Z}$ is never free abelian.
-You have a related pathology with vector spaces (of course, every vector space is free, so that's not what the problem will be, but rather when you think about "free on what set?"). When working with finitely many vector spaces, you have that $\dim(V_1\times V_2) = \dim(V_1\oplus V_2) = \dim(V_1)+\dim(V_2)$ (in the sense of sum of cardinalities). However, once you have infinitely many vector spaces, the equality breaks down for the product, while it holds for the direct sum:
-$$\dim\left(\bigoplus_{i=1}^{\infty} V_i\right) = \sum_{i=1}^{\infty}\dim(V_i)$$
-but for products it need not hold: for a counterexample, take $V_i = \mathbb{Q}$ as a vector space over itself; the sum of dimensions is $\aleph_0$, but the direct product of denumerably many copies of $\mathbb{Q}$ is uncountable, so the dimension is $2^{\aleph_0}$ (so you have a "jump" in the dimension once you get to infinitely many elements).<|endoftext|>
-TITLE: Understanding the properties and use of the Laplacian matrix (and its norm)
-QUESTION [25 upvotes]: I am reading the wikipedia article on the Laplacian matrix:
-https://en.wikipedia.org/wiki/Laplacian_matrix
-I don't understand what the particular use of it is: why have the degrees on the diagonal, and why the negated adjacency entries off the diagonal? What use would this have?
-Then, on reading about its norm: first of all, what does a norm really mean? And what does the norm of the Laplacian matrix deliver? This norm does not result in a matrix whose terms cancel out or sum to one, nor is the determinant equal to any consistent value. Any insight?
-Best,
-
-REPLY [24 votes]: The Laplacian is a discrete analogue of the Laplacian $\sum \frac{\partial^2 f}{\partial x_i^2}$ in multivariable calculus, and it serves a similar purpose: it measures to what extent a function differs at a point from its values at nearby points. The Laplacian appears in the analysis of random walks and electrical networks on a graph (the standard reference here being Doyle and Snell), and so it is not surprising that it encodes some of its structural properties: as I described in this blog post, it can be used to set up three differential equations on a graph (the wave equation, the heat equation, and the Schrödinger equation).
-(To be totally clear, when you're using this interpretation you should think of the Laplacian, not as a matrix, but as an operator acting on functions $f : V \to \mathbb{R}$.
In this setting there is a discrete notion of gradient (which sends a function $f$ to a function $\text{grad } f : E \to \mathbb{R}$) and a discrete notion of divergence (which sends a function $g : E \to \mathbb{R}$ to a function $\text{div } g : V \to \mathbb{R}$), and the divergence of the gradient is the Laplacian - just like in the infinitary case. So the Laplacian defines a certain analogy between graphs and Riemannian manifolds.)
-The quadratic form defined by the Laplacian appears, for example, as the power running through a circuit with given voltages at each point and unit resistances on each edge. It is the discrete analogue of the Dirichlet energy.
-The Laplacian appears in the matrix-tree theorem: the determinant of the Laplacian (with a bit removed) counts the number of spanning trees. This is related to its appearance in the study of electrical networks and is still totally mysterious to me. The group $\mathbb{Z}^n / L$ where $L$ is the Laplacian has rank $1$, and its torsion subgroup is the critical group of the graph, which has size the number of spanning trees. The critical group appears in the description of chip-firing games on the graph (another name for this is the abelian sandpile model), and is an interesting invariant of graphs.
-There is some evidence that finite graphs are analogous to curves over finite fields, and in this analogy the critical group appears to be analogous to the ideal class group (that is, the Jacobian). Its size even appears in a class number formula for graphs coming from the Ihara zeta function (the analogue of the Dedekind zeta function). Again, all of this is totally mysterious to me.
-Here is a nice survey paper by Mohar on what graph theorists actually use the Laplacian for. In the literature there are several different normalizations; they correspond to either using a different preferred basis for the space of functions $f : V \to \mathbb{R}$ or varying physical properties of the graph (e.g. changing resistances, adding a potential).<|endoftext|>
-TITLE: Primitive integer solutions to $2x^2+xy+3y^2=z^3$?
-QUESTION [6 upvotes]: The class number of $\mathbb{Q}(\sqrt{-23})$ is $3$, and the form
-$$2x^2 + xy + 3y^2 = z^3$$
-is one of the two reduced non-principal forms with discriminant $-23$.
-There are the obvious non-primitive solutions $(a(2a^2+ab+3b^2),b(2a^2+ab+3b^2), (2a^2+ab+3b^2))$.
-I'm pretty sure there aren't any primitive solutions, but can't seem to find an easy argument. Are there?
-In general, is it possible for a non-principal form to represent an $h$-th power primitively (where $h$ is the class number of the associated quadratic field)?
-
-[EDIT]
-I think I've solved the first question and have posted an answer below.
-Since the proof is very technical I don't see how it can generalize to the greater question above ($h$-th powers).
-
-REPLY [3 votes]: If $4|y$, then $2|z$, which in turn, by reducing mod $4$, $2|x$, contradicting primitivity. Multiply by $2$ and factor the equation over $\mathbb{Q}(\sqrt{-23})$:
-$$(\frac{4x+y+\sqrt{-23}y}{2})(\frac{4x+y-\sqrt{-23}y}{2}) = 2z^3$$
-Note that both fractions are integral. The gcd of the two factors divides $\sqrt{-23}y$ and $4x+y$. If $23|4x+y$ then $23|z$, and reducing the original equation modulo $23^2$ we see that $23|y$, hence also $23|x$, contradicting primitivity. So the gcd divides $y$ and $4x+y$, and by the above argument, it then must be either $1$ or $2$ according to whether $y$ is odd or even, respectively.
-First, assume that $y$ is odd. So the gcd is $1$.
Thus, for some ideal $I$:
-$$(2x+y\frac{1+\sqrt{-23}}{2})=(2,\frac{1+\sqrt{-23}}{2})I^3$$
-This implies that $(2,\frac{1+\sqrt{-23}}{2})$ is principal, leading to a contradiction. Now assume that $y$ is even, and that $x$ is therefore odd and $z$ is even. Put $y=2u$, $z=2v$, $u$ odd, so that:
-$$(2x+u+\sqrt{-23}u)(2x+u-\sqrt{-23}u)=16v^3$$
-Both factors are divisible by $2$, so that:
-$$(x+u\frac{1+\sqrt{-23}}{2})(x+u\frac{1-\sqrt{-23}}{2})=4v^3$$
-As before, the gcd is 1, and since $x+u\frac{1+\sqrt{-23}}{2}=x+u+u\frac{-1+\sqrt{-23}}{2} \in (2, \frac{-1+\sqrt{-23}}{2})$ we must have for some ideal $I$:
-$$(x+u\frac{1+\sqrt{-23}}{2}) = (2, \frac{-1+\sqrt{-23}}{2})^2I^3$$
-Contradicting that $(2, \frac{-1+\sqrt{-23}}{2})^2$ is non-principal (the ideal above $2$ appears squared since the product of factors is divisible by $4$). We are done!

-It actually seems that the above can be generalised, but I have to use a major theorem, which seems like it might be a bit of an overkill. Without further ado:
-Let $aX^2+bXY+cY^2$ be a primitive non-principal quadratic form of discriminant $\Delta=b^2-4ac$, and $h$ be the class number of the associated quadratic field. Assume there is a solution $x,y,z$:
-$$ax^2+bxy+cy^2=z^h$$
-Recalling the Chebotarev Density Theorem (OVERKILL), there is an equivalent form with $a$ an odd prime that doesn't divide $\Delta$, and since the invertible change of variables preserves primitivity, we reduce to this case.
-Multiplying by $a$ and factoring over $\mathbb{Q}(\sqrt{\Delta})$:
-$$(ax+\frac{b+\sqrt{\Delta}}{2}y)(ax+\frac{b-\sqrt{\Delta}}{2}y)=az^h$$
-The gcd of the factors divides $(2ax+by,\sqrt{\Delta}y)$. Say $\Delta |2ax+by$, then since
-$$(2ax+by)^2-\Delta y^2=4az^h$$
-we must have $\Delta |z$, so $\Delta |y$, and finally $\Delta |x$ (unless $\Delta=\pm 2$, which is impossible), contradicting primitivity.
-Hence the gcd divides $(2a,y)$.
-1) gcd$=1$:
-$$(ax+\frac{b+\sqrt{\Delta}}{2}y) = (a,\frac{b+\sqrt{\Delta}}{2})I^h$$
-contradicting that the form is non-principal.
-2) gcd$=2$ or $2a$: then $y$ is even, so $z$ is too, and since $a$ is odd, $x$ must also be even, contradicting primitivity.
-3) gcd$=a$: thus $a|z$. If $a^2|y$, then reducing modulo $a$ we see that $a|x$. So $a||y$. Dividing this gcd out of the two factors, we must have:
-$$(x+\frac{b+\sqrt{\Delta}}{2}\frac{y}{a}) = (a,\frac{b-\sqrt{\Delta}}{2})^{h-1}I^h$$
-for some ideal $I$. Note the particular ideal above $a$, which appears since its conjugate cannot appear, as $x$ isn't divisible by $a$. Since $(a,\frac{b-\sqrt{\Delta}}{2})^{h-1} \sim (a,\frac{b+\sqrt{\Delta}}{2})$, this contradicts that the form is non-principal.
-We are done!
-There must be a way to avoid applying a major theorem such as Chebotarev... Please tell me if you have an idea how :D<|endoftext|>
-TITLE: What about $GL(n,\mathbb C)$? Is it open, dense in $M(n,\mathbb C)$?
-QUESTION [18 upvotes]: What about $GL(n,\mathbb C)$? Is it open, dense in $M(n,\mathbb C)$?

-REPLY [28 votes]: It seems like we agree that the openness of $\operatorname{GL}_n(\mathbb{C})$ is most easily proved by observing that it is the preimage of the open set $\mathbb{C} \setminus \{0\}$ under the continuous function $\operatorname{det}: M_n(\mathbb{C}) \rightarrow \mathbb{C}$.
-We are giving different answers for the density, and I have not yet seen the simple linear algebra answer I was expecting to see.
-Here it is: let $M \in M_n(\mathbb{C})$. 
It is sufficient to show that there exists $\epsilon > 0$ such that for all nonzero complex numbers $t$ with $|t| < \epsilon$, the matrix
-$M + t I_n$ is nonsingular (because then $M$ is a limit point of this set of nonsingular matrices). But viewing $t$ as an indeterminate, the determinant of $M+t I_n$ is nothing else than the characteristic polynomial $P(t)$ of $-M$. Being a monic polynomial of degree $n$, it has $n$ roots $\alpha_1,\ldots,\alpha_n$ (possibly with multiplicity), and we may take $\epsilon = \min_{i \ | \ \alpha_i \neq 0} |\alpha_i|$ (if every $\alpha_i$ is zero, any $\epsilon > 0$ works).
-Note that this argument works verbatim for matrices over $\mathbb{R}$ or any nondiscrete normed field. In fact, for an arbitrary field $k$, if one works instead with the Zariski topology on $M_n(k)$, then what we really need is for any cofinite subset of $k$ (i.e., the complement in $k$ of some finite set) to be dense in the Zariski topology. This is true iff $k$ is infinite. If $k$ is finite, the Zariski topology on $M_n(k)$ is discrete, and the result is false.
-Added: A fun application of this "linear algebraic perturbation perspective" is to show that $\operatorname{GL}_n(\mathbb{C})$ is connected (whereas $\operatorname{GL}_n(\mathbb{R})$ is not). Hint: by general nonsense, it is equivalent to show path-connectedness, so try that instead. If I am trying to find a path from point $A$ to point $B$ in the complex plane, requiring that my path avoid a finite set of points is not nearly as problematic as it would be on the real line!<|endoftext|>
-TITLE: Notation for surreal numbers
-QUESTION [5 upvotes]: At the risk of sounding ridiculous, but in the spirit of "There are no stupid questions": Is there a way to express $\omega_1$ (and in general $\omega_k$ with $k \ge 1$) as a Conway game (that is, $\langle L \mid R \rangle$, with $L$ and $R$ the left and right options)? And is there a way to do it such that addition, multiplication, etc. make sense?
-I can express $\omega_1 = < {i}_{i\in\mathbb{R}} | >$, which is essentially the same as $< f: \mathbb{R} \to \mathbb{R} |>$ where $f$ is increasing (${i}_{i\in\mathbb{R}} = f(i)$), which isn't exactly a Conway game notation. I think this also borders on an idea of Gonshor, to express the surreals as maps from initial segments of ordinals to a two-element set. In general, I have replaced the sequence $<0,1,2, ...|>$ from one of the forms of $\omega_0$ by an increasing "sequence" where the index is a positive real number.
-Questions are:

-The above question(s)?
-Is my idea correct or at least in the right direction?
-Are there any other ways to do this? (Of course this is an open question if my idea is wrong in the first place.)

-And yes, I realize my question is a little bit broad, and I have no idea which model one should work in (ZF, ZFC, NBG, etc.), nor do I know how the answer varies with the choice of the model.

-REPLY [7 votes]: Every ordinal number $\alpha$ has a canonical copy as a surreal number $\hat\alpha$, whose left set has as members $\hat\beta$ for every $\beta\lt\alpha$, and whose right set is empty. Succinctly,
-$$\hat\alpha=\{\ \{\hat\beta\mathrel{:} \beta\lt\alpha\ \}\mathrel{|} \}.$$
-One can prove by induction that $\alpha\mapsto\hat\alpha$ is an isomorphism of the ordinals, with their usual order and arithmetic, onto the corresponding suborder of the surreals.
-For example, for the successor case, $\alpha+1$ is the next ordinal after $\alpha$, and $\widehat{\alpha+1}=\{\hat\alpha\mid\}$ is the next surreal after $\hat\alpha$, or $\hat\alpha+1$ in the surreals. 
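-For instance, unwinding this definition for the first few ordinals (and for $\omega$ itself) gives
-$$\hat 0=\{\mid\},\qquad \hat 1=\{\hat 0\mid\},\qquad \hat 2=\{\hat 0,\hat 1\mid\},\qquad \hat\omega=\{\hat 0,\hat 1,\hat 2,\ldots\mid\},$$
-so the canonical copy of an ordinal always has an empty right set.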
-In particular, $\omega_1$ would be represented by the surreal number $\hat\omega_1$, which I think is very different from the surreal numbers that you propose.

-REPLY [4 votes]: I should note that your expression for $\omega_1$ isn't actually correct; in part because the cardinality of the reals isn't necessarily $\aleph_1$ (that the two are equal is Cantor's Continuum Hypothesis, which of course Paul Cohen proved independent of ZF), but mostly because the surreal number you've given — $\langle i_{i\in\mathbb{R}}|\rangle$ — is actually equal to $\omega\stackrel{\mathrm{def}}{=}\omega_0$ (exercise: show that whoever plays first in $\langle i_{i\in\mathbb{R}}|\rangle - \omega$ loses). As JDH notes above, the standard way of defining the ordinals copies over into the surreals to give surreal definitions of $\omega_1$, $\omega_2,$ etc; On Numbers And Games even mentions this in passing, IIRC. That said, AFAIK the canonical arithmetic operations ($\sqrt{}$, etc) are relatively boring on ordinals of higher cardinality; in effect they suffer from (as I understand it - this isn't really my field!) predicativity issues. You might want to have a look at the Wikipedia page on (an) 'Ordinal collapsing function' - there's a pretty close tie between surreal definitions of various ordinal-related 'numbers' and the canonical ordinal notations discussed there.<|endoftext|>
-TITLE: Graded modules over $k[t,t^{-1}]$
-QUESTION [6 upvotes]: If $R=k[t,t^{-1}]$ is a graded ring where $R_0=k$ is a field and $t\in R$ is a homogeneous element of positive degree which is transcendental over $k$, how can I prove that every graded $R$-module is free?

-REPLY [5 votes]: A low-tech way:
-Let $M$ be a graded $k[t^{\pm1}]$-module, so that in particular $M=\bigoplus_{n\in\mathbb Z}M_n$ as a vector space. For each $n\in\mathbb Z$ the map $M_n\to M_{n+1}$ given by multiplication by $t$ is a linear bijection, with inverse given by multiplication by $t^{-1}$, of course. It follows that for all $n\in\mathbb Z$ we have $M_n=t^nM_0$. Moreover, it is easy to see that the map $k[t^{\pm1}]\otimes_k M_0\to M$ obtained by restricting the multiplication map $k[t^{\pm1}]\otimes_k M\to M$ to the subspace $k[t^{\pm1}]\otimes_k M_0$ is in fact an isomorphism of $k[t^{\pm1}]$-modules, if we see its domain as a $k[t^{\pm1}]$-module in the obvious way. But this obvious module is free: any basis of $M_0$ gives a basis.
-N.B.: This is, I guess, what the high-tech proof amounts to in this concrete situation...

-REPLY [4 votes]: One way:
-A graded $H=k[t^{\pm1}]$-module $M$ is the same thing as a Hopf $H$-module (so it is an $H$-module and an $H$-comodule, and the two structures are compatible). The Fundamental Theorem of Hopf modules tells you, then, that such a beast is free as an $H$-module.
-The best reference for this is, in my opinion, Sweedler's book [Sweedler, Moss E. Hopf algebras. Mathematics Lecture Note Series W. A. Benjamin, Inc., New York 1969 vii+336 pp. MR0252485]<|endoftext|>
-TITLE: Is there a function with this property?
-QUESTION [25 upvotes]: Is there a real function over the real numbers with this property $\ \sqrt{|x-y|} \leq |f(x)-f(y)|$ ?
-My guess is no but can anyone tell me why?
-This came up as a question from one of my colleagues and I can't give an answer.

-REPLY [3 votes]: Whenever $f(x_0)$ and $f(x_1)$ fall in an interval $[a, b]$, we must have $$|x_0 - x_1| \le (f(x_0) - f(x_1))^2 \le (a - b)^{2}.$$ Therefore, the pre-image of any interval $[a,b]$ under $f$ is contained in some interval of length $(a-b)^2$. 
Now let $a_0=0$ and $a_k=\sum_{n=1}^{k}n^{-1}$ for $k=1,2,3,\ldots$ Since $a_k$ diverges, we have $\cup_k [a_k, a_{k+1}] = [0, \infty)$. So we can write
-$$
-\begin{eqnarray}
-f^{-1}\left([0, \infty)\right) &=& f^{-1}\left(\cup_k [a_k, a_{k+1}]\right) \\
-&=& \cup_k f^{-1}\left([a_k, a_{k+1}]\right) \\
-&\subseteq& \cup_k I_k,
-\end{eqnarray}
-$$
-where each $I_k$ is an interval of length $(a_{k+1} - a_{k})^2 = 1/(k+1)^2$. This countable union of intervals is measurable, and has measure $\le \sum_{k=0}^{\infty} 1/(k+1)^2 = \pi^2/6$. Similarly, we can show that $f^{-1}\left((-\infty, 0]\right)$ is also contained in a measurable set with measure $\le \pi^2/6$. Combining these results, we have $f^{-1}(\mathbb{R}) \subseteq Q$, where $Q$ is measurable with measure $\pi^2/3$. Clearly $Q \neq \mathbb{R}$ (since $\mathbb{R}$ has infinite measure); hence $f^{-1}(\mathbb{R}) \neq \mathbb{R}$, and $f$ cannot be defined over the entire real line.
-The same argument with $a_k$ replaced by $\sum_{n=1}^{k}(n+\alpha)^{-1}$ can be used to make the measure of $Q$ arbitrarily small (by choosing $\alpha$ sufficiently large), and we conclude that the domain of $f$ cannot contain any set of positive measure.<|endoftext|>
-TITLE: Arbitrary intersection of closed sets is closed
-QUESTION [22 upvotes]: It can be proved that an arbitrary union of open sets is open. Suppose $v$ is a family of open sets. Then $\bigcup_{G \in v}G = A$ is an open set.

-Based on the above, I want to prove that an arbitrary intersection of closed sets is closed.

-Attempted proof: by De Morgan's laws:
-$(\bigcup_{G \in v}G)^{c} = \bigcap_{G \in v}G^{c} = B$.
-$B$ is a closed set since it is the complement of the open set $A$.
-$G$ is an open set, so $G^{c}$ is a closed set.
-$B$ is an intersection of closed sets $G^{c}$.
-Hence an arbitrary intersection of closed sets is closed.

-Is my proof correct?

-REPLY [22 votes]: This is true, and your reasoning is correct too.<|endoftext|>
-TITLE: Puzzle: digit $x$ appears $y$ times on this piece of paper....
-QUESTION [15 upvotes]: There are ten questions on a piece of paper. Your task is to fill in each blank with a positive integer less than 10 such that there is no contradiction. You can reuse any digit.
-The question is as follows:

-Digit 0 appears __ times on this paper.
-Digit 1 appears __ times on this paper.
-Digit 2 appears __ times on this paper.
-Digit 3 appears __ times on this paper.
-Digit 4 appears __ times on this paper.
-Digit 5 appears __ times on this paper.
-Digit 6 appears __ times on this paper.
-Digit 7 appears __ times on this paper.
-Digit 8 appears __ times on this paper.
-Digit 9 appears __ times on this paper.

-How to solve it?
-Edit 1:
-I am not sure this question can be done without a computer.

-REPLY [3 votes]: 0 appears 1 time
-1 appears 11 times
-2 appears 2 times
-3 appears 1 time
-4 appears 1 time
-5 appears 1 time
-6 appears 1 time
-7 appears 1 time
-8 appears 1 time
-9 appears 1 time<|endoftext|>
-TITLE: Showing that the path-connected component of the identity matrix in a subgroup of $GL_n(\Bbb R)$ is a normal subgroup
-QUESTION [7 upvotes]: Let $M(n;\mathbb{R})$ denote the set of all $n \times n$ matrices with real entries (identified with $\mathbb{R}^{n^{2}}$ and endowed with its usual topology) and let $GL(n;\mathbb{R})$ denote the group of invertible matrices. Let $G$ be a subgroup of $GL(n;\mathbb{R})$. 
Define $$H = \biggl\{ A \in G \ \biggm| \ \exists \ \varphi:[0,1] \to G \ \text{continuous such that} \ \varphi(0)=A , \ \varphi(1)=I\biggr\}$$
-Then is $H$ normal in $G$?
-To prove $H$ normal I should verify two things:

-First, $H$ is a subgroup.
-For all $A \in G, B \in H$, we must have $A \cdot B \cdot A^{-1} \in H$.

-That's it; I am not able to proceed any further.

-REPLY [2 votes]: Here's another proof.
-Since $Gl = GL_n(\mathbb{R})$ is an open subset of $\mathbb{R}^{n^2}$, it is naturally a smooth manifold. Hence, the path components are the same as the components of it. Further, since multiplication and inverses can be written as polynomials in the entries of the matrix, these operations are continuous (smooth in fact).
-$H$ is clearly the path component of $Gl$ containing the identity; by the above, it's actually a component in the usual sense.
-Now, fix $h\in H$ and consider the map $L_h:Gl\rightarrow Gl$ sending $A$ to $hA$. This is continuous by the above, so it sends components to components. Where does it send $H$? Well, since $I\in H$, we see that $L_h I = hI = h$, so $L_h$ sends one point in $H$ to another point in $H$. It follows that $L_h$ sends all of $H$ into itself.
-This shows that $H$ is closed under multiplication.
-What about inverses?
-Well, first note that $L_h$ is a homeomorphism because it has inverse $L_h^{-1} = L_{h^{-1}}$. This shows that the ONLY component it sends onto $H$ is $H$. Let $X$ be the component of $Gl$ containing $h^{-1}$. Then we know $L_h(X) \subseteq H$ since $h^{-1}\in X$ and $L_h h^{-1} = hh^{-1} = I\in H$. This implies that $X = H$, i.e., that $h^{-1}\in H$. So, $H$ is closed under inverses.
-We have now shown that $H$ is a subgroup - now we need only show it's normal.
-So, let $g\in Gl$. Consider the map $C_g:Gl\rightarrow Gl$ sending $A$ to $gAg^{-1}$. As above, this map is continuous. So, it sends components to components. Where does it send $H$? Well, $I\in H$ and $C_g(I) = gIg^{-1} = I$, so it sends $H$ to $H$, that is, $C_g(H)\subseteq H$. But $C_g(H) = gHg^{-1}$, so $H$ is normal.
-(Incidentally, what I really proved is the following: Let $G$ be any Lie group. Let $G_0$ be the identity path component of $G$. Then $G_0$ is a normal subgroup of $G$. More generally, if $G$ is a topological group such that components and path components agree, this works. I'm not sure whether $G$ being a topological group implies that the components and path components agree, though.)<|endoftext|>
-TITLE: The Class of Non-empty Compact Subsets of a Compact Metric Space is Compact
-QUESTION [9 upvotes]: This is a question from my homework for a real analysis course. Please hint only.
-Let $M$ be a compact metric space. Let $\mathbb{K}$ be the class of non-empty compact subsets of $M$. The $r$-neighbourhood of $A \in \mathbb{K}$ is
-$$ M_r A = \lbrace x \in M : \exists a \in A \text{ such that } d(x,a) < r \rbrace = \bigcup_{a \in A} M_r a. $$
-For $A$, $B \in \mathbb{K}$ define
-$$D(A,B) = \inf \lbrace r > 0 : A \subset M_r B \text{ and } B \subset M_r A \rbrace. $$
-Show that $\mathbb{K}$ is compact.
-(Another part of the question is that if $M$ is connected, then so is $\mathbb{K}$, but this is not assigned).
-Many thanks.

-REPLY [8 votes]: It is probably easier to appeal to the Heine-Borel Theorem.
-Step (1): Show that $D$ is a metric on $\mathbb{K}$.
-Now that $(\mathbb{K},D)$ is a metric space, by the Heine-Borel theorem, it suffices to show that it is complete and totally bounded.
-Step (2): Let $A_i$ be a Cauchy sequence in $\mathbb{K}$; show that it converges. 
(In fact, you can show that as long as $M$ is a complete metric space, then so is $\mathbb{K}$ with the metric you just wrote down.)
-Step (3): Show that for every $\epsilon > 0$ there exists a finite open cover of $\mathbb{K}$ by $\epsilon$-balls. (In fact, as long as $M$ is totally bounded, so will $\mathbb{K}$ be.)

-Step (1) is obvious, so I won't give a hint.
-For Step (2), let $A_i$ be your Cauchy sequence. Consider the set $A$ of points $a$ such that for any $\epsilon > 0$, $B(a,\epsilon)$ intersects all but finitely many $A_i$.
-For Step (3), start with a finite cover of $M$ by $\epsilon$-balls, and let $S$ be the set of the center points of those balls. Consider the power-set $P(S)$ (more precisely, the set of all non-empty subsets of $S$) as a set of points in $\mathbb{K}$.

-REPLY [3 votes]: Hints:

-It suffices to show that $\mathbb{K}$ is complete and totally bounded.
-For every $\varepsilon > 0$ there is a constant $C(\varepsilon)$ such that every $K \in \mathbb{K}$ can be covered by at most $C(\varepsilon)$ balls of radius $\varepsilon$ around points in $K$. To see this, cover $M$ with finitely many balls of radius $\varepsilon/2$, let $K \in \mathbb{K}$ be arbitrary and pick $x_{n}$ in the intersection of $K$ with those balls that intersect $K$. The balls of radius $\varepsilon$ around these points will cover $K$.
-Given a Cauchy sequence in $\mathbb{K}$, show that it converges (this doesn't need compactness!).

-REPLY [3 votes]: As an aside (I do not suggest you prove it this way): this $\mathbb{K}$ is just the so-called hyperspace of $M$, also denoted $H(M)$ (all non-empty closed sets of $M$ in general), and an alternative, non-metric description of its topology is by describing a subbase for it, consisting of all $[U] = \{A \in \mathbb{K}:\, A \cap U \neq \emptyset \}$ and $\langle U \rangle = \{ A \in \mathbb{K}:\, A \subset U \}$, where $U$ ranges over the non-empty open subsets of $M$. If $M$ is compact so is $H(M)$, by a simple application of the Alexander subbase lemma. This is a more general topological way of viewing this space (it's a theorem that for compact metric spaces $M$ the space $H(M)$ is compact metrizable as well, and one of the metrics is the one described in the question).<|endoftext|>
-TITLE: Is there a problem in defining a complex number by $ z = x+iy$?
-QUESTION [10 upvotes]: The field $\mathbb{C} $ of complex numbers is well-defined by the Hamilton axioms of addition and product between complex numbers, i.e., a complex number $z$ is an ordered pair of real numbers $(x,y)$, which satisfy the following operations $+$ and $\cdot$:
-$ (x_1,y_1) + (x_2,y_2) = (x_1+x_2,y_1+y_2) $
-$(x_1,y_1)(x_2,y_2) = (x_1x_2-y_1y_2,x_1y_2 + x_2y_1)$
-The other field properties follow from them.
-My question is: Is there a problem in defining a complex number simply by $z = x+iy$, where $i^2 = -1$ and $x$, $y$ are real numbers, and importing the operations from $\mathbb{R}$? Or is this just an elegant manner of writing the same thing?

-REPLY [18 votes]: There is no "explicit" problem, but if you are going to define them as formal symbols, then you need to distinguish between the + in the symbol $a$+$bi$, the $+$ operation from $\mathbb{R}$, and the sum operation that you will be defining later until you show that they can be "confused"/identified with one another.
-That is, you define $\mathbb{C}$ to be the set of all symbols of the form $a$+$bi$ with $a,b\in\mathbb{R}$. 
Then you define an addition $\oplus$ and a multiplication $\otimes$ by the rules
-$(a$+$bi)\oplus(c$+$di) = (a+c)$ + $(b+d)i$
-$(a$+$bi)\otimes(c$+$di) = (ac - bd)$ + $(ad+bc)i$
-where $+$ and $-$ are the real number addition and subtraction, and + is merely a formal symbol.
-Then you can show that you can identify the real number $a$ with the symbol $a$+$0i$; and that $(0$+$1i)\otimes(0$+$1i) = (-1)$+$0i$; etc. At that point you can start abusing notation and describing it as you do, using the same symbol for $+$, $\oplus$, and +.
-So... the method you propose (which was in fact how complex numbers were used at first) is just a bit more notationally abusive, while the method of ordered pairs is much more formal, giving a precise "substance" to complex numbers as "things" (assuming you think the plane is a "thing") and not just as "formal symbols".

-REPLY [6 votes]: There is a completely rigorous way to do the construction you allude to in the last paragraph, namely by means of quotient rings. Indeed, $\mathbb{C} \simeq \mathbb{R}[X] / (X^2 + 1)$. This generalises: for example, we can construct a commutative ring with elements of the form $x + y \epsilon$, where $\epsilon^2 = 0$. The ring so constructed is emphatically not a field, but it is sometimes useful for doing symbolic differentiation.

-REPLY [4 votes]: Just set $i=(0,1)$ and identify each real $x$ with $(x,0)$, and the notation $x+iy$ is just a shorthand for the ordered-pair notation.
-Of course you could also choose $i=(0,-1)$...<|endoftext|>
-TITLE: Homework Help - Finding a Vector when given two points, and then finding a unit vector in the same direction
-QUESTION [5 upvotes]: I've attempted to solve the problem, but I got $\langle \frac{1}{\sqrt{\frac{29}{4}}}, \frac{5}{\sqrt{\frac{29}{4}}}\rangle$, which is incorrect. There is not a similar problem in my textbook that I can reference.
-I know that to find a unit vector, we first find the length/magnitude $L$ of the given vector, and then multiply the original vector by $1/L$, where
-$$L = \sqrt{x^2 + y^2}.$$
-Can anyone give me any ideas on how to solve this problem?
-Find the unit vector that has the same direction as the vector from the point A = (-1,2) to the point B = (3,3).
-Thank you in advance.

-REPLY [4 votes]: The vector that goes from $A$ to $B$ is the vector $B-A$: to see this, notice that if you add vectors using the parallelogram rule, then adding the vector $V$ you are looking for to $A$ should give you $B$, so $A+V = B$, giving you $V=B-A$.
-So the vector you are looking for is $V = B-A = (3,3) - (-1,2) = (4,1)$.
-Now that you know the vector, finding the unit vector in the same direction is done as you indicate: find the magnitude of $V$, and divide by the magnitude.
-(Looks like you took $A+B$ instead of $B-A$)

-REPLY [2 votes]: When you took the vector from A to B it looks like you added instead of subtracting. It should be (4,1).<|endoftext|>
-TITLE: If $\sin x + \cos x = \frac{\sqrt{3} + 1}{2}$ then $\tan x + \cot x=?$
-QUESTION [6 upvotes]: Hello :)
-I hit a problem.
-If $\sin x + \cos x = \frac{\sqrt{3} + 1}{2}$, then what is $\tan x + \cot x$?

-REPLY [5 votes]: Another more general approach is to solve your equation for $x$. 
Since it is linear in $\sin x$ and $\cos x$ it can be transformed into a quadratic equation in $\tan \frac{x}{2}$ (see this answer):
-$$\sin x+\cos x=\frac{1+\sqrt{3}}{2}\Leftrightarrow \frac{2\tan \frac{x}{2}}{1+\tan ^{2}\frac{x}{2}}+\frac{1-\tan ^{2}\frac{x}{2}}{1+\tan ^{2}\frac{x}{2}}=\frac{1+\sqrt{3}}{2}$$
-$$\Leftrightarrow 2\tan \frac{x}{2}+1-\tan ^{2}\frac{x}{2}=\left( 1+\tan ^{2}\frac{x}{2}\right) \frac{1+\sqrt{3}}{2}.$$
-Set $y=\tan \frac{x}{2}$
-$$2y+1-y^{2}=\frac{1+\sqrt{3}}{2}+\frac{1+\sqrt{3}}{2}y^{2}\Leftrightarrow \left( 1+\frac{1+\sqrt{3}}{2}\right) y^{2}-2y-1+\frac{1+\sqrt{3}}{2}=0$$
-and solve for $y$:
-$$y_{1}=\frac{1}{3}\sqrt{3},\qquad y_{2}=2-\sqrt{3}.$$
-Hence
-$$x_{1}=2\arctan \frac{1}{3}\sqrt{3}=\frac{1}{3}\pi $$
-or
-$$x_{2}=2\arctan \left( 2-\sqrt{3}\right) =\frac{1}{6}\pi .$$
-And finally
-$$\tan \frac{1}{3}\pi +\cot \frac{1}{3}\pi =\frac{4}{3}\sqrt{3}$$
-or
-$$\tan \frac{1}{6}\pi +\cot \frac{1}{6}\pi =\frac{4}{3}\sqrt{3}.$$<|endoftext|>
-TITLE: Characterizing units in polynomial rings
-QUESTION [64 upvotes]: I am trying to prove a result, for which I have got one part, but I am not able to get the converse part.
-Theorem. Let $R$ be a commutative ring with $1$. Then $f(X)=a_{0}+a_{1}X+a_{2}X^{2} + \cdots + a_{n}X^{n}$ is a unit in $R[X]$ if and only if $a_{0}$ is a unit in $R$ and $a_{1},a_{2},\dots,a_{n}$ are all nilpotent in $R$.
-Proof. Suppose $f(X)=a_{0}+a_{1}X+\cdots +a_{n}X^{n}$ is such that $a_{0}$ is a unit in $R$ and $a_{1},a_{2},\dots,a_{n}$ are all nilpotent in $R$. Since $R$ is commutative, we get that $a_{1}X,a_{2}X^{2},\cdots,a_{n}X^{n}$ are all nilpotent and hence so is their sum. Let $z = \sum_{i=1}^{n} a_{i}X^{i}$; then $a_{0}^{-1}z$ is nilpotent and so $1+a_{0}^{-1}z$ is a unit. Thus $f(X)=a_{0}+z=a_{0} \cdot (1+a_{0}^{-1}z)$ is a unit, since the product of two units in $R[X]$ is a unit.
-I have not been able to get the converse part and would like to see the proof for the converse part.

-REPLY [69 votes]: Let $f=\sum_{k=0}^n a_kX^k$ and $g= \sum_{k=0}^m b_kX^k$. If $f g=1$, then clearly $a_0,b_0$ are units and:
-$$a_nb_m=0 \tag1$$
-$$a_{n-1}b_m+a_nb_{m-1}=0$$
-(on multiplying both sides by $a_n$)
-$$\Rightarrow (a_n)^2b_{m-1}=0 \tag2$$
-$$a_{n-2}b_m+a_{n-1}b_{m-1}+a_nb_{m-2}=0$$
-(on multiplying both sides by $(a_n)^2$)
-$$\Rightarrow (a_n)^3b_{m-2}=0 \tag3$$
-$$\vdots$$
-$$\cdots+a_{n-2}b_2+a_{n-1}b_1+a_nb_0=0$$
-(on multiplying both sides by $(a_n)^m$)
-$$\Rightarrow (a_n)^{m+1}b_{0}=0 \tag{m+1}$$
-Since $b_0$ is a unit, it follows that $(a_n)^{m+1}=0$.
-Hence, we proved that $a_n$ is nilpotent, but this is enough. Indeed, since $f$ is invertible, $a_nX^n$ being nilpotent implies that $f-a_nX^n$ is a unit and we can repeat (or more rigorously, perform induction on $\deg(f)$).<|endoftext|>
-TITLE: Is order of variables important in probability chain rule
-QUESTION [6 upvotes]: Is the order of random variables important in the chain rule? I mean, is this true: $P(A,B,C) = P(A)\times P(B|A)\times P(C|A,B) = P(C)\times P(B|C)\times P(A|B,C) = P(C,B,A)$? If it is, what is the meaning of such an ordering? Thank you.

-REPLY [5 votes]: $P[A \cap B \cap C] = P[(A \cap B) \cap C] = P[(A \cap B)|C]P(C) = P[C|A \cap B]P[A \cap B]$. Then you can rewrite $P(A \cap B) = P(A|B)P(B) = P(B|A)P(A)$.
-These are all useful. Suppose you want to find $P(A \cap B)$. Well, $P(A \cap B) = P(A|B)P(B) = P(B|A)P(A)$. But suppose you only know $P(B|A)$ and $P(A)$. 
Then $P(B|A)P(A)$ is more useful.<|endoftext|>
-TITLE: $f(x+f(y))=f(x-f(y))+4xf(y)$
-QUESTION [5 upvotes]: Find all functions $f:R\rightarrow R$ which satisfy $f(x+f(y))=f(x-f(y))+4xf(y)$ $\forall x,y \in R$.
-I strongly suspect $0$ and $x^2+C$ to be the only solutions but, as is almost always the case with functional equations, finding a set of solutions is easy; it is proving that a certain set of solutions represents ALL solutions that is difficult.
-It is trivial that if $f$ is bounded, then $f\equiv0$.

-REPLY [8 votes]: Ok I finally got it: the equation $f(x + f(y)) = f(x - f(y)) + 4xf(y)$ can be rewritten as
-$$f(x + f(y)) - f(x - f(y)) = (x + f(y))^2 - (x - f(y))^2$$
-Replacing $x$ by $x + f(y)$ and rearranging terms we get
-$$f(x + 2f(y)) - (x + 2f(y))^2 = f(x) - x^2$$
-Thus $f(x) - x^2$ has period $2f(y)$ for all $y$. In particular, it has period $2f(a + f(b))$ as well as $2f(a - f(b))$ for all $a$ and $b$. Since the difference of two periods is a period, it also has period $2f(a + f(b)) - 2f(a - f(b))$ for any $a$ and $b$, which is $8af(b)$ by the conditions given. Assuming $f(x)$ is not the zero function, we may pick $b$ so that $f(b)$ is not zero. Since $a$ can be anything, we conclude $f(x) - x^2$ has every real number as a period; in other words it is constant. So if $f(x)$ is not the zero function, then $f(x) = x^2 + C$ for some constant $C$.<|endoftext|>
-TITLE: Generators of compact Lie groups
-QUESTION [7 upvotes]: Suppose $G$ is a compact connected Lie group and let $\{X_i\}$ be a basis for its Lie algebra $\mathfrak g$. We know that the exponential $\exp:\mathfrak g \to G$ is surjective but when is it the case that $G$ is generated by $\{\exp(tX_i) : t\in \mathbb R\}$?

-REPLY [8 votes]: The map $\mathbb{R}^n \to G$ sending $(t_1, \dots, t_n)$ to $\mathrm{exp}(t_1 X_1) \dots \mathrm{exp}(t_n X_n)$ has nonsingular Jacobian at $0$, so its image contains a neighborhood of the identity. By a standard argument, a neighborhood of the identity in a connected topological group generates the full group.<|endoftext|>
-TITLE: Non-squarefree version of Pell's equation?
-QUESTION [12 upvotes]: Suppose I wanted to solve (for $x$ and $y$) an equation of the form $x^2-dp^2y^2=1$ where $d$ is squarefree and $p$ is a prime. Of course I could simply generate the solutions to the Pell equation $x^2-dy^2=1$ and check if the value of $y$ was divisible by $p^2,$ but that would be slow. Any better ideas?
-It would be useful to be able to distinguish cases where the equation is solvable from cases where it is unsolvable, even without finding an explicit solution.

-REPLY [3 votes]: For what it is worth, my lecture notes on the Pell Equation are in the context of $d$ any positive, nonsquare integer. Also see problem 9.2 here where I ask the students to exploit the fact that $d$ does not need to be squarefree to show that one can find solutions $(x,y)$ to the Pell equation satisfying an additional congruence $y \equiv 0 \pmod M$ for any $M \in \mathbb{Z}^+$.
-From the perspective of algebraic number theory, this comes down to a version of the Dirichlet Unit Theorem for nonmaximal orders in a number field $K$. Recall that a $\mathbb{Z}$-order $R$ in $K$ is a subring of $\mathbb{Z}_K$ containing a $\mathbb{Q}$-basis of $K$ and such that the additive group $(R,+)$ is finitely generated as a $\mathbb{Z}$-module. Together these conditions are equivalent to the finiteness of the index $[\mathbb{Z}_K:R]$. 
The usual statement of the Dirichlet Unit Theorem is that the unit group $\mathbb{Z}_K^{\times}$ is a finitely generated abelian group with rank equal to $r_1 + r_2 - 1$, where if $K \cong \mathbb{Q}[t]/(P(t))$, the polynomial $P$ has $r_1$ real roots and $r_2$ pairs of complex conjugate non-real roots.
-But if I am not mistaken (and please let me know if I am!), the standard proof of the Dirichlet Unit Theorem works to show that exactly the same is true for the unit group $R^{\times}$ of any nonmaximal order. (Certainly $R^{\times}$ is finitely generated, being a subgroup of the finitely generated abelian group $\mathbb{Z}_K^{\times}$; the claim is that its rank is no less than that of $\mathbb{Z}_K^{\times}$.)
-Using the structure theory of finitely generated abelian groups, one easily deduces the following relative version of the Dirichlet Unit Theorem: for any order $R$ in $\mathbb{Z}_K$, the quotient group $\mathbb{Z}_K^{\times}/R^{\times}$ is finite.<|endoftext|>
-TITLE: Is there such a thing as a countable set with an uncountable subset?
-QUESTION [16 upvotes]: Is there such a thing as a countable set with an uncountable subset?

-Actually I know the answer. Well, I believe I know the answer, which is NO.
-Unfortunately, the professor in a Theory of Computation class said that yes, there is such a subset.
-This is to settle a discussion with fellow students, a discussion that is going nowhere, so we go to the internets for a verdict.
-Thanks in advance for weighing in on this question.

-REPLY [2 votes]: If your course was a TCS course, then maybe your professor mixed up countable sets with enumerable sets. (This happens sometimes, especially in German-language areas, where it is "abzählbar" vs. "aufzählbar".)
-Every subset of a countable set is countable or finite (or empty). However, not every subset of an enumerable set is enumerable.
-For example, $\mathbb{N}$ is trivially enumerable. But the range of the busy beaver function, which is a subset of $\mathbb{N}$, is not.<|endoftext|>
-TITLE: Decidability of tiling of $\mathbb{R}^n$
-QUESTION [7 upvotes]: Given a polytope of dimension $n$, is there some general way to determine if it can tile $\mathbb{R}^n$?

-REPLY [6 votes]: The book Tilings and Patterns by Branko Grünbaum and Geoffrey Shephard (W.H. Freeman) is still, despite its 1987 publication date, an excellent source of information about tiling questions. Wang tiles and decidability problems are treated in Chapter 11.<|endoftext|>
-TITLE: Algorithm to compute Gamma function
-QUESTION [26 upvotes]: The question is simple. I would like to implement the Gamma function in my calculator written in C; however, I have not been able to find an easy way to programmatically compute an approximation to arbitrary precision.
-Is there a good algorithm to compute approximations of the Gamma function?
-Thanks! 
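-For concreteness, one classical scheme that seems to fit the bill is Spouge's approximation: its coefficients can be computed on the fly, and its accuracy improves as the parameter a grows, which is what makes higher precision feasible once the double arithmetic is swapped for a big-float type. Below is only a rough sketch of it (written in Python for brevity, since the loop transliterates mechanically to C), not a vetted implementation:
-import math
-
-def spouge_gamma(z, a=12):
-    # Approximate Gamma(z) for real z > 0 via Spouge's series.
-    # The error shrinks roughly like (2*pi)**-(a + 0.5) as a grows.
-    w = z - 1.0  # Spouge's formula approximates Gamma(w + 1)
-    s = math.sqrt(2 * math.pi)  # the c_0 coefficient
-    for k in range(1, a):
-        c_k = ((-1) ** (k - 1) / math.factorial(k - 1)
-               * (a - k) ** (k - 0.5) * math.exp(a - k))
-        s += c_k / (w + k)
-    return (w + a) ** (w + 0.5) * math.exp(-(w + a)) * s
-
-# sanity check against the standard library
-for z in (0.5, 1.0, 3.5, 10.0):
-    print(z, spouge_gamma(z), math.gamma(z))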
-
-REPLY [4 votes]: Take the divergent series
-$$\hat J_\nu(z):=\sqrt{\dfrac{2}{\pi z}}\left\{p(z)\cos\left(z-\dfrac{2\nu+1}{4}\pi\right)-q(z)\sin\left(z-\dfrac{2\nu+1}{4}\pi\right)\right\}$$
-with
-$$p(z):=1-\dfrac{(4\nu^2-1^2)(4\nu^2-3^2)}{2!(8z)^2}+\dfrac{(4\nu^2-1^2)(4\nu^2-3^2)(4\nu^2-5^2)(4\nu^2-7^2)}{4!(8z)^4}-\cdots$$
-and
-$$q(z):=\dfrac{(4\nu^2-1^2)}{1!(8z)^1}-\dfrac{(4\nu^2-1^2)(4\nu^2-3^2)(4\nu^2-5^2)}{3!(8z)^3}+\cdots$$
-For the Bessel function $J_\nu(z)$ we get the estimate
-$$\left\vert J_\nu(z)-\hat J_\nu^{(k)}(z)\right\vert = O\left(z^{-k-3/2}\right)$$
-for $\vert z\vert\ge1$ and $Re\{z\}\ge0$, with the partial sum $\hat J_\nu^{(k)}(z)$ whose last summand has the term
-$$\dfrac{(4\nu^2-1^2)\cdots(4\nu^2-(2k-1)^2)}{k!(8z)^k}.$$
-The Bessel function $J_\nu(z)$ can be calculated as
-$$J_\nu(z)=\sum_{j=0}^\infty\dfrac{(-1)^j}{j!\Gamma(\nu+j+1)}\left(\dfrac{z}{2}\right)^{\nu+2j}.$$
-With $\Gamma(x+1)=x\Gamma(x)$ we obtain
-$$J_\nu(z)=\dfrac{1}{\Gamma(\nu+1)}\sum_{j=0}^\infty\dfrac{(-1)^j}{j!(\nu+1)\cdots(\nu+j)}\left(\dfrac{z}{2}\right)^{\nu+2j}.$$
-Take
-$$K_\nu(z)=\sum_{j=0}^\infty\dfrac{(-1)^j}{j!(\nu+1)\cdots(\nu+j)}\left(\dfrac{z}{2}\right)^{\nu+2j}.$$
-Then we obtain
-$$\lim_{n\to\infty}\dfrac{K_\nu\left(n\pi+\dfrac{2\nu+1}{4}\pi\right)}{\hat J_\nu^{(k)}\left(n\pi+\dfrac{2\nu+1}{4}\pi\right)}=\Gamma(\nu+1)$$
-and the gamma function drops out of the game.
-As an example we get for $\nu=0.5$, $k=10$ and $z=4\pi+\dfrac{\pi}{2}$
-$$\dfrac{K_{0.5}\left(z\right)}{\hat J_{0.5}^{(10)}\left(z\right)}=0.88622692546003057$$
-and
-$$\dfrac{K_{0.5}\left(z+0.1\right)}{\hat J_{0.5}^{(10)}\left(z+0.1\right)}=0.88622692544073867.$$
-But
-$$\Gamma\left(\dfrac{3}{2}\right)=\dfrac{1}{2}\sqrt{\pi}= 0.88622692545275794.$$
-Because the series $\hat J_{0.5}(z)$ terminates for $\nu=\dfrac{1}{2}$ we even have $\hat J_{0.5}(z)=J_{0.5}(z)$, and the difference is only due to the precision of our calculation, which was made with the type $\mathbf{double}$ (or System.Double) in C#.
-For $\nu=0.25$, $k=10$ and $z=4\pi+\dfrac{3\pi}{8}$ we get
-$$\dfrac{K_{0.25}\left(z\right)}{\hat J_{0.25}^{(10)}\left(z\right)}=0.90640247708308364\approx\Gamma(1.25)=\dfrac{1}{4}\Gamma(0.25)$$
-and for $\nu=0.75$, $k=10$ and $z=4\pi+\dfrac{5\pi}{8}$ we get
-$$\dfrac{K_{0.75}\left(z\right)}{\hat J_{0.75}^{(10)}\left(z\right)}=0.91906252683450718\approx\Gamma(1.75)=\dfrac{3}{4}\Gamma(0.75).$$
-With
-$$\dfrac{\pi}{\sin\pi x}=\Gamma(x)\Gamma(1-x)$$
-we obtain
-$$(4\cdot 0.90640247708308364)\cdot(4\cdot 0.91906252683450718\,/\,3)-\dfrac{\pi}{\sin\pi/4}=0.000000000065\dots$$<|endoftext|>
-TITLE: Is every finite separable extension of a strictly henselian DVR totally ramified?
-QUESTION [5 upvotes]: Let $R$ be a discrete valuation ring with field of fractions $K$ and residue field $k$, and let $K'$ be a finite and separable extension of $K$.
-If $R$ is henselian ("Hensel's lemma holds", e.g. if it is complete) and $k$ is algebraically closed, then the extension $K'|K$ is totally ramified (from Hensel's lemma it follows that there's just one prime lying above the prime of $R$, and since $k$ is algebraically closed there's no residual extension).
-What happens if we assume only that $R$ is strictly henselian (i.e. that $k$ is only separably closed)?
-Equivalently: does there exist a finite separable extension $K'$ of $K$ such that the residual extension is a nontrivial inseparable extension?
-I believe that such an extension should exist.
-Moreover, can we find a Galois extension with this property? 
-Can we do this both in characteristics $(0,p)$ and $(p,p)$?

-REPLY [3 votes]: Let $k$ be a separably closed imperfect field of char. $p$. Fix $t \in k$ for which $X^p - t \in k[X]$ has no root in $k$.
-Set $K = k((s))$, and form the extension $L = K(y) \supset K$ obtained by adjoining to $K$ a root $y$ of the polynomial $f(Y) = Y^p + sY - t \in K[Y]$.
-Write $A = k[[s]]$ for the integers of $K$. Since the image of $f(Y)$ in $k[Y] = (A/sA)[Y]$ is irreducible, $f(Y)$ is irreducible in $A[Y]$ and hence in $K[Y]$. Since $f'(Y) = s \ne 0$, it follows that $L/K$ is separable.
-Now write $B$ for the integral closure of $A$ in $L$; thus $y \in B$. Denoting by $\pi$ a uniformizer for the DVR $B$, note that $s \in \pi B$. Thus the image of $y$ in $\ell = B/\pi B$ is a $p$-th root of $t$, so $\ell$ is a proper (purely) inseparable extension of $k$.
-EDIT: In hindsight, it seems that with minor tweaking the same argument gives an answer in the "mixed characteristic" case.
-Indeed, let $t \in k$ be as before, and now let $A$ be a complete (or Henselian) DVR with residue field $k$ and with fraction field $K$ of char. 0. Now choose an element
-$\tau \in A$ whose image in $k$ is $t$. Let $\sigma \in A$ be a uniformizer,
-and set $f(Y) = Y^p + \sigma Y - \tau \in A[Y]$. Then as before the extension $L = K(y)$ of $K$ is separable, where $y$ is a root of $f(Y)$. And the residue extension is again inseparable.<|endoftext|>
-TITLE: does every adjoint orbit of a Lie group go through the Cartan subalgebra?
-QUESTION [6 upvotes]: A naive question from a physicist, so forgive the lack of rigor. Consider a Lie group, acting on its Lie algebra by the adjoint action. Does every orbit go through the Cartan subalgebra? Alternatively, for which Lie groups is this true? (I view the statement as a generalization of the elementary fact that every Hermitian matrix can be diagonalized by a unitary transformation.) It would be nice if it held at least for real, compact, and semi-simple Lie algebras. Thank you for any hint or reference to literature!

-REPLY [3 votes]: If $G$ is a compact and connected Lie group, that must be true! On this, you can read the book Differential Geometry, Lie Groups and Symmetric Spaces by Helgason, Chapter V, Section 6.<|endoftext|>
-TITLE: If $f$ continuous and $f(x^2) = f(x)$, then $f$ is a const
-QUESTION [9 upvotes]: Problem: Given $f:[0,1] \rightarrow \mathbb{R}$ ($f$ continuous) and $f(x^2) = f(x)$ $\forall x \in [0,1]$. Show that the function $f$ is a constant.

-REPLY [14 votes]: Consider the sequence $a,a^2,a^4,a^8,\ldots$, i.e. $x_n = a^{2^{n}}$ where $a \in [0,1)$.
-Clearly, we have $\displaystyle \lim_{n \rightarrow \infty} x_n = 0$.
-We have $f(a) = f(a^2)$, $\forall a \in [0,1]$.
-Using this, it is easy to prove by induction that $f(a) = f(a^{2^{n}})$, $\forall a \in [0,1]$ and $\forall n \in \mathbb{N}$.
-Further, every continuous function is sequentially continuous, i.e. $\displaystyle \lim_{n \rightarrow \infty} f(x_n) = f(\lim_{n \rightarrow \infty} x_n)$.
-Using the above arguments, we get that $\forall a \in [0,1)$, $$f(a) = \displaystyle \lim_{n \rightarrow \infty} f(a) = \displaystyle \lim_{n \rightarrow \infty} f(a^{2^{n}}) = f(\displaystyle \lim_{n \rightarrow \infty} a^{2^{n}}) = f(0)$$
-Hence, $f(a) = f(0)$, $\forall a \in [0,1)$. Use continuity to conclude that $f(1) = f(0)$ and hence $$f(a) = f(0), \forall a \in [0,1]$$
-EDIT
-I just want to make this argument symmetric for $0$ and $1$. 
-Just like we argued that $f(a) = f(0)$, $\forall a \in [0,1)$, we can argue that $f(a) = f(1)$, $\forall a \in (0,1]$.
-Instead of considering the sequence $a,a^2,a^4,a^8,\ldots$, consider $a, \sqrt{a}, \sqrt[4]{a}, \sqrt[8]{a}, \ldots$, i.e. $x_n = \sqrt[2^n]{a}$ where $a \in (0,1]$.
-Clearly, we have $\displaystyle \lim_{n \rightarrow \infty} x_n = 1$.
-We have $f(a) = f(\sqrt{a})$, $\forall a \in [0,1]$.
-Using this, it is easy to prove by induction that $f(a) = f(\sqrt[2^n]{a})$, $\forall a \in [0,1]$ and $\forall n \in \mathbb{N}$.
-Again, since every continuous function is sequentially continuous, we get that $\forall a \in (0,1]$, $$f(a) = \displaystyle \lim_{n \rightarrow \infty} f(a) = \displaystyle \lim_{n \rightarrow \infty} f(\sqrt[{2^{n}}]{a}) = f(\displaystyle \lim_{n \rightarrow \infty} \sqrt[{2^{n}}]{a}) = f(1)$$
-Hence, $f(a) = f(1)$, $\forall a \in (0,1]$.
-So we have that $f(0) = f(a) = f(1)$, $\forall a \in [0,1]$.<|endoftext|>
-TITLE: connection between graphs and the eigenvectors of their matrix representation
-QUESTION [14 upvotes]: I am trying to learn graph theory and the linear algebra used to analyse graphs. The texts I have read through have lots of lemmas and theorems proved. The proofs are convincing but I fail to see the intuition behind them. How do they connect to the properties of the graphs?
-I understand eigenvectors very clearly when they are taken from a transition matrix for a Markov chain, because very simply the leading eigenvector represents a stationary distribution of the chain. But for the matrices derived from graphs? From the adjacency matrix? Here I cannot understand what relevance the eigenvectors/eigenvalues have to the graph. I cannot imagine using those matrices in the first place for multiplying anything by them.
-Maybe I don't completely understand the power and depth of eigenvectors/eigenvalues. But I guess this application would reveal more.
-Best,

-REPLY [13 votes]: The abstract point of view is that the graph is characterized by its adjacency matrix up to permutation. When we view the adjacency matrix as an abstract linear transformation, we are looking at it up to conjugation, which is a coarser equivalence. So we lose some information, but the information that is left is still interesting. And any time we can apply linear algebra to a situation, that is a good thing, because linear algebra is really easy compared to almost anything else. (This sounds trite, but it is one of the most-used principles in mathematics.)
-The eigenvalues of the adjacency matrix describe closed walks on the graph. More precisely, if $A$ is the adjacency matrix, then the total number of closed walks of length $n$ on the graph is given by $\text{tr } A^n$, which is the same as $\sum_i \lambda_i^n$ where $\lambda_i$ are the eigenvalues. So knowing the eigenvalues says something about how closed walks grow; this identity is also easy to check numerically, as in the sketch below. Unfortunately this perspective does not provide a direct interpretation of the eigenvectors.
-Spectral methods apply particularly well to graphs with a lot of structure, such as strongly regular graphs. In the best case one can write down a matrix equation that the adjacency matrix satisfies, and analyzing what this says about the eigenvectors and eigenvalues puts strong constraints on the graph. 
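-As a quick numerical illustration of the closed-walk identity $\text{tr } A^n = \sum_i \lambda_i^n$ mentioned above (a minimal sketch; the choice of the $4$-cycle and the use of numpy are assumptions made just for the demo):
-import numpy as np
-
-# adjacency matrix of the 4-cycle; its eigenvalues are 2, 0, 0, -2
-A = np.array([[0, 1, 0, 1],
-              [1, 0, 1, 0],
-              [0, 1, 0, 1],
-              [1, 0, 1, 0]], dtype=float)
-
-eigvals = np.linalg.eigvalsh(A)  # A is symmetric, so eigvalsh applies
-for n in range(1, 6):
-    closed_walks = np.trace(np.linalg.matrix_power(A, n))  # tr(A^n) = closed walks of length n
-    spectral_sum = (eigvals ** n).sum()                    # sum of lambda_i ** n
-    print(n, closed_walks, spectral_sum)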
This kind of constraint analysis is the mechanism behind the standard proof of the classification of Moore graphs of girth $5$.
-There is a fairly concrete interpretation of the eigenvalues and the eigenvectors of the Laplacian matrix, as I explained in this math.SE answer. The short version is that one can set up three natural differential equations on a graph using the Laplacian, and these all give concrete interpretations of the eigenvalues and eigenvectors:

-The wave equation. Here the eigenvectors are the "harmonics" of the graph: they describe the standing wave solutions of the wave equation, and the eigenvalues describe the frequencies at which these waves resonate.
-The heat equation. Here the eigenvectors are still the "harmonics" of the graph, but the eigenvalues describe the rate at which heat decays for each harmonic. Morally this is about "Brownian motion" on the graph, so it is related to random walks.
-The Schrödinger equation. Here the eigenvectors are energy eigenstates of a continuous-time quantum random walk, and the eigenvalues are (up to a constant) energy eigenvalues.

-When the graph is regular, the Laplacian and adjacency matrix are related in a very simple way; the eigenvectors are the same, and the eigenvalues can be easily related to each other. Things are more complicated in the irregular case.<|endoftext|>
-TITLE: $ \sum\limits_{i=1}^{p-1} \Bigl( \Bigl\lfloor{\frac{2i^{2}}{p}\Bigr\rfloor}-2\Bigl\lfloor{\frac{i^{2}}{p}\Bigr\rfloor}\Bigr)= \frac{p-1}{2}$
-QUESTION [13 upvotes]: I was working out some problems. This one is giving me trouble.

-If $p$ is a prime number of the form $4n+1$, then how do I show that:

-$$ \sum\limits_{i=1}^{p-1} \Biggl( \biggl\lfloor{\frac{2i^{2}}{p}\biggr\rfloor}-2\biggl\lfloor{\frac{i^{2}}{p}\biggr\rfloor}\Biggr)= \frac{p-1}{2}$$
-Two things which I know are:

-If $p$ is a prime of the form $4n+1$, then $x^{2} \equiv -1 \ (\text{mod} \ p)$ can be solved.
-$\lfloor{2x\rfloor}-2\lfloor{x\rfloor}$ is either $0$ or $1$.

-I think the second one will be of use, but I really can't see how I can apply it here.

-REPLY [4 votes]: Here are some more detailed hints.
-Consider the value of $\lfloor 2x \rfloor - 2 \lfloor x \rfloor$ where $x=n+ \delta$ for
-$ n \in \mathbb{Z}$ and $0 \le \delta < 1/2.$
-Suppose $p$ is a prime number of the form $4n+1$ and $a$ is a
-quadratic residue modulo $p$; then why is $(p-a)$ also a quadratic residue?
-What does this say about the number of quadratic residues $< p/2$ ?
-All the quadratic residues are congruent to the numbers
-$$1^2,2^2,\ldots, \left( \frac{p-1}{2} \right)^2,$$
-which are themselves all incongruent to each other, so how many times does the set
-$\lbrace 1^2,2^2,\ldots,(p-1)^2 \rbrace$ run through a complete set of
-\emph{quadratic} residues?
-Suppose $i^2 \equiv a \textrm{ mod } p$ where $i \in \lbrace 1,2,\ldots,p-1 \rbrace$ and $a$ is a quadratic residue $< p/2$; then what is the value of
-$$ \left \lfloor \frac{2i^2}{p} \right \rfloor -
- 2 \left \lfloor \frac{i^2}{p} \right \rfloor \quad \text{?}$$<|endoftext|>
-TITLE: What is a local parameter in algebraic geometry?
-QUESTION [25 upvotes]: Shafarevich offers the following theorem-definition:
-"At any nonsingular point $P$ of an irreducible algebraic curve, there exists a regular function $t$ that vanishes at $P$ and such that every rational function $u$ that is not identically $0$ on the curve can be written in the form $u = t^k v$, with $v$ regular at $P$ and $v(P) \neq 0$. A function $t$ with this property is called a local parameter on the curve at $P$." 
-
-I've looked through six other books on algebraic geometry (The Geometry of Schemes by Eisenbud and Harris, Algebraic Curves by Fulton, Principles of Algebraic Geometry by Griffiths and Harris, The Red Book of Varieties by Mumford, and Vakil's online notes Foundations of Algebraic Geometry) and, unless I have made an error, none even contain the phrase "local parameter." Hartshorne does appear to have the phrase in a few instances, but certainly does not give any definition at all similar to the one above, and besides, Hartshorne is above my level right now, so I am not in a good position to decide whether his usage agrees with that above or not.
-The above theorem appears to me to exist only in Shafarevich and nowhere else in the mathematical literature.
-Wikipedia offers the following much simpler characterization: "In the geometry of complex algebraic curves, a local parameter for a curve $C$ at a smooth point $P$ is just a meromorphic function on $C$ that has a simple zero at $P$."

-So my question is this: what exactly are these local parameters, and how should I think of them? How can I reconcile what Wikipedia has written with what Shafarevich writes? The name "local parameter" suggests to me there is some simple characterization of these functions which Shafarevich is keeping a mystery from me (or is Shafarevich's definition more intuitive than I am finding it?). And finally, are these really present virtually nowhere in the entire mathematical literature except Shafarevich, or do equivalent ideas go under different names?

-REPLY [20 votes]: What Shafarevich calls a local parameter is often called a uniformizing parameter at $P$, and is also the same thing as a uniformizer of the local ring of $C$ at $P$.
-The point is that if $P$ is a smooth point on a curve, then the local ring at $P$ (i.e. the ring of rational functions on $C$ which are regular at $P$) is a DVR, and hence its maximal ideal is principal; a generator of this ideal is called a uniformizer.
-If $t$ is a uniformizer/local parameter/uniformizing parameter at $P$, and if
-$u$ is any other rational function, then if we write $u = t^k v$ where $v(P) \neq 0$ (i.e. $v$ is a unit in the local ring), then $k$ is the order of vanishing of $u$ at $P$. In particular, $u$ vanishes to order one if and only if it is equal to $t$ times a unit in the local ring, if and only if it is also a generator of the maximal ideal of the local ring at $P$, if and only if it is also a uniformizer. Thus Shafarevich and Wikipedia are reconciled.
-One is supposed to think of $t$ as being a "local coordinate at $P$." In the complex analytic picture you would choose a small disk around $P$, and consider the coordinate $z$ on this disk; this is a local coordinate around the smooth point $P$. This analogy is very tight: indeed, it is not hard to show (when the ground field is the complex numbers) that a rational function $t$ is a local parameter at $P$ if and only if $t(P) = 0$
-and there is a small neighbourhood of $P$ (in the complex topology) which is mapped isomorphically to a disk around $0$ by $t$, i.e. if and only if $t$ restricts to a local coordinate on a neighbourhood of $P$.
-Finally, this concept is ubiquitous. The fact that the local ring at a point on a smooth algebraic curve is a DVR is fundamental in the algebraic approach to the theory of algebraic curves; see e.g. section 6 of Chapter I of Hartshorne.

-REPLY [12 votes]: A synonym for this term is "uniformizing parameter" or "uniformizer," and this term does appear in other books. 
I believe it is supposed to be the algebraic analogue of a chart; in other words, it is a "local coordinate" at the point. Perhaps to understand the geometry behind the term you should look at textbooks on Riemann surfaces first.

-REPLY [2 votes]: I think page 146 of this gives a clear explanation of local parameters. This is an article about algebraic geometry applied to coding theory. It also uses "uniformizing parameter" as a synonym, as Qiaochu mentioned.<|endoftext|>
-TITLE: Global Sections of Sheaf and Dual
-QUESTION [9 upvotes]: Maybe this is well-known, but suppose you have an invertible sheaf $\mathcal{L}$ on a scheme $X$. If $X$ is a projective space (or even a projective bundle over an integral scheme, by a similar argument...I think), then if $H^0(X, \mathcal{L})\neq 0$ and $H^0(X, \mathcal{L}^\vee)\neq 0$ we get that $\mathcal{L}\simeq \mathcal{O}_X$. This is really simple because $\mathcal{L}\simeq \mathcal{O}_X(n)$, so the two hypotheses mean $n\geq 0$ and $n\leq 0$, so $n=0$.
-My guess is this must be true more generally. Are there conditions on $X$ under which any invertible sheaf such that both it and its dual have non-trivial global sections must be trivial?

-REPLY [8 votes]: In general, this fails. Take $X = \mathrm{Spec}A$ for $A$ a Dedekind domain. Then line bundles correspond to equivalence classes of Weil divisors. A line bundle corresponds to some formal sum $\sum n_i \mathfrak{p}_i$. But you can always find something in $A$ whose divisor is bigger than this (just take it highly divisible at those primes). So any line bundle will satisfy your condition, but if $A$ is not a UFD it will not necessarily be trivial: the Picard group is the class group of $A$. (See 8.2 of http://people.fas.harvard.edu/~amathew/CRing.pdf for an explanation of this isomorphism and a general discussion of line bundles on affine schemes.)
-This is true for $X$ an integral proper scheme over an algebraically closed field $k$. (It's in Mumford's Abelian Varieties.) You don't need the isomorphism of the Picard group with $\mathbb{Z}$, which is anyway only true when $X$ is all of projective space.
-Suppose $\mathcal{L}$ is such a line bundle. Then there is a nonzero morphism $\mathcal{O}_X \to \mathcal{L}$ and a nonzero morphism $\mathcal{L} \to \mathcal{O}_X$. This is what it means for the space of global sections of $\mathcal{L}$ and its dual to not vanish. So we get a composition
-$$\mathcal{L} \to \mathcal{L},$$
-which is given by a global regular function that can't be zero (as it isn't zero at the generic point). This is necessarily a nonzero element of the field $k$, so an isomorphism of line bundles. Similarly the composition $\mathcal{O}_X \to \mathcal{O}_X$ is an isomorphism. (I suppose the key property here is that a nonzero endomorphism of a line bundle is an isomorphism, and this in turn is clear from $\Gamma(X, \mathcal{O}_X) = k$, which is a consequence of properness.) It follows that $\mathcal{L} \to \mathcal{O}_X$ is an isomorphism and $\mathcal{L}$ is trivial.
-This doesn't really need $k$ to be algebraically closed (though then $X$ geometrically integral over $k$ would be necessary to make the argument work, unless I'm missing something).
-This is false without integrality hypotheses. 
(Take a disconnected scheme, and a line bundle trivial on one piece but not the other.)<|endoftext|>
-TITLE: Automorphisms of a non-abelian group of order $p^{3}$
-QUESTION [6 upvotes]: In the book "Structure of Groups of Prime Power Order" (Charles Richard Leedham-Green, Susan McKay), there is an exercise (2.1.10) which asks to show that the automorphism group of $(\mathbb{Z}_p \times \mathbb{Z}_p) \rtimes \mathbb{Z}_p$ is $(\mathbb{Z}_p \times \mathbb{Z}_p) \rtimes GL(2,p)$. To prove this, first, it is easy to show that there is a short exact sequence $1 \rightarrow \mathbb{Z}_p \times \mathbb{Z}_p \rightarrow Aut((\mathbb{Z}_p \times \mathbb{Z}_p) \rtimes \mathbb{Z}_p) \rightarrow GL(2,p) \rightarrow 1$. Therefore it remains to show that this exact sequence is split. How does one show it?

-REPLY [3 votes]: Here is a new, less coordinated version that motivates some of the correct coordinate changes.
-Let V be a vector space, let W = V∧V be the exterior square, and let G be the wedgey group on V⊕W with multiplication (v1,w1)⋅(v2,w2) = (v1+v2, w1+w2 + v1∧v2). Powers obey the rule (v,w)^n = (nv,nw). Commutators obey the rule [(v1,w1),(v2,w2)] = [0,2(v1∧v2)]. In particular, G always has nilpotency class 2 when 2≠0 (and is abelian otherwise), and has exponent p whenever V is defined over a field of characteristic p. When V is two-dimensional over the field Z/pZ of p elements, G is the unique non-abelian group of order p^3 and exponent p (the group in question).
-Consider a block matrix of the form $$M = \begin{bmatrix} A & b \\ . & \Lambda^2(A) \end{bmatrix}$$ where A in End(V) and b in Hom(V,W), with the formal multiplication (v,w)⋅M = ( vA, vb + w⋅Λ^2(A) ). When V is 2-dimensional, W is 1-dimensional and Λ^2(A) is just multiplication by det(A). In general, (v1∧v2)⋅Λ^2(A) = (v1⋅A) ∧ (v2⋅A). When dim(V)=2, then every element of Hom(V,W) can be written as b = ∧u, which takes v in V to v∧u in W.
-This matrix M defines a homomorphism of G:
-$$\begin{array}{l}
-(v_1, w_1)M \cdot (v_2, w_2)M \\
-= (v_1\cdot A, ~w_1\cdot\Lambda^2(A) + v_1\cdot b) \cdot
-(v_2\cdot A, ~w_2\cdot\Lambda^2(A) + v_2 \cdot b) \\
-= (v_1\cdot A + v_2\cdot A, ~w_1\cdot\Lambda^2(A) + v_1 \cdot b + w_2\cdot\Lambda^2(A) + v_2 \cdot b + (v_1\cdot A)\wedge(v_2\cdot A) ) \\
-= ( (v_1+v_2)\cdot A, ~(w_1 + w_2 + v_1 \wedge v_2)\cdot\Lambda^2(A) + (v_1+v_2)\cdot b ) \\
-= (v_1+v_2, ~w_1+w_2+v_1\wedge v_2)M \\
-= ( (v_1,w_1)\cdot(v_2,w_2) )M
-\end{array}
-$$
-The composition of homomorphisms M and M′ is given by matrix multiplication:
-$$M\cdot M' = \begin{bmatrix} A & b \\ . & \Lambda^2(A) \end{bmatrix}
-\cdot \begin{bmatrix} A' & b' \\ . & \Lambda^2(A') \end{bmatrix}
-= \begin{bmatrix} A\cdot A' & A\cdot b' + b \cdot \Lambda^2(A') \\ . & \Lambda^2(A\cdot A')\end{bmatrix}
-$$
-When dim(V)=2, it would be nice to use parameters (A,u) to describe M, but the question is how to multiply in these parameters: (A,u)⋅(A′,u′) is not simply (A⋅A′, Au′+u⋅det(A′)). Here A∧u′ is the element of Hom(V,W) which takes v in V to (vA)∧u′ in W. We can in fact simplify A∧u′ to just ∧Bu′ for some matrix B related to A.
-Unfortunately, A ≠ B in general. Instead, one gets B = A^(-T) ⋅ det(A), I believe. In other words, up to a determinant, we get the action of A on its dual module rather than on its natural module. I'd like a clearer description of this action, since I am pretty certain the real answer has the natural module. Hopefully, there is another dual I am forgetting that makes everything ok. 
At any rate, I have no trouble doing it with coordinates as below:
-
-It is comforting to put coordinates on the group: [x,y,z] = ( x⋅e1 + y⋅e2, z⋅e1∧e2 ). As matrices, this can be realized as:
$$[x,y,z] = \begin{pmatrix} 1 & x & z + xy \\ . & 1 & 2y \\ . & . & 1 \end{pmatrix}$$
-Multiplication follows the rule [a,b,c]⋅[x,y,z] = [a+x,b+y,c+z+ay-bx].
-Powers obey the very simple rule [x,y,z]^n = [ nx, ny, nz ].
-Commutators obey the simple rule [ [a,b,c], [x,y,z] ] = [ 0, 0, 2(ay-bx) ].
-Now consider a matrix: $$ A = \begin{pmatrix} a & b & e \\ c & d & f \\ . & . & ad-bc \end{pmatrix}$$
-One can check that in these coordinates A is an automorphism. In particular, the function that takes the row vector [x,y,z] times the matrix A to get a new row vector [x′, y′, z′ ] defines an automorphism of G. Composition of automorphisms is matrix multiplication. In this coordinate system everything is perfect.<|endoftext|>
-TITLE: the Riemann integrability of inverse function
-QUESTION [11 upvotes]: If $f \colon [a,b] \rightarrow [c,d]$ is a bijection, $f\in \mathcal{R}$ and $f^{-1}$ exists, then prove or disprove that $f^{-1} \in \mathcal{R} [c,d]$.
-Remark: I tried to use integration by parts to find $\int_{c}^{d} f^{-1}$ and to prove that it was the limit of the corresponding Riemann sums, but failed. But I think this idea might be useful.
-
-REPLY [12 votes]: Let $C\subset[0,1]$ be the middle thirds Cantor set, and let $D\subset[0,1]$ be a fat Cantor set. Define $f:[0,1]\to[0,1]$ such that $f\vert_C$ is an order preserving homeomorphism of $C$ onto $D$ (by mapping to corresponding endpoints of the removed intervals), and $f\vert_{[0,1]\setminus C}$ is an order reversing homeomorphism of $[0,1]\setminus C$ onto $[0,1]\setminus D$ (by mapping linearly to corresponding removed intervals and then composing with $x\mapsto 1-x$). Then $f$ is discontinuous at each point of $C$ and continuous at each point of $[0,1]\setminus C$. Since $C$ has measure zero, $f$ is Riemann integrable. On the other hand, $f^{-1}$ is discontinuous at each point of $D$, so it is not Riemann integrable.
-If you don't want to appeal to the Lebesgue criterion of Riemann integrability, you could work explicitly with Riemann sums of $f$ and $f^{-1}$. In the case of $f$, you can choose partitions such that the contribution of intervals containing points of $C$ is arbitrarily small. In the case of $f^{-1}$, the values of $f^{-1}$ on $D\cap[\frac{1}{2},1]$ are at least $\frac{2}{3}$ and the values of $f^{-1}$ on $[\frac{1}{2},1]\setminus D$ are at most $\frac{1}{2}$. Every interval contains points from $[0,1]\setminus D$, so the difference between upper and lower sums will always be at least $(\frac{2}{3}-\frac{1}{2})(\frac{1}{2}m(D))\gt 0$.<|endoftext|>
-TITLE: In neutral geometry, can a family of parallel lines leave holes in the plane?
-QUESTION [6 upvotes]: In neutral plane geometry, Euclidean geometry without the parallel postulate, I want to show that the family of parallel lines all perpendicular to a given line passes through all of the plane, leaving no holes.
-My present formulation of this idea is as follows: Given two lines $l$ and $m$ and a transversal $t$ perpendicular to $l$ at point $A$ and passing through line $m$ at $B$, and given another point $C$ on line $m$ forming a triangle $\triangle ABC$. Then there exists a line $n$ perpendicular to $l$ (and therefore parallel to $t$) which passes through the interior of side $\overline{AC}$ (and therefore the interior of the triangle).
-This claim is easy to show in Euclidean geometry where all parallel lines are equidistant. In neutral geometry, parallel lines may bend away from each other, and this claim is much less obvious. Since it appears to be true in models of hyperbolic geometry, I'm guessing it is true in neutral geometry, but I'm at a loss for how to show it.
-Update: It occurs to me that this question seems a bit more esoteric than it really is. So let me point out that neutral geometry is really very familiar territory for many of you. As I mentioned, it is just ordinary Euclidean geometry without the axiom that asserts the uniqueness of a line parallel to a given line and through a given point, and without any consequences that follow from that.
-So the usual suspects (theorems) are present:
-Isosceles triangle theorems, SAS, ASA, SSS, AAS.
-The exterior angle theorem, triangle inequality, scalene inequality, hinge theorem.
-Actually, many parallel line theorems hold, namely those that say things like "If two lines cut by a transversal form a pair of congruent alternate angles, then the lines are parallel." or "If two lines share a common perpendicular, then they are parallel."
-What is not true are theorems that assert the converse: "If two lines are parallel, then ..."; it is also not true that "If lines $l$ and $m$ are parallel and lines $m$ and $n$ are parallel, then $l$ and $n$ are parallel," or that parallel lines are equidistant (but equidistant lines are parallel).
-Other theorems that are missing include:
-The Pythagorean theorem
-The angle sum theorem (instead the sum of the angles in a triangle must be 180 or less)
-Rectangles may not exist.
-And the weird one: there may be no such thing as similarity.
-Update: To make the answer below understandable, you need (also in the comments but hidden):
-Proof that two lines perpendicular to a given line are parallel: If they were not parallel, they would intersect, forming an isosceles triangle with two 90 degree angles. The complementary angle to either angle would also be 90 degrees. But by the exterior angle theorem, that exterior angle must be strictly greater than 90 degrees, which is a contradiction.
-
-REPLY [2 votes]: Comments:
-That there are no holes follows from the neutral geometry theorem that for any point P, there is a line through P perpendicular to your given line.
-Your "proof" in the last paragraph is correct except you mean "supplementary" angle, not "complementary" angle.<|endoftext|>
-TITLE: Polish Spaces and the Hilbert Cube
-QUESTION [16 upvotes]: I've been trying to prove that every Polish space is homeomorphic to a $G_\delta$ subspace of the Hilbert Cube. There is a hint saying that given a countable dense subset of the Polish space $\{x_n : n\in\mathbb{N}\}$ define the function $f(x)=(d(x,x_n))_{n\in\mathbb{N}}$ (where $d$ is a metric on the Polish space). I think I've shown that $f$ is continuous and $1-1$ but I don't know how to prove that the inverse of the function is continuous and that the image is a $G_\delta$ set. It's been a really long time since I've dealt with topology so I'm having a hard time coming up with any idea and I'm afraid I've forgotten some well-known topological fact (so maybe it's something obvious here I'm not seeing). Any help on how to proceed would be kindly appreciated.
-
-REPLY [18 votes]: I take it that you've already normalized $d$ such that $0 \leq d \leq 1$ (otherwise replace $d$ by $\frac{d}{1+d}$).
-As you've said, the function $f: x \mapsto f(x) = (d(x,x_{n}))_{n \in \mathbb{N}}$ is continuous and injective.
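-It remains to check that the inverse map $f^{-1} \colon f(X) \to X$ is continuous as well; this can be verified with sequences.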
Let $f(y_{m}) \to f(y)$ be a convergent sequence in $f(X)$. We want to show that $y_{m} \to y$.
-By definition of the product topology, we have $d(y_{m},x_{n}) \xrightarrow{m \to \infty} d(y,x_{n})$ for all $n$. Let $\varepsilon > 0$ and pick a point $x_{n}$ such that $d(y,x_{n}) < \varepsilon/3$ by density. Since $d(y_{m},x_{n}) \to d(y,x_{n})$, there is $M$ such that $|d(y_{m},x_{n}) - d(y,x_{n})| < \varepsilon /3$ for all $m \geq M$, so $d(y_{m},x_{n}) < 2 \varepsilon /3$. But then $d(y_{m},y) \leq d(y_{m},x_{n}) + d(x_{n},y)< \varepsilon$ and hence $y_{m} \to y$.
-
-Why is the image a $G_{\delta}$-set? This seems to be much more difficult. I don't see any easier way than to essentially re-prove two classical results on metric spaces which are much more interesting, so I prefer to explain this:
-Theorem (Kuratowski)
-Let $A \subset X$ be a subset of a metrizable space and let $g: A \to Y$ be a continuous map to a completely metrizable space $Y$. Then $g$ can be continuously extended to a $G_{\delta}$-set containing $A$.
-Fix a bounded and complete metric on $Y$. For the proof we need the notion of oscillation of $g$ at a point $x \in \overline{A}$ (the closure of $A$ in $X$) defined by
$$\displaystyle
\operatorname{osc}_{g}(x) = \inf\{\operatorname{diam}g(U \cap A)\,:\, x \in U, \;U\; \text{open}\}.
$$
-The set $B = \{x \in \overline{A}\,:\,\operatorname{osc}_{g}(x) = 0\}$ is a $G_{\delta}$-set. To see this, note that $B_{n} = \{x \in \overline{A} \,:\, \operatorname{osc}_{g}(x) < \frac{1}{n}\}$ is an open subset of the closed set $\overline{A}$ and $B = \bigcap_{n \in \mathbb{N}} B_{n}$. The continuity of $g$ implies that $A \subset B$. Now define $f: B \to Y$ by $f(x) = \lim g(x_{n})$, where $(x_{n})$ is a sequence in $A$ with $x_{n} \to x$. It is not hard to show that $f$ is well-defined (because $\operatorname{osc}_{g}(x) = 0$ implies that $g(x_{n})$ is a Cauchy sequence) and clearly $f$ extends $g$ and is continuous.
-The second ingredient we need is:
-Theorem (Lavrentiev)
-Let $X$ and $Y$ be completely metrizable spaces and let $g: A \to B$ be a homeomorphism from $A \subset X$ onto $B \subset Y$. Then there exist $G_{\delta}$-sets $G \supset A$ and $H \supset B$ and a homeomorphism $f: G \to H$ extending $g$.
-Let $h = g^{-1}$. Choose $G_{\delta}$-sets $G' \supset A$ and $H' \supset B$ and continuous extensions $g': G' \to Y$ and $h': H' \to X$ by Kuratowski's theorem. Let $Z = \operatorname{graph}(g') \cap \widetilde{\operatorname{graph}}(h') \subset X \times Y$ be the intersection of the graphs (the tilde indicates the 'switch' $\widetilde{(y,x)} = (x,y)$ of coordinates) and let $G = \operatorname{pr}_{X} (Z)$ and $H = \operatorname{pr}_{Y}(Z)$. Obviously, $f = g'|_{G}$ is a homeomorphism of $G$ onto $H$. One can check that $H$ (and thus also $G$ by symmetry) is a $G_{\delta}$-set as follows: The graph of $g'$ is closed in $G' \times Y$ and thus it is a $G_{\delta}$-set and $H$ is its preimage under the continuous map $y \mapsto (h'(y),y)$.
-Corollary. If $Y$ is a completely metrizable space and $X \subset Y$ a completely metrizable subspace then $X$ is a $G_{\delta}$-set.
-By Lavrentiev's theorem, the inclusion $X \subset Y$ extends to a homeomorphism onto its image.
-A further corollary of these ideas is that a subset of a Polish space is Polish if and only if it is a $G_{\delta}$.
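-For instance, the irrationals $\mathbb{R} \setminus \mathbb{Q}$ are completely metrizable (they are homeomorphic to the Baire space $\mathbb{N}^{\mathbb{N}}$ via continued fraction expansions), and indeed they form a $G_{\delta}$ in $\mathbb{R}$, namely $\mathbb{R} \setminus \mathbb{Q} = \bigcap_{q \in \mathbb{Q}} (\mathbb{R} \setminus \{q\})$. By contrast, $\mathbb{Q}$ is not a $G_{\delta}$ in $\mathbb{R}$ (by the Baire category theorem), so $\mathbb{Q}$ is not completely metrizable.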
-More detailed information can be found in any decent book on descriptive set theory, for instance Kechris, Classical descriptive set theory, or Srivastava, A course on Borel sets, both of which appeared in the Springer Graduate Texts in Mathematics series.<|endoftext|>
-TITLE: Are there other pseudo-random distributions like the prime-numbers?
-QUESTION [9 upvotes]: Do there exist other structures in math which are seemingly random but deterministic, and which follow rules similar to the prime numbers? By rules I mean there must be statements similar to Goldbach's conjecture or the twin prime conjecture, etc.; for instance, I don't consider the digits of pi to have this sort of structure. Is there some field in math which deals with distributions which "seem" random, but still follow certain rules?
-What are some examples of similar structures?
-Also, I want to know what are the necessary axioms to produce the prime-number distribution, and whether one can replace these axioms to get other pseudo-random distributions which are not "isomorphic" to the prime numbers, and which are not constructible in regular axiomatic systems?
-
-REPLY [3 votes]: The Ulam numbers are a good example of a seemingly "random" sequence, which certainly satisfies the Goldbach requirement (every positive integer is the sum of two Ulam numbers) and I believe it is conjectured that there are infinitely many pairs n, n+1 of Ulam numbers (I conjecture it, at least).
-It is not hard to invent infinitely many sequences with sufficiently high natural density for which analogues to Goldbach and the Twin Prime conjectures can be expected to apply. Give it a try!
-You might enjoy Halberstam and Roth's book Sequences if you are interested in these sorts of features of sequences.<|endoftext|>
-TITLE: demonstration by induction: $(1+a)^n ≥1+an$
-QUESTION [6 upvotes]: Demonstrate by induction that $(1+a)^n ≥ 1+an$ is true for any $n ∈ \mathbb N$, given a real number $a > 0$.
-I need to demonstrate this using the induction principle. My doubt is in the second part of the demonstration.
-This is what I have:
-In order to demonstrate the predicate these two points must be true:
-
-$P(1)$ must be True.
-$∀ n ∈ \mathbb N , P(n) => P(n+1)$ is true.
-
-We demonstrate the first point:
-$(1+a)^1 ≥ 1+a\cdot 1$,
-$1+a ≥ 1+a$, so it is true.
-Now, the second part is where I have the problem. I do not know what to do. I understand the theory but I don't know how to apply it.
-I tried this:
-$(1+a)^{n+1} ≥ 1+a(n+1)$
-But I don't see that as useful.
-Any tips?
-
-REPLY [2 votes]: Before you do anything else, ask yourself: do you even believe that this is true? Have you tried any examples with specific numbers? Fixing the value of $a$ (especially taking $a$ to be very small) and varying the value of $n$, perhaps? Does the result seem plausible after doing this? Do your experiments suggest anything about what's going on?<|endoftext|>
-TITLE: Tetrahedral torus
-QUESTION [9 upvotes]: Is it possible to form a closed loop by joining regular (Platonic) tetrahedra together side-to-side, with each tetrahedron having two neighbours? It should be a loop with a hole in it, as can be done with 8 cubes, or 8 dodecahedra as shown below. What is the minimum number of tetrahedra needed?
-
-Edit:
-Could it be that it is possible to create such a ring by allowing the tetrahedra to extend in one more spatial dimension ($\mathbb{R}^4$)?
-
-REPLY [3 votes]: To the follow-up question about allowing embeddings in $\mathbb{R}^4$: the answer is yes.
One simple example is to start with the hexadecachoron or 600-cell and remove some of the constituent tetrahedra.<|endoftext|>
-TITLE: Integrating $e^{f(x)}$
-QUESTION [6 upvotes]: Can someone tell me a way of integrating functions like $e^{f(x)}$?
-I have a specific case: $\int e^{-3x}\,\mathrm{d}x$
-PS: I'm not looking for the answer to this, but the way of doing it.
-Thanks for your help
-
-REPLY [5 votes]: For the case where $f(x)$ is linear, a nice $u$-substitution works. I assume you know how to integrate $\int e^xdx$? So in order to integrate a function of the form $e^{f(x)}$, let $u=f(x)$, and thus $du=f'(x)dx$, which allows you to 'solve' for $dx$ in terms of $du$. Then your original integral goes from:
$$
\int e^{f(x)}dx
$$
to
$$
\int \frac{e^u}{f'(x)}du.
$$
-Of course, this is not always so easy to integrate, as Moron points out. When $f(x)$ is linear, you have a nice situation, because $f'(x)$ is just a constant. Other situations may not be so easily handled, as far as I'm aware.<|endoftext|>
-TITLE: Infinite series $n^7/(\exp(2\pi n)-1)$
-QUESTION [18 upvotes]: I found an interesting topic on this site with regards to the series I am trying to evaluate:
-Summing $\frac{1}{e^{2\pi}-1} + \frac{2}{e^{4\pi}-1} + \frac{3}{e^{6\pi}-1} + \cdots \text{ad inf}$
-I was wondering if there is a closed form for even m when we have:
$$\sum_{n=1}^{\infty}\frac{n^{2m-1}}{e^{2\pi n}-1}$$
-I have the series $$\sum_{n=1}^{\infty}\frac{n^{7}}{e^{2\pi n}-1},$$
-I am trying to evaluate.
-The aforementioned thread mentions that if m > 1 and odd, then we can use
$$\frac{B_{2m}}{4m}$$ to find the sum.
-But, if m is even, the formula omits an error term.
-Does anyone have info on this error term or how to evaluate my series or others that involve even $m$?
-I noticed that when m=1, there is an error term of $$-\frac{1}{8\pi}$$
-The error appears to get smaller the larger m, and thus the power of n, becomes.
-Thanks to all.
-
-REPLY [3 votes]: The sums you want all appear in $\displaystyle E_{4k}(q)=1-\frac{4k}{B_{2k}}\sum_{n \ge 1}\frac{n^{4k-1} q^n}{1-q^n}\,$, with $q=e^{-2\pi}$ and $B_{2k}$ the ${2k}\,$th Bernoulli number. By the recurrence relation from Wikipedia (which uses $d_k=(4k+6)E_{2k+4}k!\zeta(2k+4)$), all $E_{4k\ge 8}$ can be expressed in terms of $E_4$ and $E_6$ (but $E_6(e^{-2\pi})=0$, so you only need to consider $E_4$).
-Now to find $E_4$, plug in $\tau=i$ into the equation for the modular discriminant$$g_2^3(\tau)-27g_3^2(\tau)=(2\pi)^{12}e^{2i\pi\tau}\prod_{k\ge 1}(1-e^{2ik\pi\tau})^{24}$$and use the well known value $\prod_{k\ge 1}(1-e^{-2k\pi})=\frac{e^{\pi/12}\Gamma(1/4)}{2\pi^{3/4}}$ along with $g_2(i)=120\zeta(4)E_4(e^{-2\pi})$ and $g_3(i)=0$ (N.B. $g_3(i)=280\zeta(6)E_6(e^{-2\pi})$)
-Finally, the recurrence relations simplify to $E_{4n}=A_n E_4^n$, where $A_n$ is a rather unwieldy rational number (the first few though can be figured easily from the Wikipedia link above, under the subsection "Products of Eisenstein series").
-(Edit) I've confirmed the first two sums are $\sum_{k\ge1} \frac{k^3}{e^{2\pi k}-1}=-\frac1{240}+\color{maroon}{\frac{\Gamma^8(1/4)}{5120\pi^6}}$and$\sum_{k\ge1} \frac{k^7}{e^{2\pi k}-1}=-\frac1{480}+\color{maroon}{\frac{3\Gamma^{16}(1/4)}{655360\pi^{12}}}$, but I didn't get consistent results with Maple for the next sum.
-(There shouldn't be any weird spaces now.)<|endoftext|>
-TITLE: A place to learn about math etymology?
-QUESTION [24 upvotes]: I was recently wondering where the word `kernel' comes from in mathematics.
I am sure the internet must know. I did manage to find
-http://www.pballew.net/etyindex.html#k
-which contains the origin of many math words but it doesn't have `kernel'; it also only seems to address math words you might run into in at most a 1st or 2nd year college math course. I'm wondering if anyone is aware of a reference that might address the origin of more modern math words (but not so modern that it doesn't discuss kernel).
-EDIT: By `math etymology' I mean why a particular word was used for a certain math concept. Of course such an explanation might involve discussing the actual etymology of the word.
-
-REPLY [10 votes]: You'll want to check out Earliest Known Uses of Some of the Words of Mathematics, maintained by Jeff Miller. In particular, an entry for 'kernel' appears here.
-
-REPLY [4 votes]: It's unclear if you want to discuss the reason for a particular choice of a word (whether coined or borrowed), or the etymological meaning of the word. For example, "kernel" is a perfectly fine English word, referring to a whole seed of a cereal (the word coming from the Indo-European root greno-). But the etymological origin is somewhat different from the reason why it was chosen to refer to the particular mathematical concept it refers to.
-One source for the etymology (and sometimes mathematical origin) is The Words of Mathematics: An Etymological Dictionary of Mathematical Terms Used in English, by Steven Schwartzman, published by the Mathematical Association of America. It will tell you things like what the Latin origin of "conjecture" is, and so on. It does not provide much in the way of historical mathematical origin (i.e., why the word was chosen for that particular concept). Sometimes this is clear from the origin of the word, but often this is not the case.<|endoftext|>
-TITLE: Kähler differentials of affine varieties
-QUESTION [9 upvotes]: I would like to gain some intuition regarding the modules of Kähler differentials $\Omega^j_{A/k}$ of an affine algebra $A$ over a (say - algebraically closed) field $k$.
-Let us recall the definition: let $A^e = A\mathrel{\otimes_k} A$, let $f:A^e\to A$ be the map defined by $f(a\otimes b) = ab$, and let $I = \ker f$. Then $\Omega^1_{A/k} = I/I^2$. And, $\Omega^j_{A/k} = \bigwedge^j \Omega^1_{A/k}$.
-An important theorem regarding Kähler differentials says: If $k \to A$ is smooth of relative dimension $n$, then $\Omega^n_{A/k}$ is a projective module of finite rank.
-My question:
-
-I was wondering if anyone could provide some examples of:
-
-How does the module of Kähler differentials look for some singular varieties? For example, what is $\Omega^1_{A/k}$ for $A = k[x,y]/(y^2-x^3)$?
-
-Can anyone provide an example of a non-singular affine variety with coordinate ring $A$, such that $k \to A$ is smooth of relative dimension $n$, and $\Omega^n_{A/k}$ is projective but not free?
-
-
-
-I would be happy for any concrete example that will help my intuition on the subject.
-Thanks!
-
-REPLY [4 votes]: "Let us recall the definition"
-$\Omega^1=I/I^2$ is not a definition, it is a construction (and I know at least three other constructions). The definition of the module $\Omega^1$ is its universal property, i.e. that it classifies derivations.
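-To spell the universal property out: there is a $k$-derivation $d \colon A \to \Omega^1_{A/k}$ such that, for every $A$-module $M$, composition with $d$ gives a natural isomorphism $$\operatorname{Hom}_A(\Omega^1_{A/k}, M) \cong \operatorname{Der}_k(A,M),$$ i.e. $\Omega^1_{A/k}$ represents the functor of $k$-linear derivations of $A$.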
-The following two facts follow from the universal property (I will omit the base ring $k$ from the notation):
-
-$\Omega^1_{k[x_1,\dotsc,x_n]}$ is a free $k[x_1,\dotsc,x_n]$-module with basis $d(x_1),\dotsc,d(x_n)$
-For an ideal $I \subseteq A$ the sequence $I/I^2 \xrightarrow{d} \Omega^1_{A} \otimes_A A/I \to \Omega^1_{A/I} \to 0$ is exact.
-
-From this we get that $\Omega^1_{k[x_1,\dotsc,x_n]/(f_1,\dotsc,f_r)}$ is the module over $k[x_1,\dotsc,x_n]/(f_1,\dotsc,f_r)$ generated by symbols $d(x_1),\dotsc,d(x_n)$ subject to the relations $d(f_1)=\dotsc=d(f_r)=0$, where $d(f)$ is expressed using the $d(x_i)$ (as in 1.).
-For example, let $A=k[x,y]/(y^2-x^3)$. Then $d(y^2-x^3)=2 y d(y) - 3 x^2 d(x)$. Hence, $\Omega^1_{A}$ is the $A$-module generated by symbols $d(x),d(y)$ subject to the relation $2 y d(y) = 3 x^2 d(x)$.
-For another example, consider $A=k[x,y]/(x^2+y^2-1)$ (the coordinate ring of a "circle"). Let us assume $2 \in k^*$, so that $A$ is smooth over $k$. Then $d(x^2+y^2-1)=2 x d(x) + 2 y d(y)$, so that $\Omega^1_{A}$ is the $A$-module generated by $d(x),d(y)$ subject to the relation $x d(x) = - y d(y)$. One can show that $x d(y) - y d(x)$ is a basis of $\Omega^1_{A}$.<|endoftext|>
-TITLE: Is there any relation between the eigenvalues of possibly non-Hermitian matrix A and those of exp(A)?
-QUESTION [5 upvotes]: Is there any relation between the eigenvalues of possibly non-Hermitian matrix A and those of exp(A)?
-For Hermitian matrices, they are just exponentials of the corresponding values. But in general, any relation?
-Thanks.
-
-REPLY [4 votes]: If you are willing to forget about problems of convergence for a moment, think first with polynomials: let
$$
f(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n
$$
and $A$ any square matrix. Then
$$
f(A) = a_0 I + a_1 A + a_2 A^2 + \cdots + a_n A^n \ .
$$
-Assume $v$ is an eigenvector of $A$ with eigenvalue $\lambda$: $Av = \lambda v$. Then
$$
\begin{align}
f(A)v &= a_0 Iv + a_1 Av + a_2 A^2v + \cdots + a_n A^nv \\
 &= a_0 v + a_1 \lambda v + a_2 \lambda^2 v + \cdots + a_n \lambda^n v \\
 &= \left( a_0 + a_1 \lambda + a_2 \lambda^2 + \cdots + a_n \lambda^n \right) v \\
 &= f(\lambda ) v \ .
\end{align}
$$
-That is, $v$ is also an eigenvector of $f(A)$ with eigenvalue $f(\lambda )$.
-The same result holds for analytic functions, such as $\exp$: if
$$
f(t) = \sum_{n=0}^\infty a_n t^n
$$
then also $v$ is an eigenvector of $f(A)$ with eigenvalue $f(\lambda )$.
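-As a quick numerical sanity check (my own illustration in Python with NumPy/SciPy, not something from the thread): for a random non-Hermitian matrix, the eigenvalues of exp(A) agree with the exponentials of the eigenvalues of A.
-import numpy as np
-from scipy.linalg import expm
-
-rng = np.random.default_rng(0)
-A = rng.standard_normal((4, 4))   # a generic real matrix, not Hermitian
-lam = np.linalg.eigvals(A)        # eigenvalues of A
-mu = np.linalg.eigvals(expm(A))   # eigenvalues of exp(A)
-
-# Compare the two spectra as multisets (sort by real part, then imaginary part).
-key = lambda z: (round(z.real, 6), round(z.imag, 6))
-print(sorted(np.exp(lam), key=key))
-print(sorted(mu, key=key))        # matches the line above up to rounding error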
So, no matter what kind of square matrix $A$ we have: if $v$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $v$ is an eigenvector of $\exp (A)$ with eigenvalue $\exp (\lambda )$ too.<|endoftext|>
-TITLE: Describing a particular quotient ring involving a subring of $\mathbb{Q}[x]$
-QUESTION [6 upvotes]: I had to do some homework problems involving the polynomial ring $R=\mathbb{Z}+x\,\mathbb{Q}[x]\subset\mathbb{Q}[x]$. This is an integral domain but not a UFD. Further, $x$ is not prime in $R$.
-One of the problems was to describe $R/(x)$.
-Since $x$ is not a prime element, we know $(x)$ is not a prime ideal. So at the very least, $R/(x)$ is not an integral domain.... but what else can I say? This is perhaps something I should not admit, but problems of this form have always befuddled me. I know there's not any one "answer" they're looking for, but I never quite know what to say.
-Anyway, this homework has been submitted already, so I am not including a homework tag. I'm just curious how you all would describe this particular quotient.
-
-REPLY [5 votes]: The key is to understand what that ideal $(x)$ looks like in the first place. So, clearly all polynomials in that ideal have degree at least 1, but what degree 1 polynomials can occur? For $f(x)\cdot x$ to have degree 1, $f(x)$ must have a non-zero constant term. That term must be in $\mathbb{Z}$, and it is easy to see that you get any coefficients for higher degrees. So $(x)=\mathbb{Z}x + \mathbb{Q}x^2+\ldots+\mathbb{Q}x^n+\ldots$.
-Thus, in the quotient, any coset is represented by some $n + ax$, $n\in \mathbb{Z},a\in \mathbb{Q}$, and another element $m+bx$ represents a different coset if and only if $n\neq m$ or $a-b\notin\mathbb{Z}$. In other words, the quotient is isomorphic to $\left(\mathbb{Z} + (\mathbb{Q}/\mathbb{Z})\,x\right)/(x^2)$, i.e., $\mathbb{Z} \oplus (\mathbb{Q}/\mathbb{Z})x$ with $x^2 = 0$.<|endoftext|>
-TITLE: When is a tensor product of two commutative rings noetherian?
-QUESTION [25 upvotes]: In particular, I'm told if $k$ is a commutative ring, $R$ and $S$ are commutative $k$-algebras such that $R$ is noetherian, and $S$ is a finitely generated $k$-algebra, then the tensor product $R\otimes_k S$ of $R$ and $S$ over $k$ is a noetherian ring.
-
-REPLY [14 votes]: Even for algebras over finite fields, “tensor products of Noetherian rings are Noetherian” may fail dramatically. Assume for example that $K=F((x_i)_{i \in B})$ is a function field. When $B$ is finite, then $K \otimes_F K$ is a localization of $F[(x_i)_{i \in B}, (x'_i)_{i \in B}]$, thus noetherian. Now assume that $B$ is infinite. Then $\Omega^1_{K/F}$ has dimension $|B|$. Since it is isomorphic to $I/I^2$, where $I$ is the kernel of the multiplication map $K \otimes_F K \to K, x \otimes y \mapsto x \cdot y$, it follows that $I$ is not finitely generated, hence $K \otimes_F K$ is not noetherian.
-The general case is treated in the following paper:
-
-P. Vámos, On the minimal prime ideals of a tensor product of two fields, Mathematical Proceedings of the Cambridge Philosophical Society, 84 (1978), pp. 25-35
-
-Here is a selection of some results of that paper: Let $K,L$ be extensions of a field $F$.
-
-If $K$ is a finitely generated field extension of $F$, then $K \otimes_F L$ is noetherian.
-If $K,L \subseteq F^{\mathrm{alg}}$ are separable algebraic extensions of $F$, and $L$ is normal, then $K \otimes_F L$ is noetherian iff $K \otimes_F L$ is a finite product of fields iff $[K \cap L : F] < \infty$.
-If there is an extension $M$ of $F$ which sits inside $K$ and $L$, which has a strictly ascending chain of intermediate fields, then $K \otimes_F L$ is not noetherian.
-If $K \otimes_F L$ is noetherian, then $\min(\mathrm{tr.deg}_F(K),\mathrm{tr.deg}_F(L)) < \infty$.
-$K \otimes_F K$ is noetherian iff the ascending chain condition holds for intermediate fields of $K/F$ iff $K$ is a finitely generated field extension of $F$.<|endoftext|>
-TITLE: Contributions of Galois Theory to Mathematics
-QUESTION [32 upvotes]: What are the major and minor contributions of Galois Theory to Mathematics? I mean direct contributions (like being applied as it appears in Algebra) or simply by serving as a model to other theories.
-I have my own answers and point of view to this question, but I think it would be nice to know your opinion too.
-
-REPLY [2 votes]: The concept of a soluble group
-has its origin in Galois theory, but it is nowadays an object of study in itself and
-many important results in the modern theory of finite groups concern soluble
-groups.<|endoftext|>
-TITLE: Origin of mathematical use of "orbit"
-QUESTION [5 upvotes]: If $G$ is a group acting on a set $S$, then the "orbit" of a point $x$ in $S$ is defined as the set of all elements of the form $gx$ where $g \in G$. My question: why was the word "orbit" chosen for this concept? It is not obviously related to previous uses of this word, such as the path of a planet around the Sun.
-
-REPLY [11 votes]: It seems the concept goes back further than D.W. Hall (a student of Whyburn) - at least to Kuratowski; cf. the excerpt from Kuczma et al., Iterative functional equations, p. 14. One can probably find further historical details by Googling "Kuratowski-Whyburn orbit" etc.
-
-Note: for a simple yet enlightening example of the key role that orbits play in the solution of functional equations see this post.<|endoftext|>
-TITLE: Do all infinite subsets of $\mathbb N$ have a bijective function to $\mathbb N$?
-QUESTION [7 upvotes]: Does there exist for all subsets of the natural numbers with infinitely many elements a function that maps each element in that subset to each of the natural numbers, and the other way around?
-
-REPLY [11 votes]: If $A\subseteq \mathbb{N}$ and $A$ is infinite then, since $\aleph _0$ is the smallest infinite cardinality, $\aleph_0 \leq |A|\leq |\mathbb{N}| =\aleph_0$, so A and $\mathbb{N}$ have the same cardinality - meaning there is a bijection between the two.
-This can also be seen directly - suppose $A\subseteq \mathbb{N}$ and $A$ is infinite, define a function $\varphi: \mathbb{N}\rightarrow A$ as follows: A has a minimal number $a_1 \in A$ so $\varphi(1)=a_1$. $A-\{a_1\}$ has minimal number $a_2$ so define $\varphi(2)=a_2$, and in general, A has a k-th minimal number $a_k$ so $\varphi(k)=a_k$. This is true for all k since A is infinite.
-It is easy to see that this is an injective map. To show surjectivity, notice that if $a\in A$ then there are only finitely many natural numbers (say $k-1$ of them) smaller than $a$ that are in $A$, so $\varphi(k)=a$.
-
-For the case of $\mathbb{Z}$ this is also true, since $|\mathbb{Z}|=2|\mathbb{N}|=2\aleph_0 =\aleph_0$.
-For the direct approach, suppose $A\subseteq \mathbb{Z}$ is infinite. Let $B=A\cap \mathbb{N}$; then if $B$ and $A-B$ (the positive and negative parts of A) are infinite then there is a bijection between B and $\mathbb{N}$ and a bijection between $A-B$ and the negative integers, so join the two bijections to get one from A to $\mathbb{Z}$.
-Suppose now that one of them is finite, wlog $A-B$ is finite, so A has a minimal number. The same proof as before shows that there is a bijection between A and $\mathbb{N}$.
-There is a bijection between $\mathbb{N}$ and $\mathbb{Z}$ by ordering the integers, for example by $0,1,-1,2,-2,3,-3,...$. Now just take the composition of the maps - from A to $\mathbb{N}$ and then to $\mathbb{Z}$.<|endoftext|>
-TITLE: Why are spectral sequences called "spectral"?
-QUESTION [11 upvotes]: Why are spectral sequences called "spectral"? Is that use of "spectral" related to other uses in math, such as spectra in homotopy theory, the spectrum of a ring in algebraic geometry or the spectrum of an operator? Why are those things called spectral, for that matter?
-
-REPLY [5 votes]: As per Adrián Barquero's request, I post my comment as an answer (even if I think that it doesn't answer the question but rather indicates that the answer isn't really known).
-See the related discussion on Math Overflow.<|endoftext|>
-TITLE: Which unfolding of an icosahedron has the least number of edges to be glued?
-QUESTION [7 upvotes]: Does every unfolding of an icosahedron have the same number of edges to be glued to construct it back into the solid?
-If yes, what are those numbers for Platonic solids?
-If no, which unfoldings have the least number of edges to be glued for Platonic solids?
-
-REPLY [8 votes]: The question has been answered, but if you don't mind a bit of additional information
-which I find interesting: An icosahedron may be cut open and refolded to a flat, doubly covered parallelogram. I wrote a note on this, from which the figure below is copied.
-
-Addendum. I was not addressing the issue, just showing a neat unfolding. :-)
-To address the issue more directly: for an icosahedron, $E=30$, $F=20$, and $E-F+1=11$.
-Note that in the illustrated unfolding, the cut tree is a Hamiltonian path of 11 edges.
-Of course, every spanning cut tree must have the same number of edges (as per Rahul's answer).
-So the unfolding is bounded by 22 half-edges, each corresponding to the cutting
-of one of the 11 edges of the path. But notice that many of the original polyhedron
-edges end up collinear in the unfolding. As a polygon, the unfolding has 16 edges,
-or 16 "half-edges" if you think of each as deriving from 8 straight slices on
-the icosahedron surface. So there is a sense in which one need only glue 8 segments
-to fold the unfolding back to the icosahedron.<|endoftext|>
-TITLE: Dynamic programming - A type of balanced 0-1 matrix
-QUESTION [8 upvotes]: I was reading the Wikipedia article on Dynamic programming; however, I'm having a hard time understanding the explanation given in the example for a type of balanced 0-1 matrix.
-The problem is stated as:
-
-Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, n even, so that each row and each column contains exactly n / 2 zeros and n / 2 ones.
-
-I don't quote the whole text because I suppose there is no need to do that. However, I don't really understand the explanation after " The process of subproblem creation involves iterating..." in the third paragraph of the subsection.
-Can anyone explain that procedure better? Of course, an example of this at work would be great.
-Thanks
-
-REPLY [8 votes]: The dynamic programming algorithm uses a memoized recursive function that solves a more flexible subproblem:
-The function takes a number k of rows to fill, and for each column, the number of 1s it has to place in those k rows.
-It has to return the number of possible assignments of 1s in the block of k rows such that each row contains n/2 1s, and each column contains the proper number of 1s stated in the argument.
-The principle is that:
-
-if a column has to contain a negative number of 1s, there are 0 solutions.
-if k=0 (and all the columns have to be empty), then there is 1 solution.
-otherwise, an assignment of a block of k rows is an assignment of the 1st row + an assignment of a block of (k-1) rows.
-
-So it iterates over the possible assignments of the 1st row. For each candidate assignment, it recursively asks itself for the number of ways of completing the remaining (k-1) rows with the updated information about the number of 1s that have to be in each column.
-If you don't memoize the function, this is exactly what the backtracking algorithm does: essentially, n nested, extremely big loops.
-If you memoize the function, you will find that it is called for the same k and for the same constraints on the columns a great number of times.
-For example, while doing the backtracking algorithm, at some point you choose the assignment A1 for row 1 and A2 for row 2, and proceed to count every solution starting with (A1,A2) with (n-2) nested loops. Then later in the execution, you choose the assignment A2 for row 1 and A1 for row 2, and proceed to count every solution starting with (A2,A1), again with (n-2) nested loops.
-But if you look carefully at the subproblem you have to solve, it has the same constraints in both cases.
-A solution in the first case is the same as a solution in the 2nd case, only with the first two rows switched. So you are counting the same number twice.
-If you memoize your recursive function, then it will recognize that these two subproblems are equivalent because they ask for the same constraints, and only compute its result once. In all the later calls it will directly return with that same result, and avoid a very expensive recomputation.<|endoftext|>
-TITLE: Determine the *interval* in which the solution is defined?
-QUESTION [5 upvotes]: The ODE: $y' = (1-2x)y^2$
-Initial Value: $y(0) = -1/6$
-I've solved the particular solution, which is $1/(x^2-x-6)$. I don't understand what they mean by where the solution is defined, because when I graph $1/(x^2-x-6)$, it's only discontinuous at $x = -2, 3$.
-What does "Determine the interval in which the solution is defined" mean?
-
-REPLY [9 votes]: It makes sense to consider solutions only on intervals that contain the initial time. The "interval where the solution is defined" (I would call it the (maximal) interval of existence) is the largest among all intervals $I$ which contain 0 and on which a solution exists. This maximal interval turns out to be $(-2,3)$ in your case.
-So why doesn't it make sense to consider solutions defined on other subsets of $\mathbb{R}$ than intervals, e.g. $(-\infty,-2)\cup (-2,3)\cup (3,\infty)$? One reason is that you wouldn't have uniqueness of solutions; for example, $$\begin{cases}1/(x^2-x-6) & x<3 \cr 0 & x>3\end{cases}$$ would be another "solution".
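-Both branches really do solve the ODE where they are defined: for $y = 1/(x^2-x-6)$ one checks directly that $y' = -(2x-1)/(x^2-x-6)^2 = (1-2x)y^2$, while the zero function satisfies $y' = (1-2x)y^2$ trivially on $(3,\infty)$; the initial condition $y(0) = -1/6$ holds as well.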
What happens in the other components which don't contain 0 can be rather arbitrary, so one does not allow them.<|endoftext|>
-TITLE: recommend paper on application of group theory
-QUESTION [6 upvotes]: The application field can vary from biology/biochemistry to computer science/coding theory; the more unexpected the connection to a field, the better. And the paper should preferably not be a very long one.
-Thank you.
-
-REPLY [2 votes]: Tom Fiore and his coauthors have used group theory to study various pieces of music. You might enjoy the article "Musical actions of dihedral groups" (http://www-personal.umd.umich.edu/~tmfiore/1/CransFioreSatyendra.pdf), or take a look at Tom's music page: http://www-personal.umd.umich.edu/~tmfiore/1/music.html<|endoftext|>
-TITLE: Limits and convolution
-QUESTION [5 upvotes]: Let $f,g \in L^2(\mathbb{R}^n)$, $\{ f_n \}, \{ g_m \} \subset C^\infty_0(\mathbb{R}^n)$ (infinitely differentiable functions with compact support) where $f_n \to f$ in $L^2$, and $g_m \to g$ in $L^2$.
-Then $f_n \star g_m(x) \to f \star g(x)$ pointwise as $n,m \to \infty$ (where $\star$ denotes convolution).
-Is it true that $\int_{\mathbb{R}^n} f_n \star g_m dx \to \int_{\mathbb{R}^n} f \star g dx$?
-I've thought about using the dominated convergence theorem, but I couldn't figure out how, since I don't really know what I can do with the sequences of functions given only the facts at hand.
-
-REPLY [3 votes]: For any integrable $f$ and $g$ (so in particular for functions in $C^\infty_0({\mathbb R}^n)$) one has $\int_{{\mathbb R}^n} f \ast g = \int_{{\mathbb R}^n} f \int_{{\mathbb R}^n} g$.
-So your question is asking if $\int_{{\mathbb R}^n} f_n \int_{{\mathbb R}^n} g_n$ necessarily converges to $\int_{{\mathbb R}^n} f \int_{{\mathbb R}^n} g$. A counterexample will be provided by letting $f_n = g_n$ such that $f_n$ converges to $f$ in $L^2$ but such that $\int_{{\mathbb R}^n} f_n$ does not converge to $\int_{{\mathbb R}^n} f$. To do this (writing the formulas in dimension one; in $\mathbb{R}^d$ use $n^{-d}\phi(x/n)$ instead), one can let $\phi(x)$ be any smooth function with compact support and integral one, and then let $f_n(x) = {1 \over n} \phi({x \over n})$. Each $f_n$ will have integral $1$, but $f_n(x)$ converges in $L^2$ to $f(x) = 0$ which has integral zero.<|endoftext|>
-TITLE: If $|f(z)|\lt a|q(z)|$ for some $a\gt 0$, then $f=bq$ for some $b\in \mathbb C$
-QUESTION [5 upvotes]: If $q\colon\mathbb{C}\to\mathbb{C}$ is a polynomial, $f\colon\mathbb{C}\to\mathbb{C}$ is analytic on all of $\mathbb{C}$, and if there exists $a\gt 0$ such that $|f(z)| \lt a|q(z)|$ for every $z\in \mathbb{C}$, then $f = bq$ for some $b\in \mathbb{C}$.
-Can an arbitrary analytic function (on all of $\mathbb{C}$) replace $q$?
-
-REPLY [5 votes]: If you really mean strict inequality, then this follows from Liouville's theorem applied to $f/q$. Note that $q$ must be constant too, as $q$ can have no zeros from the condition.
-It is actually true if $<$ is replaced by $\leq$, though. (Maybe you meant this?) First, the Cauchy estimates show that $f$ is a polynomial; indeed, since $f$ grows polynomially, we can just take averages of $f/(z-\alpha)^N$ over larger and larger circles. From this one can see that $f$ is a polynomial.
-Now the question reduces to showing that if a polynomial $p$ is bounded by a constant multiple of another polynomial $q$, then $p$ is a constant multiple of $q$. This is a straightforward consequence of factorization.<|endoftext|>
-TITLE: If $A$ is an $n \times n$ matrix such that $A^2=0$, is $A+I_{n}$ invertible?
-QUESTION [13 upvotes]: If $A$ is an $n \times n$ matrix such that $A^2=0$, is $A+I_{n}$ invertible?
-This question yielded two different proofs from my professors, which managed to get conflicting results (true and false). Could you please weigh in and explain what's happening, and offer a working proof?
-Proof that it is invertible: Consider the matrix $A-I_{n}$. Multiplying $(A+I_{n})$ by $(A-I_{n})$
-we get $A^2-AI_{n}+AI_{n}-I^2_{n}$. This simplifies to $A^2-I^2_{n}$ which is equal to $-I_{n}$, since $A^2=0$. So, the professor argued, since we have shown that there exists a $B$ such that $(A+I_{n})$ times $B$ is equal to $I$, $(A+I_{n})$ must be invertible. I am afraid, though, that she forgot about the negative sign that was leftover in front of the $I$ -- from what I understand, $(A+I_{n})(A-I_{n})=-I$ does not by itself mean that $(A+I_{n})$ is invertible.
-Proof that it is not invertible: Assume that $A(x)=0$ has a non-trivial solution. Now, given $(A+I_{n})(x)=\vec{0}$, multiply both sides by $A$. We get $A(A+I_{n})(x)=A(\vec{0})$, which can be written as $(A^2+A)(x)=\vec{0}$, which simplifies to $A(x)=0$, as $A^2=0$. Since we assumed that $A(x)=0$ has a non-trivial solution, we just demonstrated that $(A+I_{n})(x)=\vec{0}$ has a non-trivial solution, too. Hence, it is not invertible.
-I am not sure if I reproduced the second proof with complete accuracy (I think I did), but the idea was to show that if $A(x)=\vec{0}$ has a non-trivial solution, $A(A+I_{n})$ does too, rendering $A(A+I_{n})$ non-invertible. But regardless of the proofs, I can think of examples that show that at least in some cases, the statement is true; consider matrices $\begin{bmatrix}
0 & 0\\
0 & 0
\end{bmatrix}$ and $\begin{bmatrix}
0 & 1\\
0 & 0
\end{bmatrix}$ which become invertible when $I_{2}$ is added to them.
-Thanks a lot!
-
-REPLY [13 votes]: The minus sign is not an obstacle: If $AB = -I$, then $A(-B) = -(AB) = -(-I) = I$. So in fact, if $A^2 = 0$, then $(A+I)(I-A) = A - A^2 + I - A = I$, so $A+I$ is invertible, as your first professor noted.
-The error in the second argument is the following: It is true that if $B\mathbf{x}=\mathbf{0}$ has a nontrivial solution, then $CB\mathbf{x}=\mathbf{0}$ has a nontrivial solution. Thus, if $B$ is not invertible, then $CB$ is not invertible. But that is not what was argued. What was argued instead was that since $CB\mathbf{x}=\mathbf{0}$ has a nontrivial solution, then it follows that $B\mathbf{x}=\mathbf{0}$ has a nontrivial solution (with $B=A+I$ and $C=A$). This argument is incorrect: you can always take $C=0$, and that would mean that no matrix is invertible.
-It is certainly true that if $A$ is not invertible, then no multiple of $A$ is invertible (so for every $C$, neither $CA$ nor $AC$ are invertible); so you can deduce that $A(A+I)$ is not invertible. This does not prove that $A+I$ is not invertible, however, which is what you wanted to show.
-Now, for bonus points, show that if $A$ is an $n\times n$ matrix and $A^k=0$ for some positive integer $k$, then $A+\lambda I_n$ is invertible for any nonzero $\lambda$.
-Added: For bonus bonus points, explain why the argument would break down if we replace $\lambda I_n$ with an arbitrary invertible matrix $B$.
-
-REPLY [12 votes]: For what it's worth I just want to mention that what is happening here is actually an instance of a more general result about rings. If $R$ is a ring then an element $a \in R$ is said to be nilpotent if there is $n \in \mathbb{N}$ such that $a^n = 0$.
-In your question, the condition on your matrix that $A^2 = 0$ just means that it is nilpotent.
-Now if $R$ is a ring with unity and $a \in R$ is nilpotent, it can be proved that $1 - a$ is an invertible element (or unit) in the ring $R$, meaning that there is a $b \in R$ such that $(1 - a)b = b(1 - a) = 1$. From this you can then just replace $a$ with $-a$ to also see that $1 + a$ is invertible.
-I'm pretty sure that this is what Arturo had in mind by adding that exercise for bonus points for you, so I will not give the argument here. You can find it in this planetmath entry if you want to look at it. But I would suggest that you first try it for yourself, for matrices at least.
-
-REPLY [6 votes]: I suggest thinking of the problem in terms of eigenvalues. Try proving the following:
-If $A$ is an $n \times n$ matrix (over any field) which is nilpotent -- i.e., $A^k = 0$ for some positive integer $k$ -- then $-1$ is not an eigenvalue of $A$ (or equivalently, $1$ is not an eigenvalue of $-A$).
-If you can prove this, you can prove a stronger statement and collect bonus points from Arturo Magidin.
-(Added: Adrian's answer -- which appeared while I was writing mine -- is similar, and probably better: simpler and more general. But I claim it is always a good idea to keep eigenvalues in mind when thinking about matrices!)
-Added: here's a hint for a solution that has nothing to do with eigenvalues (or, as Adrian rightly points out, really nothing to do with matrices either). Recall the formula for the sum of an infinite geometric series:
-$\frac{1}{1-x} = 1 + x + x^2 + \ldots + x^n + \ldots$
-As written, this is an analytic statement, so issues of convergence must be considered. (For instance, if $x$ is a real number, we need $|x| < 1$.) But if it happens that some power of $x$ is equal to zero, then so are all higher powers and the series is not infinite after all... With only a little work, one can make purely algebraic sense out of this.<|endoftext|>
-TITLE: Invertible $N \times N$ matrix over ${\rm GF}(2)$ having on each row and column $N/2$ ones
-QUESTION [5 upvotes]: As per the title, I'm looking for the name and for a way to construct a ${\rm GF}(2)$ square matrix of size $N$ with the following properties:
-
-All rows/columns should be linearly independent
-On each row and each column there should be $N/2$ ones and $N/2$ zeros
-
-Can anybody provide some pointers? It doesn't have to be a general solution (i.e. for all $N$); as long as it works for some $N$ it will be OK.
-edit: It has been pointed out that if N is a multiple of 4 the two conditions above are incompatible. Let me relax number 2) by allowing, when N is a multiple of 4, the rows and columns to be (minimally) unbalanced.
-edit2: Bonus points for pointing out a way to build matrices like the ones above, with the added constraint that the conditions above should be valid also for their inverse (obviously number 1 is implied).
-
-REPLY [4 votes]: It's not possible for $N$ divisible by 4, because the vectors with an even number of 1's form a proper subspace of ${\rm GF}(2)^N$. For $N=6$ there are many examples, such as
$$\begin{pmatrix}1&1&1&0&0&0\\1&1&0&1&0&0\\0&0&1&1&0&1\\1&0&0&0&1&1\\0&1&0&0&1&1\\
0&0&1&1&1&0\end{pmatrix}$$<|endoftext|>
-TITLE: Proving that the terms of the sequence $(nx-\lfloor nx \rfloor)$ are dense in $[0,1]$.
-QUESTION [14 upvotes]: I have been doing a basic math course on real analysis. I encountered a problem which reads: "Prove that $na \pmod 1$ is dense in $(0,1)$, where $a$ is an irrational number and $n\ge1$."
-I tried to prove it using only basic principles. First of all, I proved that the sequence defined above is infinite and also that it is bounded, so by the Bolzano-Weierstrass theorem it has a limit point in $(0,1)$. But to prove denseness I need to prove that for any given interval $(a,b) \subseteq (0,1)$ there is at least one element of the sequence in it. I am not seeing how to link that limit point to the interval $(a,b)$. Can anyone help me with this? It would be of great help.
-
-REPLY [10 votes]: It is better than dense, it is equidistributed (in $\mathbb{R}/\mathbb{Z}$), meaning that if $f: \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{C}$ is continuous (it is equivalent to give $f$ or a $1$-periodic continuous function from $\mathbb{R}$ to $\mathbb{C}$), $\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{k=1}^n f(kx)= \int_0^1 f$.
-This can be easily proven using the fact that the polynomials in $t \mapsto e^{2 i \pi t}$ and $t \mapsto e^{-2 i \pi t}$ (so, sums of the form $\sum_{l=-n}^n \lambda_l e^{2 i \pi l t}$) are dense (for the supremum norm) in our space of $1$-periodic continuous functions (Weierstrass theorem), and it is easy to compute $\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{k=1}^n e^{2 i \pi l k x} = \delta_{l,0}$.
-The density of your sequence follows: if an open subset of $\mathbb{R}/\mathbb{Z}$ (or $]0,1[$) did not contain any element of the sequence, using a "test function" $f$, non-negative, not identically zero and whose support is contained in our given open subset, would contradict the equidistribution.
-EDIT: if you don't use the density theorem of Weierstrass above, you could also use the fact that $x \mathbb{Z} + \mathbb{Z}$ is dense in $\mathbb{R}$ (since it has no smallest positive element, otherwise $x$ would be rational), but this way you only get density, not equidistribution.
-
-REPLY [6 votes]: A hint on one way to prove it (not necessarily shortest) without using much theory: what is the difference between successive terms in the sequence?
-(More explicit hint: Let $\{x\} = x - \lfloor x \rfloor$ be the fractional part of $x$. Try proving that $$(n+1)x-\lfloor(n+1)x\rfloor \equiv (nx - \lfloor nx \rfloor) + \{x\} \mod 1 .$$
-Can you take it from here, using the fact that $\{x\}$ is not rational?)<|endoftext|>
-TITLE: Example of an UnDecidable Logical Theory which is an extension of a Logical Decidable Theory?
-QUESTION [6 upvotes]: Let $T_1$ and $T_2$ be two first-order logical theories (over the same signature) such that $T_1 \subseteq T_2$ and both are recursively axiomatized.
-My question is the following: is it possible that $T_1$ is finitely axiomatizable and decidable, while its extension $T_2$ is undecidable?
-I would like to know whether or not there is a pair of logical theories with the previous requirements, but I have not been able to find any answer surfing the web. There is a paper by Verena Huber Dyson with the title "On the decision problem for theories of finite models" where there is an example satisfying all these conditions except finite axiomatizability. This paper is quite old (1964), so maybe someone here (if not, I will try mathoverflow) can provide me with a better answer.
-Addendum: By @JDH's comments it is clear that
-
-$T_1$ cannot be a complete theory (because then there are no proper consistent extensions).
-$T_2$ cannot be finitely axiomatizable (because finite extensions of decidable theories are decidable)
-
-REPLY [7 votes]: Here are a few observations:
-
-Any finite extension of a decidable theory is decidable. That is, if $T_1$ is decidable and $T_2$ is obtained by adding finitely many additional axioms, with conjunction $\varphi$, then $T_2$ is also decidable, since $T_2\vdash\psi$ is the same as $T_1\vdash\varphi\to\psi$.
-If $T_2$ is c.e. and complete, then it is decidable, since to decide whether $T_2\vdash\varphi$ one searches either for $\varphi$ or for $\neg\varphi$ in $T_2$.
-As I point out in the comment to Chris's answer, a theory is c.e. if and only if it is computably axiomatizable. The reverse implication is just enumerating proofs. For the forward direction, replace each axiom $\varphi$ with an enormous conjunction $\varphi\wedge\cdots\wedge\varphi$ whose length is the time it takes to appear in the computable enumeration. The set of these conjunctions (although a silly trick) axiomatizes the same theory and is decidable.
-There are finitely axiomatizable theories in the language of arithmetic that have no consistent decidable extensions. For example, Robinson's $Q$ is an example of this. One reason is that if $Q\subset T$ and $T$ were decidable and consistent, then we could find a computable separation of the two sets $A$ and $B$ consisting of the TM programs that halt with output $0$ and $1$, respectively. The reason is that any such halting behavior would be provable by $Q$ and hence also by $T$, and so we would find a computable set $C$ containing $A$ and disjoint from $B$. But this contradicts that these sets are computably inseparable---by the recursion theorem we could create a program that halts with output $0$ iff it is not in $C$, a contradiction.
-There are numerous examples of finitely axiomatizable $T_1$ with c.e. $T_2$ that are not decidable. Chris Eagle gave one in his answer, if you take his set $A$ to be c.e. but not decidable. You can do his trick with any decidable theory $T_1$, provided that it has models with at least two elements. Just add new constant symbols $c_i$ and add the axioms $c_i=c_j$ whenever $i,j\in A$, for some fixed c.e. non-decidable set $A$. We can enumerate these axioms, so the resulting theory $T_2$ is c.e., but not decidable, since from a decision procedure for $T_2$ we could decide membership in $A$.<|endoftext|>
-TITLE: Spectrum of a product of operators on a Banach space
-QUESTION [13 upvotes]: Let $A$ and $B$ be two operators on a Banach space $X$.
-I am interested in the relationship between the spectra
-of $A$, $B$ and $AB$.
-In particular, are there any set-theoretic inclusions, or can anything happen in general? For example:
-$\sigma(A) \subset \sigma(AB)$, and conversely,
-$\sigma(B) \subset \sigma(AB)$, and conversely?
-If we know the spectra $\sigma(A)$, $\sigma(B)$ of
-$A$ and $B$, can we determine the spectrum of $AB$?
-I would appreciate any comment or reference.
-
-REPLY [28 votes]: If $A$ and $B$ commute, then $\sigma(AB)\subseteq \sigma(A)\sigma(B)$. This can be seen by applying the Gelfand transform to the commutative Banach algebra consisting of all operators that commute with all operators that commute with $A$ and $B$. (The reason to take the double commutant rather than simply the Banach algebra generated by $A$, $B$, and the identity is that the spectra of $A$, $B$, and $AB$ may be larger relative to the latter algebra.)
-If $A$ and $B$ do not commute, things aren't as nice.
E.g., if $A$ is a nonzero nilpotent (or quasinilpotent) operator on a Hilbert space and $B=A^*$, then $\sigma(A)=\sigma(B)=\{0\}$, but the spectral radius of $AB$ is $\|A\|^2\neq0$. For a very concrete example consider $A=\begin{bmatrix}0 & 1 \\ 0 & 0 \end{bmatrix}$ and $B=\begin{bmatrix}0 & 0 \\ 1 & 0 \end{bmatrix}$. -The spectra of $A$ and $B$ can also be much larger than that of $AB$, even in the commutative case. The spectrum of an invertible operator $A$ can be an arbitrary compact subset of the complex plane not containing $0$, but $\sigma(AA^{-1})=\{1\}$. For a noncommutative example, the spectrum of a nonunitary isometry $S$ on Hilbert space (like the unilateral shift) is the closed unit disk, while $\sigma(S^*S)=\{1\}$ and $\sigma(SS^*)=\{0,1\}$. - -Here's a little more detail on the first paragraph. Let $a$ and $b$ be commuting elements of a unital Banach algebra $\mathcal{A}$ (for instance, $\mathcal{A}$ could be the algebra of all bounded operators on a Banach space). In order to compare the spectrum of $ab$ to that of $a$ and $b$, we want to apply the Gelfand transform to a commutative Banach algebra $\mathcal{B}$ containing $a$ and $b$. One must be careful, however, because if $a,b,1_\mathcal{A}\in\mathcal{B}\subset\mathcal{A}$, there may be elements of $\mathcal{B}$ that are invertible in $\mathcal{A}$ but whose inverses lie in $\mathcal{A}\setminus\mathcal{B}$. To ensure that this doesn't affect the spectra of $a$, $b$, or $ab$, one can either just stipulate that $\mathcal{B}$ also contains all elements of the form $(a-\lambda)^{-1}$, $(b-\lambda)^{-1}$, and $(ab-\lambda)^{-1}$, as $\lambda$ ranges over the resolvent sets of $a$, $b$, and $ab$ respectively, or just take the double commutant of $\{a,b\}$ in $\mathcal{A}$, which will automatically contain all of these elements. -Once the problem is reduced to the case of elements $a$ and $b$ of a commutative unital Banach algebra $\mathcal{B}$, the Gelfand transform gives you a homomorphism $\Gamma:\mathcal{B}\to C(X)$, where $C(X)$ is the (Banach) algebra of continuous complex-valued functions on the maximal ideal space of $\mathcal{B}$ with pointwise operations (and $\sup$ norm, but that isn't very important here). One of the most fundamental properties of the Gelfand transform is that it preserves spectra; that is, for all $c\in\mathcal{B}$, $\sigma(c)=\Gamma(c)(X)$. Using this fact, the inclusion $\sigma(ab)\subseteq\sigma(a)\sigma(b)$ is translated to a tautological fact about the range of a pointwise product of functions. -To prove that $\sigma(c)=\Gamma(c)(X)$, remember that $X$ can be identified with (or is defined to be, depending on your perspective) the set of nonzero homomorphisms from $\mathcal{B}$ to $\mathbb{C}$. If $\phi\in X$, then $\phi(c-\phi(c))=0$, which, since unital homomorphisms send invertible elements to invertible elements, shows that $c-\phi(c)$ is not invertible. This shows that $\Gamma(c)(\phi)$ is in the spectrum of $c$, so $\Gamma(c)(X)\subseteq\sigma(c)$. Conversely, if $\lambda\in\sigma(c)$, then there is a maximal ideal $M$ of $\mathcal{B}$ containing $c-\lambda$, and by the Mazur-Gelfand theorem there is a unique isomorphism $\mathcal{B}/M\cong \mathbb{C}$. Composing with the quotient map gives a $\phi\in X$ with kernel $M$, so in particular $\phi(c-\lambda)=\phi(c)-\lambda = 0$, showing that $\lambda=\Gamma(c)(\phi)$, so that $\sigma(c)\subseteq\Gamma(c)(X)$. -To learn more about this, I recommend consulting any good textbook on functional analysis or Banach algebras. 
For example, Rudin, Douglas, and Rickart are good sources for the Gelfand theory that vary widely in emphasis and additional topics covered. As for your question about whether $\sigma(T^{-1}ATB)\subseteq\sigma(A)\sigma(B)$ for commuting $A$ and $B$, the answer is that this need not hold if $T^{-1}AT$ and $B$ do not commute. For example, let $A=B=\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}$ and $T=\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}$.<|endoftext|> -TITLE: extension/"globalization" of inverse function theorem -QUESTION [6 upvotes]: I am curious as to what changes we need to make to the hypotheses of the -inverse function theorem in order to be able to find the global differentiable inverse to a differentiable function. We obviously need $f$ to be a bijection, and $f'$ to be non-zero. Is this sufficient for the existence of a global differentiable inverse? -For functions $f\colon\mathbb{R}\to\mathbb{R}$, we have -Motivation: -$f^{-1}(f(x))=x$, so $(f^{-1})'(f(x))f'(x)=1$ -Then, we could define $(f^{-1})'(f(x))$ to be $1/f'(x)$ (this is the special case of the formula for the differentiable inverse -- when it exists -- in the IFT) -(and we are assuming $f'(x)\neq 0$) -In the case of $\mathbb{R}^2$, I guess we could think of all the branches of $\log z$ and $\exp z$, and we do have at least a branch-wise global inverse, i.e., if/when $\exp z$ is 1-1 (and it is, of course, onto $\mathbb{C}\setminus\{0\}$), then we have a differentiable inverse. -I guess my question would be: once the conditions of the IFT are satisfied: -in how big of a neighborhood of $x$ can we define this local diffeomorphism, -and, in which case would this neighborhood be the entire domain of definition of $f$? -I guess the case for manifolds would be a generalization of the case of -$\mathbb{R}^n$, but it seems like we would need for the manifolds to have a single chart. -So, are the conditions of $f$ being a bijective, differentiable map sufficient -for the existence of a global differentiable inverse? And, if $f$ is differentiable, but not bijective, does the IFT hold in the largest subset of the domain of definition of $f$ where $f$ is a bijection? -Thanks. - -REPLY [6 votes]: There is a theorem ("Introduction to Smooth Manifolds," Lee, Thm 7.15) for differentiable manifolds which says that: - -If $F: M \to N$ is a differentiable bijective map of constant rank, then $F$ is a diffeomorphism -- so in particular $F^{-1}$ is differentiable. - -Here, the rank of a differentiable map $F\colon M \to N$ at a point $p \in M$ is defined to be the rank of its pushforward $F_*\colon T_pM \to T_{F(p)}N$. (Some authors use the word "differential" for "pushforward," and use the notation $d_pF$ for $F_*$.)<|endoftext|> -TITLE: Why are signed and complex measures typically not allowed to assume infinite values? -QUESTION [8 upvotes]: In a number of real analysis texts (I am thinking of Folland in particular), three different kinds of measures are defined. - -Positive measures: Take values in $[0, +\infty]$ -Signed measures: Take values in either $(-\infty, \infty]$ or $[-\infty, \infty)$, but cannot assume both $+\infty$ and $-\infty$. -Complex measures: Take values in $\mathbb{C}$. Any kind of infinity is not allowed. - -My question is: why is this? Is this because of how we set up integrals with respect to these measures, or does it have to do with adding and subtracting measures to make new ones? - -REPLY [3 votes]: It has to do with adding measures to satisfy countable summability.
Notice that in the complex plane $+\infty$ and $-\infty$ are the same value. So we can't allow one without the other, and who knows what it should mean to add them? On the other hand if we stick to finite values then the summability condition is clear.<|endoftext|> -TITLE: Show that $a^2 = b$ in a field $\mathbb{F} = \{0,1,a,b\}$ -QUESTION [5 upvotes]: I need some help proving the following: - -Let $\mathbb{F} = \{0, 1, a, b\}$ be a field with four elements. Prove that $a^2 = b$. - You can use $a \cdot 0 = 0$ without proving it. - -Attempted solution: -$$ -a^2 = b \\ -aa = b \\ -aa + 0 = b + 0 -$$ -We know $a \cdot 0 = 0$ so we can substitute for zero -$$ - aa + a \cdot 0 = b -$$ -Using the additive inverse of $aa$ we get: -$$ - (aa) + (-aa) + a \cdot 0 = b + (-aa) \\ - a \cdot 0 = b + (-aa) -$$ -I'm still not getting the concept of how to prove things; maybe a little insight into possible steps to approach problems like these would help. -That's as far as I got; any help is appreciated. - -REPLY [8 votes]: Consider that the map $x \mapsto ax$ must be bijective (because the field is finite -- should be easy to show). We know, -$$a \cdot 0 = 0$$ -$$a \cdot 1 = a$$ -This means $a\cdot a = b$ or $a \cdot a = 1.$ But in the latter case we get $a \cdot b = b,$ which implies $a = 1,$ which is not true (by uniqueness of 1). - -REPLY [3 votes]: Note that $0$ and $1$ are distinguished elements, so none of the others should be equal to them. -Now, $b \cdot a$ cannot be equal to $a$ or $b$ because then $b$ or $a$ would be $1$, respectively, but that cannot happen as they are distinct elements. Similarly, $b \cdot a$ cannot be $0$. Hence, $b \cdot a = 1$. -Use this type of reasoning to think what $a^2$ cannot be, and you'll find the answer: what is $a$ if $a^2 = 0$?<|endoftext|> -TITLE: Why is every abelian group the direct limit of its finitely generated subgroups? -QUESTION [21 upvotes]: I'm taking classes in homological algebra now, and the book (together with the lecturer) seems to assume more category theory than I already know. -A "fact" that is used freely in the book ("Homological algebra" by Weibel, by the way), is that every abelian group is (isomorphic to) the direct limit of its finitely generated subgroups. -However, I do not find this particularly obvious. How does one go about proving such things? - -REPLY [22 votes]: The fact that a group is the direct limit of its finitely generated subgroups is true for any group, not just abelian ones. -Remember what the direct limit of groups is. You start with a directed set $I$ (directed means that it is a partially ordered set, and if $i,j\in I$, then there exists $k\in I$ such that $i\leq k$ and $j\leq k$). For each $i\in I$, you have a group $H_i$, and if $i\leq k$, then you have a morphism $f_{ik}\colon H_i\to H_k$. The system $\{H_i, f_{ij}\}_{i\in I}$ must also satisfy: - -$f_{ii} = \mathrm{id}_{H_i}$ for each $i\in I$; and -If $i\leq j\leq k$, then $f_{ik} = f_{jk}\circ f_{ij}$. - -The direct limit is then constructed as follows: we let $X$ be the disjoint union of the sets $H_i\times\{i\}$, and we define the equivalence relation $\sim$ on $X$ as follows: given $(h,i)$ and $(h',j)$, we let $(h,i)\sim (h',j)$ if and only if there exists $k\in I$ such that $i\leq k$, $j\leq k$, and $f_{ik}(h) = f_{jk}(h')$. -We then define a group operation on $X/\sim$ as follows: given $[(h,i)]$ and $[(h',j)]$ in $X/\sim$, let $k\in I$ be such that $i\leq k$ and $j\leq k$.
We define $$[(h,i)][(h',j)] = [(f_{ik}(h)f_{jk}(h'),k)].$$ -Note that this makes sense, since $f_{ik}(h)$ and $f_{jk}(h')$ are both in $H_k$. It is then an exercise to show that this gives you a group with the appropriate universal property. -Intuition. The idea is just that two elements in $X$ are "equal" if and only if they "eventually" map to the same thing. And we define the product by first mapping both elements sufficiently "ahead" that they both lie in the same group, and then multiplying them. The direct limit is then a way of "gluing" all of the $H_i$ in a compatible way. -The reason we expect the direct limit of the finitely generated subgroups to "be" the group (that is, isomorphic to the group) is that any particular operation we want to do with group elements always occurs in a finitely generated subgroup: if you are performing an operation inside the group, the operation uses only finitely many elements, so it all happens inside a finitely generated group. Thus, everything that "determines" what the group is is captured if you look at all finitely generated subgroups: to know how to multiply $x$ by $y$, you can multiply them in the finitely generated subgroup $\langle x,y\rangle$, after all. The direct limit is just a way of putting together all of these subgroups, which we should be able to do since they are all "really" already glued together inside of $G$. -Sketch of proof. -Now, let $G$ be a group. Let $I$ be the collection of all finitely generated subgroups of $G$, and order $I$ by inclusion of subgroups. For each element $i\in I$, let $H_i$ be the subgroup $i$ itself (remember that $i$ is a subgroup of $G$ by definition of $I$). If $i\leq j$, then $i\subseteq j$ as sets, so we let $f_{ij}\colon H_i \to H_j$ be the inclusion map. Note that if $i=j$, then $f_{ii} = \mathrm{id}_{H_i}$, as required, and if $i\leq j\leq k$, then $H_i\subseteq H_j\subseteq H_k$, and the inclusion $H_i\hookrightarrow H_k$ is the composition of the inclusions $H_i\hookrightarrow H_j$ and $H_j\hookrightarrow H_k$. So $\{H_i,f_{ij}\}_{i\in I}$ is a directed system of groups. Let $H$ be the direct limit, and let $f_i\colon H_i\to H$ be the structure maps into the direct limit. The structure maps are one-to-one, because $[(h,i)] = [(e,j)]$ if and only if there exists $k\geq i$ such that $f_{ik}(h) = e$, but all our $f_{ik}$ are one-to-one. -To see that the direct limit is (isomorphic to) a subgroup of $G$, note that you have embeddings $\varphi_i\colon H_i\hookrightarrow G$ from $H_i$ to $G$ for each $i$, and these embeddings commute with the structure maps $f_{ij}$. This means that by the universal property there is a homomorphism $\phi\colon H \to G$ such that $\varphi_i = \phi\circ f_i$ for all $i$. In particular, $\phi$ is an embedding, so $H$ is (isomorphic to) a subgroup of $G$. To see that the map is onto, let $g\in G$. Then letting $i=\langle g\rangle$, we have that $g$ is in the image of $\varphi_i$, hence in the image of $\phi$. Thus, $\phi$ is an isomorphism. -Added. The same argument holds for any collection $\mathcal{C}$ of subgroups of $G$ such that (i) every element of $G$ lies in at least one subgroup in $\mathcal{C}$; (ii) given any two subgroups $H$ and $K$ in $\mathcal{C}$, there is a subgroup $M$ in $\mathcal{C}$ that contains both $H$ and $K$.
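-(As a concrete aside before the list of classes this applies to: here is a toy Python model of the pair-and-lift construction above. It is not the subgroup system from the proof; it uses the directed system $\mathbb{Z}/p \to \mathbb{Z}/p^2 \to \cdots$ with $f_{ij}(h) = hp^{j-i}$, whose direct limit is the Prüfer $p$-group.)
-p = 2
-
-def lift(elem, k):
-    # apply f_{ik}: Z/p^i -> Z/p^k, sending h to h * p^(k-i)
-    h, i = elem
-    return (h * p ** (k - i)) % p ** k, k
-
-def add(x, y):
-    # map both elements "ahead" to a common level, then add there
-    k = max(x[1], y[1])
-    (h1, _), (h2, _) = lift(x, k), lift(y, k)
-    return ((h1 + h2) % p ** k, k)
-
-def equivalent(x, y):
-    # (h,i) ~ (h',j) iff they agree at some common level; since these
-    # maps are injective, checking at max(i,j) suffices
-    k = max(x[1], y[1])
-    return lift(x, k) == lift(y, k)
-
-print(equivalent((1, 1), (2, 2)))   # True: f_{12}(1) = 2
-print(add((1, 1), (1, 2)))          # (3, 2): lift 1 in Z/2 to 2 in Z/4, add 1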
So it works for $\mathcal{C}$ being "all finitely generated subgroups"; "all subgroups"; for infinite cardinal $\kappa$, "all subgroups of cardinality less than or equal to $\kappa$"; and other classes.<|endoftext|> -TITLE: What is the reverse distributive technique? -QUESTION [5 upvotes]: I have a solution to a logic problem involving propositions where I don't understand how a particular step was carried out. The professor called the step I'm having trouble with reverse distribution. -Prove: $(p \lor q ) \land\lnot (\lnot p \land q)\leftrightarrow p$ -$(p \lor q ) \land (p \lor \lnot q) $ -$p\lor(q \land \lnot q)$ This is the step I don't understand. -$p\lor FALSE $ -p -The second step is throwing me for a loop. What am I not seeing? - -REPLY [8 votes]: Since $\vee$ distributes over $\wedge$, you know that $A\vee(B\wedge C)$ is equivalent to $(A\vee B)\wedge(A\vee C)$. So if you have the former, you can replace it with the latter. -But, likewise, if you have the latter, you can replace it with the former. Your second line, $(p\vee q)\wedge(p\vee\neg q)$, is of the form $(A\vee B)\wedge(A\vee C)$ (with $A=p$, $B=q$, and $C=\neg q$), so it is equivalent to $A\vee(B\wedge C)$, which is exactly the third line. -In other words: instead of using the "distributive property" as usual, you use it "in reverse". It's much like going from $5\times 3 + 5\times 7$ to $5\times(3+7)$, instead of the other way around. You can think of it as the analogue of "factoring out" instead of "distributing through".<|endoftext|> -TITLE: When is a notion of convergence induced by a topology? -QUESTION [24 upvotes]: I'm interested in sufficient conditions for a notion of sequential convergence to be induced by a topology. More precisely: Let $V$ be a vector space over $\mathbb{C}$ endowed with a notion $\tau$ of sequential convergence. When is there a topology $\mathcal{O}$ on $V$ that makes $V$ a topological vector space such that "sequences suffice" in $(V,\mathcal{O})$, e.g. $(V,\mathcal{O})$ is first countable, and convergence in $(V,\mathcal{O})$ coincides with $\tau$-convergence? Is the topology $\mathcal{O}$ uniquely determined? -By a notion $\tau$ of sequential convergence on a vector space $V$ I mean a "rule" $\tau$ which assigns to certain sequences $(v_n)_{n\in\mathbb{N}}\subset V$ (which one would call convergent sequences) an element $v\in V$ (a limit of $(v_n)_n$). One could write $v_n\stackrel{\tau}{\to}v$ in this case. This process of "assigning a limit" should satisfy at least that any constant sequence $(v,v,v,...)$ is convergent and is assigned the limit $v$. Also, given a convergent sequence $(v_n)_n$ with limit $v$ any subsequence $(v_{n_k})_k$ should have $v$ as a limit. -I would also like this concept of assigning a limit to be compatible with addition in $V$ and multiplication by a scalar. -Maybe one should include further restrictions. In fact I would like to know which further assumptions on this "limiting process" one has to assume in order to ensure that this limiting procedure corresponds to an actual topology on $V$ which makes $V$ a topological vector space in which a sequence converges if and only if it $\tau$-converges. -Let me give two examples. If we take for instance a topological vector space $(V,\mathcal{O})$ then we have a notion of convergence in $V$ based upon the set $\mathcal{O}$ of open sets of $V$. This notion of convergence clearly satisfies the above assumptions on $\tau$.
-If on the other hand we consider $L^\infty([0,1])$ equipped with the notion of pointwise convergence almost everywhere, then there is no topology on $L^\infty([0,1])$ which makes $L^\infty([0,1])$ a TVS in which a sequence converges if and only if it converges pointwise almost everywhere. Still, convergence almost everywhere also satisfies the above assumptions on $\tau$. -So the above assumptions on this concept of convergence are necessary but not sufficient conditions for what I mean by a notion of convergence to correspond to an actual topology. The question is: Which further assumptions do I have to make? -On a less general level I'm particularly interested in the following case: Let $G\subset\mathbb{C}^d$ be a domain, $X$ a (complex) Banach space and let $H^\infty(G;X)$ denote the space of bounded holomorphic functions $f\colon G\to X$. Now consider the following notion $\tau$ of sequential convergence on $H^\infty(G;X)$: We say that a sequence $(f_n)_{n\in\mathbb{N}}\subset H^\infty(G;X)$ $\tau$-converges to $f\in H^\infty(G;X)$ if $\sup_{n\in\mathbb{N}}\sup_{z\in G} \|f_n(z)\|_X$ is finite and $f_n(z)$ converges in $X$ to $f(z)$ for every $z\in G$. Is there a topology $\mathcal{O}$ on $H^\infty(G;X)$ such that "sequences suffice" in $(H^\infty(G;X),\mathcal{O})$ and a sequence $(f_n)_{n\in\mathbb{N}}\subset H^\infty(G;X)$ converges w.r.t. the topology $\mathcal{O}$ if and only if it $\tau$-converges? Is this topology $\mathcal{O}$ unique if it exists? What if we drop the "sequences suffice" restriction? Is $(H^\infty,\mathcal{O})$ locally convex? Metrizable? What if we replace $X$ by a more general space like an LCTVS or a Frechet space? -Thank you in advance for any suggestions, ideas or references. - -REPLY [12 votes]: I am addressing only the first part of your question (i.e., nothing with the structure of vector space; only topology and limits of sequences). -I will quote here part of Problems 1.7.18-1.7.20 from Engelking's General Topology. (It would be better if you could get the book. I believe it used to be here, but the links don't work now. Perhaps you'll find it on the Internet.) -An L*-space is a pair $(X, \lambda)$, where $X$ is -a set and $\lambda$ a function (called the limit operator) assigning to some sequences of points of $X$ -an element of $X$ (called the limit of the sequence) in such a way that the following conditions are satisfied: -(L1) If $x_i=x$ for $i = 1,2,\dots$, then $\lambda x_i = x$. -(L2) If $\lambda x_i = x$, then $\lambda x' = x$ for every subsequence $x'$ of $x$. -(L3) If a sequence $\{x_i\}$ does not converge to $x$, then it contains a subsequence $\{x_{k_i}\}$ such -that no subsequence of $\{x_{k_i}\}$ converges to $x$. -These properties are sufficient to define a closure operator on $X$ (not necessarily idempotent). -If $(X,\lambda)$ fulfills an additional condition -(L4) If $\lambda x_i = x$ and $\lambda x^i_j = x_i$ for $i = 1,2,\dots$, then there exist sequences of positive integers -$i_1, i_2,\dots$ and $j_1, j_2, \dots$ such that $\lambda x_{j_k}^{i_k} = x$. -An L*-space $X$ satisfying (L4) is called an S*-space. The closure operator given by an S*-space is idempotent. -Using this closure operator we get a topology, such that the convergence of sequences is given by $\lambda$. A topology can be obtained from an L*-space (S*-space) if and only if the original space is sequential (Frechet-Urysohn). -References given in Engelking's book are Frechet [1906] and [1918], Urysohn [1926a], Kisynski [1960].
-Frechet [1906] Sur quelques points du calcul fonctionnel, Rend. del Circ. Mat. di Palermo 22 (1906), 1-74. -Frechet [1918] Sur la notion de voisinage dans les ensembles abstraits, Bull. Sci. Math. 42 (1918), 138-156. -Kisynski [1960] Convergence du type L, Coll. Math. 7 (1960), 205-211. -Urysohn [1926a] Sur les classes (L) de M. Frechet, Enseign. Math. 25 (1926), 77-83. -NOTE: Some axioms for convergence of sequences are studied in the paper: -Mikusinski, P., Axiomatic theory of convergence (Polish), Uniw. Slaski w Katowicach Prace Nauk.-Prace Mat. No. 12 (1982), 13-21. I do not have the original paper, only a paper which cites this one; it seems that the axioms are equivalent to (L1)-(L3) and the uniqueness of the limit. (But I do not know whether some further axioms are studied in this paper.) -EDIT: In Engelking's book (and frequently in general topology) the term Frechet space is used in this sense, not this one. I've edited Frechet to Frechet-Urysohn above, to avoid the confusion.<|endoftext|> -TITLE: Solving the equation by going into a non-UFD -QUESTION [6 upvotes]: To solve $y^2 + 2 = x^3$ you can factor $(y - \sqrt{-2})(y + \sqrt{-2}) = x^3$, check that the factors are relatively prime, and conclude by unique factorization that both must be cubes; then you can solve it. -What about $y^2 + 5 = x^3$, which does not have unique factoring? - -REPLY [2 votes]: No factorization in rings without unique factorization is necessary. Look at the proof of Theorem 2.2 in http://www.math.uconn.edu/~kconrad/blurbs/gradnumthy/mordelleqn1.pdf.<|endoftext|> -TITLE: What is the most frequent number of edges of Voronoi cells of a large set of random points? -QUESTION [12 upvotes]: Consider a large set of points with coordinates that are uniformly distributed within a unit-length segment. Consider a Voronoi diagram built on these points. If we consider only non-infinite cells, what would be (if any) the typical (most frequent) number of edges (that is, neighbors) for those cells? Is there a limit for this number when the number of points goes to infinity? Does it have anything in common with the kissing number? If so, does it generalize to higher dimensions, that is, 6 for 2D, 13 for 3D etc? - -REPLY [2 votes]: I don't have access to the book that @lhf referenced, but here is a nice topological argument for the expected number of edges of a Voronoi cell in $2$D. Unfortunately, it does not generalize to more than two dimensions. -Consider the dual of the Voronoi diagram, which is the Delaunay triangulation of the given points. This is a planar connected graph, so its Euler characteristic is $\chi = 2$. The Euler characteristic is also given by $\chi = V - E + F$, where $V$, $E$, and $F$ are the number of vertices, edges, and faces in the Delaunay triangulation respectively. Now every face has $3$ edges, while all non-boundary edges are adjacent to $2$ faces. Under reasonable conditions*, the proportion of boundary edges tends to zero, so let us ignore them. This means that $3F \approx 2E$, and plugging this into $V - E + F = 2$ gives $V \approx \frac13E + 2$, where by "$\approx$" I mean the ratio tends to $1$ as $V \to \infty$. So there are about three times as many edges as vertices, and since each edge is incident on $2$ vertices, the average degree of the vertices approaches $6$. As the degree of a vertex in the Delaunay triangulation is precisely the number of edges of the corresponding Voronoi cell, this agrees with your intuition for the $2$D case.
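-(A quick empirical check of this: a sketch, not part of the original answer, using scipy's Delaunay triangulation and averaging over interior vertices only, since the heuristic above ignores the boundary.)
-import numpy as np
-from scipy.spatial import Delaunay
-
-pts = np.random.default_rng(1).random((20000, 2))  # uniform in the unit square
-tri = Delaunay(pts)
-indptr, _ = tri.vertex_neighbor_vertices
-degrees = np.diff(indptr)                # degree of each vertex in the graph
-boundary = set(tri.convex_hull.ravel())  # vertices on the convex hull
-interior = [i for i in range(len(pts)) if i not in boundary]
-print(degrees[interior].mean())          # prints a value close to 6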
(Although, strictly speaking, the expected number of edges is not the same as the most frequent number of edges). -In three dimensions, the Euler characteristic is still $2$, but is now given by $V - E + F - C$, where $C$ is the number of tetrahedral cells in the triangulation. A similar argument as above gives $4C \approx 2F$, but we have no control over $E$. Indeed, there are triangulations on the same point set with the same boundary (the convex hull) but which have different numbers of edges. In the $2$D case, every such triangulation had exactly the same number of edges! So in $3$D one will have to think about the geometry of the Voronoi diagram and/or Delaunay triangulation, and cannot get a result purely via its topology. - -* I believe it is sufficient for the points to be drawn from a uniform distribution on a strictly convex area, but I don't know the details.<|endoftext|> -TITLE: The action of SU(2) on the Riemann sphere -QUESTION [15 upvotes]: One way to get the famous double cover $\text{SU}(2) \to \text{SO}(3)$ is to note that $\text{SU}(2)$ is isomorphic to the group of unit quaternions and to let unit quaternions $q$ act on the subspace $V$ of $\mathbb{H}$ spanned by $i, j, k$ via conjugation $t \mapsto qtq^{-1}$; this preserves the norm. (Alternately, this is the adjoint action on the Lie algebra, which preserves the Killing form.) -Another way to do this is to let $\text{SU}(2)$ act on $\mathbb{P}^1(\mathbb{C})$, which is diffeomorphic to the sphere. This gives an action of $\text{SU}(2)$ by conformal automorphisms. However, I don't know how to prove that $\text{SU}(2)$ actually acts by rotations (at least, not without some explicit and unenlightening calculations). -To be more precise, if we fix an inner product on $\mathbb{C}^2$, then the space of lines in $\mathbb{C}^2$ can be given the Fubini-Study metric, which $\text{SU}(2)$ preserves. But how can I prove that the Fubini-Study metric agrees with the natural metric on the sphere (up to a constant)? - -REPLY [5 votes]: There are of course many possible diffeomorphisms $\mathbb{CP}^1\simeq S^2$, and of course the transported action of $\mathrm{SU}(2)$ on $S^2$ will not be by rotations with respect to all of them. A few canonical diffeomorphisms are given by stereographic projection. (I say "a few" because one could perform the projection regardless of the size of the sphere or where it sits inside three-space $\mathbb{R}\times\mathbb{C}$.) -Every element of $\mathbb{CP}^1$ is a $\mathbb{C}$-line in $\mathbb{C}^2$, say $\mathbb{C}[\begin{smallmatrix} \alpha \\ \beta\end{smallmatrix}]$ (where $[\begin{smallmatrix} \alpha \\ \beta\end{smallmatrix}]$ is a unit vector), which corresponds to a projection map $p\in M_2(\mathbb{C})$ given by $p(x)=[\begin{smallmatrix} \alpha \\ \beta\end{smallmatrix}]\langle [\begin{smallmatrix} \alpha \\ \beta\end{smallmatrix}],x\rangle$ (assuming your inner product is conjugate-linear in the first argument), which equals -$$ p=\begin{bmatrix} \alpha \\ \beta\end{bmatrix}\begin{bmatrix} \alpha \\ \beta\end{bmatrix}^\dagger=\begin{bmatrix} \alpha\overline{\alpha} & \alpha\overline{\beta} \\ \beta\overline{\alpha} & \beta\overline{\beta}\end{bmatrix}. \tag{1}$$ -This is a hermitian matrix with trace $|\alpha|^2+|\beta|^2=1$. If we double and subtract $I$ from it we get a traceless hermitian matrix. What is the effect of this mapping from $\mathbb{CP}^1$ to $M_2(\mathbb{C})$? 
Write -$$\begin{bmatrix}\alpha\\ \beta\end{bmatrix}=\frac{1}{\sqrt{1+|x|^2}}\begin{bmatrix}1 \\ x\end{bmatrix} \quad\Rightarrow\quad 2p-I=\begin{bmatrix} \displaystyle \frac{1-|x|^2}{1+|x|^2} & \displaystyle \frac{2\overline{x}}{1+|x|^2} \\ \displaystyle \frac{2x}{1+|x|^2} & \displaystyle -\frac{1-|x|^2}{1+|x|^2}\end{bmatrix}. \tag{2}$$ -Denote by $\mathfrak{h}_2'(\mathbb{C})$ the vector space of traceless $2\times 2$ complex matrices. The embedding of $\mathbb{CP}^1$ is therefore the stereographic projection $\mathbb{C}\cup\{\infty\}\to\mathbb{R}\times\mathbb{C}$ composed with the obvious isomorphism $\mathbb{R}\times\mathbb{C}\cong \mathfrak{h}_2'(\mathbb{C})$. With respect to the Frobenius norm, the image of this map is the usual round sphere defined by the metric. -Observe $\mathrm{SU}(2)$ acts on $\mathfrak{h}_2'(\mathbb{C})$ by conjugation. This preserves the norm -$$ \|X\|_{\small F}^2=\sum_{i,j}|x_{ij}|^2=\sum \|\mathrm{row}\|^2=\sum\|\mathrm{column}\|^2 \tag{3}$$ -because left multiplication by $g\in\mathrm{SU}(2)$ preserves column norms and right multiplication preserves row norms. Therefore it acts by rotations. Alternatively we could have used cycling: -$$\|gXg^{-1}\|_{\small F}^2=\mathrm{tr}((gXg^{-1})(gXg^{-1})^\dagger)=\mathrm{tr}(gXX^\dagger g^{-1})=\mathrm{tr}(g^{-1}gXX^\dagger)=\|X\|_{\small F}^2. \tag{4}$$ -The map $\mathbb{CP}^1\to\mathfrak{h}_2'(\mathbb{C})$ is $\mathrm{SU}(2)$-equivariant because the induced action is -$$g\cdot p=\left(g \begin{bmatrix}\alpha \\ \beta\end{bmatrix}\right)\left(g \begin{bmatrix}\alpha \\ \beta\end{bmatrix}\right)^\dagger =gpg^{-1}. \tag{5}$$ -Therefore, the action of $\mathrm{SU}(2)$ on $\mathbb{CP}^1$ transported via stereographic projection to $S^2$ is indeed by rotations, and we have a natural map $\mathrm{SU}(2)\to\mathrm{SO}(\mathfrak{h}'_2(\mathbb{C}))$ (essentially $\mathrm{SO}(3)$). The kernel consists of matrices whose associated linear fractional transformations are trivial, so $\{\pm I\}$. -The three nicest values $0,1,i\in\widehat{\mathbb{C}}$ correspond to the Pauli matrices: -$$0\leftrightarrow\begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix}, \quad 1\leftrightarrow \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad i\leftrightarrow \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \tag{6}$$ -Their stabilizers correspond to rotations around the axes of an orthonormal basis (which yields surjectivity too I think if that's of interest). This should be intuitively acceptable by drawing the corresponding vector flows around the three points in $\mathbb{C}$ and then imagining their effect after stereographic projection onto the unit-radius origin-centered sphere in $\mathbb{R}\times\mathbb{C}$. -If we interpret $\mathrm{SU}(2,\mathbb{R})$ as $\mathrm{SO}(2)$ and $\mathrm{SU}(2,\mathbb{H})$ as $\mathrm{Sp}(2)$, and treat $\mathbb{H}^2$ as a right vector space over the quaternions $\mathbb{H}$, the above argument can be generalized to yield more onto $2$-to-$1$ Lie group homomorphisms (i.e. spin maps). According to Baez this can be done for octonions $\mathbb{O}$ too by suitably interpreting $\mathrm{SU}(2,\mathbb{O})$. Thus we have -$$\begin{array}{|l|} \hline \mathrm{SU}(2,\mathbb{R})\to\mathrm{SO}(2) \\ \hline \mathrm{SU}(2,\mathbb{C})\to \mathrm{SO}(3) \\ \hline \mathrm{SU}(2,\mathbb{H})\to\mathrm{SO}(5) \\ \hline \mathrm{SU}(2,\mathbb{O})\to\mathrm{SO}(9) \\ \hline \end{array} \tag{7} $$<|endoftext|> -TITLE: How can my proof be improved? "Let $n$ be an integer. If $3n$ is odd then so is $n$." 
-QUESTION [8 upvotes]: I am attempting to self-study proof techniques and your criticism of my following proof would be greatly appreciated. Feel free to nitpick minor/trivial things; that's how I'll learn! -Edit: I have appended a revised proof with the criticisms received so that other rookies can learn from my progress too. -Edit 2: And another, third time lucky? - -Theorem: Let $n$ be an integer. If $3n$ is odd then $n$ is odd. -Let $P$ be the sentence "$3n$ is odd" and $Q$ be the sentence "$n$ is odd." We therefore have; -$$P \rightarrow Q$$ -By the law of the contrapositive, we may obtain; -$$\lnot Q \rightarrow \lnot P$$ -Which translates to "If $n$ is not odd then $3n$ is not odd", or put another way; "If $n$ is even then $3n$ is even." -If an integer $n$ is even then there exists some integer $m$ such that; -$$n = 2m$$ -By multiplying this by $3$ we may obtain; -$$3n = 6m \equiv n = 2m$$ -It has therefore been shown that if $n$ is even so is $3n$ and that this is equivalent to showing that if $3n$ is odd then so is $n$. -■ - -Attempt 2, taking into consideration previous criticism -Theorem: Let $n$ be an integer. If $3n$ is odd then $n$ is odd. -Let $P$ be the sentence "$3n$ is odd" and $Q$ be the sentence "$n$ is odd." We want to show that $$P \rightarrow Q$$ and that by taking contrapositives this is equivalent to showing $$\lnot Q \rightarrow \lnot P$$ which translates to; -"If $n$ is not odd then $3n$ is not odd", or put another way; "If $n$ is even then $3n$ is even." -If an integer $n$ is even then there exists some integer $m$ such that; -$$n = 2m$$ -By multiplying this by $3$ we may obtain; -$$3n = 6m$$ -This must still be even as an even integer multiplied by an odd integer produces an even one. -We must now show that $3n$ is even. If this is so, then $6m = 2k$ for some integer $k$. -As $3n=6m=2k$ we have; -$3n=2k$ -It has therefore been shown that if $n$ is even so is $3n$ and that this is equivalent to showing that if $3n$ is odd then so is $n$. -■ - -Attempt 3 -Theorem: Let $n$ be an integer. If $3n$ is odd then $n$ is odd. -Let $P$ be the sentence "$3n$ is odd" and $Q$ be the sentence "$n$ is odd." We want to show that $$P \rightarrow Q$$ and that by taking contrapositives this is equivalent to showing $$\lnot Q \rightarrow \lnot P$$ which translates to; -"If $n$ is not odd then $3n$ is not odd", or put another way; "If $n$ is even then $3n$ is even." -If an integer $n$ is even then there exists some integer $m$ such that; -$$n = 2m$$ -By multiplying this by $3$ we may obtain; -$$3n = 6m$$ -Which can be rewritten as; -$$3n = 2(3m)$$ -Thus we have shown that $3n$ is even, as it is equal to $2(3m)$ which, as an integer multiplied by 2, must be even. -It has therefore been shown that if $n$ is even so is $3n$ and that this is equivalent to showing that if $3n$ is odd then so is $n$. -■ - -REPLY [5 votes]: Your argument is structured as if you were arguing from your conclusion. The correct way to state this argument is "we want to show that $P \Rightarrow Q$, and by taking contrapositives this is equivalent (you also did not say this) to showing that $\neg Q \Rightarrow \neg P$." -I guess you are using $\equiv$ to mean "is equivalent to," but this is usually reserved for an equivalence relation on mathematical objects, not on mathematical statements. For statements you should use $\Leftrightarrow$. -And, of course, Arturo's comments are spot-on.<|endoftext|> -TITLE: If A is noetherian, then Spec(A) is noetherian -QUESTION [10 upvotes]: Let A be a noetherian ring.
How can I show that Spec(A) is noetherian? -Also, is there a way to show this by showing directly that the closed sets in Spec(A) satisfy the descending chain condition? -(This is exercise 6.8 from Atiyah and Macdonald.) - -REPLY [5 votes]: Let $A$ be a Noetherian ring. -Let $I$ be an ideal of $A$. -We denote by $V(I)$ the set $\{P \in \operatorname{Spec}(A) : I \subset P\}$. -Let $N$ be the nilradical of $A/I$. -We denote by rad($I$) the inverse image of $N$ under the canonical homomorphism $A \rightarrow A/I$, -i.e. rad($I$) = $\{x \in A : x^n \in I \text{ for some } n\}$ ($n$ depends on $x$). -Since $N$ is an ideal (Atiyah-MacDonald Proposition 1.7), so is rad($I$). -Clearly $V(I)$ = $V$(rad($I$)). -Since the nilradical of $A/I$ is the intersection of all the prime ideals of $A/I$ (Atiyah-MacDonald Proposition 1.8), rad($I$) = $\bigcap_{P \in V(I)} P$. -Suppose $V(I_1) \supset V(I_2) \supset \dots$ is a descending sequence of closed subsets of Spec($A$). -Then rad($I_1$) $\subset$ rad($I_2$) $\subset \dots$ by the above claim. -Since $A$ is Noetherian, there exists $n$ such that rad($I_n$) = rad($I_{n+1}$) = $\dots$ -Hence $V(I_n) = V(I_{n+1}) = \dots$ -Hence Spec($A$) is Noetherian and we are done. - -REPLY [4 votes]: I find it helpful to think in terms of two somewhat more precise results. -Step 1: The mappings -$J \mapsto V(J) = \{\mathfrak{p} \in \operatorname{Spec}(R) \ | \ J \subset \mathfrak{p} \}$ from ideals of $R$ to subsets of $\operatorname{Spec}(R)$ -and -$S \mapsto I(S) = \bigcap_{\mathfrak{p} \in S} \mathfrak{p}$ from subsets of $\operatorname{Spec}(R)$ to ideals of $R$ -induce mutually inverse bijections from the set of radical ideals in $R$ to the family of Zariski-closed subsets of $\operatorname{Spec}(R)$. -(If I remember correctly, what is given in Atiyah-Macdonald before the exercise in question is sufficient to establish this without much trouble.) -Step 2: Therefore for any commutative ring $R$, $\operatorname{Spec} R$ is Noetherian -- i.e., satisfies DCC on Zariski-closed subsets -- iff $R$ satisfies ACC on radical ideals. -The details can be found in $\S 13.5$ of my commutative algebra notes.<|endoftext|> -TITLE: Efficient method to compute $\mathrm{gcd}(2^n-1,n!)$ -QUESTION [9 upvotes]: How can we compute the value of $\mathrm{gcd}(2^n-1,n!)$ efficiently where $n$ is very large? I couldn't think of any fast and efficient method. - -REPLY [3 votes]: Here's what I would do: -Initialise a large integer $N = 2^n - 1$. (This requires 1 or 2 kilobytes.) -For each odd prime $p \le n$ (the prime $2$ never divides the odd number $2^n-1$), compute $2^n$ mod $p$ = $2^{n \mod (p-1)}$ mod $p$, using a square-and-multiply algorithm (see this Wikipedia article for details). If the result is $1$, then $N$ is divisible by $p$, and we want the highest power of $p$ that divides both $N$ and $n!$. -The highest power of $p$ that divides $n!$ is just $a = \sum_{r=1}^K \lfloor \frac{n}{p^r}\rfloor$, where $K$ is any integer such that $p^K>n$ (see e.g. this article). So try to divide $N$ by $p$ up to $a$ times, stopping if the remainder is non-zero. The number of times that this succeeds is the power that we want. -Now just multiply together all these prime powers. -Note that $N$ gets smaller and smaller as the computation proceeds, limiting the amount of arithmetic required.
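-Here is a minimal Python sketch of the recipe just described (the function name and the use of Python's built-in big integers are my own choices, not part of the original answer):
-from math import factorial, gcd   # imported only for the sanity check below
-
-def gcd_mersenne_factorial(n):
-    # sieve the primes up to n
-    sieve = [True] * (n + 1)
-    for p in range(2, int(n ** 0.5) + 1):
-        if sieve[p]:
-            sieve[p * p::p] = [False] * len(sieve[p * p::p])
-    N = (1 << n) - 1           # N = 2**n - 1; it shrinks as factors come out
-    result = 1
-    for p in range(3, n + 1):  # p = 2 never divides the odd number N
-        if not sieve[p]:
-            continue
-        if pow(2, n % (p - 1), p) != 1:  # square-and-multiply test
-            continue                     # p does not divide 2**n - 1
-        a, q = 0, p            # a = highest power of p dividing n!
-        while q <= n:
-            a += n // q
-            q *= p
-        for _ in range(a):     # divide N by p at most a times
-            if N % p:
-                break
-            N //= p
-            result *= p
-    return result
-
-assert gcd_mersenne_factorial(100) == gcd(2**100 - 1, factorial(100))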
It might be faster to process the primes in reverse order, to accelerate this trend; experimentation will tell.<|endoftext|> -TITLE: Alternative to Ziegler's "Lectures on Polytopes" -QUESTION [8 upvotes]: I am interested in alternatives to Ziegler's Lectures on Polytopes, which is the suggested textbook for a course I am attending. I find the conversational style of the book jarring. - -REPLY [2 votes]: Some of the topics of Ziegler's (excellent) book (e.g. Gale transforms, f-vectors, the secondary polytope, fiber polytopes) are also covered in the "Triangulations" book of De Loera, Rambau and Santos. The treatment is from a slightly different perspective and definitely worth a look. They provide many pictures. -Another alternative (also more from the triangulations perspective) is the book Lectures in Geometric Combinatorics.<|endoftext|> -TITLE: What is the smallest value of $n$ for which $\phi(n) \ne \phi(k)$ for any $k<n$? … When $n=4m+1$ the smallest is $n=5$ and when $n=4m+3$ it's $n=3.$ However, when -$n=4m+2$ the above condition can never be satisfied since -$$\phi(4m+2) = (4m+2) \prod_{p | 4m+2} \left( 1 - \frac{1}{p} \right)$$ -$$ = (2m+1) \prod_{p | 2m+1} \left( 1 - \frac{1}{p} \right) = \phi(2m+1).$$ -In the case $n=4m,$ $n=2^{33}$ is a candidate and $\phi(2^{33})=2^{32}.$ This value satisfies $(1)$ because $\phi(n)$ is a power of $2$ precisely when $n$ is the product of a power of $2$ and any number of distinct Fermat primes: -$$2^1+1,2^2+1,2^4+1,2^8+1 \textrm{ and } 2^{16}+1.$$ -Note that $n=2^{32}$ does not satisfy condition $(1)$ because the product of the above Fermat primes is $2^{32}-1$ and so $\phi(2^{32})=2^{31}=\phi(2^{32}-1)$ and -$2^{32}-1 < 2^{32}.$ -The only solutions to $\phi(n)=2^{32}$ are given by numbers of the form -$n=2^a \prod (2^{x_i}+1)$ where $x_i \in \lbrace 1,2,4,8,16 \rbrace $ and -$a+ \sum x_i = 33$ (note that the product could be empty), -so all these numbers are necessarily $ \ge 2^{33}.$ -Why don't many "small" multiples of $4$ satisfy condition $(1)$? Well, note that for $n=2^a(2m+1)$ we have -$$\phi(2^a(2m+1))= 2^a(2m+1) \prod_{p | 2^a(2m+1)} \left( 1 - \frac{1}{p} \right)$$ -$$ = 2^{a-1}(2m+1) \prod_{p | 2m+1} \left( 1 - \frac{1}{p} \right) = 2^{a-1}\phi(2m+1),$$ -and so, for $a \ge 2,$ if $2^{a-1}\phi(2m+1)+1$ is prime we can take this as our value of $k<n$. … if (prime [p >> 1]) // only test primes p - for (int k = 3*p;k < n;k += 2*p) // loop over odd multiples of p - prime [k >> 1] = false; // sieve them out - - // fill phi by looping over all primes - fill (2); - for (int p = 3;p < n;p += 2) - if (prime [p >> 1]) - fill (p); - - // now go through phi, remembering which values we've already seen - for (int i = 1;i < n;i++) { - if ((i & 3) == mod4 && !seen [phi [i]]) { - System.out.println ("found " + i + " with phi (" + i + ") = " + phi [i]); - return; - } - seen [phi [i]] = true; - } - } - - // multiply all phi values by their factors of prime p - static void fill (int p) { - // once for the first factor of p - for (int i = p;i < n;i += p) - phi [i] *= p - 1; - - // and then for the remaining factors - long pow = p * (long) p; // long to avoid 32-bit overflow - while (pow < n) { - for (int i = (int) pow;i < n;i += pow) - phi [i] *= p; - pow *= p; - } - } -}
What is the probability that $\pi(i+1)-\pi(i) \pmod{n}$ …<|endoftext|> -TITLE: prime numbers in Pascal's triangle -QUESTION [11 upvotes]: Just wondering about this: -Is it true that there are no prime numbers in Pascal's triangle, with the exception of $\binom{n}{1}$ and $\binom{n}{n-1}$? -From the first 18 lines it appears that this is true, but I haven't looked beyond that. -Is this a coincidence or is there a reason for it? - -REPLY [15 votes]: Yes, it's true. The identity ${m \choose n} = \frac{m}{n} {m-1 \choose n-1}$ can be rearranged as $n {m \choose n} = m {m-1 \choose n-1}$. If ${m \choose n}$ is prime it follows that it must divide either $m$ or ${m-1 \choose n-1}$. In the first case we can only have $n = 1, n = m-1$, as you have already observed, and in the second case, the quotient $\frac{n}{m}$ cannot be an integer unless $n = m$ or $n = 0$, and neither of these cases gives a prime. - -REPLY [7 votes]: I believe it is true. -If $\displaystyle \binom{n}{r} = p$ for some prime $\displaystyle p$ and $\displaystyle 1 \lt r \lt n-1$, then, -If $\displaystyle n \ge p$, then $\displaystyle \binom{n}{r} \gt n \ge p$, as the binomial coefficients increase and then decrease as $\displaystyle r$ varies from $\displaystyle 0$ to $\displaystyle n$. -If $\displaystyle n \lt p$ then $\displaystyle \binom{n}{r}$ can never be divisible by $\displaystyle p$, as $\displaystyle n!$ is not divisible by $\displaystyle p$. -OR as -hardmath succinctly put it: -$p \mid \binom{n}{r} \Rightarrow p \le n \lt \binom{n}{r}$<|endoftext|> -TITLE: What's With The Integral $\int\sqrt{\cos(2\theta)}\, \mathrm d\theta$? -QUESTION [5 upvotes]: A student randomly asked me to compute -$$\int\sqrt{\cos(2\theta)}\, \mathrm d\theta.$$ -I was unable to do so, as were several other instructors. I typed the integral into Wolfram and it says that it is an elliptic integral of type 2. I am not familiar with elliptic integrals and I don't have any idea why this student asked me to do this since we are not even doing derivatives yet. Is this integral important? Does it have a closed-form answer? - -REPLY [7 votes]: The elliptic integral of the second kind crops up since -$$\int\sqrt{\cos\;2\theta}\;\mathrm d\theta=\int\sqrt{1-2\sin^2\theta}\;\mathrm d\theta=E(\theta|2)+C$$ -where $E(\phi|m)=\int_0^\phi \sqrt{1-m\sin^2\theta}\;\mathrm d\theta$ is the incomplete elliptic integral of the second kind. -All well and good, but traditionally one transforms elliptic integrals such that the parameter $m$ is within the interval $[0,1]$ (corresponding to the fact that ellipses have their eccentricity in the interval $(0,1)$). Using certain transformations, one now has -$$E(\theta|2)=\sqrt{2}E\left(\arcsin(\sqrt{2}\sin\;\theta)|\frac12\right)-\frac1{\sqrt{2}}F\left(\arcsin(\sqrt{2}\sin\;\theta)|\frac12\right)$$ -where $F(\phi|m)=\int_0^\phi\frac{\mathrm d\theta}{\sqrt{1-m\sin^2\theta}}$ is the incomplete elliptic integral of the first kind, which is probably what you want.<|endoftext|> -TITLE: What is the name for the topology where every point forms an open set -QUESTION [5 upvotes]: I remember there is some name for a special topology on a set, such that every subset is open i.e. every point forms an open set, but I cannot recall it. -Please tell me its name or say I am wrong. -Thanks! - -REPLY [7 votes]: I believe it is called the Discrete Topology.<|endoftext|> -TITLE: What's special about the greatest common divisor of a + b and a - b? -QUESTION [5 upvotes]: Problem: - -Prove that if gcd( a, b ) = 1, then gcd( a - b, a + b ) is either 1 or 2.
- -From Bezout's Theorem, I see that $am + bn = 1$, and $a$, $b$ are relatively prime. However, I could not find a way to link this idea to $a - b$ and $a + b$. I realized that in order to have gcd( a, b ) = 1, they must not both be even. I played around with some examples (13, 17), ...and I saw it's actually true :( ! Any idea? - -REPLY [2 votes]: Hint $\rm\,\ a\!-\!b + (a\!+\!b)\ {\it i}\ =\ (1\!+\!{\it i})\ (a\!+\!b\!\ {\it i})\ \ $ yields a slick proof using Gaussian integers. This reveals the arithmetical essence of the matter and, hence, suggests obvious generalizations.<|endoftext|> -TITLE: Idea behind defining a Projective System -QUESTION [6 upvotes]: What is the idea behind defining a projective system of groups/rings? In our class an example of a projective system was given by taking the rings $\mathbb{Z}/n\mathbb{Z}$ over $\mathbb{N}$. The order defined was: - -$ m \leq n$ if $m \mid n$. - -Moreover, we then calculated the projective limit of this system and found it to be $\hat{\mathbb{Z}}$. I have not at all understood this concept fully, so even while posing this question there might be errors. So people who can understand what I am talking about are welcome to pose answers. If one still has problems with the question please let me know; I shall try to improve it or delete it. - -REPLY [7 votes]: Basically, a projective system formalizes the idea of "constructing elements by successive approximations" in an algebraic setting. -If $G$ is a group (abelian, to fix ideas) and $G_n$ a sequence of smaller and smaller subgroups, the projective limit $\Gamma=\lim(G/G_n)$ is the set of elements that are defined by giving them up to an ambiguity that gets smaller and smaller. -When $G$ is also a topological space and the $G_n$ are a local basis around the identity element, an element of $\Gamma$ is morally an element given by assigning a Cauchy sequence converging to it. -Incidentally, this is exactly the case for the $p$-adic integers ${\Bbb Z}_p$ because it is possible to define a metric on $\Bbb Z$ in such a way that the ideals $p^n{\Bbb Z}$ are open spheres.<|endoftext|> -TITLE: Problems that are largely believed to be true, but are unresolved -QUESTION [18 upvotes]: Are there unsolved problems in math that are largely believed to be true, but for reasons other than statistical justification? -It seems that Goldbach should be true, but this is based on heuristic justification. -I am looking for conjectures that seem to be true, but where the 'why' is something other than a statistical justification, and I want to know what exactly that 'why' is. -Edit: Can you please include the reason it is widely believed to be true in your posting? That is the interesting part - -REPLY [3 votes]: It is believed that $\pi$, $e$ and every irrational algebraic number are normal to every base. -The truth is that none of those numbers has been proven to be normal to a single base.<|endoftext|> -TITLE: What is the norm measuring in function spaces -QUESTION [10 upvotes]: In spatial Euclidean vector spaces the norm is an intuitive concept: it measures the distance from the null vector and from other vectors. -The generalization to function spaces is quite a mental leap (at least for me). -My question: Is there some kind of intuition for what "norm" or "euclidean norm" is even supposed to "mean" here? What is the null vector, and what does a "distance" between two functions reveal?
-Edit: Please also see this follow up question: Visualization of 2-dimensional function spaces - -REPLY [8 votes]: Let's start with the $L_{2}[0,1]$ norm. -Start with a single function $f$. The 2-norm of $f$ is -$ \| f \|_{2}=\sqrt{\int_{0}^{1} f(x)^2 dx} $ -Now, $\| f \|_2$ will be 0 only if the integral of $f(x)^2$ is 0. This will certainly happen if $f(x)=0$, but it can also happen in cases where $f$ is nonzero only at some isolated points of discontinuity. Thus the "null vector" in this space is the zero function, and all of the functions that are zero "almost everywhere" are considered to be equivalent to $f(x)=0$ with respect to this norm. -If you're familiar with electrical engineering, where $f(x)$ might be a time varying voltage, then $\| f \|_2$ is essentially the energy associated with that signal. -Now, suppose that $f(x)$ and $g(x)$ are two functions defined on $[0,1]$, and consider the 2-norm of the difference: -$ \| f-g \|_{2}=\sqrt{\int_{0}^{1} (f(x)-g(x))^2 dx} $ -If the functions $f$ and $g$ are very nearly identical on $[0,1]$, then this norm will be close to 0. -If you're familiar with electrical engineering terminology, you -might have heard of the "root mean square (RMS)" difference between two signals. This norm is exactly the RMS difference between $f$ and $g$. -If $f(x)=g(x)$, except for perhaps some individual points where there are discontinuities, then the norm will actually be 0. Thus the $L_{2}$ distance between two functions is 0 if the functions are equal "almost everywhere."<|endoftext|> -TITLE: Closure of $\{ (x_n) | x_n = 0 $ for almost all $n \}$ -QUESTION [5 upvotes]: What is the closure of $\{ (x_n) | x_n = 0 $ for almost all $n \}$ in $\mathbb{R}^\mathbb{N}$ with the product topology? -I'm not sure how to think about it. A point $x$ is in the closure of a set $A$ if for every neighbourhood $N$ of $x$, $N \cap A \neq \emptyset$. -But I'm not sure what a neighbourhood of a point in $\mathbb{R}^\mathbb{N}$ looks like. In fact, I'm not even sure what an open ball looks like. -Can someone tell me what open balls or neighbourhoods look like? -Thanks for your help. - -REPLY [5 votes]: A better way to think about $\mathbb{R^N}$ is as the set of all real-valued sequences, right? You have $\mathbb N$ dimensions and each dimension corresponds to a real number, so $\mathbb{R^N}$ is just the set of all real-valued sequences (as a note, $\mathbb{R^R}$ can be thought of as the set of all functions from $\mathbb R$ to $\mathbb R$). -So with this definition in mind, what does it mean for there to be only a finite number of nonzero indices in a sequence? Well, it means that past a certain point the sequence is entirely zeros. Okay, now keeping this in mind, think about what the product topology means. It means that any open set (we'll just work with basis sets) is a product of spaces $X_i$ where $X_i\neq \mathbb R$ for only finitely many $i$. Then if we want to show that our set is dense in $\mathbb{R^N}$, we need to show that it intersects every basis open set. Well, we've just said that an open set has only finitely many indices that aren't the whole space. So if we pick an $x \in X_i$ for each $i$ with $X_i \neq \mathbb R$, we get finitely many coordinates that aren't zero, and we can let everything else be zero. This will show that our set is dense in $\mathbb{R^N}$.
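-(A tiny Python sketch of the same argument, with made-up constraint data: a basic open set pins down only finitely many coordinates, and a finitely supported sequence can be chosen to match all of them.)
-# hypothetical basic open set: coordinate i must lie in the open interval (a, b);
-# every coordinate not listed is unconstrained (that factor is all of R)
-constraints = {0: (0.5, 1.5), 7: (-2.0, -1.0), 42: (3.0, 4.0)}
-
-point = {i: (a + b) / 2 for i, (a, b) in constraints.items()}
-# 'point' is a sparse sequence: unlisted coordinates are 0, so it has finite
-# support, yet it satisfies every constraint -- the set of finitely supported
-# sequences meets every basic open set.
-print(point)   # {0: 1.0, 7: -1.5, 42: 3.5}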
-What happens if we put the box topology on $\mathbb{R^N}$?<|endoftext|> -TITLE: WANTED: Short proof for challenging identity -QUESTION [5 upvotes]: I have to prove the following without any use of further mathematical theories except basic calculus and linear algebra: -Let $ \ell $ be a positive integer and $A$ a real, symmetric and positive definite $n \times n$ matrix (square matrix with $n$ columns and $n$ rows). Show that the following identity holds: -$$ \int_{\mathbb{R}^n} x_{i_1} \cdots x_{i_{2 \ell}} e^{-\frac{1}{2} \left< \mathbf{x}, A \mathbf{x} \right>} d^n x = \frac{(2 \pi)^{n/2}}{\ell! \sqrt{\det{A}}} \sum\limits_{\substack{\{ k_1,k_1'\},\ldots,\{k_\ell,k_\ell'\}\in P \\ \cup_{j=1}^\ell \{ k_j, k_j' \} = \{ 1,\ldots,2 \ell \}}} ( A^{-1} )_{i_{k_1},i_{k_1'}} \cdots ( A^{-1} )_{i_{k_\ell},i_{k_\ell'}} $$ -with $P = \left\{ \{k,k'\} : k \not= k' \in \{1,\ldots,2 \ell\} \right\}$ and $\left< \mathbf{x}, \mathbf{y} \right>$ the standard scalar product for $\mathbf{x},\mathbf{y} \in \mathbb{R}^n$. -Frankly speaking I have no idea what to do, where to begin. I'm also afraid that I cannot really explain the notation, because I don't get it 100% myself. In fact we are a group of approximately 15 students and we all don't know a smooth short way. This exercise could be part of a 2 hour exam with 5 other exercises. So we suppose there exists a more or less short solution. - -REPLY [6 votes]: Hint for the case $\ell = 1$. The rest can be done by induction. -Since $A$ is positive definite, we can define the matrix $B = A^{-1/2}$. Consider the change of variables $x = By$, so $dx = |B| dy$ using the Jacobian determinant. Then your equation can be re-written as -$$ \int_{\mathbb{R}^n} x_i x_j e^{-\frac12 \left< x, Ax \right>} dx = |B| \int_{\mathbb{R}^n} B_{ik}B_{jl}y_ky_l e^{-\frac12 y\cdot y} dy $$ -(Notice that $|B| = |A|^{-1/2}$ as desired.) Now, using the linearity of the action by the matrix $B$, you can integrate by parts in $y_l$ using the derivative operator $\partial_l$. This will pick out a term of the form (using that $\partial_ly_k = \delta_{lk}$) -$$ B_{il}B_{jl} \int_{\mathbb{R}^n} e^{-\frac12 y\cdot y} dy $$ -where the integral is the Gaussian integral giving you the factor of $(2\pi)^{n/2}$. And $B_{il}B_{jl} = (B^2)_{ij} = (A^{-1})_{ij}$. -The factorial factor comes, when $\ell > 1$, from counting the terms you actually get when you integrate by parts.<|endoftext|> -TITLE: Maclaurin polynomial of $\ln(\cos(x))$ -QUESTION [5 upvotes]: I want to write down the Maclaurin polynomial of degree 6 of $\ln(\cos(x))$. I'm having trouble understanding what I need to do, let alone explaining why it's true rigorously. -The known expansions of $\ln(1+x)$ and $\cos(x)$ give: -$$\forall x \gt -1,\ \ln(1+x)=\sum_{n=1}^{k} (-1)^{n-1}\frac{x^n}{n} + R_{k}(x)=x-\frac{x^2}{2}+\frac{x^3}{3}+R_{3}(x)$$ -$$\cos(x)=\sum_{n=0}^{k} (-1)^{n}\frac{x^{2n}}{(2n)!} + T_{2k}(x)=1-\frac{x^2}{2}+\frac{x^4}{24}+T_{4}(x)$$ -Writing $\ln(1+x)$ with $t=1+x$ gives: -$$\forall t \gt 0,\ \ln(t)=\sum_{n=1}^{k} (-1)^{n-1}\frac{(t-1)^n}{n} + R_{k}(t)$$ -But now I'm clueless.
- -REPLY [2 votes]: I think Joe Johnson's is a good idea: -$$ -\frac{d}{dx} \ln\cos x = -\tan x -$$ -plus the knowledge of the Taylor series for the tangent, -$$ -\tan x = \sum_{n=1}^\infty \frac{B_{2n}(-4)^n(1-4^n)}{(2n)!}x^{2n-1} \ , \qquad \text{for} \qquad x \in (-\pi/2 , \pi/2) -$$ -(here, $B_s$ are the Bernouilli numbers) gives you everything: -$$ -\ln\cos x = C - \int \tan x dx = C - \int \sum_{n=1}^\infty \frac{B_{2n}(-4)^n(1-4^n)}{(2n)!}x^{2n-1} dx = C - \sum_{n=1}^\infty \frac{B_{2n}(-4)^n(1-4^n)}{2n(2n)!}x^{2n} -$$ -Now, taking $x = 0$ on both sides, you get $C = 0$, so -$$ -\ln\cos x = - \sum_{n=1}^\infty \frac{B_{2n}(-4)^n(1-4^n)}{2n(2n)!}x^{2n} \ , \qquad \text{for} \qquad x \in (-\pi/2 , \pi/2) \ . -$$<|endoftext|> -TITLE: The set of rational numbers of the form p/p', where p and p' are prime, is dense in $[0, \infty)$ -QUESTION [11 upvotes]: I have been working through the exercises in Tenenbaum's "Introduction to analytic and probabilistic number theory" book, and I am stumped here (Exercise I.1.6). This is not a homework assignment, but just rather for my own edification. I know Tenenbaum's exercises are usually considered quite hard, so I don't feel as embarrassed to ask this question! -For those not aware (I suspect most experts know this exercise and book inside and out), the homework problem reads as follows: -Set $d_n = p_{n+1} - p_n$. - -Show that $p_n \sim n \log n \qquad (n \to \infty)$ -$\sum_{1 < n \leq x} d_n / \log x \sim x \qquad (x \to \infty)$ -$\liminf_{n \to \infty} \frac{d_n}{\log n} \leq 1 \leq \limsup_{n \to \infty} \frac{d_n}{\log n}$ -For each $\alpha > 0$ there exists a sequence of integers $\{n_1, n_2, \dots \}$, increasing in the weak sense, such that $p_{n_j} \sim \alpha j \qquad (j \to \infty)$. -The set of rational numbers of the form $p/p'$, where $p$ and $p'$ are prime is dense in $[0, \infty)$. - -My questions are two-fold: -How does part 5 of this exercise follow from part 4? -Are there different proofs (elementary or not) of question 5 that do not follows this route? - -REPLY [9 votes]: From question 4 you can easily show that for any $\alpha > 0$, there exists a sequence $(q_n, q'_n)$ of prime numbers such that $q_n/q'_n$ converges to $\alpha$. -You just have to take the sequence $\{n_1, n_2, \dots\}$ and use the subsequence of the terms with prime index. So you can build this sequence $\{(p_1, p_{n_{p_1}}), (p_2, p_{n_{p_2}}), \dots \}$. -Each pair in this new sequence in composed of prime numbers, and from the fact that $p_{n_j} / j \rightarrow \alpha$ you get that $p_{n_{p_i}}/p_i \rightarrow \alpha$ as $i \rightarrow \infty$. -Therefore $\alpha$ is in the closure of the rational numbers that are quotient of primes. -Since it is true for any $\alpha > 0$, this set is dense in $[0, \infty)$.<|endoftext|> -TITLE: Tangent Bundle on S^3 -QUESTION [6 upvotes]: how to show T(S^3) isomorphic to S^3 cross R^3? so can I say it for every odd dimension?I have shown it for n=1 - -REPLY [4 votes]: It does not work for any odd dimension, if I recall correctly it will only work for 1,3 and 7 which correspond to real-division algebras...<|endoftext|> -TITLE: What is Eulerian? -QUESTION [7 upvotes]: I encountered an interesting function which is called "Eulerian" by the Wolfram's MathWorld: -$$\phi(q)=\prod_{k=1}^{\infty} (1-q^{k})$$ -It is interesting because it seems that roots of any polynomial can be expressed in this function and elementary functions. -I want to know more about the properties of this function, where can I find the information? 
- -REPLY [9 votes]: One notable property of this function is Euler's Pentagonal Number Theorem: -$$\prod_{k=1}^\infty (1-x^k) = \sum_{k=-\infty}^\infty (-1)^k x^{k(3k-1)/2}.$$ -Here is a very interesting paper on the Pentagonal Number Theorem by Jordan Bell.<|endoftext|> -TITLE: Can a quotient field ever be finitely generated as an algebra? -QUESTION [8 upvotes]: Let A be a commutative integral domain that's not a field, and let $K$ be the quotient field of A. We know that $K$ is not finitely generated as an A-module. But can $K$ ever be finitely generated as an A-algebra? - -REPLY [15 votes]: Sure. Let $A$ be the integers localized at $(2)$; that is, -$$A = \left\{ \frac{a}{b}\in\mathbb{Q}\;\Bigm|\; a,b\in\mathbb{Z}, b\gt 0, \gcd(a,b)=\gcd(2,b)=1\right\}.$$ -The field of quotients of $A$ is $\mathbb{Q}$, and is equal to $A[\frac{1}{2}]$, so it is generated as an $A$-algebra by $1$ and $\frac{1}{2}$. -More generally, any UFD $R$ with only finitely many pairwise non-associated irreducibles will have a field of fractions that is finitely generated as an $R$-algebra: just take $m_1,\ldots,m_k$ to be a maximal list of pairwise non-associate irreducibles in $R$, and the field of fractions will be equal to the subalgebra $R[\frac{1}{m_1},\ldots,\frac{1}{m_k}]$. -Added. In fact: -Theorem. Let $R$ be a UFD. The following are equivalent: - -The field of fractions $K$ of $R$ is finitely generated as an $R$-algebra. -$R$ has only finitely many pairwise non-associate irreducible elements. - -Proof. If $R$ has only finitely many non-associate irreducible elements (possibly none), $m_1,\ldots,m_k$, then the subalgebra $R[\frac{1}{m_1},\ldots,\frac{1}{m_k}]$ in $K$ equals all of $K$: every element of $K$ can be written as $\frac{a}{b}$ with $a,b\in R$, $b\neq 0$, and $b$ can be factored into irreducibles $b=um_1^{\alpha_1}\cdots m_k^{\alpha_k}$, where $\alpha_1,\ldots,\alpha_k$ are nonnegative integers and $u$ is a unit of $R$. Then -$$\frac{a}{b} = au^{-1}\left(\frac{1}{m_1}\right)^{\alpha_1}\cdots\left(\frac{1}{m_k}\right)^{\alpha_k}\in R\left[\frac{1}{m_1},\ldots,\frac{1}{m_k}\right].$$ -Conversely, suppose that $K$ is finitely generated as an $R$-algebra. We may assume that the set that generates $K$ is made up entirely of fractions of the form $\frac{1}{b}$, because any element $\frac{a}{b}$ can be replaced with $\frac{1}{b}$ and we still get the same $R$-subalgebra. Moreover, we can assume that $b$ is irreducible, because if $b=m_1^{\alpha_1}\cdots m_r^{\alpha_r}$, then we can replace $\frac{1}{b}$ with $\frac{1}{m_1},\ldots,\frac{1}{m_r}$. Thus, we may assume that $K$ is generated as an $R$-algebra by the multiplicative inverses of a finite set of pairwise non-associated irreducible elements of $R$, $m_1,\ldots,m_k$. Now let $m\in R$ be any irreducible. We can express $\frac{1}{m}$ as a sum of multiples of powers of the $m_i^{-1}$, so we have -$$\frac{1}{m} = \frac{a_1}{m_1^{\alpha_1}} + \cdots + \frac{a_k}{m_k^{\alpha_k}} = \frac{b_1a_1+\cdots+b_ka_k}{m_1^{\alpha_1}\cdots m_k^{\alpha_k}}$$ -where -$$b_i = \frac{m_1^{\alpha_1}\cdots m_k^{\alpha_k}}{m_i^{\alpha_i}}.$$ -Then we must have that $(b_1a_1+\cdots+b_ka_k)m = m_1^{\alpha_1}\cdots m_k^{\alpha_k}$, so $m$ divides $m_1^{\alpha_1}\cdots m_k^{\alpha_k}$, and hence $m$ is an associate of one of $m_1,\ldots,m_k$. Thus, $R$ has only finitely many pairwise non-associate irreducibles, as claimed. $\Box$ - -REPLY [7 votes]: In fact, the class of such $A$ is not only nonempty, but important enough to have a standard name: Goldman domains.
See page 117 of Pete L. Clark's notes on commutative algebra here.<|endoftext|>
-TITLE: Guidelines for learning about Ramanujan's work?
-QUESTION [20 upvotes]: It is well known that one of the first books Ramanujan studied was "Synopsis of Pure and Applied Mathematics" and that it shaped the way Ramanujan thought and wrote about mathematics. Being interested in the works of Ramanujan, I was wondering if it would be of any help if I studied this "Synopsis".
-Moreover, I would appreciate it if someone who has experience with learning about Ramanujan's work (such as his "Notebooks" and his "Lost Notebook") could explain to me what is a good way to approach his works. What are the prerequisites? Would I be able to comprehend his works after having taken courses on, say, Analytic Number Theory and (Real/Complex) Analysis? Is it better to read his notebooks (supposedly after having taken these courses) right away, or start off with books such as "Ramanujan: twelve lectures suggested by his life and work" and "Number theory in the spirit of Ramanujan" (written by Hardy and Berndt respectively)? Should I learn about modular forms beforehand? Etcetera.
-Thanks,
-Max
-
-REPLY [2 votes]: I have my own take on approaching Ramanujan's work. My advice is to look at the two volumes of his Notebooks and the Lost Notebook. You may find these in some university libraries, or perhaps online. Read them selectively several times. I found his identities involving his theta functions, Lambert series and modular equations to be most interesting. You may find other topics that interest you. You can supplement this with Berndt's five volumes editing Ramanujan's Notebooks and the four volumes of Andrews and Berndt editing the Lost Notebook. The only prerequisites are power series, a bit of calculus, and college algebra. Good luck.<|endoftext|>
-TITLE: Pseudoinverse matrix and SVD
-QUESTION [7 upvotes]: I'm trying to solve a homework question but I got stuck.
-Let $A$ be an $m \times n$ matrix with the SVD $A = U \Sigma V^*$ and $A^+ = (A^* A)^{-1} A^*$ its pseudoinverse.
-I'm trying to get $A^+ = V \Sigma^{-1} U^*$, but I'm missing something.
-Can anyone help me with this please?
-Thanks!
-
-REPLY [3 votes]: If the dimensions of $A$ are $m \times n$ with $m \neq n$, then there isn't any way of deriving $A^+ = V\Sigma^{-1}U^*$. The reason is that $\Sigma$ has the same dimensions as $A$ and is therefore not invertible. If you look at any source about the SVD you will see that the equation is $A = U_{m\times m} \Sigma_{m\times n}V^T_{n\times n}$.
-If $A$ is rectangular, maybe the derivation you're looking for is $$\begin{align}
-A^+ &=(A^TA)^{-1}A^T\\
-&=(V\Sigma^TU^T U\Sigma V^T)^{-1}V\Sigma^TU^T\\
-&=(V\Sigma^T\Sigma V^T)^{-1}V\Sigma^TU^T \\
-&=(V^T)^{-1}(\Sigma^T \Sigma)^{-1}V^{-1}V\Sigma^TU^T \\
-&=V(\Sigma^T \Sigma)^{-1}\Sigma^TU^T
-\end{align}$$<|endoftext|>
-TITLE: Image of the Veronese Embedding
-QUESTION [7 upvotes]: Is the image of the general Veronese embedding ever contained in a hyperplane of $\mathbb P^{n}$? I'm guessing no, but I can't prove it.
-
-REPLY [15 votes]: No. To prove it, imagine what it would mean for the image to be contained in a hyperplane: this would mean that some non-zero linear combination of the degree $d$ monomials vanished identically, which is to say, that there is some non-zero degree $d$ homogeneous equation which vanishes identically on $\mathbb P^n$.
Hopefully you can convince yourself that this is not possible.<|endoftext|>
-TITLE: Exact sequence of double complexes induces exact sequence on total complexes
-QUESTION [6 upvotes]: This is a homework question, so I'd appreciate hints (or perhaps explanations of concepts I've not properly digested).
-Anyhow: This is exercise 1.3.6 in Weibel's book on homological algebra. Let $0 \to A \to B \to C \to 0$ be an exact sequence of double complexes of modules. Show that there is a short exact sequence of total complexes, and conclude that if Tot(C) is acyclic, then $Tot(A) \to Tot(B)$ is a quasi-isomorphism.
-The last part of the exercise is clear. If Tot(C) is acyclic, then the long exact sequence is of the form
-$\ldots \to H_{i+1}C(=0) \to H_i(A) \to H_i(B) \to H_i(C)(=0) $
-so the induced morphism on homology is an isomorphism.
-The first part of the question is unclear, however. The definition of an exact sequence of double complexes is not explicitly stated, but I assume it is such that $0 \to A_{ij} \to B_{ij} \to C_{ij} \to 0$ is exact for all $i,j$ and everything commutes.
-Let $\alpha:A \to B$ be a morphism of double complexes. The induced morphism between the total complexes, $\alpha^*: Tot(A) \to Tot(B)$, is then defined, I'd assume, as $\alpha^*=d_B^h \alpha + d_B^v \alpha$ (where $d_B^h$ and $d_B^v$ denote the horizontal and vertical differentials, respectively). The problem is how I go about showing the induced sequence is exact.
-EDIT: After some thought, I guess a good first step would be to show that $\beta^* \circ \alpha^* = 0$, which shouldn't be too difficult (where $\alpha^*,\beta^*$ denote the induced morphisms of total complexes).
-Edit2: I've clarified my notation a bit.
-
-REPLY [3 votes]: Note that exactness depends only on the underlying objects, and not on the differentials. It follows that if the sequence at each coordinate is exact, then the sequence of total complexes is too, because the direct sum of exact sequences is exact. On the other hand, the morphism you have defined is not how the map $\mathrm{Tot}(f)$ for a map $f:C\to D$ of double complexes works. Rather, an element in $\mathrm{Tot}(C)$ of the form $(\ldots,c_{0,n},c_{1,n-1},\ldots)$ gets sent to $(\ldots,fc_{0,n},fc_{1,n-1},\ldots)$.<|endoftext|>
-TITLE: How do different notions of "distribution" relate to one another?
-QUESTION [8 upvotes]: In reading "Real Analysis: Modern Techniques and Their Applications" (Folland), I've come across a few different notions of "distribution" or "distribution functions."
-
-The distribution function of a finite Borel measure $\mu$ on $\mathbb{R}$ is defined by $F(x) = \mu((-\infty, x])$.
-The distribution function of a measurable function $f\colon X \to \mathbb{R}$ on a measure space $(X, M, \mu)$ is a function $\lambda_f\colon (0,\infty) \to [0,\infty]$ given by $\lambda_f(\alpha) = \mu(\{x\colon |f(x)| > \alpha\})$.
-A distribution on an open set $U \subset \mathbb{R}^n$ is a continuous linear functional on $C^\infty_c(U)$.
-
-Is there any sort of relationship between these concepts?
-
-REPLY [3 votes]: The first two uses came from probability theory, and are somewhat related as terminology. They are however, as far as I know, not related at all to the third concept.
-In particular, given a measurable map $f:(X,M)\to(\mathbb{R},\mathcal{B})$, and a measure $\mu$ on $(X,M)$,
$f$ (or I guess $|f|$ in your case) defines a push-forward measure $f_*\mu$ on $(\mathbb{R},\mathcal{B})$ by the definition that, for every $b\in\mathcal{B}$ (the Borel sigma-algebra), $f_*\mu(b) = \mu(f^{-1}(b))$. Then the distribution function $\lambda_f$ is something like $1- F$ for the distribution function corresponding to the pushforward Borel measure $|f|_*\mu$. (The 1 should be replaced by the total mass of the measure $|f|_*\mu$ when it is not a probability measure.)
-See the website Earliest Known Uses of Some of the Words of Mathematics for some references for what I write below.
-Now, the distribution in the sense of the continuous linear functional was introduced by Laurent Schwartz, in French. In French, however, the distribution function of your Borel measure (or of your measurable function) is called "la fonction de répartition", which strongly suggests that Schwartz's choice of terminology is completely independent of the probability/measure theory uses of the words.
-In German, the language in which "distribution functions" were introduced, the probabilistic concept is Verteilungsfunktion, while the functional analytic concept is just taken straight from French/English as Distribution.
-The above all strongly indicates that while senses 1 and 2 are related, they are disjoint from the 3rd use of the word distribution. In fact, English is one of the (perhaps few) unhappy languages in which they coincide.
-(Just to confuse you further, there is also a use of the word distribution in differential geometry, which also means something completely different and disjoint from the three senses you listed above.)
-
-REPLY [2 votes]: Let's consider the relationship between the first two notions. It is convenient and natural to consider this in a probability setting, so that the total measure is $1$.
-First notion. Suppose that $X$ is a random variable on a probability space $(\Omega,\mathcal{F},P)$. Define $\mu$ as follows. For any $B \in \mathcal{B}(\mathbb{R})$, $\mu(B)=P(\lbrace\omega : X(\omega) \in B\rbrace)$. The right-hand side is written in short as $P(X \in B)$. Then $\mu$ is a probability measure on $\mathcal{B}(\mathbb{R})$, which we call the distribution of $X$. Now, the function $F:\mathbb{R} \to [0,1]$ defined by $F(x) = \mu((-\infty,x])$ is called the distribution function of $X$. Note that $F(x) = P(X \in (-\infty,x]) = P(X \leq x)$, as one would expect. Of course, $F$ satisfies all the usual properties from probability theory.
-Second notion. Let's first change the notation, in accordance with the previous notion, as follows. The distribution function of a random variable (that is, a measurable function) $X: \Omega \to \mathbb{R}$ on a probability space (that is, a measure space with total measure $1$) $(\Omega,\mathcal{F},P)$ is a function $\lambda_X:(0,\infty) \to [0,1]$ given by $\lambda _X (\alpha ) = P(\lbrace \omega :|X(\omega )| > \alpha \rbrace )$. As before, the right-hand side is written in short as $P(|X| > \alpha)$.
-The relationship between the first two notions is thus established (in the setting of probability spaces). Specifically,
-$$
-\lambda _X (\alpha ) = P(|X| > \alpha) = P(X > \alpha) + P(X < -\alpha) = [1 - F(\alpha)] + F(-\alpha^-),
-$$
-where $F(-\alpha^-)=\lim _{s \uparrow -\alpha } F(s)$ (recall that $F$ is right-continuous with left limits).<|endoftext|>
-TITLE: Roots of derivative of polynomial
-QUESTION [5 upvotes]: Suppose $f(z)$ is a polynomial whose derivative is non-zero for all $|z| <1$.
Is the restriction of $f$ to $|z|<1$ one-to-one?
-I know that $f$ must be locally one-to-one. It is obvious that $f$ is one-to-one for polynomials of degree 1. The case of degree 2 follows from the Gauss–Lucas Theorem.
-I am stuck on how to prove the general case.
-
-REPLY [2 votes]: Zarrax has already given a great example. Here's a way to see that there are polynomial examples from the fact that there are holomorphic examples. The motivation for this is that the non-polynomial example $g(z)=e^{2\pi i z}$ came to mind when I read your question.
-Let $g$ be holomorphic in a neighborhood of the closed disk, not injective on the open disk, but with nonvanishing derivative on the closed disk. Let $(p_n)_n$ be the sequence of partial sums of the Taylor series of $g$ centered at $0$, so $p_n\to g$ and $p_n'\to g'$ uniformly on a (smaller) neighborhood of the closed disk.
-Since $g'$ is nonvanishing on the closed disk, it has a positive minimum modulus $m$ there, which by uniform convergence means that $p_n'$ eventually has modulus greater than $m/2$, and in particular is eventually nonvanishing on the closed disk. Take $a\neq b$ in the open unit disk such that $g(a)=g(b)$. Applying Hurwitz's theorem to the sequence of functions $p_n(z)-g(a)$ on small disjoint disks centered at $a$ and $b$ shows that $p_n$ eventually takes on the value $g(a)$ at more than one point in the open unit disk. Thus, $p_n$ is eventually a counterexample.
-In fact, using WolframAlpha, it looks like $f(z)=\displaystyle{\sum_{k=0}^{25}\frac{(2\pi iz)^k}{k!}}$ gives an example. It allegedly takes on the value $-1$ at points very close to $\pm\frac{1}{2}$, and all $24$ of the zeros of its derivative are outside the closed disk.<|endoftext|>
-TITLE: A proof of the Isoperimetric Inequality - how does it work?
-QUESTION [45 upvotes]: Here is a nice proof of the isoperimetric inequality (equality part omitted):
-Isoperimetric Inequality
-If $\gamma$ is any simple closed piecewise $C^1$ curve of length $l$, with its interior having area $A$, then $4\pi A \le l^2$. Furthermore, if equality holds then $\gamma$ is a circle.
-Proof.
-Take two parallel straight lines $L$ and $L'$ such that $\gamma$ is between them and move them together until they first touch the curve. See my nice picture below.
-
-Let $C$ be a circle as in the picture. Take $x$ and $y$ axes as shown. Let $\gamma = (x,y)$ be an arc-length parametrization of the curve. Pick points $\gamma (s_0)$ and $\gamma (s_1)$ on $L$ and $L'$ respectively, wherever the lines touch the curve.
-Let $C$ be parametrized by $(x, \overline{y})$ where
-$$
-\overline{y}(s) =
-\begin{cases}
- + \sqrt{r^2 - x^2 (s)}, & \text{if } s_0 \le s \le s_1 \\
- - \sqrt{r^2 - x^2 (s)}, & \text{if } s_1 \le s \le s_0 + l
-\end{cases}
-$$
-Denote the derivative of $f$ with respect to $s$ as $f_s$. Using Green's Theorem, we write:
-$$A + \pi r^2 = \int_{\gamma} x\,dy + \int_C -y\,dx = \int^l_0 x(s)y_s(s)\,ds - \int^l_0 \overline{y}(s)x_s(s)\,ds = $$
-$$ = \int^l_0 ( x(s)y_s(s) - \overline{y}(s)x_s(s)) \,ds \le \int^l_0 \sqrt{ (x(s)y_s(s) - \overline{y}(s)x_s(s))^2} \,ds \stackrel{*}{\le}$$
-$$ \stackrel{*}{\le} \int^l_0 \sqrt{ (x^2(s) + \overline{y}^2(s))} \,ds = \int^l_0 r \,ds = lr$$
-where the starred inequality follows from the fact that:
-$$(x y_s - \overline{y} x_s)^2 = [(x, - \overline{y}) \cdot (y_s, x_s)]^2 \le (x^2 + \overline{y}^2) \cdot (y^2_s + x^2_s) = x^2 + \overline{y}^2 $$
-(using $x_s^2 + y_s^2 = 1$ for the arc-length parametrization), and the final equality holds because $(x, \overline{y})$ lies on the circle $C$ of radius $r$, so $x^2 + \overline{y}^2 = r^2$.
-So we have that $A + \pi r^2 \le lr$.
Next we employ the Geometric-Arithmetic Mean Inequality to find that:
-$$\sqrt{A \pi r^2} \le \dfrac{A + \pi r^2}{2} \le \dfrac{lr}{2}$$
-From which it directly follows that $4 \pi A \le l^2$, as needed. $\square$
-I have seen other proofs of this Theorem, for example the one using Wirtinger's Inequality. This proof was presented to me by my professor, who said it was rather mysterious, and I agree. I think this proof is rather beautiful and much simpler than the other proofs. Here are my questions:
-How? How does it work? I do not mean to ask how we get from one step to another. I mean to ask what makes this work intrinsically. In particular, I am bothered by this constructed circle, and its radius. The next picture is what I have in mind:
-
-For this curve $\gamma$, choosing two different pairs of lines $L$, $L'$ and $K$, $K'$ gives us two circles with different radii. Furthermore, note that it appears that the area of the smaller circle is less than the area traced out by $\gamma$, whereas it is not the case for the larger circle. I guess my question here can be rephrased as: why do the $r$'s magically fall out of the equation in the last step? I am not looking for an answer of the type "because the math works out that way," rather some geometric insight/explanation.
-One observation I have thought about is that in case of equality $4 \pi A = l^2$, i.e. when $\gamma$ is a circle, any circle given by our construction will always have equal radius, in fact the radius of the circle $\gamma$.
-From that I was led to this question: Suppose we take $n$ pairs of parallel lines (where two distinct pairs are not mutually parallel), and construct circles for each. Now, as $n \rightarrow \infty$, what will happen to the average radius of these circles? What will a circle with this radius represent?
-EDIT:
-I have found an example where this last question turns out to be rather uninteresting. However, what will happen if we assume the curve to be convex as well?
-I do not know how to explore this last question with my knowledge whatsoever.
-Finally, I want to ask if anyone knows how this proof came to be; what is the hidden motivation.
-Thank you.
-
-REPLY [3 votes]: My attempt at an answer has two parts.
-First, I think the geometric mean thing slightly obscures what's going on and contributes to your bafflement why "the $r$'s magically fall out of the equation in the last step". A more geometric view of this step is: We know that the curve has "width" $w=2r$ (along the chosen direction). Then $A = w\overline{h} = 2r\overline{h}$, where $\overline{h}$ is the "average height" of the curve (along the chosen direction). Thus we have
-\[A + \pi r^2 \le lr \;\Leftrightarrow\; 2r\overline{h} + \pi r^2 \le lr \;\Leftrightarrow\; 2\overline{h} + \pi r \le l \;\Leftrightarrow\; 2\overline{h} + \frac{\pi}{2} w \le l\;.\]
-Now the cancellation of the $r$'s seems more natural, and the inequality gives an upper bound for the "average height" we can achieve for a given "width" $w$ and length $l$. This bound implies the isoperimetric inequality:
-\[l^2 \ge (2\overline{h} + \frac{\pi}{2} w)^2=(2\overline{h} - \frac{\pi}{2} w)^2 + 4\pi w\overline{h} \ge 4\pi w\overline{h}=4\pi A\;.\]
-(There may be a connection here to the inequality by Bonnesen cited in Christian Blatter's comment.)
-Second, regarding the proof as a whole, it seems useful to think of it as a way of transforming the difficult global optimization problem implied by the isoperimetric inequality (how to enclose the greatest possible area within a given circumference) into a trivial local optimization problem through some clever bookkeeping. In a sense, what makes the problem difficult is that how much area you can enclose with a curve element (say, with respect to the origin) depends both on where you are and in which direction you move, but in which direction you move in turn determines where you will end up, and hence how much area you will be able to enclose later.
-The proof decouples this by adding to the area element a suitable penalty which has two crucial properties: It exactly cancels the "where you are" part of how much area you can enclose, and because it is itself the area element of a circle, it automatically adds up to a constant. There are no longer any variable lengths in the integrand, only the angle between the tangent vector at the curve and the tangent vector at the corresponding point of the circle, and it is then obvious that the integral is maximized by always choosing the tangent vector of the curve parallel to the tangent vector of the circle -- which necessarily results in a circle.<|endoftext|>
-TITLE: How many consecutive composite integers follow k!+1?
-QUESTION [8 upvotes]: I wrote a program for myself in Mathematica to generate the answer for the first 300, which was really easy, but I can't find a pattern. The results are here. This is a problem in Underwood Dudley's Elementary Number Theory, but for the LIFE of me I can't figure it out! My initial thought was this: Let $m$ be the smallest prime such that $k < m$; then $k!+2, k!+3, \ldots, k!+m-1$ are all composite, so, writing $N(k)$ for the number of consecutive composites following $k!+1$, we get $N(k) \ge m-2$. By Bertrand's postulate, for $n > 3$ there exists a prime $p$ such that $n < p < 2n-2$, which gives, for $k > 3$,
-$$N(k) \le (2k!-2k-5)-(k!+2) = k! -2k-7,$$
-a slight improvement for the bound in the absence of factorial primes.<|endoftext|>
-TITLE: Is it possible to use regularization methods on the Harmonic Series?
-QUESTION [7 upvotes]: I recently learned about summation methods for assigning finite values to divergent series. An example of this is using Cesàro summation on Grandi's series to get 1/2. However, every method I know of is unable to sum the harmonic series. Are there any summation methods that work on the harmonic series, or is it provably impossible to sum this series?
-
-REPLY [5 votes]: I'll stick to real, positive exponents $s$ in $\sum_{n=1}^\infty n^{-s}$.
-The following regularization works for $s\in (0,\infty)\setminus \{1\}$: formally,
-$$(1-2^{1-s})\sum_{n=1}^\infty n^{-s} = \sum_{n=1}^\infty n^{-s} - 2\sum_{n=1}^\infty (2n)^{-s} = \sum_{n=1}^\infty (-1)^{n-1}n^{-s} \tag1$$
-where the right-hand side of (1) converges (by the alternating series test) for $ s>0$.
Thus, for $s\in (0,\infty)\setminus \{1\}$ we can interpret $\sum_{n=1}^\infty n^{-s} $ as
-$$\zeta (s)=(1-2^{1-s})^{-1}\sum_{n=1}^\infty (-1)^{n-1}n^{-s} \tag2$$
-Although setting $s=1$ in (2) does not work, we can do simple averaging:
-$$\sum_{n=1}^\infty n^{-1} \,``=" \lim_{\epsilon\to 0} \frac{\zeta(1-\epsilon)+\zeta(1+\epsilon)}{2} =\gamma \tag3$$
-where $\gamma$ is the Euler-Mascheroni constant, $\gamma=0.57721\dots$ Indeed, the averaging in (3) cancels out the contribution of the simple pole in the Laurent series
-$$\zeta(s)=\frac{1}{s-1}+\gamma+O(s-1)$$
-(Adapted from MathOverflow).<|endoftext|>
-TITLE: Special arrows for notation of morphisms
-QUESTION [22 upvotes]: I've stumbled upon the definition of exact sequence, particularly on Wikipedia, and noted the use of $\hookrightarrow$ to denote a monomorphism and $\twoheadrightarrow$ for epimorphisms.
-I was wondering whether this notation is widely used, or if it is common to define a morphism in the general form and indicate its characteristics explicitly (e.g. "an epimorphism $f \colon X \to Y$").
-Also, if epimorphisms and monomorphisms have their own special arrows, are isomorphisms notated by a special symbol as well, maybe a juxtaposition of $\hookrightarrow$ and $\twoheadrightarrow$?
-Finally, are there other kinds of morphisms (or more generally, relations) that are usually notated by different arrows depending on the type of morphism, particularly in the context of category theory?
-Thanks.
-
-REPLY [2 votes]: There is a Unicode character 2916 (⤖) for "bijective mapping". My algebraic topology instructor, educated in Canada and New York, wasn't familiar with it when I employed it.<|endoftext|>
-TITLE: Group Theory: Cyclic groups and Subgroup of Dihedrals, is my proof okay?
-QUESTION [6 upvotes]: Suppose we're given a dihedral group $D_{30}$.
-a) Find a cyclic subgroup $H$ of order 10 in $D_{30}$. List all generators of $H$.
-b) Let $k$ and $n$ be integers such that $k \geq 3$ and $k$ divides $n$. Prove that $D_n$ contains exactly one cyclic subgroup of order $k$.
-My attempt:
-a) In $D_{30}$ we know that $|r| = 30$. So we can find a cyclic subgroup $H$ generated by a power of $r$ such that it is of order 10: take $\langle r^{30/10} \rangle = \langle r^3 \rangle$. Then $\langle r^3 \rangle$ contains the identity element $e$ and the powers of $r^3$ up to $r^{27}$. Thus the generator of $H$ is then $r^3$. Would this be okay?
-b) Since $k$ divides $n$, we can write $n = kp$ for some $p$. Then we see that $\gcd(n,k) = k$ and thus:
-$|\langle r \rangle| = |r| = n$.
-Then by the Fundamental Theorem of Cyclic Groups I can say that the group $\langle r \rangle$ has exactly one subgroup of order $k$, i.e. $\langle r^{n/k} \rangle = \langle r^p \rangle$ since $n = kp$. And we are done.
-This is my first course in Group Theory so I'm rather shaky and insecure about my proofs. Your comments and help would be greatly appreciated.
-
-REPLY [2 votes]: You are using $D_{30}$ to represent the dihedral group of order $60$, with $r$ corresponding to the "rotation", and some other element, call it $s$, the "reflection"; that is,
-$$D_{30}=\left\langle r,s\Bigm| r^{30} = s^2 = 1,\quad sr = r^{-1}s\right\rangle.$$
-Part (a): Your first part begins well enough, but it seems like you did not read the question carefully. The last part of part (a) asks you to find all generators of the subgroup $H$ you found.
-You took $H=\langle r^3\rangle$. That's fine; this is a group of order $10$. But $r^3$ is not the only generator of this group.
Perhaps you know that, in general, the cyclic group of order $n$, $C_n$, has $\varphi(n)$ generators, where $\varphi$ is Euler's function that counts the number of positive integers less than $n$ and relatively prime to $n$ (if you don't, then try to prove it). So $H$, being cyclic of order $10$, should have $\varphi(10) = 4$ generators. You've found one; there are three more to go.
-Your part (b) also suffers a bit. You know that there is one and only one subgroup of $\langle r\rangle$ that has order $k$; that's fine. But you did not prove that every subgroup of $D_n$ of order $k$ must be a subgroup of $\langle r\rangle$! You need to show that too if you want your argument to hold. So you would need to show that the only elements of order $k$ all lie in $\langle r\rangle$; that's where you'll need the fact that $k\geq 3$. Perhaps you can show that every other element has other ideas about what their order is?<|endoftext|>
-TITLE: Seifert-van-Kampen and free product with amalgamation
-QUESTION [11 upvotes]: I would like to apply Seifert-van Kampen to a simple example taken from Wikipedia: I have $X = S^2$ and $A = S^2 - n$, where $n$ is the north pole, and $B = S^2 - s$, where $s$ is the south pole.
-According to my understanding, which might be wrong, Seifert-van Kampen tells me that $\pi_1(X) = \pi_1(A) *_{\pi_1(A \cap B)} \pi_1(B)$, where the right hand side is the free product with amalgamation.
-$A \cap B$, the sphere minus the two points, has a fundamental group isomorphic to $\mathbb{Z}$.
-The free product with amalgamation of two groups $G, H$ is $ G * H / N$, where $N$ is the smallest normal subgroup in $G * H$ (according to the Wikipedia entry about free product with amalgamation).
-Note : I am aware that in the sphere example both $G$ and $H$ are trivial and so quotienting them with anything will be trivial again. This question is not about actually computing the fundamental group of $S^2$!
-In the sphere example, this means I have to find the smallest normal subgroup of $\mathbb{Z}$.
-Question 1: Is my understanding of Seifert-van Kampen correct?
-Question 2: What is the smallest normal subgroup of $\mathbb{Z}$?
-As for question 2, what do you think about $\lim\limits_{k \rightarrow \infty} k\mathbb{Z}$?
-Thanks for your help.
-
-REPLY [5 votes]: The only thing I'd like to add is that in general, if $i:A\hookrightarrow X$ is the inclusion, then $i_\#:\pi_1(A)\rightarrow\pi_1(X)$, defined by $i_\#:[\alpha]\mapsto[i\circ\alpha]$, usually isn't an inclusion.
-For example, for the inclusion $i:\mathbb{S}^1 \hookrightarrow\mathbb{B}^2$, the homomorphism $i_\#$ isn't an inclusion, since $\pi_1(\mathbb{S}^1)\cong\mathbb{Z}$ and $\pi_1(\mathbb{B}^2)\cong\{1\}$.
-Even if $i$ maps $[\alpha]$ to $[\alpha]$, the first equivalence class of $\alpha$ is computed via homotopies in $A$, while the second via homotopies in $X$! The codomain of the loop is vital. However, if $A$ is a retract of $X$ (i.e. $A\!\subseteq\!X$ and $\exists$ continuous $r:X\rightarrow A$ with $r|_A=id_A$), then $i_\#:\pi_1(A)\rightarrow\pi_1(X)$ is an inclusion.
-Hence in the formulation of the Seifert-van Kampen theorem, when one has the inclusions $i:X_1\!\cap\!X_2\hookrightarrow X_1$, $j:X_1\!\cap\!X_2\hookrightarrow X_2$, the theorem states (under the hypotheses) that
-$$\pi_1(X)\cong\pi_1(X_1)\ast\pi_1(X_2)/\langle\!\langle i_\#([\alpha])j_\#([\alpha])^{-1};\;[\alpha]\!\in\!\pi_1(X_1\!\!\cap\!\!X_2)\rangle\!\rangle$$
-and NOT that
-$$\pi_1(X)\cong\pi_1(X_1)\ast\pi_1(X_2)/\langle\!\langle [\alpha][\alpha]^{-1};\;[\alpha]\!\in\!\pi_1(X_1\!\!\cap\!\!X_2)\rangle\!\rangle.$$<|endoftext|>
-TITLE: p-Sylow subgroup
-QUESTION [5 upvotes]: Let $G$ be a finite group and $P$ a $p$-Sylow subgroup of $G$.
- Show that $N(N(P))=N(P)$ where $N(P)$ is the normalizer.
-
-Not sure where to begin... Any hints will be appreciated.
-
-REPLY [8 votes]: Since $P$ is normal in $N(P)$, it is the unique $p$-Sylow subgroup of $N(P)$. However, if $x \in N(N(P))$ then $xN(P)x^{-1}=N(P)$, so $xPx^{-1} \subseteq xN(P)x^{-1}=N(P)$. Since $xPx^{-1}$ is again a $p$-Sylow subgroup of $N(P)$, this forces $xPx^{-1}=P$, so $x \in N(P)$. Hence $N(N(P)) \subseteq N(P)$. But $N(P) \subseteq N(N(P))$. Hence $N(N(P))=N(P)$.<|endoftext|>
-TITLE: Probability of "clock patience" going out
-QUESTION [5 upvotes]: Here is a question I've often wondered about, but have never figured out a satisfactory answer for. Here are the rules for the solitaire game "clock patience." Deal out 12 piles of 4 cards each with an extra 4 card draw pile. (From a standard 52 card deck.) Turn over the first card in the draw pile, and place it under the pile corresponding to that card's number 1-12 interpreted as Ace through Queen. Whenever you get a king you place that on the side and draw another card from the pile. The game goes out if you turn over every card in the 12 piles, and the game ends if you get four kings before this happens. My question is what is the probability that this game goes out?
-One thought I had is that the answer could be one in thirteen, the chance that the last card of a 52 card sequence is a king. Although this seems plausible, I doubt it's correct, mainly because I've played the game probably dozens of times since I was a kid, and have never gone out!
-Any light that people could shed on this problem would be appreciated!
-
-REPLY [3 votes]: I'd always thought the answer was one in thirteen on the basis that you win if the last card turned up is a king. But then one day while playing clock patience I started thinking along the lines of Ross and Eric - how can you end up with 1/13 when you have to worry about whether a bottom card matches its position? So I wrote a python program to shuffle a pack randomly a million times and play the game. The python program (very inefficient but seems to be effective) produces results similar to these every time it's run:
-Won 76921 out of 999700 plays = 0.076944 win rate ~= 1/13 = 0.076923
-Won 76928 out of 999800 plays = 0.076943 win rate ~= 1/13 = 0.076923
-Won 76937 out of 999900 plays = 0.076945 win rate ~= 1/13 = 0.076923
-Won 76939 out of 1000000 plays = 0.076939 win rate ~= 1/13 = 0.076923
-
-which suggests that the answer 1/13 is correct. Here's my program:
-from random import randint
-
-def createpack():
-    "Creates a pack of cards - four lots of 1-13"
-    # N.B. there's an extra entry pack[0] which is ignored
-    pack = list(range(53))
-    for i in range(1, 53):
-        pack[i] = (pack[i] - 1) % 13 + 1
-    return pack
-
-def shuffle(pack):
-    "Shuffles the pack of cards (Fisher-Yates)"
-    for i in range(1, 53):
-        # swap position i with a random position in i..52;
-        # drawing j from i..52 (rather than 1..52) gives an unbiased shuffle
-        j = randint(i, 52)
-        pack[i], pack[j] = pack[j], pack[i]
-    return pack
-
-def dodeal(pack):
-    "Deals the shuffled pack into 13 piles of 4"
-    deal = [[0] * 4 for i in range(13)]
-    pack = shuffle(pack)
-    k = 0
-    for i in range(13):
-        for j in range(4):
-            k += 1
-            deal[i][j] = pack[k]
-    return deal
-
-def play(deal):
-    "Plays out the clock patience game, returning 1 for a win, 0 otherwise"
-    # pile i holds the cards dealt for value i+1; turned cards are set to 0
-    i = deal[0][0] - 1
-    deal[0][0] = 0
-    nc = 0
-    winning = 1
-    while (nc < 52) and winning:
-        nc += 1
-        j = 0
-        # find the next unturned card in pile i
-        while (j < 4) and (deal[i][j] == 0) and winning:
-            j += 1
-        if (i == 0) and (j == 4):
-            # pile 0 is exhausted; we have won only if every card was turned
-            if nc < 52:
-                winning = 0
-        else:
-            k = deal[i][j] - 1   # turn the card over ...
-            deal[i][j] = 0
-            i = k                # ... and move to the pile it points to
-    return winning
-
-pack = createpack()
-nwon = 0
-for nplayed in range(1, 1000001):
-    deal = dodeal(pack)
-    won = play(deal)
-    if won:
-        nwon += 1
-    if nplayed % 100 == 0:
-        print("Won %d out of %d plays = %8.6f win rate ~= 1/13 = %8.6f" %
-              (nwon, nplayed, float(nwon) / nplayed, 1.0 / 13.0))<|endoftext|>
-TITLE: What is $\pi_i(GL(n))$?
-QUESTION [8 upvotes]: For some reason, I can't find a reference for $\pi_i GL(n,\mathbb C)$, nor can I figure out what they are. For most Lie groups, you can get a nice fibration and use the long exact sequence in homotopy to inductively compute the homotopy groups (e.g. the fibration $SO(n-1) \to SO(n) \to S^{n-1}$). However, I can't think of a nice fibration; $GL(n)$ acts transitively on $\mathbb C^n$ but I don't know a nice description for the stabilizer subgroup.
-This is motivated by understanding the statement that $GL(n)/GL(k)$ is $k-1$ connected (for the real and complex cases), so if there's an easy explanation for that without appealing to $\pi_1 GL(n)$, then that would also be appreciated.
-
-REPLY [13 votes]: There's a fibration
-$$GL(n, \mathbb C) \to GL(n+1, \mathbb C) \to \mathbb C^{n+1} \setminus \{0\}$$
-By Gram-Schmidt, this fibration is fibre homotopy-equivalent to
-$$U_n \to U_{n+1} \to S^{2n+1}$$
-given by only remembering the 1st vector in the matrix, just as in your $SO(n)$ example.
-The stable homotopy groups of the unitary groups are known. Google "Bott Periodicity". The unstable groups for $U_n$, just like for $SO_n$, are only known in a range.
-I believe these fibrations are discussed in Bredon's book, as well as May, among others. This is example 4.55 in Section 4.2 of Hatcher's book.<|endoftext|>
-TITLE: Fixed, attracting points are Fatou points
-QUESTION [9 upvotes]: Let $f$ be a holomorphic function on an open, connected set $\Omega\subset \mathbb{C}$ with $z_0\in \Omega$ a fixed point, and $\{f^n\}_{n\in \mathbb{N}}$ the sequence of iterates.
-I want to prove that if $|f'(z_0)|<1$ then there is a neighborhood of $z_0$ such that $\{f^n\}$ is normal in it.
-I don't really know what to do, because I don't know how to handle the iterates. I think the fact that $|f(z)| < |z-z_0| + |z_0| + \epsilon |z-z_0|$ might be useful to apply Montel's theorem, but I didn't get very far.
-This is not homework; it's a problem I'm trying to solve as preparation for a complex analysis exam.
-Also: I'd be grateful to have a geometrical interpretation of $|f'(z_0)|<1$ or of $|f'|$ in general. I don't really understand what it means for $|f'(z_0)|<1$, for example in Schwarz's lemma.
-
-REPLY [2 votes]: I think I got it. Correct me if I'm wrong.
-There exists a disk $U$ centered at $z_0$ such that $|f(z)-z_0|\leq \rho |z-z_0|$ for all $z \in U$, where $|f'(z_0)|<\rho<1$. (Since $U$ is a disk centered at $z_0$, this inequality gives $f(U)\subseteq U$, so the iterates are defined on $U$.)
-Then for all $z\in U$, $|f^n(z)-z_0| = |f(f^{n-1}(z))-z_0| \leq \rho |f^{n-1}(z)-z_0| \leq \dots \leq \rho^n |z-z_0|$.
-This means the sequence $\{f^n\}$ converges uniformly to the constant $z_0$ in $U$, because $\rho^n\to 0$ as $n\to \infty$. In particular, it converges uniformly on compact subsets of $U$, whence $\{f^n\}$ is normal in $U$.
-There is still the last part of the question to be answered, though.<|endoftext|>
-TITLE: Is Nullstellensatz true for arbitrary fields if there aren't hidden points?
-QUESTION [8 upvotes]: The ideals $I=(X,Y)$ and $J=(X^2+Y^2)$ in $\mathbb R[X,Y]$ are such that $V(I)=V(J)$ but their radicals aren't the same, contradicting the Nullstellensatz (in case it were true for arbitrary fields). However, this shouldn't be a surprise: if we look at their varieties over $\mathbb C$ we find that they aren't the same, as the second has a pair of lines that were hidden.
-My question is if it is true the other way around: Suppose we have two ideals $I$ and $J$ of $\mathbb K[X_1,\ldots,X_n]$, where $\mathbb K$ is an arbitrary field, such that $V(I)=V(J)$ and such that ${\bf V}(I)={\bf V}(J)=V(I)$, where ${\bf V}$ means the variety in the affine space of dimension $n$ over $ \mathbb{\bar K}$, the algebraic closure of $\mathbb K$. That is, there are no additional points hidden in the algebraic closure.
-Is it true then that $\sqrt I=\sqrt J$?
-My motivation for the question is a problem of primary decomposition, where we had to find one for $J=(X + Y − X^2 + XY − Y^2,X(X + Y − 1))$ in $\mathbb K[X,Y]$. We had already proved that $V(J)=\{(0,0),(1,0),(0,1)\}$ over $\mathbb{ A}_{\mathbb K}^2$ for any field $\mathbb K$. It made some computations easier in the case of algebraically closed fields because we had automatically that $\sqrt J=I$ where $I$ was the ideal of the three points $I=(X,Y)\cap(X-1,Y)\cap (X,Y-1)$.
-
-REPLY [8 votes]: There is a form of the Nullstellensatz (see the wikipedia entry)
-which is valid for arbitrary fields (and even for more general rings --- so-called Jacobson rings). One way to formulate it (in the case of an arbitrary field) is as follows:
-If $k$ is a field, then:
-
-For any ideal $I$ in $k[x_1,\ldots,x_n]$,
-the radical $\sqrt{I}$ is the intersection of all maximal ideals $\mathfrak m$
-in $k[x_1,\ldots,x_n]$ containing $I$.
-If $\mathfrak m$ is a maximal ideal of $k[x_1,\ldots,x_n]$, then there is a homomorphism of $k$-algebras
-$k[x_1,\ldots,x_n] \to \overline{k}$ whose kernel is precisely $\mathfrak m$.
-
-These two facts taken together say that you can recover $\sqrt{I}$ by knowing all of the points of what you call ${\mathbf V}(I)$. (With a little work one
-can deduce these statements from the Nullstellensatz for $\overline{k}$.)
-[Note also that many (I would guess most) people, when they write $V(I)$,
-would mean what you call ${\mathbf V}(I)$; i.e. even when the ideal consists of polynomials in a non-algebraically closed field $k$, they would take the variety $V(I)$ attached to $I$ to consist of all the common zeroes of the polynomials with coordinates lying in $\overline{k}$ (not just those with coordinates lying in $k$). Of course this is just a matter of convention; but it is a common convention which it might be helpful to be aware of.]<|endoftext|>
-TITLE: How many ways can $r$ nonconsecutive integers be chosen from the first $n$ integers?
-QUESTION [6 upvotes]: During self-study, I ran across the question of how many ways six numbers can be chosen from the numbers 1 - 49 without replacement, stipulating that no two of the numbers be consecutive.
-I can obtain a simple lower bound by saying that, in the worst-case scenario, when you choose a particular number, there are now three numbers that cannot be chosen next. For example, if I first pick 17, then I can't choose 16, 17, or 18 for the remaining choices. This gives me the lower bound
-$$\frac{49\cdot 46\cdot 43\cdot 40\cdot 37\cdot 34}{6!} = 6{,}773{,}770 \tfrac{8}{9}$$
-This is about 48% of ${}_{49}C_6 = 13,983,816$. The real answer must be bigger (and an integer). I haven't found a way to calculate it, though.
-The original problem asked to show that the probability of having non-consecutive integers when you choose six is greater than 50%, so if the problem is complicated to count exactly, better approximations that show the answer is above 50% would also be appreciated.
-Of course, I can use a computer to do the counting, but I'm interested in learning what methods I'm missing.
-
-REPLY [11 votes]: Another good way of solving problems like this is to set up a correspondence with a problem you already know how to solve. In this case, imagine that you've got a sorted set of six non-consecutive numbers $a_1, a_2, \dots a_6$ between 1 and 49. What does it look like? Well, the first number $a_1$ is essentially arbitrary; it can't be greater than a certain value (i.e. 39) or else there isn't 'room' for five more non-consecutive numbers above it, but we can ignore that for now. The second number $a_2$ has to be greater than $a_1+1$ — that's what it means to be nonconsecutive, after all — so we know that $a_2-1$ (call this $b_2$) is greater than $a_1$. Similarly, $a_3$ has to be greater than $a_2+1$, so $a_3-1$ is greater than $a_2$, and $a_3-2$ is greater than $a_2-1 = b_2$; we can call this $b_3$. And so on, and so forth, until we define $b_6 = a_6-5$. But this correspondence works both ways — given the $b_n$ it's easy to get the $a_n$ — and so we have a one-to-one correspondence between our non-consecutive sets of numbers and an ordinary combination (but with a different upper bound - can you see what it should be?). It takes a while to build this sort of instinct, but learning how to spot these correspondences is the most basic tool for proving most combinatorial identities.<|endoftext|>
-TITLE: RSA in plain English
-QUESTION [21 upvotes]: I'm a computer science student, I'm not a mathematician, and I don't know anything about number or group theory.
-I'm looking at RSA, and I want to understand it.
-I know what Fermat's little theorem and Euler's totient function are, and this is what I've got:
-
-$p, q$ primes, $n=pq$.
-$d$ relatively prime to $\varphi(n)=(p-1)(q-1)$.
-$e$ so that $ed \equiv 1 \pmod{\varphi(n)}$.
-Hence $x^{ed} \equiv x \pmod{n}$.
-
-For encryption I have to use $c=w^e \bmod n$, and for decryption $w=c^d \bmod n$. But I don't understand what is happening behind these formulas, and how they are linked.
-
-REPLY [28 votes]: Since you ask for "plain English", let me discuss a bit of what the issue is and how RSA hopes to address it.
-As I mentioned in an earlier answer, the underlying problem is this: you want to communicate securely with someone. There are two standard techniques: one is steganography, which consists of trying to hide the very existence of the communication; that's not usually very mathematical, and it is difficult to do.
The other is cryptography, which consists of obscuring the message so that even if the message is overheard, the people overhearing it will not understand it. A method for doing this is called a "cryptosystem."
-Cryptosystems usually rely on a key, a way for the speaker to encrypt his message, and a way for the listener to decrypt it (figure out what the message says). Historically, most cryptosystems used what are called "symmetric keys": the piece of information that was needed to encrypt was the same as the piece of information that was needed to decrypt (or at least, knowing one was "equivalent" to knowing the other). Both sides needed to know the key. This leads to a difficulty: how do we agree on a key? If we cannot communicate without being overheard, then we cannot agree on a key without everyone finding out what the key is, which makes the system useless. And if we can communicate without being overheard, then there's no point in the whole song and dance; we can just talk using that communication.
-There were a number of ways of solving this issue: traditionally, you had a secure way of communication with limited bandwidth (only small amounts of information could be exchanged through it, and only at certain times), so that the key could be exchanged but communication was too difficult (diplomatic bags, spies, etc).
-But there is another type of cryptosystem, discovered in the 70s (it's known that the UK's GCHQ knew about them before, and there are always rumors that the NSA knew about them before they were "publicly discovered" as well), which is not symmetric, in the following sense: knowing the encryption key does not give you enough information to find the decryption key. They are called "public key cryptosystems", because you can tell the entire world how to encrypt messages for you, but knowing this information does not give them a way to decrypt messages sent to you. That requires the "decryption key", which you only know.
-[Sorry, we'll have to throw some math in now, but you had to expect it.] These systems rely on what are called trapdoor functions. The idea of a trapdoor function is that you have a function $f$ that encrypts, but for which it is difficult to find the inverse, $f^{-1}$ (which decrypts); so that even if you know exactly what $f$ is, and you know that the output of $f$ was, say, $D$, it is still very hard to find $f^{-1}(D)$ (which would be the message being sent). However, if you happen to know an extra piece of information, $k$, then from $f$, $D$, and $k$ you can easily find $f^{-1}(D)$ (think of $k$ as the lever that "opens the trapdoor" and lets the inverse fall through). So the idea of these public key systems is to tell everyone what $f$ is, but keep $k$ secret. Nobody needs to know $k$ in order to compute $f(M)$, which they can do in secret. Then they can yell $f(M)$ at you; it doesn't matter if people overhear $f(M)$, because it's very hard to find $M=f^{-1}(f(M))$ (very hard to decrypt). But since you know the secret piece of information $k$, when you hear $f(M)$, you can use $k$ to find $M$, and voilà!, you know the secret message but nobody else does.
-RSA is one of these systems. In RSA, the messages are numbers (it is straightforward to convert text into a number). I want to communicate a number to you secretly, even though everyone can hear me talking.
-The way RSA works is by using modular exponentiation.
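-(A remark on that: modular exponentiation is cheap to compute even for enormous numbers, because one can square repeatedly instead of multiplying $e$ times. A minimal sketch, assuming nothing beyond plain Python; the built-in pow(x, e, n) already does exactly this, and the function name here is just for illustration:)
-
-def modexp(x, e, n):
-    # compute x^e mod n by repeated squaring: O(log e) multiplications
-    result = 1
-    x %= n
-    while e > 0:
-        if e & 1:                     # lowest binary digit of e is 1
-            result = (result * x) % n
-        x = (x * x) % n               # square the base
-        e >>= 1                       # move to the next binary digit
-    return result
-
-assert modexp(7, 128, 13) == pow(7, 128, 13)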
You select a big number $n$, and two numbers, $e$ and $d$, such that $ed\equiv 1\pmod{\varphi(n)}$ (Euler's $\varphi$ function). You tell everyone what $n$ is, and you tell everyone what $e$ is; these are the public keys, but you keep $d$ secret. If I want to send you a message $M$, I first make sure that $\gcd(M,n) = 1$ and $1\lt M\lt n$ (if not, I modify the message a bit so that it is, or I break it up into pieces if it is too big). Then I compute $R=M^e \bmod n$, which is easy to do with computers, and I tell everyone that I'm sending you the message $R$. When you receive $R$, you can figure out what the original message $M$ was, simply by computing $R^d \bmod n$ (which again is easy to do computationally). The reason this will tell you what $M$ was is due to Euler's Theorem.
-Euler's Theorem. If $n$ is a positive integer, and $\gcd(a,n)=1$, then $a^{\varphi(n)} \equiv 1 \pmod{n}$.
-This is a generalization of Fermat's Little Theorem (in Fermat's Little Theorem, you have $n=p$ a prime, so $\varphi(n)=\varphi(p)=n-1$, and the equation tells you that $a^{p-1}\equiv 1\pmod{p}$ if $\gcd(a,p)=1$).
-Since $ed\equiv 1 \pmod{\varphi(n)}$, we can write $ed = k\varphi(n)+1$ for some integer $k$. Then we have:
-$$ R^d \equiv (M^e)^d \equiv M^{ed} \equiv M^{k\varphi(n)+1} = (M^{\varphi(n)})^kM \equiv 1^kM = M \pmod{n}$$
-(by using the rules of exponentiation, and because $M^{\varphi(n)}\equiv 1\pmod{n}$).
-Now, this means that you can figure out $M$, because you know $d$. Can other people figure out $M$ without knowing $d$?
-If you can figure out $\varphi(n)$, then it turns out to be very easy to figure out $d$ from knowing $e$: because you know that $\gcd(e,\varphi(n))=1$, then using the Euclidean Algorithm you can find integers $x$ and $y$ such that $ex+\varphi(n)y = 1$; and then $d\equiv x\pmod{\varphi(n)}$, and all of this can be done easily. So you want to make sure that computing $\varphi(n)$ is not easy, even if you know $n$.
-If you can factor $n$ into primes, though, then $\varphi(n)$ is very easy to find. So you want $n$ to not be a prime, but to have very few prime factors (because every time you find a prime factor, you divide it out, and you have a smaller problem left). So ideally, we want $n$ to just be a product of two really big primes, $n=pq$.
-So, the way this works is: you first find two really big primes secretly (there are ways of doing this), $p$ and $q$. Then you compute $n=pq$. Since you know $p$ and $q$, you also know that $\varphi(n) = (p-1)(q-1)$, so $\varphi(n)$ is easy for you. Then you pick an $e$, and since you know $\varphi(n)$, you can find $d$ easily (there are "bad choices" of $e$ that will make finding $d$ not too hard, but they are known; you avoid them; there are also bad choices of $p$ and $q$, so you avoid them too, don't worry too much about this).
-So, now you know $p$, $q$, $d$ and $e$. You tell everyone $pq$ and $e$, but you keep $d$, $p$, and $q$ secret.
-If I want to send you message $M$, $1\lt M \lt pq$, I don't tell anyone what $M$ is, I compute $M^e\bmod n$, and I tell everyone $M^e$. You can figure out $M$ because you know $d$, so as above, $(M^e)^d = M^{ed} \equiv M\pmod{n}$ by Euler's Theorem.
-The hope is that just from knowing $n$, $e$, and $M^e \bmod{n}$, it is hard to figure out $M$.
-The RSA Problem is precisely that problem: if you know $n$ (and you know that $n$ is the product of two primes, but you don't know which primes ahead of time), you know $e$, and you know $M^e\bmod{n}$, can you figure out $M$?
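-To make the exchange concrete, here is a toy numeric run of the scheme just described (a sketch only: the primes are absurdly small, real RSA uses primes hundreds of digits long, and pow(e, -1, phi) needs Python 3.8+):
-
-p, q = 61, 53               # the two secret primes
-n = p * q                   # n = 3233, told to everyone
-phi = (p - 1) * (q - 1)     # phi(n) = 3120, kept secret
-e = 17                      # public exponent, gcd(e, phi) = 1
-d = pow(e, -1, phi)         # d = 2753, secret: e*d = 1 (mod phi)
-M = 65                      # the message, with 1 < M < n and gcd(M, n) = 1
-R = pow(M, e, n)            # encrypt: everyone may overhear R = 2790
-assert pow(R, d, n) == M    # decrypt: only the holder of d recovers M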
The security of the RSA cryptosystem depends on how hard the RSA Problem is to solve.
-If I can factor $n$, then I can solve the RSA Problem: I can find $\varphi(n)$, then I can use $\varphi(n)$ and $e$ to find $d$, and once I have $d$ and $M^e$, I just figure out $(M^e)^d \equiv M \pmod{n}$. So the RSA Problem is no harder than the problem of factoring a product of two primes. Luckily, insofar as we know, the problem of factoring a product of two large primes is pretty hard.
-It is not known, however, if there might be a different way of solving the RSA problem that does not rely on factoring $n$. Maybe, maybe not.
-I hope this helps.<|endoftext|>
-TITLE: Is there a measurable set $A$ such that $m(A \cap B) = \frac12 m(B)$ for every open set $B$?
-QUESTION [7 upvotes]: Is there a measurable set $A$ such that $m(A \cap B)= \frac12 m(B)$ for every open set $B$?
-Edit: (t.b.) See also A Lebesgue measure question for further answers.
-
-REPLY [6 votes]: Hint: Lebesgue density theorem.
-Alternatively, approximate $A\cap[0,1]$ with a finite union of intervals.
-
-On second thought, those hints are overly complicated. You can use the definition of Lebesgue measure to find an open set $B$ containing $A\cap[0,1]$ with measure close to that of $A\cap[0,1]$.<|endoftext|>
-TITLE: $(A-λI)X=0$ works, but $(λI-A)X=0$ does not when finding eigenvectors. Why?
-QUESTION [6 upvotes]: Either works when trying to find the eigenvalues, but only the former works when trying to find corresponding eigenvectors. I can understand how it makes a difference, but what I don't understand is how one is supposed to "know" the former is the "correct" form, since it starts from here:
-$AX = λX$
-And from there, you can end up with either:
-$AX - λX = 0$
-or
-$λX - AX = 0$
-And finally:
-$(A-λI)X = 0$
-or
-$(λI-A)X = 0$
-And so should they not both be correct? Furthermore, once it's in that form, can you not multiply both sides by $-1$ to flip them?
-I guess I'm overlooking some sort of rule of algebra when dealing with matrices. Let me know. Thanks.
-EDIT
-Here is an example:
-$\pmatrix{5&3\\6&2}$ has an eigenvalue $-4$
-If we do $(A-λI)X = 0$:
-$\pmatrix{9&3\\6&2}*X = 0$
-This solves to $X = \pmatrix{-1\\3}$ (and multiples of it)
-If we do $(λI-A)X = 0$:
-$\pmatrix{-9&3\\6&-2}*X = 0$
-This solves to $X = \pmatrix{1\\3}$ (and multiples of it)
-
-REPLY [2 votes]: You usually use $(\lambda I-A)X$ to calculate the eigenvalues because the polynomial you get is then monic (its highest-degree coefficient is 1).
-For eigenvectors it doesn't matter at all which way you do it, as you have
-$(\lambda I-A)\vec v=\vec0 \Leftrightarrow -(\lambda I-A)\vec v=\vec 0 \Leftrightarrow (A-\lambda I)\vec v=\vec 0$<|endoftext|>
-TITLE: Help with a formula for Google Adwords
-QUESTION [5 upvotes]: I am creating a spreadsheet to calculate Adwords formulas and I am stuck on how to calculate the Monthly Net Income for each keyword. I have created a formula which calculates it but can't figure out how to take the Monthly Budget limit into account.
-The formula I've created is this one:
-
-Monthly Net Income = ( DailyClicks x ConversionRate x SaleProfit) - ( CPC x DailyClicks )
-
-There is an example of the formula in the file which is a Google Spreadsheet publicly available here: https://spreadsheets.google.com/ccc?key=0AnQMyM9XJ0EidDB6TUF0OTdaZ2dFb2ZGNmhQeE5lb0E&hl=en_GB#gid=2
-(you can create your own copy by going to File > Make a copy...)
-I am releasing this set of tools as Public Domain so feel free to use it :)
-Any help is much appreciated!
-
-REPLY [3 votes]: The formula as presented is not calculating the monthly net income; it is calculating the daily net income.
-Assuming 30 days/month, divide the monthly budget by 30 to get a daily budget,
-then limit DailyClicks to be no more than DailyBudget / CPC.
-Putting it together: Monthly Net Income = 30 x min(DailyClicks, DailyBudget / CPC) x (ConversionRate x SaleProfit - CPC).<|endoftext|>
-TITLE: Given a commutative ring $R$ and an epimorphism of free modules $R^m \to R^n$ is then $m \geq n$?
-QUESTION [15 upvotes]: If $\varphi:R^{m}\to R^{n}$ is an epimorphism of free modules over a nontrivial commutative ring, does it follow that $m \geq n$?
-
-This is obviously true for vector spaces over a field, but how would one show this over just a commutative ring?
-Edit: Is there any way to use the following?
-If $\varphi : M \to M'$ is an epimorphism of left $S$-modules and $N$ is any right $S$-module then $id_N \otimes \varphi $ is an epimorphism.
-
-REPLY [5 votes]: I am answering this question just because it took me a while to figure out a good proof, so it may help other people as well.
-If $\varphi: R^m\rightarrow R^n $ is your epimorphism then you can construct the exact sequence
-$$\text{Ker}(\varphi) \xrightarrow{\ \ \iota \ \ }R^m \xrightarrow{\ \ \varphi \ \ }R^n\longrightarrow 0.\quad (*)$$
-Let $\mathfrak{m}$ be a maximal ideal of $R$. Therefore $R/\mathfrak{m}$ is a field and an $R$-module. Applying right-exactness to $(*)$ (Proposition 2.18, Introduction to Commutative Algebra - Michael Atiyah), we conclude that
-$$R/\mathfrak{m}\otimes_R\text{Ker}(\varphi) \xrightarrow{\ 1\otimes\iota \ }R/\mathfrak{m}\otimes_R R^m \xrightarrow{\ 1\otimes\varphi \ }R/\mathfrak{m}\otimes_R R^n\longrightarrow 0,$$
-is an exact sequence.
-Therefore $1\otimes\varphi$ is surjective. Since $R/\mathfrak{m}\otimes_R R^m \cong (R/\mathfrak{m})^m$ and $R/\mathfrak{m}\otimes_R R^n \cong (R/\mathfrak{m})^n$ are vector spaces of dimensions $m$ and $n$ over $R/\mathfrak{m}$, and $1\otimes\varphi$ is a surjective linear map between them, we have that $m\geq n.$<|endoftext|>
-TITLE: Explanation of a Phrase from Prof. Ravi Vakil's Website
-QUESTION [8 upvotes]: I was just browsing through Ravi Vakil's website when I found a nice article written on what he demands from his students. Here is the webpage. (For those interested!)
-
-http://math.stanford.edu/~vakil/potentialstudents.html
-
-I actually found a line which caught my attention. He says:
-
-"Mathematics isn't just about answering questions; even more so, it is about asking the right questions, and that skill is a difficult one to master."
-
-I would like somebody to explain the meaning of this statement. What does asking the right questions mean here?
-
-REPLY [3 votes]: I would say that most of the advancements in Mathematics were made as a result of trying to resolve conjectures.
-Good conjectures themselves arise from asking questions, picking the right ones and failing to answer them.
-Even during problem solving, one often finds that one needs to make assumptions (which are really questions in disguise) which help solve the problem, and then to verify/prove that the assumptions are true. It is probably easier to verify/prove the assumptions than to come up with them, and without making the right assumptions the problem would probably remain unsolved.
-Without new questions, there would be very little advancement.<|endoftext|>
-TITLE: In what generality does the following isomorphism involving tensors and homs hold?
-QUESTION [5 upvotes]: Let $R$ be a commutative ring and let $M,N$ be $R$-modules. Let $M^*:=\mathrm{Hom}_R(M,R)$.
I have seen the following isomorphism asserted in the case where $R$ is a field and $M$ and $N$ are f.g. vector spaces:
-$M^*\otimes_R N\cong \mathrm{Hom}_R(M,N)$.
-I can give a proof of this using a basis, but in what generality can we say this kind of thing? Does it have a nice arrow-theoretic proof?
-For some reason, I can't accept answers, so I apologize for not doing so. Maybe the mods can force my account to accept the answers? (To answer the gentleman's question, I am having javascript problems, so I fear that it won't help.)
-
-REPLY [5 votes]: Again ol' Bourbaki gives you an answer: when $M$ or $N$ are finitely generated projective modules, the canonical morphism $M^* \otimes N \longrightarrow \mathrm{Hom}(M,N)$ is an isomorphism (Algebra I, chapter II, 4.2, proposition 2).<|endoftext|>
-TITLE: Integrals as Probabilities
-QUESTION [7 upvotes]: Firstly, I'm not a mathematician, as will become evident in a moment. I was pondering some maths the other day and had an interesting thought: if you encase an integrable function over some range in a primitive shape with an easily computable area, then the probability that a random point within said primitive also lies below that function's curve, scaled by the area of the primitive, is the definite integral of the function over that domain.
-So let's say I want to "solve" for $\pi$. Exploiting a circle's symmetry, I can define $\pi$ as:
-$$4 \int_{0}^{1}\sqrt{1-x^2} \,dx$$
-which I can "encase" in the unit square. Since the area of the unit square is 1, $\pi$ is just 4 times the probability that a point chosen at random within the unit square is below the quarter-circle's arc.
-I'm sure this is well known, and so my questions are:
-
-What is this called?
-Is there anything significant about this--for instance, is the relationship between the integral and the encasing object of interest--or is it just another way of phrasing definite integrals?
-
-Sorry if this is painfully elementary!
-
-REPLY [6 votes]: No, this is a very good observation! It is the basis of the modern definition of probability, where all probabilities are essentially defined as integrals. Your particular observation about $\pi$ being closely related to the probability that a point lands in a circle is also very good, and actually leads to a probabilistic algorithm to compute $\pi$ (an example of a Monte Carlo method). The subject in which probabilities are studied as integrals is, broadly speaking, called measure theory.
-Monte Carlo methods are also used to numerically compute other integrals; this is called Monte Carlo integration.
-Now that you have discovered this wonderful fact, here are some interesting exercises. I recommend that you try to draw the relevant regions when $n = 2, 3$ before tackling the general case.
-
-Choose $n$ numbers randomly in the interval $[0, 1]$. What is the probability that the first number you chose is the biggest one?
-Choose $n$ numbers randomly in the interval $[0, 1]$. What is the probability that they are in decreasing order?
-Choose $n$ numbers randomly in the interval $[0, 1]$. What is the probability that their sum is less than $1$?
-
-REPLY [3 votes]: This is known as geometric probability.<|endoftext|>
-TITLE: What happens when the orders of two finite subgroups are relatively prime?
-QUESTION [6 upvotes]: I'm returning from an exam on group theory and there were 2 questions I couldn't solve (and still can't), so I'm asking here for any hint you could possibly give.
-
-Let $G$ be a group and $H$ and $K$ subgroups such that $|H| = n$, $|K| = m$ and $\gcd(n, m) = 1$. Show that $H \cap K = \{e\}$.
-
-I wish I could show you some of my attempts beforehand, but they're all rubbish that didn't get me anywhere. Essentially, the only (and last) thing I remembered and thought could be useful was to see if $H$ and $K$ partition $G$. I think I've read something similar somewhere but can't recall where, so I am uncertain about it.
-The other question I couldn't solve is, I think, related to this, so I shall try it once I understand this one.
-Thanks for taking the time to read! Any tip is appreciated.
-
-REPLY [10 votes]: Let $g \in H\cap K$. By Lagrange's Theorem, the order of $g$ divides both $m$ and $n$, hence it divides $\gcd(m,n)=1$, so $g=e$.<|endoftext|>
-TITLE: Where does Quadratic Reciprocity point?
-QUESTION [6 upvotes]: In his book Lectures on the theory of algebraic numbers, Hecke says that the content of the quadratic reciprocity theorem, formulated and proved entirely in terms of rationals (integers), points beyond the domain of rational numbers.
-He is talking about the algebraic numbers, but how is it seen that it points elsewhere?
-
-REPLY [10 votes]: Just to elaborate on Qiaochu's answer: one way to prove quadratic reciprocity is to observe that, for an odd prime $p$, the quadratic field $\mathbb Q(\sqrt{\pm p})$ (the sign being chosen so that $\pm p \equiv 1 \bmod 4$) is contained in $\mathbb Q(\zeta_p)$ (the field obtained by adjoining a primitive $p$th root of unity to $\mathbb Q$), as can be seen by using Gauss sums, and combining this with the irreducibility of the $p$th cyclotomic polynomial (which shows that $Gal(\mathbb Q(\zeta_p)/\mathbb Q) = (\mathbb Z/p)^{\times}$).
-All these concepts go back to Gauss's Disquisitiones Arithmeticae, which served to inspire and guide all the subsequent developments in number theory in the 19th century.
-Gauss himself introduced his Gaussian integers (i.e. the ring $\mathbb Z[i]$) as part of his investigations of biquadratic (i.e. fourth power) reciprocity. His student Eisenstein investigated cubic reciprocity (and introduced the ring $\mathbb Z[\zeta_3]$ as a tool to this end).
-Later in the 19th century Kummer investigated higher prime power reciprocity laws, and was led to the invention of the main concepts of algebraic number theory (ideals, unique factorization into prime ideals, the class group and class number, all in the context of the fields $\mathbb Q(\zeta_p)$) as part of his investigation.
-The investigation of higher reciprocity laws continued. When the class group of $\mathbb Q(\zeta_p)$ is non-trivial, especially when it has order divisible by $p$, new phenomena emerged, which led Hilbert to the concept of the Hilbert class field.
-Out of all this the general conception of class field theory emerged, and was finally established by Takagi in the early 20th century.
-Hecke was aware of all this tradition, and it is to this tradition and these developments that he is referring in his remark.<|endoftext|>
-TITLE: Fundamental group of $S^2$ with north and south pole identified
-QUESTION [16 upvotes]: Consider the quotient space obtained by identifying the north and south pole of $S^2$. I think the fundamental group should be infinite cyclic, but I do not know how to prove this.
-If it is infinite cyclic, would this and $S^1$ be an example of two spaces which have isomorphic fundamental groups but are not of the same homotopy type?
-
-REPLY [14 votes]: Another way to see this is to use the theory of covering spaces. Consider the subspace $Y$ of $\mathbf R^3$ obtained by placing a copy of $S^2$ centered at $(n, 0, 0)$ for each even integer $n$ (I'll try to remember to add a picture later). The group $\mathbf Z$ acts on this space by translation, and the quotient $\mathbf Z\backslash Y$ is the space in question.
-
-Since $Y$ is simply connected (can you prove this?), it follows from Proposition 1.40c of Hatcher that $\pi_1(\mathbf Z\backslash Y) \cong \mathbf Z$. I don't see a nice way of proving that the spaces aren't homotopy equivalent without using homology, as Chris does.<|endoftext|>
-TITLE: Are all nonarchimedean valuations discrete?
-QUESTION [16 upvotes]: I am studying valuation theory on the way to local class field theory, and the texts I have looked at immediately focus on discrete valuations in developing the theory of nonarchimedean valuations. Why? Are there nondiscrete nonarchimedean valuations? If so, why do we ignore them? (it is true that if a field is locally compact with respect to a nonarchimedean valuation, then that valuation must be discrete, and local compactness is very important, but I wonder if there isn't more to be said here).
-
-REPLY [14 votes]: As Pete says, many of us do not ignore non-discrete valuations. However, I can explain why a text on class field theory might.
-If $K$ is a finite extension of $\mathbb{Q}$, then all nonarchimedean valuations on $K$ are discrete. If your text expects to spend most of its time focused on such fields, that would explain its focus.
-Proof: Any valuation on $K$ gives rise to a valuation on $\mathbb{Q}$. By the classification of valuations on $\mathbb{Q}$, it must be the $p$-adic valuation for some $p$. Normalize $v(p)$ to $1$. If you read your textbook's description of extending valuations from $\mathbb{Q}$ to $K$, you should see that the image lands in $(1/e) \mathbb{Z}$, where $e$ is the ramification degree, and is bounded by $[K:\mathbb{Q}]$. QED
-For an example of a non-discrete valuation of interest in number theory, let $K$ be the extension of $\mathbb{Q}$ obtained by adjoining every $p^k$-th root of unity, for every $k$. If $\zeta_{p^k}$ is a $p^k$-th root of $1$, then $v_p(\zeta_{p^k} -1 ) = 1/((p-1)p^{k-1})$. In particular, the extension of $v_p$ to $K$ is not discrete.<|endoftext|>
-TITLE: Infimum is a continuous function, compact set
-QUESTION [14 upvotes]: Let $f: X \times Y \rightarrow \mathbb{R}$ be a continuous map. Show that if $Y$ is compact then the function $g: X \rightarrow \mathbb{R}$ defined by $g(x) = \inf \{f(x,y): y \in Y\}$ is also continuous.
-No clue here. Can you please help?
-
-REPLY [2 votes]: Old question, I know, but there's a simple synthetic approach:
-If $f: X × Y → ℝ$ is continuous, then so is the curried version $f^\mathrm{curr}\colon X → C(Y,ℝ)$, where $C(Y,ℝ)$ is the space of continuous functions $Y → ℝ$ with the compact-open topology. Since $Y$ is compact, taking infima is continuous as a map $\inf \colon C(Y,ℝ) → ℝ, f ↦ \inf f$. Thus $\inf ∘ f^\mathrm{curr} \colon X → ℝ$ is continuous as well, which is exactly the function in question.<|endoftext|>
-TITLE: Eigenvalues of product of a matrix and a diagonal matrix
-QUESTION [10 upvotes]: My situation is as follows: I have a symmetric positive semi-definite matrix $L$ (the Laplacian matrix of a graph) and a diagonal matrix $S$ with positive entries $s_i$.
-There's plenty of literature on the spectrum of $L$, and I'm most interested in bounds on the second-lowest eigenvalue, $\lambda_2$.
-Now the thing is that I'm not using the Laplacian $L$ itself, but rather the 'generalized' Laplacian $L S^{-1}$. I still need results on its second-lowest eigenvalue $\lambda_2$ (note that the lowest eigenvalue of the Laplacian, both the normal and the generalized, is 0).
-My question is: Are there some readily available theorems/lemmata that allow me to relate the spectra of $L$ and $L S^{-1}$?
-EDIT: Of course, $LS^{-1}$ is not a symmetric matrix any more, so I'm talking about its right-eigenvectors. The eigenvalues of $LS^{-1}$ are the same as those of $S^{-1/2} L S^{-1/2}$, which again is a symmetric positive semi-definite matrix, so I know an eigenbasis actually exists.
-
-REPLY [11 votes]: Let $\mu_i$ be the eigenvalues of $L S^{-1}$. Then $(\lambda_i, \mu_i, s_i)$ obey the multiplicative version of Horn's inequalities. The most basic of these, if $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ and $s_1^{-1} \geq \cdots \geq s_n^{-1}$ and $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_n$, is that
-$$\mu_{i+j-1} \leq \lambda_i s_j^{-1} \ \mbox{and}\ \mu_{i+j-n} \geq \lambda_i s_j^{-1}.$$
-Proof: Let $X=\sqrt{L}$ and $T=\sqrt{S^{-1}}$. So the singular values of $X$ and $T$ are $\sqrt{\lambda_i}$ and $\sqrt{s_i^{-1}}$. Then $\sqrt{\mu_i}$ are the singular values of $XT$. By a result of Klyachko (Random walks on symmetric spaces and inequalities for matrix spectra, Linear Algebra and its Applications, Volume 319, Issues 1–3, 1 November 2000, Pages 37–59), the singular values of a product obey the exponentiated version of Horn's inequalities.<|endoftext|>
-TITLE: How can I tell when two cubic Bézier curves intersect?
-QUESTION [9 upvotes]: I'm working on a little program that converges on vector-based approximations of raster images, inspired by Roger Alsing's genetic Mona Lisa. (I started on this after his first blog post two years ago, played a bit, and it's been on the back burner. But I'd like to un-back-burner it because I have some interesting further ideas.)
-Roger Alsing's program uses randomly-generated sometimes-self-overlapping n-gons. Instead, I'm using Bézier splines linked into what, for lack of a math education, I am calling "Bézier petal" shapes. These are, simply, two cubic Bézier curves which share end points (but not control points). You can see this in action in this little youtube video.
-The shapes I'm currently using are constructed by generating six random points. These are then sorted radially around their center of gravity, and two opposite points become the end-points of the two cubic Bézier curves, with the other four points used as control points where they fit into the order.
-Because I, as I mentioned, have no math background, I can't prove it, but empirically this guarantees a non-degenerate petal, with no overlaps or twists. But, it's frustratingly limiting, since it will never produce crescents or S shapes — theoretically-valid "petals". (The results are not always convex, but concavities can only happen at the end points.)
-(Here's a video of what happens if I allow self-intersecting shapes — not graceful.)
-I'd love suggestions for a process which generates all possible non-intersecting shapes. It's clearly better for the application if the process can be done with minimal computation, although I'm willing to pay a bit of time in exchange for nicer results.
-A further constraint is that small changes in the "source" data — in the current case, the six random points — need to produce relatively small changes in the output, or else my stochastic hill-climbing approach can't get anywhere.
-Also, it's fine if some input values produce no output — those will just get discarded.
-Another possible way of asking the same question: is there an optimization for finding whether two cubic Bézier curves that share endpoints intersect? If I can calculate that quickly, I can simply discard the invalid ones.
-
-REPLY [3 votes]: One way of finding out whether two Bezier curves sharing the endpoints intersect is to calculate all the intersections of two general Bezier curves and discard the $0$ and $1$ parameters from the solutions.
-One way of calculating the intersections of two Bezier curves is the well-known "subdivision" method: when the convex hulls of two Bezier curves do not overlap, the curves cannot intersect either. If they do overlap, subdivide both curves with De Casteljau's algorithm and recurse with all the combinations, so that you check each part of the first curve against each part of the second curve. Stop if the convex hulls are small enough or if they can be approximated by a line segment, in which case simply compute their intersection.
-Alternatively, as you are interested in a visual setting, chop off the ends of one of the curves (i.e., remove the parts with parameter $[0, \epsilon]$ and $[1-\epsilon, 1]$, where $\epsilon$ is a small number), so that you don't look for intersections at the endpoints only to discard them later.
-More on intersection methods of Bezier curves can be found in Comparison of three curve intersection algorithms by Sederberg and Parry (1986).<|endoftext|>
-TITLE: When does an Embedding extend into a Homeomorphism?
-QUESTION [5 upvotes]: This is from a post in sci.math that did not get a full answer; I may repost it for the OP there:
-I am interested in the issue, which I read about on another site, of when an embedding of a closed set extends to a homeomorphism, i.e., if $C$ is closed in $X$, and $f:C \rightarrow Y$ is an embedding, when can we extend $f$ to $F$ so that $F:X \rightarrow Y$ is a homeomorphism, and $F|_C=f$ (i.e., $f=F$ on $C$)? Of course there are trivial cases, like when $f$ is the identity on $C$.
-I know of, e.g., Tietze Extension, and I think there are results about extending maps from a space into its compactification; I think the map must be proper (inverse image of a compact set is compact). But I don't know of any general result.
-I will learn Latex as soon as I can; my apologies for using ASCII
-
-REPLY [2 votes]: gary, I think your question is too general. For instance, there is something called a 'cofibration' in topology, which deals with this type of problem under a strong condition: namely, $f:A\rightarrow Y$ is continuous and has an extension iff every $g\colon A\rightarrow Y$ homotopic to $f$ has one.
-If a space $(X,A)$ has the homotopy extension property with respect to $Y$ then it is easier to check whether a map $f\colon A\rightarrow X$ has an extension, because, now, there are billions of maps that should simultaneously have extensions. However, even under this condition, there is no general theorem that I know.<|endoftext|>
-TITLE: What constants do I need to create this specific logarithmic spiral?
-QUESTION [5 upvotes]: please bear with me as I'm not a mathematician and this is difficult to word properly.
:]
-I need the equation for a logarithmic spiral (let's call it $S(\theta)$) that meets certain constraints for a music visualizer I'm working on. Let's call the arc-length of the spiral $A(\theta)$. I'm looking for a spiral that meets the following requirements:
-$A(0)=27.5$
-$A(2\pi)=55.0$
-$A(4\pi)=110.0$
-$A(6\pi)=220.0$
-$A(8\pi)=440.0$
-I'm basically wanting the arc-length of this spiral to correspond to the frequency of a musical note. The requirements I gave basically plot the octaves of "A" notes (http://www.phy.mtu.edu/~suits/notefreqs.html). This way I can create a "directional" spectrogram. All "A" notes will point toward $\theta=0,2\pi,4\pi,...$, all "E" notes will point roughly in the direction of $\theta=\pi,3\pi,...$.
-I spent some time grinding out rough approximations of $a$ and $b$ values that give me proper values between arclengths of 27.5 and 14080.
-The closest constant values I've landed on are:
-$a=1.5145$
-$b=0.0551625$
-I'm looking for a way to generate values or expressions for $a$ and $b$ that will produce very accurate arclength results for the extent of the average human hearing range (20 Hz-20 kHz). If someone could explain the process I would need to go through to get these values I would be very grateful. Let me know if something doesn't make sense. My mind is a horribly tangled place.
-
-REPLY [6 votes]: The arc-length of a logarithmic spiral is given here. You seem to want $A(\theta + 2\pi) = 2 A(\theta)$. This gives you $b = \ln 2/(2\pi)$. Use the value you want for $A(0)$ to find $a$.<|endoftext|>
-TITLE: Orientation induced on submanifolds
-QUESTION [5 upvotes]: Suppose you are given two oriented manifolds with boundary, say $B$ and $B'$, with $\partial B = M = \partial B'$. Identify the boundaries and form $C = B \sqcup_{Id: M \to M} B'$. I want to see why, with the orientation induced by being submanifolds of $C$, $B$ and $B'$ induce opposite orientations on their boundary $M$. I'm particularly interested in the way the fundamental classes of $C,B,B'$ and $M$ behave.
-Thanks a lot!
-
-REPLY [3 votes]: I'm assuming you're JuanOS from MO, and this question corresponds to the MO question: https://mathoverflow.net/questions/54278/orientation-of-a-glued-manifold
-Here's how I interpret your question. You have an oriented manifold $C$ which is compact without boundary, and you've decomposed it into the union of two submanifolds $B$ and $B'$ with $B \cap B' = M$, $M$ a compact manifold. Let $n=dim(C)$, so $n-1=dim(M)$. The global orientation class for $C$ is a generator $\mu_C \in H_n C$. There are restriction maps $H_n C \to H_n(B,M)$ and $H_n C \to H_n(B',M)$ which give the corresponding global orientations $\mu_B$ for $B$ and $\mu_{B'}$ for $B'$ respectively. Then there are the pairs $M \to B \to (B,M)$ and $M \to B' \to (B',M)$ and you want to know how the two generators of $H_n(B,M)$ and $H_n(B',M)$ compare when mapped to elements of $H_{n-1}M$ via the two connecting maps for the above pairs; specifically, you want to show that $\partial \mu_{B} + \partial \mu_{B'} = 0$. Moreover, you want an argument that's fairly generic, in particular not specific to triangulated smooth manifolds or anything like that.
-The above "restriction maps" are formally induced by inclusion $C \to (C, C \setminus int(B') ) \leftarrow (B,M)$, one being an excision inclusion, the other just an inclusion.
-
-I don't believe this is as complicated as Kuperberg makes out -- the complication comes when attempting to bridge the gap between the smooth or simplicial views and the strictly homological view, especially say in the singular homology setting. But the above formulation side-steps those complications as your question is phrased entirely in terms of Mayer-Vietoris type constructions.
-Okay, so here's a cheap way to check that it's true. Since $M$ is a submanifold of $C$, given a point $p \in M$ you can find an orientation-preserving degree 1 map $f : C \to S^n$ such that $f(M) \subset S^{n-1} \times \{0\} \subset S^n$. Moreover, you can ensure $f$ when restricted to $M$ is an isomorphism on the top homology groups of $M$ and $S^{n-1}$ respectively, and that $f$ sends $B$ to the top hemi-sphere, and $B'$ to the bottom hemi-sphere. So by naturality, you've reduced your problem to the case $D^n \sqcup_{S^{n-1}} D^n = S^n$, i.e. $C= S^n$, and $B$ and $B'$ both discs, which one way or another boils down to a cellular homology computation (we are using singular homology, but for a CW-complex singular homology is effectively cellular homology). This is the equivalent step to using outward pointing normals and determinants for smooth manifolds.<|endoftext|>
-TITLE: Codomain of the Fourier transform
-QUESTION [6 upvotes]: Among other things, the Fourier transform maps functions from $L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n)$, $L^1(\mathbb{R}^n) \to C_0(\mathbb{R}^n)$ (continuous functions vanishing at infinity), and $\mathcal{S}(\mathbb{R}^n)\to\mathcal{S}(\mathbb{R}^n)$ (Schwartz space, or the space of rapidly decreasing functions).
-I'm interested in looking more closely at the codomain of the second mapping. Since $C_0 \subset L^\infty$, every $L^1$ function is mapped by the Fourier transform into an $L^\infty$ function. However, I was wondering if it is easy to find specific examples of functions that are only in $L^1$, but are mapped into $L^1$ or $L^2$ (or possibly any other $L^p$ for $1 \leq p < \infty$).
-
-REPLY [4 votes]: By only in $L^1$ I understand that the function $f$ is in $L^1$ but not in $L^p$ for any $p>1$. Such a function is for instance $f(x)=\min(|x|^{-a},|x|^{-b})$, where $0<a<1<b$.<|endoftext|>
-TITLE: Cauchy Sequences in R Converge: to What?
-QUESTION [5 upvotes]: I am trying to show the completeness of $\mathbb{R}$, using the LUB property.
-The problem is that I don't know, given a Cauchy sequence, where the limit would come from; I can check if a sequence $(a_n)$ converges to a specific value, but I don't know how to come up with the limit value that the sequence would converge to.
-I imagine as the intervals containing the terms $a_m$ ($m>N$) become smaller, as $|a_m-a_k|<\epsilon$ for $m,k>N$, maybe $a_n$ converges to the limiting point of the intervals $(a_m,a_k)$.
-I used limsup and liminf to show that $a_n$ converges, but this does not tell me what $a_n$ should converge to.
-Any suggestions?
-
-REPLY [7 votes]: Sure.
-Step 1: Show that any Cauchy sequence is bounded.
-(This is true in any metric space and has nothing to do with completeness.) It follows that the limsup and liminf of (the underlying set of terms of) your sequence are finite.
-Step 2: Show that any sequence with limsup $L < \infty$ has a subsequence converging to $L$.
-Step 3: Now you have something specific to try to show the Cauchy sequence converges to: namely, $L$. Show in fact that if a subsequence of a Cauchy sequence converges to some $L$, then the sequence itself converges to $L$.
-
-Step 4 (optional): Think about what would happen if in Steps 2 and 3 you chose to use the liminf instead of the limsup.<|endoftext|>
-TITLE: Adjoint functors as "conceptual inverses"
-QUESTION [31 upvotes]: The Stanford Encyclopedia of Philosophy's article on category theory claims that adjoint functors can be thought of as "conceptual inverses" of each other.
-For example, the forgetful functor "ought to be" the "conceptual inverse" of the free-group-making functor. Similarly, in multigrid the restriction operator "ought to be" the conceptual inverse of its adjoint prolongation operator.
-I think there is some deep and important intuition here, but so far I can only grasp it in specific cases and not in the abstract sense. Can anyone help shed light on what is meant by this statement about adjoint functors being conceptual inverses?
-
-REPLY [2 votes]: One of my friends was confused by the same question and decided to ask the author of the article about it.
-His reply was,
-
-If you take arbitrary abstract categories and stipulate that a pair of adjoint functors exists between them, the only thing you can hold on to is their abstract or formal properties, e.g. the left adjoint preserving colimits, etc. Of course, in general, it is not the case that you have a forgetful functor, but the general point remains. (Recall that I used the case of the adjoint functor simply to illustrate the main general point.) One can and should consider a pair of adjoint functors as providing conceptual inverses. One has to be careful and look at the details, even more so since a functor can have both a left and a right adjoint! The best analogy is probably from topology with the notions of section and retraction to a given map. But again, one has to be careful and it is probably better to think about these up to homotopy.<|endoftext|>
-TITLE: How to prove $|f(x) - f(y)| < |x - y|$ if $f(x) = x + 1/x$ where $x > 1$
-QUESTION [5 upvotes]: I have attempted as follows:
-$|f(x) - f(y)| = |x + 1/x - y - 1/y|$
-$\leq |x - y| + |1/x - 1/y|$
-Stuck here.
-Any help?
-
-REPLY [7 votes]: Suppose $x,y>1$. If $x=y$ then both sides are 0 in your inequality, so I assume $x\ne y$.
-Say that $x>y$. Then $f(x)-f(y)=(x-y)+(1/x-1/y)=\displaystyle (x-y)(1-\frac1{xy})$. Now, the fraction $1/(xy)$ is positive, but strictly less than 1, as $1<xy$. Hence $0<1-\frac1{xy}<1$, and multiplying by $x-y>0$ we get $0<f(x)-f(y)<x-y$, that is, $|f(x)-f(y)|<|x-y|$.<|endoftext|>
-TITLE: Which series converges the most slowly?
-QUESTION [11 upvotes]: We say $a_n$ converges slower than $b_n$ if there exists an $x$ such that for all $m>x$, $a_m>b_m$, and both $\sum a_n$ and $\sum b_n$ converge.
-Ignoring constant factors, which type of function converges the slowest?
-
-REPLY [21 votes]: There is no such function! Walter Rudin in his "Principles of Mathematical Analysis" has a series of exercises trying to indicate that there is no function at the "boundary" between convergence and divergence.
-There was an interesting series of answers about this very issue at MathOverflow (MO): https://mathoverflow.net/questions/49415/nonexistence-of-boundary-between-convergent-and-divergent-series
-A quick way of seeing that there is no "slowest" convergent series is Rudin's exercise 12.b, mentioned at the link above: If $\sum a_n$ converges, and the $a_n$ are positive, then $\sum a_n/\sqrt {r_n}$ converges, where $r_n$ is the tail $\sum_{i\ge n}a_i$. Note $r_n\to 0$, so $a_n/\sqrt{r_n}>a_n$ for $n$ large enough.
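-As a quick numerical sanity check of exercise 12.b, here is a small sketch I am adding (the variable names are made up), taking the convergent series $a_n = 1/n^2$; the new terms dominate the old ones from $n=2$ on, yet their sum still levels off:
-
-    aa = Table[1./n^2, {n, 1, 100000}];
-    rr = Reverse[Accumulate[Reverse[aa]]];  (* tails r_n = a_n + a_(n+1) + ... *)
-    bb = aa/Sqrt[rr];                       (* the more slowly converging series *)
-    {Total[aa], Total[bb]}                  (* both sums are finite *)
-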
-
-I recommend that you take a look at the answers at MO and at the references they suggest, for more subtle examples.<|endoftext|>
-TITLE: bound on the cardinality of the continuum? I hope not
-QUESTION [16 upvotes]: Suppose we don't believe the continuum hypothesis. Using Von Neumann cardinal assignment (so I guess we believe well-ordering?), is there any "familiar" ordinal number $\alpha$ such that, for non-tautological reasons, $\aleph_\alpha$ is provably larger than the cardinality of the continuum? I would hope not, since it would seem pretty silly if something like $\alpha = \omega_0$ worked and we could say "well gee, we can't prove that $c = \aleph_1$, but it's definitely one of $\aleph_1, \aleph_2, \ldots , \aleph_{73}, \ldots$". I (obviously) don't know jack squat about set theory, so this is really just idle curiosity. If a more precise question is desired I guess I would have to make it
-
-For any countable ordinal $\alpha$ is the statement: $c < \aleph_\alpha$ independent of ZFC in the same sense as the continuum hypothesis?
-
-assuming that even makes sense. Thanks!
-
-REPLY [26 votes]: Mike: If you fix an ordinal $\alpha$, then it is consistent that ${\mathfrak c}>\aleph_\alpha$. More precisely, there is a (forcing) extension of the universe of sets with the same cardinals where the inequality holds.
-If you begin with a model of GCH, then you can go to an extension where ${\mathfrak c}=\aleph_\alpha$ and no cardinals are changed, as long as $\alpha$ is not a limit ordinal of countable cofinality. For example, $\aleph_{\aleph_\omega}$ is not a valid size for the continuum. But it can be larger.
-Here, the cofinality of the limit ordinal $\alpha$ is the smallest $\beta$ such that there is an unbounded function $f:\beta\to\alpha$. There is a result of König that says that $\kappa^\lambda>\kappa$ if $\lambda$ is the cofinality of $\kappa$. If $\kappa={\mathfrak c}$, this says that $\lambda>\omega=\aleph_0$, since ${\mathfrak c}=2^{\aleph_0}$ and $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$. Since $\aleph_{\aleph_\omega}$ has cofinality $\omega$, it cannot be ${\mathfrak c}$.
-But this is the only restriction! The technique to prove this (forcing) was invented by Paul Cohen and literally transformed the field.<|endoftext|>
-TITLE: Reference for the Universal Coefficient Spectral Sequence
-QUESTION [7 upvotes]: I'm totally ignorant about the Universal Coefficient Spectral Sequence (I used to work only with principal ideal domains, where the Universal Coefficient Theorem only amounts to a short exact sequence) but I need now to understand it. The book I was redirected to is the infamous User's Guide to SS, where I have only been able to find very general versions (one about spectra and the other with functors on general abelian categories). I guess that an exposition in the simple case of chain complexes of $A$-modules exists somewhere, but I don't know where to look.
-Do you have any suggestions?
-
-REPLY [2 votes]: Well, I'm not sure that my answer is what you want.
-I learned the universal coefficient spectral sequence from the following two sources.
-J. Levine, "Knot modules I" (1977, Trans. AMS)
-Cochran-Orr-Teichner, "Knot Concordance, Whitney Towers and L2-Signatures" (2003, Ann. Math.)
-In Levine's paper (see Thm. 2.3 on page 5), Levine gives a quick and readable construction of the UCSS.
-In Cochran-Orr-Teichner's paper (see Remark 2.8, proof of Thm. 2.13), they use the UCSS to control the dimension of homology groups with coefficients in the (noncommutative) quotient field of an Ore domain.
Also, they give some examples (Remark 2.8) of collapsing conditions for the UCSS, which you might want.<|endoftext|>
-TITLE: How to find the sum of the following series
-QUESTION [5 upvotes]: How can I find the sum of the following series?
-$$
-\sum_{n=0}^{+\infty}\frac{n^2}{2^n}
-$$
-I know that it converges, and Wolfram Alpha tells me that its sum is 6.
-Which technique should I use to prove that the sum is 6?
-
-REPLY [6 votes]: For $x$ in a neighborhood of $1$, let
-$$
-f(x) = \sum\limits_{n = 0}^\infty {\frac{{x^n }}{{2^n }}} = \sum\limits_{n = 0}^\infty {\bigg(\frac{x}{2}\bigg)^n } = \frac{1}{{1 - x/2}} = \frac{2}{{2 - x}}.
-$$
-Thus, on the one hand,
-$$
-f'(x) = \sum\limits_{n = 0}^\infty {\frac{{nx^{n - 1} }}{{2^n }}} \;\; {\rm and} \;\; f''(x) = \sum\limits_{n = 0}^\infty {\frac{{n(n - 1)x^{n - 2} }}{{2^n }}} ,
-$$
-and, on the other hand,
-$$
-f'(x) = \frac{2}{{(2 - x)^2 }} \;\; {\rm and} \;\; f''(x) = \frac{4}{{(2 - x)^3 }}.
-$$
-Hence,
-$$
-f'(1) = \sum\limits_{n = 0}^\infty {\frac{n}{{2^n }}} = 2 \;\; {\rm and} \;\; f''(1) = \sum\limits_{n = 0}^\infty {\frac{{n(n - 1)}}{{2^n }}} = 4.
-$$
-Finally,
-$$
-\sum\limits_{n = 0}^\infty {\frac{{n^2 }}{{2^n }}} = f'(1) + f''(1) = 6.
-$$
-The idea here was to consider the probability-generating function of the geometric $(1/2)$ distribution.<|endoftext|>
-TITLE: Lebesgue measurable but not Borel measurable
-QUESTION [53 upvotes]: I'm trying to find a set which is Lebesgue measurable but not Borel measurable.
-So I was thinking of taking a Lebesgue set of measure zero and intersecting it with something so that the result is not Borel measurable.
-Is this a good approach? Can someone give a hint what set I would take (so please no full answers, I want to find it myself in the end ;-))
-Also, I seem to remember that to construct a non-Lebesgue measurable set one needs to use the axiom of choice. Is this also the case for non-Borel measurable sets?
-
-REPLY [31 votes]: Bit of a spoiler: Your approach seems on the way to what I've seen done, but instead of trying to intersect your set, you might want to map a non-measurable one into it using a measurable map and remember how preimages of Borel sets behave.
-Spoiler: your map could be one from the unit interval onto that very famous set by that very famous guy born in 1845 who suffered from depression and the dislike of many of his contemporaries... ;-)<|endoftext|>
-TITLE: Famous puzzle: Girl/Boy proportion problem (Sum of infinite series)
-QUESTION [15 upvotes]: Puzzle
-In a country in which people only want boys, every family continues to have children until they have a boy. If they have a girl, they have another child. If they have a boy, they stop. What is the proportion of boys to girls in the country?
-My solution (not finished)
-If we assume that the probability of having a girl is 50%, the set of possible cases is:
-Boy (50%)
-Girl, Boy (25%)
-Girl, Girl, Boy (12.5%)
-...
-So, if we call G the number of girls that a family had and B the number of boys that a family had, we have:
-$B = 1$
-$P(G = x) = (1/2)^{x+1}$
-So the expected number of girls is
-$E[G] = \sum_{x\ge 0} x\,(1/2)^{x+1}$
-I feel like the sum of this infinite series is 1 and that the proportion of girls/boys in this country will be 50%, but I don't know how to prove it!
-Thanks!
-
-REPLY [3 votes]: Google and other interviewing companies made an important assumption — that there are infinitely many families. Under this assumption, we have their clever and simple answer: In expectation, there are as many boys as girls.
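-A quick simulation sketch (my own illustrative addition, assuming independent fair births; 0 stands for a girl, 1 for a boy) agrees with this for a large number of families, and previews the single-family case computed below:
-
-    girlFraction[families_] := Module[{girls = 0, g},
-      Do[g = 0; While[RandomInteger[] == 0, g++]; girls += g, {families}];
-      N[girls/(girls + families)]]         (* every family has exactly one boy *)
-
-    girlFraction[10^6]                     (* comes out close to 0.5 *)
-    Mean[Table[girlFraction[1], {10^5}]]   (* close to 0.307 = 1 - Log[2] *)
-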
-
-However, it turns out that the answer depends on the number of families there are in the country. If there is a finite number of families, then the expected fraction of girls in the population is less than one half.
-Consider the extreme scenario where there's only one family: then 1/2 the time the fraction of girls is 0 (B in our only family), 1/4 the time it's 1/2 (GB), 1/8 the time it's 2/3 (GGB), 1/16 the time it's 3/4 (GGGB) etc. And so the expected fraction of girls is:
-$$\frac{1}{2}\times 0+\frac{1}{4}\times \frac{1}{2}+\frac{1}{8}\times \frac{2}{3}+\frac{1}{16}\times \frac{3}{4}+\dots = 1 - \ln 2 \approx 0.307.$$
-If there are 4 families, the expected fraction of girls in the country is about 0.439; and if there are 10, it's about 0.475.
-See this MathOverflow answer for more details.
-(By the way, this puzzle also appeared in Thomas Schelling's Micromotives and Macrobehavior, 1978, p. 72.)<|endoftext|>
-TITLE: Factorial and exponential dual identities
-QUESTION [99 upvotes]: There are two identities that have a seemingly dual correspondence:
-$$e^x = \sum_{n\ge0} {x^n\over n!}$$
-and
-$$n! = \int_0^{\infty} {x^n\over e^x}\ dx.$$
-Is there anything to this comparison? (I vaguely remember a generating function/integration correspondence)
-Are there similar sum/integration pairs for other well-known (or not-so-well-known) functions?
-
-REPLY [8 votes]: Note that this duality is in fact a special case of Ramanujan's master theorem:
-For $x$ in a neighbourhood of $0$:
-$$F(x) = \sum_{n=0}^\infty f(n) \frac{(-x)^n}{n!}$$
-Then,
-$$\int_0^\infty x^{n} F(x)\,dx = n!\, f(-n-1)$$<|endoftext|>
-TITLE: What's the point of studying topological (as opposed to smooth, PL, or PDiff) manifolds?
-QUESTION [61 upvotes]: Part of the reason I think algebraic topology has acquired something of a fearsome reputation is that the terrible properties of the topological category (e.g. the existence of space-filling curves) force us to work very hard to prove the main theorems setting up all of the big machinery to get the payoff we want (e.g. invariance of domain, fixed point theorems). But why should I care about these arbitrary and terrible spaces and functions in the first place when, as far as I can tell, any manifold which occurs in applications is at least piecewise-differentiable and any morphism which occurs in applications is at least homotopic to a piecewise-differentiable one?
-In other words, do topological manifolds really naturally occur in the rest of mathematics (without some extra structure)?
-
-REPLY [8 votes]: I learned a lot from the answers and discussion above. Here is my personal view, not necessarily applicable to others.
-I published my Lectures on Algebraic Topology (W. A. Benjamin) in 1967. The book is an extensive elaboration of notes from a course I taught. All the manifold theorems in my book refer to topological manifolds - I only make a few side remarks about differentiable manifolds, De Rham cohomology and PL-manifolds. My book was later expanded and slightly revised by John Harper, renamed as Algebraic Topology: A First Course (Perseus, 1981).
-I was delighted, in the process of teaching that course, to learn that so much could be proved in this subject without bringing in analysis and with very little messy (imho) combinatorial argumentation. Purity of method is an essential part of mathematical aesthetics, as Hilbert emphasized. And to achieve that purity, one usually has to discover new techniques that apply in greater generality than the previously known methods.
-
-I was glad to see that J. P. May, in his lovely 1999 treatise A Concise Course in Algebraic Topology (U. of Chicago Press), also focused on topological manifolds.<|endoftext|>
-TITLE: Representation theory of $SO(n)$
-QUESTION [10 upvotes]: This is probably not a very ethical question to ask, but I need a fast introduction to a range of concepts about the representation theory of $SO(n)$, and I would be happy to see some online references which will help me on this journey.
-$\\$
-I am listing below the specific concepts about it that I need to know the most.
-
-I want to know about the concept of "lowest weights" and "highest weights" and how a string of $ [ \frac{n}{2}]$ numbers (integers?) say $(h_i, i = 1,2,..,[ \frac{n}{2}])$ label a representation of $SO(n)$.
-I want to know what a "quadratic Casimir" of such a representation is (vaguely I understand that to be the eigenvalue of an operator which commutes with all the basis of the group's Lie algebra). For such a representation as above the quadratic Casimir is
-$$c_2(\{h_i\}) = \sum _{i=1} ^{ i= [ \frac{n}{2}]} \left(h_i ^2 + (n-2i)h_i\right)$$
-which will help see that the quadratic Casimir for a "scalar representation" is $0$, for a "vector representation" it is $n-1$ and for a "spinor representation" it is $\frac{n(n-1)}{8}$
-
-I hope to learn how to convert the above quoted standard physics terminology into the language of weights.
-
-If $\{ H_i\}$ form a set of Cartan generators of the group $SO(2n+1)$ then in the "vector representation" the character of the element labelled by the real numbers say $\{t_i\}$ is $1+\sum _{i=1} ^n 2\cosh(t_i)$ and for the "spinor representation" it is given by $1+\prod _{i=1} ^n 2\cosh(t_i)$ and in general it is given as,
-
-$$\chi (h_i,t_i) = \frac{\det \left(\sinh [ t_i(h_j +(n-j) +\frac{1}{2} ]\right) }{\det \left(\sinh [t_i((n-j) +\frac{1}{2}\right) } $$
-
-The above equation leads to a Clebsch-Gordan-like way of thinking (which I am familiar with for $SO(3)$), namely that $\{h_i\} \times \{\mbox{vector}\} = \{h_i\} +$ All representations obtained from $\{h_i\}$ by adding or subtracting unity from a single $h_i$ such that in the resulting set $h_n \geq 0$ and $h_i \geq h_{i+1}$ for the other $i$'s. And similarly $\{h_i\} \times \{\mbox{spinor}\} = \{h_i\} +$ All representations obtained from $\{h_i\}$ by adding or subtracting half from every $h_i$ such that in the resulting set $ h_n \geq 0$ and $h_i \geq h_{i+1}$ for the other $i$'s
-Similarly for $SO(2n)$ the corresponding character formula looks like,
-
-$$\chi (h_i,t_i) = \frac{\det \left(\sinh [ t_i(h_j + n-j)]\right) + \det \left(\cosh [ t_i(h_j + n-j)]\right) }{\det \left(\sinh [t_i(n-j)] \right) } $$
-And similar interpretations lead to the thinking that $\{h_i\} \times \{\mbox{vector}\} = \{h_i\} +$ All representations obtained from $\{h_i\}$ by adding or subtracting unity from a single $h_i$ such that in the resulting set $\vert h_n \vert \geq 0$ and $h_i \geq h_{i+1}$ for the other $i$'s. And similarly $\{h_i\} \times \{\pm \mbox{chirality spinor}\} = \{h_i\} +$ All representations obtained from $\{h_i\}$ by adding or subtracting half from every $h_i$ with the number of subtractions being even/odd, such that in the resulting set $\vert h_n \vert \geq 0$ and $h_i \geq h_{i+1}$ for the other $i$'s
-$\\$
-I am not sure I am reading everything right, but I hope to get corrected, and I would be happy to see references, hopefully online, which will explain the above things to me.
-
-REPLY [4 votes]: Read the book by Fulton and Harris on Representation Theory.<|endoftext|>
-TITLE: Why is a smooth connected scheme irreducible?
-QUESTION [14 upvotes]: Why is a smooth connected scheme (say over a field) necessarily irreducible?
-Intuitively it makes sense because we might very well expect points in the intersection of two irreducible components to be singular points.
-But what is a proof? Feel free to add any extra hypotheses if needed (e.g., separated if that is required).
-
-REPLY [23 votes]: The local rings of a smooth scheme over a field are regular, and a regular local ring is a domain. Thus a smooth scheme over a field has all local rings being domains. Thus the intersection of any two components must be empty (a point lying on the intersection would not have its local ring being a domain).<|endoftext|>
-TITLE: The set of all sets of two elements does not exist, but the set of sets of two elements of a given set does?
-QUESTION [6 upvotes]: I'm trying to wrap my head around this concept of extracting sets from a given set to show that a subset is indeed a subset.
-Suppose I want to show that the set of all sets of two elements does not exist. I approach it like this.
-Suppose such a set $V$ exists, containing exactly the sets of two elements. Then for any nonempty set $X$, the set $\{X,\emptyset\}$ is a set, by the pairing axiom, and since $X\neq\emptyset$, $\{X,\emptyset\}$ has two elements, so $\{X,\emptyset\}\in V$ for all nonempty sets $X$. Then by the union axiom, $\bigcup V$ is the set of all sets, since $\emptyset\in\bigcup V$, and any nonempty set $X\in\bigcup V$ as well. But the set of all sets does not exist, a contradiction, so such a set $V$ does not exist. Does this approach work?
-However, if I'm given a nonempty set $A$, is the following a proper formulation that shows that the set of all sets of the form $\{a,b\}$ for $a,b\in A$ is a set?
-If $a,b\in A$, then $\{a,b\}\subset A$, and $\{a,b\}\in\mathscr{P}(A)$. Assuming that $a$ and $b$ need not be distinct, consider
-$$
-\{t\in\mathscr{P}(A)\ |\ \exists x\exists y(x\in A\land y\in A\land t=\{x,y\})\}.
-$$
-This is a set by the subset schema. Have I correctly specified the type of sets I want, or do I need to include something about if $z\in t\implies z=x\vee z=y$?, or is that unnecessary? Thanks.
-
-REPLY [6 votes]: The problem is that $V$ is not a set, it is in fact a proper class. A good rule of thumb when it comes to these sorts of problems is that if you can map each set into your collection - it's a proper class.
-If, on the other hand, you started with a set $A$ (which you already know is a set) then you can have the collection of all pairs from the set. The formula which you wrote above is indeed the collection of all singletons and pairs from the set $A$; if you want to exclude the singletons you have to add $x\not= y$ as well.
-This is a result of the Subset Axiom/Replacement Axiom - both are schemes and not actual axioms - which state that if you have a formula in one free variable, then the collection of elements of a set $A$ which satisfy the formula is a set itself; namely, if it's a subcollection of a set then it is a set.
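-Concretely, the separation instance just described, with the clause $x\not= y$ added to exclude the singletons (my rendering of the fix, with the parenthesis closed):
-$$
-\{t\in\mathscr{P}(A)\ |\ \exists x\exists y(x\in A\land y\in A\land x\not= y\land t=\{x,y\})\}.
-$$
-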
-
-The replacement axiom is slightly more complicated in that matter, but it says that the range of a function whose domain is a set is also a set; from this the subset axiom can be deduced easily.<|endoftext|>
-TITLE: Question regarding Weierstrass M-test
-QUESTION [7 upvotes]: The Weierstrass M-test tells us that given a function sequence $(u_{n}(x))$ where $x \in I$, if there exists a convergent series $\sum a_{n}$ such that $|u_{n}(x)|\leq a_{n}$ for all $n$ and $x\in I$, then $\sum u_{n}(x)$ converges uniformly in $I$.
-What about its converse?
-
-If $\sum u_{n}(x)$ converges uniformly in $I$, then there exists a convergent series $\sum g_{n}$ such that $|u_{n}(x)|\leq g_{n}$ for all $n$ and $x \in I$.
-
-Since the theorem isn't in the form of 'if and only if', I'm trying to think of an example to counter the above:
-My attempt was to define a function sequence such as this:
-
-Then taking $u_{1}(x)=f_{1}(x),\ u_{n}(x)=f_{n}(x)-f_{n-1}(x)$. It can be shown that $f_{n}(x)$ converges uniformly in $[1, \infty)$, therefore $u_{n}(x)$ does as well.
-But $|u_{n}(1)|=|f_{n}(1)|$, which is the constant sequence: $1, 1, 1, ...$. If we assume by contradiction that there exists such a sequence $g_{n}\geq 1$ such that $\sum g_{n}$ converges, then by the comparison test $\sum 1$ converges, which is obviously not true.
-First, I'd like to know if the above is true. It took me quite a while to come up with something and I'm not even sure it's true.
-Also, it's very difficult for me to visualize these complicated functions (like the one above) where both $x$ and $n$ play a role. Is there an easier way to deal with these questions?
-
-REPLY [3 votes]: Almost a hundred years ago G.H. Hardy showed that there are power series over $\mathbb{C}$ that converge uniformly on the closed unit disk but not absolutely on the boundary, which is the same as saying that the hypothesis of the Weierstrass M-test doesn't hold (A theorem concerning Taylor's series, Quart. J. Pure Appl. Math. 44 (1913), 147-160). I posted some references including links to examples here.<|endoftext|>
-TITLE: For which $P$ can one claim the existence of the set of all sets that satisfy $P$?
-QUESTION [7 upvotes]: Given a property $P$, are there some rules that are sufficient or necessary to determine if there exists a set of all sets with property $P$?
-
-REPLY [13 votes]: In ZFC, the following are equivalent:
-
-The class $\{x\mid P(x)\}$ is a set.
-There is an ordinal $\alpha$ such that whenever $P(x)$, then $x$ has rank at most $\alpha$.
-There is a set $B$ such that whenever $P(x)$, then $x\in B$.
-There is no class function $F$ from the class $\{x\mid P(x)\}$ onto the ordinals.
-There is an ordinal $\theta$ such that there is no function mapping $\{x\mid P(x)\}$ onto $\theta$.
-There is a set $C$ such that there is no function mapping $\{x\mid P(x)\}$ onto $C$.
-There is an ordinal $\theta$ that does not map injectively into $\{x\mid P(x)\}$.
-There is a set $D$ that does not map injectively into $\{x\mid P(x)\}$.
-
-Proof. (1 iff 2) If the class is a set, then it must be contained in some $V_\alpha$, and so every element will have rank at most $\alpha$. The converse is the Separation axiom.
-(2 iff 3) Use that every $V_\alpha$ is a set, and every set is contained in some $V_\alpha$.
-(1 implies 4) The ordinals are not a set, so this follows by the Replacement axiom.
-(4 implies 2) Map each $x$ for which $P(x)$ holds to its rank.
-(1 implies 5) For every set, there is an ordinal onto which it does not map, namely, the successor of its cardinality.
-(5 iff 6) Every set is bijective with an ordinal.
-(5 implies 7) If a class does not map surjectively onto $\theta$, then $\theta$ cannot map injectively into the class.
-(7 iff 8) Every set is bijective with an ordinal.
-(7 implies 2) If $\theta$ does not map injectively into $\{x\mid P(x)\}$, then that class cannot contain sets of arbitrarily large rank.
-QED
-Meanwhile, the following notions are strictly weaker in ZFC, if ZFC is consistent:
-
-There is a map from the ordinals onto $\{x \mid P(x)\}$.
-There is a bijection of $\{x\mid P(x)\}$ with the ordinals.
-There is a bijection of $\{x\mid P(x)\}$ with $V$, the entire set-theoretic universe.
-
-The reason that they are weaker in general is that it is relatively consistent with ZFC that there is no definable (from parameters) well-ordering of the universe. In such a model $V$, there is no class surjection or bijection from the ordinals to $V$, since this would provide the desired well-ordering, but $V$ is not a set. Similarly, in such a model, there is no bijection from the class of ordinals to the entire universe, but the class of ordinals is not a set. Such a model can be constructed using the forcing technique, by an Easton support iteration that adds a Cohen subset to unboundedly many regular cardinals.
-Addendum. Let me add that there can be no purely syntactic characterization of the properties $P$ for which $\{x\mid P(x)\}$ is a set. The reason is that some properties determine sets in some models of ZFC, but not in others. So the question of whether this class is a set depends not just on the syntactic features of $P$, but on the properties of the universe in which the class is to be formed. An example of this is the property $P(x)\iff CH\wedge x=x$, which determines a set just in case $\neg CH$.<|endoftext|>
-TITLE: Fast method for Nth Squarefree number (using mathematica)
-QUESTION [8 upvotes]: I am trying to compute the Nth squarefree number using Mathematica. What I am trying to utilize is the SquareFreeQ[i] function.
-Here is my solution:
-(* linear scan: walk the integers, counting squarefree ones, and return the k-th one found *)
-NSqFree[k_] := Module[{n = 0, i = 0},
-  While[n < k, i++; If[SquareFreeQ[i], n++]];
-  i]
-But I am supposed to compute NSqFree[1000000000], and it seems like my approach is taking forever. Any faster method?
-Thanks,
-ADDED:
-Here is an exactly identical TopCoder question and the corresponding editorial for the same.
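-(A possible Mathematica rendering of the Möbius-function count that the reply below derives -- SquareFreeCount and NthSquareFree are names I made up; paired with a bisection search it answers the question directly. The upper bound 2 nth is safe because more than half of all integers are squarefree:)
-
-    SquareFreeCount[n_] := Sum[MoebiusMu[k] Floor[n/k^2], {k, 1, Floor[Sqrt[n]]}]
-    NthSquareFree[nth_] := Module[{lo = 1, hi = 2 nth, mid},
-      While[lo < hi, mid = Quotient[lo + hi, 2];
-        If[SquareFreeCount[mid] < nth, lo = mid + 1, hi = mid]];
-      lo]
-    NthSquareFree[10^9]  (* fast, unlike the linear scan above *)
-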
-REPLY [8 votes]: You have to use the Inclusion-Exclusion principle: suppose you want to find the number of squarefree numbers up to $N$; then from $N$ you have to subtract the number of integers divisible by the square of a prime, but then you have to add back any multiple of the square of the product of two distinct primes, and so on. In formulas, the number looked for is
-$$ N - \sum_{p^2 \le N} \left\lfloor\frac{N}{p^2}\right\rfloor + \sum_{p^2q^2 \le N} \left\lfloor\frac{N}{p^2q^2}\right\rfloor - \sum_{p^2q^2r^2 \le N} \left\lfloor\frac{N}{p^2q^2r^2}\right\rfloor + \cdots - \cdots $$
-Using the Möbius $\mu$ function you can write this last formula as
-$$ \sum_{n \le \sqrt{N}} \mu(n) \left\lfloor\frac{N}{n^2}\right\rfloor $$
-I don't know how to write this in Mathematica, but it should take a negligible fraction of the time it takes your current method.<|endoftext|>
-TITLE: Find $\lim_{n\to \infty}\frac{(1^n+2^n+...+n^n)^{1/n}}{n}$
-QUESTION [6 upvotes]: Tried the squeeze theorem but it didn't get me anywhere, since:
-$$0 \leftarrow \frac{1}{n^{1-1/n}}=\frac{n^{1/n}}{n}\leq\frac{(1^n+2^n+...+n^n)^{1/n}}{n}\leq \ n^{1/n}\to 1$$
-None of the other tools we studied seem to work. Any tips? Thanks.
-
-REPLY [9 votes]: $$\frac{\left(1^n+2^n+...+n^n\right)^{1/n}}{n} = \left(\left(\frac{1}{n}\right)^n+\left(\frac{2}{n}\right)^n+...+1^n\right)^{1/n} \geq (1)^{1/n} = 1 \rightarrow 1$$
-The rest is as you suggest.<|endoftext|>
-TITLE: Solve $\cos(\theta) + \sin(\theta) = x$ for known $x$, unknown $\theta$?
-QUESTION [12 upvotes]: After looking at the list of trigonometric identities, I can't seem to find a way to solve this. Is it solvable?
-$$\cos(\theta) + \sin(\theta) = x.$$
-What if I added another equation to the problem:
-$$-\sin(\theta) + \cos(\theta) = y,$$
-where $\theta$ is the same and $y$ is also known?
-Thanks.
-EDIT:
-OK, so using the linear combinations I was able to whip out:
-$$a \sin(\theta) + b \cos(\theta) = x = \sqrt{a^2 + b^2} \sin(\theta + \phi),$$
-where $\phi = \arcsin \left( \frac{b}{\sqrt{a^2 + b^2}} \right) = \frac{\pi}{4}$ (as long as $a\geq 0$)
-Giving me:
-$$x = \sqrt{2}\,\sin\left(\theta + \frac{\pi}{4}\right) \text{ and } \arcsin\left(\frac{x}{\sqrt{2}}\right) - \frac{\pi}{4} = \theta.$$
-All set! Thanks!
-
-REPLY [2 votes]: If you know
-$$ \cos(\theta) + \sin(\theta) = x $$
-and
-$$ -\sin(\theta) + \cos(\theta) = y $$
-then you have a system of two linear equations in the 'unknowns' $\cos(\theta)$ and $\sin(\theta)$, and thus can solve for the values of $\cos(\theta)$ and $\sin(\theta)$:
-$$ \cos(\theta) = \frac{x+y}{2} $$
-$$ \sin(\theta) = \frac{x-y}{2} $$
-and then obtain $\theta$ in your favorite manner.<|endoftext|>
-TITLE: Does $\mathbb{Q}_p \cap \overline{\mathbb{Q}}$ depend on the embedding of $\overline{\mathbb{Q}}$ into $\overline{\mathbb{Q}_p}$?
-QUESTION [6 upvotes]: I think the title pretty much says it all. I'm getting confused in the subtle parts of a proof, and would appreciate some help.
-
-REPLY [4 votes]: There's a subtlety here.
-$\overline{\mathbb{Q}}\cap \mathbb{Q}_p$ is well defined as a subfield of $\mathbb{Q}_p$ since it is simply the set of elements which are algebraic over $\mathbb{Q}$.
-However, as a subfield of $\overline{\mathbb{Q}}$, this is not canonical. The choice of embedding $\phi: \overline{\mathbb{Q}}\rightarrow \overline{\mathbb{Q}_p}$ amounts to choosing a place of $\overline{\mathbb{Q}}$ lying over $p$. The induced subfield $\phi^{-1}(\mathbb{Q}_p)\subset \overline{\mathbb{Q}}$ will be the maximal subfield in which this place splits.
This depends very much on the place we've chosen! This field is not Galois over $\mathbb{Q}$, and its Galois conjugates reflect the other choices of embeddings.
-Here's another way of saying it. The embedding $\phi$ gives a choice of decomposition group $D_p\subset$ Gal$(\overline{\mathbb{Q}}/\mathbb{Q})$. Then $\phi^{-1}(\mathbb{Q}_p)=\overline{\mathbb{Q}}^{D_p}$.<|endoftext|>
-TITLE: sums of square free numbers, is this conjecture equivalent to goldbach's conjecture?
-QUESTION [18 upvotes]: As one can notice, every integer greater than $1$ is a sum of two squarefree numbers (numbers not divisible by the square of any prime). Can we prove that?
-Edit: Can we have bounds for the length of these numbers? (meaning the number of primes that divide them)
-Chen's theorem asserts that for large enough even numbers the length (2,1) is enough; Goldbach's conjecture says that (1,1) would be enough too.
-And one conjecture: every odd number can be written as a sum of two squarefree numbers of length at most (2,1) (meaning as a sum of a prime and a double of a prime, or a sum of a prime plus 2, or as a sum of 1 plus a double of a prime)
-Questions
-
-Do I really need the prime plus 2 or the 1 plus the double of a prime in order to obtain all the odd numbers? I think I do not need them, but can we prove that?
-What is the relation of this conjecture to Goldbach's conjecture? Does the one imply the other?
-
-EDIT: Searching Wikipedia I realised that this is a well-known conjecture; for more details see http://en.wikipedia.org/wiki/Lemoine%27s_conjecture
-
-REPLY [9 votes]: It is known that the Schnirelmann density of squarefree numbers is $\displaystyle \frac{53}{88}$ for instance, as mentioned here: http://www.jstor.org/pss/2040089. Note this is different from the natural density, which is known to be $\displaystyle \frac{6}{\pi^2}$.
-This implies that for any $n$, the number of squarefree numbers $\displaystyle \le n$ is at least $\displaystyle \frac{53n}{88}$. Since $\displaystyle \frac{53}{88} \gt \frac{1}{2}$, this implies that $n+1$ can be written as the sum of exactly two squarefree numbers.
-btw, your title does not match the question.
-Additive Basis means that you also include $\displaystyle 0$ in the set, which you seem to exclude by saying two instead of at most two.<|endoftext|>
-TITLE: Prove there are no hidden messages in Pi
-QUESTION [9 upvotes]: Assume that a proof that $\pi$ is normal existed.
-Is it then possible that, starting at some finite position $x$ in $\pi$, from there on every $p(n)$'th digit is 0, where $p(n)$ is the $n$'th prime?
-I know probability arguments say no, but do they also prove that it is impossible?
-Is there a way to disprove the statement?
-Also, the digits of Pi can tell us certain mathematical truths, for instance that the circumference of a circle is less than that of any other shape with the same area.
-The question is, is there a limit to the information one can extract from the digits of Pi?
-Is it possible to reconstruct all theorems from the digits of Pi?
-What sort of truths can be extracted from the digits of Pi?
-Is all of math and all truths somehow encoded within the digits of Pi?
-
-REPLY [4 votes]: This is more of an informational comment than a direct answer to your question. (Later) Just before sending this answer in, I noticed the question and activity are from almost a year ago!
-The property you're describing is shared by almost all real numbers in both the Baire category sense and the Lebesgue measure sense.
In fact, its complement is even smaller than first-category-and-measure-zero, being a countable union of lower porous sets. Note there is a huge difference between the notion of a normal number (to base $10$) and the property you're talking about, since the set of normal numbers is large in one way (Lebesgue measure) but small in another way (Baire category), whereas the set you're talking about (sometimes called the "absolutely disjunctive" real numbers, for which you can google the phrase I put in quotes) is large in both the Lebesgue measure sense and in the Baire category sense (and, in fact, even larger than what the conjunction of these two notions could allow for). For more details, see my 19 February 2003 sci.math post at
-http://groups.google.com/group/sci.math/msg/4ec315328c1afdb8
-An excerpt from the above post:
-Disjunctive to base $b$ ($b$ being some integer greater than $1$) means that every finite $b$-word appears infinitely often in the $b$-ary expansion of the number (note this is equivalent to every finite $b$-word appearing at least once), and the adjective "absolutely" means that this property holds for each $b = 2,\; 3,\;...$ The result is virtually immediate [all but a $\sigma$-porous set of real numbers are absolutely disjunctive], since for each of the countably many ways of choosing a fixed $b$ and a fixed $b$-word, the collection of numbers whose $b$-ary expansions don't contain that $b$-word infinitely often is a uniform Cantor set, and hence is porous (even uniformly porous; in fact, even uniformly for the stronger lim-inf type of porosity that Julia set theorists use). Recall that the larger collection of numbers which fail to be absolutely normal (or which fail to be simply normal relative to a specific base) forms a measure zero (but with Hausdorff dimension $1$) co-meager set.<|endoftext|>
-TITLE: Irrationality proofs not by contradiction
-QUESTION [24 upvotes]: Up to now, I have basically only come upon proofs of the irrationality of $\sqrt{2}$ (and so on) and the proof of the irrationality of $e$. However, both proofs were by contradiction.
-When thinking about it, it seems like the definition of irrationality itself demands proofs by contradiction. An irrational number is a number that is not a rational number. It seems then that if we were to find direct irrationality proofs, this would rely on some equivalent definition of irrational numbers, not involving rational numbers themselves.
-Are there any irrationality proofs not using contradiction?
-
-REPLY [3 votes]: Consider the sum of two reduced fractions. The variables $a_1$ and $a_2$ are integers. The variables $b_1$ and $b_2$ are positive integers.
-$$
-\displaystyle \frac{a_1}{b_1}+\frac{a_2}{b_2}=\frac{a_1b_2+a_2 b_1}{b_1 b_2}
-$$
-If the sum is an integer, then $ b_1|b_2 $ and $ b_2|b_1 $, thus $ b_1=b_2 $.
-Summarizing, if the sum of two reduced fractions is an integer, then the denominators are equal. Contrapositively, if the denominators are not equal, then the sum of two reduced fractions is not an integer.
-Now let us apply this idea to the $n$th root of a positive integer $m$. Note that if $\frac{a}{b}$ is reduced, then so is $\frac{a^n}{b^n}$.
-$$
-\displaystyle m^{1/n}=\frac{a}{b}\\
-\displaystyle m=\frac{a^n}{b^n}\\
-\displaystyle \frac{a^n}{b^n}+\frac{-m}{1}=0
-$$
-Because the sum is an integer, the denominators must be equal.
-$$
-\displaystyle b^n=1\\
-\displaystyle b=1
-$$
-In conclusion, the only rational solution occurs when $ m^{1/n} $ is an integer.
Therefore, $m^{1/n}$ is either an integer or irrational.<|endoftext|>
-TITLE: What sort of ring is the integral closure of $\mathbb{Z}$ in $\overline{\mathbb{Q}}$?
-QUESTION [20 upvotes]: I begin losing intuition when I start dealing with infinitely generated fields over $\mathbb{Q}$...
-The naive guess is that it is Noetherian of Krull dimension 1. Is this correct?
-A related question is: what sort of morphism is $\operatorname{Spec}(\text{the integral closure of } \mathbb{Z} \text{ in } \overline{\mathbb{Q}}) \rightarrow \operatorname{Spec}(O_K)$ for some number field $K$? For example, is it a flat morphism?
-
-REPLY [6 votes]: To your related question "For example, is it flat?" the answer is yes. The point is that over a Dedekind domain, and more generally over a Prüfer ring, a module is flat if and only if it is torsion free (see here). On the other hand, I'm not so sure I can say more about the morphism. It is not going to be locally finite, locally of finite type, locally Noetherian, etc...<|endoftext|>
-TITLE: Why is integration so much harder than differentiation?
-QUESTION [129 upvotes]: If a function is a combination of other functions whose derivatives are known (via composition, addition, etc.), the derivative can be calculated using the chain rule and the like. But in general the integral of a product can't be expressed in terms of the integrals of its factors, and forget about composition! Why is this?
-
-REPLY [34 votes]: I guess the OP asks about symbolic integration. Other answers already dealt with the numeric case, where integration is easy and differentiation is hard.
-If you recall the definition of differentiation, you can see it's just a subtraction and a division by a constant. Even if you can't make any algebraic simplifications, it won't get any more complex than that. But usually you can simplify a lot in the zero limit, as many terms fall out for being too small. From this definition it can be shown that if you know the derivatives of $f(x)$ and $g(x)$, then you can use these derivatives to express the derivatives of $f(x) \pm g(x)$, $f(x)g(x)$ and $f(g(x))$.
-This makes symbolic differentiation easy, as you just need to apply the rules recursively.
-Now about integration. Integration is basically an infinite sum of small quantities. So if you see $\int f(x) \, dx$, you can imagine it as an infinite sum $(f_1 + f_2 + ...) \, dx$, where the $f_i$ are consecutive values of the function.
-This means that if you need to calculate the integral $\int (a f(x) + b g(x)) \,dx$, you can imagine the sum $((af_1 + bg_1) + (af_2 + bg_2) + ...) \,dx$. Using associativity and distributivity, you can transform this into $a(f_1 + f_2 +...)\,dx + b(g_1 + g_2 + ...)\,dx$.
-So this means $\int (a f(x) + b g(x)) \, dx = a \int f(x) \,dx + b \int g(x) \, dx$.
-But if you have $\int f(x) g(x) \, dx$, you have the sum $(f_1 g_1 + f_2 g_2 + ...) \,dx$, from which you cannot factor out the sums of the $f$s and the $g$s. This means there is no recursive rule for multiplication.
-Same goes for $\int f(g(x)) \,dx$: you cannot extract anything from the sum $(f(g_1) + f(g_2) + ...) \,dx$ in general.
-So far, only linearity is a useful property. What about the analogues of the differentiation rules? 
We have the product rule:
-$$\frac{d\,[f(x)g(x)]}{dx} = f(x) \frac{d g(x)}{dx} + g(x) \frac{d f(x)}{dx}.$$ Integrating both sides and rearranging the terms, we get the well-known integration by parts formula:
-$$\int f(x) \frac{d g(x)}{dx} \, dx = f(x)g(x) - \int g(x) \frac{d f(x)}{dx} \, dx.$$
-But this formula is only useful if $\frac{d f(x)}{dx} \int g(x) \, dx$ or
-$\frac{d g(x)}{dx} \int f(x) \, dx$ is easier to integrate than $f(x)g(x)$.
-And it's often hard to see when this rule is useful. For example, when you try to integrate $\ln(x)$, it's not obvious that you should view it as $1 \cdot \ln(x)$. The integral of $1$ is $x$ and the derivative of $\ln(x)$ is $\frac{1}{x}$, which leads to the very simple integrand $x\cdot\frac{1}{x} = 1$, whose integral is again $x$; altogether, $\int \ln(x)\,dx = x\ln(x) - x + C$.
-Another well-known differentiation rule is the chain rule $$\frac{d f(g(x))}{dx} = \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{dx}.$$
-Integrating both sides, you get the reverse chain rule:
-$$f(g(x)) = \int \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{dx} \, dx.$$
-But again it's hard to see when it is useful. For example, what about the integration of $\frac{x}{\sqrt{x^2 + c}}$? Is it obvious to you that $\frac{x}{\sqrt{x^2 + c}} = 2x \cdot \frac{1}{2\sqrt{x^2 + c}}$, and that this is the derivative of $\sqrt{x^2 + c}$? I guess not, unless someone showed you the trick.
-For differentiation, you can mechanically apply the rules. For integration, you need to recognize patterns and even need to introduce cancellations to bring the expression into the desired form, and this requires a lot of practice and intuition.
-For example, how would you integrate $\sqrt{x^2 + 1}$?
-First you turn it into a fraction:
-$$\frac{x^2 + 1}{\sqrt{x^2+1}}$$
-Then multiply and divide by 2:
-$$\frac{2x^2 + 2}{2\sqrt{x^2+1}}$$
-Separate the terms like this:
-$$\frac{1}{2}\left(\frac{1}{\sqrt{x^2+1}}+\frac{x^2+1}{\sqrt{x^2+1}}+\frac{x^2}{\sqrt{x^2+1}} \right)$$
-Rewrite the 2nd and 3rd terms:
-$$\frac{1}{2}
-\left(
-\frac{1}{\sqrt{x^2+1}}+
-1\cdot\sqrt{x^2+1}+
-x\cdot 2x\cdot\frac{1}{2\sqrt{x^2+1}}
-\right)$$
-Now you can see the first bracketed term is the derivative of $\operatorname{arsinh}(x)$, while the second and third terms together are the derivative of $x\sqrt{x^2+1}$. Thus the integral is:
-$$\frac{\operatorname{arsinh}(x)}{2} + \frac{x\sqrt{x^2+1}}{2} + C$$
-Were these transformations obvious to you? Probably not. That's why differentiation is mechanical while integration is an art.<|endoftext|>
-TITLE: What is the difference between a linear space and a subspace?
-QUESTION [12 upvotes]: If W is a subspace, is it also a linear space? If V is a linear space, is it also a subspace? I am having trouble wrapping my head around the difference between the two, as it seems that the book defines them the same way: both have to have a zero (neutral) element, and both are closed under addition and scalar multiplication. Thanks!
-
-REPLY [22 votes]: The main difference between referring to a vector space as a linear space or as a subspace is, unsurprisingly, context.
-When one talks about a "subspace", one is thinking of it as being "inside" another vector space, with the inherited operations. 
So, if we refer to the set of all polynomials of degree at most $2$ with coefficients in $\mathbb{R}$ as a "subspace of the vector space of all polynomials" (note that "subspace" is a relational term; something is a subspace of something else, though the "something else" may be left implied or understood from context), then we are not merely thinking of this set as a vector space, we are thinking of it as a vector space sitting inside a larger vector space. On the other hand, if we refer to the set as "the vector space of all polynomials of degree at most $2$ with coefficients in $\mathbb{R}$", then we are not really interested in the fact that it is sitting inside a larger vector space, but we want to consider it as a vector space in and of itself.
-We do that in set theory all the time, since a set may be a set or a subset of something else. The natural numbers form a set, and we refer to "the set of natural numbers" when we are focusing on the natural numbers exclusively; on the other hand, sometimes we want to think about the natural numbers and their relationship with the larger set of rationals or real numbers, so we can talk about "the subset of the real numbers consisting of the natural numbers."
-Now, every vector space is a subspace of itself; every subspace of a vector space is itself a vector space (with the inherited operations). So whether we call them a "space" or a "subspace of" is entirely a question of whether we want to focus on the object itself, or consider it in context with something else.<|endoftext|>
-TITLE: How to simplify $\frac{\sqrt{4+h}-2}{h}$
-QUESTION [8 upvotes]: The following expression:
-$$\frac{\sqrt{4+h}-2}{h}$$
-should be simplified to:
-$$\frac{1}{\sqrt{4+h}+2}$$
-(even if I don't agree that the second is simpler than the first).
-The problem is that I have no idea what the first step of the simplification is... any help?
-
-REPLY [15 votes]: If you multiply both the top and the bottom by $\sqrt{4+h}+2$, you get $\frac{(\sqrt{4+h}-2)(\sqrt{4+h}+2)}{h(\sqrt{4+h}+2)}$, which simplifies to $\frac{h}{h(\sqrt{4+h}+2)}$. Then, divide both by $h$ (assuming $h\neq 0$), and you get $\frac{1}{\sqrt{4+h}+2}$.
-
-REPLY [5 votes]: It is really simple. Let us just do what is most intuitive: multiply the numerator and denominator by what you want to have in the denominator. You get: $$ \frac{(\sqrt{4+h} - 2)(\sqrt{4+h} + 2)}{h(\sqrt{4+h}+2)} $$
-Then observe that the numerator is a difference of squares. Multiply the numerator out using that, and you're left with
-$$\frac{h}{h(\sqrt{4+h}+2)}$$
-Just assume $ h \neq 0 $ and get "rid" of it.
-
-REPLY [4 votes]: HINT $\rm\displaystyle\quad\quad g^2 = 4+h\ \ \Rightarrow\ \ \frac{g-2}h\ =\ \frac{g-2}{g^2-4}\ =\ \frac{1}{g+2}$
-Usually the "simplification" is the opposite inference - known as rationalizing the denominator.<|endoftext|>
-TITLE: Calculate variance from a stream of sample values
-QUESTION [33 upvotes]: I'd like to calculate a standard deviation for a very large (but known) number of sample values, with the highest accuracy possible. The number of samples is larger than can be efficiently stored in memory.
-The basic variance formula is:
-$\sigma^2 = \frac{1}{N}\sum (x - \mu)^2$
-... but this formulation depends on knowing the value of $\mu$ already.
-$\mu$ can be calculated cumulatively -- that is, you can calculate the mean without storing every sample value. You just have to store their sum.
-But to calculate the variance, is it necessary to store every sample value? 
Given a stream of samples, can I accumulate a calculation of the variance, without needing to keep each sample in memory? Put another way, is there a formulation of the variance which doesn't depend on foreknowledge of the exact value of $\mu$ before the whole sample set has been seen?
-
-REPLY [60 votes]: I'm a little late to the party, but it appears that the obvious method is pretty numerically unstable, while there is a method that allows for streaming computation of the variance without sacrificing numerical stability.
-Cook describes a method from Knuth, the punchline of which is to initialize $m_1 = x_1$ and $v_1 = 0$, where $m_k$ is the mean of the first $k$ values. From there,
-$$
-\begin{align*}
-m_k & = m_{k-1} + \frac{x_k - m_{k-1}}k \\
-v_k & = v_{k-1} + (x_k - m_{k-1})(x_k - m_k)
-\end{align*}
-$$
-The mean at this point is simply extracted as $m_k$, and the (sample) variance is $\sigma^2 = \frac{v_k}{k-1}$. It's easy to verify that it works for the mean, but I'm still working on grokking the variance.<|endoftext|>
-TITLE: Updating eigen-decomposition of symmetric matrix $A$ to eigendecomposition of $A+D$ where $D$ is low-rank diagonal
-QUESTION [9 upvotes]: Given a symmetric positive definite matrix $A$ and a mostly-zeros non-negative diagonal matrix $D$, is there a way to cheaply update the eigenvalues and/or eigenvectors of $A$ to those of $A+D$? Ideally I'm looking for something akin to the Woodbury matrix identity.
-
-REPLY [2 votes]: I would recommend reading http://www.unige.ch/~gander/consulting/X/EigenUpdate.ps.gz and having a look at the cited work of Golub and Van Loan.
-They show how to update eigendecompositions under rank-one changes. You can understand your update matrix $D \in\mathbb{R}^{n\times n}$ as a sum of $n$ rank-one updates (in fact, only as many as there are nonzero diagonal entries, since $D$ is mostly zeros).
-Good luck!<|endoftext|>
-TITLE: Olympiad linear algebra problem
-QUESTION [6 upvotes]: This is a problem from an olympiad I took today. I tried but couldn't solve it.
-Let $A$ and $B$ be rectangular matrices with real entries, of dimensions $k\times n$ and $m\times n$ respectively. Prove that if for every $n\times l$ matrix $X$ ($l\in \mathbb{Z}^+$) the equality $AX=0$ implies $BX=0$, then there exists a matrix $C$ such that $B=CA$.
-I tried to define a commutative diagram, but failed. Anyhow, I'd prefer a solution that works with the matrices explicitly. But I'll welcome and appreciate any solutions.
-
-REPLY [3 votes]: $AX=0$ means that the rows of $A$ are all orthogonal to the columns of $X$, and likewise for $BX=0$. $B=CA$ means that the rows of $B$ are linear combinations of the rows of $A$.
-Assume that there is a row of $B$ that is not a linear combination of the rows of $A$. Orthogonalize that row against the rows of $A$ (the result is nonzero by assumption), and choose $X$ to be the column vector corresponding to this orthogonalized row. Then $AX=0$, but $BX\neq0$, a contradiction. Hence the rows of $B$ are linear combinations of the rows of $A$.<|endoftext|>
-TITLE: Olympiad - sequence of sums of complex numbers
-QUESTION [5 upvotes]: This is the other problem I couldn't solve in the olympiad test I took today.
-Let $c_1, ... , c_n$ be complex numbers of unit norm, and let $S_k=\sum_{i=1}^n c_i^k$, $k\in \mathbb{N}$. Suppose $S_1, S_2, ...$ converges. Prove that $c_i=1$ for every $i=1, ..., n$.
-By using De Moivre's formula, the problem is equivalent to showing that $\cos(k\theta_1) + ... +\cos(k\theta_n)$ has no limit as $k\to\infty$ unless all the $\theta_i$ are $0$ (mod $2\pi$) (plus an analogous claim for the sum of sines).
-I don't know how to prove that. 
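-For what it's worth, here is a quick numerical experiment (purely illustrative, not a proof; the unit-norm numbers are chosen at random) showing how $S_k$ keeps wandering instead of settling down when the $c_i$ are not all $1$:
-
-import cmath, random
-
-def S(cs, k):
-    # the power sum S_k = sum of c_i^k
-    return sum(c ** k for c in cs)
-
-random.seed(0)
-# three random points on the unit circle, bounded away from 1
-cs = [cmath.exp(1j * random.uniform(0.1, 2 * cmath.pi - 0.1)) for _ in range(3)]
-for k in (1, 10, 100, 1000, 10000):
-    print(k, S(cs, k))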
-
-REPLY [5 votes]: I've edited the answer twice without deleting the previous versions, to keep the history of the answer and the comments transparent. That makes it much longer than it originally was, but if you're just interested in the end result, the first and last paragraphs make up a concise and complete proof.
-If $S_k$ converges, then the differences $\Delta S_k := S_{k+1} - S_k$ must converge to zero. Hence the differences $\Delta^2 S_k := \Delta S_{k+1} - \Delta S_k$ must also converge to zero, and so on for $\Delta^m S_k$ for all $m$. But $\Delta^m S_k = \sum_{i=1}^{n} (c_i - 1)^m c_i^k$. There can be either one or two values of $c_i$ for which the absolute value of $c_i - 1$ is maximal. If there is only one, the corresponding term will dominate the sum for sufficiently large $m$, in the sense that its absolute value becomes greater than the absolute value of the sum of all the other contributions. Since this term only converges to zero (as $k\to\infty$) if $c_i - 1 = 0$, it follows that $c_i = 1$ for all $i$.
-If there are two (conjugate) maximal values of $c_i - 1$, there is a linear combination of $S_k$ and its complex conjugate in which only one of these maximal values occurs; if $S_k$ had a limit, so would its complex conjugate and this linear combination.
-Edit in response to kevincuadros's question about the linear combination part:
-In trying to clarify this, I can see now why you didn't "fully get" it -- because I hadn't thought it through properly :-)
-What I had in mind was this: If there are two different values $c_i$ with maximal absolute value of $c_i - 1$, they are conjugate. Both may occur more than once; let $\mu_1$ and $\mu_2$ be their multiplicities. Then $\mu_1 S_k - \mu_2 S_k^\mathrm{*}$ will contain one of them with multiplicity $\mu_1^2 - \mu_2^2$ and the other with multiplicity $\mu_1\mu_2 - \mu_2\mu_1=0$. I was thinking that we could then reason that since this linear combination contains only one of the two, we can apply the above proof to it, and then argue that if $S_k$ had a limit, then so would $S_k^\mathrm{*}$ and any linear combination of them. But I see now that that doesn't work out, because we could have $\mu_1=\mu_2$, and hence $\mu_1^2-\mu_2^2=0$, and in that case neither of the two would occur in that linear combination, so we still have to deal with that case.
-So in that case, the contributions from these conjugates cancel in the imaginary part and add up in the real part. If $S_k$ is to converge, its real part must converge. But $(c_i - 1)^m c_i^k$ comes arbitrarily close to being real infinitely often; thus its real part comes arbitrarily close to $|c_i - 1|^m$ infinitely often, and hence the argument for the general case carries through.
-To make the full proof more concise, we can forget about the whole linear combination thing and just argue as follows: If there are two distinct values of $c_i$ with equal maximal absolute value of $c_i - 1$, they are conjugate. Then consider the sum of the real parts of their contributions, which comes arbitrarily close to the sum of their absolute values infinitely often, and hence the argument for the general case carries through.
-Further edit to salvage the style of the proof:
-I don't really like the above fix, since part of the point of the original proof was to avoid saying something like "x gets arbitrarily close to y an infinite number of times". So here's a slightly nicer fix:
-If there are two distinct values of $c_i$ with equal maximal absolute value of $c_i - 1$, they are conjugate. 
Then the real parts of their contributions add up to $|c_i-1|^m$ times the cosine of an angle that changes with $k$. Since the ratio of $|c_i-1|^m$ to the absolute value of the sum of all other terms goes to infinity with increasing $m$, increasing $m$ will yield arbitrarily small upper bounds on the cosine. But since $c_i \neq \pm 1$, the cosine necessarily violates these bounds. (Note that this doesn't use the property that the cosine gets arbitrarily close to $1$, only that it doesn't remain arbitrarily close to $0$, which is much less and doesn't require an argument using rational and irrational numbers.)<|endoftext|>
-TITLE: For any even function $f$ there are infinitely many functions $g$ such that $f(x) = g(|x|)$
-QUESTION [5 upvotes]: This is a question from Spivak's Calculus.
-Question statement (paraphrased): For any even function $f$ there are infinitely many functions $g$ such that $f(x) = g(|x|)$.
-I have made an attempt at a proof; here is my work. My main concern is whether my proof that there are infinitely many such functions is correct.
-
-Let $f$ be some even function and let $g$ be a function such that for any positive $x$, $f(x) = g(x)$. We will show that the above equality then holds even when $x$ is negative: $f(-x) = g(|-x|) \Leftrightarrow f(x) = g(x)$, which is true by the definition of these functions. It now remains to show that there are infinitely many such functions. I understand that this is so because $g$ can be defined however one wishes for negative numbers and it won't change the equality. But I cannot think of a formal proof. I'm thinking about assigning values to functions $g$ in a way that I define $g_{1}(-1) = k$ and then making infinitely many of them by induction: for any function $g_{n}(x) = k$, if $x$ is negative, we define $g_{n+1}(x-1) = k$. Then, by the fact that there are infinitely many natural numbers, and for every natural number there is going to be a unique function, we can conclude that there are infinitely many functions.
-
-Is the concept of induction applicable here, or is it not?
-
-REPLY [4 votes]: Okay, from the comments I was able to guess at what you were trying to do.
-You are correct that the key is that as long as $g(x)=f(x)$ for $x\geq 0$, everything will work out, so you only need to worry about defining $g$ on the negative numbers.
-You attempted to do so by letting $g_1$ be a function that is defined only on the nonnegative numbers and $-1$, and setting $g_1(-1)=k$. Then $g_2$ would be an extension of $g_1$, which is also defined as $k$ at $-2$; and so on. In general, $g_{n}$ would be defined on $\{-n,-n+1,\ldots,-1\}\cup[0,\infty)$, by $g(x) = f(x)$ if $x\geq 0$ and $g(x)=k$ if $x\lt 0$.
-As I noted in the comments, your formulas didn't really say that; instead, they only specified $g_1$ at $-1$, and then said, for example, that $g_2(x-1)=k$. That is at best confusing. What is $g_2(-0.5)$? According to this, I have to think of $-0.5$ as $0.5 - 1$, and then it's $k$, and ... Well, a bit of a confusing issue arises...
-In any case, whether this works as an answer or not depends on whether you are assuming that your functions need to be defined on the same set or not. Normally, we would be looking for functions $g$ such that $f(x)=g(|x|)$ and we want both $f$ and $g$ to have the same domain. Remember that two functions are equal if and only if they have the same domain, the same codomain, and the same value at every element of the domain. 
So even if you could have set up the induction properly to get the functions you wanted (or if you wanted to define $g_1$ on $[-1,0)$, then $g_2$ extended to all of $[-2,0)$, and so on, so that $g_n$ was defined on $[-n,\infty)$), it still would not give a good answer to the problem because of the restrictive domains of your $g$s. (You are really composing the absolute value with $g$; I think Spivak wants you to play with functions that are defined everywhere here, rather than on artificial domains; I could be wrong, though.)
-If you do not require your functions $g$ to have the same domain as $f$, then your intended answer would also work; the functions $g_n$ are different because they have different domains, even though they all have the same values where their domains agree.
-I think, though, that the intended answer relies instead on defining $g_n(x)$ for $x\lt 0$ as different things for different $n$. For example, you could set
-$$g_n(x) = \left\{\begin{array}{ll}
-f(x) &\mbox{if $x\geq 0$,}\\
-n &\mbox{if $x\lt 0$;}
-\end{array}\right.$$
-and this would work. There is no need to state it as an induction, because the values depend only on the labels, and you simply have infinitely many distinct labels to choose from.
-That said: yes, you can use induction to define a (countably infinite) family of functions. If you wanted to do that here, you could express it by explicitly stating what $g_1$ is (giving its domain clearly; if all you say is $g(-1)=k$, then you are telling us the value at $-1$, but not saying anything about values elsewhere, not even whether it is defined there). Then, assuming you have defined $g_n$ with a domain of, say, $[-n,\infty)$, you define $g_{n+1}$ on $[-n-1,\infty)$ by
-$$g_{n+1}(x) = \left\{\begin{array}{ll}
-g_n(x) &\mbox{if $x\in[-n,\infty)$,}\\
-\mbox{whatever} &\mbox{if $x\in [-n-1,-n)$.}
-\end{array}\right.$$
-This would indeed give an inductive definition for your $g_n$, defined on $[-n,\infty)$, each extending the previous one.<|endoftext|>
-TITLE: Comb space has no simply connected cover
-QUESTION [16 upvotes]: I'm having trouble solving the following exercise in Hatcher's Algebraic Topology (1.3 #5):
-"Let $X$ be the subspace of $R^2$ consisting of the four sides of the square $[0,1] \times [0,1]$ together with the segments of the vertical lines $x=\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots$ inside the square. Show that for every covering space $\tilde{X}\to X$ there is some neighbourhood of the left edge of $X$ that lifts homeomorphically to $\tilde{X}$. Deduce that $X$ has no simply-connected cover."
-I tried piecing together open sets in $\tilde{X}$ which were homeomorphic to open square-shaped $\epsilon$-neighbourhoods of points in the left edge of $X$, but I couldn't glue them together coherently.
-My second idea was to use the path lifting property to lift the left edge, but I couldn't extend it to a lift of an open neighbourhood of the left edge.
-Edit: Some more detail on my first attempt:
-Suppose I have two open squares $U_1$ and $U_2$ in $X$ (containing the left edge) with nonempty intersection, and with preimages $\coprod_i U_{1,i}$ and $\coprod_j U_{2,j}$, each piece mapping homeomorphically onto $U_1$ and $U_2$ respectively.
-Suppose I've chosen a specific $U_{1,i}$, and I want to choose a specific $U_{2,j}$ so that their union maps homeomorphically to $U_1 \cup U_2$. 
Such a choice can be specified by a point in the chosen $U_{1,i}$ that maps into $U_2$, but for each such point, there could be a different choice of $j$.
-My intuition tells me that to make this choice, one needs to appeal to the bad local nature of the left edge of $X$ (so that you're certain to get some of the vertical lines inside the desired open set), but I'm not sure how to proceed with that.
-
-REPLY [8 votes]: I believe that I found an answer to my problem. The earlier approach of playing with open covers didn't pan out because I couldn't extend local injectivity of $p$ on a bunch of open sets to injectivity on their union.
-But I found the following theorem, which solves the problem quite neatly. I found it in [1], along with a sketch proof. (But I'll prove it fully.)
-Theorem: Let $E$ be any space and $F$ be a metric space. Let $K\subset E$ be compact, $f:E\to F$ be continuous, locally injective near every point $p\in K$, and (globally) injective on $K$. Then $f$ is also injective on some open neighbourhood $N$ of $K$.
-Proof: Let $g:E\times E\to \mathbb{R}$ be the function defined by $g(x,x')=d(f(x),f(x'))$. Around each point of the form $(k,k)\in K\times K$, there is an open product neighbourhood $V_k \times V_k$ such that $(V_k\times V_k)\cap g^{-1}(0) \subset \{ (x,x)\in E\times E \}$ (by local injectivity). Let $V=\cup_k \left(V_k\times V_k\right)$. Then the compact set $K\times K$ is contained in
-the open set $g^{-1}(0)^c \cup V$ (points $(k,k')$ with $k\neq k'$ lie in $g^{-1}(0)^c$ by the global injectivity on $K$). By the generalized tube lemma, there are open sets $W,W'$ of $E$ so that $K\times K\subset W\times W'\subset g^{-1}(0)^c \cup V$. Then let
-$N=W\cap W'$. $\square$
-Applying this theorem to $p:\tilde{X} \to X$, with $K$ being a lift of the left edge of
-$X$, gives an open neighbourhood of that lift which maps homeomorphically to an open
-neighbourhood of the left edge of $X$. Then we use the bad local nature of $X$ to find a
-loop in $X$ which lifts to a loop in $\tilde{X}$ which is not contractible. (Compactness is used again to find an open set which contains the entirety of a vertical line.)
-[1] Wagner, M., "Existence of schlicht integral manifolds for ordinary differential equations", Archiv der Mathematik, Volume 61, Number 6, 529-542 (1993)<|endoftext|>
-TITLE: Linear Programming Books
-QUESTION [33 upvotes]: Do you know of a good book on linear programming? To be more specific, I am taking a linear optimization class and my textbook sucks. The teacher is not too involved in this class, so I can't get much help from him either.
-Any help will be appreciated.
-Thank you
-
-REPLY [3 votes]: I can recommend two books not mentioned here.
-First is Understanding and Using Linear Programming by Jiri Matousek and Bernd Gärtner.
-Here you find a basic intro to the geometry, the simplex method, duality and the interior point method, with proofs.
-Second is Combinatorial Optimization by Cook, Cunningham, Pulleyblank, Schrijver.
-This is more a book of applications (with proofs), full of algorithms using linear and integer programming and duality, and also unimodularity, Chvatal-Gomory cuts, and solving the TSP with various methods. Both books are complementary ;) I recommend starting with the first one and reading a few chapters of Combinatorial Optimization to get another look at things.<|endoftext|>
-TITLE: How to solve $n < 2^{n/8}$ for $n$?
-QUESTION [10 upvotes]: This is from an exercise (1.2.2) in Introduction to Algorithms that I'm working on privately. 
-
-To find at what point an $n \lg n$ function will run faster than an $n^2$ function, I need to figure out for which values of $n$
-$8n^2 > 64n \lg n$
-(with $\lg$ here being the binary log).
-After some elementary simplification we get
-$n > 8\lg n$
-Playing around with properties of logarithms, I can further rewrite this as
-$n^8 < 2^n$
-or
-$n < 2^{n/8}$
-While I'm sure it's something very elementary I've lost somewhere over the years, after checking out a few logarithm tutorials I'm just not finding how to get any further on this.
-Any help with solving for $n$ would be appreciated.
-
-REPLY [8 votes]: The Lambert $W$ function can be used to solve equations of the form $p^{ax+b} = cx+d.$ In fact, the Wikipedia page gives the solution to $n = 2^{n/8}$ as
-$$n = \frac{-8}{\ln 2} W\left(\frac{-\ln 2}{8}\right).$$
-Via Mathematica, which implements the $W$ function as ProductLog, this comes out to be 43.5592 (although you have to take the lower branch of the $W$ function), in agreement with the Newton iteration solution Theo Buehler mentions.<|endoftext|>
-TITLE: expected number of shuffles to sort the cards
-QUESTION [5 upvotes]: Initially the deck is randomly ordered. The aim is to sort the deck in order. In each turn the deck is shuffled randomly. If the initial or final cards are in sorted order, they are kept aside, and the process continues with the rest of the cards. The question is: what is the expected number of shuffles for the entire deck to get sorted in this manner?
-E.g., let's say initially the deck is $(3,5,6,2,7,1,4)$. After the first shuffle it becomes $(1,2,6,4,5,3,7)$; then the next shuffle will be done with $(6,4,5,3)$, as $(1,2)$ and $(7)$ are already in proper order. Note that even though $5$ is also in its right place, it is not part of a sorted initial or final run, and hence is included in the shuffle.
-What I know is that the expected number of shuffles to sort the deck in a single attempt is $n!$ (the idea behind bogo-sort). But I can't think any further. Any help will be appreciated.
-
-REPLY [5 votes]: Let $r$ denote the number of cards that, after shuffling $n$ cards, remain unsorted (the middle portion of the deck, which is to be shuffled again).
-Let $t_n$ be the number of shuffles needed, starting from $n$ cards, until all cards are sorted. We want $E(t_n)$.
-Now, let's consider that expectation conditioned on the value of $r_n$, i.e. suppose we know exactly how many cards remain unsorted after the first try.
-Then we can write: $E[t_n | r_n=r] = 1 + E[t_r]$
-(note that this includes the extreme cases $r_n=0$ and $r_n = n$)
-Applying the property $E[E[x|y]] = E[x]$ we have:
-$\displaystyle E[t_n] = \sum_{r=0}^n (1 + E[t_r]) \; p(n,r) $
-where $p(n,r) = Prob(r_n = r)$ (we'll compute it later).
-We must take the term $r=n$ out of the sum, and by trivial manipulation we get the desired recursion:
-$\displaystyle
-E[t_n] = \frac{1}{1-p(n,n)} \left( p(n,n) + \sum_{r=0}^{n-1} (1 + E[t_r]) \; p(n,r) \right) $
-As initial values we have $E[t_0] = E[t_1] = 0$.
-Computing $p(n,r)$ (the probability that after one shuffle of $n$ cards there remain - with the rules of our game - exactly $r$ cards to shuffle again) is simpler (but more tedious): it's just combinatorial counting - I'll explain if you ask.
-My result is
-$\displaystyle p(n,r) = \frac{(n + 1 -r) \; (r^2 - 3r + 3) \; (r-2)!}{n!}$
-if $r>1$. Elsewhere, $p(n,0) = 1/n!$ and $p(n,1)=0$. 
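-For a quick numerical check of this recursion, here is a short Python sketch (function names are just illustrative): it recomputes $E[t_n]$ from the stated $p(n,r)$ and compares against a direct simulation of the game.
-
-import random
-from math import factorial
-
-def p(n, r):
-    # probability that exactly r cards remain after one shuffle of n cards
-    if r == 0:
-        return 1.0 / factorial(n)
-    if r == 1:
-        return 0.0
-    return (n + 1 - r) * (r * r - 3 * r + 3) * factorial(r - 2) / factorial(n)
-
-def expected_shuffles(n_max):
-    E = [0.0, 0.0]  # E[t_0] = E[t_1] = 0
-    for n in range(2, n_max + 1):
-        s = p(n, n) + sum((1 + E[r]) * p(n, r) for r in range(n))
-        E.append(s / (1 - p(n, n)))
-    return E
-
-def simulate(n, trials=20000):
-    total = 0
-    for _ in range(trials):
-        lo, hi = 0, n - 1          # values still in play are lo..hi
-        deck = list(range(n))
-        while lo <= hi:
-            random.shuffle(deck)
-            total += 1
-            while deck and deck[0] == lo:   # strip sorted prefix
-                deck.pop(0); lo += 1
-            while deck and deck[-1] == hi:  # strip sorted suffix
-                deck.pop(); hi -= 1
-    return total / trials
-
-print(expected_shuffles(7)[7], simulate(7))  # the two values should agree closely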
-Here are some values (in javascript): http://hjg.com.ar/varios/mat/cards.html
-BTW: This problem can be seen as sitting in between two other problems, one simpler and one more complex/general:
-Compared to the Coupon Collector problem, this is more complicated, because we can't divide the road into "states" that will be visited sequentially and hence simply express the desired expectation as a sum of expectations over those pieces.
-It can be considered as a Markov model with one absorbing state; but the Markovian formulation, though apt, is too general, and the desired result (expected number of iterations to reach the absorbing state) is less straightforward than the solution proposed here. That's because our problem has some special restrictions (in terms of a Markov chain, we'd say that its transition matrix is triangular).
-ADDED: To compute $p(n,r)$, for $r \ge 2$:
-After shuffling $n$ cards, we get $r$ unsorted cards, say $(n_1;r;n_2)$. Let's count how many such arrangements there are (with $n_1$, $n_2$ fixed). They correspond to choosing a permutation of the $r$ middle cards, with the condition that both extremes are "wrong". This is given by:
-$r! - 2(r - 1)! + (r-2)! = (r^2 - 3 r + 3) (r-2)!$
-(This is inclusion-exclusion counting: the first term counts all possibilities, the second subtracts the cases where the first or the last card is right, and the third adds back the cases where the first and the last are both right.)
-But, for a given $r$, there are several choices of $(n_1;n_2)$; they are only required to sum to $n-r$. Hence there are $n - r +1$ alternatives. Then we get the total number of permutations that give $r$ unsorted cards, and divide by the total number of permutations:
-$\displaystyle p(n,r) = \frac{(n + 1 -r) \; (r^2 - 3r + 3) \; (r-2)!}{n!}$
-UPDATED: An asymptotic can be guessed (theoretically and empirically):
-$\displaystyle E(t_n) \approx \frac{n(n-1)}{4} \approx \frac{n^2}{4}$<|endoftext|>
-TITLE: The sum of an uncountable number of positive numbers
-QUESTION [174 upvotes]: Claim: If $(x_\alpha)_{\alpha\in A}$ is a collection of real numbers $x_\alpha\in [0,\infty]$
-such that $\sum_{\alpha\in A}x_\alpha<\infty$, then $x_\alpha=0$ for all but at most countably many $\alpha\in A$ ($A$ need not be countable).
-Proof: Let $\sum_{\alpha\in A}x_\alpha=M<\infty$. Consider $S_n=\{\alpha\in A \mid x_\alpha>1/n\}$.
-Then $M\geq\sum_{\alpha\in S_n}x_\alpha>\sum_{\alpha\in S_n}1/n=\frac{N}{n}$, where $N\in\mathbb{N}\cup\{\infty\}$ is the number of elements in $S_n$.
-Thus $S_n$ has at most $Mn$ elements.
-Hence $\{\alpha\in A \mid x_\alpha>0\}=\bigcup_{n\in\mathbb{N}}S_n$ is countable as the countable union of finite sets. $\square$
-First, is my proof correct? Second, are there more concise/elegant proofs?
-
-REPLY [9 votes]: A more explicit formulation of the statement in the title is this. Let $A$ be some uncountable family of non-negative numbers. If $S$ is to be the "sum of $A$", then by any reasonable definition of "sum", surely $S$ must be greater than or equal to the sum of any finite sub-family of $A$. In that sense we can prove that for any reasonable definition of "sum", the sum of $A$ must be infinite (unless all but countably many elements of $A$ are zero) by showing:
-
-For any $M$, there exists a finite sub-family $B$ of $A$ such that the sum of $B$ is at least $M$.
-
-Proof: Assume that $A^+$, the positive members of $A$, is uncountable (otherwise there is nothing to prove). $A^+=\bigcup_n A_n$, where $A_n=\{a\in A | a \geq \frac 1 n\}$. 
Since the union of countably many finite sets is countable, one of the $A_n$ must be infinite. Grab as many elements as you need from that set to get a sum greater than $M$ (since each of its elements is at least $\frac 1 n$, some $\lceil Mn\rceil + 1$ of them will do).<|endoftext|>
-TITLE: Computing a Direct Limit
-QUESTION [9 upvotes]: Motivation: In Spanier's Algebraic Topology we are introduced to direct and inverse limits in the first few pages. I partially understand the idea behind them, but since I am an example-driven learner I'd actually like to compute some direct limits.
-Question: Where can I find explicit examples of a direct (or even inverse) limit being calculated?
-What I've done so far: So far, I've computed the direct limit for some finite directed sets. On the Wikipedia page, there are a number of examples which I've been trying to work out. In particular, in the third example (the infinite chain ${\mathbb Z}/p^{n}{\mathbb Z}\rightarrow {\mathbb Z}/p^{n+1}{\mathbb Z}$, where multiplication by $p$ is the map), I'm unsure of how they come up with the roots of unity of $p$-power order. Any help on this one would also be greatly appreciated.
-
-REPLY [8 votes]: A standard example of a direct limit is the construction of the ring of germs of functions at a point (on a topological space), which generalizes to the stalk of a sheaf.
-Consider the germs of holomorphic functions at $0 \in \mathbb{C}$. By definition, an element of this ring is a pair $(U, f)$ where $U$ is an open subset containing the origin and $f: U \to \mathbb{C}$ is a holomorphic function. Two pairs $(U, f), (V, g)$ are equivalent if there is a neighborhood $W \subset U \cap V$ containing $0$ such that $f = g$ on $W$. (We can take $W=U \cap V$ if we make our neighborhoods connected, by the rigidity of holomorphic functions.) This is an example of a direct limit: it is the limit of $Hol(U)$ as the neighborhood $U$ shrinks to zero.
-It is a good exercise to see that this direct limit is the ring of convergent power series in $\mathbb{C}[[z]]$.
-Here is another exercise you might try. Let $A$ be a ring, $S$ a multiplicatively closed subset. One can define a preorder (it's not really an order) on the elements of $S$ by $s' \leq s$ if $s'$ divides $s$. Then the localization $A_S$ is the direct limit of the localizations $A_s$ as $s$ ranges over $S$.<|endoftext|>
-TITLE: Finitely generated modules over PID
-QUESTION [12 upvotes]: Let $A$, $B$, $C$, and $D$ be finitely generated modules over a PID such that $A\oplus B\cong C\oplus D$ and $A\oplus D\cong C\oplus B$. Prove that $A\cong C$ and $B\cong D$.
-
-The only tool I have is the structure theorem for finitely generated modules, but I don't quite see the connection. Please help. Thanks.
-
-REPLY [11 votes]: According to the structure theorem for finitely generated modules, any finitely generated module over a PID $R$ can be expressed as
-$$
-R^r \oplus R/(p_1^{k_1}) \oplus R/(p_2^{k_2}) \oplus \cdots \oplus R/(p_n^{k_n})
-$$
-where $p_1^{k_1},\ldots,p_n^{k_n}$ are prime powers in $R$. The $R^r$ is called the free part, and the other summands are the elementary divisors. According to the theorem, these summands are unique up to permutation.
-Let $p^k$ be a prime power in $R$. Let $a$, $b$, $c$, and $d$ denote the numbers of times that $R/(p^k)$ appears as an elementary divisor of $A$, $B$, $C$, and $D$, respectively. Since $A\oplus B \cong C \oplus D$, the uniqueness part of the theorem tells us that $a+b=c+d$. Similarly, since $A\oplus D \cong C\oplus B$, we know that $a+d=c+b$, and it follows from these two equations (subtract one from the other) that $a=c$ and $b=d$. 
This holds for all $p^k$, and a similar argument works for the ranks of the free parts, which proves that $A\cong C$ and $B\cong D$.<|endoftext|>
-TITLE: Theorem of Arzelà-Ascoli
-QUESTION [25 upvotes]: The more general version of this theorem in Munkres' 'Topology' (p. 290, 2nd edition) states that
-Given a locally compact Hausdorff space $X$ and a metric space $(Y,d)$, a family $\mathcal F$ of continuous functions has compact closure in $\mathcal C (X,Y)$ (topology of compact convergence) if and only if it is equicontinuous under $d$ and the sets
-$$ \mathcal F _a = \{f(a) | f \in \mathcal F\} \qquad a \in X$$
-have compact closure in $Y$.
-Now I do not see why the Hausdorff condition on $X$ should be necessary. Why include it then? Am I maybe even missing something here (and there are counterexamples)?
-By the way, if you are looking up the proof: Hausdorffness is needed for the evaluation map $e: X \times \mathcal C(X,Y) \to Y, \, e(x,f) = f(x)$ to be continuous. But the only thing really used in the proof is the continuity of $e_a: \mathcal C(X,Y) \to Y, \, e_a(f) = f(a)$ for fixed $a \in X$.
-Cheers, S.L.
-
-REPLY [6 votes]: I think this question has already been answered through the helpful comments. So thanks to Henno Brandsma and t.b.! This is just to finally tick it off.
-My conclusion: It seems that $X$ being Hausdorff is a matter of convenience (maybe to avoid issues with the definition of local compactness for non-Hausdorff spaces, as pointed out in the comments) rather than a necessary condition.
-Also, this version of the theorem seems quite general enough for most uses.<|endoftext|>
-TITLE: Cauchy's integral formula for Cayley-Hamilton Theorem
-QUESTION [42 upvotes]: I'm just working through Conway's book on complex analysis, and I stumbled across this lovely exercise:
-
-Use Cauchy's Integral Formula to prove the Cayley-Hamilton Theorem: If $A$ is an $n \times n$ matrix over $\mathbb C$ and $f(z) = \det(z-A)$ is the characteristic polynomial of $A$ then $f(A) = 0$. (This exercise was taken from a paper by C. A. McCarthy, Amer. Math. Monthly, 82 (1975), 390-391)
-
-Unfortunately, I was not able to find said paper. I'm completely lost with this exercise. I can't even start to imagine how one could possibly make use of Cauchy here...
-Thanks for any hints.
-Regards, S.L.
-
-REPLY [40 votes]: The idea is to use holomorphic functional calculus and to show that for a matrix $A$ and a polynomial $p(z)$ we have for $r \gt \|A\|$
-\begin{equation}\tag{$\ast$}
-p(A) = \frac{1}{2\pi i} \int_{|z| = r} p(z) \cdot (z - A)^{-1}\,dz
-\end{equation}
-in complete analogy with the Cauchy formula for complex numbers. The integral of a matrix of holomorphic functions is defined by integrating each entry separately.
-By Cramer's rule, the $(k,l)$-entry of $(z-A)^{-1}$ is $\displaystyle ((z-A)^{-1})_{k,l} = \frac{1}{\det(z-A)} c_{k,l}(z)$ where $c_{k,l}(z)$ is some polynomial in $z$. Let $p(z) = \det(z-A)$ be the characteristic polynomial of $A$. Conclude using $(\ast)$ by applying Cauchy's integral theorem to $c_{k,l}$.
-
-To see that the identity $(\ast)$ holds, proceed as follows (this is a slight variant of McCarthy's argument):
-
-The usual matrix norm induced by the Euclidean norm on $\mathbb{C}^{n}$ satisfies $\|A^{n}\| \leq \|A\|^{n}$.
-Use this to show that $(z - A)^{-1} = \sum_{n = 0}^{\infty} \frac{A^{n}}{z^{n+1}}$, where the right hand side converges uniformly on $\{|z| \gt \|A\| + \varepsilon\}$.
-It follows that we can interchange integration and summation. 
Conclude that $$ A^{k} = \frac{1}{2\pi i}\int_{|z| = r} z^{k} (z - A)^{-1}\,dz$$ and $(\ast)$ follows by linearity.
-
-Here's a link to McCarthy's article (you need a university subscription to download it, but the first page is almost the entire article).<|endoftext|>
-TITLE: Minimum of $x^2+\frac{a}{x}$ without Calculus
-QUESTION [8 upvotes]: How can I find the minimum of $x^2+\frac{a}{x}$ on $\mathbb{R}_+$ without calculus?
-
-REPLY [5 votes]: Revised (neater) answer: Fix $a>0$. For any $x>0$,
-$$
-\Big(x^2 + \frac{a}{x}\Big) - \Big(m^2 + \frac{a}{m}\Big) = \frac{(x - m)(x^2 m + m^2 x - a)}{mx} \geq 0,
-$$
-if $2m^3 = a$: with this choice the numerator factors as $m(x-m)^2(x+2m)$, which is non-negative for $x, m > 0$. We are done. (The above also shows that the minimum is strict.)<|endoftext|>
-TITLE: In differential calculus, why is dy/dx written as d/dx(y)?
-QUESTION [5 upvotes]: In differential calculus, we know that dy/dx is the ratio between the change in y and the change in x. In other words, it is the rate of change in y with respect to x.
-Then why is dy/dx written as d/dx(y)?
-That is, why and how is d/dx considered an operator?
-
-REPLY [10 votes]: It is productive to regard $D = \frac{d}{dx}$ as a linear operator, say from the space of smooth functions on $\mathbb{R}$ to itself, for several reasons. The simplest reason I can think of is that it makes the theory of linear homogeneous differential equations very simple. For a linear homogeneous differential equation is nothing more than an attempt to find the nullspace of the operator $p(D)$, where $p$ is some polynomial.
-To do this we need to find the spectrum of $D$. It's not hard to see that for each eigenvalue $\lambda$ there is a unique (up to scaling) eigenvector, given by $e^{\lambda x}$, and from here it follows that the nullspace of $p(D)$ at least contains (and, if $p$ has distinct roots, is spanned by) the functions $e^{\lambda x}$ where $p(\lambda) = 0$.
-Said another way, if $p(x) = \prod_{i=1}^n (x - \lambda_i)$ then we can factor the operator $p(D)$ as $\prod_{i=1}^n (D - \lambda_i)$, and it's not hard to see that $f$ is in the nullspace of this operator whenever $(D - \lambda_i) f = 0$, i.e. $f(x) = e^{\lambda_i x}$ (up to initial conditions). In fact, we get a solution $f$ whenever $(D - \lambda_i)^{e_i} f = 0$ where $e_i$ is the multiplicity of $\lambda_i$, and studying this condition readily leads to the complete set of solutions.
-In other words, thinking of $D$ as an operator in its own right essentially reduces the study of linear homogeneous differential equations to linear algebra (modulo some existence and uniqueness arguments), specifically the study of the Jordan decomposition.
-Of course one can go much, much further with this idea: for example we can factor differential operators in more than one variable in the same way. The Laplacian $D_x^2 + D_y^2$, where $D_x$ is the derivative with respect to $x$ and $D_y$ the derivative with respect to $y$, factors as $\left( D_x + D_y i \right) \left( D_x - D_y i \right)$, and this immediately gives the connection between harmonic functions and holomorphic functions via the Cauchy-Riemann equations. And the Dirac equation in quantum mechanics was discovered through a similar factorization process, but with matrix rather than merely complex coefficients.
-
-REPLY [3 votes]: The way you're phrasing it, $x$ and $y$ play similar roles, and the question naturally arises why they should be treated differently, as in $\mbox{d}/\mbox{d}x\,(y)$. 
However, in calculus one usually considers functions of variables, such as $f(x)$ or $y(x)$ -- here the symbols for the independent variable and the function play quite different roles, and in order to be able to think of differentiation more abstractly as an operation applied to functions (and yielding new functions), it is helpful to "factorize" the notation so that the function stands alone at the right and "what is being done to it", the operator, is separate and applied from the left -- hence this notation.<|endoftext|>
-TITLE: Vector derivative w.r.t. its transpose $\frac{d(Ax)}{d(x^T)}$
-QUESTION [46 upvotes]: Given a matrix $A$ and a column vector $x$, what is the derivative of $Ax$ with respect to $x^T$, i.e. $\frac{d(Ax)}{d(x^T)}$, where $x^T$ is the transpose of $x$?
-Side note - my goal is to get the known derivative formula $\frac{d(x^TAx)}{dx} = x^T(A^T + A)$ from the above rule and the chain rule.
-
-REPLY [6 votes]: Mathematicians kill each other about derivatives and gradients. Do not be surprised if the students do not understand one word about this subject. The havoc is partly caused by the Matrix Cookbook, a book that should be blacklisted. Everyone has their own definition: $\dfrac{d(f(x))}{dx}$ means either a derivative or a gradient (scandalous). We could write $D_xf$ for the derivative and $\nabla_xf$ for the gradient. The derivative is a linear map and the gradient is a vector, if we accept the following definition: let $f:E\rightarrow \mathbb{R}$ where $E$ is a Euclidean space. Then, for every $h\in E$, $D_xf(h)=\langle\nabla_x f,h\rangle$. In particular $x\rightarrow x^TAx$ has a gradient, but $x\rightarrow Ax$ has not! Using the previous definitions, one has (up to unintentional mistakes):
-Let $f:x\rightarrow Ax$ where $A\in M_n$; then $D_xf=A$ (no problem). On the other hand, $x\rightarrow x^T$ is a bijection (a simple change of variable!); then we can give meaning to the derivative of $Ax$ with respect to $x^T$: consider the function $g:x^T\rightarrow A(x^T)^T$; the required function is $D_{x^T}g:h^T\rightarrow Ah$ where $h$ is a vector; note that $D_{x^T}g$ is a constant. EDIT: if we choose the bases $e_1^T,\cdots,e_n^T$ and $e_1,\cdots,e_n$ (the second one is the canonical basis), then the matrix associated to $D_{x^T}g$ is $A$ again.
-Let $\phi:x\rightarrow x^TAx$; $D_x\phi:h\rightarrow h^TAx+x^TAh=x^T(A+A^T)h$. Moreover $\langle\nabla_x\phi,h\rangle=x^T(A+A^T)h$, that is, ${\nabla_x\phi}^Th=x^T(A+A^T)h$. By identification, $\nabla_x\phi=(A+A^T)x$, a vector (formula (89) in the detestable Matrix Cookbook!); in particular, the solution above, $x^T(A+A^T)$, is not a vector!<|endoftext|>
-TITLE: Some questions in Set Theory
-QUESTION [5 upvotes]: I have some exam questions that were left unanswered for me:
-
-Suppose that for every $\alpha<\kappa$ there is a subset $A_\alpha$ of $\kappa$ of cardinality $\kappa$. Show that there is a subset $X$ of $\kappa$ so that for every $\alpha<\kappa$: $|A_\alpha\cap X|=|A_\alpha\setminus X|=\kappa$.
-Let $\{A_\alpha \mid \alpha<\aleph_1\}$ be a collection of pairwise disjoint non-stationary sets. Suppose that their union is a stationary set. Prove that the set $\{\min A_\alpha \mid \alpha<\aleph_1\}$ is stationary.
-For every pair of sets $A,B \in P(\omega)$ we write $A \subseteq^* B$ if $|A\setminus B|<\aleph_0$. Prove that there exists a $\subseteq^*$-increasing sequence of length $\aleph_1$.
-
-I'll appreciate any help regarding those questions. 
-
-Thanks in advance,
-Pavel
-
-REPLY [5 votes]: Here is a solution for the first problem:
-We already know that for every infinite cardinal $\kappa$ we have $\kappa=\kappa\cdot\kappa$. So split every $A_\alpha$ into $\kappa$ pieces of size $\kappa$, and let's call them $A_{\alpha,\beta}$. Now let $\{B_\alpha : \alpha<\kappa\}$ be an enumeration of all the $A_{\alpha,\beta}$. Observe now that it suffices to find sets $X,Y$ such that for every $\alpha<\kappa$ we have $B_\alpha\cap X\neq\varnothing\neq B_\alpha\cap Y$ and $X\cap Y=\varnothing$.
-Each of these sets will satisfy the property we want, since in every $A_{\alpha,\beta}$ there will exist an element of $X$ and one of $Y$, and therefore $\kappa$ elements of $A_\alpha$ in $X$ and $\kappa$ in $Y$; furthermore, $X\cap Y=\varnothing$ gives $Y\subset\kappa\setminus X$ and $X\subset\kappa\setminus Y$. I will construct two such sets $\mathcal{C}$, $\mathcal{D}$ inductively:
-Let $\gamma_0$ be the least element of $B_0$ and set $\mathcal{C}_0=\{\gamma_0\}$. Now let $\delta_0$ be the least element of $B_0\setminus\mathcal{C}_0$ and let $\mathcal{D}_0=\{\delta_0\}$. Assume now that for every $\xi<\alpha$ we have defined $\mathcal{C}_\xi$ and $\mathcal{D}_\xi$ such that each of them has at most $|\xi+1|$ elements.
-If $(\bigcup_{\xi<\alpha}\mathcal{C}_\xi)\cap B_\alpha$ is non-empty, let $\mathcal{C}_\alpha=\bigcup_{\xi<\alpha}\mathcal{C}_\xi$. Otherwise, let $\gamma_\alpha$ be the least element of $B_\alpha\setminus(\bigcup_{\xi<\alpha}\mathcal{D}_\xi)$ (this element has to exist since the union has at most $|\alpha|$ elements while $B_\alpha$ has $\kappa$) and let $\mathcal{C}_\alpha=(\bigcup_{\xi<\alpha}\mathcal{C}_\xi)\cup\{\gamma_\alpha\}$ (observe that this has at most $|\alpha+1|$ elements). Now, if $(\bigcup_{\xi<\alpha}\mathcal{D}_\xi)\cap B_\alpha$ is non-empty, let $\mathcal{D}_\alpha=\bigcup_{\xi<\alpha}\mathcal{D}_\xi$. Otherwise, if $\delta_\alpha$ is the least element of $B_\alpha\setminus\mathcal{C}_\alpha$ (again this exists), let $\mathcal{D}_\alpha=(\bigcup_{\xi<\alpha}\mathcal{D}_\xi)\cup\{\delta_\alpha\}$. Finally, let $\mathcal{C}=\bigcup_{\alpha<\kappa}\mathcal{C}_\alpha$ and $\mathcal{D}=\bigcup_{\alpha<\kappa}\mathcal{D}_\alpha$.
-Finally, notice that $\mathcal{C}\cap\mathcal{D}=\varnothing$ because of the above construction, and for every $\alpha<\kappa$ we have $B_\alpha\cap\mathcal{C}\neq\varnothing$ and $B_\alpha\cap\mathcal{D}\neq\varnothing$.<|endoftext|>
-TITLE: How to find solutions of linear Diophantine ax + by = c?
-QUESTION [130 upvotes]: I want to find a set of integer solutions of the Diophantine equation $ax + by = c$, where apparently $\gcd(a,b)\mid c$. What formula can I use to find $x$ and $y$?
-I tried to play around with it:
-$x = (c - by)/a$, hence $a\mid(c - by)$.
-$a$, $c$ and $b$ are known. So to obtain an integer solution we need $c - by = ak$, and I got lost from here, because $y = (c - ak)/b$. I kept repeating this routine and could not find a way out of it. Any hint?
-Thanks,
-Chan
-
-REPLY [2 votes]: Looks can be deceiving. Finding the integer solutions of the equation $ax + by = c$ is anything but easy. Please endure a rather long derivation.
-To make it more comprehensible, let's first solve the equation for $y$:
-\begin{align*}
-ax + by = c\\
-by = c - ax\\
-y = \frac{c - ax}{b}
-\end{align*}
-To have an integer solution, $y$ must be an integer, and that happens exactly when $c - ax$ is a multiple of $b$, i.e. $c - ax = -nb$ for some integer $n$, so $ax = c + nb$. This has the same meaning as $ax \equiv c \pmod{b}$. 
-To continue, we need this Theorem 1:
-
-The congruence $ax \equiv c \pmod{b}$ has a solution iff $\gcd(a, b) \mid c$.
-
-And this Lemma 2:
-
-If $\gcd(p, q) = 1$, then $px \equiv r \pmod{q}$ has a solution, unique modulo $q$.
-
-To keep this answer manageable, I would like to skip the proofs of Theorem 1 and Lemma 2 (which can be found by googling). Just post a question and let me know in a comment if you run into trouble with them.
-Let's define $d = \gcd(a, b)$, and continue the derivation:
-\begin{align*}
-ax \equiv c \pmod{b}\\
-ax = c + nb\\
-\frac{a}{d} x = \frac{c}{d} + n \frac{b}{d}
-\end{align*}
-Written as a congruence, with $\frac{b}{d}$ as the modulus, this reads:
-\begin{align*}
-\frac{a}{d} x \equiv \frac{c}{d} \pmod{\tfrac{b}{d}}
-\end{align*}
-Note that $\frac{a}{d}$ and $\frac{b}{d}$ from our derivation above are the $p$ and $q$ in Lemma 2, respectively. Also note that since $d$ is $\gcd(a, b)$, we have $\gcd(\frac{a}{d}, \frac{b}{d}) = 1$. Hence, by Lemma 2,
-\begin{align*}
-\frac{a}{d} x \equiv \frac{c}{d} \pmod{\tfrac{b}{d}}
-\end{align*}
-has a solution, and its solutions are exactly the solutions of our equation $ax + by = c$.
-
-As an example, let us solve $6x - 10y = 4 \iff 6x = 4 + 10y \iff 6x \equiv 4 \pmod{10}$. $\gcd(6, 10) = 2$, and $2 \mid 4$, so by Theorem 1 that equation has a solution.
-From our derivation, the solution satisfies $\frac{6}{2} x \equiv \frac{4}{2} \pmod{\tfrac{10}{2}} \iff 3x \equiv 2 \pmod 5$.
-By Lemma 2, we have a solution modulo $5$. What this means is that if we write the solution in $Z_5$, we have:
-\begin{align*}
-\bar{3} \bar{x} = \bar{2}\\
-\bar{x} = \bar{3}^{-1} \: \bar{2}
-\end{align*}
-Since in $Z_5$ we have $\bar{3} \: \bar{2} = \bar{1} = \bar{3} \: \bar{3}^{-1}$, it follows that $\bar{3}^{-1} = \bar{2}$, and we have:
-\begin{align*}
-\bar{x} = \bar{3}^{-1} \: \bar{2}\\
-\bar{x} = \bar{2} \: \bar{2} = \bar{4}
-\end{align*}
-In $Z_5$, $\bar{x} = \bar{4} \iff x \equiv 4 \pmod 5 \iff x = 4 + 5s \iff x = 4, 9, 14, \cdots$. You can check that indeed $x \equiv 4 \pmod 5$ gives the solutions (e.g. $x=4$, $y=2$).<|endoftext|>
-TITLE: $ \mathbb{C} $ is not isomorphic to the endomorphism ring of its additive group
-QUESTION [11 upvotes]: Let $M$ denote the additive group of $ \mathbb{C} $. Why is $ \mathbb{C}$, as a ring, not isomorphic to $\mathrm{End}(M)$, where addition is defined pointwise and multiplication is composition of endomorphisms?
-Thanks!
-
-REPLY [18 votes]: Method 1: $\operatorname{End}(M)$ is not an integral domain.
-Method 2: Count idempotents. There are only 2 idempotent elements of $\mathbb{C}$, but lots in $\operatorname{End}(M)$, including for example the real and imaginary projections along with the identity and zero maps.
-Method 3: Cardinality. Assuming a Hamel basis for $\mathbb{C}$ as a $\mathbb{Q}$-vector space, there are $2^\mathfrak{c}$ $\mathbb{Q}$-vector space endomorphisms of $\mathbb{C}$, but only $\mathfrak{c}$ elements of $\mathbb{C}$.
-
-REPLY [6 votes]: The endomorphism ring of $M$ is its endomorphism ring as a vector space over $\mathbb{Q}$ (since any additive function on $\mathbb{C}$ is in fact $\mathbb{Q}$-linear, and any $\mathbb{Q}$-linear map is necessarily additive).
-How many endomorphisms does a vector space $\mathbf{V}$ over $\mathbb{Q}$ of dimension $\kappa\geq\aleph_0$ have? It turns out it has $2^{\kappa}$ endomorphisms. Any endomorphism is completely determined by the image of a basis. Fix a basis $\beta$ for $\mathbf{V}$ over $\mathbb{Q}$, which is of cardinality $\kappa$. 
-Each endomorphism corresponds to a function $\beta\to\mathop{\oplus}\limits_{b\in\beta}\mathbb{Q} = \mathbb{Q}^{(\beta)}$. So the cardinality of the endomorphism ring is the cardinality of $(\mathbb{Q}^{(\beta)})^{\beta}$.
-The cardinality of $\mathbb{Q}^{(\beta)}$ is $|\beta|=\kappa$, so the entire thing has cardinality
-$$|\mathrm{End}(\mathbf{V})| = \left|\mathbb{Q}^{(\beta)}\right|^{|\beta|} = \kappa^{\kappa} = 2^{\kappa}.$$
-(For a proof that if $\kappa$ is infinite and $2\leq\lambda\leq 2^{\kappa}$, then $\lambda^{\kappa} = 2^{\kappa}$, see this previous answer.)
-In the case of $M$ and $\mathbb{C}$, since $M$ is of dimension $\mathfrak{c}=2^{\aleph_0}$ as a $\mathbb{Q}$-vector space, you have that $|\mathrm{End}(M)| = 2^{\mathfrak{c}}\gt \mathfrak{c}=|\mathbb{C}|$. So there aren't enough elements in $\mathbb{C}$ to give you all the endomorphisms.
-(The elements of $\mathbb{C}$ correspond to the $\mathbb{C}$-linear maps, of course, by simple multiplication; but here you want a lot more maps.)<|endoftext|>
-TITLE: What is the current status of Vinay Deolalikar's proof that P is not equal to NP?
-QUESTION [8 upvotes]: This could be mathematics or computer science, but also statistical physics, so I hope it qualifies for interest.
-I am aware that there were reservations about the proof of $P \neq NP$, but no fatal flaws reported. I have followed Terence Tao's blog and Tim Gowers's, both of whom have reservations, but Deolalikar is sticking with his assertions and was supposedly preparing an updated response to the critics.
-I haven't seen much since the posts in August. Is anyone aware of any updates more recent than September?
-
-Here's the status of the paper:
-
-http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper
-
-REPLY [13 votes]: It was my understanding that Terence Tao felt that there was no hope of recovery:
-"To give a (somewhat artificial) analogy: as I see it now, the paper is like a lengthy blueprint for a revolutionary new car, that somehow combines a high-tech engine with an advanced fuel injection system to get 200 miles to the gallon.
-The FO(LFP) objections are like a discovery of serious wiring faults in the engine, but the inventor then claims that one can easily fix this by replacing the engine with another, slightly less sophisticated engine.
-The XORSAT and solution space objections are like a discovery that, according to the blueprints, the car would run just as well if the gasoline fuel was replaced with ordinary tap water. As far as I can tell, the response to this seems to be equivalent to “That objection is invalid – everyone knows that cars can’t run on water.”
-The pp/ppp objection is like a discovery that the fuel is in fact being sent to a completely different component of the car than the engine.
-"
-Terence Tao at P=NP and Godel's Lost Letter<|endoftext|>
-TITLE: Proof of bound on $\int t\,f(t)\ dt$ given well-behaved $f$
-QUESTION [14 upvotes]: I got the following question by mail from someone I don't know from Adam. (Quoted in part.)
-
-if $f(t)$ continuously diff. on $[0,1]$ and
-a) $\int_0^1f(t)\ dt=0$
-b) $m\le f\,'\le M$ on $[0,1]$
-Prove
-$\frac m{12}\le\int_0^1t\cdot f(t)\ dt\le\frac M{12}$
-I suspect it might be an error
-
-I assumed immediately that it's an error, but my first two thoughts as counterexamples were $f(t)=\frac12-t$ and $f(t)=\sin(2\pi t)$, both of which satisfy the result. Anyone with a proof or counterexample? 
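-A quick numerical check of these two candidates (an illustrative sketch; the helper function is hypothetical, and a small tolerance absorbs quadrature error):
-
-import math
-
-def check(f, fprime, name, n=100000, tol=1e-9):
-    h = 1.0 / n
-    ts = [(i + 0.5) * h for i in range(n)]      # midpoint-rule nodes
-    integral = sum(t * f(t) for t in ts) * h    # approximates \int_0^1 t f(t) dt
-    m = min(fprime(t) for t in ts)              # approximate inf of f' on [0,1]
-    M = max(fprime(t) for t in ts)              # approximate sup of f' on [0,1]
-    ok = m / 12 - tol <= integral <= M / 12 + tol
-    print(name, integral, m / 12, M / 12, ok)
-
-check(lambda t: 0.5 - t, lambda t: -1.0, "f(t) = 1/2 - t")
-check(lambda t: math.sin(2 * math.pi * t),
-      lambda t: 2 * math.pi * math.cos(2 * math.pi * t),
-      "f(t) = sin(2 pi t)")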
-
-REPLY [13 votes]: Edit: Incorporated simplification and shortening of the argument suggested by Didier. Thanks!
-
-By a) we know that $\frac{1}{2} \int_{0}^{1} f(x)\,dx = 0$. Therefore
-\begin{align*}
-\int_{0}^{1} xf(x)\,dx & = \int_{0}^{1} \left(x - \frac{1}{2}\right)f(x)\,dx \\
-& = \left. \frac{1}{2}(x^{2} - x) f(x)\right\vert_{0}^{1} -
-\int_{0}^{1} \frac{1}{2}(x^{2} - x)f'(x)\,dx \\
-& = \frac{1}{2} \int_{0}^{1} (x - x^{2})f'(x)\,dx
-\end{align*}
-using integration by parts (the boundary term vanishes because $x^2 - x = 0$ at both endpoints).
-Now note that $x - x^{2} \geq 0$ on $[0,1]$ and by b) we have $m \leq f'(x) \leq M$, hence
-$$
-C m \leq \int_{0}^{1} x f(x)\,dx \leq C M
-$$
-with
-$$
-C = \frac{1}{2} \int_{0}^{1} (x - x^{2})\,dx = \left.\frac{1}{2}\left(\frac{1}{2}x^{2} - \frac{1}{3}x^{3}\right)\right\vert_{0}^{1} = \frac{1}{12}
-$$
-as we wanted.
-Let me add that these estimates arise in the Euler summation method and are often used in proofs of the Stirling formula.<|endoftext|>
-TITLE: Minimal spectrum of a commutative ring
-QUESTION [10 upvotes]: Can anyone explain to me why the minimal prime ideals of a commutative ring (with the subspace topology inherited from the Zariski topology) form a totally disconnected space, or give a reference? I remember that this is true but can't seem to prove it myself or find the proof anywhere.
-I would be especially happy if there is some proof that does not use the (easy) fact that this space is Hausdorff. This is because I am trying to prove that the primitive spectrum of a certain noncommutative ring, which I know is not Hausdorff, is totally disconnected. Hopefully the proof works in my situation when phrased in the appropriate way.
-
-REPLY [4 votes]: Let $\text{MinSpec}(A) \subset \text{Spec}(A)$ be the subset of minimal primes. I claim that $\text{MinSpec}(A) \cap W$ is open and closed whenever $W$ is a quasi-compact open of $\text{Spec}(A)$. This follows immediately from Lemma Tag 00EV (which has a purely algebraic proof). Hence $\text{MinSpec}(A)$ has a base for its topology consisting of closed and open subsets. Thus it is totally disconnected.<|endoftext|>
-TITLE: High Dimensional Optimization Algorithm?
-QUESTION [5 upvotes]: I have an optimization problem that at first sounds quite textbook. I have a convex objective function in $D$-dimensional space that is twice differentiable everywhere and has no local optima.
-Ordinarily it would be a perfect candidate for numerical Newton-Raphson methods. However, Newton-Raphson requires solving a system of linear equations of size $D$. This takes $O(D^3)$ computations, at least with any reasonably implementable algorithm I'm aware of. In my case $D$ is on the order of several thousand. Can anyone suggest an optimization algorithm that is typically more efficient than Newton-Raphson for $D$ this large? I tried gradient descent, but empirically it seemed absurdly slow to converge.
-
-REPLY [2 votes]: Even if your matrix is of order several thousand, a linear solve should still be pretty fast, and Newton methods should not require many iterations. If your matrix is sparse or has special structure, it will be even faster. My Matlab solves a rank-2000 system in 3 seconds. If you need even more speed, you can try to solve the system only approximately using an iterative method with some fixed number of steps (so then it becomes $O(D^2)$), and see if the Newton iteration still converges.<|endoftext|>
-TITLE: Division problem
-QUESTION [8 upvotes]: Problem statement
-
-Five sailors were cast away on an
- island, inhabited only by monkeys. To
- provide food, they collected all the
- coconuts that they could find. During
- the night, one of the sailors woke up
- and decided to take his share of the
- coconuts. He divided them into 5 equal
- piles and noticed that one coconut was
- left over, so he threw it to the
- monkeys; he then hid his pile and went
- to sleep. A little later a second
- sailor awoke and had the same idea as
- the first one. He divided the
- remainder of the coconuts into 5 equal
- piles, discovered also that one
- coconut was left over, and threw it to
- the monkeys. He then hid his share and
- went back to sleep. One by one the
- other three sailors did the same
- thing, each throwing one coconut to
- the monkeys. The next morning, all
- sailors looking sharp and ready for
- breakfast, divided the remaining pile
- of coconuts into five equal parts, and
- no coconut was left over this time.
- Find the smallest number of coconuts
- in the original pile.
-
-My solution was, each time a sailor takes his share, I recalculate the number of coconuts:
-
-$n = 5\cdot q_0 + 1 $
-$\to$ # left = $\frac{4}{5}\cdot q_0 = \frac{4}{5}\cdot \frac{n - 1}{5}$
-$\frac{4}{5}\cdot q_0 = 5 \cdot q_1 + 1 \to$ # left $= \frac{4}{5}\cdot q_1 ....$
-
-Continuing this process, I ended up with a very strange fraction:
-$$\frac{(256\cdot n - 464981)}{1953125}$$
-Then I set this fraction to $5\cdot k$, since the last time # coconuts is divisible by $5$, to solve for $n$.
-Am I headed in the right direction? Any hint would be greatly appreciated.
-Thanks,
-Chan
-
-REPLY [4 votes]: You wrote that $n = 5q_0 + 1$, so then the first sailor threw away a coconut, and kept $q_0$ coconuts. The remaining number of coconuts is $4q_0$!
-So then you write:
-$$n = 5q_0 + 1$$
-$$4 q_0 = 5q_1 + 1$$
-$$4 q_1 = 5q_2 + 1$$
-$$4 q_2 = 5q_3 + 1$$
-$$4 q_3 = 5q_4 + 1$$
-$$4 q_4 = 5q_5$$
-Now you know that $q_5 > 0$, so $n$ will be smallest when $q_5 = 1$. (Of course if they had nothing for breakfast, do the following with $q_5 = 0$.)
-You must also consider that when the fifth sailor was taking his share, after throwing away a coconut, the remaining number of coconuts was divisible by $5$. So
-$$ q_4 = \frac{5q_5}{4}$$
-must be an integer. Let's write $q_5 = 4k_0$. Then writing the equations backwards, and expressing $q_3$ with $q_{4}$ yields:
-$$4 q_4 = 5q_5 = 20k_0 \text{ so } q_4 = 5k_0$$
-$$4 q_3 = 5q_4 + 1 = 5^2k_0 +1 $$
-Again you must make sure that $\frac{5^2k_0 +1}{4}$ is an integer. Continue on until you reach $n$. You will be re-expressing the $k_0$ from here on (that's why I added the subscript).<|endoftext|>
-TITLE: Explanation/Intuition behind why $C_n$ counts the number of triangulations of a convex $n+2$-gon?
-QUESTION [8 upvotes]: I was reading about the Catalan numbers on Wikipedia, because it seems like they show up quite a bit.
-I'm reading some of the examples in the applications to combinatorics section. Some of them make sense fairly easily for me. For example, I see that the number of ways to parenthesize a product of $n+1$ terms $x_0,x_1,\dots, x_n$ is $C_n$, since the final multiplication $\cdot$ based on the parentheses must show up between two factors $x_k$ and $x_{k+1}$, and the numbers of ways to parenthesize those terms $x_0,\dots,x_k$ and $x_{k+1},\dots,x_n$ are $C_k$ and $C_{n-k-1}$, respectively. Since this final $\cdot$ could show up anywhere, I get the recurrence
-$$
-C_n=C_0C_{n-1}+C_1C_{n-2}+\cdots +C_{n-1}C_0
-$$
-which is the same as the Catalan numbers, given that the initial conditions are the same.
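-As a quick sanity check (a little Python sketch of mine, using nothing beyond the standard library), this recurrence with $C_0 = 1$ does reproduce $1, 1, 2, 5, 14, 42, \dots$:
-
-    def catalan_list(n):
-        # C_0 = 1 and C_m = sum_{k=0}^{m-1} C_k * C_{m-1-k}
-        C = [1]
-        for m in range(1, n + 1):
-            C.append(sum(C[k] * C[m - 1 - k] for k in range(m)))
-        return C
-
-    print(catalan_list(5))   # [1, 1, 2, 5, 14, 42]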
-In the same way, I see that the number of binary rooted trees with $n+1$ leaves is also given by $C_n$, since a binary tree essentially determines a parenthesization of a product of $n+1$ leaves, where two leaves are enclosed in parentheses if they have the same parent node, and you follow the tree from the leaves up.
-I also understand the way it counts monotonic paths not passing the diagonal, as the first time a path touches the diagonal is like the position of the final multiplication on a product of $n$ terms, and splits the product into two smaller, but essentially similar subcases.
-But how does triangulating an $n+2$-gon give the same counting situation? I figured if you choose a vertex and draw a triangle with it as one of the vertices, it should give two other polygons that should suggest a recurrence relation like the one above, just like the Catalan numbers. Could someone point it out, or possibly show a way to relate it to one of the other identical counting situations which I understood? I don't quite see how this situation is the same. Thanks.
-
-REPLY [5 votes]: We can find a 1-1 correspondence between a triangulation of a polygon and a binary tree, by considering a triangle to be a vertex and connecting two vertices by an edge if they share a side. You fix one side of the polygon as the "root" side, and the triangle on that side becomes the root vertex of the tree.
-For instance see this: http://www.cs.sunysb.edu/~jgao/CSE548-fall07/David-mount-DP.pdf, see pages 20 and 21.
-Another page: http://www.ics.uci.edu/~eppstein/260/011023/<|endoftext|>
-TITLE: geometric argument for van-kampen?
-QUESTION [10 upvotes]: I've seen Van-Kampen's theorem presented algebraically many times, and although it provides a useful method of calculation, I don't have a very clear picture for "why" it should be true. Does anyone know of a more visual argument, or even an example that makes it easier to see?
-
-REPLY [5 votes]: I will say it less accurately than Arturo, and no doubt too roughly. Consider it a first approximation to the theorem.
-The way I see the Seifert-van Kampen theorem, it is like a sophisticated version of the formula for the dimension of a sum of vector subspaces:
-$$
-\mathrm{dim} (F + G) = \mathrm{dim} F + \mathrm{dim} G - \mathrm{dim} (F\cap G) \ .
-$$
-Because, forgetting torsion, the fundamental group counts the number of "holes" (non-contractible loops, generators of $\pi_1$) in your space $X$. Right?
-So, if you have $X = U \cup V$, then the number of holes in $X$ must be equal to the number of holes in $U$ plus the number of holes in $V$..., minus the number of holes in $U\cap V$, because those you have already counted twice.
-Hence, in order to count all the holes in $U$ plus those in $V$, you take the free product $\pi_1 (U) * \pi_1(V)$, but you must "identify" the holes "shared" by $U$ and $V$. This is the reason for the amalgamated product $\pi_1(U) *_{\pi_1(U\cap V)} \pi_1(V)$.<|endoftext|>
-TITLE: Intuitionism and circles
-QUESTION [5 upvotes]: I know next to nothing about intuitionism, so my question is probably silly :)
-As I understand from Wikipedia, intuitionism (at least finitism) doesn't 'trust' in the existence of irrational numbers, because they cannot be constructed (at least in a finite number of steps). How does it deal with circles then? Or, even better, how does it deal with bilateral right triangles? Do such things exist in this framework? Do metric concepts exist at all?
-
-REPLY [3 votes]: Heyting's book
“Intuitionism: An Introduction” (a Russian translation is also available) constructs the real numbers.<|endoftext|>
-TITLE: two phds in math
-QUESTION [7 upvotes]: If one gets a PhD in math, is one able to be admitted to another PhD program in math? It is not for me; I am asking for someone else.
-
-REPLY [13 votes]: Here is the statement from the Graduate Division at Berkeley:
-
-Duplication of Degree.
-Students who already hold a doctoral level degree are not admitted and duplication of degree or admission to a lesser degree is not permitted. However, in extraordinary circumstances, the faculty of the department may request an exception from the Dean of the Graduate Division. The department must demonstrate that the second degree field of study and program are distinctly different from that of the original degree, and that there is a professional or scholarly purpose that requires this second degree.
-
-Similar restrictions apply throughout the UC system. Harvard likewise states:
-
-Persons holding a PhD or its equivalent, or who have completed most of the work required to earn the PhD elsewhere, may apply to a PhD program in the Graduate School only if it is an unrelated field of study.
-
-I know that at least until the late 1990s MIT's Computer Science Department had a rule that nobody with a Ph.D. (in any field) would be admitted to their Ph.D. program; the rule was changed in the early 2000s so that a Ph.D. in a field other than Computer Science was no longer a bar for admission, but it still placed applicants at a disadvantage. I would expect similar rules to exist in other Departments at MIT, but I did not spot any explicit rule in the mathematics admission page.
-Similar policies are explicit in many US universities; even in those where it is not explicit, space limitations will usually lead admission committees to consider such applications very skeptically, preferring to give spots to new students, and if they are very keen on the applicant, will likely suggest a post-doc instead.<|endoftext|>
-TITLE: Dimension of a simple algebra over its center
-QUESTION [5 upvotes]: Let $A$ be a finite dimensional simple algebra over a field $k$. Denote by $K$ the center of $A$. Why is the dimension of $A$ over $K$ a square?
-
-REPLY [12 votes]: This follows from the Artin-Wedderburn classification of central simple algebras, together with a sneaky little argument involving tensor products.
-First, the big theorem: let $A$ be a finite dimensional simple (associative, unital) $K$-algebra with precise center $K$. Then $A$ is isomorphic to $M_n(D)$, where $D$ is a
-$K$-central division algebra. Here, for any associative algebra $B$, $M_n(B)$ is the $n \times n$ matrix algebra with entries in $B$. Clearly $\operatorname{dim}_K M_n(B) = n^2 \operatorname{dim}_K B$, so it is enough to show that the dimension of a $K$-central division algebra $D$ is a square.
-For this, tensor from $K$ to the algebraic closure $\overline{K}$. On the one hand,
-$\operatorname{dim}_{\overline{K}} D \otimes \overline{K} = \operatorname{dim}_K D$. But on the other hand, an algebraically closed field admits no nontrivial finite-dimensional division algebras -- easy exercise -- so $D \otimes \overline{K} \cong M_N(\overline{K})$ and thus $\operatorname{dim}_K D = \operatorname{dim}_{\overline{K}} M_N(\overline{K}) = N^2$.
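-(A concrete instance: the real quaternions $\mathbb{H}$ form an $\mathbb{R}$-central division algebra, and indeed $\operatorname{dim}_{\mathbb{R}} \mathbb{H} = 4 = 2^2$.)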
-To prove the "easy exercise", let $D$ be a finite-dimensional division algebra over an algebraically closed field $K$, let $x \in D$ and consider the minimal polynomial of -multiplication by $x$, viewed as a $K$-linear endomorphism of $D$...<|endoftext|> -TITLE: A connected space which is neither locally connected nor path connected -QUESTION [6 upvotes]: This is Problem 3.19.3 of Dieudonné's Foundations of Modern Analysis (in my words). For $x$ a rational number, let $E_x=\{x\}\times\left[-1,0\right[$, and for $x$ an irrational number, let $E_x=\{x\}\times[0,1]$. Let $E=\bigcup_{x\in\mathbf{R}}E_x$ with the subspace topology. Show that $E$ is connected. -There is the following hint: "Use (3.19.1) and (3.19.6) to study the structure of a subset of $E$ which is both open and closed." -(3.19.1) is the fact that the connected subspaces of $\mathbf{R}$ are intervals and that intervals are connected. -(3.19.6) is the fact that any open set of $\mathbf{R}$ is a countable disjoint union of open intervals. -My thoughts: Let $A$ be a clopen subset of $E$. It is fairly obvious that, for any $x\in\mathbf{R}$, $A$ contains either all elements or no element of $E_x$. So we define a subset $B$ of the real line by $B=\{x\in\mathbf{R}:E_x\subset A\}$. Now I thought that $B$ has to be clopen as well (in $\mathbf{R}$). But I can't prove it. I have no problems showing that any irrational point of $B$ is an interior point, and that any irrational point of $\overline{B}$ is in $B$, but nothing about rational points. I was pretty sure that this is the way to go, since the hint is exclusively about subsets of $\mathbf{R}$. -Any (further) hint or comment is much appreciated. - -REPLY [4 votes]: Consider the set $B$ as you have defined it. I claim that $B$ is an open subset of $\mathbb{R}$. If $x\in B$ and $x$ is irrational, then $(x,0)\in A$ and so $A$ contains a ball around $(x,0)$, and so $A$ contains points from $E_y$ for all $y$ sufficiently close to $x$. So $x$ is in the interior of $B$, as desired. If $x$ is rational, in contrast, then $A$ contains points from $E_y$ for all rational $y$ sufficiently close to $x$, and so $E_y\subset A$ for all rational $y$ sufficiently close to $x$. In this case, since the complement of $A$ is open, it means that the closure of such $E_y$ must be contained in $A$, but the closure of such $E_y$ includes all points of the form $(z,0)$ for irrational $z$ sufficiently close to $x$. Thus again, $x$ is in the interior of $B$, as desired. -It now follows by considering the complement of $A$ that $B$ is also clopen, which would be a contradiction unless $A$ and hence $B$ was either empty or the whole space. -(I see now that Joriki has answered while I was writing this...)<|endoftext|> -TITLE: The category of presheaves on a possibly-large category -QUESTION [9 upvotes]: Suppose $\mathcal{C}$ is a category such that for every $c \in \mathrm{Ob}(\mathcal{C})$, the slice category $\mathcal{C}/c$ is equivalent to a small category. I need to show that the category of presheaves $[\mathcal{C}^\mathrm{op}, \mathbf{Set}]$ is an elementary topos. -I understand the standard argument used when $\mathcal{C}$ is a small category — for example, to construct the exponential $G^F$ of two presheaves, we apply the Yoneda lemma and see that we are forced to set $G^F (c) = \mathrm{Hom}(\mathbf{y}c \times F, G)$, where $\mathbf{y}c = \mathrm{Hom}(-, c) : \mathcal{C}^\mathrm{op} \to \mathbf{Set}$ is the contravariant hom-functor. 
The main obstruction, then, to using this argument is showing that $\mathrm{Hom}(\mathbf{y}c \times F, G)$ is indeed a set under these weaker assumptions. Well, actually, I'd first have to show that $\mathbf{y}c$ is actually a set-valued functor... but isn't this the same as showing that $\mathcal{C}$ is locally small? It's intuitively plausible that $\mathcal{C}/c$ being equivalent to a small category implies $\mathcal{C}$ itself is locally small, but I imagine, from the phrasing of the problem, that it's not the case.
-
-REPLY [2 votes]: We must add the hypothesis that $\mathcal{C}$ is locally small.
-Indeed, consider the case where $\mathcal{C}$ is a groupoid with a unique object $c$. Then $\mathcal{C}_{/ c}$ is equivalent to the terminal groupoid – in particular, it is always essentially small. But $\mathcal{C}$ may not be locally small.<|endoftext|>
-TITLE: What's the fastest way to take powers of a square matrix?
-QUESTION [6 upvotes]: So I know that you can use the Strassen algorithm to multiply two $2 \times 2$ (block) matrices using seven multiplications instead of eight, but what about multiplying two matrices that are exactly the same? Is there a faster way to go about doing that (i.e. by reducing the number of multiplications per iteration to something less than 7)?
-
-REPLY [6 votes]: Consider $$M = \begin{pmatrix} 0 & A & 0 \\ 0 & 0 & B \\ 0 & 0 & 0 \end{pmatrix}.$$ We have $$M^2 = \begin{pmatrix} 0 & 0 & AB \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
-So matrix squaring is not asymptotically better than regular product. Of course, in practice the constant factor difference is very significant.
-Taking a power is best achieved by repeated squaring. The other method is diagonalizing, but I think usually eigenvalues are found via the power method rather than vice versa.
-
-REPLY [3 votes]: It really depends on the matrix. For instance if you have a diagonalizable matrix then you can decrease the number of multiplications by diagonalizing so that
-$$A^k=P^{-1}B^k P,$$
-and since $B$ is a diagonal matrix, computing $B^k$ only requires taking powers of its diagonal entries, after which two matrix multiplications (by $P^{-1}$ and $P$) recover $A^k$. Somewhat similar things can be done with Jordan normal form matrices. A general method consists of using binary exponentiation to reduce the number of matrix multiplications that need to be done from $N$ to $O(\log N)$. Technically speaking matrix multiplication can be done "faster" than Strassen as well, but this will only be the case for very large matrices, due to the large constant coefficient hidden in the Coppersmith–Winograd algorithm.<|endoftext|>
-TITLE: polar coordinates and derivatives
-QUESTION [13 upvotes]: Using the standard notation $(x,y)$ for Cartesian coordinates, and $(r, \theta)$ for polar coordinates, it is true that $$ x = r \cos \theta$$ and so we can infer that
-$$ \frac{\partial x}{\partial r} = \cos \theta.$$ This means that if we perturb $r$ to $r+\Delta r$, $x$ gets perturbed to approximately $x+ \cos(\theta) \Delta r$. This is equivalent to saying that if we perturb $x$ to $x+\Delta x$, then we perturb $r$ to approximately $r+ (1/\cos \theta) \Delta x$. Correspondingly, we expect that $$ \frac{\partial r}{\partial x} = \frac{1}{\cos \theta}.$$ In fact,
-$$ \frac{ \partial r}{\partial x} = \frac{\partial}{\partial x} \sqrt{x^2+y^2}= \frac{2x}{2 \sqrt{x^2+y^2}} = \frac{x}{r} = \cos \theta.$$ What gives?
-
-REPLY [14 votes]: When you take partial derivatives, what you are doing is to hold the other variable fixed.
So for instance if $f$ is a function of $n$ variables, $f(x_1,x_2,\ldots,x_n)$, $\frac{\partial f}{\partial x_i}$ means you hold all the other $x_j$'s constant except $x_i$, vary $x_i$ by $\delta x_i$, and find out what happens to $\delta f$.
-We have
-$x= r \cos(\theta)$, $y= r \sin(\theta)$, $r^2= x^2 + y^2$ and $\tan(\theta) = \frac{y}{x}$.
-We are transforming from the $(x,y)$ space to $(r,\theta)$ space.
-In the $(x,y)$ space, $x$ and $y$ are independent variables, and in the $(r,\theta)$ space, $r$ and $\theta$ are independent variables.
-$\frac{\partial x}{\partial r}$ means you are fixing $\theta$ and finding out how changing $r$ affects $x$.
-So $\frac{\partial x}{\partial r} = \cos(\theta)$ since $\theta$ is fixed.
-$\frac{\partial r}{\partial x}$ means you are fixing $y$ and finding out how changing $x$ affects $r$.
-So $\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}}$ since $y$ is fixed.
-When you do $r = \frac{x}{\cos(\theta)}$ and argue that $\frac{\partial r}{\partial x} = \frac{1}{\cos (\theta)}$ you are not holding $y$ constant.
-If you were to hold $y$ constant, then $\theta$ would change as well.
-EDIT:
-I am adding this in the hope that it will make things a bit more clear. If you want to use $r = \frac{x}{\cos(\theta)}$ and still derive it, you need to do as follows:
-$r = \frac{x}{\cos(\theta)}$, $\delta r = \frac{\delta x}{\cos(\theta)} + \frac{-x}{\cos^2(\theta)} (-\sin(\theta)) \delta \theta$.
-$\tan(\theta) = \frac{y}{x} \Rightarrow x \tan(\theta) = y$
-$\delta x \tan(\theta) + x \sec^2(\theta) \delta \theta = 0$ (since $y$ is held constant)
-$x \delta \theta = - \frac{\delta x \tan(\theta)}{\sec^2(\theta)} = - \sin(\theta) \cos(\theta) \delta x$.
-Plugging the above in the previous expression, we get
-$\delta r = \frac{\delta x}{\cos(\theta)} + \frac{\sin(\theta)}{\cos^2(\theta)} (-\sin(\theta) \cos(\theta) \delta x) = \frac{1-\sin^2(\theta)}{\cos(\theta)} \delta x = \cos(\theta) \delta x$ and hence we get
-$\frac{\partial r}{\partial x} = \cos(\theta) = \frac{x}{\sqrt{x^2+y^2}}$<|endoftext|>
-TITLE: Recommendation on stochastic process books
-QUESTION [15 upvotes]: I was wondering if someone could recommend good books on stochastic processes
-
-with measure theory treatment
-with not much or no measure theory treatment
-
-For each, it would be nice to have some books at introductory, intermediate, and comprehensive levels. They can be either classical or recently published, either continuous, discrete or both.
-Thanks in advance!
-
-REPLY [10 votes]: Some good books include:
-
-Stochastic Processes by Sheldon Ross (no measure theory)
-Introduction to Probability Models by Sheldon Ross (no measure theory)
-Stochastic Processes by J. L. Doob (measure theory)<|endoftext|>
-TITLE: How to find an integer solution for general Diophantine equation $ax + by + cz + dt... = N$
-QUESTION [11 upvotes]: I know how to solve $ax + by = c$ using the extended Euclidean algorithm. But with more than two variables, I'm lost :(.
-Verifying whether it has an integer solution is easy, since we only need to check that $\gcd(a, b, c, d, \ldots) \mid N$. Other than that, how can we find an integer solution for this equation?
-Thanks,
-Chan
-
-REPLY [12 votes]: Suppose you need to solve
-$$a_1x_1 + a_2x_2 + a_3x_3 = c\qquad (1)$$
-in integers.
-I claim this is equivalent to solving
-$$\gcd(a_1,a_2)y + a_3x_3 = c\qquad (2)$$
-in integers.
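-Applied recursively, this reduction also gives an algorithm; the proof of the equivalence follows below. Here is a small Python sketch of my own (illustration only; it assumes positive coefficients, and ext_gcd is the usual extended Euclidean algorithm):
-
-    def ext_gcd(a, b):
-        # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
-        if b == 0:
-            return a, 1, 0
-        g, x, y = ext_gcd(b, a % b)
-        return g, y, x - (a // b) * y
-
-    def solve(coeffs, c):
-        # one integer solution of coeffs[0]*x_1 + ... + coeffs[-1]*x_n == c, or None
-        if len(coeffs) == 1:
-            return [c // coeffs[0]] if c % coeffs[0] == 0 else None
-        g, r, s = ext_gcd(coeffs[0], coeffs[1])
-        rest = solve([g] + coeffs[2:], c)   # replace a_1, a_2 by gcd(a_1, a_2)
-        if rest is None:
-            return None
-        y = rest[0]
-        return [r * y, s * y] + rest[1:]
-
-    print(solve([6, 10, 15], 7))   # [-98, 49, 7]; indeed 6*(-98) + 10*49 + 15*7 == 7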
-To see this, note that any solution to (1) produces a solution to (2): letting $g=\gcd(a_1,a_2)$, we can write $a_1 = gk_1$, $a_2=gk_2$, so then we have:
-$$c = a_1x_1 + a_2x_2 + a_3x_3 = g(k_1x_1) + g(k_2x_2) + a_3x_3 = g(k_1x_1+k_2x_2) + a_3x_3,$$
-solving (2). Conversely, suppose you have a solution to (2). Since we can find $r$ and $s$ such that $g=ra_1+sa_2$, we have
-$$c = gy+a_3x_3 = (ra_1+sa_2)y +a_3x_3 = a_1(ry) + a_2(sy) + a_3x_3,$$
-yielding a solution to (1).
-This should tell you how to solve the general case
-$$a_1x_1+\cdots+a_nx_n = c$$
-in terms of $\gcd(a_1,\ldots,a_n)$, which can in turn be computed recursively.<|endoftext|>
-TITLE: Proof that Ext$^n_\mathbb{Z}(M, \mathbb{Q})=0$ and Baer's Criterion
-QUESTION [8 upvotes]: That
-(1) Ext$^n_\mathbb{Z}(M, \mathbb{Q})=0$ for every module $M$
-follows easily from the fact that
-(2) $\mathbb{Q}$ is injective.
-However, the only proof I have seen of the injectivity of $\mathbb{Q}$ relies on Baer's Criterion. While the proof of Baer's Criterion is not difficult, it seems stronger than the injectivity of $\mathbb{Q}$ (for example, the proof uses Zorn's Lemma). Is there a different (and preferably simpler) proof of (1) or (2)?
-
-REPLY [6 votes]: Let $M$ be an abelian group and consider an extension $$0\to\mathbb Q\to E\to M\to0$$ of abelian groups. Since $\mathbb Q$ is flat, there is an induced exact sequence $$0\to\mathbb Q\otimes_{\mathbb Z}\mathbb Q\to \mathbb Q\otimes_{\mathbb Z} E\to \mathbb Q\otimes_{\mathbb Z} M\to 0.$$ The latter, being a short exact sequence of rational vector spaces, splits and there is a map $\mathbb Q\otimes_{\mathbb Z}E\to\mathbb Q\otimes_{\mathbb Z}\mathbb Q$ such that the composition $$\mathbb Q\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q\otimes_{\mathbb Z}E\to\mathbb Q\otimes_{\mathbb Z}\mathbb Q$$ is the identity. Now the composition $$E\to \mathbb Q\otimes_{\mathbb Z}E\to\mathbb Q\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q,$$ with the first and the last maps being the obvious morphisms, splits the original extension.
-It follows that $\mathrm{Ext}_{\mathbb Z}^1(M,\mathbb Q)=0$.<|endoftext|>
-TITLE: What's the average width of a convex polygon?
-QUESTION [16 upvotes]: If one computes the average width of a triangle, then one gets $(s_1+s_2+s_3)/\pi$, where $s_1$, $s_2$, $s_3$ are the side lengths. I did this by brute force, using an integral which went through an interval of length $\pi$. For each angle $\theta$, the "width" $w_\theta$ of the triangle is the smallest distance apart at which one may place two parallel lines at that angle without penetrating the triangle. (Imagine using calipers for each angle.)
-My questions: Is there an easier way (without using an integral) to see that the ratio of the perimeter $s_1+s_2+s_3$ to this average width is $\pi$? Does this generalize to any convex polygon (or polytope)?
-
-REPLY [14 votes]: The most basic statement along these lines is that the average width of any line segment is equal to $2L/\pi$, where $L$ is the length of the segment. This is highly related to Buffon's needle problem, where the length of the needle is the same as the width of the strips. It is also related to the fact that the average value of $|x|$ on the unit circle is equal to $2/\pi$.
-Now consider a convex polygon in the plane. No matter how you turn the polygon, the sum of the horizontal widths of the edges is always equal to twice the horizontal width of the polygon.
Specifically, the sum of the widths of the edges along the bottom is equal to the width of the polygon, and the same is true of the widths of the edges along the top. Thus, the average width of any convex polygon is equal to half the sum of the average widths of the side lengths: -$$ -\text{avg. width} \;=\; \frac{1}{2}\left(\frac{2s_1}{\pi} + \cdots + \frac{2s_n}{\pi}\right) \;=\; \frac{s_1+\cdots+s_n}{\pi}. -$$ -There is a similar formula in three dimensions involving area. Specifically, suppose we take a polyhedron and rotate it in all possible ways. For each rotation, we measure the area of the projection of the polyhedron onto the $xy$-plane. Then -$$ -\text{avg. $xy$-area} \;=\; \frac{A_1+\cdots+A_n}{4} -$$ -where $A_1,\ldots,A_n$ are the areas of the faces. Thus, if you take a unit cube outside on a sunny day and rotate it at random, the average area of its shadow will be $(1+1+1+1+1+1)/4 = 3/2$. Again, the factor of $4$ comes from the fact that a polyhedron has top faces and bottom faces, and the average $xy$-area of a flat object is $A/2$. The latter follows from the fact that the average value of $|z|$ on the unit sphere is $1/2$.<|endoftext|> -TITLE: Confused about Cantor function and measure of Cantor set -QUESTION [6 upvotes]: OK, so I know that the Lebesgue measure of the ternary Cantor set is $0$. -However, in class the prof briefly mentioned that if we build a Lebesgue-Stieltjes measure $\mu$ out of the Cantor function, then $\mu(C) = 1$. -I don't understand why this is true, can someone please explain? - -REPLY [12 votes]: The point here is that there are two different measures on the Cantor set $C$: - -Viewing the Cantor set as a subset of $\mathbb{R}$, there is the Lebesgue measure $m_1$ that gives the Cantor set measure 0. -Viewing the Cantor set as the set $2^{\mathbb{N}}$ of all infinite binary sequences, the "fair coin" measure $m_2$ is defined so that for each $n \in \mathbb{N}$ and $i \in \{0,1\}$, the set $\{ f \in 2^{\mathbb{N}} : f(n) = i\}$ has measure $1/2$. This measure is also called the Hausdorff measure of dimension $\log(2)/\log(3)$, where $\log(2)/\log(3)$ is the Hausdorff dimension of the Cantor set $C$. - -The Cantor function $g(x)$ is defined such that $g(x) = m_2([0,x) \cap C)$. This is just the cumulative distribution function of $m_2$, but now we again view $C$ as a subset of the real line.<|endoftext|> -TITLE: Question on essential prime implicants -QUESTION [7 upvotes]: I am having some trouble understand essential prime implicants. So if a minterm is not covered by another overlapping rectangle, then that is an EPI. However, if we make a K-map for $f(x,y,z)=xy+xz'+y'z$, we have minterms m4, m6, and m7 not covered by overlapping rectangles. If what I did is correct, then there should be a total of only 2 rectangles in the table-one horizontal one that completely encapsulated the 2nd row, and a vertical rectangle for the 2nd column. And so it is a rule that EPI must appear in the minimal sum of products, in which case I'd have a lot of terms, but in fact I only see that it simplifies to $y'z+x$. What am I missing here? - -REPLY [16 votes]: Prime implicant of $f$ is an implicant that is minimal - that is, if the removal of any literal from product term results in a non-implicant for $f$ . -Essential prime implicant is an prime implicant that cover an output of the function that no combination of other prime implicants is able to cover. 
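-To make $m_2$ concrete, here is a small simulation (my own sketch, in plain Python; depth and trial counts are arbitrary choices): sample "fair coin" binary sequences, map them to points of $C$ via ternary digits $0$ and $2$, and estimate $m_2([0,x))$ by the fraction of samples that land below $x$.
-
-    import random
-
-    def sample_cantor_point(depth=30):
-        # fair-coin binary digits -> ternary expansion using digits 0 and 2
-        return sum(2 * random.randint(0, 1) * 3.0 ** (-k) for k in range(1, depth + 1))
-
-    def estimate_m2(x, trials=100000):
-        return sum(sample_cantor_point() < x for _ in range(trials)) / trials
-
-    print(estimate_m2(1 / 3))   # about 0.5: the left third of C carries measure 1/2
-    print(estimate_m2(1 / 9))   # about 0.25
-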
-Let's observe these two pictures below.The first picture represents all prime implicants while the second picture represents only essential prime implicants: - -In your specific case (picture below) both of the prime implicants, $x$ and $y'z$ , are also essential prime implicants .<|endoftext|> -TITLE: Is there a slick way to show that finite projective planes of $7$ points are unique up to isomorphism? -QUESTION [9 upvotes]: I was reading about the Fano plane, the smallest possible projective plane. After playing around with it, it seems that any projective plane of 7 points will be isomorphic to the Fano plane. -However, I've always been troubled with showing isomorphisms between sets of points and lines, because it seems like any function between two sets is really just an assignment of points to other points that preserves lines, but there is no nice fixed function that maps elements to others based on some hard and fast rule, like say $x\mapsto x^2$ or $(x,y)\mapsto x-y$, etc. -Is there an elegant way to show that any projective plane of 7 points is necessarily isomorphic to the Fano plane? I could only think of exhausting all permutations of points and lines and saying "Look, if these points $A$, $B$, $C$ are on a line $l$ here, then $f(A), f(B), f(C)$ are on a line $f(l)$ here." But this seems like it's brute forcing the matter, and not very efficient at all, considering that there are many ways to connect the points. What is the best way to go about something like this? Thanks. - -REPLY [2 votes]: Let $P$ be a projective plane on $7$ points. Fix an ordered $4$-tuple of distinct points $(p_1,p_2,p_3,p_4)$ in $P$ such that no line is incident with more than two of them. If $\pi=\{\{u,v\},\{w,t\}\}$ is a partition of $\{1,2,3,4\}$ in two parts of size $2$, let $q_\pi$ be the point of intersection of the line joining $p_u$ and $p_v$ with the line joining $p_w$ and $p_t$. -This provides a labeling $p_1$, $p_2$, $p_3$, $p_4$, $q_{\{\{1,2\},\{3,4\}\}}$, $q_{\{\{1,3\},\{2,4\}\}}$, $q_{\{\{1,4\},\{2,3\}\}}$ of the $7$ points. -Now show from the labeling alone you can reconstruct the lines. -This proves uniqueness. -NB: Notice that there are the $4$-element set $\{p_1,p_2,p_3,p_4\}$ is determined by its complement, which is a line. It follows that there are $7$ possible sets, one for each line, and each such set can be turned into an ordered $4$-tuple in $4!$ ways. The above reasoning shows then that the automorphism group of $P$ has order $4!\cdot 7=168$.<|endoftext|> -TITLE: Field Extension -QUESTION [13 upvotes]: Let $F$ be an extension field of $K$. let $L$ and $M$ be intermediate fields, with both finite algebraic extensions of $K$. - Suppose {$a_1,...,a_n$} is a basis for $L$ over $K$ and {$b_1, ...,b_m$} is a basis for $M$ over $K$. Show that {$a_ib_j$} is a spanning set for the field $LM$ ($LM$ is the smallest field between $K$ and $F$ containing both $L$ and $M$) as a vector space over $K$. -What I have done so far is this: -Let $x$ $\in$ $L$. Then $x$ can be written as a linear combination of $a_i$. Also $y$ $\in$ $M$ implies that $y$ can be written as a linear combination of $b_j$. This is where I'm stuck. - -REPLY [11 votes]: Consider the field $L(b_1,\ldots,b_m)=L[b_1,\ldots,b_m]$ (the equality because we are dealing with finite algebraic extensions). Suppose you can prove that it is equal to $LM$. 
-Then every element of $LM$ can be written as an $L$-linear combination of $b_1,\ldots,b_m$ (there may be a bit of work to be done here if it is not clear; certainly, you can write it as an $L$-linear combination of products of powers of the $b_i$, but since each of those lies in $M$, you can write them as $K$-linear combinations of the $b_i$; take it from there). -And every coefficient in that linear combination can be written as a $K$-linear combination of $a_1,\ldots,a_n$. See where that leads you (assuming you can prove $L[b_1,\ldots,b_m]=LM$, of course). -Corrected. In comments you ask about showing that if $[LM:K]=[L:K][M:K]$, then $L\cap M=K$. I messed up my first attempt in a rather silly way (feel free to look at the edit history to see the screw-up!). Sorry about that. -Again, let $E=L\cap M$ for simplicity. Then: -\begin{align*} -[L:K][M:K] = [LM:K] &= [LM:E][E:K]\\ -&\leq [L:E][M:E][E:K]\\ -&= [L:E][M:K]. -\end{align*} -Can you take it from here? - -REPLY [2 votes]: Comme l'a signalé Arturo Magidin, $LM$ est en fait l'anneau engendré par $L$ et $b_1, \ldots, b_m$. Pour montrer que tout élément de cet anneau est combinaison linéaire des $b_i$, il suffit de voir que $A = \sum Lb_i$ est un anneau. On se ramène donc à prouver que chaque produit $b_i b_j$ (et 1) est dans $A$. Mais en fait, $b_i b_j \in \sum Kb_i$ (et 1 aussi) par l'hypothèse que les $b_i$ forment une base de $M$ sur $K$.<|endoftext|> -TITLE: Fibonacci identity: $f_{n-1}f_{n+1} - f_{n}^2 = (-1)^n$ -QUESTION [9 upvotes]: Consider this Fibonacci equation: -$$f_{n+1}^2 - f_nf_{n+2}$$ -The problem asked to write a program with given n, output the the result of this equation. I could use the formula -$$f_n = \frac{(1+\sqrt{5})^n - ( 1 - \sqrt{5} )^n}{2^n\sqrt{5}}$$ -However, from mathworld, I found this formula Cassini's identity -$$f_{n-1}f_{n+1} - f_{n}^2 = (-1)^n$$ -So, I decided to play around with the equation above, and I have: -$$ \text{Let } x = n + 1 $$ -$$ \text{then the equation above becomes } f_x^2 - f_{x-1}f_{x+1} $$ -$$ \Rightarrow -( f_{x-1}f_{x+1} - f_x^2 ) = -1(-1)^x = (-1)^{x+1} = (-1)^{n+1+1} = (-1)^{n+2}$$ -So this equation either is 1 or -1. Am I in the right track? -Thanks, -Chan - -REPLY [3 votes]: Let us try to find gcd of $F_n$ and $F_{n+1}$ using Extended Euclidean algorithm. I will write the steps algorithm in a table; this table method was also explained in some Bill Dubuque's posts. -$\begin{array}{|l||c|c|} - \hline - F_{n+1} & 1 & 0 \\\hline - F_{n} & 0 & 1 \\\hline - F_{n-1} & 1 & -1 \\\hline - F_{n-2} & -1 & 2 \\\hline - F_{n-3} & 2 & -3 \\\hline - \vdots & \vdots & \vdots \\\hline - F_{n-k} & (-1)^{k+1}F_k & (-1)^kF_{k+1} \\\hline -\end{array} $ -After a few steps we can guess the $k$-th line, which gives us the following formula: -$F_{n-k}=(-1)^{k+1}F_kF_{n+1}+(-1)^kF_{k+1}F_n=(-1)^{k+1}(F_kF_{n+1}-F_{k+1}F_n)$. -For $k=n-1$ we get Cassini's identity $F_1=(-1)^n(F_{n-1}F_{n+1}-F_n^2)$. -So the only thing we have to do is to verify the above formula, which can be done easily by induction on $k$. 
Inductive step: We know that:
-$F_{n-k}=(-1)^{k+1}(F_kF_{n+1}-F_{k+1}F_n)=-(-1)^{k}(F_kF_{n+1}-F_{k+1}F_n)$
-$F_{n-(k-1)}=(-1)^{k}(F_{k-1}F_{n+1}-F_{k}F_n)$
-Since $F_{n-(k+1)}=F_{n-(k-1)}-F_{n-k}$, we get
-$F_{n-(k+1)}=(-1)^{k}[(F_{k-1}+F_k)F_{n+1}-(F_k+F_{k+1})F_n]=(-1)^{k+2}(F_{k+1}F_{n+1}-F_{k+2}F_n)$
-which completes the inductive step.<|endoftext|>
-TITLE: A number theoretic function to characterize midpoint free subsets
-QUESTION [5 upvotes]: Recently, I was going over a curious recreational math paper titled "On the Diagonal Queens Domination Problem". The main result of the paper is establishing the minimum number $diag(n)$ of queens that need to be kept on the diagonal of an $n \times n$ chessboard so that they attack all the squares on the board.
-The authors make use of some clever lemma and make an observation that $diag(n) = n + 1 - r_3(\lceil \frac{n}{2} \rceil)$. Here, $r_3(.)$ denotes the minimum value $k$ such that any $k$-element subset of $[n]$ contains a $3$-term arithmetic progression (a la Roth's theorem statement). In order to prove this statement, the authors make use of a number theoretic statement which they do not prove. I also cannot prove it by myself because of my limited number theory background (and maybe because I am dense). The statement says the following:
-A collection of $k$ even numbers or of $k$ odd numbers (the numbers in the collection have the same parity, which keeps their sums even) is a midpoint-free, even-sum subset of $[n]$ $\mathit{if\ and\ only\ if}$ the collection of halves of the numbers (rounded down if not an integer) is a midpoint-free subset of $\lceil \frac{n}{2} \rceil$.
-A few definitions to clarify the question. A subset $X$ of $[n]$ is midpoint free if for any two elements $i,j \in X$, $(i+j)/2 \not\in X$. A subset is even sum if $i+j$ is even for any $2$ elements $i,j \in X$.
-I hope the question is clear. In case it's not, I apologize and would request you to kindly let me know what's not clear.
-
-REPLY [3 votes]: I don't know if I understand the problem correctly, since some of the information seems extraneous and I think there is one mistake, so I will first reformulate the question before I answer it. Please let me know if I misunderstood.
-Instead of "midpoint-free subset of $\lceil \frac{n}{2} \rceil$", I think you meant "midpoint-free subset of $[\lfloor \frac{n}{2} \rfloor]$". Firstly, the brackets for turning the number into a set were missing, but more importantly, we need the floor and not the ceiling, since otherwise the theorem would fail simply because the set of halves could contain $\lceil \frac{n}{2} \rceil$ and this would prevent the original set from being a subset of $[n]$.
-With this out of the way, it seems that neither $k$ nor $n$ nor $[n]$ are actually relevant to the problem, so I will drop all references to them. Also, the property "even-sum" is guaranteed by the collection containing either only even numbers or only odd numbers, so it's a bit confusing to state it as part of the equivalence to be proved.
-With these clarifications, I would restate the theorem as follows:
-A set $S$ containing either only even numbers or only odd numbers is midpoint-free iff the set $H$ of all halves of its elements (rounded down if not integer) is midpoint-free.
-This seems to be a purely "local" property that can be proved by just looking at every pair of numbers individually:
-First, let the numbers in $S$ be even, and take two numbers $2m$ and $2n$.
Then the halves of these numbers are $m$ and $n$, respectively, and $(2m + 2n) / 2 = m + n$, so the theorem is proved if we can show that $m + n \notin S \Leftrightarrow (m + n) / 2 \notin H$. Now if $m + n$ is odd, this is vacuously true since the odd number $m + n$ cannot be in the set $S$ of even numbers and the non-integer $(m + n) / 2$ cannot be in the set $H$ of integers. If $m + n$ is even, the "$\Leftarrow$" direction is obvious, and the "$\Rightarrow$" direction follows because $S$ contains only even integers, so the other number of which $(m + n)/2$ is a (rounded) half, $m + n + 1$, is not in $S$ either. This proves the theorem for $S$ containing only even numbers.
-Now let the numbers in $S$ be odd, and take two numbers $2m+1$ and $2n+1$. Then the halves of these numbers, rounded down, are $m$ and $n$, respectively, and $(2m+1 + 2n+1) / 2 = m + n + 1$, so the theorem is proved if we can show that $m + n + 1\notin S \Leftrightarrow (m + n) / 2 \notin H$. Again, if $m + n$ is odd, this is vacuously true since the even number $m + n + 1$ cannot be in the set $S$ of odd numbers and the non-integer $(m + n) / 2$ cannot be in the set $H$ of integers. If $m + n$ is even, the "$\Leftarrow$" direction is again obvious, and the "$\Rightarrow$" direction follows because $S$ contains only odd integers, so the other number of which $(m + n)/2$ is a (rounded) half, $m + n$, is not in $S$ either. This proves the theorem for $S$ containing only odd numbers.<|endoftext|>
-TITLE: Integration by parts: $\int e^{ax}\cos(bx)\,dx$
-QUESTION [12 upvotes]: I need to evaluate the following integral and then check my answer by taking the derivative:
-$$\int e^{ax}\cos(bx)\,dx$$
-where $a$ is any real number and $b$ is any positive real number.
-
-I know that you set $u=\cos(bx)$ and $dv=e^{ax} dx$,
-and the second time you need to integrate again you set $u=\sin(bx)$ and $dv=e^{ax}dx$ again.
-It eventually simplifies down to
-$$\int e^{ax}\cos(bx)dx = \frac{1}{a}e^{ax}\cos(bx) + \frac{b}{a}\left(\frac{1}{a}e^{ax}\sin(bx) - \frac{b}{a}\int e^{ax}\cos(bx)\,dx\right).$$
-Now I know to move the integral on the left side to the right side so that I can just divide by the constant to solve.
-Here is my problem:
-I know I need to solve the right side to be:
-$$\frac{e^{ax}\left(a\cos(bx) + b\sin(bx)\right)}{a^2+b^2} + C.$$
-To divide by the constant, I multiplied everything on the right side by
-$$\frac{a^2}{b^2+1},$$
-but this leads me to get $b^2 + 1$ on the bottom instead of $a^2 + b^2$.
-I will show what I am doing in detail:
-After setting the initial integral equal to:
-$$\frac{1}{a}e^{ax}\cos(bx) + \frac{b}{a}\left(\frac{1}{a}e^{ax}\sin(bx) - \frac{b}{a}\int e^{ax}\cos(bx)\,dx\right) + C$$
-I simplify:
-$$\int e^{ax}\cos(bx)\,dx = \frac{a^2}{b^2+1}\left(\frac{1}{a}e^{ax}\cos(bx) + \frac{b}{a}\left(\frac{1}{a}e^{ax}\sin(bx)\right)\right) + C$$
-If this is already wrong, can someone point me in the right direction?
-If I have not gone wrong yet, I can edit to show the rest of my work.
-
-REPLY [5 votes]: Another solution to your original question uses complex numbers.
-$$\begin{align}
-I&=\displaystyle\int e^{ax}\cos {bx}\,dx \\
-&=\Re\left(\displaystyle\int e^{ax}(\cos {bx}+i\sin {bx})\,dx\right)\\
-&=\Re\left(\displaystyle\int e^{(a+ib)x}\,dx\right)\\
-&=\Re\left(\dfrac{e^{(a+ib)x}}{a+ib}\right)+C\\
-&=\Re\left(\dfrac{e^{ax}(\cos {bx}+i\sin {bx})}{a+ib}\right)+C\\
-&=\Re\left(\dfrac{e^{ax}}{a^2+b^2}(\cos {bx}+i\sin {bx})(a-ib)\right)+C\\
-\therefore I&=\dfrac{e^{ax}}{a^2+b^2}(a\cos {bx}+b\sin {bx})+C
-\end{align}$$<|endoftext|>
-TITLE: In what situations is the integral equal to infinity?
-QUESTION [5 upvotes]: In the following integral, p(x) and q(x) are probability distributions. Can you help me to determine in what situations this integral is equal to infinity? For example, I think that such a situation is when only p(x) has an infinite peak.
-$$\int_{-\infty}^{\infty}\{\log\frac{p(x)}{q(x)}\}p(x)dx$$
-Thank you very much!
-
-REPLY [4 votes]: As Dinesh says, if the support of $\mathbf{P}$ includes points not in the support of $\mathbf{Q}$ then the Kullback-Leibler divergence will be infinite (or undefined). However, this is not the only way this can happen. For a simple example I'll use a discrete distribution, so your integral becomes the sum $$\sum_{n\in\mathbb{N}}\mathbf{P}(n)\log\left(\frac{\mathbf{P}(n)}{\mathbf{Q}(n)}\right)$$
-then define:
-$$\mathbf{P}(n) = 2^{-n}$$
-and
-$$\mathbf{Q}(n)=\begin{cases}
-2^{-n-2^n} & n\geq 2\\
-1-\sum_{n=2}^\infty 2^{-n-2^n} & n=1\end{cases}$$
-then for $n\geq 2$, $\frac{\mathbf{P}(n)}{\mathbf{Q}(n)}=2^{2^n}$ so, taking logs base $2$, $\mathbf{P}(n)\log\left(\frac{\mathbf{P}(n)}{\mathbf{Q}(n)}\right)=1$. So:
-$$D_{KL}(\mathbf{P}||\mathbf{Q})=\sum_{n=1}^\infty \mathbf{P}(n)\log\left(\frac{\mathbf{P}(n)}{\mathbf{Q}(n)}\right)=\infty$$
-I don't know if there are any nice necessary and sufficient conditions. But the best sufficient condition I can come up with is, in the discrete case: if Shannon's entropy of $\mathbf{P}$, $\mathrm{H}(\mathbf{P})$, is finite and $\log(\mathbf{Q}(x)),\frac{\mathrm{d}\mathbf{P}}{\mathrm{d}\mathbf{Q}}\in\mathscr{L}^2(\mathbf{Q})$. In the case of continuous distributions with pdfs it's just a matter of replacing all the pmfs with pdfs. The proof is identical in both cases:
-\begin{align}
-D_{KL}(\mathbf{P}||\mathbf{Q}) & =E_\mathbf{P}\left[\log\left(\frac{\mathrm{d}\mathbf{P}}{\mathrm{d}\mathbf{Q}}\right)\right]\\
-& = E_\mathbf{P}(\log\mathbf{P}(x))-E_\mathbf{P}(\log\mathbf{Q}(x))\\
-& = -\mathrm{H}(\mathbf{P})-E_\mathbf{Q}\left[\frac{\mathrm{d}\mathbf{P}}{\mathrm{d}\mathbf{Q}}\log(\mathbf{Q}(x))\right]
-\end{align}
-In the final line the first term is finite by assumption and the second term is finite by the Schwarz inequality.<|endoftext|>
-TITLE: No prime number between number and square of number
-QUESTION [5 upvotes]: Find the values of $x \in \mathbb{Z}$ such that there is no prime number between $x$ and $x^2$. Is there any such number?
-
-REPLY [7 votes]: Given the current wording of the question, you can set $x$ to any integer in $\{-1, 0, 1\}$ and there will be no prime between $x$ and $x^2$. For any other integer $x$, there will always be a prime between $x$ and $x^2$ (as noted in the comments to your question).<|endoftext|>
-TITLE: Topology and axiom of choice
-QUESTION [16 upvotes]: It is an easy exercise to show that if $X$ is first-countable then for every point $x$ and every subset $A$ we have $x \in \text{cl}\,A$ iff there exists a sequence $(x_n)_n$ in $A$ that converges to $x$.
-Well, this uses the axiom of choice to create the sequence (I think). What would happen if we don't have that?
(I know that in topology it is much better to have AC, but I want to figure out what happens.)
-
-REPLY [7 votes]: Note that depending on the way you've defined the topology and how you want to use it, you may need choice to get the countable base at each point in your space. Thus even if you have dependent (countable) choice, there may be subtleties.
-For example, suppose we work in ZF+DC+AD. Then $\omega_1$ with the usual topology is first-countable and we can even exhibit a countable local base at each point $\alpha\in\omega_1$, namely the collection of half-open intervals
-$\{(\beta,\alpha] : \beta<\alpha\}$,
-which we can even order in order-type $\omega$. However, we cannot uniformly order all the bases in order-type $\omega$. That is, there is no function $f:\omega_1\times\omega\to P(\omega_1)$ such that $\{f(\alpha,n) : n\in\omega\}$ is a local base at $\alpha$. (Recall that AD implies that there is no sequence $\{C_\lambda\subseteq\lambda\}_{\lambda\in\omega_1}$ such that $C_\lambda$ is a cofinal subset of $\lambda$ with order-type $\omega$.)<|endoftext|>
-TITLE: Complex differentiability
-QUESTION [8 upvotes]: I know that:
-1) A function $f:\mathbb{R}^2\to \mathbb{R}^2$, when differentiable at a point, has a $2\times 2$ matrix as a derivative, which is a linear transformation from $\mathbb{R}^2\to \mathbb{R}^2$ best approximating the function linearly in some neighbourhood.
-2) There is a ring homomorphism $\mathbb{C} \to Mat_{2\times 2}(\mathbb{R})$ given by $a+ib \longmapsto \left[\begin{array}{cc}a & -b\\b & a \end{array}\right]$
-3) For a function $f:\mathbb{C} \to \mathbb{C}$, I can define complex differentiability via the best $\mathbb{C}$-linear approximation of the function locally at a point, i.e., $f'(z_0):h \mapsto f'(z_0)h$
-Now, I want to combine these three observations, so that the Cauchy-Riemann equations fall out by considering a complex differentiable function as a function from $\mathbb{R}^2\to \mathbb{R}^2$ and connect the Jacobian with the $\mathbb{C}$-linear transformation via the homomorphism.
-
-I am having trouble even formulating a proposition that I can prove. Do I define something called 'Complexifying an $\mathbb{R}^2$-operator'? Any help will be appreciated.
-
-The upshot will be that I can then 'shift' the proofs of some of the basic results of holomorphic functions (such as the fact that if the partial derivatives of the co-ordinate functions exist and are continuous then the function will be holomorphic, etc) to that of multivariable calculus.
-
-REPLY [7 votes]: Lemma. Let $T:\mathbb{C}\to\mathbb{C}$ be $\mathbb{R}$-linear (considering $\mathbb{C}$ as an $\mathbb{R}$-vector space just by restricting the scalar multiplication). Then the following are equivalent:
-
-(i) T is $\mathbb{C}$-linear
-(ii) There exists $\lambda\in\mathbb{C}$ such that $T(z)=\lambda z$ for all $z\in\mathbb{C}$ (i.e. T is multiplication by $\lambda$).
-(iii) The matrix of $T$ w.r.t. the standard $\mathbb{R}^2$-basis is $\left[\begin{array}{cc}a & -b\\b & a \end{array}\right]$, where $a+bi=\lambda$.
-
-Now suppose $f:\mathbb{C}\to\mathbb{C}$ is complex differentiable at $z_0$, i.e. $T:\mathbb{C}\to\mathbb{C}$, $z\mapsto f'(z_0)z$ is the $\mathbb{C}$-linear derivative. Then $T$ is also the best $\mathbb{R}$-linear approximation of $f:\mathbb{R}^2\to \mathbb{R}^2$:
-$\frac{|f(z)-f(z_0)-T(z-z_0)|}{|z-z_0|}=\left|\frac{f(z)-f(z_0)}{z-z_0}-f'(z_0)\right|\to 0$.
-Hence the matrix of $T$ is the Jacobian of $f$.
Recalling that the matrix entries of the Jacobian are the partial derivatives, from the Lemma we get the Cauchy-Riemann equations.
-Conversely, suppose $f$ is totally differentiable as a map $f:\mathbb{R}^2\to \mathbb{R}^2$ and the Cauchy-Riemann equations hold. Then there is a unique $\mathbb{R}$-linear map $T:\mathbb{C}\to\mathbb{C}$ which is the best approximation, i.e.
-$\frac{|f(z)-f(z_0)-T(z-z_0)|}{|z-z_0|}\to 0$.
-We know its matrix entries are the partial derivatives. Since the Cauchy-Riemann equations hold, the Lemma says that $T$ is $\mathbb{C}$-linear, say $T(z)=\lambda z$. Note that
-$\left|\frac{f(z)-f(z_0)}{z-z_0}-\lambda\right|=\frac{\left|f(z)-f(z_0)-\lambda(z-z_0)\right|}{|z-z_0|}=\frac{|f(z)-f(z_0)-T(z-z_0)|}{|z-z_0|}\to 0.$
-Hence $f$ is complex differentiable at $z_0$, with derivative $\lambda$.<|endoftext|>
-TITLE: What is the kernel of the summation map from the direct sum to the sum?
-QUESTION [8 upvotes]: Let $R$ be a ring, and let $I_1,\ldots,I_n$ be ideals in $R$ (or submodules of some $R$-module). Consider the sequence
-$$
-\bigoplus_{1\leq j < k\leq n} I_j\cap I_k\quad\xrightarrow{f}\quad\bigoplus_{l=1}^n I_l\quad\xrightarrow{g}\quad\sum_{k=1}^n I_k,
-$$
-where $g$ is given by addition, and $f$ maps $x\in I_j\cap I_k$ to $x\in I_j$ and to $-x\in I_k$ (and to zero in all other components).
-Clearly, $g$ is surjective and the composition $g\circ f$ vanishes.
-Question: Is the above sequence exact in the middle?
-(This seems to be easy for $n=2$.)
-(Concerning the title: I am aware of the fact that $f$ won't be injective in general.)
-
-REPLY [4 votes]: You can convert chandok's example to ideals in a commutative ring as well. Take $R=\mathbb{Z}[x,y,z]/(3,x^2,xy,xz,y^2,yz,z^2)$, a 4-dimensional algebra over $\mathbb{Z}/3\mathbb{Z}$. It has ideals of (vector space) dimension 1 generated by $x-y$, $y-z$, and $z-x$. There is clearly a non-zero kernel of $g$ containing the triple $(x-y,\ y-z,\ z-x)$. However, the domain of $f$ is 0, since all pairwise intersections of the ideals are 0.
-The obvious translation of chandok's example to one over the ring $\mathbb{Z}_3$ fails, since the ideals generated by the $x_i$ get bigger. This ring fixes that, since it has a bunch of pointless ring elements that leave most abelian subgroups as ideals.<|endoftext|>
-TITLE: Is there an easy proof for ${\aleph_\omega} ^ {\aleph_1} = {2}^{\aleph_1}\cdot{\aleph_\omega}^{\aleph_0}$?
-QUESTION [12 upvotes]: The question contains 2 stages:
-
-Prove that ${\aleph_n} ^ {\aleph_1} = {2}^{\aleph_1}\cdot\aleph_n$
-
-This one is pretty clear by induction and by applying Hausdorff's formula.
-
-Prove ${\aleph_\omega} ^ {\aleph_1} = {2}^{\aleph_1}\cdot{\aleph_\omega}^{\aleph_0}$
-
-Is there an easy proof for the second one?
-Thanks in advance.
-
-REPLY [11 votes]: As you mention, the first equation is a consequence of Hausdorff's formula and induction.
-For the second: Clearly the right hand side is at most the left hand side. Now: Either $2^{\aleph_1}\ge\aleph_\omega$, in which case in fact $2^{\aleph_1}\ge{\aleph_\omega}^{\aleph_1}$, and we are done, or $2^{\aleph_1}<\aleph_\omega$.
-I claim that in this case we have ${\aleph_\omega}^{\aleph_1}={\aleph_\omega}^{\aleph_0}$. Once we prove this, we are done.
-Note that $\aleph_\omega={\rm sup}_n\aleph_n\le\prod_n\aleph_n$, so
-$$ {\aleph_\omega}^{\aleph_1}\le\left(\prod_n\aleph_n\right)^{\aleph_1}=\prod_n{\aleph_n}^{\aleph_1}. $$
-Now use part 1 to conclude that ${\aleph_n}^{\aleph_1}<\aleph_\omega$ for all $n$, since we are assuming that $2^{\aleph_1}<\aleph_\omega$.
This shows that the last product is at most $\prod_n \aleph_\omega={\aleph_\omega}^{\aleph_0}$.
-Hence ${\aleph_\omega}^{\aleph_1}\le {\aleph_\omega}^{\aleph_0}$. But the other inequality is obvious, and we are done.<|endoftext|>
-TITLE: Is there a proof that $\pi$ is an irrational number?
-QUESTION [16 upvotes]: Most math texts claim that $\pi$ is an irrational number. However, I'm having a little bit of trouble understanding that.
-Since nobody has calculated all of the digits of $\pi$, how can we know that either:
-
-one of the digits repeats (as in $\frac{10}{3}$)
-the number eventually terminates
-
-
-Note: Please be very descriptive in your answers... I don't have anything beyond high school math.
-
-REPLY [16 votes]: If you know a bit of calculus and have come across induction, then here's an outline of a standard exercise (see Burkill, A First Course in Mathematical Analysis) to prove $\pi$ irrational.
-Let
-$$I_n(\alpha)=\int_{-1}^1 (1-x^2)^n \cos \alpha x \textrm{ d}x$$
-then integrate by parts to show that for $n \ge 2$
-$$\alpha^2 I_n = 2n(2n-1)I_{n-1}-4n(n-1)I_{n-2}.$$
-Use induction to show that for each positive integer $n$ we have
-$$\alpha^{2n+1}I_n(\alpha)=n!(P(\alpha) \sin \alpha + Q(\alpha) \cos \alpha),$$
-where $P(\alpha)$ and $Q(\alpha)$ are polynomials of degree less than $2n+1$ in $\alpha$ with integral coefficients.
-Show that if $\pi/2 = b/a,$ where $a$ and $b$ are integers, then
-$$\frac{b^{2n+1}I_n(\pi/2)}{n!} \quad (1)$$
-would be an integer.
-Note that
-$$I_n(\pi/2) < \int_{-1}^1 (1-x^2)^n \textrm{ d}x < 2 \textrm{ and }
-\frac{b^{2n+1}}{n!} \rightarrow 0 \textrm{ as } n \rightarrow \infty$$
-which results in a contradiction, since $(1)$ is supposed to be an integer but we can show that it is as small as one desires.
-This was the first proof of the irrationality of $\pi$ that I came across, and I think it is very accessible for those willing to give it a go.<|endoftext|>
-TITLE: A "geometric" infinite sum of matrices
-QUESTION [11 upvotes]: The sum $$ I + A + A^2 + A^3 + \cdots $$ equals $$(I-A)^{-1}$$ under the assumption $\rho(A)<1$, which is necessary to make the sum converge. My question: what does the sum
-$$ I + A^T A + (A^2)^T A^2 + (A^3)^T A^3 + \cdots$$ equal under the same assumption? Is there a similarly neat expression?
-It is true that this last sum converges under the assumption $\rho(A)<1$. The "obvious" guess $(I-A^T A)^{-1}$ for the sum is not true because $ (A^2)^T A^2 \neq (A^T A)^2$; in fact, this guess does not even make sense because $\rho(A)<1$ does not rule out $\rho(A^T A)=1$.
-
-REPLY [3 votes]: Let $S$ be the solution to the discrete-time Lyapunov equation $A^TSA=S-I$. Note that $S$ exists and is unique because
-$$
-A^TSA=S-I\ \Leftrightarrow\ (I - A^T\otimes A^T)\textrm{vec}(S)=\textrm{vec}(I)
-$$
-and $I - A^T\otimes A^T$ is invertible owing to the fact that $\rho(A^T\otimes A^T)=\rho(A)^2<1$.
-It follows that if the infinite series $I + A^T A + (A^2)^T A^2 + (A^3)^T A^3 + \cdots$ converges at all, its limit must be $S$.
-But does it really converge? Note that for some matrix $A$ such as $\frac1{\sqrt{2}}\begin{pmatrix}1&1\\ 0&1\end{pmatrix}$ we have $\rho(A)<1$ but also $\max\left\{\|A\|,\|A^T\|\right\}>1$ for every submultiplicative matrix norm. So, we cannot argue that $\|A\|\|A^T\|<1$ and pretend that $\|\sum_{j=0}^\infty(A^j)^TA^j\|\le\sum_{j=0}^\infty(\|A\|\|A^T\|)^j<\infty$ (the first inequality is always valid but the second one is not).
The usual trick for proving the convergence of a Neumann series using a submultiplicative norm is not applicable here.
-Yet, we can still easily prove that the partial sums are convergent to $S$. Since $A^TSA=S-I$, one can prove by mathematical induction that
-$$
-S-\sum_{j=0}^{n-1}(A^j)^TA^j=(A^n)^TSA^n
-$$
-for every $n\ge1$. As $\rho(A)<1$, we have $A^n\rightarrow0$ and in turn $(A^n)^TSA^n\rightarrow0$ when $n$ approaches infinity. Hence the infinite series $\sum_{j=0}^\infty(A^j)^TA^j$ does converge to $S$.
-Alternatively, we can vectorise the $n$-th partial sum of the infinite series as follows:
-$$
-\operatorname{vec}\left(\sum_{j=0}^n(A^j)^T A^j\right)
-=\sum_{j=0}^n\left((A^j)^T \otimes (A^j)^T\right)\operatorname{vec}(I)
-=\sum_{j=0}^n\left(A^T\otimes A^T\right)^j\operatorname{vec}(I).\tag{$\ast$}
-$$
-Now we get a geometric sum on the RHS of $(\ast)$. This effectively resurrects the Neumann series argument. As $\rho\left(A^T\otimes A^T\right)=\rho(A)^2<1$, we have $\|A^T\otimes A^T\|<1$ for some submultiplicative norm. The RHS of $(\ast)$ thus converges when $n\to\infty$ and the LHS converges too.<|endoftext|>
-TITLE: Difference between logarithm of an expectation value and expectation value of a logarithm
-QUESTION [27 upvotes]: Assume I have an always positive random variable $X$, $X \in \mathbb{R}$, $X > 0$. I am now interested in the difference between the following two expectation values:

-$E \left[ \ln X \right]$
-$\ln E \left[ X \right]$

-Is one maybe always a lower/upper bound of the other?
-Many thanks in advance...

-REPLY [10 votes]: To add on Didier's answer, it is instructive to note that the inequality ${\rm E}(\ln X) \le \ln {\rm E}(X)$
-can be seen as a consequence of the AM-GM inequality combined with the strong law of large numbers, upon writing
-the AM-GM inequality
-$$
-\sqrt[n]{{X_1 \cdots X_n }} \le \frac{{X_1 + \cdots + X_n }}{n}
-$$
-as
-$$
-\exp \bigg(\frac{{\ln X_1 + \cdots + \ln X_n }}{n}\bigg) \le \frac{{X_1 + \cdots + X_n }}{n},
-$$
-and letting $n \to \infty$.
-EDIT: For completeness, let me note that ${\rm E}[\ln X]$ might be equal to $-\infty$. For example, if $X$ has density function
-$$
-f(x) = \frac{{\ln a}}{{x\ln ^2 x}},\;\;0 < x < \frac{1}{a},
-$$
-where $a>1$ (note that $\int f = 1$), then
-$$
-{\rm E}[\ln X] = \int_0^{1/a} {\frac{{\ln a}}{{x\ln x}}} \,{\rm d}x = -\infty,
-$$
-since the integrand is negative on $(0,1/a)$ and its integral diverges.<|endoftext|>
-TITLE: Why prove that multiplicative functions are a group with Dirichlet convolution?
-QUESTION [9 upvotes]: Everyone likes to prove that Dirichlet convolution is a group operation on the multiplicative arithmetic functions, but what consequence does this have?
-Does any important theorem use this fact?
-Can general group theory lead to results about these functions (or even better, about numbers) from this theorem?

-Furthermore, there are two ring structures on this set, the usual pointwise ring as well as the ring with convolution.
-I would like to extend the same question to these.

-REPLY [2 votes]: I saw a talk by John Thompson, not that I knew what it was about, looking at a copy of SL(2,Z) in Dirichlet series under convolution. This paper http://arxiv.org/abs/0803.1121 might be something...<|endoftext|>
-TITLE: Pushing forward sheaves and the result on sheaf cohomology
-QUESTION [5 upvotes]: Let $f:X \rightarrow Y$ be a continuous map of topological spaces and let $\cal{F}$ be a sheaf on $X$. Is there an obvious map $H^\ast(Y,f_\ast \mathcal{F} ) \rightarrow H^\ast (X,\cal{F})$?
- -REPLY [8 votes]: There is always a natural map $$\Gamma(Y, \mathcal{G}) \to \Gamma(X, f^{-1}(\mathcal{G}))$$ for any sheaf $\mathcal{G}$ on $Y$. However, these are both left-exact functors in the sheaf $\mathcal{G}$ (since $f^{-1}$ is an exact functor). By general (Tohoku) nonsense, there is an induced natural transformation of $\delta$-functors $H^*(Y, \mathcal{G}) \to H^*(X, f^{-1}(\mathcal{G}))$.
-Take $\mathcal{G} = f_*(\mathcal{F})$ now. We get a map
-$$H^*(Y, f_*(\mathcal{F})) \to H^*(X, f^{-1} f_*(\mathcal{F})).$$ However, the adjunction between push-forwards and inverse images produces a map $f^{-1} f_*(\mathcal{F}) \to \mathcal{F}$. This induces a morphism in cohomology $H^*(X, f^{-1} f_*(\mathcal{F})) \to H^*(X, \mathcal{F})$. Composing the two maps just described gives the map that you ask for.<|endoftext|>
-TITLE: Determining don't-care values in a Karnaugh Map
-QUESTION [6 upvotes]: I'm having a hard time understanding how to find the don't-care values in a Karnaugh map. What does it even mean? If I have a boolean function, say $f(a,b,c,d)=a'bc+abc'+bc'd+a'bc'd$, how would I determine don't-care values? What would I be looking for?

-REPLY [3 votes]: To solve the problem, try to write the function in a simplified form. Now draw a K-map for both the simplified function and the original function. Compare the two functions, and any $1$'s that are present in one and not in the other will be your don't cares.<|endoftext|>
-TITLE: Is there a compact subset of the irrationals with positive Lebesgue measure?
-QUESTION [12 upvotes]: Does there exist $K \subseteq \mathbb{R} \backslash \mathbb{Q}$ such that $K$ is compact, and has Lebesgue measure greater than $0$? As I have been trying to think of examples, I suspect that any subset of $\mathbb{R} \backslash \mathbb{Q}$ that is closed can be at most countable, since the closure of an uncountable subset of irrationals should contain some rationals. And, the Lebesgue measure of a countable set is $0$. If there are any examples of such a set, I would be very interested to know how it is constructed.

-REPLY [18 votes]: The answer is yes. Count the rationals in $[0,1]$ as $r_1,r_2,\ldots$, let $I_k$ be an open interval containing $r_k$ of length $3^{-k}$, and let $K=[0,1]\setminus\cup_k I_k$.
-This question is somewhat related to the question Perfect set without rationals, but there, measure did not come up. For example, the Cantor set-like construction given there by JDH could be made to have positive measure.<|endoftext|>
-TITLE: What does Martin's Maximum imply for $P(\mathbb{R})$?
-QUESTION [9 upvotes]: Prompted by this question: of course Gödel's constructibility axiom implies that $P(S)$ is minimal for any set $S$ and so handily answers the question of the size of the power set of the continuum in $L$. Martin's Maximum is the only other principle I know of that has implications for the size of the continuum (caveat: most of my knowledge comes from Kanamori's The Higher Infinite, a couple of other set theory books and the occasional dip into Wikipedia), and it's not clear to me what its implications are for power set operations on higher cardinals, in particular $2^{\mathbb{R}} (=2^{\aleph_2})$ - my impression is that it implies that $2^{\aleph_1} = \aleph_2$ but I'm not 100% sure on that front either. Can anyone point to good information on the implications of MM for other cardinalities than just $\mathfrak{c}$?
- -REPLY [10 votes]: About the size of $2^{\mathfrak c}$: MM is preserved by $\omega_2$-directed closed forcing, so we can change the size of $2^{\aleph_2}$ by forcing directly over a model of MM and preserving MM. This is a result of Paul Larson. (To contrast with Joel's answer: He shows how we can ensure models of MM with different sizes for $2^{\aleph_2}$ by manipulating the exponential function above a supercompact and then forcing MM. Paul's argument allows us to manipulate the relevant cardinal regardless of how the model of MM is originally obtained.) -As for implications beyond the continuum: MM implies PFA, and PFA implies SCH, the singular cardinal hypothesis (a very nice recent result of Matteo Viale), so it has a direct influence on (singular) cardinals larger than $\aleph_2$. For example, this implies that $2^\kappa=\kappa^+$ holds in models of MM for a proper class of cardinals $\kappa$. -(The same is true in ZFC above a strongly compact.) -PFA also implies the failure of $\square_\kappa$ at all $\kappa>\omega$. This was first shown by Todorcevic. -(Again, this also holds above strongly compact cardinals. This is all indirect evidence that the strength of PFA ought to be in the neighborhood of strong compactness, and that the method we know for establishing its consistency is essentially the only possible method. There are recent results of Matteo Viale and Christoph Weiss strengthening this connection, showing that the standard forcing argument requires supercompactness. On the other hand, there are also recent results of Itay Neeman exhibiting a different method of forcing PFA, although also from a supercompact and using properness.) -For a result that follows from MM and not from PFA: Magidor showed that MM implies that the principle "very weak square" fails at singulars of cofinality $\omega$. This shows that the singular cardinal combinatorics in models of MM is very interesting (and not yet well understood.)<|endoftext|> -TITLE: Prove that an odd integer $n>1$ is prime if and only if it is not expressible as a sum of three or more consecutive integers. -QUESTION [5 upvotes]: Prove that an odd integer $n>1$ is prime if and only if it is not expressible as a sum of three or more consecutive integers. -I can see how this works with various examples of the sum of three or more consecutive integers being prime, but I can't seem to prove it for all odd integers $n>1$. Any help would be great. - -REPLY [8 votes]: First of all, you can assume you're adding only positive numbers; otherwise the question isn't correct as written. -Note that the sum of the numbers from $1$ to $n$ is ${\displaystyle {n^2 + n \over 2}}$. So the sum of the numbers from $m+1$ to $n$ is ${\displaystyle {n^2 + n \over 2} - {m^2 + m \over 2} -= {n^2 - m^2 + n - m \over 2} = {(n - m)(n + m + 1) \over 2}}$. You want to know which odd numbers $k$ can be written in this form for $n - m \geq 3$. -If $k$ were a prime $p$ that could be expressed this way, then you'd have $(n- m )(n+m+1) = 2p$. But $n - m \geq 3$, and $n + m + 1$ would only be bigger than that. Since $2p$ has only the factors $2$ and $p$, that can't happen. -So suppose $k$ is an odd non-prime, which you can write as $k_1k_2$ where $k_1 \geq k_2$ are odd numbers that are at least $3$. You now want to solve $(n-m)(n+m+1) = 2k_1k_2$. It's natural to set $n - m = k_2$ (the smaller factor), and $2k_1 = n + m + 1$, the larger factor. 
Solving for $n$ and $m$ one gets ${\displaystyle n = {2k_1 + k_2 - 1 \over 2}}$ and ${\displaystyle m = {2k_1 - k_2 - 1 \over 2}}$. Since $k_1$ and $k_2$ are odd these are both integers. And since $k_1 \geq k_2$, the numbers $m$ and $n$ are nonnegative.<|endoftext|>
-TITLE: Orthogonal Decomposition of A Matrix
-QUESTION [5 upvotes]: I'm trying to follow/understand a research paper that I have, and well, it's been a while since I've done this kind of math. At this point I have an nxn matrix H and from that construct an (n-1)xn matrix H' = $[\textbf{h}_1-\textbf{h}_n, \ldots, \textbf{h}_{n-1}-\textbf{h}_n]^T$. Now "using orthogonal decomposition" I'm to obtain H'=$\textbf{QU}$, where $\textbf{Q}$ is an (n-1)x(n-1) orthogonal matrix and $\textbf{U}$ is an (n-1)xn upper triangular matrix.
-I guess I'm hoping someone can better explain orthogonal decomposition and help me write the elements of $\textbf{Q}$ and $\textbf{U}$ in terms of the elements of H'.

-REPLY [5 votes]: Look up QR decomposition. QR decomposition decomposes the matrix as a product of an orthonormal matrix and an upper triangular matrix, i.e. $QQ^T = Q^TQ = I$ and $R$ is upper triangular. QR decomposition is nothing but the Gram-Schmidt orthogonalization process. It is relatively easy to code up QR based on the Gram-Schmidt orthogonalization process. Wikipedia does a nearly complete job of explaining both QR and Gram-Schmidt. The main idea behind QR/Gram-Schmidt is to construct an orthonormal basis for the column space/range of $A$ using the columns of the matrix $A$. The idea is relatively simple.

-Take the first column of $A$ and make it into a unit vector by dividing out by the norm

-Take the second column of $A$ and remove the component along the first column of $A$ and make it into a unit vector by dividing out by the norm

-In general, take the $k^{th}$ column of $A$ and remove the components from the previous $k-1$ columns of $A$ (i.e. remove the subspace spanned by the first $k-1$ columns of $A$) and make it into a unit vector by dividing out by the norm
-This orthonormal set of vectors gives you the $Q$ matrix, and the coefficients produced by these operations (the projections and the norms) go into the $R$ matrix.


-However, the QR algorithm using the Gram-Schmidt orthogonalization process can become unstable. Hence a couple of other algorithms, based on Givens rotations and Householder reflections, are used in practice since they tend to be more stable than the Gram-Schmidt algorithm.
-There is a one to one correspondence between the algorithm based on Givens rotations and the algorithm based on Householder reflections. (Just like the correspondence between solving a system by Gauss Elimination and LU). So technically the stability of the algorithm based on Givens rotations is the same as the stability of the algorithm based on Householder reflections.<|endoftext|>
-TITLE: When to learn category theory?
-QUESTION [123 upvotes]: I'm an undergraduate who wishes to learn category theory but I only have basic knowledge of linear algebra and set theory, I've also had a short course on number theory which used some basic concepts about groups and modular arithmetic. Is it too early to start learning category theory? Should I wait to take a course on abstract algebra?
-Is it very important to use category theory facts in a first course in group theory, ring theory, fields and Galois theory, modules and tensor products (each of those is a one semester course), would that make it a 'better' course?
-I was unsure about learning category theory early, but this post Mathematical subjects you wish you learned earlier inspired me to ask given my background.

-REPLY [19 votes]: I've already answered a similar question on MathOverflow here.
-Here are some thoughts of mine:

-category theory (seen as a language) should not be taught just in advanced courses; it should be developed in the basic courses, in a very gradual way;
-some elementary concepts are so simple that even a first year student can understand them, if these concepts are presented in the right way: for instance you can see a category just as a graph with operations, and functors as graph morphisms preserving the operations (this definition is not more complex or abstract than the definitions of a group homomorphism or of a linear map between vector spaces); because these concepts are so simple, why not introduce them early?
-learning category theory helps in making connections between many different concepts, because it shows the deep unity in maths;
-category theory is first of all a language, and so it gives us a new way of reasoning; this new way of reasoning requires some time to be fully assimilated, and this assimilation could require years; for this reason I think it's best to start learning category theory soon;
-knowing category theory helps in learning new maths: for instance I learned category theory out of my interest in logic and foundations, and then knowing those concepts helped me to understand constructions in algebraic topology and algebraic geometry faster than I would have done without it;

-Other things can be found in the link above.<|endoftext|>
-TITLE: Help understanding game version of Baire category theorem
-QUESTION [8 upvotes]: I got this from Thomson et al.'s freely available "Elementary Real Analysis" p.356.
-They introduce Baire's category theorem through a game where, given two players (A) and (B)

-Player (A) is given a subset $A$ of $\mathbb{R}$, and player (B) is given the complementary set $B = \mathbb{R} \backslash A$. Player (A) first selects a closed interval $I_1 \subset \mathbb{R}$; then player (B) chooses a closed interval $I_2 \subset I_1$. The players alternate moves, a move consisting of selecting a closed interval inside the previously chosen interval.
-The play of the game thus determines a descending sequence of closed intervals
- \begin{align}
- I_1 \supset I_2 \supset \ldots \supset I_n \supset
- \ldots\end{align}
- where player (A) chooses those with odd index and player (B) those with even index. If
- \begin{align}
- A \cap \bigcap_{n=1}^{\infty} I_n \neq \emptyset
- \end{align}
- then player (A) wins; otherwise player (B) wins.

-Then they argue that if player (A) is dealt the irrational set and (B) is dealt the rational set, (A) always has a strategy to win.
-I'm confused in several ways by this argument. One confusion is about the term "closed interval". Does, for example, the closed interval [1,1] count as an interval? Because if that's the case, can't whoever has the first turn just end the game then and there without regard to whether he has the rationals or irrationals? Say, if (A) received the rationals, he can just pick [0.5,0.5]. Game over. (But I'm guessing probably not, because the game wouldn't happen)
-If that is not the case, and an interval has to be defined $[a,b]$ s.t. $a < b$, isn't it always true that for any $I_{2n+1} = [a_{2n+1}, b_{2n+1}]$ for the odd numbered turns of (A), and where (A) has the irrationals, there exists some $q \in [a_{2n+1},b_{2n+1}]$ s.t. $q$ is rational?
Because $\mathbb{Q}$ is also dense on the real line. Or is this argument relying on the fact that a countable intersection of closed sets is always closed? And that it can converge to a single point. But if it converges to a single point, it might be a closed set, but is it still a closed interval? And does that still count as winning? And can't (A) still play this same game even if he were dealt the rationals if all that is needed is that his choice of intervals converge to a single point?
-I've been thinking about this for a few days now and none of the ways I've approached it convince me that their discussion of the game is true (though I do trust that it is true, since the authors are mathematicians and I'm not), so any help would be appreciated.

-REPLY [3 votes]: A closed interval is of the form $[a,b]$, $[a, \infty)$ or $(-\infty,b]$.
-Since $A$ deals first, he can start with a bounded interval, so let's assume that all intervals $I_{n}$ are bounded (I leave it to you to describe a winning strategy for $B$ if $A$ is silly enough to always deal an unbounded interval). The only really interesting case is if the length of the interval converges to zero, for if not, the intersection will be a non-trivial interval, which will then contain an irrational number. So, if $|I_{n}| \to 0$ then there will be a unique real number $x \in \bigcap_{n} I_{n}$ by the principle of nested intervals and the question becomes: can $B$ force this number to be rational so that $A$ loses? The answer is no. The reason is that the decimal expansion of a rational number is eventually periodic, so $A$ only has to take care to break any possible period of the decimal expansion of $x$ at each step and this is easily achieved by choosing an interval of sufficiently small length.<|endoftext|>
-TITLE: Intuitive explanation of the Fundamental Theorem of Linear Algebra
-QUESTION [17 upvotes]: Can someone explain intuitively what the Fundamental Theorem of Linear Algebra states, and why it is called that? Specifically, what makes it 'Fundamental' in the broad scope of the theory?

-REPLY [6 votes]: Suppose $V$ and $W$ are finite dimensional inner product spaces over $F$ (where $F$ is $\mathbb R$ or $\mathbb C$). Let $T:V \to W$ be a linear transformation.
-There are four subspaces that are naturally associated with $T$: $N(T), R(T), N(T^*)$, and $R(T^*)$. (Here $T^*$ is the adjoint of $T$. So $\langle T(x), y \rangle = \langle x, T^*(y) \rangle$ for all $x \in V, y \in W$.)
-What Strang calls the fundamental theorem of Linear Algebra is the fact that
-\begin{equation}
-V = N(T) \perp R(T^*)
-\end{equation}
-and
-\begin{equation}
-W = N(T^*) \perp R(T).
-\end{equation}
-This theorem is easy to prove:
-\begin{align}
-& x \in N(T) \\
-\iff & T(x) = 0 \\
-\iff & \langle T(x),y \rangle = 0 \;\forall\, y \in W \\
-\iff & \langle x, T^*(y) \rangle = 0 \;\forall\, y \in W \\
-\iff & x \in R(T^*)^{\perp}.
-\end{align}
-This shows that $N(T)$ is the orthogonal complement of $R(T^*)$.
-Similarly, $N(T^*)$ is the orthogonal complement of $R(T)$.
-It's natural to seek bases for these four subspaces, and perhaps the "most natural" or nicest bases are the ones that arise in the SVD of $T$.

-Edit: It's interesting that, as @WillieWong pointed out, a similar theorem can be formulated in a more general setting, without using inner product spaces.
-Let $V$ and $W$ be finite dimensional vector spaces over a field $F$ and let $T:V \to W$.
There are four subspaces naturally associated with $T$: $N(T), R(T), N(T^*)$, and $R(T^*)$. (Now $T^*$ is the dual transformation $T^*:W^* \to V^*$.)
-We may not have an inner product to work with, but we can still write
-\begin{equation}
-\langle T^*(w^*),v \rangle = \langle w^*,T(v) \rangle
-\end{equation}
-if we use the notation
-\begin{equation}
-\langle x^*,y \rangle := x^*(y)
-\end{equation}
-for $x^* \in V^*, y \in V$.
-Moreover, $N(T)$ may not have an orthogonal complement, but it does have an annihilator, which is the analogous thing in a more general setting. And we have that $R(T^*)$ is the annihilator of $N(T)$, and also that $N(T^*)$ is the annihilator of $R(T)$. This is a more general form of Strang's "fundamental theorem".
-Note that for any subspace $U$ of $V$, we have
-\begin{equation}
-\text{dim} \, U + \text{dim} \,U^{\perp} = \text{dim} \, V
-\end{equation}
-where $U^{\perp}$ is the annihilator of $U$. In particular,
-\begin{equation}
-\text{dim} \, N(T) + \text{dim} \, R(T^*) = \text{dim} \, V.
-\end{equation}
-But we also know that
-\begin{equation}
-\text{dim} \, N(T) + \text{dim} \, R(T) = \text{dim} \, V.
-\end{equation}
-This shows that
-\begin{equation}
-\text{dim} \, R(T) = \text{dim} \, R(T^*).
-\end{equation}
-These easy but fundamental theorems and proofs appear in chapter 2 of Lax's book Linear Algebra and its Applications. (See theorems 5, 5', and 6.)<|endoftext|>
-TITLE: Prove a number is composite
-QUESTION [14 upvotes]: How can I prove that $$n^4 + 4$$ is composite for all $n > 5$?
-This problem looked very simple, but I took 6 hours and ended up with nothing :(. I broke it into cases based on the quotient remainder theorem, but it did not give any useful information.
-Plus, I tried to factor it:
-$$n^4 - 16 + 20 = ( n^2 - 4 )( n^2 + 4 ) + 5\cdot4$$
-If a composite is added to a number that is a multiple of $5$, is there anything special? A hint would suffice.
-Thanks,
-Chan

-REPLY [4 votes]: $$ x^4+4=\\
-[(x^2)^2+4x^2+4]-4x^2\\
-=(x^2+2)^2-(2x)^2\\
-=(x^2+2x+2)(x^2-2x+2)\ldots $$<|endoftext|>
-TITLE: Does the ring of integers have the following property?
-QUESTION [6 upvotes]: As a follow-up to this question, I'd like to ask:
-What are examples of rings $R$ with the property that for all finite sets of ideals $I_1,\ldots,I_n$ in $R$ the sequence
-$$
-\bigoplus_{1\leq j < k\leq n} I_j\cap I_k\quad\xrightarrow{f}\quad\bigoplus_{l=1}^n I_l\quad\xrightarrow{g}\quad\sum_{k=1}^n I_k
-$$
-is exact in the middle? Here $g$ is given by addition, and $f$ maps $x\in I_j\cap I_k$ to $x\in I_j$ and to $-x\in I_k$ (and to zero in all other components).
-A complete description of this class of rings would be even better, of course.
-Obvious examples seem to be rings with at most two proper ideals. Those I would consider pathological in this context.
-I'd be happy to restrict to complex algebras if that is useful.
-Also, is there a name for rings with this property?

-Edit: In order to make the question more answerable, let's just consider the case $R=\mathbb Z$. Can we find ideals in $\mathbb Z$ such that the above sequence is not exact, or can we prove that this is impossible?

-REPLY [6 votes]: I can see that your property does hold in $\mathbb{Z}$. In fact, it holds in any principal ideal domain, and even in every Dedekind domain.
-[Edit: As pointed out by Rasmus below, for the commutative case, the property is equivalent to the ring being arithmetical. This is because arithmetical rings are defined by the distributive lattice property I state in (2) below.
So, for integral domains, the property is equivalent to the ring being Prüfer. This certainly includes all Dedekind domains and, in the case of Noetherian domains, it is actually equivalent to the ring being Dedekind.]
-I think it's just as easy to start by considering an arbitrary ring $R$. Choosing any $x\in\bigoplus_kI_k$ such that $g(x)=0$, we have $x_1=-\sum_{k\ge2}x_k\in I_1\cap\sum_{k\ge2}I_k$. Conversely, if $x_1\in I_1\cap\sum_{k\ge2}I_k$ then this can be extended to an $x\in\bigoplus_kI_k$ with $g(x)=0$.
-We want this to mean that there exists a $y\in\bigoplus_{j < k}I_j\cap I_k$ with $f(y)=x$. This implies that $x_1=\sum_{k\ge2}y_{1k}\in\sum_{k\ge2}I_1\cap I_k$. So, a necessary condition for the given sequence to be exact in the middle is that
-$$
-I_1\cap\sum_{k=2}^nI_k\subseteq\sum_{k=2}^nI_1\cap I_k.\qquad\qquad{\rm(1)}
-$$
-The reverse inclusion is immediate for any ring. In particular, just considering $n=3$ gives the necessary condition
-$$
-I\cap(J+K)=I\cap J+ I\cap K,\qquad\qquad{\rm(2)}
-$$
-for all ideals $I,J,K\subseteq R$. In fact, this is both a necessary and sufficient condition. It can be seen that your original condition and (2) are both just statements about a collection of subgroups of an abelian group (here, the group is $R$ under addition and the subgroups are the ideals). In fact, you are asking when the first homology group of a specific chain complex vanishes (much like in Čech cohomology). Also, (2) is the same as saying that the ideals form a distributive lattice $({\rm Id}_R,+,\cap)$. However, I'm no expert in this, and don't know what the name of rings satisfying your particular condition is (Edit: In the commutative case, these are arithmetical rings as mentioned above and in a comment below).
-To see that (2) implies (1), use induction for $n\ge2$,
-$$
-I_1\cap\sum_{k=2}^nI_k=I_1\cap\sum_{k=2}^{n-1}I_k+I_1\cap I_n=\sum_{k=2}^nI_1\cap I_k.
-$$
-To see that (1) implies exactness, consider $x\in\bigoplus_kI_k$ with $g(x)=0$. Then, $x_n\in I_n\cap\sum_{k < n}I_k=\sum_{k < n}I_k\cap I_n$. So, there will be a $y\in\bigoplus_{j < k}I_j\cap I_k$ with $y_{jk}=0$ for $j < k < n$ and $f(y)_n=x_n$. Replacing $x$ by $\tilde x=x-f(y)$ reduces to the case with $\tilde x_n=0$, so exactness follows by induction on $n$.
-Finally, (2) holds in any Dedekind domain. It is clear if any of $I,J,K$ are zero, so suppose that they are nonzero. For any ideal $I$ and nonzero prime ideal $\mathfrak{p}$, write $v_{\mathfrak{p}}(I)$ for the index of $\mathfrak{p}$ in the factorization of $I$ (the $\mathfrak{p}$-adic valuation). Two nonzero ideals $I,J$ are equal if and only if $v_{\mathfrak{p}}(I)=v_{\mathfrak{p}}(J)$ for all primes $\mathfrak{p}$. Then, using $v_{\mathfrak{p}}(I+J)=\min(v_{\mathfrak{p}}(I),v_{\mathfrak{p}}(J))$ and $v_{\mathfrak{p}}(I\cap J)=\max(v_{\mathfrak{p}}(I),v_{\mathfrak{p}}(J))$ implies that (2) holds.<|endoftext|>
-TITLE: Is the complex exponential function injective, surjective and/or bijective - and why?
-QUESTION [13 upvotes]: I was just reading about the e-function in the complex plane and was trying to understand the differences between the real and the complex case. Part of the problem is that the mapping of a 2-D plane to another 2-D plane is hard to visualize.
-My question
-What are the properties of the complex exponential function in terms of being injective, surjective and/or bijective - and how is this different from the real case?
-How can you prove these attributes in the real and in the complex case?
- -REPLY [15 votes]: The complex exponential $e^z : \mathbb{C} \to \mathbb{C}$ is neither injective nor surjective. The former is because $e^{z + 2 \pi i k} = e^z$ for all $k \in \mathbb{Z}$ and the latter is because $e^z \neq 0$. In fact, if $e^z = e^w$ then $e^{z-w} = 1 \Rightarrow z - w \in 2 \pi i \mathbb{Z}$, so this is the only source of non-injectivity.
-Geometrically, $e^{x + iy} = e^x (\cos y + i \sin y)$ is just the point $(e^x, y)$ in polar coordinates. One can think of the complex exponential as specifying a conformal isomorphism between the following two Riemann surfaces:

-Its domain mod $2 \pi i \mathbb{Z}$. One should think of this as the complex plane cut up into horizontal strips $S_k = \{ x + iy : 2 \pi k \le y < 2 \pi (k+1) \}$, all of which are then identified, and all of which have their top and bottom borders identified. Roughly speaking this is an infinitely long tube open at both ends.
-Its range $\mathbb{C} \setminus \{ 0 \}$.

-Both of these Riemann surfaces can be identified with the Riemann sphere minus two points. In the first picture one first "puffs out" the tube and then shrinks its two ends toward the missing points. In the second picture one first identifies $\mathbb{C}$ with the Riemann sphere minus a point via stereographic projection and then removes an additional point.
-(This picture works fairly well, I think. For example vertical lines in the domain are mapped to circles in the range in a natural way, and shifting the lines left or right corresponds to varying the radius of the circle, or varying its relative position between the poles on the Riemann sphere.)
-There is a nice discussion about how to visualize holomorphic functions in Needham's Visual Complex Analysis.

-REPLY [14 votes]: The complex exponential is never zero, so it's not surjective; this is like the real case. It is also periodic, so it's not injective either; this is unlike the real case.

-REPLY [5 votes]: Richard Palais has a wonderful program for visualizing some complex maps called 3D-XplorMath which can be downloaded for free at http://3d-xplormath.org/. It also has some nice visuals for 3D surfaces and fractals, among other things. This should at least be able to help you visualize exponentiation in $\mathbb{C}$.<|endoftext|>
-TITLE: Convergence in probability versus convergence in distribution
-QUESTION [5 upvotes]: Suppose $\bar{X}_n$ is the mean of a random sample of size ${n}$ from an exponential distribution with $\lambda > 0$. Then what does the following statement about convergence mean (how does this converge)?
-$$\text{exp} \left(-\frac{1}{\bar{X}_n} \right) \xrightarrow{\rm{P}} \text{exp}(-\lambda) $$
-More specifically what does the $\rm{P}$ on top of the arrow mean? I understand it's probability but what is the difference between some expression converging to some value in probability versus in distribution?
Then, since the sequence $S_n$ is monotone increasing, $S_n$ converges pointwise (that is, for all $\omega \in \Omega$) to a random variable $S$ taking values in $[0,Mc]$. Here, we have the strongest type of convergence (sure convergence), which implies all the other kinds of convergence. In particular, as one would expect, $S_n \stackrel{{\rm P}}{\to} S$. Indeed, this can be shown directly as follows. Fix $\varepsilon > 0$. Then, for all sufficiently large $n$, -$$ -{\rm P}(|S_n - S| > \varepsilon) = {\rm P}\bigg(\sum\limits_{i = n + 1}^\infty {a_i X_i } > \varepsilon \bigg) \leq {\rm P}\bigg(M \sum\limits_{i = n + 1}^\infty {a_i } > \varepsilon \bigg) = 0. -$$ -Now, since convergence in probability implies convergence in distribution, $S_n \stackrel{{\rm D}}{\to} S$ as well. However, the limit random variable $S$ plays no special role with regard to the convergence in distribution. Indeed, take, for example, an independent copy $S'$ of $S$. Then, trivially, $S_n \stackrel{{\rm D}}{\to} S'$ (simply because $S$ and $S'$ have the same distribution function). On the other hand, the limit $S$ plays an essential role with regard to the convergence in probability. In fact, it is easy to prove the following general statement: the limit of convergence in probability is unique in the sense that if $Z_n \stackrel{{\rm P}}{\to} X$ and $Z_n \stackrel{{\rm P}}{\to} Y$, then $X = Y$ almost surely, that is ${\rm P}(X \neq Y) = 0$. -Finally, it is worth noting that if $Z_n \stackrel{{\rm D}}{\to} Z$, and $Z$ is distributed according to a distribution $F$, then we can write $Z_n \stackrel{{\rm D}}{\to} F$. For example, if $Z_n \stackrel{{\rm D}}{\to} Z$ where $Z \sim {\rm exponential}(\lambda)$, then we can write $Z_n \stackrel{{\rm D}}{\to} {\rm exponential}(\lambda)$. -To further clarify the difference between convergence in probability and convergence in distribution, let's consider the fundamental case of the central limit theorem. -Example 2. Suppose that $X_1,X_2,\ldots$ is a sequence of i.i.d. random variables with expectation $\mu$ and (finite) variance $\sigma^2 > 0$. Define $S_n = X_1 + \cdots + X_n$ and $Z_n = \frac{{S_n - n\mu }}{{\sigma \sqrt n }}$. The central limit theorem states that $Z_n$ converges in distribution to the standard normal distribution ${\rm N}(0,1)$, that is $Z_n \stackrel{{\rm D}}{\to} {\rm N}(0,1)$. So, given any random variable $Z \sim {\rm N}(0,1)$ (which, in particular, may be defined on a different probability space), we can write $Z_n \stackrel{{\rm D}}{\to} Z$. On the other hand, there is no random variable $Z$ such that $Z_n \stackrel{{\rm P}}{\to} Z$. Indeed, suppose for a contradiction that $Z_n \stackrel{{\rm P}}{\to} Z$. It is an easy exercise to show that -$$ -{\rm P}(|Z_n - Z_m | > 2 \varepsilon ) \le {\rm P}(|Z_n - Z| > \varepsilon ) + {\rm P}(|Z_m - Z| > \varepsilon ). -$$ -Now, in order to reach a contradiction, it suffices to realize that $Z_n$ and $Z_m$ become asymptotically independent as $n,m \to \infty$ with $n/m \to 0$; indeed, -$$ -Z_m = \sqrt {\frac{n}{m}} Z_n + \sqrt {\frac{{m - n}}{m}} \frac{{\sum\nolimits_{i = n + 1}^m {X_i } - (m - n)\mu }}{{\sigma \sqrt {m - n} }}, -$$ -from which it is also seen that -$$ -{\rm Cov}(Z_n,Z_m) = \sqrt {\frac{n}{m}}. -$$ -Finally, especially in view of the first example, it is worth noting that convergence in probability, though quite strong relative to convergence in distribution, does not imply almost sure convergence. 
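-For intuition, the classical ``typewriter'' sequence exhibits exactly this: with $U$ uniform on $[0,1]$ and $n = 2^k + j$ (where $0 \le j < 2^k$), set $X_n = \mathbf{1}\{j2^{-k} \le U < (j+1)2^{-k}\}$. Then ${\rm P}(X_n \neq 0) = 2^{-k} \to 0$, so $X_n \stackrel{{\rm P}}{\to} 0$, yet every sample path takes the value $1$ infinitely often. A minimal simulation sketch (in Python, assuming numpy; the code is purely illustrative):
-import numpy as np
-
-rng = np.random.default_rng(0)
-U = rng.uniform(0.0, 1.0, size=10000)   # one sample point per path
-
-def X(n, U):
-    # X_n = indicator that U lies in the n-th dyadic block.
-    k = int(np.floor(np.log2(n)))
-    j = n - 2**k
-    return ((j / 2**k <= U) & (U < (j + 1) / 2**k)).astype(float)
-
-# P(X_n != 0) = 2^{-k} -> 0: convergence in probability to 0 ...
-for n in [2, 16, 256, 4096]:
-    print(n, X(n, U).mean())
-
-# ... but every path is hit once per dyadic level, so X_n = 1 infinitely
-# often along every path and there is no almost sure limit:
-hits = sum(X(n, U) for n in range(1, 4096))
-print(hits.min())   # 12 hits per path for levels k = 0,...,11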
A short but sophisticated example is given in my answer to this question, at the end of the second paragraph.
-EDIT: As I commented below, I intentionally gave the non-trivial example of the central limit theorem. Here are two trivial examples.
-First (elaborating on Didier's example), if $X_1,X_2,\ldots$ are i.i.d. from a distribution $F$, then, trivially, $X_n \stackrel{{\rm D}}{\to} F$ (since $X_i \sim F$ for each $i$). But, unless the $X_i$ are deterministic, the sequence never converges in probability. Indeed, suppose that $X_n \stackrel{{\rm P}}{\to} X$. Let $\varepsilon >0 $ be arbitrary but fixed. By the triangle inequality, the event $\lbrace |X_{n+1} - X_n| > 2 \varepsilon \rbrace$ is contained in the event $\lbrace |X_{n+1} - X| > \varepsilon \rbrace \cup \lbrace |X_{n} - X| > \varepsilon \rbrace$. Hence,
-$$
-{\rm P}(|X_{n+1}-X_n| > 2 \varepsilon) \leq {\rm P}(|X_{n + 1} - X| > \varepsilon ) + {\rm P}(|X_n - X| > \varepsilon ).
-$$
-Since, by our assumption, the right-hand side tends to $0$ as $n \to \infty$, and since $|X_{n+1}-X_n|$ is equal in distribution to $Y:=|X_1 - X_2|$, we get ${\rm P}(Y > 2 \varepsilon) = 0$. Since $Y$ is nonnegative, this implies that $Y = 0$ almost surely (exercise), that is, $X_1 = X_2$ almost surely. Hence, the $X_i$ are deterministic (since they are independent).
-As another example, suppose that ${\rm P}(X_1 = 1) = {\rm P}(X_1 = -1) = 1/2$, and let $X_{n+1}=-X_n$. Then, trivially, $X_n \stackrel{{\rm D}}{\to} X_1$, but either $(X_n) = (1,-1,1,-1,\ldots)$ or $(X_n) = (-1,1,-1,1,\ldots)$.<|endoftext|>
-TITLE: What is a primitive point on an elliptic curve?
-QUESTION [6 upvotes]: While working with elliptic curves for cryptographic reasons, I found the notion of a primitive point, but no definition.
-For example, $P(0,6)$ is a primitive point on the elliptic curve $y^2\equiv x^3+2x+2 \mod 17$.
-What does that mean? How can I tell if a point is primitive or not?

-REPLY [3 votes]: Yes, primitive point means the group is cyclic and that P is a generator.
-Let E be an elliptic curve over a finite field F_q. The group E( F_q ) is cyclic if gcd( #E( F_q ), q-1 ) = 1. To test whether a point P generates E( F_q ) it is necessary to factorise #E( F_q ). The case when #E( F_q ) is prime is an easy special case.
-If M = gcd( #E( F_q ), q-1 ) > 1 then one can determine whether E( F_q ) is cyclic or not (both cases can arise) in expected (randomised) polynomial time using the Weil pairing. This algorithm is due to Victor Miller and is explained in his paper in the Journal of Cryptology, volume 17 (2004).<|endoftext|>
-TITLE: What makes Torus Special
-QUESTION [5 upvotes]: For the past couple of days I have been encountering the word "Torus" quite often.

-I would like to know what special properties the torus possesses that make it so heavily studied in mathematics.

-A recent article on Toral Automorphisms was given out to students by one of our Professors, which I have posted here.

-http://chandrumath.wordpress.com/2011/01/31/toral-automorphisms/

-This is one example which illustrates what properties the torus possesses. I am looking for more exciting properties of the torus which make it ubiquitous in mathematics. Another question:

-Are a torus and a doughnut the same? Or is there any topological difference between them?

-REPLY [4 votes]: One of your questions was "Is a torus a doughnut?" The 2-dimensional surface that we know as a torus is the 2-dimensional surface of a standard doughnut or bagel.
It is also the 2-dimensional surface of a coffee cup up to homeomorphism (read topological equivalence). This is why we say that a topologist cannot tell the difference between a doughnut and a coffee cup.
-If a piece of rope (or, better, an extension cord) is tied loosely into a knot and the ends joined together (or plugged into one another, respectively) then the boundary of the resulting figure is represented as a torus. Three dimensional compact spaces that have no incompressible tori are important because of the geometrization conjecture.
-To paraphrase Homer (Simpson, not the poet), "MMM. Tori..."<|endoftext|>
-TITLE: Importance of Constructible functions
-QUESTION [9 upvotes]: A function $f$ is called fully time-constructible if there exists a Turing machine $M$ which, given a string $1^n$ consisting of $n$ ones, stops after exactly $f(n)$ steps.
-Analogously, we can call a function $f$ fully space-constructible if there exists a Turing machine $M$ which, given a string $1^n$ consisting of $n$ ones, halts after using exactly $f(n)$ cells.
-Consider this function:
-\begin{equation*}
- f(n) =
- \begin{cases}
- n^3, & \text{if}\ n = 3k \ \text{for some} \ k \in \mathbb{N} \\
- 2^n, & \text{otherwise}
- \end{cases}
- \end{equation*}
-Is $f(n)$ time-constructible and/or space-constructible? Also, what is the intuition behind considering time/space constructible functions in complexity theory? Will things be any different if we include non-constructible time/space functions?

-REPLY [7 votes]: f(n) is both time- and space-constructible. $M$ converts the input to binary, performs the necessary computations, and then spins for the appropriate number of steps.
-Constructible functions are primarily considered in order to simplify technical issues in proofs. For example, consider the Time Hierarchy Theorem, which essentially says that having more time allows you to solve more stuff.
-The formal statement is: Given a time-constructible function $f(n)$ and a function $g(n)=\omega(f(n)\log f(n))$, there exists a language decidable in $TIME(g(n))$ but not in $TIME(f(n))$.
-The proof is by diagonalization. Construct a machine $M$ which, on input $x$, simulates machine $x$ on input $x$ for $f(n)$ steps. If $x(x)$ halts within that time bound, $M$ outputs the opposite. If $x(x)$ hasn't halted, $M$ rejects (arbitrarily). The simulation can be done using $\log f(n)$ overhead, and the theorem follows.
-In order for the proof to work, $M$ needs to be able to count $f(n)$ steps, and it needs to do so efficiently. Otherwise, the running time of $M$ won't be bounded by $g(n)$. Time-constructibility is exactly the criterion needed to allow the proof to work.
-Since basically every function of interest is time-constructible, there isn't much point in removing the technicality. Actually, in the case of the time hierarchy theorem, removing the constructibility requirement makes the theorem spectacularly false. The Blum Gap Theorem says that given any function $r(n)$, such as $2^n$, there exists a function $f(n)$ such that $TIME(f(n))=TIME(r(f(n)))$. Obviously, this $f$ is not time-constructible.<|endoftext|>
-TITLE: Pasting Together Fibers of a Vector Bundle
-QUESTION [6 upvotes]: Everyone:
- Please forgive that I do not yet know LaTex, bro, and my English (I am from UCV in Venezuela).
I think I understand the concept of bundles fairly well, and that, once a vector bundle with a fiber is known/given, we can define a new fiber pointwise by manipulating each of the fibers, e.g., we may change the fiber from being R (over itself) to being R(+)R, or from R to R(x)R, RxR* (the dual), etc.
- What I am not too clear on is how one puts together all the new fibers coherently into a bundle, i.e., how one constructs new trivializations and transition functions to turn the space with the altered fiber into a new bundle. I am particularly interested in the quotient bundle, if someone knows. I imagine we use the initial charts and trivializations to construct the altered bundle, but I don't see fully how, other than that I am pretty sure we use multilinear algebra and functoriality somehow. It would be great if someone knew how to do this for general fiber bundles.
-Thank you from Caracas.

-REPLY [7 votes]: Suppose $T$ is a functor from the category of finite dimensional (real, say) vector spaces to itself. We can say that $T$ is continuous if for every finite dimensional vector space $V$ the associated map $T:\hom(V,V)\to\hom(TV,TV)$ is continuous, when we view its domain and codomain as real vector spaces.
-For example, the functor $T=\Lambda^2(\mathord-)$ which computes exterior squares is continuous, as are many others.
-Now suppose $E$ is a locally trivial vector bundle of dimension $n$ over a space $B$, and suppose $\mathcal U$ is an open covering of $B$ over whose open sets $E$ is trivial. Then for each pair $U$, $V\in\mathcal U$ such that $U\cap V\neq\emptyset$ we have a corresponding transition function $g_{U,V}:U\cap V\to\mathrm{GL}(\mathbb R,n)$, as explained for example in the Wikipedia page for vector bundles. Moreover, we can reconstruct $E$ up to isomorphism from the knowledge of $\mathcal U$ and the family $\{g_{U,V}:U,V\in\mathcal U,U\cap V\neq\emptyset\}$ alone.
-Now, suppose $T(\mathbb R^n)$ has dimension $m$. Then the vector bundle $T(E)$ can be defined to be the vector bundle which one can construct from the covering $\mathcal U$ and the family $\{\tilde g_{U,V}:U,V\in\mathcal U,U\cap V\neq\emptyset\}$ of transition functions, where for each $U$, $V\in\mathcal U$ such that $U\cap V\neq\emptyset$, the map $\tilde g_{U,V}$ is given by $$\tilde g_{U,V}:p\in U\cap V\mapsto T(g_{U,V}(p))\in GL(m).$$
-This construction gives a meaning to $T(E)$ for all continuous functors $T$, and it can be generalized for continuous functors of many variables, both covariant and contravariant. If I recall correctly, this is discussed in Milnor and Stasheff's book on Characteristic Classes, for example.
-NB: there is another way of doing this...
-If $E$ is a vector bundle of dimension $n$ over a space $B$, then one can attach to it a principal $\mathrm{GL(n)}$-bundle $P_E$, called the frame bundle. The fiber of $P_E$ over a point $b\in B$ is the set of all ordered bases of the fiber of $E$ over $b$.
-Now, given any $\mathrm{GL}(n)$-representation $V$, we can form the bundle $P_E\times_{\mathrm{GL}(n)}V$. If we take $V=V_{\mathrm{taut}}=\mathbb R^n$ with the tautological action of $\mathrm{GL}(n)$, then $P_E\times_{\mathrm{GL}(n)}V$ is isomorphic to $E$; if we take $V=\Lambda^3(V_{\mathrm{taut}})$, then
-$P_E\times_{\mathrm{GL}(n)}V$ is the bundle $\Lambda^3(E)$, and so on.
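-To see the first construction concretely in a toy case: the transition data is just a family of invertible matrices satisfying the cocycle condition, and applying $T$ means applying it matrixwise. Here is a minimal sketch (in Python with numpy; the functor is $\Lambda^2$ on a $3$-dimensional fiber, realized as the matrix of $2\times2$ minors, and the random matrices stand in for values of transition functions at a point):
-import numpy as np
-from itertools import combinations
-
-def wedge2(g):
-    # Second compound matrix: the induced map on Lambda^2(R^3),
-    # whose entries are the 2x2 minors of g.
-    idx = list(combinations(range(g.shape[0]), 2))
-    return np.array([[np.linalg.det(g[np.ix_(r, c)]) for c in idx] for r in idx])
-
-rng = np.random.default_rng(1)
-g, h = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
-
-# Functoriality (Cauchy-Binet): wedge2(g h) = wedge2(g) wedge2(h), so the
-# new family of matrices still satisfies the cocycle condition.
-print(np.allclose(wedge2(g @ h), wedge2(g) @ wedge2(h)))  # True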
- -  -Both constructions do the same thing: they pick some gadget which precisely captures the way the fibers of the vector bundle you start with are put together (the set of transition functions, the associated principal bundle) and then pick some other fiber and glue copies together using the same prescription. The two approaches are equivalent, of course. The second one is somewhat more «geometric» because the gadget used to record the way the fibers of $E$ are put together is in fact a geometric object and not some vaporous family of transition functions, also known, if you want to scare the kids, as a $1$-cocycle.<|endoftext|>
-TITLE: On Some Properties of Hölder Continuous Functions
-QUESTION [9 upvotes]: The function space $H^{\alpha} (\Omega)$ for $0 < \alpha \le 1$, is the set of functions:
-$$\{ f \in C^0(\Omega) : \sup_{x \neq y} \dfrac{|f(x) - f(y)|}{|x-y|^{\alpha}} < \infty \}$$
-with the metric $d_{H^{\alpha}} = || f - g ||_{H^{\alpha}}$, where
-$$||f||_{H^{\alpha}} = ||f||_{sup} + [f]_{H^{\alpha}} \text{ , } [f]_{H^{\alpha}} = \sup_{x \neq y} \dfrac{|f(x) - f(y)|}{|x-y|^{\alpha}} $$
-Now, if $0 < \alpha < \beta \le 1$, then
-$$[f]_{H^{\alpha}} \le 2 ||f||_{sup}^{1-\frac{\alpha}{\beta}} [f]_{H^{\beta}}^{\frac{\alpha}{\beta}} \space \forall f \in H^{\beta}$$
-And also, there is some constant $M$ so that:
-$$||f||_{H^{\alpha}} \le M ||f||_{sup}^{1-\frac{\alpha}{\beta}} ||f||_{H^{\beta}}^{\frac{\alpha}{\beta}} \space \forall f \in H^{\beta}$$
-These were some questions on a problem set: I have checked that $d_{H^{\alpha}}$ is a metric, and proved the two properties (in the second I found that $M = 2$ is sufficient). However, rather blindly. It's easy to show from the first that if $0 < \alpha < \beta \le 1$, then $H^{\beta} \subset H^{\alpha}$.
-What else do these formulas mean? Are they just some useful inequalities, or do they establish some connection between $H^{\beta}$ and $H^{\alpha}$?
-Thanks.

-REPLY [8 votes]: Those two final inequalities are known as "interpolation inequalities". The point is the following: you can "extend" the Hölder norms to $\alpha = 0$ with the formal expression
-$$ [ f ]_{H^0} = \sup_{x\neq y} \frac{|f(x) - f(y)|}{|x-y|^0} = \sup_{x\neq y} \frac{|f(x) - f(y)|}{1} \leq 2 \|f\|_{sup} $$
-Or, in other words, you identify $H^0$ with $C^0$ equipped with the sup norm. As you observed, it gives you that $H^\alpha \subset H^\beta$ if $\alpha > \beta$. What's more, however, is that now, using the sup-norm factor in the interpolation inequality, you can use Arzelà-Ascoli to show that the inclusion of $H^\alpha\subset H^\beta$ is pre-compact! That is, any bounded sequence in $H^\alpha$ would have a converging subsequence in $H^\beta$, for $\beta < \alpha$.
-As you can imagine, anything that allows you to extract a convergent subsequence is very useful in analysis indeed.
-Lastly, the expression illustrates a phenomenon that happens with regularity in classical analysis, which is that good "scales" of function spaces are often log-convex in the exponent. Your family of Hölder space norms $H^\alpha$, parametrized by $\alpha$, by your two inequalities, is log-convex.
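-As a quick sanity check of the first interpolation inequality, one can test it on a grid (a minimal numerical sketch in Python, assuming numpy; the discrete seminorm is only a proxy for the supremum, and the test function $\sqrt{|x-0.3|}$ is genuinely Hölder-$1/2$):
-import numpy as np
-
-x = np.linspace(0.0, 1.0, 400)
-f = np.sqrt(np.abs(x - 0.3))
-
-def seminorm(f, x, a):
-    # Discrete proxy for sup_{x != y} |f(x)-f(y)| / |x-y|^a.
-    dx = np.abs(x[:, None] - x[None, :])
-    df = np.abs(f[:, None] - f[None, :])
-    mask = dx > 0
-    return (df[mask] / dx[mask] ** a).max()
-
-alpha, beta = 0.25, 0.5
-lhs = seminorm(f, x, alpha)
-rhs = 2 * np.abs(f).max() ** (1 - alpha / beta) * seminorm(f, x, beta) ** (alpha / beta)
-print(lhs <= rhs, lhs, rhs)   # the inequality holds, with room to spare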
-There presumably are very nice applications of the log convexity in interpolation theory etc for the Hölder spaces, but unfortunately none comes to mind immediately at the moment.<|endoftext|> -TITLE: Sufficient conditions on subsequences for convergence of a sequence -QUESTION [5 upvotes]: Given a sequence $a_n$, I know that if I can find a divergent subsequence of $a_n$, or two subsequences of $a_n$ that converge to different values, $a_n$ diverges, since, if I have understood correctly, a sequence $a_n$ converges to a limit $L$ if and only if every subsequence of $a_n$ converges to that value $L$. -I've been wondering if this last condition was equivalent to showing that some subsequences converge to $L$, picking the subsequences such that every element of the original sequence is in at least one of the subsequences. Is it? I would guess that the terms "partition" or "covering" fit this description. -Thanks. - -REPLY [5 votes]: If you can partition a sequence into finitely many subsequences, each of which converges to $L$, then the original sequence must converge to $L$ as well. This is clear from the following definition of the limit: $a_n \rightarrow L$ iff for all $\epsilon>0$ there exists $N_\epsilon$ such that $|a_n - L| < \epsilon$ whenever $n \ge N_\epsilon$. But then for any $\epsilon > 0$, because the $i$-th subsequence converges to $L$, it is within $\epsilon$ of the limit for $n \ge N^{(i)}_\epsilon$; so the sequence itself is within $\epsilon$ of the limit for $n \ge N_\epsilon = \max_{i}N^{(i)}_\epsilon$. -However, the result does not hold for a partition into infinitely many subsequences. For instance, consider the case where $a_n=1$ when $n$ is prime and $a_n=0$ otherwise. This can be partitioned into infinitely many subsequences, where $a_n$ is in the $i$-th subsequence if the smallest prime factor of $n$ is the $i$-th prime. (Just put $a_1$ anywhere.) Each subsequence converges (immediately) to zero, but the original sequence does not converge, because it has sporadic $1$'s as far out as you care to look.<|endoftext|> -TITLE: Morphisms of finite type are stable under base change -QUESTION [10 upvotes]: I am trying to prove that morphisms of finite type are stable under base change, but I am having some trouble moving from the case where everything is affine to the general case. Suppose $f:X \rightarrow Y$ is a morphism of finite type and $Y'$ is a $Y$-scheme. I want to show that the morphism $g: X \times_Y Y' \rightarrow Y'$ is of finite type. In the case that $X$, $Y$, and $Y'$ are affine, I understand why this is true. For the general case, by a lemma in Liu's book, it is enough to show that there is an affine open cover $\{V_i\}_i$ of $Y'$ such that for each $i$, $g^{-1}(V_i)$ is a finite union of affine open subsets $U_{ij}$ such that for each $i$ and $j$, $O_X(U_{ij})$ is a finitely generated algebra over $O_Y(V_i)$. Here is my attempt at proving this. -Choose an affine open cover $\{V_i\}_i$ of $Y'$. Is it true that $g^{-1}(V_i)=X \times_Y V_i$? I think this should follow from how we constructed the fibered product by gluing. Since $f:X \rightarrow Y$ is of finite type, we may choose an affine open cover $\{Y_j\}$ of $Y$ such that $f^{-1}(Y_j)$ is covered by a finite number of affine opens $W_{jk}$. Now, $W_{jk} \times_Y V_i$ are open subschemes that cover $X \times_Y V_i$, but since $Y$ is not necessarily affine, these schemes are not necessarily affine, right? Furthermore, if we are using the $W_{jk}$ to cover all of $X$, there could be infinitely many of them. 
To make the $W_{jk} \times_Y V_i$ affine, we could further cover them with $W_{jk} \times_{Y_j} V_i$, but we are not guaranteed finitely many $Y_j$ either, so while these schemes will be affine, there will not necessarily be finitely many. I have been having some trouble with these sorts of arguments where one can immediately reduce to the affine case, and some help here would be greatly appreciated.

-REPLY [14 votes]: You can do a slight change to make your argument work and give you an affine cover of $X\times_Y Y'$ as follows.
-Call $h$ the base change morphism $Y'\to Y$. Consider a cover of $Y$ by open affines $V_i=Spec(A_i)$ and cover any $h^{-1}(V_i)$ by open affines $V_{ij}=Spec(A_{ij})$ of $Y'$. The preimage of $V_{ij}$ by $g$ is as you said $X\times_Y V_{ij}$ which by the properties of the fibered product coincides with $X_i\times_{V_i}V_{ij}$, where $X_i=f^{-1}(V_i)$ (this is crucial to end up with affine schemes as you will see in a moment). You can cover $X_i$ with open affines $X_{ik}=Spec(B_{ik})$ with all $B_{ik}$ finite type $A_i$-algebras. So $g^{-1}(V_{ij})$ is covered by the affines $X_{ik}\times_{V_i}V_{ij}=Spec(B_{ik}\otimes_{A_i}A_{ij})$, which are $A_{ij}$-algebras of finite type.
-This proves that the base change of $f$ is locally of finite type (actually we didn't use that $f$ is quasi compact, so we proved that locally of finite type morphisms are stable under base change).
-If now $f$ is quasi compact, you just need a finite number of $X_{ik}$ to cover $X_{i}$ and so a finite number of $X_{ik}\times_{V_i}V_{ij}$ is enough to cover $g^{-1}(V_{ij})$, proving the quasi-compactness of $g$.<|endoftext|>
-TITLE: Does zero covariance imply independence of random variables here?
-QUESTION [7 upvotes]: I have two random variables $X$ and $Y$. Both are distributed according to $N(0,1)$. If their covariance is 0, are they independent?
-I know that this is not true for other distributions, say the Wikipedia example: $X$ chosen uniformly in $[-1,1]$ and $Y=X^2$.

-REPLY [7 votes]: http://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent<|endoftext|>
-TITLE: Manifold with 3 nondegenerate critical points
-QUESTION [7 upvotes]: Suppose $M$ is an $n$-dimensional (compact) manifold and $f$ is a differentiable function with exactly three (non-degenerate) critical points. Then one can show, using Morse theory, that $M$ is homeomorphic to an $\frac{n}{2}$-sphere with an $n$-cell attached.
-I understand why we will have critical points with index $0$ and $n$ (since $M$ is compact and it achieves its max and min). My question is: exactly why is there one of index $\frac{n}{2}$?
-By Poincaré duality, we have that $H^{k}(M) \cong H_{n-k}(M)$. So if $k$ is not $\frac{n}{2}$, then $n-k \neq k$ so we will have two additional nonzero (co)homology classes. Does this somehow correspond to having two additional critical points, contradicting that there are only 3?

-REPLY [4 votes]: You can always do $\mathbb{Z}/2$ Morse homology, because then your manifold is guaranteed to be orientable with respect to your coefficient system, and so it's got to be that $H_0(M;\mathbb{Z}/2)=\mathbb{Z}/2$ and $H_n(M;\mathbb{Z}/2)=\mathbb{Z}/2$. So not only do we know we have critical points of index $0$ and $n$, but we know that they actually have to descend to generators in homology. This means that all differentials in our Morse complex will be zero, so generators in the complex are the same as generators for homology.
This proves that the last guy has to be in dimension $n/2$.* As you point out, this means that $M$ is homeomorphic to $S^{n/2}$ with an $n$-cell attached.
-*** Just to put what I said in the comments right here in the answer: This last critical point has some index $k\in [0,n]$, and it becomes a generator of $H_k(X)$ (with coefficients, if you'd like). But it's trivial to see that when $f$ is a Morse function then $-f$ is a Morse function too, and that this critical point of index $k$ will become a critical point of index $n-k$ for $-f$. And $-f$ equally well satisfies what I just said, i.e. all its critical points become homology generators too. So if you believe that (Morse) homology is a topological invariant, then it must be true that $n-k=k$.
-Edit: As Jason explains in answer to my related question, the only manifolds this question can even apply to are homotopy equivalent to $\mathbb{R}P^2$, $\mathbb{C}P^2$, $\mathbb{H}P^2$, and $\mathbb{O}P^2$! Crazy.<|endoftext|>
-TITLE: Algorithms for symbolic manipulation
-QUESTION [18 upvotes]: If you take a look at WolframAlpha, or another computer algebra system, you will find that it is able to do symbolic manipulation much like a human would.
-For example, if you type in an integral, it can show you step by step how to solve the integral.
-What are the algorithms behind all this?

-REPLY [19 votes]: The algorithms behind symbolic integration (due to Liouville, Ritt, Risch, Bronstein et al.) are discussed in prior questions here, e.g. the transcendental case and algebraic case.
-For general references on symbolic computation see any of the standard textbooks, e.g. Geddes et al. Algorithms for computer algebra, Grabmeier et al: Computer algebra handbook, von zur Gathen: Modern computer algebra, and Zippel: Effective polynomial computation, and many other books. See also the Journal of Symbolic Computation and various conferences: SIGSAM, ISSAC, EUROCAL, etc.<|endoftext|>
-TITLE: What is the difference between topological and metric spaces?
-QUESTION [27 upvotes]: What is the difference between a topological and a metric space?

-REPLY [2 votes]: If metric space is interpreted generally enough, then there is no difference between topology and the theory of metric spaces (with continuous mappings). Building on ideas of Kopperman, Flagg proved in this article that with a suitable axiomatization, that of value quantales, every topological space is metrizable. This gives rise to a precise equivalence between the category of topological spaces and the category of generalized metric spaces, presented here (alg. univ.).<|endoftext|>
-TITLE: "Closed" form for $\sum \frac{1}{n^n}$
-QUESTION [74 upvotes]: Earlier today, I was talking with my friend about some "cool" infinite series and the values they converge to, like the Basel problem, the Madhava-Leibniz formula for $\pi/4, \log 2$ and similar alternating series, etc.
-One series that popped into our discussion was $\sum\limits_{n=1}^{\infty} \frac{1}{n^n}$.
-Proving the convergence of this series is trivial but finding the value to which it converges has defied me so far. Mathematica says this series converges to $\approx 1.29129$.
-I tried Googling about this series and found very little information about this series (which is actually surprising since the series looks cool enough to arise in some context).
-We were joking that it should have something to do with $\pi,e,\phi,\gamma$ or at the least it must be a transcendental number :-).
-My questions are:

-What does this series converge to?
-Does this series arise in any context and are there interesting trivia to be known about this series?
-
-I am actually slightly puzzled that I have not been able to find much about this series on the Internet. (At least my Google search did not yield any interesting results).
-
-REPLY [3 votes]: Liouville's theorem implies that if your constant is irrational then it is in fact transcendental. Unfortunately, the trivial way of proving irrationality doesn't work. On second thought, the same problem also prevents us from applying Liouville's theorem!<|endoftext|>
-TITLE: Help understand the proof of infinitely many primes of the form $4n+3$
-QUESTION [17 upvotes]: This is the proof from the book:
-
-Theorem. There are infinitely many primes of the form $4n+3$.
-
-Lemma. If $a$ and $b$ are integers, both of the form $4n + 1$, then the product $ab$ is also in this form.
-Proof of Theorem:
-Let us assume that there are only a finite number of primes of the form $4n + 3$, say
-$$p_0, p_1, p_2, \ldots, p_r.$$
-Let $$Q = 4p_1p_2p_3\cdots p_r + 3.$$
-Then there is at least one prime in the factorization of $Q$ of the form $4n + 3$. Otherwise, all of these primes would be of the form $4n + 1$, and by the Lemma above, this would imply that $Q$ would also be of this form, which is a contradiction. However, none of the primes $p_0, p_1,\ldots, p_r$ divides $Q$. The prime $3$ does not divide $Q$, for if $3|Q$ then $$3|(Q-3) = 4p_1p_2p_3\cdots p_r,$$ which is a contradiction. Likewise, none of the primes $p_j$ can divide $Q$, because $p_j | Q$ implies $p_j | ( Q - 4p_1p_2\cdots p_r ) = 3$, which is absurd. Hence, there are infinitely many primes of the form $4n +3$. END
-From "however, none of the primes ...." to the end, I am totally lost!
-My questions:
-
-Is the author assuming $Q$ is prime or not?
-Why do none of the primes $p_0, p_1,\ldots, p_r$ divide $Q$? Based on what argument?
-
-Can anyone share a better proof?
-Thanks.
-
-REPLY [3 votes]: I have seen many proofs of this here, but I think a $basic$ proof is still lacking. I attempt one below:
-(1) First note that $any$ integer may be written in the form $4k, 4k+1, 4k+2$, and $4k+3$. This is the result of the $Division$ $Algorithm$.
-(2) $Any$ number is either a prime or a product of primes - Fundamental Theorem of Arithmetic (FTA)
-(3) Also note this $lemma$: a product of two or more integers of the form $4n+1$ is also of the same form. To show this is simple:
-Take two numbers of the form $4n+1$, say ${N}_1=4m+1$ and ${N}_2=4m'+1$. Straight multiplication gives $16mm'+4(m+m')+1$ = $4[4mm'+(m+m')]+1$ = $4k+1$, where $k=4mm'+(m+m')$, i.e. the product is of the same $form$ as its multiplicands.
-Now we have all we need. As already mentioned in many answers here, we use a modified Euclidean proof of the infinitude of primes, which is a proof by contradiction (It is a good idea to familiarize oneself with the said Euclidean proof before proceeding)
-In anticipation of a contradiction, we assume there are only a finite number of primes of the form $4k+3$. We list them as follows: ${q}_1,{q}_2,...,{q}_s$. Similar to the Euclidean proof, we form a positive integer $N=4{q}_1{q}_2\cdots{q}_s-1$.
-Note the following about $N$:
-(1) It is odd (because $4{q}_1{q}_2\cdots{q}_s$ is even, and an even number minus one is odd)
-(2) $N$ may be written as $N=4({q}_1{q}_2\cdots{q}_s-1)+3$, i.e. $N$ is of the form $4k+3$
-(3) Per FTA, we may write $N$ as a product of primes, say $N={r}_1{r}_2\cdots{r}_t$. But remember all these primes must be odd (i.e. ${r}_i=2$ is excluded).
-(4) And the only form that $every$ ${r}_i$ may take is either $4k+1$ or $4k+3$.
-(5) This is a $crucial$ point: $N$ $cannot$ contain only primes of the form $4k+1$. If this were the case, by the lemma above, $N$ would be of the form $4n+1$, which is clearly not the case - $N$ is of the form $4n+3$. Therefore, we conclude that $N$ must contain at least $one$ factor of the form $4k+3$.
-(6) Lastly, we show that if $N$ had a factor of the form $4k+3$, it leads to the anticipated contradiction:
-Let's refer to this prime factor of $N$ as ${q}_i$. Since all the prime factors of the form $4k+3$ are limited to the list ${q}_1,{q}_2,...,{q}_s$ by assumption, ${q}_i$ must belong in this list. But this implies that ${q}_i$ divides $N$. Since ${q}_i$ also divides $4{q}_1{q}_2\cdots{q}_s$, it divides the difference:
-$${q}_i \mid 4{q}_1{q}_2\cdots{q}_s - N = 1.$$
-And we have arrived at a contradiction. Therefore we conclude there exist infinitely many primes of the form $4k+3$.<|endoftext|>
-TITLE: places and primes
-QUESTION [6 upvotes]: What does it mean for a place to divide a prime in an algebraic number field?
-
-REPLY [3 votes]: A place is a valuation (or an equivalence class of valuations) $v:K^\times\rightarrow{\Bbb Z}$. The place $v$ divides the rational prime $p$ when $v(p)>0$, or equivalently when $v$ extends the $p$-adic valuation on $\Bbb Q$.
-Since (non-archimedean) places in a number field $K$ correspond to prime non-zero ideals in the ring of integers ${\cal O}_K$, another formulation is that the valuation $v$ corresponding to the prime ideal ${\cal P}_v\subset{\cal O}_K$ divides $p$ when ${\cal P}_v$ appears in the primary decomposition of the ideal $p{\cal O}_K$.<|endoftext|>
-TITLE: integration of a function
-QUESTION [11 upvotes]: I found this explanation in a journal paper but I could not understand it. Can someone give me an explanation or possibly a proof of the following? If
-$$\frac{\mathrm{d}V(t)}{\mathrm{d}t}=\sqrt{2}\sum_{h=1}^{H}h\omega V_{h}\cos\left(h\omega t+\frac{\pi }{2}\right),$$
-then why is the integral over a whole period
-$$\frac{1}{T}\int_{0}^{T} \left( \frac{\mathrm{d} V(t)}{\mathrm{d} t} \right)^{2}dt=\omega \sum_{h=1}^{H}h^{2}V_{h}^{2}?$$
-I have a problem with the power of $\omega$; my solution returns $\omega^2$, while the power of $\omega$ in the answer is one. Here is my solution:
-$$\frac{1}{T}\int_{0}^{T}\ \left( \frac{dV}{dt} \right)^{2}dt=\frac{2\omega ^{2}}{T}\int_{0}^{T}\sum_{h=1}^{H}h^{2}V_{h}^{2}\sin^{2}(h\omega t)dt$$
-and over a whole period:
-$$\frac{1}{T}\int_{0}^{T}\sin^{2}(h\omega t)dt=\frac{1}{2}$$
-then we will have
-$$\omega ^{2}\sum h^{2}V_{h}^{2} $$
-not
-$$\omega \sum h^{2}V_{h}^{2}.$$
-Why?
-
-REPLY [7 votes]: Your solution is right. It should be a typo in the paper. Here is my evaluation confirming yours.
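-Before that, a quick numerical cross-check (a small numpy sketch of my own; the truncation $H$, the amplitudes $V_h$ and the frequency $\omega$ are arbitrary test values):
-
-    import numpy as np
-
-    H, w = 4, 3.0                            # arbitrary truncation and angular frequency
-    V = np.array([0.7, -0.3, 0.5, 0.2])      # arbitrary test amplitudes V_1..V_H
-    T = 2*np.pi/w
-    t = np.linspace(0.0, T, 400000, endpoint=False)   # uniform grid over one period
-    h = np.arange(1, H + 1)[:, None]
-    dV = np.sqrt(2)*(h*w*V[:, None]*np.cos(h*w*t + np.pi/2)).sum(axis=0)
-    print((dV**2).mean())                    # (1/T) * integral of (dV/dt)^2 over [0,T]
-    print(w**2*((h.ravel()**2)*V**2).sum())  # omega^2 * sum of h^2 V_h^2
-
-The two printed values agree, so the exponent of $\omega$ is indeed $2$. Now the symbolic evaluation.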
Since -$$\begin{eqnarray*} -\frac{dV(t)}{dt} &=&\sqrt{2}\sum_{h=1}^{H}h\omega V_{h}\cos \left( h\omega t+% -\frac{\pi }{2}\right) \\ -&=&-\sqrt{2}\sum_{h=1}^{H}h\omega V_{h}\sin h\omega t, -\end{eqnarray*}$$ -and assuming $\omega$ is the angular frequency given by -$$\omega =\frac{2\pi }{T},$$ -we have -$$\left( \frac{dV(t)}{dt}\right) ^{2}=2\omega ^{2}\left( -\sum_{h=1}^{H}hV_{h}\sin \left( h\omega t\right) \right) ^{2}$$ -and -$$\begin{eqnarray*} -\frac{1}{T}\int_{0}^{T}\left( \frac{dV(t)}{dt}\right) ^{2}dt &=&\frac{\omega -}{2\pi }\int_{0}^{2\pi /\omega }\left( \frac{dV(t)}{dt}\right) ^{2}dt \\ -&=&\frac{\omega }{2\pi }\int_{0}^{2\pi /\omega }2\omega ^{2}\left( -\sum_{h=1}^{H}hV_{h}\sin \left( h\omega t\right) \right) ^{2}dt \\ -&=&\frac{\omega ^{3}}{\pi }\int_{0}^{2\pi /\omega }\left( -\sum_{h=1}^{H}hV_{h}\sin \left( h\omega t\right) \right) ^{2}dt. -\end{eqnarray*}$$ -The integrand $\left( \sum_{h=1}^{H}hV_{h}\sin \left( h\omega t\right) -\right) ^{2}$ is a sum of terms of two different types: -i) $h^{2}V_{h}^{2}\sin ^{2}\left( h\omega t\right) $ and -ii) $k\left( pV_{p}\sin \left( p\omega t\right) \cdot qV_{q}\sin \left( -q\omega t\right) \right) \,$, with $p\neq q$ and $p,q,k\in\mathbb{N}$. -The second type terms do not contribute to the last integral, because the $\sin nx$ ($n\in\mathbb{N}$) functions form an orthogonal system over $[0,2\pi ]$: -$$\int_{0}^{2\pi /\omega }k\left( pV_{p}\sin \left( p\omega t\right) \cdot -qV_{q}\sin \left( q\omega t\right) \right) dt=0\quad p\neq q$$ -The sum of the first type ones is $\sum_{h=1}^{H}h^{2}V_{h}^{2}\sin ^{2}\left( -h\omega t\right) $. Thus -$$\begin{eqnarray*} -\frac{1}{T}\int_{0}^{T}\left( \frac{dV(t)}{dt}\right) ^{2}dt &=&\frac{\omega -^{3}}{\pi }\int_{0}^{2\pi /\omega }\sum_{h=1}^{H}h^{2}V_{h}^{2}\sin -^{2}\left( h\omega t\right) dt \\ -&=&\frac{\omega ^{3}}{\pi }\sum_{h=1}^{H}h^{2}V_{h}^{2}\int_{0}^{2\pi -/\omega }\sin ^{2}\left( h\omega t\right) dt \\ -&=&\frac{\omega ^{3}}{\pi }\sum_{h=1}^{H}h^{2}V_{h}^{2}\cdot \frac{\pi }{% -\omega } \\ -&=&\omega ^{2}\sum_{h=1}^{H}h^{2}V_{h}^{2}, -\end{eqnarray*}$$ -because -$$\begin{eqnarray*} -\int \sin ^{2}\left( h\omega t\right) dt &=&\frac{1}{h\omega }\left( -\frac{1% -}{2}\cos h\omega t\sin h\omega t+\frac{1}{2}h\omega t\right) \\ -\int_{0}^{2\pi /\omega }\sin ^{2}\left( h\omega t\right) dt &=&\frac{\pi }{% -\omega }. -\end{eqnarray*}$$ - -REPLY [3 votes]: I agree, it should be $\omega^2$. The whole thing is in fact just the Pythagorean theorem: the functions $\sqrt{2} \cos(\dots)$ are orthonormal in the space $L^2([0,T])$, and the integral is the square of the $L^2$ norm of $dV/dt$, hence the sum of the squares of the coefficients: $\sum (h\omega V_h)^2$.<|endoftext|> -TITLE: Why Are These Two Morphisms the Same? -QUESTION [9 upvotes]: I am reading Max Karoubi's "K-Theory" and I think I'm overlooking some trivial fact. We have a vector bundle $E\rightarrow X$ and a morphism $p:E\rightarrow E$ with $p^2=p$. He is showing that $\ker p$ is locally trivial. First he assumes that $E=X\times V$ for a vector space $V$. Here's where I'm stuck: -He defines $f:X\longrightarrow \operatorname{End}(V)$ by -$$ -f(x)=1-p_{x_0}-p_x+2p_xp_{x_0}, -$$ -where $p_x$ is the restriction of $p$ to the fiber over $x$ and $x_0$ is a basepoint. The claim is that $p_{x_0}\circ f(x)=f(x)\circ p_x$. When I compute both sides I get -$$ -2p_{x_0}p_xp_{x_0}-p_{x_0}p_x=2p_{x}p_{x_0}p_{x}-p_{x_0}p_x -$$ -which says -$$ -2p_{x_0}p_xp_{x_0}=2p_{x}p_{x_0}p_{x}. -$$ -Why is that true? Thanks. 
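-A quick numerical experiment (a numpy sketch of my own, with random non-orthogonal idempotents standing in for $p_{x_0}$ and $p_x$) suggests the displayed identity fails for general projections:
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-
-    def random_idempotent(n=3):
-        # projection onto span(u) along a random complementary hyperplane
-        u = rng.normal(size=(n, 1)); v = rng.normal(size=(n, 1))
-        return (u @ v.T) / (v.T @ u).item()   # P @ P == P by construction
-
-    P, Q = random_idempotent(), random_idempotent()       # stand-ins for p_{x_0}, p_x
-    print(np.allclose(P @ P, P), np.allclose(Q @ Q, Q))   # True True
-    print(np.allclose(2 * P @ Q @ P, 2 * Q @ P @ Q))      # False in general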
-
-REPLY [2 votes]: That's probably a typo. Try $p_x f(x)= f(x) p_{x_0}$ or, alternatively, $f(x)=1-p_{x_0}-p_x +2 p_{x_0} p_{x}$ in the rest of the proof (which I can't see).
-As given, the equality is not true. Think of 2 lines, say of slope $2$ and $1/2$, in $\mathbb{R}^2$ and the projections along the vectors $(0,1)$ and $(1,0)$. Then the image of the first composition is all of the first line, and of the second one all of the second line. No way such maps can be equal.<|endoftext|>
-TITLE: Symbols for Quantifiers Other Than $\forall$ and $\exists$
-QUESTION [8 upvotes]: The symbols $\forall$ and $\exists$ denote "for all" and "there exists" quantifiers. In some papers, I saw the (not so common) quantifiers $Я$ and $\exists^+$, denoting "for a randomly chosen element of" and "for most elements in", respectively.
-Are there other symbols for quantifiers?
-I'm especially interested in quantifiers for:
-
-for all but finitely many elements of...
-for infinitely many elements of...
-
-
-Edit: After seeing some of the comments, I found the list of logic symbols and the table of mathematical symbols, which could be useful for others.
-
-REPLY [2 votes]: It seems to have gone unmentioned that $∃_∞$ and $∀_∞$ are both first-order definable when, for example, the domain is the natural numbers. We then have:
-there are infinitely many
-$∃_∞x A(x) \Leftrightarrow ∀y ∃x (x \geq y \wedge A(x))$
-for all but finitely many
-$∀_∞x A(x) \Leftrightarrow ∃y ∀x (x \geq y \Rightarrow A(x))$
-Not all exotic quantifiers are that simple, see for example:
-Modal Quantifiers, Natasha Alechina - 1995
-http://www.cs.nott.ac.uk/~psznza/papers/Alechina:95d.pdf<|endoftext|>
-TITLE: How to calculate a decimal power of a number
-QUESTION [51 upvotes]: I wish to calculate a power like $$2.14 ^ {2.14}$$
-When I ask my calculator to do it, I just get an answer, but I want to see the calculation.
-So my question is, how to calculate this with a pen, paper and a bunch of brains.
-
-REPLY [5 votes]: We can find $2.14 ^{2.14}$ using basic arithmetic operations $+,-,/,*$.
-Use the binomial theorem: for a rational number $n$ and $-1 < x < 1$,
-$$(1+x)^n =1+nx+\frac{n(n-1)}{2!}x^2+\cdots$$
-Note that on the left-hand side the power $n$ is a fractional number, but on the right-hand side the powers are integers; that is, on the right-hand side each term can be calculated using the basic operations $+,-,*,/$.
-Outline of the solution:
-$2.14^{2.14}=(1.14+1)^{2.14}$
-$=(1.14^{2.14}) * (1+1/1.14)^{2.14}$
-$=(1+0.14)^{2.14} * (1+1/1.14)^{2.14}$
-Using the binomial theorem twice (to, say, 5 decimal places) and multiplying, we get the answer.<|endoftext|>
-TITLE: Proof that $\frac{S_n}{n}$ converges almost surely to $\mu$
-QUESTION [5 upvotes]: I'm trying to show that given $(X_i)$ i.i.d., $E[X_i^2] < \infty$, $\mu := E[X_i]$ then $P\Big [ \lim_{n \rightarrow \infty} \frac{S_n}{n} = \mu \Big ] = 1$ where $ S_n := \sum_{k=1}^n X_k$.
-So far, I have rewritten $P\Big [ \lim_{n \rightarrow \infty} \frac{S_n}{n} = \mu \Big ]$ as
-$$ \lim_{k \rightarrow \infty} P \Big [ \cap_{n \geq k} \{ \omega \Big | |\frac{S_n}{n} - \mu| < \varepsilon \} \Big ]$$
-But I'm not sure how to proceed from here. I have
-$$ \sum_{n} P\Big [ |X_n - X| > \varepsilon \Big] < \infty \Rightarrow P\Big [ \lim_n X_n = X \Big ] = 1$$ which I think I should apply but I don't see how. Can anyone help me with this? Also, I don't see where $E[X_i^2] < \infty$ comes in. Many thanks for your help!
-
-REPLY [6 votes]: A proof of the strong law of large numbers can be more
-or less complicated depending on your hypotheses.
In your case, since you assume that $E[X_i^2]<\infty$ there is
-a straightforward proof. I am taking this from section 7.4 of the third
-edition of Probability and Random Processes by Grimmett and Stirzaker.
-First, by splitting into positive and negative parts we can assume (without
-loss of generality) that $X_i\geq 0$.
-Second, using the positivity, it suffices to
-prove that $S_{n^2}/n^2\to\mu$ almost surely; that is, we only need
-convergence along that subsequence.
-Next, Chebyshev's inequality gives
-$$P(|S_{n^2}/n^2-\mu|>\varepsilon_n)\leq{E[X_i^2]\over n^2\varepsilon_n^2}.$$
-Choosing $\varepsilon_n\downarrow 0$ so slowly that the right hand side above
-is summable, Borel-Cantelli finishes the job since then
-$$P(|S_{n^2}/n^2-\mu| \leq \varepsilon_n \mbox{ for all but finitely many }n) = 1.$$
-In fact, the strong law of large numbers holds under the weaker hypothesis
-that $E[|X_i|]<\infty$. There are various proofs in the literature, but every
-student of probability ought to be familiar with Etemadi's tour de force elementary
-proof. Etemadi uses a clever truncation argument and similar tools to those above, and only needs pairwise independence of the $X_i$'s, not full independence.
-Some good textbooks like Grimmett and Stirzaker (section 7.5), Billingsley's Probability and Measure (2nd edition), or Durrett's Probability: Theory and Examples (2nd edition) include Etemadi's treatment.
-N. Etemadi, An elementary proof of the strong law of large numbers,
-Z. Wahrscheinlichkeitstheorie verw. Gebiete 55, 119-122 (1981)<|endoftext|>
-TITLE: Linear transformations and norm
-QUESTION [7 upvotes]: I am studying Linear Algebra II, and I came across several questions in which, for a certain linear transformation ($T\colon\mathbf{V}\to\mathbf{V}$) I was told that:
-$$||T(a)|| \leq ||a||.$$
-I am not completely certain how to use this information. For instance, consider the following question (please forgive my translation, it's the first time I write math in English):
-
-For a linear transformation $T\colon\mathbf{V}\to\mathbf{V}$ in a unitary space [i.e., complex inner product space], such that
-
-$|c|=1$ for every eigenvalue $c$ of $T$;
-$||T(a)|| \leq ||a||$ for every vector $a$ in $\mathbf{V}$;
- prove that $T$ is a unitary operator.
-
-
-How does the fact that $||T(a)|| \leq ||a||$ help me?
-Thanks.
-
-REPLY [4 votes]: Every linear transformation on a finite-dimensional complex inner product space is unitarily triangularizable. See for example Hogben's Handbook of linear algebra. This means you can find an orthonormal basis $e_1,\ldots,e_n$ such that $Te_k=\sum_{i\leq k}a_{ik}e_i$. The eigenvalues of a triangular matrix are the diagonal entries, so $|a_{kk}|=1$ for each $k$. Thus $1\geq\|Te_k\|^2=\sum_{i\leq k}|a_{ik}|^2\geq|a_{kk}|^2=1$, forcing $a_{ik}=0$ for $i\lt k$. So the basis actually diagonalizes $T$, and it is now straightforward to show that $T$ is unitary regardless of which definition you use.<|endoftext|>
-TITLE: Upper bound for zeros of holomorphic function
-QUESTION [5 upvotes]: I'd appreciate some help with the following problem from Conway's book on functions of one complex variable:
-Let $f$ be analytic in $\overline B (0;R)$ with $|f(z)|\le M$ for $|z|\le R$ and $|f(0)|=a>0$.
Show that the number of zeros of $f$ in $B(0;R/3)$ is less than or equal to $$\frac{1}{\log(2)}\log\left(\frac M a\right)$$
-I know that the number of zeros is given by
-$$n = \frac 1 {2\pi i}\int_{|z|=R/3} \frac{f'}{f} \, dz
-$$
-And there is a hint to look at $g(z) = f(z) \prod_{k=1}^n (1-z/z_k)^{-1}$, where the $z_k$ are the zeros of $f$. I have given it some time now, but don't seem to get anywhere. In particular I don't see how the logarithm, $M, a$ come into play.
-The problem is in the chapter on the maximum modulus theorem, if that's of any help.
-Might someone maybe give me a hint?
-Cheers, S.L.
-
-REPLY [6 votes]: The function $g$ is holomorphic in $B(0,R)$ and continuous on $\bar{B}(0,R)$. For $|z|=R$ we have $\left|\frac{z}{z_k}\right|\geq 3$ and one obtains $|g(z)|\leq 2^{-n}M$. From the maximum principle one can infer that this also holds for $|z|<R$. In particular $a=|f(0)|=|g(0)|\leq 2^{-n}M$, hence $2^n\leq M/a$ and therefore $n\leq \frac{\log(M/a)}{\log 2}$.<|endoftext|>
-TITLE: How Do You Actually Do Your Mathematics?
-QUESTION [111 upvotes]: Better yet, what I'm asking is how do you actually write your mathematics?
-I think I need to give a brief background: Through most of my childhood, I'd considered myself pretty good at math, up through the high school level. I easily followed mathematical concepts introduced in my classes and even did a few competitions; I definitely wouldn't say I was a star of the caliber one meets when one ventures out into the bigger ponds, but I thought I was decent and convinced myself I would major in math when I entered college.
-That changed after a couple of years when I hit my first Real Analysis class that used Rudin's book; that was the first class, I think, I took that really required more than "expand-a-definition" type proofs and my struggle to find intuition and understanding there impacted my mathematical self-confidence. I eventually switched majors, with a bit of regret.
-One thing that got me, I think, was the veritable explosion of superscripts and subscripts that one encounters for the first time in Real Analysis. I'd often find myself struggling to set up the machinery of what I was trying to prove, lost in the notation. How do good mathematicians format their work on paper so as not to get lost in the $i$s, $j$s, and $k$s and keep track of what they're investigating? I remember that dealing with subsequences of sequences to show that limits did or did not exist got especially hairy in this way... writing things like $s_{n_{k_{\epsilon}}}$ while remembering what my goal at each "level" was proved difficult. I'd be interested in knowing if aspiring mathematicians and/or professional mathematicians scribble marginalia or have a system to overcome such problems.
-Another thing that got me was what I personally called "consider..." statements. Many times, on this site, the most talented commenters will say "Consider $f(n)$" or "Consider the transformation $T\colon U \rightarrow V$", where in the first case this gives a summation that wonderfully telescopes/has an obvious bound, or in the second case transforms the problem into a trivial application of the rank-nullity theorem, or something like that. Mathematics is a subject replete with geniuses, I understand that, but how do mere mortals investigate such functions and "massage" them into doing what they want? When good mathematicians get intuitionistic ideas, what (explicit) steps do they take to formalize them, especially when it is likely that the first idea is murky or wrong?
(Aside: I've been given "use numerical examples" as advice before, but sometimes I think to myself, "I've been dealing with $\mathbb{Z}$ since I was 6 years old, and not so much with Dedekind's definition of the real numbers...") -There's lots more I could ask, but I want to keep this question tractable, so I guess I might summarize by asking: How do you [professional and aspiring mathematicians] organize your math "notebook", and what perhaps idiosyncratic methods do you employ to be original and clever within it? I know there will be no strict formulas anyone can give; mathematicians are scientists of the abstract; I understand that the subject is acclaimed partly because it's so intellectually and individually demanding. But I think even acclaimed scientists draw on Springer's Protocols and Nature Methods...There seems to me a bit of a jump between the dryly algorithmic way one is taught to do math in high school and the more abstruse methods at the undergraduate level. I'd be interested if anyone here could help me bridge that gap, if only for my personal fulfillment. -(Apologies in advance if the question is ill-posed or too subjective in its current form to meet the requirements of the FAQ; I'd certainly appreciate any suggestions for its modification if need be.) - -REPLY [21 votes]: How do good mathematicians format their work on paper so as not to get lost in the is, js, and ks and keep track of what they're investigating? - -To be honest, part of this is just getting used to juggling several things in your head at once. Fortunately, as with many other skills, this is trainable: try, for example, doing Sudoku puzzles with a pen. -There are, of course, other ways to keep your work organized. The first thing to do when solving a problem is to write down all of the data you're given and write down the goal of the problem. The second thing is to unravel all of the definitions, working through all the quantifiers. This will get you surprisingly far, at least in real analysis. And then the third thing is to actually think about it. If you can't hold all the quantifiers in your head at once, practice reading the statements aloud slowly until you can (and see the first paragraph). - -When good mathematicians get intuitionistic ideas, what (explicit) steps do they take to formalize them, especially when it is likely that first idea is murky or wrong? - -Write down an argument that follows the lines of the intuition, at least for a simpler case or version of the problem. If it doesn't work on the simple version, it probably won't work on the hard version. If it does, you can try to figure out whether it extends to the original question and, if not, what the barrier is. -This is easier said than done. The bottom line is you need to practice the skill of turning your intuitions into proofs. This comes in a few steps: first you need to learn how to prove things, then you need to learn how to train your intuition to help you prove things more easily. Like anything else, this takes hard work and practice and there isn't a magical shortcut. -Terence Tao has written some very clear stuff on this and related subjects. - -How do you [professional and aspiring mathematicians] organize your math "notebook", and what perhaps idiosyncratic methods do you employ to be original and clever within it? - -I'm not completely sure what this means. 
-
-There seems to me a bit of a jump between the dryly algorithmic way one is taught to do math in high school and the more abstruse methods at the undergraduate level. I'd be interested if anyone here could help me bridge that gap, if only for my personal fulfillment.
-
-Many people I know bridged the gap through competitions such as the AMC. Training for competitions is not everybody's style, but it is a chance to be exposed to interesting topics not covered in the high school curriculum and also a chance to hone problem-solving and proof-writing skills (at the Olympiad level). A book I benefited from enormously while doing this is Engel's Problem-Solving Strategies, which is geared fairly specifically to Olympiad preparation but is a great source of problems and elementary techniques for solving them. For more general advice, Polya's How to Solve It comes highly recommended (although I have not read it myself).<|endoftext|>
-TITLE: Prove that the eigenvalues of a block matrix are the combined eigenvalues of its blocks
-QUESTION [38 upvotes]: Let $A$ be a block upper triangular matrix:
-$$A = \begin{pmatrix} A_{1,1}&A_{1,2}\\ 0&A_{2,2} \end{pmatrix}$$
-where $A_{1,1} \in \mathbb{C}^{p \times p}$, $A_{2,2} \in \mathbb{C}^{(n-p) \times (n-p)}$. Show that the eigenvalues of $A$ are the combined eigenvalues of $A_{1,1}$ and $A_{2,2}$.
-
-I've been pretty much stuck looking at this for a good hour and a half, so any help would be much appreciated. Thanks.
-
-REPLY [2 votes]: For another approach, you can use the Gershgorin disc theorem (sometimes Hirschhorn due to pronunciation differences between alphabets) to prove that the discs for the individual blocks are the same as the discs for the large matrix, so the sets of possible eigenvalues must be the same. This is because the lower-left block contributes nothing to the disc radii: all of its entries are $0$, and $|0| = 0$ and $0+0=0$.<|endoftext|>
-TITLE: Bourbaki exercise on connected sets
-QUESTION [11 upvotes]: This is Exercise I.11.4 of Bourbaki's General Topology.
-Let $X$ be a connected space.
-a) Let $A$ be a connected subset of $X$, $B$ a subset of $\complement A$ which is both open and closed in $\complement A$. Show that $A\cup B$ is connected.
-b) Let $A$ be a connected subset of $X$ and $B$ a component of the set $\complement A$. Show that $\complement B$ is connected (use a)).
-I have managed to show a).
-My attempts to show b): Let $C$ be a nonempty clopen subset of $\complement B$. By a), $B\cup C$ is connected. Since $B$ is a component of $\complement A$, $B\cup C$ cannot be a subset of $\complement A$. So $C$ has to contain an element of $A$. Since $A$ is connected, $A\subset C$. But I'm stuck here. I can't see how this implies $C=\complement B$.
-Can someone point me in the right direction?
-
-REPLY [7 votes]: Suppose $B^{c} = U \cup V$ with $U,V$ disjoint and open. Then $U$ and $V$ are also closed and if they are non-empty, they both must contain $A$ by your argument, so they can't be disjoint.<|endoftext|>
-TITLE: How to show that $L^p$ spaces are nested?
-QUESTION [12 upvotes]: Suppose $1\le p_1<p_2<\infty$ and $f\in L^{p_2}[a,b]$. How does one show that $f\in L^{p_1}[a,b]$, i.e. that the $L^p$ spaces on a bounded interval are nested?
-
-REPLY: Let $A=\{x:|f(x)|\le 1\}$ and $B=\{x:|f(x)|>1\}$. Then $$\int_a^b|f|^{p_1} = \int_A|f|^{p_1}+\int_B|f|^{p_1}\leq \int_A 1 + \int_B |f|^{p_2}\leq (b-a) +\int_a^b|f|^{p_2}<\infty,$$ so $f$ is in $L^{p_1}$.<|endoftext|>
-TITLE: When can we recover a manifold when we attach a $2n$-cell to $S^n$?
-QUESTION [9 upvotes]: I have a question related to this one. In my answer I was going to try and say something about the possible manifolds that might arise in this way, i.e.
as mapping cones of elements of $\pi_{2n-1}(S^n)$. Certainly not all of them will be manifolds. At first I felt like there should be a reason why we'd need to be using a torsion-free homotopy generator if we want a manifold (in which case we could appeal to Serre's theorem that the only non-torsion part of the homotopy groups of spheres is $\pi_n(S^n)=\mathbb{Z}$ and $\pi_{4n-1}(S^{2n})=\mathbb{Z}$), but then I realized I couldn't think of any reason why that should be true. Also I was hoping the Hopf invariant would enter into the picture too, since that's a pretty obvious tool at our disposal, but I couldn't get anywhere with that either...and I think at this point I'm more or less all out of tricks. Does anyone have any ideas?
-
-REPLY [4 votes]: I believe this is related to the Hopf invariant one problem. Let $X = S^n \cup_f B^{2n}$.
-The attaching map from $B^{2n}$ is a map $f$ from $S^{2n-1}$ to $S^n$. I think one way of thinking about the Hopf invariant of this map is as follows.
-Let $x\in H^n(X)$. Then $x^2 = k y$ where $y$ generates $H^{2n}(X)$. The number $k$ is the Hopf invariant.
-In order for $X$ to be a manifold, it must satisfy Poincare duality (so long as $n>1$ - if $n=1$ different stuff can happen), which implies that $k = \pm 1$. But Adams has shown you can only have Hopf invariant 1 when $n = 2, 4, 8$, getting you something homotopy equivalent to $\mathbb{C}P^2$, $\mathbb{H} P^2$ or $\mathbb{O}P^2$.
-Edit: As pointed out by Denis Gorodkov (communicated by G. Sassatelli), this last sentence is false. In the case of $\mathbb{C}P^n$, it is true as any manifold with the cohomology ring of $\mathbb{C}P^n$ is homotopy equivalent to it. But the cohomology rings of $\mathbb{H}P^2$ and $\mathbb{O}P^2$ do not determine the homotopy type - see the comments.
-End edit
-In the $n=1$ case, you're attaching a disc to a circle and getting a manifold. Here, since $\pi_1(X)$ may be nontrivial, one cannot necessarily use Poincare duality. However, by a simple analysis, the only manifold this can give rise to is $\mathbb{R}P^2$.
-Edit: If one wants to avoid cases ($n = 1$ vs $n > 1$) then one can work with cohomology with $\mathbb{Z}/2\mathbb{Z}$ coefficients instead.<|endoftext|>
-TITLE: Is learning Haskell a bad thing for a beginner mathematician?
-QUESTION [9 upvotes]: Haskell is a programming language which uses some concepts from category theory like functor, monad, etc. My question is: will learning intuitive concepts about categories from Haskell ruin my intuition when I learn category theory as a mathematician, or could it help develop it?
-
-REPLY [4 votes]: One problem is that "class Functor", "class Monad" are special cases of categorical concepts, namely the strong ones. With Haskell, you are working in one specific category. That may hinder learning category theory in full generality. Of course, this is just my POV, someone can perceive that obstacle as a no-brainer.
-A concrete example. Try to define "instance Monad", i.e. a monad, on the category of rings which sends a ring R to the ring of polynomials with coefficients in R.<|endoftext|>
-TITLE: For a polygon on the complex plane, when are the vertex 'Fourier coefficients' non-zero
-QUESTION [5 upvotes]: Consider an $n$-sided convex polygon $P$ that contains the origin in the complex plane. Let the $j$-th vertex be denoted $z_j = r_j e^{i\theta_j}$ ($0 \leq \theta_j < 2 \pi$) for $j= 1 \dots n$.
I'm interested in non-zero values of -$$ a_k(P)= \sum_{j=1}^{n} \frac{z_{j}^{k}}{|z_{j}|^{k-1}}=\sum_{j=1}^{n} r_j e^{ik\theta_j} \textrm{ for } k \geq 1.$$ -Lemma: Given a integer $m \geq 2$, if, for every $k \geq 1$ where $m$ does not divide $k$, $a_k(P)=0$, then the polygon $P$ is $m$-fold rotationally symmetric, that is, a rotation of $e^{i\frac{2 \pi}{m}}$ rotates the polygon into itself. -Pseudo-Proof: Re-imagine the $n$-sided polygon $P$ as a $2 \pi$-periodic function $f(\theta)$ of the angle $\theta$ where each vertex $z_j$ is represented as a Dirac delta function at $\theta_j$ with integral $r_j$, that is, -$$f(\theta)=\sum_{j=1}^{n} r_j \delta (\theta - \theta_j).$$ The calculation $a_k(P)$ is then just $2 \pi$ times the $k$-th Fourier coefficient for $f(\theta)$. If, for all $k$ where $m$ does not divide $k$, $a_k(P)=0$, then the corresponding Fourier coefficients of $f(\theta)$ are all zero, implying that $f(\theta)$ is $\frac{2 \pi}{m}$-periodic. Hence the polygon will be $m$-fold rotationally symmetric. $\square$ -Firstly, is there a good way to prove this lemma without resorting to non-converging Fourier series? -Then, in the same vein, the lemma implies that, if $P$ is not rotationally symmetric, then for every $m$, there are values of $k$, that are not multiples of $m$ for which $a_k(P) \neq 0$. But I believe much more is true, namely, that for 'almost' all $k$, $a_k(P) \neq 0$. In particular, if $k$ is the smallest so that $a_k(P) \neq 0$, I'd like to show that there is a $k'$ relatively prime to $k$ so that $a_{k'}(P) \neq 0$, but I'm not sure how to approach the issue. Any thoughts? - -REPLY [3 votes]: Here is a proof of the lemma, where I now assume $0\leq \theta_1<\ldots<\theta_n<2\pi$ and all $r_j>0$; furthermore I replace $e^{i k\theta_j}$ by $e^{-i k\theta_j}$ in the definition of $a_k(P)$. -Let $b(t)$ be the $2\pi$-periodic box function which is $=1$ for $0\leq t<{2\pi\over m}$ and $=0$ for ${2\pi\over m}\leq t<2\pi$, and consider the function -$f(t):=\sum_{j=1}^n r_j b(t-\theta_j)$. For $k\ne0$ the Fourier coefficients of $f$ compute to -$$\hat f(k)={i\over 2\pi k}(e^{-2\pi i k/m}-1)\> a_k(P)=0,$$ -so $f$ is a constant. -Consider a summand $r_j b(t-\theta_j)$ of $f$. As this summand jumps down to zero at the point $t=\theta_j+{2\pi\over m}$ there has to be another summand $r_l b(t-\theta_l)$ to compensate for this jump; in fact one necessarily has $\theta_l= \theta_j+{2\pi\over m}$ and $r_l=r_j$. It follows that $P$ is invariant under a rotation by ${2\pi\over m}$.<|endoftext|> -TITLE: If a matrix is invertible, is its multiplication commutative? -QUESTION [8 upvotes]: The question is prompted by change of basis problems -- the book keeps multiplying the bases by matrix $S$ from the left in order to keep subscripts nice and obviously matching, but in examples bases are multiplied by $S$ (the change of basis matrix) from whatever side. So is matrix multiplication commutative if at least one matrix is invertible? - -REPLY [13 votes]: Definitely not. Yuan's comment is also not correct, diagonal matrices do not necessarily commute with non-diagonal matrices. 
Consider $$\left[\begin{array}{cc} -1 & 1\\ -0 & 1\end{array}\right]\left[\begin{array}{cc} -a & 0\\ -0 & b\end{array}\right]=\left[\begin{array}{cc} -a & b\\ -0 & b\end{array}\right] -$$ -Changing the order I get -$$ -\left[\begin{array}{cc} -a & 0\\ -0 & b\end{array}\right]\left[\begin{array}{cc} -1 & 1\\ -0 & 1\end{array}\right]=\left[\begin{array}{cc} -a & a\\ -0 & b\end{array}\right] -$$ -Which is different for $a\neq b$. -Hope that helps. (Sometimes change of basis matrices can go on different sides for different reasons, but without seeing the exact text you are talking about I can't comment) - -REPLY [9 votes]: In general, two matrices (invertible or not) do not commute. For example -$$\left(\begin{array}{cc} -1 & 1\\ -0 & 1\end{array}\right)\left(\begin{array}{cc} -1 & 0\\ -1 & 1\end{array}\right) = \left(\begin{array}{cc} -2 & 1\\ -1 & 1\end{array}\right) -$$ -$$ -\left(\begin{array}{cc} -1 & 0\\ -1 & 1\end{array}\right)\left(\begin{array}{cc} -1 & 1\\ -0 & 1\end{array}\right) = \left(\begin{array}{cc} -1 & 1\\ -1 & 2\end{array}\right)$$ -Also, to change a basis you usually need to conjugate and not just multiply from the left (or just right). -What you do know is that a matrix A commutes with $A^n$ for all $n$ (negative too if it is invertible, and $A^0 = I$), so for every polynomial P (or Laurent polynomial if A is invertible) you have that A commutes with $P(A)$.<|endoftext|> -TITLE: Computing homology groups over different fields -QUESTION [6 upvotes]: First of all, my background in Homology and Algebraic Topology is very limited and doesn't include much except for the basic concepts; I'm trying to understand some Homology techniques as applications relevant to a course in combinatorics I'm taking. -Anyway, in class we've seen the example of computing the homology groups of $\mathbb{P}^2$ - The projective plane over $\mathbb{R}$. It is known that $\beta_0(\mathbb{P}^2 ; \mathbb{F})=1$ since the projective plane is connected. Now, it turns out that $H_2(\mathbb{P}^2 ; \mathbb{Q})=0$ while $H_2(\mathbb{P}^2 ; \mathbb{F}_2) \cong \mathbb{F}_2$, but the lecturer hasn't really explained how it is computed. -Any explanation on the computing method would be greatly appreciated, as will be any intuition on this whole concept, or references to further reading (I know Hatcher's book on Algebraic Topology, but didn't find it very helpful). - -REPLY [3 votes]: I think that we can get along just fine without spectral sequences or the universal coefficient theorem :) (even though both of those are probably more useful in general- it's nice to actually do a real computation at least once in your life!). -For the case of $\mathbb{R}P^2$ we happen to have a nice description of this as a $\Delta$-complex (I'm assuming you saw this in Hatcher). So you can compute the homology groups just as Hatcher described for a $\Delta$-complex in general, except that when he uses $\mathbb{Z}$ (i.e. looking at the "free abelian group generated by") just replace it with $\mathbb{Z}/2$ or $\mathbb{Q}$ and look at the vector space. -More specifically: Given a $\Delta$-complex $X$, let $\Delta_n(X, F)$ be the vector space with basis the open $n$-simplices of $X$. Now look at Hatcher's Example 2.4 and see what happens with these new coefficients!<|endoftext|> -TITLE: Is there a closed form for $\int x^n e^{cx}\,\mathrm dx$? 
-QUESTION [11 upvotes]: Wikipedia gives this evaluation:
-$$ \int x^ne^{cx}\,\mathrm dx=\frac1cx^ne^{cx}-\frac nc\int x^{n-1}e^{cx}\,\mathrm dx=\left(\frac{\partial}{\partial c}\right)^n\frac{e^{cx}}{c}$$
-But I have no idea how exactly I should understand the partial part: $\left(\frac{\partial}{\partial c}\right)\frac{e^{cx}}{c}$
-EDIT
-Thanks for your responses so far. I should add that $n$ is not necessarily an integer. It can be, for example, $n = 1.2$.
-I'll see how far I get on learning about fractional derivatives.
-
-REPLY [3 votes]: Use Wolfram Online Integrator, for example. The general answer is given in terms of the Incomplete Gamma Function.<|endoftext|>
-TITLE: If a product of relatively prime integers is an $n$th power, then each is an $n$th power
-QUESTION [7 upvotes]: Show that if $n$, $a$, $b$, and $c$ are positive integers with $\gcd(a, b) = 1$ and $ab = c^n$, then there are positive integers $d$ and $e$ such that $a = d^n$ and $b = e^n$.
-
-I know that (by Bezout) $\gcd\left(a,b\right) = 1$ implies $ax + by = 1$ for some integers $x$ and $y$, and also that $\gcd\left(a^n,b^n\right) = 1$, but this does not help me.
-
-REPLY [7 votes]: Of course it is easy using existence and uniqueness of prime factorizations. Below is a more general proof using gcd's (or ideals) that has the benefit of giving an explicit closed form.
-$ab=c^n \overset{\rm Lemma}\Rightarrow c=(a,c)(b,c) \,\Rightarrow\, ab = (a,c)^n(b,c)^n\Rightarrow \dfrac{a}{(b,c)^n}\! = \dfrac{(a,c)^n}b$ $\,\Rightarrow\begin{align} a &= (a,c)^n\\ b &= (b,c)^n\end{align}$
-where the last inference uses Unique Fractionization [both fractions are irreducible by $(a,b)\!=\!1$]
-Lemma $\ \ \color{#c00}{c\mid ab},\,\ \color{#0a0}{(a,b,c)=1}\ \Rightarrow \ c = (a,c)(b,c)\ [=\, (ab,c\color{#0a0}{(a,b,c)}) = (\color{#c00}{ab,c}) = c\,],\,$ where the braced proof uses gcd "polynomial" arithmetic, i.e. associative, commutative, distributive laws.
-Alternatively $\ (a,c)^n\! \overset{\rm\color{#C00}F}= (a^n,c^n) = (a^n,ab) = a(a^n,b) = a$ and $\,(b,c)^n = b\,$ by symmetry, where we have invoked $\rm\color{#c00}F$ = GCD Binomial Theorem (Freshman's Dream).
-As $ $ Weil remarks, $ $ this result can be viewed as the essence of Fermat's method of infinite descent. $ $ It generalizes to rings of algebraic integers but depends upon much deeper results in this more general context, viz. the finiteness of the class number and Dirichlet's unit theorem.<|endoftext|>
-TITLE: shortcut for finding an inverse of a matrix
-QUESTION [20 upvotes]: I need tricks or shortcuts to find the inverse of $2 \times 2$ and $3 \times 3$ matrices. I have to take a time-based exam, in which I have to find the inverse of square matrices.
-
-REPLY [6 votes]: So we want a way to compute $2 \times 2 ~\text{ or }~ 3 \times 3$ matrix systems in the most efficient way. Well I think the route that we want to go would be to use Cramer's Rule for the $2 \times 2 \text{ or } 3 \times 3$ case.
To state the $2 \times 2$ case we will use the following:
-For some coefficient matrix $A=$
-$\left[ \begin{array}{rr}
-a & b \\
-c & d
-\end{array} \right]$
-$A^{-1}=\dfrac{1}{ad-bc} \cdot
-
-\left[ \begin{array}{rr}
-d & -b \\
--c & a
-\end{array} \right]~ \iff ad-bc \ne 0$ $~~~~~~~~~\Big($i.e., Det(A)$~\ne ~ 0\Big)$
-For the $3 \times 3$ case, we will denote that as the following:
-$x_{1} = \dfrac{|b~~x_{2}~~x_{3}|}{|\bf{A}|}~~,$
-$x_{2} = \dfrac{|x_{1}~~b~~x_{3}|}{|\bf{A}|}~~,$
-$x_{3} = \dfrac{|x_{1}~~x_{2}~~b|}{|\bf{A}|}.$
-This comes from the matrix equation: ${\bf{A\vec{x}}}={\bf{\vec{b}}},~~~$ where $\vec{x}=[x_{1}~~x_{2}~~x_{3}]^{T}$.
-For the elements of matrix $A =
-\left|\begin{array}{rrr}
-a_{11} & a_{12} & a_{13} \\
-a_{21} & a_{22} & a_{23} \\
-a_{31} & a_{32} & a_{33}
-\end{array} \right|,~~$
-it can be extended for the solutions $x_{1},~x_{2},~x_{3}$
-as follows, knowing that ${\bf|{A}| =} ~ |a_{ij}| ~ \not= ~ 0.$
-$x_{1} = \dfrac{1}{|{\bf{A}}|}
-\left|\begin{array}{rrr}
-b_1 & a_{12} & a_{13} \\
-b_2 & a_{22} & a_{23} \\
-b_3 & a_{32} & a_{33}
-\end{array} \right|$,
-$x_{2} = \dfrac{1}{|{\bf{A}}|}
-\left|\begin{array}{rrr}
-a_{11} & b_1 & a_{13} \\
-a_{21} & b_2 & a_{23} \\
-a_{31} & b_3 & a_{33}
-\end{array} \right|$,
-$x_{3} = \dfrac{1}{|{\bf{A}}|}
-\left|\begin{array}{rrr}
-a_{11} & a_{12} & b_1 \\
-a_{21} & a_{22} & b_2 \\
-a_{31} & a_{32} & b_3
-\end{array} \right|$.
-An alternate way of doing this would be using row reducing methods, known as either Gaussian elimination (ref) or Gauss-Jordan elimination (rref).
-I hope this helped out. Let me know if there is anything you do not understand.
-Thanks.
-Good Luck.<|endoftext|>
-TITLE: Using congruences, show $\frac{1}{5}n^5 + \frac{1}{3}n^3 + \frac{7}{15}n$ is an integer for every $n$
-QUESTION [12 upvotes]: Using congruences, show that the following is always an integer for every integer
-value of $n$:
-$$\frac{1}{5}n^5 + \frac{1}{3}n^3 + \frac{7}{15}n.$$
-
-REPLY [19 votes]: HINT $\displaystyle\rm\quad \frac{n^5}5\: +\: \frac{n^3}3\: +\: \frac{7\:n}{15}\ =\ \frac{n^5-n}5\: +\: \frac{n^3-n}3\: +\: n\ \in \mathbb Z\ $ by Fermat's Little Theorem.<|endoftext|>
-TITLE: Projective but not free (exercise from Adkins - Weintraub)
-QUESTION [7 upvotes]: This is exercise 38 from Chapter 3 (Modules and Vector Spaces) in Algebra by Adkins and Weintraub (GTM). How do you solve this problem?
-Let
-\begin{equation*}
-R = \lbrace f : [0, 1] \to \mathbb{R} : f \;\text{ is continuous and} \; f (0) = f (1) \rbrace
-\end{equation*}
-and let
-\begin{equation*}
-M = \lbrace f : [0, 1]\to \mathbb{R} : f \;\text{is continuous and} \; f (0) = - f (1) \rbrace.
-\end{equation*}
-Then $R$ is a ring under addition and multiplication of functions, and $M$ is an $R$-module. Show that $M$ is a projective $R$-module that is not free.
-
-REPLY [6 votes]: I'll consider the interval $[0,2\pi]$ for notational simplicity. Consider the matrix
-$$
-A = \left(
-\begin{array}{cc}
-\sin^2\tfrac{\theta}{2} & -\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2} \\
--\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2} & \cos^2\tfrac{\theta}{2}
-\end{array}
-\right),
-$$
-which defines an $R$-linear map $p:R^2\to R^2$. Computing $A^2$ we see that $p^2=p$, so $p$ is idempotent, and its kernel is a projective $R$-module $P$.
-Now consider the map
-$$
-\phi : f\in M \mapsto \left(f(\theta)\cos \tfrac{\theta}{2},\, f(\theta)\sin \tfrac{\theta}{2}\right) \in R^2.
-$$
-It is clearly an $R$-linear injective map, whose image is precisely the kernel $P$ of $p$.
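-Both of these computations are easy to confirm mechanically (a quick sympy sketch of my own, with $\theta$ a symbol as in the answer):
-
-    import sympy as sp
-
-    th = sp.symbols('theta')
-    c, s = sp.cos(th/2), sp.sin(th/2)
-    A = sp.Matrix([[s**2, -s*c], [-s*c, c**2]])
-    print(sp.simplify(A*A - A))               # zero matrix: p is idempotent
-    print(sp.simplify(A*sp.Matrix([c, s])))   # zero vector: the image of phi lies in ker p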
It follows that $M\cong P$, and this shows projectivity.
-Non-freeness is more subtle...
-There is a morphism of rings $\varepsilon:R\to\mathbb R$ given by evaluation at $0$. One can see that $P\otimes_R\mathbb R$ is of dimension $1$ over $\mathbb R$, so that if $P$ is free, then it is free of rank $1$. In that case, $M$ would be free of rank $1$: suppose so, and let $h\in M$ be a generator. It is immediate then that every element of $M$ has to vanish where $h$ vanishes. But one can easily find an element of $M$ whose only zero is not a zero of $h$.<|endoftext|>
-TITLE: Powers of the Laplacian matrix
-QUESTION [7 upvotes]: Given a graph with an adjacency matrix $\bf A$, powers of this matrix give the numbers of walks between vertices. That is, $({\bf A}^k)_{ij}$ gives the number of walks from node $i$ to $j$ in $k$ steps. Is there any nice physical interpretation for the powers of the Laplacian matrix?
-
-REPLY [8 votes]: In my opinion, it is not natural to take powers of the Laplacian in isolation. That is, the Laplacian should not be thought of as a linear operator that one is interested in iterating. There are two points of view that I know of about what the Laplacian "is":
-
-It's the matrix representation of the quadratic form $\sum_{(v, w) \in E} (f(v) - f(w))^2$ where $f$ is a function on the vertices. Matrices of quadratic forms are not meant to be composed.
-It generates a one-dimensional Lie algebra acting on the space of functions on the vertices. This perspective is relevant to the answer I gave to this math.SE question, where by using the Laplacian to define certain differential equations it becomes natural to exponentiate, rather than take powers of, the Laplacian. The powers appear in the exponential, but indirectly.<|endoftext|>
-TITLE: How does one actually show from associativity that one can drop parentheses?
-QUESTION [28 upvotes]: I've always heard this reasoning, and it makes obvious sense, but how do you actually show it for some arbitrary product?
-$$(a(b(cd)))e=((ab)(cd))e=(((ab)c)d)e=abcde?$$
-Do you just say that the grouping of the parentheses now corresponds to just multiplying straight through? Thanks.
-
-REPLY [2 votes]: Here is a geometric version of a proof:
-First, note that parenthesizations of a product of length $n$ are in bijection with triangulations of a convex $(n+1)$-gon $A$. This is illustrated in the picture below, copied from p. 240 of the book “Discriminants, Resultants, and Multidimensional Determinants” by Gelfand, Kapranov and Zelevinsky.
-
-In words, to go from triangulations of $A$ to parenthesizations of $x_1x_2\ldots x_n$, label all but one edge of $A$ in order as $x_1, x_2, \ldots, x_n$, leaving the last edge as the “output edge”. Then a triangulation of $A$ gives a parenthesization in an inductive manner: Take the triangle $T$ incident on the output edge; then $A$ with $T$ removed is a union of two polygons $A_l$ and $A_r$ (one of which could be degenerate, i.e. a segment), each of which is incident on one “non-output” edge of $T$. Then assign to our triangulation the product of the parenthesizations assigned to $A_l$ and $A_r$.
-One can easily go back in a similarly inductive manner, by thinking of a parenthesization $p$ as a product of two subparenthesizations $p_l$ and $p_r$, and correspondingly getting the triangulation associated to $p$ by “inserting a triangle $T$” into the triangulations of smaller polygons associated to these $p_l$ and $p_r$.
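-A quick sanity check on this bijection (a small Python sketch of my own): both sides are counted by the Catalan numbers, since there are $C_{n-1}=\frac{1}{n}\binom{2n-2}{n-1}$ parenthesizations of $n$ factors and the same number of triangulations of a convex $(n+1)$-gon.
-
-    from math import comb
-
-    def parenthesizations(n):
-        """All full parenthesizations of a product of n factors."""
-        if n == 1:
-            return ["x"]
-        out = []
-        for i in range(1, n):              # split off a left block of i factors
-            for l in parenthesizations(i):
-                for r in parenthesizations(n - i):
-                    out.append("(" + l + r + ")")
-        return out
-
-    for n in range(1, 9):
-        catalan = comb(2*n - 2, n - 1) // n    # also counts triangulations of an (n+1)-gon
-        assert len(parenthesizations(n)) == catalan
-    print("counts agree with the Catalan numbers for n = 1..8")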
-Now that we have a geometric interpretation of parenthesizations, we can give a geometric interpretation of a single application of the associativity law. This is known as a flip — in a given triangulation, one takes two triangles sharing an edge, thus forming a quadrilateral, and replaces this edge with the other diagonal of this quadrilateral.
-Thus the question becomes: can we connect any two triangulations of a convex polygon by a sequence of “diagonal flips”? The answer is yes, and again is best illustrated by a picture, this time stolen from page 7 of the book “Triangulations” by Jesús A. De Loera, Jörg Rambau, and
-Francisco Santos.
-
-In words, pick any vertex $i$ and make a triangulation by drawing all diagonals from that vertex. To connect any triangulation to that “i-standard” one, make flips that increase the degree of $i$: as long as you are not at the “i-standard” triangulation, there exists a triangle $T$ with vertex $i$ and opposite edge not an edge of $A$ (but rather a diagonal); complete $T$ to a quadrilateral, and flip the diagonal.
-This shows that “the graph of
-flips of the triangulations of the $n$-gon” is connected (thus establishing what we want).
-QED!
-In fact much more is true. The graph of flips is just the union of 1-dimensional faces of a convex polytope, all of whose faces correspond to incomplete triangulations/incomplete parenthesizations, the one open face corresponding to the incomplete triangulation with no triangles/the incomplete parenthesization with no parentheses.
-What we proved is simply that one can get from any vertex of this polytope to any other vertex by following some edges. This is of course true for any convex polytope (take the straight path and push it to faces of lower and lower dimension, for example by projecting "radially" from a point of the face not on the path; this is an easy version of the cellular approximation theorem in algebraic topology).
-The polytope in question is called an associahedron, or Stasheff polytope, and its existence follows from various direct constructions, notably it is constructed by Gelfand-Kapranov-Zelevinsky as the secondary polytope of a convex $n$-gon in the plane (this is described in the same book where the first picture came from).
-
-On a related and more ridiculous note, the existence of a permutohedron (secondary polytope of the prism $I\times \Delta^n$ on the simplex $\Delta^n$) shows that the commutativity law allows one to write products of any number of elements in any order (this is equivalent to showing that the permutation group is generated by transpositions).<|endoftext|>
-TITLE: Intermediate Text in Combinatorics?
-QUESTION [8 upvotes]: I'm currently attending a somewhat disorganized seminar on combinatorics that follows no textbook. So far we have covered the orbit-stabilizer theorem, some recursion, and we're heading into the Möbius inversion formula.
-Can anyone suggest a text that approaches combinatorics at this level for a 2nd-3rd year undergrad who already knows some algebra and the more basic combinatorics like combinations, permutations, stars-and-bars, generating functions? Most introductory combinatorics books I've found are more suited to a discrete math class and cover stuff which I already know. I'm looking for something to supplement this lecture. Thank you.
-
-REPLY [7 votes]: I did a reading course in Combinatorics while a PhD student, and we used van Lint and Wilson which I thought was very hard, but very good. You don't need any more background than you already have, and you will learn a ton from this text.
-It's definitely not as well-known as the other suggested texts, but in many ways it's superior. (Although Concrete Mathematics is a better book overall for its sheer beauty.)<|endoftext|>
-TITLE: Proof by double induction on strings
-QUESTION [5 upvotes]: This was a question on an assignment presented to my Logic & Mathematics for computer science course, and I am truly baffled as to how to go about proving this by double induction:
-
-Consider a string consisting of one or
-more decimal digits (0-9). Suppose you
-repeatedly insert a 0 to the right of
-the leftmost digit and then replace
-that string by a string of the same
-length which represents the result of
-subtracting one from that string.
-i.e. start with string 11 and you will get
-the following: 11, 101, 100, 1000,
-0999, 00999, 00998, and so on.
-Prove by double induction, that no
-matter what string you start with, you
-will eventually get a string
-containing only 0's.
-
-This question seems rather trivial at first glance, however trying to prove it via double induction is another thing for me. I'm having trouble deciding what the variables I will be applying induction on will be. Obviously one of the two variables will be the string input, but what about the other? I've thought of using length but I don't see what that can do to help. What do you guys think?
-Assuming I have encapsulated the variables, I plan on proceeding to prove this in the following manner given for all x and y in N. p(x,y):
-
-Let Q(x) = for all y in N, p(x,y) Let
-x be arbitrary and assume for all i in
-N, i < x IMPLIES Q(i). We then let y
-in N be arbitrary then assume that for
-all j in N, j < y implies p(x,j).
-Afterward I will use the assumption
-that for all i and j, p(i,j) is true
-such that (i < x) or (i = x and j <
-y). Then proceed with the proof.
-
-Does that skeletal proof structure make logical sense? Double induction is still new to me and it wasn't covered a whole lot in lectures so I'm rather still insecure.
-I suppose it'd make more sense if we used structural induction on each variable, however we're restricted to use double induction only.
-I'm sorry for the long question, I'd appreciate any input & help & insight regarding this question or in double induction in general as my Google-fu seems to be coming short on information of the latter :(.
-
-REPLY [3 votes]: Let $a$ be the first digit of the string, and let $b$ be
-the value of the remaining digits. Let's prove that the
-operation is reducing with respect to the lexical order of
-$\langle a,b\rangle$, which is one version of interpreting
-your phrase "double induction."
-First, note that the operation of inserting the $0$ to the
-right of the leading digit affects neither the value of $a$
-nor $b$.
-Next, note that if $b$ is $0$ but $a$ is not $0$, then
-the operation will end up with $a$ being reduced, because
-when you subtract one, you will have to borrow from $a$,
-thereby reducing. Thus, the operation will result in a
-string having lower $\langle a,b\rangle$ in the lexical
-order.
-If $b$ is not $0$, however, then the operation will
-end up with the value of $b$ being $1$ less (despite the
-extra $0$), and so will reduce $\langle a,b\rangle$ in the
-lexical order.
-Since the lexical order on $\mathbb{N}\times\mathbb{N}$ is a well-order, it follows that
-the operation must eventually hit all $0$s.
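-The reduction is easy to watch in a few lines of Python (a sketch of my own; Python's tuple comparison is exactly the lexical order on $\langle a,b\rangle$):
-
-    def measure(s):                    # <a, b>: leading digit, value of the rest
-        return (int(s[0]), int(s[1:] or "0"))
-
-    s, trace = "11", ["11"]
-    while set(s) != {"0"}:
-        before = measure(s)
-        s = s[0] + "0" + s[1:]                 # insert a 0 right of the leftmost digit
-        trace.append(s)
-        s = str(int(s) - 1).zfill(len(s))      # subtract one, keeping the length
-        trace.append(s)
-        assert measure(s) < before             # <a, b> strictly drops each round
-    print(trace[:7])   # ['11', '101', '100', '1000', '0999', '00999', '00998']
-    print("all zeros after", (len(trace) - 1)//2, "rounds")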
The argument shows that you can insert any number of $0$s, rather than just one $0$ at each step, although if you insert more, then it will take longer to get to the all $0$ string, since when you borrow, you will get more $9$s.<|endoftext|>
-TITLE: Definition of the j-invariant of an elliptic curve
-QUESTION [18 upvotes]: It seems that most introductory books on elliptic curves simply state the definition of the j-invariant of an elliptic curve without giving any background on how that definition was conceived. Of course, for moduli reasons, it is clear why one might want such an invariant, but the actual formula has always seemed quite mysterious to me. Does anyone know of a nice self-contained source that explains the definition of the j-invariant?
-
-REPLY [10 votes]: The $j$-invariant has the following classical interpretation.
-Consider a model $E\subset{\Bbb P}^2$ of the elliptic curve (one knows that $E$ is a cubic). Let $P\in E$. There are 4 lines through $P$ that are tangent to $E$ and one can show that the set of cross-ratios $c$ of these 4 lines is independent of $P$. Then
-$$
-j=\frac{(c^2-c+1)^3}{c^2(c-1)^2}
-$$
-is invariant under the 24 permutations of the 4 tangents (which give up to 6 different values of the cross-ratio) and is the $j$-invariant of the elliptic curve, up to normalization.
-When the curve $E$ is given in Legendre form $y^2=x(x-1)(x-\lambda)$ the formula for the $j$-invariant is obtained by taking $P$ to be the point at infinity and the 4 tangents to be the line at infinity and the lines $x=0$, $x=1$ and $x=\lambda$.<|endoftext|>
-TITLE: Restricted Compositions
-QUESTION [6 upvotes]: Number composition studies the number of ways of composing a number.
-I want to know the number of compositions of $m$ with $n$ parts with the size of the max part equal to or less than $k$.
-Is there a closed form for this problem?
-
-REPLY [4 votes]: For big $N$, $K$, a coarse approximation can be obtained by a probabilistic argument. I get
-$$C(N,M,K) \approx \left( 1 - \left( 1- \frac{K}{N}\right)^M \right)^K {N-1 \choose K-1}$$
-The approximation is good if the first factor (which represents the restriction) is not very small (say, $>0.1$).
-Some examples (exact and approximate values):
- N  K  M  exact      approx     factor
- 40 20 5  3.556E+010 3.653E+010 0.523
- 40 20 8  6.610E+010 6.373E+010 0.924
- 40 20 3  3.774E+008 4.770E+009 0.069<|endoftext|>
-TITLE: Every member of an ordinal is an ordinal
-QUESTION [13 upvotes]: How to prove that, if $a$ is an ordinal and $b$ is in $a$, then $b$ is an ordinal?
-Here are the definitions I'm using.
-A set is an ordinal number if it is transitive and well-ordered by ∈.
-A set $T$ is transitive if every element of $T$ is a subset of $T$.
-My difficulty is mainly that I can't prove $b$ is transitive. What's the magic?
-
-REPLY [5 votes]: Simpler proof than the one given:
-Suppose $\beta\in\alpha$ and $\alpha$ is an ordinal. Suppose that $\gamma\in\beta$ and that $\delta\in\gamma$.
-Since $\alpha$ is an ordinal, $\alpha$ is a transitive set. Therefore, since $\beta\in\alpha$ and $\gamma\in\beta$, we must have $\gamma\in\alpha$. Again, since $\gamma\in\alpha$ and $\delta\in\gamma$, $\delta\in\alpha$.
-Now we have all the facts we need:
-
-$\in$ is a well-order on $\alpha$, which means that $\in$ is transitive on $\alpha$.
-$\beta,\gamma,\delta\in\alpha$
-$\gamma\in\beta$ and $\delta\in\gamma$
-
-Since $\in$ is transitive on $\alpha$ and $\delta\in\gamma$ and $\gamma\in\beta$ and $\beta,\gamma,\delta\in\alpha$, we must have $\delta\in\beta$. Thus, $\beta$ is a transitive set.
$\Box$<|endoftext|>
-TITLE: bound of the number of the primes on an interval of length n
-QUESTION [5 upvotes]: I made this observation and it seems reasonable to me to ask: if $n$ is a natural number, let the number of the primes less than or equal to $n$ be denoted by $\pi(n)$. Is it true that in any interval of length $n$ there are at most $\pi(n)+1$ primes? (The $+1$ is needed for the trivial case where $n=p-1$ and the interval of length $n$ is $[2,p]$.)
-Alternatively, we can say that in any interval of length $n-1$ there are at most $\pi(n)$ primes.
-
-REPLY [10 votes]: This is a well-known conjecture. It even has a name: the Second Hardy-Littlewood Conjecture, in the form: $\pi(x+y) \le \pi(x)+\pi(y)$ for $x, y \ge 2$.
-For a long time, this was generally thought to be true. Then in 1974, Ian Richards showed that it was incompatible with the First Hardy-Littlewood Conjecture! He did this by explicitly constructing an admissible prime constellation of length $x$ and size larger than $\pi(x)$. Computers were involved. See here for more details.
-The First H-L Conjecture is considered a sure thing, which has led most mathematicians to abandon the Second H-L Conjecture (although any counterexamples are likely to be extremely large).
-
-REPLY [2 votes]: This is indeed a known open problem, the Hardy-Littlewood conjecture:
-$$\pi(x+y) - \pi(x) \le \pi(y)$$<|endoftext|>
-TITLE: How can I prove that an order ("$<$" say) on $\mathbb Z_n$ cannot be defined?
-QUESTION [6 upvotes]: I'm trying to show why it isn't possible to define an order of magnitude on $\mathbb Z_n$ (modular arithmetic) that satisfies the ordering properties of $\mathbb Z$.
-Letting addition be $\oplus$ and multiplication $\otimes$, I know the following about $\mathbb Z_n$:
-
-Closed under $\oplus$ and $\otimes$.
-$\oplus$ and $\otimes$ are commutative.
-$\oplus$ and $\otimes$ are associative.
-$0$ is the $\oplus$ identity and $1$ is the $\otimes$ identity.
-Additive inverses: $(-a) \oplus a = 0$.
-$$a \otimes (b \oplus c) = a \otimes b \oplus a \otimes c,$$
-$$(b \oplus c) \otimes a = b \otimes a \oplus c \otimes a .$$
-
-How would I prove that at least one of the ordering properties of $\mathbb Z$ does not hold for $\mathbb Z_n$?
-I will definitely be using the ordering properties of $\mathbb Z$:
-
-exactly one of $a < b$ or $a = b$ or $a > b$ holds;
-if $a < b$ and $b < c$, then $a < c$;
-if $a < b$ then $a + c < b + c$;
-if $a < b$ and $c > 0$ then $a \cdot c < b \cdot c$.
-
-In a way I can see that one of these properties would collapse in $\mathbb Z_n$, but I cannot prove it. By contradiction perhaps? (Start with $a < b$ and end up with $a > b$?)
-Any help much appreciated.
-
-REPLY [7 votes]: It is a 1943 theorem of F.W. Levi that for a commutative group $(G,+)$ the following are equivalent:
-(i) There exists a total ordering relation $\leq$ on $G$ which is compatible with the group law in the sense that for all $x_1,x_2,y_1,y_2 \in G$, $x_1 \leq y_1, \ x_2 \leq y_2 \implies x_1 + x_2 \leq y_1 + y_2$.
-(ii) $G$ is torsionfree: for all $x \in G$ and $n \in \mathbb{Z}^+$, $nx = 0 \implies x = 0$.
-A proof (and a citation to Levi's paper) is given in $\S 17.2$ of these notes.
-Note that the implication (i) $\implies$ (ii) -- which is what was being asked about here -- is by far the easier one, and the argument I give for this is no different from those in the other answers.
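-(To spell out the easy implication concretely for $\mathbb{Z}_n$ with $n\ge 2$, a minimal sketch of the standard argument: by trichotomy either $0<1$ or $1<0$. If $0<1$, then repeatedly adding $1$ gives $0<1$, $1<2$, $\ldots$, $n-1<n=0$, and transitivity along this chain yields $0<0$, contradicting trichotomy. If $1<0$, the same process gives $0=n<n-1<\cdots<1<0$, and again $0<0$. This is exactly torsion ruling out an order: $n\cdot 1=0$ with $1\neq 0$.)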
(So this answer is more for those with a more general interest in ordered commutative groups.)<|endoftext|>
-TITLE: Orthogonal Projection
-QUESTION [11 upvotes]: Seems like I still don't get it; I think I am missing something important.
-
-Let $V$ be an $n$-dimensional inner product space ($n \geq 1$), and $T\colon\mathbf{V}\to\mathbf{V}$ be a linear transformation such that:
-
-$T^2 = T$
-$||T(a)|| \leq ||a||$ for every vector $a$ in $\mathbf{V}$;
-
-Prove that a subspace $U \subseteq V$ exists, such that $T$ is the orthogonal projection on $U$.
-
-Now, I know these things:
-
-The fact that $T^2 = T$ guarantees that $T$ is indeed a projection, so I need to prove that T is an orthogonal projection (I guess this is where $||T(a)|| \leq ||a||$ kicks in).
-To do this I can prove that:
-
-For every $v$ in $ImT^{\perp}$, $T(v) = 0$
-Alternatively, I can prove that for every $v$ in $ImT$ and $u$ in $KerT$, $(v,u)=0$.
-$T$ is self-adjoint (according to Wikipedia)
-The matrix $A = [T]_{E}$, where $E$ is an orthonormal basis, is Hermitian (this is equivalent to the previous point).
-What else?
-
-I've been thinking about it for quite some time now, and I'm pretty sure there is something big I'm missing, again. I just don't know how to use the data to prove any of these things.
-Thanks!
-
-REPLY [2 votes]: Here is an approach which doesn't use the decomposition of $V$ into a subspace and its orthogonal complement, nor finite dimension for that matter. However, I'll use the following characterization of orthogonal projection: A linear map $T\colon V\to V$ such that $T^2=T$ and such that $\langle v-T(v),T(w)\rangle=0$ for all $v,w$.
-I'll take the complex case as there are some extra details, but the real case is analogous.
-So we use the inequality $\Vert T(a)\Vert^2\leq\Vert a\Vert^2$ with $a=v+T(t w)$, where $v,w\in V$ and $t>0$:
-\begin{align*}\Vert T(v+T(tw))\Vert^2&\leq\Vert v+T(t w)\Vert^2\\\Vert T(v)\Vert^2+2\operatorname{Re}\langle T(v),T(t w)\rangle+\Vert T(t w)\Vert^2&\leq\Vert v\Vert^2+2\operatorname{Re}\langle v,T(t w)\rangle+\Vert T(t w)\Vert^2\\\Vert T(v)\Vert^2+2\operatorname{Re}\langle T(v),T(t w)\rangle&\leq\Vert v\Vert^2+2\operatorname{Re}\langle v,T(t w)\rangle\\t^{-1}\Vert T(v)\Vert^2+2\operatorname{Re}\langle T(v),T(w)\rangle&\leq t^{-1}\Vert v\Vert^2+2\operatorname{Re}\langle v,T(w)\rangle,\end{align*}
-where the last line is obtained from the previous one because $t>0$. Letting $t\to\infty$ we obtain
-\begin{align*}2\operatorname{Re}\langle T(v),T(w)\rangle\leq 2\operatorname{Re}\langle v,T(w)\rangle\end{align*}
-for all $v,w$. Simplifying the "$2$" term and using the same inequality with $-v$ in place of $v$ yields
-$$\operatorname{Re}\langle T(v),T(w)\rangle=\operatorname{Re}\langle v,T(w)\rangle$$
-for all $v,w$. For a complex number $z$, we have $\operatorname{Im}(z)=\operatorname{Re}(-iz)$, so using the equality above with $iw$ in place of $w$ yields
-$$\operatorname{Im}\langle T(v),T(w)\rangle=\operatorname{Im}\langle v,T(w)\rangle$$
-for all $v,w$. So the last two equalities give us
-$$\langle T(v),T(w)\rangle=\langle v,T(w)\rangle$$
-for all $v,w$, and this means that $T$ is an orthogonal projection (onto $T(V)$), according to the definition above.<|endoftext|>
-TITLE: Lattice of Gauss and Eisenstein Integers
-QUESTION [10 upvotes]: $\mathbb{Z}$ is a 1D lattice
-Gaussian and Eisenstein integers are 2D lattices
-But the golden integers (for example) are dense on the real line.
-
-Are there rings of integers which have 3D, 4D, ... lattices?
-
-Here is a plot of $(a + \tfrac{1}{2}(1+\sqrt{5})b,a + \tfrac{1}{2}(1-\sqrt{5})b)$ for $-10\le a,b \le 10$.
-
-which is the lattice corresponding to the golden integers, if I understand correctly. The green points represent rational integers and the blue points represent multiples of $\varphi$.
-
-REPLY [12 votes]: The example of the golden integers (and more generally rings of integers in quadratic number fields with positive discriminant) shows that the single embedding into $\mathbb{R}$ is inadequate. Instead, if $K = \mathbb{Q}(\sqrt{d}), d > 0$ then the appropriate way to embed $K$ as a lattice in the plane is to look at both embeddings $\sigma_1, \sigma_2 : K \to \mathbb{R}$. The first one sends $\sqrt{d}$ to $\sqrt{d}$ and the second one sends $\sqrt{d}$ to $-\sqrt{d}$. Together they give an embedding $(\sigma_1, \sigma_2)$ of $K$ into $\mathbb{R}^2$, and relative to this embedding the ring of integers $\mathcal{O}_K$ really is a lattice.
-More generally, if $K$ is a number field of degree $n$, then there are $n = r + 2s$ embeddings $\sigma_i : K \to \mathbb{C}$, $r$ of which have image in $\mathbb{R}$ and $2s$ of which have image outside of $\mathbb{R}$, which come in complex conjugate pairs. Here the appropriate generalization of the above embedding is to use all of the real embeddings $\sigma_1, ... \sigma_r$ and one representative of each complex conjugate pair of complex embeddings $\sigma_{r+1}, ... \sigma_{r+s}$. This gives an embedding $K \to \mathbb{R}^r \times \mathbb{C}^s$, and embedding $\mathbb{C}$ into $\mathbb{R}^2$ gives an embedding $K \to \mathbb{R}^n$.
-Relative to this embedding, it's a standard exercise that $\mathcal{O}_K$ is a lattice in $\mathbb{R}^n$ of rank $n$. This is the standard construction used to prove the finiteness of the class group and Dirichlet's unit theorem, and details can be found in any book on algebraic number theory.<|endoftext|>
-TITLE: Algebraic integers of a cubic extension
-QUESTION [7 upvotes]: Apparently this should be a straightforward / standard homework problem, but I'm having trouble figuring it out.
-Let $D$ be a square-free integer not divisible by $3$. Let $\theta = \sqrt[3]{D}$, $K = \mathbb{Q}(\theta)$. Let $\mathcal{O}_K$ be the ring of algebraic integers inside $K$. I need to explicitly find elements generating $\mathcal{O}_K$ as a $\mathbb{Z}$-module.
-It is reasonably clear that $\theta$ is itself an algebraic integer and that $\mathbb{Z}[\theta] \le \mathcal{O}_K$, but I strongly suspect it isn't the whole ring. I'm not sure where the hypotheses on $D$ come in at all... any hints would be much appreciated.
-
-REPLY [6 votes]: A very belated answer: This is (part of) the content of Exercise 1.41 of Marcus' Number Fields (a great source of exercises in basic number theory). In it, one proves that, for $K = \mathbf{Q}(m^{1/3})$, $m$ squarefree, an integral basis is given by
-$\begin{cases} 1, m^{1/3}, m^{2/3},& m \not \equiv \pm 1 \pmod 9 \\ 1, m^{1/3}, \frac{m^{2/3} \pm m^{1/3} + 1}{3},& m \equiv \pm 1 \pmod 9 \end{cases}$
-This is leveraged out of his Theorem 13 (among other things).<|endoftext|>
-TITLE: A question about the Inclusion-Exclusion principle
-QUESTION [5 upvotes]: Grandma has 8 grandchildren, and 4 different types of popsicles:
-
-6 Vanilla popsicles
-6 Strawberry popsicles
-5 Banana popsicles
-3 Chocolate popsicles
-
-This morning, all of her grandchildren came together and asked for one popsicle each (every grandchild asked for a particular flavor).
What is the total number of different sets of requests that Grandma can fulfill?
-I think this is related to the Inclusion-Exclusion principle because it was taught in the same class. Can you help me solve it?
-I did reach the following sum, but I imagine the question's author had something simpler in mind...
-$ E(0) = W(0)-W(1) = 3^8 - 4\cdot C(8,8)-4\cdot 3\cdot C(7, 8)-2\cdot C(6, 8)\cdot 3^2-C(5, 8)\cdot 3^3 - C(4, 8)\cdot 3^4$
-
-REPLY [4 votes]: I don't quite understand how you arrived at your result, so I'll try to sketch my solution. As you tagged the question as homework, there are no computations - just the idea.
-There are at first $4$ possibilities to consider, depending on how many chocolate popsicles the kids want: $0,1,2$ or $3$. Say $A(i)$ is the number of requests that have $i$ chocolate popsicles chosen. Let $T$ be the desired total. Then clearly:
-$$T = A(0) + A(1) + A(2) + A(3)$$
-as the possibilities are mutually exclusive and exhaustive.
-
-$A(3)$ is easy: $3$ kids picked chocolate, and the remaining $5$ are choosing amongst the other three flavours. Note that they can all choose the same flavour.
-$A(2)$ is just as easy: you have $6$ kids left to consider. Assume first that there are $6$ banana popsicles as well. Count all possible choices, then discount the one case when all six kids pick the banana popsicle.
-$A(1)$ and $A(0)$: these would be hard to compute as directly as the previous two. Try it this way: counting $A(1)$ is equivalent to the problem:
-
-Grandma has 7 grandchildren, and 3 different types of popsicles:
-
-- 6 Vanilla popsicles
-- 6 Strawberry popsicles
-- 5 Banana popsicles
-
-This morning, all of her grandchildren came together and asked for one popsicle each (every grandchild asked for a particular flavor). What is the total number of different sets of requests that Grandma can fulfill?
-
-Which is just the original problem minus one child and the chocolate popsicles. You approach it as above. Let $B(i)$ be the number of requests where $i$ banana popsicles are picked.
-Then you have:
-$$A(1) = B(0) + B(1) + B(2) + B(3) + B(4) + B(5)$$
-All these $B(i)$ are now easy to compute. A similar calculation works for $A(0)$.<|endoftext|>
-TITLE: How can I systematically find a solution to this problem?
-QUESTION [5 upvotes]: While doing some self-study, a friend posed this problem to me:
-
-Let $a$ be a sequence, defined as follows:
-$$a_0 = 0,\quad a_1= 1, \qquad a_{n+2}=\frac{a_n+a_{n+1}}2$$
-Figure out whether $a$ converges, and if so, to which value. Find a closed form for $a_n$. If you want to, you may also have a look at the general case with $a_0= \alpha$ and $a_1 = \beta$.
-
-It's easy to see that this converges to $\frac2 3$, and the other parts are also easy to solve by guessing a result and afterwards proving it correct, but I dislike this approach, as it is not productive if the solution isn't obvious.
-Is there a systematic approach for such problems? I would be really happy if somebody could explain to me how to systematically solve problems like this.
-
-REPLY [2 votes]: I realize you're asking for a systematic answer to the problem, and others have supplied such, but there is a neat correspondence with the binary representation of 1/3.
-With every iteration of the recurrence you divide by two, so you move up and down each time by a negative power of 2, $1 - 1/2 + 1/4 - 1/8 + ...$ or $1/2 + 1/8 + 1/32 + ...$, essentially creating a binary encoding .1010101...
which, as a geometric series comes out to $${1\over 2}\cdot{1\over 1- {1\over 4}} = {2\over 3}.$$<|endoftext|> -TITLE: Does monotonicity and derivability of a function $f\colon \mathbb{R}\to\mathbb{R}$ imply bijectiveness? -QUESTION [6 upvotes]: I have to prove that $f \colon x \mapsto e^{4x} + x^5 + 2$ ($f\colon \mathbb{R}\to\mathbb{R}$) is bijective. The argument given in the solution is that since the first two summands of the image is a bijective function of $x$, then so is $f$. Nonetheless, this "proof" doesn't seem at all rigorous to me, since there are many counterexamples to this argument. -So I proved that $f$ is strictly increasing by looking at its derivative, thus injective, and that every $c \in \mathbb{R}$ has at least one preimage, applying Bolzano's theorem to $f(x) - c$ and evaluating its limit at $-\infty$ and $+\infty$, and so, demonstrating $f$ to be surjective. In consequence, $f$ is bijective. -I am much more pleased about this proof than the one given in the solutions, but I want to know if I missed something, or if my hypothesis are insufficient. I gave an example to illustrate my argument, but the question I want to ask in the general form is in the title. Also, I would like to know whether the converse holds as well. -Thanks. -Update: I've been thinking about this, and realized that only monotonicity and continuity (together with unboundedness, as Mariano pointed out) are necessary for bijectiveness, and derivability only helps to prove monotonicity. Is this correct? - -REPLY [6 votes]: A function $f: \mathbb{R} \rightarrow \mathbb{R}$ which is strictly monotonic -- i.e., either (for all $x_1,x_2 \in \mathbb{R}$, $x_1 < x_2 \implies f(x_1) < f(x_2)$) or (for all $x_1,x_2 \in \mathbb{R}$, $x_1 < x_2 \implies f(x_1) > f(x_2)$) -- is necessarily injective. This is immediate from the definition, and no continuity or differentiability is necessary for this. -But what about bijectivity? Let me assume WLOG that $f$ is strictly increasing. Then a necessary condition for bijectivity is that $\lim_{x \rightarrow \pm \infty} f(x) = \pm \infty$. (This is not absolutely immediate, but I hope and believe it will become clear after only a little thought.) For future use, let us call this the infinite limits property. -This observation can be used to see that the answer to your title question is no: as Mariano says, consider the arctangent function. -Moreover, being strictly increasing and having the infinite limits property is not enough to guarantee surjectivity: consider for instance the function defined piecewise as $x$ for $x \leq 0$ and as $x+1$ for $x > 0$. This example seems to indicate that we are missing the condition of continuity. -Proposition: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be strictly increasing and -have the infinite limits property. The following are equivalent: -(i) $f$ is continuous. -(ii) $f$ is surjective. -Since the word "homework" is being thrown around, I leave the proof to the OP. (Anyway, it is a fun and not so difficult exercise.) -As the OP says, differentiability is not part of the fundamental picture here, but comes up in practice because having strictly positive derivative on an interval ensures that a function is strictly increasing.<|endoftext|> -TITLE: Can contractible subspace be ignored/collapsed when computing $\pi_n$ or $H_n$? -QUESTION [8 upvotes]: Can contractible subspace be ignored/collapsed when computing $\pi_n$ or $H_n$? 
-Motivation: I took this for granted for a long time, as I thought collapsing the contractible subspace does not change the homotopy type. Now it seems that this is only true for a CW pair...
-
-REPLY [7 votes]: Let me note a general fact: if the inclusion $A \hookrightarrow X$ (for $A$ a closed subspace) is a cofibration, and $A$ is contractible, then the map $X \to X/A$ is a homotopy equivalence. See Corollary 5.13 in chapter 1 of Whitehead's "Elements of homotopy theory."<|endoftext|>
-TITLE: Explain why a set of mutually orthogonal non-zero vectors is linearly independent given a clause
-QUESTION [5 upvotes]: "Given $\vec{u}_1,\ldots ,\vec{u}_n$ mutually orthogonal non-zero vectors, explain why, for $\vec{v}=c_1\vec{u}_1+\ldots +c_n\vec{u}_n$, we have $c_k=\frac{\vec{v} \cdot \vec{u}_k}{\vec{u}_k \cdot \vec{u}_k}$."
-This I explained by dotting both sides with $\vec{u}_k$ and simplifying everything. However, the question I now have is: how, using this result, can I show that $\vec{u}_1,\ldots ,\vec{u}_n$ are linearly independent? I was thinking of saying that, according to $c_k=\frac{\vec{v} \cdot \vec{u}_k}{\vec{u}_k \cdot \vec{u}_k}$, every coefficient can have only one fixed value, so there is no room for changing one at the expense of another (as one could with coefficients of linearly dependent vectors), but I am not sure if this is right, and whether I am phrasing this correctly. Thanks for your help!
-
-REPLY [7 votes]: To show they are linearly independent, you want to show that any linear combination equal to zero is the trivial linear combination. So, suppose you have a linear combination
-$$\alpha_1\mathbf{u}_1 + \cdots + \alpha_n\mathbf{u}_n = \mathbf{0}.$$
-Now, using the formula you got, taking $\mathbf{v}=\mathbf{0}$, will give you the value of each $\alpha_i$, namely:
-$$\alpha_i = \frac{\mathbf{0}\cdot\mathbf{u}_i}{\mathbf{u}_i\cdot\mathbf{u}_i}.$$
-What does that tell you about $\alpha_1,\ldots,\alpha_n$?<|endoftext|>
-TITLE: Cyclic Extensions of $\mathbb{R}(t)$
-QUESTION [6 upvotes]: Let $\mathbb{R}(t)$ be the field of rational functions over $\mathbb{R}$ (the fraction field of $\mathbb{R}[t]$).
-I am looking for elements in the Brauer group of the field, and the current idea I am following is to find infinitely many cyclic field extensions, and use those to create cyclic division algebras.
-My Galois theory experience is not very rich with transcendental extensions of $\mathbb{R}$ and I'm a bit lost. Am I even on the right path towards the Brauer group? Any ideas on how to prove there are many cyclic extensions?
-
-REPLY [6 votes]: By a famous theorem of Tsen and Lang, the Brauer group of $\mathbb{C}(t)$ is zero. Thus if you take any class in the Brauer group of $\mathbb{R}(t)$ and restrict it to the quadratic extension $\mathbb{C}(t)$, it becomes zero.
-From this it follows that every element of $\operatorname{Br}(\mathbb{R}(t))$ can be represented by a quaternion algebra, so you're on the right track by considering cyclic (quadratic) extensions, of which there are infinitely many. You need to think about when $\mathbb{R}(t)(\sqrt{f(t)}) = \mathbb{R}(t)(\sqrt{g(t)})$ for rational functions $f$ and $g$. Hint: it is enough to consider the case of squarefree polynomials, and then the above equality implies that $f$ and $g$ have the same roots (in $\mathbb{C}$).
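-(For instance, a small worked example of the hint: for distinct $a\in\mathbb{R}$, the polynomials $t-a$ are squarefree with different root sets, so the fields $\mathbb{R}(t)(\sqrt{t-a})$ are pairwise distinct; this alone exhibits infinitely many cyclic (quadratic) extensions of $\mathbb{R}(t)$.)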
It is very tempting for me to speak geometrically in terms of ramification, but I don't know if you would be comfortable with that.<|endoftext|>
-TITLE: Variation on euler totient/phi function
-QUESTION [8 upvotes]: Is there any efficient way to find, for a particular $n$, the cardinality of the set consisting of all numbers coprime to $n$ but bigger than $m$ (assuming I know the prime factorisation of $n$ and $m$)?
-I am looking for an implementation which is simple and fast (like the Euler totient/phi function, which, given the factorisation of $n$, will just need $O(\log n)$ steps).
-
-REPLY [3 votes]: You can try to find instead the number of numbers $1 \leq x \leq m$ which are relatively prime to $n$. Let us denote this number by $d(m,n)$. Then your answer is $\phi(n)- d(m,n)$.
-If $p_1,..,p_k$ are all the primes dividing $n$, a simple inclusion-exclusion calculation tells us what $m-d(m,n)$ (namely the count of numbers up to $m$ which are not relatively prime to $n$) is:
-There are $\left\lfloor \frac{m}{p_i} \right\rfloor$ multiples of $p_i$, there are $\left\lfloor \frac{m}{p_ip_j} \right\rfloor$ multiples of $p_i p_j$, and so on. Thus
-$$m-d(m,n)= \sum_{i=1}^k \left\lfloor \frac{m}{\,p_i\,} \right\rfloor -\sum_{ 1 \leq i < j \leq k} \left\lfloor \frac{m}{p_ip_j} \right\rfloor+\sum_{ 1 \leq i < j< l \leq k} \left\lfloor \frac{m}{p_ip_jp_l} \right\rfloor-\ldots+(-1)^{k-1} \left\lfloor \frac{m}{p_1p_2\cdots p_k} \right\rfloor$$
-Thus, unless I made a mistake, your number is
-$$\phi(n)-m+\sum_{i=1}^k \left\lfloor \frac{m}{p_i} \right\rfloor -\sum_{ 1 \leq i < j \leq k} \left\lfloor \frac{m}{p_ip_j} \right\rfloor+\sum_{ 1 \leq i < j< l \leq k} \left\lfloor \frac{m}{p_ip_jp_l} \right\rfloor-\ldots+(-1)^{k-1} \left\lfloor \frac{m}{p_1p_2\cdots p_k} \right\rfloor$$
-P.S. I am not sure if the sum is calculable in reasonable time; there are $2^k$ terms, where $k$ is the number of prime factors of $n$. $k$ is typically smaller than $\log_2(n)$ but I am not sure if it is always smaller than $\log(\log(n))$.
-Also, it is improbable that the sum can be simplified further, due to the integer part.
-The easy case is when $m$ has exactly the same prime divisors as $n$, and then it can be simplified to $\phi(n)-\phi(m)$, but in this case this result can be obtained easily directly.<|endoftext|>
-TITLE: Why is this not an isomorphism?
-QUESTION [7 upvotes]: Let $T(f(t))$=
-$\begin{bmatrix} f(0) & f(1)\\ f(2) & f(3) \end{bmatrix}$ from $P_2$ to $\mathbb{R}^{2\times 2}$. To show that it is not an isomorphism, I need to show that either the kernel of the transformation is not equal to the zero element only, or that the image is not the whole target space. I am struggling to show either of these; dealing with polynomials in transformations is very counter-intuitive. Thanks for help!
-
-REPLY [9 votes]: A polynomial of degree at most 2 is determined by its values at three points. So if $f(0)=f(1)=f(2)=0$, then $f(3)=0$ also. Thus $T$ is not surjective.<|endoftext|>
-TITLE: what is the expected maximum number of balls in the same box
-QUESTION [5 upvotes]: Suppose I have $m$ distinct boxes and $n$ distinct balls. I put all of these balls into the boxes, with one box possibly containing more than one ball. What is the expected maximum number of balls in one box?
-Appreciate your thoughts on solving this problem.
-
-REPLY [4 votes]: This is answered in a paper by Raab and Steger (they switch your $n$ and $m$). The case $n=m$ is simpler and had been known before (see their introduction).
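-As a quick sanity check, here is a minimal Python Monte Carlo sketch (my own illustration; it assumes only the uniform placement model of the question) that estimates the expectation directly:
-
-    import random
-
-    def max_load(balls, bins):
-        # Throw each ball into a uniformly random bin; return the largest bin count.
-        counts = [0] * bins
-        for _ in range(balls):
-            counts[random.randrange(bins)] += 1
-        return max(counts)
-
-    trials = 100000
-    print(sum(max_load(10, 5) for _ in range(trials)) / trials)
-    # prints roughly 3.76, matching the exact value 1467026/390625 computed below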
-Intuitively, in order to find the "correct" answer, you calculate the expected number of bins $X_t$ which get at least $t$ balls; the value of $t$ such that $X_t \approx 1$ should be "close" to the expectation. -In order to make this rigorous, follow these steps: - -Find a critical value $t_0$ such that if $t \ll t_0$ then the probability that a given box gets at least $t$ balls is very close to $1$, and if $t \gg t_0$ then it is very close to $0$. -When $t \gg t_0$, a union bound shows that with high probability no box gets at least $t$ balls. -When $t \ll t_0$, the expected number of boxes with at least $t$ balls is very close to $m$, and so the probability that no box gets at least $t$ balls is very small. -Deduce that most of the probability mass of the expectation is concentrated around $t_0$, and so the expectation itself is close to $t_0$. - -When doing the calculations, we hit a problem in step 3: the number of expected boxes with at least $t$ balls is not $m$ but somewhat smaller. Raab and Steger show that the variable $X_t$ is concentrated around its expectation (using the "second moment method", i.e. Chebyshev's inequality), and so $\Pr[X_t = 0]$ is indeed small. -Most of the work is estimating the binomial distribution of the number of balls in each box, finding the correct $t_0$ for each case; the method fails when the number of balls is significantly smaller than the number of bins. - -Here is some Sage code: -def analyze_partition(partition, bins): - last = partition[0] - counts = [1] - for part in partition[1:]: - if part != last: - last = part - counts += [1] - else: - counts[-1] += 1 - counts.append(bins - sum(counts)) - return multinomial(*partition) * multinomial(*counts) - -def expected_max(balls, bins): - return sum([max(partition) * analyze_partition(partition, bins) - for partition in partitions(balls) - if len(partition) <= bins])/bins^balls - -When plugging $10$ balls and $5$ bins, I get $$1467026/390625 \approx 3.76.$$<|endoftext|> -TITLE: Norms Induced by Inner Products and the Parallelogram Law -QUESTION [273 upvotes]: Let $ V $ be a normed vector space (over $\mathbb{R}$, say, for simplicity) with norm $ \lVert\cdot\rVert$. -It's not hard to show that if $\lVert \cdot \rVert = \sqrt{\langle \cdot, \cdot \rangle}$ for some (real) inner product $\langle \cdot, \cdot \rangle$, then the parallelogram equality -$$ 2\lVert u\rVert^2 + 2\lVert v\rVert^2 = \lVert u + v\rVert^2 + \lVert u - v\rVert^2 $$ -holds for all pairs $u, v \in V$. -I'm having difficulty with the converse. Assuming the parallelogram identity, I'm able to convince myself that the inner product should be -$$ \langle u, v \rangle = \frac{\lVert u\rVert^2 + \lVert v\rVert^2 - \lVert u - v\rVert^2}{2} = \frac{\lVert u + v\rVert^2 - \lVert u\rVert^2 - \lVert v\rVert^2}{2} = \frac{\lVert u + v\rVert^2 - \lVert u - v\rVert^2}{4} $$ -I cannot seem to get that $\langle \lambda u,v \rangle = \lambda \langle u,v \rangle$ for $\lambda \in \mathbb{R}$. How would one go about proving this? - -REPLY [10 votes]: One should also convince oneself that: -$$\langle\cdot,\cdot\rangle\to\|\cdot\|\to\langle\cdot,\cdot\rangle$$ -$$\|\cdot\|\to\langle\cdot,\cdot\rangle\to\|\cdot\|$$ -(Otherwise really bad things could happen...) 
-Luckily, this can be checked rather easily:
-$$\|x\|'=\sqrt{\frac{1}{4}\left(\|x+x\|^2-\|x-x\|^2\right)}=\|x\|$$
-$$\langle x,y\rangle'=\frac{1}{4}\left(\sqrt{\langle x+y,x+y\rangle}^2-\sqrt{\langle x-y,x-y\rangle}^2\right)=\langle x,y\rangle$$<|endoftext|>
-TITLE: How to find a basis for this sub-space?
-QUESTION [5 upvotes]: I am given a subspace of all polynomials $f(t)$ in $\mathbf{P}_2$ such that $f(1)=0$. I know that a basis for this space is $1-t$, $1-t^{2}$, and when I look at it, it makes perfect sense as to why. I was just wondering what is a systematic way of finding it, without eyeballing. Thanks!
-
-REPLY [10 votes]: Note that there is no such thing as the basis for this subspace: there are only two vector spaces in the entire universe that have just one basis: the zero vector space (okay, this may count as more than one vector space: the zero vector space over any field), and the one dimensional vector space over a field of $2$ elements. This is neither, so there is no such thing as "the" basis of the subspace: the subspace has lots of different bases, this is just one of them.
-Now, as to this particular problem. Added: You don't actually need the following paragraph, but it gives you a way to know ahead of time how "big" a basis you are looking for, which may be useful in other situations.
-The subspace you want is the nullspace of the linear transformation $\mathbf{P}_2\to\mathbb{R}$ given by "evaluation at $1$." By the Rank-Nullity Theorem, you know the nullspace has dimension $2$, so you are looking for two linearly independent polynomials that are $0$ at $1$. Added: To see how the Rank-Nullity Theorem comes into play, remember that when you interpret the Rank-Nullity Theorem in terms of linear transformations it tells you that if $T\colon\mathbf{V}\to\mathbf{W}$ is a linear transformation, then
-$$\dim(\mathbf{V}) = \mathrm{rank}(T) + \mathrm{nullity}(T) = \dim(\mathrm{Im}(T)) + \dim(\mathrm{ker}(T)).$$
-Here, $\mathbf{V}=\mathbf{P}_2$, $\mathbf{W}=\mathbb{R}$, and $T$ is "evaluation at $1$". Since $\dim(\mathbb{R}) = 1$, the only two possibilities for the rank of $T$ are $0$ and $1$; but the rank of $T$ is not $0$, because $T$ is not constant $0$ (it maps the polynomial $x$ to $1\neq 0$). So the rank is $1$, and since $\dim(\mathbf{P}_2) = 3$, that means the nullity is $2$. So the subspace you are looking for has dimension $2$.
-A way to think about the Rank-Nullity Theorem is like the "law of conservation of matter": dimensions are not created nor destroyed, they are just transformed. If you have $\mathbf{V}$ of dimension $n$, then if you add the dimensions you "transform" to $\mathbf{0}$ and the dimensions you get in the image, you should get all the dimensions (they add up to $n$).
-And now, finally, how to get a basis:
-A polynomial is $0$ at $1$ if and only if it is divisible by $t-1$. So you are looking for a basis for the polynomials that are multiples of $t-1$, that is, of the form $(t-1)q(t)$, where $q(t)$ is a polynomial of degree $0$ or $1$ (it cannot be degree two, since you are in $\mathbf{P}_2$). So if you pick any basis for the subspace of polynomials of degree at most $1$, say $1$ and $t+1$, then you get a basis for your subspace by considering the products $(t-1)1$ and $(t-1)(t+1)$: just take your arbitrary $(t-1)q(t)$, express $q(t)$ in terms of the vectors you took, $1$ and $t+1$, and this gives you how to express $(t-1)q(t)$ in terms of $(t-1)1$ and $(t-1)(t+1)$.
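-For a concrete instance of this recipe: take $q(t) = 2t+3 = 1\cdot 1 + 2\cdot(t+1)$; then $(t-1)q(t) = 2t^2+t-3 = 1\cdot(t-1) + 2\cdot(t^2-1)$, which expresses it in the basis $\{t-1,\ t^2-1\}$.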
-You could also have gotten a basis by taking $1$ and $t$, to get $t-1$ and $t^2-t$; or any basis for the subspace of polynomials of degree at most $1$ will do.<|endoftext|>
-TITLE: A geometric look at $\frac{1}{a}+\frac{1}{b}=\frac{1}{c}$?
-QUESTION [18 upvotes]: Is there a geometric way of looking at the relationship between the positive real numbers $a$, $b$ and $c$ if $\frac{1}{a}+\frac{1}{b}=\frac{1}{c}$?
-
-REPLY [2 votes]: In a right-angled triangle (with legs $a$ and $b$, and hypotenuse $c$) you can inscribe a square in two "obvious" ways:
-
-The square shares the right angle with the triangle and touches $c$. Then $1/s = 1/a + 1/b$, where $s$ is the side of this square.
-One side of the square lies on $c$, and the square touches $a$ and $b$. Then $1/t = 1/c + 1/h$, where $t$ is the side of this square and $h$ is the height on $c$.<|endoftext|>
-TITLE: Estimating $\#\{\{\alpha k\} < 1/\sqrt{k} : k \leq n\}$ for irrational $\alpha$
-QUESTION [6 upvotes]: Suppose $\{\alpha n\}$ is the fractional part of $\alpha n$. Put
-$$A_{\alpha}(n) = \#\{\{\alpha k\} < 1/\sqrt{k} : k \leq n\}.$$
-If $\alpha$ is irrational, can I find some constant $K$ such that $A_{\alpha}(n) < K \sqrt{n}$ for all $n$? Does the order of $A_{\alpha}(n)$ depend on $\alpha$? Suppose $1/\sqrt{k}$ is replaced by some function $f(k)$. What can I say about the number of $\{\alpha n\}$ less than $f(n)$ as $n$ tends to infinity?
-
-REPLY [5 votes]: The Weyl equidistribution theorem says that for irrational $\alpha$ the fractional parts $\{\alpha k\}$ are equidistributed in the interval $[0,1]$.
-To apply this to your particular problem, consider a large interval $k\in[N,2N]$. Because of equidistribution, the numbers $k$ will "hit" the condition $\{\alpha k\} < 1/\sqrt{N}$ roughly $1/\sqrt{N}$ of the time, i.e. there will be roughly $N·1/\sqrt{N} = \sqrt{N}$ numbers $k$ from the interval that fulfill the condition.
-Of course, we are actually interested in the condition $\{\alpha k\} < 1/\sqrt{k}$, so we have overestimated things a bit, but it will still work out.
-Now, piecing intervals together by choosing $N=2^M$ as a sequence of powers of two, we obtain an estimate along the lines of
-$$A_\alpha(n=2^M) \lesssim \sqrt{1} + \sqrt{2^1} + \sqrt{2^2} + .. + \sqrt{2^{M-1}} \leq K \sqrt{2}^M = K\sqrt{n} $$
-as desired. You might have to fill in some epsilons and stuff to make the proof precise, but this is the core argument.
-In the general case, a similar argument will yield a good bound as long as the function $f(k)$ doesn't vanish too fast. If it does vanish very fast, then it can only get better, though a better bound might be harder to prove.<|endoftext|>
-TITLE: How do you find the Lie algebra of a Lie group (in practice)?
-QUESTION [24 upvotes]: Given a Lie group, how are you meant to find its Lie algebra? The Lie algebra of a Lie group is the set of all the left invariant vector fields, but how would you determine them? My group is the set of all affine maps $x \rightarrow A.x+v$ from $R^n$ to $R^n$ under function composition.
-
-REPLY [30 votes]: Here is a worked-out example:
-What is the Lie algebra of the group of rotations in 3-dimensional space, $SO(3)$?
-Matrices $A\in SO(3)$ are defined by the property that they are invertible and that the scalar product is invariant: $\langle A\vec x,A\vec y\rangle = \langle \vec x,\vec y\rangle$ for all vectors $\vec x,\vec y\in \mathbb R^3$. Expanding the latter condition into coordinates, you can see it is equivalent to $A^TA = I$.
-To find the Lie algebra, take a smooth path $A(t)$ with $A(0) = I$.
To first order, it can be written as
-$$A(t) = I + t·H + \mathcal O(t^2) .$$
-Plugging this into the condition $A^TA=I$, we get to first order
-$$I = (I+t·H)^T(I+t·H) = I + t·(H^T + H) .$$
-Hence, the condition on the tangent vector $H$ is that it is antisymmetric:
-$$ H^T = -H .$$
-In other words, the Lie algebra of the rotation group $SO(3)$ consists of antisymmetric $3\times 3$ matrices, which must have the form
-$$ H = \begin{pmatrix}0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0\end{pmatrix} .$$
-It is not difficult to show that exponentiating any such matrix will yield an element that preserves the scalar product. Hence, this is the whole Lie algebra.
-By the way, the elements of the Lie algebra $so(3)$ are usually represented by the angular velocity $\vec\omega=(\omega_1,\omega_2,\omega_3)$ such that multiplication becomes the cross product
-$$ H·\vec x = \vec\omega \times \vec x .$$
-
-A similar method applies to your original problem. For instance, you can embed affine transformations of $\mathbb R^n$ into $GL_{n+1}(\mathbb R)$ by
-$$ (x \mapsto Ax + v) \iff \begin{pmatrix} A & v \\ 0 & 1 \end{pmatrix} \in GL_{n+1}(\mathbb R).$$
-You can express this as an algebraic condition: a matrix $B\in GL_{n+1}(\mathbb R)$ represents an affine mapping of $\mathbb R^n$ if and only if
-$$ (0,0,…,1)·B = (0,0,…,1) .$$
-This will induce an equation for the first derivative of a path $B(t)$, and as above, you will obtain the Lie algebra as a set of matrices.<|endoftext|>
-TITLE: What does limit notation with an underline or an overline mean?
-QUESTION [10 upvotes]: I've never seen this notation before, and I'm having trouble finding a reference through search. Could someone explain what these notations mean for me?
-In context, the statement they're in is the following:
-a bounded $f$ is Riemann integrable iff
-$$\varliminf_{||C||\to 0}\mathcal{L}(f; C)\ge\varlimsup_{||C||\to 0}\mathcal{U}(f;C)$$
-where $C$ is a non-overlapping, finite, exact cover of a rectangular region $J$ in $\mathbb{R}^N$, $||C||$ denotes mesh size, and $\mathcal{L}, \mathcal{U}$ represent the lower and upper Riemann sums, respectively.
-
-REPLY [4 votes]: I agree: it is definitely lim sup, lim inf; I have seen it used many times. If you do not see the top or bottom, you still have a Lim, but ---No Sup For You!<|endoftext|>
-TITLE: Connecting a $n, n$ point grid
-QUESTION [21 upvotes]: I stumbled across the problem of connecting the points on an $n, n$ grid with a minimal number of straight lines without lifting the pen.
-For $n=1, n=2$ it is trivial. For $n=3$ you can find the solution with a bit of trial and error (I will leave this to the reader as it is fun to do; you can do it with 4 lines). I found one possible solution for a $4,4$ grid and animated it; it uses 6 lines and is probably optimal (this will hopefully help you to understand the problem better; the path doesn't have to be closed like in the animation, open ends are allowed!):
-
-Now my question is, for higher $n$, is there a way to get the minimal number of lines to use, and does an algorithm exist to find an actual solution? I think it's quite hard to model the "straight lines" with graph theory.
-Edit:
-Reading Eric's excellent answer I found the following website: http://www.mathpuzzle.com/dots.html that also gives an algorithm to connect the points in $2n-2$ steps, solutions up to $10,10$, and mentions:
-
-Toshi Kato conjectures: On $(2N+1)x(2N+1)$ grid, $N \geq 2$, Using $4N$ continuous lines, and not lifting your pencil from the paper, can go through all the dots of a $(2N+1)x(2N+1)$ grid, ending at the same place started. But must visit at least one dot twice in the route.
-On $(2N)x(2N)$ grid, $N \geq 2$, Using $4N-2$ continuous lines, and not lifting your pencil from the paper, can go through all the dots of a $(2N)x(2N)$ grid, ending at the same place started. And can visit each dots just once.
-
-It seems to be an open problem to show that $2n-2$ is optimal.
-Also I found the following page with a proof that in the $3,3$ grid there cannot be $2$ parallel lines: http://fahim-patel.blogspot.com/2011/01/proof.html I think it might be interesting for coming up with a proof that $2n-2$ is optimal (however, maybe there is no such proof; we have only seen solutions for very small $n$, and for bigger $n$ there might be some developments we don't know about).
-
-REPLY [5 votes]: Interesting question. In what follows consider the $n\times n$ square grid. Notice that the trivial solution obtained by following a square spiral towards the center starting from an outside corner yields a solution with $2n-1$ lines.
-To see why, notice that two lines reduce the grid to an $(n-1)\times (n-1)$ grid, and since the $1\times 1$ grid requires only 1 line, induction yields $2n-1$ lines.
-Can we do better? Based on the posts in the forum and my own attempts, I believe the answer is that $2n-2$ lines is optimal. Showing this is possible is again easy. Start at a corner, and spiral towards the center until there is only a $3\times 3$ grid remaining. Recall from above that 2 lines in the spiral will reduce the grid by a dimension, so thus far we will have used $2\cdot (n-3)=2n-6$ lines. On the last line, end it so that we are in a position to go through the diagonal of the $3\times 3 $ grid. Since the $3\times 3$ grid has a solution with $4$ lines starting diagonally from a corner, we have found a solution to the $n\times n$ grid using only $2n-2$ lines.
-Now, the question remains, is $2n-2$ optimal? The more I think about it, the more I believe it, but a proof does not leap into mind. I will think more.
-Edit: Of course $n=1,2$ are exceptions, and require $2n-1$ lines. The method I presented can be modified slightly to produce a closed path. All that must be changed is the way the final $3\times 3$ grid is traversed, and perhaps moving the starting position of the first line to a spot slightly outside of the original $n\times n$ grid. In other words the conjecture of Toshi Kato is true.
-Edit 2: For a proof that $2n-2$ is optimal, see Joriki's answer to this question Not lifting your pen on the $n\times n$ grid.<|endoftext|>
-TITLE: Techniques to compute complex integrals over infinite contours
-QUESTION [17 upvotes]: In asymptotic analysis and analytic number theory, one often has to deal with complex integrals over infinite contours in the complex plane, and the required techniques to do so often go beyond the standard courses of complex analysis in one variable.
In particular, I am interested in the following type of argument which I don't fully understand and which I will try to illustrate by an example (taken from Paris and Kaminski, "Asymptotics and Mellin-Barnes Integrals"):
-Consider the so-called Cahen-Mellin integral
-$$e^{-z}=\frac{1}{2\pi i}\int_{(c)}\Gamma(s)z^{-s}ds,\ |\arg(z)|<\frac{\pi}{2}, z\neq 0,$$
-which is an integral representation of the exponential function by taking the inverse Mellin transform of the Gamma function, where the integration contour $(c)$ stands for the vertical line $\{\Re(s)=c\}$ with some $c>0$. It can be shown that the integrand has "the controlling behavior" $|z|^{-\sigma}O(|t|^{\sigma-{1\over 2}}e^{t\arg(z)-{1\over 2}\pi|t|})$ as $|t|\to\infty$, where $s=\sigma + it$. Now, aside from the obvious way to show the validity of the above integral representation, Paris and Kaminski argue that because of the aforementioned exponential decay of the integrand we are allowed to move the contour of integration over the poles of the Gamma function and use the residues of the latter to obtain the exponential series. This is precisely the argument I want to understand, so I will try to break it down into a few smaller questions:
-(Q1) How does the exponential decay of the integrand allow us to displace the contour of integration over singularities? Is there a more general setup where the asymptotic behavior of the integrand allows for moving the contour of integration through and over singularities?
-(Q2) After the displacement of the integration contour (still a vertical line), what kind of a residue theorem allows for considering all of the infinitely many singularities of the gamma function?
-The version of the residue theorem I know uses the bounded interior of a simple closed contour in the complex plane and only finitely many residues contained in there.
-Remark 1: In order to compute the above integral in a classical way, I would take finite line segments of the vertical line, symmetric with respect to the positive real axis, i.e. $\{\Re(s)=c, -r_n\leq\Im(s)\leq r_n\}$, construct circle segments with radii $r_n$ encompassing each of the poles, with $r_n\uparrow \infty$ suitably chosen such that no poles lie on the segment contours, and then show that the integrals over the half-circles tend to zero as $n\to\infty$, thus obtaining on the one hand the integral over the infinite vertical line and on the other hand the infinite sum of the residues. However, I haven't really checked whether the exponential decay of the integrand would suffice for the half-circle integrals to vanish in the limit.
-Remark 2: I could imagine that there might be a version of the residue theorem suitably formulated for the Riemann sphere resp. $\bar{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}$, where basically infinite contours from the complex plane correspond to closed ones on the sphere.
-(Q3) Where could one find a more systematic treatment of integrals over infinite contours in the complex plane, including contour shifting over poles, other types of contour modifications, usage of infinitely many residues, as well as other techniques for the exact computation of such integrals? I understand that such techniques are often to be applied "individually", thus such literature would ideally contain a few good examples.
-Thanks in advance for your attention and sorry if I appear to sound too confused :-), I am only trying to fill in certain "gaps" in my knowledge of complex analysis.
-PS: I am not interested in numerical computations or general asymptotic expansions for contour integrals (even though in the above example the residues "expansion" appears as a special case thereof).
-
-REPLY [7 votes]: The passage in Paris and Kaminski's "Asymptotics and Mellin-Barnes Integrals" is available here. It seems to me that the formulation "because of this exponential decay, we are free to displace the contour of integration [...] over the poles [...] to produce the exponential series" is not intended to mean that by displacing the contour over the poles we don't change the value of the integral, but rather that because of the exponential decay, we can displace the contour of integration and the value of the integral will change only due to the poles, and we can produce the exponential series by adding up the residues since the value of the remaining integral can be shown to go to zero as we move it to (negative) infinity. So perhaps there is nothing as mysterious here as you suspected? (To justify displacing the contour of integration, we have to integrate around infinite rectangular strips between two parallels of the imaginary axis. Because of the exponential decay, the pieces at infinity don't contribute, so the difference between the integrals along the two parallels of the imaginary axis must be given by the residues of the poles that lie within the strip.)<|endoftext|>
-TITLE: If $\frac{dy}{dt}dt$ doesn't cancel, then what do you call it?
-QUESTION [26 upvotes]: I have $y$ as a function of $t$.
-I have reached a situation here where I need to evaluate
-$$\displaystyle \int_0^b{\frac{dy}{dt}dt}$$
-Now clearly $y$ has dependence on $t$, otherwise $\displaystyle \frac{dy}{dt}$ would be 0.
-So now I write
-$$\displaystyle \int_{y(0)}^{y(b)}{dy} = y \rvert_{y(0)}^{y(b)} = y(b) - y(0)$$
-I know that $dt$'s don't cancel, but what do you call it, then? Just a "change of variables"? How do we justify where $dt$ went?
-
-REPLY [2 votes]: Yes, it is a change of variables. In $$ \int_0^b \frac{dy}{dt} \,dt ,$$ the independent variable is $t$ and everything else (namely $y$) is treated as a function of $t$. But in $$ \int_{y(0)}^{y(b)} dy ,$$ the independent variable is $y$ and (at least once you've worked out what constants $y(0)$ and $y(b)$ are) everything else (namely nothing) is treated as a function of $y$. So you changed variables from $t$ to $y$.
-As for where the $dt$ went, I wonder why you want to know? It is possible (despite what people may tell you) to think of differentials as honest-to-goodness things, so that you are dividing by them in derivatives and multiplying by them in integrals, and in that case the $dt$s certainly do cancel. But if you don't want to do that, if you treat the $dt$ as merely part of the notation, then it didn't go anywhere. When the variable was $t$, you wrote $dt$ at the end of the integral, and when the variable became $y$, you wrote $dy$; that's what you write in this notation.
-Perhaps you should ask where the $dy/dt$ went. Since you changed variables from $t$ to $y$, it was replaced with $dy/dy$, which is $1$. And then there is a quirk of this notation that you don't bother to write the integrand when it is $1$. (This quirk makes sense if you think of differentials as actual things that you multiply by in an integral; multiplying by $1$ doesn't do anything and so can be omitted.
But if you think of them as just part of the notation, then this is just a quirk.)<|endoftext|>
-TITLE: Minimal polynomials of $\sqrt[4]{2}i$ over $\mathbb{Q}$ and $\mathbb{R}$
-QUESTION [5 upvotes]: Can someone tell me if this is right:
-I would like to find the minimal polynomial of
-(i) $\sqrt[4]{2}i$ over $\mathbb{Q}$
-(ii) $\sqrt[4]{2}i$ over $\mathbb{R}$
-(i):
-$\sqrt[4]{2}i$ is a root of $f(x) = x^4-2$. This is already monic, so to show that this is a minimal polynomial I only need to show that it is irreducible. Edit: To do that, I can use Eisenstein's criterion: $p=2$ does not divide the leading coefficient $a_4 = 1$, $p$ divides all the other coefficients, and $p^2$ does not divide $a_0 = -2$; therefore it is irreducible over $\mathbb{Q}$.
-(ii):
-This time, $\sqrt[4]{2}i$ is a root of $f(x) = x^2+\sqrt{2}$. For polynomials of degree two it's enough to check if they have a root. The only roots this one has are complex, therefore it is irreducible over $\mathbb{R}$ and therefore the minimal polynomial.
-So my more general question is: is the way to find a minimal polynomial of an element $a$ in general to find a polynomial $f$ such that $f(a) = 0$, then to normalize $f$, and then to show that $f$ is irreducible?
-Many thanks for your help!
-
-REPLY [4 votes]: Part (i) as given is incorrect: lack of roots does not imply irreducibility when the polynomial is of degree greater than 3, because the polynomial could split into irreducible polynomials of degrees greater than 1.
-You have shown that $x^4-2$ has no linear factors modulo $3$, but you cannot conclude from this that it is irreducible modulo $3$: it could have two irreducible quadratic factors. And indeed, as chandok writes,
-\begin{align*} (x^2+x+2)(x^2+2x+2) &= x^4 + 2x^3 + 2x^2 + x^3 + 2x^2 + 2x + 2x^2 + 4x + 4\\ &= x^4 + 3x^3 + 6x^2 + 6x + 4 = x^4 + 1 = x^4 - 2, \end{align*}
-so the polynomial is not irreducible over $\mathbb{F}_3$.
-Instead, I would suggest using Eisenstein's Criterion to show $x^4-2$ is irreducible over $\mathbb{Q}$, as it is pretty easy to apply there.
-Part (ii) is correct.
-As to your final question: there is a top-down approach and a bottom-up approach. In the top-down approach, if you can find any polynomial $f(x)$ that has $a$ as a root, then you know that the minimal polynomial will divide $f(x)$; in fact, it will be an irreducible factor of $f$. So you can try to find which irreducible factor of $f$ has $a$ as a root. This may be $f$ itself, or some factor. For example, to find the minimal polynomial of a $7$th root of unity, you could use the fact that it satisfies $x^7-1$. Then factor $x^7-1 = (x-1)(x^6+x^5+x^4+x^3+x^2+x+1)$, and you know that $a$ is a root of either $x-1$ or the second factor. Then you would look at the second factor (since presumably your $a$ is not $1$). And so on.
-The bottom-up approach is described by Jim Belk in his answer, where you "build up" the polynomial by considering powers of $a$ (or alternatively, images under the action of some Galois group).
Because convergent sequences are Cauchy and $L^1$ limits are unique up to equality almost everywhere, such sequences will be Cauchy and nonconvergent in $C[0,1]$.
-E.g., let $f$ be $1$ on $[0,\frac{1}{2}]$ and $0$ elsewhere. Let $f_n$ be the continuous function that is $1$ on $[0,\frac{1}{2}]$, $0$ on $[\frac{1}{2}+\frac{1}{n},1]$, and linear on $[\frac{1}{2},\frac{1}{2}+\frac{1}{n}]$. Then because $f_n\to f$ in $L^1$, $(f_n)$ is Cauchy. (The Cauchy-ness is also easy to verify directly.) If there were a limit function $g\in C[0,1]$, you would have $g=f$ a.e. But this is impossible, because the left-hand and right-hand limits at $\frac{1}{2}$ would not agree.
-More generally, a Cauchy sequence in a metric space $X$ with completion $\overline{X}$ that does not converge in $X$ is basically the same as a sequence in $X$ that converges to an element of $\overline{X}\setminus X$. In a case like this where $X=C[0,1]$ with $L^1$ norm and $\overline{X}=L^1[0,1]$ have explicit descriptions, you can find examples by starting with an element of $\overline{X}\setminus X$, and finding a sequence in $X$ converging to that element. The same idea applies to demonstrating nonconvergent Cauchy sequences in $\mathbb{Q}$, where you can take any irrational number and consider the sequence of truncated decimal expansions.
-
-REPLY [6 votes]: By measure theory you can always pass to a subsequence that converges pointwise a.e., so it's usually quite easy to determine the only possible limit. Then you should try to build the examples in such a way that the pointwise (a.e.) limit has no chance of being continuous.
-Here's an example of what I have in mind:
-$f_{n}(x) = \begin{cases} 0 & 0 \leq x \leq \frac{1}{2} \\ nx - \frac{n}{2} & \frac{1}{2} \leq x \leq \frac{1}{2} + \frac{1}{n} \\ 1 & \frac{1}{2} + \frac{1}{n} \leq x \leq 1 \end{cases}$ defined for $n \geq 2$. It's easy to see that it converges in the $L^{1}$-norm to the characteristic function of $[\frac{1}{2},1]$.
-
-Added: Since an $L^{1}$-Cauchy sequence will always converge ($L^{1}$ is complete by the theorem of Riesz-Fischer (or Fréchet-Riesz)), the only thing you have to do is to pick a function in $L^{1}$ that doesn't have a continuous representative. The easiest example is a step function (that's probably the reason why all three answerers came up with essentially the same example). This suffices, because every function in $L^{1}$ is the $L^1$-limit of a sequence of continuous (even smooth) functions, as can be seen by convolving with mollifiers, for example. In other words, $C[0,1]$ (or even $C^{\infty}[0,1]$) are dense in $L^{1}$.
-
-REPLY [3 votes]: How about the sequence of functions given by $f_n(x)=x^n$? This sequence is Cauchy in $(C([0,1]),\|\cdot\|_{L^1(0,1)})$ but does not converge in $C([0,1])$.
-EDIT: This example doesn't work. Thanks for pointing that out. An example that does work is given by the sequence of continuous functions $f_n$ with
-$f_n(x)=\frac{n}{2}(x-\frac{1}{2}+\frac{1}{n})$ for $|x-\frac{1}{2}|\leq \frac{1}{n}$, $f_n(x)=0$ for $x<\frac{1}{2}-\frac{1}{n}$ and $f_n(x)=1$ for $x>\frac{1}{2}+\frac{1}{n}$. That's basically a continuous approximation of the characteristic function of the interval $[\frac{1}{2},1]$, which is not continuous.<|endoftext|>
-TITLE: Does Fermat's Last Theorem hold for cyclotomic integers in $\mathbb{Q}(\zeta_{37})$?
-QUESTION [64 upvotes]: The first irregular prime is 37.
Does FLT(37)
-$$x^{37} + y^{37} = z^{37}$$
-have any solutions in the ring of integers of $\mathbb Q(\zeta_{37})$, where $\zeta_{37}$ is a primitive 37th root of unity?
-Maybe it's not true, but how could I go about finding a counter-example? (for any cyclotomic ring, not necessarily 37)
-
-REPLY [4 votes]: This question was answered on MathOverflow. I am writing this to close up this question and marking this answer as community wiki according to MSE's guidelines. The answer is due to Tauno Metsänkylä.<|endoftext|>
-TITLE: Using Central Limit Theorem
-QUESTION [5 upvotes]: Can anyone help me with this:
-Using the central limit theorem for suitable Poisson random variables, prove that $$ \lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}=1/2$$
-Thanks!
-
-REPLY [10 votes]: Hint: A Poisson$(n)$ random variable can be represented as the sum of $n$ i.i.d. Poisson$(1)$ rv's.<|endoftext|>
-TITLE: What is the term for the mathematical relationship between $\mathbb{Z}_n$ and $\mathbb{Z}$?
-QUESTION [6 upvotes]: ...and if it's important, do those ideas have any generalization to more "exotic" number systems?
-The motivation for my question comes from reading some of the excellent answers posted to other questions, such as a recent one asking whether $\sqrt{1 + 24n}$ always yields primes. In particular, I've been struck by commenter Bill Dubuque's repeated use of what he terms "modular reduction", casting a problem in $\mathbb{Z}_n$ to make it much easier to solve in the general case for $\mathbb{Z}$.
-What I don't quite grok is why this works; why can we do this? Is there a deep reason? At first glance, to me, there doesn't seem to be anything inherent in the axioms of a given $\mathbb{Z}_k$ that necessarily ties it intimately with $\mathbb{Z}$; all we care about is that it has $k$ elements and is closed under the binary operation of addition. It doesn't seem to encode information about $\mathbb{Z}$'s other elements.
-Now, in saying that, I'm not sure I'm on the right foot here at all, so I'll analogize to something I know a little better: One sees, in some textbook definitions, an identification of $\mathbb{C}$ with ordered pairs $(a,b)$, $(c,d)$ with rules for addition and multiplication that send them to $(a + c, b + d)$ and $(ac - bd, ad + bc)$, respectively, with no immediate hint about the importance of $\mathbb{C}$ in that it is the algebraic closure of $\mathbb{R}$, which is a highly nontrivial theorem that needs to be proved through the FTA. Is there a similar relationship between $\mathbb{Z}_n$ and $\mathbb{Z}$, and is that extensible to other systems?
-I'd appreciate any answers and references tailored to someone who's taken up to the middling undergraduate math level. (e.g. Linear Algebra, elementary abstract Algebra, undergraduate Complex Analysis...)
-
-REPLY [15 votes]: Well, the first thing to say is that the ring $\mathbb{Z}/n\mathbb{Z}$ is the quotient of the ring $\mathbb{Z}$ by the ideal $n \mathbb{Z}$.
-Is this already familiar to you? If not, you should take a course and/or read a book on basic abstract algebra -- and, in the meantime, check out this wikipedia article. If so, could you clarify your question: what more than this do you want to know?
From this it follows that if $P = P(x_1,\ldots,x_n) = 0$ is any polynomial equation with $\mathbb{Z}$-coefficients, then any solution $(a_1,\ldots,a_n) \in \mathbb{Z}^n$ induces a solution $(f(a_1),\ldots,f(a_n)) \in R^n$. Thinking contrapositively, if you can find a ring -- any ring -- for which the
-equation $P = 0$ has no solutions in $R^n$, then you know right away that it has no solutions in $\mathbb{Z}^n$. The most classical -- and also, not coincidentally, effective -- choices of rings $R$ are $\mathbb{R}$ (the real numbers) and $\mathbb{Z}/n\mathbb{Z}$. For instance:
-Since $P(x,y) = x^2 + y^2 + 3 = 0$ has no solutions over $\mathbb{R}$, it has no solutions over $\mathbb{Z}$.
-Since $P(x,y) = x^2 + y^2 - 3 = 0$ has no solutions over $\mathbb{Z}/4\mathbb{Z}$, it has no solutions over $\mathbb{Z}$.
-As for your more foundational question as to whether the finite rings $\mathbb{Z}/n\mathbb{Z}$ are defined in reference to $\mathbb{Z}$: by coincidence you have hit upon one of my standard pedagogical rants. The answer is a resounding yes. If you don't think of the elements of $\mathbb{Z}/n\mathbb{Z}$ as equivalence classes of integers, how do you know that the addition and multiplication operations are commutative and associative, and that multiplication distributes over addition? To show this, you do not have to say "quotient map" if you really don't want to, but you have to use it, i.e., that the operations $+,\cdot$ in $\mathbb{Z}/n\mathbb{Z}$ are defined in terms of those in $\mathbb{Z}$.<|endoftext|>
-TITLE: Question about Singular Homology section in Hatcher
-QUESTION [19 upvotes]: From Hatcher, "Algebraic Topology," Chapter 2, "Singular Homology" section (p. 108-109 in my copy):
-
-Cycles in singular homology are defined algebraically, but they can be given a somewhat more geometric interpretation in terms of maps from finite $\Delta$-complexes. To see this, note that a singular $n$-chain $\xi$ can always be written in the form $\sum_i \varepsilon_i \sigma_i$ with $\varepsilon_i = \pm 1$, allowing repetitions of the singular $n$-simplices $\sigma_i$. Given such an $n$-chain $\xi = \sum_i \varepsilon_i \sigma_i$, when we compute $\partial \xi$ as a sum of singular $(n-1)$-simplices with signs $\pm 1$, there may be some canceling pairs consisting of two identical singular $(n-1)$-simplices with opposite signs. Choosing a maximal collection of canceling pairs, construct an $n$-dimensional $\Delta$-complex $K_\xi$ from a disjoint union of $n$-simplices, one for each $\sigma_i$, by identifying the pairs of $(n-1)$-dimensional faces corresponding to the chosen canceling pairs. The $\sigma_i$'s then induce a map $K_\xi \rightarrow X$. If $\xi$ is a cycle, all the $(n-1)$-simplices of $K_\xi$ come from canceling pairs, hence are faces of exactly two $n$-simplices of $K_\xi$. Thus $K_\xi$ is a manifold, locally homeomorphic to $\mathbb{R}^n$, except at a subcomplex of dimension at most $n - 2$.
-
-It's this last bit that has me confused, perhaps because I'm unable to visualize a nontrivial 3-manifold. We're building a $\Delta$-complex by identifying the $(n-1)$-faces of simplices; there are only finitely many such simplices; we're not required to identify the faces in any unusual way; and so forth. How, then, can we end up with something which is not a manifold? I'd really appreciate an explicit example.
-I've been told that this is a general example of a "pseudomanifold," but all the examples of pseudomanifolds that I've been able to locate and follow wind up making a non-manifold by essentially identifying vertices to get pinched spaces. This can't happen under the present construction because we're always identifying the largest proper faces. So I'm quite confused by the situation.
-EDIT: Looks like there are some more questions raised in the comments as to what the construction actually means. If there's a name for this construction, or any other source that discusses it, I'd appreciate a reference. And of course if anyone can shed any additional light on the topic that would be quite welcome too.
-EDIT: I may have inadvertently overwritten someone else's edit just then, judging by a message that popped up. Apologies if so. (How do I tell / fix this?)
-
-REPLY [18 votes]: First, a remark that I think is relevant:
-understanding the construction $K_{\xi}$ is related to the question of whether a homology class can be represented by a submanifold. This is not always possible;
-see e.g. this MO answer.
-On the other hand, it is always possible for $n$-cycles for small values of $n$ (if we are considering the homology of a manifold),
-which may be one reason why it is hard to visualize non-manifold examples of $K_{\xi}$.
-E.g. in the case $n = 2$, we are gluing triangles along their edges, with the rule that exactly two triangles meet along a common edge. This always gives a closed $2$-manifold. In general, unless I am confused, the fact that for $n = 2$ we always get a manifold implies in general that the non-manifold points lie in a subcomplex of dimension at most $n - 3$ (not just $n - 2$). So to find a counterexample we should take $n = 3$. I will give such a counterexample in a moment, but first let me describe a general approach to thinking about this situation.
-When gluing simplices like this, the way to investigate whether the resulting object is a manifold or not is to consider the link of each vertex. Namely, if a bunch of $n$-simplices meet at a vertex $v$, take a transverse slice to each simplex just below the vertex (I am imagining that the simplex is sitting on the face opposite to $v$, so that $v$ is at the top of the simplex),
-to get an $(n-1)$-simplex. Now these glue together to make a closed simplicial $(n-1)$-complex that surrounds $v$; this is the link of $v$. Note that these $(n-1)$-simplices meet along $(n-2)$-dimensional faces, and so the link is a $K_{\xi}$-type object, but of one dimension less.
-In the case that $n = 2$, we get a bunch of segments being joined at their vertices, and hence a circle. No drama there.
-But now suppose that $n = 3$. Then the link is formed by a bunch of triangles gluing together, and we have agreed that this gives a $2$-manifold. But which one?
-If $K_{\xi}$ is locally Euclidean at $v$, then this surface would have to be a
-$2$-sphere. But a priori it doesn't have to be, and so in this case we can
-get a non-manifold example!
-In practice, to make an example we should take a cone on a surface that is not
-a $2$-sphere, e.g. a cone on a $2$-torus.
-Precisely: triangulate a $2$-torus, e.g. by choosing two triangles
-$\Delta_1$ and $\Delta_2$, and identifying the three edges of the first with the three edges of the second in the appropriate manner.
-Now to form the cone on this, replace $\Delta_i$ by a $3$-simplex
-$\tilde{\Delta}_i$.
Regard $\Delta_i$ as one of the faces of $\tilde{\Delta}_i$, -and label the other three faces according to the edge along which they -meet $\Delta_i$. Now glue the three faces of $\tilde{\Delta}_1$ other than -$\Delta_1$ with the three faces of $\tilde{\Delta}_2$ other than $\Delta_2$ -according to the same gluing scheme we used previously to construct the $2$-torus. What we end up with is a three dimensional simplicial complex whose boundary is equal to $\Delta_1 + \Delta_2$, i.e. a $2$-torus. But it is not a $3$-manifold; rather it is a cone on the $2$-torus. -If we let $\xi$ be the $3$-chain $\tilde{\Delta}_1 + \tilde{\Delta}_2$, -then $K_{\xi}$ is the cone on the $2$-torus, and so is not a $3$-manifold. -Of course, this $\xi$ is not a cycle (so the boundary of $K_{\xi}$ is non-empty), but we could take two such cones and glue -them along their common $2$-torus boundary to get a three-dimensional simplicial complex without boundary, and then take $\xi$ to be the sum of the four $3$-simplices involved; then $\xi$ would be a cycle, but $K_{\xi}$ would not be a manifold; it has two vertices whose links are $2$-tori rather than $2$-spheres. -Concluding remark: In dimension $2$, the only way to get pseudo-manifolds that are not manifolds is to explicitly glue together vertices, as you observed. But in dimension $3$ or higher, there are other examples, e.g. in dimension $3$ we can take cones on positive genus surfaces, as in the preceding construction.<|endoftext|> -TITLE: Finding an Inverse function with multiple occurences of $y$ -QUESTION [9 upvotes]: For some reason I cannot figure out how the book is finding the solution to this problem. -find the inverse function of $f(x)=3x\sqrt{x}$. -My steps seem to lead to a dead end: -step 1. switch $f(x)$ with $y$: $y = 3x\sqrt{x}$ -step 2. swap $x$ and $y$: $x = 3y\sqrt{y}$ -step 3. solve for $y$: -step 3.1: $\displaystyle \frac{x}{3y} = \sqrt{y}$. -step 3.2: $\displaystyle \left(\frac{x}{3y}\right)^2 = \left(\sqrt{y}\right)^2$ -step 3.3: $\displaystyle \frac{x^2}{9y^2} = y$ ... uhhh? -The book answer is $\displaystyle y = \left(\frac{x}{3}\right)^{2/3}$. - -REPLY [4 votes]: Note that $\frac{x^2}{9}=\left(\frac{x}{3}\right)^2$. Also note that you can multiply both sides of your equation by $y^2$ to get just one $y$ term, and that the cube root is the same as the $\frac{1}{3}$ power. -You could, as lhf indicates, avoid ever breaking up the $y$ terms in the first place. Technically this would be preferred, because when you divided by $y$ you made the assumption that $y\neq 0$, which is not always true. -Beware of squaring when solving equations in general, because it can lead to extraneous solutions. In this case it is a valid step, but for a silly example consider trying to solve the equation $\sqrt{y}=-1$ by squaring both sides. -A final remark. The original function has an implicit domain, $x\geq 0$. Your inverse function has a formula definition that does not appear to require any domain restriction, but it is misleading to say that $f^{-1}(x)=\left(\frac{x}{3}\right)^{2/3}$ without mentioning that this is only valid when $x\geq 0$. This domain restriction arises from the fact that the range of $f$ is $[0,\infty)$. The function $h(x)=\left(\frac{x}{3}\right)^{2/3}$ on the whole real line is not invertible (it fails the horizontal line test), whereas if $g(x)=-f(x)=-3x\sqrt{x}$, then the inverse function would be $g^{-1}(x)=\left(\frac{x}{3}\right)^{2/3}$ on $(-\infty,0]$. 
Looking at the graphs of these functions and keeping in mind how inverses are reflected across the line $y=x$ (resulting from switching $x$ and $y$) should make it clearer what is going on.
-
-REPLY [3 votes]: Nothing wrong with what you've done so far, you just haven't finished!
-From $\displaystyle \frac{x^2}{9y^2} = y$, move all the $y$'s to the same side:
-step 3.4: $x^2 = 9y^3$.
-Now isolate $y$:
-step 3.5: $\displaystyle\frac{x^2}{9} = y^3$.
-Now solve for $y$ by taking cube roots; you might also notice that the left hand side is a perfect square...
-
-REPLY [2 votes]: $x=3y\sqrt y$ implies $x^2=9y^3$.
-
-REPLY [2 votes]: The algorithm you are applying is correct, and will yield the inverse function. In step 3 you should square both sides. Then we get $$x^2=9y^2\cdot y=9y^3$$
-Can you solve the problem from here?
-Hint: Divide by $9$ and then take cube roots.
-Hope that helps.
-
-REPLY [2 votes]: Two hints: $\sqrt{x}=x^{\underline{?}}$ and $x^a\cdot x^b=x^{\underline{?}}$<|endoftext|>
-TITLE: Refining the central limit theorem on discrete random vars
-QUESTION [7 upvotes]: Let $x_i$ be iid nonnegative discrete random variables with $E[x_i]=N/M$ for some integers $N, M$, variance $\sigma^2$ and higher moments known (finite).
-Then, the sum $\displaystyle S = \sum_{i=1}^M x_i$ will have $E[S]=N$.
-I'm interested in the probability that
-$S$ takes that precise value: $A=P\left(S=E[S]\right)$.
-Applying the central limit theorem, I can write
-$\displaystyle A \approx \frac{1}{\sqrt{ 2 \pi M \sigma^2}}$
-My question is: can this approximation be refined?
-ADDED: To add some example-context-motivation:
-Let's consider $X$ as a sum of $N$ Bernoullis (0/1) with prob $=p$, such that $E(X)=Np$ is an integer. We can compute exactly the probability that $X$ attains its expected value; it's a Binomial:
-$\displaystyle P = P(X= N p) = {N \choose N p} p^{N p} q^{N q} \hspace{2cm}$ [1a]
-We might also get an approximate value of that probability using the CLT (Central Limit Theorem):
-$\displaystyle P \approx \frac{1}{\sqrt{2 \pi N p q}} \hspace{2cm} $ [2a]
-If we take [1a] and use the Stirling approximation, $K! \approx (K/e)^K \sqrt{2 \pi K}$, we get the same value. Fine.
-Now, we may try to refine the approximation, both from [1a] and [2a].
-Plugging the next order Stirling approximation into [1a], we get (if I am not mistaken)
-$\displaystyle P \approx \frac{1}{\sqrt{2 \pi N p q}} \left(1 - \frac{1- p q}{12 N p q} \right) \hspace{2cm} $ [1b]
-To refine the CLT, one can think of
-
-use some "continuity correction" to evaluate more precisely the (hypothetical) Gaussian integral
-add some terms from the Edgeworth expansions
-do nothing of the above - because the CLT does not justify those procedures in this scenario (just one value of a discrete variable)
-
-I'm not sure which is the correct way.
-But let's try the first one: the next order approximation of the integral gives me (again, if I'm not mistaken)
-$\displaystyle P \approx \frac{1}{\sqrt{2 \pi N p q}} \left(1 - \frac{1}{24 N p q} \right) \hspace{2cm} $ [2b]
-This is not the same as [1b], but it's close.
-Is this just coincidence? Was it a reasonable thing to do? Should I look (also/instead) at the Edgeworth expansions?
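-ADDED 2: A quick numerical comparison of [1a] against [2a], [1b] and [2b] (a rough script of mine, plain Python; math.comb needs Python 3.8+):
-
-from math import comb, sqrt, pi
-
-N, p = 100, 0.5                 # chosen so that N*p is an integer
-q = 1 - p
-m = int(N * p)
-
-exact = comb(N, m) * p**m * q**(N - m)                  # [1a] exact binomial mass at the mean
-clt = 1 / sqrt(2 * pi * N * p * q)                      # [2a] plain CLT value
-stirling = clt * (1 - (1 - p * q) / (12 * N * p * q))   # [1b] next-order Stirling
-contcorr = clt * (1 - 1 / (24 * N * p * q))             # [2b] continuity-corrected integral
-
-print(exact, clt, stirling, contcorr)
-
-For $N=100$, $p=1/2$ this prints exact $\approx 0.079589$, [2a] $\approx 0.079789$, [1b] $\approx 0.079589$ and [2b] $\approx 0.079656$, so [1b] is visibly the better correction in this example.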
-
-REPLY [9 votes]: For a discrete random variable $X$ with support $\mathbb{Z}$, the Fourier transform of the probability distribution $P_x \equiv P[X=x]$ is given by
-$$
-\tilde{P}(k) = \sum_{x=-\infty}^{\infty} e^{ikx} P_x = E\left[e^{ikX}\right] = e^{h(k)},
-$$
-where
-$$
-h(k) = \sum_{n=1}^{\infty} \kappa_{n} \frac {(ik)^{n}}{n!}
-$$
-is the natural logarithm of the characteristic function of $X$, and $\kappa_{n}$ is the $n$th cumulant of $X$. Recall that $\kappa_{1} = \mu$ is the mean and $\kappa_{2} = \sigma^2$ is the variance. The probability that a sum of $M$ independent variables $X_i$ with the same distribution is exactly $x \in {\mathbb{Z}}$ is then
-$$
-\begin{eqnarray}
-P\left[\sum_{i=1}^{M} X_i = x\right] &=& \int_{-\pi}^{\pi}\frac{dk}{2\pi} e^{-ikx}\tilde{P}(k)^M \\
-&=& \int_{-\pi}^{\pi}\frac{dk}{2\pi} e^{Mh(k)-ikx} \\
-&=& \int_{-\pi}^{\pi}\frac{dk}{2\pi} e^{ik(M\mu - x) - \frac{1}{2}M\sigma^2 k^2} \exp\left(\sum_{n=3}^{\infty}M\kappa_{n}\frac{(ik)^{n}}{n!}\right).
-\end{eqnarray}
-$$
-Considering the desired case where $x = M\mu \in {\mathbb{Z}}$, and making the change of variable $k \rightarrow k/(\sigma\sqrt{M})$, we have
-$$
-P\left[\sum_{i=1}^{M} X_i = M\mu\right] = \frac{1}{\sigma\sqrt{2\pi M}}\int_{-\pi\sigma\sqrt{M}}^{\pi\sigma\sqrt{M}} d\Phi(k) \exp\left(\sum_{n=3}^{\infty} \sigma^{-n}M^{1-\frac{1}{2}n}\kappa_{n}\frac{(ik)^{n}}{n!}\right),
-$$
-where $d\Phi(k) = \phi(k) dk$ is the standard normal distribution (with mean $0$ and variance $1$). Here we assume that the exponential decays rapidly away from $k=0$, so we may replace the limits of integration by $\pm\infty$. Then, expanding the exponential in inverse powers of $M$, and using the fact that the $n$th central moment of the standard normal distribution vanishes for odd $n$ and is equal to $(n-1)!!$ for even $n$, we obtain the following:
-$$
-P\left[\sum_{i=1}^{M} X_i = M\mu\right] = \frac{1}{\sigma\sqrt{2\pi M}}\left(1 + \frac{\kappa_4}{8M\sigma^4} - \frac{5\kappa_3^2}{24M\sigma^6} + O(M^{-2})\right).
-$$
-This is essentially the Edgeworth expansion. If $X$ is the Bernoulli distribution with probability of success $p = \frac{1}{2}(1+a)$ (and of failure $q=\frac{1}{2}(1-a)$), then it is straightforward to verify that
-$$
-\begin{eqnarray}
-\kappa_2 &=& \sigma^2 = pq = \frac{1}{4}(1-a^2) \\
-\kappa_3 &=& \frac{1}{4}(1-a^2)(-a) = -\frac{1}{4}a(1-a^2) \\
-\kappa_4 &=& \frac{1}{8}(1-a^2)(3a^2-1),
-\end{eqnarray}
-$$
-and hence
-$$
-\begin{eqnarray}
-\frac{5\kappa_3^2}{24\sigma^6} &=& \frac{5a^2}{6(1-a^2)} \\
-\frac{\kappa_4}{8\sigma^4} &=& \frac{3a^2 - 1}{4(1-a^2)},
-\end{eqnarray}
-$$
-for a total correction term proportional to
-$$
--\frac{5\kappa_3^2}{24M\sigma^6} + \frac{\kappa_4}{8M\sigma^4} = \frac{9a^2-3-10a^2}{12M(1-a^2)} = -\frac{3+a^2}{12M(1-a^2)} = -\frac{1-pq}{12Mpq},
-$$
-which agrees with the Stirling approximation to the exact result.<|endoftext|>
-TITLE: What does it mean to say a map "factors through" a set?
-QUESTION [53 upvotes]: Consider the following diagram:
-
-[diagram: a commutative triangle with the quotient map $\pi \colon G \to G/\ker(f)$, a map $f \colon G \to H$, and the induced map $\tilde{f} \colon G/\ker(f) \to H$]
-
-What does it mean precisely to say "$f$ factors through $G/\text{ker}(f)$"?
-Does it mean $f = \tilde{f} \circ \pi$, for some $\tilde{f}$?
-I've seen texts use the phrase, but never a definition of this notion.
-
-REPLY [42 votes]: It means exactly what you write: that you can express $f$ as "product" (composition) of two functions, with the first function going through $G/\mathrm{ker}(f)$; by implication, that map will be the "natural" map into the quotient, i.e., $\pi$.
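-As a toy illustration (an example of mine, not from any particular text): take $f\colon\mathbb{Z}\to\mathbb{Z}/6\mathbb{Z}$, $g\mapsto 3g \bmod 6$. Then $\ker(f)=2\mathbb{Z}$, and $f$ factors through $\mathbb{Z}/\ker(f)=\mathbb{Z}/2\mathbb{Z}$ as $f=\tilde{f}\circ\pi$, which a few lines of Python can confirm:
-
-n = 6
-def f(g): return (3 * g) % n        # homomorphism Z -> Z/6Z; its kernel is 2Z
-def pi(g): return g % 2             # quotient map Z -> Z/ker(f) = Z/2Z
-def f_tilde(c): return (3 * c) % n  # induced map Z/2Z -> Z/6Z: 0 -> 0, 1 -> 3
-
-assert all(f(g) == f_tilde(pi(g)) for g in range(-60, 60))
-
-Note that $\tilde{f}$ is well defined precisely because $\ker(\pi)\subseteq\ker(f)$.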
Under more general circumstances, you would also indicate the map in question.
-The reason for the term "factors" is that if you write composition of functions by juxtaposition, which is fairly common, then the equation looks exactly as if you "factored" $f$: $f=\tilde{f}\pi$.<|endoftext|>
-TITLE: Does algorithmic unsolvability imply unsolvability in general?
-QUESTION [10 upvotes]: I recently found out that there is no algorithm which, given an arbitrary group presentation, will determine in finite time if it represents the trivial group*. Additionally, in a lecture I recently attended, it was portrayed that given any desired property P of a group, there is no algorithm that will determine if an arbitrary group (presentation) has this property. (For additional info, see the second answer to this question on MathOverflow).
-My question is about the scope of this notion of "algorithm." With regard to group presentations, my intuition tells me that there must be some way to classify or otherwise look at these arbitrary group presentations that would enable us to determine if they are the trivial group (or have some property P). However, I am not very well versed on what it means for something to be solved via an algorithm vs. other mathematical methods.
-In general, if something has been shown to be unsolvable via algorithm, does it mean that it is entirely unsolvable (and thus futile to pursue in research)? Or, are there cases where we have shown that despite being unable to use an algorithm to determine if a property exists, we have used other mathematical methods with success?
-*Clarification by Arturo.
-
-REPLY [4 votes]: It's actually even more shocking than what Arturo's answer might suggest. Consider the following algorithm. Given a group presentation $G=\langle S|R \rangle$, enumerate all possible proofs (say, using the axioms of ZFC), and check whether they give a proof that $G$ is trivial or a proof that it is not. If $G$ is trivial, then this algorithm must finish. Indeed, if $G$ is trivial, then we can write each element of $S$ as a product of conjugates of elements of $R$, and this constitutes a proof.
-However, we know that there are inputs for which this algorithm does not terminate. This implies that there are finite presentations that define nontrivial groups, but for which there is no proof that they are nontrivial. This gives a concrete and down-to-earth example of Goedel's incompleteness theorem!<|endoftext|>
-TITLE: Nonmeasurable set with positive outer measure
-QUESTION [5 upvotes]: It is well-known that any set $E \subseteq \mathbb{R}$ with positive outer measure contains a nonmeasurable subset $V$. I know that $0 < m^*(V) \le m^*(E)$. Nevertheless, my question is the following: given $r \in \mathbb{R}$ such that $r>0$, is there a nonmeasurable subset of $\mathbb{R}$ whose outer measure is exactly $r$?
-Thank you in advance.
-
-REPLY [4 votes]: One can take a Vitali nonmeasurable subset of $[0,1]$, which has positive and finite outer measure, and just scale it appropriately.
-As Jonas points out, this is closely related to this previous question, but much easier.<|endoftext|>
-TITLE: Infinite product of measurable spaces
-QUESTION [41 upvotes]: Suppose there is a family (can be infinite) of measurable spaces. What are the usual ways to define a sigma algebra on their Cartesian product?
-
-There is one way in the context of defining product measure on planetmath. Let $(E_i, B_i)$ be measurable spaces, where $i \in I$ is an index set, possibly infinite.
-We define their product as follows:
-
-let $E= \prod_{i \in I} E_i$, the Cartesian product of the $E_i$,
-
-let $B=\sigma((B_i)_{i \in I})$, the smallest sigma algebra containing subsets of $E$ of the form $\prod_{i \in I}B_i$ where $B_i=E_i$ for all but a finite number of $i \in I$.
-
-
-I was wondering why it is required that "$B_i=E_i$ for all but a finite number of $i \in I$"?
-
-
-Thanks and regards!
-
-ADDED:
-I was wondering if the product sigma algebra defined in 2 is the smallest sigma algebra such that every product of one measurable set from each individual sigma algebra is measurable?
-
-REPLY [3 votes]: Details of the $\textbf{lemma}$ mentioned by Michael Greinecker: Let
-$L=\{E \subseteq X; E\in \sigma(C)\,\mbox{for some}\, C\subseteq Y,\,\mbox{with C countable}\}$
-See that the following hold:
-$\bullet$ $X$, $\emptyset \in L$ (obvious);
-$\bullet$ if $E\in L$, then $E^{c}\in\sigma(C_E)$ for $C_E\subseteq Y$ (countable) such that $E\in\sigma(C_E)$;
-$\bullet$ if $\{E_n\}_{n\in\mathbb{N}} \subseteq L$, we can take $\displaystyle C=\bigcup_{n=1}^{\infty}C_{E_n}$ with $C_{E_n}$ in the same sense as in the previous item. Then $C$ is countable and $\displaystyle\bigcup_{n=1}^{\infty}E_n \in \sigma(C)$.
-This proves that $L$ is a $\sigma$-algebra. Finally, observe that
-$\bullet$ $Y\subseteq L$ (using the $\sigma$-algebra generated by each element of $Y$);
-$\bullet$ for every countable $C\subseteq Y$ we have $\sigma(C)\subseteq \sigma(Y)$, and this implies that $L\subseteq \sigma(Y)$.
-Because $\sigma(Y)$ is the smallest $\sigma$-algebra containing $Y$, we have $L=\sigma(Y)$.
-This proves that each event depends on only countably many of the generating sets.
-$\textbf{More about}:$
-In the case of the product, by Michael's observation you have that the product $\sigma$-algebra is generated by the sets $\prod_{n\in C}Y_n \times \prod_{n\in I-C}X_n$ with $C\subseteq I$ countable; this implies that for every measurable $E\subseteq X$ there exist an at most countable $C\subseteq I$ and a set $E_C$ in the product sigma algebra of $(X_i, B_i)_{i\in C}$ satisfying $$E=\pi_{C}^{-1}(E_C)\hspace{3cm}(1)$$ for $\pi_C : \prod_{i\in I} X_{i} \to \prod_{i\in C} X_{i}$ the natural projection. This result says that the product sigma algebra is not generated by the sets $\prod_{i\in I} Y_{i}$ with $Y_i \in B_i$ for all $i\in I$ (uncountable); indeed, if $Y_i\neq X_i$ for uncountably many indices $i\in I$, there exist no countable $C\subset I$ and $E_C$ such that (1) holds.<|endoftext|>
-TITLE: Integral of measurable spaces
-QUESTION [14 upvotes]: If for each $t\in I=[0,1]$ I have a measurable space $(X_t,\Sigma_t)$, is there a standard notion which will give a measurable space deserving to be called the integral $\int_I X_t\,\mathrm d t$?
-Motivated by this question and curiosity...
-
-REPLY [2 votes]: An integral is a generalization of a weighted sum, but neither adding measurable spaces nor multiplying them with numbers are meaningful operations. Neither can we build integrals of topological spaces, filters, uniformities, etc.
-Since the question is inspired by products of measurable spaces, what one can do is form direct sums. The booklet Borel spaces by Rao and Rao contains two approaches. Let $(X_\lambda,\mathcal{X}_\lambda)_{\lambda\in\Lambda}$ be a family of measurable spaces. We can assume the underlying spaces to be disjoint and $X=\bigcup_\lambda X_\lambda$. The measurable sets of the direct sum are of the form $\bigcup_\lambda B_\lambda$ for some $(B_\lambda)\in\prod_\lambda \mathcal{X}_\lambda$.
The weak direct sum has as its underlying $\sigma$-algebra the family $\sigma\big(\bigcup_\lambda\mathcal{X}_\lambda\big)$. The direct sum and the weak direct sum coincide if and only if $\Lambda$ is countable.<|endoftext|>
-TITLE: On problems of coins totaling to a given amount
-QUESTION [9 upvotes]: I don't know the proper terms to type into Google, so please pardon me for asking here first.
-While jingling around a few coins, I realized that one nice puzzle might be to figure out which $n$ or so coins of a currency (let us, say, use the coins of the American dollar as an example) can be used to total to a given amount. For instance, one can have five coins totaling to forty cents: three dimes and two nickels.
-1) How does one find the other combinations of $n$ coins that can total to a set amount?
-To use my example, are there five other coins that can total to forty cents? How might one algorithmically prove that those are the only solutions?
-2) Given an amount not equal to a denomination, what is the minimum number of coins needed to be equivalent to the given amount?
-For instance, one can have forty cents in three coins: a quarter, a dime, and a nickel. How does one algorithmically show that the "magic number" for forty cents is three (i.e., one cannot find two coins whose amounts total to forty cents)?
-As mentioned already, this isn't homework; just idle curiosity. Any pointers to algorithms would be appreciated!
-
-REPLY [9 votes]: Question (2) is a known problem; it's called the change-making problem. The Wikipedia page lists integer programming and dynamic programming as solution approaches. The change-making problem is also a variation of the famous knapsack problem. One of the implications of that is that since the standard version of the knapsack problem is NP-complete, it's likely that the change-making problem is as well, dashing hopes for an easy-to-find fast algorithm for solving it.
-However, the Wikipedia page also says that for the US and most other coin systems, the greedy algorithm will give the optimal solution. Thus for the nickel/dime/quarter example you give, the optimal solution is to use as many quarters as you can, then account for what's left by using as many dimes as you can, then make up the rest with nickels. So 3 is the minimum number of nickels, dimes, and quarters required to make 40 cents.
-
-Here's the integer program formulation for Question (2). If you let $x_i$ be the number of coins of type $i$ to be used, $d_i$ be the denomination of coin type $i$, and $A$ be the target amount, Question (2) entails solving
-$$\min \sum_i x_i $$
-subject to
-$$\sum_i d_i x_i = A,$$
-$$x_i \geq 0, x_i \in \mathbb{Z}.$$
-While integer programs are hard to solve in general, for small problems like the example you give a good IP solver will return a solution very quickly. For instance, if you take coins of denominations 31, 37, and 41 cents (since nickels, dimes, and quarters are too easy :) ), and want to know the minimum number required to be equivalent to 2011 cents, the solver Lingo requires only the following code to find the answer:
-MIN = x1 + x2 + x3;
-31*x1 + 37*x2 + 41*x3 = 2011;
-@GIN(x1); @GIN(x2); @GIN(x3);
-
-Lingo outputs the optimal solution as $x_1 = 0, x_2 = 20, x_3 = 31,$ with only 7 solver iterations and "00:00:00" time elapsed. So the minimum number of coins is $51$.
-You did ask specifically for an algorithm.
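-For concreteness, here is a minimal dynamic-programming sketch in Python (a rough illustration of the DP approach mentioned above, not an optimized implementation); dp[a] holds the minimum number of coins summing to a, or None when a is not representable:
-
-def min_coins(denoms, amount):
-    dp = [0] + [None] * amount
-    for a in range(1, amount + 1):
-        options = [dp[a - d] for d in denoms if d <= a and dp[a - d] is not None]
-        dp[a] = 1 + min(options) if options else None
-    return dp[amount]
-
-print(min_coins([5, 10, 25], 40))     # 3, as in the greedy example above
-print(min_coins([31, 37, 41], 2011))  # 51, matching the Lingo output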
Many IP solvers (including Lingo) use the branch and bound algorithm, which has solving a linear program (usually with the simplex method) embedded in it.
-
-For Question (1), you could use the following integer program to find a single solution or determine that one does not exist. (In the latter case, the IP solver will report that the problem is infeasible.)
-$$\max 1$$
-subject to
-$$\sum_i d_i x_i = A,$$
-$$\sum_i x_i = n,$$
-$$x_i \geq 0, x_i \in \mathbb{Z}.$$
-Again, for small problems a good IP solver will return a solution almost instantaneously. I'm not sure how (or even if it's possible) to modify an IP approach to find all solutions like you request. The generating function techniques mentioned by cardinal and in the question linked to by Derek Jennings will tell you how many solutions there are, but I'm not sure if they will tell you exactly what those solutions are.<|endoftext|>
-TITLE: The identity morphism in $\mathbf{Set}$ is the identity function
-QUESTION [12 upvotes]: I've been trying to wrap my head around the basic concepts of category theory, and I thought I would attempt to illustrate what I understand with the category of sets, probably the easiest example. Particularly, I've been trying to prove that $id_A$ (the identity morphism on $A$, for all $A \in Obj(\mathbf{Set})$) is $1_A \colon A \rightarrow A, x \mapsto x$.
-This is a very intuitive and reasonable statement, and it's trivial to prove that $1_A$ is indeed an identity morphism on $A$, and I suppose uniqueness of $id_A$ can be demonstrated analogously to uniqueness of the identity element in a monoid (considering the subcategory which has $A$ as its only object, and endofunctions on $A$ as its only morphisms).
-In this manner, it is not hard to prove that the proposition in the title is true, but this demonstration requires one to make an assumption or guess as to what $id_A$ could be. Specifically, the scheme of the proof is: assume $id_A$ = $1_A$, see that it works with the definition of an identity morphism, show that the identity morphism is unique, and, in conclusion, $id_A$ can only be $1_A$. What I'm looking for, nonetheless, is a somewhat more direct proof, one that doesn't assume $id_A$ = $1_A$ at the start. I want to place myself in a state of little or no knowledge about sets and functions, and under this assumption, why would I assume $id_A$ = $1_A$ at first? Why not try with $id_\mathbb{Z}$ = $f \colon \mathbb{Z} \to \mathbb{Z}, x \mapsto x^2 + 1$, for example? It wouldn't work, but I don't have any reason to think that $1_\mathbb{Z}$ is a better guess for $id_\mathbb{Z}$.
-I suppose that the proof for which I'm asking would work for categories of sets with additional structure, and probably for posets as well, although I'm not clear as to what modifications it would require to work.
-Thanks.
-
-REPLY [3 votes]: I don't think this is that hard, at least for the category Set. What we know is that $id_A$ is an element $f$ of Hom(A,A) such that:
-$f\circ g = g$ and $h\circ f = h$ for any $g$ in Hom(B,A) and $h$ in Hom(A,C).
-In particular, let B be a singleton set {*}; then we can identify any $g$ in Hom(B,A) with an element $a$ of A, so that $f\circ g = g$ means that $f(a) = a$ for all $a$ in A.
-Thus $1_A$ is the only viable candidate for $id_A$.<|endoftext|>
-TITLE: The limit (and function) changes after rationalizing?
-QUESTION [6 upvotes]: I want to evaluate the following:
-$$\lim_{r \rightarrow 0}\frac{-r^2}{2 \left(\sqrt{1-\frac{r^2}{4}}-1 \right)}$$
-I look at the graph and see that it seems to be going to zero. This makes sense to me because if I replace $r$ with zero this function is defined and continuous near zero and the value of the function is zero.
-So, I think:
-$$\lim_{r \rightarrow 0}\frac{-r^2}{2 \left(\sqrt{1-\frac{r^2}{4}}-1 \right)}=0$$
-But next I try something else: I rationalize the denominator, since this is a good technique for solving limits.
-$$\lim_{r \rightarrow 0}\frac{-r^2 \left(\sqrt{1-\frac{r^2}{4}}+1 \right)}{2 \left(1-\frac{r^2}{4}-1 \right)}$$
-Then,
-$$\lim_{r \rightarrow 0}\frac{-r^2 \left(\sqrt{1-\frac{r^2}{4}}+1 \right)}{\frac{-r^2}{2}}$$
-so...
-$$\lim_{r \rightarrow 0} 2 \left(\sqrt{1-\frac{r^2}{4}}+1 \right)=4$$
-What have I done? How can rationalizing change the graph? My guess is that the denominator $2 \left(\sqrt{1-\frac{r^2}{4}}-1 \right)$ is "divisible by $r^2$" in some non-obvious way?
-I know my original reasoning was sloppy (not a proof), but the fact that the graph shows a limit of zero has me very confused.
-How do I avoid this error? Just always rationalize everything? Why would I do that?
-
-REPLY [6 votes]: It looks pretty clear to me that the limit is 4 from the graph:
-
-[graph of the function, levelling off near the value $4$ as $r \to 0$]
-
-Perhaps you mis-entered it on your graphing tool?
-Beyond that, when $r=0$, the numerator of the expression is $-r^2=-0^2=0$ and the denominator is $2 \left(\sqrt{1-\frac{r^2}{4}}-1 \right)=2 \left(\sqrt{1-\frac{0^2}{4}}-1 \right)=2 \left(\sqrt{1}-1 \right)=0$—that is, the expression is of the form $\frac{0}{0}$, which is an indeterminate form, so it needs further investigation.<|endoftext|>
-TITLE: Can the sums of two sequences of reciprocals of consecutive integers be equal?
-QUESTION [31 upvotes]: I'm primarily a programmer, so forgive me if I don't know the proper nomenclature or notation.
-Last night, an old teacher of mine told me about a question that had caused some noodle-scratching for him:
-For any two sequences of consecutive integers, can the sums of their reciprocals be equal?
-Now, I gather that these sums are called "harmonic numbers" if the consecutive sequence begins with 1. But what if it doesn't begin with 1?
-How might we go about proving that for any two such sequences, their sums are unequal?
-I have written a quick Python script that returns, in the form of a reduced fraction, the sum of any $\sigma(m, k)$ where $m$ is the first number to consider and $k$ is the length of the sequence.
-So, I guess I have two questions:
-1) Where can I find out about the current state of research on this question?
-2) What are the most likely approaches, or "hooks" that I might grasp onto to arrive at a proof?
-(Via the comments, I'll add the following for clarity:)
-$\sigma(m,k)$ is $\frac{1}{m} + \frac{1}{m+1}+\dots +\frac{1}{m+k−1}.$ My question is: Can there exist distinct pairs $(m_1,k_1)$ and $(m_2,k_2)$ such that the corresponding sums are equal?
-
-REPLY [19 votes]: The answer is "no". Here's a proof:
-The main idea is to consider the two sums as approximations of the same integral of $1/x$ with the rectangle rule and then show that the approximation errors can't be equal.
-It will be more convenient to work with the centre of each sum rather than the beginning.
So let $k_i$ be the number of terms in the $i$th sum (as in the question), let $n_i=m_i + (k_i-1)/2$ be the central denominator (which may or may not actually occur as the denominator of a term in the sum depending on whether it's an integer or a half-integer), and let $\sigma_i = \sigma(m_i,k_i)$. Without loss of generality, assume $n_2 > n_1$, and as Henry observed, we can also assume that the sums don't overlap. -Now $\sigma_i$ is roughly $k_i/n_i$, so the quantity $\Delta:=k_1n_2-k_2n_1$ would need to be small in order for the sums to be equal. This quantity is either an integer or a half-integer. Note that if $k_1$, $k_2$ and $n_2$ are fixed, then $\sigma_2$ is fixed, $n_1$ depends monotonically on $\Delta$, and $\sigma_1$ in turn depends monotonically on $n_1$, so the difference between the two sums can change sign at most once as $\Delta$ changes. Thus, if we can show that the difference has opposite signs for $\Delta = 0$ and $\Delta = 1/2$, it will follow that it cannot be zero for any possible value of $\Delta$. -Now consider $1/l$ as the first term in the following expansion: -$$\int_{l-\frac{1}{2}}^{l+\frac{1}{2}}\frac{\mathrm{d}x}{x} = \int_{-\frac{1}{2}}^{+\frac{1}{2}}\frac{\mathrm{d}x}{l-x}=\int_{-\frac{1}{2}}^{+\frac{1}{2}}\frac{1}{l}\sum_{i=0}^\infty\left(\frac{x}{l}\right)^i\mathrm{d}x=2\sum_{j=0}^\infty\frac{\left(2l\right)^{-(2j+1)}}{2j+1}\;.$$ -Then $\sigma_1$ is the first term of the following expansion: -$$\int_{n_1-k_1/2}^{n_1+k_1/2}\frac{\mathrm{d}x}{x}=\sum_{l=n_1-(k_1-1)/2}^{n_1+(k_1-1)/2}\int_{l-\frac{1}{2}}^{l+\frac{1}{2}}\frac{\mathrm{d}x}{x}=2\sum_{j=0}^\infty\frac{1}{2j+1}\sum_{l=n_1-(k_1-1)/2}^{n_1+(k_1-1)/2}\left(2l\right)^{-(2j+1)}\;.$$ -Now consider first the case $\Delta=0$. Then $n_2=\frac{k_2}{k_1}n_1$, so by substituting $u=\frac{k_2}{k_1}x$ we can write the two sums as approximations of the same integral: -$$\int_{n_1-k_1/2}^{n_1+k_1/2}\frac{\mathrm{d}x}{x}=\int_{n_2-k_2/2}^{n_2+k_2/2}\frac{\mathrm{d}u}{u}=2\sum_{j=0}^\infty\frac{1}{2j+1}\sum_{l=n_2-(k_2-1)/2}^{n_2+(k_2-1)/2}\left(2l\right)^{-(2j+1)}\;,$$ -where the $j=0$ term of this expansion is now $\sigma_2$. Since these are two expansions of the same integral, to show $\sigma_1<\sigma_2$ it suffices to show that for each $j>0$ the term in the first expansion is greater than the term in the second expansion. (This is plausible, since in going from the first approximation of the integral to the second, we've increased the number of intervals by a factor $k_2/k_1$ but decreased the $j$th error term in each interval roughly by a factor $(n_1/n_2)^{2j+1}$, which is $(k_1/k_2)^{2j+1}$ since $\Delta=0$.) In fact, we only need to treat the $j=1$ case; the inequalities for higher $j$ then follow since all additional factors of $l^{-2}$ are greater in the first expansion than in the second expansion, since we assumed that the two sums of reciprocals don't overlap. (We could have avoided the terms with higher $j$ altogether by writing the error term as a derivative at an intermediate value, but that would have introduced cumbersome shifts of $1/2$ to account for the intermediate values.) 
-For $j=1$, we use the convexity of $l^{-3}$ in both directions, decreasing the greater one of the sums by collapsing it onto its center:
-$$\sum_{l=n_1-(k_1-1)/2}^{n_1+(k_1-1)/2}l^{-3}\ge\sum_{l=n_1-(k_1-1)/2}^{n_1+(k_1-1)/2}n_1^{-3}=\frac{k_1}{n_1^{3}}$$
-and increasing the lesser one by smearing it out over an integral:
-$$\sum_{l=n_2-(k_2-1)/2}^{n_2+(k_2-1)/2}l^{-3}<\int_{n_2-k_2/2}^{n_2+k_2/2}l^{-3}\mathrm{d}l=\frac{(n_2-k_2/2)^{-2}-(n_2+k_2/2)^{-2}}{2}=$$
-$$=\frac{n_2k_2}{(n_2-k_2/2)^2(n_2+k_2/2)^2}<\frac{n_2k_2}{n_1^2n_2^2}=\frac{k_1}{n_1^{3}}\;.$$
-This establishes that $\sigma_1<\sigma_2$ when $\Delta=0$. Now consider $\Delta \neq 0$. Then $\frac{k_1}{k_2}n_2 = n_1+\frac{\Delta}{k_2}$, and we need to shift the integrand to make the integral limits match:
-$$\int_{n_2-k_2/2}^{n_2+k_2/2}\frac{\mathrm{d}x}{x}=\int_{n_1-k_1/2+\Delta/k_2}^{n_1+k_1/2+\Delta/k_2}\frac{\mathrm{d}u}{u}=\int_{n_1-k_1/2}^{n_1+k_1/2}\frac{\mathrm{d}t}{t+\Delta/k_2}\;.$$
-Since all the error terms in the expansions for the integrals have the same sign, their difference is less than the larger of the two, which, as we showed above, is the one for $\sigma_1$. Thus, to show that the difference changes sign when $\Delta$ goes from $0$ to $1/2$, it suffices to show that the change due to the shift in the integrand is larger than the approximation error in the expansion for $\sigma_1$. To do this, we can expand the shifted integral in the same form as the error expansion, again using convexity to bound each integral by the central value of the integrand:
-$$\int_{n_1-k_1/2}^{n_1+k_1/2}\frac{\mathrm{d}t}{t+\Delta/k_2}=\sum_{l=n_1-(k_1-1)/2}^{n_1+(k_1-1)/2}\int_{l-\frac{1}{2}}^{l+\frac{1}{2}}\frac{\mathrm{d}t}{t+\Delta/k_2}>\sum_{l=n_1-(k_1-1)/2}^{n_1+(k_1-1)/2}\frac{1}{l+\Delta/k_2}\;.$$
-To show that for each $l$ the summand differs from $1/l$ by more than the corresponding approximation error, we estimate the latter (the tail $j\ge 1$ of the expansion) so that we can sum the series over $j$:
-$$2\sum_{j=1}^\infty\frac{1}{2j+1}\left(2l\right)^{-(2j+1)}<
-\frac{2}{3}\sum_{j=1}^\infty\left(2l\right)^{-(2j+1)}=\frac{2}{3}\frac{\left(2l\right)^{-3}}{1-(2l)^{-2}}<\frac{2}{3}\frac{1}{(2l)^2}\frac{1}{2l-1}\;.$$
-On the other hand, the difference from the shift in the reciprocals is
-$$\frac{1}{l}-\frac{1}{l+\Delta/k_2}=\frac{\Delta/k_2}{l(l+\Delta/k_2)}\;.$$
-Now we can estimate the quotient of these two values (using $\Delta=1/2$):
-$$\frac{2}{3}\frac{1}{(2l)^2}\frac{1}{2l-1}\frac{l(l+\Delta/k_2)}{\Delta/k_2}<
-\frac{1}{12}\frac{1}{l}\frac{1}{l-1/2}\frac{l+1/2}{\Delta/k_2}<
-\frac{k_2}{6(l-1)}\;.$$
-Thus, the change due to the shift in the integrand is larger than the approximation error provided $k_2<6(l-1)$. This we can derive by considering the powers of $2$ in the denominators of the two sums.
-The highest power of $2$ that divides one of the denominators in the sum cannot be cancelled and hence divides the reduced denominator of the sum. This is because between any two numbers containing the same number of factors of $2$, there is one containing at least one more factor of $2$, and hence each sum contains a unique denominator $d$ with the highest power of $2$ in that sum. If we add all the other reciprocals and reduce them to a common denominator, that denominator will necessarily contain fewer powers of $2$ than $d$, and hence these powers cannot cancel if we then add $1/d$. Thus, the two sums can only be equal if the denominator with the most factors of $2$ has the same number of factors of $2$ in both sums.
-For this to be the case, $k_2$ must be at least $1$ less than twice the largest denominator in $\sigma_1$; otherwise any interval of $k_2$ numbers would necessarily contain a number with one factor of $2$ more than the highest power of $2$ in the denominators of $\sigma_1$. Thus we have
-$$k_2 \le 2\left(n_1+\frac{k_1-1}{2}\right)-1=2n_1+k_1-2\;.$$
-Since the sums don't overlap, we also have
-$$n_1+\frac{k_1-1}{2}\le n_2-\frac{k_2-1}{2}-1\;,$$
-and hence
-$$\frac{k_2}{2}\le \left(n_1+\frac{k_1-1}{2}\right)-\frac{1}{2} \le n_2-\frac{k_2-1}{2}-1-\frac{1}{2}\;,$$
-$$k_2\le n_2-1\;.$$
-With $\Delta=k_1n_2-k_2n_1= 1/2$, this yields
-$$k_1=\frac{k_2}{n_2}n_1+\frac{\Delta}{n_2}\le n_1-\frac{n_1}{n_2}+\frac{\Delta}{n_2}<n_1\;,$$
-i.e. $k_1<n_1$.<|endoftext|>
-TITLE: Class of sets of a given infinite cardinality
-QUESTION [6 upvotes]: This question was inspired by the question on examples of classes that are not sets.
-From the discussion in the comments there, it seems there does not exist a set of all sets of a given cardinality.
-To me, it seems easy to see that this is true for any nonzero finite cardinality $n$. Given any set $x$, you can always form a set of size $n$ with $x$ as an element by bundling it with $n-1$ other sets, and such a set exists by repeated use of the pairing axiom.
-But how would you do this for infinite cardinalities? For suppose $\kappa$ is some given infinite cardinality. To show that the set of all sets of cardinality $\kappa$ does not exist, it seems you would have to show that for any set $x$, there exists a set of that size with $x$ as an element, and then you could take the union of that set of sets to find the set of all sets, and thus a contradiction. But it doesn't seem reasonable to simply say, for a given set of size $\kappa$: if $x$ is in the set, we have no problem; if not, just take an element out of the set and put $x$ in. Something about that seems like it would not be allowed.
-So how would you do this for infinite cardinalities?
-
-REPLY [9 votes]: Suppose there is a set $x$ whose elements are precisely the sets of size $\kappa$. We will define a function $f$ with domain $x$ whose range is the universe $V$ of all sets. Since the latter is known not to be a set (and functions map sets to sets---this is the replacement axiom), it follows that $x$ itself cannot be a set either.
-Fix a set $Y$ of size $\kappa$. Given a set $a\in x$, let $f(a)=0$, unless there is a set $z$ such that $a=\{(b,z)\mid b\in Y\}$, in which case we set $f(a)=z$. This $f$ works.
-
-Let me add: The universe $V$ of sets can be seen as "constructed by stages." More precisely, $$V=\bigcup_{\alpha\in ORD}V_\alpha,$$ where the set $V_\alpha$ is obtained by iterating the power set operation $\alpha$ times, starting with the empty set, and $ORD$ is the (proper) class of all ordinals. The details do not matter much right now. The point is that something is a set if it appears at some stage $V_\alpha$ of the construction, and a set cannot appear until all its elements do (i.e., the stage $\alpha$ at which a set $t$ appears is larger than any stage $\beta$ at which an element of $t$ appears).
-But something like "the collection of all sets of size $\kappa$" cannot possibly appear at any stage $V_\alpha$, since we can always build a set of size $\kappa$ with $V_\alpha$ as one of its elements, forcing the set to appear at a stage larger than $\alpha$.
In general, one can verify whether a collection is a set or a proper class by checking whether it contains sets that appear at arbitrarily large stages (proper class) or, instead, all its elements appear by some stage $\alpha$ (set).
-This suggests the heuristic that a proper class ought to be as large as the universe, or at least, as large as the class ORD of all ordinals. This, however, is independent of the usual axioms of set theory. (An additional problem appears here, since the standard presentation of set theory does not treat proper classes as objects, so one cannot even make the statement "all proper classes have the same size". But there are ways around this obstacle.)<|endoftext|>
-TITLE: Points of bounded degree on varieties
-QUESTION [5 upvotes]: Let $\mathbb{Q}^{alg}$ be the algebraic closure of the rationals. Given a point $P\in \mathbb{A}^n(\mathbb{Q}^{alg})$, $P = (a_1,\dots,a_n)$, we define the degree of $P$ to be the degree of the minimal field extension of $\mathbb{Q}$ over which $P$ is defined: $\text{deg}(P) = [\mathbb{Q}(a_1,\dots,a_n):\mathbb{Q}]$.
-If a variety $X$ in $\mathbb{A}^n(\mathbb{Q}^{alg})$ has infinitely many points, must it have infinitely many points of bounded degree? That is, is there a positive integer $d$ such that $\{P\in X:\text{deg}(P)\leq d\}$ is infinite?
-It seems like this should be an easy consequence of some more general theorem, but my knowledge is limited.
-
-REPLY [2 votes]: Yes, this is true.
-Hint: we may assume that $X$ is irreducible (take any one irreducible component of positive dimension). Apply Noether Normalization.<|endoftext|>
-TITLE: Is this space contractible?
-QUESTION [34 upvotes]: Let $X$ be the following topological space (with the subspace topology): Connect the rational points of $([0,1]\cap \mathbb{Q})\times \{0\}$ with the point $(0,1)$ and connect the points of $([-1,0]\cap \mathbb{Q})\times \{1\}$ with $(0,0)$, as shown in the figure. Is $X$ contractible?
-
-[figure: the segments joining each rational point of $[0,1]\times\{0\}$ to $(0,1)$, and each rational point of $[-1,0]\times\{1\}$ to $(0,0)$]
-
-REPLY [21 votes]: Let's first restrict the study to a simple class of continuous functions.
-A continuous function $f : X \rightarrow X$ is "simple" if $\forall y \in [0;1], \exists y' \in [0;1]$ such that:
-
-$f(0,y) = (0,y')$
-$\forall q \in [0;1] \cap \mathbb{Q}, f(-qy,y) = (-qy',y')$ or $\forall q \in [0;1] \cap \mathbb{Q}, f(-qy,y) = (0,y')$
-$\forall q \in [0;1] \cap \mathbb{Q}, f(q(1-y),y) = (q(1-y'),y')$ or $\forall q \in [0;1] \cap \mathbb{Q}, f(q(1-y),y) = (0,y')$
-
-This means that the image of a point $x$ in the central segment is a point $f(x)$ on the central segment, the points to its left are either sent to the corresponding points on the left of $f(x)$ or all equal to $f(x)$, and the same with points on the right.
-Call $\hom_S(X,X)$ the set of simple maps.
-I could define a more general class of simple maps, for example by allowing points on the left to be sent to points on the right, but it would only make things more complicated, and it's unneeded because this will be enough to separate the identity map from a constant map.
-The identity map $id_X$ and the constant map $k : x \rightarrow (0,1/2)$ are in $\hom_S(X,X)$.
-Surely, if I want to prove that $X$ is not contractible, it is at least as hard as proving that it is not contractible with a homotopy that only uses simple maps. So, let's prove that there is no homotopy of simple maps between $id_X$ and $k$.
-
-In order to do that, I show that $\hom_S(X,X)$ is "homeomorphic" to a subset of $\hom([0;1],S^1)$:
-For $f : [0;1] \rightarrow S^1$, define $\phi(f) \in \hom_S(X,X)$ with:
-$$\phi(f)(0,y) = \left\{\begin{array}{ll}(0,2f(y)/\pi) & \text{for } f(y) \in [0; \pi/2] \\ (0,2-2f(y)/\pi) & \text{for } f(y) \in [\pi/2; \pi] \\ (0,2f(y)/\pi-2) & \text{for } f(y) \in [\pi; 3\pi/2] \\ (0,4-2f(y)/\pi) & \text{for } f(y) \in [3\pi/2; 2\pi]\end{array} \right. $$
-the points on the left of $(0,y)$ are taken to the points on the left of $\phi(f)(0,y)$ if $f(y) \in [0; \pi]$, and are sent to $\phi(f)(0,y)$ otherwise;
-the points on the right of $(0,y)$ are taken to the points on the right of $\phi(f)(0,y)$ if $f(y) \in [-\pi/2; \pi/2]$, and are sent to $\phi(f)(0,y)$ otherwise.
-It is easy to check that $\phi(f)$ is continuous if and only if $f(0) \notin [0; \pi]$ and
- $f(1) \notin [-\pi/2; \pi/2]$. If I call $\hom_S([0;1],S^1)$ the subset of functions with this property, $\phi$ is a bijection from $\hom_S([0;1],S^1)$ into $\hom_S(X,X)$ that preserves homotopies.
-
-Now we have to study homotopies in $\hom_S([0;1],S^1)$.
-For $f \in \hom_S([0;1],S^1)$, define $w(f) = $ the number of times the function winds up from $0$ to $\pi/2$. More precisely, define
-$\alpha : S^1 \rightarrow S^1 : \theta \rightarrow 4\theta$ for $\theta \in [0;\pi/2]$ and $0$ otherwise.
-$\forall f \in \hom_S([0;1],S^1), (\alpha \circ f)(0) = (\alpha \circ f)(1) = 0$, so we can quotient that map and get a function $\hat{f} : S^1 \rightarrow S^1$.
-Then define $w(f)$ as the winding number of $\hat{f}$.
-$w$ is a homotopy invariant (in fact it completely describes homotopy inside $\hom_S(X,X)$), but $w(\phi^{-1}(id_X)) = 1$ while $w(\phi^{-1}(k)) = 0$. Therefore $id_X$ and $k$ are not homotopic inside $\hom_S(X,X)$.
-
-Now what is left is to show that if there is a homotopy using any continuous maps, then there is one that only uses simple maps.
-To do that, I need an approximation map $\psi : \hom(X,X) \rightarrow \hom_S (X,X)$ that pushes forward homotopies.
-First, define $\psi(f)(0,y) = (0,y')$ where $(x',y') = f(0,y)$.
-If $x' \neq 0$, send all the points on the left/right side of $(0,y)$ into $(0,y')$ as well.
-Otherwise, we need to decide when to send the points on the left/right side of $(0,y)$ on the left/right side of $f(0,y)$. There are uncountably many ways to do it, so let's arbitrarily settle it with this:
-For $y \in [0;1]$, define $P_l(y) = \exists x_1,x_2,\ldots x_n \ldots < 0, \lim x_n = 0$, and $f(x_n,y)$ are on the (strictly) left half of $X$,
-and similarly $P_r(y) = \exists x_1,x_2,\ldots x_n \ldots > 0, \lim x_n = 0$, and $f(x_n,y)$ are on the (strictly) right half of $X$.
-Then, send the points on the left (resp. right) side of $(0,y)$ on the left (resp. right) of $\psi(f)(0,y)$ if and only if $P_l(y)$ (resp. $P_r(y)$) is true.
-It is important to note that $P_l(0)$ and $P_r(1)$ are always false.
-Now I have to prove not only that $\psi(f)$ is continuous, but that for any homotopy $h : [0;1] \times X \rightarrow X$, the map $\psi(h) : [0;1] \times X \rightarrow X$ is a homotopy of simple maps.
-Suppose $(t_n,y_n) \rightarrow (t,y)$ in $[0;1]\times[0;1]$.
-Call $(x'_n,y'_n) = h(t_n,(0,y_n))$ and $(x',y') = h(t,(0,y))$.
-First, $h(t_n,(0,y_n)) \rightarrow (x',y') = h(t,(0,y))$.
-This shows that $\psi(h)(t_n,(0,y_n)) \rightarrow (0,y') = \psi(h)(t,(0,y))$.
-If $x' \neq 0$, then eventually the $x'_n$ are nonzero as well, so $\psi(h)(t_n,(x_n,y_n)) = (0,y'_n) \rightarrow (0,y') = \psi(h)(t,(x,y))$, for any sequence $x_n \rightarrow x$.
-If $y' \notin \{0;1\}$ then there is a neighbourhood of $(0,y')$ in $X$ where the diagonal lines are all disconnected from each other, so for $n$ large enough, the points on the diagonal lines near $(0,y)$ that are sent on the diagonal lines near $(0,y')$ are stuck there, so eventually $P_l$ and $P_r$ will not change.
-This also means that if $y=0$ (resp. $y=1$), $P_l$ (resp. $P_r$) must eventually be false.
-So there is no obstruction and $\psi(h)(t_n,(x_n,y_n)) \rightarrow \psi(h)(t,(x,y))$ for any sequence $x_n \rightarrow x$.
-The only troublesome points are when $y' = 0$ or $y' = 1$.
-For example, if $y' = 0$, then $P_r$ will not change for the same reason as stated above, so $\psi(h)(t_n,(x_n,y_n)) \rightarrow (0,y') = \psi(h)(t,(x,y))$ for any positive sequence $x_n \rightarrow x$.
-For the points on the left, since $y'_n \rightarrow 0$, whatever $P_l(y_n)$ is and for any negative sequence $x_n \rightarrow x$, $\psi(h)(t_n,(x_n,y_n)) \rightarrow (0,0) = \psi(h)(t,(x,y))$.
-The case $y' = 1$ is similar, and this proves that the approximation map $\psi$ transports homotopies in $\hom(X,X)$ into homotopies in $\hom_S(X,X)$ (though it doesn't pull back homotopies at all).
-Since $id_X$ and $k$ were not homotopic in $\hom_S(X,X)$, they are not homotopic in $\hom(X,X)$, which shows that $X$ is not contractible.<|endoftext|>
-TITLE: How to understand joint distribution of a random vector?
-QUESTION [6 upvotes]: Given a random vector, what are the domain, range and sigma algebras on them for each of its components to be a random variable, i.e. a measurable mapping? Specifically:
-
-Is the domain of each component random variable the same as the domain of the random vector, and are the sigma algebras on their domains also the same?
-Is the range of each component random variable the component space in the Cartesian product for the range of the random vector? What is the sigma algebra on the range of each component random variable, and how is it induced from the sigma algebra on the range of the random vector?
-
-Please correct me if I am wrong. If I understand correctly, given a random vector, the probability measure induced on the range (which is a Cartesian product space) by the random vector is called the joint probability measure of the random vector. The probability measure induced on the component space of the range by each component of the random vector is called the marginal probability measure of the component random variable of the random vector.
-Consider the concept of the component random variables of a random vector being independent. I read from a webpage that they are said to be independent when the joint probability measure is the product measure of the individual marginal probability measures. I was wondering if the sigma algebra for the joint probability must be the same as the product sigma algebra for the individual probability measures, or whether the former can just contain the latter?
-
-Thanks and regards!
-
-REPLY [4 votes]: I'll take a stab at answering these:
-
-(Part a) Yes, the domain probability space of a random vector is the same as the domain probability space of its components. Think of a random vector as a vector-valued random variable.
-
-I'm having trouble parsing these questions, because I can't tell whether you're using the word "range" to mean "image" or "codomain". I'll assume you mean "codomain".
- -(Part b) Given a probability space $\Omega$ and measurable spaces $E_1,\ldots,E_n$, a random vector is a random variable $\Omega \to E_1\times\cdots\times E_n$. Typically, each $E_i$ starts with a $\sigma$-algebra on it, and then the $\sigma$-algebra on $E = E_1\times\cdots\times E_n$ is defined as the product algebra. (That is, the $\sigma$-algebra on $E$ is generated by all products of the form $A_1\times\cdots\times A_n$, where $A_i$ is measurable in $E_i$ for each $i$.) Sometimes it is helpful to expand the $\sigma$-algebra on the product slightly, e.g. if you want some probability measure on the product to be complete. -That seems right. "Marginal probability measure" would also refer to the measure obtained on a product like $\prod_{i\in S} E_i$, where $S$ is some subset of $\{1,\ldots,n\}$. -I suppose it's fine for the $\sigma$-algebra on the product to be slightly larger than the product $\sigma$-algebra, e.g. if we want the measure on the product to be complete. However, the measure on the product should have the property that it is the unique extension of the product measure to the $\sigma$-algebra on the product. That is, the measure on the product should either be the product measure, or the completion of the product measure, or something in between.<|endoftext|> -TITLE: Étale Local Sections of a Smooth Surjective Morphism -QUESTION [8 upvotes]: Why does a smooth surjective morphism of schemes admit a section étale-locally? - -REPLY [5 votes]: This is -EGA IV, 17.16.3 (ii).<|endoftext|> -TITLE: If $u=\frac{1+\sqrt5}{2}$, then $u^3=2+\sqrt5$, but $u^2=\frac{3+\sqrt5}{2}$. What is the group that measures the power that makes units look nice? -QUESTION [13 upvotes]: For $A=\mathbb{Z}[x]/(f)$ with quotient field $K$ and ring of integers $B$, does $U(B)/U(A)$ have a name? - -For instance $u = \tfrac{1+\sqrt{5}}{2}$ is a unit in $\mathbb{Q}[\sqrt{5}]$, but neither $u$ nor $u^2$ has integer coefficients in the basis $\{ 1, \sqrt{5} \}$. Of course $u^3$ has integer coefficients (spooky if you haven't tried it!) and in fact $u^n$ has integer coefficients iff $0 \equiv n \mod 3$. -For quadratic fields with basis $\{ 1, \sqrt{n} \}$ for $n$ square-free, one almost always has $U(A) = U(B)$. If not, then $[ U(B) : U(A) ] = 3$. -That's crazy, and it should have a name. For instance, I'd like to find out if the following is true, but I don't even know what to look for: - -Is $U(B)/U(A)$ always finite? [ where $B$ is the ring of integers of an order $A$ in a number ring ] - -REPLY [6 votes]: In general, if $A \subset B$ is an extension of rings, I would call $B^{\times}/A^{\times}$ the relative unit group. (I am probably not the only one, but I couldn't say how widespread this is.) I feel reasonably confident that there is no specialized terminology in the case of nonmaximal orders. -To answer your non-terminological question: yes, if $A \subset B$ are orders in the same number field, the group $B^{\times}/A^{\times}$ is finite. This follows from the fact that one can prove the Dirichlet Unit Theorem equally well for a nonmaximal order $A$ in a number field $K$: $A^{\times}$ is still finitely generated with torsion subgroup equal to the roots of unity in $A$ and free rank equal to $r+s-1$, where $r$ is the number of real places and $s$ is the number of complex places of $K$. Thus we have finitely generated abelian groups $A^{\times} \subset B^{\times}$ with the same free rank, so $B^{\times}/A^{\times}$ is finite.<|endoftext|> -TITLE: How to pick a thesis advisor? 
-QUESTION [38 upvotes]: This sort of question is probably in bad taste for math.stackexchange, but is probably in high demand. (I tried to start a site on Area 51 to house questions like this, but my request was closed due to the existence of math.stackexchange.) -I am advising talented students about graduate school. I believe that the most important thing one must be sure to do is pick the right PhD thesis advisor. - -Question: What is your best advice on picking a good thesis advisor. (For the sake of levity, feel free to answer an opposite question regarding how to pick a bad thesis advisor. Just be sure you are clear whether you are indicating how to pick a good or bad advisor!) - -REPLY [3 votes]: When sizing up a potential advisor it's helpful to chat with their current students, especially those nearing graduation. How much help do they get, both in choosing problems to work on and in working on them? How do they balance their desire for you to improve your thesis with your need to graduate in a timely manner? How likely are they to make the effort to help you with your career (placement, tenure review) after you get your degree? -I could rattle off quite a few tragic cases, where extremely talented mathematicians of my generation were irrevocably derailed and are "flipping burgers" due to either failing to ask those questions or ignoring the answers they got.<|endoftext|> -TITLE: If $C$ is a component of $Y$ and a component of $Z$, is it a component of $Y\cup Z$? -QUESTION [16 upvotes]: Let $X$ be a topological space, $Y$ and $Z$ subspaces of $X$. Let $C$ be a connected subset of $Y\cap Z$ such that $C$ is a component of $Y$ and a component of $Z$. Does it follow that $C$ is a component of $Y\cup Z$? -Intuitively, I would say yes, but I don't know how to prove it. -In case further assumptions are necessary, you can go as far as: $X$ is a compact metric space, $Y$ is open, $Z$ (and therefore $C$) is closed, and $C=Y\cap Z$. -Any help is much appreciated. - -REPLY [5 votes]: I think I can now give a positive result, using some of the extra -assumptions ($X$ compact Hausdorff, $Y$ open, $Z$ closed). It will -be convenient to use the following: - -Lemma -Let $X$ be a Hausdorff space and $C \subset X$ have a compact - neighbourhood $K$. Then $C$ is a component of $X$ if and only if - $C$ is a component of $K$ - -Proof of `only if': -If $C$ is not a component of $K$, then $C$ is not connected, or there -is a connected subset of $K$ that is a proper superset of $C$. -Either way, $C$ is not a component of $X$. -Proof of `if': -Assume $C$ is a component of $K$ and let $B$ be the boundary of -$K$ in $X$. Clearly $C$ is connected, so we need to prove that -no proper superset of $C$ is connected. -Let us consider $K$ as a subspace. Since $K$ is a compact Hausdorff -space, $C$ is a quasicomponent. (for a proof see this answer) -Because $C \cap B = \emptyset$, this means that for every $b \in B$ -there is a clopen neighbourhood $U_b$ disjoint from $C$. These -neighbourhoods form a cover of $B$, that by compactness has a -finite subcover. Let $U$ be the union of this subcover. Being a -finite union of clopen sets, $U$ is clopen and so is its complement. -Because none of the $U_b$ intersect $C$, and because -$B \subset U$, we have $C \subset K \setminus U \subset K \setminus B \subset K$. -Since $K$ is closed in $X$ and $K \setminus B$ is open in $X$, -$K \setminus U$ is clopen in $X$ too. 
-We may conclude that any connected superset of $C$ must be a subset of
-$K \setminus U$, therefore a subset of $K$, therefore by assumption
-equal to $C$.
-
-Without too much trouble we can now prove:
-
-Let $X$ be a compact Hausdorff space, $Y$ an open subspace and $Z$ a closed
- subspace.
- Let $C$ be a connected subset of $Y \cap Z$ such that $C$ is a component
- of $Y$ and a component of $Z$. Then $C$ is a component of $Y\cup Z$.
-
-Proof:
-$C$ is a component of, and therefore closed in, $Z$, which is closed in $X$,
-so $C$ is closed in $X$.
-$Y$ is open in $X$, so $X \setminus Y$ is closed in $X$.
-$X$ is normal, so $C$ and $X \setminus Y$ have disjoint neighbourhoods
-$U$ and $V$. If we take $K = \operatorname{Cl} U$ then
-$$
-C \subset U \subset K \subset X \setminus V \subset Y \subset Y \cup Z
-$$
-and $K$ is compact.
-Starting from the fact that $C$ is a component of $Y$, we now apply
-the lemma one way to find that $C$ is a component of $K$, then the
-other way to find it is a component of $Y \cup Z$.<|endoftext|>
-TITLE: Uniform mean ergodic theorem
-QUESTION [8 upvotes]: I'm working through Einsiedler and Ward's book on Ergodic Theory, and in Exercise 2.5.4 they want to prove the following
-$$\lim_{N - M \to \infty} \frac{1}{N - M} \sum_{n = M}^{N - 1} U_T^n f = P_T f.$$
-Now I'm wondering what this limit really says; it must be stronger than pointwise convergence since we can pick $M = 0$. If we let $N = 2n$ and $M = n$, then we seem to get more and more terms, but we are adding them up from the "tail". To me it seems that this is a very strong convergence condition, or am I wrong? What does the limit intuitively say?
-
-REPLY [9 votes]: First, let me provide the context: we are given a unitary operator $U \in L(H)$ (actually $\|U\| \leq 1$ suffices) on a Hilbert space, and $P$ is the orthogonal projection onto the subspace $V = \{x \in H\,:\, Ux=x\}$ of fixed vectors of $U$. The von Neumann mean ergodic theorem asserts that for all $x \in H$ there is convergence of the Cesàro averages of $U^{k}x$ to $Px$ in the norm:
-\[
-\left\Vert Px - \frac{1}{n+1} \sum_{k = 0}^{n} U^{k}x\right\Vert \; \xrightarrow{n \to \infty} \; 0.
-\]
-In other words, the sequence of operators $\frac{1}{n+1} \sum_{k = 0}^{n} U^{k}$ converges to $P$ in the strong operator topology on $L(H)$. Of course, convergence in the strong operator topology is the same as pointwise convergence of the operators on $H$, but I prefer not to speak of pointwise convergence because this might be confusing when speaking of a function space like $H = L^{2}$.
- Spoiler: Note that if you have that, the exercise is already solved because you can multiply with $U^{M}$ inside the norm and write $N = n+M$ (or preferably do this backwards).
-
-I'm not sure what kind of answer you're looking for. Formally, the result looks stronger but it is equivalent, as you noticed and I argued in the spoiler above.
-I think you're already pretty close to the intuition I have about this.
-Let $S_{m}^{n} = \frac{1}{n-m+1}\sum_{k = m}^{n} U^{k}$ with $n \geq m$ (the $+1$ is rather immaterial but it should become clear from the stuff I say below why I prefer to add it). This is a triangular double sequence of operators in $L(H)$. As you say, taking $m = \text{const}$ you have convergence $\|S_{m}^{n}f - Pf\| \xrightarrow{n \to \infty} 0$. Now just requiring $(n-m) \to \infty$ means that only the increasing length of the tails matters eventually.
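-To see the windowed averages in action, here is a small numeric sketch (my addition, not part of the original answer; it uses numpy). It takes $U$ to be composition with an irrational rotation of the circle and evaluates the window averages at a single point; for this trigonometric polynomial the single-point averages are a convenient stand-in for the $L^2$ statement, since the oscillating part is a geometric sum bounded uniformly in $M$. The invariant part of $f$ is the constant $0.5$:
-
-    import numpy as np
-
-    alpha = np.sqrt(2)                      # irrational rotation x -> x + alpha (mod 1)
-    f = lambda t: np.cos(2*np.pi*t) + 0.5   # the projection Pf is the constant 0.5
-    x = 0.3                                 # a fixed sample point
-
-    for M, N in [(0, 10), (100, 200), (10**4, 3*10**4), (10**5, 3*10**5)]:
-        n = np.arange(M, N)
-        print(M, N, f((x + n*alpha) % 1.0).mean())
-
-The averages approach $0.5$ as $N - M$ grows, no matter where the window $[M, N)$ starts.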
-For me a slightly more abstract stance is helpful, but this may be due to the fact that I've thought about amenability too much. I'll give a brief account of this point of view anyway:
-The Cesàro averages are probability measures on the abelian semi-group $\mathbb{N}$, namely $\mu_{n} = \frac{1}{n+1}\sum_{k=0}^{n} \delta_{k}$. The shift $Sk = k+1$ on $\mathbb{N}$ acts on the probability measures and
-\[
-\|S\mu_{n} - \mu_{n}\| = \|\frac{1}{n+1} (\delta_{n+1} - \delta_{0})\| \leq \frac{2}{n+1} \to 0,
-\]
-where the norm is understood to be the total variation norm (or $\ell^{1}$-norm).
-Now in general, a sequence $(\lambda_{n})$ (or even a net) of probability measures on $\mathbb{N}$ is called approximately invariant (or a Reiter sequence) if
-$\|S\lambda_{n} - \lambda_{n}\| \to 0$.
-On the other hand, the semigroup $\mathbb{N}$ acts on $H$ by $n \ast x = U^{n}x$. By pushing probability measures forward via the orbit map, we get an action on $H$ by the convolution semigroup $P(\mathbb{N})$ of probability measures on $\mathbb{N}$. Explicitly, $\mu \ast x = \sum_{n \in \mathbb{N}} \mu(n) U^{n} x$.
-Let $(\lambda_{n})$ be an approximately invariant sequence (or net) of probability measures. Then
-\[
-\|\lambda_{n} \ast (Ux - x)\| = \| (S\lambda_{n}) \ast x - \lambda_{n} \ast x\| \leq \|S\lambda_{n} - \lambda_{n}\|\,\| x\|\;\xrightarrow{n \to \infty} \;0
-\]
-for all $x \in H$. Let $W$ be the closed linear span of the vectors $Ux - x$. We have just seen that $\lambda_{n} \ast w \to 0$ for all $w$ in a dense subspace of $W$, hence for all $w \in W$.
-Moreover, it is not difficult to check that $W^{\perp} = V = \{x \,:\,x = Ux\}$. Indeed, if $y \in W^{\perp}$ then $0 = \langle y, x - Ux \rangle = \langle y - U^{\ast}y, x\rangle$ for all $x \in H$, so $y = U^{\ast}y$ and hence $Uy = y$ because $U$ is unitary. Therefore $V \supset W^{\perp}$ and the other inclusion is clear.
-For every $x \in H$ we have $x = Px + (1-P)x$ with $Px \in V$ and $(1-P)x \in W$. Therefore $\lambda_{n} \ast x = \lambda_{n} \ast Px + \lambda_{n} \ast (1-P)x = Px + \lambda_{n} \ast (1-P)x \to Px$ and we have proved the following version of the mean ergodic theorem:
-
-Theorem. If $(\lambda_{n})$ is an approximately invariant sequence (or net) of probability measures on $\mathbb{N}$ then $\lambda_{n} \ast x = \sum_{k \in \mathbb{N}} \lambda_{n}(k) U^{k}x$ converges to $Px$.
-
-
-Finally, I come back to your specific question. The operators $S_{m}^{n}$ are easily seen to arise from the probability measures $\lambda_{m}^{n} = \frac{1}{n-m+1} \sum_{k=m}^{n} \delta_{k}$, which is a net using lexicographic ordering and is approximately invariant provided that $(n-m) \to \infty$.<|endoftext|>
-TITLE: Calculating a Point that lies on an Ellipse given an Angle
-QUESTION [25 upvotes]: I need to find a point (A on this diagram) given the center point of the ellipse as well as an angle. I've been melting my brain all day (as well as searching through questions here) testing out different equations. What's the best way to do this?
-
-I intend to grab point A at $225^\circ$ as well as another point at approximately $250^\circ$ using the same math. These need to be fetched regardless of the width and height of the ellipse.
-
-REPLY [6 votes]: I've been working on this one for a while now because I was trying to test a coordinate for overlap with an ellipse, and I came up with something much easier to find the point on an ellipse given an angle from the center.
If you use a general first degree equation for the line and substitute into the equation for an ellipse then you can solve for x and y (the points where the line intersects the ellipse).
-To find the general first degree equation of a line, you can use this formula :
- $$(y_1 - y_2)*x + (x_2 - x_1)*y + (x_1*y_2 - x_2*y_1) = 0$$
-Since the ellipse is centered on the origin and the line passes through it as well, you can simplify the equation for the line by substituting $x_1 = 0$ and $y_1 = 0$ and you come up with :
-$$-y_2*x + x_2*y = 0$$
-Solve for x and y and you get $$x = \frac{x_2*y}{y_2} , y = \frac{y_2*x}{x_2}$$
-Next use the equation for an ellipse $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$
-and substitute in x and y and solve for $y^2$ and $x^2$ respectively. You come up with these two equations :
-$$y^2 = \frac{a^2*b^2*y_2^2}{(b^2*x_2^2 + a^2*y_2^2)} , x^2 = \frac{a^2*b^2*x_2^2}{b^2*x_2^2 + a^2*y_2^2}$$
-If you know the point on the line you can substitute in $x_2$ and $y_2$, but since all we have is an angle, we'll have to re-derive our line equation. It's not hard though. To find the x and y coordinates of a point using a radius (which we won't need) and an angle, you just use a little trigonometry. The x value of the triangle is $r*\cos{\theta}$ and the y value is $r*\sin{\theta}$. Substitute these in for $x_2$ and $y_2$ above and you get $-r\sin{\theta}*x + r\cos{\theta}*y = 0$. Notice you can divide by the radius now to remove it from the equation, leaving us with $-\sin{\theta}*x + \cos{\theta}*y = 0$. Re-substituting into the earlier equation, we may therefore take the point on the line to be $x_2 = \cos{\theta}$ and $y_2 = \sin{\theta}$. Substitute these into the equations for $y^2$ and $x^2$ and you come up with the following equations.
-$$y = \pm\frac{ab\sin{\theta}}{\sqrt{(b\cos{\theta})^2 + (a\sin{\theta})^2}} , x = \pm\frac{ab\cos{\theta}}{\sqrt{(b\cos{\theta})^2 + (a\sin{\theta})^2}}$$
-You now know another formula to find the coordinates of a point on an ellipse given only an angle from the center, or to determine whether a point is inside an ellipse or not by comparing radii. ;)
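-In code, the closed form above is only a few lines. Here is a sketch of it (my addition, not part of the original answer), written for an axis-aligned ellipse with semi-axes $a$ and $b$ and a center `(cx, cy)`; the derivation above assumes the center is the origin, so the center is simply added back at the end:
-
-    import math
-
-    def ellipse_point(cx, cy, a, b, theta_deg):
-        # point on the ellipse in the direction theta, measured from the center
-        t = math.radians(theta_deg)
-        r = a*b / math.hypot(b*math.cos(t), a*math.sin(t))
-        return cx + r*math.cos(t), cy + r*math.sin(t)
-
-    print(ellipse_point(0, 0, 200, 100, 225))   # the asker's 225 degree point
-    print(ellipse_point(0, 0, 200, 100, 250))
-
-Taking $r = ab/\sqrt{(b\cos\theta)^2+(a\sin\theta)^2} > 0$ and multiplying by $\cos\theta$ and $\sin\theta$ picks out the correct signs from the $\pm$ automatically.<|endoftext|>
-TITLE: Forcing Classes Into Sets
-QUESTION [16 upvotes]: I am still studying the topics in forcing and did not yet study much about forcing with a class of conditions.
-I know from Jech's Set Theory that you can force that the class of ordinals in the world will be countable in the generic extension, which means that you can take a proper class and turn it into a set.
-I was wondering if there is a nice characterization of classes that can be forced into sets.
-Clearly it is sufficient to assume that the class is well-orderable, and if you assume AC then if you collapsed a class to a set, in the generic extension it will be well-orderable, but does that mean that you can construct from the function you've added one which shows that you actually collapsed $Ord^M$ to some $\dot\alpha$ in the generic extension? (i.e. is the sufficient condition also necessary?)
-Edit:
-So I went to see my advisor today, and we talked about this question a little bit. He wasn't able to give me a full answer but he gave me a nice direction. We discussed generic extensions with global choice, namely you only add a class function which is a well-ordering of the world, without introducing new sets into the world. Jech wrote about it right between Easton Forcing and Levy Collapse in chapter 15 of his book.
-I searched further and found some mailing list in which Solovay said (quite recently too!)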
that he and Jensen did something like that already way back.
-In light of everything, I came up with some new questions which I will ponder on my own for a while; one of the questions is worth mentioning here as it relates very much to my original question - especially in light of Joel's answer:
-Is it possible to collapse the entire universe to some inaccessible cardinal in a larger universe? (If so, doesn't that imply that $Con(ZFC)\rightarrow Con(ZFC+In)$? (where $In$ is the existence of an inaccessible cardinal)) If not in $ZFC$, how about $NBG$?
-
-REPLY [9 votes]: For any set $X$, we may consider the partial order $\mathbb{P}$ consisting of all finite partial functions from $\omega$ to $X$, ordered by extension. If $G\subset\mathbb{P}$ is $V$-generic for this forcing notion, then $f=\cup G$ is a function from $\omega$ onto $X$. It is a function, since conditions in $G$ are compatible; it is total, since it is dense to add any given point to the domain; and it is surjective, since for any $x\in X$ the set of conditions $p$ with $x\in\mathop{ran}(p)$ is dense. Thus, in the forcing extension $V[G]$, the set $X$ becomes countable. This is how any set can be made countable by forcing. Since the forcing notion $\mathbb{P}$ is a set, it also follows that $V[G]\models ZFC$.
-In the case that $X$ is an arbitrary proper class, one can still form the partial order $\mathbb{P}$ consisting of all finite partial functions from $\omega$ to $X$, and it will still be true that the union of the generic filter will be a function from $\omega$ to $X$, thereby making $X$ countable in $V[G]$. But what will no longer be true in this class forcing case is that $V[G]\models ZFC$. Indeed, if $X$ is a proper class, then the corresponding forcing extension will definitely not satisfy ZFC: since $X$ will contain members of arbitrarily large ordinal rank (appearing unboundedly high in the $V_\alpha$ hierarchy), the forcing extension will have a function from $\omega$ unbounded in the ordinals, in violation of the Replacement axiom. So the price you pay for making a proper class $X$ countable is that you must give up ZFC in the forcing extension. Indeed, if $X$ is a proper class in $V$, then $X$ contains elements of unboundedly high rank, and this will remain true in any extension of $V$ to a model $W$ with the same ordinals; thus, $X$ will not be a set in any such $W$.
-So, to answer your question: every set can be made countable in a forcing extension by set forcing (and thus while preserving ZFC), but no proper class can become a set in any extension of the universe to a model of ZF with the same ordinals.
-
-Edit. I see now that you want to consider upward
-extensions of the models, not just forcing extensions.
-There are a few observations to make about this. I have
-been interested in this topic for some time.
-The question is the extent to which a model $M$ of set
-theory can become the $(V_\theta)^N$, for some $\theta$, of a
-taller model $N$ of set theory.
-
-First, I claim that every model $M$ of set theory can be
-elementarily embedded into the $V_\theta$ of another model
-of set theory $N$, and you can even arrange that $(V_\theta)^N\prec N$.
-This is a sense in which $M$ is
-continued as you desire to a taller model. You can prove this by a
-simple compactness argument using the elementary diagram
-of $M$, augmenting the theory with a new constant symbol
-$\dot\theta$ and the assertions that the right theory
-holds. This theory is finitely consistent by an
-application of the Reflection theorem.
-Second, some models $M$ of set theory cannot be realized
-directly as the $V_\theta$ of a taller model. For
-example, this is true in any pointwise definable model (a
-model in which every element is definable without
-parameters), since the larger model would then see that
-every element of $V_\theta$ is definable in $V_\theta$,
-which would contradict that the larger model thinks it is
-uncountable.
-Third, every countable computably saturated model $M$ of set
-theory is realized as the $V_\theta$ of a taller model.
-Indeed, every countable computably saturated model of ZFC
-is isomorphic to one of its own $V_\theta$'s, and by
-looking at things from the perspective of that copy, the
-result obtains. Harvey Friedman was the first to prove
-results of this kind, concentrating at first on
-nonstandard models of PA. You can find an account of the
-argument in my recent paper with Victoria Gitman in "A natural model of the multiverse axioms,"
-Notre Dame Journal of Formal Logic, vol 51,
-2010.
-One of the things we prove is that if $M$ is a countable
-computably saturated model of ZFC, then $M$ is also
-isomorphic to an element of $M$ that thinks it is an
-$\omega$-nonstandard model of set theory. Thus, every
-countable computably saturated model of ZFC exists as a
-nonstandard model inside another better model.
-Fourth, it is possible to show that every countable transitive model of set theory can be end-extended to a (possibly nonstandard) model of V=L, which is well-founded to any desired countable height. The reason is that this assertion is true in $L$ itself, and the statement has complexity $\Pi^1_2$ in a code for the structure, so it is true in $V$ by Shoenfield absoluteness.<|endoftext|>
-TITLE: Let $F$ be a Galois extension over $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^n$, then all elements in $F$ are constructible
-QUESTION [8 upvotes]: Let $F\subseteq\mathbb{C}$ be a Galois extension of $\mathbb{Q}$ such that $[F:\mathbb{Q}]=2^n$; then all elements in $F$ are constructible.
-
-
-Added.
-Here is what I have so far.
-Since there exists a finite normal extension $F\subseteq\mathbb{C}$ over $\mathbb{Q}$ which contains an element, say $s$, that element $s$ has degree a power of $2$. From group theory we have two propositions we could use: 'A group $G$ is solvable iff it has a normal series whose factors are abelian', and also 'If $G$ is solvable and $N\triangleleft G$, then $G/N$ is solvable'. Then we can say that the Galois group of $F$ has a normal series $\{e\}=G_0\triangleleft G_1\triangleleft\cdots\triangleleft G_n=\mathrm{Gal}(F/\mathbb{Q})$ whose factors have order 2. Thus there must exist intermediate fields $\mathbb{Q}=F_0 \subseteq F_1 \subseteq F_2 \subseteq\cdots\subseteq F_n=F$, such that $[F_{j+1}:F_j]=2$ for all $j$. Now this is where I got stuck. Next I need to somehow show that $F_{j+1}=F_j(s_j)$, where $s^2_j$ is an element in $F_j$. Then I think I can conclude that if $s_j$ is constructible then every element in $F_{j+1}$ is constructible... and then that every element of $F_n=F$ is constructible...
-
-REPLY [9 votes]: The idea is to try to deduce the result for $F$ by induction on $n$.
-First, to get your feet wet, you want to show that if $[F:\mathbb{Q}]=2$, then all elements of $F$ are constructible. Feel comfortable with that one.
-Then, how would the general proof by induction work?
-You want to argue that an $F$ that is Galois over $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^{n}$ has a subextension $K$, $\mathbb{Q}\subseteq K\subseteq F$, with $K$ Galois over $\mathbb{Q}$, $[F:K]=2$, and $[K:\mathbb{Q}]=2^{n-1}$. Then you would use induction to show that everything in $K$ is constructible. And then you'd use an argument similar to the one you used for the case $n=1$ in order to show that since everything in $K$ is constructible, and $[F:K]=2$, then everything in $F$ is constructible. So your induction hypothesis would look something like
-
-If $L$ is Galois over $\mathbb{Q}$, and has $[L:\mathbb{Q}]=2^k$ for some $k\lt n$, then every element of $L$ is constructible.
-
-To do it that way you would want to use a subgroup of $\mathrm{Aut}(F/\mathbb{Q})$ which is of index $2^{n-1}$ (rather than of order $2^{n-1}$), and which is normal (so that the corresponding extension $K$ is normal over $\mathbb{Q}$).
-You can approach it going "the other way", with a subgroup of order $2^{n-1}$ (which is necessarily normal, being of index $2$), as you ask. How would an inductive argument look in that case? The subgroup $H$ of order $2^{n-1}$ gives you an intermediate field $K$, $\mathbb{Q}\subseteq K\subseteq F$, with $[F:K] = 2^{n-1}$, $[K:\mathbb{Q}]=2$. So you would know, from the case $n=1$, that everything in $K$ is constructible, and you would want to argue inductively that everything in $F$ is constructible.
-So here, your induction hypothesis should be somewhat different: the induction hypothesis I quote above would not let you conclude that everything in $F$ is constructible from the fact that everything in $K$ is constructible, because $K$ is not $\mathbb{Q}$ so the induction hypothesis would not apply.
-You need a different induction hypothesis: one that doesn't "care" what the base field is, as long as everything in it is constructible. So your induction hypothesis should look like:
-
-If $K$ is an extension of $\mathbb{Q}$ in which all elements are constructible, and $L$ is a field extension of $K$ with $[L:K]=2$, then every element of $L$ is constructible.
-
-Then you could use that induction hypothesis to conclude everything in $F$ is constructible.
-But this introduces a new problem: in order to prove the $n=1$ case of the induction hypothesis I proposed first, you would need to prove that if $K$ is a field with $[K:\mathbb{Q}]=2$, then everything in $K$ is constructible; this is not too hard (I hope). But in order to use the second induction hypothesis, the case $n=1$ needs to be more general: now you need to show that if $K$ is any extension of $\mathbb{Q}$ in which every element is constructible, and $L$ is a field extension of $K$ with $[L:K]=2$, then every element of $L$ is constructible. Because the second induction hypothesis assumes more than the first, you need to prove more in the base case if you want to use it.
-This is exactly what Chris pointed out: his "first" is the proof of the $n=1$ case of the second proposed induction hypothesis above; his "then" is the inductive step.
-(Of course, you are just trading when and where you will prove that if everything in $K$ is constructible and $[L:K]=2$, then everything in $L$ is constructible; in the first argument, you need to do that to finish off the inductive step; in the second argument, you need it to get started.
The one advantage of the method you propose is that you only need to know that a group of order $2^n$ has a subgroup of order $2^{n-1}$; with the first method, you need to know it has a normal subgroup of order $2$. This is not very hard, though: one just shows the center is nontrivial.)
-
-Added after edit to the question.
-I think you are complicating your life again by trying to attack the entire problem in one fell swoop somehow, rather than just taking it one chunk at a time. Your original idea of trying to use induction somehow has a lot of merit, and means that you don't really need to invoke theorems on solvability and normal series, just the fact that a group of order $2^{n+1}$ must have a normal/central subgroup of order $2$ (or if you want to go by your original attempt, just using that a group of order $2^{n+1}$ must have a subgroup of order $2^n$, see the final comments after the next horizontal rule).
-The key is indeed showing that:
-Result we hope to prove. If $[F:K]=2$ (and $F/K$ is a Galois extension; but this is immediate, because any extension of degree $2$ in a field of characteristic different from $2$ is always a Galois extension), and all elements of $K$ are constructible, then all elements of $F$ are constructible.
-Assume for a moment that you have already managed to prove this. How will the induction argument go? If $F$ is a Galois extension with $[F:\mathbb{Q}]=2^1$ (that is, $n=1$) then the result will follow because every element of $\mathbb{Q}$ is certainly constructible, so by the result we are assuming you have proven, every element in $F$ is constructible. Done.
-Now, assume inductively that for any finite Galois extension $K$ of the rationals, if $[K:\mathbb{Q}]=2^k$, then every element of $K$ is constructible. We want to show that if $F$ is an extension of $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^{k+1}$, then every element of $F$ is constructible. If we can prove this, then this will prove the result you want by induction on $n$.
-So, say $F$ is a Galois extension of $\mathbb{Q}$ with $[F:\mathbb{Q}]=2^{k+1}$. Then we know that $\mathrm{Gal}(F/\mathbb{Q})$ is a group with $2^{k+1}$ elements, and thus has nontrivial center; the center is abelian of order $2^r$ for some $r\geq 1$, so there is a central subgroup of order $2$, call it $N$. Then $N\triangleleft G$, $[G:N]=2^k$, so the fixed field of $N$, call it $K$, satisfies $\mathbb{Q}\subseteq K\subseteq F$, $[K:\mathbb{Q}]=[G:N]=2^k$, $[F:K]=2$, and $K$ is Galois over $\mathbb{Q}$ because $N$ is normal in $G$. By the induction hypothesis, every element of $K$ is constructible. Now, we have $[F:K]=2$, and every element of $K$ is constructible, so by the Result-we-hope-to-prove, we conclude that every element of $F$ is constructible, and we are done. QED, RIP, $\Box$.
-So, how about that "Result-we-hope-to-prove"? Well, suppose that, as we've suggested, you manage to prove that:
-Lemma-we-hope-to-prove. If $K\subseteq F\subseteq\mathbb{C}$ are field extensions, and $[F:K]=2$, then there exists $\xi\in F$ with $\xi^2\in K$ such that $F=K(\xi)=K[\xi]$.
-Then, assuming every element of $K$ is constructible, notice that every element of $F$ can be written (uniquely) as $a+b\xi$ with $a,b\in K$. Now, $a$ and $b$ are both constructible, so if $\xi$ is constructible, then $a+b\xi$ is constructible (products of constructible numbers are constructible, sums of constructible numbers are constructible), so every element of $F$ is constructible. So, assuming the Lemma, it all comes down to showing $\xi$ is constructible.
Since $\xi^2 = k\in K$, and $k$ is constructible, $\xi = \sqrt{k}$ is constructible as well, because square roots of constructible numbers are constructible. This proves the Result-we-hope-to-prove, modulo proving the Lemma-we-hope-to-prove.
-So now we are down to proving the Lemma-we-hope-to-prove.
-Since $[F:K]=2$, if you take $\alpha\in F$, $\alpha\notin K$ (it must exist), then $\{1,\alpha\}$ is a basis for $F$ over $K$ (linearly independent because $\alpha\notin K$, and of the correct cardinality to be a basis). Since $\{1,\alpha,\alpha^2\}$ is linearly dependent (too many vectors), but $\{1,\alpha\}$ is linearly independent, then $\alpha^2$ is a linear combination of $1$ and $\alpha$. So we can find $b,c\in K$ such that $\alpha^2+b\alpha+c = 0$. That is, the minimal polynomial of $\alpha$ over $K$ is $f(x) = x^2+bx+c$.
-But we know exactly what the roots of $f(x)$ are: they are
-$$\frac{-b+\sqrt{b^2-4c}}{2}\quad\text{and}\quad\frac{-b-\sqrt{b^2-4c}}{2}.$$
-So $\alpha$ must be one of them. Letting $r$ be the first root, note that the second root is $-r-b$, so replacing $\alpha$ by $-\alpha-b$ if necessary (remember that $b$ is in $K$), we may assume that $\alpha = \frac{-b+\sqrt{b^2-4c}}{2}$.
-Now... what about that $\sqrt{b^2-4c}$? Is it in $F$? Is it in $K$? What is $K\left(\sqrt{b^2-4c}\right)$, anyway?
-
-Note. If you change what you are trying to prove to the apparently more general:
-
-Let $F\subseteq\mathbb{C}$ be a Galois extension of $K$ such that $[F:K]=2^n$. If all elements of $K$ are constructible then all elements of $F$ are constructible.
-
-Then you get a result which implies the one you want, by taking $K=\mathbb{Q}$ (since certainly every element of $\mathbb{Q}$ is constructible). In this approach, the base of the induction would be the Result-we-hope-to-prove, and the induction hypothesis would be:
-
-Induction hypothesis. If $M\subseteq\mathbb{C}$ is a Galois extension of $L$, $[M:L]=2^k$, and every element of $L$ is constructible, then every element of $M$ is constructible.
-
-Then instead of using a central subgroup $N$ of $\mathrm{Aut}(F/K)$ of order $2$, you can take the subgroup $H$ of order $2^k$ that you know exists inside $\mathrm{Gal}(F/K)$ (where now $[F:K]=2^{k+1}$). Letting $L$ be the fixed field of $H$, you have that $[L:K]=2$, $[F:L]=2^k$, and $F$ is Galois over $L$. By the Result-we-hope-to-prove you know every element of $L$ is constructible (since $H\triangleleft G$, so $L$ is Galois over $K$), and then by the induction hypothesis applied to $F/L$ you know that every element of $F$ is constructible. It all still comes down, though, to the Result-we-hope-to-prove.<|endoftext|>
-TITLE: Limit of algebraic function $\ \lim_{x\to\infty} \sqrt[5]{x^5 - 3x^4 + 17} - x$
-QUESTION [6 upvotes]: How to solve this limit?
-$$\lim_{x \to \infty}{\sqrt[5]{x^5 - 3x^4 + 17} - x}$$
-
-REPLY [4 votes]: If you want to solve it ab initio, then the way to go about it is the way Arturo suggested.
-Note that $a^5 - b^5 = (a-b)(a^4 + ba^3+ b^2a^2 + b^3a + b^4)$ and hence $$(a-b)= \frac{a^5 - b^5}{a^4 + ba^3+ b^2a^2 + b^3a + b^4}$$
-Now take $a=\sqrt[5]{x^5-3x^4+17}$ and $b=x$.
-$\sqrt[5]{x^5-3x^4+17}-x = \frac{(x^5-3x^4+17)-x^5}{\left(\sqrt[5] {x^5-3x^4+17} \right)^4 + x \left(\sqrt[5] {x^5-3x^4+17} \right)^3 + x^2 \left(\sqrt[5] {x^5-3x^4+17}\right)^2 + x^3 \left(\sqrt[5] {x^5-3x^4+17}\right)+ x^4}$
-Simplifying, we get
-$\sqrt[5]{x^5-3x^4+17}-x = \frac{-3x^4+17}{x^4 \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^4 + x^4 \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^3 + x^4 \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^2 + x^4 \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right) + x^4}$
-$\sqrt[5]{x^5-3x^4+17}-x = \frac{-3+\frac{17}{x^4}}{\left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^4 + \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^3 + \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^2 + \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right) + 1}$
-Hence, we have
-$$\sqrt[5]{x^5-3x^4+17}-x = \frac{Nr(x)}{Dr(x)}$$ where $Nr(x) = -3+\frac{17}{x^4}$ and $Dr(x) = \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^4 + \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^3 + \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right)^2 + \left(\sqrt[5] {1-\frac{3}{x}+\frac{17}{x^5}} \right) + 1$
-$\displaystyle \lim_{x \rightarrow \infty} Nr(x) = -3$ and $\displaystyle \lim_{x \rightarrow \infty} Dr(x) = 5$.
-Hence, both $\displaystyle \lim_{x \rightarrow \infty} Nr(x)$ and $\displaystyle \lim_{x \rightarrow \infty} Dr(x)$ exist as real numbers, with the latter nonzero, and hence
-$$\displaystyle \lim_{x \rightarrow \infty} \sqrt[5]{x^5-3x^4+17}-x = \frac{\displaystyle \lim_{x \rightarrow \infty} Nr(x)}{\displaystyle \lim_{x \rightarrow \infty} Dr(x)} = \frac{-3}{5} = - \frac{3}{5}$$
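-(As a quick sanity check, which is my addition and not part of the original answer, sympy reproduces the value:
-
-    from sympy import symbols, limit, Rational, oo
-
-    x = symbols('x', positive=True)
-    print(limit((x**5 - 3*x**4 + 17)**Rational(1, 5) - x, x, oo))   # -3/5
-
-in agreement with the computation above.)<|endoftext|>
-TITLE: Best place to find open questions / latest research
-QUESTION [7 upvotes]: Is there a central wiki or something where open questions (and relevant research on them) takes place?
-
-REPLY [3 votes]: Going by the title of your question, if you are trying to find out the "Best place to find... latest Research", I think I would go with http://arxiv.org. Lots of papers out there. Many papers are junk (including some of mine... hehe) But you can also find papers on open problems by serious mathematicians.
-For me, reading papers from arxiv and trying to understand them has helped me a lot to learn new stuff.
-For a wiki-like site, you might check http://garden.irmacs.sfu.ca/
-For collaborative mathematics:
-You might also do a Google search on the polymath project, which started from this blog post: http://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/<|endoftext|>
-TITLE: Parametrization of a line
-QUESTION [17 upvotes]: This is a very basic question, and it's funny that I'm able to solve more advanced problems like this, but I was presented with a basic one and got stumped. I have the equation
-$$y=-\frac{3}{4}x+6.$$
-In $\mathbb{R}^2$, this is a line. I want to find a parametrization of this line. I guess the problem here is I never really understood the concept of parametrization. I'm just a robot: I follow the steps the book tells me, but I really don't understand the intuition behind it. What is the point of parametrization (in layman's terms, which is hard to find anywhere), and how would I do it for this equation?
-
-REPLY [12 votes]: Think of a parametrization as describing the "trace" of the curve, with $t$ representing time.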
You want to write equations
-\begin{align*}
-x &= f(t),\\
-y &= g(t)
-\end{align*}
-that describe someone tracing the line as $t$ varies.
-How would you trace the graph of a function $y=g(x)$? Well, the points on the graph are all of the form $(x,g(x))$, so the simplest is to use a parametrization like
-\begin{align*}
-x &= t,\\
-y &= g(t).
-\end{align*}
-If you think about what the points $(x(t),y(t))$ look like as $t$ ranges from $a$ to $b$, you'll see that you are giving the graph of $y=g(x)$ from $x=a$ to $x=b$.
-That gives you one way to give the line.
-Another is to pick a point on the line, and just say "go along this direction, backwards or forwards". A direction is given by a vector $(u,v)$. So if your line goes through the point $(a,b)$, and has direction $(u,v)$, then one possible parametrization just says: "start at $x=a$ and $y=b$, and then move $x$ and $y$ in the direction $(u,v)$: if you move in the $x$ direction by $ku$, then you need to move in the $y$ direction by $kv$." That is,
-\begin{align*}
-x &= a + tu\\
-y &= b + tv.
-\end{align*}
-You can do that with the graph you have above by determining a point and a direction. For the direction: the slope of the line with direction $(u,v)$ is $\frac{v}{u}$, because $v$ is the rise and $u$ is the run.
-
-REPLY [10 votes]: I may be misunderstanding your question, but you have a parametrization given by your equation:
-$$ x\mapsto (x,-\frac 34 x+6) $$
-or if you want something with integer coefficients:
-$$ t\mapsto (4t, 6-3t).$$
-The point of parametrization is that on one hand you reduce the number of variables you're working with (in this case from two, $x,y$, to one, $t$), but more importantly you turn an implicit situation, that is, one defined by equations, into an explicit one, that is, a way to generate the solutions.
-In this case the point is perhaps not obvious because the equation is so simple, but imagine that if you have a very difficult equation then how much easier it is to work with the solutions when you are given a way to generate them (that is, by a parametrization) than when they are just given as the solutions of the equation.
-Perhaps a better example to work with is the solutions to the equation
-$$ x^2+y^2=1.$$
-A possible parametrization is $$t\mapsto (\cos t, \sin t).$$
-The sad truth about parametrizations is that they actually rarely exist. I guess you could interpret this as saying that if you can find a parametrization, then you should be happy.
-Addendum
-To answer the question in the comments: Imagine that you have to solve a system of equations in two variables. In other words you have two curves given by their equation on the plane and you have to find their intersection points. If you can parametrize one of them, then you can plug the parametrization into the other and solve an equation of a single variable. Here is an example:
-Say you need to find the solution to the equation system
-$$ x^2+y^2=1 \qquad 2x^3y-2xy^3=\frac14.$$
-(this is a rather random choice)
-If you just take the head-to-the-wall approach, you might solve the first as a quadratic equation, plug that into the second and end up with a degree six equation. Good luck with that. (Actually it might be solvable in this case, I have no idea, but in general a degree six equation cannot be solved with a formula).
-On the other hand, if you use the parametrization of the first equation given above, you end up with the equation
-$$ 2\cos^3t\sin t-2\cos t\sin^3 t=\frac14.$$
-Since $2\cos^3 t\sin t-2\cos t\sin^3 t = 2\sin t\cos t(\cos^2 t-\sin^2 t) = \sin 2t\cos 2t = \tfrac12\sin 4t$, this can easily be reduced to
-$$\sin 4t =\frac 12$$
-which can be easily solved.
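-A short sympy check of the last two steps (my addition, not part of the original answer):
-
-    from sympy import symbols, sin, cos, simplify, solveset, S
-
-    t = symbols('t')
-    x, y = cos(t), sin(t)
-    print(simplify(2*x**3*y - 2*x*y**3 - sin(4*t)/2))   # 0: the substitution equals (1/2) sin 4t
-    print(solveset(sin(4*t) - S(1)/2, t, S.Reals))      # the solutions of sin 4t = 1/2
-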
-The point is that parametrization helped you solve a system of equations that would normally be very difficult or impossible to solve explicitly.<|endoftext|>
-TITLE: What is the difference between a kernel and a function?
-QUESTION [14 upvotes]: I have been looking around for this question, but all results I found only describe the definition and not the answer I seek.
-Is "kernel" basically a synonym of "function"? When should we use the word "kernel" instead of "function"?
-
-REPLY [18 votes]: "Kernel" is an old-fashioned term for the function you use to define certain integral operators. (I assume this is the sense you mean, not the more common modern sense, which is completely different.) Like many other words in mathematics (although people generally never tell you this), it has less to do with denotation than connotation: when you use the word "kernel" you are thinking of your function in terms of integral operators.<|endoftext|>
-TITLE: For holomorphic $f$, $f(\frac{z}{2})= \frac{1}{2}f(z) \Longrightarrow f(z) = z$
-QUESTION [6 upvotes]: Let $f$ be a holomorphic function on the open unit disk $\mathbb{D}$ and continuous on $\overline{\mathbb{D}}$. If $f(\frac{z}{2})= \frac{1}{2}f(z)$ for all $z\in \overline{\mathbb{D}}$ and $f(1)=1$, then $f(z)=z$ for all $z\in \overline{\mathbb{D}}$.
-Got this as homework. Any hints would be highly appreciated.
-
-REPLY [4 votes]: Another way of looking at this is that you are given that $f(z) - z = 0$ for $z = 1$. Then use the condition that $f(z/2) = {1 \over 2}f(z)$ to inductively show that $f(z) - z$ also has zeroes at $z = 2^{-n}$ for all positive integers $n$. Thus the zeroes of $f(z) - z$ have an accumulation point at $z = 0$, which is only possible if $f(z) - z = 0$ for all $z$ since nonzero analytic functions can only have isolated zeroes.<|endoftext|>
-TITLE: How many $n\times m$ binary matrices are there, up to row and column permutations?
-QUESTION [17 upvotes]: I'm interested in the number of binary matrices of a given size that are distinct with regard to row and column permutations.
-If $\sim$ is the equivalence relation on $n\times m$ binary matrices such that $A \sim B$ iff one can obtain $B$ from $A$ by permuting rows and columns (equivalently, $B = PAQ$ for permutation matrices $P$ and $Q$), I'm interested in the number of $\sim$-equivalence classes over all $n\times m$ binary matrices.
-I know there are $2^{nm}$ binary matrices of size $n\times m$, and $n!m!$ possible permutations, but somehow I fail to get an intuition on what this implies for the equivalence classes.
-
-REPLY [10 votes]: Here is a computational contribution that treats the case of a square
-matrix. As pointed out, this problem can be solved using the Polya
-Enumeration Theorem. In fact if we are interested only in counting
-these matrices, then the Burnside lemma will suffice. We just need to
-compute the cycle index of the group acting on the slots of the
-matrix.
-
-These cycle indices are easy to compute and we do not need to iterate
-over all $(n!)^2$ pairs of permutations (acting on rows and columns)
-but instead it is sufficient to iterate over pairs of terms from the
-cycle index $Z(S_n)$ of the symmetric group $S_n$ according to their
-multiplicities to obtain the cycle index $Z(Q_n)$ of the combined
-action on rows and columns. The number of terms here is much
-smaller: at most the number of pairs of partitions of $n$, i.e. $p(n)^2$.
- -Now for a pair of cycles, one of length $l_1$ from a row permutation -$\alpha$ and another of length $l_2$ from a column permutation $\beta$ -their contribution to the disjoint cycle decomposition product for -$(\alpha,\beta)$ in the cycle index $Z(Q_n)$ is by inspection -$$a_{\mathrm{lcm}(l_1, l_2)}^{l_1 l_2 / \mathrm{lcm}(l_1, l_2)} = -a_{\mathrm{lcm}(l_1, l_2)}^{\gcd(l_1, l_2)}.$$ - -The algorithm now becomes very simple -- iterate over pairs of terms -as described above, collect the contribution from each pair of cycles -and add it to the cycle index being computed. - -This gives the following cycle indices (only the first four are -shown): -$$Z(Q_2) = 1/4\,{a_{{1}}}^{4}+3/4\,{a_{{2}}}^{2},$$ -$$Z(Q_3) = -1/36\,{a_{{1}}}^{9}+1/6\,{a_{{1}}}^{3}{a_{{2}}}^{3}+1/4\,a_{{ -1}}{a_{{2}}}^{4}+2/9\,{a_{{3}}}^{3}+1/3\,a_{{3}}a_{{6}},$$ -$$Z(Q_4) = -{\frac {{a_{{1}}}^{16}}{576}}+1/48\,{a_{{1}}}^{8}{a_{{2}}}^{4 -}+1/16\,{a_{{1}}}^{4}{a_{{2}}}^{6}+1/36\,{a_{{1}}}^{4}{a_{{3} -}}^{4}+{\frac {17\,{a_{{2}}}^{8}}{192}}\\+1/6\,{a_{{1}}}^{2}a_{ -{2}}{a_{{3}}}^{2}a_{{6}}+1/9\,a_{{1}}{a_{{3}}}^{5}+1/12\,{a_{ -{2}}}^{2}{a_{{6}}}^{2}+{\frac {13\,{a_{{4}}}^{4}}{48}}+1/6\,a -_{{4}}a_{{12}},$$ -and -$$Z(Q_5) = -{\frac {{a_{{1}}}^{25}}{14400}}+{\frac {{a_{{1}}}^{15}{a_{{2} -}}^{5}}{720}}+{\frac {{a_{{1}}}^{9}{a_{{2}}}^{8}}{144}}+{ -\frac {{a_{{1}}}^{10}{a_{{3}}}^{5}}{360}}+{\frac {{a_{{1}}}^{ -5}{a_{{2}}}^{10}}{480}}\\+1/48\,{a_{{1}}}^{3}{a_{{2}}}^{11}+{ -\frac {a_{{1}}{a_{{2}}}^{12}}{64}}+1/36\,{a_{{1}}}^{6}{a_{{2} -}}^{2}{a_{{3}}}^{3}a_{{6}}+1/36\,{a_{{1}}}^{4}{a_{{3}}}^{7}+{ -\frac {{a_{{1}}}^{5}{a_{{4}}}^{5}}{240}}\\+{\frac {{a_{{2}}}^{5 -}{a_{{3}}}^{5}}{360}}+1/24\,{a_{{1}}}^{3}a_{{2}}{a_{{4}}}^{5} -+1/24\,{a_{{1}}}^{2}{a_{{2}}}^{4}a_{{3}}{a_{{6}}}^{2}+1/36\,{ -a_{{2}}}^{5}{a_{{3}}}^{3}a_{{6}}\\+1/16\,a_{{1}}{a_{{2}}}^{2}{a -_{{4}}}^{5}+1/24\,{a_{{2}}}^{5}a_{{3}}{a_{{6}}}^{2}+1/18\,{a_ -{{2}}}^{2}{a_{{3}}}^{5}a_{{6}}+1/16\,a_{{1}}{a_{{4}}}^{6}\\+1/ -36\,{a_{{2}}}^{2}{a_{{3}}}^{3}{a_{{6}}}^{2}+1/12\,{a_{{1}}}^{ -2}a_{{3}}{a_{{4}}}^{2}a_{{12}}+1/12\,a_{{2}}a_{{3}}{a_{{4}}}^ -{2}a_{{12}}+{\frac {13\,{a_{{5}}}^{5}}{300}}\\+1/30\,{a_{{5}}}^ -{3}a_{{10}}+1/15\,{a_{{5}}}^{2}a_{{15}}+1/20\,a_{{5}}{a_{{10} -}}^{2}+1/10\,a_{{5}}a_{{20}}+1/15\,a_{{10}}a_{{15}}.$$ -Evaluating these cycle indices with the variables set to two we -quickly obtain the sequence -$$2, 7, 36, 317, 5624, 251610, 33642660, 14685630688,\\ -21467043671008, 105735224248507784,1764356230257807614296,\\ -100455994644460412263071692,19674097197480928600253198363072,\\ -13363679231028322645152300040033513414,\\ -31735555932041230032311939400670284689732948,\ldots$$ -which is indeed OEIS A002724. -Note that the cycle indices make it possible to enumerate -configurations with more than two possible entries or with entries -having different weights. 
For example, with a 3x3 square and three
-colors $A,B$ and $C$ we get the generating function
-$$Z(Q_3)(A+B+C) = 1/36\, \left( A+B+C \right) ^{9}+1/6\,
-\left( A+B+C \right) ^{3} \left( {A}^{2}+{B}^{2}+{C}^{2}
-\right) ^{3}\\+2/9\, \left( {A}^{3}+{B}^{3}+{C}^{3} \right)^{3}
-+1/4\, \left( A+B+C \right) \left( {A}^{2}+{B}^{2}+{C}^{2
-} \right) ^{4}\\+1/3\, \left( {A}^{3}+{B}^{3}+{C}^{3}
- \right) \left( {A}^{6}+{B}^{6}+{C}^{6} \right)$$
-which expands to
-$${A}^{9}+{A}^{8}B+{A}^{8}C+3\,{A}^{7}{B}^{2}+3\,{A}^{7}B
-C+3\,{A}^{7}{C}^{2}+6\,{A}^{6}{B}^{3}+10\,{A}^{6}{B}^{2
-}C\\+10\,{A}^{6}B{C}^{2}+6\,{A}^{6}{C}^{3}+7\,{A}^{5}{B}^
-{4}+17\,{A}^{5}{B}^{3}C+28\,{A}^{5}{B}^{2}{C}^{2}\\+17\,{
-A}^{5}B{C}^{3}+7\,{A}^{5}{C}^{4}+7\,{A}^{4}{B}^{5}+22\,
-{A}^{4}{B}^{4}C+43\,{A}^{4}{B}^{3}{C}^{2}+43\,{A}^{4}{B
-}^{2}{C}^{3}\\+22\,{A}^{4}B{C}^{4}+7\,{A}^{4}{C}^{5}+6\,{
-A}^{3}{B}^{6}+17\,{A}^{3}{B}^{5}C+43\,{A}^{3}{B}^{4}{C}
-^{2}+54\,{A}^{3}{B}^{3}{C}^{3}\\+43\,{A}^{3}{B}^{2}{C}^{4
-}+17\,{A}^{3}B{C}^{5}+6\,{A}^{3}{C}^{6}+3\,{A}^{2}{B}^{
-7}+10\,{A}^{2}{B}^{6}C+28\,{A}^{2}{B}^{5}{C}^{2}\\+43\,{A
-}^{2}{B}^{4}{C}^{3}+43\,{A}^{2}{B}^{3}{C}^{4}+28\,{A}^{
-2}{B}^{2}{C}^{5}+10\,{A}^{2}B{C}^{6}+3\,{A}^{2}{C}^{7}\\+
-A{B}^{8}+3\,A{B}^{7}C+10\,A{B}^{6}{C}^{2}+17\,A{B}^{5}{
-C}^{3}+22\,A{B}^{4}{C}^{4}+17\,A{B}^{3}{C}^{5}\\+10\,A{B}
-^{2}{C}^{6}+3\,AB{C}^{7}+A{C}^{8}+{B}^{9}+{B}^{8}C+3\,{
-B}^{7}{C}^{2}+6\,{B}^{6}{C}^{3}+7\,{B}^{5}{C}^{4}\\+7\,{B
-}^{4}{C}^{5}+6\,{B}^{3}{C}^{6}+3\,{B}^{2}{C}^{7}+B{C}^{
-8}+{C}^{9}.$$
-This is the Maple code for this computation. Here we have two slightly
-different ways of evaluating the count, the first by substituting into
-the cycle index and the second by skipping the cycle index altogether
-and evaluating all variables at two during processing. The latter
-should be used when we are interested only in the count as opposed to
-classifying configurations according to the number of each color /
-value that are present.
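-(Before the Maple code, here is an independent cross-check in Python; it is an addition of mine, not part of the original answer, and it assumes sympy for its integer-partition generator. It performs the same Burnside computation, iterating over pairs of cycle types:
-
-    from math import gcd, factorial
-    from sympy.utilities.iterables import partitions
-
-    def perms_with_type(part, k):
-        # number of permutations in S_k with cycle type part = {length: multiplicity}
-        c = factorial(k)
-        for l, e in part.items():
-            c //= l**e * factorial(e)
-        return c
-
-    def count_binary_matrices(n, m):
-        # Burnside: average over pairs of cycle types of 2^(number of cell cycles)
-        total = 0
-        for p in partitions(n):
-            for q in partitions(m):
-                cells = sum(gcd(l1, l2) * e1 * e2
-                            for l1, e1 in p.items() for l2, e2 in q.items())
-                total += perms_with_type(p, n) * perms_with_type(q, m) * 2**cells
-        return total // (factorial(n) * factorial(m))
-
-    print([count_binary_matrices(k, k) for k in range(1, 6)])   # [2, 7, 36, 317, 5624]
-
-The printed values match the sequence above. The Maple code from the answer now follows.)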
-pet_cycleind_symm :=
-proc(n)
-option remember;
-
-    if n=0 then return 1; fi;
-
-    expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
-end;
-
-pet_varinto_cind :=
-proc(poly, ind)
-local subs1, subs2, polyvars, indvars, v, pot, res;
-
-    res := ind;
-
-    polyvars := indets(poly);
-    indvars := indets(ind);
-
-    for v in indvars do
-        pot := op(1, v);
-
-        subs1 :=
-        [seq(polyvars[k]=polyvars[k]^pot,
-             k=1..nops(polyvars))];
-
-        subs2 := [v=subs(subs1, poly)];
-
-        res := subs(subs2, res);
-    od;
-
-    res;
-end;
-
-
-pet_cycleind_sqmat :=
-proc(n)
-option remember;
-local sind, cind, term_a, term_b, v_a, v_b,
-    len_a, len_b, inst_a, inst_b, p;
-
-    cind := 0;
-
-    # for n=1 the group is trivial and the cycle index is a[1];
-    # return it directly so that the term iteration below always
-    # sees a genuine sum of terms
-    if n=1 then
-        return a[1];
-    fi;
-
-    sind := pet_cycleind_symm(n);
-
-    for term_a in sind do
-        for term_b in sind do
-            p := 1;
-            for v_a in indets(term_a) do
-                len_a := op(1, v_a);
-                inst_a := degree(term_a, v_a);
-
-                for v_b in indets(term_b) do
-                    len_b := op(1, v_b);
-                    inst_b := degree(term_b, v_b);
-
-                    p := p*a[lcm(len_a, len_b)]
-                         ^(gcd(len_a, len_b)*inst_a*inst_b);
-                od;
-            od;
-
-            cind := cind +
-            lcoeff(term_a)*lcoeff(term_b)*p;
-        od;
-    od;
-
-    cind;
-end;
-
-v :=
-proc(n)
-    option remember;
-    local cind, vars, sbl;
-
-    cind := pet_cycleind_sqmat(n);
-
-    vars := indets(cind);
-    sbl := [seq(q=2, q in vars)];
-
-    subs(sbl, cind);
-end;
-
-w :=
-proc(n)
-option remember;
-local sind, count, term_a, term_b, v_a, v_b,
-    len_a, len_b, inst_a, inst_b, p;
-
-    count := 0;
-
-    # a single cell admits two values, so the n=1 count is 2
-    if n=1 then
-        return 2;
-    fi;
-
-    sind := pet_cycleind_symm(n);
-
-    for term_a in sind do
-        for term_b in sind do
-            p := 1;
-            for v_a in indets(term_a) do
-                len_a := op(1, v_a);
-                inst_a := degree(term_a, v_a);
-
-                for v_b in indets(term_b) do
-                    len_b := op(1, v_b);
-                    inst_b := degree(term_b, v_b);
-
-                    p := p*
-                    2^(gcd(len_a, len_b)*inst_a*inst_b);
-                od;
-            od;
-
-            count := count +
-            lcoeff(term_a)*lcoeff(term_b)*p;
-        od;
-    od;
-
-    count;
-end;
-
-This MSE Meta Link
-has many more PET computations by various users.<|endoftext|>
-TITLE: Does inclusion of a ring into a polynomial ring induce a closed map on prime spectra?
-QUESTION [13 upvotes]: Let $A$ be a commutative (unital) ring, and $A[x_1,\ldots,x_n]$ a polynomial ring over it in some finite number of variables. The inclusion $i\colon A \hookrightarrow A[x_1,\ldots,x_n]$ induces (by contraction) a continuous surjection $\mathrm{Spec}(i)\colon \mathrm{Spec}(A[x_1,\ldots,x_n]) \twoheadrightarrow \mathrm{Spec}(A)$ on the prime spectra. Is $\mathrm{Spec}(i)$ a closed map of topological spaces? Does this become the case if $A$ is assumed to be Noetherian and/or an integral domain or a field?
-If it's not closed (under whatever assumptions on $A$), could someone provide a simple counterexample?
-I realize this is probably a very stupid question. It seems like the map should obviously be closed or obviously not be, but I've vacillated as to which. I seem finally to have devised a proof it is closed, but I am suspicious of this quasi-proof, because it seems to make an exercise I've been working on easier than the hint provided would indicate, and also fails to use some of the hypotheses granted for the exercise. Also, if it were true, I would expect to have seen some mention of it on the Internet or in some text, and so far I haven't.
-
-REPLY [14 votes]: Another good example to think about, more geometric and less arithmetic than that of Qiaochu, is obtained by taking $A = k[x]$ ($k$ an algebraically closed field) and considering $k[x] \to k[x,y]$.
The induced map on Spectra is the map $\mathbb A^2 \to \mathbb A^1$ from the affine plane to the affine line given by the projection
-$(x,y) \mapsto x$.
-This is a (perhaps the most!) famous example of a non-closed map in algebraic geometry, which motivates the definition of projective spaces, properness, complete varieties, and so on.
-To see that it is non-closed, consider the hyperbola $xy = 1$, which is a closed subset of $\mathbb A^2$. Its image is the subset $x \neq 0$ of $\mathbb A^1$, which is not closed.
-Here is the geometric picture: to see if a given point $x_0$ of $\mathbb A^1$ is in the image of this map, we have to take the vertical line $x = x_0$ and intersect with the hyperbola $x y = 1$, and ask whether or not this intersection is non-empty. What we see is that the intersection is non-empty if $x_0 \neq 0$,
-but as we pass to the limit $x_0 = 0$, the intersection suddenly becomes empty.
-This illustrates the general phenomenon that in affine varieties, there is no "conservation of intersection number" when we make continuous deformations of the varieties being intersected. Rectifying this problem is one of the main motivations for introducing projective spaces, or, more generally, complete varieties.
-Technically, if you look at the basic definitions related to projective varieties, or more generally, complete varieties, you will see that "conservation of intersection number" is not explicitly mentioned, but that the property of properness (which has to do with the closedness of certain maps) is what looms large. This may seem a little mysterious, but in fact it turns out that the failure of certain maps to be closed is more or less equivalent to the failure of conservation of intersection number. The example of the map
-$\mathbb A^2 \to \mathbb A^1$ above illustrates how the two issues are connected, and this is one reason why it is worth thinking about this example very carefully.<|endoftext|>
-TITLE: Does the multiplicative identity have to be 1?
-QUESTION [9 upvotes]: I am just starting out with vector spaces and I am having a hard time understanding them. One of the requirements states that $1\mathbf{v}=\mathbf{v}$ where $1$ is the multiplicative identity.
-Does 1 have to be the identity? Or is it that whatever is the multiplicative identity is labelled 1?
-I ran across a question where $a(x,y)$ was defined as $(a x/3, a y/3)$. It was not specified but I guess the question also implied that the scalar $a$ belongs to the real numbers.
-Why has 1 got to be the identity here? Why can't I define 3 to be the identity here?
-Edit: I realise that I didn't describe the question in enough detail. The space here is the set of all ordered pairs $(x,y)$ where $x, y \in\mathbb{R}$.
-Addition is defined as $(x_1,y_1) + (x_2,y_2) = (2x_1-3x_2,y_1-y_2)$. Now addition obviously violates many axioms, but I am interested in scalar multiplication,
-which is defined as given above. The question is a simple example question asking us to list down all axioms violated.
-
-REPLY [24 votes]: Let's ignore the axiom that requires $1\mathbf{v}=\mathbf{v}$ to hold for a moment.
-Let's call an element $\alpha$ of the field with the property that $\alpha\mathbf{v}=\mathbf{v}$ for every $\mathbf{v}$ an "identity". First, let's note that we really want to have at most one of them, not multiple ones. Otherwise, we could just define scalar multiplication by "do nothing", and we don't really have a vector space structure (the scalars are just dummies, not doing anything).
And also, because otherwise we're going to lose the notion of linear independence (which is very important later on): if $\alpha$ and $\beta$ both "work" as the identity and $\alpha\neq\beta$, then for every vector you are going to have
-$$(\alpha-\beta)\mathbf{v} = \alpha\mathbf{v}-\beta\mathbf{v}=\mathbf{v}-\mathbf{v}=\mathbf{0},$$
-but with $\alpha\neq\beta$. This will mean that no collection of vectors is linearly independent. So we don't want more than one identity.
-One axiom is the associativity of scalar multiplication:
-$$(\alpha\beta)\mathbf{v} = \alpha(\beta\mathbf{v}).$$
-Why can't you define the "identity" to be $3$? Because then you would have $3\mathbf{v}=\mathbf{v}$, hence
-$$9\mathbf{v} = 3(3\mathbf{v}) = 3\mathbf{v}=\mathbf{v},$$
-so that now $9$ is also an identity, even though $3\neq 9$, and we already said that is not a good idea.
-So, whatever the identity is, you want it to equal its square, for precisely this reason. Otherwise, you're going to have lots of identities, and hence lots of troubles.
-But in a field, there are only two solutions to $x^2=x$: namely, $x=0$ and $x=1$. So the "identity" has to be either $0$ or $1$. But the fact that $0\mathbf{v}=\mathbf{0}$ holds follows without the identity axiom ("$1\mathbf{v}=\mathbf{v}$"), so unless every vector is $\mathbf{0}$, you cannot have $0$ be the identity, because then we would have
-$$\mathbf{v} = 0\mathbf{v}=\mathbf{0}.$$
-So, really, the only reasonable choice for identity is $1$.
-So, then, why do we make it an axiom? Because otherwise, we can define scalar multiplication by "any scalar times any vector is $\mathbf{0}$". This definition satisfies all the axioms of a vector space except the existence of the identity, but it's a "silly" vector space (and one where the notion of linear independence is also ultimately completely screwed up). So we want to exclude it. And the way to exclude it is precisely by requiring that there be an element of the field that does not map everything to $\mathbf{0}$ through scalar multiplication, so it makes sense to kill both birds with one stone and require that $1$ send each vector to itself.
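-As a quick numeric illustration of the $3$-versus-$9$ argument above (my addition, not part of the original answer), one can check the asker's exotic scalar multiplication directly:
-
-    import numpy as np
-
-    def smul(a, v):
-        # the scalar multiplication from the question: a(x, y) = (ax/3, ay/3)
-        return a * v / 3
-
-    v = np.array([6.0, 9.0])
-    print(smul(1, v))                          # (2, 3) != v: the identity axiom fails
-    print(smul(3, v))                          # (6, 9) == v: 3 acts like an identity...
-    print(smul(3*3, v), smul(3, smul(3, v)))   # (18, 27) vs (6, 9): associativity fails
-
-So declaring $3$ to be the identity costs you associativity of scalar multiplication, exactly as argued.<|endoftext|>
-TITLE: Questions about Serre duality
-QUESTION [9 upvotes]: I've read the section "Serre duality" in Hartshorne's book and have several questions.
-1) In Remark 7.1.1 it is claimed that on $X = \mathbb{P}^n$
-$\alpha = \frac{x_0^n}{x_1 \cdot ... \cdot x_n} d(\frac{x_1}{x_0}) \wedge ... \wedge d(\frac{x_n}{x_0})$
-is a Cech cocycle which generates $H^n(X,\omega_X)$ (I have checked this) and that this is independent from the choice of the basis of $\mathbb{P}^n$. Well it's easy to check that it's independent from the order of $x_0,...,x_n$. But it does not seem to be invariant under other automorphisms of $\mathbb{P}^n$, which are typically given by
-$x_0 \mapsto x_0 + \lambda x_1 , x_1 \mapsto x_1, ..., x_n \mapsto x_n$.
-Or am I wrong? What is the subgroup of $\text{Aut}(\mathbb{P}^n) = PGL(n+1)$ that fixes this class?
-2) In Remark 7.12.1 it is shown that Serre duality implies that for a smooth projective variety $X$ of dimension $n$ we have that $H^n(X,\omega_X) \cong k$. It's quite fascinating to get such a concrete result with this abstract machinery of Ext-functors. What can be said about this isomorphism? Is there a "canonical" one? Is it possible to distinguish a generator? What examples do you know?
-
-REPLY [5 votes]: The point of this answer is to give a short proof of the following result.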
-Theorem 1: Let $X$ and $Y$ be smooth $n$-dimensional projective varieties over a field $k$ of characteristic zero; let $B$ be a smooth connected variety; and let $\phi: X \times B \to Y$ be a morphism. Then the induced map $H^q(Y, \Omega_Y^p) \to H^q(X \times \{ b \}, \Omega_X^p)$ is a constant function of $b$. (If the base field is not algebraically closed, this requires a little more language to say correctly; I'm going to gloss over this.)
-We need the following, very deep theorem:
-Theorem 2: If $X$ is a smooth projective variety over a field of characteristic zero, then the map $H^q(X, \Omega^p) \to H^q(X, \Omega^{p+1})$ induced by $d$ is $0$.
-This is usually proved by Hodge theory.
-As a corollary of Theorem 1, if $\mathrm{Aut}(X)$ is connected, then $\mathrm{Aut}(X)$ acts trivially on $H^q(X, \Omega^p_X)$. As a corollary of the corollary, $\mathrm{PGL}_{n+1}$ acts trivially on $H^n(\mathbb{P}^n, \omega)$.
-Proof (Sketch):
-We may immediately assume that $B$ is one dimensional, as any two points in a connected variety may be joined by a chain of smooth affine curves. Also, the statement is local on $B$, so we may pass to a smaller base curve whenever it is convenient.
-Let $\theta$ be a class in $H^q(Y, \Omega^p_Y)$. Let $\eta = \phi^* \theta \in H^q(X \times B, \Omega^p_{X \times B})$. For any $b \in B$, we can restrict $\eta$ to $X \times \{ b \}$, and get a class in $H^q(X, \Omega^p_X)$. Call this $\eta(b)$. We want to show that $\eta(b)$ is a constant function of $b$. In other words, $\eta(b)$ is a section of $H^q(X, \Omega_X^p) \otimes \mathcal{O}_B$ and we want to show that it is in $H^q(X, \Omega_X^p)$. Since $H^q(X, \Omega_X^p) \otimes \mathcal{O}_B$ is a trivial vector bundle, it makes sense to talk about $d \eta(b)$ (just take $d$ of each component), and we will show that $d \eta(b)=0$.
-In case you would like to see this done in more technical language, here it is: We can think of $H^q(Y, \Omega_Y^p)$ as $R^q \pi_* \Omega_Y^p$ where $\pi$ is the map $Y \to \mathrm{Spec} \ k$. So we can pull back $\theta$ to a class $\eta$ in $R^q (\pi_2)_{\ast} \Omega^p_{X \times B}$, where $\pi_1$ and $\pi_2$ are the projections of $X \times B$ onto its factors.
-We have a canonical map $\Omega^p_{X \times B} \to \Omega^p_{X \times B/B}$ (restricting to the vertical fibers). So we get a class in $R^q (\pi_2)_{\ast} \Omega^p_{X \times B/B}$. Also, $\Omega^p_{X \times B/B} \cong \pi_1^* \Omega_X^p$. Finally, if you think about how pushforwards and pullbacks work in products, then $R^q (\pi_2)_{\ast} \pi_1^{\ast} \Omega^p_X \cong H^q(X, \Omega^p_X) \otimes \mathcal{O}_B$. So that's how we get a class in $H^q(X, \Omega^p_X) \otimes \mathcal{O}_B$.
-So, we want to compute $d \eta(b)$.
-Passing to a local neighborhood on $B$, we may assume that $\Omega^1_B$ is free, with generator $dz$ for $z \in \mathcal{O}(B)$.
-On $X \times B$, we have
-$$\Omega^p_{X \times B} \cong \pi_1^* \Omega_X^p \oplus \ \pi_1^* \Omega_X^{p-1} \otimes \pi_2^* \Omega_B^1 \quad (*)$$
-($B$ is one dimensional; otherwise there would be more terms.)
-Locally, write $\eta = \alpha + \beta dz$ where $\alpha$ and $\beta$ are sections of $\pi_1^* \Omega^p_X$ and $\pi_1^* \Omega^{p-1}_X$. So we get $\alpha$ and $\beta$ in $R^q (\pi_2)_{\ast} \Omega^{p}_{X \times B/B}$ and $R^q (\pi_2)_{\ast} \Omega^{p-1}_{X \times B/B}$. (More precisely, $\eta$ is represented by a Cech cocycle $U_{i_0} \cap U_{i_1} \cap \cdots \cap U_{i_q} \mapsto \eta_{i_0 i_1 \cdots i_q}$. Write each $\eta_{I}$ as $\alpha_I + \beta_I dz$.
Then $\alpha_I$ and $\beta_I$ are Cech cocycles for the corresponding cohomology groups.) -Write $d'$ for the derivation $\Omega^p_{X} \to \Omega^{p+1}_{X}$ and $d''$ for the analogous derivation on $B$. Using the isomorphism $(*)$, we have $d=d'+d''$. In these terms, we have -$$d \eta = d'\alpha + d'' \alpha + d'(\beta) dz. \quad (**)$$ -By Theorem 2, $d'(\alpha)$ and $d'(\beta)$ are $0$. So $d \eta$, which is a class in $R^q (\pi_2)_{\ast} \Omega^{p+1}_{X}$, is actually a class in $R^q (\pi_2)_{\ast} \pi_1^* \Omega^{p}_{X} \otimes \Omega^1_B$ and, as such, is equal to $d'' \alpha$. -Now, $\eta(b)$ is the restriction of $\eta$ to $X \times \{ b \}$. -As $dz$ is $0$ on the vertical fibers, $\eta|_{X \times \{ b \}} = \alpha|_{X \times \{ b \}}$. -We wanted to compute $d \eta(b)$, where the $d$ is with respect to the $B$-variables. -This makes it plausible, and I leave it to you to check, that $d \eta(b)$ is $d'' \alpha$. -So far, this argument has been valid for any $\eta \in R^q (\pi_2)_{\ast} \Omega^p_{X \times B}$. -Now we use that $\eta = \phi^* \theta$. By Theorem 2, $\theta$ is closed. So $d \eta = d \phi^* \theta = \phi^* d \theta = 0$. -Looking at $(**)$, and at the compatibilities we have just checked, we see that $d \eta(b)=0$, as desired. - -Comments: -(1) The analogous result in differential geometry is that, if $B$ is connected, and $\phi : X \times B \to Y$ is a smooth map, then the induced maps $H^{k}(Y) \to H^k(X)$ are the same for every point of $B$. The reader might enjoy writing down a de Rham proof of this fact, and seeing how it relates to the above argument. -(2) More generally, let $\pi: \mathcal{X} \to B$ be a smooth projective map with $B$ connected; let $\phi: \mathcal{X} \to Y$ be any morphism with $Y$ projective; let $\theta \in H^q(Y, \Omega_Y^p)$; let $\eta = \phi^* \theta \in R^q \pi_{\ast} \Omega^p_{\mathcal{X}}$. Then we want to say that $\eta(b)$ is a constant function of $b$. The right way to say this is that there is a connection on $R^q \pi_{\ast} \Omega^p_{\mathcal{X}}$ which annihilates $\eta$. This is called the Gauss-Manin connection, and I have basically walked you through how to compute it for a trivial family. -(3) Theorem 2 is trivial for $p=n$. However, even if you only care about Theorem 1 for $p=n$, then our argument uses Theorem 2 applied to $\beta$, which is an $(n-1)$-form. So we still need to use a nontrivial case of Theorem 2, even in that case. (This is what I missed the first time I wrote this up.)<|endoftext|> -TITLE: Representations of a non-compact group are labeled by its maximal compact subgroup? -QUESTION [9 upvotes]: I don't have much of any awareness about the representation theory of non-compact Lie groups but I bumped into it for my work. -Is there some idea that the representations of a non-compact group are labeled by those of its maximal compact subgroup? If yes then I would like to know of explanations for the above and of references from where I can pick this up. - -The conformal group of $3+1$ dimensional space-time is $SO(4,2)$ and apparently any representation of it can be written as a direct sum over representations of $SO(4) \times SO(2)$. -The N=2 superconformal group for $2+1$ dimensions is $SO(3,2)\times SO(2)$, and is its maximal compact subgroup $SO(2) \times SO(3) \times SO(2)$? If yes then I would like to know how this can be proven. (In physics contexts these two $SO(2)$ factors are distinguished by physical generators of different meanings.)
- -I would like to know of the general framework in which the above fits. -To quote from a paper a typical argument where such a thing seems to get used, -"Any irreducible representation of the superconformal algebra may be decomposed into a finite number of distinct irreducible representations of the conformal algebra...which are in turn labeled by their own primary states...hence the state content of an irreducible representation of the superconformal algebra is completely specified by the quantum numbers of its conformal primaries" -I would be very glad if someone can also give expository references or explanations specific to the above argument. - -REPLY [12 votes]: Suppose that we have (to fix ideas) a unitary irreducible representation of a semi-simple Lie group $G$ (such as the non-compact Lie groups that you write down) on a Hilbert space. (Here I mean irreducible in the Hilbert space sense, i.e. there are no proper invariant closed subspaces.) Let's call the Hilbert space $V$ (just to give it a name). -A theorem of Harish-Chandra then says that $V$ is admissible, which means the following: fix a maximal compact subgroup $K$ of $G$. Then each irreducible representation $W$ of $K$ appears with finite multiplicity as a subrepresentation -of $V$. If we call this multiplicity $m_W$, then we may write -$V = \hat{\oplus}_W W^{m_W}$, i.e. as the Hilbert space direct sum (i.e. the completed direct sum) of the various $W$, each appearing with multiplicity $m_W$. (This is a consequence of the Peter--Weyl theorem, and is true -for any unitary representation of a compact group in which each irrep. appears -with finite multiplicity.) -Now inside $\hat{\oplus} W^{m_W}$ we have the actual algebraic direct sum -$\oplus_W W^{m_W}$, and this has an intrinsic characterization as a subspace -of $V$, as the $K$-finite vectors. (A vector $v$ is called $K$-finite if the -linear span of all its translates by elements of $K$ is finite-dimensional.) -Let's denote it by $V_K \subset V$. -It turns out that $V_K$, although it is not invariant under the action of $G$ -(typically, unless $V$ happens to be finite-dimensional, which it usually won't be), is invariant under $\mathfrak g$, the Lie algebra of $G$. -One calls $V_K$ a $(\mathfrak{g},K)$-module, or also a Harish-Chandra module -(because it has actions of $\mathfrak{g}$ and $K$). It turns out that -$V_K$ determines $V$, and the basis of Harish-Chandra's approach to the study -of unitary reps. of $G$ is to work instead with the underlying $(\mathfrak g,K)$-modules. -Now in principle, to recover $V$, one really needs $V_K$ as a $(\mathfrak g, K)$-module; i.e. forgetting $\mathfrak g$ and just remembering the $K$-action is throwing away a lot of information. -But in practice (at least in the examples that I know) non-isomorphic irreducible $V$ have different lists of multiplicities $m_W$, and so just knowing -$V_K$ as a $K$-rep. may well already pin down $V$. -In fact, often one doesn't even have to know all the $m_W$, but just -the first non-vanishing value. (If I think of the reps. $W$ as being -labelled by their highest weights lying in some choice of dominant Weyl chamber for $K$.) -A good place to read about this (it is not short, but I found it very good for dipping into) is Knapp's book Representation theory of semisimple groups: an -overview based on examples. He gives the basic definitions, a lot of examples, and goes on to develop various aspects of the theory (e.g.
the theory of -the relationship between $V$ and the multiplicities $m_W$: this is known as -the theory of $K$-types). -Incidentally, Harish-Chandra was a student of Dirac, and (as far as I know) his study of unitary reps. of semisimple groups was inspired by Bargmann's treatment of the special case of $SL_2(\mathbb R)$, which was in turn inspired in part by the role of this group in physics. -On the other hand, I don't know of a treatment of the theory which directly relates it to the physics literature, and I can't parse the physics argument that you wrote down in detail.<|endoftext|> -TITLE: Functions on P(R) - are there examples? -QUESTION [6 upvotes]: What are some examples of functions on the Power Set of the Reals? Is this an abuse of terminology - functions on the reals can be thought of as functions on the power set of the naturals with a specific ordering. I was hoping someone would kindly refer me to a text or article where explicit (not necessarily 'useful') examples of functions with the domain P(R) are given; or, if this is a confused idea, why there is nothing to it. Thanks! - -REPLY [8 votes]: A function on $\mathcal{P}(\mathbb{R})$ essentially means a "rule" of assigning to each subset of $\mathbb{R}$ an element of some set $S$, the codomain. So e.g., there is the identity function $\mathcal{P}(\mathbb{R})\to\mathcal{P}(\mathbb{R})$ that sends a set $A$ to itself. There is the $\sup$ map from $\mathcal{P}(\mathbb{R})$ to the extended reals $[-\infty,\infty]$ that sends $A\subseteq\mathbb{R}$ to $\sup A$, and similarly with $\inf$. You could also define functions like $f(A)=1$ if $A$ is open and $f(A)=0$ if $A$ is not open, or other such maps indicating topological properties of subsets of $\mathbb{R}$. You could define the function $c:\mathcal{P}(\mathbb{R})\to \{0,1,2,\ldots,2^{\aleph_0}\}$ such that $c(A)$ is the cardinality of $A$. Or $C:\mathcal{P}(\mathbb{R})\to \{0,1,2,\ldots,2^{\aleph_0}\}$ such that $C(A)$ is the cardinality of the set of connected components of $A$. -I see that Zhen Lin has indicated a couple of other useful examples in the comments. The complement in $\mathbb{R}$ defines a bijection on $\mathcal{P}(\mathbb{R})$. Each outer measure on $\mathcal{P}(\mathbb{R})$ defines a function from $\mathcal{P}(\mathbb{R})$ to $[0,\infty]$. The closure and interior maps are other functions from $\mathcal{P}(\mathbb{R})$ to itself, mapping onto the set of closed and open subsets of $\mathbb{R}$ respectively. -So yes, there are lots of explicit examples, but I don't know exactly what you're looking for. The set of functions from $\mathcal{P}(\mathbb{R})$ to any fixed set $S$ is $S^{\mathcal{P}(\mathbb{R})}$, with cardinality $\displaystyle{|S|^{2^\mathfrak{c}}}$, which is at least $\displaystyle{2^{2^\mathfrak{c}}=2^{2^{2^{\aleph_0}}}}$ if $S$ has more than $1$ element. - -REPLY [6 votes]: Every set can be a domain of a function (in fact, a proper class can also be the domain of a function if one is careful enough). In particular the powerset of the real numbers can be. -Some examples that might be used are measures in measure theory; granted, the Lebesgue measure is not defined for every subset of the real numbers (unless the axiom of choice is not assumed), but you can define the outer measure which is defined on $P(\mathbb{R})$, or some other measure that is not limited by the requirements of the Borel/Lebesgue measures. -Other examples are negation, union and intersection (functions in two variables), as well as symmetric difference.
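None of these maps can be tabulated value-by-value, of course, since $\mathcal{P}(\mathbb{R})$ is uncountable, but for a finite stand-in you can actually enumerate the power set and write such functions down. Here is a small illustrative Python sketch (the ambient set $U$ and all names are purely for demonstration, not from the answers above):

    from itertools import chain, combinations

    # Finite analogue of "functions on a power set": P(R) cannot be
    # enumerated, but the same rules (sup, cardinality, complement, ...)
    # can be demonstrated on the power set of a small finite set.
    U = {1, 2, 3}  # stand-in for the ambient set

    def power_set(s):
        """All subsets of s, as frozensets."""
        s = list(s)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    sup_map  = lambda A: max(A) if A else float('-inf')  # analogue of A -> sup A
    card_map = lambda A: len(A)                          # analogue of A -> |A|
    comp_map = lambda A: frozenset(U) - A                # complement, a bijection of P(U)

    for A in power_set(U):
        print(set(A), sup_map(A), card_map(A), set(comp_map(A)))

Note that $\sup\varnothing=-\infty$ in the extended reals, which is why the empty set is special-cased.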
-Assuming the axiom of choice, the Stone-Cech compactification of the natural numbers with the discrete topology is of cardinality $\beth_2$ (i.e. $|P(\mathbb{R})|$) and you can look at it as if you are assigning each subset of the real numbers an ultrafilter over the natural numbers.<|endoftext|> -TITLE: Diffusions - global and local -QUESTION [5 upvotes]: Suppose $dX_t = \mu(X_t)dt + \sigma(X_t)dW_t$ is a diffusion. Is there a sense in which the dynamics are "dominated" locally by the diffusion term, and dominated globally by the drift term? -If $\mu$ and $\sigma$ are constants, then the law of the iterated logarithm says that the contribution from the diffusion term is slightly greater than $\sqrt{t}$, whereas the contribution from the drift term is linear. -On the other hand, over small timescales, small variations in the noise dominate any estimate one can make of the drift term. -Does a similar principle apply to more general diffusions? - -REPLY [3 votes]: If $\mu$ and $\sigma$ are not too wild, then the local dynamics are determined by the diffusion. However, the large scale behavior can be affected by the diffusion. -One example which may help is to consider a potential hill. A particle placed slightly to the left of the peak will tend to roll down to the left. A particle placed slightly to the right will tend to roll to the right. However, the strengths of those tendencies depend on the diffusion. If you have two potential wells with a hill between them, the proportion of the time spent in the deeper well depends on the diffusion, not just on the shape of the well.<|endoftext|> -TITLE: Has there been a rigorous analysis of Strassen's algorithm? -QUESTION [10 upvotes]: According to Wikipedia, Strassen's Algorithm runs in $O(N^{2.807})$ time. Has anyone seen a more rigorous analysis displaying constants, possibly in a specific language such as C or Java? -I realize this will vary from language to language, machine to machine etc, but does anyone know of an approximate input size where Strassen's algorithm starts to outperform regular matrix multiplication? -I think this may belong on the computer science stack exchange but since it is somewhat mathematical I thought I would post it here. - -REPLY [3 votes]: From a google search [crossover strassen matrix multiplication], I've found that experiments place the crossover point somewhere from n = 8 to n = 20. -However, at some point in analysis of algorithms you jump from manipulating mathematical concepts straight to just running things experimentally and recording values and times. At that point it is not really called analysis of algorithms, but instead plain old engineering where you manipulate things other than the basic algorithm (aggressively optimizing compilers, caching strategies, the speed of memory components, hardware parallelization, the cooling system, etc). Then the variables to consider are too many and underspecified to make an analytic (on paper) calculation, and you just have to compare using runtime data.<|endoftext|> -TITLE: Reaching all possible simple directed graphs with a given degree sequence with 2-edge swaps -QUESTION [5 upvotes]: Starting with a given simple, directed graph G, I define a two-edge swap as: - -select two edges u->v and x->y such that (u!=x) and (v!=y) and (u!=y) and (x!=v) -delete the two edges u->v and x->y -add edges u->y and x->v - -Is it guaranteed that I can reach any simple directed graph with the original (in- and out-) degree sequence in some finite number of 2-edge swaps?
-If we need some sort of 3-edge swaps, what are they? -Background: I intend to use this as MCMC steps to sample random graphs, but over at the Networkx Developer site, there is a discussion that Theorem 7 of the paper P Erdos et al., "A simple Havel–Hakimi type algorithm to realize graphical degree sequences of directed graphs", Combinatorics 2010 implies that we need 3-edge swaps to sample the whole space. - -REPLY [2 votes]: The question is whether a triple swap is necessary or not. One of the examples in the paper is the directed cycle between three nodes (i->j), (j->k), (k->i). Obviously, another graph with the same degree sequence is the one in which all directions are reversed: (i <- j), (j <- k), (k <- i). It is, however, not possible to get from the first to the second graph if you do not allow for self-loops: there are no two edges whose swap is allowed under this condition. At first I thought that there cannot be an example of this in larger graphs, but actually there are arbitrarily large graphs with the same problem (under the condition of no multiple edges and self-loops): again, start with the directed triangle; add any number of nodes that are connected to all other nodes by bi-directional edges. Thus, the only edges that are flexible are the ones in the triangle and again, all of their edges can be reversed to result in a graph with the same degree sequences but no sequence of edge-swaps can achieve it. -It is obvious that the family of graphs described here is very much constrained but there may be others with similar problems. Thus: there are directed graphs which need the triple-swap s.t. all graphs with the same degree sequences but without multiple edges and self-loops can be sampled u.a.r.<|endoftext|> -TITLE: is $\nabla \cdot ( c^2 \nabla)$ a Laplace-Beltrami operator? -QUESTION [8 upvotes]: I asked this on mathoverflow, and was suggested to ask it here. -Someone mentioned, in passing, to me that $u \mapsto \nabla \cdot ( c^2 \nabla u)$ is a Laplace-Beltrami operator. Does anyone have some insight into this? From my understanding, the Laplace-Beltrami operator generalizes the Laplacian to Riemannian manifolds, by taking the trace of the Hessian. I don't really see the connection between that and the above operator, unless c=1. -This operator comes from the wave equation, where $\partial^2_t u -\nabla \cdot ( c^2 \nabla u) = f$. There may or may not be some smoothness conditions on $c$, and all these are functions on subsets of $\mathbb{R}^n$. -Thanks for any help, --Nick - -REPLY [2 votes]: Willie's hint in his comment is a bit misleading; your operator is equivalent to the Laplace-Beltrami operator with respect to some (conformally equivalent) metric only if $c$ is constant. -More generally, an elliptic operator $Lf = \operatorname{div} M \nabla f$ with $M$ self-adjoint and positive-definite is equivalent to the Laplacian of some new metric if and only if $\det M$ is constant. For more information see my answer to this related question: Generalized Laplace--Beltrami operators<|endoftext|> -TITLE: $G_\delta$ set with the same Lebesgue outer measure -QUESTION [5 upvotes]: Statement: If $E$ is a bounded set of real numbers, there exists a $G_\delta$ set $G$ such that $G$ contains $E$ and has the same Lebesgue outer measure as $E$. -I completed the proof for the case that $E$ is countable, but how should I prove this for the case that $E$ is uncountable? - -REPLY [9 votes]: How are you defining the Lebesgue outer measure $\tau$ of a set $E$?
It should be the infimum of the set $M$ consisting of all those numbers $r$ that are the measures of certain open sets that cover $E$, right? -Note that this infimum is finite because $E$ is bounded: To be bounded means that for some $l$, $E\subseteq(-l,l)$, so $2l\in M$ and $\tau=\inf M\le 2l$. -Take a sequence $r_n$ of elements of $M$ that is decreasing and converges to $\tau$. This sequence exists because $\tau$ is an infimum: For each $n$ there must be an $r\in M$ with $\tau\le r<\tau+(1/n)$. -For each $r_n$ pick a witnessing open set $E_n$ covering $E$ with measure $r_n$. We may as well assume that $E_n$ is bounded. If it is not, replace it with $E_n'=E_n\cap(-l,l)$. Of course, this may replace $r_n$ with a smaller number. That's fine. -The point, of course, is that $F=\bigcap_nE_n$ is a $G_\delta$ set that contains $E$ (since each $E_n$ is open and contains $E$). To check that $F$ has the same outer measure as $E$, note that any open set covering $F$ also covers $E$, so the outer measure $\rho$ of $F$ satisfies $\rho\ge\tau$. On the other hand, $E_n$ covers $F$, so $\rho\le\tau+(1/n)$. Therefore, $\rho\le\tau$, and hence $\rho=\tau$.<|endoftext|> -TITLE: Proving the set of points at which a function diverges to $\infty$ is countable -QUESTION [14 upvotes]: Let $f\colon\mathbb{R}\to\mathbb{R}$. Prove that the set - $$\{x \mid \mbox{if $y$ converges to $x$, then $f(y)$ converges to $\infty$}\}$$ is countable. - -My book told me to consider $g(x)=\arctan(f(x))$, then it said "it is easy to see the set is countable." But I still can't understand what it means. - -REPLY [5 votes]: If we allow $f$ to be a partial function, that is, where the domain is only a subset of $\mathbb{R}$ rather than all of $\mathbb{R}$, then there is an interesting counterexample. -Namely, let $C\subset\mathbb{R}$ be the usual Cantor set, and let $U=\mathbb{R}-C$ be the complement, an open set. For any real $x\in U$, let $$f(x)=\frac{1}{d(x)},$$ where $d(x)$ is the distance from $x$ to $C$, which is nonzero for all $x\in U$. Note that for any sequence $y_n\in U$ with $y_n\to x\in C$, we have $d(y_n)\to 0$ and hence $f(y_n)\to\infty$. (Note also that every $x\in C$ is a limit of points in $U$.) Since the Cantor set has uncountable size continuum, we have therefore an uncountable set of points where $f$ diverges to infinity. Furthermore, $f$ is continuous on its domain $U$.<|endoftext|> -TITLE: Reading commutative diagrams? -QUESTION [7 upvotes]: Sorry for this whole bunch of questions. Please note that I know what a commutative diagram is, and that I can somehow read them, at least the simpler ones. But often enough the diagrams are labelled and/or explained verbally, thus my question: - -How are commutative diagrams to be - read in general? Can they always be - deterministically translated into - first-order sentences or properties? - -Which "background assumptions" are needed for a commutative diagram to make specific sense? (A minimal assumption should be that they tell something about a category. What else?) -Which conventions are used? (e.g. for existence and uniqueness of arrows) -When is it important to label the objects and arrows? -When are verbal explanations necessary? -Does every conceivable diagram mean something? -Are there (graphical) analogues of commutative diagrams for other (relational or algebraic) structures than categories? - -REPLY [7 votes]: Commutative diagrams are, to put it simply, a (very) handy way of writing systems of equations in categories.
Instead of having rows of symbols and doing symbol-pushing, you reason diagrammatically and geometrically. -Now, about your questions, I am not sure I understand all of them so my answer may be way off, but here it goes anyway. - -Which "background assumptions" are needed for a commutative diagram to make specific sense? - -Without getting (too) technical here, diagrams in categories are oriented graphs where the vertices are labeled with objects and the edges with arrows and to every path we associate the composite arrow. The problem is that the composite is not well defined in the presence of loops, e.g. think of a single vertex with two loops labeled f and g: does it denote the composite fg or gf? If your labeled graph does not have loops then you are all set. -Note: making this all formal and rigorous is something of a chore for little gain. One possible way is to define diagrams of shape $G$ (a graph, for example a square) in a category $A$ as functors $FG\to A$ where $FG$ is the free category on the graph. Then a commutative diagram of a shape $G$ is a functor $FG/R \to A$ where $FG/R$ denotes the quotient of $FG$ by a congruence $R$ that "forces" commutativity, that is, the equality of certain (all) parallel pairs of paths. You can then define the gluing of diagrams along a common diagram as a certain pushout, which allows you to prove that if the smaller pieces of a diagram are commutative then the whole thingy is also commutative, etc. - -Which conventions are used? (e.g. for existence and uniqueness of arrows) - -I am not sure I understand what you are trying to ask here. What problem of existence and uniqueness are you referring to? The existence and uniqueness of a composite associated to a path was treated in the previous answer. - -When is it important to label the objects and arrows? - -If there is no ambiguity about which arrows and objects you mean (or you simply do not need them), feel free to not label the graph. In my experience, this happens very rarely so I tend to label just about everything. - -When are verbal explanations necessary? - -No general rule here. The best that I can offer is use common sense and know your target audience. - -Does every conceivable diagram mean something? - -No, see my first answer. - -Are there (graphical) analogues of commutative diagrams for other (relational or algebraic) structures than categories? - -Yes, there is a fairly sophisticated graphical calculus for braided monoidal categories with duals involving tangles. There are also higher dimensional generalizations of diagrams to n-categories but here the problems of assigning a consistent composite to the corresponding notion of graph are much harder and their computational usefulness is rather limited for reasons that should be obvious. Even 2-diagrams in 2-categories are already very cumbersome. This is actually one of the reasons why higher dimensional category theory is inherently harder: you do not have a handy graphical calculus and computing anything can range from a pain in the butt to a pain in the (insert ultra-sensitive bodily region here). -Hope it helps, regards. -G. Rodrigues<|endoftext|> -TITLE: A question regarding the definition of Galois group -QUESTION [15 upvotes]: In my book, Galois group is defined to mean the set of automorphisms on $E/F$ that "leave alone" the elements in $F$.
-On Wikipedia it says: -"If $E/F$ is a Galois extension, then $Aut(E/F)$ is called the Galois group of (the extension) $E$ over $F$, $\dots$" -And Wikipedia's definition of Galois: -"An algebraic field extension $E/F$ is Galois if it is normal and separable. Equivalently, the extension $E/F$ is Galois if and only if it is algebraic, and the field fixed by the automorphism group $Aut(E/F)$ is precisely the base field $F$." -So in one case, Wikipedia, the extension is restricted to be algebraic. So the set of automorphisms on $\mathbb{Q}(\pi) / \mathbb{Q}$ is not a Galois group. -My question: How is it possible to have two different definitions of what a Galois group is? Do these not conflict? Or what am I missing here? -Many thanks for your help. -Edit: -I'm using J. Gallian, Contemporary Abstract Algebra and Allan Clark, Elements of Abstract Algebra. Both use the same terminology, not the same as Wikipedia. - -REPLY [18 votes]: There is a slight divergence of nomenclature. Everyone agrees on what $\mathrm{Aut}(E/F)$ is. The question is what to call it. - -Some books (e.g., Hungerford, Rotman's Galois Theory) always refer to $\mathrm{Aut}(E/F)$ as the "Galois group" of $E$ over $F$ (or of the extension), whether or not the extension is a Galois extension. -Other books (e.g., Lang) use the generic term "automorphism group" to refer to $\mathrm{Aut}(E/F)$ in the general case, and reserve the term Galois group exclusively for the situation in which $E$ is a Galois extension of $F$. - -So, in Lang, even just saying "Galois group" already implies that the extension must be a Galois extension, that is, normal and separable. In Hungerford, just saying "Galois group" does not imply anything beyond the fact that we are looking at the automorphism group of the extension. -Wikipedia is following Convention 2; your book is following Convention 1. -There is also the question of whether to admit infinite extensions or not. A lot of introductory books consider only finite extensions when dealing with Galois Theory, and define an extension to be Galois if and only if $|\mathrm{Aut}(E/F)| = [E:F]$. This definition does not extend to infinite extensions, so the definitions are restricted to finite (algebraic) extensions, with infinite extensions not considered at all. Other characterizations of an extension being Galois (e.g., normal and separable) generalize naturally to infinite extensions, so no restriction is placed. Likewise, some books explicitly restrict to algebraic extensions, others do not; but note that most define "normal" to require algebraicity, because it is defined in terms of embeddings into the algebraic closure of the base field, so even if you don't explicitly require the extension to be algebraic in order to be Galois, in reality this restriction is (almost) always in place. -This is not such a big deal as it might appear, because one can show that an arbitrary (possibly infinite) Galois extension $E/F$ is completely characterized in a very precise sense by the finite Galois extensions $K/F$ with $F\subseteq K\subset E$ and $[K:F]\lt\infty$, as the automorphism group $\mathrm{Aut}(E/F)$ is the inverse limit of the corresponding finite automorphism groups.<|endoftext|> -TITLE: Find the sum $\sum\limits_{k=1}^{2n} (-1)^{k} \cdot k^{2}$ -QUESTION [5 upvotes]: How to find this sum?
-$$\sum\limits_{k=1}^{2n} (-1)^{k} \cdot k^{2}$$ - -REPLY [8 votes]: As I'm a fan of proofs without words, here is my pictorial effort of why -$$\sum_{k=1}^{2n} (-1)^k k^2 = \sum_{i=1}^{2n} i = n(2n+1).$$ -(In symbols: pairing consecutive terms gives $(2m)^2-(2m-1)^2=4m-1$ for $m=1,\dots,n$, and these sum to $2n(n+1)-n=n(2n+1)$.)<|endoftext|> -TITLE: Comparing Poisson variables -QUESTION [7 upvotes]: Let $X$ and $Y$ be two Poisson variables with different means. -Is there a better (as in more concise or numerically faster) way to compute $P(X\leq Y)$ than using -$$ P(X\leq Y) = \sum_{y=0}^\infty P(Y=y) P(X\leq y) $$ - -REPLY [2 votes]: Rasholnikov has shown you a method using modified Bessel functions. You could also get that sort of result from a Skellam distribution, which is the difference between two Poisson distributions. -If you want a quick and simple approximation for $X \sim \text{Poiss}(\lambda)$ and $Y \sim \text{Poiss}(\mu)$ then -$$\text{Pr}(X \leq Y) \approx \Phi\left( \frac{1/2 - (\lambda - \mu )}{\sqrt{\lambda + \mu}} \right)$$ -where $\Phi()$ is the cumulative distribution function of a standard normal distribution. The $1/2$ is there as a continuity adjustment to deal with the possibility that $X=Y$. -For example with $\lambda =5$ and $\mu =10$, the true result is about 0.9256 while the approximation gives 0.9222.<|endoftext|> -TITLE: Prove 7 divides $15^n+6$ with mathematical induction -QUESTION [9 upvotes]: Prove that for all natural numbers $n$, the following is divisible by $7$: -$$15^n+6$$ -Base. We prove the statement for $n = 1$: -$15 + 6 = 21$, which is divisible by $7$, so it is true. -Inductive step. -Induction Hypothesis. We assume the result holds for $k$. That is, we assume that -$15^k+6$ -is divisible by 7 -To prove: We need to show that the result holds for $k+1$, that is, that -$15^{k+1}+6=15^k\cdot 15+6$ -and I don't know what to do - -REPLY [5 votes]: Often textbook solutions to induction problems like this are magically "pulled out of a hat" - completely devoid of intuition. Below I explain the intuition behind the induction in this proof. Namely, I show that the proof easily reduces to the completely trivial induction that $\rm\ \color{#c00}{1^n \equiv 1}$. -Since $\rm\ 15^n + 6 = 15^n-1 + 7\:,\: $ it suffices to show that $\rm\ 7\ |\ 15^n - 1\:.\: $ The base case $\rm\ n=1\ $ is clear. The inductive step, slightly abstracted, is simply the following -$\ \ \ \ \ \ \ \begin{align} &7\ |\ \ \color{#0a0}{c\ -1},\ \ \ \color{#90f}{d\ -\ 1}\ \ \Rightarrow\ \ 7\ |\ cd\,-\,1 = (\color{#0a0}{c-1})\ d + \color{#90f}{d-1}\\[.2em] -{\rm thus} \ \ \ \ &7\ |\ 15-1,\ 15^n-1\ \ \Rightarrow\ \ 7\ |\ 15^{n+1}-1\end{align}$ -$\rm Said\ \ mod\ 7,\ \ 15\equiv 1\ \Rightarrow\ 15^n\equiv \color{#c00}{1^n\equiv 1}\ $ by inductively multiplying ("powering") using this: -Lemma $\rm\ \ \ \ \ A\equiv a,\ \ B\equiv b\ \Rightarrow\ AB\equiv ab\ \ (mod\ m)\quad\ $ [Congruence Product Rule] -Proof $\rm\ \ m\: |\: A-a,\:\:\ B-b\ \Rightarrow\ m\ |\ (A-a)\ B + a\ (B-b)\ =\ AB - ab $ -Notice how this transformation from divisibility form to congruence arithmetic form has served to reduce the induction to the triviality $\rm\, \color{#c00}{1^n \equiv 1}$. Many induction problems can similarly be reduced to trivial inductions by appropriate conceptual preprocessing. Always think before calculating! -See here and here for much further discussion on this topic.<|endoftext|> -TITLE: Can there be such a thing as a classification of classification theorems? -QUESTION [5 upvotes]: Can there be such a thing as a classification of classification theorems?
- -REPLY [9 votes]: Allow me to supplement Andres's excellent answer by copying over the following answer that I gave over at this MO question: -How can we understand in a precise general way the idea that a given classification problem is complicated or simple? How are we to compare the relative difficulty of two classification problems? -These questions form the central motivation for the emerging subject known as Borel equivalence relation theory (see Greg Hjorth's survey article, greatly missed after his recent death). The main idea is that many of the most natural equivalence relations arising in many parts of mathematics turn out to be Borel relations on a standard Borel space. One example is the isomorphism problem on finitely generated groups, but of course, there are hundreds of other examples. A classification problem for an equivalence relation E is really the problem of finding a way to describe the E-equivalence classes, of finding an E-invariant function that distinguishes the classes. -Harvey Friedman defined that one equivalence relation E is Borel-reducible to another relation F if there is a Borel function f such that x E y if and only if f(x) F f(y). That is, the function f maps E classes to F classes in such a way that different E classes get mapped to different F classes. This provides a classification of the E classes by using the F classes. The concept of reducibility provides a precise, robust way to say that one relation F is at least as complex as another E. Two relations are Borel equivalent if they reduce to each other, and we are led to the hierarchy of equivalence relations under Borel reducibility. By placing an equivalence relation into this hierarchy, we come to understand how complex it is in comparison with other equivalence relations. In particular, we say that one equivalence relation E is strictly simpler than F, if E reduces to F but not conversely. -It sometimes happens that one has a classification problem E and is able to provide a classification by assigning to each structure a countable list of data, such that two structures are equivalent iff they have the same data. This amounts to a reduction of E to the equality relation =, for two structures are E equivalent iff their data is equal. Such relations that reduce to equality are called smooth, and lie near the bottom of the hierarchy of Borel equivalence relations. These are the simplest equivalence relations. Thus, one way of showing that a relation is comparatively simple is to show that it is smooth; to show it is comparatively hard, show that it is not smooth. -The subject of Borel equivalence relation theory, as now developed by A. Kechris, G. Hjorth, S. Thomas and many others, is focused on placing many of the natural classification problems of mathematics into this hierarchy. Some of the main early results are the following interesting dichotomies: -Theorem.(Silver dichotomy) Every Borel equivalence relation E either has only countably many equivalence classes or = reduces to E. -The relation E0 says that two binary sequences are equivalent iff they agree from some point onward. It is easy to see that = reduces to E0, and an elementary argument shows that E0 does not reduce to =. Thus, E0 is strictly harder than equality. Moreover, it is a kind of next-step up in the hierarchy, in light of the following. -Theorem.(Glimm-Effros dichotomy) Every Borel equivalence relation E either reduces to = or E0 reduces to E.
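For concreteness, here is one standard map witnessing the easy reduction of = to E0 mentioned above (a sketch; any such construction works): given a binary sequence $x$, let $f(x)$ be the concatenation of the initial segments of $x$, that is, $f(x)=x{\upharpoonright}1^\frown x{\upharpoonright}2^\frown x{\upharpoonright}3^\frown\cdots$. If $x=y$ then $f(x)=f(y)$, so certainly $f(x)$ and $f(y)$ are E0-equivalent. If $x\neq y$, say they differ at coordinate $k$, then $f(x)$ and $f(y)$ differ inside every copied segment of length greater than $k$, hence at infinitely many coordinates, so $f(x)$ and $f(y)$ are not E0-equivalent. Thus $x=y$ if and only if $f(x)$ E0 $f(y)$, and $f$ is continuous, hence Borel.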
-The subject continues with many interesting results that gradually illuminate more and more of the hierarchy of Borel equivalence relations. For example, the Feldman-Moore theorem shows that every Borel equivalence relation E having every equivalence class countable is the orbit equivalence of a countable group of Borel bijections of the space. The relation Eoo is the orbit equivalence of the left-translation action of the free group F2 on its power set. This relation is complete for the countable Borel equivalence relations, in the sense that every countable Borel equivalence relation reduces to it. It's great stuff!<|endoftext|> -TITLE: Sum and product of Martingale processes -QUESTION [23 upvotes]: Given two martingale processes $(X_t)$ and $(Y_t)$, are their sum $(X_t+Y_t)$ and their product $(X_t \times Y_t)$ also martingales? -If not, will $(X_t)$ and $(Y_t)$ being independent guarantee that their sum and product are martingales? Thanks! - -REPLY [27 votes]: The product of two independent martingales is a martingale--or rather it is or it is not, depending on the precise formulation of the hypothesis! When it is, one says that the martingales are orthogonal. This is explained, for example, by Alexander Cherny in the chapter Some Particular Problems of Martingale Theory of the Shiryaev Festschrift. -And yes, the sum of two independent martingales is a martingale but, here again, it might be wise to state the result with some care and, first of all, as mentioned by steveO in a comment, to specify the filtration(s) one is considering. -The trivial version is that if $X$ and $Y$ are two martingales (independent or not) with respect to a given filtration $\mathcal{G}$, then the sum $X+Y$ is also a martingale with respect to $\mathcal{G}$. But what happens if one assumes that $X$ is a martingale with respect to its own filtration $\mathcal{F}^X$ and that $Y$ is a martingale with respect to its own filtration $\mathcal{F}^Y$? -(Recall that the filtration $\mathcal{F}^Z$ of a process $Z$ is defined by $\mathcal{F}^Z_n=\sigma(\{Z_k;k\le n\})$ for every $n$.) -Then, if $X$ and $Y$ are independent, $X+Y$ is a martingale with respect to its own filtration $\mathcal{F}^{X+Y}$ but, first, the proof, while not terribly difficult, requires some care (and uses more than the linearity of conditional expectations), and, second, for non-independent martingales, this becomes horribly wrong.
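For the case that does work, a quick Monte Carlo sanity check can be reassuring. The sketch below assumes the simplest possible setting, two independent simple symmetric random walks (so the pair is Markov and conditioning on $(X_n, Y_n)$ captures the relevant past); it is an illustration, not a proof:

    import random
    from collections import defaultdict

    # For independent walks, the product Z = X*Y should satisfy the
    # martingale property E[Z_{n+1} | F_n] = Z_n; here F_n is summarized
    # by the current pair (X_n, Y_n) because the walks are Markov.
    random.seed(0)
    n, trials = 5, 200_000
    buckets = defaultdict(list)  # (X_n, Y_n) -> observed Z_{n+1} values

    for _ in range(trials):
        x = sum(random.choice((-1, 1)) for _ in range(n))
        y = sum(random.choice((-1, 1)) for _ in range(n))
        x1 = x + random.choice((-1, 1))
        y1 = y + random.choice((-1, 1))
        buckets[(x, y)].append(x1 * y1)

    for (x, y), zs in sorted(buckets.items()):
        if len(zs) > 5000:  # only well-populated bins
            print(f"X_n={x:+d}, Y_n={y:+d}: Z_n={x * y:+d}, "
                  f"estimated E[Z_(n+1)|F_n] = {sum(zs) / len(zs):+.3f}")

The estimated conditional expectations come out close to $X_nY_n$, as they should.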
-To get an idea of the problem, consider a given integrable random variable $\xi$ and two $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$ and try to find conditions guaranteeing that $E(\xi|\mathcal{A}\vee\mathcal{B})=E(\xi|\mathcal{A})$. Is $\xi$ independent of $\mathcal{B}$ enough? No, one has to assume that $\mathcal{B}$ is independent of $\sigma(\xi)\vee\mathcal{A}$.<|endoftext|> -TITLE: What is the spectrum of the commutative C*-algebra I have constructed here? -QUESTION [16 upvotes]: Let $B$ and $F$ be compact Hausdorff spaces. -Let $E\to B$ be a fiber bundle with fibre $F$ and structure group $\mathrm{Homeo}(F)$, the group of homeomorphisms of $F$. -I think this induces a fiber bundle $E'$ over $B$ with fiber $C(F,\mathbb C)$, the C*-algebra of continuous functions on $F$, and with structure group $\mathrm{Aut}(C(F,\mathbb C))\cong\mathrm{Homeo}(F)$, the group of *-automorphisms of $C(F,\mathbb C)$. -(To be more explicit about what happens here: my idea is: take a covering of $B$ which trivialises $E$. The transition functions give me a cocycle with values in the structure group $\mathrm{Homeo}(F)$. But, since $\mathrm{Homeo}(F)\cong\mathrm{Aut}(C(F,\mathbb C))$, I get a cocycle with values in $\mathrm{Aut}(C(F,\mathbb C))$, which I'd like to use to glue my new bundle.) -Let $\Gamma(B,E')$ denote the continuous sections of $E'$. I think pointwise operations turn this into a C*-algebra. Since the fiber $C(F,\mathbb C)$ is commutative, $\Gamma(B,E')$ is commutative as well. -Question: What is the spectrum of $\Gamma(B,E')$? -Example: If $E\cong B\times F$ is the trivial bundle, then $E'\cong B\times C(F,\mathbb C)$ and thus -$$\Gamma(B,E')\cong C(B,C(F,\mathbb C))\cong C(B\times F,\mathbb C).$$ -This suggests that the spectrum of $\Gamma(B,E')$ is actually $E$. -Edit: I posted this question on MO where it was solved in a comment by Anton Deitmar. - -REPLY [2 votes]: [As requested by Norbert:] -I posted this question on MO where it was solved in a comment by Anton Deitmar.<|endoftext|> -TITLE: Taylor series of a polynomial -QUESTION [10 upvotes]: Given a polynomial $y=C_0+C_1 x+C_2 x^2+C_3 x^3 + \ldots$ of some order $N$, I can easily calculate the polynomial of reduced order $M$ by taking only the first $M+1$ terms. This is equivalent to doing a Taylor series expansion with $M\le N$ around $x=0$. -But what if I want to take the Taylor series expansion around a different point $x_c$? In the end, I want the polynomial coefficients of $y_2=K_0+K_1 x + K_2 x^2 + K_3 x^3 + \ldots$ which represents the Taylor expansion of $y$ around point $x_c$ such that $y(x_c)=y_2(x_c)$ including the first $M$ derivatives. -So given the coefficients $C_i$ with $i=0 \ldots N$, and a location $x_c$ I want to calculate the coefficients $K_j$ with $j=0 \ldots M$. -Example -Given $y=C_0+C_1 x+C_2 x^2$ ( $N=2$ ) then the tangent line ($M=1$) through $x_c$ is -$$ y_2 = (C_0-C_2 x_c^2) + (C_1+2 C_2 x_c) x $$ -or $K_0 = C_0-C_2 x_c^2$, and $K_1 =C_1+2 C_2 x_c$ -There must be a way to construct a ($M+1$ by $N+1$ ) matrix that transforms the coefficients $C_i$ into $K_j$.
For the above example this matrix is -$$ \begin{bmatrix}K_{0}\\ -K_{1}\end{bmatrix}=\begin{bmatrix}1 & 0 & -x_{c}^{2}\\ -0 & 1 & 2\, x_{c}\end{bmatrix}\begin{bmatrix}C_{0}\\ -C_{1}\\ -C_{2}\end{bmatrix} $$ -Example #2 -The reduction of a $5$-th order polynomial to a $3$-rd order around $x_c$ is -$$ \begin{bmatrix}K_{0}\\ -K_{1}\\ -K_{2}\\ -K_{3}\end{bmatrix}=\left[\begin{array}{cccc|cc} -1 & & & & -x_{c}^{4} & -4\, x_{c}^{5}\\ - & 1 & & & 4\, x_{c}^{3} & 15\, x_{c}^{4}\\ - & & 1 & & -6\, x_{c}^{2} & -20\, x_{c}^{3}\\ - & & & 1 & 4\, x_{c} & 10\, x_{c}^{2}\end{array}\right]\begin{bmatrix}C_{0}\\ -C_{1}\\ -C_{2}\\ -C_{3}\\ -C_{4}\\ -C_{5}\end{bmatrix} $$ -which is a block matrix, and not an upper diagonal one as some of the answers have indicated. - -REPLY [2 votes]: Here is a Mathematica routine that is (more or less) an efficient way of performing Arturo's proposal (I assume the array of coefficients cofs is arranged with constant term first, i.e. $p(x)=\sum\limits_{k=0}^n$cofs[[k + 1]]$x^k$): -polxpd[cofs_?VectorQ, h_, d_] := Module[{n = Length[cofs] - 1, df}, - df = PadRight[{Last[cofs]}, d + 1]; - Do[ - Do[ - df[[j]] = df[[j - 1]] + h df[[j]], - {j, Min[d, n - k + 1] + 1, 2, -1}]; - df[[1]] = cofs[[k]] + h df[[1]], - {k, n, 1, -1}]; - Do[ - Do[ - df[[k]] -= h df[[k + 1]], - {k, d, j, -1}], - {j, d}]; - df] - -Let's try it out: -polxpd[{c[0], c[1], c[2]}, h, 1] // FullSimplify -{c[0] - h^2*c[2], c[1] + 2*h*c[2]} - -polxpd[c /@ Range[0, 5], h, 3] // FullSimplify -{c[0] - h^4*(c[4] + 4*h*c[5]), c[1] + h^3*(4*c[4] + 15*h*c[5]), - c[2] - 2*h^2*(3*c[4] + 10*h*c[5]), c[3] + 2*h*(2*c[4] + 5*h*c[5])} - -Now, Arturo gave the linear-algebraic interpretation of this conversion; I'll look at this from the algorithmic point of view: -For instance, see this (modified) snippet: -n = Length[cofs] - 1; -df = {Last[cofs]}; -Do[ - df[[1]] = cofs[[k]] + x df[[1]], - {k, n, 1, -1}]; - -This is nothing more than the Horner scheme (alias "synthetic division") for evaluating the polynomial at x. What is not so well known is that the Horner scheme can be hijacked so that it computes derivatives as well as polynomial values. We can "differentiate" the previous code snippet like so (i.e., automatic differentiation): -n = Length[cofs] - 1; -df = {Last[cofs], 0}; -Do[ - df[[2]] = df[[1]] + x df[[2]]; - df[[1]] = cofs[[k]] + x df[[1]], - {k, n, 1, -1}]; - -where the rule is $\frac{\mathrm d}{\mathrm dx}$df[[j]]$=$df[[j+1]]. "Differentiating" the line df = {Last[cofs]} (the leading coefficient of the polynomial) requires appending a 0 (the derivative of a constant is $0$); "differentiating" the evaluation line df[[1]] = cofs[[k]] + x df[[1]] gives df[[2]] = df[[1]] + x df[[2]] (use the product rule, and the fact that $\frac{\mathrm d}{\mathrm dx}$cofs[[k]]$=0$). Continuing inductively (and replacing the x with h), we obtain the first double loop of polxpd[]. -Actually, the contents of df after the first double loop are the "scaled derivatives"; that is, df[[1]]$=p(h)$, df[[2]]$=p^\prime(h)$, df[[3]]$=\frac{p^{\prime\prime}(h)}{2!}$, ... and so on.
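For readers without Mathematica, here is a rough Python transcription of the same routine (my own sketch, 0-indexed, same ascending-coefficient convention; the role of the second loop is explained just below):

    def polxpd(cofs, h, d):
        """Coefficients (ascending, in powers of x) of the degree-d Taylor
        truncation about x = h of the polynomial sum(cofs[k] * x**k)."""
        n = len(cofs) - 1
        df = [cofs[-1]] + [0] * d
        # First double loop: Horner scheme extended to collect scaled
        # derivatives, so that afterwards df[j] = p^(j)(h) / j!.
        for k in range(n - 1, -1, -1):
            for j in range(min(d, n - k), 0, -1):
                df[j] = df[j - 1] + h * df[j]
            df[0] = cofs[k] + h * df[0]
        # Second double loop: shift back, turning coefficients in powers
        # of (x - h) into coefficients in powers of x.
        for j in range(1, d + 1):
            for k in range(d - 1, j - 2, -1):
                df[k] -= h * df[k + 1]
        return df

    # Tangent line of c0 + c1*x + c2*x**2 at h = 2, with (c0,c1,c2) = (1,2,3):
    print(polxpd([1.0, 2.0, 3.0], 2.0, 1))  # [-11.0, 14.0] = [c0-c2*h^2, c1+2*c2*h]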
-What the second double loop accomplishes is the "shifting" of the polynomial by $-h$; this is in fact synthetic division applied repeatedly to the coefficients output by the first double loop, as mentioned here.<|endoftext|> -TITLE: A module is projective iff it has a projective basis -QUESTION [7 upvotes]: I have a question related to this: Projective modules -I'm trying to understand the "philosophy" of the statement, because it seems too similar to the statement "a module is free iff every element can be written uniquely as a finite linear combination of elements of a basis". -Is this "projective basis" property saying this: -a module P is projective iff every element in P can be written as a finite linear combination of some elements of P? -We lose uniqueness in the expression as a sum: in the elements of P, in the elements of R, and in the number of terms (so the concept of "rank" wouldn't make sense). Is this all, or am I misunderstanding the statement? -Any other intuition related to that property is also appreciated. - -REPLY [12 votes]: The statement you're linking to is: A module $P$ is projective if and only if there is a family $\{x_{i}\}_{i \in I} \subset P$ and morphisms $f_{i}: P \to R$ such that for each $x \in P$ we have $x = \sum_{i \in I} f_{i}(x) x_{i}$. The last statement says three things: - -In order for the sum to make sense we must have that for all $x$ the set $\{i\,:\,f_{i}(x) \neq 0 \}$ is finite. Or, as stated there: for all $x$ we have $f_{i}(x) = 0$ for almost all $i$. -The set $(x_{i})_{i \in I}$ generates $P$. In other words, the map $g: \bigoplus_{i \in I} R \to P$ sending $(r_{i})$ to $\sum_{i \in I} r_{i} x_{i}$ is an epimorphism (it suffices to take $r_{i} = f_{i}(x)$ to see that this map is onto). -The epimorphism $g$ splits: there is a right inverse $f: P \to \bigoplus_{i \in I} R$ of $g$, i.e. $gf = \operatorname{id}_{P}$ (this morphism $f$ is of the form $(f_{i})_{i \in I}$ with morphisms $f_{i}: P \to R$ and by the definition of a direct sum we have $f_{i}(x) \neq 0$ only for finitely many $i$). - -In my opinion it's just an extremely explicit way of phrasing the much more catchy "a module is projective if and only if it is a direct summand of a free module".<|endoftext|> -TITLE: Computing the product of p/(p - 2) over the odd primes -QUESTION [28 upvotes]: I'd like to calculate, or find a reasonable estimate for, the Mertens-like product -$$\prod_{2<p\le x} \frac{p}{p-2}$$<|endoftext|> -TITLE: Proof by contradiction: $r - \frac{1}{r} =5\Longrightarrow r$ is irrational? -QUESTION [11 upvotes]: Prove that any positive real number $r$ satisfying: -$r - \frac{1}{r} = 5$ must be irrational. -Assuming, for contradiction, that $r$ is rational, we set $r= a/b$, where $a,b$ are positive integers and substitute: -$\begin{align*} -&\frac{a}{b} - \frac{1}{a/b}\\ - -&\frac{a}{b} - \frac{b}{a}\\ - -&\frac{a^2}{ab} - \frac{b^2}{ab}\\ - -&\frac{a^2-b^2}{ab} -\end{align*}$ -I am unsure what to do next? - -REPLY [5 votes]: Since you have already seen answers which complete your proof, here is another proof using continued fractions.
-If $r$ were rational, it would have a finite continued fraction, say $[a_1, a_2, \dots, a_n]$, but that can be extended to $[5, a_1, a_2, \dots , a_n]$ and $[5,5,a_1, a_2, \dots a_n]$ etc, because -$$ r = 5 + \frac{1}{r} = 5 + \frac{1}{[a_1, a_2, \dots, a_n]} = [5, a_1, a_2, \dots, a_n]$$ -In fact, you can easily give the infinite continued fraction of $r$ -$$[5,5,5,5,\dots]$$<|endoftext|> -TITLE: For $|G|$ even, $\forall x\in G\exists b\in G\setminus{\{x^{-1}\}}$ such that $bxb = x^{-1}$ -QUESTION [6 upvotes]: This is another homework question I can't figure out. - -For $|G|$ even, $\forall x\in G\exists b\in G\setminus{\{x^{-1}\}}$ such that $bxb = x^{-1}$ - -I tried to toy with associativity but to no avail. Also, I can't see the relevance of $|G|$ being even. Any hint (I'm not here for the solution) is appreciated. -Thanks for your attention. - -Update: I thought I would show you the proof I've written since that's the least I can trade for the effort you took to help me. I'm eager for any kind of feedback so, if you're about to comment on it, show no mercy. I'm trying to improve. - -Let $x \in G$. It remains to prove that there exists $b \in G\setminus{\{x^{-1}\}}$ such that: - $bxb = x^{-1}$ - $\Longleftrightarrow (bxb)x = x(bxb) = e$ - $\Longleftrightarrow (bx)bx = xb(xb) = e$ - $\Longleftrightarrow (bx)^{2} = (xb)^{2} = e$ - $\Longleftrightarrow bx = (bx)^{-1}$ and $xb = (xb)^{-1}$ - $\Longleftrightarrow bx = x^{-1}b^{-1}$ and $xb = b^{-1}x^{-1}$ -Let $a \in G\setminus{\{e\}}$ such that $a = a^{-1}$. (We know such $a$ exists by a previous result). Let $b = ax^{-1}$. Then: - $bx = x^{-1}b^{-1}$ and $xb = b^{-1}x^{-1}$ - $\Longleftrightarrow ax^{-1}x = x^{-1}(ax^{-1})^{-1}$ and $xax^{-1} = (ax^{-1})^{-1}x^{-1}$ - $\Longleftrightarrow a = x^{-1}xa^{-1}$ and $xax^{-1} = xa^{-1}x^{-1}$ - $\Longleftrightarrow a = a^{-1}$ and $xax^{-1} = xax^{-1}$ - -What do you think of it? - -REPLY [2 votes]: Here are some hints, read them one at a time if you want to keep looking yourself :-) -1 - Saying |G| is even means the group contains some $a$ that satisfies $a^2 = 1$ but $a\neq 1$. -2 - For given $x$ you should try to find $b$ such that $(bx)^2 = 1$ -3 - So you could attempt find $b$ such that $bx = a$ -4 - This means $b = ax^{-1}$. --edit -Say $|G|$ is even. There exists $a$ such that $a^2 =1$, $a\neq 1$. (See arguments in the comments.) -Now take any $x\in G$ and let $b := ax^{-1}$. Indeed, this $b$ satisfies - -$b\neq x^{-1}$, because $b = x^{-1}$ would yield the contradiction $bx = a = 1$ -$ bxb = ax^{-1}xax^{-1} = aax = x $ -as required.<|endoftext|> -TITLE: Proving the value of a limit using the $\epsilon$-$\delta$ definition -QUESTION [6 upvotes]: I'm trying to solve the problem of showing that -$$\lim_{x\to6}\left(\frac{x}{4}+3\right) = \frac{9}{2}$$ -using the $\epsilon$-$\delta$ definition of a limit. - -REPLY [10 votes]: Since you said you're still lost after Arturo's post, I'll try to start earlier. -Q. What do you mean intuitively by $\lim\limits_{x\rightarrow 6} \frac{x}{4} + 3$? -A. Intuitively, you keep plugging in particular $x$ values really close to $6$ (but never actually plugging in $6$ - things like 5.99999 and 6.0000001 - into $\frac{x}{4}+3$) and you record the outputs. Now, as you keep plugging in things closer and closer to $6$, you expect the outputs to hone in on one number. The limit, then, is that one number. By looking at the graph of $\frac{x}{4}+3$, you'd probably guess that the output is $\frac{6}{4} + 3 = \frac{9}{2}$.
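If it helps, the "plug in $x$ values really close to $6$" experiment is easy to run; here is a tiny illustrative Python sketch (not part of the original argument):

    f = lambda x: x / 4 + 3

    for x in (5.9, 5.99, 5.999, 6.001, 6.01, 6.1):
        print(f"x = {x:<6}  f(x) = {f(x):.6f}  |f(x) - 4.5| = {abs(f(x) - 4.5):.6f}")

The outputs visibly hone in on $4.5 = \frac{9}{2}$, and the last column is always $|x-6|/4$, which foreshadows the $\epsilon$-$\delta$ bookkeeping below.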
-Now, let me resay this answer in a way that will lead into the official math definition for a limit. -If I come along and say the limit is $\frac{9}{2}$, how would you test me? Well, you could think to yourself "if the values are honing in on $\frac{9}{2}$, eventually they must get and stay within $.1$ of $\frac{9}{2}$," and so you challenge me by asking me to show that this is indeed the case. -Then I could respond by saying, "Once $x$ is within .01 of 6, then $\frac{x}{4} + 3$ will be within $.1$ of $\frac{9}{2}$. For if $|x-6|<.01$, then $|\frac{x}{4}+3 - \frac{9}{2}| = |\frac{x}{4} - \frac{3}{2}| = |\frac{x-6}{4}| = \frac{|x-6|}{4} < \frac{.01}{4} < .1$." -If every time you come up with a tolerance (like $.1$), I can pass your test by making up a tolerance of my own ($.01$), then mathematically we'd say the limit is $\frac{9}{2}$. -Now, the official math definition is: -$\lim\limits_{x\rightarrow 6} \frac{x}{4} + 3 = \frac{9}{2}$ means for all $\epsilon > 0$, there is a $\delta > 0$ such that if $|x-6|<\delta$, then $|\frac{x}{4}+3 - \frac{9}{2}|< \epsilon$. -In our previous "conversation", the $.1$ played the role of $\epsilon$ while the $.01$ played the role of $\delta$. -After reading (and possibly rereading, and rerereading) all of the above, I'd encourage you to reread Arturo's response and see if you can turn what he said into a full fledged answer.<|endoftext|> -TITLE: Geometric and analytic multiplicity of a linear operator -QUESTION [6 upvotes]: If I understand correctly, the analytic multiplicity of a linear operator say $T:V\to V$ is the number of times $\lambda$ shows up as a root in the characteristic polynomial (Assuming you have a matrix $A$ representing $T$ with respect to some basis of $V$, then the characteristic equation is $\det(A-\lambda I)$). -I understand how to find this, but what exactly does this mean for the linear operator? -Also, how do we find the geometric multiplicity of $T$? What is its significance? I tried looking it up on Wikipedia but they are using some notation and words that I am not familiar with; the definitions there are usually quite formal. Am I correct in thinking that it is the greatest number of linearly independent eigenvectors of the eigenvalues of $T$? Do all the eigenvalues of a linear operator have the same number of linearly independent eigenvectors? -Edit: Here is a specific example. -We have the matrix, say $A$, with $1$s all along the diagonal and above, and $0$s below. Since this matrix is upper triangular, the eigenvalues of $A$ are all $1$. Looking at the equations we have to satisfy to find the eigenvector(s) of an eigenvalue $1$, we get: -\begin{align*} -x_1 + x_2 + \cdots + x_n &= \lambda x_1\\ -x_2 + \cdots + x_n &= \lambda x_2\\ -&\vdots\\ -x_n &=\lambda x_n -\end{align*} -It seems like we need to do some inspection here, it would be hard (or at least I can't see how) to show this formally. It seems to me like there are two possible linearly independent eigenvectors here, -$$\displaystyle \vec 0$$ and $$\displaystyle (1,0,...,0)$$ -So the geometric multiplicity is $1$? - -REPLY [10 votes]: The geometric multiplicity of $\lambda$ tells you how big a subspace of $V$ you can find where $T$ acts simply as "multiplication by $\lambda$" (that is, how big, dimensionally speaking, the subspace spanned by the eigenvectors of $\lambda$ is).
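(Numerically, this dimension is just the nullity of $A-\lambda I$, so it can be checked by machine; here is a hypothetical numpy sketch, run on the $4\times 4$ example that appears further down:)

    import numpy as np

    # Geometric multiplicity of lam = nullity of (A - lam*I)
    #                              = n - rank(A - lam*I).
    A = np.array([[2., 1., 0., 0.],
                  [0., 2., 0., 0.],
                  [0., 0., 2., 0.],
                  [0., 0., 0., 1.]])
    for lam in (1.0, 2.0):
        nullity = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(4))
        print(f"lambda = {lam}: geometric multiplicity = {nullity}")
    # prints 1 for lambda = 1 and 2 for lambda = 2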
-The analytic/algebraic multiplicity of $\lambda$ tells you how big that space "should" be for $V$ and $T$ to have a "nice" decomposition of the following kind: you can express $V$ as a direct sum of subspaces $E_{\lambda_1}$, $E_{\lambda_2},\ldots,E_{\lambda_k}$ (where $\lambda_1,\ldots,\lambda_k$ are the distinct roots of the characteristic polynomial) so that on each $E_{\lambda_i}$, $T$ acts just by "multiplication by $\lambda_i$". For $V$ to really be equal to the sum of these spaces, you need $\dim(E_{\lambda_i})$, which is the geometric multiplicity of $\lambda_i$, to equal the algebraic multiplicity of $\lambda_i$. -It has other properties, but I think that's a good place to start. -Yes, the geometric multiplicity is the largest possible number of linearly independent eigenvectors of $T$ associated to $\lambda$ (vectors $\mathbf{v}$, $\mathbf{v}\neq\mathbf{0}$, such that $T(\mathbf{v}) = \lambda\mathbf{v}$; that is, vectors on which $T$ acts just by "multiplication by $\lambda$"). -No, not all eigenvalues have the same geometric multiplicity; for example, in the matrix -$$\left(\begin{array}{cccc} -2 & 1 & 0 & 0\\\ -0 & 2 & 0 & 0\\\ -0 & 0 & 2 & 0\\\ -0 & 0 & 0 & 1 -\end{array}\right),$$ -the characteristic polynomial is $(2-\lambda)^3(1-\lambda)$, so the two eigenvalues are $\lambda_1=1$ and $\lambda_2=2$. The eigenvalue $\lambda_1=1$ has algebraic and geometric multiplicities both equal to $1$; $\lambda_2=2$ has algebraic multiplicity $3$, and geometric multiplicity $2$. (You can check the geometric multiplicity by finding the nullity of $A-2I$). -Added. Since the geometric multiplicity of an eigenvalue $\lambda_i$ is the dimension of the subspace $E_{\lambda_i}$, your first task in finding that dimension is to identify the vectors $\mathbf{v}$ for which $T(\mathbf{v})=\lambda_i\mathbf{v}$. This is equivalent to finding the vectors for which $(T-\lambda_i I)(\mathbf{v})=\mathbf{0}$. The reason this is a better problem to tackle is that it is easier to solve a system that looks like $B\mathbf{v}=\mathbf{0}$ than one that looks like $A\mathbf{v}=\lambda\mathbf{v}$. -So, you find the nullspace of $T-\lambda_iI$, that is, the collection of all vectors $\mathbf{v}$ for which $(T-\lambda_iI)(\mathbf{v})=\mathbf{0}$. Its dimension is precisely the geometric multiplicity of $\lambda_i$, so the geometric multiplicity of $\lambda_i$ is found by computing $\mathrm{nullity}(T-\lambda_iI)$. -For your specific example: we begin with the matrix corresponding to the standard basis: -$$A =\left(\begin{array}{cccc} -1 & 1 & \cdots & 1\\\ -0 & 1 & \cdots & 1\\\ -\vdots & \vdots & \ddots & \vdots \\\ -0 & 0 & \cdots & 1 -\end{array}\right).$$ -The characteristic polynomial is $\det(A-tI) = (1-t)^n$, so the only eigenvalue is $\lambda=1$, with algebraic multiplicity $n$. -To find the geometric multiplicity, take $A-1I$ ("$1I$" because we are taking $\lambda = 1$), and find its nullspace. Since -$$A - I = \left(\begin{array}{ccccc} -0 & 1 & 1 &\cdots & 1\\\ - 0 & 0 & 1 & \cdots & 1\\\ -\vdots & \vdots & \vdots & \ddots & \vdots \\\ -0 & 0 & 0 & \cdots & 0 -\end{array}\right),$$ -finding the reduced row-echelon form will give you the solutions to $(A-I)\mathbf{x}=\mathbf{0}$.
The reduced row-echelon form of $A-I$ is
-$$\left(\begin{array}{ccccc}
-0 & 1 & 0 & \cdots & 0\\\
-0 & 0 & 1 & \cdots & 0\\\
-\vdots & \vdots & \vdots & \ddots & \vdots\\\
-0 & 0 & 0 & \cdots & 1\\\
-0 & 0 & 0 & \cdots & 0
-\end{array}\right),$$
-so $\mathbf{x}=(x_1,x_2,\ldots,x_n)$ is in the nullspace if and only if $x_2=x_3=\cdots=x_n=0$. So the eigenvectors of $\lambda=1$ are all vectors of the form $(a,0,0,\ldots,0)$ for arbitrary $a$; however, because $\mathbf{0}$ is always a solution to $A\mathbf{x}=\lambda \mathbf{x}$ for any $\lambda$, we declare by fiat that an eigenvector has to be nonzero (this has no bearing on the geometric multiplicity of $\lambda$, because $\mathbf{0}$ can never be in a linearly independent set). A basis for this nullspace is given by $(1,0,\ldots,0)$, so the nullspace has dimension $1$. This dimension is the geometric multiplicity of $\lambda=1$.
-So, in summary: $\lambda=1$ is the only eigenvalue; it has algebraic multiplicity $n$, and geometric multiplicity $1$. The eigenvectors are all nonzero multiples of $(1,0,0,\ldots,0)$.<|endoftext|>
-TITLE: Nested sequences of balls in a Banach space
-QUESTION [17 upvotes]: This seems to be a fairly easy question but I'm looking for new points of view on it and was wondering if anyone might be able to help.
-(By the way, this question does come from homework, but I've already solved it and handed it in, and I'm posting this out of interest, so no HW tag.)
-
-Let $B_n=B(x_n,r_n)$ be a sequence of nested closed balls in a Banach space $X$. Prove that $\bigcap_{n=1}^\infty B_n\neq\varnothing$.
-
-As I said before, it should be rather simple. When the radii decrease to 0, it's just a matter of selecting any sequence of points in $B_n$: it must be Cauchy, and the limit is in the intersection.
-My question is what to do when the radii do not decrease to 0? I got some tips about multiplying the balls by a sequence of decreasing scalars, or reducing the radii so that they decrease to 0, but found too many pathological cases for both methods.
-Finally, I used a geometric argument (which I've shown to work in any normed space): that if $B(x_1,r_1)\subset B(x_2,r_2)$ then $\| x_1-x_2\|\leq|r_1-r_2|$.
-This turned out to be some kind of technical catastrophe, but it worked...
-Still, if anyone knows of a more elegant solution, I'd love to hear about it.
-
-REPLY [19 votes]: I don't know if this is more elegant, but that's about the best I can come up with at the moment and probably essentially the same as your argument.
-
-Consider first the situation $B_{\leq r}(x) \subset B_{\leq s}(y)$. It is easy to see that $r \leq s$.
-Claim. $\|y - x\| \leq s - r$.
-Proof. If $x = y$ there is nothing to prove, so let's assume $x \neq y$. The point $z = x - r \frac{y-x}{\|y - x\|}$ belongs to $B_{\leq r}(x)$ and hence also to $B_{\leq s}(y)$. Therefore $\|y - z\| \leq s$. On the other hand,
-\[
-y - z = y - x + \frac{r}{\|y - x\|} (y - x) = \underbrace{\left(1 + \frac{r}{\|y - x\|}\right)}_{\lambda} (y - x),
-\]
-so $s \geq \lambda \|y - x\| = \|y - x\| + r$ and hence $\|y - x\| \leq s - r$.
-
-This means that a nested sequence of closed balls $B_{\leq r_{n}}(x_{n})$ has the following properties:
-
-The sequence $r_{n}$ is monotonically decreasing, hence converges to some $r$.
-If $N$ is such that $r_{N} \leq r + \varepsilon$, then for all $n\geq m \geq N$ we have $r_m - r_n \leq \varepsilon$, so the above claim implies that $\|x_{m} - x_{n}\| \leq \varepsilon$ because $B_{\leq r_{n}}(x_{n}) \subset B_{\leq r_{m}}(x_{m})$.
-
-In other words, the centers $x_{n}$ form a Cauchy sequence and their limit point $x$ must belong to $\bigcap_{n = 1}^{\infty} B_{\leq r_{n}}(x_{n})$.
-
-Added: As Jonas pointed out, the argument can be made even simpler and doesn't need completeness: Suppose $r_{n} \to r \gt 0$. Then there is $N$ such that $r_{N} \leq 2r$. Then for all $n \geq N$ we have $r \leq r_{n} \leq r_{N} \leq 2r$, so $r_{N} - r_{n} \leq r$ and the claim implies that $\|x_{n} - x_{N}\| \leq r \leq r_{n}$, so $x_{N} \in \bigcap_{n = 1}^{\infty} B_{\leq r_{n}} (x_{n})$.<|endoftext|>
-TITLE: Derivation of Basic Level Set Equations
-QUESTION [5 upvotes]: For the level set method, $\phi(\vec{x},t)$ is the level set function in 3D and the level set $\phi(\vec{x},t) = 0$ forms the interface. For evolving $\phi$ the derivation says to imagine a particle $\textbf{x}(t)$ on the surface; then we differentiate with respect to $t$:
-$\frac{d}{dt}(\phi(\vec{x}, t)=0)$
-Using the chain rule we get
-$\frac{\partial \phi}{\partial t} + \triangledown \phi \cdot \frac{d\vec{x}}{dt} = 0$
-Then this equation is turned into
-$\frac{\partial \phi}{\partial t} + \triangledown \phi \cdot \vec{V} = 0$
-I guess $\vec{V}$ is the speed, and its elements are to replace $dx$, $dy$, ... that are elements of $d{\vec{x}}$. Hence we can substitute our own displacement and receive a result.
-What I do not understand is that they continue like this:
-Separate $\vec{V}$ into normal and tangential components
-$\frac{\partial \phi}{\partial t} + \triangledown \phi \cdot (\vec{V_N}\vec{N} + \vec{V_T}\vec{T}) = 0$
-Then since $\vec{N} = \frac{\triangledown \phi}{|\triangledown \phi |}$
-We get
-$\frac{\partial \phi}{\partial t} + V_N \cdot |\triangledown \phi| = 0$
-What happened to $V_T$? Also, how can the dot product of $\triangledown \phi$ with $\frac{\triangledown \phi}{|\triangledown \phi |}$ result in $|\triangledown \phi|$?
-Link: http://www.cs.au.dk/~bang/smokeandwater2006/Lecture9_IntroToWaterAndLS.ppt
-
-REPLY [4 votes]: Since $\nabla \phi$ is normal to the surface, its inner product with a vector tangential to the surface is zero. Furthermore $\nabla \phi \cdot \nabla \phi=|\nabla\phi|^2$, hence the second identity: $$\nabla\phi\cdot\frac{\nabla\phi}{|\nabla\phi|}=\frac{|\nabla\phi|^2}{|\nabla\phi|}=|\nabla\phi|.$$<|endoftext|>
-TITLE: Why do we use n-dimensional spaces?
-QUESTION [6 upvotes]: On mathoverflow, Terry Tao says the following:
-
-For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.
-One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).
-
-
-This paragraph is saying something that helps a little but I can't quite grasp it.
-The picture as a million-dimensional space... why would you consider it $10^6$-space and not simply 5-space (with x, y, R, G, B as the basis vectors)? Shouldn't you go for the simpler representation?
-I understand that you can define orthogonality this way, by the inner product of two $10^6$ pictures equalling 0. But what does it mean for 2 pictures to be orthogonal, and why would you want to define that?
-Why do we want to use multidimensional space to handle something like the Fourier Transform, for example on a soundwave? You can decompose it into 22050-dimensional space using a 22050 point FFT, and MATLAB seems to have a rocking good time with that, and filters work. But mathematically I'm unsure of the reason.
-
-REPLY [18 votes]: I think the question is related to why the space of images is 1-million dimensional, and not 3 dimensional, and why studying these spaces might be useful.
-Well, think about the 3 dimensional space of RGB colors: what is a point in that space? It's simply a vector with 3 coordinates: $< r, g, b >$. So that's just the information for one pixel, not for a whole image. Any point in that space carries only the information about how much red, green and blue one pixel has.
-Now think about what you would do if you needed to store the information of color for TWO pixels: you would need one axis to represent the Red color of pixel 1 and one axis to represent the Red color of pixel 2; one for the Green color of pixel 1, one for the Green color of pixel 2; and one for the Blue color of pixel 1, and one for the Blue color of pixel 2. So you would have 6 axes: 3 for each pixel.
-If you want to represent the color of 1 million pixels, you need 3 * 1 million axes. These axes are orthogonal in the sense that if you want, you can change the color of just one pixel without having to adjust the color of any other pixels. So you would have a space with 3 * 1 million dimensions. Each point in that space now corresponds to an image: specifically, the coordinates along each of the 3 * 1 million axes give you the value of R, G, or B for a given pixel.
-So, in effect, if you want to have a space of IMAGES, and not a space of PIXELS, you need a 3 * 1 million-dimensional space, not a 3D space.
-Now think about what would happen if you could take, for instance, 50 pictures of human faces and plot them in this 3 million-dimensional space, and see where they lie. Of course you can't visualize this, but you might expect that these pictures have SOMETHING in common (after all, they are all pictures of human faces, not arbitrary pictures of anything, or crazy combinations of pixel colors). If you could see how these images are spread over that space (where the points corresponding to those pictures are, that is), you would see that they are usually not located just ANYWHERE. For instance, human faces have typical colors -- they tend not to be green, for example. That means that the regions of the space where you would expect to see green pixels are probably sort of empty. That's what they mean when they talk about identifying the subspace of human faces. It's just the "surface", or sub-regions, of that 3*1 million-dimensional space where you'd expect to find points corresponding to human faces. Typically that sub-region might be described with less information than 3*1 million coordinates, if you just find a better representation for your image, instead of one that stores the value of every r,g,b component for all pixels. That is why image compression is possible: if you just find the right way of representing your information (like the value of your 3*1 million RGB components), you might do that with LESS than 3*1 million numbers; specifically, because these numbers have a pattern.
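-To make the axis-counting concrete, here is a tiny Python sketch (a made-up 2-pixel "image", purely illustrative) of an image as a single point whose coordinates are its RGB components:
-
-    import numpy as np
-
-    # a made-up 2-pixel RGB image: each row is one pixel's (r, g, b) in [0, 1]
-    image = np.array([[0.9, 0.1, 0.2],   # pixel 1
-                      [0.8, 0.7, 0.1]])  # pixel 2
-
-    point = image.flatten()  # one point in R^(3 * number of pixels)
-    print(point.shape)       # (6,): 3 axes per pixel, as described above
-
-    # the axes are independent: editing one pixel's red component moves the
-    # point along exactly one coordinate axis and leaves the other five fixed
-    edited = point.copy()
-    edited[0] = 0.5
-    print(np.nonzero(edited - point)[0])  # [0]
-
-For a million pixels the same flattening gives a point with 3 * 1 million coordinates, which is exactly the space described above.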
-It is possible to try to identify the FORM of the sub-region of the 3-million-dimensional space where human faces tend to appear. Then, given another image that may or may not be a human face, you could try to GUESS if it is a human face. How? Well, check if the point corresponding to that new image, when plotted in that 3*1 million-dimensional space, is close to the sub-region where the points of human faces usually are. Sometimes they call the process of identifying these subregions by the name of "manifold learning".
-Ok, lots of information. Just think about it for a while. It's hard (actually impossible) to visualize spaces with more than 3 dimensions, but once you get the idea of what's going on, often you'll see that your intuitions about what happens in 2D or 3D carry over.
-Try this exercise: imagine a black-and-white image with just 3 pixels; each pixel is then just a value between 0 and 1 (0 being completely black and 1 being completely white, and values in between being shades of gray). Now imagine the set of images where the first pixel is darker than the second one, and the second one is darker than the third one. That is, images like this:
-$< 0.3, 0.8, 1.0 >$
-or
-$< 0.12, 0.53, 0.7 >$
-Now generate a bunch of those (say, 10,000 images like that) and plot them in the 3D space where they lie. Notice that there is a pattern in these images: the value of the 2nd pixel is always larger than the value of the 1st pixel; the value of the 3rd pixel is always larger than the value of the 2nd. Clearly we shouldn't expect points corresponding to images like this to occupy just ANY place in the 3D space. For example, we would most certainly not see points in the space close to < 0.5, 0.3, 0.1 >.
-We can actually see how 10,000 such images look in this plot, where I show 10,000 images like the ones I described. Since each axis corresponds to the value of one of the 3 pixels in a given image, we have 3 axes. Each point in the plot is thus a 3-pixel image.
-
-Notice how the points are occupying just a small part of the whole 3D space. That happens because there is a relation between the values of the pixels. Images of that type all have something in common, so they occupy similar portions of the whole 3D space.
-In the same way, if you could plot 1-million pixel images (which would lie in a 3*1-million dimensional space, as mentioned before), and all of those images corresponded to images of human faces, you would see some pattern like the one I showed above. Specifically, the points corresponding to images of human faces would most likely NOT occupy the whole 3-million dimensional space. We could actually try to estimate the "shape" of the sub-region where human faces are by using techniques called "manifold learning".
-Now, notice that you could use the same ideas as above to analyze any other kind of data. Results of a statistical survey? Imagine that you have 50 questions, each one being a value from 0 to 100. You ask those questions to one person and get 50 numbers back. You ask them to another person and get another 50 numbers. How to "visualize" them? Plot them in a 50-dimensional space where each axis corresponds to the value of one given answer. A point in that space then corresponds to 50 numbers (specifically, the answers given in one specific survey). If you plot, say, 1000 of these surveys in this space, you would get 1000 50-dimensional points. Maybe there is some pattern to be found; maybe there isn't.
If there is, it might be the case that these 1000 50-dimensional points lie in a subspace (or sub-region) of the 50-D space. That is what Terence Tao was saying when he said that it is useful to study these subspaces, or sub-regions, and when he said that the "subsets of this space correspond to various classes of images."
-Hope that helps!
-Bruno<|endoftext|>
-TITLE: Learning mathematics as if an absolute beginner?
-QUESTION [90 upvotes]: I dread mathematics, and I believe it's because I have come to associate mathematics with the experience of terrible teachers. All of my math teachers have been grumpy, but one in particular was the epitome of evil. She would take any opportunity to yell and scream at me when I struggled to comprehend the problems given. She approached the kids in my class as if their struggling wasn't a result of a misunderstanding, but rather from a lack of discipline, one that she could solve by being some sort of mathematics drill sergeant. This was when I was a small child, which obviously left an impression on my mind that probably wouldn't have existed had I been older.
-Now that I am older, however, I need mathematics. I also have a growing curiosity and interest for it. Right now, I am planning to move out of my parent's house and live elsewhere. However, I have a fear in the back of my mind that my understanding of mathematics is not sufficient to do all of this; to handle a job; to handle expenses; to handle day-to-day life. Even the idea of becoming a cashier and having to handle money frightens me into avoiding those jobs, which leaves me (having no formal education) with virtually no options for employment. It's pretty intimidating. I seem to have some problem grasping even the simplest mathematical questions. Come up to me and ask me "What's 8 + 6?" and my mind wanders for the answer as if blindfolded. I would probably resort to sticking my hands behind my back and counting it off on my fingers or counting one at a time in my head. This just doesn't seem normal.
-I want to conquer my fear of mathematics and educate myself. I want to approach the field as an absolute beginner, and by that, I mean go back to the very basics and work my way up, no matter how reassured I am of my abilities at times. I know it's impossible to conquer the entire field of mathematics, but I want to conquer what is necessary and then some. I need the bare minimum, though I want a sufficient understanding. Are there any approaches or, what I am specifically requesting, books or courses, that would allow me to teach myself in such a manner?
-Sorry for the history lesson and/or if this is not a legitimate question.
-
-REPLY [2 votes]: You should find a good book or a good teacher if you want to appreciate the beauty of mathematics. If you have personally experienced finding a subject beautiful and interesting, then there will be no problem learning it, even if you are a beginner. Note that what counts as a good book or a good teacher depends on your taste and is therefore subjective.<|endoftext|>
-TITLE: A question regarding normal field extensions and Galois groups
-QUESTION [11 upvotes]: The following is possibly true but I can't find a corresponding theorem:
-If $E/F$ is the splitting field of some polynomial in $F$ and $F \subset K \subset E$ then:
-$Gal(E/K)$ normal subgroup of $Gal(E/F)$ $\Leftrightarrow$ $K/F$ is a normal field extension
-Is this true?
I think saying that $E/F$ is the splitting field of some polynomial in $F$ is the same as saying $E/F$ is Galois, and therefore the Galois correspondence theorem applies. But I'm not sure I see how the meaning of normality of a subgroup relates to the meaning of normality of a field extension. But maybe the above is wrong altogether?
-In any case, many thanks for your help in clarifying this.
-
-REPLY [5 votes]: It is not true in general that saying that $E$ is a splitting field over $F$ implies that the extension is Galois: the missing ingredient is separability. The implication holds in characteristic zero and, more generally, for perfect fields, but not always. For an example, take $F=\mathbb{F}_p(x)$, the field of rational functions over the field of $p$ elements, and let $E$ be the splitting field over $F$ of $t^p - x$. If $\alpha$ is a root of $t^p-x$ in $E$, then $t^p - x = t^p - \alpha^p = (t-\alpha)^p$ in $E$, since $E$ has characteristic $p$. So $E=F[\alpha]$ is a splitting field of $f(t)=t^p-x$ over $F$. But we also conclude that $\mathrm{Aut}(E/F)$ consists only of the identity (since $\sigma\in\mathrm{Aut}(E/F)$ is completely determined by its value at $\alpha$, but $\alpha$ must map to itself, since it must map to a root of $t^p - x$, and $\alpha$ is the only possibility). However, since $f(t)$ is irreducible over $F$ of degree $p$, we have $[E:F]=p$. Hence, $|\mathrm{Aut}(E/F)|\lt[E:F]$, and the extension cannot be a Galois extension. The reason it fails to be a Galois extension is that the extension is not separable, since it is given by an irreducible polynomial with multiple roots.<|endoftext|>
-TITLE: Useful series for limit comparison test
-QUESTION [6 upvotes]: I already know the harmonic series ($1/n$, which diverges) and, following the link, $1/n^2$, which converges.
-What other useful series could you teach me, or perhaps some general advice? Any website/resource is very welcome also, since most of my searching reveals only the harmonic one and not much more. (I know how to execute the test!)
-Have a nice day!
-
-REPLY [4 votes]: There is the generalization of what you put above, the so-called $p$-series. Let $p\in \mathbb{R}$; then $$\sum_{n=1}^\infty \frac{1}{n^p}$$ converges if and only if $p>1$. This can be proven using the integral test mentioned by Shai Covo.
-Also of interest are alternating series, such as $$\sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n}}$$ since there are good tests to see if they converge. (The above sum converges.) Specifically, the alternating series test tells us that if we have a sequence $a_n$ with
-(1) $a_n\cdot a_{n+1} <0$ for every $n$ (it alternates signs)
-(2) $|a_{n+1}|\leq |a_n|$
-(3) $\lim_{n\rightarrow \infty} a_n =0$
-then $\sum_{n=1}^\infty a_n$ converges. This then brings up the topic of conditional and absolute convergence. For a generalization of the alternating series test, see Dirichlet's test. (This test allows us to give the conditions of convergence for series such as $\sum_{n=1}^\infty a_n \sin (n)$.)
-Hope that helps,<|endoftext|>
-TITLE: Extension of Riemannian Metric to Higher Forms
-QUESTION [12 upvotes]: I've been reading about Riemannian manifolds, and have come across a comment that says that for a metric $g$ on an $N$-dimensional manifold $M$, considered as a bilinear map
-$$
-g:\Omega^1(M) \times \Omega^1(M) \to C^{\infty}(M),
-$$
-there exists a canonically induced bilinear map
-$$
-g_k:\Omega^k(M) \times \Omega^k(M) \to C^{\infty}(M),
-$$
-for all $2 \leq k \leq N$. How is this canonically induced $g_k$ defined?
-
-REPLY [17 votes]: If we have an inner product g on a vector space V, we can define an inner product on $\bigotimes_{i=1}^k V$ via
-$$g(v_1 \otimes \cdots \otimes v_k, w_1 \otimes \cdots \otimes w_k) := \frac{1}{k!}g(v_1, w_1)\cdots g(v_k, w_k).$$
-The factor 1/k! has the following explanation: If the wedge product is defined via $$\omega \wedge \eta := \frac{(r+s)!}{r!s!}\operatorname{Alt}(\omega \otimes \eta),$$
-where $\omega$ is an r-form and $\eta$ an s-form, then we get
-$$g(v_1 \wedge \ldots \wedge v_k, w_1 \wedge \ldots \wedge w_k) = \operatorname{det}(g(v_i, w_j)).$$
-This gives the nice statement that if $\{v_1, \ldots, v_n\}$ is an orthonormal basis of V, then $\{v_{i_1} \wedge \ldots \wedge v_{i_k}: 1 \le i_1 < \ldots < i_k \le n\}$ is an orthonormal basis of $\Lambda^k V$.<|endoftext|>
-TITLE: $A_{4}$ unique subgroup of $S_4$ of order $12$
-QUESTION [7 upvotes]: I was reading the proof that $A_{4}$ is the unique subgroup of $S_4$ of order $12$.
-So the author counts the number of elements of each cycle type:
-$1$ cycle of type $()$, $6$ cycles of type $(1 \ 2)$, $8$ cycles of type $(1 \ 2 \ 3)$, $6$ cycles of type $(1 \ 2 \ 3 \ 4)$ and $3$ cycles of type $(1 \ 2)(3 \ 4)$.
-Now it says, the only possible way to get $12$ elements is $1 + 3 + 8$. Why is this? Can't we have $1$ cycle of type $()$, $2$ cycles of type $(1 \ 2)$, $3$ cycles of type $(1 \ 2 \ 3)$, $2$ cycles of type $(1 \ 2)(3 \ 4)$ and $4$ cycles of type $(1 \ 2 \ 3 \ 4)$? This also gives you $12$. Why is this impossible though? I don't really understand why the only possibility is $1 + 3 + 8$.
-
-REPLY [2 votes]: Another way to prove this result.
-A subgroup of index $2$ is always normal, so $A_4$ is normal in $S_4$.
-Assume, for contradiction, that $H\ne A_4$ is another subgroup of $S_4$ of order $12$.
-Using the second isomorphism theorem, we get:
-$$\vert H.A_4\vert =\frac{\vert H\vert \cdot\vert A_4\vert }{\vert H\cap A_4\vert }=\frac {144}{\vert H\cap A_4\vert }.$$
-Using Lagrange's theorem, we get that:
-$$\frac{144}{\vert H\cap A_4\vert}\text{ divides }\vert S_4\vert=24.\qquad (\star)$$
-But since $H\ne A_4$, we have $\vert H\cap A_4\vert<12$.
-So with $(\star)$ we get $\vert H\cap A_4\vert=6$.
-But $A_4$ cannot have a subgroup of order $6$.
-If it had one, say $K$, it would be normal (being of index $2$), and by Cauchy's theorem there would exist $s\in K$ of order $3$ (since $3\mid 6$): $s$ would be a $3$-cycle.
-But then $K$, being normal, would contain the $A_4$-conjugacy classes of both $s$ and $s^2$, hence all eight $3$-cycles, so $\vert K\vert\geq 9$, which is absurd.<|endoftext|>
-TITLE: Elementary question in differential geometry
-QUESTION [7 upvotes]: I am trying to learn differential geometry (i.e., teach myself!)
-So here is a question that came up.
-For some $h > 0$, consider the cone
-$C_h = \{ (x,y,z) \; : \; 0 \le z = \sqrt{x^2 + y^2} < h \} \subset \mathbb{R}^3$
-endowed with the subspace topology. It seems that we can cover this with a single chart $(U,\phi)$ where $U = C_h$ and $\phi$ is the projection $\phi(x,y,z) = (x,y)$. So it seems that this defines a differentiable structure and we get a smooth ($C^\infty$) 2-dimensional manifold. (Is this correct?)
-Now consider the inclusion map $i : C_h \to \mathbb{R}^3$; is this map smooth? It doesn't seem to me that it is. The expression of $i$ in the chart above is not smooth at $(0,0)$ and I don't seem to be able to find any other compatible chart around zero which has a smooth representation. (Haven't given it much thought, though.) If this is true, how does one show that this map is not smooth?
(Also, if this is true, a vague question is whether removing the origin is the only way to fix this problem.)
-
-REPLY [3 votes]: You've endowed $C_h$ with the structure of an abstract manifold but $C_h$ is not a submanifold of $\mathbb R^3$. The fact that your set isn't a submanifold boils down to two observations:
-(1) The fact that your map $i$ is not differentiable at the origin
-and
-(2) An application of the implicit function theorem gives the proof by contradiction. The implicit function theorem says that if your set were a submanifold, $i$ would have to be smooth -- technically you have to consider the two other coordinate projections $(x,y,z) \to (y,z)$, $(x,y,z) \to (x,z)$ but $C_h$ does not satisfy the "vertical line rule" so it can't be a graph of a function of $(y,z)$ or $(x,z)$.<|endoftext|>
-TITLE: Do there exist sets $A\subseteq X$ and $B\subseteq Y$ such that $f(A)=B$ and $g(Y-B)=X-A$?
-QUESTION [7 upvotes]: This is a little exercise I've been fiddling with for a while now.
-
-Let $f\colon X\to Y$ and $g\colon Y\to X$ be functions. I want to show that there are subsets $A\subseteq X$ and $B\subseteq Y$ such that $f(A)=B$ and $g(Y-B)=X-A$.
-
-Of course, if $f$ is surjective, then taking $A=X$, one has $f(A)=B=Y$ and $g(Y-Y)=g(\emptyset)=\emptyset=X-X=X-A$, and you're done. So I suppose $f$ is not surjective.
-I tried to approach it by contradiction. It seems that for any $A\subseteq X$, there is obviously a $B\subseteq Y$ such that $f(A)=B$, so if the result is not the case, for each pair of subsets $A$ and $f(A)$, we must have $g(Y-f(A))\neq X-A$. Then for every $f(A)\subseteq Y$, there exists a $y\in Y-f(A)$ such that $g(y)\notin X-A$. This would imply $g(y)\notin X \lor g(y)\in A$, but since $g(y)\in X$, we must have $g(y)\in A$.
-The only thing I can glean from this is that $g$ is surjective, since for any singleton $\{x\}\subseteq X$, we could then find a $y\in Y-\{f(x)\}$ such that $g(y)=x$. But I don't quite see how to get a contradiction. What direction should I head? I attempted to apply the fact that a monotone function on power sets has a fixed point, but that only seems to apply when the function maps from a power set into itself. Thanks!
-
-REPLY [4 votes]: There's a proof here (where "isotonic" means "order-preserving"). Theorem (F1) there is a corollary of the Knaster–Tarski theorem.<|endoftext|>
-TITLE: A Conway problem on identifying groups
-QUESTION [7 upvotes]: I first saw this on the Missouri State problem page and it has never been solved there.
-Consider the group generated by a,b,c, and d subject to the relations
-ab = c, bc = d, cd = a, and da = b
-Using the first relation, the second relation becomes bab = d. Using this expression and the first relation, we obtain
-abbab = a and baba = b
-Taking the second relation above and multiplying both sides on the left by a^-1b^-1 and on the right by a^-1, we have b = a^-2. Now the first relation above becomes aa^-2a^-2aa^-2 = a or a^-4 = a, hence i = a^5. Therefore our group consists of the five elements i, a, a^2, a^3, a^4. The other elements can be expressed in terms of a as follows: b = a^3, c = a^4, and d = a^2.
-Finally, we get to this month's problems. How many elements are there in the groups given by the following generators and relations?
-* Generators:a,b,c
-  Relations: ab = c, bc = a, ca = b
-
-* Generators:a,b,c,d,e
-  Relations: ab = c, bc = d, cd = e, de = a, ea = b
-
-* Generators:a,b,c,d,e,f
-  Relations: ab = c, bc = d, cd = e, de = f, ef = a, fa = b
-
-Source: John H.
Conway
-I recognize the first one as the quaternions but have made no progress on the other two.
-
-REPLY [5 votes]: For your second group, we can write it as:
-$$ \langle a,b\ |\ babab^2ab=a, ab^2aba=b\rangle. $$
-The first relation gives $baba=ab^{-1}a^{-1}b^{-2}$, and plugging that into the second relation gives $ab(ab^{-1}a^{-1}b^{-2})=b$, or $aba=b^3ab$. Plugging that again into the second relation gives $ab^2(b^3ab)=b$, or $ab^5a=1$, so $b^5=a^{-2}$. Thus $a^2$ is central; combining the first and second relation gives $bab^2=a^2$, and plugging that into the first relation gives $ba^4b=a$. Since $a^2$ is central, this is the same as saying $b^2=a^{-3}$. This is enough to abelianize your group, so it was abelian to begin with; a simple check then shows it is $C_{11}$.
-It is easy to see the last group is infinite: the presentation can be given as
-$$ \langle a,b\ |\ b^2ab^2abab^2ab, abab^2aba\rangle;$$
-Quotienting out by the normal subgroup $\langle a^2,b^2\rangle$ (you can check it is normal) gives the group $C_2\ast C_2$, the infinite dihedral group.<|endoftext|>
-TITLE: Fermat's Last Theorem - Special Case of Sophie Germain Primes
-QUESTION [14 upvotes]: Sophie Germain proved Fermat's Last Theorem $x^p+y^p \neq z^p$ for the special case where p is a Sophie Germain prime and $p\not|xyz$. Does anyone know of a proof for the other case, where $p|xyz$? Note: I am looking for a proof restricted to the Sophie Germain primes, as of course, Wiles proved this generally.
-
-REPLY [12 votes]: These two cases are traditionally called ${\bf Case \,\, 1}$ and ${\bf Case \,\, 2}$, and you are after a proof of Case 2.
-The essence of Dirichlet's proof of Case 2 when $p=5$ can be found in
-Fermat's Last Theorem: A Genetic Introduction to Algebraic Number Theory, by Harold M. Edwards. It's in chapter three, at about page 70.<|endoftext|>
-TITLE: Spivak and Invariance of Domain
-QUESTION [12 upvotes]: On p.3 of the first volume of Spivak's Comprehensive Introduction to Differential Geometry, he says that it is an "easy exercise" to show that the invariance of domain theorem (if $f:U\subset\mathbb{R}^n\rightarrow\mathbb{R}^n$ is one-to-one and continuous and $U$ is open then $f(U)$ is open) implies that in his definition of a manifold
-
-a metric space $M$ such that every
- point $x\in M$ has a neighborhood $U$
- of $x$ and some integer $n\geq0$ such
- that $U$ is homeomorphic to
- $\mathbb{R}^n$,
-
-the neighborhood $U$ in fact must be open.
-My question: My proof seems to require a bit of setup, as well as two appeals to the invariance of domain theorem, including one to first prove that the dimension of a manifold is well defined (which Spivak discusses on p.4). In any case, I feel like the argument is longer than what Spivak calls a "complicated little argument" on the previous page. Am I missing something obvious?
-Perhaps my real question is whether this kind of comment is to be expected from Spivak, since I am reading this book on my own.
-
-REPLY [10 votes]: I also find the "it is an easy exercise" remarks very frustrating. I think (but am not sure) that I came up with a proof, probably similar to yours. This assumes the fact stated on page 2 that "we can always choose the neighborhood $U$ in our definition to be an open neighborhood." I use this fact in choosing $V$ below.
-Let $\varphi : U \to \mathbb{R}^n$ be the homeomorphism given in the definition of the Euclidean neighborhood of $x$, and let $z \in U$.
-Since $z \in M$, there exists an open set $V$ of $M$ with $z \in V$, and a homeomorphism $\theta:V \to \mathbb{R}^n$.
-Consider the set $W \equiv V \cap U$. $W$ is open in the relative topology of $U$, so $\varphi (W)$ is open in $\mathbb{R}^n$. For notational convenience, let $\psi \equiv \varphi \mid_{W}$. The map $\;\theta \circ \psi^{-1} :\varphi (W) \to \mathbb{R}^n$ is continuous and injective, so by Invariance of Domain $\; \theta (W)= \theta \circ \psi^{-1}[\varphi (W)] \;$ is open in $\mathbb{R}^n$, which tells us that $W$ is open in $V$.
-Since $V$ is open in $M$, $W$ is open in $M$ and we have $z \in W \subset U$, so $U$ is open.
-I wouldn't bet my life that what I did is correct, but I think it is. This is my first time attempting to answer a question, so if I've messed up, please be gentle.
-Thanks,
-Dave<|endoftext|>
-TITLE: Two questions with respect to the determinants
-QUESTION [5 upvotes]: I have got a proof of $det(AB)$=$(detA)(detB)$ in my book.
-It goes as follows (for invertible A): we know that rref[A|AB]=[$I_{n}$|B]
-We also know that det(A)=$(-1)^{s}k_{1}k_{2}...k_{r}$ where s is the number of row swaps needed to get to rref A, and $k_{i}$ are the coefficients by which we divide the rows of A to get to rref A. Hence, the book concludes, det(AB)=$(-1)^{s}k_{1}k_{2}...k_{r}$(detB)=(detA)(detB), but I don't see how we go from det(AB) to $(-1)^{s}k_{1}k_{2}...k_{r}$(detB). Could you please explain the logic behind this step? (I see how (detA)(detB)=$(-1)^{s}k_{1}k_{2}...k_{r}$(detB), obviously.)
-The book then also uses the fact that if A is not invertible, neither will be AB (because the image of AB is contained in the image of A), so (detA)(detB)=0(detB)=0=det(AB). My question is, how would we prove that this equation holds if B is non-invertible, and hence detB=0? I could think of saying that since B is not invertible it can't be represented as a product of elementary matrices, while A can, so AB can't be represented as such either, but that sounds hand-wavy to me.
-Thanks a lot!
-
-REPLY [2 votes]: First, in case anyone else, like me, doesn't know the abbreviation "rref": it means "reduced row echelon form".
-The structure of the proof in the book isn't entirely clear from your exposition. It is clear that the first part assumes that $A$ is invertible. (You didn't mention that assumption.) It's not quite clear to me whether the first part also assumes that $B$ is invertible. (If it does not, your question in the second part wouldn't affect the validity of the proof.)
-I) To the first part of the question ("Could you please explain the logic behind this step?"):
-a) If we assume that $B$ is invertible, we can proceed as follows: Since $\mathrm{rref}[A|AB]=[I_n|B]$, we know that applying to $AB$ the row operations that transform $A$ into reduced row echelon form yields $B$. But that means that whatever row operations we perform to get $B$ into reduced row echelon form, we can perform the same operations on $AB$ after first performing $s$ row swaps and dividing rows of $AB$ by the $k_i$. Thus, applying $\det(A)=(-1)^{s}k_{1}k_{2}...k_{r}$ to $AB$, we see that this product for $AB$ must be the corresponding product for $A$ multiplied by the corresponding product for $B$.
-b) If we don't assume that $B$ is invertible in the first part, I think we need a bit more than what you've written.
For instance, it would be enough to know how row operations affect the determinant, namely that a row swap multiplies the determinant by $-1$, dividing a row by $k_i$ divides the determinant by $k_i$, and adding a multiple of a row to another row leaves the determinant unchanged. Applying this to $\mathrm{rref}[A|AB]=[I_n|B]$ yields the desired equation, since $AB$ is transformed into $B$ by $s$ row swaps and divisions by the $k_i$.
-II) To the second part of the question ("how would we prove that this equation holds if B is non-invertible"):
-As mentioned, this is only required if we assume in the first part that $B$ is invertible.
-Your "hand-waving" proof seems fine to me -- more explicitly: We've dealt with the case where $A$ is not invertible. So assume that $A$ is invertible. Then if $AB$ were invertible, that would allow us to express both $A$ and $AB$ as a product of elementary matrices, which in turn would allow us to represent $B$ as such a product, which can't be if $B$ is not invertible.
-Other ways of proving the same thing would be: If $A$ is invertible (assumed as above), we can write $B=(A^{-1}A)B=A^{-1}(AB)$. So if $AB$ were invertible, then $(AB)^{-1}A$ would be an inverse for $B$, which can't be if $B$ is not invertible. Or, in the same vein as what the book does in the case where $A$ isn't invertible: If $B$ isn't invertible, its kernel is nontrivial, but the kernel of $AB$ contains the kernel of $B$, and hence the kernel of $AB$ is nontrivial, and thus $AB$ isn't invertible. (Note that here we don't have to assume that $A$ is invertible.)<|endoftext|>
-TITLE: what does it mean for a prime at infinity to ramify?
-QUESTION [25 upvotes]: I understand what it means for a prime number to ramify in a ring of integers of a number field. However, an infinite prime is an archimedean valuation; what does it mean for an archimedean valuation to ramify in a number field?
-
-REPLY [20 votes]: I think it's worth adding that while Keenan's answer is a good one, this is not usually what's given as the definition. I think this issue can be confusing in the literature because in the case of number fields a lot of the relevant constructions become trivial, but it's often not mentioned what the punch line is. So, although this question is old, maybe this will still help someone.
-The Archimedean valuations on a number field $K$ come from the possible embeddings $\sigma:K\rightarrow\mathbb{C}$ (when $K/\mathbb{Q}$ is Galois, these can be identified with $Gal(K,\mathbb{Q})$). For each such $\sigma$, if its image lies in $\mathbb{R}$, the valuation is $v_{\sigma}:x\mapsto |\sigma(x)|$, but if its image does not lie entirely in $\mathbb{R}$, then $\sigma$ and $\overline{\sigma}$ both yield the same valuation $v_{\sigma}:x\mapsto|\sigma(x)|^2$.
-Now fix an infinite place $v$ on $K$, let $L$ be a finite field extension of $K$, and let $w$ be an extension of $v$ to $L$. The extension is said to ramify at $w$ iff $\#\{\tau\in Gal(L,K)\mid w\circ\tau=w\}>1$. But in reality this all simplifies to what Keenan said. The only possibilities for $\tau$ satisfying $w\circ\tau=w$ are the identity map and complex conjugation.
-Moreover, if $v$ is a real embedding and $w$ is not, you will always have the option of $\tau$ being complex conjugation, so the extension will always be ramified there. And if the situation is any of the other possibilities ($v$ real & $w$ real; or $v$ non-real & $w$ non-real), then $\tau$ can only be the identity, hence not ramified.<|endoftext|>
-TITLE: Why is the ring of matrices over a field simple?
-QUESTION [50 upvotes]: Denote by $M_{n \times n}(k)$ the ring of $n$ by $n$ matrices with coefficients in the field $k$. Then why does this ring not contain any nonzero proper two-sided ideal?
-Thanks for any clarification. This is an exercise from the notes of Commutative Algebra by Pete L Clark, which I thought was simple, but I cannot figure it out now.
-
-REPLY [44 votes]: A faster, and more general result, which Arturo hinted at, is obtained via the following proposition from Grillet's Abstract Algebra, section "Semisimple Rings and Modules", page 360: the two-sided ideals of $M_n(R)$ are exactly the subsets of the form $M_n(I)$, where $I$ is a two-sided ideal of $R$.
-Consequence: if $R:=D$ is a division ring, then $M_n(D)$ is simple.
-Proof: Suppose there existed a nonzero proper ideal of $M_n(D)$. By the proposition, it'd be of the form $M_n(I)$, for $I\unlhd D$, but division rings do not have any ideals other than $0$ and $D$, so this is a contradiction. $\blacksquare$<|endoftext|>
-TITLE: Question regarding the canonical factorization of $n!$?
-QUESTION [5 upvotes]: The problem is: write 101! in canonical form.
-I solved it by trying to find the highest power of each prime factor less than 101. When running the factor() function on a TI-89, I realized that the powers of the successive prime factors decrease.
-So I guess I have two questions:
-
-How can I write this idea using math notation? I'm thinking of $\prod$ and $\sum$ notation, but I don't know how to express this idea.
-As a programmer, I want to write a program to handle this situation. I could check every prime factor; however, I'm curious about the decrease of these prime factors. Is there any pattern behind the scenes? If there is, it would speed up my algorithm a bit.
-
-Update
-Based on Aryabhata's answer, my attempt is:
-$$\prod_{i=1}^{\infty}p_i^{\sum_{k=1}^{\infty} \left\lfloor\frac{n}{p_i^k}\right\rfloor}$$
-Does it make sense?
-Thanks.
-
-REPLY [11 votes]: You can use Legendre's formula that the highest power of a prime $p$ which divides $n!$ is given by
-$$\text{ord}(n,p) = \sum_{k=1}^{\infty} \left\lfloor\frac{n}{p^k}\right\rfloor$$
-Note, even though the upper limit says $\displaystyle \infty$, the summation is finite: $\displaystyle \left\lfloor\frac{n}{p^k}\right\rfloor$ is zero when $\displaystyle p^k \gt n$.
-In your case, the only primes you need to consider are all in the range $\displaystyle 1 \lt p \le 101$.
-You can thus write $n!$ as
-$$n! = \prod_{p \le n} p^{\text{ord}(n,p)}$$
-where $\displaystyle \text{ord}(n,p)$ is defined as above and $\displaystyle p$ runs through the primes.
-
-REPLY [6 votes]: First note that the largest prime dividing $N!$ has to be less than or equal to $N$.
-Once you observe this, write $N!$ as a product of primes $N! = 2^{\alpha_2} 3^{\alpha_3} 5^{\alpha_5} 7^{\alpha_7} 11^{\alpha_{11}} \ldots p^{\alpha_p}$ where $\alpha_i \in \mathbb{N}$ and $p$ is the largest prime not exceeding $N$.
-Now if you want to find the maximum power of a prime $q$ dividing $N!$, it is given by
-$$\alpha_q = \left \lfloor \frac{N}{q} \right \rfloor + \left \lfloor \frac{N}{q^2} \right \rfloor + \left \lfloor \frac{N}{q^3} \right \rfloor + \cdots$$
-The first term appears since you want to count the number of terms at most $N$ that are multiples of $q$, and each of these contributes one $q$ to $N!$. But then when you have multiples of $q^2$ you are not adding just one $q$ but two of these primes $q$ to the product. So you now count the number of multiples of $q^2$ less than $N$ and add, and so on and so forth for higher powers of $q$ less than $N$.<|endoftext|>
-TITLE: How can $Z\cup W\sim W$ for sets $Z$ and $W$?
-QUESTION [5 upvotes]: I've found some time to read a little more on set theory, and I've come across the following question.
-Suppose I have four sets $X$, $Y$, $Z$, and $W$ such that $Y\subseteq W$ and $Z\subseteq X$. Suppose also that $X\cup Y\sim Y$, where by $\sim$ I mean that the two sets $X\cup Y$ and $Y$ are equinumerous. How can I show that $Z\cup W\sim W$?
-I thought the Bernstein-Schroeder theorem might be applicable. The identity function maps $W$ into $Z\cup W$ injectively, so I figured it suffices to show that there is an injection from $Z\cup W$ into $W$. From $X\cup Y\sim Y$, there is an injection $f\colon X\cup Y\to Y$, and thus $f|_X$ is an injection from $X$ into $Y$. Since $Z\subseteq X$ there is an injection from $Z$ to $X$, and likewise from $Y$ into $W$. Composing all these would give an injection from $Z$ into $W$. Those were my thoughts, but I don't think I can use them to show that $Z\cup W$ maps injectively into $W$. There must be a better way. Thanks for any help.
-
-REPLY [4 votes]: $Z \cup W \subseteq X \cup W = X \cup (Y \cup (W \;\backslash Y)) = (X \cup Y) \cup (W \;\backslash Y) \sim Y \cup (W \;\backslash Y) = W \subseteq Z \cup W$<|endoftext|>
-TITLE: Structure Theorem for abelian torsion groups that are not finitely generated?
-QUESTION [44 upvotes]: I know about the structure theorem for finitely generated abelian groups.
-I'm wondering whether there exists a similar structure theorem for abelian groups that are not finitely generated. In particular, I'm interested in torsion groups. Maybe having a finite exponent helps?
-
-REPLY [14 votes]: To complement Arturo's excellent survey of the foundations of abelian p-groups: A useful survey of the later (1960-2000) development of the theory of abelian p-groups is given in:
-Hill, Paul. "The development of the theory of p-groups."
-Rocky Mountain J. Math. 32 (2002), no. 4, 1135–1151.
-MR1987598
-DOI:10.1216/rmjm/1181070013.
-I personally enjoyed the earlier theory a lot more (Kulakoff, Prüfer, Zippin, Ulm), which was gathered up into textbook form by Fuchs and Kaplansky. However, lots of positive and negative results have been achieved since then, so if you aren't daunted by the several hundred pages of the textbooks by Fuchs, Kaplansky, and Griffith, then the survey has a lot of important papers beyond the textbook level.<|endoftext|>
-TITLE: How do quaternions represent rotations?
-QUESTION [10 upvotes]: I wonder how $qvq^{-1}$ gives the rotated vector of $v$.
-Is there any easy-to-understand proof for it?
-I was on Wikipedia, but I could not understand the proof there
-because of the conversions.
-Why is $uv-vu$ the same as $2(u \times v)$, and why is $uvu$ the same as $v(uu)-2(uv)u$?
-
-REPLY [6 votes]: A proof is outlined here, although I skipped a few computations you should verify.<|endoftext|>
-TITLE: Why is UFD a Krull domain?
-QUESTION [9 upvotes]: Matsumura mentions this as if it is obvious, and I can't find this result anywhere. Am I missing something obvious here?
-
-REPLY [8 votes]: Just for completeness: A Krull domain is an integral domain $D$ with field of fractions $K$ for which there is a family $\mathcal{F}=\{R_{\lambda}\}_{\lambda\in\Lambda}$ of discrete valuation rings of $K$ such that:
-
-$D = \mathop{\cap}\limits_{\lambda\in\Lambda}R_{\lambda}$; and
-For every $x\in K$, $x\neq 0$, there are at most a finite number of $\lambda\in\Lambda$ such that $v_{\lambda}(x)\neq 0$.
-
-So, assume that $D$ is a UFD.
For each (class of associated) irreducible element $\pi$ of $D$, localizing at the prime ideal $(\pi)$ (the only allowable denominators are those prime to $\pi$) gives you a DVR whose valuation measures the power of $\pi$. The intersection of all of these DVRs, viewed as subrings of the field of fractions of $D$, is exactly $D$ (because the only elements in the field of fractions that can be written with denominators prime to all irreducibles are the elements of $D$), and every element of the field of fractions has nonzero $\pi$-valuation only at finitely many $\pi$ (write the fraction in reduced terms: only those $\pi$ that show up in the numerator or the denominator give you nonzero valuation). So $D$ is a Krull domain.<|endoftext|>
-TITLE: How to draw a complex line bundle?
-QUESTION [7 upvotes]: The most basic example of a topologically non-trivial real line bundle is the well-known Möbius strip. Everyone who learns about vector bundles will be confronted by it, if only because it has the distinguished advantage that we can draw a picture of it.
-
-I would like to draw pictures of other line bundles, too. In particular, I have a complex line bundle which I would like to visualize somehow. How do I do that?
-To be more specific:
-
-The base manifold is the torus $M = S^1\times S^1$. It should be fine to visualize it as a rectangle, though.
-The complex line bundle has structure group $U(1)$.
-It is given as a direct summand of the trivial bundle $M \times L^2(\mathbb R^3)$. In other words, it is embedded in an infinite dimensional Hilbert space bundle. In particular, there is an induced connection coming from the hermitian form (scalar product).
-(The bundle arises from an analysis of the Quantum Hall Effect.)
-
-My questions:
-
-1) Are there any example drawings of complex line bundles?
-
-I imagine that one attaches a plane to every point of the base manifold, but it is not clear to me how to arrange them such that one obtains a qualitative picture of the fact that they represent complex numbers.
-
-2) Is there a minimal dimension $N$ such that every complex line bundle can be embedded into $\mathbb R^N$ in a suitable fashion?
-
-It is probably the case that $N \geq 4$, so this won't be of much use, but it might still shed some insight on the problem, in particular because we are also given a connection.
-
-3a) Any ideas of how one might go about drawing a complex line bundle?
-3b) Any ideas on how to best visualize the connection coming from a hermitian form?
-
-REPLY [4 votes]: Apparently, Mario Serna has produced pictures of $U(1)$-bundles on his webpage and in his paper "Riemannian Gauge Theory and Charge Quantization". Here is an example:
-
-The image represents a trivial $\mathbb R^3$ bundle over some rectangular base manifold. The $U(1)$ bundle which we want to visualize is shown as an $\mathbb R^2$-sub-bundle: the disks indicate the 2-dimensional fibers at each point, to be understood as subspaces of small 3-dimensional boxes at each point (not shown).
-It seems that the disks are also meant to give an impression of the connection, but I don't fully understand how parallel transport is supposed to work here.
-He cites a result by Narasimhan and Ramanan which says that every $U(1)$ bundle can be embedded into a trivial $(2d+1)$-dimensional complex vector bundle where $d = \text{dim} M$ is the dimension of the base manifold.
Fortunately, the dimension is lower in the cases drawn.<|endoftext|>
-TITLE: Projection map being a closed map
-QUESTION [88 upvotes]: Let $\pi: X \times Y \to X$ be the projection map where $Y$ is compact. Prove that $\pi$ is a closed map.
-
-First I would like to see a proof of this claim.
-I want to know why compactness is necessary here, or whether some weaker condition than compactness suffices for the same result to hold.
-
-REPLY [14 votes]: I'll add a proof using nets. I think that nets are often useful, since we have good intuition about sequences in metric spaces and many things work very similarly for nets in general topological spaces. (For example, we know that a metric space is compact if and only if every sequence has a convergent subsequence. If we work with topological spaces, we have a similar characterization with nets: A topological space is compact if and only if every net has a convergent subnet.)
-Proof. Let $C$ be a closed subset of $X\times Y$. We want to show that $\pi[C]$ is a closed subset of $X$.
-Let $(x_d)_{d\in D}$ be a net in $X$ such that each $x_d$ belongs to $\pi[C]$ and $x=\lim_{d\in D} x_d$. We want to show that $x\in\pi[C]$.
-Since $x_d\in\pi[C]$, we can choose (for each $d\in D$) a point $y_d\in Y$ such that $(x_d,y_d)\in C$. Now $(y_d)_{d\in D}$ has a convergent subnet $(y_e)_{e\in E}$. (This follows from compactness of $Y$.) This means that there is a $y\in Y$ such that $y=\lim_{e\in E} y_e$.
-Now we have $\lim_{e\in E} x_e = x$ and $\lim_{e\in E} y_e = y$, which implies that $\lim_{e\in E} (x_e,y_e)=(x,y)$ and $(x,y)\in C$. Therefore $x\in\pi[C]$. $\hspace{2cm}\square$
-
-Kuratowski's theorem says that this property in fact characterizes compact spaces. A proof can be found in Engelking's book (Theorem 3.1.16) or in Henno Brandsma's post. Eric Auld asked in his comment whether this can be shown using nets. It seems that a very similar idea as in the proof using filters works also for nets; see my proof below.
-I should mention that I have previously posted here a longer proof which turned out to be incorrect. (You can find it by checking revision history, if you are interested.) Luckily, Eric Auld caught the mistake.
-
-If $p_Y \colon X\times Y\to Y$ is closed for every $Y$, then $X$ is compact.
-
-Let $D$ be a directed set and $(x_d)_{d\in D}$ be a net in $X$.
-QUESTION [6 upvotes]: How could I find all the pairs $(n, k)$ for this equation. The most obvious pair solution that I can see is $(1, 1)$. -Using summation identity, I have: -$$\frac{n(n+1)}{2} = \frac{k(k + 1)(2k + 1)}{6}$$ -Then I thought of using cubic formula for $k$-equation, but it involved many variables. Any idea? -Thanks, -Chan - -REPLY [6 votes]: There are only two variables involved. If you want to search, you can write it as a quadratic in $n$, just try values of $k$, solve for $n$, and see if it comes out integral. I find k=5, n=10, k=6, n=13 and k=85, n=645 as solutions as well with no more under k=200. Then OEIS has no more and asserts the series is finite. There are references for this claim in A053611 - -REPLY [3 votes]: Fix the variable $k$. Let -$$k' = \dfrac{k(k+1)(2k+1)}{6}.$$ -Then you get the quadratic equation -$$n^2+n-2k' = 0$$ -with the solutions -$$n_{1/2} = -\dfrac{1}{2} \pm \sqrt{(\dfrac{1}{2})^2+2k'}.$$ -Now you can generate your solution pairs.<|endoftext|> -TITLE: Is there an n such that $2^n|n!$? -QUESTION [6 upvotes]: If there exists such an $n$ then $n$ must be the highest of power of 2 that divides $n!$. But to find out the highest power we need to increase n for each step until it reaches $n$. Any idea? -Thanks, -Chan - -REPLY [16 votes]: The power of 2 that divides n! is $\lfloor \frac{n}{2} \rfloor + \lfloor \frac{n}{4} \rfloor +\lfloor \frac{n}{8} \rfloor + \ldots \lfloor \frac{n}{2^k} \rfloor$ This can never be more than $n-1$, which occurs when $n$ is a power of 2. - -REPLY [10 votes]: No. Consider the formula due to Legendre: the power of the prime $p$ in the factorization of $n!$ is $\sum_{k=1}^\infty \lfloor n/p^k \rfloor$. In the case $p=2$ you have $\sum_{k=1}^\infty \lfloor n/2^k \rfloor$. But -$$ \sum_{k=1}^\infty \lfloor n/2^k \rfloor < \sum_{k=1}^\infty n/2^k $$ -since we can't have $n/2^k$ an integer for all $k$ simultaneously, so at least one term in the left-hand sum is less than the corresponding term in the right-hand sum. And the right-hand sum is just $n$. -However, as you might imagine from the fact that $\lfloor n/2^k \rfloor$ is close to $n/2^k$, the exponent of 2 in $n!$ is pretty close to $n$; in fact it's $n$ minus the number of 1s in the binary expansion of $n$. In fact, if $n$ is a power of 2, then $2^{n-1}|n!$.<|endoftext|> -TITLE: The fundamental group of a pair of Hawaiian earrings -QUESTION [16 upvotes]: Let $H$ be the Hawaiian earring and let $H'$ be the reflection of the Hawaiian earring across the $y$-axis (in the Wikipedia picture). There is a canonical homomorphism from the free product $\pi_1(H) * \pi_1(H')$ to $\pi_1(H \cup H')$ (with basepoint their intersection), but it is not an isomorphism. -This was intended to be a recent homework problem of mine, but as stated the problem actually asked whether the two groups are abstractly isomorphic. I don't know the answer to this question, and neither does my professor. My guess is that they are not isomorphic, but I don't have good intuitions about such large groups. -Edit: to be clear, I know how to do the intended problem, and I also know that $H \cup H'$ is homeomorphic to $H$. - -REPLY [6 votes]: The two groups are not isomorphic. -See Thm 1.2 of Topology Appl. 123 (2002) 479-505.<|endoftext|> -TITLE: Why are inner products in RKHS linear evaluation functionals? -QUESTION [10 upvotes]: I'd like to know why inner products in Reproducing kernel Hilbert spaces are (linear) evaluation functionals. 
-I understand that inner products are linear functionals, and I know what an evaluation functional is; I just can't explain why an inner product (in a RKHS) is an evaluation functional, and vice versa.
-
-REPLY [13 votes]: On a Hilbert space, all continuous linear functionals are inner product functionals (Riesz), and conversely (Cauchy-Schwarz). In a RKHS, evaluation functionals are continuous, which is equivalent to being inner product functionals. The converse is usually not true. That is, inner product functionals on a RKHS need not be point evaluations.
-Let $H$ be a Hilbert space consisting of complex-valued functions on a set $X$ such that for each $x\in X$, the evaluation functional $f\mapsto f(x)$ is continuous. Then for each $x\in X$ there is a $k_x\in H$ such that for all $f\in H$, $f(x)=\langle f,k_x\rangle$. (The function $K:X\times X\to \mathbb{C}$ defined by $K(x,y)=k_y(x)=\langle k_y,k_x\rangle$ is the reproducing kernel of the RKHS $H$, and some authors start with $K$ in defining a RKHS.) Only the inner products with the elements $k_x$ are evaluation functionals.
-For example, consider the Hardy space $H^2$ of holomorphic functions on the open unit disk whose sequences of Maclaurin series coefficients are in $\ell^2$, with inner product $\displaystyle{\left\langle \sum_{k=0}^\infty a_kz^k,\sum_{k=0}^\infty b_kz^k\right\rangle=\sum_{k=0}^\infty a_k\overline{b_k}}$. Evaluations on the open disk are continuous, as can be seen directly by writing down the element $k_w$ of $H^2$ whose inner product functional is evaluation at $w$, $k_w(z)=\sum_{k\geq0} \overline{w}^kz^k=\frac{1}{1-\overline{w}z}$. So a necessary and sufficient condition for an inner product functional $f\mapsto\langle f,g\rangle$ to be an evaluation functional is the existence of a $w$ in the open unit disk such that $g=k_w$, a condition which a typical $g\in H^2$ will not satisfy. Note that the set of evaluation functionals is not closed under scalar multiplication or addition. In fact, it is linearly independent.
-A simpler but in some ways less interesting example is $\ell^2$ thought of as a space of functions on the nonnegative integers, where the evaluation functionals are just the inner products with elements of $\ell^2$ that have the value $1$ at one point and vanish elsewhere. An even simpler example would be a finite dimensional Hilbert space thought of as functions on a finite set. In these cases, cardinality is enough to see that most inner product functionals are not point evaluations.
-The inner product functionals and evaluation functionals would be identical if you were considering a Hilbert space $H$ as a space of functions on its dual space, in the usual isomorphism of $H$ with its double dual.
-
-Added: Here is some elaboration on the first 2 sentences. If $H$ is a Hilbert space and $g\in H$, then the function $T_g :H\to \mathbb{C}$ defined by $T_g(f)=\langle f,g\rangle$ is a linear functional on $H$ called an "inner product functional" above. Each inner product functional is continuous. The operator norm of $T_g$ is equal to $\|g\|$. The Cauchy-Schwarz inequality gives $|T_g(f)|\leq \|f\|\|g\|$ for all $f$, which means $\|T_g\|\leq\|g\|$. Plugging $g$ into $T_g$ gives $|T_g(g)|=\|g\|^2$, showing that $\|T_g\|\geq \|g\|$.
-So inner product functionals are continuous, and this would be true in any inner product space. The Riesz representation theorem (for Hilbert space, sometimes also called Riesz's lemma) says that the converse is true for a Hilbert space.
You can see this for example in the Wikipedia article, and in many textbooks including the basics of Hilbert spaces, such as Rudin's Real and complex analysis. That is, if $T:H\to\mathbb{C}$ is any continuous linear functional, then there is a $g\in H$ such that $T=T_g$. -Hopefully the first sentence is clearer now. As for the second sentence, it follows directly from the first sentence and the definition of a RKHS, and the second paragraph elaborates on this. There is more than one way to characterize a RKHS, and if continuity of point evaluations isn't clear from your definition, perhaps you could provide the definition to make it easier to answer your questions.<|endoftext|>
-TITLE: How to find the highest power of a prime $p$ that divides $\prod \limits_{i=0}^{n} 2i+1$? -QUESTION [6 upvotes]: Possible Duplicate: -How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes? -
-Given an odd prime $p$, how does one find the highest power of $p$ that divides -$$\displaystyle\prod_{i=0}^n(2i+1)?$$ -I wrote it all down on paper and realized that the highest power of $p$ that divides this product will be the same as the highest power of $p$ that divides $(\lceil\frac{n}{2}\rceil - 1)!$. -For instance, -$$10! = 1\times 2\times 3\times 4\times 5\times 6\times 7\times 8\times 9\times 10$$ while -$$\prod_{i=0}^{4} (2i+1) = 1\times 3\times 5\times 7\times 9.$$ -Am I on the right track? -Thanks, -Chan -
-REPLY [9 votes]: Note that $\displaystyle \prod_{i=1}^{n} (2i-1) = \frac{(2n)!}{2^n n!}$. -Clearly, the highest power of $2$ dividing the above product is $0$. -For odd primes $p$, we proceed as follows. -Note that the highest power of $p$ dividing $\frac{a}{b}$ is nothing but the highest power of $p$ dividing $a$ minus the highest power of $p$ dividing $b$; i.e. if $s_p$ is the highest power of $p$ dividing $\frac{a}{b}$, $s_{p_a}$ is the highest power of $p$ dividing $a$ and $s_{p_b}$ is the highest power of $p$ dividing $b$, then $s_p = s_{p_a}-s_{p_b}$. -So the highest power of $p$ dividing $\displaystyle \frac{(2n)!}{2^n n!}$ is nothing but $s_{(2n)!}-s_{2^n}-s_{n!}$. -Note that $s_{2^n} = 0$ for odd $p$. -Now, if you want to find the maximum power of a prime $q$ dividing $N!$, it is given by -$$s_{N!} = \left \lfloor \frac{N}{q} \right \rfloor + \left \lfloor \frac{N}{q^2} \right \rfloor + \left \lfloor \frac{N}{q^3} \right \rfloor + \cdots$$ -(Look up this stackexchange thread for the justification of the above claim.) -Hence, the highest power of an odd prime $p$ dividing the product is $$\left ( \left \lfloor \frac{2n}{p} \right \rfloor + \left \lfloor \frac{2n}{p^2} \right \rfloor + \left \lfloor \frac{2n}{p^3} \right \rfloor + \cdots \right ) - \left (\left \lfloor \frac{n}{p} \right \rfloor + \left \lfloor \frac{n}{p^2} \right \rfloor + \left \lfloor \frac{n}{p^3} \right \rfloor + \cdots \right)$$<|endoftext|>
-TITLE: What's the point of orthogonal diagonalisation? -QUESTION [19 upvotes]: I've learned the process of orthogonal diagonalisation in an algebra course I'm taking...but I just realised I have no idea what the point of it is. -The definition is basically this: "A matrix $A$ is orthogonally diagonalisable if there exists a matrix $P$ which is orthogonal and $D = P^tAP$ where $D$ is diagonal". I don't understand the significance of this though...what is special/important about this relationship? -
-REPLY [5 votes]: Matrices are complicated objects. At first glance they are rectangular arrays of numbers with a complicated multiplication rule. 
Diagonalization helps us reduce the matrix multiplication operation to a sequence of simple steps which make sense. In your case you are asking about orthogonal diagonalization, so I will limit my comments to that. Note that I normally think about diagonalization as a factorization of the matrix $\mathbf A$ as -$$\mathbf A = \mathbf P \mathbf D \mathbf P^T$$ -We know that $\mathbf D$ contains the eigenvalues of $\mathbf A$ and $\mathbf P$ contains the corresponding eigenvectors. Consider the multiplication $\mathbf{y=Ax}$. We can perform the multiplication $\mathbf y = \mathbf P \mathbf D \mathbf P^T \mathbf x$ step-by-step as follows:
-First: $\mathbf x' = \mathbf P^T \mathbf x$. This step projects $\mathbf x$ onto the eigenvectors because the eigenvectors are in the rows of $\mathbf P^T$.
-Second: $\mathbf y' = \mathbf D \mathbf x'$. This step stretches the resultant vector independently in each direction which corresponds to an eigenvector. This is the key. A matrix does a simple scaling operation (in general we may also shear or rotate) in independent directions (orthogonal case only!) in a particular basis or representation.
-Third: $\mathbf y = \mathbf P \mathbf y'$. We take our resulting stretched vector and linearly combine the eigenvectors to get back to our original space.
-So in summary, diagonalization tells us that under matrix multiplication, a vector can be represented in a special basis in which the actual operation of the matrix is a simple diagonal matrix (the simplest possible), and then represented back in our original space.<|endoftext|>
-TITLE: Why does $10^x$ have $(x+1)^2$ factors? -QUESTION [20 upvotes]: E.g. $1000$ has $16$ factors $(1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100, 125, 200, 250, 500, 1000)$ -
-REPLY [23 votes]: Assuming $x$ is a positive integer, of course... -How many factors does a positive integer $n$ have? If you factor $n$ into primes, -$$n = p_1^{a_1}\times p_2^{a_2}\times\cdots\times p_r^{a_r}$$ -then every factor of $n$ is of the form -$$m = p_1^{b_1}\times p_2^{b_2}\times\cdots\times p_r^{b_r}$$ -with $0\leq b_i\leq a_i$. So the number of factors of $n$ is -$$(a_1+1)(a_2+1)\cdots(a_r+1)$$ -(because you have $a_1+1$ choices for the exponent of $p_1$, $a_2+1$ choices for the exponent of $p_2$, etc.) -Since $10 = 2\times 5$, then $10^x = 2^x\times 5^x$, so the number of factors is $(x+1)(x+1)=(x+1)^2$. -
-REPLY [5 votes]: The short answer to your question is $10^x = 2^x 5^x$. We shall now enumerate the divisors. Let $d$ be one such divisor. Then $d = 2^y 5^z$, where $0 \leq y \leq x$ and $0 \leq z \leq x$. Now $y$ has $x+1$ choices and similarly $z$ has $x+1$ choices; these choices are independent, and each choice gives rise to a unique divisor $d$. -Hence, the total number of divisors of $10^x$ is $(x+1) \times (x+1) = (x+1)^2$. -In general, to find the number of divisors of $n$, write $n = p_1^{\alpha_1} \times p_2^{\alpha_2} \times \cdots \times p_k^{\alpha_k}$. -A divisor $d$ of $n$ must be of the form $d = p_1^{\beta_1} \times p_2^{\beta_2} \times \cdots \times p_k^{\beta_k}$ where $0 \leq \beta_i \leq \alpha_i$, $i \in \{1,2,\ldots,k\}$. Hence, each $\beta_i$ has $(1+\alpha_i)$ choices; these choices are independent, and each choice gives rise to a unique divisor $d$. -Hence, the total number of divisors of $n$ is $$\displaystyle \prod_{i=1}^{k} (1+\alpha_i)$$ -
-REPLY [4 votes]: ${\displaystyle 10^x = 2^x5^x}$. 
A factor of $10^x$ is a number of the form ${\displaystyle 2^y5^z}$ where $0 \leq y \leq x$ and $0 \leq z \leq x$, with $y$ and $z$ integers. There are $x+1$ integers from $0$ to $x$, so the total number of possible pairs $(y,z)$ is $(x+1)\times(x+1) = (x+1)^2$.<|endoftext|>
-TITLE: Weierstrass M-Test -QUESTION [9 upvotes]: What does "M" stand for in the Weierstrass M-Test? -Just asking... -
-REPLY [13 votes]: The M stands for majorant. -In German it is also called the Weierstraßsches Majorantenkriterium.<|endoftext|>
-TITLE: Locally a domain and connected implies a domain -QUESTION [5 upvotes]: Let $R$ be a commutative ring with unit. Let $R_p$ be a domain for all $p\in \operatorname{Spec}R$ and let $\operatorname{Spec}R$ be connected. Is it true that $R$ is a domain, or can someone provide a counterexample? Note here that $R$ is not necessarily a Noetherian ring. For a Noetherian ring this is easy. -
-REPLY [3 votes]: See https://mathoverflow.net/questions/7477/non-integral-scheme-having-integral-local-rings<|endoftext|>
-TITLE: Is there an accepted term for "locally" nilpotent linear operators? -QUESTION [7 upvotes]: Let $V$ be a vector space over a field $k$ (not necessarily finite-dimensional) and $T : V \to V$ a linear operator. Is there an accepted term for the following condition on $T$? -
-For any $v \in V$ the subspace $\text{span}(v, Tv, T^2 v, ...)$ is finite-dimensional, and $T$ is nilpotent on any such subspace. -
-For example, the differential operator $\frac{d}{dx}$ acting on $k[x]$ satisfies this condition but is not nilpotent. -Motivation: When $\text{char}(k) = 0$, this condition ensures that the exponential $e^T : V \to V$ is well-defined without giving $V$ any additional structure, since $e^T v$ is a finite sum for any particular $v$. -
-REPLY [4 votes]: The standard name is locally nilpotent. Thus one hears about locally nilpotent derivations, for example, like $\frac{\mathrm d}{\mathrm dt}$ in $k[t]$.<|endoftext|>
-TITLE: How to compute the SVD of a symmetric matrix? -QUESTION [6 upvotes]: If I have only the upper triangular part of a symmetric matrix $A$, how could I compute the SVD? -$$\begin{pmatrix} 1 & 22 & 13 & 14 \\ & 1 & 45 & 24 \\ & & 1 & 34 \\ & & & 1\end{pmatrix}$$ -Does having this upper triangular form make the computation easier? -
-REPLY [26 votes]: The SVD of a symmetric matrix is $A = U \Sigma V^T$, where $U$ and $V$ are unitary matrices with $U = \left[u_1 | u_2 | \ldots | u_n \right]$, $V = \left[v_1 | v_2 | \ldots | v_n \right]$, $\Sigma$ is a diagonal matrix with non-negative diagonal entries, and $v_i = \pm u_i$. -For a symmetric matrix the following decompositions are equivalent to the SVD. (Well, almost equivalent, if you do not worry about the signs of the vectors.) -
-Eigenvalue decomposition, i.e. $A = X \Lambda X^{-1}$. When $A$ is symmetric, the eigenvalues are real and the eigenvectors can be chosen to be orthonormal, and hence $X^TX = XX^T = I$, i.e. $X^{-1} = X^T$. The only difference is that the singular values are the magnitudes of the eigenvalues, and hence the corresponding column of $X$ needs to be multiplied by $-1$ if the eigenvalue turns out to be negative to get the singular value decomposition. Hence, $U = X$ and $\sigma_i = |\lambda_i|$.
-Orthogonal decomposition, i.e. $A = PDP^T$, where $P$ is a unitary matrix and $D$ is a diagonal matrix. This exists only when the matrix $A$ is symmetric and is the same as the eigenvalue decomposition.
-Schur decomposition, i.e. $A = Q S Q^T$, where $Q$ is a unitary matrix and $S$ is an upper triangular matrix. This can be done for any matrix. 
When $A$ is symmetric, then $S$ is a diagonal matrix, and the decomposition again coincides with the eigenvalue decomposition and the orthogonal decomposition. -
-I do not remember the cost for each of these operations, i.e. I don't remember the coefficients before the leading order $n^3$ term. If my memory is right, the typical algorithm for orthogonal decomposition is slightly cheaper than the other two, though I cannot guarantee it.<|endoftext|>
-TITLE: Questions about derivative and differentiation -QUESTION [9 upvotes]: For a real-valued function $f$ defined on $\mathbb{R}$ or a subset of it:
-1) Is it possible that it is differentiable at one point but at no other point in any neighbourhood of that point?
-2) Is it possible that it is differentiable over an interval, but its derivative over the interval is not continuous?
-3) I found on this link that for $f(x) = x^2 \sin(1/x)$, when $x \neq 0$, the derivative is $2x \sin(1/x) - \cos(1/x)$, which does not have a limit as $x$ approaches $0$, but the derivative of $f$ does exist at $0$: -$$\lim_{h \rightarrow 0} \frac{h^2 \sin(1/h) - 0}{h-0} = 0.$$ -I was wondering how $f$ is differentiable at $0$? Doesn't it require $f$ to be defined at $0$? -
-Thanks and regards! -
-REPLY [8 votes]: Here is the answer to the third question. -Let us take a look at the function $f(x) = x^2 \sin(\frac{1}{x})$. -The first question is "Is the function even in $C^0$?" -The answer is "not yet", since the function is ill-defined at the origin. However, if we define $f(0) = 0$, then yes, the function is in $C^0$. This can be seen from the fact that $\sin(\frac{1}{x})$ is bounded and hence the function is bounded above by $x^2$ and below by $-x^2$. So as we go towards $0$, the function is bounded by functions which themselves tend to $0$. So the limit is $0$ and thereby the function is continuous. -Now, the next question: "Is the function differentiable everywhere?" -It is obvious that the function is differentiable everywhere except possibly at $0$. At $0$, we need to pay a little attention. If we were to blindly differentiate $f(x)$ using the conventional formulas, we would get $g(x) = f'(x) = 2x \sin(\frac{1}{x}) + x^2 \times \frac{-1}{x^2} \cos(\frac{1}{x})$. -Now $g(x)$ is ill-defined for $x=0$. Further, $\displaystyle \lim_{x \rightarrow 0} g(x)$ doesn't exist. This is what we get if we use the formula. So can we say that $f(x)$ is not differentiable at the origin? Well, no! All we can say is that $g(x)$ is discontinuous at $x=0$. -So what about the derivative at $x=0$? Well, as I always prefer to do, get back to the definition of $f'(0)$: -$f'(0) = \displaystyle \lim_{\epsilon \rightarrow 0} \frac{f(\epsilon) - f(0)}{\epsilon} = \displaystyle \lim_{\epsilon \rightarrow 0} \frac{\epsilon^2 \sin(\frac{1}{\epsilon})}{\epsilon} = \displaystyle \lim_{\epsilon \rightarrow 0} \epsilon \sin(\frac{1}{\epsilon}) = 0$ -(since $|\sin(\frac{1}{\epsilon})| \leq 1$, so it is bounded). -So we find that the function $f(x)$ has a derivative at the origin, whereas the function $g(x) = f'(x)$, defined for $x \neq 0$, is not continuous or even well-defined at the origin. -So we have this function whose derivative exists everywhere, but $f(x) \notin C^1$ since the derivative is not continuous at the origin. -Look up Volterra's function as an answer to your second question. -
-REPLY [7 votes]: Let $f(x)=x^2$ if $x$ is rational, $f(x)=0$ if $x$ is irrational. If $x\neq 0$, then $f$ is not continuous at $x$, and hence not differentiable at $x$. 
If $h\neq0$ is rational, then $\frac{f(h)}{h}=h$, while if $h$ is irrational, then $\frac{f(h)}{h}=0$. Therefore $f'(0)=\lim_{h\to0}\frac{f(h)}{h}=0$, and in particular it exists. If you wanted the example to be continuous, take a continuous function that is nowhere differentiable and multiply by $x$ to get a continuous function differentiable only at $0$.<|endoftext|>
-TITLE: How many times do I roll an unfair die to determine its bias? -QUESTION [20 upvotes]: This question comes from computer security, but I'll distill it into a probability question: -I have a biased die with 96 sides. 95 sides are equiprobable, each having a 1% chance of landing up. The remaining side, side X, has a 5% chance. -All sides look identical; I want to identify side X. My best strategy is obviously to roll it a bunch of times and take the majority, but my question is this: how many rolls are needed until $\Pr\{X \text{ is the majority result}\} > p$ for any given $p > 1/2$? -
-REPLY [2 votes]: Let's agree that the first side has a $5\%$ chance, and the remaining $95$ sides each have a $1\%$ chance of occurring. After $n$ rolls, let $N_k$ denote the random variable for the number of occurrences of the $k$-th side. Then the random vector $(N_1,\ldots, N_{96})$, subject to $\sum_{k=1}^{96} N_k = n$, follows a multinomial distribution. -I would interpret your question as a quest to determine, for $p > \frac{1}{2}$: -$$ n_\mathrm{min} = \min\left\{ n : \mathbb{P}(N_1 > N_2 \land N_1 > N_3 \land \ldots \land N_1 > N_{96} ) > p \right\} $$ -This probability equals: -$$ \begin{eqnarray} \mathbb{P}\left( \land_{k=2}^{96} N_1 > N_k \right) &=& \sum_{n_1=1}^n \mathbb{P}\left( \land_{k=2}^{96} N_k \le n_1 -1 ,\; N_1 = n_1 \right) \\ &=& \sum_{k=0}^{n-1} \left( F\left( k+1,k,\ldots,k \right) - F\left( k,k,\ldots,k \right) \right) \end{eqnarray} $$ -where $F(n_1,\ldots,n_k,\ldots,n_{96}) = \mathbb{P}\left( N_1 \le n_1, \ldots, N_k \le n_k,\ldots, N_{96} \le n_{96} \right)$. -Even when a normal or Poisson approximation to the multinomial cumulative distribution function is applicable, it does not simplify matters very much. -As an approximation, $F$ can be replaced with the product of marginal cumulative distribution functions, since the correlation coefficients are on the order of $0.01$, i.e. small: -$$ \mathbb{P}\left( \land_{k=2}^{96} N_1 > N_k \right) = \sum_{k=0}^{n-1} f_{\mathrm{Bi}\left(n, \frac{1}{20}\right)}(k+1) \left( F_{\mathrm{Bi}\left(n, \frac{1}{100}\right)}(k) \right)^{95} $$ -I ran a simulation and plotted the probability $\mathbb{P}(\land_{k=2}^{96} N_k < N_1)$ as a function of $n$; compared to this approximation, the agreement is rather good. [comparison plot omitted]<|endoftext|>
-TITLE: Similar result to Burnside's lemma -QUESTION [6 upvotes]: I have an exercise scribbled down, and I am not sure what it is asking. It is somewhat similar to Burnside's lemma. -We have a finite group $G$ acting on a set $X$. For each $g \in G$, let $X^g$ denote the set of elements in $X$ fixed by $g$. -$$\sum_g |X^g|^2 = |G| \cdot \text{(number of orbits of a stabilizer)}$$ -I am not sure what it means by "orbit of a stabilizer". I am guessing that it refers to the action of $G$ on cosets of a stabilizer by multiplication. But this really doesn't make sense to me since this action is transitive and the orbit is just the entire set. -Does anyone know of such an exercise and can someone explain what the precise statement of the problem should be? 
-
-REPLY [5 votes]: The "number of orbits of a stabilizer": given $x \in X$, the stabilizer $G_x$ acts on $X$. The number of orbits of the stabilizer means the number of orbits of $G_x$ on $X$. -Now add the extra condition that the action of $G$ is transitive on $X$. If that is so, you first note that $\Sigma_{g \in G_x} |X_g| = \Sigma_{g \in G_y} |X_g|$, for every $x, y \in X$. -Since the action of $G$ is transitive on $X$, we have $|G|=|X||G_x|$, implying that -$$k|G|=k|X||G_x|=(|X|)(k|G_x|)=\Sigma_{x\in X} \Sigma_{g \in G_x} |X_g| = \Sigma_{g\in G} \Sigma_{x \in X_g} |X_g| = \Sigma_{g\in G}|X_g|^2 ,$$ -where $k$ is the number of orbits of $G_x$ on $X$. -Now, if the action of $G$ on $X$ is not transitive, the stabilizers can have different numbers of orbits, depending on the point that is stabilized, and the formula can fail. For example, if $1, 2, 3, 4$ represent the vertices of a regular square, and $G$ consists of the identity and the reflection $R$ in the line through $1$ and $3$, then $\Sigma_{g\in G}|X_g|^2 = |X_{id}|^2 + |X_R|^2 = 4^2+2^2 =20$. On the other hand, the stabilizer of 1, $G_1$, has three orbits on the vertices (the one containing 1, the one containing 3, and the one containing 2 and 4), and hence ("number of orbits of the stabilizer")$\cdot|G|=3 \times 2=6 \neq 20$.<|endoftext|>
-TITLE: Paying off a mortgage twice as fast? -QUESTION [7 upvotes]: My brother has a 30 year fixed mortgage. He pays monthly. Every month my brother doubles his principal payment (so every month, he pays a little bit more, according to how much more principal he's paying). -He told me he'd pay his mortgage off in 15 years this way. I told him I thought it'd take more than 15 years. Who's right? If I'm right (it'll take more than 15 years), how would I explain this to him? -CLARIFICATION: He doubles his principal by looking at his statement and doubling the "amount applied to principal this payment" field. -
-REPLY [2 votes]: It seems your brother is essentially right. -In a standard amortization schedule, the amount applied to principal each month increases geometrically, at the interest rate. -Doubling these amounts (or increasing them by any constant factor over the amounts in the original amortization schedule) corresponds to making payments at a higher constant level that amortizes the loan over a shorter total time. -Here's the math: For month $j=0,1,2\ldots$ of the loan, let $P_j$ be the principal remaining at the start of the month, and $Y$ the payment, paid at the end of the month. The amount paid toward interest is $I_j=rP_j$ with $r=0.05/12$, and the amount paid toward principal is $A_j=Y-I_j=Y-rP_j$. -Then the new principal is -$$ P_{j+1}= P_j-A_j = P_j(1+r)-Y, $$ -so -$$ I_{j+1}= rP_{j+1}= (1+r)I_j-rY = (1+r)(I_j-Y)+Y. $$ -Hence $(1+r)A_j=A_{j+1}$ and therefore $A_j=(1+r)^jA_0$. -The standard payment $Y$ is rigged to make $P_N=I_N=0$ with $N=360$ months. A higher (constant) payment $\hat Y$ corresponds to principal payments $\hat A_j$ that are larger than $A_j$ by always the same proportion. -For a 30 year loan at 5 percent, the standard monthly payment is \$5.3692 per \$1000. Doubling the principal payment results in a monthly \$6.5697 per \$1000, which amortizes the loan over about 20 years. Increasing the principal payments by 200 percent (tripling them) amortizes the loan over a bit more than 15 years. -But from your description it seems your brother is doing something different, something that increases his payments each month. 
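-His scheme is easy to simulate. Here is a minimal Python sketch (my addition, not part of the original answer; the \$100,000 balance is just an illustrative figure, while the 5 percent rate and 30-year term are the ones used above):
-    r = 0.05 / 12                        # monthly interest rate
-    P = 100_000.0                        # remaining principal (illustrative)
-    Y = P * r / (1 - (1 + r) ** -360)    # standard 30-year monthly payment
-    months = 0
-    while P > 0:
-        scheduled_principal = Y - r * P  # principal the standard payment would retire
-        P -= 2 * scheduled_principal     # he pays that principal twice over
-        months += 1
-    print(months / 12)                   # prints about 15.1 (years)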
-A spreadsheet calculation (or a short script like the sketch above) shows that he would indeed pay off the loan in 15 years, if he adds, to the standard 30-year payment $Y$, the amount of principal that the payment $Y$ would pay off this month. -(This amount may be listed on his statement.) -This means his payment at the end of month $j$ is $$ Y_j=Y+ (Y-rP_j). $$ -As above, now his remaining principal satisfies -$$ P_{j+1}= P_j+I_j - Y_j = P_j(1+2r)-2Y. $$ -So effectively his principal is reduced as if he makes the constant payment $2Y$ on a loan with interest rate $2r$. As it happens, with $Y$ being the original 30-year standard payment, $2Y$ is almost just the right value to amortize this loan over 15 years. -Any way you do it, paying principal off early is a great way to save lots on interest later.<|endoftext|>
-TITLE: Definition of mean as an integral over the CDF -QUESTION [10 upvotes]: I'm reading a statistics textbook which defines the mean of a random variable $X$ with CDF $F$ as a statistical function $t(\centerdot)$, where -$$ t(F) = \int x \, dF(x).$$ -Can someone explain this definition? I'm familiar with the definition of the mean as an integral over the PDF $f$: -$$ \int x \, f(x) \, dx.$$ -But what does it mean to have the function $F(x)$ in the variable of integration? -
-REPLY [18 votes]: The former integral is a Stieltjes integral. See this, for example, in particular the section "Application to probability theory". -In brief, the integral $I_1 = \int {xdF(x)} $ is a generalization of $I_2 = \int {xf(x)dx} $. $I_2$ can be used only when the distribution is absolutely continuous, that is, has a probability density function. $I_1$ can be used for any distribution, even if it is discrete or continuous singular (provided the expectation is well-defined, that is $\int {|x|dF(x)} < \infty $). Note that if $F'(x) = f(x)$, then, formally, $dF(x) = f(x)dx$. -For a thorough account of this topic, see, for example, this. -EDIT: -An example. Suppose that a random variable $X$ has CDF $F$ such that $F'(x) = f_1 (x)$ for $x < a$, $F'(x) = f_2 (x)$ for $x > a$, and $F(a) - F(a-) = p$, where $f_1$ and $f_2$ are nonnegative continuous functions, and $0 < p \leq 1$. Note that $F$ is everywhere differentiable except at $x=a$ where it has a jump discontinuity of size $p$. In particular, $X$ does not have a PDF $f$, hence you cannot compute ${\rm E}(X)$ as the integral $I_2$. However, ${\rm E}(X)$ can be computed as follows: -$$ {\rm E}(X) = \int_{ - \infty }^\infty {xdF(x)} = \int_{ - \infty }^a {xf_1 (x)dx} + a[F(a) - F(a - )] + \int_a^\infty {xf_2 (x)dx} . $$ -In case $f_1$ and $f_2$ are identically zero and, hence, $p=1$, this gives -$$ {\rm E}(X) = \int_{ - \infty }^\infty {xdF(x)} = a[F(a) - F(a - )] = a, $$ -which is obvious since $X$ has the $\delta_a$ distribution (that is, ${\rm P}(X=a)=1$). -Exercise 1. Suppose that $X$ takes finitely many values $x_1,\ldots,x_n$, with probabilities $p_1,\ldots,p_n$, respectively. Conclude that ${\rm E}(X) = \int_{ - \infty }^\infty {xdF(x)} = \sum\nolimits_{k = 1}^n {x_k p_k } $. -Exercise 2. Suppose that with probability $0 < p < 1$ a random variable $X$ takes the value $a$, and with probability $1-p$ it is uniform on $[0,1]$. Find the CDF $F$ of $X$, and compute its expectation (note that $X$ is neither a discrete nor a continuous random variable). -Note: The integral $I_1$ is, of course, a special case of $\int {h(x)dF(x)} $ (say, for $h$ a continuous function). 
In particular, letting $h=1$, we have $\int {dF(x)} = 1$ (which is a generalization of $\int {f(x)dx} = 1$). -EDIT: Further details. -Note that if $h$ is continuous, then ${\rm E}[h(X)]$, if it exists, is given by -$$ {\rm E}[h(X)] = \int {h(x)dF(x)}. $$ -In particular, if the $n$-th moment exists, it is given by -$$ {\rm E}[X^n] = \int {x^n dF(x)}. $$ -In principle, the limits of integration range from $-\infty$ to $\infty$. In this context, consider the following important example. Suppose that $X$ is any nonnegative random variable. Then its Laplace transform is -$$ {\rm E}[e^{ - sX} ] = \int {e^{ - sx} dF(x)} = \int_{ 0^- }^\infty {e^{ - sx} dF(x)}, \;\; s \geq 0, $$ -where $0^-$ can be replaced by $-\varepsilon$ for any $\varepsilon > 0$. While the $n$-th moment of (the nonnegative) $X$ is given, for any $n \geq 1$, by -$$ {\rm E}[X^n] = \int_{ 0^- }^\infty {x^n dF(x)} = \int_0^\infty {x^n dF(x)}, $$ -it is not true in general that also -$$ {\rm E}[e^{ - sX} ] = \int_{0 }^\infty {e^{ - sx} dF(x)}. $$ -Indeed, following the definition of the integral, -$$ {\rm E}[e^{ - sX} ] = \int_{ 0^- }^\infty {e^{ - sx} dF(x)} = e^{-s \cdot 0}[F(0)-F(0-)] + \int_{ 0 }^\infty {e^{ - sx} dF(x)}, $$ -hence the jump of $F$ at zero should be added (if positive, of course). In the $n$-th moment case, on the other hand, the corresponding term is $0^n[F(0)-F(0-)] = 0$, hence a jump of $F$ at zero does not affect the overall integral. -
-REPLY [6 votes]: The cumulative distribution function $F$ is related to the probability density function $f$ by $dF(x)/dx=f(x)$. The equation you have in terms of $F$ can be re-expressed in terms of $f$ by substituting in $dF(x) = f(x)\,dx$. In fact, for many purposes, you can take this as the definition of the differential term $dF$. However, in more general circumstances where $F$ is not differentiable and the PDF $f$ is not well-defined, the form involving $dF$ still holds (interpreting it as a Riemann-Stieltjes integral). For example, if the distribution is discrete, so that $F$ is piecewise constant, then $dF$ becomes a sum over Dirac distributions.<|endoftext|>
-TITLE: Computing $\sum_{m \neq n} \frac{1}{n^2-m^2}$ -QUESTION [8 upvotes]: A series arising in perturbation theory in quantum mechanics: -$\sum_{m\neq n} \frac{1}{n^2 - m^2}$, where $n$ is a given positive odd integer and $m$ runs through all odd positive integers different from $n$. I have a hunch that residue methods are applicable here, but I don't know complex analysis. -
-REPLY [14 votes]: You can write -$$ \frac{1}{n^2 - m^2} = \frac{1}{2n} \left\lbrace \frac{1}{m+n} - \frac{1}{m-n} \right\rbrace . \quad (1)$$ -Now if we sum up both sides over all odd $m \ne n ,$ taking into account that $n$ is odd, lots of cancelling goes on and we obtain -$$\sum_{m \ne n} \frac{1}{n^2 - m^2} = -\frac{1}{4n^2}.$$ -At first sight it appears the cotangent identity could be useful but it's not actually needed. -As a numerical check try summing the following with WolframAlpha -$$1/24 + 1/16 + \sum_{k=3}^\infty 1/(5^2 - (2k+1)^2),$$ -you will see that it is $-1/100,$ as expected. 
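-The same check is also easy to script; here is a short Python sketch (my addition, not part of the original reply; the cutoff of $10^6$ terms is an arbitrary choice):
-    def partial_sum(n, cutoff=10**6):
-        # truncated sum of 1/(n^2 - m^2) over odd m != n
-        return sum(1.0 / (n * n - m * m) for m in range(1, cutoff, 2) if m != n)
-    for n in (5, 7):
-        print(n, partial_sum(n), -1.0 / (4 * n * n))  # the two columns agree closely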
-Or try this: -$$1/48 + 1/40 + 1/24 + \sum_{k=4}^\infty 1/(7^2 - (2k+1)^2).$$ -You will get $-1/196.$ -EDIT: To clarify the cancellation taking place when we sum the RHS of $(1).$ -We have -$$\sum_{m \ne n, \,\, m \textrm{ odd} } \left\lbrace \frac{1}{m+n} - \frac{1}{m-n} \right\rbrace = \sum_{m \ne n, \,\, m \textrm{ odd} } \left\lbrace \frac{1}{n+m} + \frac{1}{n-m} \right\rbrace $$ -$$= \left\lbrace \left( \frac{1}{n+1} + \frac{1}{n-1} \right) + \left( \frac{1}{n+3} + \frac{1}{n-3} \right) + \left( \frac{1}{n+5} + \frac{1}{n-5} \right) + \cdots + \left( \frac{1}{2n-2} + \frac{1}{2} \right) \right\rbrace $$ -$$+ \left( \frac{1}{2n+2} - \frac{1}{2} \right) + \left( \frac{1}{2n+4} - \frac{1}{4} \right) + \left( \frac{1}{2n+6} - \frac{1}{6} \right) + \cdots $$ -and rearranging all the terms in the braces -$$= \left\lbrace \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2n-2} \right\rbrace + \left( \frac{1}{2n+2} - \frac{1}{2} \right) + \left( \frac{1}{2n+4} - \frac{1}{4} \right) + \cdots $$ -$$=\left\lbrace \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2n-2} \right\rbrace - \left\lbrace \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2n} \right\rbrace = - \frac{1}{2n}$$ -and hence the result. -
-REPLY [2 votes]: Use the cotangent identity at the very end of the page. -There are several ways to derive this identity; some use residues, others the Euler product formula for the sine. -EDIT: So we have the identity: -$$\frac{\pi z \cot (\pi z) - 1}{2z^2} = \sum_{m=1}^{\infty} \frac{1}{z^2-m^2}$$ -Now, we want to compute the sum $\sum_{n\neq m} \frac{1}{n^2-m^2}$, so we rewrite the identity as follows: -$$\sum_{\substack{m=1, \\ n\neq m}}^{\infty} \frac{1}{z^2-m^2} = \frac{\pi z \cot (\pi z) - 1}{2z^2} - \frac{1}{z^2-n^2}$$ -And we take the limit for $z \to n$ on both sides to get the desired sum. -EDIT 2: After some work rearranging, I arrived at the following identity -$$-\frac{\pi \tan \left(\pi \frac{z}{2}\right)}{4z} = \sum_{n \text{ odd}} \frac{1}{z^2-n^2}$$ -which can be used along the lines explained in the previous edit.<|endoftext|>
-TITLE: What is a Lie Group in layman's terms? -QUESTION [27 upvotes]: I'm having trouble getting my head around the concept. -Can someone explain it to me? -
-REPLY [48 votes]: I think that understanding comes through examples. The most fundamental example I believe to be the rotation group. Consider the sphere $S^2\subset \mathbb{R}^3$. The sphere has rotational symmetries. If we rotate the sphere by any angle, the sphere doesn't change. -The collection of all rotations forms a Lie group. The group property basically means that if we rotate the sphere over any angle $\alpha$, and after this over an angle $\beta$, it is the same as if we had rotated it in one go (over some different angle). Also, any rotation has an inverse (rotating it over the opposite angle). This makes the rotations a group. The "Lie" in Lie group means that these rotations can be made arbitrarily small. Many small rotations make for a big rotation. -Lie groups capture the concept of "continuous symmetries". -
-REPLY [16 votes]: Consider the set of $(n\times n)$ matrices that have non-zero determinant. Such a matrix corresponds to a system of linear equations ($n$ equations in $n$ unknowns) that has a unique solution. You can think of the solution as the unique point of intersection between the graph of a function and a horizontal hyperplane. Here it is helpful to think of $n=1$. 
In other words, the coefficients of the system correspond to a transformation of space: the variables $x_1, \ldots, x_n$ are transformed to $\sum_j a_{ij} x_j$. The set of such transformations forms a group: the matrices can be multiplied, each has an inverse, the multiplication is associative, and the identity transformation fixes each point of space. -Intuitively, it is easy to see which transformations are close to one another. They are close if they move points that are nearby to points that are nearby. Arithmetically, if the entries in the matrix are close, then the transformations are close: thus $0.14x + 0.33y$ is a reasonable approximation to $x/7+y/3$. -Thus the set of invertible $(n\times n)$ matrices is a space of invertible $(n\times n)$ matrices. What is not easy for a layman to see is that its spatial characteristics are defined via the determinant, since as a set, the $(n\times n)$ matrices are a subset of $n^2$-space. The non-singular matrices are the pull-back of a regular value of the determinant function. [There is a small lie here: this is true for matrices of determinant 1, but all non-zero determinant matrices deform onto that smaller space]. -One important spatial characteristic is that these matrices form a smooth manifold. This is something that is analogous to the surface of a sphere (which is NOT a Lie group), the surface of a torus (which is) or the $3$-dimensional sphere that consists of the set of $(x,y,z,w)$ such that $x^2+y^2+z^2+w^2=1$ (which also happens to be a Lie group). -From these examples, we abstract the idea of a Lie group which is a group (that can be thought of as a set of transformations or symmetries) that has the structure of a smooth manifold --- at small scales it is indistinguishable from ordinary Euclidean space. The multiplication and inversion maps are differentiable functions. And these multiplications occur between pairs of symmetries --- they should not be confused with the action of the matrices on the vector space which is where I started the discussion. -Examples include the real line, the non-zero real numbers, the circle, the torus, the $3$-sphere, the set of rotations of 3-dimensional space, and the special unitary groups, representations of which determine particles in physics. -There are some small problems with the definition that I gave. A smooth manifold is a topological space which is paracompact and Hausdorff (neither condition will play a role in the layman's understanding), and that is covered by coordinate charts with specific properties. I imagine that Wikipedia has the relevant definitions articulated carefully.<|endoftext|>
-TITLE: Characterizations of the $p$-Prüfer group -QUESTION [15 upvotes]: I'm an undergrad student fairly keen on algebra. Over the different algebra courses I've taken, I've often encountered the so-called $p$-Prüfer group in exercises but somehow never got around to them. Now I'm trying to take care of that, but there are some statements I've seen about this group which I don't know how to prove (maybe because I lack some more background in group theory, especially in the study of infinite abelian groups?) 
-
-Definition A $p$-group is a $p$-Prüfer group if it is isomorphic to $$C_{p^\infty}=\{e^{\frac{2k\pi i}{p^n}}:k\in \mathbb{Z}, n\in\mathbb{Z}^+\} \subset (\mathbb{C}^\times, \cdot)$$ -
-What I'm having trouble proving is: -
-The following are $p$-Prüfer groups: -1) An infinite $p$-group whose subgroups are totally ordered by inclusion, -2) An infinite $p$-group such that every finite subset generates a cyclic group, -3) An infinite abelian $p$-group such that $G$ is isomorphic to every proper quotient, -4) An infinite abelian $p$-group such that every subgroup is finite -
-Just for the record, what I (think I) could prove was that the following are $p$-Prüfer groups: -
-5) An injective envelope of $C_{p^n}$, for any $n\geq 1$, -6) A Sylow $p$-subgroup of $\frac{\mathbb{Q}}{\mathbb{Z}}$, -7) The direct limit of $0\subset C_p \subset C_{p^2}\subset ...$ -
-Here $C_{p^n}$ denotes a cyclic group of order $p^n$. -Any other characterizations of the $p$-Prüfer group are welcome. -
-REPLY [7 votes]: Arturo already explained the hint 1⇒7, 2⇒7 (and 1⇒2 is easy enough, I trust you can do it). I'll explain how to use $pG = \{ pg : g \in G \}$ to understand $G$ for 3 and 4. -If every nonzero quotient of $G$ is isomorphic to $G$, then what does $G/pG$ look like? Well, it is an elementary abelian $p$-group, so a vector space over $\mathbb{Z}/p\mathbb{Z}$. If it is nonzero, then it has a one-dimensional quotient: $\mathbb{Z}/p\mathbb{Z}$. By the hypothesis on $G$, that would mean $G$ itself is $\mathbb{Z}/p\mathbb{Z}$. Unfortunately such a group is not infinite, and so is not a group as in 3. Hence a group as in 3 must have $G=pG$. As it is a $p$-group, it is divisible, so a direct sum of Prüfer $p$-groups. However, such a direct sum always has a single Prüfer $p$-group as a quotient, and so $G$ must itself be the Prüfer $p$-group. -If every proper subgroup of $G$ is finite, then what does $pG$ look like? If it is finite, then $G/pG$ is infinite, and so infinite dimensional. Take a proper subspace. The preimage of that subspace in $G$ is a proper subgroup that is infinite. Oops. So $pG$ cannot be finite! So again $G=pG$, and $G$ is a direct sum of Prüfer $p$-groups. How many? Well, each summand is a subgroup, and so if there is more than one, then one has an infinite proper subgroup. So $G$ must itself be a single Prüfer $p$-group.<|endoftext|>
-TITLE: Fundamental group of a torus with points removed -QUESTION [9 upvotes]: Question 5.33 from "Topology and its Applications" by Basener is to compute the fundamental group of the torus ($T^2$) with $n$ points removed. I can "see" in my mind that if we remove one point we get a bouquet of two circles. Less clear is what happens when we remove two (or more) points. Any hints? -
-REPLY [19 votes]: It's a bouquet of $n+1$ circles, so the free group on $n+1$ generators. Think of a rectangle with $n-1$ horizontal lines across it. Roll it up into a tube, so you have a line segment with $n+1$ circles attached to it. Then identify the top and bottom circles. So you have a circle with $n$ circles attached to it, which is homotopy equivalent to a bouquet of $n+1$ circles. -
-REPLY [2 votes]: Hint: removing $n$ points will also give something that only consists of 1-dimensional things. -You can also use van Kampen's theorem to calculate the fundamental group directly. EDIT: Err, you can't, at least not in the way I thought. Let $T_n$ be a torus with $n$ holes. 
Then, van Kampen's theorem gives a short exact sequence of groups -$$ 1 \to \pi_1(S^1) \to \pi_1(T_n) \to \pi_1(T_{n-1}) \to 1$$ -but that's not enough information to deduce $\pi_1(T_n)$.<|endoftext|>
-TITLE: When are the principal series of GL_2(F) square integrable? -QUESTION [9 upvotes]: Let $F$ be a local field, $G=GL_2(F)$, $\chi_1,\chi_2$ quasicharacters, and $B(\chi_1,\chi_2)$ be the principal series (we are assuming $\chi_1,\chi_2$ to be such that the representation is irreducible). -Is $B(\chi_1,\chi_2)$ a constituent of $L^2(Z\backslash G)$? -I'm pretty sure it never is, but can't prove it. I am familiar with Bump up to 4.5. -On the other hand, say $\chi_1\chi_2^{-1} = |\cdot |^{\pm 1}$ (so not a principal series). Same question? -This time I think it is... -
-REPLY [2 votes]: If the principal series were square integrable, they would be discrete points in the Fell topology. But it can be seen that $B(\chi_1, \chi_2)$ varies continuously with the pair $(\chi_1, \chi_2)$ in the usual topology on local characters, as long as it is irreducible. -Now, when $\chi_1 \chi_2^{-1}$ is the norm or its inverse, the representation contains the Steinberg representation (times a one-dimensional twist) as a submodule or quotient, and is then no longer irreducible. Otherwise, it is irreducible. -The Steinberg representation (times a one-dimensional twist) and the supercuspidal representations are square-integrable, if the central character is unitary.<|endoftext|>
-TITLE: The role of the "hidden" probability space on which random variables are defined -QUESTION [14 upvotes]: One learns in a probability course that a (real) random variable is a measurable mapping of some probability space $(\Omega,\mathcal{A},\mathbf{P})$ into $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. But as soon as one gets into topics that are a little advanced, the space $(\Omega,\mathcal{A},\mathbf{P})$ is not mentioned unless it is absolutely necessary. 
After a long time of frustration, I have become quite comfortable with this language. But some things still trouble me. The following kind of reasoning comes up in the book I'm reading: -The author says that $(X_i)_{i\in I}$ is a family of random variables and specifies the distribution of each random variable. Then he phrases some (random) proposition $A((X_i)_{i\in I})$ (this is a little imprecise, I hope you get the meaning) and talks about $\mathbf{P}[A((X_i)_{i\in I}) \text{ holds}]$. -My question: Let $(\Omega',\mathcal{A}',\mathbf{P}')$ be another probability space and $(Y_i)_{i\in I}$ random variables such that, for each $i\in I$, the distribution of $Y_i$ is the same as the distribution of $X_i$. Is it then obvious that $\mathbf{P}[A((X_i)_{i\in I}) \text{ holds}]=\mathbf{P}'[A((Y_i)_{i\in I}) \text{ holds}]$? -Now my guess is that this is true, but needs a proof, which is not completely trivial in case $I$ is infinite, at least not for a beginner. However, in the book this problem isn't discussed at all. So did I miss something? -Edit: -I'm not sure whether the question was correctly understood, so I'll rephrase it a little. -Let $(\Omega,\mathcal{A},\mathbf{P})$ and $(\Omega',\mathcal{A}',\mathbf{P}')$ be two probability spaces, $I$ a set, and $(X_i)_{i\in I}$ and $(Y_i)_{i\in I}$ families of random variables on $(\Omega,\mathcal{A},\mathbf{P})$ and $(\Omega',\mathcal{A}',\mathbf{P}')$ respectively such that, for each $i\in I$, the distribution of $X_i$ is equal to the distribution of $Y_i$. Let $J$ be a countable subset of $I$ and $B_j$ a Borel set for each $j\in J$. The question is: -Is it obvious that $\mathbf{P}\left[\bigcup_{j\in J}\{X_j\in B_j\}\right]=\mathbf{P}'\left[\bigcup_{j\in J}\{Y_j\in B_j\}\right]$? -The sets $B_j$ and the union over $J$ are just an example. What I mean, but cannot formalize: Let $A\in\mathcal{A}$ and $A'\in\mathcal{A}'$ such that there is an expression for $A$ in terms of the $X_i$, and $A'$ is given by the same expression replacing $X_i$ by $Y_i$ for each $i$. Is it obvious that $\mathbf{P}[A]=\mathbf{P}'[A']$? -
-REPLY [2 votes]: The distributions of each $X_i$ and each $Y_i$ are far from being sufficient to decide anything about the families $(X_i)_i$ and $(Y_i)_i$. -Assume for instance that $X_1$, $X_2$, $Y_1$ and $Y_2$ are all uniform $\pm1$ Bernoulli random variables and that $X_1=X_2=Y_1=-Y_2$. Then the event $[X_1=1\ \mbox{or}\ X_2=1]$ has probability $\frac12$ while the event $[Y_1=1\ \mbox{or}\ Y_2=1]$ has probability $1$.<|endoftext|>
-TITLE: What is $T\mathbb{S}^2$? -QUESTION [11 upvotes]: I recently learned that the only parallelizable spheres are $\mathbb{S}^1$, $\mathbb{S}^3$, and $\mathbb{S}^7$. This led me to wonder: -What is $T\mathbb{S}^2$? Is it diffeomorphic to a more familiar space? What about $T\mathbb{S}^n$ for $n \neq 1, 3, 7$? -EDIT (for precision): Is $T\mathbb{S}^2$ diffeomorphic to some finite product, connected sum, and/or quotient of spheres, projective spaces, Euclidean spaces, and linear groups? -
-REPLY [7 votes]: A slightly random answer, but if we concretely identify $TS^{n-1}$ as the embedded real manifold $\{ (x, v) \in \mathbb{R}^n \times \mathbb{R}^n : \| x \| = 1, \langle x, v \rangle = 0 \}$, then it is diffeomorphic to the complex affine quadric $\{ (z_1, \ldots, z_n) \in \mathbb{C}^n : z_1^2 + \cdots + z_n^2 = 1 \}$. (This was an amusing homework exercise I did yesterday.)<|endoftext|>
-TITLE: What is so special about negative numbers $m$, $\mathbb{Z}[\sqrt{m}]$? 
-QUESTION [15 upvotes]: This question is based on a homework exercise: -"Let $m$ be a negative, square-free integer with at least two prime factors. Show that $\mathbb{Z}[\sqrt{m}]$ is not a PID." -In an aside comment in the book we're using (Milne), it was noted that a lot was known about quadratic extensions with $m$ negative, but very little with $m$ positive. Why is this? -I have not solved the exercise, mainly because I can't think of a property of negative numbers that positive numbers don't possess ($|m|\neq m$, but that's trivial). -It seems there should be some relatively straightforward way to calculate class numbers for quadratic extensions with $m$ negative and composite. Or maybe the way to do this is to produce an ideal that is not principal, but then again I must find this property of negative numbers that separates them from positive ones. -
-REPLY [8 votes]: Finding an irreducible that's not prime for $d<-2$ should suffice, as a prime is the same as an irreducible in a PID. -A good candidate is $2$, which is not prime in $Z[\sqrt{d}]$: it divides $d(d-1) = (d+\sqrt{d})(d-\sqrt{d})$ but divides neither factor, since $\frac{d \pm \sqrt{d}}{2} \notin Z[\sqrt{d}]$. -To show that $2$ is irreducible, assume to the contrary that $2 = \alpha \beta$ where neither factor is a unit. -Take the norm, and note that the norm $N(a + b\sqrt{d}) = a^2 - db^2$ is multiplicative, always nonnegative, and $N(u)=1$ iff $u$ is a unit; the units are $+1, -1$, which wouldn't be the case if $d$ were positive, as Arturo Magidin suggested in a previous comment. -Then $N(\alpha)=N(\beta)=2$, i.e. $a^2 - db^2 = 2$. Then you may deduce that if $d<-2$ then no such $a$ or $b$ exists, and the result follows. -Addendum: -Let $d<-5$ be squarefree such that $d = pr$, where $p$ is prime. -The ideal $(p, \sqrt{d})$ is not principal in $Z[\sqrt{d}]$. -Suppose $Z[\sqrt{d}]$ is a PID; then $(p, \sqrt{d}) = (n)$ for some $n = a + b\sqrt{d}$. -Taking the norm, $N(n) = a^2 - db^2$ divides $d$ (by the inclusion of ideals). -If $|b| = 1$, then $a^2 - d$ divides $d$, forcing $a = 0$. But then $N(n)=-d$ would divide $N(p)=p^2$, which is impossible since $d = pr$. The case $|b| > 1$ is also trivial. -Case $b = 0$: then $a^2$ divides $d$, and since $d$ is squarefree, $a=\pm 1$. Hence $(n) = (1) = Z[\sqrt{d}] = (p, \sqrt{d})$. Then $1 = px + \sqrt{d}y$ for some $x, y$ in $Z[\sqrt{d}]$, but then $ps - dt = 1$ for integers $s, t$, which is impossible since $d = pr$.<|endoftext|>
-TITLE: Basis for Kernel and Image of the following T -QUESTION [6 upvotes]: I am working on this practice problem, and I was wondering if I could get some help. -I have a $T:\mathbb{R}^{2\times 2}\to \mathbf{P}_{2}$, that is, from $2\times 2$ matrices to polynomials of degree at most 2. The transformation is given as follows: -$$T\left(\begin{bmatrix} a & b\\ c & d \end{bmatrix}\right) = a-c+2d+(b+2c-d)t+(a-c+3d)t^{2}.$$ -To get a basis of the kernel of $T$, I solved the system of equations needed to get the zero element of $\mathbf{P}_{2}$: $a-c+2d=0$, $b+2c-d=0$ and $a-c+3d=0$. As a result, I got the basis of the kernel equal to $$\begin{bmatrix} 1 & -2\\ 1 & 0 \end{bmatrix}.$$ -When it comes to the image, if I understand correctly, I need to factor out all the variables separately, to see what it is that they span. So I got $a(1+t^{2})+b(t)+c(-t^{2}+2t-1)+d(3t^{2}-t+2)$. So would I be correct in saying that these polynomials (without the coefficients $a$, $b$, $c$, and $d$) form a basis for the image of $T$? Thank you! -
-REPLY [3 votes]: The simplest way to compute the image, abstractly, is to take a basis of the domain, and see what its image spans. 
So here, you could take the standard basis of $\mathbb{R}^{2\times 2}$, to wit, -$$\left(\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right),\quad \left(\begin{array}{cc} 0 & 1\\ 0 & 0 \end{array}\right),\quad \left(\begin{array}{cc} 0 & 0\\ 1 & 0 \end{array}\right),\quad \left(\begin{array}{cc} 0 & 0\\ 0 & 1 \end{array}\right).$$ -Here, you have that the images of these basis vectors are, respectively, -$$1+t^2,\quad t,\quad -1+2t-t^2,\quad 2-t+3t^2.$$ -So you would want to find out the span of these four polynomials in $\mathbf{P}_2$ (note that, of course, these four polynomials must be linearly dependent; after all, $\dim(\mathbf{P}_2) = 3$). -Alternatively, you can think about the matrix representation of $T$. Convince yourself that the image of $T$ corresponds to the column space of the matrix that represents $T$. One way to find the column space is to find a row-echelon form of the matrix, and then take the columns in the original matrix that correspond to the columns that contain the pivots of the row-echelon form. Since you may have already computed the row-echelon form of the matrix in order to find a basis for the null-space, it's likely you have already done all the needed work and can just exploit it. -Of course, as you noted in comments, if you already know that the dimension of the image is $3$ (by the Dimension Theorem, aka the Rank-Nullity Theorem, say), and you know that $\mathbf{P}_2$ has dimension $3$, then you know the image is all of $\mathbf{P}_2$. The descriptions above are the general procedures you might use with any linear transformation.<|endoftext|>
-TITLE: Mapping class groups in higher dimensions -QUESTION [7 upvotes]: I am in the process of learning about Mapping class groups. At this point it seems like most of what I've read involves very low dimensional (surfaces and 3-manifolds) applications. -I was wondering if they were studied (or arise naturally) in higher dimensional settings? -In particular, any references to their uses in homotopy theory would be appreciated. -
-REPLY [6 votes]: In high dimensions there are several variants that are all distinct (whereas for surfaces they all agree). There are mapping class groups in the "homotopy category", meaning the homotopy-classes of homotopy equivalences of a topological space, with composition giving the group structure. This is a "core" object of study of classical algebraic topology. In the topological/PL/smooth categories there are isotopy classes of homeomorphisms/PL automorphisms/diffeomorphisms of a manifold. The smooth category gets quite a bit of attention -- for example the smooth category mapping class group of $S^n$ (if you restrict to orientation-preserving diffeomorphisms) is the group of homotopy $(n+1)$-spheres, provided $n \geq 5$. There has been some work on stable high-dimensional mapping class groups by people like Giansiracusa (Swansea). -I have to head out but I can add more later. -The Giansiracusa reference is this: http://www.arxiv.org/abs/math.gt/0510599 -Modulo some qualifiers, the statement is that the stable mapping class group of a 4-manifold is the group of automorphisms of homology that preserve the intersection form. -Mapping class groups of products of circles $(S^1)^n$ in the topological, PL and smooth categories were computed by Hatcher in his "Higher simple homotopy theory" paper. -Is there anything in particular you're interested in? 
-edit: In a little self-plug, David Gabai and I recently proved a result that contrasts heavily with the stability results referenced in the Giansiracusa paper. Specifically, we show that the mapping class groups (smooth diffeomorphisms) of $S^1 \times D^3$ and $S^1 \times S^3$ are not finitely generated. Tadayuki Watanabe has also been able to prove this result for $S^1 \times D^3$ using fairly different techniques.<|endoftext|>
-TITLE: How can I show that the set of rational numbers with denominator a power of two form a dense subset of the reals? -QUESTION [5 upvotes]: How can I show that the set of rational numbers with denominator a power of two form a dense subset of the reals? -
-REPLY [7 votes]: This follows immediately from the Archimedean property. To find a dyadic $\rm\ m/2^n \in (r,s)\ $ start at any dyadic $\rm\ k < r\ $ (e.g. an integer) and keep taking dyadic step sizes smaller than the interval, say $\rm\ 1/2^j\ <\ s-r\:.\: $ By the Archimedean property eventually you'll land in the interval, necessarily at some dyadic rational (being a sum of such). This is a special case of the proof I explained here.<|endoftext|>
-TITLE: How to find the distance between a point and line joining two points on a sphere? -QUESTION [11 upvotes]: How do I calculate the distance between the line joining two points on a spherical surface and another point on the same surface? I have illustrated my problem in the image below. -
-In the above illustration, the points A, B and X lie on a spherical surface, and I need to find the distance between the line $(A,B)$ and $X$. I am not a mathematics guy. If possible, please illustrate the solution in a way that non-mathematics guys can understand. Thanks. -
-REPLY [8 votes]: The question is a little ambiguous: the three previous answers used three different interpretations. If the OP wants the surface distance from the point $X$ to the geodesic line $\overleftrightarrow{AB}$, the answer is straightforward. If the desired distance is between $X$ and the segment $\overline{AB}$, a bit more work is required. -Using longitude ($\theta$) and latitude ($\phi$), let $A=(\theta_A, \phi_A)$, $B=(\theta_B, \phi_B)$, and $X=(\theta_X, \phi_X)$. The direction vectors for these points are -$$\hat A = (\cos \phi_A \cos \theta_A, \cos \phi_A \sin \theta_A, \sin \phi_A),$$ $$ \hat B = (\cos \phi_B \cos \theta_B, \cos \phi_B \sin \theta_B, \sin \phi_B), $$ -$$\hat X = (\cos \phi_X \cos \theta_X, \cos \phi_X \sin \theta_X, \sin \phi_X).$$ -Let $\Phi$ be the distance on the unit sphere between $\hat X$ and the geodesic line passing through $\hat A$ and $\hat B$. Imagine the plane $\mathcal{P}$ passing through $\hat A$, $\hat B$, and the origin, which cuts the unit sphere in half. Then the Euclidean distance of $\hat X$ from the plane $\mathcal{P}$ is $\sin \Phi$. Now let $\hat n$ be a unit normal vector for $\mathcal{P}$, and we have -$$\hat n = \frac{\hat A \times \hat B}{\left| \hat A \times \hat B \right|}$$ -$$\sin \Phi = | \hat n \cdot \hat X |$$ -So, if the radius of the original sphere is $R$, then the surface distance from the point $X$ to the geodesic line $\overleftrightarrow{AB}$ is $R \Phi$. -To determine the distance to the segment $\overline{AB}$, we need to determine whether or not the point of line $\overleftrightarrow{ A B}$ that $ X$ is closest to is between $A$ and $B$. If the closest point is between $A$ and $B$, then the surface distance to the segment is $R \Phi$. 
Otherwise, the distance to the segment is the distance to the closest endpoint, which is best resolved through the methods described in the Wikipedia article referenced by Ross Millikan. One way to make this determination is to find the point $\hat{X}_{\textrm{proj}}$, the projection of $\hat X$ onto the plane $\mathcal{P}$, -$$\hat{X}_{\textrm{proj}} = \hat X - (\hat n \cdot \hat X) \hat n,$$ -and then normalize $\hat{X}_{\textrm{proj}}$, -$$\hat x = \frac{\hat{X}_{\textrm{proj}} }{| \hat{X}_{\textrm{proj}} |}.$$ -So determining whether the point of line $\overleftrightarrow{AB}$ that $X$ is closest to is between $A$ and $B$ reduces to determining whether $\hat x$ is between $\hat A$ and $\hat B$. -Now consider the mid-point of $\hat A$ and $\hat B$, -$$M=\frac{\hat A + \hat B}{2}$$ -If the projection of $\hat x$ on the ray $\overrightarrow{OM}$ is further along than the projection of $\hat A$ or $\hat B$, then $\hat x$ is between $\hat A$ and $\hat B$; that is, if $\; \hat x \cdot M > \hat A \cdot M \; \; (=\hat B \cdot M)$, then $\hat x$ is between $\hat A$ and $\hat B$, otherwise not.<|endoftext|>
-TITLE: What is the best numerator and denominator couple to get the value of $\pi$? -QUESTION [5 upvotes]: I need to express the value of $\pi$ as numerator/denominator. What is the best pair, considering that the numerator should be less than or equal to $2^{62}$? Or how can I get this pair? -
-REPLY [3 votes]: I think $\frac{129988029236677443584}{41376474791582048256}$ is the closest fraction to $\pi$ with a numerator less than $2^{62}$. -Edit: I just realized that this question is four years old.<|endoftext|>
-TITLE: Under what condition we can interchange order of a limit and a summation? -QUESTION [60 upvotes]: Suppose $f(m,n)$ is a double sequence in $\mathbb R$. Under what condition do we have $\lim\limits_{n\to\infty}\sum\limits_{m=1}^\infty f(m,n)=\sum\limits_{m=1}^\infty \lim\limits_{n\to\infty} f(m,n)$? Thanks! -
-REPLY [3 votes]: In this answer, I will focus on uniform convergence. That has been discussed in the comments, but not in any proper answer. -The interchange is valid if the partial sums are uniformly convergent, in the sense that $$\sup_{n}\left|\sum_{m>N} f(m,n) \right| \to 0 \ \ \text{as} \ N \to \infty \ \ \ \ \ \ \ \ \ \ (1)$$ -More precisely, we show that if (i) $\sum_{m} f(m,n)$ converges for each $n$, (ii) $\lim_{n} f(m,n)$ exists for each $m$ and (iii) $(1)$ holds, then $\sum_{m} \lim_{n} f(m,n)$, $\lim_{n} \sum_{m} f(m,n)$ both exist and are equal. -There are cases where the above theorem can be applied, but neither the DCT nor the MCT can. An example is something like $f(m,n) = 2^{-(mn+1)} + \frac{(-1)^m}{m}$. It is a weaker condition than that required by the dominated convergence theorem, in the sense that if the DCT applies to $f(m,n)$, i.e., $|f(m,n)| \leq K_{m}$ where $\sum_{m} K_{m} < \infty$, then $(1)$ holds, by the Weierstrass M-test. -The proof follows from the following fact: if $g_n \to g$ uniformly in a metric space $E$, with the $g_n$'s continuous, then $g$ is continuous. With a bit of work, this theorem can be applied in our context. The idea is to identify convergent sequences $(a_n)_n$ as precisely the continuous functions on $E=\{1/n : n \in \mathbb{N}\} \cup \{0\}$ with the induced (Euclidean) metric. If we are convinced of this identification, let us define $g_N(1/n) = \sum_{m=1}^{N} f(m,n)$. 
This is a finite sum, hence in light of (ii), we deduce $$\lim_{1/n \to 0^{+}} g_{N}(1/n) = \sum_{m=1}^{N} \lim_{n} f(m,n).$$ Thus, defining $g_{N}(0) = \sum_{m=1}^{N} \lim_{n} f(m,n)$, we have that the $g_{N}$'s are continuous on $E$ and by $(1)$ converge uniformly on $E \setminus \{0\}$. It is not hard to check that this implies they converge uniformly on $E$. Indeed, if $|g_{N'}(x) - g_{N''}(x)| < \epsilon$ for $N', N''>M_{\epsilon}$, $x \in E \setminus \{0\}$, we can take $x \to 0^{+}$ to get $|g_{N'}(0) - g_{N''}(0)| \leq \epsilon$ since the $g_{N}$'s are continuous. Hence $$g(0) = \lim_{N} g_{N}(0) = \sum_{m} \lim_{n} f(m,n)$$ is well-defined, and since $g$ is continuous (as the uniform limit of continuous functions), we deduce $$\sum_{m} \lim_{n} f(m,n) = g(0)= \lim_{1/n \to 0^{+}} g(1/n) = \lim_{n} \sum_{m} f(m,n). $$
-
-However, it is very important to be careful. The result above only applies if the partial sums converge uniformly. It is possible to have functions $f_n:\mathbb{N} \to \mathbb{R}$ such that $f_n \to f$ uniformly, i.e., $\sup_{m} |f_n(m) - f(m)| \to 0$, but $\lim_{n} \sum_{m} f_n(m) \neq \sum_{m} \lim_{n} f_n(m)$. For instance, define $f_n(m)=1/n$ for $1 \leq m \leq n$, $f_n(m)=0$ for $m>n$. Then we can check that $\sup_{m} |f_n(m)| \leq 1/n \to 0$ but $1 = \lim_{n} \sum_{m} f_n(m) \neq \sum_{m} \lim_{n} f_n(m) = 0$.
-
-A measure-theoretic generalization of these ideas is that of uniform integrability.
-Uniform integrability can be used, among other things, to prove a stronger version of the DCT, the Vitali convergence theorem.<|endoftext|>
-TITLE: How to prove $\max_{x \in I} |f(x)| \leq \max_{x \in I} |f'(x)|$?
-QUESTION [8 upvotes]: Today we had a mock exam in analysis. I wasn't able to solve one of the exercises and I have no idea what theorem to apply in order to solve it:
-Let $I=[0,1]$ and $f: I \rightarrow \mathbb{R}$ be continuously differentiable. Assume that $f$ has a root. Show: $$\max_{x \in I} |f(x)| \leq \max_{x \in I} |f'(x)|$$
-Does this theorem have a name? What other theorem will I need in order to prove it? I'm sure the fact that the function has a root is important, but I don't see why and how to make use of it...
-Thanks for your help!
-REPLY [9 votes]: The example $f(x)\equiv 1$ shows that having a root is important.
-Let $x_0\in I$ be such that $|f(x_0)|=\max_{x\in I}|f(x)|$, and let $r\in I$ be such that $f(r)=0$. Then $\left|\frac{f(x_0)-f(r)}{x_0-r}\right|\geq |f(x_0)|$, and you can apply the Mean Value Theorem to finish. Continuity of the derivative isn't necessary, just continuity of $f$ on $I$ and differentiability on $(0,1)$, except that with continuity of $f'$ you can really say that the right-hand side of the inequality is a max rather than a sup.
-REPLY [6 votes]: I don't know if this theorem has a name, but you're guessing right that it is important that $f$ has a root (otherwise adding a constant allows you to make the values of $f$ arbitrarily large without changing $f'$). One proof is a straightforward application of the fundamental theorem of calculus and a standard estimate on the integral. Indeed, if $x_{0}$ is such that $f(x_{0}) = 0$ then $f(x) = \int_{x_{0}}^{x} f'(t)\,dt$. Now combine this with the fact that $|\int_{a}^{b} h| \leq \int_{a}^{b}|h|$ for all $a,b$ and all integrable $h$.
-REPLY [4 votes]: Hints: Mean value theorem between a root and any point + the fact that the diameter of $I$ is $1$.
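-A quick numerical sanity check of the inequality, with an arbitrarily chosen $f$ that has a root (Python/NumPy; the choice of $f$ is just for illustration):

    import numpy as np

    # f(x) = x(1 - x) has roots at 0 and 1; f'(x) = 1 - 2x.
    x = np.linspace(0.0, 1.0, 100001)
    f = x * (1.0 - x)
    fp = 1.0 - 2.0 * x
    print(np.abs(f).max(), np.abs(fp).max())   # 0.25 and 1.0
    assert np.abs(f).max() <= np.abs(fp).max()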
-By the way, continuity on the closed interval $I$ and differentiability on the interior of $I$ are sufficient.<|endoftext|>
-TITLE: Convert Frobenius canonical form to Jordan canonical form
-QUESTION [5 upvotes]: Recently I've come across a few cases where I had to evaluate $e^{At}$ (differential equations) where $A$ is in Frobenius canonical form. The algorithm looks like
-$$e^{At} = Pe^{Jt}P^{-1},$$ so I need to do a Jordan decomposition. From the Frobenius canonical form I can easily write the characteristic equation, compute the roots, which are the eigenvalues, and write the Jordan canonical form.
-This was the easy part. Now the other part: compute the matrices $P$ and $P^{-1}$.
-Is there any standard form of those matrices if the matrix $A$ is in Frobenius canonical form?
-REPLY [8 votes]: The matrix P is the matrix of coefficients of certain divisors of the characteristic polynomial of the Frobenius canonical form.
-
-In fact this works for any companion matrix, not just the companion matrix of a power of an irreducible polynomial.
-Suppose a matrix A has 1s above the diagonal, and the last row is $[ v_0, v_1, v_2, v_3, \ldots, v_n ]$, so
-$$ A = \begin{bmatrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 0 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \ldots & 1 & 0 \\ 0 & 0 & 0 & 0 & \ldots & 0 & 1 \\ v_0 & v_1 & v_2 & v_3 & \ldots & v_{n-1} & v_n \\ \end{bmatrix}.$$ Then (by the Cayley-Hamilton theorem)
-$$ A^{n+1} = v_0 A^0 + v_1 A^1 + v_2 A^2 + \ldots + v_n A^n.$$
-Suppose $x-\alpha$ is a factor of
-$$ f(x) = x^{n+1} - v_n x^n - \ldots - v_1 x^1 - v_0 x^0 $$
-and write
-$$ f(x)/(x-\alpha) = w_n x^n + w_{n-1} x^{n-1} + \ldots + w_1 x^1 + w_0 x^0, $$
-where of course $w_n = 1$. Then $w = [ w_0, w_1, w_2, \ldots, w_n ]$ is a left eigenvector of A, since $wA = \alpha w$.
-In more detail, $$wA = [ v_0, w_0 + v_1, w_1 + v_2, \ldots, w_{n-1} + v_n ]$$ and $$f(x) = (x-\alpha)( w_n x^n + w_{n-1} x^{n-1} + \ldots + w_1 x^1 + w_0 x^0) = x^{n+1} + (w_{n-1}-\alpha) x^{n} + \ldots,$$ so by equating coefficients of $x^n$, one gets $v_n = \alpha - w_{n-1}$, i.e. $v_n + w_{n-1} = \alpha w_n$. Similarly, $v_{n-1} = \alpha w_{n-1} - w_{n-2}$, and in general $v_i = \alpha w_i - w_{i-1}$, so $v_i + w_{i-1} = \alpha w_i$ and $wA = \alpha w$.
-Something similar even works with repeated roots, which gives you the Jordan blocks.
-So to find your matrix P, find each root $\alpha$ of $f(x)$ with multiplicity $k$; then the rows of P are the coefficients (in increasing order of power of $x$) of $f(x)/(x-\alpha)^i$, for $i=1$ to $k$, with the corresponding diagonal entry of the matrix just being $\alpha$, and the run from $1$ to $k$ forming the Jordan block.
-
-Example 1
-Find the JCF of
-$$A = \begin{bmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\-1&4&-6&4\end{bmatrix}$$
-The characteristic polynomial is just $f(x) = x^4-4x^3+6x^2-4x+1$ since it is a companion matrix of $f$. Of course, $f(x) = (x-1)^4$ is easy to factor.
-We calculate
-$$f(x)/(x-1) = 1x^3 - 3x^2 + 3x - 1,$$
-$$f(x)/(x-1)^2 = 0x^3+1x^2-2x+1,$$
-$$f(x)/(x-1)^3 = 0x^3+0x^2+x-1,\text{ and}$$
-$$f(x)/(x-1)^4 = 0x^3+0x^2+0x+1.$$
-We then write down the coefficients (in reverse order, I guess):
-$$P = \begin{bmatrix} -1 & 3 & -3 & 1 \\ 1 & -2 & 1 & 0 \\ -1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$
-so that
-$$ P A P^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$<|endoftext|>
-TITLE: A discontinuous linear function over the rationals
-QUESTION [5 upvotes]: Possible Duplicate:
-On sort-of-linear functions
-
-It is an interesting exercise to show that $\operatorname{Gal}(\mathbb R ~\colon \mathbb Q)$ is trivial. The only solution I know hinges on the fact that the automorphism is order-preserving, which in turn depends on the fact that $\theta(xy)=\theta(x)\theta(y)$ for $\theta \in \operatorname{Aut} \mathbb R$.
-Now, a function $L:\mathbb R \to \mathbb R$ with just the property that $L(x+y)=L(x)+L(y)$ can be shown to preserve multiplication on the rationals. And, I have been unsuccessful in trying to extend this fact to the reals. This could be because such a function could be discontinuous, an example of which I also have failed to construct (it's a bad day).
-My question:
-
-Can you give me an example of a discontinuous function $L:\mathbb R \to \mathbb R$ with the property that $L(x+y)=L(x)+L(y)$ and $L|_{\mathbb Q}= \text{identity}$?
-
-An idea: Perhaps we could consider a function that preserves order on rationals but reverses it on irrationals.
-
-REPLY [5 votes]: This is impossible to do in ZF alone, but possible with a Hamel basis for $\mathbb{R}$ as a $\mathbb{Q}$ vector space. Hence no "explicit" example can be expected. Complete $\{1\}$ to a Hamel basis $B$, let $x\in B\setminus\{1\}$, and let $f:\mathbb{R}\to\mathbb{R}$ be the unique linear function such that $f(x)=0$ and $f(b)=b$ for all $b\in B\setminus\{x\}$; in particular, $f(1)=1$ implies that $f$ is the identity on the rationals. Because $x$ can be approximated by rationals (not to mention with rational multiples of other elements of $B$), $f$ is not continuous at $x$. In fact, $f$ is not continuous anywhere.<|endoftext|>
-TITLE: The Pythagorean theorem and Hilbert axioms
-QUESTION [9 upvotes]: Can one state and prove the Pythagorean theorem using Hilbert's axioms of geometry, without any reference to arithmetic?
-Edit: Here is a possible motivation for this question (and in particular for the "state" part of this question). It is known that the theory of Euclidean geometry is complete. Every true statement in this theory is provable.
-On the other hand, it is known that the axioms of (Peano) arithmetic cannot be proven to be consistent. So, basically, I ask if there is a reasonable theory which is known to be consistent and complete, and in which the Pythagorean theorem can be stated and proved.
-In summary, I guess I am asking - can we be sure that the Pythagorean theorem is true? :)
-REPLY [10 votes]: Although students are seldom aware of this fact, the Pythagorean Theorem, as described by Euclid, makes absolutely no reference to numbers. Where students say "the square of the hypotenuse," Euclid wrote "the square on the hypotenuse." And the assertion is that this is the same as the squares on the two sides. Here "same" means content, and "content" is not explicated further. Neither of the two proofs of the Pythagorean Theorem in Elements makes any reference to arithmetic.
-Hilbert, in his axiomatization, did not make any reference to arithmetic either. But Hilbert's axiomatization is really not suitable for a further discussion of the OP's question.
-This is because Hilbert's axiomatization of geometry very much shows its age. His Axiom of Continuity (completeness) is second-order. Hilbert developed the axiomatization many years before he started to take a serious interest in Logic.
-A more modern first-order axiomatization, or series of axiomatizations, is due to Tarski. Again, the axioms do not mention or use arithmetic. But naturally a version of the Pythagorean Theorem is derivable from the axioms.
-Tarski showed that his theory is complete. It is recursively axiomatized, and hence the theory is decidable.
-This reminds me of a famous reply that Euclid is said to have made to one of the Ptolemies, when the latter asked whether there was an easier path to geometry than pushing one's way through the thickets of Elements (I am paraphrasing). Euclid is said to have replied something to the effect that there is no royal road to geometry.
-Presumably, the story, like most such stories, is false. For one thing, essentially the same story is told of Menaechmus and Alexander the Great. For another, it can be unhealthy to dis a king. Euclid would surely not risk having his grant, and perhaps other things, cut off.
-Anyway, if Euclid did make that comment about geometry, he was wrong. Any king or queen with access to sufficient computing power can sip wine while the machine crunches through a problem.
-I guess I should remark that Tarski's algorithm was grossly inefficient. But more recently there has been significant progress.
-Back to numbers! Tarski showed that any model for his geometry is isomorphic to the geometry of $\mathbb{F}^2$, where $\mathbb{F}$ is a real closed field. (Hilbert had shown, sort of, that the geometry of his synthetically defined plane was the geometry of $\mathbb{R}^2$.)
-Tarski's algorithm for geometry depends on the fact (that he proved) that the theory of real closed fields is complete. Since the theory is recursively (but not finitely) axiomatizable, it is decidable. The decision procedure for elementary geometry involves translating a geometric problem, via coordinatization, into a sentence of "elementary algebra" and then determining whether that sentence is true in a real closed field. All such fields are elementarily equivalent, so if a sentence is true in one such field, it is true in all.
-Note that Number Theory cannot be developed within the first-order theory of real closed fields. There is no formula $N(x)$ in that theory such that $N(x)$ holds iff $x$ is an integer. As we know, elementary Number Theory, as opposed to the theory of real closed fields, is undecidable.
-And finally, let me reassure the OP that the Pythagorean Theorem is true. And any of the usual axiomatizations of Number Theory is consistent. We all know that: the axioms are obviously true in the non-negative integers. We can prove that the consistency of any reasonably useful axiomatization of Number Theory cannot be proved within that theory, but that's an entirely different matter.<|endoftext|>
-TITLE: Some questions concerning the size of proper classes in ZFC
-QUESTION [8 upvotes]: For some formulae $\phi(x)$ it can be proved from the axioms of ZFC that there is no set $X$ with $(x)x\in X \equiv \phi(x)$.
Thus the collection $\lbrace x\ |\ \phi(x)\rbrace$ is a proper class.
-Furthermore, there is no set $Y$ that is in bijection with such a collection, right? Thus, every proper class must be in some sense larger than any set, right? ("too large to be a set")
-For two formulae $\phi(x)$, $\psi(x)$, one of them defining a proper class, it may be possible to show that $(x) \phi(x) \equiv \psi(x)$. This means a) that the second collection is also a proper class, and b) that the two have equal size in some sense.
-
-Can the notion of size of proper classes (or equipollency) be made precise? Is it possible that two proper classes do not have equal size in this sense? Would it thus follow that there are smaller and larger proper classes? Or do all proper classes have the same size (of $V$)?
-
-And - finally - can the size of $V$ be characterized? To talk at large, maybe like this (if sense could be made of $P(V)$): $V$ is so large that $|V| = |P(V)|$ (which never holds for sets)? (This would admittedly be counter-intuitive, since the discrepancy between $|X|$ and $|P(X)|$ grows with $|X|$.)
-
-REPLY [9 votes]: This question was asked at MO a while ago. Below I quote the relevant part of my answer. But let me add some comments.
-In ZFC (proper) classes are not actual objects, we only treat them formally, and they are really just short-cuts for formulas (with parameters), i.e., we identify formulas $\phi(x)$ with the collection of sets that satisfy $\phi$.
-This makes talking of relations between classes somewhat awkward. Arturo's nice answer, for example, indicates how one can make sense of, say, a bijection between two classes. A bit more formally, if $X$ is the class of $x$ such that $\phi(x)$ and $Y$ is the class of $y$ such that $\psi(y)$, then a bijection between $X$ and $Y$ is a class $Z$ given by a formula $\rho(x,y)$ such that:
-
-For any $x,y,z$, if $\rho(x,y)$ and $\rho(x,z)$ then $y=z$.
-Similarly, for any $x,y,z$, if $\rho(x,y)$ and $\rho(z,y)$, then $x=z$.
-For any $x,y$, $\rho(x,y)$ implies that $\phi(x)$ and $\psi(y)$ hold.
-For any $x$ such that $\phi(x)$ there is a $y$ such that $\rho(x,y)$.
-And, for any $y$ such that $\psi(y)$, there is an $x$ such that $\rho(x,y)$.
-
-This awkwardness also makes it impossible to develop an appropriate theory of classes. For example, the natural question "Are $X$ and $Y$ of the same size?" cannot even be asked in ZFC. Of course, if there is a $\rho$ as above, then we can say that they have the same size, as witnessed by $Z$, and if there is no such $\rho$, outside of ZFC we can say that they don't have the same size, but all we can do in ZFC is to say, of any given formula $\rho$, that $\rho$ does not define a bijection between $X$ and $Y$.
-Of course, equipotency is only an example of what we cannot discuss freely about classes. Thus when interested in proper classes, we move from ZFC to other theories. There are (at least) two natural extensions of ZFC where classes can be treated as formal objects. One is NBG (Von Neumann-Bernays-Goedel). Here, the objects are classes, and sets are classes that belong to other classes. $Z$ is a bijection between $X$ and $Y$ iff $Z$ is a function, its domain is $X$ and its range is $Y$ (just as with sets). NBG is usually preferred because it is a conservative extension of ZFC; in a sense, all we have done is to allow references to classes without changing what we mean by "set" in any way. Formally, any theorem of NBG that only mentions sets is a theorem of ZFC.
And our interpretation of classes as given by formulas gives us a way of extending any model of ZFC to one of NBG.
-The fact that NBG is conservative over ZFC is in a sense a serious limitation, as we limit the notion of class somewhat artificially. When discussing elementary embedding formulations of large cardinal axioms (something very common in modern set theory), for example, the discussion can be carried out in NBG, but not in the smoothest possible fashion. In technical terms, the comprehension axiom in NBG is predicative, but it is hard to justify this when we are allowing proper classes at all.
-The most natural extension of ZFC to treat proper classes is Morse-Kelley (MK). Here, comprehension is unrestricted, so it is more natural to combine classes into new ones by usual operations. The cost of this is that MK is not a conservative extension of ZFC. In fact, MK can prove the consistency of NBG (and therefore of ZFC). That being said, to discuss equipotency of classes, MK seems the appropriate framework.
-Here is what I said in the MO post mentioned above:
-
-In extensions of set theory where classes are allowed (not just formally as in ZFC, but as actual objects as in MK or GB), sometimes it is suggested to add an axiom (due to Von Neumann, I believe) stating that any two proper classes are in bijection with one another. Under this axiom, the "cardinality" of a proper class would be ORD, the class of all ordinals. (By the way, by class forcing, given any proper class, one can add a bijection between the class and ORD without adding sets, so this assumption bears no implications for set theory proper.)
-Without assuming Von Neumann's axiom, or the axiom of choice, I know of no sensible way of making sense of this notion, as now we could have some proper classes that are "thinner" than others, or even incomparable. Of course, we could study models where this happens (for example, work in ZF, assume there is a strong inaccessible $\kappa$, and consider $V_\kappa$ as the universe of sets, and $Def(V_\kappa)$ in Gödel's sense (or even $V_{\kappa+1}$) as the collection of classes).
-
-Let me expand on this a bit. I argue in ZFC, as this is the best-known framework of the three mentioned above.
-First, class forcing allows us to add a bijection between $V$ and $ORD$ without adding sets. Essentially, what we do is to "thread" through the class of bijections between sets and ordinals. In the resulting extension, we have added no sets, but we have a new class $G$. The resulting structure $(V,G,\in)$ is a model of the strong version of ZFC where we allow $G$ to appear as a predicate in instances of the replacement axiom. Also, any proper class $A$ here (in the sense of "definable by a formula") is also in bijection with the ordinals, and such a bijection is easily definable from $A$ and $G$.
-This shows that the assumption that all proper classes have the same size is completely harmless: No new theorems of ZFC can be proved by adding this assumption.
-The model we obtain is not a "natural" model of NBG, though, since G is not definable.
-Arturo's suggestion ($V=L$) gives us models where bijections between classes and the ordinals are definable. It has the disadvantage that V=L is quite limiting. As I mentioned in the comments to his answer, there is an alternative: We can assume $G$ definable. This is because, over any model of ZFC, we can force to obtain a new model where $V=HOD$ (on the other hand, once $V\ne L$ we cannot force to have the equality back).
-$HOD$ is the class of hereditarily ordinal definable sets. This means that $x\in HOD$ implies that $x\subset HOD$, and that to be in HOD, $x$ must be definable from ordinal parameters. Of course $V=L$ implies $V=HOD$, but $HOD$ is compatible with all known large cardinals, while $L$ is not. Moreover, $HOD$ carries a definable well-ordering (in order type ORD), so if $V=HOD$, then any proper class is in bijection with the ordinals. Moreover, $V=HOD$ is equivalent to the statement that there is a (definable) bijection between $V$ and $ORD$. So, at least in ZFC, this characterizes in a natural way when all proper classes have the same size (and it is really the only way that "sizes of proper classes" can be freely discussed in ZFC).
-Finally, it is consistent that $V\ne HOD$, in which case $V$ and $ORD$ have different cardinality in the ZFC sense defined above, and in fact one can arrange that there are incomparable "sizes" of proper classes.<|endoftext|>
-TITLE: Prove that the class number of $\mathbb{Z}[\zeta_3]$ is $1$
-QUESTION [5 upvotes]: How does one prove that the class number of $\mathbb{Z}[\zeta_3]$ is $1$?
-REPLY [18 votes]: Here's a nice geometric argument, which you can find essentially in Klein's lecture on Ideal Numbers (Lecture VIII in Lectures on Mathematics, by Felix Klein, AMS Chelsea Publishing, AMS, 2000).
-Consider the lattice $\mathbb{Z}[\zeta_3]$ in $\mathbb{C}$ ($\zeta_3 = \frac{-1+\sqrt{-3}}{2}$). Given $a,b\in\mathbb{Z}[\zeta_3]$ with $a\neq 0$, consider the points in the lattice of the form $(q_1+q_2\zeta_3)a$, and use them to tessellate the complex plane. Then locate $b$, and the closest corner $qa$ of the parallelogram containing it. Then let $r=b-qa$.
-Here is a picture which is slightly off, since my lattice here is $\mathbb{Z}[i]$, not $\mathbb{Z}[\zeta_3]$, but it should give you an idea; it's taken from the slides for a talk I gave some years ago. It's a bit hard to see, but the small dots are the lattice points; the big dots represent the $\mathbb{Z}[\zeta_3]$ multiples of $a$, with $a$ the first large dot to the right of the one labeled $0$ and above the horizontal line. (Well, as I said, the small dots are really in the position of the lattice points of $\mathbb{Z}[i]$, but you get the idea, I hope). The three labeled dots above zero are (counterclockwise) $a$, $(1+\zeta_3)a$, and $\zeta_3 a$. The dot corresponding to $b$ is circled in red. The blue dot is $qa$, the blue arrow is the vector corresponding to $qa$, and the green arrow is the vector corresponding to $r = b-qa$. (The "1" at the very bottom of the picture is the page number, so ignore it...)
-How big can $r$ be? The tessellation is by parallelograms whose sides have length $|a|$ and $|\zeta_3 a|=|a|$, and whose short diagonal has length $|(1+\zeta_3)a|=|a|$ as well, so each parallelogram splits into two equilateral triangles of side $|a|$; the furthest that $b$ can be from the corner it is closest to is therefore the circumradius $\frac{|a|}{\sqrt{3}}$. That is, $|r|\leq\frac{|a|}{\sqrt{3}}\lt|a|$. From this it follows that $0\leq N(r)\lt N(a)$.
-Thus, for all $a,b\in\mathbb{Z}[\zeta_3]$, there exist $q,r\in\mathbb{Z}[\zeta_3]$ such that $b = qa+r$ and $0\leq N(r)\lt N(a)$. So $N$ is a Euclidean function on $\mathbb{Z}[\zeta_3]$, the latter is Euclidean, hence a PID, hence the class number is $1$.
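-As a computational sanity check of the division step above, here is a short Python sketch; instead of taking the geometrically closest corner, it rounds the coordinates of the exact quotient in the basis $(1,\zeta_3)$, which gives the slightly weaker but still sufficient bound $N(r)\le\frac{3}{4}N(a)$. The coordinate representation and helper names are ad hoc, chosen just for this illustration:

    from fractions import Fraction
    import random

    # Represent a + b*w in Z[w], w = zeta_3, as the pair (a, b); note w^2 = -1 - w.
    def mul(u, v):
        a, b = u
        c, d = v
        return (a*c - b*d, a*d + b*c - b*d)

    def norm(u):                      # N(a + b*w) = a^2 - a*b + b^2
        a, b = u
        return a*a - a*b + b*b

    def divmod_eisenstein(b, a):
        # Exact quotient b/a = b * conj(a) / N(a); conj(a1 + a2*w) = (a1 - a2) - a2*w.
        a1, a2 = a
        num = mul(b, (a1 - a2, -a2))
        n = norm(a)
        q = (round(Fraction(num[0], n)), round(Fraction(num[1], n)))
        qa = mul(q, a)
        return q, (b[0] - qa[0], b[1] - qa[1])

    random.seed(0)
    for _ in range(10000):
        a = (random.randint(-50, 50), random.randint(-50, 50))
        b = (random.randint(-50, 50), random.randint(-50, 50))
        if norm(a):
            q, r = divmod_eisenstein(b, a)
            # coordinate-wise rounding leaves N(r) <= (3/4) N(a) < N(a)
            assert norm(r) < norm(a)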
-The same geometric argument can be used to show that $\mathbb{Z}[i]$ and $\mathbb{Z}[\sqrt{-2}]$ are Euclidean, but it breaks down when you get to $\mathbb{Z}[\sqrt{-5}]$, because the size of the rectangles now allows the distance from $b$ to a corner to be larger than the length of $a$ (good thing, too, since $\mathbb{Z}[\sqrt{-5}]$ is not a UFD).<|endoftext|>
-TITLE: Could someone explain conditional independence?
-QUESTION [119 upvotes]: My understanding right now is that an example of conditional independence would be:
-If two people live in the same city, the probability that person A gets home in time for dinner, and the probability that person B gets home in time for dinner are independent; that is, we wouldn't expect one to have an effect on the other. But if a snow storm hits the city and introduces a probability C that traffic will be at a standstill, you would expect that the probability of both A getting home in time for dinner and B getting home in time for dinner would change.
-If this is a correct understanding, I guess I still don't understand what exactly conditional independence is, or what it does for us (why does it have a separate name, as opposed to just compounded probabilities), and if this isn't a correct understanding, could someone please provide an example with an explanation?
-REPLY [8 votes]: No independence
-Take a random sample of school children and for each child obtain data on:
-
-Foot Size ($F$)
-Literacy Score ($L$).
-
-The two will be (positively) correlated, in that the bigger the foot size the higher the literacy score.
-The random variables $F$ and $L$ are not independent.
-Confounder
-
-Obviously a bigger foot size is not the direct cause for a higher literacy score. What correlates the two is the child's age ($A$), which is the confounder in the fork structure $F \leftarrow A \rightarrow L$ above.
-If I tell you someone's foot size, it hints at their age, which in turn hints at their literacy score. So we can write:
-$$P(L|F) \neq P(L)$$
-Again, the random variables $F$ and $L$ are not independent.
-Conditioning
-By conditioning on age (the confounder), we no longer consider the relationship between foot size and literacy for the whole sample, but for each age group separately.
-Doing so annihilates the correlation caused by the confounder, and makes foot size and literacy score independent.
-While age does hint at literacy score, if now I tell you someone's foot size it doesn't hint a smidgen about their age, because their age is given (we condition on it) - no correlation.
-$$P(L|F, A) = P(L|A)$$
-And so, within each fixed age group:
-$$P(L|F) = P(L)$$
-Conclusion
-So this was just an example of two random variables $F$ and $L$ that were:
-
-dependent when not conditioned on $A$
-independent when conditioned on $A$
-
-We say that $F$ is conditionally independent of $L$ given $A$:
-$$(F \perp L | A)$$<|endoftext|>
-TITLE: Cattle problem -- all solutions in $\mathbb{Z}$
-QUESTION [9 upvotes]: There is an ancient problem, I remember reading once in a book while I was a kid, that says: there was a father who had $N$ sons and $T$ cows. He divided the cows among the sons in the following way: the first son got $1$ cow plus $1/7$ of the remainder; the second son got $2$ cows plus $1/7$ of the remainder; ...; the $i$'th son got $i$ cows plus $1/7$ of the remainder... The question asked for the numbers T and N. In fact $N=6$ and $T=36$ is a solution. I found that it can be generalized to any given ratio $1/M$: $N=M-1$ and $T=N^2$ is a solution.
I was wondering if there is any other solution to the general problem in $\mathbb{Z}$. I have formalized the problem as follows:
-We have a system of equations described by the following $N+1$ equations defined in $\mathbb{Z}$:
\begin{align*}
n_1 &= 1+ \frac{T-1}{M}\\
n_2 &= 2+ \frac{T-2-n_1}{M}\\
&\vdots\\
n_i &= i+ \frac{T-i-\sum\limits_{j=1}^{i-1} n_j}{M}\\
T&=\sum\limits_{i=1}^{N} n_i.
\end{align*}
-One set of solutions is given by:
\begin{align*}
T &= N^2\\
n_i&=N\\
M&=N+1
\end{align*}
-For example, when $N=6$, $T=36$, $n_1 = n_2 = \cdots = n_6 = 6$ is a solution.
-I'm wondering if there are any other non-trivial integer solutions to the above system of equations.
-Thanks,
-MG
-REPLY [8 votes]: Are you interested in solutions in $\mathbb{Z}$ or solutions in $\mathbb{Z}^+$?
-Another trivial solution in integers is $$M=1$$ $$n_1=T$$ $$n_i = 0, \forall i \neq 1$$
-So setting $M=1$, you can choose $T$ to be any integer and you will get only integer solutions, since the condition $T = \displaystyle \sum_{i=1}^N n_i$ is trivially satisfied and the $n_i$ are defined recursively as integer combinations of previous $n_i$'s and $T$, with $n_1 = T$.
-EDIT: Let's analyze further to see other possible integer solutions. (My analysis is incomplete; I will try to finish it later tonight/tomorrow.)
-$$n_N = N + \frac{T-N - \displaystyle \sum_{j=1}^{N-1} n_j}{M}$$
-From the constraint $T = \displaystyle \sum_{j=1}^N n_j$, we get $n_N = T - \displaystyle \sum_{j=1}^{N-1} n_j$ and hence
-$$n_N = N + \frac{n_N - N}{M}$$
-$$\left( n_N-N \right) \left(1-\frac{1}{M} \right) = 0$$ which gives us that $$n_N = N \text{ or } M=1$$
-If $M=1$, then we already have the solution $$n_1 = T \text{ and } n_i = 0 \text{ } \forall \text{ }i>1$$
-If $M \neq 1$, then $n_N = N$, and we have $$n_{N-1} = N-1 + \frac{T-(N-1) - (n_1+n_2+ \cdots n_{N-2})}{M} = N - 1 + \frac{N + n_{N-1}-(N-1)}{M}$$
-$$(n_{N-1} - N + 1)(M-1) = N$$
-From this we get that $M \neq 1$, and since we are interested only in integer solutions we have $(M-1)|N$, i.e. $M = 1 + a$ where $a|N$. Then $n_{N-1} = N + \frac{N}{a} - 1$
-$$n_{N-2} = N-2 + \frac{n_N + n_{N-1} + n_{N-2} - (N-2)}{1+a}$$
-$$n_{N-2} - (N-2) - \frac{n_{N-2} - (N-2)}{1+a} = \frac{N + N + \frac{N}{a}-1}{1+a}$$
-$$a(n_{N-2} - (N-2)) = 2N + \frac{N}{a} - 1$$<|endoftext|>
-TITLE: Number of simple paths between two vertices on an $n \times m$ square-grid graph?
-QUESTION [13 upvotes]: I've encountered this whilst writing an optimisation benchmark for some heuristic search algorithms. Feels like there should be a basic solution out there!
-A square-grid graph is constructed from $n \times m$ vertices embedded into the Cartesian plane, such that each vertex corresponds to a pair of integers. Say that a vertex is on a boundary if the $x$ or $y$ coordinate is minimal or maximal (i.e. it is at the lower/upper bound of the grid).
-Is there a general expression for the number of simple paths (paths that do not revisit the same vertex) between a pair of vertices $(a,b)$ on a square-grid graph, where $a$ is on a boundary and $b$ is not?
-Thanks
-REPLY [12 votes]: This is a variation of the problem of enumerating self-avoiding walks. In general there are no closed-form expressions for the number of self-avoiding walks between two arbitrary points in a grid of arbitrary dimension; although I have not seen the problem considered with the particular constraints in question, I would be surprised if there was a closed-form solution in this case. (For small grids, though, exact counts are easy to obtain by brute force; see the sketch below.)
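-A depth-first search makes the brute-force count straightforward (Python; exponential time, so only feasible for small grids):

    def count_simple_paths(n, m, start, goal):
        # Count simple paths between two vertices of the n x m grid graph.
        def neighbors(v):
            x, y = v
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < n and 0 <= y + dy < m:
                    yield (x + dx, y + dy)

        visited = {start}
        def dfs(v):
            if v == goal:
                return 1
            total = 0
            for w in neighbors(v):
                if w not in visited:
                    visited.add(w)
                    total += dfs(w)
                    visited.discard(w)
            return total
        return dfs(start)

    # e.g. boundary vertex (0,0) to interior vertex (1,1) on a 4 x 4 grid:
    print(count_simple_paths(4, 4, (0, 0), (1, 1)))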
However, reliable approximation methods do exist for such enumeration problems, and such an approximation may be suitable for your purposes.
-Edit: Using a rather naive approach, I've worked out the following lower bound for the number of self-avoiding walks on an $n\times m$ rectangular grid from a given boundary point to a given interior point. With more mathematical sophistication, it is likely that significant improvements could be made on this bound.
-Let $A$ be an $n\times m$ rectangular grid, with $n>2$ and $m>2$. That is, let $A$ be the collection of integer points in an $xy$ Cartesian plane satisfying the conditions $0 \leq x \leq n$ and $0 \leq y \leq m$. Let $\left(a,b\right)$ be a point in the interior of $A$. The Wikipedia article discusses the case of walks from one diagonal to another with moves only in the positive direction. I shall call such walks positive walks. There is a simple formula for the number of positive walks from one point to another in a rectangular grid.
-We claim that the number of self-avoiding walks from a given point on the boundary to $\left(a,b\right)$ is bounded below, independently of the choice of the point on the boundary, by
-$$ \binom{a+b}{b} + \binom{a+m-b}{m-b} + \binom{n-a+b}{b} + \binom{n-a+m-b}{m-b}$$
-That is, by the number of positive walks from any of the four corners of the grid to the point $\left(a,b\right)$.
-Let $\alpha$ be a point on the boundary of $A$. We first bound the number of self-avoiding walks from $\alpha$ to $\left(a,b\right)$ below by the number of positive walks from any point on the boundary of an $\left(n-2\right)\times\left(m-2\right)$ rectangular grid $B$ to the point $\left(a-1,b-1\right)$ within $B$. Our requirement that $n>2$ and $m>2$ is to allow this step to work with a minimum of headache. There is probably an exact solution in the case where $n \leq 2$ or $m \leq 2$.
-We may consider those walks which first move $k$ steps in the counter-clockwise direction around the boundary, then move one step into the interior, and whose successive moves follow a positive walk directly to $\left(a,b\right)$. We may do this for $k$ from 0 to $2n+2m-1$, excluding those values of $k$ which place the walker upon a corner, where there is no move into the interior. It is clear that we have produced a disjoint collection of self-avoiding walks equal in number to the number of positive walks from the boundary of the $\left(n-2\right)\times\left(m-2\right)$ grid $B$ to the point $\left(a-1,b-1\right)$ in $B$.
-Now we use the result that the number of positive walks from one diagonal to another on a $j\times k$ grid equals
-$$\binom{j+k}{j} = \binom{j+k}{j,k} = \binom{j+k}{k} $$
-Using this result, we may express the number of positive walks from any point on the boundary of $B$ to the point $\left(a-1,b-1\right)$ as the sum
-$$\sum_{i=0}^{b-1}\binom{a-1+i}{a-1} + \sum_{i=0}^{m-b-1}\binom{a-1+i}{a-1} + \sum_{i=0}^{b-1}\binom{n-a-1+i}{n-a-1} + $$
-$$\sum_{i=0}^{m-b-1}\binom{n-a-1+i}{n-a-1} + \sum_{i=0}^{a-1}\binom{b-1+i}{b-1} + \sum_{i=0}^{n-a-1}\binom{b-1+i}{b-1} + $$
-$$ \sum_{i=0}^{a-1}\binom{m-b-1+i}{m-b-1} + \sum_{i=0}^{n-a-1}\binom{m-b-1+i}{m-b-1} - 4,$$
-where we subtract 4 because the above expression involving binomial coefficients counts each straight-line path from the boundary of $B$ to $\left(a-1,b-1\right)$ twice.
Since we have specified that $n>2$ and $m>2$, we may safely ignore this 4 by adding in some walks that first move $k$ steps in the clockwise direction, move away from the boundary, and then follow a positive walk to the finish point.
-Using the well-known binomial coefficient identity
-$$ \sum_{i=0}^{k}\binom{r+i}{r} = \binom{r+k+1}{r+1},$$
-we may express the above sum, without the addition of the constant $-4$, as
-$$\binom{a+b-1}{a}+\binom{a-1+m-b}{a}+\binom{n-a-1+b}{n-a}+\binom{n-a-1+m-b}{n-a} + $$
-$$\binom{b-1+a}{b}+\binom{b-1+n-a}{b}+\binom{m-b-1+a}{m-b}+\binom{m-b-1+n-a}{m-b}$$
-After some simple manipulations, we may pair up the terms and combine each pair using the identity $\binom{n}{r}=\binom{n-1}{r}+\binom{n-1}{r-1}$ to get
-$$ \binom{a+b}{b} + \binom{a+m-b}{m-b} + \binom{n-a+b}{b} + \binom{n-a+m-b}{m-b},$$
-which is the result we were attempting to prove.
-I'm not sure how useful this result is, but I thought it was interesting how easily things lined up to be simplified by standard binomial coefficient identities. I also thought it was interesting how the final formula has a straightforward combinatorial interpretation.<|endoftext|>
-TITLE: Applications of cross product in sciences other than physics
-QUESTION [6 upvotes]: I am familiar with using cross products in physics to answer questions about force and torque, but are there applications to other scientific fields? The examples that I've seen in most calculus textbooks deal only with physics or engineering questions.
-REPLY [4 votes]: To "questions about force and torque" I would add the purely geometrical field of ${\it kinematics}$, which deals with the movement of three-dimensional bodies in three-space, and there are applications in elementary three-dimensional geometry, such as determining the distance between two lines in three-space. Then there is a simple rule for solving two homogeneous linear equations in three unknowns: The general solution of the system $$a_1 x_1+a_2 x_2+a_3 x_3=0, \quad b_1 x_1+b_2 x_2+b_3 x_3=0$$
-is given by $\ {\bf x}=\lambda\ ({\bf a}\times {\bf b})$, $\ \lambda\in{\mathbb R}$. But otherwise no application of the cross product comes to mind, the reason being that this concept is strictly confined to Euclidean spaces of dimension three. Only for $d=3$ can a skew bilinear form be represented by a vector, i.e. an element of the base space.<|endoftext|>
-TITLE: Show that $\displaystyle \sum_{k \ge 0}\frac{1}{k^2+k-\alpha} = \frac{\pi}{\sqrt{4\alpha+1}}\tan\left(\frac{1}{2}\pi\sqrt{4\alpha+1}\right)$
-QUESTION [9 upvotes]: For $\alpha\in\mathbb{R}$, it seems that we have $\displaystyle \sum_{k \ge 0}\frac{1}{k^2+k-\alpha} = \frac{\pi}{\sqrt{4\alpha+1}}\tan\left(\frac{1}{2}\pi\sqrt{4\alpha+1}\right).$
-I've tried many techniques and ways -- partial fractions, turning it into an integral, finding the partial sum and so on -- to show it, but all ended in failure. Many thanks for the help in advance.
-REPLY [10 votes]: First we note that
-$$ S = \sum_{k \ge 0} \frac{1}{k^2+k-\alpha} = \sum_{k \ge 0} \frac{1}{(k+1/2)^2 - (4\alpha+1)/4} = 4\sum_{k \ge 0} \frac{1}{(2k+1)^2 -a^2},$$
-where $a=\sqrt{4\alpha+1},$ and so
-$$ S = 4 \sum_{m \textrm{ odd}} \frac{1}{m^2 - a^2}.$$
-Now we use the well-known cotangent identity (at the bottom of the page)
-$$\pi\cot(a\pi) = \frac{1}{a} + \sum_{m=1}^\infty \frac{2a}{a^2-m^2} . \quad (1)$$
-Replacing $a$ by $a/2$ and dividing by $2$ gives
-$$ \frac12 \pi\cot \left( \frac{a\pi}{2}\right) = \frac{1}{a} + \sum_{m=1}^\infty \frac{2a}{a^2-(2m)^2} . \quad (2)$$
-Subtracting $(2)$ from $(1)$ gives
-$$ \pi \cot(a\pi) - \frac12 \pi\cot \left( \frac{a\pi}{2}\right) = \sum_{m=1, \textrm{ odd } m}^\infty \frac{2a}{a^2-m^2}, $$
-but $\cot(\theta) -\frac12 \cot(\theta/2)= -\frac12 \tan(\theta/2)$ and so
-$$ \frac{\pi}{4a} \tan \left( \frac{a \pi}{2} \right) = \sum_{m \textrm{ odd}} \frac{1}{m^2 - a^2},$$
-from which the result follows setting $a = \sqrt{4\alpha + 1}.$
-EDIT: The cotangent identity is proven here.
-EDIT2: An easy way to discover the cotangent identity is to take logarithms of the following product formula for $\sin \pi x$ and differentiate wrt $x.$
-$$\sin \pi x = \pi x \prod_{n=1}^{\infty} \left( 1 - \frac{x^2}{n^2} \right)$$<|endoftext|>
-TITLE: Proof of linear independence of $e^{at}$
-QUESTION [15 upvotes]: Given $\left\{ a_{i}\right\} _{i=0}^{n}\subset\mathbb{R}$ which are distinct, show that $\left\{ e^{a_{i}t}\right\} \subset C^{0}\left(\mathbb{R},\mathbb{R}\right)$ forms a linearly independent set of functions.
-Any tips on how to go about this proof? I tried working from the definition of an exponential and combining sums, but that didn't seem to get me anywhere. I saw a tip on the internet that said to write it in the form $\mu_{1}e^{a_{1}t}+\dots+\mu_{n}e^{a_{n}t}=0$ and try to show $\mu_{1}=\dots=\mu_{n}=0$, considering each term of the left hand side must be positive, but I can't get my head around that, because while I understand $e^{x}>0\ \forall x\in\mathbb{R}$, I cannot see why $\mu_{i}$ must be positive in any case. I have thought about differentiating but that doesn't seem to help. The question did originally ask for a "rigorous" proof, but I'll take any hints right now; the provided solution of "it's obvious" is most unhelpful to me.
-Any input would be fantastic. Thank you.
-REPLY [7 votes]: Although the answer by Ross Millikan is probably the easiest elementary approach, the answer by Bill Dubuque points at a more profound reason that these exponential functions must be linearly independent functions: they are eigenvectors (eigenfunctions) of the differentiation operation $D:f\mapsto f'$ for distinct eigenvalues $a_1,\ldots,a_n$. It is therefore an instance of the fundamental fact that eigenspaces for distinct eigenvalues always form a direct sum. The essence of the argument can be formulated without any advanced language as follows.
-We can prove the linear independence by induction on the number $n$ of distinct exponentials involved; the cases $n\leq1$ are trivial (an exponential function is not the zero function). Then by the induction hypothesis one can assume $e^{a_1x},\ldots,e^{a_{n-1}x}$ to be linearly independent. Now if $e^{a_1x},\ldots,e^{a_nx}$ were linearly dependent, the dependency relation would have to involve the final exponential $e^{a_nx}$ with nonzero coefficient, and would therefore (after division by the coefficient) allow that function to be expressed as a linear combination of $e^{a_1x},\ldots,e^{a_{n-1}x}$:
-$$ c_1e^{a_1x}+\ldots+c_{n-1}e^{a_{n-1}x}=e^{a_nx} $$
-Now (restricting to the subspace of differentiable functions, where all our exponentials obviously live), the operator $D-a_nI: f\mapsto f'-a_nf$ has the property of annihilating the final exponential function $f=e^{a_nx}$, but multiplying all other exponentials by a nonzero constant (namely $a_i-a_n$ in the case of $f=e^{a_ix}$).
Moreover, this operator is linear, so it can be applied term-by-term; application to both sides of our identity turns it into
-$$ c_1(a_1-a_n)e^{a_1x}+\ldots+c_{n-1}(a_{n-1}-a_n)e^{a_{n-1}x}=(a_n-a_n)e^{a_nx}=0. $$
-But by the (induction) hypothesis of linear independence this can only be true if all the coefficients $c_i(a_i-a_n)$ on the left are zero, which means that all $c_i$ are zero. But in view of our original expression that is absurd. So $e^{a_1x},\ldots,e^{a_nx}$ cannot be linearly dependent, completing the induction step and the proof.<|endoftext|>
-TITLE: Ergodic Recurrence
-QUESTION [5 upvotes]: My solution concerning a problem about Ergodic Recurrence requires me to prove that $\|P_T 1_B\| > 0$,
-where $P_T$ is the projection onto the space $I := \{f \in L^2 : f \circ T = f\}$, $T$ is a measure preserving mapping (so for all measurable $B$, $\mu(T^{-1} B) = \mu(B)$) and $B$ is of positive measure.
-Can someone hint me why $\|P_T 1_B\|$ should be strictly positive? If $1_B$ were in $I$ then I would have that $T^{-1} B = B$, which is probably not the case.
-REPLY [5 votes]: It doesn't matter whether $B$ is invariant or not, it only has to be of positive measure, so that its characteristic function $[B]$ is nonzero in $L^{2}$. I assume that $X$ is a finite measure space (so $[X] \in L^{2}$) and that $T: X \to X$ is a measure-preserving transformation.
-Clearly, $P_{T}[X] = [X]$, so
-\[ 0 < \mu(B) = \langle [B],[X] \rangle = \langle [B], P_{T}[X] \rangle = \langle P_{T}[B],[X]\rangle \]
-using $P_{T}^{\ast} = P_{T}$ in the last equation, hence $P_{T}[B] \neq 0$ in $L^{2}$.<|endoftext|>
-TITLE: How to Interpret Time Scales in a Dynamic System
-QUESTION [7 upvotes]: Here I have a question about time scales in dynamic systems - for reference you can look at a previous question that spurs this one:
-Minimizing the cost of a path in a dynamic system
-That question was about finding the minimum cost path from $0$ to $c$ on the real line, where cost is quadratic in the size of the step at each time $t$, and there was a drift towards 0 of $-P(t)$, $P(t)$ being your position at time $t$. The question is, what is the minimum cost path, when you minimize over the size of the step each period, and the number of steps, subject only to the constraint of eventually getting to $c$.
-The solution in this discrete time problem is to see that the problem is essentially linear, with linear constraints, and a solution easy to characterize for a given length of path $T$, whose cost is decreasing in $T$, so that the minimum cost path is to take an infinite amount of time to get from $0$ to $c$. Taking very, very small steps is the way to go. Well and good, the question is answered.
-
-It may then seem natural to ask, "What about the continuous time analog to that problem?" That is, if the original dynamics are given by
-$$ P(t) = (1-\gamma) P(t-1) + \gamma x(t) $$
-where $\gamma<1$ limits how fast I can move, and which can be written
-$$ \frac{P(t)- P(t-1)}{\gamma} = - P(t-1) +x(t), $$
-what if we send $\gamma$ to zero, essentially discretizing the time interval finer and finer? The problem then becomes to solve the following problem: Minimize the cost
-$$ \int _1^T ( x(t))^2 dt, $$
-subject to the constraint that $P(0) = 0$, $P(T) = c$, $P$ goes from 0 to $c$, and $P$ follows the following dynamics:
-$$ \dot{P} = -P(t) + x(t) $$
-The constraint is essentially that $P$ solves a particular ODE.
I ask Mathematica about it, and am informed that $P$ has the following form:
-$$ P(z) = e^{-z} ( K + \int_1^{z} e^t x(t) dt), $$
-where $K$ is a constant of integration. So, adding in the boundary conditions, the constraint on the minimization problem is that
-$$ e^{-T} \int_1^{T} e^t x(t) dt = c, $$
-that is, whatever $x(t)$ is, $P$ must end up at $c$ at time $T$. So the Lagrangian is
-$$ L = \int _1^T ( x(t))^2 dt - \lambda (e^{-T} \int_1^{T} e^t x(t) dt - c) $$
-But note, just as before, this Lagrangian is simply quadratic in $x(t)$, and can be solved for a fixed $T$.
-When I ask Mathematica to do so, I find that
-$$ x(t) = \frac{e^t c}{e^{t-1} - e}, $$
-which is very simple, linear in the distance to be moved $c$. Note that this doesn't depend on $T$, the total time length of the path; indeed, when we minimize cost with respect to $T$, we find that cost is minimized for
-$$ T = \frac{1}{2} (1 - 2 ProductLog[-1, -(1/(2 \sqrt{e}))]), $$
-where "ProductLog(-1,z)" is Mathematica's way of calculating Lambert's W function. So a finite $T$ is cost minimizing, and it is about 2.25643.
-Now, this isn't necessarily a contradiction to the discrete case; after all, a finite length of time in the continuous model is an infinite number of periods in the discrete model, so the answers are not in conflict. However, the answer in the continuous model might easily have been "infinite amount of (continuous) time," but it wasn't. My questions are
-
-Why not?
-How to interpret the value of the cost-minimizing $T$ - what does it mean? How much "time" is that?
-
-The point of all this is to "justify" some simulations of the discrete system we are doing; we have a problem where we are simulating random shocks to a dynamic system, and we have some theorems that say mean time to escape from some point is a function of this cost minimization problem, i.e. the cost-minimizing way to escape is the dominant way to escape, the most likely way to escape. But we are having a hard time interpreting the cost-minimizing escape; it says that in the discrete case, the dominant escape should take an infinite amount of time, while in the continuous case (which the theorems are actually written for), it takes a finite amount of time, a time which seems to have nothing to do with the distance attempting to be escaped!
-
-Well, obviously no one had anything to say on this issue - is it due to a poorly framed question? Would this question be more appropriate at MathOverflow?
-REPLY [2 votes]: This question keeps being bumped by the Community bot, so I guess I should try giving it a proper answer.
-$\newcommand{\d}{\mathrm d}$
-You want to minimize $\int_0^T x(t)^2\ \d t$ with $\dot P(t) = -P(t) + x(t)$, subject to the constraints $P(0) = 0$ and $P(T) = c$. This is essentially a problem in the calculus of variations. Your Lagrangian, i.e. the integrand in the objective function, is $\mathcal L(t,P,\dot P) = x^2 = \big(P + \dot P\big)^2$. The optimal solution for $P(t)$ must satisfy the corresponding Euler-Lagrange equation,
-$$\begin{align} 0 &= \frac{\partial \mathcal L}{\partial P} - \frac{\d}{\d t}\frac{\partial \mathcal L}{\partial \dot P} \\ &= 2(P + \dot P) - \frac{\d}{\d t}\big(2(P + \dot P)\big) \\ &= 2(P - \ddot P). \end{align}$$
-Applying the given boundary conditions yields a unique solution to this differential equation,
-$$P(t) = a(e^t - e^{-t}),$$
-where $a = c/(e^T - e^{-T})$.
The corresponding $x(t)$ is $2ae^t$, and the total cost is
-$$\int_0^T x(t)^2\ \d t = 2a^2(e^{2T}-1) = \frac{2c^2}{1 - e^{-2T}}.$$
-As in the discrete case, the cost decreases asymptotically with $T$.
-In your derivation, you're fine up to $e^{-T} \int_0^{T} e^t x(t)\ \d t = c$; I suspect the problem comes in somewhere between where you define your Lagrangian $$L = \int _1^T x(t)^2\ \d t - \lambda \left(e^{-T} \int_1^{T} e^t x(t)\ \d t - c\right)$$ and where you get Mathematica to solve it. I'm not really sure what you did there, but if I make a heuristic and totally nonrigorous argument that $\dfrac{\partial L}{\partial x(t)} = 0$, I get $2x(t)\ \d t - \lambda e^{-T}e^t\ \d t = 0$, or $x(t) = \text{const}\cdot e^{t}$, consistent with my solution.<|endoftext|>
-TITLE: Prove that if $p^{a}$ is a factor of the canonical factorization of ${{2n}\choose{n}}$ then $p^{a} < 2n$?
-QUESTION [7 upvotes]: Prove that if $p^{a}$ is a factor of the canonical factorization of ${{2n}\choose{n}}$ then $p^{a} < 2n$?
-My attempt:
-$${{2n}\choose{n}} = \frac{(2n)!}{n!n!}$$
-Let $a_1$ be the highest power of $p$ in $(2n)!$
-Let $a_2$ be the highest power of $p$ in $n!$
-So the highest power of $p$ in $\frac{(2n)!}{n!n!}$ is $a_1 - 2a_2$,
-where $a_1 \le 2n - 1$ and $2a_2 \le 2n - 2$.
-Therefore the highest power of $p$ that divides $\frac{(2n)!}{n!n!}$ is $2n - 1 - 2n + 2 = 1$.
-Since $a \le 1 \implies p^{a} < 2n$
-Am I on the right track? Any idea?
-Update
-Following Ross Millikan's hint:
-Let $a$ be the highest power of $p$ such that $p^{a}|n!$
-Then $a = \lfloor \frac{n}{p} \rfloor + \lfloor \frac{n}{p^2} \rfloor +\lfloor \frac{n}{p^3} \rfloor + \ldots + \lfloor \frac{n}{p^k} \rfloor$
-Let $b$ be the highest power of $p$ such that $p^{b}|(2n)!$
-Then $b = \lfloor \frac{2n}{p} \rfloor + \lfloor \frac{2n}{p^2} \rfloor +\lfloor \frac{2n}{p^3} \rfloor + \ldots + \lfloor \frac{2n}{p^q} \rfloor$
-$\Longrightarrow b - 2a$ is the highest power of $p$ such that $p^{b-2a}$ divides $\frac{(2n)!}{n!n!}$,
-where $a, b \in \mathbb{N} \implies b - 2a < b$.
-Besides, $p^{b} < 2n$
-$\therefore p^{b - 2a} < p^{b} < 2n$
-Am I on the right track now?
-Thanks,
-Chan
-REPLY [4 votes]: Here's a way I like a lot; I have outlined the steps:
-(Each step is a one line proof)
-(i) There are $\lfloor N/q \rfloor$ integers less than or equal to $N$ that are divisible by $q$.
-(ii) Deduce that the difference in the number of integers in the numerator and denominator of $\left({2N\atop N}\right)$ which are divisible by $q$ is $\lfloor 2N/q\rfloor -2\lfloor N/q\rfloor$.
-(iii) Show that this quantity equals either 0 or 1.
-(iv) Deduce that if $p^r$ divides $\left({2N\atop N}\right)$ then $p^r\leq 2N$.
-Hope that helps,
-Remark: This is an exercise I did a while ago from a book by Dr. Andrew Granville. It had a similar outline, and was not my invention.<|endoftext|>
-TITLE: Help with irreducible components
-QUESTION [5 upvotes]: I want to find the irreducible components of the algebraic set $Y$ in $\mathbb{A}^{3}$ given by the zero-locus of the equations $x^{2}-yz$ and $xz-z$. I also want to compute the dimension of $Y$.
-Well, if $xz-z=0$ then $x=1$ or $z=0$. If $z=0$ then $x=0$, so we obtain the $y$-axis.
-Now if $x=1$ then $yz-1=0$, hence:
-$Y = V_{1}(x,z) \cup V_{2}(x-1,yz-1)$. (Here $V_{i}$ denotes the zero-locus.)
-Now $k[x,y,z]/(x,z) \cong k[y]$, so $V_{1}(x,z)$ is irreducible because $k[y]$ is an integral domain.
-I'm stuck on showing $V_{2}(x-1,yz-1)$ is irreducible.
-Can you please help? (Also, in general how do you compute the dimension?)
-REPLY [7 votes]: We have that $k[V_2]=k[x,y,z]/(x-1,yz-1)\cong k[y,z]/(yz-1)$. You can show that the map from $V_2$ to $\mathbb{A}^1-\{0\}$ defined by $(x,y,z)\mapsto y$ is an isomorphism by showing that the corresponding morphism of coordinate rings $k[\mathbb{A}^1-\{0\}]=k[y,y^{-1}]\rightarrow k[y,z]/(yz-1)$ (i.e., the morphism defined by $y\mapsto y$, $y^{-1}\mapsto z$) is an isomorphism. Because $\mathbb{A}^1-\{0\}$ is a non-empty open set of the irreducible set $\mathbb{A}^1$, it is also irreducible, and hence $V_2$ is irreducible as well.
-I feel like this may be a bit roundabout, but I think it works.
-The dimension of $Y$ is 1, because a maximal-length chain of irreducible subsets of $Y$ will start at either of its irreducible components, each of which is of dimension 1. The irreducible components are a line, which is clearly of dimension 1 (also: its coordinate ring is $k[y]$, which is a ring of dimension 1), and the hyperbola $yz-1$, which (as we showed above) is isomorphic to $\mathbb{A}^1-\{0\}$; by Prop 1.10 in Hartshorne, $\dim(Y)=\dim(\overline{Y})$ for any quasi-affine variety $Y$.
-I often find it helpful to try to visualize what's going on - here is the output from Mathematica. The two planes are the zero locus of $xz-z$, and the cone is the zero locus of $x^2-yz$ - their intersection is, as expected, the $y$-axis and a hyperbola which lies on the plane $x=1$.<|endoftext|>
-TITLE: Not lifting your pen on the $n\times n$ grid
-QUESTION [24 upvotes]: Does there exist $n$, and $r<2n-2$, such that the $n\times n$ square grid can be connected with an unbroken path of $r$ straight lines?
-
-Note: This has essentially already been asked - see this MSE thread. I am posting this question because I wanted to explicitly focus on one of the unanswered parts.
-Notice that $2n-1$ is the trivial solution. This is obtained by either spiralling towards the center, or by zig-zagging up and down. On the other MSE thread, I posted an answer showing $2n-2$ was possible for $n\geq 3$ (this is obtained by reducing to the $3\times 3$ case). I felt $2n-2$ should be optimal, but I couldn't think of a proof. (Also, it is not necessarily clear that for a 100 billion by 100 billion grid you can't use some kind of cleverness to eliminate one single line.)
-This has been bothering me since I saw it, and I would like to know if anyone has any ideas.
-Thanks,
-Edit: This is regarding a question/answer I received: Not only are 45 degree lines allowed, but lines in any direction. Naturally 0, 45, 90 seem like the best since those angles will intersect the most lattice points, and we ask ourselves "Will any optimal solution consist only of lines at these angles?". The answer is a definite no. Consider the following solution to the $10\times 10$ grid which uses 18 lines (so $2n-2$, i.e. currently optimal). Also there is this solution to the $8\times 8 $ grid using 14 lines.
-Interestingly, this solution to the $10\times 10 $ generalizes to give another $2n-2$ line solution to the $n\times n$ grid when $n$ is even.
-(I found this $8\times8$ and $10\times 10$ solution on a link posted by user3123 on the other question page. Specifically it was this link.)
-REPLY [5 votes]: Since I have to explain Joriki's solution to all the people I mentioned the problem to, I decided to provide a typed solution written up in my own words. Although there are no new ideas, there doesn't seem to be any good reason not to just share it anyway:
-Lemma: Let $a\geq 1$, $b\geq 1$.
Given an $a\times b$ rectangular grid, without using horizontal or vertical lines it requires at least $a+b-2$ lines to cover all the points. (Here the path need not be connected.)
-Proof: Consider the exterior of the grid. If $a\geq 2$ and $b\geq 2$ then each diagonal line can cover at most two points, but there are $2a+2b-4$ points that must be covered. If $b=1$ then each diagonal line covers at most 1 point, and hence we need $a=a+b-1>a+b-2$ lines. Similarly if $a=1$.
-Given a solution to the $n\times n$ grid, let $h$ be the number of horizontal lines, and $v$ be the number of vertical lines used. If $h=n$, we must have at least $2n-1$ lines since it will take at least $n-1$ lines to connect all of these horizontal parts. Similarly for $v=n$.
-Now, let $v<|endoftext|>
-TITLE: Which is bigger: $9^{9^{9^{9^{9^{9^{9^{9^{9^{9}}}}}}}}}$ or $9!!!!!!!!!$?
-QUESTION [37 upvotes]: In my classes I sometimes have a contest concerning who can write the largest number in ten symbols. It almost never comes up, but I'm torn between two "best" answers: a stack of ten 9's (exponents) or a 9 followed by nine factorial symbols. Both are undoubtedly huge, but I haven't been able to produce an argument that one is larger (surely they aren't equal). Any insight into which of these two numbers is bigger would be greatly appreciated.
-REPLY [5 votes]: I find it quite sad that none of the answers has actually formally shown that
-
-$$9^{9^{9^{9^{9^{9^{9^{9^{9^9}}}}}}}} > 9!!!!!!!!!$$
-
-The process is a bit long, but it is worth it in my opinion. First, a simple calculation shows $9! < 9^8$. Then $9!! < (9^8)^{9^8} = 9^{8\cdot9^8}$. Then $$9!!!< \left(9^{8\cdot9^8}\right)^{9^{8\cdot9^8}} < \left(9^{9^9}\right)^{9^{8\cdot9^8}}=9^{9^{8\cdot9^8}\cdot9^9}=9^{9^{8\cdot9^8+9}}$$
-$$9!!!! < \left(9^{9^{9^9}}\right)^{9^{9^{8\cdot9^8+9}}}=9^{9^{9^{8\cdot9^8+9}}\cdot 9^{9^9}}=9^{9^{9^{8\cdot9^8+9}+9^9}}<9^{9^{9^{8\cdot9^8+10}}}$$
-Now, some notation. Let us use $f(x)$ for $9^x$ and $f^k(x)=f(f(... f(f(x))...))$. Let $x!^k$ denote $x!!!...!!!$ with $k$ factorials.
-Clearly, $f(x)+1<|endoftext|>
-TITLE: Is the cross product of two unit vectors itself a unit vector?
-QUESTION [26 upvotes]: Or, in general, what does the magnitude of the cross product mean? How would you prove or disprove this?
-REPLY [7 votes]: $|\vec{a}\times\vec{b}|=|\vec{a}||\vec{b}||\sin(\theta)|$
-Let $a,b$ be unit vectors, so we have $|a| = |b| = 1$. Then
-$|\vec{a}\times\vec{b}|=|\sin(\theta)| \le 1$ (equality is when $|\sin(\theta)| = 1$, i.e. when $a$ and $b$ are perpendicular).
-Therefore in general the result won't be a unit vector.<|endoftext|>
-TITLE: Probability of 3 of a kind with 7 dice
-QUESTION [8 upvotes]: Similar questions:
-Chance of 7 of a kind with 10 dice
-Probability of getting exactly $k$ of a kind in $n$ rolls of $m$-sided dice, where $k\leq n/2$
-Probability was never my thing, so please bear with me.
-I've reviewed the threads above to the best of my ability, but I still wonder how to go about finding a match of 3 from 7 dice.
-At least three match, but no more (two sets of three is okay, a set of three and a set of four is not):
-(a) : $ \frac{6 \binom{7}{3} 5^4}{6^7} $
-In the other discussions, this wasn't desired since it would allow for a second triple to occur, or even a quadruple.
-Odds of a quadruple with the remaining 4 dice: -(b) : $(1/5)^4 $ -Then, the probability that from rolling 7 dice that there is at least three that match, and no more than three, would be: -(c) : $ \frac{6 \binom{7}{3} * 5^4}{6^7}- (1/5)^4 $ -Exactly two sets of three: -(d) : $ \frac{6 \binom{7}{3} \binom{4}{3} \binom{1}{1}}{ 6^7} $ -Maybe? My thought process was that if $\binom{7}{3}$ will give me a set of three, then with the remaining 4, I could pick 3 yielding $\binom{4}{3}$ with 1 leftover. I realize this is probably wrong. Why? What would be the proper way to go about this? -Exactly one set of three: -Then to find the probability that there is one and only one set of three from 7 dice, we could take the probability of one or more sets of three (c) and subtract the probability of exactly two sets (d), for: -$ \frac{6 \binom{7}{3} 5^4}{ 6^7} - (1/5)^4 - \frac{6 \binom{7}{3} \binom{4}{3} \binom{1}{1} }{ 6^7} $ -(e) : $ \left(\frac{6 \binom{7}{3}}{ 6^7}\right) \left( 5^4 - \binom{4}{3} \binom{1}{1} \right) - (1/5)^4 $ -Is this at all on the right path? -Thank you! -PS. -Sorry about the syntax, but I couldn't figure out how to make the standard nCr() symbol with MathJaX. - -REPLY [6 votes]: You are on the right track with $6^7 = 279936$ as the denominator. To find how many cases have three but no more matching, I would start by looking at the four partitions of 7 into up to 6 parts where the largest is 3: 3+3+1, 3+2+2, 3+2+1+1, 3+1+1+1+1. You can then work out each systematically, taking into account both the numbers that appear and the order they appear in. -I think the first (which has two sets of three) is $$\frac{6!}{2!\;1!\;3!} \times \frac{7!}{3!\;3!\;1!} = 8400$$ which is ten times what you have in (d). The others (just one three-of-a-kind) are -$$\frac{6!}{1!\;2!\;3!} \times \frac{7!}{3!\;2!\;2!} + \frac{6!}{1!\;1!\;2!\;2!} \times \frac{7!}{3!\;2!\;1!\;1!} + \frac{6!}{1!\;4!\;1!} \times \frac{7!}{3!\;1!\;1!\;1!\;1!} = 113400$$. -So I get about 0.405 for the probability of exactly one three-of-a-kind (but no four or more) and about 0.435 for the probability of one or more threes-of-a-kind (but no four or more)<|endoftext|> -TITLE: Differential equation, quite weird task -QUESTION [6 upvotes]: I'm having some trouble while trying to understand one task. -The task is as follows: -$$\ddot{x}(t) + \dot{x}(t) + 2x(t) = \sin(\omega t)$$ -where $x(0) = 7, t\geq 0$ -The solution is in the following form: -$$x(t) = f(t) + A\sin(\omega t + \varphi)$$ -And the task is: find $\omega$ so that $A$ is max. -My understanding of this is that $f(t)$ is the solution of the homogeneous differential equation and the rest is the particular solution of the nonhomogeneous equation. Still that does not give me any clue about how to evaluate the relationship between $A$ and $\omega$. -Any clues? - -REPLY [4 votes]: The correct value of $\omega$ is not $\frac{\sqrt{7}}{2} \approx 1.32$. It's $\sqrt{\frac{3}{2}} \approx 1.22$. We are looking for a near resonance effect, but you can't actually have resonance when the harmonic oscillator is damped. This makes the analysis a little different. -A reference is the section on sinusoidal forcing in the Wikipedia page on the harmonic oscillator. From the formulas there you can see that the relationship between $A$ and $\omega$ is given by -$$A = \frac{1}{\sqrt{\omega^2 + (2 - \omega^2)^2}}.$$ -Solving $\frac{dA}{d \omega} = 0$ yields $\omega = \sqrt{\frac{3}{2}}$. - -Update: Here's the derivation.
-The trick is to generalize, and solve the differential equation $x'' + x' + 2x = e^{i \omega t}$. The resulting solution will have a real part and an imaginary part. Since $e^{i \omega t} = \cos \omega t + i \sin \omega t$, you actually want the imaginary part of the solution. -As the driving force is an exponential, we know that the particular solution must be of the form $x_p(t) = c e^{i \omega t}$. Subbing that into the differential equation produces the auxiliary equation $-c \omega^2 + i c\omega + 2c = 1$. Solving that for $c$ yields $$c = \frac{1}{a + i b} = \frac{a - i b}{a^2 + b^2},$$ -where $a = 2 - \omega^2$ and $b = \omega$. Thus the particular solution to the complex differential equation is -$$x_p(t) = \frac{a - i b}{a^2 + b^2} (\cos \omega t + i \sin \omega t),$$ of which the imaginary part is $$-\frac{b}{a^2 + b^2} \cos \omega t + \frac{a}{a^2+b^2} \sin \omega t.$$ Since $A$ is just the magnitude of this solution (you're doing a rotation to the vertical axis when converting to $A \sin (\omega t + \phi)$), we get $$A = \frac{1}{\sqrt{a^2+b^2}} = \frac{1}{\sqrt{\omega^2 + (2 - \omega^2)^2}}.$$<|endoftext|> -TITLE: Relationship between Cyclotomic and Quadratic fields -QUESTION [17 upvotes]: Since $\varphi(p)=p-1$ is even the p'th cyclotomic field contains some quadratic field. Hecke says that in fact every quadratic field is contained by some cyclotomic field. -What is this theorem called and how is it proved? - -REPLY [22 votes]: This is a special case of the Kronecker-Weber theorem, which says that any abelian extension of $\mathbb{Q}$ is contained inside some cyclotomic field; any quadratic extension of $\mathbb{Q}$ is automatically abelian. I don't believe the special case of the theorem for quadratic fields has a separate name. -However, one does not need the full power of this (very advanced) theorem. The following two steps are used in exercise 8 of Chapter 2 in Marcus's Number Fields to prove precisely the case of quadratic fields: - -Show that $\mathbb{Q}(\zeta_p)$ contains $\sqrt{p}$ if $p\equiv 1\bmod 4$ and $\sqrt{-p}$ if $p\equiv 3\bmod 4$. (Note: This step follows from results in the preceding chapter in Marcus about the discriminant of $\mathbb{Q}(\zeta_p)$ being -$$\prod_{1\leq r -TITLE: Smooth curve with no Frenet frame -QUESTION [11 upvotes]: Let $\gamma: I \rightarrow \mathbb{R}^n$ be a smooth curve. We define a Frenet frame to be an orthonormal frame $X_1, \ldots X_n$ such that for all $1 \leq k \leq n$, $\gamma^{(k)}(t)$ is contained in the linear span of $X_1(t), \ldots, X_k(t)$, $t \in I$. -What is an example of a smooth curve $\gamma$ with no Frenet frame? - -REPLY [7 votes]: I presume that implicit in the definition is that the frame varies smoothly (as Elliott pointed out). -Here is an example in $\mathbb R^2$: -\begin{equation} - \gamma(t) = \begin{cases} (\mathrm{e}^{1/t}, 0) &\text{ for $t < 0$} \\\\ -(0,0) &\text{ for $t = 0$} \\\\ -(0, \mathrm{e}^{-1/t}) &\text{ for $t > 0$} -\end{cases} -\end{equation} - -EDIT: -To give an embedded curve, I will modify the example to work in $\mathbb R^3$. -\begin{equation} - \gamma(t) = \begin{cases} (t, \mathrm{e}^{1/t}, 0) &\text{ for $t < 0$} \\\\ -(0,0, 0) &\text{ for $t = 0$} \\\\ -(t, 0, \mathrm{e}^{-1/t}) &\text{ for $t > 0$} -\end{cases} -\end{equation} -Observe that this curve is embedded with $\dot \gamma \ne 0$. This means the first vector field in the Frenet frame has to be $\frac{1}{|| \dot \gamma ||} \dot \gamma$. In particular then, at $t=0$, $X_1 = (1, 0, 0)$.
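-(Numerically, the obstruction one derivative up is already visible. The following finite-difference check is my own illustration, not part of the original argument:)
-
-    import numpy as np
-
-    def g(t):
-        if t < 0: return np.array([t, np.exp(1 / t), 0.0])
-        if t > 0: return np.array([t, 0.0, np.exp(-1 / t)])
-        return np.zeros(3)
-
-    def unit_second_derivative(t, h=1e-4):
-        v = (g(t + h) - 2 * g(t) + g(t - h)) / h**2   # central difference
-        return v / np.linalg.norm(v)
-
-    print(unit_second_derivative(-0.2))   # approximately (0, 1, 0)
-    print(unit_second_derivative(0.2))    # approximately (0, 0, 1)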
-We've now moved the problem to the second derivative. For $t < 0$, $\ddot \gamma$ is in the span of $(0, 1, 0)$ and for $t > 0$, $\ddot \gamma$ is in the span of $(0, 0, 1)$. This means that $X_2$ can't exist. - -Let me say that we have a partial Frenet frame if we can find such vector fields $X_1, \dots, X_m$, where $m < n$. If the first $m$ derivatives $\dot \gamma, \ddot \gamma, \dots, \gamma^{(m)}$ are linearly independent, we have the existence of a partial frame by applying Gram-Schmidt. -The way for the Frenet frame to fail to exist is for there to be an $m$ such that at some point, the span of $\{ \dot \gamma, \ddot \gamma, \dots, \gamma^{(m)} \}$ has dimension less than $m$. In both examples, at the image of $t=0$, the span dropped (for $m=1$ and $m=2$ respectively). Obviously, this is not a sufficient condition, since we can easily construct examples where things still work out. -This construction seems very similar to the one for obtaining a smooth family of matrices without a corresponding smooth frame of eigenvectors. I don't immediately see how to transform this question to that one. The question about eigenvectors has been discussed on MO before. (I just provide the link for curiosity's sake, unless someone can explain how to relate these two questions.)<|endoftext|> -TITLE: Probability density function vs. probability mass function -QUESTION [93 upvotes]: I've a confession to make. I've been using PDF's and PMF's without actually knowing what they are. My understanding is that density equals area under the curve, but if I look at it that way, then it doesn't make sense to refer to the "mass" of a random variable in discrete distributions. How can I interpret this? Why do we use "mass" and "density" to describe these functions rather than something else? -P.S. Please feel free to change the question itself in a more understandable way if you feel this is a logically wrong question. - -REPLY [5 votes]: The most basic difference between a probability mass function and a probability density function is that a probability mass function assigns probability to individual points: for example, to find the probability of getting the number 2, our whole attention is on the single value 2, and hence we use a pmf. With a pdf, by contrast, the probability is spread over the interval in which the variable lies, e.g. $-\infty \le X \le \infty$. -Always remember that the distinction between discrete and continuous depends on the range.<|endoftext|> -TITLE: Local ring on generic fiber -QUESTION [6 upvotes]: Let $\pi: X\to C$ be a fibration in curves where $C$ is a non-singular curve and $X$ a regular, integral surface and the generic fiber $X_\eta$ is a non-singular curve over $k(C)$ (these hypotheses might be stronger than necessary, but I just threw a bunch on to make it as nice as possible). -Now a point on $X_\eta$, say $p$, is also a point on $X$ itself. Note the generic point of $X$ is the generic point of $X_\eta$, say $\zeta$. All the other points on $X_\eta$ are closed in the curve and are height $1$ points on $X$. -I read in a paper that for any point on the generic fiber, $p$, we have $\mathcal{O}_{X,p}\simeq \mathcal{O}_{X_\eta, p}$, and at first I just thought to myself that it's obvious, but when I tried to actually think of a reason it wasn't so obvious. -If $X_\eta$ were open in $X$, then this would be clear since restricting to an open and then taking a stalk doesn't cause problems, but why should this still be true for the generic fiber which is neither open nor closed?
One noted consequence is that $k(X)=\mathcal{O}_{X,\zeta}=\mathcal{O}_{X_\eta, \zeta}=k(X_\eta)$. - -REPLY [5 votes]: Take affine open sets $U\subseteq X$ and $V\subseteq C$ such that $\pi|_U: U\rightarrow V$. Then $B:=O_X(U)$ is an $A:=O_C(V)$-algebra and $O_{X,p}=B_p$, where $p\in\mathrm{Spec}(B)$ satisfies $p\cap A=0$. -The generic fibre $X_\eta\cap U$ equals $\mathrm{Spec}((A\setminus 0)^{-1}B)$. Hence -$ -O_{X_\eta ,p} =((A\setminus 0)^{-1}B)_{p(A\setminus 0)^{-1}B}=B_p -$ -since $B\setminus p$ contains $A\setminus 0$.<|endoftext|> -TITLE: Derivation of Fourier Series? -QUESTION [5 upvotes]: Can someone point me to the full derivation of the Fourier Series? I'm having problems understanding how the a's and b's coefficients are worked out. - -REPLY [4 votes]: Imagine that $f(x) = \sum_n a_n \cos n x + \sum_n b_n \sin n x$ (for $x \in [0,2\pi]$, say). The idea for computing the $a_n$s and $b_n$s is that when -you write down integrals of the form $\int_0^{2\pi} \cos m x \cos n x\,dx,$ -or $\int_0^{2\pi} \sin m x \sin n x\,dx$, or $\int_0^{2 \pi} \cos m x \sin n x\,dx$, -then the integrals vanish (just compute them!) unless $m = n$ and the functions -coincide; and in the cases when they don't vanish, their values are easily computed. -So taking $f$, and then computing $\int_0^{2\pi} f(x) \cos n x\,dx$ or $\int_0^{2 \pi} -f(x) \sin n x\,dx$, one exactly reads off $a_n$ or $b_n$, up to the normalization factor $\pi$ (for the value of $n$ you -chose). This is where the formulas come from. -The way people normally think about this is as a kind of orthogonal projection: -the functions $\cos n x$ and $\sin n x$ are like orthogonal basis vectors in -a vector space, and the integral is like an inner product. So to find the -coefficient of a given basis vector (i.e. an $a_n$ or a $b_n$) one takes the -inner product against that particular basis vector.<|endoftext|> -TITLE: What is the importance of eigenvalues/eigenvectors? -QUESTION [338 upvotes]: What is the importance of eigenvalues/eigenvectors? - -REPLY [5 votes]: Eigenvalues and eigenvectors are central to the definition of measurement in quantum mechanics -Measurements are what you do during experiments, so this is obviously of central importance to a Physics subject. -The state of a system is a vector in a Hilbert space, an infinite-dimensional space of square-integrable functions. -Then, the definition of "doing a measurement" is to apply a self-adjoint operator to the state, and after a measurement is done: - -the state collapses to an eigenvector of the self-adjoint operator (this is the formal description of the observer effect) - - -the result of the measurement is the corresponding eigenvalue of the self-adjoint operator - -Self-adjoint operators have the following two key properties that allow them to make sense as measurements, as a consequence of infinite-dimensional generalizations of the spectral theorem: - -their eigenvectors form an orthonormal basis of the Hilbert space, therefore if there is any component in one direction, the state has a probability of collapsing to any of those directions -the eigenvalues are real: our instruments tend to give real numbers as results :-) - -As a more concrete and super important example, we can take the explicit solution of the Schrodinger equation for the hydrogen atom.
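-Both key properties are easy to check concretely in finite dimensions first (a small numerical sketch of my own, not taken from the references below):
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
-    H = (A + A.conj().T) / 2                       # a self-adjoint "observable"
-
-    evals, V = np.linalg.eigh(H)                   # eigh handles Hermitian matrices
-    print(evals)                                   # real numbers: the possible outcomes
-    print(np.allclose(V.conj().T @ V, np.eye(4)))  # eigenvectors are orthonormal: True
-
-    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
-    psi /= np.linalg.norm(psi)                     # a normalized state
-    probs = np.abs(V.conj().T @ psi) ** 2          # probability of each outcome
-    print(probs.sum())                             # 1.0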
For the hydrogen atom, the angular parts of the energy eigenfunctions are proportional to the spherical harmonics, the familiar orbital shapes. - -Therefore, if we were to measure the energy of the electron, we are certain that: - -the measurement would yield one of the energy eigenvalues -The energy difference between two energy levels matches experimental observations of the hydrogen spectral series and is one of the great triumphs of the Schrodinger equation - -the wave function would collapse to one of those functions after the measurement, which is one of the eigenfunctions of the energy operator - - -Bibliography: https://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics -The time-independent Schrodinger equation is an eigenvalue equation -The general Schrodinger equation can be simplified by separation of variables to the time independent Schrodinger equation, without any loss of generality: -$$ -\left[ \frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) \right] \Psi(\mathbf{r}) = E \Psi(\mathbf{r}) -$$ -The left side of that equation is a linear operator (infinite dimensional matrix acting on vectors of a Hilbert space) acting on the vector $\Psi$ (a function, i.e. a vector of a Hilbert space). And since E is a constant (the energy), this is just an eigenvalue equation. -Have a look at: Real world application of Fourier series to get a feeling for how separation of variables works for a simpler equation like the heat equation. -Heuristic argument of why Google PageRank comes down to a diagonalization problem -PageRank was mentioned at https://math.stackexchange.com/a/263154/53203 but I wanted to add one cute hand-wavy intuition. -PageRank is designed to have the following properties: - -the more links a page has incoming, the greater its score -the greater its score, the more the page boosts the rank of other pages - -The difficulty then becomes that pages can affect each other circularly, for example suppose: - -A links to B -B links to C -C links to A - -Therefore, in such a case - -the score of B depends on the score of A -which in turn depends on the score of C -which in turn depends on B -so the score of B depends on itself! - -Therefore, one can feel that theoretically, an "iterative approach" cannot work: we need to somehow solve the entire system in one go. -And one may hope that, once we assign the correct importance to all nodes, and if the transition probabilities are linear, an equilibrium may be reached: - Transition matrix * Importance vector = 1 * Importance vector - -which is an eigenvalue equation with eigenvalue 1. -Markov chain convergence -https://en.wikipedia.org/wiki/Markov_chain -This is closely related to the above Google PageRank use-case. -The equilibrium also happens on the vector with eigenvalue 1, and convergence speed is dominated by the ratio of the two largest eigenvalues. -See also: https://www.stat.auckland.ac.nz/~fewster/325/notes/ch9.pdf<|endoftext|> -TITLE: Fourier Transforms -QUESTION [5 upvotes]: I'm having a terrible time trying to understand Fourier transforms. I'm very visual so leaving the $X,Y,Z,t$ domain is not working for me :) -I'm trying to figure out the basics at the moment. Like, taking a Sine wave (they're odd right?) and converting it into its real and imaginary numbers. I'm pretty sure I got that working, but to make sure, what should the plotted data look like? -Also, how do I find the power spectrum of a transform? How do I use FT to identify the $n$ most significant frequencies in a signal? That last question shows how lost I am!
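-Here is roughly what I am trying for those last two, in case it helps to see it (a rough NumPy sketch I put together myself, so treat the names and the approach with suspicion):
-
-    import numpy as np
-
-    fs = 1000                                    # sample rate
-    t = np.arange(0, 1, 1 / fs)
-    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
-
-    X = np.fft.rfft(x)                           # the real/imaginary numbers
-    freqs = np.fft.rfftfreq(len(x), 1 / fs)
-    power = np.abs(X) ** 2                       # is this the power spectrum?
-
-    n = 2
-    top = freqs[np.argsort(power)[-n:]]          # the n biggest peaks?
-    print(sorted(top))                           # prints [50.0, 120.0] here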
-What I have: -I know how to get the real and imaginary numbers from a signal. I know how to get the phase and the magnitude. What I need to get is the power spectrum and the most significant frequencies. Also any dumbed down explanation of what's going on would be very helpful! -Thanks! - -REPLY [3 votes]: I'll try and give some intuition from an Electrical engineering perspective. It is useful to think of a Fourier transform as giving you the frequency domain picture of a given signal, i.e., it gives you a picture of both the amplitude and the phase of the different frequency components that make up the signal. -If you are familiar with the Fourier series representation of periodic signals, the Fourier transform can be viewed as an extension to non-periodic signals. Of course, not all continuous functions have a well defined Fourier transform, but if you look at typical engineering applications, that is not an issue. -When dealing with deterministic signals, the power spectrum is given by the squared magnitude of the Fourier transform. If $f(t)$ is the signal in the (continuous) time domain and $F(\omega)$ is the frequency domain representation, then the power spectrum is $|F(\omega)|^2$. -When dealing with random signals, the power spectral density is the Fourier transform of the autocorrelation function of the process (provided the process is wide-sense stationary). -The above explanations carry over to the discrete domain as well. So, if you are working with DFTs of signal samples, things are not significantly different. -As for the N most significant frequencies, I think you need to take the DFT of your signal and then look at the frequencies in decreasing order of amplitude.<|endoftext|> -TITLE: Representation of quaternion group over $\mathbb{C}$ and $\mathbb{R}$ -QUESTION [5 upvotes]: The quaternion group of order 8 has an irreducible two dimensional representation over $\mathbb{C}$ but how does one show that this representation cannot be defined over $\mathbb{R}$? - -REPLY [3 votes]: There are sophisticated ways you can do this, but I think it's best to push it through in a straightforward way. Here are two approaches; I'm deliberately not giving the details because this is a good homework problem: -Algebraic: Let $C$ be the subgroup $\{ 1, i, -1, -i \}$ of the quaternions. Let $V$ be the real representation of $C$ where $i$ acts by $\left( \begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix} \right)$. -Suppose that the two dimensional representation of the quaternions could be defined over $\mathbb{R}$. Show that the restriction of this representation to $C$ is isomorphic to $V$. In other words, you can always choose a basis where $i$ acts by $\left( \begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix} \right)$. Now write down the equations $ij=ji^3$ and $j^2=i^2$ and try to solve them. -Geometric: Any representation on $\mathbb{R}^2$ can be conjugated to preserve the standard inner product, meaning that the action is by rotations and reflections. A case by case analysis should show you pretty quickly that you can't find rotations and reflections of the plane that give an action of the quaternions.<|endoftext|> -TITLE: Prove that honeycomb structures are the most geometrically efficient structure -QUESTION [6 upvotes]: I was reading this paragraph and it got me thinking: - -The closed ends of the honeycomb cells - are also an example of geometric - efficiency, albeit three-dimensional - and little-noticed.
The ends are -trihedral (i.e., composed of three -planes) sections of rhombic -dodecahedra, with the dihedral angles -of all adjacent surfaces measuring -$120^\circ$, the angle that minimizes surface -area for a given volume. (The angle -formed by the edges at the pyramidal -apex is approximately $109^\circ 28^\prime 16^{\prime\prime}$ $\left(= -180^\circ - \cos^{-1}\left(\frac13\right)\right)$.) - -This is hardly intuitive; is there a proof of this somewhere? - -REPLY [12 votes]: If you want to divide space up into uniform volume cells with minimum surface area, the honeycomb is not optimal. Look at the Weaire–Phelan structure. While honeycombs are not quite optimal, they are certainly close enough for bees -- they're suboptimal by only 0.3%.<|endoftext|> -TITLE: Prime factorization of square numbers -QUESTION [8 upvotes]: Let $n$ be a natural number with unique prime factorization $p^m \cdots q^k$. Show that $n$ can be written as a square if and only if all of the exponents $m, \ldots, k$ are even - -REPLY [3 votes]: If $$n = \prod_{i = 1}^\infty {p_i}^{\alpha_i} \in \mathbb Z,$$ where $p_i$ are the primes in order by $i$ and $\alpha_i$ are the corresponding exponents which may be $0$ as needed, we then have $$\sqrt n = \sqrt{\prod_{i = 1}^\infty {p_i}^{\alpha_i}} = \prod_{i = 1}^\infty {p_i}^{\frac{\alpha_i}{2}}.$$ But if any of the $\alpha_i$ are odd, then $\frac{\alpha_i}{2}$ is not an integer and neither is ${p_i}^{\frac{\alpha_i}{2}}$. -For example, consider $129600 = 2^6 \times 3^4 \times 5^2 \times 7^0 \times \ldots$. The square root is found thus: $\sqrt{129600} = 2^3 \times 3^2 \times 5^1 \times 7^0 \times \ldots$ -Now compare $648000 = 2^6 \times 3^4 \times 5^3 \times 7^0 \times \ldots$. The square root is found thus: $$\begin{align}\sqrt{648000} & = 2^3 \times 3^2 \times 5^{\frac{3}{2}} \times 7^0 \times \ldots \\ & = 2^3 \times 3^2 \times 5 \sqrt 5 \\ & = 360 \sqrt 5 \\ & \approx 804.98447\end{align}$$<|endoftext|> -TITLE: Lusin's theorem -QUESTION [8 upvotes]: Lusin's theorem states that for every $\varepsilon$, for every Borel measure $\mu$, for every measurable function $f:\mathbb{R}^n\to\mathbb{R}^m$, for every open set $A$ of finite measure, there exists a compact set $K$ such that $\mu(A-K)<\varepsilon$ and $f$ restricted to $K$ is continuous. -Now, I'm wondering what the compact set $K$ looks like in the case of the Dirichlet function. Can you help me? - -REPLY [10 votes]: I found an answer by myself. First, let's enumerate the rationals, say $\{q_n\}$. Then consider the covering $O_n=(q_n-\frac{\varepsilon}{2^n},q_n+\frac{\varepsilon}{2^n})$ for $n=1,2,\ldots$. It's straightforward to check that the measure of $\cup O_n$ is at most $2\varepsilon$ (start from $\varepsilon/2$ if you want the bound $\varepsilon$). Now it suffices to take the complement $K$ of $\cup O_n$ in $A$, which is compact. The Dirichlet function on $K$ is identically $0$, hence continuous.<|endoftext|> -TITLE: On the density of $C[0,1]$ in the space $L^{\infty}[0,1]$ -QUESTION [11 upvotes]: It's easy to show $C[0,1]$ is not dense in $L^{\infty}[0,1]$ in the norm topology, but $C[0,1]$ is dense in $L^{\infty}[0,1]$ in the weak*-topology when we take $L^{\infty}$ as the dual of $L^{1}$. How does one prove it? - -REPLY [12 votes]: Let $\psi_n$ be a sequence of standard mollifiers, symmetric about 0. If $f \in L^\infty$, $g \in L^1$, it is a quick application of Fubini's theorem to see that $\int (f * \psi_n) g = \int f (\psi_n * g)$, where $*$ denotes convolution.
Since $f * \psi_n$ is continuous and $\psi_n * g \to g$ in $L^1$, we are done.<|endoftext|> -TITLE: Examples of non symmetric distances -QUESTION [31 upvotes]: It is well known that the symmetric property $d(x,y)=d(y,x)$ is not necessary in the definition of distance if the triangle inequality is carefully stated. On the other hand there are examples of functions satisfying -(1) $d(x,y)\geq 0$ and $d(x,y)=0$ if and only if $x=y$ -(2) $d(x,y)\leq d(x,z)+d(z,y)$ -which are not symmetric: in the three point space $(a,b,c)$ take the non-zero values of $d$ as $1=d(a,b)=d(c,b)$, $2=d(b,a)=d(b,c)=d(a,c)=d(c,a)$. -Do you know other examples of "non symmetric distances"? Are there examples on the real numbers, etc.? Are there examples of spaces where every function satisfying (1) and (2) is symmetric? - -REPLY [5 votes]: I came here looking for something else and was surprised that nobody has mentioned distances induced by convex bodies (these are classically called Minkowski distances, and "convex distance functions" in the Computational Geometry literature). -Let $K$ be an arbitrary convex body (= compact convex set with non-empty interior) in $\mathbb R^d$ with the origin in its interior. You can define the $K$-norm $||v||_K$ of a vector $v$ to be the unique $\lambda\in [0,\infty)$ with $v \in \lambda\,\partial K$, and the $K$-distance between $a$ and $b$ to be $||b-a||_K$. -If $K$ is centrally symmetric then these are a norm and a distance in the usual sense (the Euclidean distance and all the $L_p$ distances are examples). -If $K$ is not centrally symmetric then this distance is not symmetric ($||-v||\ne ||v||$ in general), but it still satisfies the asymmetric triangle inequality.<|endoftext|> -TITLE: Did the Appel/Haken graph colouring (four colour map) proof really not contribute to understanding? -QUESTION [20 upvotes]: I hope this isn't off topic - sorry if I'm wrong. -In 1976, Kenneth Appel and Wolfgang Haken proved the claim (conjecture) that a map can always be coloured with four colours, with no adjacent regions given the same colour. This was controversial because the proof process required a computer to evaluate many different cases. -From what I've read, the controversy was (1) that no human could reasonably confirm the proof, and (2) that being a computer proof, all it did was give a yes/no answer to the question - it didn't contribute to the understanding of the problem. -The first issue seems to be a non-issue now - the proof has been replicated using different software. But the second issue seems to stand - and I don't really understand that issue. -Quite a few proofs require a number of particular cases of the problem to be separately checked. Having eliminated every possible case where the answer might have been "no" is a routine way of proving that the answer is "yes". Does it really make any difference in principle whether the number of cases is 2 or 2 million? Does the scale of that number make any difference to the degree of human understanding of the problem? -And, given that people wrote the software that evaluated all the cases, all the ideas underlying the proof seem to be understandable by, and to have been understood by, people. -To me, having evaluated a thousand different variations on the same kind of a problem shows no more understanding than evaluating one.
Solving a thousand quadratic equations, for instance, is much the same as solving one - all the repetitions are just plugging different numbers into the same formula, or repeating the same completing-the-square procedure, or whatever. -Therefore, I am very much impressed that Appel and Haken were able to understand the problem in sufficient depth that they could write a program to derive what all the special cases are and check them. Writing software to reliably determine all the cases often shows even deeper understanding than manual derivation of all the cases, where deeper understanding can be bypassed to a degree by trial-and-error. -Getting the computer to run the program, once written, seems irrelevant to me. The program presumably could have been (eventually) executed by a person, in the same way that it could have been executed by a Turing machine, but doing the mechanical follow-the-steps stuff seems irrelevant to depth of understanding. -Is there something I'm missing? - -REPLY [33 votes]: When people say that Appel-Haken did not really contribute understanding, they aren't necessarily talking about the four-color problem itself. They mean that the proof did not really contribute any understanding of mathematics. -A famous example when the proof of a long-standing conjecture really did contribute a lot of understanding is Andrew Wiles' proof of Fermat's Last Theorem. People weren't excited about this proof because they cared directly about the answer to FLT; they were excited because the proof brought together and integrated ideas from many disciplines and represented progress on the much larger Langlands program. Similarly, Perelman's proof of the Poincare Conjecture introduced important new ideas and tools. This was why many experts were convinced that Perelman's proof was valid even before it was formally checked; the high-level summary contained enough nontrivial ideas that they already saw that there was something extremely interesting going on. -In other words, the problem with Appel-Haken is that it is boring. It was more or less an application of an already-understood technique, just on a larger scale, and so has led to very little interesting new mathematics. People are still looking for a conceptual proof of the Four-Color Theorem analogous to the two proofs above, for example people working in quantum topology; see this blog post by Noah Snyder and Kainen's Quantum interpretations of the four-color theorem. A proof along these lines would be much more interesting, as it would likely shed light on a number of other issues in quantum topology. - -This principle applies to more than just long-standing open problems. Many problems you might encounter in an undergraduate course might be solvable by a tedious computation along the lines of the Appel-Haken proof, but often there exist much more interesting conceptual proofs along the lines of Wiles' proof. If you stopped after finding the tedious calculation you might never find the conceptual argument, which often turns out to be much more interesting (e.g. it naturally suggests generalizations, interesting concepts, ties to other branches of mathematics...). -I will take your comment about quadratic equations as an example. It's true that the quadratic formula allows you to solve quadratic equations mechanistically. The question, then, is whether the quadratic formula leads to any significant conceptual understanding of polynomials. For example, does it suggest a natural route to the cubic formula? 
-If you think of the quadratic equation in terms of completing the square, then you quickly run into a problem: you cannot, in general, complete the cube. So completing the square does not generalize to cubic equations. If you want to understand cubic equations, it follows that you need to think about the quadratic formula more conceptually. -The conceptual breakthrough is the following: what the quadratic formula really shows you is that there is a symmetry to the roots of a quadratic polynomial. The roots -$$x_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a}$$ -and -$$x_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}$$ -are conjugate: they are related by a symmetry which flips the sign of the square root. This symmetry manifests itself concretely in the fact that the sum -$$x_1 + x_2 = - \frac{b}{a}$$ -is completely invariant under flipping the sign of the square root, whereas the sum -$$x_1 - x_2 = \frac{ \sqrt{b^2 - 4ac} }{a}$$ -is completely negated under flipping the sign of the square root. Taking this idea seriously leads you to the method of Lagrange resolvents, and now if you were born in the right century you would be well on your way to inventing group theory, Galois theory, and (if you were really observant) representation theory. -Isn't that way more interesting than using the quadratic formula?<|endoftext|> -TITLE: Order of operations in evaluating a polynomial -QUESTION [6 upvotes]: I have the following function -$$f(x) = 3x^3 - 5x^2 - 4x + 4.$$ -I would like to find the value of $f(x)$ when $x = -3$. -I have ordered the equation in the following way. -$$f(-3) = (3 \times -3^3) - (5 \times -3^2) - (4 \times -3) + 4 = -28.$$ -Can anyone tell me if the following syntax is correct and if not, where have I gone wrong. -Many Thanks. - -REPLY [4 votes]: When you write $5*-3^2$, the precedence of operations means that you first compute $3^2$, then you multiply the result by $-1$, then you multiply the entire thing by $5$. This is not what you want. -Don't be scared of parentheses! The best way to write this, if you want to insist on putting the $*$ in it (we normally just write multiplication by juxtaposition) would be: -$$f(-3) = (3*(-3)^3) - (5*(-3)^2) - (4*(-3)) + 4.$$ -More succinctly, you can write it as: -$$f(-3) = 3(-3)^3 - 5(-3)^2 - 4(-3) + 4.$$ -By the precedence order, you would first compute the exponentials, then the products, then the sums. -That said, I don't think you computed $f(-3)$ correctly.<|endoftext|> -TITLE: having trouble with limit of integral -QUESTION [7 upvotes]: How do I solve the following? -$$\lim_{x\to 0} \int_0^1 \cos\left(\frac{1}{xt}\right)\, dt$$ - -REPLY [4 votes]: Here's another take. Since $\cos \left(\frac{1}{xt}\right)$ is an even function of $x$, we can take the limit from the right or from the left, and the result is the same. So assume $x > 0$. Then apply the $u = 1/(xt)$ substitution as in joriki's answer, followed by integration by parts in the opposite direction from joriki's answer. 
Rewriting the result in terms of the sine integral produces the output from Wolfram Alpha mentioned by Eivind and Sivaram: -$$ \lim_{x \to 0} \left(\frac{1}{x} \int_0^{1/x} \frac{\sin u}{u} du - \frac{\pi}{2x} + \cos \left(\frac{1}{x}\right)\right).$$ -Now, apply the following known series expansion for the sine integral $\int_0^x \frac{\sin u}{u} du$ (valid for large values of $x$): -$$\int_0^x \frac{\sin u}{u} du = \frac{\pi}{2} -\frac{\cos x}{x} \left(1 - \frac{2!}{x^2} + \frac{4!}{x^4} \pm \cdots \right) - \frac{\sin x}{x} \left(\frac{1}{x} - \frac{3!}{x^3} \pm \cdots \right).$$ -After simplifying, we are left with -$$\lim_{x \to 0} \left(-\cos \left(\frac{1}{x}\right) \left(- 2! x^2 + 4! x^4 \pm \cdots \right) - \sin\left(\frac{1}{x}\right) \left(x - 3! x^3 \pm \cdots \right) \right).$$ -Since $\cos \left(\frac{1}{x}\right)$ and $\sin \left(\frac{1}{x}\right)$ are both bounded by $-1$ and $1$, the limit is $0$. -In addition, we can see that the dominant term in the limit is $- x\sin\left(\frac{1}{x}\right)$, which is clearly bounded by $|x|$, as conjectured by Sivaram in the comments.<|endoftext|> -TITLE: How to prove that if $\#A = n$, then $\#A^{k} = n^{k}$? And what about the formula $\frac{n!}{(n-k)!}$? -QUESTION [6 upvotes]: I'm starting a mini-course on Combinatorics and, although I can "see" the results, I'm having difficulties proving them. -For instance, - -Being $\#A = n \in \mathbb{N}$, prove that $\#A^{k} = n^{k}$, for $k \in \mathbb{N}$. - -I understand that $A^{k} = \{(x_1, \ldots, x_k) : x_i \in A\}$, and that we have $n \times n \ldots \times n$ ($k$ times) possible layouts for a sequence like $(x_1, \ldots, x_k)$, but I don't know what to use to prove the result. -Can you suggest some plan? If possible, could it be advice that would give me traction to prove the following related results? -Thanks for the time you took to read my question. I highly appreciate it. - -Update: Since this question is so straightforward (see comments), and was only presented with the intent of getting acquainted with the tools of the trade, I shall move on to another result I wish I could prove. - -Prove that the number of elements in $\{(x_1, \ldots, x_k) : x_i \in A, i \in \{1, \ldots, k\}, x_i \neq x_j \Longleftrightarrow i \neq j, j \in \{1, \ldots, k\}\}$ is given by the formula $\frac{n!}{(n-k)!}$. - -Thanks for your replies so far! - -REPLY [2 votes]: To address the second question, the set in question consists of the elements of $A^k$ such that no two entries are the same. Since $A$ has $n$ elements, you then have $n$ choices for the first entry, and $n-1$ choices for the second entry, as you cannot choose the element which you placed in the first entry. You then have $n-2$ choices for the third entry, $\dots$, $n-(k-1)=n-k+1$ choices for the $k^{\text{th}}$ entry, for the same reasoning. But notice -$$ -n(n-1)(n-2)\cdots(n-k+1)=\frac{n(n-1)\cdots(n-k+1)(n-k)\cdots 1}{(n-k)\cdots 1}=\frac{n!}{(n-k)!}. -$$<|endoftext|> -TITLE: Function $\mathbb{R}\to\mathbb{R}$ that is continuous and bounded, but not uniformly continuous -QUESTION [20 upvotes]: I found an example of a function $f: \mathbb{R}\to\mathbb{R}$ that is continuous and bounded, but is not uniformly continuous. It is $\sin(x^2)$. I think it's not uniformly continuous because the derivative is bigger and bigger as $x$ increases. But I don't know how to prove this. Is $\sin(x^2)$ uniformly continuous then? If it isn't, can you guys think of any other examples?
thanks - -REPLY [2 votes]: By the MVT, $|\sin(x^2)-\sin(y^2)|=|2k \cos(k^2)|\,|x-y|$, where $k$ is some point between $x$ and $y$; the factor $|2k\cos(k^2)|$ is unbounded, so no single $\delta$ can work for every pair $x,y \in \mathbb{R}$. More explicitly, take $x_n=\sqrt{2\pi n+\pi/2}$ and $y_n=\sqrt{2\pi n}$: then $x_n-y_n\to 0$ while $|\sin(x_n^2)-\sin(y_n^2)|=1$. Thus $\sin(x^2)$ is not uniformly continuous on $\mathbb{R}$.<|endoftext|> -TITLE: How to find the solution for $\frac{2x-3}{x+1} \leq 1$? -QUESTION [8 upvotes]: I have the following inequality: -$$\frac{2x-3}{x+1}\leq1$$ -so, considering $x \neq -1$, I started by multiplying both sides by $x+1$: -$$2x-3\leq x+1$$ -then I subtracted $x$ from both sides: -$$x-3\leq1$$ -and then added $3$ to both sides: -$$x\leq4$$ -Therefore, my solution for $x\neq-1$ is: -$$(-\infty,4]$$ -But the book solution is: -$$(-1,4]$$ -What did I do wrong? - -REPLY [7 votes]: To preserve the $\:\le\:$ you must multiply by $\rm\ (x+1)^2\ $ not $\rm\ x+1\:,\:$ namely -$\rm\quad\quad\quad\quad\quad\quad\ \displaystyle\frac{2x-3}{x+1}\ \le\ 1$ -$\rm\quad\quad\iff\quad \displaystyle\frac{x-4}{x+1}\ \ \le\ 0 $ -$\rm\quad\quad\iff\quad (x+1)\ (x - 4)\ \le\ 0,\quad x\ne -1 $ -$\rm\quad\quad\iff\quad\ x\ \in\ (-1,4\:] $ - -REPLY [3 votes]: As others have mentioned, multiplying by $x+1$ forces you to consider cases at the outset. Instead, you can write $\frac{2x-3}{x+1} \leq 1$ as $\frac{2x-3}{x+1} -1 \leq 0$. Simplify this into $\frac{p(x)}{q(x)} \leq 0$ and consider when a fraction is negative.<|endoftext|> -TITLE: Rewrite equation to solve for $x$, not $y$ -QUESTION [5 upvotes]: I am doing calculus integration, and need to show my work for Horizontal slicing (even though Vertical slicing is far easier). -The equation is $$y= x/\sqrt{2x^2+1}$$ -I need to rewrite the equation so that it is $x=\;...$ in order to horizontally slice it (in other words, it should be rewritten so that it is dependent on $y$). -This isn't exactly a calculus question, although it is being used for calculus. I'm probably missing something that is pretty obvious. -Any help would be greatly appreciated! - -REPLY [6 votes]: Here's a tip: take reciprocals. -Notice $$\left( \frac{1}{y}\right)^2=\left( \frac{\sqrt{2x^2+1}}{x}\right)^2=\frac{2x^2+1}{x^2}=2+\frac{1}{x^2}$$ -Then we get $$\frac{1}{x^2}=\frac{1}{y^2}-2$$ -Take reciprocals again and we find -$$x^2=\frac{1}{\frac{1}{y^2}-2}$$ -Take square roots, and we are finished. -Hope that helps -Edit: Just to make things complete I decided to add the final line: $$x=\pm \sqrt{\frac{y^2}{1-2y^2}}$$<|endoftext|> -TITLE: Realification and Complexification of vector spaces -QUESTION [20 upvotes]: I am interested in a good comprehensive resource on realification and complexification of vector spaces over the reals or complexes (and the interplay of these structures on the 'same' space in general). -In particular, understanding of the basic theory is necessary and useful for a more intuitive approach towards functional analysis. -Can you give me a tip? For example, Serge Lang's classical book does not explicitly work this part out. I am aware of a few pages in Arnold's book on ODE, but there should be something more comprehensive and neat somewhere out there. - -REPLY [9 votes]: Please find §12 "Complexification and Decomplexification" in the book Linear Algebra and Geometry by Kostrikin & Manin (1989), pages 75-81. -There you will find an excellent answer to your question (according to my point of view).<|endoftext|> -TITLE: A commutative group structure on $R\times R$ for a ring $R$ -QUESTION [10 upvotes]: Let $R$ be a commutative ring.
The Cartesian square $A=R\times R$ is endowed with the operation -$(a_1,b_1)\circ(a_2,b_2)=(a_1+a_2,b_1+b_2+a_1a_2^2+a_1^2a_2)$ -which turns $A$ into a commutative group. I have two questions concerning this group. -Question 1: For what $R$ is $(A,\circ)$ isomorphic to $R\oplus R$? -I managed to show that if $3$ is invertible in $R$ then the isomorphism holds. -It does not hold for $R=\mathbb{F}_3$, $\mathbb{F}_9$, or $\mathbb{F}_{27}$. -Other rings $R$ in which $3$ is not invertible (including $R=\mathbb{Z}$) remain a mystery to me. -Question 2: When is $(A,\circ)$ generated by the elements of the form $(a,0)$, $a\in R$? -Again, only partial results here. Let $B$ be the subgroup of $A$ generated by $(a,0)$, $a\in R$. Then $B$ contains $(0,a_1a_2^2+a_1^2a_2)$ for all $a_1,a_2\in R$. Hence, -we have $B=A$ if $R$ is additively generated by the elements of the form $a_1a_2^2+a_1^2a_2$. This is so for $R=\mathbb{Z}/p\mathbb{Z}$ with $p$ odd. - -REPLY [2 votes]: Ok, so inspired by Arturo's answer, here is another partial answer that includes the Gaussian integers and the 3-adic integers: -Define $\tau:R\to R:n\mapsto 2\binom{n+1}{3}$ and notice the formal equality -$$2\cdot\binom{n+m+1}{3} = 2\cdot\binom{n+1}{3}+2\cdot\binom{m+1}{3} + (mn^2+m^2n)$$ -Then define $[a,b] = (a, b + \tau(a))$ to be a different coordinate system on $(R\oplus R,\circ)$. Then $[a_1,b_1]\circ[a_2,b_2] = [a_1+a_2, b_1+b_2]$, so the group is clearly isomorphic to $R\oplus R$. -Suppose $R$ is a domain with field of fractions $K$. Then $\tau:R\to K:n\mapsto 2\binom{n+1}{3}$ definitely exists, so one just needs some condition for $\tau(R)\subseteq R$. I thought this was at least sort of common, but I can't think of any real examples other than $\mathbb{Z}$. -Certainly $R=\mathbb{Z}[x]$ doesn't work for $\tau$, though both $R\oplus R$ and $A$ will be free abelian of countably infinite rank. -I guess the only improvement is that we don't need every element of $R$ to be divisible by 3, only the elements of the form $x^3-x$. -I believe the ring $R=\mathbb{Z}[i]$ works for this, since $\mathbb{Z}[i]/(3) \cong \mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ satisfies $x^3-x \equiv 0$. This ring is less exciting in some ways though, since the additive group is free abelian. -Rings that are even nicer, where all binomial coefficients exist, are called binomial rings. For example the p-adic integers for any prime p (and where p=3 is the interesting one for us) also work. The additive group of p-adic integers is not free abelian, so this is a real gain.<|endoftext|> -TITLE: How to Use Big O Notation -QUESTION [12 upvotes]: In my question about the convergence/divergence of -$$ -\sum_{n=2}^\infty \frac{1\cdot 3\cdot 5\cdot 7\cdots (2n-3)}{2^nn!}. -$$ -here: Why Doesn't This Series Converge? -Zarrax gave the answer: -"You can use Taylor approximations here. Note that the ratio between consecutive terms is ${2n - 3 \over 2n} = \exp(\ln(1 - 3/2n)) = \exp(-{3 \over 2n} + O(1/n^2))$. So the product is comparable to $\exp(-{3 \over 2} \sum_{i = 2}^n {1 \over n} + O(1/n))$, which in turn is comparable to $\exp(-{3 \over 2} \ln(n))$ or $n^{-{3 \over 2}}$. Thus the series converges." -I've decided to give a talk in a graduate seminar about the danger of coming up with examples off the top of your head and now wish to understand what this answer means. The problem is that I have no exposure to big O notation and am not having luck online. Basically, I don't understand the answer at all. I can break it into a few questions: -How does Zarrax pass from $\exp(\ln(1 - 3/2n))$ to $\exp(-{3 \over 2n} + O(1/n^2))$? -To which product is Zarrax referring in his third sentence?
And how is it comparable to $\exp(-{3 \over 2} \sum_{i = 2}^n {1 \over n} + O(1/n))$? -How does Zarrax pass from that to $\exp(-{3 \over 2} \ln(n))$? - -Thank you for your help. - -REPLY [10 votes]: To answer your questions: -1) He is using $\log(1-x) = -x-x^2/2-x^3/3 - \cdots = -x + O(x^2).$ -2) The product he's referring to is -$$\frac{1\cdot 3\cdot 5\cdot 7\cdots (2n-3)}{2^nn!}$$ -and it's comparable to $\exp(-{3 \over 2} \sum_{i = 2}^n {1 \over n} + O(1/n))$ because this is the product of $n-1$ versions of $\exp(-{3 \over 2n} + O(1/n^2)).$ -3) He uses $\sum_{i=2}^n 1/i = \log n + \text{constant} + O(1/n).$ - -REPLY [9 votes]: When an expression uses $\mathcal{O}(1/n^2)$ it means that this is actually a function $f$ such that $f \in \mathcal{O}(1/n^2)$. This is a very convenient abuse of notation. -Note that $\mathcal{O}(1/n^2)$ is actually a set of functions. A function $f \in \mathcal{O}(1/n^2)$ iff there is a constant $C$ (dependent on $f$) such that $|f(n)| \le \frac{C}{n^2}$ for all sufficiently large $n$, i.e. as $n \to \infty$. -Note: we are talking about positive integers for now, the definitions are also valid when $n$ is real or when $n \to A$, where $A$ need not be $\infty$, in which case, we talk about the inequality holding in a neighbourhood of $A$. Usually, what $A$ is, is clear from the context and is left out (another convenience). -This symbol is called BigOh (as you seem to know already). You can find more information about that here: http://en.wikipedia.org/wiki/Big_O_notation -So when you use $\mathcal{O}(1/x^2)$, in the expression -$\ln(1-\frac{1}{x}) = -\frac{1}{x} + \mathcal{O}(1/x^2)$ what we really mean is that -for some $f \in \mathcal{O}(1/x^2)$ we have that -$\ln(1-\frac{1}{x}) = -\frac{1}{x} + f(x)$ -Now you can read Derek's answer :-) I guess I don't have to repeat it. -Note that when Derek says -$\ln(1- x) = -x + \mathcal{O}(x^2)$, he is talking about $x$ in the neighbourhood of $0$. -There are plenty of other answers here which use the BigOh. -Here are a few of them (I especially recommend you read the first one) - -Convergence of $\sqrt{n}x_{n}$ where $x_{n+1} = \sin(x_{n})$ -Big $\mathcal{O}$ Notation question while estimating $\sum \frac{\log n}{n}$ -Proving $\sum\limits_{p \leq x} \frac{1}{\sqrt{p}} \geq \frac{1}{2}\log{x} -\log{\log{x}}$ -What is the expression of $n$ that equals to $\sum_{i=1}^n \frac{1}{i^2}$?<|endoftext|> -TITLE: Why is the eigenvector of a covariance matrix equal to a principal component? -QUESTION [141 upvotes]: If I have a covariance matrix for a data set and I multiply it times one of its eigenvectors. Let's say the eigenvector with the highest eigenvalue. The result is the eigenvector or a scaled version of the eigenvector. -What does this really tell me? Why is this the principal component? What property makes it a principal component? Geometrically, I understand that the principal component (eigenvector) will be sloped at the general slope of the data (loosely speaking). Again, can someone help me understand why this happens? - -REPLY [11 votes]: If we project our data $D$ onto a vector $\vec{v}$, the projected data is obtained as $\vec{v}^{\intercal} D$, and its covariance matrix (a scalar here, the variance) then becomes $\vec{v}^{\intercal} \Sigma \vec{v}$.
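-To see that quantity in action numerically (a small NumPy sketch of my own, not part of the original answer):
-
-    import numpy as np
-
-    rng = np.random.default_rng(1)
-    D = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=5000)
-
-    S = np.cov(D.T)                      # covariance matrix of the data
-    evals, evecs = np.linalg.eigh(S)     # eigenvalues in ascending order
-    v = evecs[:, -1]                     # unit eigenvector with largest eigenvalue
-
-    print(v @ S @ v, evals[-1])          # v^T S v equals the largest eigenvalue
-    u = rng.normal(size=2)
-    u /= np.linalg.norm(u)               # any other unit direction...
-    print(u @ S @ u <= v @ S @ v + 1e-12)  # ...gives no larger variance: True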
-Since the largest eigenvector is the vector that points into the direction of the largest spread of the original data, the vector $\vec{v}$ that points into this direction can be found by choosing $\vec{v}$ such that the variance $\vec{v}^{\intercal} \Sigma \vec{v}$ of the projected data is as large as possible. -Maximizing any function of the form $\vec{v}^{\intercal} \Sigma \vec{v}$ with respect to $\vec{v}$, where $\vec{v}$ is a normalized unit vector, can be formulated as a so-called Rayleigh quotient. The maximum of such a Rayleigh quotient is obtained by setting $\vec{v}$ equal to the largest eigenvector of the matrix $\Sigma$. -In other words: the largest eigenvector of $\Sigma$ corresponds to the principal component of the data. -If the covariances are zero, then the eigenvalues are equal to the variances (see the first figure at the link below). If the covariance matrix is not diagonal, the eigenvalues represent the variance along the principal components, whereas the diagonal entries of the covariance matrix still describe the variance along the coordinate axes (second figure). -An in-depth discussion (and the source of the above images) of how the covariance matrix can be interpreted from a geometrical point of view can be found here: http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/<|endoftext|> -TITLE: How to compute the following definite integral -QUESTION [5 upvotes]: Studying some integral table, I came across the following definite integral -$$\int_0^{\pi} \log [ a^2 + b^2 -2 a b \cos \phi ]\,d\phi$$ for $a,b \in \mathbb{R}$. Does somebody know a nice way to get the result? - -REPLY [9 votes]: Here is a different way, and it is strangely the first thing that came to my mind. It uses generating series, and power series as well as other various techniques: -(Warning!: It is significantly more complicated) -Assume that $a\geq b\geq0$ without loss of generality. Notice that our integral is $$\pi \log(a^2+b^2)+\int_{0}^{\pi}\log\left( 1-\frac{2ab\cos\phi}{a^2+b^2}\right)d\phi. $$ Since $\log(1-x)=-\sum_{i=1}^{\infty}\frac{x^{i}}{i}$ we have -$$\int_{0}^{\pi}\log\left( 1-\frac{2ab\cos\phi}{a^2+b^2}\right)d\phi=-\int_{0}^{\pi}\sum_{i=1}^{\infty} \frac{1}{i}\left(\frac{2ab\cos\phi}{a^2+b^2}\right)^i d\phi$$ -Switch the order of integration and summation to find: (More on why this is legal at the end) -$$-\int_{0}^{\pi}\sum_{i=1}^{\infty} \frac{1}{i}\left(\frac{2ab\cos\phi}{a^2+b^2}\right)^i d\phi=-\sum_{i=1}^{\infty} \frac{1}{i}\left(\frac{2ab}{a^2+b^2}\right)^i \int_{0}^{\pi} \cos^i\phi\, d\phi$$ -For $n$ odd we have $\int_0^\pi \cos^n x\,dx=0$. For even $n=2r$ we see by integration by parts that $\int_{0}^{\pi}\cos(x)^{2r}dx=\frac{2r-1}{2r}\int_{0}^{\pi}\cos(x)^{2r-2}dx$, so that induction yields $$\int_{0}^{\pi}\cos(x)^{2r}dx=\pi\prod_{i=1}^{r}\frac{2i-1}{2i}=\pi\frac{\left({2r\atop r}\right)}{4^{r}}$$ -Thus our sum becomes -$$-\pi \sum_{r=1}^{\infty} \frac{1}{2r}\left(\frac{ab}{a^2+b^2}\right)^{2r} \left({2r\atop r}\right ) $$ -Now let $$f(z)=\sum_{r=1}^{\infty} \frac{1}{r} \left({2r\atop r}\right) \frac{z^r}{4^r}$$ -Then we need to find $-\frac{\pi}{2} f\left(\left( \frac{2ab}{a^2+b^2} \right)^2\right)$. -Let $Y=\left( \frac{2ab}{a^2+b^2} \right)^2$. -Recall the generating series $$\sum_{n=0}^{\infty}\left({2n\atop n}\right)x^{n}=\frac{1}{\sqrt{1-4x}}. $$ Differentiating yields $1+xf'(x)=\frac{1}{\sqrt{1-x}}$ and hence $f(Y)=\int_{0}^{Y}\frac{1-\sqrt{1-x}}{x\sqrt{1-x}}dx$.
Make the substitution $x=\sin^{2}(u)$, and we get $$f(Y)=\int_{0}^{\theta}\frac{1-\cos(u)}{\sin^{2}(u)\cos(u)}2\sin(u)\cos(u)du=2\int_{0}^{\theta}\csc(u)-\cot(u)\,du$$ where $\theta$ is in the first quadrant and satisfies $\sin (\theta)=\frac{2ab}{a^2+b^2}.$ -Then since $\int\csc x\, dx=-\log\left|\csc x+\cot x\right|$, and $\int\cot x\, dx=\log\left|\sin x\right|$, we see that $\int\csc(u)-\cot(u)\,du=-\log|\sin(u)\csc(u)+\sin(u)\cot u|=-\log|1+\cos(u)|$. Thus $$f(Y)=-2\log|1+\cos(u)|\biggr|_{u=0}^{u=\theta}$$ -Drawing the triangle with sides $2ab$, $a^2-b^2$ and $a^2+b^2$ tells us that $\cos(\theta)=\frac{a^2-b^2}{a^2+b^2}$, and hence $$f(Y)=2\log 2- 2\log \left(1+\frac{a^2-b^2}{a^2+b^2}\right)$$ -Thus the final answer is $$\pi \log(a^2+b^2)-\frac{\pi}{2}f(Y)=\pi\log(a^2+b^2)-\pi \log 2 +\pi\log\left(1+\frac{a^2-b^2}{a^2+b^2}\right)=2\pi\log(a)$$ - -REPLY [6 votes]: Note that by symmetry we can assume $|b| \geq |a|$. Since $\cos(\phi)$ is even, your integral is half of the integral from $-\pi$ to $\pi$, or -$${1 \over 2}\int_{-\pi}^{\pi} \log(a^2 + b^2 - 2ab\cos(\phi))\,d\phi$$ -Note that $|a^2 + b^2 - 2ab\cos(\phi)|$ is the same as $|ae^{i\phi} - b|^2$, so your integral becomes -$$\int_{-\pi}^{\pi}\log|ae^{i\phi} - b|\,d\phi$$ -Thinking of $e^{i\phi}$ as a complex number $z$, the integrand is the real part of the function $\ln(az - b)$, which is analytic on the interior of the unit disk (we use that $|b| \geq |a|$). Thus the integrand is harmonic and therefore satisfies the mean-value property: the integral over the unit circle is $2\pi$ times the value at the center of the circle, which is $\log|a\cdot 0 - b| = \log|b|$ in this case. So your answer is $2\pi\log|b|$, which is the same as $2\pi\max(\log|a|,\log|b|)$ since we assume $|b| \geq |a|$. -Technical point: we glossed over applying the mean-value property when $|b| = |a|$ and thus the integrand has a singularity at the boundary; but any of a number of limiting techniques will deal with this case as well.<|endoftext|> -TITLE: Intuition behind Conditional Expectation -QUESTION [123 upvotes]: I'm struggling with the concept of conditional expectation. First of all, if you have a link to any explanation that goes beyond showing that it is a generalization of elementary intuitive concepts, please let me know. -Let me get more specific. Let $\left(\Omega,\mathcal{A},P\right)$ be a probability space and $X$ an integrable real random variable defined on $(\Omega,\mathcal{A},P)$. Let $\mathcal{F}$ be a sub-$\sigma$-algebra of $\mathcal{A}$. Then $E[X|\mathcal{F}]$ is the a.s. unique random variable $Y$ such that $Y$ is $\mathcal{F}$-measurable and for any $A\in\mathcal{F}$, $E\left[X1_A\right]=E\left[Y1_A\right]$. -The common interpretation seems to be: "$E[X|\mathcal{F}]$ is the expectation of $X$ given the information of $\mathcal{F}$." I'm finding it hard to get any meaning from this sentence. - -In elementary probability theory, expectation is a real number. So the sentence above makes me think of a real number instead of a random variable. This is reinforced by $E[X|\mathcal{F}]$ sometimes being called "conditional expected value". Is there some canonical way of getting real numbers out of $E[X|\mathcal{F}]$ that can be interpreted as elementary expected values of something? -In what way does $\mathcal{F}$ provide information? To know that some event occurred, is something I would call information, and I have a clear picture of conditional expectation in this case.
To me $\mathcal{F}$ is not a piece of information, but rather a "complete" set of pieces of information one could possibly acquire in some way. - -Maybe you say there is no real intuition behind this, $E[X|\mathcal{F}]$ is just what the definition says it is. But then, how does one see that a martingale is a model of a fair game? Surely, there must be some intuition behind that! -I hope you have got some impression of my misconceptions and can rectify them. - -REPLY [3 votes]: I was only able to understand the notion of the conditional expectation with respect to a sub-$σ$-algebra $\mathcal F$, when I realized that this game is only interesting when $\mathcal F$ is "not Hausdorff", meaning that there might be points $x$ and $y$ which cannot be separated by an $\mathcal F$-measurable set. Any $\mathcal F$-measurable function must therefore coincide on $x$ and $y$, so $E(X|\mathcal F)$ tries to be the best photograph of the random variable $X$ which coincides on $x$ and $y$, as well as on any other similar pairs of points. -In the event that $\mathcal F$ is the smallest sub-$σ$-algebra possible, namely $\mathcal F = \{\emptyset, \Omega\}$, -only constant functions are measurable. So $E(X|\mathcal F)$ must be a constant function, and that constant turns out to be the -average of $X$, a.k.a. the expectation of $X$. - -PS. This is a comment I made in a recent question (Reference for conditional expectation) which in turn brought me here to this 10 year old question when I clicked on a "Related" post. Reading the answers I did not find anyone referring to the above point of view, so I hope my little contribution will be useful to someone.<|endoftext|> -TITLE: Upper bound for $n^{3-n}\sum_{k=1}^{n/2} \binom{n-2}{k-1}k^{k-2}(n-k)^{n-k-2}$ -QUESTION [5 upvotes]: I am trying to upper bound the following sum: -$$\sum_{k=1}^{n/2} \frac{\binom{n-2}{k-1}k^{k-2}(n-k)^{n-k-2}}{n^{n-3}}.$$ -Based on numerical computations, it seems like the upper bound is a constant (there is also another complicated proof that suggested the upper bound should be a constant). Any idea how to prove this? Stirling's approximation does not seem to help: using Stirling's (in a loose way) I can show that the sum is $O(\log n)$. -A related bound that would imply the bound on the above sum is to show that -$$ \frac{\binom{n-2}{k-1}k^{k-2}(n-k)^{n-k-2}}{n^{n-3}} \leq \frac{c}{k^2}$$ -for some constant $c$. -Thanks! - -REPLY [3 votes]: Let's do some rearranging. First split up the binomial notation: $$\frac{\binom{n-2}{k-1}k^{k-2}(n-k)^{n-k-2}}{n^{n-3}}=\frac{(n-2)!k^{k-2}(n-k)^{n-k-2}}{(k-1)!(n-k-1)!n^{n-3}}$$ -Multiply the numerators and denominators to remove the $-1$'s and $-2$'s: -$$=\frac{n!}{(n-1)n}\cdot\frac{k}{k!}\cdot\frac{n-k}{(n-k)!}\cdot\frac{k^{k}}{k^{2}}\cdot\frac{(n-k)^{n-k}}{(n-k)^{2}}\frac{n^{3}}{n^{n}}.$$ -Now rearrange again so that Stirling's formula jumps out at us: -$$=\frac{n}{(n-1)}\frac{n}{k\cdot(n-k)}\cdot\frac{n!}{n^{n}}\cdot\frac{k^{k}}{k!}\cdot\frac{(n-k)^{(n-k)}}{(n-k)!}.$$ -Applying Stirling's formula roughly, this becomes -$$\approx\frac{n}{(n-1)}\frac{n}{k\cdot(n-k)}\cdot\left(\sqrt{2\pi n}e^{-n}\right)\cdot\left(\frac{1}{\sqrt{2\pi k}}e^{k}\right)\cdot\left(\frac{1}{\sqrt{2\pi(n-k)}}e^{n-k}\right)$$ -$$=\frac{n}{(n-1)\sqrt{2\pi}}\cdot\left(\frac{n}{k(n-k)}\right)^{3/2}$$ -Now compare the last piece to the integral -$$\int_{1}^{n-1}\left(\frac{n}{x(n-x)}\right)^{3/2}dx.$$ -This integral is bounded by a constant for every $n$ so the proof is finished.
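-As a quick numerical confirmation that the original sum really does stay bounded (an exact-arithmetic Python sketch of my own):
-
-    from fractions import Fraction
-    from math import comb
-
-    def S(n):
-        total = Fraction(0)
-        for k in range(1, n // 2 + 1):
-            total += (comb(n - 2, k - 1) * Fraction(k) ** (k - 2)
-                      * Fraction(n - k) ** (n - k - 2) / Fraction(n) ** (n - 3))
-        return float(total)
-
-    for n in (10, 20, 50, 100, 200):
-        print(n, S(n))   # the values stay bounded, hovering just above 1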
(The bound can be placed on the integral by a partition trick that yields an infinite geometric series.)
-Hope that helps,<|endoftext|>
-TITLE: How to compute the Gel'fand Models for a (quantum) Lie Algebra
-QUESTION [5 upvotes]: Given a Lie algebra $\mathfrak{g}$, how does one approach finding the Gel'fand models? For clarity, by this I mean
-
-$\bigoplus_{\lambda\in P^+}V(\lambda)$ where $P^+$ are the dominant weights, and $V(\lambda)$ is the highest weight representation of weight $\lambda$.
-
-One can calculate the weight modules and just take their sum, however I would like something more succinct.
-For example, consider the simple case of $sl_2(\mathbb C)$. This Gel'fand model is simply the complex two-variable polynomials. One sees this by writing the highest weight representations of $sl_2(\mathbb C)$ as homogeneous polynomials in variables $x,y$ by considering the Leibniz action of $sl_2(\mathbb C)$ on $\mathbb{C}\langle x,y \rangle$. By summing these you get the polynomials in two variables.
-I find this particularly intuitive. However, in the more general situation of $sl_n$, I don't see how to do this. Note, I am particularly interested in showing they are isomorphic to rings with nicer forms (I don't care to argue about what I mean by nicer, I think we both know).
-What I am even more interested in is this question for quantized universal enveloping algebras, and again a nice simple case would be $U_q(sl_n)$. Again, our simple case $U_q(sl_2)$ I know and like: the quantum plane, the two-variable polynomials modulo $xy-qyx$ for a parameter $q$.
-I know of a paper or two that mention some of these, but none that I have found explains how to see this for the general type A case. In particular, papers about the quantum version are especially rare. References are appreciated. I would also appreciate proofs for other specific cases, they might be enlightening.
-
-Note: This coincides with the homogeneous coordinate ring for $sl_n$.
-
-Thanks in advance!
-Edit: A large discussion has taken place with Mariano below. He pointed out that my previous language was incorrect, and has helped me identify the correct question that I wished to ask. Hail to the chief! (I hope he doesn't mind I call him chief. :/)
-
-REPLY [6 votes]: For a semisimple Lie algebra, the representation ring is a polynomial ring, and can be described quite concretely as the invariant ring $\mathbb Z[\Lambda]^W$ of the group algebra $\mathbb Z[\Lambda]$ of the weight lattice $\Lambda$ under the natural action of the Weyl group $W$. In the quantum case with $q$ not a root of unity, the ring has a similar description, as the deformation is not strong enough to mess much with it, in a sense; if $q$ is a root of unity, things are considerably more complicated.
-The classical case is treated in pretty much any good text on representation theory of semisimple Lie algebras, in one form or another. For example, the ever great Representation Theory by Fulton and Harris.
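-To see the invariant-ring description in the smallest example (a standard computation spelled out here, not taken from the answer above): for $\mathfrak{sl}_2$ the weight lattice is $\Lambda \cong \mathbb Z$, so $\mathbb Z[\Lambda] = \mathbb Z[x,x^{-1}]$, and the Weyl group $W = \mathbb Z/2$ acts by $x \mapsto x^{-1}$. The invariant ring is $\mathbb Z[x,x^{-1}]^{W} = \mathbb Z[x+x^{-1}]$, a polynomial ring in one variable, where $x+x^{-1}$ is the character of the two-dimensional representation $V(1)$.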
The quantum case is treated in the corresponding quantum books :)<|endoftext|>
-TITLE: Limit approaches infinity on one side and negative infinity on other side
-QUESTION [5 upvotes]: I know this is a simple question for most of you, but I am currently studying for a Calculus exam and was just wondering why an online calculator I am using to double-check my work was disagreeing with me on this question:
-$$\lim_{x\to 0} \cot(x)\sec(x)$$
-I reduce this down to $\frac{1}{\sin(x)}$, and in the case $x\to 0^-$ the limit is equal to negative infinity; and if $x\to 0^+$, the limit is equal to positive infinity.
-Doesn't this mean that the limit as $x\to 0$ does not exist? I use the calculator (linked below), and while it verifies that the two sides approach opposite infinities, it solves the entire limit as approaching "infinity". What does this mean?
-http://www.numberempire.com/limitcalculator.php
-
-REPLY [6 votes]: Your analysis is correct. Alternatively, $\sec(x)\to 1$ as $x\to 0$, and you can deal with $\cot(x)$, which goes to $\infty$ as $x\to 0^+$ and to $-\infty$ as $x\to 0^-$.
-Note, though, that the fact that each one-sided limit does not exist is already enough to tell you the limit does not exist. (Saying that the limit equals $\infty$ or $-\infty$ is not saying that the limit exists, it is saying that the limit does not exist and explaining why: because the values of the function grow without bound, either in the positive direction or in the negative direction, respectively). Even though we write things like
-$$\lim_{x\to 0}\frac{1}{x^2} = \infty$$
-this limit does not exist.
-As to the limit calculator at your link, I don't know what it means when it says a two-sided limit is $\infty$, since it says the same thing for $\lim\limits_{x\to 0}\frac{1}{x}$. In other words, it means that the on-line calculator is either not giving the correct answer, or else it means something other than what we think it means.<|endoftext|>
-TITLE: What is a quotient ring and cosets?
-QUESTION [16 upvotes]: So I am trying to understand what a coset is and what a quotient ring is.
-So I am going to tell you guys what I know. And please let me know if my thinking is right or wrong, and if I am missing something. For the rest of this post, assume $R$ is a ring and $I$ is an ideal of that ring.
-So $I = (m)$ is a principal ideal generated by $m$ where $m \in R$.
-Now the congruence class of $a$ modulo $I$ is denoted by $[a]_m$, but this congruence class can also be written as $a + (m)$ or simply $a + I$.
-Now this congruence class is obviously a set. So is this the coset of it? So for any $a$ that you choose in $R$, when you "add" the ideal to it (generated by $m$, which is also in $R$) you get a coset.
-The quotient ring $R/I$ just means ALL the cosets of $I$ in $R.$ So does this say that if, hypothetically speaking, $m = 3,$ then $1 + (3)$ is one coset, $2 + (3)$ is another coset, and hence the quotient ring is ALL the cosets for every possible $a?$
-I hope I make sense. If someone could send me a link to an easy (introduction to algebra) article or a "tutorial", that would be appreciated. I am using Hungerford's Algebra.
-If anyone can explain this to me in easy English, that is appreciated.
-Thanks
-
-REPLY [30 votes]: Remember that every equivalence relation induces a partition on the set on which you have defined the relation. An ideal $I$ defines an equivalence relation on the set $R$ by saying that $a\sim b$ if and only if $a-b\in I$; we express this by saying that $a$ is congruent to $b$ modulo $I$.
-That means that $R$ is partitioned into equivalence classes under this "congruent modulo $I$" relation. The equivalence classes are called "cosets" (I claimed some time ago this is short for "congruence set", but have been unable to substantiate this; but you can surely imagine that it is). The cosets are the equivalence classes.
-Now, what is the equivalence class of an $a\in R$? It is the set of all things that are congruent to $a$ modulo $I$; this consists exactly of all elements of the form $a+x$ with $x\in I$, so we write it as
-$$ a + I = \{ a+x \mid x\in I\}.$$
-This is its description as a set. If we want to think of it in terms of the equivalence relation and remember that it is the equivalence class of $a$, then we use the standard notation for equivalence classes and write $[a]$ (or $[a]_I$, or $[a]_m$, to remind us also of which ideal $I$ we are dealing with).
-When you ask if $a+I$ is "the coset of it", it is unclear to me who "it" is. But, $a+I$ is the equivalence class of $a$, so it is the coset of $a$ (since "the coset of $x$" just means "the equivalence class of $x$ under the equivalence relation 'congruent modulo $I$'").
-The reason that when you "add" the ideal you get the coset is just because of what the definition of the equivalence relation is: every element of the form $a+i$ is congruent to $a$ modulo $I$, because $a-(a+i) = -i\in I$; and if $b$ is congruent to $a$ modulo $I$, then $a-b=i$ for some $i\in I$, so we get that $b=a-i$. That is, every element of the form $a+x$ with $x\in I$ is in the coset, and everything in the coset is of the form $a+x$ with $x\in I$. So the notation $a+I$ is both suggestive and useful.
-Now, the set $R/I$ is just the set of equivalence classes; as a set, the elements are the cosets. Each coset has many different names, since $[a]_I = [b]_I$ whenever $a\sim b$.
-As a ring, $R/I$ is the ring whose elements are the cosets/equivalence classes, and whose operations $\oplus$ and $\odot$ are defined by
-$$\begin{align*}
-[x]_I \oplus [y]_I &= [x+y]_I,\\
-[x]_I\odot[y]_I &= [x\cdot y]_I
-\end{align*}$$
-where $+$ and $\cdot$ are the operations in $R$. (We later drop the distinction between $+$ and $\oplus$, but the point here is that they are operations defined on different sets, so they are really different functions, though very closely related).
-In your example, yes: $R/I$ is the collection of all cosets, but remember that the same coset may have many different names. So, for instance, if $R=\mathbb{Z}$ and $I=(3)$, then $R/I$ consists of all cosets, which are sets of the form
-$$ a + (3) = \{ \ldots, a+3(-2), a+3(-1) , a+3(0), a+3(1), a+3(2), a+3(3),\ldots\}$$
-for all $a$. But as it happens, every single coset is equal to either $0+(3)$, $1+(3)$, or $2+(3)$, so in fact $R/I$ has only three elements, even though each of those elements has infinitely many names:
-\begin{align*}
-0+(3) &= 3+(3) = 6+(3) = 9+(3) =\cdots = 3k+(3),\\
-1+(3) &= 4+(3) = 7+(3) = 10+(3) = \cdots = 3k+1 + (3),\\
-2+(3) &= 5+(3) = 8+(3) = 11+(3) = \cdots = 3k+2 + (3)
-\end{align*}
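-The same bookkeeping is easy to see in a few lines of code (a small illustration; `coset` is just a name for this sketch):
-
-def coset(a, span=range(-9, 10)):
-    # the part of a + (3) that falls inside `span`
-    return [n for n in span if (n - a) % 3 == 0]
-
-print(coset(1))   # [-8, -5, -2, 1, 4, 7]
-print(coset(4))   # the same list: 1 + (3) and 4 + (3) are the same coset
-print((2 + 2) % 3, (2 * 2) % 3)   # [2] (+) [2] = [1] and [2] (.) [2] = [1] in Z/(3)<|endoftext|>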
-TITLE: What are the postulates that can be used to derive geometry?
-QUESTION [12 upvotes]: What are the various sets of postulates that can be used to derive Euclidean geometry?
-It might be nice to have several different approaches together for comparison purposes and for ready reference.
-It might also be interesting to include an axiomatization (or two) of elliptical geometry.
-
-REPLY [3 votes]: Axioms used in Euclid's Elements as translated by J. L. Heiberg
-Link: http://farside.ph.utexas.edu/euclid/Elements.pdf
-Definitions
-
-A point is that of which there is no part.
-And a line is a length without breadth.
-And the extremities of a line are points.
-A straight-line is (any) one which lies evenly with points on itself.
-And a surface is that which has length and breadth only.
-And the extremities of a surface are lines.
-A plane surface is (any) one which lies evenly with the straight-lines on itself.
-And a plane angle is the inclination of the lines to one another, when two lines in a plane meet one another, and are not lying in a straight-line.
-And when the lines containing the angle are straight then the angle is called rectilinear.
-And when a straight-line stood upon (another) straight-line makes adjacent angles (which are) equal to one another, each of the equal angles is a right-angle, and the former straight-line is called a perpendicular to that upon which it stands.
-An obtuse angle is one greater than a right-angle.
-And an acute angle (is) one less than a right-angle.
-A boundary is that which is the extremity of something.
-A figure is that which is contained by some boundary or boundaries.
-A circle is a plane figure contained by a single line [which is called a circumference], (such that) all of the straight-lines radiating towards [the circumference] from one point amongst those lying inside the figure are equal to one another.
-And the point is called the center of the circle.
-And a diameter of the circle is any straight-line, being drawn through the center, and terminated in each direction by the circumference of the circle. (And) any such (straight-line) also cuts the circle in half.
-And a semi-circle is the figure contained by the diameter and the circumference cut off by it. And the center of the semi-circle is the same (point) as (the center of) the circle.
-Rectilinear figures are those (figures) contained by straight-lines: trilateral figures being those contained by three straight-lines, quadrilateral by four, and multilateral by more than four.
-And of the trilateral figures: an equilateral triangle is that having three equal sides, an isosceles (triangle) that having only two equal sides, and a scalene (triangle) that having three unequal sides.
-And further of the trilateral figures: a right-angled triangle is that having a right-angle, an obtuse-angled (triangle) that having an obtuse angle, and an acute-angled (triangle) that having three acute angles.
-And of the quadrilateral figures: a square is that which is right-angled and equilateral, a rectangle that which is right-angled but not equilateral, a rhombus that which is equilateral but not right-angled, and a rhomboid that having opposite sides and angles equal to one another which is neither right-angled nor equilateral. And let quadrilateral figures besides these be called trapezia.
-Parallel lines are straight-lines which, being in the same plane, and being produced to infinity in each direction, meet with one another in neither (of these directions).
-
-Postulates
-
-Let it have been postulated to draw a straight-line from any point to any point.
-And to produce a finite straight-line continuously in a straight-line.
-And to draw a circle with any center and radius.
-And that all right-angles are equal to one another.
-And that if a straight-line falling across two (other) straight-lines makes internal angles on the same side (of itself whose sum is) less than two right-angles, then the two (other) straight-lines, being produced to infinity, meet on that side (of the original straight-line) that the (sum of the internal angles) is less than two right-angles (and do not meet on the other side).
-
-Common Notions
-
-Things equal to the same thing are also equal to one another.
-And if equal things are added to equal things then the wholes are equal.
-And if equal things are subtracted from equal things then the remainders are equal.
-And things coinciding with one another are equal to one another.
-And the whole [is] greater than the part.<|endoftext|>
-TITLE: Are there any areas of mathematics that are known to be impossible to formalise in terms of set theory?
-QUESTION [21 upvotes]: Are there any areas of mathematics that are known to be impossible to formalise in terms of set theory?
-
-REPLY [5 votes]: The notion of constructive proof in mathematics, as understood by constructivists themselves, is an inherently informal notion. There are certainly many formalized analogues of constructive mathematics, which are worth studying. But constructivists reject the notion that any of these captures the notion of constructive proof, and they often make the stronger claim that constructive proof cannot be formalized.<|endoftext|>
-TITLE: When a conditional probability becomes a mapping to probability measures
-QUESTION [7 upvotes]: In the Wikipedia article for conditional expectation, conditional probability is defined in terms of conditional expectation.
-
-Given a sub-sigma-algebra of the one on a probability space.
-Given a probability space $(\Omega, \mathcal{F}, P)$, a conditional probability $P(A \mid \mathcal{B})$ of a measurable subset $A \in \mathcal{F}$ given a sub-sigma-algebra $\mathcal{B}$ of $\mathcal{F}$ is defined as the conditional expectation $E(1_A \mid \mathcal{B})$ of the indicator function $1_A$ of $A$ given $\mathcal{B}$, i.e. $$ P(A \mid \mathcal{B}) := E(1_A \mid \mathcal{B}), \quad \forall A \in \mathcal{F}.$$
-So actually the conditional probability $P(\cdot \mid \mathcal{B})$ is a mapping $: \Omega \times \mathcal{F} \rightarrow \mathbb{R}$.
-A conditional probability $P(\cdot \mid \mathcal{B})$ is called regular if $P(\cdot \mid \mathcal{B})(\omega)$, $\forall \omega \in \Omega$, is also a probability measure.
-Question:
-What are some necessary and/or sufficient conditions for a conditional probability $P(\cdot \mid \mathcal{B})$ to be regular?
-
-Given a random variable on a probability space.
-Suppose $(\Omega, \mathcal{F}, P)$ is a probability space and $(U, \mathcal{\Sigma})$ is a measurable space. There seem to be two ways of defining the conditional expectation $E(X\mid Y)$ of a r.v. $X: \Omega \rightarrow \mathbb{R}$ given another r.v. $Y: \Omega \rightarrow U$, either as a $\sigma(Y)$-measurable mapping $: \Omega \rightarrow \mathbb{R}$, or as a $\Sigma$-measurable mapping $: U \rightarrow \mathbb{R}$, as in my previous post.
-If one lets $X$ be the indicator function $1_A$ for some $A \in \mathcal{F}$, one can similarly define $E(1_A \mid Y)$ to be the conditional probability of $A$ given $Y$, denoted as $P(A\mid Y)$.
-Therefore $P(\cdot \mid Y)$ is a mapping $: \Omega \times \mathcal{F} \rightarrow \mathbb{R}$ or a mapping $: U \times \mathcal{F} \rightarrow \mathbb{R}$.
-Questions:
-(1) What are some necessary and/or sufficient conditions for $P(\cdot \mid Y)$ to be regular, i.e. to be a mapping $: \Omega \rightarrow \{ \text{probability measures on }(\Omega, \mathcal{F}) \}$ or a mapping $: U \rightarrow \{ \text{probability measures on }(\Omega, \mathcal{F}) \}$?
-(2) Under what kinds of conditions will $P(X \mid Y)$ defined as above be equal to the ratio $\frac{P(X, Y)}{P(Y)}$, the definition used in elementary probability?
-
-Thanks and regards! References (links or books) will also be appreciated!
-
-REPLY [7 votes]: Conditional probabilities do not give a unique function on the sample space. Since conditional expectations are only defined up to a measure zero set and one has to make an uncountable number of selections, the essential problem is whether one can "glue" them together in a coherent way, so that you can actually calculate conditional probabilities by integrating the function. There are several notions of regular conditional probabilities and this paper by Faden gives necessary and sufficient conditions for some of them. For the particular version you mentioned, little is known about necessary conditions. The strongest results on the existence of regular conditional probabilities can be found in this paper by Pachl, but he only requires them to be measurable with respect to the completion of the measure. The machinery he uses is rather sophisticated: his method is based on using a lifting that he then shows (under some condition, compactness) to give a countably additive probability.
-The most extensive resource on conditional probabilities is probably the book Conditional Measures and Applications by M.M. Rao. The book is not recommended for its readability. Your question is addressed in chapter 3 in a comprehensive manner.<|endoftext|>
-TITLE: Proving $0$ is an ordinal
-QUESTION [7 upvotes]: In Introduction to Set Theory by J. Donald Monk, he defines ordinal as follows.
-
-Definition (1): $A$ is an ordinal iff $A$ is $\in$-transitive and each member of $A$ is $\in$-transitive. $A$ is $\in$-transitive iff for all $x$ and $y$, $x \in y \in A \implies x \in A$.
-
-Using definition (1), I have a problem showing $0$ is an ordinal.
-My solution: Let $x \in y \in 0$; then show that $x \in 0$. But by the theorem shown before this, there cannot exist an $x \in 0$ for any $x$, for if $x \in 0$, then $x$ is a set and $x \neq x$, which is absurd. Hence there cannot be an $x$ such that $x\in 0$. But if this is the case, we cannot show the above theorem by the definition. Any ideas?
-Thanks for your help.
-Edit: Initially I had 2 questions to ask. But later on, I found the answer to my first question. That is why I deleted it, and the question posted above is my second question.
-
-REPLY [2 votes]: Seoral: A statement of the form "for all $x\in A$ blah" literally means "for all $x$, if it is the case that $x\in A$, then blah".
-Now: As you mentioned, $y\in 0$ is false for any $y$. We need to show that if $x\in y\in 0$ then $x\in 0$. More precisely, this means: For all $x$ and $y$, if $y\in 0$ and $x\in y$, then $x\in 0$. Since $y\in 0$ is false, we need to show that for all $x$ and $y$, (false and $x\in y$) implies $x\in 0$. Recall that "false and blah" is false, so this is just: For all $x$ and $y$, false implies $x\in 0$.
-But false implies anything! (More precisely, statements $p\to q$ are true whenever $p$ is false. They are also true when $q$ is true, but that doesn't matter here.)
So, we have "for all $x$ and $y$, true", which is the same as "true". I.e., you have proved what you needed.
-One usually doesn't express situations as the one above this way; instead, people talk of statements being "vacuously true", as pointed out in a comment above.
-Let's review: 0 is transitive, because any element of 0 is a subset. (Precisely, because there are no elements of 0.)
-Similarly, any element of 0 is transitive. Again, because there are no elements of 0.
-This is somewhat strange the first time one encounters it, since it is also the case that any element of 0 fails to be transitive (and that any element of 0 is blue, etc).<|endoftext|>
-TITLE: property about topological space
-QUESTION [7 upvotes]: I have a question.
-Let $f,g$ be continuous functions from $X$ to $Y$, where $X$ is a topological space and $Y$ is a topological space under the order topology. Then prove that the set $\{x \in X \ | \ f(x) < g(x)\}$ is open.
-I want to know what intrinsic property of the order makes this possible.
-
-REPLY [7 votes]: Look at the map $(f,g)\colon X\to Y\times Y$. This map is continuous because $f$ and $g$ are.
-Your set $\{x \in X \mid f(x) < g(x)\}$ is the preimage under $(f,g)$ of the set $\{(a,b) \in Y\times Y \mid a < b\}$.
-Now, the only thing that remains to prove is that $\{(a,b) \in Y\times Y \mid a < b\}\subseteq Y\times Y$ is open.
-This follows from the definition of the order topology on $Y$. Do you need help with that?<|endoftext|>
-TITLE: How to find the integral closure of $\mathbb{Z}_{(3)}$ in the field $\mathbb{Q}(\sqrt{-5})$?
-QUESTION [8 upvotes]: Let $v$ be the 3-adic valuation on $\mathbb{Q}$ and consider the subring $\mathbb{Z}_{(3)}$ of $\mathbb{Q}$ defined by
-$$\mathbb{Z}_{(3)} = \{ x \in \mathbb{Q} : v(x) \geq 0 \}.$$
-That is, $\mathbb{Z}_{(3)}$ is the ring of rational numbers that are integral with respect to $v$. $\mathbb{Z}_{(3)}$ is also the localization of $\mathbb{Z}$ at the prime ideal $(3)$. I know $\mathbb{Z}_{(3)}$ is integrally closed in $\mathbb{Q}$.
-I want to find the integral closure of $\mathbb{Z}_{(3)}$ in the field $\mathbb{Q}(\sqrt{-5})$:
-$$\overline{\mathbb{Z}_{(3)}} = \{x \in \mathbb{Q}(\sqrt{-5}) : x \text{ is a root of a monic irreducible polynomial with coefficients in } \mathbb{Z}_{(3)} \}$$
-How can I do this? What should I be thinking about?
-
-REPLY [7 votes]: Here is a different and (perhaps) somewhat more elementary approach than Andrea's.
-Let $R$ be the integral closure of $\mathbb{Z}_{(3)}$ in $\mathbb{Q}(\sqrt{-5})$. Clearly $R$ is a subring of $\mathbb{Q}(\sqrt{-5})$, and thus we are looking for a necessary and sufficient condition on $a,b \in \mathbb{Q}$ such that $a+b\sqrt{-5} \in R$.
-Let $P(a,b)$ be the minimal polynomial of $a+b\sqrt{-5}$ over $\mathbb{Q}$, i.e., the unique monic polynomial with $\mathbb{Q}$-coefficients of least degree satisfied by $a+b\sqrt{-5}$. Certainly if $P(a,b)$ has coefficients lying in $\mathbb{Z}_{(3)}$ then $a+b\sqrt{-5}$ lies in $R$. In fact the converse is true because the ring $\mathbb{Z}_{(3)}$ is integrally closed (e.g. it is a PID and PID $\implies$ UFD $\implies$ integrally closed). For a proof of this fact, see e.g. the section on Integrally Closed Domains in these notes. (Currently this is Theorem $260$ in $\S 14.5$, but that more precise information is subject to change.)
-Try out this computation for yourself: you should get that $a+b\sqrt{-5} \in R \iff a,b \in \mathbb{Z}_{(3)}$.
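-Filling in the suggested computation (a sketch in the same notation, using only standard valuation facts): for $b \neq 0$ the minimal polynomial is $P(a,b) = t^2 - 2at + (a^2+5b^2)$. Since $v(2)=0$, the linear coefficient lies in $\mathbb{Z}_{(3)}$ iff $v(a) \geq 0$. Given that, if $v(b) < 0$ then $v(5b^2) = 2v(b) < 0 \leq v(a^2)$, so $v(a^2+5b^2) = 2v(b) < 0$ and the constant coefficient fails to lie in $\mathbb{Z}_{(3)}$; hence we also need $v(b) \geq 0$. Conversely, if $v(a), v(b) \geq 0$ then both coefficients are clearly in $\mathbb{Z}_{(3)}$. (The case $b=0$ reduces to the fact, noted in the question, that $\mathbb{Z}_{(3)}$ is integrally closed in $\mathbb{Q}$.)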
-Now in my notes I also prove the compatibility of integral closure with localization which Andrea uses in his answer: in fact it comes about five pages earlier than the fact about minimal polynomials mentioned above. So it is certainly arguable which of these facts is "more basic". My reason for choosing the latter is because I think it is a little more concrete and amenable to computation: indeed, if you don't know this fact about minimal polynomials, it seems to me that you're going to have a hard time computing any nontrivial examples of integral closures whatsoever.<|endoftext|>
-TITLE: cones in the derived category
-QUESTION [21 upvotes]: If I have two exact triangles $X \to Y \to Z \to X[1]$ and $X' \to Y' \to Z' \to X'[1]$ in a triangulated category, and I have morphisms $X \to X'$, $Y \to Y'$ which 'commute' (i.e., such that $X \to Y \to Y' = X \to X' \to Y'$), then there exists a (not necessarily unique) map $Z \to Z'$ which completes what we've got to a morphism of triangles.
-Is there a criterion which ensures the uniqueness of this cone-map?
-I'd like something along the lines of: if $\operatorname{Ext}^{-1}(X,Y')=0$ then yes.
-(I might be too optimistic, cf. Prop 10.1.17 of Kashiwara-Schapira Categories and Sheaves: in addition to $\operatorname{Hom}{(X[1],Y')} = 0$ they also assume $\operatorname{Hom} {(Y,X')} =0$. I really don't have this second assumption.)
-(In the case I'm interested in $X=X', Y=Y'$ and $X\to X'$, $Y \to Y'$ are the identity maps.)
-(If it makes things easier, although I doubt it, you can take the category to be the bounded derived category of coherent sheaves on some, fairly nasty, scheme.)
-In the context I have in mind $X, Y, X', Y'$ are all objects of the heart of a bounded t-structure. If we assumed $\operatorname{Hom}{(Z,Y')} = 0$ or $\operatorname{Hom}{(X[1],Z')} = 0$ then the result easily follows. I don't think I'm happy making those assumptions though.
-
-REPLY [5 votes]: The uniqueness condition on the maps between cones is very restrictive. If it holds for every commutative square, this indeed means that you could define a "cone functor" $\mathrm{Mor}(\mathcal T) \to \mathcal T$ from the category of morphisms of your triangulated category $\mathcal T$ to $\mathcal T$ itself (just choose a cone object for each morphism; then your uniqueness condition ensures that the cone functor is well defined on morphisms). It turns out that this makes $\mathcal T$ a semisimple abelian category, if $\mathcal T$ is assumed to be Karoubian (i.e. every idempotent splits; many common triangulated categories are Karoubian). I found a proof of this claim in the following article:
-http://www.math.uni-bielefeld.de/~gstevens/no_functorial_cones.pdf
-In conclusion: you can't expect your uniqueness condition to hold globally in "useful" triangulated categories. There is a technique to overcome this problem, that is, using pre-triangulated dg-categories (introduced by Bondal and Kapranov) to "lift" triangulated categories. In this new framework you indeed have functorial cones.
-Perhaps this doesn't answer your specific question (which, as I understand, is about a given commutative square), but it should point out that the desired uniqueness is, roughly speaking, very difficult to obtain in general.<|endoftext|>
-TITLE: Relation between Borel–Cantelli lemmas and Kolmogorov's zero-one law
-QUESTION [13 upvotes]: I was wondering what is the relation between the first and second Borel–Cantelli lemmas and Kolmogorov's zero-one law?
-The former is about the limsup of a sequence of events, while the latter is about tail events of a sequence of independent sub-$\sigma$-algebras or of independent random variables. Both have results for the limsup/tail event to have either probability 0 or 1. I guess there are relations between them but I cannot identify them.
-Can the former be viewed as a special case of the latter? How about in the reverse direction?
-Thanks and regards!
-
-REPLY [15 votes]: The first Borel-Cantelli is simply the fact that the probability of a union is at most the sum of the probabilities; it has nothing in common with Kolmogorov's zero-one law.
-What Kolmogorov's zero-one law tells you in the setting of the second Borel-Cantelli lemma is that the probability of the limsup is $0$ or $1$, because (1) the limsup is always in the tail $\sigma$-algebra, (2) you are considering independent events, hence their tail $\sigma$-algebra is trivial. Then the second Borel-Cantelli lemma itself tells you that this probability is in fact $1$ under the non-summability condition which you know.<|endoftext|>
-TITLE: Sobolev space on closed subset of the real line
-QUESTION [7 upvotes]: Everywhere I look in the literature, Sobolev spaces are defined on an open subset of the real line. What are the technical issues with defining a Sobolev space on a closed subset, i.e. are there problems at the boundary, and does anyone know any good references that cover this?
-My main purpose is to prove $H^1([0,T];\mathbb{R}) = \{ x \in L^2([0,T];\mathbb{R}) : ||x'||_{L^2} + \gamma^{2}||x||_{L^2} < \infty \}$ is a reproducing kernel Hilbert space. I can do this for $(0,T)$ and want to know if the proof is transferable to the case of the closed interval $[0,T]$.
-Many thanks,
-Matthew.
-
-REPLY [4 votes]: Well, I suspect you defined the Sobolev space on the closed interval as the one on the open interval (i.e. using the same functional equation, taking the test functions in $\mathscr{C}^{\infty}_0((0,T))$), and then extend every function of it by choosing arbitrary values on the boundary.
-In that case $H^1((0,T))=H^1([0,T])$ and your problem is of little concern, because every function of $W^{1,1}((0,T))$ is also continuous on the closed interval $[0,T]$ (i.e. they admit finite limits on the boundary). So you can apply, by limit, the same technique you used in the case of the open interval $(0,T)$ (for example using the continuous inclusion of $W^{1,1}(I)$ in $L^{\infty}(I)$).
-Anyway, I think the main reason behind defining Sobolev spaces on open sets lies in this: $\mathscr{C}^{\infty}_0(\bar{I})=\mathscr{C}^{\infty}(\bar{I})$.<|endoftext|>
-TITLE: How to invert this exponential function to solve for x: $y = a \exp(bx) + c \exp(dx)$?
-QUESTION [5 upvotes]: Cheers.
-So if I don't make sense, I have a value for $y$, I need to know what $x$ is.
-$$y = a \exp(bx) + c \exp(dx)$$
-$a = 12.85$,
-$b = 0.001857$,
-$c = -54.24$,
-$d = -0.05316$
-
-REPLY [2 votes]: You have $y=ae^{bx} + ce^{dx}$ with $a, b$ positive and $c, d$ negative. So this means:
-
-if $x$ is large and positive, then $y \approx ae^{bx}$
-if $x$ is large and negative, then $y \approx ce^{dx}$
-
-So for $x$ large and positive, $x \approx (1/b) \log (y/a)$; for $x$ large and negative, $x \approx (1/d) \log (y/c)$ -- these come from solving the above approximations for $x$. If you need to go further I'd say start with these approximations and then use something like Newton's method.
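-A minimal sketch of that recipe for the constants in the question (the function names, tolerance, and sample $y$ are my own choices):
-
-import math
-
-a, b = 12.85, 0.001857
-c, d = -54.24, -0.05316
-
-def f(x, y):
-    return a * math.exp(b * x) + c * math.exp(d * x) - y
-
-def fprime(x):
-    return a * b * math.exp(b * x) + c * d * math.exp(d * x)
-
-def solve(y, x0, tol=1e-12, max_iter=100):
-    # Newton iteration for f(x, y) = 0 starting from x0
-    x = x0
-    for _ in range(max_iter):
-        step = f(x, y) / fprime(x)
-        x -= step
-        if abs(step) < tol:
-            return x
-    raise RuntimeError("no convergence")
-
-y = 20.0
-x0 = math.log(y / a) / b   # large-x starting guess from y ~ a exp(bx)
-print(solve(y, x0))
-
-Starting from the regime-appropriate guess keeps the iteration stable; for $y$ in the other regime one would start from $\log(y/c)/d$ instead.<|endoftext|>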
-TITLE: What types of geometries are scale-invariant?
-QUESTION [7 upvotes]: This question explains that scale-invariance (or more accurately, similarity) is an important property of Euclidean geometry. Are there any other ways to define scale-invariant geometries in any sense? And how do they differ from Euclidean geometry?
-
-REPLY [6 votes]: If you accept Euclid's first four postulates, then as the question you mention discusses, being able to scale an object (a triangle) implies the parallel postulate and Euclidean geometry.
-So any other geometry will be significantly different from what we learned in school.
-Here are a few examples of some other scale-invariant geometries:
-
-Minkowski Geometry
-Continuous Projective Geometries have no distances or scale at all, but they have cross ratio measurements, and we draw scale-invariant pictures of the theorems to understand them.
-Inversive Geometry treats lines and circles as the same.
-The Moulton Plane is scale invariant but not translation invariant!
-
-These are just a few; an exhaustive list is impossible.<|endoftext|>
-TITLE: Anyone know a clear, useful online tutorial on dimensional analysis
-QUESTION [5 upvotes]: We've just started this today in first year applied maths at university. Today we were given the problem of deriving the formula for the area of an ellipse. I've got as far as saying there's some relationship between the semi-major axis and the semi-minor axis on the one hand, and the area of the ellipse on the other, but now I'm stuck.
-I do not want a direct answer to the problem. I'm looking for a good online tutorial for these sorts of problems that might get me thinking in the right way. I've tried doing a search myself, but keep getting stuff on performing conversions. Can anyone point me to a good source for my purposes?
-
-REPLY [3 votes]: There is an MIT OCW course and textbook called Street-Fighting Mathematics by Mahajan which discusses dimensional analysis among other things that you might find useful. For this problem the key observation is to think about what happens to an ellipse when you stretch or shrink along either axis. (Hint: you get another ellipse. What is the relationship between their areas? Between their semimajor / semiminor axes?)<|endoftext|>
-TITLE: What is the condition for a field to make the degree of its algebraic closure over it infinite?
-QUESTION [7 upvotes]: As we all know, the algebraic closure often has an infinite degree.
-Also, this shows the necessary and sufficient condition for a Galois extension to be a finite extension of fields.
-However, we may want to characterize the case where the extension is not Galois, which is actually my question.
-And this question is related to this question.
-
-REPLY [8 votes]: Let me summarize the results of $\S 12.4$ of my notes. (This involves making explicit some things which were left as "exercises", but I'm okay with that.)
-Let $K$ be any field, let $K^{\operatorname{sep}}$ be any separable closure and let $\overline{K}$ be any algebraic closure containing $K^{\operatorname{sep}}$. Then:
-1) Suppose first that $K = K^{\operatorname{sep}}$, i.e., $K$ is separably closed. Then either:
-1a) $K$ is algebraically closed, or
-1b) It isn't, i.e., $K$ has positive characteristic $p$ and there exists $a \in K$ such that the polynomial $t^p - a$ is irreducible. Then $t^{p^n}-a$ is irreducible for all $n$, so $[\overline{K}:K]$ is infinite. (For this, see Lemma 32 in $\S 6.1$.)
-2) Suppose that $K$ is not separably closed, so $G = \operatorname{Aut}(K^{\operatorname{sep}}/K)$ is nontrivial. Then by the Artin-Schreier theorem, either
-2a) $\# G = 2$, in which case $K$ is formally real and $K(\sqrt{-1})$ is algebraically closed, or
-2b) $\# G > 2$, in which case $G$ is infinite, which implies $[\overline{K}:K]$ is infinite.
-Note that some of the exercises outline further extensions of the Artin-Schreier Theorem, i.e., if $\overline{K}/K$ is "small" in other ways then $K$ is either real closed or algebraically closed.<|endoftext|>
-TITLE: Is there a definite integral for which the Riemann sum can be calculated but for which there is no closed-form antiderivative?
-QUESTION [15 upvotes]: Some definite integrals, such as $\int_0^\infty e^{-x^2}\,dx$, are known despite the fact that there is no closed-form antiderivative. However, the method I know of calculating this particular integral (square it, and integrate over the first quadrant in polar coordinates) is not dependent on the Riemann sum definition. What I thought might be interesting is a definite integral $\int_a^bf(x)\,dx$ for which the limit of the Riemann sums happens to be calculable, but for which no closed-form antiderivative of $f$ exists. Of course there are some obvious uninteresting examples, like integrating odd functions over symmetric intervals, but one doesn't need Riemann sums to calculate these uninteresting examples.
-Edit: To make this a bit clearer, it would be nice to have a "natural" continuous function $f(x)$ where by some miracle $\lim_{n\to\infty} \sum_{i=1}^nf(x_i)\Delta x$ is computable (for some interval $[a,b]$) using series trickery, but for which no antiderivative exists composed of elementary functions.
-
-REPLY [4 votes]: This may not answer your question, but I read of another example of an integral which can be calculated without finding an anti-derivative, in Halmos's Problems for Mathematicians, Young and Old: the integral
-$$ \int_{0}^{1} \int_{0}^{1} \frac{1}{1-xy}\, dx\, dy.$$
-The trick here is to look at the integrand at each $x,y$ as a number $\frac{1}{1-r}$ where $0 < r < 1$.
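-Spelling the trick out (a standard completion of the hint, not a quotation from Halmos): for $0 \le xy < 1$ one has $\frac{1}{1-xy} = \sum_{n=0}^{\infty}(xy)^n$, and integrating term by term gives
-$$\int_{0}^{1}\int_{0}^{1}\frac{dx\, dy}{1-xy} = \sum_{n=0}^{\infty}\int_{0}^{1}x^n\,dx\int_{0}^{1}y^n\,dy = \sum_{n=0}^{\infty}\frac{1}{(n+1)^2} = \frac{\pi^2}{6},$$
-with no antiderivative of the integrand ever needed.<|endoftext|>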
-TITLE: Local triviality of principal bundles
-QUESTION [24 upvotes]: Suppose I define a principal $G$-bundle as a map $\pi: P \to M$ with a smooth right action of $G$ on $P$ that acts freely and transitively on the fibers of $\pi$. Does it follow that $P$ is locally isomorphic to $M \times G$ with the obvious right action of $G$ on $M \times G$? Let's suppose $M$ is a manifold.
-I know that fiber bundles over a contractible set are trivial and a manifold is locally contractible, but I believe this statement refers to locally trivial fiber bundles and so will not apply to this case.
-A related question is: if we have a fibration such that the base space is contractible and all fibers are homeomorphic, does it follow that the fibration is just the product of the base with the fiber?
-Thanks!
-
-REPLY [9 votes]: For the first question - yes, at least if you suppose $P$ is a smooth manifold and, say, $G$ is a Lie group. For the principal $G$-bundles with your definition, just as with the regular one, being trivial is the same as admitting a section (if the section is $s$, map $s(x)$ to $(x, \text{unit of } G)$, and use the action to define the rest of the trivializing map). Now to construct a section locally, near $x_0$ take any $s(x_0)$ in the fiber above $x_0$. Pick an auxiliary Riemann metric and take the orthogonal subspace to the tangent space of the fiber at $s(x_0)$. The exponential map will give you a section locally.
-In other categories you would need to construct the section $s$ in a different manner. I think this can be done in the category of topological manifolds. Not sure about more general cases, but it would seem ok. Maybe for CW complexes you can go cell by cell?<|endoftext|>
-TITLE: Which books for refreshing high school algebra?
-QUESTION [19 upvotes]: I'll take a Calculus course next year, and my professor suggested reviewing high school algebra.
-
-REPLY [4 votes]: You might want to take a look at the text by George F. Simmons, Precalculus Mathematics in a Nutshell. It is very concise, weighing in at only 128 pages, and like other books by Simmons, is extremely clear and well-written.<|endoftext|>
-TITLE: Is the norm on a Hilbert space always finite?
-QUESTION [10 upvotes]: If $H$ is a Hilbert space and $x \in H$ then does it follow that $||x|| < \infty$?
-
-REPLY [17 votes]: On any real or complex vector space $X$ for which a norm $\|\cdot\|$ is defined, part of the definition is that $\|x\|$ is a real number for each $x\in X$. The norm on a real or complex inner product space $H$ fits into this context, because part of the definition of the inner product is that $\langle x,y\rangle$ is a real or complex number for each $x$ and $y$ in $H$, and that $\langle x,x\rangle$ is nonnegative for each $x\in H$, and hence $\langle x,x\rangle$ is a nonnegative real number (excluding the possibility of $\langle x,x\rangle=\infty$).
-In some contexts there is notational abuse of $\|\cdot\|$, which may be the source of the question here. For example, suppose that $\mu$ is a positive measure on $X$, and $1\leq p\lt \infty$. Some authors will say that for a measurable real or complex-valued function $f$ on $X$, $\|f\|_p$ is defined to be the $p^\text{th}$ root of $\int_X |f|^pd\mu$, before defining $L^p(\mu)$ to be the set of such $f$ for which $\|f\|_p$ is finite. With this convention, $\|\cdot\|_p$ is a norm when restricted to $L^p(\mu)$, but the extended notation allows $\|f\|_p=\infty$ to also be a meaningful statement; it is equivalent to saying that $f$ is not in $L^p(\mu)$. So for example, $\|f\|_2$ can be infinite for some measurable $f$, but $\|\cdot\|_2$ is a norm on the Hilbert space $L^2(\mu)$, meaning in part that $\|f\|_2$ is a nonnegative real number for all $f\in L^2(\mu)$.<|endoftext|>
-TITLE: Constructing Infinite Cartesian Products without AC
-QUESTION [7 upvotes]: I recently stumbled across the Wikipedia page on equivalents to the Axiom of Choice. I noticed that every infinite Cartesian product of a non-empty family of non-empty sets being non-empty was equivalent to the axiom of choice. Which isn't hard to see, since any element from such a product is essentially a choice function.
-This led me to wonder under what circumstances one can construct such infinite Cartesian products without the axiom of choice. I'm curious about the general conditions required; presumably if there's some well-ordering on each set in the family this would be sufficient. In particular I'm curious about $\mathbb R^{\mathbb R}$ and $\mathbb R^{\mathbb N}$, that is, the sets of real-valued functions on the real line and all real-valued sequences. Can we still claim that these sets exist without the axiom of choice?
-REPLY [11 votes]: What is equivalent to the Axiom of Choice is the assertion that every cartesian product of any nonempty family of nonempty sets is itself nonempty (not merely that "an infinite Cartesian product of a non-empty family of nonempty sets").
-There are plenty of families for which one can prove, without needing to invoke the Axiom of Choice, that their cartesian product is nonempty. For example, as you note, a product of any nonempty family of nonempty well-ordered sets is nonempty, regardless of the cardinality of the family. (However, proving that a denumerably infinite product of denumerably infinite sets is nonempty requires at least some Choice; you can think of this as a weakening of the previous case, since here we are saying that the sets are well-orderable but not necessarily well-ordered. That is, rather than coming already provided with a well-ordering, we are just told that a well-ordering exists).
-In general, any family in which all the sets are the same also has nonempty product: if $\{A_i\}_{i\in I}$ is a nonempty family, and $A_i=A_j\neq\emptyset$ for all $i,j\in I$, then since $A_i$ is nonempty, there exists $a\in A_i$. Then the function $f\colon I\to\cup A_i$ given by $f(i) = a$ for all $i\in I$ is a choice function for the family (and an element of $\times_{i\in I} A_i$). In particular, both the sets $\mathbb{R}^\mathbb{R}$ and $\mathbb{R}^{\mathbb{N}}$ are nonempty, and we can prove this without invoking choice. Just take the $f\colon\mathbb{R}\to\mathbb{R}$ given by $f(r)=0$ for all $r$; this is an element of $\mathbb{R}^{\mathbb{R}}$; and $g(n)=0$ for all $n\in\mathbb{N}$, this is an element of $\mathbb{R}^{\mathbb{N}}$. Neither one requires AC.
-Likewise, any family $\{A_i\}_{i\in I}$ in which there is a cofinite $J\subseteq I$ with $\cap_{j\in J}A_j\neq\emptyset$ will have a choice function whose existence does not depend on AC: take any $J$ with the given property, take any $x\in \cap_{j\in J}A_j$, and letting $I-J = \{i_1,\ldots,i_k\}$, pick $a_t\in A_{i_t}$, $t=1,\ldots,k$. Then $f\colon I\to\cup A_i$ given by
-$$f(i) = \left\{\begin{array}{ll}
-x & \mbox{if $i\in J$;}\\
-a_1 &\mbox{if $i=i_1$;}\\
-\vdots\\
-a_k &\mbox{if $i=i_k$.}
-\end{array}\right.$$
-is a choice function for the family, whose existence can be established without invoking the Axiom of Choice. This is of course not necessary, merely sufficient.
-
-REPLY [3 votes]: Sure, they exist always, by Comprehension, and they are non-empty in this case, because $X^X$ always contains the identity and constant functions. The same holds for $X^{\mathbb{N}}$: we already have constant functions and many other ones. Powers are not really the issue, but rather products of indexed sets of different types that have no "structure" like a well-order to exploit, roughly speaking.<|endoftext|>
-TITLE: On the existence of closed form solutions to finite combinatorial problems
-QUESTION [7 upvotes]: Is it possible that a finite combinatorial problem may admit a closed form solution, and for it to be impossible in practice to prove the validity of this solution? I'm not sure if a rigorous definition can be given to the notion of a finite combinatorial problem, but I mean problems of the following nature:
-
-Given a finite set $X$, which may or may not have additional structure, enumerate the number of elements in $X$ which satisfy some constraint defined in terms of the primitives of set theory and the additional structure of $X$.
-
-Since we are speaking of closed form solutions, we are really interested in families of finite combinatorial problems, parametrized by $\mathbb{N}$, which scale in some natural way with increasing $n\in \mathbb{N}$. I'm not sure if the notion of a closed form solution can be given a rigorous definition, but I mean something along the lines of the following definition from Graham, Knuth, and Patashnik's Concrete Mathematics:
-
-An expression for a quantity $f\left(n\right)$ is in closed form if we can compute it using at most a fixed number of "well known" standard operations, independent of $n$.
-
-I understand that this question is vague and open ended, so I would be happy with answers or partial answers to any of the following subquestions.
-
-Are there examples of finite combinatorial problems for which empirical evidence suggests there is a closed form solution, but for which significant effort has failed to produce any proofs of the validity of this solution?
-Is it possible to give a rigorous axiomatization that captures the notion of a finite combinatorial problem that working combinatorialists work with, and then bring to bear ideas along the lines of Gödel's Theorems and Turing's work on the Halting Problem to produce an existence proof for such families of combinatorial problems? Can any rigorous formulation be given to the notion of a family of finite combinatorial problems scaling naturally with $n\in\mathbb{N}$?
-Are there examples of finite combinatorial problems which display a kind of regularity for large $n$? That is, are there any known families of finite combinatorial problems that scale naturally with $n\in\mathbb{N}$, such that $f\left(n\right)$, the answer to the problem associated to $n$, behaves haphazardly for small $n$, but can then be given in terms of a closed form expression for sufficiently large $n$? Is there anything to suggest that difficult combinatorial problems that have been studied, but for which little is known, may display this kind of regularity?
-Finally, are there any results in mathematical logic, set theory, or proof theory of which I am unaware, that render my question trivial or foolish?
-
-I would appreciate any help with this question that can be given. As someone with a decent background in combinatorics, but no deep knowledge of logic or set theory, I don't know where to begin with this.
-Edit: (In response to a comment of Qiaochu Yuan)
-$n$ need not be the size of $X$. I hope this example clarifies what I'm trying to get at. Consider the problem of enumerating the permutations of the elements of a finite set $X$ of cardinality $n$. This problem may be cast as the following problem.
-
-Enumerate the elements of $X^n$, $\left(x_0,\ldots ,x_{n-1}\right)$, for which $x_i = x_j$ if and only if $i = j$. The problem has solution $n!$, which may or may not be considered closed, depending on what you mean by "well known" in Knuth's definition. The answer to this problem is dependent only on the size of $X$, not on any interpretation of what the elements might be. In a sense, problems of this kind could be said to scale naturally with $n$. Part of my question is to provide a rigorous definition of what I mean by scaling naturally.
-
-REPLY [6 votes]: There's some work on the complexity of combinatorial counting of objects definable in Monadic Second Order Logic (MSOL).
-For example, you can define "X is an independent set of G" using MSOL.
The time to count the number of independent sets is independent of the size of the graph for a family of graphs of bounded tree-width. For independent sets it is independent for a family of bounded clique-width, and one of Makowski's papers gives conditions beyond MSOL which allow tractability with bounded clique-width.
-See page 8 of Makowski/Godlin's Graph Polynomials: From Recursive Definitions to Subset Expansion Formulas for examples of properties definable using MSOL on a graph.
-There's a number of papers from Makowski and co-authors on this line of work, for instance his student's Definability of Combinatorial Functions and their Linear Recurrence Relations.<|endoftext|>
-TITLE: Where is this kind of series used, $\vartheta_{4}(0,e^{\alpha \cdot z})$?
-QUESTION [5 upvotes]: In my recent explorations I stumbled upon the following series
-$$\vartheta_{4}(0,e^{\alpha \cdot z})=1+2\sum_{k=1}^{\infty} (-1)^{k}\cdot e^{\alpha \cdot z\cdot k^{2}}; \qquad \alpha \in \mathbb{R},\ z \in \mathbb{C}$$
-This is one of the well known Jacobi theta functions/series, with the peculiarity of having the variable $z \in \mathbb{C}$ in a different place, i.e. $e^{\alpha \cdot \mathbf{z} \cdot k^{2}}$!!
-The usual form of the theta function is
-\begin{align*}
-\vartheta_{4}(z,e^{\alpha })=1+2\sum_{k=1}^{\infty} (-1)^{k}\cdot e^{\alpha \cdot k^{2}}\cos(2kz);
-\end{align*}
-but not in the case I have in hand. Does the former formula make any sense? Where is this kind of series used or analysed? (Apart from the well known case of
-$$\psi(x)=\sum_{n=1}^{\infty}e^{-n^{2}\pi x}=\frac{1}{2} \left[ \vartheta_{3}(0,e^{-\pi x})-1 \right]$$
-used in the context of the Riemann zeta-function.)
-
-REPLY [5 votes]: I've seen this series in Ono's book "The web of modularity" in Theorem 1.60 on page 17, where he gives the identity:
-$$\frac{\eta(z)^2}{\eta(2z)} = \sum_{n= -\infty}^\infty (-1)^n q^{n^2}$$
-where $q = e^{\pi i z}$ and
-$$\eta(z) := q^{1/24}\prod_{n=1}^\infty (1-q^n)$$
-is the Dedekind eta function.
-I suppose one reason that this kind of thing could be useful is that it is very sparse, and therefore one can compute lots of coefficients very quickly. For example you could use it to calculate the $q$-expansion of the Eisenstein series $E_4$ using the identity (equation 1.28 in Ono)
-$$E_4 = \frac{\eta(z)^{16}}{\eta(2z)^8} + 2^8 \frac{\eta(2z)^{16}}{\eta(z)^8}.$$
-(However I think there are faster ways to calculate $E_4$?!)
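-The sparseness also makes the identity easy to check by machine: since $\eta(2z) = q^{2/24}\prod_{n\geq 1}(1-q^{2n})$ in terms of the same $q$, the fractional powers of $q$ cancel and the identity becomes $\prod_{n\geq 1}(1-q^n)^2 = \big(\sum_{n\in\mathbb Z}(-1)^n q^{n^2}\big)\prod_{n\geq 1}(1-q^{2n})$. A throwaway script (all names mine) verifying this through $q^{60}$:
-
-N = 60  # work modulo q^(N+1)
-
-def mul(f, g):
-    h = [0] * (N + 1)
-    for i, ai in enumerate(f):
-        if ai:
-            for j in range(min(len(g), N + 1 - i)):
-                h[i + j] += ai * g[j]
-    return h
-
-def one_minus_q_to(k):
-    p = [0] * (N + 1)
-    p[0] = 1
-    if k <= N:
-        p[k] = -1
-    return p
-
-theta = [0] * (N + 1)
-theta[0] = 1
-n = 1
-while n * n <= N:
-    theta[n * n] = 2 * (-1) ** n   # only O(sqrt(N)) nonzero coefficients
-    n += 1
-
-lhs = [1] + [0] * N
-for k in range(1, N + 1):
-    lhs = mul(lhs, one_minus_q_to(k))
-lhs = mul(lhs, lhs)                # prod (1 - q^n), squared
-
-rhs = theta
-for k in range(1, N // 2 + 1):
-    rhs = mul(rhs, one_minus_q_to(2 * k))
-
-assert lhs == rhs                  # identity verified mod q^61<|endoftext|>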
-TITLE: Is the sentence "$(A,\in)\models ZFC$" absolute?
-QUESTION [14 upvotes]: I know that we can assume that formulas are objects in $V_\omega$, and that notions such as formula and satisfiability for a standard model (when the universe is a set) are definable and absolute. Because of that it appears to me that we can define an absolute (metamathematical) formula that "decides" when a formula (as an object in the universe) belongs in ZFC, and furthermore through it we can write a formula $S(A)$ that states $(A,\in)\models ZFC$, which again will be absolute. Is this correct?
-Furthermore, given sets $A$ and $E\subset A\times A$ it looks to me that the satisfiability of the model $(A,E)$ can be described with a $\Delta_1$ formula $S(A,E,\ulcorner\phi\urcorner)$ and thus is absolute. Is this true? And does this mean that "$(A,E)\models ZFC$" is absolute as well?
-Thanks in advance.
-
-REPLY [14 votes]: Here is a way to see directly that the satisfaction relation $M\models\varphi[a]$ has complexity $\Delta_1$ in set theory. This relation is defined (by Tarski) by induction on the complexity of $\varphi$. Thus, we can say that $M\models\varphi[a]$ if and only if there is a function mapping formulas and points in $M$ to the set {satisfied, not satisfied}, which satisfies the inductive properties of Tarski's definition (so that atomic formulas are correctly satisfied, the satisfaction relation on negation inverts the answer, the relation on Boolean combinations works correctly and the relation on quantifiers works correctly). These requirements on the satisfaction relation $S$ have complexity $\Delta_0$, since we need only quantify over $S$, over $M$ and over the natural numbers (codes for formulas). Furthermore, one can prove that there is a unique satisfaction relation, so $M\models\varphi[a]$ if and only if there is a satisfaction relation showing this, if and only if all satisfaction relations show this. So the satisfaction relation is $\Delta_1$.
-It works the same for any theory, such as ZFC as in your question, since this just adds an additional quantifier over the natural numbers (for every axiom, it is satisfied), which is bounded and therefore doesn't increase the complexity beyond $\Delta_1$.
-Thus, being a model of any particular assertion or theory is $\Delta_1$ and therefore absolute in the sense you may have had in mind. For example, any model of set theory will agree with all its forcing extensions and inner models about whether a given set structure is a model of ZFC.
-Addendum. But there is another subtle sense in which being a model of ZFC is not absolute. For example, suppose that $M$ is a model of $ZFC+\neg Con(ZFC)$, so that $M$ thinks there are no models of ZFC at all. So this model $M$ must have nonstandard natural numbers, since otherwise it would have the same proofs from ZFC as we do and so it would realize that ZFC is consistent. Inside $M$, we may enumerate the axioms of what $M$ thinks of as ZFC (this includes nonstandard instances of the axioms). In $M$, consider the largest initial segment of the axioms that $M$ thinks is true in some rank initial segment $(V_\alpha)^M$. By the Reflection theorem, this includes all the standard initial segments. Thus, what $M$ thinks is the longest such realizable initial segment of the axioms has nonstandard length. And so the corresponding set $(V_\alpha)^M$ satisfies all the standard instances of axioms of ZFC. That is to say, it really is a model of ZFC, but $M$ doesn't think it is, because $M$ thinks that some of the nonstandard axioms are not true there. So this is an example of non-absoluteness, where two universes of set theory can disagree about whether a given structure is a model of ZFC or not.<|endoftext|>
-TITLE: Limit a.e. of a sequence of measurable functions is measurable
-QUESTION [5 upvotes]: I'm having trouble showing the following:
-
-If $f_n$ is a sequence of measurable functions such that $f_n$ converges to $f$ almost everywhere, then $f$ is measurable.
-
-I was thinking of using $\limsup$ since I know that $\limsup f_n$ is measurable. But now I'm not sure how to continue my argument.
-
-REPLY [2 votes]: It is not always true: it can fail when the measure space is not complete, since one may modify $f$ on a null set that contains a non-measurable subset.
-See Exercise V chapter 3 of Bartle's book.<|endoftext|>
-TITLE: Interpretation of sigma algebra
-QUESTION [30 upvotes]: My question is how to interpret sigma algebra, especially in the context of probability theory (stochastic processes included).
I would like to know if there is some clear and general way to interpret sigma algebra, which can unify the various ways of speaking about it: history, future, collection of information, size/likelihood-measurability, etc. Specifically, I hope to know how to interpret the following in some consistent way:
-
-being given/conditional on a sigma algebra
-a subset being measurable or nonmeasurable w.r.t. a sigma algebra
-a mapping being measurable or nonmeasurable w.r.t. a sigma algebra in the domain and another sigma algebra in the codomain
-a collection of increasing sigma algebras, i.e. a filtration of sigma algebras
-...
-
-Following is a list of examples that I have met. They are nice examples, but I feel their ways of interpretation are not clear and consistent enough for me to apply in practice. Even if there is no unified way to interpret all the examples, I would like to know what some different ways of interpretation are.
-
-Stopping time
-
-Let $(I, \leq)$ be an ordered index set, and let $(\Omega, \mathcal{F},\mathcal{F}_t, \mathbb{P})$ be a filtered probability space.
-Then a random variable $\tau : \Omega \to I$ is called a stopping time if $\{ \tau \leq t \} \in \mathcal{F}_{t}\ \forall t \in I$.
-Speaking concretely, for $\tau$ to be a stopping time, it should be possible to decide whether or not $\{ \tau \leq t \}$ has occurred on the basis of the knowledge of $\mathcal{F}_t$, i.e., event $\{ \tau \leq t \}$ is $\mathcal{F}_t$-measurable.
-
-I was still wondering how exactly to "decide whether or not $\{ \tau \leq t \}$ has occurred on the basis of the knowledge of $\mathcal{F}_t$, i.e., event $\{ \tau \leq t \}$ is $\mathcal{F}_t$-measurable."
-Martingale process
-
-If a stochastic process $Y : T \times \Omega \rightarrow S$ is a martingale with respect to a filtration $\{ \Sigma_t\}$ and probability measure $P$, then for all $s$ and $t$ with $s < t$ and all $F \in \Sigma_s$,
-$$Y_s = \mathbf{E}_{\mathbf{P}} ( Y_t | \Sigma_s ),$$
-
-where $\Sigma_s$ is interpreted as "history".
-I was also wondering how $\Sigma_s, s < t$ can act as history, $\Sigma_s, s=t$ as present, and $\Sigma_s, s > t$ as future?
-I originally interpret a measurable subset w.r.t. a sigma algebra as a subset whose "size"/"likelihood" is measurable, and the class of such size-measurable subsets must be closed under complement and countable union.
-In a post by Nate Eldredge, a measurable subset w.r.t. a sigma algebra is interpreted by analogy with questions being answered:
-
-If I know the answer to a question $A$, then I also know the answer to its negation, which corresponds to the set $A^c$ (e.g. "Is the dodo not-extinct?"). So any information that is enough to answer question $A$ is also enough to answer question $A^c$. Thus $\mathcal{F}$ should be closed under taking complements. Likewise, if I know the answer to questions $A,B$, I also know the answer to their disjunction $A \cup B$ ("Are either the dodo or the elephant extinct?"), so $\mathcal{F}$ must also be closed under (finite) unions. Countable unions require more of a stretch, but imagine asking an infinite sequence of questions "converging" on a final question. ("Can elephants live to be 90? Can they live to be 99? Can they live to be 99.9?" In the end, I know whether elephants can live to be 100.)
-
-Thanks in advance for sharing your views, and any reference that has related discussion is also appreciated!
-
-REPLY [2 votes]: For two tosses of a fair coin, with $\Omega=\{HH,HT,TH,TT\}$, the $\sigma$-algebra $\mathcal{F}_t$ of events decidable after the first $t$ tosses grows exactly as the question asks:
-$\mathcal{F}_0=\{\emptyset,\Omega\}$
-$\mathcal{F}_1=\sigma(\{HH,HT\},\{TH,TT\})=\{\emptyset, \Omega, \{HH,HT\},\{TH,TT\}\}\supset \mathcal{F}_0$
-$\mathcal{F}_2=\sigma(\{HH\},\{HT\},\{TH\},\{TT\})=\{\emptyset, \Omega, \{HH\},\{HT\},\{TH\},\{TT\},\{HH,HT\},\{HH,TH\},\{HH,TT\},\{HT,TH\},\{TH,TT\},\{HT,TT\},\{HH,TH,HT\},\{HH,HT,TT\},\{HH,TH,TT\},\{HT,TH,TT\}\}\supset \mathcal{F}_1\supset \mathcal{F}_0$<|endoftext|>
-TITLE: Have equation, want its name
-QUESTION [7 upvotes]: In reading journal articles (in physics), I often come across recurring equations in the Introduction sections. Sometimes they don't mention their names. For example, I come across
-$$\begin{eqnarray*}
-E\Psi &=& \hat{H}\Psi\\
-i\hbar\frac{\partial}{\partial t}\Psi &=& \hat{H}\Psi
-\end{eqnarray*}$$
-Yes, that's the Schrödinger equation (not that I actually understand it, just got it from Wikipedia). But what if I didn't know what it is? Is there some place I can type in the equation, then I'll know what its name is? Then from there, I'll know where to start looking for more information.
-
-REPLY [7 votes]: You could try http://uniquation.com/. This is basically a TeX searcher, but it is better than a full text search. For instance a search for \frac{a}{b} returns results which contain \frac{e}{m}.
-I believe this is still in Beta though.
-Caveat: I haven't used this site much.<|endoftext|>
-TITLE: Prime ideals and examples of them
-QUESTION [8 upvotes]: So the question states that the intersection of two prime ideals is always a prime ideal. Well this is false, but I need an example to counter it. I looked online and found one: "For example, inside $\mathbb Z$, $2 \mathbb Z$ and $3\mathbb Z$ are prime, but their intersection, $6\mathbb Z$, is not prime",
-so I just need some explanation of what a prime ideal is and how you can determine that an ideal is prime. The definition I know of is
-
-Let $R$ be a comm. ring with identity. An ideal $P$ is prime iff $R \neq P$ and whenever $bc \in P$ then $b \in P$ or $c \in P$
-
-I don't know how to apply this definition to the example above.
-
-REPLY [31 votes]: I'm going to concentrate on commutative rings, because those are the ones closest to what you might be familiar with. To deal with the notions in noncommutative rings takes a bit more work, but is certainly doable.
-Ideals, at least at first, are meant to generalize the notion of "is a multiple of" (it turns out that there is a different motivation for singling out ideals among the subrings, which is essentially the same reason for singling out normal subgroups among all subgroups of a group; this is not relevant right now, but you might be interested in taking a look at this answer about normal subgroups later).
-If you consider the integers, you can characterize a number $n$ (up to sign) by describing all elements that are multiples of $n$. If you know exactly who the multiples of $n$ are, then you know exactly who $n$ is (except that you might confuse it with $-n$). So, instead of looking at the number $n$, we can look at the collection of all its multiples,
-$$n\mathbb{Z} = (n) = \{a\in\mathbb{Z}\mid a\text{ is a multiple of }n\}.$$
-What properties do the collections of "all multiples of a given number" have? Well:
-
-The collection always contains $0$.
-If $a$ and $b$ are in the collection, so are $a+b$ and $a-b$.
-If $a$ is in the collection, and $r$ is any element of the ring, then $ra$ is also in the collection.
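-These three closure properties are easy to sanity-check numerically; here is a small Python sketch (my own test scaffolding, and only a finite check on a window of integers, not a proof) for the multiples of $6$ inside $\mathbb{Z}$.
-
-    # Finite sanity check of the three properties for the ideal 6Z in Z.
-    n = 6
-    window = range(-60, 61)
-    ideal = {a for a in window if a % n == 0}
-
-    assert 0 in ideal                          # property 1
-    for a in ideal:
-        for b in ideal:
-            for c in (a + b, a - b):           # property 2
-                if c in window:                # only test inside the window
-                    assert c in ideal
-        for r in range(-5, 6):                 # r plays the ring element
-            if a * r in window:
-                assert a * r in ideal          # property 3
-    print("closure properties hold on the sample window")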
-
-In the integers, and also in many other rings (for example, $\mathbb{R}[x]$, the polynomials over $\mathbb{R}$), every collection that satisfies these three properties is in fact the collection of all multiples of some $a\in R$. But there are other rings where this does not happen. For example, if you consider the ring $\mathbb{Z}[x]$ of all polynomials with integer coefficients, you can take
-$$I = \{ p(x)\in\mathbb{Z}[x]\mid p(0)\text{ is even}\}.$$
-This collection satisfies all three properties: $0$ is in $I$; if $p(x)$ and $q(x)$ are in $I$, then so is $p(x)+q(x)$, because $(p+q)(0) = p(0)+q(0)$ is a sum of two even numbers, hence even; and if $p(x)$ is in the collection and $q(x)$ is any polynomial with integer coefficients, then $(pq)(0) = p(0)q(0)$ is even, because $p(0)$ is even and $q(0)$ is an integer. So $I$ is an ideal.
-Is this $I$ the collection of "all multiples of $a$" for some $a\in\mathbb{Z}[x]$? No. If there were such an $a$, then since $2\in I$, $2$ would have to be a multiple of $a$. That means that $a$ must be a constant polynomial, and must be either $\pm 1$ or $\pm 2$ (the only elements of $\mathbb{Z}[x]$ that divide $2$). It can't be either $1$ or $-1$, because "multiples of $\pm 1$" is everything, and not everything is in our $I$. But $x\in I$ as well, since evaluating $x$ at $0$ is even; and neither $2$ nor $-2$ divides $x$ in $\mathbb{Z}[x]$. So even though $I$ is an ideal, it is not "all multiples of" someone. So the notion of "ideal", even though it starts up as "all multiples of" someone, is actually more general. This is the distinction between principal ideals (ideals which are "all multiples of $a$" for some $a$), and more general ideals (which need not be made up of "all multiples of $a$" for any $a$).
-Nonetheless, ideals are closely connected to the notions of divisibility; as Dedekind noted when he introduced them in the 19th century, if you want to try to do "modulo arithmetic" as in the integers (working modulo $n$ is "really" working in $\mathbb{Z}/(n)$) then the conditions you need on a collection are precisely the conditions that are needed to have ideals. That is, ideals are exactly the things for which you can do "modulo arithmetic". And modulo arithmetic is all about divisibility (after all, $a\equiv b\pmod{n}$ means that $n$ divides $a-b$).
-So we want to also keep track of a few of the other special properties that some numbers have, and "translate" them into what they mean for ideals.
-Prime numbers play a major role in divisibility issues in the integers. How does the "prime" property translate into the setting of ideals? A prime is a number $p$ such that:
-
-$p\neq \pm 1$; and
-If $p$ divides a product, then it divides at least one of the factors.
-
-Okay, how does that translate into ideals? If you think of an ideal $I$ as "the collection of all multiples of some number" (again, not really that in the general setting, but that's where the intuition and some of the definitions come from), then when do the multiples correspond to a prime? We need the prime not to divide everything; so we require the ideal to not be the entire ring, $I\neq R$. And the second condition: if $ab$ is a multiple, then either $a$ or $b$ is a multiple. In other words: if $ab\in I$, then either $a\in I$ or $b\in I$. So we define:
-
-An ideal $I$ is a prime ideal if and only if $I\neq R$, and whenever $ab\in I$, either $a\in I$ or $b\in I$.
-
-Going back to the intuition for ideals: what does the intersection of ideals correspond to?
If $I$ is sort of like "all multiples of $a$", and $J$ is sort of like "all multiples of $b$", then what is $I\cap J$? All things that are multiples of both $a$ and $b$! -So, if $P$ and $Q$ are both prime ideals, would $P\cap Q$ be a prime ideal? Generally no: in general, you don't expect things that are multiples of two different primes to be themselves prime. And so you get to your example. $(2)$ is a prime ideal in $\mathbb{Z}$, precisely because $2$ is a prime number: if $ab\in(2)$, then $ab$ is a multiple of $2$, so either $a$ is a multiple of $2$ or $b$ is a multiple of $2$ (because $2$ is a prime number), so either $a\in(2)$ or $b\in(2)$. Similarly with $(3)$. But $(2)\cap(3)$ will be all numbers that are multiples of both $2$ and $3$; this corresponds to "all multiples of $6$", as we know from elementary number theory: $(2)\cap(3)=(6)$. But $6$ is not a prime number, so there is no reason to expect $(6)$ to be a prime ideal. In fact, a witness to the fact that $6$ is not a prime number should also work as a witness to the fact that $(6)$ is not a prime ideal. And indeed it does. -Caveat. The analogy of ideals as "set of all multiples of something" works reasonably well in very familiar settings, but breaks down very quickly once you get beyond the most basic of rings. For instance, in the integers, you cannot have two nonzero prime ideals $(p)$ and $(q)$ with $p\neq 0$, $q\neq 0$, $p\neq \pm q$, and $(p)\subseteq (q)$: that would mean that $p$ is a multiple of $q$, and with prime numbers that can only happen if $p=\pm q$. But in other rings it is certainly possible for it to happen. For instance, in $R=\mathbb{R}[x,y]$, the ring of polynomials in two variables, both -\begin{align*} -(x) &= \{ p(x,y)\in R\mid p(0,y) = 0\text{ for all }y\};\\ -(x,y) &= \{ p(x,y)\in R\mid p(0,0) = 0\} -\end{align*} -are ideals; clearly $(x)\subseteq (x,y)$, $(x)\neq (0)$, $(x,y)\neq (0)$, and $(x)\neq (x,y)$. Yet both $(x)$ and $(x,y)$ are prime ideals. -So the analogy can only take you so far, and it can be misleading if you try to take it all the way. But at least at first you might find it a useful hook for thinking about possible examples and possible counterexamples.<|endoftext|> -TITLE: First-Order Logic vs. Second-Order Logic -QUESTION [67 upvotes]: Wikipedia describes the first-order vs. second-order logic as follows: - -First-order logic uses only variables that range over individuals (elements of the domain of discourse); second-order logic has these variables as well as additional variables that range over sets of individuals. - -It gives $\forall P\,\forall x (x \in P \lor x \notin P)$ as an SO-logic formula, which makes perfect sense to me. -However, in a post at CSTheory, the poster claimed that $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$ is an FO formula. I think this must not be the case, since in the above formula, $x$ and $y$ are sets of individuals, while $z$ is an individual (and therefore this must be an SO formula). -I mentioned this as a comment, but two users commented that ZF can be entirely described by FO logic, and therefore $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$ is an FO formula. -I'm confused. Could someone explain this please? - -REPLY [21 votes]: Replace the non-logical relational symbol ∈ with R and the confusion goes away. 
Your problem arises from the fact that the symbol ∈ makes you confuse the neutral symbol of the language of set theory (which has no properties other than those expressed by the axioms of the theory) with its intended interpretation as the membership relation between real sets. Set theories like ZF have very strange models which have nothing to do with the intended model, and in which the interpretation of the relation $R$ has nothing to do with the real membership relation between sets.
-A first-order theory can have many sorts, and the intended meaning of some of those sorts can be higher-type objects; e.g. we can have a two-sorted theory where the intended meaning of the objects of the first sort is natural numbers and the intended meaning of the objects of the second sort is sets of natural numbers. We can quantify over them and the theory is still first order, though it has two sorts. The models of this theory need to have two sets, one for interpreting the objects of the first sort and the other for interpreting the objects of the second sort, but there does not need to be any relation between these two sets, even if the intended interpretation of the second sort is sets of objects of the first sort. The only properties that these two sets need to satisfy are those expressed by the axioms of the theory. You can add the axioms of a set theory like ZFC for the second sort and consider objects of the first sort as urelements of the set theory satisfying some other axioms, e.g. first-order Peano Arithmetic. This is still a first-order theory. And the theory will have models where the interpretations of the two sorts are not related in the intended way, i.e. the objects of the second sort will not be the sets of objects of the first sort, and there is no way to enforce this syntactically, i.e. using axioms.
-A second- or higher-order theory enforces some restrictions on the possible interpretations of some sorts, which are called the higher sorts. These restrictions are not syntactical, i.e. they are not axioms of the theory, but semantical, i.e. we restrict the models of the theory to those that satisfy certain conditions, e.g. that the members of the set interpreting the second sort in the two-sorted number theory I mentioned above are really the sets of objects of the first sort. This is like assuming some amount of set theory semantically. Without these semantic restrictions on the interpretation, the theory is still first order.
-Higher-order theories are interesting for studying things like the natural numbers. The (complete/full) second-order Peano arithmetic will force the set interpreting the second sort to be exactly the powerset of the first sort. Note that this theory is not axiomatizable, i.e. there is no set of first-order axioms that will capture exactly these models. (In fact even weaker versions of this second-order theory, which do not require the existence of all subsets of the first sort, are not axiomatizable.)
-On the other hand, I don't know of any higher-order set theory, and in fact the concept itself seems a little bit unnatural (of course this may be due to my lack of knowledge).<|endoftext|>
-TITLE: Understanding the intuition behind math
-QUESTION [61 upvotes]: I'm currently a Calculus III student. I enjoy math a lot, but only when I understand its beauty and meaning. However, so many times I have no idea what it is I am learning about, although I am still able to solve problems pertaining to those topics.
I'm memorizing the sounds this language makes, but I don't understand them. I think a big reason why most children and teenagers really dread math over any other subject is that the only thing taught to them is equations and numbers, and its significance is never explained to them. For if people really knew what it all meant, then they'd realize it's probably the most important thing they could ever study. Even in college, at this relatively high level of math, they still do not preach its meaning, but rather scare you into cramming page after page of material so you can pass the next exam. When you have passed the exam, then it is safe for you to forget the material you just absorbed. This is the reason I often find myself bored of studying my current topics. For some things, I see the intuition behind them, and those are the things that keep me interested in calculus, but it's often so hard to come up with a good meaning of what I'm learning all by myself.
-It took mankind hundreds and thousands of years to come to where we are with math, so I don't expect to understand its true meaning in an hour, but I'd really like to. The school curriculum, here in America at least, doesn't teach meaning or utility. This is the reason so many youngsters always ask "when will we ever need to use this stuff?", a question I have since learned is naive. So I guess what I'm trying to say is that I've grown bored of this material, as we often cram 2 full sections of a topic in one day. It's impossible to keep up with its meaning, but if I am to survive this course along with more advanced courses to come, I must be able to understand its meaning. My textbook doesn't help me with this issue though: no math book can really teach intuition, but they don't even attempt to. They barely even go into the history of the current topic, and that's one of my favorite parts: I like to read about how an ordinary man came up with a theory that revolutionized the world. It makes math feel human to me, and makes me feel that I too can understand it like those before me.
-Most of the question answerers on this site are mathematicians, or at least mathematicians in training. This means you have come to where you are by understanding what you have learned in the past. How have you done this? How have you been able to connect all the pieces? What are some good resources that will help in what I hope to do? How do I stop being a robot, and actually connect with what I'm learning?
-
-REPLY [6 votes]: I'm in my third year of a maths bachelor (in Italy), and the situation is the same over here. We are now dealing with topological vector spaces and advanced measure theory and I don't have a clue what's going on, but this is not a new issue.
-In my first year I found betterexplained.com very helpful. Unfortunately, it is quite deficient on advanced topics, but fortunately, 3blue1brown came to life recently to fill this gap, and it is a source of astounding material.
-Ultimately, I feel like the source of the issue is that there's too much teaching on too many topics.
-I am a huge believer that self-learning is the most effective way of learning. However, there's just no space for it in modern curricula. After 4-6 hours of morning lessons, there's just no way I will be sharp enough in the afternoon to come up with my own intuitions, my own solutions, my own understanding. I will just look at how many notes I took in the morning, be scared, and start racing through them, understanding as little as needed to follow the next topic.
It is really sad to say, but that's the way it is.
-I would love to be faced with a problem and be told to seek the solution (which is what I do in my free time, on topics unrelated to university classes). At university level, solutions are likely to be very difficult, and it's unlikely that anyone will really find them, but that does not matter. If you have been struggling with a problem, and you really know what you are trying to address, what caused you trouble, etc., you will welcome the lesson in a different way. And it's not just a matter of "I'm more interested in it now that I have struggled with it", it is also a matter of efficacy. When I see the solution to a problem I have really worked on for a week, it will sink in.
-But building this way of working into a math curriculum would necessarily imply that far fewer topics would be covered, because each of them would take more time, and there would also be fewer lesson hours. Would that be bad? I don't really think that you are a mathematician only if you know what a Hilbert space is, or what Riesz's theorem states, or what Baire's lemma says. I believe math is a matter of how you face a problem, not a set of notions. Anyway, I really don't see the point of moving on to another topic if the previous one hasn't been thoroughly understood. After all, even from a working perspective, knowing advanced analysis is unlikely to benefit you (unless you do research), whereas having true intuitions about basic calculus topics could be really helpful.
-We are not in the 1800s anymore; there are a lot of ways to gather knowledge other than university. This is not to say that we don't need universities anymore, but that this is a huge opportunity! It means we can stop giving students notions, thinking that otherwise it will be difficult for them to obtain them, and start cultivating the math mindset in them, which will allow them to go through any theory/notion in the future, whenever it is needed.
-This also means that a student should not have to go through 3 courses simultaneously. How is one supposed to get deeply into a subject, if he has to juggle topological vector spaces, probability, singular homology and linear programming? There's just no way: as soon as one stops to really consider a subject for a couple of days, he's going to be left so far behind on everything else that he's going to be crying!
-All of this to say: we need to teach less stuff. There's no other way as far as I can see. Teach less, let students do more.<|endoftext|>
-TITLE: coin toss question
-QUESTION [26 upvotes]: Two players A and B each have a fair coin and they start to toss simultaneously (each simultaneous toss counted as one round). They toss for $n$ ($\ge 1$) rounds and stop because they have accumulated the same number of heads (possibly 0, in the case that neither of them got any heads) for the first time. What is the distribution of $n$ and its expectation?
-
-REPLY [13 votes]: The problem is symmetric with respect to the two players. Treating the first round separately, we have probability $1/2$ to stop at $n=1$ (both $0$ heads or both $1$ head), and probability $1/2$ to continue with a difference of $1$ in the head counts. It will be convenient to denote by $q_n$ the conditional probability that we end $n$ rounds later if the first round left us with a difference of $1$. The overall probability to end in round $n$ will then be given by $1/2$ for $n=1$ and by $q_{n-1}/2$ for $n>1$.
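-A quick Monte Carlo sketch (Python; the function name, trial count and round cap are my own choices, and the cap discards the roughly $1\%$ of runs that last longer) agrees with this setup: the observed frequencies for $n=1,2,3,4$ come out near $1/2$, $1/8$, $1/16$, $5/128$, matching $1/2$ and $q_{n-1}/2$ with the $q_n$ computed below.
-
-    import random
-    from collections import Counter
-
-    def rounds_until_equal(max_rounds=5_000):
-        """Toss two fair coins per round; return the first round at which
-        the cumulative head counts coincide (None if the cap is hit)."""
-        diff = 0  # A's head count minus B's head count
-        for n in range(1, max_rounds + 1):
-            diff += random.randint(0, 1) - random.randint(0, 1)
-            if diff == 0:
-                return n
-        return None
-
-    random.seed(0)
-    trials = 20_000
-    counts = Counter(rounds_until_equal() for _ in range(trials))
-    for n in (1, 2, 3, 4):
-        print(n, counts[n] / trials)  # ~0.500, 0.125, 0.0625, 0.0391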
-The sign of the difference cannot change since we stop when it becomes zero, so we can assume without loss of generality that it's non-negative. Thus we are interested in the probability distribution $p_n(k)$, where $n$ is the round and $k$ is the difference in head counts. This is a Markov chain with an absorbing state at $k=0$. The evolution of the probabilities is given by -$$p_{n+1}(k)=\frac{1}{4}p_n(k-1)+\frac{1}{2}p_n(k)+\frac{1}{4}p_n(k+1)$$ -for $k>1$ and $p_{n+1}(1)=\frac{1}{2}p_n(1) + \frac{1}{4}p_n(2)$ for $k=1$. Let's ignore the complication at the origin for a bit and just treat the recursion relation for general $k$. This is a linear operator being applied to the sequence $p_n (k)$ to obtain the sequence $p_{n+1}(k)$, and the eigenfunctions of this linear operator are the sequences $e^{ik\phi}$ with $-\pi < \phi \le \pi$ (since other values of $\phi$ just reproduce the same sequences). We can obtain the corresponding eigenvalue $\lambda(\phi)$ from -$$\lambda(\phi) e^{ik\phi}=\frac{1}{4}e^{i(k-1)\phi}+\frac{1}{2}e^{ik\phi}+\frac{1}{4}e^{i(k+1)\phi}\;,$$ -$$\lambda(\phi) =\frac{1}{4}e^{-i\phi}+\frac{1}{2}+\frac{1}{4}e^{i\phi}\;,$$ -$$\lambda(\phi)=\frac{1+\cos\phi}{2}=\cos^2\frac{\phi}{2}\;.$$ -We have to combine the sequences $e^{ik\phi}$ and $e^{-ik\phi}$ into sines and cosines to get real sequences. Here's where the boundary at $k=0$ comes into play again. The equation $\lambda(\phi) p_n(1)=\frac{1}{2}p_n(1) + \frac{1}{4}p_n(2)$ provides an additional condition that selects a particular linear combination of the sines and cosines. In fact, it selects the sines, since this equation differs only by the term $\frac{1}{4}p_n(0)$ from the general recursion relation, and they can only both be satisfied if this term would have been zero anyway, which is the case for the sines. -Since we know the time evolution of the eigenfunctions (they are multiplied by the corresponding eigenvalue in each round), we can now decompose our initial sequence, $p_1(1)=1$ and $p_1(k)=0$ for $k\neq1$, into sines and write its time evolution as the sum of the time evolution of the sines. Thus, -$$p_1(k)=\int_0^\pi f(\phi) \sin (k\phi)\mathrm{d}\phi$$ -for $k \ge 1$, and we obtain our initial sequence by taking $f(\phi)=(2/\pi)\sin\phi$. Then the time evolution is given by -$$p_n(k)=\frac{2}{\pi}\int_0^\pi \sin\phi \sin (k\phi)\left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi\;.$$ -Now the probability $q_n$ that we end after round $n$ is just the probability $p_n(1)$ times the probability $1/4$ that we move from $k=1$ to $k=0$, and so -$$q_n=\frac{1}{2\pi}\int_0^\pi \sin^2\phi \left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi\;.$$ -According to Wolfram, this is -$$q_n=\frac{\Gamma(n+1/2)}{\sqrt{\pi}\,\Gamma(n+2)}\;,$$ -which I presume could be simplified for integer $n$. 
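-For integer $n$ it does simplify: $\Gamma(n+1/2)=\frac{(2n)!\sqrt{\pi}}{4^n\,n!}$ gives $q_n=\frac{1}{n+1}\binom{2n}{n}\big/4^n$, the $n$th Catalan number divided by $4^n$. A short numeric check (standard-library Python; the crude midpoint quadrature is my own sketch) confirms that both closed forms match the defining integral:
-
-    import math
-
-    def q_gamma(n):
-        # Closed form above: Gamma(n + 1/2) / (sqrt(pi) * Gamma(n + 2)).
-        return math.gamma(n + 0.5) / (math.sqrt(math.pi) * math.gamma(n + 2))
-
-    def q_catalan(n):
-        # Simplification for integer n: nth Catalan number over 4^n.
-        return math.comb(2 * n, n) / ((n + 1) * 4 ** n)
-
-    def q_integral(n, steps=20_000):
-        # Midpoint rule for (1/(2 pi)) Int_0^pi sin^2(phi) ((1+cos phi)/2)^(n-1) dphi.
-        h = math.pi / steps
-        total = sum(
-            math.sin((k + 0.5) * h) ** 2
-            * ((1 + math.cos((k + 0.5) * h)) / 2) ** (n - 1)
-            for k in range(steps)
-        )
-        return total * h / (2 * math.pi)
-
-    for n in range(1, 6):
-        print(n, q_gamma(n), q_catalan(n), round(q_integral(n), 8))
-    # All three agree: 1/4, 1/8, 5/64, 7/128, 21/512, ...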
We can check that the $q_n$ sum up to $1$:
-$$\sum_{n=1}^\infty q_n=\frac{1}{2\pi}\int_0^\pi \sin^2\phi \sum_{n=1}^\infty\left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi$$
-$$=\frac{1}{2\pi}\int_0^\pi \frac{\sin^2\phi}{1-\frac{1+\cos\phi}{2}}\mathrm{d}\phi$$
-$$=\frac{1}{\pi}\int_0^\pi \frac{\sin^2\phi}{1-\cos\phi}\mathrm{d}\phi$$
-$$=\frac{1}{\pi}\int_0^\pi \frac{\sin^2\phi}{1-\cos^2\phi}\left(1+\cos\phi\right)\mathrm{d}\phi$$
-$$=\frac{1}{\pi}\int_0^\pi 1+\cos\phi\,\mathrm{d}\phi$$
-$$=1\;.$$
-We can also try to find the first moment of this probability distribution:
-$$\sum_{n=1}^\infty n q_n=\frac{1}{2\pi}\int_0^\pi \sin^2\phi \sum_{n=0}^\infty n\left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi$$
-$$=-\frac{1}{\pi}\int_0^\pi \sin\phi \sum_{n=0}^\infty \frac{\mathrm{d}}{\mathrm{d}\phi}\left(\frac{1+\cos\phi}{2}\right)^n\mathrm{d}\phi$$
-$$=\frac{1}{\pi}\int_0^\pi \cos\phi \sum_{n=0}^\infty \left(\frac{1+\cos\phi}{2}\right)^n\mathrm{d}\phi$$
-$$=\frac{2}{\pi}\int_0^\pi \frac{\cos\phi}{1-\cos\phi}\mathrm{d}\phi\;.$$
-This integral diverges at the limit $\phi=0$ (which corresponds to the sequence wandering off to large $k$), and thus as Mike had already pointed out the expected number of rounds is infinite.<|endoftext|>
-TITLE: Some basics of Sobolev spaces
-QUESTION [11 upvotes]: Let $W^{m,p}(\Omega) = \{ f \in L^p(\Omega): D^\alpha f \in L^p(\Omega) \text{ for multi-indices } |\alpha| \leq m\}$, where $D$ denotes the weak derivative. Let $W_0^{m,p}$ denote the closure of $C_c^\infty(\Omega)$ in $W^{m,p}(\Omega)$.
-Why is it true that $W_0^{m,p}(\mathbb{R}^d) = W^{m,p}(\mathbb{R}^d)$, but in general $W_0^{m,p}(\Omega) \subsetneq W^{m,p}(\Omega)$?
-I am trying to understand why there is a need to consider $W_0^{m,p}(\mathbb{R}^d)$. I'm guessing it's because the elements in $W^{m,p}(\Omega)$ can get really messy, but I don't have very good intuition about both spaces.
-
-REPLY [6 votes]: There are really three Sobolev spaces, which in many situations are provably the same, but the details concerning boundary values are (unsurprisingly) a large technical issue.
-The a-priori smallest space is the closure of test functions (compactly supported smooth, with support in the interior of the domain) with respect to the Sobolev norm. The a-priori middle-sized Sobolev space is the closure of smooth functions with respect to the Sobolev norm. The a-priori largest Sobolev space is the collection of distributions with the corresponding distributional derivatives in $L^p$. (A relatively recent book by Gerd Grubb, "Distributions and Operators", discusses the impact of boundary conditions.)
-In nice situations, such as "free space" problems, all these spaces are readily proven to be the same.
-With boundary issues, some not-necessarily intuitive things can happen, since Sobolev norms (while arguably more appropriate than $C^k$ norms for discussion of PDEs) are not instantly comparable to classical pointwise ($C^k$) norms. That is, there is the "loss" of $n/2+\epsilon$ arising in Sobolev's inequality.
-Nevertheless, there are "trace theorems", which with smooth boundaries predict accurately what loss of Sobolev index occurs upon restriction to the boundary: it is half the codimension, so, typically, $1/2$.
-For example, an $L^2$ limit of test functions (supported in the interior) certainly can have non-zero boundary values. Raising the Sobolev-norm's index implies vanishing on the boundary in a (typically less-by-$1/2$) Sobolev space on the boundary.
Comparison to $C^k$ norms is via Sobolev's inequality.<|endoftext|>
-TITLE: Are there any known barriers to some approach for solving P vs. NP?
-QUESTION [8 upvotes]: Are there any known barriers to showing the following invariant (perhaps by some sort of induction)?
-Let $\Sigma$ be some finite alphabet with $|\Sigma| \geq 2$, let $M$ be some (deciding) deterministic Turing machine with input alphabet $\Sigma$, and let $L_0 \subseteq \Sigma^{\star}$ be some non-sparse, $\mbox{NP}$-complete language.
-Then at least one of the following properties holds:
-
-$M$ doesn't always terminate.
-$M$ has superpolynomial time complexity.
-$L(M) \triangle L_0$ is non-sparse.
-
-Concise problem description: $\mbox{NP} \not\subseteq \mbox{P-close}$ (according to Tsuyoshi Ito, see his answer).
-Caution: This problem is equivalent to $\mbox{P} \neq \mbox{NP}$.
-
-REPLY [8 votes]: This is a slightly more detailed version of some of my comments on your cross-posting on cstheory.stackexchange.com.
-The statement which you described can be concisely written as NP ⊈ P-close. Here P-close is the class of decision problems for which there exists a polynomial-time algorithm A such that the set of instances on which A fails to answer correctly is sparse.
-It is easy to see that P ⊆ P-close ⊆ P/poly, from which it is easy to see that the implications NP ⊈ P/poly ⇒ NP ⊈ P-close ⇒ P≠NP hold. Since NP ⊈ P-close implies P≠NP, any proof of NP ⊈ P-close must also overcome the relativization barrier and the algebrization barrier. I do not know if the natural-proof barrier (which every proof of NP ⊈ P/poly must overcome) necessarily applies to NP ⊈ P-close.
-I do not think that it is known that NP ⊈ P-close is equivalent to P≠NP as you claim.
-
-Edit: Contrary to what I wrote in an earlier revision, I learned that NP ⊈ P-close is indeed equivalent to P≠NP. Although I already answered your question about barriers above, I guess that writing down the proof of this equivalence may be useful. The proof is based on what you described on cstheory.stackexchange.com with one modification (namely, I use the result by Ogihara and Watanabe instead of Mahaney's theorem).
-As stated above, we have the implication NP ⊈ P-close ⇒ P≠NP. We will prove the converse: NP ⊆ P-close ⇒ P=NP.
-A polynomial-time k-truth-table reduction from a language L1 to a language L2 is a Turing reduction from L1 to L2 which invokes the oracle at most k times nonadaptively. Note that a many-one reduction is a special case of a 1-truth-table reduction. Ogihara and Watanabe [OW91] proved the following result:
-Theorem [OW91]. If some sparse language is NP-complete under polynomial-time k-truth-table reducibility for some constant k, then P=NP.
-Note that this theorem generalizes Mahaney's theorem, which is the special case of the theorem where the reduction is restricted to a polynomial-time many-one reduction.
-Assume NP ⊆ P-close. Then SAT ∈ P-close. Equivalently, there exists a language L∈P such that the symmetric difference S=SAT△L is sparse. Then the following is a polynomial-time 1-truth-table reduction from SAT to S: given an input x, decide (in polynomial time) whether x∈L and decide (by invoking the oracle for S) whether x∈S, and return the XOR of the two results. Therefore, the sparse set S is NP-complete under polynomial-time 1-truth-table reducibility. This implies P=NP by the aforementioned theorem by Ogihara and Watanabe.
-[OW91] Mitsunori Ogihara and Osamu Watanabe. On polynomial-time bounded truth-table reducibility of NP sets to sparse sets.
SIAM Journal on Computing, 20(3):471–483, June 1991. http://dx.doi.org/10.1137/0220030<|endoftext|>
-TITLE: Motivation for Eisenstein Criterion
-QUESTION [32 upvotes]: I have been thinking about this for quite some time.
-
-Eisenstein Criterion for Irreducibility: Let $f$ be a primitive polynomial over a unique factorization domain $R$, say
-$$f(x)=a_0 + a_1x + a_2x^2 + \cdots + a_nx^n \;.$$
-If $R$ has an irreducible element $p$ such that
-$$p\mid a_m\ \text{ for all }\ 0\le m\le n-1$$
-$$p^2 \nmid a_0$$
-$$p \nmid a_n$$
-then $f$ is irreducible.
-
-Can anyone give me an explanation of how one might have conjectured this criterion? Thinking along the same lines, the first polynomial which came to my mind was $x^{2}+1 \in \mathbb{R}[x]$, which is irreducible. But there are lots of polynomials and it's very difficult to think of a condition which would make them irreducible.
-
-REPLY [8 votes]: If you don't know about ramification or valuations and things, there is still a great easy way to "see" Eisenstein from the $\mathbb{Z}[x]$ case. This is presented in Integers, Polynomials, and Rings by Ron Irving as a series of exercises to get the student to guess Eisenstein on their own.
-First examine the case $x^n-p$ where $p$ is a prime. If this factors, then there are $f(x)=\sum_{k=0}^m a_kx^k$ and $g(x)=\sum_{k=0}^r b_kx^k$ for which $f(x)g(x)=x^n-p$. First, you get that $a_0b_0=-p$ and hence without loss of generality $p$ divides $a_0$ and $p$ does not divide $b_0$ (we'll merely use these facts rather than $b_0=\pm 1$).
-Now since $a_0b_1+a_1b_0=0$ we get that $p|(a_0b_1+a_1b_0)$ and $p|a_0$ so $p|a_1b_0$. This means $p|a_1$ or $p|b_0$, but $p$ does not divide $b_0$, so $p|a_1$. You can keep bootstrapping this argument up all the coefficients $a_k$ to get that $p|a_m$ for the top coefficient, which is a contradiction since $a_mb_r=1$.
-But what was really used here? Not much. So you could repeat the exact same argument with $x^n-pm$ where $(p,m)=1$. But that still isn't as general as you could go. You didn't need that $\sum_{j+k=l} a_jb_k=0$, you only needed that $p$ divided that sum to make the argument work, which is the same as checking that the polynomial $x^n+c_{n-1}x^{n-1}+\cdots +c_0$ has all $c_i$ divisible by $p$ (and $c_0$ not divisible by $p^2$).
-Yet again, this still isn't as general as the argument allows you to go, because the contradiction only came from the fact that $p$ did not divide the leading coefficient, not that the leading coefficient was $1$.
-So merely extrapolating what made the proof that $x^n-p$ is irreducible in $\mathbb{Z}[x]$ work gives you the full Eisenstein in $\mathbb{Z}[x]$. Lastly, if you want to go to $R[x]$, you need to figure out which facts about division you needed about $\mathbb{Z}$. But Eisenstein really is a criterion invented artificially in $\mathbb{Z}$ that could be guessed from examining what the key properties of one proof were.<|endoftext|>
-TITLE: Why events in probability are closed under countable union and complement?
-QUESTION [5 upvotes]: In probability, events are considered to be closed under countable union and complement, so mathematically they are modeled by a $\sigma$-algebra. I was wondering why events are considered to be closed under countable union and complement?
-In Nate Eldredge's post, he has done an excellent job of explaining this, by using whether questions are answered or not as an analogy to whether events occur or not, if I understand his post correctly. However, if someone could explain plainly without analogy, it could be clearer to me.
-I was particularly curious why events are not considered to be closed under arbitrary (possibly uncountable) union, but instead just under countable union. So possibly to model events using the power set? I think this is not addressed in Nate Eldredge's post.
-My guess would be that the reason might be related to the requirement that the likelihood of any event be "measurable" in some sense. But how exactly to understand this requirement is unclear to me.
-PS: This post is related to my previous one Interpretation of sigma algebra, but the questions asked in these two are not the same.
-Thanks and regards!
-
-REPLY [6 votes]: As Jonas mentioned, allowing arbitrary unions is not "consistent", in the sense that there is no proper definition of probability. This is also related to the fact that infinite sums make much more sense when countable, since it's not clear how to attach a finite number to an uncountable sum of positive reals.
-On the other hand, many desirable events are describable using countable unions and intersections. For example, an event like "the random walk returns to the origin" is a union of countably many events "the random walk returns to the origin at time $t$", and any one of those is a finite union of "basic" events.
-In general, first-order properties always correspond to taking countable unions and intersections; this means that if you have a statement of the form "$\forall x \exists y \cdots P(x,y,\ldots)$", where $x,y,\ldots$ are integers, and the $P$s are basic events (e.g. for a random walk, depend on finitely many times), then the corresponding event is guaranteed to be in the $\sigma$-algebra, i.e. is guaranteed to have assigned to it a "probability".<|endoftext|>
-TITLE: generalizations of determinant and trace
-QUESTION [8 upvotes]: There are $n$ elementary symmetric polynomials in the eigenvalues of a square matrix. Two of these are the determinant and the trace, each of which has countless applications and interpretations in algebra and geometry.
-What about the other symmetric polynomials? They are also similarity invariants, yet I've never seen them used or referenced. Are there any geometric interpretations, or applications, for these other invariants?
-
-REPLY [9 votes]: The trace and the determinant are the most useful invariants because the trace is additive and the determinant is multiplicative. The other coefficients of the characteristic polynomial are neither. The determinant also has a clear geometric interpretation. In addition, all of the coefficients of the characteristic polynomial of an operator $T$ can be computed from the traces of the operators $T^n$; this is one reason why it is not so surprising that traces of group elements in group representations carry a lot of information.
-This is not to say that people never use the other invariants, although they don't tend to have special names. For example, my understanding is that the Killing form in Lie theory, an important tool, was discovered by messing around with characteristic polynomials. And the construction underlying the coefficients of the characteristic polynomial, the exterior algebra, is enormously useful.<|endoftext|>
-TITLE: Formulas for the (top) coefficients of the characteristic polynomial of a matrix
-QUESTION [14 upvotes]: The characteristic polynomial of a matrix $A$ is defined as:
-$$\chi(A) = \det(xI-A) = \sum_{i=0}^n (-1)^i\cdot \operatorname{tr}^{(i)}(A) \cdot x^{n-i}$$
-The trace is the sum of the eigenvalues of a matrix, $\operatorname{tr}(A) = \operatorname{tr}^{(1)}(A)$.
It is also the sum of the diagonal entries:
-$$\operatorname{tr}(A) = \sum_{i=1}^n A_{ii}$$
-The sum of the products of pairs of eigenvalues is like the next trace.
-Is this formula valid?
-$$\operatorname{tr}^{(2)}(A) = \sum_{1 \leq i < j \leq n } A_{ii} A_{jj} - A_{ij} A_{ji}$$
-What about this one?
-$$\operatorname{tr}^{(2)}(A) = \tfrac12(\operatorname{tr}(A)^2 - \operatorname{tr}(A^2))$$
-Are there corresponding formulas for the next one, $\operatorname{tr}^{(3)}$?
-
-REPLY [3 votes]: Regarding your last question on $\text{tr}^{(3)}$, there is a kind of recursive definition for all $\text{tr}^{(k)}$ given by:
-$$
-\text{tr}^{(k)}=-(1/k)\left(\sum_{j=1}^k (-1)^{j} \text{Tr}(A^j)\text{tr}^{(k-j)}\right).
-$$<|endoftext|>
-TITLE: What is the practical difference between a differential and a derivative?
-QUESTION [264 upvotes]: I ask because, as a first-year calculus student, I am running into the fact that I didn't quite get this down when understanding the derivative:
-So, a derivative is the rate of change of a function with respect to changes in its variable, this much I get.
-Thing is, definitions of 'differential' tend to be in the form of defining the derivative and calling the differential 'an infinitesimally small change in x', which is fine as far as it goes, but then why bother even defining it formally outside of needing it for derivatives?
-And THEN, the bloody differential starts showing up as a function in integrals, where it appears to be ignored part of the time, then functioning as a variable the rest.
-Why do I say 'practical'? Because when I asked for an explanation from other mathematicians, I got one involving the graph of the function and how, given a right-angle triangle, a derivative is one of the other angles, where the differential is the line opposite the angle.
-I'm sure that explanation is correct as far as it goes, but it doesn't tell me what the differential DOES, or why it's useful, which are the two facts I need in order to really understand it.
-Any assistance?
-
-REPLY [5 votes]: These answers haven't formalized the objects $dx$, so I'll give my own answer which does. This will be more high level and requires some understanding of linear algebra, group actions, and calculus in more than one variable. There are a few objects we need to make clear: the basis vector $\frac{\partial}{\partial x}$, the projection map $x^i$, and the operator $d$.
-The setup for our story is going to be in affine space $\mathbb{A}^n$, which is defined as $\mathbb{R}^n$ with a simply transitive group action on itself by translations. We say that $\mathbb{A}^n$ is defined over $\mathbb{R}^n$ and think of elements of the former as points and elements of the latter as vectors. The upshot of the words "simply transitive" is that subtracting two points always gives a unique vector. If these words are unfamiliar, feel free to think of $\mathbb{A}^n$ as $\mathbb{R}^n$ + translations. We define an affine subspace $A\subset \mathbb{A}^n$ to be a subset with a simply transitive group action from a linear subspace $V\subset \mathbb{R}^n$. We call $V$ the tangent space of $A$. Note that $\mathbb{R}^n$ comes canonically equipped with a set of basis vectors $\{e_i\}$ with $e_i=(0,...,1,...,0)$ where the $1$ is in the $i$th position. We rename $\frac{\partial}{\partial x^i}:=e_i$. We also have the standard projection maps $x^i:\mathbb{R}^n\to \mathbb{R}$ by remembering only the $i$th coordinate.
We define $dx^i\in (\mathbb{R}^n)^*$ to be the dual to the vector $\frac{\partial}{\partial x^i}$, so $\{dx^i\}$ forms a basis for $(\mathbb{R}^n)^*$. Since we built $\mathbb{A}^n$ on top of $\mathbb{R}^n$, we also get a canonical basis and projection maps on $\mathbb{A}^n$.
-For example, fix $\mathbb{A}^2$ and let $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ denote the basis vectors for the underlying tangent space $\mathbb{R}^2$. The graph of the equation $y=3$ is an affine subspace of $\mathbb{A}^2$ with tangent space the span of $\frac{\partial}{\partial x}\in \mathbb{R}^2$. Another example: the graph of the equation $y=x$ is an affine subspace with tangent space the span of $\frac{\partial}{\partial x}+\frac{\partial}{\partial y}$.
-At this point in our story we require that $\mathbb{R}^n$ has a norm, denoted $\|\cdot\|$, which induces a distance function on $\mathbb{A}^n$ via subtraction. Let $U\subset \mathbb{A}^n$ be an open set, and $p\in U$ a point. We say $f:U\to \mathbb{A}^m$ is differentiable at $p$ if there exists a linear function, which we denote $df_p:\mathbb{R}^n\to \mathbb{R}^m$, which satisfies the following inequality: for every $\epsilon>0$, there exists a $\delta$ such that if $\|\xi\|<\delta$, then
-$$\|f(p+\xi)-f(p)-df_p(\xi)\|<\epsilon \|\xi\|$$
-We say $f$ is differentiable on $U$ if it's differentiable at every $p\in U$. One may call $df_p$ the derivative at $p$ in a multivariable class. Fix codomain $\mathbb{A}^1$. We don't lose generality by restricting the codomain to one dimension because every function $f:\mathbb{A}^n\to \mathbb{A}^m$ can be expressed in coordinate functions $f=(f^1,...,f^m)$ by composing with the projection maps $x^i$. Let me denote by Diff($U$,$\mathbb{A}^1$) the set of differentiable functions between the two, and by Map($U$,$\mathbb{A}^1$) the set of all functions between the two. These two carry a vector space structure, with function addition and scalar multiplication defined in the usual way. We define an operator between these two vector spaces
-$$\mathfrak{d}:\text{Diff}(U,\mathbb{A}^1)\to \text{Map}(U,\mathbb{A}^1)$$
-We require $\mathfrak{d}$ to be linear and to obey the Leibniz rule, namely given $f,g\in \text{Diff}(U,\mathbb{A}^1)$,
-$$\mathfrak{d}(fg)=\mathfrak{d}f \cdot g+ f\cdot \mathfrak{d}g$$
-We call $\mathfrak{d}$ a derivation if it satisfies the above conditions, and we see that the differential operator $d$ is a special case of this. So it turns out that taking $U=\mathbb{A}^n$ and $f=x^i$ the projection map, the $dx^i$ which we defined as the dual basis to $\frac{\partial}{\partial x^i}$ is actually the differential of $x^i$ (I'll leave this to whoever the hell read this far to show). There's a whole lot more to be said and skimmed over. If you're interested, all this information and more comes from here.<|endoftext|>
-TITLE: Is quantum logic producing interesting/different mathematics?
-QUESTION [14 upvotes]: Is quantum logic producing interesting/different mathematics?
-Is it different from the intuitionist approach to mathematics? How?
-
-REPLY [4 votes]: There is an approach to quantum logic where you get topoi with quantum logic. An elementary topos is sometimes regarded as a "place" where you can do mathematics, but where classical logic doesn't necessarily apply. Thus you get a different sort of mathematics.
-I know very little about these quantum topoi, so I cannot detail in what way their mathematics differs from the classical one.
But I think the two articles referenced in the PlanetMath entries below may (or may not - I haven't read them) answer your question.
-http://planetmath.org/encyclopedia/QuantumLogicsTopoi2.html
-http://planetmath.org/encyclopedia/QuantumStateSpace.html<|endoftext|>
-TITLE: Help solving a limit
-QUESTION [8 upvotes]: While helping a friend out for an exam (last year of high school), I found an exercise that neither of us could solve. I've tried a couple of different approaches but nothing seemed to work. Could anyone tell me how to solve it, or at least give some hints? This is the exercise:
-
-Knowing that
-$$\lim_{x \to a}\frac{x^2-\sqrt{a^3x}}{\sqrt{ax}-a}=12$$
-find $a$.
-
-We can assume that $a$ exists and is real.
-
-REPLY [5 votes]: Besides L'Hopital one can simply rationalize the denominator, after discarding the case $\rm\:a \le 0\:,\:$ viz:
-$$\rm \frac{x^2-a\sqrt{ax}}{\sqrt{ax}-a}\ =\ \frac{x^2-a\sqrt{ax}}{\sqrt{ax}-a} \ \frac{\sqrt{ax}+a}{\sqrt{ax}+a}\ =\ \frac{ax\:(x-a)+\sqrt{ax}\ (x^2-a^2) }{a\:(x-a) }\ =\ x+(x+a)\sqrt{\frac{x}{a}}$$
-Since the above $\rm\to 3\ a\ $ as $\rm\ x\to a\ $ the problem reduces to solving $\rm\ 3\ a = 12\:$.<|endoftext|>
-TITLE: Explicit well-ordering of $\mathbb{N}^{\mathbb{N}}$
-QUESTION [11 upvotes]: Is there an explicit well-ordering of $\mathbb{N}^{\mathbb{N}}:=\{g:\mathbb{N}\rightarrow \mathbb{N}\}$?
-I've been thinking about that for a while but nothing is coming to my mind. My best idea is this:
-Denote by $<$ the usual "less than" relation on $\mathbb{N}$. Since $\mathbb{N}^{\mathbb{N}}$ is the set of infinite sequences ${\{x_{n}\}}_{n\in \mathbb{N}}$ with $x_{n}\in \mathbb{N}$, we can define ${\{x_{n}\}}_{n\in \mathbb{N}}\leq ^{\prime }{\{y_{n}\}}_{n\in \mathbb{N}}$ as follows. If $x_{0}$ …<|endoftext|>
-TITLE: Big-O Notation and Asymptotics
-QUESTION [6 upvotes]: I realize that this is not a typical programming question but it's still related. If anyone could help me out I would really appreciate it because I have a midterm coming up and this is the part that I don't understand. This is not a homework problem so don't worry about me trying to get out of my work. I just need someone to explain how to do this in normal plain English instead of whatever my professor is using.
-Let $p(n) = \sum_{i=0}^d a_i n^i$ where $a_i,d > 0$ be a polynomial in $n$ of degree $d$. Use the definitions of the asymptotic notations to prove the following properties:
-a) If $k \geq d$, then $p(n) = O(n^k)$.
-There are also 4 more corresponding to the Omega, Theta, small-o and small-omega properties, but if I could get an idea on how to start I can figure the other ones out on my own. Thanks so much!
-
-REPLY [6 votes]: All students have difficulties when they meet the $O$- and the $o$-notations for the first time. Whenever such an $O$ appears it refers to a pre-agreed limit process for the independent variable, in the case at stake to $n\to\infty$. The statement $p(n)=O(n^k)\ (n\to\infty)$ does not mean that the (usually "complicated") function $p(n)$ is equal to some other function $O(\cdot)$, evaluated at $n^k$. Instead it expresses the (claimed or proven) fact that the function $p(n)$ under study, after division by $n^k$, stays bounded when $n\to\infty$; which is the same thing as saying that $p(n)=b(n)\cdot n^k$ where now $b(n)$ is a bounded function. In your example, each $n^i/n^k$ is $\leq 1$ for $n\geq 1$, so in fact $|p(n)|\leq C\cdot n^k$ with $C:=\sum_{i=0}^d |a_i|$.<|endoftext|>
-TITLE: Derivation of the method of Lagrange multipliers?
-QUESTION [26 upvotes]: I've always used the method of Lagrange multipliers with blind confidence that it will give the correct results when optimizing problems with constraints. But I would like to know if anyone can provide or recommend a derivation of the method at physics undergraduate level that can highlight its limitations, if any.
-
-REPLY [17 votes]: An algebraic way of looking at this is as follows:
-From an algebraic viewpoint, we know how to find the extremum of a function of many variables. Say we want to find the extremum of $f(x_1,x_2,\ldots,x_n)$: we set the gradient to zero and look at the definiteness of the Hessian.
-We would like to extend this idea to the case where we want to find the extremum of a function subject to some constraints. Say the problem is:
-$$\begin{align}
-\text{Minimize }f(x_1,x_2,\ldots,x_n)\\
-\text{subject to: }g_k(x_1,x_2,\ldots,x_n) = 0\\
-\text{where }k \in \{1,2,\ldots,m\}\\
-\end{align}
-$$
-If we find the extremum of $f$ just by setting the gradient of $f$ to zero, these extrema need not satisfy the constraints.
-Hence, we would like to include the constraints in the previous idea. One way to do it is as follows. Define a new function:
-$$F(\vec{x},\vec{\lambda}) = f(\vec{x}) - \lambda_1 g_1(\vec{x}) - \lambda_2 g_2(\vec{x}) - \cdots - \lambda_m g_m(\vec{x})$$
-where
-$\vec{x} = \left[ x_1,x_2,\ldots,x_n \right], \vec{\lambda} = \left[\lambda_1,\lambda_2,\ldots,\lambda_m \right]$
-Note that when the constraints are enforced, we have $F(\vec{x},\vec{\lambda}) = f(\vec{x})$, since then $g_j(\vec{x}) = 0$.
-Let us find the extremum of $F(\vec{x},\vec{\lambda})$. This is done by setting $\frac{\partial F}{\partial x_i} = 0$ and $\frac{\partial F}{\partial \lambda_j} = 0$ where $i \in \{1,2,\ldots,n\}$ and $j \in \{1,2,\ldots,m\}$
-Setting $\frac{\partial F}{\partial x_i} = 0$ gives us $$\vec{\nabla}f = \vec{\nabla}g \cdot \vec{\lambda}$$ where $\vec{\nabla}g = \left[\vec{\nabla} g_1(\vec{x}),\vec{\nabla} g_2(\vec{x}),\ldots,\vec{\nabla} g_m(\vec{x}) \right]$
-Setting $\frac{\partial F}{\partial \lambda_j} = 0$ gives us $$g_j(x) = 0$$ where $j \in \{1,2,\ldots,m\}$
-Hence, we find that when we find the extremum of $F$, the constraints are automatically enforced. This means that an extremum of $F$ corresponds to an extremum of $f$ with the constraints enforced.
-To decide whether the extremum is a minimum or a maximum, or whether the point we obtain by solving the system is a saddle point, we need to look at the definiteness of the Hessian of $F$.<|endoftext|>
-TITLE: Formalizing Those Readings of Leibniz Notation that Don't Appeal to Infinitesimals/Differentials
-QUESTION [15 upvotes]: [disclaimer: I've studied a lot of logic but never been good at analysis, so that's the angle I'm coming from below]
-in my attempt to find a precise version of the 'definitions' usually given when first introducing Leibniz notation in single or multivariable calculus or analysis, wherein there is no appeal to differentials or infinitesimals, I've discovered that I'm confused about a few interrelated low-level issues around formalization and notation, etc. I don't know which questions are the more basic ones here, so I'll just ask them as I go along, explaining what I think I do understand about proposed formal definitions of the sort in question.
-I'm concerned with both the dy/dx notation and the 'del y/del x' notation for partial derivatives, but just the real-valued case.
-Since I suspect that my confusions stem from use-mention confusions, and the confusion of functions, variables, and their values, I will use, and assume familiarity with, lambda notation, metavariables, and quasi-quotation throughout. Where not stated, ⌜λx.φ⌝ refers to the function on the largest real domain on which φ is a real number. Assume only definitions for real variables are sought after below.
-Anyway, on with the definitions:
-These short articles by the author Thurston (the first five results) all give roughly the same formal definition of Leibniz notation:
-http://scholar.google.ca/scholar?hl=en&as_sdt=0,5&q=thurston+leibniz
-whereas the top results for this search are by the author Harrison, and each give one of a few slight variants on a different definition:
-http://www.google.ca/search?q=%22leibniz+notation%22+%22lambda+term%22&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-GB:official&client=firefox-a
-Here is my understanding of their definitions:
-HARRISON:
-⌜dφ/dψ⌝ is a shorthand for ⌜D(λψ.φ)(ψ)⌝ i.e.
-"dy/dx" stands for "D(λx.y)(x)"
-This means that:
-ψ must be a variable of the underlying logic (and it has a free and a bound occurrence here) and
-φ must be a string such that ⌜∀x, φ = f(x)⌝ is true for some function f:S→R where S⊆R.
-Q1. Does anyone have a simpler way to state the restriction on φ?
-Q1.1 What is the proper name for the sort of string φ must be?
-THURSTON:
-⌜dφ/dψ⌝ is a shorthand for ⌜φ'/ψ'⌝ i.e.
-"dy/dx" is short for "y'/x'"
-This means that φ and ψ must be names of functions from the reals to the reals.
-Q1.2 Are there other formal definitions in the literature I should compare to these? I haven't found any yet...
-Harrison's definition does not return a value because a free variable is uninstantiated, in the same sense in which wffs which are not sentences do not return a truth value. Thurston's version, however, returns a function.
-For instance,
-df/dx = f'(x) for Harrison,
-df/dx = f' for Thurston
-More concretely
-d(x²+x)/dx =
-2x+1 for Harrison
-λx.2x+1 for Thurston
-Q2 Is the value '2x+1', which contains a free variable, being returned, or are we implicitly quantifying over x, or, let's call it 'ξ' for the purposes of quasi-quotation, and saying: ∀ξ, ⌜2ξ+1⌝ refers to (not 'is') the value returned?
-However, consider the following 'typical' calculus problem:
-"
-y = f(x)
-x = g(u)
-g(x) = x³ - 7
-find df/dx
-"
-Thurston gets us df/dx = λx.1/(3x²)
-This is undefined on Harrison's approach, since x is not a variable of the logic, but the name of a function.
-Q3 Should I be considering the possibility that the logic allows for variables ranging over functions?
-And if we pretended that it was, we'd still get df/dx = D(λx.f)(x), but this is mal-formed since λx needs something like 'f(x)' rather than 'f' as input.
-And if we added a case to Harrison's definition to append '(x)' or the like when it's missing, we'd still get df/dx = 1, which is not equal to the result we got with Thurston's definition--but more strikingly, the function we evaluated to get 1 was, unlike the f' vs f'(x) case above, not even the same function as was returned by Thurston's definition.
-Q4 What should I conclude from the fact that these definitions diverge in this manner?
-Now consider:
-"
-y = f(x)
-f(x) = x⁹
-find dy/dx
-"
-On Harrison's account, we could view y as a metavariable, so that f(x) is placed substitutionally into the defining string, but I have a feeling that is not the right way to understand it.
However, if y is merely a variable of the logic, it is a free variable in the result, and we end up with one free variable too many.
-On Thurston's account, y must be the name of a function, but "y = f(x)" sets y equal to an expression with a free variable, not equal to the name of a function.
-Q5 Should I view statements like "y=f(x)" as involving a suppressed "(x)" and "∀" so that we get "∀x, y(x)=f(x)"? Or should I see y as a metavariable? Or, should I imagine the logic extended to allow some new syntactic category of 'dependent' variables, while thinking of the usual variables in the logic as 'independent' variables--i.e. those whose value does not depend on others? I think I am very confused about what happens when one variable depends on another.
-Q6 On a related note, I saw a passage recently that spoke in terms like "x(u) is the inverse function of u(x)"--how should this be understood more precisely? I've come to discover that I don't understand expressions of this sort at all!
-Q7 Does either of these definitions clearly capture 'practice' better than the other?
-Q8 How should similar attempts be made for the 'del' notation for partial derivatives?
-Q9 Can someone give me an example of where d/dx and del/del x return different values on the same input? If I'm not mistaken, in some formalizations this never happens, and in other formalizations it does--I think Harrison's would not allow for this since it just returns an 'expression' rather than one of the various functions that can be formed by an expression when you apply a lambda operator to it.
-I also started trying to read this article on revising partial derivative notation:
-[I've hit my link limit as I'm new here, but google "revised notation for partial derivatives" (with quotes). It's by WC Hassenpflug]
-but I got stuck on the sentence:
-"If we have a function u = f(x,y) and the transformation y=g(x,n) is made, then it is not clear whether del u / del x means del u / del x |y or del u / del x |n"
-Can someone explain that one to me?
-Q10 This all bears some superficial similarity to the relationship between so-called 'random variables', which are actually functions, and what are called 'variables' in the underlying logic--this has also confused me, and I see many operations done in texts on random variables where the operators have only been defined for values in the random variable's domain, and not on functions. Can anyone comment on this? It would be nice if I could dismiss two long-standing confusions with one stone :p
-
-REPLY [2 votes]: I do not propose this as a proper answer, but it is too long for a comment.
-The same problem was troubling me some time ago. IMHO Leibniz notation may be put on formal ground, but the context where it works is rather different. Something like: calculus is done in a "diagram". A diagram is a set of variables and assignments
-variable→set
-(list variable)×variable→function
-Functions go between variables. The diagram commutes in the category-theory sense. So, given a variable v, we can get a function to v from the diagram. This allows one to "differentiate a variable". (We can also define a diagram as a product-preserving functor from a finite poset category with products to Set.) Every expression involving operations like sum and product creates a new variable v and a new function …→v. Likewise the differential operator. This allows one to speak formally about partial derivatives as well, prove the chain rule, etc.
However, I did not pursue my idea further and never came across anything like it in the literature.<|endoftext|>
-TITLE: How does the method of Lagrange multipliers fail (in classical field theories with local constraints)?
-QUESTION [11 upvotes]: The method of Lagrange multipliers is used to find the extrema of $f(x)$ subject to the constraints $\vec g(x)=0$, where $x=(x_1,\dots,x_n)$ and $\vec g=(g_1,\dots,g_m)$ for $m \leq n$.
-Although many textbooks get the final equations by arguing that at an extremum, the variation of $f(x)$ must be orthogonal to the surface $g(x)=0$, the "simpler" approach (and that which is commonly seen in field theory / optimizing functionals) is to construct the Lagrange function
-$$ L(x,\lambda) = f(x) + \vec\lambda\cdot\vec g(x) $$
-and vary w.r.t. $x$ and $\lambda$ to get the vector equations
-$$
-\begin{align}
- &x:& 0 &= \nabla f(x) + \sum_i \lambda_i \nabla g_i(x) \,, \\
- &\vec \lambda:& 0 &= \vec g(x) \ .
-\end{align}
-$$
-The method only works if the extremal point is a regular point of the constraint surface, i.e. if $\mathrm{rnk}(\nabla\vec g) = m$.
-What is the best way of understanding what goes wrong when the extremum is not a regular point of the constraint?
-And, most importantly to me, how does this generalize to field theories (i.e. optimizing functionals) with local constraints? What is the equivalent regularity condition for constraints in field theory?
-Instructive examples are more than welcome.
-
-REPLY [13 votes]: Generically, the $m$ equations $g_i(x)=0$ define a manifold $S$ of dimension $d:=n-m$. At each point $p\in S$ the $m$ gradients $\nabla g_i(p)$ are orthogonal to the tangent space $S_p$ of $S$ at $p$. The condition rnk$(\nabla g(p))=m$ means that these $m$ gradients are linearly independent, so that they span the full orthogonal complement $S_p^\perp$, which has dimension $m=n-d$. At a conditionally stationary point $p$ of $f$ the gradient $\nabla f(p)$ is in $S_p^\perp$, and if the rank condition is fulfilled, there will be constants $\lambda_i$ such that $\nabla f(p)=\sum_{i=1}^m \lambda_i\nabla g_i(p)$. In this case the given "recipe" will find the point $p$.
-Consider now the following example where the rank condition is violated: The two constraints
-$$g_1(x,y,z):=x^6-z=0,\qquad g_2(x,y,z):=y^3-z=0$$
-define a curve $S\subset{\mathbb R}^3$ with the parametric representation $$S: \quad x\mapsto (x,x^2,x^6)\qquad (-\infty < x <\infty).$$
-The function $f(x,y,z):=y$ assumes its minimum on $S$ at the origin $o$. But if we compute the gradients
-$$\nabla f(o)=(0,1,0), \qquad \nabla g_1(o)=\nabla g_2(o)=(0,0,-1),$$
-it turns out that $\nabla f(o)$ is not a linear combination of the $\nabla g_i(o)$. As a consequence Lagrange's method will not bring this conditionally stationary point to the fore.
-$$ -Letting $x \to \infty$, the left-hand side converges to $0$; hence $x f'(x) \to 0$ too.<|endoftext|> -TITLE: Showing two definitions of a binomial coefficient are the same -QUESTION [6 upvotes]: I have a homework question where we have to prove the following definitions of a binomial coefficient are equal, algebraically. - -This is what I got so far, and it's getting pretty complicated. And I could use some directions on how to continue. At the moment I think I'm just doing it wrong from the start and I'm overcomplicating things. But I'm not quite sure how to continue, because I'm just staring at these numbers and can't continue. -I have writting this in Microsoft Word, so I'll use images to show what I've got so far, because writing this all again in Latex is tedious. - -Also this is homework for me, so please don't give me the answer yet, because I know I could just look this up on the internet. I'd like to prove it myself for most of the part, but since I'm stuck I was wondering if someone could give me a slight hint on what to do. - -REPLY [5 votes]: You were on the right track, but you haven't chosen the easiest/lowest common denominator. -As you've written, -$${{n-1} \choose {k-1}}+{{n-1} \choose k}=\frac{(n-1)!}{(n-k)!(k-1)!}+\frac{(n-1)!}{(n-k-1)!k!}$$ -Isn't one of the denominator a multiple of the other one ? -Remember that $(n+1)!=(n+1)\cdot n \cdots 2\cdot 1=(n+1)\cdot n!$. -Note that you can also prove this using a combinatorial proof, which will give you a more intuitive idea of why the equality holds.<|endoftext|> -TITLE: Example of Hausdorff space $X$ s.t. $C_b(X)$ does not separate points? -QUESTION [11 upvotes]: We know the Stone-Weierstrass theorem for locally compact Hausdorff spaces (LCH) which states the following: - -Theorem: Suppose $X$ is LCH. A subalgebra $\mathcal{A}$ of $C_0(X)$ is dense if and only if it separates points ($\forall x,y \in X : x\neq y \implies \exists f \in \mathcal{A}: f(x) \neq f(y)$) and vanishes nowhere $(\forall x \in X \exists f \in \mathcal{A} : f(x) \neq 0$) and is closed under complex conjugation1. - -It's also easy to show that $C_b(X)$, the continuous and bounded functions on a topological space $X$, is a Banach space2. It would be obvious to try and show a variant of Stone-Weierstrass for $C_b(X)$ where $X$ is merely Hausdorff, and since it's obviously dense in itself, for such a theorem to exist it would be required that $C_b(X)$ separates points (that it vanishes nowhere is clear: it contains the constant functions). -I've tried to come up with an example of a Hausdorff space where $C_b(X)$ fails to separate points (this could for example be a space where every non-constant continuous function is unbounded), but to no avail. So does anyone happen to have an instructive example lying around? -PS: It's an interesting exercise to prove that given a LCH space $X$ the continuous functions with compact support, $C_{00}(X)$, separates points and vanishes nowhere. You get to use many theorems from topology in the process of constructing a continuous function such that $f(x)=1$ and $f(y)=0$ given distinct points $x,y \in X$ [Hint: Find a good compact subset and use Urysohn's lemma]. -[1]: In the case of real-valued functions this is essentially a no-op, so including it in the theorem statement doesn't hurt. -[2]: [Car00] shows in Lemma 10.8 that $B(X)$, the set of bounded functions on a set $X$ is a Banach space, and it's an immediate corollary from Thm 10.4 that $C_b(X)$ is closed in $B(X)$ for $X$ a metric space. 
Of course this generalises readily to the case of $X$ a topological space. - -REPLY [2 votes]: I recommend this short 1971 note of T.E. Gantner. He first proves: -Theorem: Each regular space $Z$ can be embedded as a subspace of a regular space $Q(Z)$ such that every continuous, real-valued function on $Q(Z)$ is constant on $Z$. -He applies the theorem as follows: take $X_0$ to be a one-point space, and inductively, for all $n \in \mathbb{N}$, define $X_{n+1} = Q(X_n)$. Let $X = \lim_{n \rightarrow \infty} X_n$, endowed with the direct limit (or final) topology. Then $X$ is an infinite regular space on which every continuous real-valued function is constant.<|endoftext|> -TITLE: Equivalence relation on a proper class -QUESTION [13 upvotes]: We define cardinality as an equivalence relation on sets. But the class of all sets is not a set, so how do we do that? In particular, I'm interested in the proposition that equivalence classes form a partition of the initial set. It seems like it can be translated to cardinality, but I do not know how, at least in ZFC (and I don't even know ZFC :)) - -REPLY [8 votes]: We can of course define cardinality as you say: Two sets are equipotent (or have the same cardinality) iff there is a bijection between them. -You can prove directly that this notion is reflexive, symmetric and transitive. For example, the last statement is: For any sets $A,B,C$, if there is a bijection from $A$ to $B$ and a bijection from $B$ to $C$, then there is a bijection from $A$ to $C$. Note that this does not require that we reference directly the collection of equivalence classes or even that we consider a single equivalence class as a given object. What I mean is: We can do all that we need to do without talking about proper classes or collections of proper classes. -What would be the statement corresponding to "the equivalence classes of an equivalence relation on a non-empty set form a partition of the set into non-empty subsets"? Simply the conjunction of the following two statements: - 1. "Cardinality is defined for all sets", which simply means that given any two sets $A$ and $B$, the statement "$A$ and $B$ have the same cardinality" is meaningful, and is either true or false. But of course that is the case, since $A$ and $B$ have the same cardinality iff there is a bijection between them, this is the very definition. In fact, "$A$ and $B$ have the same cardinality" is simply a linguistic shortcut for "there is a bijection between $A$ and $B$". - 2. "Given two sets $A$ and $B$, if there is a set $C$ such that $A$ and $C$ have the same cardinality and also $B$ and $C$ have the same cardinality, then so do $A$ and $B$." And this can be easily proved in the expected way. -All of this can be easily formalized in set theory (ZFC or even much weaker systems). Again, the point is that there is no need to directly argue about proper classes or collections of classes (but, if you want, then there are also appropriate set theories, such as MK, where this is possible).<|endoftext|> -TITLE: Can we make $\tan(x)$ arbitrarily close to an integer when $x\in \mathbb{Z}$? -QUESTION [23 upvotes]: My 7-year-old son was staring at the graph of tan() and its endlessly-repeating serpentine strokes on the number line between multiples of $\pi$ and he asked me the question in the title. More precisely, is the following true or false? -For any $\epsilon > 0$, there exists some $N \in \mathbb{Z}^+$ such that -$|\tan(N)-\lfloor \tan(N) \rceil| < \epsilon$. 
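-A quick numerical experiment (only a sketch, with no proof content) suggests the answer is yes; it prints each $N$ that sets a new record for closeness to an integer:
-
-    import math
-
-    best = 1.0
-    for N in range(1, 10**6):
-        err = abs(math.tan(N) - round(math.tan(N)))
-        if err < best:      # new record-holder
-            best = err
-            print(N, math.tan(N), err)
-
-The record errors keep shrinking, which is what the answer below makes precise.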
-
-REPLY [8 votes]: Fairly random (and definitely unrelated) web surfing turned up this nice short paper of Cheng and Zheng which proves in a very constructive way that if $f: \mathbb{R} \rightarrow \mathbb{R}$ is any continuous function which is periodic with irrational period, then $f(\mathbb{N})$ is dense in $f(\mathbb{R})$.
-The ideas are first exhibited with respect to the function $f(x) = \sin x$.
-Your question is not literally a special case of this, since $\tan x$ is not continuous on all of $\mathbb{R}$. Nevertheless I think it is close enough for the same methods to be carried over. (For instance, $\tan x$ can be viewed as a continuous function with values in $\mathbb{R} \mathbb{P}^1 \cong S^1$.) In any case the relation here is close enough that I thought the paper would be of interest to readers of this question.<|endoftext|>
-TITLE: Projective module over $R[X]$
-QUESTION [7 upvotes]: Let $(R,m)$ be a commutative noetherian local ring with unity. Suppose $P$ is a finitely generated projective module over $R[X]$ of rank $n$. Is $P$ free? If not, what is a counterexample?
-
-REPLY [6 votes]: Here is some elaboration on the wiki entry in George's comment.
-Suppose $R$ is a domain. $R$ is called seminormal if whenever $b^2=c^3$ in $R$ one can find $t \in R$ such that $b=t^3, c=t^2$.
-The relevant thing here is the following fact:
-
-R is seminormal if and only if $Pic(R) \cong Pic(R[X])$
-
-So if $R$ is local and not seminormal then there will be a projective, non-free $R[x]$-module of rank $1$.
-As for an explicit example, take $R = k[t^2,t^3]_{(t^2,t^3)}$. One can check that $I = (1-tx, t^2x^2)$ is an invertible (fractional) ideal of $R[x]$ which is non-free.
-UPDATE: by request, a reference is this survey, see page 16. I am sure you can find more by googling the relevant terms.<|endoftext|>
-TITLE: Splitting field of $x^{n}-1$ over $\mathbb{Q}$
-QUESTION [9 upvotes]: From I. N. Herstein's Topics in Algebra, Chap. 5, Sec. 5.3, Page 227, Problem 8
-
-Problem 8: If $n>1$ prove that the splitting field of $x^{n}-1$ over the field of rational numbers is of degree $\Phi(n)$ where $\Phi$ is the Euler $\Phi$-function. (This is a well-known theorem. I know of no easy solution, so don't be disappointed if you fail to get it. If you get an easy proof, I would like to see it.)
-
-First, I would like to see a proof of this result. Next, I think I have seen this proof in Dummit and Foote's Abstract Algebra book, but I am not sure. Anyway, the next question is: Has an easy solution been found to this problem? If not, I would like to know what efforts have been made to make the proof simpler. And why does Herstein think an easy solution can exist?
-
-REPLY [8 votes]: I think the difficulty is proving that the $n$th cyclotomic polynomial is irreducible. Wikipedia says it's a non-trivial result. This gives a factorization of $x^n-1$ as the product of all cyclotomic polynomials $\Phi_d$ for $d$ dividing $n$.<|endoftext|>
-TITLE: Why can we think of the second fundamental form as a Hessian matrix?
-QUESTION [5 upvotes]: Let $f: U \rightarrow \mathbb{R}^3$ be an immersion that parametrizes a piece of a surface, and let $(h_{ij})$ be the matrix for the second fundamental form of that surface.
-According to pg. 70 of the text Differential Geometry by Wolfgang Kuhnel, we can think of the $(h_{ij})$ as "the Hessian matrix of a function $h$, which represents the surface as a graph over its tangent plane".
-I have a "heuristic" understanding of what's going on, but I'd like to be a bit more careful about this. What exactly is the function $h$? Can we write it down explicitly (perhaps in terms of the parametrization $f$, the unit normal $\nu$, and their derivatives), so that we can directly check that its Hessian is indeed the second fundamental form $(h_{ij})$?
-
-REPLY [2 votes]: Thanks Ryan! So in conclusion, if we choose coordinates such that the surface is locally described by $f(u,v) = (u,v,h(u,v))$, then the second fundamental form at the point $f(0,0)$ is precisely the Hessian of $h$ at $(0,0)$.<|endoftext|>
-TITLE: How is $3 + 4\cos \theta + \cos 2\theta \geq 0$ related to $\zeta(s)^3|\zeta(s + it)^4\zeta(s + 2it)| \geq 1$?
-QUESTION [8 upvotes]: The inequality
-$$\zeta(s)^3 | \zeta(s + it)^4 \zeta(s + 2it)| \ge 1$$
-follows from
-$$3 + 4 \cos(\theta) + \cos(2 \theta) \ge 0.$$
-How is that done? What is the relationship between zeta and trigonometry?
-
-REPLY [16 votes]: Just in case, everyone recall the notation $s=\sigma+it$ to denote complex numbers.
-The inequality you are looking for is
-$$|\zeta(\sigma)^3 \zeta(\sigma + it)^4 \zeta(\sigma + 2it)| \geq 1$$
-for $\sigma>1$. From this we can prove the Prime Number Theorem (by using a contour integral and some interchanging of limits) because it shows that the zeta function has no zeros on the line $\sigma=1$. To see why, compare the orders of the zeros and poles as $\sigma\rightarrow 1$: a zero at $1+it$ would force a pole to exist at $1+2it$, but that is impossible since the only pole of $\zeta(s)$ is at $s=1$.
-So why does it follow from the trigonometric identity
-$$3 + 4 \cos(\theta) + \cos(2 \theta) \ge 0?$$
-First take logarithms. Then the above is equivalent to showing that $$3\log|\zeta(\sigma)|+4\log|\zeta(\sigma+it)|+\log|\zeta(\sigma+2it)|\geq 0.$$
-Recall that $\log|z|=\Re(\log z)$ and $\Re(z)+\Re(w)=\Re(w+z)$ where $\Re(z)$ denotes the real part of $z$. Hence we need to show that
-$$\Re \left(3\log\zeta(\sigma)+4\log\zeta(\sigma+it)+\log\zeta(\sigma+2it)\right) \geq 0.$$
-Now since
-$$\log(\zeta(s))=\sum_{n=1}^\infty \frac{\Lambda(n)}{\log n}n^{-s}$$
-when $\sigma>1$, where $\Lambda(n)$ is the Von Mangoldt Lambda Function, grouping the sums changes the left hand side into
-$$\Re \left(\sum_{n=1}^\infty \frac{\Lambda(n)}{\log n}\left( 3n^{-\sigma}+4n^{-\sigma-it}+n^{-\sigma-2it} \right) \right).$$
-Since everything is sufficiently convergent, we can bring $\Re$ inside the summation to get
-$$\sum_{n=1}^\infty \frac{\Lambda(n)}{\log n}\Re\left( 3n^{-\sigma}+4n^{-\sigma-it}+n^{-\sigma-2it} \right).$$
-Now, notice $\Re(n^{-x-iy})= n^{-x}\cos(y\log n)$, so that the above becomes
-$$\sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-\sigma}\left(3+4\cos(t\log n)+\cos(2t\log n) \right).$$
-Lastly, the term above must be greater than or equal to zero since every term in the summation is non-negative. Thus we conclude
-$$\Re \left(3\log\zeta(\sigma)+4\log\zeta(\sigma+it)+\log\zeta(\sigma+2it)\right) \geq 0$$
-and hence
-$$|\zeta(\sigma)^3 \zeta(\sigma + it)^4 \zeta(\sigma + 2it)| \geq 1$$
-as desired.
-Hope that helps,<|endoftext|>
-TITLE: $(p\!-\!1\!-\!h)!\,h! \equiv (-1)^{h+1}\!\!\pmod{\! p}\,$ [Wilson Reflection Formula]
-QUESTION [10 upvotes]: Suppose that $p$ is a prime. Suppose further that $h$ and $k$ are non-negative integers such that
-$h + k = p − 1$.
-I want to prove that $h!k! + (−1)^h \equiv 0 \pmod{p}$.
-My first thought is that by Wilson's theorem, $(p-1)!
\equiv -1 \pmod{p}$, and $h!k!$ divides $(p-1)!$ (definition of a binomial). Where would I go from here? - -REPLY [7 votes]: Wilson's theorem $\Rightarrow$ any complete system of representatives $\,r_i\,$ of $\rm\color{#c00}{nonzero}$ remainders mod $\,p\,$ has product $\equiv -1,\,$ by $\,r_i\equiv i\,\Rightarrow\,\displaystyle \prod_{i=1}^{p-1} r_i\equiv \prod_{i=1}^{p-1} i \equiv (p-1)!\equiv -1\,$ by inductive extension of Congruence Product Rule. In particular this is true for any sequence of $\,p\,$ consecutive integers, after removing its unique $\rm\color{#c00}{multiple}$ of $\,p.\,$ Your special case is the sequence $$\, \underbrace{\color{#90f}{-h},\,-h\!+\!1,\ldots,\color{#0a0}{-1}}_{\!\!\textstyle\equiv\,\color{#90f}{k\!+\!1},\,k\!+\!2,\cdots,\color{#0a0}{p\!-\!1}}\!\!\!\!,\require{cancel}\color{#c00}{\cancel{0,}} 1,2,\ldots, k\ \ \ \text{whose product is}\,\ \ (-1)^h h!\,k!\equiv -1\qquad$$ -since $\,\color{#90f}{-h\equiv k\!+\!1}\,$ by $\,h\!+\!k\!+\!1\equiv p\equiv 0$ -Remark $\ $ This is slight reformulation of the Wilson reflection formula mentioned yesterday -$$ k! = (p\!-\!1\!-\!h)! \equiv \frac{(-1)^{h+1}}{h!}\!\!\pmod{\! p},\,\ \ 0\le h< p\ {\rm prime}\qquad $$<|endoftext|> -TITLE: Quotient Space $\mathbb{R} / \mathbb{Q}$ -QUESTION [14 upvotes]: I've just learned about topological quotient spaces and was wondering if anyone can help me with this example I thought of. -Let $(\mathbb{Q}, +)$ be the usual group of rational numbers for addition, likewise $(\mathbb{R}, +)$. Set $S$ to be the set of all cosets, t.i. $S=\mathbb{R}/\mathbb{Q}=\{x + \mathbb{Q} \mid x \in \mathbb{R} \}$. What is the quotient space $\mathbb{R} / S$ like? ($\mathbb{R}$ is equipped with the regular euclidian topology) What is it homeomorphic to? What does a typical open set look like? -Thanks. - -REPLY [4 votes]: Since stackexchange is being silly and I can't seem to comment on my own question - I'll post this as an answer. -I'm thinking the topology is trivial on the set $S$. Since if the set $U$ is open in $\mathbb{R} / S$ then it's preimage of $q$ (where $q$ is quotient mapping) must be open in $\mathbb{R}$, meaning there exists an open interval $J \subseteq q^{-1}(U)$. But $q(J)$ equals all of the cosets in $\mathbb{R} / S$. Am I right?<|endoftext|> -TITLE: Prove that $1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n}$ is not an integer -QUESTION [5 upvotes]: Possible Duplicate: -Is there an elementary proof that $∑_{k=1}^n 1/k$ is never an integer? - -Hello, - -Prove that $1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n}$ is not an integer. - -I tried to prove by induction on $n$, but I was stuck :( -Assume $1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n} = \frac{a}{b}$ for some integers $a, b$ and $a \neq b \text{and} b \neq 0$ -Then $ 1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n + 1} = \frac{a}{b} + \frac{1}{n + 1}$ -Then how can I prove that this expression is not integer? A hint would be greatly appreciated. -Thanks, -Chan - -REPLY [11 votes]: HINT: There is always a prime between $\frac{n}{2}$ and $n$, $\forall n \geq 4$ - -REPLY [5 votes]: Hint: look at the largest power of 2 less than $n$. Can it get canceled out from the denominator?<|endoftext|> -TITLE: Difference between pairwise distinct and unique? -QUESTION [32 upvotes]: I've come across the term "pairwise distinct" in many research papers. But, I don't understand how it differs from just saying that the elements of a set are unique instead of saying that they are pairwise distinct. 
-Can someone please explain the difference, if any, to me? - -REPLY [27 votes]: The first problem is that, set-theoretically, what matters is only whether an element is in a set or not. That is, for sets $A$ and $B$, -$$A=B\Longleftrightarrow\mbox{for all $x$, $x\in A$ if and only if $x\in B$.}$$ -What this means, among other things, is that the set $\{1,1\}$ is equal to the set $\{1\}$, because every element in the first is in the second and vice-versa. The fact that $1$ shows up twice in the first set is completely immaterial and irrelevant, the two sets are equal. So there is no way, set-theoretically, to say that $1$ only shows up once in the second set but shows up twice in the first. -So it doesn't really make sense to say that "elements of a set are unique." -Instead, you really want to talk about either multisets (which introduces its own complications), or else you want to talk about ordered tuples and say that entries with distinct indices should be distinct. Something like: -$$(a_i)_{i\in I}\text{ and }a_i\neq a_j\text{ if }i\neq j.$$ -But this also introduces complications of its own, such as having to introduce an index set, not to mention lots of extra words. -So instead what we want to say is that given any pair of elements (and we want to say "pair", because in mathematics, if you simply say "any $x$ and $y$", you do not exclude the possibility that you selected the same element twice), the two elements are different. And this is what "pairwise distinct" means: every pair of elements consists of two different things. -Another potential problem is that the term "unique" is usually reserved for a different kind of meaning. For example, the Chinese Remainder theorem says that a certain system of congruences has solutions, and that the solutions are "unique modulo $M$" (where $M$ is a certain integer defined in terms of the hypothesis). That use of "unique" means that if you find any two solutions, then they are the same solution modulo $M$. That usage would be at odds with using unique in order to say "these list of things doesn't have any repeats".<|endoftext|> -TITLE: Div, curl and linear algebra -QUESTION [9 upvotes]: I came across this post lying dormant on some online forum. I am putting it here verbatim, it seems to me worth a lot. - -By Prof. S. D. Agashe, IIT Bombay -(Source: Vector Calculus, by Durgaprasanna Bhattacharyya, University -Studies Series,Griffith Prize Thesis, 1918, published by the University -of Calcutta, India, 1920, 90 pp) -Chapter IV: The Linear Vector Function, article 15, p.24: -"The most general vector expression linear in $r$ can contain terms only -of three possible types, $r$, $(a\cdot r)b$ and $c\times r$, $a$, $b$, $c$ being constant unit vectors. Since $r$, $(a\cdot r)b$ and $c\times r$ are in general non-coplanar, it follows from the theorem of the parallelepiped of vectors that the most general linear vector expression can be written in the form $\lambda \cdot r + \mu (a\cdot r)b + \nu (c\times r)$, where $\lambda, \mu, \nu$ are scalar constants". -Bhattacharyya does not prove this. Has anyone seen a similar result and its proof? -Bhattacharyya uses this to show that the divergence of the linear -function is ($3 \lambda + a\cdot b$), that the curl is ($a \times b + 2c$). He goes on to define div and curl of a differentiable function as the div and curl of the (linear) derivative function. The div and curl of a linear function are defined in terms of certain surface integrals. 
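-(Bhattacharyya's claim is easy to sanity-check numerically. The following sketch is my own, uses numpy, follows the eigenvalue construction spelled out in the answer below, and only handles the generic case where the symmetric part has distinct eigenvalues:)
-
-    import numpy as np
-
-    def decompose(A):
-        # write A = lam*I + outer(a, b) + B with B skew, so that
-        # A r = lam*r + (b.r)*a + c x r for the vector c dual to B
-        S = (A + A.T) / 2                  # symmetric part
-        K = (A - A.T) / 2                  # skew part
-        w, Q = np.linalg.eigh(S)           # eigenvalues in ascending order
-        order = [2, 0, 1]                  # largest, smallest, middle
-        D, Q = w[order], Q[:, order]
-        lam = D[2]                         # the middle eigenvalue
-        mu = D[0] - lam                    # > 0 in the generic case
-        nu = np.sqrt((lam - D[1]) / mu)
-        a = Q @ (mu * np.array([1.0, nu, 0.0]))
-        b = Q @ np.array([1.0, -nu, 0.0])
-        K_hat = mu * np.array([[0.0, nu, 0.0], [-nu, 0.0, 0.0], [0.0, 0.0, 0.0]])
-        return lam, a, b, Q @ K_hat @ Q.T + K
-
-    A = np.random.randn(3, 3)
-    lam, a, b, B = decompose(A)
-    assert np.allclose(A, lam * np.eye(3) + np.outer(a, b) + B)
-    assert np.allclose(B, -B.T)            # B is skew, i.e. B r = c x r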
-I am excited about this result because it seems to provide an excellent -route to div and curl, as Bhattacharyya himself remarks. -Sorry for a rather long and "technical" communication. - -REPLY [7 votes]: The claim is true. Any $3\times3$ matrix can be expressed as -$$ -A= \lambda I+ a b^T + B -$$ -where $\lambda$ is real, $a$ and $b$ are 3-vectors and $B$ is skew -(so that $Bx=c\times x$ for some vector $c$). -To prove this, choose an orthogonal matrix -$Q$ to diagonalize the symmetric part of $A$. -Then $Q^TAQ=D+K$ where $D$ is diagonal and $K$ is skew. -If the diagonal entries of $D$ are not all distinct then it is -easy to write -$D=\lambda I+\hat a \hat b^T$ and we finish as below. -If the entries are all distinct, -we can suppose that $Q$ was chosen so that -the largest eigenvalue of $D$ is first, the smallest -second and the middle last. Then for some positive -$\mu$ and $\nu$, the matrix -$D$ can be written -$$ -D = \lambda I + \mu -\begin{pmatrix} -1 & 0 & 0\cr 0 &-\nu^2&0\cr -0&0&0 -\end{pmatrix} -=\lambda I + \hat a\hat b^T+\hat K, -$$ with $$ - \hat a= \mu\begin{pmatrix}1\cr \nu\cr0\end{pmatrix}, -\quad -\hat b= -\begin{pmatrix} 1 \cr -\nu\cr 0 \end{pmatrix}, -\quad \hat K=\mu\begin{pmatrix} -0&\nu&0\cr -\nu & 0&0\cr 0&0&0 -\end{pmatrix} . -$$ -Let $a=Q\hat a$, $b=Q\hat b$, and $B=Q(\hat K+K)Q^T$, -and you're done.<|endoftext|> -TITLE: Find all invertible $n\times n$ matrices $A$ such that $A^2 + A = 0$ -QUESTION [5 upvotes]: This was a question on one of our practice midterms: -Find all invertible $n \times n$ matrices $A$ such that $$A^2 + A = 0.$$ -I was told to expand $A^2$ and then solve, but that seems like a really ugly (and hard-to-generalize) solution... are there any better ones? - -REPLY [18 votes]: If $A^2 + A = 0$, then the minimal polynomial of $A$ divides $t^2+t = t(t+1)$. That means that the characteristic polynomial of $A$ must be of the form $(-1)^nt^k(t+1)^r$, where $k+r=n$. $\lambda=0$ cannot be a root, though (since you specify that $A$ is invertible, so $0$ is not an eigenvalue), so the characteristic polynomial is necessarily $(-1)^n(t+1)^n$. -But that means that the minimal polynomial of $A$ must be $t+1$ (since it must divide the characteristic polynomial, and also $t(t+1)$). This implies that $A+I=0$, hence $A=-I$. -Added. If we drop the requirement that $A$ be invertible, then the invertible case proceeds as above. In the noninvertible case, the minimal polynomial is $t(t+1)$ or $t$. If it is $t$, then $A=0$. If the minimal polynomial is $t(t+1)$, then the matrix is diagonalizable (since the minimal polynomial splits and is square free), and the only eigenvalues are $0$ and $-1$. So the matrix $A$ is similar to a diagonal matrix in which every diagonal entry is $0$ or $-1$. -In summary, if we drop the requirement that $A$ be invertible, then there are $n+1$ similarity classes for possible $A$'s, each similarity class corresponding to a diagonal matrix that has $k$ diagonal entries equal to $0$, followed by $n-k$ entries equal to $-1$s in the diagonal, $k=0,\ldots,n$. - -REPLY [10 votes]: Note that -$$A^2+A=0\iff A^2=-A\iff A=-I$$ -where $I$ is the identity matrix, and where we have used that $A$ is invertible in going from $A^2=-A$ to $A=-I$ (we multiplied by $A^{-1}$ on both sides). So the only solution is $A=-I$. 
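-(To see the classification above concretely, here is a small numerical sanity check; a sketch with numpy, independent of the proof:)
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    n = 4
-    for k in range(n + 1):
-        # anything similar to diag(0,...,0,-1,...,-1) satisfies A^2 + A = 0
-        D = np.diag([0.0] * k + [-1.0] * (n - k))
-        S = rng.standard_normal((n, n))    # generically invertible
-        A = S @ D @ np.linalg.inv(S)
-        assert np.allclose(A @ A + A, np.zeros((n, n)))
-    # and in the invertible case (k = 0) conjugating -I changes nothing:
-    S = rng.standard_normal((n, n))
-    assert np.allclose(S @ (-np.eye(n)) @ np.linalg.inv(S), -np.eye(n))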
- -REPLY [3 votes]: It is given that $A$ is invertible and hence multiply by $A^{-1}$ to get $$A^{-1}A^2 + A^{-1}A = 0 \Rightarrow A + I =0 \Rightarrow A = -I$$<|endoftext|> -TITLE: Find the remainder of $2^{11}$ by $23$ -QUESTION [6 upvotes]: My attempt was: -By Fermat's little theorem: -$$2^{22} \equiv 1 \pmod{23}$$ -$$(2^{11})^2 \equiv 1 \pmod{23}$$ -I checked with my calculator the remainder is actually $1$. However, I wonder if I can take the square root on both sides of congruence. Any idea? -Thanks, - -REPLY [4 votes]: Hint $\rm\ \bmod\ 23\!:\ \ 2 \equiv 5^{\large2}\, \Rightarrow\, 2^{\large11} \equiv 5^{\large 22} \equiv 1\ $ by Fermat's little Theorem. -See Euler's Criterion and quadratic reciprocity to understand what happens generally, and see the Remark here for the analog with higher power residues. -Regarding square-roots, $\rm\ x^2 = a^2\ \iff\ (x-a)\ (x+a) = 0\ \iff\ x = \pm\: a\ \ $ holds true in any integral domain, i.e. it's true in any ring without zero-divisors. More concretely, in $\rm\ \mathbb Z/p\:,\: $ we have prime $\rm\ p\ |\ (x-a)\ (x+a)\ \Rightarrow\ p\ |\ x-a\ $ or $\rm\ p\ |\ x+a\:,\: $ so $\rm\ x \equiv \pm\: a\ \ (mod\ p)\:.$<|endoftext|> -TITLE: Lebesgue integral uniform convergence -QUESTION [8 upvotes]: Let $f_n, f \colon [a,b] \to \mathbb{R}.$ -Show that, if $f_n \to f$ uniformly, then the Lebesgue integrals are equal, i.e. $\int f = \lim \int f_n$. - -This is clearly true for continuous functions, but how do I handle the case of non-continuous functions? - -REPLY [11 votes]: Do you know how to do this argument for Riemann integrable functions? It's the same one for Lebesgue integrable functions. Hint: if $f_n \rightarrow f$ uniformly, then for all $\epsilon > 0$ and all sufficiently large $n$, $|f_n(x) - f(x)|$ is measurable and less than $\epsilon$ for all $x \in [a,b]$. What does that tell you about the Lebesgue -integral $\int_{[a,b]} f_n -f$? -Note that the hypothesis that the total measure of the space be finite is essential. It is easy to construct a sequence of integrable continuous functions $f_n: [0,\infty) \rightarrow [0,\infty)$ which converges uniformly to $0$ but so that $\int_{[0,\infty)} f \rightarrow \infty$. - -REPLY [8 votes]: It follows from -$\left| \int f \, dm - \int f_n \, dm \right| \le \int \left|f - f_n\right| \, dm \le (b-a) \Vert f - f_n \Vert_\infty$<|endoftext|> -TITLE: What is the probability that $\pi(x) + x$ is injective? -QUESTION [11 upvotes]: Let $S$ be a finite group with operator + and $\pi$ be a permutation on $S$. Then what is the probability that $\pi(x) + x$ is injective over choices of $\pi$? -The concrete instantiation I'm interested in is $S=$GF$(2^n)$ for fixed $n > 0$. (Computer Scientists call this "xor on $n$-bit strings.") -For $n=1$ we have two permutations, neither of which produce an injection. For $n=2$ we have 24 permutations, 8 of which induce an injection so the probability is 1/3. -Here is an example $\pi$ for $n=2$: -$$\pi(0)=0,\ \pi(1) = z,\ \pi(z)=z+1,\ \pi(z+1)=1.$$ -Here notice that $\pi(x) + x$ produces $0$, $z+1$, $1$, and $z$, respectively, which is an injection. - -REPLY [5 votes]: There is a name for what you describe: it's called an orthomorphism. If $S$ is the cyclic group of order $n$, then the number of orthomorphisms for $n=1,3,5,...$ is $1, 3, 15, 133, 2025, 37851,...$ (sequence A006717 in OEIS). (If $n$ is even, $S$ has no orthomorphisms.) -You can find lots of links for the case when $S$ is GF$(2^n)$ by googling "orthomorphism of galois field". 
For instance, here is a short paper which confirms my $244744192$ and goes on to discuss the construction of permutation polynomials.<|endoftext|> -TITLE: How does summation formula work with floor function? -QUESTION [6 upvotes]: Prove that if a and b are relatively prime, then - $$\sum_{n=1}^{a-1} \left\lfloor \frac{nb}{a}\right\rfloor = \frac{(a - 1)(b - 1)}{2}$$ - -My attempt was: -We have: -$$\sum_{i=1}^{n-1} i = \frac{n(n - 1)}{2}$$ -Then, -$$\sum_{n=1}^{a-1} \left\lfloor \frac{nb}{a}\right\rfloor = \left\lfloor \frac{a(a - 1)b}{2a}\right\rfloor$$ -Could I apply the summation formula for floor function like above? Am I in the right track? -Thanks, -Chan - -REPLY [2 votes]: A different Hint: -Consider a square of side $ab$, with one corner at $(0,0)$ and the opposite corner at $(ab, ab)$. Draw the vertical lines $x = b$, $x=2b$ etc and the horizontal lines $y = a$, $y = 2a$ etc. Now try to count in two different ways, the number of intersection points of these lines, which lie on or below the diagonal of the square.<|endoftext|> -TITLE: Exercise in Do Carmo's "Riemannian Geometry": the Möbius band is nonorientable. -QUESTION [11 upvotes]: Of course there are many ways to prove this. However, I came across the following exercise (Ch. 0 #3). - -Prove that: - (a) a regular surface - $S\subset \mathbb{R}^3$ is an - orientable manifold if and only if - there exists a differentiable mapping - of $N:S\rightarrow \mathbb{R}^3$ with - $N(p)\perp T_p(S)$ and $|N(p)|=1$, for - all $p\in S$. (b) the Möbius band - (Example 4.9 (b)) is non-orientable. - -In Example 4.9 (b), he constructs the Möbius band as the quotient by the antipodal map of the cylinder $C=\{(x,y,z)\in \mathbb{R}^3:x^2+y^2=1,|z|<1\}$. The problem, of course, is that this isn't given as a surface in $\mathbb{R}^3$! I was thinking for a second that maybe I should try and construct a map $C\rightarrow S^2$ with the right properties and check that it doesn't descend to a map on $M$, but that's stupid because if I were to embed $M\subset \mathbb{R}^3$, I'm pretty sure it couldn't possibly have those tangent planes anyways. -Does anyone have any insight? Presumably the solution to (b) should use the fact given in (a). - -REPLY [2 votes]: What doCarmo might have had in mind: -I didn't check, but I think the quotient map -$$ \pi: C \to M $$ -is precisely the orientation covering of $M$. And one can prove that this covering is connected iff $M$ is not orientable (see Lee "Intro to Smooth Manifolds", p. 331 for example). -If the above is not useful: Maybe you could try to pull back the orientation of $M$ to $C$ and get a contradiction or so. (although this really has not much to do with part a) of the exercise, so it might be that doCarmo wants you to imbed the Möbius strip)<|endoftext|> -TITLE: Understanding the Pareto distribution as applied to wealth -QUESTION [5 upvotes]: The Pareto distribution is used to say, given a particular person X, what is the pdf of his wealth. -I would like to explore the reciprocal question: Given the total amount of wealth in a population, what portion does a random person have. I conjecture that this is simply a constant times the Pareto distribution. -More interestingly: What is the shape of the distribution curve, if the richest person would be at the 0 point on the x axis, the next richest person to the right, and so on - we would see a monotonically decreasing curve. But what is its shape? What is its derivative? -It's quite likely that I'm not phrasing that question properly. 
Let me ask a more basic question: What is the appropriate terminology to explore the question? Given a probability distribution applied many times over, what is the shape of the resultant allocation curve?
-
-REPLY [6 votes]: What you're looking for is called Zipf's law. This law says that many distribution curves in which the data values are placed in rank order on the horizontal axis by frequency (or, equivalently, percent) follow a power law. The most famous use of Zipf's law is to describe the frequency of word usage in any given language, although the Wikipedia article specifically mentions income rankings as you ask about.
-It can be thought of as a discrete version of the Pareto distribution, so you're right about that. Added: This is because Zipf's law is the discrete power law distribution, and Pareto is the continuous power law distribution.
-
-(Update, in response to the OP's request for more on the relationship between Zipf and Pareto.)
-I'm going to do this in the general case. The argument will also be for numbers and amounts, rather than probabilities, with the understanding that we can convert the functions involved to pdfs or pmfs by scaling by the appropriate constants.
-Suppose we have the density function $p(x)$ for dollars (although it could be any good) allocated among people in a group, so that $\int_a^b p(x) dx$ gives the number of people in the group who have between $a$ and $b$ dollars. Now, rank the people in the group by wealth, and let $z(y)$ denote the wealth that the person ranked $y$ has. The question then is, "What is the relationship between $p(x)$ and $z(y)$?"
-Consider the number of people who have more than $M$ dollars. Using $p(x)$, that is given by $\int_M^{\infty} p(x) dx$. But this is also $R$, where $R$ is the largest rank of a person who has at least $M$ dollars. (In other words, if the 34th person has at least $M$ but the 35th does not, then $R = 34$.) So $z(R+1) < M \leq z(R)$. If the population is large enough, we can say $z(R) = M$ without losing much accuracy. Thus $R = z^{-1}(M)$. So there's our relationship (approximately): $$\int_M^{\infty} p(x) dx = z^{-1}(M).$$
-Thus the ranking function $z(y)$ is the inverse of the wealth tail cumulative distribution function $\int_M^{\infty} p(x) dx$.
-How does this relate to power laws? Well, in this special case, if $p(x) = \frac{C}{x^{\alpha+1}}$ (i.e., a Pareto distribution) for some $\alpha > 0$ and constant $C$, then we have $$z^{-1}(M) = \int_M^{\infty} \frac{C}{x^{\alpha+1}} dx = \frac{C}{\alpha M^{\alpha}},$$
-which means, for some constant $K$, $$z(y) = \frac{K}{y^{\frac{1}{\alpha}}}.$$
-Thus $z(y)$ is also a power law. Thus a Pareto (power law) distribution function for some good produces a power law ranking function for people with that good (i.e., Zipf).<|endoftext|>
-TITLE: Should RSA public exponent be only in {3, 5, 17, 257 or 65537} due to security considerations?
-QUESTION [6 upvotes]: In my project I'm using the public exponent value 4451h. I thought it was safe and OK until I started to use one commercial RSA encryption library. If I use this exponent with this library, it throws an exception.
-I contacted the developers of this library and got the following reply: "This feature is to prevent some attacks on RSA keys. The consequence is that the exponent value is limited to {3, 5, 17, 257 or 65537}. Deactivating this check is still being investigated, as the risks may be great."
-It's the first time in my life I hear that values other than {3, 5, 17, 257 or 65537} are used to break RSA.
I knew only of using 3 with improper padding being vulnerable.
-Is that really so? Surely, I can use another library, but after such an answer I worried about the security of my solution.
-
-REPLY [6 votes]: Like Yuval said, there is a performance reason: if you use $e = 2^k + 1$ for some (small) $k$, then computing $x^e \mod n$ is going to be faster than for most other exponents. Taking $k=16$ is very common and gives $e = 65537$, while taking $k$ equal to 1, 2, 4 or 8 gives the other values in your library. Also, as these are (Fermat) primes, all $n = pq$ with $p \mod e \neq 1$ and $q \mod e \neq 1$ will give a valid $(n,e)$-combination, and many libraries build their $n$ this way, for efficient testing and generation of $n$. A small $e$ is potentially dangerous because of so-called broadcast attacks (sending the same message 3 times with $e = 3$ to 3 different people with different moduli compromises the message), even though this can be thwarted by padding, and small $e$ are thus often avoided.
-But in principle any large enough $e$ that is coprime with $\phi(n)$ can be used, provided that $d$ is not too small etc. So this library has chosen to optimize its generation and testing (take shortcuts that can be made for these choices of $e$) and disallow other $e$. There are standards that always take 65537 for $e$ (so you don't have to transmit that info), so this library is even flexible compared to that.<|endoftext|>
-TITLE: Diagonal of the double sequence $(n+1)v_{h,n+1}-(2h+1)v_{h,n}-nv_{h,n-1}=0$
-QUESTION [7 upvotes]: Update: it is not possible to reply to this question without additional information.
-My comment below: "I have to agree with you that one "cannot derive (2) from (1) alone". Now it seems to me that one must consider how the sequences $v_{h,n}^{\prime }$ and $v_{h,n}$ are constructed. That is described in the first part of the paper, but unfortunately it is not easy for me to summarize it. As I understand it, Apéry repeatedly transformed a continued fraction whose approximants are $\dfrac{u_{h,n}^{\prime }}{u_{h,n}}$, iterating on $h$. The above sequences are $v_{h,n}^{\prime }=\dfrac{u_{h,n}^{\prime }}{h!n!}, v_{h,n}=\dfrac{u_{h,n}}{h!n!}$."
-
-In Irrationalité de Certaines Constantes, Bull. section des sciences du C.T.H.S., n.º3, p.37-53, Roger Apéry derives rational approximations $\dfrac{v_{h,n}^{\prime }}{v_{h,n}}$ for $\ln (1+t)$,
-$\zeta (2)$ and $\zeta (3)$, the simplest being the one for the $\ln $. The
-sequences $v_{h,n}^{\prime }$ and $v_{h,n}$, whose ratio converges
-to $\ln (2)$, satisfy the recursive relation
-$$(n+1)v_{h,n+1}-(2h+1)v_{h,n}-nv_{h,n-1}=0.\qquad (1)$$
-The diagonal sequences $w_{n}^{\prime }=v_{n,n}^{\prime },w_{n}=v_{n,n}$
-satisfy
-$$(n+1)w_{n+1}-3\left( 2n+1\right) w_{n}-nw_{n-1}=0.\qquad (2)$$
-Remarks:
-
-The initial conditions for $v_{h,n}^{\prime }$, $v_{h,n}$, $w_{n}^{\prime}$, $w_{n}$ are not indicated in the paper.
-Recurrences $(1)$ and $(2)$ are the particular case for $t=1$ of, respectively,
-
-$$(n+1)v_{h,n+1}-\left( \left( n+1\right) -nt+h\left( 1+t\right) \right) v_{h,n}-ntv_{h,n-1}=0\qquad (\ast)$$
-(to simplify the notation the index $h$ was deleted in the original) and
-$$(n+1)w_{n+1}-\left( 2n+1\right) \left( 2+t\right) w_{n}-nt^{2}w_{n-1}=0.\qquad (\ast\ast)$$
-Question: How do you derive $(2)$ from $(1)$?
-
-Copy of the mentioned paper
-
-REPLY [3 votes]: I hope I'm not misunderstanding your question; please correct me if I am.
-I think the answer must be that you cannot derive (2) from (1) alone.
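-A quick numerical sketch makes this visible (my own illustration; the seeds are arbitrary): generate each row of (1) from freely chosen initial values and test the diagonal against (2); for generic seeds the residuals are nonzero:
-
-    from fractions import Fraction as F
-
-    def row(h, v0, v1, N):
-        # v_{h,0..N} from (1): (n+1) v_{h,n+1} = (2h+1) v_{h,n} + n v_{h,n-1}
-        v = [F(v0), F(v1)]
-        for n in range(1, N):
-            v.append(((2 * h + 1) * v[n] + n * v[n - 1]) / (n + 1))
-        return v
-
-    N = 8
-    rows = [row(h, 1 + h, 2 + 3 * h, N) for h in range(N + 1)]  # arbitrary seeds
-    w = [rows[n][n] for n in range(N + 1)]                      # diagonal w_n
-    for n in range(1, N):
-        # residual of (2): (n+1) w_{n+1} - 3(2n+1) w_n - n w_{n-1}
-        print(n, (n + 1) * w[n + 1] - 3 * (2 * n + 1) * w[n] - n * w[n - 1])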
For each $h$, (1) is a separate recurrence relation linking only values with the same $h$. If you assume only (1), you can freely choose two initial values $v_{h,0}$ and $v_{h,1}$ and use them to generate a two-dimensional vector space of sequences (for fixed $h$) that can be made to take any value for a given $n$. But (2) contains only members for different values of $h$, so each of the members occurring in (2) can separately be chosen arbitrarily by choosing suitable initial values for its value of $h$. But there can't be a linear relationship between three quantities that can all be independently made to take an arbitrary value. -(It's possible that for some values of $h$ and $n$ (1) forces $v_{h,n}$ to be zero, but that could only imply (2) if this were the case on the entire diagonal, i.e. if the diagonal sequence were in fact zero -- if it is non-zero somewhere, we can always take a multiple of the sequence for that $h$ and thereby disturb (2) but not (1).)<|endoftext|> -TITLE: Nine lemma in Triangulated categories -QUESTION [6 upvotes]: I am curious if something like the Nine Lemma (http://en.wikipedia.org/wiki/Nine_lemma) is true in an arbitrary triangulated category. To be more explicit, suppose I have a map of cofiber sequences/distinguished triangles and I take the cofiber/mapping cone at each stage vertically (this gives a diagram like the diagram in the wikipedia link without the zeroes) then is the bottom row a cofiber sequence/distinguished triangle? -I am particularly interested in the category of spectra if that makes things easier or harder. -Also, if the result is not true in general what about when one of the maps that we end up taking the cofiber of is the identity map? -I feel like this ought to be true but I did not see anything in the two references I checked and I am not sure how to make us of verdier's/octahedral axiom. -thanks for your time. - -REPLY [6 votes]: Yes, a version of it is true (however, I don't think you can do it with any morphism of distinguished triangles - since the map of the cones is not unique). -The statement I know is: -Every commutative square sits inside a $4 \times 4$-diagram whose rows and colums are distinguished triangles, $8$ squares commute and one square is sign commutative. -This is called "Verdier's exercise" in the folklore and can be found in Bernstein-Beilinson-Deligne, faisceaux pervers, Proposition 1.1.11. -You can also find a proof as Lemma 2.6 in May's, The additivity of traces in tensor triangulated categories, available here. -If you want to prove it yourself, here's an outline: Start with a commutative square $ABA'B'$ and draw the diagonal $A \to B'$. Build octahedra over the two ensuing commutative triangles. Only using these octahedra, you will then be able to build a diagram of the form -$$\require{AMScd} -\begin{CD} -A @>>> B @>>> C @>>> A[1]\\ -@VVV @VVV @VVV @VVV\\ -A' @>>> B' @>>> C' @>>> A'[1]\\ -@VVV @VVV\\ -A'' @>>> B''\\ -@VVV @VVV\\ -A[1] @>>> B[1] -\end{CD}$$ -The morphism $C \to C'$ will be a composition of two morphisms and build yet another octahedron and complete the diagram. You'll have to rotate one triangle and that's the reason for a sign commutativity occurring in the bottom right square.<|endoftext|> -TITLE: Big $O$ vs Big $\Theta$ -QUESTION [6 upvotes]: I am aware of the big theta notation $f = \Theta(g)$ if and only if there are positive constants $A, B$ and $x_0 > 0$ such that for all $x > x_0$ we have -$$ -A|g(x)| \leq |f(x)| \leq B |g(x)|. 
-$$ -What if the condition is the following: -$$ -C_1 + A|g(x)| \leq |f(x)| \leq C_2 + B |g(x)| -$$ -where $C_1, C_2$ are possibly negative? Certainly more can be said than just $f = O(g)$. Is there a generalized $\Theta$ notation which allows shifts (by, say $C_1, C_2$)? In particular, I'm interested in the special case: -\begin{eqnarray} --C \leq f(x) - g(x) \leq C -\end{eqnarray} -for some positive $C$. How does $f$ compare to $g$ in this case? If $f$ and $g$ are positive functions of $x$ which both diverge to $\infty$, is it true that $f(x) = -C + g(x) + \Theta(1)$? What is the appropriate asymptotic notation in this case? -Update Thanks for the clarifying answers. Now here is a slightly harder question. Suppose $f$ is discrete and $g$ is continuous. Suppose further that as $x \to \infty$, the difference $f(x) - g(x)$ is asymptotically bounded in the interval $[-C,C]$ but does not necessarily converge to $0$. Does $f \sim g$ still make sense? Would it be more appropriate to use $\liminf_{x \to \infty} f(x) - g(x) = - C$ and $\limsup_{x \to \infty} f(x) - g(x) = C$? - -REPLY [3 votes]: If $g(x)$ and $f(x)$ tends to $\infty$, then there is a value $x_0$ such that for $x > x_0$, $g(x)$ and $f(x)$ are strictly positive. Therefore, if $-C \leq f(x) - g(x) \leq C$, then for $x > x_0$, we have -$$ \frac{-C}{g(x)} \leq \frac{f(x)}{g(x)} -1 \leq \frac{C}{g(x)}. $$ -Taking limits, you see that -$$ \lim_{x \to \infty} \frac{f(x)}{g(x)} = 1, $$ -if the limit exists. In this case, you can write $f \sim g$. -Update: To answer your second question, $f \sim g$ may not be appropriate here as $\displaystyle\lim_{x \to \infty} \frac{f(x)}{g(x)}$ may or may not exist. If the limit does exist, then you can write $f \sim g$ as before. If not, then the situation is trickier, and it must be dealt with individually, depending on the functions $f$ and $g$. You should just make the statement that best exemplifies what you are trying to say between the relationship of $f(x)$ and $g(x)$. The big-Oh (or big-Theta) notation may not be the best fit here. Hope this is helpful. - -REPLY [3 votes]: If $\lim |f(x)| = \lim |g(x)| = \infty$ then there is no difference between your two concepts. -If $f$ is a $\Theta(g)$, then it is a "shifted" $\Theta(g)$ with $C_1 = C_2 = 0$. -If $f$ is a "shifted $\Theta(g)$, then it is a $\Theta(g)$ : -Since $\lim |g(x)| = \infty$, there exists $x_1$ from which $C_2/B \le |g(x)|$ and $ -2C_1/A \le |g(x)|$. This shows that for $x \ge \max(x_0,x_1)$, $A|g(x)|/2 \le C_1 + A|g(x)| \le |f(x)| \le C_2 + B|g(x)| \le 2B|g(x)|$. -If $-C \le f(x)-g(x) \le C$, then this is exactly the same as saying $f-g = O(1)$. -In this case, you have $f = g+O(1)$, and if their limit is $\pm \infty$, $f \sim g$<|endoftext|> -TITLE: Is the 'variable' in 'let $y=f(x)$' free, bound, or neither? -QUESTION [13 upvotes]: Consider the string 'Let $y = f(x)$." Suppose that it occurs in some elementary context, such as when graphing the function $f$ using $x$/$y$ coordinates. How is this to be understood in predicate logic? We can't have either $x$ or $y$ be free variables, for consider the following: -Let $f:\mathbb{R}\rightarrow \mathbb{R}, \forall x,\ f(x)=2x$ -Let $g:\mathbb{R}→\mathbb{R}, \forall x,\ g(x)=x+x$ -Let $y = f(x)$ -Let $z = g(x)$ -$\therefore y = z$ -Here the last line is clearly true, but would lack a truth value if either variable were a free variable. 
-However, if both variables are bound, we're stuck with permutations of quantifiers that mean the wrong things: -$\forall x,\forall y,y=f(x)$ [says the universe has cardinality 1] -$\forall x,\exists y,y=f(x)$ [says f's domain is the universe] -$\exists y,\forall x,y=f(x)$ [says f is a constant function] -$\exists x,\forall y,y=f(x)$ [says the universe has cardinality 1 and f is nonempty] -$\forall y,\exists x,y=f(x)$ [says f is onto the universe] -$\exists x,\exists y,y=f(x)$ [says f is not the empty function] - -REPLY [2 votes]: First, recall that a function is a set of ordered pairs, and the statement $y = f(x)$ is shorthand for $(x,y)\in f$. This is a perfectly good formula, where we make the usual assumption that the variables $x$, $y$ and $f$ represent sets as opposed to proper classes. The main observation is that if we take a formula $\varphi$ with free variables, then we shouldn't expect that sentences resulting from binding the free variables should necessarily be true. -For example, if $\phi$ is $x=y$, then $\forall x, \forall y, \varphi(x,y)$ is false, whereas $\forall x, \exists y, \varphi(x,y)$ is true. The ones you obtain above that "say the wrong things" are false. Moreover, if $\forall x$ is replaced with $\forall x\in A$ for some set $A$, then the proper class problems go away. (Same for $ \forall y$.)<|endoftext|> -TITLE: Is there a good way to compute Christoffel Symbols -QUESTION [29 upvotes]: Lets say you have a Riemannian Manifold $(M,g)$, and you have some given chart where $g = g_{ij} dx_i dx_j$ and you wish to compute the Christoffel symbols for the Riemannian connection in this chart. To do this involves: -1) Calculating the inverse matrix of $g$, which is not too bad in dimensions 2 and 3, but becomes quite painful in higher dimensions. -2) Using the formula -$\Gamma_{ij}^k = \frac{1}{2} g^{kl}(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}) $ -which if you look closely the index that is summed over on the right is $l$, so there are $n=dim M$ terms on the right that need to be evaluated, and inside each of these you must take a partial derivative of three terms in $g$, so in total on the right-side there are $3n$ terms to be computed. This is just for each value of $k$. -Luckily $\Gamma_{ij}^k$ is symmetric in $ij$, and so you only have to compute about half of the $3n^3$ quantities involved, but it is still a very involved calculation. -Does anyone know a simpler method to go about this? Or an easier way to look at the calculation of this? As $\Gamma_{ij}^k$ is an $n \times n$ symmetric matrix for each fixed $k$, maybe there is a way to write $(\Gamma_{ij}^k)$, the matrix, as a matrix product of some other matrices that would be easier to write down. That way you would not have to work with index notation for doing the calculation, which also slows things down. - -REPLY [20 votes]: I'd like to expand on Hans Lundmark's answer -because the question keeps recurring. -Let $q^1,\ldots,q^n$ denote the generalized coordinates. -Introduce additional velocity variables $\dot{q}^1,\ldots,\dot{q}^n$. -If the dot accent is already reserved for other things in your theory, -use another accent to avoid confusion. 
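-(An aside before the derivation: if the goal is only to get the symbols with minimal hand computation, the defining formula can be handed to a computer algebra system directly. A minimal sympy sketch, using the spherical metric from the worked example near the end of this answer; the variable names are my own:)
-
-    import sympy as sp
-
-    q = (r, phi, theta) = sp.symbols('r phi theta', positive=True)
-    g = sp.diag(1, r**2 * sp.sin(theta)**2, r**2)   # metric coefficients g_ij
-    ginv = g.inv()
-    n = len(q)
-    # Gamma[k][i][j] = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
-    Gamma = [[[sp.simplify(sum(ginv[k, l] * (sp.diff(g[j, l], q[i])
-                                             + sp.diff(g[i, l], q[j])
-                                             - sp.diff(g[i, j], q[l])) / 2
-                               for l in range(n)))
-               for j in range(n)] for i in range(n)] for k in range(n)]
-    print(Gamma[2][1][1])   # Gamma^theta_{phi phi}, equal to -sin(theta)*cos(theta)
-
-The Lagrangian recipe below is still worth knowing: it organizes the same computation efficiently by hand and produces the geodesic equations along the way.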
-First, treat all $q^i$ and $\dot{q}^i$ as pairwise independent variables and -define, using Einstein summation convention, -$$L(q^1,\ldots,q^n;\dot{q}^1,\ldots,\dot{q}^n) -= \frac{1}{2} g_{ij}(q^1,\ldots,q^n)\,\dot{q}^i \dot{q}^j\tag{1}$$ -To avoid confusion later on, I have explicitly indicated -the formal dependencies of the expressions for $g_{ij}$ and $L$. -Note that $L$ can immediately be written down when given the -first fundamental form. -Now consider a twice differentiable curve, -which makes the $q^i$ functions of some independent new parameter $\tau$, -and set $\dot{q}^i = \frac{\mathrm{d}q^i}{\mathrm{d}\tau}$. -Proposition. On the curve parameterized with $\tau$ we have -$$g^{kh}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{q}^h}\right) -- \frac{\partial L}{\partial q^h}\right) -= \ddot{q}^k + \Gamma^k_{\ ij} \dot{q}^i \dot{q}^j -\tag{2}$$ -You might recognize the right-hand side as the expression -constrained by geodesics, -and you might recognize the expression wrapped around $L$ in the left-hand side -as the form of Euler-Lagrange differential equations, -which correspond to variational problems with the Lagrangian $L$. -Therefore $(2)$ hints at a variational foundation for the geodesics equation. -Such interpretations make $(2)$ easier to memorize or cross-link with -other knowledge, but all we actually need is the identity $(2)$. -The following observations should provide enough details to prove $(2)$. -We can rewrite $(2)$ to -$$\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{q}^h}\right) -- \frac{\partial L}{\partial q^h} -= g_{hk} \ddot{q}^k + \Gamma_{hij} \dot{q}^i \dot{q}^j -\tag{3}$$ -The idea now is, for every $h\in\{1,\ldots,n\}$, -to take the left-hand side of $(3)$, -plug in expressions for the metric in $L$, -and rewrite the thing so that it matches the format of the right-hand side, -where dotted variables occur only in the places shown. -Then you can read off the Christoffel symbols of the first kind, $\Gamma_{h**}$, -from the coefficients of the velocity products. -To obtain $(2)$ and $\Gamma^k_{\ **}$ is then just a matter of multiplication -with the inverse metric coefficients matrix $((g^{kh}))$, or -equivalently, taking the right-hand side expressions of -$(3)$ obtained for $h\in\{1,\ldots,n\}$ -and then, for each $k\in\{1,\ldots,n\}$, -finding a linear combination whose only second derivative -with respect to $\tau$ is $\ddot{q}^k$, with coefficient $1$. -This is the form given by $(2)$, so the coefficients of the velocity products -are then the Christoffel symbols of the second kind, $\Gamma^k_{\ **}$. -Remember, while doing the partial derivatives of $L$, treat the $q^i$ -and the $\dot{q}^i$ as independent formal variables. 
-You will do that with more concrete symbol meanings and metric expressions, -but in this moderately abstract setting, you can already refine -$$\begin{align} -\frac{\partial L}{\partial\dot{q}^h} -&= g_{hj}\dot{q}^j\tag{4} -\\\frac{\partial L}{\partial q^h} -&= \frac{1}{2}\frac{\partial g_{ij}}{\partial q^h}\dot{q}^i \dot{q}^j\tag{5} -\end{align}$$ -However, when doing the -$\frac{\mathrm{d}}{\mathrm{d}\tau}$ outside of $L$, stick to the curve -and apply the chain rule accordingly: -$$\frac{\mathrm{d}g_{hj}}{\mathrm{d}\tau} -= \frac{\partial g_{hj}}{\partial q^i}\,\dot{q}^i\tag{6}$$ -Now $(4)$, $(5)$, $(6)$ and the Levi-Civita formula -$$\Gamma_{hij} = \frac{1}{2}\left( -\frac{\partial g_{hj}}{\partial q^i} -+ \frac{\partial g_{ih}}{\partial q^j} -- \frac{\partial g_{ij}}{\partial q^h} -\right)$$ -can be used to prove $(3)$ and thereby $(2)$. -But I will focus on how to apply that proposition. -Example: -Spherical coordinates with radius $r$, longitude $\phi$, latitude $\theta$, -with $\theta=\frac{\pi}{2}$ at the equator. -At index positions, I will write coordinate names instead of digits. -The first fundamental form is -$$\mathrm{d}s^2 = \mathrm{d}r^2 -+ (r^2\sin^2\theta)\,\mathrm{d}\phi^2 -+ r^2\,\mathrm{d}\theta^2$$ -Accordingly, the Lagrangian $L$ is -$$L = \frac{1}{2}\left(\dot{r}^2 -+ (r^2\sin^2\theta)\,\dot{\phi}^2 -+ r^2\,\dot{\theta}^2\right)$$ -We now treat $r,\phi,\theta,\dot{r},\dot{\phi},\dot{\theta}$ -as independent variables and get -$$\begin{align} -\frac{\partial L}{\partial\dot{r}} &= \dot{r} -&\frac{\partial L}{\partial r} -&= (r\sin^2\theta)\,\dot{\phi}^2 + r\,\dot{\theta}^2 -\\\frac{\partial L}{\partial\dot{\phi}} &= (r^2\sin^2\theta)\,\dot{\phi} -&\frac{\partial L}{\partial\phi} &= 0 -\\\frac{\partial L}{\partial\dot{\theta}} &= r^2\,\dot{\theta} -&\frac{\partial L}{\partial\theta} &= (r^2\sin\theta\cos\theta)\,\dot{\phi}^2 -\end{align}$$ -Now we give up the independence, consider some curve paramterized by $\tau$ -and obtain -$$\begin{align} -\frac{\mathrm{d}}{\mathrm{d}\tau} -\frac{\partial L}{\partial\dot{r}} &= \ddot{r} -\\\frac{\mathrm{d}}{\mathrm{d}\tau} -\frac{\partial L}{\partial\dot{\phi}} &= (r^2\sin^2\theta)\,\ddot{\phi} -+ 2\,(r\sin^2\theta)\,\dot{r}\,\dot{\phi} -+ 2\,(r^2\sin\theta\cos\theta)\,\dot{\phi}\,\dot{\theta} -\\\frac{\mathrm{d}}{\mathrm{d}\tau} -\frac{\partial L}{\partial\dot{\theta}} &= r^2\,\ddot{\theta} -+ 2\,r\,\dot{r}\,\dot{\theta} -\end{align}$$ -And so -$$\begin{align} -\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{r}}\right) -- \frac{\partial L}{\partial r} -&= \underbrace{1}_{g_{rr}}\,\ddot{r} -+ \underbrace{(-r\sin^2\theta)}_{\Gamma_{r\phi\phi}}\,\dot{\phi}^2 -+ \underbrace{(-r)}_{\Gamma_{r\theta\theta}}\,\dot{\theta}^2 -\\\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{\phi}}\right) -- \frac{\partial L}{\partial\phi} -&= \underbrace{(r^2\sin^2\theta)}_{g_{\phi\phi}}\,\ddot{\phi} -+ 2\,\underbrace{(r\sin^2\theta)}_{\Gamma_{\phi r\phi} -= \Gamma_{\phi\phi r}}\,\dot{r}\,\dot{\phi} -+ 2\,\underbrace{(r^2\sin\theta\cos\theta)}_{\Gamma_{\phi\phi\theta} -= \Gamma_{\phi\theta\phi}}\,\dot{\phi}\,\dot{\theta} -\\\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{\theta}}\right) -- \frac{\partial L}{\partial\theta} -&= \underbrace{r^2}_{g_{\theta\theta}}\,\ddot{\theta} -+ 2\,\underbrace{r}_{\Gamma_{\theta r\theta} -= \Gamma_{\theta\theta r}}\,\dot{r}\,\dot{\theta} -+ \underbrace{(-r^2\sin\theta\cos\theta)}_{\Gamma_{\theta\phi\phi}} -\,\dot{\phi}^2 -\end{align}$$ -All other 
Christoffel symbols of the first kind are zero. -If we had a non-diagonal metric, some right-hand side expressions would have -several second derivatives, each accompanied by a corresponding metric -coefficient. -To obtain the Christoffel symbols of the second kind, find linear combinations -of the above right-hand side expressions that leave only one second derivative, -with coefficient $1$. -Here this is easy because the metric is already in diagonal form. Therefore -$$\begin{align} -g^{rh}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{q}^h}\right) -- \frac{\partial L}{\partial q^h}\right) -&= \ddot{r} -+ \underbrace{(-r\sin^2\theta)}_{\Gamma^r_{\ \phi\phi}}\,\dot{\phi}^2 -+ \underbrace{(-r)}_{\Gamma^r_{\ \theta\theta}}\,\dot{\theta}^2 -\\g^{\phi h}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{q}^h}\right) -- \frac{\partial L}{\partial q^h}\right) -&= \ddot{\phi} -+ 2\,\underbrace{\left(\frac{1}{r}\right)}_{\Gamma^\phi_{\ r\phi} -= \Gamma^\phi_{\ \phi r}}\,\dot{r}\,\dot{\phi} -+ 2\,\underbrace{(\cot\theta)}_{\Gamma^\phi_{\ \phi\theta} -= \Gamma^\phi_{\ \theta\phi}}\,\dot{\phi}\,\dot{\theta} -\\g^{\theta h}\left(\frac{\mathrm{d}}{\mathrm{d}\tau} -\left(\frac{\partial L}{\partial\dot{q}^h}\right) -- \frac{\partial L}{\partial q^h}\right) -&= \ddot{\theta} -+ 2\,\underbrace{\left(\frac{1}{r}\right)}_{\Gamma^\theta_{\ r\theta} -= \Gamma^\theta_{\ \theta r}}\,\dot{r}\,\dot{\theta} -+ \underbrace{(-\sin\theta\cos\theta)}_{\Gamma^\theta_{\ \phi\phi}} -\, -\dot{\phi}^2 -\end{align}$$ -All other Christoffel symbols of the second kind are zero.<|endoftext|> -TITLE: Ideals as an algebraic integer ring? -QUESTION [6 upvotes]: Let $\mathcal{O}_K$ be the ring of integers of some number field $K$. -It happens that $\mathcal{O}_K$ might not have unique factorization, but... - -We can form the multiplicative group of ideals of $\mathcal{O}_K$ -It has unique factorization -This construction doesn't seem to be a ring -Each ideal can be put into the form $(\alpha,\beta)$ with both $\alpha,\beta \in \mathcal{O}_K$ - -I think the ideal $(\alpha,\beta)$ represents the gcd of $\alpha$ and $\beta$ (analogous to field of fractions) so why can't we build a new ring out of the algebraic integers which has gcd closed and unique factorization? - -REPLY [5 votes]: In some sense, we can; I think this is what the ring of integers in the Hilbert class field does. -However, I don't think this is the right way to think about the move from elements to ideals in general. The point of passing to ideals is to abstract out the main property we want out of divisibility: $m | n$ if and only if the ideal $(m)$ contains the ideal $(n)$. So the natural structure on ideals is as a lattice ordered by inclusion, and it just happens to be a happy fact about Dedekind domains that this lattice is isomorphic to a product of copies of $\mathbb{N}$, one for each prime ideal. In general the order structure on ideals is much more complicated and the idea that one can think about ideals as generalized elements breaks down (e.g. try to apply this philosophy to $F[x, y]$ for $F$ a field). - -REPLY [4 votes]: There's no "additive inverses" to ideals. However, the ideals of a ring do form a semiring - see this MO question.<|endoftext|> -TITLE: what is the definition of a line in $\mathbb{P}^n(k)$ + how to compute the hilbert polynomial of two intersecting lines? -QUESTION [5 upvotes]: (1) I have never studied any projective/affine geometry or algebraic curves. 
I'd like to see a clear definition of a line in the projective space $\mathbb{P}^n(k)$, since I need it for my algebraic geometry study.
-(2) I'm guessing a line is the variety $\mathcal{V}(f_1,\ldots,f_{n-1})$, where $f_i$ are linear homogeneous polynomials, whose coefficients form a $n\times n\!-\!1$ matrix of full rank. Yes or no?
-Another guess would be that a line in $\mathbb{P}^n(k)$ is uniquely determined by two points $a\!=\![a_0\!:\!\ldots\!:\!a_n]$, $b\!=\![b_0\!:\!\ldots\!:\!b_n]$, such that the matrix $\begin{bmatrix} a \\ b \end{bmatrix}$ is of rank $2$. But how is such a line parametrized? Is either of my two attempted definitions correct?
-(3) What are the defining equations of two intersecting lines in $\mathbb{P}^3$? And now, most importantly: how can I compute the Hilbert polynomial of such a variety?
-For such an elementary concept, one would expect it to be the first object defined, but to my annoyance and frustration, I have yet to see an official definition. I have the book Introduction to Algebraic Geometry (Hassett), as well as Algebraic Curves (Fulton) as my main source. Any references would be highly desirable.
-thank you
-
-REPLY [2 votes]: (2) Your guess is correct, except I think it is an $(n-1)\times (n+1)$ matrix. Think of your line as the intersection of $n-1$ hyperplanes in non-degenerate position.
-(3) Assuming you meant the union of two lines $L_1, L_2$, then the defining variety is given by $I_1I_2$.
-As for the Hilbert poly., use:
-$$0 \to \mathcal O_{L_1\cup L_2} \to \mathcal O_{L_1}\oplus \mathcal O_{L_2} \to \mathcal O_{L_1\cap L_2} \to 0$$
-and the fact that Hilbert poly. are additive.
-Details added: the two terms on the right of the above sequence can be calculated easily. Think of a line as $Proj(k[x,y])$ and a point (since you know $L_1,L_2$ intersect at a point) as $Proj(k[x])$. I am sure you can find the Hilbert polynomials of those rings.<|endoftext|>
-TITLE: Prove that $\gcd( a + a', b + b' ) = 1$ if $ab - a'b' = \pm 1$
-QUESTION [5 upvotes]: Prove that $\gcd(a + a', b + b') = 1$ if $ab - a'b' = \pm 1$
-
-My attempt was:
-Case 1:
-$ab - a'b' = 1 \implies \gcd(a, b') = 1$ and $\gcd(a', b) = 1$
-Then is it sufficient to conclude that $\gcd(a + a', b + b') = 1$?
-Furthermore, when we write $\pm 1$, does it mean "or" or "and"?
-Thanks,
-
-REPLY [10 votes]: Here's one way of looking at these problems. In general, integers $m$ and $n$ are relatively prime if and only if there are integers $x$ and $y$ such that $mx - ny = \pm 1$. You can write this in matrix form as
-$$\det\pmatrix {m&n \cr y&x \cr} = \pm 1$$
-Correspondingly, the condition you are given is that
-$$\det\pmatrix {a&b' \cr a'&b \cr} = \pm 1$$
-Adding one row to another doesn't change the determinant, so you also have
-$$\det\pmatrix {a + a'&b + b' \cr a'&b \cr} = \pm 1$$
-This gives that $a + a'$ and $b + b'$ are relatively prime.<|endoftext|>
-TITLE: Hilbert Space - Norm of derivative
-QUESTION [5 upvotes]: If $H$ is a Hilbert space of entire functions with weighted norm $||f||^{2}=\int_{\mathbb{R}} |\frac{f(t)}{g(t)}|^{2}dt$ for some entire function $g$ (not necessarily in $H$), can we find any relation between the norm of $f$ and the norm of its derivative? Something like:
-$||f'||\leq C ||f||$ for some constant $C$. (Note: so far we don't know whether $f'$ belongs to $H$ or not.)
-
-REPLY [3 votes]: An explicit counterexample: fix some $f$ and define
-$$\tilde{f}(t) = e^{iKt}f(t) $$
-for some large, unspecified $K$. $\tilde{f}$ is clearly also in $H$. 
Then $$\tilde{f}'(t) = e^{iKt}f'(t) + Ke^{iKt}f(t)$$ An inequality of the form you seek will imply that -$$K\|f\| - \|f'\| \leq \| f' + K f\| \leq C\|f\| $$ -for all $K$, which is absurd.<|endoftext|> -TITLE: Prove that $\sum_{d|n} \mu(d)\sigma(d) = (-1)^{k} \prod_{i=1}^{k} p_i$ -QUESTION [7 upvotes]: In my notes: -$$\sum_{d|n} \mu(d)\sigma(d) = (-1)^{k} \prod_{i=1}^{k} p_i$$ -where $\mu(d)$ is the Möbius function and $\sigma(d)$ is the sum of all positive divisors of $d$. -And I have no idea how they got the expression on the right hand side. Could anyone help me explain how this works? -Thanks, - -REPLY [5 votes]: This looks more like a special case of Möbius Inversion Formula. You need to choose an appropriate $f$ and $g$. -Note that $n = p_1^{\alpha_1} p_2^{\alpha_2} \ldots p_k^{\alpha_k}$ and $n = p_1 p_2 \ldots p_k$ should give the same answer. (Essentially you are removing the $\mu(d) = 0$ terms on the left side). -Hence, we could just deal with $n = p_1 p_2 \ldots p_k$. -Now let $f(n) = n$ and $g(n) = \displaystyle \sum_{d|n} f(d)$. -$g(n)$ is nothing but the sum of divisors of $n$ i.e. $g(n) = \sigma(n)$. -Note that $\mu(d) = (-1)^k \mu \left( \frac{n}{d} \right)$ when $n = p_1 p_2 \ldots p_k$. -By Möbius Inversion Formula, $$n = \sum_{d|n} \mu(d) g\left( \frac{n}{d} \right) = (-1)^k \sum_{d|n} \mu\left( \frac{n}{d} \right) g \left( \frac{n}{d} \right) = (-1)^k \sum_{d|n} \mu\left( d \right) g \left( d \right)$$ -Hence, $$\sum_{d|n} \mu\left( d \right) \sigma \left( d \right) = (-1)^k p_1 p_2 \ldots p_k$$<|endoftext|> -TITLE: Factorization of primes and $Spec(\mathcal{O}_K)$ -QUESTION [6 upvotes]: Let $K$ be a quadratic number field, and $\mathcal{O}_K$ the ring of integers of $K$. -The map $\pi: Spec(\mathcal{O}_K) \rightarrow Spec(\mathbb{Z})$ that sends a prime ideal $\mathbb{p}$ to $\mathbb{p} \cap \mathbb{Z}$ is induced since $\mathcal{O}_K$ contains $\mathbb{Z}$. And the fiber $\pi^{-1}$ of the prime ideal $(p)$ of $\mathbb{Z}$ is then understood as the decomposition of $(p)$ in $\mathcal{O}_K$. -We then obtain a geometric interpretation of how p factors in $\mathcal{O}_K$ using results obtained from Algebraic number theory. -I'm looking for hints to (major or minor!) results that can be proved regarding the behaviour of $(p)$ in $\mathcal{O}_K$, or other interesting aspects of $\mathcal{O}_K$ using "as much as possible" Algebraic geometry (at level of a first course in Schemes, using, say first seven chapters of Liu's "Algebraic Geometry and Arithmetic Curves"). -I'd also appreciate a recommendation of a textbook or notes that discusses these ideas in details. - -REPLY [4 votes]: As suggested in a comment by Quiaochu Yuan, Neukirch's book is really good, but I'd recommend you to have a look also at the beautiful lecture notes on Arithmetic Geometry by Lucien Szpiro. You can find them in his webpage, here. -These are the notes of a course given by Szpiro in Orsay and they are in French (Cours De Géométrie Arithmétique), but it looks like somebody TeX-ed and translated (alas, just part of) them (Basic Arithmetic Geometry Notes). -Szpiro starts by studying Picard groups and one-dimensional rings and then discusses in a very geometric flavour some classical results of algebraic number theory (finiteness of $Pic$, Dirichlet's unit theorem...).<|endoftext|> -TITLE: Convex and bounded function is constant -QUESTION [10 upvotes]: Let f be a convex and bounded function, meaning there is a constant $C$, such that $f(x) < C$ for every $x$. 
-I need to prove that $f$ is a constant function. -Thanks! - -REPLY [14 votes]: Suppose f is not constant, i.e., $\exists x,y\in\mathbb{R}:f(x)>f(y)$. -Since f is convex, we have: $f(x)\leq\lambda f(\frac{x-(1-\lambda)y}{\lambda})+(1-\lambda)f(y)\;\;\;\forall\lambda\in(0,1).$ -(This is just the definition of convexity, $f(\lambda x'+(1-\lambda)y')\leq\lambda f(x')+(1-\lambda)f(y')\;\;\;\forall\lambda\in(0,1)$, with $x=\lambda x'+(1-\lambda)y'$ and $y=y'$.) -Hence $\frac{f(x)-(1-\lambda)f(y)}{\lambda}\leq f(\frac{x-(1-\lambda)y}{\lambda}).$ -Now, since $f(x)>f(y)$, $\frac{f(x)-(1-\lambda)f(y)}{\lambda}=\frac{f(x)-f(y)}{\lambda}+f(y)\rightarrow \infty$ as $\lambda\rightarrow0^+.$ -Hence f is not bounded above.<|endoftext|> -TITLE: A question related to Krull-Akizuki theorem -QUESTION [10 upvotes]: Let $(R,m)$ be a D.V.R with field of fraction $K$ and $L$ any finite algebraic field extension of $K$. Suppose $\bar{R}$ is the integral closure of $R$ in $L$. Then it is well known that $\bar{R}$ is a Dedekind domain and for any nonzero ideal $J$ of $\bar{R}$ the ring $\bar{R}/J$ is a finite $R$-module. My question is if $\bar{R}$ is itself a finite $R$ module. If $L$ is separable over $K$ or $(R,m)$ is essentially finite over a field, then the answer is affirmative. I think in general it is not true. But I am not getting any easy counter-example. - -REPLY [6 votes]: Example (O.Zariski?): -Consider the rational function field $k(x)$ over a field $k$ of characteristic $p>0$. -The field extension $k((x))/k(x)$ is transcendental; let $\alpha\in k[[x]]$ be transcendental - and let $y:=\alpha^p$. Then $L:=k(x,\alpha )$ is a purely inseparable -extension of $K:=k(x,y)$. -The discrete valuation ring $B:=k[[x]]\cap L$ is the integral -closure of the discrete valuation ring $A:=k[[x]]\cap K$ and $x$ is a prime element of $A$. -For every $n\in\mathbb{N}$ the element $y$ can be written as -$ -y=f_n^p + x^{pn}y_n , f_n\in k[x], y_n\in k[[x]]. -$ -(To see this consider the power series: $\alpha =c_0+c_1x+c_2x^2+c_3x^3+\ldots$ hence -$y =c_0^p+c_1^px^p+c_2^px^{2p}+c_3^px^{3p}+\ldots$.) -Then $y_n\in A$ and one gets: -$ -y_n^{1/p} =x^{-n}(-f_n+\alpha )\;\; (*). -$ -Now if $B$ were a finite $A$-module one could find $d\in A\setminus 0$ such that -$ -dB\subseteq A+A\alpha +A\alpha^2 +\ldots +A\alpha^{p-1}=:R. -$ -In particular $d y_n^{1/p}\in R$, which using (*) yields that $d$ is divisible by $x^n$ -for every $n$. (Note that $1, \alpha ,\ldots ,\alpha^{p-1}$ are linearly independent -over $K$.)<|endoftext|> -TITLE: Is there a function with infinite integral on every interval? -QUESTION [36 upvotes]: Could give some examples of nonnegative measurable function $f:\mathbb{R}\to[0,\infty)$, such that its integral over any bounded interval is infinite? - -REPLY [41 votes]: The easiest example I know is constructed as follows. Let $q_{n}$ be an enumeration of the rational numbers in $[0,1]$. Consider $$g(x) = \sum_{n=1}^{\infty} 2^{-n} \frac{1}{|x-q_{n}|^{1/2}}.$$ -Since each function $\dfrac{1}{|x-q_{n}|^{1/2}}$ is integrable on $[0,1]$, so is $g(x)$ [verify this!]. Therefore $g(x) < \infty$ almost everywhere, so we can simply set $g(x) = 0$ in the points where the sum is infinite. -On the other hand, $f = g^{2}$ has infinite integral over each interval in $[0,1]$. 
Indeed, if $0 \leq a \lt b \leq 1$ then $(a,b)$ contains a number $q_{n}$, so $$\int_{a}^{b} f(x)\,dx \geq \int_{a}^{b} \frac{1}{|x-q_{n}|}\,dx = \infty.$$ Now in order to get the function $f$ defined at every point of $\mathbb{R}$, simply define $f(n + x) = f(x)$ for $0 \leq x \lt 1$. - -REPLY [9 votes]: See exercise 26 (c) on p. 327 here.<|endoftext|> -TITLE: Upsetting inequality (à la Cauchy-Schwarz?) -QUESTION [10 upvotes]: How to prove that: -$$ -\left(\sum_{i=1}^n w_i n_i \sqrt{\dfrac{y_i(1-y_i)}{n_i+1}}\right)^2 \leq \dfrac{\left(\sum_{i=1}^n w_i n_i y_i\right)\left(\sum_{i=1}^n w_i n_i (1-y_i)\right)}{(\sum_{i=1}^n w_i n_i+1)} -$$ -where $w_i\geq0$, $\sum_{i=1}^n w_i=1$, $n_i>0$ and $y_i \in (0,1)$ for $i=1,\dots,n$, with $n>1$? -I have verified numerically that it should hold, but I cannot still find an elegant way to show it. -The formula comes from an inequality for the variance of a convex combination of beta-distributed variables. - -REPLY [16 votes]: Consider some random variables $X$ and $Y$ such that, for every $i$, $(X,Y)=(n_iy_i,n_i(1-y_i))$ with probability $w_i$. The OP asks a proof of an inequality equivalent to -$$ -E(g(X,Y))\le g(E(X),E(Y)), -$$ -where, for every nonnegative $x$ and $y$, -$$ -g(x,y)=\sqrt{\frac{xy}{x+y+1}}. -$$ -The second partial derivatives $\partial^2_{xx}g$ and $\partial^2_{yy}g$ are negative and the determinant of the Hessian matrix of $g$ is $(xy+x+y)/(4xy(x+y+1)^3)$, which is positive. Hence both eigenvalues of the Hessian matrix are negative, the function $g$ is concave on its domain, and Jensen's inequality yields the result.<|endoftext|> -TITLE: Irreducible homogeneous polynomials of arbitrary degree -QUESTION [16 upvotes]: Suppose we have an algebraically closed field $F$ and $n+1$ variables $X_0, \dots, X_n$, where $n > 1$. Does there exist an irreducible homogeneous polynomial in these variables of degree $d$ for any positive integer $d > 1$? In other words, does there always exist an irreducible hypersurface of arbitrary degree? - -Of course, I am also interested in constructions of these polynomials. -Thank you. - -REPLY [8 votes]: Proposition: The Fermat polynomial $f = X_0^d + \cdots + X_n^d$ is irreducible for $n\ge 2$ in characteristic zero. -Proof: By induction on $n$ using Eisenstein's criterion (e.g. in Lang's Algebra, Part One, Theorem IV.3.1). For $n=2$, we view $X_0^d+X_1^d+X_2^d$ as an element of $k[X_1,X_2][X_0]$. Now, $X_1^d+X_2^d = (X_1-e_1 X_2)\cdots(X_1-e_d X_2)$, where $e_1,\dots,e_d$ are the roots of the polynomial $\xi^d+1$. Since $k$ is not necessarily algebraically closed, the $e_i$ will lie in general in the algebraic closure $\bar{k}$ of $k$. Second, all $e_i$ are distinct. This implies that there is a factor $f \in k[\xi]$ of the polynomial $\xi^d+1$ (possibly the entire polynomial itself), that is irreducible over $k$ and $\xi^d+1 \not\in (f^2)$. Now let $\mathfrak{P}$ be the prime ideal generated by $f$. Then $X_1^d+X_2^d \in \mathfrak{P}-\mathfrak{P}^2$ and so Eisenstein's criterion gives that $X_0^d+X_1^d+X_2^d$ is irreducible. Next, for $n >2$ we view $X_0^d + X_1^d+\cdots+X_n^d$ as an element of -$k[X_1,\dots,X_n][X_0]$. By induction hypothesis $X_1^d+\cdots+X_n^d$ is irreducible and we can take the prime ideal $\mathfrak{P}$ of Eisenstein's criterion to be the ideal generated by $X_1^d+\cdots+X_n^d$.<|endoftext|> -TITLE: Conventions for function notation -QUESTION [6 upvotes]: I'm in year 11 right now and I just had a brief discussion with my maths teacher about function notation in trigonometry. 
-For a test, I wrote this,
-sin(50)^2
-
-I assumed that would be interpreted as sin(50)*sin(50)
-But I was told the correct notation for this is
-sin^2 (50)
-
-or optionally
-(sin (50))^2
-
-I'm curious if that is the 'proper' mathematical convention, or just how things are taught in high schools?
-I'm Australian, in case there are some regional differences.
-Thanks for the help!
-
-REPLY [12 votes]: This is a weird notational bug specific to trigonometric functions; chalk it up to historical inertia. We write $\sin^2 x$ for $(\sin x)^2$ but for a generic function $f$, more often than not $f^2(x)$ means $f(f(x))$ and does not mean $(f(x))^2$ (or $f(x^2)$). On the other hand, $\sin^{-1} x$ means $\arcsin x$ rather than $\csc x$...
-It is preferable to include extra parentheses when in doubt. Generally I would interpret $f(x)^2$ as $(f(x))^2$ but it is less clear whether $f(\log x)^2$ means $f((\log x)^2)$ or $(f(\log x))^2$.<|endoftext|>
-TITLE: Matrix multiplication: interpreting and understanding the process
-QUESTION [53 upvotes]: I have just watched the first half of the 3rd lecture of Gilbert Strang on the open courseware with link:
-http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/
-It seems that with a matrix multiplication $AB=C$, the entries, as scalars, are formed from the dot product computations of the rows of $A$ with the columns of $B$. Visual interpretations from mechanics of overlapping forces come to mind immediately because that is the source for the dot product (inner product).
-I see the rows of $C$ as being the dot product of the rows of $B$, with the dot product of a particular row of $A$. Similar to the above and it is easy to see this from the individual entries in the matrix $C$ as to which elements change to give which dot products.
-For understanding matrix multiplication there is the geometrical interpretation, that the matrix multiplication is a change in the reference system since matrix $B$ can be seen as a transformation operator for rotation, scaling, reflection and skew. It is easy to see this by constructing example $B$ matrices with these effects on $A$. This decomposition is a strong argument and is strongly convincing of its generality. This interpretation is strong but not smooth, because I would find smoother an explanation which would be an interpretation beginning from the dot product of vectors and using this to explain the process and the interpretation of the results (one which is a bit easier to see without the many examples of putting numbers in and seeing what comes out that students go through).
-I can hope that sticking to dot products throughout the explanation and THEN seeing how these can be seen to produce scalings, rotations, and skewings would be better. But, after some simple graphical examples I saw this doesn't work as the order of the columns in matrix $B$ is important and doesn't show in the graphical representation.
-The best explanation I can find is at Yahoo Answers. It is convincing but a bit disappointing (explains why this approach preserves the "composition of linear transformations"; thanks @Arturo Magidin). So the question is: Why does matrix multiplication happen as it does, and are there good practical examples to support it? Preferably not via rotations/scalings/skews (thanks @lhf).
-
-REPLY [14 votes]: I think part of the problem people have with getting used to linear transformations vs. 
matrices is that they have probably never seen an example of a linear transformation defined without reference to a matrix or a basis. So here is such an example. Let $V$ be the vector space of real polynomials of degree at most $3$, and let $f : V \to V$ be the derivative. -$V$ does not come equipped with a natural choice of basis. You might argue that $\{ 1, x, x^2, x^3 \}$ is natural, but it's only convenient: there's no reason to privilege this basis over $\{ 1, (x+c), (x+c)^2, (x+c)^3 \}$ for any $c \in \mathbb{R}$ (and, depending on what my definitions are, it is literally impossible to do so). More generally, $\{ a_0(x), a_1(x), a_2(x), a_3(x) \}$ is a basis for any collection of polynomials $a_i$ of degree $i$. -$V$ also does not come equipped with a natural choice of dot product, so there's no way to include those in the discussion without making an arbitrary choice. It really is just a vector space equipped with a linear transformation. -Since we want to talk about composition, let's write down a second linear transformation. $g : V \to V$ will send a polynomial $p(x)$ to the polynomial $p(x + 1)$. Note that, once again, I do not need to refer to a basis to define $g$. -Then the abstract composition $gf : V \to V$ is well-defined; it sends a polynomial $p(x)$ to the polynomial $p'(x + 1)$. I don't need to refer to a basis or multiply any matrices to see this; all I am doing is composing two functions. -Now let's do everything in a particular basis to see that we get the same answer using the correct and natural definition of matrix multiplication. We'll use the basis $ \{ 1, x, x^2, x^3 \}$. In this basis $f$ has matrix -$$\left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\\ - 0 & 0 & 2 & 0 \\\ - 0 & 0 & 0 & 3 \\\ - 0 & 0 & 0 & 0 \end{array} \right]$$ -and $g$ has matrix -$$\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\\ - 0 & 1 & 2 & 3 \\\ - 0 & 0 & 1 & 3 \\\ - 0 & 0 & 0 & 1 \end{array} \right].$$ -Now I encourage you to go through all the generalities in Arturo's post in this example to verify that $gf$ has the matrix it is supposed to have.<|endoftext|> -TITLE: Differentiable at a point -QUESTION [11 upvotes]: My roommates and I have an argument you guys can help to settle (peace is at stake, don't let us down!) In undergrad calculus courses, one usually explains what it means for a function to be differentiable at a point x, and then differentiable in a domain. Then the focus is entirely on this latter notion. My question is: - -Has the notion of differentiability at a point any interest? - -That is, I'm looking for a theorem which is valid for a function regular at some point, but which needs significantly less regularity in a neighborhood of this point, or a good reason for which such a theorem doesn't exist. -Of course, this question is very flexible, and any insight is welcome. - -REPLY [6 votes]: There are functions that are only differentiable at one point, here is an example: -Consider the function $d:\mathbb{R}\to\mathbb{R}$ defined by $d(x)=0$ if $x$ is rational and $d(x)=1$ if $x$ is irrational. This function is not differentiable anywhere, since it is not continuous, however, it is surely bounded. Now look at $f(x)=x^2\cdot d(x)$. -Note that for a rational $x$ we have $f(x)=0$ and while for irrational $x$ we have $f(x)=x^2$. -Next since there are rational numbers arbitrary close to any irrational $x$ our function is not continuous at irrationals. The same argument holds for rational $x$'s whenever $x\ne0$. 
-At $x=0$ we have, for $h$ rational and $\ne0$,
-$$\frac{f(x+h)-f(x)}{h}= \frac{f(0+h)-f(0)}{h}=\frac{f(h)-f(0)}{h}=\frac{0}{h}=0$$
-while for irrational $h$ we get
-$$\frac{f(x+h)-f(x)}{h} = \frac{f(h)-f(0)}{h}=\frac{h^2}{h}=h$$
-which tends to $0$ as $h\to0$; in particular $f$ is differentiable at $0$ and $f'(0)=0$.<|endoftext|>
-TITLE: Find the average of a collection of points (in 2D space)
-QUESTION [9 upvotes]: I'm a bit rusty on my math, so please forgive me if my terminology is wrong or I'm overlooking extending a simple formula to solve the problem.
-I have a collection of points in 2D space (x, y coordinates). I want to find the "average" point within that collection. (Centroid, center of mass, barycenter might be better terms.)
-Is the average point just that whose x coordinate is the average of the x's and whose y coordinate is the average of the y's?
-
-REPLY [10 votes]: Yes, you can compute the average for each coordinate separately because $$\frac{1}{n} \sum (x_i,y_i) = \frac{1}{n} (\sum x_i, \sum y_i) = (\frac{1}{n}\sum x_i, \frac{1}{n}\sum y_i)$$
-
-REPLY [4 votes]: There are different types of averages. Only the average of numbers is unambiguous. The average you are looking for depends on what you want to use it for. If you take the avg. x and y coordinates separately, that will give you the center of mass.<|endoftext|>
-TITLE: What is the image of $\zeta_3$ under the non-identity embedding of $\mathbb{Q}(\zeta_3)$ in $\mathbb{C}$?
-QUESTION [5 upvotes]: What is the image of $\zeta_3$ under the non-identity embedding of $\mathbb{Q}(\zeta_3)$ in $\mathbb{C}$?
-
-REPLY [3 votes]: If you define an algebraic extension of $\Bbb Q$ as $K={\Bbb Q}[x]/(f(x))$ where $f(x)$ is an irreducible polynomial of degree $d$, you obtain the $d$ embeddings of $K$ by sending $x\mapsto\alpha$ where $\alpha$ is a complex root of $f(x)$ (they always exist, thanks to Gauss).
-In the case of ${\Bbb Q}(\zeta_3)$, this is just ${\Bbb Q}[x]/(x^2+x+1)$ because $x^2+x+1$ is the quadratic irreducible factor of $x^3-1$. Thus you obtain two complex embeddings sending
-$$
-x=\zeta_3\mapsto\alpha^\pm=\frac12(-1\pm\sqrt{-3}).
-$$
-If you decide to identify $\zeta_3=\alpha^{+}=(-1+\sqrt{-3})/2$, a moment of thought (in the form of a short computation) will convince you that the "other" root, i.e. the other embedding, $\alpha^{-}$ is just $\zeta_3^2$. Mind that the two roots (and so the two embeddings) are switched by complex conjugation.
-The identities $\zeta_3^2=\overline{\zeta}_3=\zeta_3^{-1}$ are also clear observing that $1$, $\alpha^{+}$ and $\alpha^{-}$ are the three complex roots of $x^3-1$, so that their product must be $1$.
-Adrian's answer gives the generalization to the $p$-th cyclotomic field.<|endoftext|>
-TITLE: mahlo and hyper-inaccessible cardinals
-QUESTION [8 upvotes]: Wikipedia states that a Mahlo cardinal is hyper-inaccessible, hyper-hyper-inaccessible, etc. Is this a characterisation of Mahlo? If not, what about "alpha = hyper^alpha-inaccessible" with the obvious transfinite recursive definition of hyper^alpha-inaccessible?
-
-REPLY [6 votes]: No amount of hyperinaccessibility or hyperhyperinaccessibility and so on can be provably equivalent to Mahloness (unless those notions are inconsistent). 
The reason is that if $\kappa$ is Mahlo, then all its hyperinaccessibility and hyperhyperinacessibility properties and so on are expressible in the structure $\langle V_\kappa,\in\rangle$, once one knows that $\kappa$ is regular, since they have to do only with what is happening eventually below $\kappa$. But when $\kappa$ is Mahlo, then there are many inaccessible $\delta$ with $V_\delta\prec V_\kappa$, since the set of $\alpha$ with $V_\alpha\prec V_\kappa$ is club in $\kappa$ by a Lowenheim-Skolem argument. Any such $\delta$ will therefore have exactly the same hyperhyperinacessibility properties as $\kappa$, even though when $\kappa$ is the least Mahlo, then no such $\delta$ is Mahlo. So those properties do not imply Mahloness. -Note that the property of $\kappa$ being Mahlo is naturally expressed in $V_{\kappa+1}$, since it makes reference to stationarity, which requires one to consider all subsets of $\kappa$.<|endoftext|> -TITLE: Does Hom commute with stalks for locally free sheaves? -QUESTION [22 upvotes]: This is somewhat related to the question Why doesn't Hom commute with taking stalks?. -My question is this: If $F$ and $G$ are locally free sheaves of $\mathcal{O}_X$ -modules on an arbitrary ringed space $(X,\mathcal{O}_X)$, then is the stalk of the Hom sheaf $\mathcal{H}om(F,G)$ at a point $p$ equal to $\text{Hom}_{\mathcal{O}_{X,p}}(F_p,G_p)$? -I ask this because I feel I need it to solve Exercise 5.1(a) from Chapter II in Hartshorne. By proposition 6.8 of Chapter III, the answer to my question is affirmative IF $X$ is a Noetherian scheme, and it holds even if $F$ is only coherent and with no conditions on $G$, but his does not answer my question which is assuming less on $X$ and more on the sheaves. -In any case, is there a way to solve the exercise in Hartshorne without going to the stalks? - -REPLY [17 votes]: Presumably the solution you have in mind is this. There is a natural homomorphism $\newcommand{\E}{\mathcal{E}}\newcommand{\O}{\mathcal{O}}\newcommand{\Hom}{\mathcal{H}om} -\E\to(\E^\vee)^\vee$. It suffices to check that this homomorphism is an isomorphism at every stalk. -That's the right approach, but there's no need to go all the way to stalks. It suffices to show that on any open set $U\subseteq X$ where $\E$ is actually free, the natural map is an isomorphism. Since $\E|_U\cong \O_U^n$, it suffices to check that the natural map $\O^n\to \Hom(\Hom(\O^n,\O),\O)$ is an isomorphism. This is easy to check, but requires you to unravel the map. -Remark: Note that to show two sheaves on $X$ are isomorphic, it is not enough to find an open cover $X$ and isomorphisms between the two sheaves on each open set. The reason is that the isomorphisms may not agree the intersections of the open sets in the cover. Naturality of the map plays a very important role here: it ensures that the isomorphisms on the open cover will glue. To put it another way, we first constructed the morphism $\E\to (\E^\vee)^\vee$, and then checked that it is an isomorphism on an open cover. I want to stress that it is not enough to just check that $\O^n$ and $\Hom(\Hom(\O^n,\O),\O)$ are isomorphic, you must check that the specific map $s\mapsto (\phi\mapsto \phi(s))$ is an isomorphism. -This same remark holds if you want to use the stalks approach. You must first construct the global map, and then verify that it induces isomorphisms on stalks. Just showing that the stalks are isomorphic is not enough. If it were, any two locally free sheaves of the same rank would be isomorphic. 
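-To spell out the unraveling on a free piece (a short check in coordinates; this computation is my addition, not Hartshorne's text): under the identification $\mathcal{H}om(\mathcal{O}^n,\mathcal{O})\cong\mathcal{O}^n$ with standard basis $e_1,\ldots,e_n$ and dual basis $e_1^\vee,\ldots,e_n^\vee$, the map $s\mapsto(\phi\mapsto\phi(s))$ sends
-$$e_i \mapsto (\phi \mapsto \phi(e_i)), \qquad (\phi \mapsto \phi(e_i))(e_j^\vee) = e_j^\vee(e_i) = \delta_{ij},$$
-so it is represented by the identity matrix and is therefore an isomorphism.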
- -REPLY [4 votes]: As $F$ and $G$ are locally free, and as the stalk at $p$ depends only of what happens in an open neighborhood of $p$, you can assume that $F$ and $G$ are in fact free. -In the context of that exercise, the sheaves are actually of finite rank, so we end up with $F$ and $G$ free of finite rank. Since everything in sight is an additive functor, additivity allows us to reduce to the case where $F$ and $G$ are in fact free of rank $1$. Everything is then obvious :)<|endoftext|> -TITLE: How do I come up with a function to count a pyramid of apples? -QUESTION [8 upvotes]: My algebra book has a quick practical example at the beginning of the chapter on polynomials and their functions. Unfortunately it just says "this is why polynomial functions are important" and moves on. I'd like to know how to come up with the function (the teacher considers this out of scope). Even suggesting google search terms to find out more information on this sort of thing would be helpful. -The example -Consider a stack of apples, in a pyramid shape. It starts with one apple at the top. The next layer is 2x2, then 3x3, and so on. How many apples are there, given x number of layers? -The polynomial function -$$f(x) = \frac{2x^3 + 3x^2 + x}{6}$$ -What I do and don't understand -Thanks to @DJC, I now know this is a standard function to generate Square Pyramidal Numbers, which is part of Faulhaber's formula. Faulhaber's formula appears to be about quickly adding sequential coefficients which all have the same exponent. Very cool. But how does one get from: -$$\sum_{k=1}^{n} k^p$$ -to the spiffy function above? If I'm sounding stupid, how do I make the question better? -Fwiw, I'm in intermediate algebra in the USA. The next course would be Trigonometry or Calculus. (to help people place my current knowledge level) - -REPLY [4 votes]: The binomial coefficient identities -$$\sum_{k=0}^n\binom k1=\binom{n+1}2,$$ -$$\sum_{k=0}^n\binom k2=\binom{n+1}3,$$ -$$\sum_{k=0}^n\binom k3=\binom{n+1}4,$$ -etc. are simple and natural. To derive the sum-of-squares formula you asked about, observe that -$$\binom{n+1}3=\sum_{k=0}^n\binom k2=\sum_{k=0}^n\frac{k^2-k}2=\frac12\sum_{k=0}^nk^2-\frac12\sum_{k=0}^nk=\frac12\sum_{k=0}^nk^2-\frac12\binom{n+1}2,$$ -whence -$$\sum_{k=0}^nk^2=2\binom{n+1}3+\binom{n+1}2=\frac{n(n+1)(2n+1)}6.$$<|endoftext|> -TITLE: Is this algebraic identity obvious? $\sum_{i=1}^n \prod_{j\neq i} {\lambda_j\over \lambda_j-\lambda_i}=1$ -QUESTION [24 upvotes]: If $\lambda_1,\dots,\lambda_n$ are distinct positive real numbers, then -$$\sum_{i=1}^n \prod_{j\neq i} {\lambda_j\over \lambda_j-\lambda_i}=1.$$ -This identity follows from a probability calculation that you can find at the -top of page 311 in the 10th edition of Introduction to Probability Models by Sheldon Ross. -Is there a slick or obvious explanation for this identity? -This question is sort of similar to my previous problem; clearly algebra is not my strong suit! - -REPLY [31 votes]: It's the Lagrange interpolation polynomial for the constant function $1$ evaluated at $0$: -$$\sum_{i=1}^n 1 \prod_{j\neq i} {{\lambda_j-0}\over \lambda_j-\lambda_i}=1$$ -In general, you have that the polynomial below interpolates the data points $(\lambda_i,y_i$): -$$\sum_{i=1}^n y_i \prod_{j\neq i} {{\lambda_j-x}\over \lambda_j-\lambda_i}$$ -See http://en.wikipedia.org/wiki/Lagrange_polynomial<|endoftext|> -TITLE: What is the quickest way to solve this 2nd Order Linear ODE? 
-QUESTION [5 upvotes]: This appeared on my professor's test review, and its taken me hours to, surprise surprise, get the wrong answer. Could someone help me with the method I should be using to solve this? -$$y^{\prime\prime}+y=\tan x$$ - -REPLY [3 votes]: The method below will solve equations of the form: - -$$y'' + y = \frac{f'(x)}{\cos x}$$ - -First notice that $\displaystyle (h \cos x)'' = h'' \cos x - 2 h' \sin x - h \cos x$ -Thus if $\displaystyle y = h \cos x$, then $\displaystyle y'' + y = h'' \cos x - 2h' \sin x$ -Thus $\displaystyle (y'' + y')\cos x = h'' \cos^2 x - 2h' \sin x \cos x = (h' \cos^2 x)'$ -Thus we get $$h' \cos^2 x = f(x) + A$$ -And so - -$$y = \cos x \int (f(x) + A)\sec^2 x \ \text{d}x$$ - -In your case, $\displaystyle f(x) = - \cos x$ and so - -$$y = \cos x \ \int (A - \cos x) \sec^2 x \ \text{d}x = A\sin x - \cos x \ \log (\sec x + \tan x) + B \cos x$$<|endoftext|> -TITLE: On the limits of weakly convergent subsequences -QUESTION [8 upvotes]: Let $\{ f_n \}$ be a sequence in a Hilbert space $L^2(\mathbb{R}^d)$. We say that this sequence converges weakly to an element $f \in L^2$ if $\langle f_n, g \rangle \to \langle f,g \rangle$ for every $g \in L^2$ (where $\langle \cdot,\cdot \rangle$ denotes the inner product on $L^2$). By definition, we are given that the weak limit $f$ is in $L^2$. -However, suppose we know that a sequence "formally" converges weakly to a limit $f$ (i.e. $\langle f_n, g \rangle \to \langle f,g \rangle$ for every $g \in L^2$ for some $f$ which we don't necessarily know yet to be in $L^2$) . -Does this, purely by the characteristics of weak convergence, directly imply that $f \in L^2$? -I think you could also generalize this question to any Hilbert space, provided that taking the inner product of an element possibly not in the Hilbert space makes sense. - -REPLY [6 votes]: Let me elaborate on user3148's answer and comment. -There are two facts: - -A weak Cauchy sequence $(f_{n})$ is bounded. -Every bounded sequence has a weakly convergent subsequence. - -Combining these two facts it is easy to see that every weak Cauchy sequence converges. Recall that a weak Cauchy sequence is a sequence $(f_{n})$ such that $\langle f_{n}, g\rangle$ is Cauchy in $\mathbb{R}$ for all $g$. The condition you impose on the sequence $(f_{n})$ means in particular that it is a weak Cauchy sequence, so it necessarily converges to some $f \in L^2$. - -Proof of 1. This follows immediately from the Banach-Steinhaus theorem applied to the operators $\langle f_{n}, \cdot \rangle: X^{\ast} \to \mathbb{R}$, see Sokal's recent paper for a neat proof of that theorem (without Baire!). -Proof of 2. This is immediate from the version of the Banach-Alaoğlu theorem saying that the unit ball in a separable reflexive space is compact metrizable in the weak topology (= weak$^{\ast}$-topology by reflexivity).<|endoftext|> -TITLE: Given k real n x n matrices with a common eigenvector, is there some nontrivial polynomial equation the entries of the matrices satisfy? -QUESTION [8 upvotes]: The following problem has come up in my research and I don't have the tools (i.e., I don't know Algebraic Geometry, especially over $\mathbb{R}$) to solve it. -Consider two subsets $X$ and $Y$ of $M_n(\mathbb{R})^k$ with $k\geq 3$. The subset $X$ consists of all $k$ tuples of n x n matrices $(A_1,..., A_k)$ such that for any two $A_i$ and $A_j$, there is a vector $v_{ij}$ which is simultaneously an eigenvector for $A_i$ and $A_j$ (but perhaps with different eigenvalues). 
-The subset $Y$ consists of all those elements of $X$ such that $v_{ij}$ can be chosen independently of $i$ and $j$; that is, if $(A_1,\ldots,A_k)\in Y$ then all $k$ matrices have a common eigenvector. (Of course, when $k=1$ or $k=2$, $X = Y$, hence the above restriction on $k$.)
-Now, I have not been able to prove that $X$ is Zariski closed (though I have not tried that hard; it's not so important for my purposes), but I can prove that it's contained in a proper Zariski closed subset of $M_n(\mathbb{R})^k$ (thought of as $\mathbb{R}^{kn^2}$): we have $f_{12} = \det(A_1A_2 - A_2A_1) = 0$ since $A_1A_2 v_{12} = A_2A_1 v_{12}$. Or, since we're working over $\mathbb{R}$, we can put all the $f_{ij}$ into one big polynomial equation $$\sum_{1\leq i < j\leq k} f_{ij}^2 = 0.$$
-What I'd like to know is
-
-Is there a Zariski closed subset $F$ with $Y\subseteq F\subsetneq X$?
-
-Said another way:
-
-Is there a polynomial which is simultaneously satisfied by all $k$-tuples of matrices sharing a common eigenvector, but for which there are elements in $X$ which do not solve it?
-
-Finally, in case it helps, the case I'm most interested in is $n=3$ and $k = 5$, but I imagine the choice of $n$ won't affect the answer greatly and $k=3$ probably contains all the insight necessary to tackle the larger $k$ values.
-Thank you in advance for your help.
-
-REPLY [2 votes]: Let us consider the problem over $\mathbb{C}$ first (because it is a question about joint spectrum and it is easier to tackle at first over an algebraically closed field). Then both $X$ and $Y$ are subsets of $\mathbb{A}^{k n^2} (\mathbb{C}) = MaxSpec(\mathbb{C}[x_{pqs}]_{p,q=1,\ldots,n,s=1,\ldots,k})$. Now a $k$-tuple of matrices, $(A_1,\ldots,A_k)$, has a joint eigenvector if and only if the long block matrix $(\lambda_1 I - A_1 \mid \lambda_2 I - A_2 \mid \cdots \mid \lambda_k I - A_k)$ has rank less than $n$ for some choice $(\lambda_1,\ldots,\lambda_k)$. Now you can consider the maximal minors of the long matrix as polynomials in $(n^2+1) k$ variables. The zeros of the ideal generated by those maximal minors form a Zariski closed subset of $\mathbb{A}^{k (n^2+1)} (\mathbb{C})=MaxSpec(\mathbb{C}[x_{pqs},\lambda_j])$. Let us call this ideal, generated by the maximal minors, $\mathfrak{I}$. Then $\mathfrak{J} = \mathfrak{I} \cap \mathbb{C}[x_{pqs}]_{p,q=1,\ldots,n,s=1,\ldots,k}$ is the ideal of $Y$. Therefore $Y$ is closed.
-Finding the polynomial you are looking for is a question of elimination using Groebner basis techniques. I'd advise Macaulay2 for this.<|endoftext|>
-TITLE: Classifying 1- and 2-dimensional Algebras, up to Isomorphism
-QUESTION [25 upvotes]: I am trying to find all 1- or 2-dimensional Lie Algebras "a" up to isomorphism. This is what I have so far:
-If a is 1-dimensional, then every vector (and therefore every tangent
-vector field) is of the form $cX$. Then, by anti-symmetry and bilinearity:
-$$[X,cX]=c[X,X]=-c[X,X]=0$$
-I think this forces a unique Lie algebra because Lie algebra isomorphisms preserve the bracket. I also know the reals $\mathbb{R}$ are the only 1-dimensional Lie group, so its Lie algebra ($\mathbb{R}$ also) is also 1-dimensional. How can I show that every other 1-dimensional algebra is isomorphic to this one? Do I use preservation of bracket?
-For 2 dimensions, I am trying to use the fact that the dimension of the Lie algebra g of a group $G$ is the same as the dimension of the ambient group/manifold $G$. 
I know that all surfaces (i.e., groups of dimension 2) can be classified as products of spheres and Tori, and I think the only 2-dimensional Lie group is $S^1\times S^1$, but I am not sure every Lie algebra can be realized as the Lie algebra of a Lie group ( I think this is true in the finite-dimensional case, but I am not sure). -I know there is a result out there that I cannot yet prove that all 1- and - 2-dimensional Lie algebras are isomorphic to Lie subalgebras of $GL(2,\mathbb{R})$ (using matrix multiplication, of course); would someone suggest how to show this last? Thanks. - -REPLY [28 votes]: I found myself working on this same problem (for homework), and I think I've written a fairly detailed solution. So I will post it here, in case it is helpful to anyone else. - -Let $\mathfrak{g}$ be a 1-dimensional Lie algebra, and let $\{E_1\}$ be a basis for $\mathfrak{g}$. Then for any two vector fields $X,Y\in\mathfrak{g}$, we have $X=aE_1$ and $Y=bE_1$, for some $a,b\in\mathbb{R}$. Thus, -$$[X,Y]=[aE_1,bE_1]=ab[E_1,E_1]=0$$ -for all $X,Y\in\mathfrak{g}$. Therefore, the only 1-dimensional Lie algebra is the trivial one. The map -$$\varphi:\mathfrak{g}\rightarrow\mathfrak{gl}(2,\mathbb{R})$$ -$$\varphi:aE_1\mapsto -\left(\begin{array}{ll} -a&0\\ -0&0 -\end{array}\right)$$ -is a Lie algebra homomorphism, since -$$\varphi([aE_1,bE_1])=\varphi(0)=\left(\begin{array}{ll} -0&0\\ -0&0 -\end{array}\right)\mbox{, and}$$ -$$[\varphi(aE_1),\varphi(bE_1)]=\left(\begin{array}{ll} -a&0\\ -0&0 -\end{array}\right)\left(\begin{array}{ll} -b&0\\ -0&0 -\end{array}\right)-\left(\begin{array}{ll} -b&0\\ -0&0 -\end{array}\right)\left(\begin{array}{ll} -a&0\\ -0&0 -\end{array}\right)$$ -$$=\left(\begin{array}{ll} -0&0\\ -0&0 -\end{array}\right).$$ - Thus, $\mathfrak{g}$ is isomorphic to the (abelian) Lie subalgebra -$$\varphi(\mathfrak{g})=\left\{\left(\begin{array}{ll} -a&0\\ -0&0 -\end{array}\right)\in\mathfrak{gl}(2,\mathbb{R}):a\in\mathbb{R}\right\}\subset\mathfrak{gl}(2,\mathbb{R}).$$ -Now let $\mathfrak{h}$ be a 2-dimensional Lie algebra, and let $\{E_1,E_2\}$ be a basis for $\mathfrak{h}$. Then for any two vector fields $X,Y\in\mathfrak{h}$, we have $X=aE_1+bE_2$ and $Y=cE_1+dE_2$, for some $a,b,c,d\in\mathbb{R}$. Thus, -$$\begin{array}{ll} -[X,Y]&=[aE_1+bE_2,cE_1+dE_2]\\ -&=a[E_1,cE_1+dE_2]+b[E_2,cE_1+dE_2]\\ -&=ac[E_1,E_1]+ad[E_1,E_2]+bc[E_2,E_1]+bd[E_2,E_2]\\ -&=(ad-bc)[E_1,E_2]. -\end{array}$$ -If $[E_1,E_2]=0$, then we have the trivial 2-dimensional Lie algebra. The map -$$\varphi:\mathfrak{h}\rightarrow\mathfrak{gl}(2,\mathbb{R})$$ -$$\varphi:aE_1+bE_2\mapsto -\left(\begin{array}{ll} -a&0\\ -0&b -\end{array}\right)$$ -is a Lie algebra homomorphism, since -$$\varphi([aE_1+bE_2,cE_1+dE_2])=\varphi(0)=\left(\begin{array}{ll} -0&0\\ -0&0 -\end{array}\right)\mbox{, and}$$ -$$[\varphi(aE_1+bE_2),\varphi(cE_1+dE_2)]=\left(\begin{array}{ll} -a&0\\ -0&b -\end{array}\right)\left(\begin{array}{ll} -c&0\\ -0&d -\end{array}\right)-\left(\begin{array}{ll} -c&0\\ -0&d -\end{array}\right)\left(\begin{array}{ll} -a&0\\ -0&b -\end{array}\right)$$ -$$=\left(\begin{array}{ll} -0&0\\ -0&0 -\end{array}\right).$$Furthermore, this map is faithful (injective). Thus, $\mathfrak{h}$ is isomorphic to the (abelian) Lie subalgebra -$$\varphi(\mathfrak{h})=\left\{\left(\begin{array}{ll} -a&0\\ -0&b -\end{array}\right)\in\mathfrak{gl}(2,\mathbb{R}):a,b\in\mathbb{R}\right\}\subset\mathfrak{gl}(2,\mathbb{R}).$$ -If $[E_1,E_2]\neq0$, then set $E_3=[E_1,E_2]$. 
Then for all $X,Y\in\mathfrak{h}$ we have $[X,Y]=\lambda E_3$ for some $\lambda\in\mathbb{R}$. In particular, for any $E_4\in\mathfrak{h}$ such that $E_4$ and $E_3$ are linearly independent, we have $[E_4,E_3]=\lambda_0 E_3$, with $\lambda_0\neq0$ (otherwise all brackets would vanish and we would be back in the abelian case). Replacing $E_4$ with $1/\lambda_0 E_4$, we now have a basis $\{E_4, E_3\}$ for $\mathfrak{h}$ such that $[E_4, E_3]=E_3$. The map
-$$\varphi:\mathfrak{h}\rightarrow\mathfrak{gl}(2,\mathbb{R})$$
-$$\varphi:aE_4+bE_3\mapsto
-\left(\begin{array}{ll}
-a&b\\
-0&0
-\end{array}\right)$$
-is a Lie algebra homomorphism, since
-$$\varphi([aE_4+bE_3,cE_4+dE_3])=\varphi((ad-bc)E_3)=\left(\begin{array}{ll}
-0&ad-bc\\
-0&0
-\end{array}\right)\mbox{, and}$$
-$$[\varphi(aE_4+bE_3),\varphi(cE_4+dE_3)]=\left(\begin{array}{ll}
-a&b\\
-0&0
-\end{array}\right)\left(\begin{array}{ll}
-c&d\\
-0&0
-\end{array}\right)-\left(\begin{array}{ll}
-c&d\\
-0&0
-\end{array}\right)\left(\begin{array}{ll}
-a&b\\
-0&0
-\end{array}\right)$$
-$$=\left(\begin{array}{ll}
-0&ad-bc\\
-0&0
-\end{array}\right).$$Furthermore, this map is faithful (injective). Thus, $\mathfrak{h}$ is isomorphic to the (non-abelian) Lie subalgebra
-$$\varphi(\mathfrak{h})=\left\{\left(\begin{array}{ll}
-a&b\\
-0&0
-\end{array}\right)\in\mathfrak{gl}(2,\mathbb{R}):a,b\in\mathbb{R}\right\}\subset\mathfrak{gl}(2,\mathbb{R}).$$<|endoftext|>
-TITLE: Relationship between tuples, vectors and column/row matrices
-QUESTION [9 upvotes]: I am taking a course in linear algebra at the moment, and the book I have uses $1\times n$ matrices, $n\times 1$ matrices and $n$-tuples to represent vectors. In addition, I have been taught that $1\times n$ and $n\times 1$ matrices are vectors.
-What, then, is the difference between an $n$-tuple and a $1\times n$ matrix? What do we need tuples for, that we can't use matrices for?
-I see that the product between tuples is defined in another way than products between matrices. The product between two $1\times n$ matrices isn't even defined (if $n\neq 1$). But this we can easily solve by transposing one of them.
-I hope you can help to clarify this for me.
-
-REPLY [13 votes]: As with many aspects of linear algebra, this aspect is greatly cleared up by working in a coordinate-independent fashion. What an $n \times m$ matrix really is, is a particular representation of a linear transformation $T$ from a vector space of dimension $m$ to a vector space of dimension $n$. To write down such a representation you need a basis of both the source and the target vector spaces.
-If $V$ is a single $n$-dimensional vector space, then
-
-$n \times n$ matrices generally denote linear transformations $T : V \to V$ with respect to a basis $e_1, ... e_n$. Note that we are using the same basis for both source and target.
-$n \times 1$ matrices generally denote elements of $V$. Note that this is the same thing as a linear transformation $k \to V$ where $k$ is the base field (I assume $k = \mathbb{R}$ here).
-$1 \times n$ matrices generally denote linear transformations $V \to k$, otherwise known as elements of the dual space $V^{\ast}$.
-
-When we pick a basis $e_1, ... e_n$ of $V$, it follows that every element $v \in V$ can be uniquely expressed in the form
-$$v = \sum v_i e_i.$$
-Now (and this is somewhat confusing, but it is a lesson well worth learning) the coefficients $v_i$ actually define linear transformations $V \to k$; in other words, they define distinguished elements of $V^{\ast}$ called the dual basis $e_i^{\ast}$ associated to $e_i$. 
Again, this is confusing, so I'll repeat: $e_i^{\ast}$ is the linear transformation $V \to k$ which sends a vector $v$ to the component $v_i$ of $e_i$ in the unique representation of $v$ in the basis $e_i$. -The problem with working with matrices instead of linear transformations is that nobody ever tells you about the dual basis, and generally people treat the basis and the dual basis as if they were the same thing, which they're not; they transform differently under change of coordinates. When you take the transpose of a matrix, what you are actually doing is switching basis elements with dual basis elements. This operation is coordinate-dependent, and nobody ever tells you this. (Another reason why you can go a long time without ever learning this lesson is that it doesn't matter if $V$ has an inner product on it with respect to which $e_i$ is an orthonormal basis, since then this operation is coordinate-independent with respect to orthogonal change of coordinates.) -You can multiply a $n \times 1$ matrix with a $1 \times n$ matrix to get a $1 \times 1$ matrix; this is a basis-dependent way of talking about the dual pairing $V^{\ast} \times V \to k$ given by evaluating a linear transformation $V \to k$ at a given element of $V$. Note that the dual pairing is coordinate-independent, and it is determined by what it does to a basis of $V^{\ast}$ and a basis of $V$. Predictably it sends $(e_i^{\ast}, e_j)$ to $1$ if $i = j$ and $0$ otherwise. -You can also multiply a $n \times 1$ matrix with a $1 \times n$ matrix in the other direction to get an $n \times n$ matrix; this is a basis-dependent way of talking about the isomorphism between $\text{End}(V)$ (the space of endomorphisms of $V$, or linear transformations $V \to V$) and the tensor product $V^{\ast} \otimes V$. Explicitly, the isomorphism is as follows: if $T$ is a linear transformation with matrix $a_{ij}$, so that -$$T(e_i) = \sum e_j a_{ji}$$ -then it gets sent to the element $\sum a_{ji} e_i^{\ast} \otimes e_j \in V^{\ast} \otimes V$. The dual pairing then gives a pairing $V \times (V^{\ast} \otimes V) \to V$ which is precisely the evaluation of a linear transformation at an element of $V$. Again, this is probably extremely confusing, but, again, it is a lesson well worth learning; it is the abstract way to talk about representing a linear transformation by a matrix. - -What the word "vector" means is also worth clearing up. A vector is just an element of a vector space. Here that means an element of $V$, but sometimes it will actually mean an element of $V^{\ast}$. This is a bad habit on the part of people who work with coordinates; they should actually call these dual vectors, since vectors and dual vectors (column and row vectors) transform differently under change of coordinates.<|endoftext|> -TITLE: Simplifying $\sum 2^k \tan(2^k x)$ -QUESTION [5 upvotes]: Simplify $\sum\limits_{k = 0}^n {{2^k}\tan ({2^k}x)}$ which $k \in \{ 0,1,...,n + 1\} ,{2^k}x \notin \{ 0,\frac{\pi }{2}\}$ - -REPLY [11 votes]: Hint: $\ln(\cos(x))'=\tan(x)$ and Recursion. -Alternative route: Use the identity $\cot(x) - 2\cot(2x)=\tan(x)$ and telescoping.<|endoftext|> -TITLE: How to compute the series $\sum_{n=0}^\infty q^{n^2}$? -QUESTION [25 upvotes]: Let $q\in (0,1)$. Is there a way of computing the series -$$ -\sum_{n=0}^\infty q^{n^2} -$$ -explicitly? Is there at least a nice accurate estimate? 
-All I could get is the estimate
-$$\sqrt{\frac{\pi}{4\cdot\mathrm{ln}\frac{1}{q}}}\leq\sum_{n=0}^\infty q^{n^2}\leq 1+\sqrt{\frac{\pi}{4\cdot\mathrm{ln}\frac{1}{q}}}$$
-via integration (quite possibly flawed).
-For $q=\frac{1}{2}$, Maple gives the values
-$$
-1.064467020\leq 1.564468414\leq 2.064467020,
-$$
-showing that my estimate is not very precise. (Of course, the sum of the two errors will always be $1$. Here both errors are coincidentally almost exactly $\frac{1}{2}$.)
-
-REPLY [18 votes]: As Qiaochu says, the Euler-Maclaurin formula is useful here. It gives the error in the approximation
-$$\sum_{n=0}^{\infty} q^{n^2} \approx \sqrt{\frac{\pi}{4 \log (1/q)}} + \frac{1}{2}$$
-to be exactly
-$$ \lim_{a \to \infty} \int_0^a 2xq^{x^2} (\log q) \left(x - \lfloor x \rfloor - \frac{1}{2}\right) dx.$$
-Since $q^{x^2} \to 0$ quite rapidly, the value of this integral can be approximated closely by using small values of $a$.
-For example, if $q = 1/2$, the integral (i.e., the error in the approximation) evaluates to $1.39417 \times 10^{-6}$ (via Mathematica).<|endoftext|>
-TITLE: Spherical harmonics for dummies
-QUESTION [25 upvotes]: Adding to the "for dummies" series of questions.
-The real spherical harmonics are orthonormal basis functions on the surface of a sphere.
-I'd like to fully understand that sentence and what it means.
-Still grappling with
-
-Orthonormal basis functions (I believe this is like how the Fourier Transform's basis functions are sines and cosines, and sin is orthogonal to cos, so the components can have a zero inner product.)
-"... are orthonormal basis functions ... on the surface of a sphere".
-
-
-What sphere? Where does the sphere come from? Do you mean for each position on the sphere, we have a value? Is the periodicity in space on the sphere exploited? Is that how we get the higher order terms?
-
-REPLY [19 votes]: I think the point that was confusing me (the missing link) was that spherical harmonic functions are solutions of Laplace's differential equation:
-$$\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}+\frac{\partial^2u}{\partial z^2}=0$$
-Orthogonal means the functions "pull in different directions". Just as in linear algebra orthogonal vectors "pull" in completely "distinct" directions in $n$-space, it turns out that orthogonal functions "help you reach completely distinct values", where the resultant value (sum of functions) is again a function.
-SH are based on the associated Legendre polynomials (which are a tad more funky than the Legendre polynomials; namely, more distinct functions are defined within each band in the associated case).
-The Legendre polynomials themselves, like SH, are orthogonal functions. So if you take any 2 functions from the Legendre polynomial set, they're going to be orthogonal to each other (the integral on $[-1,1]$ is $0$), and if you add scaled copies of one to the other, you're going to be able to reach an entirely distinct set of functions/values than you could with just one of those basis functions alone.
-Now the sphere comes from the idea that SH functions use the Legendre polynomials (but Legendre polynomials are 1D functions), and the specification of spherical harmonics is a function value for every $(\phi, \theta)$. There is no "sphere" per se; it's like if you say "there is a value for every point on the unit circle", it means you trace a circle around the origin and give each point a value.
-What is meant is that every point on a unit sphere has a numeric value. 
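-To make "a value for every point on the sphere" concrete before looking at pictures, here is a minimal numerical sketch. It is an illustration only: it assumes SciPy's sph_harm convention (order $m$ before degree $l$, azimuthal angle before polar angle) and one common real form $\sqrt{2}\,(-1)^m\,\mathrm{Re}\,Y_l^m$ for $m>0$:
-import numpy as np
-from scipy.special import sph_harm
-l, m = 2, 1                             # degree (band) and order
-theta = np.linspace(0, 2 * np.pi, 8)    # azimuthal angle (SciPy's first angle argument)
-phi = np.linspace(0, np.pi, 5)          # polar angle (SciPy's second angle argument)
-t, p = np.meshgrid(theta, phi)
-Y = sph_harm(m, l, t, p)                # complex Y_l^m sampled at each direction (t, p)
-Y_real = np.sqrt(2) * (-1) ** m * Y.real   # real spherical harmonic for m > 0
-print(Y_real.shape)                     # one plain number per sampled point on the sphere
-Each entry of Y_real is the single number attached to one direction $(\phi, \theta)$; the color pictures below are just these numbers mapped onto a color scale.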
If we associate a color to every point on the sphere, you get a visualization like this:
-(figure omitted: a sphere colored according to the values of a spherical harmonic)
-This page shows a visualization where the values of the SH function are used to MORPH THE SPHERE (which is part of what was confusing me earlier). But just because a function has values for every point on the sphere doesn't mean there is a sphere.<|endoftext|>
-TITLE: Amalgamated Free Product: Practical Uses.
-QUESTION [5 upvotes]: Suppose $H$ is embedded in $G$ and $H'$ is isomorphic to $H$ and embedded in $G'$. Then we can simultaneously embed $H$, $H'$, $G$ and $G'$ into a single object (the amalgamated free product) such that $H$ and $H'$ become identified. This seems similar to the Isomorphism Extension Theorem for fields which is important for developing Galois Theory. What are the practical uses of the amalgamated free product?
-
-REPLY [3 votes]: Another important use of amalgamated products arises from groups acting on trees. One easy way to interpret $G=\mbox{SL}_2(\mathbb{Z})$ as an amalgam is to observe how it naturally acts on a tree embedded in the upper half plane $H$ (the tree being simply the boundary of the tessellation of $H$ by fundamental domains of the action of $G$).
-See Serre's "Trees" ("Arbres, Amalgames, SL_2"), or these nice lecture notes.<|endoftext|>
-TITLE: convex function in open interval is continuous
-QUESTION [11 upvotes]: How can I prove that a convex function $f$ defined on some open interval $C$ is continuous on $C$?
-Thank you.
-
-REPLY [10 votes]: For each $x_0 \in C$, $f$ is both left- and right-differentiable at $x_0$, that is,
-$$
-\mathop {\lim }\limits_{x \uparrow x_0 } \frac{{f(x) - f(x_0 )}}{{x - x_0 }} \in \mathbb{R} \;\; {\rm and} \;\; \mathop {\lim }\limits_{x \downarrow x_0 } \frac{{f(x) - f(x_0 )}}{{x - x_0 }} \in \mathbb{R},
-$$
-respectively. For a proof, see Theorem 14.5(a) on p. 248 of this book or Theorem 1(2) here. It follows that $f$ is both left- and right-continuous at $x_0$, hence continuous there.
-Remark: A convex function on a closed interval need not be continuous at the end points (for example, the function $f$ on $[-1,1]$ defined by $f(x)=x^2$ for $|x|<1$, $f(x)=2$ for $x \in \lbrace -1,1 \rbrace$).<|endoftext|>
-TITLE: Stronger than strong-mixing
-QUESTION [6 upvotes]: I have the following exercise:
-"Show that if a measure-preserving system $(X, \mathcal B, \mu, T)$ has the property that for any $A,B \in \mathcal B$ there exists $N$ such that
-$$\mu(A \cap T^{-n} B) = \mu(A)\mu(B)$$
-for all $n \geq N$, then $\mu(A) = 0$ or $1$ for all $A \in \mathcal B$"
-Now the back of the book states that I should fix $B$ with $0 < \mu(B) < 1$ and then find $A$ using the Baire Category Theorem. Edit: I'm now pretty sure that this "$B$" is what "$A$" is in the required result.
-Edit: This stopped being homework so I removed the tag. Any approach would be nice. I have some idea where I approximate $A$ with $T^{-n} B^C$ where the $n$ will be an increasing sequence and then take the $\limsup$ of the sequence. I'm not sure if it is correct. I will add it later on.
-My attempt after @Did's comment: "proof":
-First pick $B$ with $0 < \mu(B) < 1$. 
Then set $A_0 = B^C$ and
-determine the smallest $N_0$ such that
-$$\mu(A_0 \cap T^{-N_0} B) = \mu(A_0) \mu(B)$$
-Continue like this and set
-$$A_k = T^{-N_{k - 1}} B^C$$
-Now we note that the $N_k$ are a strictly increasing sequence, since
-suppose not, say $N_{k} \leq N_{k - 1}$; then
-$$\mu \left ( T^{-N_{k - 1}} B^C \cap T^{-N_{k - 1}} B \right ) = 0 \neq
-\mu(B^C) \mu(B) > 0$$
-Set $A = \limsup_n A_n$, then note that
-\begin{align}
-\sum_n \mu(A_n) = \sum_n \mu(B^C) = \infty
-\end{align}
-So $\mu(A) = 1$, by the Borel-Cantelli lemma. Well, not yet, because
-we are also required to show that the events are independent, so it is
-sufficient to show that $\mu(A_{k + 1} \cap A_k) = \mu(A_{k + 1})\mu(A_k)$
-We know that $\mu(T^{-N_k} B^C \cap T^{-N_{k + 1}} B) = \mu(B^C)\mu(B)$. So
-does a similar result now hold if we replace $B$ with $B^C$ in the
-second part?
-Note:
-\begin{align}
-\mu(A \cap T^{-M} B^C) &= \mu(A \setminus (T^{-M} B \cap A))\\\
-&= \mu(A) - \mu(A \cap T^{-M} B)\\\
-&= \mu(A) - \mu(A)\mu(B) \\\
-&= \mu(A)\mu(B^C)
-\end{align}
-which is what was required.
-For this $A$ and $B$ we can find an $M$ and a $k$ such that $N_k \leq
-M < N_{k + 1}$. Now note that $\limsup_n A \cap T^{-M} B = \limsup_n
-(A \cap T^{-M} B)$.
-Further,
-$$\sum_n \mu(A_n \cap T^{-N_{k +1}} B) = \mu(A_0 \cap T^{-N_{k + 1}} B) +
-\ldots + \mu(A_{k + 1} \cap T^{-N_{k + 1}} B) < \infty$$
-So again by the Borel-Cantelli Lemma we have
-$\mu(\limsup_n A_n \cap T^{-M} B) = 0$.
-Thus we get
-$$\mu(A) \mu(B) = \mu(B) = \mu(A \cap T^{-M} B) = 0$$
-which is a contradiction since $\mu(B) > 0$. So, such $B$'s
-violate the condition.
-Added: Actually the metric on the space of events, $d(A,B) = \mu(A \Delta B)$, can work together with Baire's Category Theorem.
-
-REPLY [2 votes]: Let $\mathcal{B}$ be the space of events; then $\mathcal{S}= \mathcal{B}$ (mod $\mu$) equipped with the metric $d(A,B)=\mu(A\Delta B)$ is a complete metric space (where if $(A_n)$ is a Cauchy sequence, then for any subsequence $(A_{n_k})$ with $\sum_k \mu(A_{n_k}\Delta A_{n_{k+1}})<\infty$, $A_n\to \limsup_k A_{n_k}$).
-Let $B \in \mathcal{S}$ be such that $0<\mu(B)<1$, and define
-$\Lambda_N = \{A \in \mathcal{S}:\mu(A\cap T^{-n}B)=\mu(A)\mu(B)\space \space(\forall n\geq N) \}$
-We shall prove that $\Lambda_N$ is closed and meagre for all $N \in \mathbb{N}$:
-If $(A_m) \subset \Lambda_N$ and $A_m \to A$, then for any $C \in \mathcal{S}$, $A_m\cap C \to A \cap C$, hence for any $n \geq N$, $\mu(A \cap T^{-n}B)=\lim_m\mu(A_m \cap T^{-n}B)=\lim_m \mu(A_m) \mu(B)=\mu(A) \mu(B)$, thus $A \in \Lambda_N$, so $\Lambda_N$ is closed. (Observe that $\mu (\lim_m A_m)= \lim_m \mu(A_m)$ in $(\mathcal{S},\Delta)$.)
-For any $A \in \Lambda_N$ with $\mu (A) \neq 1$ and any $\epsilon >0$, take $C \subset A^c \cap T^{-N}B^c$ such that $0<\mu(C)< \epsilon$; then $\mu((A\sqcup C) \Delta A)= \mu (C)< \epsilon$.
-(Observe that $\mu(A^{c} \cap T^{-N}B^c)= \mu(A^c) \mu(B^c) \neq0$, and since there exist $k$ and a sequence $N_1 ... N_k$ such that $\mu((A^{c} \cap T^{-N}B^c) \cap T^{-N_1}B \cap ... \cap T^{-N_k}B)= \mu(A^c) \mu(B^c) \mu(B)^k< \epsilon$, we can find such a $C$.)
-But $\mu((A \sqcup C) \cap T^{-N}B)=\mu(A \cap T^{-N}B)=\mu(A) \mu(B) \neq \mu(A \sqcup C)\mu(B)$, hence $A\sqcup C \notin \Lambda_N$, so $\Lambda_N$ is meagre.
-Now, by the Baire category theorem, $\bigcup_{N \in \mathbb{N}} \Lambda_N \subsetneq \mathcal{S}$, hence there must be some $A \in \mathcal{S} - (\bigcup_{N \in \mathbb{N}} \Lambda_N)$ such that for any $N$, there exists $n \geq N$ for which $\mu(A \cap T^{-n}B) \neq \mu(A) \mu(B)$.
-Remark: $(\mathcal{S},\Delta) \cong (\{\chi_A:A \in \mathcal{B} \},\| \cdot \|_{L^1})$.<|endoftext|>
-TITLE: Has this Extension to a Series been Studied Before?
-QUESTION [9 upvotes]: We know from Calculus what a series is, and you might have seen infinite products as well. But the Elementary Symmetric Polynomials give an entire spectrum of operators between a sum and product over a finite set.
-Given a sequence $s_n$ and an $a \in \mathbb{N}$ (representing which operator we're picking), define $S_n$ to be the set $\{s_k | 1 \leq k \leq n\}$. Then our "generalized series" is
-$T_a(s) = \lim_{n \to \infty} e_a(S_n)$
-If $a = 0$, you get $1$ no matter what the set is. If $a = 1$, then you get a standard Calculus series. If $a = 2$, then we get
-$T_2(s) = \sum_{n = 1}^\infty s_n \left(\sum_{m = n+1}^\infty s_m\right)$
-If $a=3$, then
-$T_3(s) = \sum_{n=1}^\infty s_n \left(\sum_{m = n+1}^\infty s_m \left(\sum_{k = m+1}^\infty s_k \right) \right)$
-and so on. I can provide a Mathematica function I wrote to compute them, if you like. My questions are:
-
-Is there a standard name or paper for these?
-I can argue to myself that if $a < b$ and $T_b(s)$ exists, then $T_a(s)$ must exist as well. Does there exist a sequence $s_n$ and a $b > 1$ such that $T_a(s)$ exists for every $a < b$, but $T_b(s)$ doesn't?
-You can't recover an infinite product with this $T$ function. Short of providing a new function $R$ that swaps $\sum$ for $\prod$ and multiplication for addition in the $T_2$ and $T_3$ expansions above, is there an easy way to recover them?
-
-Thank you for your time,
--- Michael Burge
-
-REPLY [3 votes]: There's a book on the subject: Symmetric Functions and Hall Polynomials, Macdonald. You can find all sorts of bases for the linear space of all symmetric functions, and they lead to the representation theory of $S_n$.<|endoftext|>
-TITLE: Inequality on balls/bins with nested logs
-QUESTION [6 upvotes]: Let $k = \lceil \frac{3 \ln n}{\ln \ln n}\rceil$. How does one show that
-$$ \left(\frac{e}{k}\right)^k \frac{1}{1-\frac{e}{k}} \le n^{-2} ? $$
-This is from p. 44 of Motwani and Raghavan, Randomized Algorithms, where they're talking about ball/bin probabilities. The $\left(\frac{e}{x}\right)^x$ is motivated but the $k$ just comes out of nowhere. Ignoring that for the moment, I don't see the derivation of the inequality; even assuming that the 3 is really an approximation for $e$, and throwing out the $\log\log n$... and the geometric series, I get (letting $k^{*}= e \log n$):
-$$
-\left(\frac{e}{k^{*}}\right)^{k^{*}}\approx n^{-\log\log n} \le n^{-2}
-$$
-for $n \ge e^{e^2}$.
-But I changed things quite a bit, and this is quite a large constant ($\approx 3^{27}$).
-So two questions:
-
-how does one derive the full inequality?
-why that particular $k$?
-
-REPLY [7 votes]: The inequality does not hold. If you set $n=5.6 \times 10^{6}$ (close to $ e^{e^e}$) with $k=17$, you will find that the left hand side is larger than $3.4 \times 10^{-14}$ and the right hand side is smaller than $3.2 \times 10^{-14}$.
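A quick numerical check of this counterexample (a minimal sketch; the only assumption is that the logarithm is natural, as in the book):

import math

n = 5.6e6
k = math.ceil(3 * math.log(n) / math.log(math.log(n)))   # gives k = 17
lhs = (math.e / k) ** k / (1 - math.e / k)
rhs = n ** (-2)
print(k, lhs, rhs, lhs <= rhs)   # 17  ~3.5e-14  ~3.2e-14  False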
To see why the inequality is wrong and why $k=\lceil \frac{4 \log n}{\log \log n} \rceil$ works, read on:
-To get a "good" value for $k$, you want the two sides of the inequality to become asymptotically equal as $n\to\infty$. That is how you derive the particular form. Taking the log of your inequality, we get (I switched the sign of the inequality because I rather like positive numbers)
-$$ L= k (\log k -1) +\log(1- e/k) \geq 2 \log n .$$
-The most important term (for $n,k \to \infty$) is $k\log k$. So we would like to have
-$$ k\sim \frac{2 \log n}{\log k}.$$
-This can be solved iteratively. The first approximation is $k_0 = 2 \log n$. We substitute this approximation into the right hand side and obtain $k_1 = 2 \log n/ \log \log n$ (up to an irrelevant additive term $\log 2$ in the denominator). This is already almost the estimate; changing the 2 to a 3 and adding the ceiling function are needed to have some room and make the inequality valid (and not only the expressions asymptotically equal).
-Now, let us check the inequality and set $k= c \log n /\log \log n$ (we will determine $c$ in the end). We obtain
-$$ L= \frac{c \log n}{\log \log n} ( \log c + \log \log n - \log \log \log n -1) + \log (1- e \log \log n /c \log n).$$ We will use that $(\log c -1) > 0$ (for $c>e$), $(\log \log \log n / \log \log n) \leq 1/e$ (with the maximum attained for $n=e^{e^e}$), and $\log (1- e \log \log n /c \log n)>\log(c-1)-\log c$ (with the minimum at $n=e^e$). Thereby, we can show that $$L \geq c (1- 1/e) \log n + \log (c-1) - \log c .$$
-The lowest value of $c$ for which we can hope that the inequality is always fulfilled is given by $c(1-1/e) =2$, i.e., $c= 2e/(e-1) \approx 3.2$. Assuming that $n>e$ so that $\log n >1$, we can further bound the expression
-$$L \geq [c (1-1/e) + \log (c-1) - \log c ] \log n \geq 2 \log n$$
-if $c\gtrsim 3.67$.<|endoftext|>
-TITLE: Want to learn differential geometry and want the sheaf perspective
-QUESTION [14 upvotes]: I would like to learn some differential geometry: basically manifolds, differentiable manifolds, smooth manifolds, De Rham cohomology and everything else that is pretty much part of a course in differential geometry. I do however know a good deal of category theory and algebraic geometry, and I would therefore like to learn differential geometry from a more "abstract" (categorical and algebraic) setting. Are there any good books for this? I was able to find a book called "Sheaves on Manifolds" but I don't know if it is a good book for learning the subject (AFAIK, the book might assume prior knowledge of differential geometry)
-/edit/ Or just lecture notes.
-
-REPLY [2 votes]: It is a bit late, but a book that fits perfectly with your demands was published after this question was asked:
-
-Name: "Manifolds, Sheaves, and Cohomology"
-Author: Prof. Dr. Torsten Wedhorn, Department of Mathematics, Technische Universität Darmstadt, Germany.
-Springer's Link: https://www.springer.com/gp/book/9783658106324
-
-My opinion about this book is that the author achieves a perfect combination of abstract, topological and geometric flavor in a very gentle way that is not usually seen in advanced texts in differential geometry. Prerequisites are summarized in the appendices and they are not very ambitious. Another plus point is that Prof. Dr. Torsten Wedhorn's research focuses on algebraic geometry (as far as I know), so you won't have any problems with the way he introduces the ideas.<|endoftext|>
-TITLE: Help complete a proof of Dirichlet on biquadratic character of 2?
-QUESTION [7 upvotes]: I am stuck proving the theorem that there exists $x$ with $x^4 \equiv 2 \pmod p$ iff $p$ is of the form $A^2 + 64B^2$.
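(Before diving in: a quick brute-force check in Python, just a sanity sketch I ran, restricting to primes $p \equiv 1 \pmod 4$, which is the setting below, agrees with the statement for all $p < 500$:)

import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

for p in range(5, 500):
    if not is_prime(p) or p % 4 != 1:
        continue
    has_4th_root = any(pow(x, 4, p) == 2 for x in range(1, p))
    representable = any((p - a * a) % 64 == 0 and is_square((p - a * a) // 64)
                        for a in range(math.isqrt(p) + 1))
    assert has_4th_root == representable, p
print("agrees for all p < 500")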
-So far I have got this (and I am not sure if it's correct)
-
-Let $p = a^2 + b^2$ be an odd prime,
-
-$\left(\frac{a}{p}\right) = \left(\frac{p}{a}\right) = \left(\frac{a^2 + b^2}{a}\right) = \left(\frac{b^2}{a}\right) = 1$
-
-since $p \equiv 1 \pmod 4$
-
-$\left(\frac{a+b}{p}\right) = \left(\frac{(a+b)^2-2ab}{a+b}\right) = \left(\frac{2}{a+b}\right) = (-1)^{((a+b)^2-1)/8}$
-
-using the Jacobi symbol and the second supplement of quadratic reciprocity.
-
-$(a+b)^{(p-1)/2} = (2ab)^{(p-1)/4}$
-
-since $(a+b)^2 \equiv 2ab \pmod p$
-
-and the last step which I'm stuck on now is: for $p = a^2 + b^2$ let $x^2 \equiv -1 \pmod p$; then $2^{(p-1)/4} = x^{ab/2}$. And I don't see how to prove the theorem with this result.
-
-REPLY [4 votes]: We can assume that 2 is a quadratic residue mod $p$, and so that $p \equiv 1 \pmod 8$, and this implies that if we pick $a$ odd and $b$ even then $b$ is a multiple of 4. We have to prove that $b$ is a multiple of 8.
-First observe that as $x^2 \equiv -1 \pmod{p}$ and $a^2 + b^2 = p$ we have
 $$ \left(\frac{a+b}{p}\right) \equiv (-1)^{((a+b)^2-1)/8} \equiv x^{(p+2ab-1)/4} \pmod{p} $$
-note that the exponent of $x$ is even because $b$ is a multiple of 4, so we can choose the sign of the base as we wish.
-In addition we also have
 $$ -a^2 \equiv b^2 \pmod{p} $$
-so $(xa)^2 \equiv b^2 \pmod{p}$ and
 $$ax \equiv \pm b \pmod{p}$$
-and picking the sign of $x$ we can assume that $ax \equiv b \pmod{p}$. So $ab \equiv a^2 x$ and
 $$ (ab)^{(p-1)/4} = a^{(p-1)/2} x^{(p-1)/4} \equiv x^{(p-1)/4} \pmod{p} $$
-because $a$ is a quadratic residue mod $p$.
-The identity you obtained:
 $$\left(\frac{a+b}{p}\right) = (a+b)^{(p-1)/2} \equiv (2ab)^{(p-1)/4} \pmod p $$
-now becomes
 $$x^{(p+2ab-1)/4} \equiv 2^{(p-1)/4} x^{(p-1)/4} \pmod p $$
-and in consequence
 $$2^{(p-1)/4} \equiv x^{ab/2} \pmod p $$
-As $b$ is a multiple of 4, say $b = 4b'$, we have
 $$2^{(p-1)/4} \equiv (-1)^{ab'} \pmod p $$
-so $2$ is a biquadratic residue iff $b'$ is even, or what is the same thing, iff $b$ is a multiple of 8, and we are done.<|endoftext|>
-TITLE: Given a smooth map which is open, is it a submersion?
-QUESTION [6 upvotes]: A submersion between smooth manifolds is an open map. Is the converse true? That is, is a smooth open map $f:M\to N$ between smooth manifolds a submersion? We can additionally assume that it is surjective, if necessary, because that is the only case I am interested in.
-I can see how it is almost true, taking a chart $U$ about $m\in M$ and considering a chart $V$ about $f(m)\in N$ contained in the image of $U$ (which is open). Using the isomorphism between the charts and the tangent spaces, I'm fairly sure this gives us a submersion, but I feel there is a slight gap in my argument. So either: does my argument work, or is it true but my argument is incomplete, or is it false?
-
-REPLY [7 votes]: A nonconstant holomorphic function is an open map on $\mathbb{C}\cong\mathbb{R}^2$, but is a submersion only if its derivative is never zero. So for example, $z\mapsto z^2$, a.k.a. $(x,y)\mapsto (x^2-y^2,2xy)$, is a counterexample.<|endoftext|>
-TITLE: A Reverse Citation Index
-QUESTION [5 upvotes]: Given a published mathematics article, is there a way to find its tree of subsequent citations starting from the date of publication? Does such a (reverse bibliography) database exist?
-I'm a student at a relatively well-known university with access to virtually every mathematics journal imaginable, but I can't seem to find such a simple and useful tool.
-Thanks!
-
-REPLY [6 votes]: If you go to the AMS MathSciNet database, from each paper you can access the list of papers citing it by clicking on the "From Reference" link in the "Citation" box in the upper right of the page.
-See for instance the 255 papers citing Pierre Deligne's first paper on the Weil Conjectures here:
-http://www.ams.org/mathscinet/search/publications.html?refcit=340258&loc=refcit<|endoftext|>
-TITLE: Distinct Sylow $p$-subgroups intersect only at the identity, which somehow follows from Lagrange's Theorem. Why?
-QUESTION [26 upvotes]: It seems that often in using counting arguments to show that a group of a given order cannot be simple, it is shown that the group must have at least $n_p(p^n-1)$ elements, where $n_p$ is the number of Sylow $p$-subgroups.
-
-It is explained that the reason this is the case is because distinct Sylow $p$-subgroups intersect only at the identity, which somehow follows from Lagrange's Theorem.
-
-I cannot see why this is true.
-Can anyone quicker than I tell me why? I know it's probably very obvious.
-Note: This isn't a homework question, so if the answer is obvious I'd really just appreciate knowing why.
-Thanks!
-
-REPLY [4 votes]: In some situations, to prove that groups of order $n$ cannot be simple, you can use the counting argument if all Sylow subgroups have trivial intersection, and a different argument otherwise.
-For example let $G$ be a simple group of order $n=144 = 16 \times 9$. The number $n_3$ of Sylow 3-subgroups is 1, 4 or 16. If $n_3 = 1$ then there is a normal Sylow subgroup, and if $n_3= 4$ then $G$ maps nontrivially to $S_4$, so we must have $n_3 = 16$.
-If all pairs of Sylow 3-subgroups have trivial intersection, then they contain in total $16 \times 8$ non-identity elements, so the remaining 16 elements must form a unique and hence normal Sylow 2-subgroup of $G$.
-Otherwise two Sylow 3-subgroups intersect in a subgroup $T$ of order 3. Then the normalizer $N_G(T)$ of $T$ in $G$ contains both of these Sylow 3-subgroups, so by Sylow's theorem it has at least 4 Sylow 3-subgroups, and hence has order at least 36, so $|G:N_G(T)| \le 4$ and $G$ cannot be simple.<|endoftext|>
-TITLE: What are useful tricks for determining whether groups are isomorphic?
-QUESTION [27 upvotes]: In general, it is not too hard to find isomorphisms between two groups when their order is relatively low. However, as their orders grow, it becomes increasingly irritating to write down their entire Cayley tables and such. Is there a set of tricks that is generally useful when trying to prove that two groups are actually isomorphic? After all, it usually seems easier to prove that they aren't, as you just need to point out one property that doesn't correspond...
-Example: in Armstrong's Groups and Symmetry, it is asked to show that the dihedral group of order 8 and the subgroup of $S_4$ generated by (1234) and (24) are isomorphic. It is easy to send $D_4$'s "single-rotation" element $r$ to $S_4$'s (1234) and $D_4$'s "flipping" element $s$ to $S_4$'s (24), as they are all part of the generating set and their orders coincide, but what is the way to go from here? Also, how far should one go in showing the isomorphism - might pointing out the correspondence in generating elements even be enough?
-
-REPLY [15 votes]: Proving that two groups are isomorphic is a provably hard problem, in the sense that the group isomorphism problem is undecidable. Thus there is literally no general algorithm for proving that two groups are isomorphic. To prove that two finite groups are isomorphic one can of course run through all possible maps between the two, but that's not fun in general.
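(For tiny groups the brute-force search is at least easy to write down; here is a minimal sketch, with each group encoded, in my own hypothetical format rather than anything standard, as a list of elements plus a Cayley table stored as a dict:)

from itertools import permutations

def are_isomorphic(elems_g, mul_g, elems_h, mul_h):
    # Try every bijection G -> H and check that it preserves the product.
    for image in permutations(elems_h):
        f = dict(zip(elems_g, image))
        if all(f[mul_g[(a, b)]] == mul_h[(f[a], f[b])]
               for a in elems_g for b in elems_g):
            return True
    return False

# Example: Z/4 versus Z/2 x Z/2 -- same order, not isomorphic.
z4_elems = list(range(4))
z4_mul = {(a, b): (a + b) % 4 for a in z4_elems for b in z4_elems}
k4_elems = [(a, b) for a in (0, 1) for b in (0, 1)]
k4_mul = {(u, v): (u[0] ^ v[0], u[1] ^ v[1]) for u in k4_elems for v in k4_elems}
print(are_isomorphic(z4_elems, z4_mul, k4_elems, k4_mul))   # False

This checks up to $|G|!$ bijections, which is exactly why it stops being fun as the order grows.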
-For your particular example, there's not much to say. It depends on what you know about $D_4$. If you know a presentation of it, then you can prove that what you've defined is actually a homomorphism which is moreover surjective. Since the two groups have the same order, you're done.
-For recognizing a finite group via a presentation you can sometimes use the Todd-Coxeter algorithm.
-
-REPLY [8 votes]: There are a few ways to show that a homomorphism of groups is an isomorphism:
-
-A homomorphism which is a bijection of sets is an isomorphism.
-A homomorphism with a two-sided inverse is an isomorphism.
-A homomorphism which is surjective as a map of sets and with a trivial kernel is an isomorphism.
-
-Of course, sometimes the hard part is showing that you have a homomorphism at all. For your example of $D_4$ and $S_4$, you need to show the map you specify extends to a homomorphism. It's automatically surjective because you've hit every element in the generating set. Then it suffices to show that it has a trivial kernel. However, the easiest way to show that $D_4$ embeds as a subgroup of $S_4$ is to show that it acts faithfully on a set of 4 elements. This is easy: $D_4$ naturally acts on the vertices of a square, and it's certainly faithful because there's no non-trivial element of $D_4$ which acts trivially.
-
-REPLY [7 votes]: The usual way to show that groups are isomorphic is indeed to simply construct the isomorphism, i.e. construct a function $\varphi$ that is a bijection and a morphism.
-However sometimes you can resort to neater 'tricks', by considering actions of groups and constructing equivalent permutation representations. For instance, to show that $Aut(\mathbf C_2\times \mathbf C_2) \cong \mathbf S_3 \cong \mathbf D_6$ one could notice that all three groups act on three elements. The first one actually consists of all permutations of the non-trivial elements of $\mathbf C_2\times \mathbf C_2$, namely (1,0), (0,1) and (1,1); the second group is by definition the group of all permutations of $\{1,2,3\}$; and the third group acts on a triangle in the plane where once again it consists of all possible permutations of the three vertices of the triangle.
-All three groups act faithfully (only the identity acts trivially), therefore any bijection between these sets (the non-trivial elements of $\mathbf C_2\times \mathbf C_2$, the set $\{1,2,3\}$, the vertices of a triangle) will naturally induce an isomorphism between these groups. It's not really an alternative to constructing the isomorphism of course, it just tells you where to start looking for one.
-Another strategy that I've seen at work when dealing with groups defined by presentations is to first construct an epimorphism (which is rather easy, one only has to verify that all relations still hold) and then show by considerations about the group (e.g. simplicity) that its kernel must be trivial.
-However, it's often very hard to think of such things. Here's something I read: during the golden years of discovery of sporadic simple groups, two groups of equal order were constructed, and even though their isomorphism was suspected and quite a few non-trivial things about these groups had been proven, it remained unclear for quite a while whether or not they were actually equal. (It turned out they were. It would be interesting if someone had a reference for this fact; I forget where I read it.)
---- added:
-Here's how this may help to solve your particular problem. Let $\mathbf D_8$ act on a square with vertices numbered $1,2,3,4$. Note that indeed $a = (1\ 2\ 3\ 4)$ and $b = (2\ 4)$ correspond to valid isometries of the square; let's call them $R$ (for rotation) and $D$ (for reflection along the diagonal). This implies that the map
-$$ \varphi \colon \langle R,D\rangle \to \langle a,b\rangle $$
-is actually an isomorphism: each element of either group uniquely corresponds to some permutation of $\{1,2,3,4\}$ resp. the vertices of the square. Therefore it is sufficient to show that $\mathbf D_8$ is generated by $R$ and $D$ (which is well-known) and you're done.<|endoftext|>
-TITLE: Is there a gap in the standard treatment of simplicial homology?
-QUESTION [18 upvotes]: On MO, Daniel Moskovich has this to say about the Hauptvermutung:
-
-The Hauptvermutung is so obvious that it gets taken for granted everywhere, and most of us learn algebraic topology without ever noticing this huge gap in its foundations (of the text-book standard simplicial approach). It is implicit every time one states that a homotopy invariant of a simplicial complex, such as simplicial homology, is in fact a homotopy invariant of a polyhedron.
-
-I have to admit I find this statement mystifying. We recently set up the theory of simplicial homology in lecture and I do not see anywhere that the Hauptvermutung needs to be assumed to show that simplicial homology is a homotopy invariant. Doesn't this follow once you have simplicial approximation and you also know that simplicial maps which are homotopic induce chain-homotopic maps on simplicial chains?
-
-REPLY [11 votes]: I didn't state it very well - what I meant is that standard algebraic topology textbooks take for granted (or cause the reader to take for granted) that the topology of polyhedra is the same as the topology of simplicial complexes. Sean Tilson's response is spot-on; I'll restate it in my own words.
-By simplicial approximation, continuous maps of polyhedra are homotopic to PL maps. However, homeomorphisms of polyhedra might not be homotopic to PL homeomorphisms, merely to PL maps (the statement that they are is an equivalent formulation of the Hauptvermutung). So a (combinatorial) homotopy invariant of simplicial complexes might a priori fail to be a homotopy invariant of polyhedra, unless one proves also that it's independent of the choice of simplicial approximation (which Matt E. and Qiaochu both say). It isn't enough just to show that it's invariant under combinatorial homotopy of simplicial complexes.
-I apologise for the confusion. My answer was essentially taken from Page 4 of The Hauptvermutung Book - I apologise also for the lack of attribution. I interpret what it says there as the assertion that textbooks don't tend to check independence with respect to the choice of simplicial approximation; and my own experience (or inattentiveness) leads me to suspect that this is indeed the case. I'll look in a library on Monday, and maybe edit this response again.<|endoftext|>
-TITLE: Navigating through the surface of a hypersphere in a computer game
-QUESTION [10 upvotes]: People on StackOverflow didn't seem so into this topic, so I thought I might have better luck here.
-I had the idea of a spaceship game where the world is confined to the surface of a 4-D hypersphere (also called a 3-sphere). Thus, seen from inside, it would look like a 3-D world, but by navigating in every direction, I would never leave the limited volume of the 3-sphere.
-To represent the 3-sphere as a "flat" 3-D space, I use a stereographic projection, which is very simple to implement: just divide the point in the 3-sphere by one minus its $w$ coordinate.
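(In code the projection is a one-liner; a minimal NumPy sketch, assuming points are stored as unit 4-vectors $(x, y, z, w)$:)

import numpy as np

def stereographic(p):
    # Project a unit 4-vector on the 3-sphere into "flat" 3-D space.
    x, y, z, w = p
    return np.array([x, y, z]) / (1.0 - w)   # undefined at the pole w = 1

p = np.array([0.5, 0.5, 0.5, 0.5])           # a point with x^2+y^2+z^2+w^2 = 1
print(stereographic(p))                      # -> [1. 1. 1.]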
-To represent the vertices of the objects I am using normalized 4D vectors, such that $x^2+y^2+z^2+w^2=1$, thus keeping them inside the 3-sphere.
-The first problem to solve was rotation. But I soon figured out that ordinary 3D rotation matrices would suffice to rotate the world around the viewer in the 3D projection, since they do not mess with the $w$ coordinate (pretty much like rotating a sphere around the z-axis also rotates its stereographic projection).
-Then I figured out that any rotation that involved the $w$ coordinate would be equivalent to a translation inside the 3D projection (just not commutative, unlike ordinary 3D translations on "flat" spaces), so I could translate along an axis by using a simple single-plane rotation matrix $(x', y') = (x \cos a - y \sin a,\ x \sin a + y \cos a)$, but pairing $w$ with another axis.
-This is as far as I got, and I could not figure out how to navigate forward, based on the direction the viewer is facing in the projection. I can apply the inverse transform to derive the normalized 4-D vector (call it $F$) the viewer is facing in the hypersphere coordinates, but I don't know how to navigate in that direction by using a $4\times4$ matrix (which is optimal in OpenGL). I could only think of a hackish solution: for every vertex $V$, set $V' = \text{normalize}(dF + V)$, where $d$ is the distance moved forward (in some strange unit I cannot precisely pin down). This only works for small values of $d$; there is no direct correlation between $d$ and the angle variation.
-Thus the question is: how to move forward (using a $4\times 4$ matrix transform) while on the surface of a 4-D hypersphere? In other words: if I am at $(x, y, z, w)$ now and want to be at $(x', y', z', w')$ next (both vectors of norm 1), how can I derive $M$ such that $M \times (x, y, z, w) = (x', y', z', w')$?
-
-REPLY [4 votes]: The $M$ you ask for in your final question is in general non-unique, given the data you specified. The way to see this is as follows:
-Given a point $(x,y,z,w)$ and another $(x',y',z',w')$, add to them the third point $(0,0,0,0)$ in four dimensional space. Assuming that the two original points do not coincide and that they are not antipodes, these three points are not collinear and so define a plane in four dimensional space. So you can take a rotation in that plane that carries $(x,y,z,w)$ to $(x',y',z',w')$ while keeping the directions perpendicular to that plane fixed. Call this transformation $M_1$.
-However, since your ambient space is four dimensional, the space of directions perpendicular to the given plane is also two dimensional ($4-2 = 2$). So you can equally take an arbitrary rotation in that perpendicular plane which fixes all directions perpendicular to it. Call such a rotation $O$. Then you can check that $M_2 = OM_1$ also sends $(x,y,z,w)$ to $(x',y',z',w')$.
-To be explicit, assume your initial coordinate is $(1,0,0,0)$ and the final coordinate is $(0,1,0,0)$.
Then we have
-$$ M_1 = \begin{pmatrix} 0 & -1 & 0 & 0\\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}$$
-If you take $O$, parametrized by $\theta$, to be
-$$ O(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & \cos\theta & \sin\theta \\ 0 & 0 & -\sin\theta & \cos\theta\end{pmatrix}$$
-then you can check that $O(\theta)M_1$ will send $(1,0,0,0)$ to $(0,1,0,0)$ for any $\theta$, but the matrices $O(\theta)M_1$ are all different for $\theta$ in the range $0 \leq \theta < 2\pi$.
-
-So what does this mean physically? Moving "forward" in Euclidean space is not as simple an issue as you think. The analogue of the different $O(\theta)$ in 3-dimensional Euclidean space corresponds to rotating the whole space along the axis of travel! In other words, imagine you have a spaceship in Euclidean space and it spins (with the axis of spinning in the same direction as its travel) while it moves forward. This is what $O(\theta)$ captures, the spinning.
-Without getting too much into the gory details, I will note that this ambiguity lies at the heart of differential/Riemannian geometry, and is closely connected with the notion of parallel transport.
-In any case, using a bit of differential geometry, you can see that the matrix $M_1$ defined above is the "correct" notion of translation if you do not want "spinning".
-
-Okay, enough about that. How to actually implement this? Let's say that the viewer is sitting at coordinates $(x,y,z,w)$. And let's say the viewer is facing in the direction of $(\delta x, \delta y, \delta z, \delta w)$ with $\delta x^2 + \delta y^2 + \delta z^2 + \delta w^2 = 1$. Notice that since the direction the viewer is facing is tangential to the hypersphere, it must be perpendicular to the coordinates, that is
-$$ x\cdot \delta x + y\cdot \delta y + z \cdot \delta z + w \cdot \delta w = 0$$
-Given these two vectors, you can use some linear algebra to complete them to an orthonormal basis of four dimensional space (which can be obtained by solving some linear equations based on orthogonality to the two given vectors); call the two additional vectors $(a,b,c,d)$ and $(a',b',c',d')$. Then your translation matrix should be given by
-$$ M(\phi) = \begin{pmatrix} x & \delta x & a & a'\\ y & \delta y & b & b' \\ z & \delta z & c & c' \\ w & \delta w & d & d'\end{pmatrix} \begin{pmatrix} \cos(\phi) & \sin(\phi) & 0 & 0\\ -\sin(\phi) & \cos(\phi) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix} \begin{pmatrix} x & y & z & w\\ \delta x & \delta y & \delta z & \delta w \\ a & b & c & d \\ a' & b' & c' & d'\end{pmatrix} $$
-A quick explanation: by fixing the orthonormal basis, we can use it to construct an orthogonal transformation to a coordinate system in which the viewer sits at $(1,0,0,0)$ and is facing in the $(0,1,0,0)$ direction. Then all we need to do is to conjugate back the translation matrix for that situation. The $\phi$ parameter measures the (angular) distance you travelled on the hypersphere.<|endoftext|>
-TITLE: Getting generators of a graph's automorphism group
-QUESTION [6 upvotes]: Suppose I have a graph like this
-
-and a list of its automorphisms. How do I go about getting a set of generators for this group?
-
-REPLY [7 votes]: Algorithmically, you take the elements one by one and use them to grow a subgroup. You create strong generating sets so that the size of the subgroup is known and so that membership in the subgroup can be tested easily. You only record a new generator when you get an element not in the current subgroup. You can stop early if you notice the subgroup and the list have the same size (as long as you trust the list is correct).
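(A language-agnostic version of that loop, as a minimal Python sketch with permutations stored as tuples of images; plain closure-by-multiplication stands in for the strong-generating-set machinery that makes real implementations fast:)

def compose(p, q):
    # (p*q)(i) = p(q(i)), permutations as tuples of images of 0..n-1
    return tuple(p[i] for i in q)

def closure(gens, identity):
    group, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(g, s)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

def generators_from_list(elements):
    identity = tuple(range(len(elements[0])))
    gens, group = [], {identity}
    for g in elements:
        if g not in group:                 # record g only if it is genuinely new
            gens.append(g)
            group = closure(gens, identity)
        if len(group) == len(elements):    # stop early: everything is generated
            break
    return gens

Fed the 120 automorphisms of the Petersen graph, this returns a handful of generators (how many depends on the order of the input list).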
-GAP provides a command AsGroup to convert a list of group elements into a group (which can then be asked for small generating sets). In fact, in this example it fairly automatically finds a good generating set.
-Since the list is not too long, you can use:
-g := AsGroup( List( myListOfImages, PermList ) );
-
-This has some performance problems if the list has more like 10,000 elements, but for small groups like this it is instant. It yields:
-Group([ (3,6)(5,7)(8,10), (2,5)(3,4)(7,10)(8,9), (1,2)(3,5)(6,7)(8,10) ])
-
-If you like the image list format for permutations:
-gap> PrintArray( List( GeneratorsOfGroup(g), ListPerm ) );
-[ [ 1, 2, 6, 4, 7, 3, 5, 10, 9, 8 ],
-  [ 1, 5, 4, 3, 2, 6, 10, 9, 8, 7 ],
-  [ 2, 1, 5, 4, 3, 7, 6, 10, 9, 8 ] ]
-
-You can also ask GAP (and grape) directly:
-LoadPackage("grape");
-Petersen := Graph( SymmetricGroup(5), [[1,2]], OnSets, function(x,y) return Intersection(x,y)=[]; end );;
-AutomorphismGroup(Petersen);
-
-The vertices are in a different order than yours, but otherwise everything is good.<|endoftext|>
-TITLE: Does exceptionalism persist as sample size gets large?
-QUESTION [29 upvotes]: Which of the following is more surprising?
-
-In a group of 100 people, the tallest person is one inch taller than the second tallest person.
-In a group of one billion people, the tallest person is one inch taller than the second tallest person.
-
-Put more precisely, suppose we have a normal distribution with given mean $\mu$ and standard deviation $\sigma$. If we sample from this distribution $N$ times, what is the expected difference between the largest and second largest values in our sample? In particular, does this expected difference go to zero as $N$ grows?
-In another question, it is explained how to compute the distribution $MAX_N$ of the maximum, but I don't see how to extract an estimate for the expected value of the maximum from that answer. Though $E(MAX_N)-E(MAX_{N-1})$ isn't the number I'm looking for, it might be a good enough estimate to determine if the value goes to zero as $N$ gets large.
-
-REPLY [8 votes]: The precise version of the question was answered in the affirmative in the paper "Extremes, Extreme Spacings, and Tail Lengths: An Investigation for Some Important Distributions," by Mudholkar, Chaubey, and Tian (Calcutta Statistical Association Bulletin 61, 2009, pp. 243-265). (Unfortunately, I haven't been able to find an online copy.)
-Let $X_{i:n}$ denote the $i$th order statistic from a random sample of size $n$. Let $S_{n:n} = X_{n:n} - X_{n-1:n}$, the rightmost extreme spacing. The OP asks for $E[S_{n:n}]$ when sampling from a normal distribution.
-The authors prove that, for an $N(0,1)$ distribution, $\sqrt{2 \log n}$ $S_{n:n}$ converges in distribution to $\log Z - \log Y$, where $f_{Z,Y}(z,y) = e^{-z}$ if $0 \leq y \leq z$ and $0$ otherwise.
-Thus $S_{n:n} = O_p(1/\sqrt{\log n})$ and therefore converges in probability to $0$ as $n \to \infty$. So $\lim_{n \to \infty} E[S_{n:n}] = 0$ as well. Moreover, since $E[\log Z - \log Y] = 1$, $E[S_{n:n}] \sim \frac{1}{\sqrt{2 \log n}}$. (For another argument in favor of this last statement, see my previous answer to this question.)
-In other words, (2) is more surprising.
-Added: This does, however, depend on the fact that the sampling is from the normal distribution.
The authors classify the distribution of extreme spacings as ES short, if $S_{n:n}$ converges in probability to $0$ as $n \to \infty$; ES medium, if $S_{n:n}$ is bounded but non-zero in probability; and ES long, if $S_{n:n}$ diverges in probability. While the $N(0,1)$ distribution has ES short right tails, the authors show that the gamma family has ES medium right tails (see Shai Covo's answer for the special case of the exponential) and the Pareto family has ES long right tails.<|endoftext|>
-TITLE: The inverse of a certain tricky function
-QUESTION [5 upvotes]: What is the explicit form of the inverse of the function $f:\mathbb{Z}^+\times\mathbb{Z}^+\rightarrow\mathbb{Z}^+$ where $$f(i,j)=\frac{(i+j-2)(i+j-1)}{2}+i?$$
-
-REPLY [5 votes]: Let $i+j-2 = n$.
-We have $f = 1 + 2 + 3 + \cdots + n + i$ with $1 \leq i \leq n+1$. Note that the constraint $1 \leq i \leq n+1$ forces $n$ to be the maximum possible $n$ such that the sum is strictly less than $f$.
-Hence given $f$, find the maximum $n_{max}$ such that $$1 + 2 + 3 + \cdots + n_{max} < f \leq 1 + 2 + 3 + \cdots + n_{max} + (n_{max} + 1)$$ and now set $i = f - \frac{n_{max}(n_{max}+1)}{2}$ and $j = n_{max} + 2 - i$.
-$n_{max}$ is given by $\left \lceil \frac{-1 + \sqrt{1 + 8f}}{2} - 1 \right \rceil$, which is obtained by solving $f = \frac{n(n+1)}{2}$ and taking the ceiling of the positive root minus one (since we want the sum to be strictly smaller than $f$, as we need $i$ to be positive).
-Hence,
-$$
-\begin{align}
-n_{max} & = & \left \lceil \frac{-3 + \sqrt{1 + 8f}}{2} \right \rceil\\\
-i & = & f - \frac{n_{max}(n_{max}+1)}{2}\\\
-j & = & n_{max} + 2 - i
-\end{align}
-$$
-
-REPLY [3 votes]: Since your function seems to be Cantor's pairing function $p(x,y) = \frac{(x+y)(x+y+1)}{2} + y$ applied to $x= j-2, y = i$, and since the inverse of the pairing function is $p^{-1}(z) = (\frac{\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor^2 + 3\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor}{2}-z,z-\frac{\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor^2 + \lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor}{2})$, the inverse of your function is: $f^{-1}(z)=(z-\frac{\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor^2 + \lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor}{2},2+ \frac{\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor^2 + 3\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor}{2}-z)$, which can be a bit ugly. What is your motivation for inverting this function?<|endoftext|>
-TITLE: Working out a Group Presentation
-QUESTION [18 upvotes]: If you have a group (say you have a group table or any other information), is there an algorithm to find a group presentation? What is the general way of finding a presentation of a group?
-
-REPLY [7 votes]: No, in general there's not an algorithm. In this paper, Bridson and I prove that there are finite sets of integer matrices that generate finitely presented groups, but for which there's no algorithm to compute a presentation.<|endoftext|>
-TITLE: Difference between a theorem and a law
-QUESTION [16 upvotes]: There are plenty of theorems out there as well as laws within mathematics. For example, in Boolean algebra:
-Theorems
-
-Idempotent
-Involution
-Theorem of Complementarity
-
-Laws
-
-Commutative
-Associative
-Distributive
-
-There are countless other examples out there, but my real question is this: What makes a theorem a theorem and a law a law?
-
-REPLY [16 votes]: Theorems are results proven from axioms, more specifically those of mathematical logic and the systems in question.
Laws usually refer to axioms themselves, but can also refer to well-established and common formulas such as the law of sines and the law of cosines, which really are theorems.
-In a particular context, propositions are the more trivial theorems, lemmas are intermediate results, while corollaries are results deduced easily from others. However, lemmas and corollaries may be major results on their own.
-Note that a system may be given axioms in more ways than one. For example, we can use the least upper bound axiom to define the real numbers, or we can consider this axiom as a theorem if we were to construct the reals from the rationals using Dedekind cuts and prove it instead. The difference here lies in which axioms we choose to start with.<|endoftext|>
-TITLE: Equilibrium distributions of Markov Chains
-QUESTION [10 upvotes]: I often get confused about
-
-when a Markov chain has an equilibrium distribution;
-when this equilibrium distribution is unique;
-which starting states converge to the equilibrium distribution; and
-how finite and countably infinite Markov chains differ with respect to the above.
-
-(Google isn't quite clearing up my confusion.) Is the following correct/am I missing anything?
-An irreducible Markov chain (finite or countably infinite) has a unique equilibrium distribution if and only if all states are positive recurrent. (What about reducible Markov chains? A reducible Markov chain has a non-unique equilibrium distribution iff all states are positive recurrent?) However, not all starting states necessarily converge to the unique equilibrium, unless the Markov chain is also aperiodic; that is, an irreducible Markov chain converges to its unique equilibrium regardless of initial state, if and only if all states are positive recurrent and aperiodic.
-
-REPLY [14 votes]: For a Markov chain with $N<\infty$ states, the set $I$ of invariant probability vectors is a non-empty simplex in ${\mathbb R}^N$ whose extreme points correspond to the recurrent classes of the chain. Thus, the vector is unique iff there is exactly one recurrent class; the transient states (if any) play absolutely no role (as in Jens's example). The set $I$ is a point, line segment, triangle, etc. exactly when there are one, two, three, etc. recurrent classes.
-If the invariant vector $\pi$ is unique, then there is only one recurrent class and the chain will eventually end up there. The vector $\pi$ necessarily puts zero mass on all transient states. Letting $\phi_n$ be the law of $X_n$, as you say, we have $\phi_n\to \pi$ only if the recurrent class is aperiodic. However, in general we have Cesàro convergence:
-$${1\over n}\sum_{j=1}^n \phi_j\to\pi.$$
-An infinite state space Markov chain need not have any recurrent states, and may have the zero measure as the only invariant measure, finite or infinite. Consider the chain on the positive integers which jumps to the right at every time step.
-Generally, a Markov chain with countable state space has invariant probabilities iff there are positive recurrent classes.
If so, every invariant probability vector $\nu$ is a convex combination of
-the unique invariant vector $m_j$ corresponding to each positive recurrent class $j\in J$, i.e.,
-$$\nu=\sum_{j\in J} c_j m_j,\qquad c_j\geq 0,\quad \sum_{j\in J}c_j=1.$$
-This result is Corollary 3.23 in Wolfgang Woess's Denumerable Markov Chains.<|endoftext|>
-TITLE: Proving that $\mathbf{W}$+$\mathbf{W^{\perp}}$=$\mathbb{R^{n}}$
-QUESTION [11 upvotes]: I am trying to prove that, given a subspace $\mathbf{W}$ in $\mathbb{R^{n}}$, the subspace and its orthogonal complement 'cover' the whole of $\mathbb{R^{n}}$ through '+', where we define $\mathbf{W}$+$\mathbf{W^{\perp}}$ as linear combinations of vectors both in the subspace and in its orthogonal complement. It seems intuitively right, and I can prove that the sum of their dimensions adds up to $n$, but I am not sure how to prove the statement I am after. Thanks!
-
-REPLY [5 votes]: Suppose $W$ has dimension $k$, and start with a basis $v_1,\ldots,v_k$ of $W$. This is a linearly independent subset of $\mathbb{R}^n$, so it may be extended to a basis $v_1,\ldots,v_n$ of $\mathbb{R}^n$.
-Apply the Gram-Schmidt process to $v_1,\ldots,v_n$ to obtain an orthonormal basis $w_1,\ldots,w_n$ for $\mathbb{R}^n$ such that $w_1,\ldots,w_k$ is an orthonormal basis for $W$. Now take any vector $z = a_1 w_1 + \ldots + a_k w_k + a_{k+1} w_{k+1} + \ldots + a_n w_n$ of $\mathbb{R}^n$. The conditions for $z \in W^{\perp}$ are
-precisely
-$\langle z,w_1 \rangle = \ldots = \langle z,w_k \rangle= 0$,
-i.e.,
-$a_1 = \ldots = a_k = 0$.
-Thus $W^{\perp}$ is the span of $w_{k+1},\ldots,w_n$ and
-$\mathbb{R}^n = W \oplus W^{\perp}$.
-
-In the above argument, the key is showing that $\dim W + \dim W^{\perp} = \dim \mathbb{R}^n$ (as Jonas Meyer says in his answer). Another argument for this, valid for any nondegenerate symmetric bilinear form $(u,v) \mapsto B(u,v)$ on a vector space $V$ (not necessarily finite-dimensional) over a field $K$, is given in $\S 4$ of these notes on quadratic forms. Note that a general nondegenerate bilinear form could be isotropic: that is, one may have nonzero vectors $v$ with $B(v,v) = 0$ and thus $Kv \cap (Kv)^{\perp} \neq 0$.
But by definition an inner product is anisotropic, which forces $W \cap W^{\perp} = 0$ for all subspaces $W$.<|endoftext|>
-TITLE: Square-free zeta function zeros
-QUESTION [6 upvotes]: It is a well known fact that the geometric series
-$$1+x+x^2+x^3+\ldots$$
-has the following form
-$$\frac{1}{1-x}$$
-Another possible representation is
-$$\prod_{k=0}^{\infty}\left(1+x^{2^{k}}\right)$$
-This comes from the identity
-$$1+x+x^2+x^3+\ldots+x^{2^{k+1}-1}=\frac{1-x^{2^{k+1}}}{1-x}$$
-now taking the numerator of the rhs we have
-$$1-x^{2^{k+1}}=\left(1-x^{2^{k}}\right)\left(1+x^{2^{k}}\right)=\left(1-x^{2^{k-1}}\right)\left(1+x^{2^{k-1}}\right)\left(1+x^{2^{k}}\right)$$
-proceeding this way we eventually get
-$$\left(1-x\right)\left(1+x\right)\left(1+x^{2}\right)\ldots\left(1+x^{2^{k-2}}\right)\left(1+x^{2^{k-1}}\right)\left(1+x^{2^{k}}\right)$$
-Taking the limit for the geometric series
-$$\sum_{k=0}^{\infty}x^{k}=\prod_{k=0}^{\infty}\left(1+x^{2^{k}}\right)$$
-Now taking the zeta function
-$$\zeta(z)=\prod_{p\in\mathbb{P}}\left(1+\frac{1}{p^{z}}+\frac{1}{p^{2z}}+\frac{1}{p^{3z}}+\ldots\right)$$
-we can express it as
-$$\zeta(z)=\prod_{k=0}^{\infty}\;\prod_{p\in\mathbb{P}}\left(1+\frac{1}{p^{z\;2^{k}}}\right)$$
-Now consider
-$$G(z)=\prod_{k=1}^{\infty}\;\prod_{p\in\mathbb{P}}\left(1+\frac{1}{p^{z\;2^{k}}}\right)$$
-note that now $k\geq 1$ and that $G(z)$ converges absolutely for $z>\frac{1}{2}$
-Can we say, after analytic continuation, that
-$$H(z)=\sum_{k=1}^{\infty}\frac{|\mu(k)|}{k^{z}}=\prod_{p\in\mathbb{P}}\left(1+\frac{1}{p^{z}}\right)$$
-has exactly the same zeros as $\zeta(z)$?
-
-REPLY [10 votes]: Hint: $$ \sum \frac{|\mu(k)|}{k^z} = \frac{\zeta(z)}{\zeta(2z)} $$ for $\Re(z) > 1$
-Check out http://en.wikipedia.org/wiki/Dirichlet_series
-It might help you out.<|endoftext|>
-TITLE: Weak convergence in Sobolev spaces
-QUESTION [6 upvotes]: Consider the inner product $\langle f,g \rangle_{H^1} = \langle f, g \rangle_{L^2} + \sum_{|\alpha|=1} \langle D^\alpha f, D^\alpha g \rangle_{L^2}$ where $\alpha$ is a multi-index and $D$ denotes the weak derivative. Define $H^1(\Omega)$ as the space of functions with finite norm induced by this inner product. It can be shown that $H^1(\Omega)$ is a Hilbert space.
-Now, suppose there exists a sequence of functions $\{ f_n \} \subset H^1(\Omega)$ that converges weakly in $H^1$ to some limit $f \in H^1(\Omega)$. Can I then say that this sequence converges weakly to the same limit under the $L^2$ inner product?
-By an application of the Banach-Alaoglu Theorem, I know that weak convergence of this sequence in $H^1$ will imply strong convergence of a subsequence in $H^1$. And then I think strong convergence of this subsequence in $H^1$ will imply strong convergence in $L^2$ as well.
-However, I'm not sure if anything can be said about the entire sequence under the $L^2$ inner product and its weak/strong convergence properties.
-
-REPLY [5 votes]: I don't see how the simple statement '$L^2(\Omega)$ is a subspace of $H^{-1}(\Omega)$' proves that the weakly converging sequence in $H^1$ converges to the same limit in $L^2$, because the inner products on $L^2$ and $H^1$ are different. Here is my attempt:
-Let $(u_n)_n$ be a sequence converging weakly to $u$ in $H^1$; hence $(u_n)_n$ is bounded in $L^2$, and we can extract a subsequence $(u_{\varphi(n)})_n$ that converges weakly to $v \in L^2$.
-$\forall \phi \in C^1_c(\Omega)$, $\left|\int_\Omega u_{\varphi(n)} \phi'\right| = \left|\int_\Omega u_{\varphi(n)}' \phi\right|\leq C ||\phi||_{L^2}\Rightarrow \left|\int_\Omega v\phi'\right|\leq C ||\phi||_{L^2}$, hence $v$ is weakly differentiable. Furthermore, $\left|\int_\Omega (u_{\varphi(n)}' - v')\phi\right| = \left|\int_\Omega (u_{\varphi(n)} - v)\phi'\right| \xrightarrow{n\rightarrow \infty} 0$ by the weak convergence of $(u_{\varphi(n)})_n$. Thus we have that $(u_{\varphi(n)}')_n$ converges weakly to $v'$ in $L^2$ by the density of $C^1_c$ in $L^2$.
-To conclude, $\forall f\in H^1, \int_\Omega u_{\varphi(n)} f + \int_\Omega u_{\varphi(n)}' f' \xrightarrow{n\rightarrow \infty} \int_\Omega v f + \int_\Omega v' f'$ by the respective weak convergence of $u_{\varphi(n)}$ and $u_{\varphi(n)}'$. The uniqueness of the weak limit then implies $v=u$ in $H^1\subset L^2$.<|endoftext|>
-TITLE: Why isn't $\mathbb{CP}^2$ a covering space for any other manifold?
-QUESTION [23 upvotes]: This is one of those perhaps rare occasions when someone takes the advice of the FAQ and asks a question to which they already know the answer. This puzzle took me a while, but I found it both simple and satisfying. It's also great because the proof doesn't use anything fancy at all but it's still a very nice little result.
-
-REPLY [16 votes]: Here's another argument that has the disadvantage of being less elementary, but the advantage of working on all $\mathbb{C}P^{2k}$ simultaneously. (This also answers Pete's question in the comments.)
-We're going to apply the Lefschetz fixed point theorem, which states the following: Suppose $f:M\rightarrow M$ with $M$ "nice enough" (certainly, this applies to compact manifolds - I think it applies to all compact CW complexes). Then $f$ induces a (linear) map $f_*:H_*(M)/Torsion\rightarrow H_*(M)/Torsion$. Let $Tr(f)\in\mathbb{Z}$ denote the trace of this map. If $Tr(f)\neq 0$, then $f$ has a fixed point.
-Now, we'll show that every diffeomorphism $f:\mathbb{C}P^{2k}\rightarrow \mathbb{C}P^{2k}$ has trace $\neq 0$, so that every diffeomorphism has a fixed point. Believing this for a second, note that every element of $\pi_1(X)$ for a hypothetical space $X$ covered by $\mathbb{C}P^{2k}$ acts by diffeomorphisms, and thus has a fixed point. But it is easy to show that the only element of the deck group which fixes any point must be the identity. It follows that $\pi_1(X)$ is trivial, so $X=\mathbb{C}P^{2k}$.
-So, why does every diffeomorphism of $\mathbb{C}P^{2k}$ have a fixed point? Well, every diffeomorphism (or even homotopy equivalence!) must act as multiplication by $\pm 1$ on each of the $2k+1$ $\mathbb{Z}$s in the cohomology ring of $\mathbb{C}P^{2k}$, and the trace of the induced map is the sum of all the $\pm 1$s. But since there is an odd number of $\pm 1$s, they can't sum to 0 (by, say, checking the parity), so by the Lefschetz fixed point theorem, every diffeomorphism (or even homotopy equivalence) must have a fixed point.
-What about $\mathbb{C}P^{2k+1}$? Now we must investigate using the ring structure of $\mathbb{C}P^{2k+1}$. Since there is a single multiplicative generator, once we know what happens on $H^2(\mathbb{C}P^{2k+1})$ we know what happens everywhere. It's easy to see that every orientation preserving homotopy equivalence must have a fixed point: if $f$ is orientation preserving, it's the identity on $H^{4k+2}(\mathbb{C}P^{2k+1})$, which implies it must have been the identity on $H^2(\mathbb{C}P^{2k+1})$, so it's the identity on all cohomology groups.
Thus, the trace of such an $f$ is $2k+2\neq 0$, and so, by the Lefschetz theorem, this map has a fixed point.
-As an immediate corollary, if $\mathbb{C}P^{2k+1}$ covers anything, it can only double cover it. For the product of any two nontrivial elements in the deck group must be trivial: any nontrivial map must be orientation reversing, and the composition of two orientation reversing maps is orientation preserving, hence has a fixed point, hence is the identity. That is, any two nontrivial elements multiply to $e$. It's easy to show that this implies the deck group is $\mathbb{Z}/2\mathbb{Z}$ (or trivial).
-In fact $\mathbb{C}P^{2k+1}$ does double cover something (though, to my knowledge, it doesn't have a more common name, except in the case of $\mathbb{C}P^1 = S^2$ double covering $\mathbb{R}P^2$). In homogeneous coordinates, the involution maps $[z_0:z_1:\ldots:z_{2k}:z_{2k+1}]$ to $[-\overline{z_1}:\overline{z_0}:\ldots:-\overline{z_{2k+1}}:\overline{z_{2k}}]$. This involution acts freely, and the quotient of $\mathbb{C}P^{2k+1}$ by the involution is a space which $\mathbb{C}P^{2k+1}$ double covers.
-I do not know if $\mathbb{C}P^{2k+1}$ covers anything else.
-Incidentally, just to preempt a bit, the space $\mathbb{H}P^{n}$ doesn't cover anything unless $n=1$. The proof is much more complicated in general (though the case where $n$ is even follows precisely as it did in the $\mathbb{C}P^{2k}$ case).
-In general, one needs to compute Pontrjagin classes and note that they are preserved by diffeomorphisms.
-We have $p_1(\mathbb{H}P^n) = 2(n-1)x$ where $x$ is a particular choice of generator for $H^4(\mathbb{H}P^n)$. Since any diffeomorphism must preserve $p_1$, it follows that so long as $n\neq 1$, we must have $x\rightarrow x$ on $H^4$. Then, the Lefschetz theorem once again guarantees a fixed point.<|endoftext|>
-TITLE: The $n$-disk $D^n$ quotiented by its boundary $S^{n-1}$ gives $S^n$
-QUESTION [15 upvotes]: Define $D^n = \{ x \in \mathbb{R}^n : |x| \leq 1 \}$. By identifying all the points of $S^{n-1}$ we get a topological space which is intuitively homeomorphic to $S^n$.
-If $n = 2$, this can be visualised by pushing the centre of the disc $D^2$ down so you have a sack, then shrinking the boundary of the sack to a point, which gives you a teardrop shaped object which is clearly homeomorphic to $S^2$.
-I am new to algebraic topology. How do I prove that the quotient space is actually homeomorphic to $S^n$? I haven't been able to write down explicitly a continuous map between $D^n$ and $S^n$ which maps $S^{n-1}$ to a point on $S^n$, which at the moment is the only way I know how to begin showing that two spaces are homeomorphic.
-Is more machinery needed? If so I am interested to hear what is needed. If not, please tell me how stupid I am and give me a hint!
-
-REPLY [16 votes]: None of the answers gives an explicit map, so I'll write one here. Note that $S^{n-1}\times I\to D^n,(u,t)\mapsto tu$ (this is scalar multiplication) is a quotient map and thus one can define (after verifying some things) the continuous map $D^n\to S^n,tu\mapsto (\sin(t\pi)u,\cos(t\pi))$, which can be verified to work.<|endoftext|>
-TITLE: Atiyah-Macdonald Exercises 5.16-5.19
-QUESTION [23 upvotes]: I have solutions to Exercises 5.16–5.19 in Atiyah–Macdonald's Introduction to Commutative Algebra, but not in the order desired; I find myself needing later exercises to do earlier ones, and it's been frustrating me.
Online solution sets (I count five, in various stages of completion) seem to either not notice this problem or treat it as something too obvious to merit consideration.
-For context, the first part of 5.16 is Noether's normalization lemma, which is fine, and 5.18 is Zariski's Lemma, which is proved in the text two times, and is accessible to (at least) two natural proofs at this point.
-What I can't do is to obtain the second part of 5.16 without using 5.17, or prove 5.17 without using 5.18. Moreover, 5.19 specifically asks one to prove 5.17 using 5.18 (which I can do), so the strong implication is that the original solutions should not require use of Zariski's Lemma.
-$k$ is an infinite field, assumed algebraically closed in 5.17 but not in 5.16.
-The first part of 5.17 is that if an affine variety $X$ in $k^n$ has ideal $I(X)$ a proper subset of $k[t_1,\ldots,t_n]$, then $X$ is not empty. This follows immediately from the second part of 5.16.
-The second part of 5.16 is that for any subvariety $X$ of $k^n$ there are an $r \leq n$
-and a linear map $k^n \to k^r$ taking $X$ onto all of $k^r$. The natural candidate map follows from Noether normalization, but my approach to surjectivity seems to require the second part of 5.17.
-The second part of 5.17 is that every maximal ideal of $k[t_1,\ldots,t_n]$ is of the form $(t_1 - a_1,\ldots,t_n - a_n)$ for some $a_i \in k$. From this one can show a similar result for the coordinate ring of a subvariety of $k^n$.
-So this mess will be fixed if I can prove the second parts of 5.16 and 5.17 without reference to later material.
-In more detail, my current approach to the second part of 5.16 is as follows. Noether normalization shows $A$ is integral over some polynomial ring $A' := k[y_1,\ldots,y_r]$ in $r \leq n$ indeterminates. The proof of Noether normalization, at least in case $k$ is infinite, gives the $y_i$ as $k$-linear combinations of the natural generators $x_1,\ldots,x_n$ of $A$, the coordinate functions on $X$. If we say $y_i = \sum a_{ij} x_j$, then the projection $k^n \to k^r$ should be given by $(z_1,\ldots,z_n) \mapsto
-\big(\sum_{j=1}^n a_{1j}z_j,\ldots,\sum_{j=1}^n a_{rj}z_j\big)$. It's not immediately obvious to me that it is surjective. However, letting $\textrm{Max}(A) \subseteq \textrm{Spec}(A)$ denote the set of maximal ideals of $A$, results of Chapter 5 show that the map $\textrm{Max}(A) \to \textrm{Max}(A')$ induced by $k^n \to k^r$ is surjective. If we know each maximal ideal of $A$ is of the form $(x_1-c_1,\ldots,x_n-c_n)$ for some $(c_1,\ldots,c_n) \in X$, then we can identify $X$ and $\textrm{Max}(A)$, and that will show the map $X \to k^r$ is surjective. That seems, however, not to be what they want, and also like too much work.
-My current approach to the second part of 5.17 is to use the weak Nullstellensatz (if $k$ is algebraically closed and $\mathfrak a$ is a proper ideal of $k[t_1,\ldots,t_n]$, then there is at least one point of $k^n$ at which $\mathfrak a$ vanishes). The weak Nullstellensatz implies the first part of 5.17, but the reverse implication is not obvious to me. This implication is certainly proved by, e.g., the strong Nullstellensatz, but that would be completely missing the point.
-Update: To clarify, the text of 5.16, excluding a long hint about the first part, is as follows.
-
-Let $k$ be a field and let $A \neq 0$ be a finitely generated $k$-algebra.
-Then there exist elements $y_1,\ldots,y_r \in A$ which are algebraically
-independent over $k$ and such that $A$ is integral over
-$k[y_1,\ldots,y_r]$. We shall assume that $k$ is infinite.
-(The result is still true if $k$ is finite, but a different proof is needed.)
-...
-From the proof it follows that $y_1,\ldots, y_r$
-may be chosen to be linear combinations of $x_1,\ldots,x_n$.
-This has the following geometrical interpretation:
-if $k$ is algebraically closed and $X\!$ is an affine algebraic variety in $k^n$
- Then there exist elements $y_1,\ldots,y_r \in A$ which are algebraically - independent over $k$ and such that $A$ is integral over - $k[y_1,\ldots,y_r]$. We shall assume that $k$ is infinite. - (The result is still true if $k$ is finite, but a different proof is needed.) -... -From the proof it follows that $y_1,\ldots, y_r$ - may be chosen to be linear combinations of $x_1,\ldots,x_n$. - This has the following geometrical interpretation: - if $k$ is algebraically closed and $X\!$ is an affine algebraic variety in $k^n$ - with coordinate ring $A \neq 0$, then there exists a linear subspace $L$ - of dimension $r$ in $k^n$ and a linear mapping of $k^n$ onto $L$ - which maps $X\!$ onto $L$ [Use Exercise 2]. - -I had forgotten about that hint... Here is what Ex. 5.2 says. - -Let $A$ be a subring of a ring $B$ such that $B$ is integral over $A$, - and let $f\colon A \to \Omega$ be a homomorphism of $A$ into - an algebraically closed field $\Omega$. - Show that $f\!$ can be extended to a homomorphism of $B$ into $\Omega$. [Use (5.10).] - -REPLY [9 votes]: This is just another wording of Matt E's very nice argument. That's why I'm making it a community wiki. (I voted for the question and for Matt E's answer.) -I'll freely use Exercises 1.27 and 1.28. For any regular map $\varphi:V\to W$ between affine varieties, write $\varphi^*$ for the induced $k$-algebra morphism between coordinate rings (going into the reverse direction). -Let $V\subset k^n$ be an affine variety, let $A$ be its coordinate ring, let $B$ be the coordinate ring of $k^r$, and let $\varphi:V\to k^r$ be a regular map induced by a linear map from $k^n$ to $k^r$. Assume that $\varphi^*:B\to A$ is injective and that $A$ is integral over $\varphi^*(B)$. -Let $z$ be in $k^r$. We must find a $v$ in $V$ such that $\varphi(v)=z$. -View $z$ as a regular map from the zero dimensional affine space $\{0\}$ to $k^r$. By Exercise 5.2 there is a $k$-algebra morphism $v^*:A\to k$, coming from a $v$ in $V$ viewed as a regular map from $\{0\}$ to $V$, such that $v^*\circ\varphi^*=z^*:B\to k$, and we get $\varphi\circ v=z$.<|endoftext|> -TITLE: Random points in a rectangular grid defining a closed path -QUESTION [13 upvotes]: Suppose we have a $n\times m$ rectangular grid (namely: $nm$ points disposed as a matrix with $n$ rows and $m$ columns). -We randomly pick $h$ different points in the grid, where every point is equally likely. -If only horizontal or vertical movements between two points are allowed, what is the probability that the points define at least one closed path? -ps: we can suppose $m=n$ to simplify -For example, let $n=m=4$ and $h=6$. -1 denotes a selected point, 0 a non-selected one. -These $6$ points define a closed path: -1 0 0 1 -0 0 0 0 -1 1 0 0 -0 1 0 1 -as these $6$ do (the $4$ in the bottom-right corner): -1 0 0 1 -0 0 0 0 -0 1 0 1 -0 1 0 1 -while the following $6$ points do not: -1 0 0 0 -0 0 0 1 -1 1 0 0 -0 1 0 1 -Substantially, the $h$ points define a closed path if and only if there exist a subset of these $h$ points such that every point in the subset has one other point of the subset on the same row and one on the same column. -Thanks for your help. - -REPLY [6 votes]: As Aryabhata comments, it's equivalent to counting the number $f_{m,n,h}$ of $h$-edge acyclic subgraphs of $K_{m,n}$. I managed to find a method to compute $f_{m,n,h}$. 
The first observation is \[f_{m,n,h}=\sum_{i=0}^m \sum_{j=0}^n {m \choose i} {n \choose j} w_{i,j,h}\] where $w_{m,n,h}$ is the number of $h$-edge acyclic subgraphs of $K_{m,n}$ without isolated vertices. Since $w_{m,n,h}=0$ when $m \geq h$ or $n \geq h$, for fixed $h$, this formula can be run in time $O(\log(mn))$, by pre-computing the non-zero values of $w_{m,n,h}$. It seems, however, that it's not so easy to compute $f_{m,n,h}$ for general $h$. -We can make some improvements to this formula by: - -deriving a formula for $w_{m,n,h}$ via decompositions into disjoint subgraphs, -considering the number of $h$-edge acyclic subgraphs of $K_{m,n}$ without isolated vertices and leaf vertices on the "$n$" side, -introducing a symmetry breaking condition. - -I implemented this algorithm and used it to compute all non-zero values of $f_{m,n,h}$ with $m,n \leq 50$. The source code is here. In an effort to describe this algorithm in detail, I ended up writing a paper Computing the number of h-edge spanning forests in complete bipartite graphs (2014). -Here are the first few values: -[1,1] 1 1 -[1,2] 1 2 1 -[1,3] 1 3 3 1 -[1,4] 1 4 6 4 1 -[1,5] 1 5 10 10 5 1 -[1,6] 1 6 15 20 15 6 1 -[1,7] 1 7 21 35 35 21 7 1 -[1,8] 1 8 28 56 70 56 28 8 1 -[1,9] 1 9 36 84 126 126 84 36 9 1 -[1,10] 1 10 45 120 210 252 210 120 45 10 1 -[2,2] 1 4 6 4 -[2,3] 1 6 15 20 12 -[2,4] 1 8 28 56 64 32 -[2,5] 1 10 45 120 200 192 80 -[2,6] 1 12 66 220 480 672 544 192 -[2,7] 1 14 91 364 980 1792 2128 1472 448 -[2,8] 1 16 120 560 1792 4032 6272 6400 3840 1024 -[2,9] 1 18 153 816 3024 8064 15456 20736 18432 9728 2304 -[2,10] 1 20 190 1140 4800 14784 33600 55680 65280 51200 24064 5120 -[3,3] 1 9 36 84 117 81 -[3,4] 1 12 66 220 477 648 432 -[3,5] 1 15 105 455 1335 2673 3375 2025 -[3,6] 1 18 153 816 3015 7938 14499 16524 8748 -[3,7] 1 21 210 1330 5922 19278 45738 75330 76545 35721 -[3,8] 1 24 276 2024 10542 40824 118692 253368 373977 338256 139968 -[3,9] 1 27 351 2925 17442 78246 268758 701298 1345005 1778031 1436859 531441 -[3,10] 1 30 435 4060 27270 138996 549990 1691280 3969405 6845310 8129079 5904900 1968300 -[4,4] 1 16 120 560 1784 3936 5632 4096 -[4,5] 1 20 190 1140 4785 14544 31520 44800 32000 -[4,6] 1 24 276 2024 10536 40704 117376 244224 331776 221184 -[4,7] 1 28 378 3276 20349 95256 341712 928512 1822464 2308096 1404928 -[4,8] 1 32 496 4960 35792 196672 842240 2811904 7147520 13058048 15204352 8388608 -[4,9] 1 36 630 7140 58689 370080 1839936 7266816 22556160 53272576 89800704 95551488 47775744 -[4,10] 1 40 780 9880 91120 648288 3667200 16696320 61009920 175636480 383451136 593756160 576716800 262144000 -[5,5] 1 25 300 2300 12550 51030 155900 347500 515625 390625 -[5,6] 1 30 435 4060 27255 138606 544525 1641000 3645000 5400000 4050000 -[5,7] 1 35 595 6545 52150 318122 1524530 5764750 16900625 36596875 52521875 37515625 -[5,8] 1 40 780 9880 91110 647928 3660300 16607400 60170625 169700000 352400000 480000000 320000000 -[5,9] 1 45 990 14190 148635 1206999 7847220 41469300 178356375 616146875 1656168750 3257718750 4157578125 2562890625 -[5,10] 1 50 1225 19600 229850 2098060 15421050 92925000 461728125 1881031250 6165578125 15674375000 28953125000 34375000000 19531250000 -[6,6] 1 36 630 7140 58680 369792 1834992 7210080 22083840 50388480 77262336 60466176 -[6,7] 1 42 861 11480 111615 838698 5022031 24263028 94246740 287884800 657456912 1008189504 784147392 -[6,8] 1 48 1128 17296 194160 1693824 11870272 67942272 318691584 1212710400 3642236928 8169652224 12230590464 9172942848 -[6,9] 1 54 1431 24804 315711 3135510 25159545 166283280 
912183120 4140720000 15318167904 44729853696 97138911744 139586167296 99179645184 -[6,10] 1 60 1770 34220 486960 5423712 49015360 367096320 2302629120 12114178560 53031822336 189460684800 532998144000 1108546560000 1511654400000 1007769600000 -[7,7] 1 49 1176 18424 211435 1887039 13542816 79497264 383225031 1503254095 4674900664 10930062696 17230990189 13841287201 -[7,8] 1 56 1540 27720 366702 3789240 31681300 218620760 1255792825 5993472240 23447436096 73006381056 171071057920 269858570240 215886856192 -[7,9] 1 63 1953 39711 594909 6984243 66640413 528138963 3516498531 19723392829 92605693635 357804004509 1101151227519 2544938108433 3938980639167 3063651608241 -[7,10] 1 70 2415 54740 915950 12040644 129071950 1154594080 8734901805 56204359750 307067393059 1410568820220 5339441040500 16075559360000 36194714850000 54189129400000 40353607000000 -[8,8] 1 64 2016 41664 634592 7577472 73574144 593773056 4029819264 23069699072 110763376640 438772432896 1389556137984 3322157203456 5360119185408 4398046511104 -[8,9] 1 72 2556 59640 1027782 13923000 153923196 1421475912 11117737665 74098919744 420440041728 2013081735168 7981887578112 25343121162240 60740934303744 98077104930816 80244904034304 -[8,10] 1 80 3160 82160 1580320 23944256 296880640 3086266880 27312138880 207462978560 1355573469184 7587975987200 35975515340800 141460766720000 445225369600000 1055286886400000 1677721600000000 1342177280000000 -[9,9] 1 81 3240 85320 1662444 25521804 320717880 3380400216 30345929910 233984870262 1553345659224 8845120243512 42730719804108 171593700184620 553227160200264 1348883466233256 2219048868131217 1853020188851841 -[9,10] 1 90 4005 117480 2553570 43809948 616649250 7302228120 73952056845 646947951130 4911678171801 32346150078960 183665091934800 887748334704000 3573356139900000 11554377645600000 28238648976000000 46490458680000000 38742048900000000 -[10,10] 1 100 4950 161700 3919200 75093120 1182753600 15712656000 179127216000 1772146496000 15311555436800 115760058048000 763841356800000 4365392640000000 21306528000000000 86798400000000000 284496000000000000 705600000000000000 1180000000000000000 1000000000000000000 - -The above lists $[m,n]$ followed by $f_{m,n,h}$ for $0 \leq h \leq m+n-1$ when $n \geq m \geq 1$. Note that $f_{m,n,m+n-1}=m^{n-1}n^{m-1}$, which counts spanning trees in $K_{m,n}$ (see Kirchoff's Matrix-Tree Theorem). Adding more edges introduces a cycle, so $f_{m,n,h}=0$ when $h \geq m+n$.<|endoftext|> -TITLE: Smooth Poincaré Conjecture -QUESTION [16 upvotes]: One of my professors wrote the following open question on the blackboard: -If $M$ is a compact, connected smooth $4$-manifold such that $\pi_1(M) = 0$, $\pi_2(M) = 0$ (first two homotopy groups are trivial), does it follow that $M$ is diffeomorphic to the $4$-sphere? -and warned us, that if we managed to solve it, we would get an instant Ph.D. -- so, keen on getting a Ph.D. before my bachelor degree, I went to work immediately! ;-) -My first thought was the following: If one could endow $M$ with a Riemannian metric giving a Riemannian manifold with constant sectional curvature $1$, then by compactness $M$ would be a complete, connected, simply connected manifold of curvature $1$, which would imply the statement. -Now I obviously didn't get much further than this (my dreams were shattered!). -Anyways, this leads to the question: -"When is it possible to endow a smooth manifold with a metric which has some desired properties (i.e. constant curvature or bounded curvature)?" -Has there been much work on this? 
Are there any good books/papers I could take a look at (just to get some impression of how the experts approach this problem)?
-I was also wondering whether the above is actually an approach to the problem taken by people working in the field, or whether it may be completely hopeless to try and gain any control of the metric globally.
-Well, as always, thanks in advance for any comments, answers, etc.
-Best regards,
-S.L.
-
-REPLY [17 votes]: I imagine you'd get more than a Ph.D.!
-I get the impression that it's generally hopeless to gain control of the metric globally. The problem is that if $g_1$ and $g_2$ are metrics, then so is $g_1 + g_2$, but the curvature of $g_1+g_2$ can be wildly different from that of $g_1$ and $g_2$.
-Said another way, typically one uses a partition of unity argument to show that metrics exist on all smooth (paracompact) manifolds, but curvature is not well behaved under partitions of unity.
-Another very relevant point is the Hopf conjecture: that $S^2\times S^2$ does not carry a metric of positive sectional curvature. If we were to adapt your method to this setting, we're just trying to prove positive curvature, which is a far cry from constant positive sectional curvature, but the efforts here have been fairly fruitless.
-Also, Burkhard Wilking has shown $\mathbb{R}P^2\times\mathbb{R}P^3$ has a metric of "almost positive curvature" - it has an open dense set of points $U$ such that for any point in $U$, all sectional curvatures are positive. It follows from the classical Synge Theorem that this example cannot be deformed to positive curvature everywhere. So, if you were to get your hands on a "sort of" nice metric on your hypothetical $S^4$, then there's still no reason to assume you can deform it to one even of positive curvature. (Though, to be clear, if $M$ is simply connected with a metric of almost positive curvature, it is still completely open whether or not this can be deformed to positive curvature.)
-On the positive side, Jeff Cheeger showed the connected sum of any pair of $\mathbb{R}P^n$, $\mathbb{C}P^n$, $\mathbb{H}P^n$, or $\mathbb{O}P^2$ (of appropriate dimensions) carries a metric of nonnegative sectional curvature. To do this he looks at each of those manifolds minus a point, and shows that near the deleted point, one can deform the metric to be nonnegatively curved everywhere and locally isometric to the product $S^{k}\times \mathbb{R}$ near the removed point. This guarantees he can glue the metrics together on each piece and get a smooth nonnegatively curved metric.<|endoftext|>
-TITLE: Is there an elementary proof for Fermat's last theorem?
-QUESTION [22 upvotes]: I wonder whether there is an elementary proof of Fermat's Last Theorem. Why is it so difficult to prove this theorem by elementary methods?
-Thanks,
-
-REPLY [23 votes]: Fermat's Last Theorem, although elementary to state, is a very subtle problem.
-The general $ABC$ conjecture (still unproved) states (roughly) that if $C = A + B$ (with $A, B, C$ coprime integers), then it is not possible for $A$, $B$, and $C$ to be simultaneously divisible by high powers of integers.
-Fermat's equation considers the special case when $A$, $B$, and $C$ are all taken to be perfect $n$th powers.
-The $ABC$ conjecture in general, and Fermat in particular, are then subtle problems relating the additive and multiplicative nature of the integers, and so it is perhaps not too surprising that they are difficult to prove (or, in the case of $ABC$, that it remains unproved!).
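-To get a concrete feel for the tension $ABC$ describes, one can compute the radical $\operatorname{rad}(ABC)$ (the product of the distinct primes dividing $ABC$) for particular triples. Here is a minimal Python sketch, for illustration only, using Reyssat's well-known triple $2 + 3^{10}\cdot 109 = 23^5$:
-
-def rad(n):
-    # product of the distinct primes dividing n, by trial division
-    r, d = 1, 2
-    while d * d <= n:
-        if n % d == 0:
-            r *= d
-            while n % d == 0:
-                n //= d
-        d += 1
-    if n > 1:
-        r *= n
-    return r
-
-A, B = 2, 3**10 * 109    # A + B = C, with A, B, C coprime
-C = A + B                # C = 6436343 = 23**5
-print(rad(A * B * C))    # 15042 = 2*3*23*109, far smaller than C
-
-One form of the conjecture says that for each $\epsilon > 0$ only finitely many coprime triples satisfy $C > \operatorname{rad}(ABC)^{1+\epsilon}$; triples like this one, where the radical is tiny compared to $C$, are exactly the rare exceptions being quantified.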
-Another famous conjecture relating the additive and multiplicative nature of the -integers is the Goldbach conjecture. This is, if you like, the "opposite" of $ABC$; it states that for an even integer $N$, we may write $N = p_1 + p_2,$ where $p_1$ and $p_2$ are prime (which is a kind of "opposite" to being divisible by a high perfect power). It also resists proof. -At a technical level, the tools that have been brought to bear on Fermat, and on $ABC$, are quite different from the tools that have been brought to bear on Goldbach, so perhaps one shouldn't take a comparison between them too seriously. -But they do share the common element of getting at something quite deep about the interrelationship between the additive and mutliplicative structure of the integers, and this is what makes them difficult (or so it seems to me). -[Added September 2012:] Shinichi Mochizuki has very recently claimed a proof of the ABC conjecture.<|endoftext|> -TITLE: Finding the norm in the cyclotomic field $\mathbb{Q}(e^{2\pi i / 5})$ -QUESTION [14 upvotes]: I'm doing one of the exercises of Stewart and Tall's book on Algebraic Number Theory. The problem concerns finding an expression for the norm in the cyclotomic field $K = \mathbb{Q}(e^{2\pi i / 5})$. The exact problem is the following: - - -If $\zeta = e^{2 \pi i / 5}$, $K = \mathbb{Q}(e^{2\pi i / 5})$, prove that the norm of $\alpha \in \mathbb{Z}[\zeta]$ is of the form $\frac{1}{4}(A^2 -5B^2)$ where $A, B \in \mathbb{Z}$. -(Hint: In calculating $\textbf{N}(\alpha)$, first calculate $\sigma_1 (\alpha) \sigma_4 (\alpha)$ where $\sigma_i (\zeta) := \zeta^{i}$. Show that this is of the form $q + r\theta + s\phi$ where $q, r, s \in \mathbb{Z}$, $\theta = \zeta + \zeta^{4}$ and $\phi = \zeta^{2} + \zeta^{3}$. In the same way establish $\sigma_2 (\alpha) \sigma_3 (\alpha) = q + s\theta + r\phi$ ) -Using Exercise $3$ prove that $\mathbb{Z}[\zeta]$ has an infinite number of units. - - -Now, I've already done what the hint says and arrived at the following. If we let $\alpha = a +b\zeta^{} + c\zeta^{2} + d\zeta^{3} \in \mathbb{Z}[\zeta]$ then after simplifying I get -$$\textbf{N}(\alpha) = \sigma_1 (\alpha) \sigma_4 (\alpha) \sigma_2(\alpha) \sigma_3(\alpha) = ( q + r\theta + s\phi ) ( q + s\theta + r\phi )$$ -$$ = q^2 + (qr + qs)(\theta + \phi) + rs(\theta^2 + \phi^2) + (r^2 + s^2)\theta \phi$$ -and then it is not that hard to see that $\theta + \phi = -1$, $\theta^2 + \phi^2 = 3$ and $\theta \phi = -1$ so that in the end one obtains -$$\textbf{N}(\alpha) = q^2 - (qr + qs) + 3rs - (r^2 + s^2)$$ -where $q = a^2 + b^2 + c^2 + d^2$, $r = ab + bc + cd$ and $s = ac + ad + bd$. -Now, here I got stuck because I just can't take the last expression for the norm into the form that the exercise wants. -The purpose is to get that nice form for the norm to find units by solving the diophantine equation $\textbf{N}(\alpha) = \pm 1$, which is what the Exercise $3$ mentioned in the statatement of the problem is about. -I already know how to prove the existence of infinitely many units in $\mathbb{Z}[\zeta]$ (without using Dirichlet's Unit Theorem of course), but the exercise also demands a proof that the norm is equal to $\frac{1}{4}(A^2 -5B^2)$. -I even asked my professor about this and we were not able to get the desired form for the norm. - - -So my question is if anybody knows how to prove that the norm has that form, and if so, how can I show that? Or if it could be that maybe the hint given in the exercise is not that helpful? 
- - -Thanks a lot in advance for any help with this. -EDIT -After looking at Derek Jennings' answer below, to get from the expression I had for the norm to the one in Derek's answer is just a matter of taking out a common factor of $1/4$ in the expression and then completing the square, -$$\textbf{N}(\alpha) = q^2 - (qr + qs) + 3rs - (r^2 + s^2) = q^2 - q(r+s) + rs - (r-s)^2$$ -$$ = \frac{1}{4}( 4q^2 - 4q(r+s) + 4rs - 4(r-s)^2 ) $$ $$= \frac{1}{4} ( 4q^2 - 4q(r+s) +\overbrace{(r+s)^2} - \overbrace{(r+s)^2} + 4rs - 4(r-s)^2 )$$ -$$ = \frac{1}{4} ( (2q -(r+s))^2 -(r-s)^2 - 4(r-s)^2 )$$ -$$ = \frac{1}{4}( (2q - r - s)^2 - 5(r-s)^2 ) = \frac{1}{4}(A^2 - 5B^2),$$ -as desired. Of course it is easier if you already know what to get at =) - -REPLY [9 votes]: I agree completely with Qiaochu. If you understand what he wrote, then you can use this to give a much shorter and easier proof than the hint is suggesting. -In fact I didn't even read the hint. I saw the question, thought "Oh, that looks like the norm from $\mathbb{Q}(\sqrt{5})$". I also know that $\mathbb{Q}(\sqrt{5})$ is the unique quadratic subfield of $\mathbb{Q}(\zeta_5)$ (since $5 \equiv 1 \pmod 4$; there's a little number theory here), and that for any tower of finite field extensions $L/K/F$, we have -$N_{L/F}(x) = N_{K/F}(N_{L/K}(x))$. -Norms also carry algebraic integers to algebraic integers, so this shows that the -norm of any element of $\mathbb{Z}[\zeta_5]$ is also the norm of some algebraic integer of -$\mathbb{Q}(\sqrt{5})$, i.e., is of the form $(\frac{A+\sqrt{5}B}{2})(\frac{A-\sqrt{5}B}{2})$ for some $A,B \in \mathbb{Z}$. I think we're done. -[I have never read Stewart and Tall, so it may be that they have not assumed or developed this much theory about norm maps at the point they give that exercise. But if you know it, use it!] - -REPLY [6 votes]: You're almost there: set $A=2q-r-s$ and $B=r-s.$ Then your expression for $\textbf{N}(\alpha)$ reduces to the desired form. i.e. your -$$\textbf{N}(\alpha) = \frac14 \left \lbrace (2q-r-s)^2 - 5(r-s)^2 \right \rbrace -= \frac14(A^2-5B^2).$$<|endoftext|> -TITLE: Principal fibrations with a section are trivial -QUESTION [15 upvotes]: I'm trying to prove that, given a principal fibration $\Omega B \rightarrow F \stackrel{p}{\rightarrow} E$ such that $p$ is a retraction of $F$ onto $E$, the total space $F$ is homotopy equivalent to the product $\Omega B \times E$. This is essentialy a problem from Hatcher (4.3, Q22). -Anyway, since $E$ is a retract of $F$, we have a map $s\colon E \rightarrow F$ such that $p\circ s = 1$. So the long exact sequence in homotopy for the fibration breaks into split short exact sequences, and one approach I've been trying is to show that the isomorphism in the splitting is induced from some map $F \rightarrow \Omega B \times E$, so that Whitehead's theorem can finish the job (you can assume I'm always talking about CW complexes). -Another approach I've tried - inspired by the proof of the similar statement about principal bundles and global cross-sections - is to try and define an action of the fibre $\Omega B$ on $F$. Up to homotopy, we can replace $F$ with the homotopy fibre $F_q$, where $E \stackrel{q}{\rightarrow} B$ is the rest of the fibration sequence in the principal fibration, and it's then very easy to define an action of $\Omega B$ on $F_q$ (again, see the exercises in Hatcher), so we have an action of $\Omega B$ on $F$ up to homotopy. 
So we can define a map $\Omega B \times E \rightarrow F$ as the composite $$\Omega B \times E \stackrel{1 \times s}{\rightarrow} \Omega B \times F \rightarrow F,$$ where the last map is the action, and I thought about trying to show that this induces isomorphisms in homotopy for, once again, a solution by Whitehead. -Alternatively, I could try and define a map $F \rightarrow \Omega B \times E$ and show that it's the homotopy inverse of the above map, but I've come a little unstuck in trying to do this. There's no obvious map $F \rightarrow\Omega B$. -I've been wrestling with this problem for the last few days, and I've reached the point where I'm just going around in circles and finding it hard to make much progress. Sorry for the hodgepodge nature of this post, but I wanted to explain the various ways I've been going about solving this. I'm going to leave it on the backburner for a few days, sleep on it a little more, and hopefully I'll come back to it and the answer will be obvious --- it does feel like I'm missing something important that's staring me in the face. In the meantime, though, I'd love it if someone could suggest a hint or two, or let me know if I'm tackling this in the right way! - -REPLY [13 votes]: My supervisor pointed me in the right direction for the solution, which I'm recording here for posterity. It (unsurpringly) involves the action of $\Omega B$ on the homotopy fibre $F$, just like in the bundle case. -Recall that I had defined a map $\Omega B \times E \rightarrow F$ by composition of $1 \times s$ with the action $*$. Now, remembering that the product of two fibrations is a fibration, we have the following commutative diagram of long exact sequences of homotopy groups and maps between them: -$$\begin{array}{ccccc}\pi_{n+1} E & = & \pi_{n+1} E & = & \pi_{n+1} E \\ - -\downarrow & & \downarrow & & \downarrow \\ - -\pi_{n} \Omega B & \longrightarrow & \pi_{n} \Omega B \times \pi_{n} \Omega B & -\longrightarrow & \pi_n \Omega B\\ - -\downarrow & & \downarrow & & \downarrow \\ - -\pi_{n} \Omega B \times \pi_{n} E & \longrightarrow & \pi_{n} \Omega B \times \pi_n F & \longrightarrow & \pi_{n} F - -\\ \downarrow & & \downarrow & & \downarrow \\ - -\pi_{n} E & = & \pi_{n} E & = & \pi_{n} E \\ - -\downarrow & & \downarrow & & \downarrow \\ - -\pi_{n-1} \Omega B & \longrightarrow & \pi_{n-1} \Omega B \times \pi_{n-1} \Omega B & \longrightarrow & \pi_{n-1} \Omega B -\end{array}$$ -where the second and fifth rows come from (left) inclusion and $H$-group multiplication, and the middle row is the homomorphism induced by the map $\ast \circ (1 \times s)$ defined above. The result then follows from the 5-lemma and Whitehead's theorem. -(Sorry about the shoddiness of the diagram, I wasn't sure if this textbox is xy enabled or not, so I stuck with an array.)<|endoftext|> -TITLE: 2D Rotation Around Point -QUESTION [5 upvotes]: D. My first post here ::- >. I got a rather simple question. But please, allow me to introduce myself a bit first. I think it's polite for a first post ::- D. -I'm a game developer (free Flash games) with a few ahem big holes in my math (read: I haven't done any math since high school - thanks to the school, I was totally repulsed by it in University). Even so, I always liked math, but only out of school ::- D. Unfortunately, I haven't kept touch with it (it wasn't necessary so far). -So, without further delay, I will get to my problem for a game I'm working on. Anybody helping me wins a free ticket to the game's credits grin. 
And it's going to be a nice game, much nicer than my previous: http://www.kongregate.com/games/Kyliathy/thunderbirdz (to see that I'm not a fraud, LOL). -What I'm trying to do (and failing miserably) is to rotate a Rectangle around a Point, in Adobe Flash. The problem is that Flash rotates an object relative to it's X=0, Y=0 coordinate, which they call a registration point. And I want to rotate the Rectangle around ANOTHER POINT inside that Rectangle. -This is a sample Flash which illustrates my problem: -http://www.axonnsd.org/W/P002/MathSandBox.swf -[LATER EDIT: all SWF samples we talked about in the comments below are now accessible at the same links, except that you have to add '/W/P002'after 'axonnsd.org' - see the above link.] -The BLUE circle should stay on top of the RED circle. Instead, the Rectangle rotates around its registration point. -For people without Flash or who hate Flash, here is an image of the same problem which also show the registration point via a BLACK Square - -Now....... -My solution to this problem would be to find an equation to MOVE the Rectangle to an appropriate X/Y so that the BLUE circle stays on top of the RED circle. -Specifying the rotation in Flash is simple: object.rotation = X, where X can be any number. It will, however, always divide it by 360, of course. -But now I have to find SOME method by which I set my object.x and object.y so that the Rectangle appears to rotate around the BLUE/RED circles, NOT around its registration point. -And, for the life of me, I don't even know where to START finding that mysterious equation ::- D. I bet it involves a bit of Pi and some drops of trigonometry. But I don't even know where to begin... Pointers, anybody? -Thank you for reading my long first message! ::- D. - -REPLY [5 votes]: The upper-left corner of the rectangle is the point we want to locate. It travels along a circular path with radius $r=\sqrt{a^2+b^2}$ and center $(h,k)=$ $(x_r+a,y_r+b)$. - -One possible parametric description of a circle with center $(h,k)$ and radius $r$ is $(x,y)=$ $(h+r\cos t,k+r\sin t)$, but this parameterization starts at $(h+r,k)$ when $t=0°$ and moves to $(h,k+r)$ when $t=90°$. Swapping the sine and cosine gives a parameterization $(x,y)=$ $(h+r\sin t,k+r\cos t)$ that starts at $(h,k+r)$ when $t=0°$ and moves to $(h+r,k)$ when $t=90°$. Reversing the sign of the cosine term in the $y$ coordinate gives a parameterization $(x,y)=$ $(h+r\sin t,k-r\cos t)$ that starts at $(h,k-r)$ when $t=0°$ and moves to $(h+r,k)$ when $t=90°$. In the coordinate system you describe, this is the circle below. - -So we're fairly close, but $t$ is slightly off from $\theta$. Letting $t=\theta-\arctan(\frac{a}{b})$ shifts the parameter to match the desired values. -$$\begin{align} -x&=h+r\sin(\theta-\arctan\left(\frac{a}{b}\right)) -\\ -y&=k-r\cos(\theta-\arctan\left(\frac{a}{b}\right)) -\end{align}$$ -Some earlier failed attempts at thinking transformationally are in the revision history.<|endoftext|> -TITLE: Showing groups of order $p^{k}(p+1)$ are not simple, p prime -QUESTION [6 upvotes]: I want to show that there are no simple groups of order $p^{k}(p+1)$ where $k>0$ and $p$ is a prime number. -So suppose there is such a group. Then if we let $n_{p}$ denote the number of $p$-Sylow subgroups of $G$ we have that $n_{p}=p+1$. Now by letting $G$ act on $Sylow_{P}(G)$ by conjugation we obtain a group homomorphism $G \rightarrow S_{p+1}$. Since $G$ is simple then either $ker(f)$ is trivial or all $G$. 
Now here's my question: assume $\ker(f)=G$; this would then imply that $G$ has a unique $p$-Sylow subgroup, no? But then such a subgroup is normal, which contradicts the fact that $G$ is simple. So the map is in fact injective, but then $|G|$ divides $(p+1)!$, which cannot be.
-Basically my question is whether my argument is correct, namely that $\ker(f)=G$ implies the existence of a unique $p$-Sylow subgroup, which implies such a subgroup is normal in $G$, which cannot be. In case this is wrong, how do you argue that $\ker(f)$ cannot be all of $G$?
-Thanks
-
-REPLY [5 votes]: You are (mostly) correct. If ker(f) = G, then the image of G in the symmetric group is just the identity, so G does not move any of its Sylow p-subgroups around. However, G acts transitively on its Sylow p-subgroups (they are all conjugate) and the identity is not transitive unless p+1=1, which is silly.
-If ker(f) = 1, then G embeds in the symmetric group on p+1 points, so its order divides (p+1)!. This is not a contradiction when k=1.
-Indeed, taking p=2, the non-abelian group of order six has order p(p+1) and has exactly p+1 Sylow p-subgroups, and the homomorphism f is injective.
-Of course a group of order p(p+1) is also not simple, but probably for a different reason.<|endoftext|>
-TITLE: Is it possible to construct a quasi-vectorial space without an identity element?
-QUESTION [5 upvotes]: I mean: is there any construction that satisfies all the conditions for a vectorial space except that it lacks an identity element? This question was posed to me by a classmate last semester and I have been puzzling over it since then.
-
-REPLY [20 votes]: In an old sci.math post by Dave Rusin, he discusses dropping sundry axioms from the usual set of axioms of vector spaces. What is below is taken from there.
-So, let's recall what axioms we are dealing with: normally, a vector space $\mathbf{V}$ over a field $\mathbb{F}$ is defined to be a set, together with operations $+\colon\mathbf{V}\times\mathbf{V}\to\mathbf{V}$ and $\cdot\colon \mathbb{F}\times\mathbf{V}\to\mathbf{V}$, written in their usual infix notation, which satisfy the following conditions:
-
-For all $x,y\in\mathbf{V}$, $x+y=y+x$.
-For all $x,y,z\in\mathbf{V}$, $(x+y)+z = x+(y+z)$.
-There exists a vector $\mathbf{0}\in\mathbf{V}$ such that for all $x\in\mathbf{V}$, $x+\mathbf{0}=x$.
-For each $x\in\mathbf{V}$ there exists $y\in\mathbf{V}$ such that $x+y=\mathbf{0}$.
-For all $x\in\mathbf{V}$, $\alpha,\beta\in \mathbb{F}$, $\alpha(\beta x) = (\alpha\beta)x$.
-For all $x\in\mathbf{V}$, $1x = x$.
-For all $x,y\in\mathbf{V}$, $\alpha\in\mathbb{F}$, $\alpha(x+y) = \alpha x + \alpha y$.
-For all $x\in\mathbf{V}$, $\alpha,\beta\in\mathbb{F}$, $(\alpha+\beta)x = \alpha x + \beta x$.
-
-You cannot just drop 3, because that would make 4 unintelligible. A way around it is to replace 4 with another statement which, in the presence of all other axioms, is equivalent to 4; namely:
-
-4'. For all $x,y,w\in\mathbf{V}$, if $y + x = w + x$, then $y=w$.
-
-That is, we have right cancellation. (This is important; if you set up cancellation on the "other side" from the identity, you run into some difficulties below.)
-Note that if you have 1-8, then you get 4'. And if you have 1-3, 4', and 5-8, then you get 4: first, note that since $\mathbf{0}+0x = 0x = (0+0)x = 0x + 0x$, then 4' implies that $0x = \mathbf{0}$. Then given any $x\in\mathbf{V}$, we have $x + (-1)x = (1+(-1))x = 0x = \mathbf{0}$, so 4 holds in this case.
That is, 1, 2, 3, 4', 5, 6, 7, and 8, are an alternative way of defining vector spaces, with the added advantage that now you can drop any of the eight and the remaining statements are still intelligible. -If you take 1, 2, 3, 4', 5, 6, 7, and 8, then you can construct objects that are not vector spaces and satisfy any seven of these and not the eighth. - -All but 1: Take $\mathbf{V}=\mathbb{R}$, and define scalar multiplication by $\alpha x = x$, and $x+y = x$ for all $x$ and $y$. -All but 2: Take $\mathbf{V}=\mathbb{R}^2$, define scalar multiplication the usual way, and -$$x+y = \left\{\begin{array}{ll} -(0,0) &\mbox{if $x=(0,0)$ or $y=(0,0)$;}\\ -|\cos(\theta)|(x+y) & \mbox{if $x\neq(0,0)\neq y$, and $\theta$ is the angle from $x$ to $y$.} -\end{array}\right.$$ -All but 3: Take $\mathbf{V}=\emptyset$, with the empty addition and multiplication! -All but 4 or 4': Take $\mathbf{V}=\mathbb{R}\cup\{\bigcirc\}$. Define addition so that it is the usual addition in $\mathbb{R}$, and $r+\bigcirc=\bigcirc+r = r$ for all $r\in\mathbb{R}$, and $\bigcirc+\bigcirc=\bigcirc$. Scalar multiplication is regular multiplication on $\mathbb{R}$, and $r\bigcirc = \bigcirc$ for all reals $r$. -All but 5: Take $\mathbf{V}=\mathbb{R}$, and let $\sigma\colon\mathbb{R}\to\mathbb{Q}$ be any additive homomorphism of abelian groups. Define addition as usual, and scalar multiplication by $r x = \sigma(r)\cdot x$, where the multiplication on the right hand side is the usual real multiplication. -All but 6: Take $\mathbf{V}=\mathbb{R}$ with the usual addition, but zero multiplication: $\alpha x = 0$ for all $x$ and all $\alpha$. -All but 7: Take $\mathbf{V}=\mathbb{C}^2$, addition defined the usual way, and scalar multiplication given by: -$$\alpha(x,y) = \left\{\begin{array}{ll} -(\alpha x,\alpha y) & \mbox{if $x\neq 0$,}\\ -(0, \overline{\alpha}y) & \mbox{if $x= 0$.} -\end{array}\right.$$ -where $\overline{\alpha}$ is complex conjugation. -All but 8: Take $\mathbf{V}=\mathbb{R}$ with usual addition, and scalar multiplication $r x = r^2\cdot x$, where the multiplication on the right hand side is the usual multiplication of real numbers. - -Okay, but "All but 3" was almost cheating. What if we require that $\mathbf{V}$ be nonempty? Then you cannot have a structure that satisfies 1, 2, 4', 5, 6, 7, and 8, and does not satisfy 3: -Suppose $\mathbf{V}$ satisfies 1, 2, 4', 5, 6, 7, and 8, and is nonempty. Let $x\in V$. Then for all $y\in V$, we have: -\begin{align*} -(0x + y) + 0x &= 0x + (0x+y) &\quad&\mbox{(by 1)}\\\ - &= (0x+0x) + y &\quad&\mbox{(by 2)}\\\ -&= (0+0)x + y &&\mbox{(by 8)}\\\ -&= 0x + y\\\ -&= y + 0x &&\mbox{(by 1)} -\end{align*} -By 4', this means that $0x+y = y$, so $\mathbf{0}=0x$ shows that 3 is satisfied.<|endoftext|> -TITLE: Elementary proof that $\mathbb{R}^n$ is not homeomorphic to $\mathbb{R}^m$ -QUESTION [137 upvotes]: It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m>1$: subtract a point and use the fact that connectedness is a homeomorphism invariant. -Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m>2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary. -However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem. 
-Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult? - -REPLY [5 votes]: Consider the one point compactifications, $S^n$ and $S^m$, respectively. If $\mathbb R^n$ is homeomorphic to $R^m$, their one-point compactifications would be, as well. But $H_n(S^n)=\mathbb Z$, whereas $H_n(S^m)=0$, for $n\ne m,0$.<|endoftext|> -TITLE: I-adic completion of a ring -QUESTION [8 upvotes]: Let $R$ be a ring, $I$ an ideal. According to Atiyah-Macdonald, if $R$ is Noetherian, then, we have $\hat{I}=\hat{R}I$ where hat denotes $I$-adic completion of $R$ and (I presume) $\hat{I}$ denotes the induced completion on $I$. I don't understand how to arrive at this equality and why the Noetherian hypothesis is necessary. Essentially $\hat{I}$ consists of equivalence classes of Cauchy sequences with elements in $I$. Any element of $\hat{R}I$ is an equivalence class of Cauchy sequences consisting of elements of $I$. I don't see how every Cauchy sequence with elements in $I$ is equivalent to one which can be written as a sum of products of a Cauchy sequence and a constant sequence of an element of $I$. - -REPLY [7 votes]: If $R$ is Noetherian, then $I$ must be finitely generated, say $I = \langle p_1,\ldots, p_n\rangle$. So if an element of $\hat{I}$ is represented by a sum $x = i_1 + i_2 + \cdots$, rewriting $i_m = p_1 i_{m1} + \cdots + p_n i_{mn}$ we can rewrite this as -$$x = p_1 \sum_{k_1}i_{1k} + \cdots + p_n\sum_{k_n}i_{nk} \in \hat{R}I$$ -which is what you wanted.<|endoftext|> -TITLE: $\sum \frac{1}{f(k)}$ converges iff $\sum \frac{f^{-1}(k)}{k^2}$ converges -QUESTION [11 upvotes]: Let $f$ be a strictly increasing positive continuous function defined on $[1,\infty)$ with limit $\infty$ as $x$ goes towards $\infty$. Then $\sum_{k=0}^{\infty} \frac{1}{f(k)}$ converges if and only if $\sum_{k=0}^{\infty} \frac{f^{-1}(k)}{k^2}$ converges, where $f^{-1}$ denotes the inverse of $f$. -I saw this claim as an exercise in a analysis textbook, linked from this site. Can not remember which one unfortunately. It was listed as a challenging exercise and has proven too challenging for me. -My initial idea was to try to use the integral test. $\sum \frac{1}{f(k)}$ converges exactly when $\int \frac{1}{f(x)}$ converges. I thought I might do some smart change of variable to find that $\int \frac{f^{-1}(x)}{x^2}$ converges. I could not come up with one however and I also realized that I do not know that the sum $\sum \frac{f^{-1}(k)}{k^2}$ satisfies the conditions for using the integral test. Unfortunately I ran out of ideas at that point. - -REPLY [4 votes]: Define $A(n)$ to be $\{k: 2^n \leq f(k) < 2^{n+1}\}$, and $|A(n)|$ its cardinality. Let $S_n$ be the sum of the terms of the first sequence over $k$ in $A_n$. Then we have -$${1 \over 2^{n+1}}|A(n)| < S_n \leq {1 \over 2^n}|A(n)|$$ -Thus the first sum $\sum_n S_n$ is within a factor of $2$ of ${\displaystyle \sum_n {|A(n)| \over 2^n}}$. -Next, let $T_n$ be the sum of the terms of the second sequence over $2^n \leq k < 2^{n+1}$. In the sum $T_n$, since $f^{-1}$ is increasing, $f^{-1}(k)$ is less than $f^{-1}(2^{n+1}) = \sum_{m \leq n+1}|A(m)|$, and at least $f^{-1}(2^n) = \sum_{m \leq n}|A(m)|$. 
There are $2^n$ terms in the sum defining $T_n$, so we have -$$2^{-2(n+1)}2^n\sum_{m \leq n}|A(m)| \leq T_n < 2^{-2n}2^n\sum_{m \leq n+1}|A(m)|$$ -Summing the above in $n$, we have that $\sum_n T_n$ is within a constant factor of -$$\sum_n 2^{-n} \sum_{m \leq n} |A(m)|$$ -Switching the order of summation, this becomes -$$\sum_m |A(m)| \sum_{n \geq m} 2^{-n}$$ -$$= 2\sum_m{|A(m)| \over 2^m}$$ -This is within a constant factor of $\sum_n {S_n}$ computed above, so one series converges iff the other one does.<|endoftext|> -TITLE: multiplication equivalent of the summation symbol -QUESTION [34 upvotes]: I was curious (even though this is a very amateur question)... what would the multiplication equivalent of sigma (the summation symbol) be? -$$\sum$$ -I want to do a series of multiplication of terms.... instead of addition... - -REPLY [2 votes]: I google "latex symbols" when I need something I can't recall. That'll give you many lists and tips. For example, if you choose the first hit, the AoPS list and look for the sum symbol you'll find the product symbol right below it. This is but a simple example of a general technique of exploiting organization and classification on the web to discover information about similar items. -For the inverse, when you know the symbol but not the name there is deTeXify which let's you draw the symbol and does OCR to find the name. For deeper questions there's the (La)Tex SE site.<|endoftext|> -TITLE: Closed Subspaces of Vector Spaces -QUESTION [17 upvotes]: Question: In Functional Analysis we can note things like: every closed subspace of a Banach space is Banach. In this case, what does "closed subspace" mean? - -Does this mean closed under the norm topology? -Or does this mean closed in the sense that multiplication of scalars and addition of vectors is closed? -Or does this mean closed with respect to limits? - -I'm reviewing this material and I realized that even though I have this in my notes a number of times I am unsure of what this actually is. I thought it was the second statement above, but the third statement makes the "every closed subspace of a banach space is banach" statement easy to prove. - -REPLY [22 votes]: "Does this mean closed under the norm topology?" -Yes, that's what it means. -"Or does this mean closed in the sense that multiplication of scalars and addition of vectors is closed?" -No, that is what "subspace" means here. (As user1736 says, sometimes people write "linear subspace" for emphasis, but in functional analysis it is generally safe to assume that "subspace" means "linear subspace".) -"Or does this mean closed with respect to limits?" -Again yes, because this is an equivalent characterization of closed subspaces of a topological space. Since a Banach space is metrizable hence first countable, it is enough to take limits of sequences. For a general topological space -- and even some non-Banach topological vector spaces -- in order to retain this equivalence, one must allow limits of nets.<|endoftext|> -TITLE: Inner Product, definite positive? -QUESTION [7 upvotes]: While reading through my textbook it says "the most important example of an inner-product space is $F^n$", where $F$ denotes $\mathbb{C}$ or $\mathbb{R}$ . 
-Our definition of an inner product on a vector space $V$ is as follows: -1) Positive definite: $\langle v,v \rangle \ge 0$ with equality if and only if $v=0$ -2) Linearity in the first arguement: $\langle a_1v_1+a_2v_2,w \rangle = a_1 \langle v_1,w \rangle + a_2\langle v_2,w \rangle$ -3) Conjugate symmetric: $\langle u,v\rangle = \overline{\langle v,u\rangle}$ -Let $$\displaystyle w=(w_1\ldots,w_n) , z=(z_1,\ldots,z_n)$$ -Then: -$$\displaystyle \langle w,z\rangle =w_1\overline{z_1}+\cdots+w_n\overline{z_n}$$ -I'm trying to verify that this is indeed true. So first I want to check that $\langle w,z\rangle$ satisfies condition (1). -Say that $w,z\in \mathbb{C}$. -Just looking at say $w_1=a+bi$ and $z_1=c+di$, how can we guarantee that $w_1\overline{z_1}\geq 0$? -If we can observe this, it would need to hold true for the other coordinates as well. So my question is, how do we know that $w_1\overline{z_1}\geq 0$? - -REPLY [5 votes]: Item 1 in your definition of an inner product is incorrect. A simple counter example from $\mathbb{R}^2$ is -$$(1,0).(-1,0) = -1.$$ -It should read -$$\langle v, v \rangle \ge 0,$$ -this guarantees that all vectors in your space have a non-negative length.<|endoftext|> -TITLE: What are the ramifications of the fact that the first homotopy group can be non-commutative, whilst the higher homotopy groups can't be? -QUESTION [18 upvotes]: Does this mean that the first homotopy group in some sense contains more information than the higher homotopy groups? Is there another generalization of the fundamental group that can give rise to non-commutative groups in such a way that these groups contain more information than the higher homotopy groups? - -REPLY [4 votes]: These problems puzzled the early topologists: in fact Cech's paper on higher homotopy groups was rejected for the 1932 Int. Cong. Math. at Zurich by Hopf and Alexandroff, who quickly proved they were abelian. We now know this is because group objects in groups are abelian groups. However group objects in the category of groupoids are NOT just abelian groups, but are equivalent to crossed modules, which occurred in the 1940s in relation to second relative homotopy groups, $\pi_2(X,A,x)$. It turns out that there is a nice double groupoid $\rho_2(X,A,x)$ consisting of homotopy classes of maps of a square $I^2$ to $X$ which map the edges to $A$ and the vertices to $x$. (The proof that the compositions are well defined is not quite trivial!). Using this Philip Higgins and I proved a 2-d van Kampen Theorem, published in Proc. LMS 1978, i.e. 34 years ago, from which one can deduce new results on the nonabelian second relative homotopy groups, as crossed modules over the fundamental group. -This is the start of using strict higher homotopy groupoids for obtaining nonabelian calculations in higher homotopy theory -- see the web page in my comment, and references there. -This idea came from examining in 1965 a proof of the 1-dim van Kampen theorem for the fundamental groupoid, and observing that it ought to generalise to higher dimensions if one had the right homotopical gadgets. It took years to get the idea that this could be done for pairs of spaces, filtered spaces, or $n$-cubes of spaces, but apparently not easily just for spaces, or spaces with base point.<|endoftext|> -TITLE: Why does triply periodic univariate function not exist? -QUESTION [5 upvotes]: A univariate function $f$ is periodic with period $p_1,\ldots,p_k$ if -$$f(z) = f(z + \sum_{i=1}^k n_i \cdot p_i)$$ -for all complex $z$ and integers $n_i$. 
An elliptic function is an example of a doubly periodic function. It is claimed that a triply periodic univariate function cannot exist, but why? Can't we follow the construction of elliptic functions that uses the Schwarz-Christoffel formula to map the upper half-plane onto a triangle, and use the reflection principle to create a lattice which corresponds to three periods?
-
-Which step fails to hold when we mimic the construction of an elliptic function to create a triply periodic meromorphic univariate function?
-
-Sorry if the question is stupid. I do not have access to the original proof that a triply periodic function cannot exist, either. So references are welcome!
-
-REPLY [8 votes]: The problem is with the lattice of periods you want to consider in the construction...
-One can easily show that if a subgroup $\Gamma$ of the additive group $\mathbb C$ contains three linearly independent elements over $\mathbb Z$, then it is not closed; so there is no sensible way to do all the reflections you are planning to do... To actually prove that triply periodic meromorphic functions do not exist, using that fact about subgroups of $\mathbb C$ and properties of meromorphic functions should get you where you want.<|endoftext|>
-TITLE: Universal binary operation and finite fields (ring)
-QUESTION [33 upvotes]: Take Boolean algebra, for instance: the underlying finite field/ring $0, 1, \{AND, OR\}$ is equivalent to $ 0, 1, \{NAND\} $ or $ 0, 1, \{ NOR \}$, where NAND and NOR are considered as universal gates. Does this property, that AND ('multiplication') and OR ('addition') can be written in terms of a single universal binary operation (e.g. NAND or NOR), hold for every finite field (or finite ring)?
-EDIT : I am interested in mathematical structures where boolean algebra holds (so that I can design a digital circuit). Comments from JDK and jokiri point out that this is a valid question for finite rings at least and for finite fields in one case (i.e. the $0, 1$ case).
-
-REPLY [3 votes]: I'm not sure I get the question right; I understand you are asking whether it is true that you can express any boolean operation using only one gate. If this is your question, the answer is yes.
-Take the NAND, for example (represented in boolean algebra by the Sheffer stroke |). It can replace any unary or binary gate.
-
-We already know that anything can be expressed with AND and NOT.
-If we can express AND and NOT with NAND,
-therefore we can express anything with NAND.
-
-
-Reminder: NAND can be understood in English as "at most one", which means it's true except when both p and q are true:
-p q p|q
--------------
-0 0 1
-0 1 1
-1 0 1
-1 1 0
-
-Let's prove that NOT (¬) can be expressed with NAND (|):
-p ¬p p|p
----------------------
-0 1 1 (0|0=1)
-1 0 0 (1|1=0)
-
-NOT can be expressed with NAND: ¬p = p|p
-Let's now prove that AND (^) can be expressed with NAND (|). p^q = ¬(p|q), and we already know how to express NOT with NAND:
-p q p^q p|q (p|q)|(p|q)
-----------------------------------
-0 0 0 1 0 (1|1=0)
-0 1 0 1 0 (1|1=0)
-1 0 0 1 0 (1|1=0)
-1 1 1 0 1 (0|0=1)
-
-AND can be expressed with NAND: p^q = (p|q)|(p|q)
-For your information, the OR gate can be expressed as (p|p)|(q|q); I'm sure you can prove it for yourself.<|endoftext|>
-TITLE: What is an example of a Transfinite Argument?
-QUESTION [14 upvotes]: I was in a discussion today with a philosopher about the merit of the technique of "proof by contradiction." He mentioned the Law of Excluded Middle, wherein we (typically as mathematicians) assume that we either have P or Not P.
That is, if we show not "Not P", then we must have P.
-Further along in the conversation, he mentions Kolmogorov's tendencies toward Intuitionist Logic (where the Law of Excluded Middle does not hold; i.e. we cannot infer p from not not p). I located the source material of Kolmogorov, "On the Principle of Excluded Middle," wherein he states:
-
-...it is illegitimate to use the principle of excluded middle in the domain of transfinite arguments.
-
-Furthermore,
-
-Only the finitary conclusions of mathematics can have significance in applications. But the transfinite arguments are often used to provide a foundation for finitary conclusions.
-
-Additionally,
-
-We shall prove that all the finitary conclusions obtained by means of a transfinite use of the principle of excluded middle are correct and can be proved even without its help.
-
-My question: What does Kolmogorov mean when he differentiates finitary conclusions from transfinite arguments? Namely, what is an example of a finitary conclusion, and what is an example of a corresponding transfinite argument?
-(Source material cited in Wikipedia: http://en.wikipedia.org/wiki/Andrey_Kolmogorov#Bibliography)
-
-REPLY [11 votes]: The usage of "transfinite" there is not the same as what we now call "transfinite induction". Kolmogorov essentially just means "infinite".
-One example of the sort of thing that Kolmogorov calls "excluded middle" is now called the "limited principle of omniscience" (LPO). LPO says that if you have any property $P(n)$ of a natural number $n$, so that each number either does or does not have the property, then either there is a number $m$ such that $P(m)$ holds, or, for every number $m$, $P(m)$ does not hold. In other words, if $(\forall m)(P(m) \lor \lnot P(m))$ then $(\exists m)P(m) \lor (\forall m)\lnot P(m)$.
-LPO is trivially true in classical logic but it is not trivial in intuitionistic systems. The best way to understand this is to know that when an intuitionist says that a number exists with some property, he or she means that he or she already has an example of a specific number with that property. So LPO claims, on this reading, that there is some way to determine whether $(\exists m)P(m)$ holds, for any decidable property $P$.
-Classical mathematicians will accept that, while they believe that the sentence $(\exists m)P(m)$ is either true or false, they have no way in general to decide which is the case. For intuitionists, the lack of ability to decide which is the case means that they cannot assert that the sentence is true, and they cannot assert that it is false. Because of their redefinition of the word "or", this means to them that they cannot assert the sentence is true or false.
-The next place to go if you're new to this area is the Brouwer-Heyting-Kolmogorov (BHK) interpretation of intuitionistic logic. That provides a way for classically-trained mathematicians to understand what intuitionists mean when they say certain things, including the way that they redefine the terms "or" and "there exists".
-One example of what Kolmogorov means by "finitary conclusion" is an equation between numbers such as $2^2 = 4$ or $0=1$. He might even accept equations with free variables like $3x + 5x = 8x$ or $3^x \not = 4y$. Working in classical Peano arithmetic, it would be possible to make a proof which ends with that sort of statement, but which applies LPO along the way. Such a proof would not be acceptable in intuitionistic logic.
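-To make LPO concrete, consider a property $P(n)$ that is mechanically decidable for each fixed $n$. A small Python sketch (an illustration only; the predicate chosen here is my own example, reading "decidable" as "checkable by a finite search"):
-
-def is_prime(k):
-    # trial division; enough for an illustration
-    if k < 2:
-        return False
-    return all(k % d for d in range(2, int(k**0.5) + 1))
-
-def P(n):
-    # decidable for each fixed n by a finite search:
-    # "the even number 2n + 4 is NOT a sum of two primes"
-    m = 2 * n + 4
-    return not any(is_prime(p) and is_prime(m - p) for p in range(2, m - 1))
-
-Each individual $P(n)$ is settled by running the finite search, so $(\forall m)(P(m) \lor \lnot P(m))$ is intuitionistically acceptable. But asserting $(\exists m)P(m) \lor (\forall m)\lnot P(m)$ amounts to deciding the Goldbach conjecture, which no one currently knows how to do; that is exactly the extra strength LPO carries.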
-There are "conservation results" that show that if sentences of a certain form are provable in Peano arithmetic, then they are actually provable (possibly with a different proof) in intuitionistic Heyting arithmetic. This is what Kolmogov is referring to when he says - -We shall prove that all the finitary conclusions obtained by means of a transfinite use of the principle of excluded middle are correct and can be proved even without its help.<|endoftext|> -TITLE: Repeated Factorials and Repeated Square Rooting -QUESTION [27 upvotes]: I was talking with friends about silly questions involving what numbers you can get using only a single digit "3" and unary operations. We eventually conjectured that using only factorials and square roots you can get arbitrarily close to any number greater than or equal to $1$. But we are having trouble proving or disproving the conjecture. -Precisely, start with the number $3$. Then take its factorial $m$ times, and then take the square root of its result $n$ times. Ie, $(3!!\ldots !)^{\frac{1}{2^n}}$ where there are $m$ factorials. Let the set of numbers achievable in this way be $X$. Is $X$ dense in $[1,\infty)$? -The only progress we have made is to show that any interval $[x,x^2]$ for $x>1$ there is a limit point $a \in [x,x^2)$ of $X$. This is true because for any $z > x^2$ we can square root it an appropriate number of times to get it in $[x,x^2)$. We can do this for infinitely many points of the sequence $3,3!,3!!,\ldots$. And all points we get through this process are distinct. Let $F(m)$ be $3$ with $m$ factorials. If $F(m)^{\frac{1}{2^n}} = F(a)^{\frac{1}{2^b}}$ then we can raise each side to a power and get an expression of the form $F(m) = F(a)^{\frac{1}{2^c}}$. But factorials are not squares. (To see this for $q!$, note that there is a prime in $[q/2,q]$ by bertrand's postulate. This prime appears only once in the factorization of $q!$). So all numbers we get are distinct. We have infinitely many distinct points in $X \cap [x,x^2)$ and therefore there is a limit point. -Other than that, we can't figure anything out. It feels like $X$ should be dense. Consider some interval $[x,x^2]$. Take lots of factorials of $3$. Then take square roots of that until it falls in $[x,x^2]$. It feels like the points we get will be somewhat uniformly distributed around $[x,x^2]$ and therefore dense. - -REPLY [3 votes]: A related conjecture is posted at this link-that with the floor function you can get all naturals. I remember seeing a proof to that version with $\pi$ instead of $3$ a while ago, but can't find it now.<|endoftext|> -TITLE: Nasty examples for different classes of functions -QUESTION [13 upvotes]: Let $f: \mathbb{R} \to \mathbb{R}$ be a function. Usually when proving a theorem where $f$ is assumed to be continuous, differentiable, $C^1$ or smooth, it is enough to draw intuition by assuming that $f$ is piecewise smooth (something that one could perhaps draw on a paper without lifting your pencil). What I'm saying is that in all these cases my mental picture is about the same. This works most of the time, but sometimes it of course doesn't. -Hence I would like to ask for examples of continuous, differentiable and $C^1$ functions, which would highlight the differences between the different classes. I'm especially interested in how nasty differentiable functions can be compared to continuously differentiable ones. 
Also, if it is the case that the one dimensional case happens to be uninteresting, feel free to expand your answer to functions $\mathbb{R}^n \to \mathbb{R}^m$. The optimal answer would also list some general minimal 'sanity-checks' for different classes of functions, which a proof of a theorem concerning a particular class would have to take into account.
-
-REPLY [3 votes]: Another example of what can go wrong is Volterra's function.
-
-It is differentiable everywhere.
-Its derivative is bounded everywhere.
-Its derivative is not Riemann-integrable.<|endoftext|>
-TITLE: A question on $\operatorname{GL}_2(\mathbb R)$
-QUESTION [5 upvotes]: I know that all finite subgroups of $\operatorname{SL}_2(\mathbb R)$ are cyclic by a standard averaging argument. They are all conjugate to some finite subgroup of $\operatorname{SO}_2(\mathbb R)$ and therefore cyclic. My question is how to classify all finite subgroups of $\operatorname{GL}_2(\mathbb R)$.
-Thanking you.
-
-REPLY [5 votes]: The same averaging argument you and rvk describe gives you that any finite subgroup $G$ of $\operatorname{GL}_2(\mathbb{R})$ is conjugate to a subgroup of $O_2(\mathbb{R})$. (For that matter, this is also shown in $\S 1.3$ of these notes. The rest of the notes treat complex and $p$-adic analogues, with applications to classifying finite subgroups of $\operatorname{GL}_n(\mathbb{Q})$.)
-Without loss of generality, we may as well assume that $G \subset O_2(\mathbb{R})$. Let $H = G \cap SO_2(\mathbb{R})$. Since $[O_2(\mathbb{R}):SO_2(\mathbb{R})] = 2$, we have $[G:H] = 1$ or $2$. If $G = H$ then $G$ itself is a finite subgroup of $SO_2(\mathbb{R})$, hence cyclic. If $[G:H] = 2$, let $g \in G \setminus H$. Then $\det(g) = -1$, so by the Cartan-Dieudonné Theorem $g$ is a linear reflection and thus $G = \langle H, g \rangle$ is a dihedral group $D_n$.<|endoftext|>
-TITLE: Vector Spaces: Finding a basis and Dimension
-QUESTION [5 upvotes]: I could really use some step-by-step help on these two problems, please. Thank you in advance.
-1.) Let $V = \{\mathbf{A} \mid \mathbf{A}$ is an $n \times n$ matrix, $n$ fixed, $\det(\mathbf{A}) = 0\}$. Is $V$, with the usual addition and scalar multiplication, a vector space? Give a reason. If yes, find the dimension and a basis for $V$.
-2.) Let $V = \{f(x)\mid f(x) = (ax + b)e^{-x},\; a,b \in \mathbb{R}\}$. Is $V$, with the usual addition and scalar multiplication, a vector space? Give a reason. If yes, find the dimension and a basis for $V$.
-
-REPLY [6 votes]: Okay, this got a bit mangled.
-(1) Is the set $\mathbf{V}=\{A\mid A\text{ is an }n\times n\text{ matrix and }\det(A)=0\}$ a vector space, under the usual addition and scalar multiplication of matrices?
-Since the set of all $n\times n$ matrices is a vector space, the question is really whether this is a subspace (all the axioms of a vector space will necessarily hold, except perhaps for the existence of a zero vector, the existence of inverses, and the "hidden" axioms that the set must be closed under vector addition and scalar multiplication: the sum of two vectors in $\mathbf{V}$ must lie in $\mathbf{V}$, and every scalar multiple of a vector in $\mathbf{V}$ must lie in $\mathbf{V}$).
-Scalar multiplication is easy: if $\alpha\in\mathbb{R}$ and $A$ is any $n\times n$ matrix, then we know that $\det(\alpha A) = \alpha^n \det(A)$. So $\det(\alpha A)=0$ if and only if $\alpha = 0$ or $\det(A)=0$. So, if $A\in\mathbf{V}$, then $\alpha A\in\mathbf{V}$ for all scalars $\alpha$.
-What about vector addition?
We would need to show that if $A$ and $B$ both have determinant zero, then so does $A+B$. But this is not the case, as Agustí Roig points out. It should be easy to come up with a similar example for any $n\gt 0$. So $\mathbf{V}$ is not closed under addition (exhibit an explicit pair of matrices, both in $\mathbf{V}$, but whose sum is not in $\mathbf{V}$; sometimes the sum of two matrices in $\mathbf{V}$ is in $\mathbf{V}$, the point is that it doesn't always lie in $\mathbf{V}$, so give an example!).
-(2). Again, since all real-valued functions form a vector space, the only issue is whether the set $\mathbf{V} = \{ f(x)\mid f(x) = (ax+b)e^{-x},\ a,b\in\mathbb{R}\}$ is closed under sums and scalar multiplication.
-Suppose $f(x),g(x)\in\mathbf{V}$. Will $f(x)+g(x)$ lie in $\mathbf{V}$ as well? Write
-\begin{align*}
-f(x) &= (ax+b)e^{-x}\\
-g(x) &= (cx+d)e^{-x}\\
-f(x)+g(x) &= \Bigl( (ax+b)e^{-x}\Bigr) + \Bigl( (cx+d)e^{-x}\Bigr)\\
-&= \Bigl( (ax+b) + (cx+d)\Bigr) e^{-x}\\
-&= \Bigl( (a+c)x + (b+d)\Bigr) e^{-x}.
-\end{align*}
-So setting $A=a+c$ and $B=b+d$, which are real numbers because all of $a,b,c,d$ are real numbers, we see that we can write $f(x)+g(x)$ in the form $(Ax+B)e^{-x}$. So if each of $f(x)$ and $g(x)$ is in $\mathbf{V}$, then $f(x)+g(x)\in\mathbf{V}$.
-Now you need to show that if $f(x)\in\mathbf{V}$ and $\alpha\in\mathbb{R}$, then $\alpha f(x)\in\mathbf{V}$. I'll leave that to you to do.
-What about the dimension of $\mathbf{V}$? If you want to describe an element of $\mathbf{V}$, you really only need to specify two things: the value of $a$ and the value of $b$. That suggests that the dimension will be $2$. Can you find a set of $2$ linearly independent functions, both in $\mathbf{V}$, that span $\mathbf{V}$?<|endoftext|>
-TITLE: nth term of sequences
-QUESTION [8 upvotes]: I'm a PhD student who teaches part time at a high school and I noticed something when teaching sequences today. I asked my students to find the nth term (the general term) for some sequences. They observed:
-If $a_n=n$ then the first differences will be $1$.
-If $a_n=n^2$ then the second differences will be $2$.
-If $a_n=n^3$ then the third differences will be $6$.
-If $a_n=n^4$ then the fourth differences will be $27$.
-So I can now construct a sequence: $1,2,6,27,120,720,...$
-What are these numbers? Can I find $a_n$? How?
-Thanks!
-
-REPLY [9 votes]: Your $27$ should be $24$. Now it's easy to see the pattern!
-Edited to add:
-To see why, take (for example) $a_n=n^4$. The first difference is
-$a_{n+1}-a_n = (n+1)^4 - n^4 = 4n^3 +$ lower powers of $n$.
-So the fourth difference is four times the third difference of $n^3$ plus the third difference of lower powers of $n$. Lower powers have zero third difference, so this is $4 \times 6 = 24$.<|endoftext|>
-TITLE: Expectancy value for the percentage of points lying in the Convex Hull (3D)
-QUESTION [15 upvotes]: Suppose I choose $n$ uniformly distributed random points in a 3D cube. What is the expected value for the percentage of points lying on the convex hull as a function of $n$?
-Just as a reference, I made the following experiment in Mathematica 8:
-Needs["TetGenLink`"]; Show[
- DiscretePlot[
- 1/k Mean[Length /@ Union /@ (Flatten /@ (TetGenConvexHull /@
- RandomReal[{0, 1}, {500, k, 3}]))], {k, 4, 200, 3},
- AxesOrigin -> {0, 0}, Joined -> True], Plot[.5, {x, 1, 200}]]
-
-
-Edit
-Again as a reference, if we plot the mean number of points in the convex hull (not the percentage) as a function of the total number of points we get:
-
-Edit 2
-The second plot in Log and Log-Log forms:
-
-Edit 3
-As noted by @Raskolnikov in the comments below, and confirmed by the "experimental" result, the case n=5 can be thought of as the Cube Tetrahedron Picking Formula, which is basically the probability of the fifth point being inside the tetrahedron determined by the other four points.
-As noted by @Steven Stadnicki, that is not completely obvious because you are choosing the four points beforehand and some permutation could have been left aside ... but the experiments confirm @Raskolnikov's reasoning.
-Edit 4
-I fitted the data using Eureqa, a nice package from Cornell for guessing fitting functions, and got as a probable fit for the number of points in the convex hull:
-f[x_] := 1.4723399 Log@x Log@(3.0543704 + x)
-which gives:
-
-In line with Raskolnikov's answer about the asymptotic behavior. I wasn't able to read the cited paper, though (restricted access).
-
-REPLY [6 votes]: In the meantime, I have found this article which addresses an even more general issue, namely the same problem but for the interior of a $d$-dimensional polytope.
-So, for the 3D-cube, this implies $O((\log n)^2)$ for the number of points.<|endoftext|>
-TITLE: What's the significance of Tate's thesis?
-QUESTION [152 upvotes]: I've just sat through several lectures that proved most of the results in Tate's thesis: the self-duality of the adeles, the construction of "zeta functions" by integration, and the proof of the functional equation. However, while I was able to follow at least some of the arguments in the individual steps, I understand almost nothing about the big picture. My impression so far is that Tate invented a new and fancier way of proving the functional equation than the Hecke analytic approach. But is there more to the story than "this is a neat way of proving something already known"?
-I'm under the impression that Tate's thesis laid the foundations for the Langlands program, but I don't understand this properly yet.
-Can someone explain to me what's the real significance and meaning of Tate's thesis?
-
-REPLY [191 votes]: Tate's thesis introduces the concept, ubiquitous now, of doing analysis, and especially Fourier analysis, on the locally compact ring of adeles. In this setting, the discrete subgroup $\mathbb Z \subset \mathbb R$ is replaced
-by the discrete subgroup $\mathbb Q \subset \mathbb A$.
-This has a number of implications, some of which are:
-
-$\mathbb Q$ is a field, and $\mathbb A$ is essentially a product of fields.
-It is technically almost always easier to work with fields rather than more general rings (such as $\mathbb Z$). The adelic formalism allows one to have one's cake and eat it too (in some sense): one is working with the field $\mathbb Q$, not the ring $\mathbb Z$, but the primes are still present, in the factorization of $\mathbb A$ as a product. (And this product structure of
-$\mathbb A$, which is formally very simple, captures in some subtle way the
-deeper sense of "product" in the statement of the fundamental theorem of arithmetic, i.e.
that any natural number is a unique product of prime powers.) -Tate writes zeta-functions, or more generally, Hecke $L$-series, as integrals over $\mathbb A^{\times}$. The Euler product structure of the $L$-series then becomes simply a factorization of this integral according to the product structure of $\mathbb A^{\times}$. (This is a manifestation of the parenthetical remark at the end of point (1).) -The proof of the functional equation becomes (more-or-less) just an application of Poisson summation (in the adelic context). -It is worth comparing this with the classical proof (which one can read in Lang's book, among other places, if memory serves). Classically, one takes -the sum over ideals representation of the $L$-function, and decomposes it first -into a finite collection of sums, indexed by the ideal class group, each sum taking place over all the integral ideals in a given ideal class. These individual series are then described as Mellin transforms of theta series, -and the functional equation is derived from the transformation properties of the -theta series, the latter being proved by an application of Poisson summation in the classical setting. -Once one unpacks all the details, Tate's proof and Hecke's proof don't look so different; but the difference in packaging is enormous! In Tate's approach there is no need to unpack everything (for example, the ideal class group is just lurking around in the background implicitly, and there is no need to bring it out explicitly), while in the classical arguments such unpacking is key to the whole thing. -As another example of the conceptual clarity and simplification that Tate's approach gives, you might consider the way he derives the formula for the residue at $s = 1$ of the zeta function of a number field (i.e. the general class number formula) and compare it with the classical derivation. -Working in the case of a function field over a finite field, Tate derives the Riemann--Roch formula (in the form $\dim H^0(C,\mathcal O(D)) - \dim -H^0(C,\mathcal O(K - D)) = 1 + \deg D - g$) as a straightforward consequence of Poisson summation. Among other things, this provides a rather striking unification of (what we now call) Serre duality and Fourier duality. (Although I don't know the precise history, this probably has antecedents in the literature: the original proof of the functional equation of the $\zeta$-function for a curve over a finite field, by Schmidt, proceeded by applying Riemann--Roch; so Tate is essentially reversing this argument.) -Tate's explication of the functional equation of $L$-series in terms of local functional equations shows that the global root number --- i.e. the constant that appears in the functional equation --- is a product of local numbers. As far as I understand, this wasn't known (and perhaps not even suspected) prior to Tate's proof. -This may seem slightly esoteric, but experience shows that one should regard global root numbers, and their factorization into a product of local root numbers (or $\epsilon$-factors), to be of essentially equal importance to global $L$-series, and their (Euler product) factorization into local $L$-factors. - -Summary/Conclusion: The aim of the above list is just to highlight some of the points to watch out for while studying Tate's thesis. Let me now make some remarks at a more general level. -In the classical theory of zeta and $L$-functions, there is a tension between the analytic tools, which are essentially additive Fourier theory (e.g. 
Poisson summation) and the multiplicative aspects of the theory (exemplified by the Euler product). Tate's thesis resolves these tensions by moving to the adelic context.
-In the general theory of automorphic forms (say on a quotient $\Gamma
-\backslash G(\mathbb R)$ for some congruence subgroup $\Gamma$ of the integral points $G(\mathbb Z)$ of a semi-simple or reductive Lie group $G(\mathbb R)$) there is the same tension between the harmonic analysis and Lie theory (which $\Gamma \backslash G(\mathbb R)$ is well set-up to accommodate) and the theory of Hecke operators (which pertain to the finite primes, which are not particularly visible in this classical description), which is resolved by moving to the adelic picture $G(\mathbb Q)\backslash G(\mathbb A)$.
-Another thing to bear in mind is that the theories of $L$-series and of automorphic forms are quite technical in nature, and so conceptual and aesthetic simplifications (as in Tate's thesis) go hand in hand with technical simplifications. (See e.g. points (1) and (3) above.) One instance of this in the automorphic forms context is that conjugacy classes in $G(\mathbb Q)$ are much easier to understand than in a congruence subgroup $\Gamma$ of $G(\mathbb Z)$. (Another instance of the technical superiority of fields over more general rings.) One might also consider the Tamagawa number one theorem, which gives an elegant reformulation and generalization of a myriad of classical results.
-So, to finish, Tate's thesis is significant because it improves the classical point of view in a number of ways, achieving conceptual, technical, and aesthetic simplifications. At the same time, it suggests a way of unifying harmonic analytic and arithmetic considerations in the general context of automorphic forms, by working in the adelic context.
-Finally, I strongly suggest working through the details of Tate's thesis in the particular case of the Riemann zeta function, and seeing how his arguments and construction compare with the classical ones. If you haven't already done this, it should be quite enlightening. (In particular, it will illuminate points (1), (2), and (3) above.)<|endoftext|>
-TITLE: Translating "neither...nor" into a mathematical logical expression
-QUESTION [6 upvotes]: Having some difficulty doing translations for complicated neither...nor sentences.
-With these characters:
-~: Negation; $\vee$: Disjunction; &: Conjunction.
-I'm trying to translate and understand, for example:
-"Neither John nor Mary are standing in front of either Jim or Cary"
-I have been told that "Neither e nor a is to the right of c" is successfully translated as follows: ~(RightOf(e, c) $\vee$ RightOf(a, c))
-What about just doing a translation on: "I like neither chocolate nor vanilla"
-~(Like(chocolate) $\vee$ Like(vanilla))
-What confuses me the most is that the sentence "I like neither chocolate nor vanilla" is translated to ~(Like(chocolate) $\vee$ Like(vanilla)) and the sentence "Neither e nor a is to the right of c and to the left of b" is translated to ~(RightOf(e, c) & LeftOf(e, b)) & ~(RightOf(a, c) & LeftOf(a, b)). Both sentences use neither...nor; however, in the second sentence I see no disjunction, but in the first it exists.
-Any food for thought and help would be appreciated!
-
-REPLY [12 votes]: Mathematically, there are two ways of "translating" "I like neither chocolate nor vanilla" (the two ways are logically equivalent, an instance of de Morgan's laws).
You can write either:
-$$\neg\bigl(\mathrm{like}(\mathrm{chocolate})\bigr)\ \&\ \neg\bigl(\mathrm{like}(\mathrm{vanilla})\bigr)$$
-(that is, "I don't like chocolate and I don't like vanilla") or as
-$$\neg\bigl(\mathrm{like}(\mathrm{chocolate})\ \vee\ \mathrm{like}(\mathrm{vanilla})\bigr).$$
-The two are equivalent, because $\neg(P\vee Q) \equiv (\neg P)\&(\neg Q)$ (this is one of De Morgan's Laws: for "P or Q" to be false, you need both P to be false and Q to be false).
-The second sentence is similar: "Neither e nor a is to the right of c and to the left of b".
-You mention you are confused about it; I think it's just a confusion of parsing (it is a bit awkwardly constructed). What it says is "Neither e nor a satisfies xxxx". That is: "e does not satisfy xxxx, and a does not satisfy xxxx". What is this xxxx? It is the condition "is to the right of c and to the left of b".
-So the sentence is the same as "neither e is to the right of c and to the left of b, nor a is to the right of c and to the left of b." (I added the grey spaces as parsing aids).
-(Note that "e is not to the right of c and to the left of b" means that either e is not to the right of c, or e is not to the left of b, or both; same with a).
-If you wanted to translate it in the same manner as the first translation above, with conjunction, you would have:
-$$\Bigl(\neg\bigl(\mathrm{RightOf}(e,c)\&\mathrm{LeftOf}(e,b)\bigr) \Bigr) \& \Bigl(\neg\bigl(\mathrm{RightOf}(a,c)\&\mathrm{LeftOf}(a,b)\bigr)\Bigr).$$
-But you can equally well use the model of the second translation above, to get the equivalent statement:
-$$\neg\Biggl( \bigl(\mathrm{RightOf}(e,c)\&\mathrm{LeftOf}(e,b)\bigr) \vee \bigl(\mathrm{RightOf}(a,c)\&\mathrm{LeftOf}(a,b)\bigr)\Biggr).$$
-So now you see the disjunction in the second.
-The reason why the first "translation" of this phrase might be better than the second is that the mix of conjunctions and disjunctions ( "(a and b) or (x and y)" ) is usually a little harder to parse than a sequence of conjunctions, even with negations ( "not(a and b) and not(x and y)"). So the latter form is slightly preferred for parsing reasons, but mathematically they are equivalent. The parsing problem does not show up in the vanilla/chocolate example, because there are no conjunctions hiding inside the clauses and complicating the parsing.<|endoftext|>
-TITLE: On existence of an integer between $\sqrt{n}$ and $\sqrt{2n}$ coprime to $n$
-QUESTION [21 upvotes]: I have one proof of the following statement, and I would like to know if there is a simpler proof. I am not sure if "simpler" is the right word or not but, for the purpose of this question, I prefer an elementary proof rather than a short proof depending on a powerful and difficult-to-prove theorem.
-
-For any integer $n \ge 171$, there exists an integer $m$ coprime to $n$ satisfying $n\lt m^2 \lt 2n$.
-
-The proof which I have actually gives an additional guarantee that $m$ is a prime. I do not need this guarantee and want a simpler proof if there is one.
-My proof uses the method used by Nagura [Nag52], which I suspect may be overkill for this purpose. By the same argument as the one in [Nag52], we can prove that for every $x \geq 16769$, there exists a prime between $x$ and $2^\frac 14 x$, and we can verify that the same conclusion holds for $x \ge 32$ by calculation. This implies that for $n \ge 32^2=1024$, there exist at least two primes between $\sqrt{n}$ and $\sqrt{2n}$. Because both primes are greater than $\sqrt{n}$, at least one of them must be coprime to $n$.
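-The remaining finitely many cases below $1024$ can be checked directly by machine; here is a minimal Python sketch of one such search (just one way to run the calculation, nothing more):
-from math import gcd, isqrt
-
-# For each n in [171, 1023], look for an m coprime to n with n < m^2 < 2n.
-for n in range(171, 1024):
-    found = any(gcd(m, n) == 1
-                for m in range(isqrt(n) + 1, isqrt(2 * n - 1) + 1)
-                if n < m * m < 2 * n)
-    assert found, f"counterexample: n = {n}"
-print("verified for 171 <= n <= 1023")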
The case of $171 \le n \le 1023$ is verified by exactly this kind of calculation.
-Is there a simpler proof of the statement above?
-[Nag52] Jitsuro Nagura. On the interval containing at least one prime number. Proceedings of the Japan Academy, 28(4):177–181, 1952.
-
-REPLY [16 votes]: I will show what you want for sufficiently large $n$.
-First question: how many squares are there between $n$ and $2n$? Well, we have $$\#\text{of squares}=\lceil\sqrt{2n}-1\rceil-\lceil\sqrt{n}-1\rceil\geq\sqrt{2n}-\sqrt{n}-2=(\sqrt{2}-1)\sqrt{n}-2\geq \frac{\sqrt{n}}{3}$$ for sufficiently large $n$. So let's look at the square roots of these squares, which form a sequence of more than $\sqrt{n}/3$ consecutive integers. (This is fine since $\gcd(n,m)=1$ if and only if $\gcd(n,m^2)=1$.) The goal is then to show that one of the integers in this sequence is relatively prime to $n$.
-Recall the elementary Eratosthenes-Legendre sieve which says that:
-
-Theorem: For any real $x$ and any $y\geq 0$, we have $$S(x,y;n)=\frac{\phi(n)}{n}y+O\left(2^{\omega(n)}\right)$$ where $S(x,y;n)$ is the number of integers $m$ in the interval $x<m\leq x+y$ with $\gcd(m,n)=1$.
-
-Applying this with $y=\sqrt{n}/3$, since $$\frac{\phi(n)}{n}\frac{\sqrt{n}}{3}+O\left(2^{\omega(n)}\right)>0$$ for all sufficiently large $n$ (the main term dominates the error term because $\frac{\phi(n)}{n}\gg\frac{1}{\log\log n}$ while $2^{\omega(n)}\ll_\epsilon n^\epsilon$), the result then follows.
-Notice that this proof generalizes to show that for any $k$, there exists $N$ such that for any $n>N$ there exists $m\in \left(\sqrt[k]{n},\ \sqrt[k]{2n}\right)$ for which $\gcd(m,n)=1$. In other words, we can show:
-
-For any $k$, taking $n$ large enough guarantees that there will be a $k^{th}$ power between $n$ and $2n$ that is also relatively prime to $n$.
-
-Hope that helps,
-Remark: The key fact which makes this proof work is that $2^{\omega(n)}\ll_\epsilon n^\epsilon$ for every $\epsilon$. The Eratosthenes-Legendre Sieve is really what you get when you try the simplest approach to evaluating $S(x,y;n)$. It follows directly from the inclusion-exclusion principle.
-Added Proof:
-Why is it true that $$\frac{\phi(n)}{n}\geq \frac{e^{-\gamma}}{\log\log n}+O\left(\frac{1}{(\log\log n)^2}\right)?$$ This follows from Mertens' formulas along with Chebyshev's estimates. The error term can be removed by making the constant smaller, which is what I wrote down above, but let's prove this form.
-Look at the $n$ which minimize $\phi(n)/n$ for their size. These will be of the form $$\prod_{p\leq y}p,$$ that is, the product of the first few primes. (*Prove these numbers do indeed minimize.) Taking logarithms introduces $\theta(y)$, so we can deduce $\log \log n = \log y +O(1)$ by Chebyshev's estimate. Next, $$\frac{n}{\phi(n)}=\prod_{p\leq y}\left( 1-\frac{1}{p}\right)^{-1}=e^\gamma\log y+O(1)$$ is one of Mertens' formulas. Taking reciprocals yields $$\frac{\phi(n)}{n} = \frac{e^{-\gamma}}{\log \log n} + O\left(\frac{1}{(\log \log n)^2}\right)$$ as desired.
-That result is actually stronger than we need. All the above proof required was that $n^{-\epsilon}\ll_\epsilon\phi(n)/n$ for every $\epsilon$.<|endoftext|>
-TITLE: Is it possible to link the eigenvalues of a matrix to the Fourier transform of the matrix?
-QUESTION [7 upvotes]: I'm trying to get insight into the eigenvalue spectrum of a square matrix (large N, symmetric, positive semi-definite) using Fourier transforms (I've tried transforming a bunch of things: the diagonal, the skew diagonal, the entire matrix, each eigenvector, etc.). Is there anything I should be looking at specifically?
-
-REPLY [9 votes]: I know of a couple of results for certain special types of matrices.
-1) If you have a circulant matrix, the eigenvalues of that matrix will be the discrete Fourier transform of the first row of the circulant matrix.
-2) For the second one I don't remember all the technical details. Suppose you have a symmetric Toeplitz matrix whose size goes to infinity; then certain functions of the eigenvalues can be written in terms of the Fourier transform of the first row of the matrix. In fact, the expression is written as a Riemann sum, which becomes a Riemann integral in the Fourier domain. In other words, you can write an unwieldy expression involving the eigenvalues of this growing matrix in terms of an integral which is usually much easier to evaluate.
-If you want to learn more, check out Robert Gray's notes at http://ee.stanford.edu/~gray/toeplitz.pdf for an introduction to these kinds of results.
-There is also the book by Grenander and Szego which gives a lot more detail and proves some more general results here
-http://books.google.com/books?id=CFhVdL78wGcC&printsec=frontcover&dq=toeplitz+forms&hl=en&ei=CudxTf_XFtL1gAfH9qRV&sa=X&oi=book_result&ct=result&resnum=1&ved=0CDQQ6AEwAA#v=onepage&q&f=false<|endoftext|>
-TITLE: Prove that $x^{2} \equiv 1 \pmod{2^k}$ has exactly four incongruent solutions
-QUESTION [12 upvotes]: Prove that $x^{2} \equiv 1 \pmod{2^k}$ has exactly four incongruent solutions.
-
-My attempt:
-We have,
-$x^2 - 1 = (x - 1) \times (x + 1)$, then
-$(x - 1)(x + 1) \equiv 0 \pmod{2^k}$
-which implies,
-$2^k|(x - 1)$ or $2^k|(x + 1) \implies x \equiv \pm 1 \pmod{2^k} (1)$
-Furthermore, $2^{k-1} \equiv 0 \pmod{2^k} \Leftrightarrow 2^{k-1} + 1 \equiv 1 \pmod{2^k}$.
-Multiplying both sides by $-1$, we have another congruence, namely $-(2^{k-1} + 1) \equiv -1 \pmod{2^k}$
-Hence, $x \equiv \pm(1 + 2^{k-1}) \pmod{2^k} (2)$
-From $(1)$ and $(2)$, we can conclude that $x^{2} \equiv 1 \pmod{2^k}$ has four incongruent solutions.
-Am I on the right track?
-Thanks,
-
-REPLY [6 votes]: For any positive integer $n$, let $\mathbb{Z}/n\mathbb{Z}$ be the usual ring of integers modulo $n$ and let $U(n) = (\mathbb{Z}/n \mathbb{Z})^{\times}$ be its unit group -- i.e., the "reduced residues" modulo $n$ under multiplication.
-Your question can be viewed as asking about the structure of this unit group when
-$n = 2^k$ for $k \geq 3$. (Note that $U(2)$ is the trivial group and $U(4)$ has order $2$, so the structure of these groups is clear.)
-In fact it is a standard result -- at the border of undergraduate number theory and undergraduate algebra -- to give an exact computation of $U(n)$ for all positive integers $n$. See for instance Theorem 1 (and the discussion immediately preceding it, which reduces the general problem to the prime power case) of these notes. Especially, for all $k \geq 3$,
-$U(2^k) \cong Z_2 \times Z_{2^{k-2}}$,
-where here $Z_a$ denotes a cyclic group of order $a$. In a product $H_1 \times H_2$
-of finite (multiplicatively written) commutative groups, an element $h = (h_1,h_2)$ satisfies $h^2 = 1$ iff this holds separately for both coordinates $h_1^2 = h_2^2 = 1$. Here $H_1$ and $H_2$ are both cyclic groups of even order, so each has exactly two elements which square to $1$, and thus $U(2^k)$ has $2 \times 2 = 4$ such elements.
-Of course one can be much more explicit about what these elements are.
This takes place in the proof of the result as well as in the answers others have given to this question.<|endoftext|>
-TITLE: Cardinality of sets of functions with well-ordered domain and codomain
-QUESTION [5 upvotes]: I would like to determine the cardinality of the sets specified below. Nevertheless, I don't know how to approach or how to start such a proof. Any help will be appreciated.
-If $X$ and $Y$ are well-ordered sets, then determine the cardinality of:
-
-$\{f : f$ is a function from $X$ to $Y\}$
-$\{f : f$ is an order-preserving function from $X$ to $Y\}$
-$\{f : f$ is a surjective and order-preserving function from $X$ to $Y\}$
-
-REPLY [5 votes]: The cardinality of the set of functions from $X$ to $Y$ is the definition of the cardinal $Y^X$.
-The number of order-preserving functions from $X$ to $Y$, given that well-orders of each set have been fixed, depends on the nature of those orders. For example, there are no such functions in the case that the order type of $X$ is longer than the order type of $Y$. If $X$ and $Y$ are finite, then there is some interesting combinatorics involved to give the right answer. For example, if both are finite of the same size, there is only one order-preserving function. If $Y$ is one bigger, then there are $|Y|$ many (you can put the hole anywhere). And so on. If $Y$ is infinite, of size at least that of $X$, then you get $Y^X$ again, since you can code any function into the omitted part, by leaving gaps of a certain length.
-A surjective order-preserving map is an isomorphism, and for well-orders, this is unique if it exists at all, so the answer is either 0 or 1, depending on whether the orders are isomorphic or not.<|endoftext|>
-TITLE: construction of a linear functional in $\mathcal{C}([0,1])$
-QUESTION [6 upvotes]: Can someone help me to construct a linear functional in $\mathcal{C}([0,1])$ that does not attain its norm?
-Actually, I want to prove that $\mathcal{C}([0,1])$ is not a reflexive Banach space. Is it sufficient to construct that kind of functional?
-
-REPLY [6 votes]: Knowing some deeper theorems will help to see what to do.
-The Riesz representation theorem says that the continuous linear functionals on $C([0,1])$ are precisely the signed Radon measures on $[0,1]$. It's not hard to see that any bounded measurable function is a continuous linear functional on the space of signed measures, i.e. is an element of $C([0,1])^{**}$. So the existence of bounded discontinuous functions on $[0,1]$ shows that $C([0,1])$ cannot be reflexive.<|endoftext|>
-TITLE: Elliptic functions and Weierstrass $\wp$-function
-QUESTION [6 upvotes]: Question that seems pretty easy, but I can't formalize it:
-Let $L \subset \mathbb{C}$ be a lattice, and $f(z)$ be an elliptic function for $L$, that is, a meromorphic function so that $f(z+\omega) = f(z)$ for all $\omega \in L$. Assume that $f$ is analytic except for double poles at each point of the lattice $L$. Show that $f = a\wp + b$ for some constants $a,b$.
-What I tried:
-$\displaystyle f(z) = \prod_{\omega \in L} {\frac{g(z)}{(z-\omega)^2}}$, where $g(z)$ is analytic and therefore constant in the fundamental domain. Now what is left to do is to take the product apart into partial fractions, and then I get almost what is needed, except it's not one constant $a$ and $b = \sum_{\omega \in L} -\frac1{\omega^2}$.
-Am I right? How do I proceed?
-Thanks in advance.
-REPLY [5 votes]: Here is a hint to help orient you: Suppose $f$ is doubly periodic on the lattice $L_{\tau} = \mathbb{Z} \oplus \tau \mathbb{Z}$, where $\tau \in \mathbb{H}$, and $f$ is assumed analytic everywhere save for double poles on the lattice points of $L_{\tau}$. What can you say about the function $f/\wp$ on $L_{\tau}$?
-Compare the position of the poles of both $f$ and $\wp$. If they occur at the same place, then $f/\wp$ has no poles (provided that the order of the poles is the same), and so is doubly periodic as well as analytic, hence constant on $L_{\tau}$.
-Now generalize $f$ to an arbitrary lattice $\omega_1 \mathbb{Z} \oplus \omega_2 \mathbb{Z}$, like the one you posted, and consider the function $g = (f - a)/\wp = f/\wp - a/\wp$, where $a$ is an arbitrary constant. What properties will $g$ have on the lattice?<|endoftext|>
-TITLE: Are there clever ways to evaluate this infinite series?
-QUESTION [16 upvotes]: Here is an interesting infinite series. It would be great to see a method to evaluate it, if possible. I know it converges to a little less than $11/40$:
-$\displaystyle\sum_{k=1}^{\infty}\frac{1}{4^{k}+k!}$
-I could not think of any good identities to start this.
-Thanks a million to those who can show how to evaluate it.
-Maybe even in general, $\displaystyle\sum_{k=1}^{\infty}\frac{1}{x^{k}+k!}$, where $x\geq 1$.
-
-REPLY [2 votes]: The only approach I can think of is to narrow down the difference, i.e. the remainder term, between the exact value of the series and the value of some partial sum that we use as an estimate, and judge how good or close our estimate is. Take the remainder term $R_n=\sum_{k=n+1}^\infty \frac{1}{4^k+k!}$ and $T_n=\sum_{k=n+1}^\infty \frac{1}{4^k}$; we know that $R_n\lt T_n\lt\int_{n}^\infty \frac{1}{4^k}\,dk$. The higher you choose your $n$ to be, the lower the remainder (error) term. For example, the $n=7$ partial sum is $s_7=0.2745411421$ and $R_7\lt 0.0000113$. So the partial sum is correct to at least three or four decimal places.
-(This was too long for a comment.)<|endoftext|>
-TITLE: Quaternionic veronese Embedding
-QUESTION [13 upvotes]: I know that the complex projective line $\mathbb{C}P^1$ can be embedded in the complex projective space $\mathbb{C}P^n$ (Veronese embedding). For example, $\mathbb{C}P^1\rightarrow\mathbb{C}P^3$ is given explicitly by $(z,w)\mapsto(z^3,z^2w,zw^2,w^3)$ in homogeneous coordinates.
-I was wondering if the same could be done with quaternionic projective spaces, i.e. is there a 'Veronese type' embedding $\mathbb{H}P^1\rightarrow\mathbb{H}P^n$?
-(I know that the non-commutativity of quaternions makes the Veronese map ill defined.)
-
-REPLY [3 votes]: The Veronese embedding is the restriction of the Segre embedding to the diagonal, so let's talk about quaternionic Segre embeddings instead. Now I have to admit, it's not a real answer, just 2 remarks:
-
-Segre embedding $\mathbb CP^{\infty}\times\mathbb CP^{\infty}\to\mathbb CP^{\infty}$ gives a structure of an H-space on $\mathbb CP^{\infty}$. But I don't think $\mathbb HP^{\infty}$ admits an H-space structure. (See also Hatcher, 4L.4.)
-Ordinary Segre embedding $\mathbb P(V)\times\mathbb P(W)\to\mathbb P(V\otimes W)$ maps a pair of 1-dimensional subspaces to their tensor product. Now, the tensor product of two (say, left) quaternionic vector spaces is not a quaternionic vector space.
But one can take the tensor product over the complex numbers — it induces a map $\mathbb P(V)\times\mathbb P(W)\to Gr_2(V\otimes_{\mathbb C}W)$.<|endoftext|>
-TITLE: proof of inequality $e^x\le x+e^{x^2}$
-QUESTION [12 upvotes]: Does anybody have a simple proof of this inequality
-$$e^x\le x+e^{x^2}.$$
-Thanks.
-
-REPLY [18 votes]: For $x \geq 0$,
-$$
-e^{x^2 - x} + x e^{-x} \geq 1 + (x^2 - x) + x (1-x) = 1 \> .
-$$
-For $x < 0$, let $y = - x > 0$, whence,
-$$
-e^{-x^2}(e^x - x) = e^{-y^2} (e^{-y} + y) \leq e^{-y^2} (1 - y + y^2/2 + y) \leq e^{-y^2}(1+y^2) \leq e^{-y^2} e^{y^2} = 1 \> .
-$$
-
-Notice that we've only used the basic facts that $e^x \geq 1 + x$ for all $x \in \mathbb{R}$ and that $e^{-x} \leq 1 - x + x^2/2$ for $x \geq 0$, both of which are trivial to derive by simple differentiation, similar to Didier's approach.<|endoftext|>
-TITLE: A binomial coefficient identity?
-QUESTION [8 upvotes]: Suppose $p$, $k$ and $s$ are integers with $s,k \le p$. Consider the following polynomial in $x$ and $y$,
-$$ \sum_{\ell=0}^k \binom{s}{\ell} \binom{p-s}{k-\ell} x^\ell
-y^{p-\ell}$$
-Does this expression look familiar to anyone? Is there a closed form?
-
-REPLY [2 votes]: We can transform
-$$\sum_{\ell=0}^k \binom{s}{\ell} \binom{p-s}{k-\ell} x^\ell y^{p-\ell}$$
-into a hypergeometric form as follows: we factor out $y^p$ like so
-$$y^p\sum_{\ell=0}^k \binom{s}{\ell} \binom{p-s}{k-\ell} \left(\frac{x}{y}\right)^\ell$$
-and then we can easily transform the first binomial coefficient to a Pochhammer symbol:
-$$y^p\sum_{\ell=0}^k \frac{(-1)^\ell (-s)_\ell}{\ell!} \binom{p-s}{k-\ell} \left(\frac{x}{y}\right)^\ell$$
-The second one requires a bit more work; using this identity, we have
-$$y^p\binom{p-s}{k}\sum_{\ell=0}^k \frac{(-s)_\ell}{\ell!} (-1)^\ell \frac{(k-\ell+1)_\ell}{(p-s-k+1)_\ell}\left(\frac{x}{y}\right)^\ell$$
-and then use this identity to yield
-$$y^p\binom{p-s}{k}\sum_{\ell=0}^k \frac{(-s)_\ell (-k)_\ell}{(p-s-k+1)_\ell} \frac1{\ell!}\left(\frac{x}{y}\right)^\ell$$
-from which we find that we have a finite ${}_2 F_1$ hypergeometric sum; more specifically we have
-$$y^p\binom{p-s}{k}\ {}_2 F_1\!\left({-k,\ -s \atop p-k-s+1}\;\middle|\; \frac{x}{y}\right)$$
-With either the hypergeometric expression or the sum before it, we find that the sum has $\min(k,s)+1$ terms.
-From the hypergeometric expression, further transformations might be possible. Alternatively, the expression itself can directly be used for numerical evaluation, since hypergeometric functions satisfy a three-term recurrence; start with the expressions corresponding to $k=0$ and $k=1$ and recurse from there to numerically evaluate the expression for a given $k$.<|endoftext|>
-TITLE: Is $6.12345678910111213141516171819202122\ldots$ transcendental?
-QUESTION [58 upvotes]: My son was busily memorizing digits of $\pi$ when he asked if any power of $\pi$ was an integer. I told him: $\pi$ is transcendental, so no non-zero integer power can be an integer.
-After tiring of memorizing $\pi$, he resolved to discover a new irrational whose expansion is easier to memorize. He invented (probably re-invented) the number $J$:
-$$J = 6.12345678910111213141516171819202122\ldots$$
-which clearly lets you name as many digits as you like pretty easily. He asked me if $J$ is transcendental just like $\pi$, and I said it must be but I didn't know for sure. Is there an easy way to determine this?
-I can show that $\pi$ is transcendental (using Lindemann-Weierstrass) but it doesn't work for arbitrary numbers like $J$, I don't think.
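-For what it's worth, the digits of $J$ are trivial to generate by construction; here is a minimal Python sketch of that (the helper name j_digits is made up for illustration):
-def j_digits(n):
-    # Concatenate the decimal expansions of 1, 2, 3, ... until we have
-    # n digits after the point, then prepend the integer part "6.".
-    digits = []
-    k = 1
-    while len(digits) < n:
-        digits.extend(str(k))
-        k += 1
-    return "6." + "".join(digits[:n])
-
-print(j_digits(30))  # 6.123456789101112131415161718192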
-REPLY [46 votes]: This is a transcendental number, in fact one of the best known ones: it is $6+$ Champernowne's number.
-Kurt Mahler was the first to show that the number is transcendental; a proof can be found in his "Lectures on Diophantine approximations", available through Project Euclid. The argument (as typical in this area) consists in analyzing the rate at which rational numbers can approximate the constant (see the section on "Approximation by rational numbers: Liouville to Roth" in the Wikipedia entry for Transcendence theory).
-An excellent book to learn about proofs of transcendence is "Making transcendence transparent: an intuitive approach to classical transcendental number theory", by Edward Burger and Robert Tubbs.<|endoftext|>
-TITLE: In an additive category, why is finite products the same as finite coproducts?
-QUESTION [24 upvotes]: In an additive category, why are finite products the same as finite coproducts?
-This is relatively easy to prove when the category is R-mod, but my intuition/creativity fails to see how the method can be extended to arbitrary additive categories.
-Specifically, a category (in Weibel, "An introduction to homological algebra") is called additive if the Hom-sets are abelian groups, composition of morphisms distributes over addition, and it has a distinguished zero object (that is, an object that is both initial and terminal).
-After giving this definition, Weibel claims, without further explanation, that "this structure is enough to make finite products the same as finite coproducts".
-How is this?
-
-REPLY [31 votes]: Note that a product $A \times B$ is the same as a pull-back diagram
-A x B -> B
- | |
- v v
- A ---> 0
-
-with maps $p:A \times B \to A$ and $q: A \times B \to B$.
-In particular there is a map $i: A \to A \times B$ such that $pi = 1_{A}$ and $qi = 0$ as well as a map $j: B \to A \times B$ such that $qj = 1_{B}$ and $pj = 0$. Now note that $p(ip + jq) = pip + 0 = p$ and $q(ip + jq) = q$ so that $ip + jq = 1_{A \times B}$.
-Let us check that $i: A \to A \times B$ and $j: B \to A \times B$ define a coproduct. Given $f: A \to D$ and $g: B \to D$ we get a map $d: A \times B \to D$ by setting $d = fp + gq$. Since $di = (fp + gq)i = fpi +gqi = f$ and $dj = g$ it remains to prove uniqueness of $d$. But this is clear as any other such map will satisfy $(d-d')1_{A \times B} = (d-d')(ip+jq) = 0$. Summing up, we have proved that the product of two objects is also a coproduct.
-Now if we want to say that finite products and coproducts exist and coincide, we need a zero object, since otherwise the empty product (the terminal object) and the empty coproduct (the initial object) would not coincide.<|endoftext|>
-TITLE: What does := mean?
-QUESTION [65 upvotes]: What does := mean?
-
-REPLY [5 votes]: I think Bourbaki used it first... not sure... I know physicists use $\equiv$.<|endoftext|>
-TITLE: Show $\sum_{k=1}^n (p_k + \frac{1}{p_k})^2\geq n^3 + 2n + \frac{1}{n}$ ; $p_k\geq 0 \forall k$ and $\sum_kp_k=1$
-QUESTION [5 upvotes]: Attempt:
-$\sum_{k=1}^n (p_k + \frac{1}{p_k})^2 = 2n + \sum_k p_k^2 + \sum_k \frac{1}{p_k^2}$
-I used the Cauchy inequality to decompose 1 as $\sqrt{p_k}(\frac{1}{\sqrt{p_k}})$ and got
-$n^2\leq \sum_k \frac{1}{p_k}$
-I could use the Cauchy inequality again on $\frac{1}{p_k}\cdot 1$ to get
-$(\sum\frac{1}{p_k})^2 \leq n \sum\frac{1}{p_k^2}$
-$\Rightarrow n^4\leq (\sum\frac{1}{p_k})^2 \leq n \sum\frac{1}{p_k^2} $
-$\Rightarrow n^3\leq\sum\frac{1}{p_k^2} $
-What about $\sum p_k^2$? There's a mental block here.
Any help would be appreciated.
-
-REPLY [2 votes]: I would say $f(x) = (x+1/x)^2$ is convex for $x > 0$. So $\sum \frac 1n f(p_i) \geq f(\sum \frac1n p_i)$. Now we find easily:
-$$ \sum_{i=1}^n \left(p_i+\frac1{p_i}\right)^2 \geq n \times f(1/n) = n \times \left(\frac1n+n\right)^2 $$
-which is exactly what you want.<|endoftext|>
-TITLE: Does isotopy in a covering space imply isotopy in a base space?
-QUESTION [5 upvotes]: If $p:\tilde{X}\rightarrow X$ is a regular covering space of finite degree, why is it not obvious that if two curves $\gamma$ and $\delta$ are isotopic in $\tilde{X}$, their images are isotopic in $X$? By my understanding, this is a nontrivial theorem (including the statement that the cover is possibly branched) in a paper by Birman and Hilden, and it further requires the condition that the group of deck transformations is solvable.
-It seems to me that this statement should follow from the fact that the induced map $p_*$ on fundamental groups is well defined. What am I missing? Does anyone have a counterexample (preferably one involving surfaces, rather than anything higher dimensional)?
-If this is in fact nontrivial, will it be nontrivial with the added condition of the path lifting property?
-
-REPLY [4 votes]: If two paths are isotopic then they are homotopic via a homotopy $h_t$ that is injective for each $t$. Projecting this to the base doesn't guarantee that $h_t$ will remain injective.<|endoftext|>
-TITLE: Visualising a specific orbifold
-QUESTION [7 upvotes]: Let $1 < k \in \mathbb N$ and $M = \{(z_1, z_2) \in \mathbb C^2 : k|z_1|^2 + |z_2|^2 = 1\}$. Let $S^1$ act on $M$ via $e^{i\theta}(z_1,z_2) = (e^{ik\theta} z_1, e^{i\theta} z_2)$. Then I am told that $M/S^1$ is not a manifold (observe that the action is not free as $e^{2\pi i/k}$ stabilizes the points of the form $(z_1,0)$).
-I am having trouble seeing why this fails to be a manifold (I'm guessing it fails to be locally $\mathbb R^2$ at the points $(z_1,0)$). I've read that this space has a cone singularity and was wondering if someone could explain how to visually see this.
-Thanks!
-
-REPLY [5 votes]: You can parametrize the three-dimensional manifold $M$ by the magnitude of either $z_1$ or $z_2$ and the two arguments of $z_1$ and $z_2$. Let's call the one of $z_1$ and $z_2$ whose magnitude we use for the parametrization $z$ and the other $z'$. You can visualize this parametrization using the volume enclosed by a torus, with the magnitude of $z$ corresponding to the distance from the central ring, the argument of $z$ corresponding to the angle with respect to the central ring (the poloidal angle) and the argument of $z'$ corresponding to the angle with respect to the torus axis (the toroidal angle). That is, with the parameters used in the Wikipedia article on the torus, the magnitude of $z$ is $r$, the argument of $z$ is $v$, and the argument of $z'$ is $u$.
-To see what happens in the two interesting neighbourhoods around $z_1=0$ and $z_2=0$, we can use the appropriate torus where $z$ is the one of $z_1$ and $z_2$ that is close to zero. Dealing first with $z_1$ near $0$ (and thus $z=z_1$, $z'=z_2$), the orbits of the given action of $S^1$ are spirals that spiral around the central ring $k$ times before returning to the same point. We can pick any disk defined by some value of $u$ as the set of representatives of these orbits, and see that the space of these orbits has the manifold structure of an ordinary disk.
-On the other hand, if $z_2$ is near $0$ (and thus $z=z_2$, $z'=z_1$), the orbits are again spirals that spiral around the central ring, but this time, they only go around the ring $1/k$ times before returning to the same value of $u$, and only return to the same point after going around the torus axis $k$ times. If we now pick any disk defined by some value of $u$, it contains $k$ representatives of each orbit. Thus, the space of these orbits has the same structure as the quotient of the disk with respect to the cyclic group of rotations generated by a rotation around the origin through $2\pi/k$. The conical singularity arising from this is described here for the case $k=2$. Note that it is only a singularity in the differentiable structure, not in the topological structure, since the cone is topologically a two-dimensional manifold at the origin.<|endoftext|>
-TITLE: Complex Zeros of $z^2e^z-z$
-QUESTION [10 upvotes]: Can anyone give me a hint on showing (in a relatively elegant way, as I know the answer from WolframAlpha), that the complex valued function $z^2e^z-z$ has at most 2 roots with norm less than 2? Obviously it has one root at $z=0$. The other root is the number $W_0(1)$, where $W_0(1)$ is the principal branch of the product log function at 1 (the inverse of the function $ze^z$), and is approximately equal to $0.56$. I have seen that the next possible zero is outside of the ball of radius 2, but I cannot show it algebraically. I am primarily trying to use Rouche's Theorem, and to find a suitable function $g(z)$ such that $|z^2e^z-z-g(z)|<|g(z)|$, however I have not been able to find a good one yet.
-Any help would be much appreciated, especially ideas about what sort of function to consider in cases like this, rather than just "here's the function..."
-Thanks!
-
-REPLY [5 votes]: As I understand the question, you are also interested in how one finds functions for Rouché. That is why I give this answer, even though I cannot prove the last inequality (which is however true).
-For Rouché you want a function which is close enough to $f(z) = z^2 e^{z}-z$ for $z=2e^{i\phi}$, but simple enough such that you know how many zeros there are in the disk $|z| \leq 2$. I cannot give you a general recipe. But here is how I proceed to find it:
-The first obvious choice $g(z) =z^2 e^{z}$ (for which we know the number of zeros) fails for $z$ around $-2$, simply because $g(z)$ becomes exponentially small whereas in $f(z)$ the $-z$ term dominates. Therefore, you want some other term in $g(z)$ which takes over around $-2$, whereas you still need to have an $e^{z}$ in order that the function approximates $f(z)$ around $2$ correctly ($f$ is quite large for $z=2$ and then quickly falls off before at some value the $z$ term takes over).
-My second guess therefore is $g(z)= z^2 ( e^{z} + c)$ with some $c>0$. Playing around a bit, I see that the value $c=1/2$ seems to work.
-Now what one has to prove: we have to show that $|f(z) - g(z)| < |g(z)|$ for $z=2 e^{i\phi}$. The left hand side is $\left|z\left(1+\frac{z}{2}\right)\right| = 4 \left| \cos (\phi/2)\right|$. The right hand side can be simplified to $$|g(z)|^2= 16 \left|e^{z} +\frac{1}{2}\right|^2 = 16 \left[\frac{1}{4} + e^{4 \cos \phi} + e^{2 \cos \phi} \cos(2 \sin \phi)\right].$$
-Taking the difference $$ \frac{|g(z)|^2 -|f(z) - g(z)|^2}{16} = \frac{1}{4} + e^{4 \cos \phi} + e^{2 \cos \phi} \cos(2 \sin \phi) - \cos^2 (\phi/2) \geq 0.019 .$$ The last estimate was obtained by plotting the function. The minimum is attained around $\phi=1.9$.
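-For reference, the positivity is easy to check numerically as well; here is a minimal Python sketch (the grid size is an arbitrary choice) evaluating the difference above on a fine grid:
-import numpy as np
-
-# d(phi) = (|g|^2 - |f - g|^2) / 16 on the circle z = 2 e^{i phi}
-phi = np.linspace(0.0, 2.0 * np.pi, 200001)
-d = (0.25 + np.exp(4.0 * np.cos(phi))
-     + np.exp(2.0 * np.cos(phi)) * np.cos(2.0 * np.sin(phi))
-     - np.cos(phi / 2.0) ** 2)
-print(d.min(), phi[d.argmin()])  # minimum roughly 0.0197, near phi = 1.88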
To make the argument complete, one should prove that the difference is larger than zero (which I couldn't do so far). Then we know that $f(z)$ and $g(z)$ have the same number of zeros with $|z| \leq 2$. As $g(z)$ has only two zeros (a double zero at $z=0$), the same holds for $f(z)$.<|endoftext|>
-TITLE: Proof there is a 1-1 correspondence between an uncountable set and itself minus a countable part of it
-QUESTION [10 upvotes]: Problem statement:
-Let A be an uncountable set, B a countable subset of A, and C the complement of B in A. Prove that there exists a one-to-one correspondence between A and C.
-My thoughts:
-There's a bijection between A and A (the identity function). There's a bijection between C and C (the identity function). There's a bijection between B and $\mathbb{N}$. That's all I know.
-
-REPLY [10 votes]: Since $A\setminus B$ is uncountable, assuming countable choice it has a countably infinite subset $B'$. Then $B'\cup B$ is countable, so there is a bijection $g:B'\cup B\to B'$. Define $f:A\to A\setminus B$ by $f\vert_{B'\cup B}=g$ and $f(a)=a$ for $a\in A\setminus(B'\cup B)$.
-Basically, you just take chunks off of $A$ and $A\setminus B$ that have equal size, respectively $B'\cup B$ and $B'$, leaving the remaining sets equal to $A\setminus(B'\cup B)$, then piece together the two bijections.<|endoftext|>
-TITLE: Relation between analytic functions and polynomials
-QUESTION [5 upvotes]: I've been stumbling across a couple of questions in Ahlfors that fall along the lines of "If $f$ is an analytic function with property $X$, show that $f$ reduces to a polynomial."
-One concrete example would be:
-Show that a function which is analytic in the whole plane and has a nonessential singularity at $\infty$ reduces to a polynomial.
-I was wondering what the general approach to these problems is. Is it to use some sort of factorization of the zeros/poles of the function and then to use Cauchy's estimate?
-My thought process for this particular problem has been as follows:
-Consider $g(z) = \frac{1}{f(\frac{1}{z})}$. Then $g(z)=0$ for $z=0$. $g$ cannot vanish identically or else $f$ is identically $\infty$ and then $f$ does not have a nonessential singularity at $\infty$. Thus, we know that $g(z) = z^h g_h(z)$ where $g_h$ is an analytic function with $g_h(0) \neq 0$. Then we get $\frac{1}{f(\frac{1}{z})} = z^h \frac{1}{f_h(\frac{1}{z})}$ (where $\frac{1}{f_h(\frac{1}{z})} = g_h(z)$), which implies that $f(\frac{1}{z}) = z^{-h} f_h(\frac{1}{z})$. This last expression can also be equivalently written as $f(z) = z^h f_h(z)$. Then, since $f_h(z)$ is bounded as $|z| \to \infty$, we can use Cauchy's estimate to show that the $(h+1)$-th derivative of $f$ is equal to $0$ for all $z$, which implies that the function must reduce to a polynomial.
-I'd appreciate it if any of you could tell me if that's correct and/or if there is a better/different way to go about answering the question.
-As an extra note, would it be wrong to assume that any approach toward these problems with polynomials is also going to extend in some sense toward problems about rational functions?
-
-REPLY [2 votes]: When I look at this problem, Cauchy's Integral Formula definitely jumps out. Other than that, I am not too sure of any "general" approaches.
-Solution: Recall the singularity at $\infty$ of $f(z)$ is the same as the singularity at $0$ of $f\left(\frac{1}{z}\right)$. Say $f$ has a pole of order $m$ at $\infty$. Then $$z^m\cdot f\left(\frac{1}{z}\right)$$ can be extended to an analytic function at $z=0$.
Thus, $f(z)=O(z^m)$ as $z\rightarrow \infty$.
-Now apply Cauchy's Integral Formula to a sufficiently high derivative of $f$. Using the contour $C_R$, the circle of radius $R$, we can find that this derivative will be bounded on the complex plane. (Show it is bounded in the disk of radius $r$ for every $r$ by taking $R$ large enough, and using $f(z)=O(z^m)$.) Hence by Liouville's Theorem it is a constant.
-Thus we conclude $f$ is a polynomial.
-Approaches: It looks like your solution is basically the same as what I wrote, so this answer may not be too much help. (Although, I don't fully see the use of defining the function $g$?) I am curious to see what other solutions people conceive. In short, I think you are thinking about it in a fairly good way.
-Alternative: Suppose $f$ is analytic but not a polynomial. Then in the Taylor expansion around $0$, there must be infinitely many nonzero terms. Hence, the Laurent expansion of $f(1/z)$ at $0$ will have arbitrarily large negative powers, so that $0$ cannot be a pole of finite order.
-Alternative 2: Recall that if $h(z)$ has a pole of order $n$ at $z=a$, then $h'(z)$ has a pole of order $n + 1$ at $z = a$ ($n\geq 1$). Consequently, if $f(1/z)$ has a pole of order $n$ at $0$, then $$f'(1/z)\cdot \frac{-1}{z^2}$$ has a pole of order $n+1$, and hence $f'(1/z)$ has a pole of order $n-1$. Thus, $f^{(n)}(z)$ is analytic everywhere on the Riemann sphere, and hence a constant.
-Edit: I added two alternative solutions. I view them as the same, since the key ideas employed in each are identical.<|endoftext|>
-TITLE: A property of subsets of topological spaces
-QUESTION [21 upvotes]: Back in college I tried to find characterizations of compact (metric) spaces that make use of Lebesgue's Covering Lemma. During the course of this, I defined the following (pretty arbitrarily named) property of subsets of topological spaces:
-
-If $X=(X,\mathcal{O})$ is a topological space and $Y\subseteq X$, we say that $Y$ is relatively finite (in $X$) if and only if for every neighborhood $N$ of the boundary $\partial Y$ the set $Y\setminus N$ is finite.
-
-My professor found it quite interesting, but couldn't recall seeing something similar before.
-My assumption is that this property is either not very meaningful, or that it has previously been used elsewhere under a different name. Can anyone enlighten me?
-Edit: Some context, in case it helps: As I said I was looking for a converse of the Covering Lemma – i.e. what properties do I have to require of the metric space $X$ in addition to $X$ fulfilling the Lebesgue condition (every open cover has a Lebesgue number $\delta>0$ such that every $\delta$-ball is contained in some cover element) in order for $X$ to be compact?
-A property I found was the "relative finiteness" of the set $X^\star$ of the isolated points of $X$. Since I did this out of mere curiosity and never actually used it for anything, the only additional properties of "relatively finite" sets I can give you are these small ones that I needed for the proof:
-
-If, in addition to the above definition, $Y$ is open, then every sequence of points from $Y$ has a cluster point in $X$.
-If, in addition to the above definition, $X$ is first-countable, then the interior of $Y$ is at most countable.
-
-REPLY [4 votes]: This is now a very old question, but I want to correct an error in the original post. It is not the case that a relatively finite subset of a first-countable space must have countable interior: the space $\omega_1$ with the order topology is a counterexample.
Let $A = \{0\} \cup \{\alpha+1:\alpha \in \omega_1 \}$, the set of isolated points in $\omega_1$; $\operatorname{bdry}A = \omega_1 \setminus A$, the set of countable limit ordinals. Every infinite subset of $A$ has a cluster point in $\operatorname{bdry}A$, so every nbhd of $\operatorname{bdry}A$ must be cofinite in $\omega_1$. That is, $A$ is an uncountable set of isolated points that is relatively finite in $\omega_1$.
-As for the original question, since a subset $A$ of a space $X$ is relatively finite in $X$ iff its interior is, it seems fair to say that relative finiteness is 'really' a property of open sets, and it's in that form (if at all) that I'd expect to find prior uses. I've not actually seen any, however.
-As MartianInvader pointed out, in metric spaces it's equivalent to having discrete interior and compact closure. More generally, in countably paracompact $T_3$ spaces it's equivalent to having discrete interior and countably compact closure.
-Proposition: Let $X$ be a $T_3$ space, and let $Y \subseteq X$ have discrete interior and countably compact closure; then $Y$ is relatively finite in $X$.
-Proof: Let $W$ be an open nbhd of $\operatorname{bdry}Y$, and suppose that $Y \setminus W$ is infinite. Let $\{W_n:n \in \omega \}$ be a partition of $Y \setminus W$; points of $Y \setminus W$ are isolated in $X$, so $\{W,\ X \setminus \operatorname{cl}Y\} \cup \{W_n:n \in \omega \}$ is a countable open cover of $X$ that clearly has no finite subcover.
-Theorem: Let $X$ be a countably paracompact $T_3$ space. If $Y$ is a relatively finite subset of $X$, then $\operatorname{cl}Y$ is countably compact. (In fact, $Y$ need only satisfy the weaker property that if $W$ is an open nbhd of $\operatorname{bdry}Y$, then $Y \setminus W$ is compact.)
-Proof: Let $H = \operatorname{cl}Y$, and suppose that $H$ is not countably compact. Then $H$ has an increasing open cover $\mathcal U = \{U_n:n \in \omega\}$ with no finite subcover. Countable paracompactness of $X$ implies that $\mathcal U$ has an open refinement $\mathcal V = \{V_n:n \in \omega\}$ such that $\operatorname{cl}V_n \subseteq U_n$ for each $n \in \omega$. By passing to a (faithfully indexed) refinement if necessary, we may further assume that $\mathcal V$ is locally finite.
-Let $V = \operatorname{int}Y$. Let $n(0) = 0$, and choose $x_0 \in V \cap V_0$. Suppose that for some $m > 0$ and all non-negative integers $k < m$ we have chosen $n(k) \in \omega$ and $x_k \in V$ so that (a) $x_{m-1} \in V \cap V_{n(m-1)}$, and (b) $n(i) < n(k)$ and $x_i \in V \cap V_{n(i)} \setminus V_{n(k)}$ whenever $0 \le i < k < m$. $\mathcal V$ is point-finite and has no finite subfamily covering $H$, so there is an $n(m) \in \omega$ such that $n(m) > n(m-1)$ and $\{x_k:0 \le k < m\} \cap V_i = \emptyset$ for all $i \ge n(m)$. Clearly we may now choose $x_m \in V \cap V_{n(m)}$, and the recursion goes through to yield a set $A = \{x_n:n \in \omega\} \subseteq V$ such that $x_k \in V \cap V_{n(k)} \setminus V_i$ whenever $i,k \in \omega$ and $i \ge n(k+1)$.
-Now let $p$ be any point of $H$. $\mathcal V$ is locally finite, so $p$ has an open nbhd $W$ that meets only finitely many members of $\mathcal V$. Clearly $W \cap A$ is finite, so $p$ is not a cluster point of $A$.
Thus, $A$ is a closed, discrete subset of $V$, and $X \setminus A$ is an open nbhd of $\operatorname{bdry}Y$ whose relative complement in $Y$ is not compact (and hence certainly not finite).<|endoftext|>
-TITLE: algebraic version of "finite covering of a compact space is compact"
-QUESTION [9 upvotes]: The following statement is an exercise in point set topology: If $E \to X$ is a covering with nonempty finite fibers and $X$ is compact, then also $E$ is compact. Now Grothendieck generalized covering theory so that in particular separable field extensions may be regarded as coverings.
-Question: What is the corresponding statement in field theory?
-
-REPLY [2 votes]: To answer Martin's questions in his comments to Rayleigh's answer:
-Fix a scheme $S$ and a morphism $f:X\to Y$ of $S$-schemes. Suppose that $Y$ is proper over $S$. If $f$ is proper, then $X$ is proper over $S$. This is simply because proper morphisms are stable under composition.
-How does one prove that proper morphisms are stable under composition? One simply proves this property for finite type morphisms of schemes and separated morphisms. Then you're done.
-If you stick to fields, all morphisms are separated, so to prove that proper morphisms are stable under composition in this case, you simply have to prove that if you have a tower $K\subset L\subset M$ of finite degree field extensions, then $K\subset M$ is of finite degree. This is an easy fact. In conclusion, the proof of the statement for fields is easy and the statement itself doesn't give any nontrivial information in the case of fields.
-I think Rayleigh added the additional hypothesis of "finite etale" to his statement, because he wanted to mimic the set-up of a "covering".<|endoftext|>
-TITLE: If P is k-c.c. and C is club in k in M[G] then C contains a club in M
-QUESTION [6 upvotes]: I've seen this written in several places without proof, so I assume it's not difficult, but I am not getting it.
-Let $\mathbb P$ be a $\kappa$-c.c. notion of forcing, and let $C\in M[G]$ be club in $\kappa$. I want to show there exists $D\in M$ such that $D\subset C$ is club in $\kappa$. Kunen suggests: let $f\in M[G], f:\kappa\rightarrow\kappa$ such that $\forall\alpha<\kappa(\alpha
-TITLE: Dot product of two vectors
-QUESTION [6 upvotes]: How does one show that the dot product of two vectors is $ A \cdot B = |A| |B| \cos (\Theta) $?
-
-REPLY [6 votes]: Think about a triangle with side lengths $|\textbf{a}|,|\textbf{b}|,|\textbf{c}|$. Then we can use the law of cosines.
-$$
-\begin{align}
-|\textbf{c}|^2&=|\textbf{a}|^2+|\textbf{b}|^2-2|\textbf{a}||\textbf{b}| \cos \theta \\
-\implies 2|\textbf{a}||\textbf{b}| \cos \theta &= |\textbf{a}|^2+|\textbf{b}|^2-|\textbf{c}|^2 = \textbf{a}\cdot \textbf{a} + \textbf{b} \cdot \textbf{b} - \textbf{c}\cdot \textbf{c}
-\end{align}
-$$
-By the properties of the dot product and from the fact that $\textbf{c}=\textbf{b}-\textbf{a}$ we find that
-$$
-\begin{align}
- \textbf{c}\cdot \textbf{c} &=(\textbf{b}-\textbf{a}) \cdot (\textbf{b}-\textbf{a}) \\
- &=(\textbf{b}-\textbf{a})\cdot \textbf{b} - (\textbf{b}-\textbf{a}) \cdot \textbf{a} \\
- &= \textbf{b}\cdot \textbf{b} - \textbf{a}\cdot \textbf{b} - \textbf{b}\cdot \textbf{a} + \textbf{a}\cdot \textbf{a}.
-\end{align}
-$$
-By substituting $\textbf{c}\cdot \textbf{c}$, we get
-$$
-\begin{align}
- 2|\textbf{a}||\textbf{b}| \cos \theta &= \textbf{a}\cdot \textbf{a} + \textbf{b} \cdot \textbf{b} - (\textbf{b}\cdot \textbf{b} - \textbf{a}\cdot \textbf{b} - \textbf{b}\cdot \textbf{a} + \textbf{a}\cdot \textbf{a}) \\
- &=2 \textbf{a}\cdot \textbf{b}.
-\end{align}
-$$
-
-REPLY [3 votes]: One way of showing this requires taking a geometric look at things:
-Think of $\vec{a},\vec{b}$ as two vertices of a triangle with one vertex at the origin
-and sides of length $|\vec a|,|\vec b|,|\vec{a-b}|$. Now, use the Law of cosines, and inner product properties (over $\mathbb R$) to calculate $|\vec{a-b}|$:
-The law of cosines gives you:
-$$|\vec{a-b}|^2=|\vec{a}|^2+|\vec{b}|^2-2|\vec{a}||\vec{b}|\cos\theta(a,b)$$
-Inner product gives:
-$$|\vec{a-b}|^2=\langle\vec{a-b},\vec{a-b}\rangle=|\vec{a}|^2+|\vec{b}|^2-2\langle\vec{a},\vec{b}\rangle$$
-from there on it's an easy proof (I leave the technical details to you).
-Hope that helps<|endoftext|>
-TITLE: Guessing a subset of $\{1,...,N\}$
-QUESTION [46 upvotes]: I pick a random subset $S$ of $\{1,\ldots,N\}$, and you have to guess what it is. After each guess $G$, I tell you the number of elements in $G \cap S$. How many guesses do you need?
-
-REPLY [6 votes]: This can be solved in $\Theta(N/\log N)$ queries. First, here is a lemma:
-
-Lemma: If you can solve $N$ in $Q$ queries, where one of the queries is the entire set $\{1,\dots,N\}$, then you can solve $2N+Q-1$ in $2Q$ queries, where one of the queries is the entire set.
-
-Proof: Divide $\{1,\dots,2N+Q-1\}$ into three sets, $A,B$ and $C$, where $|A|=|B|=N$ and $|C|=Q-1$. By assumption, there exist subsets $A_1,\dots,A_{Q-1}$ such that you could find the unknown subset of $A$ alone by first guessing $A$, then guessing $A_1,\dots,A_{Q-1}$. Similarly, there exist subsets $B_1,\dots,B_{Q-1}$ for solving $B$. Finally, write $C=\{c_1,\dots,c_{Q-1}\}$.
-The winning strategy is:
-
-Guess the entire set, $\{1,\dots,2N+Q-1\}$.
-
-Guess $B$.
-
-For each $i\in \{1,\dots,Q-1\}$, guess $A_i\cup B_i$.
-
-For each $i\in \{1,\dots,Q-1\}$, guess $A_i\cup (B\setminus B_i)\cup \{c_i\}$.
-
-
-Using the parity of the sum of the guesses $A_i\cup B_i$ and $A_i\cup (B\setminus B_i)\cup \{c_i\}$, you can determine whether or not $c_i\in S$. Then, using these same guesses, you get a system of equations which lets you solve for $|A_i \cap S|$ and $|B_i\cap S|$ for all $i$. This gives you enough info to determine $A\cap S$ and $B\cap S$, using the assumed strategy.$\tag*{$\square$}$
-Let $\def\Opt{\operatorname{Opt}}\Opt(N)$ be the fewest number of guesses you need for $\{1,\dots,N\}$. Using the lemma and induction, you can show that
-$$
-\Opt(k2^{k-1}+1)\le 2^k\qquad \text{for all }k\in \{0,1,2,\dots\}
-$$
-Note that when $N=k2^{k-1}+1$, we have $\Opt(N)\le 2^k$, and $$\frac N{\frac12\log_2 N}=\frac{k2^{k-1}+1}{\frac12\log_2(k2^{k-1}+1)}= 2^k(1+o(1))$$
-It follows that $\Opt(N)\in O(N/\log N)$ when $N$ is of the form $k2^{k-1}+1$. Since $\Opt(N+1)\le \Opt(N)+1$, this extends to all $N$. Combined with the entropy argument, we get $\Opt(N)\in \Theta(N/\log N)$.<|endoftext|>
-TITLE: Derivative of $(a\,x)^{b\,x}$
-QUESTION [5 upvotes]: Is there any rule to differentiate the function $(a\,x)^{b\,x}$?
-I've got to find the derivative of $(x^2+1)^{\arctan x}$ and Wolfram|Alpha says the answer is -$$\tan^{-1}(x) (x^2+1)^{\tan^{-1}(x)-1} \left(\frac{d}{dx}(x^2+1)\right)+\log(x^2+1) (x^2+1)^{\tan^{-1}(x)} \left(\frac{d}{dx}(\tan^{-1}(x))\right)$$ -Is there any general rule to do that? -Thanks. - -REPLY [4 votes]: HINT $\rm\ \ (F^G)'\ =\ (e^{G\:ln\ F})'\: =\ F^G\ (GF'/F + G'\ ln\ F)$<|endoftext|> -TITLE: Simulating uniformly on $S^1=\{x \in \mathbb{R}^n \mid \|x\|_1=1\}$ -QUESTION [15 upvotes]: A scheme to generate random variates distributed uniformly in $S^2=\{x\in \mathbb{R}^n \mid \|x\|_2=1\}$ is well known: generate a standard normal variate in $\mathbb{R}^n$ and normalize it to unit norm. -Is there a similarly simple and clever procedure to simulate uniformly distributed variates on the $1$-ball $S^1=\{x \in \mathbb{R}^n \mid \|x\|_1=1\}$? - -REPLY [14 votes]: Flip $n$ fair coins to pick an orthant—that is, to pick the signs of the coordinates of the point you are choosing. Now pick a point uniformly in the standard simplex, and flip the signs of its coordinates according to what the coins told you.<|endoftext|> -TITLE: Functions on vector spaces over finite fields -QUESTION [17 upvotes]: Can every function $\psi ~\colon \mathbb{F_{p}^n} \to \mathbb{F_{p}}$ be regarded as a polynomial function for some polynomial in $\mathbb{F_{p}[x_1, \ldots,x_n]}$? -I believe this is true, but am having trouble proving it. - -REPLY [18 votes]: For each point $y=(y_1,...,y_n)$ in $\mathbb{F}_p^n$, the polynomial -$$\prod_{i=1}^n\prod_{z_i=0\atop z_i\neq y_i}^{p-1}(x_i-z_i)$$ -is non-zero at $y$ and zero everywhere else, so these polynomials form a basis of the function space. - -REPLY [5 votes]: The answer is yes. Suppose that the elements of $F_p$ are given by $\{{a_1,\dots,a_p\}}$. First, let us note that the function $f(x) =1$ if $x=a_i$, and $f(x)=0$ otherwise is a polynomial, given by (up to a constant) $f(x) = (x-a_1)\dots \widehat{(x-a_i)} \dots (x-a_p)$. -Let us denote this function by $f_i$. For an $n$-tuple $(a_{i_1},\dots,a_{i_n})$ let $f_{i_1,\dots,i_n}(x_1,\dots,x_n) = f_{i_1}(x_1) \cdot f_{i_2}(x_2) \cdot \dots \cdot f_{i_n}(x_n)$. -Now, given a function $f:F_p^n \to F_p$, we may write it as: -$f(x_1,\dots,x_n) = \sum_{ (a_{i_1},\dots,a_{i_n}) \in F_p^n } f_{i_1,\dots,i_n}(x) f(a_{i_1},\dots,a_{i_n})$ which is a finite sum of polynomials, and therefore, a polnomial.<|endoftext|> -TITLE: How is a group made up of simple groups? -QUESTION [94 upvotes]: I've read more than once the analogy between simple groups and prime numbers, stating that any group is built up from simple groups, like any number is built from prime numbers. -I've recently started self-studying subgroup series, which is supposed to explain the analogy, but I'm not completely sure I understand how "any group is made of simple groups". -Given a group $G$ with composition series $$ \{e\}=G_0 \triangleleft G_1\triangleleft \dots \triangleleft G_{r-1} \triangleleft G_r=G$$ -then $G$ has associated the simple factor groups $H_{i+1}=G_{i+1}/G_i$. But how is it "built" from them? -Well, if we have those simple groups $H_i$ then we can say the subnormal subgroups in the composition series can be recovered by taking certain extensions of $H_i$: $$ 1 \to K_i \to G_i \to H_i \to 1$$ -where $H_i = G_i/G_{i-1}$, $K_i\simeq G_{i-1}$. -Then $G$ is built from some uniquely determined (Jordan-Hölder) simple groups $H_i$ by taking extensions of these groups. -Is this description accurate? 
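-For a concrete feel, one can brute-force the smallest nontrivial case: enumerating all group laws on $\{0,1,2,3\}$ shows that exactly two extensions of $\mathbb{Z}_2$ by $\mathbb{Z}_2$ occur, namely $\mathbb{Z}_4$ and the Klein four group $V_4$ (an illustrative Python sketch of mine, not from the original question; classifying by element orders is a quick hack):
-
-    from itertools import product, permutations
-
-    base = [0, 1, 2, 3]
-    kinds = set()
-    for rows in product(permutations(base), repeat=3):
-        t = [base] + [list(r) for r in rows]      # candidate Cayley table, row 0 = identity row
-        if any(t[i][0] != i for i in range(4)):   # 0 must also be a right identity
-            continue
-        if any(len({t[i][j] for i in range(4)}) != 4 for j in range(4)):
-            continue                              # columns must be Latin
-        if any(t[t[a][b]][c] != t[a][t[b][c]]
-               for a, b, c in product(range(4), repeat=3)):
-            continue                              # associativity
-        def order(x):                             # order of x under the table t
-            y, k = x, 1
-            while y != 0:
-                y, k = t[y][x], k + 1
-            return k
-        kinds.add('Z4' if max(order(x) for x in (1, 2, 3)) == 4 else 'V4')
-    print(kinds)   # the set {'Z4', 'V4'}: the only two groups of order 4
-
-So even in this smallest case the way the two simple pieces are glued together is not unique.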
-The question now is: this description seems overly theoretical to me. I don't know what the extensions of $H_i$ look like, and I don't understand how $G$ puts these groups together. Can we describe more explicitly how a group $G$ is made of simple groups?
-EDIT: I forgot a (not-so-tiny) detail. The previous explanation works for finite groups, or more in general for groups with a composition series. But what about groups which don't admit a composition series? Is it correct to say that they are built from simple groups?
-
-REPLY [11 votes]: In Dummit & Foote's "Abstract Algebra" they briefly discuss the Hölder Program:
-
-
-Classify all finite simple groups.
-Find all ways of "putting simple groups together" to form other
- groups.
-
-
-They write the following on part 2 of the program (the so-called extension problem for finite groups):
-
-Part (2) of the Hölder Program, sometimes called the extension problem, was rather vaguely formulated. A more precise description of "putting two groups together" is: given groups $A$ and $B$, describe how to obtain all groups $G$ containing a normal subgroup $N$ such that $N \cong B$ and $G/N \cong A$. For instance, if $A=B=Z_2$, there are precisely two possibilities for $G$, namely $Z_4$ and $V_4$ [the Klein four group] and the Hölder Program seeks to describe how the two groups of order 4 could have been built from two $Z_2$'s without a priori knowledge of the existence of the groups of order 4. This part of the Hölder Program is extremely difficult, even when the subgroups involved are of small order. For example, all composition factors of a group $G$ have order 2 if and only if $|G| = 2^n$, for some $n$ (...). It is known, however, that the number of nonisomorphic groups of order $2^n$ grows (exponentially) as a function of $2^n$, so the number of ways of putting groups of 2-power order together is not bounded. Nonetheless, there are a wealth of interesting and powerful techniques in this subtle area which serve to unravel the structure of large classes of groups.<|endoftext|>
-TITLE: Proving a sequence involved in Apéry's proof of the irrationality of $\zeta(3)$, converges
-QUESTION [28 upvotes]: I am trying to understand Apéry's 1978 proof that $\zeta(3) = \displaystyle \sum_{n=1}^\infty \frac{1}{n^3}$ is irrational. The idea behind the proof is to find an 'accelerated' series for $\zeta(3)$ which converges to $\zeta(3)$ too fast for $\zeta(3)$ to be rational. In particular, the following quantity is defined:
-$$e_{n,k} = \displaystyle \sum_{m=1}^k \frac{(-1)^{m-1} (m!)^2 (n-m)!}{2m^3 (n+m)!},\quad k \leq n.$$
-The key is to show that $\displaystyle \lim_{n \rightarrow \infty} e_{n,k} = 0$ uniformly in $k$, and I have no idea why this sum converges to 0. Any ideas?
-
-REPLY [22 votes]: By the triangle inequality, $|e_{n,k}|\leqslant\frac12 a_n$ with
-$$
-a_n=\sum_{m=1}^nb_{n,m},\qquad b_{n,m}=\frac{(m!)^2(n-m)!}{m^3(n+m)!},
-$$
-hence the desired uniform convergence holds as soon as $a_n\to0$.
-To prove that $a_n\to0$, note that
-$$
-b_{n,m}=\frac{(m-1)!(m-1)!(n-m)!}{m(n+m)!}\leqslant\frac{(m-1)!(m-1)!(n-m)!}{(n+m)!},
-$$
-and use twice the fact that $i!j!\leqslant (i+j)!$ for all nonnegative integers $i$ and $j$. This yields
-$$
-b_{n,m}\leqslant\frac{(n+m-2)!}{(n+m)!}=\frac1{(n+m-1)(n+m)}=\frac1{n+m-1}-\frac1{n+m}.
-$$ -Thus $a_n$ is bounded by a telescoping sum, namely -$$ -a_n\leqslant\sum_{m=1}^n\left(\frac1{n+m-1}-\frac1{n+m}\right)=\frac1{n}-\frac1{2n}=\frac1{2n}, -$$ -This shows that $|e_{n,k}|\leqslant\frac1{4n}$ for every $n\geqslant1$ and uniformly over $k\leqslant n$, hence the proof is complete. -Edit One can refine the estimates above, taking into account the variations of the sequence $(b_{n,m})_{1\leqslant m\leqslant n}$ for some fixed $n$, which is nonincreasing on $1\leqslant m\leqslant m_n$ for some $m_n\approx n/\sqrt2$ and nondecreasing on $m_n\leqslant m\leqslant n$. Thus, $|e_{n,k}|\leqslant\frac12 b_{n,1}+\frac12 b_{n,n}$. Hence, $n^2e_{n,k}\to\frac12 $ uniformly over $k$.<|endoftext|> -TITLE: Homogeneous topological spaces -QUESTION [18 upvotes]: Let $X$ be a topological space. -Call $x,y\in X$ swappable if there is a homeomorphism $\phi\colon X\to X$ with $\phi(x)=y$. This defines an equivalence relation on $X$. -One might call $X$ homogeneous if all pairs of points in $X$ are swappable. -Then, for instance, topological groups are homogeneous, as well as discrete spaces. Also any open ball in $\mathbb R^n$ is homogeneous. On the other hand, I think, the closed ball in any dimension is not homogeneous. -I assume that these notions have already been defined elsewhere. Could you please point me to that? -Are there any interesting properties that follow for $X$ from homogeneity? I think for these spaces the group of homeomorphisms of $X$ will contain a lot of information about $X$. - -REPLY [7 votes]: Googling - -"topological space is homogeneous" - -brings up several articles that use the same terminology, for example this one. It is also the terminology used in the question Why is the Hilbert cube homogeneous?. The Wikipedia article on Perfect space mentions that a homogeneous space is either perfect or discrete. The Wikipedia article on Homogeneous space, which uses a more general definition, may also help.<|endoftext|> -TITLE: Simple proof that $8\left(\frac{9}{10}\right)^8 > 1$ -QUESTION [24 upvotes]: This question is motivated by a step in the proof given here. - -$\begin{align*} -8^{n+1}-1&\gt 8(8^n-1)\gt 8n^8\\ - &=(n+1)^8\left(8\left(\frac{n}{n+1}\right)^8\right)\\ - &\geq (n+1)^8\left(8\left(\frac{9}{10}\right)^8\right)\\ -&\gt (n+1)^8 . -\end{align*}$ - -I had no trouble following along with the proof until I hit the step that relied on -$$8\left(\frac{9}{10}\right)^8 > 1$$. So I whipped out a calculator and confirmed that this is indeed correct. And I could see, after some fooling around with pen and paper that any function in the form -\begin{align} -k \; \left(\frac{n}{n+1}\right)^k -\end{align} -where $n \in \mathbb{Z}$ and $k \rightarrow \infty$ is bound to fall below one and stay there. So it's not a given that any function in the above form will be greater than one. -What I'm actually curious about is whether there are nifty or simple little tricks or calculations you can do in your head or any handwavy type arguments that you can make to confirm that $$8\left(\frac{9}{10}\right)^8 > 1$$ and even more generally, to confirm for certain natural numbers $k,n$ whether -\begin{align} -k \; \left(\frac{n}{n+1}\right)^k > 1 -\end{align} -So are there? And if there are, what are they? -It can be geometrical. It can use arguments based on loose bounds of certain log values. 
It doesn't even have to be particularly simple as long as it is something you can do in your head and it is something you can explain reasonably clearly so that others can also do it (so if you're able to mentally calculate numbers like Euler, it's not useful for me).
-You can safely assume that I have difficulties multiplying anything greater than two single-digit integers in my head. But I do also know that $$\limsup_{k\rightarrow \infty} \log(k) - a\cdot k < 0$$ for any $a>0$ without having to go back to a textbook.
-
-REPLY [4 votes]: Here is one that uses two facts about powers of 2 and 10: $2^{10}=1024>10^3$ and $2^7=128>10^2$:
-$$8\left(\frac{9}{10}\right)^8>8\left(\frac{8}{10}\right)^8=\frac{8^9}{10^8}=\frac{2^{3·9}}{10^6·10^2}>\frac{2^{27}}{2^{20}·10^2}=\frac{2^7}{10^2}=\frac{128}{100}>1$$<|endoftext|>
-TITLE: Why does 0! = 1?
-QUESTION [14 upvotes]: Possible Duplicate:
-Prove $0! = 1$ from first principles
-
-Why does $0! = 1$?
-All I know of factorial is that $x!$ is equal to the product of all the numbers that come before it. The product of 0 and anything is $0$, and it seems like it would be reasonable to assume that $0! = 0$. I'm perplexed as to why I have to account for this condition in my factorial function (Trying to learn Haskell). Thanks.
-
-REPLY [10 votes]: Mostly it is based on convention, when one wants to define the quantity $\binom{n}{0} = \frac{n!}{n! 0!}$ for example. An intuitive way to look at it is $n!$ counts the number of ways to arrange $n$ distinct objects in a line, and there is only one way to arrange nothing.
-
-REPLY [7 votes]: In a combinatorial sense, $n!$ refers to the number of ways of permuting $n$ objects. There is exactly one way to permute 0 objects, that is doing nothing, so $0!=1$.
-There are plenty of resources that already answer this question. Also see:
-http://mathforum.org/library/drmath/view/57905.html
-http://wiki.answers.com/Q/Why_is_zero_factorial_equal_to_one
-http://en.wikipedia.org/wiki/Factorial#Definition
-
-REPLY [6 votes]: It's because $n! = \prod_{0<k\leq n} k$, and an empty product is equal to $1$.<|endoftext|>
-TITLE: Eisenstein series
-QUESTION [5 upvotes]: Show that all Eisenstein series $G_k$ can be expressed as polynomials in $G_4$ and $G_6$, e.g. express $G_8$ and $G_{10}$ in this way.
-
-Hint: Setting $a_n := (2n+1)G_{2n+2}$, show that
- $2n(2n-1)a_n = 6(2a_n + \sum\limits_{k=1}^{n-2} a_{k}a_{n-1-k})$, $n>2$
-Obviously, what I have trouble proving is the Hint. It seems like I should use the second derivative of the Weierstrass function representation, using the Eisenstein series (namely $\wp = \displaystyle\frac{1}{z^2} + \sum\limits_{m=1}^{\infty} (2m+1)G_{2m+2}z^{2m}$), but I'm not sure where the sum on the right comes from.
-Any help will be appreciated.
-EDIT: Can someone help some more?
-
-REPLY [3 votes]: Use the differential equation satisfied by $\wp$:
 $$ \wp'^2 = 4\wp^3 -g_2 \wp -g_3 $$
-from this you get easily (as $g_2 = 60 G_4$)
 $$ \wp'' = 6 \wp^2 -10 a_1 $$
-which translates directly into the hint if you substitute $\wp$ with the series you give.
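-As a quick numerical sanity check: the case $n=3$ of the hint (with $a_1=3G_4$ and $a_3=7G_8$) forces $G_8=\frac{3}{7}G_4^2$, which is easy to confirm with truncated lattice sums (a rough Python sketch of mine for the square lattice $\mathbb{Z}+\mathbb{Z}i$; the cutoff $N$ is an arbitrary choice):
-
-    def G(k, N=120):
-        # truncated Eisenstein sum over the nonzero lattice points m + n*i
-        s = 0j
-        for m in range(-N, N + 1):
-            for n in range(-N, N + 1):
-                if m or n:
-                    s += (m + n * 1j) ** (-k)
-        return s.real          # imaginary parts cancel by symmetry
-
-    g4, g8 = G(4), G(8)
-    print(g8, 3 * g4 ** 2 / 7)  # the two values agree to roughly four digits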
-EDIT: To clarify further, squaring
-$$ \wp(z) = \frac{1}{z^2} + \sum_{m\ge1} a_m z^{2m} $$
-gives
-$$\begin{align*} \wp(z)^2 &= \frac{1}{z^4} + \frac{2}{z^2}\sum_{m\ge1} a_m z^{2m} + \left(\sum_{m\ge1} a_m z^{2m}\right)^2 \\
-&= \frac{1}{z^4} + 2\sum_{m\ge 1} a_m z^{2m-2} + \sum_{m\ge 2} z^{2m} \sum_{k,l \ge 1, k+l = m} a_ka_l\\
-&= \frac{1}{z^4} + 2\sum_{m\ge 1} a_m z^{2m-2} + \sum_{m\ge 3} z^{2m-2} \sum_{k=1}^{m-2} a_ka_{m-1-k}
-\end{align*} $$
-I hope this helps.<|endoftext|>
-TITLE: Why do rhombus diagonals intersect at right angles?
-QUESTION [6 upvotes]: I've looked all over and I can't find a good proof of why the diagonals of a rhombus should intersect at right angles. I can intuitively see it's true, just by drawing rhombuses, but I'm trying to prove that the slopes of the diagonals are negative reciprocals and it's not working out.
-I'm defining my rhombus as follows: $[(0,0), (a, 0), (b, c), (a+b, c)]$
-I've managed to figure out that $c = \sqrt{a^2-b^2}$ and that the slopes of the diagonals are $\frac{\sqrt{a^2-b^2}}{a+b}$ and $\frac{-\sqrt{a^2-b^2}}{a-b}$
-What I can't figure out is how they can be negative reciprocals of one another.
-EDIT: I meant to say that I could not find the algebraic proof. I've seen and understand the geometric proof, but I needed help translating it into coordinate form.
-
-REPLY [2 votes]: One more way of looking at it is by factoring $a^2-b^2$ while it's inside the square root. Then
-$$
-\begin{align}
-\frac{\sqrt{a^2-b^2}}{a+b} &= \frac{\sqrt{(a-b)(a+b)}}{a+b} = \frac{\sqrt{a-b}}{\sqrt{a+b}} \\
-- \frac{\sqrt{a^2-b^2}}{a-b} &= \frac{\sqrt{(a-b)(a+b)}}{a-b} = -\frac{\sqrt{a+b}}{\sqrt{a-b}}
-\end{align}
-$$
-From here, it's easier to see that they have the correct relationship: their product is $-1$.<|endoftext|>
-TITLE: If $a,b\in\mathbb{Z}$, and if $a+b\sqrt{2}$ has a root in $\mathbb{Q}(\sqrt{2})$, then the root is actually in $\mathbb{Z}[\sqrt{2}]$
-QUESTION [9 upvotes]: I'm working my way through a classical geometry book by Hartshorne right now, but this problem popped up in a section I'm reading. It is Problem 13.10 from Hartshorne's Geometry: Euclid and Beyond if you're curious.
-Anyway, the problem states:
-
-If $a,b\in\mathbb{Z}$, and if $a+b\sqrt{2}$ has a square root in $\mathbb{Q}(\sqrt{2})$, then the square root is actually in $\mathbb{Z}[\sqrt{2}]$.
-
-I'm not super familiar with algebra, so I'm having trouble interpreting the question, but I would like to know how to solve it.
-I looked up $\mathbb{Q}(\sqrt{2})$ on wikipedia, and it seems that it is the set $\{a+b\sqrt{2}\ |\ a,b\in\mathbb{Q}\}$. I couldn't find $\mathbb{Z}[\sqrt{2}]$, but I assume it is the set $\{a+b\sqrt{2}\ |\ a,b\in\mathbb{Z}\}$.
-So saying that $a+b\sqrt{2}$ has a square root in $\mathbb{Q}(\sqrt{2})$ means there exists some $c+d\sqrt{2}\in\mathbb{Q}(\sqrt{2})$ such that
-$$
-(c+d\sqrt{2})^2=c^2+2d^2+2cd\sqrt{2}=a+b\sqrt{2}.
-$$
-This implies (I think?) that $c^2+2d^2=a$ and $2cd=b$. If this is the correct path, is there then some way to conclude that $c$ and $d$ are in fact integers? Thanks.
-By the way, is this exercise easily related to some aspect of classical geometry? It seems kind of out of the blue to me.
-
-REPLY [3 votes]: First $\rm\ 2\: c^2\: $ times $\rm\ c^2+2\ d^2 =\: a\ $ yields $\rm\ 2\: c^4 + b^2 =\: 2\: a\ c^2\ $ hence $\rm\ 2\: c\in \mathbb Z\ $ by the Rational Root Test.
-Next $\:4\ $ times $\rm\ 2\ d^2 = \:a-c^2\: \ \to\ \ 8\ d^2 = \:4\ a\ - (2\:c)^2 \in \mathbb Z\ \:$ thus $\rm\: 2\: d\in \mathbb Z\ $
-Finally $\rm\: 4\ a - 2\ (2\: d)^2 \:=\: (2\:c)^2\: \Rightarrow\ 2\:|\:(2\ c)^2\Rightarrow 2\: |\: 2\: c\ \Rightarrow\ c\in \mathbb Z\ \Rightarrow\ d\in \mathbb Z\quad\quad$ QED<|endoftext|>
-TITLE: Integrate a quartic in a radical in the denominator of $\int_{1}^{\infty}\frac{1}{\sqrt{3x^{4}+6x^{2}-1}}dx$
-QUESTION [7 upvotes]: Here is a rather tough integration. I think it looks like an Elliptic Integral of some sort.
-$$\int_{1}^{\infty}\frac{1}{\sqrt{3x^{4}+6x^{2}-1}}dx$$
-Since there are no odd terms in the quartic, I thought maybe completing the square
-would be OK. But I got nowhere. I even factored it into:
-$3x^{4}+6x^2-1=((2\sqrt{3}-3)x^{2}+1)((2\sqrt{3}+3)x^{2}-1)$ and got nowhere.
-I think the solution will involve the Gamma function in some manner.
-Does anyone have a good starting point for this... like a clever substitution?
-Thanks for any input.
-
-REPLY [11 votes]: As has been mentioned many times on this site, anytime you have an algebraic function containing the square root of a cubic or a quartic, you are bound to bump into an elliptic integral.
-Usually, such things are handled by using Jacobian elliptic functions for substitutions (in a manner similar to using substitution with trigonometric or hyperbolic functions when you have the square root of a quadratic in an integral).
-I'll skip the tedious details of figuring out the proper substitution, since Byrd and Friedman give a formula for handling your integral (formula 212.00 in their handbook):
-$$\int_y^\infty\frac{\mathrm dt}{\sqrt{(t^2+a^2)(t^2-b^2)}}=\frac1{\sqrt{a^2+b^2}}F\left(\arcsin\left(\sqrt{\frac{a^2+b^2}{a^2+y^2}}\right) \mid\frac{a^2}{a^2+b^2}\right)$$
-where $F(\phi|m)$ is the incomplete elliptic integral of the first kind.
-Coming back to your integral, we let $u=2\sqrt{3}-3$ and $v=2\sqrt{3}+3$ such that
-$$\int_1^\infty\frac{\mathrm dx}{\sqrt{3x^4+6x^2-1}}=\frac1{\sqrt{uv}}\int_1^\infty \frac{\mathrm dx}{\sqrt{(x^2+1/u)(x^2-1/v)}}$$
-Using the quoted formula, the integral reduces to
-$$\frac1{\sqrt{u+v}}F\left(\arcsin\left(\sqrt{\frac{u+v}{v+uv}}\right)\mid\frac{v}{u+v}\right)$$
-Substituting the values of $u$ and $v$ into this expression and simplifying, we have the result
-$$\frac1{\sqrt[4]{48}}F\left(\arcsin\left(\sqrt{\sqrt{3}-1}\right)\mid\frac{2+\sqrt{3}}{4}\right)$$
-which agrees with the numerical result in the comments.
-As an aside, I consider it a capital annoyance that Mathematica often returns results with complex amplitudes even for real results...<|endoftext|>
-TITLE: How to solve recurrence relations by the generalized hypergeometric series
-QUESTION [5 upvotes]: I am reading about methods of solving recurrence relations on Wikipedia. There is one method:
-
-Many linear homogeneous recurrence
- relations may be solved by means of
- the generalized hypergeometric series.
- Special cases of these lead to
- recurrence relations for the
- orthogonal polynomials, and many
- special functions. For example, the
- solution to $$J_{n+1}=\frac{2n}{z}J_n-J_{n-1}$$
- is given by $$J_n=J_n(z), \,$$ the Bessel function.
-
-There is no description of how to use the method of "generalized hypergeometric series", nor can I find any in the articles on the generalized hypergeometric series or on Bessel functions. I was wondering if someone here can explain it somehow or give some references about that? Thanks and regards!
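-At least the quoted recurrence itself is easy to verify numerically (a quick Python sketch; I am assuming SciPy's jv, which evaluates $J_n(z)$):
-
-    from math import isclose
-    from scipy.special import jv   # jv(n, z) evaluates the Bessel function J_n(z)
-
-    z = 2.7
-    for n in range(1, 10):
-        lhs = jv(n + 1, z)
-        rhs = (2 * n / z) * jv(n, z) - jv(n - 1, z)
-        assert isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)
-    print("J_{n+1} = (2n/z) J_n - J_{n-1} checks out numerically")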
-REPLY [9 votes]: See the (on-line, downloadable) book
-
-A = B, by Petkovsek, Wilf, and Zeilberger
-
-It gives all sorts of links between hypergeometric series and recurrence relations.<|endoftext|>
-TITLE: How to find the inverse modulo $m$?
-QUESTION [77 upvotes]: For example:
-$$7x \equiv 1 \pmod{31} $$
-In this example, the modular inverse of $7$ with respect to $31$ is $9$. How can we find that $9$? What are the steps that I need to do?
-Update
-If I have a general modulo equation:
-$$5x + 1 \equiv 2 \pmod{6}$$
-What is the fastest way to solve it? My initial thought was:
-$$5x + 1 \equiv 2 \pmod{6}$$
-$$\Leftrightarrow 5x + 1 - 1\equiv 2 - 1 \pmod{6}$$
-$$\Leftrightarrow 5x \equiv 1 \pmod{6}$$
-Then solve for the inverse of $5$ modulo 6. Is it the right approach?
-Thanks.
-
-REPLY [2 votes]: Solve $7x \equiv 1 \pmod{31}$.
-$\quad 7x = 1 + 31y \implies 3y \equiv 6 \pmod{7} \implies y \equiv 2 \pmod{7}$
-and
-$\quad y : 2 \; \mid \; 1 + 31y = 63 \quad \quad \text{AND is divisible by } 7$
-So,
-$\tag{ANS} 7^{-1} \equiv 9 \pmod{31}$
-
-Here is another more interesting example:
-Solve $25x \equiv 1 \pmod{41}$.
-$\quad 25x = 1 + 41y \implies y \equiv 4 \pmod{5}$
-and
-$\quad y : \;\,4 \; \mid \; 1 + 41y = 165 \quad \quad \text{NO GOOD!}$
-$\quad y : \;\,9 \; \mid \; 1 + 41y = 370 \quad \quad \text{NO GOOD!}$
-$\quad y : 14 \; \mid \; 1 + 41y = 575 \quad \quad \text{AND is divisible by } 25$
-So,
-$\tag{ANS} 25^{-1} \equiv 23 \pmod{41}$
-
-Here is another more interesting example:
-Solve $23x \equiv 1 \pmod{41}$.
-$\quad 23x = 1 + 41y \implies 18y \equiv 22 \pmod{23} \implies 9y \equiv 11 \pmod{23}$
-Sub Solve: $9y \equiv 11 \pmod{23}$.
-$\quad 9y = 11 + 23z \implies 2z \equiv 1 \pmod{3} \implies z \equiv 2 \pmod{3}$
-and
-$\quad z : \;\,2 \; \mid \; 11 + 23z = 057 \quad \quad \text{NO GOOD!}$
-$\quad z : \;\,5 \; \mid \; 11 + 23z = 126 \quad \quad \text{AND is divisible by } 9$
-So,
-$\quad 9y \equiv 11 \pmod{23} \text{ iff } y \equiv 14 \pmod{23}$
-Continuing where we left off with the original problem,
-$\quad y : \;\,14 \; \mid \; 1 + 41y = 575 \quad \quad \text{AND is divisible by } 23$
-So,
-$\tag{ANS} 23^{-1} \equiv 25 \pmod{41}$<|endoftext|>
-TITLE: Relative homology of a retract
-QUESTION [7 upvotes]: Show that if $A$ is a retract of $X$ then for all $n \ge 0$ $$H_n(X) \simeq H_n(A) \oplus H_n(X,A)$$
-So we have a retraction $r:X \to A$, which is surjective.
-Consider the long exact sequence
-$$\cdots \to H_n(A) \to H_n(X) \to H_n(X,A) \to H_{n-1}(A) \cdots$$
-As $r$ is surjective we have that $H_n(X) \to H_n(X,A)$ is surjective, and hence $H_n(A) \to H_n(X)$ is injective. Thus we have a short exact sequence
-$$0 \to H_n(A) \to H_n(X) \to H_n(X,A) \to 0$$
-I am unsure how to go from the fact that this is exact, to the result (assuming the above is correct!)
-
-REPLY [4 votes]: If you can find a homomorphism $H_n(X,A) \to H_n(X)$ which 'splits' the quotient map $H_n(X) \to H_n(X,A)$ then you can use the splitting lemma to give you your direct sum.<|endoftext|>
-TITLE: Do we know this homogeneous space by another name?
-QUESTION [7 upvotes]: Consider the homogeneous space $GL(3)/GL(2) = GL(3,\mathbb{R})/GL(2,\mathbb{R})$ where $GL(2)$ fixes the first coordinate axis (so can be identified with the subgroup of $2\times 2$ blocks sitting in the 'bottom right' corner of matrices in $GL(3)$).
-Is there any 'explicitly known' other homogeneous space which is isomorphic to this one? Say by using orthogonal groups? Is it some sort of Grassmannian/Stiefel manifold?
-Am I correct in guessing that $GL(3)/GL(2)$ has dimension 5?
-
-REPLY [3 votes]: As a bundle it fibers over $\mathbb RP^2$ with fiber $\mathbb R^3 \setminus \mathbb R^2$.
-I think the most natural way to describe this space would be to consider two bundles over $\mathbb RP^2$: (1) $\mathbb RP^2 \times \mathbb R^3$ and (2) $\{(L,v) : L \in \mathbb RP^2, v \in \mathbb R^3, v \perp L \}$; for this purpose $\mathbb RP^2$ is considered the space of lines in $\mathbb R^3$.
-So the total space of bundle (2) is a subspace of the total space of bundle (1). The bundle you're interested in is the complement of (2) in (1).
-So another way to say what this space is: it is all pairs $(L, v)$ such that $L$ is in $\mathbb RP^2$ and $v$ projects to a non-zero vector in $L$ (using orthogonal projection).
-So another way to describe it would be the tangent bundle of $\mathbb RP^2$ fiber-product with a certain bundle over $\mathbb RP^2$ -- as a bundle over $\mathbb RP^2$ this "certain bundle" is the map $\mathbb R \times S^2 \to \mathbb RP^2$ where $(p,q) \longmapsto \pi(q)$ where $\pi : S^2 \to \mathbb RP^2$ is the 2:1 covering map.
-Even simpler, as a space it's diffeomorphic to $S^2 \times \mathbb R^3$. You can make this equivariant with respect to the left $GL(3)$ action as well -- $GL(3)$ has its standard (projectivized) action on $S^2$, similarly it acts on $\mathbb R^3$.<|endoftext|>
-TITLE: Why should I care about adjoint functors
-QUESTION [72 upvotes]: I am comfortable with the definition of adjoint functors. I have done a few exercises proving that certain pairs of functors are adjoint (tensor and hom, sheafification and forgetful, direct image and inverse image of sheaves, spec and global sections etc.) but I am missing the bigger picture.
-Why should I care if a functor has a left adjoint? What does it tell me about the functor?
-
-REPLY [7 votes]: It has been about half a year since I asked this question and I have learnt a lot more category theory since then. All of the above answers are excellent. One thing which everyone said is that a left (resp. right) adjoint commutes with colimits (resp. limits). Indeed, this is one of the best things about knowing that two functors are adjoints.
-We can say more about the relationship between adjoints and (co)limits.
-
-${\rm \bf Theorem:}$ Let $\mathcal{A}$ and $\mathcal{B}$ be categories and assume that $G : \mathcal{A} \to \mathcal{B}$ and $F : \mathcal{B} \to \mathcal{A}$ are functors. The following are equivalent:
-
-There exists a natural isomorphism $\tau : {\rm Mor}(F-,-) \to {\rm Mor}(-,G-)$
-There exist natural transformations $\eta : 1_{\mathcal{B}} \to G \circ F$ and $\epsilon : F \circ G \to 1_{\mathcal{A}}$ such that $$ F(B) \stackrel{F(\eta_B)}{\to} F \circ G \circ F(B) \stackrel{\epsilon_{F(B)}}{\to} F(B) $$ and $$ G(A) \stackrel{\eta_{G(A)}}{\to} G \circ F \circ G(A) \stackrel{G(\epsilon_A)}{\to} G(A) $$ are the identity morphisms
-there exists a natural transformation $\eta : 1_{\mathcal{B}} \to G \circ F$ such that for all $B \in \mathcal{B}$, $\eta_B$ is a $G$-universal morphism
-there exists a natural transformation $\epsilon : F \circ G \to 1_{\mathcal{A}}$ such that for all $A \in \mathcal{A}$, $\epsilon_A$ is an $F$-couniversal morphism
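-Before moving on, condition 2 can be made very concrete. For the free-monoid/forgetful adjunction between sets and monoids, the two triangle identities can be checked mechanically; here is a small illustrative Python sketch (the choice of the additive monoid of integers as the test monoid is mine, purely for demonstration):
-
-    from itertools import chain
-
-    # F(B) = free monoid on a set B, modeled as lists over B with concatenation;
-    # G(A) = underlying set of a monoid A; test monoid A = (int, +, 0).
-
-    def eta(b):                    # unit eta_B : B -> G(F(B)), b |-> [b]
-        return [b]
-
-    def eps_int(word):             # counit eps_A : F(G(A)) -> A for A = (int, +, 0)
-        return sum(word)
-
-    def eps_free(word_of_words):   # counit at a free monoid: concatenation
-        return list(chain.from_iterable(word_of_words))
-
-    # Triangle 1: eps_{F(B)} composed with F(eta_B) is the identity on F(B)
-    w = ['x', 'y', 'z']
-    assert eps_free([eta(b) for b in w]) == w
-
-    # Triangle 2: G(eps_A) composed with eta_{G(A)} is the identity on G(A)
-    for a in [0, 5, -3]:
-        assert eps_int(eta(a)) == a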
- - -$\eta_B : B \to G \circ F(B)$ is a $G$-universal morphism means that for all morphisms $f : B \to G (A)$ in $\mathcal{B}$, there exists a unique morphism $\bar{f} : F(B) \to A$ such that $$G(\bar{f}) \circ \eta_B = f$$ -$\epsilon_A : F \circ G(A) \to A$ is an $F$-universal morphism means that for all morphisms $f : F(B) \to A$ in $\mathcal{A}$, there exists a unique morphism $\bar{f} : B \to G(A)$ such that -$$ \epsilon_A \circ F(\bar{F}) = f$$ - - -Now let $I$ be a small category and $\mathcal{A}^I$ be the category whose objects are functors $I \to \mathcal{A}$ and whose morphisms are natural transformations. There is a natural functor $\Delta : \mathcal{A} \to \mathcal{A}^I$ defined as follows: - - -$\Delta(A) : I \to \mathcal{A}$ maps each object of $I$ to $A$ and each morphism in $I$ to the identity on $A$ -if $f : A \to B$ is a morphism in $\mathcal{A}$ then $\Delta(f)$ is the natural transformation defined by $\Delta(f)_i = f$ - - -By the above theorem, saying that every functor $D : I \to \mathcal{A}$ has a colimit is exactly the same as saying that ${\rm colim} : \mathcal{A}^I \to \mathcal{A}$ is a left adjoint of $\Delta$. Saying that every functor $D : I \to \mathcal{A}$ has a limit is exactly the same as saying that ${\rm lim} : \mathcal{A}^I \to \mathcal{A}$ is a right adjoint of $\Delta$<|endoftext|> -TITLE: If every commutator is idempotent, then the ring is commutative -QUESTION [15 upvotes]: Let $R$ be a ring and for $x,y\!\in\!R$, define the commutator as $[x,y]:=xy-yx$. An $r\!\in\!R$ is idempotent iff $r^2=r$. -How can one prove, that if every commutator is idempotent, then the whole ring is commutative, i.e. all commutators are zero? - -REPLY [10 votes]: Proof from "Two elementary generalisations of boolean rings" AMM Feb.1986: -Let $Z(R):=\{x\!\in\!R;\: \forall y\!\in\!R\!: xy=yx\}$ denote the center of the ring. We notice that - \begin{equation*} - \label{eq:7.1} - xy-yx=(xy-yx)^2=(yx-xy)^2=yx-xy. - \tag*{(1)} - \end{equation*} -First we show a general lemma, that if $xy\!=\!0$ implies $yx\!=\!0$ $(\ast)$, then every idempotent $e$ is central. For arbitrary $r\!\in\!R$ we have $(e^2\!-\!e)r=0=e(er\!-\!r)$ and by $(\ast)$ we have $0=(er\!-\!r)e=ere\!-\!re$. Similarly, from $r(e^2\!-\!e)=0=(re\!-\!r)e$ by $(\ast)$ we get $0=e(re\!-\!r)=ere\!-\!er$. Therefore $re=ere=er$ which proves $e\!\in\!Z(R)$. -Next we prove, that all commutators are central. This follows from the previous paragraph, because the condition $(\ast)$ holds: if $xy=0$, then $yx$ $=yx-xy$ $=(yx\!-\!xy)^2$ $=(yx)^2$ $=y(xy)x$ $=0$. We have proved, that - \begin{equation*} - \label{eq:7.2} - \forall x,y\in R:\; xy-yx\in Z(R). - \tag*{(2)} - \end{equation*} -Next we prove that all squares are central. This we get from the equality $x(xy-yx)$ $\overset{(1)}{=}x(yx-xy)$ $\overset{(2)}{=}(yx-xy)x$, which tells us that $x^2y=yx^2$ , i.e. -\begin{equation*} - \label{eq:7.3} - \forall x\in R:\; x^2\in Z(R). - \tag*{(3)} -\end{equation*} -Next we prove, that $(xy)^2=(yx)^2$. 
This we achieve via $(3)$ by writing $yxy$ as a sum of squares: -$$(xy)^2 =x(yxy)=x\big((yx)^2+y^2-(yx\!-\!y)^2-y^2x\big)$$ -\begin{equation*} - \label{eq:7.4} - \overset{(3)} {=}\big((yx)^2+y^2-(yx-y)^2-y^2x\big)x=(yxy)x=(yx)^2 - \tag*{(4)} -\end{equation*} -Finally, we show that all elements are central: -$$xy\!-\!yx=(xy\!-\!yx)^2=xy(xy\!-\!yx)\!-\!yx(xy\!-\!yx) \overset{(1)}{=} xy(xy\!-\!yx)\!-\!yx(yx\!-\!xy)$$ -$$= (xy)^2\!-\!xy^2x\!-\!(yx)^2\!+\!yx^2y\overset{(4)}{=}-\!xy^2x\!+\!yx^2y \overset{(3)}{=} -\!x^2y^2\!+\!x^2y^2=0$$ -This completes the proof.<|endoftext|> -TITLE: Finding $ \csc \theta $ given $ \cot \theta $ -QUESTION [7 upvotes]: I have the following problem: -If $ \cot{C} = \frac{\sqrt{3}}{7} $, find $ \csc{C} $ -From my trig identities, I know that $ \cot{\theta} = \frac{1}{\tan{\theta}} $, and $ \csc{\theta} = \frac{1}{\sin{\theta}} $, and also $ \cot{\theta} = \frac{\cos{\theta}}{\sin{\theta}} $ -However, I can't seem to see how to connect the dots to get from cotangent to cosecant. I figure I might be able to use the last identity if I can somehow make $ \cos{C} = 1 $, but I don't really see how to do that, either. -This is homework, so please provide me with some pointers rather than complete solutions. -Thanks. - -REPLY [3 votes]: Here's a simple way I usually think about it. Suppose you have a right triangle in the first quadrant. Since $\cot C=\frac{\sqrt{3}}{7}$, you know the ratio of the leg adjacent to $C$ to the leg opposite of $C$ is $\frac{\sqrt{3}}{7}$. So let's just say the opposite leg has length $7$ and the adjacent leg has length $\sqrt{3}$. Then by the Pythagorean theorem, the hypotenuse has length $\sqrt{52}=2\sqrt{13}$. -Now $\csc C$ is just the ratio of the hypotenuse to the opposite leg, essentially the inverse of $\sin$, which is the ratio of the opposite leg to the hypotenuse.<|endoftext|> -TITLE: Natural set to express any natural number as sum of two in the set -QUESTION [5 upvotes]: Any natural number can be expressed as the sum of three triangular numbers, or as four square numbers. The natural analog for expressing numbers as the sum of two others would apparently be the sum of two "linear" numbers, but all natural numbers are "linear", so this is rather unsatisfying. Is there a well-known sparser set of integers (or, half-integers, for that matter) that has this property? - -REPLY [3 votes]: Assuming Goldbach's conjecture, you can take the set of all primes and successors of primes (plus some small numbers). These have density $2/\log n$. -Not assuming the conjecture, you can take primes, almost-primes and successors of primes (plus some small numbers) - this is the famous Chen's theorem. The resulting density is $\log\log n/\log n$. -Another suggestion is to take the set of all numbers whose binary expansion is "supported" on only odd powers of $2$ or only even powers of $2$. The resulting density is roughly $2/\sqrt{n}$, so this is close to optimal (you need at least $1/\sqrt{n}$).<|endoftext|> -TITLE: Number Theory: confusion with notation -QUESTION [12 upvotes]: In elementary number theory, I find the following notations being used interchangeably, which leads me to have many ambiguous assumptions: - -$\mathbb{Z}_p^\times$; -$\mathbb{Z}_p$, - -where p is any integer. What's the difference between them? -One more question to add to the kitty: what do they mean when they say "a subgroup of $\mathbb{Z}_p$, where p is prime"? -It will be nice if someone can enlighten me on this elementary notation. -Thanks. 
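-For instance, on the common reading where $\mathbb{Z}_n$ means the integers mod $n$, the units $\mathbb{Z}_n^\times$ can be computed directly (a throwaway Python sketch of my own, just to ground the symbols):
-
-    from math import gcd
-
-    def units(n):
-        # Z_n^x: the residues that are invertible modulo n
-        return [a for a in range(1, n) if gcd(a, n) == 1]
-
-    print(units(7))    # [1, 2, 3, 4, 5, 6]: every nonzero class, since 7 is prime
-    print(units(12))   # [1, 5, 7, 11]: a proper subset, since 12 is composite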
-
-REPLY [10 votes]: $\mathbb{Z}_p$ is, in my opinion, a problematic notation. In slightly less elementary number theory it refers to the $p$-adic integers, which are very different from the integers $\bmod p$. An unambiguous, if slightly more cumbersome, notation would be $\mathbb{Z}/(p)$ or $\mathbb{Z}/p\mathbb{Z}$.<|endoftext|>
-TITLE: Example of relative Ext functor
-QUESTION [8 upvotes]: Greetings,
-I've been reading Maclane's "Homology" and ran into the following question:
-Let $(R,S)$ be a resolvent pair of rings, i.e. $R$ is an $S$-algebra and we have a functor $\Psi \colon \operatorname{R-Mod} \to \operatorname{S-Mod}$, that is additive, exact and faithful. We also have a left adjoint functor of $\Psi$, namely $F \colon \operatorname{S-Mod} \to \operatorname{R-Mod}$.
-One then defines the relative $\operatorname{Ext}_{(R,S)}$ functor, using the bar-resolution. Can you please help me find a concrete example of a case when $Ext^1_{(R,S)} \neq Ext^1_R$ (where $Ext^1_R$ is the regular $Ext$ functor).
-My thoughts on the matter: one can identify $Ext^1_{(R,S)}(A,B)$ with the set of extensions of $B$ by $A$, that are $S$-split. One must then find an extension in $R$ that is not $S$-split to do the trick. But this argument feels a bit like cheating.
-Any help will be appreciated.
-
-REPLY [4 votes]: Well, if $R=S$, then an $S$-split extension is obviously also $R$-split, so $\operatorname{Ext}^1_{(R,S)}$ is always zero. On the other hand, $\operatorname{Ext}^1_{R}$ can be pretty much anything you like.<|endoftext|>
-TITLE: Is there a nice way to show that any finite sequence of arithmetic operations on the rationals can be put in 'standard form'?
-QUESTION [7 upvotes]: I read a little proposition that any quantity obtainable from rational numbers by a finite number of operations $+$, $-$, $\cdot$, $\div$, $\sqrt{{}}$ can be put in a standard form $\frac{p}{q}\cdot A$, where $\frac{p}{q}\in\mathbb{Q}$ and $A$ is an expression of only integers and $+$, $-$, $\cdot$, and $\sqrt{{}}$.
-I don't think it's difficult to believe this is the case, but I don't know of a nice, clear way to prove it.
-I tried to simplify the argument first by saying that any operation $+$, $-$, $\cdot$, $\div$ can be rewritten entirely in terms of $+$, since $-$ is the addition of a the additive inverse, $\cdot$ is repeated addition, and $\div$ is multiplication of the reciprocal.
-I also decided that given a radicand consisting of rationals and operations $+$, $-$, $\cdot$, $\div$, you could then rewrite the radicand as a sum of rationals, and multiply the radical by $\frac{\sqrt{\Delta^2}}{\Delta}$, where $\Delta$ is some multiple of the denominators. This would put the radicand in standard form, with a fraction $\frac{1}{\Delta}$ outside the radical. For example,
-$$
-\sqrt{\frac{a}{b}+\frac{c}{d}}=\sqrt{\frac{ad+cb}{bd}}=\frac{\sqrt{(bd)^2}}{bd}\sqrt{\frac{ad+cb}{bd}}=\frac{1}{bd}\sqrt{(bd)(ad+cb)}.
-$$
-I suppose for nested radicals, the process could be repeated. Then any sequence of these operations can be put into a sum of terms with rational denominators by rationalizing any denominator, and then one could do the same procedure of pulling a common multiple of all the terms out in front to put the expression in standard form.
-However, I wanted to formalize this idea, but it seems like there are so many different things to consider that I don't know how to go about it. Is there a way to make basic induction work, or perhaps some other method? Thanks.
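-(At least the single manipulation displayed above is easy to spot-check numerically; an illustrative Python sketch of mine, with randomly chosen positive integers:)
-
-    import math, random
-
-    # check  sqrt(a/b + c/d) == sqrt(b*d*(a*d + c*b)) / (b*d)  on random samples
-    for _ in range(1000):
-        a, b, c, d = (random.randint(1, 50) for _ in range(4))
-        lhs = math.sqrt(a / b + c / d)
-        rhs = math.sqrt(b * d * (a * d + c * b)) / (b * d)
-        assert math.isclose(lhs, rhs), (a, b, c, d)
-    print("identity holds on all samples")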
-REPLY [10 votes]: This sort of thing is proved by induction over the number of operations by treating the "outermost" operation:
-Case $+$: $\left(\frac{p}{q}A\right)+\left(\frac{r}{s}B\right)=\frac{1}{qs}(psA+qrB)$
-Case $-$: $\left(\frac{p}{q}A\right)-\left(\frac{r}{s}B\right)=\frac{1}{qs}(psA-qrB)$
-Case $\cdot$: $\left(\frac{p}{q}A\right)\cdot\left(\frac{r}{s}B\right)=\frac{pr}{qs}(AB)$
-Case $\div$: $\left(\frac{p}{q}A\right)\div\left(\frac{r}{s}B\right)=\frac{ps}{qr}\frac{A}{B}$
-Case $\sqrt{}$: $\sqrt{\frac{p}{q}A}=\frac{1}{q}\left(\sqrt{pqA}\right)$
-In the first version of this answer, I claimed: "Now all that remains to be shown is that you can deal with the $B$ in the denominator in the $\div$ case, and this you can show with the same kind of induction by cases over the number of operations in $B$."
-It turns out this was overly optimistic, and actually the more subtle part of the proof lies in dealing with that case. So here's a proper proof:
-The proof is by induction over $(n,m)$, where $m$ is the number of operations in $B$ and $n$ is the number of different radicands that appear in $B$. For given $(n,m)$, assume that we can move all roots to the numerator if the denominator has $(n',m') < (n,m)$ in lexicographical order, i.e. either $n' < n$, or $n'=n$ and $m'<m$.<|endoftext|>
-TITLE: Approachable = provably approachable?
-QUESTION [6 upvotes]: Question: Let’s call a real number $x$ approachable if there exists a Turing machine $M$ such that $M(1), M(2),\dots$ is a sequence that converges to $x$. If we can prove (edit: in ZFC) that the sequence converges, we say that $x$ is provably approachable. Is every approachable number also provably approachable?
-Clarification: Another way to ask the same question is: Is it true that: "If there exists a Turing machine M such that x is the limit of the output values, then there exists a Turing machine M' that converges to the same number, and we can prove that it converges".
-Comments: I don't know if these concepts have names, I just invented the terminology: If they have names, please make a comment, so I can change it (edit: Approachable is called "computable in the limit", but I don't know if there is a name for provably approachable. "Approachable" has been used a lot in the answers, so I have decided not to change it in the question). All computable numbers are provably approachable but not all provably approachable numbers are computable (Chaitin's constant is a counterexample). I don't know what the relation is between approachable and definable (I don't even know if there is an accepted definition of definable number?)
-
-REPLY [6 votes]: After a discussion with Russell Miller, I've got an answer to the question. There is in fact an approachable real that is not provably approachable.
-Let us adopt PA as the base theory, although the construction can easily adapt to other theories. I will describe a Turing machine $M$ that produces a convergent sequence of rational numbers, but any Turing machine $M'$ that PA proves to produce a convergent sequence of rational numbers converges to a different number than $M$ does.
-The idea is to diagonalize against any such proofs that may be discovered. Fix an enumeration of the Turing machines $M_n$. Consider the following Turing machine $M$. Our machine will at stage $k$ produce a rational number $r_k$ by giving finitely many ternary digits, but using only the digits 0 and 1 and not 2, to avoid non-unique readability issues.
As the construction proceeds, we systematically enumerate all possible proofs from the theory. We may find at some stage $k$ that there is a proof that Turing machine $M_n$ produces a convergent sequence of rational numbers. In this case, we run $M_n$ for $k$ steps, getting the current rational approximation $q_{n,k}$ to the real to which $M_n$ is converging. Then, if $r_k$ is different from $q_{n,k}$ by digit $n$, we use $r_{k+1}=r_k$; otherwise, we produce $r_{k+1}$ by flipping the $n$-th digit of $r_k$ from $0$ to $1$ or vice versa to ensure that $r_{k+1}$ is different from $q_{n,k}$. (Note, we are flipping the $n$-th digit, not the $k$-th digit, so each machine $M_n$ is tied to the $n$-th digit of our limit real.) We simply proceed with this plan, considering the new proofs as they appear.
-Note that each machine $M_n$ that provably produces a convergent sequence will be considered infinitely many times, for increasingly large $k$, since there are many proofs that it does so. So the relevant $k$ for $M_n$ will become arbitrarily large. Since $M_n$ was proved to produce a convergent sequence, it follows that the values of $q_{n,k}$ really do converge, and thus eventually stabilize in their first $n$ digits (if those digits are all $0$s and $1$s). Thus, we will flip the $n$-th digit of our rational $r_k$ at most finitely many times. It follows that our sequence $r_k$ is a convergent sequence of rational numbers.
-But also, it follows that whenever there is a proof that some machine $M_n$ produces a convergent sequence of rational numbers, then the limit of our real will differ from the limit of that machine in digit $n$.
-Thus, we have an approachable real that is not provably approachable, as desired.
-Let me remark on the confusing subtle point here about in which theory we have proved our machine to produce a convergent sequence of rationals. The answer is that we have done so in any theory that knows that any statement that is provable in the base theory is in fact true, since we needed to know that when we found a proof that $M_n$ produces a converging sequence of rationals, that those rationals really did converge. For example, ZFC has this relation to PA, since ZFC proves that if $\varphi$ is provable in PA, then $\varphi$ is true. But more generally, for any $\omega$-consistent theory $T$, we may extend to a stronger theory $T^+$ that knows this implication.<|endoftext|>
-TITLE: cubic root of negative numbers
-QUESTION [31 upvotes]: Excuse my lack of knowledge and expertise in math, but to me it would come naturally that the cubic root of $-8$ would be $-2$ since $(-2)^3 = -8$.
-But when I checked Wolfram Alpha for the real $\sqrt[3]{-8}$, it tells me it doesn't exist.
-I came to trust Wolfram Alpha so I thought I'd ask you guys to explain the sense of that to me.
-
-REPLY [3 votes]: In the years since the question was asked and answered, Wolfram introduced the $\operatorname{Surd}(x,n)$ function (Mathematica 9 circa $2012$, then Alpha) to designate the real single-valued $n^{th}$ root of $x$.
-For example $\sqrt[3]{-8}$ and $\sqrt[5]{-243}$ result directly in $-2$ and $-3$ respectively.
-
-
-The $\operatorname{Cbrt}$ function discussed in Mark McClure's answer - which had changed behavior around the same time to return the real cube root by default - appears to be identical to $\operatorname{Surd}(\,\cdot\,,3)$.<|endoftext|>
-TITLE: Why are there no discrete zero sets of a polynomial in two complex variables?
-QUESTION [5 upvotes]: Why is the zero set in $\mathbb{C}\times\mathbb{C}$ of a polynomial $f(x,y)$ in two complex variables always non-discrete (no zero of $f$ is isolated)?
-
-REPLY [5 votes]: This actually holds for any analytic function of two complex variables on an open set. Suppose $f(x_0,y_0) = 0$. Unless $f(x_0,y)$ vanishes identically in $y$ (in which case the zero is clearly not isolated), viewed as a function of $y$, $f(x_0,y)$ must have an isolated zero of some order $k$ at $y = y_0$. By complex analysis (the residue theorem will work for example), for some small $r$ one has
-$${1 \over 2\pi i} \int_{|z - y_0| = r} {f'(x_0, z) \over f(x_0,z)}\,dz = k$$
-But the integrand above is a continuous function of $x$ near $x = x_0$ and takes integer values. Hence for $x$ close enough to $x_0$ one also has
-$${1 \over 2\pi i} \int_{|z - y_0| = r} {f'(x, z) \over f(x,z)}\,dz = k$$
-This in turn implies that whenever $x$ is close enough to $x_0$, $f(x,y)$ (viewed as a function of $y$) has $k$ zeroes inside $|z - y_0| = r$. Since this holds for arbitrarily small $r$, we conclude the zero of $f(x,y)$ at $(x_0,y_0)$ is not isolated.
-This is actually a watered-down version of the proof of the famous Weierstrass Preparation Theorem (which incidentally also implies this fact in pretty short order).<|endoftext|>
-TITLE: Transfinite sequences of topologies
-QUESTION [5 upvotes]: I am not that familiar with arguments using ordinals so this may be pretty simple. Let $T$ be a topology on a set $X$. For each ordinal $\alpha$ let $T_{\alpha}$ be a topology on $X$ satisfying $T\subseteq T_{\alpha+1}\subsetneq T_{\alpha}$ (notice the proper inclusion) and $T_{\alpha}=\bigcap_{\beta<\alpha}T_{\beta}$ when $\alpha$ is a limit ordinal. Apparently, this should imply that there is some ordinal $\gamma$ such that $T=T_{\gamma}$. Why is this?
-Edit: I see there is a problem with the above question. I will try to right this here. Henno is right that I am referring to a certain $f$ that makes a specific topology containing $T$ coarser but so that the new topology still contains $T$.
-New question:
-We start with a topology $T_0$ on $X$ containing $T$ and have $T\subseteq T_{\alpha+1}=f(T_{\alpha})$ for successor ordinals. We have $T_{\alpha}=\bigcap_{\beta<\alpha}T_{\beta}$ for limit ordinals. We also have that $T_{\alpha+1}=T$ whenever $T_{\alpha+1}=T_{\alpha}$. Apparently there should be a $\gamma$ such that $T=T_{\gamma}$. Why is this?
-
-REPLY [5 votes]: The reason is that as you proceed with your recursion, you are gradually throwing out objects from your set. Since you only have a set of objects to begin with, you must eventually run out of things to throw out. The only way this can happen, under the assumptions you have set up, is if you have hit the bottom topology $T$.
-One can turn this idea into a proof by noting that if the recursion never hit $T$, then you would have an injection from the class of ordinals into the power set of $T_0$, by the map $\alpha\mapsto (T_\alpha-T_{\alpha+1})$. (Or you could also select an element from this set and get a map directly into $T_0$.) This would contradict Hartogs' theorem among others, that for every set there is an ordinal that does not inject into it.<|endoftext|>
-TITLE: Expected number of rolling a pair of dice to generate all possible sums
-QUESTION [28 upvotes]: A pair of dice is rolled repeatedly until each outcome (2 through 12) has occurred at least once. What is the expected number of rolls necessary for this to occur?
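-For a rough sanity check, a quick Monte Carlo sketch (Python; the trial count is an arbitrary choice of mine) consistently lands near $61.2$:
-
-    import random
-
-    def rolls_until_all_sums():
-        seen, count = set(), 0
-        while len(seen) < 11:          # the 11 possible sums 2..12
-            count += 1
-            seen.add(random.randint(1, 6) + random.randint(1, 6))
-        return count
-
-    trials = 200_000
-    print(sum(rolls_until_all_sums() for _ in range(trials)) / trials)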
-Notes: This is not very deep conceptually, but because of the unequal probabilities for the outcomes, it seems that the calculations involved are terribly messy. It must have been done already (dice have been studied for centuries!) but I can't find a discussion in any book, or online. Can anybody give a reference?
-
-REPLY [30 votes]: This is the coupon collector's problem with unequal probabilities. There is a treatment of this problem in Example 5.17 of the 10th edition of Introduction to Probability Models by Sheldon Ross (page 322). He solves the problem by embedding it into a Poisson process. Anyway, the answer is
-$$ E[T]=\int_0^\infty \left(1-\prod_{j=1}^m (1-e^{-p_j t})\right) dt, $$
-when there are $m$ events with probability $p_j, j=1, \dots, m$ of occurring.
-In your particular problem with two fair dice, my calculations give
-$$E[T] = {769767316159\over 12574325400} = 61.2173.$$<|endoftext|>
-TITLE: Formula for $\binom{n}{2}/\binom{m}{2}=1/2$ or $2n(n-1) = m(m-1)$?
-QUESTION [6 upvotes]: Where could I find a formula that produces integers $n$ and $m$ such that $\binom{n}{2}/\binom{m}{2}=1/2$?
-Of course, this question can be reformulated as: How to find all the integer values of n and m such that $$2n(n-1) = m(m-1)\quad ?$$
-
-REPLY [4 votes]: In my answer here you will find Lagrange's method for reducing the general bivariate quadratic Diophantine equation to a Pell equation, along with some interesting links, including a Java applet that will show you the complete solution, including complete step-by-step details of the solution.
-According to said applet the solutions are given by
-$$\rm N_{i+1}\ =\ 3\ N_i + 2\ M_i -2 $$
-$$\rm M_{i+1}\ =\ 4\ N_i + 3\ M_i -3 $$
-with $\rm\ N_0,M_0\ =\ 0,1\:;\ \ 1,1\:.\ $ To see complete details of the solution, including the relationship with the continued fraction of $\rm\:\sqrt{2}\:,\:$ choose the "step by step" solution method.<|endoftext|>
-TITLE: Proof by induction for an exponential inequality: $(2^r-1)(1-x)x^{2^{r}-2}+x^{2^{r}-1}>x^{2^{r}-r-1}$
-QUESTION [5 upvotes]: How do I prove
-$(2^r-1)(1-x)x^{2^{r}-2}+x^{2^{r}-1}>x^{2^{r}-r-1}$
-for $\frac{1}{2}<x<1$ and $r>1$.
-To see this, one can begin like Moron and try to show that $f(y)1$ because $r>1$ hence $y\mapsto g(y)$ is increasing on $y>1$, and $g(1)=0$ hence $g(y)>0$ for $y>1$. Finally, $y\mapsto f(y)$ is increasing on $y>1$ hence we are done.<|endoftext|>
-TITLE: How to prove absolute summability of sinc function?
-QUESTION [10 upvotes]: We know that $$\int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx=\int_0^\infty \frac{\sin x}{x} \, dx=\frac{\pi}{2}.$$
-How do I show that $$\int_0^\infty \left\vert\frac{\sin x}{x} \right\vert \, dx$$ converges?
-
-REPLY [9 votes]: $\int_0^\infty \frac{\sin x}{x} dx$ is in fact my favorite example of a function which is Riemann integrable but not Lebesgue integrable. (just to tease people who think that Lebesgue is so much better than Riemann; in fact this integral might appear more frequently in applications than any integral which is Lebesgue but not Riemann integrable)<|endoftext|>
-TITLE: Wedge Product, A Novel Interpretation or Just Plain Wrong?
-QUESTION [15 upvotes]: I have read (I think) all of the previous threads on this website (and many others)
-on this topic & unfortunately have not found an answer to my question. Due to the fact that I am only beginning to study the subjects of tensors, differential forms etc...
-I appeal to your good nature not to respond with anything too advanced as it just goes over my head.
The structure of my question is as follows: first a look at the cross product; then a look at a wedge product calculation & its similarities (that I think
-are far more explicit if you interpret it in the way I've explained below) to the cross product & finally 4 questions (in bold) that are motivated by the wedge product calculation.
-The 4 questions are all I'm hoping for answers to as I barely understand anything beyond
-what I've written about this subject & just don't understand what a lot of the online
-fora threads on this topic are saying in the responses to this topic. I think these
-questions are my way in.
-The cross product is a strange animal, it really has very little justification as it is
-taught in elementary linear algebra books. It took me a long time to learn that the
-cross product is really no more than the dot product in disguise. It is actually quite
-easy to derive the result that a cross product gives, through clever algebra, as is done in the cross product pdf's here & here: synechism.org/drupal/cfsv/ (sorry about the text-link!). By doing
-your own algebra you can
-justify the anti-symmetric property of the cross product,
-$\overline{u} \ x \ \overline{v} \ = \ - \ \overline{v} \ x \ \overline{u}$.
-So understanding the cross product in this way is quite satisfying to me as we can
-easily justify why $\ \overline{u} \ x \ \overline{u} \ = \ 0$ without relying on
-these properties as definitions.
-My questions are based on the fact that these properties can be justified in such an
-elementary way. If you've never seen the cross product explained the way it is in the .pdf's then I urge you to read them & think seriously about it. I'm sure these are
-justified in more advanced works in other ways but if an explanation can be given
-at this level I see no reason not to take it.
-So let's look at an example & the steps taken that I think have explanations analogous
-to those of the cross product above:
-$\ \overline{v}$ = v₁$\ \hat{e_1}$ + v₂$\ \hat{e_2}$
-$\ \overline{w}$ = w₁$\ \hat{e_1}$ + w₂$\ \hat{e_2}$
-(where $\ \hat{e_1}$ = (1,0) & $\ \hat{e_2}$ = (0,1)).
-v ⋀ w = (v₁$\ \hat{e_1}$ + v₂$\ \hat{e_2}$) ⋀ (w₁$\ \hat{e_1}$ + w₂$\ \hat{e_2}$)
-v ⋀ w = v₁w₁$\ \hat{e_1}$⋀$\ \hat{e_1}$ + v₁w₂$\ \hat{e_1}$⋀$\ \hat{e_2}$ + v₂w₁$\ \hat{e_2}$⋀$\ \hat{e_1}$ + v₂w₂$\ \hat{e_2}$⋀$\ \hat{e_2}$
-v ⋀ w = v₁w₂$\ \hat{e_1}$⋀$\ \hat{e_2}$ + v₂w₁$\ \hat{e_2}$⋀$\ \hat{e_1}$
-v ⋀ w = (v₁w₂ - v₂w₁)$\ \hat{e_1}$⋀$\ \hat{e_2}$
-This is interpreted as the area contained in the parallelogram formed by v & w.
-No doubt you noticed that all of the manipulations with the $\ \hat{e_i}$'s have
-the exact same form as the cross product. Notice also the fact that this two
-dimensional calculation comes out with the exact same result as the cross product of
-$\ \overline{v'}$ = v₁$\ \hat{e_1}$ + v₂$\ \hat{e_2}$ + 0$\ \hat{e_3}$
-$\ \overline{w'}$ = w₁$\ \hat{e_1}$ + w₂$\ \hat{e_2}$ + 0$\ \hat{e_3}$
-in ℝ³. Also the general
-x ⋀ y = (x₁$\ \hat{e_1}$ + x₂$\ \hat{e_2}$ + x₃$\ \hat{e_3}$) ⋀ (y₁$\ \hat{e_1}$ + y₂$\ \hat{e_2}$ + y₃$\ \hat{e_3}$)
-comes out with the exact same result as the cross product. The important thing is that
-the cross product of the two vectors results in a vector orthogonal to $\ \overline{v}$ & $\ \overline{w}$ and that the result is the same as the wedge product calculation.
-1: Can $\ \hat{e_1}$⋀$\ \hat{e_2}$ be interpreted as $\ \hat{e_3}$ in my above
-calculation?
-What I mean is: can $\ \hat{e_1}$⋀$\ \hat{e_2}$ be interpreted as a (unit) vector
-orthogonal to the two vectors involved in the calculation that is scaled up by some
-factor β, i.e. β$\ \hat{e_1}$⋀$\ \hat{e_2}$ where β is the scalar representing the
-area of the parallelogram.
-2: Just as we can algebraically validate why $\overline{u} \times \overline{v} \ = \ - \ \overline{v} \times \overline{u}$, why doesn't the exact same logic validate
-$\ \hat{e_1}$⋀$\ \hat{e_2}$ = - $\ \hat{e_2}$⋀$\ \hat{e_1}$?
-If we think along these lines I think we can justify why $\ \hat{e_1}$⋀$\ \hat{e_1}$ = 0,
-just as it occurs analogously in the cross product. They seem far too similar for it to be coincidence but I can't find anyone explaining this relationship.
-3: In general, if you are taking the wedge product of (n - 1) vectors in n-space
-will you always end up with a new vector orthogonal to all of the others?
-If you are taking the wedge product of (n - 1) vectors then will you end up with
-λ($\ \hat{e_1}$⋀$\ \hat{e_2}$⋀...⋀$\ \hat{e_n}$)
-where the term ($\ \hat{e_1}$⋀$\ \hat{e_2}$⋀...⋀$\ \hat{e_n}$) is orthogonal to all
-the vectors involved in the calculation & the term λ represents the area/volume/hypervolume (etc...) contained in the (n - 1) vectors?
-4: I have seen it explained that we can interpret the wedge product of
-$\ \hat{e_1}$⋀$\ \hat{e_2}$
-as in the picture here, as a kind of two-dimensional
-vector. Still, the result given is no different to that of the 3-D cross product so is
-it not justifiable to think of $\ \hat{e_1}$⋀$\ \hat{e_2}$ as if it were just an
-orthogonal vector in the same way you would the cross product if you think along the
-lines I have been tracing out in this post? When you go on to take the wedge product
-of (n - 1) vectors in n-space can I not think in the same (higher dimensional) way?
-That's it, thanks a lot for taking the time to read this. I have tried to be as clear
-as possible; any contradictions/errors are a result of my poor knowledge of all of
-this! :D
-
-REPLY [7 votes]: The best introduction I know of to the exterior product is Sergei Winitzki's free book Linear Algebra via Exterior Products. Chapter $2$ in particular I think addresses all of your questions (it is unclear how much of Chapter $1$ you need to read in order to read Chapter $2$, I guess that depends on how much linear algebra you've had).<|endoftext|>
-TITLE: Primal and dual solution to linear programming
-QUESTION [12 upvotes]: Let's say we are given a primal linear programming problem:
-$\begin{array}{ccc}
-\text{minimize } & c^{T}x & &\\
-\text{subject to: } & Ax & \ge & b \\
-& x & \ge & 0
-\end{array}$
-The dual problem is defined as:
-$\begin{array}{ccc}
-\text{maximize } & b^{T}y & &\\
-\text{subject to: } & A^{T}y & \le & c \\
-& y & \ge & 0
-\end{array}$
-According to the duality theorem
-$c^{T}x \ge b^{T}y$ for every feasible solution $x$ and $y$, and in addition when $x$ and $y$ are optimal solutions to the primal and the dual task then $c^{T}x=b^{T}y$.
-So if we define a linear programming task with the following constraints:
-$\begin{array}{ccc}
-Ax & \ge & b \\
-x & \ge & 0 \\
-A^{T}y & \le & c \\
-y & \ge & 0 \\
-b^{T}y & \ge & c^{T}x
-\end{array}$
-Then any feasible solution to this task should be an optimal solution to the primal and the dual task, because the last constraint can be satisfied only if $x$ and $y$ are optimal.
-The question is: why is this approach not used?
-I see three potential reasons:
-1) I've made a mistake somewhere and it doesn't make any sense.
-2) It is often the case that the primal or the dual problem is infeasible. I've seen such examples, but in all of them the optimal solution was unbounded; is it the only case in which exactly one of the primal and dual problems is infeasible?
-3) Finding any feasible solution might be hard. The so-called Phase 1 of the simplex method can be used to find a feasible solution. I couldn't find the complexity of this phase; is it exponential just like the simplex algorithm? The other question is: what is the fastest method to determine whether there exists any feasible solution? This solution doesn't have to be found.
-
-REPLY [2 votes]: I think that you're missing some additional nitpicky conditions here and there to guarantee optimality (complementary slackness), but the idea is essentially correct. You should read more about the Karush-Kuhn-Tucker (KKT) conditions in order to see how these equations apply not only to linear programming, but constrained optimization in general.<|endoftext|>
-TITLE: quasiconformal "automorphism" groups of Julia sets
-QUESTION [5 upvotes]: To motivate this question, let me begin with a picture:
-
-Each letter labels a "blob" of this quartic Julia set (is there a technical term for these parts?). Because of resolution limitations I haven't been able to color and label every blob.
-The only answer to the MO question Symmetries of Julia sets for $z^2 - c$ mentions that “This means that there is a quasi-conformal map (thus of bounded distortion) which maps parts of the Julia set to the whole.” The transformation that this labelling seems to suggest maps the entire Julia set to itself.
-
-Does the set of quasiconformal transformations $\Xi$ (as motivated by the one above) which map the Julia set to itself form a group? I am aware that the quasiconformal transformation is not a quasiconformal conjugacy (while it fixes two points in the plane, it also fixes the entire shape of the Julia set).
-If it exists, is $\Xi$ the discrete subgroup of a continuous group? (I want to make a movie of such a transformation as this)
-What relation, if any, exists between $\Xi$ and Teichmüller space?
-
-The reason that automorphism is in quotes in the question title is because the phrase "quasiconformal automorphism groups" was used in this paper
-EDIT: Here are two more of these transformations, applied to the Douady Rabbit:
-
-The first one simply rotates around a vertex:
-
-In the second, the point that is fixed is inside the mauve Fatou
-component.
-
-REPLY [5 votes]: Why use a quartic Julia set at all?
-Consider the famous "basilica" Julia set; i.e., the Julia set of $z^2-1$.
-
-Source of image: Prokofiev, Wikimedia commons,
-http://commons.wikimedia.org/wiki/File:Julia_z2-1.png
-There is a cycle of two periodic Fatou components: One contains the critical point $0$, the other the critical value $-1$ (which in turn is mapped back to zero). These are connected via a fixed point, which is commonly denoted $\alpha$. (Here $\alpha=\frac{1 - \sqrt{5}}{2}$.) Let $K_1$ be the part of the filled Julia set to the left of $\alpha$ and $K_2$ the part to the right of $\alpha$. Then $f$ maps a neighborhood of $K_1$ conformally to $K_2$. Let us denote this map by $\phi:K_1\to K_2$. Then
-$$ \psi : K \to K; z\mapsto\begin{cases} \phi(z) & \text{if } z\in K_1 \\ \phi^{-1}(z) & \text{if }z\in K_2\end{cases}$$
-is a homeomorphism of $K$ (the filled Julia set) to itself.
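-(For readers who want to experiment, here is a minimal numerical sketch of $\psi$ in Python. It is an editorial illustration, not part of the original argument: membership in the filled Julia set $K$ is tested with the standard bounded-orbit criterion, and on $K_2$ we apply the inverse branch $-\sqrt{z+1}$ of $f$.)
-
-import cmath
-
-ALPHA = (1 - 5 ** 0.5) / 2  # the fixed point alpha of z^2 - 1
-
-def in_filled_julia(z, max_iter=200, bound=2.0):
-    # Escape-time test: z is (approximately) in K iff its orbit
-    # under f(z) = z^2 - 1 stays bounded.
-    for _ in range(max_iter):
-        if abs(z) > bound:
-            return False
-        z = z * z - 1
-    return True
-
-def psi(z):
-    # Apply f on K_1 (left of alpha) and the inverse branch
-    # -sqrt(z + 1) of f on K_2 (right of alpha).
-    if z.real < ALPHA:
-        return z * z - 1
-    return -cmath.sqrt(z + 1)
-
-for z in [0j, -1 + 0j, 0.2 + 0.3j]:
-    w = psi(z)
-    print(z, "->", w, in_filled_julia(w), abs(psi(w) - z))
-
-Away from the boundary, the printout shows that $\psi$ sends points of $K$ back into $K$ and that $\psi$ is its own inverse on these samples.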
Using the fact that $K_1$ and $K_2$ do not touch tangentially at $\alpha$, it is easy to see that the map extends to a quasiconformal homeomorphism of the plane. Alternatively, consider the corresponding self-map of the lamination:
-
-Source of image: Adam majewski, Wikimedia Commons, http://upload.wikimedia.org/wikipedia/commons/9/9e/Basilica_lamination.png
-The fixed point $\alpha$ corresponds to the "characteristic leaf" connecting angles $1/3$ and $2/3$ (where we think of the circle as $\mathbb{R}/\mathbb{Z}$). The map in question is given by
-$$ \Psi(x) = \begin{cases}2x-k & \text{if } x\in [k+1/3,k+2/3], k\in\mathbb{Z} \\
- (x+k+1)/2 & \text{if } x\in (k-1/3,k+1/3), k\in\mathbb{Z}. \end{cases}$$
-This map is clearly a quasisymmetric self-map of the circle, and hence extends to a quasiconformal self-map of the complement of the unit disk. This gives rise to a quasiconformal self-map of the complement of $K$ that agrees with $\psi$ on the boundary of $K$. By the classical glueing lemma for quasiconformal mappings, the combined map is quasiconformal near every point apart possibly from $\alpha$, and hence quasiconformal everywhere.
-Presumably the same is true for the example you post, although since you do not say exactly which map you are considering, it is impossible to tell.
-To answer the three questions posed by you:
-
-Of course the set of quasiconformal maps that take the Julia set to itself forms a group. This is trivial because the composition of two such maps again preserves the Julia set, and of course the same is true for inverses. However, note that the quasiconformal dilatation can increase under composition.
-Of course this is a subgroup of the group of quasiconformal self-maps of the plane. The subgroup is not discrete because you can continuously deform the quasiconformal map on the Fatou components. If we identify two maps when they take the same values on the Julia set, the question becomes more sensible. However, even then I believe the answer is negative. Indeed, it should be possible to define a quasiconformal isomorphism that fixes $K_2$ and "rotates" the bulbs around on the other side. By doing this kind of transformation on one of the iterated preimages of $K_2$ that is very far to the left (or right) of the Julia set, and keeping the rest of the set fixed, we obtain a map that is very close to the identity on the Julia set. (Of course details need to be checked.)
-I do not understand your third question. What Riemann surface are you considering?
-
-EDIT. Lyubich and Merenkov have recently given a complete description of the group of orientation-preserving quasisymmetries of the basilica Julia set. In contrast, for certain rational maps, they previously showed together with Bonk that every quasisymmetry is a Möbius transformation.<|endoftext|>
-TITLE: Show that if $a \equiv b \pmod n$, $\gcd(a,n)=\gcd(b,n)$
-QUESTION [9 upvotes]: My problem is how to somehow relate the gcd and congruence. I know that $(a,b) = ax + by$. I also know that $a \equiv b \pmod n$ means $n\mid a-b$. Any hints?
-Thanks!
-
-REPLY [6 votes]: If $\rm\ d\ |\ n\ |\ a\!-\!b\ $ then $\rm\ d\ |\ a \!\iff\! d\ |\ b, \ $ i.e. $\rm\bmod d\!:\ $ if $\rm\ a\equiv b\ $ then $\rm\ a\equiv 0\!\iff\! b\equiv 0$
-So $\rm\, \{n,a\},\ \{n,b\}\ $ have the same common divisors $\rm\,d,\,$ so the same greatest common divisor.
-Remark $\ $ Note how it simplifies after eliminating the less intuitive divisibility relations in favor of familiar equations (here congruences).
$\ $ After converting it to manipulation of equations it is completely trivial, viz. if $\rm\ a \equiv b\ $ then $\rm\ n,a\equiv 0 \iff n,b \equiv 0.\:$ That illustrates the tremendous power of congruences - they allow us to transform diverse problems to equational form - allowing us to reuse our well-honed intuition on manipulating (integer) equations. To succeed in number theory and algebra it is essential to master such techniques. They lead to powerful methods of "modular reduction" - an algebraic way of "dividing and conquering" - reducing problems to simpler problems that (hopefully) combine to yield the complete solution.<|endoftext|> -TITLE: Well known proofs that $\mathrm{PSL}_2(\mathbf Z) \cong C_2 * C_3$ -QUESTION [5 upvotes]: I've seen one proof of $\mathrm{psl}_2(\mathbf Z) \cong C_2 * C_3$ based on the Ping-pong lemma -But what I'm looking for is how this result was historically obtained first. -I've checked Coxeter & Moser's "Generators and relations for discrete groups" and they (§7.2) do things that I find weird, they let the matrices act on circles in the half-plane $y>0$ which in turn they regard as lines of a hyperbolic plane, it then follows this group can be interpreted as triangle group acting on a hyperbolic triangle. -Is this indeed the 'classic' way to arrive at this result? And if so, I'd be happy with a bit of an explanation of what's going on, or a reference to a text where this is explained a bit more in detail, because I'm really not that comfortable with hyperbolic triangles. - -REPLY [2 votes]: I don't know about the history, but if you want to understand 'what's going on' the keyword is 'orbifold'. I recommend Peter Scott's article 'The geometries of 3-manifolds', available as a pdf on his website, for an introduction to 2-dimensional orbifolds.<|endoftext|> -TITLE: Given an exponential generating function, is it possible to isolate only the even terms? -QUESTION [11 upvotes]: Suppose you have an exponential generating function -$$ -F(x)=F_0+F_1x+F_2\frac{x^2}{2!}+\cdots+F_n\frac{x^n}{n!}+\cdots -$$ -and you want to get only the even terms -$$ -F_e(x)=F_0+F_2\frac{x^2}{2!}+F_4\frac{x^4}{4!}+\cdots -$$ -Is it possible to write $F_e(x)$ in terms of $F(x)$? I noticed that taking the derivative of $F(x)$ seems to shift the coefficients down a term, but couldn't get much further than that. -I'm also interested in the odd case, but I guess that follows easily since $F(x)-F_e(x)=F_o(x)$. - -REPLY [17 votes]: They're called series bisections, e.g. bisecting into even and odd parts the power series for $\:\rm e^{\,{\it i}\,x} \:,\;$ -$$\begin{align} -\rm f(x) \ &= \ \rm\frac{f(x)+f(-x)}{2} \;+\; \frac{f(x)-f(-x)}{2} \\[.2em] -\Rightarrow\quad \rm e^{\,{\it i}\,x} \ &= \ \rm\cos(x) \ +\ {\it i} \ \sin(x) \end{align}\qquad$$ -Similarly one can perform multisections into $\rm\:n\:$ parts using $\rm\:n\:$'th roots of unity - see my post here for some examples and see Riordan's classic -textbook Combinatorial Identities for many applications. 
Briefly, -with $\rm\:\zeta\ $ a primitive $\rm\:n$'th root of unity, the $\rm\:m$'th $\rm\:n$-section selects the linear progression of $\rm\: m+k\:n\:$ indexed terms from a series $\rm\ f(x)\ =\ a_0 + a_1\ x + a_2\ x^2 +\:\cdots\ $ as follows -$\rm\quad\quad\quad\quad a_m\ x^m\ +\ a_{m+n}\ x^{m+n}\ +\ a_{m+2\:n}\ x^{m+2\:n}\ +\:\cdots $ -$\rm\quad\quad =\ \frac{1}{n} \big(f(x)\ +\ f(x\zeta)\ \zeta^{-m}\ +\ f(x\zeta^{\:2})\ \zeta^{-2m}\ +\:\cdots\: +\ f(x\zeta^{\ n-1})\ \zeta^{\ (1-n)\:m}\big)$ -Exercise $\;$ Use multisections to give elegant proofs of the following -$\quad\quad\rm\displaystyle sin(x)/e^{x} \quad\:$ has every $\rm 4\ k\;$'th term zero in its power series -$\quad\quad\rm\displaystyle cos(x)/e^{x} \quad$ has every $\rm 4k\!+\!2\;$'th term zero in its power series -See the posts in this thread for various solutions and more on multisections. When you later study representation theory of groups you will learn that this is a special case of much more general results, with relations to Fourier and other transforms. It's also closely related to various Galois-theoretic results on modules, e.g. see my remark about Hilbert's Theorem 90 in the linked thread. - -REPLY [9 votes]: The general technique of isolating the terms which are congruent to $a \bmod n$ is known in the US high school competition circuit as the "roots of unity filter" and known to most others as the discrete Fourier transform. -The abstract idea is that the cyclic group $\mathbb{Z}/n\mathbb{Z}$ acts on the space of, say, formal power series in one variable $z$ over $\mathbb{C}$ via $z \mapsto e^{\frac{2 \pi i}{n} } z$. This group representation has $n$ isotypic components (subspaces where the group acts the same way) corresponding to the different residues $\bmod n$, and what we are interested in computing is the projections to these components, which can be done quite explicitly by averaging appropriately over the group action. -For example, to isolate the terms divisible by $3$ you use $\frac{F(z) + F(\omega z) + F(\omega^2 z)}{3}$ where $\omega = e^{ \frac{2 \pi i}{3} }$. The fact that all the terms cancel out nicely when you do this is a special case of the orthogonality relations in representation theory. - -REPLY [5 votes]: $F_e(x)=\frac{1}{2}(F(x)+F(-x))$. -$F_o(x)=\frac{1}{2}(F(x)-F(-x))$. -See also How do I divide a function into even and odd sections?<|endoftext|> -TITLE: Question on de la Vallee Poussin's simplified proof of Dirichlet's theorem on primes in arithmetic progressions -QUESTION [7 upvotes]: I've been trying to understand de la Vallee Poussin's "Demonstration Simplifiee du Theorem de Dirichlet sur la Progression Arithmetique" and I've got stuck at the following step on pg 18 where Poussin takes the logarithmic derivative of: -$$\Sigma_{n}\frac{\chi(n)}{n^{s}} = \prod_{q}(1-\frac{\chi(q)}{q^{s}})^{-1}$$ -in order to obtain: -$$-D\log\sum_{n}\frac{\chi(n)}{n^{s}} = \Sigma_{q}\frac{\chi(q)\log(q)}{q^{s}-\chi(q)}$$ -Specifically, when I try to work through the calculation, I don't see where the -1 on the left hand side of the equation comes from. When I tried to take logs and then differentiate the first equation, I ended up with the following: -$$D\log\sum_{n}\frac{\chi(n)}{n^{s}} = \Sigma_{q}\frac{\chi(q)\log(q)}{q^{s}-\chi(q)}$$ -Could someone help me understand where I'm going wrong please? - -REPLY [3 votes]: In what follows, D denotes differentiation with respect to $s$. 
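-(Incidentally, the single-factor identity that drives the computation below can be checked with sympy; this quick sketch is an editorial illustration, not part of the original answer, with $\chi(q)$ replaced by a free symbol $c$:)
-
-from sympy import symbols, log, diff, simplify
-
-s, q, c = symbols('s q c', positive=True)
-lhs = diff(-log(1 - c*q**(-s)), s)   # D log (1 - c q^{-s})^{-1}
-rhs = -c*log(q)/(q**s - c)           # the claimed closed form
-print(simplify(lhs - rhs))           # should print 0
-vals = {s: 2.5, q: 3, c: 0.7}
-print(lhs.subs(vals).evalf(), rhs.subs(vals).evalf())  # numerically equal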
-We have that
-$\Sigma_{n}\frac{\chi(n)}{n^{s}} = \prod_{q}(1-\frac{\chi(q)}{q^{s}})^{-1}$
-We take the log of each side and then proceed to take the derivative. Thus we have
-$D\log\Sigma_{n}\frac{\chi(n)}{n^{s}} = D\log \prod_{q}(1-\frac{\chi(q)}{q^{s}})^{-1}$
-Now the right hand side of the equation is equal to:
-$D\Sigma_{q}\log(1-\frac{\chi(q)}{q^{s}})^{-1} = -D\Sigma_{q}\log(1-\frac{\chi(q)}{q^{s}})$ using properties of log.
-Now we differentiate term by term. For an arbitrary term in the sum, we have:
-$D\log(1-\frac{\chi(q)}{q^{s}})=-(1-\frac{\chi(q)}{q^{s}})^{-1}\chi(q)Dq^{-s}$ by the chain rule.
-But we know that
-$Dq^{-s}=-\log(q)q^{-s}$
-Thus we have
-$D\log(1-\frac{\chi(q)}{q^{s}})=(1-\frac{\chi(q)}{q^{s}})^{-1}\chi(q)\log(q)q^{-s}$ because the negative signs cancel.
-But then
-$D\Sigma_{q}\log(1-\frac{\chi(q)}{q^{s}}) = \Sigma_{q}(1-\frac{\chi(q)}{q^{s}})^{-1}\chi(q)\log(q)q^{-s}$
-And after re-arranging, this gives us $-D\Sigma_{q}\log(1-\frac{\chi(q)}{q^{s}})=-\Sigma_{q}\frac{\chi(q)\log(q)}{q^{s}-\chi(q)}$. Thus we have
-$D\log\Sigma_{n}\frac{\chi(n)}{n^{s}}= -\Sigma_{q}\frac{\chi(q)\log(q)}{q^{s}-\chi(q)}$ and so swapping the negative sign from the right hand side to the left hand side gives us the result.
-(P.S. Please forgive me if there are typos in the above - I'm not very good with latex code!)<|endoftext|>
-TITLE: How to sketch the curve of parametric equations
-QUESTION [5 upvotes]: I have the following parametric equations:
-$$x = \sin\theta,\qquad y = \cos\theta,\qquad 0 \leq \theta \leq \pi.$$
-The corresponding Cartesian equation is $x^2 + y^2 = \sin^2\theta + \cos^2\theta = 1$.
-So it would be $x^2 + y^2 = 1$.
-How would I develop the curve for the equation?
-Should I plot points? I know that this is a circle of radius $1$ centered at the origin, but with the limits on $\theta$, I'm not sure how to determine the final graph.
-
-REPLY [12 votes]: So now you know that every point in the parametric curve lies on the unit circle, since every point satisfies the equation $x^2+y^2=1$.
-Some questions to ask yourself:
-
-Where do you start? That is, when $\theta=0$, where on the unit circle are you?
-As $\theta$ increases from $0$ towards $\pi$, what happens to the point $(x,y) = (\sin\theta,\cos\theta)$?
-Once $\theta$ reaches $\pi$, where are you?
-
-Now you know which parts of the unit circle are given by the parametric curve.
-Added. Like many students early on, you seem to want to plot a few points and then interpolate; this is not a good idea, because it relies on you just happening to hit the correct points to get an accurate picture of what is going on. To give you an example, if you were trying to plot a graph for the function $y = \sin(\pi x)$, and tried a few points, say, $x=0$, $x=1$, $x=2$, $x=3$, etc., you might think that your function is the constant function $0$, because your selection of points happens to miss all the important stuff that is going on with $y$.
-You don't want to do that. Instead, you want to think about what those functions are doing.
-One of the ways in which I find most fruitful to think about parametric equations is to think of the parameter as giving time, and the equations as describing the movement of a point; imagine an animation with a glowing point moving along the plane, and leaving a "trail" of light behind it. That trail is the parametric curve, the point is the position at the "present $t$". You want to think about what that point is doing as your parameter ranges from its initial value to its final value (that is, as the animation goes from beginning to end).
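-(If you want to see the "movie" concretely, here is a small Python sketch - an editorial illustration assuming numpy and matplotlib are available, not part of the original answer - that samples $\theta$ and draws the traced arc:)
-
-import numpy as np
-import matplotlib.pyplot as plt
-
-theta = np.linspace(0, np.pi, 200)
-x, y = np.sin(theta), np.cos(theta)
-
-plt.plot(x, y)                # the trail left by the glowing point
-plt.plot(x[0], y[0], 'go')    # start, at theta = 0
-plt.plot(x[-1], y[-1], 'ro')  # end, at theta = pi
-plt.gca().set_aspect('equal')
-plt.show()
-
-Plotting progressively longer prefixes of these samples (or using matplotlib's animation module) reproduces the "movie" frame by frame.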
-So, start with $\theta=0$, the first frame of your animation. Your glowing point will be at $x(0) = \sin(0) = 0$, and $y(0)=\cos(0) = 1$. So you start at the point $(0,1)$.
-Now, press the PLAY button. What happens as $\theta$ starts advancing from $0$ towards $\pi$? The $x$ coordinate will follow the graph of $y(\theta)=\sin(\theta)$, so first it will rise from $0$ to $1$ (at $\theta=\frac{\pi}{2}$), and then drop again from $1$ to $0$ (at $\theta=\pi$). It will do so without jumps or breaks. So if you were looking only at the "shadow" of our glowing point on the $x$-axis, it starts at $x=0$, then moves towards the right in a smooth way (no jerks, no jumps, no skips) until it hits $1$ at the midpoint of the movie, and then moves back towards $0$ until it returns to $0$ at the end of the movie.
-What about $y$? It starts at $1$; it will behave like the graph of $\cos\theta$. As you press PLAY, it will start at $1$, and then drop towards $0$, again in a smooth way, without jumps, jerks, or skips, until it hits $0$ halfway through the movie (at $\theta=\frac{\pi}{2}$). Then, it will keep going in the same direction, from $0$ down to $-1$, and reach $-1$ at the end of the "movie" (when $\theta=\pi$). So if you look at the "shadow" of the glowing point on the $y$-axis, it will start at $1$, then drop down to $0$, and keep going down to $-1$, all in a generally smooth manner, with no jumps, jerks, hesitations, backtracks, etc.
-Now put those two motions together: you start at $(0,1)$, the top of the unit circle. Then as $\theta$ moves from $0$ to $\pi/2$, the glowing point starts moving both right and down, always along the unit circle, until $\theta=\pi/2$, when it is at the point $(1,0)$. Press PAUSE on the video, and take stock. Where are we? What part of the unit circle have we traced? How many times? Any backtrack? Any long period of time spent without moving? Was the motion generally "fluid"?
-Okay, ready to continue? Press PLAY again, and our $\theta$ starts increasing from $\pi/2$ towards $\pi$. That glowing point representing $(x(\theta),y(\theta))$ moves, but now down and to the left, always along the unit circle, until finally when $\theta=\pi$, the "end of the movie", it reaches its final destination at the point $(0,-1)$.
-All along, the movement was without jumps, hesitations, or backtracks, because the functions $x=\sin\theta$ and $y=\cos\theta$ have those motions: no jumps, no skips, no jerks, no hesitations, no backtracks, just a smooth motion (imagine your hand drawing their graphs). That glowing point has now traced part of the unit circle, exactly once, without backtracking. Which part?<|endoftext|>
-TITLE: Is $SO_2$ an amenable group?
-QUESTION [9 upvotes]: In S. Wagon's "The Banach-Tarski Paradox," amenable groups are defined on p. 12 as follows:
-
-[amenable] groups bear a left-invariant, finitely additive measure of total measure one that is defined on all subsets.
-
-He defines $SO_2$ to be the group of rotations of the unit circle, which he has used to show that $S_1$, the unit circle, is $SO_2$-paradoxical (as an analogue to the usual non-measurable set defined in the interval $[0,1)$). I am taking measure theory this term, but am not sure how to assign a measure to subsets of $SO_2$. Thus, I am not really sure where to start in showing whether or not $SO_2$ is an amenable group.
-When I look at the Wikipedia entry about amenable groups, I'm unable to make much more sense of the definition in the context of the material.
-Is $SO_2$, the group of rotations of the unit circle, an amenable group? If not, why (so that I may build an intuition for these objects)?
-
-REPLY [6 votes]: An amusing point is that the proof that there is a measure on the Abelian group of rotations is generally done using the Hahn-Banach Theorem, and that requires the Axiom of Choice. So this means that the proof that a Banach-Tarski Paradox does NOT exist in the plane requires the same Axiom of Choice that yields the paradox in 3-space.
-But in fact one can eliminate (almost all of) AC from the proof that there is no BTP in the Euclidean plane.
-Stan Wagon<|endoftext|>
-TITLE: Permutation Inversion Questions (3)
-QUESTION [5 upvotes]: Working my way through a combinatorics text and I'm hung up on a couple of questions:
-1.) Let $p=p_1 p_2\cdots p_n$ be a permutation. An inversion of $p$ is a pair of entries $(p_i,p_j)$ so that $i<j$ but $p_i>p_j$. Let us call a permutation even (resp. odd) if it has an even (resp. odd) number of inversions. Prove that the permutation consisting of the one cycle $(a_1a_2\cdots a_k)$ is even if $k$ is odd, and is odd if $k$ is even.
-2.) Let $I(n,k)$ be the number of $n$-permutations that have $k$ inversions. Prove that $I(n,k)=I(n,\binom{n}{2}-k)$.
-3.) Find an explicit formula for $I(n,3)$.
-These are from a combinatorics text, but I vaguely remember this topic popping up in undergrad abstract algebra and possibly in an algorithm design course as well.
-
-REPLY [6 votes]: if the permutation consists of one cycle (this doesn't seem to have anything to do with inversions), how many repetitions of the cycle will it take to return to identity if $k$ is even? if $k$ is odd?
-for $I(n,k)$, how many total possible inversions are there? How many inversions are there of the reverse of a given permutation?
-for $I(n,3)$, what is $I(n,1)$? $I(n,2)$? Look at some values, guess a formula, and use induction to quickly prove. Or think combinatorially: you have 3 inversions; if they don't interact then it's straightforward; if they do interact then a little more thought on counting them.<|endoftext|>
-TITLE: What is the ''right" norm for the Banach space tensor product in this situation?
-QUESTION [9 upvotes]: Let $X,Y$ denote (real) vector spaces. The vector space of $n$-linear maps $X^n \to Y$ will be denoted by $L^n(X,Y)$. Unless I'm much mistaken
-$$L(X,L(X,Y)) \ \ \ L^2(X,Y) \ \ \ L(X \otimes X,Y)$$
-are pairwise isomorphic. One could even view the three as pairwise naturally isomorphic functors from vector spaces to vector spaces (contravariant in $X$ and covariant in $Y$).
-But what if we want to work with Banach spaces and bounded linear maps instead? Operator norms allow us to make $B(X,B(X,Y))$ into a Banach space without any headache. A 2-linear map $f:X^2 \to Y$ is continuous in the product topology if and only if the norm $\|f\| = \sup_{\|x\| = \|x'\| = 1} \|f(x,x')\|$ is finite, and this makes $B^2(X,Y)$, the set of continuous $2$-linear maps, into a Banach space which is isometrically isomorphic to $B(X,B(X,Y))$. But what about $B(X \otimes X, Y)$? According to Wikipedia there are various ways of putting a norm on $X \otimes X$. Which is the right one if we want the same sort of equivalence in this setting?
-
-REPLY [12 votes]: You're looking for the projective tensor norm.
Explicitly, for $\omega$ in the algebraic tensor product $X \otimes_{\mathbb{R}} Y$ it is given by
-\[
-\Vert \omega\Vert_{\pi} = \inf\left\{\sum \|x_{i}\|\,\|y_{i}\|\,:\,\omega = \sum_{i=1}^{n} x_{i} \otimes y_{i}\right\}
-\]
-and the associated tensor product $X \hat{\otimes}_{\pi} Y$ is the completion of the algebraic tensor product with respect to that norm (the above expression obviously defines a semi-norm; in order to show that it is a norm, you have to think a little bit).
-Since $X \otimes_{\mathbb{R}} Y$ sits inside $X \hat{\otimes}_{\pi} Y$ you get a bilinear map $m:X \times Y \to X \hat{\otimes}_{\pi} Y$ by sending $(x,y)$ to $x \otimes y$ and by definition $\Vert x \otimes y\Vert_{\pi} \leq \|x\|\,\|y\|$ so that $\|m\| \leq 1$.
-The basic thing you have to check is that the above inequality is in fact an equality $\Vert x \otimes y\Vert_{\pi} = \|x\|\,\|y\|$ and using this it is easy to prove that a continuous bilinear map $\Phi: X \times Y \to Z$ yields a unique linear map $\varphi: X \hat{\otimes}_{\pi} Y \to Z$ of the same norm (by the property of the algebraic tensor product $\Phi$ gives a linear map on $X \otimes_{\mathbb{R}} Y$ and if you've proven that this linear map is continuous, it is continuous on a dense subspace, hence extends uniquely to $X \hat{\otimes}_{\pi} Y$).
-In other words the map $m: X \times Y \to X \hat{\otimes}_{\pi} Y$ yields an isometric isomorphism
-$$\displaystyle
-B(X \hat{\otimes}_{\pi} Y, Z) \to B^2(X,Y;Z)
-$$
-by pre-composition $\varphi \mapsto \varphi \circ m = \Phi$. As you already observed $B^2(X,Y;Z) \cong B(X,B(Y,Z))$ which yields the isometric isomorphisms
-\[
-B(X \hat{\otimes}_{\pi} Y, Z) \cong B(X,B(Y,Z)) \cong B^2(X,Y;Z)
-\]
-as in the case of vector spaces. In fact, you can convince yourself that you have to define the projective tensor norm as above if you want the first isomorphism to hold, already for $Z = \mathbb{R}$.
-You can find all this and much more in great detail in R.A. Ryan's book Introduction to tensor products of Banach spaces, Springer 2001.<|endoftext|>
-TITLE: The integral $\int \cos^3(2x) \, \mathrm dx$
-QUESTION [5 upvotes]: Given the following problem:
-
-integrate $\cos^3(2x)$
-
-I was given the solution
-
-\begin{align*}
- \int\cos^3(2x)\, \mathrm dx &= \int\cos^2(2x) \cos 2x\, \mathrm dx = \int(1-\sin^2 2x)\cos 2x\, \mathrm dx\\
- &= \frac12\int (1-u^2)\, \mathrm du = \cdots
- \end{align*}
-
-but the problem is I am stuck on the $\frac12$. Where did it come from?
-
-REPLY [3 votes]: So we are trying to integrate the following expression $~~~\rightarrow ~~~ \displaystyle\int \cos^{3} (2x)\ dx$.
-To do this, we will need to make an appropriate substitution inside of the integrand.
Doing this leads us to the following:
-$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\displaystyle\int \cos^{3} (2x)\ dx$
-Let: $~u =2x$
-$du=2\ dx$
-$dx=\dfrac{1}{2}\ du$
-$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\dfrac{1}{2}\displaystyle\int \cos^{3} (u)\ du$
-Using the reduction formula, $$\int \cos^{m}(u) du = \dfrac{1}{m} \cos^{m-1}(u) \sin (u) + \dfrac{m-1}{m} \int \cos^{m-2}(u)\ du,~ \text{where }~ m = 3,~\text{gives}:$$
-$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\Rightarrow~\dfrac{1}{2}\Bigg[\dfrac{1}{3} \cos^{2}(u) \sin (u) + \dfrac{2}{3} \displaystyle\int \cos (u)\ du \Bigg]$
-$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=~\dfrac{1}{6} \cos^{2} (2x) \sin (2x) + \dfrac{1}{3} \sin (2x) + C~~~~~~~~~~~~~~~~~~~~~~~~~~~~\blacksquare$
-Which can be simplified further to this:
-$$\dfrac{1}{24}\Bigg(9 \sin (2x) + \sin (6x)\Bigg) + C.$$
-Okay, I hope that this has helped out and now you see where the $\dfrac{1}{2}$ came from. Let me know if there is any step that did not make sense.
-Thanks.
-Good Luck.<|endoftext|>
-TITLE: What is a good text in intermediate set theory?
-QUESTION [20 upvotes]: I've been working my way through Enderton's Elements of Set Theory for a while, and I feel I have a decent grasp on some of the basics of elementary set theory. My question is, where should I look to next in set theory? What is a good book for set theory that may be considered 'the next step up'?
-If it helps any, my background knowledge consists of some basic abstract algebra, general topology, linear algebra, etc., but I'm not sure how often they are used in real set theory. Thanks.
-
-REPLY [2 votes]: I've read only some chapters from these books, hopefully enough to be able to give some kind of opinion on them. I think they could be good texts for looking into more advanced set theory.
-
-Ciesielski: Set theory for the working mathematician See also: http://at.yorku.ca/t/o/p/c/62.htm
-http://www.math.wvu.edu/~kcies/STbook.html
-(This book was recommended by Theo, too.)
-Two-volume set Just, Weese: Discovering modern set theory
-See also: http://www.ohio.edu/people/just/book1.html
-http://logicmatters.blogspot.com/2009/09/praise-for-just-and-weese.html<|endoftext|>
-TITLE: Counting paths in a square matrix
-QUESTION [5 upvotes]: Question:
-
-Consider a square matrix of order $m$.
- At each step you can move one step to
- the right or one step to the top. How
- many possible ways are there to reach $(m,m)$
- from $(0,0)$?
-
-I think it is just counting the Central binomial coefficients.
-Am I right? If not, what would be the correct answer and why?
-
-REPLY [10 votes]: You are correct. The reason is that to get to $(m,m)$ you need to take a total of $m+m$ steps. However, you need to choose $m$ of those steps to be steps up, so the total number of paths is $\binom{m+m}{m}=\binom{2m}{m}$, since the central binomial coefficient picks which of the $2m$ steps will be up.
-To add to this, in general, the number of paths from $(0,0)$ to $(m,n)$ is then $\binom{m+n}{m}=\binom{m+n}{n}$ for the same reasoning.<|endoftext|>
-TITLE: Complement of maximal multiplicative set is a prime ideal
-QUESTION [22 upvotes]: Let $R$ be a commutative ring with identity. I've been trying to prove the following:
-
-If $S \subset R$ is a maximal multiplicative set, then $R \setminus S$ is a prime ideal of $R$.
-
-I have spent some time on it, but nothing is coming to my mind. Any idea/solution will be appreciated.
- -REPLY [13 votes]: I assume that all multiplicative subsets discussed here are disjoint from $\{0\}$. -Let me write $S$ for the maximal multiplicative subset of $R$. -Step 1: I claim that $S^{-1} R$ is a local ring. -Indeed, if not, there would be at least two maximal ideals in the localization, i.e., -two prime ideals $\mathfrak{p}_1 \neq \mathfrak{p}_2$, each maximal with respect to -being disjoint from $S$. Thus $S$ is contained in both $R \setminus \mathfrak{p}_1$ and $R \setminus \mathfrak{p}_2$, but these are two different sets so at least one of the containments must be proper, contradicting the maximality of $S$. -Step 2: Therefore there is a unique maximal ideal in the localization, which corresponds to a prime ideal $\mathfrak{p}$ of $R$ which is disjoint from $S$. Arguing as above, maximality of $S$ implies $S = R \setminus \mathfrak{p}$. -Remark 1: When $R$ is a domain, the unique maximal multiplicative subset is obviously $R \setminus \{0\}$. In this case it is tempting to construe "maximal multiplicative subset" to mean "multiplicative subset maximal with respect to being properly contained in $R \setminus \{0\}$." -Remark 2: Conversely, the primes $\mathfrak{p}$ such that $R \setminus \mathfrak{p}$ is maximal are precisely the primes which are minimal with respect to containing $\{0\}$. (When $R$ is a domain and the question is reconstrued as above, we want $\mathfrak{p}$ to be minimal with respect to properly containing $\{0\}$.) -Mariano's answer makes this clear as well. - -REPLY [8 votes]: We must assume $\rm 0\not\in S $ since $\rm\: 0\in S\: \Rightarrow\: 0 \not\in \overline S := R\setminus S,\: $ so $\rm\:\overline S\:$ is not an ideal. A simple Zorn lemma argument (see below) shows that, since $\rm\:S\:$ is multiplicatively closed, the ideal $\rm\{0\}\subset \overline S\:$ can be enlarged to an ideal $\rm\:P\:$ maximal w.r.t. to exclusion of $\rm\:S,\:$ and such an ideal must be prime. Therefore $\rm\: P = \overline S,\:$ else we could enlarge $\rm\:S\:$ to the monoid $\rm\:\overline P,\:$ contra maximality of $\rm\:S.\quad$ QED -Note $\: $ The prime $\rm\:P\,$ above may be alternatively constructed as the contraction of a maximal ideal $\rm\:Q\:$ of the localization $\rm\: R_S = S^{-1} R.\: $ $\rm\:Q\:$ exists since $\rm\ 0\not\in S\ \Rightarrow\ R_S \ne \{0\}.\:$ Generally, there is a bijective order-preserving correspondence between all prime-ideals in $\rm\:R_S\:$ and all prime ideals in $\rm R$ disjoint from $\rm\:S,\:$ see Theorem 34 in Kaplansky's Commutative Rings. His Theorem $1$, p. $1$ is the above-invoked form of this result (employing no localization theory). I've appended it below. - -Let $\,\overline I\,$ be the set-theoretic complement of an ideal $\,I.\,$ Then the definition of a prime ideal can be recast as follows: $\,\ I\,$ is prime $\iff$ $\, \overline I\,$ is multiplicatively closed (and nonempty). $ $ Now it goes without saying that $\,I\,$ is an ideal maximal with respect to the exclusion of $\,S = \overline I.\,$ Krull discovered a very useful converse. -Theorem $\bf 1\ \ $ Suppose that $\,S\:$ is a nonempty multiplicatively closed set in a ring $\,R,\,$ and suppose that $\,I\:$ is an ideal in $\,R\,$ maximal with respect to the exclusion of $\,S.\,$ Then $\,I\:$ is prime. -Proof $\ \ $ Given $\,ab\in I\,$ we must show that $\,a\,$ or $\,b\,$ lies in $\,I.\:$ Suppose the contrary. 
Then the ideal $\,(I,a)\,$ generated by $\,I\,$ and $\,a\,$ is strictly larger than $\,I\,$ and therefore intersects $\,S.\,$ Thus there exists $\,s\in S\,$ of the form $\,s = i + r a\,\ (i\in I,\, r\in R).\,$ Similarly $\,\hat s = \hat i + \hat r b\in S\,\ (\hat i\in I,\, \hat r\in R).\,$ But then -$$ i,\hat i,ab\in I\ \Rightarrow\ s \hat s = (i+ ra)(\hat i + \hat r b)\in I\cap S,\ \ {\rm contradiction}\quad {\bf QED}$$ -We note that, given any ideal $\,J\,$ disjoint from a nonempty multiplicatively closed set $\,S,\,$ we can by Zorn's lemma expand $\,J\,$ to an ideal $\,I\,$ maximal with respect to disjointness from $\rm\,S.\,$ Thus we have a method of constructing prime ideals. - -REPLY [7 votes]: Another way to proceed is to show - -First, that every multiplicative subset $S$ of a ring is contained in a saturated multiplicative subset $S'$, that is, one such that $$ab\in S'\implies a\in S'\wedge b\in S'.$$ In fact, there is a smallest saturated multiplicative subset containing $S$, called the saturation. -Second, that a saturated multiplicative subset $S$ of a ring (which does not contain $0$) is the intersection of the complements of the prime ideals which are disjoint from it. (The hard part here is to show that such primes actually exist: you can construct them as the maximal elements in the family of ideals disjoint from $S$) - -Once you have these two facts, your claim follows easily.<|endoftext|> -TITLE: Why every polynomial over the algebraic numbers $F$ splits over $F$? -QUESTION [5 upvotes]: I read that if $F$ is the field of algebraic numbers over $\mathbb{Q}$, then every polynomial in $F[x]$ splits over $F$. That's awesome! Nevertheless, I don't fully understand why it is true. Can you throw some ideas about why this is true? - -REPLY [8 votes]: The fact you mention is that the field $\overline{\mathbb{Q}}$ of algebraic numbers is algebraically closed. This is true because it is the algebraic closure of the field $\mathbb{Q}$ inside the larger algebraically closed field $\mathbb{C}$, i.e., the set of all complex numbers which satisfy some nonzero polynomial equation with $\mathbb{Q}$-coefficients. -Notwithstanding the terminology, that the algebraic closure of a field in an algebraically closed field is algebraically closed requires (nontrivial) proof! See for instance $\S 4$ of my notes on field theory, especially Proposition 17 and Corollary 18. - -REPLY [3 votes]: Consider some polynomial $$x^n = \sum_{i=0}^{n-1} c_i x^i,$$ where the $c_i$ are algebraic numbers. Thus for each $i$ we have a similar identity $$c_i^{n_i} = \sum_{j=0}^{n_i-1} d_{i,j} c_i^j,$$ where this time the $d_{i,j}$ are rationals. -Suppose that $\alpha$ is a root of the original polynomial. By using the above identities, every power of $\alpha$ can be expressed as a linear combination with rational coefficients of terms of the form $$\alpha^m c_0^{m_0} \cdots c_{n-1}^{m_{n-1}},$$ where $$0 \leq m < n \text{ and } 0 \leq m_i < n_i.$$ Putting all these $N = nn_0\cdots n_{n-1}$ elements into a vector $v$, we get that there are rational vectors $u_k$ such that $$\alpha^k = \langle u_k,v \rangle.$$ Among the first $N+1$ vectors $u_k$ there must be some non-trivial rational linear combination that vanishes: $$\sum_{k=0}^N t_k u_k = 0.$$ Therefore $$\sum_{k=0}^N t_k \alpha^k = 0,$$ and so $\alpha$ is also algebraic. 
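-(For a concrete instance of what the proof guarantees, one can let a computer algebra system exhibit the rational-coefficient polynomial annihilating such an $\alpha$. This small sympy sketch - the package and the specific example polynomial are editorial choices, not part of the original answer:)
-
-from sympy import symbols, sqrt, solve, minimal_polynomial
-
-x = symbols('x')
-p = x**2 - sqrt(2)*x - sqrt(3)   # the coefficients are algebraic numbers
-alpha = solve(p, x)[0]           # a root of p
-# minimal_polynomial returns a polynomial with *rational* coefficients
-# annihilating alpha, confirming that alpha is itself algebraic.
-print(minimal_polynomial(alpha, x))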
-This proof is taken from these lecture notes, but it's pretty standard.<|endoftext|> -TITLE: Introduction to Bourbaki structures, and their relation to category theory -QUESTION [7 upvotes]: I just opened vol.1 of the Bourbaki treatise to take a look at how they define mathematical structure. I was amazed by its sheer complexity. Can you recommend an introductory text that wouldn't require as much effort to understand? -Also, a few related soft questions: -1) Is category theory more general than this theory? -2) In Bourbaki approach, theory corresponds to category, and structure corresponds to object, am I right? -3) I recently started reading MacLane's "Categories for a working mathematician", and category theory seems much simpler to understand. Is it an illusion due to less formal exposition, or is it really the case? If it's the latter, was this the reason why everyone adopted category theory in place of Bourbaki approach? - -REPLY [8 votes]: When Bourbaki began, in the 1930s, there was no "category theory", for one thing. One of the issues the group was addressing was the lack of "modern" texts (not only in French), and various problems of rigor in some existing sources. With hindsight, their notion of "structure" was not a big success, and they themselves did not really use it in later volumes. -It wasn't a completely frivolous idea, insofar as one can observe the dynamics of interactions of "different" fundamental notions ("algebraic" and "topological", etc.) However, with hindsight, the Bourbaki group was naive about foundations and philosophy-of-mathematics, no matter their great strengths in mathematics per se. Even their attitude about analysis seems skewed. E.g., where's the PDE volume? :) -Leo Corry's book "Modern algebra and the rise of mathematical structures" includes a discussion of Bourbaki's "structures", and makes comparisons to both category theory and some other early competing notions. -But, among other conclusions, one can ignore Bourbaki's notion of "structure" in terms of the practice of mathematics, or even for reading Bourbaki (!). -Edit: also, I think we should distinguish "foundational" from "organizational" attempts/conceptions, although an approach can include both. It does not seem that set theory ever tried to provide organizational principles for mathematics, only foundational (and interesting questions of its own). In contrast, category theory has always been far more organizational than foundational (despite Lawvere's work, and many others more recently). To my perception, Bourbaki's "structures" had more an organizational than foundational intention, although, arguably, any "economy" of concepts should make foundational burdens lighter.<|endoftext|> -TITLE: How many circles to cover 2 times bigger circle? -QUESTION [15 upvotes]: How many circles (radius $r$) are needed to cover circle whose radius is $2$ times bigger (radius $2r$). -I think we need to use area which is $S=\pi R^2$ but I don't really know what to do. - -REPLY [20 votes]: This is a disk covering problem. As you have stated it, it is not quite as difficult as some others. -The first task is to find the minimum number of small circles which cover the circumference of the bigger circle rather than the whole area. If this is $m$ then it will be impossible to have a regular $m$-gon with edges of length $2r$ fitting strictly inside the circle of radius $2r$. This implies $m \ge 6$. 
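-(The borderline case $m=6$ can also be checked numerically. This quick Python sketch of the hexagon arrangement described next is an editorial illustration, with $r=1$ and $R=2r=2$:)
-
-import numpy as np
-
-r, R = 1.0, 2.0
-# Small-circle centres at the midpoints of the edges of a regular
-# hexagon inscribed in the big circle (each edge is a diameter, length 2r).
-centers = [np.sqrt(3) * np.exp(1j * (np.pi / 6 + k * np.pi / 3)) for k in range(6)]
-
-boundary = R * np.exp(1j * np.linspace(0, 2 * np.pi, 10000))
-covered = [min(abs(p - c) for c in centers) <= r + 1e-9 for p in boundary]
-print(all(covered))                      # True: the circumference is covered
-print(min(abs(c) for c in centers) > r)  # True: the centre of the big circle is not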
-It turns out to be just possible to cover the circumference of the bigger circle with 6 smaller circles, and ignoring symmetries there is only one way to do it (you get a regular hexagon whose vertices lie on the bigger circle and whose edges are diameters of the smaller circles). But this leaves an uncovered area in the middle, which needs at least one (and in fact exactly one) more. -So the answer is seven smaller circles.<|endoftext|> -TITLE: Some questions regarding (relative) constructibility and the condensation lemma -QUESTION [7 upvotes]: I've got a question regarding the constructible universe and I'm a bit confused about the Condensation Lemma for the universe constructible from some set $A$. Help will be greatly appreciated: -Let's assume that we have $(M,E)$ a model of ZF (or maybe of ZFC) but not necessarily standard or transitive. Since it's a model of ZF we can create its constructible universe $L^{(M,E)}$. What can we say about this model? I think it is a model of ZF both inside $(M,E)$ and outside of it. Is this correct? -What about the continuum hypothesis and the axiom of choice? To show that the actual $L$ satisfies GCH we use the Condensation Lemma which is based on the fact that $V=L$ is satisfied in every $L_\gamma$. Furthermore to show the consistency of the axiom of constructibility and of its consequences we need to show $(V=L)^L$ which is done through absoluteness. Can we do something similar in the case of the arbitrary model? For example can we say that $L^{(M,E)}$ is an actual model of ZFC+GCH? Or it is so only inside $(M,E)$? - -Regarding relative constructibility: I can see how $L[A]$ is a model of ZFC and I can see how for an inner model $M$ of ZF that has $A\cap M\in M$ we have that $L[A]\subset M$ through the Gödel operations. Furthermore it appears to me that since satisfiability even if we add a set as a predicate is absolute, we can through this show the absoluteness of "$x$ is relative constructible" and thus show both that $L[A]$ is the least inner model (under the restriction mentioned above) and $(\exists X\quad V=L[X])^{L[A]}$. -My problem is with the generalized continuum hypothesis and the condensation lemma. In Jech's book it states that GCH is true in $L[A]$ above some ordinal $\alpha$. But then the Condensation Lemma is stated as: - -If $\mathcal{M}\prec(L_\delta[A],\in,A\cap L_\delta[A])$ where $\delta$ is a limit ordinal, then the transitive collapse of $\mathcal{M}$ is $L_\gamma[A]$ for some $\gamma\leq\delta$. - -If this is indeed how the condensation lemma generalizes then why can't we prove the GCH much like we do in the case of $L$? For every $\alpha$, taking $X\in\mathcal{P}^{L[A]}(\omega_\alpha)$ there is of course some $\delta$ such that $X\in L_\delta[A]$. Then taking the Skolem Hull of $\omega_\alpha\cup\{X\}$ we get a model $\mathcal{M}\prec(L_\delta[A],\in,A\cap L_\delta[A])$ with $|\mathcal{M}|=|\omega_\alpha|$. Its transitive collapse would be some $L_\gamma[A]$ and since $|L_\gamma[A]|=|\gamma|$ we would have that $\gamma<\omega_{\alpha+1}$. I can't find any gap in this syllogism. What am I missing? -Thanks in advance, - -REPLY [6 votes]: Apostolos: About the first question, yes. The point is $(*)$: If $(M,\dot\in)$ is a model of set theory, $\phi$ is a sentence, and $(M,\dot\in)\models$"$(N,E)$ is a model of $\phi$", then $(N,E)\models\phi$. -More precisely, $(N,E)^*\models\phi$, where $(N,E)^*$ is the model that $M$ thinks $(N,E)$ is. 
Here, $(N,E)^*$ has universe $\{a\in M\mid (M,\dot\in)\models a\dot\in N\}$ and its relation is $\{(a,b)\in M\times M\mid (M,\dot\in)\models a,b\dot\in N\land aE b\}$.
-$(*)$ is easily established by an induction on the complexity of formulas. Now, if $\phi$ is an axiom of ZFC, then $(M,\dot\in)$ thinks that $\phi$ is an axiom of ZFC (more precisely, if $n$ is a Gödel number for $\phi$, then in $M$, the numeral corresponding to $n$ is a Gödel number for a formula, and that formula is precisely $\phi$), and therefore, $(N,E)^*$ satisfies all ZFC axioms.
-Note that $M$ may fail to be an $\omega$-model, in which case there are "natural numbers" in $M$ that code what $M$ believes are ZFC axioms, and $M$ may believe that $(N,E)$ satisfies them. We do not care about these "fake formulas". Similarly, there may be an $(N,E)$ in $M$ that $M$ thinks is not a model of ZFC but $(N,E)^*$ is, in fact, a ZFC model. The reason is similar: $M$ may think that $(N,E)$ does not satisfy one of the fake axioms, but this does not matter.
-In the above, it doesn't matter whether $(N,E)$ is a proper class or a set in the sense of $M$.
-The argument by induction in the complexity of formulas is perfectly general, so in fact $((L,\in)^M)^*$ is a model of every $\phi$ that follows from ZFC$+V=L$. In particular, GCH, diamonds, squares, etc, hold in $(L^M)^*$. Of course, there are also additional properties this model has that are not provable from ZFC$+V=L$.
-About the second question: Suppose that CH fails, and let $A$ be a subset of $\omega_2$ that codes an injective $\omega_2$-sequence of reals, say the $\alpha$-th real is coded in $A\cap(\omega\cdot\alpha,\omega\cdot(\alpha+1))$.
-Then $L[A]$ cannot possibly be a model of GCH, because $A\in L[A]$ so $L[A]$ sees at least $\omega_2^V\ge\omega_2^{L[A]}$ many different reals.
-As you see, the usual proof of CH breaks down because we cannot ensure that for every real there is a countable $\alpha$ such that the real belongs to $L_\alpha[A]$. In fact, if we are careful we could even have a situation where $\omega_1=\omega_1^L$, ${\mathfrak c}=\omega_2$, $A$ codes all the reals, and $L_{\omega_1}[A]=L_{\omega_1}$, i.e., none of the "new" reals are visible in $L_{\omega_1}[A]$.
-The issue is that $A$ may not collapse correctly, meaning, what you do is take $\delta$ large and a countable elementary $X\prec L_\delta[A]$ that contains $r$. Then you collapse $X$. Its collapse may not be $L_\alpha[A]$ for any $\alpha$, because the collapse of $A$ need not be $A$. Of course, an initial segment of $A$ coincides with an initial segment of its collapse, but its collapse may "code information faster", in particular, it will code $r$ by a countable stage.<|endoftext|>
-TITLE: Russell's paradox and the foundations of measure theory
-QUESTION [7 upvotes]: Measure theory was established on naive set theory (not totally sure). But after Russell discovered the paradox named after him, set theory was reconstructed in the sense of axiomatization.
-My question is: in the first chapter of many measure theory textbooks, there is a set theory introduction, most likely describing naive set theory. How can I be convinced that measure theory is rigorous? Or can I just take it for granted that it is, and the approach of introducing naive set theory is only because it is easier to understand?
-And could we encounter such an example of the paradox when using measure theory, e.g. to analyze integrability.
-REPLY [12 votes]: The only practical difference (as long as one doesn't work near to the foundations of mathematics at least) between naive set theory and axiomatic set theory is that the axioms say that there are certain things you just can't do - such as unrestricted comprehension, and sets have to be wellfounded - and neither crops up in measure theory, since the most complicated set theoretic "stuff" you do starts with a set $X$, considers $\sigma$-algebras on $X$, i.e. certain subsets of $\mathcal{P}(X)$, the power set of $X$, so the collection of all $\sigma$-algebras on $X$ is a subset of $\mathcal{P}(\mathcal{P}(X))$. In particular you never run into unrestricted comprehension because you're always guaranteed to have a set to do comprehension in, and these "nested structures" that lead to non-wellfoundedness don't happen either, for essentially the same reason.<|endoftext|>
-TITLE: Please recommend good text on complex Fourier series/analysis
-QUESTION [6 upvotes]: I am looking for some good text/reference on complex Fourier series resp. Fourier analysis for complex (in particular holomorphic) functions (of one variable). The more it contains on this particular subject, the better.
-Background: For my diploma thesis, I need in particular to understand asymptotics of the Fourier coefficients for certain entire functions, so I need to study it fast, that is, more straightforward, well-structured theory without much "bla-bla", and fewer exercises... Nevertheless, I would like to learn the more general theory of Fourier analysis for complex/holomorphic functions as it has a great deal of applications in Analytic Number Theory, which is one of the subjects of interest to me.
-Thanks in advance!
-
-REPLY [2 votes]: The following references cover some close links between harmonic and complex analysis that may be suitable for what you need (such as Paley-Wiener theorems, Corona Theorems, etc):
-
-Geometric Function Theory: Explorations in Complex Analysis by Steven Krantz
-Bounded Analytic Functions by John Garnett
-A Guide to Distribution Theory and Fourier Transforms by Robert Strichartz
-Real and Complex Analysis by Walter Rudin<|endoftext|>
-TITLE: Recognizable vs Decidable
-QUESTION [60 upvotes]: What is the difference between "recognizable" and "decidable" in the context of Turing machines?
-
-REPLY [6 votes]: My answer mostly agrees with Aryabhata's:
-A language is “Turing-Recognizable” iff there exists a Turing Machine such that
-
-when encountering a string in that language, the machine terminates and accepts that string;
-when encountering a string not in that language, the machine either terminates and rejects that string or doesn't terminate at all.
-
-A language is “Turing-Decidable” iff there exists a Turing Machine such that
-
-when encountering a string in that language, the machine terminates and accepts that string;
-when encountering a string not in that language, the machine terminates and rejects that string.
-
-Note that “Turing-Decidable” is a stronger condition than “Turing-Recognizable”, because, if a language is Turing-Decidable then its corresponding Turing Machine never runs forever.<|endoftext|>
-TITLE: Analyticity of a function depending on $z$ and $\bar{z}$
-QUESTION [6 upvotes]: Say $z \in \mathbb{C}$ and $\bar{z}$ the complex conjugate (i.e. with $\bar{z} z = \left|z \right|^2$).
-Can a function of $z$ and $\bar{z}$ be analytic?
-Example: $f(z,\bar{z}) = Az^3 + B \bar{z} z$ -I thought no, because the partial derivatives will depend on the direction in the complex plane (i.e. the phase of the line along which you take the derivative limit). -Thanks! - -REPLY [10 votes]: One of the many equivalent definitions for a function to be holomorphic is $\displaystyle \frac{\partial f}{\partial \bar{z}} = 0$ -$\displaystyle \frac{\partial f}{\partial \bar{z}} = 0$ is equivalent to Cauchy Riemann equations as shown below. -$$x = \frac{z+\bar{z}}{2} \text{ and } y = \frac{z-\bar{z}}{2i}$$ -$$\frac{\partial f}{\partial \bar{z}} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial \bar{z}} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial \bar{z}} = \frac{1}{2} \left( \frac{\partial f}{\partial x} + i \frac{\partial f}{\partial y} \right)$$ -So if $f = u(x,y) + i v(x,y)$, where $u,v: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, then -$$\frac{\partial f}{\partial x} = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}$$ -$$\frac{\partial f}{\partial y} = \frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y}$$ -Hence,$$\frac{\partial f}{\partial \bar{z}} = \frac{1}{2} \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) + \frac{i}{2} \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right)$$ -Hence, you find that $$\left( \frac{\partial f}{\partial \bar{z}} = 0 \right) \iff \left(\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \text{ and } \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x} \right)$$<|endoftext|> -TITLE: Number of reversals of direction to return in random walk -QUESTION [6 upvotes]: I am wondering if there are some studies about the number of reversals of direction to return to the starting point in random walk (either symmetric or non-symmetric), for example, its distribution and expectation etc. - -REPLY [5 votes]: In the symmetric case, one assumes that $X_0=0$ and that $X_n=Y_1+\cdots+Y_n$ for $n\ge1$, where $(Y_n)_{n\ge1}$ is i.i.d. Bernoulli and centered. For $n\ge1$, let $R_n$ denote the number of reversals of $(X_k)_{0\le k\le n}$. Then $R_1=0$ and $R_n=U_2+\cdots+U_n$ where $U_k=[Y_kY_{k-1}=-1]$. -The conditional expectation of $R_{n+1}$ with respect to the $\sigma$-algebra $F_n=\sigma(X_k;0\le k\le n)$ is $R_n+P(Y_nY_{n+1}=-1|F_n)=R_n+\frac12$, hence $M_n=2R_n-n$ defines a martingale $(M_n)_{n\ge1}$ starting from $M_1=-1$. -In particular, for every uniformly integrable stopping time such as the first hitting time $T_h$ of the set $\{0,h\}$ with $h\ge1$ by $(X_n)_n$, $E(M_{T_h})=-1$, hence -$$ -2E(R_{T_h})=E(T_h)-1. -$$ -When $h\to+\infty$, $T_h$ converges to the first return time $T$ to $0$ and $T$ is not integrable hence $R_T$ is not integrable. - -In the asymmetric case, assume that $P(Y_n=+1)=p$ and $P(Y_n=-1)=1-p$ for a given $p$ in $(0,1)$. If $p\ne\frac12$, $(X_n)_n$ has a positive probability to never hit $0$ again, in which case the total number of reversals of its path is almost surely infinite, hence not integrable. -One way to save the day is to assume that $p<\frac12$ (for example) and to condition on the event $[X_1=1]$. Write $P^+$ for this conditioned probability measure and $E^+$ for the expectation with respect to $P^+$. Then the first return time $T$ to $0$ is (at last!) integrable for $P^+$ and $R_T\le T-1$ hence $R_T$ is integrable for $P^+$. 
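-(As a quick sanity check on the symmetric case above: since $E(M_n)=M_1=-1$, the martingale pins down $E(R_n)=(n-1)/2$. A small Monte Carlo sketch, an editorial illustration rather than part of the original answer:)
-
-import random
-
-def reversals(n):
-    # R_n for one simulated symmetric walk of n steps
-    steps = [random.choice([-1, 1]) for _ in range(n)]
-    return sum(steps[k] != steps[k - 1] for k in range(1, n))
-
-n, N = 20, 100000
-avg = sum(reversals(n) for _ in range(N)) / N
-print(avg, (n - 1) / 2)  # the empirical mean should be close to (n-1)/2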
-To compute the value of $E^+(R_T)$, one can mimic the argument given in the symmetric case to show that the formula -$$ -M_n=2R_n-n-(1-2p)X_{n-1} -$$ -defines a martingale $(M_n)_{n\ge1}$ with respect to $P^+$, starting from $M_1=-1$. Since $X_{T-1}=+1$ almost surely for $P^+$, this yields -$$ -2E^+(R_{T})=E^+(T)-2p, -$$ -and it remains to compute $E^+(T)$. This can be done by the usual first-step decomposition: on $[Y_2=-1]$, $T=2$, and on $[Y_2=+1]$, $T=T'+T''$ for two independent copies of $T$. Hence $E^+(T)=2(1-p)+2E^+(T)p$, which yields the value of $E^+(T)$. Finally the mean number of reversals is -$$ -E^+(R_T)=\frac{1-2p(1-p)}{1-2p}. -$$<|endoftext|> -TITLE: How do you do a cross product of two $3 \times 3$ boolean matrices? -QUESTION [5 upvotes]: I have two boolean matrices: -A = |1 1 0| - |0 1 0| - |0 0 1| - -and - -B = |1 0 0| - |1 1 1| - |0 0 1| - -What is the result of A x B and what are the steps needed to attain the result? -Note: My textbook says that the answer to the above is: -A x B = |1 1 1| - |1 1 1| - |0 0 1| - -and that A * B is not equal to A x B. Unfortunately, it does not give the steps needed to find the solution. -REPLY [10 votes]: I think it is the same as conventional matrix multiplication, except that multiplication is replaced by the "and" operation and addition is replaced by the "or" operation. -Hence, $$A \times B = -\begin{bmatrix} 1 & 1 & 0 \\\ 0 & 1 & 0 \\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 0 & 0 \\\ 1 & 1 & 1 \\\ 0 & 0 & 1 \end{bmatrix}$$ -$$A \times B = \begin{bmatrix} (1 \& 1) || (1 \& 1) || (0 \& 0) & (1 \& 0) || (1 \& 1) || (0 \& 0) & (1 \& 0) || (1 \& 1) || (0 \& 1) \\\ (0 \& 1) || (1 \& 1) || (0 \& 0) & (0 \& 0) || (1 \& 1) || (0 \& 0) & (0 \& 0) || (1 \& 1) || (0 \& 1) \\\ (0 \& 1) || (0 \& 1) || (1 \& 0) & (0 \& 0) || (0 \& 1) || (1 \& 0) & (0 \& 0) || (0 \& 1) || (1 \& 1) \end{bmatrix}$$ -$$A \times B = \begin{bmatrix} 1 & 1 & 1 \\\ 1 & 1 & 1 \\\ 0 & 0 & 1 \end{bmatrix}$$<|endoftext|> -TITLE: Improper integral -QUESTION [7 upvotes]: I would like to know the value of the following improper integral: -$$\int \limits_{-\infty}^{\infty}\frac{2x}{1+x^2}dx$$ -As the function $f(x)=\displaystyle\frac{2x}{1+x^2}$ satisfies $f(-x)=-f(x)$, can I immediately conclude that the integral is equal to zero? -Thank you in advance for any suggestion. -REPLY [14 votes]: The Cauchy principal value of your integral is defined to be -$$ \lim_{a \to \infty} \int_{-a}^a {2x \over 1+x^2} \: dx $$ -and this is zero by symmetry. So there is a sense in which the answer is $0$. However, if we let the limits go to infinity at different rates then we can make this integral "equal" whatever we want; for example, for any positive real constant $k$, -$$ \lim_{a \to \infty} \int_{-a}^{ka} {2x \over 1+x^2} \: dx = \lim_{a \to \infty} \log \left( {1+k^2 a^2 \over 1+a^2} \right) = 2 \log k. $$ -Basically, since there is infinitely much area between this curve and the $x$-axis both in the first and third quadrants, we have to be careful about how we arrange things in order to make them "cancel out"; this is why we just throw up our hands and say that the integral doesn't converge.<|endoftext|> -TITLE: the number of Young tableaux in general -QUESTION [7 upvotes]: From the wiki page Catalan number, we know the number of Young tableaux whose diagram is a 2-by-$n$ rectangle given $2n$ distinct numbers is $C_n$. In general, given $m\times n$ distinct numbers, how many Young tableaux whose diagram is an $m\times n$ rectangle are there? 
-Also, what if these numbers can be repeated? -Many thanks. -REPLY [2 votes]: It is not quite clear what you mean by allowing repeated numbers, but what one usually considers in that case is so-called semi-standard Young tableaux, i.e., tableaux which are increasing (strict inequality) down each column, but only nondecreasing (equality allowed) along each row. The number of such arrangements on a given Young diagram, where the numbers $1,2,\dots,N$ are allowed, is counted as follows: define the "content" of the box in row $i$ and column $j$ of the diagram to be $j-i$. Here's an illustration: - 0 1 2 3 4 --1 0 1 2 --2 -1 0 --3 -2 --4 -3 --5 - -Hook lengths are defined as for the usual hook-length formula for counting standard Young tableaux: -10 8 5 3 1 - 8 6 3 1 - 6 4 1 - 4 2 - 3 1 - 1 - -To get the answer, take the product over all boxes of (($N$ plus the content of that box) divided by (the hook length for that box)). -(This is a special case of something called Stanley's hook-content formula.)<|endoftext|> -TITLE: Equivalence of Rolle's theorem, the mean value theorem, and the least upper bound property? -QUESTION [12 upvotes]: How can one show that Rolle's theorem and the Mean Value Theorem are equivalent to the least upper bound property? -I'm thinking of starting like this: Let F be an ordered field that does not satisfy the least upper bound property, and then deduce that F does not satisfy either Rolle's or MVT. But I'm not sure how to continue; any help please? Thanks! -REPLY [5 votes]: The purpose of this answer is to provide some supplementary insight into Chris Eagle's answer and to show in particular that his proof of MVT $\implies$ LUB is a very natural one. -In $\S 4$ of this note, I discuss the notions of completeness and Dedekind completeness in linearly ordered sets in terms of the induced order topology. (Recall an ordered set is Dedekind complete if it satisfies LUB -- every nonempty subset which is bounded above has a least upper bound -- and is complete if it is Dedekind complete and has a minimal element and a maximal element.) In particular, I show that a linear order is dense and Dedekind complete iff it is connected in the order topology. Every ordering on a field (compatible with the field structure, of course) is dense, so we see that an ordered field satisfies LUB iff it is connected in the order topology. -(In fact Chris Eagle's answer gives a snappy proof of "connected $\implies$ LUB": if LUB does not hold, take a nonempty set $A$ which is bounded above but has no least upper bound. Then the set of upper bounds of $A$ is nonempty, proper, open and closed. The other implication is the one which is more familiar from real analysis, and is not quite so easy. In the above note, I take the perspective that a very natural proof is given using the technique of real induction.) -So if we want to show that MVT $\implies$ LUB (as Chris Eagle points out, LUB $\implies$ Rolle's Theorem $\implies$ MVT is part of the standard honors calculus / elementary real analysis curriculum), it is natural to go by contrapositive: if an ordered field $(F,+,\cdot,<)$ does not satisfy LUB, then it is not connected, i.e., there exists a nonconstant continuous function $f: F \rightarrow \{0,1\}$. It follows immediately from the definition of the order topology that for each $x \in F$, there exists an open interval $I$ containing $x$ such that $f|_I$ is constant. 
Thus $f$ is a nonconstant function which is everywhere differentiable with derivative identically zero, contradicting MVT.<|endoftext|> -TITLE: Example of Convergent Series -QUESTION [8 upvotes]: Can anyone think of sequences $\{a_n\}$, $\{b_n\}$ such that $\sum a_n$ diverges, ${b_n}\to\infty$, but $\sum a_nb_n $ converges? -Thank you. -Note that $\{a_n\}$ must have infinitely many positive terms and infinitely many negative terms. -Edit: I get the feeling that only Qiaochu Yuan could answer this one... ;) -REPLY [8 votes]: Let $c_n = (-1)^n/\sqrt{n}$. Then $\sum c_n$ converges by the alternating series test. Let $b_n=\sqrt{n}$ or $n$ depending on whether $n$ is even or odd. Then $b_n \to \infty$. Let $a_n = c_n / b_n$. Then $a_n = 1/n$ if $n$ is even and $a_n = -1/n\sqrt{n}$ if $n$ is odd, so the negative $a_n$'s have a finite sum, while the positive $a_n$'s (half of the harmonic series) don't. Hence $\sum a_n$ diverges.<|endoftext|> -TITLE: Why don't all path-connected spaces deformation retract onto a point? -QUESTION [8 upvotes]: I'm reading Hatcher's book Algebraic Topology and on page 12 he writes: - -... It is less trivial to show that there are path-connected spaces that do not deformation retract onto a point... - -The definition of deformation retract is given as: - -A deformation retraction of a space $X$ onto a subspace $A$ is a family of maps $f_t :X→X$, $t \in I$, such that $f_0 = 1 $ (the identity map), $f_1(X) = A$, and $f_t |A = 1$ $\forall t$. The family $f_t$ should be continuous in the sense that the associated map $X×I→X, (x,t) \rightarrow f_t(x)$ is continuous. - -Now my question is: how is it possible for a space $X$ to be path-connected but not retractable to a single point? Can one not just pick $x \in X$ and then retract any $y \in X$ along the path between $x$ and $y$? -REPLY [10 votes]: It's probably worth pointing out that you should be careful with the term retraction versus deformation retraction. Every space $X$ retracts onto a point, via the map $r \colon X \rightarrow \ast$ that sends everything to a single point, but as Hatcher says, not every space deformation retracts to a point. -What Hatcher's definition of deformation retract is really saying is that a deformation retract is a retract $r$ along with a homotopy $r \simeq 1$, where $1$ is the identity. So if a space deformation retracts to another space, those two spaces have the same homotopy type, whereas spaces can retract onto spaces that are not of their homotopy type (as you'll soon see if you keep reading the book). -In particular, if every space deformation retracted to a point, every space would be contractible, and algebraic topology would be very boring indeed!<|endoftext|> -TITLE: Existence of a measure that preserves a given mapping -QUESTION [5 upvotes]: Let $(X, d)$ be a compact metric space and let $T:X \to X$ be a continuous mapping. Now, does there exist a probability measure $\nu$ such that $T_* \nu = \nu$ (where $T_* \nu$ denotes the image, i.e. pushforward, measure)? -Now I want to do this using a fixed-point theorem. So I start like this: -Define the operator $\psi_T$ from $C(X)$ into itself by $\psi_T f = f \circ T$. Now, by the Riesz representation theorem, $C(X)^*$ is a space of measures, and the set $P(X)$ of Borel probability measures on $X$ sits inside it. So we have a mapping $\psi_T^* : P(X) \to P(X)$ (one checks that $\psi_T^*$ preserves positivity and total mass). 
Further we have that: -$$\langle f, \underbrace{\psi_T^* \mu}_{= \nu} \rangle = \langle \psi_T f, \mu \rangle = \int f \circ T \, d\mu.$$ -And $\nu$ is completely determined by -$$\int f \, d\nu = \int f \circ T \, d\mu$$ -Further, since $X$ is compact I know that $P(X)$ is compact (hence complete) with respect to the Bounded Lipschitz metric. This one is given by -$$d_\text{BL}(\mu, \nu) := \sup \left \{ \left | \int f \, d\mu - \int f \, d\nu \right | : f \in \text{BL}(X,d), \|f\|_\text{BL} \leq 1 \right \}.$$ -So everything is fine for the Banach fixed point theorem except one thing: is $d_\text{BL}$ a contraction? I don't think so. Does someone have an idea if I'm on the right track or does someone have a suggestion how to continue? -By the way, this is homework. - -REPLY [4 votes]: If I remember correctly, the map in question is not a contraction; instead, you have to use the compactness of the unit ball of measures in the weak* topology and a fixed point result. Namely, you consider the space of measures of total mass one (which is compact), which is acted on by $T_*$ (in a linear manner). Now if you are in a Banach space and have a compact convex subset which is invariant under a linear operator, it has a fixed point. (Hint: consider successive averages $a_n$ of a given element, show that $Ta_n - a_n$ is small for $n$ large, and choose a limit point of the $\{a_n\}$.) So this gives the result.<|endoftext|> -TITLE: When is the vector space of continuous functions on a compact Hausdorff space finite dimensional? -QUESTION [11 upvotes]: I know that the vector space of all real valued continuous functions on a compact Hausdorff space can be infinite dimensional. When will it be finite dimensional? And how will I identify that vector space with $\mathbb{R}^n$ for some $n$? - -REPLY [18 votes]: For a topological space $X$, let me write $C(X,\mathbb{R})$ for the set of continuous functions $f: X \rightarrow \mathbb{R}$. Note that $C(X,\mathbb{R})$ forms a ring under pointwise addition and multiplication which contains $\mathbb{R}$ as the subring of constant functions. -Suppose $X$ is compact Hausdorff. -1) If $X = \{x_1,\ldots,x_n\}$ is finite, then since it is Hausdorff it has the discrete topology so $C(X,\mathbb{R})$ is the set of all functions from $\{x_1,\ldots,x_n\}$ to $\mathbb{R}$. In this case a natural basis is given by "$\delta$ functions'': i.e., for $1 \leq i \leq n$, let $e_i: X \rightarrow \mathbb{R}$ be such that $e_i(x_j) = \delta_{i,j}$ (i.e., $1$ if $i = j$, $0$ otherwise). Note that in this case $e_1,\ldots,e_n$ is also a family of idempotent elements giving rise to a direct product decomposition of rings $C(X,\mathbb{R}) \cong \mathbb{R}^n$. -2) If $X$ is infinite, then for every positive integer $n$ there is a subspace -$X_n$ consisting of exactly $n$ points. By Step 1, the dimension of the space -$C(X_n,\mathbb{R})$ is $n$. Moreover, since $X$ is Hausdorff, $X_n$ is closed in $X$, -and, since $X$ is compact Hausdorff, the Tietze Extension Theorem applies to show that every continuous function on $X_n$ extends to a continuous function on $X$. In other words, the natural restriction map $r_n: C(X,\mathbb{R}) \rightarrow C(X_n,\mathbb{R})$ is surjective. Since $r_n$ is an $\mathbb{R}$-linear map (indeed a homomorphism of $\mathbb{R}$-algebras), it follows that -$\operatorname{dim} C(X,\mathbb{R}) \geq \operatorname{dim} C(X_n,\mathbb{R}) = n.$ -Since $n$ was arbitrary, we conclude that $\operatorname{dim} C(X,\mathbb{R})$ is infinite. 
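-To make the identification in Step 1 concrete (a small worked example added for illustration): for $X=\{x_1,\dots,x_n\}$ with the discrete topology, every $f \in C(X,\mathbb{R})$ can be written as $f = \sum_{i=1}^n f(x_i)\,e_i$, so the map
-$$
-C(X,\mathbb{R}) \to \mathbb{R}^n, \qquad f \mapsto (f(x_1),\ldots,f(x_n)),
-$$
-is an isomorphism of $\mathbb{R}$-algebras, where $\mathbb{R}^n$ carries the coordinatewise operations.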
-Added: As Qiaochu points out, it is enough to require that $X$ be what I call "C-separated": i.e., for the continuous $\mathbb{R}$-valued functions to separate points of $X$. And for this it is enough that $X$ be Tychonoff. (Recall compact Hausdorff $\implies$ normal $\implies$ Tychonoff.) In fact these considerations come up in $\S 5.2$ of my commutative algebra notes. See especially the second exercise in that section, which asks for a justification of the claim I made in the first sentence of this paragraph. -Indeed, since "C-separated" implies Hausdorff, looking back at my answer it shows that one can replace "compact Hausdorff" with "C-separated" and the result still holds. However, one cannot replace "C-separated" with "regular". As I have mentioned elsewhere, there are infinite regular spaces $X$ in which the only continuous functions are the constant functions, so $C(X,\mathbb{R}) = \mathbb{R}$ is a one-dimensional space.<|endoftext|> -TITLE: Chebyshev's versus Markov's inequality -QUESTION [15 upvotes]: More important than knowing inequalities by heart is knowing when and how to apply them and what they express. -Regarding Chebyshev's and Markov's inequalities: what is the relation (if any) between them? Which one is more strict (and in which situation)? Is there an easy way to understand what they express (kind of like drawing a triangle for the triangle inequality)? What is a typical application in probability theory? What are applications outside of probability theory? -REPLY [15 votes]: Markov's inequality is a "large deviation bound". It states that the probability that a non-negative random variable gets values much larger than its expectation is small. -Chebyshev's inequality is a "concentration bound". It states that a random variable with finite variance is concentrated around its expectation. The smaller the variance, the stronger the concentration. -Both inequalities are used to claim that most of the time, random variables don't get "unexpected" values. -A typical application of concentration bounds is polling - if you poll 200 people, then the standard deviation is so-and-so, and so the probability that the results are off by some amount $x$ is only $p$, which is very small. You can bound $p$ using Chebyshev's inequality, although there are better bounds. -Another application is in the statistics of patients in a hospital. Suppose they conduct a survey and find that the average number of patients per day is 10. Suppose they want to be able to handle all patients at least 90% of the time. Then a capacity of 100 patients suffices: by Markov's inequality, the probability that more than 100 patients arrive on a given day is at most $10/100 = 10\%$. -The list of probability results goes on. The law of large numbers states that the average of i.i.d. random variables tends (almost surely) to the expectation. The central limit theorem describes the behavior of the noise, but only "close" to the expectation. Chernoff bounds are large deviation bounds useful for understanding the behavior of the noise away from the expectation.<|endoftext|> -TITLE: Proof that retract of contractible space is contractible -QUESTION [14 upvotes]: I'm reading Hatcher and I'm doing exercise 9 on page 19. Can you tell me if my answer is correct? -Exercise: Show that a retract of a contractible space is contractible. -Proof: -Let $X$ be a contractible space, i.e. $\exists f :X \rightarrow \{ * \}$, $g: \{ * \} \rightarrow X$ continuous such that $fg \simeq id_{\{ *\}}$ and $gf \simeq id_X$, and let $r:X \rightarrow X$ be a retraction of $X$, i.e. $r$ continuous with $r(X) = A$ and $r\mid_A = id_A$. 
-(Edited using Anton's answer) -Define $f^\prime := f\mid_A$ and $g^\prime := r \circ g$. -Now we need to show $f^\prime \circ g^\prime \simeq id_{ \{ * \} }$ and $g^\prime \circ f^\prime \simeq id_A$. -$f^\prime \circ g^\prime \simeq id_{ \{ * \} }$ follows from $f^\prime \circ g^\prime = id_{ \{ * \} }$ (because there is only one function $\{ * \} \rightarrow \{ * \}$). -From $gf \simeq id_X$ we have a homotopy $H: I \times X \rightarrow X$ such that $H(0,x) = g \circ f (x)$ and $H(1,x) = x$. From this we build a homotopy $H^\prime : I \times A \rightarrow A$ by defining $H^\prime := r \circ H \mid_{I \times A}$. Then $H^\prime (0,x) = g^\prime \circ f\mid_A (x)$ and $H^\prime (1,x) = x$ for $x \in A$. -I'm particularly dissatisfied with the amount of detail in my reasoning but I can't seem to produce what I'm looking for. Many thanks for your help! -REPLY [6 votes]: Your real questions have already been answered. Here I give an alternative proof, demanding some familiarity with categories. It illustrates how convenient 'abstract nonsense' can be. I start from the retraction $r:X\rightarrow A$ and the inclusion $i:A\rightarrow X$ with $ri=1$. -The contractible objects in the category $\mathbf{Top}$ are exactly the terminal objects in the category $\mathbf{hTop}$. If $X$ is a terminal object in $\mathbf{hTop}$, then for every object $Y$ the -homset $\mathbf{hTop}\left(Y,X\right)$ contains exactly one arrow. -Let us denote this arrow by $\left[c\right]$. Then the homset $\mathbf{hTop}\left(Y,A\right)$ -contains the arrow $\left[r\right]\left[c\right]$. Now let $\left[f\right]\in\mathbf{hTop}\left(Y,A\right)$. -Then $\left[i\right]\left[f\right]\in\mathbf{hTop}\left(Y,X\right)$ -so $\left[i\right]\left[f\right]=\left[c\right]$. Then we find $\left[f\right]=\left[r\right]\left[i\right]\left[f\right]=\left[r\right]\left[c\right]$, -demonstrating that $\left[r\right]\left[c\right]$ is the only element -of $\mathbf{hTop}\left(Y,A\right)$. This proves that $A$ is terminal -in $\mathbf{hTop}$, hence contractible in $\mathbf{Top}$. - -It can be done even more briefly: -$\left[\right]:\mathbf{Top}\rightarrow\mathbf{hTop}$ is a functor -and functors respect retractions (=arrows that have a right-inverse). So if $r:X\rightarrow A$ is a retraction -in $\mathbf{Top}$ then $\left[r\right]:X\rightarrow A$ is a retraction -in $\mathbf{hTop}$. -If $\left[r\right]:X\rightarrow A$ is a retraction and -$X$ is terminal then $\left[r\right]$ is an isomorphism, hence $A$ -is terminal. (This is true in every category, but to maintain the line of the proof I keep on using the notation $\left[r\right]$.) -Proof of that: some $\left[s\right]:A\rightarrow X$ exists with $\left[r\right]\left[s\right]=1$; -then $\left[s\right]\left[r\right]:X\rightarrow X$ and the identity -is the only endomorphism here, so $\left[s\right]\left[r\right]=1$.<|endoftext|> -TITLE: What is the difference between divisors and proper divisors? -QUESTION [9 upvotes]: I'm really confused about these two. For example if $n = 6$, then: -Divisors: $2, 3$ -Proper Divisors: $1, 2, 3, 6$ -Is it right? -Update -From Elementary Number Theory and Its Application by Kenneth H. Rosen 6th edition, page 256: - -Because of certain mystical beliefs, the ancient Greeks were interested in those integers that are equal to the sum of all their proper positive divisors. Such integers are called perfect numbers. -Example: -$\sigma(6) = 1 + 2 + 3 + 6 = 12$, we see that $6$ is perfect. 
- -Thanks, -REPLY [7 votes]: Generally for order relations, the adjective "proper" is usually used to denote a strict ordering, i.e. $\rm\ a \preceq b\ $ properly means $\rm\ a \preceq b\ $ but not $\rm\ b \preceq a\:.\:$ Thus we have proper divisors, proper subsets, etc. -Hence $\rm\:a\:$ is a proper divisor of $\rm\:b\:,\:$ or $\rm\ a\ |\ b\ $ properly, $\:$ simply means that $\rm\ a\ |\ b\ $ but not $\rm\ b\ |\ a\:.$<|endoftext|> -TITLE: Bounding the integral of a $C^1$ function using its gradient -QUESTION [5 upvotes]: Let $f \in C^1_c(\Omega)$ where $\Omega \subset \mathbb{R}^d$ is a bounded domain. Let $\phi \in C^1_c(\mathbb{R}^d)$ be an approximation of the identity (i.e. $\int_{\mathbb{R}^d} \phi=1$, $\phi \geq 0$), and set $\phi_\epsilon := \frac{1}{\epsilon^d} \phi(\frac{x}{\epsilon})$. -How would you prove that -$$\int_\Omega |f(x) - f \ast \phi_\epsilon(x)| dx \leq \epsilon \int_\Omega |\nabla f| dx?$$ -I'm trying to show that the family of $C^1_c$ functions convolved with a mollifier is uniformly close to the function in $L^1$ (which would be true after having this result if we assume something like the family of functions being bounded in $W^{1,1}(\Omega)$). -REPLY [7 votes]: WLOG assume $\phi(x)$ is supported in the unit ball (it has compact support, so it is supported in some ball). First look at -$$ |f(x) - f*\phi_\epsilon(x)| \leq \int_{|z|\leq 1} \phi(z) |f(x) - f(x-\epsilon z)| dz $$ -and replace the integrand using -$$ f(x) - f(x-\epsilon z) = - \int_0^{\epsilon|z|} D_rf(x - r\omega) dr $$ -with $\omega = z / |z|$, which follows from the fundamental theorem of calculus. Integrating the whole thing over $x$, and changing the order of integration, you have -$$ \int_{\Omega}|f(x) - f*\phi_\epsilon(x)|dx \leq \int_{|z|\leq 1} \phi(z) \int_0^{\epsilon|z|} \int_{\Omega} |D_rf(x - r\omega)| dx~ dr~ dz $$ -The innermost integral for fixed $r\omega$ gives you $\int_\Omega |\nabla f| dx$. The integral over $r$ gives you the factor of $\epsilon$ (since $r$ ranges over an interval of length $\epsilon|z| \leq \epsilon$). And integrating $\phi(z)$ over the ball of radius 1 gives you 1.<|endoftext|> -TITLE: Understanding conditional independence of two random variables given a third one -QUESTION [10 upvotes]: I am reading the Wikipedia article on conditional independence. There seem to be two definitions for conditional independence of two random variables $X$ and $Y$ given another one $Z$: - -Two random variables $X$ and $Y$ are -conditionally independent given a -third random variable $Z$ if and -only if they are independent in -their conditional probability -distribution given $Z$. That is, $X$ -and $Y$ are conditionally -independent given $Z$ if and only -if, given any value of $Z$, the -probability distribution of $X$ is -the same for all values of $Y$ and -the probability distribution of $Y$ -is the same for all values of $X$. -Two random variables $X$ and $Y$ are -conditionally independent given a -random variable $Z$ if they are -independent given $\sigma(Z)$: the -$\sigma$-algebra generated by $Z$. -Two events $R$ and $B$ are -conditionally independent given a -$\sigma$-algebra $\Sigma$ if - $$\Pr(R \cap B \mid \Sigma) = \Pr(R \mid \Sigma)\Pr(B \mid - \Sigma)\ a.s.$$ where $\Pr(A \mid - \Sigma)$ denotes the conditional -expectation of the indicator -function of the event $A$, given the -sigma algebra $\Sigma$. That is, -$$ \Pr(A \mid \Sigma) := - \operatorname{E}[\chi_A\mid\Sigma].$$ -Two random variables X and Y are -conditionally independent given a -$\sigma$-algebra $\Sigma$ if the -above equation holds for all $R$ in -$\sigma(X)$ and $B$ in $\sigma(Y)$. 
- -I can understand the second definition, but my questions are: - -What does the first definition actually mean? I have tried reading it several times, but fail to get what it means. Can someone rephrase it using rigorous and clean language, for example, by writing the definition in terms of some formulae? -Do the two definitions agree with each other? Why? -ADDED: I was wondering if the following is the correct way to understand the first definition. Notice that $P(\chi_A \mid Z)$ is defined as $E(\chi_A \mid Z)$ and therefore is a random variable. When the conditional probability $P(\cdot \mid Z)$ is "regular", i.e. when $P(\cdot \mid Z)(\omega)$ is a probability measure for each point $\omega$ in the underlying sample space $(\Omega, \mathcal{F}, P)$, does conditional independence between $X$ and $Y$ given $Z$ mean that $X$ and $Y$ are independent w.r.t. every probability measure defined by $P(\cdot \mid Z)(\omega), \forall \omega \in \Omega$? If yes, is the conditional probability $P(\cdot \mid Z)$ always guaranteed to be "regular"? So that there is no need to explicitly write this "regular" assumption? - -Thanks and regards! -REPLY [3 votes]: The first definition is the informal one, but at the same time it seems rather convoluted to me. -I'd prefer: X and Y are conditionally independent with respect to a given Z iff -$P(X \; Y | Z) = P(X | Z ) P(Y | Z)$ -Recall that conditioning one (or several) variables on the value of another is (informally) the same as restricting the whole universe to a part of it. -Then, if you are given the value of $Z$, you can think of it as if you are defining new variables that are the same as the unconditioned ones but restricted to our new (smaller) universe: $X' \equiv X | Z$, $Y' \equiv Y | Z$. -The above formula simply states that $X'$ and $Y'$ are independent. -The first definition says the same, but applying (in words) the property that two variables are independent iff their conditioned probabilities are the same as the unconditioned ones: $A$ indep $B$ iff $P(A | B ) = P (A)$<|endoftext|> -TITLE: Inverse Fourier transform relation for $L^2$ function -QUESTION [7 upvotes]: If $f$ is an $L^2$ function and $\hat{f}$, its Fourier transform, is also in $L^2$, can the Fourier transform and its inverse be written as -$$\hat{f}(\omega)=\int_{-\infty}^\infty f(x) e^{i\omega x}dx$$ -and -$$f(x)=\int_{-\infty}^\infty \hat{f}(\omega) e^{-i\omega x}d\omega$$ -respectively, disregarding the constants? When can a Fourier transform and its inverse exist, but the above definitions not hold? -REPLY [4 votes]: Jonas T already hit on the important point that for the first integral to be defined, $f$ must be in $L^1$. Similarly, for the second integral to be defined, $\hat{f}$ must be in $L^1$. -Fabian has mentioned that the Fourier transform can be generalized even beyond $L^2$. But if you're only concerned with $L^2$, then one way to define the Fourier transform is to first show that it defines an isometric (with respect to the $L^2$ norm) isomorphism on the Schwartz space, which is dense in $L^2$, and then take the unique extension to all of $L^2$. The fact that the Fourier transform defines an isometry with dense range on a dense subspace of $L^2$, and thus has a unique extension to a unitary operator on $L^2$, is known as Plancherel's theorem. I have only mentioned one approach, which is the one I learned from Chapter 10 of J.B. Conway's A course in functional analysis. 
-So the answer to your final question, at least restricted to $L^2$, is that every element of $L^2\setminus L^1$ provides an example, strictly speaking. However, you can show that $\frac{1}{\sqrt{2\pi}}\int_{-R}^R f(x)e^{-i\omega x}dx$ converges to $\hat{f}(\omega)$ in $L^2$ norm as $R\to\infty$, and similarly $\frac{1}{\sqrt{2\pi}}\int_{-R}^R \hat{f}(\omega)e^{i\omega x}dx$ converges to $f(x)$ in $L^2$ norm as $R\to\infty$. I don't have a reference to a proof handy, but this and more is summarized in the Springer online encyclopedia's article on the Fourier transform, which includes helpful references and links.<|endoftext|> -TITLE: Questions about the concept of strong Markov property -QUESTION [6 upvotes]: I am trying to understand the concept of strong Markov property quoted from Wikipedia: - -Suppose that $X=(X_t:t\geq 0)$ is a - stochastic process on a probability - space - $(\Omega,\mathcal{F},\mathbb{P})$ with - natural filtration - $\{\mathcal{F}\}_{t\geq 0}$. Then $X$ - is said to have the strong Markov - property if, for each stopping time - $\tau$, conditioned on the event - $\{\tau < \infty\}$, the process - $X_{\tau + \cdot}$ (which maybe needs - to be defined) is independent from - $\mathcal{F}_{\tau}:=\{A \in \mathcal{F}: \tau \cap A \in \mathcal{F}_t ,\, \ t \geq 0\}$ and - $X_{\tau + t} − X_{\tau}$ has the same - distribution as $X_t$ for each $t \geq 0$. - -Here are some questions that make me stuck: - -In $\mathcal{F}_{\tau}:=\{A \in - \mathcal{F}: \tau \cap A \in - \mathcal{F}_t ,\, \ t \geq 0\} $, -what does $\tau \cap A $ mean? -$\tau$ is a stopping time and -therefore a random variable and $A$ -is a $\mathcal{F}$-measurable -subset, but what does $\tau \cap A$ -mean? -How is the process $X_{\tau + \cdot}$ defined from the process $X_{\cdot}$ ? Is it the translated -version of the latter by $\tau$? -How is the conditional independence -between a process, such as $X_{\tau - + \cdot}$, and the sigma algebra, such as $\mathcal{F}_{\tau}$, given -an event, such as $\{\tau < - \infty\}$, defined? -Related question, is independence -between a random variable and a -sigma algebra defined as -independence between the sigma -algebra of the random variable and -the sigma algebra? -Is "$X_{\tau+ t} − X_{\tau}$ has the -same distribution as $X_t$ for each -$t \geq 0$" also conditional on the -event $\{\tau < \infty\}$? - -Thanks and regards! - -REPLY [2 votes]: Here is a less garbled version of the Wikipedia definition. (Use TheBridge's correction for the definition of ${\cal F}_\tau$.) - The post-$\tau$ process $X_{\tau+\cdot}$ is defined on the event $\{\tau<\infty\}$ by -$$ -X_{\tau+t}(\omega) = X_{\tau(\omega)+t}(\omega),\qquad t\ge 0, -$$ -for $\omega\in\{\tau<\infty\}$. One way to state the strong Markov property is this: The conditional distribution of $X_{\tau+\cdot}$ given ${\cal F}_\tau$ is (a.s.) equal to the conditional distribution of -$X_{\tau+\cdot}$ given $\sigma\{X_\tau\}$, on the event $\{\tau<\infty\}$. More precisely, -$$ -P[ X_{\tau+t}\in B|{\cal F}_\tau] = P[ X_{\tau+t}\in B|X_\tau],\qquad \hbox{almost surely on }\{\tau<\infty\}, -$$ -for all $t\ge 0$, and all measurable subsets $B$ of the state space of $X$. 
-This is equivalent to the statement that $X_{\tau+\cdot}$ and ${\cal F}_\tau$ are conditionally independent, given $X_\tau$: -$$ -P[ F\cap \{X_{\tau+t}\in B\}|X_\tau] = P[ F|X_\tau]\cdot P[X_{\tau+t}\in B|X_\tau],\qquad \hbox{almost surely on }\{\tau<\infty\}, -$$ -for all $t\ge 0$ and all $F\in{\cal F}_\tau$.<|endoftext|> -TITLE: Strong cardinals and reflection -QUESTION [6 upvotes]: I'm new to all this large cardinal business and I am having trouble proving the following: -If $\kappa$ is a $\gamma$-strong cardinal, for some large enough $\gamma$, then $\kappa$ is $\Sigma_2$-reflecting. -Any ideas or hints will be welcome. -Thanks in advance -REPLY [7 votes]: A cardinal $\kappa$ is $\Sigma_2$ reflecting if whenever a sentence $\varphi$ is true in some $V_\alpha$, then there is an $\alpha\lt\kappa$ such that $V_\alpha\models\varphi$. -(This can be seen to be equivalent to the "reflecting" version of $\Sigma_2$-reflecting, since every $\Sigma_2$ statement $\psi$ is equivalent to a statement of the form "$\exists\alpha V_\alpha\models\varphi$", where $\varphi$ is some sentence (with no bound on the complexity of $\varphi$). But let us leave this aside.) -Now, if $\kappa$ is a strong cardinal, and there is some $\alpha$ above $\kappa$ such that $V_\alpha\models\varphi$ for some statement $\varphi$, then fix an $\alpha$-strongness embedding $j:V\to M$ with critical point $\kappa$. Since $V_\alpha^M=V_\alpha$, the model $M$ thinks there is $\alpha\lt j(\kappa)$ for which $V_\alpha\models\varphi$. Thus, by elementarity, there is $\alpha\lt\kappa$ in $V$ for which $V_\alpha\models \varphi$, thereby verifying that $\kappa$ is $\Sigma_2$-reflecting, as desired. -Now, to answer your question, pick any $\gamma\gt\kappa$ such that all $\Sigma_2$ statements reflect from $V$ down to $V_\gamma$. Such a cardinal $\gamma$ exists by the Reflection theorem. Now, I claim that any $\gamma$-strong cardinal $\kappa$ is $\Sigma_2$ reflecting, since any statement $\varphi$ true in some $V_\alpha$ will be true in some $V_\alpha$ with $\alpha\lt\gamma$ by the choice of $\gamma$, and thus $\varphi$ will be true in $V_\alpha$ for some $\alpha\lt\kappa$ by the argument of the previous paragraph. -In fact, it is easy to see that if $\kappa$ is $\gamma$-strong for this $\gamma$, then in fact it is fully strong, since any violation of strongness of $\kappa$ above $\gamma$ would be reflected below $\gamma$.<|endoftext|> -TITLE: When is the image of a linear operator closed? -QUESTION [42 upvotes]: Let $X$, $Y$ be Banach spaces. Let $T \colon X \to Y$ be a bounded linear operator. -Under what circumstances is the image of $T$ closed in $Y$ (apart from the case of finite-dimensional image)? -In particular, I wonder under which assumptions $T \colon X \to T(X)$ is a bounded linear bijection between Banach spaces, so that it is at least an isomorphism onto its image by the bounded inverse theorem. -REPLY [12 votes]: Thrm 1: Suppose $X$ is a Banach space, $Y$ is a normed vector space, and $T:X\to Y$ is a bounded linear operator. -Then the range of $T$ is closed in $Y$ if $T$ is open. -Proof: Suppose $\mathrm{ran}(T)$ is not closed in $Y$. Let $\delta>0$ be given. The goal is to show that there exists $x\in X$ such that $\|T(x)\|/\|x\|<\delta$. Since $\delta$ is arbitrary this will demonstrate that $T$ is not open. -Since $\mathrm{ran}(T)$ is not closed there is a sequence $\{y_n\}$ in $\mathrm{ran}(T)$ and a point $y\in Y\setminus\mathrm{ran}(T)$ such that $y_n\to y$. This means that there are corresponding $x_n\in X$ such that $y_n=T(x_n)$. 
Since $T$ is continuous it cannot be that $\{x_n\}$ is a convergent sequence or $y$ would be in the range of $T$. -Since $\{x_n\}$ does not converge it is not Cauchy. So there exists an $\epsilon>0$ such that $\forall N\in\mathbb{N} \ \ \exists n,m \ge N $ s.t. $\|x_n-x_m\|>\epsilon$. On the other hand, since $y_n\to y$, there is an $M\in\mathbb{N}$ such that $\forall k\ge M \ \ \ \|T(x_k)-y\|<\delta\frac{\epsilon}{2}$. By choosing $N=M$, there exist $n,m\ge N$ such that $\|x_n-x_m\|>\epsilon$, $\ \|T(x_n)-y\|<\delta\frac{\epsilon}{2}$, and $\|T(x_m)-y\|<\delta\frac{\epsilon}{2}$. By the Triangle Inequality, $\|T(x_n)-T(x_m)\|=\|T(x_n-x_m)\|<\delta \ \epsilon$. -Let $x=(x_n-x_m) \in X$. Then -\begin{align*} - \frac{\|T(x)\|}{\|x\|} &< \frac{\delta \ \epsilon}{\epsilon} \\ - &= \delta. -\end{align*} -Thrm 2: Suppose $X$ and $Y$ are both Banach spaces, and $T:X\to Y$ is a bounded linear operator. Then $\mathrm{ran}(T)$ is closed in $Y$ if and only if $T$ is open. -Proof: If $\mathrm{ran}(T)$ is closed then $T$ is a surjective map onto this subspace, so by the Open Mapping Theorem $T$ is open. -The other direction is just the first theorem. -EDIT: The so-called "Thrm 2" is false, as per the counterexample provided in comments. The "proof" makes the false assumption that $Y=T(X)$.<|endoftext|> -TITLE: Finding the distance between two triangles, one inside the other -QUESTION [10 upvotes]: I have two right triangles. One is a $6$-$8$-$10$ and inside it is a $3$-$4$-$5$, and the space between the two triangles is a uniform amount. -I made a really awkward and weird pic of the diagram and I need to solve for $X$ - -How would I go about solving this? I couldn't figure out any good approaches. -REPLY [18 votes]: Draw lines connecting the "respective vertices". Add up the areas to solve for $x$. - -The area of a right triangle with the non-hypotenuse sides being $a$ and $b$ is $\frac{1}{2}(a \times b)$, while the area of a trapezium with parallel sides being $a$ and $b$ and the height being $h$ is $\frac{1}{2}h(a+b)$ -$$\frac{1}{2}x(5+10) + \frac{1}{2}x(3+6) + \frac{1}{2}x(4+8) + \frac{1}{2}(3 \times 4) = \frac{1}{2} (6 \times 8)$$ -$$36x + 12 = 48 \Rightarrow x = 1$$ -EDIT: -Let us try to look at a slightly general case. Take a triangle and scale it to another similar triangle with the scale factor being $t$. - -Let the sides of the inner triangle be $a$, $b$ and $c$. -The perimeter and area of the inner triangle are $P$ and $A$ respectively. -The sides of the outer triangle are $ta$, $tb$ and $tc$ while the perimeter and area are $tP$ and $t^2A$. -As before, joining the "respective" vertices and summing the areas gives us -$$A + \frac{1}{2}x(ta + a) + \frac{1}{2}x(tb + b) + \frac{1}{2}x(tc + c) = t^2A$$ -$$A + \frac{1}{2}x(t+1)P = t^2A \Rightarrow \frac{1}{2}x(t+1)P = (t^2-1)A$$ -$$x = \frac{2A}{P}(t-1)$$ -In the problem asked, $t=2$ with $P = 2A = 12$ and hence we get $x=1$. -Also, $t=1$ gives $x=0$ as expected. -This also gives a nice proof that the radius of the incircle of a triangle is $$r_{in} = \frac{2A}{P}$$ -This is obtained by plugging in $t=0$ and realizing that $\left| x \right|$ is nothing but the radius of the incircle. -Hence, $$x = (t-1)r_{in}$$<|endoftext|> -TITLE: Visual representation of groups -QUESTION [6 upvotes]: I vaguely remember seeing something like a "picture" of various groups a while back. It was as if the elements of the group were each associated with a point and many points had segments connecting them, but not all were connected. -Does anyone know what I am talking about? 
If so, would you care to take the time to explain the basics (or point me to a resource, if you think google won't help)? -Thanks. -REPLY [5 votes]: Another visualization of a group on a graph is the cycle graph. These are much less regular than a Cayley graph, so might better match your "not all points were connected." In fact, Cayley, Schreier coset, and cycle graphs are all connected, but they get less regular down the list. -The cycle graph has vertices the elements of the group, and for each maximal cyclic subgroup, a generator $x$ is chosen, and edges are drawn between $x^i$ and $x^{i+1}$. It is not entirely clear to me whether all cycle graphs of a given group (for different choices of generators) are isomorphic, but they are for small groups at least. Here are a few pictures from wikipedia: the elementary abelian group of order 16, the dihedral group of order 12, and the modular (but not Dedekind) group of order 16.<|endoftext|> -TITLE: Perfect Matching in a bipartite graph with a constrained degree sequence -QUESTION [7 upvotes]: Consider a bipartite graph $G$ with two partition sets $U$ and $V$ of the same size $n$. In each set, there are $d$ vertices of degree $(n-d+1)$, and $n-d$ vertices of degree $(n-d)$. Can we find a perfect matching in this graph? -So basically, the two sets are the same in terms of degree sequence, although one has no idea how vertices are adjacent to each other. I tried to use Hall's theorem here, but I got stuck. Take any subset $S$ of $U$ that contains some vertices of degree $n-d+1$ and some of degree $n-d$; then I am not able to show $|N(S)| \geq |S|$. - -EDIT: The first question is solved and now a more general and perhaps research-oriented question is: What if in each set $U$ and $V$, there are $d$ vertices of degree AT LEAST $(n-d+1)$, and $n-d$ vertices of degree AT LEAST $(n-d)$. Can we still find a perfect matching? - -The method for the previous question fails here. -REPLY [5 votes]: You can bound $\lvert N(S)\rvert$ from below by considering the worst case that all neighbours have the maximal degree, $n-d+1$. Then each vertex in $S$ of degree $n-d+1$ has as many outgoing edges as a worst-case neighbour has incoming edges, so the "cost balance" of such a vertex is neutral. Each vertex in $S$ of degree $n-d$ "saves" one edge, but there can be at most $n-d$ of these in $S$, so only $n-d$ edges can be "saved", whereas $n-d+1$ edges would have to be "saved" to "save" one worst-case neighbour. So no neighbour can be "saved" even in the worst case, and thus there must be as many neighbours as vertices in $S$. -I can write that out in formulas if you want, but you said you wanted a hint. -[Edit in response to request for formulas:] -Since all edges incident at a vertex in $S$ are also incident at a vertex of $N(S)$, we have -$$\sum_{w\in N(S)}\deg(w)\ge\sum_{v\in S}\deg(v)\;.$$ -If $S$ contains $n_-$ vertices of degree $n-d$ and $n_+$ vertices of degree $n-d+1$, then -$$\begin{eqnarray} -\sum_{v\in S}\deg(v)&=&n_-(n-d)+n_+(n-d+1)\\\ -&=&(n_-+n_+)(n-d+1)-n_-\\\ -&=&\lvert S \rvert (n-d+1)-n_-\\\ -&\ge&\lvert S \rvert (n-d+1)-(n-d)\;, -\end{eqnarray}$$ -since there are only $n-d$ vertices of degree $n-d$. 
On the other hand, since no vertex has degree more than $n-d+1$, we also have -$$\sum_{w\in N(S)}\deg(w)\le \lvert N(S) \rvert (n-d+1)\;.$$ -Putting this all together gives -$$\lvert N(S) \rvert (n-d+1) \ge \lvert S \rvert (n-d+1)-(n-d)\;,$$ -$$\lvert N(S) \rvert \ge \lvert S \rvert -\frac{n-d}{n-d+1}\;.$$ -But $\lvert N(S) \rvert$ and $\lvert S \rvert$ are integers and the last term is less than $1$, so we can drop the last term to conclude that $\lvert N(S) \rvert\ge\lvert S \rvert$. Linking this back to the original answer, each vertex in $S$ of degree $n-d$ can change the balance in the penultimate inequality by at most $1$, and there are at most $n-d$ such vertices, so the balance can only be changed by $n-d$, but it would have to change by $n-d+1$ to make a difference in the last inequality, where the division by $n-d+1$ corresponds to the fact that we need to "save" $n-d+1$ edges in order to "save" one worst-case neighbour.<|endoftext|> -TITLE: In generatingfunctionology, for a polynomial $P$ and a differential operator $D$, what does $P(xD)$ mean? -QUESTION [11 upvotes]: I'm working through some of the exercises in generatingfunctionology. One of the questions is to find the generating function where the $n$th term $a_n=P(n)$ for $P$ a polynomial. The answer is $P(xD)(1/(1-x))$. I thought it meant something like if $P(n)=3n+n^2$, then $P(xD)=3xD+x^2D^2$, and then you would apply that to $1/(1-x)$. However, I know the generating function for $a_n=n^2$ is $$\frac{x(x+1)}{(1-x)^3}$$ -which is not the same as $x^2D^2(1/(1-x))$. So what does it mean? Thanks. -REPLY [16 votes]: The problem is that you've mistakenly assumed commutativity. Here $\rm\:x\:$ and $\rm\:D\:$ do not commute. Indeed $\rm\,\color{#c00}{D\:\!x = x\:\!D + 1}\,$ since $\rm\,(D\:\!x)\:\!f\:\!=\:\!D\:\!(x\:\!f)\:\!=\:\!x\:\!f\:\!'+f\:\!=\:\!(x\:\!D + 1)\:\!f.\,$ So it's not true that $\rm\:\!(x\:\!D)^2 =\:\!x^2\:\!D^2.\,$ Instead $\rm\,x\:\!\color{#c00}{D\:\!x}\:\!D\:\!=\:\!x\:\!(\color{#c00}{x\:\!D + 1})\:\!D\:\!=\:\!x^2\:\!D^2 + x\:\!D.\,$ Generally -$$\rm (x\:\!D)^n\:\!=\:\!\sum_{k\:\!=\:\!0}^n\:\!S(n,k)\:\!x^k\:\!D^k $$ -where $\rm\:\!S(n,k)\:\!$ are the Stirling numbers of the second kind. -Beware $\, $ Since this operator algebra is noncommutative, many familiar polynomial identities do not hold true. For example, consider the proof of the difference of squares identity -$$\rm\:\!(x-y)\:\!(x+y)\:\!=\:\!x^2 + \color{#0a0}{x\:\!y - y\:\!x} - y^2\:\!=\:\!x^2 - y^2$$ -Notice how commutativity is assumed to cancel the $\rm\color{#0a0}{middle}$ terms. Thus the identity needn't remain true if we substitute elements from some noncommutative ring, e.g. we cannot substitute $\rm\,D\,$ for $\rm\,y.\,$ Said slightly more technically, the polynomial evaluation map is a ring homomorphism iff coefficients and indeterminates commute. See this post for further remarks on this topic.<|endoftext|> -TITLE: Statements with no counterexample -QUESTION [12 upvotes]: Is there some proven statement that tells us "not all X satisfy Y", but there are currently no examples of X which do not satisfy Y? -Is it necessarily true that there are always explicit counterexamples? Or are there statements that only guarantee the existence of such counterexamples, whose construction is impossible in ZFC? -REPLY [2 votes]: At least one of the numbers $\pi e$, $\pi+e$ is irrational. But we don't know which (yet). 
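-For completeness, here is the standard argument behind this claim, spelled out (it is not included in the original answer): if both $\pi e$ and $\pi+e$ were rational, then $\pi$ and $e$ would be the two roots of
-$$
-z^2-(\pi+e)z+\pi e=(z-\pi)(z-e),
-$$
-a quadratic with rational coefficients, so both would be algebraic, contradicting the transcendence of $\pi$ (and of $e$).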
-So in your notation, let $X=\{\pi e,\pi+e\}$, and let $Y$ be the statement "$x$ is rational".<|endoftext|> -TITLE: Expectation of the maximum of i.i.d. geometric random variables -QUESTION [35 upvotes]: Given $n$ independent geometric random variables $X_1, \dots, X_n$, each with probability parameter $p$ (and thus expectation $E\left(X_i\right) = \frac{1}{p}$), what is -$$E_n = E\left(\max_{i \in 1 .. n}X_i\right)$$ - -If we instead look at a continuous-time analogue, e.g. exponential random variables $Y_1, \dots, Y_n$ with rate parameter $\lambda$, this is simple: -$$E\left(\max_{i \in 1 .. n}Y_i\right) = \sum_{i=1}^n\frac{1}{i\lambda}$$ -(I think this is right... that's the time for the first plus the time for the second plus ... plus the time for the last.) -However, I can't find something similarly nice for the discrete-time case. - -What I have done is to construct a Markov chain modelling the number of the $X_i$ that haven't yet "hit". (i.e. at each time interval, perform a binomial trial on the number of $X_i$ remaining to see which "hit", and then move to the number that didn't "hit".) This gives -$$E_n = 1 + \sum_{i=0}^n \left(\begin{matrix}n\\i\end{matrix}\right)p^{n-i}(1-p)^iE_i$$ -which gives the correct answer, but is a nightmare of recursion to calculate. I'm hoping for something in a shorter form. -REPLY [26 votes]: There is no nice, closed-form expression for the expected maximum of IID geometric random variables. However, the expected maximum of the corresponding IID exponential random variables turns out to be a very good approximation. More specifically, we have the hard bounds -$$\frac{1}{\lambda} H_n \leq E_n \leq 1 + \frac{1}{\lambda} H_n,$$ -and the close approximation -$$E_n \approx \frac{1}{2} + \frac{1}{\lambda} H_n,$$ -where $H_n$ is the $n$th harmonic number $H_n = \sum_{k=1}^n \frac{1}{k}$, and $\lambda = -\log (1-p)$, the parameter for the corresponding exponential distribution. -Here's the derivation. Let $q = 1-p$. Use Did's expression with the fact that if $X$ is geometric with parameter $p$ then $P(X \leq k) = 1-q^k$ to get -$$E_n = \sum_{k=0}^{\infty} (1 - (1-q^k)^n).$$ -By viewing this infinite sum as right- and left-hand Riemann sum approximations of the corresponding integral we obtain -$$\int_0^{\infty} (1 - (1 - q^x)^n) dx \leq E_n \leq 1 + \int_0^{\infty} (1 - (1 - q^x)^n) dx.$$ -The analysis now comes down to understanding the behavior of the integral. With the variable switch $u = 1 - q^x$ we have -$$\int_0^{\infty} (1 - (1 - q^x)^n) dx = -\frac{1}{\log q} \int_0^1 \frac{1 - u^n}{1-u} du = -\frac{1}{\log q} \int_0^1 \left(1 + u + \cdots + u^{n-1}\right) du $$ -$$= -\frac{1}{\log q} \left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right) = -\frac{1}{\log q} H_n,$$ -which is exactly the expression the OP has above for the expected maximum of $n$ corresponding IID exponential random variables, with $\lambda = - \log q$. -This proves the hard bounds, but what about the more precise approximation? The easiest way to see that is probably to use the Euler-Maclaurin summation formula for approximating a sum by an integral. Up to a first-order error term, it says exactly that -$$E_n = \sum_{k=0}^{\infty} (1 - (1-q^k)^n) \approx \int_0^{\infty} (1 - (1 - q^x)^n) dx + \frac{1}{2},$$ -yielding the approximation -$$E_n \approx -\frac{1}{\log q} H_n + \frac{1}{2},$$ -with error term given by -$$\int_0^{\infty} n (\log q) q^x (1 - q^x)^{n-1} \left(x - \lfloor x \rfloor - \frac{1}{2}\right) dx.$$ -One can verify that this is quite small unless $n$ is small or $q$ is extreme. 
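-As a quick numerical sanity check of these bounds and of the approximation (a worked example added here, not taken from the paper cited below): take $p = \frac12$ and $n = 2$, so $q = \frac12$ and $\lambda = \log 2$. Then
-$$
-E_2=\sum_{k=0}^{\infty}\left(1-(1-2^{-k})^2\right)=2\sum_{k=0}^{\infty}2^{-k}-\sum_{k=0}^{\infty}4^{-k}=4-\frac43=\frac83\approx 2.667,
-$$
-while $\frac{1}{\lambda}H_2 = \frac{3/2}{\log 2}\approx 2.164$ and $\frac12+\frac{1}{\lambda}H_2\approx 2.664$, so the exact value lies between the hard bounds and is very close to the approximation.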
-All of these results, including a more rigorous justification of the approximation, the OP's recursive formula, and the additional expression -$$E_n = \sum_{i=1}^n \binom{n}{i} (-1)^{i+1} \frac{1}{1-q^i},$$ -are in Bennett Eisenberg's paper "On the expectation of the maximum of IID geometric random variables" (Statistics and Probability Letters 78 (2008) 135-143).<|endoftext|> -TITLE: Matrix Inverses and Eigenvalues -QUESTION [25 upvotes]: I was working on the problem below, but I don't seem to know a precise or clean way to write the proof. I had a few ways of doing it, but the statements/operations were pretty loosely used. The problem is as follows: -Show that ${\bf A}^{-1}$ exists if and only if the eigenvalues $ \lambda _i$ , $1 \leq i \leq n$ of $\bf{A}$ are all non-zero, and then ${\bf A}^{-1}$ has the eigenvalues given by $ \frac{1}{\lambda _i}$, $1 \leq i \leq n$. -Thanks. -REPLY [2 votes]: Here is a different approach that gives more information. It shows that the algebraic multiplicities match up as well. I'll defer to previous answers for the fact that a matrix is invertible iff $0$ is not an eigenvalue, which will be used implicitly below. I just focus on the eigenvalues of the inverse. -Suppose $A$ is an invertible $n \times n$ matrix. Assuming (WLOG) we're working in an algebraically closed field, the eigenvalues are the roots of the polynomial $\det(A-\lambda I) = \prod_{1 \leq i \leq n} (\lambda_i - \lambda)$. Using the properties of determinants, we get: -$$ \det(A^{-1} - \lambda I) = \det(A^{-1}) \det(I - \lambda A) = \det(A^{-1}) ( -\lambda)^n \det(A - \lambda^{-1}I)$$ -$$ = \det(A^{-1}) ( -\lambda)^n \prod_{1 \leq i \leq n} (\lambda_i - \lambda^{-1}) = \det(A^{-1}) \prod_{1 \leq i \leq n} (1 - \lambda \lambda_i) $$ -$$ = \det(A)^{-1} \prod_{1 \leq i \leq n} \lambda_i \prod_{1 \leq i \leq n} (\lambda_i^{-1} - \lambda) = \det(A)^{-1} \det(A) \prod_{1 \leq i \leq n} (\lambda_i^{-1} - \lambda)$$ -$$ = \prod_{1 \leq i \leq n} (\lambda_i^{-1} - \lambda).$$<|endoftext|> -TITLE: Proof that $X$ contractible $\iff \forall f:X \rightarrow Y$ : $f \cong const.$ -QUESTION [10 upvotes]: I'm reading Hatcher and I did exercise 10 on page 19. Can you tell me if my answer is correct? Many thanks for your help! -Claim: $X$ contractible $\Leftrightarrow \forall$ arbitrary maps $f:X \rightarrow Y$, $Y$ arbitrary, $f \cong const.$ -Proof: -$\implies$ -Given the homotopy $H: I \times X \rightarrow X$, $H(0,x) = id_X$, $H(1,x) = id_{ \{ \ast \}}$ and an arbitrary map $f: X  \rightarrow Y$ construct a homotopy $H^\prime: I \times X \rightarrow Y$, $H^\prime(0,x) = f(x)$, $H^\prime(1,x)=const_{y_0 \in Y}$ as follows: -$H^\prime (t,x) := f(H(t,x))$. -Note that even though not stated in the exercise, $f$ is assumed to be continuous. (At least I think that's the case) -$\impliedby$ -Given $\forall f:X \rightarrow Y$: $f \cong const.$ we pick $Y := X$ then $f:X \rightarrow X$ $\cong const_{x_0} \forall f$. Now pick $f := id_X$. -Second part of question: -Claim: $X$ contractible $\iff \forall f: Y \rightarrow X$: $f \cong const_X$ -Proof: -$\implies$ -Define $H^\prime(t,x) := H(t, f(x))$. -$\impliedby$ -Choose $Y:=X$ and $f:=id_X$. -REPLY [4 votes]: This looks basically right, although to nitpick I might argue that you should either write that $H(0,-)=id_X$ and $H(1,-)=const._*$ (both as maps $X\rightarrow X$) or you should write that $H(0,x)=x$ and $H(1,x)=*$ for all $x\in X$. 
-Also, I think I can confidently say that every map in any topology book can be assumed to be continuous. Similarly, any function of groups can generally be assumed to be a homomorphism, etc. The fancy way of saying this here is that we're working in the category of topological spaces, whose morphisms are continuous maps.<|endoftext|> -TITLE: Proof of another Hatcher exercise: homotopy equivalence induces bijection -QUESTION [10 upvotes]: I'm stuck on the first half of exercise 12 on page 19 in Hatcher: -Exercise: -Show that a homotopy equivalence $f : X \rightarrow Y$ induces a bijection between the set of path-components of $X$ and the set of path-components of $Y$, and that $f$ restricts to a homotopy equivalence from each path-component of $X$ to the corresponding path-component of $Y$. -By the definition of homotopy equivalence, there exists $g$ such that $gf \simeq id_X$ and $fg \simeq id_Y$. So $g$ is a two-sided inverse of $f$, therefore $f$ is a bijection. -I'm stuck now because I don't know what there is to prove to show that "$f$ restricts to a homotopy equivalence from each path-component of $X$ to the corresponding path-component of $Y$". -Is there anything to prove? If $f$ is a homotopy equivalence, is it not obvious that any restriction is also a homotopy equivalence? Many thanks for helping me, I really appreciate it. -Edit: -So here is an attempted proof that $f^\ast:\pi_0(X) \rightarrow \pi_0(Y)$ is a bijection: -Define $f^\ast$ as $f^\ast (A) := B$ where $B$ is the path-(connected)-component that contains $f(A)$ and $A \in \pi_0 (X)$. -Claim: $f^\ast$ is bijective -Proof: -(i) injective: -Let $f(A) = f(B)$ where $A, B \in \pi_0(X)$. -Then $f(A) \cup f(B)$ is a path-(connected)-component. -Then $f(A \cup B)$ is also a path-component. -By the definition of homotopy equivalence, there exists $g$ (continuous) such that $gf \simeq id_X$ and $fg \simeq id_Y$. So $g(f(A \cup B)) = A \cup B$ is path-connected. -$\implies A = B$. -In the proof above I used the following claim: -Claim: $A$ path-connected, $f$ continuous $\implies f(A)$ path-connected. -Proof: Assume $f(A)$ not path-connected, then $f(A)$ not connected, then $g(f(A))=A$ not connected $\implies$ contradiction. -(ii) surjective: -Let $B \in \pi_0(Y)$. Then for $A := g(B)$, $f(A) = B$. -Can you tell me if my proof is correct? I really appreciate your help! -REPLY [11 votes]: Be careful: A homotopy equivalence is by no means a bijection in general: Let $X = \mathbb{R}$ and $Y = \{\ast\}$. Also, a homotopy equivalence need not restrict to a homotopy equivalence of subspaces. For instance, the orthogonal projection of $\mathbb{R}^2$ onto the $x$ axis is a homotopy equivalence. However, it maps the unit circle to the interval $[-1,1]$ and these two spaces certainly aren't homotopy equivalent (but this isn't completely obvious). Even simpler, take the subspace $([-1,1] \times \{0\}) \cup ([-1,1] \times \{1\})$ of $\mathbb{R}^2$ and its image $[-1,1]$ under the projection: you'll get a contradiction to your exercise. -The expressions $gf \simeq \operatorname{id}_{X}$ and $fg \simeq \operatorname{id}_{Y}$ mean that $gf$ and $fg$ are homotopic to the respective identities. (btw: it is more customary to use $\simeq$ \simeq instead of $\cong$ for "homotopic") -A continuous map sends path components into path components (because it sends paths to paths). 
The path component of a point $x$ is sent into the path component of its image $f(x)$ and the path component of $f(x)$ is sent into the path component of $gf(x)$. But as $gf \simeq \operatorname{id}_{X}$, a homotopy $H: X \times [0,1] \to X$ such that $H(\cdot,0) = gf$ and $H(\cdot,1) = \operatorname{id}_{X}$ gives us the path $\gamma(t) = H(x,t)$ connecting $gf(x)$ with $x$, so the path component of $gf(x)$ is the same as the path component of $x$. I'll let you finish up the argument yourself. - -Added in response to the edited question. -Let $[x]$ be the path component of $x$. Let me write $f_{\ast}: \pi_{0}(X) \to \pi_{0}(Y)$ instead of $f^\ast$. By definition $f_{\ast}[x] = [f(x)]$ (check that this is well-defined!). In my last paragraph above I argued that $[gf(x)] = [x]$ so $g_\ast f_\ast [x] = [x]$ (check that $(gf)_\ast = g_\ast f_\ast$!). In other words $g_\ast f_\ast = (\operatorname{id}_{X})_{\ast} = \operatorname{id}_{\pi_{0}(X)}$ (check that $(\operatorname{id}_X)_\ast = \operatorname{id}_{\pi_{0}(X)}$!). In particular, $f_{\ast}$ is injective and $g_\ast$ is surjective. By symmetry we have $f_\ast g_\ast = \operatorname{id}_{\pi_{0}(Y)}$, so $f_{\ast}$ and $g_\ast$ are mutually inverse bijections. - -A bit later -Maybe it's better to start from scratch. -Define an equivalence relation on $X$ by $x \sim x'$ if and only if there is a path connecting $x$ and $x'$. The equivalence classes of $\sim$ are precisely the path components of $X$. In other words, $\pi_{0}(X) = X/\!\!\sim$. Write $[x]$ for the $\sim$-equivalence class of $x$ (so $[x]$ is the path component of $x$). - -Fact 1: A continuous map $f: X \to Y$ sends path components into path components. In other words, if $x \sim x'$ then $f(x) \sim f(x')$. -Indeed, if $\gamma: [0,1] \to X$ is a path with $\gamma(0) = x$ and $\gamma(1) = x'$ then $f \circ \gamma: [0,1] \to Y$ is a path and $f(\gamma(0)) = f(x)$ and $f(\gamma(1)) = f(x')$, so $f(x) \sim f(x')$. - -Again, in other words, Fact 1 tells us that $f$ yields a map $f_{\ast}: \pi_{0}(X) \to \pi_{0}(Y)$. Explicitly, -\[ -f_{\ast}([x]) = [f(x)]. -\] -This is well-defined because for $x \sim x'$ we have $f(x) \sim f(x')$, so $[f(x)] = [f(x')]$. -From this description we see that $(\operatorname{id}_{X})_{\ast}([x]) = [\operatorname{id}_{X}(x)] = [x] = \operatorname{id}_{\pi_{0}(X)}([x])$, so $(\operatorname{id}_{X})_{\ast} = \operatorname{id}_{\pi_{0}(X)}$. Also $(gf)_{\ast}([x]) = [gf(x)] = g_{\ast}([f(x)]) = g_{\ast}f_{\ast}([x])$, so $(gf)_{\ast} = g_{\ast}f_{\ast}$. -Since the two identities I've just proven are so important, let me state them again for emphasis: -Fact 2: For the identity $\operatorname{id}_{X}: X \to X$ and any two maps $f:X \to Y$ and $g: Y \to Z$ -we have $$\displaystyle -(\operatorname{id}_{X})_{\ast} = \operatorname{id}_{\pi_{0}(X)}: -\pi_{0}(X) \to \pi_{0}(X) \qquad\text{and}\qquad (gf)_{\ast} = g_\ast f_\ast : \pi_{0}(X) \to \pi_{0}(Z) -$$ - -Fact 3: If $f, f': X \to Y$ are homotopic, $f \simeq f'$, then $f_{\ast} = f'_{\ast}: \pi_{0}(X) \to \pi_{0}(Y)$. -Indeed, pick a homotopy $H: X \times [0,1] \to Y$ such that $f = H(\cdot,0)$ and $f' = H(\cdot,1)$. Since $t \mapsto H(x,t)$ is a path connecting $f(x)$ to $f'(x)$ we see that $f(x) \sim f'(x)$, so $f_{\ast}[x] = [f(x)] = [f'(x)] = f'_\ast[x]$. - -Now we are finally in shape to solve the exercise. Let $f: X \to Y$ and $g: Y \to X$ be such that $gf \simeq \operatorname{id}_{X}$ and $fg \simeq \operatorname{id}_{Y}$. 
Then combining facts 2 and 3 we have that
-$$\displaystyle
-(\operatorname{id}_{X})_\ast = g_{\ast} f_{\ast}
-\qquad \text{and}\qquad
-(\operatorname{id}_{Y})_\ast = f_{\ast} g_{\ast}
-$$
-and as $(\operatorname{id}_X)_\ast = \operatorname{id}_{\pi_{0}(X)}$ we see that $f_{\ast}$ and $g_{\ast}$ are mutually inverse bijections.
-
-REPLY [2 votes]: If $D$ is a path component of $X$, then $f$ restricts to a map $D \to Y$ which lands in some path component $E$ of $Y$. If you want to claim that $f$ automatically restricts to a homotopy equivalence from $D$ to $E$, you have to show first that $g(E) \subseteq D$ and second that the induced homotopy $H$ between $gf$ and $\text{id}_D$ never has image outside of $D$ (and the corresponding statement for $E$). This is stronger than what you're being asked to prove; all you need to show is that $f$ and $g$ induce inverse maps on path components.<|endoftext|>
-TITLE: Induction problem: log of product equals sum of logs
-QUESTION [5 upvotes]: Please help me prove by induction that
-
-$\displaystyle\forall n\in {{\mathbb{N}}^{*}}$, $\displaystyle\forall {{a}_{1}},\ldots ,{{a}_{n}}\in {\mathbb{R}}^{*}_{+}$, $\displaystyle \ln \left( \prod\limits_{j=1}^{n}{{{a}_{j}}} \right)=\sum\limits_{j=1}^{n}{\ln \left( {{a}_{j}} \right)}$.
-Deduce that $\displaystyle \forall n\in \mathbb{Z},\forall a\in {\mathbb{R}}^{*}_{+}$, $\displaystyle \ln \left( {{a}^{n}} \right)=n\ln a$.
-
-REPLY [4 votes]: HINT $\rm\displaystyle\ \ \ \ f(n)\ =\ \prod_{j\ =\ 1}^n\ a_j $
-$\rm\quad \iff\ \ \ \:f\:(n)\ =\:\ a_n\ \: * \ \ f\:(n-1),\:\ \ \ f\:(0)\: = 1$
-$\rm\quad \iff\ \ F(n)\ =\ A_n + F(n-1),\ \ F(0) = 0\:,\ $ with $\rm\ \ F(n) = \ln\: f(n)\:,\ \ A_n = \ln\: a_n$
-$\rm\displaystyle\quad \iff\ \ F(n)\ =\ \sum_{j\ =\ 1}^n\ A_j$
-The first and last equivalences are the recursive definitions of $\rm\:\Pi\:$ and $\rm\:\Sigma\:.$
-The middle equivalence follows from $\rm\ \ln\ (x\ *\ y)\ =\ \ln\ x\ +\ \ln\ y\:.$<|endoftext|>
-TITLE: What is the 'implicit function theorem'?
-QUESTION [53 upvotes]: Please give me an intuitive explanation of the 'implicit function theorem'. I read some bits and pieces of information from some textbook, but they look too confusing, especially I do not understand why they use the Jacobian matrix to illustrate this theorem.
-
-REPLY [10 votes]: There are basically two interpretations of (part of) the implicit function theorem (IMFT).
-One is that it tells you under what conditions we have solutions to some equation of the form $f=0$, which has been mentioned in Zhen Lin's answer.
-More precisely,
-
-let $E$ be an open set of ${\Bbb R}^{n+m}={\Bbb R}^n\times{\Bbb R}^m$ and $f:E\to {\Bbb R}^n$ be a $C^1$ mapping such that $f(a,b)=0$, where $(a,b)\in {\Bbb R}^n\times {\Bbb R}^m$. Put $A=f'(a,b)$ and assume that $A_x$ is invertible, where $A_x$ denotes the restriction of the linear map $A$ on ${\Bbb R}^n\times\{0\}$. $\tag{*}$
-
-Then we can "solve" the equation $f(x,y)=0$ for each of those $y$ near $b$, which means in an open neighborhood $W\subset{\Bbb R}^m$ of $b$, we have an implicitly defined function $g:W\to {\Bbb R}^n$ such that $f(g(y),y)=0$ for every $y\in W$. In particular, $f(g(b),b)=f(a,b)=0$. The word "solve" might be kind of confusing. After all, what does it mean to solve an equation $f=0$? Basically it means one can write some variables in terms of others. On the other hand, the IMFT only tells you the existence of $g$ but not what $g$ looks like.
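-To see the "existence of $g$" in action, here is a minimal numerical sketch (a sketch of my own, not part of the original answer): take $f(x,y)=x^2+y^2-1$ with $n=m=1$ and $(a,b)=(1,0)$. The partial derivative in $x$ is $2x$, which is invertible at $(a,b)$, so the theorem promises an implicit $g$ near $b=0$, and we can trace it by Newton iteration in the $x$-variable alone, e.g. in Python:
-
-    # Sketch: numerically trace the implicit g promised by the IMFT for
-    # f(x,y) = x^2 + y^2 - 1 near (a,b) = (1,0), where df/dx = 2x != 0.
-    def f(x, y): return x**2 + y**2 - 1.0
-    def fx(x, y): return 2.0 * x            # partial derivative in x
-
-    def g(y, x0=1.0, steps=20):
-        x = x0
-        for _ in range(steps):
-            x = x - f(x, y) / fx(x, y)      # Newton step in x only
-        return x
-
-    for y in (0.0, 0.1, 0.2):
-        print(y, g(y), (1.0 - y * y) ** 0.5)  # agrees with sqrt(1 - y^2)
-
-Here the iteration reproduces $g(y)=\sqrt{1-y^2}$ numerically, even though the theorem itself only asserts that some such $g$ exists.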
-Another way to look at the IMFT is that it tells you when a set defined by
-$$
-S=\{z\in{\Bbb R}^d|f(z)=0\}
-$$
-is locally a graph of some function. (Here "being locally a graph" means one can find an open set $U\subset {\Bbb R}^d$ such that $U\cap S$ is a graph.) Under the assumption $(*)$, we would have the following conclusion
-
-there exist open sets $U\subset{\Bbb R}^{n+m}$ and $W\subset {\Bbb R}^m$, with $(a,b)\in U$ and $b\in W$, such that
- $$
-U\cap S=G:=\{(g(y),y)\mid y\in W\},
-$$
- where $g$ is a function from $W$ to ${\Bbb R}^n$.
-
-Note that we call $G$ the graph of the function $g$.
-Using the notations in the second interpretation, the first one basically tells you
-$$
-G\subset U\cap S.
-$$<|endoftext|>
-TITLE: Legendre symbol: Showing that $\sum_{m=0}^{p-1} \left(\frac{am+b}{p}\right)=0$
-QUESTION [5 upvotes]: I have a question about the Legendre symbol.
-Let $a$, $b$ be integers. Let $p$ be a prime not dividing $a$. Show that the Legendre symbol verifies:
-$$\sum_{m=0}^{p-1} \left(\frac{am+b}{p}\right)=0.$$
-I know that $\displaystyle\sum_{m=0}^{p-1} \left(\frac{m}{p}\right)=0$, but how do I connect this with the previous formula?
-Any help is appreciated.
-
-REPLY [4 votes]: To allow the question to be marked as answered, then:
-Show that as $m$ ranges from $0$ to $p-1$, $am$ ranges over all residue classes modulo $p$, and hence $am+b$ ranges over all residue classes modulo $p$.<|endoftext|>
-TITLE: Fubini's theorem problem
-QUESTION [7 upvotes]: Let $f$ be a non-negative measurable function in $\mathbb{R}$. Suppose that $$\iint_{\mathbb{R}^2}{f(4x)f(x-3y)\,dxdy}=2\,.$$ Calculate $\int_{-\infty}^{\infty}{f(x)\,dx}$.
-
-My first thought was to change the order of integration so that I integrate in $y$ first, since there's only a single $y$ in the integrand... but I'm not sure how/if that even helps me.
-Then the more I thought about it, the less clear it was to me that Fubini's theorem even applies as it's written. Somehow I need a function of two variables. So should I set $g(x,y) = f(4x)f(x-3y)$ and do something with that? At least Fubini's theorem applies for $g(x,y)$, since we know it's integrable on $\mathbb{R}^2$. ... Maybe?
-I'm just pretty lost on this, so any help you could offer would be great. Thanks!
-
-REPLY [3 votes]: I think both Vitali and Matt are right.
As soon as $G(x,y)$ is integrable on $\mathbb{R}^2$, $$\iint_{\mathbb{R}^2}{G(x,y)\,dxdy}=\int_{\mathbb{R}}dx\,\int_{\mathbb{R}}{G(x,y)\,dy}=\int_{\mathbb{R}}dy\,\int_{\mathbb{R}}{G(x,y)\,dx} $$
-So you can substitute: $$ u=4x $$ $$ v=x-3y $$ $$ x=\frac{u}{4} $$ $$ y=\frac{u}{12}-\frac{v}{3} $$
-Jacobian $$ J=\det D(x(u,v),y(u,v))=\begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\\\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix}=\begin{vmatrix}\frac{1}{4} & 0\\\\\frac{1}{12} & -\frac{1}{3}\end{vmatrix}=-\frac{1}{12} $$
-Then $$\iint_{\mathbb{R}^2}{f(4x)\,f(x-3y)\,dxdy}=|J|\iint_{\mathbb{R}^2}{f(u)\,f(v)\,du\,dv}=\frac{1}{12}\Big(\int_{-\infty}^{\infty}{f(z)\,dz}\Big)^2=2$$
-(the absolute value $|J|=\frac{1}{12}$ of the Jacobian absorbs the orientation reversal in $v$).
-Thus giving us
-$$\int_{-\infty}^{\infty}{f(z)\,dz}=2\sqrt 6$$
-Please tell me if I $\mathbb{F}\bigoplus$ something up.<|endoftext|>
-TITLE: Regarding a proof involving geometric series
-QUESTION [5 upvotes]: Someone asked this question about how many ways there are to prove $0.999\dots = 1$ and I posted this:
-$$ 0.99999 \dots = \sum_{k = 1}^\infty \frac{9}{10^k} = 9 \sum_{k = 1}^\infty \frac{1}{10^k} = 9 \Big ( \frac{1}{1 - \frac{1}{10}} - 1\Big ) = \frac{9}{9} = 1$$
-The question was a duplicate so in the end it was closed but before that someone wrote in a comment to the question: "Guys, please stop posting pseudo-proofs on an exact duplicate!" and I got downvotes, so I assume this proof is wrong.
-Now I would like to know, of course, why this proof is wrong. I have thought about it but somehow I can't seem to find the mistake.
-Many thanks for your help. The original can be found here.
-
-REPLY [5 votes]: The problem is that you are assuming 1) that multiplication by constants distributes over infinite sums, and 2) the validity of the geometric series formula. Most of the content of the result is in 2), so it doesn't make much sense to me to assume it in order to prove the result. Instead you should prove 2), and if you really want to be precise you should also prove 1).<|endoftext|>
-TITLE: Center-commutator duality
-QUESTION [62 upvotes]: I'm reading this article by Keith Conrad, on subgroup series. I'm having trouble with a statement he makes on page 6:
-
-Any subgroup of $G$ which contains $[G,G]$ is normal in $G$.
-
-He says this as evidence that commutator and center play dual roles, since any subgroup of $G$ contained in $Z(G)$ is normal in $G$. Now, this I'm sure I understand, but I don't see how the quoted line holds.
-What I have read is that $[G,G]$ is the least normal subgroup of $G$ such that the quotient is abelian, which seems related.
-Also, while we're at it: are the center and the commutator "really" (as in, categorically) dual constructions? I'm quite a novice in category theory, so please excuse me if this question is trivial.
-
-REPLY [3 votes]: Here is a sort of "duality" (though very weak) between the center and commutator of a finite group, which occurred to me recently:
-Let $\rm{Irr}(G)$ be the set of irreducible complex characters of the finite group $G$. Then the commutator subgroup is equal to $$\bigcap_{\chi \in \rm{Irr}(G),\ \chi(1)=1}\rm{ker}(\chi)$$
-But if we look at something which in a certain sense is the complement of this, namely $$H = \bigcap_{\chi\in\rm{Irr}(G),\ \chi(1)\neq 1}\rm{ker}(\chi)$$ then this subgroup is contained in the center of $G$.
-Proof (only of the last part): We know that the intersection of the kernels of all irreducible characters is trivial, so we get that $H\cap G' = \{ 1\}$. But it is a general fact that any normal subgroup that intersects the commutator subgroup trivially is central. To see this, let $N$ be such a subgroup, $x \in N$ and $g\in G$. Then the element $xgx^{-1}g^{-1}$ is in $N$ since $N$ is normal, and it is in $G'$ by definition, so it is $1$, and hence $N$ is central.
-The above is of course not anywhere close to a proper duality, as we really look at a sort of complement instead, and we only get something contained in the center, rather than the entire center.
-Remark: The above is a special case of a more general result, which states that if $m$ is the degree of some irreducible character of $G$ then $$\bigcap_{\chi\in\rm{Irr}(G),\ \chi(1)\neq m}\rm{ker}(\chi)$$ is abelian, and in fact, one can go further and leave out $2$ degrees and get something of derived length at most $2$, and even leave out $3$ and get something of derived length at most $3$ (though for this last result, one needs to assume that $G$ is solvable to begin with). These results are proved in the paper "Irreducible character degrees and normal subgroups" by Isaacs and Knutson.<|endoftext|>
-TITLE: The Laplace transform of the first hitting time of Brownian motion
-QUESTION [21 upvotes]: Let $B_t$ be the standard Brownian motion process, $a > 0$, and let $H_a = \inf \{ t : B_t > a \}$ be a stopping time. I want to show that the Laplace transform of $H_a$ is
-$$\mathbb{E}[\exp(-\lambda H_a)] = \exp (-\sqrt{2\lambda}\, a)$$
-by considering the martingale $$M_t = \exp \left(\theta B_t -\frac{1}{2}\theta^2 t\right)$$
-There's an obvious argument to follow here: assuming the optional stopping theorem applies, we have
-$$1 = \mathbb{E}[M_{H_a}] = \mathbb{E} \left[ \exp \left(\theta a - \frac{1}{2}\theta^2 H_a\right) \right] = \exp(\sqrt{2\lambda} a) \mathbb{E} \left[ \exp(-\lambda H_a) \right]$$
-where $\theta = \sqrt{2\lambda}$. This is exactly what we wished to show. However, as far as I can tell, the hypotheses of the optional stopping theorem are not satisfied here. Here is the statement I have:
-
-If $(X_n)$ is a martingale and $T$ is an a.s. bounded stopping time, then $\mathbb{E}[X_T] = \mathbb{E}[X_0]$.
-
-I think not all is lost yet. $M_t > 0$ for all $t$, so the martingale convergence theorem applies, and $M_t \to M_\infty$ a.s. for some integrable random variable $M_\infty$. For each $t$, $H_a \wedge t = \min \{ H_a, t \}$ is a bounded stopping time, so certainly $\mathbb{E}[M_{H_a \wedge t}] = \mathbb{E}[M_0]$. But,
-$$\mathbb{E}[M_{H_a \wedge t}] = \mathbb{E}[M_{H_a} \mathbf{1}_{\{H_a \le t\}}] + \mathbb{E}[M_t \mathbf{1}_{\{H_a > t\}}]$$
-and clearly what one wants to do is to take $t \to \infty$ on both sides. But here's where I get stuck: I'm sure I need a convergence theorem here in order to conclude that the equation remains valid in the limit.
-Now, $0 < M_{H_a} = \exp \left(\theta a - \frac{1}{2} \theta^2 H_a \right) \le \exp(\theta a)$, so the dominated convergence theorem applies, and so
-$$\lim_{t \to \infty} \mathbb{E}[M_{H_a} \mathbf{1}_{\{H_a \le t\}}] = \mathbb{E}[M_{H_a} \mathbf{1}_{\{H_a < \infty\}}]$$
-and I believe Fatou's lemma gives me that
-$$\liminf_{t \to \infty} \mathbb{E}[M_t \mathbf{1}_{\{H_a > t\}}] \ge \mathbb{E}[M_{\infty} \mathbf{1}_{\{H_a = \infty\}}]$$
-but I think what I need is the equality
-$$\lim_{t \to \infty} \mathbb{E}[M_t \mathbf{1}_{\{H_a > t\}}] = \mathbb{E}[M_\infty \mathbf{1}_{\{H_a = \infty\}}]$$
-and as far as I can tell neither the monotone convergence theorem nor the dominated convergence theorem applies here. Is there anything I can do to rescue this line of thought?
-
-REPLY [12 votes]: Use the fact that $0\leqslant M_t\mathbf{1}_{\{H_a>t\}}\leqslant\exp\left(\theta a-\frac12\theta^2t\right)$.<|endoftext|>
-TITLE: Drinfeld Center
-QUESTION [7 upvotes]: Let $\mathscr{C}$ be a strict monoidal category. I will denote the product of $\mathscr{C}$ by $\otimes$. The Drinfeld center $\mathscr{Z(C)}$ of $\mathscr{C}$ is the category with objects $(X,\phi)$ where $X$ is an object of $\mathscr{C}$ and $\phi$ is a natural isomorphism from $X \otimes -$ to $ - \otimes X$. Morphisms from $(X,\phi)$ to $(Y,\psi)$ in $\mathscr{Z(C)}$ are all elements $f \in \mathrm{hom_\mathscr{C}(X,Y)}$ such that $(\mathrm{id}_W \otimes f) \circ \phi_W = \psi_W \circ (f \otimes \mathrm{id}_W)$ for all $W \in \mathscr{C}$.
-My question is the following. There must be a problem in the following reasoning, but I cannot find it. I wonder if anybody can point out the mistake.
-Fix any field $k$ of characteristic $0$. If $G$ is a finite group, then $\mathrm{Vec}_G$ the category of $G$-graded vector spaces is strict monoidal and moreover it is semisimple. Its simple objects are one dimensional vector spaces $V_g$ with grading given by an element $g \in G$. Morphisms are grading-preserving linear maps. In particular $V_g$ and $V_h$ are isomorphic if and only if $g = h$. The monoidal structure of $\mathrm{Vec}_G$ is given on simple objects by multiplication of elements in $G$: $V_g \otimes V_h = V_{gh}$. Now assume that $G$ has a trivial center. Then for any $g \in G$ with $g \neq e$ there is no natural isomorphism from $V_g \otimes - $ to $ - \otimes V_g$, since $V_{gh}$ and $V_{hg}$ are not isomorphic for some $h \in G$. Hence, its Drinfeld center is trivial.
-Note that the conclusion cannot be true, since $\mathscr{Z}(\mathrm{Vec}_G)$ is the representation category of the Drinfeld double $k[G] \ltimes \mathrm{Fun}(G)$ of $\mathrm{Fun}(G)$, where $k[G]$ is the group ring of $G$, $\mathrm{Fun}(G)$ are the $k$-valued functions on $G$ and $\ltimes$ denotes the crossed product with respect to the natural action.
-Thank you very much for your help.
-
-REPLY [6 votes]: Your observation only means that there is no object $(X,\phi)$ in the Drinfeld center which has $X$ simple and non-isomorphic to $V_e$ ($e$ being the identity element of $G$).
-But let $C\subseteq G$ be a conjugacy class, and let $V_C=\bigoplus_{g\in C}V_g$. Then you should be able to find a natural isomorphism $\phi_C:V_C\otimes(\mathord-)\to(\mathord-)\otimes V_C$, so that $(V_C,\phi_C)$ is a non-trivial element in $\mathcal{Z}(\mathrm{Vec}_G)$.<|endoftext|>
-TITLE: Converse of the Weierstrass $M$-Test?
-QUESTION [15 upvotes]: I was assigned a few problems in my Honors Calculus II class, and one of them was kind of interesting to do:
-
-Suppose that $f_{n}$ are nonnegative bounded functions on $A$ and let $M_{n} = \sup f_{n}$. If $\displaystyle\sum\limits_{n=1}^\infty f_{n}$ converges uniformly on $A$, does it follow that $\displaystyle\sum\limits_{n=1}^\infty M_{n}$ converges (a converse to the Weierstrass $M$-test)?
-
-I know that this question has been asked before, but I'm trying not to just copy an answer off the internet and instead to come up with an example of my own to see if I can actually understand the theorems that I'm learning.
-To provide a counterexample, I tried to create a function which has a diverging $\sup$, but I'm not too confident that my proof is valid. Here it goes: let
-$$
-f_{n}(x) = \begin{cases}
-  \begin{cases}
-    \frac{1 + x}{2}, & \text{if }x \in (-1,0]\\
-    \frac{1 - x}{2}, & \text{if }x \in (0,1)\\
-    0, & \text{elsewhere}
-  \end{cases}, & \text{if }n \text{ even}\\
-  \begin{cases}
-    x, & \text{if }x \in (0,1]\\
-    2 - x, & \text{if }x \in (1,2)\\
-    0, & \text{elsewhere}
-  \end{cases}, & \text{if }n \text{ odd}
-\end{cases}
-$$
-Now, let $f(x) = 0$.
-From this definition, I can conclude that
-$$
-\sup\{f_{n}\} = \begin{cases}
-  \frac{1}{2}, & \text{if }n \text{ even}\\
-  1, & \text{if }n \text{ odd}
-\end{cases}
-$$
-Now, to show that ${f_{n}(x)}$ is uniformly convergent, the definition of uniform convergence is used:
-$$
-\forall \epsilon > 0\ \ \exists N\ \text{ s.t. }\ \forall x:\ \text{if }\ n > N,\ |f(x) - f_{n}(x)| < \epsilon
-$$
-Since $f_{n}(x)$ is strictly nonnegative and $f(x) = 0$, $|f(x) - f_{n}(x)| = f_{n}(x)$. By definition, $\epsilon > 0$, and since $f_{n}(x) = 0$ for $x \geq 2, f(x) - f_{n}(x) = 0 < \epsilon$ for $x \geq 2$. Therefore there exists an $N$ (namely, $N = 1$) which proves that the sequence is uniformly convergent.
-Since $\lim_{n\to\infty} f_{n} \neq 0$, by the Limit Test, the infinite sum $\displaystyle\sum\limits_{n=1}^\infty \sup{f_{n}}$ diverges, which disproves the converse of the Weierstrass $M$-Test. $\blacksquare$
-This is the first time I've actually used LaTeX, so I'm sorry for the way it looks. Is there anything that I can do to make this proof better (or even valid, if it's wrong), or is it fine the way it is?
-This might be a bit of a long question...
-
-REPLY [9 votes]: Let $f_n(x):\mathbb{R}\to\mathbb{R}$ be $\frac{1}{n}\chi_{(n-1,n)}$. Then $\sum f_n$ converges uniformly but $\sum M_n=\sum 1/n$ diverges. Something like this would work (not trying to give an answer away, if this is valid, but trying to show along what lines you should be thinking)
-
-(for a set $A\subseteq\mathbb{R}$, $\chi_A(x)=1$ if $x\in A$, $\chi_A(x)=0$ if $x\not\in A$. sometimes called the indicator function of the set $A$, also denoted by $1_A$)<|endoftext|>
-TITLE: Counting solutions of a cubic congruence using Gauss sums
-QUESTION [10 upvotes]: In the introduction to André Weil's Number of solutions of equations in finite fields he mentions article 358 of Gauss' Disquisitiones. Can someone please show me the connection here:
-How does the Gaussian sum of order $3$, for primes of the form $p = 3n+1$, determine the number of solutions of the congruence $ax^3 - by^3 \equiv 1 \pmod p$?
-
-REPLY [5 votes]: By ignoring a small subcase ($c = 0$ or $d = 0$), we can count the solutions for $cd \neq 0$ by the sum
-$\displaystyle \sum_{\stackrel{c+d = 1}{c,d \neq 0}} \#\{x:ax^3 = c\} \times \#\{y: -by^3 = d\} ... (*)$.
-Expressing the term in this sum, using multiplicative characters:
-Now we look at $\{x: x^3 = u\}$ for some $u$, and we want to calculate its size. This can be interpreted as a sum of cubic characters: $\chi(u) + \chi^2(u) + \chi^3(u)$, where $\chi$ is a cubic character on $\mathbb{Z}/p\mathbb{Z}$, i.e. $\chi \neq 1$, but $\chi^3 = 1$. Now where does this interpretation come from?
-
-If $x^3 = u$ has one solution, it has 3 because $p \equiv 1 \pmod{3}$ tells us that 1 has 3 cube roots in $\mathbb{Z}/p\mathbb{Z}$. On the character side, if $u$ is a cube of something, using $\chi^3 = 1$ we see that $\chi(u) + \chi^2(u) + \chi^3(u) = 3$.
-If $x^3 = u$ has no solution, then notice that $\chi(u) \neq 1$. Then if we write $S = \chi(u) + \chi^2(u) + \chi^3(u)$, note that $\chi(u)S = S$, which means $S = 0$.
-
-So how does this interpretation in terms of characters help? Plug the character sums into (*). We are going to have a lot of cross terms. These cross terms can be split into sums of Jacobi sums, which are closely related to Gauss sums. (See Koblitz' Introduction to Elliptic Curves and Modular forms Section 2.2, or Ireland and Rosen's A Classical Introduction to Modern Number Theory)
-The situation here:
-In our case,
-$(*) = \sum_{c+d = 1, c,d \neq 0} \left(1 + \chi(c/a) + \chi^2(c/a)\right)\left(1 + \chi(-d/b) + \chi^2(-d/b)\right)$.
-There are three types of sum in the expansion. I will do one of each type to show you what it feels like.
-
-$\displaystyle \sum_{c+d = 1, c,d \neq 0} 1 = p-2$, since $c$ can't be 0 or 1 and can be everything else.
-$\displaystyle \sum_{c+d = 1, c,d \neq 0} \chi(c/a) = -\chi(1/a)$. To show this, note that for a nontrivial character $\chi$, $\displaystyle \sum_{u \in \mathbb{Z}/p\mathbb{Z}} \chi(u) = 0$, by an argument similar to what we showed in step 2 in the last block.
-$\displaystyle \sum_{c+d = 1, c,d \neq 0} \chi(c/a) \psi (-d/b)$, where $\psi = \chi$ or $\chi^2$. This is the one for which we need Jacobi sums $\displaystyle J(\chi, \psi) = \sum_{x \in \mathbb{Z}/p\mathbb{Z}} \chi(x)\psi(1-x)$. Note that
-
-$\displaystyle \sum_{c+d = 1, c,d \neq 0} \chi(c/a) \psi (-d/b) = \chi(1/a)\psi(-1/b) J(\chi, \psi)$
-and that if you check out the books I mentioned before, $J$ can be expressed by Gauss sums of $\chi$ or $\chi^2$.
-Summary:
-So after all this work, we can find $\displaystyle \sum_{\stackrel{c+d = 1}{c,d \neq 0}} \#\{x:ax^3 = c\} \times \#\{y: -by^3 = d\} $ in terms of Gauss sums. There remains the case $c = 0$ or $d = 0$, which is easy to do directly. Sum them up, and you get the answer. The two books I mentioned both have examples of these types, so you may want to read them.<|endoftext|>
-TITLE: Sum of ideals in a ring
-QUESTION [10 upvotes]: I have the following question:
-If $I,J$ are ideals in $R$, then denote $I+J = \{ r \in R | r = x+y, x \in I, y \in J \}$
-It is not hard to show that $I+J$ is again an ideal in $R$
-If we then assume that $I+J = R$ we have the identity $R/(I \cap J) \simeq R/I \times R/J$
-I managed to show this by constructing a function $f: R \to R/I \times R/J$ and then using the first isomorphism theorem. The proof then basically follows how one would derive the second isomorphism theorem from the first, which leads me to the question: is it possible to approach this problem using the 2nd/3rd isomorphism theorems directly?
-
-REPLY [2 votes]: It is possible to prove this with the help of the "third isomorphism theorem".
However you need to prove the following fact (which of course can be obtained from your result but can also be proved directly).
-Step 1: If $I$ is an ideal of $R$ with complement $J$ (i.e. $I+J=R$ and $I\cap J=0$), then $R \cong I\times J$
-Step 2: Let us use the above to prove your result. Writing $q:R\to R/(I\cap J)$ for the canonical ring homomorphism and taking images we obtain $q(I)=I/(I\cap J) \cong (I+J)/J = R/J$ and $q(J)=J/(I\cap J)\cong (I+J)/I = R/I$ (using the third isomorphism theorem twice). Applying the result from Step 1 (since $q(I) + q(J)=R/(I\cap J)$, and $q(I)\cap q(J) = 0$) we obtain that $R/(I\cap J) \cong q(I) \times q(J) \cong R/J \times R/I$ as desired.
-Proof of statement in Step 1: By the "third isomorphism theorem" $R/I =(I+J)/I \cong J/(I\cap J)= J$ and similarly $R/J \cong I$, so we obtain a
-homomorphism $R \to I\times J$ (from the two quotient homomorphisms). A straightforward calculation shows that this homomorphism is an isomorphism.
-As a final remark this result is true in any semi-abelian category. This means that it is true for groups, Lie algebras over a commutative ring, Heyting semi-lattices and many other related structures (and that one can give a single proof to cover all of these cases). As another example in the language of groups it would be: If $I$ and $J$ are normal subgroups of a group $G$ and $IJ=G$, then $G/(I\cap J) \cong G/I \times G/J$.<|endoftext|>
-TITLE: Proof That $\mathbb{R} \setminus \mathbb{Q}$ Is Not an $F_{\sigma}$ Set
-QUESTION [11 upvotes]: I am trying to prove that the set of irrational numbers $\mathbb{R} \setminus \mathbb{Q}$ is not an $F_{\sigma}$ set. Here's my attempt:
-Assume that indeed $\mathbb{R} \setminus \mathbb{Q}$ is an $F_{\sigma}$ set. Then we may write it as a countable union of closed subsets $C_i$:
-$$
-\mathbb{R} \setminus \mathbb{Q} = \bigcup_{i=1}^{\infty} \ C_i
-$$
-But $\text{int} ( \mathbb{R} \setminus \mathbb{Q}) = \emptyset$, so in fact each $C_i$ has empty interior as well. But then each $C_i$ is nowhere dense, hence $
-\mathbb{R} \setminus \mathbb{Q} = \bigcup_{i=1}^{\infty} \ C_i$ is thin. But we know $\mathbb{R} \setminus \mathbb{Q}$ is thick, a contradiction.
-This seems a bit too simple. I looked this up online, and although I haven't found the solution anywhere, many times there is a hint: Use Baire's Theorem. Have I skipped an important step I should explain further or is Baire's Theorem used implicitly in my proof? Or is my proof wrong? Thanks.
-EDIT: Thin and thick might not be the most standard terms so:
-Thin = meager = 1st Baire category
-
-REPLY [13 votes]: Your solution is correct. You could also argue that $\mathbb{R} = \bigcup_{i =1}^{\infty} C_{i} \cup \bigcup_{q \in \mathbb{Q}} \{q\}$, so by Baire one of the $C_{i}$ must have non-empty interior, contradicting the fact that $\mathbb{R} \smallsetminus \mathbb{Q}$ has empty interior.<|endoftext|>
-TITLE: Riemann Sphere and a Strange Vector Space Definition
-QUESTION [7 upvotes]: I'm reading Fractals Everywhere by Michael Barnsley. On pp. 6-8 [1] he defines a linear space which, he says, "is also called a vector space." However, his definition of a linear space only requires closure under vector addition and scalar multiplication. There is no mention, for example, of additive inverses. (I realize other properties are required.)
-He also defines the Riemann Sphere (see section 1.5), and later, in an exercise, asks the reader to show that the Riemann Sphere is not a vector space.
-I can see that the Riemann Sphere is not a vector space in the traditional sense because $\infty$ has no additive inverse. But it seems that using Barnsley's definition, it would be a vector space, if we adopt the convention that $\infty + x = \infty$ for all $x \in \mathbb{C}$ and that $a \cdot \infty = \infty$ for all $a \in \mathbb{R}$.
-Am I missing something? Thanks.
-[1] http://goo.gl/d0NzN
-
-REPLY [10 votes]: You might call this thing (vector addition without inverses and scalar multiplication) a semi-module (wikipedia) over a field. The definition is as follows (I assume that you have an additive identity, even though you don't specify one):
-A semimodule over a field $k$ is a set $M$ together with a function $+:M\times M \to M$ (vector addition) which defines an associative and commutative operation, an element $0\in M$ which is an identity element for $+$, and an action $k\times M \to M$ (scalar multiplication) which distributes over $+$.
-The problem is, once you have scalar multiplication distributing over vector addition, you have $v + (-1)v = (1-1)v = 0$, thus $(-1)v$ is an additive inverse! Thus a semimodule over a field is the same as a module over a field which is just another name for a vector space. Thus the definition of a 'linear space' implies the remaining vector space axioms.
-You can get around this by forgetting about some of the structure on the field, namely the additive inverse, but most often people drop both the additive and the multiplicative inverse to get a semiring = rig (wikipedia). Then semimodules over a rig are very interesting objects, and feature in an area called tropical geometry. A basic example in tropical geometry is the rig $\mathbb{R}\cup \{\infty\}$ (similar to your example), but where 'addition' is $x\oplus y = \min\{x,y\}$ and 'multiplication' is $x\otimes y = x+y$.<|endoftext|>
-TITLE: How many strings of length $n$ with $m$ ones have no more than $k$ consecutive ones?
-QUESTION [8 upvotes]: Here is a problem from generatingfunctionology that I'm stuck on:
-
-I'm trying to get started on part (a). I broke the string up like this. If the last digit is $0$, the number of possible strings is then $f(n-1,m,k)$. If the last digit is $1$, there are two subcases. If the $(n-1)$th digit is $0$, then we can cut them both off and the number of strings is $f(n-2,m-1,k)$. However, if the $(n-1)$th digit is $1$, then I don't know what to say, since even if I cut both last $1$s off, I can't have the last $k-2$ numbers of my $n-2$ long string be $1$s, but it's entirely possible that I could have $k-2$ $1$s earlier in the string. So I have something like:
-$$
-f(n,m,k)=f(n-1,m,k)+f(n-2,m-1,k)+???
-$$
-and I don't know what third term to put there. What is the third term? Thanks.
-If it's no trouble, I may ask questions on parts (b) and (c) when I get there.
-
-REPLY [8 votes]: Assume $m$ and $n$ are large, say larger than $k$. If you continue your what-is-the-end-of-the-string argument, you will get $f(n,m,k)$ as the sum of $f(n-i,m-i+1,k)$ from $i=1$ to $i=k$, each of these terms counting the strings ending in $01^{i-1}$. Now, this sum has $k$ terms and you want only three.
-
-Compare the $k$-term sums for $f(n,m,k)$ and for $f(n-1,m-1,k)$.
-
-The same terms appear in both sums with two exceptions: the first term of $f(n,m,k)$ is $f(n-1,m,k)$, which is not in $f(n-1,m-1,k)$, and the last term of $f(n-1,m-1,k)$ is $f(n-1-k,m-k,k)$, which is not in $f(n,m,k)$. Of course this reflects a combinatorial fact, which I leave for you to make explicit.
Hence, at least for large enough $n$ and $m$,
-$$
-f(n,m,k)=f(n-1,m-1,k)+f(n-1,m,k)-f(n-1-k,m-k,k).
-$$<|endoftext|>
-TITLE: How to solve $x^3 + 2x + 2 \equiv 0 \pmod{25}$?
-QUESTION [6 upvotes]: My attempt was:
-$x^3 + 2x + 2 \equiv 0 \pmod{25}$
-By inspection, we see that $x \equiv 1 \pmod{5}$ is a solution of $x^3 + 2x + 2 \equiv 0 \pmod{5}$. Let $x = 1 + 5k$, then we have:
-$$(1 + 5k)^3 + 2(1 + 5k) + 2 \equiv 0 \pmod{25}$$
-$$\Leftrightarrow 125k^3 + 75k^2 + 25k + 5 \equiv 0 \pmod{25}$$
-$$\Leftrightarrow 5 \equiv 0 \pmod{25}$$
-And I was stuck here :( because $k$ was completely cancelled out. How can we find a solution to this equation?
-Thanks,
-
-REPLY [7 votes]: I wanted to post some excerpts from Niven's book that give a nice recurrence relation for lifting nonsingular roots. I always found it helpful to use as a kind of plug-and-chug way to go. I also included an example of it in action.
-[The excerpts from Niven's book, including the recurrence (2.6), appeared here as images.]
-To address your specific problem, you see that $x\equiv 1,3$ are roots mod $5$. However,
-$$
-f'(1)=3(1)^2+2\equiv 0\pmod{5}\quad\text{and}\quad f'(3)=3(3)^2+2\equiv 4\pmod{5}
-$$
-so $x\equiv 3$ is nonsingular, and may be lifted to a solution $a_2$ mod $25$ by Hensel's Lemma. Notice that $\overline{f'(3)}=4$. By the recurrence (2.6) of the excerpt above, you have
-$$
-a_2=3-35(4)\equiv 3+(15)(4)\equiv 3+10\equiv 13\pmod{25}
-$$
-which is the solution found by others. I hope it's useful when the numbers you're working with may not be as nice.<|endoftext|>
-TITLE: Inclusion of subgroups implies the group is cyclic
-QUESTION [5 upvotes]: Let $G$ be a finite group such that for any two subgroups $H_{1}$ and $H_{2}$ of $G$ we have $H_{1} \subseteq H_{2}$ or $H_{2} \subseteq H_{1}$. Why does this imply that $G$ is a cyclic group?
-Ah. I think this works: Suppose $g \in G$ then $G = \langle g \rangle$. Suppose otherwise, then we can find $z$ such that $z \not \in \langle g \rangle$. Now let $H_{1}=\langle z \rangle$ and $H_{2}=\langle g \rangle$. By assumption we have $H_{1} \subseteq H_{2}$ or $H_{2} \subseteq H_{1}$. In both cases we get the contradiction $z \in \langle g \rangle$.
-
-REPLY [2 votes]: Another (marginally different) proof: Consider a maximal cyclic subgroup $H$ of $G$ (one exists since $G$ is finite). If $g \in G$ then the assumption implies either that $\langle g \rangle \subset H$ or else that $H \subset \langle g \rangle$. In the latter case, we have equality, by maximality of $H$, and so in all cases $g \in H$. Thus $H = G$,
-and so $G$ is cyclic.<|endoftext|>
-TITLE: Square roots -- positive and negative
-QUESTION [38 upvotes]: It is perhaps a bit embarrassing that while doing higher-level math, I have forgotten some more fundamental concepts. I would like to ask whether the square root of a number includes both the positive and the negative square roots.
-I know that for an equation $x^2 = 9$, the solution is $x = \pm 3$. But if simply given $\sqrt{9}$, does one assume that to mean only the positive root? And when simply talking about the square root of a number in general, would one be referring to both roots or just the positive one, when neither is specified?
-
-REPLY [2 votes]: The radical sign '√' always denotes the positive square root. When we take square roots on both sides of an equation, we write '±' in front of the radical precisely because '√' by itself means the positive square root; the '±' is what brings in the negative root as well.
As you can see, $(\pm\sqrt{x})^2$ gives $x$; i.e. $(+\sqrt{x})(+\sqrt{x})=x$
-and $(-\sqrt{x})(-\sqrt{x})=x$.
-The simplest way to understand this is by the following example:
-if $x^2=9$,
-taking square roots on both sides gives
-$$\pm\sqrt{x^2}=\pm\sqrt{9}$$
-so $\pm|x|=\pm 3$; that is, $+|x|=3$ and $-|x|=-3$. (In order to keep '√' positive, mathematicians added $|\ |$, the modulus function, which makes everything non-negative: $\sqrt{x^2}=|x|$.)
-So $x=3$ or $x=-3$,
-that is $x=\pm 3$, or we can say $x=\pm\sqrt{9}$; as I said, $\sqrt{9}$ by itself is always positive.
-Notice that I have used the words square root, not the symbol: the words mean that we are taking both the positive and the negative square root,
-but when we write $\sqrt{x^2}$, notice there is no '±' symbol, so only the positive square root is being asked for.
-Conclusion: we conclude that '√' is defined to be positive.
-You can also see this in the quadratic formula
-$$x = \frac{-b \pm \sqrt{b^2 - 4ac} }{2a}$$
-where '±' is written in order to include the negative root too!
-Hope it helped you.<|endoftext|>
-TITLE: What are the most important results in graph theory?
-QUESTION [8 upvotes]: What are the theorems/results/widely applicable results in graph theory that everyone should know about?
-
-REPLY [3 votes]: Kuratowski's and Wagner's theorems which give necessary and sufficient conditions for a finite graph to be planar.
-I should add that it's certainly possible to argue that these are actually theorems of topology rather than graph theory.
-For a simple proof of the non-planarity of $K_{3,3}$ and $K_5$ one can consult Munkres's Topology 2nd ed. §64 and at the end of the section he notes "It is a remarkable theorem, due to Kuratowski, that the converse is also true! The proof is not easy."<|endoftext|>
-TITLE: Why do we call it trace?
-QUESTION [12 upvotes]: For any module $P$, we define $\mathrm{tr}(P)=\sum \mathrm{im}(f)$, where $f$ ranges over all elements of $\mathrm{Hom}(P,R)$, and call it trace.
-Why does it have such a name? Does it have any relation to the trace of a matrix?
-
-REPLY [9 votes]: I have always viewed the term trace as used to name $$\displaystyle \operatorname{tr}(P)=\sum_{f:P\to R}f(P)$$ as a reference to the fact that that ideal is the part of $R$ which you can reach from $P$... whatever that may mean :) One can check, for example, that $P$ is a generator of the category of modules iff $\operatorname{tr}(P)=R$, and that makes sense (to me!)
-A more serious connection is the following. Suppose $V$ is a finite dimensional vector space over a field $k$, and let $\hom_k(V,k)$ be its dual space. Then there is a natural isomorphism $$\phi:V\otimes\hom_k(V,k)\cong\hom(V,V).$$ On the other hand, we have the usual trace map $\operatorname{tr}:\hom(V,V)\to k$, so we can consider the composition $$\operatorname{Tr}=\operatorname{tr}\circ\phi:V\otimes\hom_k(V,k)\to k,$$ which is also sometimes called a trace map. If you now replace $k$ by a ring, $V$ by a left $R$-module, then what you wrote $\operatorname{tr}(P)$ is the image of my $\operatorname{Tr}$: it follows that the trace ideal is the image of the trace map.<|endoftext|>
-TITLE: Is a linear transformation onto or one-to-one?
-QUESTION [30 upvotes]: The definition of one-to-one was pretty straightforward. If the vectors are linearly independent, the transformation would be one-to-one. But could it also be onto?
-The definition of onto was a little more abstract. Would a zero-row in reduced echelon form have any effect on this? I just assumed that because it has a couple of free variables it would be onto, but that zero-row set me off a bit. Say if the matrix is 4 by 5. With two free variables. And the zero-row in echelon form.
-(Sorry for not posting the given matrix, but that is too specific. And I don't want to get a ban from uni for asking online. The task is to determine the onto/one-to-one properties of two given matrices.)
-I'll check back after class and update the question if more information is desirable.
-Update
-The definitions in the book are these:
-Onto:
-$T: \mathbb R^n \to \mathbb R^m $ is said to be onto $\mathbb R^m $ if each b in
-$\mathbb R^m $ is the image of at least one x in $\mathbb R^n $
-One-to-one:
-$T: \mathbb R^n \to \mathbb R^m $ is said to be one-to-one if each b in
-$\mathbb R^m $ is the image of at most one x in $\mathbb R^n $
-And then, there is another theorem that states that a linear transformation is one-to-one iff the equation T(x) = 0 has only the trivial solution.
-That doesn't say anything about onto.
-Then there is this bit that confused me about onto:
-Let $T: \mathbb R^n \to \mathbb R^m $ be a linear transformation and let A be the standard matrix for T. Then:
-T maps $\mathbb R^n$ onto $\mathbb R^m $ iff the columns of A span $\mathbb R^m $.
-
-REPLY [18 votes]: Let $T(x)=Ax$ be a linear transformation.
-$T(x)$ is one-to-one if the columns of $A$ are linearly independent.
-$T(x)$ is onto if every row of $A$ has a pivot.<|endoftext|>
-TITLE: Partial fraction of $\frac{x^n}{(1-x)(1-2x)(1-3x)\ldots(1-nx)}$
-QUESTION [9 upvotes]: How should you break $\displaystyle \frac{x^n}{(1-x)(1-2x)(1-3x)\ldots(1-nx)}$ into partial fractions?
-
-REPLY [4 votes]: It is worth emphasizing that both of the presented solutions essentially employ what is known as Heaviside's cover-up method. As I explained in this answer, this method works generally, i.e. even for nonlinear denominators. As should be evident from the presentation that I gave there, the solution can be constructed by a purely algebraic deterministic algorithm, i.e. without employing any analytic techniques (residue calculus, limits, L'Hopital's rule, etc). Moreover, this is frequently the most efficient way to proceed - even for manual calculations.<|endoftext|>
-TITLE: Prime-power decomposition of a square
-QUESTION [5 upvotes]: I'm trying to learn number theory on my own, and here's a proof I'm not quite sure I got right. It feels too simple(?), I'm thinking maybe I'm missing something. So the question is:
-
-Prove that if $n$ is a square, then each exponent in its prime-power decomposition is even.
-
-My proof:
-Let $n=m^2$, with $m$ having prime factors $p_i$ with exponents $e_i$ so that
-$$m= p^{e_1}_1 p^{e_2}_2 \ldots p^{e_n}_n.$$
-When squared, this gives
-$$m^2 = (p^{e_1}_1 p^{e_2}_2 \ldots p^{e_n}_n)^2 = p^{2e_1}_1 p^{2e_2}_2 \ldots p^{2e_n}_n,$$
-where all the exponents are even.
-
-REPLY [4 votes]: That is fine.
-You can even use it in the other direction to prove that if each exponent in the prime-power decomposition of $n$ is even, then $n$ is a square by saying
-$$n = p^{2e_1}_1 p^{2e_2}_2 \ldots p^{2e_n}_n = (p^{e_1}_1 p^{e_2}_2 \ldots p^{e_n}_n)^2$$ so $n = m^2$ where $m= p^{e_1}_1 p^{e_2}_2 \ldots p^{e_n}_n.$<|endoftext|>
-TITLE: Absolute value of all values in a matrix
-QUESTION [5 upvotes]: How do I express the matlab function abs(M), on a matrix $M$, in mathematical terms?
-I thought about norms or just $|M|$, but these return scalars, not another matrix of the same size as $M$.
-Sorry for the rough explanation, English isn't my primary language.
-
-REPLY [2 votes]: I think you can also denote the matrix $M$ by giving its generic $(i, j)$-entry with parentheses around it:
-$M = (m_{ij})$.
-So Matlab's "abs(M)" would simply be: $(|m_{ij}|)$.<|endoftext|>
-TITLE: Why is the real line not used in Descriptive Set Theory?
-QUESTION [18 upvotes]: In most Descriptive Set Theory books, the rationale for working with the Baire space ($\mathbb{N}^{\mathbb{N}}$) as opposed to the real line ($\mathbb{R}$) is that the connectedness of the latter causes 'technical difficulties'.
-My question is, what are these technical difficulties, and why does Descriptive Set Theory (normally?) stick to zero-dimensional Polish spaces?
-Thanks in advance.
-
-REPLY [23 votes]: There are several useful properties of $\mathbb{N}^\mathbb{N}$:
-
-$\mathbb{N}^\mathbb{N}$ is homeomorphic to $\mathbb{N}^\mathbb{N}\times\mathbb{N}^\mathbb{N}$. So we can view each element of $\mathbb{N}^\mathbb{N}$ as a code for a pair of elements, and the decoding maps are continuous. This is not the case for $\mathbb{R}$.
-$\mathbb{N}^\mathbb{N}$ is not connected. Because $\mathbb{R}$ is connected, there are no nonconstant continuous functions from $\mathbb{R}$ to $\{0,1\}$; you have to move up a few levels in the Borel hierarchy to get such functions. This doesn't matter if you just care about the functions being Borel, but when you're looking at the actual levels of the Borel hierarchy it's more convenient to start with continuous functions rather than starting a little higher up.
-It's easy to construct an element of $\mathbb{N}^\mathbb{N}$: you just have to construct a sequence of natural numbers. On the other hand, the representations of real numbers (Cauchy sequences, Dedekind cuts) are not as straightforward to work with. That makes proofs technically more difficult without making them more interesting. For example, compare the diagonalization proof that $\mathbb{N}^\mathbb{N}$ is uncountable with the diagonalization proof that $\mathbb{R}$ is uncountable.
-
-There are also a few reasons that the use of $\mathbb{N}^\mathbb{N}$ does not result in a loss of generality:
-
-For any uncountable complete separable metric space (c.s.m.s.) $X$, there is a bijection between $X$ and $\mathbb{N}^\mathbb{N}$ that is both Borel measurable and has a Borel measurable inverse. So if the property we are studying is preserved by Borel isomorphisms, we can just replace an uncountable c.s.m.s. $X$ with $\mathbb{N}^\mathbb{N}$.
-Every c.s.m.s. is a continuous image of $\mathbb{N}^\mathbb{N}$. In fact, for any c.s.m.s. $X$ there is a closed subset $C$ of $\mathbb{N}^\mathbb{N}$ and a continuous bijection from $C$ to $X$. So if we are studying a property preserved by continuous maps, we can work with $\mathbb{N}^\mathbb{N}$ or with its closed subspaces without losing generality.
-
-Those types of reasons are why it's safe to stick with $\mathbb{N}^\mathbb{N}$ most of the time: the goal is to study an arbitrary c.s.m.s. (including $\mathbb{R}$) but for most purposes there's no loss of generality in studying $\mathbb{N}^\mathbb{N}$.<|endoftext|>
-TITLE: Find maximum divisors of a number in range
-QUESTION [9 upvotes]: While playing around with programming, a problem suddenly came to mind.
-
-Given $n$, let's say $2^{64}$ (a 64-bit unsigned integer), find a positive integer $x$, $1 \leq x \leq n$, such that $x$ has the maximum number of divisors.
-
-My attempt was:
-Let $n = p_1^{a_1} \times p_2^{a_2} \times ... \times p_k^{a_k}.$
-The number of divisors is given by:
-$$(a_1 + 1) \times (a_2 + 1) \times ... \times (a_k + 1)$$
-We need to find the maximum of these products.
Since $(a_1 + 1)_{max} \implies {a_1}_{max}$, the product becomes: $a_1 \times a_2 \times .... \times a_k$
-Let $P = a_1 \times a_2 \times .... \times a_k$.
-The maximum of $P$ means $a_i = a_j$ where $1 \leq i, j \leq k$.
-So I guess my question is how can we find a number that is less than $n$ which has this property?
-Thanks,
-
-REPLY [7 votes]: Sequence A002182 gives the highly composite numbers, where the number of divisors sets a record.
-You want the largest under $2^{64}$. Unfortunately the table doesn't go that high, but the references do. Somewhat in the spirit of Mark Eicenlaub's answer, if you think of $$n = p_1^{a_1} \times p_2^{a_2} \times ... \times p_k^{a_k} \text{ and } d(n)=(a_1 + 1) \times (a_2 + 1) \times ... \times (a_k + 1)$$ and think of adding one to the exponent of $p_i$, $\log (n)$ increases by $\log (p_i)$ and $\log (d(n))$ increases by $\log(a_i+2)-\log(a_i+1)$.
-So the figure of merit for an increase is $$\frac{\log(a_i+2)-\log(a_i+1)}{\log (p_i)}$$ and you can just look through the primes to find which you should add in. This will miss some, where taking out one and adding another gets you a step up.
-
-REPLY [5 votes]: This answer might be useful if you are looking for an actual algorithm to solve the problem (and not a theoretical approach).
-If $x$ is an integer in the range $[1,n]$ with the maximum number of divisors write
- $$ x = q_1^{a_1} q_2^{a_2} \dots q_k^{a_k} $$
-with the $q_i$ distinct primes and
- $$ a_1 \ge a_2 \ge \cdots \ge a_k. \quad(*)$$
-Then we see that the integer
- $$ x' = 2^{a_1} 3^{a_2} \dots p_k^{a_k} \quad (**)$$
-has the same number of divisors and is $\le x$. So we can limit our search to the integers of the form $(**)$ which verify $(*)$. This makes the search really easy, and fast in small ranges: start with $(a_1,\dots,a_k) = (t,0,\dots,0)$ and $k$ such that $2\cdot3\cdot\dots\cdot p_{k+1} > n$, and $2^t \le n < 2^{t+1}$, and go through all the $k$-tuples verifying $(*)$ and $(**)$ in decreasing lexicographic order, keeping their product maximal and $\le n$.<|endoftext|>
-TITLE: A samurai cuts a piece of bamboo
-QUESTION [11 upvotes]: Suppose a samurai wants to try out his new sword and cuts a piece of bamboo twice, randomly, so now there are $3$ lengths of bamboo. What is the probability of these 3 pieces being able to form a triangle?
-I have never come across a continuous probability problem before, but I tried doing it anyway and got a result of 0.25 probability.
-My solution: Let $L$ be the original length of the bamboo, $x$ be the place of the first cut and $y$ be the place of the second cut. Writing out all the 3 triangle inequalities, we come to the conclusion that no piece of bamboo can have length more than $L/2$, then the probability we're looking for is:
-$$
-\frac{\int_{x=0}^{L/2}(\int_{y=L/2}^{x+L/2}(1)dy)dx}{\int_{x=0}^{L}(\int_{y=x}^L(1)dy)dx}=0.25
-$$
-
-REPLY [5 votes]: There is also a nice geometric-probability solution to the problem. For simplicity, let $L=1$, with $x$ and $y$ as you describe. The space of all possible values of $x$ and $y$ is the unit square $[0,1]\times[0,1]$, with each point being equally likely (as $x$ and $y$ are uniformly distributed). In order for the three pieces to form a triangle, each piece must have length less than $\frac{1}{2}$, so:
- -It's worth noting this similar but slightly different question, which arose from a mis-written Monte Carlo simulation of this problem.<|endoftext|> -TITLE: Behaviour of the series $\exp_p(x)=\sum_{k=0}^{\infty}\frac{x^k}{(k!)^p}$ depending on $p\approx 2$? -QUESTION [13 upvotes]: Note:This is more a math-recreational question -Consider the series $\exp_p(x)=\sum_{k=0}^{\infty}\frac{x^k}{(k!)^p}$ which is some systematic modification of the exponential function. It's $\exp_1(x)=\exp(x) $ if p=1. I'm interested in the behaviour for $x=0\to -\infty$ depending on change of the parameter $p$. -[edit] For the plots, and also for the discussion of the oscillation I removed the constant 1 from the function. Then I edited that question for correction. But after I got some answer I better roll back the edits of the title and the formula and state that removal for the plots and the discussion of the oscillation only here explicitely instead. Please excuse that inconvenience [edit 2] -First oberservation is, that if $p>1$ the function begins to oscillate. If $p=2$ the oscillation diminuishes but for $p>2$ in general the oscillation seems to increase and even for a couple of values $p=2 + \epsilon $ which I checked manually where the (possibly) eventual increase occurs after an initial decrease. -So my questions are: - -is it true, that at p=2 and x from $ 0\to -\infty$ the amplitude of the oscillation diminueshes to zero? -is it true, that at $p=2+\epsilon, \epsilon>0$ and x from $ 0\to -\infty$ the amplitude of the oscillation eventually increases without bound? -How could I approach that question, for instance by considering the form of the power series, the analysis of the powers of the factorials. Are there any "keystones" which might be helpful? - -This is a plot for $p=2$ so $f(x)=\exp_2(-(x^2)) -1 $. I used the squaring of the x to see more of the oscillation. The -1 is to locate the oscillation around the x-axis. - -This is a plot for $p=2.02$ so $f(x)=\exp_{2.02}(-x^2) -1 $. Again I used the squaring of the x to see more of the oscillation. Bigger epsilons let the oscillation increase at an x nearer to the vertical $x=0$-line. - -REPLY [17 votes]: For p = 2, $exp_2(x) = J_0(2 \sqrt{x})$, where $J_0$ is a Bessel function. The asymptotics of this are known: as $x \to -\infty$, $exp_2(x) = \frac {\sin \left( 2\,\sqrt {x}+1/4\,\pi \right) }{\sqrt {\pi }\, -x^{1/4}} + O(x^{-3/4})$.<|endoftext|> -TITLE: proving derivative in real analysis -QUESTION [8 upvotes]: I have proved the following problem, can you help me check if there is any loopholes in my proof? -Let I be an open interval in R, let $c \in I$, and let $f, g\colon I\to \mathbb{R}$ be functions. Suppose that $f(c) = g(c)$, and that $f(x) \leq g(x)$ for all $x \in I$. Prove that if f and g are differentiable at c, then f'(c) = g'(c). 
So the following is my proof:
-Let $x \in I$.
-If $x \lt c$, then $\lim\limits_{x\rightarrow c-}\frac{f(x) - f(c)}{x-c} = \lim\limits_{x\rightarrow c-}\frac{f(c) - f(x)}{c-x} \geq \lim\limits_{x\rightarrow c-}\frac{g(c) - g(x)}{c-x} = \lim\limits_{x\rightarrow c-}\frac{g(x) - g(c)}{x-c}$
-If $x \gt c$, then $\lim\limits_{x\rightarrow c+}\frac{f(x) - f(c)}{x-c} \leq \lim\limits_{x\rightarrow c+}\frac{g(x) - g(c)}{x-c}$
-Since $f$ and $g$ are differentiable at $c$, we know that $\lim\limits_{x\rightarrow c-}\frac{f(x) - f(c)}{x-c} = \lim\limits_{x\rightarrow c+}\frac{f(x) - f(c)}{x-c}=f'(c)$ and $\lim\limits_{x\rightarrow c-}\frac{g(x) - g(c)}{x-c} = \lim\limits_{x\rightarrow c+}\frac{g(x) - g(c)}{x-c} = g'(c)$
-Thus $\lim\limits_{x\rightarrow c-}\frac{f(x) - f(c)}{x-c} = \lim\limits_{x\rightarrow c+}\frac{f(x) - f(c)}{x-c} = \lim\limits_{x\rightarrow c-}\frac{g(x) - g(c)}{x-c} = \lim\limits_{x\rightarrow c+}\frac{g(x) - g(c)}{x-c}$
-Therefore $f'(c) = g'(c)$.
-I can't find any problem in my proof, but for some reason I'm not feeling comfortable with it. However, if it's totally correct, just tell me :) Thanks!!
-
-REPLY [3 votes]: The final equality needs better justification; otherwise, it looks fine (modulo a couple of typos). To give a better justification for the final equality, do something like this:
-\begin{align*}
-g'(c) &= \lim_{x\to c}\frac{g(x)-g(c)}{x-c} = \lim_{x\to c-}\frac{g(x)-g(c)}{x-c}\\
-&\leq \lim_{x\to c-}\frac{f(x)-f(c)}{x-c} = \lim_{x\to c}\frac{f(x)-f(c)}{x-c} = f'(c)\\
-&= \lim_{x\to c+}\frac{f(x)-f(c)}{x-c}\\
-&\leq \lim_{x\to c+}\frac{g(x)-g(c)}{x-c} = \lim_{x\to c}\frac{g(x)-g(c)}{x-c} = g'(c).
-\end{align*}
-Thus, we have $g'(c)\leq f'(c)\leq g'(c)$, so equality holds.<|endoftext|>
-TITLE: Discrete Fourier Transform: Effects of zero-padding compared to time-domain interpolation
-QUESTION [16 upvotes]: While studying the various implementations of the Fast Fourier Transform algorithm available on-line, I've come to a question related to the way the DFT works in theory.
-Suppose you have a sequence of $N$ points $x_0, ..., x_{N-1}$. For $ k = 0, ..., N-1 $, let $ X_k = \sum_{n=0}^{N-1} x_n e^{-2ik\pi \frac{n}{N}} $.
-I've noticed that many algorithms are easier to implement, or faster, when the size of the input can be expressed as a power of 2. To pad the signal, I've seen two approaches.
-
-Pad the signal with $0$s, setting $x_N, ..., x_{2^p-1} = 0$, and $X_k = \sum_{n=0}^{N-1} x_n e^{-2ik\pi \frac{n}{2^p}}$
-Interpolate the original values, by setting $\tau=N/2^p$ the new spacing between consecutive points and then guessing the values at $0, \tau, 2\tau, ..., (2^p-1)\tau$ through linear interpolation.
-
-I've heard people saying different things:
-Some people oppose the first approach very strongly (I recently had a discussion with a physics teacher about this). They say that padding the signal with extra zeros gives you the Fourier coefficients of a different function, which will bear no relation to those of the original signal. On the other hand, they say that interpolation works great.
-Meanwhile, most libraries, if not all, that I have reviewed use the second solution.
-All the references that I could find on the internet were pretty vague on this topic. Some say that the best band-limited interpolation that you can do in frequency domain is obtained through time-domain padding, but I couldn't find any proof of such a statement.
-Could you please help me figure out which advantages and drawbacks both approaches have? Ideally, I'm searching for something with a mathematical background, not only visual examples =)
-Thanks!
-
-REPLY [13 votes]: Zero-padding in the time domain corresponds to interpolation in the Fourier domain. It is frequently used in audio, for example for picking peaks in sinusoidal analysis.
-It doesn't increase the resolution, though - that really has to do with the window shape and length. As mentioned by @svenkatr, taking the transform of a signal that's not periodic in the DFT size is like multiplying with a rectangular window, equivalent in turn to convolving its spectrum with the transform of the rectangle function (a sinc), which has high energy in sidelobes (off-center frequencies), making the true sinusoidal peaks harder to find. This is known as spectral leakage.
-But I disagree with @svenkatr that zero-padding is causing the rectangular windowing; they are separate issues. If you multiply your non-periodic signal by a suitable window (like the Hann or Hamming) that has appropriate length to give the frequency resolution that you need and then zero-pad for interpolation in frequency, things should work out just fine.
-By the way, zero-padding is not the only interpolation method that can be used. For example, in estimating parameters of sinusoidal peaks (amplitude, phase, frequency) in the DFT, local quadratic interpolation (take 3 points around a peak and fit a parabola) can be used because it is more computationally efficient than padding to the exact frequency resolution that you want (which would mean a much larger DFT size).<|endoftext|>
-TITLE: Integer Multiples of Integers Sum to 1
-QUESTION [5 upvotes]: Let $u=p_1^{n_1} \cdots p_t^{n_t}$ for $p_i$ prime and $n_i \in \mathbb{N}$. Let $m_i = \frac{u}{p_i^{n_i}}$. Then there exist $c_i \in \mathbb{Z}$ such that $c_1 m_1 + \cdots + c_tm_t = 1$.
-
-This is given as a property we are allowed to use in one of our homework questions, but we're not sure why we are allowed to use it. We know it's true for $m_i$ that are relatively prime, but these $m_i$ are specifically not. Can you explain why it still holds?
-
-REPLY [4 votes]: Hint $\, \ \rm\ \{ c_1 m_1 +\:\cdots\:+c_t\ m_t\ :\ c_i \in \mathbb Z\}\ $ is closed under subtraction so, by the lemma below, its least positive element $\rm\:d\:$ is a common divisor of all $\rm\:m_i.\:$ $\rm\:d = 1,\:$ by $\rm\ d\:|\:m_{\,i}\ $ and $\rm\:m_{\,i}\:$ is coprime to $\rm\:p_i,\ $ so $\rm\:d\:$ is a divisor of $\rm\ p_1^{n_{\ 1}}\:\cdots\:p_t^{n_{\ t}} $ coprime to all its prime factors,$\:$ so $\rm\ d = 1.$
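-For a concrete sanity check of this hint (a sketch of my own, not part of the original answer), take $u = 360 = 2^3\cdot 3^2\cdot 5$, so $m_1 = 45$, $m_2 = 40$, $m_3 = 72$; folding the $m_i$ together with the extended Euclidean algorithm produces suitable $c_i$, e.g. in Python:
-
-    # Sketch: find c_i with c_1*m_1 + ... + c_t*m_t = 1 by combining the
-    # m_i one at a time via the extended Euclidean algorithm.
-    def ext_gcd(a, b):
-        # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
-        if b == 0:
-            return a, 1, 0
-        g, x, y = ext_gcd(b, a % b)
-        return g, y, x - (a // b) * y
-
-    def bezout_one(ms):
-        acc, coeffs = ms[0], [1]
-        for m in ms[1:]:
-            g, x, y = ext_gcd(acc, m)
-            coeffs = [c * x for c in coeffs] + [y]
-            acc = g
-        assert acc == 1          # the m_i share no common prime factor
-        return coeffs
-
-    ms = [45, 40, 72]            # u = 360 = 2^3 * 3^2 * 5
-    cs = bezout_one(ms)
-    print(cs, sum(c * m for c, m in zip(cs, ms)))  # [29, -29, -2] 1
-
-The check $29\cdot 45 - 29\cdot 40 - 2\cdot 72 = 145 - 144 = 1$ confirms the combination, matching the least positive element $d = 1$ in the hint.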
-Lemma $\: $ If $\;\rm D\subset\mathbb Z \;$ is closed under subtraction and $\rm D$ contains a nonzero element then $\rm\: D \: $ has a positive element and the least positive element of $\rm\: D\: $ divides every element of $\rm\: D.$ -Proof $\rm\,\ D$ has a positive element by $\rm \ 0 \ne d\in D\,$ $\Rightarrow$ $\,\rm d-d = 0\in D\,$ $\Rightarrow$ $\,\rm 0-d = -d\in D.\, $ Let $\rm d$ be the least positive element in $\rm D.\,$ By $\,\rm d\mid n \iff d\mid -n, \;$ if $\,\rm c\in D$ isn't divisible by $\rm d$ then we -may assume that $\rm c$ is positive, and the least such. But $\,\rm c-d\,$ is a positive element of $\rm D$ not divisible by $\rm d$ -and smaller than $\rm c,\,$ contra leastness of $\rm c.\,$ So $\rm d$ divides every element of $\rm D.$ -Remark $\ $ Alternatively, if you already have available $\rm\:CRT\:$ (Chinese Remainder Theorem), then simply apply it to solve the system $\rm\ x \equiv 1\ \ (mod\ p_i^{n_{\ i}}).\, $ It gives $\rm\ c_i \equiv\ m_{\:i}^{-1}\ (mod\ p_i^{n_{\ i}}).$<|endoftext|> -TITLE: $K(\mathbb R P^n)$ from $K(\mathbb C P^k)$ -QUESTION [17 upvotes]: EDIT: I found a brief discussion of this in Husemoller's Fibre Bundles, chapter 16 section 12. Here to compute $\tilde K(\mathbb R P^{2n+1})$ he says to consider the map -$$ -\mathbb R P^{2n+1} = S^{2n+1}/\pm 1\to \mathbb C P^n = S^{2n+1}/U(1). -$$ -Under this map the canonical line bundle over $\mathbb C P^n$ pulls back to the complexification of the canonical line bundle over $\mathbb R P^{2n+1}$. Then he says from looking at the (Atiyah-Hirzebruch) spectral sequence we get that $\tilde K(\mathbb R P^{2n+1}) = \mathbb Z/2^n$. I don't see how looking at the spectral sequence helps (and what we learn from the map from $\tilde K(\mathbb C P^{n}) \to \tilde K(\mathbb R P^{2n+1})$). All I can see is that $\tilde K(\mathbb R P^{2n+1})$ is pure torsion (by the Chern character isomorphism) and that it has order $2^n$ (from the spectral sequence). I can't see why, for example, it isn't just a direct sum of $\mathbb Z/2$'s. - -I found a homework assignment online from an old K-theory course and one of the problems says to compute $K(\mathbb R P^n)$ by using a suitable comparison map with $\mathbb C P^k$ and knowledge of $K(\mathbb C P^k)$. -I have attempted this but have not been able to get anywhere. The only map $\mathbb R P^n \to \mathbb C P^n$ I can think of is the one sending the equivalence class of $(x_0,\ldots,x_n) \in \mathbb R P^n$ to its equivalence class in $\mathbb C P^n$. Under this (I think) the tautological line bundle over $\mathbb C P^n$ (which generates $K(\mathbb C P^n)$) gets sent to the complexification of the tautological line bundle over $\mathbb R P^n$. But I really don't see where to go from here; if I had a map going the other way maybe I'd be able to say something but the map I have is neither injective nor surjective. I also can't see how torsion is going to come out of this: $K(\mathbb C P^n)$ is torsionfree but $K(\mathbb R P^n)$ isn't. - -REPLY [2 votes]: Using the long exact sequence of the pair $(\mathbb{R}\mathbb{P}^{n},\mathbb{R}\mathbb{P}^{n-1})$. 
For n=odd, we should have the exact sequence $$..\mathbb{Z}\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{n})\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{n-1})\rightarrow 0\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{n})\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{n-1})\rightarrow \mathbb{Z}...$$ for n=even, we should have the exact sequence $$0\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{n})\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{n-1})\rightarrow \mathbb{Z}\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{n})\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{n-1})\rightarrow 0\rightarrow...$$ -We wish to prove via induction that $K(\mathbb{R}\mathbb{P}^{n})=\mathbb{Z}/2^{n}\mathbb{Z}$. The base case $n=1,K(\mathbb{R}\mathbb{P}^{1})=K(S^{1})=0$ is established by the fact that $U(1)$ is connected. We proceed to the $n=2$ case. -We notice the middle coboundary map $\delta:K^{-1}(\mathbb{R}\mathbb{P}^{n-1})\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{n}/\mathbb{R}\mathbb{P}^{n-1})$. In this current case it is from $\mathbb{Z}$ to $\mathbb{Z}$. To ascertain $\delta$ we notice the following commuative diagram: -$$\begin{CD} -S^{1} @> >> D^{2}\\ -@VV V @VV V\\ -\mathbb{R}\mathbb{P}^{1}@> >>\mathbb{R}\mathbb{P}^{2} -\end{CD}$$ -Hence $\delta$ is a $\times 2$ map. The exactness at $K^{-1}(\mathbb{R}\mathbb{P}^{2})$ implies $K^{-1}(\mathbb{R}\mathbb{P}^{2})=Ker(\delta)=0$, and $K^{0}(\mathbb{R}\mathbb{P}^{2})=\mathbb{Z}/2\mathbb{Z}$. -For $n=3$ we have the long exact sequence to be -$$..\mathbb{Z}\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{3})\rightarrow 0 \rightarrow 0\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{3})\rightarrow \mathbb{Z}/2\mathbb{Z}\rightarrow \mathbb{Z}...$$ -The same commuative diagram -$$\begin{CD} -S^{2} @> >> D^{3}\\ -@VV V @VV V\\ -\mathbb{R}\mathbb{P}^{2}@> >>\mathbb{R}\mathbb{P}^{3} -\end{CD}$$ -implies the coboundary map $K^{0}(\mathbb{R}\mathbb{P}^{2})\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{3}/\mathbb{R}\mathbb{P}^{2}\cong S^{3})$ is still $\times 2$. -Reorganize the sequence we have $$0\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{3})\rightarrow \mathbb{Z}/2\mathbb{Z}\rightarrow \mathbb{Z}\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{3})\rightarrow 0$$ -Hence $K^{0}(\mathbb{R}\mathbb{P}^{3})=\mathbb{Z}/2\mathbb{Z}$, $K^{-1}(\mathbb{R}\mathbb{P}^{3})=\mathbb{Z}$. -For $n=4$ we have the long exact sequence to be -$$...0\rightarrow K^{-1}(\mathbb{R}\mathbb{P}^{4})\rightarrow \mathbb{Z} \rightarrow \mathbb{Z}\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{4})\rightarrow \mathbb{Z}/2\mathbb{Z}\rightarrow 0...$$ -Hence $K^{-1}(\mathbb{R}\mathbb{P}^{4})=0$, but we are not sure how to calculate $K^{0}(\mathbb{R}\mathbb{P}^{4})$. I think the map $\mathbb{Z}\rightarrow K^{0}(\mathbb{R}\mathbb{P}^{4})$ is $\mathbb{Z}\rightarrow \mathbb{Z}/2\mathbb{Z}$, both by the exact sequence and the geometrical picture. Hence $K^{0}(\mathbb{R}\mathbb{P}^{4})$ has two choices, either $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$, or simply $\mathbb{Z}/4\mathbb{Z}$. So we need to show $K^{0}(\mathbb{R}\mathbb{P}^{4})$ have an element $x$ such that $2x\not=0$. But calculating the Stifel-Whitney class of $w(RP^{4}\oplus RP^{4})$ by Whitney summation formula implies it to be non-zero. Hence such an element do exist. -We conclude that $K^{0}(\mathbb{R}\mathbb{P}^{4})=\mathbb{Z}/4\mathbb{Z}$, $K^{-1}(\mathbb{R}\mathbb{P}^{4})=0$. -The induction scheme, totally analogous to the pervious arguments thus gives us: -$K^{0}(RP^{2k+1})=\mathbb{Z}/2^{k}\mathbb{Z}$, $K^{-1}(RP^{2k+1})=\mathbb{Z}$. 
-and -$K^{0}(RP^{2k})=\mathbb{Z}/2^{k}\mathbb{Z}$, $K^{-1}(RP^{2k})=0$.<|endoftext|> -TITLE: Disjoint convex sets that are not strictly separated -QUESTION [19 upvotes]: Question 2.23 out of Boyd & Vanderberghe's Convex Optimization: - -Give an example of two closed convex sets that are disjoint but cannot be strictly separated. - -The obvious idea is to take something like unbounded sets which are disjoint but approach each other in the limit. For example, $f(x) = \frac1x$ and $g(x) = -\frac1x$. But isn't $x=0$ a strictly separating hyperplane here? - -REPLY [5 votes]: I'll just add that, for a slightly different definition of convex sets being "strictly separated", the example given by Albert Seppi doesn't work. (My definition is that $C$ and $D$ convex sets in $\mathbb{R}^n$ are strictly separated if there is a hyperplane $a^Tx=\beta$ such that $a^Ty<\beta$ for all $y\in C$ and $a^Ty>\beta$ for all $y\in D$) -In the latter case, $x=0$ separates strictly $\{(x,y)\mid y\geq \frac1x, x>0\}$ and $\{(x,y)\mid y\leq -\frac1x, x>0\}$. -However, it is impossible to separate the sets $\{(x,y)\mid y\geq \frac1x, x>0\}$ and $\{(x,y)\mid y\leq0\}$ because any line of slope non-zero will intersect the second set, so such a line must be of the form $(1\mbox{ } 0)^Tx=\beta$ for some $\beta$. But if $\beta\leq0$, it intersects the second set whereas $\beta >0$ implies that the line will intersect the first set.<|endoftext|> -TITLE: On the Bessel function $J_n(z)$ for high $z$, with respect to $n$ -QUESTION [7 upvotes]: Plotting the Bessel functions of the first kind $J_n(z)$ versus $n$ for some fixed $z\gg1$, it appears that there is a sharp cutoff just before $n=z$. -Three questions: - -What is a reference describing this sharp cutoff? -What is an expression for the location of the maximum of $J_n(z)$ with respect to $n$, for fixed (large) $z$? -What is a nice expression for the envelope of the function $J_n(z)$ with respect to $z$? I.e, what is a function that (approximately) goes through all the maxima of the following plot, and then dies off appropriately for $n>z$? - -REPLY [6 votes]: 1. -The so-called "transition region" for $J_n(z)$ is well known; among its many applications, the usual three-term recursion relation for Bessel functions is numerically sound in the direction of increasing $n$ for as long as $z>n$. As a guide of sorts, -$$J_n(z)\approx\frac{x^n}{2^n\Gamma(n+1)}\quad\mathrm{if}\quad 0 < z \ll n$$ -$$J_n(z)\approx\sqrt{\frac{2}{\pi z}}\cos\left(z-\frac{\pi}{4}-\frac{n\pi}{2}\right)\quad\mathrm{if}\quad n \ll z$$ -2. -The expressions for $\frac{\mathrm d}{\mathrm d\nu}J_\nu(z)$ are rather complicated; there's no reason to expect a simple expression for the solution of $\frac{\mathrm d}{\mathrm d\nu}J_\nu(z)=0$, as is usual for any transcendental equation. -3. -There are a lot of asymptotic results for Bessel functions with varying order in the DLMF; you might want to look into them. Offhand, I recall results for envelopes of $J_n(z)$ for fixed $n$ and varying $z$, but not for your situation; I'll edit this answer when I come across results of relevance to you.<|endoftext|> -TITLE: Irreducibility and Splitting Fields -QUESTION [5 upvotes]: Show that over any field $F$, the polynomial $x^3-3x+1$ is either irreducible or splits into linear factors. - -Edited: -This is my attempt: Let $f(x)=x^3-3x+1$. Let $a_1,a_2,a_3$ be the roots of $f$. Suppose char $F\neq 2,3$. Suppose also that $f$ is neither irreducible nor splits in $F$. Then $f$ is reducible which implies that $a_1 \in F$. i.e. 
$f(x)=(x-a_1)g(x)$, where $g(x)\in F[x]$ is irreducible with deg $g=2$. Let $K$ be the splitting field of $g$. The $K$ is Galois over $F$. -So if $\sigma \in $ Aut($K/F$), then $\sigma (a_1)=a_1$ since $\sigma $ fixes $F$ and $a_1 \in F$. Since $\sigma$ permutes the roots of $f$, WLOG suppose $\sigma (a_2)=a_3$. Then -$\sigma(\triangle) = \sigma((a_1-a_2)(a_1-a_3)(a_2-a_3))=-\triangle$. -But $\triangle^2=D(f)=81$, so $\triangle = \pm 9 \in F$, so $\triangle \in F$. Therefore, -$9=-9$ $\implies 1=-1 \implies$ char $F =2$ which is a contradiction. So $f$ is either irreducible or splits in $F$. -Next suppose char $F=2$. Well, I'm not exactly sure what I can say about $f$. -I would like to know if my approach is correct and also what to do in the second case. Thanks. -ADDED: -If char $F=2$, then $f=x^3+x+1$. Suppose $b$ is a root of $f$. Then $b^2 $ is also a root, since $f(b^2)=(b^2)^3+b^2+1=(b+1)^2+b^2+1=2b^2+2=0$. Is it enough to conclude that $f$ splits? - -REPLY [6 votes]: I like your approach, it's not one I would have thought of. Here's an idea that seems to work except when characteristic of F is 3. -If $x$ satisfies $x^3 - 3x + 1 = 0$, then so does $1 - \frac{1}{x}$. Clearly zero is not a root of the given polynomial, so this makes sense. Also if $x = 1 - \frac{1}{x}$, then $x$ would satisfy the equation $x^2 - x + 1 = 0$, and taken in combination with the cubic polynomial above, we would infer $3 = 0$. -So except for characteristic 3, the formula $1 - \frac{1}{x}$ turns out to cyclically permute the three roots of the cubic polynomial. Thus the polynomial either splits in a field F or is irreducible. -Added: Arturo has nailed the characteristic 3 case. - -REPLY [2 votes]: Here is one way to do it using almost no theory, just playing with algebra. -Suppose the polynomial has a root $a$ in $F$. If you divide $x^3 - 3x + 1$ by $x-a$ the quotient is the polynomial -$$ -x^2 + ax + a^2 - 3 -$$ -in $F(a)[x]$. From the quadratic formula (assuming characteristic $\neq 2$ here, for the moment, I guess) you can see that this will have its roots in $F(a)$ if and only if $12-3a^2$ is a square in $F(a)$. -You may know that any element in $F(a)$ can be written in the form $pa^2 + qa + r$ for some $p,q,r$ in $F$. The idea is to square this symbolic expression, rewrite it as a polynomial in $a$ of degree at most $2$, and then see if values $p, q, r$ in $F$ can be found making this equal to $-3a^2 + 0a + 12$. -Using the identity $a^3 = 3a - 1$ (from the fact that $a$ is a root of the given polynomial) and $a^4 = 3a^2 - a$ (from multiplying the previous identity by $a$) one gets -$$ -(pa^2 + qa + r)^2 = (3p^2 + 2pr + q^2) a^2 + (-p^2 + 2qr + 6pq) a + (r^2 - 2pq). -$$ -So the goal is to find $p,q,r$ satisfying $3p^2 + 2pr + q^2 = -3$, $-p^2 + 2qr + 6pq = 0$, and $r^2 - 2pq = 12$. Since we don't know anything about $F$, we might optimistically look for these $p, q, r$ in $\mathbb{Z}$. This system of $3$ polynomial equations in $3$ integer unknowns can be fed to software, and one finds that e.g. $p = 2$, $q = 1$, and $r = -4$ give a solution (no matter what $F$ is!). So the roots of $x^3 - 3x + 1$ are all in $F(a)$, and we can actually write formulas for them: $a$, $\frac{-a + (2a^2 + a -4)}{2}$, and $\frac{-a - (2a^2 + a - 4)}{2}$, which simplify to $a$, $a^2 - 2$, $-a^2 - a + 2$. 
-Although we assumed characteristic $\neq 2$ to use the quadratic formula, we can immediately check that the single identity $a^3 - 3a + 1 = 0$ is indeed enough to ensure that -$$ -(x-a)(x - (a^2 - 2)) (x - (-a^2 - a + 2)) = x^3 - 3x + 1. -$$ -[In detail: expanding, the coefficient of $x^3$ is $1$ on the nose, the coefficient of $x^2$ is $0$ on the nose, and the coefficient of $x$ is a polynomial in $a$ that, when divided by $a^3 - 3a + 1$, has a remainder of $-3$; similarly the constant term is a polynomial in $a$ that, when divided by $a^3 - 3a + 1$, has remainder of $1$.] -So if $x^3 -3x + 1$ has one root in $F$, it splits in $F$ for the reason that we can explicitly write the other two roots as polynomials in the one that we already have.<|endoftext|> -TITLE: Proof: Cartesian Product of Two Sets is a Set ZF -QUESTION [9 upvotes]: Hi I'm having trouble with this proof. I'm not sure if I did the first part right and how I should use the second part properly. I am following the hint in the book to use the Axiom of Replacement to prove step three below and follow through with Replacement and Union to finalize the proof. -1) Prove that the Cartesian Product of two sets is a set -$ A \times B = \left \{ \left \langle u, v \right \rangle \mid u \in A \land v \in B \right \} $ -2) Taking the Axiom of Replacement -$ \forall u \forall v \forall w \left ( \psi \left ( u, v \right ) \land \psi \left ( u, w \right ) \rightarrow v = w \right ) \rightarrow \forall z \exists y \forall v \left ( v \in y \leftrightarrow \left ( \exists u \in z \right ) \psi \left ( u, v \right ) \right ) $ -3) First prove that the following is a set -$ \forall x \left ( \lbrace \langle x, y \rangle \mid y \in t \rbrace \right ) $ -4) Taking (1) from Section 6 -$ \langle x, y \rangle = \langle u, v \rangle \iff x = u \land y = v $ -5) Use then this u and v for the Axiom of Replacement reduced to the following where v is unique -$ \forall z \exists y \forall v \left ( v \in y \leftrightarrow \left ( \exists u \in z \right ) \psi \left ( u, v \right ) \right ) $ -6) $ \forall t \left ( t \in y \right ) $ and $ z = \langle x, y \rangle $ -$ \forall \langle x, y \rangle \exists t \forall v \left ( v \in t \leftrightarrow \left ( \exists u \in \langle x, y \rangle \right ) \langle u, v \rangle \right ) $ -Hence $ y \in t $ proving that $ \forall x \left ( \lbrace \langle x, y \rangle \mid y \in t \rbrace \right ) $ is a set by the Axiom of Replacement -7) Now taking the Axiom of Union -$ A \cup B = \lbrace x \mid x \in A \lor x \in B \rbrace $ -8) And the Axiom of Replacement -$ \forall u \forall v \forall w \left ( \psi \left ( u, v \right ) \land \psi \left ( u, w \right ) \rightarrow v = w \right ) \rightarrow \forall z \exists y \forall v \left ( v \in y \leftrightarrow \left ( \exists u \in z \right ) \psi \left ( u, v \right ) \right ) $ -9) Prove that the Cartesian Product of two sets is a set -$ A \times B = \left \{ \left \langle u, v \right \rangle \mid u \in A \land v \in B \right \} $ -If there is something I can make more clear that would allow others to more easily aid me let me know. :| - -REPLY [8 votes]: Regarding the use of Replacement and Union without Powerset, the proof can be as follows: -I will assume that $(x,y)=\{\{x\},\{x,y\}\}$. First you show that for every $x\in A$ the set $\{(x,y)\ : y\in B\}$. To do this, fix an $x\in A$. Then you define a class-function that sends every element of $Y$ to $(x,y)$. The formula used in replacement would be $\phi(y,z):= z=(x,y)$ (keep in mind that $x$ is fixed). 
Or more typically $z=\{\{x\},\{x,y\}\}$ or even more typically: $$(\forall w)(w\in z\iff[((\forall u)(u\in w\iff u=x))\lor((\forall u)(u\in w\iff u=x\lor u=y))])$$ -To see that this satisfies the uniqueness of $z$ that is required for replacement simply observe that the sentence says that $z$ is a set that contains exactly $\{x\}$ and $\{x,y\}$ and through the axiom of extensionality this $z$ is unique. So for every set $x\in X$ we have defined a set $B_x=\{(x,y)\ :y\in B\}$. -Again using the axiom of replacement we obtain a set $C=\{B_x\ :x\in A\}$. Intuitively we create a class-function that sends every $x\in A$ to $B_x$. The formula that will be used for replacement will be $\psi(x,v):= v=B_x$. Typically this will be written: -$(\forall z)(z\in v\iff$ -$$(\forall w)(w\in z\iff[(\exists y\in B)((\forall u)(u\in w\iff u=x))\lor((\forall u)(u\in w\iff u=x\lor u=y))]))$$ -The proof that this $v$ is a set is done in the previous paragraphs and the proof of the uniqueness is again due to extensionality. So we have proved that $C=\{B_x\ :x\in A\}$ is indeed a set. From the axiom of union $\bigcup C$ is a set. Now I claim that $\bigcup C$ is the cartesian product $A\times B$. -To see this first take $a\in\bigcup C$. This means that there exists an $x\in A$ such that $a\in B_x$. Which means that there is an $y\in B$ such that $a=(x,y)$. Now take $x\in A$ and $y\in B$. First observe that $(x,y)\in B_x$. Then since $B_x\in C$ we have $(x,y)\in \bigcup C$. - -REPLY [6 votes]: I don't really follow what your book is saying, but maybe this is the proof they wanted: - -First, given $x$ and $y$, $\{x,y\}$ exists by the axiom of pairing, and $\{x\} = \{x,x\}$ exists by the axiom of pairing, so $\{\{x\},\{x,y\}\} = \langle x,y\rangle $ exists by a third application of pairing. Now, viewing $x$ as a parameter, we have proved that for each $y \in Y$, the set $\langle x,y\rangle$ exists, so by the axiom of replacement, for each $x$ the set $S_x = \{ \langle x,y\rangle : y \in Y\}$ exists. This last step may also require the axiom of comprehension depending on exactly how your book phrases the axiom of replacement. Now, since $X$ is a set, apply the axiom of replacement again to get that $\{\langle x,y\rangle : x \in X, y \in Y\}$ exists. - -I'll leave it up to you to formalize that proof to verify what formulas are used in the applications of replacement. -This is a strange way to prove that cartesian products exist, however, because the theorem can be proved as yunone indicated using the axiom of powerset but not the axiom of replacement. That fact is relevant in set theory because we sometimes work with models that do satisfy the axiom of powerset but not the axiom of replacement (e.g. $V_{\omega + \omega}$).<|endoftext|> -TITLE: For all integers b, c, and d, if x is rational such that x^2+bx+c=d, then x is an integer -QUESTION [6 upvotes]: Prove or disprove the following statment: For all integers b, c,and d, if x is a rational number such that $x^2+bx+c=d$, then x is an integer. -This is a homework question from the book Discrete Mathematics for Computer Scientists by Stein, Drysdale and Bogart. -I since x is rational I thought I could start off with: -${(\frac{m}{n})}^2+b\frac{m}{n}=d-c$ -But I don't know where to go from here. -Or I could try using the quadratic formula -$x=\frac{1}{2}\left(\pm\sqrt{b^{2}-4c+4d}-b\right)$ -but I am very weak with elementary number theory that I don't know where to go. 
I am thinking that regardless of if $\sqrt{b^{2}-4c+4d}$ is an integer or not, the fact that I have -$x=\frac{1}{2}*\pm$ SomeNumber -means that x is not an integer. -I am new to writing proofs, and unfortunately, I don't really know how to prove this. Any hints would be appreciated. -Thank you. -Edit: By plugging in simple numbers, for example x=1, b=1, c=1 and d=3 I can see that x is probably an integer, for all integers b,c,and d - so that means my thinking about the quadratic formula is not correct. I will still work on this. -2nd Edit: Now I plug in more numbers and don't get integers. For example $x^2+2x+3=4$. I am also new to this site, so I am not sure if I should continue to edit the post or write in the comments sections anytime I think of something new. Please advise. -3rd Edit: I think I know what to do. The last section of the book covered universal quantifiers. I believe the authors are are trying to get me to realize that they are saying $\forall b, c, d \in Z$ and I only need to give one one example for which the assertion is untrue. And in my previous edit, b=2, c=3, and d=4 did not result in x being an integer. - -REPLY [4 votes]: This is simply the monic quadratic case of the Rational Root Test. You could specialize that proof, or else proceed similarly to various irrationality proofs for square-roots. $\: $ E.g. $\:$ below is a proof that I discovered in high-school. First I present the proof for square-roots - where the idea is clearer. -Theorem $\ $ For $\rm\: c\in \mathbb Z,\:$ any rational root $\rm\:r\:$ of $\rm\ x^2 = c\ $ is am integer. -Proof $\ $ Put $\rm\ \color{#0a0}{r = m/n}\ $ with $\rm\:(m,n) = 1.\:$ Then $\rm\ \color{#c00}{jm-kn =1}\;$ for some $\:\rm j,k \in \mathbb{Z}\,$ by Bezout. -Hence $\,\rm \color{#0a0}{0 = (m-nr)}\:(k+jr) = mk-njc + (\color{#c00}{jm-kn}) r \ \Rightarrow\ r = -mk+njc \ \in\ \mathbb{Z}\ \ \ $ QED -This proof easily extends to the root of a general monic quadratic as follows. -Theorem $\ $ For $\rm\:b,c\in\mathbb Z,\,$ any rational root $\rm\:r\:$ of $\rm\ x^2 = \color{#90f}{b\ x + c}\ $ is an integer. -Proof $\ $ Put $\rm\ \color{#0a0}{r = m/n},\ (m,n)=1,\,$ so $\rm\,(m\!-\!nb,n)=1\ $ so $\rm\, \exists\ j,k\in \mathbb Z\!:\ \color{#c00}{1 = j(m\!-\!nb)\!-\!kn} $ -Hence $\rm\, \color{#0a0}{0 = (m\!-\!nr)}(k\!+\!jr)\ =mk\! +\! (jm\!-\!kn)r\!-nj(\color{#90f}{br\!+\!c}) = mk\!-\!njc + (\color{#c00}{j(m\!-\!nb)\!-\!kn})r$ - -The same proof easily extends to higher degree polynomials that are monic (lead coef $=1).$ -If you learn about denominator ideals then you'll see that the above proof simply says that the denominator ideal of $\rm\:r\:$ contains $\rm\:n\:$ and $\rm\:nr = m,\:$ so it contains their gcd $\rm\:(n,m) = 1,\,$ so $\rm\ r\in \mathbb Z.$ Using Dedekind's notion of conductor ideal, the proof easily generalizes to higher degree monic polynomials, yielding that PIDs are integrally closed.<|endoftext|> -TITLE: Recursively defined systems are always consistent? -QUESTION [5 upvotes]: I was reading something which contained the following statement: - -It is a well-established mathematical result that theories consisting only of recursive definitions... are inherently consistent. - -Does this result have a name? Could anyone point me to literature about this? - -REPLY [8 votes]: It is not true in the most general sense but it is true in a certain sense. 
-One way that it is true in the sense of the Knaster–Tarski theorem: - -If $L$ is a complete lattice and $\phi\colon L \to L$ is an order-preserving function, then the set of fixed points of $\phi$ is a complete sublattice of $L$, and in particular the set of fixed points in nonempty. - -Many recursive constructions correspond to order-preserving maps on a complete lattice, and so by this theorem the constructions do have fixed points. Those fixed points correspond to the thing that the recursive definition is trying to construct. -For example, to construct the Borel sets on a topological space $X$, consider the lattice of all sets of subsets of $X$. This is a powerset lattice and so it is complete. Let $\phi$ be the following operator: given a family $F$, $\phi(F)$ contains every set that can be obtained as a countable union of sets from $F \cup F^c \cup O$ where $F^c$ is the set of complements of sets in $F$ and $O$ is the set of open sets in $X$. Then $\phi$ is a order-preserving operator, and its least fixed point is the family of Borel sets of $X$. -Now here is a sense in which recursive definitions do not always define something. It's related to the field of "general recursive functions" which was an early way that people studied computability. Suppose that we we write down a set of recursive definitions for a function: -$$ -f(0) = 5 \qquad f(x+1) = 2f(x)+x -$$ -That particular definition does define a total function $\mathbb{N} \to \mathbb{N}$. But in general there is no reason to expect an arbitrary set of rules will work. For example, -$$ -f(0) = 1 \qquad f(3x) = x + 2 \qquad f(x+1) = 2 -$$ -does not define a function. The point here is we are considering arbitrary definitions, not just definitions in the form of the first example. -Here is an example based on the Collatz conjecture; does it give a total function? -$$ f(0) = 0 \qquad f(1) = 1 \qquad f(2x) = f(x) \qquad f(2x+1) = f(6x+4)$$ -As you can see, the problem of telling which sets of rules actually define a total function is far from simple, and it only gets worse if you have a system of several mutually recursively defined functions. -There is a third aspect of the problem, relating to the foundational importance of inductive definitions. For example, in unformalized mathematics, the natural numbers are the smallest set of objects which contains 0 and is closed under successor. This sort of concrete inductive definition is one reason that people often feel that the natural numbers are more directly accessible to intuition than, say, arbitrary uncountable sets.<|endoftext|> -TITLE: Any resource of the applications of the theory of class fields -QUESTION [8 upvotes]: We all agree that the theory of class fields plays an eminent role in modern number theory. -Nevertheless, what was our main concern is that how to solve various Diophantine equations to which the clues do not appear clear in the theory of class fields. And my question is: - -With respect to the beauty and perfection of the theory of class fields, can we use it to solve some mysterious Diophantine equations? Or can we apply it to other fields in Mathematics, such as Complex Analysis? - -The last question arises as one of my friends used to tell me that there is a mathematician who developes the theory of complex analysis in terms of class fields in one of his books. Nonetheless, when I looked up to his books, I didn't find any hint to this. Moreover, when I asked him about this again, he didn't say much. 
This stimulates somehow my imagination such that I am wondering if there indeed is such a theory. -Any response is appreciated. - -REPLY [20 votes]: Let me address the question as to whether class field theory can be used to solve some Diophantine equations. The answer is certainly yes. One historical example is given by Kummer's work on Fermat's Last Theorem; this relied on algebraic number theoretic results to do with cyclotomic fields, much of which can be reinterpreted as special cases of class field theory. (One place to see this discussed is the nice historical chapter on Kummer's work by Michael Rosen -in the book Modular forms and Fermat's Last Theorem (Cornell, Silverman, Stevens eds.).) -At a more basic level, consider the question of solving the following three equations: -$$x^2 + y^2 = p$$ -$$ x^2 + 5 y^2 = p$$ -$$x^2 + 23 y^2 = p$$ -where $p$ is a prime and $x$ and $y$ are integers. -Essentially (i.e. excluding some small primes, namely $p = 2$ in the first, -$p = 2$ or $5$ in the second, and $p = 2$ or $23$ in the third) each of -these questions can be rephrased as asking whether the prime $p$ splits into a product of principal ideals in the extension $K:= \mathbb Q(\sqrt{-D})$, -where $D = 4,$ $20$, and $23$ respectively. -Now the question of whether $p$ splits is easily answered; it is just a -question of whether the Jacobi symbol $\bigl( \frac{p}{D} \bigr)$ is equal -to $1$ or not. -But the question of whether $p$ splits into a product of principal ideals is more subtle; it is a question of whether $p$ splits completely in the Hilbert class field -$H$ of $K.$ -Now in the case $D = 4$, we know that $K$ has class number one, and so $K = H$. -Thus we can solve $x^2 + y^2 = p$ if and only if $p \equiv 1 \bmod 4$. -If $D = 20$, then $K$ has class number two, and in fact $H = K(\sqrt{-1})$ -is the compositum of the field $\mathbb Q(\sqrt{-4})$ and $\mathbb Q(\sqrt{-20})$. Thus $p$ splits completely in $H$ if and only if $\bigl(\frac{p}{4}\bigr) = 1$ -and $\bigl(\frac{p}{20}) = 1$, i.e. if and only if $\bigl(\frac{p}{4}\bigr) = 1$ -and $\bigl(\frac{p}{5}) = 1$, i.e. if and only if $p \equiv 1 \bmod 4$ and $p \equiv \pm 1 \bmod 5$, i.e. if and only if $p \equiv 1 \text{ or } 9 -\bmod 20$. -Finally, if $D = 23$ then $K$ has class number three, and so $H$ is not an abelian extension of $\mathbb Q$; rather it is an $S_3$-extension. The equivalence of class fields and abelian extensions says that $H$ is not a class field of $\mathbb Q$, and so there is no congruence condition on $p$ that determines whether or not $p$ splits completely in $H$. Thus there is no congruence condition on $p$ that determines whether or not we can solve -$x^2 + 23y^2 = p$. -Instead, one has the following result from non-abelian class field theory -(due, in this case, to Hecke, but best understood from a modern viewpoint as being a special case of Langlands's general program for non-abelian class field theory): -We can solve $x^2 + 23y^2 = p$ if and only if the coefficient of $q^p$ in the product -$$q\prod_{n=1}^{\infty}(1-q^n)(1-q^{23 n}),$$ -which is a priori equal to $-1,0,$ or $2$, is in fact equal to $2$. -Computing the product, one finds e.g. that the first such prime is $p = 59 = -6^2 + 23\cdot 1^2$. -(This infinite product is a certain modular form, which is a particular kind of non-abelian analogue of a Jacobi symbol.) 
-Whether class field theory has applications to complex analysis, I don't know, -but the appearance of modular forms at the end of the above discussion shows that complex analysis has applications to class field theory.<|endoftext|> -TITLE: Proving continuous image of compact sets are compact -QUESTION [31 upvotes]: How to prove: Continuous function maps compact set to compact set using real analysis? -i.e. if $f: [a,b] \rightarrow \mathbb{R}$ is continuous, then $f([a,b])$ is closed and bounded. -I have proved the bounded part. So now I need some insight on how to prove $f([a,b])$ is closed, i.e. $f([a,b])=[c,d]$. From Extreme Value Theorem, we know that $c$ and $d$ can be achieved, but how to prove that if $c < x < d$, then $x \in f([a,b])$ ? -Thanks! - -REPLY [14 votes]: I feel using the sequence definition is far easier to understand is much more intuitive, and the proof is nice and clean. But be careful when making the statement that being closed and bounded implies compactness, because this only holds in Euclidean space (see Heine-Borel Theorem for more about this). -Let $f:M\to N$ be a continuous function and $M$ be a compact metric space. Now let $(y_n)$ be any sequence in $f(M)$ (the image of $f$). We need to show that there exists a subsequence $y_{n_{k}}$ that converges to some $y \in f(M)$ as $k\to \infty$. -We choose a sequence $(x_n)\in M$, and since $M$ is compact by definition we have that there exists a subsequence $(x_{n_{k}})$ which converges to some $p\in M$ as $k \to \infty$. Given that a continuous function preserves the convergence of sequences i.e. if $(x_n) \to p$ in $M$, then $f((x_n)) \to f(p)$ in the image, we have that $f((x_{n_{k}})) \to f(p)$. Since $f(p) \in f(M)$, we have that our image is compact and we obtain our desired result. -Hopefully this helps, and feel free to ask any more questions :).<|endoftext|> -TITLE: $m$ balls $n$ boxes probability problem -QUESTION [13 upvotes]: I encountered this problem in my probability class, and we weren't able to solve it, so I would like to know the answer. -If you have $m$ balls and $n$ boxes, with $n < m$, and you insert the balls into the boxes randomly, what is the probability that all the boxes have at least one ball in it? -The problem doesn't specify if the balls are distinguishable or not, so you may assume either, so another question would be, if you assume they are distinguishable will you get the same answer as assuming they are not distinguishable? (This would be great because I think the non-distinguishable case is easier). -I appreciate any insight on the problem. - -REPLY [14 votes]: This answer treats the balls and boxes as distinguishable so each pattern is of equal probability. I doubt there is a practical experiment to test this which does not also have them distinguishable. -The number of ways of putting $m$ balls in $n$ boxes is $n^m$ since each ball can go in any of the boxes. -The calculation for the number of ways of putting $m$ balls in $n$ boxes with each box having at least one ball is more complicated. Let's call it $A(m,n)$. If, when deciding what to do with the last ball, all the other boxes are full, then it can go in any of the $n$ boxes; if however one of the $n$ boxes is empty and the others full then it must go in the empty one. 
So -$$A(m,n) = n A(m-1,n) + n A(m-1,n-1)$$ -and clearly $A(m,1) = 1$ (there is only one way with one box) and $A(m,n) = 0$ for $m -TITLE: The general argument to prove a set is closed/open -QUESTION [8 upvotes]: I am taking a topology course and we are now learning open and closed set. I am a bit confused to how to prove that a set is closed or opened, how should I approach these kind of problems. For example: -Question 1: Let $(\mathcal{X},d)$ be an arbitrary metric space. Prove that any set which contains a finite number of points $\{x_1,x_2,\ldots,x_n\}$ is closed. -Solution: If we take the point $x_i$ where $1\leq i\leq n$ and no matter how small we make $r$, in the ball some points are outside of our set. Hence the set is closed. -Question 2: Let $(\mathcal{X},d)$ be an arbitrary metric space. Prove that $B(\mathcal{X},r)=\{y\in\mathcal{X}:d(x,y) -TITLE: Why abstract manifolds? -QUESTION [27 upvotes]: If we can use Whitney embedding to smoothly embed every manifold into Euclidean space, then why do we bother studying abstract manifolds, instead of their embeddings in $\mathbb{R}^n$? A vague explanation I have heard is that from this abstract viewpoint, we gain understanding into the intrinsic behavior of the manifold, without knowing anything about the ambient space in which it can be embedded. Can anyone give some examples of this, or any other reason why abstraction is necessary? - -REPLY [6 votes]: If you restricted the definition of manifolds to be "submanifolds of Euclidean space", you'd run into the problem that sometimes you want to do calculus on an object -- natural objects like homogeneous spaces, Grassmann manifolds, cut-and-paste objects like the "pac man" torus and such. But the formal restrictions on usage of manifolds would mean you would have to embed the manifold in Euclidean space before you could apply the machinery of calculus to it. In that sense the submanifold restiction is like a type of formal red-tape that gets in the way of understanding. Abstract manifolds dispense with that.<|endoftext|> -TITLE: How do you check if a sequence of numbers is truly random? -QUESTION [7 upvotes]: Suppose a source produces an indefinite sequence of positive integers. How can you check whether the numbers are generated truly randomly? - -REPLY [2 votes]: Given some time dependent data-set, autocorrelation can be used to detect randomness and lack thereof. Take a look at for example http://en.wikipedia.org/wiki/Correlogram<|endoftext|> -TITLE: Why is the generalized quaternion group $Q_n$ not a semi-direct product? -QUESTION [10 upvotes]: Why is the generalized quaternion group $Q_n$ not a semidirect product? - -REPLY [6 votes]: One characterization of the generalized quaternion group is: - -If $G$ is a non-abelian $p$-group which contains only-one subgroup of order $p$, then $G$ is generalized quaternion group [Hall-Theory of groups; Theorem 12.5.2]. - -So if we try to write the generalized quaternion group $Q_n$ as semi-direct product, then we should have a normal subgroup $N$, a subgroup $H$, with one necessary condition that $N\cap H=1$; which is not possible because of uniqueness of subgroup of order $p$; it will contained in all subgroups of $Q_n$. Hence the generalized quaternion group is not semi-direct product of smaller $p$-groups.<|endoftext|> -TITLE: What is happening in a linear algebra computation? -QUESTION [5 upvotes]: About a year ago I took a Linear Algebra class that was required for my degree. 
Unfortunately that class had an unidentified pre-requisite and started at a much higher level then I really needed. Going in I had no prior experience with linear algebra. I can definitely see how understanding linear algebra would be a very good thing to have in my field so I've been trying to piece it together ever since and have felt like I'm close but I just don't quite get it. I understand that linear algebra is a way to solve a lot of equations rapidly... In my mind this seems like that means finding values for the variables... but it didn't seem like we ever did... Instead we were doing things like multiplication of matrices and that made no sense. Or we would apply advance algorithms to get the matrix into certain forms which the reason for never made sense to me. So what does it mean to solve a system of equations? What are some real world examples that might make understanding linear algebra easier? Why are orthogonal and other types of matrices so special? Any insights, examples, suggestions are greatly appreciated! - -REPLY [10 votes]: I'm not going to try and answer all of your questions because it's really a very broad question. I will at least give you a start and then the best thing you can do is either take another course or open a book and learn some things, coming back to ask more specific questions along the way. -Surely you are familiar with equations like $5x = 6$ where we want to find all solutions $x$ where $x$ is in some field, say in this case the real numbers. In this case every nonzero element has an inverse so you can multiply by $1/5$ to get $x = 5/6$. -Although linear algebra isn't really just about solving equations, that's where it starts. It's called linear because we only want to solve equations that are linear in the unknown variable. The simplest case would be something like -$$ -x + y = 4, -2x - y = -1 -$$ -Actually we can just write this in matrix form. If you remember how to multiply a matrix, then we can write this system as $Ax = b$: -$$ -\begin{pmatrix} -1 & 1\\ -2 & -1 -\end{pmatrix} -\begin{pmatrix} -x \\ -y -\end{pmatrix} -= -\begin{pmatrix} -4\\ --1 -\end{pmatrix} -$$ -The reason why we multiply matrices is because we want to solve $Ax = b$ by multiplying by the inverse of $A$ to get $x = A^{-1}b$. Of course, not all matrices have inverses. So the set of all $n\times n$ matrices, with addition and multiplication is a ring but not a field. A ring is sort of like a field but now we remove the requirement where inverses exist for all nonzero elements. Also matrix multiplication is not commutative: $AB$ is not necessarily equal to $BA$. -In the above case $A$ does have an inverse, and you can multiply on the left by $A^{-1}$ (see if you can find it) to get the solution to this system of equations. -Thus we have found: multiplication of matrices helps us solve equations. -However, we are only beginning because finding the inverse of a matrix is tricky, so we study the different ways to represent matrices and calculate with matrices in order to more efficiently move them around. This is a bit vague but intentionally so since there is so much mathematics going on in the background which you need to learn. -Linear algebra is really about vector spaces. To appreciate the idea of a vector space you should first get some experience with abstraction by doing hundreds of problems. A vector space is just a set of elements together with addition and scalar multiplication that satisfy certain axioms. 
It turns out that matrices correspond to maps between vector spaces in a chosen basis of that space. This may not make too much sense to you now, but the important point is that putting matrices in different forms corresponds to changing the basis of the vector space in different ways. -The reason why we like to use vector spaces is because then we can concentrate on the algebraic properties of vector spaces without having to worry about specific numbers or equations, which then can be applied to all sorts of problems which have little do with solving equations. -The best thing you can do to understand linear algebra is to take a course/read a book and just start solving problems. It is impossible to really understand what it is about first and then practice doing it. The understanding comes with the practice. - -REPLY [4 votes]: Linear algebra is not necessarily about solving linear equations rapidly. Indeed, most algebraist don't care for speed except for mathematicians in numerics and computer algebra. -Linear Algebra is taught because of two dual reasons (i) it is best understood (ii) it is omnipresent within any higher mathematics. -So, constructs from linear de facto appear in any science that does anything beyond simple pen-and-paper-computation. Most prominent examples where linear algebra is used 'immediatly' compromise computer graphics and modeling biological/chemical/social systems, not to forget linear optimization or numerical linear algebra. -In further answer would (probably) end up in a huge, long list.<|endoftext|> -TITLE: (Question) on Time-dependent Sobolev spaces for evolution equations -QUESTION [5 upvotes]: I have got a question on so-called time-dependet Sobolev spaces - in particular as introduced in Evans book on PDE for the treatment of parabolic and hyperbolic PDE. -Let us take a look at a linear hyperbolic PDE in $n$ spatial dimension and $1$ time dimension. -$u_{,tt} - Lu = 0$ -where -$Lu = \sum_{i,j} a^{ij} u_{,i} u_{,j} + \sum_{i} b^i u_{,i} + c u$. -Furthermore, we impose zero-boundary conditions on a open, bounded, smooth-boundary set $U \subset \mathbb R^n$, and regard a finite time interval $R := [0,T]$ -First of all, it is natural to grant the time variable not special treatment, and regard this PDE as a PDE on $R \times U$. We then apply the test function machinery and obtain the notion of weak solution in a natural way. We might want the solution to be two-times weakly differentiable in time and space directions. -We then assume our function lies within $H^2(R) \otimes H^2_0{U}$ with the topological tensor product. I am not too well-versed with this construction, as it does not belong to my university's canon, but for $u \in H^2(R) \otimes H^2_0{U}$ you would expect $u_{,tt} \in H^0(R) \otimes H^2_0{U}$. -On the other hand, we can regard our hyperbolic equation as an second-order ODE on a Hilbert space. Then we might want our solution to be $u \in H^2( R, H^2_0(U) )$. In that case $u_{,tt} \in H^2( R, H^2_0(U) )$. -In most of the above, we can assume weaker regularity, i.e. replace $H^2$ by $H^1$. This works, as in the weak formulation, the second distributional derivatives may be handed over to a test function. Then we the second-derivatives in any direction can be found in $H^{-1}$. -This makes sense, at least to me. However, the book of Evans treats weak derivatives in the above setting in in a different way, and I do not understand the transition. 
-For example, he defines the solution of the above hyperbolic equation, assuming zero boundary conditions, to have the properties [1] - -$u \in L^2( \mathbb R, H_0^1(U) )$ -$u_{,t} \in L^2( \mathbb R, L^2(U) )$ -$u_{,tt} \in L^2( \mathbb R, H^{-1}(U) )$. - -This is clearly the ODE-on-VS approach, but it appears the time-derivative is taken over to the spatial derivatives. Of course, he may do so, as this setting is more general than what proposed as a weak solution in the above paragraphs. But then we might use, say, distributions as well. -Whence I wonder why he does so - whether this is just for the reader convenience for whatever reason, whether it really points to what the solution really behaves like most regularly. -Can somebody explain this to me? -[1] L.C.Evans, Partial Differential Equations, 2nd Edition, p.400. - -REPLY [4 votes]: Firstly, you shouldn't think of the weak solutions as ODE on VS. For ODE on VS (using something like the Hille-Yosida theorem for semigroups of linear operator), the correct regularity should be strong continuity: that $u \in C^1(R,\mathcal{H})$ where $\mathcal{H}$ is the Hilbert space. -Secondly, an intuitive explanation of why $u_{tt} \in L^2(R, H^{-1}(U))$. Just use the hyperbolic equation: $u_{tt} = L u$. So $u_{tt}$ should belong to the same space as $Lu$. With two (spatial) derivatives acting on $u\in L^2(R,H^1(U))$, it is natural that $Lu \in L^2(R,H^{-1}(U))$, and hence also $u_{tt}$. -Thirdly: I don't think a naive application of the weak solution idea should give you what you claimed ($H^2(R)\otimes H^2_0(U)$). Compare, say, to the elliptic case of a function in a box. Demanding that a function admits 2 weak derivatives in each direction separately is rather stronger than demanding the function admits 2 weak derivative overall. The former is not isotropic. The latter is. (In particular, the former says that $\partial_x\partial_x\partial_y\partial_y\partial_z\partial_z u \in L^2$, which is much stronger than just $u\in H^2$.) In particular, if you have a function $u \in H^1(R\times U)\cap C^\infty(R\times U)$, you'd see that the $H^1(R\times U)$ norm is comparable to the sum of the $u\in L^2(R, H^1(U))$ and $u_t \in L^2(R, L^2(U)) = L^2(R\times U)$ norms.<|endoftext|> -TITLE: Solving $\displaystyle \cos(x-\alpha)\cos(x-\beta) = \cos{\alpha}\cos{\beta}+\sin^2{x}$ -QUESTION [8 upvotes]: Solve $\displaystyle \cos(x-\alpha)\cos(x-\beta) = \cos{\alpha}\cos{\beta}+\sin^2{x}$. -My attempt: -$\displaystyle \cos(x-\alpha)\cos(x-\beta) = \cos{\alpha}\cos{\beta}+\sin^2{x} \Rightarrow \cos(x-\alpha)\cos(x-\beta)-\cos{\alpha}\cos{\beta} = \sin^2{x},$ and -$$\begin{aligned}LHS & = \frac{1}{2}\cos\left(2x-\alpha-\beta\right)+\frac{1}{2}\cos\left(\alpha-\beta\right)-\frac{1}{2}\cos\left(\alpha-\beta\right)-\frac{1}{2}\cos\left(\alpha+\beta\right)\\& = \frac{1}{2}\cos\left(2x-\alpha-\beta\right)-\frac{1}{2}\cos\left(\alpha+\beta\right) \\& = -\sin\left(\frac{2x-\alpha-\beta+\alpha+\beta}{2}\right)\sin\left(\frac{2x-\alpha-\beta-\alpha-\beta}{2}\right) \\& = -\sin{x}\sin\left(x-\alpha-\beta\right)\end{aligned}$$ -Thus, we have: -$$\begin{aligned} & -\sin{x}\sin\left(x-\alpha -\beta\right) = \sin^2{x} \\& \Rightarrow \sin^2{x}+\sin{x}\sin\left(x-\alpha-\beta\right) = 0 \\& \Rightarrow \sin{x}\left(\sin{x}+\sin\left(x-\alpha-\beta\right)\right) = 0\end{aligned} $$ -So either $\sin{x} = 0$ or $\sin{x} = \sin\left(\alpha+\beta-x\right)$. 
If $\sin{x} = 0$, then we have $\sin{x} = 0 \Rightarrow \sin{x}$ $= \sin\left(0\right)$ $\Rightarrow x = n\pi$ or $(2n+1)\pi$ for $n\in\mathbb{Z}$ -- we can write this as $k\pi$, where $k\in\mathbb{Z}$. If, on the other hand, $\sin{x} = \sin\left(\alpha+\beta-x\right)$ then $x = 2n\pi+\alpha+\beta-x \Rightarrow x = n\pi+\frac{1}{2}\left(\alpha+\beta\right)$, or $ x = 2n\pi+\alpha+\beta-x $ (EDIT: this was meant to be $x = (2n+1)\pi-\alpha-\beta+x$), which contains no solutions (is that the right way to put it?). Thus the solutions for the equation are $n\pi+\frac{1}{2}\left(\alpha+\beta\right)$ and $k\pi$, for any integers $k$ and $n$. -Question: The answer in the book has the condition $\alpha+\beta \ne (2m+1)\pi$ -- why is that? -Request: If you have the time, please critique the way my solution is presented (reasoning, use of notation, flow etc - I've an admission test in which this plays big part soon). Is my use of $\Rightarrow$ ok? - -REPLY [5 votes]: One point: why are you going back from $x = n\pi+\frac{1}{2}(\alpha+\beta)$ to $x=2n\pi + \alpha+\beta-x$? And why do you say there are no solutions? -That said: you are incorrect in claiming that $\sin(x) = \sin(\alpha+\beta-x)$ implies that $x = 2n\pi + \alpha+\beta-x$. That is not the only ways in which two values of sine may be equal. After all, sine takes the same value at $0$ and at $\pi$, even though $0$ and $\pi$ don't differ by a multiple of $2\pi$. And it takes the value $\frac{1}{2}$ at both $\frac{\pi}{6}$ and at $\frac{5\pi}{6}$, even though the difference is not even an integer multiple of $\pi$. So that part of the derivation is incorrect. -Added/Edit. From the comments, it seems you meant to write the correct conditions: either $x$ and $\alpha+\beta-x$ differ by a multiple of $2n\pi$ (this is the condition you checked), or else they are "symmetric around $\pi/2$ or $3\pi/2$ up to a multiple of $2\pi$", which gives the condition $x = (2m+1)\pi - (\alpha+\beta-x)$. -Your error was discarding/completely ignoring the second condition. The second condition does not give an equation for $x$, but it does give an equation for $\alpha+\beta$! It says that you must have $0 = (2m+1)\pi - (\alpha+\beta)$, or that $\alpha+\beta=(2m+1)\pi$ must hold. In other word: if $\alpha+\beta = (2m+1)\pi$, then you will always have $\sin(x) = \sin(\alpha+\beta-x)$, regardless of what the value of $x$ is. -So, in summary: if $\alpha+\beta = (2m+1)\pi$ for some integer $m$, then every $x$ is a solution. And if $\alpha+\beta\neq (2m+1)\pi$, then your derivation is correct (once you fix the analysis of this second case).<|endoftext|> -TITLE: Braid groups and representations -QUESTION [5 upvotes]: I was wondering if $\mathbb{Z} \wr S_n$, where $\mathbb{Z}$ is the usual group of integers, $S_n$ the symmetric group on n elements and $\wr$ the wreath product of two groups, contains the braid group $B_n$. -I was also wondering if $n+1$-dimensional matrices of the form : -$$\begin{bmatrix} 1&0&0 \\\\ 1&0&1 \\\\ 1&1&0 \end{bmatrix}$$ for B2 -$$\begin{bmatrix} 1&0&0&0 \\\\ 1&0&1&0 \\\\ 1&1&0&0 \\\\ 0&0&0&1 \end{bmatrix}$$ -and -$$\begin{bmatrix} 1&0&0&0 \\\\ 0&1&0&0 \\\\ 1&0&0&1 \\\\ 1&0&1&0 \end{bmatrix}$$ - for B3 -and so on... form a representation of the braid group $B_n$. -Thank you for your help -A. Popoff - -REPLY [5 votes]: It is certainly not true that $\mathbb{Z}\wr S_n$ contains $B_n$, for any $n>2$. Indeed, by definition, $\mathbb{Z}\wr S_n$ has a free abelian subgroup of finite (indeed, $n!$) index. 
-On the other hand, the kernel of the natural map of pure braid groups $PB_n\to PB_{n-1}$ obtained by forgetting a strand is the free group of rank $n-1$ (this follows from the Birman Exact Sequence, I suppose), so if $n>2$, $PB_n$ (and hence $B_n$) contains a non-abelian free group. If it were contained in $\mathbb{Z}\wr S_n$ then this non-abelian free group would have an abelian subgroup of finite index, which is absurd. -[Note: this answer is essentially the same as Steve D's comment.]<|endoftext|> -TITLE: Given particle undergoing Geometric Brownian Motion, want to find formula for probability that max-min > z after n days -QUESTION [12 upvotes]: Consider a particle undergoing geometric brownian motion with drift $\mu$ and volatility $\sigma$ e.g. as in here. Let $W_t$ denote this geometric brownian motion with drift at time $t$. I am looking for a formula to calculate: -$$ -\mathbb{P}\big(\max_{0 \leq t \leq n} W_t - \min_{0\leq t \leq n} W_t > z\big) -$$ -The inputs to the formula will be $\mu$, $\sigma$, $z$, and $n$. - -REPLY [2 votes]: The following horrible formula for the joint distribution of max, min and end value of a Brownian motion was copied without guarantees from the Handbook Of Brownian Motion (Borodin/Salminen), 1.15.8, p.271. First, for simplicity, this is only written for $\sigma=1,t=1$, and the more general case comes directly from scaling. If we shorten W as the Brownian Botion at t=1, m as the minimum and M as the maximum over $[0,1]$, then for $a < min(0,z) \le max(0,z) < b$ it holds -$$ -P(a < m, M < b, W \in dz) = \frac{1}{\sqrt{2\pi}}e^{(\mu z-\mu^2/2)} \cdot \sum_{k =-\infty}^{\infty} \Bigl(e^{-(z+2k(b-a))^2/2} - e^{(z-2a + 2k(b-a))^2/2} \Bigr) dz\; . -$$ -(Apologies for using z here in a different context.) If one really wants to, one can compute from this an even more horrible formula for the above probability. It is now in principle possible to derive from this a formula for what you want, by finding the density function $p_{m,M,W}$, and using -$$ - P(e^M-e^m\le r) = \int_{(x,y,z)\ :\ e^x \le e^z \le e^y \le e^x + r} p_{m,M,W}(x,y,z) d(x,y,z)\;, -$$ -but I shudder at the monster I expect to fall out from this. It might be better to give up and simulate the probability in question, and find some asymptotics. -However, if you would like to proceed with it, I suggest you look not into the Handbook Of Brownian Motion, but rather into this paper, as it is much more readable.<|endoftext|> -TITLE: Sequence of powers of Gaussian integers capturing all positive integers? -QUESTION [17 upvotes]: Fix a complex number $z=x+iy$ where $x,y\in \mathbf{Z}$ Consider the sequence generated by the powers $$z^0, z^1, z^2, z^3,z^4 \ldots$$ The question is whether it is possible to capture any positive integer as either the real part or the imaginary part of this sequence of the powers of a fixed complex number. From some calculation and reasoning, - -it is very easy to see that in order to generate all positive integers, $x$ and $y$ have to be relatively prime; - -it appears that the sequence of real parts and the sequence of imaginary parts blows up very early and hence it would not be unwise to conjecture that such a complex number does not exist. -Is there any way to get hold of this problem? Thanks in advance. - - -My curiosity partly grew out of observations on various polynomial bijection questions e.g., I saw on MathOverflow. I hope that at least the simple statement of the problem appeals to one's thoughts. 
- -REPLY [21 votes]: Here's another, simpler proof: -$$z^5=(x^5-10x^3y^2+5xy^4) + \mathrm{i}(y^5-10y^3x^2+5yx^4) -\stackrel{\mathrm{A}}{=} -x^5+\mathrm{i}y^5 -\stackrel{\mathrm{B}}{=} -x+\mathrm{i}y=z\mod 10\;.$$ -A is because either $x$ or $y$ must be even, else $z^2$ has only even parts. B is because as a ring $\mathbb{Z}/10\mathbb{Z}$ is isomorphic to $(\mathbb{Z}/5\mathbb{Z})\oplus(\mathbb{Z}/2\mathbb{Z})$ and taking the fifth power is the identity in each component. -Thus the residues modulo $10$ have period $4$, so there are at most $8$ different ones and we can't produce numbers with the remaining two.<|endoftext|> -TITLE: Infinitely differentiable function with compact support -QUESTION [5 upvotes]: I already know that the function -$$ -f(x) = -\begin{cases} -\exp(- \frac{1}{x^2}), \quad x > 0 \\ -0 , \quad x \leq 0 - -\end{cases} - -$$ -is infinitely differentiable throughout $\mathbb R$. The only real problem, of course, lies in showing that $f^{(k)} (0) = 0$ for any positive integer $k$. What I have not been able to deduce is that -$$ -\phi(x) = -\begin{cases} -\exp(- \frac{1}{1 - x^2}), \quad |x| < 1 \\ -0 , \quad |x| \geq 1 - -\end{cases} - -$$ -is also infinitely differentiable throughout $\mathbb R$, using the previous function. The problem now is finding out what happens at $x = 1,-1$. Does the substitution $\zeta ^2 = 1 - x^2$ work, or is there another way to prove this? -Thank you all! - -REPLY [5 votes]: This is much easier if you first show that -$g(x) = \begin{cases} - e^{-\frac{1}{x}} & \text{for } x \gt 0 \\\\ - 0 & \text{for } x \leq 0 -\end{cases}$ -is smooth. Then it's easy to see that $f(x) = g(x^2)$ and $\phi(x) = g(1-x^2)$, i.e. $f$ and $\phi$ are compositions of smooth functions. - -REPLY [2 votes]: Hint 1: For every positive $c$ and every real $a$, the function $f_{a,c}$ defined by $f_{a,c}(x)=\exp(-c/(x-a))$ if $x>a$ and $f_{a,c}(x)=0$ if $x\le a$ is infinitely differentiable on the real line. -Hint 2: use $f_{a,c}$ to build a function infinitely differentiable on the real line, zero for $x\ge a$ and positive for $x -TITLE: Monotonic behavior of a function -QUESTION [8 upvotes]: I have the following problem related to a statistics question: -Prove that the function defined for $x\ge 1, y\ge 1$, -$$f(x,y)=\frac{\Gamma\left(\frac{x+y}{2}\right)(x/y)^{x/2}}{\Gamma(x/2)\Gamma(y/2)}\int_1^\infty w^{(x/2)-1}\left(1+\frac{xw}{y}\right)^{-(x+y)/2} dw$$ -is increasing in $x$ for each $y\ge 1$ and decreasing in $y$ for each $x\ge 1$. (Here $\Gamma$ is the gamma function.) -Trying to prove by using derivatives seems difficult. - -REPLY [8 votes]: Let $W \sim F(x, y)$ where $F(x,y)$ stands for an $F$ distribution with degrees of freedom $x$ and $y$. Then, the quantity -$$ -\mathbb{P}(W \geq 1 ) = f(x,y)=\frac{\Gamma\left(\frac{x+y}{2}\right)(x/y)^{x/2}}{\Gamma(x/2)\Gamma(y/2)}\int_1^\infty w^{(x/2)-1}\left(1+\frac{xw}{y}\right)^{-(x+y)/2} \mathrm{d}w \> . -$$ -From this, I think you can find the answer in the reference below. - -B. K. Ghosh, Some monotonicity theorems for $\chi^2$, $F$ and $t$ distributions with applications, J. Royal Stat. Soc. B, vol. 35, no. 3 (1973), pp. 480-492. - -Incidentally, note that $W = \frac{y}{x} \frac{U_{xy}}{1-U_{xy}}$ where $U_{xy} \sim \mathrm{Beta}(x/2, y/2)$ and so $\mathbb{P}(W \geq 1) = \mathbb{P}(U_{xy} \geq (1+y/x)^{-1})$.<|endoftext|> -TITLE: Do you know of any Calculus text defining the exponential and logarithm functions in an alternative way? 
-QUESTION [18 upvotes]: The story in nearly every introductory Calculus book is well known by everybody: you don't have the "right" to raise a number to an irrational power, so forget exponents for now and let's take a look at $y=x^{-1}$. How odd, the innocent formula for a power function's antiderivative breaks down but gee, it must have an antiderivative, it's smooth! Let's examine its properties... -...and in the end, Rosebud was his sled, no, wait, the mysterious antiderivative turns out to have an inverse that corresponds exactly to the elementary school concept of exponents, only it works for irrational exponents too! The hero wins! The End. -But what if we start from the opposite end? Start with the innocent, only-defined-for-rationals (so far) exponential function $y=k^x$, $k>0$, and if $x_0$ is irrational, prove that, as $x$ (while staying rational) approaches $x_0$, $k^x$ approaches some specific real number. Define that such number is $k^{x_0}$. -And from there, prove that our New! Improved! $k^x$ is continuous, has a derivative that's also an exponential, that there is some $k=e$ for which the exponential is its own derivative, that the inverse of $e^x$ has $x^{-1}$ as its derivative etc etc... -Do you know of any Calculus text that takes that approach? - -REPLY [4 votes]: I like the approach Lang takes in Undergraduate Analysis. He defines the exponential as a function that satisfies the following differential equation subject to specified initial conditions: -$$ - -f^{\prime} = f, \;f(0)=1 - -$$ -Using these assumptions he shows that if $f$ exists then it is unique. Later in the text he proves existence with power series. He gives an analagous treatment for $sin(x)$ and $cos(x)$. Fitzpatrick's Advanced Calculus takes a similar approach<|endoftext|> -TITLE: Exponential family representation of multi-variate Gaussians -QUESTION [19 upvotes]: I'm a bit stumped by the exponential family representation of a multi-variate Gaussian distribution. Basically, the exponential form is a generic form for a large class of probability distributions. The standard form is -$$f_X(x) = \exp[\theta' T(x) + F(\theta)]$$ -where $\theta$ is a set of parameters (based on $\mu$ and $\Sigma$), $T(x)$ is a vector of sufficient statistics, and $F$ is a function of the parameters that ensures the distribution is a pdf, i.e., sums to one. For more information on this form, see http://www.cs.columbia.edu/~jebara/4771/tutorials/lecture12.pdf, http://en.wikipedia.org/wiki/Exponential_family, etc. -The "conversion" for a multi-variate Gaussian distribution to exponential family form is listed as -$$\theta = [\Sigma^{-1}\mu, -\frac{1}{2}\Sigma^{-1}]'$$ -$$T(x) = [x, x x']'$$ -but this is confusing because the outer product $x x'$ is a matrix and $-\frac{1}{2}\Sigma^{-1}$ is also a matrix. Thus, it seems the product between $\theta$ and $T(x)$ should result in a scalar "entry" and a matrix "entry". Obviously, this expression needs to evaluate to a scalar. -The inner product works fine in the scalar case, and I understand this conversion is computed by manipulating to the quadratic form $x'Ax + b'x$. -Still, it seems that I am completely missing something here. Thanks for your help. - -REPLY [22 votes]: It has been such a long time since you asked but I just want to give a proper answer with full equations. 
\begin{align*} p(x) &= \frac{1}{\sqrt{\left|2\pi \Sigma \right|}} \exp \left\{-\frac{1}{2}\left(x-\mu\right)'\Sigma^{-1}\left(x-\mu \right) \right\}\\ &= \exp \left\{-\frac{1}{2}\log\left(\left|2\pi\Sigma \right| \right) \right\}\exp \left\{-\frac{1}{2}\left(x-\mu\right)'\Sigma^{-1}\left(x-\mu \right) \right\}\\ &= \exp \left\{-\frac{1}{2}\left[\underbrace{x'\Sigma^{-1}x - 2\mu'\Sigma^{-1}x}_{\theta'T\left(x\right)} + \mu'\Sigma^{-1}\mu + \log \left(\left|2\pi\Sigma\right| \right)\right] \right\} \end{align*}
To rearrange the original equation into the form of an exponential family, we need to use the relationship between the Frobenius product and the vectorizing operator:
\begin{align*} x'\Sigma^{-1}x &= \Sigma^{-1}:xx'\\ &= \operatorname{vec}\left(\Sigma^{-1}\right)' \,\operatorname{vec}\left(xx' \right) \\ \mu' \Sigma^{-1} x &= \left(\Sigma^{-1}\mu \right)'x\end{align*}
$$\therefore\ x'\Sigma^{-1}x - 2\mu'\Sigma^{-1}x = \begin{bmatrix}\operatorname{vec}\left(\Sigma^{-1}\right) \\ -2\Sigma^{-1}\mu \end{bmatrix}'\begin{bmatrix}\operatorname{vec}\left(xx'\right) \\ x \end{bmatrix}$$<|endoftext|>
TITLE: Hyperbolic metric on the torus?
QUESTION [7 upvotes]: Here is a silly mistake I am making: where exactly is the mistake?
I know that the torus cannot carry a metric of constant curvature $-1$ (a hyperbolic metric).
But what if I do this: the upper half-plane and $\mathbb{C}$ are diffeomorphic by a diffeomorphism, say $\phi$; pull the hyperbolic metric from the upper half-plane to the complex plane $\mathbb{C}$, and then quotient it by $\mathbb{Z}\oplus\mathbb{Z}$ to get a hyperbolic metric on the torus. Impossible! But what am I missing?

REPLY [14 votes]: A bit more detail for the answers by Jason and Ryan:
You have some action of $\mathbb{Z}^2$ on the plane. Conjugate that action by $\phi$ to find an action of $\mathbb{Z}^2$ on the upper-half plane. The question now is: how can $\mathbb{Z}^2$ act on the upper-half plane via isometries?
Let $\alpha$ and $\beta$ be the two generators of $\mathbb{Z}^2$ and suppose that neither acts via the identity. The classification of isometries tells us that $\alpha$ and $\beta$ are either elliptic, parabolic, or hyperbolic. Since they commute with each other, an analysis of fixed points shows that they must have the same type. Thus their action on the plane can be reduced to an isometric action on a circle (elliptic case) or on a line. After some $1$-dimensional work we find that the reduced, and hence the original, action is either unfaithful or not discrete.<|endoftext|>
TITLE: Question regarding Wilson's Theorem
QUESTION [7 upvotes]: Let $p$ be prime and $a$ an integer. When does $(p - 1)! + 1 = p^a$ hold for some $a$?
For example:
$$p = 5 \implies (5 - 1)! + 1 = 25 = 5^2$$
$$p = 7 \implies (7 - 1)! + 1 = 721 = 7 \cdot 103$$
Any idea?
Thanks,

REPLY [5 votes]: A Wilson prime is a prime number $p$ such that $p^2$ divides $(p-1)!+1$. The only known Wilson primes are 5, 13, and 563. The connection to your question: if $(p-1)!+1=p^a$ with $a\ge 2$, then in particular $p^2$ divides $(p-1)!+1$, so $p$ must be a Wilson prime. (For $a=1$ one would need $(p-2)!=1$, which forces $p\le 3$.)<|endoftext|>
TITLE: Finding limit of function with more than one variable: $\lim_{(x,y)\to(0,0)} \frac{xy}{\sqrt{x^2+y^2}}$
QUESTION [7 upvotes]: $$\lim_{(x,y)\to(0,0)} \frac{xy}{\sqrt{x^2+y^2}}$$
Approaching $(0,0)$ along $x$ or along $y$ both result in the limit approaching $0$, so you want to make sure that the limit exists by doing more tests.
My solutions manual uses $x = y^2$ (or $y = x^2$). Why either of those? Why not $y=x$ or $x=y$? Why a parabola?

REPLY [8 votes]: None of those choices suffices to prove that the limit is $0$, so I don't know what the solution writer meant.
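(Editorial sketch, not part of the original answer: the manual's suggested paths, checked numerically in Python. This is evidence only, never a proof, which is exactly the point the next sentence makes. The function name `f` and the chosen paths are mine.)

    import math

    # editorial sketch: sample the function along a few approach paths
    def f(x, y):
        return x * y / math.sqrt(x**2 + y**2)

    paths = {
        "y = x":   lambda t: (t, t),
        "y = x^2": lambda t: (t, t**2),
        "x = y^2": lambda t: (t**2, t),
    }

    for name, path in paths.items():
        values = [f(*path(10.0**-k)) for k in range(1, 6)]
        print(name, ["%.1e" % v for v in values])  # every column shrinks toward 0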
No finite number of ways to approach $(0,0)$ can be enough to show that the limit exists. On the other hand, in a case where the limit doesn't exist at a point, one way to show it in some cases is to show that the function approaches different limits as you approach the point along two different curves.
In this case, $(|x|-|y|)^2\geq 0$ implies $|xy|\leq\frac{1}{2}(x^2+y^2)$, so that $$\left|\frac{xy}{\sqrt{x^2+y^2}}\right|\leq\frac{1}{2}\sqrt{x^2+y^2}.$$ This makes it easy to see that the limit is $0$.<|endoftext|>
TITLE: What is the difference between outer measure and Lebesgue measure?
QUESTION [46 upvotes]: What is the difference between outer measure and Lebesgue measure?
We know that there are sets which are not Lebesgue measurable, whereas we know that outer measure is defined for any subset of $\mathbb{R}$.

REPLY [2 votes]: Lebesgue outer measure ($m^*$) is defined for every set $E$ of real numbers, whereas Lebesgue measure ($m$) is defined only for the measurable sets of real numbers, even though both of them are set functions.
by Geleta Tadele Mohammed<|endoftext|>
TITLE: Solve $\sin(5A) + \cos(5A)\sin(A) - \cos(3A) = 0$
QUESTION [6 upvotes]: How do you solve this equation for $A$:
$$\sin(5A) + \cos(5A)\sin(A) - \cos(3A) = 0$$
I've tried expanding it many times, but I can't seem to be able to reduce it to a format I can work with. Is there a simpler method of solution than repeated expansion?

REPLY [7 votes]: (Note: the computation below treats the last term as $\sin(3A)$ rather than the question's $\cos(3A)$; the same substitution method applies to $\cos(3A)$, with $\frac{x^3+x^{-3}}{2}$ as the last fraction.)
$$\sin(5A) + \cos(5A) \sin(A) - \sin(3A) = 0$$
Let $x = e^{iA}$ and use De Moivre's,
$$\frac{x^5 - x^{-5}}{2i} + \frac{x^5 + x^{-5}}{2} \frac{x - x^{-1}}{2i} - \frac{x^3-x^{-3}}{2i} = 0$$
Multiply by $4i x^6$,
$$2(x^{11} - x) + (x^{10} + 1)(x^2 - 1) - 2(x^9 - x^3) = (x^2 - 1)(x^{10} + 2x^9 + 2x + 1) = 0$$
The phase of each root of the polynomial above (the ones with $|x| = 1$ at least) is a solution $A$ to your equation (up to an integer addition of $2\pi$).<|endoftext|>
TITLE: Can we say that $V \cong V^*$ is not natural?
QUESTION [9 upvotes]: Can we say that an isomorphism $V \cong V^*$ is not natural? It seems so intuitively, but formally the notion of a natural transformation between a functor and a cofunctor is not defined (or is it?).

REPLY [19 votes]: This is more of a long comment on wildildildlife's answer. A natural question after reading that proof is: what happens if we restrict to the groupoid of finite dimensional vector spaces and their isomorphisms? (It may seem that picking $L=0$ is a bit of a "cheat".) The fact is that there is still no equivalence between the identity and the dual functor. The point is that the collection of isomorphisms $\{\alpha_V \colon V \to V^\ast \}$ is only allowed to depend on $V$, i.e. there must be a specific morphism for every $V$ that makes every diagram you can cook up using your category commutative.
In particular we can look at diagrams of the form
$$\begin{array}{ccc} V & \xrightarrow{\ \alpha_V\ } & V^* \\ \downarrow & & \uparrow \\ V & \xrightarrow{\ \alpha_V\ } & V^* \end{array}$$
(the left vertical map is $A$ and the right vertical map is $A^\ast$). Now we get $\alpha_V = A^\ast \alpha_V A$ for all $A \in \mathrm{GL}(V)$. Equivalently, if we use $\alpha_V$ to identify two bases of $V$ and $V^\ast$, we must have $\mathrm{id}_V = A^T A$ for all $A \in \mathrm{GL}(V)$, which is absurd.
This point of view also leads directly to what condition we need to impose on the category for the existence of an equivalence like this: we need to find a category of vector spaces where the automorphisms $A \colon V \to V$ are the orthogonal matrices (wrt a basis).
The most natural thing to consider is then finite dimensional spaces equipped with an inner product¹ and morphisms preserving the inner product. Hence we have rediscovered the principle that you can identify a vector space with its dual precisely when it has an inner product.
¹ Remark: By an inner product I really mean an arbitrary nondegenerate bilinear form, so over $\mathbf R$ or $\mathbf C$ this is non-standard usage. Note that this category is automatically a groupoid, since any map of vector spaces preserving the inner product is an isomorphism.<|endoftext|>
TITLE: Show $\lim_{n\to \infty}n\int_{0}^{\frac{\pi}{2}}(1-\sqrt[n]{\sin(x)})\,\mathrm{d}x = \frac{\pi \ln(2)}{2}$
QUESTION [11 upvotes]: Here is an interesting limit of an integral I do not know how to begin. Any help is greatly appreciated.
$$\lim_{n\to \infty}n\int_{0}^{\frac{\pi}{2}}(1-\sqrt[n]{\sin(x)})\,\mathrm{d}x$$
I know it converges to $\frac{\pi \ln(2)}{2}$, but how?
Thanks much

REPLY [3 votes]: Since
$$\lim_{n\to\infty}n\left(x^{1/n}-1\right)=\log(x)\tag{1}$$
converges monotonically,
$$\begin{align} \lim_{n\to\infty}n\int_0^{\pi/2}\left(1-\sqrt[\large n]{\sin(x)}\right)\,\mathrm{d}x &=-\int_0^{\pi/2}\log(\sin(x))\,\mathrm{d}x\\ &=-\frac12\int_0^\pi\log(\sin(x))\,\mathrm{d}x\\ &=-\int_0^{\pi/2}\log(\sin(2x))\,\mathrm{d}x\\ &=-\int_0^{\pi/2}\Big(\log(2)+\log(\sin(x))+\log(\cos(x))\Big)\,\mathrm{d}x\\ &=-\frac\pi2\log(2)-2\int_0^{\pi/2}\log(\sin(x))\,\mathrm{d}x\tag{2} \end{align}$$
Solving $(2)$, we get
$$-\int_0^{\pi/2}\log(\sin(x))\,\mathrm{d}x=\frac\pi2\log(2)\tag{3}$$<|endoftext|>
TITLE: calculating $a^b \!\mod c$
QUESTION [7 upvotes]: What is the fastest way (general method) to calculate the quantity $a^b \!\mod c$? For example $a=2205$, $b=23$, $c=4891$.

REPLY [7 votes]: Let's assume that $a,b,c$ referred to here are positive integers, as in your example.
For a specific exponent $b$, there may be a faster (shorter) method of computing $a^b$ than binary exponentiation. Knuth has a discussion of the phenomenon in Art of Computer Programming Vol. 2 (Seminumerical Algorithms), Sec. 4.6.3 and the index term "addition chains". He cites $b=15$ as the smallest case where binary exponentiation is not optimal, in that it requires six multiplications, but $a^3$ can be computed in two multiplications, and then $(a^3)^5$ in three more for a total of five multiplications.
For the specific exponent $b=23$ the parsimonious addition chain involves the exponents (above 1) 2, 3, 5, 10, 13, at which point $a^{23} = (a^{10})(a^{13})$, for a total of six multiplications. Binary exponentiation for $b=23$ requires seven multiplications.
Another approach that can produce faster results when $b$ is large (not in your example) depends on knowing something about the base $a$ and modulus $c$. Recall from Euler's generalization of Fermat's Little Thm. that if $a,c$ are coprime, then $a^d \equiv 1 \bmod c$ for $d$ the Euler phi function of $c$ (the number of positive integers less than $c$ and coprime to it). In particular if $c$ is a prime, then by Fermat's Little Thm. either $c$ divides $a$, so that $a^b \equiv 0 \bmod c$, or else $a^b \equiv a^e \bmod c$ where $e = b \bmod (c-1)$, since $\phi(c) = c-1$ for a prime $c$.
If the base $a$ and modulus $c$ are not coprime, then it might be advantageous to factor $a$ into its greatest common factor with $c$ and its largest factor that is coprime to $c$.
Also it might be advantageous if $c$ is not prime to factor it into prime powers and do separate exponentiations for each such factor, piecing them back together via the Chinese Remainder Thm.
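(Editorial sketch in Python, not the answerer's code: the square-and-multiply and CRT ideas just described. The helper names are mine; `pow(p, -1, q)` is Python's built-in modular inverse, available in Python 3.8+, and the CRT recombination assumes the two moduli are coprime.)

    # editorial sketch: square-and-multiply, a^b mod c in O(log b) multiplications
    def power_mod(a, b, c):
        result = 1
        a %= c
        while b > 0:
            if b & 1:            # current low bit of the exponent is 1
                result = result * a % c
            a = a * a % c        # square
            b >>= 1
        return result

    # CRT recombination for c = p*q with gcd(p, q) = 1
    def crt(r_p, p, r_q, q):
        inv = pow(p, -1, q)      # modular inverse of p mod q (exists by coprimality)
        return (r_p + p * ((r_q - r_p) * inv % q)) % (p * q)

    a, b, p, q = 2205, 23, 67, 73
    assert power_mod(a, b, p * q) == crt(power_mod(a, b, p), p,
                                         power_mod(a, b, q), q) == pow(a, b, p * q)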
In your example $c = 4891 = 67\cdot 73$, so you might compute $a^b \bmod 67$ and $a^b \bmod 73$ and combine those results to get $a^b \bmod c$. This is especially helpful if you are limited in the precision of integer arithmetic you can do.<|endoftext|>
TITLE: What are the left and right ideals of matrix ring? How about the two sided ideals?
QUESTION [38 upvotes]: What are the left and right ideals of matrix ring? How about the two sided ideals?

REPLY [24 votes]: The left ideals of the matrix ring $S = M_n(R)$ over a (unital, associative) ring $R$ are in a one-to-one correspondence with the left submodules of a free left $R$-module $R^n$ of rank $n$.
Given any left $R$-submodule $K\le R^n$, define the ideal $I(K)$ as those matrices whose rows are elements of $K$. If $M$ is such a matrix and $A$ is an arbitrary $n\times n$ matrix over $R$, then $AM$ is a matrix whose rows are $R$-linear combinations of the rows of $M$, and so also contained in $K$. Similarly, by multiplying by elementary matrices (matrix units even), you get that every left ideal $I$ determines a submodule $K(I)$ consisting of the first rows of elements of the ideal. You then check that $I(K(I))=I$ and $K(I(K))=K$.
This is called Morita equivalence. The left ideal $P = I(R\oplus0\oplus\dots\oplus 0) = I(R\oplus 0^{n-1})$ is a direct summand of the left $S$-module $S$, and so is called a projective module. It is faithful, since $S \simeq P^n$ as left $S$-modules. In other words, it is a progenerator. $I(K) = {}_SP\otimes_R K$ and $K(I) = {}_R\operatorname{Hom}_S(P,I)$.
The short version is just that modules over $S$ are just big versions of modules over $R$. In particular, left ideals of $S$ are just left submodules of $R^n$.
Over some rings $R$, the submodules of $R^n$ can be significantly more complicated. Even for fields, the submodule lattice of $R=\mathbb R$ is very simple: just $0 \le R$. For $R^n$ one has the Grassmannian, which to my mind is quite a bit more complex than $R$. Other rings can be even worse, so one should be careful of leaning too heavily on the intuition for fields when dealing with matrix rings, though any Morita property will transfer between $R$ and $S$ without trouble using the "$I()$" and "$K()$" constructions.<|endoftext|>
TITLE: When do the measure-theoretic and elementary definitions of conditional probability/expectation coincide?
QUESTION [10 upvotes]: Suppose $(\Omega, \mathcal{A}, P)$ is the sample space and $X: (\Omega, \mathcal{A}) \rightarrow (\mathbb{R}, \mathcal{B})$ is a random variable.

Using the language of measure theory, $P(A \mid X)$, the conditional probability of an event given a random variable, is defined from conditional expectation as $E(I_A \mid X)$. So $P(\cdot \mid X)$ is in fact a mapping $\mathcal{A} \times \Omega \rightarrow [0,1]$.
In elementary probability, we learned that $$P(A \mid X \in B) := \frac{P(A \cap \{X \in B\})}{P(X \in B)}.$$ If I understand correctly, this requires and implies $P(X \in B) \neq 0$. So $P(\cdot \mid X \in \cdot)$ is in fact a mapping $\mathcal{A} \times \mathcal{B} \rightarrow [0,1]$.

My questions are:

When will $P(\cdot \mid X)$ in the first definition and $P(\cdot \mid X \in \cdot)$ in the second coincide/become consistent with each other, and how?
Is there some case when they can both apply but do not agree with each other? Is the first definition a more general one that includes the second as a special case?
Similar questions for conditional expectation.
In elementary probability, $E(Y \mid X \in B)$ is defined as the expectation of $Y$ w.r.t. the p.m. $P(\cdot \mid X \in B)$. So $E(Y \mid X \in \cdot)$ is a mapping $\mathcal{B} \rightarrow \mathbb{R}$.
In measure theory, $E(Y \mid X)$ is a random variable $\Omega \rightarrow \mathbb{R}$.

I was also wondering how $E(Y \mid X \in \cdot)$ in elementary probability and $E(Y \mid X)$ in measure theory can coincide/become consistent? Is the latter a general definition which includes the former as a special case?
Thanks and regards!

REPLY [3 votes]: $$\int_C P(A|X)(\omega)\,dP(\omega)=\int_C E(I_A|X)(\omega)\,dP(\omega)=\int_C I_A(\omega)\,dP(\omega)=P(A\cap C)$$ for $C$ in the sigma algebra generated by $X$. So, for $C=\{X\in B\}$,
$$\frac{\int_{\{X\in B\}} P(A|X)(\omega)\,dP(\omega)}{P(X\in B)}=P(A\mid X\in B).$$<|endoftext|>
TITLE: Is $Z_t=\exp\left(W_t^2-\int_0^t(2W_s^2+1)ds\right)$ a martingale?
QUESTION [8 upvotes]: Let $W_t$ be a standard Brownian motion with $W_0 = 0$ and let $Z_t$ solve the stochastic differential equation $dZ_t = 2 Z_t W_t \mathrm{d}W_t$. This has solution
$$Z_t=\exp\Big\{W_t^2-\int_0^t{(2W_s^2+1)ds}\Big\} \> .$$
It is easy to show that $Z_t$ is a local martingale since $P(\int_0^T{(Z_sW_s)^2ds}<\infty)=1$.
Could we show that $E[\int_0^T{(Z_sW_s)^2ds}]<\infty$, which implies $Z_t$ is a martingale on the interval $[0,T]$?

REPLY [8 votes]: Yes, $Z$ is a proper martingale. However, $\int_0^T(Z_sW_s)^2\,ds$ is not integrable for large $T$. As the quadratic variation of $Z$ is $[Z]_t=4\int_0^t(Z_sW_s)^2\,ds$, Ito's isometry says that this is integrable if and only if $Z$ is a square-integrable martingale, and you can show that $Z$ is not square integrable at large times (see below).
However, it is conditionally square integrable over small time intervals.
$$\begin{align} \mathbb{E}\left[Z_t^2W_t^2\;\Big\vert\;\mathcal{F}_s\right]&\le\mathbb{E}\left[W_t^2\exp(W_t^2)\;\Big\vert\;\mathcal{F}_s\right]\\ &=\frac{1}{\sqrt{2\pi(t-s)}}\int x^2\exp\left(x^2-\frac{(x-W_s)^2}{2(t-s)}\right)\,dx \end{align}$$
It's a bit messy, but you can evaluate this integral and check that it is finite for $s \le t < s+\frac12$. In fact, integrating over the range $[s,s+h]$ (any $h < 1/2$) with respect to $t$ is finite. So, conditional on $W_s$, you can say that $Z$ is a square integrable martingale over $[s,s+h]$.
This is enough to conclude that $Z$ is a proper martingale. We have $\mathbb{E}[Z_t\vert\mathcal{F}_s]=Z_s$ (almost surely) for any $s \le t < s+\frac12$. By induction, using the tower rule for conditional expectations, this extends to all $s < t$. Then, $\mathbb{E}[Z_t]=\mathbb{E}[Z_0] < \infty$, so $Z$ is integrable and the martingale conditions are met.

I mentioned above that the suggested method in the question cannot work because $Z$ is not square integrable. I'll elaborate on that now. If you write out the expected value of an expression of the form $\exp(aX^2+bX+c)$ (for $X$ normal) as an integral, it can be seen that it becomes infinite exactly when $a{\rm Var}(X)\ge1/2$ (because the integrand is bounded away from zero at either plus or minus infinity). Let's apply this to the given expression for $Z$.
The expression for $Z$ can be made more manageable by breaking the exponent into independent normals. Fixing a positive time $t$, then $B_s=\frac{s}{t}W_t-W_s$ is a Brownian bridge independent of $W_t$.
Rearrange the expression for $Z$:
$$\begin{align} Z_t&=\exp\left(W_t^2-\int_0^t(2(\tfrac{s}{t}W_t+B_s)^2+1)\,ds\right)\\ &=\exp\left(W_t^2-2\int_0^t\frac{s^2}{t^2}W_t^2\,ds+\cdots\right)\\ &=\exp\left((1-2t/3)W_t^2+\cdots\right) \end{align}$$
where '$\cdots$' refers to terms which are at most linear in $W_t$. Then, for any $p > 0$,
$$Z_t^p=\exp\left(p(1-2t/3)W_t^2+\cdots\right).$$
The expectation $\mathbb{E}[Z_t^p\mid B]$ of $Z_t^p$ conditional on $B$ is infinite whenever
$$p(1-2t/3){\rm Var}(W_t)=p(1-2t/3)t \ge \frac12.$$
The left hand side of this inequality is maximized at $t=\frac34$, where it takes the value $3p/8$. So, $\mathbb{E}[Z_{3/4}^p\mid B]=\infty$ for all $p\ge\frac43$. The expected value of this must then be infinite, so $\mathbb{E}[Z^p_{3/4}]=\infty$. It is a standard application of Jensen's inequality that $\mathbb{E}[\vert Z_t\vert^p]$ is increasing in time for any $p\ge1$ and martingale $Z$. So, $\mathbb{E}[Z_t^p]=\infty$ for all $p\ge 4/3$ and $t\ge3/4$. In particular, taking $p=2$ shows that $Z$ is not square integrable.<|endoftext|>
TITLE: Inscribed kissing circles in an equilateral triangle
QUESTION [7 upvotes]: The triangle is equilateral ($AB=BC=CA$); I need to find $AB$ and $R$.
Any hints? I was trying to make another triangle by connecting the centers of the small circles, but didn't find anything.

REPLY [7 votes]: Two hints:

Draw the line segments from $A$ and $B$ through the center of the large circle and use the 30°-60°-90° triangles and similar triangles to get a relationship between $AB$ and $R$.
Draw the line parallel to $\overline{BC}$ through the point of tangency of the large circle and the small circle closest to $A$ to get a small equilateral triangle, and use what you learned about the relationship between $AB$ and $R$ on the small triangle.

edit (related to comments below):
Above is the small triangle formed at the top of your diagram. $DF=DE=4$, since both are radii of the small circle. You can use the 30°-60°-90° triangles to find $AD$ and $AE$. In particular, what do you find is $\frac{AD}{AF}$?
edit 2: Since the homework problem is now done, here's how I would actually have done the problem myself, though it is not the solution I would expect from a geometry student: In an equilateral triangle, the median, altitude, angle bisector, perpendicular bisector, etc. are all the same segment. The point of concurrency of the angle bisectors is the center of the inscribed circle and the point of concurrency of the medians divides the medians in the ratio $2:1$, so the height of the smaller triangle shown above is $R=3\cdot 4=12$ and the height of the large triangle is $3R=36$. That height is the $\sqrt{3}$ side in a $1:\sqrt{3}:2$ right triangle (30°-60°-90°), where $AB$ is the $2$ side, so $AB=\frac{2}{\sqrt{3}}\cdot 36=\frac{72}{\sqrt{3}}=24\sqrt{3}$.

REPLY [3 votes]: Let $a$ be the side of the triangle. If $A$ denotes the area and $P$ denotes the perimeter, then the radius of the incircle is given by $R = \frac{2A}{P} = \frac{2\sqrt{3} a^2/4}{3a} = \frac{\sqrt{3} a}{6}$.
Let $x$ be the distance of the center of the smaller circle to the nearest vertex.
The altitude is $x + 8 + 2R = \frac{\sqrt{3}a}{2}$.
From similar triangles, we also have $\frac{x+4}{4} = \frac{x+8+R}{R} \Rightarrow \frac{x}{4} = \frac{x+8}{R}$.
You have three equations in $x$, $R$ and $a$; solve them to get your $R$ and $a$.
EDIT
$x+8 = \frac{\sqrt{3}a}{2} - 2R = \frac{\sqrt{3}a}{2} - \frac{\sqrt{3}a}{3} = \frac{\sqrt{3}a}{6} = R$.
Hence, $\frac{x}{4} = 1 \Rightarrow x = 4$.
$R = x + 8 = 12 \Rightarrow a = 2 \sqrt{3} R = 24 \sqrt{3}$.<|endoftext|>
TITLE: Is there any example of a Lie algebra which has a nontrivial radical but contains no abelian ideal?
QUESTION [6 upvotes]: Is there any example of a Lie algebra which has a nontrivial radical but contains no abelian ideal?
Here, the radical of a Lie algebra means its maximal solvable ideal.
This question occurs in the proof of the theorem which states that "A Lie algebra is semisimple if and only if its Killing form is nondegenerate." In the proof, it is written that "to prove that $L$ is semisimple, it will suffice to prove that every abelian ideal of $L$ lies in the radical of the Killing form." So I am wondering what happens if $L$ contains no abelian ideal while not being semisimple. Is it possible?
Many thanks.

REPLY [9 votes]: If the radical $\mathfrak r$ of a Lie algebra $\mathfrak g$ is nonzero, then $\mathfrak r$ is a nonzero solvable Lie algebra. It follows that either $[\mathfrak r, \mathfrak r]$ is zero, so that $\mathfrak r$ is itself a nonzero abelian ideal, or $[\mathfrak r, \mathfrak r]$ is a non-trivial nilpotent ideal in $\mathfrak r$. In the latter case, the center of $[\mathfrak r,\mathfrak r]$, which is not trivial because of nilpotency, is an abelian ideal of $\mathfrak g$.

REPLY [3 votes]: Let $\mathfrak{g}$ be a Lie algebra, $\mathfrak{r}$ its radical, and $\mathfrak{a}$ the center of $\mathfrak{r}$ (the set of $x \in \mathfrak{r}$ s.t. for all $y \in \mathfrak{r}$, $[x,y]=0$).
By the Jacobi identity, and since $\mathfrak{r}$ is an ideal of $\mathfrak{g}$, $\mathfrak{a}$ is an ideal of $\mathfrak{g}$.
Moreover a nontrivial solvable Lie algebra has nontrivial center.
EDIT: This last sentence is indeed false (thank you Matt E). So it's easier to formulate it this way: by Jacobi, whenever $\mathfrak{a}$ and $\mathfrak{b}$ are ideals of a Lie algebra, $[\mathfrak{a},\mathfrak{b}]$ is also an ideal, so the last non-trivial $\mathcal{D}^n \mathfrak{r}$ is an abelian ideal of $\mathfrak{g}$.<|endoftext|>
TITLE: What are the Weyl groups of type $E_8$, $F_4$, $G_2$?
QUESTION [5 upvotes]: This problem is as titled.
The textbook states that the orders of the Weyl groups of type $E_8$ and $F_4$ are $2^{14}3^55^27$ and $1152$ respectively, but I am wondering what these groups look like, namely, how they can be decomposed into simpler groups, or what kind of subgroups or ideals they have.
Thanks for any attention and help~

REPLY [2 votes]: The Weyl group of type $E_8$ is the group $O(8,\mathbb{F}_2)^+$ of order $696729600$. It is a stem extension by the cyclic group $C_2$ of an extension of $C_2$ by a group $G$, where $G$ is the unique simple group of order $174182400$, known as $PS\Omega_8^+(\mathbb{F}_2)$. For a discussion see also this MO question (there is a discussion on the notation $O(8,\mathbb{F}_2)^+$).
The Weyl group of $F_4$ is a soluble group of order $1152$; for detailed references see here. One of its presentations is
$$\langle x, y \mid x^2 = y^6 = (xy)^6 = (xy^2)^4 = (xyxyxy^{-2})^2 = 1 \rangle.$$<|endoftext|>
TITLE: Formula for $\sum_{k=0}^n S(n,k) k$, where $S(n,k)$ is a Stirling number of the second kind?
QUESTION [7 upvotes]: I would like to compute $\sum_{k=0}^n S(n,k) k$, where $S(n,k)$ is a Stirling number of the second kind. Any ideas? It is like I am convolving the Stirling numbers of the second kind with the positive integers. Thank you very much!
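(Editorial sketch, not part of the original thread: a quick numerical experiment in Python. It builds the Stirling numbers from the standard recurrence $S(n+1,k)=kS(n,k)+S(n,k-1)$ and compares the sum against differences of Bell numbers, anticipating the identity derived in the answer below. All function names are mine.)

    # editorial sketch: Stirling numbers of the second kind, row by row
    def stirling2_row(n):
        row = [1]                      # row for n = 0: S(0, 0) = 1
        for _ in range(n):
            new = [0] * (len(row) + 1)
            for k, s in enumerate(row):
                new[k] += k * s        # new element joins one of the k existing blocks
                new[k + 1] += s        # new element forms a block of its own
            row = new
        return row

    def bell(n):
        return sum(stirling2_row(n))   # Bell number = row sum

    for n in range(1, 8):
        lhs = sum(k * s for k, s in enumerate(stirling2_row(n)))
        print(n, lhs, bell(n + 1) - bell(n))   # the two columns agree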
REPLY [16 votes]: $$\sum_{k=0}^n \left\{ n \atop k\right\} k = \varpi(n+1) - \varpi(n),$$
where $\varpi(n)$ is the $n$th Bell number.
Using generating functions, I prove this and the generalizations
$$\sum_{k=0}^n \left\{ n \atop k\right\} k^m = \sum_{i=0}^m \binom{m}{i} R(m-i) \varpi(n+i),$$
$$\sum_{k=0}^n \left\{ n \atop k\right\} (-1)^k k^m = \sum_{i=0}^m \binom{m}{i} \varpi(m-i) R(n+i),$$
where $R(n)$ is the $n$th Rao-Uppuluri-Carpenter number, in the paper "On Solutions to a General Combinatorial Recurrence" (Journal of Integer Sequences 14 (9), Article 11.9.7, 2011). See Identities 12 and 13, which are at the very end. This paper has been submitted for publication, and I am still waiting on referee reports.
I don't know whether these results are new, but I had not seen them before. (They are not the main point of the paper.)

Added: Here are some additional derivations for $\sum_{k=0}^n \left\{ n \atop k\right\} k = \varpi(n+1) - \varpi(n)$. They are shorter than the one needed in the paper for the more general result.
First: Use the recurrence for the Stirling numbers of the second kind. (Due to OP - see comments.)
$$\sum_{k=0}^n \left\{ n \atop k\right\} k = \sum_{k=0}^n \left\{ n+1 \atop k\right\} - \sum_{k=0}^n \left\{ n \atop k-1\right\} = \sum_{k=0}^{n+1} \left\{ n+1 \atop k\right\} - \sum_{k=0}^n \left\{ n \atop k\right\} =\varpi(n+1) - \varpi(n).$$
Second: Use Bell polynomials $B_n(x)$.
It is known that $\sum_{k=0}^n \left\{ n \atop k \right\} x^k = B_n(x)$ (Eq. 14 on the linked page), $\frac{d}{dx} B_n(x) = \frac{B_{n+1}(x)}{x} - B_n(x)$ (Eq. 16), and $B_n(1) = \varpi(n)$ (Eq. 1). Thus
$$\sum_{k=0}^n \left\{ n \atop k\right\} k = \frac{d}{dx} \left.\sum_{k=0}^n \left\{ n \atop k\right\} x^k \right|_{x=1} = \frac{d}{dx} \left. B_n(x) \right|_{x=1} = \frac{B_{n+1}(1)}{1} - B_n(1) = \varpi(n+1) - \varpi(n).$$
Third: Use the double generating function for the Stirling numbers of the second kind (see, for example, Concrete Mathematics, 2nd edition, p. 351):
$$\sum_{n,k \geq 0} \left\{ n \atop k\right\} w^k \frac{z^n}{n!} = e^{w(e^z-1)}.$$
(The right-hand side is actually the exponential generating function for the Bell polynomials, so this derivation is a variation on the second one.)
Differentiating both sides with respect to $w$ and then letting $w = 1$ yields the exponential generating function (egf) for the sum in question:
$$\sum_{n \geq 0} \left(\sum_{k=0}^n \left\{ n \atop k\right\} k \right) \frac{z^n}{n!} = e^{e^z-1} e^z - e^{e^z-1}.$$
It is known that $e^{e^z-1}$ is the egf for the Bell numbers. Since $e^z$ is the egf of the infinite sequence of $1$'s, and multiplication of exponential generating functions corresponds to binomial convolutions of the sequences in question, this means
$$\sum_{k=0}^n \left\{ n \atop k\right\} k = \sum_{k=0}^n \binom{n}{k} \varpi(k) - \varpi(n).$$
Finally, $\sum_{k=0}^n \binom{n}{k} \varpi(k) = \varpi(n+1)$ is a well-known identity for the Bell numbers, so we have
$$\sum_{k=0}^n \left\{ n \atop k\right\} k = \varpi(n+1) - \varpi(n).$$

Added much later:
Fourth: A combinatorial argument. The Stirling number $\left\{ n \atop k \right\}$ counts the number of ways to partition a set of $n$ elements into $k$ nonempty subsets, and the Bell number $\varpi(n)$ is the number of ways to partition a set of $n$ elements into any number of subsets. Thus $k \left\{ n \atop k \right\}$ is the number of ways to partition $n$ elements into $k$ nonempty subsets and then add an element $n+1$ to one of these $k$ subsets.
Summing over all values of $k$ would then give the number of ways to partition a set of $n+1$ elements into any number of nonempty subsets, except for the partitions in which element $n+1$ is in a subset by itself. But this number is $\varpi(n+1) - \varpi(n)$, as the number of partitions of $n+1$ elements in which $n+1$ is in a subset by itself is the same as the number of partitions of $n$ elements. Putting all this together, we get
$$\sum_{k=0}^n \left\{ n \atop k\right\} k = \varpi(n+1) - \varpi(n).$$<|endoftext|>
TITLE: What is the expected number of steps in the following process?
QUESTION [31 upvotes]: We have $n$ boxes, and initially the boxes contain $x_1, x_2, x_3, \ldots, x_n$ marbles, respectively. We randomly (with equal probabilities) select one of the boxes, take one marble from it, and put it into another (different from the origin) box chosen randomly (with equal probabilities). We continue this process until one of the boxes becomes empty. How many operations do we do on average?
It is not homework. I don't know whether a closed form solution exists. My current results are:
$$\begin{align} x_1 x_2 & \text{ for } n=2\\ \frac{3x_1 x_2 x_3}{x_1 + x_2 + x_3} & \text{ for } n=3 \end{align}$$
I have crossposted at artofproblemsolving. This problem is related and maybe (or not) useful.
Update 2: As I learned, this problem has been studied before. As usual :)
It seems very hard even for $n=4$. No explicit solution is known, only asymptotics for the case $f(x,x,x,x)$. Nevertheless, the solution is much, much easier if we change the problem slightly. For example.
Big thanks to Viktor for pointing out the reference!

REPLY [2 votes]: Here are some results for very small numbers, when there are $n$ variables:
$$\begin{align*} f(1,1,1,\ldots,1) &= 1, \\ f(2,1,1,\ldots,1) &= \frac{n}{n-1}, \\ f(3,1,1,\ldots,1) &= \frac{n^3-2n^2+3n}{n^3-3n^2+4n-2} = \frac{n}{n-1} \cdot \frac{n^2-2n+3}{n^2-2n+2}, \\ f(2,2,1,\ldots,1) &= \frac{n^3-n^2+2n}{n^3-3n^2+4n-2} = \frac{n}{n-1} \cdot \frac{n^2-n+2}{n^2-2n+2}. \end{align*}$$<|endoftext|>
TITLE: Birthday-coverage problem
QUESTION [21 upvotes]: I heard an interesting question recently:

What is the minimum number of people required to make it more likely than not that all 365 possible birthdays are covered?

Monte Carlo simulation suggests 2287 ($\pm 1$, I think). More generally, with $p$ people, what is the probability that for each of the 365 days of the year, there is at least one person in the group with that birthday? (Yes, ignoring the leap-day.)

REPLY [27 votes]: For the coupon collector's problem with $n$ objects, let $T$ be the number of trials needed to get a complete set. Then we have the formula $$P(T\leq k)=n^{-k}\ n!\ \left\lbrace {k\atop n}\right\rbrace.$$ Here the braces indicate Stirling numbers of the second kind.
With $n=365$, Maple gives me $P(T\leq 2286)=.4994$ while $P(T\leq 2287)=.5003$, so that $2287$ is the smallest number to give a 50% chance to get all 365 birthdays.<|endoftext|>
TITLE: Set of elements in $K$ that are purely inseparable over $F$ is a subfield
QUESTION [10 upvotes]: Let $F\subset K$ be an algebraic field extension. Is the set of all elements of $K$ that are purely inseparable over $F$ necessarily a subfield of $K$?

REPLY [14 votes]: Yes. An element $a\in K$ is purely inseparable iff $a^{p^n}\in F$ for some $n\geq 0$ (see here). Let $E$ be the subset of purely inseparable elements of $K$.
For any $a,b\in E$, we have $a^{p^n}\in F$ and $b^{p^m}\in F$ for some $n,m\geq0$, so
$$(-a)^{p^n}\in F\hskip 0.5in (a^{-1})^{p^n}=1/(a^{p^n})\in F$$
$$(a+b)^{p^{n+m}}=a^{p^{n+m}}+b^{p^{n+m}}=(a^{p^n})^{p^m}+(b^{p^m})^{p^n}\in F$$
and
$$(ab)^{p^{n+m}}=a^{p^{n+m}}b^{p^{n+m}}=(a^{p^n})^{p^m}(b^{p^m})^{p^n}\in F$$
Thus, $E$ is a subfield.

REPLY [12 votes]: Yes. This is clearer if you use the following definition of pure inseparability: an element $\alpha$ of an algebraic extension $F/K$ is purely inseparable if $|\text{Hom}_K(K(\alpha), \bar{K})| = 1$. Now it is obvious that if $\alpha, \beta$ are purely inseparable then any $K$-homomorphism $K(\alpha, \beta) \to \bar{K}$ is determined by what it does to $\alpha$ and $\beta$, so is unique.<|endoftext|>
TITLE: $\mathbb{R}$ and $\mathbb{C}$ as $\mathbb{Q}$ vector spaces
QUESTION [12 upvotes]: Q: If we consider $\mathbb{R}$ and $\mathbb{C}$ as $\mathbb{Q}$-vector spaces, then how can we show they are isomorphic?
I know that if two vector spaces have bases with the same cardinality, then they are isomorphic. Also, Zorn's lemma tells us that every vector space has a basis.
In this case, answering my question amounts to showing that any bases of $\mathbb{R}$ and $\mathbb{C}$ over $\mathbb{Q}$ have the same cardinality. In other words, I need to show $\dim \mathbb{R} = \dim \mathbb{C}$ over $\mathbb{Q}$, that is, they have bases with the same cardinality. Can anyone help?
Thank you!!

REPLY [8 votes]: Two ways: one using only the fact that $|\mathbb{R}|=|\mathbb{C}|=\mathfrak{c}$; the other using the fact that we can identify the additive structure of $\mathbb{C}$ with the plane.

If $\mathbf{F}$ is a field and $\mathbf{V}$ is a vector space over $\mathbf{F}$, what is the cardinality of $\mathbf{V}$? If $\beta$ is a basis for $\mathbf{V}$, then every vector of $\mathbf{V}$ can be written uniquely as an $\mathbf{F}$-linear combination of vectors in $\beta$. Therefore, there is a bijection between the elements of $\mathbf{V}$ and the set
$$\bigl\{ f\colon\beta\to \mathbf{F}\bigm| f(\mathbf{b})=\mathbf{0}\text{ for almost all }\mathbf{b}\in\beta\bigr\},$$
that is, the set of functions of finite support from $\beta$ to $\mathbf{F}$, $\mathbf{F}^{(\beta)}$. If $\mathbf{F}$ is infinite, then this cardinality is equal to $|\mathbf{F}||\beta|=\max\{|\mathbf{F}|,|\beta|\}$. Here, $\mathbf{F}=\mathbb{Q}$. So if $\beta$ is a basis for $\mathbb{R}$ over $\mathbb{Q}$, we need $|\mathbb{R}| = \aleph_0|\beta|$. What is $|\beta|$? For $\mathbb{C}$, we need $|\mathbb{C}| = \aleph_0|\gamma|$, where $\gamma$ is a basis for $\mathbb{C}$ over $\mathbb{Q}$. What is $|\gamma|$?
Since the additive structure of $\mathbb{C}$ is just the same as the additive structure of $\mathbb{R}\oplus\mathbb{R}$, the two are isomorphic as $\mathbb{Q}$-vector spaces. So if $\{\mathbf{v}_{b}\}_{b\in\beta}$ is a basis for $\mathbb{R}$ over $\mathbb{Q}$, then
$$\bigl\{\mathbf{v}_{b}\bigr\}_{b\in\beta}\cup \bigl\{i\mathbf{v}_{b}\bigr\}_{b\in\beta}$$
is a basis for $\mathbb{C}$ over $\mathbb{Q}$ (prove it). So $\dim_{\mathbb{Q}}(\mathbb{C}) = 2\dim_{\mathbb{Q}}(\mathbb{R})$. If the dimension of $\mathbb{R}$ over $\mathbb{Q}$ were finite, this would show they are not isomorphic. But is the dimension finite or infinite?
And what does that tell you about the dimensions?<|endoftext|>
TITLE: The average of a bounded, decreasing-difference sequence
QUESTION [6 upvotes]: (not sure if "decreasing-difference" is the right way to put it - please edit away if you know the proper technical term)
From my analysis homework:

Does there exist a bounded sequence $\{x_n\}_{n \in \mathbb{N}}$ such that $x_{n+1}-x_n \to 0$, but the sequence $(\sum_{i=1}^n x_i)/n$ does not have a limit? Give an example or show none exist.

In a previous part, I showed that there exists a bounded sequence $\{x_n\}_{n \in \mathbb{N}}$ such that $\left| x_{n+1}-x_n \right| \to 0$, yet $x_n$ does not converge, using a rearrangement of the alternating harmonic series.
Intuitively, I believe none exist. I've tried showing that the sequence is Cauchy, but that does not seem to help:
$$\left| \frac{ \sum_{i=1}^m x_i}{m} - \frac{ \sum_{i=1}^n x_i}{n} \right| = \left| \frac{ (n-m) \sum_{i=1}^m x_i - m \sum_{i=m+1}^n x_i}{mn} \right|$$
I feel like I can't use my normal series tools because I am looking at an average, and the series does not necessarily converge.

REPLY [3 votes]: Edit 2: for your actual question, the answer is yes, you can find counterexamples. Consider the sequence $(1/n)$, and build your sequence as follows.
Start at $x_1 = 1$, and let $\alpha_1$ be a running counter which we start at 2. We then alternate between a decreasing phase and an increasing phase.
Decreasing phase: let $x_i = x_{i-1} - 1/\alpha_{i}$ if $x_{i-1} > 1/\alpha_i$, and increase $\alpha_{i+1} = \alpha_i + 1$. If $x_{i-1} < 1/\alpha_i$, set
$$x_i = x_{i+1} = \cdots = x_{100i} = x_{i-1}\qquad \qquad \alpha_{100i + 1} = \alpha_{i}$$
and then enter the increasing phase starting from the $100i+1$'th term.
Increasing phase: let $x_i = x_{i-1} + 1/\alpha_i$ if $x_{i-1} + 1/\alpha_i < 1$, and increase $\alpha_{i+1} = \alpha_{i} + 1$. Else set
$$x_i = x_{i+1} = \cdots = x_{100i} = x_{i-1} \qquad \qquad \alpha_{100i + 1} = \alpha_{i}$$
and enter the decreasing phase starting from the $100i+1$'th term.
Due to the long constant phases, you see that $\limsup_{n\to\infty} \frac{1}{n} \sum_1^n x_i = 1$, and the $\liminf = 0$. (The length of the constant phases grows exponentially, so that each constant phase completely dominates all previous terms in the average.)
The sequence looks like this:
1, 1/2, 1/6, {300 terms of 1/6}, 5/12, 37/60, 47/60, 389/420, {repeat 30700 times}, 673/840 ...

Edit: oh wait, the below doesn't actually address your question.

As a side note, the limit $\lim_{n\to\infty} \frac1n \sum_1^n x_i$ is known as the Cesàro mean of the sequence $x_i$. It is a fact that for any converging sequence $x_i\to \bar{x}$, the sequence $c_n = \frac{1}{n} \sum_1^n x_i$ also converges to $\bar{x}$.
To do this, let $\delta > 0$. It suffices to find $N$ large enough that for all $m > N$, $|c_m - \bar{x}| < \delta$. Because $x_i\to \bar{x}$, we can find $I$ large such that for all $j > I$, $|x_j - \bar{x}| < \delta / 3$. Now choose $N > I$ large enough such that
$$\sum_1^I x_i < \frac{\delta}{3}N \qquad \textrm{and} \qquad \frac{I}{N} < \frac{\delta}{3|\bar{x}|}$$
Then we have for all $m > N$:
$$m c_m = \sum_1^I x_i + \sum_{I+1}^m x_i$$
$$|c_m - \bar{x}| \leq \left| \frac{1}{m} \sum_1^I x_i \right| + \left| \frac{1}{m} \sum_{I+1}^m x_i - \bar{x} \right|$$
The first term on the right hand side is bounded by $\delta/3$.
The second term we estimate as
$$\left|\frac{1}m \sum_{I+1}^m x_i - \bar{x}\right| \leq \left|\frac{1}{m} \sum_{I+1}^m (x_i - \bar{x})\right| + \left|\frac{I}{m} \bar{x}\right|$$
and by construction both of the terms on the RHS are bounded by $\delta / 3$. So we have the desired inequality.<|endoftext|>
TITLE: vector space of all smooth functions has infinite dimension
QUESTION [14 upvotes]: I am working through a particular case in the book on smooth manifolds by John M. Lee used in my graduate math class. Say we have a smooth manifold $X$ of positive dimension. He then claims that the vector space $C^\infty$ of all smooth functions from $X \rightarrow \mathbb{R}$ has infinite dimension over $\mathbb{R}$. While I do have a fair idea of the claim, I'm a little bit lost on how to show that the vector space has infinite dimension.

REPLY [4 votes]: I originally posted the following as a comment, but I think it is a particularly simple argument, so worth upgrading to an answer.
Choose any $\theta\in C^\infty(M)$ which is not constant on a connected component of $M$. Then its image is an infinite subset of $\mathbb{R}$ (it includes a nontrivial interval). As the space $\mathbb{R}[X]$ of real polynomials is infinite dimensional, and a polynomial is fully determined by its values on an infinite set, you can conclude that
$$\left\{f\circ\theta\colon f\in\mathbb{R}[X]\right\}$$
is an infinite dimensional subspace of $C^\infty(M)$.
Note that this argument is applicable much more generally. E.g., if a Riemann surface has at least one non-constant meromorphic function then the space of meromorphic functions is infinite dimensional.
If you like, you can modify this method to show that the dimension of $C^\infty(M)$ is (at least) the cardinality of the continuum, by proving the same is true for smooth functions on any nontrivial interval in $\mathbb{R}$. (Hint: the exponentials $x\mapsto e^{ax}$ are linearly independent.)<|endoftext|>
TITLE: If given the girth and the minimum degree of a simple graph $G$, can we give a lower bound on the number of vertices it has?
QUESTION [6 upvotes]: I'm trying to prove that every simple graph $G$ of girth $g(G)=5$ (length of smallest cycle) and minimum degree $\delta$ has at least $\delta^2 + 1$ vertices.
I tried using induction on $\delta$ without any results, and also tried to apply the pigeonhole principle, to no avail.
Help?

REPLY [8 votes]: If you start from any vertex, it is connected to at least $\delta$ vertices. Each of those is connected to at least $\delta-1$ others. These are all distinct, or you would have a 4 (or less)-cycle. $1+\delta+\delta(\delta-1)=\delta^2+1$. The pentagon and the Petersen graph follow exactly this construction: build the two tiers, then connect appropriate ones of the second tier of vertices.<|endoftext|>
TITLE: The first Noether isomorphism theorem
QUESTION [5 upvotes]: I'm studying for my Intro to Algebra class. I've reached the point where I want to understand the first Noether isomorphism theorem.
Here is the definition I was given in class:
Let $f :R \rightarrow S$ be a surjective ring homomorphism. Then we have a "commutative diagram" of ring homomorphisms.
Here is the scanned diagram I received. Could someone please explain this diagram to me in basic terms? In addition, I'd deeply appreciate examples or proofs of examples.

Ultimately, can someone answer what it means to say something is equivalent to another thing "up to isomorphism"?

REPLY [7 votes]: The first thing in the diagram are the solid arrows: those are maps that you "know."
You "know" the map $f\colon R\to S$, because you are assuming that you are given this map.
You also "know" the map $\pi\colon R\to R/\mathrm{ker}(f)$. This is simply the canonical projection onto the quotient. Remember that whenever you have a ring $R$ and an ideal $\mathcal{I}$ of $R$, you can form the quotient ring $R/\mathcal{I}$; and there is a canonical projection map $\pi\colon R\to R/\mathcal{I}$ given by $\pi(r) = r+\mathcal{I}$. In this case, since $f$ is a homomorphism, you know that $\mathrm{ker}(f)$ is an ideal, so you automatically get the quotient ring $R/\mathrm{ker}(f)$ and the quotient map $\pi\colon R\to R/\mathrm{ker}(f)$.
The second thing is that the "dashed arrow" represents a homomorphism which the theorem asserts exists; so one thing the theorem asserts is that there is a ring homomorphism $R/\mathrm{ker}(f)\to S$, which is an isomorphism (assuming $f$ is onto).
Finally, saying the diagram "is commutative" means the following: whenever you have two paths from one ring in the diagram to another (in this case, there are two ways to get from $R$ to $S$: either through the arrow $f$, or by first going to $R/\mathrm{ker}(f)$ via the map I called $\pi$, and then via the map $R/\mathrm{ker}(f)\to S$ that we are asserting exists), the results you get are the same no matter which path you take (in this case: if $r\in R$, then $f(r)$ will be the same as first taking $\pi(r)$ and then applying the map $R/\mathrm{ker}(f)\to S$ to $\pi(r)$).
For an example, take $R=\mathbb{Z}\times\mathbb{Z}$, $S=(\mathbb{Z}/10\mathbb{Z})\times(\mathbb{Z}/3\mathbb{Z})$, and $f(a,b) = (a\bmod 10, b\bmod 3)$.
This is a ring homomorphism:
\begin{align*}
f\bigl((a,b) + (c,d)\bigr) &= f(a+c,b+d) = \bigl( a+c\bmod 10, b+d\bmod 3\bigr)\\
&= (a\bmod 10,b\bmod 3) + (c\bmod 10, d\bmod 3) = f(a,b) + f(c,d),\\
f\bigl((a,b)(c,d)\bigr) &= f(ac,bd) = \bigl(ac\bmod 10, bd\bmod 3\bigr)\\
&= (a\bmod 10, b\bmod 3)(c\bmod 10,d\bmod 3) = f(a,b)f(c,d).
\end{align*}
It is also onto: given $(x\bmod 10,y\bmod 3)\in S$, we can always find an element $(a,b)\in R$ such that $f(a,b)=(x\bmod 10, y\bmod 3)$. For instance, we can take $a=x$, $b=y$.
What is the kernel of this map? $f(a,b) = (0\bmod 10,0\bmod 3)$ if and only if $a\equiv 0 \pmod{10}$ and $b\equiv 0 \pmod{3}$. So the kernel is $10\mathbb{Z}\times 3\mathbb{Z}$.
Now, we have a map from $R$ to $R/\mathrm{ker}(f)$; namely,
\begin{align*}
\pi\colon \mathbb{Z}\times\mathbb{Z} &\longrightarrow \frac{\mathbb{Z}\times\mathbb{Z}}{10\mathbb{Z}\times 3\mathbb{Z}}\\
(a,b) &\longmapsto (a,b) + 10\mathbb{Z}\times 3\mathbb{Z}
\end{align*}
What the theorem states is that there is an isomorphism
$$\varphi\colon \frac{\mathbb{Z}\times\mathbb{Z}}{10\mathbb{Z}\times3\mathbb{Z}} \to \left(\frac{\mathbb{Z}}{10\mathbb{Z}}\right)\times\left(\frac{\mathbb{Z}}{3\mathbb{Z}}\right)$$
which makes "the diagram commute". That is, for every $(a,b)\in R$, we will have
$$f(a,b) = \varphi(\pi(a,b)).$$
What is $\varphi$?
$$\varphi\bigl((a,b) + 10\mathbb{Z}\times 3\mathbb{Z}\bigr) = (a\bmod 10,b\bmod 3).$$
Does this work? First, $\varphi$ is well defined: if $(a,b)+10\mathbb{Z}\times 3\mathbb{Z} = (c,d)+10\mathbb{Z}\times3\mathbb{Z}$, then $(a,b)-(c,d)\in 10\mathbb{Z}\times 3\mathbb{Z}$; this means that $a-c\in 10\mathbb{Z}$ and $b-d\in 3\mathbb{Z}$, so $a\bmod 10 = c\bmod 10$ and $b\bmod 3 = d\bmod 3$. Therefore, $(a\bmod 10,b\bmod 3) = (c\bmod 10, d\bmod 3)$.
Thus,
$$\varphi\bigl( (a,b)+10\mathbb{Z}\times 3\mathbb{Z}\bigr) = \varphi\bigl( (c,d)+10\mathbb{Z}\times 3\mathbb{Z}\bigr),$$
which shows that $\varphi$ is well defined.
It is also straightforward to check that $\varphi$ is a ring homomorphism, and that $f(a,b) = \varphi(\pi(a,b))$ for all $(a,b)\in R$. To show that $\varphi$ is an isomorphism, note that it must be a surjection (because $\varphi\circ\pi = f$ is a surjection, and if a composition is surjective, then the second map is surjective), and if
$$(0\bmod 10,0\bmod 3) = \varphi\bigl( (a,b) + 10\mathbb{Z}\times 3\mathbb{Z}\bigr) = (a\bmod 10, b\bmod 3),$$
then $a\equiv 0\pmod{10}$ and $b\equiv 0 \pmod{3}$, so $(a,b)\in 10\mathbb{Z}\times 3\mathbb{Z}$; this means that
$$(a,b)+ 10\mathbb{Z}\times 3\mathbb{Z} = (0,0)+10\mathbb{Z}\times3\mathbb{Z},$$
so the map is one-to-one. Thus, $\varphi$ is an isomorphism, as desired.
Also, $\varphi$ is the only homomorphism that "fits" into the diagram: if $\psi$ also "fits", then given $(a,b)+10\mathbb{Z}\times 3\mathbb{Z}$, we can write this as $\pi(a,b)$, so
$$\psi\bigl((a,b)+10\mathbb{Z}\times 3\mathbb{Z}\bigr) = \psi(\pi(a,b)) = f(a,b) = \varphi(\pi(a,b)) = \varphi\bigl( (a,b)+10\mathbb{Z}\times 3\mathbb{Z}\bigr),$$
so $\psi=\varphi$.
"Up to isomorphism." What about your final question? Consider the integers "in English" and the integers "in Spanish" ("los enteros"). The names of the elements are different ("one", "two", etc., vs. "uno", "dos", etc.). But "the integers" and "los enteros" are essentially the same ring: it's just a question of what we call the elements, not how we add them or multiply them.
Explicitly, we have a "translation" between "the integers" and "los enteros." This translation is such that if you take two integers, add them, and then translate the answer, you get the same thing as if you first translate the integers and then add them in Spanish. And the same thing with multiplication. Even though "the integers" and "los enteros" are technically different things (as sets), if all we are concerned about is the ring-theoretic properties of these rings, then they are essentially "the same ring." We don't really care whether we call the unit element "one" or "uno"; the important point is that it is a unit, that if you add it to itself you will never get the zero element ("zero" or "cero", depending on which ring you are in), etc. The translation from English to Spanish and the translation from Spanish to English let us go back and forth between the two at will, and none of the ring-theoretic properties will depend on which language we are using, but only on the ring-theoretic properties of the elements.
So, for all ring-theoretic mathematical purposes, "the integers" are the same as "los enteros": because we have a perfect translation between the two rings, namely, an isomorphism. So, the two are really "equivalent" (for any ring-theoretic investigation you might want to undertake).
This means that there is really no point in distinguishing between the integers and los enteros. They are "equivalent rings", because there is an isomorphism between them. We say this by saying that they are "equivalent up to isomorphism."
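(Editorial sketch in Python, not part of the original answer: a brute-force check of the worked example above. It verifies that $\varphi$ is well defined, that it is a bijection onto $S$, and hence that the diagram commutes; the code layout is mine.)

    # editorial sketch: the worked example R = Z x Z, S = (Z/10) x (Z/3)
    def f(a, b):
        return (a % 10, b % 3)

    # well defined: shifting by the kernel 10Z x 3Z does not change the image
    for a in range(-30, 30):
        for b in range(-30, 30):
            assert f(a + 10, b) == f(a, b) == f(a, b + 3)

    # bijection: the 30 cosets (one per remainder pair) hit all 30 elements of S
    cosets = {(a % 10, b % 3) for a in range(-30, 30) for b in range(-30, 30)}
    assert len(cosets) == 10 * 3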
(More formally: if you imagine the collection of all rings (okay, there are foundational problems with that, but ignore them; they are easy to get around), then the relation $R\cong S$ given by "there is a ring isomorphism between $R$ and $S$" is an equivalence relation: it is reflexive (the identity map shows $R\cong R$); symmetric (if $R\cong S$, then there is an isomorphism $f\colon R\to S$, so $f^{-1}\colon S\to R$ is an isomorphism from $S$ to $R$, proving $S\cong R$); and transitive: if $R\cong S$ and $S\cong T$, then there exist $f\colon R\to S$ and $g\colon S\to T$ that are isomorphisms, and $g\circ f\colon R\to T$ is an isomorphism, showing $R\cong T$. So we have an equivalence relation between rings, which partitions the collection of all rings into equivalence classes. When two rings are in the same equivalence class, we say they are "equivalent up to isomorphism", because the equivalence relation is "there is an isomorphism between".)<|endoftext|>
TITLE: Is there an explicit form for cubic Bézier curves?
QUESTION [41 upvotes]: (See edits at the bottom.)
I'm trying to use Bézier curves as an animation tool. Here's an image of what I'm talking about:

Basically, the value axis can represent anything that can be animated (position, scaling, color, basically any numerical value). The Bézier curve is used to control the speed at which the value is changing, as well as its start and ending value and time. In this graphic, the animated value would slowly accelerate to a constant speed, then decelerate and stop.
The problem is, that Bézier curve is defined by the parametric equations
$$f_x(t):=(1-t)^3p_{1x} + 3t(1-t)^2p_{2x} + 3t^2(1-t)p_{3x} + t^3p_{4x}$$
$$f_y(t):=(1-t)^3p_{1y} + 3t(1-t)^2p_{2y} + 3t^2(1-t)p_{3y} + t^3p_{4y}$$
What I need is a representation of that same Bézier curve, but defined as value = g(time), that is, $y = g(x)$.
I've tried solving for $t$ in the $x$ equation and substituting it into the $y$ equation, but that 3rd degree is giving me some difficulty.
I also tried integrating the derivative of the Bézier curve ($dy/dx$) with respect to $t$, but no luck.
Any ideas?
Note: "Undefined" situations are avoided by preventing the tangent control points from going outside the hull horizontally, preventing any overlap in the time axis.
EDIT: I have found two possible solutions. One uses De Casteljau's algorithm to approximate the $s$ parameter from the $t$ parameter, $s$ being the parameter of the parametric curve and $t$ being the time parameter. Here (at the bottom).
The other, from what I can understand of it, recreates a third degree polynomial equation matching the curve by solving a system of linear equations. Here. I understand the idea, but I'm not sure of the implementation. Any help?

REPLY [2 votes]: I Probably Shouldn't Do This
My previous answer made clear that I think your question was the wrong question. But on principle, I can't leave it at that; I will answer the question anyway...
If you're really determined to draw shapes with your familiar Bezier tools, and to use one coordinate of a resulting curve as an easing value, I recommend that you use one of the following methods.
Numerically solve the polynomial P(t) - x = 0 to find t, using an end point as a 'first guess'.
You already know a solution at each end of the curve. Either of those is a good first guess for the next or previous value, which is a good first guess for the next value you want, and so on.
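(Editorial sketch, not the answerer's code: one way the root-tracking idea just described might look in Python. It expands the Bézier $x(t)$ into power-basis coefficients and runs a few Newton steps, reusing the previous $t$ as the first guess; all names and the example control points are mine.)

    # editorial sketch: invert x(t) for a cubic Bezier by Newton iteration
    def bezier_x_coeffs(p1x, p2x, p3x, p4x):
        # power-basis coefficients of x(t) = a*t^3 + b*t^2 + c*t + d
        a = -p1x + 3*p2x - 3*p3x + p4x
        b = 3*p1x - 6*p2x + 3*p3x
        c = -3*p1x + 3*p2x
        d = p1x
        return a, b, c, d

    def solve_t(coeffs, x, t0, steps=8):
        # Newton iteration for x(t) - x = 0, starting from the previous t
        a, b, c, d = coeffs
        t = t0
        for _ in range(steps):
            f = ((a*t + b)*t + c)*t + d - x
            df = (3*a*t + 2*b)*t + c
            if df == 0:
                break
            t -= f / df
        return t

    # track t across increasing x values, using each solution as the next guess
    coeffs = bezier_x_coeffs(0.0, 0.25, 0.75, 1.0)
    t = 0.0
    for x in [0.1 * i for i in range(11)]:
        t = solve_t(coeffs, x, t)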
That means you can use a numerical solver very quickly and accurately, and usually only finding one root.
You hinted at a similar solution, but you didn't seem very happy with it, calling it 'a bit hacky'. Here I mean to get you thinking more clearly about it, because it needn't be at all hacky, and can be both quite fast as well as the most accurate.
Wikipedia's page on root finding algorithms has many possibilities to suggest, and you needn't even bother assessing their relative merits much; go with whatever you find easy to implement. Having a first guess to hand means lots of methods work very, very well in terms of both speed and accuracy.
For values of x that correspond to multiple different values of t, the value of t that is closest to your previous value will be found, and that seems desirable. The only question then is how to behave when the value you've been tracking vanishes.
Consider drawing an upright S shape on the graph in your example. Scanning from left to right, we start with only one solution, but then two more appear in a manner reminiscent of particle-antiparticle creation. What is the definition of the behaviour you desire in such cases?
At this stage most root finding algorithms will continue to return the t value corresponding to the bottom-most root. But when the original root and the middle root collide, they annihilate, again in a way reminiscent of particles and antiparticles.
What happens there depends a bit more on your choice of root solver. In most cases there will be two consequences. Usually, the root will simply jump to the top of the S, and take slightly more calculation time on that step. (Tracing back to the left will then stay on the top of the S.) For some choices of root finder, you could run into a bigger problem here, like the root finder no longer converging on any root, because the initial guess was too far away.
If you actually want to know about all the possible roots, that takes a bit more work. By removing a factor of (t - root) from the polynomial P(t) - x you will get a lower order expression, making it successively easier to find subsequent roots. But the division is difficult, and the accuracy of each subsequent root tends to be much worse than the last one. I would use these subsequent calculations of roots as first guesses for a further numerical iteration.
If you think multiple roots won't affect you for the curves you're likely to draw, because you're drawing single-valued functions, beware. Just because there is a single valid solution in the curve segment you've drawn doesn't mean the root finder won't discover other solutions from outside it. The fact that the equation potentially has three roots makes your calculation harder even if only one is real, or only one is in the valid t domain.
And Here's Another Way
First, notice that the quadratic Bezier is a much easier case. You can get both solutions of t for every value of x from the quadratic formula, and making a consistent choice of root is easy.
So how closely can you approximate one cubic Bezier with many quadratic Beziers?
If using this method, remember that while any one subdivision can now give a maximum of two roots, the entire curve could have a total of three.
But HONESTLY You're Better Off With These!
Calculating polynomial expressions is easy, 'forward' mathematics; root finding is difficult, 'backwards' mathematics. Go forwards; make it easy for you AND for the computer; don't calculate roots for such a basic component of animation as easing.
As suggested by bubba:
Cubic easing from 0 to 1, over t = 0 to t = 1:

    ease = 3t^2 - 2t^3

Cubic easing from -1 to 1, over t = -1 to t = 1:

    ease = (3t - t^3)/2

These trig solutions also look nice. Trig is quicker if you calculate each sin(t) from the cos and sin of the previous t with some linear algebra; I advise doing so in any inside loop of an animation.
Harmonic easing from 0 to 1, over t = 0 to t = 1 (note the halved argument, so the endpoints land on 0 and 1):

    ease = (1 + sin(pi.(2t - 1)/2)) / 2

Harmonic easing from -1 to 1, over t = -1 to t = 1:

    ease = sin(pi.t/2)

You can still have an intuitive, graphical interface for designing lots of these functions.
If control points are horizontally fixed, your graph would actually provide a nice way to design a 1D Bezier of any order at all, with the independent variable represented on the horizontal axis.
With that one 'restriction' it would be a very intuitive interface for setting positions and velocities. It would be nice to extend that into an interface for providing velocities and accelerations, and then accelerations and jerks, and then jerks and jounces. Each new graph can provide positions from the previous graph, and just needs the gradients to be added.
And nobody could criticise that for a lack of 'expressiveness'!<|endoftext|>
TITLE: Non-associative, non-commutative binary operation with an identity
QUESTION [10 upvotes]: Can you give me a few examples of binary operations that are not associative, not commutative, but have an identity element?

REPLY [3 votes]: Here's an example of an "abelian group without associativity", inspired by an answer to this question. Consider the game of rock-paper-scissors: $R$ is rock, $P$ is paper, $S$ is scissors, and $1$ is fold/draw/indeterminate. Let $\ast$ be the binary operation "play".
\begin{array}{r|cccc}
\ast & 1 & R & P & S\\
\hline
1& 1 & R & P & S \\
R & R& 1 & P & R\\
P & P& P & 1 & S\\
S & S& R & S & 1 .
\end{array}
The multiplication table above defines a set of elements, with a binary operation, that is commutative and non-associative. Also, each element has an inverse (itself), and the identity exists. (Note that this operation is commutative, so it only addresses the non-associativity and identity parts of the question.)<|endoftext|>
TITLE: Solutions to Allen Hatcher's "Algebraic Topology"
QUESTION [6 upvotes]: Does anyone know where I can find (if they exist) full solutions to the exercises of Allen Hatcher's Algebraic Topology? Thanks.

REPLY [22 votes]: This should probably be a comment, but I felt it was too long. I'm sure searching "allen hatcher solutions" is about the best you can do with Google. But look at this quote from Hatcher's personal website:

I have not written up solutions to the exercises. The main reason for this is that the book is used as a textbook at a number of universities where the problem sets count for part of a student's grade (that is how I teach the course for example). However, individuals who are studying the book on their own and would like hints for specific problems should feel free to email me and I will try to respond.

His homepage lists his email address, so if you're interested in working through his book, I have a feeling he'd be glad to answer your questions.<|endoftext|>
TITLE: The union of a strictly increasing sequence of $\sigma$-algebras is not a $\sigma$-algebra
QUESTION [28 upvotes]: The union of a sequence of $\sigma$-algebras need not be a $\sigma$-algebra, but how do I prove the stronger statement below?

Let $\mathcal{F}_n$ be a sequence of $\sigma$-algebras.
If the inclusion $\mathcal{F}_n \subsetneqq \mathcal{F}_{n+1}$ is strict, then the union $\bigcup_{n=1}^{\infty}\mathcal{F}_n$ is not a $\sigma$-algebra.
-
-REPLY [18 votes]: I want to do three things:
-
-give a counterexample to Jonas' lemma that there is necessarily a sequence of pairwise disjoint sets $F_n\in\mathcal{F}_{n+1}\setminus\mathcal{F}_n$,
-show that there is however a sequence of pairwise disjoint sets $G_{k}\in\mathcal{F}_{n_k+1}\setminus\mathcal{F}_{n_k}$ for some strictly increasing sequence $n_k$ (so Jonas' proof still works), and
-offer an alternative proof of the main statement, which also uses this modified lemma.
-
-As a counterexample to the unmodified lemma, consider $\mathcal{F}_1=\{\emptyset,[0,6]\}$ and
-$\mathcal{F}_2$, $\mathcal{F}_3$ and $\mathcal{F}_4$ the $\sigma$-algebras generated by adding $[0,3]$, $[1,2]$ and $[4,5]$ respectively, to their predecessors. (To extend this to a strictly increasing sequence of $\sigma$-algebras, we can keep adding arbitrary intervals with new boundaries, e.g. $[2^{-n}/2,2^{-n}]$.) If we choose $[0,3]$ as $F_1$, there are no new sets in $\mathcal{F}_3$ that are disjoint from it, and if we choose its complement instead, the same is true for $\mathcal{F}_4$.
-So the best we can hope for is a sequence of pairwise disjoint sets $G_k\in\mathcal{F}_{n_k+1}\setminus\mathcal{F}_{n_k}$ for some strictly increasing sequence $n_k$: Then we can form a new sequence of strictly increasing $\sigma$-algebras, $\mathcal{G}_k:=\mathcal{F}_{n_k}$, and then $G_k\in\mathcal{F}_{n_k+1}\setminus\mathcal{F}_{n_k}\subseteq\mathcal{G}_{k+1}\setminus\mathcal{G}_k$ will be a disjoint sequence as desired, and we can prove the claim for $\mathcal{G}=\bigcup\mathcal{G}_k=\mathcal{F}$ without loss of generality. Thus, Jonas' proof will remain substantially intact. So let's show that there is such a sequence.
-It's not clear (to me) from the question whether the $\mathcal{F}_n$ all have the same underlying set, so I'll first show that we can assume that they do without loss of generality. Each $\mathcal{F}_n$ contains its underlying set as an element, so the union $U$ of all these underlying sets is a countable union of elements of $\mathcal{F}$, and hence in $\mathcal{F}$, and hence in some $\mathcal{F}_{n_0}$. So all $\mathcal{F}_n$ after that have the same underlying set $U$, and without loss of generality we can drop the initial ones that don't.
-Since the $\mathcal{F}_n$ are strictly increasing, we can choose some (not necessarily pairwise disjoint) sequence of sets $F_n\in\mathcal{F}_{n+1}\setminus\mathcal{F}_n$. (We need the axiom of countable choice to do that, but I suspect it would be difficult to impossible to prove any of this without some form of choice, so I won't worry about that in the following.)
-Let's say $F_n$ is distinctive in $A\in\mathcal{F}_n$ if $F_n\cap A\notin\mathcal{F}_n$. This also implies $\overline{F_n}\cap A\notin\mathcal{F}_n$, since $\overline{\overline{F_n}\cap A}\cap A=(F_n\cup\overline{A})\cap A=F_n\cap A$. Also, if $F_n$ is distinctive in $A$ and $B\in\mathcal{F}_n$, then $F_n$ is distinctive in $A\cap B$ or in $A\cap \overline{B}$, for otherwise $F_n\cap A\cap B\in\mathcal{F}_n$ and $F_n\cap A\cap\overline{B}\in\mathcal{F}_n$ and thus $(F_n\cap A\cap B)\cup(F_n\cap A\cap\overline{B})=(F_n\cap A)\cap(B\cup\overline{B})=F_n\cap A\in\mathcal{F}_n$. (Since we're working within a single underlying set $U$, the complements are well-defined.)
-Now let $n_0=0$ and $V_0=U$ and, by induction, construct a strictly increasing sequence $n_k$ together with sets $V_k$ and $G_k$ such that $V_k,G_k\in\mathcal{F}_{n_k+1}\setminus\mathcal{F}_{n_k}$ and $V_k,G_k\subseteq V_{k-1}$ for all $k\in\mathbb{N}$, that $G_k$ is disjoint from $G_j$ for all $k>j$, that $V_k$ is disjoint from $G_j$ for all $k\ge j$, and that for all $k$ infinitely many $F_n$ are distinctive in $V_k$.
-There are no disjointness conditions to fulfill for $k=0$, so we merely have to note that infinitely many (namely all) of the $F_n$ are distinctive in $V_0=U$ (since $F_n\cap U=F_n\notin\mathcal{F}_n$) to get the induction off the ground. Then, assuming that $V_k$ and $G_k$ have the above properties for all $k<m$, choose $n_m>n_{m-1}$ such that $F_{n_m}$ is distinctive in $V_{m-1}$; this is possible since infinitely many $F_n$ are distinctive in $V_{m-1}$. By the remark above, every $F_j$ with $j>n_m$ that is distinctive in $V_{m-1}$ is distinctive in at least one of $V_{m-1}\cap F_{n_m}$ and $V_{m-1}\cap\overline{F_{n_m}}$. Since there are infinitely many such $F_j$, we can choose $V_m$ to be one of $V_{m-1}\cap F_{n_m}$ and $V_{m-1}\cap\overline{F_{n_m}}$ such that infinitely many $F_j$ are distinctive in $V_m$. Take $G_m$ to be the other of $V_{m-1}\cap F_{n_m}$ and $V_{m-1}\cap\overline{F_{n_m}}$. Since $V_{m-1}$, $F_{n_m}$ and $\overline{F_{n_m}}$ are all in $\mathcal{F}_{n_m+1}$, so are $V_m$ and $G_m$, and since $F_{n_m}$ is distinctive in $V_{m-1}$, neither $V_m$ nor $G_m$ is in $\mathcal{F}_{n_m}$. Thus $V_m,G_m\in\mathcal{F}_{n_m+1}\setminus\mathcal{F}_{n_m}$ as required. Also, since $V_{m-1}$ is disjoint from all $G_j$ with $j\le m-1$, i.e. $j<m$, so are its subsets $V_m$ and $G_m$; and $V_m$ and $G_m$ are disjoint from each other, being the intersections of $V_{m-1}$ with complementary sets. This completes the induction.
-Now for the alternative proof of the main statement. As before, we may assume that we have a strictly increasing sequence of $\sigma$-algebras $\mathcal{G}_k$ with pairwise disjoint sets $G_k\in\mathcal{G}_{k+1}\setminus\mathcal{G}_k$, and we suppose, for a contradiction, that $\mathcal{G}=\bigcup_k\mathcal{G}_k$ is a $\sigma$-algebra. For each set of indices $S\subseteq\mathbb{N}$, the union $\bigcup_{k\in S}G_k$ is then in $\mathcal{G}$, and hence in some $\mathcal{G}_i$; encode each such union by the binary string whose $k$-th digit is $1$ exactly if $k\in S$, and write $G$ for the union of all the $G_k$. ($G$ is itself one of these unions, so it lies in some $\mathcal{G}_{k_0}$; discarding finitely many initial $\sigma$-algebras, we may assume $G\in\mathcal{G}_k$ for all $k$, so that complements within $G$ of sets in $\mathcal{G}_k$ are again in $\mathcal{G}_k$.) Now fix $k$ and assume that for every $j>k$ there is one of the above unions in $\mathcal{G}_k$ such that its $j$-th digit differs from its $k$-th digit. If the $k$-th digit is $1$, we take the set itself; if it is $0$, we take its complement in $G$, thus inverting its digits. Then in either case the $k$-th digit is $1$ and the $j$-th digit is $0$ (since they differ by assumption). Now we can remove all $G_j$ for $j>k$ by intersecting with all these sets, leaving only $G_k$ together with possibly some $G_j$ with $j<k$; removing those finitely many sets, which all lie in $\mathcal{G}_k$, yields the desired contradiction $G_k\in\mathcal{G}_k$. Thus, contrary to our assumption, there must be some $j>k$ such that the $j$-th and $k$-th digits coincide for all unions in $\mathcal{G}_k$, and this must be the case for all $k$.
-This we can use to derive another contradiction, using a diagonal argument and forming a binary string as follows: Starting with $n_1=1$, in each step we choose $n_{i+1}>n_i$ such that the $n_{i+1}$-th and $n_i$-th digits coincide for all unions in $\mathcal{G}_{n_i}$, and we choose the $n_{i+1}$-th digit of the binary string being constructed such that it differs from the $n_i$-th digit. (The remaining digits can be chosen arbitrarily.) Then the union corresponding to this string is in none of the $\mathcal{G}_{n_i}$, and thus in none of the $\mathcal{G}_k$, and thus not in $\mathcal{G}$. This contradiction completes the proof.
-Now one question remains: Either of the proofs (if we add the part about a pairwise disjoint sequence to Jonas' proof) is unlikely to be what whoever assigned the homework had in mind -- so is there a simpler proof?<|endoftext|>
-TITLE: If $0$, $z_1$, $z_2$ and $z_3$ are concyclic, then $\frac{1}{z_1}$, $\frac{1}{z_2}$, $\frac{1}{z_3}$ are collinear
-QUESTION [8 upvotes]: If the complex numbers $0$, $z_1$, $z_2$ and $z_3$ are concyclic, prove that $\frac{1}{z_1}$, $\frac{1}{z_2}$, $\frac{1}{z_3}$ are collinear.
-
-I really can't seem to get anywhere on this problem; all I've deduced is that there might be some relationship between circle geometry properties and the arguments of the complex numbers.
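-For a quick numeric sanity check (a minimal sketch in Python; the circle $|z-1|=1$ through the origin and the three sample angles are arbitrary choices):
-
-import cmath
-
-# three points on the circle of radius 1 centred at 1, which passes through 0
-zs = [1 + cmath.exp(1j * a) for a in (0.7, 2.1, 4.4)]
-w1, w2, w3 = (1 / z for z in zs)
-
-# w1, w2, w3 are collinear iff (w2 - w1) / (w3 - w1) is real
-print(((w2 - w1) / (w3 - w1)).imag)   # ~1e-16, i.e. zero up to rounding
-
-The imaginary part vanishes up to rounding error, consistent with the three reciprocals being collinear.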
-
-REPLY [2 votes]: The "mature" way to think of this is as a function on the Riemann sphere - this is the standard complex plane with a point $\infty$ added, also called the "extended complex plane". The function $1/z$ is a special case of what is called a linear fractional transformation - these are all functions of the form $\frac{az+b}{cz+d}$, where the coefficients are in $\mathbb{C}$ and $ad-bc\neq 0$. One property of linear fractional transformations is that they send "generalized circles" to "generalized circles", where a generalized circle is either an ordinary circle or a line (a line is called a generalized circle because you can view it as a circle through the point at infinity).
-In this case, we see our circle through 0 will be sent to a "circle through infinity", because we define $1/0$ to be $\infty$ here. So your four points will be sent to a line, and in particular, $1/z_1$, $1/z_2$, and $1/z_3$ will be collinear.
-Linear fractional transformations and their properties are a standard topic in complex analysis, and you don't even need to know any analysis to prove the property I just mentioned. Any standard text on complex analysis should have the proof. I suggest you find a copy of Gamelin's book, if you are interested.<|endoftext|>
-TITLE: What is a good graphing software?
-QUESTION [9 upvotes]: What is a good graphing program? One that can accept an equation (linear, quadratic, etc.) from users and output a graph for that equation (a software equivalent of the TI-84 graphing calculator).
-
-REPLY [7 votes]: You could also consider Desmos (web-based and free): https://www.desmos.com/calculator<|endoftext|>
-TITLE: Why is $\pi$ = 3.14... instead of 6.28...?
-QUESTION [26 upvotes]: Inspired by a paper (from 2001) entitled Pi is Wrong:
-Why is $\pi$ = 3.14... instead of 6.28...?
-Setting $\pi$ = 6.28 would seem to simplify many equations and constants in math and physics.
-Is there an intuitive reason we relate the circumference of a circle to its diameter instead of its radius, or was it an arbitrary choice that's left us with multiplicative baggage?
-
-REPLY [11 votes]: For those stating that $\frac{\tau}{2}r^2$ is more cumbersome than $\pi r^2$, I would like to point out that the factor of $\frac{1}{2}$ is found naturally in many equations throughout physics and goes to show a fundamental relationship between the area of a circle and the rest of physics that we miss when we cover it up with $\pi$. For example:
-$\frac{1}{2}mv^2$ = kinetic energy, where $m$ is mass and $v$ is velocity.
-$\frac{1}{2}kx^2$ = potential energy stored in a spring, where $k$ is the spring constant and $x$ is displacement.
-$\frac{1}{2}at^2$ = displacement from rest under constant acceleration, where $a$ is acceleration and $t$ is time.
-That missing factor of $\frac{1}{2}$ shows an important relationship between the area of a circle and the rest of physics, something we've been missing due to our adherence to an old way of looking at circles. We don't make circles using their diameter, regardless of how easy the diameter is to measure; we make circles using the radius. When you consider that all-important factor of $\frac{1}{2}$ and you use $\tau \approx 6.28$ instead of $\pi \approx 3.14$, the area of a circle becomes $\frac{1}{2}\tau r^2$.<|endoftext|>
-TITLE: restriction of scalars, reference or suggestion for proof
-QUESTION [6 upvotes]: Let $f:R \rightarrow R'$ be a ring homomorphism that is epic in the category of rings. Let $M,N$ be $R'$-modules.
Why is it that a homomorphism $h:M \rightarrow N$ is $R'$-linear if and only if it is $R$-linear? I have used this result in specific cases in the past (and know why it's true), e.g., if $f$ is the usual map of $R$ to $S^{-1}R$, $S$ a multiplicatively closed subset of $R$, but never in the generality I stated. But now I need the general result and would like to know why it works. A reference or a sketch for a proof would be appreciated.
-
-REPLY [2 votes]: Here's an elementary approach, following this answer.
-Claim. For any ring morphism $f:R\to S$, restriction of scalars is faithful.
-Proof. We have a commutative triangle formed by restriction of scalars $f^\ast$ and each forgetful functor from modules to abelian groups. Each of the latter is faithful, and if $G\circ F$ is faithful so is $F$.
-Proposition. If $f:R\twoheadrightarrow S$ is a commutative ring epimorphism then restriction of scalars $f^\ast$ is full.
-Proof. Let $X,Y$ be two $S$-modules and write $f^\ast X,f^\ast Y$ for their pulled back $R$-module structures. We want to prove any $R$-linear map $\varphi:f^\ast X\to f^\ast Y$ satisfies $s\varphi(x)=\varphi (sx)$. We shall do this universally by considering all such morphisms at once.
-Recall ${}_{R}\mathsf{Mod}(f^\ast X,f^\ast Y)$ is an abelian group. As with any abelian group, we may consider its endomorphism ring.
-This endomorphism ring admits two ring morphisms from $S$,
$$S\rightrightarrows \mathsf{Ab}({}_{R}\mathsf{Mod}(f^\ast X,f^\ast Y),{}_{R}\mathsf{Mod}(f^\ast X,f^\ast Y))$$
-with one given by $s\mapsto (\varphi(x)\mapsto s \varphi(x))$ and the other by $s\mapsto (\varphi(x) \mapsto \varphi (sx))$. These are ring morphisms because $X,Y$ are $S$-modules. We want to show these two ring morphisms coincide.
-I claim these ring morphisms coincide upon precomposing with $f:R\to S$.
$$R\to S\rightrightarrows \mathsf{Ab}({}_{R}\mathsf{Mod}(f^\ast X,f^\ast Y),{}_{R}\mathsf{Mod}(f^\ast X,f^\ast Y))$$
-Indeed this amounts to the equation $f(r)\varphi(x)=\varphi(f(r)x)$, which holds because $\varphi$ is $R$-linear. We now apply the fact that $f$ is epic to conclude that the two $S$-actions were already equal, as desired.
-Corollary. If $f:R\twoheadrightarrow S$ is a commutative ring epimorphism then restriction of scalars $f^\ast$ is fully faithful (consequently also conservative).<|endoftext|>
-TITLE: sum of logarithms
-QUESTION [7 upvotes]: I have to find the value of
$$\sum_{k=1}^{n/2} k\log k$$
-as part of a question.
-How should I proceed on this?
-
-REPLY [7 votes]: Got it. The constant in Moron's answer is $C = \log A$, where $A$ is the Glaisher-Kinkelin constant. Thus $C = \frac{1}{12} - \zeta'(-1)$.
-The expression $H(n) = \prod_{k=1}^n k^k$ is called the hyperfactorial, and it has the known asymptotic expansion
$$H(n) = A e^{-n^2/4} n^{n(n+1)/2+1/12} \left(1 + \frac{1}{720n^2} - \frac{1433}{7257600n^4} + \cdots \right).$$
-Taking logs and using the fact that $\log (1 + x) = O(x)$ yields an asymptotic expression for the OP's sum
$$\sum_{k=1}^n k \log k = C - \frac{n^2}{4} + \frac{n(n+1)}{2} \log n + \frac{\log n}{12} + O \left(\frac{1}{n^2}\right),$$
-the same as the one Aryabhata obtained with Euler-Maclaurin summation.
-
-Added: Finding an asymptotic formula for the hyperfactorial is Problem 9.28 in Concrete Mathematics (2nd ed.). The answer they give uses Euler-Maclaurin, just as Aryabhata's answer does. They also mention that a derivation of the value of $C$ is in N. G.
de Bruijn's Asymptotic Methods in Analysis, $\S$3.7.<|endoftext|>
-TITLE: When is an $n$-dimensional manifold characterized by its $m$-dimensional submanifolds?
-QUESTION [12 upvotes]: For which $m$, $n$ (if any) is the following true: if $M$ and $M'$ are smooth manifolds of dimension $n$, and $\Phi$ is a bijection from $M$ to $M'$ such that for any subset $S$ of $M$, $\Phi(S)$ is an embedded submanifold of $M'$ of dimension $m$ iff $S$ is an embedded submanifold of $M$ of dimension $m$, then $\Phi$ is a diffeomorphism?
-This is clearly false when $m=0$ and when $m=n$. I thought it looked plausible when, for example, $m=1$ and $n\geq 2$; but I can't see how to prove it.
-
-REPLY [2 votes]: (This is a bit long for a comment, but it's not a full answer)
-Here's where I would start when looking at this question.
-
-Theorem: The category of smooth manifolds is a full subcategory of the category of Frölicher spaces.
-
-First, we need to know what a Frölicher space is! It is a triple $(X,C,F)$ where $X$ is a set, $C \subseteq \operatorname{Map}(\mathbb{R},X)$ is a family of curves in $X$ and $F \subseteq \operatorname{Map}(X,\mathbb{R})$ is a family of functionals on $X$. Note that these are just set maps. The curves and functionals have to satisfy a compatibility condition:
-
-$\alpha \in C$ if and only if $\phi \circ \alpha \in C^\infty(\mathbb{R},\mathbb{R})$ for all $\phi \in F$, and
-$\phi \in F$ if and only if $\phi \circ \alpha \in C^\infty(\mathbb{R},\mathbb{R})$ for all $\alpha \in C$.
-
-A morphism of Frölicher spaces is a set map $f \colon X \to Y$ which satisfies the following (equivalent) conditions:
-
-$f \circ \alpha \in C_Y$ for all $\alpha \in C_X$,
-$\phi \circ f \in F_X$ for all $\phi \in F_Y$,
-$\phi \circ f \circ \alpha \in C^\infty(\mathbb{R},\mathbb{R})$ for all $\alpha \in C_X$ and $\phi \in F_Y$.
-
-The chain rule is enough to show that there is a functor from the category of smooth manifolds to that of Frölicher spaces, where we assign to a smooth manifold $M$ the Frölicher space $(M, C^\infty(\mathbb{R},M), C^\infty(M,\mathbb{R}))$. We want to show that this embeds the category of smooth manifolds as a full subcategory.
-Firstly, that it is faithful is obvious since both categories are concrete (that is, equipped with a faithful functor to $\operatorname{Set}$) and the functor preserves this structure.
-So we just need to show that it is full. As both categories are concrete (and the functor preserves this structure), it is sufficient to show that if $f \colon M \to N$ is a set map which is not smooth, then it is not a morphism in the Frölicher category. So let $M$ and $N$ be two smooth manifolds and $f \colon M \to N$ a set map which is not smooth. Then there is a chart for $M$ and a chart for $N$ which detects this non-smoothness. That is, there are smooth parametrisations $\theta_M \colon U_M \to V_M \subseteq M$ and $\theta_N \colon U_N \to V_N \subseteq N$ such that $g = \theta_N^{-1} \circ f \circ \theta_M$ both makes sense and is not smooth.
-Now $g$ is a map from an open subset of some Euclidean space to an open subset of some Euclidean space. A map into an open subset of a Euclidean space is smooth if and only if it is smooth when considered as a map into the ambient space, and then a map into a Euclidean space is smooth if and only if all the compositions with the projections are smooth. Thus we can find some projection $\pi_i \colon \mathbb{R}^n \to \mathbb{R}$ such that $\pi_i \circ \iota \circ g \colon U_M \to \mathbb{R}$ is not smooth.
Furthermore, as $U_M$ is an open subset of some Euclidean space, we can find an open disc such that the restriction of this map to that disc is not smooth, and then composing with a diffeomorphism of the ambient space to that disc provides us with a non-smooth map $\mathbb{R}^m \to \mathbb{R}$.
-Now comes the magic step. We use a result of Jan Boman which says that a map $\mathbb{R}^m \to \mathbb{R}$ is smooth if and only if its compositions with all smooth curves are smooth. That is, $h \colon \mathbb{R}^m \to \mathbb{R}$ is smooth if and only if $h \circ \gamma \colon \mathbb{R} \to \mathbb{R}$ is smooth for all $\gamma \in C^\infty(\mathbb{R},\mathbb{R}^m)$. So as our map was not smooth, there is a smooth curve $\gamma \colon \mathbb{R} \to \mathbb{R}^m$ which detects this non-smoothness. We can now unpick all the constructions to transfer this curve to a smooth curve in $M$; similarly we can transfer our projection to a smooth functional on $N$, and produce a smooth curve $\alpha \in C_M$ and functional $\phi \in F_N$ such that $\phi \circ f \circ \alpha$ is not smooth. Hence $f$ is not a morphism in the category of Frölicher spaces.
-This proves the theorem.
-
-Now the question as stated asks about embedded submanifolds, not all maps from, say, $\mathbb{R}$. So the place to look is to see if Boman's result works if one is allowed to only test with curves that are parametrisations of submanifolds. As this is a local result, and immersions are local embeddings, it is enough to test with curves $\alpha \in C^\infty(\mathbb{R},M)$ with non-vanishing first derivative.
-My instinct would be to say that the result still holds, but I don't have a good enough grasp of the details of Boman's proof to be sure. I do know that you don't need all smooth curves for it to work. For example, you can leave out all curves that have non-negligible intersection with, say, the $x$-axis.
-References
-
-Boman, J. (1967). Differentiability of a function and of its compositions with functions of one variable. Math. Scand., 20, 249–268.
-Kriegl, A. and Michor, P. W. (1997). The convenient setting of global analysis (Vol. 53). Mathematical Surveys and Monographs. Providence, RI: American Mathematical Society.
-nLab pages, starting at Frölicher spaces.<|endoftext|>
-TITLE: Both solutions to a quadratic make sense -- looking for applications
-QUESTION [7 upvotes]: I'm looking for reasonably real, non-abstract applications modeled by quadratic equations where both solutions make sense. I'd like them to be accessible to high school algebra students.
-One I come up with is firing a bullet through a sphere -- the two solutions correspond to the entry and exit points of the bullet, that is the intersections of the bullet's line and the sphere's surface. To keep it simple, we could do this instead with a line and circle in the xy plane, but the idea is the same.
-Applications outside of physics would be particularly welcome.
-
-REPLY [4 votes]: Knowing two sides $a$ and $c$ of a triangle and an angle $\alpha$ not between them (i.e. the SSA condition) does not determine the length of the third side $b$. See Figure 3 on this page. (Unfortunately, the convention that the length of the side opposite the vertex $A$ is denoted $a$ is going to make the notation a little confusing below.)
-Set the origin at the vertex $A$ with the $x$-axis along $AC$.
Then the vertex $B$ is at $(c \cos \alpha, c \sin \alpha)$ and $C$ is at $(b,0)$, and we have $$(c \cos \alpha - b)^2 + (c \sin \alpha - 0)^2 = a^{2},$$ which is a quadratic in $b$ both of whose solutions evidently make sense.<|endoftext|>
-TITLE: Divergence of $\sum\frac{\cos(\sqrt{n}x)}{\sqrt{n}}$
-QUESTION [6 upvotes]: I have difficulties in showing that the series $f(x)=\sum_{n=1}^\infty \frac{\cos(\sqrt{n}x)}{\sqrt{n}}$ is divergent at every real number $x$.
-However, I cannot find an elementary method to do this. Can anyone help me with this?
-Thanks.
-
-REPLY [8 votes]: Since you asked for elementary methods, here is one. Intuitively the idea for your first series is that $\sqrt{n}x$ increases more and more slowly and that one can find larger and larger intervals of integers $n$ such that, on each of these intervals, the cosines are uniformly larger than a positive constant, a fact which brings you back to the evaluation of finite sums of $1/\sqrt{n}$.
-More precisely, once you have solved the case $x=0$, assume without loss of generality that $x$ is positive and choose your favorite angle whose cosine is positive, for example $\cos(\pi/3)=1/2$. Hence $\cos(\sqrt{n}x)\ge1/2$ for every $n$ such that there exists an integer $k$ such that
$$2k\pi-\pi/3\le\sqrt{n}x\le2k\pi+\pi/3.$$
-This condition translates as $n\in I_k$, where $I_k$ is an integer interval of width of order $wk$ around an integer of order $(ck)^2$, where $c=2\pi/x$ and $w=8\pi^2/(3x^2)$.
-The sum of $\cos(\sqrt{n}x)/\sqrt{n}$ over $I_k$ is at least $1/2$ times the sum over $I_k$ of $1/\sqrt{n}$. This last sum is of order $1/(ck)$ times the number of terms in $I_k$, which is of order $wk$. Thus the sum of $\cos(\sqrt{n}x)/\sqrt{n}$ over $I_k$ is at least $w/(2c)+o(1)$.
-When a series converges, each of its partial sums for $n$ in an interval $[n_0,n_1]$ is as small as one wants provided $n_0$ is large enough (this in fact characterizes convergent series). We disproved this property, hence the series diverges.<|endoftext|>
-TITLE: Simplicial homology of real projective space by Mayer–Vietoris
-QUESTION [67 upvotes]: Consider the $n$-sphere $S^n$ and the real projective space $\mathbb{RP}^n$. There is a universal covering map $p: S^n \to \mathbb{RP}^n$, and it's clear that it's the coequaliser of $\mathrm{id}: S^n \to S^n$ and the antipodal map $a: S^n \to S^n$. At first I thought I might use this fact and invoke abstract nonsense to compute the homology groups of $\mathbb{RP}^n$, but then I realised that it didn't give the correct answer. The rubric of the question suggests the following approach:
-
-Cut $S^n$ along two parallels on opposite sides of the equator to obtain two subspaces $L$, homeomorphic to two disjoint copies of the closed ball $B^n$, and $M$, homeomorphic to the closed cylinder $S^{n-1} \times B^1$.
-The antipodal map sends $L$ to $L$ and $M$ to $M$, and so they are both double covers of their images $Q$ and $R$ (resp.) in $\mathbb{RP}^n$. Clearly $Q$ is homeomorphic to a single copy of $B^n$, and with some thought it's seen that $R$ is an $n$-dimensional generalisation of the Möbius strip.
-Use the fact that $a$ and $p$ act nicely on $L$ and $M$ to obtain chain maps between the Mayer–Vietoris sequences for $S^n = L \cup M$ and $\mathbb{RP}^n = Q \cup R$.
-
-Edit: Following Jim Conant's comment about deforming $R$ into $\mathbb{RP}^{n-1}$, I managed to compute the homology of $\mathbb{RP}^n$ by induction (on $n$, of course).
However, I'm not convinced this was the intended solution, as the question explicitly refers to the fact that any nice simplicial map $f: K \to P$ sending subcomplexes $L, M$ into $Q, R$ (resp.), where $K = L \cup M$ and $P = Q \cup R$, induces a chain map $f_*$ between the Mayer–Vietoris sequences
$$\cdots \to H_{r+1} (K) \to H_r (L \cap M) \to H_r(L) \oplus H_r(M) \to H_r(K) \to H_{r-1} (L \cap M) \to \cdots$$
-and
$$\cdots \to H_{r+1} (P) \to H_r (Q \cap R) \to H_r(Q) \oplus H_r(R) \to H_r(P) \to H_{r-1} (Q \cap R) \to \cdots$$
-but I have not used this fact anywhere in my solution. Have I made a mistake somewhere?
-
-REPLY [5 votes]: This is not really an answer to your question, and I'm not sure how your original attempt went, but one has to be careful when taking homology of colimit diagrams of spaces. Homology commutes (up to natural isomorphism) with filtered colimits in the category of chain complexes. This is not necessarily true of the composite functor $Top \overset{S_{*}}{\to} Ch_{+} \overset{H}{\to} Ab_{gr}$. Here $S_{*}$ is the functor which assigns to each space its singular chain complex, and $Ab_{gr}$ is the category of graded abelian groups. It is true, however, for filtered colimits. In particular, if $X=\bigcup_{p}X_{p}$, then
$$H_{*}(X)\simeq \varinjlim_{p} H_{*}(X_{p}).$$
-What I'm saying is, more precisely, that the coequalizer diagram $S^{n}\rightrightarrows S^{n}$ (with the two arrows $\alpha$ and $\mathrm{id}$) is not a filtered colimit. You cannot therefore conclude that the coequalizer of $H_{*}(S^{n})\rightrightarrows H_{*}(S^{n})$ is isomorphic to the homology of $\mathbf{R}P^{n}$. Maybe you already knew this, but I was just pointing it out in case you had not considered the point.<|endoftext|>
-TITLE: Alternative "functorial" proof of Nielsen-Schreier?
-QUESTION [7 upvotes]: There are two proofs of Nielsen-Schreier that I know of. The theorem states that every subgroup of a free group is free. The first proof uses topology and covering space theory and is rather elegant. The second uses combinatorial techniques on a free group of words with no relations.
-Is there a more algebraic proof which somehow just uses the universal property of free groups and maybe other properties of groups that are proved more "algebraically"?
-I'm interested because groups are defined purely algebraically by equations, and some proofs that a subgroup of a free abelian group is free abelian have a far more algebraic flavour. So perhaps there is some proof of Nielsen-Schreier that also has a more algebraic flavour?
-Ideally I would like a proof that does not involve combinatorial properties of a group of words on generators; in other words preferably no facts from combinatorial group theory.
-
-REPLY [2 votes]: A purely algebraic proof (so the authors claim) emerged recently, employing diagram chasing with some of the wreath product's functorial properties. It is by Ribes, L. and Steinberg, B., under the title "A wreath product approach to classical subgroup theorems", Enseign. Math. (2) 56 (2010), no. 1-2, 49–72; it is also available on the arXiv. They effectively use the universal-property definition and are able to prove the Kurosh Subgroup Theorem as well.<|endoftext|>
-TITLE: Sum is an isomorphism, finitely generated modules
-QUESTION [5 upvotes]: Suppose $R$ is a local ring with maximal ideal $m$ and suppose $M$ and $N$ are finitely generated $R$-modules. Let $f,g: M \rightarrow N$ be $R$-module homomorphisms. If $f$ is an isomorphism and $g(M) \subset mN$, why is $f+g$ in fact an isomorphism?
I think this might follow by Nakayama's lemma but I don't see this. Can you please help?
-
-REPLY [5 votes]: The hypotheses ensure that $g \otimes R/\mathfrak{m}: M/\mathfrak{m}M \rightarrow N/\mathfrak{m} N$ is zero and thus that $(f+g) \otimes R/\mathfrak{m}$ is an isomorphism. It is a standard consequence of Nakayama's Lemma that this implies that $f+g$ is surjective: see e.g. the end of Section 3.8.1 of my commutative algebra notes.
-We still need to show that $f+g: M \rightarrow N$ is injective. Let $K$ be the kernel of this map. If we may assume that $K$ is a finitely generated $R$-module, then again this follows from Nakayama's Lemma, since the hypotheses give $K/\mathfrak{m} K = \ker ((f+g) \otimes R/\mathfrak{m}) = 0$.
-However, the assumption of the previous paragraph does seem to be an additional assumption, albeit a mild one. It is automatic if $M$ is not just finitely generated but finitely presented, and this in turn is automatic if the ring $R$ is Noetherian. (In the olden days, "local rings" were required to be Noetherian. I wonder if that's what you meant?) Whether the result is still true without any additional assumptions I'm not sure off the top of my head. Perhaps some context would be helpful: why do you think this result is true?
-
-REPLY [3 votes]: You need to apply Nakayama to both the kernel of $f+g$ and to the cokernel of $f+g$. I suspect that you need to suppose $R$ to be Noetherian local, in order to know that the kernel is again finitely generated.
-Now if $K$ is either the kernel or the cokernel, you must argue that $K = mK$.
-Give it a try...
-
-REPLY [2 votes]: Composing with $f^{-1}$ allows us to assume $M=N$, $f=\mathrm{Id}$.
-Let $M'=(\mathrm{Id}+g)(M)$.
-Then $P=M/M'$ satisfies $P=mP$ (if $x \in M$, $x=x+g(x)-g(x) \in M' + mM$) and is finitely generated, so that $P=0$.
-Moreover, for all $x \in K = \ker (\mathrm{Id}+g)$, $x=-g(x)=g^2(x)=\ldots=(-1)^n g^n(x)$ for any integer $n$, so that $K \subset \bigcap_{n \geq 0} m^n M$.
-Now if $R$ is noetherian, this last module is finitely generated and satisfies the same condition as $P$, so Nakayama's lemma can be applied again and $K=0$.<|endoftext|>
-TITLE: Two basic complex analysis homework questions
-QUESTION [6 upvotes]: I have some homework problems from Greene and Krantz' Function Theory of One Complex Variable. They come from Chapter 5. I definitely do not want answers, just light prodding in the right direction.
-
-Let $f_j: D(0, 1)\to\mathbb C$ be holomorphic and suppose that each $f_j$ has at least $k$ roots in $D(0,1)$, counting multiplicities. Suppose that $f_j\to f$ uniformly on compact sets. Show by example that it does not follow that $f$ has at least $k$ roots counting multiplicities. In particular, construct examples, for each fixed $k$ and each $\ell$, $0\le\ell\le k$, where $f$ has exactly $\ell$ roots. What simple hypothesis can you add that will guarantee that $f$ does have at least $k$ roots?
-
-I know we require continuity of the $f_j$ on the boundary for the number of zeroes to be the same in the limit, but I'm not clear on why this is. Presumably, that's the purpose of the question. In another question, the goal is to prove this when the disk and its boundary are in the region of holomorphicity of the $f_j$.
-I'm tempted to think that the problem we run into is when the zeroes move to the boundary in the limit, but maybe I'm just not familiar enough to construct an actual example of this. I'm also confused by their use of at least $k$ roots.
It seems simplest to start with a sequence of functions that all have exactly $k$ roots, but I'm afraid I'm missing something, and that it will only work if the functions have different numbers of zeroes, for some reason.
-Basically, I would like some intuition about how things can go wrong when we only have holomorphicity on the interior of the disk, and maybe a (small) clue about the form the sequence will take.
-
-Prove: If $f$ is a polynomial on $\mathbb C$, then the zeroes of $f^\prime$ are contained in the closed convex hull of the zeroes of $f$. (Here the closed convex hull of a set $S$ is the intersection of all closed convex sets that contain $S$.) [Hint: If the zeroes of $f$ are contained in a halfplane $V$, then so are the zeroes of $f^\prime$.]
-
-I would really like to use the maximum modulus theorem to say that the zeroes of $f^\prime$ occur at maxima (minima) of $f$, and therefore that these only happen on the boundary of some set $U$ (where $f$ is continuous on $\overline U$ and holomorphic on $U$), but I can't see a way to relate this statement about general bounded domains and convex hulls.
-
-I could be looking at these entirely wrong. If I need to clarify my thoughts or say more, please let me know, and again, I definitely don't want more than little hints. Thanks all.
-
-REPLY [3 votes]: It seems to me that your intuition of zeroes wandering off towards the boundary is the correct one, and you don't have to look for nastiness of the behavior on the boundary. In fact, Hurwitz's theorem can be stated as follows:
-
-Let $G$ be open and connected and let $f_{n},f: G \to \mathbb{C}$ be holomorphic such that $f_{n} \to f$ uniformly on compact sets. Assume that $U \subset G$ is bounded and open, that $\overline{U} \subset G$ and that $f$ has no zeroes on $\partial U$; then there exists an index $n_{0} = n_{0}(U)$ such that $f_{n}$ and $f$ have the same number of zeroes (counted with multiplicity) in $\overline{U}$ for all $n \geq n_{0}$.
-
-If you want to produce a sequence of functions with a zero wandering towards $\zeta \in \partial D$, simply put $f_{n} = (z - (1 - \frac{1}{n})\zeta)g$ where $g$ is an arbitrary non-constant holomorphic function on $D$.
-
-Concerning the second question (*), I don't quite understand why one would want to use the maximum principle. It can be done completely explicitly:
-The logarithmic derivative of a polynomial $f$ of degree $n$ is
-\[
-\frac{f'(z)}{f(z)} = \frac{1}{z-z_{1}} + \cdots + \frac{1}{z-z_{n}} = \overline{\sum_{k=1}^{n} \frac{z - z_{k}}{\vert z-z_{k}\vert^2}}
-\]
-where the $z_{1},\ldots,z_{n}$ are the (not necessarily distinct) zeroes of $f$. This is easily proved by induction on the degree of $f$. Note that for $g(z) = (z-z_{0})f(z)$ we have $\frac{g'}{g} = \frac{1}{z-z_{0}} + \frac{f'}{f}$.
-If $f'(c) = 0$ we want to show that $c$ is in the convex hull of the $z_{k}$. Now if $c \in \{z_{1},\ldots,z_{n}\}$ this is trivial, so we may assume that this is not the case. But $0 = \overline{\frac{f'(c)}{f(c)}}$ then gives
-\[
-\left(\sum_{k=1}^{n} \frac{1}{|c-z_{k}|^2} \right) \cdot c = \sum_{k=1}^{n} \frac{1}{|c-z_{k}|^2} z_{k}
-\]
-and this easily gives $c$ as a convex combination of the $z_{k}$. Recall that a convex combination is a sum of the form $\sum_{k=1}^{n} \lambda_{k} z_{k}$ with $\lambda_{k} \geq 0$ and $\sum \lambda_{k}= 1$.
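-A quick numeric illustration of this computation (a minimal sketch in Python using numpy; the cubic below is an arbitrary choice, and the check assumes $f$ and $f'$ share no root):
-
-import numpy as np
-
-coeffs = [1, -2 + 1j, 3, -1 - 4j]                 # an arbitrary cubic f
-roots = np.roots(coeffs)                          # the zeroes z_k of f
-crit = np.roots(np.polyder(np.poly1d(coeffs)))    # the zeroes c of f'
-
-for c in crit:
-    # weights 1/|c - z_k|^2, normalised to sum to 1 as in the formula above
-    w = 1.0 / np.abs(c - roots) ** 2
-    lam = w / w.sum()
-    # c is recovered as the convex combination sum_k lam_k * z_k
-    print(np.allclose(c, np.sum(lam * roots)))    # prints True for each c
-
-The weights $\lambda_k$ are nonnegative and sum to $1$, so each critical point is exhibited explicitly as a convex combination of the zeroes.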
-(*) Much Later: This is called the Gauß–Lucas theorem.<|endoftext|>
-TITLE: Distribution of days in a week on Christmas
-QUESTION [5 upvotes]: I am wondering if there is a rigorous argument about the probabilities of Christmas (Dec. 25) falling on Monday, Tuesday, ..., Sunday. My experiments give:
-Sunday 0.145
-Monday 0.14
-Tuesday 0.145
-Wednesday 0.1425
-Thursday 0.1425
-Friday 0.145
-Saturday 0.14
-
-It looks like they are not equal. :)
-My question is:
-1) How to obtain the probabilities without restricting the counts to a certain range of years?
-2) Why is Sunday more probable than Wednesday, which is more probable than Monday, if it is true?
-
-REPLY [11 votes]: You have all you need to know in the comments to your question.
-Assuming you are using the Gregorian calendar, then in its cycle of $400$ years there are $97$ leap years and so $365\times 400+97 = 146097$ days, which is exactly $20871$ weeks, so each cycle repeats the weekdays. All you need to do is count $400$ consecutive Christmases; you could count $2800$ or some other multiple, but it would not change the proportions.
-Since $400$ is not divisible by $7$, there is no possibility that each weekday will appear the same number of times. In fact you get the following numbers:
-Sunday 58
-Monday 56
-Tuesday 58
-Wednesday 57
-Thursday 57
-Friday 58
-Saturday 56.
-
-Divide each of these by $400$ and you get the proportions in your question.
-There is no particular reason why Sunday, Tuesday and Friday are most common; they just are. Some day(s) had to be more common than others since $400$ is not divisible by $7$; in the previously used Julian calendar, each weekday appeared four times for 25 December every $28$ years.<|endoftext|>
-TITLE: The cardinality of the set of all finite subsets of an infinite set
-QUESTION [39 upvotes]: Let $X$ be an infinite set of cardinality $|X|$, and let $S$ be the set of all finite subsets of $X$. How can we show that Card($S$)$=|X|$? Can anyone help, please?
-
-REPLY [12 votes]: This is an old post, but because Arturo's otherwise good answer is a bit cavalier on choice usage and the comments don't make the exact level of choice needed entirely clear, I thought I'd explain my own approach to the question in a ZF framework.
-We can establish the following with no well-orderability assumptions:
-
-Lemma: If $X$ is a nonempty set and $X\times X\approx X$ (meaning there is a bijection from $X\times X$ to $X$), then $\bigcup_{n\in\omega}X^n\approx\omega\times X$.
-
-Proof: Fix a bijection $f:X\times X\to X$ and $x_0\in X$. Then we can define
$$g_0:x\in X^0\mapsto x_0$$
$$g_{n+1}:x\in X^{n+1}\mapsto f(g_n(x\restriction n), x_n)$$
-Then by induction we have that $g_n$ is an injection from $X^n$ to $X$, and then $$h:x\in\bigcup_{n\in\omega}X^n\mapsto \langle \operatorname{dom}x,g_{\operatorname{dom}x}(x)\rangle$$
-is an injection from $\bigcup_{n\in\omega}X^n$ to $\omega\times X$. (We use that $X^n$ for different $n$ are disjoint because $x\in X^n$ implies $x:n\to X$ so that $\operatorname{dom}x=n$.) For the reverse inequality, define
$$j:n\in\omega,x\in X\mapsto(z\in n+1\mapsto x).$$
-Then if $j(n,x)=j(m,y)$ we have $(z\in n+1\mapsto x)=(z\in m+1\mapsto y)$ so the domains of the functions are equal, i.e. $n+1=m+1\Rightarrow n=m$, and also $x=j(n,x)(0)=j(m,y)(0)=y$.
-
-Adding an assumption of well-ordering allows us to simplify the statement to what we are after:
-
-If $X$ is an infinite well-orderable set, then the set $[X]^{<\omega}$ of finite subsets of $X$ is equipollent to $X$.
-
-Proof: Since singletons are in $[X]^{<\omega}$ and naturally bijective with $X$, $X\preceq[X]^{<\omega}$. For the converse, we have $X\times X\approx X$ because $X$ is infinite well-orderable, so by the lemma $$\bigcup_{n\in\omega}X^n\approx\omega\times X\preceq X\times X\approx X;$$ thus $\bigcup_{n\in\omega}X^n$ is also well-orderable, so we can reverse the surjection $f:\bigcup_{n\in\omega}X^n\to[X]^{<\omega}$ which maps each function to its range to get an injection. Thus, $X\preceq [X]^{<\omega}\preceq\bigcup_{n\in\omega}X^n\preceq X$.<|endoftext|>
-TITLE: Acyclic vs Exact
-QUESTION [17 upvotes]: I have a question about the words "acyclic" and "exact." Why does Brown use the term "acyclic" instead of "exact" in his book Cohomology of Groups? It seems to me that these two terms exactly coincide. Are there examples (or topics in math) in which being acyclic means one thing and being exact means another, and when restricted to homology theory the two coincide? Thank you.
-
-REPLY [7 votes]: Mariano, you seem to be confusing a few things here. A complex is acyclic if and only if it is exact (see for instance Exercise 1.1.5 in Weibel's Homological Algebra book, or probably anyplace where this is defined).
-An object is acyclic for a functor if the derived functors of said functor vanish on the object. For instance a flasque sheaf for the global section functor.
-A resolution is acyclic if the objects of the resolution are acyclic objects.
-A projective resolution is acyclic, for example, for the $\mathrm{Hom}(-, N)$ functor (for some $N$) because a projective module is, and not because it is exact except at degree zero. That condition is encoded in the word resolution.
-So, without knowing the way Brown uses this (niyazi, you'd have to be a little more specific about that) the answer to the question is something like this: as long as you are talking about a complex being acyclic, it means the same as exact, but acyclic also applies to an object, whereas we do not say that an object is exact.<|endoftext|>
-TITLE: The algebraic closure of a finite field and its Galois group
-QUESTION [16 upvotes]: $F$ is an extension field of a field $K$.
-Let $F$ be an algebraic closure of $\mathbb{Z}_p$ ($p$ prime). Show that
-$(i)$ $F$ is algebraic Galois over $\mathbb{Z}_p$
-$(ii)$ The map $\alpha:F\rightarrow F$ given by $u\mapsto u^p$ is a nonidentity $\mathbb{Z}_p$-automorphism of $F$.
-$(iii)$ The subgroup $H=\langle \alpha \rangle$ is a proper subgroup of Aut$(F/\mathbb{Z}_p)$ whose fixed field is $\mathbb{Z}_p$, which is also the fixed field of Aut$(F/\mathbb{Z}_p)$ by $(i)$.
-So, here is my attempt for (i). Let $S \subset \mathbb{Z}_p[x]$ be the set of monic polynomials of the form $x^{p^n}-x$. Then for all $f\in S$, gcd$(f,f^{\prime})=1$ (i.e., the polynomials are separable) and $F=\mathbb{Z}_p(a\in F : f(a)=0$ for some $f\in S)$. So $F/\mathbb{Z}_p$ is Galois. I'd be glad if I could get assistance for (ii) and (iii) as well. Thanks.
-ADDED: Attempt at (iii). I know that since $F/\mathbb{Z}_p$ is a Galois extension, the fixed field of Aut$(F/\mathbb{Z}_p)$ is $\mathbb{Z}_p$... hints...
-The field $\mathbb{Z}_p$ must be contained in $F$. For $a\in \mathbb{Z}_p$, $\alpha(a)=a^p=a$. Thus the polynomial $x^p-x$ has $p$ zeros in $F$, namely, the elements of $\mathbb{Z}_p$. But the elements fixed under $\alpha$ are precisely the zeros in $F$ of $x^p-x$. Hence the fixed field of $\alpha$ is $\mathbb{Z}_p$, which is also the fixed field of Aut$(F/\mathbb{Z}_p)$.
-Left to show that $H$ is a proper subgroup...
If it helps, I know that the order of $\langle \alpha \rangle$ is $n$...
-
-REPLY [3 votes]: The last statement recommended in Arturo Magidin's answer cannot work because it is not true at all: every nontrivial element of the Galois group $\mathrm{Aut}_{F_{p}}F$ has infinite order (for reference, see Exercise 15 on page 71 of this book). Let $K$ be the union of the subfields in the first chain, i.e., the union of the $F_{p^{2^{n}}}$; then it is a field. Consider the homomorphism $h$ from $K$ to $F$ given by $x\mapsto x^p$; by the extension theorem there is an $F_p$-automorphism of $F$ extending it. If $H$ were not a proper subgroup, this would imply that $h=\alpha^n$ for some integer $n$ ($n \neq 0$). Applying both maps to every $x$ in $F_{p^{2^{n+1}}}$ shows that the equation $h(x)=\alpha^n(x)$ has $p^{2^{n+1}}$ roots, more than such a polynomial equation can have, a contradiction.<|endoftext|>
-TITLE: Why is the coordinate ring of a projective variety not determined by the isomorphism class of the variety?
-QUESTION [25 upvotes]: I know that there are isomorphic projective varieties which have nonisomorphic coordinate rings, but I'm a little mystified as to "why" this is the case. Why doesn't a usual functoriality proof go through to prove this, and is there any insight into this other than that the definitions just work out this way?
-
-REPLY [5 votes]: Here's a slightly tangential elaboration on the above two answers.
-Let $X = \mathrm{Proj}\, A$ for $A$ a noetherian ring. So here $A$ should fade into the background, but $X$ is the important thing; it turns out that the $\mathrm{Proj}$ depends only on the asymptotic part of $A$ in high degrees. Then we can consider the category of coherent sheaves on $\mathrm{Proj}\, A$. The sheaf of regular functions is the canonical example.
-There is a functor from finitely generated graded $A$-modules to coherent sheaves on $\mathrm{Proj}\, A$, denoted by a tilde. (This is as in Hartshorne II.5.) Unlike the case for an affine scheme, however, it is not an equivalence of categories. It is not too hard to show that the functor is essentially surjective, since given a coherent sheaf $\mathcal{F}$ on $\mathrm{Proj}\, A$ you can construct the graded $A$-module $\bigoplus \Gamma(X, \mathcal{F}(n))$ of the global sections of all the twists. Then one can show using elementary methods that this recovers $\mathcal{F}$. (In fact, this is true even if $A$ is not noetherian and $\mathcal{F}$ is simply assumed quasi-coherent.)
-However, if we have a graded $A$-module $M$, we cannot recover $M$ from the associated sheaf $\tilde{M}$. For instance, if $M$ has finitely many nonzero components, then the homogeneous localizations $M_{(f)}$ vanish (for any homogeneous element $f$), so the associated sheaf is zero. But $M$ may not be zero.
-This suggests that if we consider the abelian category of finitely generated graded $A$-modules, and quotient out by the Serre class of modules that are "asymptotically zero," then we will get the category of coherent sheaves on $\mathrm{Proj}\, A$. This is in fact true, though the proof uses a bit of cohomological machinery (see EGA III.2).
-So the moral of the story is that the $\mathrm{Proj}$ sees only asymptotic data. If we have a morphism $A \to A'$ that induces an isomorphism in large enough degrees, then $\mathrm{Proj}\, A, \mathrm{Proj}\, A'$ will be isomorphic (as one example).<|endoftext|>
-TITLE: Embedding of finite groups
-QUESTION [8 upvotes]: It is well known that any finite group can be embedded in the symmetric group $S_n$ and in $GL(n,q)$ ($q=p^m$) for some $m,n,q\in \mathbb{N}$.
Can we embed any finite group in $A_n$, or $SL(n,q)$, for some $n,q\in \mathbb{N}$?
-
-REPLY [11 votes]: Yes.
-The symmetric group $Sym(n)$ is generated by $\{(1,2), (2,3),\ldots, (n−1,n)\}$. You can embed $Sym(n)$ into $Alt(n+2)$ as the group generated by $\{(1,2)(n+1,n+2), (2,3)(n+1,n+2), …, (n−1,n)(n+1,n+2)\}$. This embedding takes a permutation $\pi\in Sym(n)$ and sends it to $\pi⋅(n+1,n+2)^{\text{sgn}(\pi)}$, where $\text{sgn}(\pi)\in\{0,1\}$ is the parity of the permutation.
-In other words, $G\le Sym(n)\le Alt(n+2)$ embeds any finite group into a (slightly larger) alternating group.
-The general linear group $GL(n,q)$ embeds in the special linear group $SL(n+1,q)$ using a determinant trick. We just add a new coordinate to cancel out the determinant of the matrix from $GL(n,q)$ so the result lands in $SL(n+1,q)$.
$$\operatorname{GL}(n,q) \cong \left\{ \begin{bmatrix} A & 0 \\ 0 & 1/\det(A) \end{bmatrix} : A \in \operatorname{GL}(n,q) \right\} ≤ \operatorname{SL}(n+1,q)$$
-In other words, $G\le GL(n,q)\le SL(n+1,q)$ embeds any finite group into a (slightly larger) special linear group.
-
-REPLY [4 votes]: Yes we can.
-For $A_n$, we can embed the given group in some $S_{n-2}$ and then for the additional two elements choose the identity or the transposition according as the element of $S_{n-2}$ is even or odd.
-For $SL(n,q)$, we can embed the given group in some $GL(n-1,q)$ and then choose the diagonal element in the additional row and column as the reciprocal of the determinant of the element of $GL(n-1,q)$.
-
-REPLY [3 votes]: To your first question (embeddable within $A_n$) I think the answer is yes for obvious reasons: one can embed $S_n$ within $A_{n+2}$.
-(Consider the subset of $A_{n+2}$ that stabilizes the first $n$ elements (as a set); it's obvious that this set will consist of all permutations of these elements, where it may or may not interchange the last two points, depending on the sign. For instance for $n=3$ we retrieve $S_3$ as $(1\ 2\ 3)$, $(1\ 2)(4\ 5)$, etc...)<|endoftext|>
-TITLE: Why are Artinian rings of Krull dimension 0?
-QUESTION [15 upvotes]: Why are Artinian rings of Krull dimension 0?
-As in the example of $\mathbb{Z}/(6)$, the ideal $(2)$ is prime, I think. So, Artinian rings may contain prime ideals. But why does such a prime ideal not properly contain other prime ideals?
-Another question: when a ring has Krull dimension 0, is it necessarily Artinian?
-Thanks.
-
-REPLY [14 votes]: HINT $\ $ By factoring out a prime ideal it reduces to showing that an Artinian domain is a field, which follows immediately by DCC.
-This method of factoring out by a prime ideal to reduce from rings to domains is a ubiquitous algebraic problem-solving technique. Indeed, the eminent algebraist Irving Kaplansky explicitly pauses to mention this method (in a less trivial context) in his classic textbook Commutative Rings. Follow the above link for an excerpt. Not only was Kap a great algebraist but also a great expositor - a rare combination. I wholeheartedly recommend his expositions, where I recall learning many beautiful mathematical ideas.<|endoftext|>
-TITLE: Example of modules that are projective but not free; torsion-free but not free
-QUESTION [45 upvotes]: Free modules are projective, and projective modules are direct summands of free modules.
-
-Are there examples of projective modules that are not free?
-
-(I know this is not possible for modules over fields.)
-Free modules are torsion-free. But is the converse true?
-
-Are there examples of torsion-free modules that are not free?
-
-Thank you~
-
-REPLY [4 votes]: Take any two rings $R_\pm$.
-Form the ring $R:=R_+\oplus R_-$.
-Consider the free $R$-module $F:=R$.
-Then it is projective as it is free.
-Consider the summand $R$-module $P:=R_+\oplus 0$.
-Then it is projective as a summand of a projective.
-Now suppose both rings are finite, $\# R_\pm<\infty$:
-By a counting argument the constructed module is not free!<|endoftext|>
-TITLE: Dimensions of irreducible representations of finite groups over $\mathbb Q$
-QUESTION [13 upvotes]: If $G$ is a finite group, then it is well known that there are finitely many inequivalent irreducible representations of $G$ over $\mathbb{C}$; moreover the sum of squares of dimensions of the representations is equal to $|G|$. Also, the dimension of each representation divides $|G|$.
-If we consider representations over $\mathbb{Q}$, are the dimensions of irreducible representations related to $|G|$ in such a nice way?
-
-REPLY [5 votes]: One doesn't need character theory calculations to decompose the group algebra into irreducibles.
-Suppose $G$ is a finite group and $k$ a field in which $|G|$ is invertible. Then Maschke's theorem applies, so all finite-dimensional representations are expressible as a direct sum of irreducibles. Write
$$k[G]\cong \bigoplus_{V\in\widehat{G}} V^{\oplus m(V)}$$
-for some unknown multiplicities $m(V)$, as $V$ ranges over irreducibles in $\widehat{G}$. Fix an irreducible $W$, and consider an intertwiner $k[G]\to W$. Such a map is determined by where $1$ is sent, and conversely $1$ may be sent to any element of $W$. Hence $\dim\hom_{k[G]}(k[G],W)=\dim W$.
-On the other hand, using distributivity of $\hom$,
$$\hom_{k[G]}\left(\bigoplus_{V\in\widehat{G}} V^{\oplus m(V)},W\right)\cong\bigoplus_{V\in\widehat{G}}\hom_{k[G]}(V,W)^{\oplus m(V)}\cong {\rm End}_{k[G]}(W)^{\oplus m(W)}$$
-(since $\hom_{k[G]}(V,W)=0$ if $V\not\cong W$), which has dimension $m(W)\dim{\rm End}_{k[G]}(W)$.
-Equating $\dim W=m(W)\dim{\rm End}_{k[G]}(W)$ gives us the multiplicities $m(W)$. Hence
$$k[G]\cong\bigoplus_{V\in\widehat{G}} V^{\oplus (\dim V)/(\dim{\rm End}_{\large k[G]}(V))}\implies |G|=\sum_{V\in\widehat{G}}\frac{(\dim V)^2}{\dim{\rm End}_{k[G]}(V)}.$$
-
-If $k$ is algebraically closed, $\dim V$ divides $|G|$ for each $V\in\widehat{G}$. This fails in general. However there is a weaker version which still holds: $\dim V$ divides $|G|\varphi(\exp G)$ for each $V$. Again, if $k$ is algebraically closed the number of irreducible representations equals the number of conjugacy classes (there is no generic, canonical bijection - instead they are "dual"). This fails if $k$ isn't algebraically closed, or if $|G|$ isn't invertible. And again a weaker version holds: the number of irreducible representations equals the number of $K$-conjugacy classes of $K$-regular elements.
-These facts and more should be discussed at the GroupProps Wiki.<|endoftext|>
-TITLE: Number of automorphisms of a direct product of two cyclic $p$-groups
-QUESTION [16 upvotes]: Suppose I have $G = Z_{p^m} \times Z_{p^n}$ for $m, n$ distinct natural numbers and $p$ a prime. Is there a combinatorial way to determine the number of automorphisms of $G$?
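-For small parameters one can sanity-check any proposed count by brute force; here is a minimal sketch in Python (the choices of $p$, $m$, $n$ are arbitrary, and the expected outputs just agree with the closed-form count in the answer below):
-
-from itertools import product
-
-def count_automorphisms(p, m, n):
-    # brute-force count of automorphisms of Z_{p^m} x Z_{p^n}
-    M, N = p**m, p**n
-    elements = list(product(range(M), range(N)))
-    count = 0
-    # an endomorphism is determined by the images g1, g2 of the generators
-    # (1,0) and (0,1); well-definedness needs p^m * g1 = 0 and p^n * g2 = 0
-    for g1, g2 in product(elements, repeat=2):
-        if (M * g1[1]) % N or (N * g2[0]) % M:
-            continue
-        image = {((a * g1[0] + b * g2[0]) % M, (a * g1[1] + b * g2[1]) % N)
-                 for a, b in elements}
-        if len(image) == M * N:   # surjective, hence bijective: an automorphism
-            count += 1
-    return count
-
-print(count_automorphisms(2, 1, 2))   # Z_2 x Z_4: prints 8
-print(count_automorphisms(3, 1, 2))   # Z_3 x Z_9: prints 108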
-
-REPLY [8 votes]: $\def\ZZ{\mathbb{Z}}$
-It's worth pointing out that Arturo's answer 2 generalizes very nicely to all abelian $p$-groups: Consider $G = \bigoplus \ZZ/p^{\lambda_i}$; let $\mu_{\ell}$ be the number of $\lambda$'s which are equal to $\ell$; let $r$ be the number of summands of $G$ (so $r = \sum \mu_i$).
-Then $\mathrm{End}(G) \cong \bigoplus \mathrm{Hom}(\ZZ/p^{\lambda_i}, \ZZ/p^{\lambda_j}) \cong \bigoplus \ZZ/p^{\min(\lambda_i, \lambda_j)}$ as abelian groups.
-An element of $\mathrm{End}(G)$ is invertible if and only if its image in $\mathrm{End}(G/pG)$ is invertible. (Nakayama's lemma.) Now, $G/pG \cong (\ZZ/p)^r$, so its endomorphism ring is $\mathrm{Mat}_{r \times r}(\ZZ/p)$. The map $\mathrm{End}(G) \to \mathrm{End}(G/pG)$ is NOT surjective. Rather, the image is the block-upper-triangular matrices, where the sizes of the blocks are the $\mu_i$. Such a matrix is invertible if and only if the diagonal blocks are invertible. So the fraction of $\mathrm{End}(G)$ which is made up of invertible elements is
$$\prod_{\ell} \frac{|\mathrm{GL}_{\mu_{\ell}}(\ZZ/p)|}{|\mathrm{Mat}_{\mu_{\ell} \times \mu_{\ell}}(\ZZ/p)|} = \prod_{\ell} \frac{(p^{\mu_\ell}-1)(p^{\mu_\ell}-p) \cdots (p^{\mu_\ell} - p^{\mu_\ell -1})}{p^{\mu_{\ell}^2}}$$
$$=\prod_{\ell} \left( 1- p^{-1} \right) \left( 1-p^{-2} \right) \cdots \left( 1-p^{-\mu_{\ell}} \right).$$
-Putting it all together, $|\mathrm{Aut}(G)|$ is found by multiplying the above formula by $|\mathrm{End}(G)|$ (computed in the second paragraph), and we get
$$|\mathrm{Aut}(G)| = p^{\sum_{i,j} \min(\lambda_i, \lambda_j)} \prod_{\ell} \left( 1- p^{-1} \right) \left( 1-p^{-2} \right) \cdots \left( 1-p^{-\mu_{\ell}} \right).$$<|endoftext|>
-TITLE: What does "X% faster" mean?
-QUESTION [25 upvotes]: I was reading something today that was talking in terms of 10%, 100% and 1000% faster. I assumed that 10% faster means it takes 10% less time (60 seconds down to 54 seconds).
-If that is correct, wouldn't 100% faster mean 0 time and 1000% mean traveling back in time?
-
-REPLY [3 votes]: I second Peláez's answer. In addition, I want to explain this mistake of yours: “I assumed that 10% faster means it takes 10% less time (60 seconds down to 54 seconds).” 10% normalized = 0.1, oldTime = 60, and the correct result is not 54:
-$newTime = oldTime/(1+0.1) = 54.54545454545454\dots$
-$\neq 54 = oldTime\cdot(1-0.1)$.
-If $X\to 0$, then the difference between the formulas $\to 0$, so in calculations $\cdot(1-X)$ is often used.
-But the following statement is correct and precise: 10% faster means that something moves 10% further.<|endoftext|>
-TITLE: Prove that the limit of $\sin n$ as $n \rightarrow \infty$ does not exist
-QUESTION [9 upvotes]: Using only the definition of the limit of a sequence, how can we prove that the sequence $\{a_n\}$, where $a_n = \sin n$, does not have a limit as $n$ tends to infinity?
-Thanks!
-
-REPLY [3 votes]: In any interval of the form $[k\pi +\frac{\pi}{3},k\pi +\frac{2\pi}{3}]$, where $k$ is any natural number, there is at least one natural number $n_{k}$. The reason is that any such interval has length $\pi/3$, which is greater than 1. Since those intervals are mutually disjoint, the sequence $\{n_{k}\}$ is a sub-sequence of the sequence $\{n\}$ and, obviously, $|\sin n_k|\geq \frac{\sqrt{3}}{2}$. In a similar way, considering the intervals of the form $[k\pi -\frac{\pi}{6},k\pi +\frac{\pi}{6}]$, we can construct another sub-sequence $\{m_k\}$ of the sequence $\{n\}$ such that $|\sin m_k|\leq \frac{1}{2}$.
Assume that $\lim_{n\to \infty}\sin{n}$ exists and is the number $l$. Using both sub-sequences defined above, we obtain that $|l|\geq \frac{\sqrt{3}}{2}$ and $|l|\leq \frac{1}{2}$, and this is a contradiction.<|endoftext|>
-TITLE: Relative density of primes under extension
-QUESTION [9 upvotes]: Let $\mathbb{P}_{\mathbb{C}}$ be the set of Gaussian primes and $\mathbb{P}_{\mathbb{N}}$ the set of primes in $\mathbb{N}$.
-Let $\pi_{\mathbf{C}}(\sqrt{n})$ be the number of Gaussian primes with norm $\leq \sqrt{n}$ and $\pi_{\mathbf{N}}(n)$ be, as usual, the number of primes $\leq n$ in $\mathbb{N}$. Recall that norm($x+iy$)=N($x+iy$)=$x^2 +y^2$; hence, my taking of a square root above.
-I am interested to know what the order of magnitude is for $$\frac{\pi_{\mathbf{N}}(n)}{\pi_{\mathbf{C}}(\sqrt{n})}$$ i.e. has the extension of the definition of primes increased/decreased the relative density of primes with respect to their set of definition? A rather quixotic question could be "Is there a general asymptotic for the number of primes in an arbitrary infinite field with the definition of being a prime as usual?"
-Fact:
-
-Prime numbers of the form $4n + 3$ are also Gaussian primes.
-Gauss's circle problem, which asks for the number of Gaussian integers with norm less than a given value, is presently unresolved. I think this is tangentially related to the asymptotic I am looking for.
-
-REPLY [8 votes]: I'm going to answer a different question, when $\sqrt{n}$ is replaced by $n$. The answer is that it goes to one!
-We can see this by noting that rational primes are evenly split (in terms of asymptotic density) between $4n+1$ and $4n+3$; this is a slight strengthening of the usual Dirichlet density theorem. Primes of the former type split into two primes in the Gaussian integers, while primes of the latter remain prime. The norms of the former are the same as the primes $4n+1$ themselves; the norms of the latter are $(4n+3)^2$.
-When you take asymptotics by considering Gaussian primes whose norm is less than $N$ for $N$ large, only the former case matters ($(4n+3)^2 \gg 4n+3$ for $n$ large), and since there are roughly $N/(2 \log N)$ rational primes of the form $4n+1$ in this range, each of which splits into two distinct Gaussian primes, we get that the number of Gaussian primes of norm less than $N$ is roughly $N/\log N$.
-In fact, though, this is true for general number fields. This is an extension of the usual prime number theorem, and follows by essentially similar arguments: the point is that one can define the Dedekind zeta function, which has the analogous properties of the Riemann zeta function, and following the same proof as for the prime number theorem, one can show that in a number field, the number of prime ideals of norm at most $N$ is asymptotically $N/\log N$. (Note that the primes that completely split over $\mathbb{Q}$ are the only ones that count asymptotically in the zeta-function; this is the generalization of the statement I made about the primes $4n+3$ not contributing much.)
-
-REPLY [6 votes]: I don't think you're asking the question you wanted to ask. First, if you're using $\sqrt{n}$ then it seems more natural to define the norm as the square root of what you've defined it to be. Second, either way the contribution from primes congruent to $3 \bmod 4$ is negligible, so you're basically only counting the contributions from the primes congruent to $1 \bmod 4$.
-
-REPLY [6 votes]: I don't think you're asking the question you wanted to ask. First, if you're using $\sqrt{n}$ then it seems more natural to define the norm as the square root of the quantity you've defined. Second, either way the contribution from primes congruent to $3 \bmod 4$ is negligible, so you're basically only counting the contributions from the primes congruent to $1 \bmod 4$.
-What you should be asking for is the relative density of primes congruent to $1 \bmod 4$ and primes congruent to $3 \bmod 4$, and it is known that both of these are $\frac{1}{2}$ by Dirichlet's theorem.
-A very general statement which might answer your follow-up question (in what sense is this quixotic? This is an extremely natural question to ask) is the Chebotarev density theorem.<|endoftext|>
-TITLE: What might I use to show that an entire function with positive real part is constant?
-QUESTION [9 upvotes]: So the question asks me to prove that an entire function with positive real part is constant, and I was thinking that this might somehow be related to showing an entire bounded function is constant (Liouville's theorem), but are there any other theorems that might help me prove this fact?

-REPLY [6 votes]: Well, can't we just say that, since $-f(z)$ is entire, $e^{-f(z)}$ is also entire, and if we write
-$f(z) = u(z) + iv(z), \tag{1}$
-where $u(z)$, $v(z)$ are the (harmonic) real and imaginary parts of $f(z)$ (so that $u(z) = Re \; f(z)$), then
-$\vert e^{-f(z)} \vert = \vert e^{-u(z) - iv(z)} \vert = \vert e^{-u(z)} \vert \vert e^{-iv(z)} \vert = e^{-u(z)}, \tag{2}$
-since
-$e^{-u(z)} > 0 \tag{3}$
-and
-$\vert e^{-iv(z)} \vert = \vert \cos (-v(z)) + i\sin (-v(z)) \vert = 1; \tag{4}$
-but $-u(z) < 0$ by hypothesis; thus $e^{-u(z)} < 1$, whence $e^{-f(z)}$ is a bounded entire function, hence constant; hence $f(z)$ must itself be a constant. QED.
-Can't we just say that? I think we can!<|endoftext|>
-TITLE: Software for drawing and analyzing a graph?
-QUESTION [9 upvotes]: I would like to know a good program for drawing graphs and analyzing them (finding Eulerian circuits, Hamilton cycles, etc.). I would also like to export the drawing to Word.

-REPLY [11 votes]: Why don't you try Sage -- it's free and should do as good a job as anything else. You just type in
-G = Graph(M)
-
-for $M$ an adjacency matrix, to create your graph; then you can do
-G.eulerian_circuit()
-G.hamiltonian_cycle()
-
-to find these things, and
-plot(G).save('graph.png')
-
-saves your graph as a .png file, which shouldn't be too hard to insert into Word.<|endoftext|>
-TITLE: Show that if a closed set $C$ with nonempty interior has a supporting hyperplane at every point of its boundary, then it's convex
-QUESTION [11 upvotes]: Exercise 2.27 in Boyd and Vandenberghe:
-Suppose the set C is closed, has nonempty interior, and has a supporting hyperplane at every point in its boundary. Show that C is convex.
-Seems to me one approach is to prove that the intersection of all the supporting halfspaces is exactly C. Clearly this intersection contains C. Geometrically the other direction seems obvious, but any hint how to argue it rigorously?
-Thanks!

-REPLY [5 votes]: Consider a hypothetical line segment that starts and ends at points p, q inside C, but temporarily passes outside C somewhere in the middle. To exit the set, the line segment must pass through the boundary at some point b, and the supporting hyperplane there separates p from q, thus a contradiction.<|endoftext|>
-TITLE: How to show $\sum_{n=-\infty}^\infty J_n J_{n+m} = \delta(m)$?
-QUESTION [10 upvotes]: The following is an identity concerning the Bessel functions of the first kind $J_n(x)$ for integers $n$ and $m$:
-$$\sum_{n=-\infty}^\infty J_n(x) J_{n+m}(x) = \delta(m)$$
-where $\delta(m)$ is the Kronecker delta (equal to $1$ when $m = 0$ and to $0$ otherwise).
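-Numerically the identity is easy to check; for instance (a quick sketch, assuming scipy, with the sum truncated at $|n| \le 60$, which is safe here since $J_n(x)$ decays rapidly in $n$):
-
-from scipy.special import jv
-x, m = 1.7, 3
-print(sum(jv(n, x) * jv(n + m, x) for n in range(-60, 61)))  # ~0, and ~1 when m = 0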
-This can be derived from the Jacobi-Anger identity, but is there a simpler way to derive it, for instance using well-known recurrence relations of the Bessel functions?

-REPLY [3 votes]: Here's a hand-wavey argument using the generating function $e^{\frac{x}{2}(t-1/t)} = \sum_{m=-\infty}^\infty t^m J_m(x)$:
-$$\begin{eqnarray}
-e^{\frac{x}{2}(t-1/t)+\frac{x}{2}(u-1/u)} &=& e^{\frac{x}{2}(t-1/t)}e^{\frac{x}{2}(u-1/u)}\\
-&=& \left(\sum_{m=-\infty}^\infty t^m J_m(x)\right)\left(\sum_{n=-\infty}^\infty u^n J_n(x)\right)\\
-&=&\sum_{m=-\infty}^\infty\;\sum_{n=-\infty}^\infty t^{m}u^{n}J_m(x)J_n(x)\\
-&=&\sum_{m=-\infty}^\infty\;\sum_{k=-\infty}^\infty t^{m}u^{m+k}J_m(x)J_{m+k}(x)\\
-&=&\sum_{k=-\infty}^\infty u^k\sum_{m=-\infty}^\infty (tu)^{m}J_m(x)J_{m+k}(x)\;.
-\end{eqnarray}$$
-Now let $t=1/u$ and we get
-$$1 = \sum_{k=-\infty}^\infty u^k \left(\sum_{m=-\infty}^\infty J_m(x)J_{m+k}(x)\right)\;,$$
-which gives the desired result (since the parenthesized term is independent of $u$).
-[Thanks to Joriki for cleaning up the LaTeX.]<|endoftext|>
-TITLE: Does every ideal class contain a prime ideal that splits?
-QUESTION [11 upvotes]: Suppose you have a number field $L$, and a non-zero ideal $I$ of the ring of integers $O$ of $L$.
-Question part A: Is there a prime ideal $\mathcal{P} \subseteq O$ in the ideal class of $I$ such that $p =\mathcal{P} \cap \mathbb{Z}$ splits completely in $O$?
-If the extension $L/\mathbb{Q}$ is Galois I think one can show that the answer is yes. In fact I think it is possible to show that for every $I$ there are infinitely many such $\mathcal{P}$'s. The necessary tools for this are class field theory and Chebotarev's density theorem. On the other hand I'm not asking for infinitely many primes, only for one. So to be concrete:
-Question part B: If the answer to A is yes, is it possible to give a proof that avoids showing that there are infinitely many $\mathcal{P}$'s?
-Thank you.

-REPLY [6 votes]: The answer to both A and B is no. Take the non-Galois field $L = \mathbb{Q} (\sqrt[4]{-5})$, and let $N = L(i)$, the Galois closure of $L$. Computing with PARI/GP tells me $L$ has cyclic class group of order $4$ and $N$ has class group of order $2$. Thus, the map $\mathrm{Nm} \colon \mathrm{Cl}_N \rightarrow \mathrm{Cl}_L$ induced by the relative norm of ideals is not surjective.
-Assume, to reach a contradiction, that one of the classes of order $4$ in $\mathrm{Cl}_L$ contains a prime $\mathfrak{p}$ lying over a prime $p \in \mathbb{Z}$ that is completely split in $L$. It is well-known that this happens if and only if $p$ is completely split in $N$. Then $\mathfrak{p}$ is the norm of an ideal $\mathfrak{P}$ in $N$. But then the class of $\mathfrak{p}$ is the norm of the class of $\mathfrak{P}$; since the class of $\mathfrak{p}$ generates the cyclic group $\mathrm{Cl}_L$, this contradicts the fact that the norm map on classes is not surjective.
-
-A is true if we make the additional assumption that the normal closure of $L$ is linearly disjoint over $L$ from the Hilbert class field of $L$.
-To see this, let $N$ be the normal closure of $L$ over $\mathbb{Q}$, let $H_N$ be the Hilbert class field of $N$, and let $H_L$ be the Hilbert class field of $L$. We must show that for each $\sigma \in G(H_L/L)$, there exists some prime ideal of $L$, completely split over $\mathbb{Q}$ and whose Frobenius for $H_L/L$ is $\sigma$.
-We need some facts:
-
-$H_N$ is Galois over $\mathbb{Q}$
-$N H_L \subset H_N$
-
-For 1, consider an embedding $\tau \colon H_N \hookrightarrow \overline{\mathbb{Q}}$.
-We have $\tau (N) = N$, so $\tau (H_N)$ is an abelian unramified extension of $N$, hence is contained in $H_N$. It follows that $\tau(H_N) = H_N$ and 1 follows.
-For 2, consider a prime ideal $\tilde{\mathfrak{P}}$ in $N H_L$, an extension that is Galois over $L$. Because $H_L/L$ is unramified, the inertia group for $\tilde{\mathfrak{P}}$ over $L$ is contained in $G(N H_L /H_L)$. But the definition of inertia groups also shows that the inertia group for $\tilde{\mathfrak{P}}$ over $N$ is contained in that for $\tilde{\mathfrak{P}}$ over $L$. Thus, the inertia group for $\tilde{\mathfrak{P}}$ over $N$ consists only of automorphisms that fix $N$ and $H_L$, so it is trivial and $N H_L/N$ is unramified. Since $G(N H_L/N)$ injects into $G(H_L/L)$ by restriction, $N H_L/N$ is abelian and 2 follows.
-Now let us return to considering our $\sigma$ in $G(H_L/L)$. By 1 and the assumption that $N$ and $H_L$ are linearly disjoint over $L$, we can lift $\sigma$ to an automorphism $\tilde{\sigma}$ in $G(H_N/N)$. By Chebotarev's density theorem, we can find some prime ideal $\mathfrak{P}$ in $H_N$, unramified over $\mathbb{Q}$ and whose Frobenius over $\mathbb{Q}$ is $\tilde{\sigma}$. The decomposition group of $\mathfrak{P}$ over $\mathbb{Q}$ is thus contained in $G(H_N/N)$, so the prime $p = \mathfrak{P} \cap \mathbb{Z}$ is completely split in $N$, and in particular in $L$.
-Let $\mathfrak{p} = \mathfrak{P} \cap N$. By the formalism of the Artin symbol, the Frobenius for $\mathfrak{P}$ over $N$ is $\tilde{\sigma}$, and its restriction to $H_L$, namely $\sigma$, is itself the image under the Artin map of the ideal $N_{N/L} \mathfrak{p}$. But again, because $p$ is completely split up to $L$, it is also completely split up to $N$ and so $N_{N/L} \mathfrak{p}$ is a prime ideal of $L$ with degree $1$ and Frobenius equal to $\sigma$ as desired.<|endoftext|>
-TITLE: Understanding a proof by descent [Fibonacci's Lost Theorem]
-QUESTION [10 upvotes]: I am trying to understand the proof in Carmichael's book Diophantine Analysis, but I have got stuck at one point in the proof where $w_1$ and $w_2$ are introduced.
-The theorem it is proving is that the system of diophantine equations:
-
-$$x^2 + y^2 = z^2$$
-$$y^2 + z^2 = t^2$$
-
-cannot simultaneously be satisfied.
-
-The system is easily seen to be algebraically equivalent to
-
-$$t^2 + x^2 = 2z^2$$
-$$t^2 - x^2 = 2y^2$$
-
-and this is what will be worked on. We are just considering the case where the numbers are pairwise relatively prime. That implies that $t,x$ are both odd (they cannot both be even). Furthermore $t > x$, so define $t = x + 2 \alpha$.
-Clearly the first equation $(x + 2\alpha)^2 + x^2 = 2 z^2$ is equivalent to $(x + \alpha)^2 + \alpha^2 = z^2$, so by the characterization of primitive Pythagorean triples there exist relatively prime $m,n$ such that $$\{x+\alpha,\alpha\} = \{2mn,m^2-n^2\}.$$
-Now the second equation $t^2 - x^2 = 4 \alpha (x + \alpha) = 8 m n (m^2 - n^2) = 2 y^2$ tells us that $y^2 = 2^2 m n (m^2 - n^2)$; by coprimality and unique factorization it follows that each of those factors is a square, so define $u^2 = m$, $v^2 = n$ and $w^2 = m^2 - n^2 = (u^2 - v^2)(u^2 + v^2)$.
-
-It is now said that from the previous equation either
-
-$u^2 + v^2 = 2 {w_1}^2$, $u^2 - v^2 = 2 {w_2}^2$
-
-or
-
-$u^2 + v^2 = w_1^2$, $u^2 - v^2 = w_2^2$
-
-but $w_1$ and $w_2$ have not been defined and I cannot figure out what they are supposed to be. Any ideas what this last part could mean?
-For completeness, if the first case occurs we have our descent, and if the second case occurs $w_1^2 + w_2^2 = 2 u^2$, $w_1^2 - w_2^2 = 2 v^2$ gives the descent, which finishes the proof.

-REPLY [14 votes]: This descent has a very beautiful presentation based upon ideas going back to Fibonacci.
-Fibonacci's Lost Theorem $ $ The area of an integral Pythagorean triangle is not a perfect square.
-Over $400$ years before Fermat's celebrated proof by infinite descent of the essentially equivalent $\rm\,FLT_4\,$ (Fermat's Last Theorem for exponent $4$), Fibonacci claimed to have a proof of this in his Liber Quadratorum (Book of Squares). But, alas, to this day, his proof has never been found. Below is my speculative reconstruction of Fibonacci's proof of this theorem, based upon similar ideas that survived in his extensive studies on squares and related topics.
-A square arithmetic progression (SAP) is an AP $\rm\ x^2,\ y^2,\ z^2\ $ with a square stepsize $\rm\, s^2,\, $ viz. $$\rm\ x^2\ \ \xrightarrow{\Large s^2}\ \ y^2\ \ \xrightarrow{\Large s^2}\ \ z^2$$
-Naturally associated with every SAP is a "half square triangle", i.e. doubling $\rm\ z^2 + x^2\ $ produces a triangle of square area $\rm\ s^2,\, $ viz.
-$\rm\ (z + x)^2 + (z - x)^2\, =\ 2\ (z^2 + x^2)\ =\ 4\ y^2\ $
-which indeed has $\ $ area $\rm\, =\ (z + x)\ (z - x)/2\ = \ (z^2 - x^2)/2\ =\ s^2\ $
-With these concepts in mind, the proof is very easy:
-If there exists a Pythagorean triangle with square area then it may be primitivized and its area remains square. Let its primitive parametrization be $\:(a,b)\:$ and let its area be $\rm\:c^2,\:$ namely
-$$\rm\ \frac{1}2\ leg_1\ leg_2\, =\ \frac{1}2\ (2\:a\:b)\ (a^2-b^2)\ =\ (a\!-\!b)\ a\ (a\!+\!b)\ b\ =\ c^2 $$
-Since $\rm\:a\:$ and $\rm\:b\:$ are coprime of opposite parity, $\rm\ a\!-\!b,\ a,\ a\!+\!b,\ b\ $ are coprime factors of a square, thus all must be squares.
-Hence $\rm\ a\!-\!b,\ a,\ a\!+\!b\ $ form a SAP; doubling its half square triangle yields a triangle with smaller square area $\rm\ b < c^2,\ $ hence descent. $\ \ $ QED
-Remark $ $ This doubling construction is ancient - already in Euclid. It may be viewed as a composition of quadratic forms $\rm\ (z^2 + x^2)\ (1^2 + 1^2)\:. $<|endoftext|>
-TITLE: Prove that $x^3 \equiv a \pmod{p}$ has a solution where $p \equiv 2 \pmod{3}$?
-QUESTION [6 upvotes]: Prove that $x^3 \equiv a \pmod{p}$ has a solution where $p \equiv 2 \pmod{3}$?
-
-How can I prove a congruence equation has a solution? I tried to link Fermat's little theorem with this problem, but I couldn't find a way to solve it.
-My attempt was:
-$$x^3 \equiv 1 \pmod{2}$$
-$$x^3 \equiv a \pmod{p}$$
-If $p \equiv 2 \pmod{3}$, I have $p = 3k + 2$ for some integer $k$. But I was stuck here :(. Any idea?
-Another question: are there infinitely many primes of the form $3k + 2$?
-A hint would be sufficient.
-Thanks,

-REPLY [3 votes]: You can verify directly, using Fermat's little theorem, that $x=a^{(2p-1)/3}$ is a solution to $x^3\equiv a\pmod p$ (and $(2p-1)/3$ is an integer since $p\equiv2\pmod 3$).
-(Coming up with this solution is not quite as easy as verifying it, to be sure. Because of the existence of a primitive root $g$ modulo $p$, proving that $x^3\equiv a\pmod p$ always has a solution is equivalent, thanks to the change of variables $a = g^c$ and $x=g^b$, to proving that $3b \equiv c \pmod{p-1}$ always has a solution. This latter congruence has the solution $b\equiv 3^{-1}c = \frac{2p-1}3 c\pmod {p-1}$. Here the case $a\equiv0\pmod p$ should be treated separately.)
-There are infinitely many primes of the form $3k+2$, that is, infinitely many primes that are congruent to $2\pmod 3$. See Dirichlet's theorem or the prime number theorem for arithmetic progressions.
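-(A quick numerical spot-check of this formula, in plain Python; the choice $p = 101$ is just an example:)
-
-p = 101                  # a prime with p = 2 (mod 3)
-e = (2 * p - 1) // 3
-for a in range(p):
-    # a^e really is a cube root of a mod p; this works for a = 0 too
-    assert pow(pow(a, e, p), 3, p) == a % p
-print("x = a^%d solves x^3 = a (mod %d) for every a" % (e, p))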
-<|endoftext|>
-TITLE: $\mathbb{Q}(\pi, i\pi)$ over $\mathbb{Q}$
-QUESTION [9 upvotes]: Is $\mathbb Q(\pi,i\pi):\mathbb Q$ a simple extension?

-REPLY [12 votes]: No. Let $x \in \mathbb Q(\pi,i\pi) = \mathbb Q(i,\pi).$ If $x$ is algebraic, then $\mathbb Q(x)$ is finite over $\mathbb Q$, hence an algebraic extension, and so does not contain the transcendental element $\pi.$ On the other hand, if $x$ is transcendental over $\mathbb Q$, then $\mathbb Q(x)$ is isomorphic to the field of rational functions in one variable over $\mathbb Q$. The algebraic closure of $\mathbb Q$ in this extension is equal to $\mathbb Q$ itself, and so this field does not contain $i$.
-Thus $\mathbb Q(\pi,i\pi)$ is not of the form $\mathbb Q(x)$ for any of its elements $x$, and so is not a simple extension of $\mathbb Q$.

-REPLY [4 votes]: Suppose there exists an $x\in\mathbb Q(\pi,i)$ such that $\mathbb Q(\pi,i)=\mathbb Q(x)$. Since $\mathbb Q(\pi,i)$ is infinite over $\mathbb Q$, $x$ must be transcendental. Lüroth's theorem, then, tells us that every subfield of $\mathbb Q(x)$ properly containing $\mathbb Q$ is itself a simple transcendental extension of $\mathbb Q$. But this then applies to $\mathbb Q(i)\subseteq\mathbb Q(x)$. Of course, this is absurd.

-REPLY [2 votes]: Let's see. Assume otherwise, thus $\mathbb Q(\pi,i\pi) = \mathbb Q(\pi,i) = \mathbb Q(x)$ for some $x$. Then $i$ is a rational function of $x$: $i = P(x)/Q(x)$ for some rational polynomials $P, Q$ with $Q(x) \neq 0$, and so $P(x)^2 + Q(x)^2 = 0$. Since $i \notin \mathbb Q$, the polynomials $P$ and $Q$ are not both constant, and $P^2 + Q^2$ is not the zero polynomial (its leading coefficient is a positive rational, being a sum of rational squares), so $x$ is a root of a nonzero rational polynomial and hence algebraic. But $\pi$ is transcendental, and since the algebraic numbers form a field, every element of $\mathbb Q(x)$ is algebraic, so $\pi \notin \mathbb Q(x)$, contradicting the assumption that $\mathbb Q(\pi,i) = \mathbb Q(x)$. So the answer is no.<|endoftext|>
-TITLE: What's the Difference Between a Vector and a Hypercomplex Number?
-QUESTION [6 upvotes]: What's the difference between a vector and a hypercomplex number? For instance a 4-vector and a quaternion. They seem to share many properties.
-Perhaps this question could be put more generally as: what's the difference between a vector space and a field?

-REPLY [3 votes]: A hypercomplex number is an element of a certain kind of distributive unital algebra over $\mathbb{R}$. An algebra over $\mathbb{R}$ is, by definition, a vector space over $\mathbb{R}$ with additional structure so that you can multiply elements together. It's not very hard to turn a given finite-dimensional vector space into an algebra of hypercomplex numbers, but the question is whether you can do so in a natural, useful way.
-Of course, there are vector spaces over any field at all, so it's also easy to come up with a vector space which cannot be turned into an algebra of hypercomplex numbers. For instance, a finite dimensional vector space over a finite field has only finitely many elements, so obviously cannot be turned into an algebra over $\mathbb{R}$.
-The interesting thing is that there are objects which are simultaneously all of these things, and in a recursive way: for example, $\mathbb{C}$ is both a field and an algebra over $\mathbb{R}$, and a fortiori a vector space over $\mathbb{R}$. But $\mathbb{R}$ is in turn both a field and an algebra over $\mathbb{Q}$... The study of fields which are algebras over other fields is called Galois theory, which also finds applications in algebraic geometry (though probably not in the way you expect!).
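-(To make the contrast concrete, here is a small sketch of my own in Python, not part of the original answer: the same $4$-tuples of reals, but with the Hamilton product as the extra algebra structure that plain $\mathbb{R}^4$ does not come with.)
-
-def quat_mul(p, q):
-    # Hamilton product of quaternions a + bi + cj + dk, stored as 4-tuples
-    a, b, c, d = p
-    e, f, g, h = q
-    return (a*e - b*f - c*g - d*h,
-            a*f + b*e + c*h - d*g,
-            a*g - b*h + c*e + d*f,
-            a*h + b*g - c*f + d*e)
-
-i, j = (0, 1, 0, 0), (0, 0, 1, 0)
-print(quat_mul(i, j), quat_mul(j, i))  # (0, 0, 0, 1) and (0, 0, 0, -1): ij = k = -ji
-
-As a vector space, $\mathbb{R}^4$ only knows addition and scalar multiplication; the product above is precisely the additional data that makes it the quaternions.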
-<|endoftext|>
-TITLE: Seeking a textbook proof of a formula for the number of set partitions whose parts induce a given integer partition
-QUESTION [9 upvotes]: Let $t \geq 1$ and $\pi$ be an integer partition of $t$. Then the number of set partitions $Q$ of $\{1,2,\ldots,t\}$ for which the multiset $\{|q|:q \in Q\}=\pi$ is given by \[\frac{t!}{\prod_{i \geq 1} \big(i!^{s_i(\pi)} s_i(\pi)!\big)},\] where $s_i(\pi)$ denotes the number of parts of size $i$ in $\pi$.
-
-Question: Is there a book that contains a proof of this?
-
-I'm looking to cite it in a paper and would prefer not to include a proof. I attempted a search in Google Books, but that didn't help too much.
-A similar result is proved in "Combinatorics: topics, techniques, algorithms" by Peter Cameron (page 212), but with "permutation" instead of "set partition" and "cycle structure" instead of "integer partition".

-REPLY [6 votes]: These are the coefficients in the expansion of power-sum symmetric functions in terms of augmented monomial symmetric functions. I believe you will find a proof in:
-Peter Doubilet. On the foundations of combinatorial theory. VII. Symmetric functions through the theory of distribution and occupancy. Studies in Appl. Math., 51:377–396, 1972.
-See also MacMahon http://name.umdl.umich.edu/ABU9009.0001.001<|endoftext|>
-TITLE: Find eigenvalues of a projection and explain what they mean
-QUESTION [7 upvotes]: Suppose B represents the matrix of orthogonal (perpendicular) projection of $\mathbb{R}^{3}$ onto the plane $x_{2} = x_{1}$. Compute the eigenvalues and eigenvectors of B and explain their geometric meaning.
-What I have come up with so far as an attempt to break this question down is, for instance: if we pick an arbitrary point in space ($\mathbb{R}^{3}$), then we must project this point onto the plane (in particular $x_{2} = x_{1}$), which I imagine on a three-dimensional set of axes $(x,y,z)$. If we choose $x_{1}$ to represent the $x$-axis, $x_{2}$ to represent the $y$-axis, and $x_{3}$ to represent the $z$-axis, then we would have a plane with an equation that looks like $y=z$ or conversely ($x_{2}=x_{1}$), which is its equivalent. Once you project this point onto the plane, I see that the segment from the point to its projection is perpendicular, with its vector coming out of the plane. My trouble is finding the coordinates of the new point that is projected onto the plane.
-Here is a skeleton sketch of what I had in math-ese.
-$\left[\begin{array}{c} ?\\ ?\\ ? \end{array} \right] = \left[\begin{array}{ccc} \Box & \Box & \Box \\ \Box & \Box & \Box \\ \Box & \Box & \Box \end{array} \right] \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]$, $~~$ where B is the matrix with empty boxes for elements.
-These question marks inside of the first matrix represent the coordinates I am trying to find. Once these are found, appropriate choices for the entries in the coefficient matrix labeled B can be made, so that when B is multiplied by the last matrix $\left(\left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right]~\right)$, we will get back the matrix with the ? marks in the entries. I believe this is known as doing a linear transformation.
-I didn't know how to include graphics, but I hope the words give enough detail to duplicate what I am saying on paper graphically. If not, please let me know what I can clarify.
-Some help would be very much appreciated.
-Thanks

-REPLY [10 votes]: Kristi, first of all, if you are projecting $\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}$ onto the $x_{1}=x_{2}$ plane, then you are projecting onto the plane $x=y$, not $y=z$ (since you defined $x=x_{1}$, $y=x_{2}$, and $z=x_{3}$).
-Now, as you are trying to find the coordinates of the projection vector, imagine the geometric meaning -- $z$, the 'height' of the vector, will not ever change, as it is not relevant to the equation, but $x$ and $y$ will, depending on where the vector lies. When we are trying to find a projection onto an $n$-dimensional subspace $W$, we can use the formula ${proj_{W}}{\vec{x}}=(\vec{u_1}\cdot \vec{x})\vec{u_1}+(\vec{u_2}\cdot \vec{x})\vec{u_2}+\cdots +(\vec{u_n}\cdot \vec{x})\vec{u_n}$, where $\vec{u_1}, \vec{u_2},\dots, \vec{u_n}$ form an orthonormal basis of the subspace $W$. Here, $W$ is defined as $x=y$, meaning it can be spanned by the vectors $\vec{v_1} = \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ and $\vec{v_2} = \begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix}$, for example. To find an orthonormal basis of our subspace (meaning that all vectors in it will be mutually orthogonal/perpendicular, as well as of length one), let's use the Gram-Schmidt process. An orthonormalized version of the vector $\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ would be $\vec{u_1} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$, as that will make it of length one. Now, by Gram-Schmidt, $\vec{u_2}=\vec{v_2}-\frac{\vec{v_2}\cdot\vec{u_1}}{\vec{u_1}\cdot\vec{u_1}} \vec{u_1}$, since we are basically subtracting the $\vec{u_1}$ component from our second vector, in order to get a vector perpendicular to $\vec{u_1}$ as a result. Calculations result in the following: $\vec{u_2}= \begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix} - \frac{\begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix} \cdot \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}}{\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} \cdot \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}}\, \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} = \begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix} - \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 2 \end{bmatrix}$. Normalizing the resulting vector, we get $\vec{u_2} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$.
-Now that we have an orthonormal basis $\vec{u_1} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ and $\vec{u_2} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$, we can calculate the projection.
-So, to find the projection of your vector $\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}$ we use our orthonormal basis and the projection formula: ${proj_{W}}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=(\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} \cdot \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix})(\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix})+(\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} \cdot \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix})(\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix})$. After arithmetic, this results in ${proj_{W}}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=(\frac{x_1+x_2}{\sqrt{2}})(\frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix})+\begin{bmatrix} 0\\ 0\\ x_3 \end{bmatrix}=\begin{bmatrix} \frac{x_1+x_2}{2}\\ \frac{x_1+x_2}{2}\\ x_3 \end{bmatrix}$.
-So, now you have your coordinates.
-To find the eigenvalues, think of the nature of the transformation -- the projection will not do anything to a vector if it is within the plane onto which you are projecting, and it will collapse the vector to zero if it is perpendicular to the plane. So, your eigenvalues are 1 and 0. A basis of the eigenspace $\xi_{1}$ for the eigenvalue 1 will have two vectors, as the plane is spanned by two of them. You could choose them to be your original $\vec{v_1}$ and $\vec{v_2}$, which were $\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1\\ 1\\ 2 \end{bmatrix}$. To find a basis of the eigenspace $\xi_{0}$ for the eigenvalue 0, you need to find a vector perpendicular to this plane. You could use a property of the cross product, which states that $\vec{v_1} \times \vec{v_2}$ produces a vector $\vec{v_3}$ perpendicular to both. Crossing the aforementioned vectors, you get $\vec{v_3}=\begin{bmatrix} 2\\ -2\\ 0 \end{bmatrix}$.
-Now that you know all of this, finding the matrix B is very easy by inspection; consider $\begin{bmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
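-(A quick numerical check of the matrix and the eigenvalues above; a sketch assuming numpy, added for illustration:)
-
-import numpy as np
-B = np.array([[0.5, 0.5, 0.0],
-              [0.5, 0.5, 0.0],
-              [0.0, 0.0, 1.0]])
-print(np.linalg.eigvals(B))            # 0 and 1 (1 with multiplicity 2)
-print(B @ np.array([2.0, -2.0, 0.0]))  # the normal vector v3 is sent to zero
-print(B @ np.array([1.0, 1.0, 0.0]))   # a vector in the plane is left fixed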
-<|endoftext|>
-TITLE: a question on summation expansion
-QUESTION [5 upvotes]: Q1.
-$$\sum\limits_{j=0}^{N-1} \sum\limits_{i=0}^{N-1} \sum\limits_{y=0}^{j} \sum\limits_{x=0}^{i} a(x)a(y)b(|x-y|) $$
-Given that $b(0) = 0$, find the coefficient of $b(k)$, $0\le k \le N-1$, in the above expansion.
-Q2. $$\sum\limits_{j=0}^{N-1} \sum\limits_{i=0}^{N-1} \sum\limits_{x=0}^{\min(i,j)} a(x) .$$
-What is the coefficient of $a(x)$ in the above expansion?

-REPLY [3 votes]: Let us use a general approach. A first principle to evaluate multiple sums is:
-
-When dealing with multiple sums, see what happens when exchanging the order of the summations.
-
-Re Q2, this means writing the sum $S$ as a sum over $x$ of $a(x)$ times a sum over $i$ and $j$.
-Which brings us to a second general principle:
-
-Sums over a fixed set of integers are easier.
-
-And to the most useful tool to apply this principle:
-
-Write the spans of sums as indicator functions.
-
-In this context, some people use the Iverson bracket $[\mathfrak A]$ to mean $1$ if assertion $\mathfrak A$ is true and $0$ otherwise, and I will use this convention. For instance, still Re Q2,
-$$S=\sum_{j=0}^{N-1}\sum_{i=0}^{N-1}\sum_{x=0}^{N-1}[x\le \min\{i,j\}]\cdot a(x).$$
-But $[x\le \min\{i,j\}]=[x\le i]\cdot[x\le j]$, hence exchanging the order of the summations yields
-$$S=\sum_{x=0}^{N-1}a(x)\sum_{i=0}^{N-1}[x\le i]\sum_{j=0}^{N-1}[x\le j].$$
-The sum over $i$ and the sum over $j$ coincide and their common value is
-$$\sum_{j=0}^{N-1}[x\le j]=\sum_{j=x}^{N-1}1=N-x,$$
-hence
-$$S=\sum_{x=0}^{N-1}(N-x)^2a(x).$$
-This solves Q2.
-The solution for Q1 is somewhat more involved, but I suggest that you now see how far you can go with these principles to try to solve it.
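-(A brute-force check of the Q2 answer, in plain Python with random data; my addition, not part of the original reply:)
-
-import random
-N = 7
-a = [random.randint(-5, 5) for _ in range(N)]
-S = sum(a[x] for j in range(N) for i in range(N) for x in range(min(i, j) + 1))
-assert S == sum((N - x)**2 * a[x] for x in range(N))  # matches the closed form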
-<|endoftext|>
-TITLE: Three-variable system of simultaneous equations
-QUESTION [6 upvotes]: $x + y + z = 4$
- $x^2 + y^2 + z^2 = 4$
- $x^3 + y^3 + z^3 = 4$
-Any ideas on how to solve for $(x,y,z)$ satisfying the three simultaneous equations, provided there can be both real and complex solutions?

-REPLY [8 votes]: For a fixed number of variables and a fixed power $n$ the sum of powers $$x^n + y^n + z^n + ... + w^n$$ is a symmetric polynomial.
-It is expressible in terms of elementary symmetric polynomials. The elementary symmetric polynomials for three variables are
-
-$e_1 = x + y + z$
-$e_2 = x y + x z + y z$
-$e_3 = x y z$
-
-and your polynomials expressed in terms of them are
-
-$x + y + z = e_1$
-$x^2 + y^2 + z^2 = e_1^2 - 2 e_2$
-$x^3 + y^3 + z^3 = e_1^3 - 3(e_1 e_2 - e_3)$
-
-Now we can find the values of $e_1,e_2,e_3$ evaluated at the given $x,y,z$: $e_1 = 4$, $e_2 = 6$, $e_3 = 4$.
-Now consider the polynomial $(t - x)(t - y)(t - z) = t^3 - e_1 t^2 + e_2 t - e_3 = t^3 - 4 t^2 + 6 t - 4$.
-It has the solutions $t = 2, 1 + i$ and $1 - i$.
-So now we can check that these are correct:
-
-$(2) + (1+i) + (1-i) = 4$
-$(2)^2 + (1+i)^2 + (1-i)^2 = 4 + 2i - 2i = 4$
-$(2)^3 + (1+i)^3 + (1-i)^3 = 8 + (-2 + 2i) + (-2 - 2i) = 4$<|endoftext|>
-TITLE: Isometries of the sphere $\mathbb{S}^{n}$
-QUESTION [21 upvotes]: Got this as homework and I don't know how to tackle this. Help please!
-Prove that the isometries of $\mathbb{S}^{n} \subset \mathbb{R}^{n+1}$, with the induced metric, are restrictions to $\mathbb{S}^{n}$ of the linear orthogonal transformations.

-REPLY [9 votes]: Trivially any orthogonal map restricts to an isometry of the $n$-sphere. Now suppose you have $f: \mathbb{S}^n \rightarrow \mathbb{S}^n$ an isometry. If it is going to be the restriction of an orthogonal map $F: \mathbb{R}^{n+1} \rightarrow \mathbb{R}^{n+1}$, a very reasonable candidate for $F$ is the map:
-\begin{equation}
-F(x)=f(\frac{x}{|x|})\cdot |x|, x\neq 0; F(0)=0
-\end{equation}
-Let's prove $F$ is an orthogonal map.
-The distance between any two distinct points $x$, $y$ in $\mathbb{S}^n$ is just the length of the shortest of the two arcs of great circle joining them (this does not need any connection theory or geodesics; just observe that this great circle is the intersection of $\mathbb{S}^n$ and the plane $\pi$ containing $0_{\mathbb{R^{n+1}}}$, $x$ and $y$, so for any other curve joining $x$ and $y$ in $\mathbb{S}^n$ not contained in $\pi$, its tangent vector will have a nonzero component normal to $\pi$, hence you can keep minimizing its length without moving the endpoints). Since $f$ is an isometry, it preserves the length of curves, so it preserves the distance in $\mathbb{S}^n$. What happens is that the length of this arc of great circle is the (smaller) angle in radians formed by $x$, $y$. Now (make a drawing if needed) take any $p$, $q$ in ${\mathbb{R^{n+1}}}$: since $F$ preserves the norm, the triangles formed by $0_{\mathbb{R^{n+1}}}$, $p$, $q$ and $0_{\mathbb{R^{n+1}}}$, $F(p)$, $F(q)$ are congruent (side-angle-side, and this has become plane geometry!). Hence, $F$ preserves the distance in ${\mathbb{R^{n+1}}}$ and
-\begin{equation}
-\langle F(p),F(q)\rangle=\frac{1}{2}(|F(p)|^2+|F(q)|^2-|F(p)-F(q)|^2)=
-\frac{1}{2}(|p|^2+|q|^2-|p-q|^2)=\langle p,q\rangle.
-\end{equation}
-So, $F$ also preserves the inner product in $\mathbb{R^{n+1}}$. Observe now that since $\mathbb{S}^n\cup\{0\}\subset \text{Im }F$ and $F(\lambda x)=\lambda F(x)$ for $\lambda > 0$, $F$ is onto. Thus, for all $p,q,r\in\mathbb{R}^{n+1}$ and $\alpha, \beta\in\mathbb{R}$
-\begin{equation}
-\langle F(\alpha p + \beta q)- \alpha F(p)-\beta F(q),F(r)\rangle = \langle \alpha p + \beta q,r\rangle - \alpha \langle p, r\rangle -\beta \langle q,r\rangle=0.
-\end{equation}
-Then, $F$ is a linear orthogonal map.<|endoftext|>
-TITLE: Why not define 'limits' to include isolated points?
-QUESTION [16 upvotes]: If I understand correctly, most definitions of 'limits' require that the function either a) be defined in an open neighborhood around the relevant point or b) more permissively, that the relevant point is a limit point; the definition of 'continuity' is then given a special case so that functions are continuous at isolated points. Why not extend the notion of 'limit' so that the limit of a function at an isolated point is just whatever the function's value is there? Is there some good reason not to?

-REPLY [10 votes]: Okay, now that I've checked Rudin to see that this really is the definition he gives of a limit of a function, I have an answer, but I don't like it. The motivation behind the definition of $\lim_{x \to a} f(x)$ is that you want to understand what $f$ is doing in a neighborhood of $a$ in order to compare it to what is happening at $a$. If $a$ is an isolated point, there's nothing to compare.<|endoftext|>
-TITLE: Solving in positive integers an equation containing exponentials
-QUESTION [6 upvotes]: I stumbled upon this number theory problem while I was solving another problem. Here is the equation: $$3^kn + 3^{k-1} + 2^m(3^{k-1} + 2h) = 2^{m+l}n$$ where $k \geq 3, h,l,m,n\in\mathbb{N}$, $n$ is odd and $n$ is not a multiple of $3$. My impression is that it does not have a solution. However, I have not progressed on the problem any more than that. Could you please help?
-Thanks.

-REPLY [2 votes]: I think you can use infinite descent to show that the equation does not have a solution. We know that $n = 3j+1$ for some even integer $j$, or $n = 3j+2$ for some odd integer $j$ (since $n$ is odd and not a multiple of $3$). If $(k,h,l,m,n)$ is a solution, then $2^{m+1}$ divides the LHS and $n$ divides the LHS.<|endoftext|>
-TITLE: Eigenvalues of the differentiation operator
-QUESTION [6 upvotes]: I have a linear operator $T_1$ which acts on the vector space of polynomials in this way:
-$$T_1(p(x))=p'(x).$$
-How can I find its eigenvalues and how can I know whether it is diagonalizable or not?

-REPLY [11 votes]: In the finite dimensional case, finding the eigenvalues can be done by considering the matrix of the operator, computing the characteristic polynomial, and finding the roots. This is not possible in the infinite dimensional case (as occurs in the case of the vector space of all polynomials with coefficients in $F$), because there is no matrix for the operator and no characteristic polynomial.
-Instead, you have to go back to the definitions. An eigenvalue of $T$ is a scalar $\lambda$ for which there exists a nonzero vector $\mathbf{x}$ with $T(\mathbf{x}) = \lambda\mathbf{x}$.
-Suppose $\mathbf{x}$ is an eigenvector. What can we say about $\mathbf{x}$ and $\lambda$? As usual in these kinds of cases, we write down what everything means, and see what this entails/implies. Often, we can gain enough information to figure out what $\mathbf{x}$ and $\lambda$ have to be.
-Let's write $\mathbf{x} = a_nx^n + \cdots + a_0$, with $a_n\neq 0$ (we know at least one coefficient has to be nonzero for $\mathbf{x}$ to be nonzero, a precondition for being an eigenvector; and so we may as well write it going up to just the degree we need; so we are going to write $x^2+0x+1$, but not $0x^4+0x^3+x^2+0x+1$, in order to make our life easier).
-Then the equation $T(\mathbf{x}) = \lambda\mathbf{x}$ becomes
-$$ na_nx^{n-1}+\cdots + a_1 = \lambda a_nx^n + \lambda a_{n-1}x^{n-1}+\cdots + \lambda a_0.$$
-This gives you a system of equations
-\begin{align*}
-\lambda a_n &= 0\\
-\lambda a_{n-1} - na_n &= 0\\
-\lambda a_{n-2} - (n-1)a_{n-1} &= 0\\
-&\vdots\\
-\lambda a_1 - 2a_2 &= 0\\
-\lambda a_0 - a_1 &=0.
-\end{align*}
-Since we are assuming $a_n\neq 0$, it should be an easy matter to determine all eigenvalues, and all corresponding eigenvectors from this.
-Now, a linear transformation is diagonalizable if and only if there is a basis for the vector space that consists entirely of eigenvectors. Since you know by now what all the eigenvectors of $T$ are, you can figure out whether you can find a linearly independent set of eigenvectors that spans the vector space of all polynomials. If you can, then $T$ is diagonalizable. If you cannot, then $T$ is not diagonalizable.
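-(For a concrete finite-dimensional illustration, here is a sketch assuming sympy: restrict $T_1$ to the invariant subspace of polynomials of degree less than $5$, where it becomes the nilpotent matrix below. The restriction to degree $< 5$ is my choice of example.)
-
-from sympy import Matrix
-# basis 1, x, x^2, x^3, x^4; differentiation sends x^j to j*x^(j-1)
-D = Matrix(5, 5, lambda i, j: j if j == i + 1 else 0)
-print(D.eigenvals())          # {0: 5}: the only eigenvalue is 0
-print(D.is_diagonalizable())  # False: a single nilpotent Jordan block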
-
-REPLY [5 votes]: Take the derivative of $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ (with $a_n\neq 0$), and set it equal to $\lambda a_nx^n+\cdots+\lambda a_0$. Look particularly at the equality of the coefficients of $x^n$ to determine what $\lambda$ must be. Once you know what the eigenvalues are, consider which possible diagonalized linear transformations have that eigenvalue set, and whether such linear transformations can be similar to differentiation.

-REPLY [3 votes]: Differentiating lowers the degree, so the only case where you get out a scalar multiple of what you put in is when you differentiate a constant.<|endoftext|>
-TITLE: FFT of waveform with non-constant timestep
-QUESTION [7 upvotes]: I have a waveform which I would like to take the Fourier transform of. However, the simulator which generated the waveform uses an adaptive algorithm to determine the timestep for each calculation.
-Now, I figure I can regularize the timestep to first order by creating a new timebase and for each point in the new base calculating a linear interpolation from the two nearest points in the original sequence, but this seems inelegant, and potentially incorrect.
-I also considered expanding the original sequence with whatever the smallest time-step is, replicating data points to fill in the missing data (effectively creating steps), and then filtering the resulting Fourier transform above some frequency (or simply ignoring data above some frequency).
-Either of these will probably work with the data I have, since it is heavily oversampled for the frequency range I am interested in, but I worry that both would cause distortions if that were not the case.
-Is there a better method?

-REPLY [6 votes]: There may be literature on this, and I'm no expert, but I believe there is a simple method to directly compute what you want:
-First, the FFT is just a fast algorithm for the DFT. For the following method, you will need to compute the DFT coefficients directly. Recall the formula for a single coefficient (slightly modified from Wikipedia to make it match what I say below):
-$$X_k := \frac{1}{N} \sum_{n=0}^{N-1} x_n e^{-\pi i k \frac{2n+1}{N}}$$
-You can interpret this as a discretization of the integral (hope I'm getting this right):
-$$Y_k := \int_0^1 y(t) e^{-2 \pi i k t} dt$$
-While $x$ is an $N$-tuple, $y$ is supposed to be the continuous function on $[0,1]$ that $x$ approximates. Each $n=0,\dots,N-1$ is associated with the interval $[a_n,b_n]:=[\frac{n}{N},\frac{n+1}{N}]$, so the union of all intervals is $[0,1]$.
-It also makes sense to assume there is a specific $t_n\in[0,1]$ such that $x_n = y(t_n)$, say $t_n := \frac{a_n + b_n}{2}$. (This is what I implicitly used in the first formula, making it a midpoint sum approximation. Please ask if this is not clear.)
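-(A tiny numerical check of this midpoint-rule reading, assuming numpy; the test signal is my own choice, picked so that $Y_3 = 1$ exactly:)
-
-import numpy as np
-N, k = 256, 3
-y = lambda t: np.exp(2j * np.pi * 3 * t) + 0.5
-n = np.arange(N)
-X_k = np.mean(y((2 * n + 1) / (2 * N)) * np.exp(-1j * np.pi * k * (2 * n + 1) / N))
-print(X_k)  # ~1, matching Y_k = int_0^1 y(t) exp(-2 pi i k t) dt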
-Now, the input you have is basically a different discretization $z$ of $y$, which is an $M$-tuple ($M$ possibly being different from $N$), and where $a_m$, $b_m$, and $t_m$ for $m=0,\dots,M-1$ are given by your variable timestep. (You have to decide for yourself whether the interval boundaries or the measurement points are given; however, you have to make sure that $a_0=0$ and $b_{M-1}=1$.) What you need is a sum that appropriately discretizes the above integral, which would (hopefully, again) be:
-$$Z_k := \sum_{m=0}^{M-1} (b_m-a_m) z_m e^{-2 \pi i k t_m}$$
-If the timestep is completely variable, you can get nonzero coefficients for arbitrarily large $k$, but you can stop calculating wherever you want, of course.
-Now, computing these sums is not nearly as fast as an FFT, but there are several things you can do to make it computationally feasible. The most important thing is to calculate the sines and cosines just once for each $m$, i.e. iterate over $m$ at the outer level, not $k$. You should pre-calculate sine/cosine tables if you have a fixed "time resolution" that every timestep is a multiple of; otherwise use sine/cosine implementations that favor speed over accuracy.
-If you want overlap between consecutive DFT "windows" (not sure if this is the correct word here), there is a way to reuse the part of each sum that is in both windows. Another way to reduce the time overhead is to use different window lengths for each coefficient, which gives you a variable (e.g. logarithmic) frequency resolution. I don't have time to explain this now, but you can mail me for details at SebastianR@gmx.de if you really want to implement this.<|endoftext|>
-TITLE: Computing units of certain number fields
-QUESTION [5 upvotes]: Some standard examples on various quals seem to be computing units/class numbers etc. of the ring of integers of $\mathbb{Q}(\alpha)$, where $\alpha$ is a root of either $X^3+aX+b$ or $X^5+aX+b$.
-My question is the following: What are some standard tricks that can be used to deduce that a particular unit is actually a fundamental unit?
-I'm familiar with class field theory and L-series, so I don't mind higher-level methods. I'm sort of trying to figure out what's usable during an exam. Books on number theoretic algorithms only cover pretty general cases and these algorithms are too computationally intensive to be done manually.

-REPLY [2 votes]: For applications to diophantine equations it is usually sufficient to know that a unit is not a square or a cube; this can be verified quickly by showing that the unit in question is not a square or a cube in a residue class ring modulo a suitably chosen prime ideal. In particular, squares must be totally positive.
-For finding an upper bound for the exponent $n$ such that your unit is an $n$-th power modulo roots of unity, you will have to get your hands dirty by applying geometric methods (Minkowski etc.). The only clean example I know beyond the quadratic case is that of cubic fields discussed by Artin in his book on algebraic numbers (see this question).<|endoftext|>
-TITLE: A special kind of polynomial: $p(x)=x^{2}(4x^{2}-3)^2$
-QUESTION [5 upvotes]: Consider the polynomial $p(x) = x^2(4x^2-3)^2$.
-This polynomial is special because:
-
-All of the local maxima $M_i$ satisfy $p(M_i)=1$
-All of the local minima $m_i$ satisfy $p(m_i)=0$
-and $p(-1)=p(1)=1$
-The polynomial has no other zeros, local maxima or local minima.
-
-My questions are:
-
-What conditions must a polynomial satisfy in order to have the above characteristics?
-Do such polynomials have a specific name?
-
-Thanks.
-
-EDIT
-These are the elements of the family I have so far:
-$$p_1(x)=x^2 (4x^2-3)^2$$
-$$p_2(x)=x^2 (16 x^4-20 x^2+5)^2$$
-$$p_3(x)=x^2 (64 x^6-112 x^4+56 x^2-7)^2$$
-Can someone find the pattern? What is $p_4(x)$?

-REPLY [4 votes]: I think you will find the results and techniques of the following paper useful in this regard.
-MR1752251 (2001c:11035) 11D25 (11D41 11G05 11G30)
-Buchholz, Ralph H.; MacDougall, James A.(5-NEWC)
-When Newton met Diophantus: a study of rational-derived polynomials and their extension to quadratic fields.
-J. Number Theory 81 (2000), no. 2, 210–233.
-http://dx.doi.org/10.1006/jnth.1999.2473
-This is an interesting paper, which surveys the problem of determining the set $D(n)$ of all "$k$-derived'' univariate polynomials of degree $n$ (where a polynomial $f \in k[x]$ is $k$-derived if $f$ and each of its successive derivatives has all roots in the ground field $k$). Define two polynomials $f_1,f_2\in k[x]$ to be equivalent if $f_1(x)=r f_2(s x+t)$ for $r,s,t\in k$, $r,s \neq 0$. Then up to equivalence, the following is known about $\mathbb Q$-derived polynomials:
-$$D(1)=\{x\};\quad D(2)=\{x^2,x(x-1)\};$$
-$$D(3)=\{x^3\}\cup\bigg\{x(x-1)(x-a)\ :\ a=\frac{w(w-2)}{w^2-1},w\in \mathbb Q\bigg\};$$
-$$ D(4)\supseteq \{x^4\}\cup\left\{x^2(x-1)(x-a)\ \left|\ \begin{array}{l}a=\frac{9(2w+z-12)(w+z)}{(z-w-18)(8w+z)},\\ (w,z)\in E(\mathbb Q),\ E\colon z^2=w(w-6)(w+18)\end{array}\right.\right\};$$
-$$ D(n)\supseteq \{x^n, x^{n-1}(x-1)\}\ {\rm for}\ n\geq 5.$$
-The authors prove that determining $D(n)$ in general devolves into two conjectures: (1) that no quartic with four distinct roots is $\mathbb Q$-derived; (2) that no quintic of type $x^3(x-a)(x-b)$, $a\neq b,\ a,b\neq0$, is $\mathbb Q$-derived. The first conjecture can be solved by determining all rational points on a hyperelliptic surface of degree 10. The second conjecture can be solved by determining all rational points on a curve of genus 2 (E. V. Flynn ["On $\mathbb Q$-derived polynomials'', Preprint; per revr.] has now proved this second conjecture). The authors also discuss briefly the situation of $K$-derived polynomials for quadratic extensions $K$ of $\mathbb Q$; there is, for example, the quartic $y^2=x^2(x-1)(x-\frac{37-20\sqrt{3}}{13})$ which is a ${\mathbb Q}(\sqrt{3})$-derived polynomial.
-Reviewed by Andrew Bremner<|endoftext|>
-TITLE: What are good books to learn graph theory?
-QUESTION [125 upvotes]: What are some of the best books on graph theory, particularly directed towards an upper division undergraduate student who has taken most of the standard undergraduate courses? I'm learning graph theory as part of a combinatorics course, and would like to look deeper into it on my own. Thank you.
- -REPLY [12 votes]: I learned graph theory from the inexpensive duo of Introduction to Graph Theory by Richard J. Trudeau and Pearls in Graph Theory: A Comprehensive Introduction by Nora Hartsfield and Gerhard Ringel. Both are excellent despite their age and cover all the basics. They aren't the most comprehensive of sources and they do have some age issues if you want an up-to-date presentation, but for the basics they can't be beat.
-There are lots of good recommendations here, but if cost isn't an issue, the most comprehensive text on the subject to date is Graph Theory And Its Applications by Jonathan Gross and Jay Yellen. This massive, beautifully written and illustrated tome covers just about everything you could possibly want to know about graph theory, including applications to computer science and combinatorics, as well as the best short introduction to topological graph theory you'll find anywhere. If you can afford it, I would heartily recommend it. Seriously.<|endoftext|>
-TITLE: The discrete Fourier transform of a Dirichlet character
-QUESTION [6 upvotes]: I usually work in number theory, so I am not familiar with Fourier transforms; I have read up on them and know the basics, but it never seems to be in number theory language.
-I am trying to find the transform of a primitive Dirichlet character $\chi(n) \bmod q$. I know this is a periodic function and $\chi(n)=\exp\left(\frac{2\pi i\, K v(n)}{\phi(p^\alpha)}\right)$, but I have no idea how to find its transform or the transform of $f(n)\chi(n)$.
-Yes, you are right; say, how would you calculate $\sum_{n\in\mathbb{Z}} f(n)\chi(n)$?

-REPLY [2 votes]: I finally found the answer. I feel dumb because it is quite simple, but this answer cannot be found in many places on the web.
-Consider the Gauss sum of a character modulo $q$:
-$$\tau(\chi) = \sum_{n=1}^{q-1} \chi(n) e^{\textstyle\frac{2 i \pi n}{q}}$$
-I wrote $\sum_{n=1}^{q-1}$ but I could have written $\sum_{n \in G}$ over the group $G$ of invertible residues modulo $q$ (those $n$ with $\gcd(n,q)=1$), because $\chi(n) = 0$ if $n \not\in G$. And this is important because it leads to the main trick:
-take any $a \in G$ (invertible modulo $q$); the map $n \mapsto a\,n$ is then a bijection from $G$ to itself, so that:
-$$\forall a \in G, \qquad \tau(\chi) = \sum_{n=1}^{q-1} \chi(a n) e^{\textstyle\frac{2 i \pi a n}{q}}$$
-and simply because $\chi(a n) = \chi(a) \chi(n)$:
-$$\tau(\chi) = \chi(a) \sum_{n=1}^{q-1} \chi(n) e^{\textstyle\frac{2 i \pi a n}{q}}$$
-i.e.:
-$$\sum_{n=1}^{q-1} \chi(n) e^{\textstyle\frac{2 i \pi a n}{q}} = \tau(\chi) \bar{\chi}(a)$$
-for $a \in G$, but the invertibility of the Fourier transform implies that the other values of $\chi$'s Fourier transform must be $0$, so that this is true for every $a$.
-Finally, the discrete Fourier transform (of length $q$) of a Dirichlet character $\chi$ modulo $q$ is $\bar{\chi}\, \tau(\chi)$.
-Note that in the same way we also have $\sum_{n=1}^{q-1} \chi(n) e^{-2i \pi n k / q} = \bar{\chi}(k) \overline{\tau(\bar{\chi})}$.
-Then, writing a Fourier series representation for the distribution:
-$$\delta_\chi(x) = \sum_{n=1}^\infty \chi(n) \delta(x-n) = \frac{1}{q} \sum_{k=1}^\infty \bar{\chi}(k)\left(\tau(\chi) e^{-2 i \pi k x / q} + \overline{\tau(\bar{\chi})} e^{2 i \pi k x / q}\right)$$
-and with $L(s,\chi) = \int_0^\infty \delta_\chi(x) x^{-s} dx$ we can get the functional equation:
-$$L(s,\chi) = \sum \chi(n) n^{-s} = L(1-s,\bar{\chi}) A(s)$$
-with $A(s) = \frac{1}{q}\int_0^\infty \left(\tau(\chi) e^{-2 i \pi x / q} + \overline{\tau(\bar{\chi})} e^{2 i \pi x / q}-2\right) x^{-s}dx$.
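-(A numerical sanity check of this conclusion, assuming numpy; the character mod $5$ with $\chi(2)=i$ is my choice of example:)
-
-import numpy as np
-q = 5
-chi = np.zeros(q, dtype=complex)
-for k in range(1, q):
-    chi[pow(2, k, q)] = 1j**k              # 2 is a primitive root mod 5
-tau = sum(chi[n] * np.exp(2j * np.pi * n / q) for n in range(q))
-dft = np.array([sum(chi[n] * np.exp(2j * np.pi * a * n / q) for n in range(q))
-                for a in range(q)])
-print(np.allclose(dft, tau * chi.conj()))  # True: the DFT of chi is tau(chi) * conj(chi)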
-<|endoftext|>
-TITLE: proof that an alternative definition of limit is equivalent to the usual one
-QUESTION [5 upvotes]: How to prove the following theorem?
-
-Let $I \subseteq \mathbb{R}$ be an open interval, let $c \in I$, and let $f: I\setminus\{c\} \rightarrow \mathbb{R}$ be a function. Then $\lim \limits_{x \rightarrow c} {f(x)}$ exists if for each $\epsilon > 0$, there is some $\delta > 0$ such that $x,y \in I$ and $\vert x-c \vert < \delta$ and $\vert y-c \vert < \delta$ implies $\vert f(x)-f(y) \vert < \epsilon$.

-I think this needs the standard limit definition and sup and inf properties to prove. And I came up with the following scratch of a proof:
-(1) Suppose for each $\epsilon >0$, there is some $\delta >0$ such that $\vert x-c \vert < \delta$ and $\vert y-c \vert < \delta$ implies $\vert f(x)-f(y) \vert < \epsilon$. For each $r>0$, let $A_r$ = $I \bigcap (c-r,c+r)$. Then, for each $\epsilon>0$, there is some $\delta>0$, such that $x,y \in A_\delta$ implies $\vert f(x)-f(y) \vert < \epsilon$. Then I want to show there is some $a>0$ such that $f(A_a)$ is bounded.
-(2) If $f(A_a)$ is bounded, then for each $s \in (0,a)$, $f(A_s) \subseteq f(A_a)$, and thus $f(A_s)$ is bounded. Define $a_s = \mathrm{glb}~f(A_s)$ and $b_s = \mathrm{lub}~f(A_s)$. Let $A = \{a_s \mid s \in (0,a)\}$ and $B=\{b_s \mid s \in (0,a)\}$. Then we know that A has a least upper bound and B has a greatest lower bound, and $\mathrm{lub}(A) \leq \mathrm{glb}(B)$. Now I want to show $\mathrm{lub}(A) = \mathrm{glb}(B)$
-(3) If I could show $\mathrm{lub}(A) = \mathrm{glb}(B)$, let $M=\mathrm{lub}(A) = \mathrm{glb}(B)$, I want to show $\lim \limits_{x \rightarrow c} {f(x)} = M$.
-Can someone give me some help on how to prove (1) (2) (3), $f(A_a)$ is bounded, $\mathrm{lub}(A) = \mathrm{glb}(B)$, and $\lim \limits_{x \rightarrow c} {f(x)} = M$? Thanks!

-REPLY [6 votes]: Regarding your "scratch of proof".
-(1) To show there is an $a$ such that $f(A_a)$ is bounded (and therefore, $f(A_b)$ is bounded for all $b\leq a$), simply pick $\delta$ that "works" for $\epsilon=1$. Now fix $x\in A_{\delta}$, and let $M = |f(x)|$. For all $y\in A_{\delta}$, you know that $|f(x)-f(y)|\lt 1$, hence
-$$|f(y)| - |f(x)| \leq |f(y)-f(x)| \lt 1$$
-so $|f(y)|\lt 1 + |f(x)| = 1+M$. This shows that $f(A_{\delta})\subseteq [-1-M, 1+M]$, hence $f(A_{\delta})$ is bounded.
-I confess that I don't really see how to handle (2) easily.
-Alternative way.
-Here is a possible way of attacking the problem which is closely related to what you are trying to do, but perhaps a bit easier: try constructing a sequence of points $x_1,x_2,\ldots$, with $x_n\to c$, and such that $f(x_1),f(x_2),\ldots$ is a Cauchy sequence. This will give you a "target" value for the limit, and then you can prove that the limit equals that target.
-So: let $\epsilon_1 = \frac{1}{2}$. Then there exists a $\delta_1$, and we may assume $\delta_1\lt \frac{1}{2}$, such that $|x-c|\lt \delta_1$ and $|y-c|\lt \delta_1$ implies $|f(x)-f(y)|\lt \epsilon_1$. Take $x_1 = c-(\delta_1/2)$.
-Now let $\epsilon_2 = \frac{1}{4}$. There exists a $\delta_2$, and we may assume $\delta_2\lt \min(\frac{1}{4},\delta_1)$, such that $|x-c|\lt \delta_2$ and $|y-c|\lt\delta_2$ implies $|f(x)-f(y)|\lt \epsilon_2$. Take $x_2 = c-(\delta_2/2)$.
-Let $\epsilon_3 = \frac{1}{8}$. Then there exists a $\delta_3$, which we may assume satisfies $\delta_3\lt \min(\frac{1}{8},\delta_2)$, such that $|x-c|\lt\delta_3$ and $|y-c|\lt \delta_3$ implies $|f(x)-f(y)|\lt \epsilon_3$. Take $x_3 = c-(\delta_3/2)$.
-Continuing this way, let $\epsilon_{n+1} = \frac{1}{2^{n+1}}$, and let $\delta_{n+1}\lt \min(\frac{1}{2^{n+1}},\delta_{n})$ be such that if $|x-c|\lt\delta_{n+1}$ and $|y-c|\lt\delta_{n+1}$ then $|f(x)-f(y)|\lt \epsilon_{n+1}$. Let $x_{n+1} = c - (\delta_{n+1}/2)$.
-Now we have a sequence of points $\{x_n\}$ in $I$, with $x_n\to c$, and such that $|x_n - c| \lt \frac{1}{2^n}$ for all $n$.
-Consider now the sequence $\{ f(x_m)\mid m=1,2,\ldots\}$. I claim this is a Cauchy sequence.
-Indeed: let $\epsilon\gt 0$. Then there exists a natural number $N\gt 0$ such that $1/2^N \lt \epsilon$. Let $n,m\geq N$. Then $|x_n-c| \lt \delta_{N}$ and $|x_m-c|\lt \delta_N$, so we know that $$|f(x_n)-f(x_m)|\leq \epsilon_N = \frac{1}{2^N}\lt \epsilon.$$
-Thus, for every $\epsilon\gt 0$ there exists $N\gt 0$ such that for all $n,m\geq N$, $|f(x_n)-f(x_m)|\lt \epsilon$. Hence, the sequence $\{f(x_n)\}$ is a Cauchy sequence, and therefore has a limit, $L$.
-Now, since $x_n\to c$, if $f(x)$ has a limit as $x\to c$, it will have to equal $L$. Prove that this is indeed the limit.<|endoftext|>
-TITLE: Question about the integral of $1/(1+9x^2)$
-QUESTION [8 upvotes]: This might be a stupid question but what is the integral of $\frac{1}{1 + 9x^2}$? I want to think it's $\tan^{-1}(3x) + c$ but the book I'm working in says $\frac{1}{3}\tan^{-1}(3x) + c$.
-How did they get $1/3$?

-REPLY [3 votes]: Elaborating a bit more on the above hints, in a clearer flow.
-So we are trying to integrate the following expression: $\displaystyle\int \dfrac{1}{1+9x^{2}}\ dx$.
-To do this, we will need to make an appropriate substitution inside the integrand, rewriting $9x^{2}$ as something equivalent. Doing this leads us to the following:
-$$\displaystyle\int \dfrac{1}{1+(3x)^{2}}\ dx$$
-Let $u=3x$, so that $du=3\, dx$ and $dx=\dfrac{1}{3}\, du$.
-Substituting in $u$ and $dx$, we see that we get the following:
-$$\dfrac{1}{3} \int \dfrac{1}{1+u^{2}}\ du = \dfrac{1}{3}\, \text{arctan}(u)+C = \dfrac{1}{3}\, \text{arctan}(3x)+C. \qquad\blacksquare$$
-So as we can see, the $\dfrac{1}{3}$ came from the substitution we made and from solving for what $dx$ represents in terms of $u$.
-Hope this helped out. Let me know if you have any questions on any of the steps made in this process.
-Thanks. Good luck.<|endoftext|>
-TITLE: Transformation of state-space that preserves Markov property
-QUESTION [10 upvotes]: I am solving a problem in Mathematical Statistics by Jun Shao
-
-Let $\{X_n \}$ be a Markov chain. Show that if $g$ is a one-to-one Borel function, then $\{g(X_n )\}$ is also a Markov chain. Give an example to show that $\{g(X_n )\}$ may not be a Markov chain in general.
-
-I am having a hard time solving it, even though I have been staring at it and thinking about it for a whole day.
-
-For the first part, which is to show that any one-to-one Borel $g$ preserves the Markov property of a Markov chain, I guess using the formula for the density function under a change of random variables by $g$, learned in an elementary probability course, might help, but I am not sure how to use it; or maybe the tools needed to solve the problem are not that simple?
-For the second part, I really have no idea of how to construct some $\{ X_n\}$ so that $\{g(X_n )\}$ is not a Markov chain, for example, when $g(x)=x^2$? - -Here are also some extended thoughts and questions: - -If $g$ is not one-to-one, is -$\{g(X_n )\}$ always not a Markov chain for any Markov chain $\{ X_n\}$? -How about if $\{X_t \}, t \in - \mathbb{R}$? Does any one-to-one $g$ -also preserve Markov property of -continuous-time stochastic -processes? - -Thanks a lot! - -REPLY [21 votes]: The transformation you are interested in is often called lumping some states of the process and the resulting process a lumped chain. -For a given denumerable Markov chain $X$ on a state space $E$ with transitions $p$ and a given function $g$, whether $g(X)$ is still a Markov chain or not may depend on the initial distribution of $X$, but a necessary and sufficient condition for $g(X)$ to be Markov for any initial distribution of $X$ (condition known at least since C. J. Burke and M. Rosenblatt, A Markovian Function of a Markov Chain (1958), and widely used in the applications) is that for any $x$ and $x'$ in $E$ such that $g(x)=g(x')$ and any $z$ in $g(E)$, -$$ -\sum_{y:g(y)=z}p(x,y)=\sum_{y:g(y)=z}p(x',y). -$$ -In words, being at a state $x$, the probability to jump to a lumped state $z$ may depend on $x$, but only through the lumped state $g(x)$. -A simple but inspiring example is a Markov chain $X$ on three states $0$, $1$ and $2$ with transitions from $0$ to $1$, from $1$ to $2$, from $2$ to $1$ and from $2$ to $0$ and no other transition, thus, for a given $u$ in $(0,1)$, -$$ -p(0,1)=p(1,2)=1,\qquad -p(2,0)=u,\qquad -p(2,1)=1-u. -$$ -The chain $X$ is irreducible, aperiodic, and it converges in distribution to the unique stationary distribution $\pi$ given by -$$ -\pi(0)=\frac{u}{2+u},\qquad\pi(1)=\pi(2)=\frac1{2+u}. -$$ -Now, lump together states $1$ and $2$, for example by a function $g$ to a two-state space $\{a,b\}$ such that $g(0)=a$ and $g(1)=g(2)=b$. Then: - -When at $0$, the chain $X$ enters $\{1,2\}$ deterministically by $1$. -As long as the chain $X$ stays in $\{1,2\}$, it alternates deterministically between $1$ and $2$. -When in $\{1,2\}$, the chain $X$ can only jump to $0$ from $2$, not from $1$. - -Thus, for every $k\geqslant1$, -$$ -P(g(X_{n})=a\mid g(X_{n-1})=\cdots=g(X_{n-k})=b,g(X_{n-k-1})=a) -= -\left\{\begin{array}{cccl} -0&\text{if}&k&\text{is odd,}\\ -u&\text{if}&k&\text{is even.}\end{array}\right. -$$ -This shows that $g(X)$ is not Markov and even that $g(X)$ is not a Markov chain of any order.<|endoftext|> -TITLE: Nice references on Markov chains/processes? -QUESTION [29 upvotes]: I am currently learning about Markov chains and Markov processes, as part of my study on stochastic processes. I feel there are so many properties about Markov chain, but the book that I have makes me miss the big picture, and I might better look at some other references. -So I would like to know if there are some references with nice treatment on Markov chains/processes? They can be a whole book, some chapters of a book, some lecture notes, some webpages, .... Their approaches can be different, either with or without referring to measure theory (I think a measure-theory approach will be helpful in clarifying things, but I hope it also has some intuitive interpretation), as long as they have clear organization, and provide both a big picture and cover most of the important topics of Markov chains/processes. -I really appreciate your contribution! 
- -REPLY [2 votes]: While not as advanced as the books mentioned above, if you are looking for examples related to applications of Markov Chains and a nice "brief" treatment you might look at Chapter 5, of Fred Robert's book: Discrete Mathematical Models, Prentice-Hall, 1976.<|endoftext|> -TITLE: $L^p$ norm of Dirichlet Kernel -QUESTION [5 upvotes]: I'm having trouble showing that the $L^p$ norm of the $n$-th order Dirichlet kernel is proportional to $n^{1-1/p}$. -I've tried brute force integration and it didn't work out. I would be grateful for any hints. -Thanks. - -REPLY [9 votes]: Assuming the notation of DJC, apply the transformation $y = n x$ to obtain -$$ -\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p = \int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n})}{n \sin(\frac{y}{2 n})} \right|^p dy . -$$ -Fatou's Lemma now shows that -$$ -\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p \leq \int_{\mathbb{R}} \left| \frac{2 \sin(y)}{y} \right|^p dy . -$$ -For the other side of the inequality, note, that -$$ -\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p \geq \int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n})}{\frac{y}{2}} \right|^p dy, -$$ -so using the triangular inequality of the $L^p$ norm we obtain -$$ -\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p \geq \left( \left( \int_{\mathbb{R}} \left| \frac{2 \sin(y)}{y} \right|^p dy \right)^{1/p} - \left( \int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n}) - \sin y}{\frac{y}{2}} \right|^p dy \right)^{1/p} \right)^p . -$$ -The inequality now follows from -$$ -\int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n}) - \sin y}{\frac{y}{2}} \right|^p dy \leq \int_{- n \pi}^{n \pi} \left| \frac{\frac{y}{2 n}}{\frac{y}{2}} \right|^p dy = \frac{2 \pi}{n^{p - 1}} = o(n) . -$$<|endoftext|> -TITLE: fair value of a hat-drawing game -QUESTION [9 upvotes]: I've been going through a problem solving book, and I'm a little stumped on the following question: -At each round, draw a number 1-100 out of a hat (and replace the number after you draw). You can play as many rounds as you want, and the last number you draw is the number of dollars you win, but each round costs an extra $1. What is a fair value to charge for entering this game? -One thought I had was to suppose I only have N rounds, instead of an unlimited number. (I'd then let N approach infinity.) Then my expected payoff at the Nth round is (Expected number I draw - N) = 50.5 - N. So if I draw a number d at the (N-1)th round, my current payoff would be d - (N-1), so I should redraw if d - (N-1) < 50.5 - N, i.e., if d < 49.5. So my expected payoff at the (N-1)th round is 49(50.5-N) + 1/100*[(50 - (N-1)) + (51 - (N-1)) + ... + (100 - (N-1))] = 62.995 - N (if I did my calculations correctly), and so on. -The problem is that this gets messy, so I think I'm doing something wrong. Any hints/suggestions to the right approach? - -REPLY [3 votes]: Edit: I've added some information about the game with a restricted number $N$ of rounds, since the OP mentioned this. - -Just a comment to complement the nice answers we have already. -There is an algorithm that calculates the optimal strategy and -the value of each state for such an optimal stopping problem. -Let $f(x)$ be the payout -at each state $x\in {\cal S}$ for the underlying Markov chain $(X_n)$, and let -$g(x)$ be the cost of leaving state $x$. -Set $u_0=f$ and for $N\geq 1$ put $u_N=\max(Pu_{N-1}-g,f)$. Here $P$ -is the transition matrix of the Markov chain. 
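-A minimal Python sketch of this iteration (my own illustration, not part of the original answer; it assumes the hat-game data spelled out below: payout $f(x)=x$, cost $g\equiv 1$, and i.i.d. uniform draws, so that $Ph$ is just the average of $h$):
-
-    # Value iteration u_N = max(P u_{N-1} - g, f) for the hat-drawing game.
-    def game_value(n_states=100, cost=1.0, tol=1e-12):
-        f = [float(x) for x in range(1, n_states + 1)]
-        u = f[:]                                  # u_0 = f
-        while True:
-            pu = sum(u) / n_states                # (Pu)(x) is constant: draws are i.i.d.
-            u_new = [max(pu - cost, fx) for fx in f]
-            if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
-                return u_new
-            u = u_new
-
-    v = game_value()
-    print(v[0])                                             # ~86.3571, the value of the game
-    print(next(x for x in range(1, 101) if v[x - 1] == x))  # 87: first state where stopping is optimal
-
-The fixed point of this sketch reproduces the value and the quit-at-$87$ rule computed below.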
-Then $u_N$ is the value function for the restricted game with at most $N$ rounds; that is, -$$u_N(x)=\sup_{T\leq N}\ \mathbb{E}\left[ f(X_T)-\sum_{k=0}^{T-1}g(X_k)\ \Big|\ X_0=x\right],$$ -where the supremum is over all stopping times $T$ that satisfy $T\leq N$. -As $N\to\infty$, we get $u_N\uparrow v$, where $v$ is the value function for the unrestricted game. The optimal strategy is to stop when the chain hits the set $\lbrace x: f(x)=v(x)\rbrace$. -Here $v(x)$ means the value of state $x$, i.e., $v(x)$ is the maximum expected payout, net of the accumulated costs, starting in state $x$ and using any finite stopping time $T$ as a strategy: -$$v(x)=\sup_T\ \mathbb{E}\left[ f(X_T)-\sum_{k=0}^{T-1}g(X_k)\ \Big|\ X_0=x\right].$$ -This is all nicely explained in the section on Optimal Stopping in Gregory Lawler's book "Introduction to Stochastic Processes". -In your problem we have $f(x)=x$ for $1\leq x\leq 100$, and $g(x)\equiv 1$. -Your Markov chain $(X_n)$ is a sequence of independent, uniform $\lbrace 1,2,\dots, 100\rbrace$ random variables. -Thus the $P$ operator applied to any function $h$ gives the constant function whose value is the average value of $h$, that is, $Ph(x)\equiv \sum_{i=1}^{100} h(i)/100$. So we calculate -$$u_0(x)=x,\quad u_1(x)=\max\left(x,{99\over 2}\right), \quad u_2(x)=\max\left(x,{12301\over 200}\right). $$ -Taking very large $N$ gives $$v(x)\approx \max\left(x,{86.35714}\right),$$ -which shows that the optimal strategy (over all finite stopping times!) is to quit as soon as we get $87$ or higher, and the value of the game is $86.35714$ dollars. -In your problem it is pretty straightforward to calculate the exact answer, but this algorithm also gives the answer for more complicated games where exact calculations are not so easy.<|endoftext|> -TITLE: $S=1+10+100+1000+10000+\cdots = -1/9$? How is that? -QUESTION [15 upvotes]: It's written that if you multiply the sum above by $-9$ and use the distributive law, all terms except "1" will cancel. I can't see that. I know that this is a divergent series. (The article I was reading was a layman's introduction to zeta regularization.) -Similarly, how is $S=1+2+3+4+\cdots = -1/12$? -Even if the series were to terminate somewhere, I don't see these values. This is completely baffling me. -Using $\frac{1}{1-x}$ I can put $x=10$, but that is not fair, isn't it? - -REPLY [3 votes]: For your $1+2+3+4+\cdots=-1/12$: this is the value of $\zeta(-1)$, the Riemann zeta function at $-1$. So while it is not an equality as written, the definition is usually given as $\zeta(s)=\sum_{n=1}^{\infty}n^{-s}$, which only converges for real $s$ greater than $1$. However, it can be continued to the rest of the plane (with a pole at $1$).<|endoftext|> -TITLE: Math route for someone of this background -QUESTION [12 upvotes]: Apologies for a soft question. I do not have a lot of time because of my job (programmer) and my wife cannot work, so I cannot quit. I also did not graduate from a very prestigious school with good grades. So grad school is out of the question for me. -I am 23 years old. I find that because I did not learn many topics well when I was younger, I have accumulated a lot of half-baked knowledge about stuff. For example, I can spell out some theorems and results in fancy notation, but I don't really understand those things. So when I try to restart and re-educate myself I find that I have to unlearn a lot of gibberish that I put in my head just to pass through examinations. -I visited this site and saw some solutions people posted.
I was so exhilarated after getting some of them that I wished I had the same understanding as the people who could think of such solutions. So in my private time, I have embarked on a project to learn math from scratch. I do not aspire to make meaningful contributions, because of the late start, but to discover mathematics and get some personal peace. -Before beginning, I decided that I am most interested in stuff like graph theory, and perhaps I would like to explore geometry as well (algebraic, differential). -So I planned a halfway route, and here is my question: what is your advice for a person like me to learn mathematics and reach his goal (with literature recommendations, possibly online)? - -Start from basics, do good problems in pre-calculus algebra (inequalities, permutations, combinations, sequences and series, etc.) to get some mathematical thinking -Calculus from Rudin, to get rid of most of the algorithmic procedures in my mind (is this too tough for me?) -Where do I go from here? Follow some college's standard curriculum? Is there a curriculum for undergrads designed to specialize in graph theory/topology? - -REPLY [2 votes]: First, I really admire your courage and perseverance. -Second, pick up a Schaum's outline on secondary school mathematics and make sure you understand every chapter. If you cannot answer a question by looking at it, then take out a pencil and paper and go to work on solving it. Make sure you understand the techniques involved and the reasoning behind each problem. Reread the chapter if you need to and make sure you check your answers with those in the book. -I suggest this approach rather than starting with a standard math text, because Schaum's outlines are specifically geared toward self-study. Remember, if your foundation is not strong, succeeding in mathematics --- even elementary mathematics --- will be nearly impossible. -If you have any math-related questions, don't be afraid to share them on Math.SE. Once you're done with a thorough evaluation of the fundamentals, then you're ready for something more advanced. In the meantime, go to work!<|endoftext|> -TITLE: zeroes of holomorphic function -QUESTION [10 upvotes]: I know that zeroes of holomorphic functions are isolated, and I know that if a holomorphic function has a zero set which has a limit point then it is the identically zero function. I know a holomorphic function can have a countable zero set. Does there exist a holomorphic function which is not identically zero and has an uncountable number of zeroes? - -REPLY [9 votes]: As others have said, it comes down to the nonexistence of an uncountable discrete subset $Z$ of an open subset $U$ of the complex plane $\mathbb{C}$. The point of this answer is to record a proof of this that I find simplest. Namely, the space $\mathbb{C}$ is second countable: there is a countable base for the topology (take, e.g., open balls with rational radii centered at points $x+yi$ with $x,y \in \mathbb{Q}$). But every subspace of a second countable space is second countable: just restrict a countable base to the subspace. In particular $U$ is second countable and so is the putative uncountable discrete subset $Z$ of $U$. But this is absurd: since the only base of a discrete space consists of all the singleton sets, an uncountable discrete space is not second-countable!<|endoftext|> -TITLE: geodesics on a surface of revolution -QUESTION [14 upvotes]: I'm having problems with exercise 1 of chapter 3 of do Carmo's "Riemannian Geometry".
Here is the background: -Let $(u,v)$ be the coordinates on $\mathbb{R}^2$. Let $f,g\in C^\infty(\mathbb{R})$, and observe that $\varphi:\mathbb{R}^2\rightarrow \mathbb{R}^3$ given by $\varphi(u,v)=(f(v)\cos u,f(v)\sin u,g(v))$ is an immersion assuming $f'(v)^2+g'(v)^2\not= 0$ and $f(v)\not= 0$. The image is the surface of revolution generated by the curve $(f(v),g(v))$ being rotated about the $z$-axis. The induced metric is -$$ (g_{ij})=\left( \begin{array}{cc} f^2 & 0 \\ 0 & f'^2+g'^2 \end{array} \right), $$ -and the local equations of a geodesic $\gamma$ are -$$ \left\{ \begin{array}{l} \frac{d^2 u}{dt^2} + \frac{2ff'}{f^2}\frac{du}{dt}\frac{dv}{dt}=0 \\ \frac{d^2 v}{dt^2}-\frac{ff'}{f'^2+g'^2}\left( \frac{du}{dt} \right)^2 + \frac{f'f'' + g'g''}{f'^2+g'^2} \left( \frac{dv}{dt} \right)^2 = 0. \end{array} \right. $$ -Then, do Carmo says: Obtain the following geometric meaning of the equations above: the second equation is, except for meridians ($u=u_0$) and parallels ($v=v_0$), equivalent to the fact that the "energy" $|\gamma'(t)|^2$ of a geodesic is constant along $\gamma$; the first equation signifies that if $\beta(t)$ is the oriented angle, $\beta(t)<\pi$, of $\gamma$ with a parallel $P$ intersecting $\gamma$ at $\gamma(t)$, then $r\cos \beta$ is constant, where $r$ is the radius of $P$. -This last paragraph is what's confusing me. First of all, I've seen the energy of a path as $\int_a^b |\gamma'(t)|^2 dt$, so maybe that's why he put "energy" in quotes. But also, geodesics have constant speed! So this should be constant along all geodesics too (regardless of whether they're meridians or parallels). But I figured that maybe I should just blindly plug and chug since that seems to work scarily often in Riemannian geometry, so I found $|\gamma'(t)|^2$, took its $t$-derivative, and substituted in the second equation in the system for geodesics. Here it is: -$$ |\gamma'(t)|^2 = \left\langle \frac{d \gamma}{dt} , \frac{d \gamma}{dt} \right\rangle = u'^2 g_{11} + 2u'v'g_{12} + v'^2 g_{22} = u'^2f^2+v'^2(f'^2+g'^2)$$ -so -$$ \frac{d}{dt} |\gamma'(t)|^2 = 2u'u''f^2+2u'^2ff' + 2v'v''(f'^2+g'^2)+v'^2(2f'f''+2g'g'') $$ -$$ = 2u'u''f^2+2u'^2ff' + 2v'\left(ff'u'^2-(f'f''+g'g'')v'^2\right) + 2v'v''(f'^2+g'^2)+2v'^2(f'f''+g'g'')$$ -$$ = 2u'u''f^2+2u'^2ff'(1+v')+2v'v''(f'^2+g'^2)+2v'^2(f'f''+g'g'')(1-v')$$ -and this looks hopeless. - -REPLY [9 votes]: What you did wrong: the function $f$ should be treated as $f = f(\gamma(t)) = f(u(t),v(t))$. So $\frac{d}{dt}f = \partial_u f \frac{d}{dt}u + \partial_v f\frac{d}{dt}v$ using the chain rule. By the parametrization, you have $\partial_u f = 0$, while $\partial_v f$ is what you originally wrote as $f'$. (Whereas in the above you implicitly wrote $\frac{d}{dt}f^2 = 2f f'$, which is wrong.) -This is an instance where you let notation get in your way. It would be clearer if you reserved the $\prime$ for $f'$, the derivative of the function $f$ relative to the parameter $v$ (and $g'$ for the derivative of the function $g$ relative to the parameter $v$), and used explicitly either $\frac{d}{dt}$ or $\cdot$ to denote time derivatives along the geodesic (as the geodesic equations that you originally copied down do).
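-To watch the chain rule doing its job, here is a small SymPy check of my own; it is only a sketch under an assumed concrete profile, the torus-like curve $f(v)=2+\cos v$, $g(v)=\sin v$, which is not part of the exercise. Substituting the two geodesic equations into $\frac{d}{dt}\bigl(u'^2f^2+v'^2(f'^2+g'^2)\bigr)$ returns $0$, the conservation of energy derived next.
-
-    import sympy as sp
-
-    t = sp.symbols('t')
-    u, v = sp.Function('u')(t), sp.Function('v')(t)
-    f, g = 2 + sp.cos(v), sp.sin(v)        # assumed profile curve (f(v), g(v))
-    fp, gp = f.diff(v), g.diff(v)          # derivatives with respect to the parameter v
-    fpp, gpp = fp.diff(v), gp.diff(v)
-    up, vp = u.diff(t), v.diff(t)          # time derivatives along the geodesic
-
-    E = up**2 * f**2 + vp**2 * (fp**2 + gp**2)           # the "energy" |gamma'(t)|^2
-    geodesic = {u.diff(t, 2): -(2*f*fp/f**2) * up * vp,  # the two geodesic equations,
-                v.diff(t, 2): (f*fp*up**2 - (fp*fpp + gp*gpp)*vp**2) / (fp**2 + gp**2)}
-    print(sp.simplify(E.diff(t).subs(geodesic)))         # prints 0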
-For deriving the "conservation of energy", it may help if instead of the second equation in the two geodesic equations (also, I think there may be a sign error in it, but I am not 100% certain), you look at that equation multiplied by $(f'^2 + g'^2)\frac{dv}{dt}$, that is -$$ (f'^2 + g'^2) \dot{v} \ddot{v} + ff'\dot{v}(\dot{u})^2 + (f'f'' + g'g'')(\dot{v})^3 = 0 $$ -Incidentally, the second thing about angle against the parallels is called "Clairaut's relation".<|endoftext|> -TITLE: Sum of two squares proof -QUESTION [5 upvotes]: Find two pairs of relatively prime positive integers $(a,c)$ so that $a^2 + 5929 = c^2$. -Can you find additional pairs with $gcd(a,c) > 1$? -What I know: -$gcd(a,c) = 1$ implies that there are some $x$ and $y$ such that $ax + cy = 1$. Since $a d$oes not divide $c$, I'm guessing that $a^2$ does not divide $c^2$ as well (need confirmation). In that case we then have $gcd(a^2, c^2) = 1$ so there are some x and y such that $a^2 x + c^2 y = 1$. I'm not 100% sure if that leads us anywhere but it does give an equation that is somewhat matching the question. -Let $c^2 = d$ We know that a number $d$ can be written as a sum of two squares if all its prime factors are either 2 or congruent to $1 (mod 4)$. We have $\sqrt(5929) = 77$. So we have that if $d = a^2 + 5929$, $d$ must be a product of distinct primes that are congruent to $1 (mod 4)$ or 2. -What am I missing from here that is keeping me back from answering this? It doesn't seem like a very difficult question yet I'm having trouble with it. Maybe its midnight speaking :(. - -REPLY [6 votes]: This is a naïve approach that comes to mind. Seeing squares on opposite sides of an equation makes me want to move one of them and use the difference of squares formula. That is, I can't resist rewriting that as $5929=(c-a)(c+a)$. This then leads to the approach of factoring $5929=7^211^2$, leading to all possible choices of $c-a$ and $c+a$. E.g., choosing $c+a=11^27$ and $c-a=7$ leads to $c=\frac{7}{2}(11^2+1)$ and $a=\frac{7}{2}(11^2-1)$. Analyzing all such possibilities is one way to find your answer.<|endoftext|> -TITLE: Reference request: Riemann's paper on abelian functions -QUESTION [9 upvotes]: I don't know if this is the right kind of question for here. -But, can someone help me find an english translation (a link to a pdf would be nice) of: B. Riemann, "Theorie der Abelschen Funktionen", J. Reine Angew. Math., 54 (1857) -Many Thanks. - -REPLY [7 votes]: I found an English translation here: http://www.wlym.com/~jross/curvature/ - -Apparently the previous link I gave no longer exists, so here is another link to the same translation: Part I and Part II of the paper, respectively.<|endoftext|> -TITLE: Convexity of the product of two functions in higher dimensions -QUESTION [32 upvotes]: Exercise 3.32 page 119 of Convex Optimization is concerned with the proof that if $f:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto f(x)$ and $g:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto g(x)$ are both convex, nondecreasing (or nonincreasing) and positive, then $h:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto h(x)=f(x)g(x)$ is also convex. -Are there any generalisations or analogies of this claim to the case where both $f$ and $g$ are convex functions mapping elements from $\mathbb{R}^n$ to $\mathbb{R}$, for any $n>1$? - -REPLY [32 votes]: Yes a generalization is possible. Here is an elementary approach to the convexity of the product of two nonnegative convex functions defined on a convex domain of $\mathbb{R}^n$. 
-Choose $x$ and $y$ in the domain and $t$ in $[0,1]$. Your aim is to prove that $\Delta\ge0$ with -$$ \Delta=t(fg)(x)+(1-t)(fg)(y)-(fg)(tx+(1-t)y). $$ -But $f$ and $g$ are nonnegative and convex, hence -$$ (fg)(tx+(1-t)y)\le(tf(x)+(1-t)f(y))(tg(x)+(1-t)g(y)). $$ -Using this and some easy algebraic manipulations, one sees that $\Delta\ge t(1-t)D(x,y)$ with -$$ D(x,y)=f(x)g(x)+f(y)g(y)-f(x)g(y)-f(y)g(x), $$ -that is, -$$ D(x,y)=(f(x)-f(y))(g(x)-g(y)). $$ -This proves a generalization of the result you quoted to any product of convex nonnegative functions $f$ and $g$ such that $D(x,y)\ge0$ for every $x$ and $y$ in the domain of $f$ and $g$. -In particular, if $f$ and $g$ are differentiable at a point $x$ in the domain, one asks that their gradients $\nabla f(x)$ and $\nabla g(x)$ are such that $z^*M(x)z\ge0$ for every $n\times 1$ vector $z$, where $M(x)$ is the $n\times n$ matrix -$$ M(x)=\nabla f(x)\cdot(\nabla g(x))^*. $$<|endoftext|> -TITLE: "Boundary" of convergence of $\frac{1-(1-c^{n})^{2n}}{(1-c)^{2n}}$ -QUESTION [6 upvotes]: I ran across this confounding limit I am wondering about. It is as follows: -$$\lim_{n\to \infty}\frac{1-(1-c^{n})^{2n}}{(1-c)^{2n}}, \qquad 0 \lt c \lt 1.$$<|endoftext|> -TITLE: the relationship between eigenvectors and matrix multiplication -QUESTION [5 upvotes]: If $A$ has eigenvector $\mathbf{v}_1$ so that $A\mathbf{v}_1=\lambda_1\mathbf{v}_1$ and $B$ has eigenvector $\mathbf{v}_2$ so that $B\mathbf{v}_2=\lambda_2\mathbf{v}_2$, then what can you say about $AB$? Can you say $AB\mathbf{v}_3=\lambda_3\mathbf{v}_3$? And what would be the relationship between $\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3$, and what would be the relationship between $\lambda_1,\lambda_2,\lambda_3$? -Edit: $A,B$ are $3 \times 3$ matrices, $\lambda_1,\lambda_2,\lambda_3$ can be real or complex numbers, and $(\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3)$ is a triple. - -REPLY [3 votes]: It is possible to have $2 \times 2$ diagonal matrices $A$ and $B$ so that $A$ has eigenvalues $0$ and $\lambda_1$ and $B$ has eigenvalues $0$ and $\lambda_2$, but $AB$ is the $0$ matrix. This can be extended to $3\times 3$ matrices. -It can also be the case for $2 \times 2$ matrices or larger that the only eigenvalues of $A$ and $B$ are $0$, but $AB$ has an arbitrary nonzero eigenvalue. -If either of $A$ or $B$ is not invertible, then $AB$ can't be, so it must have $0$ as an eigenvalue. -If the eigenvectors $v_1$ and $v_2$ are parallel, then $v_1$ must be an eigenvector of $AB$ of eigenvalue $\lambda_1 \lambda_2$. -However, my guess is that if $v_1$ and $v_2$ are not parallel, and $\lambda_1$ and $\lambda_2$ are not $0$, then there is no restriction on the set of eigenvalues of $AB$.<|endoftext|> -TITLE: Number of abelian groups of order $p^n$ -QUESTION [5 upvotes]: If $p$ is prime, determine the number of abelian groups of order $p^n$ for each $1\leq n\leq8$. -(I assume that "up to isomorphism" should be included somewhere in the question for the sake of precision...) Could someone please review/confirm my work?
n = 1: $\mathbb{Z}_p$ - n = 2: $\mathbb{Z}_{p^2}$ and $\mathbb{Z}_p\times \mathbb{Z}_p$ - n = 3: $\mathbb{Z}_{p^3}$, $\mathbb{Z}_{p^2}\times \mathbb{Z}_p$, and $\mathbb{Z}_p\times \mathbb{Z}_p \times\mathbb{Z}_p$ - n = 4: $\mathbb{Z}_{p^4}$, $\mathbb{Z}_{p^3} \times \mathbb{Z}_p$, $\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}$, $\mathbb{Z}_{p^2}\times \mathbb{Z}_p \times \mathbb{Z}_p$, and $\mathbb{Z}_p\times \mathbb{Z}_p \times\mathbb{Z}_p \times \mathbb{Z}_p$ -et cetera - -I am simply considering all the options for when the largest exponent of $p$ is $n$, then $n-1$, and so on. How does this look? Thanks! -(Apparently I don't know how to "end a quote"...) - -REPLY [13 votes]: Your work is correct, except that you aren't answering the question asked (they asked you for the number of (nonisomorphic) groups, not for a list of the groups). So for $n=1$ the answer should be "1"; for $n=2$ the answer should be "2"; for $n=3$ the answer should be "3"; for $n=4$ the answer should be "5", etc. -The magic words you are looking for are "partitions of $n$." You should verify that there is a bijection between the isomorphism types of abelian groups of order $p^n$ and the partitions of $n$. - -REPLY [5 votes]: It's equal to the number of partitions of $n$. See http://mathworld.wolfram.com/AbelianGroup.html<|endoftext|> -TITLE: Basis for $\mathbb{Z}^2$ -QUESTION [17 upvotes]: Let $x = (a, b), y = (c, d) \in \mathbb{Z}^2$. What is the condition on $a, b, c, d$ so that $\{x, y\}$ is a basis? - -My answer: $ad\neq bc$ and $\gcd(a, c) = \gcd(b, d) = 1$. -The first condition ensures that they aren't the same vector; the second ensures that we can actually "get" all of the integer values/lattice points. -Is this correct? -Thanks. - -REPLY [2 votes]: For $(a, b)$ and $(c, d)$ to be a basis, the matrix formed from them must be invertible over $\mathbb{Z}$. In other words, $\det \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ must be invertible. Over a field, this would only require $\det(A) \neq 0$, but since we are in $\mathbb{Z}$, we require $\det(A) = \pm 1$, so that each entry in $A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$ is an integer. -Note that $ad - bc = \pm 1$ implies that $\gcd(a,c) = \gcd(b,d) = 1$.<|endoftext|> -TITLE: Topology exercises -QUESTION [23 upvotes]: Can anyone suggest a collection of (solved) exercises in topology? Undergrad level, as a companion to Dugundji's Topology (although excellent, it doesn't provide solutions to the problems). -Thanks. - -REPLY [18 votes]: Elementary Topology Problem Textbook, by O. Ya. Viro, O. A. Ivanov, N. Yu. Netsvetaev, and V. M. Kharlamov, is available freely online at - -http://www.pdmi.ras.ru/~olegviro/topoman/eng-book-nopfs.pdf - -This book is published by the AMS, and that version comes with solutions. - -REPLY [7 votes]: "Fundamentals of General Topology: Problems and Exercises" by A. V. Arkhangel'skii and V. I. Ponomarev is a fun book, though it might be hard to find and the level might be higher than you'd like. - -REPLY [6 votes]: I'd recommend "Elementary Topology Problem Textbook" by Viro, Ivanov, Netsvetaev, Kharlamov.<|endoftext|> -TITLE: Is the sub-field of algebraic elements of a field extension of $K$ containing roots of polynomials over $K$ algebraically closed?
-QUESTION [5 upvotes]: If I have a field $K$ and an extension $L$ of $K$ such that all (non-constant) -polynomials in $K[X]$ have a root in $L$, is the set of algebraic elements -of $L$ over $K$ (the sub-field of all the elements of $L$ which are roots of a polynomial in $K[X]$) algebraically closed ? -Do you have a counterexample ? - -REPLY [5 votes]: This is a standard result that follows by factorization into separable and inseparable extensions. See the post below from Ask an Algebraist. Alas, the thread has since been deleted (due to vandals), but the references below to Isaacs will help you to locate the standard conceptual proof (to be contrasted with the non-conceptual elementary form given below by a student - cf. my remarks following it). See Isaacs: Algebra: a graduate course. Ch. 19: Separability and Inseparability and see esp. the remark preceding Theorem 19.18. - -From: Bill Dubuque; Date: July 6, 2010 -In reply to: "Thanks & Summary", posted by student on July 6, 2010: -student wrote: - -Thanks for your help, it's all clear now. The proof you gave provides good intuition! - For my own understanding I have written it up in total and tried to make the paralles - between the separable and the inseparable case clear. I post the proof below, - maybe someone reading this will benefit from it. -BTW you said that this theorem occurs in many textbooks. - Actually I could only locate it in Isaacs book (pp. 303-304), - however with a different proof (as you suggested). - Are there any specific books about algebra in general - or field theory in particular you would recommend? -OK here comes the proof (hopefully without typos), - which is basically a summary of the previous posts in this thread: -THEOREM - Let $L$ and $K$ be fields s.t. $L$ is an algebraic extension of $K$. - Assume every irreducible polynomial in $K[X]$ has at least one zero in $L$. - Then $L$ is an algebraic closure of $K$. -REMARK - If in the situation of the theorem $\operatorname{char}(L) = p$, then for every integer $k \geq 1$ - every element $c \in K$ has a unique $p^k$-th root $d$ in $L$ (i.e. $d^{p^k} = c$). -PROOF (REMARK) - Existence: The polynomial $X^{p^k} - c$ is in $K[X]$ and hence has a root $d$ in $L$. - Uniqueness: Assume $d^{p^k} = c = e^{p^k}$. Then $(d - e)^{p^k} = 0$ [freshman's dream] - and hence $d = e$. -PROOF (THEOREM) - Suppose $f$ is an irreducible polynomial with coefficients in $K$. - To prove the theorem we have to show that $f$ splits over $L$. - The proof is divided into two cases: $f$ separable and $f$ inseparable. -*) 1st case: $f$ separable. - Let $S$ denote a splitting field for $f$ over $K$ - lying inside a splitting field for $f$ over $L$. - Being the splitting field for a separable polynomial over $K$, - $S$ has a primitive element: $S = K(a)$. - Let $g$ be the minimal polynomial of $a$ over $K$. - Then $g$ has a root, say $b$, in $L$. - Since $g$ is irreducible and both $a$ and $b$ are roots of $g$, - $K(a)$ is isomorphic to $K(b)$. - [In fact $K(a) = K(b)$: - Since $S$ (being the splitting field for a polynomial in $K[X]$) - is normal over $K$, and since $g$ is irreducible and - has a root in $S$ (namely $a$), $g$ must split over $S$, hence $b$ is in $K(a)$. - Since $K(a)$ and $K(b)$ both have the same degree (namely $\deg(g)$) over $K$, - $K(a) = K(b)$.] - Thus $L \supseteq K(b) = K(a) = S$ contains all roots of $f$. -*) 2nd case: $f$ inseparable. - Then $\operatorname{char}(K) = p > 0$, - and $f = g(X^{p^n})$ for a separable irreducible polynomial $g\in K[X]$ and some $n > 0$. 
- By the 1st case $g$ splits in $L$, and we may let $a_1,...,a_m$ be the roots of $g$ in $L$. - Then $S = K(a_1,...,a_m)$ (being the splitting field for a separable polynomial over $K$) - has a primitive element: $S = K(a)$. - Let $h$ be the minimal polynomial of $a$ over $K$. - The polynomial $h(X^{p^n})$ is in $K[X]$ and hence has a root in $L$, say $b$. - Then $b^{p^n}$ is a root of $h$ in $L$. - Since $h$ is irreducible and both $a$ and $b^{p^n}$ are roots of $h$, - $K(a)$ is isomorphic to $K(b^{p^n})$. - In fact $K(a) = K(b^{p^n})$ [same argument as in the 1st case]. -Now let $r$ denote a root of $f$ lying inside a splitting field for $f$ over $L$. - Then $r^{p^n}$ is a root of $g$, hence $r^{p^n}$ is in $S = K(a) = K(b^{p^n})$. - Thus $r^{p^n} = F(b^{p^n})$ for some polynomial $F \in K[X]$. - By the remark, every coefficient of $F$ has a $p^n$-th root in $L$, - so $F(b^{p^n}) = G(b)^{p^n}$ for some $G \in L[X]$ [freshman's dream]. - Thus $(r - G(b))^{p^n} = 0$, hence $r = G(b)$ is in $L(b) = L$. -Q.E.D. - -This is just an "elementary" form of the standard proof (e.g. cf. the Isaacs reference above). -But, alas, by eliminating the innate structural elements (factorization into separable and inseparable extensions) in favor of "simpler" calculations with elements, the innate structure of the proof is greatly obfuscated. The elementary proof has been constructed by directly inlining the lemmas leading up to Isaacs' proof vs. invoking them by name. This is not good pedagogically. Decomposing algebraic extensions into their separable and inseparable parts is an essential tool required to study general algebraic extensions. Such key ideas should not be obscured as above, esp. when, as here, the structural form is just as short and simple as the elementary form. Please read the section of Isaacs leading up to the proof to better understand the innate structure. You've a much better chance of remembering the structural proof years later because it has been abstracted into simple steps using key structures/lemmas that will often be reused. But few if any students would probably remember the non-conceptual elementary form as presented above.<|endoftext|> -TITLE: Find $\frac{\mathrm d^{100}}{\mathrm d x^{100}}\frac{x^2+1}{x^3-x}=$? -QUESTION [9 upvotes]: $$f(x)=\frac{x^2+1}{x^3-x}$$ -$$f^{(100)}(x)=?$$ -I tried differentiating once and twice, but did not see any pattern emerging and could not guess what the 100th derivative should be. - -EDIT -So decomposing this as $$f(x)=-\frac{1}{x}+\frac{1}{x+1}+\frac{1}{x-1}$$ does the job. Thanks for the hints! (Edit: Sivaram has a complete calculation.) Although a similar approach would greatly simplify this (next) problem, can someone tell me what is wrong with my approach? -My usual line of attack is to use Taylor expansion. For example, the next problem in the same list asks for the $100^{th}$ derivative of $$\frac{1}{x^2-3x+2}$$ at $x=0$ within 10% relative error. -NOTE: The above is a mistype; the following attempt is for $\frac{1}{x^2+3x+2}$. A better general approach, which is what I was looking for, is described in the answer posted below. -I know I can expand in a Maclaurin series $$\frac{1}{x^2+3x+2}=\frac{1}{2} \left(1+\frac{x^2+3x}{2} + \left(\frac{x^2+3x}{2}\right)^2 +\cdots\right)$$ -After taking 100 derivatives I would be left to differentiate the following.
-$$\frac{1}{2}\left(\left(\frac{x^2+3x}{2}\right)^{50}+\left(\frac{x^2+3x}{2}\right)^{51}+\cdots\right)$$ -$$=\frac{1}{2}\left(\frac{\sum_{k=0}^{50}\binom{50}{k}3^{50-k} x^{50+k}}{2^{50}}+\frac{\sum_{k=0}^{51}\binom{51}{k}3^{51-k} x^{51+k}}{2^{51}}+\cdots+\frac{\sum_{k=0}^{100}\binom{100}{k}3^{100-k} x^{100+k}}{2^{100}}\right)$$ -Because anything on either side of these values would disappear when I take the hundredth derivative at $x=0$. And it is also easy to see that I will get exactly one term from each of the sums, so I get an answer, -$$=100!\sum_{k=0}^{50}\frac{3^{2k}}{2^{50+k}}$$ -This is wrong, because the answer is too huge and I'm supposed to find a number within 10%. Can someone tell me where I went wrong, and whether there is a cleaner way to approach these problems? - -REPLY [10 votes]: $$\frac{x^2+1}{x^3-x} = -\frac{1}{x} + \frac{1}{x+1} + \frac{1}{x-1} = -x^{-1} + (x+1)^{-1} + (x-1)^{-1}$$ -$$\frac{d^n(y^{-1})}{dy^n} = \frac{(-1)^n n!}{y^{n+1}}$$ -Hence, the $n^{th}$ derivative is -$$\frac{(-1)^{n+1} n!}{x^{n+1}} + \frac{(-1)^n n!}{(x-1)^{n+1}} + \frac{(-1)^n n!}{(x+1)^{n+1}} = (-1)^n n! \times \left( \frac{-1}{x^{n+1}} + \frac{1}{(x-1)^{n+1}} + \frac{1}{(x+1)^{n+1}}\right)$$ -Similarly, -$$\frac{1}{x^2-3x+2} = \frac{1}{x-2} - \frac{1}{x-1}$$ -Hence, the $n^{th}$ derivative is -$$\frac{(-1)^n n!}{(x-2)^{n+1}} - \frac{(-1)^n n!}{(x-1)^{n+1}} = (-1)^n n! \times \left( \frac{1}{(x-2)^{n+1}} - \frac{1}{(x-1)^{n+1}}\right)$$ -REPLY [7 votes]: HINT: Try using partial fraction decomposition.<|endoftext|> -TITLE: Eigenvalues of product of matrices -QUESTION [7 upvotes]: If $\mathbf{A}_{n\times n}$ is a positive semi-definite matrix with eigenvalues $\{\alpha_k\},\ k\in\{1,...,n\}$, and $\mathbf{B}_{m\times n}$ is an arbitrary matrix with singular values $\{\beta_k\},\ k\in\{1,...,\min(m,n)\}$, can anything be said about the singular values $\{\gamma_k\},\ k\in\{1,...,\min(m,n)\}$ of the matrix $\mathbf{\Gamma}=\mathbf{BA}$? Is there a way I can relate $\gamma_k$ to $\alpha_k$ and $\beta_k$? - -REPLY [9 votes]: The singular values of $B$ are the square roots of the eigenvalues of $B^* B = C$, and those of $BA$ are the square roots of the eigenvalues of $(BA)^* BA = A C A$. So the question is whether for positive semidefinite matrices $A$ and $C$ there is any relation between the eigenvalues of $A C A$ and those of $A$ and $C$. The answer is, in general, not much. -You might consider some simple $2 \times 2$ examples, with different $C$ having the same eigenvalues, but the corresponding $A C A$ having different eigenvalues. -Of course the product of the eigenvalues is the determinant, and $\det(ACA) = \det(C) \det(A)^2$. Also, the sum of the eigenvalues is the trace, and using Cauchy-Schwarz you have the bound ${\rm tr}(ACA) = {\rm tr}(CA^2) \le ({\rm tr}( C^2))^{1/2} ({\rm tr}(A^4))^{1/2}$.<|endoftext|> -TITLE: Why do we study the localization of a ring? -QUESTION [7 upvotes]: This question may be a bit vague, but nevertheless, I would like to see an answer. Wikipedia tells me that: - -In abstract algebra, localization is a systematic method of adding multiplicative inverses to a ring. - -It is clear that for integral domains, we have the field of fractions, and we work in it. It's obvious that a field has multiplicative inverses. Now if we consider an arbitrary ring $R$, by the definition of localization, it means that we are adding multiplicative inverses to $R$, thereby wanting $R$ to become a division ring or a field. My question: why is it so important to look at this concept of localization?
Or how would the theory look if we never had the concept of localization? - -REPLY [23 votes]: Localization is a technique which allows one to concentrate attention on what is happening near a prime, for example. When you localize at a prime, you abruptly simplify the behavior of your ring outside that prime, but you more or less keep everything inside it intact. -For lots of questions, this significantly simplifies things. -Indeed, there are very general procedures, in lots of contexts, which go by the name of localization, and their purpose is usually the same: if you are lucky, the problems you are interested in can be solved locally and then the "local solutions" can be glued together to obtain a solution to your original problem. Moreover, an immense amount of effort has gone into extending the meaning of "local" so as to be able to apply this strategy in more contexts: I have always loved the way the proofs of some huge theorems of algebraic geometry consist more or less in setting up an elaborate technology in order to be able to say the magical "It is enough to prove this locally", and then, thanks to the fact that we worked so much in that technology, immediately conclude the proof with a "where it is obvious" :) -Of course, all sorts of bad things can happen. For example, sometimes the "local solutions" cannot be glued together into a "global solution", &c. (Incidentally, when this happens, so that you can do something locally but not glue the result, you end up with a cohomology theory which, more or less, is the art of dealing with that problem.) - -REPLY [16 votes]: In Algebraic Geometry, if you have a variety $\mathbf{V}$, then you have the corresponding coordinate ring $A(\mathbf{V})$ to study the variety. To each point $p\in\mathbf{V}$ there corresponds a maximal ideal $\mathfrak{M}$ of $A(\mathbf{V})$. -The ring $A(\mathbf{V})$ carries information about what is going on in all of $\mathbf{V}$. If you are only interested in understanding what is happening at (or near) the point $p$, then a lot of the information in $A(\mathbf{V})$ doesn't really matter. If you localize $A(\mathbf{V})$ at $\mathfrak{M}$, what you get is precisely the information about what is going on near $p$. For example, there is a one-to-one correspondence between the prime ideals of this localization and the subvarieties of $\mathbf{V}$ that go through the point $p$. -The coordinate ring is often not an integral domain (e.g., if the variety is not irreducible). So you want to be able to localize in rings that are not integral domains. -More generally, localizing gives you a way to focus on particular parts of the ideal structure of $R$, even when $R$ is not an integral domain (so that there is no ring of fractions to embed into); if the ideal you are looking at is a prime ideal, then you can "strip away" all the stuff that is not inside of $\mathfrak{p}$ and make it "not matter", so that you can just focus on what is going on inside of $\mathfrak{p}$. This is important and useful even when we don't have a division ring or a field to embed into.<|endoftext|> -TITLE: extend an alternative definition of limits to one-sided limits -QUESTION [5 upvotes]: The following theorem is an alternative definition of limits. In this theorem, you don't need to know the value of $\lim \limits_{x \rightarrow c} {f(x)}$ in order to prove the limit exists. - -Let $I \subseteq \mathbb{R}$ be an open interval, let $c \in I$, and let $f: I\setminus\{c\} \rightarrow \mathbb{R}$ be a function.
Then $\lim \limits_{x \rightarrow c} {f(x)}$ exists if for each $\epsilon > 0$, there is some $\delta > 0$ such that $x,y \in I\setminus\{c\}$ and $\vert x-c \vert < \delta$ and $\vert y-c \vert < \delta$ implies $\vert f(x)-f(y) \vert < \epsilon$. - -I have extended this to a theorem for one-sided limits: $\lim \limits_{x \rightarrow c+} {f(x)}$ exists if for each $\epsilon > 0$, there is some $\delta > 0$ such that $c < x < c+\delta$ and $c < y < c+\delta$ imply $\vert f(x)-f(y) \vert < \epsilon$. Is this extension correct? - -REPLY: Suppose first that $\lim \limits_{x \rightarrow c+} {f(x)} = g$ exists. Then for $\varepsilon > 0$ there exists $\delta > 0$ such that if $0 < x - c < \delta$ then $|f(x) - g| < \varepsilon$ and if $0 < y - c < \delta$ then $|f(y) - g| < \varepsilon$. Hence -$$ -|f(x) - f(y)| \leq |f(x) - g| + |f(y) - g| < 2\varepsilon -$$ -for $0 < x - c < \delta$ and $0 < y - c < \delta$. -On the other hand, assume that your definition holds. Then there exists a strictly decreasing sequence $(c_n)_{n \geq 1}$ such that $c_n \to c$. Thus for a given $\varepsilon > 0$ there exist $\delta > 0$ and a positive integer $N$ such that $|f(c_m)-f(c_n)| < \varepsilon$ for $m,n > N$. -It means that $(f(c_n))$ is a Cauchy sequence and converges to, say, $g$. -(If you are not familiar with Cauchy sequences, it is easy to prove that a sequence $(a_n)$ such that for $\varepsilon > 0$ there exists $N > 0$ such that $|a_m - a_n| < \varepsilon$ for $m,n > N$ converges. To prove it, notice first that $(a_n)$ is bounded. Then, by the Bolzano-Weierstrass theorem there is a subsequence $(a_{n_k})$ which converges to some $a$. Then, by the Cauchy condition and the triangle inequality, $a_n \to a$.) -Assume now that there exists a sequence $(c'_n)_{n \geq 1}$ such that $f(c'_n) \to g' \neq g$. Let $C_n$ equal $c_n$ if $n$ is even, and $c'_n$ if $n$ is odd. Hence $(f(C_n))$ diverges, but on the other hand we have $|f(C_m) - f(C_n)| < \varepsilon$ for sufficiently large $m,n$, and this means that $(f(C_n))$ is Cauchy, hence converges. Contradiction.<|endoftext|> -TITLE: For finite abelian groups, show that $G \times G \cong H \times H$ implies $G \cong H$ -QUESTION [11 upvotes]: Let $G$ and $H$ be finite abelian groups such that $G \times G \cong H \times H$. - Then $G \cong H$. - -I was going to just write the hypothesis as $G^2 \cong H^2$ and take square roots on both sides, but I don't think that would suffice (and neither would saying "true" a la Myself!)... -The hypothesis tells us that there is an isomorphism, say $f: G \times G\to H \times H$. -I would like to use this to come to the conclusion that there is a bijective homomorphism $g: G \to H$. (I will be using additive notation...) -Since $f((a, b) + (c, d)) = f(a,b) + f(c, d)$ for all $a, b, c, d \in G$, I was thinking about choosing an arbitrary $a, b \in G$ and calculating: -$$ -f((a, 0) + (b, 0)) = f(a, 0) + f(b, 0) \\ \Rightarrow -f(a + b, 0) = f(a, 0) + f(b, 0). -$$ -I basically need to define $g$ in such a way that it extracts the first dimension from the equation. Clearly (I think!), $g$ will invoke $f$ in some way. Can I have a tip on this? It's probably very simple, but I'm not sure how to express it symbolically. -I might not need to show that $g$ is bijective if I can say something to the effect of "this routine verification is straightforward and left to the reader", but I'm afraid I might not be able to perform such a verification if put on the spot! Here's a stab: -$f$ injective $\Leftrightarrow f(a,b) = f(c,d) \Rightarrow (a, b) = (c, d) \Rightarrow a = c \wedge b = d$ -So by definition of $g$ (forthcoming...), $g(a) = g(c)$ implies that $a = c$.
-Surjective: For all $x, y \in H$, there exist $a, b \in G : f(a,b) = (x, y)$. -Man, this really seems trivial, but without my definition of $g$, I feel like I'm handwaving... -Thanks again, guys! - -REPLY [8 votes]: Note that it is enough to assume that both $G$ and $H$ are finite $p$-torsion groups for some prime $p$, because under any isomorphism $p$-torsion elements go to $p$-torsion elements, and any finite abelian group is a direct sum of $p$-torsion groups for finitely many distinct primes $p$. So both $G$ and $H$ have the form $\mathbb{Z}/p^{n_1}\mathbb{Z} \times \mathbb{Z}/p^{n_2}\mathbb{Z} \times \cdots \times \mathbb{Z}/p^{n_m}\mathbb{Z}$ for some prime $p$. Now you may use the structure theorem.<|endoftext|> -TITLE: convergence of the maximum of a series of identically distributed variables -QUESTION [10 upvotes]: My friend and I have been stumped on this problem for a little while and I thought asking for tips couldn't hurt (we did ask the teacher, but we got other problems after) -Here is the question: -Let $\{X_n\}_{n \geq 1}$ be a sequence of random variables defined on the same probability space $(\Omega, F, \mathbb{P})$, with the same law and with finite expected value ($E(|X_1|)<\infty$). Let -$Y_n = n^{-1} \max_{1 \leq i \leq n} |X_i|$. -Show that -$\lim_{n\rightarrow \infty} E(Y_n) = 0$ -and -$Y_n \rightarrow 0$ almost surely. -We have ideas for many parts of the proof; for example, for the first one it would suffice to show that the expected value of the max of all $|X_i|$ is finite... and since the max is one of the $|X_i|$ for each $\omega \in \Omega$ it seems reasonable, but we're not sure how to show it. -We also tried splitting the integral for the expected value into a partition of $\Omega$ considering the sets on which $X_i$ is the max, but didn't get too far with that. -For the second part, I think we could show it if we knew that $X_i(\omega)$ diverges for only a measure 0 set, but it's not that obvious (I think). -Any pointers to the right direction appreciated! - -REPLY [7 votes]: Assume without loss of generality that $X_1$ is almost surely nonnegative. -Almost sure convergence -This is a consequence of the first Borel-Cantelli lemma. To see this, fix any positive $x$. Then $P(X_n\ge nx)=P(X_1\ge nx)$ for every $n$ and -$$\sum_{n\ge1}P(X_1\ge nx)\le x^{-1}E(X_1), -$$ -hence the series of general term $P(X_n\ge nx)$ converges. By the first Borel-Cantelli lemma, the limsup of the events $[X_n\ge nx]$ has probability $0$. This means that, almost surely, $X_n \lt nx$ for every $n$ large enough, hence $\limsup_n Y_n \le x$ almost surely; since $x \gt 0$ was arbitrary, $Y_n \to 0$ almost surely. - -REPLY: For the convergence of expectations, fix $M \gt 0$ and write $X_i = U_i + V_i$, where $U_i = X_i \mathbf{1}_{\{X_i \le M\}}$ and $V_i = X_i \mathbf{1}_{\{X_i \gt M\}}$. Then $Y_n \le \frac{1}{n} (\max_{i \le n} U_i + \max_{i \le n} V_i)$. The first term is bounded by $M$, and the second by $V_1 + \dots + V_n$. Taking expectations, $E Y_n \le \frac{M}{n} + E V_1$, so $\limsup_{n \to \infty} E Y_n \le E V_1$. By choosing $M$ large enough, $E V_1$ can be made as small as desired (think dominated convergence). -For almost sure convergence, the argument that went here previously was wrong.<|endoftext|> -TITLE: What is $\gcd(0,a)$, where $a$ is a positive integer? -QUESTION [28 upvotes]: I have tried $\gcd(0,8)$ in a lot of online gcd (or hcf) calculators, but some say $\gcd(0,8)=0$, some others give $\gcd(0,8)=8$, and still others give $\gcd(0,8)=1$. So which one of these is really correct, and why are there different conventions? - -REPLY [4 votes]: It might be partly a matter of convention. However, I believe that stating that $\gcd(8,0) = 8$ is safer. In fact, $\frac{0}{8} = 0$, with no remainder. The check for a division is that "dividend = divisor $\times$ quotient plus remainder".
In our case, $0$ (dividend) $= 8$ (divisor) $\times\ 0$ (quotient). No remainder. Now, why should $8$ be the GCD? Because, while the same method can be used for all numbers, proving that $0$ has infinitely many divisors, the greatest common divisor cannot be greater than $8$, and, for the reason given above, it is $8$.<|endoftext|> -TITLE: Isoperimetric inequalities of a group -QUESTION [7 upvotes]: How do you transform isoperimetric inequalities of a group to the setting of Riemann integrals of functions of the form $f\colon \mathbb{R}\rightarrow G$, where $G$ is a metric group, so that being $\delta$-hyperbolic in the sense of Gromov is expressible via Riemann integration? -In other words, how do you define "being a $\delta$-hyperbolic group" by using integrals in metric groups? -(Note: I am not interested in the "Riemann" part, so you are free to take commutative groups with Lebesgue integration, etc.) - -REPLY [9 votes]: You can do this using metric currents in the sense of Ambrosio-Kirchheim. This is a rather new development of geometric measure theory, triggered by Gromov and really worked out only in the last decade. I should warn you that this is rather technical stuff and nothing for the faint-hearted. -Urs Lang has a set of nice lecture notes, where you can find most of the relevant references, see here. -My friend Stefan Wenger has done quite a bit of work on Gromov hyperbolic spaces and isoperimetric inequalities; his Inventiones paper Gromov hyperbolic spaces and the sharp isoperimetric constant seems most relevant. You can find a link to the published paper and his other work on his home page; the arXiv preprint is here. - -I should add that I actually prefer to prove that a linear (or subquadratic) isoperimetric inequality implies $\delta$-hyperbolicity using a coarse notion of area (see e.g. Bridson-Haefliger's book) or using Dehn functions; the latter can be found in Bridson's beautiful paper The geometry of the word problem.<|endoftext|> -TITLE: what type of math is this? -QUESTION [5 upvotes]: I am a total newbie to the world of math and was interested in learning. I just finished my degree (non-math) and am going to study a few math books to see if it interests me to apply for something more quantitative, but I want to study something interesting with interesting problems that won't bore me. -I thought about it, and thought of the type of problems that interest me. One is predicting the future and the other is predicting the past. Here's a problem that I think would be cool. Say you have a list -calories 89, 34, 67, 43, 54, 232, 623 - -and someone tells you that someone had a total of "6553" calories in a day. What type of math would try to figure this out? Is it algebra? (By the way, to construct this question all I did was take each value above and multiply the first one by 1, the second one by 2, and so on up to 7.) - -REPLY [7 votes]: To cast the problem a little more clearly, you have a number of "weights", $w_1,\ldots,w_n$, in this case: -\begin{align*} -w_1 &= 89\\ -w_2 &= 34\\ -w_3 &= 67\\ -w_4 &= 43\\ -w_5 &= 54\\ -w_6 &= 232\\ -w_7 &= 623, -\end{align*} -and a "target total" $T$, in this case $T=6553$. You want to find nonnegative integers $a_1,\ldots,a_n$ such that -$$a_1w_1 + \cdots + a_nw_n = T.$$ -In its broadest sense, this is an example of what is called a Diophantine equation (an equation in which we require the solutions to be nonnegative integers, or more generally rational numbers).
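-A small computational sketch (my own illustration, with made-up variable names): one way to search for nonnegative integer solutions of such an equation is dynamic programming over the target value.
-
-    # Find nonnegative integers a_1..a_n with a_1*w_1 + ... + a_n*w_n = T.
-    w = [89, 34, 67, 43, 54, 232, 623]
-    T = 6553
-    dp = [None] * (T + 1)            # dp[t] = some multiplier tuple reaching t, or None
-    dp[0] = (0,) * len(w)
-    for s in range(1, T + 1):
-        for i, wi in enumerate(w):
-            if s >= wi and dp[s - wi] is not None:
-                a = list(dp[s - wi]); a[i] += 1
-                dp[s] = tuple(a)
-                break
-    print(dp[T])                                        # one valid multiplier tuple
-    print(sum((i + 1) * wi for i, wi in enumerate(w)))  # the intended 1..7 pattern: 6553
-
-Note that solutions need not be unique; the search merely exhibits one.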
They are studied in the branch of mathematics called Number theory.<|endoftext|> -TITLE: Largest $\sigma$-algebra on which an outer measure is countably additive -QUESTION [12 upvotes]: If $m$ is an outer measure on a set $X$, a subset $E$ of $X$ is called $m$-measurable iff -$$ -m(A) = m(A \cap E) + m(A \cap E^c) -$$ -for all subsets $A$ of $X$. -The collection $M$ of all $m$-measurable subsets of $X$ forms a $\sigma$-algebra and $m$ is a complete measure when restricted to $M$. -Is $M$ the largest $\sigma$-algebra on $X$ on which $m$ is a measure (i.e., on which $m$ is countably additive)? If not, what is? -Is $M$ the largest $\sigma$-algebra on $X$ on which $m$ is a complete measure? If not, what is? -I am especially interested in the case when $X$ is $\mathbb{R}$ or $\mathbb{R}^n$ and $m$ is the Lebesgue outer measure. In this case $M$ is the Lebesgue $\sigma$-algebra. -ADDED: -Julián Aguirre (thanks!) has shown in his response below that the answer to the first question is yes when $X$ is $\mathbb{R}^n$ and $m$ is the Lebesgue outer measure. Hence the answer to the second question in this situation is also yes. - -REPLY [4 votes]: The answer to the first question (and the one in the title) is yes when $M$ is the $\sigma$-algebra of Lebesgue measurable subsets of $\mathbb{R}^n$. Suppose $N$ is another $\sigma$-algebra such that $M\subsetneq N$. Then there exists $E\in N$ non measurable. Since $E=\cup_{k=1}^\infty (E\cap\{x\in\mathbb{R}^n:|x|\le k\})$, for at least one $k$ the set $E\cap\{x\in\mathbb{R}^n:|x|\le k\}$ is not measurable. Thus, we may assume without loss of generality that $E$ is bounded, and in particular, $m(E)<+\infty$. -Since $E$ is not measurable, there exists $\epsilon>0$ such that if $G$ is an open set and $E\subset G$, then $m(G\setminus E)\ge \epsilon$. (This follows from an equivalent definition of measurable set; cf. Proposition 15 on p.63 of Royden's Real Analysis, 3 ed.) -Now, since $m(E)=\inf\{m(G):G\text{ open, }E\subset G\}$, there exists an open set $O$ such that $E\subset O$ and $m(E)\ge m(O)-\epsilon/2$. Then $O=(O\setminus E)\cup E$ but -$$ -m(O\setminus E)+m(E)\ge\epsilon+ m(O)-\epsilon/2>m(O), -$$ -and hence $m$ is not additive on $N$. -For the general case it is easy to see that if $N$ is another $\sigma$-algebra such that $M\subsetneq N$, then there exists $A\subset X$ such that $m$ is not additive on the $\sigma$-algebra generated by $N\cup\{A\}$, but I have not been able to show that it is not additive on $N$.<|endoftext|> -TITLE: Does $G\oplus G \cong H\oplus H$ imply $G\cong H$ in general? -QUESTION [21 upvotes]: In this question, The Chaz asks whether $G\times G\cong H\times H$ implies that $G\cong H$, where $G$ and $H$ are finite abelian groups. The answer is to his question is yes, by the structure theorem for finite abelian groups, as noted in the answer by Anjan Gupta. -Even though I don't know the first thing about categories -- except for the things that I do know -- I'm wondering if and how such property could be expressed and proven in terms of universal properties, without actually manoevring inside the objects. For instance one may attempt to create a morphism $G\to H$ somehow appealing to the universal property of $\oplus$, and subsequently show this morphism is an isomorphism by chasing diagrams. But it seems likely that the existence of a structure theorem of some sort will be required. -This question may be considered trivial or weird for someone who's fluent with categories, I don't know. 
This topic contains quite a few references. I haven't really worked through any of them (yet) but I couldn't find anything helpful at first sight. --- edit, So a more precise title would have been "Under what conditions that can be expressed in a universal way does $G\oplus G\cong H\oplus H$ imply $G\cong H$?" but I don't like long titles. --- edit2, By the comment by Alexei Averchenko, maybe it's more natural to ask this question with the product instead of the coproduct. An answer to my question with the 'real' product would be appreciated too, obviously. - -REPLY [21 votes]: Even in the category of all abelian groups, $G\oplus G\cong H\oplus H$ does not imply $G\cong H$: there are countably generated torsion-free abelian groups $G$ such that $G\not\cong G\oplus G$, but $G\cong G\oplus G\oplus G$. Setting $H=G\oplus G$ gives an example where the implication does not hold. (The first examples were constructed by A.L.S. Corner, On a conjecture of Pierce concerning direct decompositions of abelian groups, Proc. Colloq. Abelian Groups (Tihany, 1963), Akadémiai Kiadó, Budapest, 1964, pp. 43-48; he proves that for any positive integer $r$ there exists a countable torsion-free abelian group $G$ such that the direct sum of $m$ copies of $G$ is isomorphic to the direct sum of $n$ copies of $G$ if and only if $m\equiv n\pmod{r}$.) -Of course, the analogous result for direct products fails for groups, though it holds for groups having both chain conditions (by the Krull-Schmidt theorem). -The fact that you have categories with arbitrary coproducts/products (like the category of all groups and the category of all abelian groups) in which the result fails means that no proof via universal properties alone can exist. A proof that depended only on the universal properties would translate into any category in which products/coproducts exist, but the result is false in general even for very nice categories. -Whether the conditions for the implication to hold can be expressed via universal conditions also seems to me to be doubtful. Universal conditions don't lend themselves easily to statements of the form "If something happens for $A$ and $B$, then it happens for $f(A)$ and $f(B)$" (where by "$f(-)$" I just mean "this other object cooked up or related to $A$ in some way") unless the construction/relation happens to be categorical.<|endoftext|> -TITLE: Given a matrix $A$ find a matrix $C$ such that $C^3$=$A$ -QUESTION [6 upvotes]: This is a question I had on a test; we were told not to use brute force but to figure out a smart way to solve the problem. -We have a matrix $A = \begin{bmatrix} 2 & 3\\ 3 & 2 \end{bmatrix}$. Find a matrix $C$ such that $C^{3}=A$. -What is the 'smart not brute-force' way to solve this, without picking numbers, looking for patterns, and so on? -It was in the "eigenvalues" section, in the end. - -REPLY [2 votes]: In fact, the only $2 \times 2$ matrices that do not have cube roots (over the complex numbers) are those with Jordan canonical form $\left[ \matrix{0 & 1\cr 0 & 0\cr}\right]$.
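-For the matrix in the question, the eigenvalue route is quick to carry out numerically. A short NumPy sketch of mine (it uses the fact that $A$ is symmetric, so it diagonalizes orthogonally, and takes the real cube roots of the eigenvalues $-1$ and $5$):
-
-    import numpy as np
-
-    A = np.array([[2.0, 3.0], [3.0, 2.0]])
-    w, Q = np.linalg.eigh(A)                  # eigenvalues [-1, 5] with orthonormal Q
-    C = Q @ np.diag(np.cbrt(w)) @ Q.T         # C = Q diag(cbrt(w)) Q^{-1}
-    print(np.round(C, 4))                     # ~[[0.355, 1.355], [1.355, 0.355]]
-    print(np.round(np.linalg.matrix_power(C, 3), 8))   # recovers A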
-The 3 x 3 matrices with no cube root are those with Jordan form -$\left[ \matrix{0 & 1 & 0\cr 0 & 0 & 1\cr 0 & 0 & 0\cr}\right]$ or -$\left[ \matrix{\lambda & 0 & 0\cr 0 & 0 & 1\cr 0 & 0 & 0\cr}\right]$.<|endoftext|> -TITLE: Closed form for some integrals related to the complementary error function -QUESTION [5 upvotes]: While studying the use of the trapezoidal rule for numerically evaluating the complementary error function $\mathrm{erfc}(z)$, the following integrals showed up when I was trying to derive expressions for the truncation error: -$$\int_0^\pi \exp\left(-z^2\tan^2\frac{u}{2}\right)\cos(2mu) \mathrm du$$ -where $z$ is positive and $m$ is a positive integer. -Evaluating a bunch of these integrals in Mathematica, I gather that these integrals follow the pattern -$$\pi z^2\exp(z^2)\mathrm{erfc}(z)R_n(z)-2\sqrt{\pi}z S_n(z)$$ -where $R_n(z)$ and $S_n(z)$ are polynomials. -Are there any closed forms for these two polynomials? - -REPLY [3 votes]: I played around with this for a while, but didn't find a complete answer. -Anyway, here's some partial information, in case that's of any help. -The substitution $t=\tan(u/2)$ turns your integral into -$$\int_0^\infty e^{-z^2 t^2} \cos(4 m \arctan t) \frac{2 dt}{1+t^2}.$$ -Since $\exp(i \arctan t) = \frac{1+it}{\sqrt{1+t^2}}$, this can be written as -$$\Re \int_{-\infty}^\infty e^{-z^2 t^2} \frac{(1+it)^{4m}}{(1+t^2)^{2m+1}} dt -= \sum_{k=0}^{2m} \int_{-\infty}^\infty e^{-z^2 t^2} \frac{\binom{4m}{2k}(-1)^k t^{2k}}{(1+t^2)^{2m+1}} dt.$$ -By writing $t^{2k} = ((t^2+1)-1)^k$ and expanding using the binomial theorem, -this integral can be reduced to a sum of integrals of the form -$$ J_n(z) = \int_{-\infty}^\infty e^{-z^2 t^2} \frac{1}{(1+t^2)^n} dt.$$ -We have $J_0(z) = \sqrt{\pi}/z$ right away, and also $J_1(z) = \pi \exp(z^2) \mathrm{erfc}(z)$; -the latter equality follows since both sides satisfy the differential equation -$(d/dz)(\exp(-z^2) f(z)) = -2 \sqrt{\pi} \exp(-z^2)$, -and both sides agree at $z=0$. -Moreover, the following two-term recursion relation holds: -$$ (2m-2) J_m(z) - (2m-3-2z^2) J_{m-1}(z) - 2z^2 J_{m-2}(z) = 0.$$ -Proof: Verify by direct calculation that left-hand side is the integral of -$\frac{\partial}{\partial t} \left( \frac{t \exp(-t^2 z^2)}{(1+t^2)^{(m-1)}} \right)$. -From this it follows that each $J_n$ is a linear combination of $J_0$ -and $J_1$ with coefficients that are polynomials in $z^2$, and this -also implies that the pattern that you've observed really is correct, -but it seems tricky to get a nice explicit form for those polynomials...<|endoftext|> -TITLE: Why do we restrict the definition of Lebesgue Integrability? -QUESTION [45 upvotes]: The function $f(x) = \sin(x)/x$ is Riemann Integrable from $0$ to $\infty$, but it is not Lebesgue Integrable on that same interval. (Note, it is not absolutely Riemann Integrable.) -Why is it we restrict our definition of Lebesgue Integrability to absolutely integrable? Wouldn't it be better to extend our definition to include ALL cases where Riemann Integrability holds, and use the current definition as a corollary for when the improper integral is absolutely integrable? - -REPLY [4 votes]: If one thinks about an integral as giving you the area under a curve, this area should be expected to behave in a natural fashion, ie, if you chop it up into pieces and rearrange them via translations, the area should be preserved. -This property will hold for any $L^1$ function, but it need not be true for a function which has only an improper integral. 
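-The example from the question is easy to watch numerically; the following rough sketch of mine (a plain trapezoid rule, nothing more) shows the improper integral of $\sin(x)/x$ settling near $\pi/2$ while the integral of $|\sin(x)/x|$ keeps growing:
-
-    from math import sin, pi
-
-    def trapezoid(f, a, b, n=200000):
-        h = (b - a) / n
-        return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))
-
-    f = lambda x: sin(x) / x if x else 1.0     # value 1 at x = 0
-    for b in (10.0, 100.0, 1000.0):
-        print(b, trapezoid(f, 0.0, b), trapezoid(lambda x: abs(f(x)), 0.0, b))
-    print(pi / 2)   # the signed integrals approach this; the absolute ones grow like log(b)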
-It might be helpful to think about infinite sums- absolutely convergent sums can be thought of as integrals of $L^1$ functions. Sums which are convergent but NOT absolutely convergent have peculiar properties. For example, take $\displaystyle \sum_{n=1}^{\infty} \frac{ (-1)^n }{n} $. -Although this sum has a classical limit, we may permute the terms in the sum to allow the limit to take any finite value. As a rough sketch of this process, we can consider the positive elements and the negative elements separately — each of these sums must diverge on their own, or the sum would converge absolutely. -Given any target sum $S$, we can take positive terms until the partial sum is greater than $S$, and then add negative terms until the partial sum is below $S$. Repeating in this fashion, the partial sums will oscillate around $S$, with the error term shrinking since the positive and negative terms tend to zero by convergence of the original sum. -Clearly there is something funny going on here: We can get a number for the sum of the series, but by rearranging we could get different numbers. In this same fashion we could construct functions with the same property — if we chop up the graph of the function and rearrange it, we can change the volume of the graph. This means that we can't really assign a meaningful notion of volume to the graph of a function if it isn't integrable in absolute value.<|endoftext|> -TITLE: How to characterize recurrent and transient states of Markov chain -QUESTION [7 upvotes]: According to Wikipedia with a little rephrasing: - -A state $i$ is transient if and only - if $P(T_i < \infty) <1$, recurrent if - and only if $P(T_i < \infty) =1$, - where $T_i$ is the first hitting time - to i, i.e. $T_i=\inf\{n \in \mathbb{N} \cup \{ \infty \}: X_n=i \mid X_0=i \}.$ - -If I understand correctly, this can be used as the definition of transient/recurrent state. -Usually $P(T_i < \infty)$ is written as a series $\sum_{n \in \mathbb{N}} P(T_i = n)$. But I would like to learn other ways to tell if a state is recurrent/transient, which might be easier in some cases. - -For example, can a -transient/recurrent state be -completely characterized in terms of -closed subsets of states (defined similarly as an absorbing state), as follows -(my own quote)? - -State $i$ is transient if and only if - there exists a closed subset $S$ of - states, s.t. $i \notin S$ and there exists $s \in S$ - and $n \in \mathbb{N}$ and the - $n$-step transition probability - $p_{is}^{(n)} > 0$. -Similarly, State $i$ is recurrent if - and only if there does not exist such a - closed subset of states as described above? - -Can we also characterize positive/null -recurrence in terms of closed -subsets of states? -Off the top of your head, what are some other necessary and/or sufficient -conditions for recurrent/transient -and positive/null recurrent state? - -Thanks and regards! - -REPLY [6 votes]: Tim's characterization of states in terms of closed sets is correct -for finite state space Markov chains. Partition the state -space into communicating classes. Every recurrent class is closed, -but no transient class is closed (because the chain must eventually -get "stuck" in some recurrent class). The part in parentheses is false -for infinite state space chains, as Didier's answer shows. 
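-For a finite chain, this closed-class criterion is purely structural: it needs only the pattern of which one-step transitions have positive probability, not their values. Here is a small Python sketch of that test (my own illustration; the 3-state chain at the end is made up for the example):
-
-def communicating_classes(adj):
-    # adj[i] = set of states j with one-step transition probability p_ij > 0
-    n = len(adj)
-    def reach(i):
-        seen, stack = {i}, [i]
-        while stack:
-            for j in adj[stack.pop()]:
-                if j not in seen:
-                    seen.add(j)
-                    stack.append(j)
-        return seen
-    R = [reach(i) for i in range(n)]
-    classes = []
-    for i in range(n):
-        cls = {j for j in range(n) if j in R[i] and i in R[j]}
-        if cls not in classes:
-            classes.append(cls)
-    return classes
-
-def recurrent_states(adj):
-    # a communicating class is closed iff no transition leaves it
-    rec = set()
-    for cls in communicating_classes(adj):
-        if all(adj[i] <= cls for i in cls):
-            rec |= cls
-    return rec
-
-adj = [{1}, {0}, {1, 2}]          # states 0,1 form a closed class; 2 leaks into it
-print(recurrent_states(adj))      # {0, 1}, so state 2 is transient
-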
-Another well-known characterization is that a state $i$ is transient
-if and only if $$\sum_{n=1}^\infty P(X_n=i | X_0=i)<\infty.$$ This criterion is used,
-for example, to prove Polya's result that the symmetric random walk on $\mathbb{Z}^d$ is recurrent if $d=1,2$, but transient when $d\geq 3$.
-Similarly, the probability
-$$P(X_n=i\mbox{ for infinitely many } n | X_0=i)$$
-is equal to zero or one, depending on whether the state $i$ is transient or recurrent.<|endoftext|>
-TITLE: Applications of the wreath product?
-QUESTION [36 upvotes]: We recently went through the wreath product in my group theory class, but the definition still seems a bit unmotivated to me. The two reasons I can see for it are 1) it allows us to construct new groups, and 2) we can use it to reconstruct imprimitive group actions. Are there any applications of the wreath product outside of pure group theory?
-
-REPLY [2 votes]: Here are two places the wreath product is used in "real" applications.
-
-The representation theory of $S_m\wr S_n$ can be used to prove results about voting for committees of a certain type. (Published at https://doi.org/10.1016/j.aam.2020.102077)
-The wreath product has been used in analyzing a number of different music theoretic concerns; see Peck's article for a survey.
-
-In both cases, basically the object of study and its symmetries forced consideration of the wreath product rather than a larger symmetric group. I think that's pretty cool.<|endoftext|>
-TITLE: How to calculate $E[(\int_0^t{W_sds})^n], n \geq 2$
-QUESTION [5 upvotes]: Let $W_t$ be a standard one-dimensional Brownian motion with $W_0=0$ and $X_t=\int_0^t{W_sds}$.
-With the help of Itô's formula, we can get
-$$E[(X_t)^2]=\frac{1}{3}t^3$$
-$$E[(X_t)^3]=0$$
-When I try to employ the same method to calculate the general case $E[(X_t)^n]$, I get stuck.
-I guess $X_t$ should be normally distributed since it is the limit of Riemann sums
-$$\lim_{n\rightarrow \infty}{\sum_{i=0}^{n-1}{W_{t_i}(t_{i+1}-t_i)}},$$
-where each $W_{t_i}\sim N(0,\sqrt{t_i})$.
-If this is true, the problem would be trivial.
-Update: Thanks for all the suggestions. Now I believe $X_t$ is a Gaussian process.
-How about this integral:
-$$Y_t=\int_0^t{f(W_s)ds}$$
-if we assume that $f$ is some good function, say polynomial or exponential, i.e.
-$$Y_t=\int_0^t{e^{W_s}ds}$$
-$$Y_t=\int_0^t{[a_n(W_s)^n+a_{n-1}(W_s)^{n-1}+...+a_0]ds}$$
-
-REPLY [5 votes]: The random variable $X_t$ is Gaussian for the reasons in Didier's answer.
-You can calculate the variance directly (without Itô's formula) as follows:
-$$\mathbb{E}\left[\left( \int^t_0 W_s ds \right)^2\right]
-= \int^t_0 \int^t_0 \mathbb{E}(W_r W_s) dr ds
-= \int^t_0 \int^t_0 (r\wedge s) dr ds ={t^3\over 3}.$$<|endoftext|>
-TITLE: Arc length of the Cantor function
-QUESTION [15 upvotes]: How does one find the arc length of the Cantor function? Wikipedia says that the length is 2. I can "see" that the length is at most 2 by a simple triangle inequality argument. I am struggling to come up with a partition P such that the arc length is at least 2. I tried a partition of the form $\{ k/3^{n} : 0 \le k \le 3^{n} \}$ but I guess I am making some mistake in my calculation, since I get the length as 3/4 instead of close to 2.
-
-REPLY [14 votes]: Possibly the easiest way to see this is to note that any partition divides $[0,1]$ into two kinds of intervals: those on which $f$ is constant, and those on which it is not.
The contribution of the constant intervals to the arc length is at least their total width, which can be made arbitrarily close to 1, while the contribution of the nonconstant intervals is at least 1, because on them the increments of $f$ add up to a total displacement of 1 on the $y$-axis. The result follows.<|endoftext|>
-TITLE: Help understand isomorphism of tensor product and connection to vector spaces
-QUESTION [6 upvotes]: Well, I'm having a hard time understanding the tensor product. Here's a problem from Atiyah and Macdonald's book:
-Let $A$ be a non-trivial ring and let $m,n$ be positive integers. Let $f: A^{n} \rightarrow A^{m}$ be an isomorphism of $A$-modules. Show this implies that $n=m$.
-Well, the solution is as follows:
-Let $m$ be a maximal ideal of $A$. Then we have an induced isomorphism:
-$(A/m) \otimes_{A} A^{n} \rightarrow (A/m) \otimes_{A} A^{m}$.
-Now it says that this is an isomorphism between vector spaces of dimension $n$ and $m$.
-My questions are:
-1) How do we know that $(A/m) \otimes_{A} A^{n}$ is a vector space over $A/m$?
-2) How do we know it has exactly dimension $n$?
-Is there some "standard" theorem that tells us this? Can you please explain this in detail?
-Thanks
-
-REPLY [2 votes]: The original question you are trying to solve (that $A^m \cong A^n$ as $A$-modules implies $m = n$) does not need tensor products. If $M$ and $N$ are isomorphic $A$-modules then, for any ideal $I$ in $A$, $M/IM$ and $N/IN$ are isomorphic $A/I$-modules. More precisely, if $f \colon M \rightarrow N$ is an $A$-module isomorphism with inverse $g$, then the induced map $M/IM \rightarrow N/IN$ given by $x \bmod IM \mapsto f(x) \bmod IN$ is well-defined and $A/I$-linear, and it's an $A/I$-module isomorphism because the map $y \bmod IN \mapsto g(y) \bmod IM$ going the other way is an inverse (just check on each element that these two maps undo each other). Taking $M = A^m$, $N = A^n$, and $I = \mathfrak m$ to be a maximal ideal in $A$, we get from an $A$-module isomorphism of $A^m$ with $A^n$ that the $A/\mathfrak m$-vector spaces $A^m/{\mathfrak m}A^m$ and $A^n/{\mathfrak m}A^n$ are isomorphic. Now it remains to show $A^m/IA^m \cong (A/I)^m$ for any ideal $I$, and this is left to you. I think this is useful because it makes this result available if you are teaching an abstract algebra class without having to introduce tensor products already. Tensor products are of course very important, but they need not be used here. In fact, later on in the course when you have tensor products you can come back and reprove this result on isomorphisms with them and observe that the argument is basically the same as the one that didn't use tensor products.
-After learning about tensor products, and then exterior powers, you can prove finer properties of $A$-linear maps $A^m \rightarrow A^n$. See Corollary 5.11 in
-http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/extmod.pdf, which incidentally provides a third way to handle the original question via exterior powers without using maximal ideals in $A$.<|endoftext|>
-TITLE: Countability of local maxima on continuous real-valued functions
-QUESTION [17 upvotes]: I am working through a bank of previous exams and couldn't figure a problem out to my satisfaction.
-
-Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous function.
-
-Show that $f$ can have at most countably many strict local maxima.
-Assume that $f$ is not monotone on any interval. Then show that the local
-maxima of $f$ are dense in
-$\mathbb{R}$.
-
-REPLY [13 votes]: For each $\delta>0$, the set of all $x\in\mathbb{R}$ such that $f(y)<f(x)$ whenever $0<|x-y|<\delta$ is countable: two distinct such points cannot lie within distance $\delta$ of each other (each would have to take the strictly larger value), so the set is $\delta$-separated. Every strict local maximum lies in one of these sets with $\delta=1/n$ for some positive integer $n$, and a countable union of countable sets is countable. This proves the first part.
-For the second part it suffices to produce a local maximum of $f$ in $(0,1)$; the same argument applies to any other interval after translating and rescaling. Since $f$ is not monotone on $(0,\tfrac12)$, that interval contains a pair $x_1<x_2$ with $f(x_1)<f(x_2)$, and since $f$ is not monotone on $(\tfrac12,1)$, that interval contains a pair $y_1<y_2$ with $f(y_1)>f(y_2)$. Because $f$ is continuous, there is a point $x_0\in [x_1,y_2]$ where $f$ has its maximum value. Because $f(x_2)>f(x_1)$ and $f(y_1)>f(y_2)$, $x_0$ is not an endpoint of $[x_1,y_2]$. Therefore $f$ has a local maximum at $x_0$.<|endoftext|>
-TITLE: Intuition behind arc length formula
-QUESTION [11 upvotes]: I understand the arc length formula is derived from adding the distances between a series of points on the curve, and using the mean value theorem to get:
-$ L = \int_a^b \sqrt{ 1 + (f'(x))^2 } dx $
-But is there an intuition here I'm missing? Something about taking the integral of the derivative seems like it should mean something...
-
-REPLY [15 votes]: It might help instead to think of our function as a parametrized curve in the plane. In other words, consider the curve $(x(t), y(t))$ for $t\in[a,b]$. Then it's clear that the differential $ds$ which lies along the curve is given by $ds=\sqrt{dx^2+dy^2}$.
-Now we integrate $ds$ with respect to $t$ and the integral becomes $\displaystyle \int_a^b{\sqrt{(\frac{dx}{dt})^2+(\frac{dy}{dt})^2}}dt$.
-Your formula follows by setting $x(t)=t$, and $y(t)=f(t)$.<|endoftext|>
-TITLE: Joint moments of Brownian motion
-QUESTION [11 upvotes]: My approach to this SE question uses the following joint moments of
-Brownian motion. For $n=1,2$ they are obvious and well-known, the others
-are not terribly hard to work out. Is there a reference where these
-formulas are given, and/or is there a pattern to the coefficients?
-Fix $t_1\leq t_2\leq t_3\leq\cdots \leq t_n$.
-For odd values of $n$ we have $\mathbb{E}[W(t_1)\ W(t_2) \cdots W(t_n)]=0$
-while for even values of $n$ we get
-\begin{eqnarray*}
-\mathbb{E}[W (t_1)\ W(t_2)]&=& t_1 \cr
-\mathbb{E}[W (t_1)\ W(t_2)\ W(t_3)\ W(t_4)]&=& 2t_1 t_2+t_1t_3 \cr
-\mathbb{E}[W (t_1)\ W(t_2)\ W(t_3)\ W(t_4)\ W(t_5)\ W(t_6)]&=& 2t_1t_2t_5+t_1 t_3 t_5 +4 t_1 t_2 t_4 +2 t_1 t_3 t_4 +6 t_1 t_2 t_3
-\end{eqnarray*}
-I suppose everything about Brownian motion
-has been worked out, but I can't find this in any of my books.
-It's not very important, but I'm just curious!
-
-REPLY [6 votes]: Another formula for this joint moment can be found as Lemma 4.5 in a paper by J. Rosen and M.B. Marcus [Annals of Probability, vol. 20, no. 4 (1992) pp. 1603-1684]; the authors refer to it as "well-known". The formula is this (for even $n$): The joint moment $E[W(t_1)W(t_2)\cdots W(t_n)]$ is the sum over all pairings $\{\{a(1),b(1)\},\{a(2),b(2)\},\ldots,\{a(n/2),b(n/2)\}\}$ of $\{1,2,\ldots,n\}$ of
-$$
-\prod_{i=1}^{n/2}E[W(t_{a(i)})\,W(t_{b(i)})].
-$$
-A pairing is simply a partition of $\{1,2,\ldots,n\}$ into $n/2$ doubletons.<|endoftext|>
-TITLE: The Use of the Axiom of Choice in an Elementary Proof
-QUESTION [20 upvotes]: I wanted to give some of the new undergrad analysis students the following problem: given the real numbers (with the standard topology, as they'd expect) one cannot have an uncountable set such that every point is an isolated point. A few of my fellow grad students attempted solutions first.
-A sketch of a potential proof was given as follows: take each point and create as large a ball as is possible without it containing any other point of the set, so say $(a-\alpha, a+\alpha)$. Then construct corresponding balls with a smaller radius: $(a-\frac{\alpha}{2}, a+\frac{\alpha}{2})$. These new balls will intersect each other trivially.
Then choose a rational point from each of these new balls. This set will be in one-to-one correspondence with the balls, which are in one-to-one correspondence with the points of the original set.
-After presenting this proof, it was argued that it requires the (non-finite) axiom of choice (as we are picking one point from each of a potentially uncountable family of sets). We modified the proof by using the density of the rationals to pick rationals $p_{a},q_{a}$ such that $(a-p_{a},a+q_{a})\subseteq (a-\frac{\alpha}{2}, a+\frac{\alpha}{2})$, and then have our function pick the "left-most" end-point. It was still argued that this used the axiom of choice.
-Because I am not entirely familiar with what constitutes use of the axiom of choice, I wanted to open the question up to all of you. Do these proofs require the (non-finite) axiom of choice? If so, is there a proof you know of which does not require it?
-
-REPLY [6 votes]: As I mentioned recently in another answer, this problem comes down to two simple facts:
-1) A subspace of a topological space admitting a countable base (i.e., a second countable space) again admits a countable base: just intersect every element of a countable base with the subspace.
-2) A discrete space has a countable base if and only if it is countable.
-In particular there is no choice going on here.
-(I should note that the notion of "base for a topology" is more commonly encountered in a first topology course than in an undergraduate real analysis course. I can see some merits to the more explicit approach given in the question. But for those who know the smallest (nonzero!) amount of general topology, this approach seems to me to be much cleaner and simpler.)<|endoftext|>
-TITLE: Topologies on the space $\mathcal D'(U)$ of distributions
-QUESTION [7 upvotes]: In my analysis lecture I am given a topology on the space of distributions as follows:
-Let $u_k$ be a sequence in $\mathcal D'(U)$ and $u \in \mathcal D'(U)$. We say $u_k \rightarrow u$ if $\forall \phi \in \mathcal D(U) : u_k(\phi) \rightarrow u(\phi)$.
-This is the weak-$*$-topology on $\mathcal D'(U)$. It seems lecturers don't care too much about the topology of $\mathcal D'(U)$, hence I wonder whether there are stronger topologies on $\mathcal D'(U)$.
-
-REPLY [3 votes]: Some remarks:
-$D(U)$ is reflexive, even a Montel space, so the dual of $D'(U)$ with weak or strong topology is again $D(U)$ [This is in contrast to Brian's remark].
-A linear functional on $D(U)$ is continuous (i.e. a distribution) if and only if it is sequentially continuous. This is remarkable, as the space of test functions $D(U)$ is not metrizable, so sequential continuity is usually not sufficient.
-A sequence of distributions is weakly convergent if and only if it is strongly convergent [i.e. uniformly on bounded subsets of $D(U)$].
-The last remark is why usually only the weak topology is known. And Schwartz proved and mentioned this consequence quite often in his book.<|endoftext|>
-TITLE: Adjunction space is a pushout
-QUESTION [9 upvotes]: I would like to show that the diagram
-$$\begin{array}{ccc} A & \stackrel{f}{\longrightarrow} & Y \\ i \downarrow & & \downarrow {\phi_2} \\ X & \stackrel{\phi_1}{\longrightarrow} & X \coprod_f Y \end{array}$$
-where $i:A \to X$ is an inclusion is a pushout.
-Here $A$ is a closed subset of $X$, all maps given are continuous, and $X \coprod_f Y$ is the disjoint union $X \coprod Y$ quotiented by the equivalence relation generated by $\{(a,f(a)) \in (X \coprod Y) \times (X \coprod Y): a \in A\}$ (call the equivalence relation $\sim$).
-So I start with $\nu: X \coprod Y \to X \coprod_f Y$ (the natural map) and define $\phi_1 = \nu | X$, and $\phi_2 = \nu | Y$.
-Then for $a \in A$, $\phi_1(i(a)) = \nu(i(a)) = \nu(f(a)) = \phi_2(f(a))$ and the diagram is commutative.
-The other part is to show that this is unique. So let $Q$ be another space such that there exist continuous maps $\alpha_1:X \to Q$ and $\alpha_2:Y \to Q$ with $\alpha_1 \circ i = \alpha_2 \circ f$. We seek a $u: X \coprod_f Y \to Q$.
-Define the function $\Theta:X\coprod_f Y \to Q$ with $\Theta | X = \alpha_1$ and $\Theta | Y = \alpha_2$.
-Then for $a \in A$, $\Theta(i(a)) = \alpha_1(i(a)) = \alpha_2(f(a)) = \Theta(f(a))$.
-So this means that $\Theta$ respects the equivalence relation $\sim$ and descends to a map $X \coprod_f Y \to Q$ (maybe I am not saying the last bit clearly, but I think it is clear what I mean!)
-Is this all reasonable? I only ask because this whole commutative diagram thing is very new to me...
-
-REPLY [2 votes]: Yes, that's right. On the generating set of $\sim$ we have that $(a,0) \sim (f(a), 1)$ (where in the disjoint union I use $0$ as the second coordinate for elements of $X$ and $1$ for elements of $Y$) and these yield the same result: either we map under $u$ to $\alpha_1(a) = \alpha_1(i(a))$ under the requirement that $u$ must commute with $\alpha_1$, or to $\alpha_2(f(a))$ under the requirement that $u$ must commute with $\alpha_2$, if we take the second guise. But by commutativity of the diagram with $i$, $f$, $\alpha_1$ and $\alpha_2$, these 2 give the same value in $Q$.
-So on the generating set we have no conflict in the definition of $u$, so now you must show that $u$ is well-defined on all classes (which can be larger than 2 points, of course). You could use that $x \sim y$ iff there are finitely many steps via generating pairs (or their inverses, of course) from $x$ to $y$.
-The unicity is clear, like I said: from having to commute with $\alpha_1$ and $\alpha_2$, $u$ has to be defined like this; it just remains to verify it is actually well-defined.
-[edit] Akhil's remark in the comments above merits mention too, as I forgot: we defined $u$ on the disjoint sum originally, showed we had no conflict with the equivalence relation $\sim$, so that we have a well-defined map from the quotient. As the map from the sum is continuous (it is iff both "summands" $\alpha_1$ and $\alpha_2$ are) the standard theorem on quotient spaces (sometimes called the universal theorem for quotient maps) implies that $u$ is indeed continuous from the quotient to $Q$.<|endoftext|>
-TITLE: Quicksort with Trivalued Logic
-QUESTION [7 upvotes]: Does anyone know a way to do a quicksort with trivalued logic?
-The problem I'm trying to solve is this: I'm trying to display a view of a complex 3d object from a given viewing angle. I've broken the object into many 2d surfaces that I can draw separately, but to display the image properly, I need to determine the z-order of the surfaces – a classic computer drawing problem. It's guaranteed that none of the surfaces intersect, so the problem is solvable. It would be simple if on comparing any two surfaces, I could always determine which one is in front – then a simple mergesort would suffice. But very often, if I compare two surfaces, it'll turn out that, with the angle I'm viewing from, there's no overlap at all.
One surface is over here, and the other surface is over there, so it's impossible to say which one is in front.
-In mathematical terms, what I'm trying to do is sort a set of entities - call them $a$, $b$, $c$, etc. Transitivity is guaranteed: If $a < b$ is true and $b < c$ is true then $a < c$ is always true. But the complicating factor is the trivalued logic: $a < b$ could be unknown. A consequence is the final sorted list may contain small sets of elements within which the order doesn't matter, e.g., the result may be $a < (b, c) < d$, etc.
-Note that even if $a < b$ is unknown, other comparisons may indirectly force a certain ordering for $a$, $b$. E.g., if $a < b$ is unknown, but it turns out that $a < c = \mbox{true}$ and $b < c$ is false, then the sorted order must be $a < c < b$.
-I can solve the problem with a bubble sort, but that's bad because it requires $O(N^2)$ comparisons, and each comparison is very expensive (since it involves figuring out whether two surfaces can block each other when viewed from a certain angle). Is there a way to solve this with a faster sort (e.g., some adaptation of a mergesort)?
-
-REPLY [4 votes]: What you have is not really a trivalued logic, but a partially ordered set (aka poset). There is a fairly large body of research on sorting partially ordered sets (a quick googling for "poset sorting" gives some good hits). In particular, you may want to look for something called a "chain merge data structure". Also, a paper called "Sorting and Selection in Posets".<|endoftext|>
-TITLE: Curvature of planar implicit curves
-QUESTION [8 upvotes]: I am trying to understand how the curvature equation
-$$\kappa = -\frac{f_{xx} f_y^2-2f_{xy} f_x f_y + f_x^2 f_{yy}}{(f_x^2+f_y^2)^{3/2}}$$
-for implicit curves is derived. These curves arise from equalities such as $f(x,y)=0$. I found this on the net:
-http://www.cad.zju.edu.cn/home/zhx/GM/001/00-rep_dg.pdf
-I can follow almost everything here until pg 49, then the author jumps to the final equation and I have no idea how he's done it.
-Can anyone help, or point to other possible derivations? I understand the parametric form of the curvature equation, which is $\kappa = | \frac{d\vec{T}}{ds} |$ where $\vec{T}$ is the unit tangent, in case any parallels need to be made to that subject.
-And one more question: How do I expand the term below?
-$$\frac{\partial}{\partial x} \bigg( \frac{f_y}{\sqrt{f_x^2 + f_y^2}} \bigg)$$
-Do I have to use the Quotient Rule?
-$$\frac{d}{dx}\left(\frac{u}{v}\right) = \frac{v \frac{du}{dx} - u \frac{dv}{dx}}{v^2}$$
-In that case, I guess I would need to compute $\frac{\partial}{\partial x}(\sqrt{f_x^2+f_y^2})$. Would this be $\frac{1}{2}\frac{2f_x f_{xx} + 2f_y f_{yx}}{\sqrt{f_x^2+f_y^2}}$?
-Thanks again
-
-REPLY [9 votes]: Let $(x_0,y_0)$ be a point of the curve $\gamma$ defined by $f(x,y)=0$, and let $s\mapsto(x(s),y(s))$ with $(x(0),y(0))=(x_0,y_0)$ be the parametric representation of $\gamma$ by arc length. Note that the sense of direction of $\gamma$ is not determined a priori, whence its curvature $\kappa$ is only determined up to sign.
-From $f\bigl(x(s),y(s)\bigr)\equiv 0$ we get $f_x\dot x+ f_y\dot y\equiv 0$, and as $\dot x^2 +\dot y^2\equiv 1$ we see that (up to sign)
-$$\dot x={f_y\over\sigma},\quad \dot y=-{f_x\over\sigma}\qquad \left(\sigma:=\sqrt{f_x^2 + f_y^2}>0\right).\qquad(*)$$
-To compute the curvature $\kappa$ we have to look at the polar angle of the tangent vector $(\dot x,\dot y)$, i.e., at
-$$\theta:=\arg(\dot x,\dot y)=\arg(f_y, -f_x).$$
-The chain rule gives
-$$\kappa=\dot\theta={d\over ds}\arg(f_y,-f_x)=\nabla\arg(f_y,-f_x)\bullet\left({d\over ds}(f_y),{d\over ds} (-f_x)\right),$$
-and using the formula $\nabla\arg(u,v)=\left({-v\over u^2+v^2}, {u\over u^2+v^2}\right)$ we obtain
-$$\kappa=\left({f_x\over\sigma^2},{f_y\over\sigma^2}\right)\bullet(f_{yx}\dot x+f_{yy}\dot y,\ -f_{xx}\dot x-f_{xy}\dot y)={-f_y^2 f_{xx}+2f_xf_yf_{xy}-f_x^2f_{yy}\over\sigma^3} $$
-where we have used (*) and all partial derivatives of $f$ are to be evaluated at $(x_0,y_0)$.<|endoftext|>
-TITLE: Time until a consecutive sequence of ones in a random bit sequence
-QUESTION [9 upvotes]: This is a reformulation of a practical problem I encountered.
-Say we have an infinite sequence of random, i.i.d. bits. For each bit $X_i$, $P(X_i=1)=p$.
-What is the expected time until we get a sequence of $n$ 1 bits?
-Thanks!
-
-REPLY [5 votes]: Here is a short first-step analysis.
-Let $T_n$ denote the expected number of bits until the first run of $n$ consecutive ones, with $T_0=0$. Condition on the bit that follows the first run of $n-1$ ones: with probability $p$ it completes the run of $n$, and with probability $1-p$ it resets everything, so that
-$$T_n = T_{n-1} + 1 + (1-p)\,T_n,$$
-that is, $T_n = (T_{n-1}+1)/p$. Solving this recursion gives
-$$T_n = \sum_{k=1}^{n} p^{-k} = \frac{1-p^n}{(1-p)\,p^n}.$$
-For a fair coin ($p=\tfrac12$) this equals $2^{n+1}-2$.<|endoftext|>
-TITLE: Is $(B_{t}+t)^{2}$ a Markov process?
-QUESTION [8 upvotes]: Let $B_{t}$ be a Brownian motion relative to a filtration $F_{t}$, is $(B_{t}+t)^{2}$ a Markov process? Thanks!
-
-REPLY [15 votes]: There are two possible filtrations here -- the original filtration $\mathcal{F}_t$ generated by the Brownian motion and the one generated by the process $X$, which I'll denote by $\mathcal{F}^X_t$. So, there are two ways to interpret this question, (i) is $X$ Markov with respect to $\mathcal{F}_t$, and (ii) is $X$ Markov with respect to its own filtration $\mathcal{F}^X_t$? Unsurprisingly, the answer to (i) is no, $X$ is not Markov. To see this, it is straightforward to compute $\mathbb{E}[X_t\vert\mathcal{F}_s]$ for times $s < t$, and you get
-$$
-\mathbb{E}[X_t\vert\mathcal{F}_s]=(B_s+t)^2+t-s=X_s + 2(t-s)B_s +t^2-s^2+t-s.
-$$
-Due to the dependence on $B_s$, which is not uniquely determined by $X_s$, this is not a function of $X_s$, so the process is not Markov (which is Byron's point in his answer).
-The second question (ii) is a bit harder because computing the distribution of $X_t$ conditioned on $\mathcal{F}^X_s$ is tricky. The, perhaps surprising, answer to (ii) is yes, $X$ is Markov with respect to its own filtration! To prove this, it is necessary to show that $\mathbb{E}[f(X_t)\mid\mathcal{F}^X_s]$ is a function of $X_s$ (for times $s < t$ and bounded measurable function $f$). As a first step, note that, as the Brownian motion $B$ is Markov, $\mathbb{E}[f(X_t)\mid\mathcal{F}_s]=g(B_s)$ for a measurable function $g$. Apply the tower law for conditional expectations,
-$$
-\mathbb{E}[f(X_t)\mid\mathcal{F}^X_s]=\mathbb{E}[\mathbb{E}[f(X_t)\mid\mathcal{F}_s]\mid\mathcal{F}^X_s]=\mathbb{E}[g(B_s)\mid\mathcal{F}^X_s].
-$$
-So, to show that $X$ is Markov under its own filtration we only have to show that the distribution of $B_s$ conditioned on $\mathcal{F}^X_s$ depends only on $X_s$. Also, given $X_s$ then $B_s$ can only take one of the two possible values $-s\pm\sqrt{X_s}$. We have to show that the probability of these two possibilities does not depend on the history of $X$.
There are two main ways that I can think of showing this.
-Direct Computation
-We can directly compute the distribution of $B_s$ conditioned on $\mathcal{F}^X_s$ by breaking the time interval $[0,s]$ into $n$ discrete steps, which allows it to be computed as a ratio of probability density functions, then take the limit $n\to\infty$. Writing $h=s/n$ and $\delta w_k\equiv w_k-w_{k-1}$, the probability density function of $\hat B\equiv(B_{h},B_{2h},\ldots,B_{nh})$ can be written out by applying the independent Gaussian increments property as
-$$p(w)=(2\pi h)^{-\frac{n}{2}}\exp\left(\frac{-1}{2h}\sum_{k=1}^n(\delta w_k)^2\right).$$
-The distribution of $B_s$ conditional on $\hat X\equiv(X_h,X_{2h},\cdots,X_{nh})$ is simply given by a ratio of sums over the probability density function $p$,
-$$\mathbb{P}\left(B_s=-s+\sqrt{X_s}\;\Big\vert\;\hat X\right)=\frac{\sum_{w\in P}p(w)}{\sum_{w\in P}p(w)+\sum_{w\in P^\prime}p(w)}.$$
-Here, $P$ is the set of discrete paths for $\hat B$ agreeing with the values of $X$ and ending at $-s+\sqrt{X_s}$. Then, $P^\prime$ is the similar set of paths ending at $-s-\sqrt{X_s}$. If, as $w$ runs through $P$, we set $w^\prime_k\equiv-2kh-w_k$, then it can be seen that $w^\prime$ runs through $P^\prime$. So $\delta w^\prime_k=-2h-\delta w_k$ and,
-$$\begin{align} \sum_{w\in P^\prime}p(w)&=\sum_{w\in P}p(w^\prime)=\sum_{w\in P}(2\pi h)^{-\frac{n}{2}}\exp\left(\frac{-1}{2h}\sum_{k=1}^n(-2h-\delta w_k)^2\right)\\ &=\sum_{w\in P}p(w)\exp\left(-2nh-2w_n\right) \end{align}$$
-As $2nh+2w_n=2\sqrt{X_s}$, this can be plugged into the expression above,
-$$\mathbb{P}\left(B_s=-s+\sqrt{X_s}\;\Big\vert\;\hat X\right)=\frac{1}{1+e^{-2\sqrt{X_s}}}.$$
-This only depends on $X_s$ and, letting $n$ go to infinity, this expression also holds for the probability conditioned on $\mathcal{F}^X_s$.
-Girsanov Transformations
-The theory of Girsanov transformations tells us that, defining $U=\exp(-B_s-\frac12s)$ and the new measure $\mathbb{Q}=U\cdot\mathbb{P}$, then $\tilde B_u\equiv B_u+u$ is a standard $\mathbb{Q}$-Brownian motion on the interval $[0,s]$. Also write $V=U^{-1}=\exp(\tilde B_s-\frac12s)$, so that $\mathbb{P}=V\cdot\mathbb{Q}$. Under the measure $\mathbb{Q}$, symmetry on reflecting the Brownian motion about zero shows that $\tilde B_s$ takes the values $\pm\sqrt{X_s}$ each with probability 1/2, when conditioned on $\mathcal{F}^X_s$. The conditional expectation under the $\mathbb{P}$ measure can be converted to a conditional expectation under $\mathbb{Q}$,
-$$\begin{align} \mathbb{E}[g(B_s)\mid\mathcal{F}^X_s]&=\mathbb{E}_{\mathbb{Q}}[Vg(-s+\tilde B_s)\mid\mathcal{F}^X_s]/\mathbb{E}_{\mathbb{Q}}[V\mid\mathcal{F}^X_s]\\ &=\left(e^{\sqrt{X_s}}g(-s+\sqrt{X_s})+e^{-\sqrt{X_s}}g(-s-\sqrt{X_s})\right)/(e^{\sqrt{X_s}}+e^{-\sqrt{X_s}}) \end{align}$$
-This is a function of $X_s$, so $X$ is Markov. This method works because the change of measure "adding" a constant drift to a Brownian motion only depends on the value of the process at the end of the time interval, and is otherwise independent of the path taken.<|endoftext|>
-TITLE: Teichmüller spaces via representations
-QUESTION [13 upvotes]: I don't have much expertise in this area but I am confused by a remark I overheard regarding Teichmüller spaces.
-I was always under the impression that for a surface $S$ (say genus $\geq 2$) the Teichmüller space of $S$ was given by $\mathcal{T}(S) = \{\text{Hyperbolic structures for S} \} / \text{homotopy}$.
-I was told that this is equivalent to the space of discrete faithful representations $\phi: \pi_1(S) \rightarrow PSL_2(\mathbb{R})$ quotiented by $PGL_2(\mathbb{R})$.
-My questions are as follows:
-
-Why is the representation mapping to $PSL_2(\mathbb{R})$? Every representation I've seen is always defined as a map $\varphi: G\rightarrow GL_n(V)$. Perhaps this is a case where the word representation is overloaded and just means a correspondence, but every source I've seen uses the word representation and this is a slight point of confusion for me.
-Could you suggest a resource where the equivalence between the two definitions is proven? Everything I've read mentions that they are equivalent with very little justification.
-Are there any examples where using one definition over the other vastly eases calculations/computations?
-
-Thanks!
-
-REPLY [8 votes]: Here are some partial answers which outline how to approach this problem.
-For the first question, if $G$ is a group and $X$ is a set with some structure (e.g. $X$ might be a group or a vector space or a metric space or whatever), a homomorphism $\rho: G \to Aut(X)$, where $Aut$ refers to the fact that we consider all bijections from $X$ to $X$ which preserve its structure, is called a representation of $G$. If $X$ is a vector space, then $Aut(X)$ means the group of linear automorphisms, i.e. $GL(X)$; it is customary to say in this case that $\rho$ is a linear representation. If the homomorphism is injective, the convention is to say that the representation is faithful.
-Fix a closed oriented surface $S$ of genus $g$ with preferred basepoint $s$. Given a faithful representation $\rho: \pi_1 (S,s) \to PSL(2,\mathbb{R})$ with discrete image, we obtain a hyperbolic surface. For $PSL(2, \mathbb{R})$ can be identified with the group of orientation-preserving isometries of the hyperbolic plane $\mathbb{H}^2$, and the quotient of $\mathbb{H}^2$ by the image of $\rho$ is homeomorphic to $S$. (You will want to use covering space theory to prove this; the universal cover of $S$ is homeomorphic to $\mathbb{H}^2$. Given a point in $S$, choose a point in the fiber of the universal covering projection, and map it to the orbit of this point. This is a well-defined homeomorphism. Prove!)
-Now, if $f:S \to X$ is a marked hyperbolic surface (this means that $f$ is a homeomorphism and that $X$ is a quotient of $\mathbb{H}^2$ by a discrete group of orientation preserving isometries), then we consider the set of pairs $(X,f)$. Teichmüller space can be defined as the set of marked hyperbolic surfaces up to equivalence, where $(X,f) \sim (Y,g)$ if $gf^{-1}$ is homotopic to an isometry. What needs to be analyzed is the relationship between the induced representations $f_*$ and $g_*$ (the maps on the level of fundamental groups). The claim is that $(X,f) \sim (Y,g)$ if and only if the representations are conjugate in $PGL(2,\mathbb{R})$.
-If the two representations are conjugate via $A \in PGL(2, \mathbb{R})$, then the map $\Gamma_X.p \mapsto \Gamma_Y.(A\cdot p)$, where $\Gamma_X = \rho(\pi_1(S,s))$ and $p \in \mathbb{H}^2$ and $PGL(2, \mathbb{R})$ is identified with the full isometry group, is an isometry; keep in mind that $X = \mathbb{H}^2/\Gamma_X$ is the orbit space. To show that $gf^{-1}$ is homotopic to an isometry, you again will want to appeal to covering space theory and use the fact that $S$ is a $K(\pi,1)$ space. Proposition 1B.9 in Hatcher's text should give you some ideas. But there are many details being left to you.
There is probably a much nicer way to think about all of this; but I expect it would involve using somewhat fancier notions.<|endoftext|>
-TITLE: Prove that 2 of 3 triangles sharing one side overlap
-QUESTION [6 upvotes]: Let $C, D, E$ be three non-degenerate triangles in $\mathbb R^2$. Let $c, a, b$ be the vertices of $C$, let $d, a, b$ be the vertices of $D$, and let $e, a, b$ be the vertices of $E$. I want to show that there is one point contained in the interior of at least two of the given triangles.
-Here are my thoughts: If two of the triangles are the same then we're done, so suppose without loss of generality that $C$ and $D$ are different. Think of the triangles as simplices. By definition, this means that the sets $\{c - a, b - a\}$, $\{d - a, b - a\}$, and $\{e - a, b - a\}$ are linearly independent. Since $C$ and $D$ are different, this must mean that $\{c - a, d - a\}$ is also linearly independent, hence it is a basis of $\mathbb R^2$. This means I can write $e - a = \gamma(c-a) + \delta(d-a)$ for some appropriate $\gamma,\delta$. Now any point on the triangle $E$ can be written as $\alpha a + \beta b + \epsilon e$ where $\alpha, \beta, \epsilon$ are nonnegative and sum to 1. Using the fact that $e - a = \gamma(c-a) + \delta(d-a)$, we can write any point on the triangle $E$ as $(\alpha + \epsilon(1-\gamma-\delta))a + \beta b + \gamma\epsilon c + \delta\epsilon d$. I was hoping to make this latter sum into a convex sum with just $a, b, c$ or $a, b, d$ by picking $\epsilon$ appropriately and therefore showing that $E$ overlaps with $C$ or $D$, but alas that doesn't work.
-So, is this the right way of going about this? Or is there a better approach?
-
-REPLY [2 votes]: Draw a small circle centered at the midpoint of $[a,b]$. Each of the triangles $C$, $D$ and $E$ contains one half of this circle. Now apply the pigeonhole principle.<|endoftext|>
-TITLE: Common terms in general Fibonacci sequences
-QUESTION [7 upvotes]: Mathworld notes that "The Fibonacci and Lucas numbers have no common terms except 1 and 3," where the Fibonacci and Lucas numbers are defined by the recurrence relation $a_n=a_{n-1}+a_{n-2}$. For Fibonacci numbers, $a_1=a_2=1$; for Lucas numbers, $a_1=1$, $a_2=3$. How do you prove mathworld's statement?
-
-REPLY [4 votes]: Let $f_{n}$ and $l_{n}$ be the Fibonacci and Lucas numbers; we want to show that $f_n \ne l_m$ except for the trivial exceptions ($n=m=1$, $n=2, m=1$ and $n=4, m=2$). To see it you can consider the two sequences $f_{n+k}$ and $l_{n}$, sliding one against the other by $k$ positions. The only exceptions arise in the first few values of $k$:
-
-         k=0                 k=1                 k=2
-f_{n+k}   1  1  2  3  5       1  2  3  5  8       2  3  5  8 13
-l_{n}     1  3  4  7 11       1  3  4  7 11       1  3  4  7 11
-
-To see that in these sequences there are no more coincidences, observe that for $k=0$, putting $g_{n} = l_{1+n} - f_{1+n}$, we have $g_{1} > 1$, $g_{2} > 1$ and $g_{n}=g_{n-1}+g_{n-2}$, so $g_{n} > f_{n}$ for all $n$ and $f_{1+n} \ne l_{1+n}$ for all $n$.
-You can do exactly the same for $k = 1$ using $g_{n} = l_{1+n}-f_{2+n}$ and for $k=2$ putting $g_{n} = f_{4+n}-l_{2+n}$; in both cases $g_{1} \ge 1$ and $g_{2} \ge 1$, so $g_{n} \ge f_{n}> 0$, and in consequence $f_{n+2}\ne l_{n+1}$ and $f_{n+4}\ne l_{n+2}$ for $n \ge 1$.
-Finally, to see that for $k>2$ there are no more exceptions, the argument is the same: put $g_{n} = f_{n+k}-l_{n}$; we have $g_{1} \ge f_{4} - 1 \ge 1$ and $g_{2} \ge f_{5}-3 \ge 1$ and $g_{n}=g_{n-1}+g_{n-2}$, so $g_{n} \ge f_{n}>0$.
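-Before the remaining case, a quick brute-force check of the statement over a large range (a small Python sketch of my own; it is of course no substitute for the induction above):
-
-def seq(a1, a2, count):
-    out = [a1, a2]
-    while len(out) < count:
-        out.append(out[-1] + out[-2])
-    return out
-
-fib = set(seq(1, 1, 1000))        # 1, 1, 2, 3, 5, 8, ...
-luc = set(seq(1, 3, 1000))        # 1, 3, 4, 7, 11, 18, ...
-print(sorted(fib & luc))          # [1, 3]
-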
The case $k< 0$ is identical, using instead the function $g_{n} = l_{n-k} - f_{n}$.<|endoftext|>
-TITLE: On distributions over $\mathbb R$ whose derivative vanishes
-QUESTION [7 upvotes]: Let $I \subset \mathbb R$ be open and let $u \in \mathcal D'(I)$ be a distribution whose distributional derivative vanishes (i.e., it is zero on all test functions, which we may assume to be complex-valued).
-We show $\forall c \in \mathbb C: \forall \phi \in \mathcal D(I) : u(\phi) = \int c\cdot\phi\, dx$. (EDIT: Correctly, $c$ should be quantified with $\exists$. My question has been why the following proof doesn't allow for arbitrary complex $c$, which explains the preceding statement.)
-Proof:
-Let $\Psi \in D(I)$. $\Psi$ is the derivative of a test function iff $\int \Psi\, dx = 0$. In that case $u(\Psi) = 0$.
-Let $h \in D(I)$ be arbitrary with $\int h\, dx = 1$. Now for every test function $\phi \in D(I)$ we see:
-$\phi - \int \phi\, dx \cdot h \in D(I)$ and $\int ( \phi - \int \phi\, dx \cdot h )\, dx = 0$,
-therefore
-$u( \phi - \int \phi\, dx \cdot h ) = 0$, i.e. $u(\phi) = u(h) \int \phi\, dx$.
-$\square$
-If this is not wrong, how can I interpret the fact that $h$ has been arbitrary?
-Note: This is part of a larger proof, which shows the same for non-one-dimensional domains.
-
-REPLY [6 votes]: The function $h$ is not arbitrary; it is arbitrary with respect to the condition that $\int h\, dx = 1$. (And this latter condition is certainly necessary for the proof to go through.)
-The proof given (which seems correct to me) shows that $u(\phi)$ only depends on the value of $\int \phi$. In particular, one sees (after the proof is done) that $u(h)$ only depends on the value of $\int h\, dx$, which was fixed to be $1$. Thus $u(h)$ is independent of the choice of $h$ (as long as $\int h\, dx = 1$), and is equal to the constant $c$ in the statement of the theorem.
-[Added: As Theo points out in his answer, you have an incorrect universal quantifier on the constant $c$ in the statement of the theorem; it should read for some $c$. I didn't notice this when I read the question!]
-
-REPLY [5 votes]: The statement you're actually proving is: If the distributional derivative of $u$ vanishes then there exists $c$ such that $u(\phi) = c \int \phi$. (You're stating a completely different property: $\forall c$...) If you suppose for a moment that $u(\phi) = c \int \phi$, how can you recover $c$? Well, by evaluating $u$ at any test function $h$ with $\int h = 1$ (this is a restriction but not a severe one). Apart from the slip of confusing $\forall$ with $\exists$ your argument is correct.
-In fact, the trick of using $h$ with $\int h = 1$ is used quite often in the theory of distributions, and the fact that it doesn't matter which $h$ you take is one of the big strengths of the theory. For instance, if $h$ is such that $\int h = 1$ then $u_{n}(\phi) = \int nh(nx) \phi(x)\, dx$ approximates the Dirac $\delta_{0}$-distribution in the sense that $u_{n}(\phi) \to \phi(0) = \delta_{0}(\phi)$.<|endoftext|>
-TITLE: Bounded Linear Mappings of Banach Spaces
-QUESTION [8 upvotes]: This problem has been giving me some trouble. Does anyone have any ideas on how to go about proving this?
-Let $X$ and $Y$ be Banach spaces. If $T: X \to Y$ is a linear map such that $f \circ T \in X^*$ for every $f \in Y^*$, then $T$ is bounded.
-Thanks in advance!
- -REPLY [5 votes]: By the uniform boundedness principle applied to the set of all $f \circ T$ with $|f| = 1$ (operator norm from $Y$ to ${\bf C}$), there are two possibilities: 1) there is a constant $C$ such that $|f(T(x))| \leq C||x||$ for all $x \in X$ and all $f$ with $|f| = 1$, or 2) there exists at least one $x$ such that ${\displaystyle \sup_{|f| = 1} |f(T(x))| = \infty}$. -The second option cannot hold since $|f(T(x))| \leq ||T(x)||$ whenever $|f| = 1$. So the first option must occur; there is a constant $C$ such that $|f(T(x))| \leq C||x||$ for all $x \in X$ and all $f$ with $|f| = 1$. By the Hahn-Banach theorem, for any $y = T(x) \in Y$ one can create an $f$ with $|f| = 1$ such that $|f(T(x))| = ||T(x)||$. So we have $||T(x)|| \leq C||x||$. - -REPLY [3 votes]: Let $A=\{f\circ T \mid f\in Y^* \land \Vert f \Vert = 1\}$ (so $A\subset X^*$). Let $x\in X$. Then ${|(f\circ T)x|}\leq {\Vert f \Vert} {\Vert Tx \Vert}$, so -$$ -\sup_{\phi\in A} |\phi(x)| \leq \Vert Tx\Vert -$$ -which is finite. By the uniform boundedness principle, we have $$\sup_{\phi\in A} \Vert \phi \Vert<\infty -$$ -and -$$ -\begin{align*} -\sup_{\phi\in A} \Vert \phi \Vert &= \sup_{\substack{f\in Y^* \\ \Vert f \Vert = 1}} \Vert f\circ T\Vert\\ -&= \sup_{\substack{f\in Y^* \\ \Vert f \Vert = 1}} \left(\sup_{\substack{x\in X \\ \Vert x \Vert = 1}} |(f\circ T)x|\right)\\ -&= \sup_{\substack{x\in X \\ \Vert x \Vert = 1}}\left(\sup_{\substack{f\in Y^* \\ \Vert f \Vert = 1}} |(f\circ T)x|\right) -\end{align*} -$$ -But -$$ -\forall x\in X,\; \sup_{\substack{f\in Y^* \\ \Vert f \Vert = 1}} |(f\circ T)x| \ge \Vert Tx \Vert, -$$ -therefore -$$ -\Vert T \Vert = \sup_{\substack{x\in X \\ \Vert x \Vert = 1}} \Vert Tx \Vert < \infty. -$$ -So $T$ is bounded. - -REPLY [2 votes]: First, I claim that the unit ball of $Y^*$ is mapped into a bounded subset of $X^*$. This follows from the Banach-Steinhaus theorem. If $x \in X$, then $(f \circ T)(x) = f(T(x))$ is bounded as $f$ ranges over the elements of $Y^*$ of norm at most one. So we have the collection $\mathcal{C}$ of functionals $f \circ T$ on $X$, such that for each $x \in X$, $\sup_{r \in \mathcal{C}} ||r(x)|| < \infty$. This implies that $\mathcal{C}$ is a bounded set and that the transpose of $T$ is bounded. -Now, if the transpose of a linear transformation $T$ is bounded by some $C$, then $T$ is itself bounded by $C$ (to see this, suppose $x \in X$ is of norm at most one; then the claim is that $|\ell(T(x))| \leq C$ for $\ell$ a functional on $Y$ of norm at most one, which is equivalent by Hahn-Banach. But this is $ -|T^*(\ell)(x)|$, which by assumption is of norm at most $C$).<|endoftext|> -TITLE: Relationship between eigendecomposition and singular value decomposition -QUESTION [38 upvotes]: Let $A \in \mathbb{R}^{n\times n}$ be a real symmetric matrix. Please help me clear up some confusion about the relationship between the singular value decomposition of $A$ and the eigen-decomposition of $A$. -Let $A = U\Sigma V^T$ be the SVD of $A$. Since $A = A^T$, we have $AA^T = A^TA = A^2$ and: -$$A^2 = AA^T = U\Sigma V^T V \Sigma U^T = U\Sigma^2 U^T$$ -$$A^2 = A^TA = V\Sigma U^T U\Sigma V^T = V\Sigma^2 V^T$$ -Both of these are eigen-decompositions of $A^2$. Now consider some eigen-decomposition of $A$ -$$A = W\Lambda W^T$$ -Then -$$A^2 = W\Lambda W^T W\Lambda W^T = W\Lambda^2 W^T$$ -So $W$ also can be used to perform an eigen-decomposition of $A^2$. -So now my confusion: -It seems that $A = W\Lambda W^T$ is also a singular value decomposition of A. 
But singular values are always non-negative, and eigenvalues can be negative, so something must be wrong.
-What is going on?
-
-REPLY [32 votes]: If $A = U \Sigma V^T$ and $A$ is symmetric, then $V$ is almost $U$, except possibly for the signs of corresponding columns of $V$ and $U$.
-$$A = W \Lambda W^T = \displaystyle \sum_{i=1}^n w_i \lambda_i w_i^T = \sum_{i=1}^n w_i \left| \lambda_i \right| \text{sign}(\lambda_i) w_i^T$$ where $w_i$ are the columns of the matrix $W$.
-The left singular vectors $u_i$ are $w_i$ and the right singular vectors $v_i$ are $\text{sign}(\lambda_i) w_i$. (You can of course put the sign term with the left singular vectors as well.) The singular values $\sigma_i$ are the magnitudes of the eigenvalues $\lambda_i$.
-Hence, $A = U \Sigma V^T = W \Lambda W^T$
-and $$A^2 = U \Sigma^2 U^T = V \Sigma^2 V^T = W \Lambda^2 W^T$$
-Note that the eigenvalues of $A^2$ are non-negative.<|endoftext|>
-TITLE: Expressing the product Ax as a linear combination of the column vectors of A
-QUESTION [10 upvotes]: Expressing the product Ax as a linear combination of the column vectors of
-$A=\begin{bmatrix} 4 & 0 & -1\\ 3 & 6 & 2\\ 0 & -1 & 4 \end{bmatrix}$,
-$\vec{x}=\begin{bmatrix} -2\\ 3\\ 5 \end{bmatrix}$
-I get it now. They just want me to multiply the matrix and the vector together.
-I end up with $\begin{bmatrix} -13\\ 22\\ 17 \end{bmatrix}$
-
-REPLY [7 votes]: I have edited your question to make sure it is understood correctly. So if I understand correctly, you have a matrix $A=\begin{bmatrix} 4 & 0 & -1\\ 3 & 6 & 2\\ 0 & -1 & 4 \end{bmatrix}$ and a vector $\vec{x}=\begin{bmatrix} -2\\ 3\\ 5 \end{bmatrix}$. You are trying to write the product $A\vec{x}$ as a linear combination of the column vectors of $A$. Now to do that, you need to perform the multiplication by its very definition: $\begin{bmatrix} 4 & 0 & -1\\ 3 & 6 & 2\\ 0 & -1 & 4 \end{bmatrix} \cdot \begin{bmatrix} -2\\ 3\\ 5 \end{bmatrix}$ actually means $-2\begin{bmatrix} 4\\ 3\\ 0 \end{bmatrix} + 3\begin{bmatrix} 0\\ 6\\ -1 \end{bmatrix} + 5\begin{bmatrix} -1\\ 2\\ 4 \end{bmatrix}$, which is what I believe your question is asking for.<|endoftext|>
-TITLE: What is the easiest way to see that there are no nonconstant holomorphic forms on the Riemann Sphere?
-QUESTION [6 upvotes]: I've heard this result bandied about many times, and I know that it follows from e.g. the theory of divisors, but I'd like to see some simpler, straightforward ways of proving this fact.
-
-REPLY [8 votes]: Suppose there were a nonzero holomorphic 1-form $\omega$ on $S^2$. Then we have two charts $U_1, U_2$, each isomorphic to $\mathbb{C}$, on which coordinates are given by $z$ and $z' = 1/z$. In each representation, we have $\omega = f(z) dz$ and $\omega = g(z') dz'$. On the overlap, we must have $f(z) dz = g(1/z) (-1/z^2) dz$ by the transition formulas. That is, $f(z) = g(1/z) (-1/z^2)$ on $\mathbb{C}-\{0\}$. If you write out the Laurent series expansion (note that both $f,g$ are entire functions!) this is absurd if $g$ is nonzero.
-This also works in the algebraic category (and gives an easy proof that the projective line has genus zero).<|endoftext|>
-TITLE: Finding the z value on a plane with x,y values
-QUESTION [6 upvotes]: So I have the x, y, z values for 3 points to define a plane in 3d space.
-I need to find the z value of an arbitrary point given the x, y.
-I can sort of see some ways to calculate this, but they seem like they might be doing a lot of extra steps.
I eventually need to encapsulate the process in an algorithm for a computer program, so the fewer steps the better.
-
-REPLY [8 votes]: The simplest way is to first find the equation of the plane.
-So, suppose you are given three points,
-$$ (a_1,b_1,c_1),\quad (a_2,b_2,c_2),\quad (a_3,b_3,c_3).$$
-I'm first going to assume everything will work out fine; I'll point out the possible problems later.
-
-First, construct two vectors determined by these three points:
-$$\begin{align*} \mathbf{v}_1 &= (a_1,b_1,c_1) - (a_2,b_2,c_2) = (a_1-a_2, b_1-b_2, c_1-c_2).\\ \mathbf{v}_2 &= (a_1,b_1,c_1) - (a_3,b_3,c_3) = (a_1-a_3, b_1-b_3, c_1-c_3). \end{align*}$$
-Then, compute their cross product:
-$$\mathbf{n} = \mathbf{v}_1\times\mathbf{v}_2 = (r,s,t).$$
-The plane you want has equation $rx + sy + tz = k$ for some $k$. To find $k$, plug in one of the points you have, say $(a_1,b_1,c_1)$, so you know that
-$$k = ra_1 + sb_1 + tc_1.$$
-Finally, given the $x$ and $y$ coordinates of a point, you can find the value of $z$ by solving:
-$$z = \frac{1}{t}\left( ra_1 + sb_1 + tc_1 - rx - sy\right).$$
-
-What can go wrong?
-
-For three points to determine a unique plane, you need the three points to not be collinear (not lie on the same line). You will find this when you compute the vector $\mathbf{n}$. If $\mathbf{n}=(0,0,0)$, then $\mathbf{v}_1$ and $\mathbf{v}_2$ are parallel, so that means that the three points are collinear and don't determine a unique plane. So you can just test $\mathbf{n}$ to see if it is nonzero before proceeding.
-It's possible for there to not be a unique value of $z$ that goes with the given $x$ and $y$. This will happen if $\mathbf{n}$ has the form $\mathbf{n}=(r,s,0)$. Then either the given $x$ and $y$ satisfy the equation you get, in which case every value of $z$ works; or else the given $x$ and $y$ do not satisfy the equation you get and no value of $z$ works.
-
-Example. Suppose you are given $(1,2,3)$, $(1,0,1)$, and $(-2,1,0)$. Then
-$$\begin{align*} \mathbf{v}_1 &= (1,2,3) - (1,0,1) = (0,2,2).\\ \mathbf{v}_2 &= (1,2,3) - (-2,1,0) = (3,1,3). \end{align*}$$
-Then
-$$\mathbf{n} = \mathbf{v}_1\times\mathbf{v}_2 = \left|\begin{array}{rrr} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ 0 & 2 & 2\\ 3 & 1 & 3 \end{array}\right| = ( 6-2, 6-0, 0-6) = (4,6,-6).$$
-So the plane has equation $4x + 6y - 6z = k$. To find $k$, we plug in $(1,2,3)$:
-$$ 4(1) + 6(2) - 6(3) = -2,$$
-so the plane has equation
-$$4x + 6y - 6z = -2$$
-or
-$$2x + 3y - 3z = -1.$$
-Then, given two values of $x$ and $y$, say, $x=7$ and $y=-2$, you plug them in and solve for $z$:
-$$z = \frac{1}{-3}(-1 -2(7) -3(-2)) = \frac{1 + 14 - 6}{3} = 3,$$
-so the point is $(7,-2,3)$.<|endoftext|>
-TITLE: expected area of a triangle determined by randomly placed points
-QUESTION [10 upvotes]: Three points are placed independently and at random in a unit square. What is the expected value of the area of the triangle formed by the three points?
-
-REPLY [4 votes]: User 'Shai Covo' (who seems no longer active) gave a solution as a link to an old webpage maintained by Dr. Les Reid of the math department at Missouri State University. The solution seems to be valid after being fixed by the comments of @joriki, and the rigorous integration approach of @achille hui supports it (and so much more).
-I think the solution is nice and worthy of displaying with a good format, so here it is.
-
-The argument below is due to Philippe Fondanaiche (Paris, France), directly lifted from here.
-Let $A$ with coordinates $(a,u)$, $B(b,v)$ and $C(c,w)$ be the three points chosen uniformly and at random from a unit square.
-As the probability to have 2 or 3 coincident points is nil, we will
-consider hereafter the strict inequalities between the abscissas/coordinates of the 3 points.
-There are 6 orderings of the 3 abscissas, all having the same probability:
-$$a<b<c,\quad a<c<b,\quad b<a<c,\quad b<c<a,\quad c<a<b,\quad c<b<a,$$
-so by symmetry one may condition on the ordering $a<b<c$ and multiply by $6$. Writing the area of the triangle as
-$$S=\tfrac12\,\bigl|(b-a)(w-u)-(c-a)(v-u)\bigr|,$$
-one integrates over the six coordinates, splitting the domain according to the sign of the integrand; a somewhat lengthy but elementary computation yields
-$$\mathbb{E}[S]=\frac{11}{144}\approx 0.0764.$$<|endoftext|>
-TITLE: Comparing the Lebesgue measure of an open set and its closure
-QUESTION [30 upvotes]: Let $E$ be an open set in $[0,1]^n$ and $m$ be the Lebesgue measure.
-Is it possible that $m(E)\neq m(\bar{E})$, where $\bar{E}$ stands for the closure of $E$?
-
-REPLY [21 votes]: Yes, this is possible. Already in dimension $1$. If you take a modified Cantor set $C$ in $[0,1]$, that is a nowhere dense compact subset of positive measure $\alpha \gt 0$. Its complement $E = [0,1] \smallsetminus C$ is open, has measure $1 - \alpha$ and its closure is all of $[0,1]$ by density. In higher dimensions simply take the product $E^n$.
-Another way of doing it is to enumerate the rationals in $[0,1]$ and take $E = [0,1] \cap \bigcup_{n=1}^{\infty} (q_{n} - \frac{\varepsilon}{2^{n+1}}, q_{n} + \frac{\varepsilon}{2^{n+1}})$. Then $\mu(E) \leq \sum_{n=1}^{\infty} 2 \cdot \frac{\varepsilon}{2^{n+1}} = \varepsilon$, so for $\varepsilon \lt 1$ the set $E$ will be open and dense in $[0,1]$ but not all of $[0,1]$ and its closure will be all of $[0,1]$ again.
-Added: (in view of Davide's comment below). Note that there is a modified Cantor set $C_\alpha \subset [0,1]$ of any measure $0 \lt \alpha \lt 1$. Its complement $E_{\alpha} = [0,1] \smallsetminus C_\alpha$ is open and dense in $[0,1]$ and has measure $1-\alpha$ and by scaling this shows that for every pair of positive numbers $0 \lt a \lt b$ there is an open set $E_a$ of measure $\mu(E_a) = a$ whose closure $\overline{E_a}$ has measure $\mu(\overline{E_a}) = b$. I leave it as an easy exercise to construct an open set of measure $a \gt 0$ whose closure has infinite measure.
-
-REPLY [10 votes]: If you remove a fat Cantor set (that is, a nowhere dense positive measure closed subset) from $[0,1]$ you obtain a dense open subset of $[0,1]$ whose measure is $< 1$. So the answer to your question is yes.<|endoftext|>
-TITLE: Does $\det(A) \neq 0$ (where A is the coefficient matrix) $\rightarrow$ a basis in vector spaces other than $R^{n}$?
-QUESTION [7 upvotes]: I know that for a set of vectors $\{ v_{1}, v_{2}, \ldots , v_{n} \} \subset \mathbb{R}^{n}$ we can show that the vectors form a basis of $\mathbb{R}^{n}$ if we show that the coefficient matrix $A$ has the property $\det(A) \neq 0$, because this shows the homogeneous system has only the trivial solution, and the non-homogeneous system is consistent for every vector $(b_{1}, b_{2}, \ldots , b_{n}) \in \mathbb{R}^{n}$.
-Intuitively, this concept seems applicable to all polynomials in $\mathbf{P}_{n}$ and all matrices in $M_{nn}$. Can someone validate this?
-edit:
-I think to make the intuition hold, $A$ must be defined as follows in $M_{nn}$:
-Let $M_{1}, M_{2}, ... , M_{k}$ be matrices in $M_{nn}$.
-To prove these form a basis for $M_{nn}$, we must show that $c_{1}M_{1} + c_{2}M_{2} + ... + c_{k}M_{k} = 0$ has only the trivial solution, and that every $n \times n$ matrix $B$ can be expressed as $c_{1}M_{1} + c_{2}M_{2} + ... + c_{k}M_{k} = B$.
-So I believe that for $M_{nn}$, $A$ must be defined as an $n^{2} \times n^{2}$ matrix where each row vector is formed from all the $(i, j)$ entries taken from
$M_{1}, M_{2}, \ldots, M_{k}$ (in that order).
-
-$\text{e.g. } A = \begin{pmatrix} M_{1_{1,1}} & M_{2_{1,1}} & \cdots & M_{k_{1,1}} \\ M_{1_{1,2}} & M_{2_{1,2}} & \cdots & M_{k_{1,2}} \\ \vdots & \vdots & & \vdots \\ M_{1_{n,n}} & M_{2_{n,n}} & \cdots & M_{k_{n,n}} \end{pmatrix}$
-
-However, I am not sure about this.
-
-REPLY [5 votes]: You should be aware that for any given $n$ there's an essentially unique real vector space of dimension $n$, in the sense that any two are isomorphic (although non-canonically). For instance, the space of real polynomials of degree $\leq n$ is a real vector space of dimension $n+1$, hence isomorphic to ${\Bbb R}^{n+1}$, the space $M_n({\Bbb R})$ of square $n\times n$ real matrices is a real vector space of dimension $n^2$, hence isomorphic to ${\Bbb R}^{n^2}$, and so on. Once you prove a particular statement for ${\Bbb R}^{n}$, you have proved it for ALL real vector spaces of dimension $n$.
-The same holds, more generally, when you consider vector spaces over any field other than $\Bbb R$.<|endoftext|>
-TITLE: Q is dense in R and Completions
-QUESTION [7 upvotes]: What is the relation between the fact that $\mathbb{Q}$ is dense in $\mathbb{R}$ and the fact that the completion of $\mathbb{Q}$ is $\mathbb{R}$? Or, more generally, if $A$ is dense in a metric space $B$, what is the relation between the completion of $A$ and $B$?
-
-REPLY [3 votes]: Given a metric space $X$ with distance $d$, you can construct the set $\tilde{X}=C/\sim$ where $C$ is the set of all Cauchy sequences in $X$ and you declare that
-$$
-\forall s,t\in C,\qquad s\sim t\iff\lim_{n\to\infty}d(s_n,t_n)=0.
-$$
-Then, $\tilde{X}$ becomes a metric space under the distance
-$$
-\tilde{d}([s],[t])=\lim_{n\to\infty}d(s_n,t_n)
-$$
-(one needs to check well-definedness). Then one sees that $X$ embeds isometrically in $\tilde{X}$ as $x\mapsto[\{x\}]$ (class of the constant sequence at $x$) and proves that
-
-$\tilde{X}$ is complete under $\tilde d$;
-$X$ is dense in $\tilde X$;
-If $X^\prime$ is a complete metric space in which $X$ embeds isometrically as a dense subspace, then there exists a canonical isometry $X^\prime\rightarrow\tilde{X}$ which is the identity on $X$.
-
-The space $\tilde{X}$ is called the completion of $X$.
-The reals $\Bbb R$ are the completion of $\Bbb Q$ under the euclidean metric induced by the standard absolute value.
-By taking the $p$-adic absolute values on $\Bbb Q$ one may construct the complete field ${\Bbb Q}_p$ of $p$-adic numbers.
-The celebrated theorem of Ostrowski says that $\Bbb R$ and the ${\Bbb Q}_p$ are essentially all the possible ways to construct a complete space out of ${\Bbb Q}$.<|endoftext|>
-TITLE: Calculating the median in the St. Petersburg paradox
-QUESTION [12 upvotes]: I am studying a recreational probability problem (which from the comments here I discovered has a name and a long history). One way to address the paradox created by the problem is to study the median value instead of the expected value. I want to calculate the median value exactly (not only find bounds or asymptotic values). I have found a certain approach and I am stuck on a specific step. I present my analysis and I would like some help on that specific step.
-[Note: Other solutions to the general problem are welcome (however, after the revelation of the long history I found a lot of material) but what I really want to know is the answer to the sub-problem that my approach raises.]
-The problem
-We have the following game: I toss a coin as many times as is needed to get tails.
Then I count the number of consecutive heads that preceded (call it h) and I give you $2^h$ dollars. How much are you willing to pay to play such a game? In other words, what is the maximum buy-in for that game you are willing to pay? Note also, that we can play this game any amount of finite times (each time with you paying the buy-in). -A naive answer -One straightforward way to answer this is to calculate the expected value of one game. This should be the upper limit for the buy-in. The expected value is the infinite sum of the return of each case times the probability of each case. More specifically -$$\sum_{i=0}^\infty (2^{i-1}\cdot\frac{1}{2^i}) = \sum_{i=0}^\infty \frac{1}{2} = \infty$$ This might seem counter-intuitive but it is true: Whatever constant and finite amount you bet per game, you are expected to win on the long run! Why is this so counter-intuitive though? Would you be willing to play this in practice with say 1000 dollars per game? The answer is no, because you would need an immensely large amount of games to actually win. So if we care about a more practical measure, the expected value is of no help. What we need is the median (or any other percentile value). If we know the median return for N games, we can at least know that if the buy-in is $\frac{median}{N}$, half of the possible cases you will lose and for half you will win. We will not know how much we will win or lose (we do have an upper bound on the losses though) but at least we know the chances to win or lose for a finite N number of games. -Finding the median -So how do you calculate the median return from N games (or more generally any ith percentile)? -If we play only one game (N=1) then it is trivial. The median is 1. For N=2 it starts getting more complicated. With probability 0.25 we'll get back 1+1, with 0.125 1+2, with 0.125 2+1. These 3 cases already bring us to a total of 0.5, so the median is 3 (and so the maximum bet is 1.5 per game). For any N, how do we enumerate all the cases and find the 50% point (or any i% point)? I realized that this is (partly) an ordering problem. We do not want just to enumerate random cases, we have to order them, starting from the case with the smallest possible return, then getting the one(s) with the next smallest return and so on. As we are doing this ordering we are adding the probabilities of these cases. When we reach 50% (or i%) we stop. The return value for that case is our median value (ith percentile value). The ordering is where I am stuck. -Sub-problem formulation -We can depict the possible space of returns with a matrix where the N columns are the N games and the infinite rows are the return for each game: -$$\begin{array}{c} \text{row 1} \\ \text{row 2} \\ \text{row 3} \\ \vdots \\ \text{row i} \\ \vdots \end{array} \;\;\;\; \overbrace{\begin{array}{cccc} 1 & 1 & \cdots & 1 \\ 2 & 2 & \cdots & 2 \\ 4 & 4 & \cdots & 4 \\ \vdots & \vdots & \ddots & \vdots \\ 2^{i-1} & 2^{i-1} & \cdots & 2^{i-1} \\ \vdots & \vdots & & \vdots \end{array}}^N$$ -A series of N games consists of picking values for each column (i.e., picking a game outcome for each game). The smallest possible total return is when all game outcomes are 1. So total return = N. The next possible one is when we get one outcome from the second row (total return N+1). The next smallest total return is N+2 (2 game outcomes from the second row). 
Notice though that for total return N+3 we have two "configurations": 1) cases where we have N-3 outcomes from the first row and 3 from the second row, OR 2) cases where we have N-1 outcomes from the 1st row and 1 outcome from the 3rd row! So ordering is not such an easy process. -Configurations vs. cases -Notice how I talked about "configurations" instead of individual cases. An individual case is a sequence of game outcomes (which are completely described by the game returns). For example a case of 4 games could be (1, 1, 16, 8) for a total return of 26. A configuration on the other hand is a more general construct which specifies how many outcomes we have from each row. A configuration completely determines the total return, but not the individual order that the outcomes happened. For example, the case given above is part of the configuration "2 outcomes from row 1, 1 outcome from row 4, 1 outcome from row 5". Cases (1,16,1,8) and (8,1,1,16) belong to the same configuration. From a configuration I can calculate how many distinct cases it has and what is the probability of each case. For example, for the configuration " $N_i$ outcomes from row i, $N_j$ from row j, $N_k$ from row k" we have: -The number of distinct cases is ${N\choose {N_i}}\cdot{{N-N_i}\choose{N_j}}\cdot{{N-N_i-N_j}\choose{N_k}}$ -The probability for each of these cases is $2^{-(i\cdot N_i + j\cdot N_j + k\cdot N_k)}$ -The total return value for any of these cases is $N_i \cdot 2^{i-1}+N_j \cdot 2^{j-1}+N_k \cdot 2^{k-1}$ -The example above shows a configuration with 3 rows, just to get a taste of the complexity of the problem. I can generalise the formulas to find distinct cases, their probabilities and their total returns for any given configuration. The problem is ordering the configurations. Can we find an algorithm that orders and lists the configurations based on their total return value? Let's describe each configuration as a series of pairs {(x,i), (y,j), ...} where the first number of a pair denotes the row number and the second number of a pair denotes how many outcomes do we have from that row. For example, {(1,4), (3,1), (4,2)} means that we get 4 outcomes from row 1, 1 outcome from row 3, and 2 outcomes from row 4. This also means that we played 4 + 1 + 2 = 7 games. I manually computed the first terms of the ordered configurations list, for N games. I give the configuration(s) on the left and the total return on the right. Note that some total returns have more than one configurations that produce them. -$\begin{array}{ll} \text{Configurations} & \text{Total return} \\ -\{(1,N)\} & N \\ -\{(1,N-1),\; (2,1)\} & N+1 \\ -\{(1,N-2),\; (2,2)\} & N+2 \\ -\{(1,N-3),\; (2,3)\},\;\; \{(1,N-1),\; (3,1)\} & N+3 \\ -\{(1,N-4),\; (2,4)\},\;\; \{(1,N-2),\; (2,1),\; (3,1)\} & N+4 \\ -\{(1,N-5),\; (2,5)\},\;\; \{(1,N-3),\; (2,2),\; (3,1)\} & N+5 \\ -\{(1,N-6),\; (2,6)\},\;\; \{(1,N-4),\; (2,3),\; (3,1)\},\;\; \{(1,N-2),\; (3,2)\} & N+6 \\ -\end{array}$ -If I can produce this order algorithmically then I will be able to calculate the median (or ith percentile) for any N. -I would also appreciate any help in formulating the problem in more accepted/mainstream terms. I believe that the formulation is valid and clear(?), but if we use a formulation from an established subfield maybe it will point to the solution too. Thanks! - -REPLY [2 votes]: I had to write an ugly recursive algorithm in C to calculate the exact median for this game. 
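-A transparent, if much slower, way to reproduce the small cases is to build the exact distribution of the total return by convolving the single-game distribution $N$ times and then walking the CDF to the 50% point. Here is a minimal Python sketch of that idea (my own illustration, not the C program mentioned above; the truncation depth is an assumption, harmless for small $N$ because the neglected tail mass is $N\cdot 2^{-\text{depth}-1}$):
-
-    from fractions import Fraction
-
-    def median_return(n_games, depth=60):
-        # One game pays 2^h with probability 2^-(h+1), h = number of heads.
-        game = {2 ** h: Fraction(1, 2 ** (h + 1)) for h in range(depth)}
-        total = {0: Fraction(1)}          # exact distribution of the running sum
-        for _ in range(n_games):          # N-fold convolution
-            nxt = {}
-            for s, p in total.items():
-                for r, q in game.items():
-                    nxt[s + r] = nxt.get(s + r, Fraction(0)) + p * q
-            total = nxt
-        acc = Fraction(0)
-        for s in sorted(total):           # walk the CDF up to the 50% point
-            acc += total[s]
-            if acc >= Fraction(1, 2):
-                return s
-
-    print([median_return(n) for n in (1, 2, 3)])   # [1, 3, 6]
-
-The state space blows up quickly with $N$, which is exactly why a smarter recursion, like the one written in C, is needed to reach values as large as $N = 30$.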
Please find the first 30 exact median values in the graph on page 62 of philpapers.org/archive/ERGTEO.pdf<|endoftext|>
-TITLE: Prove inequality: When $n > 2$, $n! < {\left(\frac{n+2}{\sqrt{6}}\right)}^n$
-QUESTION [8 upvotes]: Prove: When $n > 2$,
-$$n! < {\left(\frac{n+2}{\sqrt{6}}\right)}^n$$
-PS: please do not use the method of mathematical induction.
-EDIT: sorry, I forgot another constraint: this problem should be solved with the arithmetic-geometric mean inequality.
-Thanks.
-
-REPLY [2 votes]: For $a_1, a_2, \ldots, a_n \geqslant 0$, we have:$$\sqrt[n]{a_1 a_2 \cdots a_n} \leqslant \frac{a_1 + a_2 + \cdots + a_n}{n}$$
-Consider $(n!)^2 = (n\cdot 1)((n-1)\cdot 2)\cdots(1\cdot n)$, for $n>1$:
-$$\sqrt[n]{(n!)^2} = \sqrt[n]{(n\cdot 1)((n-1)\cdot 2)\cdots(1\cdot n)} \leqslant \frac{n\cdot 1 + (n-1)\cdot 2 + \cdots + (n-(n-1))\cdot n}{n} \\ = \frac{n(1+2+\cdots+n) - 1\times 2 - 2\times 3 - \cdots - (n-1)\times n}{n} \\ = \frac{n\left(\frac{n(n+1)}{2}\right) - \frac{n(n^2-1)}{3}}{n} \\ = \frac{(n+1)(n+2)}{6} < \frac{(n+2)^2}{6}$$
-That's the conclusion, which we can get using high school math, not Stirling's approximation.<|endoftext|>
-TITLE: What is the right categorial definition of localisation of a module
-QUESTION [5 upvotes]: Let $A$ be a ring, $S$ be a multiplicative subset, $M$ an $A$-module. Let $\iota : A \to S^{-1}A$ be the map $a \mapsto a/1$. $\iota$ can be defined categorically as an initial object in the category of ring homomorphisms whose domain is $A$ and which map $S$ into the units (morphisms in this category are commuting triangles).
-In Atiyah-Macdonald, as far as I can tell, there is no similar categorial definition for $S^{-1}M$. My question is: What are some good ways of defining $S^{-1}M$ categorically?
-
-REPLY [7 votes]: Let ${}_A\mathrm{Mod}$ be the category of left $A$-modules. For all $M\in{}_A\mathrm{Mod}$, the localized module $M_S$ is in the full subcategory $\mathcal C$ of ${}_A\mathrm{Mod}$ spanned by all objects $N$ such that for all $s\in S$, the multiplication map $$m_{N,s}:n\in N\mapsto sn\in N$$ is an isomorphism of abelian groups. Moreover, let $\iota:\mathcal C\to{}_A\mathrm{Mod}$ be the inclusion functor. One can show easily that the localization functor $(\mathord-)_S:{}_A\mathrm{Mod}\to\mathcal C$ is a left adjoint to $\iota$.
-Since left adjoints are unique up to isomorphism, when they exist, this uniquely characterizes the localization functor $(\mathord-)_S$.
-Finally, one should notice that a little extra work will show that the subcategory $\mathcal C$ I defined above is in fact naturally equivalent to the category ${}_{A_S}\mathrm{Mod}$ of left $A_S$-modules.<|endoftext|>
-TITLE: Looking for Open Source Math Software with Poor Documentation
-QUESTION [18 upvotes]: I'm a techie who is looking to transition to technical writing as a career. It's been suggested that I produce documentation for an Open Source project to demonstrate my ability to do technical writing. I love Math so I thought I'd find a gap in an Open Source Math program and fill that gap.
-What poorly documented Open Source Math software can you recommend that I look at?
-
-REPLY [3 votes]: Take a look at this project: http://www.formulae.org . It is open source, it is about math and it is partially documented.
-There are several sources of documentation: the developer's guide (LaTeX, partial), the front-end user's guide (LaTeX, just started), the API reference (JavaDoc, partial) and the expression dictionary (online wiki, partial).<|endoftext|>
-TITLE: Axiom of choice and automorphisms of vector spaces
-QUESTION [56 upvotes]: A classical exercise in group theory is "Show that if a group has a trivial automorphism group, then it is of order $1$ or $2$." I think that the straightforward solution uses the fact that an exponent-two group is a vector space over $\operatorname{GF}(2)$, and therefore has nontrivial automorphisms as soon as its dimension is at least $2$ (simply transposing two basis vectors).
-My question is now natural:
-
-Is it possible, without the axiom of choice, to construct a vector space $E$ over $\operatorname{GF}(2)$, different from $\{0\}$ or $\operatorname{GF}(2)$, whose automorphism group $\operatorname{GL}(E)$ is trivial?
-
-REPLY [40 votes]: Nov. 6th, 2011 After several long months a post on MathOverflow pushed me to reconsider this math, and I have found a mistake. The claim was still true, as shown by Läuchli $\small[1]$; however, despite trying to do my best to understand the argument for this specific claim, it eluded me for several days. I then proceeded to construct my own proof, this time error-free, or so I hope. While at it, I am revising the writing style.
-Jul. 21st, 2012 While reviewing this proof again it was apparent that its most prominent use, generating such a space over the field of two elements, fails, as the third lemma implicitly assumed $x+x\neq x$. Now this has been corrected and the proof is truly complete.
-$\newcommand{\sym}{\operatorname{sym}}
-\newcommand{\fix}{\operatorname{fix}}
-\newcommand{\span}{\operatorname{span}}
-\newcommand{\im}{\operatorname{Im}}
-\newcommand{\Id}{\operatorname{Id}}
-$
-
-I got it! The answer is that you can construct such a vector space.
-I will assume that you are familiar with ZFA and the construction of permutation models; references can be found in Jech's Set Theory $\small[2, \text{Ch}. 15]$ as well as in The Axiom of Choice $\small{[3]}$. Any questions are welcome.
-Some notation: if $x\in V$, which is assumed to be a model of ZFC+Atoms, then:
-
-$\sym(x) =\{\pi\in\mathscr{G} \mid \pi x = x\}$, and
-$\fix(x) = \{\pi\in\mathscr{G} \mid \forall y\in x:\ \pi y = y\}$
-
-Definition: Suppose $G$ is a group; $\mathcal{F}\subseteq\mathcal{P}(G)$ is a normal subgroups filter if:
-
-$G\in\mathcal{F}$;
-If $H,K$ are subgroups of $G$ such that $H\subseteq K$, then $H\in\mathcal{F}$ implies $K\in\mathcal{F}$;
-If $H,K$ are subgroups of $G$ such that $H,K\in\mathcal{F}$, then $H\cap K\in\mathcal{F}$;
-${1}\notin\mathcal{F}$ (non-triviality);
-For every $H\in\mathcal{F}$ and $g\in G$, $g^{-1}Hg\in\mathcal{F}$ (normality).
-
-Now consider the normal subgroups filter $\mathcal{F}$ generated by the subgroups $\fix(E)$ for $E\in I$, where $I$ is an ideal of sets of atoms (closed under finite unions, intersections and subsets).
-Basics of permutation models:
-A permutation model is a transitive subclass of the universe $V$ such that for every ordinal $\alpha$, we have $x\in\mathfrak{U}\cap V_{\alpha+1}$ if and only if $x\subseteq\mathfrak{U}\cap V_\alpha$ and $\sym(x)\in\mathcal{F}$.
-The latter property is known as being symmetric (with respect to $\mathcal{F}$) and $x$ being in the permutation model means that $x$ is hereditarily symmetric. (Of course at limit stages take limits, and start with the empty set) -If $\mathcal{F}$ was generated by some ideal of sets $I$, then if $x$ is symmetric with respect to $\mathcal{F}$ it means that for some $E\in I$ we have $\fix(E)\subseteq\sym(x)$. In this case we say that $E$ is a support of $x$. -Note that if $E$ is a support of $x$ and $E\subseteq E'$ then $E'$ is also a support of $x$, since $\fix(E')\subseteq\fix(E)$. -Lastly if $f$ is a function in $\mathfrak{U}$ and $\pi$ is a permutation in $G$ then $\pi(f(x)) = (\pi f)(\pi x)$. - -Start with $V$ a model of ZFC+Atoms, assuming there are infinitely (countably should be enough) many atoms. $A$ is the set of atoms, endow it with operations that make it a vector space over a field $\mathbb{F}$ (If we only assume countably many atoms, we should assume the field is countable too. Since we are interested in $\mathbb F_2$ this assertion is not a big hassle). Now consider $\mathscr{G}$ the group of all linear automorphisms of $A$, each can be extended uniquely to an automorphism of $V$. -Now consider the normal subgroups-filter $\mathcal{F}$ to be generated by the subgroups $\fix(E)$ for $E\in I$, where $E$ a finite set of atoms. Note that since all the permutations are linear they extend unique to $\span(E)$. In the case where $\mathbb F$, our field, is finite then so is this span. -Let $\mathfrak{U}$ be the permutation model generated by $\mathscr{G}$ and $\mathcal{F}$. -Lemma I: Suppose $E$ is a finite set, and $u,v$ are two vectors such that $v\notin\span(E\cup\{u\})$ and $u\notin\span(E\cup\{v\})$ (in which case we say that $u$ and $v$ are linearly independent over $E$), then there is a permutation which fixes $E$ and permutes $u$ with $v$. -Proof: Without loss of generality we can assume that $E$ is linearly independent, otherwise take a subset of $E$ which is. Since $E\cup\{u,v\}$ is linearly independent we can (in $V$) extend it to a base of $A$, and define a permutation of this base which fixes $E$, permutes $u$ and $v$. This extends uniquely to a linear permutation $\pi\in\fix(E)$ as needed. $\square$ -Lemma II: In $\mathfrak{U}$, $A$ is a vector space over $\mathbb F$, and if $W\in\mathfrak{U}$ is a linear proper subspace then $W$ has a finite dimension. -Proof: Suppose $W$ is as above, let $E$ be a support of $W$. If $W\subseteq\span(E)$ then we are done. Otherwise take $u\notin W\cup \span(E)$ and $v\in W\setminus \span(E)$ and permute $u$ and $v$ while fixing $E$, denote the linear permutation with $\pi$. It is clear that $\pi\in\fix(E)$ but $\pi(W)\neq W$, in contradiction. $\square$ -Lemma III: If $T\in\mathfrak{U}$ is a linear endomorphism of $A$, and $E$ is a support of $T$ then $x\in\span(E)\Leftrightarrow Tx\in\span(E)$, or $Tx=0$. -Proof: First for $x\in \span(E)$, if $Tx\notin\span(E)$ for some $Tx\neq u\notin\span(E)$ let $\pi$ be a linear automorphism of $A$ which fixes $E$ and $\pi(Tx)=u$. We have, if so: -$$u=\pi(Tx)=(\pi T)(\pi x) = Tx\neq u$$ -On the other hand, if $x\notin\span(E)$ and $Tx\in\span(E)$ and if $Tx=Tu$ for some $x\neq u$ for $u\notin\span(E)$, in which case we have that $x+u\neq x$ set $\pi$ an automorphism which fixes $E$ and $\pi(x)=x+u$, now we have: $$Tx = \pi(Tx) = (\pi T)(\pi x) = T(x+u) = Tx+Tu$$ Therefore $Tx=0$. -Otherwise for all $u\neq x$ we have $Tu\neq Tx$. 
Let $\pi$ be an automorphism fixing $E$ such that $\pi(x)=u$ for some $u\notin\span(E)$, and we have: $$Tx=\pi(Tx)=(\pi T)(\pi x) = Tu$$ This is a contradiction, so this case is impossible. $\square$
-Theorem: If $T\in\mathfrak{U}$ is an endomorphism of $A$ then for some $\lambda\in\mathbb F$ we have $Tx=\lambda x$ for all $x\in A$.
-Proof:
-Assume that $T\neq 0$, so it has a nontrivial image. Let $E$ be a support of $T$. If $\ker(T)$ is nontrivial then it is a proper subspace, thus for a finite set of atoms $B$ we have $\span(B)=\ker(T)$. Without loss of generality, $B\subseteq E$; otherwise $E\cup B$ is also a support of $T$.
-For every $v\notin\span(E)$ we have $Tv\notin\span(E)$. However, $E_v = E\cup\{v\}$ is also a support of $T$. Therefore restricting $T$ to $E_v$ yields that $Tv=\lambda v$ for some $\lambda\in\mathbb F$.
-Let $v,u\notin\span(E)$ be linearly independent over $\span(E)$. We have that $Tu=\alpha u, Tv=\mu v$, and $v+u\notin\span(E)$, so $T(v+u)=\lambda(v+u)$ for some $\lambda\in\mathbb F$.
-$$\begin{align}
-0&=T(0) \\ &= T(u+v-u-v)\\
-&=T(u+v)-Tu-Tv \\ &=\lambda(u+v)-\alpha u-\mu v=(\lambda-\alpha)u+(\lambda-\mu)v
-\end{align}$$ Since $u,v$ are linearly independent we have $\alpha=\lambda=\mu$. Due to the fact that for every $u,v\notin\span(E)$ we can find $x$ which is linearly independent over $\span(E)$ both with $u$ and with $v$, we can conclude that $Tx=\lambda x$, with one and the same $\lambda$, for every $x\notin\span(E)$.
-For $v\in\span(E)$ let $x\notin\span(E)$; we have that $v+x\notin\span(E)$ and therefore:
-$$\begin{align}
-Tv &= T(v+x - x)\\
-&=T(v+x)-T(x)\\
-&=\lambda(v+x)-\lambda x = \lambda v
-\end{align}$$
-We have concluded, if so, that $Tx=\lambda x$ for all $x$, for some fixed $\lambda\in\mathbb F$. $\square$
-
-Set $\mathbb F=\mathbb F_2$, the field with two elements, and we have created ourselves a vector space without any nontrivial automorphisms. However, one last problem remains. This construction was carried out in ZF+Atoms, while we want to have it without atoms. For this simply use the Jech-Sochor embedding theorem $\small[3, \text{Th. 6.1, p. 85}]$, and by setting $\alpha>4$ it should be that any endomorphism is transferred to the model of ZF created by this theorem.
-(Many thanks to t.b. who helped me translate parts of the original paper of Läuchli.
-Additional thanks to Uri Abraham for noting that an operator need not be injective in order to be surjective, resulting in a shorter proof.)
-
-Bibliography
-
-Läuchli, H. Auswahlaxiom in der Algebra. Commentarii Mathematici Helvetici, vol. 37, pp. 1-19.
-Jech, T. Set Theory, 3rd millennium ed., Springer (2003).
-Jech, T. The Axiom of Choice. North-Holland (1973).<|endoftext|>
-TITLE: Numbers satisfying $\binom{n}{k} = m!$
-QUESTION [11 upvotes]: Let $k,m,n\in \mathbb{N}$ where $1 < k < n-1$. Consider the equation $$\binom{n}{k} = m!$$ which can also be equivalently written as $$n!=(n-k)!k!m!$$ The only instances I found are $\binom{4}{2} = 3!$ and $\binom{10}{3} = 5!$. I do not see any pattern emerging. As I searched further out, it seemed to become hard to find other examples, as the second instance seems to be related to the problem of consecutive numbers being composed of only small primes. Is it true that these are the only instances?
-Thanks.
-
-REPLY [9 votes]: You're asking about the existence of factorials in entries of Pascal's triangle.
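-(As a quick machine cross-check, which I add here only as an illustration: a brute-force scan over Pascal's triangle is easy, since factorials are sparse enough to collect in a set. For $n \le 1000$ it turns up the two instances above together with their mirror images under $k \mapsto n-k$, plus $\binom{16}{2} = \binom{16}{14} = 120 = 5!$, which the question overlooks:)
-
-    from math import comb
-
-    limit = comb(1000, 500)        # largest binomial the scan below can reach
-    facts, m, f = set(), 1, 1
-    while f <= limit:              # factorials are sparse: only ~170 of them fit
-        m += 1
-        f *= m
-        facts.add(f)
-
-    hits = [(n, k) for n in range(4, 1001) for k in range(2, n - 1)
-            if comb(n, k) in facts]
-    print(hits)   # [(4, 2), (10, 3), (10, 7), (16, 2), (16, 14)]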
I googled for factorials pascal's triangle and found this discussion:
-https://mathoverflow.net/questions/17058/factorials-in-pascals-triangle<|endoftext|>
-TITLE: Find limit of $\sqrt[n]{a^n-b^n}$ as $n\to\infty$, with the initial conditions: $a>b>0$
-QUESTION [8 upvotes]: With the initial conditions: $a>b>0$;
-I need to find $$\lim_{n\to\infty}\sqrt[n]{a^n-b^n}.$$
-I tried to bound the expression from the left and the right in order to use the Squeeze (sandwich, two policemen and a drunk, choose your favourite) theorem.
-
-REPLY [2 votes]: Another way: Note that $a^n - b^n = (a - b)(a^{n-1} + a^{n-2}b + ... + ab^{n-2} + b^{n-1})$. Since $b < a$, each term in the sum is bounded by $a^{n-1}$ and we have
-$$a^{n-1} + a^{n-2}b + ... + ab^{n-2} + b^{n-1} < na^{n-1}$$
-Since the sum is at least the first term, we also have
-$$a^{n-1} + a^{n-2}b + ... + ab^{n-2} + b^{n-1} > a^{n-1}$$
-Combining we have
-$$a^{n-1}(a - b) < a^n - b^n < na^{n-1}(a - b)$$
-Taking $n$th roots we get
-$$a^{1 - {1 \over n}}(a - b)^{1 \over n} < \sqrt[n]{a^n - b^n} < n^{1 \over n}\, a^{1 - {1 \over n}}(a - b)^{1 \over n}$$
-Since $n^{1/n} \to 1$, $a^{1-1/n} \to a$ and $(a - b)^{1/n} \to 1$ as $n \to \infty$, both bounds tend to $a$, and the limit is $a$ by the squeeze theorem.<|endoftext|>
-TITLE: Logic question: Ant walking a cube
-QUESTION [34 upvotes]: There is a cube and an ant is performing a random walk on the edges where it can select any of the 3 adjoining vertices with equal probability. What is the expected number of steps it needs till it reaches the diagonally opposite vertex?
-
-REPLY [4 votes]: Here is what I thought. It is a Markov chain problem.
-Mark the start point as $E_3$; in general, let $E_n$ be the expected number of steps when the ant is $n$ edges away from the target vertex.
-We have the following relationships:
-After one step the ant is certainly at a vertex of distance 2, so
-$$E_3 = E_2 + 1$$
-From distance 2, the probability is 2/3 of moving to distance 1 and 1/3 of moving back to distance 3:
-$$E_2 = 2/3 (E_1 + 1) + 1/3 (E_3 + 1)$$
-Similarly for $E_1$:
-$$E_1 = 1/3 + 2/3 (1 + E_2)$$
-Solve these equations and you will get $E_3 = 10$.<|endoftext|>
-TITLE: Evaluate $\int_{0}^{\frac{\pi}{4}}\ln(\cos(t))dt$
-QUESTION [13 upvotes]: $$\int_{0}^{\frac{\pi}{4}}\ln(\cos(t))dt=\frac{-{\pi}\ln(2)}{4}+\frac{K}{2}$$
-I ran across this integral while investigating the Catalan constant. I am wondering how it is evaluated.
-I know of this famous integral when the limits of integration are $0$ and $\frac{\pi}{2}$, but when the limits are changed to $0$ and $\frac{\pi}{4}$, it becomes more complicated.
-I tried using $$\cos(t)=\frac{e^{it}+e^{-it}}{2},$$
-then rewriting it as:
-$$\int_{0}^{\frac{\pi}{4}}\ln\left(\frac{e^{it}+e^{-it}}{2}\right)dt=\int_{0}^{\frac{\pi}{4}}\ln(e^{it}+e^{-it})dt-\int_{0}^{\frac{\pi}{4}}\ln(2)dt.$$
-But, this is where I get stuck.
-Maybe factor out an $e^{it}$ and get
-$$\int_{0}^{\frac{\pi}{4}}\ln(e^{it}(1+e^{-2it}))dt=\int_{0}^{\frac{\pi}{4}}\ln(e^{it})dt+\int_{0}^{\frac{\pi}{4}}\ln(1+e^{-2it})dt$$
-I thought maybe the Taylor series for $\ln(1+x)$ may come in handy in some manner, it being $\ln(1+x)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}x^{k}}{k}$,
-giving $\int_{0}^{\frac{\pi}{4}}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}e^{-2kit}}{k}\,dt$.
-Just some thoughts. I doubt if I am on to anything. I used a technique similar to this when solving $\int_{0}^{\frac{\pi}{2}}x\ln(\sin(x))dx$.
-But, how in the world would the Catalan constant come into the solution? $K=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{2}}\approx .916$
-Your learned input is appreciated.
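-(A numerical sanity check of the claimed identity, added for reference; this mpmath sketch proves nothing, but the two sides do agree to working precision:)
-
-    from mpmath import mp, quad, log, cos, pi, catalan
-
-    mp.dps = 30
-    lhs = quad(lambda t: log(cos(t)), [0, pi / 4])
-    rhs = -pi * log(2) / 4 + catalan / 2
-    print(lhs, rhs)   # both are about -0.0864137...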
- -REPLY [3 votes]: $\newcommand{\+}{^{\dagger}} - \newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} - \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} - \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} - \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} - \newcommand{\dd}{{\rm d}} - \newcommand{\down}{\downarrow} - \newcommand{\ds}[1]{\displaystyle{#1}} - \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} - \newcommand{\fermi}{\,{\rm f}} - \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} - \newcommand{\half}{{1 \over 2}} - \newcommand{\ic}{{\rm i}} - \newcommand{\iff}{\Longleftrightarrow} - \newcommand{\imp}{\Longrightarrow} - \newcommand{\isdiv}{\,\left.\right\vert\,} - \newcommand{\ket}[1]{\left\vert #1\right\rangle} - \newcommand{\ol}[1]{\overline{#1}} - \newcommand{\pars}[1]{\left(\, #1 \,\right)} - \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} - \newcommand{\pp}{{\cal P}} - \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} - \newcommand{\sech}{\,{\rm sech}} - \newcommand{\sgn}{\,{\rm sgn}} - \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} - \newcommand{\ul}[1]{\underline{#1}} - \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert} - \newcommand{\wt}[1]{\widetilde{#1}}$ -$\ds{\int_{0}^{\pi/4}\ln\pars{\cos\pars{t}}\,\dd t - =-\,{\pi\ln\pars{2} \over 4} + {K \over 2}:\ {\large ?}}$ where -$\ds{K \equiv - \sum_{n = 0}^{\infty}{\pars{-1}^{n} \over \pars{2n + 1}^{2}} \approx 0.9160}$ is the Catalan Constant. - -\begin{align} -&\color{#c00000}{\int_{0}^{\pi/4}\ln\pars{\cos\pars{t}}\,\dd t} -=-\,{\pi\ln\pars{2} \over 4} + \int_{0}^{\pi/4}\ln\pars{2\cos\pars{t}}\,\dd t -\\[3mm]&=-\,{\pi\ln\pars{2} \over 4} + \half\int_{0}^{\pi/4}\ln\pars{\cot\pars{t}}\,\dd t -+\int_{0}^{\pi/4}\ln\pars{2\cos\pars{t} \over \cot^{1/2}\pars{t}}\,\dd t -\end{align} - Since ( see this link ) $\ds{K = \int_{0}^{\pi/4}\ln\pars{\cot\pars{t}}\,\dd t}$: - $$ -\color{#c00000}{\int_{0}^{\pi/4}\ln\pars{\cos\pars{t}}\,\dd t} -=-\,{\pi\ln\pars{2} \over 4} + {K \over 2} -+ \half -\color{#00f}{\int_{0}^{\pi/4}\ln\pars{4\cos^{2}\pars{t} \over \cot\pars{t}}\,\dd t} -\tag{1} -$$ - -The problem is reduced to show that the "$\color{#00f}{\mbox{blue integral}}$" -vanishes out: -\begin{align} -&\color{#00f}{\int_{0}^{\pi/4}\ln\pars{4\cos^{2}\pars{t} \over \cot\pars{t}}\,\dd t} -=\int_{0}^{\pi/4}\ln\pars{4\sin\pars{t}\cos\pars{t}}\,\dd t -=\int_{0}^{\pi/4}\ln\pars{2\sin\pars{2t}}\,\dd t -\\[3mm]&=\half\int_{0}^{\pi/2}\ln\pars{2\sin\pars{t}}\,\dd t -={1 \over 4}\,\pi\ln\pars{2} -+ \half\,\lim_{\mu \to 0}\partiald{}{\mu} -\int_{0}^{1}t^{\mu}\pars{1 - t^{2}}^{-1/2}\,\dd t -\\[3mm]&={1 \over 4}\,\pi\ln\pars{2} -+ {1 \over 4}\,\lim_{\mu \to 0}\partiald{}{\mu} -\int_{0}^{1}t^{\pars{\mu - 1}/2}\pars{1 - t}^{-1/2}\,\dd t -\\[3mm]&={1 \over 4}\,\pi\ln\pars{2} -+ {1 \over 4}\,\lim_{\mu \to 0}\partiald{}{\mu}\bracks{% -\Gamma\pars{\mu/2 + 1/2}\Gamma\pars{1/2} \over \Gamma\pars{\mu/2 + 1}} -\\[3mm]&={1 \over 4}\,\pi\ln\pars{2} -+ {1 \over 8}\,{\Gamma\pars{1/2} \over \Gamma\pars{1}}\ -\bracks{\overbrace{\Psi\pars{\half} - \Psi\pars{1}} -^{\ds{-2\ln\pars{2}}}}\, -\overbrace{\Gamma\pars{\half}}^{\ds{\root{\pi}}} = \color{#00f}{\large 0} -\quad\mbox{since}\quad\Gamma\pars{1} = 1.\tag{2} -\end{align} -$\ds{\Gamma\pars{z}}$ and $\ds{\Psi\pars{z}}$ are the Gamma and Digamma Functions, respectively. 
- -$\pars{1}$ and $\pars{2}$ lead to: - $$ -\color{#00f}{\large\int_{0}^{\pi/4}\ln\pars{\cos\pars{t}}\,\dd t -=-\,{\pi\ln\pars{2} \over 4} + {K \over 2}} -$$<|endoftext|> -TITLE: A first order sentence such that the finite Spectrum of that sentence is the prime numbers -QUESTION [8 upvotes]: The finite spectrum of a theory $T$ is the set of natural numbers such that there exists a model of that size. That is $Fs(T):= \{n \in \mathbb{N} | \exists \mathcal{M}\models T : |\mathcal{M}| =n\}$ . What I am asking for is a finitely axiomatized $T$ such that $Fs(T)$ is the set of prime numbers. -In other words in what specific language $L$, and what specific $L$-sentence $\phi$ has the property that $Fs(\{\phi\})$ is the set of prime numbers? - -REPLY [3 votes]: Consider finite fields equipped with a total order such that for every $x$, if it has an immediate successor, then that successor is $x+1$.<|endoftext|> -TITLE: Logic, set theory, independence proofs, etc -QUESTION [11 upvotes]: I have some big troubles trying to understand specific set theory stuff. -Especially when we demonstrate something about set theory we always have to keep our demonstration in set theory, typically not using second order logic. -For example to demonstrate Löwenheim-Skolem we have to quantify over formulas to explicitly build a countable model. This is a second-order proof, and we usually don't mind doing that. But if one want to show "$\mathrm{ZFC} + \mathrm{Con}(\mathrm{ZFC}) \vdash \exists M$ countable model of ZFC", this should not be possible anymore. -So here are my questions : - -Can "$\mathrm{ZFC} + \mathrm{Con}(\mathrm{ZFC}) \vdash \exists M$ countable model of ZFC" -If yes, how? Since we have to quantify over formulas. -Can "$\mathrm{ZFC} + \mathrm{Con}(\mathrm{ZFC}) \vdash \exists M$ countable and transitive model of ZFC" - -Thanks in advance - -REPLY [9 votes]: First of all you have to ask yourself what does it means when we say $Con(ZFC)$. Can we talk about $ZFC$ as an object inside set theory? Doesn't that require to talk about every formula since Replacement and Separation are schemata of formulas? -The answer lies in the following observation: In $ZFC$ we cannot quantify over metamathematical formulas, that is formulas as objects outside of the universe of set. What we can do though is assign a set to every formula. Then for every formula we have an object in the universe that represents the metamathematical formula. We usually symbolize this as $\ulcorner\phi\urcorner$. -Through this and because of the inductive construction of formulas we can create a formula $Form(x)$ that says "$x$ is a formula" and is true exactly when there is a formula that is assigned to $x$. Furthermore we can create a formula $ZFC(x)$ that says that "$x$ is a formula of $ZFC$". We can create formulas such as $Pr(x)$ that says that using a specific system of logical axiom and inference rules $ZFC$ proves $x$ (through this we can create the sentence $Con(ZFC)$). Also given a set $M$ and a binary relation $E\subset M\times M$ we can create a formula $Sat(M,E,x)$ that says $(M,E)\models x$. Then we can prove about the formulas as objects inside the universe the Lowenheim-Skolem theorem, the completeness theorem, the compactness theorem and every other metamathematical result. $ZFC$ as you know is enough to prove the completeness theorem and so we have that: $$ZFC\vdash Con(ZFC)\iff \exists M\textrm{countable model of}\ ZFC$$ -But is it the same? 
Are the results about these objects that we call formulas inside the universe the same as the results about the metamathematical formulas? The result we can obtain is a schema of theorems that state: $$\phi^{(M,E)}(a_1,\ldots,a_n)\iff(M,E)\models \ulcorner\phi(a_1\ldots,a_n)\urcorner$$ -Notice the following: The left hand side refers to the relativization of the metamathematical formula (hence it's a schema of theorems). Now the right-hand side, the formula I wrote in the previous paragraphs that talks about the satisfiability of a formula is definable only when $M$ is a set. Indeed if we could define satisfiability about classes then we would have a truth definition which contradicts Tarski's theorem. It should be noted that we can define satisfiability in classes when we bound the number of alternating quantifiers of formulas: We can define satisfiability for $\Delta_0$ formulas, and for every natural number $n$ we can define the satisfiability of $\Sigma_{n}$ formulas. -For your last question, I am not certain, though I am under the impression that the existence of a transitive model of $ZFC$ is stronger than the consistency of $ZFC$. This is because the set-model that exists due to the consistency may be not standard and may contain infinite descending $\in$-sequences while it thinks that it's well founded (I am aware of such constructions: for example apply Łoś's theorem using a non-principal ultrafilter that is not $\sigma$-complete). Then it would be impossible to apply Mostowski's collapse to get the standard transitive model. -Edit: Keep in mind that Mostowski's theorem has as requirements that the binary relation is well founded in the universe. So given a model $(M,E)$ we may have $x_{n+1} E x_n$ for every $n\in\omega$ but if the set with extension $\{x_n\ :n\in\omega\}$ is not in $M$ the axiom of foundation will not be violated. In such a case it would be impossible to apply Mostowski's theorem. -Edit2: Here's the model I described above. Take a non-principal ultrafilter $\mathcal{U}$ on $\omega$ (that is $\mathcal{U}\subset\mathcal{P}(\omega)$). Of course $\mathcal{U}$ is not $\sigma$-complete since if it was it would be principal. Now take the ulraproduct of the universe modulo $\mathcal{U}$: The universe will be $V^\omega/\mathcal{U}$ that contains equivalence classes of functions $f:\omega\to V$ defined as $[f]:=\{g:\omega\to V : \{n\in\omega : f(n)=g(n)\}\in\mathcal{U}\}$. There is a slight problem here, namely that these may be classes but it can be solved using Scott's trick to turn these classes into sets. Next we define: -$$[f]=[g]\iff \{n\in\omega : f(n)=g(n)\}\in\mathcal{U}$$ -$$[f]E[g]\iff \{n\in\omega : f(n)\in g(n)\}\in\mathcal{U}$$ -So we have a model $(V^\omega/\mathcal{U},E)$. For every formula $\phi$ you can prove the following result using Łoś's theorem: $$(V^\omega/\mathcal{U},E)\models\phi([f_1],\ldots,[f_n])\iff\{m\in\omega : \phi(f_1(m),\ldots,f_n(m))\}\in\mathcal{U}$$ -Take note here that this is a schema of theorems. Still it is enough to show in ZFC that the model satisfies foundation since it's one singular axiom. -Now take the functions $f_n(m)=m-n$ (in case $m-n<0$ let $f_n(m)=0$). Since $\mathcal{U}$ is non-principal it contains the Fréchet filter and thus for every $n\in\omega$ the set $\{m\in\omega: m\geq n\}\in\mathcal{U}$. Observe that $f_{n+1}(m)\in f_n(m)$ for every $m>n$. Thus $\{m\in\omega: f_{n+1}(m)\in f_n(m)\}\in\mathcal{U}$ and therefore $[f_{n+1}]E[f_n]$. 
This is true for every natural number $n$ while the model satisfies the axiom of foundation.
-Given an inaccessible cardinal $\kappa$ we can create the above construction in $V_\kappa$ and have the fact that $(V_\kappa^\omega/\mathcal{U},E)\models ZFC$ as a theorem, while at the same time we have that there is an infinite descending $E$-sequence.<|endoftext|>
-TITLE: Formal logic proof of absolute value formula
-QUESTION [6 upvotes]: I've been trying to prove this for a long time now; anyone willing to offer some help or get me pointed in the right direction?
-$(x>0 \implies z = x) \wedge (x < 0 \implies z = -x) \implies z \ge 0$
-
-REPLY [4 votes]: Hint: What happens if x = 0? Does that say anything about z?
-
-REPLY [3 votes]: As Robert's answer indicates, the statement is false. However, changing the first $>$ to $\geq$ gives a true statement, which I would prove by contradiction: that is, assume $z < 0$ and $(x \geq 0 \implies z = x) \wedge (x < 0 \implies z = -x)$ and derive a contradiction. This can be done as follows:
-$(z < 0) \wedge ((x \geq 0 \implies z = x) \wedge (x < 0 \implies z = -x))$
-$\implies (((x \geq 0 \implies z = x) \wedge z < 0) \wedge ((x < 0 \implies z = -x) \wedge z < 0))$
-$\implies ((x \geq 0 \implies x < 0) \wedge (x < 0 \implies -x < 0))$
-$\implies ((F \wedge (x < 0 \implies -x < 0)) \vee ((x \geq 0 \implies x < 0) \wedge F))$
-$\implies (F \vee F)$
-$\implies F$
-$\therefore \neg((z < 0) \wedge ((x \geq 0 \implies z = x) \wedge (x < 0 \implies z = -x)))$
-$\therefore (x \geq 0 \implies z = x) \wedge (x < 0 \implies z = -x) \implies z \ge 0$<|endoftext|>
-TITLE: Group-Interview Secretary Problem
-QUESTION [9 upvotes]: The secretary problem is a well-studied optimal stopping problem with a simple solution. Suppose a set of $N$ candidates are interviewed for a secretarial position, one at a time, in random order. Each interviewee must be either accepted on the spot or rejected for good. The goal is to select the single best candidate (i.e., the payoff is 1 if the best candidate is hired, and 0 otherwise). Under these conditions, what is the best hiring policy?
-As it turns out, the optimal strategy is to interview the first $N/e$ candidates and reject them, and then hire the next candidate that is better than all of those. Surprisingly, this strategy leads to hiring the single best candidate one out of every $e$ times (or about $37\%$ of the time), regardless of $N$.
-Now suppose that instead of being interviewed one at a time, the secretaries can be interviewed in groups of up to $k$ (for some fixed $k>1$), after which one of the group may be accepted on the spot or the entire group may be rejected for good. It is not hard to see that this strictly increases the probability of hiring the best candidate. The question is, by how much? Does the probability of making the best hire still tend to $1/e$, or is there a non-zero advantage over the $k=1$ case in the limit of large $N$?
-
-REPLY [4 votes]: This paper explains it better than I could.
-
-http://www3.stat.sinica.edu.tw/statistica/oldpdf/A10n216.pdf<|endoftext|>
-TITLE: Sums of prime powers
-QUESTION [47 upvotes]: You are given positive integers N, m, and k. Is there a way to check if
-$$\sum_{\stackrel{p\le N}{p\text{ prime}}}p^k\equiv0\pmod m$$
-faster than computing the (modular) sum?
-For concreteness, you can assume $e^k<N$. For $k>0$ the sum is superlinear and so cannot be stored directly (without, e.g., modular reduction), but it's not clear whether a fast algorithm exists.
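-For scale, here is a sketch of the baseline the question asks to beat: sieve the primes up to N and fold in $p^k \bmod m$ term by term, which takes time roughly linear in N. (This is my own illustrative code, not part of the original question.)
-
-    def prime_power_sum_mod(N, k, m):
-        # Naive baseline: sieve of Eratosthenes, then sum p^k mod m over primes.
-        sieve = bytearray([1]) * (N + 1)
-        sieve[0:2] = b"\x00\x00"
-        total = 0
-        for p in range(2, N + 1):
-            if sieve[p]:
-                total = (total + pow(p, k, m)) % m
-                sieve[p * p:N + 1:p] = bytearray(len(range(p * p, N + 1, p)))
-        return total
-
-    print(prime_power_sum_mod(10 ** 6, 3, 10 ** 9 + 7))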
-You can think of the problem as "Your friend, who has access to great computational resources, makes the claim (N, m, k). If her claim is true, can you prove it? If her claim is false, can you refute it?".
-Edit: I posted a related problem on cstheory, asking if there is a short proof or interactive proof that the sum is correct.
-
-REPLY [10 votes]: Deléglise-Dusart-Roblot [1] give an algorithm which determines the number of primes up to $x$ that are congruent to $l$ modulo $k,$ in time $O(x^{2/3}/\log^2x).$ A modification of the algorithm of Lagarias-Odlyzko [2] allows the same to be computed in time $O(x^{1/2+o(1)}).$
-So using either algorithm, find the number of primes in all residue classes mod primes up to $\log m.$ For each prime $q,$ take the total number of primes in each residue class times that residue class to the $k$-th power; this gives the value of
-$$\sum_{\stackrel{p\le N}{p\text{ prime}}}p^k\pmod q.$$
-Use the Chinese Remainder Theorem to determine the value of the sum mod $2\cdot3\cdots\log m.$ The Prime Number Theorem ensures that this, or a little more, is greater than $m$ and hence sufficient to determine the result uniquely. (Note that if $m>N^{k+1}/\log N$ or so, the calculations can be done exactly working mod $k\log N$ or so.)
-This gives the sum (mod m or in Z) in time $O(N^{1/2+o(1)})$ since the number of calculations needed is logarithmic.
-References
-[1] Marc Deléglise, Pierre Dusart, and Xavier-François Roblot, Counting primes in residue classes, Mathematics of Computation 73:247 (2004), pp. 1565-1575. CiteSeerX 10.1.1.100.779
-[2] J. C. Lagarias and A. M. Odlyzko, Computing $\pi(x)$: An analytic method, Journal of Algorithms 8 (1987), pp. 173-191.
-[3] Charles, answer on MathOverflow. (Yes, this is the same person. See the other answers there for different approaches.)<|endoftext|>
-TITLE: Why does $L \cap (M + N) = (L \cap M) + (L \cap N) $ not hold for subspaces
-QUESTION [5 upvotes]: Let $L$, $M$, and $N$ be subspaces of a vector space. Prove that the following is not necessarily true:
-$L \cap (M + N) = (L \cap M) + (L \cap N) $
-This problem is given in 'Finite dimensional vector spaces' by Halmos. I was using the 'if a vector belongs to the L.H.S. then it must belong to the R.H.S. and vice versa' argument. I can neither disprove the identity with this argument nor find a case where it fails!
-
-REPLY [7 votes]: Try it in $\mathbb{R}^3$ with $M,N$ subspaces such that $M + N = \mathbb{R}^3$, such as $M$ being the $xy$ plane and $N$ the $z$ axis. Then $L\cap (M + N) = L$, but you should be able to find some $L$ (such as a slanted plane) for which $(L\cap M) + (L\cap N)$ is a strict subset of $L$.<|endoftext|>
-TITLE: Is there a proof that $\pi \times e$ is irrational?
-QUESTION [33 upvotes]: A little reading suggests:
-It is known that either $\pi + e$ or $\pi \times e$ is transcendental (or possibly both), but no proof is known that one of those two numbers in particular is transcendental.
-If we just want irrationality rather than transcendence, is a proof known?
-Can we prove $\pi+e$ is irrational? Can we prove $\pi \times e$ is irrational?
-
-REPLY [25 votes]: It is not known whether $\pi + e$ is irrational, nor whether $\pi \times e$ is irrational. See $\# 22$ here.<|endoftext|>
-TITLE: Can every even integer be expressed as the difference of two primes?
-QUESTION [26 upvotes]: Can every even integer be expressed as the difference of two primes?
-If so, is there any elementary proof?
-
-REPLY [15 votes]: This follows from Schinzel's conjecture H.
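-(Small cases are at least easy to confirm by machine; the following quick empirical sketch, which of course proves nothing in general, verifies every even number up to 1000:)
-
-    from sympy import isprime, primerange
-
-    def is_diff_of_two_primes(even, search_bound=10 ** 6):
-        # purely empirical: look for a prime q with q + even also prime
-        return any(isprime(q + even) for q in primerange(2, search_bound))
-
-    print(all(is_diff_of_two_primes(2 * j) for j in range(1, 501)))   # True
-
-For the conditional argument itself: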
Consider the polynomials $x$ and $x+2k$. Their product equals $2k+1$ at 1 and $4(k+1)$ at 2, which clearly do not have any common divisors. So if Schinzel's conjecture holds, there are infinitely many numbers $n$ such that the polynomials are both prime at $n$, and so subtracting gives the result.<|endoftext|> -TITLE: Reducibility over $\mathbb{Z}_2$? -QUESTION [18 upvotes]: I have seen that $x^{2}+x+1$ and $x^{4}+x+1$ are irreducible over $\mathbb{Z}_2$ and I thought a polynomial of the form $x^{2^m}+x+1$ for $m\ge3$ would be irreducible too. However using WolframAlpha, my hunch was wrong. It could be factored over $\mathbb{Z}_2$. WolframAlpha could only generate the factors for $m=3,\ldots, 13$ and so far, I observed that for odd $m$, my polynomial is divisible by $x^{2}+x+1$. - -I want to see how one can show that $x^{2^m}+x+1$ for $m\ge3$ is reducible. - -REPLY [12 votes]: I would like to present a solution that is completely different from Matt E's method. The starting point is the numerical observation (which I found with PARI) that when you factor $x^{2^n} + x + 1$ in ${\mathbf F}_2[x]$ the number of irreducible factors that occur for $n = 1, 2, 3,...,11$ is -$$ -1, 1, 2, 2, 4, 6, 10, 16, 30, 52, 94. -$$ -The striking thing about those numbers after the second one is that they are all even. The lesson here is that when you were looking at data to understand why these polynomials were usually reducible, you should not only have been looking for some "obvious" irreducible factor, but also at the number of irreducible factors. Without showing there is any one particular "obvious" irreducible factor I am going to explain why, for $n \geq 3$, $x^{2^n} + x + 1$ must have an even number of irreducible factors in ${\mathbf F}_2[x]$. That will show this polynomial is reducible in ${\mathbf F}_2[x]$, since an irreducible polynomial has only one irreducible factor. -In elementary number theory, there is a standard function which counts the parity of the number of prime factors of a positive integer. It's called the Möbius function and is defined as follows: for a positive integer $N$, -$$ -\mu(N) = \begin{cases} -0, & \text{ if } N \text{ has a multiple prime factor}, \\ -(-1)^r, &\text{ if } N = p_1\cdots p_r \text{ where the } p_i\text{'s} \text{ are distinct primes}. -\end{cases} -$$ -So $\mu(N) = 1$ if $N$ is a product of an even number of distinct primes (in particular, $\mu(1) = 1$ using $r = 0$) and $\mu(N) = -1$ if $N$ is a product of an odd number of distinct primes, while $\mu(N) = 0$ if there is some prime factor of $N$ appearing more than once (e.g., $\mu(12) = 0$ since 2 is a factor of 12 twice). So the Möbius function really only counts the parity of the number of prime factors of squarefree positive integers. For any prime $p$, $\mu(p) = -1$. Therefore if $\mu(N) = 1$ then $N$ is definitely not a prime number. -The catch with the Möbius function is that there's no known way to figure out what $\mu(N)$ is without essentially factoring $N$. Well, to be on the safe side, let me just say that if we know $N$ is squarefree, so $\mu(N) = \pm 1$, then figuring out whether $\mu(N)$ is $1$ or $-1$ can't be done without factoring $N$. Put simply, there is no formula for $\mu(N)$ other than its definition. -(Of course, if you haven't seen the Möbius function before, then another catch is that it looks like a completely crazy function. Why is it significant? The reason is the Möbius inversion formula, in which the Möbius function plays a prominent role. 
One of its consequences is that the multiplicative inverse of the Riemann zeta-function $\zeta(s) = \sum_{N \geq 1} 1/N^s$ is $\sum_{N \geq 1} \mu(N)/N^s$, so the Möbius function arises naturally if you want to write $1/\zeta(s)$ as a Dirichlet series.) -There are a lot of analogies between ${\mathbf Z}$ and ${\mathbf F}_p[x]$, for prime $p$, and in particular we can define a Möbius function on monic polynomials in ${\mathbf F}_p[x]$. (Think of monic polynomials as analogous to positive integers.) For any monic $f(x)$ in ${\mathbf F}_p[x]$, set -$$ -\mu(f) = -\begin{cases} -0, & \text{ if } f \text{ has a multiple irreducible factor}, \\ -(-1)^r, & \text{ if } f = \pi_1\cdots\pi_r \text{ where the } \pi_i\text{'s} \text{ are distinct monic irreducibles}. -\end{cases} -$$ -For example, $\mu(\pi) = -1$ when $\pi$ is irreducible, so $\mu(x+c) = -1$ for any constant $c$ in ${\mathbf F}_p$. Note that, as with the classical Möbius function, $\mu(f)$ tells us the parity of the number of monic irreducible factors of $f$ when $f$ is squarefree. But there is a huge difference between ${\mathbf Z}$ and ${\mathbf F}_p[x]$ when it comes to determining that something is squarefree, because there is a method of determining that a polynomial in ${\mathbf F}_p[x]$ is squarefree without factoring it: $f(x)$ is squarefree iff $f(x)$ and $f'(x)$ are relatively prime. -Note that being squarefree in ${\mathbf F}_p[x]$ is the same thing as being separable (no multiple roots), which ties this in with more standard concepts from field theory for polynomials. -Example. In ${\mathbf F}_2[x]$, $x^{2^n}+x+1$ is squarefree since its derivative is $1$, which is definitely relatively prime to the original polynomial. -The really amazing thing is that it's possible to compute $\mu(f)$ without having to factor $f$. -Theorem. If $p$ is an odd prime then $\mu(f) = (-1)^{\deg(f)}(\frac{\text{disc}(f)}{p})$, where $(\frac{\cdot}{p})$ is the Legendre symbol and $\text{disc}(f)$ is the discriminant of $f(x)$. If $p = 2$ and $f(x)$ is squarefree in ${\mathbf F}_2[x]$, let $F(x)$ be a monic lifting of $f(x)$ to ${\mathbf Z}[x]$. -Then $\mu(f) = 1$ if $\text{disc}(F) \equiv 1 \bmod 8$ and $\mu(f) = -1$ if $\text{disc}(F) \equiv 5 \bmod 8$. -It is a theorem of Stickelberger that a monic polynomial in ${\mathbf Z}[x]$ with odd discriminant must have discriminant that is $1 \bmod 4$, so mod 8 the discriminant is 1 or 5. -I will not prove the theorem here, but I'll tell you some background. This theorem is due to Pellet (1878) when $p$ is odd. The case $p=2$ was given by Stickelberger in a short paper in the first ICM in 1897 and Swan rediscovered it in 1962 (Pacific J. Math) in terms of a lifting from ${\mathbf F}_2$ to the 2-adic integers instead of ${\mathbf Z}$. The proof for odd $p$ just needs Galois theory of finite fields, but for $p = 2$ more work is required. I should point out that Pellet, Stickelberger, and Swan were not thinking directly in terms of a Möbius function on polynomials over a finite field. For example, Pellet and Stickelberger wrote the formula with the discriminant term on one side and the rest on the other side with the Möbius value appearing as a power of $-1$: they were studying the "quadratic" nature of the discriminant of a polynomial mod $p$ when the discriminant mod $p$ is nonzero. (The formula in the theorem is trivial when the discriminant is 0, since both sides are 0, but P, S, and S didn't write the formula in that case since it was of no interest to them.) 
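-(The factor counts quoted near the start of this answer are easy to reproduce by machine; a short SymPy sketch, added purely as a cross-check, using its modulus=2 polynomial arithmetic:)
-
-    from sympy import Poly, symbols
-
-    x = symbols('x')
-    for n in range(1, 10):    # larger n works too, just increasingly slowly
-        # count the irreducible factors of x^(2^n) + x + 1 over F_2
-        _, factors = Poly(x ** 2 ** n + x + 1, x, modulus=2).factor_list()
-        print(n, sum(mult for _, mult in factors))
-    # prints 1, 1, 2, 2, 4, 6, 10, 16, 30, matching the counts above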
-To do computations with this theorem, let me recall one of the definitions of the discriminant of a monic polynomial $h(x)$ of degree $d$ over a field: $\text{disc}(h) = (-1)^{d(d-1)/2}\prod_{i=1}^d h'(r_i)$, where $h(x) = \prod_{i=1}^d (x-r_i)$ in a splitting field. In particular, note $(-1)^{d(d-1)/2} = 1$ when $d$ is a multiple of 4.
-Now it's time to get back to your original question. We will apply this theorem to your polynomial $x^{2^n} + x + 1$ in ${\mathbf F}_2[x]$ by showing $\mu(x^{2^n} + x + 1) = 1$ for any $n \geq 3$, where $\mu$ is the Möbius function on ${\mathbf F}_2[x]$. First observe that in characteristic 2, $x^{2^n} + x + 1 = (x+1)^{2^n} + x$.
-Theorem. For any nonconstant polynomial $g(x)$ in ${\mathbf F}_2[x]$, $\mu(g^8 + x) = 1$.
-The conclusion of this theorem is false when $g$ is constant, and we'll see early in the proof how $g$ being nonconstant matters: it means the degree of $g^8+x$ is determined by the $g^8$ part and not the $x$ part.
-Proof of theorem: Let $G(x)$ in ${\mathbf Z}[x]$ be any monic lifting of $g(x)$. (For example, each mod 2 coefficient of $g(x)$ that is 0 or 1 mod 2 could be replaced with the integers 0 or 1.) Then $\deg(G) = \deg(g) \geq 1$, so $G^8 + x$ has degree $8\deg(G)$, which is a multiple of 8. That's the step that required $g$, and hence $G$, to be nonconstant. And the derivative of $G^8 + x$ is $8G^7G' + 1$, so if we let $r_1,\dots,r_D$ be the roots of $G^8 + x$, so that $D$ is a multiple of 8,
-$$
-\text{disc}(G^8 + x) = \prod_{i=1}^D (1 + 8G(r_i)^7G'(r_i)).
-$$
-Since $G^8 + x$ is monic, its roots are algebraic integers, so the number on the right is a product of algebraic integers which are all 1 plus 8 times an algebraic integer. Therefore their product is 1 plus 8 times an algebraic integer. Since the left side $\text{disc}(G^8 + x)$ is in ${\mathbf Z}$, the number on the left must be $1 \bmod 8$, so by the Möbius formula in ${\mathbf F}_2[x]$, $\mu(g^8 + x) = 1$. QED
-When $n \geq 3$, $2^n$ is a multiple of 8 and $(x+1)^{2^n} + x = g^8 + x$ for $g(x) = (x+1)^{2^{n-3}}$, so this theorem implies $x^{2^n} + x + 1$ is reducible over ${\mathbf F}_2$ because we showed it has an even number of irreducible factors. While Matt E's solution is much shorter than mine, and is a pleasant application of Galois theory of finite fields (I'll have to remember it for the next time I teach a Galois theory course), notice the theorem above is a much more general result: $g^8 + x$ is reducible for any nonconstant $g$ in ${\mathbf F}_2[x]$ because it has an even number of irreducible factors.
-There is an analogue of this in all characteristics: for any prime $p$,
-$\mu(g^{4p}+x) = 1$ for all nonconstant $g(x)$ in ${\mathbf F}_p[x]$.
-Two settings where this Möbius formula was applied somewhat recently are described in http://www.math.uconn.edu/~kconrad/articles/texel.pdf.<|endoftext|>
-TITLE: Perron-Frobenius Theorem and Graph Laplacians
-QUESTION [7 upvotes]: How can the Perron-Frobenius theorem be used to show that for a connected graph, there is a simple eigenvalue that is (i) real, (ii) smallest in magnitude and (iii) has an associated eigenvector that is positive?
-The graph Laplacian is given as $L = D-A$, where $A$ is the non-negative adjacency matrix of the graph. The Perron-Frobenius theorem allows us to state that
-$\rho(A) > 0$ and is a simple eigenvalue of $A$
-$Ax = \rho(A)x$ with all elements of $x$ positive.
-The matrix $D$ is diagonal with positive elements.
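-(A tiny numerical illustration of what I am asking, for the path graph on 4 vertices; plain numpy, included only to make the claims below concrete:)
-
-    import numpy as np
-
-    A = np.array([[0, 1, 0, 0],     # adjacency matrix of the path 1-2-3-4
-                  [1, 0, 1, 0],
-                  [0, 1, 0, 1],
-                  [0, 0, 1, 0]], dtype=float)
-    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A
-    w, v = np.linalg.eigh(L)        # L is symmetric, so eigh applies
-    print(np.round(w, 6))           # [0. 0.585786 2. 3.414214]; 0 is simple
-    print(np.round(v[:, 0], 6))     # its eigenvector: a multiple of the all-ones vector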
It is well-known that for a connected graph, 0 is the smallest eigenvalue of $L$ and it is simple, and $x=\mathbb{1}$ (vector of all ones) is the associated positive eigenvector. -I am confused mainly because $L$ is no longer a non-negative (or non-positive) matrix. Any ideas? - -REPLY [5 votes]: If you're positive that the question is about the Laplacian and not the adjacency matrix, then you should consider $\lambda I - L$ for a sufficiently large value of $\lambda$.<|endoftext|> -TITLE: Is there any possibility to do divergent summation with $\sum_{k=1}^{\infty}\exp(\sqrt k) $? -QUESTION [13 upvotes]: Self-studying some properties of the exponential-function I came to the question of ways to assign a value to the divergent sum $$s=\sum_{k=1}^{\infty}\exp(\sqrt k) $$ I have no idea how to attack this with standard methods (I do not know many). -I tried a replacement using reversing order of summation when $s$ is expressed as double sum and introducing zeta at negative half-integer values. I wrote the double-sum as -$$ \begin{array} {rr} s &=& \sum_{k=1}^{\infty} \sum_{j=0}^{\infty} \frac{k^{j/2}}{j!} - &=& \sum_{j=0}^{\infty} \frac{\sum_{k=1}^{\infty} k^{j/2}}{j!} - &=& \sum_{j=0}^{\infty} \frac{\zeta(-j/2)}{j!} -\end{array}$$ -With this Pari/GP gives me about $ s=-0.753717005339 $ . -Moreover I'm curious whether I have to extend the index to $-2$ to get formally $$s = -1 + 0 + \sum_{k=0}^{\infty}\frac{\zeta(-k/2)}{k!}$$ where I've set $\frac{\zeta(0.5)}{(-1)!} = 0 $ and $\frac{\zeta(1)}{(-2)!} =\frac{(-1)!}{(-2)!} =-1 $ -to arrive at $ s_1=-1.753717005339 $ -But this is just a shot in the dark. I've tried with another trick which works in some circumstances to sum the alternating series first, and then add an alternating partial series etc, like -$$s=\sum_{k=1}^{\infty} (-1)^{k-1} \exp(\sqrt k) + 2*\sum_{k=1}^{\infty} (-1)^{k-1} \exp(\sqrt{ 2 k}) + 4*... $$ but this doesn't help either since all sums are positive and I get another divergent and monotonuously increasing sequence of partial sums. One of the problems is, that regular summations cannot sum divergent series if all summands are positive and also increase, so my standard tools fail here. -Q: how else could I sum that series? -[edit]: Things begin to complicate... I tried another approach and got a result with a suspicious integer difference. I look at the formal powerseries -$$ g(x) = \exp(\sqrt{1+x}-1) = 1 + g_1 \frac{x}{1!}+ g_2 \frac{x^2}{2!}+ \ldots $$ -and $$ \begin{array} {ll} t&=&e*(g(0)+g(1)+g(2)+g(3)+...) \\\ &=& e*g(0) + e*(g(1)+g(2)+g(3)+\ldots) \\\ &=& e*g(0)+e*t_1 \end{array} $$ -Then, by the same principle of reordering summation of the formal doubleseries we get another sum of zetas, but now at negative integer arguments - $$ \begin{array} {ll} t_1 &=& g(1)+g(2)+\ldots \\\ &=& 1*\zeta(0)+g_1*\frac{\zeta(-1)}{1!} +g_2*\frac{\zeta(-2)}{2!}+\ldots \end{array} $$ if that reordering makes sense. -Interestingly the value which I get by this is $t=1.246282994682$ where much interestingly $s=t-2$ . This does not yet confirm one value over the other. But to have just a simple integer-difference seems to tell, that in principle these paths of computation are not completely meaningless (?) -[Edit2]: using Ramanujan-summation as shown in the wikipedia-link I arrive at the same latter value of about $1.2462$ which is again $s+2$. What puzzles me is, that I'm used to negative values for divergent sums of increasing positiv terms. Did I miss something in the Ramanujan-formula? 
I used the formula $$C(a) = \int_0^a f(t) dt - \frac12 f(0) - \ldots $$ where I insert my $g(x)$ above for $f(x)$ using $a=0$ (and thus the integral-term being zero). -[Edit3]: It seems, that the method at [edit1] is just the Ramanujan-method where the Bernoulli-numbers are translated to the respective zeta-values [Edit4]: I had to correct the sum-formula; the factorials had to be removed -Then the wikipedia-formula reads $$C(a) = \int_0^a f(t) dt + \sum_{k=0}^{\infty} \zeta(-k) \frac{f^{(k)}(0)}{k!} $$ Because we have a power series, the k'th derivative $f^{(k)}(0)=f_k*k! $ and we can replace this in the formula, cancelling the factorial. Furtherly I had used the function g(x), so $$ t=e + e*\sum_{k=0}^{\infty} \zeta(-k) g_k $$ should equal $s$ . Unfortunately, this is also a divergent sum, but can be Euler-/Borel-summed. -Interestingly, the integral-term $ \int_a^b f(t) dt $ gives just the mysterious number 2 : $$ e*\int_{-1}^0 g(t) dt = \int_{0}^1 \exp(\sqrt{t}) dt = 2 $$ according to wolfram-alpha . -With this it seems I can use the much better summable (if not convergent) series -$$ s=\sum_{j=0}^{\infty} \frac{\zeta(-j/2)}{j!} $$ and determine that Ramanujan-summ -$$ t = \int_{0}^1 \exp(\sqrt{t}) dt + s $$ -[Edit5] It seems, that the possibility for computations is coherent for other exponents in the basic series. If I generalize -$$ \begin{array} {rr} S_q &=& \sum_{k=1}^{\infty} \exp(k^q) \\ - g_q(x) &=& \exp((1+x)^q-1) &=& \exp((1+x)^q)/e \\ - Ca_q &=& \int_0^1 \exp(x^q) - \end{array} $$ -and $t_q$ and $s_q$ accordingly, then for some positive fractional q I get the following results. -$$ \small{ -\begin{array} {rrrr} - q& t_q & Ca_q & & s_q & C_q(0) \\ - 1.00000000000 & 1.13630512159 & 1.71828182846 &(= 1e - 1) & -0.581976706869 & 1.13630512286 \\ - 0.500000000000 & 1.24628299466 & 2.00000000000 &(=- 0e + 2)& -0.753717005339 & 1.24628299491 \\ - 0.333333333333 & 1.28422772983 & 2.15484548538 &(= 3e - 6)& -0.870617755549 & 1.28422772997 \\ - 0.250000000000 & 1.30316006154 & 2.25374537233 &(=-8e + 4!) & -0.950585310784 & 1.30316006162 \\ - 0.200000000000 & 1.31447347236 & 2.32268228066 &(=45e - 5!)& -1.00820880830 & 1.31447347240 \\ - 0.166666666667 & 1.32198952281 & 2.37359728681 &(=-264e + 6!)& -1.05160776400 & 1.32198952284 \\ - 0.142857142857 & 1.32734318867 & 2.41279179153&(=1855e - 7!) & -1.08544860285 & 1.32734318869 \\ - 0.125000000000 & 1.33134949700 & 2.44392029544 &(=-14832e + 8!)& -1.11257079844 & 1.33134949702 \\ - 0.111111111111 & 1.33445988876 & 2.46925379716 &(=1334978e - 9!)& -1.13479390840 & 1.33445988878 \\ - 0.100000000000 & 1.33694450266 & 2.49028031297 &(=-A240(10)e + 10!)& -1.15333581032 & 1.33694450267 \\ - 0.0909090909091 & 1.33897484256 & 2.50801667035 &(=A240(11)e - 11!)& -1.16904182779 & 1.33897484257 \\ - 0.0833333333333 & 1.34066501173 & 2.52318189730 &(=-A240(12)e + 12!)& -1.18251688557 & 1.34066501173 - \end{array} } -$$ -where $t_q$ in the second column is the assumed sum computed by the method $ t_q = Ca_q + s_q $ . The $A240(k)$-entries are also found in the sequence A000240 in OEIS beginning at $k=1$. -The last column is the same result computed by the Ramanujan sum $C(0)$ as denoted in the wikipedia-article (and my translation into the $g_q()$-function). That terms are always a diverging sequence of partial sums, so their Euler-sum is documented here for comparision of accuracy. -A plot of $q$ and $t_q$ looks like a nearly linear (negative) relation. -It is still open, which value ($t_q$ or $s_q$) should be taken as final sum. 
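-(For anyone who wants to reproduce the table: the $s_q$ and $t_q$ columns can be recomputed with mpmath instead of Pari/GP. A sketch; the 80-term cutoff is my own choice and is well past numerical convergence at this precision:)
-
-    from mpmath import mp, zeta, factorial, quad, exp
-
-    mp.dps = 25
-
-    def s(q, terms=80):
-        # s(q) = sum_{j>=0} zeta(-j*q)/j!, the reordered double sum
-        return sum(zeta(-j * q) / factorial(j) for j in range(terms))
-
-    def t(q):
-        # t(q) = integral_0^1 exp(x^q) dx + s(q)
-        return quad(lambda x: exp(x ** q), [0, 1]) + s(q)
-
-    print(s(0.5))   # -0.753717005...
-    print(t(0.5))   # 1.246282994...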
-Note that using $q=1$ we should get $ S_1 = e^1 + e^2 + e^3 + ... = \frac{e}{1-e} \approx -1.58197670687 $ where only $s_q$ comes close (it misses by 1). -The $t_q$ value for $q=1$, however, seems obscure; the correct value would be $ S_1 = t_q - e = \frac{e}{1-e} $ . I do not know what this tells us. - -REPLY [4 votes]: Hmm, I'm answering myself with the best hypothesis - I think I got it now. -+++I had to correct the computation-procedure, all earlier data are replaced+++ -More insight was possible after generalization of the problem, where I consider not only the square root in the running exponent but a general fractional power $q$ : -$$S(q) = \exp(1^q) + \exp(2^q) + \exp(3^q) + \ldots \qquad 0\le q\le 1 \tag 1$$ -Then we also have two values of $q$ for which solutions are known: for $q=1$ we have the geometric series with quotient $e$, and for $q=0$ we have $e*\zeta(0)$ -$$ S(0) = e^1 + e^1 + e^1 + e^1 + \ldots = e*\zeta(0) \approx -1.3591 \tag 2$$ -$$ S(1) = e^1 + e^2 + e^3 + e^4 + \ldots = \frac{e}{1-e} \approx -1.5819 \tag 3$$ -I'll denote $$f_q(x) = \exp(x^q) = \sum_{k=0}^{\infty} \frac{x^{qk}}{k!} \tag 4$$ and -$$g_q(x)= \exp((1+x)^q-1)=\exp((1+x)^q)/e =f_q(x+1)/e = \sum_{k=0}^{\infty} g_k\frac{x^{k}}{k!} \tag 5$$ -The Ramanujan summation approximates these values well, so I use it as the reference solution. For notational ease I rewrite the Wikipedia formula in terms of zeta values instead of Bernoulli numbers, with the derivatives of $f$ cancelled against the factorials so that just the coefficients $f_k$ remain; so $$ C(a) = \int_0^a f_q(t)dt + \sum_{k=0}^{\infty} \zeta(-k)f_k \tag 6$$ Denote the integral with a name: $c_{q,a} =\int_0^a f_q(t)dt$ -Now, because the derivatives of $f$ at zero are infinite, we can use the representation by the $g$-coefficients: $$ C(a) = c_{q,a} + e*\sum_{k=0}^{\infty} \zeta(-k)g_k \tag 7$$ -Now we should use $C(0)$ for the Ramanujan sum, so $c_{q,a}$ vanishes and -$$ R(q) = e*\sum_{k=0}^{\infty} \zeta(-k)g_k \tag 8$$ -Unfortunately the sum does not yet converge, so that expression must again be evaluated with a divergent-summation procedure. With that summation I get good approximations to the reference values -$$ \begin{array} {rclcl} R(0) &\approx& -1.3591...&=& S(0)\\ - R(0.5) &\approx& -1.4719988...&???& S(0.5) \\ - R(1) & \approx & -1.5819... &=& S(1) \end{array} \tag 9$$ -where $R(0.5)$ is a new value for the divergent sum $ S(0.5)=\sum_{k=1}^{\infty} e^{\sqrt k}$. --- -The divergent sum in the $R()$-computation can be replaced by an adaptation of the summation method I described first. By changing the summation order in the double series for $S(q)$ I arrive at the formal expression -$$s_0(q) = \sum_{k=0}^{\infty} \frac{\zeta(-kq)}{k!} \tag {10}$$ -With this Pari/GP gives me about $ s_0(1/2)=-0.753717005339 $ . -Two corrections seem to identify this with the $R(q)$-sums: the integral $c_{q,1}$ and one instance of $e$: -$$ G(q) = c_{q,1} - e + s_0 \tag {11}$$ -Unfortunately I've not yet understood why those specific corrections are required. --- -Here is a plot for $s(q)$ where $q=0 \ldots 1.5 $ (the previously displayed plot had wrong data and is replaced) - -(The line is not perfectly linear)<|endoftext|> -TITLE: General Introduction to Functional and other Mathematic Notations -QUESTION [8 upvotes]: I've been a programmer for a good while now. I'm fairly experienced at a bit of math as far as coming up with algorithms and such, but I am far, far behind on understanding quite a deal of notation.
-Here and there I run into an issue where someone will notify me that I've reinvented some piece of calculus, trig, or some other field. Occasionally this makes for some interesting code and all, but I've begun to think that I could very often avoid this by being able to read and write standard notation more fluently. -When it comes to this area, I'm honestly a complete newb. Are there any good introductions or resources that can help get me on a clear path to understanding? -I have some concept of simple functions, but not much. My tendency in studying has been to find myself too deep in something too complicated too quickly, and then forget everything. - -For instance, to borrow from another open bounty at this time, I cannot read the following: - -$$\sum_{n=-\infty}^\infty J_n(x) J_{n+m}(x) = \delta(m)$$ -My mind is stuck in code, help me out of my cave! :) - -REPLY [3 votes]: http://dlmf.nist.gov/ is a good site for special functions, etc. Thus, for the given example, searching for J(x) in the search box quickly leads to the sections about Bessel functions.<|endoftext|> -TITLE: If f is surjective and g is injective, what is $f\circ g$ and $g\circ f$? -QUESTION [5 upvotes]: Say I have $f=x^2$ (surjective) and $g=e^x$ (injective), what would $f\circ g$ and $g\circ f$ be? (injective or surjective?) -Both $f$ and $g : \mathbb{R} \to \mathbb{R}$. -I've graphed these out using Maple but I don't know how to write the proof, please help me! - -REPLY [9 votes]: Examples to keep in mind for questions like this: - -Take $X = \{1\}$, $Y = \{a,b\}$, $Z =\{\bullet\}$. Let $f\colon X\to Y$ be given by $f(1)=a$, and $g\colon Y\to Z$ given by $g(a)=g(b)=\bullet$. -Then $g\circ f\colon X\to Z$ is bijective; note that $f$ is injective but not surjective, and that $g$ is surjective but not injective. So, injectivity of the composite function cannot tell you anything about injectivity of the last function applied; and surjectivity of the composite function cannot tell you anything about surjectivity of the first function applied. -As above, but now take $Y = \{a,b\}$, $Z=\{\bullet\}$, and $W=\{1,2\}$. Let $g\colon Y\to Z$ be given by $g(a)=g(b) = \bullet$, and $h\colon Z\to W$ be given by $h(\bullet) = 1$. Then $h\circ g\colon Y\to W$ maps both $a$ and $b$ to $1$. Note that $g$ is surjective, $h$ is injective, but $h\circ g$ is neither. So: surjective followed by injective could be neither. - -Playing around with similar examples will show you that injective followed by surjective may also be neither. For instance, modify the first example above a bit, say $X = \{1,2\}$, $Y = \{a,b,c\}$, $Z = \{\bullet,\dagger\}$, $f\colon X\to Y$ given by $f(1)=a$, $f(2)=b$ (injective), and $g\colon Y\to Z$ given by $g(a)=g(b)=\bullet$, $g(c)=\dagger$ (surjective). Is $g\circ f$ injective? Is it surjective? - -REPLY [2 votes]: When you write $x$ in $f(x)=x^2$, it is a "dummy variable" in that you can put in anything in the proper range (here presumably the real numbers). So $f(g(x))=(g(x))^2$. Then you can expand the right side by inserting what you know about $g(x)$. Getting $g(f(x))$ is similar. Then for the injective/surjective part you could look at this question - -REPLY [2 votes]: You should specify the domains and codomains of your functions. -I guess that -$f:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$, but there are some other natural definitions you could make. -You can write down the compositions explicitly: -$f\circ g:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$ has $x\mapsto (e^x)^2=e^{2x}$.
-This is injective (since $x\mapsto e^x$ is injective) and not surjective, since 0 is not in the image. -$g\circ f:\mathbb{R}\rightarrow\mathbb{R}$, $x\mapsto e^{x^2}$, is neither injective nor surjective.<|endoftext|> -TITLE: How to prove the following about a group G? -QUESTION [7 upvotes]: How does one prove that: - -for all $a\in G$, where $G$ is a group (not necessarily abelian) $a^{|G|} = 1_G$. - -REPLY [7 votes]: First, you restrict yourself to finite groups (otherwise, the statement does not make sense). -Then, you apply Lagrange's Theorem to the subgroup $\langle a \rangle$ to conclude that the order of $a$ divides $|G|$. - -REPLY [3 votes]: This is a standard fact, which I think is proved in all abstract algebra books. Here is the proof given in the book by Herstein. See Corollary 2. -Corollary 1: If $G$ is a finite group, and $a \in G$, then $o(a) \mid o(G)$. -Proof. Consider the cyclic subgroup generated by $a$, consisting of $a,a^{2},a^{3},\ldots$. Since $a^{o(a)}=e$, this subgroup has at most $o(a)$ elements. If it has fewer elements, then $a^{i}=a^{j}$ for some integers $0 \leq i < j < o(a)$. Then $a^{j-i}=e$, yet $0< j-i < o(a)$, which contradicts the meaning of $o(a)$. Thus the cyclic subgroup generated by $a$ has $o(a)$ elements, and hence by Lagrange's theorem $o(a) \mid o(G)$. -Corollary 2: If $G$ is a finite group and $a \in G$, then $a^{o(G)}=e$. -Proof. By Corollary 1, we have $o(a) \mid o(G)$, which implies $o(G)=k \cdot o(a)$; therefore $a^{o(G)}=a^{k \cdot o(a)} = (a^{o(a)})^{k} = e^{k}=e$.<|endoftext|> -TITLE: Continuity of a function that maps a point to the closest point on a compact convex set -QUESTION [12 upvotes]: Let $K$ be a nonempty compact convex subset of $\mathbb R^n$ and let $f$ be the function that maps $x \in \mathbb R^n$ to the unique closest point $y \in K$ with respect to the $\ell_2$ norm. I want to prove that $f$ is continuous, but I can't seem to figure out how. -My thoughts: Suppose $x_n \to x$ in $\mathbb R^n$. Let $y_n = f(x_n)$ and let $y = f(x)$. By the compactness of $K$, there is a convergent subsequence $(y_{k_n})$ that converges to some $y' \in K$. If $y \ne y'$, then $\|x-y\| < \|x-y'\|$. Furthermore, any point $z \ne y$ on the line segment joining $y,y'$ also satisfies $\|x-y\|<\|x-z\|$. I don't know where to go from here. Any tips? - -REPLY [4 votes]: Another way, for the record: -Since $f(a)$ is optimal, $$(f(b) - f(a))\cdot (a - f(a)) \le 0.$$ Similarly, since $f(b)$ is optimal, $$(f(b)-f(a))\cdot(f(b) - b)\le 0.$$ -When we sum these two inequalities and rearrange, we get -$$\begin{align}\|f(a)-f(b)\|^2&\le (f(b)-f(a))\cdot(b-a) \\ & \le \|a-b\|\|f(a)-f(b)\|,\end{align}$$ -with the second inequality by Cauchy-Schwarz. Dividing through, we are done.<|endoftext|> -TITLE: Nice proofs of $\zeta(4) = \frac{\pi^4}{90}$? -QUESTION [116 upvotes]: I know some nice ways to prove that $\zeta(2) = \sum_{n=1}^{\infty} \frac{1}{n^2} = \pi^2/6$. For example, see Robin Chapman's list or the answers to the question "Different methods to compute $\sum_{n=1}^{\infty} \frac{1}{n^2}$?" -Are there any nice ways to prove that $$\zeta(4) = \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{90}?$$ -I already know some proofs that give all values of $\zeta(n)$ for positive even integers $n$ (like #7 on Robin Chapman's list or Qiaochu Yuan's answer in the linked question). I'm not so much interested in those kinds of proofs as I am those that are specifically for $\zeta(4)$.
-I would be particularly interested in a proof that isn't an adaptation of a proof that $\zeta(2) = \pi^2/6$. - -REPLY [2 votes]: I would like to present a method based on Daners' derivation of $\zeta(2)$ (which I summarised on Math Stack Exchange here a couple of years ago). -We start by defining (for all $n \in \mathbb{N}_0$) -\begin{align} -A_n & =\int_0^{\pi/2}\cos^{2n}x\;\mathrm{d}x,\\ -B_n& =\int_0^{\pi/2}x^2\cos^{2n}x\;\mathrm{d}x,\\ -C_n& =\int_0^{\pi/2}x^4\cos^{2n}x\;\mathrm{d}x -\end{align} -and let $\beta_n = B_n/A_n$ and $\gamma_n = C_n/A_n$. -The first integral, $A_n$, is well known and satisfies the recurrence relation -$$A_{n}=\frac{2n-1}{2n}A_{n-1}.\tag{1}$$ -Integrating by parts twice in the $A_n$ integral, we get: -$$A_n=\int_0^{\pi/2}\cos^{2n}x\;\mathrm{d}x=x\cos^{2n}x\bigg|_0^{\pi/2}-\frac{x^2}{2}(\cos^{2n}x)'\bigg|_0^{\pi/2}+\frac{1}{2}\int_0^{\pi/2}x^2(\cos^{2n}x)''\;\mathrm{d}x$$ -The first two terms vanish, so only the integral remains, and since $(\cos^{2n}x)''=2n(2n-1)\cos^{2n-2}x-4n^2\cos^{2n}x$, we get for $n\geq 1$: -$$A_n=(2n-1)nB_{n-1}-2n^2B_{n}\tag{2}$$ -Similarly, integrating by parts twice in $B_n$ instead, we get -$$B_n=\frac16 (2n-1)nC_{n-1}-\frac13 n^2C_{n}\tag{3}$$ -Inserting $(1)$ into $(2)$ and $(3)$ and rearranging, we get the following recurrence relations - -$$\frac{1\cdot 2}{4}\frac{1}{n^2}=\beta_{n-1}-\beta_n\tag{4}$$ -$$\frac{3\cdot 4}{4}\frac{\beta_n}{n^2}=\gamma_{n-1}-\gamma_n\tag{5}$$ - -Summing these up, we get, by the telescoping property, -$$\sum_{l=1}^k\frac{1}{2l^2}=\beta_0-\beta_k \qquad \text{and} \qquad \sum_{k=1}^n\frac{3\beta_k}{k^2}=\gamma_0-\gamma_n. \tag{6}$$ -Inserting one sum into the other (expressing $\beta_k$), we get -$$3\beta_0\sum_{k=1}^n\frac{1}{k^2} - \frac32\sum_{k=1}^n\frac{1}{k^2}\sum_{l=1}^k\frac{1}{l^2}=\gamma_0 - \gamma_n. \tag{7}$$ -Note that in general -$$2\sum_{k=1}^n a_k\sum_{l=1}^k a_l = 2\sum_{1\leq l\leq k \leq n}a_k a_l = \left(\sum_{k=1}^n a_k\right)^2+\sum_{k=1}^n a_k^2,$$ -so substituting $a_k = 1/k^2$ into $(7)$ and rewriting $\sum_{k=1}^n 1/k^2$ as $2\beta_0 - 2\beta_n$ everywhere, we get -$$\bbox[10px,#ffd]{\sum_{k=1}^n \frac{1}{k^4} = 4\beta_0^2 - 4\beta_n^2 - \frac{4}{3}\gamma_0 + \frac{4}{3}\gamma_n.}\tag{8}$$ -However, using the inequalities $x^4<(\pi/2)^2 x^2$ and $\frac{2x}{\pi}<\sin x$, valid on $(0,\frac{\pi}{2})$, together with $(1)$, we get -$$0<\gamma_{n-1} <\left(\frac{\pi}{2}\right)^2\beta_{n-1} < \left(\frac{\pi}{2}\right)^4\frac{\int_0^{\pi/2}\sin^2x\cos^{2n-2}x\;\mathrm{d}x}{\int_0^{\pi/2}\cos^{2n-2}x\;\mathrm{d}x}=\left(\frac{\pi}{2}\right)^4\frac{A_{n-1} - A_n}{A_{n-1}} = \left(\frac{\pi}{2}\right)^4\frac{1}{2n}.$$ -Applying the squeeze theorem, we get -$$\lim_{n\rightarrow \infty} \beta_n = \lim_{n\rightarrow \infty} \gamma_n = 0$$ -and hence, taking the limit of $(8)$, -$$\zeta(4) = \lim_{n\rightarrow \infty} \sum_{k=1}^n\frac{1}{k^4} = 4\beta_0^2-\frac{4}{3}\gamma_0 = 4\left(\frac13 \frac{\pi^2}{2^2}\right)^2-\frac{4}{3} \frac{1}{5}\frac{\pi^4}{2^4} = \frac{\pi^4}{90}$$ -This finishes the proof.<|endoftext|> -TITLE: Is Lagrange's theorem the most basic result in finite group theory? -QUESTION [84 upvotes]: Motivated by this question, can one prove that the order of an element in a finite group divides the order of the group without using Lagrange's theorem? (Or, equivalently, that the order of the group is an exponent for every element in the group?) -The simplest proof I can think of uses the coset proof of Lagrange's theorem in disguise and goes like this: take $a \in G$ and consider the map $f\colon G \to G$ given by $f(x)=ax$.
Consider now the orbits of $f$, that is, the sets $\mathcal{O}(x)=\{ x, f(x), f(f(x)), \dots \}$. Now all orbits have the same number of elements and $|\mathcal{O}(e)| = o(a)$. Hence $o(a)$ divides $|G|$. -This proof has perhaps some pedagogical value in introductory courses because it can be generalized in a natural way to non-cyclic subgroups by introducing cosets, leading to the canonical proof of Lagrange's theorem. -Has anyone seen a different approach to this result that avoids using Lagrange's theorem? Or is Lagrange's theorem really the most basic result in finite group theory? - -REPLY [2 votes]: I am late... -Here is a proposal, probably not far from lhf's answer. -For $a \in G$ of order $p$, define the binary relation $x\,\mathcal{R}\,y$ : $\exists k\in \mathbb{N} ; k\ldots$<|endoftext|> -TITLE: Notation for the set created from the combination or permutation of a set -QUESTION [15 upvotes]: For a set $S$ with $n$ elements, the notation for a combination $\binom{n}{k}$, or $C(n, k)$, indicates the number of combinations of $k$ elements from $S$, but how does one indicate the actual set created from combinations of $k$ elements from $S$? That is, $\binom{n}{k}$ is the size of the set I'd like to represent. -Likewise, how would one indicate the actual set of items created from the permutations of $k$ elements, rather than the size of that set? - -REPLY [2 votes]: A common notation for the collection of all size-$k$ subsets of $S$ is $\binom{S}{k}$.<|endoftext|> -TITLE: Prove that the Ky Fan norm satisfies the triangle inequality -QUESTION [5 upvotes]: How can one simply see that the Ky Fan $k$-norm satisfies the triangle inequality? (The Ky Fan $k$-norm of a matrix is the sum of the $k$ largest singular values of the matrix.) - -REPLY [9 votes]: For any $k$-plane $U$ in $\mathbb{C}^n$, let $i_U$ be the inclusion of $U$ into $\mathbb{C}^n$ and let $p_U$ be the orthogonal projection of $\mathbb{C}^n$ onto $U$. -Lemma: Let the singular values of $A$ be $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n$. Then $\max_{U, V} \ \mathrm{Tr} \left( p_V \circ A \circ i_U \right)= \sigma_1+ \cdots +\sigma_k$, where the max is over all pairs of $k$-planes in $\mathbb{C}^n$. -Proof is left for the reader, using his or her favorite definition of singular values. -Then $\max_{U, V} \ \mathrm{Tr} \left(p_V \circ (A+B) \circ i_U\right) \leq \max_{U, V} \ \mathrm{Tr} \left( p_V \circ A \circ i_U \right) + \max_{U, V} \ \mathrm{Tr} \left( p_V \circ B \circ i_U \right)$.<|endoftext|> -TITLE: Elementary question about Cayley Hamilton theorem and Zariski topology -QUESTION [13 upvotes]: A question about a proof of the Cayley-Hamilton theorem using Zariski topology. -"The set $C$ of all matrices of size $n \times n$ (over an algebraically closed field $k$) with distinct eigenvalues is dense in the Zariski topology". -Can we argue as follows? -Since non-empty open sets in the Zariski topology are dense in $k^{n}$, we are done if we can show that the complement of $C$, i.e. the set of all matrices of size $n \times n$ with repeated eigenvalues, is closed in the Zariski topology. -Now, to each matrix $B$ of size $n \times n$, compute its characteristic polynomial $p_{B}$ and associate to this polynomial its discriminant $D(p_{B})$. Now define a map: -$f: \mathbb{A}^{n^{2}} \rightarrow \mathbb{A}^{1}$ given by $f(B)=D(p_{B})$, where $\mathbb{A}^{n}$ denotes affine $n$-space. I'm identifying here $\mathbb{A}^{n^{2}}$ with the set of all $n \times n$ matrices over an algebraically closed field $k$. -Here is my question.
How do we know the map $f$ is continuous with respect to the Zariski topology? If we can show it is continuous, aren't we done? For we can take $\{0\}$, which is closed in $\mathbb{A}^{1}$ because it is finite; so by continuity of $f$, $f^{-1}(\{0\})$ is closed in $\mathbb{A}^{n^{2}}$, and this preimage is exactly the set of all matrices with repeated eigenvalues. - -REPLY [8 votes]: This map is not only continuous. It is actually a morphism of algebraic varieties. To see this, note that the characteristic polynomial of a matrix is a polynomial whose coefficients are polynomials in the entries of the matrix. This is because it is equal to $\det(tI-A)$, and the determinant of a matrix is a polynomial in its entries. Also, the discriminant of a polynomial is again a polynomial in its coefficients, so we see that this map is a polynomial in the entries of $A$. - -REPLY [4 votes]: $f$ is a polynomial, hence it's continuous with respect to the Zariski topology. You are almost done: you need to show that the complement of $f^{-1}(0)$ is non-empty, and this you know since there is a matrix with distinct eigenvalues (e.g. a suitable diagonal matrix).<|endoftext|> -TITLE: Incremental calculation of inverse of a matrix -QUESTION [11 upvotes]: Does there exist a fast way to calculate the inverse of an $N \times N$ matrix, if we know the inverse of the $(N-1) \times (N-1)$ sub-matrix? -For example, if $A$ is a $1000 \times 1000$ invertible matrix for which the inverse is known, and $B$ is a $1001 \times 1001$ matrix obtained by adding a new row and column to $A$, what is the best approach for calculating the inverse of $B$? - -REPLY [10 votes]: Blockwise inverse<|endoftext|> -TITLE: What kind of completeness is the completeness of $\mathbb{R}$? -QUESTION [8 upvotes]: As opposed to the algebraic completion of $\mathbb{Q}$, which yields the algebraic numbers, we can say that $\mathbb{R}$ is complete in the sense that every non-empty subset of $\mathbb{R}$ bounded from above has a supremum. -So, it isn't algebraically complete, but is it topologically or metrically complete? What would be the right word to describe its completeness? -Thanks. - -REPLY [9 votes]: The reals are complete as a metric space (http://en.wikipedia.org/wiki/Real_number#Completeness) and as an ordered set in the sense of Dedekind (http://en.wikipedia.org/wiki/Dedekind_completion), and also categorically as the unique complete Archimedean ordered field.<|endoftext|> -TITLE: Archimedean property -QUESTION [15 upvotes]: I've been studying the axiomatic definition of the real numbers, and there's one thing I'm not entirely sure about. -I think I've understood that the Archimedean axiom is added in order to discard ordered complete fields containing infinitesimals, like the hyperreal numbers. Additionally, this property clearly cannot be derived solely from the axioms of ordered field and completeness, since $^*\mathbb{R}$ and $\mathbb{R}$ are two complete ordered fields, two models of the axioms, one of them Archimedean and the other non-Archimedean. Are these ideas correct? -Thanks. - -REPLY [20 votes]: The answer to your question depends critically on what you mean by a "complete ordered field" $(F,<)$. Here are two rival definitions: -1) [added: sequentially] Cauchy complete: every Cauchy sequence in $F$ converges. -2) Dedekind complete: every nonempty subset $S \subset F$ which is bounded above has a least upper bound.
-(There are in fact many other axioms equivalent to 2): that every bounded monotone sequence converges, that $F$ is connected in the order topology, that the principle of ordered induction holds, and so forth.) -It turns out that there is a unique Dedekind complete ordered field up to (unique!) isomorphism, namely the real numbers $\mathbb{R}$. Famously $\mathbb{R}$ is also Cauchy complete -- or, if you like, Dedekind complete ordered fields satisfy the Bolzano–Weierstrass theorem, which is enough to make Cauchy sequences converge -- so that Dedekind completeness implies Cauchy completeness. -The converse is true with an additional hypothesis: an Archimedean Cauchy-complete field is Dedekind complete. I show this in $\S 12.7$ of these notes using somewhat more sophisticated methods (namely Cauchy nets). For a more elementary proof, see e.g. Theorem 3.11 of this nice undergraduate thesis. -On the other hand, just as one can take the "Cauchy" completion of any metric space (or normed field) and get a complete metric space (or complete normed field), one can take the Cauchy completion of a non-Archimedean ordered field and get an ordered field which is Cauchy complete but not Dedekind complete. The easiest example of such a field is probably the rational function field $\mathbb{R}(t)$ with the unique ordering that makes $t$ positive and infinitely large. -For some reason these subtleties seem to be hard to find in standard analysis texts. I myself didn't learn about them until rather recently (so, several years after my PhD). I actually wrote up some of this material as supplemental notes for a sophomore-junior level course I am currently teaching on sequences and series...but I have not as yet been able to make myself inflict these notes on my students. I talked about ordered fields in several lectures and it seemed to be one level of abstraction beyond what they could even meaningfully grapple with (so it started to seem a bit pointless).<|endoftext|> -TITLE: Is any transposition a product of simple transpositions? -QUESTION [8 upvotes]: Is any transposition a product of simple transpositions? If yes, how can you prove this? - -REPLY [5 votes]: There is an algorithm that takes a permutation and writes it as a product of simple transpositions. It's called bubble sort. -What you do is: if $f(i) > f(i+1)$, write $f = f' \circ (i,i+1)$. Repeat on $f'$ until you get the identity, and you are left with a product of simple transpositions.<|endoftext|> -TITLE: Why does Cantor's diagonal argument yield uncomputable numbers? -QUESTION [24 upvotes]: As everyone knows, the set of real numbers is uncountable. The most ubiquitous proof of this fact uses Cantor's diagonal argument. However, I was surprised to learn about a gap in my perception of the real numbers: -A computable number is a real number that can be computed to within any desired precision by a finite, terminating algorithm. -Turns out that the set of computable numbers is countable. My mind is effectively blown at this point. -So I'm trying to reconcile this with Cantor's diagonal argument. Wikipedia has this to say: "...Cantor's diagonal argument cannot be used to produce uncountably many computable reals; at best, the reals formed from this method will be uncomputable." - -So much for background information. -Say I have a list of real numbers $.a_{1n}a_{2n}a_{3n}\ldots$ for $n\geq 1$. Why do we get more than just computable numbers if we select digits different from the diagonal digits $a_{ii}$? I.e.
if I make a number $.b_1b_2b_3\ldots$ where $b_i\neq a_{ii}$, why is this number not always computable? -The main issue I'm having is that it seems like I'm "computing" the digits $b_i$ in some sense. Is the problem that I have a choice for each digit? Or is there some other subtlety that I'm missing? - -REPLY [3 votes]: I'd like to take a closer look at the apparent contradiction you get when trying to apply Cantor's diagonal slash to the computable numbers, so let me repeat the argument in somewhat more detail: A real number $r$ is computable if and only if there exists an algorithm which, given $n\in \mathbb{N}$, computes the n-th digit of the decimal expansion of $r$ (care has to be taken because the decimal expansion is not unique in general; in this case the algorithm has to output digits of one expansion consistently). So you pick an explicit machine model (for example, Turing machines; the word "algorithm" means "Turing machine") and an enumeration of all algorithms $A_1,A_2,...$, the outputs of which are called $a_{i,n}$. You construct a new algorithm which, given $n$, computes $a_{n,n}$ (for Turing machines this is essentially a Universal Turing machine) and outputs $b_n$ which is different from $a_{n,n}$. You can make an explicit choice here, for example, $b_n:=2$ if $a_{n,n}=1$, and $b_n:=1$ otherwise (thus avoiding potential problems with non-unique decimal expansions, which end in a sequence of zeroes or nines). This seems to define a computable number which at the same time is uncomputable, because it is different from any number that an $A_i$ defines. -The trouble with this "proof" is that it ignores the termination issues, and thus $a_{i,n}$ isn't even well-defined. You could make several attempts to repair this argument, but you will fail one way or the other. For example, if you say that all $A_i$s that do not terminate on some input $n$ are to be skipped in the enumeration, $i\mapsto a_{i,i}$ is no longer computable, because you would have to filter out the faulty algorithms, which you can't by the undecidability of the halting problem. If you just define $a_{i,n}$ as zero if $A_i$ does not terminate for the input $n$, then again $(i,n)\mapsto a_{i,n}$ is not computable. -So the undecidability of the halting problem is the real issue here, and it implies other notable properties of computable numbers; for example, you can't computably decide if two computable numbers are equal. -(The original last paragraph before the edit, which I will keep here to understand the comments below, was: So the undecidability of the halting problem is the real issue here, and it makes the very notion of computable numbers rather worthless in my opinion (you can't even computably decide if two computable numbers are equal, again because of the halting problem).)<|endoftext|> -TITLE: Is it faster to multiply a matrix by its transpose than ordinary matrix multiplication? -QUESTION [16 upvotes]: I'm writing a program that multiplies a matrix by its transpose, and was trying to find efficiency hacks I could exploit, considering that the two matrices being multiplied are related. Any ideas? - -REPLY [17 votes]: After Sivaram's suggestion I'm turning my comment into an answer. -In general, $(AB)^\textrm{T}=B^\textrm{T}A^\textrm{T}$, so for the product of a matrix with its transpose you have $(AA^\textrm{T})^\textrm{T}=AA^\textrm{T}$; that is, $AA^\textrm{T}$ is symmetric.
You can also see this directly by observing that if the $ij$ entry of $A$ is $a_{ij}$, then the $ij$ entry of $AA^\textrm{T}$ is $\sum_{k=1}^n a_{ik}a_{jk}$, which is the same if you switch $i$ and $j$. Therefore if you compute the entries with $i\leq j$, you get the entries with $i>j$ for free. There are $1+2+3+\cdots+n=\frac{n(n+1)}{2}$ such entries, instead of the $n^2$ you generally need. -(I was thinking of $n$-by-$n$ matrices, but if your input is $m$-by-$n$, this will give you $\frac{m(m+1)}{2}$ entry computations, each of which is a sum of $n$ products.)<|endoftext|> -TITLE: A double series yielding Riemann's $\zeta$ -QUESTION [27 upvotes]: Can you give me some hints to prove the equality: -$$\sum_{m,n=1}^{\infty} \frac1{(m^2+n^2)^2} =\zeta (2)\ G-\zeta(4)=\frac{\pi^2}{6}\ G-\frac{\pi^4}{90}$$ -where $\zeta (t):= \sum\limits_{n=1}^{+\infty} \frac{1}{n^t}$ is the Riemann zeta function and $G := \sum\limits_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)^2} \approx 0.915 965 594$ is Catalan's constant? -I tried some reverse engineering, but I wasn't able to solve the problem at all. Even a good reference would be useful. -Thanks a lot in advance, guys. - -REPLY [17 votes]: This problem can be recast in terms of the famous problem of the number of ways to represent a positive integer as a sum of squares. With this perspective, we can see that the following more general statement is true for any $p > 1$ (so that each of the infinite series actually converges): -$$\sum_{n=1}^{\infty} \frac{1}{(n^2)^p} + \sum_{m,n = 1}^{\infty} \frac{1}{(m^2+n^2)^p} = \left(\sum_{n=1}^{\infty} \frac{1}{n^p}\right) \left(\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^p}\right).$$ -The left-hand side is -$$\sum_{s=1}^{\infty} \frac{n_2(s)}{s^p},$$ where $n_2(s)$ is the number of ways of representing $s$ as the sum of one or of two squares of positive integers, in which order is distinguished (i.e., $1^2 + 2^2$ is counted separately from $2^2+1^2$). -It is known that $n_2(s) = d_1(s) - d_3(s)$ (see eq. 24 in the site linked above), where $d_k(s)$ is the number of divisors of $s$ congruent to $k \bmod 4$. -The first sum on the right side of the equation has (the $p$th powers of) all positive integers as denominators and the second sum on the right has (the $p$th powers of) all odd numbers as denominators. After multiplying those sums together, then, $1/s^p$ (ignoring signs) appears on the right as many times as there are odd divisors of $s$. Each odd divisor of $s$ congruent to $1 \bmod 4$ contributes a $+1/s^p$, and each odd divisor of $s$ congruent to $3 \bmod 4$ contributes a $-1/s^p$. Thus the coefficient of $1/s^p$ on the right side is exactly $d_1(s) - d_3(s)$. Therefore, the right-hand side is also -$$\sum_{s=1}^{\infty} \frac{n_2(s)}{s^p}.$$<|endoftext|> -TITLE: Does every Cauchy net of hyperreals converge? -QUESTION [15 upvotes]: This came up in a discussion with Pete L. Clark on this question on complete ordered fields. I argued that every Cauchy sequence in the hyperreal field is eventually constant, hence convergent; he asked whether the same is true for arbitrary Cauchy nets in $\mathbb{R}^*$. I'm not sure how to deduce this either from the transfer principle ("every Cauchy net converges" is a very second-order statement) or from the ultraproduct construction of $\mathbb{R}^*$. Does anyone know the answer? -(I agree that if $f: \mathbb{N}^* \to \mathbb{R}^*$ is an internal Cauchy net, then $f$ has a limit.) - -REPLY [4 votes]: Hints (for a general ordered field, not just "the" hyperreal field):
-(a) Can you show that a convergent net is Cauchy? -(b) Are there convergent nets that are not eventually constant? -(c) Conclude that there are non-constant Cauchy nets. -OF COURSE you need to define "Cauchy net" before you can even ask the question...<|endoftext|> -TITLE: Homology and Graph Theory -QUESTION [22 upvotes]: What is the relationship between homology and graph theory? Can we form simplicial complexes from a graph $G$ and compute their homology groups? Are there any practical results in looking at the homology of simplicial complexes formed from a graph? - -REPLY [16 votes]: There are a lot of interesting simplicial complexes one can create from a graph. Jakob Jonsson wrote a book called "Simplicial complexes of graphs," which defined a plethora of them. One of my favorite such complexes is the independence complex. The vertex set is the same as that of the graph, and there is a simplex $[v_0,\ldots,v_n]$ whenever no pair of the vertices $v_0,\ldots,v_n$ lies on an edge. It is a fascinating problem to compute the homology even for simple graphs. For example, I would love to know the homology of the independence complex for the 1-skeleton of the n-dimensional cube, but this appears to be a hard problem. Related to the independence complex is the so-called Theta complex, which is "Alexander dual" to it in the combinatorial topology sense. See my paper with Oliver Thistlethwaite, where we get our feet wet analyzing the homology of the theta complex of k-skeleta of cubes. -While I'm plugging my own work, there is another notion, called graph homology, in which one constructs an algebraic chain complex whose generators are equivalence classes of decorated graphs. The boundary operator is usually defined by contracting edges of a graph one at a time. As far as I know, this was first considered by Kontsevich. Karen Vogtmann and I give a detailed exposition of various kinds of graph homology here.<|endoftext|> -TITLE: Differentiability and decay of magnitude of Fourier series coefficients -QUESTION [10 upvotes]: I want to know the answer/references for a question on the decay of Fourier series coefficients and the differentiability of a function. Does the magnitude of the Fourier series coefficients $\{a_k\}$ of a differentiable function in $L^2 (\mathbb{R})$ (or in any suitable space) decay as fast as or faster than $k^{-1}$? Is there any such theorem? And what about the converse statement? - -REPLY [2 votes]: Here are some references: - -Katznelson's Introduction to Harmonic Analysis, p. 22 onwards (He covers the one dimensional case.) -Grafakos' Classical Fourier Analysis, p. 176 onwards (This covers the torus of arbitrary dimensions case; see below for samplers.) -This pdf of Braunling's "Fourier Series on the $n$-dimensional Torus". -That pdf. -Observe that the correspondence between regularity and decay provided by the Fourier transform is also at the core of Sobolev theory, and one might argue Sobolev spaces are the proper setting for this discussion. In this regard the possibilities are vast (e.g. Folland would be a valid starting point). - - -Here are some samplers from the Grafakos book I list above: -Fix $d\in \mathbb{Z}_{\geq 1}$ as the dimension; and denote the Fourier transform of a function $f:\mathbb{T}^d\to \mathbb{C}$ by -$$\widehat{f}:\mathbb{Z}^d\to \mathbb{C},\quad n\mapsto \int_{\mathbb{T}^d}f(x)e^{-2\pi i\; n\;\bullet\; x}dx.$$ -Theorem (Riemann-Lebesgue): $$\forall f\in L^1(\mathbb{T}^d):\lim_{|n|\to\infty} |\widehat{f}(n)| = 0.$$ -Theorem (Slow Decay? 
Sure.): $$\forall d_\bullet: \mathbb{Z}^d\to [0,\infty[: \lim_{|n|\to\infty} d_n = 0,\quad \exists g=g(d_\bullet)\in L^1(\mathbb{T}^d),\forall n\in \mathbb{Z}^d: |\widehat{g}(n)|\geq d_n.$$ -Theorem (Decay from Regularity): Let $s\in \mathbb{Z}_{\geq0}$. Then -$$\forall f\in C^s(\mathbb{T}^d):\lim_{|n|\to\infty} (1+|n|^s)|\widehat{f}(n)|=0.$$ -Theorem (Regularity from Decay): Let $s\in \mathbb{Z}_{> d}$. Then -$$\forall f\in L^1(\mathbb{T}^d):(1+|n|)^s|\widehat{f}(n)|=O_{|n|\to\infty}(1)\implies f\in C^{s-d}(\mathbb{T}^d).$$ -(Here the conclusion is in the sense of "coinciding with a function almost everywhere".) -One convenient soft version of this whole discussion is that a function on the torus is $C^\infty$ iff its Fourier coefficients decay faster than any polynomial. -As a final remark, observe the effect of the dimension when going from decay to regularity: heuristically, the growth of the group $\mathbb{Z}^d$ contributes to decay a priori and hence ought to be deducted from the final regularity of the function.<|endoftext|> -TITLE: Prove that $b\mid a \implies (n^b-1)\mid (n^a-1)$ -QUESTION [11 upvotes]: Given natural numbers $a,b,n$, prove $b\mid a \implies (n^b-1)\mid (n^a-1)$. - -I tried the simple method of beginning with $b\mid a \implies \exists k \in \mathbb{N} $ such that $bk=a$ and then raising $n$ to the power of the LHS and the RHS and eventually forming $(n^b)^k-1=n^a-1$. That's obviously not enough. I tried making it work from the other side and didn't get too far either. I guess there's some $\gcd$ theorem or something I need. -Any ideas? -Thanks. - -REPLY [3 votes]: There is a very useful identity -$$ x^n-y^n=(x-y)(x^{n-1}+x^{n-2}y+\cdots+y^{n-1}). $$ -Substituting $y=1$ here, we get -$$ n^a-1=(n-1)(n^{a-1}+n^{a-2}+\cdots+1). $$ -Now since $b|a$, there exists some $k\in\mathbb{Z}$ such that $a=bk$. Therefore, -$$ n^a-1={(n^b)}^k-1=(n^b-1)((n^b)^{k-1}+(n^b)^{k-2}+\cdots+1). $$ -Hence, $(n^b-1)|(n^a-1)$.<|endoftext|> -TITLE: Does $\sum_{n=1}^\infty (\frac{1}{n}-1)^{n^2}$ converge? -QUESTION [6 upvotes]: I am trying to decide if $$\sum_{n=1}^\infty \left(\frac{1}{n}-1\right)^{n^2}$$ converges. By the alternating series test, as far as I can see, the series converges. This is also true by the root test. In both cases I assume that $$\left|\left(\frac{1}{n}-1\right)^{n^2}\right| = \left(1-\frac{1}{n}\right)^{n^2}$$ and I can't see anything wrong with that. What makes me unsure is that Wolfram Alpha says that the sum does not converge. -Wolfram Alpha uses the limit test, which I am not able to complete: -$$\lim_{n\to\infty} a_n = \lim_{n\to\infty}\left(\frac{1}{n}-1\right)^{n^2} = \lim_{n\to\infty} e^{n^2 \ln{\frac{1-n}{n}}}$$ -The problem is that the logarithm is not defined (over the reals) for any $n$, and I am not aware of another way to compute that limit. -So does this series converge? -Edit: When I ask Wolfram Alpha to evaluate the sum, it says that the series does not converge. But it also says that "computation timed out". Maybe that is why it's wrong. - -REPLY [5 votes]: Hint: Consider $(1-1/n)^n \leq e^{-1}$. - -REPLY [5 votes]: $$\left| \left( \frac{1}{n} - 1\right)^{n^2} \right| = \left| \left( \frac{1}{n} - 1\right) \right|^{n^2} = \left( 1 - \frac{1}{n}\right)^{n^2}$$ -as you correctly noted.
And we have -$$\limsup_{n\to\infty} \sqrt[n]{\left( 1 - \frac{1}{n}\right)^{n^2}} = \limsup_{n\to\infty} \left( 1 - \frac{1}{n}\right)^n = \frac{1}{e} < 1$$ -so this series converges.<|endoftext|> -TITLE: How many real quadratic number fields have the class number 1? -QUESTION [8 upvotes]: I know that in general the number of ideal classes is not 1, and that there are only 9 imaginary quadratic number fields which are principal ideal domains, namely $\mathbb{Q}(\sqrt{-m})$ where $m$ is 1, 2, 3, 7, 11, 19, 43, 67, 163, and that Gauss conjectured that there are infinitely many real quadratic number fields of class number 1. Although this sounds natural, since the unit group of a real quadratic number field is infinite while that of an imaginary one is finite, I cannot give a proof. So my question is this: - -How many real quadratic number fields are principal ideal domains? - -If there is any error, such as this already being known, or there being a wide discussion on this topic, please let me know; thanks very much. - -In addition: - Even if this is not solved, I would like to see some modern or recent research or papers to extend my horizons, so it would be best to have a reference which discusses the number of ideal classes of algebraic number fields; best regards. - -REPLY [14 votes]: Broadly speaking, the information you gave in your question is up-to-date: ever since Gauss, number theorists have believed there should be infinitely many real quadratic fields of class number one, but we are not any closer to being able to prove this than Gauss was, to the best of my knowledge. Moreover you have identified the key problem: the fact that a real quadratic field has an infinite unit group means that there is an additional quantity in the analytic class number formula, the regulator, which is not present in the imaginary quadratic case, and it is hard to separate out the respective contributions of the class number and regulator. -In recent years a trend of research has been towards making increasingly precise quantitative conjectures about the expected behavior of the class number -- or class group -- of a real quadratic field of discriminant $D$. A lot of this comes under the heading of Cohen-Lenstra heuristics. -Since you asked for some pointers to recent work, here are two interesting papers that I found (from a Google search, I fully admit): -I. Stephens and Williams, Computation of real quadratic fields with class number one. -II. Ono, Indivisibility of class numbers of real quadratic fields. (Wayback Machine), doi:10.1023/A:1001533613223 -There is certainly plenty more literature available. A MathSciNet search for "class number" AND "real quadratic field" calls up precisely [well, not for long!] 500 papers...<|endoftext|> -TITLE: Question regarding positive-definite matrices -QUESTION [5 upvotes]: Let $A, B$ be positive-definite matrices and $Q$ a unitary matrix; furthermore suppose $A=BQ$. -Prove or disprove: $A=B$. - -I'm having a hard time figuring out where to begin. Thanks. - -REPLY [2 votes]: (This is basically the same as user8268's proof, slightly rephrased/polished). -Let $\lambda, v$ be an eigenvalue-eigenvector pair of $Q$, so that $Q v= \lambda v$. Then -$ \displaystyle A = B Q \Rightarrow v^* A v = v^* B Q v = \lambda \; v^* B v \Rightarrow \lambda = \frac{v^* A v}{v^* B v}$ -But $A$ and $B$ are positive definite, hence both numerator and denominator are real and positive.
Further, because the eigenvalues of a unitary matrix have modulus one, we conclude that $\lambda =1$. -Then $Q$ is a unitary matrix with all its eigenvalues equal to one, so it must be the identity matrix (a quick way to see this is via its diagonalization; recall that a unitary matrix is normal, and hence diagonalizable).<|endoftext|> -TITLE: prove that $\lim_{x\to\infty} \pi(x)/x=0$ -QUESTION [13 upvotes]: I think I might have asked this question before, but I can't find it on the site, so I sincerely apologize if I am making a duplicate. But anyway, I have been working on this proof for several weeks and am stumped. - -If $\pi(x)$ is the number of primes less than or equal to $x$, prove that - $$\lim_{x\to\infty}\frac{\pi(x)}{x} = 0.$$ - -I have this: -So far I know which remainders a prime $p$ can leave $\pmod{k}$, provided $p$ is greater than $k$: - -$1 \pmod{2}$. -$1,2 \pmod{3} \Rightarrow$ upper bound of $\frac{\pi(x)}{x}$ is $\frac{2}{3}$. -$1,3 \pmod {4}$. -$1,2,3,4 \pmod{5}$. -$1,5 \pmod{6}\Rightarrow$ upper bound of $\pi(x)/x$ is $\frac{1}{3}$. -$1,2,3,4,5,6 \pmod{7}$. -$1,3,5,7\pmod{8}$. -$1,2,4,5,7,8\pmod{9}$. - -A prime $p$ can only have a remainder $\pmod{k}$ that is relatively prime to $k$, as a number would not be prime if it could be expressed as a multiple of $k$ plus a number sharing a factor with $k$. And I know that these possible remainders give a fraction of the possible numbers that can be prime, given that in any range of numbers there must be at least one that satisfies each possible remainder $\pmod{k}$. ... -But I'm not sure what I can conclude from this. I think that I need to find a way to express a number $N$ with respect to a prime $p$ such that $p \pmod N$ has a constant number of possible values, $K$. Then as $N$ increases, $K/N \to 0$. But otherwise I'm really stumped about where to go. -I have considered the following: multiplying all prime numbers less than an arbitrary value and modding by that, so that up to a certain value the only number relatively prime to it is 1. But the problem with this is once you reach a certain value there can be a multiple of this, such as $2p_1p_2\cdots p_n$. So I don't think that works. -Any help would be much appreciated! Also, this is a first-semester number theory class, so I don't have much math knowledge to work with. I've done calc A,B,C, linear algebra A, and this number theory class. - -REPLY [14 votes]: There are a lot of approaches that can be used to establish this, but I'll try to stick to pushing your own idea through. I'm going to introduce a bit of notation first: -If we let $\varphi(n)$ be the number of positive integers less than or equal to $n$ that are relatively prime to $n$, you are saying that since for any given $n$ primes must fall among the $\varphi(n)$ congruence classes relatively prime to $n$ (except perhaps for the finite number of primes that divide $n$, and those don't affect the eventual distribution), that shows that the density of primes is at most $\frac{\varphi(n)}{n}$ for every $n$. That is, -$$\lim_{x\to\infty}\frac{\pi(x)}{x} \leq \frac{\varphi(n)}{n}\quad\text{for all }n;$$ -so it suffices to show that -$$\inf\left\{\left.\frac{\varphi(n)}{n}\right|\; n\geq 0\right\} = 0.$$ -For this, it is enough to show that $\liminf\frac{\varphi(n)}{n} = 0$. -This approach can work, though, and it can be done precisely by focusing on the integers $n$ that are "the product of all primes up to some $N$", so you are definitely on the right track and very close.
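Numerically the plan is easy to see in action; here is a throwaway Python sketch (the simple sieve and the cutoff $100$ are just for illustration) showing the running product $\prod_{p\le N}(1-1/p)$ (which is exactly $\varphi(n)/n$ for $n$ the product of all primes up to $N$) drifting down toward $0$:

    def primes_up_to(n):
        """Sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n**0.5) + 1):
            if sieve[p]:
                for m in range(p * p, n + 1, p):
                    sieve[m] = False
        return [p for p, flag in enumerate(sieve) if flag]

    ratio = 1.0
    for p in primes_up_to(100):
        ratio *= 1 - 1 / p
        print(p, ratio)   # strictly decreasing; ~0.12 already at p = 97

(The decrease is slow -- by Mertens' theorem the product behaves like $e^{-\gamma}/\ln N$ -- but the argument below only needs that it goes to $0$.)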
-(I don't know if you can construct a sequence of numbers $N_k$ such that $\varphi(N_k)$ is constant and $N_k\to\infty$ as $k\to\infty$, which is what you say you want to do in your paragraph that begins "But I'm not sure what I can conclude from this..."; frankly, I doubt it can be done easily. Or at least, I can't think of a way to do it. Added: In fact, for every $K$, there are at most finitely many $n$ for which $\varphi(n)\leq K$; see below the break). -One way to push it all the way through is to consider the following formula for $\varphi(n)$: -$$\varphi(n) = n\prod_{\stackrel{p|n}{p\text{ prime}}} \left(1 - \frac{1}{p}\right).$$ -Verify that this is true. -This means that -$$\frac{\varphi(n)}{n} = \prod_{\stackrel{p|n}{p\text{ prime}}}\left(1 - \frac{1}{p}\right).$$ -Now, pick $n$ to be "the product of all primes up to $N$", for some $N$. We're going to show that for the sequence we get from these very special $n$, we have $\lim\limits_{n\to\infty}\frac{\varphi(n)}{n} = 0$, which will show what we want to show. -For such an $n$ we have -$$\frac{\varphi(n)}{n} = \prod_{\stackrel{p|n}{p\text{ prime}}}\left(1 - \frac{1}{p}\right) = \prod_{\stackrel{p\leq N}{p\text{ prime}}}\left(1 - \frac{1}{p}\right).$$ -So it will suffice to show that -$$\lim_{N\to\infty} \prod_{\stackrel{p\leq N}{p\text{ prime}}}\left(1 - \frac{1}{p}\right) = 0.$$ -There are two pieces of information, both from calculus, that you can use to establish this. First: if $|r|\lt 1$, then -$$\frac{1}{1-r} = 1 + r + r^2 + r^3 + \cdots + r^n + \cdots$$ -and in particular, if $p$ is prime, then -$$\frac{1}{1 - \frac{1}{p}} = 1 + \frac{1}{p} + \frac{1}{p^2} + \cdots + \frac{1}{p^n} + \cdots.$$ -The second piece of information is an upper bound for the integral $\int_1^k\frac{du}{u}$. Since $y = \frac{1}{u}$ is strictly decreasing, dividing the interval $[1,k]$ into $k-1$ equal parts and taking a left-hand sum estimate gives that -$$\int_1^k\frac{du}{u} \lt 1 + \frac{1}{2} + \cdots + \frac{1}{k-1}.$$ -Finally, one last trick: instead of looking at -$$\prod_{\stackrel{p\leq N}{p\text{ prime}}}\left(1 - \frac{1}{p}\right),$$ -look at its reciprocal: -$$\frac{1}{\prod\limits_{\stackrel{p\leq N}{p\text{ prime}}}\left(1 - \frac{1}{p}\right)} = \prod_{\stackrel{p\leq N}{p\text{ prime}}}\frac{1}{1 - \frac{1}{p}} = \prod_{\stackrel{p\leq N}{p\text{ prime}}}\left(1 + \frac{1}{p} + \frac{1}{p^2} + \cdots + \frac{1}{p^k} + \cdots\right).$$ -See if you can show that this is greater than or equal to a quantity which you know goes to $\infty$ as $N\to\infty$ (say, by considering the integral I mentioned above). Then you can leverage that to show the limit inferior of $\frac{\varphi(n)}{n}$ is indeed equal to $0$. - -Added. In fact, for every $K\gt 0$ there are at most finitely many integers $n$ such that $\varphi(n)\leq K$, so your idea of trying to find a sequence going to $\infty$ for which $\varphi(n)$ always equals $K$ cannot prosper, I'm afraid. -To see this, note that $\varphi(n)$ is multiplicative: if $\gcd(a,b)=1$, then $\varphi(ab) = \varphi(a)\varphi(b)$. Also, if $p$ is a prime, then $\varphi(p^r) = (p-1)p^{r-1}$. This completely determines the value of $\varphi$ for any $n$, if you know the prime factorization of $n$. -Now fix $K$. If $n$ is divisible by any prime $p$ with $p\gt K+1$, then $\varphi(n)\geq \varphi(p) = p-1\gt K$.
If $p$ is a prime with $p\lt K+1$, then if $r$ is such that $p^{r-1}\gt K$, then $\varphi(p^r)=(p-1)p^{r-1}\geq p^{r-1}\gt K$, so any integer $n$ divisible by $p^r$ will have $\varphi(n)\gt K$ as well. So any integer $n$ such that $\varphi(n)\leq K$ must be divisible only by primes less than or equal to $K+1$, and for each such prime there is a largest power that can divide $n$. This means that there are only finitely many $n$ for which $\varphi(n)\leq K$.<|endoftext|> -TITLE: Sequence that approaches integers -QUESTION [8 upvotes]: I'm referring to an answer posted on Math Overflow (see the post by fedja on https://mathoverflow.net/questions/59115/a-set-for-which-it-is-hard-to-determine-whether-or-not-it-is-countable) -The question is whether the set of real numbers $a > 1$ such that, for some $K > 0$, the distance between $K a^n$ and the nearest integer approaches $0$ as $n \to \infty$ is countable. -The integers are obviously in that set. However, I couldn't come up with a proof that for all other reals the limit does not exist. - -REPLY [7 votes]: Here's a method of generating the set of numbers described in a way which makes clear that it is countable (so look away if you're taking Fedja's challenge to see how quickly you can prove it!). -Suppose that the distance between $Ka^n$ and the nearest integer tends to zero. Then, letting $u_n$ be the nearest integer to $Ka^n$, you have $a=\lim_{n\to\infty}\frac{u_{n+1}}{u_n}$. Also, $Ka^n-u_n\to0$ and $u_n\to\infty$ as $n\to\infty$. -Then, $u_{n+2}-u_{n+1}^2/u_n\to0$. This means that, for large enough $n$, -$$ u_{n+2}={\rm closest\ integer\ to\ }\frac{u_{n+1}^2}{u_n}. $$ -Replacing $K$ by $Ka^m$ for large enough $m$, this can be assumed to hold for all $n\ge0$. So, we have a recurrence relation generating the entire sequence once $u_0,u_1$ are given, which allows us to calculate $a=\lim_{n\to\infty}u_{n+1}/u_n$. Therefore every element of the set is determined by a pair of positive integers $\{u_0 < u_1\}$, so it is countable.<|endoftext|> -TITLE: Convergence of a series of complex numbers that are dense on the unit circle -QUESTION [6 upvotes]: Let me introduce my problem: Let $C \subset \mathbb{C}$ be the unit circle of the complex plane and $Z=\left\{ z_n \right\}_{n \in \mathbb{N}} \subset C$ be a dense subset of the unit circle, meaning that $\bar{Z}=C$, or, otherwise put, that for all $c \in C$ there is a subsequence $\left\{z_{n_{k}}\right\}_{k \in \mathbb{N}}$ of $Z$ such that $z_{n_{k}} \xrightarrow{k} c$. I want to know whether the following series converges: -\begin{align*} \sum_{n\in\mathbb{N}}z_n \end{align*} -For every $z_j$ on the unit circle there is its antipodal point $\hat{z_j}=-z_j$, and for all $\varepsilon>0$ we can find an index $\hat{j}\in\mathbb{N}$ so that $|\hat{z_j}-z_{\hat{j}}|<\varepsilon\Leftrightarrow |z_j+z_{\hat{j}}|<\varepsilon$. -If we group all members of the sequence this way (pairwise for a given $\varepsilon$) then the sum can be written in the following form: -\begin{align*} \sum_{n\in\mathbb{N}}z_n=\left(z_1+z_{\hat{1}}\right)+ \left(z_{k_2}+z_{\hat{k_2}}\right)+\ldots \end{align*} -Then by the triangle inequality: -\begin{align*} \sum_{n\in\mathbb{N}}z_n \le \Bigl|\sum_{n\in\mathbb{N}}z_n\Bigr| \le \left|z_1+z_{\hat{1}}\right|+ \left|z_{k_2}+z_{\hat{k_2}}\right|+\ldots \le \varepsilon + \varepsilon + \ldots \end{align*} -So this way I don't prove that the series converges. Is there some other way to prove it, or does it not hold at all?
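Here is a small numerical experiment (Python; the particular dense sequence $z_n=e^{2\pi i n\sqrt2}$ is just my choice of test case) which suggests the partial sums merely wander around a bounded region instead of converging:

    import cmath

    alpha = 2 ** 0.5   # irrational, so the points are dense on the circle
    partial = 0
    for n in range(1, 2001):
        partial += cmath.exp(2j * cmath.pi * alpha * n)
        if n % 400 == 0:
            print(n, abs(partial))
    # |partial| stays bounded but keeps oscillating; since |z_n| = 1 the
    # terms do not tend to 0, so the series cannot converge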
(Anyway I think it was not a good idea to use the absolute value there...). -And a second question on that: If $\left\{a_n\right\}_{n\in \mathbb{N}}$ is a real sequence such that $\sum_{n\in\mathbb{N}}a_n$ converges, then $\sum_{n\in\mathbb{N}}a_nz_n$ converges (following the same procedure described above). Can we somehow loosen the conditions on $\left\{a_n\right\}_{n\in \mathbb{N}}$ ? Are there weaker conditions on the sequence $a_n$ so that the series will converge? -Update 1: Does this series converge: -\begin{align*} \sum_{n\in\mathbb{N}}e^{2\pi i n\alpha} \end{align*} -with $\alpha\in \mathbb{R} - \mathbb{Q}$ ? - -REPLY [5 votes]: Like others have already said, your series is not convergent because the terms don't get smaller as $n \rightarrow \infty$, and this is the case no matter how you order your terms. -What you sketched was instead a proof that if we "pack together" some of the terms and rearrange the packs, then we obtain a series $\Sigma (z_{i_n} + z_{j_n})$ which is convergent. -Indeed, you can pick the index sequences $i_n$ and $j_n$ as follows: -Pick $i_n =$ the smallest integer which was not picked for one of the previous $i_k$ or $j_k$ for $k<n$ …<|endoftext|> -TITLE: Solution to the "near-Gaussian" integral $\int_{0}^{\infty} e^{- \Lambda \sqrt{(z^2+a)^2+b^2}}\mathrm{d}z$ -QUESTION [11 upvotes]: In the course of my research I have come across the following integral: -$\int_{0}^{\infty} e^{- \Lambda \sqrt{(z^2+a)^2+b^2}}\mathrm{d}z$. -This initially looks like it should be solvable by some suitable change of variable which will allow you to get it into a Gaussian form. Unfortunately, after trying for a while I cannot find one. The constants $a$ and $b$ are combinations of parameters $s \in (0,\infty)$, $x \in [0,\infty)$: -$a = s^2-x^2$ and $b = 2sx$. -So the integral can be rewritten as: -$\int_{0}^{\infty} e^{- \Lambda \sqrt{z^4 +Az^2 +B}}\mathrm{d}z$, -with $A = 2(s^2-x^2)$ and $B = s^4 + x^4 + 2s^2x^2$. -Any help with a solution would be much appreciated. -Edit: I forgot to mention that $s = kL$, where $L$ is a fixed value, and I will eventually take a limit in which $k \rightarrow 0$, so there are opportunities for series expansions. I have tried the obvious by expanding the square root in powers of $k$, but there are then convergence issues in the region $|z-x| < k$. -A closed-form solution is looking less and less likely as I try all the tricks I know and scour Gradshteyn, so a first term in $k$ (Edit: I originally said in $a$, that was a mistake) would also be much appreciated.
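In case it helps anyone checking candidate answers, this is the straightforward numerical evaluation I compare against (Python with SciPy assumed; the parameter values are arbitrary samples):

    import numpy as np
    from scipy.integrate import quad

    def integrand(z, lam, s, x):
        a = s**2 - x**2
        b = 2 * s * x
        return np.exp(-lam * np.sqrt((z**2 + a)**2 + b**2))

    lam, s, x = 1.0, 0.5, 1.0   # sample values only
    val, err = quad(integrand, 0, np.inf, args=(lam, s, x))
    print(val, err)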
-REPLY [2 votes]: When $a=0$ and $b$ is a real number, -then $\int_0^\infty e^{-\Lambda\sqrt{(z^2+a)^2+b^2}}~dz$ -$=\int_0^\infty e^{-\Lambda\sqrt{z^4+b^2}}~dz$ -$=\int_0^\infty e^{-\Lambda\sqrt{(\sqrt{|b|\sinh z})^4+b^2}}~d(\sqrt{|b|\sinh z})$ -$=\dfrac{\sqrt{|b|}}{2}\int_0^\infty\dfrac{e^{-\Lambda\sqrt{b^2\sinh^2z+b^2}}\cosh z}{\sqrt{\sinh z}}dz$ -$=\dfrac{\sqrt{|b|}}{2}\int_0^\infty\dfrac{e^{-|b|\Lambda\cosh z}\cosh z}{\sqrt{\sinh z}}dz$ -$=-\dfrac{\sqrt{|b|}}{2}\dfrac{d}{dx}\int_0^\infty\dfrac{e^{-x\cosh z}}{\sqrt{\sinh z}}dz~(x=|b|\Lambda)$ -$=-\dfrac{\sqrt{|b|}}{2}\dfrac{d}{dx}\dfrac{\Gamma\left(\dfrac{1}{4}\right)\sqrt[4]x~K_{-\frac{1}{4}}(x)}{\sqrt[4]2\sqrt\pi}(x=|b|\Lambda)$ (according to http://people.math.sfu.ca/~cbm/aands/page_376.htm) -Hint: -According to https://arxiv.org/pdf/1004.2445.pdf, there is a formula that -$\int_0^\infty f\left(\dfrac{bx^2}{x^4+2ax^2+1}\right)dx=\sqrt{\dfrac{a+1}{2}}\int_0^1\dfrac{f\left(\dfrac{bt}{2(a+1)}\right)}{t\sqrt{t(1-t)}}~dt$<|endoftext|> -TITLE: $a^n+1$ prime $\Rightarrow n = 2^k$ is a power of $2$ -QUESTION [5 upvotes]: I'm trying to prove that $a^n+1$ can only be prime if $n$ is a power of $2$. Is there a general factorization of $a^n+1$? - -REPLY [7 votes]: At the core it is the trivial identity $\rm\, (-1)^m \equiv -1\ $ for odd $\rm\,m.\,$ So if $\rm\,n\,$ has odd factor $\rm\,m,\ \ n = k m,\,$ -then we have $\rm\ \ a^k\equiv -1\ \,\Rightarrow\,\ a^n =\, (a^k)^m\equiv -1\ \ (mod\ a^k+1),\ $ hence $\rm\ a^k+1\ $ divides $\rm\ a^n+1\,.$ -Or $ $ put $\rm\ x = a^k\ $ in $\rm\ x+1\mid x^m + 1,\, $ by the Factor Theorem: $\rm\ x-c\, \mid\, f(x)-f(c)\ $ in $\rm\ \mathbb Z[x].$ Note that this is simply the case $\rm\,m\,$ odd, $\rm\ x\to -x\ $ in this prior question today.<|endoftext|> -TITLE: Complex conjugate of $z$ without knowing $z=x+i y$ -QUESTION [6 upvotes]: Is it possible to determine (and if so, how) the complex conjugate $\bar{z}$ of $z$, if you don't already know that $z = x + i y$? -I think you can use $\log(z)$ to get the angle, and therefore the ratio of $y$ and $x$. But how do you get $|z|$, the radius? How then to get $r$ (so that $x = {\rm Re}(z) = r \cos(\log(z))$ and $y = {\rm Im}(z) = r \sin(\log(z))$)? -(this is related to How to express in closed form?, namely you can compute Re and Im using the conjugate, but then how do you reduce the conjugate itself fully to elementary functions (if at all)) - -REPLY [8 votes]: If you knew $z$ and $\bar{z}$ then you could easily recover the real and imaginary parts $x$ and $y$, since: -$x = \frac{z+\bar{z}}{2}$ -$y = \frac{z-\bar{z}}{2i}$ -So you certainly can't know $z$ and $\bar{z}$ without also knowing $x$ and $y$.<|endoftext|> -TITLE: Testing convergence of $\sum\limits_{n=2}^{\infty} \frac{\cos{\log{n}}}{n \cdot \log{n}}$ -QUESTION [6 upvotes]: Does the series $$\sum\limits_{n=2}^{\infty} \frac{\cos(\log{n})}{n \cdot \log{n}}$$ converge or diverge? -I know that $|\cos(\log{n})| \leq 1$, but I really cannot apply it here. Any ideas on how to attack this problem? - -REPLY [2 votes]: In case the link Jonas gave would be broken in the future, here is the solution:<|endoftext|> -TITLE: Does a map of spaces inducing an isomorphism on homology induce an isomorphism between the homologies of the loop spaces? -QUESTION [8 upvotes]: That is, let $f:X \rightarrow Y$ be a map of spaces such that $f_*: H_*(X) \rightarrow H_*(Y)$ is an isomorphism on homology. We get an induced map $\tilde{f}: \Omega X \rightarrow \Omega Y$, where $\Omega X$ is the loop space of $X$.
Does $\tilde{f}$ also induce an isomorphism on homology? - -REPLY [3 votes]: Here is a more explicit family of counterexamples. Let $X$ be a homology sphere of dimension at least $3$ with a point removed. Then $X$ is an acyclic space with the same fundamental group as the original homology sphere. Hence the constant map $f : X \to \text{pt}$ induces an isomorphism on homology. In order for the induced map -$$\Omega f : \Omega X \to \text{pt}$$ -on loop spaces (with some choice of basepoint) to induce an isomorphism on homology, it must induce an isomorphism -$$H_0(\Omega f) : H_0(\Omega X) \cong \mathbb{Z}[\pi_1(X)] \to H_0(\text{pt}) \cong \mathbb{Z}$$ -which is equivalent to $\pi_1(X)$ being trivial. But of course a homology sphere need not have trivial fundamental group, so for example we can take $X$ to be the Poincaré dodecahedral space minus a point.<|endoftext|> -TITLE: question on second mean value theorem for integration -QUESTION [12 upvotes]: I am wondering about two different forms of the second mean value theorem for integration. For the one in Wikipedia, I also wonder where I can find a proof. -The form I read from another reference is that: -$G:[a,b]\to \mathbb{R}$ is a monotonic function and $\phi : [a, b] \to \mathbb{R}$ is an integrable function, then there exists a number $x$ in $[a, b]$ such that -$$\int_a^b {G(t)\phi(t)}dt=G(a)\int_a^x{\phi(t)dt}+G(b)\int_x^b{\phi(t)dt}.$$ -Note the difference in where $x$ lies and whether to use the one-side limit for $G(a)$ and $G(b)$. To me, I think whether $G$ is right continuous at $a$ or left continuous at $b$ should not affect the integral $$\int_a^b {G(t)\phi(t)}dt$$, but clearly $G(a)$ vs. $G(a+)$ can be quite different. -I am wondering if anyone can give me a proof that the two are equivalent. Thanks. - -REPLY [5 votes]: Some notations. First, for every $x$ in $[a,b]$, let $F(x)=\displaystyle\int_a^x\phi(t)\mathrm{d}t$, hence $F$ is continuous and $F(a)=0$. Second, assume that $G$ is nondecreasing, the other case being similar. Thus there exists a nonnegative measure $\mu$ such that, for every $x$ in $[a,b]$, $G(x)=G(a)+\displaystyle\int_a^x\mathrm{d}\mu(t)$. -Now to the proof. Using an integration by parts, the integral $I$ of $G\phi$ over $[a,b]$ is -$$ -I=[F(t)G(t)]_a^b-\int_a^bF(t)\mathrm{d}\mu(t)=F(b)G(b)-\int_a^bF(t)\mathrm{d}\mu(t). -$$ -The hypothesis made on $G$ means that $\mu$ is a nonnegative measure hence the first mean value theorem for integration yields the existence of a point $x$ in $[a,b]$ such that -$$ -\int_a^bF(t)\mathrm{d}\mu(t)=F(x)\int_a^b\mathrm{d}\mu(t)=F(x)(G(b)-G(a)). -$$ -Coming back to $I$, one gets -$$ -I=F(b)G(b)-F(x)(G(b)-G(a))=G(a)F(x)+G(b)(F(b)-F(x)), -$$ -which, by definition of $F$, is the desired assertion. -(No continuity of $G$ is required.) -In case you are wondering, the first mean value theorem used above is not the usual one but it has the same proof as the usual one. Namely, $F$ being continuous on the compact set $[a,b]$ has a maximum $M$ and a minimum $m$ on $[a,b]$, thus -$$ -m\int_a^b\mathrm{d}\mu(t)\le\int_a^bF(t)\mathrm{d}\mu(t)\le M\int_a^b\mathrm{d}\mu(t). -$$ -(This step uses the hypothesis that $\mu$ has constant sign.) In other words, there exists $u$ in $[m,M]$ such that -$$ -\int_a^bF(t)\mathrm{d}\mu(t)=u\int_a^b\mathrm{d}\mu(t). -$$ -Now, the continuity of $F$ implies that there exists $x$ in $[a,b]$ such that $u=F(x)$ and you are done. - -Previous version: When $\phi$ has a constant sign, this is a consequence of the intermediate value theorem.
(An earlier version of this post did not assume $\phi$ of constant sign, hence an argument was wrong. Thanks to the OP for having asked for some explanations.) -Let $I$ denote the integral of $G\phi$ over $[a,b]$, $J$ the integral of $\phi$ over $[a,b]$, and $H(x)$ the RHS of the equality you want to prove. Assume that $\phi$ is nonnegative and $G$ is nondecreasing, the other cases being similar. -Since $G(a)\le G(t)\le G(b)$ and $\phi(t)\ge0$ for every $t$ in $[a,b]$, $G(a)\phi(t)\le G(t)\phi(t)\le G(b)\phi(t)$ for every $t$, hence $G(a)J\le I\le G(b)J$. The function $x\mapsto H(x)$ is continuous on the interval $[a,b]$, $H(a)=G(b)J\ge I$ and $H(b)=G(a)J\le I$, hence by the intermediate value theorem there exists $x$ in $[a,b]$ such that $H(x)=I$. -(No continuity of $G$ is required.)<|endoftext|> -TITLE: Primary ideals of Noetherian rings which are not irreducible -QUESTION [18 upvotes]: It is known that all prime ideals are irreducible (meaning that they cannot be written as a finite intersection of ideals properly containing them). While for Noetherian rings an irreducible ideal is always primary, the converse fails in general. In a recent problem set I was asked to provide an example of a primary ideal of a Noetherian ring which is not irreducible. The example I came up with is the ring $\mathbb{Z}_{p^2}[\eta]$ where $p$ is prime and $\eta$ is a nilpotent element of order $n > 2$, which has the $(p,\eta)$-primary ideal $(p)\cap (\eta) = (p\eta)$. -But this got me thinking: how severe is the failure of primary ideals to be irreducible in Noetherian rings? - -In particular, are primary ideals of a Noetherian domain irreducible, or is a stronger condition on the ring required? I'd love to see suitably strong criteria for all primary ideals of a Noetherian ring to be irreducible, or examples of primary ideals of "well-behaved" rings which are not irreducible. - -REPLY [15 votes]: There is a beautiful characterization of prime, radical, irreducible and primary ideals among monomial ones in $k[x_1, \dots, x_n]$: -Theorem. Let $I$ be a monomial ideal of $k[x_1, \dots, x_n]$ and let $\mathcal{B}$ be its minimal basis. Then: - -$I$ is maximal iff $\mathcal{B}=\{x_1, \dots, x_n \}$; -$I$ is prime iff $\mathcal{B} = \{ x_{i_1}, \dots, x_{i_r} \}$; -$I$ is radical iff $\mathcal{B}$ is made up of square-free monomials; -$I$ is irreducible iff $\mathcal{B} = \{ x_{i_1}^{a_1}, \dots, x_{i_r}^{a_r} \}$; -$I$ is primary iff $\mathcal{B} = \{ x_{i_1}^{a_1}, \dots, x_{i_r}^{a_r}, m_1, \dots, m_s \}$, where $m_1,\dots, m_s$ are monomials in the variables $x_{i_1}, \dots, x_{i_r}$. - -So in this case it is very easy to produce a counter-example: $(x^2, y^2, xy)$. Its radical is maximal, so it is primary, but is reducible because $(x,y^2) \cap (x^2, y) = (x^2, y^2, xy)$.<|endoftext|> -TITLE: Good introductory books on homological algebra -QUESTION [33 upvotes]: Which books would you recommend, for self-studying homological algebra, to a beginning graduate (or advanced undergraduate) student who has background in ring theory, modules, basic commutative algebra (some of Atiyah & Macdonald's book) and some (basic) field theory? -I would especially like to hear your opinions on the following books: -A Course On Homological Algebra / P. J. Hilton and U. Stammbach -Introduction to Homological Algebra / Sze-Tsen Hu -Notes on Homological Algebra / Rotman -But other recommendations will also be appreciated. - -REPLY [4 votes]: See "An introduction to homological algebra" by Rotman (2010).
I think this is the book Mark was talking about. It is VERY introductory.<|endoftext|> -TITLE: Coefficients of characteristic polynomial of a matrix -QUESTION [27 upvotes]: For a given $n \times n$-matrix $A$, and $J\subseteq\{1,...,n\}$ let us denote by $A[J]$ its principal minor formed by the columns and rows with indices from $J$. -If the characteristic polynomial of $A$ is $x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$, then why $$a_k=(-1)^{n-k}\sum_{|J|=n-k}A[J],$$ -that is, why is each coefficient the sum of the appropriately sized principal minors of $A$? - -REPLY [5 votes]: $\newcommand\sgn{\operatorname{sgn}}$ -I learned of the following proof from @J_P's answer to what effectively is the same question. It arises from expanding the usual definition $\det A=\sum_{\sigma\in S_n}\sgn\sigma\prod_{1\le k\le n}A_{k,\sigma(k)}$, and deserves to be more well-known than it currently is. -Let $[n]:=\{1,\dots,n\}$, and write $\delta_{i,j}$ for the Kronecker delta, which is equal to $1$ if $i=j$, and is $0$ otherwise. Note that $\prod_{1\le k\le n}(a_k+b_k)=\sum_{C\subseteq[n]}\prod_{i\in C}a_i\prod_{j\in[n]-C}b_j$, since every term in the expansion on the left hand side will choose from each expression $(a_k+b_k)$ either $a_k$ or $b_k$, and so we may sum over all possible ways $C$ of choosing the $a_k$ terms. -We compute -\begin{align*} -\det(tI-A) -&=\sum_{\sigma\in S_n}\sgn\sigma\prod_{1\le k\le n} -(t\delta_{k,\sigma(k)}-A_{k,\sigma(k)})\\ -&=\sum_{\sigma\in S_n}\sgn\sigma\sum_{C\subseteq[n]} -\prod_{i\in C}(-A_{i,\sigma(i)})\prod_{j\in[n]-C}t\delta_{j,\sigma(j)}\\ -&=\sum_{C\subseteq[n]}(-1)^{|C|}\sum_{\sigma\in S_n}\sgn\sigma -\prod_{i\in C}A_{i,\sigma(i)}\prod_{j\in[n]-C}t\delta_{j,\sigma(j)}. -\end{align*} -For fixed $C\subseteq[n]$ and $\sigma\in S_n$, the last product $\prod_{j\in[n]-C}t\delta_{j,\sigma(j)}$ vanishes unless $\sigma$ fixes the elements of $[n]-C$, in which case the product is just $t^{n-|C|}$. So we need only consider the contributions of the permutations of $C$ in our sum, by thinking of a permutation $\sigma\in S_n$ that fixes $[n]-C$ as a permutation in $S_C$. The sign of this permutation considered as an element of $S_C$ remains the same, as can be seen if we consider the sign as $(-1)^{T(\sigma)}$, where $T(\sigma)$ is the number of transpositions of $\sigma$. We thus have -\begin{align*} -\sum_{C\subseteq[n]}(-1)^{|C|}\sum_{\sigma\in S_n}\sgn\sigma -\prod_{i\in C}A_{i,\sigma(i)}\prod_{j\in[n]-C}t\delta_{j,\sigma(j)} -&=\sum_{C\subseteq[n]}(-1)^{|C|}\sum_{\sigma\in S_C}\sgn\sigma -\prod_{i\in C}A_{i,\sigma(i)}t^{n-|C|}\\ -&=\sum_{C\subseteq[n]}(-1)^{|C|}t^{n-|C|}\sum_{\sigma\in S_C}\sgn\sigma -\prod_{i\in C}A_{i,\sigma(i)}. -\end{align*} -The term $\sum_{\sigma\in S_C}\sgn\sigma\prod_{i\in C}A_{i,\sigma(i)}$ is precisely the determinant of the principal submatrix $A_{C\times C}$, which is the $|C|\times|C|$ matrix with rows and columns indexed by $C$, and so -\begin{align*} -\sum_{C\subseteq[n]}(-1)^{|C|}t^{n-|C|}\sum_{\sigma\in S_C}\sgn\sigma -\prod_{i\in C}A_{i,\sigma(i)} -&=\sum_{C\subseteq[n]}(-1)^{|C|}t^{n-|C|}\det(A_{C\times C})\\ -&=\sum_{0\le k\le n}\sum_{\substack{C\subseteq[n]\\|C|=k}}(-1)^kt^{n-k} -\det(A_{C\times C})\\ -&=\sum_{0\le k\le n}t^{n-k}\left((-1)^k -\sum_{\substack{C\subseteq[n]\\|C|=k}} -\det(A_{C\times C})\right)\\ -&=\sum_{0\le k\le n}t^k\left((-1)^{n-k} -\sum_{\substack{C\subseteq[n]\\|C|=n-k}} -\det(A_{C\times C})\right). -\end{align*}<|endoftext|> -TITLE: What's the goal of mathematics? 
-QUESTION [60 upvotes]: Are we just trying to prove every theorem or find theories which lead to a lot of creativity or what? -I've already read G. H. Hardy's Apology but I didn't get an answer from it. - -REPLY [2 votes]: This is a really old question, but one I ask myself all the time. (Warning: I'm going to stretch a metaphor so far that the Bard would be worried) -I once heard a description of Grothendieck which said the following: Many mathematicians saw the peak of the mountain and tried to climb it, coming up with the most ingenious ways to summit it, and yet all of them fell short. Grothendieck came along, saw the mountain and instead of trying to climb it he stood back and tried to build the airplane, and when he had succeeded he flew above the mountain from high above, seeing what all the other mathematicians wished, and all without ever trying to climb the mountain. Eventually his airplane revealed so much more about the geography than any mountain climber could have. -Why do people build airplanes? I would argue that planes come not out of seeing the heights of mountains, but out of an infatuation with air itself, with the notion of flight, with the freedom it provides. -Often I think mathematicians as a whole do what Grothendieck did in this metaphor: they see the mountains which humans wish to summit, and they do not climb them (or at least pure mathematicians do not wish to climb them); they simply try to build better airplanes to fulfill their love of flight, and those same planes reveal more than anyone could ever predict. -Mathematics cannot reveal everything: Sometimes humans wish to explore under the earth, where we see the natural philosophical caves of the earth and sometimes go spelunking in them, and often times those mavericks among us write literature to dig deeper caves, and we must explore those so that one day we might understand the core of humanity and the world, and in this case airplanes and mountain climbers can only get us so far.<|endoftext|> -TITLE: Operator whose spectrum is a given compact set -QUESTION [13 upvotes]: Let $A\subset \mathbb{C}$ be a compact subset. -Since $A$ is a compact metric space, it is separable, say $\overline{\lbrace a_n\rbrace_{n=1}^\infty}=A$. -Let $\ell^2(\mathbb{Z})$ be the Hilbert space consisting of square-summable sequences and $\lbrace e_n\rbrace_{n=1}^\infty$ be the canonical basis of $\ell^2(\mathbb{Z})$. -Define an operator $T\colon\ell^2(\mathbb{Z})\to\ell^2(\mathbb{Z})$ by sending $e_n$ to $a_ne_n$. I want to prove that $A=\sigma(T)$, where $\sigma(T)$ is the spectrum of $T$. -What I can prove is that $A=\overline{\lbrace a_n\rbrace_{n=1}^\infty}\subset \sigma(T)$ because each $a_n$ is an eigenvalue of $T$ and $\sigma(T)$ is closed. How can I prove the other inclusion, namely $\sigma(T)\subset A$? - -REPLY [9 votes]: Hint: If $\lambda$ is not in $A$, then there is $\epsilon>0$ with $\lvert\lambda-a_n\rvert>\epsilon$ for all $n$. Use this to write down a bounded inverse for $T-\lambda\cdot\mathrm{Id}$.
-The question can be asked in all dimensions, and I do not even know the answer for -$\mathbb{R}^2$. -I feel this question must have been explored, but I have not found it in my literature searches. I'd appreciate any pointers. Thanks! -Edit. Here are the examples suggested by Rahul and Eric. -The left image is from Wikipedia's page on bipyramids. -The right image I made myself. - -REPLY [4 votes]: Too long for a comment: -There are other obvious shapes, such as suitable anti-prisms. -But I don't think you are going to find that any symmetry is a requirement, at least if you have a large number of faces. You could take something with 100 faces which you have already found works, and regard finding the in-centre as an optimisation problem subject to the constraints of being the correct side of each face. Because you have so many faces acting as constraints, you can relax a number of them without allowing the in-centre to move. So stick asymmetric pyramids or frusta or other things on several of these relaxed faces so the new solid is still convex and the centroid remains at the in-centre. -Or you could cut off several corners of the original solid in asymmetric ways, without affecting the in-sphere or moving the centroid. Once you have this, you probably don't even need symmetry of the points where the in-sphere is tangent to the faces of the solid, since you can introduce new tangents by cutting off corners suitably and then remove old tangents by sticking other bits on. Then all symmetry is lost. -I expect the same applies in $\mathbb{R}^2$. Indeed I suspect the minimal number of edges of an asymmetric convex polygon with the in-centre coinciding with the centroid is 4, 5 or 6.<|endoftext|> -TITLE: Which is the "proper" definition of a geodesic curve? -QUESTION [19 upvotes]: I'm taking a course on differential geometry, and up until now I'd always thought that the definition of a geodesic is (loosely speaking) a curve on a surface with the minimal length between its endpoints. My professor, taking his lead from do Carmo, however, defines it as any curve whose geodesic curvature $\kappa_g=0$. We showed that this is equivalent to satisfying the following pair of nonlinear ordinary differential equations: -$$(\boldsymbol{E}u' + \boldsymbol{F}v')' = \frac12(\boldsymbol{E}_u(u')^2 + 2\boldsymbol{F}_uu'v' + \boldsymbol{G}_u(v')^2)$$ -$$(\boldsymbol{F}u' + \boldsymbol{G}v')' = \frac12(\boldsymbol{E}_v(u')^2 + 2\boldsymbol{F}_vu'v' + \boldsymbol{G}_v(v')^2)$$ -We then went through an incredibly painful calculation on the length of the family of curves $\gamma_\lambda$ to show that geodesics (i.e., those curves satisfying the geodesic equations above) are critical points of the functional -$$\displaystyle\mathcal{L}(\lambda) = \int_a^b{\left\|\frac{d\gamma_\lambda}{dt}\right\| dt},$$ -which is the length of the curve. Therefore, according to my professor's (and the textbook's) definition, geodesics are not necessarily length-minimizing, just critical points of $\mathcal{L}$. Therefore, on a sphere, two non-antipodal points have two geodesics: the obvious length-minimizing one, and the other one going the long way around the sphere (which is, in this case, a saddle point of $\mathcal{L}$). This is not just an oversight on my professor's part, he explicitly brought attention to this fact. -My question is, what are the advantages and disadvantages of these two conflicting definitions? I still see the length-minimizing one almost everywhere.
-On a related note, the fact that a geodesic is only a critical point, not necessarily a minimum, leaves open the possibility of a geodesic actually being the longest path between two points. Are there any situations where this is actually possible? It seems you could always perturb a curve slightly to stay within the image of a chart while still increasing its length infinitesimally. Are there some weird spaces where this is not the case? - -REPLY [4 votes]: One example of a geodesic being the longest path in some sense is in Einstein's relativity. It is related to the Twin Paradox, where two twins set off from some point in spacetime and then meet again at another point in spacetime, to discover one has aged more than the other. -The geodesic is the path which takes the longest as measured by a clock passing along it. For Special Relativity, this is a straight line and a constant speed, i.e. inertial motion, and any clock going by any other path will measure less time.<|endoftext|> -TITLE: Why is this a $k$-module? -QUESTION [8 upvotes]: Just started studying tensor products. Let $A$ be a commutative ring with unity and let $M$ be an $A$-module. Now let $k$ be a field; I know that a $k$-module is precisely a $k$-vector space. -My question is the following: -Why is $k \otimes_{A} M$ also a $k$-vector space? Here $\otimes$ denotes the tensor product with respect to the ring $A$. -Is it because we can give $k \otimes_{A} M$ the structure of $k$-module by just taking: -$f: k \times (k \otimes_{A} M) \rightarrow k \otimes_{A}M$ given by: -$f(c,d \otimes m)=(cd) \otimes m$ ? where $c,d \in k$ - -REPLY [7 votes]: First: for $k\otimes_A M$ to make sense, you have to have an action of $A$ on $k$; that is, $k$ should be an $A$-module in some way. (It could be the trivial $A$ module, $ar=0$ for all $a\in A$ and $r\in k$, though that would make $k\otimes_AM$ the trivial module). -But: in general, if $S$ and $A$ are commutative rings, and you have an action of $A$ on $S$, then for any $A$-module $M$ you get an $S$-module by taking $S\otimes_A M$: this is called extension of scalars (or extension of the base). The action of $s$ on $S\otimes_A M$ is precisely the one you give: given any $s\in S$ and any generator $t\otimes m$, you define $s(t\otimes m) = (st)\otimes m$ and extend linearly. -Explicitly, we have an $A$-multilinear map -$$f\colon S\times S\times M \to S\otimes_A M$$ -given by $(s,t,m)\mapsto st\otimes m$. This map is $A$-multilinear: -$$\begin{align*} -f(s+s',t,m)&=(s+s')t\otimes m = (st+s't)\otimes m = st\otimes m+s't\otimes m\\ -&= f(s,t,m) + f(s',t,m).\\ -f(s,t+t',m) &= s(t+t')\otimes m = (st+st')\otimes m = st\otimes m + st'\otimes m\\ -&= f(s,t,m) + f(s,t',m).\\ -f(s,t,m+m') &= st\otimes(m+m') = st\otimes m + st\otimes m' = f(s,t,m)+f(s,t,m').\\ -f(as,t,m) &= (as)t\otimes m = s(at)\otimes m = f(s,at,m)\\ -&= a(st)\otimes m = st\otimes am = f(s,t,am)\\ -&= a(st\otimes m) = af(s,t,m). -\end{align*}$$ -Therefore, the universal property of the tensor product (the definition) says that the map $f$ induces a unique map $S\otimes_A(S\otimes_A M)\to S\otimes_A M$. This map makes $S\otimes_A M$ into an $S$-module. -In the particular case where $S$ is a field, as you have, this makes $k\otimes_A M$ into a $k$-module; and $k$-modules are the same thing as $k$-vector spaces. - -REPLY [3 votes]: I think you need to assume that k is an A-module (otherwise you can't take the tensor product).
-Your statement follows from the following more general lemma: -Suppose A,B are rings and N is an A-module and a B-module in a compatible way: a(bn) = b(an). -Let M be another A-module. Then $M\otimes_A N$ is a B-module. -Just take $B = k$, $N = k$, and for $M$ your given $A$-module to get what you want. -Your guess for the solution is also correct.<|endoftext|> -TITLE: Applying Mean Value Theorem to formula -QUESTION [5 upvotes]: As I understand it, the mean value theorem is where $${f}'(c)=\frac{f(b)-f(a)}{b-a}$$ if $f$ is continuous on the open interval (a, b) and differentiable on the closed interval [a,b]. -A problem in the current homework set on WebAssign has me confused. Given $f(x)=x^{7}$ on the closed interval [0, 1], determine whether the MVT can be applied to the closed interval [a,b]. -Since $f(x)= x^{7}$ has a similar profile to a cubic function graph, and it is differentiable to ${f}'(x)= 7x^{6}$, it passes two criteria for the MVT. -Now, solving the MVT formula: -$${f}'(x)=\frac{f(b)-f(a)}{b-a} \Rightarrow \frac{[1^{7}]-[0^{7}]}{1-0} \Rightarrow \frac{1-0}{1-0} \Rightarrow \frac{1}{1}= 1$$ -Now, I need to find a number $c$ between 0 and 1 such that ${f}'(c)=1$. However, the only whole number possibilities from the [0, 1] interval produce -$${f}'(0)= 7(0)^{6}= 0 \neq 1$$ -$${f}'(1)= 7(1)^{6}= 7 \neq 1$$ -Am I missing something here? The question has two parts: identify whether the MVT is applicable, and find the numbers $c$ that fit the theorem on the interval. The closest number for $c$ that I've found that works is 0.724, which gives a value of 1.00815, but it doesn't match 1 perfectly. - -REPLY [6 votes]: Your statement of the Mean Value Theorem is somewhat misleading, and has the intervals flipped. What the Mean Value Theorem states is that if $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$, then there exists a $c\in (a,b)$ such that $f'(c) = \frac{f(b)-f(a)}{b-a}$. (Continuity on the larger interval, not the smaller one!) -As to your question: the interval $[0,1]$ includes all real numbers between $0$ and $1$, and not only the whole numbers. The solution will be some real number $c$, $0\lt c \lt 1$, such that $f'(c) = 1$. -In this case, you want a value of $c$ for which $1 = f'(c) = 7c^6$. So you want to find a value of $c$ for which $c^6 = \frac{1}{7}$. You can write down an expression that tells you exactly what $c$ is, though you won't be able to find any finite decimal expression that gives you $c$, because $c$ is an irrational number (it's not even a fraction). Nonetheless, you can write it down. (Just like you can write down a real number $d$ such that $d^2 = 2$: namely, $d=\sqrt{2}$; voila!). -Added. The intuition of the Mean Value Theorem is: -(i) If the function is "nice", then there is a point between $a$ and $b$ where the tangent is parallel to the line joining $(a,f(a))$ and $(b,f(b))$. Note that "nice" means continuous, has tangents everywhere, and you consider all points between $a$ and $b$. -Perhaps more useful for intuition: -(ii) If you think of $f(x)$ as being position, so that $f'(x)$ is velocity, then $\frac{f(b)-f(a)}{b-a}$ is the average velocity over the entire trip. What the Mean Value Theorem tells you is that there has to be at least one point in time during which your instantaneous velocity was equal to your average velocity. E.g., if you traveled 100 miles in two hours, then there had to be some instant at which you were traveling at 50 miles per hour.
But it doesn't say that it had to be either when you were starting, an hour into the trip, or when you finished (the "whole numbers" on $[0,2]$). Not even that it had to be at some "nice time" (12:30, or 1:45, or 12:37). Some time; maybe you cannot even write it down exactly, but at some time.<|endoftext|> -TITLE: Sample sizes for an infinite population -QUESTION [5 upvotes]: I've poked about in some other questions, and I'm not sure how to deal with my problem, and my knowledge of statistics has atrophied. In particular, I'm trying to choose a sample size for a population that I don't know the size of (potentially infinite, but it could be 10,000's or 100,000's or more). How do I choose a sample size that will give me a meaningful answer? -Is it reasonable just to plug in a very large number, and see what comes out - does it approach a limit? - -My real world problem is this: -I have two computer systems (Able and Baker). My user community believes Able is faster than Baker. I can run a simple test on both, and see how long each one takes to run. However, there are inconsistencies in performance (probably due to the network, which will have spikes in activity that I unfortunately can't remove from the test). -Baker will be running for years into the future, so I have no idea how many transactions will run in it over its lifetime. -Assuming the performance issues caused by the network are random, how many tests do I have to run each on Able and Baker to be 90% confident that Able is faster than Baker? -Perhaps I'm asking the wrong question? Should I just be finding the average of 100 tests on Able and 100 tests on Baker and compare? Can I make that number 100 smaller (say, 20)? - -REPLY [6 votes]: 1.) A surprising result that we encounter in a first statistics - course is that the quality of a typical estimate doesn't depend (much) on the - population size, but mainly the sample size. -For instance, a sample average based on ten data points is equally accurate whether - the population size is 1000, 1000000, or infinity. -You probably needn't worry about population size. -2.) Do you need 100 test runs for each program or will 20 suffice? - This partly depends on your needs. -I mean, is it important to you to know whether the two machines -differ on average by one minute, one second, one millisecond? Where do small -differences stop being important to you? You need to consider this question first - because of the tradeoff - between "quality of statistical test" and "required sample size". -The required sample size for your problem will likely look something like -$$n={2(1.645)^2 \sigma^2\over E^2}.$$ -Let's plug in some made up numbers, just to get an idea what's going on. -Suppose that the standard deviation of performance times is $\sigma = 1$ second. -And suppose we want to be 90% sure to detect an average difference -of 0.5 of a second. Then $n=2(1.645)^2/(1/2)^2=21.65\approx 22$. We should - take a sample of about 22 runs from each machine. -On the other hand, if you wanted to detect an average difference -of 0.25 of a second, the required sample size goes up by a -factor of four, that is, $n=86.59\approx 87$. To double the accuracy, you need four times as many sample points. -Concrete information on both inherent variability and on required accuracy -is part of the sample size calculation. -More details on statistical tests and sample size calculations can be found in -most introductory statistics textbooks.
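-For concreteness, the formula above is easy to script. A minimal sketch follows (Python is an assumption here; 1.645 is the z-value for 90% confidence, $\sigma$ the assumed standard deviation of run times, and $E$ the smallest average difference worth detecting):
-
-    from math import ceil
-
-    def required_sample_size(sigma, E, z=1.645):
-        # n = 2 * z^2 * sigma^2 / E^2, rounded up; z = 1.645 for 90% confidence
-        return ceil(2.0 * z**2 * sigma**2 / E**2)
-
-    print(required_sample_size(sigma=1.0, E=0.5))   # -> 22
-    print(required_sample_size(sigma=1.0, E=0.25))  # -> 87
-
-Halving $E$ quadruples the required number of runs, matching the accuracy tradeoff described above.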
Good luck!<|endoftext|> -TITLE: analytic functions from square to unit disk -QUESTION [5 upvotes]: Let $f$ be an analytic function from $\{z; -1 < \Re(z) < 1, -1 < \Im(z) < 1\}$ to $\{z; |z| < 1\}$. If $f(0)=0$ and $f$ is one-one and onto, should $f(i\ z)=i\ f(z)$ for each $z$? I tried to show that $f(i\ z)-i\ f(z)$ is a constant, but it seems that I could not use Liouville's Theorem. -Thank you very much. - -REPLY [6 votes]: Assuming you mean the open square, then yes. -Let $g(z)=f(iz)$ and $h(z)=if(z)$. Then $g$ and $h$ are analytic bijections from the square to the disk such that $g(0)=h(0)=0$ and $g'(0)=h'(0)=if'(0)$. This implies that $k=g\circ h^{-1}$ is an analytic bijection of the disk such that $k(0)=0$ and $k'(0)=1$. By Schwarz's lemma, $k(z)=z$ for all $z$ in the disk, and therefore $g=h$.<|endoftext|> -TITLE: Wreath product and solvability -QUESTION [10 upvotes]: A paper I'm reading claims that the smallest class of monoids which contains $\mathbb{Z}$ and is closed under finite direct product and block product only contains solvable monoids. I think that a proof that solvability of groups is closed under wreath product would be of great help to understand why this is true. Would anyone know where I can find such a proof? -In the "converse direction", I would be really interested in reading a proof of the following fact: Any solvable group is in the variety generated by a wreath product of cyclics (taken from Barrington and Thérien, Non-uniform automata over groups (1987)). - -REPLY [10 votes]: A wreath product $G\wr H$ is a semidirect product of $G^H$ and $H$; since a group $G$ with normal subgroup $N$ is solvable if and only if both $N$ and $G/N$ are solvable, this means that $G\wr H$ is solvable if and only if $G^H$ and $H$ are both solvable. -Thus, if $G\wr H$ is solvable, then so are $G^H$ and $H$. The only question then is whether $G$ is solvable if and only if $G^H$ is solvable. This is true. -First, if $G^H$ is solvable, then so is the subgroup corresponding to the $e_H$-coordinate (that is, the subgroup of functions $f\colon H\to G$ such that $f(h)=1$ for all $h\neq e_H$, which is isomorphic to $G$). -Conversely, if $G$ is solvable, then so is any direct power of $G$. -This follows because for any direct product of groups, we have -$$\left(\prod_{i\in I} K_i\right)^{(n)} \subseteq \prod_{i\in I}K_i^{(n)}$$ -where $M^{(n)}$ is the $n$th derived subgroup of $M$. (Simply note that the projection maps $\pi_j\colon \prod_{i\in I} K_i \to K_j$ must map $(\prod K_i)^{(n)}$ into $K_j^{(n)}$, so that gives a map into the product of the $n$th derived subgroups; equality need not hold in general, but the inclusion always does.) -Now, $G$ is solvable if and only if there exists $n$ such that $G^{(n)} = \{1\}$; so if $G$ is solvable, we have -$$\left(\prod_{i\in I}G\right)^{(n)} \subseteq \prod_{i\in I}G^{(n)} = \prod_{i\in I}\{1\} = \{1\}$$ -so $\prod_{i\in I}G$ is solvable. -In particular, taking $I=H$ we get that if $G$ is solvable, then so is $G^H$. -In summary: - -$G\wr H$ is solvable if and only if $G^H$ and $H$ are both solvable. -For any nonempty set $I$, $\prod_{i\in I}G$ is solvable if and only if $G$ is solvable. -$G^H \cong \prod_{h\in H}G$, so $G^H$ is solvable if and only if $G$ is solvable. - -So: $G\wr H$ is solvable if and only if $G$ and $H$ are both solvable. -Note. It is not true in general that an arbitrary direct product of solvable groups is solvable.
What is true is that for fixed $n$, an arbitrary direct product of solvable groups is solvable of length at most $n$ if and only if each direct factor is solvable of length at most $n$. -The converse is a little trickier, but I think the following does it. -First: a group $G$ generates the variety $\mathfrak{V}$ if and only if every group in $\mathfrak{V}$ is a homomorphic image of a subgroup of a direct power of $G$. For example, $\mathbb{Z}$ generates the variety of abelian groups. A set of groups $S$ generates $\mathfrak{V}$ if and only if every group in $\mathfrak{V}$ is a homomorphic image of a subgroup of a direct product of copies of groups in $S$. -Second: a set of groups $S$ discriminates a variety $\mathfrak{V}$ if $S\subset\mathfrak{V}$ and for every finite set $\mathfrak{w}$ of words that are not laws of $\mathfrak{V}$, there is a group $D\in S$ in which the equations $w=1$, $w\in\mathfrak{w}$, can be simultaneously falsified. -It is a theorem that a set that discriminates a variety also generates that variety; in fact, a set of groups "discriminates" if and only if it discriminates the variety it generates. -Theorem. (Baumslag, B.H. Neumann, H. Neumann, and P. Neumann). If the group $G$ generates $\mathfrak{V}$, and the set $S$ discriminates $\mathfrak{W}$, then the set $G\wr S = \{ G \wr D\mid D\in S\}$ discriminates $\mathfrak{VW}$, and therefore also generates $\mathfrak{VW}$. -(In fact, one can take the restricted direct product above, rather than the full wreath product). -Now, every group word is equivalent to a group word of the form $x^nc$, where $c$ is a commutator word and $n\geq 0$. Thus, the only words that are not laws of the variety of abelian groups are words of the form $x^nc$ with $n\gt 1$. All these laws can be simultaneously falsified by $\mathbb{Z}$, by evaluating every letter in the words at $1$; -The argument above was wrong (a bunch of hooey, really). Here's a proper argument: a word $\mathbf{w}$ is not a law of $\mathfrak{A}$ if and only if the exponent sum of at least one variable is not $0$. For abelian groups, the word can be rewritten as a linear equation in the variables with integer coefficients; an evaluation in $\mathbb{Z}$ satisfies the word if and only if it satisfies the corresponding linear equation with integer coefficients. -A finite set of words that are not laws of $\mathfrak{A}$ corresponds to a finite system of linear equations with integer coefficients; an element of $\mathbb{Z}^n$ ($n$ the number of variables) satisfies at least one of the equations if it lies in the union of the solution sets of the individual equations. Since the equations are nontrivial, the solution sets are proper submodules of $\mathbb{Z}^n$; since $\mathbb{Z}^n$ is not a union of finitely many proper submodules (same argument as for vector spaces over infinite fields), it follows that given a finite collection of words that are not laws of $\mathfrak{A}$ there is always an evaluation in $\mathbb{Z}$ that falsifies all of them simultaneously. -Therefore, $\mathbb{Z}$ discriminates $\mathfrak{A}$, the variety of all abelian groups. Also, $\mathbb{Z}$ generates the variety of all abelian groups. -Therefore, applying the B+3N theorem, we get that $\mathbb{Z}\wr\mathbb{Z}$ generates $\mathfrak{A}^2$, the variety of solvable groups of length at most $2$; then it follows that $(\mathbb{Z}\wr\mathbb{Z})\wr\mathbb{Z}$ discriminates, and hence generates, $\mathfrak{A}^3$.
Continuing this way, the iterated wreath product of $\mathbb{Z}$ discriminates, and hence generates, the variety of solvable groups of length at most $n$. Thus, every solvable group lies in a variety generated by an iterated (restricted) wreath product of cyclic groups. -Presumably, if you know a bit more about $G$ some of the copies of $\mathbb{Z}$ could even be replaced by suitable finite cyclic groups. -Note. It is false in general that if $S$ generates $\mathfrak{V}$ and $T$ generates $\mathfrak{W}$, then $S\wr T = \{G\wr H\mid G\in S, H\in T\}$ generates $\mathfrak{VW}$. But if $S$ generates $\mathfrak{V}$, then $S\wr F_{\infty}(\mathfrak{W}) = \{ G\wr F_{\infty}(\mathfrak{W})\mid G\in S\}$ generates $\mathfrak{VW}$, where $F_{\infty}(\mathfrak{W})$ is the countably generated relatively free group in $\mathfrak{W}$, and the wreath products are restricted wreath products. This is a consequence of the B+3N theorem above, but it was actually proven earlier by Baumslag.<|endoftext|> -TITLE: How to decompose displaced Hermite-Gauss function into higher order HGs? -QUESTION [8 upvotes]: The Hermite-Gauss functions appear commonly in physics. These functions are formed from the product of a Hermite polynomial and a Gaussian: -$$ u_n(x) = \left(\frac{2}{\pi w_0^2}\right)^{1/4} \frac{1}{\sqrt{n! 2^n}} H_n\left(\frac{\sqrt{2}x}{w_0}\right)\exp\left\{-\left(\frac{x}{w_0}\right)^2\right\}$$ -and are orthonormal: -$$ \int_{-\infty}^{\infty} u_n(x) u_m(x) dx = \delta_{n,m}$$ -In a paper (2004 J. Opt. B 6 495) I found the following identity, which gives the decomposition of a displaced mode $u_0(x-a)$ in terms of a series over high-order Hermite-Gauss functions $u_n(x)$: -$$ \int_{-\infty}^{\infty} u_0(x - a) u_n(x) dx -= \frac{a^n}{w_0^n \sqrt{n!}} \exp\left\{ -\frac{a^2}{2 w_0^2}\right\} $$ -How is this derived? -(In addition to deriving it by hand, I would like to know how to coax Mathematica into giving it.) -EDIT: Here is an animation showing the decomposition of a displaced Gaussian into higher-order Hermite-Gauss functions (modes): - -REPLY [4 votes]: For example via the generating function for $u_n$'s: -we have -$$\exp(-(x-t)^2)=\sum_n H_n(x)\exp(-x^2) t^n/n!$$ -Let $U_n(x)=H_n(x)\exp(-x^2/2)$ (i.e. $u_n$ up to rescaling $x$ and without normalization). -We thus have -$$\exp(-(x-t)^2+x^2/2)=\sum_n U_n(x) t^n/n!$$ -Then we have -$$\int_{-\infty}^{+\infty}\exp(-(x-a)^2/2)\exp(-(x-t)^2+x^2/2)dx=\int\exp(-(x-a/2-t)^2+at-a^2/4)dx=$$ -$$=\exp(-a^2/4)\exp(at)\int\exp(-x^2)dx=\sqrt{\pi}\exp(-a^2/4)\exp(at).$$ -Taking the coefficient of $t^n/n!$ we get -$$\int_{-\infty}^{+\infty}\exp(-(x-a)^2/2)U_n(x)dx=\exp(-a^2/4)a^n\sqrt{\pi}$$ -which is your formula (modulo possible integration mistake I just produced :)<|endoftext|> -TITLE: Tempered distribution concentrated in a lower dimensional manifold -QUESTION [5 upvotes]: Question: What can you conclude about a tempered distribution $G\ \in\ S'(R^n)$ that is concentrated in some k-dimensional manifold $M\ \subset\ R^n$ (for k < n)? More specifically, is there a result analogous to the following n=1 result? -$n=1$ result (hope I remember it correctly): -Let $\ S(R)\ $ be the set of Schwartz functions ($C^\infty$ functions $f:\ R\ \to\ C\ $ s.t. $\ f^{(n)}$ goes to 0 at infinity faster than any inverse power of x (for n=0, 1, ...)). Let $\ S'(R)\ $ be the set of tempered distributions. $G\ \in\ S'(R)$ is said to be concentrated in a set $A\ \subset\ R\ $ iff $\forall\ \phi\ \in\ S(R)$ that vanishes on some open set $B\ \supset\ A\ $, $G(\phi)\ =\ 0$.
-Suppose $\ G\ $ is concentrated in $\lbrace x\rbrace$, for some $x\ \in\ R$. Then $\exists\ c_0,\ ...\ c_L\ \in\ C\ $ s.t. $\ G\ $ = $\sum_{j=0}^L\ c_j\ \delta_x^{(j)}$. -[where $\delta_x^{(j)}\ (\phi)\ \equiv\ (-1)^j\ \phi^{(j)}(x)\ $] - -REPLY [2 votes]: Yes, at least locally, every distribution (temperedness becomes irrelevant if we are talking about local things) supported on a submanifold is (locally) a finite linear combination $f\rightarrow \sum_i u_i(\nu_i f)$ of distributions $u_i$ supported on the submanifold applied to (iterated) normal derivatives $\nu_i$ of a test function $f$. The proof is not so different from the proof in case the submanifold is a point... A sample argument is here. -Edit: Yes, as in Mariano S.-A.'s comment, by this is meant that the local pieces can then be glued back together by a (smooth) partition of unity. Then the temperedness would/could come into play, constraining the orders of the local pieces. But to address that precisely would certainly require further details about the imbedding of the submanifold, since temperedness on the ambient space, meaning relative to a metric, can obviously have a range of translations to the submanifold. Spiral imbeddings of $\mathbb R^1$ in $\mathbb R^2$ already illustrate how the metrics can be disparate. I don't know anything clear and systematic to say except the caution about this.<|endoftext|> -TITLE: Algorithm for randomly choosing learning cards -QUESTION [5 upvotes]: I'm programming learning software. It works with question/answer cards. I'm searching for an algorithm that gives me a higher probability for cards that the user has answered wrong. -My actual idea (edit: Inverse transform sampling) is that each card has an integer which indicates how often the user has answered the question wrong. Sum all the integer values, create a random integer between 0 and that sum, and go through my cards accumulating their integers until I reach the random integer. The card at which I reach it is the one I choose :-) -But there must be a better solution ;-) -Edit: Rejection Sampling -N = number of cards -M = score of the highest card -c = random (1 - N) -x = random (1 - M) - -if (x <= (score of card-nr: c)) accept card! -else create new c & x and goto if-query - -That means that cards with a higher score will be chosen more often. - -REPLY [2 votes]: Here's a more efficient algorithm, requiring some space. You keep a lookup table containing the card to pick for each value of your random integer. The table is initialized by letting cell $i$ point at card $i$. When you increase the prominence of card $j$, just add a new cell pointing to $j$. -If you are memory-savvy, then you can use the following algorithm. Put all your cards in a balanced binary tree. Each card maintains both its own prominence and the sum of prominence of it and all its descendants. To select a card, use binary search. When you increase the prominence of a card, you need to update only its ancestors. So both operations take logarithmic time.<|endoftext|> -TITLE: Proof of upper-tail inequality for standard normal distribution -QUESTION [40 upvotes]: Let $X \sim \mathcal{N}(0,1)$. Show that for $x > 0$, -$$ -\mathbb{P}(X>x) \leq \frac{\exp(-x^2/2)}{x \sqrt{2 \pi}} \>.
-$$ - -REPLY [37 votes]: Integrating by parts, -$$\begin{align*} -Q(x) &= \int_x^{\infty} \phi(t)\mathrm dt = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\exp(-t^2/2) \mathrm dt\\ -&= \int_x^{\infty} \frac{1}{t} \frac{1}{\sqrt{2\pi}}t\cdot\exp(-t^2/2) \mathrm dt\\ -&= - \frac{1}{t}\frac{1}{\sqrt{2\pi}}\exp(-t^2/2)\biggr\vert_x^\infty -- \int_x^{\infty} \left( - \frac{1}{t^2} \right ) \left ( - \frac{1}{\sqrt{2\pi}} \exp(-t^2/2) \right )\mathrm dt\\ -&= \frac{\phi(x)}{x} - \int_x^{\infty} \frac{\phi(t)}{t^2} \mathrm dt. -\end{align*} -$$ -The integral on the last line above has a positive integrand and so -must have positive value. Therefore we have that -$$ -Q(x) < \frac{\phi(x)}{x} = \frac{\exp(-x^2/2)}{x\sqrt{2\pi}}~~ \text{for}~~ x > 0. -$$ -This argument is more complicated than @cardinal's elegant proof of the -same result. However, note that by repeating the above trick of -integrating by parts and the -argument about the value of an integral with positive integrand, we get that -$$ -Q(x) > \phi(x) \left (\frac{1}{x} - \frac{1}{x^3}\right ) = \frac{\exp(-x^2/2)}{\sqrt{2\pi}}\left (\frac{1}{x} - \frac{1}{x^3}\right )~~ \text{for}~~ x > 0. -$$ -In fact, for large values of $x$, a sequence of increasingly tighter upper and lower bounds can be developed via this argument. Unfortunately all the bounds diverge to $\pm \infty$ as $x \to 0$.<|endoftext|> -TITLE: How to calculate the eigenvector corresponding to zero eigenvalue -QUESTION [8 upvotes]: How can the eigenvector corresponding to zero eigenvalue be found out? I was trying with the following simple matrix in Matlab: -$$A=\left[\begin{array}{ccc}1 & -2 & 3 \\ 2 & -3 & 4 \\ 3 & -4 & 5 \end{array}\right] \; .$$ -In Matlab computations, the matrix seemed nearly singular with one of the eigenvalues very close to zero (3e-15). That means the usual shifted inverse power methods for finding out the unit eigenvector corresponding to an eigenvalue won't work. But Matlab returns an eigenvector corresponding to 0. How? Basically, I would like to develop a program to compute this eigenvector given any singular matrix. What algorithm should I use? -Edit: (1) Edited to reflect that the 'nearly singular' comment corresponded to the Matlab calculation. -(2) Edited to specify the actual question. - -REPLY [8 votes]: A matrix $A$ has eigenvalue $\lambda$ if and only if there exists a nonzero vector $\mathbf{x}$ such that $A\mathbf{x}=\lambda\mathbf{x}$. This is equivalent to the existence of a nonzero vector $\mathbf{x}$ such that $(A-\lambda I)\mathbf{x}=\mathbf{0}$. This is equivalent to the matrix $A-\lambda I$ having nontrivial nullspace, which in turn is equivalent to $A-\lambda I$ being singular (determinant equal to $0$). -In particular, $\lambda=0$ is an eigenvalue if and only if $\det(A)=0$. If the matrix is "nearly singular" but not actually singular, then $\lambda=0$ is not an eigenvalue. -As it happens, -$$\begin{align*} -\det(A) &= \left|\begin{array}{rr} --3 & \hphantom{-}4\\ --4 & 5 -\end{array}\right| + 2\left|\begin{array}{cc} -2 & 4\\ -3 & 5 -\end{array}\right| + 3\left|\begin{array}{rr} -\hphantom{-}2 & -3\\ -3 & -4 -\end{array}\right|\\ -&= \Bigl(-15 + 16\Bigr) + 2\Bigl( 10 - 12\Bigr) + 3\Bigl(-8+9\Bigr)\\ -&= 1 - 4 + 3 = 0, -\end{align*}$$ -so the matrix is not "nearly singular", it is just plain singular. -The eigenvectors corresponding to $\lambda$ are found by solving the system $(A-\lambda I)\mathbf{x}=\mathbf{0}$.
So, the eigenvectors corresponding to $\lambda=0$ are found by solving the system $(A-0I)\mathbf{x}=A\mathbf{x}=\mathbf{0}$. That is: solve -$$\begin{array}{rcrcrcl} -x & - & 2y & + & 3z & = & 0 \\ -2x & - & 3y & + & 4z & = & 0 \\ -3x & - & 4y & + & 5z & = & 0. -\end{array}$$ -The solutions (other than the trivial solution) are the eigenvectors. A basis for the solution space (the nullspace of $A$) is a basis for the eigenspace $E_{\lambda}$. -Added. If you know a square matrix is singular, then finding eigenvectors corresponding to $0$ is equivalent to solving the corresponding system of linear equations. There are plenty of algorithms for doing that: Gaussian elimination, for instance (Wikipedia even has pseudocode for implementing it). If you want numerical stability, you can also use Grassmann's algorithm.<|endoftext|> -TITLE: Minimum variance unbiased estimator for scale parameter of a certain gamma distribution -QUESTION [8 upvotes]: Let $X_1, X_2, ..., X_n$ be a random sample from a distribution with p.d.f., -$$f(x;\theta)=\theta^2xe^{-x\theta}, \quad 0<x<\infty,\ \theta>0$$ Obtain the minimum variance unbiased estimator of $\theta$ and examine whether the Cramer-Rao bound is attained. -MY WORK: -Using MLE I have found the estimator $\hat{\theta}=\frac{2}{\bar{x}}$ -Or as $$X\sim \operatorname{Gamma}(2, \theta)$$So -$E(X)=\frac{2}{\theta}$, $E(\frac{X}{2})=\frac{1}{\theta}$ -so can I take $\frac {X}{2}$ as an unbiased estimator of $\frac{1}{\theta}$? -I'm stuck and confused and need some help. Thank you. - -REPLY [13 votes]: If one is familiar with the concepts of sufficiency and completeness, then this problem is not too difficult. Note that $f(x; \theta)$ is the density of a $\Gamma(2, \theta)$ random variable. The gamma distribution falls within the class of the exponential family of distributions, which provides rich statements regarding the construction of uniformly minimum variance unbiased estimators via notions of sufficiency and completeness. -The joint density of a random sample of size $n$ from this distribution is -$$ -g(x_1,\ldots,x_n; \theta) = \theta^{2n} \exp\Big(-\theta \sum_{i=1}^n x_i + \sum_{i=1}^n \log x_i\Big) -$$ -which, again, conforms to the exponential family class. -From this we can conclude that $S_n = \sum_{i=1}^n X_i$ is a complete, sufficient statistic for $\theta$. Operationally, this means that if we can find some function $h(S_n)$ that is unbiased for $\theta$, then we know immediately via the Lehmann-Scheffé theorem that $h(S_n)$ is the unique uniformly minimum variance unbiased (UMVU) estimator. -Now, $S_n$ has distribution $\Gamma(2n, \theta)$ by standard properties of the gamma distribution. (This can be easily checked via the moment-generating function.) -Furthermore, straightforward calculus shows that -$$ -\mathbb{E} S_n^{-1} = \int_0^\infty s^{-1} \frac{\theta^{2n} s^{2n - 1}e^{-\theta s}}{\Gamma(2n)} \,\mathrm{d}s = \frac{\theta}{2n - 1} \>. -$$ -Hence, $h(S_n) = \frac{2n-1}{S_n}$ is unbiased for $\theta$ and must, therefore, be the UMVU estimator. - -Addendum: Using the fact that $\newcommand{\e}{\mathbb{E}}\e S_n^{-2} = \frac{\theta^2}{(2n-1)(2n-2)}$, we conclude that $\mathbb{V}ar(h(S_n)) = \frac{\theta^2}{2(n-1)}$. On the other hand, the information $I(\theta)$ from a sample of size one is readily computed to be $-\e \frac{\partial^2 \log f}{\partial \theta^2} = 2 \theta^{-2}$ and so the Cramer-Rao lower bound for a sample of size $n$ is -$$ -\mathrm{CRLB}(\theta) = \frac{1}{n I(\theta)} = \frac{\theta^2}{2n} \> .
-$$ -Hence, $h(S_n)$ does not achieve the bound, though it comes close, and indeed, achieves it asymptotically. -However, if we reparametrize the density by taking $\beta = \theta^{-1}$ so that -$$ -f(x;\beta) = \beta^{-2} x e^{-x/\beta},\quad x > 0, -$$ -then the UMVU estimator for $\beta$ can be shown to be $\tilde{h}(S_n) = \frac{S_n}{2 n}$. (Just check that it's unbiased!) The variance of this estimator is $\mathbb{V}ar(\tilde{h}(S_n)) = \frac{\beta^2}{2n}$ and this coincides with the CRLB for $\beta$. -The point of the addendum is that the ability to achieve (or not) the CRLB depends on the particular parametrization used and even when there is a one-to-one correspondence between two unique parametrizations, an unbiased estimator for one may achieve the Cramer-Rao lower bound while the other one does not.<|endoftext|> -TITLE: Proof of $f = g \in L^1_{loc}$ if $f$ and $g$ act equally on $C_c^\infty$ -QUESTION [5 upvotes]: Let $f$ and $g$ be locally integrable, say on $R^n$ (for arbitrary open domains, just extend trivially). Suppose -$\forall \phi \in C_c^\infty : \int f \phi dx = \int g \phi dx$. -Let $K = supp(\phi)$. If $f,g \in L^2(K)$, we see by Hilbert space theory $f = g$ almost everywhere, as $C_c^\infty(K)$ is dense in $L^2(K)$. -So the theorem holds for $f,g \in L^2_{loc}$. But $L^\infty_{loc} \subset L^2_{loc}$ is dense in $L^1_{loc}$ (in the respective topology), hence we have $f = g$ almost everywhere even for $L^1_{loc}$. -I haven't seen this proof in the literature, and I wonder whether there is a gap. If that is the case, do you know a better proof? - -REPLY [4 votes]: Another way to do this is to let $\phi(x)$ be a smooth cutoff function equal to $1$ on some ball $B(a,r)$. Then your condition implies that the Fourier transform $\int_{R^n}(f(x) - g(x))\phi(x)e^{i\xi \cdot x}\,dx$ is zero for all $\xi \in R^n$. By the uniqueness theorem for the Fourier transform, $(f(x) - g(x))\phi(x) = 0$ a.e., so $f(x) = g(x)$ a.e. on the ball $B(a,r)$. This ball was arbitrary so $f(x) = g(x)$ a.e.<|endoftext|> -TITLE: Does this covariant functor really turn coproducts into products? -QUESTION [6 upvotes]: Let $X$ be a set. Let $\mathrm{Over}(X)$ be the category of sets $A$ with a map $A\to X$; the morphisms are equivariant maps, that is, maps which make the obvious triangle commute. -A map of sets $f\colon X\to Y$ induces a functor $f_*\colon\mathrm{Over}(X)\to\mathrm{Over}(Y)$ by postcomposition. This shows that $\mathrm{Over}$ is a functor from the category of sets to the category of categories (up to usual set theoretic issues which I would like to ignore). -Now let $X=X_1\sqcup X_2$ be a disjoint union, i.e., a coproduct of sets. Every set $\phi\colon A\to X$ gives a pair consisting of a set over $X_1$ and a set over $X_2$. These are the restrictions $\phi^{-1}(X_i)\to X_i$. On the other hand, if you give me a set over $X_1$ and a set over $X_2$, then I can write down a corresponding set over $X$ by just taking the disjoint union. -I think what I just said means that the coproduct decomposition $X=X_1\coprod X_2$ gives a product decomposition of categories $\mathrm{Over}(X)\cong \mathrm{Over}(X_1)\times \mathrm{Over}(X_2)$. -I am confused by the apparent fact that the covariant functor $\mathrm{Over}$ turns coproducts into products. What is happening here? -Edit: As Theo explains in his comment below, the coproduct of categories is just the disjoint union. So it is not a very complicated thing, but also not what we want here---which is really pairs of objects.
-Edit: Does the pullback diagram -$$\begin{array}{ccc} -P & \rightarrow & A\\ -\downarrow_{f^!(\phi)} & & \downarrow_\phi\\ -X & \rightarrow^f & Y -\end{array}$$ -define a wrong-way functoriality for maps $f\colon X\to Y$ which would turn the canonical maps of a coproduct $X_1\sqcup X_2$ to the canonical maps of the product $\mathrm{Over}(X_1)\times \mathrm{Over}(X_2)$? This seems to correspond to Omar's observation. - -REPLY [7 votes]: The functor Over is covariant, and you do have that Over(X1∐X2)≅Over(X1)×Over(X2). However, I wouldn't say Over "turns coproducts into products", since that phrase usually -means that additionally the canonical maps Xi→X1∐X2 are sent to the canonical projections Over(X1)×Over(X2)→Over(Xi), and for that, of course, the functor would need to be contravariant. -In this example what is really going on is that Over(X) is equivalent to Fun(X,Set), the category of functors from X to Set (where the set X is regarded as a discrete category, i.e., the category whose objects are the elements of X and that only has identity morphisms). [The equivalence is given by sending f:A→X to the functor that sends an element x in X to its inverse image under f.] Of course, the functor Set→Cat given by Fun(_,Set) is contravariant, and it does turn coproducts into products in the sense I mentioned above. In summary, the covariant functor Over and the contravariant functor Fun(_, Set) agree on objects even though they have opposite variance, and the second turns coproducts into products. -Edit to answer the edit to the question: the pullback does indeed make Over(X) into a contravariant functor which is naturally isomorphic to Fun(_,Set). This is easy to check: if you have an object of Over(Y), say f:A→Y, and a function g:X→Y, then the inverse image of an element x in X under the pullback object of Over(X) is just the preimage of g(x) under f (the definition of pull-back is basically chosen to make this true).<|endoftext|> -TITLE: How to express $(1+x+x^2+\cdots+x^m)^n$ as a power series? -QUESTION [15 upvotes]: Is it possible to express $(1+x+x^2+\cdots+x^m)^n$ as a power series? - -REPLY [26 votes]: If you try to use the multinomial theorem, you get an $(m-1)$-fold iterated sum which might not be much progress. You can use inclusion-exclusion to express the coefficient of $x^k$ as a single sum instead. -The coefficient of $x^k$ in $(1+\ldots+x^m)^n$ is the number of ways to write $k$ as a sum of $n$ nonnegative integers such than none are at least $m+1$. -The number of ways to write $k$ as a sum of $n$ nonnegative integers is $k+n-1 \choose n-1$. -The number of ways to write $k$ as a sum of $n$ nonnegative integers so that a particular subset of size $s$ of the terms are at least $m+1$, with no restriction on the others, is the same as the number of ways to write $k-s(m+1)$ as a sum of $n$ nonnegative integers with no restrictions, $k-s(m+1)+n-1 \choose n-1$. -So, by inclusion-exclusion, the coefficient of $x^k$ in $(1+x+\ldots+x^m)^n$ is -$$\sum_{s=0}^{\lfloor k/(m+1) \rfloor} (-1)^s {n \choose s} {k-s(m+1)+n-1 \choose n-1}.$$ -For example, the coefficient of $x^{50}$ in $(1+x+\ldots+x^{10})^{10}$ is $1018872811$ which is easy to compute with a single sum $(12565671261 - 16771066400 + 5598162900 - 374946000 + 1051050)$, but hard to compute with a $9$-fold sum over thousands of terms. -Another way to get the same answer is to expand $\left(\frac {1-x^{m+1}}{1-x}\right)^n$ as $(1-x^{m+1})^n \times (1-x)^{-n}$. 
Expand the first term with the binomial theorem, and then use $(1-x)^{-1} = 1 + x + x^2 + \ldots$ so $(1-x)^{-n} = \sum_k {n+k-1 \choose k} x^k$, which is the binomial theorem with exponent $-n$. Then multiply the two single sums, isolating the coefficient of $x^k$.<|endoftext|> -TITLE: Flat not projective, projective not free -QUESTION [8 upvotes]: I am looking for examples of a flat but not projective module, and of a projective but not free module. - -REPLY [13 votes]: The rational numbers are a flat but not projective $\mathbb Z$-module. -$\mathbb Z\oplus 0$ is a projective but not free -$\mathbb Z\oplus \mathbb Z$-module.<|endoftext|> -TITLE: Divergence of Gradient of the Unit Normal, and Curvature Equation -QUESTION [8 upvotes]: The curvature equation for implicit functions, level sets is usually given in two forms: one is the divergence of the unit normal (the normalized gradient): -$\kappa = \bigtriangledown \cdot \frac{\bigtriangledown \phi}{|\bigtriangledown \phi|}$ -and the other is -$\kappa = \frac{\phi_{xx}\phi_y^2 - 2\phi_x\phi_y\phi_{xy} + \phi_{yy}\phi_x^2}{(\phi_x^2+\phi_y^2)^{3/2}}$ -How do we derive the second equation from the first? - -REPLY [7 votes]: Just expand in coordinates: -$$\begin{align}\kappa &= \nabla \cdot \frac{\nabla \phi}{|\nabla \phi|} = \nabla \cdot \frac{(\phi_x,\phi_y)}{\sqrt{\phi_x^2+\phi_y^2}}\\ -&=\left(\partial_x \frac{\phi_x}{\sqrt{\phi_x^2+\phi_y^2}}\right)+ -\left(\partial_y \frac{\phi_y}{\sqrt{\phi_x^2+\phi_y^2}}\right) \\ -&= \frac{\phi_{xx}}{\sqrt{\phi_x^2+\phi_y^2}} - \frac{\phi_x (\phi_x\phi_{xx}+\phi_y\phi_{xy})} -{(\phi_x^2+\phi_y^2)^{3/2}} + -\frac{\phi_{yy}}{\sqrt{\phi_x^2+\phi_y^2}} - \frac{\phi_y(\phi_x\phi_{xy}+\phi_y\phi_{yy})} -{(\phi_x^2+\phi_y^2)^{3/2}} \\ -&= \frac{\phi_{xx}(\phi_x^2+\phi_y^2) - \phi_x (\phi_x\phi_{xx}+\phi_y\phi_{xy}) +\phi_{yy}(\phi_x^2+\phi_y^2) - \phi_y(\phi_x\phi_{xy}+\phi_y\phi_{yy})}{(\phi_x^2+\phi_y^2)^{3/2}}\\ -&= \frac{\phi_{xx}\phi_y^2 - 2\phi_x\phi_y\phi_{xy} + \phi_{yy}\phi_x^2}{(\phi_x^2+\phi_y^2)^{3/2}} -\end{align}$$<|endoftext|> -TITLE: Decomposition of $\Bbb R^n$ as union of countable disjoint closed balls and a null set -QUESTION [14 upvotes]: This is a problem in Frank Jones's Lebesgue integration on Euclidean space (p.57), - -$$\mathbb{R}^n = N \cup \bigcup_{k=1}^\infty \overline{B}_k$$ -where $\lambda(N)=0$, and the closed balls are disjoint. - -Could anyone give some hints? - -REPLY [11 votes]: Fix some dimension $d \geq 1$. It suffices to prove that the subspace $X = [0,1)^d \subset \mathbf{R}^d$ is the union of a disjoint family of closed balls and a null set (with respect to Lebesgue measure $\lambda$ on $X$). Let's call any subset of $X$ of the form $\prod_{i=1}^d[x_i,x_i + s)$ where $s>0$ a box. A disjoint union of finitely many boxes (resp. closed balls) will be called a square (resp. round) set. Using a grid construction, it is not difficult to see that every open subset of $X$ is the disjoint union of countably many boxes. Thus follows: -Lemma 1: If $U$ is an open subset of $X$ and $\epsilon > 0$, there is a square set $S \subset U$ with $\lambda(U) - \lambda(S) < \epsilon$. -Let $a \in (0,1)$ be a constant (depending on $d$) such that every box contains a closed ball of $a$ times the measure. Choosing a ball for every box in a square set gives: -Lemma 2: For every square set $S \subset X$ there is a round set $R \subset S$ such that $\lambda(R) = a \lambda(S)$. -Choose $\epsilon > 0$ so that $(1-a) + \epsilon < 1$. We can now construct the desired family of disjoint balls.
<|endoftext|>
-TITLE: Decomposition of $\Bbb R^n$ as union of countable disjoint closed balls and a null set
-QUESTION [14 upvotes]: This is a problem in Frank Jones's Lebesgue integration on Euclidean space (p.57),
-
-$$\mathbb{R}^n = N \cup \bigcup_{k=1}^\infty \overline{B}_k$$
-where $\lambda(N)=0$, and the closed balls are disjoint.
-
-could anyone give some hints?
-
-REPLY [11 votes]: Fix some dimension $d \geq 1$. It suffices to prove that the subspace $X = [0,1)^d \subset \mathbf{R}^d$ is the union of a disjoint family of closed balls and a null set (with respect to Lebesgue measure $\lambda$ on $X$). Let's call any subset of $X$ with the form $\prod_{i=1}^d[x_i,x_i + s)$ where $s>0$ a box. A disjoint union of finitely many boxes (resp. closed balls) will be called a square (resp. round) set. Using a grid construction, it is not difficult to see that every open subset of $X$ is the disjoint union of countably many boxes. Thus follows:
-Lemma 1: If $U$ is an open subset of $X$ and $\epsilon > 0$ there is a square set $S \subset U$ with $\lambda(U) - \lambda(S) < \epsilon$.
-Let $a \in (0,1)$ be a constant (depending on $d$) such that every box contains a closed ball of $a$ times the measure. Choosing a ball for every box in a square set gives:
-Lemma 2: For every square set $S \subset X$ there is a round set $R \subset S$ such that $\lambda(R) = a \lambda(S)$.
-Choose $\epsilon > 0$ so that $(1-a) + \epsilon < 1$. We can now construct the desired family of disjoint balls.
-We will construct recursively, for each $n=1,2,\ldots$, a round set $R_n$ such that $\lambda(X - R_n) \leq (1-a)^n + (1-a+\epsilon)^n$.
-Since $X$ is square, there is a round set $R_1$ (just a ball actually) with $\lambda(R_1) = a \lambda(X) = a$, whence $\lambda(X - R_1) = 1-a \leq (1-a) + (1-a+\epsilon)$.
-Now suppose that $R_n$ is given for $n \geq 1$ and that $\lambda(X - R_n) \leq (1-a)^n + (1-a+\epsilon)^n$ holds. Since $X-R_n$ is open, lemma 1 gives us a square set $S$ disjoint from $R_n$ with $\lambda(X-R_n) - \lambda(S) \leq \epsilon^{n+1}$. Then, by lemma 2, there is a round set $R \subset S$ with $\lambda(R) = a \lambda(S)$. Putting $R_{n+1} = R_n \sqcup R$ we see that:
-\begin{align*}
-\lambda(X-R_{n + 1}) &= \lambda(X - R_n) - \lambda(R)\\
-&= [ \lambda(X-R_n) - \lambda(S)] + (1-a) \lambda(S)\\
-&\leq [ \lambda(X-R_n) - \lambda(S)] + (1-a) \lambda(X-R_n)\\
-&\leq \epsilon^{n+1} + (1-a)[(1-a)^n + (1-a+\epsilon)^n]\\
-&= [\epsilon^{n+1} - \epsilon (1-a+\epsilon)^n] + (1-a)^{n+1} + (1-a+\epsilon)^{n+1}\\
-&\leq (1-a)^{n+1} + (1-a+ \epsilon)^{n+1}
-\end{align*}
-and the bound is established. It is clear from our construction that $\bigcup_{n=1}^\infty R_n$ is a disjoint union of closed balls, and the bound shows that its complement in $X$ is a null set, so we are done.
-Hopefully this doesn't have too many mistakes and is somewhat readable. I wasn't expecting the analysis portion of the problem to be as finicky as it turned out to be when I started typing this up...<|endoftext|>
-TITLE: Is an intersection of two splitting fields a splitting field?
-QUESTION [6 upvotes]: Let $F$ be a field, and let $K_1$, $K_2$ be two splitting fields over $F$ (suppose they are contained in a larger field $K$). Is $K_1\cap K_2$ necessarily a splitting field over $F$?
-The statement is true if $K_1$ and $K_2$ are finite extensions of $F$; however, I'm not sure how to prove (or disprove) the statement in the general case.
-Thanks.
-
-REPLY [6 votes]: The following is standard; for example, it appears in Lang's Algebra.
-Proposition. Let $K$ be an algebraic extension of $F$, contained in some algebraic closure $\overline{F}$ of $F$. The following are equivalent:
-
-Every embedding of $K$ into $\overline{F}$ over $F$ is an automorphism of $K$.
-$K$ is a splitting field over $F$.
-Every irreducible polynomial $f(x)\in F[x]$ that has at least one root in $K$ splits over $K$.
-
-Proof. 1$\Rightarrow$2,3: Let $a\in K$, and let $f(x)$ be its irreducible polynomial over $F$. If $b$ is any root of $f(x)$ in $\overline{F}$, then the isomorphism $F(a)\to F(b)$ that maps $a$ to $b$ extends to an embedding of $K$ over $F$; by 1, it is an automorphism of $K$, so $b\in K$. Thus, an irreducible polynomial in $F[x]$ that has at least one root in $K$ has all its roots in $K$, proving 3. Letting $S$ be the collection of irreducible polynomials over $F$ of elements of $K$ shows that $K$ is a splitting field over $F$.
-2$\Rightarrow$1: Let $S=\{f_i\}$ be a family of polynomials such that $K$ is the splitting field of $S$ over $F$. If $a$ is a root of some $f_i$ and $a\in K$, then for any embedding of $K$ into $\overline{F}$, $a$ must map to a root of $f_i$; but the roots of $f_i$ are all in $K$. Moreover, since $K$ is generated over $F$ by the roots of the $f_i$, this means that every element of $K$ maps to an element of $K$ under the embedding. So $K$ satisfies condition 1.
-3$\Rightarrow$1: Let $\sigma\colon K\to\overline{F}$ be an embedding over $F$. Let $a\in K$, and let $f(x)$ be the monic irreducible of $a$ over $F$.
Then $\sigma$ maps $a$ to a root of $f(x)$; but by assumption, all the roots of $f(x)$ are in $K$, so $\sigma(a)\in K$. Thus, $\sigma$ maps $K$ into itself (and an embedding of an algebraic extension into itself over $F$ is automatically onto), proving 1. QED
-From this, it is easy to verify that the intersection of two splitting fields is indeed a splitting field.<|endoftext|>
-TITLE: compact symplectic manifolds
-QUESTION [5 upvotes]: Why is there no compact symplectic submanifold of dimension greater than 2 in $\mathbb{R}^{2n}$?
-
-REPLY [6 votes]: The symplectic form on $R^{2n}$ is exact; however, the symplectic form on a compact symplectic manifold cannot be exact. Hence there are no compact symplectic submanifolds (whether of dim. 2 or higher).<|endoftext|>
-TITLE: Local variables when defining function in Mathematica
-QUESTION [7 upvotes]: I defined a function in Mathematica. It is a function of n, so f[n_]:=, but in the definition, I used a sum, $\sum_{k=0}^n$. So, the $k$ is just an index variable for the sum and no $k$ shows up in the final answer. As I was using this function I tried evaluating f[k-1] and got a weird answer, 0. I finally figured out that Mathematica was trying to do the sum $\sum_{k=0}^{k-1}$, or so I guess. So, my question is, is there a way to make the $k$ local so that this error never occurs? My fix for now was to change $k$ to $index$ and I will probably not use f[index] at any point.
-
-REPLY [3 votes]: An alternative to Module or Block is to use Formal Symbols. This allows for the cleanest code.
-One may still run into trouble depending on how you choose to use Formal symbols elsewhere, but if you never use \[FormalK] in the argument to f you are safe.<|endoftext|>
-TITLE: Anecdotes about famous mathematicians or physicists
-QUESTION [96 upvotes]: I'm not sure whether this question suits this website; however, I don't know where else I could ask it. It is no mathematical problem or something similar; still, I hope it won't be closed.
-A few weeks ago, our assistant professor in physics told us a story about Maxwell when we came to speak about Maxwell's equations. He said rumour has it that once in an exam, Maxwell faced a differential equation or integral - at that time thought unsolvable - and solved it.
-I wonder whether there are more famous rumours or anecdotes about mathematicians or physicists (and which of them are true and which not). I believe everyone knows the story of how Gauss computed
-$$\sum_{n=1}^{100} n$$
-an exercise his teacher gave to his class to keep it busy. Or a more famous example: Everyone knows how Newton discovered gravity (is that one actually true?). Or how Archimedes found Archimedes' principle. So, to put it into a single line:
-Do you know any other noteworthy anecdotes about famous mathematicians or physicists?
-EDIT: In case you provide an answer, please also state whether the anecdote is true or not, if possible. Thanks a lot for the hitherto existing answers!
-
-REPLY [9 votes]: I have always enjoyed this anecdote concerning an encounter between Harvey Friedman, whose finite form of Kruskal's Tree Theorem gives rise to the TREE sequence of indescribably large numbers, and the ultrafinitist Russian mathematician Alexander Yessenin-Volpin. For Yessenin-Volpin, even numbers such as $2^{100}$ are out of the reach of the human mind, and therefore of doubtful validity, let alone monsters such as Friedman's.
-The question is: if $2$ is accepted, but $2^{100}$ is not, where does one draw the line?
Friedman tells the story of when the two men, whose ideologies could hardly be more different, met:
-
-I raised this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to ask to start with $2^1$ and asked him whether this is "real" or something to that effect. He virtually immediately said yes. Then I asked about $2^2$, and again he said yes, but with a perceptible delay. Then $2^3$, and yes, but with more delay. This continued a couple more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take $2^{100}$ times as long to answer yes to $2^{100}$ than he would to answer $2^1$. There is no way I could get very far with this.
-
-
-Another concerns Paul Erdos. In his later years he suffered from terrible vision problems which made it extremely difficult for him to read. Colleagues arranged for him to have a corrective procedure at the local hospital. He was taken to meet the surgeon, who began to explain the procedure to Erdos; however, Erdos declared he was not interested in details of the operation and simply wished to know "will I be able to read?". "Yes" was the surgeon's reply, noting that correcting his vision was the point of the procedure.
-Some weeks later, on the day the procedure was scheduled, Erdos arrived at the hospital with his colleagues. After going through the necessary preparations, Erdos was wheeled into the operating theatre to the waiting surgical team. As they began to dim the lights to start the procedure, Erdos sat up and demanded to know what was going on. "We are dimming the lights to begin the procedure", was the reply. "But you said I could read!" was Erdos' innocent response.<|endoftext|>
-TITLE: Hecke operators on modular forms
-QUESTION [12 upvotes]: Would you please explain the importance of Hecke operators on modular forms? I am studying modular forms mostly on my own and I have a pretty good understanding up to Hecke operators. So, I just wonder why we care about them.
-
-REPLY [4 votes]: Here are a couple of reasons:
-
-Abstractly the space of modular forms (of a given weight and level) is just a vector space (finite dimensional). Hecke operators give you a very concrete set of (commuting) operators on this vector space, and hence you can get some control on these spaces purely in terms of linear algebra.
-The coefficients of cusp forms (a very important subset of modular forms) contain arithmetic information. Historically there were many remarkable conjectures about them. The introduction of Hecke operators demystifies them and many of these conjectures become simple.
-One can study modular forms from the perspective of representation theory and geometry, and in both these cases Hecke operators arise as rather natural operations on this space. This suggests that Hecke operators are intimately connected to the study of modular forms.<|endoftext|>
-TITLE: Expected time to roll all 1 through 6 on a die
-QUESTION [123 upvotes]: What is the average number of times it would take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter.
-I had this question explained to me by a professor (not a math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$ or $n = 12.152$
-Can someone please explain this to me, possibly with a link to a general topic?
-
-REPLY [5 votes]: Associate a success with each number appearing that has not appeared before. Let $X_i$ be the number of trials between the $i^{th}$ success and the $(i + 1)^{st}$ success.
-Let $X$ be the random variable representing the total number of trials required for the required event, and $E[X]$ be the required expected value.
-Then by linearity of expectation, we have $E[X] = 1 + \sum_{i=1}^{5}E[X_i]$.
-To calculate $E[X_i]$, consider the following:
-after receiving $i$ different numbers, i.e. after $i$ successes, each subsequent trial has probability $(6 − i)/6$ of getting a number that has not appeared before.
-Therefore, the random variable $X_i$ is geometric with parameter $p_i = (6−i)/6$, and so $E[X_i] = 1/p_i = 6/(6-i)$.
-It follows that $E[X] = 1 + 6\sum_{i=1}^{5}1/i$.
-Hence $E[X]=14.7$.
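-A quick numerical confirmation (a minimal Python sketch, giving both the exact sum and a Monte Carlo estimate):
-
-import random
-from fractions import Fraction
-
-exact = 1 + sum(Fraction(6, 6 - i) for i in range(1, 6))
-print(float(exact))  # 14.7
-
-def rolls_until_all_seen(faces=6):
-    seen, count = set(), 0
-    while len(seen) < faces:
-        seen.add(random.randint(1, faces))
-        count += 1
-    return count
-
-trials = 100_000
-print(sum(rolls_until_all_seen() for _ in range(trials)) / trials)  # about 14.7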
<|endoftext|>
-TITLE: A transitive set of ordinals is an ordinal
-QUESTION [6 upvotes]: This is Exercise III.2.20 of Bourbaki's Set Theory.
-(Von Neumann ordinals are actually called "pseudo-ordinals" by Bourbaki, but I simply call them ordinals here.)
-Let $X$ be a transitive set, and suppose that each $x\in X$ is an ordinal. Then $X$ is an ordinal (Hint: for each $x\in X$, $x\cup\{x\}$ is an ordinal contained in $X$).
-This statement is proven in many textbooks. The problem is that none of them uses Bourbaki's definition of ordinal:
-For Bourbaki, a set $X$ is an ordinal if every proper transitive subset of $X$ is an element of $X$. [Edit: I'm not sure if this implies e.g. that $X$ is well-ordered by $\in$, since Bourbaki has no Axiom of Foundation.]
-A proof that this implies one of the usual definitions (e.g. $X$ is a transitive set whose members are transitive) would be enough, too. [Sorry!]
-This should be easy, but I don't know where to start. I'm glad for any help.
-Second Edit: The statement I want to show, in a hopefully clearer form:
-Let $X$ be a transitive set such that any $x\in X$ has the property that any proper transitive subset of $x$ is an element of $x$. Then $X$ has the same property.
-
-REPLY [2 votes]: Remark: I see in your edit that Bourbaki doesn't have an axiom of foundation. This doesn't affect my proof in any substantial way, since my contradiction is of the form $B\in B$. Thus a slight modification by letting $B=\bigcup\{A\subset X : A \textrm{ transitive and well-founded}\}$ in the first part and analogously letting $B=\bigcup\{A\subset X\setminus\{a\} : A \text{ transitive and well-founded}\}$ in the second part is enough to show that the ordinals as defined by Bourbaki are the standard well-founded ordinals. I also added another way to show this in my edit that again doesn't use the axiom of foundation.
-You want to show that if every transitive proper subset of $X$ is an element of $X$ then $X$ is a transitive set whose elements are transitive:
-
-If there exists an element $a\in X$ such that $a\nsubseteq X$ then take the set $$B=\bigcup\{A\subset X : A \text{ transitive and well-founded}\}$$ This set is transitive and well-founded as the union of transitive well-founded sets, and we have that $a\notin B$ since otherwise we would have $a\subset B\subset X$. Thus $B$ is a proper subset of $X$ and so $B\in X$. We also have that $B\neq a$ since $B\subset X$ while $a\nsubseteq X$. Now the set $B\cup\{B\}$ is a transitive subset of $X$, which means that $B\cup\{B\}\subset B$ (by the definition of $B$). Thus $B\in B$, which is impossible since $B$ is well-founded.
-We arrived at a contradiction because we assumed that $a\nsubseteq X$; therefore $a\subset X$.
-A similar argument shows that $a\in X$ means that $a$ is transitive: In this case take the set $$B=\bigcup\{A\subset X\setminus\{a\} : A \text{ transitive and well-founded}\}$$ If $a$ is not transitive then $B\neq a$ since $a$ is not transitive while $B$ is. We have that $B\in X$ and therefore $B\cup\{B\}\subset X\setminus\{a\}$. Again this gives us $B\in B$, which is a contradiction.
-
-
-Edit: The converse is easy. Given an ordinal $\alpha$ let $X\subsetneq\alpha$ be transitive. Then since every element of $\alpha$ is an ordinal and thus transitive, we have that $X$ is a transitive set whose elements are transitive sets, i.e. an ordinal. Since it's a proper subset of $\alpha$ it cannot be $\alpha$, nor can it be larger than $\alpha$ since then we would have $\alpha\in X$, which would give us $\alpha\in\alpha$, a contradiction. Thus $X\in\alpha$. Therefore the two definitions are equivalent. This is enough (since you know that a transitive set of ordinals is an ordinal) to show exactly what you want.
-Also here's a quicker way to show the first part of my answer, namely that the property implies the fact that $X$ is an ordinal: Let $\alpha$ be the greatest ordinal such that $\alpha\subset X$. Observe that this ordinal indeed exists, since if there is an increasing sequence of ordinals such that each of them is a subset of $X$ then their limit will be a subset of $X$ (because a limit ordinal is the union of a sequence that approaches it, and the union of subsets of a set is a subset of that set). Now it's easy to see that $X=\alpha$. If $\alpha\subsetneq X$ then $\alpha\in X$ and thus $\alpha\cup\{\alpha\}\subset X$, which is a contradiction by the definition of $\alpha$.
-P.S.: Sorry for the delayed edit but I was out of town for the weekend.<|endoftext|>
-TITLE: another balls and bins question
-QUESTION [21 upvotes]: I've seen many variations of this problem but I can't find a good, thorough explanation on how to solve it. I'm not just looking for a solution, but a step-by-step explanation on how to derive the solution.
-So the problem at hand is:
-You have m balls and n bins. Consider throwing each ball into a bin uniformly and at random.
-
-What is the expected number of bins that are empty, in terms of m and n?
-What is the expected number of bins that contain exactly 1 ball, in terms of m and n?
-
-How would I approach solving this problem?
-Thanks!
-
-REPLY [6 votes]: Let's work out the probability that there are $k$ balls (out of $m$) in the first bin (out of $n$). This is a simple binomial probability with $p=1/n$ and $1-p = (n-1)/n$, so the probability is ${m \choose k} \dfrac{(n-1)^{m-k}}{n^m}$.
-This probability equals the expected value of the indicator that the first bin has $k$ balls, so by linearity of expectation the expected number of bins with $k$ balls is $n$ times it, i.e.
-$${m \choose k} \dfrac{(n-1)^{m-k}}{n^{m-1}}$$
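-Setting $k=0$ and $k=1$ answers the two questions. Here is a small Python sketch checking the formula against simulation (the values of m and n are arbitrary test choices):
-
-from math import comb
-import random
-
-m, n = 20, 10
-
-def expected_bins_with(k):
-    # n * C(m, k) * (n-1)^(m-k) / n^m
-    return n * comb(m, k) * (n - 1)**(m - k) / n**m
-
-print(expected_bins_with(0), expected_bins_with(1))  # about 1.216 and 2.702
-
-trials = 100_000
-empty = 0
-for _ in range(trials):
-    counts = [0] * n
-    for _ in range(m):
-        counts[random.randrange(n)] += 1
-    empty += counts.count(0)
-print(empty / trials)  # close to the first number above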
<|endoftext|>
-TITLE: Is there a set with cardinality greater than N but less than R?
-QUESTION [7 upvotes]: Is there a set with cardinality greater than the natural numbers but less than the real numbers?
-Is there a simple proof which shows this, if the answer is no?
-
-REPLY [12 votes]: Yes and no. This may seem strange, so let me provide a little explanation.
-There is no such thing as a proof without assumptions, so mathematicians are forced to make certain assumptions, which they take to be true without proving them, called axioms. Over the years a certain set of axioms has come into general usage among mathematicians, and most results in mathematics are proven from these axioms.
-However, not every question is settled by these axioms. For your question, one cannot prove (meaning that it's been proven that no proof is possible, not just that no proof has been found yet) using the generally accepted axioms that such a set exists. However, one also cannot prove (same meaning) that no such set exists. It's kind of up to you then whether to assume the existence or the non-existence of such a set as a new axiom. The assumption that no such set exists is called the continuum hypothesis.<|endoftext|>
-TITLE: Solvable subgroups of $S_p$ of order divisible by $p$
-QUESTION [10 upvotes]: This question is from Dummit and Foote's Abstract Algebra, page 638, question 20. It gives a nice paragraph of hints that basically guides one through the problem, but I'm very stuck at a crucial junction. Any useful hint is much appreciated. I have detailed what I know and what I do not know, but if you just want the tl;dr, just read the question, which is the following sentence.
-"Let $p$ be a prime. Show that any solvable subgroup of $S_p$ of order divisible by $p$ is contained in the normalizer of a Sylow $p$-subgroup of $S_p$. [...]
-Hint: Let $G \leq S_p$ be a solvable subgroup of order divisible by $p$. Then $G$ contains a $p$-cycle, hence is transitive on $\{1, \ldots, p\}$. Let $H < G$ be the stabilizer in $G$ of the element $1$, so $H$ has index $p$ in $G$. Show that $H$ contains no nontrivial normal subgroups of $G$ (note the conjugates of $H$ are the stabilizers of the other points). Let $G^{(n-1)}$ be the last nontrivial subgroup in the derived series for $G$. Show that $H \cap G^{(n-1)} = 1$ and conclude that $\lvert G^{(n-1)}\rvert = p$, so that the Sylow $p$-subgroup of $G$ (which is also a Sylow $p$-subgroup of $S_p$) is normal in $G$."
-Here are the things I do know:
-
-$H$ has an order that divides $(p-1)!$ since it has index $p$ in $G$, and $G$ has order $pu$ for some $u$ not divisible by $p$.
-Everything up to and excluding the part where I am asked to prove that $H \cap G^{(n-1)} = 1$.
-I know how to prove the next part, where I'm asked to prove that $|G^{(n-1)}| = p$, provided I know how to do that previous part!
-I know that $\lvert S_p \rvert = p!$, so any Sylow $p$-subgroup of $S_p$ has size $p^1 = p$, since no other factors of $p!$ can contain $p$ as a prime factor.
-
-Now here are the things I do not know:
-
-I am terribly stuck at the step where I have to show $H \cap G^{(n-1)} = 1$. I tried showing that this is normal, so I can use the result immediately preceding to conclude that it is trivial. But I'm having major problems. I may just be missing something extremely obvious.
-Even if I can do that part, the next part asks us to conclude that this Sylow $p$-subgroup is normal in $G$, which I can't immediately see how to derive. I'm assuming "this Sylow $p$-subgroup" is referring to the size $p$ subgroup $G^{(n-1)}$---it has the right size to be a Sylow $p$-subgroup.
-
-REPLY [6 votes]: Arturo's solution follows the hint and is correct. But I did not find the suggestion to show that $N \cap H = 1$ particularly helpful.
-You could reason alternatively as follows. Use the same argument as Arturo to show that $NH = G$. Since $|NH| = |N||H|/|N \cap H|$, and $p$ does not divide $|H|$, it follows that $p$ divides $|N|$.
Since $N$ is abelian, it has a unique Sylow $p$-subgroup of order $p$, which must be normal in $G$.<|endoftext|>
-TITLE: Minimal Polynomial of $i + \sqrt{2}$ in $\mathbb{Q}$
-QUESTION [5 upvotes]: I am trying to find the minimal polynomial of $i + \sqrt{2}$ over $\mathbb{Q}$. I was able to determine that the minimal polynomial is fourth degree with roots at $i-\sqrt{2}$, $i+\sqrt{2}$, $-i-\sqrt{2}$, $-i+\sqrt{2}$. However, I got this answer by guessing at what the roots should be. Is there a general technique for this type of problem?
-
-REPLY [2 votes]: The way I like to look at such things... suppose $P(z)$ is a polynomial with rational coefficients. Then $P(\bar{z}) = \bar{P(z)}$ for any complex number $z$. So if $i + \sqrt{2}$ is a root of $P(z)$, so is $-i + \sqrt{2}$. Similarly, suppose $z = a + b\sqrt{2}$, with $a$ and $b$ both of the form $q_1 + q_2i$ for rational $q_1$ and $q_2$. Then if $P(a + b\sqrt{2}) = c + d\sqrt{2}$, one has $P(a - b\sqrt{2}) = c - d\sqrt{2}$. So if $i + \sqrt{2}$ is a root of $P(z)$, so is $i - \sqrt{2}$, and if $-i + \sqrt{2}$ is a root of $P(z)$, so is $-i - \sqrt{2}$.
-The upshot is that if $i + \sqrt{2}$ is a root of $P(z)$, so are $-i + \sqrt{2}$, $i - \sqrt{2}$, and $-i - \sqrt{2}$. Thus the minimal polynomial of $i + \sqrt{2}$ over $\mathbb{Q}$ will have to have these as roots and will be of degree at least 4. Then you can verify that the polynomial with these roots has rational coefficients and therefore is this minimal polynomial.
-In some sense Arturo Magidin's answer is a way of describing the above phenomenon in terms of field automorphisms.<|endoftext|>
-TITLE: Combinatorics question: Show divisibility
-QUESTION [7 upvotes]: Let $a\geq2$, $b\geq2$ be two prime numbers and $k$ be a natural number with $k\leq \min(a,b)$.
-How can one show that $z := \binom{a+b}{k} - \binom{a}{k} - \binom{b}{k}$ is divisible by the product $ab$?
-
-REPLY [2 votes]: Let $A$ and $B$ be disjoint sets of size $a$ and $b$, and let $S$ be the set of subsets of $A\cup B$ of size $k$ which contain at least one element of $A$ and one element of $B$.
-Then we can find an action of the additive group $\mathbb{Z}_a\oplus\mathbb{Z}_b$ on $S$ which acts by rotating the elements of $A$ and $B$ separately. You can make a pretty easy argument that each of the orbits of this action on $S$ has size $ab$. (This is where you need the upper limit on $k$. When $k\leq a$, the subset cannot include all of $A$, and likewise for $b$ and $B$.) So you can show that the size of $S$ must be a multiple of $ab$. But the size of $S$ is just:
-$${{a+b} \choose {k}} - {{a}\choose{k}}- {{b}\choose{k}}$$
-when $\operatorname{min}(a,b)\geq k>0$.
-Similar results:
-If $a\leq k \leq b$ then:
-$${{a+b} \choose {k}} - {b \choose {k-a}} - {b \choose k}$$
-is divisible by $ab$.
-If $a+b > k \geq \operatorname{max}(a,b)$ then:
-$${{a+b} \choose {k}} - {b \choose {k-a}} - {a \choose {k-b}}$$
-is divisible by $ab$.
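-A brute-force check of the main claim over small primes (a short Python sketch):
-
-from math import comb
-
-primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
-for a in primes:
-    for b in primes:
-        for k in range(1, min(a, b) + 1):
-            z = comb(a + b, k) - comb(a, k) - comb(b, k)
-            assert z % (a * b) == 0, (a, b, k)
-print("divisible in every case")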
<|endoftext|>
-TITLE: Why Lie algebras of type $B_2$ and $C_2$ are isomorphic?
-QUESTION [6 upvotes]: Both Lie algebras of type $B_2$ and $C_2$ have dimension 10, and we can find bases of them on page 3 in the book Introduction to Lie algebras and representation theory. How could we show that these two Lie algebras are isomorphic by constructing an explicit correspondence between bases of them? Thank you very much.
-
-REPLY [7 votes]: $B_2=so(5)$ and $C_2=sp(4)$. Let me write it as $C_2=sp(V)$ where $V$ is a 4-dim. symplectic vector space with sympl. form $\omega$. The 6-dim. vect. space $\wedge^2 V^*$ has a natural inner product given by the wedge product: $(\alpha,\beta)$ is defined by $\alpha\wedge\beta=(\alpha,\beta)\,\omega\wedge\omega$. Let $W\subset \wedge^2 V^*$ be the orthogonal complement of $\omega\in\wedge^2 V^*$. $W$ is 5-dimensional and has an inner product. The action of $sp(V)$ on $V$ yields an action on $W$ preserving the inner product, i.e. a Lie algebra morphism $sp(V)\to so(W)$. And this morphism is in fact an isomorphism.<|endoftext|>
-TITLE: What are the 2125922464947725402112000 symmetries of a Rubik's Cube?
-QUESTION [11 upvotes]: In a recent talk, Marcus du Sautoy says there are 2125922464947725402112000 ($2.1\times10^{24}$) symmetries of a Rubik's cube, but doesn't explicitly identify what qualifies as a symmetry.
-What counts as a symmetry of the Rubik's cube? Is it a thing like, "turn the top face once clockwise, then once counterclockwise"?
-How are these symmetries counted?
-
-REPLY [21 votes]: Usually, something is called a "symmetry" if it leaves something invariant. For instance, the letter T has a symmetry in that you can take the mirror image with respect to its vertical axis and the shape remains the same.
-In the case of the Rubik's cube, if you consider the coloured faces, there are no symmetries at all, since every move will change some of the faces. What is meant by "symmetries" in this case is operations that leave the "structure" of the cube invariant, i.e. that leave the cube in the same shape as before if you disregard the colours on the faces. These can be elementary operations, such as turning one face clockwise, or compound operations, such as the one you gave as an example, or more complex ones. The important point is that any sequences of operations that lead to the same end result are considered the same; for instance, if you turn a face clockwise three times, this is considered the same as turning it counter-clockwise once.
-Since none of these operations leaves the colours of the faces unchanged, and since they are considered different if and only if the end result is different, they can be counted by counting the number of different configurations into which they can bring the colours on the faces. So there's no need to think through all those gazillions of different sequences of moves; all you have to do is reason about which configurations of the faces are reachable through sequences of operations, and then count those.
-Edit in response to the comment: Apologies for apparently giving an answer in the wrong "register"; it didn't seem from the phrasing of the question that you knew what a group was :-) Also apologies for not checking the numbers.
-The number you cite is actually the total number of different positions of the cube pieces. Beyond the number of colour configurations reachable through turning faces, which is usually cited, this includes factors of $12$ for the number of different ways the pieces can be taken apart and put back together again, and $4^6$ for the number of different orientations of the central squares, which can't be distinguished from the colour markings. Of these $4^6=2^{12}$ different orientations of the central squares, $2^{11}$ can be reached without taking the cube apart.
So, denoting by $n$ the number of configurations that is usually cited, you get
-
-$n$ configurations without taking the cube apart and without marking the central squares,
-$2^{11}n$ configurations without taking apart but with marking,
-$12n$ with taking apart but without marking, and
-$2^{12}\cdot12n$ with taking apart and marking,
-
-and the number you cited is that last number.
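-The arithmetic is easy to check in Python (a sketch; the first line is the standard count of reachable colour configurations):
-
-from math import factorial
-
-n = factorial(8) * 3**7 * factorial(12) * 2**11 // 2  # corner and edge arrangements, halved by parity
-print(n)               # 43252003274489856000
-print(12 * 2**12 * n)  # 2125922464947725402112000, the number from the talk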
<|endoftext|>
-TITLE: What are the differences between classical Yang-Baxter Equation and quantum Yang-Baxter Equation?
-QUESTION [6 upvotes]: What are the differences between the classical Yang-Baxter equation and the quantum Yang-Baxter equation? Thank you very much.
-
-REPLY [6 votes]: This is basically a comment to the answer of Mariano Suárez-Alvarez, but it's too long for a comment. I want to explain the connection between the classical and quantum YBE and to correct a little omission.
-A solution of the Yang-Baxter equation is called an R-matrix.
-In the quantum YBE, $R$ is an element of $H\otimes H$, where $H=U_h\mathfrak{g}$ is a quantum group, i.e. a deformation of the enveloping algebra $U\mathfrak{g}$. ($R$ is supposed to make the category of $H$-modules into a braided monoidal category. In particular it should give a representation of the braid group $B_n$ on the $n$-fold tensor power of any module $V$; the quantum YBE is just the relevant relation in $B_3$.)
-Classical YBE is a 1st-order approximation of the quantum YBE. It is an equation for an element $r\in\mathfrak{g}\otimes\mathfrak{g}$; it says $[r_{12}, r_{13}] + [r_{12}, r_{23}] + [r_{13}, r_{23}] = 0$. If $r$ is skew-symmetric then it's the equation written by Mariano. If not (and in the interesting cases it is not skew-symmetric) it can be written as $[r_{skew},r_{skew}]=\phi$, where $r_{skew}$ is the skew-symmetric part of $r$ and $\phi\in\wedge^3\mathfrak{g}$ is obtained from the symmetric part and from the structure constants of $\mathfrak{g}$.
-By a theorem of Etingof and Kazhdan (building upon Drinfeld's results), any classical R-matrix can be extended to a quantum one.<|endoftext|>
-TITLE: Supplemental number theory text to Montgomery and Vaughan
-QUESTION [13 upvotes]: We already have a large list of the Best book ever on Number Theory, but I'm looking for a more targeted response for analytic number theory.
-Specifically, I'm taking a trip on which I may or may not have access to internet resources, nor my University's library. I'm starting to work through Montgomery and Vaughan's Multiplicative Number Theory. What would be the one book you recommend bringing as a supplement?
-
-REPLY [7 votes]: Of course, the second book in their sequence: Montgomery and Vaughan, Chapters 16-27, draft version (online only).
-Here are some other well known titles (in no particular order):
-
-H. Iwaniec and E. Kowalski, Analytic Number Theory
-H. Davenport, Multiplicative Number Theory
-A. E. Ingham, The Distribution of Prime Numbers
-T. M. Apostol, Introduction to Analytic Number Theory
-P. T. Bateman and H. G. Diamond, Analytic Number Theory: An Introductory Course
-E. C. Titchmarsh (revised by D. R. Heath-Brown), The Theory of the Riemann Zeta-Function
-
-Hope that helps,
-Remark: These books were all suggested readings from one of my courses. The books by Davenport, Bateman, Apostol and Ingham were suggested reading for the basics of analytic number theory, while Titchmarsh and Iwaniec and the online chapters of M&V were more related to the course material.
-
-REPLY [3 votes]: I would also add to Eric's answer that Tenenbaum's "Introduction to Analytic and Probabilistic Number Theory" is a great resource if one is limited in the number of books one can carry.<|endoftext|>
-TITLE: Any positive integer solutions to $x^6+y^{10}=z^{15}$?
-QUESTION [40 upvotes]: This question might be easy.
-The hard question is this: prove that if $a,b,c\geq3$ then there are no solutions in positive integers $x,y,z$ to $x^a+y^b=z^c$ with $x,y,z$ coprime. This implies Fermat, most cases of Catalan, etc., and is an open problem.
-But it's really crucial that $x,y,z$ are coprime to make this question hard. For example, if I want to find any solution to $x^9+y^{10}=z^{11}$ in positive integers, I just start with a random solution to $A+B=C$, e.g. $1+1=2$, and now I multiply by an appropriate power of all the primes dividing $ABC$ to get a solution. For example, if I start with $1+1=2$ then I multiply both sides by $2^N$ and deduce $2^N+2^N=2^{N+1}$. Now it's easy to find a positive $N$ with $N=0$ mod 9, $N=0$ mod 10 and $N=-1$ mod 11, and for this value of $N$ we get a solution in positive integers to $x^9+y^{10}=z^{11}$.
-But this trick relies on the fact that 9, 10, 11 are pairwise coprime. It wouldn't surprise me if an extension of the trick could give a solution in positive integers to $x^6+y^{10}=z^{15}$, where the point is that the exponents aren't pairwise coprime, but 5 minutes on the back of an envelope didn't give me the trick I needed, and I thought that here might be a great place to ask.
-What's the trick I've missed?
-
-REPLY [15 votes]: I agree with Qiaochu; there can't be a similar trick in this case. You can't profit from multiplying by common factors because in this case all solutions can be reduced to coprime solutions.
-To see this, consider the number of factors of an arbitrary prime $p$ in the equation. We must have
-$$rp^{6k}+sp^{10l}=tp^{15m}$$
-with $p \nmid r,s,t$. The lowest two powers of $p$ must coincide, since otherwise we could divide through by the lowest and the remaining equation couldn't be fulfilled mod $p$. So we can divide through by this common lowest power of $p$ and leave a power of $p$ in only one of the terms. But since the factors $6$, $10$ and $15$ are such that the least common multiple of each pair is a multiple of the third, dividing through by a multiple of that least common multiple will just subtract a multiple of the third factor in the exponent of the third term, still leaving a multiple of that factor. It follows that we could have divided each of the numbers by an appropriate power of $p$ to begin with, leaving a power of $p$ in only one of the three. Doing this for all primes, we can reduce all solutions to coprime solutions.
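-Incidentally, the construction for $x^9+y^{10}=z^{11}$ described in the question is easy to verify concretely; $N=450$ satisfies the three congruences (a Python sketch):
-
-N = 450
-assert N % 9 == 0 and N % 10 == 0 and (N + 1) % 11 == 0
-x, y, z = 2**(N // 9), 2**(N // 10), 2**((N + 1) // 11)
-assert x**9 + y**10 == z**11  # 2^450 + 2^450 = 2^451
-print(x, y, z)  # 2^50, 2^45, 2^41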
<|endoftext|>
-TITLE: $\operatorname{tr}(AABABB) = \operatorname{tr}(AABBAB)$ for $2×2$ matrices
-QUESTION [29 upvotes]: Similar to a previous question here, I wonder if cyclic permutations are the only relations amongst traces of (non-commutative) monomials. Since the evaluations $\operatorname{tr}:k\langle x,y,\dots \rangle \to k$ take an infinite dimensional vector space to a one-dimensional vector space, there must be quite a few relations, but I wonder if any of them are on binomials other than the cyclic permutations.
-At any rate, for small dimensions, we probably get some extra relations.
-It appears that $\operatorname{tr}(AABABB−AABBAB) = 0$ for all $2×2$ matrices. Is this true? How does one prove it?
-
-REPLY [3 votes]: I recall reading long ago that such trace identities generally arise from those associated with the Cayley-Hamilton theorem (by multilinearizing the characteristic polynomial [2]). A quick web search on related keywords turned up the following paper [1]. In the introduction it is stated that "we prove that all trace identities of the full matrix algebra of order n over a field of characteristic zero are consequences of one corresponding to the Hamilton-Cayley theorem". There is also independent seminal work of Procesi, who obtains the trace identities via multilinear invariants of tensor products of vector spaces. No doubt much work has been done in the three decades since this seminal work appeared. A search on Razmyslov / Procesi and "trace identities" reveals much.
-[1] Razmyslov, Trace identities of full matrix algebras over a field of characteristic zero. Math. USSR Izv., 1974, 8 (4), 727-760.
-[2] Formanek, Polynomial identities and the Cayley-Hamilton Theorem. The Mathematical Intelligencer, Vol. 11, no. 1, 1989, 37-39.
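-The identity is also quick to test numerically on random matrices before looking for a proof (a Python sketch using numpy):
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-for _ in range(1000):
-    A = rng.standard_normal((2, 2))
-    B = rng.standard_normal((2, 2))
-    lhs = np.trace(A @ A @ B @ A @ B @ B)
-    rhs = np.trace(A @ A @ B @ B @ A @ B)
-    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
-print("tr(AABABB) == tr(AABBAB) on all random samples")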
<|endoftext|>
-TITLE: How is the column space of a matrix A orthogonal to its nullspace?
-QUESTION [18 upvotes]: How do you show that the column space of a matrix A is orthogonal to its nullspace?
-
-REPLY [6 votes]: The OP's question would have been well-phrased had he specified that the matrix $A$ is symmetric, i.e. $A=A^\top$, in which case ${\rm colspan}(A)={\rm rowspan}(A^\top)={\rm rowspan}(A)$. Now, consider the definition of ${\rm null}(A)$ as the space of all vectors $\mathbf{v}$ such that $A\mathbf{v}=\mathbf{0}$. Letting $\mathbf{a}_1,\ldots,\mathbf{a}_n$ be the rows (and columns) of $A$, matrix multiplication tells us that $\mathbf{a}_i\cdot\mathbf{v}=0$ for each $i=1,\ldots,n$. Thus any vector $\mathbf{v}\in{\rm null}(A)$ is orthogonal to ${\rm colspan}(A)$. It follows that ${\rm null}(A)\perp{\rm colspan}(A)$.$\square$<|endoftext|>
-TITLE: Understanding proof of Farkas Lemma
-QUESTION [6 upvotes]: I've attached an image of my book (Theorem 4.4.1 is at the bottom of the image). I need help understanding what this book is saying.
-In the first sentence on p.113:
-
-"If (I) holds, then the primal is feasible, and its optimal objective is obviously zero",
-
-They are talking about the scalar value resulting from taking the dot product of the zero vector and $x$, right? That's obviously zero. Because if they're talking about $x$ itself, then it makes no sense. Okay.
-Next sentence:
-
-"By applying the weak duality result (Theorem 4.4.1), we have that any dual feasible vector $u$ (that is, one which satisfies $A'u \ge 0$) must have $b'u \ge 0$."
-
-I don't understand this sentence. How is the weak duality result being applied? I can see that $Ax = b, x \ge 0$ matches up with $Ax \ge b, x \ge 0$, but I don't see where $b'u \ge 0$ comes from. I would think that the only thing you could conclude from Theorem 4.4.1 is that $b'u \le 0$ since p = 0 in that problem.
-Thanks in advance.
-
-REPLY [2 votes]: I'll try a very short answer to the question why a dual feasible vector $u$ must satisfy $b'\cdot u\geq 0$ when the primal LP has a feasible solution. Observe that for any $p\geq 0$, $pu$ is still dual feasible; as such, $b'\cdot u<0$ would make the dual objective unbounded from below, because $pb'\cdot u$ could be as low as desired. The dual objective can be unbounded from below only when the primal LP has no feasible solution (weak duality is enough to verify this).
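-A concrete way to see the two alternatives is to test both feasibility problems with a solver (a Python sketch using scipy.optimize.linprog; the data A, b are hypothetical):
-
-import numpy as np
-from scipy.optimize import linprog
-
-A = np.array([[1.0, 2.0], [3.0, 1.0]])
-b = np.array([1.0, -1.0])
-
-# Is Ax = b, x >= 0 feasible?
-primal = linprog(c=np.zeros(2), A_eq=A, b_eq=b, bounds=[(0, None)] * 2)
-print("primal feasible:", primal.status == 0)  # False for this data
-
-# Farkas alternative: minimize b'u subject to A'u >= 0, u free.
-# An infeasible primal corresponds to this problem being unbounded below
-# (status 3), i.e. a certificate u with A'u >= 0 and b'u < 0 exists.
-alt = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(2), bounds=[(None, None)] * 2)
-print("alternative unbounded:", alt.status == 3)  # True for this data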
<|endoftext|>
-TITLE: An expression that vanishes over every field
-QUESTION [13 upvotes]: In this question, Jack Schmidt asks to prove a certain identity for $2\times 2$ matrices A and B.
-In fact he asks to show that tr(AABABB−AABBAB) = 0. In an answer by user7406, he shows that 3 times this expression must be 0, solving the problem at least when the characteristic of the ground field isn't 3.
-In a comment by Mariano Suárez-Alvarez he tells us a computer calculation shows that it is in fact identically zero.
-This made me wonder whether or not this is a surprise. I think the proper way to state the problem is as follows:
-Let $\varphi \colon \mathbb Z\langle x_1,\dots x_n\rangle\to \mathbb Q [x_1,\dots,x_n]$ be the morphism from the free non-commutative ring over these variables to the polynomial ring. Let $\Phi \colon \mathbb Z\langle x_1,\dots,x_n\rangle \to k[x_1,\dots,x_n]$ be the corresponding morphism sending every element to 'itself' in the usual way.
-Here are my questions:
-
-Is it true that if $\varphi(x)=0$ then $\Phi(x) = 0$? My guess is it is, because one can construct $\psi$ such that $\Phi = \psi\circ \varphi$.
-Does this indeed settle the original problem for characteristic 3 from the solution by user7406, thereby bypassing von Neumann, or am I really missing something?
-
-[I first asked this question in a comment to the post by user7406, but I deleted that because I didn't want to hijack the other question.]
-
-REPLY [6 votes]: Since the image of $\varphi$ lies inside of $\mathbb{Z}[x_1,\ldots,x_n]$, you can replace $\mathbb{Q}[x_1,\ldots,x_n]$ with $\mathbb{Z}[x_1,\ldots,x_n]$ (since the image vanishes in the latter if and only if it vanishes in the former).
-Any map from $\mathbb{Z}\langle x_1,\ldots,x_n\rangle$ to a commutative ring factors uniquely through $\mathbb{Z}[x_1,\ldots,x_n]$ by the universal property of the polynomial ring.
-Explicitly: given any ring homomorphism $f\colon R\to S$, where $R$ and $S$ are commutative, and given any elements $s_1,\ldots,s_n\in S$, there exists a unique homomorphism $\overline{f}\colon R[x_1,\ldots,x_n]\to S$ such that $\overline{f}(r)=f(r)$ for all $r\in R$ and $\overline{f}(x_i) = s_i$ for $i=1,\ldots,n$.
-In particular, if $\Phi$ is any map from $\mathbb{Z}\langle x_1,\ldots,x_n\rangle$ to a commutative ring $S$, then $\Phi(x_i)$ determines a unique homomorphism $\psi\colon\mathbb{Z}[x_1,\ldots,x_n]\to S$ with $\psi(x_i)=\Phi(x_i)$ and $\psi(1) = \Phi(1)$. This map makes a commuting triangle with the quotient map $\varphi$. Hence we get $\Phi=\psi\circ\varphi$, as you suggest in point (1) (after replacing $\mathbb{Q}$ with $\mathbb{Z}$).
-Thus, if $\Phi$ is any map from $\mathbb{Z}\langle x_1,\ldots,x_n\rangle$ to a commutative ring $S$, then necessarily $\mathrm{ker}(\varphi)\subseteq\mathrm{ker}(\Phi)$, which answers your first question.
-(Or, you can think about the "multidegree" of a monomial in $\mathbb{Z}\langle x_1,\ldots,x_n\rangle$, which only counts the total exponent of each variable; an element of $\mathbb{Z}\langle x_1,\ldots,x_n\rangle$ lies in the kernel of $\varphi$ if and only if it is a sum of "homogeneous terms", i.e., same multidegree for all monomials, where the coefficients add up to $0$. Any such homogeneous term will also lie in the kernel of any $\Phi$, since the image is commutative.)
-But I'm not sure why you would need to go through this for your second question: $\mathrm{trace}(AABABB-AABBAB)$ can be written as a (commutative) polynomial in the $8$ variables that form the entries of $A$ and $B$, with integer coefficients; i.e., an element of $\mathbb{Z}[x_1,\ldots,x_n]$ (not of $\mathbb{Z}\langle A,B\rangle$, because we are looking at the trace, not the product itself). If $3$ times this expression is $0$ as an element of $\mathbb{Z}[x_1,\ldots,x_n]$, then this expression is itself $0$ simply because $\mathbb{Z}[x_1,\ldots,x_n]$ is an integral domain. And from there, you get that the corresponding expression is also identically zero over any field, because it is zero in the initial polynomial ring $\mathbb{Z}[x_1,\ldots,x_n]$.
-That is, the lack of commutativity between the matrices does not matter here, because the expression we are looking for, the trace, is computed in terms of the coefficients of the matrices via commutative (though complicated) operations.
-E.g., if you were talking about the trace of $AB-BBA$, you would have
-$$\begin{align*}
-A&=\left(\begin{array}{cc}
-a_{11} & a_{12}\\
-a_{21} & a_{22}
-\end{array}\right),\\
-B&=\left(\begin{array}{cc}
-b_{11} & b_{12}\\
-b_{21} & b_{22}
-\end{array}\right),\\
-\mathrm{trace}(AB - BBA) &= \mathrm{trace}(AB) - \mathrm{trace}(BBA)\\
-&= \Bigl(a_{11}b_{11} + a_{12}b_{21} + a_{21}b_{12}+a_{22}b_{22}\Bigr) \\
-&\qquad\mathop{-} \Bigl( (b_{11}^2+b_{12}b_{21})a_{11} + (b_{11}b_{12}+b_{12}b_{22})a_{21}\\
-&\qquad\quad\mathop{+} (b_{21}b_{11}+b_{22}b_{21})a_{12} + (b_{21}b_{12}+b_{22}^2)a_{22}\Bigr)
-\end{align*}$$
-This is a polynomial in $\mathbb{Z}[a_{11},\ldots,b_{22}]$, with commuting variables; not an element of $\mathbb{Z}\langle a_{11},\ldots,b_{22}\rangle$.
-Note that you cannot interpret the trace map as a ring homomorphism from $\mathbb{Z}\langle A,B\rangle$ to something, because the trace is not a ring homomorphism: the trace of the product is not the product of the traces.<|endoftext|>
-TITLE: How to show the uniqueness of splitting fields?
-QUESTION [7 upvotes]: When one defines the splitting field for an arbitrary collection of polynomials, how does one show the uniqueness of such a splitting field? (I'm guessing it is still unique.) The induction argument used for the case of a single polynomial obviously doesn't work. I'm not so keen on using transfinite induction, either. I thought of a direct limit approach, but since the isomorphism between splitting fields of a single polynomial is not canonical, it doesn't seem to work either.
-Thanks.
-
-REPLY [8 votes]: The following can be found in several places, e.g., Hungerford; it is a proof via Zorn's Lemma (which in a sense is a sort of "transfinite induction", so perhaps you won't like it either). It is Zorn's Lemma that takes care of ensuring that you can "pick" compatible isomorphisms on the single polynomials and then "glue them together" to get a single isomorphism for the splitting field of the entire set.
-(This is a standard way to use Zorn's Lemma: you know you can do things non-canonically "one step at a time", so you consider the set of all "(possibly only) partially completed things" and order it by "compatibility". Then apply Zorn's Lemma to get a maximal element, and since you know that you can go "one step further" if necessary, that means the maximal element must be your destination already.)
-Suppose $K$ and $L$ are field extensions of $F$, and $S$ is a set of nonconstant polynomials in $F[x]$ such that $K$ and $L$ are both splitting fields of $S$ over $F$ (that is, every polynomial in $S$ splits in each of $K$ and $L$, and each of $K$ and $L$ is generated over $F$ by the roots of polynomials in $S$). We want to show that there is an isomorphism between $K$ and $L$.
-Let $\mathscr{S}$ be the set of all triples $(E,N,\sigma)$, with $F\subseteq E\subseteq K$, $F\subseteq N\subseteq L$, and $\sigma\colon E\to N$ a field isomorphism that restricts to the identity on $F$.
-Place a partial order on $\mathscr{S}$ by saying that $(E,N,\tau)\preceq (E',N',\sigma)$ if and only if $E\subseteq E'$, $N\subseteq N'$, and $\sigma\bigm|_{E}=\tau$.
-We show that $\mathscr{S}$ satisfies the hypothesis of Zorn's Lemma: that is, it is a partially ordered set (trivial) such that every chain has an upper bound. Since $(F,F,\mathrm{id}_F)\in\mathscr{S}$, the empty chain has an upper bound. Now assume that $\mathscr{C}=\{(E_i,N_i,\sigma_i)\}_{i\in I}$ is a chain in $\mathscr{S}$. Let $E=\cup E_i$, $N=\cup N_i$, and define $\sigma\colon E\to N$ as follows: if $a\in E$, then there exists $i\in I$ such that $a\in E_i$; then define $\sigma(a) = \sigma_i(a)$.
-This is well-defined: if $a\in E_i$ and $a\in E_j$, $i\neq j$, then either $E_i\subseteq E_j$ or $E_j\subseteq E_i$, since $\mathscr{C}$ is a chain; in the former case, $\sigma_j|_{E_i} = \sigma_i$, so $\sigma_j(a)=\sigma_i(a)$; similarly in the latter case.
-Also, $\sigma$ is a field homomorphism: given $a,b\in E$, we know that $a\in E_i$ and $b\in E_j$ for some $i$ and some $j$; since $\mathscr{C}$ is a chain, either $E_i\subseteq E_j$ or $E_j\subseteq E_i$. Either way, we can find a single $k\in I$ such that $a$ and $b$ are both in $E_k$, and then $$\begin{align*}
-\sigma(a+b) &= \sigma_k(a+b) = \sigma_k(a)+\sigma_k(b) = \sigma(a)+\sigma(b)\\
-\sigma(ab) &=\sigma_k(ab) = \sigma_k(a)\sigma_k(b) = \sigma(a)\sigma(b),
-\end{align*}$$
-since $\sigma_k$ is a field homomorphism. The map is onto, for given any $b\in N$, there exists $i\in I$ with $b\in N_i$, and then $b\in \sigma_i(E_i)\subseteq \sigma(E)$. Thus, $\sigma$ is a field isomorphism. The definition also ensures that $\sigma$ restricts to the identity on $F$.
-Thus, $(E,N,\sigma)\in\mathscr{S}$, and by construction $(E_i,N_i,\sigma_i)\preceq (E,N,\sigma)$ for all $i\in I$. Thus, $(E,N,\sigma)$ is an upper bound for $\mathscr{C}$.
-Thus, $\mathscr{S}$ is a partially ordered set in which every chain has an upper bound. By Zorn's Lemma, it has maximal elements. Let $(\mathcal{K},\mathcal{L},\sigma)$ be a maximal element. We show that $\mathcal{K}=K$ and $\mathcal{L}=L$.
-Let $f(x)$ be any polynomial in $S$. Let $M$ be the splitting field of $f(x)$ over $\mathcal{K}$ contained in $K$, and let $N$ be the splitting field of $f(x)$ over $\mathcal{L}$ contained in $L$.
-From the result for a single polynomial, we know that the isomorphism $\sigma\colon\mathcal{K}\to\mathcal{L}$ can be extended to an isomorphism $\tau\colon M\to N$. In particular, $(M,N,\tau)\in\mathscr{S}$, and $(\mathcal{K},\mathcal{L},\sigma)\preceq(M,N,\tau)$. By maximality of $(\mathcal{K},\mathcal{L},\sigma)$, it follows that $(\mathcal{K},\mathcal{L},\sigma)=(M,N,\tau)$, so in particular, $f(x)$ splits in $\mathcal{K}$ and in $\mathcal{L}$. Thus, every polynomial in $S$ splits in $\mathcal{K}$, and hence $K=\mathcal{K}$; and every polynomial in $S$ splits in $\mathcal{L}$, hence $L=\mathcal{L}$.
-Therefore, $\sigma\colon K=\mathcal{K}\stackrel{\cong}{\to}\mathcal{L}=L$ is an isomorphism between $K$ and $L$.<|endoftext|>
-TITLE: small o(1) notation
-QUESTION [20 upvotes]: It's probably a very silly question, but I'm confused. Does o(1) simply mean $\lim_{n \to \infty} \frac{f(n)}{\epsilon}=0$ for some $n>N$?
-
-REPLY [17 votes]: This should be a comment to chazisop's answer; I don't have enough rep to make it.
-Chazisop, your quantifiers in 1 are the wrong way round; in fact there are two problems. Firstly, saying $\forall k>0 : f(n) \le k$ is simply equivalent to saying $f(n) \leq 0$. The right definition for o(1) is that $\forall k >0\ \exists N\ \forall n \geq N : |f(n)| \leq k$. Note that the $k$-quantifier appears at the start; this is non-negotiable! Secondly, notice the mod signs around $f(n)$. If you are only thinking of nonnegative functions (e.g. the running time of an algorithm) you can omit them, but not for arbitrary functions.
-To the OP, no, that's not what o(1) means. There are two problems with what you've written: firstly, what is $\epsilon$? Secondly, what are the $n$ and $N$ supposed to be (I don't mean the $n$ in your limit)? You need to think about this statement more carefully.
-The definition of $f(n)$ being $o(1)$ is that $\lim _{n \to \infty} f(n) = 0$. That means that for all $\epsilon>0$ there exists $N_\epsilon$, depending on $\epsilon$, such that for all $n \geq N_\epsilon$ we have $|f(n)| \leq \epsilon$. I guess this definition is probably where your $n>N$ comes from.<|endoftext|>
-TITLE: What is a Structured Polyhedron?
-QUESTION [13 upvotes]: In my work on lattice point enumeration of polytopes, I stumbled upon the following sequence:
-\begin{eqnarray}
-1, 120, 579, 1600, 3405, 6216, 10255, 15744, 22905, 31960, 43131, ...
-\end{eqnarray}
-which counts the Structured great rhombicosidodecahedral numbers (A100145) by the formula
-\begin{eqnarray}
-a(n)=\tfrac{1}{6} (222 n^3-312 n^2+96 n).
-\end{eqnarray}
-Such numbers fall into the category of figurate numbers, which count the number of points in a sequence of similar discrete geometric shapes. For example, the triangular and square numbers bear their names because they count the dots arranged in a sequence of triangular $(1,3,6,10,...)$ and square $(1,4,9,16,...)$ configurations. One generalizes these to higher dimensional regular polyhedral numbers like tetrahedral (A000292) or dodecahedral (A006566) numbers, for instance. These numbers are always enumerated by $\mathbb{Q}$-polynomials of degree $n$, where $n$ is the dimension of the polyhedron.
-For the sequence above, the author gives the following description:
-Structured polyhedral numbers are a type of figurate polyhedral numbers. Structurate polyhedra differ from regular figurate polyhedra by having appropriate figurate polygonal faces at any iteration, i.e. a regular truncated octahedron, n=2, would have 7 points on its hexagonal faces, whereas a structured truncated octahedron, n=2, would have 6 points - just as a hexagon, n=2, would have. Like regular figurate polygons, structured polyhedra seem to originate at a vertex, and since many polyhedra have different vertices (a pentagonal diamond has 2 "polar" vertices with 5 adjacent vertices and 5 "equatorial" vertices with 4 adjacent vertices), these polyhedra have multiple structured number sequences, dependent on the "vertex structures" which are each equal to the one vertex itself plus its adjacent vertices.
For polystructurate polyhedra the notation structured polyhedra (vertex structure x) is used to differentiate between alternate vertices, where VS stands for vertex structure.
-At first read, this doesn't make any sense. I thought the regular truncated octahedron had 6 vertices at each hexagonal face, not 7 as the author claims. (I know that this sequence isn't bogus because I can generate it in a completely different context, that of computing the cohomology and geometric genera in a singularity theory problem.)
-Can anyone make sense of this and help me understand the difference between regular and structured polyhedra?
-Update (4-1-11): I emailed the author of the entry on OEIS and never heard back from him. I think the responsibility now lies with us to figure this out.
-
-REPLY [4 votes]: The most important distinctions for understanding what the author meant in his description by "structured" figurate polyhedra as opposed to "regular" are between vertices and points, and between "from an edge or a vertex" and "centered".
-You wrote:
-"I thought the regular truncated octahedron had 6 vertices at each hexagonal face, not 7 as the author claims."
-Precisely one of the simplest examples in 2D is the hexagonal numbers. A hexagon has 6 vertices, but when you produce a figurate number diagram of it, it has:
-1, 6, 12, 18, ... (A008458) points if you fill only the edges,
-1, 6, 15, 28, ... (A000384) from Greek tradition if you grow hexagons as embracing smaller ones starting from a vertex (see the illustration of classical hexagonal numbers), or
-1, 7, 19, 37, 61, ... (A003215) points or circles or dots if you try to fill uniformly the greater hexagon by centered smaller ones. You could call this arrangement "regular" because the surface of the polygon is uniformly covered by points.
-In his series of sequences in the OEIS, the author decided to use "structured" faces as opposed to "regular" (he should perhaps have said "centered" as in "centered hexagonal numbers"). There is no difference between "regular" and "structured" for triangular and square faces (because each growth step covers the new surface regularly), but there is for hexagonal ones (and there are a lot of troubles with pentagons).
-It explains the comment of the author that there can be several sequences for the same basic geometric shape in certain cases, depending on the arrangement of the growth vertices of reference on different faces.
-
-PS: I am one of the editors of the OEIS and I invite you to submit any additions, corrections, comments, links and references to any of the 43 sequences James Record added to the encyclopedia. We are particularly fond of alternate interpretations of sequences and links to the mathematical research literature.<|endoftext|>
-TITLE: Adjoint of a linear transformation in an infinite dimension inner product space
-QUESTION [14 upvotes]: We learned that if $V$ is a finite-dimensional inner product space then for every linear transformation $T:V\to V$, there exists a unique linear transformation $T^*:V\to V$ such that $\forall u, v \in V: (Tv, u)=(v, T^*u)$.
-The construction of $T^*$ used the fact that $V$ is finite-dimensional and therefore has an orthonormal basis, which is not the case had it been infinite.
-Are there infinite-dimensional inner product spaces such that not all linear transformations have an adjoint? Or is it somehow possible to extend this definition to infinite spaces as well?
-
-REPLY [4 votes]: Let's look at an example.
Take the vector space $\mathbb{R}[x]$ of all real polynomials in one variable and define an inner product as
-$$\left( \sum_{j=0}^n a_jx^j, \sum_{k=0}^m b_kx^k \right)=\sum_{h=0}^{\min(n,m)}a_hb_h.$$
-Now let $T$ be the linear operator such that
-$$T\sum_{j=0}^n a_jx^j=\left(\sum_{j=0}^n a_j\right)+\left(\sum_{j=1}^na_j\right)x+\ldots + (a_{n-1}+a_n)x^{n-1}+a_nx^n.$$
-Think of $T$ as the operator represented by the infinite matrix below:
-$$\begin{bmatrix}
-1 & 1 & 1 & \ldots \\
-0 & 1 & 1 & \ldots \\
-0 & 0 & 1 & \ldots \\
-\vdots & \vdots & \vdots & \ddots \\
-\end{bmatrix}$$
-Should $T$ have an adjoint with respect to the inner product $(,)$, it should be somehow associated to this infinite matrix:
-$$\begin{bmatrix}
-1 & 0 & 0 & \ldots \\
-1 & 1 & 0 & \ldots \\
-1 & 1 & 1 & \ldots \\
-\vdots & \vdots & \vdots & \ddots \\
-\end{bmatrix}$$
-but this makes no sense in $\mathbb{R}[x]$. Formally, let's suppose such an adjoint operator $T^\star$ exists. Fix $k \in \mathbb{N}$: who is $T^\star x^k$? For all $n=0, 1,\ldots$, we should have
-$$(x^n, T^\star x^k)=(T x^n, x^k)=(1+\ldots+ x^n, x^k)=\begin{cases}1 & n \ge k \\ 0 & n < k.\end{cases}$$
-In other words, $T^\star x^k$ would have to have coefficient $1$ on $x^n$ for every $n\ge k$, that is, infinitely many nonzero coefficients, which no polynomial has. So $T^\star$ does not exist, and $T$ has no adjoint on $\mathbb{R}[x]$.<|endoftext|>
-TITLE: When should I use $=$ and $\equiv$?
-QUESTION [27 upvotes]: In what context should I use $=$ and $\equiv$?
-What is the precise difference?
-Thanks!
-(I wasn't sure what to tag this with, any suggestions?)
-
-REPLY [19 votes]: The $\equiv$ symbol originally meant "is identically equal to", and as that name implies it is used with identities. It is actually stating that the equality holds for all instantiations of the free variables. For example $\sin\left(\theta+\frac{\pi}{2}\right)=\cos{\theta}$ is true for any value of $\theta$, therefore $\sin\left(\theta+\frac{\pi}{2}\right)\equiv\cos{\theta}$.
-People often got that confused with "equal by definition" or "defined to be". There are separate symbols for those meanings, including $\triangleq$ and ≝ (Unicode 0x225d). The $\equiv$ symbol has been used for this purpose so often that this is now sometimes considered a correct usage.
-The $\equiv$ symbol was also repurposed to mean a congruence relationship, like several of the other answers have discussed.<|endoftext|>
-TITLE: Motivation and use for category theory?
-QUESTION [36 upvotes]: From reading the answers to different questions on category theory, it seems that category theory is useful as a framework for thinking about mathematics. Also, from the book Algebra by Saunders Mac Lane, in the preface for the first edition there is a passage,
-"... It is now clear that we study not just a single algebraic system (a group or a ring) by itself, but that we also study the homomorphisms of these systems; that is, the functions mapping one system into another so as to preserve the operations... All the systems of a given type together with the homomorphisms between them are said to form a "category" consisting of "objects" and "morphisms"..." This book proposes to present algebra for undergraduates on the basis of these new insights."
-
-This quote alone probably means that category theory is worth studying from a mathematician's perspective. But I would like to see examples of it being used to solve more concrete problems. I have read in Miles Reid's Commutative Algebra that Grothendieck used category theory successfully to solve problems in algebraic geometry while at the same time saying that category theory is one of the most sterile intellectual pursuits for most students.
If this is true then the example will be hard to understand; perhaps someone here could provide a non-trivial yet accessible example? Edit: Personally, I have read somewhere (probably the n-Category Café) that it's possible to construct groups entirely from categories and functors, and I really liked this idea because it illustrates how to think in terms of categories really well. It would be nice to see more examples like these.
-
-P.S. for background, I am in the process of reading most of Saunders Mac Lane's books. I have also read, and am in the process of reading, a few of Emil Artin's books on algebra as well (e.g. Galois theory, Algebra and Galois theory and Geometric Algebra). Also if this post is more appropriate for Community Wiki then please change it for me.
-
-REPLY [13 votes]: I had the idea when I was in grad school that category theory was a convenient language in which to state things, but didn't have any deep results. However, I now believe nothing could be further from the truth! Category theory has now shown itself to be incredibly useful in topology for very concrete problems, such as finding knot invariants and even 3- and 4-manifold invariants. Reshetikhin-Turaev invariants, the Kontsevich integral, and Khovanov homology are powerful link invariants that all arise via a categorical approach.<|endoftext|>
-TITLE: How do I convert the distance between two lat/long points into feet/meters?
-QUESTION [15 upvotes]: I've been reading around the net and everything I find is really confusing. I just need a formula that will get me 95% there. I have a tool that outputs the distance between two lat/long points.
-Point 1: 32.773178, -79.920094
-Point 2: 32.781666666666666, -79.916666666666671
-Distance: 0.0091526545913161624
-
-I would like a fairly simple formula for converting the distance to feet and meters.
-Thanks!
-
-REPLY [4 votes]: Since the question is tagged Mathematica it might be good to provide the Mathematica function, which is (as of version 7):
-GeoDistance[{32.773178, -79.920094},
- {32.781666666666666,-79.916666666666671}
-]
-
-==> 994.652
-
-or, if you want to specify the reference ellipsoid:
-GeoDistance[
- GeoPosition[{32.773178, -79.920094, 0}, "ITRF00"],
- GeoPosition[{32.781666666666666, -79.916666666666671 , 0}, "ITRF00"]
-]<|endoftext|>
-TITLE: Importance of Poincaré recurrence theorem? Any example?
-QUESTION [5 upvotes]: Recently I have been learning ergodic theory and reading several books about it.
-Usually the Poincaré recurrence theorem is stated and proved before ergodicity and the ergodic theorems. But the ergodic theorem does not rely on the result of the Poincaré recurrence theorem. So I am wondering why the authors always mention the Poincaré recurrence theorem just prior to the ergodic theorems.
-I want to see some examples which illustrate the importance of the Poincaré recurrence theorem. Can any good example be suggested to me?
-Books I am reading: Silva, Invitation to ergodic theory. Walters, Introduction to ergodic theory. Parry, Topics in ergodic theory.
-A few days ago I put this question on MathOverflow. I now realize that it would also be appropriate to ask here since my question is quite general.
-
-REPLY [2 votes]: A very simple reason exists for introducing the "Poincaré recurrence theorem just prior to ergodic theorems": if you had a transformation that is not recurrent, then it cannot preserve a finite measure, so you do not have an invariant measure and therefore do not have an ergodic measure. This is just an example.
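-To see recurrence concretely, here is a minimal Python sketch (an illustration only; the rotation number, starting point and tolerance below are arbitrary choices): the rotation $x \mapsto x+\alpha \pmod 1$ preserves Lebesgue measure on $[0,1)$, so the Poincaré recurrence theorem guarantees that almost every orbit returns to any neighborhood of its starting point infinitely often.
-
-import math
-
-# Rotation by an irrational angle preserves Lebesgue measure on [0,1),
-# so Poincare recurrence applies: the orbit of x0 keeps re-entering
-# the eps-neighborhood of x0.
-alpha = math.sqrt(2) - 1   # an (arbitrary) irrational rotation number
-x0 = 0.2                   # an arbitrary starting point
-eps = 1e-3
-
-x, return_times = x0, []
-for n in range(1, 200000):
-    x = (x + alpha) % 1.0
-    if abs(x - x0) < eps:
-        return_times.append(n)
-print(return_times[:5])    # several return times show up, as predicted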
-I'm sorry if my English is wrong, but I'm not a native speaker.<|endoftext|>
-TITLE: Why is the rank of the Picard group of a K3 surface bounded above by 22?
-QUESTION [17 upvotes]: I understand that, over $\mathbb{C}$, the rank of the Picard group of a K3 surface $X$ is bounded above by $20$ because we can use the exponential sheaf sequence:
-$0 \to 2\pi i \mathbb{Z} \to \mathcal{O}_X \to \mathcal{O}_X^\times \to 0$,
-and because $H^1(X,\mathcal{O}_X)$ is trivial this gives an injective homomorphism $H^1(X,\mathcal{O}_X^\times) \to H^2(X,\mathbb{Z})$, and $H^2(X,\mathbb{Z})$ has rank $22$. Then, by the Lefschetz Theorem on $(1,1)$-classes, this actually embeds into $H^{1,1}(X)$, which has rank $20$ (all of these follow straight away from the definition of a K3 surface).
-I know that over finite fields for instance we don't have such arguments, and the rank can be $22$.
-Question: What's the proper way of stating this argument in good generality, i.e. without assuming anything about the base field? Presumably this would involve talking about algebraic cycles. Also, why can't the rank be $21$?
-
-REPLY [18 votes]: I will consider $X$ over an algebraically closed field $k$, and let $\ell$ be prime to the characteristic of $k$. The Kummer sequence
-$$1 \to \mu_{\ell^n} \to \mathcal O_X^{\times} \buildrel \ell^n \over \to \mathcal O_X^{\times} \to 1$$ on the etale site of $X$ induces
-$$Pic(X)/\ell^n \to H^2(X,\mu_{\ell^n}).$$
-Passing to the inverse limit over $n$ gives an injection
-$$\bigl(Pic(X)/Pic^0(X)\bigr) \otimes_{\mathbf Z}\mathbb Z_{\ell} \hookrightarrow H^2(X,\mathbb Z_{\ell})(1).$$
-(Here $Pic^0(X)$ denotes the connected part of $Pic(X)$ --- which is trivial in the case of a $K3$, although we don't need this, and $(1)$ denotes a Tate twist, which is important if $X$ is base-changed from a subfield and you want to consider Galois actions, but not otherwise.)
-So the bound on the Picard rank of $X$, i.e. on the rank of $Pic(X)/Pic^0(X)$, comes from a knowledge of the dimension of the etale cohomology $H^2(X,\mathbb Z_{\ell}).$ This has dimension 22 in the case of a K3, hence the desired bound (for any field $k$). There is no better bound achievable without Hodge theoretic arguments, which aren't available in characteristic $p$.
-It is helpful to consider the situation with $K3$s, and the difference between the char. $0$ and char. $p$ situations, by analogy with the case of endomorphisms of elliptic curves.
-Indeed, the same argument as above, applied with $X$ taken to be the product of an elliptic curve $E$ with itself, will show that the Picard rank of $E\times E$ is at most $6$, and hence that the rank of $End(E)$ is at most $4$ --- the divisors on $E\times E$ modulo algebraic equivalence come from $E\times O$ and $O\times E$, which always contribute a rank of $2$, and then from the graphs of endomorphisms, which give the remaining rank, which hence is at most $4$. If $E$ is supersingular in characteristic $p$ then in fact $End(E)$ is of rank $4$. On the other hand, in characteristic $0$, Hodge theoretic arguments (or arguments with the representation $E = \mathbb C/\Lambda$, which are concrete analogues of the Hodge theoretic arguments) show that the Picard rank of $E\times E$ is bounded by $4$, and hence that the rank of $End(E)$ is at most $2$.<|endoftext|>
-TITLE: How to find closed form for a partial infinite product?
-QUESTION [5 upvotes]: I ran across this infinite product:
-$$\lim_{n\to\infty}\prod_{k=2}^n\left(1-\frac1{\binom{k+1}{2}}\right)$$
-I easily found that it converges to 1/3. Using my calculator, I found that
-$$1-\frac1{\binom{k+1}{2}}=\frac{(k-1)(k+2)}{k(k+1)}$$
-Then, here is my question:
-$$\prod_{k=2}^n\frac{(k-1)(k+2)}{k(k+1)}=\frac{n+2}{3n}$$
-This is what my calculator gave me. How did it arrive at this? That is, how could I do this by hand if I wanted to? I tried writing out some terms and even the (n+1)st term, made cancellations, but it did not work out. I feel rather obtuse. How does one find a closed form for a partial infinite product like this?
-$$\frac23\cdot \frac56\cdot \frac9{10}\cdots \frac{(n-1)(n+2)}{n(n+1)}\cdot \frac{n(n+3)}{(n+1)(n+2)}$$
-Making the cancellations leaves $\frac{(n-1)(n+3)}{(n+1)^2}$, not $\frac{n+2}{3n}=\frac2{3n}+\frac13$.
-This is why it converges to 1/3. It is easy to see the limit. That is not my concern.
-It is: how does one arrive at the closed form of $\frac{n+2}{3n}$ for this 'finite' product?
-I am overlooking something obvious. I just know it.
-Thank you
-
-REPLY [5 votes]: $\binom{k+1}{2}=\frac{(k+1)k}{2}$, so $1-\frac{1}{\binom{k+1}{2}}=\frac{k(k+1)-2}{k(k+1)}=\frac{(k-1)(k+2)}{k(k+1)}$. Then $$\prod_{k=2}^{n}\frac{(k-1)(k+2)}{k(k+1)}=\frac{2(n-1)!(n+2)!}{6n!(n+1)!}=\frac{n+2}{3n}$$ where the 2 comes in because the $(n+1)!$ in the denominator really starts at 3, and the 6 comes in because the $(n+2)!$ really starts at 4. Then for the last equality we just recognize which terms don't cancel in the factorials.
-Often when working on problems like this, it is not a good idea to reduce fractions. If your partial result had shown $\frac{1 \cdot 4}{2 \cdot 3}$ instead of $\frac{2}{3}$ and so on, it would have been easier to see the pattern.<|endoftext|>
-TITLE: Uniform convergence problem
-QUESTION [7 upvotes]: I encountered this problem while studying for an analysis exam. Here is a related question I asked some days ago.
-The problem is as follows: Suppose $a_n$ is a decreasing sequence of positive real numbers and that $$\sum_{n = 0}^{\infty}{a_n \sin{(nx)}}$$ converges uniformly on $\mathbb{R}$; show that $$\lim_{n \to \infty}{(n a_n)} = 0.$$
-Any tip or solution is welcome, and also avoid using Fourier series, because they haven't been introduced in the book, so it can be solved without using them.
-
-REPLY [10 votes]: $\sum_{i=[(k+1)/2]}^k a_i \sin(ix)$ goes uniformly to 0 as $k\to\infty$. Set $x=\pi/(2k)$. Then all $a_i$'s are $\geq a_k$, all the sines are $\geq1/\sqrt{2}$, hence the sum is $\geq (k-1)/2\times a_k/\sqrt{2}$. Since this goes to 0, so does $k a_k$.<|endoftext|>
-TITLE: Find the spectrum of the linear operator $T: \ell^2 \to \ell^2$ defined by $Tx=(\theta x_{n-1} +(1-\theta)x_{n+1})_{n\in \mathbb{Z}}$
-QUESTION [17 upvotes]: Let $\ell^2 =\ell^2(\mathbb{Z})$. Choose $\theta \in ]0,1[$ and set:
-$$Tx=(\theta x_{n-1} +(1-\theta)x_{n+1})_{n\in \mathbb{Z}}$$
-for each $x=(x_n)_{n\in \mathbb{Z}}\in \ell^2$ (thus $T$ is a convex combination of the right and left shift operators).
-It is easy to prove that, for every $\theta$, $T$ is a bounded linear operator of $\ell^2$ into itself, that $\lVert T\rVert =1$ and that $T$ is selfadjoint iff $\theta =\frac{1}{2}$.
-Moreover $T$ is not compact: in fact, if $e^m:=(\delta_n^m)$ (so $e^m$ is a vector of the canonical basis of $\ell^2$), one has:
-$$|Te^m -Te^p|^2=\begin{cases} 0 &\text{, if } p=m \\ \theta^2 +(1-\theta)^2+1 &\text{, if } m=p+2 \text{ or } p=m+2 \\ 2\theta^2+2(1-\theta)^2 &\text{, otherwise} \end{cases} \; ,$$
-thus $|Te^m-Te^p|^2> \theta^2+(1-\theta)^2>0$ for $m\neq p$; therefore the sequence $\{ Te^m\}_{m\in \mathbb{N}}$ does not contain any Cauchy subsequence.
-The problem is:
-
-I am not able to find the spectrum of $T$.
-
-About the eigenvalues, the only thing I know for sure is that $1$ is not in the point spectrum of $T$ for any value of $\theta$: in fact if $1$ were in the point spectrum $\sigma_P(T)$, then the eigenvectors would satisfy the linear recurrence:
-$$x_n=\theta x_{n-1}+(1-\theta) x_{n+1} \; ,$$
-hence they have to be sequences of the type:
-$$x_n=A \left( \frac{\theta}{1-\theta}\right)^n +B$$
-($A,B$ suitable constants); but a sequence like this doesn't belong to $\ell^2$ except in the trivial case $A=B=0$, which however doesn't give a valid eigenvector. Therefore $1\notin \sigma_P(T)$.
-
-But now, what about other eigenvalues? And what about the residual and continuous spectra of $T$?
-
-Any hint is welcome.
-
-REPLY [6 votes]: To determine the spectrum of $T$, let us first determine that of the right shift $\tau$.
-Since $||\tau||=1$, $\mathrm{Sp}(\tau) \subset \bar{B}(0,1)$.
-But the same goes for the left shift $\tau^{-1}$, so $\mathrm{Sp}(\tau) \subset C(0,1)$.
-It is actually equal to $C(0,1)$: $(\ldots,0,1,\lambda,\lambda^2,\ldots,\lambda^n,0,\ldots)$ is an "almost eigenvector".
-For every $c \in \mathbb{C}^{\times}$, $(c \theta \mathrm{Id} - \tau)(c (1-\theta) \mathrm{Id} - \tau^{-1}) = (1+c^2 \theta (1-\theta)) \mathrm{Id} - cT$, so $f(c)=\frac{1+c^2 \theta (1-\theta)}{c} \in \mathrm{Sp} (T)$ iff $c \theta \in \mathrm{Sp}(\tau)$ or $c (1- \theta) \in \mathrm{Sp}(\tau^{-1})$ iff $|c|=\theta^{-1}$ or $(1-\theta)^{-1}$.
-Now $f(\mathbb{C}^{\times})=\mathbb{C}$, and $f(\theta^{-1} (1-\theta)^{-1} c^{-1})=f(c)$, so $\mathrm{Sp}(T)= \left\{ f(\theta^{-1} e^{i \alpha}) \right\} = \left\{ \cos \alpha + i (1-2 \theta) \sin \alpha \right\}$, which is an ellipse (flat when $\theta=1/2$).
-EDIT: It is easy to check that for all $\lambda$, $\lambda \mathrm{Id} - \tau$ is injective and has dense range, hence the same is true for $T$.<|endoftext|>
-TITLE: Proving $\frac{1}{2}(n+1)<\frac{n^{n+\frac{1}{2}}}{e^{n-1}}$ without induction
-QUESTION [5 upvotes]: I want to show that $\displaystyle\frac{1}{2}(n+1)<\frac{n^{n+\frac{1}{2}}}{e^{n-1}}$. But except induction, I do not know how I could prove this?
-
-REPLY [3 votes]: If $n\ge3$, then $\frac12(n+1)\le n$ and $e^{n-1}<n^{n-\frac{1}{2}}$ (since $(n-\frac12)\ln n\ge n-\frac12>n-1$ when $\ln n\ge 1$), so $\frac12(n+1)<\frac{n^{n+\frac{1}{2}}}{e^{n-1}}$. For $n=2$ the inequality is a direct check, and at $n=1$ the two sides are equal.<|endoftext|>
-TITLE: Arbitrary intersection of closed, connected subsets of a compact space connected?
-QUESTION [30 upvotes]: Let $(B_i)_{i\in I}$ be an indexed family of closed, connected sets in a compact space $X$. Suppose $I$ is ordered, such that $i < j \implies B_i \supset B_j$.
-Is $B = \bigcap_i B_i$ necessarily connected?
-I can prove it, if I assume $X$ to be Hausdorff as well: If $B$ is not connected, then there are two disjoint, closed, nonempty sets $C$, $D$ in $B$, such that $C \cup D = B$. Now these sets are also closed in $X$, hence by normality there exist open disjoint neighborhoods $U$, $V$ of $C$ and $D$, respectively.
-Then for all $i$: $B_i \cap U^c \cap V^c \ne \emptyset$, since $B$ is contained in $B_i$ and $B_i$ is connected.
Thus we must also have
-$$ B \cap U^c \cap V^c = \bigcap_i B_i \cap U^c \cap V^c \ne \emptyset $$
-by compactness and the fact that the $B_i$ satisfy the finite intersection property. This is a contradiction to the choice of $U$ and $V$.
-I can neither see a counterexample for the general case, nor a proof. Any hints would be greatly appreciated!
-Thanks,
-S. L.
-
-REPLY [15 votes]: I think the following is a counterexample: take $Y=[-1,1]\times\{a,b\}$, where $[-1,1]$ has the standard topology, and $\{a,b\}$ the discrete topology, and let $X=Y/\sim$, where $y\sim y'$ if and only if there exists $x\in[-1,1]\setminus\{0\}$ such that $y=(x,a)$ and $y'=(x,b)$, or $y=(x,b)$ and $y'=(x,a)$. That is, identify all points except $0$. Give $X$ the quotient topology.
-This is the interval $[-1,1]$ with "a doubled origin", a common proving ground because the space is $T_1$ but not $T_2$, but any two points other than the doubled origins can be separated by open neighborhoods. (So, in a sense, it is "almost" Hausdorff; the Hausdorff property only fails for one choice of points, and there are lots of other points around.)
-Since $Y$ is compact and the quotient map is continuous and onto, $X$ is compact.
-For every positive integer $n$, let $\mathcal{B}_n\subseteq Y$ be the set $[-\frac{1}{n},\frac{1}{n}]\times\{a,b\}$, and let $B_n$ be the image of $\mathcal{B}_n$ in $X$; that is, $B_n$ is the interval from $-\frac{1}{n}$ to $\frac{1}{n}$, including both origins.
-$B_n$ is closed, since $X-B_n = [-1,-\frac{1}{n})\cup(\frac{1}{n},1]$ is a union of two open sets. It is also connected, because $B_n$ is a union of two connected subsets (the two copies of the interval $[-\frac{1}{n},\frac{1}{n}]$ obtained by removing one of the two $0$s) and the two subsets intersect.
-What is $\cap_{n=1}^{\infty}B_n$? It's a set whose only two elements are the doubled origin points. But this subset of $X$ is not connected, because $X$ is $T_1$, so there exist open neighborhoods $U$ and $V$ such that $(0,a)\in U-V$ and $(0,b)\in V-U$. So $B\subseteq U\cup V$, $U\cap B\neq\emptyset \neq V\cap B$, and $B\cap U\cap V = \emptyset$.
-
-REPLY [6 votes]: The line with two origins should work as a counterexample. Take the intersection of nested closed intervals containing both origins. The intersection is the two origins, which is two points with the discrete topology, so is disconnected.
-Edit: If you want the ambient space to be compact, let it be a closed, bounded interval containing the origins. (Thanks to Arturo for pointing this out.)<|endoftext|>
-TITLE: Tall fraction puzzle
-QUESTION [46 upvotes]: I was given this problem 30 years ago by a coworker, posted it 15 years ago to rec.puzzles, and got a solution from Barry Wolk, but have never seen it again. Consider the series:
-$$1, \frac{1}{2},\frac{\displaystyle\frac{1}{2}}{\displaystyle\frac{3}{4}},\frac{\displaystyle\frac{\displaystyle\frac{1}{2}}{\displaystyle\frac{3}{4}}}{\displaystyle\frac{\displaystyle\frac{5}{6}}{\displaystyle\frac{7}{8}}},\cdots$$
-Each fraction keeps its large bars while being put atop a similar structure.
-This can also be represented as $$\frac{1\cdot 4 \cdot 6 \cdot 7 \cdots}{2 \cdot 3 \cdot 5 \cdot 8 \cdots}$$ terminating at $2^n$ for some $n$, where it is much closer to the limit than elsewhere.
-The challenge:
-
-Find the limit, not too hard by experiment (see the sketch below)
-In the last expression, find a simple, nonrecursive expression to say whether $n$ is in the numerator or denominator
-Prove the limit is correct; this is the hard one.
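-
-A quick numerical experiment (a minimal Python sketch; an illustration built only from the construction above): each new level of the tower divides the previous structure by a copy of the same structure on the next block of integers, so all the exponent signs in the new block are flipped.
-
-signs = [1]                   # exponents e_k of k in the product; level 0 is "1"
-for level in range(1, 15):
-    signs = signs + [-s for s in signs]   # dividing flips the new block's signs
-    value = 1.0
-    for k, e in enumerate(signs, start=1):
-        value = value * k if e == 1 else value / k
-    print(2 ** level, value)
-
-The printed values settle quickly near $0.70710\ldots=\frac{\sqrt{2}}{2}$, in agreement with the answer below.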
-
-REPLY [29 votes]: This problem (E 2692) was proposed by D. Woods in Americ. Math. Monthly 85, No. 1, p. 48, in 1978, and a solution by E. Robbins was published in Americ. Math. Monthly 86, No. 5, p. 394f, in 1979. A solution from 1987 by Jean-Paul Allouche is given in Proposition 5 of Jean-Paul Allouche and Jeffrey Shallit's paper The ubiquitous Prouhet-Thue-Morse sequence (or here, slides 24-28).
-In 3., apart from a sketch of Allouche and Shallit's proof of Proposition 5, I give my interpretation of why the limit can be expressed as the infinite product $\prod_{m=0}^{\infty }\left( \frac{2m+1}{2m+2}\right)^{(-1)^{t_{m}}}$, where $\left( t_{m}\right) _{m\geq 0}$ is the Prouhet-Thue-Morse sequence. This product is the starting point of their proof.
-
-The first few terms of this sequence are
-$$\begin{equation*}
-\left( f_{n}\right) _{n\geq 0}=\left( 1,\frac{1}{2},\frac{2}{3},\frac{7}{10},\frac{286}{405},\frac{144\,305}{204\,102},\frac{276\,620\,298\,878}{391\,202\,754\,597},\ldots \right)
-\end{equation*}$$
-These numerical values suggest that $\left( f_{n}^{2}\right) _{n\geq 0}$ converges relatively fast to $\frac{1}{2}$, and thus $f_{n}$ to $\frac{\sqrt{2}}{2}$:
-$$\begin{equation*}
-\left( f_{n}^{2}\right) _{n\geq 0}=\left(1,0.25,0.444\,44,0.49,0.498\,68,0.499\,88,0.499\,99,\ldots \right)
-\end{equation*}$$
-The OEIS page of the Prouhet-Thue-Morse sequence (A010060) gives the closed form formula (already in Eelvex's answer) by Benoit Cloitre (benoit7848c(AT)orange.fr), May 09 2004.
-The term $f_{n}$ can be written as the product of the integers $1\leq k\leq 2^{n}$ raised to exponents $e_{k}\in \left\{ -1,+1\right\} $. For instance,
-$$\begin{eqnarray*}
-f_{3} &=&\frac{\ \frac{1}{2}/\frac{3}{4}\ }{\frac{5}{6}/\frac{7}{8}}=\frac{1}{2}\left( \frac{3}{4}\right) ^{-1}\left( \frac{5}{6}\left( \frac{7}{8}\right)^{-1}\right) ^{-1}=1\cdot 2^{-1}\cdot 3^{-1}\cdot 4\cdot 5^{-1}\cdot 6\cdot 7\cdot 8^{-1} \\
-&=&\prod_{k=1}^{2^{3}}k^{e_{k}}=\prod_{k=1}^{2^{3}}k^{(-1)^{t_{k-1}}}\text{,}
-\end{eqnarray*}$$
-where $\left( t_{k}\right) _{k\geq 0}=\left( 0,1,1,0,1,0,0,1,\ldots \right) $ is the binary sequence known as the Prouhet-Thue-Morse sequence (A010060), which has several equivalent definitions. One that is related directly to the way the numbers $k$ exchange between numerators and denominators, in other words, to whether the exponent $e_{k}=(-1)^{t_{k-1}}$ is $+1$ or $-1$, is the following. Let $A_{k}$ be a sequence of strings of 0's and 1's of length $2^{k}$, with $A_{0}=0$. For $k\geq 0$, $A_{k+1}=A_{k}\overline{A}_{k}$, where $\overline{A}_{k}$ is obtained from $A_{k}$ by interchanging 0's and 1's. Then $\left( t_{k}\right) _{k\geq 0}$ is the infinite sequence generated by $A_{k}$ as $k\rightarrow \infty $. It has the following property: $t_{2m}=t_{m}$ and $t_{2m+1}=1-t_{m}$ for $m\geq 0$. Thus $t_{2m}+t_{2m+1}=1$, and since $t_{k}\in \left\{ 0,1\right\} $, one of $t_{2m}$, $t_{2m+1}$ is $0$ and the other is $1$. In terms of the exponents we have $e_{2m+1}=(-1)^{t_{2m}}=(-1)^{t_{m}}$ and $e_{2m+2}\ e_{2m+1}=(-1)^{t_{2m}+t_{2m+1}}=-1$. This means that one of the integers $2m+1$ and $2m+2$ is in the numerator and the other in the denominator, which is in accordance with the way the tall fraction is constructed from the fractions $\frac{1}{2},\frac{3}{4},\frac{5}{6},\ldots $. Similarly, we have in general [edit: when $k$ runs from $1$ to $2^{n}$, $m$ varies from $0$ to $2^{n-1}-1$.]
-$$\begin{eqnarray*}
-f_{n} &=&\prod_{k=1}^{2^{n}}k^{e_{k}}=\prod_{k=1}^{2^{n}}k^{(-1)^{t_{k-1}}} \\
-&=&\prod_{m=0}^{2^{n-1}-1}\left( 2m+1\right) ^{(-1)^{t_{2m}}}\left( 2m+2\right)^{(-1)^{t_{2m+1}}}=\prod_{m=0}^{2^{n-1}-1}\left( \frac{2m+1}{2m+2}\right)^{(-1)^{t_{m}}}
-\end{eqnarray*}$$
-and we want to evaluate the limit of the sequence $f_{n}$:
-$$\begin{equation*}
-\underset{n\rightarrow \infty }{\lim }f_{n}=\prod_{m=0}^{\infty }\left(\frac{2m+1}{2m+2}\right) ^{(-1)^{t_{m}}}.\qquad(\ast )
-\end{equation*}$$
-In Proposition 5 of the mentioned paper, the authors show that
-$$\begin{equation*}
-\underset{n\rightarrow \infty }{\lim }f_{n}=\prod_{m=0}^{\infty }\left(\frac{2m+1}{2m+2}\right) ^{(-1)^{t_{m}}}=\frac{1}{2}\prod_{m=0}^{\infty }\left(\frac{2m+1}{2m+2}\right) ^{(-1)^{t_{2m+1}}}
-\end{equation*}$$
-and, since $(-1)^{t_{2m+1}}=-(-1)^{t_{2m}}=-(-1)^{t_{m}}$, they get
-$$\begin{equation*}
-\underset{n\rightarrow \infty }{\lim }f_{n}=\frac{1}{2\ \underset{n\rightarrow \infty }{\lim }f_{n}},
-\end{equation*}$$
-thus proving that $\underset{n\rightarrow \infty }{\lim }f_{n}^{2}=\frac{1}{2}$. The trick they use is to multiply both sides of $\left( \ast \right) $ by the auxiliary product
-$$\begin{equation*}
-\prod_{m=1}^{\infty }\left( \frac{2m}{2m+1}\right) ^{(-1)^{t_{m}}}\qquad(\ast \ast )
-\end{equation*}$$
-pretty much as in Moron's answer. Concerning the issue of the convergence of the infinite products, namely $\left( \ast \right) $ and $(\ast\ast )$, the authors state that they "are convergent by Abel's theorem", but I must confess I have no idea which theorem this is.<|endoftext|>
-TITLE: The well ordering principle
-QUESTION [10 upvotes]: Here is the statement of the Well Ordering Principle: If $A$ is a nonempty set, then there exists a linear ordering of $A$ such that the set is well ordered.
-In the book, it says that the chief advantage of the well ordering principle is that it enables us to extend the principle of mathematical induction for positive integers to any well ordered set. How does one see this? An uncountable set like $\mathbb{R}$ can be well ordered by the well ordering principle, so induction can be applied to an uncountable set like $\mathbb{R}$?
-
-REPLY [8 votes]: The first thing you should be aware of: the Well-Ordering Theorem is equivalent to the Axiom of Choice, and is highly non-constructive. Deriving the principle of transfinite induction on well-ordered sets is rather easy; the problem with $\mathbb{R}$ is that such a well-ordering exists only non-constructively, that is, you cannot explicitly give it, you can only assume that it exists.
-A possible usage of it is when proving that $\mathbb{R}$ as a $\mathbb{Q}$-vector space has a basis (even though this follows from the more general theorem that every vector space has a basis, which is mostly proved by Zorn's lemma but can as well be proven by transfinite induction).
-Assume $\sqsubseteq$ were such a well-ordering on $\mathbb{R}\backslash \{0\}$ (which we can trivially get from a well-ordering on $\mathbb{R}$).
-Define
-$$\begin{align*}
-A_0 &:= \{\inf_\sqsubseteq\; (\mathbb{R}\backslash \{0\})\}\\
-A_{n+1} &:=\begin{cases} A_n \cup\{\inf_\sqsubseteq\{ x\in\mathbb{R}\mid x\mbox{ lin. indep. from } A_n\}\} & \mbox{ if well-defined}\\
- A_n &\mbox{ otherwise} \end{cases}\\
-A_\kappa &:= \bigcup_{n<\kappa} A_n \qquad\qquad\text{for limit ordinals }\kappa.
-\end{align*}
-$$
-If for some $A_i$ no more elements can be added to $A_{i+1}$, then we have a basis. If not, we can proceed.
But at the latest, we know that $A_\mu$ is such a basis, where $\mu$ is the ordinal isomorphic to $\sqsubseteq$.
-(Sorry for the bad formatting, but this latex-thingy refuses to recognize my backslashes and curly brackets.)<|endoftext|>
-TITLE: Probability that a quadratic polynomial with random coefficients has real roots
-QUESTION [34 upvotes]: The following is a homework question for which I am asking for guidance.
-
-Let $A$, $B$, $C$ be independent random variables uniformly distributed on $(0,1)$. What is the probability that the polynomial $Ax^2 + Bx + C$ has real roots?
-
-That means I need $P(B^2 -4AC \geq 0)$. I've tried calling $X=B^2 -4AC$ and finding $1-F_X(0)$, where $F$ is the cumulative distribution function.
-I have two problems with this approach. First, I'm having trouble determining the product of two uniform random variables. We haven't been taught anything like this in class, and I couldn't find anything like it in Sheldon Ross' Introduction to Probability Models.
-Second, this strategy just seems wrong, because it involves so many steps and subjects we haven't seen in class. Even if I calculate the product of $A$ and $C$, I'll still have to square $B$, multiply $AC$ by four and then subtract those results. It's too much for a homework question. I'm hoping there might be an easier way.
-
-REPLY [4 votes]: Here's another approach, which utilizes the law of total probability directly. $$P(B^2\geq 4AC)=\int_{0}^{1}P(B^2\geq4AC|C=c)f_{C}(c)dc$$ Using independence, the above simplifies to $$P(B^2\geq 4AC)=\int_{0}^{1}P(B^2\geq4Ac)dc$$ When $c\in\Big(0,\frac{1}{4}\Big)$ we have $$P(B^2 \geq 4Ac)=\int_0^1 \int_{\sqrt{4ac}}^1dbda=1-\frac{4\sqrt{c}}{3}$$ When $c\in \Big[\frac{1}{4},1\Big)$ we have $$P(B^2 \geq 4Ac)=\int_0^1 \int_0^{b^2/4c}dadb=\frac{1}{12c}$$ Putting everything together, $$P(B^2\geq 4AC)=\int_0^{1/4}\Bigg(1-\frac{4\sqrt{c}}{3}\Bigg)dc+\int_{1/4}^1\frac{dc}{12c}=\frac{5+3\ln(4)}{36}$$<|endoftext|>
-TITLE: Torsion in free products of groups
-QUESTION [10 upvotes]: While reading a paper about the modular group $\Gamma = PSL_{2}(\mathbb{Z})$, I read that $PSL_{2}(\mathbb{Z}) \cong C_{2} * C_{3}$ and consequently, all the torsion elements in $\Gamma$ are of order 2 or 3. While I understand the isomorphism, I don't know how to prove the second statement.
-More generally, is it true that if $G \cong C_{n_{1}} * \ldots * C_{n_{k}}$, then all torsion elements in $G$ have order $n_{1}, \ldots, n_{k}$?
-
-REPLY [7 votes]: More generally, if $A$ and $B$ are groups, then the only torsion elements of $A*B$ are conjugates of elements of $A$ and of elements of $B$; this was proven by Schreier. In particular, the order of any element of finite order must be the order of an element of $A$ or of an element of $B$.
-Added. Schreier proved it as part of his construction of free products and free products with one amalgamated subgroup. The result can also be obtained as a consequence of the much stronger theorem of Kurosh; this is done, for example, in Rotman's Introduction to the Theory of Groups, Chapter 11.
-Theorem. (Kurosh, 1934) If $H$ is a subgroup of a free product $\mathop{*}\limits_{i\in I} A_i$, then $H = F*\left(\mathop{*}\limits_{\lambda\in\Lambda}H_{\lambda}\right)$, for some possibly empty index set $\Lambda$, where $F$ is a free group and each $H_{\lambda}$ is a conjugate of a subgroup of some $A_i$.
-As a corollary, you get
-Corollary. If $G = \mathop{*}\limits_{i\in I}A_i$, then every finite subgroup of $G$ is conjugate to a subgroup of some $A_i$.
In particular, every element of finite order in $G$ is conjugate to an element of finite order in some $A_i$.<|endoftext|>
-TITLE: Why is the orthogonal group $\operatorname{O}(2n,\mathbb R)$ not the direct product of $\operatorname{SO}(2n, \mathbb R)$ and $\mathbb Z_2$?
-QUESTION [23 upvotes]: We know that when $n$ is odd, $\operatorname{O}_n(\mathbb R) \simeq \operatorname{SO}_n (\mathbb R) \times \mathbb Z_2$.
-However, this seems not to be true when $n$ is even. But I have no idea how to prove that something is not a direct product.
-I have tried to verify some basic properties of direct products. For example, $\operatorname{SO}_n(\mathbb R)$ is a normal subgroup of $\operatorname{O}_n(\mathbb R)$, whether $n$ is odd or even. But they are not helpful.
-So, is this statement true, and how can one prove it?
-Thank you!
-
-REPLY [2 votes]: In this answer, we generalize for fun and write out the construction in more detail.
-
-An element in the orthogonal group$^1$ $O(n,\mathbb{F})$ has determinant $\pm 1$. The center is $$Z(O(n,\mathbb{F}))~=~\{\pm \mathbb{1}\}~\cong~\mathbb{Z}_2,\tag{1}$$
-$$Z(SO(n,\mathbb{F}))~=~\left\{\begin{array}{c}\{ \mathbb{1}\}\text{ if $n$ odd},\cr
-\{\pm \mathbb{1}\}\text{ if $n$ even}.\end{array}\right.\tag{2}$$
-Here $n\in\mathbb{N}$, with $n\geq 3$ in the even case of (2), since $SO(2,\mathbb{F})$ is abelian and hence equal to its own center.
-$O(n,\mathbb{F})$ has 2 distinct components
-$$O(n,\mathbb{F})~=~SO(n,\mathbb{F}) ~\sqcup~ P\cdot SO(n,\mathbb{F}),\tag{3}$$
-where $P\in O(n,\mathbb{F})$ is a fixed element with $\det P=-1$.
-There is always a group isomorphism from the semidirect product
-$$ \mathbb{Z}_2 ~\ltimes~ SO(n,\mathbb{F})~~\stackrel{\Phi}{\cong}~~O(n,\mathbb{F}) \tag{4}$$
-given by
-$$ ((-1)^p, M)~~\stackrel{\Phi}{\mapsto}~~ P^p\cdot M, \qquad p~\in~\{0,1\},\qquad M~\in~SO(n,\mathbb{F}) . \tag{5}$$
-The factor $(-1)^p$ is the determinant. Explicitly, the semidirect product reads
-$$ ((-1)^{p_1}, M_1)\cdot ((-1)^{p_2}, M_2)~=~((-1)^{p_1+p_2}, P^{-p_2}\cdot M_1\cdot P^{p_2}\cdot M_2). \tag{6}$$
-On one hand, if we choose the fixed element $P$ to belong to the center (1), the semidirect product (6) becomes a direct product. This is precisely possible if $n$ is odd.
-On the other hand, for $n$ even, $O(n,\mathbb{F})$ and the direct product $\mathbb{Z}_2 \times SO(n,\mathbb{F})$ have different centers, so they cannot be isomorphic, cf. the answer by Eric O. Korman.
-
---
-$^1$ Here $\mathbb{F}$ is a field with characteristic different from 2.<|endoftext|>
-TITLE: Characteristic function of the normal distribution
-QUESTION [17 upvotes]: The standard normal distribution
-$$f(x) = \frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}}$$
-has the characteristic function
-$$\int_{-\infty}^\infty f(x) e^{itx} dx = e^{-\frac{t^2}{2}}$$
-and this can be proved by obtaining the moments.
-However, is there a more direct method of proving that the standard normal has the stated characteristic function? I got stuck on trying to show that
-$$\int_{-\infty}^\infty e^{-\frac{1}{2}(x-it)^2} dx= \sqrt{2\pi}.$$
-
-REPLY [19 votes]: A simple change of variables allows you to compute $\mathbb{E}[e^{tX}]$ for real $t$ and standard normal $X$,
-$$
-\begin{align}
-\mathbb{E}[e^{tX}]&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-\frac12x^2}e^{tx}\,dx\\
-&= \frac{e^{\frac12t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-\frac12(x-t)^2}\,dx\\
-&=\frac{e^{\frac12t^2}}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-\frac12y^2}\,dy\\
-&=e^{\frac12t^2}.
-\end{align}
-$$
-Here, the substitution $y=x-t$ has been used. In fact, this identity holds for all complex $t$ as well by analytic continuation.
The right hand side, $e^{\frac12t^2}$, is clearly analytic. The left hand side is analytic, as it has the derivative $\mathbb{E}[Xe^{tX}]$. The fact that you can commute differentiation and expectation follows from the dominated convergence theorem. Two analytic functions which agree on the real line must agree everywhere (by analytic continuation). So the identity holds for all complex $t$, and replacing $t$ by $it$ gives the expression you ask for.<|endoftext|>
-TITLE: Sum of cosines of primes
-QUESTION [10 upvotes]: Let $p_n$ be the $n$th prime number, $p_1=2,p_2=3,p_3=5,\ldots$
-How does one prove that this series converges/diverges?
-$$\sum_{n=1}^\infty \cos{p_n}$$
-
-REPLY [7 votes]: If it converges, then this disproves the twin prime conjecture, I believe.
-If $\lim\ \cos p_n = 0$ and the twin prime conjecture were true, then we would have, as $p_n$ runs through the lower twin primes (i.e. those for which both $p_n$ and $p_n + 2$ are prime),
-$0 = \lim\ \cos (p_n + 2) = \lim\ (\cos p_n \cos 2 - \sin p_n \sin 2) = \mp \sin 2 \neq 0$,
-a contradiction.
-In fact, if $\lim\ \cos p_n = 0$, then for any odd integer $M$ we must have that $\lim\ \cos (M\times p_n) = 0$ (as $\cos Mx$ can be written as an odd polynomial in $\cos x$), which, I guess, implies that $ \lim\ \cos (2n+1) = 0$.
-If I remember correctly there was a previous question which disproved this.<|endoftext|>
-TITLE: Solving $x^2 \equiv 1 \pmod{p^{\ell}}$
-QUESTION [6 upvotes]: I am working through Ireland/Rosen at the moment, and I cannot solve a (probably) simple exercise. Any nudges you can give me in the right direction are appreciated.
-How does one show that $x^2 \equiv 1 \pmod{p^{\ell}}$ has only two solutions (namely $\pm 1$)? Here, $\ell$ is a positive integer (which we can take to be at least $2$, as $\mathbb{F}_p$ is a field and must therefore only have two solutions).
-I am aware of how to prove this statement in general using Hensel's Lemma, but there must be an elementary (maybe two line) proof, as it is in the first few chapters of Ireland/Rosen. The book also discusses Euler's theorem around this point:
-$$x^{\varphi(n)} \equiv 1 \pmod{n},$$
-where $\varphi(n)$ is the totient function, and I suspect this plays a role.
-
-How does one show that $x^2 \equiv 1 \pmod{p^{\ell}}$ has only two solutions, without invoking Hensel's lemma? (Please provide hints only!!)
-
-REPLY [6 votes]: Hint $\,\ p\,$ prime, $\, (p,a,b) = 1,\ p^n\mid ab\ \Rightarrow\ p^n\ |\ a\ $ or $\ p^n\ |\ b\ $ by iterating Euclid's Lemma.
-Note that $\ (p,x\!-\!1,x\!+\!1) = (p,x\!-\!1,x\!+\!1\!-\!(x\!-\!1)) = (p,x\!-\!1,2) = 1\ $ for odd $\,p.$
-i.e. if $\,p^n$ divides a product of pairwise $\,p\,$-coprime elements $\,a_i\,$ then $\,p^n\,$ divides one of the $\,a_i,\, $ for otherwise, by unique factorization, $\ p\,$ divides at least two factors $\,a_i,\ a_j\,$ contra $\ (p,\,a_i,\,a_j) = 1$.<|endoftext|>
-TITLE: what are the uses of this identity
-QUESTION [5 upvotes]: Consider this wonderful (I think it is) identity
-$$\begin{align*}
-&a+b(1+a) + c(1+a)(1+b) + d(1+a)(1+b)(1+c)
-+\cdots+l(1+a)(1+b)\cdots(1+k)\\
-&\qquad=
-(1+a)(1+b)(1+c)\cdots(1+l)-1
-\end{align*}
-$$
-I believe there must be some beautiful applications of it, for example deriving some other identities. Can someone please explore these possibilities?
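-One way to see why it holds: writing $x_1,x_2,\ldots$ for $a,b,c,\ldots$ (a relabeling of the variables above) and $S_k$ for the sum of the first $k$ terms, we have $1+S_k=(1+S_{k-1})(1+x_k)$, so the product form follows by induction. A minimal Python check with random inputs (an illustration only):
-
-import random
-
-# Check: x1 + x2(1+x1) + x3(1+x1)(1+x2) + ... == (1+x1)...(1+xn) - 1
-xs = [random.uniform(-2, 2) for _ in range(8)]
-lhs, prod = 0.0, 1.0
-for x in xs:
-    lhs += x * prod        # adds x_k * (1+x_1)...(1+x_{k-1})
-    prod *= 1 + x          # running product (1+x_1)...(1+x_k)
-print(lhs, prod - 1)       # the two values agree up to rounding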
-
-REPLY [9 votes]: There is a whole article devoted to applications of this identity: Bhatnagar, In Praise of an Elementary Identity of Euler.<|endoftext|>
-TITLE: Prove $(a_1+b_1)^{1/n}\cdots(a_n+b_n)^{1/n}\ge \left(a_1\cdots a_n\right)^{1/n}+\left(b_1\cdots b_n\right)^{1/n}$
-QUESTION [14 upvotes]: Consider positive numbers $a_1,a_2,a_3,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$. Does the following inequality hold, and if it does, how does one prove it?
-$\left[(a_1+b_1)(a_2+b_2)\cdots(a_n+b_n)\right]^{1/n}\ge \left(a_1a_2\cdots a_n\right)^{1/n}+\left(b_1b_2\cdots b_n\right)^{1/n}$
-
-REPLY [3 votes]: By Hölder, $$\prod_{i=1}^n(a_i+b_i)\geq\left(\sqrt[n]{\prod_{i=1}^na_i}+\sqrt[n]{\prod_{i=1}^nb_i}\right)^n$$
-and we are done!<|endoftext|>
-TITLE: Showing that a matrix is invertible using eigenvalues
-QUESTION [5 upvotes]: Let $A$ be a matrix from the vector space of square $N \times N$ matrices.
-With the initial information: $A^2-4A=4I$.
-How does one show that $A+I$ is invertible?
-(Please, I need a solution that involves eigenvalues.)
-Thank you
-
-REPLY [2 votes]: Hint: What does the initial information tell you about the eigenvalues of $A$?
-
-REPLY [2 votes]: Note that a square matrix $B$ is invertible if and only if $\lambda=0$ is not an eigenvalue of $B$.
-Thus,
-$$\begin{align*}
-A+I\text{ is invertible }&\Longleftrightarrow \lambda=0\text{ is not an eigenvalue of }A+I\\
-&\Longleftrightarrow \text{there is no nonzero solution to }(A+I)x=0\\
-&\Longleftrightarrow \text{there is no nonzero solution to }Ax +x = 0\\
-&\Longleftrightarrow \text{there is no nonzero solution to }Ax = -x\\
-&\Longleftrightarrow \lambda = -1\text{ is not an eigenvalue of }A.
-\end{align*}$$
-The "initial information" tells you that $A$ satisfies $t^2-4t-4$. So the minimal polynomial of $A$ divides $(t-2-2\sqrt{2})(t-2+2\sqrt{2})$.
-What does that tell you about the eigenvalues of $A$?
-Added. More generally, $\lambda$ is an eigenvalue of $A$ if and only if $\lambda+\mu$ is an eigenvalue of $A+\mu I$, because $Ax=\lambda x$ if and only if $(A+\mu I)x = (\lambda+\mu)x$. So $A+\mu I$ is invertible if and only if $-\mu$ is not an eigenvalue of $A$.<|endoftext|>
-TITLE: Do sets, whose power sets have the same cardinality, have the same cardinality?
-QUESTION [21 upvotes]: Is it generally true that if $|P(A)|=|P(B)|$ then $|A|=|B|$? Why? Thanks.
-
-REPLY [21 votes]: Your question is undecidable in ZFC. If you assume the generalized continuum hypothesis, then what you state is true. On the other hand, Easton's theorem shows that if you have a function $F$ from the regular cardinals to cardinals such that $F(\kappa)>\kappa$, $\kappa\leq\lambda\Rightarrow F(\kappa)\leq F(\lambda)$ and $cf(F(\kappa))>\kappa$, then it is consistent that $2^\kappa=F(\kappa)$. This of course shows that it is consistent that we can have two cardinals $\kappa<\lambda$ such that $2^\kappa=2^\lambda$.<|endoftext|>
-TITLE: The value of the improper integral $\int x\exp(-\lambda x^2)\, dx$
-QUESTION [6 upvotes]: The integral in question is $\int_{-\infty}^{+\infty} x {e}^{-\lambda x^2}dx$, where $x$ and $\lambda$ both are real numbers.
-My solution:
-$\int_{-\infty}^{+\infty} x {e}^{-\lambda x^2}dx = \int_{-\infty}^{0} x {e}^{-\lambda x^2}dx +\int_{0}^{+\infty} x {e}^{-\lambda x^2}dx =$
-$\lim_{a\rightarrow -\infty}\int_{a}^{0} x {e}^{-\lambda x^2}dx +\lim_{b\rightarrow +\infty}\int_{0}^{b} x {e}^{-\lambda x^2}dx = \begin{vmatrix}
-u = -\lambda x^2\\
-du = -2\lambda x\,dx\\
-\begin{matrix}
-x & 0 & a & b\\
-u & 0 & -\lambda a^2 & -\lambda b^2
-\end{matrix}
-\end{vmatrix} =$
-$-\frac{1}{2\lambda}\left( \lim_{a\rightarrow -\infty}\int_{-\lambda a^2}^{0} {e}^{u}du +\lim_{b\rightarrow +\infty}\int_{0}^{-\lambda b^2} {e}^{u}du \right) =$
-$-\frac{1}{2\lambda}\left( \lim_{a\rightarrow -\infty}\left(1 - {e}^{-\lambda a^2} \right) +\lim_{b\rightarrow +\infty}\left({e}^{-\lambda b^2} - 1 \right) \right) = -\frac{1}{2\lambda}\left( \left(1 - 0 \right) +\left(0 - 1 \right) \right) = 0$
-1) Is this solution correct?
-2) Suppose that a real function of a real argument $f\left(x\right)$ is odd and both limits of $f\left(x\right)$ as $x$ approaches $\pm\infty$ are finite values. Is it enough to say that $\int_{-\infty}^{+\infty}f\left(x\right)dx$ is equal to 0?
-
-REPLY [4 votes]: Yes (for $\lambda>0$; if $\lambda\le 0$ the two one-sided limits do not exist, so the integral diverges).
-This only works if $f$ is integrable. A counterexample would be $f(x)=x/(1+x^2)$, whose integral doesn't exist. What you need to know is that $\int_a^b f(x)dx\to 0$ as $a$ and $b$ go to infinity, and this will hold if $f$ is integrable.
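-A numerical look at that counterexample (a minimal Python sketch using scipy; an illustration only): the symmetric integrals $\int_{-b}^{b}$ vanish for every $b$, while the one-sided integrals $\int_0^b$ grow like $\frac12\log(1+b^2)$, so the two-sided improper integral does not exist.
-
-import math
-from scipy.integrate import quad
-
-f = lambda x: x / (1 + x * x)
-for b in [10, 100, 1000]:
-    symmetric, _ = quad(f, -b, b)   # 0 for every b, since f is odd
-    one_sided, _ = quad(f, 0, b)    # grows without bound as b increases
-    print(b, symmetric, one_sided, 0.5 * math.log(1 + b * b))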
-The specter of the Axiom of Choice (so to speak) does not even begin to suggest itself until you have to make infinitely many choices. Even then, you may not need it. -Note, however, that using phrases like "arbitrary real number" may make it seem like you are talking about some kind of uniform probability distribution over all the real numbers that makes all "selections" equally likely. This is a completely different thing altogether, but not what you are talking about here. This is likewise the problem that arises when, in the context of probability, we talk about "selecting a random integer." The problem with "selecting a random integer" is that you cannot have a uniform probability distribution on the integers: this would amount to a measure on the power set of the integers that is $\sigma$-additive, for which each integer has equal probability, and for which $\mu(\mathbb{Z})=1$; no such thing exists, because you would need to have $\mu(n)=0$ for each integer $n$, and $\sigma$-additivity would imply $\mu(\mathbb{Z}) = 0$ as well. This is what is behind the statement "you cannot 'choose a random integer'" true in that context; but this is an obstacle in the notion of "randomness", not a set-theoretic obstacle. Likewise, there can be no uniform probability distribution over all the reals, because uniformity would imply that each interval $[n,n+1)$ with $n\in\mathbb{Z}$ would have the same measure, and $\sigma$-additivity together with the fact that the reals have finite total measure implies that each interval must have measure equal to $0$, and so the reals would also have total measure $0$. Again, this is a probability/measure-theoretic obstacle, not an Axiom-of-Choice one.<|endoftext|> -TITLE: the expectation of a chocolate bar -QUESTION [12 upvotes]: So my buddy claims that if I split a chocolate bar at random into two pieces, then the expected size of the larger piece is $\frac{3}{4}$ of the bar. I can't figure out how he came up with this value... -Can someone explain this? If you can, can you provide some kind of a proof? -p.s. it would be helpful to think of this chocolate bar as a 1D array :) -UPDATE -Imagine the candy bar is a world-famous chocolate bar, the ones that are broken into chunks. However, this special chocolate bar has n chunks. If we broke the chocolate bar randomly along these chunks, what would the expected size of the larger chunk be? My buddy claims it to be $\leq{\frac{3}{4}}$. - -REPLY [18 votes]: The larger piece is always between 1/2 and 1. If the break is uniformly distributed along the bar, the larger piece is uniformly distributed between 1/2 and 1. This gives the expected value of 3/4. -Added when we have break lines: If there are $n$ pieces, there are $n-1$ break lines, all equally probable. If $(n-1)$ is even, the size of the largest piece has chance $\frac{2}{n-1}$ of being any step between $\frac{n+1}{2n}$ and $\frac{n-1}{n}$, so the expectation is $\frac{3n-1}{4n}$. If $(n-1)$ is odd there is $\frac{1}{n-1}$ chance the "larger" piece is $\frac{1}{2}$ and $\frac{2}{n-1}$ for each value between $\frac{n+1}{2n}$ and $\frac{n-1}{n}$, giving $\frac{3n-4}{4n-4}$ as the expected value. As stated in other answers, this is always below $\frac{3}{4}$, but approaches that as $n \to \infty$ - -REPLY [10 votes]: Define $Y = \max \{ U,1 - U\} $, where $U$ is a uniform$(0,1)$ random variable. 
-Then, by the law of total probability (conditioning on $U$), -$$ -{\rm E}(Y) = \int_0^1 {{\rm E}(Y|U = u)du} = \int_0^{0.5} {(1 - u)du} + \int_{0.5}^1 {udu} = \frac{3}{4} -$$ -EDIT. The discrete case is even more elementary, but involves more calculations. -If $n$ (the number of chunks) is odd (and greater than $1$), then the ratio is equal to $k/n$, $k=n-1,n-2,\ldots,(n+1)/2$, with probability $2/(n-1)$ (by symmetry). Hence, its expectation is equal to -$$ -\sum\limits_{k = (n + 1)/2}^{n - 1} {\frac{k}{n}} \frac{2}{{n - 1}} = \frac{{3n - 1}}{{4n}} < \frac{3}{4}. -$$ -If $n$ is even, then the ratio is equal to $k/n$, $k=n-1,n-2,\ldots,n/2+1$, with probability $2/(n-1)$ (again, by symmetry), and to $(n/2)/n = 1/2$ with probability $1/(n-1)$. Hence, its expectation is equal to -$$ -\sum\limits_{k = n/2 + 1}^{n - 1} {\frac{k}{n}} \frac{2}{{n - 1}} + \frac{1}{{2(n - 1)}} = \frac{{3n - 4}}{{4n - 4}} < \frac{3}{4}. -$$ -It is interesting to note that -$$ -\frac{{3n - 4}}{{4n - 4}} = \frac{{3(n - 1) - 1}}{{4(n - 1)}}. -$$ -Thus, the expectations for $n$ and $n-1$ are equal for $n > 2$ even.<|endoftext|> -TITLE: Lie algebra of a quotient of Lie groups -QUESTION [34 upvotes]: Suppose I have a Lie group $G$ and a closed normal subgroup $H$, both connected. Then I can form the quotient $G/H$, which is again a Lie group. On the other hand, the derivative of the embedding $H\hookrightarrow G$ gives an embedding of Lie algebras, $\mathfrak{h}\hookrightarrow\mathfrak{g}$. Since $H$ is normal, $\mathfrak{h}$ is an ideal of $\mathfrak{g}$, so we can form the quotient algebra $\mathfrak{g}/\mathfrak{h}$. - -Is it then true that the Lie algebra of $G/H$ is canonically isomorphic to $\mathfrak{g}/\mathfrak{h}$? (I guess what I really want to know is: is the functor $\phi\mapsto d_e\phi$ exact?) -If so, can we always write $\mathfrak{g}$ as a direct sum of vector spaces $\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{g}'$ where $\mathfrak{g}'$ is a subalgebra of $\mathfrak{g}$ isomorphic to $\mathfrak{g}/\mathfrak{h}$? (I don't mean this to be a direct sum of Lie algebras; there would certainly in general be a nontrivial bracket between $\mathfrak{g}'$ and $\mathfrak{h}$.) - -Thank you. - -REPLY [11 votes]: Contrasting with what Mariano said, for (2), in the compact case it's true. That is, given a compact Lie group $G$ with Lie algebra $\mathfrak{g}$, if $\mathfrak{h}$ is an ideal of $\mathfrak{g}$, then there is another ideal $\mathfrak{p}$ in $\mathfrak{g}$ so that $\mathfrak{g} = \mathfrak{h}\oplus \mathfrak{p}$ where this really is a sum of Lie algebras (that is, the bracket between the $\mathfrak{h}$ part and the $\mathfrak{p}$ part is 0). -For, if $G$ is compact, it has a biinvariant metric which corresponds to an $Ad(G)$ invariant inner product on $\mathfrak{g}$. One can prove that this inner product satisfies $\langle [x,y],z\rangle = \langle x, [y,z]\rangle$. -Now, given $\mathfrak{h}$, let $\mathfrak{p}$ be the orthogonal complement. I'll use $h$ with subscripts to denote arbitrary elements of $\mathfrak{h}$ and likewise for $p$. -Then $\mathfrak{p}$ is an ideal since $\langle h, [p_1 + h_1,p]\rangle = \langle [h,p_1+h_1],p\rangle = 0$ since $[h,p_1+h_1]$ is in $\mathfrak{h}$ since it's an ideal. -Finally, we must have $[h,p]\in\mathfrak{h}\cap\mathfrak{p} = \{0\}$ since both are ideals.<|endoftext|> -TITLE: Cohomology of projective plane -QUESTION [29 upvotes]: How I can compute cohomology de Rham of the projective plane $P^{2}(\mathbb{R})$ using Mayer vietoris or any other methods? 
- -REPLY [43 votes]: If you remove a point from $P^2$ you are left with something which looks like a Moebius band. You can use this to compute $H^\bullet(P^2)$. -Let $p\in P^2$, let $U$ be a small open neighborhood of $p$ in $P^2$ diffeomorphic to an open disc centered at $p$, and let $V=P^2\setminus\{p\}$. Now use Mayer-Vietoris. -The cohomology of $U$ you know. The open set $V$ is diffeomorphic to an open moebious band, so that tells you the cohomology; alternatively, you can check that it deformation-retracts to the $P^1\subseteq P^2$ consiting of all lines orthogonal to the line corresponding to $p$ (with respect to any inner product in the vector space $\mathbb R^3$ you used to construct $P^2$), and the intersection $U\cap V$ has also the homotopy type of a circle. The maps in the M-V long exact sequence are not hard to make explicit; it does help to keep in mind the geometric interpretation of $U$ and $V$. -Later: alternatively, one can do a bit of magic. Since there is a covering $S^2\to P^2$ with $2$ sheets, we know that the Euler characteristics of $S^2$ and $P^2$ are related by $\chi(S^2)=2\chi(P^2)$. Since $\chi(S^2)=2$, we conclude that $\chi(P^2)=1$. Since $P^2$ is of dimension $2$, we have $\dim H^p(P^2)=0$ if $p>2$; since $P^2$ is non-orientable, $H^2(P^2)=0$; finally, since $P^2$ is connected, $H^0(P^2)\cong\mathbb R$. It follows that $1=\chi(P^2)=\dim H^0(P^2)-\dim H^1(P^2)=1-\dim H^1(P^2)$, so that $H^1(P^2)=0$. -Even later: if one is willing to use magic, there is lot of fun one can have. For example: if a finite group $G$ acts properly discontinuously on a manifold $M$, then the cohomology of the quotient $M/G$ is the subset $H^\bullet(M)^G$ of the cohomology $H^\bullet(M)$ fixed by the natural action of $G$. In this case, if we set $M=S^2$, $G=\mathbb Z_2$ acting on $M$ so that the non-identity element is the antipodal map, so that $M/G=P^2$: we get that $H^\bullet(P^2)=H^\bullet(S^2)^G$. -We have to compute the fixed spaces: - -$H^0(S^2)$ is one dimensional, spanned by the constant function $1$, which is obviously fixed by $G$, so $H^0(P^2)\cong H^0(S^2)^G=H^0(S^2)=\mathbb R$. -On the other hand, $H^2(S^2)\cong\mathbb R$, spanned by any volume form on the sphere; since the action of the non-trivial element of $G$ reverses the orientation, we see that it acts as multiplication by $-1$ on $H^2(S^2)$ and therefore $H^2(P^2)\cong H^2(S^2)^G=0$. -Finally, if $p\not\in\{0,2\}$, then $H^p(S^2)=0$, so that obviously $H^p(P^2)\cong H^p(S^2)^G=0$. - -Luckily, this agrees with the previous two computations.<|endoftext|> -TITLE: Explicit computation of a Galois group -QUESTION [15 upvotes]: Let $E$ be the splitting field of $x^6-2$ over $\mathbb{Q}$. Show that $Gal(E/\mathbb{Q})\cong D_6$, the dihedral group of the regular hexagon. - -I've shown that $E=\mathbb{Q}(\zeta_6, \sqrt[6]{2})$, where $\zeta_6$ is a (fixed) primitive sixth root of unity, and thus that $[E:\mathbb{Q}]=12$. -I'm getting a little mixed up working out the automorphisms, though. I know the Galois group is determined by the action on the generators $\zeta_6$ and $\sqrt[n]{2}$. So then the possibilities appear to be: \begin{align*}\sqrt[6]{2}&\mapsto\zeta_6^n\sqrt[6]{2}\;\;\;\mbox{ for } n=0,1,\ldots ,5 \\ \zeta_6&\mapsto \zeta_6^j\;\;\;\;\;\;\;\;\mbox{ for } j=1,5\,.\end{align*} -Does this make sense? Something doesn't quite feel right, but I'm not sure where the issue might be. I know that in some sense the generators are "independent", because I definitely can't get one generator from the other. 
(For example, it'd be different if we had fourth roots of unity because we could get $\sqrt{2}$ from both generators.) -Any help is appreciated - -REPLY [17 votes]: By definition of the splitting field, it is generated by the roots of $x^6 - 2$. These roots form the vertices of a regular hexagon on the complex plane. Any element of the Galois group must permute the roots, and is completely determined by what it does to the roots, since the splitting group is generated by them. -Let $\sigma$ be an element of the Galois group. Now consider what happens to two adjacent vertices, $\zeta_6^i\sqrt[6]{2}$ and $\zeta_6^{i+1}\sqrt[6]{2}$. By considering -$$\frac{\zeta_6^{i+1}\sqrt[6]{2}}{\zeta_6^{i}\sqrt[6]{2}} = \zeta_6,$$ -the action of $\sigma$ on the two adjacent vertices determines the action on $\zeta_6$. Since $\zeta_6$ satisfies $\Phi_6(x) = x^2-x+1$, the image of $\zeta_6$ must be another root of $x^2-x+1$, which is either $\zeta_6$ or $\zeta_6^5$, as you note. And if you know what happens to $\zeta_6^i\sqrt[6]{2}$ and to $\zeta_6$, then you know what happens to $\sqrt[6]{2}$ as well. And of course, if you know what happens to both of these, then you know what happens to the adjacent vertices $\sqrt[6]{2}$ and $\zeta_6\sqrt[6]{2}$. So knowing the action on at least two adjacent vertices is equivalent to knowing the action on $\sqrt[6]{2}$ and on $\zeta_6$, and vice versa, and this in turn tells you what is happening on all the vertices. This shows that what you are saying is correct. -Now, to finish off, try showing that two adjacent vertices are always mapped to two adjacent vertices, so that any permutation of the hexagon induced by $\sigma\in\mathrm{Gal}(E/\mathbb{Q})$ is actually a rigid motion of the hexagon. This will show the Galois group is contained in the dihedral group, and the computation of the order that you have already done will finish off the problem.<|endoftext|> -TITLE: Analytic functions with nonessential singularity at infinity must be a polynomial -QUESTION [15 upvotes]: This is an exercise from Alhfors Complex Analysis book- to show that an analytic function with a nonessential singularity at infinity must be a polynomial. -It seems like it should probably be pretty straight forward, but I must be missing something. -If it has a removable singularity at infinity than it extends to an analytic function on the Riemann sphere, and so must be constant by Liouville's theorem. -What if there is a pole at infinity though? -This was homework some time ago, and I never finished it :/ but have been thinking about it again recently. -Thanks :) - -REPLY [8 votes]: Another hint: look at the function $f(\frac{1}{z})$ at z = 0, it has a nonessential singularity at 0...<|endoftext|> -TITLE: Morphism of Exterior Algebras -QUESTION [9 upvotes]: Let $k$ be a field, let $V$ and $W$ be $k$-vector spaces of dimensions $n$ and $m$ respectively, and let $f:V\to W$ be a $k$-linear transformation. Let $\Lambda(V)$ and $\Lambda(W)$ denote the exterior algebras of $V$ and $W$ respectively. So we have -$$\Lambda(V) = \Lambda^0(V)\oplus\Lambda^1(V)\oplus\cdots\oplus\Lambda^n(V)$$ -and -$$\Lambda(W) = \Lambda^0(W)\oplus\Lambda^1(W)\oplus\cdots\oplus\Lambda^m(W).$$ -The wikipedia page on exterior algebras states that there is a unique function $\Lambda(f):\Lambda(V)\to\Lambda(W)$ such that $\Lambda(f)|_{\Lambda^1(V)}:\Lambda^1(V)\to\Lambda^1(W)$ is defined by $\Lambda(f)(v)=f(v)$. -In fact, $\Lambda(f)$ preserves grading (i.e. 
it can be written as a sum of maps $\Lambda^k(f):=\Lambda(f)|_{\Lambda^k(V)}:\Lambda^k(V)\to\Lambda^k(W)$). If $1\leq k \leq n$, then $\Lambda(f)$ is given by -$$\Lambda^k(f)(v_1\wedge\cdots\wedge v_k) = f(v_1)\wedge\cdots\wedge f(v_k).$$ -I do not understand how this function acts on $\Lambda^0(V)=k$. I know that we have a map -$$\Lambda^0(f):\Lambda^0(V)\to\Lambda^0(W)$$ which is really the same as -$$\Lambda^0(f):k\to k.$$ -My question has two parts: what is $\Lambda^0(f)$ and how is it determined from the universal mapping property for exterior algebras? - -REPLY [6 votes]: The map is the identity map. The map must be $k$-linear, so -$$\Lambda^0(f)(a) = \Lambda^0(f)(a\cdot 1) = a\Lambda^0(f)(1)$$ -hence the map is completely determined by the image of $1$. But since it is a morphism of algebras, the map must send $1$ to $1$, so $\Lambda^0(f)(1)=1$, hence $\Lambda^0(f)(a) = a$ for all $a\in k$.<|endoftext|> -TITLE: Product of two power series -QUESTION [14 upvotes]: Say if I define a power series over some arbitrary field $F$ as -$$a = \sum^{ \infty }_{i = 0} a_{i} X^{i} $$ -Then can I say: -$$ab = \sum^{ \infty }_{i = 0} \sum^{ \infty }_{j = 0} a_{i} b_{j} X^{i + j} $$ - -REPLY [16 votes]: Yes, using the natural notion of convergence for formal power series, the stated sum does indeed converge to the Cauchy product. One should beware - as exemplified in this thread - that there is widespread confusion about formal vs. functional power series - even by some experts (in other fields). Rota frequently told jokes in his lectures about certain distinguished mathematicians who published complete nonsense based on such confusion (Indiscrete Thoughts!) -In any case the basic ideas are quite simple if you merely take off your analyst hat and, instead, put on your algebraist or combinatorist hat. In particular, you should be able to find a correct discussion of convergence of formal power series in almost any good book on combinatorics or generating functions, e.g. here is an excerpt from Stanley's classic $\: $ Enumerative Combinatorics I.<|endoftext|> -TITLE: Expected tail and head length of $\rho$ for a finite random function -QUESTION [6 upvotes]: Let $F: D \rightarrow D$ be a random function on finite domain $D$ of size $n$. It is well-known that, from any $x \in D$, iterating $F$ on $x$ traces out a sequence of values $x, F(x), F(F(x)), \ldots$ that must eventually repeat (since $D$ is finite) and resembles a Greek "$\rho$". The expected number of total elements in this $\rho$ is $\sqrt{n\pi/2}$ with half being on the tail and half on the head. I've seen this stated in several places, but cannot find a proof. -Illustration: In order to clarify what I'm asking (since the comments make it clear that this might be useful), fix an integer $n$. Uniformly sample $a_0 \in [1,n]$ and make a vertex with label $a_0$. Do this again, obtaining a vertex labeled $a_1$ and make a directed edge from $a_0$ to $a_1$ (this does not preclude $a_0 = a_1$). The pigeon-hole principle tells us that eventually we will create a cycle in this digraph; we terminate the process when this occurs. Clearly the graph will look like the letter $\rho$: the "tail" is the part of the graph before we enter the cycle (the tail might have length zero, of course), and the "head" is the is part of the graph comprising the cycle. - -My question is this: what is the expected length of the tail and head, in terms of $n$. Asymptotically, the answer is known to be $\sqrt{n\pi/8}$, but I cannot find a derivation. 
- -REPLY [4 votes]: Well, I've spent some time on this and I have (sort of) got it figured out. -This question has roots in the "birthday problem" where we ask how many people we must have in a group before the probability that at least two share a birthday exceeds $1/2$. The answer is 23. (This assumes that birthdays are uniform, which they're not. The non-uniformity only lowers the number of people required.) -With a little thought, we can see that the length of the $\rho$ as asked above is equivalent to a birthday-like question: how many random points do we need before we repeat ourselves. We can ask this in a few different forms: what $t \in [1,n]$ do we need before we exceed probability $p$, what is the expected number of points needed to see a collision, etc. -Let's focus on the question asked: expectation. The probability that there is no collision after $t$ trials is clearly -$$\frac{n}{n} \frac{n-1}{n} \frac{n-2}{n} \cdots \frac{n-t+1}{n} $$ -Now let $X$ be the R.V. for the number of trials until we repeat for the first time. Then the probability above is $\Pr[X > t]$ so $E[X]$ is -$$ \sum_{t \geq 0} \frac{n}{n} \frac{n-1}{n} \frac{n-2}{n} \cdots \frac{n-t+1}{n} $$ -(I'm using an alternative formulation of expectation here). -This is all well-and-good, except one doesn't get a very good sense of the answer from the sum. As mentioned above, this is claimed to be about $\sqrt{n\pi/2}$, but how? -It turns out that a sum similar to the above was studied by Ramanujan nearly 100 years ago; he called it the Q-Function. -$$ Q(n) = \sum_{1 \leq t \leq n} \frac{n!}{(n-t)!n^t} $$ -Since the Q-Function starts at 1 and our desired sum starts at 0, our sum will be $1 + Q(n)$. Asymptotic analysis shows that -$$ 1 + Q(n) \sim \sqrt{\frac{n \pi}{2}} + \frac{2}{3} $$ -The derivation I found for this is in "Analysis of Algorithms", R. Sedgewick and P. Flajolet, Addison-Wesley, 1996. It's quite involved. Interestingly, the $\sqrt{\frac{n \pi}{2}}$ term comes from evaluating the famous integral over $e^{-x^2/2}$. -To answer the final question: what is the length of the tail and head? When a repetition occurs, it is uniformly likely to collide with any point already established. (Imagine some hand-waving now.) Therefore on average it will land in the middle, putting half of the $\sqrt{\frac{n \pi}{2}}$ elements on the tail and the other half on the head. Therefore the expected length of the head and tail is $\sqrt{\frac{n \pi}{8}}.$
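-Added: for anyone who wants to check these constants numerically, here is a quick Monte Carlo sketch in Python (the code, the helper name rho_lengths, and the sample sizes are my own choices, not from the Sedgewick and Flajolet reference):
-import random
-
-def rho_lengths(n, rng):
-    # Walk x -> f(x) for a uniformly random f on {0,...,n-1}, sampling
-    # values lazily; return (tail_length, cycle_length) at first repeat.
-    seen = {}
-    x, step = rng.randrange(n), 0
-    while x not in seen:
-        seen[x] = step
-        x = rng.randrange(n)  # x was fresh, so f(x) is a fresh sample
-        step += 1
-    first = seen[x]
-    return first, step - first
-
-rng = random.Random(0)
-n, trials = 10**6, 2000
-data = [rho_lengths(n, rng) for _ in range(trials)]
-print(sum(t for t, c in data) / trials,  # average tail length
-      sum(c for t, c in data) / trials,  # average cycle length
-      (3.141592653589793 * n / 8) ** 0.5)
-Both averages should come out near $\sqrt{n\pi/8}\approx 627$ for $n=10^6$.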
<|endoftext|> -TITLE: Self-Contained Proof that $\sum\limits_{n=1}^{\infty} \frac1{n^p}$ Converges for $p > 1$ -QUESTION [178 upvotes]: To prove the convergence of the p-series -$$\sum_{n=1}^{\infty} \frac1{n^p}$$ -for $p > 1$, one typically appeals to either the Integral Test or the Cauchy Condensation Test. -I am wondering if there is a self-contained proof that this series converges which does not rely on either test. -I suspect that any proof would have to use the ideas behind one of these two tests. - -REPLY [2 votes]: We first show that for $k\geq 1$ -$$\sum_{n=1}^\infty\frac{1}{n(n+1)\dotsb(n+k)}=\frac{1}{k\times k!}\tag{0}.$$ -This result combined with the limit comparison test will yield the fact that $\sum n^{-p}<\infty$ for $p>1$. -Indeed, the series in (0) telescopes, as can be seen from the partial fraction decomposition -$$\frac{1}{n(n+1)\dotsb(n+k)}=\frac{1}{k}\left(\frac{1}{n(n+1)\dotsb(n+k-1)}-\frac{1}{(n+1)\dotsb(n+k)}\right)$$ -whence -$$\sum_{n=1}^m\frac{1}{n(n+1)\dotsb(n+k)}=\frac{1}{k}\left(\frac{1}{k!}-\frac{1}{(m+1)\dotsb(m+k)}\right).$$ -Letting $m\to\infty$ yields the result. -Next observe that for $p>1$ we have -$$\frac{1}{n^p}\sim\frac{1}{n(n+1)\dotsb(n+p-1)}.$$ -(For non-integer $p$, read the product $n(n+1)\dotsb(n+p-1)$ as $\Gamma(n+p)/\Gamma(n)$: the telescoping identity above goes through verbatim with this reading, and $\Gamma(n+p)/\Gamma(n)\sim n^p$.) -Apply the limit comparison test and we are done.
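-As a quick numerical sanity check of $(0)$, here is a small Python sketch of my own (the helper name partial_sum is made up):
-from math import factorial
-
-def partial_sum(k, m):
-    # sum_{n=1}^{m} of 1 / (n (n+1) ... (n+k))
-    total = 0.0
-    for n in range(1, m + 1):
-        prod = 1.0
-        for i in range(k + 1):
-            prod *= n + i
-        total += 1.0 / prod
-    return total
-
-for k in (1, 2, 3):
-    print(k, partial_sum(k, 10**5), 1 / (k * factorial(k)))
-    # the two printed values should agree to many decimal places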
<|endoftext|> -TITLE: Frobenius morphism and global sections of direct image of structure sheaf -QUESTION [5 upvotes]: Let $X$ be a proper scheme defined over an algebraically closed field of characteristic $p > 0$. Let $F : X\rightarrow X$ be the absolute Frobenius morphism. What is the dimension of $H^0(X, F_*\mathcal{O}_X)$? - -REPLY [4 votes]: $F$ is a finite morphism, so affine, so $H^i(X, \mathcal{O}_X) = H^i(X, F_*\mathcal{O}_X)$ for all $i$.<|endoftext|> -TITLE: Nonlinear function continuous but not bounded -QUESTION [20 upvotes]: I would like an example of a map $f:H\rightarrow R$, where $H$ is a (infinite dimensional) Hilbert space, and $R$ is the real numbers, such that $f$ is continuous, but $f$ is not bounded on the closed unit ball $\{ x\in H : \|x\| \leq 1\}$. -Actually, $H$ could be replaced by any Banach space (but not just a normed space-- that's too easy). My motivation is that if $f$ is linear, this is impossible; but I have next to no intuition about non-linear functions. -Edit: Here's an example for $c_0$ which is even differentiable (disclaimer: I found it here: http://www.ms.uky.edu/~larry/paper.dir/korea.ps). Define $f:c_0\rightarrow F$ (where $F$ is your field, real or complex) by $$ f(x) = \sum_{n=1}^\infty x_n^n \qquad (x=(x_n)). $$ You can estimate the sum by a geometric progression, so it does converge. A bit of checking shows that $f$ is Fréchet differentiable (so certainly continuous). But $f(1,1,\cdots,1,0,\cdots)=n$ (if there are $n$ ones) so $f$ is not bounded on the closed unit ball. What I don't immediately see is how to adapt this to $\ell^2$, say. - -REPLY [2 votes]: The example that you gave for $c_0$ can be slightly modified to give a map that works for the $\ell^p$ also. -$$(a_n) \mapsto \sum_n n a_n^n$$ -is continuous on $c_0$ with the $\sup$ norm, so also on all the $\ell^p$ with their norm. But $e_n \mapsto n$, so the map is not bounded on the unit ball. -If the Banach space has a Schauder basis $e_n$ (with $\|e_n \| = 1$), then we have a similar map -$$\sum_n a_n e_n \mapsto \sum_n n a_n^n$$ -It would be interesting to provide "direct" examples for other concrete Banach spaces, like $C([0,1])$, or $L^2(\mathbb{R})$, without using bases. -$\bf{Added:}$ Example for $C([0,1])$. Consider a continuous linear functional of norm $1$ which does not achieve its maximum on the unit ball, for example $l(f) = \int_0^{\frac{1}{2}} f(t) d t - \int_{\frac{1}{2}}^1 f(t) dt$, and take $\phi(f) = \frac{1}{1- l(f)}$ on the closed unit ball. -For $c_0$, take $l(a_n) = \sum_{n\ge 1} \frac{a_n}{2^n}$, of norm $1$, but maximum $1$ not achieved on the closed unit ball. -For $\ell^1$, take $l(a_n) = \sum (1-\frac{1}{n}) a_n$, norm $1$, $\ldots$. -Such examples are possible only for spaces that are not reflexive. -$\bf{Added:}$ -For $L^p([0,1])$ ($p\ge 1$) consider the continuous (non-linear) functional on the closed unit ball -$$f\mapsto T(f) := \int_0^1 t\, |f(t)|^p\, d t$$ -with supremum $1$, but which is not achieved. Then $f\mapsto \frac{1}{1-T(f)}$ is continuous and unbounded on the closed unit ball.<|endoftext|> -TITLE: Computing the Hilbert class field -QUESTION [11 upvotes]: Does anyone know any good source with nice examples of Hilbert class field computations? I'm trying to piece together the theory with some canonical examples. - -REPLY [13 votes]: You will find a lot of material on quadratic, cubic and quartic unramified extensions of quadratic number fields in the Seminar on complex multiplication (Borel, Chowla, Herz, Iwasawa, Serre) published in Springer's Lecture Notes 21. Beware of typos, however. -The primary source for unramified cyclic extensions of number fields is the thesis by G. Gras, parts of which were published in Extensions abéliennes non ramifiées de degré premier d'un corps quadratique, Bull. Soc. Math. Fr. 100 (1972), 177-193; there are, however, no exercises there.
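-Added: not a reference, but if you want raw data to experiment with: for an imaginary quadratic field of discriminant $D$, the degree of its Hilbert class field over the field equals the class number $h(D)$, which is easy to compute by counting reduced binary quadratic forms. A small Python sketch of my own (the function name class_number is made up):
-from math import gcd
-
-def class_number(D):
-    # Count reduced primitive forms a x^2 + b x y + c y^2 of
-    # discriminant D < 0; this count equals h(D).
-    assert D < 0 and D % 4 in (0, 1)
-    h, b = 0, D % 2
-    while 3 * b * b <= -D:
-        q = (b * b - D) // 4  # q = a * c
-        a = max(b, 1)
-        while a * a <= q:
-            if q % a == 0:
-                c = q // a
-                if gcd(gcd(a, b), c) == 1:
-                    h += 1  # the form (a, b, c)
-                    if 0 < b < a < c:
-                        h += 1  # the distinct form (a, -b, c)
-            a += 1
-        b += 2
-    return h
-
-print([class_number(D) for D in (-3, -4, -23, -47)])  # [1, 1, 3, 5]
-For instance $h(-23)=3$, matching the classical fact that the Hilbert class field of $\mathbb{Q}(\sqrt{-23})$ is generated by a root of $x^3-x-1$.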
<|endoftext|> -TITLE: Why is every discrete subgroup of a Hausdorff group closed? -QUESTION [16 upvotes]: I have just begun to learn about topological groups recently and am still not familiar with combining topology and group theory together. -I have read a useful property of discrete groups on Wikipedia: -every discrete subgroup of a Hausdorff group is closed -But I have no idea how to prove it. I find that it cannot be proved only considering the topological structure, since $\left\{\frac{1}{n}: n=1,2,3,...\right\}$ is a discrete subspace of $\Bbb R$ which is not closed. -I don't know how to use the group structure here. Can you please help? Thanks. - -REPLY [3 votes]: Here is another solution (note that by neighborhood I mean open neighborhood): -Let $H \leq G$ be a discrete subgroup of the Hausdorff group $G$. -Step 1: -We will show that given a neighborhood $U$ such that $U \cap H = \{e\}$, there exists a neighborhood $V \subset U$ such that $V^{-1}V \subset U$ and $e \in V$. -Let $\sigma: U \times U \rightarrow G$ be the map $\sigma(y_1,y_2)=y_1^{-1}y_2$. By continuity there exists a neighborhood $N \subset U \times U$ of $(e,e)$ such that $\sigma(N)\subset U$. Then $N$ contains an open set of the form $V_1 \times V_2$ where $V_1,V_2 \subset U$ are open and $e \in V_1, V_2$. Take $V=V_1 \cap V_2$. Then $V$ is a neighborhood of $e$ and $V \times V \subset V_1 \times V_2$, and hence $V^{-1}V = \sigma(V\times V) \subset \sigma(V_1 \times V_2) \subset U$. -Step 2: -Let $x \in G-H$. We will find a neighborhood of $x$ contained in $G-H$. Assume $U$ is a neighborhood of $e$ such that $U \cap H= \{e\}$. Let $L_x: G \rightarrow G$ be defined by $L_x(y)=xy$. This is a homeomorphism with inverse $L_{x^{-1}}$. Now, let $V \subset U$ be a neighborhood of $e$ with the properties from Step 1. Take $W= L_x(V)$; this is a neighborhood of $x$. Suppose now that $h_1, h_2 \in W \cap H$. Then $h_1=xv_1$ and $h_2=xv_2$ for some $v_1,v_2 \in V$. Thus $x = h_1v_1^{-1} = h_2v_2^{-1}$, so $h_2^{-1}h_1 = v_2^{-1}v_1 \in V^{-1}V \subset U$. Since $h_2^{-1}h_1 \in H$ as well, $h_2^{-1}h_1 \in H \cap U = \{e\}$, hence $h_1=h_2$. It follows that $W$ contains at most one element of $H$. -If $W$ contains no element of $H$ then $W$ is the desired subset. Otherwise, if $W \cap H = \{h\}$, then $h \neq x$ (as $x \notin H$), and since $G$ is Hausdorff we can separate $x$ and $h$ by disjoint open sets; intersecting them with $W$ gives open subsets $U_x, U_h \subset W$ (which are also open in $G$) with $x \in U_x$ and $h \in U_h$. Then $U_x$ is the desired subset since $U_x \cap H = \emptyset$.<|endoftext|> -TITLE: Why do they use $\equiv$ here? -QUESTION [7 upvotes]: I thought I had pretty much figured out the difference between $\equiv$ and $=$. Then I came across this while reading about partial derivatives (in Colley's Vector Calculus): -$$\frac{\partial^2f}{\partial z^2} = \frac{\partial}{\partial z} \left(\frac{\partial f}{\partial z}\right) = \frac{\partial}{\partial z} (y^2) \equiv 0$$ -when $f(x,y,z)=x^2y+y^2z$. Why do they use $\equiv$ instead of $=$ here? - -REPLY [15 votes]: $\equiv$ is often used (between functions) to mean that they are identical (instead of being equal at some particular point). In your example this means identically $0$. If someone writes something like $f(z)=g(z)$, it might be thought that these functions are equal at some point $z$ rather than at every point $z$, so you could write $f\equiv g$ or $f(z)\equiv g(z)$ to mean that they are equal everywhere.<|endoftext|> -TITLE: How to show that $\sum\limits_{k=1}^{n-1}\frac{k!k^{n-k}}{n!}$ is asymptotically $\sqrt{\frac{\pi n}{2}}$? -QUESTION [24 upvotes]: According to "Concrete Mathematics" on page 434, elementary asymptotic methods show that $\displaystyle \sum_{k=1}^{n-1}\frac{k! \; k^{n-k}}{n!}$ is asymptotically $\sqrt{\frac{\pi n}{2}}$. Does anybody see how to show this? - -REPLY [11 votes]: I don't think the accepted answer is sufficiently rigorous, so I dug up Knuth's derivation in The Art of Computer Programming, Vol. 1 (3rd ed.), Section 1.2.11.3, pp. 119-120. Here's a summary of his argument. -Use Stirling's approximation on $k!$ to get -$$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} = \frac{\sqrt{2\pi}}{n!}\sum_{k=0}^n k^{n+1/2} e^{-k}\left(1 + O(k^{-1})\right).$$ -Then, use the Euler-Maclaurin formula to obtain -$$\sum_{k=0}^n k^{n+1/2}e^{-k} = \int_0^n x^{n+1/2} e^{-x} dx + \frac{1}{2} n^{n+1/2}e^{-n} + \frac{1}{24}n^{n-1/2}e^{-n} - R,$$ -where $R$ is the remainder term and can be shown to be $O(n^n e^{-n})$. -Since the integral is an incomplete gamma function, we have -$$\sum_{k=0}^n k^{n+1/2}e^{-k} = \gamma \left(n+\frac{3}{2},n\right) + O(n^{n+1/2}e^{-n}).$$ -In a two-page analysis earlier in the section, Knuth shows that for large values of $x$ and fixed $y$, $$\frac{\gamma (x+1,x+y)}{\Gamma(x+1)} = \frac{1}{2} + O\left(\frac{y}{\sqrt{x}}\right).$$ -Putting this all together yields -$$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} = \frac{\sqrt{2\pi}}{\Gamma(n+1)} \left[\Gamma\left(n+\frac{3}{2}\right)\left(\frac{1}{2} + O\left(\frac{1}{\sqrt{n}}\right)\right) + O\left(\Gamma\left(n+\frac{1}{2}\right)\right)\right].$$ -Since, for fixed $a$ and $b$ (see, for instance, here), $$\frac{\Gamma(n+a)}{\Gamma(n+b)} = n^{a-b}\left(1 + O(n^{-1})\right),$$ -we finally get -$$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} =\sqrt{ \frac{\pi n}{2}} + O(1).$$ -Knuth says, "Our derivations... use only simple techniques of elementary calculus." So perhaps this is what is meant by "elementary asymptotic methods" in Concrete Mathematics, rather than "this derivation is easy." :) - -Knuth also describes more advanced techniques whereby you can get the more precise estimate -$$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} = \sqrt{ \frac{\pi n}{2}} - \frac{2}{3} + \frac{11}{24} \sqrt{\frac{\pi}{2n}} + \frac{4}{135n} - \frac{71}{1152} \sqrt{\frac{\pi}{2n^3}} + O(n^{-2}).$$<|endoftext|> -TITLE: Asymmetric Hessian matrix -QUESTION [52 upvotes]: Are there any functions, $f:U\subset \mathbb{R}^n \to \mathbb{R}$, with Hessian matrix which is asymmetric on a large set (say with positive measure)? -I'm familiar with examples of functions with mixed partials not equal at a point, and I also know that if $f$ is lucky enough to have a weak second derivative $D^2f$, then $D^2 f$ is symmetric almost everywhere. - -REPLY [24 votes]: I can give a proof of the following statement. - -Let $U\subseteq \mathbb{R}^2$ be open, and $f\colon U\to\mathbb{R}$ be such that $f_{xx}$, $f_{xy}$, $f_{yx}$ and $f_{yy}$ are well defined on some Lebesgue measurable $A\subseteq U$. Then, $f_{xy}=f_{yx}$ almost everywhere on $A$. - -[Note: This is after seeing Grigory's answer.
The statement here is a bit stronger than statement (1) due to Tolstov in his answer. I haven't, as yet, been able to see the translation of that paper, so I'm not sure if his argument actually gives the same thing.] -In fact, we can show that -$$ -f_{xy}=f_{yx}=\lim_{h\to0}\frac{1}{h^2}\left(f(x+h,y+h)+f(x,y)-f(x+h,y)-f(x,y+h)\right)\ \ {\rm(1)} -$$ -almost everywhere on $A$, where the limit is understood in the sense of local convergence in measure (functions $g_{(h)}$ tend to a limit $g$ locally in measure if the measure of $\{x\in S\colon\vert g_{(h)}(x)-g(x)\vert > \epsilon\}$ tends to zero as $h\to0$, for each $\epsilon > 0$ and $S\subseteq A$ of finite measure). -First, there are some technical issues regarding measurability. However, as $f_x$ and $f_y$ are assumed to exist on $A$, $f$ is continuous along the intersection of $A$ with horizontal and vertical lines, which implies that its restriction to $A$ is Lebesgue measurable. Then, all the partial derivatives must also be measurable when restricted to $A$. -By Lusin's theorem, we can reduce to the case where all the partial derivatives are continuous when restricted to $A$. Also, without loss of generality, take $A$ to be bounded. -Fix an $\epsilon > 0$. Then, for any $\delta > 0$, let $A_\delta$ be the set of $(x,y)\in A$ such that - -$\left\vert f_{yy}(x+h,y)-f_{yy}(x,y)\right\vert\le\epsilon$ for all $\vert h\vert \le\delta$ with $(x+h,y)\in A$. -$\left\vert f_y(x+h,y)-f_y(x,y)-f_{yx}(x,y)h\right\vert\le\epsilon\vert h\vert$ for all $\vert h\vert\le\delta$ with $(x+h,y)\in A$. -$\left\vert f(x,y+h)-f(x,y)-f_y(x,y)h-\frac12f_{yy}(x,y)h^2\right\vert\le\epsilon h^2$ for all $\vert h\vert\le\delta$ with $(x,y+h)\in A$. - -This set is Lebesgue measurable, and the existence and continuity of the partial derivatives restricted to $A$ imply that $A_\delta$ increases to $A$ as $\delta$ decreases to zero. By monotone convergence, the measure of $A\setminus A_\delta$ decreases to zero. -Now, choose $h$ nonzero with $\vert h\vert\le\delta$. If $(x,y)$, $(x+h,y)$, $(x,y+h)$, $(x+h,y+h)$ are all in $A_\delta$ then, -$$f(x+h,y+h)-f(x+h,y)-f_y(x+h,y)h-\frac12f_{yy}(x+h,y)h^2$$ -$$-f(x,y+h)+f(x,y)+f_y(x,y)h+\frac12f_{yy}(x,y)h^2$$ -$$\frac12f_{yy}(x+h,y)h^2-\frac12f_{yy}(x,y)h^2$$ -$$f_y(x+h,y)h-f_y(x,y)h-f_{yx}(x,y)h^2$$ -are all bounded by $\epsilon h^2$. Adding them together gives -$$ -\left\vert f(x+h,y+h)+f(x,y)-f(x+h,y)-f(x,y+h)-f_{yx}(x,y)h^2\right\vert\le4\epsilon h^2.\ \ {\rm(2)} -$$ -Now, choose a sequence of nonzero real numbers $h_n\to0$. It is standard that, for any integrable $g\colon\mathbb{R}^2\to\mathbb{R}$, the functions $g(x+h_n,y)$, $g(x,y+h_n)$ and $g(x+h_n,y+h_n)$ all tend to $g(x,y)$ in $L^1$ (this is easy for continuous functions of compact support, and extends to all integrable functions as these are dense in $L^1$). Applying this where $g$ is the indicator of $A_\delta$ shows that the set of $(x,y)\in A_\delta$ for which one of $(x+h_n,y)$, $(x,y+h_n)$ or $(x+h_n,y+h_n)$ is not in $A_\delta$ has measure decreasing to zero. So, for $\vert h\vert$ chosen arbitrarily small, inequality (2) applies everywhere on $A_\delta$ outside of a set of arbitrarily small measure. Letting $\delta$ decrease to zero, (2) applies everywhere on $A$ outside of a set of arbitrarily small measure, for small $\vert h\vert$. As $\epsilon > 0$ is arbitrary, this is equivalent to the limit in (1) holding in measure and equalling $f_{yx}$ almost everywhere on $A$.
Finally, exchanging $x$ and $y$ in the above argument shows that the limit in (1) is also equal to $f_{xy}$.<|endoftext|> -TITLE: Geometric interpretation of $\frac {\partial^2} {\partial x \partial y} f(x,y)$ -QUESTION [26 upvotes]: Is there any geometric interpretation for the second partial derivative? i.e. -$$f_{xy} = \frac {\partial^2 f} {\partial x \partial y}$$ -In particular, I'm trying to understand the determinant from the second partial derivative test for determining whether a critical point is a minimum/maximum/saddle point: -$$D(a, b) = f_{xx}(a,b) f_{yy}(a,b) - f_{xy}(a,b)^2$$ -I have no trouble understanding $f_{xx}(x,y)$ and $f_{yy}(x,y)$ as measures of the concavity/convexity of $f$ in the directions of the $x$ and $y$ axes. But what does $f_{xy}(x,y)$ mean? - -REPLY [10 votes]: Think of the mixed partial as an abstract tendency of the surface to twist like DNA (but just because it's nonzero doesn't mean the surface actually twists, since other tendencies may overwhelm it), and the straight second derivatives $f_{xx}$ and $f_{yy}$ as an abstract tendency of the surface to bulge up or down in the $xz$- and $yz$- cross-sections. -To tease out the individual roles of the partials, let's assume for the moment that the mixed partials are identically zero, the better to isolate their effect later. If $f_{xx}$ and $f_{yy}$ are of opposite signs, then the surface has a saddle tendency (think of two curves, one in the $yz$-plane and one in the $xz$-plane, intersecting at right angles, one opening up and the other opening down: this will produce a negative intrinsic curvature or saddle shape like a hyperbolic paraboloid). Now think of the same parabolas but both opening say down, with $f_{xx}$ and $f_{yy}$ tending to have the same signs: that will tend to produce an intrinsic positive curvature (bulging) like an ellipsoid. I'm sure you knew this. -Now let's add in the mixed partials to show what their effect is. Think of the mixed partials as a pure twisting factor, also tending to produce a negative or saddle curvature, but rotated $45$ degrees! Yes, a twist is the same as a saddle tendency, but we think of it differently. Imagining your hand riding down along the wall of a saddle shape, like an airplane that dives while rotating, may help you see this. -Now: ever seen a diagram of a saddle showing a kind of "X" shape over it, diagonal lines coming out of the saddle-point, where the tendency to go up in say $y$ is canceled by the tendency to go down in $x$, and the thing just holds steady along the diagonal line, with $f_{xx}$ and $f_{yy}$ both zero here? Well, if you had a properly rotated saddle shape it might not go up or down in the $x$- or $y$- directions at all--and yet it would be negatively curved or twisting nonetheless as would be plainly visible from the other directions. It is THIS twist that the mixed partials measure (just as an $xy$-term rotates a conic, by the way). -If the curvature from the two ordinary second partials is negative, forget it--the mixed partials will make it even more negative. In other words, if $f_{xx}f_{yy}$ is negative because these differ in sign, the intrinsic curvature is already negative, and subtracting $f_{xy}^2$ will make it even more so. But if the curvature from the straight second partials is positive (bulging up or down), because $f_{xx}$ and $f_{yy}$ agree in sign, then possibly this positive tendency can still be overwhelmed by the independent negative-curvature twisting action of the mixed partials.
That is why $f_{xy}$ is squared and subtracted: its sign doesn't matter to the saddle-ness and it is an inherently negatively-curved factor. The contest between these two tendencies is the discriminant test you mention.<|endoftext|> -TITLE: Radius of convergence -QUESTION [5 upvotes]: I'm trying to understand how to handle power series that use floor or ceiling functions in their general term. -For example, the power series $\displaystyle{\sum_{k \geq 1} \left\lfloor \frac{2^k}{(k+1)^2}\right\rfloor}x^k$ is supposed to have a radius of convergence of $\frac{1}{2}$, but I fail to see how to handle this. -I see how the series converges for values less than $\frac{1}{2}$, but why does it diverge for larger values? - -REPLY [6 votes]: You can also write $$a_k := \left \lfloor \frac{2^k}{(k+1)^2} \right \rfloor $$ and then it means that $$ a_k \leq \frac{2^k}{(k+1)^2} \leq a_k + 1 $$ and then multiplying by $|x|^k$ we get -$$ a_k |x|^k \leq \frac{2^k}{(k+1)^2} |x|^k \leq (a_k + 1) |x|^k $$ -Now you can calculate the radius of convergence of the series $$\sum_{k = 1}^{\infty} \frac{2^k}{(k+1)^2} |x|^k $$ and it is equal to $1/2$. -And now you can conclude that the radius of convergence of the series $\sum a_k x^k$ is at least $1/2$ from the leftmost inequality. But using the rightmost inequality you can also see that the radius of convergence of the series $\sum a_k x^k$ cannot exceed $1/2$, because that would contradict the fact that the radius of convergence of the series $$\sum_{k = 1}^{\infty} \frac{2^k}{(k+1)^2} |x|^k $$ is $1/2$. -This is because if the series $\sum a_k x^k$ converges for some $x$ with $\frac12 < |x| < 1$ (and such $x$ suffice to bound the radius) then the series $\sum (a_k + 1) x^k$ also converges, because it is equal to the sum of the series $\sum a_k x^k + \sum x^k$ and $\sum x^k$ converges for $|x| < 1$; and then this would imply the convergence of the series $$\sum_{k = 1}^{\infty} \frac{2^k}{(k+1)^2} |x|^k $$ -also for this $x$.<|endoftext|> -TITLE: Is there a way to represent the interior of a circle with a curve? -QUESTION [14 upvotes]: As you already know, the interior of a circle is represented by an inequality. For example, -$$x^2+y^2\leq1$$ -for the unit circle. Today I was thinking by myself and I wondered if there is a curve that could represent every point inside of a circle. Maybe with a spiral. -If you can't represent it perfectly with a curve, what would be the closest way to represent it? -This question is asked merely out of curiosity; it may be completely irrelevant or meaningless :) - -REPLY [9 votes]: Since topologically a disc and a square are the same, most of what you might want to know about this falls under the heading of Space-filling curves. To summarize, the answer to your main question is that the disc $D^2$ is the image of the interval $[0,1]$ under a continuous map, but not a one-to-one (non-intersecting) continuous map. So it depends on exactly what you mean by curve.<|endoftext|> -TITLE: Applications of Morse theory -QUESTION [5 upvotes]: Background -The use of tools from algebraic topology to study simplicial complexes coming from point cloud data has been thoroughly discussed in the papers of Carlsson, Zomorodian, Ghrist, Edelsbrunner, Harer and more. Computing homology can be achieved in cubic time (in the number of simplices) and using persistent homology one can make an educated guess on which non-zero homology groups are noise and which represent n-dimensional holes in the point cloud. -What -In smooth theory one can use Morse theory to compute the homology of a closed manifold.
One can also give a description of the cells in a CW complex which is homotopy equivalent to the manifold, as well as computing the cohomology ring. I wonder what other "properties" of the manifold one can find using Morse theory. -Why -The idea is to use Forman's discrete Morse theory to calculate properties of simplicial complexes. Since it is computationally expensive to define a discrete Morse function, it has to be something other than homology. - -REPLY [8 votes]: Morse theory is a very rich topic. Already, having the cohomology ring is a huge deal. That tells you a lot more about the manifold than just knowing the Betti numbers, for instance. -You can also use Morse theory to get at equivariant cohomology (in the case of a manifold with a group action) and apparently Steenrod squares (I went to a talk I did not fully understand in which the speaker claimed this). -There are some nice survey papers by Bott, notably Morse theory, old and new. There is another I am looking for. If I find it, I will update. -There is also a survey by Martin Guest, arXiv:math/0104155v1. -Unfortunately, I can't speak to the computational aspects of any of these.<|endoftext|> -TITLE: Which functions satisfy $\forall n,x,y (x \equiv y \pmod n \implies f(x) \equiv f(y) \pmod n )$? -QUESTION [6 upvotes]: Which functions satisfy $\forall n,x,y (x \equiv y \pmod n \implies f(x) \equiv f(y) \pmod n )$? -I know polynomials do; are there others? - -REPLY [3 votes]: There are such functions which are not polynomials. We will construct an example of a function $g$ which grows faster than any polynomial, and such that for all integers $m \ge 1$, and all non-negative integers $x$ and $y$, if $x \equiv y \pmod{m}$, then $g(x)\equiv g(y)\pmod{m}$. It will follow immediately that if $f(x)=g(x^2)$, then for all integers $x$, $y$, if $x \equiv y \pmod{m}$ then $f(x) \equiv f(y)\pmod{m}$, and $f$ is not a polynomial. -Let $g(0)=1$. Suppose that we have defined $g(k)$ for all non-negative $k < n$. The congruences $z \equiv g(n-m) \pmod{m}$ for $1 \le m \le n$ form a consistent system: if $d$ divides both $m$ and $m'$, then $n-m \equiv n-m' \pmod{d}$, so by the induction hypothesis $g(n-m) \equiv g(n-m') \pmod{d}$. So by the Chinese Remainder Theorem we can choose a solution $g(n)$ of this system with $g(n) > 2^n$ (this part is to make sure that $g$ grows faster than any polynomial). -Suppose that $g$ has the desired congruence property for all $x$, $y$ with $0 \le x, y < n$. If $x < n$ and $x \equiv n \pmod{m}$, then $m$ divides $n-x$, so $m \le n$; by construction $g(n) \equiv g(n-m) \pmod{m}$, and since $n-m \equiv x \pmod{m}$ with $0 \le n-m < n$, the induction hypothesis gives $g(n-m) \equiv g(x) \pmod{m}$, hence $g(n) \equiv g(x) \pmod{m}$. So the property holds for all $x, y \le n$, completing the construction.
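-To see the construction in action, here is a small Python sketch of my own (the helper names crt_merge and extend are made up, and the naive residue search is only meant for small $n$):
-from math import gcd
-
-def crt_merge(r1, m1, r2, m2):
-    # Combine x = r1 (mod m1) and x = r2 (mod m2); assumes the
-    # system is consistent, which the induction above guarantees.
-    g = gcd(m1, m2)
-    assert (r2 - r1) % g == 0
-    l = m1 // g * m2
-    x = r1
-    while x % m2 != r2 % m2:  # step through the class of r1 mod m1
-        x += m1
-    return x % l, l
-
-def extend(g_vals):
-    # Given g(0), ..., g(n-1), return a valid g(n) > 2^n.
-    n = len(g_vals)
-    r, m = 0, 1
-    for d in range(1, n + 1):
-        r, m = crt_merge(r, m, g_vals[n - d], d)
-    while r <= 2 ** n:
-        r += m
-    return r
-
-g = [1]
-for _ in range(8):
-    g.append(extend(g))
-print(g)  # starts 1, 3, 5, 13, 21, ...
-ok = all((g[x] - g[y]) % m == 0
-         for m in range(1, 9) for x in range(9) for y in range(x + 1, 9)
-         if (x - y) % m == 0)
-print(ok)  # True: the congruence property holds on the computed range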
<|endoftext|> -TITLE: Is the set of all finite sequences of letters of Latin alphabet countable/uncountable? How to prove either? -QUESTION [10 upvotes]: Today in Coding/Cryptography class, we were talking about basic definitions, and the professor mentioned that for a set $A=\{ a, b, \dots, z \}$ (the alphabet) we can define a set $A^{*}=\{ a, ksdjf, blocks, coffee, maskdj, \dots, asdlkajsdksjfs \}$ (words) as a set that consists of all finite sequences of the elements/letters from our $A$/alphabet. My question is, is this set $A^{*}$ countably or uncountably infinite? Does it matter how many letters there are in your alphabet? If it was, say, $A=\{ a \}$, then the words in $A^{*}$ would be of the form $a, aa, aaa, \dots$ which, I think, would allow a bijection $\mathbb{N} \to A^{*}$ where an integer would signify the number of a's in a word. Can something analogous be done with an alphabet that consists of 26 letters (Latin alphabet), or can countability/uncountability be proved otherwise? And as mentioned before, I am wondering if the number of elements in the alphabet matters, or if all it does is change the formula for a bijection. -P.S. Now that I think of it, maybe we could biject from $\underset{n}{\underbrace{\mathbb{N}\times\mathbb{N}\times\mathbb{N}\times\dots\times\mathbb{N}}}$ to some set of words $A^{*}$ whose alphabet $A$ has $n$ elements? Thanks! - -REPLY [6 votes]: Proposition. If the alphabet $A$ is countable, the set $A^*$ of all finite strings in that alphabet is also countable. -Proof. $A, A^2 = A \times A, A^3 = A \times A \times A$, etc. are all countable and well-ordered (under the lexicographic ordering induced by a well-ordering of $A$), and $$A^* = \bigcup_{n=0}^{\infty} A^n$$ and the union of countably many well-ordered countable sets is again countable, so $A^*$ is countable. Note this does not require the axiom of (countable) choice. -However, the set $A^{\mathbb{N}}$ of all countably-infinite strings is countable if and only if $A$ is empty or is a 1-letter alphabet.
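-To make the countability concrete for a finite alphabet, here is a sketch of the length-then-lexicographic enumeration in Python (my own code):
-from itertools import count, islice, product
-
-def strings(alphabet):
-    # List all finite strings over `alphabet`, shortest first and
-    # lexicographically within each length, hitting each word once.
-    for n in count(0):
-        for letters in product(alphabet, repeat=n):
-            yield "".join(letters)
-
-print(list(islice(strings("ab"), 8)))
-# ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa']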
<|endoftext|> -TITLE: Inverse Limit of Sheaves -QUESTION [16 upvotes]: It is well-known that if you have an inverse system of abelian groups $(A_n)$ (this works in several other nice categories) in which all the maps are surjective (or at least satisfy the Mittag-Leffler Condition), and if you have a short exact sequence of inverse systems $0\to (A_n)\to (B_n)\to (C_n)\to 0$, then taking the limit is exact and you get another short exact sequence $0\to \lim A_n \to \lim B_n \to \lim C_n \to 0$. -Hartshorne warns that this is not the case with abelian sheaves on a space. In particular, you can have all the maps of $(\mathcal{F}_n)$ surjective, and a short exact sequence $0\to (\mathcal{F}_n)\to (\mathcal{G}_n)\to (\mathcal{J}_n)\to 0$, but you only get left exactness $0\to \lim \mathcal{F}_n\to \lim \mathcal{G}_n\to \lim \mathcal{J}_n$ -I.e. you get that $\lim^1(\mathcal{F}_n)\neq 0$ despite satisfying surjectivity of maps. Is there a canonical example of this happening? -My first guess was that this had to be related to the fact that you can have a surjective map of sheaves $\mathcal{F}\to \mathcal{G}$, yet still have an open set for which $\mathcal{F}(U)\to \mathcal{G}(U)$ is not surjective. The canonical example of when this happens is to use the exponential map on the sheaf of holomorphic functions on $\mathbb{C}^\times$, but it is very non-obvious to me how to turn this into an example of the above. - -REPLY [2 votes]: A natural example of this is the inverse system $\left \{ \mathbb Z/ \ell^n \mathbb Z \right \}_{n\ge 1}$ of $\ell$-adic sheaves in the étale topology. In this case $\lim^1 \left \{ \mathbb Z/ \ell^n \mathbb Z \right \}_{n\ge 1} \ne 0 $. -However, the same example does not work in the Zariski topology (by flasqueness of the sheaves involved) or in the pro-étale topology (by the existence of weakly contractible objects), i.e., the example fails for two different reasons: sheaves are not interesting enough (Zariski) and the topology is very good (pro-étale).<|endoftext|> -TITLE: Cohomology of complex projective plane -QUESTION [10 upvotes]: How can I compute the cohomology of the complex projective plane $\mathbb{CP}^2$? -Any magic like the one here? - -REPLY [5 votes]: Let $f: \mathbb CP^n \to \mathbb R$ be given by: -$$[z_0, \dots, z_n] \mapsto \frac{|z_1|^2 + 2|z_2|^2 + \dots + n|z_n|^2}{|z_0|^2 + |z_1|^2 + \dots + |z_n|^2}$$ -One can show that $f$ is a Morse function with Morse polynomial $P_f(t) = \sum_{k=0}^n t^{2k}$, which satisfies the gap condition, and therefore $f$ is a perfect Morse function. We conclude $H^k(\mathbb CP^n,\mathbb R) = \mathbb R$ for $k = 0,2,\dots,2n$ and $H^k(\mathbb CP^n,\mathbb R) = 0$ otherwise.<|endoftext|> -TITLE: CMO 2011 Sum of integers -QUESTION [6 upvotes]: The following problem was asked in the CMO 2011 and I'd be interested in finding various solutions for it. Here's the problem: - -Fix a positive integer $d$; then for any integer $k$ there exists a positive integer $n$ and integers $\epsilon_i$ where $\epsilon_i = \pm 1$ for $i=1 \ldots n$ such that: -$$k = \sum_{i=1}^n \epsilon_i (1+id)^2$$ - -REPLY [10 votes]: Let $U_k=(1+kd)^2$. Then $U_{k+3}-U_{k+2} -U_{k+1}+U_k=4d^2$, a constant. Changing signs, we obtain the sum $-4d^2$. -Thus if we have found an expression for a certain number $S_0$ as a sum of the desired type, we can obtain an expression of the desired type for $S_0+(4d^2)q$, for any integer $q$. -It remains to show that for any $S$, there exists an integer $S'$ such that $S' \equiv S \pmod{4d^2}$ and $S'$ can be expressed in the desired form. -Look at the sum -$(1+d)^2+(1+2d)^2 +\cdots +(1+Nd)^2$, -where $N$ is ``large.'' We can at will choose $N$ so that the sum is odd, or so that the sum is even. -By changing the sign in front of $(1+kd)^2$ to a minus sign, we decrease the sum by $2(1+kd)^2$. In particular, if $k \equiv 0 \pmod {2d}$, we decrease the sum by 2 (modulo $4d^2$). If $N$ is large enough, there are many $k< N$ such that $k$ is a multiple of $2d$. By switching the sign in front of $r$ of these, we change (``downward'') the congruence class modulo $4d^2$ by $2r$. By choosing $N$ so that the original sum is odd, and choosing suitable $r <2d^2$, we can obtain numbers congruent to all odd numbers modulo $4d^2$. By choosing $N$ so that the original sum is even, we can obtain numbers congruent to all even numbers modulo $4d^2$. This completes the proof. -There is not much of a complication if instead of $1+kd$ we use $a+kd$, where $a$ and $d$ are relatively prime.<|endoftext|> -TITLE: Convergence of functions in $L^p$ -QUESTION [8 upvotes]: Let $\{f_k\} \subset L^2(\Omega)$, where $\Omega \subset \mathbb{R}^n$ is a bounded domain and suppose that $f_k \to f$ in $L^2(\Omega)$. Now if $a \geq 1$ is some constant, is it possible to say that $|f_k|^a \to |f|^a$ in $L^p$ for some $p$ (depending on $a$ and also possibly depending on $n$)? -Showing the statement is true would probably require a smart way of bounding $\left| |f_k|^a - |f|^a \right|$ by a term including the factor $|f_k - f|^2$. However, I don't really know what to do with the fact that $a$ doesn't have to be an integer... - -REPLY [2 votes]: First, one cannot expect better results than $p=\frac 2 a$ because we only know $f\in L^2(\Omega)$. And I do think it is true for $p=\frac 2 a$. The proof is as follows. -Note that $x^a$ is a convex increasing function for $x\ge 0$, hence (draw a picture and you can see this) -$$0\le \frac{u^a-v^a}{u-v}\le a\max\{u^{a-1},v^{a-1}\}, \forall u, v\ge 0, u\neq v.$$ -Plugging in $u=|f_k|, v=|f|$ and noticing that $||f_k|-|f||\le |f_k-f|$, we have -$$||f_k|^a-|f|^a|\le a|f_k-f|\max\{|f_k|^{a-1},|f|^{a-1}\}.$$ -Then, raising the last inequality to the power $p=\frac 2 a$, by Hölder's inequality, -$$\||f_k|^a-|f|^a\|_{L^p}^p\le a^p \|f_k-f\|_{L^2}^{2/p}\|\max\{|f_k|^{a-1},|f|^{a-1}\}\|_{L^{\frac{2}{a-1}}}^{\frac a {a-1}}.$$ -(When we apply Hölder, $|f_k-f|^p\in L^a,$ and $\max\{|f_k|^{a-1},|f|^{a-1}\}^p\in L^r, r=a^*=\frac a {a-1}.$ You have to check the exponents to see if I made any mistake.)
-Since the $f_k$ have bounded $L^2$-norm (they converge), $$\|\max\{|f_k|^{a-1},|f|^{a-1}\}\|_{L^{\frac{2}{a-1}}}$$ -is bounded by some constant. Sending $k\to\infty$, we have $|f_k|^a\to|f|^a$ in $L^{2/a}$.<|endoftext|> -TITLE: How do I know the limit of this infinite sequence -QUESTION [5 upvotes]: I have $a_k=\frac1{(k+1)^\alpha}$ and $c_k=\frac1{(k+1)^\lambda}$, where $0<\alpha<1$ and $0<\lambda<1$, and we have an infinite sequence $x_k$ with the following evolution equation: -$$x_{k+1}=\left(1-a_{k+1}\right)x_{k}+a_{k+1}c_{k+1}^{2}$$ -I have proven that $x_k$ is bounded and obviously positive. How can I know its limit? - -REPLY [2 votes]: Let us show that $x_k\to0$. -For every positive $u$, there exists a finite integer $k(u)$ such that $c_{k+1}^2\le u$ for every $k\ge k(u)$, hence $x_{k+1}\le(1-a_{k+1})x_k+a_{k+1}u$. --- If there exists $k\ge k(u)$ such that $x_k\le u$, then $x_i\le u$ for every $i\ge k$, hence $\limsup x_i\le u$. --- Otherwise, $x_k\ge u$ for every $k\ge k(u)$ and furthermore $(x_{k+1}-u)\le(1-a_{k+1})(x_k-u)$. Hence $(x_k-u)\le b(k,k(u))(x_{k(u)}-u)$, with -$$b(k,k(u))=\prod_{i=k(u)+1}^k(1-a_i).$$ -Now, the series $\displaystyle\sum_ka_k$ diverges, hence $b(k,k(u))\to0$ when $k\to+\infty$, and $\limsup x_k-u\le0$. -In both cases, $\limsup x_i\le u$. This holds for every positive $u$, hence $x_k\to0$. -The proof uses only that $c_k\to0$, $a_k\in[0,1]$ and $\displaystyle\sum_ka_k$ diverges.<|endoftext|> -TITLE: Why does the Gram-Schmidt procedure divide by 0 on a linearly dependent list of vectors? -QUESTION [5 upvotes]: Let $v_1, \dots, v_m$ be a linearly dependent list of vectors. -If $v_1 \ne 0$, then there is some $v_j$ in the span of $v_1, \dots, v_{j-1}$. -If we let $j$ be the smallest integer with this property, and apply the Gram-Schmidt procedure to produce an orthonormal list $(e_1, \dots, e_{j-1})$, then $v_j$ is in the span of $(e_1, \dots, e_{j-1})$ and -$$v_j = \langle v_j, e_1\rangle e_1+ \dots + \langle v_j, e_{j-1}\rangle e_{j-1}$$ -Why does this guarantee that the length of $v_j$ is $0$? I'm missing something about linear dependence that should probably be obvious, sorry :\ - -REPLY [5 votes]: The Gram-Schmidt process goes as follows: given $v_1,\ldots,v_n$, you define -$$\begin{align*} -u_1 &= v_1\\ -e_1 &= \frac{1}{||u_1||}u_1;\\ -u_2 &= v_2 - \langle v_2,e_1\rangle e_1;\\ -e_2 &= \frac{1}{||u_2||}u_2;\\ -&\vdots\\ -u_{k+1} &= v_{k+1} - \left(\langle v_{k+1},e_1\rangle e_1 + \langle v_{k+1},e_2\rangle e_2 + \cdots + \langle v_{k+1},e_k\rangle e_k\right)\\ -e_{k+1}&= \frac{1}{||u_{k+1}||}u_{k+1}\\ -&\vdots -\end{align*}$$ -So when you construct $u_{j}$, you get $\mathbf{0}$, and when you try to construct $e_{j}$ you attempt to divide by $0$. It's not $v_j$ which has length $0$, it's $u_j$. - -REPLY [2 votes]: It does not guarantee that. What it guarantees is that $v_j - \langle v_j, e_1\rangle e_1- \dots - \langle v_j, e_{j-1}\rangle e_{j-1} = 0$, which means that you cannot define $e_j$ by dividing this vector by its norm. Instead, you throw it away and move on to $v_{j+1}$. -E.g., consider the case $v_1\neq 0$ and $v_2=v_1$.
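-A quick numerical illustration of exactly this breakdown (a Python sketch of my own; the function name is made up):
-import numpy as np
-
-def gram_schmidt(vs, tol=1e-12):
-    # Classical Gram-Schmidt; fails exactly when some u_j has norm
-    # (numerically) zero, i.e. v_j lies in the span of v_1..v_{j-1}.
-    es = []
-    for j, v in enumerate(vs, 1):
-        u = v - sum(np.dot(v, e) * e for e in es)
-        norm = np.linalg.norm(u)
-        if norm < tol:
-            raise ZeroDivisionError(f"u_{j} = 0: v_{j} depends on earlier vectors")
-        es.append(u / norm)
-    return es
-
-v1 = np.array([1.0, 0.0])
-gram_schmidt([v1, v1.copy()])  # raises at j = 2, since u_2 = 0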
<|endoftext|> -TITLE: Summing Matrix Series -QUESTION [7 upvotes]: I need to sum the series -$$I + A + A^2 + \ldots$$ -for the matrix -$$A = \left(\begin{array}{rr} 0 & \epsilon \\ -\epsilon & 0 \end{array}\right)$$ -and $\epsilon$ small. The goal is to invert the matrix $I - A$. The text says to use a geometric series but I had a hard time finding it. I'm studying on my own so I can't ask my teacher. The way I did it follows. I know it isn't quite rigorous (I assume the series in question converge) so I'd like to see how I'm supposed to do it. -We see that -$$\left(\begin{array}{rr} 0 & \epsilon \\ -\epsilon & 0 \end{array}\right) \left(\begin{array}{rr} a_{00} & a_{01} \\ a_{10} & a_{11} \end{array}\right) = \left(\begin{array}{rr} \epsilon a_{10} & \epsilon a_{11} \\ -\epsilon a_{00} & -\epsilon a_{01} \end{array}\right)$$ -so if we let $a(i, j, k)$ be entry $a_{ij}$ in the $k$'th power of $A$ then we see that -$$a(0, 0, k) = \epsilon a(1, 0, k-1)$$ -$$a(1, 0, k) = -\epsilon a(0, 0, k-1)$$ -From these recurrences, $a(0,0,k)$ vanishes for odd $k$ and $a(1,0,k)$ vanishes for even $k$. Then, letting $\alpha$'s denote the entries in the sum without $I$ added in, we see that -$$\begin{eqnarray*} -\alpha_{00} &=& \sum_{k=1}^{\infty}a(0, 0, k) \\ -&=& \sum_{k=0}^{\infty}a(0, 0, 2k + 2) \\ -&=& \epsilon\sum_{k=0}^{\infty}a(1, 0, 2k+1) \\ -&=& \epsilon\alpha_{10} -\end{eqnarray*}$$ -and -$$\begin{eqnarray*} -\alpha_{10} &=& \sum_{k=1}^{\infty}a(1,0,k) \\ -&=& \sum_{k=0}^{\infty}a(1,0,2k+1) \\ -&=& -\epsilon + \sum_{k=1}^{\infty}a(1,0,2k+1) \\ -&=& -\epsilon - \epsilon\sum_{k=1}^{\infty}a(0,0,2k) \\ -&=& -\epsilon\left(1 + \sum_{k=1}^{\infty}a(0,0,2k) \right) \\ -&=& -\epsilon\left(1 + \alpha_{00}\right) -\end{eqnarray*}$$ -so $\alpha_{00} = \epsilon\alpha_{10}$ and $\alpha_{10} = -\epsilon(1 + \alpha_{00})$, which we can solve for the $\alpha$'s. -It's pretty much the same for the other two. I feel like there has got to be a better way to do this. - -REPLY [5 votes]: If you don't want to use the geometric series directly on the matrix space, you can also think of this as the complex number $-i\epsilon$ under the correspondence $$a+bi\to\left(\begin{array}{cc}a&-b\\b&a\\\end{array}\right)$$ and use the geometric series (for complex numbers of norm less than 1) $$\sum A^n=\sum (-i\epsilon)^n=\frac{1}{1+i\epsilon}=\frac{1-i\epsilon}{1+\epsilon^2}= -\frac{1}{1+\epsilon^2}\left(\begin{array}{cc}1&\epsilon\\-\epsilon&1\\\end{array}\right)$$
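-A numerical sanity check of this closed form (numpy; my own snippet):
-import numpy as np
-
-eps = 0.1
-A = np.array([[0.0, eps], [-eps, 0.0]])
-partial = sum(np.linalg.matrix_power(A, k) for k in range(60))
-closed = np.array([[1.0, eps], [-eps, 1.0]]) / (1 + eps**2)
-print(np.allclose(partial, np.linalg.inv(np.eye(2) - A)))  # True
-print(np.allclose(partial, closed))                        # True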
<|endoftext|> -TITLE: Largest eigenvalue of a real symmetric matrix -QUESTION [11 upvotes]: If $\lambda$ is the largest eigenvalue of a real symmetric $n \times n$ matrix $H$, how can I show that: $$\forall v \in \mathbb{R^n}, ||v||=1 \implies v^tHv\leq \lambda$$ -Thank you. - -REPLY [25 votes]: Step 1: All real symmetric matrices can be diagonalized in the form -$$H = Q\Lambda Q^T.$$ -So $ {\bf v}^TH{\bf v} = {\bf v}^TQ\Lambda Q^T{\bf v} $. -Step 2: Define the transformed vector $ {\bf y} = Q^T{\bf v} $. -So $ {\bf v}^TH{\bf v} = {\bf y}^T\Lambda {\bf y} $. -Step 3: Expand (with the eigenvalues ordered so that $\lambda_{\max}$ comes first): -$$ {\bf y}^T\Lambda {\bf y} = \lambda_{\max}y_1^2 + \lambda_{2}y_2^2 + \cdots + \lambda_{\min}y_N^2 $$ -\begin{eqnarray} -\lambda_{\max}y_1^2 + \lambda_{2}y_2^2 + \cdots + \lambda_{\min}y_N^2& \le & \lambda_{\max}y_1^2 + \lambda_{\max}y_2^2 + \cdots + \lambda_{\max}y_N^2 \\ - & & =\lambda_{\max}(y_1^2 +y_2^2 + \cdots y_N^2) \\ - & & =\lambda_{\max} {\bf y}^T{\bf y} \\ -\implies {\bf y}^T\Lambda {\bf y} & \le & \lambda_{\max} {\bf y}^T{\bf y} -\end{eqnarray} -Step 4: Since $Q^{-1} = Q^T$, we have $QQ^T = I$, so -\begin{eqnarray} -{\bf y}^T{\bf y} &= &{\bf v}^TQQ^T{\bf v} = {\bf v}^T{\bf v} -\end{eqnarray} -Step 5: Putting it all back together: -\begin{eqnarray} -{\bf y}^T\Lambda {\bf y} & \le & \lambda_{\max} {\bf y}^T{\bf y} \\ -{\bf v}^TH{\bf v} & \le & \lambda_{\max}{\bf v}^T{\bf v} -\end{eqnarray} -By definition, $ {\bf v}^T{\bf v} = \|{\bf v}\|^2 $, and by assumption $\|{\bf v}\| = 1$, so -\begin{eqnarray} -{\bf v}^TH{\bf v} & \le & \lambda_{\max} -\end{eqnarray} -Boom!<|endoftext|> -TITLE: Euler-Lagrange, Gradient Descent, Heat Equation and Image Denoising -QUESTION [17 upvotes]: For an image denoising problem, the author has a functional $E$ defined -$$E(u) = \iint_\Omega F \;\mathrm d\Omega$$ -which he wants to minimize. $F$ is defined as -$$F = \|\nabla u \|^2 = u_x^2 + u_y^2$$ -Then, the E-L equations are derived: -$$\frac{\partial E}{\partial u} = \frac{\partial F}{\partial u} - -\frac{\mathrm d}{\mathrm dx} \frac{\partial F}{\partial u_x} - -\frac{\mathrm d}{\mathrm dy} \frac{\partial F}{\partial u_y} = 0$$ -Then it is mentioned that the gradient descent method is used to minimize the functional $E$ by using -$$\frac{\partial u}{\partial t} = u_{xx} + u_{yy}$$ -which is the heat equation. I understand both equations, and have solved the heat equation numerically before. I have also worked with functionals. I do not understand however how the author jumps from the E-L equations to the gradient descent method. How is the time variable $t$ included? Any detailed derivation or proof of this relation would be welcome. I found some papers on the Net; the one by Colding et al. looked promising. -References: -http://arxiv.org/pdf/1102.1411 (Colding et al.) -http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.1675&rep=rep1&type=pdf -http://dl.dropbox.com/u/1570604/tmp/functional-grad-descent.pdf -http://dl.dropbox.com/u/1570604/tmp/gelfand_var_time.ps (Gelfand and Romin) - -REPLY [8 votes]: You should note that a solution, $f$, to your differential equation, $\mathcal{L}[f] = 0$, is the steady state solution to the second equation, as $\partial_t f = 0$. By turning this into a parabolic equation, only the error term will depend on $t$, and it will decay with time. This can be seen by letting -$$h(x,y,t) = f(x,y) + \triangle f(x,y,t),$$ -where $f$ is as before and $\triangle f$ denotes the error term. Then (using that $\mathcal{L}$ is linear and $\mathcal{L}[f]=0$) -$$\mathcal{L}[h] = \mathcal{L}[\triangle f] = \partial_t \triangle f$$ -In general, this method makes the equations amenable to minimization routines like steepest descent. -Edit: Since you mentioned that you wanted a book to reference: when I was taking numerical analysis, we used v. 3 of Numerical Mathematics and Computing by Cheney and Kincaid, and I found it very useful. Although it lacked depth at points, it provided a good jumping-off point. They also have a more mathematically in-depth book, Numerical Analysis: Mathematics of Scientific Computing, that may be useful to you, which I have not read.
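-To make the connection concrete, here is a minimal numerical sketch of my own (not from any of the references; the names and parameters are my choices): each explicit time step of the heat equation is one gradient-descent step on $E(u)=\iint \|\nabla u\|^2$, and iterating it smooths a noisy image.
-import numpy as np
-
-def denoise(u0, dt=0.2, steps=50):
-    # Gradient descent on E(u) via the heat equation u_t = u_xx + u_yy,
-    # discretized with an explicit 5-point Laplacian (edge padding gives
-    # Neumann boundaries); dt <= 0.25 keeps the explicit scheme stable.
-    u = u0.astype(float).copy()
-    for _ in range(steps):
-        p = np.pad(u, 1, mode="edge")
-        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
-               p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
-        u += dt * lap
-    return u
-
-rng = np.random.default_rng(0)
-img = np.zeros((64, 64)); img[:, 32:] = 1.0   # a step edge
-noisy = img + 0.3 * rng.standard_normal(img.shape)
-print(np.abs(denoise(noisy) - img).mean())
-# should be noticeably smaller than np.abs(noisy - img).mean()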
<|endoftext|> -TITLE: Why are translation invariant operators on $L^2$ multiplier operators -QUESTION [6 upvotes]: For $m \in L^\infty$, we can define the multiplier operator $T_m \in L(L^2,L^2)$ implicitly by -$\mathcal F (T_m f)(\xi) = m(\xi) \cdot (\mathcal F f)(\xi)$ -where $\mathcal F$ is the Fourier transform. It is obvious from the definition that $T_m$ commutes with translations. -How can you show the converse, i.e. every translation invariant $T \in L(L^2,L^2)$ is induced by a multiplier $m_T$? I have no idea how this might work. - -REPLY [9 votes]: Here's a sketch proof, which I think gets to the core reason why this is true. Remember that the Fourier transform takes translations to multiplication by characters. So if $T$ commutes with translation, $\hat T = \mathcal{F} T \mathcal{F}^{-1}$ will commute with multiplication by characters. So $\hat T$ commutes with all operators which are in the closure of the linear span of the operators given by multiplication by characters. That is, $\hat T$ commutes with multiplication by any continuous function. It's not so hard to then show that $\hat T$ must itself be multiplication by some $m\in L^\infty$; under your definition, this means precisely that $T = T_m$.<|endoftext|> -TITLE: Is a completion of an algebraically closed field with respect to a norm also algebraically closed? -QUESTION [10 upvotes]: Assume we have an algebraically closed field $F$ with a norm (where $F$ is considered as a vector space over itself), so that $F$ is not complete as a normed space. Let $\overline F$ be its completion with respect to the norm. -Is $\overline F$ necessarily algebraically closed? -Thanks. - -REPLY [7 votes]: The answer is yes, and the issue is discussed in detail in $\S 3.5-3.6$ of these notes from a recent graduate number theory course. -It is very much as Akhil suggests: the key idea is Krasner's Lemma (introduced and explained in my notes). However, as Krasner's Lemma pertains to separable extensions, there is a little further work that needs to be done in positive characteristic: how do we know that $\overline{F}$ is algebraically closed rather than just separably closed? -The answer is given by the following fact (Proposition 27 on p. 15 of my notes): - -A field which is separably closed and complete with respect to a nontrivial valuation is algebraically closed. - -The idea of the proof is to approximate a purely inseparable extension by a sequence of (necessarily separable) Artin-Schreier extensions. I should probably also mention that I found this argument in some lecture notes of Brian Conrad (and I haven't yet found it in the standard texts on the subject). -Note that Corollary 28 (i.e., the very next result) is the answer to your question.<|endoftext|> -TITLE: In what way is the Peano curve not one-to-one with $[0,1]^2$? -QUESTION [22 upvotes]: In discussion about the question Is there a way to represent the interior of a circle with a curve?, it was mentioned that such a curve cannot be one-to-one (because $[0,1]$ is not homeomorphic to $[0,1]^2$). I'm curious about in what way the Peano curve is not one-to-one. -The construction of the Peano curve is a recursive refinement of a particular path that discretely looks one-to-one, in that it touches every coordinate point at a given scale in a bijection.
In the limit there's no bijection, but at every step there is a bijection between the curve so far and the coordinates of points within $[0,1]^2$ truncated to so many binary digits. -In a surjection that is not an injection, there must be some overlap (some $x,y$ where $x\neq y$ but $f(x)=f(y)$). What I'm getting at is... where is the overlap? I'm guessing it's not just at one point - is it at all points? How much overlap? What is the nature of the overlap (for a given point on $[0,1]^2$, which points in $[0,1]$ map to it)? -(for discussion's sake, use the definition of the Hilbert-Peano curve) -Edit: A small bit of clarification: given a point $(j,k)$, is there overlap, and if so, how much (what is the cardinality of the inverse image at that point)? How about just for a particular point like $(1/2, 1/2)$? - -REPLY [16 votes]: The points of overlap are precisely those where at least one coordinate is a dyadic rational $n/2^N$ for some integers $n,N$, except some points on the boundary of the square $[0,1]^2$. If you focus on the middle point $(1/2,1/2)$ of the square in the animation on the Wikipedia page, you can observe that it is approached from three different directions. (Actually, $H(1/6)=H(1/2)=H(5/6)=(1/2,1/2)$, where $H:[0,1]\to[0,1]^2$ is the Hilbert curve.) The same goes for the points $(1/4,1/4), (1/4,3/4), (3/4,1/4), (3/4, 3/4),$ and so on. -It's not self-intersecting at all points: the point $(0,0)$ for example is only hit once. -Added after comments: -Here's how to see exactly which points are the points of self-intersection. -Subdivide $[0,1]^2$ into a grid of $2^n\times2^n$ squares. The $n$th iteration of the discrete Hilbert curve passes through each of these squares once. Number the squares from $1$ to $(2^n)^2$ in the order we pass through them, and let $H_n(k/(2^n)^2)$ be the center of the $k$th square, and extend $H_n$ to the whole interval $[0,1]$ by making it piecewise linear between the points $k/(2^n)^2$. The Hilbert curve $H:[0,1]\to[0,1]^2$ is defined by $H(x)=\lim_{n\to\infty}H_n(x)$. -Now suppose that $H(x)=H(y)$ where $x\neq y$, so that the point $H(x)\in[0,1]^2$ is a point where the curve self-intersects. Subdivide $[0,1]^2$ into a grid of $2^n\times 2^n$ squares, where $n$ is large enough so that the interval $(x-\epsilon,x+\epsilon)$ maps into one square, while $(y-\epsilon,y+\epsilon)$ maps into another square, for some small $\epsilon>0$. (This should always be possible by the construction of the Hilbert curve.) Since $H(x)$ and $H(y)$ belong to different squares, but they are equal, they must meet along the border of the squares, and this can only happen if one coordinate is a dyadic rational. -In the other direction, fix any point $(x,y)$ where one coordinate is a dyadic rational. This is a point on the border between two different squares in some $2^n\times 2^n$-subdivision of $[0,1]^2$. By the construction of the Hilbert curve, there exist a closed interval whose image is the first square, and another closed interval whose image is the second square. If $(x,y)$ is an interior point, by the way the Hilbert curve snakes around, we can always make sure those two intervals are disjoint. Since both intervals map surjectively onto the border between our two squares, there exist a point $s$ in the first interval and a point $t$ in the second interval with $H(s)=H(t)=(x,y)$. - -REPLY [4 votes]: You can see some specific places where the curve is not 1-1 as follows.
The various stages of the construction are controlled by a grid, and as the curve gets filled in, it approaches the edges (and corners) of the grid from each side. So in the limit, points in the square which lie on the edge of the grid at some stage of the construction will be in the image of at least two points of the Peano arc. The interior corners will be in the image of at least 4 points. So this at least shows it is not 1-1.<|endoftext|> -TITLE: Entire one-to-one functions are linear -QUESTION [65 upvotes]: Can we prove that every entire one-to-one function is linear? - -REPLY [8 votes]: This proof uses only the Open Mapping Theorem and Cauchy's Estimates. -Suppose $f$ is entire and injective. Consider the injective entire function $$\widetilde{f} \colon z \mapsto f(z) - f(0).$$ We see that $\widetilde{f}(0) = 0$. Let $r > 0$, and consider the ball $B(0,r)$ of radius $r$ centered at the origin. By the Open Mapping Theorem, $\widetilde{f}(B(0,r))$ is an open set containing the origin. By openness, there is some $\epsilon > 0$ such that $B(0,\epsilon) \subset \widetilde{f}(B(0,r))$. Since $\widetilde{f}$ is injective, no point of $B(0,r)^c$ may be sent to any point of $B(0,\epsilon)$. In other words, $$|z| \geq r \Rightarrow |\widetilde{f}(z)| \geq \epsilon.$$ -Now $\widetilde{f}$ has a zero at the origin, so let's say this zero has order $k \geq 1$. This means there is some entire function $g \colon \Bbb C \rightarrow \Bbb C$ such that $g(0) \neq 0$ and $$\widetilde{f}(z) = z^k g(z) \ \forall \ z \in \Bbb C.$$ -Since $\widetilde{f}$ is injective, $\widetilde{f}$ has no other zeroes apart from its zero at the origin. So $g$ has no zeroes. This means the function $F \colon \Bbb C \rightarrow \Bbb C$ defined by $$F(z) = \frac{1}{g(z)} \ \forall \ z \in \Bbb C$$ is entire and also has no zeroes. Write the Taylor expansion of $F$ at the origin as $$F(z) = \sum_{n=0}^\infty \frac{F^{(n)}(0)}{n!}z^n.$$ -Given any $R > r$, we can apply Cauchy's Estimates on the circle of radius $R$ to obtain bounds on the derivatives of $F$ at the origin. Indeed, -$$|F^{(n)}(0)| \leq \max_{|z| = R}|F(z)|\frac{n! }{R^n}.$$ But we can apply the fact that $|\widetilde{f}(z)| \geq \epsilon \ \forall \ |z| \geq r$ to notice that $$\max_{|z| = R} |F(z)| = \max_{|z| = R} \frac{1}{|g(z)|} = \max_{|z| = R} \frac{|z|^k}{|\widetilde{f}(z)|} \leq \frac{R^k}{\epsilon}.$$ So if $n > k$, we have $$|F^{(n)}(0)| \leq \max_{|z| = R}|F(z)|\frac{n!}{R^n} \leq \frac{R^k n!}{\epsilon R^n} = \frac{n!}{\epsilon R^{n-k}} \xrightarrow{R \to \infty} 0.$$ -So we conclude from the Taylor expansion of $F$ that $$F(z) = \sum_{n=0}^k \frac{F^{(n)}(0)}{n!} z^n$$ is a polynomial of degree at most $k$. But as previously noted, $F$ has no zeroes. So $F \equiv c$ for some $c \in\Bbb C \backslash\{ 0\}$! This means $g \equiv c^{-1}$, and furthermore, $$\widetilde{f}(z) = c^{-1}z^k \ \forall \ z \in \Bbb C.$$ -Supposing $k \geq 2$, the polynomial $z^k - 1 \in \Bbb C[z]$ has at least two roots $\xi_1, \xi_2 \in \Bbb C$, which are certainly non-zero on account of the fact that $0^k \neq 1$. The multiplicity of the root $\xi_1$ is precisely $1$ because $$\frac{d}{dz}\Big\vert_{z=\xi_1} z^k - 1 = k \xi_1^{k-1} \neq 0.$$ This allows us to conclude that $\xi_1 \neq \xi_2$, and that $$\widetilde{f}(\xi_1) = c^{-1} \xi_1^k = c^{-1} \cdot 1 = c^{-1} \xi_2^k = \widetilde{f}(\xi_2).$$ This result contradicts the fact that $\widetilde{f}$ is injective.
So $k=1$, and our final conclusion is that $$f(z) = \widetilde{f}(z) + f(0) = c^{-1} z + f(0) \ \forall \ z \in \Bbb C,$$ where for completeness's sake I recall that $c$ was necessarily non-zero.<|endoftext|> -TITLE: Different norms on a product space $X \times Y$ -QUESTION [5 upvotes]: It is well known how to define the standard product topology on a product space $\prod_{i \in I} X_i$. -Assume now that $(X,\lVert \, \cdot \, \rVert_{X})$ and $(Y,\lVert \, \cdot \, \rVert_{Y})$ are normed spaces and that the space $X \times Y$ is also equipped with a norm $\lVert \, \cdot \, \rVert_{X \times Y}$. -Is it true that all norms on $X \times Y$ are equivalent? -It is quite easy to prove this if $\lVert \, \cdot \, \rVert_{X \times Y}$ is one of the $p$-norms, i.e. $\lVert (x,y) \rVert_p = (\lVert x \rVert_X^p + \lVert y \rVert_Y^p)^{1/p}$. All such norms are equivalent. We only need to know that all norms on a finite dimensional space are equivalent (in this case we use it for $\mathbb{R}^2$). -How about the general case? - -REPLY [4 votes]: It is not true in general. When the vector space is finite-dimensional, all norms are equivalent, but when the space has infinite dimension (for example the space of continuous functions on the reals), norms don't have to be equivalent.<|endoftext|> -TITLE: picking a witness requires the Axiom of Choice? -QUESTION [6 upvotes]: $\forall I, A:Set. I\subseteq \bigcup A\to \exists f:I\to A. \forall i\in I. i\in f(i)$ -Does this theorem require the Axiom of Choice? To prove it, I need to find, for each $i\in I$, a witness of the fact that $i\in \bigcup A$. Any witness $B$ is such that $i\in B\land B\in A$. There may be more than one witness, but I must return one; a choice function will pick it. I define a function $w_S:I\to P(A), w_S(i) := \{B\in A|i\in B\}$; $w_S$ returns a set of witnesses. $im(w_S)$ satisfies the hypothesis of the Axiom of Choice, so I construct a choice function $c:im(w_S)\to A$ and return $c\circ w_S$. -The "relation→function" form of the Axiom of Choice from Equivalent statements of the Axiom of Choice is also suitable for the proof. -AFAIK if the Axiom of Choice is used in a proof, then the proof explicitly says that. But I came across a proof of something similar to my theorem which does not mention the Axiom of Choice, so I am unsure. -(Update 2011-03-30. My theorem above is required for this theorem: $\forall A,B:Set. finite(A) \land A\subseteq \bigcup B \to \exists B'\subseteq B. finite(B')\land A\subseteq\bigcup B'$ which intuitively is "every finite set is compact". It is implicitly used here, theorem 3.2.2.) -(Update 2011-03-31. It turned out that my theorem is not required for the theorem in "Update 2011-03-30". That is a good example to learn in what cases the Axiom of Choice is required. -I rewrote Apostolos's proof: -Proof. Assume that $X$ is a set of inhabited sets. Define $i(b) := \{B\in X | b\in B\}\cup\{\{b\}\}$, $dom(i)=\bigcup X$. Assume that $i(b_0)=i(b_1)$; then $\{b_0\}\in i(b_1)$, so $(\{b_0\}\in X \land b_1\in\{b_0\}) \lor b_0=b_1$, and the left disjunct also implies $b_0=b_1$. Then $i$ is an injection, so $i:\bigcup X\to im(i)$ is a bijection. Assume that $B\in X$; then $B$ is inhabited, so take $b\in B$; by the definition of $i$, $B\in i(b)$, and obviously $i(b)\in im(i)$. Then $X\subseteq \bigcup im(i)$. By my theorem, take $w:X\to im(i)$. By $i$ being a bijection, define $c:X\to\bigcup X$, $c := i^{-1}\circ w$. Assume that $B\in X$; then $w(B) = i(c(B))$, and by the definition of $w$, $B\in w(B)$, so by the definitions of $i$ and $c$, $c(B)\in B$.
Then $c$ is a choice function. Qed.) - -REPLY [3 votes]: Yes, this requires the axiom of choice. Suppose that $\mathcal{C}$ is any family of nonempty sets. Take the family -$$ -\mathcal{D} = \{ \{X, \langle X, a\rangle\} : X \in \mathcal{C}, a \in X \} -$$ -where $\langle \cdot,\cdot\rangle$ denotes an ordered pair. Note that $\langle X,a\rangle \not = X$ for all sets $X,a$ if we use the usual definition of an ordered pair in set theory. -Now $\mathcal{C} \subseteq \bigcup \mathcal{D}$, because each set in $\mathcal{C}$ is nonempty. Assuming your principle holds, let $f \colon \mathcal{C} \to \mathcal{D}$ be such that $X \in f(X)$ for each $X \in \mathcal{C}$. Then we can define a choice function $g$ on $\mathcal{C}$ with the rule: $g(X)$ is the unique set $a$ such that $\langle X,a\rangle \in f(X)$. This definition does not require the axiom of choice because $g(X)$ can be defined explicitly.<|endoftext|> -TITLE: Closed form for the sequence defined by $a_0=1$ and $a_{n+1} = a_n + a_n^{-1}$ -QUESTION [30 upvotes]: Today, we had a math class, where we had to show, that $a_{100} > 14$ for -$$a_0 = 1;\qquad a_{n+1} = a_n + a_n^{-1}$$ -Apart from this task, I asked myself: Is there a closed form for this sequence? Since I didn't find an answer by myself, can somebody tell me, whether such a closed form exists, and if yes what it is? - -REPLY [2 votes]: Let $b_n=a_n^{-1}$, so that $b_{n+1}=(b_n+b_n^{-1})^{-1}$ is the $n$th iterate of $$f(x)=(x+x^{-1})^{-1}=x-x^3+x^5-x^7+\cdots$$ -The correct asymptotics have been given elsewhere, but this answer is to point out that there is a general method applicable to any function $f$ analytic at 0 with a series expansion $x+a_kx^k+a_{k+1}x^{k+1}+\cdots$ with $k>1$ and $a_k<0$: I learned about it in Chapter 8 of de Bruijn's Asymptotic Methods in Analysis, where it is shown that if the starting value is small enough, the $n$th iterate of $f$ is asymptotically equivalent to $((1-k)a_kn)^{-1/(k-1)}$. In our case, $k=3$ and $a_k=-1$ so $b_n \sim (2n)^{-1/2}$ and $a_n \sim (2n)^{1/2}$. -A sketch of the proof in the case at hand is as follows. First, show by induction that if $u_n$ is any sequence tending to zero and such that -$$u_{n+1}=u_n-u_n^2+O(u_n^3)$$ -then $u_n = n^{-1} + O(n^{-2}\log n)$. This is the tricky part of the proof. -Next make a substitution $z_n= 2b_n^2$. We have -$$ b_{n+1}=b_n(1-b_n^2+b_n^4-\cdots)$$ -so -$$z_{n+1}=z_n\left(1-\frac{1}{2}z_n + \frac{1}{4}z_n^2+ \cdots\right)^2 = z_n-z_n^2 + \frac{3}{4}z_n^3 + \cdots$$ -from which we get $z_n = n^{-1}+O(n^{-2}\log n)$ and the asymptotics for $b_n$ follow. -Incidentally, the sequence $b_n$ is what you get by using Newton's method to find a root of $xe^{-x^{-2}/2}$. But I couldn't find a way to make that shed any light on the asymptotics...<|endoftext|> -TITLE: What does matrix multiplication have to do with scalar multiplication? -QUESTION [5 upvotes]: Why are matrix and scalar multiplication denoted the same way and treated as the same operation in standard mathematical notation? This is always a source of confusion for me because they have completely different properties (specifically commutativity). Multiplying a 1x1 matrix by an NxN matrix isn't even generally equivalent to multiplying an NxN matrix by a scalar. (The former is not even always defined.) Wouldn't it be clearer to consider these to be completely unrelated operations and use completely different notation to represent them? 
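-(Editor's illustration: a minimal NumPy sketch of the non-equivalence just described; the matrices chosen are arbitrary:)

-    import numpy as np
-
-    A = np.arange(9.0).reshape(3, 3)  # an arbitrary 3x3 matrix
-    s = np.array([[2.0]])             # a 1x1 matrix -- not a scalar
-
-    print(2.0 * A)                    # scalar multiplication: always defined
-    try:
-        s @ A                         # matrix product of a 1x1 with a 3x3
-    except ValueError as e:
-        print("matrix product undefined:", e)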
-
-REPLY [3 votes]: The product of matrices is defined so that it corresponds to the composition of the corresponding linear maps. One can derive the usual formula for matrix multiplication from this fact alone. This should be covered in every good linear algebra textbook, e.g. Axler's Linear Algebra Done Right. See also Arturo Magidin's answer here. So your question reduces to why composition of maps is denoted the same as multiplication. One answer is that rings arise naturally as subrings of linear maps on their underlying additive groups (left regular representation). This is a ring-theoretic analog of the Cayley representation of a group as a group of permutations, given by acting on itself by left multiplication. This allows us to view "functions" as "numbers" and exploit operator theoretic techniques such as factoring characteristic polynomials and differential and difference operators (recurrences), etc. The point of the common notation is to emphasize this common ring structure so that one may exploit it by reusing similar techniques where they apply.
-Examples of such techniques abound. For some examples of operator algebra see here, here, here. See also here, here where the Fibonacci recurrence is recast into linear system form, yielding an addition formula and a fast computation algorithm by repeated squaring of the shift matrix.<|endoftext|>
-TITLE: What is the asymptotic behavior of A103213 in OEIS?
-QUESTION [9 upvotes]: It's probably not at all hard—but at least right now it's not obvious to me—how to determine the asymptotic behavior of
-$\sum_{k=1}^n \binom{n}{k} \frac{1}{k}$
-(link to OEIS).

-REPLY [5 votes]: Here is an exact lower bound, which, as is readily seen, is approximately equal to $2^{n+1}/n$.
-By Jensen's inequality, since $1/x$ is convex,
-$$
-\frac{{\sum\nolimits_{k = 1}^n {{n \choose k}} }}{{\sum\nolimits_{k = 1}^n {{n \choose k}} k}} \leq \frac{{\sum\nolimits_{k = 1}^n {{n \choose k}\frac{1}{k}} }}{{\sum\nolimits_{k = 1}^n {{n \choose k}} }}.
-$$
-From this it follows straightforwardly that
-$$
-\frac{{(2^n - 1)^2 }}{{2^{n - 1} n}} \le \sum\limits_{k = 1}^n {{n \choose k}\frac{1}{k}} .
-$$
-EDIT: Hence,
-$$
-\sum\limits_{k = 1}^n {{n \choose k}\frac{1}{k}} \geq \frac{{2^{n + 1} }}{n} - \frac{4}{n} + \frac{1}{{2^{n - 1} n}}.
-$$<|endoftext|>
-TITLE: Direct proof that the wedge product preserves integral cohomology classes?
-QUESTION [102 upvotes]: Let $H^k(M,\mathbb R)$ be the De Rham cohomology of a manifold $M$.
-There is a canonical map $H^k(M;\mathbb Z) \to H^k(M;\mathbb R)$ from the integral cohomology to the cohomology with coefficients in $\mathbb R$, which is isomorphic to the De Rham cohomology. As a previous question already revealed, the images of this map are precisely the classes of differential $k$-forms $[\omega]$ that yield integers when integrated over a $k$-cycle $\sigma$,
-$$ \int_{\sigma} \omega \in \mathbb{Z} \quad\text{ whenever } d\sigma = 0$$
-Let us call them "integral forms".
-Motivated by the cup product on cohomology, my question/request is the following:

-Give a direct proof that the wedge product $[\omega\wedge\eta]\in H^{k+l}(M,\mathbb R)$ of two integral forms $\omega\in \Omega^k(M)$ and $\eta\in \Omega^l(M)$ is again an integral form.

-This should be true because the cup product is mapped to the wedge product, but the point of the exercise is to prove this statement directly, without constructing the singular cohomology $H^k(M,\mathbb Z)$ or homology first.
-Maybe I also have to make sure that the condition of being an integral form is something that can be "checked effectively" without singular homology; this might be the subject of a new question.

-REPLY [10 votes]: $\def\ZZ{\mathbb{Z}}\def\RR{\mathbb{R}}$I'm not sure if this is an answer to the question, since it does refer to $H_{\ast}(M, \ZZ)$, but I think it sheds some interesting light on why the problem is hard.
-We begin with a flawed proof attempt. Write $\delta$ for the diagonal map $M \to M \times M$, and $\pi_1$ and $\pi_2$ for the projections from $M \times M$ onto its first and second factor. Let $\rho$ be an integer $k+\ell$-cycle and let $\alpha$ and $\beta$ be a $k$-form and an $\ell$-form on $M$. Then $\int_{\rho} \alpha \wedge \beta = \int_{\delta(\rho)} \pi_1^{\ast}(\alpha) \wedge \pi_2^{\ast}(\beta)$. Suppose $\rho$ were homologous in $M \times M$ to $\sum \sigma_i \times \tau_i$, for various cycles $\sigma_i$ and $\tau_i$ in $H_{\ast}(M)$, with $\dim \sigma_i + \dim \tau_i=k+\ell$. Then we would have
-$$\int_{\rho} \alpha \wedge \beta = \sum \int_{\sigma_i \times \tau_i} \pi_1^{\ast}(\alpha) \wedge \pi_2^{\ast}(\beta) =\sum_{(\dim \sigma_i, \dim \tau_i) = (k, \ell)} \int_{\sigma_i} \alpha \int_{\tau_i} \beta.$$
-This would prove the result. (The integrals over terms where $\dim \sigma_i \neq k$ would drop out. If $\dim \sigma_i
-TITLE: A torsion-free quotient?
-QUESTION [7 upvotes]: Let $G$ be a group and $T$ the set of elements of finite order in $G$.
-If $T$ is a subgroup of $G$, then $G/T$ is a torsion-free group.
-Suppose $G$ is a compact Hausdorff topological group.
-Is it true that $G/cl(T)$ is torsion-free?
-(where $cl(T)$ is the topological closure of $T$ in $G$).

-REPLY [2 votes]: I believe the answer is no. Choose an element $\theta$ of infinite order in $\mathbb R/\mathbb Z=S^1$. Consider the group $G_n=(\mathbb Z\times S^1)/\langle(n,\theta)\rangle$. This is a finite extension of the compact topological group $S^1$, so it is compact. Also $\langle (n,\theta) \rangle$ is a closed subgroup, so the quotient is Hausdorff. The torsion subgroup consists of all torsion elements in the second factor: any element of the form $(k,\mu)$ where $k\neq 0$ cannot be torsion. Now if we mod out by the closure of torsion elements, as Jack Schmidt notes, this kills the entire $S^1$, leaving $\mathbb Z/n\mathbb Z$ coming from the first factor.<|endoftext|>
-TITLE: Martingale problem
-QUESTION [5 upvotes]: If $X_t$ is an $\mathbb{R}$-valued stochastic process with continuous paths, show that the following two conditions are equivalent:
-(i) - for all $f\in C^2(\mathbb{R})$ the process $$f(X_t) - f(X_0) -\int_0^t Af(X_s)ds$$ is a martingale,
-(ii) - for all $f\in C^{1,2}([0,\infty)\times\mathbb{R})$ the process
- $$f(t,X_t)-f(0,X_0)-\int_{0}^t(\partial_t f(s,X_s)+A f(s,X_s))ds$$ is a martingale.

-REPLY [2 votes]: I'm trying to give a partial answer.
-(ii) $\Rightarrow$ (i) is trivial.
-(i) $\Rightarrow$ (ii). I cannot prove it completely, but the following argument should apply to the case $f(t,x)=g(t)h(x)$, where the variables are separable.
-Assume (i). $M_t:=h(X_t) - h(X_0) -\int_0^t Ah(X_s)ds$ is a continuous martingale, with $M_0=0$. Apply the integration by parts formula for stochastic integrals,
-$$\int_0^t g(s)dM_s=g(t)M_t-\int_0^tM_sg'(s)ds.$$
-The left-hand side is a martingale.
One can show that the right-hand side equals
-$$g(t)h(X_t)-g(0)h(X_0)-\int_0^t [g'(s)h(X_s)+g(s)Ah(X_s)]ds,$$
-because
-$$\int_0^tg'(s)\int_0^s Ah(X_u)duds=g(t)\int_0^tAh(X_s)ds-\int_0^tg(s)Ah(X_s)ds.$$
-Thus, the special case is proved. To get the general result, use some kind of extension argument?<|endoftext|>
-TITLE: An Alternate Proof to a Theorem Involving "e"
-QUESTION [6 upvotes]: In a paper, it was claimed that $(1-\frac{f(x)}{x})^x \sim e^{-f(x)}$ as $x \to \infty$ when $\frac{(f(x))^2}{x}$ is $o(1)$.
-I proved the claim in the following way; however, I'm seeking a simpler proof.

-Define $g(x) = \frac{(1-f(x)/x)^x}{x}$ and $h(x) = \frac{e^{-x}}{x}$. To prove the theorem, we must show that $\lim_{x \to \infty} \frac{g(x)}{h(f(x))} = 1$ when $\frac{(f(x))^2}{x}$ is $o(1)$. To this end, we use the binomial expansion for $g(x)$, and the Taylor series for $h(x)$:
-$\lim_{x \to \infty} g(x) = \lim_{x \to \infty} \frac{(x-f(x))^x}{x^{x+1}} = \lim_{x \to \infty} \frac{x^x - \binom{x}{1}x^{x-1}f(x) + \binom{x}{2}x^{x-2}(f(x))^2 - \cdots}{x^{x+1}} = \lim_{x \to \infty} \frac{1-f(x)}{x}$
-(The last identity follows from the fact that $\frac{(f(x))^2}{x}$ is $o(1)$; that is $\lim_{x \to \infty} \frac{(f(x))^2}{x} = 0$.)
-Now, since $h(x) \sim \frac{1 - x + x^2/2 - \cdots}{x}$, we have $h(f(x)) \sim \frac{1 - f(x) + (f(x))^2/2 - \cdots}{x}$, and $\lim_{x \to \infty} h(f(x)) = \lim_{x \to \infty} \frac{1-f(x)}{x}$. (Again, the last identity follows from the fact that $\frac{(f(x))^2}{x}$ is $o(1)$.)
-Combining the two limits, we see that $\lim_{x \to \infty} \frac{g(x)}{h(f(x))} = 1$.

-As I said, this proof is long, and to me, it is not appealing.
-Does anyone know a better proof?

-REPLY [13 votes]: Let's consider
-$$\lim_{x\to\infty} \left(1 - \frac{{f(x)}}{x}\right)^{x} = \lim_{x\to\infty} \exp\left[x \ln \left(1 - \frac{f(x)}{x}\right)\right].$$ Expanding the logarithm, we obtain
-$$x \ln \left(1 - \frac{f(x)}{x}\right) = x \left[ - \frac{f(x)}{x} + O\left(\frac{f(x)}{x}\right)^2\right] = -f(x) + O\left(\frac{f(x)^2}{x}\right).$$
-So we obtain (using the fact that $f(x)^2/x$ is $o(1)$)
-$$\left(1 - \frac{{f(x)}}{x}\right)^{x} = e^{-f(x) + o(1)} \sim e^{-f(x)}.$$<|endoftext|>
-TITLE: About multiplying binary quadratic forms
-QUESTION [5 upvotes]: The quadratic forms with discriminant -23 up to change of variables are:

-A(x,y): $x^2 + xy + 6 y^2$
-B(x,y): $2 x^2 - xy + 3 y^2$
-C(x,y): $2 x^2 + xy + 3 y^2$

-Viewing these as norm forms of number fields, it's relatively easy to then compute:

-A(x,y)A(a,b): $A(xa - 6yb,ya + (x - y)b)$.
-BB: $B(xa - (3/2)yb,ya + (x+(1/2)y)b)$
-CC: $C(xa - (3/2)yb,ya + (x-(1/2)y)b)$

-I have not found any number of the form C which is not of the form A or B, though maybe I just didn't look far enough.

-Can we also compute $A(x,y)B(u,v), AC$ and $BC$?
-Is there any way to think about these forms as ideals?
-Since 23 = A, 3 = B but 23*3 = B = C, it doesn't seem like there is a simple group structure here, but B and C are 'conjugate' in some sense, so perhaps there is a group structure on {{A},{B,C}}, or maybe the structure is something other than a group?
-What about the converse problem? If d|A then d = A, B or C? (Update: 7 is not of the form A, B or C, but $7^2$ = A.)

-So in this case the converse problem is not solvable, but I wonder if there are examples of multiple forms where the converse problem does hold?
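-(Editor's aside: observations like these are easy to test by brute force; a minimal sketch follows, in which the search bounds are arbitrary:)

-    # Enumerate the small positive values represented by the three forms of
-    # discriminant -23, to test claims like "7 is missed but 7**2 = A".
-    BOUND, R = 200, 40
-
-    def values(f):
-        return {f(x, y) for x in range(-R, R + 1) for y in range(-R, R + 1)
-                if 0 < f(x, y) <= BOUND}
-
-    A = values(lambda x, y: x*x + x*y + 6*y*y)
-    B = values(lambda x, y: 2*x*x - x*y + 3*y*y)
-    C = values(lambda x, y: 2*x*x + x*y + 3*y*y)
-
-    print(sorted(C - (A | B)))        # empty: C represents nothing new
-    print(7 in (A | B | C), 49 in A)  # False True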
-An example of a single form where the converse problem works is G(x,y)=$x^2 + y^2$; there I have the answer: it's just $d|G \implies d = G$, since every factor of a sum of two squares is a sum of two squares or a square ($= x^2 + 0^2$).

-Maybe it would have been better to write this question for discriminant -36 since it has the forms {$x^2 + 9y^2, 2x^2 + 2xy + 5y^2, 3x^2 + 3y^2$} none of which are conjugate... This set seems a bit simpler but it still has the strange non-multiplicative phenomenon with $7^2$.

-REPLY [6 votes]: You have noticed that there seems to be a group lurking around when you consider (primitive) quadratic forms with a fixed discriminant, but also there seem to be problems. This is the reason why trying to make a group out of quadratic forms is subtle! Historically, Lagrange first defined equivalent quadratic forms to be those linked to one another by any invertible integral change of variables. Such a change of variables has determinant 1 or -1. Later Gauss found that by using a finer equivalence relation, where only invertible integral changes of variable with determinant 1 are permitted, there is an associated group law on his equivalence classes of quadratic forms.
-Lagrange's equivalence classes are unions of Gauss's equivalence classes in the following algebraic sense: start with a finite abelian group G and identify each g in G with its inverse g^(-1). If g^2 is trivial then g = g^(-1) and otherwise g is not g^(-1). Let G* be the equivalence classes of sets {g,g^(-1)} in G, which have size 1 or 2. So G* is a kind of collapsing of G, but it is very awkward to try to make G* a group. You just can't do it in general. For example, if G = Z/3 = {0,1,2 mod 3} then G is an additive group and G* = {{0},{1,2}}, which doesn't make any sense as a group using some natural addition operation on representatives of the sets making up G*. Lagrange was basically dealing with something like G*, albeit in the language of quadratic forms, which is why he never saw a group law.
-The connection with your example is that the change of variables (x,y) --> (-x,y) turns your B into C but this change has determinant -1 so it wouldn't be "legal" for Gauss's equivalence relation. I think you can see how G* is like your {{A},{B,C}}.<|endoftext|>
-TITLE: Confusion regarding Riesz's lemma
-QUESTION [6 upvotes]: Wikipedia (and my teacher) state Riesz's lemma as follows:

-Let $X$ be a normed linear space and $Y$ be a subspace in $X$. If there exists $0 < r < 1$ such that for every $x\in X$ with $||x|| =1$, one has $d(x, Y) < r$, then $Y$ is dense in $X$.

-Wikipedia then goes on to say

-In other words, for every proper closed subspace Y, one can always find a vector x on the unit sphere of X such that d(x, Y) is less than and arbitrarily close to 1.

-This is in fact the way Riesz's lemma is stated in several other places (e.g. appendix B of the book A Taste of Topology by Volker Runde).
-Now, I don't see how these two statements are so quickly equivalent. The second one seems to be the contrapositive of the first one.
-But this would mean that "non dense subspace" is the same as "proper closed subspace".
-This doesn't seem to be true: I thought, for instance, of $C([0,1])$, the space of continuous functions on $[0,1]$ with the supremum norm, and of the subspace of differentiable functions. It's a dense subspace which is not closed.
-What is (obviously) true is that proper closed subspaces are not dense.
-So, correct me if I'm wrong, but it seems that to say "in other words" is inaccurate, as the second statement is stronger than the first one!
-Is my reasoning correct?

-REPLY [4 votes]: The statements are equivalent because not being dense is equivalent to the closure being proper, and the distance to a set is the same as the distance to its closure. The second statement is therefore a reformulation of the contrapositive of the first.
-More explicitly, the second statement follows directly from the contrapositive of the first because, as you mentioned, proper closed subspaces are not dense. The contrapositive of the first statement follows from the second because if $Y$ is not dense, then $\overline{Y}$ is a proper closed subspace, so by the second statement there are unit vectors in $X$ whose distances to $\overline{Y}$, and therefore to $Y$, are arbitrarily close to $1$.<|endoftext|>
-TITLE: Information-theoretic proof of Gödel's Theorem?
-QUESTION [6 upvotes]: I'm looking for an information-theoretic proof of Gödel's Theorem that goes something like this, without any reference to diagonalization:

-Every axiom system in the scope of Gödel's Theorem has a finite number of bits.
-It requires an infinite number of bits to specify all the truths of number theory.
-By the Soundness theorem, no new bits can be introduced by deduction.
-So no such axiom system as specified in part 1 above can fully axiomatize number theory.

-Does such a proof exist? Is it even feasible? Please include references with your answer. Thanks

-REPLY [3 votes]: I think that parts (1) and (2) would already prove the theorem. The heart is part (2), which phrased another way says "no finite number of bits can encode all truths of number theory". The difficult thing with the proof would be making formal sense of "finite number of bits".<|endoftext|>
-TITLE: $f^3 + g^3=1$ for two meromorphic functions
-QUESTION [25 upvotes]: Can you find two non-constant meromorphic functions $f,g$ such that $f^3 +g^3=1$?

-REPLY [8 votes]: As an addendum of sorts to Theo's answer, the two functions $f$ and $g$ in the answer can be simplified through the use of the homogeneity relations
-$$\wp\left(cz;\frac{g_2}{c^4},\frac{g_3}{c^6}\right)=\frac1{c^2}\wp\left(z;g_2,g_3\right),\quad \wp^\prime\left(cz;\frac{g_2}{c^4},\frac{g_3}{c^6}\right)=\frac1{c^3}\wp^\prime\left(z;g_2,g_3\right)$$
-Focusing on the case of positive $g_3$ (the treatment for negative $g_3$ is similar), we have
-$$f(z)=\frac{3+\sqrt{3}\wp^\prime\left(\frac{\Gamma(1/3)^3}{2\pi}z;0,1\right)}{6\wp\left(\frac{\Gamma(1/3)^3}{2\pi}z;0,1\right)},\quad g(z)=\frac{3-\sqrt{3}\wp^\prime\left(\frac{\Gamma(1/3)^3}{2\pi}z;0,1\right)}{6\wp\left(\frac{\Gamma(1/3)^3}{2\pi}z;0,1\right)}$$
-where now only "equianharmonic case" Weierstrass elliptic functions are involved.
-If these functions are plotted in the complex plane, a hexagonal structure similar to what is observed for the Dixon elliptic functions can be seen. This suggests that there might be a relationship between these functions and the Dixon elliptic functions.
-In particular, using the homogeneity relations, one can show that
-$$f(z)=-\frac{\mathrm{cm}\left(\frac{\Gamma(1/3)^3}{2\pi}\sqrt{3}z\right)}{\mathrm{sm}\left(\frac{\Gamma(1/3)^3}{2\pi}\sqrt{3}z\right)}$$

-As promised, here are plots of $f(z)$: [plot of $f(z)$ on the real line omitted], [contour plot of the real and imaginary parts of $f(z)$ omitted], and [plots of a single "hexagonal tile" of $f(z)$ omitted].
-Plots of $g(z)$ are similar since $g(z)=f(-z)$.<|endoftext|>
-TITLE: What connections are there between number theory and partial differential equations?
-QUESTION [14 upvotes]: What connections are there between number theory and partial differential equations?

-REPLY [15 votes]: A belated partial/sample answer: first, to say that some function on a Euclidean or other space is a solution of a (natural!) PDE, perhaps the unique solution in a space of functions described by some integrability or other conditions, can be an excellent characterization of the thing. In many cases of traditional interest both in number theory and in physics, PDEs have many symmetries, and special solutions with symmetries often allow separation of variables reducing to ODEs, whose solutions have tractable asymptotics. Asymptotics are very handy for non-elementary functions.
-Such things come up in number theory in examples such as the following. For context, Hecke observed that holomorphic binary theta series like $\theta(z)=\sum_{m,n} e^{\pi i z(m^2+n^2)}$ gives zeta functions of complex quadratic rings of algebraic integers. Prompted by Hecke, Maass looked for, and found, analogous automorphic forms (Maass' special waveforms) to produce zeta functions for real quadratic fields. These are eigenfunctions for the $SL_2(\mathbb R)$-invariant Laplacian $y^2(\partial_x^2+\partial_y^2)$ on the upper half-plane. At the time (1940s) this was a surprise, but with hindsight this has been assimilated pretty well.
-In fact, the $L^2$ space on the quotient of the upper half-plane by $SL_2(\mathbb Z)$, with the invariant measure $dx\,dy/y^2$, decomposes in a very interesting way with respect to that Laplacian: there are genuine $L^2$ eigenfunctions consisting of cuspforms and also constants, and there is a 'continuous' part expressible as integrals ('wave packets') of (unhelpfully named "non-holomorphic") Eisenstein series $E_s$, the Eisenstein series being eigenfunctions of the Laplacian, but not $L^2$.
-The return of number theory here is exemplified by the following particular thing. Even if one is directly interested only in holomorphic modular forms $f$ and associated $L$-functions, of course $|f|^2$ or $y^k\,|f|^2$ is no longer holomorphic, but/and the standard Rankin-Selberg integral expresses the (completed) $L$-function $L(s,f\times \bar{f})$ as the integral of $|f|^2$ against $E_s$. That is, that $L$-function has meaning in the spectral decomposition with respect to the Laplace-Beltrami operator, that it is the continuous-spectrum spectral coefficient!
-Iwaniec's book "Spectral methods of automorphic forms" is a very accessible introduction to such things, and there are on-line discussions of such matters.
-As in other answers somewhere on MSE or MO, it deserves to be added that "trace formulas" such as Selberg's can be viewed as studies of the resolvent of the Laplace-Beltrami operator (in rank-one groups, certainly).
-While the Laplace-Beltrami operators relevant to automorphic forms are really "just" manifestations of the Casimir operator from the Lie algebra of the relevant Lie group, so that we could try to dismiss PDE talk as just concealed representation theory, that is a little misleading, insofar as there is some analysis that must be done, both local and global, and it is very handy to have an elliptic differential operator to be able to invoke elliptic regularity.
-For that matter, both Harish-Chandra's discussion of repn theory and important developments such as Casselman's subrepn theorem (as in the Duke paper of Casselman and Milicic) make substantial use of systems of PDEs to characterize spherical functions and other stuff.
-A different example is provided by Colin de Verdiere's proof of meromorphic continuation of Eisenstein series, by constructing a variant of that same Laplace-Beltrami operator, designed to have a compact resolvent, thus having a meromorphic continuation, from which the continuation of the Eisenstein series is obtained.<|endoftext|>
-TITLE: Difference between linear map and homomorphism
-QUESTION [29 upvotes]: I came across the following definition:
-Given a ring $A$, with a unit $1 \in A$, and $A$-modules $M$ and $N$, we denote by $Hom(M, N)$ or $Hom_A(M, N)$ the space of $A$-linear maps from $M$ to $N$.
-My question is: what exactly is the difference between homomorphism and a linear map? I can see that linearity is defined in terms of a vector space or module and homomorphism in terms of groups.
-But every linear map is a homomorphism and when treating a group as a one dimensional vector space over itself, every homo. is also a linear map. This makes me think they are kind of the same.
-Is it ok to think of it that way? Or am I confused? Because I feel confused. Thanks once again for your help!

-REPLY [67 votes]: "Homomorphism" comes from the Greek homos (same) and morphe (form or shape).
-So a "homomorphism" is a map that "preserves the shape" or "preserves the structure."

-If you are working with groups, you want $f\colon G\to H$ to preserve the group structure: identity, inverses, and products. So a homomorphism is a map $f$ such that $f(1)=1$, $f(a^{-1}) = (f(a))^{-1}$, and $f(ab) = f(a)f(b)$ (though it turns out that the latter is enough to guarantee all of them, so we only check the latter).
-If you are working with rings, you want $f\colon R\to S$ to preserve the ring structure (addition and multiplication; if the rings have unity, then you want it to preserve unity). So you want $f(a+b) = f(a)+f(b)$, $f(ab)=f(a)f(b)$ (and if both rings have unity, you often want $f(1_R) = 1_S$).
-If you are working with partially ordered sets, you want $f$ to preserve the order structure. So you want that if $a\leq b$, then $f(a)\leq f(b)$.
-If you are working with graphs, you want the homomorphisms to preserve the graph structure, which is adjacency: if $v$ is adjacent to $w$, you want $f(v)$ to be adjacent to $f(w)$.
-If you are working with topological spaces, you want homomorphisms to preserve the topological space structure; it turns out that the way to do this is to ask that the inverse image of an open set be open.
-If you are working with "pointed sets" (sets with a distinguished object), then you want a homomorphism $f\colon S\to T$ to "preserve the structure", so you require it to map the distinguished object of $S$ to the distinguished object of $T$.
-And if you are working with vector spaces over a field $F$, you want a homomorphism $f\colon V\to W$ to "preserve the vector space structure"; so you want it to preserve the additive structure, $f(x+y) = f(x)+f(y)$; and the scalar multiplication structure, $f(av) = af(v)$.
-Similarly, if you are working with $R$-modules, a homomorphism will be a map $f\colon M\to N$ that preserves "the $R$-module structure", $f(m+m') = f(m)+f(m')$ and $f(rm) = rf(m)$.

-So the meaning of "homomorphism" will depend on the context. It is often clear. If I say "Let $G$ and $H$ be groups, and let $f\colon G\to H$ be a homomorphism", then it's pretty clear I'm talking about a group homomorphism.
-But sometimes it isn't clear. What if I say "Let $f\colon\mathbb{Z}\to\mathbb{R}$ be a homomorphism"? Am I talking about a homomorphism of additive groups, or a homomorphism of rings? How about "$f\colon\mathbb{R}\to\mathbb{C}$"? Am I talking about additive groups, rings, topological spaces, $\mathbb{R}$-vector spaces, $\mathbb{Q}$-vector spaces, inner product spaces? Which?
-So we often specify what kind of homomorphism we mean. This is especially important when a particular set has many different structures (such as $\mathbb{R}$, which is an additive group, a field, a vector space over $\mathbb{Q}$, a vector space over $\mathbb{R}$, etc). So we will say things like "let $f\colon M\to N$ be an $R$-module homomorphism", or "let $f\colon\mathbb{R}\to\mathbb{C}$ be an additive homomorphism" to specify which kind we are thinking about.
-And, historically, some terminology precedes the generic "homomorphism." Homomorphisms of vector spaces have long been called "linear transformations", so we often call them that instead of "vector space homomorphism". When a vector space has several structures as a vector space (e.g., $\mathbb{C}^2$ can be thought of as a complex vector space or as a real vector space), we often specify the field, so we may say things like "let $f\colon\mathbb{C}^2\to\mathbb{C}$ be an $\mathbb{R}$-linear transformation" or just "$\mathbb{R}$-linear", to specify we are looking at the structure as a real vector space.
-Because modules are a direct generalization of vector spaces, we often say "$R$-linear function" or "$R$-linear" to refer to homomorphisms of $R$-modules, by analogy to $\mathbb{R}$-linear or $\mathbb{C}$-linear for homomorphisms of real or complex vector spaces. Note that a module over a field is the same thing as a vector space.<|endoftext|>
-TITLE: Do trivial homology groups imply contractibility of a compact polyhedron?
-QUESTION [14 upvotes]: Is it true that a compact polyhedron X with trivial homology groups (except $H_{0}(X)$ of course) is necessarily contractible? If yes, what is the approach in proving it? If not, do you see a counter-example?

-REPLY [17 votes]: The 2-skeleton of the Poincaré homology sphere, also describable as the presentation complex of the binary icosahedral group, provides a counterexample to your original question. The fundamental group is of order 120 and is perfect, which implies that $H_1$ is trivial. You can check from the group presentation that the second homology group is trivial as well.

-REPLY [13 votes]: To sum up the comments: when Poincaré worked on the beginnings of algebraic topology, he originally thought that a space with trivial homology groups must be contractible. (More precisely, he thought that having the homology group of a 3-sphere implies being a 3-sphere.)
However, he soon found a counterexample, the Poincaré homology sphere, which led him to the construction of the fundamental group.
-When taking the fundamental group into account, the statement is indeed true: if a space has trivial fundamental group and trivial higher homology groups, then it must be contractible. This is a consequence of Whitehead's theorem and the Hurewicz map.<|endoftext|>
-TITLE: What are "Super Numbers"?
-QUESTION [8 upvotes]: I'm reading Hyperspace by Michio Kaku and in the chapter on SuperGravity "Super Numbers" are mentioned and are described as a number system where for any super number $a$, $a*a=-a*a$. I was wondering if anyone knew anything more about these numbers or could point me to another reference about them. Thanks!

-REPLY [3 votes]: If you're looking for a reference, the text "Supermanifolds" by Bryce DeWitt provides an excellent pedagogical overview of supernumbers, analysis over supernumbers, supermanifolds, super Lie groups, and supersymmetry.<|endoftext|>
-TITLE: Is there a gamma-like function for the q-factorial?
-QUESTION [9 upvotes]: I'm looking at quantum calculus and just trying to understand what is going on in this subject. Looking at the q-factorial made me wonder if this function could take all real or even complex numbers in the same way that $\Gamma (z)$ works as an extension of $f(n) =n!$. Since I need practice with both $\Gamma $ and q-analogs, would it be a good project to try to recreate $\Gamma (z)$ in this new setting, or is the whole project lacking in mathematical soundness?
-Also, a minor question: why is there sometimes a coefficient in q-analog expansions, as in this expression:
-$(a;q)_n = \prod_{k=0}^{n-1} (1-aq^k)=(1-a)(1-aq)(1-aq^2)\cdots(1-aq^{n-1}).$
-I'm a bit embarrassed not to know, but none of the lit. I have explains it; I'll just randomly see it tossed in there from time to time... and it really throws me off.
-Thank you.

-REPLY [7 votes]: First of all, let me start by recommending an excellent book by Victor Kac, "Quantum calculus".
-The q-Gamma function $\Gamma_q$ generalizes the Euler $\Gamma$ function by replacing the recurrence identity with its q-deformation:
-$$
- \Gamma(x+1) = x \Gamma(x) \quad \Longrightarrow \quad \Gamma_q(x+1) = [x]_q \Gamma_q(x)
-$$
-where $[x]_q = \frac{1-q^x}{1-q}$ is a q-number.
-When $x$ is a positive integer, and assuming $\Gamma_q(1) = 1$, we get
-$$
- \Gamma_q(n+1) = \prod_{k=1}^n \frac{1-q^k}{1-q} = \frac{(q,q)_n}{(1-q)^n} = \left\{
-\begin{array}{cc}
- \frac{(q,q)_\infty}{(1-q)^n (q,q^n)_\infty} & |q|<1 \\
- \frac{ q^{n(n-1)/2} (q^{-1},q^{-1})_\infty}{(1-q^{-1})^n (q^{-1},q^{-n})_\infty} & |q|>1
-\end{array}
- \right.
-$$
-The last expression is now defined for $n \in \mathbb{C}$, as long as $n$ is not a non-positive integer, as $(q,q^{-k})_\infty = 0$ for $k \in \mathbb{Z}^+$.<|endoftext|>
-TITLE: Evaluating $\int P(\sin x, \cos x) \text{d}x$
-QUESTION [54 upvotes]: Suppose $\displaystyle P(x,y)$ is a polynomial in the variables $x,y$.
-For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.
-Is there a general method which allows us to evaluate the indefinite integral

-$$ \int P(\sin x, \cos x) \text{d} x$$

-What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?
-Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.

-This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.
-and here: List of Generalizations of Common Questions.
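-(Editor's note: for any concrete instance, a computer algebra system gives a quick check on the hand methods described in the answers; a minimal SymPy sketch with an arbitrarily chosen integrand:)

-    import sympy as sp
-
-    x = sp.symbols('x')
-    P = sp.sin(x)**3 * sp.cos(x)**2   # an arbitrary polynomial in sin x, cos x
-    F = sp.integrate(P, x)
-    print(F)
-    print(sp.simplify(sp.diff(F, x) - P))  # prints 0, confirming the antiderivative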
-
-REPLY [5 votes]: Here are some other substitutions that you can try on a rational function of trigonometric functions. They are called the Bioche rules in France.
-Let $P(\sin t,\cos t)=f(t)$ where $P(x,y)$ is a rational function. Let $\omega(t)=f(t)dt$.

-If $\omega(-t)=\omega(t)$, then $u(t)=\cos t$ might be a good substitution.
-For example : $$\int \frac{\sin^3t}{2+\cos t}dt=-\int \frac{(1-\cos^2t)(-\sin t)}{2+\cos t}dt=-\int\frac{1-u^2}{2+u}du=\int\frac{u^2-1}{2+u}du=\int \left(u-2+\frac{3}{u+2}\right)du=\frac{u^2}{2}-2u+3\log(u+2)$$
-If $\omega(\pi-t)=\omega(t)$, then $u(t)=\sin t$ might be a good substitution.
-For example : $$\int \frac{1}{\cos t}dt=\int \frac{\cos t}{\cos^2 t}dt=\int \frac{\cos t}{1-\sin^2 t}dt=\int \frac{1}{1-u^2}du=\int \frac{1}{2} \bigg(\frac{1}{1+u}+\frac{1}{1-u}\bigg)du=\frac{1}{2}(\log(u+1)-\log(1-u))$$
-If $\omega(\pi+t)=\omega(t)$, then $u(t)=\tan t$ might be a good substitution.
-For example : $$\int\frac{1}{1+\cos^2 t}dt=\int \frac{1}{1+\frac{1}{\cos^2 t}}\frac{dt}{\cos^2 t}=\int \frac{1}{1+\frac{\cos^2t+\sin^2t}{\cos^2 t}}\frac{dt}{\cos^2 t}=\int \frac{1}{2+\tan^2t}\frac{dt}{\cos^2 t}=\int\frac{1}{2+u^2}du=\frac{1}{\sqrt2}\arctan\frac{u}{\sqrt2}$$
-If two of the previous relations are verified (in that case all three relations are verified), then $u=\cos(2t)$ might be a good substitution.

-If none of these work, you can use the Weierstrass substitution presented in a previous answer.<|endoftext|>
-TITLE: Translations of Darboux's "Leçons Sur La Théorie Générale Des Surfaces"
-QUESTION [11 upvotes]: This might be slightly stretching the boundary of acceptable questions, but I think this is the best group to ask.
-I'm interested in the classic 1887 texts "Leçons Sur La Théorie Générale Des Surfaces Et Les Applications Géométriques Du Calcul Infinitésimal" by French mathematician Jean Gaston Darboux (and which can be read online in very high quality scans thanks to the University of California here). They were very influential in their time in establishing the fledgling field of differential geometry, and introduced many of its fundamental tools (such as, not surprisingly, the Darboux frame).
-My question is, has this work ever been translated into English? I can find many editions of the original French version, but I have been unsuccessful in finding any of the four volumes in English. I've checked the usual suspects (Dover, etc.) and my University's catalogue, but nothing comes up for the straightforward translation of the title (Lessons on the General Theory of Surfaces and the Geometric Applications of Infinitesimal Calculus) or the author's name. Was it retitled, perhaps, in translation?
-Secondly, and on a very related note, does anybody know for certain the copyright status of the original work (not any particular subsequent editions)? If this was published in America, I would expect the copyright to expire 70 years after Darboux's death in 1917, but perhaps things work differently in France. The UC site I linked above lists "Possible copyright status: NOT_IN_COPYRIGHT" but that doesn't sound very reassuring to me. If somebody were to attempt a translation today, would they run into any legal restrictions, or is it essentially in the public domain by now?

-REPLY [2 votes]: I don't claim to be an expert in French, but I would be happy to help with translations from French into English. Sometimes, putting words into software like Google Translate will not be enough.
For example, look at this sentence:
-Soit $E = (C[0, 1], \mathbf{\mathbb{R}})$ le $\mathbf{\mathbb{R}}$-espace vectoriel des applications continues de $[0, 1]$ vers $\mathbf{\mathbb{R}}$, muni de
-la norme $N_\infty$.
-(In English: "Let $E = (C[0, 1], \mathbb{R})$ be the $\mathbb{R}$-vector space of continuous maps from $[0, 1]$ to $\mathbb{R}$, equipped with the norm $N_\infty$.")
-It is not hard to translate such a sentence. However, words like "soit", which is the verb "to be" in the third person singular of the subjunctive, may mean something else to a translator, whereas here it means "given" or "let".
-I'd be happy to help you translate specific sentences, but not a whole text!
-Ben<|endoftext|>
-TITLE: What is the complexity of succinct (binary) Nurikabe?
-QUESTION [19 upvotes]: Nurikabe is a constraint-based grid-filling puzzle, loosely similar to Minesweeper/Nonograms; numbers are placed on a grid to be filled with on/off values for each cell, with each number indicating a region of connected 'on' cells of that size, and some minor constraints on the region of 'off' cells (it must be connected and can't contain any contiguous 2x2 regions). The Wikipedia page has more explicit rules and sample puzzles, if anyone's curious.
-There are some NP-completeness proofs for Nurikabe out there, but they all rely on a 'unary' presentation of the puzzle, with an amount of data that scales roughly with grid size; but one of the unusual features of Nurikabe as opposed to most other similar puzzles is that instances can be potentially 'succinct'. The sum of the provided numbers must be proportional to the area of the grid (since the density of on cells is at least $1/4$), but if the grid size is $n$ then it's possible for a puzzle to use $\mathrm{O}(1)$ numbers each of size $\Theta(n^2)$ (rather than for instance $\Theta(n)$ numbers each of size $\Theta(n)$), for a total puzzle instance size of $\mathrm{O}(\log(n))$ bits - or, viewed the other way, given $n$ bits we can encode at least some Nurikabe puzzles of grid size exponential in $n$.
-What I don't know, though, is whether these succinct puzzles can encode computationally-hard problems; the constructions for the NP-completeness reductions I've seen all use $\Theta(n^2)$ numbers of bounded size (in fact, mostly all $1$s and $2$s), and it's possible that puzzles with $\mathrm{O}(1)$ numbers are simpler in some fundamental way (for instance, that their on regions are the union of $\mathrm{O}(1)$ rectangles, which would imply that polynomially-sized witnesses still exist). Does anyone know of any NEXP-completeness results for succinct Nurikabe (or for that matter any other relatively natural puzzles), or of a proof that even this binary-coded version is still NP-complete?
-(update: I've asked this question over at cstheory.SE as well, as it seems appropriate there.)

-REPLY [2 votes]: I don't have a proof of NEXP-completeness but I can offer some evidence that succinct Nurikabe isn't in NP and that it can encode computationally difficult problems. Consider this 17 x 16 Nurikabe puzzle: [puzzle image omitted]
-and its (I believe) unique solution: [solution image omitted]
-This puzzle is based on the fourth iteration of the Hilbert curve construction described in the Wikipedia article on space filling curves, modified slightly to produce a puzzle with a unique solution. I don't see any way to encode a certificate for this solution with fewer than the 272 bits of the naive encoding.
And I don't see any way to solve the puzzle short of exponential trial and error.<|endoftext|>
-TITLE: Lusin Theorem conditions
-QUESTION [7 upvotes]: Lusin Theorem (as stated by Rudin):
-Let $X$ be a locally compact Hausdorff space and let $\mu$ be a regular Borel measure on $X$ such that $\mu(K)<\infty$ for every compact $K\subseteq X$. Suppose $f$ is a complex measurable function on $X$, $\mu(A)<\infty$, $f(x)=0$ if $x\in X \setminus A$, and $\epsilon>0$. Then there exists a continuous complex function $g$ on $X$ with compact support such that
-$\mu(\{x:f(x)\neq g(x)\})<\epsilon$.
-But I can't seem to find in the proof anywhere a use of the fact that the measure is finite for compact sets.
-Is the condition necessary? Is there a counter-example?

-REPLY [5 votes]: It can be shown that any $\sigma$-finite regular measure on an LCH space must have $\mu(K) < \infty$ for $K$ compact. (Exercise: Prove it.) So we will have to use a measure which is not $\sigma$-finite. Modifying dissonance's example, let $X = \mathbb{R}$ and $\mu(A) = \infty$ for all nonempty $A$, which I think is a regular measure (rather trivially so), and take $f$ to be any discontinuous function.
-Rudin's proof of Lusin contains the following line:

-Fix an open set $V$ such that $A \subset V$ and $\bar{V}$ is compact. There are compact sets $K_n$ and open sets $V_n$ such that $K_n \subset T_n \subset V_n \subset V$ and $\mu(V_n - K_n) < 2^{-n} \epsilon$.

-This invokes Theorem 2.17 (a) (paraphrased):

-For any measurable set $E$ and $\epsilon > 0$, there exists $F$ closed and $V$ open with $F \subset E \subset V$ and $\mu(V-F) < \epsilon$.

-The proof of 2.17 uses the assumption that $\mu$ is finite on compact sets in the second line, when it asserts that $\mu(K_n \cap E) < \infty$.
-Also, it's easy to see that all of Rudin's quoted claims fail using the example I gave.<|endoftext|>
-TITLE: Construction of "pathological" measures
-QUESTION [9 upvotes]: A self-posed but never-solved problem:

-Is it possible to find a measure space
- $(X, \mathcal{M} ,\mu )$ such that
- the range of $\mu$ is something like
- the Cantor set (i.e. a bounded,
- perfect, uncountable, totally
- disconnected set)?

-I was thinking about this problem some time ago and now, reading some old MT post, it came back to my mind.
-I remember I solved a similar self-posed problem, showing that one can construct a measure over an interval whose range is the union of a finite number of disjoint intervals, but the one listed on top resisted my efforts.
-Any ideas?

-REPLY [11 votes]: Recall that the Cantor set $C$ can be identified with the set of numbers in $[0,1]$ admitting a ternary expansion consisting entirely of $0$'s and $2$'s. Take $\mathbb{N}$ with the measure $\mu(n) = \frac{2}{3^{n}}$. Then for every subset $A \subset \mathbb{N}$ we have $\mu(A) \in C$ and for every $x = \sum_{n=1}^{\infty} a_{n} \frac{2}{3^n} \in C$ with $a_{n} \in \{0,1\}$ we find $A$ with $\mu(A) = x$ by taking $A = \{n \in \mathbb{N}\,:\,a_{n} = 1\}$.

-REPLY [5 votes]: Added: After posting this, I realized that the $n=1$ case is pretty straightforward
-and doesn't require a big theorem. In fact, it is a nice exercise to show that any non-atomic probability space supports a uniform(0,1) random variable $U$. For $0\leq \alpha\leq 1$, the set $\lbrace\omega: U(\omega)\leq \alpha\rbrace$ has measure $\alpha$.
-This result is Corollary 1.12.10 (page 56) in Bogachev's Measure Theory Volume 1.
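-(Editor's aside: the Cantor-range construction in the first answer above is easy to check numerically; a minimal sketch, truncating the series at an arbitrary depth:)

-    # The measure mu({n}) = 2/3**n on the positive integers: mu(A) is the point
-    # of the Cantor set whose ternary digits are 2 exactly at the positions in A.
-    def mu(A, depth=30):
-        return sum(2.0 / 3**n for n in A if n <= depth)
-
-    print(mu({1}))           # 2/3, ternary 0.2
-    print(mu({2}))           # 2/9, ternary 0.02
-    print(mu(range(1, 31)))  # ~1.0 = mu(N), ternary 0.222...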
-
-Liapunov's convexity theorem implies (take $n=1$) that the range of any finite non-atomic measure is a compact, convex subset of $\mathbb{R}$.<|endoftext|>
-TITLE: Prove that the convergence of the sequence ($s_n$) implies the convergence of ($s_n^3$)
-QUESTION [8 upvotes]: I believe I have the gist of how to prove this. My professor worked out a problem similar to this one only, instead of ($s_n^3$), he used ($s_n^2$), and I am slightly confused as to how he came up with certain portions of his proof. The following is the proof he gave us for ($s_n^2$). I believe after understanding his proof better, I can prove the original problem more easily. So please do not post the solution to the original question.
-Proof {the convergence of the sequence ($s_n$) implies the convergence of ($s_n^2$)}
-Since the lim ($s_n$)=s, we know ($s_n$) is bounded.
-That is, there exists $M\in \mathbb{R}$ such that $|s_n| \le M$ for all $n\in \mathbb{N}$.
-Now, for every $\varepsilon >0$ we have lim ($s_n$)$=s$. Working on $\varepsilon/(M+|s|)>0$, there exists $N\in \mathbb{R}$ such that $|s_n-s| \le \varepsilon /(M+|s|)$ whenever $n>N$, therefore for all $n>N$, $|s_n^2 - s^2| = |s_n - s|\,|s_n + s| \le |s_n - s|(|s_n|+|s|) \le |s_n - s|(M + |s|)< \varepsilon $
-Which proves lim $(s_n^2)$ = $s^2$.
-The following is my proof for the current problem (that is in the title).
-Let me know if I did anything incorrect.
-Proof
-Since the lim ($s_n$)=s, we know ($s_n$) is bounded.
-That is, there exists $M > 0$ such that $|s_n|\le M$ for all $n\in \mathbb{N}$.
-Now, for every $\varepsilon >0$, since lim ($s_n$)=s, working on $\varepsilon /(3M^2)>0$,
-there exists $N\in \mathbb{N}$ such that
-$|s_n-s| < \varepsilon /(3M^2)$ whenever $n>N$.
-Therefore, for all $n>N$,
-$|s_n^3 - s^3| = |s_n - s|\,|s_n^2 + s_n s + s^2| \le |s_n - s|\,(|s_n|^2+|s_n||s|+ |s|^2) \le |s_n - s|\,(M^2 + M\cdot M + M^2) = |s_n - s|\,(3M^2)< \varepsilon $
-Which proves lim $(s_n^3)$ = $s^3$

-REPLY [3 votes]: I really appreciate how you requested that no one tell you the proof because you want to solve your own HW problems, so +1 for that. I'm not sure if this will help you, but here goes anyway!
-Definition: We say that a sequence $\{s_k\}$ converges to a limiting value $s$ if for every real number $\epsilon >0$, no matter how small, it is always possible to find a sufficiently large $N$ so that for every $M>N$: $|s_M-s|<\epsilon$.
-The thing we really care about is $|s_n^2-s^2|$, and the idea is basically this:
-since we can factor $s_n^2-s^2=(s_n-s)(s_n+s)$, it follows from the convergence of $\{s_n\}$ that as n goes to infinity, the first factor will go to zero, and the second factor will converge to $2s$. Therefore the whole right-hand side converges to zero, which shows that $\{s_n^2\}$ converges.
-To make everything rigorous, you have to rescale your $\epsilon$ because the ordinary sequence and the squared sequence converge at a different rate.
-We want to say $|s_n^2-s^2|<\epsilon '$, and this is not the same $\epsilon$ as in $|s_n-s|<\epsilon$, but the two are related. The important thing is once someone gives us the $\epsilon '>0$, how to find the N that satisfies the definition of convergence. We can do this by saying if some given N satisfies the definition for an ordinary sequence for some $\epsilon >0$, then the same N will work for the squared sequence, so long as we rescale our $\epsilon '$ to $\epsilon '=\epsilon(M+|s|)$.
In other words, whatever $N$ works for $\epsilon=\frac{\epsilon '}{M+|s|}$ will work for $|s_n^2-s^2|<\epsilon '$.<|endoftext|>
-TITLE: Limits: How to evaluate $\lim\limits_{x\rightarrow \infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x$
-QUESTION [65 upvotes]: This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions, and here: List of abstract duplicates.

-What methods can be used to evaluate the limit $$\lim_{x\rightarrow\infty} \sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x.$$
-In other words, if I am given a polynomial $P(x)=x^n + a_{n-1}x^{n-1} +\cdots +a_1 x+ a_0$, how would I find $$\lim_{x\rightarrow\infty} P(x)^{1/n}-x.$$
-For example, how would I evaluate limits such as $$\lim_{x\rightarrow\infty} \sqrt{x^2 +x+1}-x$$ or $$\lim_{x\rightarrow\infty} \sqrt[5]{x^5 +x^3 +99x+101}-x.$$

-REPLY [9 votes]: A possibly more elementary proof, based on $\frac{c^{n}-d^{n}}{c-d} = \sum_{k=0}^{n-1} c^{n-1-k} d^k$. Using this for $c = \sqrt[n]{ x^n+ \sum_{m=0}^{n-1} a_m x^m }$ and $d=x$:
-$$
- c - d = \frac{c^{n}-d^{n}}{ \sum_{k=0}^{n-1} c^{n-1-k} d^k } = \frac{ a_{n-1} x^{n-1} + \ldots + a_1 x + a_0}{ x^{n-1} \sum_{k=0}^{n-1} (\frac{c}{d})^{n-1-k} } = \frac{ a_{n-1} + a_{n-2} x^{-1} + \ldots + a_0 x^{1-n}}{\sum_{k=0}^{n-1} (\frac{c}{d})^{n-1-k} }
-$$
-Now $\lim_{x\to \infty} \frac{c}{d} = \lim_{x \to \infty} \sqrt[n]{ 1 + \frac{a_{n-1}}{x} + \ldots + \frac{a_0}{x^n} } = 1$. This gives $\frac{a_{n-1}}{n}$.<|endoftext|>
-TITLE: Alternative notation for exponents, logs and roots?
-QUESTION [156 upvotes]: If we have
-$$ x^y = z $$
-then we know that
-$$ \sqrt[y]{z} = x $$
-and
-$$ \log_x{z} = y .$$
-As a visually-oriented person I have often been dismayed that the symbols for these three operators look nothing like one another, even though they all tell us something about the same relationship between three values.
-Has anybody ever proposed a new notation that unifies the visual representation of exponents, roots, and logs to make the relationship between them more clear? If you don't know of such a proposal, feel free to answer with your own idea.
-This question is out of pure curiosity and has no practical purpose, although I do think (just IMHO) that a "unified" notation would make these concepts easier to teach.

-REPLY [4 votes]: Preamble
-The question asked is "Has anyone ever considered alternative notation?" I think it is almost certain that many people have, but it is equally certain that no such notation has caught on. Most of the other answers here discuss proposed notations, none of which seem to have any traction (or even use) in the wider mathematical community.
-As such, I am presenting this answer as a frame challenge, which is meant to address what I perceive as the underlying problem, as well as some of the misconceptions which have prompted this question.
-Misconceptions in the Question
-In the question, it is asserted that if $x^y = z$, then "we know that"
-$$ x = \sqrt[y]{z} \qquad\text{and}\qquad y = \log_x(z). $$
-This is incorrect.

-If we assume that $x$ is a real variable, that $n$ is a given natural number, and that $a$ is a real number, then the equation
-$$ x^n = a \tag{1}$$
-has $n$ complex solutions. If $n$ is odd, one of those solutions will be real; if $n$ is even and $a > 0$, then two of those solutions will be real. The notation $\sqrt[n]{a}$, depending on context, denotes either the real solution of (1) (if $n$ is odd) or the nonnegative real solution of (1) (if $n$ is even and $a > 0$).
-It is not obvious what $\sqrt[n]{a}$ should mean if $n$ is not a positive integer, though there are reasonable ways of defining this notation in terms of (1). For example, if $n$ is a natural number, we could define
-$$ \sqrt[-n]{a} = a^{-1/n} = \frac{1}{\sqrt[n]{a}},$$
-but we don't typically do that, and instead rely on exponential notation alone.
-More generally, if $x$ is a complex variable, and $a$ and $n$ are complex constants, then $\sqrt[n]{a}$ typically denotes the principal $n$-th root of $a$, which can be defined in a couple of slightly different ways, but (writing $a = r\mathrm{e}^{i\theta}$) generally means something like $r^{1/n} \mathrm{e}^{i\theta/n}$. However, in this setting, there is enough ambiguity that one would usually choose to use more explicit notation in terms of the complex logarithm / exponential.

-If we assume that $y$ is a real variable and that both $a$ and $b$ are positive real constants (with $b \neq 1$), then the equation
-$$ b^y = a \tag{2}$$
-has a unique solution:
-$$y = \log_b(a) = \frac{\log(a)}{\log(b)}, $$
-where $\log$ denotes the natural logarithm (or the common logarithm, or any logarithm with a fixed base—it doesn't really matter). On the other hand, as soon as we allow either $a$ or $b$ to be something other than a positive real number (say, a negative real number, or a complex number), things immediately become much more complicated, and (2) has many solutions (or none at all, depending on context).

-In either case, we can make sense of the assertions stated in the original question if we first restrict the sets of numbers we are willing to consider. The suite of conclusions stated holds if we assume that $x$ and $y$ are positive real numbers and that $n$ is a natural number.
-History
-It is worth noting that the notations adopted here come from a few very different historical antecedents. It is only in relatively recent mathematical history that the link between exponential, logarithmic, and radical functions was well understood and articulated. The notation reflects this.
-While I am not an expert in this area, my understanding is that something like the following is true:

-Radical notation stems from classical Greek thinking (this is not to say that the Greeks used this notation; only that it reflects their way of thinking about problems). To the Greek way of thinking, a number represented a physical quantity—a length, or an area, or a volume. A number is inherently represented by a length, multiplication of two lengths gives an area, and so on. In this mode of thinking, it is very reasonable to ask "If the area of a square is $a$, what is the length of each side of that square?"
-In this framework, $a$ is a positive number, and the exponent ($2$) is a natural number. The square root of $a$ is then that side length (a positive number). Similarly, the cube root of $a$ is the length of the side of a cube which has volume $a$ (again, both the cube root and $a$ are positive numbers).
-The notation $\sqrt[n]{a}$ reflects this history—unless one has explicitly noted otherwise, $n$ is a natural number, and $a$ is a positive real number. We certainly can extend the definition of the radical notation, but my impression is that this is rarely done.

-While this is the bit that I am most uncertain about, my understanding is that a more broadly defined real exponential function, i.e. $x \mapsto \exp_a(x)$, comes about with the rise of calculus in the mid-17th century.
In this context, we assume that $a > 0$ and that $x$ is a real variable, which allows us to talk about rates of changes which are proportional to the underlying variable (e.g. the rate at which a colony of bacteria grow is proportional to the size of that colony: -$$ \frac{\mathrm{d}P}{\mathrm{d}t} = kP, $$ -where $k$ is some intrinsic growth rate). -In this setting, $\exp(x)$ (the natural exponential function) is the unique solution to the initial value problem -$$ \frac{\mathrm{d}y}{\mathrm{d}x} = y, \qquad y(0) = 1. $$ -It can be observed that $\exp(x)$ has a lot of the same properties as $\mathrm{e}^x$ (where the former notation indicates the solution to an IVP, and the latter notation indicates "repeated multiplication"), but showing that these are the same requires a little bit of work. -As such, it is probably healthy to use different notation for these two notions. - -Logarithms were first developed in the 16th century by John Napier. In modern language and notation, Napier observed that there is a natural isomorphism between the multiplicative group of positive real numbers, and the additive group of all real numbers. As such, if you wanted to multiply two real numbers, it might save you some time to look up the logarithms of those two numbers in a table, add the results, and then take an antilogarithm (find the number in your table of logarithms whose logarithm is your sum). -Addition and two or three table lookups are relatively quick when compared to multiplying to very large or very precise numbers, so books of logarithms proved to be quite useful. The fundamental idea is that a logarithm is a function $f$ which satisfies the functional equation -$$ f(x+y) = f(x)f(y). $$ -Napier's tables of logarithms took $f$ to be the common logarithm (that is, the logarithm with base 10), but it turns out that any log will do. -Again, it is possible to show that the exponential function with base $a$ and the logarithmic function with base $a$ are inverse to each other, but this is a later historical development. - - -Pedagogy -Thus far, I have claimed that the notations $a^x$, $\sqrt[n]{a}$, $\exp_a(x)$, and $\log_a(x)$ have distinct historical motivation, and were originally understood as representing very distinct notions. However, this does not necessarily mean that we should continue to regard them as distinct, nor that would should not adopt common notation. -Indeed, this brings me to the crux of what I believe this question is about, and to the nut of my answer as a frame challenge. The underlying question is not "Has anyone considered alternative notation?", but rather "Why don't we use and/or teach alternative notation?" and, perhaps, "Should we use alternative notation?" -An entirely correct, but also useless, answer is that we don't use alternative notation because we don't. Expanding on this a little, mathematical notation is a kind of language which we use to transmit ideas. Like all language, mathematical notation evolves over time, and is the product of human interaction. We don't use alternative notation because (a) the notation we have is sufficiently well understood by "fluent" or "native" speakers of mathematics, and (b) because the community of people who practice mathematics have never felt a need for such notation—it simply hasn't proved useful enough to dispense with the old notation. 
-In other words, the language of mathematical notation has not evolved a new set of notation for these relationships because the native speakers of that language have not felt a need for it.
-Which brings us to the neophyte speakers—the students who might find a "unified" notation "easier to learn".
-In principle, an instructor could introduce a new notation and teach that to students. Indeed, I have sometimes been tempted to dispense with $\pi$ and instead use the notation $\tau$ ($=2\pi$) when teaching trigonometry.
-However, I think that such an action would ultimately do an incredible disservice to students. The goal of mathematics instruction is (or, at least, should be) to teach students to "do" mathematics as it is currently "done" by professionals in the community. Part of this requires that we teach students to use the language and notation of actual working mathematicians.
-As such, students need to be familiar (and even comfortable) with exponential notation, logarithmic notation, function notation, radical notation, and so on. They should be taught the subtle distinctions between these different notations, and should understand when and why one notation might be preferable over another.
-Epilog
-To completely unify the notation, we can start by writing
-$$ z = x^y = \exp(y \operatorname{Log}(x) + i2k\pi) \qquad\text{or}\qquad \exp(\operatorname{Log}(z) + i2k\pi) = \exp(y \operatorname{Log}(x)), $$
-where we assume that $x,y,z\in\mathbb{C}$, $\operatorname{Log}$ is the principal branch of the complex logarithm, $\exp$ is the complex exponential function, and $k$ is any integer.
-In this notation, we get something like[1]
-$$ x = \exp\left( \frac{\operatorname{Log}(z) + i2k\pi}{y}\right), $$
-and
-$$ y = \frac{\operatorname{Log}(z) + i2k\pi}{\operatorname{Log}(x)}. $$
-In other words, there is existing notation which already unifies the various notations for exponentiation, roots, and logarithms. It isn't necessarily "pretty", and it isn't appropriate for elementary students, but it already exists.
-
-[1] I will note that I have been a little sloppy in solving for $x$ and $y$ under the assumption that $x$, $y$, and $z$ are complex. What I have written should be fine if $y$ and $z$ are real, but I was not too careful about chasing complex exponents around.<|endoftext|>
-TITLE: Does a topological group need to have a uniformity making all group operations uniformly continuous?
-QUESTION [6 upvotes]: Let $G$ be a topological group. $G$ comes equipped with a left (resp. right) uniformity $\mathscr{L}$ (resp. $\mathscr{R}$) which can be characterized as the coarsest uniformity which is compatible with the topology and which makes $x \mapsto gx$ (resp. $x \mapsto xg$) a uniformly continuous map $G \to G$ for all $g \in G$.
-Edit: My question is now just:
-
-Is there necessarily a uniformity on $G$ compatible with the topology which makes all left and right multiplication maps uniformly continuous? Bonus points if multiplication $G \times G \to G$ (using the product uniformity on $G \times G$) is uniformly continuous or inversion is continuous.
-
-As Harry Altman points out, there must be (as for any uniformizable space) a finest uniformity $\mathscr{U}$ on $G$ compatible with the topology. Since the uniformities on $G$ form a (complete) lattice there is also a coarsest uniformity $\mathscr{V}$ refining both $\mathscr{L}$ and $\mathscr{R}$. Any uniformity which answers my question must sit between $\mathscr{V}$ and $\mathscr{U}$.
Such a uniformity is automatically compatible with the topology since it will sit between, say, $\mathscr{L}$ and $\mathscr{U}$, which are compatible with the topology.
-
-REPLY [4 votes]: I'm adding another answer based on the answer to this question over on MathOverflow (thanks to Todd Eisworth and Julien Melleray), and some other things.
-The answer to your modified question is yes.
-Given a topological group, one can take the meet of its left and right uniformities to get the Roelcke uniformity. Even though meets of uniformities are nasty in general, in this case the result is quite nice and we get the original topology back. The Roelcke uniformity can be described quite simply as the uniformity generated by the entourages $\{ (x,y): x\in VyV\}$ for $V$ a neighborhood of the origin. And, in fact, the Roelcke uniformity makes both left and right translation uniformly continuous, as well as inversion, thus answering your question.
-I don't know if or to what extent the Roelcke uniformity makes the multiplication map as a whole uniformly continuous, but it does work with both sorts of translations (and inversion), as you wanted.
-(By contrast, if you take the join of the two uniformities as I originally suggested, to get the two-sided uniformity, while this does make inversion uniformly continuous, it doesn't make left-translation or right-translation uniformly continuous unless the group was balanced to begin with (i.e. the left and right uniform structures were the same). This is despite the fact that in general joins of uniformities are much nicer than meets of uniformities.) The preceding paragraph was struck out; see the comments. I think it actually does make both of them uniformly continuous? A basis for this uniformity is the entourages $\{(x,y): x^{-1}y, xy^{-1}\in V\}$ for $V$ a neighborhood of the origin.
-A good source for this stuff seems to be Topological Groups and Related Structures by Arhangel'skii and Tkachenko.<|endoftext|>
-TITLE: Undergraduate mathematics programs
-QUESTION [15 upvotes]: It has come time for me to decide where to pursue my undergraduate education. I plan to pursue a PhD in mathematics, so accordingly my primary concern is the mathematics programs at various institutions. While I realize that college selection is a personal process, I would like to hear some input from people familiar with the programs I am considering.
-My interests currently lie in algebra and dynamical systems, but this is largely because I have not had the chance to study other subjects. That being said, I am most interested in the theoretical math programs of these institutions. It is also my hope that I will be able to perform research with professors as an undergraduate. One thing I am really looking for is to find a "community of scholars" where I can talk with other students about real mathematics on a regular basis. Essentially, I've narrowed my search down to two schools to which I've been admitted:
-Chicago: Top ranked program in mathematics and chemistry (my secondary interest). The mathematics department offers a wide variety of classes in various specialties, and faculty are first-rate research mathematicians in most fields. It also has a large body of skilled undergrad and grad students to help create a "community of scholars". However, the school also has a reputation as "where fun goes to die" (this is in fact their mantra), and the large undergraduate and graduate student bodies mean that I may have to fight for attention.
-Harvey Mudd: Well-respected math and science college. From what I've heard and my own experiences visiting, the faculty are some of the best teachers out there. Also, the lack of a graduate program means that as an undergraduate I would have a much easier time getting the attention of professors. However, their course offerings are narrower and not as in-depth (their highest course is algebraic geometry, in the language of varieties rather than schemes). Faculty members, as teaching faculty, generally do not have strong research records.
-If anyone has experience with these institutions and could tell me about them in general or their math programs in particular, especially to address my above concerns, I would greatly appreciate it. I will not in any way base my decision off these responses, I am simply looking for some things to think about.
-To the moderators: I debated for quite a while as to whether this question was suitable for this site before posting it. If you feel it is not, please inform me so that I may delete it.
-
-REPLY [8 votes]: I am a current fourth year mathematics (and physics) major at Chicago. Let me just say that the mathematics department is absolutely top notch. In particular, the department treats its undergraduates exceptionally well. They really do care about their undergraduates here. By the way, I think that this is something that is often overlooked when applying to college (I know I did): departmental rankings are not very reflective of the quality of the undergraduate experience you will receive there. My guess is that this is mostly because departments tend to be ranked by the quality of research they produce and the quality of the grad students they produce, not so much the quality of the undergraduates they produce. Honestly, if you want to get an idea of how good undergraduate programs are at particular institutions, you'll probably just have to ask current and former students there.
-While I could go on and on about how amazing the undergraduate mathematics program is here, unfortunately I have little else positive to say about the school. As a matter of fact, I've had quite a miserable experience here, and I would HIGHLY recommend seeking out other institutions with similarly strong undergraduate mathematics departments. In particular, I have absolutely nothing positive to say about the physics department and have heard similar horror stories about the chemistry department (although I personally can't vouch for this).
-In any case, before you decide on coming here, make sure you can handle years of classes that have absolutely nothing to do with mathematics, e.g., "Philosophical Perspectives on the Humanities", "Self, Culture, Society", etc. I actually have a fair amount of interests outside of mathematics, and was still unable to find anything in the core here even remotely bearable.
-Of course, feel free to contact me at jgleason@uchicago.edu if you have any questions.<|endoftext|>
-TITLE: Rate of decay of Fourier coefficients vs smoothness
-QUESTION [7 upvotes]: Suppose $f \in L^1$, $2\pi$ periodic and that the Fourier coefficients decay with order $|n|^{-k}$, $k \gt 2$.
-Show that the derivative of $f$ is continuous.
-I read that the rate of decay of Fourier coefficients relates to the "smoothness" of the function. But I'm not sure how to formalize my argument for this question.
-
-REPLY [2 votes]: [Notation. $a_n \lesssim b_n$ means: there exists some positive constant $C$ s.t. $a_n \le C b_n$.]
-A rough'n'ready argument would be:
-let
-$$c_n=\frac{1}{2 \pi} \int_{-\pi}^{\pi}f(y)e^{-i n y}\, dy$$
-and write
-$$f(x)=\sum_{n\in \mathbb{Z}} c_n e^{i n x}\quad (1)$$
-The decay condition on $c_n$ implies uniform convergence of this series:
-$$\lvert c_n e^{i n x} \rvert \le \lvert n\rvert^{-k}\lvert n^k c_n \rvert \lesssim \lvert n\rvert^{-k}$$
-and $\sum_{n \in \mathbb{Z}} \lvert n\rvert^{-k}$ is a convergent numerical series. Now differentiate (1) termwise: you get
-$$\sum_{n \in \mathbb{Z}}i n c_n e^{i n x}$$
-which is again a uniformly convergent series:
-$$\lvert i n c_n \rvert \lesssim \lvert n\rvert^{1-k}.$$
-So (1) is a uniformly convergent series whose term-by-term derivative is uniformly convergent. This implies that $f$ is differentiable and
-$$f'(x)=\sum_{n \in \mathbb{Z}}i n c_n e^{i n x}$$
-so that, again by uniform convergence, $f'$ is also continuous.<|endoftext|>
-TITLE: Are the singular values of the transpose equal to those of the original matrix?
-QUESTION [19 upvotes]: It is well known that the eigenvalues of a real symmetric matrix $A$ are the same as those of its transpose $A^\dagger$.
-This made me wonder:
-Can I say the same about the singular values of a rectangular matrix? So basically, are the eigenvalues of $B B^\dagger$ the same as those of $B^\dagger B$?
-
-REPLY [27 votes]: Both eigenvalues and singular values are invariant under matrix transpose, no matter whether the matrix is square or rectangular.
-The definition of the eigenvalues of $A$ (which must be square) is the $\lambda$ that makes
-$$\det(\lambda I-A)=0$$
-For $A^T$, $\det(\lambda I-A^T)=0$ is equivalent to $\det(\lambda I-A)=0$ since the determinant is invariant under matrix transpose. However, the transpose does change the eigenvectors.
-It can also be demonstrated using the Singular Value Decomposition. A matrix $A$, square or rectangular, can be decomposed as
-$$A=U\Sigma V^T$$
-Its transpose can be decomposed as $A^T=V \Sigma^T U^T$.
-The transpose changes the singular vectors. But the singular values are preserved.<|endoftext|>
-TITLE: Integral yielding part of a harmonic series
-QUESTION [6 upvotes]: Why is this true?
-$$\int_0^\infty x \frac{M}{c} e^{(\frac{-x}{c})} (1-e^{\frac{-x}{c}})^{M-1} \,dx = c \sum_{k=1}^M \frac{1}{k}.$$
-I already tried substituting $u = \frac{-x}{c}$. Thus, $du = \frac{-dx}{c}$ and $-c(du) = dx$. Then, the integral becomes (after cancellation) $\int_0^\infty c u M e^u (1-e^u)^{M-1}\,du$.
-I looked at integral-table.com, and this wasn't there, and I tried wolfram integrator and it told me this was a "hypergeometric integral".
-Thanks,
-
-REPLY [4 votes]: Suppose that $X_1,\ldots,X_M$ are independent exponential random variables with mean $c$, so that their pdf and cdf are given by $f(x)=c^{-1}e^{-x/c}$ and $F(x)=1-e^{-x/c}$, $x \geq 0$, respectively. Let $Y_M=\max \{X_1,\ldots,X_M\}$. Then, $Y_M$ has cdf $F_M (x) = F(x)^M$ and hence pdf $f_M (x) = M F(x)^{M-1} f(x)$. Thus, the expectation of $Y_M$ is given by
-$$
-{\rm E}[Y_M ] = \int_0^\infty {xMF(x)^{M - 1} f(x)dx} = \int_0^\infty {x\frac{M}{c}e^{ - x/c} (1 - e^{ - x/c} )^{M-1}dx}.
-$$
-So you want to know why
-$$
-{\rm E}[Y_M] = c\sum\limits_{k = 1}^M {\frac{1}{k}}.
-$$
-Now, $Y_M$ is equal in distribution to $E_1 + \cdots + E_M$ where the $E_k$ are independent exponentials and $E_k$ has mean $c/k$; for this fact, see James Martin's answer to this MathOverflow question (where $c=1$). The result is thus established.
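-For a quick sanity check, here is a small Monte Carlo sketch in Python (the values of $c$, $M$ and the sample size are arbitrary, and the helper names are ad hoc):
-
-import random
-from math import fsum
-
-random.seed(1)
-c, M, trials = 2.0, 5, 200000
-# random.expovariate takes a rate, so mean c corresponds to rate 1/c
-mean_max = fsum(max(random.expovariate(1 / c) for _ in range(M))
-                for _ in range(trials)) / trials
-print(mean_max)                                  # ~ 4.567
-print(c * fsum(1 / k for k in range(1, M + 1)))  # c*H_5 = 4.5666...
-
-The empirical mean of the maximum matches $c H_M$ to the accuracy one expects from this many samples.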
-EDIT: As an additional reference, see Example 4.22 on p. 157 in the book Probability, stochastic processes, and queueing theory by Randolph Nelson.
-EDIT: It is interesting to note that
-$$
-\int_0^\infty {x\frac{M}{c}e^{ - x/c} (1 - e^{ - x/c} )^{M - 1} dx} = c\int_0^1 { - \log (1 - x^{1/M} )dx} .
-$$
-This follows by using a change of variable $x \mapsto x/c$ and then $x \mapsto (1-e^{-x})^M$.
-So, this gives the following integral representation for the $M$-th harmonic number $H_M := \sum\nolimits_{k = 1}^M {\frac{1}{k}}$:
-$$
-H_M = \int_0^1 { - \log (1 - x^{1/M} )dx}.
-$$
-Finally, it is both interesting and useful to note
-$$
-H_M = \int_0^1 {\frac{{1 - x^M }}{{1 - x}}dx} = \sum\limits_{k = 1}^M {( - 1)^{k - 1} \frac{1}{k}{M \choose k}},
-$$
-see Harmonic number.
-With the above notation, the right-hand side corresponds to
-$$
-{\rm E}[Y_M] = \int_0^\infty {{\rm P}(Y_M > x)dx} = \int_0^\infty {[1 - F_M (x)]dx} .
-$$<|endoftext|>
-TITLE: Formula relating covariant derivative and exterior derivative
-QUESTION [5 upvotes]: According to Gallot-Hulin-Lafontaine one has
-$$d\alpha (X_0,\cdots,X_q) = \sum_{i=0}^q (-1)^i D_{X_i} \alpha (X_1,\cdots,X_{i-1},X_0,X_{i+1},\cdots,X_q)$$
-It seems to me that it should be
-$$d\alpha (X_0,\cdots,X_q) = \sum_{i=0}^q (-1)^i D_{X_i} \alpha (X_0,\cdots,\hat{X_i},\cdots,X_q)$$
-Is this right ?
-
-REPLY [4 votes]: If $\theta$ is a 1-form, then
-$$
-d\theta(X,Y) = (\nabla_X\theta)(Y) - (\nabla_Y\theta)(X)
-$$
-If $\Omega$ is a 2-form, then
-$$
-d\Omega(X,Y,Z) = (\nabla_X\Omega)(Y,Z)+(\nabla_Z\Omega)(X,Y)+ (\nabla_Y\Omega)(Z,X)
-$$
-and so on ... but you have to have a zero-torsion (symmetric) connection. These formulae will be useful when manipulating structure equations, for instance to obtain Bianchi identities. -- Salem<|endoftext|>
-TITLE: Centralizer of $Inn(G)$ in $Aut(G)$
-QUESTION [5 upvotes]: Can the centralizer of Inn(G) in Aut(G), where G is preferably any non-abelian finite one, be equal to Inn(G) itself? Clearly, such a centralizer contains all $f$ in Aut(G) for which $f(g)g^{-1}$ lies in Z(G) for every $g$.
-
-REPLY [4 votes]: I was fiddling with this, but never had time to finish. Maybe it is interesting to someone.
-
-We define two bijections that help to describe the centralizer of the inner automorphism group in the full automorphism group.
-
-exp : Hom(G,Z(G)) → CAut(G)(Inn(G)) : ζ ↦ ( g ↦ gζ(g) )
-log : CAut(G)(Inn(G)) → Hom(G,Z(G)) : φ ↦ ( g ↦ g−1 φ(g) )
-
-We need to show these are well-defined (that is, they have the specified ranges).
-Proof: If ζ is a homomorphism from G to Z(G), then exp(ζ) : G → G : g ↦ gζ(g) is indeed an automorphism, with inverse exp(−ζ) defined by −ζ, the homomorphism from G to Z(G) that takes g to ζ(g)−1. The inner automorphism defined by h takes g to h−1gh = gh. If one applies exp(ζ) and then conjugation by h, one gets ghζ(g), since ζ(g) is central and so ζ(g)h = ζ(g). On the other hand, if one conjugates by h and then applies exp(ζ), then one gets ghζ(gh), but since ζ is a homomorphism into an abelian group, ζ is constant on conjugacy classes, and ghζ(gh) = ghζ(g). Hence every exp(ζ) is in CAut(G)(Inn(G)). Conversely, if φ is an automorphism of G commuting with all inner automorphisms, then φ(x)φ(h) = φ(xh) = φ(x)h, so φ(h) and h define the same automorphism of G, and so φ(h) and h differ by some element of the center of G. Define log(φ):G→Z(G) implicitly from φ(h) = h log(φ)(h). It is not hard to check that log(φ) is a homomorphism from G to Z(G).
-Notice that log(exp(ζ)) = ζ and exp(log(φ))=φ, so these are mutually inverse bijections.
-
-Suppose then that Inn(G) = CAut(G)(Inn(G)).
This means that exp(Hom(G,Z(G))) = Inn(G). This means that log : Inn(G) → Hom(G,Z(G)) is bijective! However, log(∧h) = ( g ↦ [g,h] ) has a very simple form for the inner automorphism ∧h defined by conjugation by h. In particular, we need [G,G] ≤ Z(G) for log(∧h) to even make sense, and so G has nilpotency class at most 2. Notice that the kernel of log(∧h) contains all of Z(G), for any h. Indeed, the kernel of log(∧h) is CG(h), and so all centralizers are normal, which is weird.
-Another sick thing: log(∧hk)(g) = [g,hk] = [g,k][g,h] = [g,h][g,k] = (log(∧h) + log(∧k))(g), so in fact log is an isomorphism of abelian groups G/Z(G) ≅ Hom(G,Z(G)).
-Unfortunately, I cannot find a good element of Hom(G,Z(G)). The transfer is useless, and the central series induction arguments give inequalities pointing the wrong way.
-For finite groups, presumably this is taken care of by counting as in theorem 1 of:
-
-Sanders, P. R.
- "The central automorphisms of a finite group."
- J. London Math. Soc. 44 (1969) 225–228.
- MR248208
- DOI:10.1112/jlms/s1-44.1.225
-
-but it didn't seem to address infinite (nilpotent) groups. The way it is done in this (very nice) paper mentioned by Steve certainly doesn't work out for infinite groups. In particular, Lemma I would be wonderful to have more concretely with homomorphisms rather than dualizing finite abelian groups a zillion times.
-
-Curran, M. J.; McCaughan, D. J.
- "Central automorphisms that are almost inner."
- Comm. Algebra 29 (2001), no. 5, 2081–2087.
- MR1837963
- DOI:10.1081/AGB-100002170<|endoftext|>
-TITLE: injective $R$-module homomorphism vs. injective ring homomorphism
-QUESTION [6 upvotes]: The following question has been lingering in my mind for months.
-Let $R$ be a non-zero commutative ring with $1$. Consider $\phi : R^n \rightarrow R^m$,
-1) as an injective $R$-module homomorphism.
-2) as an injective ring homomorphism. (by definition $\phi(1)=1$.)
-In which of the above cases can we deduce that $n \leq m$, and why?
-
-REPLY [4 votes]: Let $R = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \times \ldots \times \mathbb{Z}/2\mathbb{Z}$ (infinitely many times). Then as a ring $R^m = R^n = R$ $\forall m, n \in \mathbb{N}$. So the answer is negative in case 2. But in the first case the answer is YES. The proof for a general commutative ring is complicated. Here I am giving an easy proof assuming $R$ is a commutative noetherian ring. After localizing at a minimal prime ideal we may assume that $R$ is a zero-dimensional local ring, i.e. an artinian ring, and $\phi : R^n \rightarrow R^m$ is an injective $R$-module homomorphism. Now the length of $R$ as an $R$-module is finite, equal to $l$ (say). Then comparing the lengths of both sides we have $ln \leq lm$. This means that $n \leq m$.<|endoftext|>
-TITLE: Unique quadratic subfield of $\mathbb{Q}(\zeta_p)$ is $\mathbb{Q}(\sqrt{p})$ if $p \equiv 1$ $(4)$, and $\mathbb{Q}(\sqrt{-p})$ if $p \equiv 3$ $(4)$
-QUESTION [21 upvotes]: I want to prove the assertion:
-
-The unique quadratic subfield of $\mathbb{Q}(\zeta_p)$ is $\mathbb{Q}(\sqrt{p})$ when $p \equiv 1 \pmod{4}$, respectively $\mathbb{Q}(\sqrt{-p})$ when $p \equiv 3 \pmod{4}$.
-
-My first attempt is this. In $\mathbb{Z}[\zeta_p]$, $1-\zeta_p$ is prime and
-$$
-p = \epsilon^{-1} (1-\zeta_p)^{p-1}
-$$
-where $\epsilon$ is a unit. Since $p$ is an odd prime, $(p-1)/2$ is an integer and
-$$
-\sqrt{\epsilon p } = (1-\zeta_p)^{(p-1)/2}
-$$
-makes sense and belongs to $\mathbb{Z}[\zeta_p]$.
-How do I deal with the $\epsilon$ under the square root?
I guess the condition on the congruence class of $p$ comes from that. Is this even the right way to proceed? -Uniqueness is not clear to me either. I thought about looking at the valuation $v_p$ on $\mathbb{Q}$, extending it to two possible quadratic extensions beneath $\mathbb{Q}(\zeta_p)$, then seeing how those have to extend to common valuations on $\mathbb{Q}(\zeta_p)$, but I didn't see how to make it work. -I would appreciate some help. - -REPLY [13 votes]: My favorite way to prove this is to explicitly write down the quadratic Gauss sum: $$g_p = \sum_{a \in \mathbb F_p} \left( \frac{a}{p} \right) \zeta_p^{a}$$ -Then you can show $g_p^2 = (-1)^{\frac{p-1}{2}} p$ by direct manipulation. This gives the result very explicitly!<|endoftext|> -TITLE: Logical relations between relations -QUESTION [5 upvotes]: I'm interested in properties of relations. Things like completeness (connected, total), transitivity, euclideanness, symmetry and so on. I am interested in the logical connections between these relations. For example, symmetry implies not asymmetry. Or a reflexive, weakly connected relation is complete. -Is there a neat summary of these sorts of properties and their connections? - -REPLY [2 votes]: Relation between relations, huh? A compilation of properties of relation classes, and then how those properties are related? -Wikipedia on Binary relations has a table near the bottom where you can compare relation classes a little. Mostly these kinds of comparisons are straightforward to prove. -Unexpectedly, an area where these properties are manipulated and interact is in the area of Modal logic, where a given axiom implies a relation among the worlds of a Kripke structure. A number of very minor derivations are of the form "S4 + X = S5, because adding the X axiom adds the symmetric property to a transitive worlds relation which implies that it is an equivalence relation" (modulo actual correct use of those properties!).<|endoftext|> -TITLE: Multiplication of block matrices -QUESTION [11 upvotes]: My textbook says that multiplying block matrices works the same as regular matrix multiplication (granted the dimensions of the submatrices are appropriate). Wikipedia also has an example saying so. -It seems like proving it is purely technical and yet I'm having trouble putting it into words. What would be a good way to go? - -REPLY [7 votes]: It is the same as regular multiplication, except that matrix multiplication is not usually commutative. This means we have to pay attention to the order in which our blocks are multiplied. -That said I think you can develop the notation and proof by bootstrapping the $2\times 2$ case. Suppose $A$ is a $2\times 2$ block matrix, say having $I+J$ rows and $K+L$ columns, so that the block in the upper left corner is $I\times K$, etc. 
Then when $B$ is a $2\times 2$ block matrix having $K+L$ rows and $M+N$ columns, the block multiplication $AB$ would be compatible:
-$$ A = \left( \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right) $$
-$$ B = \left( \begin{array}{cc} B_{11} & B_{12} \\ B_{21} & B_{22} \end{array} \right) $$
-$$ AB = \left( \begin{array}{cc} A_{11}B_{11}+A_{12}B_{21} & A_{11}B_{12}+A_{12}B_{22} \\ A_{21}B_{11}+A_{22}B_{21} & A_{21}B_{12}+A_{22}B_{22} \end{array} \right) $$
-Cases with more than two blocks per row or column can then be reduced to this simple case, by lumping blocks together and applying the multiplication recursively.
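-It is easy to check the $2\times 2$ case numerically; here is a small sketch in Python/NumPy (block sizes arbitrary), where np.block just stitches the four block products back together:
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-I, J, K, L, M, N = 2, 3, 4, 2, 3, 2
-A = rng.standard_normal((I + J, K + L))
-B = rng.standard_normal((K + L, M + N))
-A11, A12, A21, A22 = A[:I, :K], A[:I, K:], A[I:, :K], A[I:, K:]
-B11, B12, B21, B22 = B[:K, :M], B[:K, M:], B[K:, :M], B[K:, M:]
-blockwise = np.block([
-    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
-    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
-])
-print(np.allclose(blockwise, A @ B))  # True
-
-A numerical check is not a proof, of course, but it is a useful guard against misremembering which blocks multiply which.<|endoftext|>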
-TITLE: Representing Ternary as Binary: Probability that the first $n$ bits are all zero
-QUESTION [6 upvotes]: I'm using the ternary numeral system, i.e. numbers in base 3. There's a catch: The numbers I'm representing will never have a 2 as a digit. For instance, I use 0,1,10,11,100,101,110,111,... I.e. they look like binary digits, but they're really ternary numbers.
-Now I'd like to know, for a random ternary number, in this form, of $m$ digits, what is the probability that the first (least significant) $n$ bits are all zero?
-EXAMPLE
-For $m=3$, we'd have the following numbers written out in base 3 (remember they only look like they're bits): 000,001,010,011,100,101,110,111. These correspond to the decimal values 0,1,3,4,9,10,12,13 respectively. So representing these in binary, they are 0,1,11,100,1001,1010,1100,1101 respectively. If I take $n=2$, I'd be finding the probability that the 2 least significant bits are all zero. Of the numbers in binary, 0,100, and 1100 have the 2 least significant bits as zero. So out of the 8 possible numbers that I have, 3 have this property. So the probability that I'm looking for is 3/8.
-Now, I'm asking, for generalized $m$ and $n$ using this system, what are the probabilities?
-
-REPLY [4 votes]: Let $X(m)$ be the $\mathbb{Z}_{2^n}$-valued random variable
-$\sum_{0\leq k<|endoftext|>
-TITLE: Delta function integrated from zero
-QUESTION [19 upvotes]: I am trying to understand the motivation behind the following identity stated in Bracewell's book on Fourier transforms: $$\delta^{(2)}(x,y)=\frac{\delta(r)}{\pi r},$$ where $\delta^{(2)}$ is a 2-dimensional delta function. Starting with something we know to be true, we can do $$1 = \iint \delta^{(2)}(x,y) dx\,dy = \int_0^\infty \int_0^{2\pi} \frac{\delta(r)}{\pi r} r\,dr\,d\theta = 2 \int_0^\infty \delta(r).$$
-This suggests that the integral of delta function from 0 to infinity is 1/2. In fact, this seems to make sense if we treat the delta function as a limiting case of an even function peaked at zero (Gaussian, sinc, etc.) However, Wikipedia, citing Bracewell, claims the following to be true:
-$$\int_0^\infty \delta(r-a) e^{-s r} dr = e^{-s a},$$
-and plugging in 0 for a and s we get $$\int_0^\infty \delta(r) dr = 1.$$
-What is going on here?.. Where's the screw-up?.. If the integral from 0 to infinity is not 1/2, then how do we justify the above polar-coordinate expression for a 2D delta function?..
-
-REPLY [6 votes]: The Wikipedia formula is only valid for $a>0$, but not for $a<0$ or $a=0$.
-The left hand side of their formula makes sense, however, and equals zero when $a<0$ and equals one-half (as you expect) when $a=0$.
-It may be easier to understand by rewriting the integral as
-$$ \int_{-\infty}^\infty \delta(r-a)\ e^{-s r}\ \mathbb{I}_{[0,\infty)}(r) \ dr$$
-where $\mathbb{I}$ is the indicator function of the positive half of the line.
-If you treat the delta function above as a limiting case of even functions peaked at zero, you will get the result for any value of $a$.
-
-REPLY [4 votes]: The equation you quote from Wikipedia (from this section) specifies a Laplace transform, and the article on the Laplace transform (in this section) states that the intended meaning of that integral is the limit as $0$ is approached from the left.<|endoftext|>
-TITLE: The mod 2 degree of a function when the image space N has a boundary
-QUESTION [6 upvotes]: I was flipping through Milnor's "Topology from the Differentiable Viewpoint," and I came upon a sentence concerning the mod 2 degree of a function from M to N. It essentially says:
-"We may as well assume also that N is compact without boundary, for otherwise the deg mod 2 would necessarily be zero."
-I understand why this is true for compactness. However, any ideas I have trying to prove the boundary part seem way too complex for a parenthetical aside.
-Any ideas are appreciated.
-Edit: Also, M and N are smooth manifolds of the same dimension, f is smooth, and M is assumed to be compact and boundaryless (so the definition of mod 2 degree makes sense).
-
-REPLY [3 votes]: One way to explain this is observing that Lefschetz duality gives you, when $N$ is $R$-orientable (for example, if $R=\mathbb Z/2\mathbb Z$) an isomorphism $H^n(N; R)\cong H_0(N,\partial N; R)$ and that if $\partial N$ intersects every path-component of $N$ then the latter group is zero.
-Alternatively, if $f:M\to N$ is a map between $n$-manifolds and $N$ has a non-empty boundary, you can show that $f$ is homotopic to a map which is not surjective and therefore of degree zero: if $B$ is a connected component of $\partial N$, then $B$ has a neighborhood diffeomorphic to $B\times[0,1)$, and then you can deform $f$ by sliding it along the $[0,1)$ factor into another function $g:M\to N$ which misses all of $B\times[0,\tfrac12)$.<|endoftext|>
-TITLE: Why is this entangled circle not a retract of the solid torus?
-QUESTION [8 upvotes]: I'm doing exercise 16 on page 39 in Hatcher:
-
-Show that there are no retractions $r: X \rightarrow A$ in the following cases:
-(a) $X = \mathbb{R}^3$ with $A$ any subspace homeomorphic to $S^1$
-(b) $X = S^1 \times D^2$ with $A$ its boundary torus $S^1 \times S^1$
-(c) $X = S^1 \times D^2$ and $A$ the circle shown in the figure.
-
-
-I've done (a) and (b) using proposition 1.17. i.e. I assumed there was a retraction, then the map between the fundamental groups has got to be injective, therefore contradiction.
-Now I'm stuck with (c) because according to my understanding the circle on the picture has the same fundamental group as $S^1$ which means it also has the same fundamental group as the solid torus.
-What is a different way of proving that there is no retraction (not using prop. 1.17.)? Many thanks for your help!
-
-REPLY [3 votes]: (i) If $f:X \rightarrow Y$ is a homotopy equivalence then the induced homomorphism $f_* : \pi_1(X, x_0) \rightarrow \pi_1(Y,f(x_0))$ is an isomorphism.
-(ii) If $X$ deformation retracts onto $A \subset X$ then $r$, the retraction from $X$ to $A$, is a homotopy equivalence.
-claim: There are no retractions $r:X \rightarrow A$
-proof: (by contradiction)
-Assume there was a retraction. Then by proposition 1.17. (Hatcher p.
36) the homomorphism induced by the inclusion $i_* : \pi_1(A, x_0) \rightarrow \pi_1(X,x_0)$ would be injective. -But $A$ deformation retracts to a point in $X$ so by (i) $i_*(\pi_1(A, x_0))$ is isomorphic to $\{ e \}$, the trivial group. Therefore $i_*$ cannot be injective. Contradiction. There are no retractions $r: X \rightarrow A$. -Can someone tell me if I got it right?<|endoftext|> -TITLE: A question about cyclic groups -QUESTION [5 upvotes]: How does one see that the cyclic group $C_n$ of order $n$ has $\phi(d)$ elements of order $d$ for each divisor $d$ of $n$? -(where $\phi(d)$ is the Euler totient function) - -REPLY [2 votes]: Recall $\:g^{\large i}\:$ has order $\: n/(i,n)\ $ for a generator $\:g\:$ of $\ C_n$ -Therefore $\ \ \displaystyle\ \ d\ =\ \frac{n}{(i,n)},\quad\ \ \ \ \ 0 \le i \le n$ -$\quad\displaystyle\iff\quad\ (id,\ nd)\ =\ n,\ \ \ \ \ \, 0 \le i \le n$ -$\quad\displaystyle\iff\quad \bigg(\frac{i\:d}{n},d\bigg)\ =\ 1,\ \ \ \ \: 0 \le i \le n,\ \ n\ |\ i\:d$ -$\quad\displaystyle\iff\quad\ \ \ (\ j,\ d)\ \ =\ \ 1, \ \ \ \ \:0\le j \le d$<|endoftext|> -TITLE: Question on conservative fields -QUESTION [9 upvotes]: I'm hoping to really knock out several questions I have in my mind with just this one. I've been doing a lot of practice problems on this topic, and although I get the right answers, I really don't know what the answers mean. So there's a theorem that says - -If F is a vector field defined on all - of $R^3$ whose component functions - have continuous partial derivatives and - curl F=0, then F is a conservative - vector field. - -So that's great, but it really doesn't give me an understanding of what that really means. -First of all, what does it mean for all of its component functions to have continuous partial derivatives. I mean I know how to determine if they are or not, but what does it mean if they are all continuous or if some are not, and how does that change what a conservative field is? -Second, what does it mean that the curl F=0, and why is it such an important occurrence that we gave it a special name such as conservative? -And lastly, (this I should know but sadly not), is there a difference in a vector field being defined on all of $R^3$ compared to a vector field being continuous on all of $R^3$. Do those mean the same thing? - -REPLY [7 votes]: There are a lot of questions here. Let me address Question 3 first: -Let $f$ be a function on some set $A$. When we say that $f$ is defined on $A$, we mean that $f(x)$ exists for every $x$ in $A$. However, just because $f$ is defined on $A$ doesn't mean that it is continuous on $A$: it may be continuous, or it may not be. The textbook intuition of continuity is that $f$ is continuous on $A$ if we can draw its graph without picking up our pencil. -Note, though, that to even talk about continuity on a set, our function has to be defined there first. -Now vector fields are really just a special type of function (one that inputs points and outputs vectors), so all of this applies to vector fields, too. - -Before I address Questions 1 and 2, let's first distinguish between a few concepts. -Let $F$ be a vector field defined, continuous, and having continuous partial derivatives on a region $U$ in $\mathbb{R}^3$. Consider the following four properties that $F$ may have: -(1) (Path-independence.) The line integral of $F$ between two points does not depend on the path chosen. 
That is, if $C_1$ and $C_2$ are paths between $a$ and $b$, then $$\int_{C_1} F\cdot dr = \int_{C_2} F\cdot dr.$$ -(2) (Integrals of closed loops are zero.) The line integral of $F$ around any closed curve is zero. That is, if $C$ is a closed curve, then $\int_C F\cdot dr = 0$. -(3) (Exactness.) There exists a scalar function $f$ with continuous partial derivatives such that $F = \nabla f$. -(4) (Closedness.) $\text{curl }F = 0$. -There are three important facts about these four properties: -Fact 1: Properties (1), (2), and (3) are equivalent. -That is, if $F$ satisfies any one of (1), (2), or (3), then it satisfies the other two. Any of these three properties may be taken as the definition of conservativity. -Fact 2: If $F$ is conservative (i.e. satisfies (1), (2), or (3)), then it also satisfies property (4). -Proof: If $F$ satisfies (3), say, then $\text{curl } F = \text{curl }\nabla f = 0$. -Fact 3: If in addition $F$ is defined on all of $\mathbb{R}^3$ and has continuous partial derivatives, then property (4) implies the first three. -In other words, if $F$ is defined on all of $\mathbb{R}^3$ (and has continuous partial derivatives), then all four concepts coincide. -However: As yoyo mentioned in the comments, property (4) is not in general equivalent to conservativity. The classic example is the vector field on $\mathbb{R}^2 \setminus (0,0)$ given by -$$F(x,y) = \left(\frac{y}{x^2 + y^2}, \frac{-x}{x^2 + y^2} \right).$$ -This vector field has the curious property that $\text{curl }F = 0$, yet does not satisfy any of the three (equivalent) conservativity properties. The reason for this is that $F$ is not defined on all of $\mathbb{R}^2$ because it is undefined at the origin $(0,0)$. -There are similar examples for $\mathbb{R}^3 \setminus (0,0)$ but I can't seem to come up with any right now. - -So now we can address Question 2. As I mentioned above, conservativity is not the same as saying that $\text{curl F} = 0$. However, if $F$ is defined on all of $\mathbb{R}^3$, then yes, they coincide. -So why is $\text{curl }F = 0$ such a good property to have? Well, the easy answer is that: (a) it's an easy property to check, and (b) in the event that $F$ is defined on all of $\mathbb{R}^3$, then we get the three (equivalent) conservativity conditions, which I hope you can see are good things to have. -The physical intuition behind conservativity is that it models conservative forces like gravity. In this scheme, $F$ would represent the force and $\int_C F\cdot dr$ would represent the work (energy) of applying $F$ to some object over a path $C$. It is intuitively clear that the work done by gravity (say) over any closed loop is zero: the potential energy hasn't changed any. - -Finally, let me address Question 1. Technically speaking, to talk about conservativity, you're right, we don't actually need the partial derivatives of $F$ to be continuous, or even to exist. That is, we can talk about properties (1), (2), and (3) above without the assumption that the partial derivatives of $F$ exist. -However, to talk about property (4), where we take the curl, we need to say that the partial derivatives of $F$ exist. Do we need to assume in addition that the partial derivatives are continuous? My guess is probably not (someone please correct me if I'm wrong). 
-TITLE: Logical reason for the intersection of an infinite family of open sets not being necessarily open
-QUESTION [6 upvotes]: First of all, rest assured that I have used the search tool and read Why do we require a topological space to be closed under finite intersection? and https://mathoverflow.net/questions/19152/why-is-a-topology-made-up-of-open-sets.
-I'm looking for a more logical approach on this issue. I understand intuitively the reasons why an intersection of infinitely many open sets may fail to be open, but only in the case of a topology induced by a metric. In the general sense, what would be a somewhat rigorous argument? I feel that some counterexamples aren't enough for me to deeply understand the motives. I've read somewhere that the intersection of an infinite family of sets is related to an infinite conjunction, which is not allowed in the usual logical framework we use for mathematics. Is this the issue? Could you explain it in more detail? Are infinite disjunctions permissible, then? Finally, if we were to work within an infinitary logic, would this "issue" disappear, and consequently, would the building of topology become somewhat trivial, or less rich and interesting?
-Thanks.
-
-REPLY [2 votes]: I'm not sure what you mean by a logical reason, but maybe the following is helpful. Essentially I'm going to suggest that it makes sense if we want to think about limits; otherwise we kind of force any space to either have isolated points or infinitesimally close elements.
-We want somehow to capture the idea of closeness without having to refer to distance. Imagine you want to be able to talk about a sequence of points $(p_0, p_1, ...)$ in $X$ approaching a point $x \in X$. The intuition is something like this: for any collection of points "near to" $x$ that we pick, we expect our sequence to eventually enter this collection and stay in it. That is, we want to talk about some family of "neighborhoods" (nearby areas) of $x$, and we want to say that for any neighborhood of $x$, the sequence eventually stays in it.
-Now, suppose we require that every arbitrary intersection of neighborhoods of $x$ still be considered a neighborhood of $x$. Then there are two possibilities. One possibility is that we can take some intersection and get $\{ x\}$ as a neighborhood. In this case, no sequence of other elements can converge to $x$. The second possibility is that there must be a distinct point $y\neq x$ such that $y\in U$ for every neighborhood $U$ of $x$ (because even the intersection over all neighborhoods of $x$ is not just $\{ x\}$). In this case, if we want to keep thinking of a neighborhood as a collection of nearby points, we're essentially viewing $y$ as infinitely close to $x$.
-So, since we might sometimes want to avoid having infinitesimally close elements (i.e. indistinguishable by the topology) and isolated points, we shouldn't require that every arbitrary intersection still be considered a neighborhood.<|endoftext|>
-TITLE: Proving That Multiplication among Interval Numbers is Associative
-QUESTION [5 upvotes]: I'm currently working on a problem where I need to provide a rigorous proof that multiplication among interval numbers is associative.
For those of you who haven't heard of an interval number before, here are some definitions:
-We say that $x$ is an interval number on $\mathbb{R}^n$ if $x \subset \mathbb{R}^n$ and $x = [x^L,x^U]$, where $x^L \leq x^U$ and $x^L, x^U \in \mathbb{R}^n$. Here $x^L \leq x^U$ is defined component-wise, so if $x^L = (x^L_1,...x^L_n)$ and $x^U = (x^U_1,...x^U_n)$ then $x^L \leq x^U$ means $x^L_1 \leq x^U_1, ... x^L_n \leq x^U_n$.
-Given two interval numbers $x$ and $y$ on $\mathbb{R}^n$, where $x = [x^L,x^U]$ and $y = [y^L,y^U]$, their product is another interval $xy$ that has the form $[\min\{x^Ly^L,x^Ly^U,x^Uy^L,x^Uy^U\}, \max\{x^Ly^L,x^Ly^U,x^Uy^L,x^Uy^U\}]$.
-Basically what I am trying to show is $x(yz) = (xy)z$ where $x,y,z \subset \mathbb{R}^n$ are interval numbers such that $x = [x^L,x^U]$, $y =[y^L,y^U]$ and $z = [z^L,z^U]$.
-
-REPLY [4 votes]: As Arturo commented, it's not really clear what you mean by inequalities and products in $\mathbb{R}^n$, so I'll assume that we are in $\mathbb{R}$.
-Then it's easy if you use the fact that $xy = \{ ab : a \in x, b \in y \}$ rather than the characterization of $xy$ in terms of min and max, since $(xy)z$ and $x(yz)$ will both be equal to the interval $\{ abc : a \in x, b \in y, c \in z \}$.
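-This is not a proof, but the min/max characterization is easy to stress-test in the one-dimensional case; here is a randomized Python sketch (bounds, trial count and helper names are arbitrary):
-
-import random
-
-def mul(a, b):
-    # product of intervals a = (aL, aU), b = (bL, bU) via the min/max formula
-    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
-    return (min(ps), max(ps))
-
-def iv():
-    return tuple(sorted(random.uniform(-5, 5) for _ in range(2)))
-
-random.seed(0)
-for _ in range(10000):
-    x, y, z = iv(), iv(), iv()
-    lhs, rhs = mul(mul(x, y), z), mul(x, mul(y, z))
-    assert all(abs(u - v) < 1e-9 for u, v in zip(lhs, rhs))
-print("associativity held on 10000 random triples")
-
-The set description $\{ abc : a \in x, b \in y, c \in z \}$ explains why no counterexample turns up.<|endoftext|>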
-TITLE: Does $k+\aleph_0=\mathfrak{c}$ imply $k=\mathfrak{c}$ without the Axiom of Choice?
-QUESTION [36 upvotes]: I'm currently reading a little deeper into the Axiom of Choice, and I'm pleasantly surprised to find it makes the arithmetic of infinite cardinals seem easy. With AC follows the Absorption Law of Cardinal Arithmetic, which states that for $\kappa$ and $\lambda$ cardinal numbers, the larger infinite and the smaller nonzero, $\kappa+\lambda=\kappa\cdot\lambda=\max(\kappa,\lambda)$.
-I was playing around with the equation $k+\aleph_0=\mathfrak{c}$ for some cardinal $k$. From the above, it follows that $\mathfrak{c}=k+\aleph_0=\max(k,\aleph_0)$, which implies $k=\mathfrak{c}$.
-I'm curious, can we still show $k=\mathfrak{c}$ without the Axiom of Choice? Is it maybe possible to bound $\mathfrak{c}-\aleph_0$ above and below by $\mathfrak{c}$? But then I'm not quite sure such algebraic manipulations even mean anything, or work like that here. Certainly normal arithmetic does not! Thanks.
-
-REPLY [21 votes]: There is a general argument without choice: Suppose ${\mathfrak m}+{\mathfrak m}={\mathfrak m}$, and ${\mathfrak m}+{\mathfrak n}=2^{\mathfrak m}$. Then ${\mathfrak n}=2^{\mathfrak m}.\,$ This gives the result.
-
-The argument is part of a nice result of Specker showing that if CH holds for both a cardinal ${\mathfrak m}$ and its power set $2^{\mathfrak m}$, then $2^{\mathfrak m}$ is well-orderable. This shows that GCH implies choice, and that the proof is "local". It is still open whether CH for ${\mathfrak m}$ implies that ${\mathfrak m}$ is well-orderable.
-
-Anyway, here is the proof of the statement above: Note first that $2^{\mathfrak m}\cdot 2^{\mathfrak m}=2^{{\mathfrak m}+{\mathfrak m}}=2^{\mathfrak m}={\mathfrak m}+{\mathfrak n}$.
-Let $X$ and $Y$ be disjoint sets with $|X|={\mathfrak m}$, $|Y|={\mathfrak n}$, and fix a bijection $f:{\mathcal P}(X)\times{\mathcal P}(X)\to X\cup Y$.
-Note that there must be an $A\subseteq X$ such that the preimage $f^{-1}(X)$ misses the fiber $\{A\}\times {\mathcal P}(X)$. Otherwise, the map that to $a\in X$ assigns the unique $A\subseteq X$ such that $f^{-1}(a)$ is in $\{A\}\times {\mathcal P}(X)$ is onto, against Cantor's theorem.
-But then, for any such $A$, letting $g(B)=f(A,B)$ gives us an injection of ${\mathcal P}(X)$ into $Y$, i.e., $2^{\mathfrak m}\le {\mathfrak n}$. Since the reverse inequality also holds, we are done by Schroeder-Bernstein.
-(Note the similarity to Apostolos's and Joriki's answers.)
-The original reference for Specker's result is Ernst Specker, "Verallgemeinerte Kontinuumshypothese und Auswahlaxiom", Archiv der Mathematik 5 (1954), 332–337. A modern presentation is in Akihiro Kanamori, David Pincus, "Does GCH imply AC locally?", in "Paul Erdős and his mathematics, II (Budapest, 1999)", Bolyai Soc. Math. Stud., 11, János Bolyai Math. Soc., Budapest, (2002), 413–426.
-
-Note that assuming that ${\mathfrak m}$ is infinite is not enough for the result. For example, it is consistent that there are infinite Dedekind finite sets $X$ such that ${\mathcal P}(X)$ is also Dedekind finite. To be Dedekind finite means that any proper subset is strictly smaller. But if $2^{\mathfrak m}$ is Dedekind finite and $2^{\mathfrak m}={\mathfrak n}+{\mathfrak l}$ for nonzero cardinals ${\mathfrak n},{\mathfrak l}$, then we must have ${\mathfrak n},{\mathfrak l}<2^{\mathfrak m}$.<|endoftext|>
-TITLE: Deriving a trivial continued fraction for the exponential
-QUESTION [17 upvotes]: Lately, I learned about the following continued fraction for the exponential function:
-$$\exp(x)=1+\cfrac{x}{1-\cfrac{x/2}{1+x/2-\cfrac{x/3}{1+x/3-\cfrac{x/4}{1+x/4-\dots}}}}$$
-I thought it was something new, but evaluating the successive convergents of this continued fraction was a disappointment, as they are nothing more than the partial sums of the usual series $\exp(x)=\sum_{j=0}^{\infty}\frac{x^j}{j!}$.
-So, there must be some way to obtain the continued fraction from the series. How might this be done?
-
-REPLY [19 votes]: This transformation of a series into its equivalent continued fraction, with the series partial sums being equal to the continued fraction convergents, is due to Euler. The series $$\sum_{n\geq 0}c_{n}=c_0+c_1+\dots+c_n+\dots$$ is transformed into the continued fraction
-$$b_0+\mathbf{K}\left( a_{n}|b_{n}\right) =b_0+\dfrac{a_{1}|}{|b_{1}}+\dfrac{a_{2}|}{|b_{2}}+\cdots +\dfrac{a_{n}|}{|b_{n}}+\cdots ,$$
-whose elements $a_{n}$, $b_{n}$ are expressed in terms of $c_n$ as follows: $b_0=c_0$, $a_1=c_1$, $b_1=1$ and $$a_{n}=-\dfrac{c_{n}}{c_{n-1}},\qquad b_{n}=1+\dfrac{c_{n}}{c_{n-1}}\qquad n\ge 2.$$
-For the power series $e^x=\sum_{n\geq 0}\dfrac{1}{n!}x^{n}$, we have $c_{n}=\dfrac{1}{n!}x^{n}$,
-and get $$a_{n}=-\dfrac{c_{n}}{c_{n-1}}=-\dfrac{1}{n}x,\qquad b_{n}=1+\dfrac{c_{n}}{c_{n-1}}=1+\dfrac{1}{n}x\qquad n\ge 2.$$
-Thus
-$$\begin{eqnarray*}
-e^x &=&\sum_{n\geq 0}\frac{1}{n!}x^{n}=1+\sum_{n\geq 1}\frac{1}{n!}x^{n} \\
-&=&1+\frac{x|}{|1}-\frac{\frac{1}{2}x|}{|1+\frac{1}{2}x}-\cdots -\frac{\frac{1}{n}x|}{|1+\frac{1}{n}x}-\cdots,
-\end{eqnarray*}$$
-which is equivalent to
-$$1+\frac{x|}{|1}-\frac{x|}{|2+x}-\frac{2x|}{|3+x}-\cdots -\frac{nx|}{|n+1+x}+\cdots.$$
-This is explained on p. 17 of Die Lehre von den Kettenbrüchen Band II by Oskar Perron and proved in Theorem 4.2 of Orthogonal Polynomials and Continued Fractions From Euler's Point of View by Sergey Khrushchev. It is derived from a theorem that establishes the equivalence between a sequence and a continued fraction.
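-The transformation is easy to test with the standard three-term recurrence for convergents; a short Python sketch (the choice $x=0.7$ and the function name are arbitrary):
-
-from math import factorial
-
-def cf_convergents(x, N):
-    # b0 = 1; a1 = x, b1 = 1; a_n = -x/n, b_n = 1 + x/n for n >= 2
-    h_prev, h = 1.0, 1.0  # numerators h_{-1}, h_0
-    k_prev, k = 0.0, 1.0  # denominators k_{-1}, k_0
-    out = []
-    for n in range(1, N + 1):
-        a = x if n == 1 else -x / n
-        b = 1.0 if n == 1 else 1.0 + x / n
-        h_prev, h = h, b * h + a * h_prev
-        k_prev, k = k, b * k + a * k_prev
-        out.append(h / k)
-    return out
-
-x = 0.7
-print(cf_convergents(x, 5))
-print([sum(x**j / factorial(j) for j in range(n + 1)) for n in range(1, 6)])
-
-The two printed lists agree term by term, which is precisely the phenomenon observed in the question.<|endoftext|>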
-TITLE: Is there any way to read articles without subscription?
-QUESTION [10 upvotes]: This is not a mathematics question, but I think it's related. How can I get access to some "Software: Practice and Experience" articles without a subscription? Any advice is welcome. Sorry if I'm posting on the wrong site. Thanks.
-
-REPLY [11 votes]: This is addressed at both PoorGuy and quanta's questions. Researchers don't generally subscribe to journals on an individual basis; we rely on institutional subscriptions at our universities. Many journals don't even have an option for an individual subscription, they only have prices for institutional use.
-If you live near a large university you may be able to go to the university library and read articles there. The print collections of most libraries are open for public browsing, and online access subscriptions usually work from any on-campus computer. Many libraries have a "friend of the library" program that will let anyone get a library card for a nominal annual fee. This may include computer access and the ability to check out books.
-If you go to a smaller university, you can often use their document delivery services to get a copy of a paper that you don't have access to. These are sometimes called "interlibrary loan" but for a journal paper they are much more likely to just send a copy of the paper than to ship a printed journal. If you are a library member (even at a public library), you should always look into this option before buying a paper directly from the publisher.
-If you are at a university you also are likely to have access to MathSciNet, which is a very helpful resource for finding math papers.<|endoftext|>
-TITLE: How can I find $\int\frac1{\sqrt[4]{1+x^4}}\mathrm dx$?
-QUESTION [7 upvotes]: My question is, how can I evaluate the following integral?
-$$\int\frac1{\sqrt[4]{1+x^4}}\mathrm dx$$
-Thanks.
-
-REPLY [7 votes]: Integral
-$$\begin{align*}
-\int\frac{1}{(1+x^4)^{1/4}}dx&=\int \frac{1}{x(1+1/x^4)^{1/4}}dx\\
-&=\int \frac{x^4}{x^5(1+1/x^4)^{1/4}}dx.
-\end{align*}$$
-Substitution: $z^4=(1+1/x^4)$
-$4z^3 dz=-4\frac{1}{x^5}dx$
-Therefore,
-$$\begin{align*}
-\int\frac{1}{(1+x^4)^{1/4}}dx&=\int \frac{x^4}{x^5(1+1/x^4)^{1/4}}dx\\
-&=-\int\frac{z^2}{(z^4-1)}dz\\
-&=-\frac{1}{2}\int\left(\frac{1}{z^2-1}+\frac{1}{z^2+1}\right)dz\\
-&=\frac{1}{4}\ln\left|\frac{z+1}{z-1}\right|-\frac{1}{2}\arctan z+ C
-\end{align*}$$
-where $z=(1+1/x^4)^{1/4}$.
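-One can double-check this antiderivative numerically by differentiating it with central differences and comparing against the integrand; a Python sketch (sample points arbitrary, $x>0$ so that the substitution is valid):
-
-import numpy as np
-
-def antideriv(x):
-    z = (1 + 1 / x**4) ** 0.25
-    return 0.25 * np.log((z + 1) / (z - 1)) - 0.5 * np.arctan(z)
-
-h = 1e-6
-for x0 in [0.5, 1.0, 2.0, 3.7]:
-    approx = (antideriv(x0 + h) - antideriv(x0 - h)) / (2 * h)
-    print(approx, (1 + x0**4) ** -0.25)
-
-At each sample point the two printed values coincide to within the finite-difference error.<|endoftext|>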
-TITLE: $\arcsin$ written as $\sin^{-1}(x)$
-QUESTION [21 upvotes]: I know that different people follow different conventions, but whenever I see $\arcsin(x)$ written as $\sin^{-1}(x)$, I find myself thinking it wrong, since $\sin^{-1}(x)$ should be $\csc(x)$, and not possibly confused with another function.
-Does anyone say it's bad practice to write $\sin^{-1}(x)$ for $\arcsin(x)$?
-
-REPLY [37 votes]: The notation for trigonometric functions is "traditional", which is to say that it is not the way we would invent notation today.
-
-$\sin^{-1}(x)$ means the inverse sine, as you mentioned, rather than a reciprocal. So $\sin^{-1}(x)$ is not an abbreviation for $(\sin(x))^{-1}$. Instead it's notation for $(\sin^{-1})(x)$, in the same way that $f^{-1}(x)$ means the inverse function of $f$, applied to $x$.
-But $\sin^2(x)$ means $(\sin(x))^2$, rather than $\sin(\sin(x))$. In other contexts, like dynamical systems, if I have a function $f$, the notation $f^2$ means $f \circ f$. This is compatible with the $f^{-1}$ notation, if we take juxtaposition of functions to mean composition: $f^{-1}f^{3}$ will be $f^{2}$ as desired.
-
-So the traditional notation for sine is actually a mixture of two different systems: $-1$ denotes an inverse, not a power, while positive integer exponents denote powers, not iterated compositions.
-This is simply a fact of life, like an irregular conjugation of a verb. As with other languages, the things that we use most often are the ones that are likely to remain irregular. That doesn't mean that they are incorrect, however, as long as other speakers of the language know what they mean.
-Moreover, if you wanted to reform the system, there would be an equally strong argument for changing $\sin^2$ to mean $\sin \circ \sin$. This is already slowly happening with $\log$; I think that the usage of $\log^2(x)$ to mean $(\log(x))^2$ is slowly decreasing, because people tend to confuse it with $\log(\log(x))$. That confusion is less likely with $\sin$ because $\sin(\sin(x))$ arises so rarely in practice, unlike $\log(\log(x))$.<|endoftext|>
-TITLE: Finite field, every element is a square implies char equal 2
-QUESTION [12 upvotes]: If $F$ is a finite field such that every element is a square, why must $char(F)=2$?
-
-REPLY [3 votes]: Another way: all elts of a finite field of size $2n\!+\!1$ are squares $\Rightarrow \color{#0a0}{x^{n}\! - 1} $ has $2n$ roots (all elts $\ne\! 0$), contra a polynomial over a field has no more roots than its degree, by $\,a = b^2\Rightarrow \color{#0a0}{a^n} = b^{\color{#c00}{2n}} \color{#0a0}{= 1},\,$ by Lagrange, since its multiplicative group has size $\,\color{#c00}{2n}$ (i.e. Euler's Criterion when size is prime).<|endoftext|>
-TITLE: Why is there no polynomial parametrization for the circle?
-QUESTION [16 upvotes]: How does one show that the unit circle admits no polynomial parametrization?
-What is needed for this, are there general criteria?
-Thanks
-
-REPLY [8 votes]: More generally, suppose $f$ and $g$ are analytic functions on a simply-connected domain $U$ with $f^2 + g^2 = 1$, i.e.
-$(f+ig)(f-ig) = 1$, so $f + i g$ is an analytic function in $U$ that is never 0. This implies $f + i g = e^{ih}$ for some function $h$ analytic in $U$. Now $f - i g = 1/(f + i g) = e^{-ih}$. We then have $f = (e^{ih} + e^{-ih})/2 = \cos(h)$ and $g = (e^{ih} - e^{-ih})/(2 i) = \sin(h)$. Conversely, of course, $f = \cos(h)$ and $g = \sin(h)$ satisfy the equation $f^2 + g^2 = 1$ for any function $h$.
-Now take the domain to be $\mathbb C$; it's easy to show that for any nonconstant function $h$ analytic in a neighbourhood of $\infty$, $\sin(h)$ and $\cos(h)$ have either removable or essential singularities at $\infty$, and thus can't be polynomials.<|endoftext|>
-TITLE: Finding invertible polynomials in polynomial ring $\mathbb{Z}_{n}[x]$
-QUESTION [5 upvotes]: Is there a method to find the units in $\mathbb{Z}_{n}[x]$?
-
-For instance, take $\mathbb{Z}_{4}$. How do we find all invertible polynomials in $\mathbb{Z}_{4}[x]$? Clearly $2x+1$ is one. What about the others? Is there any method?
-
-REPLY [14 votes]: Lemma 1. Let $R$ be a commutative ring. If $u$ is a unit and $a$ is nilpotent, then $u+a$ is a unit.
-Proof. It suffices to show that $1-a$ is a unit when $a$ is nilpotent. If $a^n=0$ with $n\gt 0$, then
-$$(1-a)(1+a+a^2+\cdots+a^{n-1}) = 1 - a^n = 1.$$
-QED
-Lemma 2. If $R$ is a ring, and $a$ is nilpotent in $R$, then $ax^i$ is nilpotent in $R[x]$.
-Proof. Let $n\gt 0$ be such that $a^n=0$. Then $(ax^i)^n = a^nx^{ni}=0$. QED
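-(Lemmas 1 and 2 already explain the example in the question: $2x$ is nilpotent in $\mathbb{Z}_4[x]$, so $1+2x$ is a unit; indeed $(1+2x)(1+2x)=1+4x+4x^2=1$. As a brute-force sanity check of where this is heading, the following Python sketch of mine finds all units of degree at most 3 in $\mathbb{Z}_4[x]$ and compares them with the predicted description, namely an invertible constant term and nilpotent, i.e. even, higher coefficients. Searching for inverses up to degree 3 suffices here because the nilpotent part $m$ satisfies $m^2=0$ in $\mathbb{Z}_4[x]$.)
-
-from itertools import product
-
-n, deg = 4, 3  # coefficient tuples run from the constant term upward
-
-def polymul(p, q):
-    out = [0] * (len(p) + len(q) - 1)
-    for i, a in enumerate(p):
-        for j, b in enumerate(q):
-            out[i + j] = (out[i + j] + a * b) % n
-    return out
-
-def is_one(p):
-    return p[0] == 1 and all(c == 0 for c in p[1:])
-
-polys = list(product(range(n), repeat=deg + 1))
-units = [p for p in polys if any(is_one(polymul(p, q)) for q in polys)]
-predicted = [p for p in polys
-             if p[0] % 2 == 1 and all(c % 2 == 0 for c in p[1:])]
-print(sorted(units) == sorted(predicted))  # True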
-Lemma 3. Let $R$ be a commutative ring. Then
-$$\bigcap\{ \mathfrak{p}\mid \mathfrak{p}\text{ is a prime ideal of }R\} = \{a\in R\mid a\text{ is nilpotent}\}.$$
-Proof. If $a$ is nilpotent, then $a^n = 0\in\mathfrak{p}$ for some $n\gt 0$ and all prime ideals $\mathfrak{p}$, and $a^n\in\mathfrak{p}$ implies $a\in\mathfrak{p}$.
-Conversely, if $a$ is not nilpotent, then the set of ideals that do not contain any positive power of $a$ is nonempty (it contains $(0)$) and closed under increasing unions, so by Zorn's Lemma it contains a maximal element $\mathfrak{m}$. If $x,y\notin\mathfrak{m}$, then the ideals $(x)+\mathfrak{m}$ and $(y)+\mathfrak{m}$ strictly contain $\mathfrak{m}$, so there exist positive integers $m$ and $n$ such that $a^m\in (x)+\mathfrak{m}$ and $a^n\in (y)+\mathfrak{m}$. Then $a^{m+n}\in (xy)+\mathfrak{m}$, so $xy\notin\mathfrak{m}$. Thus, $\mathfrak{m}$ is prime, so $a$ is not in the intersection of all prime ideals of $R$. QED
-Theorem. Let $R$ be a commutative ring. Then
-$$p(x) = a_0 + a_1x + \cdots + a_nx^n\in R[x]$$
-is a unit in $R[x]$ if and only if $a_0$ is a unit of $R$, and each $a_i$, $i\gt 0$, is nilpotent.
-Proof. Suppose $a_0$ is a unit and each $a_i$ is nilpotent. Then $a_ix^i$ is nilpotent by Lemma 2, and applying Lemma 1 repeatedly we conclude that $a_0+a_1x+\cdots+a_nx^n$ is a unit in $R[x]$, as claimed.
-Conversely, suppose that $p(x)$ is a unit. If $\mathfrak{p}$ is a prime ideal of $R$, then reduction modulo $\mathfrak{p}$ of $R[x]$ maps $R[x]$ to $(R/\mathfrak{p})[x]$, which is a polynomial ring over an integral domain; since the reduction map sends units to units, it follows that $\overline{p(x)}$ is a unit in $(R/\mathfrak{p})[x]$, hence $\overline{p(x)}$ is constant. Therefore, $a_i\in\mathfrak{p}$ for all $i\gt 0$.
-Therefore, $a_i \in\bigcap\mathfrak{p}$, the intersection of all prime ideals of $R$. The intersection of all prime ideals of $R$ is precisely the set of nilpotent elements of $R$, which establishes the result. QED
-For $R=\mathbb{Z}_n$, let $d$ be the squarefree root of $n$ (the product of all distinct prime divisors of $n$). Then a polynomial $a_0+a_1x+\cdots+a_nx^n\in\mathbb{Z}_n[x]$ is a unit if and only if $\gcd(a_0,n)=1$, and $d|a_i$ for $i=1,\ldots,n$. In particular if $n$ is squarefree, the only units in $\mathbb{Z}_n[x]$ are the units of $\mathbb{Z}_n$.<|endoftext|>
-TITLE: Euler-Maclaurin Summation Formula for Multiple Sums
-QUESTION [15 upvotes]: The Euler-Maclaurin summation formula is
-\begin{eqnarray}
-\sum_{k = a}^{b} f(k) = \int_{a}^{b} f(t) \, dt + B_1 (f(a) + f(b)) + \sum_{n = 1}^{N} \frac{B_{2n}}{(2n)!} ( f^{(2n-1)}(b) - f^{(2n-1)}(a) ) + R_{N},
-\end{eqnarray}
-where $B_{n}$ is the $n^{\text{th}}$-Bernoulli number taking $B_{1} = \tfrac{1}{2}$, and the remainder term is bounded by the following
-\begin{align}
-|R_{N}| \leq \frac{|B_{2N} |}{(2N)!} \int_{a}^{b} | f^{(2N)}(t) | \, dt.
-\end{align}
-for any arbitrary positive integer $N$. Is there a similar formula for nested sums of the form,
-\begin{eqnarray}
-\sum_{k_1 = a_1}^{b_1} \cdots \sum_{k_n = a_n}^{b_n} f(k_1, \dots, k_n).
-\end{eqnarray}
-Thanks!
-
-REPLY [12 votes]: Yes! There's a whole chapter about it in this book.
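-Though the question is about nested sums, it is worth seeing the quoted one-dimensional formula in action; for a cubic, taking $N=1$ already gives the exact sum (the $B_4$ term would vanish since $f'''$ is constant, and the remainder vanishes since $f^{(4)} \equiv 0$), and for multiple sums one can, in principle, apply the same formula one index at a time. A minimal Python check of mine:
-
-m = 100
-lhs = sum(k**3 for k in range(m + 1))          # sum of f(k) = k^3 over 0..m
-integral = m**4 / 4                            # integral of x^3 from 0 to m
-boundary = (0**3 + m**3) / 2                   # B_1 (f(a) + f(b)), B_1 = 1/2
-b2_term = (1 / 6) / 2 * (3 * m**2 - 3 * 0**2)  # B_2/2! (f'(b) - f'(a))
-print(lhs, integral + boundary + b2_term)      # both equal 25502500
-
-The agreement is exact here, not merely approximate, because the expansion terminates for polynomials.<|endoftext|>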
- Suppose that $\vec{x}(t)$ satisfies a
- differential equation,
- $$ \frac{d\vec{x}}{dt} = A(t)\vec{x},$$
- where $A(t)$ is a
- real anti-symmetric matrix depending
- smoothly on $t$. Show that this
- particle moves on a sphere, that is,
- $||\vec{x}(t)||$ is constant.

-By the spectral theorem, $A$ is normal and therefore has a complete basis of eigenvectors in $\mathbb{C}^n$. I am familiar with the "standard" method of solving for matrix exponentials, i.e. finding the eigenvalues and eigenvectors of $A$, and then using linear combinations of $e^{\lambda t}\vec{x}$ as the solutions, but there is not a complete basis of eigenvectors in $\mathbb{R}$. Taking the matrix exponential $e^A$ doesn't seem to do anything.

-REPLY [7 votes]: Taking from user8268 and Shiyu:
-Compute the time derivative of $||\vec{x}||^2 = \vec{x} \cdot \vec{x}$, which becomes
-$ \begin{align}
-\frac{d}{dt} ||\vec{x}||^2 &= \frac{d}{dt} (\vec{x} \cdot \vec{x})
-\\ &= \frac{d\vec{x}}{dt} \cdot \vec{x} + \vec{x} \cdot \frac{d\vec{x}}{dt}
-\\ &= 2 \left( \vec{x} \cdot \frac{d\vec{x}}{dt} \right)
-\\ &= 2 \left( \vec{x} \cdot A \vec{x} \right)
-\\ &= 2 \left( \vec{x}^T A \vec{x} \right) = 2(0) = 0
-\end{align} $
-The last line is true because $\vec{x}^TA\vec{x} = 0$ for all $\vec{x}$ if $A$ is skew-symmetric. Therefore $||\vec{x}||^2$ is constant, and since $||\vec{x}|| \geq 0$, the norm $||\vec{x}||$ itself is constant.<|endoftext|>
-TITLE: Generalization of the series for $\frac{\pi^2}{6}$? Is there a more elementary proof?
-QUESTION [6 upvotes]: In the same vein as:
-$ \frac{\pi ^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \frac{1}{25} \cdots $
-Starting with:
-$ \displaystyle \prod_{n=1}^{\infty} \left( 1 -\frac{q^2}{n^2} \right) = \frac{\sin(\pi q)}{\pi q}$
-I've noticed that:
-$ - \frac{\pi ^2}{3!} = \displaystyle \sum_{j_1=1}^{\infty} -j_1^{-2} $
-$ \frac{\pi ^4}{5!} = \displaystyle \sum_{j_1,j_2=1 \atop j_1 \neq j_2}^{\infty} (j_1j_2)^{-2}$
-$ - \frac{\pi ^6}{7!} = \displaystyle \sum_{j_1,j_2,j_3=1 \atop j_i \neq j_k} - (j_1j_2j_3)^{-2}$
-$ \vdots $
-$ \frac{\pi ^{2n}}{(2n+1)!} = \displaystyle \sum_{j_1,...j_n=1 \atop j_i \neq j_k}^{\infty} (j_1j_2...j_n)^{-2}$
-$ \vdots $
-(Steps shown here: http://www.futurebird.com/?p=156 )
-Is there a more direct way to reach the same result that avoids a high-powered theorem like the Weierstrass factorization theorem ... which is what I use.
-I'm enjoying playing with these concepts so I'd also like reading recommendations.

-REPLY [9 votes]: This is an example of a Multiple Zeta Value, namely $\zeta (2,2,2,\cdots,2)$. On the page
-http://www.usna.edu/Users/math/meh/mult.html
-there are several relations satisfied by such MZVs. For example,
-$\zeta (2,2,2,2, \cdots) = (2n + 1) \zeta (3,1,3,1,\cdots)$
-where there are $2n$ copies of $2$ on the left, and $n$ blocks of $(3,1)$ on the right. So for $n = 1$ we have $\zeta (3,1) = 2 \pi^4 / 6!$. A good reference for this is
-http://www.combinatorics.org/Volume_5/PDF/v5i1r38.pdf
-which says in general that
-$\zeta (\{ 3,1 \}^{n}) = \frac{2 \pi^{4n}}{(4n + 2)!}$<|endoftext|>
-TITLE: Do these vectors form a basis?
-QUESTION [6 upvotes]: I'm researching a potential algorithm, and I'm hoping that someone can verify my calculations.
-I have sets of vectors in $\mathbb{R}^6$ that I can use. They have a corresponding value associated with them, but this relation is not necessarily a simple one. I'd like to find the value associated with the vector $(0,0,1,0,0,0)$. So I'm wondering if I can perform linear algebra, using the vectors that I can create, to do so.
-One vector I can create is $(a,a,a,0,0,0)$ for any real $a$. A second is $(b,b,b,b,b,b)$.
-I can also create vectors of the form $(c^0,c^1,c^2,c^0,c^1,c^2)$ for some real number $c$, where $c^k$ is just simply $c$ taken to the $k$th power. A second form that I can create is $(0, d, 2d, 0, 0, 0)$ for real $d$.
-I am wondering if these vectors form a complete basis for $\mathbb{R}^6$.
-In case it helps, I can use as many vectors as I want, adding and/or subtracting them, as long as they are of the forms above.
-My Question
-What I really want to know is, can I find the corresponding value for $(0,0,1,0,0,0)$?

-REPLY [10 votes]: The vectors $(a,a,a,0,0,0)$ are all multiples of $(1,1,1,0,0,0)$; the vectors $(b,b,b,b,b,b)$ are all multiples of $(1,1,1,1,1,1)$. The vectors $(0,d,2d,0,0,0)$ are all multiples of $(0,1,2,0,0,0)$. The span of all these vectors gives you only a 3-dimensional subspace.
-So the key lies in the vectors $(1,c,c^2,1,c,c^2)$; all these vectors span at most a $3$-dimensional subspace, since they all lie in the subspace of vectors $(x_1,x_2,x_3,x_4,x_5,x_6)$ with $x_1=x_4$, $x_2=x_5$, and $x_3=x_6$, which is $3$-dimensional. But they include $(1,1,1,1,1,1)$ (obtained with $c=1$); so you will get at best a 5-dimensional subspace from taking all these vectors together with the previously considered one; you cannot get all of $\mathbb{R}^6$.
-In fact, you get exactly a $5$-dimensional subspace: the vectors
-$$\begin{align*}
-&(1,c,c^2,1,c,c^2)\\
-&(1,d,d^2,1,d,d^2)\\
-&(1,\ell,\ell^2,1,\ell,\ell^2)
-\end{align*}$$
-are linearly independent if and only if the vectors $(1,c,c^2)$, $(1,d,d^2)$, and $(1,\ell,\ell^2)$ are linearly independent. This occurs if and only if
-$$\left|\begin{array}{ccc}
-1 & c & c^2\\
-1 & d & d^2\\
-1 & \ell & \ell^2
-\end{array}\right| = (d-c)(\ell-c)(\ell-d)$$
-is nonzero (this is a Vandermonde matrix); so distinct values of $c$, $d$, and $\ell$ will give you three linearly independent vectors, which therefore span
-$$\mathbf{W} = \bigl\{(x_1,x_2,x_3,x_4,x_5,x_6)\in\mathbb{R}^6\mid x_1=x_4, x_2=x_5, x_3=x_6\bigr\}.$$
-So your vectors span exactly a five-dimensional subspace of $\mathbb{R}^6$, and not all of $\mathbb{R}^6$.
-In fact, $(0,0,1,0,0,0)$ will be one of the vectors that does not lie in the span: if it lay in the span, then so would $(0,1,0,0,0,0) = (0,1,2,0,0,0)-2(0,0,1,0,0,0)$; hence also $(1,0,0,0,0,0)=(1,1,1,0,0,0)-(0,1,0,0,0,0)-(0,0,1,0,0,0)$. Since you can get any vector of the form $(x,y,z,x,y,z)$, this would also allow you to obtain the other three standard basis vectors, and you would have a span equal to the entire space, which is not the case.
-So, no, you cannot obtain $(0,0,1,0,0,0)$ with the vectors described.<|endoftext|>
-TITLE: Prove that $s_n$ is monotone and bounded
-QUESTION [6 upvotes]: Let the sequence $s_n$ be defined by $s_1 = 1$, $s_{n+1} = \frac{1}{4} (2s_n + 5)$ for $n \in \mathbb{N}$. The following is my proof and below it, my concerns.
-Proof
-(1) We will prove $s_n$ is increasing by induction.
-That is $s_n \leq s_{n+1}$ for all $n \in \mathbb{N}$.
-Since $$s_1 = 1 < s_2 = \frac{1}{4}(2(1)+5) = \frac{7}{4}$$
-Now assume $s_k$ $\leq $ $s_{k+1}$
-Then $$s_{k+2} = \frac{1}{4} (2s_{k+1} + 5) \geq \frac{1}{4} (2s_k +5) = s_{k+1}$$
-Therefore, $s_n$ is a monotone sequence because it is increasing.
-(2a) We will prove $s_n$ is bounded above.
-That is, find $M \in \mathbb{R}$ such that $s_n \leq M$ for every $n \in \mathbb{N}$.
-Now, we try to prove $20$ is an upper bound.
-Since $s_1 = 1 < 20$
-Suppose $s_k < 20$, then $$s_{k+1} = \frac{1}{4}(2s_k + 5) < \frac{1}{4}(2(20) + 5) = \frac{45}{4} < 20. $$
-Hence, $s_n < 20$ for every $n \in \mathbb{N}$.
-Therefore, $s_n$ is bounded above.
-(2b) We will prove $s_n$ is bounded below.
-That is, find $M \in \mathbb{R}$ such that $s_n \geq M$ for every $n \in \mathbb{N}$.
-Now, we try to prove $0$ is a lower bound.
-Since $s_1 = 1 > 0$
-Suppose $s_k > 0$, then $$s_{k+1} = \frac{1}{4}(2s_k + 5) > \frac{1}{4}(2(0) + 5) = \frac{5}{4} > 0. $$
-Hence, $s_n > 0$ for every $n \in \mathbb{N}$.
-Therefore, $s_n$ is bounded below.
-Therefore $s_n$ is bounded.
-Concerns
-I want to confirm I did my induction correctly. However, my main concern is in 2b.

-REPLY [2 votes]: There are a couple of typos in your proof of (1). What you intended to write is
-$$s_{k+2}=\frac{1}{4}(2s_{k+1}+5) > \frac{1}{4}(2s_{k}+5)=s_{k+1}$$
-There are also a couple of places where $\le$ is used, where it would be better to use $<$.
-As to the sequence being bounded below, there is really nothing to do. You have proved that the sequence is increasing. It starts at 1, so 1 is a lower bound.<|endoftext|>
-TITLE: Stirling Number of the Second Kind Identity
-QUESTION [9 upvotes]: I'm aware of the identity
-\begin{align}
-\sum_{i = 0}^{k} i! \binom{n+1}{i + 1} S(k,i) = H_{n,-k},
-\end{align}
-where $H_{n,-k}$ is a generalized Harmonic number defined by $H_{n,m} = \sum_{r = 1}^{n} r^{-m}$. I believe the following sum is related to the generalized Harmonic numbers as well,
-\begin{align}
-\sum_{i = 0}^{k} (-1)^{i} i! \binom{n-i}{k-i} S(j,i)
-\end{align}
-and should be a nice function of $n, k$ and $j$, where $0 \leq j, k \leq n$. Any hints are certainly welcome!

-REPLY [3 votes]: By way of enrichment of this discussion we give a purely algebraic
-proof that does not use Eulerian numbers.
-Suppose we seek to prove that
-$$\sum_{q=0}^n q! {m+1\choose q+1} {n\brace q} = H_{m,-n}$$
-with $m\ge n.$
-Recall the bivariate generating function $G(z, u)$ of the Stirling
-numbers of the second kind, which is
-$$G(z, u) = \exp(u(\exp(z)-1)).$$
-This yields for the sum that
-$$\sum_{q=0}^n q! {m+1\choose q+1} n! [z^n] [u^q] \exp(u(\exp(z)-1))
-\\= n! [z^n] \sum_{q=0}^n q! {m+1\choose q+1} \frac{(\exp(z)-1)^q}{q!}
-= n! [z^n] \sum_{q=0}^n {m+1\choose q+1} (\exp(z)-1)^q.$$
-An important observation at this point is that $\exp(z)-1$ starts at
-$z$, so we may extend the summation to $m$ without affecting the
-coefficient of $z^n,$ getting
-$$n! [z^n] \sum_{q=0}^m {m+1\choose q+1} (\exp(z)-1)^q.$$
-Rewrite this as
-$$(m+1) \times n! [z^n]
-\sum_{q=0}^m {m\choose q} \frac{(\exp(z)-1)^q}{q+1}
-\\= (m+1) \times n! [z^n] \frac{1}{\exp(z)-1}
-\sum_{q=0}^m {m\choose q} \frac{(\exp(z)-1)^{q+1}}{q+1}.$$
-Stop for a moment to perform a formal power series integration.
-We have $$(1+x)^m = \sum_{q=0}^m {m\choose q} x^q$$
-so that
-$$\frac{1}{m+1}(1+x)^{m+1} - \frac{1}{m+1}
-=\frac{1}{m+1}\left((1+x)^{m+1} - 1\right)
-\\= \sum_{q=0}^m {m\choose q} \frac{x^{q+1}}{q+1}.$$
-Apply this to the sum to obtain
-$$(m+1) \times n! [z^n] \frac{1}{\exp(z)-1}
-\frac{1}{m+1}\left(\exp(z(m+1)) - 1\right)$$
-which is
-$$n! [z^n] \frac{\exp(z(m+1)) - 1}{\exp(z)-1}
-= n! [z^n] \sum_{k=0}^m \exp(zk)
-\\= n! \sum_{k=0}^m \frac{k^n}{n!} = \sum_{k=0}^m k^n = H_{m, -n}.$$<|endoftext|>
-TITLE: $x^3+48=y^4$ does not have integer (?) solutions
-QUESTION [10 upvotes]: How does one find all positive integer solutions to the equation $x^3+48=y^4$?
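-A quick computer search supports the "(?)" in the title; here is a minimal Python sketch I ran (the bound $10^4$ on $y$ is an arbitrary choice of mine, not anything canonical):
-def icbrt(n):
-    # integer cube root of n >= 0: float guess, then exact adjustment
-    c = round(n ** (1.0 / 3.0))
-    while c ** 3 > n:
-        c -= 1
-    while (c + 1) ** 3 <= n:
-        c += 1
-    return c
-
-hits = []
-for y in range(1, 10 ** 4):
-    t = y ** 4 - 48          # candidate value of x^3
-    if t > 0:
-        x = icbrt(t)
-        if x ** 3 == t:
-            hits.append((x, y))
-print(hits)  # prints []: no positive solutions with y < 10^4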
-
-REPLY [7 votes]: I realize this is a massive revive, but here is a solution without any powerful theorems about elliptic curves.
-We have $x^3 + 64 = y^4 + 2^4$. It is easy to show all (odd) primes dividing the RHS are $1 \pmod{8}$. Now, looking modulo $16$ we deduce either $x \equiv 1 \pmod{16}$ and $y$ is odd or $x,y$ are both even. For the second case if $v_2(x) = 1$ we derive a contradiction because $v_2(LHS) = 3$ while $v_2(RHS) \ge 4$. If $v_2(x) \ge 2$ we have $v_2(LHS) \ge 6$. Thus we must have $v_2(y) = 1$ for $v_2(y^4 + 16) > 4$. Writing $y = 2k$ we need $v_2(k^4 + 1) \ge 2$. But this is absurd since $v_2(k^4 + 1) \le 1$ by checking modulo $4$. Thus it follows $x \equiv 1 \pmod{16}$.
-But now utilize $x^3 + 64 = (x+4)(x^2 - 4x + 16)$. Then $x+4$ is $5 \pmod{8}$; but $x+4$ is odd and divides the RHS, so it is a product of primes that are $1 \pmod 8$ and hence is itself $1 \pmod 8$, which is absurd, so we are done.<|endoftext|>
-TITLE: What is the difference between the Discrete Fourier Transform and the Fast Fourier Transform?
-QUESTION [32 upvotes]: Can anybody answer this question?
-Thank you.

-REPLY [2 votes]: The DFT is a discrete version of the Fourier transform, whereas the FFT is a fast algorithm for computing the DFT. The DFT establishes a relationship between the time domain and frequency domain representations, whereas the FFT is an implementation of the DFT.
-The computational complexity of the DFT is O(M^2), whereas the FFT achieves O(M log M), where M is the data size.<|endoftext|>
-TITLE: Sum of cubed roots
-QUESTION [5 upvotes]: I need to calculate the sums
-$$x_1^3 + x_2^3 + x_3^3$$
-and
-$$x_1^4 + x_2^4 + x_3^4$$
-where $x_1, x_2, x_3$ are the roots of
-$$x^3+2x^2+3x+4=0$$
-using Viete's formulas.
-I know that $x_1^2+x_2^2+x_3^2 = -2$, as I already calculated that, but I can't seem to get the cube of the roots. I've tried
-$$(x_1^2+x_2^2+x_3^2)(x_1+x_2+x_3)$$
-but that didn't work.

-REPLY [11 votes]: If $x_1,x_2,x_3$ are the roots of $x^3+2x^2+3x+4=0$ then $$x^3+2x^2+3x+4 = (x-x_1)(x-x_2)(x-x_3) $$ $$= x^3 - (x_1 + x_2 + x_3)x^2 + (x_1 x_2 + x_1 x_3 + x_2 x_3)x - x_1 x_2 x_3 = x^3 - e_1 x^2 + e_2 x - e_3.$$ So $e_1 = -2$, $e_2 = 3$ and $e_3 = -4$.
-Now the trick is to express the power sums $x_1^3 + x_2^3 + x_3^3$ and $x_1^4 + x_2^4 + x_3^4$ in terms of the elementary symmetric polynomials $\{x_1 + x_2 + x_3,x_1 x_2 + x_1 x_3 + x_2 x_3,x_1 x_2 x_3\}$.
-See my answer to the question here for details on how to do that Three-variable system of simultaneous equations
-In the case of the fourth power sums you should get $x_1^4 + x_2^4 + x_3^4 = e_1^4 - 4 e_1^2 e_2 + 4 e_1 e_3 + 2 e_2^2 = 18$.

-REPLY [2 votes]: I think what you need is Newton's identities, in particular the section about their application to the roots of a polynomial.<|endoftext|>
-TITLE: $x,y$ are integers satisfying $2x^2-1=y^{15}$, show that $5 \mid x$
-QUESTION [15 upvotes]: Let $x, y >1$ be integers satisfying $2x^2-1=y^{15}$. How can I prove that $5 \mid x$?

-REPLY [20 votes]: Here is a proof that works even for the fifth power.
-Suppose $2x^2-1=y^5$. Then we can write $2x^2=(y+1)(1-y+y^2-y^3+y^4)$.
-Writing
-$$5=(1-y+y^2-y^3+y^4)+(10(y+1)-10(y+1)^2+5(y+1)^3-(y+1)^4)$$
-shows that $g(y):=\gcd(y+1,1-y+y^2-y^3+y^4)$ is always equal to 1 or 5.
-If $g(y)=5$, then 5 divides $2x^2$, hence 5 divides $x$, and we are done.
-The other case leads to a contradiction.
-If $g(y)=1$, then $y+1$ and $1-y+y^2-y^3+y^4$ are relatively
-prime, and their product is twice a square. Thus, one of them is a
-square and the other is twice a square. Since $1-y+y^2-y^3+y^4$ is odd, it must be a square.
-But for $y>1$, you can easily check that
-$$(2y^2-y)^2 < 4(1-y+y^2-y^3+y^4) < (2y^2-y+1)^2,$$
-so $1-y+y^2-y^3+y^4$ is not a square.
-(I learned this from Ed Burger's book "Exploring The Number Jungle").
-This is the required contradiction.

-Added: I'm pretty sure that there are no integer solutions to $2x^2-1=y^5$
-with $y>1$, but I can't prove it yet.

-REPLY [5 votes]: You can rewrite this as $1-2x^2 = (-y)^{15}$ and then use unique factorization in the ring $\mathbb{Z}[\sqrt{2}]$ to show that $1+x\sqrt{2}$ must be a 15th power in the ring. Then write
-$$1+x\sqrt{2} = u (a+b\sqrt{2})^{15}$$
-where $u$ is a unit of the ring.<|endoftext|>
-TITLE: How to type logarithms in Wolfram|Alpha?
-QUESTION [29 upvotes]: It's sometimes hard to type a logarithm if it is not natural and the base is not 10, especially if the base is a variable. So does anyone know the rules for how to type one?

-REPLY [5 votes]: In their reference, Wolfram|Alpha states the following:

-Log[z] gives the natural logarithm of $z$ (logarithm to base $e$).
-Log[b,z] gives the logarithm to base $b$.

-Michael's answer states this using parentheses. Note that brackets are formally defined, while parentheses are inferred. Realistically this makes no difference, but it is worth noting for the sake of pedantry.
-Additionally, if you search for the term that you need more information on, in this case log, you can get the definition & documentation by hovering over the shortened definition in the bottom corner:

-The Wolfram|Alpha reference provides amazing insight into these types of questions.

-REPLY [2 votes]: You can type in loga(b). This gives the logarithm of b to base a.<|endoftext|>
-TITLE: Continuous Collatz Conjecture
-QUESTION [21 upvotes]: Has anyone studied the real function
-$$ f(x) = \frac{ 2 + 7x - ( 2 + 5x )\cos{\pi x}}{4}$$ (and $f(f(x))$ and $f(f(f(x)))$ and so
-on) with respect to the Collatz conjecture?
-It does what Collatz does on integers, and is defined smoothly on all
-the reals.
-I looked at $$\frac{ \overbrace{ f(f(\cdots(f(x)))) }^{\text{$n$ times}} }{x}$$ briefly, and it appears to have bounds independent of $n$.
-Of course, the function is very wiggly, so Mathematica's graph is
-probably not $100\%$ accurate.

-REPLY [6 votes]: Yes, it has been studied in

-Xing-yuan Wang and Xue-jing Yu (2007), Visualizing generalized 3x+1 function dynamics
- based on fractal, Applied Mathematics and Computation 188 (2007), no. 1, 234–243.
- (MR2327110).

-I found this reference in Jeffrey Lagarias's "The $3x + 1$ Problem: An Annotated Bibliography, II (2000-2009)".
-This real/complex interpolation of yours is not the only one imaginable; you will find several others either in Lagarias' article cited above, or in the first part of this series of articles, namely in "The $3x+1$ problem: An annotated bibliography (1963--1999)".<|endoftext|>
-TITLE: limit comparison test for alternating series
-QUESTION [7 upvotes]: I am trying to understand why the limit comparison test doesn't work for alternating series. Is it even true? Or is there a counterexample? I can't find one. Can you help me please?
-Thanks,
-benny

-REPLY [4 votes]: Usually, the Limit Comparison Test is stated as follows:

-Limit Comparison Test. Let $\sum a_n$ and $\sum b_n$ be two series of positive terms. If
- $$\lim_{n\to\infty}\frac{a_n}{b_n}$$
- exists and is positive, then both $\sum a_n$ and $\sum b_n$ converge, or both diverge.
-
-In fact, it can be extended slightly to include the following two cases:

-If $\lim\frac{a_n}{b_n} = 0$ and $\sum b_n$ converges, then $\sum a_n$ converges.
-If $\lim\frac{a_n}{b_n} =\infty$ and $\sum b_n$ diverges, then $\sum a_n$ diverges.

-The extended test, at any rate, fails if you try to compare alternating series with positive series. Take for example $\sum\frac{1}{n^{3/4}}$ and $\sum \frac{(-1)^n}{\sqrt{n}}$. The first series diverges ($p$-series with $p\lt 1$), and the second converges (alternating series, and the terms go to zero and are decreasing in absolute value). However,
-$$\lim_{n\to\infty}\frac{\quad\frac{1}{n^{3/4}}\quad}{\frac{(-1)^n}{\sqrt{n}}} = \lim_{n\to\infty}\frac{\sqrt{n}}{(-1)^n n^{3/4}} = \lim_{n\to\infty}\frac{1}{(-1)^n\sqrt[4]{n}} = 0.$$
-If the extended test worked here, then you would have to conclude, since the series $\sum b_n = \sum\frac{(-1)^n}{\sqrt{n}}$ converges, that the series $\sum a_n = \sum\frac{1}{n^{3/4}}$ also converges, which it does not.
-What about the regular test, which requires the limit to exist and be finite? If we try to compare an alternating series with a series of positive terms, then we cannot have a limit that is both positive and exists (the terms alternate between positive and negative, so if the limit exists it has to be zero). So it would have to involve comparing two alternating series, and as I write this I see joriki has posted an example, so I'll leave it here.<|endoftext|>
-TITLE: A finite ring is a field if its units $\cup\ \{0\}$ comprise a field of characteristic $\ne 2$
-QUESTION [27 upvotes]: Suppose $R$ is a finite ring (commutative ring with $1$) of characteristic $3$ and suppose that for every unit $u \in R\:,\ 1+u\ $ is also a unit or $0$. We need to show that $R$ is a field. Is this true if ${\rm char}(R) > 3$?
-Here is what I attempted to do. $\:$ First of all, $\:$ I noticed that the statement is not true if $\ R\ $ is infinite ($ \mathbb F_3[x]$ is an example of an infinite ring which is not a field but it satisfies all the required properties). Now, in a finite ring, a non-zero element is either a unit or a $\:0\:$ divisor, so I tried to show that $R$ has no $\:0\:$ divisors. Clearly, $R$ has no nonzero nilpotent elements (if $x$ is nilpotent, then $1+x$ is a unit, but then $1+(1+x)$ and $1+(2+x)$ are each either a unit or $\:0\:.\:$ Hence $x$ is either a unit or $\:0\:,\:$ and since $x$ is nilpotent, it can't be a unit, so we must have $x = 0$). But this does not solve the problem, since $R$ could have elements that are $\:0\:$ divisors but not nilpotent (for example, $\ (1,0)\ $ is a $\:0\:$ divisor in $\ \mathbb Z/3\:\mathbb Z \times \mathbb Z/3\:\mathbb Z\ $ but it is not nilpotent).
-Another observation I made is that the set of units, together with $\:0\:$ forms a group under addition, so that $J = R^{*}$, together with $\:0\:$ is a subring of $R$; hence we may view $R$ as a $J$-module (and since $J$ is clearly a field, $R$ is a $J$-vector space).
-Another thing I tried is to show that $R$ has no proper nontrivial ideals. Viewing $R$ and $J$ as abelian groups, I noticed that a non-trivial ideal of $R$ can contain at most one element from each coset of $J$ in $R$, because if an ideal contains two distinct elements from the same coset of $J$ in $R$, this ideal would have to contain their difference, hence it'd have to contain a unit, hence it would not be a proper ideal. But again, I don't see how this observation leads to a solution.
-As for the last part, I suspect that this statement will remain true if ${\rm char}(R) > 3$. Since $1$ is a unit, it follows that $1,2,3,\ldots$ are all either units or $0$, which can only happen if ${\rm char}(R) = p$, a prime number (and then I suspect that $R$ will have to be a finite field), but again I do not see how to prove (or disprove) this.
-By the way, this is not a homework problem. I am studying algebra on my own, and after thinking about it for a few days and making the observations I listed above, I still don't see how to finish the proof. I would appreciate your suggestions. Thank you in advance.

-REPLY [4 votes]: Below is a complete elementary proof of the more general result that you conjectured.
-Theorem $\ $ Finite ring $ \,R\,\supset\,\mathbb Z/p\ $ is a field, if prime $ \,p > 2\,$ and unit $ \,u\in R\, \Rightarrow\, 1\!+\!u\,$ unit or $\,0$
-Proof $ \ \ R\,$ satisfies $ \ x^{q} =\, x,\ \ q = p^n,\,$ since, $ $ as Thomas showed, the hypotheses imply $ \ f(x) = x^p\ $ is a permutation on the finite set $ \,R,\,$ so it has finite order $ \, f^{n}\!= 1\,.\,$ For $ \,r \in R\ $ let $ \,e = r^{q-1}.\, $ Then $ \,e^2\!-e\ =\ r^{\,q-2} (r^{q}\!-r)\ =\ 0.\, $ So $ \,(2e\!-\!1)^2\! = 4\,(e^2\!-e)+1 = 1\,$ so $\, 1+(2e\!-\!1)\, =\, 2\,e\,$ unit or $ \,0.\,$ $ \, 2^{-1}\in\mathbb Z/p\, \Rightarrow\, e $ unit or $ \,0.\,\ e$ unit $ \Rightarrow r\,$ unit; $ \ e=0\, \Rightarrow\, r = r^{q}\! = re = 0,\, $ i.e. $ \,r\in R\,$ is a unit or $\,0\,.$<|endoftext|>
-TITLE: Introduction to Computational Topology
-QUESTION [8 upvotes]: Question: Besides generally learning Algebraic Topology, what are some prerequisites for studying Computational Topology? Are there any accessible papers which introduce the field and the methods being used?
-Motivation: A number of my peers do applied mathematics where I go to school, and upon hearing how they solved problems for their applied math classes I often think that the same sort of techniques could be used in topology. I am often referred to Stanford's computational topology page, though a number of the preprints on this page are not quite accessible to me. Of course, accessibility is subjective, but I wanted to see if anyone had any suggestions regarding how to proceed. It would be a shame to give up on this interesting looking subject just because I can't get my foot in the door!

-REPLY [5 votes]: I suggest you take a look at computational homology http://www.amazon.com/Computational-Homology-ebook/dp/B000RENIA8 by Tomasz Kaczynski and the notion of persistence. This is where I first encountered the stuff. There are many fine examples in the book on how to get started. In my own work as a quant I am often presented with high-dimensional data sets and I need to classify them. Often there is pattern recognition involved and in such cases I can use the techniques of computational homology to extract algebraic information from the dataset.
-Usually this involves something along the lines of:

-Using cubical homology (easy to calculate with on a computer) and looking for features in the homology groups which are 'persistent' across different sizes (resolutions) of cubes.
-Converting the algebraic information into a barcode (something like the analog of a Betti number) for easy manipulation in code.

-To get started with this you'll need to know how to code in something like C++ or Python. There are libraries that already do the heavy lifting and you wouldn't want to redo all that yourself. Google around on persistent homology and they'll pop up.
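-For instance, here is a minimal sketch of that workflow in Python using the third-party ripser package (one of the libraries that will pop up; the noisy-circle point cloud below is just my toy stand-in for real data):
-import numpy as np
-from ripser import ripser  # pip install ripser
-
-# Toy data: 200 noisy samples from a circle, standing in for a real dataset.
-rng = np.random.default_rng(0)
-theta = rng.uniform(0, 2 * np.pi, 200)
-cloud = np.column_stack([np.cos(theta), np.sin(theta)])
-cloud = cloud + rng.normal(scale=0.05, size=cloud.shape)
-
-# Persistent homology up to dimension 1 (connected components and loops).
-dgms = ripser(cloud, maxdim=1)['dgms']
-
-# One long-lived interval in H_1 is the barcode signature of the circle.
-lifetimes = dgms[1][:, 1] - dgms[1][:, 0]
-print("most persistent H_1 lifetime:", lifetimes.max())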
You'll also need some good higher-dimensional data to work with, which likely means you'll have to deal with databases because you'll need somewhere to hold it all. MySQL or postgres are both fine choices.
-Good luck!<|endoftext|>
-TITLE: Applications of Elliott's theorem concerning the classification of AF-algebras
-QUESTION [6 upvotes]: An AF-algebra is a $C^* $-algebra which is the inductive limit of an inductive sequence of finite-dimensional $C^*$-algebras.
-Elliott's theorem concerning the classification of AF-algebras says that two AF-algebras $\mathfrak{A}$ and $\mathfrak{B}$ are isomorphic as $C^* $-algebras if and only if $(K_0(\mathfrak{A}),K_0(\mathfrak{A})^+,\Gamma(\mathfrak{A}))$ and $(K_0(\mathfrak{B}),K_0(\mathfrak{B})^+,\Gamma(\mathfrak{B}))$ are isomorphic as scaled ordered groups, where $\Gamma(\mathfrak{A})$ denotes the dimension range, i.e. the elements of $K_0(\mathfrak{A})^+$ given as equivalence classes of projections in $\mathfrak{A}$.
-Although this result is interesting and beautiful on its own, I would like to know whether there are interesting applications that can be understood by and might be interesting for students who are familiar with basic K-theory for $C^* $-algebras. Of course I'm also interested in more advanced applications or situations where Elliott's theorem provides insights which are hard to obtain otherwise.

-REPLY [10 votes]: This is more an example than an application, but, among many other non-obvious ordered groups that arise from AF algebras is the group of polynomials with integer coefficients, with positivity meaning taking strictly positive values on the open unit interval (unless the polynomial is the zero polynomial). As shown by Renault, this ordered group arises from Pascal's triangle, interpreted as a Bratteli diagram! (As an application of this, it is possible to derive the solution to the classical Hausdorff moment problem.)
-Another interesting example is the subgroup of the plane consisting of elements with rational coordinates, with positivity determined by the interior of a cone in the plane, the boundary lines of which do not necessarily have rational slope. When the cone is a half-plane, the ordered group is closely related to the continued fraction expansion of the slope of the (single) boundary line.<|endoftext|>
-TITLE: Proof that there are infinitely many primes of the form $4m+3$
-QUESTION [15 upvotes]: I am reading a proof that there are infinitely many primes of the form $4m+3$, but have trouble understanding it. The proof goes like this:
-Assume there are finitely many primes of the form $4m+3$, and take $p_k$ to be the largest prime of the form $4m+3$.
-Let $N_k = 2^2 \cdot 3 \cdot 5 \cdots p_k - 1$, where $p_1=2, p_2=3, p_3=5, \dots$ denotes the sequence of all primes.
-We find that $N_k$ is congruent to $ 3 \pmod {4}$, so it must have a prime factor of the form $4m+3$, and this prime factor is larger than $p_k$ — contradiction.
-My questions are:

-Why is $N_k$ congruent to $3 \pmod{4}$?
-Why must $N_k$ have a prime factor of the form $4m+3$ if it's congruent to $3 \pmod{4}$?

-It seems that those should be obvious, but I don't see it. Any help would be appreciated!

-REPLY [9 votes]: For the first question, if $N_k=2^2\cdot 3\cdot 5\cdots p_k-1$, then
-$$
-N_k-3=2^2\cdot 3\cdot 5\cdots p_k-1-3=4(3\cdot 5\cdots p_k-1)
-$$
-which implies $4|N_k-3$, that is, $N_k\equiv 3\pmod{4}$.
-For the second question, suppose instead that $N_k$ has no prime factors of form $4m+3$.
Then all its prime factors must be of the form $4m+1$, as they cannot be of the form $4m$ or $4m+2$ (such numbers are even, and the only even prime, $2$, does not divide the odd number $N_k$). But notice $$(4m+1)(4k+1)=16mk+4m+4k+1=4(4mk+m+k)+1$$ so by an inductive argument, you see that a product of primes of the form $4m+1$ again has the form $4m+1$. In terms of congruences, if $p_i\equiv 1\pmod{4}$ and $p_j\equiv 1\pmod{4}$, then $p_ip_j\equiv 1^2\equiv 1\pmod{4}$. This would contradict the fact that $N_k$ has form $4m+3$, since $N_k\equiv 3\pmod{4}$.<|endoftext|>
-TITLE: Prove the reduction formula
-QUESTION [6 upvotes]: The question is to "prove the reduction formula"
-$$ \int{ \frac{ x^2 }{ \left(a^2 + x^2\right)^n } dx } = \frac{ 1 }{ 2n-2 } \left( -\frac{x}{ \left( a^2+x^2 \right)^{n-1} } + \int{ \frac{dx}{ \left( a^2 + x^2 \right)^{n-1} } } \right) $$
-What I got is
-Set
-$ u = x $
-$ du = dx $
-$\displaystyle{ dv = \frac{ x }{ \left( a^2 + x^2 \right)^{n} } dx }$
-$\displaystyle{ v = \frac{ 1 }{ 2(n+1) \left( a^2 + x^2 \right)^{n+1} } }$
-So I got
-$$ \frac{ 1 }{ 2n+2 } \left( \frac{x}{ \left( a^2 + x^2 \right)^{n+1}} - \int{ \frac{dx}{ \left( a^2+x^2 \right)^{n+1} } } \right) $$
-Which I believe is correct. They are subtracting from $n$ in the integration step and I'm not sure why.

-REPLY [6 votes]: You went wrong when you integrated $dv$.
-You have $dv = x(a^2+x^2)^{-n}\,dx$. When you integrate, you add one to the exponent. But adding one to $-n$ gives $-n+1 = -(n-1)$. So
-$$v = \frac{1}{2(-n+1)}(a^2+x^2)^{-n+1} = \frac{1}{2(1-n)(a^2+x^2)^{n-1}}.$$
-The minus sign from integration by parts can be cancelled out by switching the sign of $2(1-n)$ to get $2(n-1) = 2n-2$.
-If you use the correct value of $v$, I think you will have no trouble establishing the formula.<|endoftext|>
-TITLE: Formal Schemes Mittag-Leffler
-QUESTION [17 upvotes]: Here is a question that is similar to my last one. I've been trying to learn about Grothendieck's Existence Theorem, but it seems that there aren't very many places that talk about formal schemes and even fewer that come up with examples.
-Suppose $(\mathfrak{X}, \mathcal{O}_\mathfrak{X})$ is a Noetherian formal scheme and let $\mathcal{I}$ be an ideal of definition. Then we have a system of schemes $X_n=(|\mathfrak{X}|, \mathcal{O}_\mathfrak{X}/\mathcal{I}^n)$.
-If the inverse system $\Gamma(X_n, \mathcal{O}_{X_n})\to \Gamma(X_{n-1}, \mathcal{O}_{X_{n-1}})$ satisfies the Mittag-Leffler condition (the images eventually stabilize), then we get some particularly nice properties such as $Pic(\mathfrak{X})=\lim Pic(X_n)$.
-More generally, we don't have to worry about converting between thinking about coherent sheaves on the formal scheme and thinking about them as compatible systems of coherent sheaves on actual schemes.
-My question is, is there a known example of a formal scheme for which that system of global sections does not satisfy the Mittag-Leffler condition? One thing to note is that it can't be affine (the maps are all surjective) or projective (finite dimensionality forces the images to stabilize).
-A subquestion is whether or not there is a general reason to believe such an example exists. People I talk to usually say things along the lines of: you definitely have to be careful here because in principle this could happen. But no one seems to have ever thought up an example.
-Lastly (still related...I think), is there a known example where you can't think of coherent (or maybe just invertible) sheaves as systems because the two aren't the same?
-
-REPLY [2 votes]: I don't have enough reputation to comment, so I must submit an answer which I do not really have. There is one class of varieties you must consider when searching for a counterexample: quasiprojective varieties. That is, open subsets of projective varieties which are not affine, e.g. a surface minus a point. You also want the maps in question to fail to be surjective, so you should look for an ideal $\mathcal I$ which satisfies $H^1(X, \mathcal I^r / \mathcal I^{r+1}) \neq 0$ for all $r\gg 0$. This sounds like it's harder to do, but not impossible if you know examples.<|endoftext|>
-TITLE: Positive integers satisfying $x^{a+b} = a^b \cdot b$, how to show that $a=x$ and $b=x^x$?
-QUESTION [13 upvotes]: Let $a,x,b$ be positive integers satisfying $x^{a+b} = a^b \cdot b$. How can I prove that $a=x$ and $b=x^x$?

-REPLY [11 votes]: The equation must be satisfied for each prime individually; that is, if $p$ is a prime factor of any of $a$, $b$ and $x$ and we denote the number of factors of $p$ in $a$, $b$ and $x$ by $n_a$, $n_b$ and $n_x$, respectively, we must have
-$$(a+b)n_x=bn_a+n_b\;.\tag{1}$$
-[Edit: Thanks to Harry for pointing out that I forgot to treat the case $n_b=0$. In this case, $an_x=b(n_a-n_x)$, and so $p^{n_a}\mid n_a-n_x$ (since $b$ contains no factors of $p$). But $n_a\ge n_x$, since $(a+b)n_x=bn_a$, and so either $n_a=n_x$, which would imply $n_a=n_x=n_b=0$, or $n_a>n_x$, and thus $n_a\ge n_a-n_x\ge p^{n_a}$, which is impossible. Thus $n_b\neq0$.]
-Now let $q$ be any prime factor (not necessarily distinct from $p$) in any of $a$, $b$ and $x$, and denote the number of factors of $q$ in $a$ and $b$ by $m_a$ and $m_b$, respectively. Then $q^{m_a}| a$ and $q^{m_b}| b$, and hence each term in the equation except for $n_b$ is divisible by $q^{\min(m_a,m_b)}$; thus $n_b$ is, too. In particular, $n_b$ is divisible by $p^{\min(n_a,n_b)}$. But $p^{n_b} \nmid n_b$, since $p^{n_b}>n_b$, and hence $n_b>n_a$. Since $p$ and $q$ are arbitrary, this implies $m_b>m_a$, and thus $q^{m_a} | n_b$. In particular $p^{n_a} | n_b$, and thus $p^{n_a}\le n_b$. Also, since $q^{m_a} | n_b$ for all prime factors $q$ of $a$, we have $a|n_b$. Thus we can write (1) as
-$$a\left(n_x-\frac{n_b}{a}\right)=b(n_a-n_x)\;.\tag{2}$$
-To show that both sides of this equation are in fact zero, we can again consider factors of $p$. The right-hand side contains at least $n_b$ factors of $p$, and $a$ contains only $n_a$, so
-$$p^{n_b-n_a}\mid n_x-\frac{n_b}{a}\;,$$
-and thus if this difference is not zero, we must have
-$$p^{n_b-n_a} \le \left\lvert n_x-\frac{n_b}{a}\right\rvert\;.$$
-Considering first the case $n_x<\frac{n_b}{a}$, it follows that
-$$
-p^{n_b-n_a}
-\le
-\left\lvert n_x-\frac{n_b}{a}\right\rvert
-=
-\frac{n_b}{a}-n_x
-\le
-\frac{n_b}{a}
-\le
-\frac{n_b}{p^{n_a}}\;,
-$$
-and thus $p^{n_b}\le n_b$, which is impossible. Considering instead the case $n_x>\frac{n_b}{a}$, from (2) this implies $n_a>n_x$, and thus
-$$
-p^{n_b-n_a}
-\le
-\left\lvert n_x-\frac{n_b}{a}\right\rvert
-=
-n_x-\frac{n_b}{a}
-\le
-n_x
-<
-n_a
-<
-p^{n_a}\;,
-$$
-that is, $p^{n_b}<p^{2n_a}$ and hence $n_b<2n_a$. But we showed above that $p^{n_a}\le n_b$, and since $p^{n_a}\ge 2^{n_a}\ge 2n_a$, this forces $n_b\ge 2n_a$, a contradiction.
-Therefore both sides of $(2)$ must be zero, so $n_a=n_x$ and $n_b=an_x$ for every prime $p$. The first equality, holding for all primes, gives $a=x$, and then the second gives $b=x^a=x^x$, as was to be shown.<|endoftext|>
-TITLE: Probability of picking a specific value from a countably infinite set
-QUESTION [7 upvotes]: I have just learned in probability that picking a specific value from an uncountably infinite set (continuous) has a probability of zero, and that we thus estimate such things over an interval using integrals. This clearly does not apply to finite sets (discrete), where you can easily calculate the probability.
But does it not also apply to a countably infinite set (the natural numbers, for example), as it is discrete? On one hand, if we calculate the probability of picking a certain element as the limit of $1/x$ as $x$ goes to infinity, it seems to be zero; but then again, it's a discrete variable and I am not sure if it works the same way as in the continuous case...

-REPLY [9 votes]: I'm assuming that implicit in your question is that you're looking for a uniform distribution. (Otherwise, the statement "picking a specific value from an uncountably infinite set has a probability of zero" is false.)
-To answer such questions systematically, you need a clear definition of what you mean by probabilities. You'll find the usual definition e.g. in the Wikipedia articles on probability axioms, probability measure and probability space. The key point there is that probabilities need to be countably additive. This allows you to derive a contradiction from assigning zero probability to elementary events in a countable probability space, but not in the case of an uncountable space. Assigning zero to a singleton set in a countable space leads to the contradiction that the countable sum of the zeros for all the singletons must be $0$ (from countable additivity), but $1$ because it's the probability for the entire space. Note that this has nothing to do with "discreteness" in a topological sense; e.g., it's true for the rationals, independent of whether you regard them as a discrete space or with the usual topology induced by the topology of the reals.<|endoftext|>
-TITLE: Invariant subspaces
-QUESTION [5 upvotes]: Good morning,
-Let $T$ be a linear transformation acting on a vector space $V$ over a field $F$.
-Let $W$ be a subspace of $V$ that is invariant under $T$, and let $f,g$ be polynomials over the same field $F$.
-I need to prove that $W$ is invariant under $g(T)$ and that $f(T)(W)$ is invariant under $g(T)$.
-Obviously $g(T)$ is also a linear mapping, and it should leave $W$ invariant just as $T$ does; I just don't know how to prove it correctly.
-Have a good day.

-REPLY [5 votes]: Because $W$ is a subspace of $V$, it is invariant under all the maps $m_c:V\rightarrow V$, where for each $c\in F$ the map $m_c$ is defined by $m_c(v)=cv$. By assumption, $W$ is invariant under the map $T:V\rightarrow V$.
-Hint: Show that if $W$ is invariant under two maps $C,D:V\rightarrow V$, then $W$ is invariant under $C\circ D$ and $C+D$. Then you will have that $W$ is invariant under all
-$$f(T)=a_nT^n+\cdots+a_1T+a_0=(m_{a_n}\circ T\circ\cdots\circ T)+\cdots +(m_{a_1}\circ T)+m_{a_0}.$$<|endoftext|>
-TITLE: Count the number of positive solutions for a linear diophantine equation
-QUESTION [18 upvotes]: Given a linear Diophantine equation, how can I count the number of positive solutions?
-More specifically, I am interested in the number of positive solutions for the following linear Diophantine equation:
-$3w + 2x + y + z = 47$
-Update: I am only interested in non-zero solutions.
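-For an instance this small you can also sanity-check any closed-form answer by brute force; a minimal Python sketch for strictly positive $w,x,y,z$:
-count = 0
-for w in range(1, 15):             # 3w + 2 + 1 + 1 <= 47 forces w <= 14
-    for x in range(1, 22):         # similarly 2x <= 47 - 3 - 1 - 1
-        rest = 47 - 3 * w - 2 * x  # what y + z must equal
-        if rest >= 2:              # y, z >= 1 means y + z >= 2
-            count += rest - 1      # y = 1, ..., rest - 1 and z = rest - y
-print(count)  # 2282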
-
-REPLY [6 votes]: For the number of solutions of:
-$$ \sum_{k=1}^{m}p_k a_k = M, p_k,a_k \in \mathbb{N} $$
-Make a generating function:
-$$ P(x) = \prod_{k=1}^{m}\frac{x^{a_k}}{1-x^{a_k}} $$
-Notice that the coefficients of the expansion of $P(x)$ (in standard notation, $[x^n]P(x)$) give the number of solutions of:
-$$ \sum_{k=1}^{m}p_ka_k = n $$
-Therefore the number of solutions of
-$$ \sum_{k=1}^{m}p_ka_k=M, p_k,a_k \in \mathbb{N} $$
-is given by the $M$-th derivative at $0$ divided by $M!$:
-$$\frac{1}{M!}\frac{\mathrm{d^M}P(x) }{\mathrm{d} x^M} (0)$$
-Now you can use one of the methods for calculating higher derivatives:
-$$f^{(n)}=\lim_{h \to 0} \frac{1}{h^n}\sum_{k=0}^{n}(-1)^{k+n}\binom{n}{k}f(x+kh)$$
-$$f^{(n)}=\frac{n!}{2\pi i}\oint\limits_\gamma \frac{f(z)}{z^{n+1}} \mathrm{d}z$$
-where $\gamma$ is a circle around the origin.
-Notice that the second gives a great way to directly calculate the number of solutions:
-$$S(M)=\frac{1}{2\pi i}\oint\limits_\gamma \frac{P(z)}{z^{M+1}} \mathrm{d}z$$
-In your case that is then:
-$$S(M)=\frac{1}{2\pi i}\oint\limits_\gamma \frac{1}{z^{41}(1-z^3)(1-z^2)(1-z)^2} \mathrm{d}z$$
-Replacing (we have a pole at $1$):
-$$z=\frac{1}{2}e^{i\theta}$$
-we have:
-$$S(47)=\frac{2^{39}}{\pi}\int_{0}^{2\pi} \frac{1}{e^{40i\theta}(1-\frac{1}{8}e^{3i\theta})(1-\frac{1}{4}e^{2i\theta})(1-\frac{1}{2}e^{i\theta})^2} \mathrm{d}\theta$$
-(The integral is solved by partial fractions if we are to do it manually or numerically, or for larger values only asymptotically.)
-Notice that of all the partial fractions, those of the form $\frac{1}{z^n}$ or $\frac{1}{z^{2n}+z^{n}+1}$ do not contribute (their integrals are equal to $0$). So, with $z=\frac{1}{2}e^{i\theta}$, we are left with
-$$-\frac{306425}{144(z-1)}+\frac{10577}{72(z-1)^2}-\frac{83}{12(z-1)^3}+\frac{1}{6(z-1)^4}+\frac{1}{16(z+1)}$$
-which gives the integral evaluation (noting that odd powers of $(z-1)$ change the sign)
-$$\frac{306425}{144}+\frac{10577}{72}+\frac{83}{12}+\frac{1}{6}+\frac{1}{16}=2282$$<|endoftext|>
-TITLE: Strange Cubic Diophantine Equations
-QUESTION [6 upvotes]: Does anyone have any ideas towards solving these four equations one at a time?

-$a^3 - 3a^2b + b^3 = \pm 1$
-$a^3 + 3a^2b - 6 ab^2 + b^3 = \pm 1$

-I am guessing that the $1$ might mean we can use units in some algebraic number field to solve these but I have no idea which one or how to find it. Maybe I am wrong entirely.

-These are both Thue equations. Mordell shows how to solve $a^3 - 3a^2b + b^3 = 1$ in his book using $p$-adic methods, it is found the solutions are (x,y) = (1,0),(0,-1),(-1,1),(1,-3),(-3,2),(2,1).
-These equations can be solved by pari/gp
-? p = thueinit(x^3 - 3*x^2 + 1);
-? thue(p,1)
-% = [[-1, 2], [0, 1], [-1, -1], [-2, -3], [3, 1], [1, 0]]
-? thue(p,-1)
-% = [[1, -2], [0, -1], [1, 1], [2, 3], [-3, -1], [-1, 0]]
-? p = thueinit(x^3 + 3*x^2 - 6*x + 1);
-? thue(p,1)
-% = [[0, 1], [-1, -1], [1, 0]]
-? thue(p,-1)
-% = [[0, -1], [1, 1], [-1, 0]]

-but it is not clear how they are being solved.

-REPLY [3 votes]: The change of variables $a = u+v$ and $b = u-v$ gives $$a^3 - 3a^2b + b^3=-(u^3 + 3vu^2 - 9v^2u - 3v^3) = 8u^3 - 3(u-v)(u+v)(3u+v) = 8Y - 3X = \pm 1.$$
-So, in the case $3X - 8Y = 1$, we could have $(X,Y) = \ldots(-13,-5),(-5,-2),(3,1),(11,4),(19,7),\ldots$; each case could be dealt with one by one, but to deal with all (infinitely many) of them at once is not practical.
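-As a quick sanity check of the pari output above before treating the second form, a brute-force scan of a small box recovers exactly the listed pairs (a Python sketch only; completeness of the list is what the Thue machinery guarantees):
-F = lambda x, y: x ** 3 - 3 * x ** 2 * y + y ** 3
-box = range(-50, 51)
-print(sorted((x, y) for x in box for y in box if F(x, y) == 1))
-# [(-2, -3), (-1, -1), (-1, 2), (0, 1), (1, 0), (3, 1)]
-print(sorted((x, y) for x in box for y in box if F(x, y) == -1))
-# the six negated pairs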
-
-The change of variables $a = u-v$, $b = v$ gives $$a^3 + 3a^2b - 6 ab^2 + b^3 = u^3 - 9 v^2 (u-v) = X - 9 Y = \pm 1.$$ This puts us in a similar situation to the one above.<|endoftext|>
-TITLE: Biquadratic Extension
-QUESTION [5 upvotes]: I need a hint to solve exercise 13.2.9 in Dummit and Foote. Suppose $F$ is a field of char not equal to 2. Suppose $a^2 -b$ is a square where $a,b \in F$ and $b$ is not a square.
-Show $\sqrt{a + \sqrt{b}} =\sqrt{m} +\sqrt{n}$ for some $m,n \in F$.
-I've reduced to $(a+\sqrt{b})(a-\sqrt{b})=(\sqrt{m}+\sqrt{n})^{2}$ but don't know how to proceed.

-REPLY [5 votes]: Hypothetically, if $\sqrt{a + \sqrt{b}} = \sqrt{m} +\sqrt{n}$ then $a + \sqrt{b} = m + n + 2 \sqrt{m n}$ so it could be that $a = m + n$ and $b = 4 m n$, then we would further have $a^2 - b = (m - n)^2$.
-So let's say given $a,b$ we define $m = \tfrac{1}{2}(a + \sqrt{a^2 - b})$, $n = \tfrac{1}{2}(a - \sqrt{a^2 - b})$. These are certainly elements of the field due to the condition of $a^2 - b$ being a square, furthermore multiplying it out proves that it works.<|endoftext|>
-TITLE: Intersection of two 'huge' sets in the plane
-QUESTION [9 upvotes]: Consider two sets on the plane $A=\mathbb{Q}\times \mathbb{R}$ and $B=\mathbb{R}\times \mathbb{Q}$.
-We know that $A\cap B=\mathbb{Q}\times \mathbb{Q}\neq\emptyset$. What about the general cases?
-That is, will $A\cap B\neq\emptyset$ if $A,B\subset\mathbb{R}^2$ satisfy that

-each vertical fiber $A_y$ of $A$ is dense in $\mathbb{R}\times \{y\}$,
-each horizontal fiber $B_x$ of $B$ is dense in $\{x\}\times\mathbb{R}$?


-What if we replace the assumption by
-a. each vertical fiber $A_y$ of $A$ is of positive volume,
-b. each horizontal fiber $B_x$ of $B$ is of positive volume?
-The only case that I know is if the positive volume assumption is replaced by full volume (Fubini theorem).
-Thanks!

-REPLY [2 votes]: A simple way to construct sets with the required properties is to first pick $U,V\subseteq\mathbb{R}$ and set
-$$
-\begin{align}
-&A = \left\{ (x,y)\in\mathbb{R}^2\colon x+y\in U\right\},\\
-&B = \left\{ (x,y)\in\mathbb{R}^2\colon x+y\in V\right\}.
-\end{align}
-$$
-Then the vertical fibres $A_y$ are all translates of $U$ and the horizontal fibres $B_x$ are translates of $V$ (and, similarly, for the horizontal fibres of $A$ and vertical fibres of $B$). Choosing $U,V$ disjoint makes $A,B$ disjoint as well. You can take $U=\mathbb{Q}$ and $V=\mathbb{R}\setminus\mathbb{Q}$ for the first example and $U=(0,1)$, $V=(1,2)$ for the second. Or, if you prefer, take $U=\left((0,1)\setminus\mathbb{Q}\right)\cup\left(\mathbb{Q}\setminus(0,1)\right)$ and $V=\mathbb{R}\setminus U$ to give a simultaneous counterexample to both.
-If you want to venture into non-measurable sets, which requires the axiom of choice, then taking $U$ to be any subset of $\mathbb{R}$ with full outer measure and zero inner measure (e.g., a Vitali set) and $V=\mathbb{R}\setminus U$ then $A$ and $B$ (as subsets of $\mathbb{R}^2$) together with their horizontal and vertical fibres (as subsets of $\mathbb{R}$) will have full outer measure, but have zero inner measure.
In fact, again using the axiom of choice, it's possible to find uncountably many pairwise disjoint sets whose horizontal and vertical fibres all have full outer measure.<|endoftext|>
-TITLE: The square roots of different primes are linearly independent over the field of rationals
-QUESTION [153 upvotes]: I need to find a way of proving that the square roots of a finite set
- of different primes are linearly independent over the field of
- rationals.

-I've tried to solve the problem using elementary algebra
-and also using the theory of field extensions, without success. To
-prove linear independence of two primes is easy but then my problems
-arise. I would be very thankful for an answer to this question.

-REPLY [36 votes]: Assume that there was some linear dependence relation of the form
-$$ \sum_{k=1}^n c_k \sqrt{p_k} + c_0 = 0 $$
-where $ c_k \in \mathbb{Q} $ and the $ p_k $ are distinct prime numbers. Let $ L $ be the smallest extension of $ \mathbb{Q} $ containing all of the $ \sqrt{p_k} $. We argue using the field trace $ T = T_{L/\mathbb{Q}} $. First, note that if $ d \in \mathbb{N} $ is not a perfect square, we have that $ T(\sqrt{d}) = 0 $. This is because $ L/\mathbb{Q} $ is Galois, and $ \sqrt{d} $ cannot be a fixed point of the action of the Galois group as it is not rational. This means that half of the Galois group maps it to its other conjugate $ -\sqrt{d} $, and therefore the sum of all conjugates cancels out. Furthermore, note that we have $ T(q) = 0 $ iff $ q = 0 $ for rational $ q $.
-Taking traces on both sides we immediately find that $ c_0 = 0 $. Let $ 1 \leq j \leq n $ and multiply both sides by $ \sqrt{p_j} $ to get
-$$ c_j p_j + \sum_{1 \leq k \leq n, k\neq j} c_k \sqrt{p_k p_j} = 0$$
-Now, taking traces annihilates the second term entirely and we are left with $ T(c_j p_j) = 0 $, which implies $ c_j = 0 $. Since $ j $ was arbitrary, we conclude that all coefficients are zero, proving linear independence.<|endoftext|>
-TITLE: A set with a finite integral of measure zero?
-QUESTION [11 upvotes]: Prove, or give a counter example:
-Let $\mu$ be a finite positive Borel measure on $\mathbb{R}$. Then $\int (x-y)^{-2} d \mu (y) = \infty $ $\mu$-almost everywhere (with respect to the choice of $x$).
-This is a question I had in an exam, and the answer is supposed to be presented in less than 30 words, so there must be something quite simple I'm missing.

-REPLY [5 votes]: Yes it is true. Consider the set $S$ on which the integral in question is bounded by a positive value $K$. Then $\mu$ cannot have measure greater than $K\epsilon^2$ on any interval of width $\epsilon$ intersecting $S$. However, any interval $[a,b]$ can be divided into $n$ intervals of width $(b-a)/n$. So, $\mu(S\cap[a,b])$ is bounded by $n K ((b-a)/n)^2$. Let $n$ go to infinity.
-I didn't count, but I know that exceeded 30 words by a fair margin.
-The argument is easily adapted to show that $\int\vert x-y\vert^{-1-\epsilon}\,d\mu(y)$ is infinite $\mu$-almost everywhere for any $\epsilon > 0$. The more difficult question is whether the same holds for $\int\vert x-y\vert^{-1}\,d\mu(y)$.<|endoftext|>
-TITLE: Compactness of a bounded operator $T\colon c_0 \to \ell^1$
-QUESTION [22 upvotes]: Pitt's Theorem says that any bounded linear operator $T\colon \ell^r \to \ell^p$, $1 \leq p < r < \infty$, or $T\colon c_0 \to \ell^p$ is compact.
-I know how to prove this in case $\ell^r \to \ell^p$, and $c_0 \to \ell^p$, where $p > 1$.
The main idea in the first case is that $\ell^r$ is reflexive and hence the closed ball $B_{\ell^r}$ is weakly compact. In the second case we could just use the Schauder Theorem ($T$ is compact if and only if $T^*$ is compact).
-The only case left is $T\colon c_0 \to \ell^1$. I have tried something like this:
-By the Schauder Theorem we need to prove that $T^*\colon \ell^\infty \to \ell^1$ is compact. By the Banach-Alaoglu Theorem we know that $B_{\ell^\infty}$ is compact in the weak${}^*$ topology on $\ell^\infty$. Moreover, we know, since $\ell^1$ is separable, that $B_{\ell^\infty}$ is metrizable in this topology. Hence, it is enough to prove that if $(x_n)$ is a weak${}^*$ convergent sequence in $B_{\ell^\infty}$ (say, to $x$) then $(T^*x_n)$ converges (to $T^*x$, I think). By the Schur Theorem (weak and norm convergence are the same in $\ell^1$) we only need to show that $(T^*x_n)$ converges weakly in $\ell^1$.
-And here I am stuck. Could you give me any ideas or references?

-Edit (4.4.2011): I found in Diestel's Sequences and series in Banach spaces (chap. VII, Exercise 2(ii)) something like this:
-A bounded operator $T: c_0 \to X$ is compact if and only if every subseries of $\sum_{n=1}^\infty Te_n$ is convergent, where $(e_n)$ is the canonical basis for $c_0$.
-I know how to prove this, but how can we show that operators $T: c_0 \to \ell^1$ possess the subseries property?

-REPLY [2 votes]: Any operator $T:c_0\to X$ fails to be compact only if it fails to be strictly singular (see Albiac and Kalton, Theorem $2.4.10$). This would mean there is an infinite-dimensional subspace $E\leq c_0$ so that $T|_E$ is an embedding. But $c_0$ is self-saturated, so there is some $F\leq E$ isomorphic to $c_0$, and $T|_F$ is an embedding. So if $T:c_0\to X$ fails to be compact, $X$ contains a copy of $c_0$. A corollary is that all operators $T:c_0\to \ell_p$ are compact, $1\leq p<\infty$.
-In fact, this is a characterization of spaces which contain $c_0$. $X$ does not contain a copy of $c_0$ if and only if all operators $T:c_0\to X$ are compact.<|endoftext|>
-TITLE: Fewest required values in magic square?
-QUESTION [14 upvotes]: A magic square of order $n$ is an $n \times n$ grid containing each of the numbers $1,2,\dots,n^2$, so that the numbers in each row, column, and diagonal sum to the same number $n(n^2+1)/2$.
-This question follows on from Is half-filled magic square problem NP-complete? about completing a partially filled magic square.
-For $n=3$ the middle square must be $5$, so if the middle square is set to another value then there is no way to complete the magic square. On the other hand, if $n=4$ and a single number is specified (anywhere in the grid), then there is always a way to complete the magic square. This can be checked by inspection of the list of order-4 magic squares, from the site of Harvey Heinz.
-Let $f(n)$ be the smallest number of values that, when placed in an $n\times n$ grid, results in a pattern that cannot be completed to form a magic square. By the previous two examples, $f(3) = 1$ and $f(4) \ge 2$. The value $f(n)$ can also be thought of as the least number of values required in a partial $n \times n$ magic square for the decision problem to be interesting.
-This motivates the question:

-What is the asymptotic behaviour of $f(n)$ as $n$ tends to infinity?

-I would be especially interested in knowing whether $f(n) = O(\log n)$, or if $f(n) = \Omega(n)$ (using big-O notation).
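-(For the $n=3$ claim above, a brute-force enumeration in Python makes the forced middle value visible; a small sketch:)
-from itertools import permutations
-
-TARGET = 15  # 3 * (9 + 1) / 2
-LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
-
-centers = set()
-for g in permutations(range(1, 10)):
-    if all(g[a] + g[b] + g[c] == TARGET for a, b, c in LINES):
-        centers.add(g[4])
-print(centers)  # {5}: every order-3 magic square has 5 in the middle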
-Consider the $k$ smallest numbers specified together with the $n-k$ largest numbers in the same row. This will fail to reach the required row sum $n(n^2+1)/2$ when $k \gt n/2$. It then follows that $1 \le f(n) \le \lceil (n+1)/2 \rceil$. Are there sharper upper and lower bounds on $f(n)$, perhaps by distinguishing the case of $n$ even from $n$ odd?

-REPLY [4 votes]: I have an idea which may lead to a proof that $f(n) = \Omega(\sqrt{n})$, but it would require quite a bit of work to flesh out.
-Here is the idea: Take $n = m^2$. We are going to try to divide the $n \times n$ magic square into $m^2$ smaller $m \times m$ squares in such a way that each of our $m$ clues lies in a different square. (Or as close to that as we can get.)
-Specifically, we are looking for a permutation $\pi \in S_n$ such that $\pi(n-k) = n - \pi(k)$. We then define the (a,b)-subsquare to be the subset $[\pi(ma),\pi(ma+1)...\pi(ma+m-1)]$ $\times$ $[\pi(mb),\pi(mb+1),...\pi(mb+m-1)]$ of our original square for $a,b \in [0,...m-1]$. We point out that the numbers on the $x=y$ diagonal of the (a,a)-subsquare all came from the corresponding diagonal of the main square. Likewise, the numbers on the $x=m-1-y$ diagonal of the (a,m-1-a)-subsquare came from the other diagonal of the main square.
-We next look for a way to divide the numbers from 1 to $n^2 = m^4$ in such a way as to make all $m^2$ subsquares into magic squares with the same sum. This is not possible for $m$ even, but may be possible for $m$ odd. We don't actually need that they have the same sum, just that the $m \times m$ matrix of sums is itself a magic square, with non-distinct entries -- this may help us in the case of even $m$. Once we've done this, we've solved the original magic square.
-1) For any $m$-point subset $S$ of $[0,...,n-1]^2$, is there a permutation $\pi$ such that $(\pi(a),\pi(b))$ is not in the same subsquare as $(\pi(c),\pi(d))$ for $(a,b),(c,d) \in S$?
-2) Given such a permutation, is there a way to complete the $m^2$ subsquares into magic squares with the same sum using the numbers from 1 to $m^4$?<|endoftext|>
-TITLE: Approximations Involving Exponential Functions
-QUESTION [7 upvotes]: I am reading a text and I am curious to know how certain approximations were reached.
-The first function approximation is: $$ 1- \frac{1}{2p}((1+p)e^{\frac{-y}{x(1+p)}} - (1-p)e^{\frac{-y}{x(1-p)}}) \approx \frac{y^2}{2x^2 (1-p^2)}$$
-when $y \ll x$. Note that I tried using the approximation $e^x \approx 1+x$, when x is small, but all I got was the conclusion that $1- \frac{1}{2p}((1+p)e^{\frac{-y}{x(1+p)}} - (1-p)e^{\frac{-y}{x(1-p)}}) \approx 0$.
-The second function approximation is: $$ 1-e^{\frac{-y}{x}}(1-Q(a,b)+Q(b,a)) \approx \frac{y^2}{x^2 (1-p^2)}$$
-when $y \ll x$, where $Q(a,b) = \int_b^\infty e^{-\frac12 (a^2 + u^2)} I_0(au) u \, du$, $b = \sqrt{\frac{2y}{x(1-p^2)}}$, $a = bp$, $I_0$ is a modified Bessel Function of the first kind.
-It is also a known fact that $$ Q(b,0) = 1$$ and $$ Q(0,b) = e^{-\frac{b^2}{2}}.$$ I have tried to assume $a = 0$, since $a = bp$, $b$ is small, and $p$ is a number between 0 and 1. However, it is not clear how $(1-p^2)$ is in the denominator and not $(1-p^2)^2 $, which would be closer to the traditional $e^x$ approximation.
-Any hints on how these approximations were derived would be appreciated. Thanks.
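-In the meantime, a numerical check confirms the first formula, and suggests the resolution is keeping the quadratic term $\frac{x^2}{2}$ of the exponential series rather than stopping at $1+x$; a small Python sketch (the parameter values are arbitrary picks of mine):
-from math import exp
-
-x, y, p = 1.0, 0.01, 0.5   # test point with y << x
-lhs = 1 - ((1 + p) * exp(-y / (x * (1 + p)))
-           - (1 - p) * exp(-y / (x * (1 - p)))) / (2 * p)
-rhs = y ** 2 / (2 * x ** 2 * (1 - p ** 2))
-print(lhs, rhs, lhs / rhs)  # ratio tends to 1 as y/x -> 0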
-
-REPLY [2 votes]: For the second one, use the following approximations, valid when $a$, $b$, $x$ are small: $$ Q(a,b) \approx 1 - \frac{b^2}{2} + \frac{a^2 b^2}{4} + \frac{b^4}{8}$$
-$$ e^x \approx 1 + x + \frac{x^2}{2!}$$
-Then after the substitution has been made, the result will be: $$1 - (A)$$
-$A$ has 15 terms, since it is the result of multiplying the 3 terms from the exponential approximation by the (1+4+4 = 9) minus (2 from the $Q$ terms cancelling) minus (2 from the 1, -1 cancelling) = 5 remaining terms. Two examples of the 15 terms in $A$ are $\frac{-y}{x} \frac{p^2}{(1-p^2)}$ and, among the higher-order ones, $\frac{-y^4}{4x^4 (1-p^2)^2}$.
-Then after all 15 terms in $A$ have been reached, eliminate all the terms containing $y^3$. I believe the reason is that, since $y$ is much smaller than $x$, $(\frac{y}{x})^3$ is smaller still. Terms will cancel out and the desired approximation will be reached.
-For reference, the approximation is defined on page 473, equation (10-10-10) of "Communication Systems and Techniques" by Mischa Schwartz.<|endoftext|>
-TITLE: Conditions for no duality gap in quadratic programming?
-QUESTION [8 upvotes]: Assume $Q \in \mathbb{R}^{n\times n}$, and $b,c,d \in \mathbb{R}^n$. A quadratic programming problem is:
-$$ \min_{x \in \mathbb{R}^{n}} \tfrac{1}{2} x^T Q x + c^T x,$$
-subject to $A x \leq b, E x = d$.
-I was wondering what are some sufficient and/or necessary conditions for the quadratic programming problem to have no duality gap? For example, consider the cases where $Q$ is or is not symmetric and/or positive semidefinite.
-Does any book or website discuss this, such as Bazaraa's Nonlinear programming: theory and algorithms or Bertsekas's Nonlinear programming?
-Thanks and regards!

-REPLY [6 votes]: First, you can assume $Q$ is symmetric, as otherwise you can convert the problem to one that does contain a symmetric matrix via $P = \frac{1}{2}(Q + Q^T)$. It's not hard to show that $P$ is symmetric and satisfies $x^T P x = x^T Q x$ for all $x$.
-Vanderbei's Linear Programming: Foundations and Extensions proves that $Q$ being positive semidefinite is a sufficient condition for no duality gap. (See pp. 378-379 in the first edition.)
-Bazaraa, Sherali, and Shetty's Nonlinear Programming: Theory and Algorithms shows that, for general nonlinear programming, the existence of a saddle point for the Lagrangian function is a necessary and sufficient condition for no duality gap. (See Theorem 6.2.5, pp. 269-270, third edition.) So obviously that gives a necessary and sufficient condition for quadratic programming as well. Perhaps there's a way to simplify the theorem about a saddle point to the special case of quadratic programming, but I don't see how to do it right now. (Added: There doesn't seem to be a known useful way to do that. See next paragraph.)
-Added: For the nonconvex case ($Q$ is not positive semidefinite), finding useful sufficient conditions for no duality gap appears to be an ongoing research problem.
For example, see "On the zero duality gap in nonconvex quadratic programming problems," by Zheng, Sun, Li, and Xu (Journal of Global Optimization, 2011, DOI: 10.1007/s10898-011-9660-y), particularly the introduction, where they give an overview of results on conditions for no duality gap, including some necessary and sufficient conditions for certain special cases of quadratic programming - but none, as far as I can tell, in the general case.<|endoftext|>
-TITLE: Alternative name for "closed set"
-QUESTION [8 upvotes]: It is usually argued (and also joked about) that classifying sets into open and closed is a bit paradoxical, since sets can be open and closed at the same time, or neither. This can be analyzed very clearly by noting that closed is an antonym of open: it means exactly not open (and vice versa). Then, by saying that a set is clopen, or open and closed, we are in some way claiming that the set is open and not open, and this is a basic contradiction from a logical point of view.
-While I don't dare to think that I can make an impact on the use of these terms by changing the way I myself call some sets, it would be much easier for me to at least think of them with different names. Particularly, I've been wondering about the correctness of saying co-open instead of closed.
-In the first place, it makes sense because it's more natural to see the link between declaring $A$ is co-open and the complement of $A$ is open. Also, since complementation is an involution, in some natural way we can say that a co-co-open set, which we may understand as a set whose complement has an open complement, is nothing but an open set; this, I believe, is desirable behavior for the use of the co- prefix.
-My final worry about the correctness of using co-open is that the co- prefix is widely used in category theory, and is formalized by the notion of duality (to name but a few examples: initial and coinitial, or terminal and coterminal, product and coproduct, limit and colimit, cone and cocone). So I've been wondering whether there should be some categorical formulation of a topology so that saying $A$ is co-open in $\mathcal{C}$ was equivalent to saying $A$ is open in $\mathcal{C}^{op}$, where $\mathcal{C}$ was the categorical formulation of a topology in which $A$ is a closed set. This, in order to justify the use of co- with duality.
-However, I am aware of the existence of other terms which start with the prefix co- but, I believe, don't have a possible categorical formulation or which were named before they were redefined in terms of categorical notions, like cologarithm, cosine, cosecant, cotangent, cotree.
-My question would then be: aside from the issue that every mathematician writes closed, could co-open be considered correct as a terminological alternative, based on the points I set out, and on others I might possibly have missed? In the end, if co-open is correct, this will only be useful for my own pedagogical reasons.
-Thanks.
-
-REPLY [2 votes]: This can be analyzed very clearly by noting that closed is an antonym of open: it means exactly not open (and vice versa)
-
-This is confusing. Closed doesn't mean 'not open'; it means 'the complement of an open set'. So a set that is both open and closed is one which is open and for which its complement is open.
-I think the terminology is good because we want a closed set to be one for which every convergent sequence in that set converges to a limit that is also in that set.
We want it to be closed under convergence and this can only be the case if its complement is open.
-So, closed is a valuable word in this context. If anything, I would replace open. I'm just beginning with this stuff though so there might be something that I'm missing.<|endoftext|>
-TITLE: Lebesgue measurability of a set
-QUESTION [6 upvotes]: Prompted by this question I was looking for $A \subset (0,1)$ such that for any interval $(a,b)\subset (0,1), A \cap (a,b)$ and $A^c \cap (a,b)$ are both uncountable. One such $A$ is the set of all numbers that have a finite number of $1$'s in their base $3$ expansion. As no choice was used in the construction, it should be Lebesgue measurable, but I can't prove it. How is it proved?
-
-REPLY [5 votes]: For each $n$, let $A_n$ be the subset of elements of $A$ that have at least a $1$ in the first $n$ digits of their ternary expansion, but no $1$s after the $n$th digit. Your set $A$ is equal to the union of the $A_n$s and the Cantor set. Each $A_n$ is a finite union of scaled translates of the Cantor set (take $3^{-n}C$, where $C$ is the Cantor set, to get all numbers that have ternary expansion with no $1$s and that have $0$ in the first $n$ positions; then you can "translate" by adding a suitable number that has a tail of $0$s and appropriate $1$s in the appropriate coordinates).
-So each $A_n$ is a finite union of Lebesgue-measurable sets (scaled Cantor sets are Lebesgue-measurable, and translates of a Lebesgue-measurable set are Lebesgue-measurable), hence Lebesgue measurable. $A$ is a countable union of Lebesgue measurable sets, hence Lebesgue measurable.
-Added. As Andres Caicedo points out in the comments below, the argument above shows that the set $A$ is in fact not merely Lebesgue measurable, but Borel.
-
-REPLY [3 votes]: The number $x_n$ in the $n^\text{th}$ place after the radix point in the ternary expansion of $x$ is a measurable function of $x$. This is true because the floor function is Borel measurable, and $x_n=\left\lfloor 3\cdot\left(3^{n-1}x-\lfloor3^{n-1}x\rfloor\right)\right\rfloor$. The set $\{x:x_n\neq 1\}$ is measurable for each $n$, and so therefore is the set $$\bigcup_{n=1}^\infty\bigcap_{k=n}^\infty \{x:x_k\neq 1\}.$$
-
-Certain null sets of the second category give further examples of this. The complement of such a set can be constructed by taking a countable union of closed sets with empty interior whose complements have progressively smaller measure (e.g., using fat Cantor sets).<|endoftext|>
-TITLE: Find the sum of all quadratic residues modulo $p$ where $p \equiv 1 \pmod{4}$
-QUESTION [14 upvotes]: I read one theorem in the book which said there will be exactly $\dfrac{p-1}{2}$ quadratic residues of $p$. So for each $i$,
-$$x^2 \equiv a_i \pmod{p} \text{ where } 1 \leq i \leq \frac{p - 1}{2}$$
-But if we sum all $a_i$, what does this sum equal?
-$$\sum_{i=1}^{\frac{p-1}{2}}a_i = ?$$
-I haven't used the condition that $p \equiv 1 \pmod{4}$, so I think I missed one important point here. Any idea?
-Update
-The original problem
-Let $p$ be a prime such that $p \equiv 1 \pmod{4}$.
-Prove that the sum of those numbers $1 \leq r \leq p - 1$ that are quadratic residues modulo $p$ is $\dfrac{p(p-1)}{4}$.
-Thanks,
-
-REPLY [25 votes]: Because $p\equiv 1\bmod 4$, we have that $-1$ is a quadratic residue modulo $p$. The product of two quadratic residues is a quadratic residue. This means that for any quadratic residue $a$, we have that $-a$ is also a quadratic residue. Thus the sum cancels to $0\bmod p$.
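-
-(Added aside, not part of the original answer: a brute-force check of this cancellation, and of the exact value derived next, for the first few primes $p \equiv 1 \pmod{4}$, in plain Python:)
-
-    # Sum the quadratic residues of p for small primes p = 1 (mod 4).
-    def quadratic_residues(p):
-        return {i * i % p for i in range(1, (p - 1) // 2 + 1)}
-
-    for p in (5, 13, 17, 29, 37, 41):
-        s = sum(quadratic_residues(p))
-        assert s % p == 0             # the pairing a <-> p - a cancels mod p
-        assert s == p * (p - 1) // 4  # the exact value derived below
-        print(p, s)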
-
-Because there are $\frac{p-1}{2}$ quadratic residues mod $p$, and they all occur in pairs $a$ and $p-a$, there are $\frac{p-1}{4}$ pairs each of whose sum is $p$, hence the sum of them all is $\frac{p(p-1)}{4}$.<|endoftext|>
-TITLE: When is the group of quadratic residues cyclic?
-QUESTION [15 upvotes]: If $a$ and $b$ are two quadratic residues of the prime $p$, then it is easily checked that $ab$ is also a quadratic residue modulo $p$; if $c$ is a quadratic residue modulo $p$, and $cd \equiv 1 \pmod{p}$, then since $1$ is a quadratic residue of $p$, $d$ is a quadratic residue of $p$; so the set of all quadratic residues forms a group, denoted by $\mathfrak R$. And my question is:
-
-When is $\mathfrak R$ cyclic?
-
-I am very sorry that I am not able to provide more motivation for studying this question; I want to know out of pure curiosity. It will be appreciated if anyone provides some insight or a hint; thanks very much.
-
-REPLY [11 votes]: The original question has been answered (they are always a cyclic group). Perhaps slightly more interesting is:
-
-Fix $m\gt 0$. Then the subset of the units modulo $m$ that are squares forms a subgroup of the invertible elements modulo $m$. When is this subgroup cyclic?
-
-If there is a primitive root modulo $m$ (that is, if the group of units modulo $m$ is cyclic), then the property holds. This occurs when $m$ is a power of an odd prime, twice a power of an odd prime, $m=2$, or $m=4$.
-It also holds if $m=8$, since then the only quadratic residue is $1$, so the group of quadratic residues is cyclic. In fact, if $m=2^n$ is a power of $2$, the result holds: the group of units modulo $2^n$ is isomorphic to $C_{2^{n-2}}\times C_2$ (where $C_k$ is the cyclic group of order $k$), so the group of squares is isomorphic to $C_{2^{n-3}}$, which is cyclic.
-Other nontrivial examples include $m=3p^a$ or $m=6p^a$ where $p$ is an odd prime and $a\gt 0$, $m=35 = 5\times 7$ (or more generally, $m=5^a 7^b$); and many others.
-To get the complete answer, factor $m$ into primes,
-$$ m = p_1^{a_1}\cdots p_r^{a_r}$$
-where $p_1\lt p_2\lt\cdots\lt p_r$ are primes, and $a_i\gt 0$ for all $i$. By the Chinese Remainder Theorem, the group of units modulo $m$ is
-$$\left(\mathbb{Z}/m\mathbb{Z}\right)^* = \prod_{i=1}^r \left(\mathbb{Z}/p_i^{a_i}\mathbb{Z}\right)^*.$$
-The group of units modulo $p_i^{a_i}$ is
-
-Cyclic of order $(p_i-1)p_i^{a_i-1}$ if $p_i$ is odd;
-Trivial if $p_i=2$ and $a_i = 1$;
-Isomorphic to $\displaystyle C_{2^{a_i-2}}\times C_2$ if $p_i=2$ and $a_i\gt 1$.
-
-The group of squares of invertible elements modulo $m$ is then isomorphic to a product of groups of the form
-
-Cyclic of order $\frac{p_i-1}{2}(p_i^{a_i-1})$ if $p_i$ is odd;
-Trivial if $p_i=2$ and $a_i\leq 2$;
-Cyclic of order $2^{a_i-3}$ if $p_i=2$ and $a_i\geq 3$.
-
-The product is cyclic if and only if the orders of the cyclic factors are pairwise relatively prime.
-This gives:
-Theorem. Let $m$ be a positive integer, and let $p_1,p_2,\ldots,p_r$ be the distinct prime divisors of $m$, $p_1\lt p_2\lt\cdots\lt p_r$. Let $\varphi$ be Euler's totient function. The subgroup of squares of the invertible elements modulo $m$ is cyclic if and only if:
-
-For $m$ odd,
-
-$\displaystyle\gcd\left(\frac{\varphi(p_i^{a_i})}{2},\frac{\varphi(p_j^{a_j})}{2}\right) = 1$ for all $1\leq i\lt j\leq r$.
-
-For $m$ even,
-
-$\displaystyle\gcd\left(\frac{\varphi(p_i^{a_i})}{2},\frac{\varphi(p_j^{a_j})}{2}\right) = 1$ for $1\lt i\lt j\leq r$; and
-if $2^a$ is the largest power of $2$ that divides $m$ and $a\gt 3$, then $p_i\equiv 3\pmod{4}$ for all $i\gt 1$.
-
-
-It is straightforward now to show (e.g., using Dirichlet's theorem on primes in arithmetic progressions) that there are $m$ with arbitrarily many distinct prime divisors for which the subgroup of squares of the units modulo $m$ is cyclic.<|endoftext|>
-TITLE: Question regarding division on both sides of congruence equation
-QUESTION [6 upvotes]: Inspired by this thread Is there a formula for solving the congruence equation $ax^2 + bx + c=0$?
-I want to get rid of the confusion when dividing both sides of a congruence.
-I'm always confused about modular arithmetic, so today I really want to make it clear.
-If $a \equiv b \pmod{m}$ then obviously, $ac \equiv bc \pmod{m}$.
-But it doesn't always work for division. What is the reason behind this? Whenever I see division, I totally freak out. Multiplication makes sense to me, but not division. Could anyone walk me through this obstacle?
-Another question is,
-According to Arturo Magidin:
-
-"dividing" by $2a$ means multiplying by the modular inverse of $2a$ modulo $p$
-
-Does it always work for any modulus $m$, or does $m$ have to be prime?
-Thank you
-
-REPLY [8 votes]: This comes down to the following general result:
-Theorem. Let $R$ be a ring. Then
-$$ax = ay\text{ implies }x=y \Longleftrightarrow az=0\text{ implies }z=0.$$
-Proof. $\Rightarrow)$ Assume that whenever $ax=ay$, we can conclude $x=y$. Suppose $az=0$. Then $az=a0$, so this implies $z=0$.
-$\Leftarrow)$ Suppose that whenever $az=0$, we have $z=0$. If $ax=ay$, then $a(x-y) = ax-ay = 0$, so $x-y = 0$; therefore, $x=y$. QED
-That is, cancellation works whenever what you are cancelling is not a zero divisor (and more specifically, cancellation on the left/right works when what you are cancelling is not a left/right zero divisor).
-In particular, when you are working in modular arithmetic
-$$ab\equiv ac\pmod{m}\text{ implies }b\equiv c\pmod{m}\Longleftrightarrow az\equiv0\pmod{m}\text{ implies } z\equiv 0\pmod{m}.$$
-Proposition. Let $a$ and $m$ be positive integers. Then $\gcd(a,m)=1$ if and only if $az\equiv 0\pmod{m}$ implies $z\equiv 0\pmod{m}$.
-Proof. If $\gcd(a,m)=1$ and $az\equiv 0\pmod{m}$, then $m|az$ and $\gcd(m,a)=1$, which implies $m|z$. Therefore $z\equiv 0\pmod{m}$.
-Conversely, suppose $az\equiv 0\pmod{m}$ implies $z\equiv 0\pmod{m}$. Let $d=\gcd(a,m)$, and write $a=dk$, $m=d\ell$. Then $a\ell = dk\ell = mk\equiv 0\pmod{m}$, so $\ell\equiv0\pmod{m}$. Therefore $m|\ell$; since $m=d\ell$ also gives $\ell|m$, we get $\ell=m$, and hence $d=1$. QED.
-Corollary. Let $a$, $b$, $c$, and $m$ be positive integers. Then $ab\equiv ac\pmod{m}$ implies $b\equiv c\pmod{m}$ if and only if $\gcd(a,m) = 1$.
-So, first, you should not think of this as "division", but as "cancellation" (for example, one can cancel in the integers, even though there is no "division"). And you should think of "division" in general not as an entirely separate operation, but really as "multiplying by the multiplicative inverse".
For example, in the rationals, you don't "really" divide by $3$, you multiply by $\frac{1}{3}$, which is the (unique) rational which, when multiplied by $3$, gives $1$; that is, the multiplicative inverse of $3$.<|endoftext|> -TITLE: Proving the identity $\sum_{n=-\infty}^\infty e^{-\pi n^2x}=x^{-1/2}\sum_{n=-\infty}^\infty e^{-\pi n^2/x}.$ -QUESTION [19 upvotes]: Can you help prove the functional equation: $$\sum_{n=-\infty}^\infty e^{-\pi n^2x}=x^{-1/2}\sum_{n=-\infty}^\infty e^{-\pi n^2/x}.$$ -Specifically, I am looking for a solution using complex analysis, but I am interested in any solutions. -Thanks! - -REPLY [4 votes]: Answers have already been accepted but you wanted an approach based on complex analysis. You can derive the transformation properties of the theta functions directly from the residue theorem. -Let $C$ be the rectangle whose corners are $\pm(N+\frac12)\pm i$. We have from the residue theorem -$$\oint_C\frac{e^{-\pi tz^2}}{e^{2\pi iz}-1}dz=\sum_{n=-N}^{N}e^{-n^2\pi t}$$ -I put $t$ as the variable in the series so as not to confuse it with $z=x+iy.$ The vertical parts of the contour integral decay as $N\to\infty$ so we will have -$$\sum_{n=-\infty}^{\infty}e^{-n^2\pi t}=\int^{\infty-i}_{-\infty-i}\frac{e^{-\pi tz^2}}{e^{2\pi iz}-1}dz+\int^{\infty+i}_{-\infty+i}\frac{e^{-\pi tz^2}}{1-e^{2\pi iz}}dz.$$ -After some algebra, expanding the denominators as geometric series, I claim that the sum of these two integrals can be written as -$$e^{\pi t}\int^{\infty}_{-\infty}e^{-\pi tx^2}\sum_{n=-\infty}^{\infty}e^{2n\pi}\cos2\pi(n+t)xdx.$$ -This class of integral can also be evaluated by the residue theorem. Since this is not the point of this answer, I will skip this working and just state that it can be shown without much labour that for all $t>0,$ -$$\int^\infty_{-\infty}e^{-\pi tx^2}\cos2\pi(n+t)xdx=\frac{e^{-\pi t-2n\pi-n^2\pi/t}}{\sqrt{t}}.$$ -After cancellation, it follows immediately that -$$\sum_{n=-\infty}^{\infty}e^{-n^2\pi t}=\frac{1}{\sqrt{t}}\sum_{n=-\infty}^{\infty}e^{-n^2\pi /t}$$ -for all $t>0$, and this completes the proof, using only basic complex analysis, as required. -Even though this question was asked 9 years ago I hope this is illustrating to someone.<|endoftext|> -TITLE: Algorithm(s) for computing an elementary symmetric polynomial -QUESTION [19 upvotes]: I've run into an application where I need to compute a bunch of elementary symmetric polynomials. It is trivial to compute a sum or product of quantities, of course, so my concern is with computing the "other" symmetric polynomials. -For instance (I use here the notation $\sigma_n^k$ for the $k$-th symmetric polynomial in $n$ variables), the Vieta formulae allow me to compute a bunch of symmetric polynomials all at once like so: -$$\begin{align*} -&(x+t)(x+u)(x+v)(x+w)\\ -&\qquad =x^4+\sigma_4^1(t,u,v,w)x^3+\sigma_4^2(t,u,v,w)x^2+\sigma_4^3(t,u,v,w)x+\sigma_4^4(t,u,v,w) -\end{align*}$$ -and, as I have said, $\sigma_4^1$ and $\sigma_4^4$ are trivial to compute on their own without having to resort to Vieta. -But what if I want to compute $\sigma_4^3$ only without having to compute all the other symmetric polynomials? More generally, my application involves a large-ish number of arguments, and I want to be able to compute "isolated" symmetric polynomials without having to compute all of them. -Thus, I'm looking for an algorithm for computing $\sigma_n^k$ given only $k$ and the arguments themselves, without computing the other symmetric polynomials. Are there any, or can I not do better than Vieta? 
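-
-(Added for concreteness, not part of the original post: the Vieta baseline described above, expanding $\prod_i (x+t_i)$ one factor at a time, in plain Python. Truncating the coefficient list at degree $k$ keeps it at $O(nk)$ ring operations; the function name is my own.)
-
-    def elementary_symmetric(values, k):
-        # e[j] holds sigma^j of the values processed so far.
-        e = [1] + [0] * k
-        for v in values:
-            # update higher degrees first so each value enters a term once
-            for j in range(k, 0, -1):
-                e[j] += v * e[j - 1]
-        return e[k]
-
-    print(elementary_symmetric([2, 3, 5, 7], 3))  # 30 + 42 + 70 + 105 = 247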
- -REPLY [6 votes]: You can compute $\sigma^k_n(x_1,\dots,x_n)$ in $O(n \log^2 k)$ time, using FFT-based polynomial multiplication. The details are explained here and are apparently due to Ben-Or: -https://cstheory.stackexchange.com/a/33506/5038 -This is asymptotically faster than any of the other methods proposed in any of the other answers. -Moreover, you can compute all of the values $\sigma^1_n(x_1,\dots,x_n), \sigma^2_n(x_1,\dots,x_n), \dots, \sigma^n_n(x_1,\dots,x_n)$ in just $O(n \log^2 k)$ time, using the same methods.<|endoftext|> -TITLE: Prove that $\lfloor \sqrt{p} \rfloor + \lfloor \sqrt{2p} \rfloor +...+ \lfloor \sqrt{\frac{p-1}{4}p} \rfloor = \frac{p^2 - 1}{12}$ -QUESTION [25 upvotes]: Problem -Prove that $\lfloor \sqrt{p} \rfloor + \lfloor \sqrt{2p} \rfloor +...+ \lfloor \sqrt{\frac{p-1}{4}p} \rfloor = \dfrac{p^2 - 1}{12}$ where $p$ prime such that $p \equiv 1 \pmod{4}$. -I really have no idea how to start :(! The square root part really messed me up. Can anyone give me a hint? -Thank you - -REPLY [20 votes]: The sum $S(p)$ counts the lattice points with positive coordinates under $y=\sqrt{px}$ from $x=1$ to $x=\frac{p-1}{4}$. Instead of counting the points below the parabola, we can count the lattice points on the parabola and above the parabola, and subtract these from the total number of lattice points in a box. Stop here if you only want a hint. -Since $p$ is prime, there are no lattice points on that parabola (with that range of $x$ values). -The total number of lattice points in the box $1 \le x \le \frac{p-1}4, 1\le y \le \frac {p-1}2$ is $\frac{(p-1)^2}8$. -The lattice points above the parabola are to the left of the parabola. These are counted by -$T(p)= \lfloor 1^2/p \rfloor + \lfloor 2^2/p \rfloor + ... + \lfloor (\frac{p-1}2)^2/p \rfloor$. -$T(p)+S(p) = \frac{(p-1)^2}8$, so $S = \frac {p^2-1}{12}$ is equivalent to $T(p) = \frac{(p-1)(p-5)}{24}$. -Consider $T(p)$ without the floor function. This sum is elementary: -$$\sum_{i=1}^{(p-1)/2} \frac{i^2}p = \frac 1p \sum_{i=1}^{(p-1)/2} i^2 = \frac 1p \frac 16 (\frac{p-1}2)(\frac {p-1}2 + 1)(2\frac{p-1}2 +1) = (p^2-1)/24.$$ -What is the difference between these? Abusing the mod notation, $\frac{i^2}p - \lfloor \frac{i^2}p \rfloor = 1/p \times (i^2 \mod p)$. So, -$$(p^2-1)/24 - T(p) = \sum_{i=1}^{(p-1)/2} \frac{i^2}p - \lfloor \frac{i^2}p \rfloor = \sum_{i=1}^{(p-1)/2} \frac 1p \times (i^2 \mod p) = \frac 1p \sum_{i=1}^{(p-1)/2} (i^2 \mod p).$$ -Since $i^2 = (-i)^2$, this last sum is over the nonzero quadratic residues. Since $p$ is $1 \mod 4$, $-1$ is a quadratic residue, so if $a$ is a nonzero quadratic residue, then so is $p-a$. Thus, the nonzero quadratic residues have average value $p/2$ and the sum is $\frac{(p-1)}2 \frac p2$. -$$(p^2-1)/24 - T(p) = \frac 1p \frac{(p-1)}2 \frac p2 = \frac{p-1}4$$ -$$T(p) = \frac{(p-1)(p-5)}{24}.$$ -That was what we needed to show.<|endoftext|> -TITLE: Proving that a graph of a certain size is Hamiltonian -QUESTION [6 upvotes]: For any graph with order $n \geq 3$, given that its size is -$$m \geq \frac{\left(n-1\right)(n-2)}{2} + 2,$$ -show that the graph is Hamiltonian. -I know that if I can show that the degree sum of any two non-adjacent vertices is $\geq n$, then I'd be done. -Likewise, if I could show that the above somehow implied that the degree of every vertex in the graph is $\geq n/2$, I'd also be done. However, I cannot see how to get to either one of those given the information I have. 
I have been trying the following: assuming that there exist two non-adjacent vertices $u$ and $v$ whose degree sum is $\leq (n-1)$, then
-$$2m = \sum d(\text{other vertices}) + d(u) + d(v) \leq \sum d(\text{other vertices}) + (n-1).$$
-If I could show that this implied that
-$$2m < \left(n-1\right)(n-2) + 4,$$
-I would have a contradiction, thereby proving that the graph is Hamiltonian. However, I have not been able to show this, so I am thinking it is the wrong approach.
-
-REPLY [4 votes]: Proof by induction on $n$.
-Let $G$ be a graph of order $n$ and size $m$ (given). Denote the maximum degree of a vertex of $G$ by $\Delta$ and the average degree by $\delta$. Then
-$$\delta = \frac{2m}{n} \geq \frac{(n-1)(n-2)+4}{n} \geq n-3+\frac{6}{n}$$
-$\Delta \geq \delta$ and $\Delta\in \mathbb{Z}$, so $\Delta \geq n-2$. Let $v$ be a maximum degree vertex of $G$.
-$d(v)$ is $n-1$ or $n-2$.
-Case 1: $d(v)=n-2$
-$$e(G-v)= m-d(v)\geq \frac{(n-2)(n-3)}{2}+2$$
-So, by the induction hypothesis, $G-v$ contains a Hamiltonian cycle. Two adjacent vertices in this cycle are neighbors of $v$, so add $v$ to the cycle and we are done.
-Case 2: $d(v)=n-1$
-Now the above doesn't quite hold, since $G-v$ contains 1 fewer edge than required. No problem! Let $H$ be $G-v$ with an arbitrary edge added (call this $jk$). By the induction hypothesis, $H$ is Hamiltonian. If this cycle doesn't contain the added edge we are done, as in Case 1. Otherwise, deleting $jk$ gives a Hamiltonian path in $G-v$ from $j$ to $k$. $j\sim v$ and $k\sim v$, since $d(v)=n-1$, so we have a Hamiltonian cycle, as required.
-Base case: easy.
-Edit: A general tip
-I find it useful in problems like this to see how the numbers arise. Consider briefly why any fewer edges are insufficient.<|endoftext|>
-TITLE: About the integrability of a function
-QUESTION [5 upvotes]: Let $f$ be an integrable function on $[0,1]$.
-Define $g(x)=\int_x^b\frac{f(t)}{t}dt$ for $0<x\le b$.<|endoftext|>
-TITLE: Is there an analytic approximation to the minimum function?
-QUESTION [17 upvotes]: I am looking for an analytic function that approximates the minimum function. i.e.,
-$|f(x_1,x_2) - \min(x_1,x_2)| < \zeta$ for some $\zeta$ that may be related to $|x_1 - x_2|$. Or maybe a series $f_1,f_2,\ldots$, where $\lim_{n \to \infty} \zeta = 0$.
-
-REPLY [11 votes]: The only function that I know has this property is essentially
-$$\max(x, y) \sim \frac{x e^{kx} + y e^{ky}}{e^{kx} + e^{ky}}$$
-for large $k$ (take $k\to-\infty$ to approximate the minimum instead).
-(@wnoise was close to the answer. In this case there is no problem if $x=y$.)
-This works for positive or negative numbers and tends to the maximum for $k\to\infty$. Other formulas based on powers are only valid for positive numbers (as $\sqrt[n]{x^n + y^n}$).
-The integral counterpart is
-$$ \frac{\int{f(x) e^{kf(x)} dx}}{\int{e^{kf(x)} dx}} $$
-There is a small caveat: if you want to implement this numerically you need to have an estimate of the maximum in the first place (and integrate over shifted values), otherwise the integral (or even the sum) is going to under/overflow pretty easily.
-This is related to Laplace's method. Note that I asked a related question:
-Real approximation to the maximum using Laplace's method integral<|endoftext|>
-TITLE: Laplacian of a Function depending on r in Polar Coordinates
-QUESTION [6 upvotes]: From a bank of exams:
-
-Let $u(x,y) = f(r)$ be a smooth
- function in the plane that depends
- only on $r = \sqrt{x^2 + y^2}$.
- Compute $\Delta u = u_{xx} + u_{yy}$
- in terms of $f$ and its derivatives.
-
-Wikipedia states that the Laplace operator in polar coordinates is $$\Delta f = \frac{1}{r}\frac{\partial}{\partial r} \left( r \frac{\partial f}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial \theta^2},$$ which I suppose I could memorize directly, but I thought there might be an easier way.
-I tried to prove this directly, by thinking that $$ u_{xx} = \frac{d^2f}{dr^2} \frac{\partial r}{\partial x} + \frac{df}{dr} \frac{\partial ^2r}{\partial x^2}$$ and
-$$ u_{yy} = \frac{d^2f}{dr^2} \frac{\partial r}{\partial y} + \frac{df}{dr} \frac{\partial ^2r}{\partial y^2}.$$
-But then I get stuck at $$ u_{xx} + u_{yy} = \frac{d^2f}{dr^2} \frac{x+y}{\sqrt{x^2+y^2}} + \frac{df}{dr}\frac{1}{\sqrt{x^2+y^2}}
-= \frac{d^2f}{dr^2} \frac{r(\cos \theta + \sin \theta)}{r} + \frac{df}{dr}\frac{1}{r}.$$ Any idea on where I'm going wrong? It looks like I need $\displaystyle{\frac{r(\cos \theta + \sin \theta)}{r} = 1}$.
-
-REPLY [5 votes]: We have
-$$
-u_x=f_rr_x, u_{xx}=f_rr_{xx}+r_x(f_{rr}r_x)\text{ similarly for } y,
-$$
-$$
-u_{xx}+u_{yy}=f_rr_{xx}+r_x(f_{rr}r_x)+f_rr_{yy}+r_y(f_{rr}r_y)
-=f_{rr}(r_x^2+r_y^2)+f_r(r_{xx}+r_{yy}).
-$$
-With
-$$
-r_x=\frac{x}{\sqrt{x^2+y^2}}, r_{xx}=\frac{y^2}{(x^2+y^2)^{3/2}} \text{ similarly for } y,
-$$
-we have
-$$
-u_{xx}+u_{yy}=f_{rr}+\frac{1}{r}f_r.
-$$
-Of course, if $f$ depends on $\theta$ it gets more complicated.<|endoftext|>
-TITLE: Visualising regular CW complex
-QUESTION [11 upvotes]: I am somewhat struggling to see the difference between a regular CW complex and a non-regular CW complex.
-The difference is that all the attaching maps are homeomorphisms - i.e. there are no identifications made on the boundary. So I guess if I produce a 1-sphere (circle) by a single zero cell and a single one cell, this is not regular (as both endpoints of the 1-cell get mapped to the zero cell)? However, if we use two 1-cells and two 0-cells we can get a regular CW structure?
-How about:
-[figure omitted]
-I guess this is not regular (the 2-cell intersecting the 1-cell at the top is the problem).
-The 'thoughtful' question coming from this - we have seen the sphere admits both a regular and non-regular CW complex. To me, the regular CW complex seems easier to work with, as the "degree term" in the cellular boundary formula is either $-1,0,1$.
-What type of spaces admit a CW structure, but not a regular one? I am thinking of a pathological example, such as attaching a 2-cell to the 1-sphere with attaching map like $x \sin(1/x)$ (what would that look like?!)
-
-REPLY [11 votes]: By and large, lack of regularity is for convenience. The "standard" CW-decomposition of a 3-dimensional lens space $L_{p,q}$ has one 0-cell, one 1-cell, one 2-cell and one 3-cell. But it's impossible to make such a simple CW-decomposition into a regular one, since $H_1 L_{p,q} \simeq \mathbb Z_p$. A regular CW-decomposition with one cell in every dimension has $H_1$ free abelian.
-Of course, the lens space has a regular CW-decomposition, but it's more work and more fuss to find it. This is much like how every manifold has a triangulation but you maybe don't want to work with a triangulation. The cellular boundary "degree term" is simpler, but there are far more cells, so the benefit of having a simple degree term is killed by having a complicated chain complex.
-Presumably there are spaces that have non-regular CW-decompositions and lack regular CW-decompositions. But this is very much a fussy point-set topological curiosity -- the real reason one cares about regular vs. non-regular is the one given above.
I think an example of a space where there is a CW-decomposition but no regular decomposition would be the interval $[0,1]$ attach a 2-cell, where the attaching map $f : S^1 \to [0,1]$ is given by: -write $z \in S^1$ as $z=e^{i\theta}$ with $\theta \in [0,2\pi]$. -then $f(z) = (\theta/2\pi) |\sin((2\pi)^2/\theta)|$ -A little argument tells you if there was a regular CW-structure then there would have to be infinitely-many cells. But then you can argue this space does not have the weak topology of such a complex. Anyhow, something like that should work.<|endoftext|> -TITLE: Significance of Matrix-Vector multiplication -QUESTION [8 upvotes]: Can someone give me an example illustrating physical significance of the matrix-vector multiplication? - -Does multiplying a vector by matrix transforms it in some way? -Do left & right multiplication signify two different things? -Is matrix a scalar thing? (EDIT) - -Thank you. - -REPLY [6 votes]: I believe that parts 2 and 3 of your question have been answered well. I'd like to take a stab at part 1, though the other answers to this part are probably better. -There's an interesting way of thinking of the application of a matrix to a vector using the Singular Value Decomposition (SVD) of a matrix. Let A be an $m \times n$ rectangular matrix. Then, the SVD of A is given by $A = U \Sigma V^T$, where $U$ is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ diagonal matrix of so-called singular values and $V$ is an $n \times n$ unitary matrix. For more on the SVD, check out the Wikipedia article: http://en.wikipedia.org/wiki/Singular_value_decomposition -That same article contains proof that every matrix has an SVD. Given that fact, we can now think of matrix vector multiplication in terms of the SVD. Let $\bf x$ be a vector of length $n$. We can write the matrix-vector multiplication as ${\bf b} = A {\bf x}$. But, $A{\bf x} = U\Sigma V^T {\bf x}$. -Since $V$ is unitary, $V {\bf x}$ does not change the magnitude of ${\bf x}$. Unitary matrices applied to a vector only change the direction of the vector (rotate it by some angle). The product $V^T {\bf x}$ rotates ${\bf x}$. -$\Sigma$ is a diagonal matrix. Its entries directly multiply the corresponding entries of the vector ${\bf x}$, thus scaling the vector (increasing its length) along the axis around which $V$ rotated ${\bf x}$. However, remember that the rotated vector $V^T{\bf x}$ is a vector of length $n$ while $\Sigma$ has dimensions $m \times n$. This means that $\Sigma$ is also embedding the vector $V^T{\bf x}$ in an $m$-dimensional space, i.e., changing the dimensions of the vector. If $m=3$ and $n=2$, for example, $\Sigma$ scales the 2D vector $V^T{\bf x}$ in 2 dimensions and then "places" it in a 3D space. -Finally, we have the product of the unitary matrix $U$ with the $m$-dimensional vector $\Sigma V^T{\bf x}$. $U$ rotates that vector in the $m$-dimensional space. -Every matrix thus potentially rotates, scales and embeds and then again rotates a vector when applied to a vector. When $m=n$, of course, a matrix-vector product doesn't involve any embedding- simply a rotation, scaling and another rotation. Like a little assembly line.<|endoftext|> -TITLE: No torsion in $H^1_c(X,\mathbf{Z})$? -QUESTION [8 upvotes]: If $X$ is a very nice topological space, for example a finite simplicial complex, then is it true that the cohomology with compact supports $H^1_c(X,\mathbf{Z})$ is torsion-free? 
I have seen an assertion in a paper that seems to be tantamount to this statement (unless I've made a slip and misread between the lines) but my topology is weak :-( and my hopelessly paging through Hatcher has not yet come up trumps... - -REPLY [9 votes]: I just noticed this! -Why not just argue sheaf-theoretically? We have the exact sequence -of sheaves $0 \to \mathbb Z \to \mathbb Z \to \mathbb Z/p \to 0,$ -which induces a corresponding short exact sequence of $H^0_c$s, -and hence we get a long exact sequence beginning with $H^1_c$: -$$0 \to H^1_c(X,\mathbb Z) \to H^1_c(X,\mathbb Z) \to H^1(X,\mathbb Z/p) \to -\ldots.$$ -The fact that the first arrow is injective is the torsion-freeness -statement that you want. -Summary: thinking sheaf-theoretically, the same argument that works for -$H^1$ works for $H^1_c$ as well.<|endoftext|> -TITLE: Diophantine applications of Spec? -QUESTION [38 upvotes]: Let $f(\bar x)$ be a multivariable polynomial with integer coefficients. -The zeros of that polynomial are in bijection with the homomorphisms $\mathbb Z[\bar x] \rightarrow \mathbb Z$ that factor through $\mathbb{Z}[\bar x]/(f)$. -As I understand it this viewpoint leads to the contrafunctor $\text{Spec}$ and schemes and such. -Can you show any concrete examples of Diophantine equations that we can solve using this viewpoint? - -REPLY [69 votes]: As far as I know, the first Diophantine problem (over a number field) that was solved using Spec and other tools of algebraic geometry was the following result (proved by Mazur and Tate in a paper from Inventiones in the early 1970s): - -If $E$ is an elliptic curve over $\mathbb Q$, then $E$ has no rational point of order 13. - -The proof as it's written uses quite a bit more than you can learn just from reading Hartshorne; I don't know if there is any way to significantly simplify it. [Added: Rereading the first page of the Mazur--Tate paper, I see that they -refer to another proof of this fact by Blass, which I've never read, but which -seems likely to be of a more classical nature.] -There is another result, which goes back to Billing and Mahler, of the same nature: - -If $E$ is an elliptic curve over $\mathbb Q$, then $E$ has no rational point -of order $11$. - -This was proved by elementary (if somewhat complicated) arguments. An analogous -result with $11$ replaced by $17$ was -proved by Ogg again -using elementary arguments. -These results were all generalized by Mazur (in the mid 1970s) as follows: - -If $E$ is an elliptic curve over $\mathbb Q$, then $E$ has no rational point of any order other than $2,\ldots,10$, or $12$. - -Mazur's paper doing this (the famous Eisenstein ideal paper) was the one which -really established the effectiveness of Grothendieck's algebro-geometric tools for solving classical number theory problems. For example, Wiles's work on Fermat's Last Theorem fits squarely in the tradition established by Mazur's paper. -As far as I know, no-one has found an elementary proof of Mazur's theorem; the -elementary techniques of Billing--Mahler and Ogg don't seem to be extendable to the general case. So this is an interesting Diophantine problem which seems to require modern algebraic geometry to solve. - -Often when a Diophantine problem is solved by algebro-geometric methods, it is not as simple as the way you suggest in your question. -For example, in the results described above, one does not work with one particular elliptic curve at a time. 
Rather, for each $N \geq 1$, there is a Diophantine equation, whose solutions over $\mathbb Q$ correspond to elliptic -curves over $\mathbb Q$ with a rational solution of order $N$. -This is the so-called modular curve $Y_1(N)$; although it was in some sense known to Jacobi, Kronecker, and the other 19th century developers of the theory of elliptic and automorphic functions, its precise interpretation as a Diophantine equation over $\mathbb Q$ is hard to make precise without modern -techniques of algebraic geometry. (As its name suggests, it is a certain moduli space.) -An even more important contribution of modern theory is that this Diophantine equation even has a canonical model over $\mathbb Z$, which continues to have -a moduli-space interpretation. (Concretely, this means that one starts with -some Diophantine equation --- or better, system of Diophantine equations --- over $\mathbb Q$, and then clears the denominators in a canonical fashion, -to get a particular system of Diophantine equations with integral coefficients -whose solutions have a conceptual interpretation in terms of certain data related -to elliptic curves.) -The curve $Y_1(N)$ is affine, not projective, and it is more natural to study projective curves. One can naturally complete it to a projective curve, -called $X_1(N)$. It turns out that $X_1(N)$ can have rational solutions --- some of the extra points we added in going from $Y_1(N)$ to $X_1(N)$ -might be rational --- and so we can rephrase Mazur's theorem as saying that -the only rational points of $X_1(N)$ (for any $N \neq 2,\ldots,10,12$) lie in -the complement of $Y_1(N)$. -In fact, there are related curves $X_0(N)$, and what he proves is that $X_0(N)$ has only finitely many rational points for each $N$. He is then able to deduce the result about $Y_1(N)$ and $X_1(N)$ by further arguments. - -The reason for giving the preceding somewhat technical details is that I want -to say something about how Mazur's proof works in the particular case $N = 11$ -(recovering the theorem of Billing and Mahler). -The curve $X_0(11)$ is an elliptic curve. One can write down its -explicit equation easily enough; it is (the projectivization of) -$$y^2 +y = x^3 - x^2 - 10 x - 20.$$ -(There is one point at infinity, which serves as the origin of the group law.) -Mazur wants to show it has only finitely many solutions. It's not clear how the explicit equation will help. (In the sense that if you begin with this equation, it's not clear how to directly show that it has only finitely many solutions over $\mathbb Q$.) -Instead, he first notes that it has a subgroup of rational points of order $5$: -$$\{\text{ the point at infinity}, (5,5), (16,-61), (16,60), (5,-6) \}.$$ -One knows from the general theory of elliptic curves that the full $5$-torsion subgroup of $X_0(11)$ is of order $25$, a product of two cyclic groups of order $5$. -We have one of them above, while the other factor is not given by -rational points. -In fact, the other $5$-torsion points have coordinates in the field $\mathbb Z[\zeta_5]$. (I don't know their explicit coordinates, unfortunately.) 
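-
-(An aside added here, not part of the original answer: the four affine points in the rational $5$-torsion subgroup listed above really do lie on $y^2 + y = x^3 - x^2 - 10x - 20$, which is easy to confirm by machine.)
-
-    # Verify the claimed rational 5-torsion points on X_0(11).
-    def on_curve(x, y):
-        return y * y + y == x**3 - x**2 - 10 * x - 20
-
-    points = [(5, 5), (16, -61), (16, 60), (5, -6)]
-    print(all(on_curve(x, y) for x, y in points))  # True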
-Mazur doesn't need to know their exact values; instead, what is important for -him is that he is able to show (by conceptual, not computational, arguments) -that the full $5$-torsion subgroup of $X_0(11)$, now thought of not just as a Diophantine over $\mathbb Q$ but as a scheme over Spec $\mathbb Z$, -is a product of two group schemes of order $5$: namely -$$\mathbb Z/ 5\mathbb Z \times \mu_5.$$ -The first factor is the subgroup of order $5$ determined by the points with -integer coordinates; the second factor is a subgroup of order $5$ generated by -a $5$-torsion point with coefficients in Spec $\mathbb Z[\zeta_5]$. -What does it mean that this second factor is $\mu_5$? -Well, $X^5 - 1$ is a Diophantine equation, whose solutions are defined over -$\mathbb Z[\zeta_5]$, and have a natural (multiplicative) group structure, and this is what $\mu_5$ is. -What Mazur says is that an isomorphic copy of this "Diophantine group" (more precisely, this group scheme) lives inside $X_0(11)$. -Note that the classical theory of Diophantine equations is not very well set up -to deal with concepts like "isomorphisms of Diophantine equations whose solutions admits a natural group structure". (One already sees this if one tries to develop the theory of elliptic curves, including the group structure, in an elementary way.) So this is already a place where scheme theory provides new and important expressive power. -In any event, once Mazur has this formula for the $5$-torsion, he can make an infinite descent to prove that there are no other rational points besides the $5$ that we already wrote down. He doesn't phrase this infinite descent in -the naive way, with equations, as Fermat did with his descents (although it -is the same underlying idea): rather, he argues as follows: -The curve $X_0(11)$ stays non-singular modulo every prime except $11$ (as you can check directly from the above equation). Modulo $11$ it becomes singular: -you can check directly that reduced modulo $11$, the above equation becomes -$$(y-5)^2 = (x-2)(x-5)^2,$$ -which has a singular point (a node) at $(5,5)$. -Note now that all our rational solutions $(5,5), (16,-61),$ etc. (other than -the point at infinity) reduce -to the node when you reduce them modulo $11$. -Using this (plus a little more argument) what you can show is that if -$(x,y)$ is any rational point of $X_0(11)$, then after subtracting off -(in the group law) a suitable choice of one of our $5$ known points, you obtain -a point which does not reduce to the node upon reduction modulo $11$. -So what we have to show is that if $(x,y)$ is any rational solution on $X_0(11)$ -which does not map to the node mod $11$, it is trivial (i.e. the point at -infinity). -Suppose it is not: then Mazur considers a point $(x',y')$ (no longer necessarily rational, -just defined over some number field) which maps to $(x,y)$ under multiplication -by $5$ (in the group law). (This is the descent argument.) -Now this point is not uniquely determined, but it is determined up to -addition (in the group law) of a $5$-torsion point. Because we know the precise -structure of the $5$-torsion (even over Spec $\mathbb Z$) we see that this -point would have to have coordinates in some compositum of fields of the following type: (a) an everywhere unramified cyclic degree $5$ extension of $\mathbb Q$ (this relates to the $\mathbb Z/5\mathbb Z$ factor); and (b) an everywhere -unramifed extension of $\mathbb Q$ obtained by extracting the $5$th root of -some number (this relates to the $\mu_5$ factor). 
Now no such extension of $\mathbb Q$ exists (e.g. because
-$\mathbb Q$ admits no non-trivial everywhere unramified extension), and hence
-$(x',y')$ again has to be defined over $\mathbb Q$. Now we repeat the above
-procedure ad infinitum, to get a contradiction (via infinite descent).
-
-I hope that the above sketch gives some idea of how more sophisticated methods
-can help with the solution of Diophantine equations. It is not just that one writes down Spec and magically gets new information. Rather, the introduction of a more conceptual way of thinking gives whole new ways of transferring information around and making computations which are not accessible when working in a naive manner.
-A good high-level comparison would be the theory of solutions of algebraic equations before and after Galois's contributions.
-A more specific analogy would be the difference between studying surfaces in space (say) with the tools of an undergraduate multi-variable calculus class,
-compared to the tools of manifold theory. In undergraduate calculus, one has to
-at all times remember the equation for the surface, work with explicit coordinates, make explicit coordinate changes to reduce computations from the curved surface to the plane, and so on. In manifold theory, one has a conceptual apparatus which lets one speak of the surface as an object independent of the equation cutting it out; one can say "consider a chart
-in the neighbourhood of the point $p$" without having to explicitly write
-down the functions giving rise to the chart. (The implicit function theorem
-supplies them, and that is often enough; you don't have to concretely determine the output
-of that theorem every time you want to apply it.)
-So it goes with the scheme-theoretic point of view. One can use the modular
-interpretation to write down points of $X_0(11)$ without having to give their
-coordinates. In fact, one can show that it has a node when reduced modulo $11$
-without ever having to write down an equation. The determination of the $5$-torsion group is again made by conceptual arguments, without having to write down the actual solutions in coordinates. And as the above sketch of the infinite descent (hopefully) makes clear, it is in any case the abstract nature
-of the $5$-torsion points (the fact that they are isomorphic to
-$\mathbb Z/5\mathbb Z \times \mu_5$) which is important for the descent, not any information about their explicit coordinates.
-I hope this answer, as long and technical as it is, gives some hint as to the utility of the scheme-theoretic viewpoint.
-
-References: A nice introduction to $X_0(11)$ is given in this expository article of Tom Weston.
-As for Mazur's theorem, I don't know of any expositions which are not at a
-much higher level of sophistication. (There are simpler proofs of his main technical results now, e.g. here,
-but these are simpler only in a relative sense; they are still not accessible
-to non-experts in this style of number theory.)<|endoftext|>
-TITLE: Homology of cube with a twist
-QUESTION [41 upvotes]: Take the quotient space of the cube $I^3$ obtained by identifying each square face with the opposite square face via the right-handed screw motion consisting of a translation by 1 unit perpendicular to the face, combined with a one-quarter twist of the face about its center point.
-I am trying to calculate the homology of this space.
-It is not too hard to see that the CW decomposition of this space has 2 0-cells, 4 1-cells, 3 2-cells and 1 3-cell.
-We end up (drawings would help here, but my MS-Paint skills are poor!) with the 2 0-cells ($P$ and $Q$) connected by the 4 1-cells $a,b,c,d$ with $a,c$ from $P$ to $Q$ and $b,d$ from $Q$ to $P$. Thus we have the closed loops $ab,ad,cb,cd$. They also satisfy the relations $abcd=1,dca^{-1}b^{-1}=1,c^{-1}adb^{-1}=1$ via the identification of opposite 2-cells (top/bottom, left/right, up/down). (There is a relationship between the generator loops - the fundamental group is the quaternion group).
-From the CW decomposition we get the cellular chain complex
-$0 \to \mathbb{Z} \stackrel{d_3}{\to} \mathbb{Z}^3 \stackrel{d_2}{\to} \mathbb{Z}^4 \stackrel{d_1}{\to} \mathbb{Z}^2 \to 0$
-I'm struggling to work out the boundary maps. Can they be 'seen' easily from the relations above?
-I tried to use the cellular boundary formula. $d_1$ must be a $2 \times 4$ matrix. The cellular boundary formula gives the relation
-$$d_1(e^1_\alpha) = \sum_{\beta=1}^2 d_{\alpha \beta} e^0_\beta$$
-Are the entries of the matrix $d_1$ then given by
-$$\left(\begin{array}{cccc}
-d_{11} & d_{21} & d_{31} & d_{41} \\
-d_{12} & d_{22} & d_{32} & d_{42} \\
- \end{array}\right)?$$
-I am pretty sure that $d_{\alpha \beta}$ must be $-1$ or $1$ as the attaching map is a homeomorphism (and is not 0), and is dependent on orientation. Therefore I get that
-$$d_1 = \left(\begin{array}{cccc}
-1 & 1 & 1 & 1\\
--1 & -1 & -1 & -1\\
-\end{array}\right).$$
-Similar logic says that $d_2$ is a $4 \times 3$ matrix. Again all entries must be $1$ or $-1$. I'm struggling to see exactly what the boundary map should be here.
-Any thoughts on the best approach are appreciated.
-
-REPLY [3 votes]: Just to work a little more of this out for myself and others, for $H_1(X,\mathbb{Z})$ ...
-The boundary map $d_2: \mathbb{Z}^3 \rightarrow \mathbb{Z}^4$ has matrix representation
-$$d_2=\left(\begin{array}{ccc}1&-1&1\\ 1&-1&-1\\ 1&1&-1\\ 1&1&1\end{array}\right)$$ which over $\mathbb{Z}$ reduces to
-$$\left(\begin{array}{ccc}1&-1&1\\ 0&2&0\\ 0&0&2\\ 0&0&0\end{array}\right),$$ so $Im(d_2)$ is generated by the columns $(1,1,1,1)$, $(-1,-1,1,1)$ and $(1,-1,-1,1)$.
-And
-$$d_1=\left(\begin{array}{cccc}1&-1&1&-1\\ -1&1&-1&1\end{array}\right),$$ so $Ker(d_1)=\{(a,b,c,d)\mid d=a-b+c\}=\langle(1,0,0,1),(0,1,0,-1),(0,0,1,1)\rangle$.
-So $H_1(X,\mathbb{Z})=Ker(d_1)/Im(d_2)\cong\mathbb{Z}_2\oplus \mathbb{Z}_2,$ which makes sense since this is the abelianization of the fundamental group, which is the quaternion group.<|endoftext|>
-TITLE: All natural solutions of $2x^2-1=y^{15}$
-QUESTION [5 upvotes]: How can I find all positive integers $x$ and $y$ such that $2x^2-1=y^{15}$?
-PS. See here.
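-
-(Added aside, not part of the original question: no finite search can settle this, but a quick scan over odd $y$ — since $y^{15}=2x^2-1$ is odd, $y$ must be odd — turns up only $(x,y)=(1,1)$. Plain Python, with an arbitrary small bound:)
-
-    # Look for odd y with (y^15 + 1)/2 a perfect square.
-    from math import isqrt
-
-    for y in range(1, 2000, 2):
-        t = (y**15 + 1) // 2
-        x = isqrt(t)
-        if x * x == t:
-            print(x, y)   # prints only: 1 1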
-
-REPLY [4 votes]: There are no solutions other than $(x,y)=(1,1)$.
-See http://rmmc.asu.edu/abstracts/rmj/vol31-2/lucapag1.pdf<|endoftext|>
-TITLE: Expressing the maximum of several variables using elementary functions
-QUESTION [14 upvotes]: It's well-known that
-$$\max(a,b)=\frac{a+b+|a-b|}{2}.$$
-Is there a (good) generalization to several variables? Of course $\max(a,b,c)=\max(a,\max(b,c))$ and so
-$$\max(a,b,c)=\frac{a+\frac{b+c+|b-c|}{2}+|a-\frac{b+c+|b-c|}{2}|}{2}$$
-$$=\frac{a+0.5b+0.5c+0.5|b-c|+\left|a-0.5b-0.5c-0.5|b-c|\right|}{2}$$
-but I'd like a form that shows the natural symmetry better and which doesn't have so many operations.
-This is a practical problem working on a system which has an absolute value operator but no maximum and not much ability to execute conditional statements, but to be honest the real reason I'm interested is an attempt to beautify something that is seemingly ugly.
-For the practical side I need 5-10 arguments and it's acceptable to assume that all arguments are at least 0, though of course it would be much more satisfying if this latter assumption was not needed.
-
-REPLY [2 votes]: I think that integer division may provide a path to a different answer, depending on exactly how it works. I'll do the formatting programming-style, not math-style, since that is what I'm used to.
-Suppose you have n arguments, A[1], A[2], A[3]..., A[n].
-Define a multiplier coefficient C[i][j] such that C[i][j] = 1 for A[i]>A[j], zero otherwise. This can be done using the absolute value trick as in the above formula.
-Define a coefficient T[i] = sum (C[i][j]) / (n-1) using integer division. For the subscript i belonging to the max value, the sum will be n-1, so T[i] will be 1. For other subscripts, the sum will be less than n-1, so T[i] will be zero.
-Max = sum (T[i] A[i])
-Probably I have something backwards, but I think the approach is workable, though maybe not better than the original suggestion.<|endoftext|>
-TITLE: Floer theory or Floer homology, an introduction for physicists needed
-QUESTION [12 upvotes]: I need an introduction to Floer theory that's suitable for perhaps a beginning math grad student or a 2nd year physics grad student. The wiki article is sufficiently over my head that it reads as "bar" "bar" "bar". Editor, please improve the tags.
-The reason this comes up is that an answer to a problem on unitary matrices and Hilbert space bases was answered with a reference to Floer theory.
-
-REPLY [3 votes]: The reason I had asked this question is because a Floer theory proof was needed for a physics paper I was writing and I didn't want to include a proof I didn't understand. Eventually I gave up on the hope that someone would explain it to me and began reading the mathematics literature.
-Those looking for a translation of the mathematics jargon into physics jargon (of the sort that every physics grad student is taught) might look at the paper I'm planning on submitting to Jour. Math. Phys., or perhaps Phys. Rev. X. Section III, "Hamilton's Equations", is a translation of a short Floer theory proof that Sam Lisi gave in response to the question "Given two basis sets for a finite Hilbert space, does an unbiased vector exist?" For completeness, here's the current version:
-
-An Hermitian matrix generates a 1-parameter subgroup of unitary matrices and any unitary matrix is an element of such a 1-parameter subgroup. Since we are translating the problem into classical mechanics we will use $t$ for the parameter.
Thus:
-$$U(t) = \exp(it\;H),$$
-and the unitary matrix of interest is given by $U(1)$. Given an initial state $\vec{v}(0)$, the state at time $t$ is defined by a set of coupled ordinary differential equations:
-$$\vec{v}(t) = U(t)\vec{v}(0) = \exp(it\;H)\vec{v}(0)$$
-The 1-parameter subgroup (and therefore the unitary matrix) is fully defined by the relationship between $\vec{v}$ and $\dot{\vec{v}}$. In components:
-$$\dot{v}_j = i\Sigma_k H_{jk}\;v_k.$$
-Replace the complex variables with real and imaginary parts:
-$$v_k = p_k + iq_k$$
-$$H_{jk} = r_{jk}+is_{jk}$$
-If these are compatible with a Hamiltonian $\mathbf{H}$, we have Hamilton's equations:
-$$\dot{q}_j = +\partial \mathbf{H}/\partial p_j = \Sigma_k(+r_{jk}p_k-s_{jk}q_k),$$
-$$\dot{p}_j = -\partial \mathbf{H}/\partial q_j = \Sigma_k(-r_{jk}q_k-s_{jk}p_k).$$
-Compatibility requires that $s_{jk}=-s_{kj}$, which is true since $H$ is Hermitian. Integrating gives the Hamiltonian as:
-$$\mathbf{H} = \Sigma_{j\neq k}(r_{jk}(p_jp_k+q_jq_k)+s_{jk}p_kq_j)+\Sigma_{j}r_{jj}(p_j^2+q_j^2)/2.$$
-This Hamiltonian, integrated for a period of time $t$, gives the unitary transformation $U(t)=\exp(iHt)$. Note that $\mathbf{H}$ is quadratic in momentum and position and so is a generalization of an harmonic oscillator.
-Mathematicians study these Hamiltonians under the label "symplectic geometry." Here we give a brief and rough introduction to the mathematical language. Let $\{\hat{e}_j\}$ be a basis for the positions as a vector space. That is, given a position $\vec{q}=(q_1,q_2,...q_n)$, we treat the sum $\Sigma_j\hat{e}_jq_j$ as an element of a vector space. Similarly, let $\{\hat{f}_k\}$ be a basis for the momenta, also with $n$ elements. Combining the two basis sets gives a basis for a $2n$-dimensional vector space $M$. Now define a bilinear map $\Omega$ on $M$ which acts on the basis sets as follows:
-$$\Omega(\hat{e}_j,\hat{e}_k)=\Omega(\hat{f}_j,\hat{f}_k)=0,$$
-$$\Omega(\hat{e}_j,\hat{f}_k)=-\Omega(\hat{f}_k,\hat{e}_j)=\delta_{jk}.$$
-Without $\Omega$, $M$ is the usual "phase space" of the physicists, but the mathematicians prefer to call the combination a "symplectic vector space."
-The map $\Omega$ can be thought of as a way of associating positions with momenta. That is, given two elements $u,v$ of $M$ with $\Omega(u,v)=1$, we can think of $u$ as a position and $v$ as its associated momentum. For example, if $\Omega(q_1,p_1)=1,$ then $\Omega(p_1,q_1)=-1$ so $\Omega(p_1,-q_1)=1$. Thus we can think of $p_1$ as a position and $-q_1$ as its associated momentum. This use follows the sense of the usual canonical (or contact) transformations familiar from classical mechanics. This example is one that is typically given in textbooks on the subject; we can swap a position for its associated momentum provided we introduce a minus sign.
-Classical mechanics is about the movement of systems through phase space. Suppose a system begins at some particular position. A question of interest is "can the system return to that position at time $t$?" To answer this question, we consider a fixed position with all possible momenta. But Hamilton's equations can be transformed in ways that mix position and momentum. So to understand these questions we need a definition of "initial position" that allows for any possible transformation of Hamilton's equations.
-If phase space is not transformed, then the appropriate elements of $M$ to consider are those with a particular position and any momentum.
This is easy to define by the $\hat{e}_j,\hat{f}_k$ basis elements; we let momentum be in the subspace spanned by the $\hat{f}_k$. Such a subspace has dimension $n$, just half that of $M$. More generally, consider the momentum subspace resulting from any canonical transformation along with a specification of position. Such a subset of $M$ defines an initial value problem in classical mechanics; the mathematicians call such a subset a "Lagrangian submanifold".
-We now consider the canonical transformation from $q_j,p_j$ to $\rho_j,\sigma_j$ generated by:
-$$F = \left(q_j\sqrt{\rho_j^2-q_j^2}+\rho_j^2\sin^{-1}(q_j/\rho_j)\right)/2.$$
-This gives $p_j$ and $\sigma_j$ as:
-$$p_j = \partial F/\partial q_j = \sqrt{\rho_j^2-q_j^2},$$
-$$-\sigma_j = \partial F/\partial \rho_j = \rho_j\sin^{-1}(q_j/\rho_j).$$
-Solving for $p_j$ and $q_j$ in terms of $\sigma_j$ and $\rho_j$ we have:
-$$p_j=\rho_j\cos(\sigma_j/\rho_j),$$
-$$q_j=\rho_j\sin(\sigma_j/\rho_j).$$
-Putting $\rho_j=1$ in the new coordinates defines a Lagrangian submanifold of $M$ for which $\rho_j^2=p_j^2+q_j^2=1.$ And this subset of phase space corresponds to the vectors of phases in Hilbert space. The new momentum consists of a product of $n$ copies of complex phases so it can be called a torus; since it is also Lagrangian, it is a "Lagrangian torus". The torus as we've defined it has a phase freedom. That is, if we add the same phase $\alpha$ to all the $\sigma_j$, the result will be a new vector that is also a vector of phases and that represents the same quantum state. This is just the usual arbitrary complex phase present in a quantum state vector. To eliminate it, the mathematicians prefer to identify equivalent vectors and so work with the equivalent torus in $CP^{n-1}$.
-Cheol-Hyun Cho [3] refers to our $CP^{n-1}$ torus as a "Clifford torus", an extension of the usual definition. His paper is perhaps the first proof that a Hamiltonian flow cannot "displace" such a torus, that is, move it in such a way that the image no longer intersects the original torus. Other papers that prove the existence of the intersection are [4,5] and it can be deduced from [6-8]. This completes the proof that an unbiased state exists for two bases. In addition, computer calculation with random unitary matrices failed to find any counterexamples, and Philip Gibbs [9] proved the $n=3$ case in 2009.
-[3] C.-H. Cho, "Holomorphic discs, spin structures, and Floer cohomology of the Clifford torus," Int. Math. Res. Not. 35, 1803–1843 (2004), math/0308224.
-[4] P. Biran and O. Cornea, "Lagrangian quantum homology," The Yashafest, Stanford (2007), math.SG/0808.3989.
-[5] P. Biran and O. Cornea, "Rigidity and uniruling for Lagrangian submanifolds," Geom. Topol. 13, 2881–2989 (2009), math.SG/0808.2440.
-[6] C.-H. Cho and Y.-G. Oh, "Floer cohomology and disc instantons of Lagrangian torus fibers in Fano toric manifolds," Asian J. Math. 10, 773–814 (2006), math/0308225.
-[7] M. Entov and L. Polterovich, "Quasi-states and symplectic intersections," Eur. Math. Soc. 81, 75–99 (2006), math/0410338.
-[8] K. Fukaya, Y.-G. Oh, H. Ohta, and K. Ono, "Lagrangian Floer theory on compact toric manifolds: survey," (2010), math.SG/1011.4044.
-[9] P. Gibbs, "3x3 unitary to magic matrix transformations," (2009), viXra 0907.0002.<|endoftext|>
-TITLE: Prove $y=\ln(2x-1)/\ln(x)$ is a decreasing function
-QUESTION [5 upvotes]: Given $y=\ln(2x-1)/\ln(x)$, prove $y$ is decreasing for $x>1$.
-While this looks obvious from a couple of computations (like the ones above), the usual differentiation method to show this is true is not getting me anywhere, since finding the $y'=0$ point is rather a nuisance with the logarithms and whatnot. -We know the limit of this function is 1, and with the first few computations we can see that it does indeed decrease. However, is this not an insufficient explanation? -If there is a clean way of showing this, please do share! - -REPLY [8 votes]: The usual derivative method still works, despite having logs. Here, we find -$$ -y' = \frac{\frac{2}{2x-1} \cdot \ln(x) - \ln(2x - 1) \cdot \frac{1}{x}}{(\ln x)^2}. -$$ -Now, -$$\begin{align}y' < 0 &\Longleftrightarrow \frac{2}{2x-1} \cdot \ln(x) - \ln(2x - 1) \cdot \frac{1}{x} < 0 -\\ -& \Longleftrightarrow \frac{2x \ln x}{x(2x - 1)} - \frac{(2x -1)\ln(2x-1)}{x(2x - 1)} < 0 -\\ -&\Longleftrightarrow -2x \ln x - (2x-1)\ln(2x-1) < 0 -\end{align}$$ -Now, we just need to show that for all $x > 1$, we have -$$ -F(x) := 2x \ln x - (2x-1)\ln(2x-1) < 0. -$$ -But, $F(1) = 0$, and so....<|endoftext|> -TITLE: How to think deeply in a mathematical way? -QUESTION [25 upvotes]: Most of the time, I solve a problem by doing the following steps: -1. Look for a related theorem. -2. Look for a related formula. -3. Find a similar problem or sub-problem. -4. Try to adapt one that I have done before. -5. Asking at Math.SE is an alternative solution that I've just found. -When I read the article named "Eight Laws Of Memory", they claimed "People poorly remember what they read because they do too little thinking", and this is exactly my problem. I read the answer to the question that I asked. Then I try to solve it in my own way, but a couple of days later I have forgotten what I did, unless I see it again. -So, by "thinking deeply", what do we have to do, especially when solving maths problems? My guess was that "thinking deeply" means breaking the problem into all possible cases, observing each outcome, and comparing one with the others. Can anyone share some experiences about how to think deeply? Any feedback or suggestion would be greatly appreciated. -Thank you. - -REPLY [14 votes]: This is just a comment on how I try to learn things. Of course, solving problems requires that you understand the material. From personal experience, I feel I absorb information better if I'm actively writing down thoughts, questions, observations, etc. as I read through a text. -So my basic advice is this: don't just sit and read a textbook and try to understand it in your head. When you finish reading a proposition, lemma, theorem, whatever, try to reprove it by yourself in your own words and write it down. Keep doing this until it sticks. Change the way you present the proof, so that the ideas are fluid to you, and thus the exercise is not one of memorizing the sentences of the proof, but getting a feel for general approaches to problems as well. This will force you to engage the material and think about what you're reading. Read the author's words like a skeptic, and rigorously prove little details that the author may find obvious, but a newcomer wouldn't. -I think there is some quote somewhere that when learning mathematics, you should produce 10 pages of writing for every one page you read.<|endoftext|> -TITLE: Cartesian product of two $\sigma$-algebras is generally not a $\sigma$-algebra? -QUESTION [14 upvotes]: When reading the book Measures, Integrals and Martingales written by R.L. 
Schilling, I saw the following statement: -$$\mathcal{A} \times \mathcal{B}:= \{A \times B: A\in \mathcal{A}, B\in \mathcal{B}\},$$ -where $\mathcal{A}$ and $\mathcal{B}$ are $\sigma$-algebras, is not a $\sigma$-algebra in general. -However, I cannot construct a counterexample. -Could anyone offer help here? -Kind regards - -REPLY [4 votes]: Simple example: $A$ and $B$ are both $\mathbb{R}$ and $\mathcal{A}$ and $\mathcal B$ are both the Borel $\sigma$-algebra. Then sets like $(-\infty,a)\times(-\infty,b)$ are in $\mathcal A\times \mathcal B$, but not their complements. - -REPLY [2 votes]: Take the Borel algebra $B$ in $\mathbb{R}$, generated by the open intervals of the form $(a,b)$. Then every set of the type $\{a\}$, where $a\in \mathbb{R}$, belongs to $B$. Then $\{(a,a)\}=\{a\}\times\{a\}\in B\times B$ for all $a\in \mathbb{R}$. But for $a\neq b$ we have that the set $\{(a,a), (b,b)\}$ does not belong to $B\times B$. -In general, $(X\times Y)\cup (X'\times Y')$ is not of the form $X''\times Y''$.<|endoftext|> -TITLE: Integral of $1/\sinh(x)$ -QUESTION [5 upvotes]: Can you help me find the integral -$$ -\int{\frac{1}{\sinh(x)}dx}? -$$ - -REPLY [2 votes]: Arturo's answer brought to mind yet another way of computing this integral: cheat by using the formula $$\sinh x = -i \sin ix,$$ where $i = \sqrt{-1}$. You can manipulate the imaginary units like ordinary (real) constants for the purposes of integration, and the result follows from the formula for $\int \csc x\,dx.$ -Admittedly, the answer you'll get will have imaginary units in it, but you should be able to get rid of those via similar formulas (such as $\cos ix = \cosh x$, for example.)<|endoftext|> -TITLE: Function example? Continuous everywhere, differentiable nowhere -QUESTION [5 upvotes]: Possible Duplicate: -Are Continuous Functions Always Differentiable? - -If such a function exists, can anyone give an example of a function $f(x) : \mathbb{R} \longrightarrow \mathbb{R}$ that is continuous for all $x \in \mathbb{R}$ but differentiable nowhere? - -REPLY [7 votes]: Another popular example is what I know as Takagi's Function. -It is somehow different from the Weierstrass Function in that it is not constructed as a uniform limit of differentiable functions. However, it is a uniform limit of continuous functions in a way that the points of non-differentiability populate the "whole interval" (if that point of view makes any sense...). - -REPLY [6 votes]: A very famous example - and by far the most important when it comes to practical applications (finance: option pricing!) - is the Wiener process.<|endoftext|> -TITLE: Good books on Math History -QUESTION [68 upvotes]: I'm trying to find good books on the history of mathematics, dating as far back as possible. -There was a similar question here Good books on Philosophy of Mathematics, but mostly pertaining to Philosophy, and there were no good recommendations on books relating specifically to Math History. - -REPLY [6 votes]: In my opinion, A History of Mathematics by Victor J. Katz is the best single volume which covers mathematics in various civilizations from ancient to modern times. It is based on careful attention to original sources.<|endoftext|> -TITLE: Symmetries of Cube -QUESTION [6 upvotes]: The group of orientation-preserving isometries of the cube is $S_4$. But if we allow orientation-reversing isometries as well, then the group will be of order 48. What is the structure of this group? 
( Part of the answer can be $S_4\rtimes \mathbb{Z}_2$, because $S_4\triangleleft G$ and $\mathbb{Z}_2\leq G$, where the latter contains a reflection in a plane passing through the centers of four vertical faces; and so $S_4\cap \mathbb{Z}_2=1$. But again, what is this semidirect product? There are three semidirect products of $S_4$ by $\mathbb{Z}_2$.) - -REPLY [19 votes]: One of the orientation-reversing isometries is the inversion $-I$, which commutes with all orientation-preserving isometries. So the group is just the direct product of $S_4$ with $\mathbb{Z}_2$.<|endoftext|> -TITLE: When should I use "graph" vs. "plot"? -QUESTION [22 upvotes]: My ODEs textbook uses both graph and plot but I can't figure out how it chooses one over the other. -From the book: - -Sketch the graph of the solution in the x1x2-plane for t ≥ 0. - [this one was referring to a continuous function] - -Also from the book: - -Plots of the solution and a tangent line approximation for the initial value problem (11). - [this one referring to a continuous solution] - -Is there a formal, to some degree, distinction between the two terms? When do I call it a plot and when do I call it a graph? - -REPLY [2 votes]: 'Plot' (verb) has a more active, tangible connotation than 'graph' (verb), i.e. a person or machine placing ink on paper, a surveyor doing measurements and markings, a computer monitor radiating visible light producing some representation or figure, and so on. Obviously, 'plot' is associated with 'plotting devices', which evokes all sorts of technologies. I would not be surprised if the word 'plot' has its roots in surveying, navigation, and astronomy, in that order. -'Graph' (verb) has a more abstract, intangible connotation than 'plot'. When this word is used, the resulting 'graph' (as a noun) is by design a representation of some mathematical object (see below). And this object is the goal, not the representation itself. For example, a teacher may graph $y = x^2$ on a dusty blackboard, but once he starts talking about the graph, it's usually not the chalk he's talking about anymore, he's talking about a mathematical object. -'Graph' (noun) is closely related to 'function'. In mathematics, functions are often equated with their graphs, i.e. a function $f: X \rightarrow Y$ is just the choice of two sets $X$ and $Y$ and an appropriate subset $f \subset X \times Y$, the latter being 'the graph' in the formal sense. When visualized in the usual way, with elements from $X$ as the independent variables, we can call these the traditional graphs. -Finally, from wikipedia: graphs, plots, and charts. In the first link, graphs are of the 'traditional' sort. In the second link, a large number of plot types are given, including box plots and scatter plots. However, in addition there are many graphs of the traditional sort, but with modified/scaled axes. The entry on charts also has many examples which may be called either graphs or plots.<|endoftext|> -TITLE: For which number does multiplying it by 99 add a 1 to each end of its decimal representation? -QUESTION [34 upvotes]: This was asked by my maths lecturer a couple of years ago and I've been wracking my brains ever since: - -Find a number that, when multiplied by - 99 will give the original number but - with a 1 at the beginning and a 1 at - the end. 
-For example: 42546254 * 99 would equal - 1425462541 (it doesn't, but it - illustrates what the answer would look - like) - -REPLY [10 votes]: It was shown that $\rm\ x = 112359550561797752809\:.$ -Notice that $1/89\ =\ 0.0112359550561797752808988\ldots$ -EXERCISE $\: $ Explain it (this, perhaps, is the point of the OP). -NOTE $\ $ This is closely connected with fibonacci numbers. Hint: -$\rm\quad\quad\quad x^n\ =\ f_n\ x + f_{n-1}\ (mod\ x^2-x-1)\ $ -and note $\rm\ f_{11} = 89\ $ which is $\rm\ x^2-x-1\ $ for $\rm\ x = 10\:.$<|endoftext|> -TITLE: Connection between chain rule, u-substitution and Riemann-Stieltjes integral -QUESTION [6 upvotes]: I think I understand these concepts ok: - -chain rule -u-substitution -Riemann-Stieltjes integral - -But there seems to be a layer that I miss: They all seem to be connected, alas I don't know how exactly. In a way you should be able to transform one into the other?!? -Could anyone please enlighten me? Thank you! - -REPLY [25 votes]: The Chain Rule and $u$-substitution are "inverses of each other." In the Chain Rule, you obtain the derivative of a composition: -$$\Bigl(f(g(x))\Bigr)' = f'(g(x))g'(x).$$ -In $u$-substitution, you "recognize" a product of two functions $h(x)g(x)$ as an instance of $f'(g(x))g'(x)$ by doing a substitution. So -$$\frac{2x}{x^2+1} = \frac{1}{x^2+1}(2x)$$ -is "recognized" as $f'(g(x))g'(x)$ by taking $g(x) = x^2+1$, which makes $f'(u) = \frac{1}{u}$. -That is, the connection between the Chain Rule and $u$-substitution is completely and absolutely direct; one "undoes" what the other one does. The $u$-substitution simplifies the result of the Chain Rule to make integration "more obvious", calling $g(x)=u$ and $g'(x)\,dx = du$, so that instead of $f'(g(x))g'(x)\,dx$ you have $f'(u)\,du$. -The connection with Riemann-Stieltjes integral is rather more tenuous. Remember that the Riemann-Stieltjes integral of $f(x)$ with respect to $g(x)$, -$$\int_a^b f(x)dg(x)$$ -is defined to be the limit over partitions $P$ of $[a,b]$ as the mesh size goes to zero of the Riemann-Stieltjes sum -$$S(P,f,g) = \sum_{i=0}^{n-1} f(x_i^*)(g(x_{i+1})-g(x_i)),$$ -where $x_i^*$ is an arbitrary point in the partition interval $[x_i,x_{i+1}]$. -When $g$ is differentiable, the Mean Value Theorem tells us that there exists $x'_i$ in each $(x_i,x_{i+1})$ such that $g'(x'_i)(x_{i+1}-x_i) = g(x_{i+1})-g(x_i)$, so that we can replace the Riemann-Stieltjes sum (by switching, if necessary, the point $x_i^*$ of evaluation of $f$ to the same point $x'_i$) with: -$$\sum_{i=0}^{n-1} f(x'_i)g'(x'_i)(x_{i+1}-x_i)$$ -which is a Riemann sum for the function $h(x)=f(x)g'(x)$. Therefore, taking the limit over the partitions $P$ of $[a,b]$ as the mesh size goes to zero of this Riemann sum gives the integral of $h(x)$, and so you get the equality -$$\begin{align*} -\int_a^b f(x)\,dg(x) &= \lim_{||P||\to 0} \sum_{i=0}^{n-1}f(x_i^*)(g(x_{i+1})-g(x_i))\\ -& = \lim_{||P||\to 0}\sum_{i=0}^{n-1}f(x'_i)g'(x'_i)(x_{i+1}-x_i) = \int_a^b f(x)g'(x)\,dx.\end{align*}$$ -The integral on the right is similar, but not equal, to what you get with the Chain Rule; in the Chain Rule you would have $f(g(x))g'(x)$, rather than $f(x)g'(x)$. Instead, it's more like "half of a product rule" ( $(fg)' = fg' + f'g$, so here you have that first summand but not the second). So while the Riemann-Stieltjes integral looks somewhat like the Chain Rule (when $g(x)$ is differentiable), it's not quite the same. 
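The preceding identity is easy to test numerically; here is a quick Python sketch comparing a Riemann-Stieltjes sum for $\int_a^b f(x)\,dg(x)$ with an ordinary Riemann sum for $\int_a^b f(x)g'(x)\,dx$ (the particular $f$ and $g$ below are arbitrary choices of mine, just for illustration):

    import math

    def rs_sum(f, g, a, b, n):
        # Riemann-Stieltjes sum of f with respect to g on [a, b],
        # left endpoints, uniform partition of mesh (b - a) / n.
        xs = [a + (b - a) * i / n for i in range(n + 1)]
        return sum(f(xs[i]) * (g(xs[i + 1]) - g(xs[i])) for i in range(n))

    def riemann_sum(h, a, b, n):
        xs = [a + (b - a) * i / n for i in range(n + 1)]
        return sum(h(xs[i]) * (xs[i + 1] - xs[i]) for i in range(n))

    f = math.sin
    g = lambda x: x ** 2                    # differentiable integrator
    h = lambda x: math.sin(x) * 2 * x       # f(x) * g'(x)

    print(rs_sum(f, g, 0.0, 1.0, 100000))   # both values are close to
    print(riemann_sum(h, 0.0, 1.0, 100000)) # 2*(sin(1) - cos(1)) = 0.6023...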
-Of course, it's possible that when you do some Riemann-Stieltjes integrals, particularly in a classroom setting, the choices of $f$ and $g$ will be precisely such that the simplified integral $\int f(x)g'(x)\,dx$ just happens to be doable with a $u$-substitution; e.g., it "just happens" that $f(x)g'(x) = \mathcal{F}'(h(x))h'(x)$ for a function $h(x)$ that has the same derivative as $g(x)$, and a function $\mathcal{F}$ that has derivative similar to $f$. For instance, if $f(x) = \frac{1}{1+x^2}$, $g(x) = x^2$, $h(x)=1+x^2$ (note that $g'(x)=h'(x)$) and $\mathcal{F}'(x) = \frac{1}{x}$, we get that -$$f(x)g'(x) = \frac{1}{1+x^2}(x^2)' = \frac{1}{1+x^2}(1+x^2)' = \mathcal{F}'(h(x))h'(x) = \Bigl(\mathcal{F}(h(x))\Bigr)'$$ -where $\mathcal{F}(x) = \ln|x|+C$ (since $\mathcal{F}'(x) = \frac{1}{x}$). -So doing a lot of Riemann-Stieltjes integration problems where the choices are selected to make this happen may give the illusion that Riemann-Stieltjes integration is also connected to the Chain Rule/$u$-substitution, but that is likely to be an artifact of the problems you are asked to solve, rather than a direct connection between them.<|endoftext|> -TITLE: Why does $\{(x,y,z): z \ge 0\}-\{(x,y,z): y=0,0\leq z \leq 1\}$ have trivial fundamental group? -QUESTION [8 upvotes]: I have just begun to learn about the fundamental group. -An exercise asks me to prove that $$X=\{(x,y,z): z \ge 0\}-\{(x,y,z): y=0,0\leq z \leq 1\}$$ has trivial fundamental group. -What I know is: -1) the definition of the fundamental group. -2) X has trivial fundamental group iff any loop in X can be shrunk into a constant loop at the base point. -3) Homeomorphic (path-connected) spaces have isomorphic fundamental groups. -4) Any convex subset of $\mathbb{E}^n$, and $S^m$ for $m\ge 2$, has trivial fundamental group. -I tried to construct a homeomorphism from X to a convex subset of $\mathbb{E}^3$ such as a region like this: -$$\{(x,y,z): -1\leq y \leq 1,z>0\}$$ -But I failed. -Can you please help me? Thank you! - -REPLY [2 votes]: If $T$ is your space, then $T$ has the property that if $(x,y,z)\in T$ and $t >0$ then $(x,y,z+t)\in T$. -This lets you find a homotopy from any loop in $T$ to a loop in $T_0=\{(x,y,z): z\ge 1\}$. But $T_0$ is convex, so it is simply connected.<|endoftext|> -TITLE: Sum and product of ultrafilters -QUESTION [5 upvotes]: Can anyone please give me two ultrafilters such that $\mathcal{U}\otimes\mathcal{V}\neq\mathcal{V}\otimes\mathcal{U}$, and two others such that $\mathcal{U}\oplus\mathcal{V}\neq\mathcal{V}\oplus\mathcal{U}$? Thanks - -REPLY [2 votes]: For $\mathcal U\otimes \mathcal V$, let $\mathcal U=(0)$ (principal), $\mathcal V$ -- any other. Then $A=\mathbf N\times \lbrace 0\rbrace$ is not in $\mathcal U\otimes \mathcal V$ (since all vertical sections of $A$ are $\lbrace 0\rbrace\notin \mathcal V$), but is in $\mathcal V\otimes \mathcal U$ (since all vertical sections of $A$ contain $0$).<|endoftext|> -TITLE: Finding the invariant subspaces of a specific matrix -QUESTION [10 upvotes]: Good morning, -How does one find the subspaces that are invariant under $A$ for -$$A = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 &2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 3 \end{pmatrix}\ \in M_{4} (\mathbb{R})?$$ -Thank you. 
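Since the matrix is diagonal, the eigenvalues and eigenspaces (the building blocks of any invariant subspace) can be read off numerically as a first step; a minimal Python sketch (numpy assumed):

    import numpy as np

    A = np.diag([1.0, 2.0, 2.0, 3.0])
    eigvals = np.linalg.eigvals(A)
    print(sorted(eigvals.real))   # [1.0, 2.0, 2.0, 3.0]
    # The repeated eigenvalue 2 has a two-dimensional eigenspace spanned
    # by e2 and e3, which is why one-parameter families of lines such as
    # Span{e2 + k e3} appear in the systematic answer below.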
- -REPLY [3 votes]: Define $\mathscr{T}:\mathbb{R}^{4}\rightarrow\mathbb{R}^{4}$ by $\mathscr{T}(\mathbf{x})=A\mathbf{x}$ for $\mathbf{x}\in \mathbb{R}^{4},$ where $$A=\begin{pmatrix} - 1& 0& 0& 0\\ - 0& 2& 0& 0\\ - 0& 0& 2& 0\\ - 0& 0& 0& 3 -\end{pmatrix}\in M_{4} (\mathbb{R}).$$ -$$\mathbf{e}_{1}=(1,0,0,0)^{T},\mathbf{e}_{2}=(0,1,0,0)^{T},\mathbf{e}_{3}=(0,0,1,0)^{T},\mathbf{e}_{4}=(0,0,0,1)^{T}.$$ -The complete list of $\mathscr{T}$-invariant subspaces: - -Zero-dimensional subspace: $\{\mathbf{0}\}.$ -One-dimensional subspaces: $$\text{Span}\lbrace \mathbf{e}_{1}\rbrace,\text{Span}\lbrace \mathbf{e}_{2}\rbrace,\text{Span}\lbrace \mathbf{e}_{3}\rbrace, -\text{Span}\lbrace \mathbf{e}_{4}\rbrace,\text{Span}\lbrace \mathbf{e}_{2}+k\mathbf{e}_{3}\rbrace(k\ne 0).$$ -Two-dimensional subspaces: -$$\text{Span}\lbrace \mathbf{e}_{1},\mathbf{e}_{2}\rbrace,\text{Span}\lbrace \mathbf{e}_{1},\mathbf{e}_{3}\rbrace,\text{Span}\lbrace\mathbf{e}_{1},\mathbf{e}_{4}\rbrace, -\text{Span}\lbrace\mathbf{e}_{2},\mathbf{e}_{3}\rbrace,\text{Span}\lbrace \mathbf{e}_{2},\mathbf{e}_{4}\rbrace,\text{Span}\lbrace\mathbf{e}_{3},\mathbf{e}_{4}\rbrace,\text{Span}\lbrace\mathbf{e}_{1},\mathbf{e}_{2}+k\mathbf{e}_{3}\rbrace(k\ne 0),\text{Span}\lbrace\mathbf{e}_{4},\mathbf{e}_{2}+k\mathbf{e}_{3}\rbrace(k\ne 0). -$$ -Three-dimensional subspaces: - -$$\text{Span}\lbrace \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\rbrace,\text{Span}\lbrace \mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{4}\rbrace,\text{Span}\lbrace \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{4}\rbrace,\text{Span}\lbrace \mathbf{e}_{1},\mathbf{e}_{3},\mathbf{e}_{4}\rbrace,\text{Span}\lbrace \mathbf{e}_{1},\mathbf{e}_{4},\mathbf{e}_{2}+k\mathbf{e}_{3}\rbrace(k\ne0).$$ - -Four-dimensional subspace: $\text{Span}\lbrace\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{4}\rbrace.$<|endoftext|> -TITLE: Logarithm of a Markov Matrix -QUESTION [6 upvotes]: Start with a Markov matrix $\mathbf{M}$, whose elements all satisfy $0 \le \mathbf{M}_{ij} \le 1$ and whose rows each sum to one. There is a natural connection between this matrix and the rate matrix $\mathbf{W}$ in the Master Equation -$$ -\mathbf{M} = \exp( t \mathbf{W} ) -$$ -Here, given $\mathbf{W}$, the calculation of $\mathbf{M}$ for a given $t$ is unambiguous, since the matrix exponential is unique and its Taylor expansion converges. What about the other direction? -$$ -t \mathbf{W} = \log( \mathbf{M} ) -$$ -Do the properties of the Markov matrix guarantee this is unique and that the alternating sum in the log Taylor series converges? -Please provide a reference where this is discussed if possible! -Motivation (by request) -I've been studying the dynamics associated with an Ising model type system under single spin-flip Glauber dynamics. Glauber dynamics gives essentially the $\mathbf{W}$ matrix. If one were to observe the system over a finite time, an approximation to $\mathbf{M}$ could be made. I was interested in when it was permissible to convert between the two. In the reference provided by one of the answers the question boils down to: - -In probabilistic terms a Markov matrix A is embeddable if it - is obtained by taking a snapshot at a particular time of an autonomous finite - state Markov process that develops continuously in time. On the other hand - a Markov matrix might not be embeddable if it describes the annual changes - in a population that has a strongly seasonal breeding pattern; in such cases - one might construct a more elaborate model that incorporates the seasonal - variations. 
Embeddability may also fail because the matrix entries are not - accurate; in such cases a regularization technique might yield a very similar - Markov matrix that is embeddable. - -REPLY [6 votes]: This is an old subject which goes back at least to Elfving in the '30s. Kingman got interested in it in the '60s and more recently Israel, Rosenthal and Wu studied it with an eye to the practical side of the problem. -Markov matrices $M$ which can be written as $M=\exp(W)$ for a generator matrix $W$ are called embeddable because then there exists a whole semi-group $(M_t)$ of Markov matrices such that $M=M_1$ (simply define $M_t=\exp(tW)$ for every nonnegative real number $t$). Recall that $W$ is a generator if the sum of its rows is zero and if every non-diagonal element is nonnegative. -There are some obvious obstructions for a given Markov matrix $M$ to be embeddable. For example, the set of couples of states $(x,y)$ such that $(M^n)_{xy}=0$ cannot depend on $n\ge1$. It may also happen that the matrix $V=\log(M)$ defined by the usual series expansion of $x\mapsto\log(1+x)$ at $x=0$ applied to the matrix $I+(M-I)$, is not a generator even though $M=\exp(V)$ (I seem to remember this can even happen when $M$ is embeddable). -The dimension $2$ excepted (and maybe the dimension $3$ but I do not remember), no complete characterization of embeddability is available, but much is known. I would suggest to begin with the recent paper Embeddable Markov Matrices by E. B. Davies. It is available on the arXiv and it provides useful references to some previous works including the ones I mentioned in this post. -A related problem is to know when the equation $M=\exp(W)$ has several admissible solutions $W$; on this see the paper by Speakman in the list of references of Davies's paper.<|endoftext|> -TITLE: Newton's method for square roots - digit precision (exercise from Cheney's Numerical Analysis) -QUESTION [6 upvotes]: I would greatly appreciate some help with this exercise in Kincaid and Cheney's Numerical Analysis: "apply Newton's method to $f(x)=x^2-r$ (where $r>0$). Prove that if $x_n$ has $k$ correct digits after the decimal point, then $x_{n+1}$ has at least $2k-1$ correct digits, as long as $r>0.006$ and $k\geq 1$". -In general, if $f''$ is continuous and $\sqrt{r}$ is a simple root of $f$ it holds that -$e_{n+1}=x_{n+1}-\sqrt{r}=\frac{1}{2}\frac{f''(\xi_n)}{f'(x_n)}e_n^2$, where $\xi_n$ is some number between $x_n$ and $\sqrt{r}$. -I suppose that the condition they're asking for is achieved (but I'm not sure) when -$|e_{n+1}|<\frac{1}{2}\times 10^{-(2k-1)}$. - -REPLY: Here $e_{n+1}=e_n^2/(2x_n)$, with $|e_n|\le \frac{1}{2}\times 10^{-k}$ and $x_n\ge \sqrt{r}-\frac{1}{2}\times 10^{-k}$. We are given $r>.006$, so $\sqrt{r}>0.077$. Recall also that $k\ge 1$, so $10^{-k}\le 1/10$. It follows (calculator) that -$$\frac{1}{|2\sqrt{r}-10^{-k}|} <\frac{1}{0.154-0.1} <18.52$$ -We conclude that -$$|e_{n+1}|<(18.52)(0.5)^2(10^{-2k}) <0.464 \times 10^{-(2k-1)}$$ -This estimate says that the approximation $x_{n+1}$ is correct to at least $2k-1$ decimal places. -We were as pessimistic as possible, given the information about $r$. You can see on closer analysis that the error behaves a little better if we start above $\sqrt{r}$ than if we start below. The reason we had to insist that $r$ not be too close to $0$ is that in that case -$1/(|2\sqrt{r}-10^{-k}|)$ could be large.<|endoftext|> -TITLE: Is it possible to solve any Euclidean geometry problem using a computer? -QUESTION [29 upvotes]: By "problem", I mean a high-school type geometry problem. -If not, is there some other set of axioms that allows it? -If yes, is there any software that does that? 
-I did a search, but was not able to find a single program that allows that. It is strange, because even if it is impossible to solve every problem, most of the natural problems should be solvable. - -REPLY [3 votes]: As far as I know there is no practical tool which automatically provides readable proofs, of the same kind as high-school proofs, for many geometry theorems. -As Mitch pointed out, there are algebraic methods such as Gröbner bases and Wu's method, but they do not provide readable proofs. They have been implemented and are available in software such as: -opengeoprover (an open source implementation in java), Predrag Janicic's gclc, geother by Dong Ming Wang (a maple implementation of Wu's method)... -Some provers are also available in GeoGebra 5. -There is also a method which produces proofs which sometimes can be considered readable: the area method by Chou, Gao and Zhang. -This method is implemented in open-geo-prover and I have implemented it in Coq. -There is also an old paper by Gelertner which describes an approach which tries to mimic human proofs (but as far as I know this method is not very efficient): -http://aitopics.org/sites/default/files/classic/Feigenbaum_Feldman/Computers_And_Thought-Part_1_Geometry.pdf<|endoftext|> -TITLE: Why is the inclusion of the tensor product of the duals into the dual of the tensor product not an isomorphism? -QUESTION [33 upvotes]: Let $V$ and $W$ be vector spaces (say over the reals). There is a linear injection $V^* \otimes W^* \to (V \otimes W)^*$ which sends $\sum_i f_i \otimes g_i \in V^* \otimes W^*$ to the unique functional in $(V \otimes W)^*$ sending $v \otimes w \mapsto \sum_i f_i(v) \cdot g_i(w)$ for all $(v,w) \in V \times W$. In the finite dimensional case it is easy to see this is an isomorphism by comparing dimensions on both sides. I'm aware that this is not an isomorphism in the infinite-dimensional case (see Relation with the dual space in the wikipedia article on tensor products) but I must admit that I'm not sure why. Cardinal arithmetic does not seem to help here and, even if it did, I would be much happier to see an explicit example of a functional in $(V \otimes W)^*$ outside the range of this map. I'm having trouble cooking one up myself. -Added: Let me elaborate on my comment about cardinal arithmetic. Suppose that $X$ and $Y$ are infinite dimensional vector spaces over a field $k$. I'm confident that $\dim (X \otimes Y) = \dim X \cdot \dim Y$ holds. It is clear that $|X^*| = |k|^{\dim X}$ since we may identify a functional on $X$ with a function from a basis for $X$ to $k$. Also I think I've convinced myself that if $\dim X \geq |k|$ then in fact $|X| = \dim X$, i.e. the cardinality and dimension of a vector space agree when the dimension is larger than the cardinality of the ground field. Consequently, assuming say that $|k| \leq \dim X \leq \dim Y$ it seems we have -$$ \dim(X \otimes Y)^* = |k|^{ \dim X \cdot \dim Y} = |k|^{ \dim Y} = \dim Y^* = \dim X^* \cdot \dim Y^* = \dim (X^* \otimes Y^*)$$ -which implies that in fact $(X \otimes Y)^*$ and $X^* \otimes Y^*$ are isomorphic but, rather frustratingly, the obvious map does not do the job. Did I make a mistake here? If not, is it generally true (i.e. without making assumptions about $|k|$) that there exists some isomorphism $(X \otimes Y)^* \to X^* \otimes Y^*$? - -REPLY [28 votes]: The map is not an isomorphism because an element in $X^*\otimes Y^*$ is a finite -sum of functionals of the form $x^{*}\otimes y^{*}$, where $x^* \in X^*$ and $y^* \in Y^*$. 
However, when $X$ and $Y$ are infinite dimensional, not every functional on $X\otimes Y$ will be of this form. -One case to consider, which will make this clear, is the case when $Y = X^*$. -There is one obvious element of $(X\otimes Y)^*$, namely the evaluation map -which takes a tensor $x\otimes y$ to the value of the functional $y$ on the vector $x$. Now one can check that this element is not in the image -of the map from $X^*\otimes Y^*$. - -I am going to take a little time to rewrite the preceding example in a different language, because I think that it helps illustrate what is going on. -First note that if $Y = X^*$, then -$X\otimes Y$ embeds into $End(X)$ -(the space of linear operators from $X$ to itself) as the space of finite rank linear operators (i.e. those whose image is finite dimensional). Denote -this image by $FREnd(X)\subset End(X)$. Note that any element of $FREnd(X)$ -has a well-defined trace (because even though the domain is infinite dimensional, the range is finite dimensional); in the tensor product -description, this is just the natural map from $X\otimes Y$ to the ground field -given by evaluation of the functionals in $Y$ on the vectors in $X$. -(This is precisely the functional on $X\otimes Y$ that we considered above, -reexpressed in the language of operators.) -Replacing $X$ by $X^*$ in the preceding paragraph, we see that -$X^*\otimes Y^* = FREnd(X^*)$. Note that there is a natural map -$FREnd(X) \to FREnd(X^*)$ given by mapping an endomorphism $\phi$ to its -transpose $\phi^t$. (In terms of tensor products, this is the natural -map $$X \otimes Y = X\otimes X^* \to X^{**}\otimes X^* \cong X^*\otimes X^{**} =X^*\otimes Y^*,$$ the isomorphism being the canonical one which switches the -two factors.) -The map $X^*\otimes Y^*\to (X\otimes Y)^*$ -can then be reintrepreted as the pairing between -$FREnd(X^*)$ and $FREnd(X)^*$ -defined as follows: -for $\phi\in FREnd(X^*)$ and $\psi \in FREnd(X),$ -$$\langle \phi,\psi\rangle := trace(\phi\circ \psi^t).$$ -And now we see why this map is not surjective: for example, if -$\phi$ is any endomorphism of $X^*$, i.e. any element of $End(X^*)$, -then the composite $\phi\circ \psi^t$ has finite rank (since $\psi^t$ does), -and so $trace(\phi\circ\psi^t)$ is defined. -Thus we in fact have an embedding of all of $End(X^*)$ into $FREnd(X)^*$. With a little more work you can check that this latter embedding is an isomorphism. The conclusion in this case is that the embedding $X^*\otimes Y^* \to (X\otimes Y)^*$ can be reinterpreted as the embedding -$$FREnd(X^*) \to End(X^*),$$ -which is not surjective when $X^*$ (or equivalently, $X$) is infinite dimensional, since it does not contain the identity map (for example). -Note that under the identification of $End(X^*)$ with $FREnd(X)^*$, -the identity map is identified precisely with the trace on $FREnd(X)$, -and so we get a reinterpretation of our original example, and see more clearly -what is going on: the point is that the identity endomorphism of an infinite dimensional vector space does not have finite rank.<|endoftext|> -TITLE: Finite subsets of a set A in the definable power set of A -QUESTION [6 upvotes]: I'm working through Kunen's famous book on set theory and I'm puzzled by exercise 19 of chapter VI. -Background for the exercise: -In chapter V (Definition 1.1) the author defines certain function of two variables $Df(A,n)$. This is the set of $n$-place relations on $A$ which are definable by a formula with $n$ free variables relativized to $A$. 
Defining $Df(A,n)$, first he defines recursively sets $Df'(k,A,n),k \in \omega$ and then sets $Df(A,n) = \bigcup_{k \in \omega} Df'(k,A,n)$. -Then he proves Lemma V 1.3 which says that if $\phi(x_0,...,x_{n-1})$ is any formula and $A$ is any set then the set $\{ s \in A^n : \phi^A(s(0),...,s(n-1)) \}$ is in $Df(A,n)$ ($\phi^A$ means $\phi$ relativized to $A$). -In the next chapter VI (Definition 1.1) the author defines the definable power set of a set $A$, or $\mathcal{D}(A)$. This is the set of subsets of $A$ which are definable from a finite number of elements of $A$ by a formula relativized to $A$. Then in Lemma VI 1.3(c) he proves that the finite subsets of $A$ are in $\mathcal{D}(A)$. -Exercise VI 19: -The exercise asks to define alternatives to $Df$ and $\mathcal{D}$, namely $Df^*$ and $\mathcal{D}^*$ such that $Df^*$ still -satisfies Lemma V 1.3 but Lemma VI 1.3(c) is not provable in ZFC. He also gives a hint: -First define $\alpha = \omega$ if CON(ZF) and $\alpha$ = the least Gödel number of a contradiction if not-CON(ZF) (does he mean the least Gödel number of a proof of a contradiction?). Then define $Df^*(A,n) = \bigcup_{k < \alpha} Df'(k,A,n)$. -I think I might understand the idea. We suppose that Lemma VI 1.3(c) is provable. Then somehow we show that it is not possible that $\alpha \in \omega$, so that it must be that $\alpha = \omega$. But then we have proved CON(ZF), which is not possible by Gödel's 2nd. -My questions: -First I can't see how Lemma V 1.3 goes through with $Df^*$. For example the proof uses the fact that $Df(A,n)$ is closed under finite intersections. This is easy to see from the definition of $Df$, but what happens if not-CON(ZF) holds and $\alpha$ is a natural number, so that only finitely many $Df'$ are taken into $Df^*$? Doesn't this destroy the finite intersection property? - -In more detail, this is what I tried to do. The case 'conjunction' of the induction on the structure of the formula. We assume $\phi = \phi_1 \wedge \phi_2$ and -$ZFC \vdash \forall A [ \{ s \in A^n : \phi_1^A(s(0),\ldots,s(n-1)) \} \in Df^*(A,n) ]$ -and -$ZFC \vdash \forall A [ \{ s \in A^n : \phi_2^A(s(0),\ldots,s(n-1)) \} \in Df^*(A,n) ]$. -Then we have to show that -$ZFC \vdash \forall A [ \{ s \in A^n : \phi_1^A(s(0),\ldots,s(n-1)) \wedge \phi_2^A(s(0),\ldots,s(n-1))\} \in Df^*(A,n) ]$ -or what is the same -$ZFC \vdash \forall A [ \{ s \in A^n : \phi_1^A(s(0),\ldots,s(n-1)) \} \cap \{ s \in A^n : \phi_2^A(s(0),\ldots,s(n-1))\} \in Df^*(A,n) ]$. -To prove this: working inside the formal theory, let $A$ be a set, $A_1 = \{ s \in A^n : \phi_1^A(s(0),\ldots,s(n-1)) \}$ and $A_2 = \{ s \in A^n : \phi_2^A(s(0),\ldots,s(n-1)) \}$. Since $A_1, A_2 \in Df^*(A,n)$ there are $k_1,k_2 < \alpha$ such that $A_1 \in Df'(k_1,A,n)$ and $A_2 \in Df'(k_2,A,n)$. If we let $k = \max \{k_1,k_2\}$ then $A_1 \cap A_2 \in Df'(k+1,A,n)$ by definition of the sets $Df'$. Now in the case of $Df$ there are no problems deducing that $A_1 \cap A_2 \in Df(A,n)$, but in the case of $Df^*$ what can we do if it happens that $k+1 = \alpha$? I can't see how this kind of situation can be prevented. - -Also, even if we somehow can prove Lemma V 1.3 with $Df^*$, why does it follow from provability of Lemma VI 1.3(c) (= finite subsets of $A$ are in $\mathcal{D}^*(A)$) that $\alpha \notin \omega$? - -REPLY [3 votes]: The key point in reproving V.1.3 for Df* is that V.1.3 is not actually a single theorem of ZF, it's a scheme of theorems, one for each formula. 
The result itself can't even be stated directly in ZF, as Kunen tersely notes in the middle of p. 154. -The proof of V.1.3 is by metainduction on the structure of the formula. As such, the proof only really makes use of the Df' hierarchy for standard natural numbers, not for arbitrary "natural numbers", because the length of any formula is a standard natural number. Now, assuming Con(ZF), if the number $\alpha$ from Exercise VI.19 is defined, it is not a standard natural number - this is exactly what Con(ZF) says when we assert it in the metatheory. So assuming Con(ZF) we can still prove V.1.3 in the metatheory, because even if $\alpha$ is defined it is larger than any standard natural, so we still have the whole Df' hierarchy for standard naturals. -One very subtle point in Exercise VI.19 is the distinction between Con(ZF), which is assumed in the metatheory, and CON($\ulcorner$ZF$\urcorner$), which is the formalization of Con(ZF) that is used in the definition of Df*. -For proving the final part of Exercise VI.19, if my memory serves you can do it by looking at $A = \omega$ and analyzing the structure of the Df' hierarchy in this case. But I haven't looked at the definitions of Df' closely enough to verify this, so please take this last paragraph as just a suggestion.<|endoftext|> -TITLE: Is there a name for sec(x)'s relationship with tan(x)? -QUESTION [7 upvotes]: In a couple of trig identities, especially to do with integrals and derivatives, you see a relationship between tan(x) and sec(x). Similarly between csc(x) and cot(x). -$ \frac{d}{dx}\tan(x) = \sec^2(x) $ -$ \frac{d}{dx}\sec(x) = \sec(x) \tan(x) $ -$ \tan^2(x) + 1 = \sec^2(x) $ -Is there a name for this apparent relationship between $\tan(x)$ and $\sec(x)$? Something like "complementary", or "counterparts" of one another...? - -REPLY [11 votes]: I wouldn't say it's "nothing special". In fact it's quite special. If $f(x)$ and $g(x)$ are functions that satisfy $f'(x) = g(x)^2$ and $f(x)^2 + 1 = g(x)^2$, then $f'(x) = f(x)^2 + 1$. But that is a differential equation whose general solution is $f(x) = \tan(x + C)$, where $C$ is an arbitrary constant. And then we get $g(x)^2 = \tan^2(x+C) + 1 = \sec^2(x+C)$, so $g(x) = \pm \sec(x+C)$.<|endoftext|> -TITLE: Function fails to be of bounded variation -QUESTION [6 upvotes]: Let $f$ fail to be of bounded variation on [0,1]. Show that there is a point $x_0$ in [0,1] such that $f$ fails to be of bounded variation on each nondegenerate closed subinterval of [0,1] that contains $x_0$. I'm trying to prove this directly, but I think maybe a proof by contradiction could work here. Couldn't we just assume that $f$ is of bounded variation on every closed, bounded subset of [0,1] to begin the proof by contradiction? Perhaps then we could split up the interval [0,1] so that we end up with a situation where $f$ is of bounded variation on a subinterval that contains $x_0$? I'd appreciate some help here. - -REPLY [4 votes]: As Didier mentioned, the right statement probably is: - -Show that there is a point x0 in [0,1] - such that f fails to be of bounded - variation on each nondegenerate closed - subinterval of [0,1] which is a - neighborhood of x0 in [0,1]. - -Here is an alternative proof: -We use the standard notation $V_a^b(f)$ to denote the variation of $f$ on $[a,b]$. -Let $A := \{ x \in [0,1] \mid V_0^x(f) < \infty \}$. 
Then $0 \in A$, $1 \notin A$, and it is very easy to see that $x_0=\sup A$ works.<|endoftext|> -TITLE: Homology of punctured projective space -QUESTION [13 upvotes]: I am trying to calculate $H_k(X)$ where $X = \mathbb{R}P^n - \{ x_0 \}$. -I started thinking about $n=2$. We can get the projective plane by taking the upper hemisphere with points along the equator identified according to the antipodal map. If we remove a point from the hemisphere, we can then enlarge this such that we are just left with a circle, and thus for $n=2$ we just have the homology of the circle. -My geometric intuition starts to fail for $n=3$ and higher spaces. So my questions are: -1) Does the same construction work for higher $n$? (probably not, this seems too easy) -2) If not, what is the nice way to calculate the homology groups (say we know $H_k(\mathbb{R}P^n)$)? I guess there is a way to use Mayer-Vietoris, but I just can't see it - -REPLY [3 votes]: We have a deformation retraction of $X$ to $\mathbb{RP}^{n-1}=\{x_0=0\}\subset X$ given by the homotopy: $$[0,1]\times X\to X:(t,[x_0:x_1:\dots:x_n])\mapsto [(1-t)x_0:x_1:\dots:x_n]$$ -Since $X$ and $\mathbb{RP}^{n-1}$ are thus homotopy equivalent they have the same homology, so that $$ H_k(X)=H_k(\mathbb{RP}^{n-1}) \quad\text{for all } k\geq 0$$<|endoftext|> -TITLE: CW complex of a product -QUESTION [8 upvotes]: Another one that has been bugging me: -Say $X$ is a finite CW complex. What is the simplest CW structure on $S^n \times X$? -So I assume that $E$ is the family of cells in $X$ and $\Phi = \{ \Phi_e:e \in E \}$ is the family of attaching maps (technically I guess $\Phi_e | S^{k-1}$ is the attaching map of a $k$-cell). -I think that if I take the usual CW structure on the $n$-sphere (one 0-cell $e^0_s$ and one $n$-cell $e^n_s$) then the family of cells $E'$ of $S^n \times X$ is just $E'=\{ e^0_s \times e, e^n_s \times e: e \in E \}$. -But I am unsure how to attach them. I guess we only really need to worry in the instances where we are attaching a 0-cell and an $n$-cell; otherwise we can just use the usual maps. -Writing down the cellular chain complex is not too bad - it will just have an extra copy of $\mathbb{Z}$ in the $0$-th and $n$-th position. -Can we then calculate $H_k(S^n \times X)$ in terms of $H_k(X)$? (Again, the boundary formulas will only change in the $0$-th and $n$-th case.) -Edit: And can we do it without the Künneth formula? 
- -Are there functions $f$ and $F$ such - that a function is primitive - recursive iff its growth rate is less - than $f$'s and $\mu$-recursive only iff its - growth rate is greater than $F$'s? - -Is there a function $G$ such that a function with growth rate greater than $G$'s isn't computable at all, i.e. not even $\mu$-recursive? - -REPLY [7 votes]: The answer to your first question is "no". There are non-recursive functions that only take values 0 and 1. For example, take the characteristic function of a non-recursive set. -Even if you restrict your question to functions that are strictly increasing, the answer is still no, for very similar reasons: You can have a function $f$ that is not recursive but $f(n+1)-f(n)=1$ or $2$ for all $n$. For example, start with a non-recursive set, and let $f(n+1)-f(n)-1$ be the characteristic function of that set. -What is true is that there are functions $f,F$ such that any primitive recursive function grows slower than $f$ and any recursive function grows slower than $F$. Any version of Ackermann's function is an example of the first phenomenon. An example of the second can be obtained easily as follows: First, note that there are only countably many recursive functions. List them as $f_1,f_2,\dots$, and let $F(n)=\sum_{i\le n}f_i(n)+1$. -There is actually quite a bit of literature around these issues. The partial order of functions $f:{\mathbb N}\to{\mathbb N}$ ordered by eventual domination (i.e., $f -TITLE: Find equation of quadratic when given tangents? -QUESTION [5 upvotes]: I know the equations of 4 lines which are tangents to a quadratic: -$y=2x-10$ -$y=x-4$ -$y=-x-4$ -$y=-2x-10$ -If I know that all of these equations are tangents, how do I find the equation of the quadratic? -Normally I would be told where the tangents touch the curve, but that info isn't given. -Thanks! - -REPLY [7 votes]: Since the two pairs of tangents are symmetric with respect to the $y$-axe, -the quadratic function $f(x)=ax^{2}+bx+c$ must be even ($f(x)=f(-x)$), -which implies that $b=0$. The equations of the tangents to the graph of $f(x)=ax^{2}+c$ at points $% -\left( x_{1},f(x_{1})\right) $ and $\left( x_{2},f(x_{2})\right) $ are -$$\begin{eqnarray*} -y &=&f^{\prime }(x_{i})x-f^{\prime }(x_{i})x_{i}+f(x_{i})\qquad i=1,2 \\ -&=&2ax_{i}x+c-ax_{i}^{2}. -\end{eqnarray*}$$ -These equations must be equivalent to two of the given tangents, one from each pair, e.g. $y=2x-10$ and $y=x-4$: -$$\left\{ -\begin{array}{c} -2ax_{1}x+c-ax_{1}^{2}=2x-10 \\ -2ax_{2}x+c-ax_{2}^{2}=x-4% -\end{array}% -\right. $$ -Finally we compare coefficients and solve the resulting system of $4$ -equations: -$$\left\{ -\begin{array}{c} -2ax_{1}=2 \\ -c-ax_{1}^{2}=-10 \\ -2ax_{2}=1 \\ -c-ax_{2}^{2}=-4% -\end{array}% -\right. \Leftrightarrow \left\{ -\begin{array}{c} -x_{1}=8 \\ -x_{2}=4 \\ -a=\frac{1}{8} \\ -c=-2% -\end{array}% -\right. $$ -Thus the quadratic is $f(x)=\frac{1}{8}x^{2}-2$.<|endoftext|> -TITLE: Distribution of compound Poisson process -QUESTION [9 upvotes]: Suppose a compound Poisson process is defined as $X_{t} = \sum_{n=1}^{N_t} Y_n$, where $\{Y_n\}$ are i.i.d. with some distribution $F_Y$, and $(N_t)$ is a Poisson process with parameter $\alpha$ and also independent from $\{Y_n\}$. - -Is it true that as -$t\rightarrow \infty, \, \frac{X_{t}-E(X_{t})}{\sigma(X_t) - \sqrt(N_t)} \rightarrow \mathbf{N}(0, 1)$ in distribution, where the limit is a standard Gaussian distribution? 
I am considering -using Central Limit Theorem to show it, but the -theorem I have learned only applies when $N_t$ is -fixed and deterministic instead of -being a Poisson process. -A side question: is it possible to derive the -distribution of $X_{t}$, for each $t\geq 0$? Some book that has the derivation? - -Thanks! - -REPLY [12 votes]: Let $Y(j)$ be i.i.d. with finite mean and variance, and set -$\mu=\mathbb{E}(Y)$ and $\tau=\sqrt{\mathbb{E}(Y^2)}$. -If $(N(t))$ is an independent Poisson process with rate $\lambda$, -then the compound Poisson process is defined as -$$X(t)=\sum_{j=0}^{N(t)} Y(j).$$ -The characteristic function of $X(t)$ is calculated as follows: -for real $s$ we have -\begin{eqnarray*} -\psi(s)&=&\mathbb{E}\left(e^{is X(t)}\right)\cr - &=&\sum_{j=0}^\infty \mathbb{E}\left(e^{is X(t)} \ | \ N(t)=j\right) \mathbb{P}(N(t)=j)\cr - &=&\sum_{j=0}^\infty \mathbb{E}\left(e^{is (Y(1)+\cdots +Y(j))} \ | \ N(t)=j\right) \mathbb{P}(N(t)=j)\cr - &=&\sum_{j=0}^\infty \mathbb{E}\left(e^{is (Y(1)+\cdots +Y(j))}\right) \mathbb{P}(N(t)=j)\cr - &=&\sum_{j=0}^\infty \phi_Y(s)^j {(\lambda t)^j\over j!} e^{-\lambda t}\cr - &=& \exp(\lambda t [\phi_Y(s)-1]) - \end{eqnarray*} -where $\phi_Y$ is the characteristic function of $Y$. -From this we easily calculate $\mu(t):=\mathbb{E}(X(t))=\lambda t \mu$ -and $\sigma(t):=\sigma(X(t))= \sqrt{\lambda t} \tau$. -Take the expansion $\phi_Y(s)=1+is\mu -s^2\tau^2 /2+o(s^2)$ and substitute it into -the characteristic function of the normalized random variable ${(X(t)-\mu(t)) /\sigma(t)}$ to obtain -\begin{eqnarray*} -\psi^*(s) &=& \exp(-is(\mu(t)/\sigma(t))) \exp(\lambda t [\phi_Y(s/\sigma(t))-1]) \ - &=& \exp(-s^2/2 +o(1)) -\end{eqnarray*} -where $o(1)$ goes to zero as $t\to\infty$. This gives the central limit theorem -$${X(t)-\mu(t)\over\sigma(t)}\Rightarrow N(0,1).$$ -We may replace $\sigma(t)$, for example, with $\tau \sqrt{N(t)}$ to get -$${X(t)-\mu(t)\over\tau \sqrt{N(t)}}= {X(t)-\mu(t)\over\sigma(t)} \sqrt{\lambda t \over N(t)} \Rightarrow N(0,1),$$ -by Slutsky's theorem, since $\sqrt{\lambda t \over N(t)}\to 1$ in probability by the law of large numbers. - -Added: Let $\sigma=\sqrt{\mathbb{E}(Y^2)-\mathbb{E}(Y)^2}$ be the standard deviation of $Y$, -and define the sequence of standardized random variables -$$T(n)={\sum_{j=1}^n Y(j) -n\mu\over\sigma\sqrt{n}},$$ -so that -$${X(t)-\mu N(t)\over \sigma \sqrt{N(t)}}=T(N(t)).$$ -Let $f$ be a bounded, continuous function on $\mathbb{R}$. By the usual -central limit theorem we have $\mathbb{E}(f(T(n)))\to \mathbb{E}(f(Z))$ where - $Z$ is a standard normal random variable. -We have for any $N>1$, -$$\begin{eqnarray*} -|\mathbb{E}(f(T(N(t)))) - \mathbb{E}(f(Z))| - &=& \sum_{n=0}^\infty |\mathbb{E}(f(T(n)) - \mathbb{E}(f(Z))|\ \mathbb{P}(N(t)=n) \cr - &\leq& 2\|f\|_\infty \mathbb{P}(N(t)\leq N) +\sup_{n>N} |\mathbb{E}(f(T(n)))- \mathbb{E}(f(Z)) |. -\end{eqnarray*} -$$ -First choosing $N$ large to make the right hand side small, then letting $t\to\infty$ so -that $\mathbb{P}(N(t)\leq N)\to 0$, shows that -$$ \mathbb{E}(f(T(N(t)))) \to \mathbb{E}(f(Z)). $$ -This shows that $T(N(t))$ converges in distribution to a standard normal as $t\to\infty$.<|endoftext|> -TITLE: $\ln(x^2)$ vs $2\ln x$ -QUESTION [22 upvotes]: These two are supposed to be equivalent because of the properties of logarithms, but the domains of $\ln(x^2)$ and $2\ln x$ seem different to me. For example, if I substitute $x=-1$ into the first, I get 0. But in the second, I get a non-real answer. -Why is this? 
The domains of these functions, when graphed, seem different as well. But everything I've been taught so far, and most of the things I can find on the web, do not explain this inconsistency. Typically, I consider the property $\ln(a^b) = b \ln a$ to be true... -I am only a senior in high school, currently in Calculus I, but hopefully the explanation won't be too outside of the realm of my understanding. But even if it is, I would still like to know the answer. -Thanks in advance - -REPLY [12 votes]: When the identity is stated as $\ln(a^b)=b\ln(a)$, it is assumed that $a$ is positive. Thus, regardless of whether it is possible to extend $\ln(a^b)$ to a larger range of $a$ values for special choices of $b$, this is a valid identity with this domain restriction. -If the case where $b$ is an even integer were singled out, then it would be good to point out that the identity can be generalized to $\ln(a^b)=b\ln(|a|)$ for all nonzero $a$. But the more general (domain restricted) identity allows $b$ to be any real number, including not only odd integers, but fractions, and even irrational numbers. In most cases, $\ln(a^b)$ wouldn't be defined in an ordinary (real) sense when $a$ is negative. -To properly define a function, its domain should be specified. Things aren't always done properly. For example, a typical question used in precalculus classes gives a formula defining a function, like $\displaystyle{f(x)=\frac{\sqrt{x+4}}{x-2}}$ and asks the student to "find the domain". The idea is that you're supposed to figure out all possible numbers you can plug into the formula such that the result is defined as a real number. This is a good exercise because it can help to build familiarity with the functions and test algebra skills, but it can be misleading. The domain can depend on context, and whoever is defining the function can decide. E.g., if $f$ is the function defined on $(0,\infty)$ by $f(x)=\ln(x^2)$, then I have stipulated the domain to be $(0,\infty)$. The identity $f(x)=2\ln(x)$ is valid on this domain. -Nonetheless, I absolutely agree with you and Ross that if you're just starting with $\ln(x^2)$ with no context to imply that $x$ is positive, then the negative case has to be taken into account, and thus the more general identity $\ln(x^2)=2\ln(|x|)$ would be required. -Similarly, in the identity $\ln(ab)=\ln(a)+\ln(b)$, it is assumed that $a$ and $b$ are positive. The left-hand side would also be defined when $a$ and $b$ are both negative, and in general you would have $\ln(ab)=\ln(|a|)+\ln(|b|)$ whenever $a$ and $b$ have the same sign. The fact that $\ln(ab)$ can be defined while $\ln(a)$ and $\ln(b)$ are not leads to so-called "extraneous solutions" in precalculus problems on solving logarithmic equations, if one is not careful about domain restrictions. -Example Problem: Solve the equation $\ln(x)+\ln(x+1)=-10$. -Solution: Using the identity $\ln(ab)=\ln(a)+\ln(b)$, the equation becomes $\ln(x^2+x)=-10$. Exponentiating yields $x^2+x=\frac{1}{e^{10}}$. Solving the quadratic equation, $x=-\frac{1}{2}\pm\frac{1}{2}\sqrt{1+4/e^{10}}$. But wait, one of these doesn't work in the original equation, because $-\frac{1}{2}-\frac{1}{2}\sqrt{1+4/e^{10}}$ is negative. This "solution" is a solution to $\ln(x^2+x)=-10$, but not to the original equation, because $\ln(x^2+x)$ has larger domain than $\ln(x)+\ln(x+1)$. 
-(Also, you can see that there should only be one solution to the original equation, because $\ln(x)+\ln(x+1)$ is always increasing where it is defined.)<|endoftext|> -TITLE: What do the $p$-adic roots of unity look like? -QUESTION [14 upvotes]: I know that $\mathbb{Z}_p$ has all the $p-1^{st}$ roots of unity (and only those). Is it true that mod $p$ they are all different? Meaning, is the natural map $\mathbb{Z}_p \rightarrow \mathbb{F}_p$, restricted to just the roots of unity, bijective? - -REPLY [8 votes]: This follows from the fact that $x^{p-1} - 1$ is relatively prime to its formal derivative over $\mathbb{F}_p$, which is $-x^{p-2}$.<|endoftext|> -TITLE: Same symbol "$\partial$" - different things ( the boundary $\partial A$ / partial derivative $\frac{\partial f}{\partial x}$)? -QUESTION [9 upvotes]: Are there any deep reasons, why we use the same symbol, $\partial$, when describing two (apparently fundamental) different mathematical objects, namely the boundary of a set (in topology), as well as the partial derivative (in analysis) ? -Can one of these maybe be viewed in (some very sophisticated manner) as the other ? -(According to wikipedia, this symbol occurs in even more places...) - -REPLY [5 votes]: Yes, Stokes theorem -$$\int_\Omega d\omega = \int_{\partial \Omega} \omega$$ looks nice in this notation--you just need to shift the d from the integrand to the integration manifold. - -REPLY [3 votes]: I was going to say "no" but a google search brought up this MathOverflow thread: -https://mathoverflow.net/questions/46252/is-the-boundary-partial-s-analogous-to-a-derivative<|endoftext|> -TITLE: Proving $2 ( \cos \frac{4\pi}{19} + \cos \frac{6\pi}{19}+\cos \frac{10\pi}{19} )$ is a root of$ \sqrt{ 4+ \sqrt{ 4 + \sqrt{ 4-x}}}=x$ -QUESTION [24 upvotes]: How can one show that the number -$2 \left( \cos \frac{4\pi}{19} + \cos \frac{6\pi}{19}+\cos \frac{10\pi}{19} \right)$ -is a root of the equation -$\sqrt{ 4+ \sqrt{ 4 + \sqrt{ 4-x}}}=x$? - -REPLY [2 votes]: Sorry if this is a little off-topic here, but the question where it was topical was marked as a duplicate of this one. -Well, $2$ is a primitive root $\pmod{19}$, so we can group the nonzero residues $2^n$ according to the congruence class of $n\pmod{18}$. -$$\begin{array}{ccccccc} -r_0&\color{red}{1}&\color{red}{8}&\color{red}{7}&\color{red}{18}&\color{red}{11}&\color{red}{12}\\ -r_1&\color{green}{2}&\color{green}{16}&\color{green}{14}&\color{green}{17}&\color{green}{3}&\color{green}{5}\\ -r_2&\color{blue}{4}&\color{blue}{13}&\color{blue}{9}&\color{blue}{15}&\color{blue}{6}&\color{blue}{10}\\ -\end{array}$$ -From the problem statement, clearly we are interested in $r_1$. To work out the products of $r_0$, $r_1$,and $r_2$ we can write out tables were we add all the elements $\pmod{19}$. 
The table for $r_0\times r_1$: -$$\begin{array}{ccccccc} - &\color{green}{2}&\color{green}{16}&\color{green}{14}&\color{green}{17}&\color{green}{3}&\color{green}{5}\\ -\color{red}{1}&\color{green}{3}&\color{green}{17}&\color{blue}{15}&\color{red}{18}&\color{blue}{4}&\color{blue}{6}\\ -\color{red}{8}&\color{blue}{10}&\color{green}{5}&\color{green}{3}&\color{blue}{6}&\color{red}{11}&\color{blue}{13}\\ -\color{red}{7}&\color{blue}{9}&\color{blue}{4}&\color{green}{2}&\color{green}{5}&\color{blue}{10}&\color{red}{12}\\ -\color{red}{18}&\color{red}{1}&\color{blue}{15}&\color{blue}{13}&\color{green}{16}&\color{green}{2}&\color{blue}{4}\\ -\color{red}{11}&\color{blue}{13}&\color{red}{8}&\color{blue}{6}&\color{blue}{9}&\color{green}{14}&\color{green}{16}\\ -\color{red}{12}&\color{green}{14}&\color{blue}{9}&\color{red}{7}&\color{blue}{10}&\color{blue}{15}&\color{green}{17}\\ -\end{array}$$ -From the above we can see that $r_0\times r_1=r_0+2r_1+3r_2$. The table for $r_1\times r_1$: -$$\begin{array}{ccccccc} - &\color{green}{2}&\color{green}{16}&\color{green}{14}&\color{green}{17}&\color{green}{3}&\color{green}{5}\\ -\color{green}{2}&\color{blue}{4}&\color{red}{18}&\color{green}{16}&\color{black}{0}&\color{green}{5}&\color{red}{7}\\ -\color{green}{16}&\color{red}{18}&\color{blue}{13}&\color{red}{11}&\color{green}{14}&\color{black}{0}&\color{green}{2}\\ -\color{green}{14}&\color{green}{16}&\color{red}{11}&\color{blue}{9}&\color{red}{12}&\color{green}{17}&\color{black}{0}\\ -\color{green}{17}&\color{black}{0}&\color{green}{14}&\color{red}{12}&\color{blue}{15}&\color{red}{1}&\color{green}{3}\\ -\color{green}{3}&\color{green}{5}&\color{black}{0}&\color{green}{17}&\color{red}{1}&\color{blue}{6}&\color{red}{8}\\ -\color{green}{5}&\color{red}{7}&\color{green}{2}&\color{black}{0}&\color{green}{3}&\color{red}{8}&\color{blue}{10}\\ -\end{array}$$ -This says that $r_1\times r_1=6+2r_0+2r_1+r_2$. And finally the table for $r_2\times r_1$: -$$\begin{array}{ccccccc} - &\color{green}{2}&\color{green}{16}&\color{green}{14}&\color{green}{17}&\color{green}{3}&\color{green}{5}\\ -\color{blue}{4}&\color{blue}{6}&\color{red}{1}&\color{red}{18}&\color{green}{2}&\color{red}{7}&\color{blue}{9}\\ -\color{blue}{13}&\color{blue}{15}&\color{blue}{10}&\color{red}{8}&\color{red}{11}&\color{green}{16}&\color{red}{18}\\ -\color{blue}{9}&\color{red}{11}&\color{blue}{6}&\color{blue}{4}&\color{red}{7}&\color{red}{12}&\color{green}{14}\\ -\color{blue}{15}&\color{green}{17}&\color{red}{12}&\color{blue}{10}&\color{blue}{13}&\color{red}{18}&\color{red}{1}\\ -\color{blue}{6}&\color{red}{8}&\color{green}{3}&\color{red}{1}&\color{blue}{4}&\color{blue}{9}&\color{red}{11}\\ -\color{blue}{10}&\color{red}{12}&\color{red}{7}&\color{green}{5}&\color{red}{8}&\color{blue}{13}&\color{blue}{15}\\ -\end{array}$$ -And so $r_2\times r_1=3r_0+r_1+2r_2$. 
We also know that $0=1+r_0+r_1+r_2$, since the right-hand side is the sum of all the nineteenth roots of unity, and we can work out
-$$\begin{align}r_1^3&=r_1\times(6+2r_0+2r_1+r_2)\\
-&=6r_1+2(r_0+2r_1+3r_2)+2(6+2r_0+2r_1+r_2)+3r_0+r_1+2r_2\\
-&=12+9r_0+15r_1+10r_2\end{align}$$
-so we can set up the system
-$$\begin{array}{ccccc}r_1^3&=12&+9r_0&+15r_1&+10r_2\\
-r_1^2&=6&+2r_0&+2r_1&+r_2\\
-r_1&=&&r_1&\\
-0&=1&+r_0&+r_1&+r_2\end{array}$$
-Eliminating $r_0$ and $r_2$, we find that
-$$r_1^3+r_1^2-6r_1-7=0$$
-Let $r_1=x-\frac13$; then
-$$x^3-\frac{19}3x-\frac{133}{27}=0$$
-Using Vieta's trigonometric solution we find that
-$$r_1=-\frac13+\frac23\sqrt{19}\cos\left(\frac13\cos^{-1}\left(\frac7{2\sqrt{19}}\right)+\frac{2\pi k}3\right)$$
-Comparing numerical values, $k=0$ corresponds to $r_1$, $k=1$ corresponds to $r_2$, and $k=2$ corresponds to $r_0$.<|endoftext|>
-TITLE: Invariant Subspace of Two Linear Involutions
-QUESTION [8 votes]: I'd love some help with this practice qualifier problem:
-If $A$ and $B$ are two linear operators on a finite dimensional complex vector space $V$ such that $A^2=B^2=I$ then show that $V$ has a one or two dimensional subspace invariant under $A$ and $B$.
-Thanks!
-
-REPLY [6 votes]: Consider the linear transformation $AB:V\to V$. Since $V$ is a complex vector space, $AB$ has at least one eigenvector, call it $x$, i.e. $ABx=\lambda x$ (with $\lambda\neq 0$, since $AB$ is invertible). Note that $AABx=Bx=A\lambda x=\lambda Ax$, and then $x=BBx=B(\lambda Ax)=\lambda BAx$, so $BAx=\lambda^{-1}x$. So consider the space $\langle x, Ax\rangle$. This space is clearly invariant under $A$; since $Bx=\lambda Ax$ and $B(Ax)=\lambda^{-1}x$, it is invariant under $B$ as well. This space has dimension one or two.<|endoftext|>
-TITLE: Connection to Normal distribution
-QUESTION [5 votes]: I've been working on finding the probability of the event that the sum of $n$ independent random variables is less than $s$, when they are uniformly distributed on $[0,1)$.
-I've used the law of total probability to derive the formula:
-$P(S_n \le s) = \frac{1}{n!}\sum_{k=0}^{\lfloor s\rfloor}(-1)^k\binom{n}{k}(s-k)^n$<|endoftext|>
-TITLE: How to Compute Genus
-QUESTION [8 votes]: How to compute the genus of $ \{X^4+Y^4+Z^4=0\} \cap \{X^3+Y^3+(Z-tW)^3=0\} \subset \mathbb{P}^3$?
-We know that the genus of $ \{X^4+Y^4+Z^4=0\} \subset \mathbb{P}^3$ is 3 because the degree is 4.
-Now, I want to know the genus of the intersection as a curve. For that I have to use the adjunction formula and the fact that $K_{\mathbb{P}^3}=O(-4)$.
-
-REPLY [6 votes]: If $X$ is the intersection in $\mathbb{P}^3$ of two hypersurfaces of degrees $d_1$, $d_2$, respectively, and $\mathrm{dim} \ X = 1$, then the genus of $X$ is:
-$$
-g = \frac{d_1^2d_2 + d_1 d_2^2}{2} - 2 d_1 d_2 + 1.
-$$
-So, in your case $d_1 = 4$ and $d_2 = 3$, therefore $g = 19$.<|endoftext|>
-TITLE: What's the sum of this power series?
-QUESTION [6 votes]: What's the sum of this power series?
-$$f_k(x)=1-\frac{x^2}{k}+\frac{x^4}{k(k+1)\cdot2!}-\frac{x^6}{k(k+1)(k+2)\cdot3!}+\ldots$$
-I'm just helping someone, I'm not good at math! :\
-
-REPLY [8 votes]: To expand on my comment: Your function is
-$$f_k(x)=(k-1)!\sum_{m=0}^\infty \frac{(-1)^m}{m!(k+m-1)!}x^{2m}\;.$$
-The Bessel function of (integer) order $n$ is
-$$J_n(x)=\sum_{m=0}^\infty\frac{(-1)^m}{m!(m+n)!}\left(\frac{x}{2}\right)^{2m+n}\;.$$
-Thus your function is
-$$f_k(x)=n!J_n(2x)x^{-n}\;,$$
-with $n=k-1$.<|endoftext|>
-TITLE: What is a description for the following number theoretic object?
-QUESTION [7 votes]: The title couldn't quite contain the question, so I didn't attempt to make it precise. 
-I should note that this is the third or fourth question I've asked these past two days about problems I've been having, and I want to thank you all for being really helpful.
-The question is this: Say that you start with a number field $K$ that doesn't contain $\zeta_p$. Let $\Delta$ be the automorphism group of $K(\zeta_p)/K$. I will view $K(\zeta_p)^{\times} \otimes \mathbb{F}_p$ as an $\mathbb{F}_p[\Delta]$-module. Let $\omega$ be the cyclotomic character (meaning that for any $\delta \in \Delta$ and $\zeta_p$, $\delta(\zeta_p)=(\zeta_p)^{\omega(\delta)}$). Is there an easy description of $(K(\zeta_p)^{\times} \otimes \mathbb{F}_p)^{\omega}$?
-Note that I mean the $\omega$-isotypic component, as is explained in one of my previous questions: Basic Representation Theory
-Obviously $\zeta_p^n$ is in $(K(\zeta_p)^{\times} \otimes \mathbb{F}_p)^{\omega}$ for all integers $n$. What else is in there? Is there a nice description?
-
-REPLY [4 votes]: Write $L=K(\zeta_p)$, and let $G_K$ (resp. $G_L$) be the absolute Galois group of $K$ (resp. $L$), so that $G_K/G_L = \Delta$. Inflation-restriction gives an isomorphism $H^1(G_K,\mathbb Z/p) = H^1(G_L,\mathbb Z/p)^{\Delta},$ where the superscript $\Delta$ denotes the submodule of $\Delta$-invariants. (More precisely, there is a map from the left-hand side to the right-hand side, which
-in this case is an isomorphism because the flanking terms in the inflation-restriction exact sequence involve cohomology of $\Delta$ with coefficients in
-$\mathbb Z/p$, and this cohomology vanishes, since $|\Delta| \mid p-1$, which
-is coprime to $p$.)
-Now if we choose an isomorphism $\mathbb Z/p \cong \mu_p$ (i.e. fix a choice $\zeta_p$ of primitive $p$th root of $1$), then we get an isomorphism
-$H^1(G_L,\mathbb Z/p)^{\Delta} = H^1(G_L,\mu_p)^{\omega}.$ (The superscript $\omega$ here I am using in your sense, namely it is the submodule on which $\Delta$ acts via $\omega$.)
-[Added: Note that $\Delta$ acts on $\mu_p$ through the character $\omega$.
-So a cohomology class $c$ in $H^1(G_L,\mathbb Z/p)^{\Delta}$ is fixed
-by $\Delta$ if and only if, when it is regarded as a class in
-$H^1(G_L,\mu_p)^{\omega}$, the action of $\Delta$ is given by the character
-$\omega$.]
-Finally,
-Hilbert's Theorem 90 gives an isomorphism $H^1(G_L,\mu_p) = L^{\times}\otimes
-\mathbb F_p.$
-So in conclusion, there is an isomorphism
-$$(L^{\times} \otimes \mathbb F_p)^{\omega} \cong H^1(G_K,\mathbb Z/p) = Hom(G_K^{ab},\mathbb Z/p).$$
-
-The preceding isomorphism has a concrete interpretation: namely, a (non-trivial) map from
-$G_K^{ab}$ to $\mathbb Z/p$ corresponds to a degree $p$ abelian extension
-of $K$, say $F$. If we form the compositum of $F$ and $L$, we obtain a
-degree $p$ extension of $L$. (Because the degree of $L$ over $K$ divides
-$p-1$, this compositum is still genuinely of degree $p$ over $L$.) Kummer
-theory allows us to describe this extension by extracting the $p$th root
-of some element $l \in L^{\times}$. The image of $l$ in $L^{\times}\otimes
-\mathbb F_p$ is then an element of $(L^{\times}\otimes \mathbb F_p)^{\omega}.$
-Conversely, if we have an element of $(L^{\times}\otimes \mathbb F_p)^{\omega},$ say the image of some element $l \in L^{\times}$,
-then adjoining the $p$th root of $l$ to $L$ gives a cyclic extension of $L$
-of degree $p$. 
The assumption that $\Delta$ acts on the image of $l$ in
-$L^{\times}\otimes\mathbb F_p$ via $\omega$ shows (with a little calculation)
-that $L(l^{1/p})$ is in fact abelian (of degree $(p-1)p$) over $K$,
-and so it contains a degree $p$ subextension $F$ of $K$, so that $L(l^{1/p}) = F L.$
-The two constructions just described give an explicit description of each direction of the isomorphism obtained by cohomological methods above.
-
-For example, the element $\zeta_p$ that you mention in your question corresponds to the degree $p$ subextension
-of $K$ contained in $K(\zeta_{p^2})$. But any number field admits (infinitely) many other degree $p$ abelian extensions, and these give rise to (infinitely) many other elements
-of $(L^{\times}\otimes \mathbb F_p)^{\omega}$.
-Let me finally explain why it is reasonable that there should be many such elements: the space
-$L^{\times}\otimes \mathbb{F}_p$ is an infinite dimensional vector space over $\mathbb F_p$, which (since $\Delta$ has order prime-to-$p$) breaks up into a sum of
-eigenspaces under the $\Delta$-action. But $\Delta$ has only finitely many characters, so just by counting, we see that at least one eigenspace is going to have to be infinite dimensional. In fact, all the eigenspaces will be infinite dimensional, and the preceding discussion proves this for the $\omega$-eigenspace.<|endoftext|>
-TITLE: Center of a polygon inside the polygon
-QUESTION [6 votes]: What is the name of the point(s) in a polygon, calculated by "shrinking" the polygon until there's no surface left?
-Example (the light areas):
-
-Also, if possible, it would be cool to have an algorithm to calculate this in a reasonable time, given the coordinates of the vertices.
-
-REPLY [3 votes]: It's the medial axis. See also the straight skeleton.<|endoftext|>
-TITLE: Please help me to show that $(\ln x)'=\frac1 x$
-QUESTION [7 votes]: In school, we recently started with derivatives. I looked into a list of simple derivatives and tried to prove them, in order to practice. Now, I tried to find the derivative of $\ln x$, but I got stuck. Some web pages suggest using the identity $e = \lim_{h\to\infty}\left(1+h^{-1}\right)^h$, but I still don't get a solution. I started with the basic approach:
-$$(\ln x)'=\lim_{\Delta\to0}\frac{\ln(x+\Delta)-\ln x}\Delta$$
-But I didn't find a way to get something useful out of this. Please help me.
-
-REPLY [17 votes]: The simplest way to find the derivative of the natural logarithm is to use the Inverse Function Theorem (or the Chain Rule), but since you say you only recently started, you may not know it yet.
-So instead, we begin with two ingredients. One is that
-$\ln(u)$ is continuous. That means that if $\lim\limits_{x\to a}f(x)$ exists (and is positive), then
-$$\lim_{x\to a}\ln(f(x)) = \ln\left(\lim_{x\to a}f(x)\right).$$
-The second ingredient (which you may or may not know yet) is that
-$$\lim_{h\to\infty}\left(1 + \frac{a}{h}\right)^h = e^a.$$
-To see this, note that this is immediate if $a=0$; if $a\gt 0$, then just do a quick rewrite:
-$$\begin{align*}
-\lim_{h\to\infty}\left(1 + \frac{a}{h}\right)^h &= \lim_{h\to\infty}\left( 1 + \frac{1}{(h/a)}\right)^h\\
-&=\lim_{h\to\infty}\left(\left(1 + \frac{1}{(h/a)}\right)^{h/a}\right)^a\\
-&= \left(\lim_{h\to\infty}\left(1 + \frac{1}{(h/a)}\right)^{h/a}\right)^a. 
-\end{align*}$$
-If $a\gt 0$, then $h/a\to\infty$ as $h\to\infty$, so by the definition of $e$ you get that
-$$\lim_{h\to\infty}\left(1+\frac{a}{h}\right)^h = \left(\lim_{(h/a)\to\infty}\left(1 + \frac{1}{(h/a)}\right)^{h/a}\right)^a = (e)^a = e^a.$$
-If $a\lt 0$, then replacing $a$ with $-a$ we can do the same trick as above after proving that
-$$\lim_{h\to\infty}\left(1 - \frac{1}{h}\right)^h = e^{-1}.$$
-Indeed, though it takes a bit more algebraic trickery:
-$$\begin{align*}
-\lim_{h\to\infty}\left(1 - \frac{1}{h}\right)^h &= \lim_{h\to\infty}\left(\frac{h-1}{h}\right)^h = \lim_{h\to\infty}\left(\frac{h}{h-1}\right)^{-h}\\
-&= \left(\lim_{h\to\infty}\left(\frac{(h-1)+1}{h-1}\right)^h\right)^{-1}\\
-&= \left(\lim_{h\to\infty}\left(1 + \frac{1}{h-1}\right)^h\right)^{-1}\\
-&=\left(\lim_{h\to\infty}\left(1 + \frac{1}{h-1}\right)^{h-1}\left(1 + \frac{1}{h-1}\right)^1\right)^{-1}\\
-&= \left(\lim_{h\to\infty}\left(1 + \frac{1}{h-1}\right)^{h-1}\lim_{h\to\infty}\left(1 + \frac{1}{h-1}\right)\right)^{-1}\\
-&= \Bigl((e)(1)\Bigr)^{-1} = e^{-1}.\end{align*}$$
-Then, in the previous limit, if $a\lt 0$ then replace it with $-a$ and change the $+$ to a $-$, to get that the limit equals $(e^{-a})^{-1} = e^a$ as well.
-And finally, with these ingredients in hand, we are ready. We have:
-$$\begin{align*}
-\frac{d}{dx}\ln x &= \lim_{\Delta\to 0}\frac{\ln(x+\Delta)-\ln(x)}{\Delta}\\
-&= \lim_{\Delta\to 0}\frac{1}{\Delta}\left(\ln(x+\Delta)-\ln(x)\right)\\
-&=\lim_{\Delta\to 0}\frac{1}{\Delta}\ln\left(\frac{x+\Delta}{x}\right)\\
-&=\lim_{\Delta\to 0}\frac{1}{\Delta}\ln\left(1 +\frac{\Delta}{x}\right)\\
-&=\lim_{\Delta\to 0}\ln\left(\left(1 + \frac{\Delta}{x}\right)^{1/\Delta}\right)\\
-&= \lim_{\Delta\to 0}\ln\left(\left(1 + \frac{1/x}{1/\Delta}\right)^{1/\Delta}\right).
-\end{align*}$$
-If $\Delta\to 0^+$, then $\frac{1}{\Delta}\to\infty$, so letting $h=\frac{1}{\Delta}$ we have:
-$$\lim_{\Delta\to 0^+}\left(1 + \frac{1/x}{1/\Delta}\right)^{1/\Delta} =
-\lim_{h\to\infty}\left( 1 + \frac{1/x}{h}\right)^h = e^{1/x}.$$
-If $\Delta\to 0^-$, then $\frac{1}{\Delta}\to-\infty$, so letting $h=-\frac{1}{\Delta}$, we have:
-$$\begin{align*}
-\lim_{\Delta\to 0^-}\left(1 + \frac{1/x}{1/\Delta}\right)^{1/\Delta} &= \lim_{h\to\infty}\left(1 - \frac{1/x}{h}\right)^{-h}\\
-&= \lim_{h\to\infty}\left(\left(1 - \frac{1/x}{h}\right)^{h}\right)^{-1}\\
-&= \left(e^{-1/x}\right)^{-1} = e^{1/x}.
-\end{align*}$$
-Therefore, we have:
-$$\begin{align*}
-(\ln x)' &= \lim_{\Delta\to 0}\frac{\ln(x+\Delta)-\ln(x)}{\Delta}\\
-&= \ln\left(\lim_{\Delta\to 0}\left(1 + \frac{1/x}{1/\Delta}\right)^{1/\Delta}\right)\\
-&= \ln\left(e^{1/x}\right) = \frac{1}{x}.
-\end{align*}$$
-And this is why the Chain Rule or the Inverse Function Theorem are a much better way of proving this...<|endoftext|>
-TITLE: Proving all Solutions of a Polynomial Cannot all be Real
-QUESTION [7 votes]: If $a, b, c, d$, and $e$ are all real numbers, how could I prove that the 5 solutions of the equation
-$$f(x) = x^5 + ax^4 + bx^3 + cx^2 + dx + e = 0$$
-cannot all be real valued if
-$$2a^2 < 5b$$
-Any assistance is appreciated.
-
-REPLY [11 votes]: If there are 5 zeros, then by Rolle's theorem the first derivative has 4 zeros, the second has 3 zeros, and the third has 2 zeros, all counted with multiplicity. The condition on $a$ and $b$ is exactly the condition that the third derivative has no real zeros: $f'''(x) = 60x^2 + 24ax + 6b$, and its discriminant $(24a)^2 - 4\cdot 60\cdot 6b = 576a^2 - 1440b$ is negative precisely when $2a^2 < 5b$.<|endoftext|>
-TITLE: Suppose that $x > 1$. Prove that $s_n \to 1$
-QUESTION [5 votes]: Question: Suppose $x > 1$. Prove that $x^\frac{1}{n} \to 1$. 
-The following is a list of what I am trying to do throughout my proof.
-
-Show $s_n$ is monotone (decreasing).
-Show $s_n$ is bounded.
-Using the monotone convergence thm and thm 19, show that $s_n$ converges to 1.
-(Thm 19: If a sequence ($s_n$) converges to a real number s, then every subsequence of ($s_n$) also converges to s.)
-
-Proof
-(1) We prove $s_n$ is decreasing by induction.
-Since $x > 1$, we have $s_1 = x > \sqrt{x} = s_2$.
-Now assume $s_k \ge s_{k+1}$; then
-$$s_{k+2} = x^\frac{1}{k+2} < x^\frac{1}{k+1} = s_{k+1}$$
-Therefore, $s_n$ is monotone (it is decreasing).
-(2a) We prove $s_n$ is bounded above by showing $x^2$ is an upper bound.
-Since $s_1 = x < x^2$ and $s_n$ is decreasing, we can conclude $s_n$ is bounded above.
-(2b) We prove $s_n$ is bounded below by showing $1$ is a lower bound.
-Since $s_1 = x$ and $x > 1$, we can conclude $x^\frac{1}{n} > 1$ and that $s_n$ is bounded below.
-Therefore, $s_n$ is bounded.
-(3) By the monotone convergence thm, we know $s_n \to s$.
-Now for each n, $$s_{2n} = x^\frac{1}{2n} = \left(x^\frac{1}{n}\right)^\frac{1}{2} = \sqrt{s_n}.$$
-By thm 19, $\lim s_{2n} = \lim s_n$, and also $\lim s_{2n} = \lim \sqrt{s_n} = \sqrt{\lim s_n}$.
-Thus, $s = \sqrt{s}$
-$s^2 - s = 0$
-$s = 0, 1$
-Since $s_1 = x > 1$ and $s_n$ is bounded below by $1$, we can conclude $s \ne 0$.
-Hence, $s = 1$ and $s_n \to 1$.
-Concerns: Is part (2) of my proof sufficient to show that $s_n$ is bounded?
-
-REPLY [2 votes]: I always like to use the definition. Besides, you already have a fine critique of your current solution. Here is another.
-Let $\epsilon > 0$ be arbitrarily small. Then for all
-$n > \log x / \log (\epsilon+1)$
-(note that $\log x > 0$ since $x>1$) we have
-$$x < (\epsilon+1)^n,$$
-which implies
-$$|x^{1/n} - 1| < \epsilon,$$
-as required.<|endoftext|>
-TITLE: The Strong Whitney Embedding Theorem - Any Recommended Sources?
-QUESTION [7 votes]: Just about all of the standard textbooks on manifold theory give proofs of weak versions of the Whitney Embedding theorem. But other than Whitney's original 1944 paper, are there any standard sources that contain a full proof of the strong version of the theorem? The only textbook I know that contains a full proof is Prasolov's Elements of Homology Theory, a wonderful book on algebraic topology I'd recommend to any student studying the subject. But does anyone know any other book sources for it? Just curious.
-
-REPLY [4 votes]: I would use as a general source for Whitney's embedding theorem "Global Analysis" by D. Kahn. This covers the basics. For more far-reaching resources, I would hurry up and fetch a copy of Whitney's original work. This is most illuminating, because Whitney really succeeded in explaining (implicitly only, of course) what's behind his ideas.<|endoftext|>
-TITLE: The idea's clear; the proof isn't
-QUESTION [7 votes]: I'm working through Enderton's book on set theory. In chapter 3, there are a series of exercises regarding functions. For instance
-
-Prove that if $F$ and $G$ are functions, $dom(F) = dom(G)$, and $F(x) = G(x)$ for all $x$ in the common domain, then $F = G$.
-Assume that $F$ and $G$ are functions. Show that $F \subseteq G$ if and only if $dom(F) \subseteq dom(G)$ and $F(x) = G(x)$ for all $x \in dom(F)$.
-
-If I think about the statements, I come up with the following "sketches:"
-
-Suppose $(x,F(x)) \in F$. Then $x \in dom(F)$. Since $dom(F) = dom(G)$, $x \in dom(G)$ and $(x,G(x)) \in G$. Since $G(x) = F(x)$ for all $x$ in the common domain, $(x,F(x)) \in G$. The other direction is similar. 
-Assume $F$ and $G$ are functions. Then $F \subseteq G$ iff $(x,y) \in F$ implies $(x,y) \in G$. Hence, $x \in dom(F)$ implies $x \in dom(G)$, and $dom(F) \subseteq dom(G)$. Similarly, $(x,y) \in F$ implies $(x,y) \in G$ iff $F(x) = G(x)$ for all $x \in dom(F)$, since $F$ and $G$ are functions and $y$ is uniquely determined by $x$.
-
-But these are not formal proofs. I am having trouble in these exercises with formalizing the idea that I want to express. Hence, the question (request) is for an example of a formalization, so that I can see the appropriate style and level of detail involved.
-
-REPLY [7 votes]: You essentially have it. In fact, the first one is pretty much done (unless you want to go down to the nitty-gritty of the formal definition of a function as a set of ordered pairs). The second one is a bit more confused because you have "iffs" and "implies", which tends to make reading it more difficult. It might be best to separate the two implications you are trying to establish. So let me write those out. You'll see your first is essentially done, the second just needs a bit of clarification.
-To prove that $F=G$, you need to prove that $F\subseteq G$ and $G\subseteq F$. So let $(x,y)\in F$ ($F$ is a function, so all its elements are ordered pairs). In particular, $x\in\mathrm{dom}(F)=\mathrm{dom}(G)$, so there exists $z$ such that $(x,z)\in G$. Note that $(x,y)\in F$ means $F(x)=y$, and $(x,z)\in G$ means $G(x)=z$. Thus, $y = F(x) = G(x) = z$, so $(x,z)=(x,y)\in G$. Thus, $F\subseteq G$. The converse follows by symmetry (or you can try to write it out yourself).
-You have an "if and only if" statement, so you may want to do it in two parts. Start by proving the "if" statement: if $\mathrm{dom}(F)\subseteq\mathrm{dom}(G)$ and $F(x)=G(x)$ for all $x\in\mathrm{dom}(F)$, then $F\subseteq G$. The proof should be similar to the one above.
-Once you're done with the "if", prove the "only if": If $F\subseteq G$, then $\mathrm{dom}(F)\subseteq \mathrm{dom}(G)$ and $F(x)=G(x)$ for all $x\in\mathrm{dom}(F)$.
-How do you prove this? Well, $\mathrm{dom}(F) = \{x\mid \text{there exists }y\text{ such that }(x,y)\in F\}$; similarly for $\mathrm{dom}(G)$. So let $x\in\mathrm{dom}(F)$. Then there exists $y$ such that $(x,y)\in F$. Since $F\subseteq G$, then $(x,y)\in F$ implies $(x,y)\in G$. But $(x,y)\in G$ implies $x\in\mathrm{dom}(G)$, which is what we needed to prove. So $\mathrm{dom}(F)\subseteq \mathrm{dom}(G)$. Then you want to prove that for each $x\in\mathrm{dom}(F)$, $F(x)=G(x)$. So take $x\in\mathrm{dom}(F)$; this means there exists $y$ with $(x,y)\in F$; and since $\mathrm{dom}(F)\subseteq \mathrm{dom}(G)$, you have $x\in\mathrm{dom}(G)$ so there exists $z$ such that $(x,z)\in G$. Since $F\subseteq G$, then $(x,y),(x,z)\in G$. Since $G$ is a function, $y=z$. So $(x,y)\in G$, hence $y = G(x) = F(x)$.<|endoftext|>
-TITLE: Constructive Proof of Kronecker-Weber?
-QUESTION [20 votes]: This question is motivated by my attempt at solving Proving $2 ( \cos \frac{4\pi}{19} + \cos \frac{6\pi}{19}+\cos \frac{10\pi}{19} )$ is a root of $\sqrt{ 4+ \sqrt{ 4 + \sqrt{ 4-x}}}=x$
-Consider numbers expressible as exponential sums $$\sum_k a_k \exp(2 i \pi \theta_k),$$ with $a_k$, $\theta_k$ a finite list of rationals.
-These numbers are algebraic and satisfy some polynomial whose Galois group is abelian. The Kronecker-Weber theorem says the converse also holds.
-Given an abelian polynomial (especially quadratic or cubic), how can we solve it in terms of one of these sums? 
-Basically I am looking for a proof of the Kronecker-Weber theorem that is constructive enough that I can compute with it.
-
-REPLY [7 votes]: $\def\QQ{\mathbb{Q}}\def\ZZ{\mathbb{Z}}\def\Gal{\mathrm{Gal}}\def\char{\mathrm{char}}$I want to record here what can be done reasonably easily following the approach of my 2011 answer. I have a plan for a much larger blog post, but it is an ambitious one so it may be several months until I get to write it. The aim of this answer is to provide a concrete proof of the following:
-Suppose that $K/\QQ$ is Galois with Galois group $\ZZ/p$, where $p$ is a regular prime. Then $K$ is contained in a cyclotomic field.
-Making $p$ irregular seems to force more advanced methods, either involving ramification groups or the Stickelberger theorem. Working with prime powers rather than primes seems to only require more care (especially $2^t$), not higher-powered tools, but it is a lot of work.
-The case $p=2$ was done in the other answer, so I assume $p$ odd.
-Let $\zeta_p$ be a primitive $p$-th root of unity. We have $\mathrm{Gal}(\QQ(\zeta_p)/\QQ) \cong (\ZZ/p)^{\ast}$. Explicitly, for $a \in (\ZZ/p)^{\ast}$, let $\sigma_a$ be the element of $\mathrm{Gal}(\QQ(\zeta_p)/\QQ)$ with $\sigma_a(\zeta_p) = \zeta_p^a$.
-Basic computations
-Let $\beta \in \QQ(\zeta_p)$. We would like to know whether or not $\QQ(\zeta_p,\beta^{1/p})$ is Galois and abelian over $\QQ$.
-Claim 1 $\QQ(\zeta_p,\beta^{1/p})$ is Galois and abelian over $\QQ$ if and only if, for all $a \in (\ZZ/p)^{\ast}$, we have
-$$\sigma_a(\beta) = \beta^a f_a^p \quad (\ast)$$
-for some $f_a \in \QQ(\zeta_p)$.
-Proof: Suppose that $\QQ(\zeta_p,\beta^{1/p})$ is Galois and abelian over $\QQ$. Fix $a$ in $(\ZZ/p)^{\ast}$. Lift $\sigma_a$ to some automorphism $\sigma \in \Gal(\QQ(\zeta_p, \beta^{1/p})/\QQ)$ and let $\tau \in \Gal(\QQ(\zeta_p, \beta^{1/p})/\QQ(\zeta_p))$ be the automorphism $\tau(\beta^{1/p}) = \zeta_p \beta^{1/p}$.
-Since $\sigma$ and $\tau$ commute, we compute
-$$\tau(\sigma(\beta^{1/p}))= \sigma(\tau(\beta^{1/p})) = \sigma(\zeta_p \beta^{1/p}) = \zeta_p^a \sigma(\beta^{1/p}).$$
-We also have
-$$\tau(\beta^{a/p}) = \zeta_p^a \beta^{a/p}.$$
-Therefore, $\sigma(\beta^{1/p})/\beta^{a/p}$ is fixed by $\tau$. Since $\tau$ generates $\Gal(\QQ(\zeta_p, \beta^{1/p})/\QQ(\zeta_p))$, we deduce that $\sigma(\beta^{1/p})/\beta^{a/p} = f_a$ for some $f_a \in \QQ(\zeta_p)$, and thus $\sigma_a(\beta) = \beta^a f_a^p$.
-Conversely, suppose that $(\ast)$ holds. We can reverse the argument to show that $\QQ(\zeta_p, \beta^{1/p})/\QQ(\zeta_p)$ is Galois and
-$$1 \to \Gal(\QQ(\zeta_p, \beta^{1/p})/\QQ(\zeta_p)) \to \Gal(\QQ(\zeta_p, \beta^{1/p})/\QQ) \to \Gal(\QQ(\zeta_p)/\QQ) \to 1$$
-is a central extension. But $\Gal(\QQ(\zeta_p)/\QQ)$ is cyclic, so every central extension of it is abelian. $\square$
-Let $\beta$ be as above and let the ideal $(\beta)$ factor into primes as $(\beta) = \prod \pi^{v_{\pi}(\beta)}$. Note that, if $q$ is a prime of $\ZZ$ which is $1 \bmod p$, then $q$ splits completely in $\QQ(\zeta_p)$. Fix for convenience a particular $\pi_q$ lying over $q$ for each such $q$. For any prime $\pi$ of $\QQ(\zeta_p)$, let $\mathrm{char}(\pi)$ be the characteristic of the residue field. 
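-(A tiny concrete illustration of Claim 1, my own example rather than part of the original answer: take $p = 3$ and $\beta = \zeta_3$. The only nontrivial $a$ is $a=2$, and $\sigma_2(\beta) = \zeta_3^2 = \beta^2 \cdot 1^3$, so $(\ast)$ holds with $f_2 = 1$; indeed $\QQ(\zeta_3, \zeta_3^{1/3}) = \QQ(\zeta_9)$ is abelian over $\QQ$, with Galois group $(\ZZ/9)^{\ast}$ cyclic of order $6$.)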
-Claim 2 With the above notation, there are constants $c_q$, one for each $q \equiv 1 \bmod p$, such that
-$$v_{\pi}(\beta) \equiv \begin{cases} 0 \bmod p & \char(\pi) \not \equiv 1 \bmod p \\ c_q/a & \pi = \sigma_a(\pi_q),\ q \equiv 1 \bmod p \\ \end{cases}.$$
-Proof If $\char(\pi) \not \equiv 1 \bmod p$, then there is some non-identity $a$ with $\sigma_a(\pi) = \pi$. (Here we use $p \neq 2$. Otherwise, $p=2$ and $\pi = (2)$ is a counterexample.) Then $(\ast)$ implies that $a v_{\pi}(\beta) \equiv v_{\pi}(\beta) \bmod p$, so $v_{\pi}(\beta) \equiv 0 \bmod p$.
-If $\char(\pi) = q \equiv 1 \bmod p$, then $(\ast)$ implies that $v_{\pi_q}(\sigma_a^{-1}(\beta)) \equiv a^{-1} v_{\pi_q}(\beta) \bmod p$, so $v_{\sigma_a(\pi_q)}(\beta) \equiv a^{-1} v_{\pi_q}(\beta) \bmod p$, and the result follows with $c_q = v_{\pi_q}(\beta) \bmod p$. $\square$
-Before proceeding to the proof, we need a supply of $\gamma$'s for which $\QQ(\gamma^{1/p},\zeta_p)$ is cyclotomic.
-If $q$ is a prime which is $1 \bmod p$, then $\Gal(\QQ(\zeta_q, \zeta_p)/\QQ) = (\ZZ/q)^{\ast} \times (\ZZ/p)^{\times}$ surjects onto $(\ZZ/p) \times (\ZZ/p)^{\times}$, so there is some field tower $K \supset \QQ(\zeta_p) \supset \QQ$ with $\Gal(K/\QQ) \cong (\ZZ/p) \times (\ZZ/p)^{\times}$. By Kummer theory, we must have $K = \QQ(\zeta_p, \gamma_q^{1/p})$ for some $\gamma_q$.
-From the previous computations, we know that $v_{\pi}(\gamma_q) \equiv 0 \bmod p$ if $\char(\pi) \not \equiv 1 \bmod p$. Considering ramification, or computing with Gauss sums, we also have $v_{\pi}(\gamma_q) \equiv 0 \bmod p$ if $\char(\pi) = q' \neq q$ with $q' \equiv 1 \bmod p$, and there is some $b \in (\ZZ/p)^{\ast}$ such that $v_{\sigma_a(\pi_q)}(\gamma_q) \equiv b/a \bmod p$. We may (and do) replace $\gamma_q$ by $\gamma_q^{1/b \bmod p}$ so that $v_{\sigma_a(\pi_q)}(\gamma_q) \equiv 1/a \bmod p$. (Stickelberger's relation gives an explicit choice of $\gamma_q$ with this normalization.)
-The proof
-Now, suppose that $K/\QQ$ is Galois with Galois group $\ZZ/p$. Then, by Kummer theory, $K(\zeta_p) = \QQ(\beta^{1/p}, \zeta_p)$ for some $\beta$, and $\beta$ must obey condition $(\ast)$.
-We will show that $\beta$ is in the group generated by $(\QQ(\zeta_p)^{\ast})^p$, by the $\gamma_q$ and by $\zeta_p$.
-We first consider the factorization of $(\beta)$ into prime ideals: $(\beta) = \prod \pi^{v_{\pi}}$, which is described by Claim 2 above. We replace $\beta$ by $\beta \prod \gamma_q^{-c_q}$.
-After making this replacement, we have $v_{\pi}(\beta) \equiv 0 \bmod p$ for all $\pi$.
-Thus, we can assume that $(\beta)=I^p$ for some ideal $I$. Since $p$ is regular, we deduce that $(\beta) = (f^p)$ for some $f \in \QQ(\zeta_p)$. Replacing $\beta$ by $\beta f^{-p}$ does not change the extension, and we may now assume that $\beta$ is a unit.
-We abbreviate $\sigma_{-1}(x)$ by $\bar{x}$, since in any complex embedding of $\QQ(\zeta_p)$, the element $\sigma_{-1}$ acts by complex conjugation. Taking $a=-1$ in $(\ast)$, we have
-$$\beta \bar{\beta} = \theta^p$$
-for some $\theta$ and, since $\beta$ is a unit, so is $\theta$. Hitting both sides of the equation with complex conjugation, we have $\theta^p = \bar{\theta}^p$.
-Now, replace $\beta$ by
-$$\beta' := \beta\cdot (\theta^{(p-1)/2}/\beta)^p.$$
-(This is the second time we have used that $p \neq 2$, to make $(p-1)/2$ an integer.) Note that $\beta'$ is likewise a unit. 
We have
-$$\beta' \bar{\beta'} = \frac{\beta \bar{\beta} \theta^{p(p-1)/2} \bar{\theta}^{p (p-1)/2}}{\beta^p \bar{\beta}^p} = \frac{\theta^p \theta^{p(p-1)/2} \theta^{p(p-1)/2}}{\theta^{p^2}} = 1.$$
-We see that $\beta'$ is a unit of $\QQ(\zeta_p)$ such that $|\beta'|=1$ in every complex embedding. So, by a result of Kronecker, $\beta'$ is a root of unity. We see that $\QQ(\beta'^{1/p}, \zeta_p)$ is cyclotomic, as promised.<|endoftext|>
-TITLE: What is undecidability
-QUESTION [14 votes]: What does it mean that some problem is undecidable?
-For instance the halting problem.
-Does it mean that humans can never invent a new technique that always decides whether a Turing machine will halt?
-If not, what techniques are allowed such that the halting problem is still undecidable?
-For instance, induction is a good technique; why can't one discover some new technique?
-I have trouble understanding how some new invention cannot solve the halting problem.
-Given some computer and a program, is there really insufficient information stored in it to determine if it will halt?
-It seems like a purely mechanical problem.
-
-REPLY [2 votes]: First, yes, to your question "Does it mean that humans can never invent a new technique that always decides whether a Turing machine will halt?", mostly because the rules of the game have been set: the formalism of TMs. (At this point it will be helpful to actually go into some of those details and see the construction that shows that assuming the existence of such a TM that checks for halting results in a contradiction.)
-But even so, suppose you were given such a magical 'new technique'...this has been investigated under the name 'oracle'...even with an oracle that decides if a TM can halt on given input, it turns out that there are still problems that can't be solved with a TM (by pretty much the same kind of paradoxical construction that showed that the plain old halting problem is undecidable).
-Aside from all that, there are two general difficulties with the halting problem that might lead to misunderstanding (if ignoring the technical details): knowing what is being quantified over, and dealing with infinity.
-
-what is asked for is an algorithm (to be implemented on a TM) that takes as input the specs for another TM and a parameter, and is supposed to return the answer to the question 'Does the given TM halt for the given input parameter?'. The point is that, even though you might be able to show for a particular TM that it halts (or does not) on the given input, you're supposed to be able to show it for -any- TM. That's kind of a tall order.
-the other problem is, for an input that doesn't halt on the given TM...well, you don't know that ahead of time, so how do you know if the TM is going on forever, or just taking a really long time? You can't run things for as long as you want (that's sort of the definition of infinite). So
-that's not a possible strategy, to check for -nonhalting- by simulating.
-
-Given the main result (there's no general algorithm that will show for -any- TM if it halts on a given input), it turns out, possibly in the direction you're looking for, that for many subclasses of all possible TMs, there -are- algorithms that can decide if one will halt or not. 
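(To make the earlier point about the 'rules of the game' concrete, the classical diagonal construction can be written out as code. This is a hypothetical sketch in Python, deliberately not runnable against any real halts function, which is exactly the point; all names here are made up.)
-def paradox(src):
-    # assume, for contradiction, a total and always-correct oracle
-    # halts(program_source, input_data) -> True/False
-    if halts(src, src):
-        while True:       # halts says we halt, so loop forever instead
-            pass
-    else:
-        return            # halts says we loop, so halt immediately
-Feeding paradox its own source code P, halts(P, P) can return neither True nor False without contradicting what paradox then actually does, so no such total halts can exist. 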
Returning to those subclasses: if you restrict your formalism a little bit, there just might be a compiler (the halting-problem TM for this subclass) that will tell you if you have some annoying infinite loops in there (i.e. that the program does not halt for some input).<|endoftext|>
-TITLE: Euler's phi function and distinct primes
-QUESTION [5 votes]: It is true that $\phi(p) = (p-1)$ only if p is a prime. I had also proven (I am not sure if this is a trivial fact or not) that $\phi(pq) = (p-1)(q-1)$ only if p and q are distinct primes.
-However, I am having difficulty generalizing the result. It certainly seems true that if $\phi(p_1\cdot p_2\cdots p_n) = (p_1 - 1)(p_2 - 1)\cdots (p_n-1)$ then the $p_i$ are distinct primes.
-What I had done in the initial proof for the case n = 2 was to use the formula $\phi(pq) = \phi(p)\phi(q)\frac{d}{\phi(d)}$ where d = gcd(p,q) and the fact that if $a \mid b$ then $\phi(a) \mid \phi(b)$ to show that $d \mid 1$. The result follows quite easily after showing that p and q are coprime. However, this proof does not seem to be extendable to the general case. I hope that someone can help me with this.
-
-REPLY [2 votes]: The above-stated conjecture has infinitely many counterexamples. Namely
-$$\rm\phi(p_1\cdot p_2\cdots p_n)\ =\ (p_1 - 1)\ (p_2 - 1)\cdots (p_n-1)\ \ \Rightarrow\ \ p_i\ distinct\ primes$$
-is false for all primes $\rm\:p > 3\:$ in the following examples
-$$\rm\ \phi(3\cdot 3\cdot 4\cdot p)\ =\ 2\cdot 2\cdot 3\cdot (p-1)\ $$
-$$\rm\ \phi(2\cdot 4\cdot 9\cdot p)\ =\ 1\cdot 3\cdot 8\cdot (p-1)\ $$<|endoftext|>
-TITLE: Kullback-Leibler divergence based kernel
-QUESTION [6 votes]: I'm looking at this paper: "A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications". The author suggests using the kernel function for two distributions $p$ and $q$: $k(p,q)= \exp (-a (D_{KL}(p,q) + D_{KL}(q,p)))$, where $a>0$ and $D_{KL}$ is the Kullback-Leibler divergence between $p$ and $q$. But it's not obvious from this paper that this kernel is positive definite.
-How can it be proved that the kernel is positive definite?
-Also, it's well known that $\exp (-a (D_{KL}(p,q) + D_{KL}(q,p)))$ can be positive definite if and only if $(D_{KL}(p,q) + D_{KL}(q,p))$ is a negative definite kernel. How can one prove that fact?
-
-REPLY [7 votes]: If you have a kernel of the form $K(x,y) = \exp(-a\,M(x,y))$, then, as noted in the question, what is actually needed is for $M(x,y)$ to be a negative definite kernel. A quick practical test, though, is whether the Symmetrised K-L Divergence (call it $KLS(p,q)$) even behaves like a valid metric.
-For all $x, y, z$ in $X$, a metric is required to satisfy the following conditions:
-
-$d(x, y) \geq 0$ (non-negativity)
-$d(x, y) = 0 \iff x = y$ (identity of indiscernibles; note that conditions 1 and 2 together produce positive definiteness)
-$d(x, y) = d(y, x)$ (symmetry)
-$d(x, z) \leq d(x, y) + d(y, z)$ (subadditivity / triangle inequality).
-
-1 and 2 hold for each of $KL(p,q)$ and $KL(q,p)$ and therefore hold for $KLS(p,q)$.
-3 holds trivially.
-However, 4 does not hold.
-Counterexample: consider
-a=[0.3 0.3 0.4]
-b=[0.25 0.35 0.4]
-c=[0.16 0.33 0.51]
-we have
-$KL(a||b)+KL(b||a)+KL(b||c)+KL(c||b)-[KL(a||c)+KL(c||a)]\approx -0.0327<0$
-So $KLS(p,q)$ is not a valid metric.
-Unless I've missed something, I do not believe that their kernels are necessarily positive definite - I'm assuming that it wasn't discussed in the review process, otherwise I'd expect to see it discussed in the paper. 
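(The failure is easy to confirm numerically; a minimal Python sketch, with the array names matching the counterexample above:)
-import numpy as np
-a = np.array([0.30, 0.30, 0.40])
-b = np.array([0.25, 0.35, 0.40])
-c = np.array([0.16, 0.33, 0.51])
-def kl(p, q):
-    # D_KL(p || q) in nats
-    return float(np.sum(p * np.log(p / q)))
-def kls(p, q):
-    # symmetrised KL divergence
-    return kl(p, q) + kl(q, p)
-print(kls(a, b) + kls(b, c) - kls(a, c))   # approx -0.0327 < 0: triangle inequality fails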
Practically, it may not be a problem, as for their real-world examples the matrices may have been (at least close to) SPSD, and with appropriate regularisation (even just adding a small constant to the diagonal) the algorithms should still work. There is also some work on solving SVMs with indefinite kernels, see e.g. Training SVM with Indefinite Kernels or Analysis of SVM with Indefinite Kernels, so all is not lost even if the kernels are indefinite.
-It's interesting that their results are so much better than using Fisher kernels - in my experience too, Fisher kernels don't work that well - so this is potentially a nice way of combining generative and discriminative methods. Let us know how you get on if you get round to using them!<|endoftext|>
-TITLE: The modular curve X(N)
-QUESTION [15 votes]: I have a question about the modular curve X(N), which classifies elliptic curves with full level N structure. (A level N structure of an elliptic curve E is an isomorphism
-from $\mathbb{Z}/N\mathbb{Z} \times \mathbb{Z}/N\mathbb{Z}$ to the group of N-torsion points on E.)
-Some notation: $\Gamma(N)$ is the subgroup of SL$_2(\mathbb{Z})$ which contains all the matrices congruent to the identity matrix modulo N. $\mathbb{H}$ is the upper half-plane.
-$\Gamma(N)\backslash\mathbb{H}$ is a Riemann surface classifying elliptic curves with level N structure and the additional condition that the two base points we choose map to a certain N-th root of unity under the Weil pairing. The problem is that this curve is only defined over
-$\mathbb{Q}(\zeta_N)$.
-Apparently if we leave out the condition with the Weil pairing, we get a curve X(N) defined over $\mathbb{Q}$ which has $\phi(N)$ geometric components isomorphic to $\Gamma(N)\backslash\mathbb{H}$.
-Is there a good way of constructing the curve X(N) from $\Gamma(N)\backslash\mathbb{H}$? Unfortunately the author refers to a French paper by Deligne-Rapoport. (I don't speak French.)
-Do you know any better references for this?
-
-REPLY [4 votes]: One standard way to describe the disconnected version of $X(N)$ is as follows:
-it is the quotient
-$$SL_2(\mathbb Z)\backslash \bigl(\mathcal H \times GL_2(\mathbb Z/N\mathbb Z)\bigr),$$
-where $SL_2(\mathbb Z)$ acts on $\mathcal H$ as usual, and on $GL_2(\mathbb Z/N
-\mathbb Z)$ via left multiplication of matrices.<|endoftext|>
-TITLE: number of ordered partitions of integer
-QUESTION [8 votes]: How to evaluate the number of ordered partitions of the positive integer $5$?
-Thanks!
-
-REPLY [5 votes]: Counting in binary: the lengths of the maximal groups of equal bits (runs of 1s or 0s) in an $n$-bit string form an ordered partition (composition) of $n$. Complementary strings give the same groups, so half of the $2^n$ strings are duplicates, and there are $2^{n-1}$ ordered partitions; for $n=5$ that gives $2^4=16$. As is to be expected, this gives the same results as the gaps method, but in a different order. (The tables below illustrate $n=4$.)
-Groups
-0000 4
-0001 3,1
-0010 2,1,1
-0011 2,2
-0100 1,1,2
-0101 1,1,1,1
-0110 1,2,1
-0111 1,3
-
-Gaps
-000 4
-001 3,1
-010 2,2
-011 2,1,1
-100 1,3
-101 1,2,1
-110 1,1,2
-111 1,1,1,1<|endoftext|>
-TITLE: Under what conditions is integrating over a series expansion valid for an improper integral?
-QUESTION [17 votes]: On Stack Overflow, a question was asked about getting Mathematica to evaluate the integral
-$$\int^\infty_0 \frac{e^{-x}}{\sin x} \, \mathrm{d}x$$
-which we know is divergent. In one of the answers, the integrand is replaced with its Taylor expansion and integrated term by term, and in physics it is often taken for granted that this works. But under what circumstances is this valid for improper integrals, in general? More precisely, what must be done to properly interchange the two limiting processes? 
-REPLY [4 votes]: In some cases, a class of theorems called Tauberian theorems can help you to justify interchanging the order of two limiting operators. For example, if the improper integral $\int_{0}^{\infty} f(x) \, dx$ exists, then
-$$\int_{0}^{\infty} \int_{0}^{\infty} x^{n} s^{n-1} f(x) \; e^{-xs} \, ds dx = \int_{0}^{\infty} \int_{0}^{\infty} x^{n} s^{n-1} f(x) \; e^{-xs} \, dx ds$$
-holds for all $n$. (Of course, the existence of both iterated integrals is also guaranteed.) Originally, Tauberian theorems are answers to the following question: What condition (a Tauberian condition) ensures that summability by a stronger method implies summability by a weaker one? Since a stronger summability method often exploits a good approximation to the identity, these theorems may be regarded as a special kind of interchanging the order of limiting operators. For example, a function $f(x)$ is Abel-summable to $I$ if
-$$\lim_{\delta \to 0+} \int_{0}^{\infty} f(x) e^{-\delta x} \, dx$$
-exists with the value $I$. Then this reduces to the ordinary summability if we have
-$$\lim_{\delta \to 0+} \int_{0}^{\infty} f(x) e^{-\delta x} \, dx =
-\int_{0}^{\infty} f(x) \, dx = \int_{0}^{\infty} \lim_{\delta \to 0+} f(x) e^{-\delta x} \, dx.$$
-For the integral in question, we may understand it as the Cauchy principal value. That is, we identify this integral with
-$$ \lim_{\epsilon \to 0+} \sum_{n=1}^{\infty} \int_{\pi(n-1) + \epsilon}^{ \pi n - \epsilon} \frac{e^{-x}}{\sin x} \, dx.$$
-By circumventing the poles, this integral can be managed by several techniques.<|endoftext|>
-TITLE: Proof that $t-1-\log t \geq 0$ for $t > 0$
-QUESTION [12 votes]: Using basic calculus, I can prove that $f(t)=t-1-\log t \geq 0$ for $t > 0$ by setting the first derivative to zero
-\begin{align}
-\frac{df}{dt} = 1 - 1/t = 0
-\end{align}
-And so I have a critical point at $t=1$ and $f(1)=0$. Then I calculate the second derivative $\frac{d^2f}{dt^2} = 1/t^2 \geq 0$, meaning that $f$ is a convex function with a minimum value of 0, so $f \geq 0$ for $t > 0$.
-However, something in my gut tells me there's a way to prove this without even using the first or any derivative of $f$. I've been thinking about this for a while and I haven't been able to do this.
-Question is: can you prove $f\geq 0$ without relying on any derivatives of $f$?
-
-REPLY [20 votes]: With the definition of $\log t$ as an integral. We can define
-$$\log t = \int_1^t \frac{1}{x}\,dx.$$
-The function $\frac{1}{x}$ is decreasing, so a left hand sum approximation is always an overestimate, and a right hand sum approximation is always an underestimate. For $t\gt 1$, dividing the interval $[1,t]$ into a single interval of length $t-1$ and evaluating the integrand at the left endpoint, we get
-$$\ln(t) = \int_1^t\frac{1}{x}dx \leq \frac{1}{1}(t-1) = t-1,$$
-giving the desired inequality.
-If $0\lt t\lt 1$, then we first switch limits and use a right hand sum with one interval:
-$$\ln(t) = \int_1^t\frac{1}{x}dx = -\int_t^1\frac{1}{x}dx \leq -\frac{1}{1}(1-t) = t-1$$
-(we have $\int_t^1\frac{1}{x}dx \geq \frac{1}{1}(1-t)$ since the right hand sum is an underestimate, and multiplying by $-1$ reverses the inequality), giving the desired inequality again.
-If $t=1$, the inequality reduces to $0\geq \log(1)$, which is of course true.
-With exponentials. $t-1-\log t\geq 0$ if and only if $\log t\leq t-1$, if and only if $t \leq e^{t-1}$.
-
-With the Taylor series definition of $e^t$. 
Since
-$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
-then
-$$e^{t-1} = 1 + (t-1) + \frac{(t-1)^2}{2!} + \frac{(t-1)^3}{3!} + \cdots.$$
-If $t\geq 1$, then $e^{t-1}\geq 1+(t-1) = t$, giving the desired inequality. If $0\lt t\lt 1$, then we have an alternating series
-$$ \frac{(t-1)^2}{2!} + \frac{(t-1)^3}{3!} + \frac{(t-1)^4}{4!}+\cdots$$
-with ever decreasing terms:
-$$\frac{|t-1|^{n+1}}{(n+1)!} \lt \frac{|t-1|^n}{n!} \Longleftrightarrow |t-1|\lt n+1,$$
-which holds because $n\geq 2$ and $|t-1|\lt 1$. Thus, the "tail" (starting in the quadratic term) of the series is positive, so $e^{t-1} \geq 1+(t-1) = t$ still holds, giving the desired inequality as well.
-With the definition of $e^t$ as a limit. We have
-$$e^x = \lim_{n\to\infty}\left(1 + \frac{x}{n}\right)^n$$
-so
-$$e^{t-1} = \lim_{n\to\infty}\left(1 + \frac{t-1}{n}\right)^n.$$
-If $t\geq 1$, the sequence is nondecreasing (we are compounding interest, so the more often we compound the bigger the payoff). In particular, $e^{t-1}\geq 1 +\frac{t-1}{1} = t$, giving the desired inequality. If $0\lt t \lt 1$, then we have
-$$e^{t-1} = \lim_{n\to\infty}\left(1 - \frac{1-t}{n}\right)^n.$$
-Again, the sequence is increasing (this is like paying off a debt with fixed interest; if you pay down the capital more often, your total interest will be smaller in the end). So again we have $e^{t-1} \geq 1 - \frac{1-t}{1} = t$.
-
-REPLY [8 votes]: With the definition of $\log$ as an integral
-$$t-1 -\log t = \int_1^t \frac{x-1}{x} dx.$$
-Because the integrand is positive for $x>1$, we have $t-1 -\log t \geq 0$ for $t>1$.
-Because the integrand is negative for $x<1$, we have $t-1 -\log t \geq 0$ for $0 \lt t \lt 1$ as well.<|endoftext|>
-TITLE: well separated points on sphere
-QUESTION [14 votes]: Is there a way to generate $k$ points on an $n$-sphere, say $x_1,\dots,x_k$, such that $\min_{ i \neq j } \| x_i - x_j \| $ is as large as possible? Approximate solutions are also OK, I just need well separated points on a sphere.
-
-REPLY [2 votes]: There are lots of papers on nearly uniform distribution of points on the 2-sphere; however, the question is for a general $n$-sphere, for which the only systematic way I have seen is Fisher's 1986 PVQ: the pyramid vector quantizer, used in data compressors, e.g. the Opus audio codec.
-It starts with a uniform point distribution on the $\ell_1$ sphere ("pyramid"): the set of integer points with a fixed sum of absolute values. Then it projects these points to the $\ell_2$ sphere. To get a more uniform point distribution we can apply some deformation, such as raising the coordinates to a power, before the projection.<|endoftext|>
-TITLE: (Extended) Hall's Marriage Theorem from Dilworth's Theorem
-QUESTION [6 votes]: This question comes from Exercises III.4.5 and III.4.6 of Bourbaki's Set Theory. They are about using Dilworth's Theorem to prove Hall's Marriage Theorem (did it) and a mild extension of it (can't do it).
-
-Dilworth's Theorem: Let $E$ be a finite ordered set and $k$ be the maximal number of elements of an antichain in $E$. Then there exists a partition of $E$ into $k$ totally ordered sets.
-
-Hall's Marriage Theorem: Let $E$ and $F$ be two finite sets and let $x\rightarrow A(x)$ be a mapping of $E$ into $\mathfrak{P}(F)$ such that $\text{Card}(\bigcup_{x\in H}A(x))\geq\text{Card}(H)$ for any subset $H$ of $E$. Then there exists an injection $f$ of $E$ into $F$ such that $f(x)\in A(x)$ for each $x\in E$.
-To prove it, one defines an ordering on the disjoint union of $E$ and $F$ such that $x>y$ if and only if $x\in E$ and $y\in F$ and $y\in A(x)$. 
It is easy to see that $F$ is an antichain and that there is no antichain with more elements than $F$. So there exists a partition of $E\sqcup F$ into $\text{Card}(F)$ totally ordered sets, which are necessarily either one-point subsets of $F$ or pairs $\{x,y\}$ such that $y\in A(x)$. Those pairs define the asserted injection.
-
-Hall's Marriage Theorem (Extended Version): In the setting above, suppose additionally that $G$ is a subset of $F$ and for each $L\subset G$, $\text{Card}(\{x\in E|A(x)\cap L\neq\emptyset\})\geq\text{Card}(L)$. Then we can choose $f$ as above with the additional property $G\subset f(E)$.
-The proof is sketched by Bourbaki as follows: Consider the disjoint union of $G$, $F$, and $E$, which I denote here as $(\{1\}\times G)\cup(\{2\}\times F)\cup(\{3\}\times E)$, and define an ordering on it by setting as only relations
-
-$(1,z)<(2,y)$ if and only if $z=y$
-$(1,z)<(3,x)$ if and only if $z\in A(x)$
-$(2,y)<(3,x)$ if and only if $y\in A(x)$
-
-Then apply Dilworth's Theorem.
-The problem is that I don't quite know how. The greatest number of elements in an antichain is again $\text{Card}(F)$. So we get a partition of $G\sqcup F\sqcup E$ into $\text{Card}(F)$ totally ordered sets. Again, each of those sets contains exactly one element of $F$, at most one of $E$, and at most one of $G$. This way we can define an injection $f$ as before by defining $f(x)$ to be the element of $F$ which lies in the same set of the partition as $x$, but for $G\subset f(E)$ to hold, every set of the partition which contains an element of $G$ would have to contain an element of $E$ as well. But that's obviously not true for all partitions of $G\sqcup F\sqcup E$ into $\text{Card}(F)$ totally ordered sets.
-So I think that either one has to find a more sophisticated way of defining $f$ from a given partition or one has to modify the order relation before applying Dilworth's Theorem, somehow using the condition on the subsets of $G$. Does someone see a way to finish the proof?
-
-REPLY [3 votes]: To finish the proof, you must also define an injection g of G into E, with a ∈ A(g(a)) for each a, using the condition on the subsets of G and Dilworth's Theorem.
-Then for each element a∈G \ f(E), you can redefine f so that a∈ f(E), using the injection g as follows:
-Build a set of elements a0, a1, ... of G so that a0 = a and ai = f(g(ai-1)) for i>0.
-Let j be the smallest integer such that aj does not belong to G (its existence has to be proved, though).
-Redefine f so that f(g(ai)) = ai for each i < j.
-The new f is still an injection of E into F such that f(x) ∈ A(x) for each x ∈ E, and besides, a∈ f(E).
-Repeat this for each element of G \ f(E) and the problem is solved.
-Hope I did not go wrong somewhere. Excuse my poor English, I'm French!
-Regards
-Alexis<|endoftext|>
-TITLE: Rank of an interesting matrix
-QUESTION [7 votes]: Let's define:
-$U=\left \{ u_j\right \} , 1 \leq j\leq N= 2^{L},$ the set of all different binary sequences of length $L$.
-$V=\left \{ v_i\right \} , 1 \leq i\leq M=\binom{L}{k}2^{k},$ the set of all different gapped binary sequences with $k$ known bits and $L-k$ gaps.
-$A_{M\times N}=[a_{i,j}]$ is a binary matrix defined as follows:
-$$a_{i,j} = \left\{\begin{matrix}
-1 & \text{if } v_i \text{ matches } u_j\\
-0 & \text{otherwise }
-\end{matrix}\right.$$
-and finally,
-$S_{M\times M}=AA^{T}$
-Now, the questions are:
-i) What is the rank of the matrix $S$?
-ii) What is the eigendecomposition of $S$? 
-Here is an example for $L=2, k=1$:
-$$U = \left \{ 00,01,10,11\right \} $$
-$$V = \left \{ 0.,1.,.0,.1\right \} ^*$$
-$$ A = \begin{bmatrix}
-1 & 1 & 0 &0 \\
-0 & 0 & 1 &1 \\
-1 & 0 & 1 &0 \\
-0 & 1 & 0 &1
-\end{bmatrix}$$
-$$ S = \begin{bmatrix}
-2 & 0 & 1 &1 \\
-0 & 2 & 1 &1 \\
-1 & 1 & 2 &0 \\
-1 & 1 & 0 &2
-\end{bmatrix}$$
-For the special case $k=1$, this has been previously solved by joriki and the solution can be found here.
-Any comments or suggestions are appreciated.
-$^{*}$ Here dots denote gaps. A gap can take any value, and each gapped sequence with $k$ known bits and $(L-k)$ gaps in $V$ matches exactly $2^{L-k}$ sequences in $U$; hence the sum of the elements in each row of $A$ is $2^{L-k}$.
-
-REPLY [3 votes]: As M.S. pointed out, $\mathrm{rank}(AA^\mathrm{T})=\mathrm{rank}(A)$, so we can determine $\mathrm{rank}(A)$ instead.
-Let $w_{jr}=2u_{jr}-1$, that is, $w_{jr}$ is $\pm1$ according as the $r$-th bit of $u_j$ is $1$ or $0$. Then the vectors $b^S$ defined by
-$$b^S_j=\prod_{r\in S}w_{jr}$$
-for all $S\subseteq\{1,\ldots,L\}$ form a basis of $\mathbb{R}^N$: There are $N=2^L$ of them and they are mutually orthogonal, since for $S,S'\subseteq\{1,\ldots,L\}$ with $S\neq S'$ and some $s\in S$ with $s\notin S'$ (assuming $S\neq\emptyset$ without loss of generality)
-$$
-\begin{eqnarray}
-\sum_{j=1}^N b^S_j b^{S'}_j
-&=&
-\sum_{j=1}^N\prod_{r\in S}w_{jr}\prod_{r'\in S'}w_{jr'}
-\\
-&=&
-\sum_{j=1}^N w_{js}\prod_{r\in S-\{s\}}w_{jr}\prod_{r'\in S'}w_{jr'}\;,
-\end{eqnarray}
-$$
-which is zero since the sum splits into two parts for $w_{js}=\pm1$ which cancel each other.
-Now consider the representation of the symmetric group $S_L$ induced on $\mathbb{R}^N$ by the permutation of the bits of the $u_j$: a permutation $\pi\in S_L$ acts on the set $U$ by permuting the bits of the $u_j$, and this induces a representation $\rho$ on $\mathbb{R}^N$ in which $\pi$ acts on a vector of $\mathbb{R}^N$ by permuting the vector's entries as it permutes the $u_j$. The vectors $b^S$ with $|S|=m$ for fixed $m$ span an invariant subspace $B_m$, since they are transformed into each other by permutations and each of them can be transformed into any other by a suitable permutation. Thus, $\rho$ decomposes into $L+1$ subrepresentations $\rho_m$ over the $B_m$, of dimensions $\left({L\atop m}\right)$. The rows of $A$ also span a (reducible) invariant subspace under $\rho$, since they are transformed into each other by permutations. All rows of $A$ are orthogonal to a given $b^S$ iff $|S|> k$: If $|S|> k$, for each row $i$ at least one of the bits in $S$ is unknown in $v_i$ and thus takes on both values under the permutations and causes the inner product to split into two cancelling parts, whereas if $|S|\le k$, there are rows such that all bits indexed by $S$ are known, so that there is no cancellation in the sum.
-Without using the decomposition of $\rho$, simply from orthogonality, we can infer
-$$\text{rank}(A)\le\sum_{m=0}^k\left({L\atop m}\right)\;.$$
-Now the row space of $A$ is not orthogonal to the subspaces $B_m$ with $m\le k$, and is orthogonal to the subspaces $B_{L-m}$ with $m \lt L-k$; this settles the rank when $2k \lt L$, but I haven't been able to figure out yet how to show in the case $2k \ge L$ that both $B_m$ and $B_{L-m}$ are in the row space for $L-k\le m \le k$.
-There was a question on MO whether there's a closed form for this, and the answer was negative. 
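-(As a numerical sanity check of this rank formula, here is a minimal Python/NumPy sketch; all names are mine.)
-import numpy as np
-from math import comb
-from itertools import combinations, product
-def rank_A(L, k):
-    # rows of A: one per gapped sequence (positions of the k known bits, their values);
-    # columns: all 2^L binary sequences; entry is 1 iff the gapped sequence matches
-    U = list(product([0, 1], repeat=L))
-    rows = [[int(all(u[p] == bit for p, bit in zip(pos, bits))) for u in U]
-            for pos in combinations(range(L), k)
-            for bits in product([0, 1], repeat=k)]
-    return np.linalg.matrix_rank(np.array(rows))
-for L in range(1, 6):
-    for k in range(L + 1):
-        assert rank_A(L, k) == sum(comb(L, m) for m in range(k + 1))
-(For $L=2$, $k=1$ this gives rank $3$, matching the example matrix $S$ above.)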
-[Edit:] I just realized that all this representation stuff is actually complete overkill, since we can construct the $b^S$ with $|S|\le k$ explicitly from the rows of $A$: Let $a_i$ denote the row of $A$ corresponding to $v_i$, let $K_i$ be the set of the indices of the known bits of $v_i$, and in analogy to $w_{jr}$ let $z_{ir}=2v_{ir}-1$ for $r\in K_i$; then
-$$\sum_{K_i\supseteq S}a_i\prod_{r\in S} z_{ir}$$
-is a multiple of $b^S$, and it's non-zero iff $|S|\le k$. Thus, the rank given above is indeed correct for all $k$ and $L$.<|endoftext|>
-TITLE: Finding the inverse of the arc length function
-QUESTION [13 votes]: I'm just a simple high school math student, so please don't eat me =)
-In my calculus text, I have the formula:
-$$L(x) = \int_{c}^{x} \sqrt{[f'(t)]^2 + 1}\,dt$$
-where $L(x)$ is the arc length of a curve $f(x)$ from $c$ to $x$.
-How can I invert this function so that I can find valid values of $x$ to satisfy a given arc length? Something like $L^{-1}(x)$.
-
-REPLY [5 votes]: You can do it by a differential equation without getting $L(x)$ explicitly. If $\frac{dL}{dx} = \sqrt{1 + f'(x)^2}$, then
-$\frac{dx}{dL} = \frac{1}{\sqrt{1 + f'(x)^2}}$. Numerical methods can be used to solve this differential equation.<|endoftext|>
-TITLE: roots of $f(z)=z^4+8z^3+3z^2+8z+3=0$ in the right half plane
-QUESTION [24 votes]: This is a question in Ahlfors in the section on the argument principle: How many roots of the equation $f(z)=z^4+8z^3+3z^2+8z+3=0$ lie in the right half plane?
-He gives a hint that we should "sketch the image of the imaginary axis and apply the argument principle to a large half disk."
-Since $f$ is an entire function, I think I understand that the argument principle tells us that for any closed curve $\gamma$ in $\mathbb{C}$, the winding number of $f(\gamma)$ around 0 is equal to the number of zeros of $f$ contained inside $\gamma$.
-How would you go about actually applying the hint though? I am having trouble figuring out what the image of a large half disk under $f$ would look like.
-
-REPLY [9 votes]: In this case $\mathrm{Re}\, f(it)=t^4-3t^2+3$ has no real roots, so $f(it)$ is always in the right half-plane, and for $t\to \pm\infty$ it "goes in the direction of the $x$-axis" (as the imaginary part of $f(it)$ grows more slowly). Hence $\mathrm{Arg}\, f(it)$ makes $0$ turns as $t$ goes from $-\infty$ to $+\infty$.
-If you now take $(-iN,iN)$ for a large $N$, and complete it to a closed curve by a semicircle on the right, the number of turns of $\mathrm{Arg}\, f$ along the closed curve comes only from the semicircle, and so it's $4/2=2$ ($4$ being the degree of $f$).
-Your polynomial has $2$ roots in the right half-plane.
-In a more general situation, one needs to find the total number of turns of $\mathrm{Arg}\, f(it)$ (it is a half-integer if the degree of $f$ is odd). The method is to find the intersections of the curve $f(it)$ with the real and the imaginary axis, to see how it enters and leaves the quadrants of the plane. So one needs to find the real roots of $\mathrm{Re}\, f(it)$ and of $\mathrm{Im}\, f(it)$ - in fact, only their relative positions on the real axis.
-It is actually quite useful - e.g. if $f$ is the characteristic polynomial of a system of linear ordinary differential equations with constant coefficients, then no roots in the right half-plane = stability of the system.
-edit: More about the method of finding the number of roots with positive real part. Let $f(z)=z^n+a_{n-1}z^{n-1}+\dots+a_0$, where the $a_i$ are complex numbers. 
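(Parenthetically, the count for the specific quartic above is easy to confirm numerically; a minimal NumPy sketch:)
-import numpy as np
-roots = np.roots([1, 8, 3, 8, 3])          # coefficients of z^4+8z^3+3z^2+8z+3
-print(sum(r.real > 0 for r in roots))      # 2, matching the argument-principle count
-Back to the general method now. 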
Let us decompose $f(it)$ as $f(it)=p(t)+iq(t)$, where $p$ and $q$ are real polynomials.
-First we identify in which quadrant $(p(t),q(t))$ is when $t\to -\infty$ (it's given just by the signs of the leading coefficients of $p$ and $q$). Then we draw on the real axis the real roots of $p$ and of $q$ (and mark multiplicities). Suppose that $p$ and $q$ have no common real root, i.e. that there is no purely imaginary root of $f$ (otherwise we need to shift $z$ by a small $\epsilon$). We don't need the exact values of the roots of $p$ and $q$; only their relative positions on the real axis are used.
-Now proceed on the real axis from $t=-\infty$ to $+\infty$. Whenever you meet a root of either $p$ or of $q$, you change the quadrant. In fact, if you meet two roots of $p$ without a root of $q$ in between them, it means that you return to your original quadrant. So simply erase such pairs of roots until between any roots of $p$ (or $q$) there's a root of $q$ (or $p$). Now every remaining root means a change of the quadrant, and those changes go in the same direction (clockwise or anticlockwise). So $1$ plus the number of the remaining roots, divided by $4$, is the number of turns of $f(it)$ as $t$ goes from $-\infty$ to $\infty$ (divided by $4$ as we count quadrants).
-If we take a large right half-circle, its $f$-image makes $n/2$ turns. If we subtract from this $n/2$ the number we found above (the number of turns of $f(it)$), we get the number of roots with positive real part.
-Also notice that $\deg p=n$, $\deg q\leq n-1$, so the (absolute value of the) number of turns of $f(it)$ is at most $(1+n+(n-1))/4=n/2$. So if you want to have no root with positive real part (or no root with negative real part) then $p$ and $q$ must have all roots real and they must alternate.<|endoftext|>
-TITLE: Every equivalence relation on a set $S$ defines a corresponding partition, and vice versa
-QUESTION [8 votes]: Could someone help me prove this? This is one of the opening exercises in my Algebra book that I just can't figure out how to solve. I don't understand how to show that the equivalence classes defined by the relation would be disjoint. The opposite direction also completely baffled me on how to start.
-Seeing as how Wikipedia refers to this proof as the Fundamental Theorem of Equivalence Relations, and seeing how someone here or on MathOverflow commented that it was one of the most important things they'd make sure an algebra student took away from their classes, I figured it's important enough for me to see it proved.
-I'd really appreciate it if any answers did everything with function notation, instead of using kind of slippery words like "choose" or "pick"; I'm always bad at knowing when to invoke them.
-I'd also appreciate examples of what this result is useful for proving.
-
-REPLY [8 votes]: Fix a set $S$.
-An equivalence relation on $S$ is a subset $E$ of $S \times S$ that satisfies reflexivity, symmetry, and transitivity. (Instead of writing $x \sim y$, we say that $(x, y) \in E$.) Let $\mathcal E = \mathcal E(S)$ denote the set of all equivalence relations on $S$.
-A partition of $S$ is a set of nonempty subsets $P = \{ S_i \}$ whose union is $S$ and which are pairwise disjoint. Let $\mathcal P = \mathcal P(S)$ denote the set of all partitions of $S$.
-Define a map $\alpha: \mathcal E \to \mathcal P$, taking an equivalence relation and producing a partition from it. How? Begin with an equivalence relation $E \in \mathcal E$. 
For any element $s \in S$, the other elements in the same part of the partition $\alpha(E)$ are precisely those that are equivalent to $s$ in $E$. You have to show that the two conditions that make a partition are satisfied.
-In the other direction, define $\beta: \mathcal P \to \mathcal E$, taking a partition and producing an equivalence relation from it. How? Now, begin with a partition $P \in \mathcal P$. For any element $s \in S$, another element is equivalent to $s$ in $\beta(P)$ exactly when they belong to the same part of the partition. Here you have to show that the three conditions that make an equivalence relation are satisfied.
-In fact, the maps $\alpha$ and $\beta$ are mutually inverse bijections, meaning that $\alpha(\beta(P)) = P$ for any partition $P$ and $\beta(\alpha(E)) = E$ for any equivalence relation $E$.<|endoftext|>
-TITLE: Complement of a totally disconnected compact subset of the plane
-QUESTION [14 upvotes]: Let $E \subset \mathbb{C}$ be compact and totally disconnected. Is there an elementary way to prove that $\mathbb{C} \setminus E$ is connected?
-
-REPLY [3 votes]: There are many "elementary" topological ways of proving this. Of course at some point you will have to use some properties of the complex plane. For example, it is not difficult to show that any set that separates the Riemann sphere must contain the boundary of a domain $U$ whose complement is connected; i.e. $U$ is simply-connected, and hence $\partial U$ is connected. So any set (not necessarily closed, by the way) that disconnects the plane contains a plane continuum that disconnects the plane. Of course this still uses something (as mentioned in the question on MathOverflow, a key fact is that the union of two disjoint compact sets that do not disconnect the plane also does not disconnect the plane).
-Here is the most elementary proof that I can come up with on the spot. Again, we will need to use some properties of the plane, and we will also use elementary properties of Hausdorff convergence of continua.
-Lemma. Suppose that $A\subset\mathbb{R}^2\setminus\{0\}$ disconnects $0$ from $\infty$. Then $A$ contains a nontrivial plane continuum that separates $0$ from $\infty$.
-Proof. First we note that, for some $\varepsilon$, $A$ contains a closed set
-$$B\subset \{x\in A: \varepsilon < \|x\| < 1/\varepsilon\}$$
-that separates $0$ from $\infty$. Indeed, this follows from the definition of connectedness: There is an open, bounded set $U$ such that $\partial U$ is contained in $A$, so we can set $B := \partial U$.
-Now cover the plane by a grid of small squares of sidelength, say, $1/n$ ($n$ sufficiently large), and let $B_n$ be the union of the (closed) squares that intersect $B$. Then $B_n$ separates $0$ from $\infty$. Now it is an elementary (though perhaps tedious) exercise to show that $B_n$ has a connected component $\tilde{B_n}$ that separates $0$ from $\infty$. (This is essentially a discrete fact about a two-dimensional grid.)
-Let $C$ be a Hausdorff limit of the sequence $\tilde{B_n}$. Since each $\tilde{B_n}$ has diameter greater than $2\varepsilon$, and this sequence is uniformly bounded, the limit is a nontrivial continuum. By construction, $C\subset B$, and it is not difficult to see that $C$ must also separate $0$ from $\infty$ (though we do not need this for the original question).
-Does this make sense or am I missing something?<|endoftext|>
-TITLE: function $(-2)^{x}$
-QUESTION [5 upvotes]: What are the real and imaginary parts of the function $f(x)=(-2)^{x}$?
-
-Is there a unique solution to this question?
-
-REPLY [13 votes]: Since $-2 = 2e^{i\pi}$, the values of the complex logarithm of $-2$ are $\ln(-2) = \ln(2)+i(\pi+2k\pi) = \ln(2)+i(2k+1)\pi$.
-Since $a^x = e^{x\ln(a)}$, we have
-$$\begin{align*}
-(-2)^x &= \exp(x\ln(-2))\\
- &= \exp\left(x\Bigl( \ln(2) + i(2k+1)\pi\Bigr)\right)\\
-&= \exp(x\ln(2) + ix(2k+1)\pi)\\
-&= e^{x\ln(2)} e^{ix(2k+1)\pi}\\
-&= 2^x\Bigl( \cos(x(2k+1)\pi) + i\sin(x(2k+1)\pi)\Bigr)\\
-&= 2^x\cos(x(2k+1)\pi) + i2^x\sin(x(2k+1)\pi).
-\end{align*}$$
-So for integer $k$, the different values of $(-2)^x$ have real part $2^x\cos\Bigl(x(2k+1)\pi\Bigr)$, and imaginary part $2^x\sin\Bigl(x(2k+1)\pi\Bigr)$.
-Added. To take the principal value of the logarithm (which requires the imaginary part to lie in $(-\pi,\pi]$), you use $k=0$.
-
-REPLY [8 votes]: Hint: by definition, $a^x = e^{x \log a}$. Complicating matters: there are infinitely many branches of the log, corresponding to branches of $a^x$.<|endoftext|>
-TITLE: How to prove an extension of ZFC is conservative
-QUESTION [8 upvotes]: Working in ZFC.
-I've defined a function-like binary predicate $R$ on a proper class. It has to be recursive; i.e. $R(a,b)$ must usually depend on one or more $R(c,d)$ for some $c$s and $d$s calculated from $a$ and $b$. There is, however, no well-founded recursive, unary function that agrees with $R$ (i.e. there is no $F$ such that $y = F(x)$ iff $R(x,y)$).
-So I've defined it like ZFC defines $\in$: I've added a new binary predicate symbol $R$ and defined axioms for it. Every axiom conforms to the schema $\forall x_1 . \dots \forall x_n . P_1(x_1) \wedge \dots \wedge P_n(x_n) \rightarrow Q(x_1,\dots,x_n) \rightarrow R(x_1,x_2)$. I think the salient facts are 1) that no axiom concludes that a set exists, only that $R$ relates two sets; and 2) $Q(x_1,\dots,x_n)$ is a formula that usually refers to $R$.
-I'd like to prove that ZFC+R proves that no more sets exist than ZFC does. How can I do this?
-(Also, is "conservative extension" the right term for this?)
-
-REPLY [5 votes]: (Slightly editing the comments into an answer.)
-Your terminology is right: A theory $T$ is a conservative extension of a theory $S$ iff the language of $T$ extends the language of $S$, every axiom of $S$ is provable in $T$, and every theorem of $T$ in the language of $S$ is also a theorem of $S$.
-If there is a procedure that allows us to expand any model $(M,\in^M)$ of $\mathsf{ZFC}$ to a model $(M,\in^M,R^M)$ where the relevant axiom schema is satisfied with $R$ interpreted as $R^M$, then yes, the new theory is conservative over $\mathsf{ZFC}$ and does not prove the existence of any new sets.
-The typical example is to have the new axioms state that $R$ is a well-ordering of the whole universe. In $\mathsf{ZFC}$ it is not provable that there is such a class $R$ (recall that in $\mathsf{ZFC}$ all "classes" ought to be definable from parameters). However, using class forcing, one can add to any model of $\mathsf{ZFC}$ a predicate $R$ that is such a global well-ordering, and in a fashion that adds no new sets.
-(Essentially, we force with initial segments.)
-The point here is not that we needed to use class forcing, that is just the means towards our actual goal. The point is this: Suppose the theory $T=\mathsf{ZFC}+$"$R$ is a well-ordering" proves a sentence $\phi$ in the language of set theory. Pick any model of $\mathsf{ZFC}$. We can extend it to a model of $T$ in the manner just described. In the extended model, $\phi$ holds.
But this means that $\phi$ holds in the original model, because adding $R$ did not add any sets. By the completeness theorem, $\phi$ is provable, because it is true in all models.
-The general approach hinted at by this example is fairly malleable. A different example in the same spirit is that Gödel-Bernays set theory is conservative over $\mathsf{ZF}$. The point is that given a model of $\mathsf{ZF}$, we can extend it to a model of $\mathsf{GB}$ by taking as proper classes of the extension only the definable classes of the original model. But then, exactly the same argument as for the other example shows that we have conservation.
-One can be more liberal and have more examples if one does not insist on the restrictions that the definition of conservative extension requires. For example, we could instead have $T$ interpret $S$. Among other things (details in the link), this means that from any model $M$ of $T$ we can define a model $I^M$ of $S$. That the interpretation is conservative would correspond to the requirement that every model of $S$ is $I^M$ for some $M$.
-Of course, whether the example you have in mind indeed gives us a conservative extension depends on the details of the schema you have in mind.<|endoftext|>
-TITLE: Central Limit Theorem
-QUESTION [5 upvotes]: If $N$ is a Poisson random variable, why is the following true?
-It is from "Probability and Stochastic Processes" by Yates, page 301, equations 8.2 to 8.3:
-$$ P\left(\left|\frac{N-E(N)}{\sigma_N}\right| \geq \frac{c}{\sigma_N}\right) = P(|Z| \geq \frac{c}{\sigma_N})$$
-Here $Z$ is the standard normal Gaussian random variable. The explanation in the text is: "Since E[N] is large", the CLT can be used. But I am familiar with the CLT being used with sums of random variables.
-Thanks.
-
-REPLY [6 votes]: Several points.
-1) The CLT only gives an approximation to the normal, not equality.
-2) While the standard CLT can be easily applied to the case where the parameter $\alpha$ is an integer tending to $\infty$, you have a slight problem if $\alpha_n \to \infty$ with $\alpha_n$ real; consider Douglas Zare's comment: the sequence of parameters $\alpha _n /\left\lfloor {\alpha _n } \right\rfloor $ is not fixed (though it tends to $1$).
-3) This problem is essentially a special case of this recent one. Indeed, if $N$ has parameter $\alpha=t$, then it is equal in distribution to $X_t$, where $X = \{X_t: t \geq 0\}$ is a Poisson process with rate $1$. But $X$ is just a special case of a compound Poisson process, where the jump distribution is the $\delta_1$-distribution (this corresponds to the $Y_i$ being equal to $1$ in the linked post). So, instead of considering $\frac{{N - E(N)}}{{\sigma (N)}}$, you can consider $\frac{{X_t - E(X_t )}}{{\sigma (X_t )}}$ (which has been done in the linked post).
-Remark. Note that in the linked post the $\frac{{X_t - E(X_t )}}{{\sigma (X_t )\sqrt {N_t } }} \to {\rm N}(0,1)$ appearing in question 1 should have been replaced with $\frac{{X_t - E(X_t )}}{{\sigma (X_t ) }} \to {\rm N}(0,1)$.<|endoftext|>
-TITLE: If $g$ is a primitive root of $p^2$ where $p$ is an odd prime, why is $g$ a primitive root of $p^k$ for any $k \geq 1$?
-QUESTION [7 upvotes]: I saw this theorem referenced in a paper on performing shuffles in an array. There is a proof on pages 20-21 of this link, but the proof is very terse and omits a lot of intuition.
-Is there an intuitive explanation for why this theorem is true, or at least a proof that omits fewer details?
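-For what it's worth, here is the brute-force check I ran for small cases before asking - a sketch in plain Python (standard library only), computing multiplicative orders directly:
-
-    # For each odd prime p, find a primitive root g of p^2 and check that
-    # the order of g modulo p^k equals phi(p^k) = p^(k-1)*(p-1) for k <= 5.
-    def order(g, m):
-        k, x = 1, g % m
-        while x != 1:
-            x = x * g % m
-            k += 1
-        return k
-
-    for p in [3, 5, 7, 11]:
-        g = next(a for a in range(2, p * p)
-                 if a % p != 0 and order(a, p * p) == p * (p - 1))
-        for k in range(1, 6):
-            m = p ** k
-            assert order(g, m) == m // p * (p - 1)
-        print(p, g, "ok")
-
-Every case passes, but of course that is not a proof.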
-
-REPLY [4 votes]: First we claim that for any $k\geq 1$ we can write $$g^{p^{k-1}(p-1)}=1+a_kp^k\qquad\text{where }p\nmid a_k$$
-Clearly $g^{p-1}\equiv 1\pmod{p}$ and hence $g^{p-1}=1+a_1p$ for some $a_1$. Note that $p\nmid a_1$ because otherwise $g^{p-1}\equiv 1\pmod{p^2}$ (a contradiction to the fact that $g$ is a primitive root mod $p^2$).
-Assume that $g^{p^{s-1}(p-1)}=1+a_sp^s$ where $p\nmid a_s$. Raising to the $p$-th power and expanding by the binomial theorem (here we use that $p$ is odd and $s\geq 1$, so all the later terms are divisible by $p^{s+2}$), it follows that
-$$g^{p^s(p-1)}=1+a_sp^{s+1}+ \text{ a multiple of }p^{s+2}=1+a_{s+1}p^{s+1}$$
-If $p\mid a_{s+1}$ then $p^{s+2}\mid a_sp^{s+1}$, which implies that $p\mid a_s$ (contradiction!). Thus $p\nmid a_{s+1}$.
-Now we claim that for $k\geq 2$, the order of $g$ mod $p^k$ is $p^{k-1}(p-1)$. For $k=2$, the statement is true. Assume the order of $g$ mod $p^s$ is $p^{s-1}(p-1)$. Since
-$$g^{p^s(p-1)}=1+a_{s+1}p^{s+1}$$
-clearly $g^{p^s(p-1)}\equiv 1\pmod {p^{s+1}}$. Suppose $l$ is a number such that $g^l\equiv 1\pmod {p^{s+1}}$. Then obviously $g^l\equiv 1\pmod{p^s}$. Since $p^{s-1}(p-1)$ is the order of $g$ mod $p^s$, we have $l=tp^{s-1}(p-1)$ for some integer $t$.
-Now $$g^l=g^{tp^{s-1}(p-1)}=1+ta_sp^s+\text{ a multiple of }p^{s+1}$$
-Since $g^l\equiv 1\pmod {p^{s+1}}$, it follows that $p^{s+1}\mid ta_sp^{s}$. But $p\nmid a_s$. Hence $p\mid t$. Write $t=pt_2$. It follows that $l=t_2p^s(p-1)$ and we conclude that $p^s(p-1)$ is the order of $g$ mod $p^{s+1}$.
-The case that $g$ is a primitive root mod $p$ is left to the reader :).<|endoftext|>
-TITLE: Why has the Perfect cuboid problem not been solved yet?
-QUESTION [13 upvotes]: Why hasn't the Perfect Cuboid Problem been solved yet, whereas (possibly) more nontrivial ones such as FLT and Sphere packing have been solved?
-I understand that calling some problems more nontrivial may be naive and seemingly trivial problems can be deceptively tricky, as with the FLT. All the same, FLT and, to a lesser extent, Sphere Packing garnered lots of attention by successive generations of mathematicians, until someone decided to finish it off and succeeded.
-But, AFAIK, the Perfect Cuboid (PC) problem hasn't generated this kind of attention, perhaps because Fermat didn't leave a note about it. Is that the reason for PC remaining unsolved? One of the standard references for PC, Unsolved Problems in Number Theory, suggests several numerical results (p.178), but of course nothing like a proof, much like the status of FLT and Sphere Packing many decades ago.
-
-REPLY [2 votes]: It's just not a particularly interesting problem. Apart from the romantic (and ridiculous) idea that Fermat had a secret proof of his conjecture that was lost to history, there's not much compelling about FLT aside from the fact that it's very easy to state. What makes it interesting is that Frey proved that given a nontrivial rational point on the curve $x^n + y^n = z^n$ with $n > 2$, he could construct an elliptic curve $E/\mathbb{Q}$ that isn't modular. That would be significant; it ties into Taniyama-Shimura, the Hasse-Weil conjecture, the Langlands program, and so on. Without it, FLT would just be another arbitrary Diophantine equation with mild historical interest, relegated to amateur and recreational math. The brick problem is not as elegant as FLT and doesn't seem to tie into anything more significant, so it's not a topic of ongoing research.<|endoftext|>
-TITLE: Why $L^{r}(X)\cap L^{t}(X)\subset L^{s}(X)$ for $1\le r<s<t\le\infty$
-QUESTION: Suppose $1\le r<s<t\le\infty$. Show that there exist $\alpha,\beta>0$ with $\alpha+\beta=1$ so that
-$$
-\|f\|_s \;\leq\; \|f\|_r^\alpha\,\|f\|_t^\beta
-$$
-for any measurable function $f\colon X\to\mathbb{R}$. Use this to show that
-$$
-L^r(X) \cap L^t(X) \,\subset\, L^s(X).
-$$
-I know this is not supposed to be difficult, but I cannot solve it.
-
-REPLY [5 votes]: Hint: Write $\frac{1}{s} = \frac{\alpha}{r} + \frac{\beta}{t}$ and apply Hölder's inequality.<|endoftext|>
-TITLE: Can any of the exotic differentiable structures on $\mathbb R^4$ make $GL(\mathbb R^2)$ into an 'exotic' Lie group structure?
-QUESTION [8 upvotes]: I am just beginning to learn about Lie groups and am made somewhat uncomfortable by the textbook's handwavy decision to talk about Lie groups $GL(V)$ where $V$ is some $n$-dimensional real vector space. It is not clear to me that when $V\cong\mathbb R^2$ and $End(V)$ is made isomorphic to $\mathbb R^4$ after a choice of basis for $V$, that this $\mathbb R^4$ cannot be exotic (as I know nothing about exotic things except their existence). I believe that maybe the requirement that matrix multiplication be differentiable might settle the issue, but I have no idea how to go about showing this.
-Otherwise, if I choose to believe that all $n$-dimensional vector spaces for $n\neq 4$ have a unique differentiable structure (they have unique topologies for sure), it becomes obvious that talking about $GL(V)$ for arbitrary real vector spaces is perfectly sensical.
-So in short, can $GL(\mathbb R^2)$ ever be a Lie group not diffeomorphic to standard $GL(\mathbb R^2)$ inside $\mathbb R^4$?
-
-REPLY [8 votes]: Another way of seeing the answer is "no" is the following theorem:
-Let $G$ and $H$ be two Lie groups. Suppose $f:G\rightarrow H$ is a continuous homomorphism. Then $f$ is smooth.
-Letting your $G = GL(2)$ and $H = GL(2)$ (exotic), the identity map $i:GL(2)\rightarrow GL(2)$ is clearly continuous and a homomorphism, hence it is smooth. The exact same argument shows the inverse is smooth, so $i$ is a diffeomorphism.
-To prove the theorem to begin with, I'd begin with Cartan's Theorem, which states that any (topologically) closed subgroup of a Lie group is automatically a smooth submanifold. The proof of this fact can be found in, for example, John Lee's book Introduction to Smooth Manifolds on pg. 526.
-Now, given a continuous homomorphism $f:G\rightarrow H$, consider the graph of $f$, $K=\{(g,f(g))\}\subseteq G\times H$. Because $f$ is a homomorphism, $K$ is a subgroup. Because $f$ is continuous, $K$ is a closed subset. By Cartan's Theorem, $K$ is an embedded Lie subgroup.
-Next, I claim that if $\pi_1:G\times H\rightarrow G$ is the projection, then $\pi_1|_K:K\rightarrow G$ is actually a diffeomorphism. It is 1-1 because $f$ is a function and onto because the domain of $f$ is all of $G$. It is smooth since it is the restriction of a smooth function to a smooth submanifold (thanks to Cartan's theorem). The fact that the inverse is smooth takes more work. One way to see it is that every smooth homomorphism has constant rank, and by Sard's theorem the map has full rank somewhere. (If you'd like, I can expand on why the inverse is smooth.)
-Finally, notice then that if $\pi_2:G\times H\rightarrow H$ is the other projection map, then $f = \pi_2 \circ \pi_1^{-1}$ is a composition of smooth maps, so is smooth.<|endoftext|>
-TITLE: Every Function in a Finite Field is a Polynomial Function
-QUESTION [32 upvotes]: From a bank of past master's exams I am going through:
-
-Let $F$ be a finite field. Show that any function from $F$ to $F$ is a polynomial function.
-
-I know that finite fields are fields of $p$ elements for $p$ prime [EDIT: It's actually $p^n$ for $p$ prime; see comment below].
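-Here is the brute-force experiment that convinced me the statement is at least plausible - a plain Python sketch of my own (for $p=5$), enumerating all coefficient tuples:
-
-    # Over F_p (p = 5): distinct coefficient tuples (a_0, ..., a_{p-1})
-    # give distinct functions F_p -> F_p; both sets have p^p elements,
-    # so every function is hit by some polynomial of degree < p.
-    from itertools import product
-
-    p = 5
-    tables = set()
-    for coeffs in product(range(p), repeat=p):
-        table = tuple(sum(a * pow(x, i, p) for i, a in enumerate(coeffs)) % p
-                      for x in range(p))
-        tables.add(table)
-    print(len(tables) == p ** p)    # expected: True
-
-Anyway, here is my attempt at the general argument.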
I have $p$ choices for each of the $p$ elements to map to, so there are $p^p$ distinct functions. I think every function can be written in the form $f(x) = a_{p-1}x^{p-1} + \dots + a_0x^0$. For then, given the values $f(0), f(1), \ldots, f(p-1)$, I can solve for the coefficients by the linear system of equations
-$$ a_0 + \sum_{i=1}^{p-1} n^i a_i = f(n).$$
-This then gives me a $p \times p$ square matrix over the field $\mathbb{F}_p$:
-$$\left( \begin{array}{ccccc}
-1& 0 & 0 & \ldots & 0 \\
-1& 1 & 1 & \dots & 1 \\
-1& 2 & 4 & \dots & 2^{p-1} \\
-\vdots & \vdots & \vdots & & \vdots \\
-1 & p-1 & (p-1)^2 & \dots & (p-1)^{p-1}
-\end{array} \right)
-\left( \begin{array}{c}
-a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{p-1}
-\end{array}\right)
-=
-\left( \begin{array}{c}
-f(0) \\ f(1) \\ f(2) \\ \vdots \\ f(p-1)
-\end{array} \right)$$
-If I can show this matrix is invertible, then I can always find the $a_i$. But I am a bit stumped on how to show this (partially because I don't think I've ever done linear algebra in a vector space over a finite field). It does not seem easy to show linear independence, or nonzero determinant, or full row rank.
-
-Alternatively (I just thought of this), can I show this is true by arguing that the map between the two sets (the set of polynomials of degree at most $p-1$, and the set of functions $F \to F$) is injective, and that it must be a bijection because the sets have the same cardinality $p^p$?
-
-REPLY [10 votes]: There is a very simple argument based only on dimension and root counting. You want to show that the map$~g$ from the polynomials in $\def\Fq{{\Bbb F_q}}\Fq[X]$ to their polynomial functions in $\Fq^\Fq=\{\,f:\Fq\to\Fq\,\}$ is surjective. It is easy to see the map is $\Fq$-linear and that $\dim(\Fq^\Fq)=q$. It is actually easier to show the stronger statement that the restriction$~\tilde g$ of$~g$ to the subspace $V=\{\,P\in\Fq[X]\mid\deg(P)<q\,\}$ is bijective: $\tilde g$ is injective because a nonzero polynomial of degree less than $q$ has fewer than $q$ roots, so it cannot induce the zero function on $\Fq$; and since $\dim V=q=\dim(\Fq^\Fq)$, an injective linear map between spaces of the same finite dimension is bijective. In particular $g$ is surjective.<|endoftext|>
-TITLE: When is the product of two quotient maps a quotient map?
-QUESTION [27 upvotes]: It is not true in general that the product of two quotient maps is a quotient map (I don't know any examples though).
-Are any weaker statements true? For example, if $X, Y, Z$ are spaces and $f : X \to Y$ is a quotient map, is it true that $ f \times {\rm id} : X \times Z \to Y \times Z$ is a quotient map?
-
-REPLY [3 votes]: There is another example that I just learnt from exercise 16, section 2.2 of Algebraic Topology by Tammo tom Dieck. It is the following:
-
-Let $X$ be a topological space and $A$ be a compact subspace of $X$. Denote the quotient map by $p:X \to X/A$. Then $ p \times id_Y : X \times Y \to X/A \times Y$ is a quotient map for any arbitrary space $Y$.<|endoftext|>
-TITLE: Continuous function with local maxima everywhere but no global maxima
-QUESTION [6 upvotes]: Can there be such a function:
-$f \colon \mathbb R \to \mathbb R$ is continuous and non-constant. It has a local maximum everywhere, i.e., for all $x \in \mathbb R$ there is some $\delta_x>0$ such that $f(x)\geq f(y)$ for all $y \in B(x,\delta_x)$. And yet, $f$ has no global maximum?
-Thank you.
-PS: $\mathbb R$ is with the usual topology. This is true for $\mathbb R$ with the upper-limit topology.
-
-REPLY [5 votes]: Surprisingly enough, there are a few papers on this (like here). So other people have considered variations of this problem.
-I think that you should also be familiar with a related but incredibly interesting fact: it is possible to have a continuous function that has a local maximum or minimum at every rational number.
That's a dense subset, which is astounding enough as it is. (One should note that having a countable number of local maxima is all one can ask for. To see this, note that to every maximum one can assign an interval over which it is the maximum, from the definition of a local maximum. But there is a rational number in each such interval, and so there can be at most countably many.)
-One such function is the Weierstrass function. It seems not so hard to alter this so that it has countably many maxima and minima, but no global maximum or minimum.<|endoftext|>
-TITLE: Number of local maxima of a function
-QUESTION [26 upvotes]: Let $z_j$ ($j=1,\dots, k$) be $k$ points on the complex plane, none of which lies on the real line. Is it always true that the function
-$$ F(x)=\sum_{j=1}^k \frac{1}{|x-z_j|^2} $$ has at most $k$ local maxima on the real line?
-
-REPLY [5 votes]: $F(x)=c$ has for any value of $c>0$ at most $2k$ solutions, which in a weak way supports the conjecture that the statement is always true.<|endoftext|>
-TITLE: Would it be fine to use Serge Lang's two Calculus books as textbooks for a freshman maths major?
-QUESTION [5 upvotes]: I'm a freshman majoring in maths, but the textbook recommended by the professor of the calculus course (Calculus: A Complete Course by Robert A. Adams) is too expensive. I found Serge Lang's two calculus books in a second-hand bookstore. Is it okay to use these two books as textbooks instead?
-"A First course in Calculus", Serge Lang, UTM, Springer
-"Calculus of Several Variables", Serge Lang, UTM, Springer
-
-REPLY [13 votes]: Lang's book is great and I heartily recommend it. Unlike the rest of Lang's books, which he wrote in order to learn the subject in question himself (and which books concern themselves with mathematics of significantly higher level), his Calculus books were written explicitly with the student in mind.
-What this means is that it (at least his single-variable book; I haven't read the multi one) is the most well-arranged and pedagogically sane book on Calculus I've come across. Its intended audience consists of serious students who want to learn, but don't necessarily have a lot of experience with mathematics: the book is more or less self-contained with respect to giving you all the necessary tools you need to solve the problems; it also has the virtue of being sufficiently rigorous and honest in its explanation of the key ideas. Many other textbooks either sacrifice ideas and intuition for logical formalism (Spivak's book is in fact an analysis book in disguise, I believe, so it's not even playing the same game), or they eschew a rigorous and careful treatment of ideas because the authors make no distinction between math being simple and math being easy.
-But so, to answer your question: if you want to acquire a good understanding of Calculus, Lang will give it to you, and may even give you a better one than other textbooks, if you read him closely enough. (The real mathematician's answer, of course, is that you go to the library and check out and read several books on Calculus to get an idea of the various perspectives, since no one textbook is perfect - though in my eyes Lang's is as close to perfect as we have.)<|endoftext|>
-TITLE: Find equation for hyperbola
-QUESTION [6 upvotes]: I'm just taking (failing) a simple algebra class; I can't figure this one out, no one can explain it to me, and the book just tells me to do it.
-
-Find an equation for the hyperbola described:
-foci at $(-4,0)$ and $(4,0)$; asymptote the line $y=-x$.
-
-So I know that since the numbers on the $x$-axis are changing, it will be a horizontal hyperbola. That means the center is $0$ and $c$ is $4$.
-I know the slope is $b/a$ for horizontal equations, so I know that $b/a = -1$.
-From that I can get $b = -a$.
-This is as far as I can get; my book basically does these steps in the solution manual, except they get $-b/a = -1$, $b=a$, and I don't even know why. I can't work past this point without graphing, and I know there is supposed to be a way just by working out the algebra, but I don't see a solution.
-I might not be prepared for this test, but I am prepared to fail the test.
-
-REPLY [3 votes]: As the foci are on the $x$-axis and symmetric with respect to the $y$-axis,
-the equation of the hyperbola is
-$$\frac{x^{2}}{a^{2}}-\frac{y^{2}}{b^{2}}=1.$$
-Such a hyperbola has two asymptotes:
-$$y=\frac{b}{a}x\qquad\text{and}\qquad y=-\frac{b}{a}x,$$ where both $a$ and $b$ are positive. The given asymptote is $y=-x$ and the
-other one is $y=x$. The equation $y=-\frac{b}{a}x$ should be equivalent to $
-y=-x$, which implies that $-\frac{b}{a}=-1$. Hence $b=a$. Now you use the
-information on the foci to find $a$. The distance from each focus to the
-origin is $c=4$. These numbers are related by the equation $$a^{2}+b^{2}=c^{2}
-.$$ For $b=a$ and $c=4$, we have $a^{2}+a^{2}=16$, thus $a=2\sqrt{2}$ (the
-other solution is negative), and the equation of the hyperbola is
-$$\frac{x^{2}}{8}-\frac{y^{2}}{8}=1.$$<|endoftext|>
-TITLE: Multiplication in Permutation Groups Written in Cyclic Notation
-QUESTION [51 upvotes]: I didn't find any good explanation of how to perform multiplication in permutation groups written in cyclic notation. For example, if
-$$
- a=(1\,3\,5\,2),\quad b=(2\,5\,6),\quad c=(1\,6\,3\,4),
-$$
-then why does $ab=(1\,3\,5\,6)$ and $ac=(1\,6\,5\,2)(3\,4)$?
-
-REPLY [3 votes]: There is a small example on this page. Basically, multiplication of permutations is applying the permutations from right to left to an unaltered sequence.<|endoftext|>
-TITLE: Inverse Image of Maximal Ideals
-QUESTION [19 upvotes]: Given a map of commutative rings with unit, it is often the case that the inverse image of a maximal ideal is not maximal. For example, consider the inclusion $\mathbb{Z} \subseteq \mathbb{Q}$.
-However, it is well-known that the inverse image of a maximal ideal under a map of finitely generated algebras over an algebraically closed field is maximal.
-Are there other examples where we see this same behavior? For example,
-
-Is the inverse image of a maximal ideal under a map of finitely generated $\mathbb{Z}$-algebras maximal?
-
-REPLY [7 votes]: Yes, because $\mathbb{Z}$ is a Hilbert-Jacobson ring. See e.g. $\S 12.2$ of these notes.<|endoftext|>
-TITLE: How many sides does a circle have?
-QUESTION [189 upvotes]: My son is in 2nd grade. His math teacher gave the class a quiz, and one question was this:
-
-If a triangle has 3 sides, and a rectangle has 4 sides,
- how many sides does a circle have?
-
-My first reaction was "0" or "undefined". But my son wrote "$\infty$", which I think is a reasonable answer. However, it was marked wrong with the comment, "the answer is 1".
-Is there an accepted correct answer in geometry?
-edit: I ran into this teacher recently and mentioned this quiz problem. She said she thought my son had written "8." She didn't know that a sideways "8" means infinity.
-
-REPLY [2 votes]: One way to understand the question and demonstrate applicability is to consider the problem, and the efficiency, of finding the area of the union of 2 overlapping circles versus 2 overlapping rectangles or squares.
-Let us now constrain the union area to be constant.
-In the case of circles it will always be the same shape, no matter which way the circles are positioned in relation to one another.
-In the case of squares, it would be the same shape too.
-In the case of rectangles, the shape would vary.
-We could here argue that the circle and square both have 1 side, because they are defined by a single length (radius or diagonal), or in other words, they have no apparent "orientation".<|endoftext|>
-TITLE: algorithmic checking of proofs
-QUESTION [5 upvotes]: Is it possible to check if a proof is correct algorithmically (especially with computer aid)?
-I ask this question because I find that a lot of time is taken up during lectures going through the proofs of theorems, which is basically just checking someone else's work (IMO). I think time would be better spent if the ideas behind the proof were discussed (how to develop such a method to solve such a problem). Also, I find a proof without motivation very hard to follow.
-
-REPLY [6 votes]: Sure, there's a whole literature on automated deduction, which includes checking proofs as well as finding them.
-Automated proof checking is a different thing from what you see in class. Math is all about skepticism, and about not believing things until you have been shown something definitively. Part of that has to do with motivations and concepts, but it wouldn't be math (it'd be philosophy) if you weren't manipulating ideas with some amount of formalism.
-What I mean by skepticism is things like making a statement:
-$$\sum_{n\ge1} 2^{-n} = 1$$
-and the mathematical adversary (at a certain level) will say
-
-How do you know that?
-
-You can talk all day about why you care (motivation) or give a half-minute visualization ("Oh I see now"), but you don't know it until you do some symbolic manipulation (or well-founded conceptual manipulation like the Greeks did (ah... this is modern mathematics, not Babylonian or medieval mathematics, where you got perfectly fine results without worrying about proof)). So the way you know something in math is by, at some point, having to do the detailed grunt work of pushing the symbols around. At a later point you don't have to worry about the pushing around, because you know you can do it if you have to. Look at any math journal - it's mostly narrative interspersed with single equations, very few derivations as such, or at least they don't look like the mess you see in class (also, in class the teacher is probably speaking the narrative, but writing the symbols and leaving out the pictures because they're too hard to draw).
-There's another difficulty, and that's pragmatic. In the teaching setting, there are students having all different learning strategies: some are visual, some need repetition, etc., and since everything is geared towards merit, it encourages the teaching of testable things. And what's more easily testable (in math at least) is derivations (calculations or proofs), not essays on how category theory and set theory are comagisterial foundations of mathematics.
-I think I may have gone astray here... yes, it'd be nice to have a little more explanation and motivation of how to get from A to B, but sometimes (most of the time) you also need to show the actual path from A to B.<|endoftext|>
-TITLE: Prove that $\pi$ is a transcendental number
-QUESTION [19 upvotes]: Does anyone have a link to a site that confirms that $\pi$ is a transcendental number?
-Or, can anyone show how to prove that $\pi$ is a transcendental number?
-Thank you in anticipation!
-
-REPLY [17 votes]: As suggested by Yuval's comment, the most straightforward way of showing that $\pi$ is transcendental proceeds through the Lindemann–Weierstrass theorem that $e^x$ is transcendental if $x$ is (nonzero and) algebraic; since $e^{i\pi}=-1$ is algebraic, $i\pi$ must be transcendental, and therefore $\pi$ must be as well (if $\pi$ were algebraic, then $i\pi$ would be algebraic too, since $i$ is algebraic). You can find a rough proof of the theorem at its Wikipedia page.
-
-REPLY [10 votes]: Try the short paper The transcendence of $\pi$ by Niven
-and his book Irrational Numbers.<|endoftext|>
-TITLE: Collisions in a sample of uniform distribution
-QUESTION [5 upvotes]: Asked at a Microsoft interview:
-Assume you have a uniform distribution (can be discrete or continuous) of size X and you randomly select a sample of size Y.
-1) What is the probability, in terms of X and Y, of a collision?
-2) What is the expected number of collisions, in terms of X and Y?
-
-REPLY [4 votes]: For the chance of a collision, see this question.
-Derek Jennings's answer there with your notation reads:
-$$\mathbb{P}(\mbox{collision})=1 - \left( 1- \frac{1}{X} \right)\left( 1- \frac{2}{X} \right)\cdots
 \left( 1- \frac{Y-1}{X} \right).$$
-
-To answer the second question, we should decide what counts as a collision.
-For instance, if the sample is $1,2,1,1,3$ then you could say there is only one
-collision, since only one value is repeated. On the other hand, twice during the course
-of the sampling we could say "Hey, I've already seen that value". Under this second interpretation, there were two collisions.
-Interpretation 1:
-To find the expected number of collisions, it is useful to introduce the identically distributed indicator random variables
-$Z(i)$ for $1\leq i\leq X$, where $Z(i)=1$ if item $i$ appears more than once, and $Z(i)=0$ otherwise.
-The number $N(i)$ of times that item $i$ appears is a binomial$(Y,1/X)$ random variable, so $$\mathbb{P}(Z(i)=0)=\mathbb{P}(N(i)\leq 1)=\left({X-1\over X}\right)^{Y}+Y\left({X-1\over X}\right)^{Y-1}\left({1\over X}\right),$$
-and
-$$\mathbb{E}(\mbox{number of collisions})=\sum_{i=1}^X \mathbb{E}(Z(i))
-=X\mathbb{P}(Z(1)=1) =X-X\left({X-1\over X}\right)^Y -Y \left({X-1\over X}\right)^{Y-1}.$$
-Interpretation 2:
-With similar arguments, you can calculate
-$$\mathbb{E}(\mbox{number of collisions})=Y+X\left({X-1\over X}\right)^Y -X.$$
-
-REPLY [2 votes]: For the continuous version, the probability of a collision will be $0$. That is, if the random variable $W$ has uniform distribution on the interval $[a, b]$, and we take a sample of size $n$, the probability of a collision is $0$. Indeed this is true for any continuous distribution.
-For the discrete version, the answer is obviously different. This is a standard first probability course problem. You can find full information by searching under "Birthday Problem".<|endoftext|>
-TITLE: When to give up on math?
-QUESTION [15 upvotes]: How do I know when I should give up on math?
I can't pass any of my math tests no matter how much I study. I have tutors, attend office hours, and I still can't do better than a D on any test. Most of the advice I hear is to take the class two or three times, which is ridiculous to me. It just seems that I am not good at math for whatever reason. I put at least 15 hours a week into math studies outside of class, and this is only an entry-level college algebra class.
-I was planning on an engineering degree, because I don't know how to pick a major other than just picking one, so I need math.
-
-REPLY [11 votes]: You should be aware that 15 hours per week is not a lot when you missed out on math during all your high school time. Just imagine that other students put in, say, 5-10 hours per week during their high school years and that you have to make up for this lost time. It will take a year of working 15 hours per week to just be at their starting level, but unfortunately, it is not clear that you are trying to plug the holes in your knowledge before building on it. Why do you expect to be at the same level as students who actually learned something during high school?
-For me, the red flag in your post is not that you get a D, but that you do not seem to know before the test that you have not properly learned the content of the course. Did you feel that you had understood the course before the test? If yes, did you discuss with your tutors/professors where your self-assessment goes so wrong? If no, well, what is it that you do not understand, and why don't you give us a concrete example?
-Another possibility, of course, is that you understand mathematics but have dyslexia-like problems with calculations; but I guess that you would have mentioned it if you tend to interchange digits or have trouble with basic arithmetic.
-
-My suggestions:
-a. Look at a site like Alcumus to practice some basics
-http://www.artofproblemsolving.com/Alcumus/Introduction.php
-(Note that not all of the problems there are simple, but many are, and they come with difficulty levels.)
-b. Try to pinpoint your problem and ask a more precise question.<|endoftext|>
-TITLE: Do results from any $L^p$ space for functions hold in the equivalent $\ell^p$ spaces for infinite sequences?
-QUESTION [6 upvotes]: For example, is $\ell^2$ self-dual like $L^2$? If some $x[n]\in\ell^1\cap\ell^2$, then does it have a Fourier transform in $\ell^2$?
-
-REPLY [6 votes]: To complement what Arturo said, I would like to point out that there are some properties of $\ell^p$ spaces that have no correspondence in $L^p(\Omega)$ spaces. The easiest of these concerns inclusion: we have
-$$\ell^1 \subset \ell^2 \subset \ldots \subset \ell^\infty$$
-but, if $\Omega$ is an open subset of some $\mathbb{R}^n$, it's certainly not true that
-$$L^1(\Omega) \subset L^2(\Omega) \subset \ldots \subset L^\infty(\Omega).$$
-In fact, if $\Omega$ is bounded (or, more generally, if it is a finite measure space) then
-$$L^\infty(\Omega) \subset \ldots \subset L^2(\Omega) \subset L^1(\Omega),$$
-that is, the inclusions are reversed.
-Another specific property of $\ell^p$ spaces regards duality. Riesz's theorem asserts that, for $1 < p < \infty$ and $\frac{1}{p}+\frac{1}{p'}=1$, the mapping
-$$f\in L^{p'}(\Omega) \mapsto T_f \in [L^p(\Omega)]',\quad \langle T_f, g \rangle= \int_{\Omega}f(x)g(x)\, dx;$$
-is an isometric isomorphism. This holds true for every measure space and so for $\ell^p$ also. However, this theorem gives no information about the extreme cases $p=+\infty, p'=1$, which have to be studied separately, yielding various results.
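-(A quick numerical illustration of the two inclusion chains - a sketch in plain Python; the witnesses $x_n=1/n$ and $f(x)=x^{-1/2}$ are my choice of examples:)
-
-    # x_n = 1/n lies in l^2 but not in l^1, while f(x) = x^(-1/2) on (0,1)
-    # lies in L^1 but not in L^2, so neither inclusion chain can be reversed.
-    N = 10**6
-    print(sum(1.0 / n for n in range(1, N)))      # ~ log N, diverges
-    print(sum(1.0 / n**2 for n in range(1, N)))   # -> pi^2/6, converges
-
-    for eps in (1e-2, 1e-4, 1e-6):                # integrate over (eps, 1)
-        n, h = 10**5, (1.0 - eps) / 10**5
-        s1 = sum((eps + (k + 0.5) * h) ** -0.5 * h for k in range(n))
-        s2 = sum((eps + (k + 0.5) * h) ** -1.0 * h for k in range(n))
-        print(eps, round(s1, 2), round(s2, 2))    # s1 -> 2, s2 diverges
-
-Now back to the various results for the extreme cases $p=+\infty$, $p'=1$.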
One of those is the following.
-Proposition Let $c_0$ be the subspace of $\ell^{\infty}$ consisting of all sequences $x=(x_n)_{n \in \mathbb{N}}$ s.t.
-$$\lim_{n \to \infty}x_n=0.$$
-Then the mapping
-$$y=(y_n) \in \ell^1 \mapsto T_y \in [c_0]',\quad \langle T_y, x \rangle=\sum_{n \in \mathbb{N}}y_nx_n;$$
-is an isometric isomorphism and we can write
-$$\ell^1 \simeq [c_0]'.$$
-As far as I know, we have no direct generalization of this to $L^1(\Omega)$ spaces. One may conjecture, for example, that the following is an isomorphism:
-$$f \in L^1(\mathbb{R}) \mapsto T_f \in [C_0(\mathbb{R})]'$$
-(here $C_0(\mathbb{R})$ stands for "continuous functions on the line vanishing at infinity"). But this is not true, because that mapping is not surjective: $[C_0(\mathbb{R})]'$ contains $\delta$, the linear functional defined by the equation
-$$\langle \delta, g \rangle=g(0),\quad g \in C_0(\mathbb{R});$$
-and we have no representation of $\delta$ as $\delta=T_f$ for some $f \in L^1(\mathbb{R})$. In fact, suppose such an $f$ exists. Then, for all $g\in C_0(\mathbb{R})$ whose support does not contain $0$, we would have
-$$\int_\mathbb{R}f(x)g(x)\, dx=0,$$
-so that, for every open subset $A$ of $\mathbb{R}-\{0\}$, $f=0$ a.e. on $A$. But this forces $f=0$ a.e. on $\mathbb{R}$ and so $T_f=0$, which is a contradiction since $\delta$ certainly is not null.<|endoftext|>
-TITLE: Classification of lens space
-QUESTION [7 upvotes]: Let $L(p,q)$ be the lens space, that is $L(p,q)=S^3/\mathbb{Z}_p$.
-Here, $\mathbb{Z}_p$ acts on $S^3$ by $(z_1,z_2)\mapsto (\rho z_1,\rho^q z_2)$, $ \rho=e^{\frac{2\pi i}{p}}$.
-It is well known that
-$L(p,q)$ and $L(p',q')$ are diffeomorphic if and only if $p'=p, q'=\pm q^{\pm1}$ (mod $p$).
-In A. Hatcher's notes, pages 39-42, there is a proof of the above classification theorem of lens spaces using the uniqueness of the Heegaard torus in a lens space up to isotopy. But I run into some misunderstandings with his argument when I follow it line by line.
-Where can I find the original proof of the classification of lens spaces using the uniqueness of the Heegaard torus up to isotopy?
-Note: I know that there is a proof that uses the Whitehead torsion of a lens space and its invariance under homeomorphism, with which I am already sufficiently familiar.
-
-REPLY [7 votes]: The proof that Hatcher presents is due to Bonahon and Otal. The reference is here:
-MR0663085 (83f:57008)
-Bonahon, Francis; Otal, Jean-Pierre
-Scindements de Heegaard des espaces lenticulaires. (French. English summary) [Heegaard splittings of lens spaces]
-C. R. Acad. Sci. Paris Sér. I Math. 294 (1982), no. 17, 585–587.
-57N10
-If I recall, I believe Hatcher's write-up is similarly detailed to Bonahon and Otal's argument. And there are steps missing in both presentations, but they're readily filled.<|endoftext|>
-TITLE: Modified Dirichlet function non-differentiability
-QUESTION [6 upvotes]: I need some ideas to start on this problem. Show that the modified Dirichlet function defined as $D_M(x)=\begin{cases}0&\mbox{if }x \notin \mathbb{Q}, \\ \frac{1}{b}&\mbox{for } x = \frac{a}{b} \mbox{ with } \gcd(a,b) = 1\end{cases}$
-is not differentiable at any $x_0 \in(c,d)\subset\mathbb{R}$.
-
-REPLY [4 votes]: We only need to check this at the irrational numbers, since they are the only continuity points (at a rational the function is not even continuous, so certainly not differentiable there).
-So let $x_0\notin\mathbb Q$. We ask whether the limit
-$$\lim_{x\to x_0} \frac{f(x)-f(x_0)}{x-x_0}$$ exists.
-Now, if $x\notin\mathbb Q$ then $\frac{f(x)-f(x_0)}{x-x_0}=0$.
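-(A quick experiment makes the trouble at the rational points visible - a sketch in plain Python; the choice $x_0=\sqrt2-1$ and the denominator range are mine:)
-
-    # Near x0, a rational p/q has f(p/q) >= 1/q (in lowest terms the value
-    # is 1/b with b dividing q), while |x0 - p/q| can be as small as ~1/q^2,
-    # so the difference quotient f(p/q)/|x0 - p/q| keeps setting records.
-    from math import sqrt
-
-    x0 = sqrt(2) - 1                    # an irrational point
-    best = 0.0
-    for q in range(1, 20001):
-        p = round(q * x0)               # best numerator for this q
-        quotient = (1.0 / q) / abs(x0 - p / q)
-        if quotient > best:
-            best = quotient
-            print(q, round(best, 1))    # grows without bound
-
-The records never stop growing; here is the classical reason.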
-For rationals we use Dirichlet's approximation theorem:
-For a given irrational $x_0$, the inequality
-$$\left| x_0 -\frac{p}{q} \right| < \frac{1}{q^2}$$
-is satisfied by infinitely many integers $p$ and $q$.
-Now, for a given $\varepsilon>0$ we can choose $q$ such that $\frac1{q^2}<\varepsilon$ and thus $\left| x_0 -\frac{p}{q} \right| < \frac{1}{q^2} < \varepsilon$.
-For $x=\frac pq$ we get
-$$\left|\frac{f(x)-f(x_0)}{x-x_0}\right| = \frac{\frac1q}{\left|x_0 -\frac{p}{q}\right|} > \frac{\frac1q}{\frac1{q^2}} = q.$$
-This shows that $\frac{f(x)-f(x_0)}{x-x_0}$ is unbounded in any neighborhood of $x_0$ (since $q$ can be chosen arbitrarily large).
-
-After answering the question I tried to google for differentiable "thomae function". Already the first result provides the following article, containing a much simpler proof:
-
-Kevin Beanland, James W. Roberts and Craig Stevenson: Modifications of Thomae's Function and Differentiability, The American Mathematical Monthly, Vol. 116, No. 6 (Jun.-Jul. 2009), pp. 531-535. link at author's blog, jstor.
-
-(I saw that I needed large denominators, which reminded me of Dirichlet, and I overlooked the simple way.)
-A little later I noticed that the simple proof was already suggested in one of joriki's comments - which I had overlooked too.
-
-I think it's worth mentioning the different names used for this function (personally, I like popcorn function) - I quote from Wikipedia:
-Thomae's function, named after Carl Johannes Thomae, also known as the popcorn function, the raindrop function, the ruler function, the Riemann function or the Stars over Babylon (by John Horton Conway), is a modification of the Dirichlet function.<|endoftext|>
-TITLE: Abelian theorem regarding Riesz summability
-QUESTION [6 upvotes]: This is my first time posting something here. If there is anything wrong, please inform me... Anyway, here is my question:
-Let $k$ be a nonnegative integer. We say a sequence $(a_n)$ is $(R, k)$-summable to $a$ if
-$ \displaystyle \lim_{x\to\infty} \sum_{n \leq x} a_n \left( 1 - \frac{\log n}{\log x} \right)^k = a,$
-and we denote $\sum a_n = a \ (R, k)$.
-It is easy to show that $(R, 0)$-summability is equivalent to ordinary summability, and $\sum a_n = a \ (R, k)$ implies $\sum a_n = a \ (R, j)$ for all $j \geq k$.
-My question here is this: Let $\alpha (s) = \sum a_n n^{-s}$. If $\sum a_n = a \ (R, k)$, then does the limit $\lim_{s \to 0^+} \alpha (s)$ exist? If so, then does the limit coincide with $a$?
-I was able to prove that $\alpha (s)$ can be analytically continued to $\Re (s) > 0$, and for this continuation, we have $\alpha (s) \to a$ as $s \to 0^{+}$. But it is not immediate, nor even clear, whether this guarantees the convergence of $\sum a_n n^{-s}$ for $\Re (s) > 0$. Or is there any other way to prove or disprove that $\sum a_n n^{-s}$ exists for $\Re (s) > 0$?
-
-REPLY [3 votes]: I guess this question is from "Multiplicative Number Theory" by Montgomery.
-As I studied this book too, I was stuck on this problem.
-Finally, after 6 years, I have an answer to this.
-A good reference for my answer is "The General Theory of Dirichlet Series" by Hardy and Riesz, Chapter 4.
-The answer to the first question is NO, because the existence of the limit as $s\rightarrow 0$ is not guaranteed. Consider the following,
-$$
-\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s}.
-$$
-This series is known to be Cesàro summable of order $k$ if $-k<\sigma=\textrm{Re}(s)$.
-(C,k) summability implies (R,k) summability.
Then as you can see, the Dirichlet series has abscissa of convergence $0$. Hence the conclusion cannot be true: use Cesàro summability of order $2$ at $s=-1$.
-As you said already, (R,k) summability implies the existence of an analytic continuation to $\textrm{Re}(s)>0$, and the limit equals the sum.<|endoftext|>
-TITLE: Mathematical symbol to reference the i-th item in a tuple?
-QUESTION [16 upvotes]: Given a tuple e=(x,y), how do I reference the 2nd item (y)?
-
-REPLY [15 votes]: I figured I would collect a number of the comments together into an answer so you would have something to accept (citing, so no one would hate on me for plagiarism).
-As with many types of mathematical notation, there are a number of possible variations here.
-
-Sometimes $p_2(e)$ or $\pi_2(e)$ is used to denote the 2nd projection (Martin Sleziak, FrancescoTurco).
-If you define an n-tuple as $\mathbf{x}\in\mathbb{R}^n$ (note the bold font), then the $i$th element can easily be addressed with $x_i$ (Hauke Strasdat).
-Sometimes even $e^{(2)}$ or $e^2$ (lhf).
-
-Just be sure you explain to the reader what you mean by the notation; don't assume the reader will understand (GEdgar).<|endoftext|>
-TITLE: On the equation $(a^2+1)(b^2+1)=c^2+1$
-QUESTION [8 upvotes]: How do you find all positive integers $a,b,$ and $c$ such that $(a^2+1)(b^2+1)=c^2+1$?
-
-REPLY [10 votes]: See Kenji Kashihara, Explicit complete solution in integers of a class of equations $(ax^2−b)(ay^2−b)=z^2−c$, Manuscripta Math. 80 (1993), no. 4, 373–392, MR1243153 (94j:11031).
-The review in Math Reviews says,
-"The author studies the Diophantine equation $(ax^2−b)(ay^2−b)=z^2−c$, where $a,b,c\in{\bf Z}$, $a\ne0$, $b$ divides 4, and in the case $b=\pm4$, then $c\equiv0\pmod 4$. This equation for $a=1, b=1$ has been treated by S. Katayama and the author [J. Math. Tokushima Univ. 24 (1990), 1--11; MR1165013 (93c:11013)], and the present paper extends the techniques to show that there exists a permutation group $G$ on all integral solutions of the equation, and also an algorithmic method for computing a minimal finite set of integral solutions, in the sense that all integral solutions are contained in the $G$-orbits of the set. Such minimal sets are listed for the equations with $a=2, b=\pm1, 0\lt|c|\le85$."
-Looks like this includes the case $a=1$, $b=-1$, $c=-1$ which is what we want.<|endoftext|>
-TITLE: Finding the distribution of a random variable with Laplace-Stieltjes transforms
-QUESTION [6 upvotes]: In an exercise series from my Queueing Theory course I am asked to find $E(W)$, $P(W > 0)$ and $P(W > 1)$, where $W$ is the waiting time in an $M/G/1$ queue. In this exercise, the interarrival times $A$ are i.i.d. and exponentially distributed with arrival rate $\lambda$, and the service times $B$ are i.i.d. with cdf $F(t) = 1 - \frac{1}{2}e^{-2t} - \frac{1}{2}e^{-\frac{2}{3}t}$ (so we're dealing with an $M/H_2/1$ queue, where $H_2$ denotes a hyperexponential distribution). We are also given that $\rho = \lambda E(B) = \frac{1}{2}$.
-By calculating $E(B) = 1$ and $E(B^2) = \frac{5}{2}$ and using that $E(W) = \frac{\rho}{1-\rho}\frac{E(B^2)}{2E(B)}$ I found $E(W) = \frac{5}{4}$.
-To determine $P(W > 1) = 1 - P(W \leq 1) = 1 - F_{W}(1)$ I need to know the cdf of the random variable $W$.
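-(Quick aside: before going further I checked the service-time moments symbolically - a sketch, with sympy assumed; this is just the verification I ran:)
-
-    # Verify E(B) = 1 and E(B^2) = 5/2 for the hyperexponential service
-    # time with density f_B(t) = F'(t) = exp(-2t) + (1/3) exp(-2t/3).
-    import sympy as sp
-
-    t = sp.symbols('t', positive=True)
-    f_B = sp.exp(-2 * t) + sp.Rational(1, 3) * sp.exp(-2 * t / 3)
-
-    print(sp.integrate(f_B, (t, 0, sp.oo)))          # 1   (proper density)
-    print(sp.integrate(t * f_B, (t, 0, sp.oo)))      # 1   = E(B)
-    print(sp.integrate(t**2 * f_B, (t, 0, sp.oo)))   # 5/2 = E(B^2)
-
-With the moments confirmed, on to the transform.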
Since the Laplace-Stieltjes transform of $B$, which has an $H_2$ distribution with $p_1 = p_2 = \frac{1}{2}$, is $$\tilde{B}(s) = p_1\frac{\mu_1}{\mu_1 + s} + p_2\frac{\mu_2}{\mu_2 + s} = \frac{1}{2+s} + \frac{1}{2+3s}$$ I can use the Pollaczek-Khinchin formula $$\tilde{W}(s) = \frac{(1-\rho)s}{\lambda \tilde{B}(s) + s - \lambda}$$ which gives, after plugging in $\tilde{B}(s)$, $\lambda = \frac{1}{2E(B)} = \frac{1}{2}$ and $\rho = \frac{1}{2}$, that $$\tilde{W}(s) = \frac{s}{2(\frac{1}{2+s} + \frac{1}{2+3s}) + s - \frac{1}{2}}$$ My question: does calculating the inverse Laplace-Stieltjes transform of $\tilde{W}(s)$ give me the cdf of $W$, and if yes, how would I go about doing that? Since $\tilde{W}(s)$ has no nice partial fraction decomposition (as one can see here) I'm having a hard time computing the inverse Laplace-Stieltjes transform of this expression. Or maybe I'm not seeing a simpler way of finding $P(W > 1)$? Any hints are greatly appreciated.
-edit I now see that I indeed need the inverse Laplace-Stieltjes transform of $W$; however, I'm still not able to do that.
-
-REPLY [12 votes]: Before answering the question, I want to stress that I consider this homework question one of the (much too few) homework questions asked on this site whose author respects the recommendations for asking homework questions: mainly, to show what one has tried and where one is stuck.
-In the present case, why is the OP stuck? My guess is that this is due to the conjunction of two facts:
-(1) The OP made a mistake in the computations presented.
-(2) The OP might not be aware of some automatic ways to deduce the distribution of a random variable from its Laplace transform when the Laplace transform is a rational fraction.
-We start with the Pollaczek-Khinchin formula for $\tilde W(s)=E(\mathrm{e}^{-sW})$. Plugging the values of $\tilde B(s)$, $\lambda$ and $\rho$ the OP mentions into this formula yields a different expression than the one the OP computed, namely,
-$$
-\tilde{W}(s) = \frac{\frac12s}{\frac12(\frac{1}{2+s} + \frac{1}{2+3s}) + s - \frac{1}{2}}.
-$$
-We note that this formula for $\tilde{W}(s)$ passes a test that the one proposed by the OP does not, namely that $\tilde{W}(s)$ does not degenerate when $s\to0$.
-Unsurprisingly, we can simplify $\tilde{W}(s)$: a factor $s$ appears in the denominator as well, which allows us to cancel the factor $s$ in the numerator. When the dust has settled, we get the key formula
-$$
-\tilde{W}(s) = \frac{4+8s+3s^2}{4+13s+6s^2}.
-$$
-We note that $\tilde{W}(0)=1$, as it should be.
-First use of the key formula: to compute the value of $P(W=0)$
-When $s\to+\infty$, $\mathrm{e}^{-sW}\to\mathbb{1}_{W=0}$ hence $\tilde{W}(s)\to P(W=0)$. Keeping only the $s^2$ terms of the numerator and the denominator yields
-the value the OP computed, that is,
-$$
-P(W=0)=\frac{3}{6}=\frac{1}{2}.
-$$
-Second use of the key formula: to compute the value of $E(W)$
-Classically $E(W)=-\tilde{W}'(0)$. Writing $\tilde{W}(s)$ as $\tilde{W}(s) = N(s)/D(s)$, we get the value the OP computed, that is,
-$$
-E(W)=-\frac{N'(0)}{D(0)}+D'(0)\frac{N(0)}{D(0)^2}=-\frac{8}{4}+13\frac{4}{4^2}=\frac{5}{4}.
-$$
-Third use (more involved) of the key formula: to compute the value of $P(W>1)$
-As noted by the OP, there seems to be no other way than
-to compute the whole distribution of $W$. We already know this distribution has an atom $\frac12$ at $w=0$ and we can guess it has an absolutely continuous part described by a density $\frac12f(w)$ on $w>0$.
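-(Aside: all the numbers below can be double-checked by simulating the queue with Lindley's recursion - a sketch, with numpy assumed; only an experiment, not part of the argument.)
-
-    # M/H2/1 waiting times via Lindley's recursion W' = max(0, W + B - A):
-    # interarrivals ~ Exp(rate 1/2); service ~ Exp(2) or Exp(2/3), each
-    # with probability 1/2, matching the parameters fixed above.
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    n = 10**6
-    A = rng.exponential(2.0, n)                       # mean 1/lambda = 2
-    mu = np.where(rng.random(n) < 0.5, 2.0, 2.0 / 3)  # mixed service rates
-    B = rng.exponential(1.0 / mu)
-    W = np.zeros(n)
-    for i in range(1, n):
-        W[i] = max(0.0, W[i - 1] + B[i - 1] - A[i])
-    print(W.mean())          # ~ 1.25 = E(W)
-    print((W == 0).mean())   # ~ 0.5  = P(W = 0)
-    print((W > 1).mean())    # ~ P(W > 1), to compare with the formula below
-
-Back to the analytical route.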
Thus, we want $f$ such that for every $s$,
-$$
-Q(s)=\int_0^{+\infty}\mathrm{e}^{-sw}f(w)\mathrm{d}w,\quad\mbox{where}\ Q(s)=2\tilde{W}(s)-2P(W=0).
-$$
-Now, $Q(s)$ is a rational function in $s$. If this rational function has simple poles, sooner or later we will want to invert expressions like
-$$
-\frac1{s+c}=\int_0^{+\infty}\mathrm{e}^{-sw}f_c(w)\mathrm{d}w.
-$$
-We should keep in mind the solution $f_c$ of this elementary case, which is simply
-$$
-f_c(w)=\mathrm{e}^{-cw}.
-$$
-From this point, our task is clear: to decompose $Q(s)$ into a linear combination of simple rational functions $1/(s+c)$ and to identify $f$ as the corresponding linear combination of functions $f_c$. If I am not mistaken,
-$$
-Q(s)= \frac{4+3s}{4+13s+6s^2}=\frac{a}{s+c}+\frac{a'}{s+c'},
-$$
-where $(a,a',c,c')$ solves
-$$
-a+a'=\frac12,\ ac'+a'c=\frac23,\ c+c'=\frac{13}6,\ cc'=\frac23.
-$$
-Hence
-$$
-f(w)=a\mathrm{e}^{-cw}+a'\mathrm{e}^{-c'w}.
-$$
-A last test to see whether our computations went astray: since $f$ is the density of a measure of mass $1$, we should have $\displaystyle\frac{a}{c}+\frac{a'}{c'}=1$.
-Finally, all this yields
-$$
-P(W>1)=\frac12\left(a\frac{\mathrm{e}^{-c}}c+a'\frac{\mathrm{e}^{-c'}}{c'}\right),
-$$
-and I think the numerical values needed to compute this are
-$$
-a=\frac{u-3}{4u},\ a'=\frac{u+3}{4u},\ c=\frac{13+u}{12},\ c'=\frac{13-u}{12},\
-u=\sqrt{73},
-$$
-hence,
-$$
-P(W>1)=\frac14\mathrm{e}^{-13/12}\left(\left(1-7/u\right)\mathrm{e}^{-u/12}+\left(1+7/u\right)\mathrm{e}^{u/12}\right).
-$$<|endoftext|>
-TITLE: Is integration by substitution a special case of Radon–Nikodym theorem?
-QUESTION [9 upvotes]: I was wondering
-
-if integration by substitution
-is a method only for the Riemann
-integral?
-if integration by substitution
-is a special case of the Radon–Nikodym
-theorem, and why?
-
-Thanks and regards!
-
-REPLY [9 votes]: There are measure-theoretic versions of integration by substitution, a couple of which can be found on the page you linked to (search the page for "Lebesgue"). Another version is an exercise in Royden's Real analysis (page 107 of the 2nd edition) which says that if $g$ is a monotone increasing, absolutely continuous function such that $g([a,b])=[c,d]$, and if $f$ is a Lebesgue integrable function on $[c,d]$, then $\displaystyle{\int_c^d f(y)dy=\int_a^bf(g(x))g'(x)dx}$. This is also an exercise in Wheeden and Zygmund's Measure and integral (page 124). One of the versions on the Wikipedia page generalizes this to subsets of $\mathbb{R}^n$ in the case where the change of variables is bi-Lipschitz.
-I don't see how it would be. The Radon-Nikodym theorem says that if $\nu$ and $\mu$ are measures on $X$ such that $\nu$ is absolutely continuous with respect to $\mu$, then there is a $\mu$-integrable function $g$ such that $\int_X fd\nu=\int_Xfgd\mu$ for all $\nu$-integrable $f$. Both integrals are over the same set, with no change of variables. Maybe I'm not seeing what you have in mind.
-
-However, you can at least derive the formula for linear change of variables for Lebesgue measure using the Radon-Nikodym theorem, and maybe there's more to this than I initially thought. If $T:\mathbb{R}^n\to\mathbb{R}^n$ is an invertible linear map and $m$ is Lebesgue measure, then $\int_{\mathbb{R}^n}fdm=|\det(T)|\int_{\mathbb{R}^n}f\circ Tdm$ for all integrable $f$. A proof of this (without Radon-Nikodym) is given in a 1998 article by Dierolf and Schmidt, and they mention that in the proof they could also have used the Radon-Nikodym theorem.
They don't pursue this, but the idea is that $f\mapsto\int_{\mathbb{R}^n}f\circ Tdm$ corresponds to an absolutely continuous measure on $\mathbb{R}^n$, so there is a $g$ such that $\int_{\mathbb{R}^n}f\circ Tdm=\int_{\mathbb{R}^n}fgdm$. In particular, considering $f=\chi_E$ shows that $m(T^{-1}(E))=\int_Egdm$ for all measurable $E$. From this you can show that $g$ must be constant, and the constant must be the measure of the image of the unit $n$-cube under $T^{-1}$, which is $|\det(T^{-1})|=\frac{1}{|\det(T)|}$.<|endoftext|>
-TITLE: Supplementary exercises for Herstein's Noncommutative Rings
-QUESTION [6 upvotes]: I've been studying from the book Noncommutative Rings by Herstein (not as a part of some official course), but unfortunately it doesn't contain any exercises apart from a few simple ones in the body. I was wondering if anyone had any suggestions on where I could find suitable exercises (since from what I've seen, this material is covered quite differently in different books, so it may not necessarily be possible to just take any other book on noncommutative algebra and use its exercises).
-Any help will be appreciated.
-
-REPLY [2 votes]: Louis Halle Rowen's books "Ring Theory" (Volumes I and II) are excellent comprehensive introductions to the theory of noncommutative rings and bring the reader up to a level from which he could begin research in the subject. (These books are more comprehensive than Herstein.) Furthermore, there is a wealth of examples and exercises in these books, which is really quite rare for mathematics texts at this advanced level.<|endoftext|>
-TITLE: Necessity of the Noetherian condition to derive a result about associated prime ideals
-QUESTION [5 upvotes]: Let $A$ be a Noetherian ring and $\mathfrak a$ be an ideal of $A$. Then it is well-known that the associated prime ideals of $\mathfrak a$ are those prime ideals that have the form $(\mathfrak a:x)$ for $x \in A$.
-I want to know whether the Noetherian condition is necessary or not, that is, for an arbitrary ring $A$ (always commutative with $1$), knowing that an ideal $\mathfrak a$ of $A$ has a minimal primary decomposition, is it possible to obtain the same result?
-
-REPLY [3 votes]: No, it is not! If $A$ is a strongly Laskerian ring, then every associated prime of an ideal $\mathfrak a$, i.e. a prime ideal minimal over an ideal of the form $(\mathfrak a:z)$, $z\in A$, has the form $(\mathfrak a:x)$ for some $x\in A$. And, of course, there are strongly Laskerian rings which are not Noetherian.<|endoftext|>
-TITLE: Adjointness of Hom and Tensor
-QUESTION [14 upvotes]: Could someone provide me a link to a proof of the adjointness of Hom and Tensor? I did an extensive google search but could not find anything self-contained that presented the proof in full generality (or at least the generality I know).
-Let $R\to S$ be a ring homomorphism, let $M,N$ be $S$-modules and $Q$ an $R$-module. Then, we have
-$$\textrm{Hom}_R(M\otimes_S N,Q) \cong \textrm{Hom}_S(M,\textrm{Hom}_R(N,Q))$$
-
-REPLY [13 votes]: Let $f \in \operatorname{Hom}_R(M\otimes_S N,Q)$. We define $g \in \operatorname{Hom}_S(M, \operatorname{Hom}_R(N,Q))$ by:
-$$g(m)(n)=f(m \otimes n)$$
-Similarly, if $g$ is defined, we can easily define $f$.
-I'll leave it to you to prove that this map between $f$ and $g$ actually goes to the appropriate sets, but this is the basic argument.
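-(For intuition only: this back-and-forth is exactly currying, familiar from programming. A loose Python analogy of my own - the plain pair below merely stands in for the tensor product:)
-
-    # curry : Hom(M x N, Q) -> Hom(M, Hom(N, Q)); uncurry is its inverse.
-    def curry(f):
-        return lambda m: lambda n: f(m, n)
-
-    def uncurry(g):
-        return lambda m, n: g(m)(n)
-
-    f = lambda m, n: 3 * m + n          # stands in for a bilinear map
-    g = curry(f)
-    print(g(2)(5), uncurry(g)(2, 5))    # 11 11 -- the two views agree
-
-In the module setting one must of course also check the $S$- and $R$-linearity of both assignments.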
As mentioned by one of the comments, it does depend on how you define the tensor product.<|endoftext|> -TITLE: Trace as Bilinear form on a field extension -QUESTION [13 upvotes]: Can anyone help with this: -If $L/K$ is a finite field extension, and we have a $K$-bilinear form given by $$(x,y)\mapsto Tr_{L/K}(xy)$$ then the form is either non-degenerate or $Tr_{L/K}(x)=0$ for every $x\in L$. -So far, I feel like I've run into something a little nonsensical. Suppose the form is degenerate, i.e. $\exists \alpha\in L$, $\alpha\neq 0$ such that $(\alpha,\beta)=0$ for all $\beta$. Then specifically, $Tr_{L/K}(\alpha\alpha^{-1})=Tr_{L/K}(1)=0$. But it is a theorem from Morandi's "Field and Galois Theory" that if $\alpha\in K$ (the base field) then $Tr_{L/K}(\alpha)=n\alpha$ where $n$ is the dimension of the field extension, so in this case we get that $[L:K]=0$ which doesn't make any sense. Am I doing something wrong here? -Thanks! - -REPLY [11 votes]: For what it's worth, a complete proof of the following fact can be found in $\S 6$ of these notes. -Theorem: For a finite degree field extension $K/F$, the following are equivalent: -(i) The trace form $(x,y) \in K^2 \mapsto \operatorname{Tr}(xy) \in F$ is a nondegenerate $F$-bilinear form. -(ii) The trace form is not identically zero. -(iii) The extension $K/F$ is separable. -The specific answer to the OP's question appears in there, but here it is: always be careful to read the fine print when dealing with inseparable extensions. In this case, it turns out that when $K/F$ is inseparable, the trace from $K$ down to $F$ of $x \in K$ comes out as a power of $p$ (the characteristic of $F$) times the more familiar sum of Galois conjugates. But in characteristic $p$ multiplying something by a power of $p$ is the same as multiplying it by zero...so the trace is identically zero in inseparable extensions. -[Note also that the proof of another part of the theorem makes use of the Primitive Element Theorem, which is currently to be found in $\S 7$ of the notes, i.e., the following section. This is an obvious mistake which will have to be remedied at some point. There are plenty of other issues with these notes, which are still quite rough and incomplete.]<|endoftext|> -TITLE: How to split an integral exactly in two parts -QUESTION [15 upvotes]: This question is a by-product of a conversation with Theo Buehler in comments to this answer. Let's settle definitions. -Definition Let $(\Omega, \mathcal{F}, \mu)$ be a measure space. We say that $X\in \mathcal{F}$ is an atom if $\mu(X) > 0$ and its only subset of strictly positive measure is $X$ itself. $\Omega$ is said to be non-atomic if no atoms exist. -So in a non-atomic measure space we can always split a measurable subset into smaller ones. Now the question is: have we got some control over this splitting? Can we split something in exactly two parts? I'm especially interested in the following. -Question Let $(\Omega, \mathcal{F}, \mu)$ be a non-atomic measure space and $f \in L^1(\Omega), f \ge 0$. Are there measurable and disjoint $A, B \subset \Omega$ such that $A \cup B = \Omega$ and -$$\int_A f(x)\, d\mu= \int_B f(x)\, d\mu=\frac{1}{2}\int_\Omega f(x)\, d\mu?$$ - -REPLY [5 votes]: Here's a sketch of the proof that any set of finite measure can be cut into two halves of equal measure. This proof works in the setting of a non-atomic Radon measure defined over a locally-compact Hausdorff space. -We call a (measurable) set proper if it has a positive, finite measure. 
-
-Show that every proper set has a proper subset of smaller measure.
-Deduce that every proper set has a proper subset of arbitrarily small measure.
-Conclude that every proper set $A$ has a subset whose measure lies in $(\mu(A)/3, \mu(A)/2]$.
-Deduce that $A$ can be cut into two halves of equal measure.
-
-As Byron shows above, an easy corollary is that there are subsets of $A$ of arbitrary measure in $[0,\mu(A)]$.
-There's a nice generalization: let $\mu_1,\ldots,\mu_k$ be measures as above, let $A$ be a set of finite measure (under all of these measures), and let
-$$ M = \{(\mu_1(B),\ldots,\mu_k(B)) : B \subset A \text{ is measurable wrt } \mu_1,\ldots,\mu_k\}. $$
-Then $M$ is a convex subset of $\mathbb{R}^k$.<|endoftext|>
-TITLE: Help to compute $\mathrm{Tor}_{n}^{\mathbb{Z}_{4}}(\mathbb{Z}_{2},\mathbb{Z}_{2})$?
-QUESTION [5 upvotes]: Consider $\mathbb{Z}_{2}$ as a $\mathbb{Z}_{4}$-module. How does one compute $\mathrm{Tor}_{n}^{\mathbb{Z}_{4}}(\mathbb{Z}_{2},\mathbb{Z}_{2})$?
-
-REPLY [8 votes]: You need a free (or projective) resolution of $\mathbb{Z}_2$. One is
-$$\dots\to\mathbb{Z}_4\to\mathbb{Z}_4\to\mathbb{Z}_4\to\mathbb{Z}_4$$
-where every arrow is multiplication by $2$. Now you tensor this complex (over $\mathbb{Z}_4$) with $\mathbb{Z}_2$ and you get
-$$\dots\to\mathbb{Z}_2\to\mathbb{Z}_2\to\mathbb{Z}_2\to\mathbb{Z}_2$$
-where the arrows are now zero. Your Tor's are the homology of this complex. As a result, they are $\mathbb{Z}_2$ for every $n$.<|endoftext|>
-TITLE: $n!$ is never a perfect square if $n\geq2$. Is there a proof of this that doesn't use Chebyshev's theorem?
-QUESTION [114 upvotes]: If $n\geq2$, then $n!$ is not a perfect square. The proof of this follows easily from Chebyshev's theorem, which states that for any integer $n>3$ there exists a prime strictly between $n$ and $2n-2$. A proof can be found here.
-Two weeks and four days ago, one of my classmates told me that it's possible to prove that $n!$ is never a perfect square for $n\geq2$ without using Chebyshev's theorem. I've been trying since that day to prove it in that manner, but the closest I've gotten is, through the use of the prime number theorem, showing that there exists a natural number $N$ such that if $n\geq N$, $n!$ is not a perfect square. This isn't very close at all. I've tried numerous strategies over the past weeks and am now trying to use the Sylow theorems on $S_n$ to somehow show that $|S_n|$ can't be square (I haven't made any progress).
-Was my classmate messing with me, or is there really a way to prove this result without Chebyshev's Theorem? If it is possible, can someone point me in the right direction for a proof?
-Thanks!
-
-REPLY [24 votes]: Here is a way to do it. We'll need De Polignac's formula, which is the statement that the largest $k$ such that $p^k$ divides $n!$ is $$k=\sum_{i}\left\lfloor\frac{n}{p^i}\right\rfloor.$$ Additionally, we'll take advantage of the fact that the function $\left\lfloor\frac{2n}{p}\right\rfloor-2\left\lfloor\frac{n}{p}\right\rfloor$ is only ever equal to $0$ or $1$.
-Proof: Let's start with even numbers. Suppose that $(2n)!$ is a square. Then $\binom{2n}{n}=\frac{(2n)!}{n!n!}$ is a square as well, and we may write $$\binom{2n}{n}=\prod_{p\leq2n}p^{v_{p}}$$ where each $v_p$ is even.
-
-The critical observation is that for primes $p>\sqrt{2n}$ we have $v_{p}=\left\lfloor\frac{2n}{p}\right\rfloor-2\left\lfloor\frac{n}{p}\right\rfloor$, which must equal either $0$ or $1$, and since $v_p$ is even, we conclude that $v_{p}=0$ for $p>\sqrt{2n}$. This will lead to a contradiction, as $\binom{2n}{n}$ cannot be composed of such a small number of primes - this would give impossibly strong upper bounds on the size of the central binomial coefficient.
-For $p\leq\sqrt{2n}$, $$v_{p}=\sum_{i}\left\lfloor\frac{2n}{p^{i}}\right\rfloor-2\left\lfloor\frac{n}{p^{i}}\right\rfloor\leq\log_{p}2n$$ and so $p^{v_p}=\exp(v_p\log p)\leq\exp(\log(2 n))= 2n$, which gives the upper bound $$\binom{2n}{n}=\prod_{p\leq\sqrt{2n}}p^{v_{p}}\leq\left(2n\right)^{\sqrt{2n}}.$$ Expanding $(1+1)^{2n}$, there will be $2n+1$ terms, of which $\binom{2n}{n}$ is the largest. This implies that $\binom{2n}{n}>\frac{2^{2n}}{2n+1}$, and since $$\frac{2^{2n}}{2n+1}>(2n)^{\sqrt{2n}}=2^{\sqrt{2n}\log_2(2n)}$$ for all $n> 18$ (the finitely many cases $n\leq 18$ can be checked directly), we conclude that $(2n)!$ is never a square.
-To prove it for odd numbers, consider the quantity $\frac{(2n+1)!}{n!n!}$. Observing that $\left\lfloor\frac{2n+1}{p}\right\rfloor-2\left\lfloor\frac{n}{p}\right\rfloor$ only takes the values $0$ and $1$ for odd $p>1$, we see that the above proof carries through identically with a slight modification at the prime $2$.<|endoftext|>
-TITLE: Finite Field Extension
-QUESTION [7 upvotes]: Suppose $E/F$ is a field extension of degree $n$. Does it follow that $E = F(a_{1}, a_{2}, \ldots, a_{n})$ for some $a_{i} \in E$? I feel like this is true, but I'm getting confused with all the definitions.
-
-REPLY [4 votes]: Yes, the statement that $E=F(a_1,\ldots,a_n)$ for some $a_i\in E$ is implied by the statement that $[E:F]=n$. Recall that $[E:F]=n$ is, by definition, the statement that $E$ is an $n$-dimensional vector space over $F$. Thus, there exists a basis for this vector space consisting of $n$ elements, that is, there exist $a_1,\ldots,a_n\in E$ such that any $b\in E$ has
-$$b=c_1a_1+\cdots+c_na_n$$
-where the $c_i\in F$. Thus, we will have $E=F(a_1,\ldots,a_n)$.
-However, you should note that it might not be the case that all the $a_i$'s are necessary to generate $E$ as a field over $F$. For example, if $E=\mathbb{Q}(\sqrt[n]{2})$ and $F=\mathbb{Q}$, then $[E:F]=n$, and a basis for $E$ as a vector space over $F$ is
-$$1,\sqrt[n]{2},\ldots,(\sqrt[n]{2})^{n-1}$$
-so that $E=F(1,\sqrt[n]{2},\ldots,(\sqrt[n]{2})^{n-1})$, but we actually already had $E=F(\sqrt[n]{2})$ - that is, $E$ can be generated over $F$ by a single element.<|endoftext|>
-TITLE: How can we detect the existence of almost-complex structures?
-QUESTION [7 upvotes]: Any smooth $2n$-manifold $M$ comes with a well-defined map $f:M\rightarrow BGL_{2n}(\mathbb{R})$ (up to homotopy) classifying its tangent bundle. Since $GL_{2n}(\mathbb{R})$ deformation-retracts onto $O(2n)$, $BGL_{2n}(\mathbb{R})\simeq BO(2n)$, which is a cute way of proving that every smooth manifold admits a Riemannian metric. An almost-complex structure, on the other hand, is equivalent to a reduction of the structure group from $GL_{2n}(\mathbb{R})$ to $GL_n(\mathbb{C})$, which is the same as asking for a lift of the classifying map through $BU(n)\simeq BGL_n(\mathbb{C})\rightarrow BGL_{2n}(\mathbb{R})$.
-
- Can we detect the nonexistence of a lift entirely using characteristic classes? If not, what else goes into the classification?
-
-I'd imagine these don't suffice by themselves.
If I'm remembering correctly we have $H^*(BO(2n);\mathbb{Z}/2)=\mathbb{Z}/2[w_1,\ldots, w_{2n}]$ and $H^*(BU(n);\mathbb{Z})=\mathbb{Z}[c_1,\ldots, c_n]$, and the only easy (i.e. non-cohomology-operational) result relating these that I know is that $w_{2n}(TM) \equiv_2 c_n(TM)$, so this holds in the universal case $i^* : H^*(BO(2n);\mathbb{Z}/2) \rightarrow H^*(BU(n);\mathbb{Z}/2)$. I've heard that this problem is indeed solved. Maybe it does take some characteristic class & cohomology operation gymnastics, or maybe it needs extraordinary characteristic classes. Or maybe there's yet another ingredient in the classification...?
-
-REPLY [2 votes]: There are a few answers at MO (as well as some interesting comments by Tom Goodwillie), so I'm marking this question as "answered". The link, again, is here: https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures<|endoftext|>
-TITLE: Meaning of non-existence of expectation?
-QUESTION [17 upvotes]: When reading another post, I was wondering about the definition of existence of expectation of a random variable.
-
-From Kai Lai Chung,
-
- We say a random variable $X$ has a finite or infinite expectation (or expected value) according as $E(X)$ is a finite number or not. In the expected case we shall say that the expectation of X does not exist.
-
-I was wondering what is meant by "the expected case" in the last sentence? Is this generally regarded as the meaning of non-existence of expectation?
-From Wikipedia:
-
- Let X be a discrete random variable. Then the expected value of this random variable is the infinite sum $$\operatorname{E}[X] = \sum_{i=1}^\infty x_i\, p_i, $$ provided that this series converges absolutely (that is, the sum must remain finite if we were to replace all $x_i$'s with their absolute values). If this series does not converge absolutely, we say that the expected value of $X$ does not exist.
-
-I was wondering
-
-if the meaning of nonexistence of expectation here is consistent with the one by Kai Lai Chung,
-if the meaning of nonexistence of expectation here is consistent with the nonexistence of the Lebesgue integral in Rudin's book, where he says the Lebesgue integral of a real-valued Borel-measurable function does not exist if and only if the integrals of the positive part and of the negative part are both infinite, which allows the integral to exist when it is infinite,
-if the expectation is infinite, then is the expectation regarded as nonexistent?
-
-Thanks for helping!
-
-REPLY [15 votes]: With usual notation, decompose $X$ as $X=X^+ - X^-$ (also note that $|X|=X^+ + X^-$). $X$ is said to have finite expectation (or to be integrable) if both ${\rm E}(X^+)$ and ${\rm E}(X^-)$ are finite. In this case ${\rm E}(X) = {\rm E}(X^+) - {\rm E}(X^-)$. Moreover, if ${\rm E}(X^+) = +\infty$ (respectively, ${\rm E}(X^-) = +\infty$) and ${\rm E}(X^-)<\infty$ (respectively, ${\rm E}(X^+)<\infty$), then ${\rm E}(X) = +\infty$ (respectively, ${\rm E}(X) = -\infty$). So, $X$ is allowed to have infinite expectation.
-Whenever ${\rm E}(X)$ exists (finite or infinite), the strong law of large numbers holds. That is, if $X_1,X_2,\ldots$ is a sequence of i.i.d. random variables with finite or infinite expectation, letting $S_n = X_1+\cdots + X_n$, it holds that $n^{-1}S_n \to {\rm E}(X_1)$ almost surely. The infinite expectation case follows from the finite case by the monotone convergence theorem.
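-A quick way to see the infinite-expectation case of the strong law at work is to simulate it; here is a sketch (the Pareto-type distribution, sample size and seed are only illustrative choices):
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    n = 10**6
-    # Pareto-type tail: P(X > t) = t**(-1/2) for t >= 1, so E(X) = +infinity
-    x = (1.0 - rng.uniform(size=n)) ** -2.0
-    running_mean = np.cumsum(x) / np.arange(1, n + 1)
-    for k in (10**2, 10**3, 10**4, 10**5, 10**6):
-        print(k, running_mean[k - 1])
-
-The printed running means keep growing instead of settling down, in line with $n^{-1}S_n \to +\infty$ almost surely.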
-
-If, on the other hand, ${\rm E}(X^+) = +\infty $ and ${\rm E}(X^-) = +\infty $, then $X$ does not admit an expectation.
-In this case, one of the following must occur (a result by Kesten, see Theorem 1 in the paper The strong law of large numbers when the mean is undefined, by K. Bruce Erickson):
-1) Almost surely, $n^{-1}S_n \to +\infty$; 2) Almost surely, $n^{-1}S_n \to -\infty$; 3) Almost surely, $\lim \sup n^{ - 1} S_n = + \infty$ and $\lim \inf n^{ - 1} S_n = - \infty$.
-EDIT: Since you mentioned the recent post "Are there any random variables so that ${\rm E}[X]$ and ${\rm E}[Y]$ exist but ${\rm E}[XY]$ doesn't?", it is worth stressing the difference between "$X$ has expectation" and "$X$ is integrable".
-By definition, $X$ is integrable if $|X|$ has finite expectation (recall that $|X|=X^+ + X^-$). So, for example, the random variable $X=1/U$, where $U \sim {\rm uniform}(0,1)$, is not integrable, yet has (infinite) expectation (indeed, $\int_0^1 {x^{ - 1} \,{\rm d}x} = \infty $). Further, it is worth noting the following. A random variable $X$ is integrable (i.e., ${\rm E}|X|<\infty$) if and only if
-$$
-\int_\Omega {|X|\,{\rm dP}} = \int_{ - \infty }^\infty {|x|\,{\rm d}F(x)} < \infty .
-$$
-A random variable has expectation if and only if
-$$
-\int_\Omega {X^ + \,{\rm dP}} = \int_{ - \infty }^\infty {\max \{ x,0\} \,{\rm d}F(x)} = \int_0^\infty {x\,{\rm d}F(x)} < \infty
-$$
-or
-$$
-\int_\Omega {X^ - \,{\rm dP}} = \int_{ - \infty }^\infty {-\min \{ x,0\} \,{\rm d}F(x)} = \int_{ - \infty }^0 {|x|\,{\rm d}F(x)} < \infty.
-$$
-In any of these cases, the expectation of $X$ is given by
-$$
-{\rm E}(X) = \int_0^\infty {x\,{\rm d}F(x)} - \int_{ - \infty }^0 {|x|\,{\rm d}F(x)} \in [-\infty,\infty].
-$$
-Finally, $X$ does not admit an expectation if and only if both $\int_\Omega {X^ + \,{\rm dP}} = \int_0^\infty {x\,{\rm d}F(x)}$ and $\int_\Omega {X^ - \,{\rm dP}} = \int_{ - \infty }^0 {|x|\,{\rm d}F(x)} $ are infinite. Thus, for example, a Cauchy random variable with density function $f(x) = \frac{1}{{\pi (1 + x^2 )}}$, $x \in \mathbb{R}$, though symmetric, does not admit an expectation, since both $\int_0^\infty {xf(x)\,{\rm d}x}$ and $\int_{ - \infty }^0 {|x|f(x)\,{\rm d}x}$ are infinite.<|endoftext|>
-TITLE: With Choice, is any linearly ordered set well-ordered if no subset has order type $\omega^*$?
-QUESTION [9 upvotes]: I've been fumbling around with order types and ordinals these past few days. I read about partial, total, and well-ordered structures, and I'm curious to see whether, if a linearly ordered set has no subset with order type $\omega^*$, then it is in fact well-ordered. Here $\omega^*$ is the order type of the negative integers.
-My idea was this. Let $X$ be some totally ordered set with no subset of order type $\omega^*$. Take any nonempty subset $Y$ of $X$. If $Y$ is finite, it must have a least element, so assume $Y$ is infinite. By way of contradiction, I assume $Y$ has no least element. My strategy was to then somehow construct a subset of $Y$ which has order type $\omega^*$. I do this by first picking an element $a_0$ from $Y$. Since $Y$ has no least element, and since $Y$ is also totally ordered, there must be some other element $a_1\in Y$ such that $a_1\prec a_0$. Again, $a_1$ is not the least element of $Y$, so we can find an element $a_2\in Y$ such that $a_2\prec a_1\prec a_0$. Continuing along, I would eventually have a set $Z=\{\dots,a_3,a_2,a_1,a_0\}$, where I have written it in increasing order. But then $Z$ has order type $\omega^*$, a contradiction.
-
-Does this argument hold up? If so, I feel it is a little handwavey, and the "continuing along" part needs to be formalized. Is there a way to do so with Choice or maybe (transfinite) recursion? I fear I may have written nonsense, as I've done many times in the past. Thanks for any criticism and insights.
-
-REPLY [7 votes]: Yunone:
-Your argument is fine. The axiom of dependent choices (DC) is what you are using to "recursively" pick the members of your sequence. DC is a consequence of the axiom of choice, but it is strictly weaker. It says that if you have a relation $R$ on a set $X$ with the property that for any $a\in X$ there is a $b\in X$ with $a R b$, then there is a sequence $(a_n\mid n\in\omega)$ with $a_n R a_{n+1}$ for all $n$.
-If you have a set that is not well-ordered, and DC holds, you can pick a nonempty subset $X$ of your set without a least element and set $aRb$ iff $b<a$; a sequence as provided by DC is then strictly decreasing, i.e., it gives a subset of order type $\omega^*$.<|endoftext|>
-TITLE: How many steps does it take the computer to solve a Sudoku puzzle?
-QUESTION [6 upvotes]: We all know what Sudoku is. Given a Sudoku puzzle, one can use a simple recursive procedure to solve it using a computer. Before describing the algorithm, we make some definitions.
-A partial solution is a Sudoku puzzle with only some of the numbers entered.
-Given an empty square in a partial solution, an assignment of a digit to the square is consistent if it doesn't appear in the same row, column or $3\times 3$ square.
-The algorithm is as follows:
-
-If there is any square for which there is no consistent assignment, give up.
-Otherwise, pick an empty square $S$ (*).
-Calculate the set of all consistent assignments $A$ to this square.
-Go over all assignments $a \in A$ in some order (**):
-
-Put $a$ in $S$, and recurse.
-
-
-We have two degrees of freedom: choosing an empty square, and choosing an order for the assignments to the square. In practice, it seems that whatever the choice is, the algorithm reaches a solution very fast.
-
-Suppose we give the algorithm a partial Sudoku with a unique solution. Can we bound the number of steps the algorithm takes to find the solution?
-
-To make life easier, you can choose any rule you wish for ( * ) and (**), even a random rule (in that case, the relevant quantity is probably the expectation); any analyzable choice would be interesting.
-Also, if it helps, you can assume something about the input - say at least $X$ squares are filled in. I'm also willing to relax the restriction that there be a unique solution - indeed, even given an empty board, the algorithm above finds a complete Sudoku very fast. Analyses for random inputs (in whatever meaningful sense) are also welcome.
-
-REPLY [4 votes]: Since Sudoku is known to be NP-complete for arbitrarily large grid sizes, it's highly unlikely that your algorithm has any 'good' bound. As to why it works so well, I suspect the reason is simply that Sudoku puzzles are designed to be cleanly solved by humans; humans in general are really mediocre at backtracking searches (particularly once the depth gets to more than a small handful of steps), so most human Sudoku puzzles in fact have nearly-linear solutions with very little branching needed, just because those make for more interesting puzzles.<|endoftext|>
-TITLE: Representation of integers
-QUESTION [5 upvotes]: How can I prove that every integer $n\geq 170$ can be written as a sum of five positive squares? (i.e. none of the squares are allowed to be zero).
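-Numerically the statement seems to hold well beyond $170$; here is a quick brute-force sketch one can use to check it (the tested range and the search bounds are arbitrary choices):
-
-    import math
-
-    def five_positive_squares(n):
-        # is n = a^2 + b^2 + c^2 + d^2 + e^2 with a <= b <= c <= d and e >= 1?
-        r = math.isqrt(n)
-        squares = {k * k for k in range(1, r + 1)}
-        for a in range(1, r + 1):
-            for b in range(a, r + 1):
-                s2 = n - a * a - b * b
-                if s2 < 3:
-                    break
-                for c in range(b, math.isqrt(s2) + 1):
-                    s3 = s2 - c * c
-                    if s3 < 2:
-                        break
-                    for d in range(c, math.isqrt(s3) + 1):
-                        if s3 - d * d in squares:
-                            return True
-        return False
-
-    print(all(five_positive_squares(n) for n in range(170, 2001)))   # True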
-
-I know that $169=13^2=12^2+5^2=12^2+4^2+3^2=10^2+8^2+2^2+1^2$, and $n-169=a^2+b^2+c^2+d^2$ for some integers $a$, $b$, $c$, $d$, but how do I show it?
-Thank you.
-
-REPLY [11 votes]: Hint: let $n-169 = a^2+b^2+c^2+d^2$; if $a,b,c,d \neq 0$ then ... if $d = 0$ and $a,b,c \neq 0$ then ... if $c = d = 0$ and $a,b \neq 0$ then ... if $b = c = d = 0$ and $a \neq 0$ then ... if $a = b = c = d = 0$ then - wait, that can't happen!<|endoftext|>
-TITLE: Isomorphism in localization (tensor product)
-QUESTION [10 upvotes]: Let $A$ be a commutative ring with $1$ and let $M,N$ be $A$-modules.
-Since there is a map $f: A \rightarrow S^{-1}A$, defined by $a \mapsto \frac{a}{1}$, given any $S^{-1}A$-module we can view it as an $A$-module via restriction of scalars, right?
-Now $S^{-1}M$ and $S^{-1}N$ are $S^{-1}A$-modules.
-My question is whether the following isomorphism holds:
-$S^{-1}M \otimes_{A} S^{-1}N \cong S^{-1}M \otimes_{S^{-1}A} S^{-1}N$
-Is the above valid because any $S^{-1}A$-module is an $A$-module, or why? (or perhaps it is false), can you please help?
-Following Daniel's hint:
-$S^{-1}(M \otimes_{A} N) \cong S^{-1}A \otimes_{A} (M \otimes_{A} N)
-\cong (S^{-1}A \otimes_{S^{-1}A}) (S^{-1}A \otimes_{A} (M \otimes _{A} N) )$
-After this I end up with $(S^{-1}A \otimes _{S^{-1}A} N) \otimes_{A} S^{-1}M$ which is isomorphic to $S^{-1}N \otimes_{A} S^{-1}M$. Where's the error?
-
-REPLY [9 votes]: Congratulations on having noticed this subtle point, rarely discussed in textbooks.
-As is often the case, a more general statement is clearer; for your question take $P=S^{-1}M, Q=S^{-1}N$ in the following.
-General statement: Suppose $P,Q$ are $S^{-1}A$-modules. Then there is a canonical $S^{-1}A$-isomorphism $P \otimes _A Q\to P \otimes_ {S^{-1}A} Q$.
-Preliminary remark: An $A$-module $E$ can have at most $one$ $S^{-1}A$-module structure compatible with its $A$-module structure.
-Proof of Preliminary remark: we must have $\frac{a}{s} \ast e = (s\bullet)^{-1} (ae)$ (the existence of an $S^{-1}A$-module structure on $E$ forces multiplication by $s$ to be an $A$-linear automorphism $(s\bullet)$ of the $A$-module $E$).
-Proof of General statement: The preliminary remark shows that the $S^{-1}A$-module structures on $P \otimes_A Q$ coming from $P$ or from $Q$ coincide. Hence there are canonical ${S^{-1}A}$-morphisms
-$P \otimes _A Q\to P \otimes_ {S^{-1}A} Q: p\otimes q\mapsto p\otimes q$ and
-$P \otimes_ {S^{-1}A} Q \to P \otimes _A Q : p\otimes q\mapsto p\otimes q$, which are mutually inverse $S^{-1}A$-isomorphisms; this proves the General statement.<|endoftext|>
-TITLE: How to show that a linear map is surjective?
-QUESTION [9 upvotes]: Sorry if this is somewhat a duplicate. The answers I see deal with functions in general rather than linear maps.
-Let $T$ be a linear map from $U$ to $V$.
-I understand that by definition a linear map is injective if every element in the range gets mapped there by a unique vector from the domain. This is easy to show by choosing two vectors $u$ and $v$ in $U$, and showing that if $T(u)=T(v)$, then $u=v$.
-But for a surjective linear map, it does not seem like there is something simple like this we can do? We have to show that range$(T)=V$. How is this done?
-EDIT: As a concrete example, suppose we have $T\in L(F^\infty \rightarrow F^\infty)$ defined by $T(x_1,x_2,x_3,\dots) = (x_2,x_3, \dots)$. How can we show this is surjective? Is it enough to:
-Suppose $w\in F^\infty$, where $w=(w_1, w_2, \dots)$. Then let $u=(a, w_1, w_2, \dots)$ for some $a\in F$. And that's all we need?
-
-REPLY [12 votes]: If you're dealing with finite-dimensional spaces, the key is the relation:
-$$\tag{1} \dim U = \dim \mbox{im} T +\dim \ker T \; .$$
-If you know how to evaluate $\dim \ker T$, then you can easily compare $\dim V$ with $\dim \mbox{im} T=\dim U-\dim \ker T$, and your map will be surjective iff $\dim V =\dim \mbox{im} T$.
-I also want to remark on a straightforward consequence of (1). If $U$ and $V$ are both finite-dimensional and they both have the same dimension, then there is equivalence between being injective and being surjective for linear maps, i.e. a linear map $T:U\to V$ is injective iff it is surjective.
-On the other hand, AFAIK, when you deal with infinite-dimensional spaces surjectivity proofs cannot be shortened by using tricks: in general, one has to show that for each $v\in V$ there exists $u\in U$ s.t. $v=Tu$.
-
-REPLY [2 votes]: For surjectivity, you need to show that for every $v \in V$ there exists a $u \in U$ such that $T(u) = v$.<|endoftext|>
-TITLE: What does it mean to say a language is context-free?
-QUESTION [21 upvotes]: What does it mean to say a language is context-free?
-
-REPLY [17 votes]: Given the technical definition of what a language is and what context-free is, it means that in the processing of rules defining the language, no context is used. That is, any variable is rewritten by itself, with no context. Once a variable is produced in a derivation, none of the string around that variable will ever be involved in any further derivation...no context is used in rewriting a variable.
-More complicated languages do not have this restriction (they may allow use of context/other adjacent variables and terminals in rewriting a variable).
-Context-free languages are "easier" to parse (quicker/more efficiently) than context-sensitive ones.
-Note that the term 'context' is very technical here; it is referring to the context of a substring when rewriting. Technical terms have a life of their own and don't necessarily relate well to the first layman's understanding of the word.<|endoftext|>
-TITLE: Meaning of pullback
-QUESTION [11 upvotes]: I was wondering if the following two meanings of pullback are related and how:
-
-In terms of precomposition with a function:
-
- a function $f$ of a variable $y$, where $y$ itself is a function of another variable $x$, may be written as a function of $x$. Then $f(y(x)) \equiv g(x)$ is the pullback of $f$ by the function $y(x)$.
-
-In the context of category theory:
-
- the pullback of the morphisms $f$ and $g$ consists of an object $P$ and two morphisms $p_1 : P \rightarrow X$ and $p_2 : P \rightarrow Y$ for which the diagram commutes. Moreover, the pullback $(P, p_1, p_2)$ must be universal with respect to this diagram.
-
-
-Also, is it possible to define pushforward/pushout in terms of composition of functions?
-
-Thanks and regards!
-
-REPLY [6 votes]: It took me a second to figure out what the Wikipedia page was saying.
-Say you have a fiber bundle $\pi:E \rightarrow B$ and a section (which is a function $s:B\rightarrow E$ such that $\pi(s(b))=b$ for $b \in B$).
-Also, say you have a function $f:B^\prime \rightarrow B$. Then the pullback object $E^\prime$ can be defined as the set $E^\prime = \{(e,b^\prime) \in E \times B' : \pi(e)=f(b^\prime)\}$.
-Then you get an obvious pullback $\pi^\prime :E^\prime \rightarrow B^\prime$ defined by $\pi^\prime(e,b^\prime)=b^\prime$.
-
-The key is that the pullback of $s$ is defined as $s^\prime:B^\prime \rightarrow E^\prime$ given by $s^\prime(b^\prime) = (s(f(b^\prime)),b^\prime)$.
-So the pullback of $s$ is (essentially) the composition of $s$ with $f$.
-This pullback of $s$ can be made categorical because you have a square with top left object $B^\prime$ and bottom right $B$ defined with two paths: $B^\prime \xrightarrow{id} B^\prime \xrightarrow{f} B$ and $B^\prime \xrightarrow{s \circ f} E \xrightarrow{\pi} B$. So by the universal property of $E^\prime$, there must be an $s^\prime:B^\prime \rightarrow E^\prime$ which, when composed with $\pi^\prime$, yields the identity.<|endoftext|>
-TITLE: What is the algorithm for long division of polynomials with multiple variables?
-QUESTION [9 upvotes]: I was helping a high-school student last night whose teacher had given as a homework problem the division $$\frac{15x^4-y^2}{x^2+y};$$ I tried a heuristic involving splitting off a difference of squares to end up with $$15x^2-15y+\frac{14y^2}{x^2+y},$$ but I was not satisfied because the remainder has the same degree as the denominator and normally problems like these should end up with a denominator of lower degree.
-I next tried variants of synthetic division "with respect to $y$" and "with respect to $x^2$" and got nothing simpler.
-I then tried the method outlined in Karl's Calculus Tutor to perform long division, first with the original fraction and then with the remainder term arrived at with the first method that came to my head, and I kept looping; unlike numerical or single-variable long division, multi-variable long division doesn't seem to follow an ordered progression inexorably leading toward a definite remainder of lower degree (in the case of numerical long division, of smaller absolute value) than the denominator, except in simple cases like in the example from Karl's Calculus Tutor.
-
-REPLY [8 votes]: See Chapter 2, Section 3 (p. 61) in the book Ideals, varieties, and algorithms by Cox, Little & O'Shea. (Google books link.)<|endoftext|>
-TITLE: What is the dimension of a representation
-QUESTION [6 upvotes]: What is meant by the dimension of a representation in the following exercise: "Prove that any irreducible representation of an abelian group has the dimension of 1"? I looked at the solution, and it proves that any irreducible representation of an abelian group is scalar. I understand the proof, but I still can't figure out what is meant by dimension.
-
-REPLY [6 votes]: A representation of a group $G$ is a vector space $V$ together with an action of $G$ on $V$ by linear transformations. The dimension of the representation is just the dimension of $V$.<|endoftext|>
-TITLE: eigen decomposition of an interesting matrix
-QUESTION [6 upvotes]: Let's define:
-$U=\left \{ u_j\right \} , 1 \leq j\leq N= 2^{L},$ the set of all different binary sequences of length $L$.
-$V=\left \{ v_i\right \} , 1 \leq i\leq M=\binom{L}{k}2^{k},$ the set of all different gapped binary sequences with $k$ known bits and $L-k$ gaps.
-$A_{M*N}=[a_{i,j}]$ is a binary matrix defined as follows:
-$$a_{i,j} = \left\{\begin{matrix}
-1 & \text{if } v_i \text{ matches } u_j \\
-0 & \text{otherwise }
-\end{matrix}\right.$$
-Now, the question is: What are the eigenvectors and eigenvalues of the matrix $S_{M*M}=AA^{T}$?
-
-Here is an example for $L=2, k=1$:
-$$U = \left \{ 00,01,10,11\right \} $$
-$$V = \left \{ 0.,1.,.0,.1\right \} ^*$$
-$$ A = \begin{bmatrix}
-1 & 1 & 0 &0 \\
-0 & 0 & 1 &1 \\
-1 & 0 & 1 &0 \\
-0 & 1 & 0 &1
-\end{bmatrix}$$
-$$ S = \begin{bmatrix}
-2 & 0 & 1 &1 \\
-0 & 2 & 1 &1 \\
-1 & 1 & 2 &0 \\
-1 & 1 & 0 &2
-\end{bmatrix}$$
-For the special case $k=1$, it has been previously solved by joriki and the solution can be found here. See the same reference for a graph analogy of the matrix $S$.
-Furthermore, it has been shown here by joriki that: $$\text{rank}(AA^{T})=\text{rank}(A)=\sum_{m=0}^k\left({L\atop m}\right)\;\;$$
-As for the eigenvalues, numerical values suggest that $AA^{T}$ has $\binom{L}{m}$ eigenvalues equal to $\binom{L-m}{k-m}2^{g}, m=0,..,k$, where $g=L-k$ is the number of gaps.
-Any comments or suggestions are appreciated.
-$^{*}$ Here dots denote gaps. A gap can take any value, and each gapped sequence with $k$ known bits and $(L-k)$ gaps in $V$ matches exactly $2^{L-k}$ sequences in $U$, hence the sum of elements in each row of $A$ is $2^{L-k}$.
-
-REPLY [2 votes]: The eigenvectors corresponding to the non-zero eigenvalues can be listed as follows. For each $m=0,1,...,k$, pick $m$ bit positions from the total $L$. This can be done in $\binom {L} {m}$ ways. Define a vector $x$ whose $i^{th}$ element $x_i$ is given by
-$$x_i =
-\begin{cases}
-1, & \text{if } v_i \text{ has no gaps among the } m \text{ bits and an even number of 1s among them} \\
--1, & \text{if } v_i \text{ has no gaps among the } m \text{ bits and an odd number of 1s among them} \\
-0, & \text{otherwise}
-\end{cases}$$
-This is an eigenvector for $S = AA^T$ with eigenvalue $\binom {L-m}{k-m}*2^{L-k}$. To show this, we first see that $S_{ij}$ (the $ij^{th}$ entry in $S=AA^T$) is the number of elements in $U$ which match both $v_i$ and $v_j$. From the construction of $x$, we can make two observations. First, if $x_i \neq 0$, then $x_j = x_i$ whenever $x_j \neq 0$ and $v_i$ has a common match with $v_j$, that is, whenever $S_{ij}$ is non-zero. Second, if $x_i = 0$, then for each $j_1$ with $S_{ij_1} \neq 0$ and $x_{j_1} \neq 0$, we can find a unique $j_2$ such that $S_{ij_1} = S_{ij_2}$ and $x_{j_1} = - x_{j_2}$. This is because $x_i = 0$ implies there is a gap among the $m$ bits in $v_i$. So toggling that bit in $v_{j_1}$ gives a $v_{j_2}$ with $S_{ij_1} = S_{ij_2}$ and $x_{j_1} = -x_{j_2}$.
-It follows from the second observation above that if $x_i = 0$, then the $i^{th}$ entry in $Sx$ is $0$. And from the first observation, if $x_i = 1$, then the $i^{th}$ entry in $Sx$ is the number of elements in $U$ which match both $v_i$ and $v_j$, summed over all $j$ with $x_j = 1$. This is the same as the number of $v_j$ which match $u$, summed over all $2^{L-k}$ different $u$ which match $v_i$. This number can be shown to be $\binom{L-m}{k-m}* 2^{L-k}$. A similar result can be shown for the $x_i = -1$ case. So the $x$ described above is an eigenvector for $S$ with eigenvalue $\binom{L-m}{k-m}* 2^{L-k}$. As we could choose the $m$ bits in $\binom{L}{m}$ ways, we get $\binom{L}{m}$ vectors with this eigenvalue. These vectors can be shown to be mutually orthogonal, hence are linearly independent.
-With $m$ varying from $0$ to $k$, we get as many eigenvectors as the rank of the matrix, each with a non-zero eigenvalue.
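-For the $L=2, k=1$ example above, the claimed spectrum is easy to verify numerically (a sketch; numpy and the hard-coded matrix are used purely for illustration):
-
-    import numpy as np
-
-    # A and S = A A^T for the L = 2, k = 1 example in the question
-    A = np.array([[1, 1, 0, 0],
-                  [0, 0, 1, 1],
-                  [1, 0, 1, 0],
-                  [0, 1, 0, 1]])
-    S = A @ A.T
-    print(np.round(np.linalg.eigvalsh(S), 8))   # [0. 2. 2. 4.]
-
-This matches the formula: $m=0$ gives one eigenvalue $\binom{2}{1}2^{1}=4$, $m=1$ gives $\binom{2}{1}=2$ eigenvalues $\binom{1}{0}2^{1}=2$, and the remaining eigenvalue is $0$.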
So this gives all the eigenvectors with non-zero eigenvalues.<|endoftext|>
-TITLE: An algebraic extension of a perfect field is a perfect field
-QUESTION [5 upvotes]: I would like to show that an algebraic extension of a perfect field is a perfect field, using the following result:
-Given a field $F$ and some family of perfect subfields $\{F_i\}_{i \in I}$ such that $F=\cup _{i\in I} F_i$, we have that $F$ is a perfect field.
-EDIT: A perfect field is defined as follows: Any field of characteristic $0$ is perfect, and a field of characteristic $p$ is said to be perfect if any element in $F$ is a $p^{th}$ power of some element in $F$.
-I've tried taking some element in the extended field and using the fact that it is algebraic over $F$ in order to construct a perfect subfield that contains the aforementioned element, but I couldn't advance much.
-Could anyone give me some direction toward the solution?
-
-REPLY [2 votes]: Here's something I think works:
-Let $K/F$ be an algebraic extension. We know that $K=\bigcup_{a\in K}F(a)$, so that it suffices to prove each $F(a)$ is perfect. For any algebraic extension $L/F(a)$, we have that $L/F$ is also algebraic, and hence separable because $F$ is perfect. Thus, for any $b\in L$, the polynomial $\text{Irr}(b,F)$ is separable (i.e. has no repeated roots). But the polynomial $\text{Irr}(b,F(a))$ is a factor of $\text{Irr}(b,F)$ (by the defining property of minimal polynomials), hence must also have no repeated roots, i.e. be separable. Thus any $b\in L$ is separable over $F(a)$, hence any algebraic $L/F(a)$ is separable, hence $F(a)$ is perfect.
-To be honest, I feel like I should use somewhere that $[F(a):F]$ is finite, but at the moment the above argument seems ok.<|endoftext|>
-TITLE: what is f prime?
-QUESTION [6 upvotes]: I am currently taking a Measure and Integration course, which seems to have a different definition of $f'$.
-Traditionally,
-$$f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$$
-but in Folland's book, it seems to be defined as
-$$f'(x)=\lim_{r\to 0} \frac{f(x+r)-f(x-r)}{m(B(x,r))}$$
-I was just wondering if these two definitions are really the same thing. Thanks in advance.
-
-REPLY [7 votes]: I'm assuming that $m(B(x,r))$ means $r$, because I can't make the question sensible any other way.
-If the first limit exists then so does the second and they are equal. However, the second limit can exist while the first one doesn't.
-For the first claim, note that
-$$\frac{f(x+r/2) - f(x-r/2)}{r} = \frac{1}{2} \left( \frac{f(x+r/2) - f(x)}{r/2} + \frac{f(x) - f(x-r/2)}{r/2} \right)$$
-and use the standard result that, if $\lim_{t \to 0} g(t)$ and $\lim_{t \to 0} h(t)$ exist, then $\lim_{t \to 0} g(t) + h(t)$ exists and is the sum of the previous limits.
-For the second claim, let $f(x) = |x|$. Then the derivative, defined in the usual sense, does not exist, but Folland's limit is $0$.<|endoftext|>
-TITLE: How can I solve integrals of rational functions of polynomials in $x$?
-QUESTION [6 upvotes]: I would like to know a way to solve integrals such as this one:
-$$\int \frac{x}{3x - 4}dx$$
-Also, I assume similar integrals where $x$ is squared are solved in a similar manner. (If the answer is yes then don't also show me how to solve this second one, I want to see if I can do it myself. :) )
-$$\int \frac{x^2}{x^2 - 1}dx$$
-(I haven't included the conditions that $x$ must meet in order for those to be valid expressions.)
-
-REPLY [11 votes]: Whenever you are trying to integrate a rational function, the first step is to do the division so that the numerator is of degree strictly smaller than the degree of the denominator (this is what Eugene Bulkin and J.M. are saying in the comments). For example, for
-$$\int \frac{x}{3x-4}\,dx$$
-you should do the division of $x$ by $3x-4$ with remainder. This is
-$$x = \frac{1}{3}(3x-4) + \frac{4}{3}$$
-which means that
-$$\frac{x}{3x-4} = \frac{1}{3} + \frac{4/3}{3x-4}.$$
-So the integral can be rewritten as
-$$\int \frac{x}{3x-4}\,dx = \int\left(\frac{1}{3} + \frac{4/3}{3x-4}\right)\,dx = \int\frac{1}{3}\,dx + \frac{4}{3}\int \frac{1}{3x-4}\,dx.$$
-The first integral is immediate. The second integral yields to a change of variable $u=3x-4$. We get
-$$\begin{align*}
-\int\frac{x}{3x-4}\,dx &= \int\frac{1}{3}\,dx + \frac{4}{3}\int\frac{1}{3x-4}\,dx\\
-&= \frac{1}{3}x + \frac{4}{9}\int\frac{du}{u}\\
-&= \frac{1}{3}x + \frac{4}{9}\ln|u| + C\\
-&= \frac{1}{3}x + \frac{4}{9}\ln|3x-4| + C.
-\end{align*}$$
-In general, if you have a denominator of degree $1$, by doing the long division you can always express it as a polynomial plus a rational function of the form
-$$\frac{k}{ax+b}$$
-with $k$, $a$, and $b$ constants. The polynomial is easy to integrate, and the fraction can be integrated with a change of variable.
-The same is true for your second integral. Doing the long division gives, as you note, that
-$$\int \frac{x^2}{x^2-1}\,dx = \int\left(1 + \frac{1}{x^2-1}\right)\,dx = \int\,dx + \int\frac{1}{x^2-1}\,dx.$$
-The first integral is easy. The second is as well, using partial fractions:
-$$\frac{1}{x^2-1} = \frac{1}{(x+1)(x-1)} = \frac{1/2}{x-1} - \frac{1/2}{x+1}$$
-so:
-$$\int\frac{1}{x^2-1}\,dx = \frac{1}{2}\int\frac{dx}{x-1} - \frac{1}{2}\int\frac{dx}{x+1} = \frac{1}{2}\ln|x-1| - \frac{1}{2}\ln|x+1|+C.$$
-See also some of the comments in this answer on solving integrals by partial fractions.
-
-REPLY [4 votes]: For your two examples my first lines would be
-$$ \int \frac{x}{3x-4} \text{d}x = \int \frac{ \frac{1}{3}(3x-4) +4/3}{3x-4} \text{d}x $$
-and
-$$ \int \frac{x^2}{x^2-1} \text{d}x = \int \frac{ (x^2-1) + 1}{x^2-1} \text{d}x . $$
-You can easily extend this technique, for example
-$$ \int \frac{x^3}{x^2-1} \text{d}x = \int \frac{ x(x^2-1) + x}{x^2-1} \text{d}x . $$<|endoftext|>
-TITLE: Probability distribution for the remainder of a fixed integer
-QUESTION [21 upvotes]: In the "Notes" section of Modern Computer Algebra by Joachim von zur Gathen, there is a quick throwaway remark that says:
-
- Dirichlet also proves the fact, surprising at first sight, that for fixed $a$ in a division the remainder $r = a \operatorname{rem} b$, with $0 \leq r < b$, is more likely to be smaller than $b/2$ than larger: If $p_a$ denotes the probability for the former, where $1 \leq b \leq a$ is chosen uniformly at random, then $p_a$ is asymptotically $2 - \ln{4} \approx 61.37\%$.
-
-The note ends there and nothing is said about it again. This fact does surprise me, and I've tried to look it up, but all my searches for "Dirichlet" and "probability" together end up being dominated by talks of Dirichlet stochastic processes (which, I assume, is unrelated).
-Does anybody have a reference or proof for this result?
-
-REPLY [10 votes]: sos440's answer is correct, but I think it makes the calculation look unnecessarily complicated. The boundaries where the remainder switches between being greater or less than $b/2$ are $a/b=n/2$, that is $b=2a/n$, for $n>2$.
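-Before doing the exact computation, the constant is easy to check empirically (a sketch; the value $a=10^6$ is an arbitrary test choice):
-
-    import math
-
-    a = 10**6
-    p = sum(a % b < b / 2 for b in range(1, a + 1)) / a
-    print(p, 2 - math.log(4))   # both are approximately 0.6137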
If we choose $b$ as a real number uniformly distributed over $[0,a]$, we can calculate the probability of $a \;\text{mod}\; b$ (defined as the unique number between $0$ and $b$ that differs from $a$ by an integer multiple of $b$) being less than $b/2$ by adding up the lengths of the corresponding intervals,
-$$
-\begin{eqnarray}
-&&\left(\left(\frac{2a}{2}-\frac{2a}{3}\right)+\left(\frac{2a}{4}-\frac{2a}{5}\right)+\left(\frac{2a}{6}-\frac{2a}{7}\right)+\ldots\right)\\
-&=&2a\left(1-(1-\frac{1}{2}+\frac{1}{3}-\ldots)\right)\\
-&=&2a(1-\ln2)\;,
-\end{eqnarray}
-$$
-which is the integral from $0$ to $a$ of the characteristic function $\chi_S$ with $S=\{b\mid a\;\mathrm{mod}\;b < b/2\}$ and yields the probability $p_a=2a(1-\ln2)/a=2(1-\ln2)$. By scaling from $[0,a]$ to $[0,1]$, we can interpret the probability for integer $b$ as an approximation to this integral using the rectangle rule, which converges to the integral as $a\to\infty$ since the mesh size of the approximation is $1/a\to0$.<|endoftext|>
-TITLE: How is Riemann–Stieltjes integral a special case of Lebesgue–Stieltjes integral?
-QUESTION [14 upvotes]: Thanks for reading! My questions are based on the following quotes from Wikipedia:
-
-About the existence of the Lebesgue–Stieltjes integral:
-
- The Lebesgue–Stieltjes integral $ \int_a^b f(x)\,dg(x)$ is defined when ƒ : [a,b] → R is Borel-measurable and bounded and g : [a,b] → R is of bounded variation in [a,b] and right-continuous, or when ƒ is non-negative and g is monotone and right-continuous.
-
-I was wondering if this is the right condition for its existence?
-About the existence of the Riemann–Stieltjes integral:
-
- The best simple existence theorem states that if f is continuous and g is of bounded variation on [a, b], then the integral exists. A function g is of bounded variation if and only if it is the difference between two monotone functions. If g is not of bounded variation, then there will be continuous functions which cannot be integrated with respect to g. In general, the integral is not well-defined if f and g share any points of discontinuity, but this sufficient condition is not necessary.
- On the other hand, a classical result of Young (1936) states that the integral is well-defined if f is α-Hölder continuous and g is β-Hölder continuous with α + β > 1.
-
-For the question in part 3, I was wondering whether, for the Riemann–Stieltjes integral $\int_a^b f(x) \, dg(x) $ to exist, $g$ must be nondecreasing? It looks like that is not the case, judging from the quote above.
-Specialization from Lebesgue–Stieltjes integral to Riemann–Stieltjes integral:
-
- Where f is a continuous real-valued function of a real variable and g is a non-decreasing real function, the Lebesgue–Stieltjes integral is equivalent to the Riemann–Stieltjes integral,
-
-I was wondering why it only mentions the case when g is nondecreasing? Is this the necessary condition for existence of the Riemann-Stieltjes integral?
-Do the Lebesgue–Stieltjes integral and the Riemann–Stieltjes integral generally use the same notation $ \int_a^b f(x)\,dg(x)$? How does one know which one the notation refers to?
-
-Thanks for helping!
-
-REPLY [4 votes]: I don't think my understanding is completely correct, and I should have posted this as a comment, but comments have length restrictions.
-
-It seems to me that when $g$ is BV and right continuous, and $f$ is Borel measurable, $f$ does not have to be bounded.
There are unbounded Lebesgue integrable functions, so the same should be true for the Lebesgue-Stieltjes integral, which is just the Lebesgue integral w.r.t. the signed measure $\mu_g$ on $\mathcal{B}(\mathbb{R})$ induced by $g$.
-It should be OK for a function $g$ of bounded variation. We can write $g$ as the difference of two non-decreasing functions.
-I don't think the Riemann-Stieltjes integral requires the integrator to be non-decreasing. It might be BV or possibly an even broader class of functions. I guess the author of the Wikipedia entry mentions only nondecreasing functions because s/he has the CDF of a random variable in mind and wants to discuss its application in probability theory.
-I also have the impression that these two integrals agree whenever the Riemann-Stieltjes integral exists. (Just like the relation between the Lebesgue integral and the Riemann integral.)
-If these two notions agree, there's no danger of using the same notation. Otherwise, I've seen authors using a prefix to distinguish different types of integrals, e.g., $(R)\int_a^b...$ for Riemann(-Stieltjes) integrals.<|endoftext|>
-TITLE: Is expectation Riemann-/Lebesgue–Stieltjes integral?
-QUESTION [8 upvotes]: In probability theory, when having $ E(f(X))=\int_{-\infty}^\infty f(x)\, dg(x) $, an expectation of a measurable function $f$ of a random variable $X$ with respect to its cumulative distribution function $g$,
-
-is it true that it is always a Lebesgue–Stieltjes integral?
-Furthermore, is it always a Riemann–Stieltjes integral?
-
-Thanks and regards!
-
-REPLY [4 votes]: This question is closely related to your other question regarding Riemann-Stieltjes integrals.
-In the case that $f$ is continuous, these two types of integral agree provided they are finite.
-But there are also other cases. For instance, if $f$ is merely Borel measurable, then the Lebesgue-Stieltjes integral $\int f(x) dg(x)$ is defined, but the Riemann-Stieltjes integral need not be, because $f$ might be unbounded. This still has a probabilistic interpretation because in this case $f(X)$ is still a random variable, and we can still consider the expectation of $f(X)$.
-
-REPLY [3 votes]: It may be better for you to use the notation
-$$
-\mathsf{E}[f(X)] = \int\limits_{\mathbb{R}}f(x)Q(dx)
-$$
-where $Q$ is the distribution of the r.v. $X$. Then you do not need to worry if $X$ is a continuous r.v. or a discrete one. Then depending on what you know about $X$ (a cumulative distribution function $g$ or a density function $h$) you will rewrite the first equation using
-$$
-Q(dx) = dg(x) = h(x)\,dx.
-$$
-Usually only Lebesgue (Lebesgue-Stieltjes) integrals are used in probability theory. On the other hand, to calculate them you can use the equivalence of Lebesgue-Stieltjes and Riemann-Stieltjes integrals (provided the necessary conditions hold).
-Edited: For a discrete distribution, the CDF $g$ is a pure jump function with jumps at the values $\alpha_i$ of the r.v. $X$ and sizes of jumps $p_i$ such that $p_i = \mathsf{P}(X = \alpha_i)$. Then the Lebesgue integral above can again be rewritten as a Lebesgue-Stieltjes integral using $g$, or as a sum.<|endoftext|>
-TITLE: Proving the quotient of a principal ideal domain by a prime ideal is again a principal ideal domain
-QUESTION [10 upvotes]: Please help me prove that
-
-the quotient of a principal ideal domain by a prime ideal is again a principal ideal domain.
-
-This was from Abstract Algebra.
-
-REPLY [4 votes]: If the ideal is nontrivial, then the quotient will be a field, a PID; otherwise the quotient will be the original PID.
-
-To prove the first assertion, prove that every nonzero prime ideal in a PID is necessarily maximal.<|endoftext|>
-TITLE: Alternative definition for topological spaces?
-QUESTION [5 upvotes]: I have just started reading topology so I am a total beginner, but why are topological spaces defined in terms of open sets? I find it hard and unnatural to think about them intuitively. Perhaps the reason is that I can't see them visually. Take groups, for example: they relate directly to physical rotations and numbers, thus allowing me to see them at work. Is there a similar analogy or definition that could allow me to understand topological spaces more intuitively?
-
-REPLY [2 votes]: In "Quantales and continuity spaces" Flagg develops the notion of a metric space where the distance function takes values in a value quantale. A value quantale is an abstraction of the properties of the poset $[0,\infty]$ needed for 'doing analysis'. It is then shown that every topological space $X$ is metrizable in the sense that there exists a value quantale $V$ (depending on the topology on $X$) such that the topological space $X$ is given by the open balls determined by a metric structure on $X$ with values in $V$. At this level of abstraction it is thus seen that the open sets axiomatization for topology is nothing but the good old notion of a metric space, only taking values in value quantales other than $[0,\infty]$.<|endoftext|>
-TITLE: GCD computations in $\mathbb{Z}[i]$
-QUESTION [7 upvotes]: Problem statement:
-
-Find a generator of the ideal $(85, 1+13i)$ in $\mathbb{Z}[i]$, i.e., a GCD for $85$ and $1 + 13i$ by the Euclidean Algorithm. Do the same for the ideal $(47-13i, 53+56i).$
-
-Can you please outline the steps, then I can practice with others.
-
-Source: Abstract Algebra by Dummit & Foote, $\S$8.1 #7
-
-REPLY [6 votes]: A good way to understand the Euclidean algorithm for $\mathbb{Z}[i]$ is to prove that $R:=\mathbb{Z}[i]$ is a Euclidean domain with respect to the function $\varphi(a+bi)=a^2+b^2$.
-This can be done in the following way:
-1) for $x\in\mathbb{Q}$ there are $y\in \mathbb{Z}$ and $z\in\mathbb{Q}$, $|z|\leq \frac 1 2$, such that $x = y+z$ (use the Gauss floor function).
-2) if $a,b \in R$, then $\frac a b \in \mathbb{Q}(i)$. Write $\frac a b = y_1+z_1 + (y_2+z_2)i$, according to (1), with $y_j \in \mathbb{Z}$ and $z_j \in \mathbb{Q}, ~ |z_j|\leq \frac 1 2$.
-3) Now we can write $a=qb+r$, $q:=y_1+y_2i$, $r:=b(z_1+z_2i)$. $q,r \in R$.
-4) The important part is: $\varphi(r)<\varphi(b)$ (use the fact that $\varphi$ is multiplicative).
-$\varphi$ works just like the absolute value in $\mathbb{Z}$. It will become smaller in every step, so the algorithm will terminate.
-From this proof we gather the following algorithm: Compute the fraction $\frac{a}{b}=x+yi$ in $\mathbb{C}$. For $x,y$ choose the closest integers $\tilde x, \tilde y$. Then $a=b(\tilde x + \tilde y i) - r$ with a suitable $r$. In this way you can do a division with remainder in $\mathbb{Z}[i]$.<|endoftext|>
-TITLE: Why use a matrix?
-QUESTION [6 upvotes]: Why would I want to use a matrix? It's good for organizing a few numbers, but I can't find too much use for them. Could someone explain?
-
-REPLY [3 votes]: We use matrices to study linear transformations between vector spaces. In particular, suppose we have a linear transformation $T: V \to W$ between two vector spaces $V$ and $W$. Suppose we have an ordered basis of $V$ and an ordered basis of $W$. Then we can represent vectors in $V$ and $W$ as column vectors.
We can then represent $T$ by a matrix.<|endoftext|>
-TITLE: Orthogonal Matrices and Symplectic Matrices and Preservation of Forms
-QUESTION [5 upvotes]: I would like to know the properties of orthogonal matrices and symplectic matrices in terms of the forms they preserve. Could someone please add and/or correct, maybe give some refs/examples?
-AFAIK, given a quadratic form $q$ on a vector space $V$ over a field $F$, there is an associated orthogonal group $O(2n)$, a subgroup of $GL(2n,F)$, which preserves $q$; if $F$ is the reals, $O(2n)$ preserves $q$ = the inner product and the norm (since over $\mathbb{R}$, the norm is induced by the inner product). Symplectic matrices only preserve symplectic forms, i.e., bilinear, antisymmetric, non-degenerate forms.
-Are there relations between these groups; do they overlap, intersect, etc?
-I am interested mostly in the case where the field is $\mathbb{Z}/2$.
-Thanks
-
-REPLY [3 votes]: Over the reals the intersection of the orthogonal and symplectic groups in even dimension is isomorphic to the unitary group in the half dimension:
-$$ U(n) = O(2n, \mathbf{R}) \cap Sp(2n, \mathbf{R}). $$
-This is the 2-out-of-3 property expressing the compatibility of the symplectic structure with the symmetric bilinear form of the orthogonal group.
-The orthogonal group over $Z_2$ is a subgroup of the symplectic group because a symmetric bilinear form is also alternating (since $-1 = +1$).
-The full symplectic group $Sp(2n, Z_2)$ can be realized from the action of the $2^n$-dimensional Clifford group on the bits of the binary representation of the basis vectors (up to a phase), as explained in Daniel Gottesman's paper.<|endoftext|>
-TITLE: Epsilon Delta proof of a floor function
-QUESTION [5 upvotes]: I have to provide a proof for a function but I'm struggling to grasp the main concept.
-$$\lim_{x \to 3} \left\lfloor \frac{x}{2}\right\rfloor = 1 $$
-Here is what I've come up with:
-$$\frac{x}{2} - 1 \lt \left\lfloor \frac{x}{2}\right\rfloor \lt \frac{x}{2} + 1$$
-But I'm stuck from here since I cannot use anything else but $\varepsilon$-$\delta$ (meaning no squeeze theorem). Each of the limits (for every side) is not helpful. I get: $$ \frac{1}{2} \lt \lim_{x \to 3} \left\lfloor \frac{x}{2}\right\rfloor \lt \frac{5}{2}$$
-Any clarifications are welcome!
-Thanks!
-
-REPLY [3 votes]: You can just use the fact that $\lfloor \frac{x}{2}\rfloor $ is constant on $[2,4)$, so for any $\varepsilon >0$, $\delta =0.5$ (or any convenient number $<1$) works.<|endoftext|>
-TITLE: Determining the cardinality of a set
-QUESTION [5 upvotes]: If you have a set that looks like $S_1 = \{0,1,2,3,4\}$, I understand the cardinality of the set is $5$.
-What about if you have a set of sets, so
-$S_1=\{S_2,S_3,S_4\}$
-where
-$S_2=\{1,2,3\}$
-$S_3=\{1,2\}$
-$S_4=\{1\}$.
-For the cardinality of $S_1$, do you count all the elements of the included sets, so the answer would be $6$, or do you just count the number in $S_1$, so it would be $3$?
-
-REPLY [5 votes]: It is the latter, i.e. we just count the number of elements of $S_1$, which is $3$.<|endoftext|>
-TITLE: Characterization of rank in an exterior algebra
-QUESTION [5 upvotes]: The wikipedia page on exterior algebras makes the following reasonable sounding statement (I paraphrase):
-Let $V$ be a complex vector space and consider the second exterior power $\bigwedge^2 V$. By the rank of $\alpha \in \bigwedge^2 V$, we mean the smallest number $p$ such that $\alpha$ can be written as a sum of $p$ decomposable elements.
CLAIM: $\alpha$ has rank $p$ if and only if the $p$-fold wedge product $\alpha \wedge \dots \wedge \alpha$ is nonzero but the $(p+1)$-fold wedge product is $0$.
-Unfortunately I can't figure out a proof of this, even in the case $p=1$, though it feels like it ought to be elementary. I've looked in a bunch of algebra books, but none of them explain it, probably because it is really easy and I am just missing something.
-Please help!
-
-REPLY [5 votes]: Suppose $\alpha$ can be written as a sum of $p$ decomposable elements, so that $$\alpha=v_1\wedge w_1+\cdots+ v_p\wedge w_p.$$
-Let $q\geq1$. The $q$th power of $\alpha$ is then $$\alpha^q = \sum_{i_1,\dots,i_q} (v_{i_1}\wedge w_{i_1})\wedge(v_{i_2}\wedge w_{i_2})\wedge\cdots\wedge (v_{i_q}\wedge w_{i_q})$$ where the sum is taken over all choices of indices $i_1$, $i_2$, $\dots$, $i_q$ in $\{1,\dots,p\}$.
-Consider a term in this sum, corresponding to a choice of $i_1$, $i_2$, $\dots$, $i_q$. If two of the $i_j$s are equal, the term vanishes. If $q\leq p$ and the $i_j$s are all different, then since each $v_{i_j}\wedge w_{i_j}$ has degree two, these factors commute, and the term depends only on the set of indices chosen; in particular, when $q=p$ the term is equal to $(v_1\wedge w_1)\wedge(v_2\wedge w_2)\wedge\cdots\wedge(v_p\wedge w_p)$.
-It follows from this that if $q>p$ then $\alpha^q=0$, for then all terms vanish, and then the rank of $\alpha$ is an upper bound for the highest non-zero power of $\alpha$.
-On the other hand, if $q=p$, then there are $p!$ ways of choosing the indices $i_1$, $i_2$, $\dots$, $i_p$ in $\{1,\dots,p\}$ so that they are distinct, so we see that $$\alpha^p = p!\cdot (v_1\wedge w_1)\wedge(v_2\wedge w_2)\wedge\cdots\wedge(v_p\wedge w_p).$$
-Can you see why $\alpha^{\mathrm{rank}\alpha}\neq0$?<|endoftext|>
-TITLE: What is a Lagrangian submanifold?
-QUESTION [6 upvotes]: I see references to Lagrangian Submanifolds in the literature but don't know what they are. Is there a relation to Lagrangian Tori (which I don't know what they are either)? Could someone give a definition that includes an intuitive explanation for what their use is?
-
-REPLY [2 votes]: It's a special kind of submanifold of a symplectic manifold; see the definition on Wikipedia. There was a question about this on Math Overflow not long ago, where you will find some good answers.<|endoftext|>
-TITLE: Find all irreducible monic polynomials in $\mathbb{Z}/(2)[x]$ with degree equal to or less than 5
-QUESTION [49 upvotes]: Find all irreducible monic polynomials in $\mathbb{Z}/(2)[x]$ with degree equal to or less than $5$.
-
-This is what I tried:
-It's evident that $x,x+1$ are irreducible. Then, use these to find all reducible polynomials of degree 2. The ones that can't be made are irreducible. Then use these to make polynomials of degree 3, the ones that can't be made are irreducible. Repeat until degree 5.
-Doing it this way takes way too long, and I just gave up when I was about to reach the degree 4 polynomials.
-My question is: is there any easier way to find these polynomials?
-P.S.: this is not exactly homework, but a question which I came across while studying for an exam.
-
-REPLY [67 votes]: Extrapolated Comments converted to answer:
-First, we note that there are $2^n$ polynomials in $\mathbb{Z}_2[x]$ of degree $n$.
-A polynomial $p(x)$ of degree $2$ or $3$ is irreducible if and only if it does not have linear factors. Therefore, it suffices to show that $p(0) = p(1) = 1$. This quickly tells us that $x^2 + x + 1$ is the only irreducible polynomial of degree $2$.
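-(The whole list that this sieve produces can also be double-checked by brute force; here is a sketch in Python, where a polynomial over $\mathbb{Z}_2$ is encoded as an integer bit mask, an encoding and helper of my own choosing:
-
-    def clmul(a, b):
-        # carry-less product = polynomial multiplication over Z/2
-        r = 0
-        while b:
-            if b & 1:
-                r ^= a
-            a <<= 1
-            b >>= 1
-        return r
-
-    reducible = set()
-    for a in range(2, 32):        # bit masks of factors of degree 1..4
-        for b in range(a, 32):
-            p = clmul(a, b)
-            if p < 64:            # keep products of degree <= 5
-                reducible.add(p)
-
-    print([bin(p) for p in range(2, 64) if p not in reducible])
-
-This prints the $14$ bit masks of the irreducible polynomials of degree $1$ to $5$, agreeing with the recap at the end of this answer.)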
This also tells us that $x^3 + x^2 + 1$ and $x^3 + x + 1$ are the only irreducible polynomials of degree $3$. -As hardmath points out, for a polynomial $p(x)$ of degree $4$ or $5$ to be irreducible, it suffices to show that $p(x)$ has no linear or quadratic factors. To rule out the linear factors, we can again throw out any polynomial not satisfying $p(0) = p(1) = 1$. That is, we can throw out any polynomial with constant term $0$, and we can throw out any polynomial with an even number of terms. This rules out $3/4$ of the polynomials. For example, the $4^{th}$ degree polynomials which do not have linear factors are: - -$ x^4 + x^3 + x^2 + x + 1 $ -$ x^4 + x^3 + 1 $ -$ x^4 + x^2 + 1 $ -$ x^4 + x + 1 $ - -The $5^{th}$ degree polynomials which do not contain linear factors are: - -$x^5 + x^4 + x^3 + x^2 + 1$ -$x^5 + x^4 + x^3 + x + 1$ -$x^5 + x^4 + x^2 + x + 1$ -$x^5 + x^3 + x^2 + x + 1$ -$x^5 + x^4 + 1$ -$x^5 + x^3 + 1$ -$x^5 + x^2 + 1$ -$x^5 + x + 1$ - -It still remains to check whether $x^2 + x + 1$ (which is the only quadratic irreducible polynomial in $\mathbb{Z}_2[x]$) divides any of these polynomials. This can be done by hand for sufficiently small degrees. Again, as hardmath points out, since $x^2 + x + 1$ is the only irreducible polynomial of degree $2$, it follows that $(x^2 + x + 1)^2 = x^4 + x^2 + 1$ is the only polynomial of degree $4$ which does not have linear factors and yet is not irreducible. Therefore, the other $3$ polynomials listed must be irreducible. Similarly, for degree $5$ polynomials, we can rule out -$$ -(x^2 + x + 1)(x^3 + x^2 + 1) = x^5 + x + 1 -$$ -and -$$ -(x^2 + x + 1)(x^3 + x + 1) = x^5 + x^4 + 1. -$$ -The other $6$ listed polynomials must therefore be irreducible. -Notice that this trick of throwing out polynomials with linear factors, then quadratic factors, etc. (which hardmath called akin to the Sieve of Eratosthenes) is not efficient for large degree polynomials (even degree $6$ starts to be a problem, as a polynomial of degree $6$ can factor as a product of two polynomials of degree $3$). This method, therefore, only works for sufficiently small degree polynomials. -To recap, the irreducible polynomials in $\mathbb{Z}_2[x]$ of degree $\leq 5$ are: - -$x$ -$x+1$ -$x^2 + x + 1$ -$x^3 + x^2 + 1$ -$x^3 + x + 1$ -$ x^4 + x^3 + x^2 + x + 1 $ -$ x^4 + x^3 + 1 $ -$ x^4 + x + 1 $ -$x^5 + x^4 + x^3 + x^2 + 1$ -$x^5 + x^4 + x^3 + x + 1$ -$x^5 + x^4 + x^2 + x + 1$ -$x^5 + x^3 + x^2 + x + 1$ -$x^5 + x^3 + 1$ -$x^5 + x^2 + 1$<|endoftext|> -TITLE: What does the math notation $\sum$ mean? -QUESTION [11 upvotes]: I have come across this symbol a few times, and I am not sure what it "does" or what it means: -$\Large\sum$ - -REPLY [2 votes]: Coming from a programming background, I found it quite helpful to explain it using a for loop: -The mathematician would write it like this: -$\sum\limits_{i=m}^n f(i)$ -And the programmer would write it like this: -result = 0 -for (i=m; i<=n; i++) { - result += f(i) -} - -You can think of $m$ as the start index and $n$ as the end index.<|endoftext|> -TITLE: Residual Finiteness of Fundamental Groups of Seifert Fibered Spaces -QUESTION [10 upvotes]: I'm trying to understand why, if $S$ is a Seifert fibered space, then $\pi_1(S)$ is residually finite. From theorems 12.2 and 11.10 in Hempel's "3-manifolds", we can work with a finite-sheeted covering $M$ such that $M$ is an $S^1$-bundle over a closed surface $T$.
According to a 1987 paper by Hempel, there exists a finite-sheeted covering $\widehat{M}$ of $M$, such that $\widehat{M}$ is of the type $S^1\times F$, $F$ a closed surface. Of course, the result follows from this, but my question is: -Why does $\widehat{M}$, with such a homeomorphism type, exist? -I'm not actually sure if the existence of $\widehat{M}$ is necessary to deduce the r.f. of $\pi_1(M)$, but I don't think knowing only that it fits into the sequence $1\rightarrow\mathbb{Z}\rightarrow\pi_1(M)\rightarrow\pi_1(T)\rightarrow 1$ is enough. - -REPLY [6 votes]: The proof of residual finiteness of the central cyclic extensions of surface groups is not hard: It suffices to consider the extension group with the presentation -$G=\langle a_1,b_1,\dots,a_g,b_g,t \mid [a_1,b_1]\cdots[a_g,b_g]=t,\ t \text{ central}\rangle$. -Now, kill all $a_i, b_i$ except for $a_1, b_1$. The quotient is the integer -Heisenberg group -$H=\langle a_1,b_1,t \mid [a_1,b_1]=t,\ t \text{ central}\rangle$. -The latter group is isomorphic to the group of integer 3x3 upper triangular matrices with 1's on the diagonal. We thus get a homomorphism -$h: G\to H$ that is injective on the subgroup $\langle t\rangle$. We also have the -epimorphism $f: G\to F$ killing the generator $t$, where $F$ is the surface group. Now, take the product homomorphism $f\times h: G\to F\times H$ which is clearly injective. Since both $F$ and $H$ are residually finite, it follows that $G$ is residually finite as well. This argument also shows that $G$ is linear. It is an open problem (due to W. Thurston in the 1980's) whether the fundamental group of every compact 3-manifold $M$ is linear. (The hard case is when the JSJ-decomposition of $M$ is nontrivial, otherwise it follows from Perelman.)<|endoftext|> -TITLE: Are Vitali sets dense in [0,1)? -QUESTION [8 upvotes]: I am trying to get a better handle on the nature of Vitali sets, generated by choosing representatives in $[0,1)$ from the equivalence classes $\mathbb{Q} + r$ where $r \in \mathbb{R}$. -If $V$ is a Vitali set described above, I understand that $V$ contains only a single rational number and that it is uncountable. Also, its complement $[0,1) \sim V$ is uncountable since we have excluded from $V$ a collection of elements in $[0,1)$ associated to each of the uncountable number of elements in $V$. -However, I have been unable to find (or determine) if $V$ is dense in [0,1). Is this known? - -REPLY [7 votes]: Note that you get one Vitali set for each choice of representatives of the equivalence classes, so in fact there are many Vitali sets, not just one. -A Vitali set need not be dense in $[0,1)$. For example, instead of picking representatives that are in $[0,1)$, you can pick representatives that are in $[0,q)$ for any rational $q\gt 0$; in particular, you can make sure that your Vitali set is constrained to as small a part of $[0,1)$ as you care to specify (and you can translate the set by adding a constant rational to it, as well). -To see this, it suffices to show that for every real number $r$, there is a real number $s\in [0,q)$ such that $r-s\in\mathbb{Q}$. But this is easy: pick a positive rational $t\in (0,q)$. By the Archimedean property, there exists an integer $N$ such that $Nt\leq r\lt (N+1)t$. In particular, $0\leq r-Nt\lt t$, so letting $s=r-Nt$ gives the desired real number. -Since every real is equivalent to some real in $[0,q)$, you can always pick the class representatives to be in $[0,q)$ (instead of $[0,1)$). So you can ensure that you have a Vitali set that is contained in $[0,q)$.
-Similarly, if you select any interval $(a,b)$ contained in $[0,1)$ you can find a Vitali set that is contained in $(a,b)$, and in particular whose closure is not $[0,1]$ if $0\lt a\lt b\lt 1$. -In fact, you can find a Vitali set that has very small outer measure (any outer measure strictly between $0$ and $1$ that you care to specify ahead of time): see for example JDH's answer to this question.<|endoftext|> -TITLE: How to compute Riemann-Stieltjes / Lebesgue(-Stieltjes) integral? -QUESTION [23 upvotes]: The definitions do not seem easy to me for computation. For example, the Lebesgue(-Stieltjes) integral is a measure theory concept, involving a construction that goes from step functions to simple functions, then nonnegative functions, and finally general functions. -I was wondering, in practice, what the common ways of computing the Lebesgue(-Stieltjes) integral are? - -Is it most desirable, when possible, -to convert the Lebesgue(-Stieltjes) -integral to a Riemann(-Stieltjes) -integral, and the Riemann(-Stieltjes) -integral to a Riemann integral, and -then apply the methods learned from -calculus to compute the equivalent -Riemann integral? -What about the cases when the -equivalence/conversion is not -possible? Is the definition the only way -to compute Riemann-Stieltjes or -Lebesgue(-Stieltjes) integrals? - -My questions come from a previous reply by Gortaur - -Usually only Lebesgue - (Lebesgue-Stieltjes) integrals are - used in the probability theory. On the - other hand to calculate them you can - use an equivalence of - Lebesgue-Stieltjes and - Riemann-Stieltjes integrals (provided - necessary conditions). - -Thanks and regards! - -REPLY [38 votes]: Even with the Riemann Integral, we do not usually use the definition (as a limit of Riemann sums, or by verifying that the limit of the upper sums and the lower sums both exist and are equal) to compute integrals. Instead, we use the Fundamental Theorem of Calculus, or theorems about convergence. The following are taken from Frank E. Burk's A Garden of Integrals, which I recommend. One can use these theorems to compute integrals without having to go down all the way to the definition (when they are applicable). -Theorem (Theorem 3.8.1 in AGoI; Convergence for Riemann Integrable Functions) If $\{f_k\}$ is a sequence of Riemann integrable functions converging uniformly to the function $f$ on $[a,b]$, then $f$ is Riemann integrable on $[a,b]$ and -$$R\int_a^b f(x)\,dx = \lim_{k\to\infty}R\int_a^b f_k(x)\,dx$$ -(where "$R\int_a^b f(x)\,dx$" means "the Riemann integral of $f(x)$"). -Theorem (Theorem 3.7.1 in AGoI; Fundamental Theorem of Calculus for the Riemann Integral) If $F$ is a differentiable function on $[a,b]$, and $F'$ is bounded and continuous almost everywhere on $[a,b]$, then: - -$F'$ is Riemann-integrable on $[a,b]$, and -$\displaystyle R\int_a^x F'(t)\,dt = F(x) - F(a)$ for each $x\in [a,b]$. - - -Likewise, for Riemann-Stieltjes, we don't usually go by the definition; instead we try, as far as possible, to use theorems that tell us how to evaluate them. For example: -Theorem (Theorem 4.3.1 in AGoI) Suppose $f$ is continuous and $\phi$ is differentiable, with $\phi'$ being Riemann integrable on $[a,b]$. Then the Riemann-Stieltjes integral of $f$ with respect to $\phi$ exists, and -$$\text{R-S}\int_a^b f(x)d\phi(x) = R\int_a^b f(x)\phi'(x)\,dx$$ -where $\text{R-S}\int_a^bf(x)d\phi(x)$ is the Riemann-Stieltjes integral of $f$ with respect to $d\phi(x)$.
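-As an added numerical illustration of Theorem 4.3.1 (a Python sketch, with $f$ and $\phi$ chosen arbitrarily, not part of the original answer): both sums below approximate $\text{R-S}\int_0^1 x^2\,d(x^3) = R\int_0^1 x^2\cdot 3x^2\,dx = 3/5$. -n = 10**5 -f = lambda x: x**2 -phi = lambda x: x**3  # differentiable, with phi'(x) = 3x^2 Riemann integrable -rs = sum(f(i / n) * (phi((i + 1) / n) - phi(i / n)) for i in range(n))  # Riemann-Stieltjes sum -r = sum(f(i / n) * 3 * (i / n)**2 / n for i in range(n))  # Riemann sum of f * phi' -print(rs, r)  # both print approximately 0.6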
-Theorem (Theorem 4.3.2 in AGoI) Suppose $f$ and $\phi$ are bounded functions with no common discontinuities on the interval $[a,b]$, and that the Riemann-Stieltjes integral of $f$ with respect to $\phi$ exists. Then the Riemann-Stieltjes integral of $\phi$ with respect to $f$ exists, and -$$\text{R-S}\int_a^b \phi(x)df(x) = f(b)\phi(b) - f(a)\phi(a) - \text{R-S}\int_a^bf(x)d\phi(x).$$ -Theorem. (Theorem 4.4.1 in AGoI; FTC for Riemann-Stieltjes Integrals) If $f$ is continuous on $[a,b]$ and $\phi$ is monotone increasing on $[a,b]$, then $$\displaystyle \text{R-S}\int_a^b f(x)d\phi(x)$$ -exists. Defining a function $F$ on $[a,b]$ by -$$F(x) =\text{R-S}\int_a^x f(t)d\phi(t),$$ -then - -$F$ is continuous at any point where $\phi$ is continuous; and -$F$ is differentiable at each point where $\phi$ is differentiable (almost everywhere), and at such points $F'=f\phi'$. - -Theorem. (Theorem 4.6.1 in AGoI; Convergence Theorem for the Riemann-Stieltjes integral.) Suppose $\{f_k\}$ is a sequence of continuous functions converging uniformly to $f$ on $[a,b]$ and that $\phi$ is monotone increasing on $[a,b]$. Then - -The Riemann-Stieltjes integral of $f_k$ with respect to $\phi$ exists for all $k$; and -The Riemann-Stieltjes integral of $f$ with respect to $\phi$ exists; and -$\displaystyle \text{R-S}\int_a^b f(x)d\phi(x) = \lim_{k\to\infty} \text{R-S}\int_a^b f_k(x)d\phi(x)$. - -One reason why one often restricts the Riemann-Stieltjes integral to $\phi$ of bounded variation is that every function of bounded variation is the difference of two monotone increasing functions, so we can apply theorems like the above when $\phi$ is of bounded variation. - -For the Lebesgue integral, there are a lot of "convergence" theorems: theorems that relate the integral of a limit of functions with the limit of the integrals; these are very useful to compute integrals. Among them: -Theorem (Theorem 6.3.2 in AGoI) If $\{f_k\}$ is a monotone increasing sequence of nonnegative measurable functions converging pointwise to the function $f$ on $[a,b]$, then the Lebesgue integral of $f$ exists and -$$L\int_a^b fd\mu = \lim_{k\to\infty} L\int_a^b f_kd\mu.$$ -Theorem (Lebesgue's Dominated Convergence Theorem; Theorem 6.3.3 in AGoI) Suppose $\{f_k\}$ is a sequence of Lebesgue integrable functions ($f_k$ measurable and $L\int_a^b|f_k|d\mu\lt\infty$ for all $k$) converging pointwise almost everywhere to $f$ on $[a,b]$. Let $g$ be a Lebesgue integrable function such that $|f_k|\leq g$ on $[a,b]$ for all $k$. Then $f$ is Lebesgue integrable on $[a,b]$ and -$$L\int_a^b fd\mu = \lim_{k\to\infty} L\int_a^b f_kd\mu.$$ -Theorem (Theorem 6.4.2 in AGoI) If $F$ is a differentiable function, and the derivative $F'$ is bounded on the interval $[a,b]$, then $F'$ is Lebesgue integrable on $[a,b]$ and -$$L\int_a^x F'd\mu = F(x) - F(a)$$ -for all $x$ in $[a,b]$. -Theorem (Theorem 6.4.3 in AGoI) If $F$ is absolutely continuous on $[a,b]$, then $F'$ is Lebesgue integrable and -$$L\int_a^x F'd\mu = F(x) - F(a),\qquad\text{for }x\text{ in }[a,b].$$ -Theorem (Theorem 6.4.4 in AGoI) If $f$ is continuous and $\phi$ is absolutely continuous on an interval $[a,b]$, then the Riemann-Stieltjes integral of $f$ with respect to $\phi$ is the Lebesgue integral of $f\phi'$ on $[a,b]$: -$$\text{R-S}\int_a^b f(x)d\phi(x) = L\int_a^b f\phi'd\mu.$$ - -For Lebesgue-Stieltjes Integrals, you also have an FTC: -Theorem. 
(Theorem 7.7.1 in AGoI; FTC for Lebesgue-Stieltjes Integrals) If $g$ is a Lebesgue measurable function on $\mathbb{R}$, $f$ is a nonnegative Lebesgue integrable function on $\mathbb{R}$, and $F(x) = L\int_{-\infty}^x f\,d\mu$, then - -$F$ is bounded, monotone increasing, absolutely continuous, and differentiable almost everywhere with $F' = f$ almost everywhere; -There is a Lebesgue-Stieltjes measure $\mu_f$ so that, for any Lebesgue measurable set $E$, $\mu_f(E) = L\int_E fd\mu$, and $\mu_f$ is absolutely continuous with respect to Lebesgue measure. -$\displaystyle \text{L-S}\int_{\mathbb{R}} gd\mu_f = L\int_{\mathbb{R}}gfd\mu = L\int_{\mathbb{R}} gF'd\mu$. - - -The Henstock-Kurzweil integral likewise has monotone convergence theorems (if $\{f_k\}$ is a monotone sequence of H-K integrable functions that converge pointwise to $f$, then $f$ is H-K integrable if and only if the integrals of the $f_k$ are bounded, and in that case the integral of the limit equals the limit of the integrals); a dominated convergence theorem (very similar to Lebesgue's dominated convergence); an FTC that says that if $F$ is differentiable on $[a,b]$, then $F'$ is H-K integrable and -$$\text{H-K}\int_a^x F'(t)dt = F(x) - F(a);$$ -(this holds if $F$ is continuous on $[a,b]$ and has at most countably many exceptional points on $[a,b]$ as well); and a "2nd FTC" theorem.<|endoftext|> -TITLE: how to find the unique smallest topology? -QUESTION [9 upvotes]: I have come across the following question: -Let $\mathscr{T}_\alpha$ be a family of topologies on $X$. Show that there is a unique smallest topology on $X$ containing all the collections $\mathscr{T}_\alpha$, and a unique largest topology contained in all $\mathscr{T}_\alpha$. -I think that the unique smallest topology equals the union of all the $\mathscr{T}_\alpha$'s, and the unique largest topology contained in all $\mathscr{T}_\alpha$ equals the intersection of all the $\mathscr{T}_\alpha$'s. - -REPLY [10 votes]: The basic fact to verify is that if $\mathcal{T}_i$, $i \in I$, is an indexed collection of topologies on a set $X$, then their intersection $\bigcap_{i \in I} \mathcal{T}_i = \left\{ A \subseteq X: \forall i \in I,\ A \in\mathcal{T}_i \right\}$ is a topology on $X$ as well. This is straightforward from the topology axioms. -Once this is done, then the smallest topology that contains all $\mathcal{T}_i$ can be defined as the intersection of all topologies $\mathcal{T}$ that contain $\bigcup_{i \in I} \mathcal{T}_i$ as a subfamily, and there is at least one (the discrete topology) so we take the intersection of a non-empty family of topologies, which is a topology as we saw above. And it is clearly the smallest one that contains all $\mathcal{T}_i$: if $\mathcal{T}$ is any such topology, it is one of the topologies we take the intersection of, and thus the intersection is clearly a subset of $\mathcal{T}$. -Also, the largest topology contained in all $\mathcal{T}_i$ is simply their intersection, and this is clearly the largest topology that is contained in all of them.<|endoftext|> -TITLE: Square root of a specific 3x3 matrix -QUESTION [9 upvotes]: From a problem set I'm working on: (Edit 04/11 - I fudged a sign in my matrix...) - -Let $A(t) \in M_3(\mathbb{R})$ be defined: $$ A(t) = - \left( \begin{array}{crc} 1 & 2 & 0 \\ - 0 & -1 & 0 \\ t-1 & -2 & t \end{array} - \right).$$ -For which $t$ does there exist a $B \in M_3(\mathbb{R})$ - such that $B^2 = A$? - -In a previous part of the problem, I showed that $A(t)$ could be diagonalized into a real diagonal matrix for all $t \in \mathbb{R}$, with eigenvalues $1,-1,t$.
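-(A quick numerical confirmation of that eigenvalue computation, added here as a NumPy sketch, not part of the original question:) -import numpy as np -def A(t): -    return np.array([[1.0, 2.0, 0.0], -                     [0.0, -1.0, 0.0], -                     [t - 1.0, -2.0, t]]) -for t in (-2.0, -1.0, 0.5, 3.0): -    print(t, sorted(np.linalg.eigvals(A(t)).real))  # always approximately {-1, 1, t}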
-A few things I've thought of: - -The matrix is not positive-semidefinite, so the general form of the square root does not work. (Is positive-definiteness a necessary condition for the existence of a square root?) -Since $A = B^2$, then $\det(B^2) = (\det B)^2 = \det A$. So $\det A \geq 0$ for there to be a real-valued square root, forcing $t \leq 0$ to be necessary. -My professor suggested that, since $B^2$ fits the characteristic polynomial of $A$, $\mu_A(x) = (x-1)(x+1)(x-a)$, then the minimal polynomial of $B$ must divide $\mu_A(x^2) = (x^2-1)(x^2+1)(x^2-a) = (x-1)(x+1)(x^2+1)(x^2-a)$. Examining the possible minimal polynomials, one can find the rational canonical form, square it, and check whether the eigenvalues match. This probably could get me the right answer, but I am fairly sure that there is an alternative to a "proof by exhaustion". - -REPLY [8 votes]: Assume that there exists a real number $t$ and a real matrix $B$ such that $A(t)=B^2$. -Note that $-1$ is an eigenvalue of $A(t)$, hence $A(t)+I=(B-\mathrm{i}I)(B+\mathrm{i}I)$ is singular. This implies that $B-\mathrm{i}I$ or $B+\mathrm{i}I$ is singular. Since $B$ is real valued, this means that both $B-\mathrm{i}I$ and $B+\mathrm{i}I$ are singular. Likewise, $1$ is an eigenvalue of $A(t)$, hence $A(t)-I=(B-I)(B+I)$ is singular. This implies that $B-I$ or $B+I$ is singular. Hence the eigenvalues of $B$ are $\{\mathrm{i},-\mathrm{i},1\}$ or $\{\mathrm{i},-\mathrm{i},-1\}$. -In both cases, $B$ has three distinct eigenvalues hence $B$ is diagonalizable on $\mathbb{C}$. This implies that the eigenvalues of $A(t)$ are $-1$ (twice) and $1$ (once) and that $A(t)$ is diagonalizable as well. Hence $t=-1$. We now look at the matrix $A(-1)$. -One can check that $A(-1)$ is diagonalizable hence $A(-1)$ is similar to a diagonal matrix with diagonal $(1,-1,-1)$. Both $I_1$ (the $1\times1$ matrix with coefficient $1$) and $-I_2$ (the $2\times2$ diagonal matrix with diagonal coefficients $-1$) have square roots: take $I_1$ for $I_1$ and the rotation matrix $\begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix}$ for $-I_2$. Hence $A(-1)$ is a square. -Finally $A(t)$ is a square if and only if $t=-1$.<|endoftext|> -TITLE: How to prove existence of solutions without constructing one? -QUESTION [13 upvotes]: For some problems, it is difficult to get an explicit solution. But it is very good if we can prove there do exist some solutions though we can't find them. Is it possible to prove existence of solutions without constructing one? Can you show some examples? Many thanks. - -Edit: Thanks everyone. Of course, it is the best if we can find one or all solutions to a problem. But it is not always easy. Then a non-constructive proof may also be very meaningful. Here is an example. In 1991, it was proved that a multiple layer perceptron neural network is a universal function approximator. But the proof is not constructive. However, it is of great importance for at least control community as people can try to use neural network to approximate any nonlinear systems without worrying about the theoretical foundation. What I am interested is whether there are any general methods for non-constructive existence proof. - -REPLY [2 votes]: The Implicit Function Theorem is another example of such a technique. There are plenty of examples of the Implicit Function Theorem in action, although some of the concrete examples seem a bit artificial.<|endoftext|> -TITLE: Suppose $\phi$ is a weak solution of $\Delta \phi = f \in \mathcal{H}^1$. 
Then $\phi\in W^{2,1}$ -QUESTION [12 upvotes]: I'm trying to prove the statement in the title in as simple a way as possible. It is Theorem 3.2.9 in Hélein's book "Harmonic maps, conservation laws, and moving frames", although it is not proved there. The statement is as follows. - -Suppose $\phi:\mathbb{R}^m\to\mathbb{R}$ is a weak solution of $\Delta \phi = f \in \mathcal{H}^1$, where $\mathcal{H}^1$ is the standard Hardy space on $\mathbb{R}^m$. Then - $$ -\Big\lVert\frac{\partial^2\phi}{\partial x^\alpha \partial x^\beta}\Big\rVert_{L^1(\mathbb{R}^m)} \le C\lVert f \rVert_{\mathcal{H}^1(\mathbb{R}^m)}. -$$ - -My idea is to use convolution with the kernel of the Laplacian, and then differentiate, estimate in $L^1$ and somehow interpolate between the $\mathcal H^1$ and $BMO$ norms. Then since the kernel of the Laplacian is in $BMO$, I am finished. However there are two problems with my proof: I don't know how to prove that one can interpolate a convolution between $\mathcal H^1$ and $BMO$ (atomic decomposition?) and I don't know how to prove that the kernel of the Laplacian is in $BMO$. -Does anyone have either a better way to prove this theorem, or a way to fix up my proof? Thanks! - -REPLY [8 votes]: Let me give it a try: -Recall that the Riesz transform is bounded from $\mathcal H^1$ to $\mathcal H^1$ (and from $L^2$ to $L^2$). -We have the inequality -$$\|\partial_i \partial_j u \|_{L^2} \leq \| \Delta u \|_{L^2}.$$ -This is because $\partial_i \partial_j u = -R_i R_j \Delta u$ where $R_i$ is the $i$-th Riesz transform. Now because the Riesz transform is also $\mathcal H^1$ bounded we have -$$\|\partial_i \partial_j u \|_{\mathcal H^1} \leq \| \Delta u \|_{\mathcal H^1}.$$ -But the $L^1$ norm is dominated by the $\mathcal H^1$ norm so -$$\|\partial_i \partial_j u \|_{L^1} \leq \| \Delta u \|_{\mathcal H^1}.$$ -Further note that -$$t f'(s) = f(t + s) - f(s) - \int_0^t f''(s+r)(t - r) \, dr$$ -and a similar statement holds for partial derivatives. This implies that we can control the first derivative by the second and the function itself. This should give the result as asked in the title. -Alternatively we could use Mikhlin's multiplier theorem for $\mathcal H^1$ for the second part.<|endoftext|> -TITLE: Can this circulant determinant be zero? -QUESTION [7 upvotes]: The question is: -If $a,b,c$ are negative distinct real numbers, then the determinant $$ \begin{vmatrix} -a & b & c \\ -b & c & a\\ -c & a & b -\end{vmatrix} $$ is $$(a) \le 0 \quad (b) \gt 0 \quad (c) \lt 0 \quad (d) \ge 0 $$ -My strategy: I identified that the matrix is a circulant, hence the determinant can be expressed in the form $-(a^3 + b^3 + c^3 - 3abc)$, which factors as $-(a+b+c)\cdot\frac{1}{2}[(a-b)^2 + (b-c)^2 + (c-a)^2]$, whose sign is $(-)(-)(+) \gt 0$; but the answer says it is $\ge 0$, so can we have three $a,b,c$ such that the answer is $0$? - -REPLY [4 votes]: If you require $a, b, c$ distinct, no. The term $a+b+c$ can't be $0$ as they are all the same sign and the other term is a sum of squares which can only be $0$ if $a=b=c$. - -REPLY [4 votes]: Consider the equation: -$$(a^3 + b^3 + c^3)/3 = abc$$ -as expressing the equality of the arithmetic and geometric means of $a^3,b^3,c^3$. By a well known result (applied to the positive numbers $-a^3,-b^3,-c^3$) this is only possible if the three cubes are equal. So, no, we cannot get them distinct and have the determinant be zero.<|endoftext|> -TITLE: Classifying splittings of primes? -QUESTION [10 upvotes]: I was wondering what general strategies are available to figure out if a prime splits?
I know for quadratic extensions there aren't too many possibilities for how a prime can split, so we essentially only need to check that $X^2-d$ has a root modulo $p$ and that $p$ does not divide the discriminant. This can be done using quadratic reciprocity. -For, e.g., an extension $\mathbb{Q}(\alpha)/\mathbb{Q}$, where $\alpha^3-\alpha-1=0$, there are a few more options. In this particular example computing the discriminant is easy, and in general there are algorithms for it. The question is: for a prime $\mathfrak{P}\mid p$, how do we know that $f(\mathfrak{P}/p)=1$? Is there any "simple" algorithm that works for the polynomials $X^3+aX+b$ and $X^5+aX+b$? -EDIT: Per instructions below I'm editing the question. I was more interested in actually classifying and finding the density of the set of primes with a particular factorization. Given a particular prime, its factorization is much easier to find. In the quadratic case this is done by using quadratic reciprocity as I described above. What I don't know is how to work with cubics. I've been told that this can be done for cubics $X^3+aX+b$ and for simplicity I'm interested in the case $X^3-X-1$. Apparently another set of "easier" equations is $X^5+aX+b$. - -REPLY [7 votes]: For an extension $\mathbb{Q}(\alpha)/\mathbb{Q}$, where $\alpha$ has minimal polynomial $P$ which we can assume has all its coefficients in $\mathbb{Z}$, for every prime $p$ which does not divide the discriminant of $P$, there is a $\mathfrak{P}$ above $p$ with $f(\mathfrak{P}/p)=1$ iff $P \mod p$ has a root in $\mathbb{F}_p$ (which by Hensel's lemma can be lifted to a root of $P$ in $\mathbb{Z}_p$). -But note that $p$ can split even if there is no such root: a general version of Hensel's lemma tells us that the factorization of $P \mod p$ into irreducible (and pairwise coprime, since $p$ does not divide the discriminant) factors can be lifted to a factorization of $P$ (this gives you the factors of $P$ as an element of $\mathbb{Q}_p[X]$). -Each factor corresponds to a place $\mathfrak{P}$ above $p$, and $f(\mathfrak{P}/p)$ is equal to the degree of the factor. -However in degree $\leq 3$, a polynomial is irreducible iff it has no root. -Note that nothing is said about primes dividing the discriminant. They can be unramified or not. It is a bit harder (but there is an algorithm) to compute the decomposition in this case. -EDIT: of course, everything remains true if we take an arbitrary number field instead of $\mathbb{Q}$. - -REPLY [5 votes]: For a given prime $p$, the structure of the prime factorization of $p$ in the (ring of integers of the) field $\mathbb{Q}(\alpha)$ mirrors the factorization of the generating polynomial (say, $X^3-X-1$) over the finite field $\mathbb{Z}/p\mathbb{Z}$. This factorization can be algorithmically determined quite easily. -If, however, your task is to determine the entire set of primes which split, then this is essentially a higher reciprocity law and becomes quite difficult when the Galois group is not Abelian. I'm not sure what the general algorithm is, or even if there is one.<|endoftext|> -TITLE: Diophantine impossibility and irrationality (or similar) -QUESTION [8 upvotes]: The Diophantine equation $$a^2 = 2 b^2$$ having no nonzero integer solutions is the same as $\sqrt{2}$ being irrational. -Are there any Diophantine equations which are related to the irrationality of a number that is not algebraic? - -For a similar question with broader scope, a nontrivial solution of the Diophantine equation $$x^n + y^n = z^n$$ (with $n \gt 2$) would imply that a certain elliptic curve is "ir"-modular.
-Are there more examples of this phenomenon? - -REPLY [11 votes]: The Diophantine equation $2^x=3^y$ having no solutions in positive integers is the same as $\log3/\log2$ being irrational. It is known that $\log3/\log2$ is not algebraic.<|endoftext|> -TITLE: Solving integral $\int \frac{\sqrt{1 - x^2} - 1}{x^2 - 1}dx$ -QUESTION [5 upvotes]: I've been asking a lot of integral questions lately. :D This is the integral I'm trying to solve: $$\int \frac{\sqrt{1 - x^2} - 1}{x^2 - 1}dx$$ -By replacing $x = \sin(u)$ (thus $dx = \cos(u)du$ and $u = \arcsin(x)$) I arrived at: $$\int \frac{\cos(u)}{\cos^2(u)}du - u + C$$ -That fraction I think is $\sec(u)$, but we never learned about the secant function in school so I'd rather not use that. (Doesn't mean I don't want to know how to use it, I just want to be able to solve this some other way. :) ) - -REPLY [5 votes]: Why do you propose this substitution? -Your original integral can be split into two parts, $$ -\int \frac{\sqrt{1 - x^2} - 1}{x^2 - 1}dx = --\int \left(\frac{1}{\sqrt{1 - x^2}} + \frac{1}{x^2 - 1} \right)dx .$$ -The antiderivative of the first term is given by $\arcsin(x)$, the antiderivative of the second term is given by $-\text{atanh}(x)$. So the total antiderivative is given by -$$\int \frac{\sqrt{1 - x^2} - 1}{x^2 - 1}dx = -\arcsin(x) + \text{atanh}(x) +C.$$<|endoftext|> -TITLE: Finiteness of class group in idelic language -QUESTION [5 upvotes]: How should I understand the compactness of $A_{\mathbb{K}}^1/\mathbb{K}^{\times}$ in classical non-idelic language? I suppose the notations are standard, but just for completeness, - -$\mathbb{K}$ is a global field; -$A_{\mathbb{K}}$ is the adele ring; -$A_{\mathbb{K}}^1$ is the kernel of the content map, defined on the idele -ring by multiplying the normalized absolute values at each place. -$\mathbb{K}^{\times}$ embeds into $A_{\mathbb{K}}^1$ diagonally. - -Motivation: -I am trying to compare the idelic proof in Cassels-Frohlich to the classical proof involving the Minkowski bound. I have been able to translate part of the story, e.g. the lemma on p.66 of Cassels-Frohlich seems to be an analogue of Minkowski's convex body theorem. However, the crux of the idelic proof seems to be the compactness of the group mentioned above, and I have been unable to see what it corresponds to in the classical case. -Thanks! - -REPLY [4 votes]: The compactness of the norm one idele class group for a global field $K$ (Fujisaki's Lemma) is in fact equivalent to the finiteness of the ideal class group (of any one $S$-integer ring) and the Dirichlet unit theorem (of any one $S$-integer ring), including the precise computation of the free rank of the unit group. -This perspective is given a very nice treatment in the number field case in this brief note of Paul Garrett.<|endoftext|> -TITLE: Is a bounded operator necessarily linear? -QUESTION [5 upvotes]: The Wikipedia article for bounded operator is all about bounded linear operators. I was wondering - -Can a bounded operator be -non-linear? If yes, how is this defined? -Is a bounded operator generally assumed to be linear? - -Thanks! - -REPLY [5 votes]: Yes, a bounded operator can be nonlinear. There are a lot of useful notions of "bounded non-linear operator". One is that, for an operator between topological spaces, the image of compact sets is compact. The operator $Tx = 1/(1-x)$ is bounded on $[0,\infty)$ under this definition, but so are a lot of nasty operators. It depends on what you are trying to get out of your operator.
-No, one should always prove this.<|endoftext|> -TITLE: Localization at a prime ideal in $\mathbb{Z}/6\mathbb{Z}$ -QUESTION [11 upvotes]: How can we compute the localization of the ring $\mathbb{Z}/6\mathbb{Z}$ at the prime ideal $2\mathbb{Z}/6\mathbb{Z}$? (or how do we see that this localization is an integral domain)? - -REPLY [2 votes]: Here's a much less computational approach to show that the localisation is an integral domain (even a field): -For any (commutative, unital) ring $R$ and prime ideal $P$, recall that $R_P$ is a local ring with unique maximal ideal $P_P$. -In the given case, since there is an element $x \in R\setminus P$ (i.e. $3$) such that $x \in \operatorname{Ann}(P)$ it follows that $P_P$ is $(0)$. Thus $R_P$ is a local ring with unique maximal ideal $(0)$ and so is a field. -Note that this actually gives us the stronger statement that $R_P$ is a field iff $P_P$ is $(0)$.<|endoftext|> -TITLE: Finding the circles passing through two points and touching a circle -QUESTION [21 upvotes]: Given two points and a circle, construct a/the circle through the two points and -touching the given circle. -I came across this problem in History of Numerical Analysis by H. Goldstein. I -spent some time on this. I have a method of constructing it using radical axis. -I am wondering if there is a more elementary construction. - -REPLY [10 votes]: Here are some pictures illustrating the argument I alluded to at the end of my first answer. It has the virtue of working without distinguishing the cases in which the two points $\color{blue}{A}$ and $\color{blue}{B}$ are inside or outside the given circle $\color{blue}{c}$. - -Given: Two points $\color{blue}{A},\color{blue}{B}$ and a circle $\color{blue}{c}$ with center $\color{blue}{C}$. -Wanted: The two points $\color{red}{P,Q}$ on $\color{blue}{c}$ such that the circles through $\color{blue}{A},\color{blue}{B},\color{red}{P}$ and $\color{blue}{A},\color{blue}{B},\color{red}{Q}$ are tangent to $\color{blue}{c}$ (drawing a circle through three points is trivial). - -Before delving into the solution let's get rid of some degenerate cases: -Warm-up Exercise: Treat all the cases in which $\color{blue}{A} = \color{blue}{B}$ or one of $\color{blue}{A}$ or $\color{blue}{B}$ lies on $\color{blue}{c}$. -So, from this point on, we assume $\color{blue}{A} \neq \color{blue}{B}$ and that both lie either inside or outside the circle $\color{blue}{c}$. - -The idea is the same as in the other answer (i.e. user8268's solution). Reflecting the configuration at the circle $\color{green}{d}$ with center $\color{blue}{B}$ through $\color{blue}{A}$ fixes $\color{blue}{A}$ and sends $\color{blue}{B}$ to infinity. It transforms the circle $\color{blue}{c}$ into a circle $\color{blue}{c'}$ and transforms the circles we're looking for into tangents from $\color{blue}{A}$ to $\color{blue}{c'}$ because $\color{blue}{B}$ is sent to infinity and circle reflection preserves angles. Finding the tangents from the point $\color{blue}{A}$ to the circle $\color{blue}{c'}$ is easy and we need only reflect the points of tangency $P',Q'$ back to $\color{blue}{c}$ to find $\color{red}{P}$ and $\color{red}{Q}$. Drawing the circles through $\color{blue}{A}, \color{blue}{B}, \color{red}{P}$ and $\color{blue}{A}, \color{blue}{B}, \color{red}{Q}$ is again straightforward.
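-Before the detailed steps, the whole pipeline can be verified numerically (an added Python/NumPy sketch with an arbitrarily chosen test configuration, not part of the original answer): it inverts $c$ in $d$, takes the tangent points from $A$, inverts them back, and checks that each resulting circumcircle is tangent to $c$. -import numpy as np -def invert(X, O, r2):  # inversion in the circle with center O and squared radius r2 -    v = X - O -    return O + r2 * v / v.dot(v) -C, rc = np.array([0.0, 0.0]), 1.0                      # the given circle c -A, B = np.array([3.0, 0.5]), np.array([2.0, -2.0])     # the given points (both outside c here) -r2 = (A - B).dot(A - B)                                # circle d: center B, passing through A -u = (C - B) / np.linalg.norm(C - B) -P1, P2 = invert(C + rc * u, B, r2), invert(C - rc * u, B, r2) -Cp, rp = (P1 + P2) / 2, np.linalg.norm(P1 - P2) / 2    # the image circle c' -dA = np.linalg.norm(A - Cp)                            # A lies outside c', so dA > rp -foot = Cp + (rp**2 / dA**2) * (A - Cp) -h = rp * np.sqrt(dA**2 - rp**2) / dA -nvec = np.array([-(A - Cp)[1], (A - Cp)[0]]) / dA -for Tp in (foot + h * nvec, foot - h * nvec):          # the tangency points P', Q' on c' -    P = invert(Tp, B, r2)                              # the points P, Q on c -    M = 2 * np.array([B - A, P - A]) -    rhs = np.array([B.dot(B) - A.dot(A), P.dot(P) - A.dot(A)]) -    O = np.linalg.solve(M, rhs)                        # circumcenter of A, B, P -    R = np.linalg.norm(O - A) -    dist = np.linalg.norm(O - C) -    print(min(abs(dist - (R + rc)), abs(dist - abs(R - rc))))  # ~0: the circle is tangent to c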
-So here's the solution in somewhat more detail: - -Reflect the circle $\color{blue}{c}$ at the circle $\color{green}{d}$ through $\color{blue}{A}$: -To find $\color{blue}{c'}$, draw the line through $\color{blue}{B}$ and $\color{blue}{C}$ (if $\color{blue}{B} = \color{blue}{C}$ draw an arbitrary line through $\color{blue}{B}$) and reflect its points of intersection with $\color{blue}{c}$ at $\color{green}{d}$. Then the circle $\color{blue}{c'}$ is the circle with diameter those two reflected points. (See also point 2. in my other answer for more details.) Exercise: Prove that $\color{blue}{A}$ is always outside the circle $\color{blue}{c'}$. -Find the tangents from $\color{blue}{A}$ to $\color{blue}{c'}$: Call the points of tangency $P'$ and $Q'$. -Reflect the points $P'$ and $Q'$ at $\color{green}{d}$ to find $\color{red}{P}$ and $\color{red}{Q}$: -Draw the circles through $\color{blue}{A}, \color{blue}{B}, \color{red}{P}$ and $\color{blue}{A}, \color{blue}{B}, \color{red}{Q}$: done. - -Remark: Note that the red circles pass through the second points of intersection of $\color{green}{d}$ with the tangents from $\color{blue}{A}$ to $\color{blue}{c'}$. This means that the construction admits a slightly simpler variant (omitting steps 3 and 4) if one doesn't care about the points $\color{red}{P}$ and $\color{red}{Q}$. I chose to explain it the way I did, as the more efficient method in this remark doesn't exhibit as clearly why it works and in order to see that, one needs to consider the points $\color{red}{P}$ and $\color{red}{Q}$ anyway. - -As a final picture, the case that both $\color{blue}{A}$ and $\color{blue}{B}$ lie outside of $\color{blue}{c}$:<|endoftext|> -TITLE: Does there exist a general graph for any degree sequence of even natural numbers? -QUESTION [7 upvotes]: Suppose you have a given degree sequence $(d_1,d_2,\dots,d_n)$, where $d_i$ is even for every $i$. Does there exist a general graph with this degree sequence? -I say yes, the easiest way is to take the edgeless graph on $n$ vertices, $\{v_1,\dots,v_n\}$, and then at each $v_i$, put in $d_i/2$ loops, so $\deg(v_i)=d_i$ for all $i$. Is this cheating? It seems very easy, and it all hinges on the fact that the graph is allowed to have loops. -As a follow up question, are there sufficient and/or necessary conditions to tell if a graph with a given degree sequence exists, given that the sum of the degree sequence is even? (It may not necessarily be the case that every degree is even itself.) - -REPLY [8 votes]: If you are looking for a simple graph, the Erdős-Gallai theorem mentioned before solves the problem. Also there is another theorem, due to Havel and Hakimi, which is more of an algorithm (and also constructs such a graph): -$d_1 \geq d_2 \geq \dots \geq d_n$ is the degree sequence of a simple graph if and only if -$d_2-1, d_3-1, ..., d_{d_1+1}-1, d_{d_1+2},..., d_n$ is a degree sequence. -It should be pretty clear how to construct the graph. - -For general graphs: -If you allow both loops and multiple edges, any sequence of numbers with even sum is a degree sequence. To see this, split the vertices of odd degree into pairs, add one edge for each pair and then you are left exactly with the case you covered: all the vertices have even degree. -I think that if you don't allow loops but you allow multiple edges, you can do it if and only if the sum of the degrees is even AND the largest degree is less than or equal to the sum of the remaining degrees.
The only if part should be clear, while the if part can be proven the following way: add an edge between the two vertices of highest degree, show that the largest degree is still less than or equal to the sum of the remaining degrees, and induct on the sum of the degrees.<|endoftext|> -TITLE: Spectrum of the right shift operator on $\ell^2({\bf Z})$ -QUESTION [15 upvotes]: Here is the question: - -Considering the right shift operator $S$ on $\ell^2({\bf Z})$, what can one know about ran$(S-\lambda)$? - -Here is what I thought: - -If one wants to prove that the operator $S-\lambda$ is onto when $\lambda$ satisfies some conditions, does one have to construct the solution? -One needs to find the solution $(S-\lambda)x=y$ where $x,y\in \ell^2({\bf Z})$. Using the standard basis $e_n=(\delta_{nk})_{k=-\infty}^{\infty}$, one has to solve -$x_{k-1}-\lambda x_{k}=y_{k}, (k\in {\bf Z})$. -Intuitively, when $|\lambda|=1$, there may be no way for $S-\lambda:\ell^2({\bf Z})\to \ell^2({\bf Z})$ to be onto. However, when $|\lambda|\neq 1$, can one explicitly find $x=(x_{k})_{k=-\infty}^{\infty}$? - - -[ADDED] Thanks to a recent editing of my question, I have learned that the right and left shift operators acting on two-sided infinite sequences are also called bilateral shifts. - -REPLY [3 votes]: To address the final part of your question: you can write down the inverse of $S-\lambda$ for $|\lambda|\ne1$, so -in this sense you can "explicitly find $x$". (Although the spectral argument is a more efficient way to answer the range question). -If $\|T\|<1$ then $1-T$ is invertible, and -$$ (1-T)^{-1}=\sum_{k\ge0}T^k.$$ -Wikipedia calls this the Neumann series of $T$. -Since $S^{-1}$ is the backward shift, we have $\|S^{-1}\|=1$. So if $0<|\lambda|<1$ then $\|\lambda S^{-1}\|<1$ and -$$(S-\lambda)^{-1}=S^{-1}(1-\lambda S^{-1})^{-1}=S^{-1}\sum_{k\ge0}(\lambda S^{-1})^k.$$ -Since $\|S\|=1$, if $|\lambda|>1$ then $\|\lambda^{-1}S\|<1$ so -$$ (S-\lambda)^{-1}=-\lambda^{-1}(1-\lambda^{-1} S)^{-1}=-\lambda^{-1}\sum_{k\geq 0}(\lambda^{-1}S)^k.$$<|endoftext|> -TITLE: Applications of Gröbner bases -QUESTION [19 upvotes]: I would like to present an application of Gröbner bases. The audience is a class of first year graduate students who are taking first year algebra. -Does anyone have suggestions on a specific application that the audience would appreciate? - -REPLY [6 votes]: find intersection points of a couple of conics (pick the right coefficients to make it not so tedious to do all the manipulation) -describing the motion of a constrained single hinged robot arm or planetary epicycles (make a cardioid from two equations) -colorability of a graph (see A Crash Course... ) (when presented with the construction, very easy to see that the algorithm produces a solution) - -REPLY [5 votes]: I learnt of a cool application here in Math.SE where I had asked a question to parametrize $$x=2t-4t^3$$ $$y=t^2-3t^4$$ -There was no straightforward way to eliminate $t$, however a user pointed out - -using a Gröbner basis routine such as that in Mathematica easily gives the implicit Cartesian equation -$$27x^4-4x^2(36y+1)+16y(4y+1)^2=0$$ -In Mathematica: GroebnerBasis[{x - 2t + 4t^3, y - t^2 + 3t^4}, {x, y}, {t}] - -I doubt this would be fascinating to graduates though.<|endoftext|> -TITLE: Questions about Bochner integral -QUESTION [7 upvotes]: I was wondering - -If there is a distinction between -existence of the Bochner integral and -Bochner integrability, or the two always mean the same?
-If, in the Bochner integral, the -integrand is assumed to be -measurable w.r.t. the Borel -$\sigma$-algebra of the codomain -Banach space? -If differentiation under the integral -sign is still true for the Bochner -integral? What is the condition -for that to be true? What kinds of -derivatives are involved above, -the Fréchet derivative, the Gâteaux -derivative, or something else? -For example, $\frac{d}{dt} \int_a^t - f(s) g(x,s) ds$, where $f: \mathbb{R} \rightarrow \mathbb{R}$, $g: B \times - \mathbb{R} \rightarrow \mathbb{R}$, $B$ is a Banach space, -and the Bochner integral exists. - -Thanks and regards! Also are there some nice references? - -REPLY [11 votes]: About 3. we have something even better: Hille's theorem. -http://fa.its.tudelft.nl/~neerven/publications/papers/ISEM.pdf Theorem 1.19 - -Theorem 1.19 (Hille). Let $f : A \to E$ be $\mu$-Bochner integrable and let $T$ - be a closed linear operator with domain $D(T)$ in $E$ taking values in a Banach - space $F$. Assume that $f$ takes its values in $D(T)$ $\mu$-almost everywhere and the $\mu$-almost everywhere defined function $T f : A \to F$ is $\mu$-Bochner integrable. - Then - $$T \int_A f \, d\mu = \int_A T f \, d\mu.$$ - -A lovely theorem. Your other questions can be answered by the first chapter of the referred document. - -REPLY [3 votes]: Anton Deitmar and coauthor(s) have recently been writing some things about this: e.g., http://arxiv.org/abs/1102.1246 Presumably they give references.<|endoftext|> -TITLE: Free module implies projective module -QUESTION [12 upvotes]: Definition: Let $M$ be an $A$-module. Then $M$ is projective if there exists an $A$-module $N$ such that $M \oplus N$ is free. -Prop: If $M$ is free then $M$ is projective. -Can we simply take the trivial module $\{0\}$ then $M \oplus \{0\} \cong M$ is free, so $M$ is projective? - -REPLY [8 votes]: The reason this seems simple is that there are many equivalent definitions of "projective module", and what you give as the definition is usually a property that is shown to be equivalent. -For example, in most treatments I know, the definition of projective module is given as either: - -An $A$-module $P$ is projective if and only if for all $A$-modules $M$ and $N$, every homomorphism $f\colon P\to N$, and every onto module homomorphism $\varphi\colon M\to N$, there exists $g\colon P\to M$ such that $f=\varphi\circ g$. - -or equivalently - -An $A$-module $P$ is projective if and only if for every $A$-module $M$ and every onto module homomorphism $f\colon M\to P$, there exists a module homomorphism $g\colon P\to M$ such that $f\circ g=\mathrm{id}_P$; in particular, every module that has $P$ as a quotient can be written as a direct sum of $P$ and another module, and the direct sum is compatible with the original quotient map. - -Under either of these definitions, the fact that free modules are projective is easy as well; one then proves that these two definitions are equivalent to each other, and equivalent to the one you give.<|endoftext|> -TITLE: What are the integrals defined for $\mathbb{R}^n$? -QUESTION [10 upvotes]: In multivariate calculus, - -I was wondering what types of integrals are studied? Here is my naive view: - -Multiple integral: If I understand correctly, it is just a plain generalization of the Riemann integral on $\mathbb{R}$ to $\mathbb{R}^n$. -Integral of differential forms: It is something I am not able to truly understand. Is it a special kind of Lebesgue integral? Does its definition rely on measure?
- -When trying to compare them together, I have some further questions: - -Are multiple integrals and integrals of differential forms two different types of integrals? Do they belong to some common type of integral, similarly to the way the Riemann and Lebesgue-Stieltjes integrals both fall under the Lebesgue integral? How are they related? -Is it correct that multiple integrals have no orientation involved, but an integral of differential forms does, in the sense that changing the order of the dummy variables in $dx_1 dx_2$ will or will not change the integral? - -What type of integral is used in vector calculus, for topics such as gradient, divergence, curl, Laplacian, the gradient theorem, Green's theorem, Stokes' theorem, divergence theorem? Are the line integral, surface integral and volume integral defined as belonging to Lebesgue integrals or Riemann integrals, integrals of differential forms, or something else? Do their definitions rely on measure? - -Thanks and regards! - -REPLY [3 votes]: I am reading this question as saying: There are too many integrals and that confuses me. Here is my attempt to clear the waters a little. -First, let's consider our old friend the Riemann integral. We used this to find the areas between curves (or volume bounded by surfaces if we go up a dimension). In fact, the Riemann integral was presented as the solution to this problem, and it works very well except that some things which are "obviously true" fail to even make sense for this integral. The example I have in mind is that $\int_{\mathbb{R}}\chi_{\mathbb{Q}} = 0$, where $\chi_{\mathbb{Q}}$ is the indicator function of the rationals, is something that we want to be true, but the function fails to be Riemann integrable. The Lebesgue integral is the solution to this problem. -Even though you have to learn a whole new theory of Lebesgue integration on measure spaces you can think of it as a patch which makes the dominated convergence theorem true since (as long as you pick the right definition of measurable function) the two integrals agree whenever the Riemann integral exists. This is done by (more or less) slicing horizontally rather than vertically, and having a more robust version of "area". -Your questions about differential forms are possibly answered by the following statement: -Calculus is awesome! -What I mean is that we have learned a tremendous amount by doing calculus, and wouldn't it be great if we could use this tool on "shapes" other than (open subsets of) $\mathbb{R}^n$, say for example curves and surfaces. As it turns out differential forms are the solution to this problem. Rather than go into a whole course about manifolds let me just say that whatever a manifold is, $\mathbb{R}^n$ is one and the forms notion of integration coincides with the regular way. So we can think of differential forms less like a new way to integrate and more like a way to extend the value of integration while at the same time giving a new language in which to reinterpret what we know in $\mathbb{R}^n$. -To sum up: when you are integrating you are always using differential forms and the Lebesgue integral, but of course you don't need to know that most of the time.
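-As one concrete instance of this dictionary (an added illustration, not part of the original answer): parametrizing $S^1$ by $\theta\mapsto(\cos\theta,\sin\theta)$ pulls the $1$-form $x\,dy-y\,dx$ back to $(\cos^2\theta+\sin^2\theta)\,d\theta=d\theta$, so the form integral becomes an ordinary one-variable integral, -$$\int_{S^1}x\,dy-y\,dx=\int_0^{2\pi}d\theta=2\pi.$$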
-I hope this answers at least a few of your questions.<|endoftext|> -TITLE: Serre Spectral Sequence and Fundamental Group Action on Homology -QUESTION [14 upvotes]: I am looking at my algebraic topology notes right now, and I am looking at our definition for the Serre Spectral Sequence and it requires that the action of the fundamental group of the base space of a fibration $F\to E\to B$ be trivial on all homology groups of the fiber. Then we can construct the SSS etc. etc. What exactly does it mean for the action of the fundamental group to be trivial (rather, what is this action at all, how is it defined)? Intuitively, does this mean that if we take the fiber at a point, take its homology, and then follow that homology along a loop, we come back to the same homology? -Any guidance would be much appreciated. I am currently trying to use the SSS to find the cohomology groups of path loop spaces of even dimensional spheres, and we have an example using the odd dimensional spheres, so I imagine I could just assume all the requirements are met to use the SSS, but I'd like to know what exactly is going on here. -Thanks! - -REPLY [13 votes]: Let $p:\ E\rightarrow B$ be the fibration, and let $B$ be path-connected; then all fibers $p^{-1}(b)$ are homotopy equivalent. Furthermore, a path $f:\ [0,1]\rightarrow B$ defines a homotopy class $f_*$ of homotopy equivalences between $p^{-1}(f(0))$ and $p^{-1}(f(1))$. Even better, this only depends on the homotopy class of $f$, relative to its endpoints. So specializing to the fundamental group, based at $b\in B$, we have a group homomorphism from $\pi_1(B,b)$ to $S$, where $S$ is the group of homotopy classes of homotopy equivalences of $p^{-1}(b)$ with itself. If we let $F=p^{-1}(b)$, then each of these homotopy equivalences induces an automorphism of $H_n(F)$ for every $n$; that is, we get a group homomorphism from $\pi_1(B,b)$ to $Aut(H_n(F))$ for every $n$. -This is all covered well in chapters 6 and 9-10 of Kirk and Davis's "Lecture Notes in Algebraic Topology". - -REPLY [3 votes]: No, it means that the automorphism induced by following a loop is the identity. You will always get the same group back but a generator may get sent to its negative. Take a Möbius strip $\mathbb R \to M \to S^1 $ and mod out each fiber of the strip by the action of $\mathbb Z$. This is a twisted torus (might be the Klein bottle). Pick any point $p \in S^1$, and a generator $\omega$ of $H_1(\pi^{-1}(p))$. The loop around the base induces an automorphism of $H_1(\pi^{-1}(p))$ that sends $\omega$ to $-\omega$. -Does anyone know what happens to the Serre Spectral Sequence when this is not trivial?<|endoftext|> -TITLE: Simple probability question, balls and bins -QUESTION [5 upvotes]: This is a simple question I came across in reviewing. I am wondering if I got the correct answer. -The question is simple. You have $n$ balls and $m$ bins. Each ball has an equal probability of landing in any bin. I want to know what the probability is that exactly one bin is empty. -My answer seems simple enough, but I don't think it's sufficient. It is $(\frac{m-1}{m})^n$ since for each ball, it can go in any of the other bins. I think, however, that this is just the probability that some arbitrary bin $A$ is empty, not exactly one bin. What else should I consider? - -REPLY [34 votes]: Let's count configurations, and then divide by $m^n$. -There are $m$ choices for the empty bin. Then the other bins are occupied.
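-(As an added aside, not in the original answer: every count below can be double-checked by brute force in Python for small cases; the output matches the $4$-bin, $6$-ball example worked out at the end.) -from itertools import product -def empty_bin_counts(n, m): -    counts = {} -    for assign in product(range(m), repeat=n):  # ball i goes to bin assign[i] -        e = m - len(set(assign))                # number of empty bins -        counts[e] = counts.get(e, 0) + 1 -    return counts -print(sorted(empty_bin_counts(6, 4).items()))  # [(0, 1560), (1, 2160), (2, 372), (3, 4)]; 2160/4096 = 135/256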
We can count the ways to place $n$ balls in $m-1$ bins so that no bin is empty by inclusion-exclusion: it is -$$\sum_{k=0}^{m-1} (-1)^k {m-1 \choose k} (m-1-k)^n,$$ -where $k$ indexes the number of bins forced to be empty. -Another way to get this is to label the blocks of a set partition of an $n$-element set into $m-1$ blocks. The number of set partitions with a given number of blocks is a Stirling number of the second kind, and we want $(m-1)! S(n,m-1)$. -Multiply this by $m$ and then divide by $m^n$ to get the probability exactly $1$ bin is empty. -We can use the same techniques to compute the probability exactly $e$ bins are empty for other values of $e$. For example, suppose there are $4$ bins and $6$ balls. Then there are $1560$ ways for there to be $0$ empty bins, $2160$ ways for there to be exactly $1$ empty bin, $372$ ways for there to be exactly $2$ empty bins, and $4$ ways for there to be exactly $3$ empty bins. The total is $4096 = 4^6$. Dividing by this gives a probability of $\frac{135}{256} = 0.52734375$ that exactly $1$ bin is empty.<|endoftext|> -TITLE: Torsion or Non-Torsion subgroup of $H_i$ do not define a homology theory. -QUESTION [6 upvotes]: I need to show that if we let $T_n(X,A)$ denote the torsion subgroup of $H_n(X,A)$, then the functor $(X,A)\mapsto T_n(X,A)$ with the obvious induced homomorphisms and boundary maps does not define a homology theory. -Using the axioms that Hatcher gives us, my guess is to find maps -$\tilde{H}_n(X/A)\rightarrow \tilde{H}_{n-1}(A)$ -$\downarrow\hspace{3 cm}\downarrow$ -$\tilde{H}_n(Y/B)\rightarrow \tilde{H}_{n-1}(B)$. -such that either $\tilde{H}_{n-1}(A)$ has zero torsion or $\tilde{H}_n(Y/B)$ has zero torsion, but not both. Then, the diagram for $T_i$ will not be commutative (I also have to make sure there aren't any trivial maps). The simplest map I can think of would be -$\mathbb{Z}_2\rightarrow \mathbb{Z}$ -$\downarrow\hspace{1 cm}\downarrow$ -$\mathbb{Z}_2\rightarrow \mathbb{Z}_2$. -but I don't know any spaces that have $H_i=\mathbb{Z}_2$ for $i$ even (at least Hatcher didn't go over any such spaces before this question). I just want to know if this is the right path to take, or if I'm wrong and commutativity does, in fact, hold, and I should consider something else. - -REPLY [3 votes]: For $T_n$ to be a homology theory, it must respect the exactness axiom; that is, to any pair of spaces $(X,A)$, there should be an induced long exact sequence -$$ \cdots\rightarrow T_n(A)\rightarrow T_n(X)\rightarrow T_n(X,A)\rightarrow T_{n-1}(A)\rightarrow\cdots.$$ -But if we take the short exact sequence $0\rightarrow\mathbb{Z}\rightarrow\mathbb{Z}\rightarrow\mathbb{Z}_2\rightarrow 0$, then passing to torsion subgroups we get the non-exact sequence $0\rightarrow 0\rightarrow 0\rightarrow\mathbb{Z}_2\rightarrow 0$. So to show $T_n$ is not a homology theory, it is enough to exhibit the above exact sequence in the LES for ordinary homology of a pair. So take $X$ to be the Möbius strip, and $A$ to be its boundary; then the LES for ordinary (reduced) homology simplifies to the exact sequence -$$ H_1(A)\rightarrow H_1(X)\rightarrow H_1(X,A)\rightarrow 0.$$ -Now just observe that $H_1(A)=\mathbb{Z}$, $H_1(X)=\mathbb{Z}$, and $H_1(X,A)=\mathbb{Z}_2$. -You can show $H_1(X,A)=\mathbb{Z}_2$ in two different ways. First, note that if I take the "central" circle of the Möbius strip $X$, then $X$ deformation retracts to this circle. The boundary $A$ corresponds to wrapping twice about this circle.
So the inclusion map $i:\ A\rightarrow X$ induces a map on homology $i_*:\ H_1(A)\rightarrow H_1(X)$ which is multiplication by $2$; so in the above SES, $H_1(X,A)$ is the cokernel of this map, namely $\mathbb{Z}_2$. -Alternatively, $H_1(X,A)$ is the first homology of the space obtained by taking the cone over $A$ (call it $CA$), and attaching its boundary circle to the copy of $A$ in $X$. Now $A$ is just a circle, so $CA$ is just a regular cone, topologically a disk $D^2$. Attaching this disk to $A$ in $X$ means attaching the boundary of $D^2$ to the boundary of the Möbius strip $X$; this is more commonly known as $\mathbb{RP}^2$. So $H_1(X,A)\cong H_1(\mathbb{RP}^2)=\mathbb{Z}_2$.<|endoftext|> -TITLE: Energy estimate of the differential equation $\dot{x}=Ax$ -QUESTION [6 upvotes]: Consider the differential equation -$$\dot{x}=Ax,\qquad x(t):{\bf R}\to{\mathcal H}$$ where $\mathcal{H}$ is a Hilbert space and $A$ is a bounded linear operator on $\mathcal{H}$. With the initial condition $x(0)=x_0$, one can have $x(t)=e^{At}x_0$ (Is this legal in the infinite dimension case?). With the spectral method (under the assumption that $x_0$ is an eigenvector of $A$), one has the estimate -$$\|x(t)\|\leq e^{\omega t}\|x_0\|.$$ -Here is my question: - -With some additional assumption, can one estimate $\|x(t)\|$ without using the spectral method, say, simply taking the inner product? - -I have thought that the following sub-questions may be helpful: - -What is $\frac{d}{dt}\langle x(t),x(t)\rangle$? -In the finite dimension case, this can be done with the product rule. What about the infinite dimension case? - -REPLY [8 votes]: Yes, if $A$ is a bounded linear operator $e^{At}$ is quite OK. It can be defined by the Taylor series, for example, and [EDIT: for $t \ge 0$ ] we have -$$\|e^{At}\| \le e^{\|A\| t}.$$ -If $x_0$ is an eigenvector for $A$ with eigenvalue $\lambda$, then $x(t) = e^{\lambda t} x_0$. -As for your sub-question, it is still true that -$$\frac{d}{dt} \langle x(t),\, x(t) \rangle = \langle x'(t), x(t) \rangle + \langle x(t), x'(t) \rangle = 2 \Re \langle x(t), A x(t) \rangle.$$ The proof using difference quotients is basically the same as in single-variable calculus.<|endoftext|> -TITLE: Questions about the BBM equation: $-u_{txx}+u_{t}=u_{x}$, $u(x,0)=u_0(x)$, $x,t\in{\bf R}$ -QUESTION [7 upvotes]: Consider the BBM equation: -$$-u_{txx}+u_{t}=u_{x},\quad u(x,0)=u_0(x),\quad x,t\in{\bf R}$$ -One may rewrite it as -$$u_t=((I-A)^{-1}\partial_x)u$$ where $Au=u_{xx}$ if $(I-A)^{-1}$ exists. -Here are my questions: - - -Does $(I-A)^{-1}$ always exist? -Is there an integral operator $K:L^2({\bf R})\to L^2({\bf R})$ such - that $K=((I-A)^{-1}\partial_x)$? - -REPLY [9 votes]: On $L^2(\bf R)$, yes, $(I - \frac{d^2}{dx^2})^{-1}$ exists. It corresponds via the Fourier transform to multiplication by $1/(1 + p^2)$, and your $K$ corresponds to multiplication by $i p/(1 + p^2)$, or convolution with the inverse Fourier transform of that, namely $-{\rm sgn}(x)\, e^{-|x|}/2$.<|endoftext|> -TITLE: For which prime $p$ are the additive groups $\mathbb{F}_p$ and $\mathbb{F}_p^2$ $\mathbb{Z}[i]$-modules? -QUESTION [5 upvotes]: Homework for my algebra class. Chapter 14, Exercise 7.8 in Artin's Algebra, Second Edition: - -Let $F = \mathbb{F}_p$. For which - prime integers $p$ does the additive - group $F^1$ have a structure of a - $\mathbb{Z}[i]$-module? How about - $F^2$? - -I am submitting my possible solution as an answer; but I'm not sure if it's correct. Can I get a second opinion? Thank you. -P.S.
If one of the regulars could tell me if questions like these are allowed (checking solutions), or how best to ask these kinds of questions, I'd be much obliged. - -REPLY [2 votes]: You are close, as discussed in the comments. -Claim. An abelian group $A$ has a structure as a $\mathbb{Z}[i]$-module if and only if there exists $\phi\in\mathrm{Aut}(A)$ such that $\phi^2(a) = -a$ for all $a\in A$. -Proof. Suppose that $A$ has a $\mathbb{Z}[i]$-module structure. Define $\phi\colon A\to A$ by $\phi(a) = i\cdot a$. Then $\phi(a+b) = i\cdot(a+b) = (i\cdot a)+(i\cdot b) = \phi(a)+\phi(b)$, so $\phi$ is an endomorphism. Since $\phi^4(a) = a$ for all $a\in A$, it follows that $\phi$ is invertible as well, hence an automorphism. Also, $\phi^2(a) = i\cdot(i\cdot a) = (ii)\cdot a = (-1)\cdot a = -a$. Thus, if $A$ has a structure as a $\mathbb{Z}[i]$-module, then there exists $\phi\in\mathrm{Aut}(A)$ such that $\phi^2(a)=-a$. -Conversely, suppose there exists $\phi\in\mathrm{Aut}(A)$ such that $\phi^2(a)=-a$ for all $a\in A$. Define the action by $(r+si)\cdot a = (r+s\phi)(a) = ra+s\phi(a)$, which is easily verified to be a module structure. QED -From this it easily follows that if $-1$ is a square in $\mathbb{F}_p$, then we can make $\mathbb{F}_p$ into a $\mathbb{Z}[i]$-module by considering the automorphism of $\mathbb{F}_p$ (as an abelian group) given by multiplication by $\alpha$, where $\alpha^2 = -1$. It also shows, via your argument, that $\mathbb{F}_p^2$ is a $\mathbb{Z}[i]$-module, by considering $\phi(a,b) = (-b,a)$. -You still have to show the necessity for $\mathbb{F}_p$, though. -Hint: See what $i\cdot 1$ is.<|endoftext|> -TITLE: Why is $8 \times 8$ matrix chosen for Discrete Cosine Transform? -QUESTION [8 upvotes]: In JPEG and MPEG, why is $8 \times 8$ matrix chosen for Discrete Cosine Transform? -Why not any other, say $64 \times 64$? - -REPLY [9 votes]: From the MPEG FAQ: - -Q22. Why was the 8x8 DCT size chosen? -A Experiments showed little compaction gains could be achieved with larger sizes, especially when considering the increased implementation complexity. A fast DCT algorithm will require roughly double the arithmetic operations per sample when the linear transform point size is doubled. Naturally, the best compaction efficiency has been demonstrated using locally adaptive block sizes (e.g. 16x16, 16x8, 8x8, 8x4, and 4x4) (See Gary Sullivan and Rich Baker 'Efficient Quadtree Coding of Images and Video,' ICASSP 91, pp 2661-2664.). Inevitably, this introduces additional side information overhead and forces the decoder to implement programmable or hardwired recursive DCT algorithms. If the DCT size becomes too large, then more edges (local discontinuities) and the like become absorbed into the transform block, resulting in wider propagation of Gibbs (ringing) and other phenomena. Finally, with larger transform sizes, the DC term is even more critically sensitive to quantization noise.<|endoftext|> -TITLE: Do the positive rationals under multiplication contain a subgroup of finite index? -QUESTION [8 upvotes]: Do the positive rationals under multiplication contain a subgroup of finite index? - -Similar questions usually rely on the fact that a subgroup is divisible; however, this is not the case in this question. I have a feeling that the answer should be "no", but have been unable to show this.
- -REPLY [3 votes]: Remember that if $p$ is a prime and $z$ is a nonzero integer, we define the $p$-order of $z$ to be $n$, $\mathrm{ord}_p(z) = n$, if and only if $p^n|z$ but $p^{n+1}$ does not divide $z$. For a rational $\frac{r}{s}$, we let $\mathrm{ord}_p(\frac{r}{s}) = \mathrm{ord}_p(r) - \mathrm{ord}_p(s)$. -Fix a prime $p$, and let $n$ be any positive integer. Define -$$H(p,n) = \{q\in \mathbb{Q}_{\gt0} \mid \mathrm{ord}_p(q)\in n\mathbb{Z}\}.$$ -$H$ is a multiplicative subgroup of $\mathbb{Q}_{\gt0}$, since the $p$-order satisfies -$$\mathrm{ord}_p(rs) = \mathrm{ord}_p(r) + \mathrm{ord}_p(s)\quad\text{and}\quad \mathrm{ord}_p\left(\frac{1}{r}\right) = -\mathrm{ord}_p(r).$$ -Finally, note that $H$ is of finite index in $\mathbb{Q}_{\gt0}$: $\mathrm{ord}_p$ is an abelian group homomorphism from $\mathbb{Q}_{\gt0}$ onto $\mathbb{Z}$; $H$ is the pre-image of $n\mathbb{Z}$, hence the index of $H$ is equal to $n$. -This is just a way to make explicit the comment of Hagen Knaf: a basis of the positive rationals under multiplication can be taken to be the primes.<|endoftext|> -TITLE: The series $\sum_{n=2}^{\infty }\frac{1}{\log (n!)}$ -QUESTION [8 upvotes]: I'm trying to find out whether this series -$\sum_{n=2}^{\infty } a_{n}$ converges or not when -$$a_{n}=\frac{1}{\log (n!)}$$ -I tried a couple of methods, among them: d'Alembert $\frac{a_{n+1}}{a_{n}}$, Cauchy condensation test $\sum_{n=2}^{\infty } 2^{n}a_{2^n}$, and neither of them worked for me. -Edit: I can't use Stirling's formula or the integral test. -Thank you - -REPLY [6 votes]: You both gave me an idea: -We know that for $n\geq 2$, -$n!< n^{n}$, so $\frac{1}{ n\log n}<\frac{1}{\log n!}$, -and now by Cauchy condensation, $\sum\frac{1}{ n\log 2}$ obviously diverges, so $\sum\frac{1}{n\log n}$ diverges and we're done.<|endoftext|> -TITLE: Product of spaces is a manifold with boundary. What can be said about the spaces themselves? -QUESTION [12 upvotes]: Suppose I have two topological spaces $X,Y$ and I know that $X\times Y$ is homeomorphic to a manifold with boundary. Can I conclude that $X$ and $Y$ are manifolds (maybe with boundary)? -If not, suppose that $Y=[0,1]$. Is it then true? My intuition states that this is true, but I cannot see directly an elementary proof. - -REPLY [7 votes]: Bing's dogbone space is a non-manifold $X$ such that $X\times\mathbb R\cong\mathbb R^4$. It is constructed as a quotient of $\mathbb R^3$ which is the identity outside of a ball. Hence we can do the construction inside a ball, to get a modified dogbone space $W$ with $\partial W=S^2$. Then I think that $W\times\mathbb R\cong D^3\times \mathbb R,$ which has boundary $S^2\times\mathbb R$. The basic idea as I understand it is that the nested tangle of genus 2 handlebodies unknots itself in $4$ dimensions, and this doesn't appear to use anything outside of a ball. However I haven't ever gone through the proof in detail, so I might be missing something. -If you take $Y=[0,1]$ I would guess that $X$ is probably a manifold with boundary. I recall hearing in a lecture many years ago that if $X\times S^1$ is a manifold, then so is $X$, which strikes me as a very similar problem.<|endoftext|> -TITLE: On integer solutions of the equation $x^2+y^2+z^2=16(xy+yz+zx-1)$ -QUESTION [6 upvotes]: Here is the question: -Question. Show that the equation -$$x^2+y^2+z^2=16(xy+yz+zx-1)$$ -does not have integer solutions. -I know a nice and easy (actually, an obvious) way to solve this problem. But I'm just wondering: can we solve this using the infinite descent method? I remember I saw a solution using that method, but it was wrong.
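-One quick observation, independent of any particular proof: if the congruence $x^2+y^2+z^2 \equiv 16(xy+yz+zx-1) \pmod m$ has no solutions for even a single modulus $m$, then there are no integer solutions at all. A brute-force scan (a short Python sketch added just for illustration; $m=3$ turns out to suffice) confirms this:
-
-    m = 3
-    sols = [(x, y, z)
-            for x in range(m) for y in range(m) for z in range(m)
-            if (x*x + y*y + z*z - 16*(x*y + y*z + z*x - 1)) % m == 0]
-    print(sols)   # [] -- no solutions mod 3, hence none over the integers
-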
- -REPLY [2 votes]: Here is another approach and solution. -Using the identity $X^2 + Y^2 + Z^2 = (X + Y + Z)^2 - 2(XY + XZ + YZ)$, substituting and simplifying, we get -$$(X + Y + Z)^2 + 16 = 18(XY + XZ + YZ)$$ -But we know from Fermat and others that the sum of $2$ squares is not divisible by primes of the form $4N+3$ unless both squares are themselves divisible by such a prime. Since $16$ is not divisible by any primes of the form $4N+3$, we see that the left side of the equation (a sum of two squares, $(X+Y+Z)^2+4^2$) cannot be divisible by $3$, while the right side is divisible by $3$. The contradiction concludes the proof.<|endoftext|> -TITLE: Solving a scrambled $3 \times 3 \times 3$ Rubik's Cube with at most 20 moves! -QUESTION [11 upvotes]: I read somewhere that any scrambled form of $3 \times 3 \times 3$ Rubik's cube can be solved using at most $20$ moves, and I just said "wow"! I am wondering, can we prove this mathematically? Or is this just solvable by computers? I do not even know how to think about such a problem, so please tell me if you have any information about it. -[Also, I read an article about solving a $n \times n \times n$ cube! It was interesting...just as a suggestion: think to find a method for solving a given scrambled $n \times n \times n$ Rubik's cube.] -P.S. you can see here for some information about Rubik's cube and its moves. -Thanks! - -REPLY [17 votes]: Yes, the result is very mathematical, though computers were used for many of the calculations. Tomas Rokicki, Herbert Kociemba, Morley Davidson, and John Dethridge proved this result in July 2010 and put up a very nice webpage at cube20.org, with some history of the problem and their basic methods. -The team consisted of a programmer, a math teacher, a mathematician, and an engineer. There are two parts to the problem: find a method that lets you solve any cube in M moves or less, and find a really hard starting position, that takes at least N moves to solve. These are upper and lower bounds for the problem-- the goal is to keep finding better methods for solving cubes, or to keep finding tougher starting positions, until you can show M=N. -In 1981, it was known that some positions took at least 18 moves, and Morwen Thistlethwaite had a method to solve any cube in 52 moves or less. The website lists the progress over the last few decades, narrowing in on the number 20. Part of the progress comes from having more powerful computers to work with, but a lot of it is having a better idea (math!) to simplify the problem. -Here's a cool table (copied straight from cube20.org) showing how many starting positions require 20 moves, 19 moves, 18 moves, etc. You can see there are many, many starting positions; and it's actually really tough to find one that takes 20 moves to solve. - Distance Count of Positions - - 0 1 - 1 18 - 2 243 - 3 3,240 - 4 43,239 - 5 574,908 - 6 7,618,438 - 7 100,803,036 - 8 1,332,343,288 - 9 17,596,479,795 - 10 232,248,063,316 - 11 3,063,288,809,012 - 12 40,374,425,656,248 - 13 531,653,418,284,628 - 14 6,989,320,578,825,358 - 15 91,365,146,187,124,313 - 16 about 1,100,000,000,000,000,000 - 17 about 12,000,000,000,000,000,000 - 18 about 29,000,000,000,000,000,000 - 19 about 1,500,000,000,000,000,000 - 20 about 300,000,000 - -As for bigger cubes, there are many methods for solving bigger cubes already. It's actually not that much tougher than solving the 3x3x3 cube; it just takes longer.
But I don't think many people will work on finding the optimal number of moves for bigger cubes-- it will be much harder (probably not possible with current computers) and is not as interesting (because everyone cares most about the 3x3x3 cube).<|endoftext|> -TITLE: Does the series $\sum \limits_{n=2}^{\infty }\frac{n^{\log n}}{(\log n)^{n}}$ converge? -QUESTION [8 upvotes]: I'm trying to find out now whether the series -$\sum_{n=2}^{\infty } a_{n}$ converges or not when -$$a_n = \frac{n^{\log n}}{(\log n)^{n}}$$ -Again, I tried d'Alembert $\frac{a_{n+1}}{a_{n}}$, Cauchy condensation test $\sum \limits_{n=2}^{\infty } 2^{n}a_{2^n}$, and neither of them worked for me. -I can't use Stirling, nor the integral test. -Edit: I'm searching for a solution which uses theorems about sequences and doesn't involve functions. -Thank you - -REPLY [2 votes]: I used Cauchy condensation and then the root test, and the series converges: -Condensation test, using $a_{2^n} = \frac{(2^n)^{\log 2^n}}{(\log 2^n)^{2^n}} = \frac{2^{n^2\log 2}}{(n\log 2)^{2^{n}}}$: -$$\sum \limits_{n=2}^{\infty }2^n\frac{2^{n^2\log 2}}{(n\log 2)^{2^{n}}}$$ -Root test: -$$\sqrt[n]{2^n\frac{2^{n^2\log 2}}{(n\log 2)^{2^{n}}}} = \frac{2\cdot 2^{n\log 2}}{(n\log 2)^{2^{n}/n}} \underbrace{\longrightarrow}_{n \to \infty}0,$$ -since the denominator $(n\log 2)^{2^{n}/n}$ grows faster than any exponential in $n$. Then the series converges.<|endoftext|> -TITLE: Infinitely many integer solutions for the equations $x^3+y^3+z^3=1$ and $x^3+y^3+z^3=2$ -QUESTION [5 upvotes]: How do you show that the equation $x^3+y^3+z^3=1$ has infinitely many solutions in integers? How about $x^3+y^3+z^3=2$? - -REPLY [2 votes]: For -\begin{align} -x^3+y^3+z^3&=1\tag{1} -\end{align} -there are at least these one-parameter families of solutions -\begin{gather} -(9t^4)^3 + (-9t^4 \mp 3t)^3 + (\pm 9t^3 + 1)^3 = 1\tag{2}\\ -(3888t^{10} - 135t^4)^3 + (-3888t^{10} - 1296t^7 - 81t^4 + 3t)^3 + (3888t^9 + 648t^6 - 9t^3 + 1)^3 = 1\\ -\implies (27tu(144u^2-5))^3 + (-3t(27u(16u(3u+1)+1)-1))^3 + (9u(72u(6u+1)-1) + 1)^3 = 1, u=t^3\tag{3}\\ -(-1679616a^{16}-559872a^{13}-27216a^{10}+3888a^7+63a^4-3a)^3 +\\ -+ (1679616a^{16}-66096a^{10}+153a^4)^3 +\\ -+ (1679616a^{15}+279936a^{12}-11664a^9-648a^6+9a^3+1)^3 = 1\\ -\implies (3(-559872u^5-186624u^4-9072u^3+1296u^2+21u-1)a)^3 +\\ -+ (9(186624u^4-7344u^2+17)au)^3 +\\ -+ (1679616u^5+279936u^4-11664u^3-648u^2+9u+1)^3 = 1, u=a^3\tag{4} -\end{gather} -Beck attributes $(2)$ to Mahler (1936). For $(3)$ see mathpages. $(4)$ is the result of setting $b=1$ in an identity due to Kohmoto, cited by Wolfram, "Algebraic Identity". -Solutions to $(1)$ are a special case of "Fermat near-misses". Some are given in OEIS: -\begin{align} -A050787_i^3-A050789_i^3-A050788_i^3&=1\\ --A050791_i^3+A050793_i^3+A050792_i^3&=1 -\end{align} -References: -Michael Beck et al. New integer representations as the sum of three cubes. Mathematics of Computation, 76, no.259 (Jul 2007), p.1683-1690. S 0025-5718(07)01947-3 -Kurt Mahler. Note On Hypothesis K of Hardy and Littlewood. Journal of the London Mathematical Society, 11 (1936), p.136-138.<|endoftext|> -TITLE: Line bundles on open subset of projective variety that don't extend over entire variety -QUESTION [9 upvotes]: I'm looking for an example of the following. Let $X$ be a smooth quasiprojective variety over $\mathbb{C}$ and let $\overline{X}$ be a compactification of $X$. We then have a map $Pic(\overline{X}) \rightarrow Pic(X)$. I want an example where this map is not surjective. It will be surjective if, for example, $\overline{X}$ is smooth, but I don't think it should be surjective in general.
I'd also be interested in conditions under which it will be surjective. -Thanks! -EDIT : Here are a couple of thoughts. Since $X$ is smooth, every line bundle on it comes from a Weil divisor. The closure in $\overline{X}$ of a Weil divisor in $X$ is another Weil divisor. The only thing that could go wrong is that this new Weil divisor might not come from a line bundle. Thus we are looking for Weil divisors that don't come from Cartier divisors, but I don't know enough examples of this to get what I'm looking for. - -REPLY [4 votes]: Your argument is absolutely correct, as long as $\bar{X} \setminus X$ is of codimension at least 2; this ensures that the only possible Weil divisor on $\bar{X}$ which restricts to your chosen $D$ on $X$ is just the closure of $D$ in $\bar{X}$. -So you need to think of your favourite example of a projective variety with a non-Cartier divisor, which is probably the quadric cone $x^2 + y^2 = z^2$ in $\mathbb{P}^3$. Any line through the vertex of the cone is a Weil divisor which is not locally principal. Take $X$ to be the cone with the vertex removed, and you have your example. In this case $X$ is isomorphic to $(\textrm{conic}) \times \mathbb{A}^1$, so $\textrm{Pic}\; X \cong \mathbb{Z}$. On the other hand, any two lines passing through the vertex together form a plane section, which is a Cartier divisor. So the image of $\textrm{Pic}\; \bar{X} \to \textrm{Pic}\; X$ is $2\mathbb{Z} \subset \mathbb{Z}$. (This example is described in many algebraic geometry textbooks, including Hartshorne, but I'm at home at the moment and can't give you a reference. If you haven't seen it before, it's worth spending a little while seeing in detail what's going on here.) -One condition which implies surjectivity is that your variety be locally factorial (since then every Weil divisor is Cartier).<|endoftext|> -TITLE: Show that a specific ideal is not principal -QUESTION [7 upvotes]: In some cases, it is quite straightforward to prove that a specific ideal cannot be principal. For example, in the ring of integers of $\mathbb{Q}(\sqrt{-5})$, the ideal $(2,1+\sqrt{-5})$ is not principal, by taking norms (since this is one of the ideals in the factorization of 2). -However, in that case, we used that the norm of a generating element would have to equal $\pm 2$. -Now, let $K=\mathbb{Q}(\sqrt{-39})$ and let $I=(2,\alpha-1)$ (where $\alpha$ is a root of $x^2-x+10$, the minimal polynomial of $\frac{1+\sqrt{-39}}{2}$). I want to show that this ideal is not principal. (specifically, it is the square of one of the primes in the factorization of $2\mathcal{O}_K$). -Any suggestions? -Thanks. - -REPLY [6 votes]: Let me just say that in principle you should be able to decide whether any ideal $I$ in an imaginary quadratic number ring $R = \mathbb{Z}[\sqrt{-d}]$ is principal or not by looking at norms. Indeed, because the unit group of $R$ is finite (usually just $\pm 1$ in fact), there are going to be at most finitely many elements $\alpha \in R$ with $N(\alpha) = N(I)$ and you could write a short piece of code to enumerate them all. Assuming that $I = \langle \beta_1,\ldots,\beta_n \rangle$ (we can arrange for $n = 2$, but never mind that) we just check whether $\beta_i/\alpha \in R$ for all $i$. If so, then $\langle \alpha \rangle \supset I$, and since they have the same norm they must be equal. -Added: My answer above is unnecessarily cautious. Of course for any $\alpha \in R$ and any unit $u$ of $R$, the principal ideals $\langle \alpha \rangle$ and $\langle u \alpha \rangle$ are equal.
Thus the above argument works whenever you have a ring $R$ for which you can algorithmically determine all elements $\alpha$ with $\# R/\langle \alpha \rangle = N$ for any $N \in \mathbb{Z}^+$. All rings of integers of number fields have this property. Of course there are probably much more efficient algorithms than this: unfortunately my knowledge of the algorithmic aspects of algebraic number theory is very poor. - -REPLY [5 votes]: $\rm (\beta)\: =\: (2,\:\alpha-1)\ \Rightarrow\ (\beta\:\beta')\: =\: (2)\ \Rightarrow\Leftarrow\ $ via $ \rm\ (2,\:\alpha-1)\ (2,\:\alpha'-1)\: =\: (4,\:10,\:2\alpha-2,\:2\alpha'-2)\: =\: (2)$ -Simpler, avoiding (conjugate) ideals: $\rm\ \beta\ |\ 2,\:\alpha-1\ \Rightarrow\ N(\beta)\ |\ N(2),\:N(\alpha-1)\:,\:$ i.e. $\rm\:\beta\beta'\ |\ (4,10)= 2$<|endoftext|> -TITLE: minimal primes of a homogeneous ideal are homogeneous -QUESTION [5 upvotes]: I am trying to study the proof of this result. It appears as part 3 of the proposition on page 2 of the following document -http://math.mit.edu/classes/18.721/projgeom6.pdf -I understand everything but the last line. Here the author says $\psi_{\lambda}(P_i)=P_j$ for all nonzero $\lambda$ implies $\psi_{\lambda}(P_i)=P_i$. I would appreciate it if anyone could shed some light on this. -Thanks - -REPLY [2 votes]: Remember that $\psi_{\lambda}$ is the homomorphism on $\mathbb{C}[x_1,\ldots,x_n]$ induced by mapping $x_i$ to $\lambda x_i$. In particular, you have that $\psi_{\lambda}\circ\psi_{\mu} = \psi_{\lambda\mu}$. -Note that $\psi_{\lambda}$ must induce a permutation on $P_1,\ldots,P_k$, since $\psi_{\lambda}(P_i) = P_j$ for some $j$. -Since $\psi_{\lambda}$ induces a permutation of $P_1,\ldots,P_k$, we get a homomorphism from $\mathbb{C}-\{0\}$ to $S_k$, the permutation group of $k$ elements: given $\lambda$, let $\sigma_{\lambda}(i) = j$ if and only if $\psi_{\lambda}(P_i) = P_j$. But now write $\lambda = \mu^{k!}$ for some complex number $\mu$. Then -$$\sigma_{\lambda} = \sigma_{\mu^{k!}} = (\sigma_{\mu})^{k!} = 1_{S_k}$$ -(since $|S_k|=k!$, so any element raised to the $k!$ power is the identity). Therefore, $\sigma_{\lambda}$ is the identity. -(That is, $\mathbb{C}-\{0\}$ under multiplication is a divisible group, so every homomorphism into a finite group is trivial, so the image of $\mathbb{C}-\{0\}$ in $S_k$ induced by the action of $\psi_{\lambda}$ on $P_1,\ldots,P_k$ must be trivial, so the action is trivial).<|endoftext|> -TITLE: Algorithmic Analysis Simplified under Big O -QUESTION [7 upvotes]: Hi, I am revising for my exams and I have the following inhomogeneous first order recurrence relation defined as follows: -f(0) = 2 -f(n) = 6f(n-1) - 5, n > 0 - -I have tried for ages using the methods I have been taught to solve this but I cannot get a proper solution. -1. Introduce a new function g(n) -2. f(n) = 6^n.g(n) -3. => 6^n.g(n) = 6.6^(n-1) .g(n-1) -5 -4. => g(n) = g(n-1)-5/6^n -5. => g(n) = sum(i=1, n)-5/6^i -6. => f(n) = 6^n.sum(i=1, n)-5/6^i -7. => *Evaluate the sum using geometric series formula* -8. => sum(i = 1, n)-5/6^i = [sum(i = 1, n)a^i] -------> (a = -5/6) -9. => *sub a = -5/6 into geometric formula [a(1-a^n)/(1-a)]* -10. => [(-5/6(1 - (-5/6)^n))/(1-(-5/6))] -11. => g(n) = [(-5/6(1 + (5/6)^n))/(1+5/6)] -12. => f(n) = 6^n . g(n) = 6^n[(-5/6(1 + (5/6)^n))/(1+5/6)] -13. => *sub in n = 0 to see if f(0) = 2* - -I cannot get this working, however. f(0) [base case] doesn't equal 2...Where have I gone wrong??
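-For reference, a quick brute-force computation of the first few terms (a Python sketch I'm including only as a sanity check, not as part of the required method) suggests what the closed form ought to be, even though my derivation above doesn't produce it:
-
-    def f(n):                                 # the recurrence exactly as stated
-        return 2 if n == 0 else 6*f(n - 1) - 5
-
-    print([f(n) for n in range(6)])           # [2, 7, 37, 217, 1297, 7777]
-    print([6**n + 1 for n in range(6)])       # identical, so f(n) = 6^n + 1
-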
Just to let you know here is the example I am following: -f(0) = 0 -f(n) = 3f(n-1)+1, n>0 - -f(n) = 3^n.g(n) -g(n) = g(n-1)+(1/3)^n -g(n) = sum(i=1, n)(1/3)^i -f(n) = 3^n . sum(i=1, n)(1/3)^i -sum(i=1, n) = sum(i=1, n)(a^i) ----> a = 1/3 -sub into geometric series formula gives: - -1/2(1-(1/3^n)) - -Hence: -f(n) = 3^n/2(1-(1/3^n)) = 1/2(3^n - 1) = O(3^n) - -Now my maths isn't great, I know enough to get about but I have followed the exact steps as my lecturer did in the example, but I cannot get a solution to fit f(0). I have to follow the methods used in the above example and I am absolutely stumped as to where the issue is... - -REPLY [7 votes]: Your step 5 is wrong. -We actually have -$$g(n) - g(0) = \sum _{j=1}^{n} \frac{-5}{6^j}$$ -You missed the $g(0)$. -This gives us $g(n) = 2 - \frac{5(1 - (1/6)^n)}{6(1 - 1/6)} = 2 - \frac{6^n -1}{6^n} = 1 + 1/6^n$ -Hence $f(n) = 6^n g(n) = 6^n + 1$. -A simpler way to approach this problem is to set $f(n) = h(n) + 1$ -Which gives us $h(n) = 6h(n-1)$ and $h(0) = 1$. -Thus $h(n) = 6^n$ and so $f(n) = 6^n + 1$.<|endoftext|> -TITLE: Non isomorphic groups whose product with Z is isomorphic -QUESTION [10 upvotes]: Are there groups $G$ and $H$ such that $G$ and $H$ are not isomorphic but $G \times \mathbb Z$ and $H \times \mathbb Z$ are? - -REPLY [3 votes]: Yes, there is an example in a paper by Hirshon, "On Cancellation in Groups". You can find a slightly different example written up here<|endoftext|> -TITLE: Interesting properties of Fibonacci-like sequences? -QUESTION [13 upvotes]: Everyone is familiar with the Fibonacci Sequence, [0] 1 1 2 3 5 8 ... and many of its interesting properties. For example, as the sequence continues, the ratio of $\frac{F_n}{F_{n-1}}$ converges to $\tau=\frac{1+\sqrt{5}}{2}$, a ratio which can be used to describe a number of numerical relationships in nature. -From some quick Wikipedia browsing, we can find a number of Generalizations of Fibonacci numbers. For one abstract generalization, we can define Fibonacci-like sequences as follows: -An $n$-order Fibonacci-like sequence is generated by $F_k=\sum_{i=1}^{n}F_{k-i}$ with $n$ initial terms. -Thus, the Fibonacci sequence is such a sequence with $n=2$ and $F_0=0$ and $F_1=1$. -Using this basic generalization, we have Lucas Numbers, where $n=2$ and $F_0=2$ and $F_1=1$, whose consecutive-number ratio also converges to the golden ratio. There are also Tribonacci, Tetranacci, and n-nacci numbers, which follow this generalization for 3, 4, and n numbers, and pad the initializing values with 0s, e.g. $F_0=0$ ... $F_{n-2}=0$, $F_{n-1}=1$. -So, my question is, are there any important properties of these sequences that are worth learning? Do these sequences have real-world applications or reflections like the original Fibonacci sequence does? What can be learned from these? - -REPLY [4 votes]: I can think of at least eight interesting things about the tribonacci numbers and their limiting ratio, the tribonacci constant. Whereas for the Fibonacci numbers the discriminant $\sqrt{5}$ plays a role, for the tribonacci it is $\sqrt{11}$. -I.
Sequences -Given three sequences with recurrence $s_n = s_{n-1}+s_{n-2}+s_{n-3}$ but different initial values, as follows: -$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} -\hline -\text{Name} & \text{Formula} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & OEIS\\ -\hline -R_n & x_1^n+x_2^n+x_3^n &3 &1 &3 &7 &11 &21 &39 & 71 & A001644 \\ -\hline -S_n &\frac{x_1^n}{y_1}+\frac{x_2^n}{y_2}+\frac{x_3^n}{y_3}&3 &2 &5 &10 &17 &32 &59 &108 &(none)\\ -\hline -T_n &\frac{x_1^n}{z_1}+\frac{x_2^n}{z_2}+\frac{x_3^n}{z_3}&0 &1 &1 &2 &4 &7 &13 &24& A000073 \\ -\hline -\end{array}$$ -$$y_i =\tfrac{1}{19}(x_i^2-7x_i+22)$$ -$$z_i =-x_i^2+4x_i-1$$ -and the $x_i$ are the roots of $x^3-x^2-x-1=0$, with $T_n$ as the tribonacci numbers and the real root $T = x_1 \approx 1.83929$ the tribonacci constant. -II. Powers -Let $a=\tfrac{1}{3}(19+3\sqrt{33})^{1/3},\; b=\tfrac{1}{3}(19-3\sqrt{33})^{1/3}$, then powers of the tribonacci constant $T$ can be expressed in terms of those three sequences as, -$$3T^n = R_{n}+(a+b)S_{n-1}+3(a^2+b^2)T_{n-1}$$ -III. q-Continued fraction -Let $q = -1/(e^{\pi\sqrt{11}})$, then, -$$\frac{(e^{\pi\sqrt{11}})^{1/24}}{\frac{1}{T}+1} = 1 + \cfrac{q}{1-q + \cfrac{q^3-q^2}{1 + \cfrac{q^5-q^3}{1 + \cfrac{q^7-q^4}{1 + \ddots }}}}$$ -IV. Snub cube -The Cartesian coordinates for the vertices of a snub cube are all the even permutations of $v=1/T$. -V. Pi Formula -Let $v = 1/T$, then, -$$\small\sum_{n=0}^\infty \frac{(2n)!^3}{n!^6}\frac{2(4v+1)(2v+1)n+(4v^2+2v-1)}{(v+1)^{24n}} =\frac{4}{\pi}$$ -VI. Infinite Nested Radical -$$\frac{1}{T-1} = \sqrt[3]{\frac{1}{2}+ \sqrt[3]{\frac{1}{2}+ \sqrt[3]{\frac{1}{2}+ \sqrt[3]{\frac{1}{2}+\dots}}}}$$ -VII. Complete elliptic integral of the first kind $K(k_{11})$ -Its exact value is -$$K(k_{11}) = \frac{1}{11^{1/4}(4\pi)^2} \bigl(\tfrac{T+1}{T}\bigr)^2\; \Gamma\bigl(\tfrac{1}{11}\bigr) \Gamma\bigl(\tfrac{3}{11}\bigr) \Gamma\bigl(\tfrac{4}{11}\bigr) \Gamma\bigl(\tfrac{5}{11}\bigr) \Gamma\bigl(\tfrac{9}{11}\bigr) = 1.570983\dots$$ -VIII. Cubic Pell-Type Equation -The Diophantine cubic Pell-type equation, -$$a^3 - 2 a^2 b + 2 b^3 - a^2 c - 2 a b c + 2 b^2 c + a c^2 + 2 b c^2 + c^3=1$$ -has an infinite number of integer solutions, -$$a,\;b,\;c = T_{n-1},\;T_{n-2},\;T_{n-3}$$ -Conclusion: Surely the tribonacci numbers are interesting enough? -P.S. See also this post.<|endoftext|> -TITLE: Uniform distribution on a simplex via i.i.d. random variables -QUESTION [16 upvotes]: For which $N \in \mathbb{N}$ is there a probability distribution such that $\frac{1}{\sum_i X_i} (X_1, \cdots, X_{N+1})$ is uniformly distributed over the $N$-simplex? (Where $X_1, \cdots, X_{N+1}$ are accordingly distributed iid random variables.) - -REPLY [16 votes]: Take a look at the Wikipedia article on the Dirichlet distribution. In particular the Dirichlet distribution with $\alpha_i = 1$ for all $i$ is the uniform distribution on the simplex. Furthermore, the Dirichlet distribution can be generated by taking $X_1, \ldots, X_n$ to be independent gamma random variables with the right choice of parameters, and then $Y_i = X_i/(X_1 + \cdots + X_n)$. In the particular case you're asking about, you can take the $X_i$ to all be exponential random variables with the same mean.<|endoftext|> -TITLE: Valuation ring of $k(x, y)$ of dimension $2$ -QUESTION [6 upvotes]: My question is as follows: - -Given a field $k$, is it always possible to find a valuation ring of $k(x, y)$ of dimension $2$?
- -REPLY [4 votes]: If by $k(x,y)$ you mean the rational function field in two variables over $k$, then the answer is yes. -Consider the map $v:k[x,y]\rightarrow\mathbb{Z}\times\mathbb{Z}$ defined as follows: express a polynomial $f\neq 0$ in the form $f=x^ay^bg$, where $g$ is neither divisible by $x$ nor by $y$, and set $v(f):=(a,b)$. Since $v$ is multiplicative it can be extended to the fraction field $k(x,y)$ of $k[x,y]$. Order the group $\mathbb{Z}\times\mathbb{Z}$ lexicographically: $(a,b)<(c,d):\Leftrightarrow a<c$, or $a=c$ and $b<d$. The resulting valuation on $k(x,y)$ has value group $\mathbb{Z}\times\mathbb{Z}$, which has rank $2$, so its valuation ring has dimension $2$.<|endoftext|> -TITLE: A circle rolls along a parabola -QUESTION [62 upvotes]: I'm thinking about a circle rolling along a parabola. Would this be a parametric representation? -$(t + A\sin (Bt) , Ct^2 + A\cos (Bt) )$ -A gives us the radius of the circle, B changes the frequency of the rotations, C, of course, varies the parabola. Now, if I want the circle to "match up" with the parabola as if they were both made of non-stretchy rope, what should I choose for B? -My first guess is 1. But, the arc length of a parabola from 0 to 1 is much less than the length from 1 to 2. And, as I examine the graphs, it seems like I might need to vary B in order to get the graph that I want. Take a look: - -This makes me think that the graph my equation produces will always be wrong no matter what constants I choose. It should look like a cycloid: - -But bent to fit on a parabola. [I started this because I wanted to know if such a curve could be self-intersecting. (I think yes.) When I was a child my mom asked me to draw what would happen if a circle rolled along the tray of the blackboard with a point on the rim tracing a line ... like most young people, I drew self-intersecting loops and my young mind was amazed to see that they did not intersect!] -So, other than checking to see if this is even going in the right direction, I would like to know if there is a point where the curve shown (or any curve in the family I described) is most like a cycloid-- -Thanks. -"It would be really really hard to tell" is a totally acceptable answer, though it's my current answer, and I wonder if the folks here can make it a little better. - -REPLY [77 votes]: (I had been meaning to blog about roulettes a while back, but since this question came up, I'll write about this topic here.) -I'll use the parametric representation -$$\begin{pmatrix}2at\\at^2\end{pmatrix}$$ -for a parabola opening upwards, where $a$ is the focal length, or the length of the segment joining the parabola's vertex and focus. The arclength function corresponding to this parametrization is $s(t)=a(t\sqrt{1+t^2}+\mathrm{arsinh}(t))$. -user8268 gave a derivation for the "cycloidal" case, and Willie used unit-speed machinery, so I'll handle the generalization to the "trochoidal case", where the tracing point is not necessarily on the rolling circle's circumference. -Willie's comment shows how you should consider the notion of "rolling" in deriving the parametric equations: a rotation (about the wheel's center) followed by a rotation/translation. The first key is to consider that the amount of rotation needed for your "wheel" to roll should be equivalent to the arclength along the "base curve" (in your case, the parabola).
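-(A quick aside: the closed form for $s(t)$ above is easy to sanity-check numerically. Here is a small Python sketch, with an assumed focal length $a=1$, comparing it against direct numerical integration of the speed $\|(2a,\,2at)\|$; the two columns agree.)
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    a = 1.0                                    # assumed focal length
-    s_closed = lambda t: a*(t*np.sqrt(1 + t**2) + np.arcsinh(t))
-    speed = lambda u: np.hypot(2*a, 2*a*u)     # |d/du (2au, au^2)|
-    for t in (0.5, 1.0, 2.0):
-        print(t, s_closed(t), quad(speed, 0, t)[0])
-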
I'll start with a parametrization of a circle of radius $r$ tangent to the horizontal axis at the origin: -$$\begin{pmatrix}-r\sin\;u\\r-r\cos\;u\end{pmatrix}$$ -This parametrization of the circle was designed such that a positive value of the parameter $u$ corresponds to a clockwise rotation of the wheel, and the origin corresponds to the parameter value $u=0$. -The arclength function for this circle is $ru$; for rolling this circle, we obtain the equivalence -$$ru=s(t)-s(c)$$ -where $c$ is the parameter value corresponding to the point on the base curve where the rolling starts. Solving for $u$ and substituting the resulting expression into the circle equations yields -$$\begin{pmatrix}-r\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-r\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$ -So far, this is for the "cycloidal" case, where the tracing point is on the circumference. To obtain the "trochoidal" case, what is needed is to replace the $r$ multiplying the trigonometric functions with the quantity $hr$, the distance of the tracing point from the center of the rolling circle: -$$\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$ -At this point, I note that $r$ here can be a positive or a negative quantity. For your "parabolic trochoid", negative $r$ corresponds to the circle rolling outside the parabola and positive $r$ corresponds to rolling inside the parabola. $h=1$ is the "cycloidal" case; $h > 1$ is the "prolate" case (tracing point outside the rolling circle), and $0 < h < 1$ is the "curtate" case (tracing point within the rolling circle). -That only takes care of the rotation corresponding to "rolling"; to get the circle into the proper position, a further rotation and a translation have to be done. The further rotation needed is a rotation by the tangential angle $\phi$, where for a parametrically-represented curve $(f(t)\quad g(t))^T$, $\tan\;\phi=\frac{g^\prime(t)}{f^\prime(t)}$. (In words: $\phi$ is the angle the tangent of the curve at a given $t$ value makes with the horizontal axis.) -We then substitute the expression for $\phi$ into the anticlockwise rotation matrix -$$\begin{pmatrix}\cos\;\phi&-\sin\;\phi\\\sin\;\phi&\cos\;\phi\end{pmatrix}$$ -which yields -$$\begin{pmatrix}\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&-\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\\\frac{g^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}&\frac{f^\prime(t)}{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\end{pmatrix}$$ -This rotation matrix can be multiplied with the "transformed circle" and then translated by the vector $(f(t)\quad g(t))^T$, finally resulting in the expression -$$\begin{pmatrix}f(t)\\g(t)\end{pmatrix}+\frac1{\sqrt{f^\prime(t)^2+g^\prime(t)^2}}\begin{pmatrix}f^\prime(t)&-g^\prime(t)\\g^\prime(t)&f^\prime(t)\end{pmatrix}\begin{pmatrix}-hr\sin\left(\frac{s(t)-s(c)}{r}\right)\\r-hr\cos\left(\frac{s(t)-s(c)}{r}\right)\end{pmatrix}$$ -for a trochoidal curve. (What those last two transformations do, in words, is to rotate and shift the rolling circle appropriately such that the rolling circle touches an appropriate point on the base curve.)
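-Before specializing, here is a minimal numerical sketch of this general expression in Python (the helper names and the sample values $a=1$, $r=3/4$, $h=1$, $c=0$ are my own choices for illustration, not part of the derivation):
-
-    import numpy as np
-
-    a, r, h, c = 1.0, 0.75, 1.0, 0.0           # sample values only
-
-    def s(t):                                  # arclength of (2at, at^2)
-        return a*(t*np.sqrt(1 + t**2) + np.arcsinh(t))
-
-    def trochoid(t):
-        f, g = 2*a*t, a*t**2                   # base point on the parabola
-        fp, gp = 2*a, 2*a*t                    # tangent vector (f', g')
-        u = (s(t) - s(c))/r                    # rolled angle
-        cx = -h*r*np.sin(u)                    # tracing point, wheel at origin
-        cy = r - h*r*np.cos(u)
-        n = np.hypot(fp, gp)
-        return f + (fp*cx - gp*cy)/n, g + (gp*cx + fp*cy)/n
-
-    for t in np.linspace(-2.0, 2.0, 5):
-        print(round(float(t), 2), tuple(np.round(trochoid(t), 3)))
-
-(Plugging the parabola's $f$, $g$, and $s$ into the general expression symbolically gives the closed form below.)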
-Using this formula, the parametric equations for the "parabolic trochoid" (with starting point at the vertex, $c=0$) are -$$\begin{align*}x&=2at+\frac{r}{\sqrt{1+t^2}}\left(ht\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-t-h\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)\right)\\y&=at^2-\frac{r}{\sqrt{1+t^2}}\left(h\cos\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)+ht\sin\left(\frac{a}{r}\left(t\sqrt{1+t^2}+\mathrm{arsinh}(t)\right)\right)-1\right)\end{align*}$$ -A further generalization to a space curve can be made if the rolling circle is not coplanar to the parabola; I'll leave the derivation to the interested reader (hint: rotate the "transformed" rolling circle equation about the x-axis before applying the other transformations). -Now, for some plots: - -For this picture, I used a focal length $a=1$ and a radius $r=\frac34$ (negative for the "outer" ones and positive for the "inner" ones). The curtate, cycloidal, and prolate cases correspond to $h=\frac12,1,\frac32$. - -(added 5/2/2011) -I did promise to include animations and code, so here's a bunch of GIFs I had previously made in Mathematica 5.2: -Inner parabolic cycloid, $a=1,\;r=\frac34\;h=1$ - -Curtate inner parabolic trochoid, $a=1,\;r=\frac34\;h=\frac12$ - -Prolate inner parabolic trochoid, $a=1,\;r=\frac34\;h=\frac32$ - -Outer parabolic cycloid, $a=1,\;r=-\frac34\;h=1$ - -Curtate outer parabolic trochoid, $a=1,\;r=-\frac34\;h=\frac12$ - -Prolate outer parabolic trochoid, $a=1,\;r=-\frac34\;h=\frac32$ - -The Mathematica code (unoptimized, sorry) is a bit too long to reproduce; those who want to experiment with parabolic trochoids can obtain a notebook from me upon request. -As a final bonus, here is an animation of a three-dimensional generalization of the prolate parabolic trochoid:<|endoftext|> -TITLE: Faithful representations and character tables -QUESTION [8 upvotes]: Suppose an n-dimensional irreducible complex representation is not faithful. Then a non-identity element gets mapped to the identity matrix in $GL_n(\mathbb{C})$ so that the value of its associated character on the conjugacy class of this element is $n$. Thus, $n$ appears at least twice in the corresponding row of the group's character table. -I suspect the converse is true: if the row corresponding to an irreducible $n$-dimensional complex representation contains the dimension of the representation in more than one column, then the representation is not faithful. I have looked in a few of the standard algebra references and have been unable to find a proof. Can anyone point me in the right direction? We proved this for $n=2$, but it seems that it would be difficult and messy to generalize. I wonder if there is a simpler proof. - -REPLY [7 votes]: What you are saying is true and is in fact contained in all the standard references (e.g. Isaacs). The idea is that any character of an $n$-dimensional representation, evaluated on an element of order $d$ is a sum of $n$ $d$-th roots of unity (why?). Your statement now follows easily by the triangle inequality: -$|\chi(g)|\leq \chi(1)$ for all $g$ and $\chi(g)=\chi(1)$ iff $g$ is sent to the identity matrix.<|endoftext|> -TITLE: Tensor product of abelian group and a free abelian group -QUESTION [6 upvotes]: I am trying to show that if $F,H$ are abelian groups with $F$ free abelian, and if $a \in F$ and $h \in H$ are non-zero, then $a \otimes h \ne 0$ in $F \otimes H$. -This is specifically in a section describing the derived functor Tor. 
Of course, that doesn't mean the solution has to involve that, but there is probably a way. I know that $F$ free abelian means that $F$ is torsion free and hence $\mbox{Tor}(F,A)=0$. -I was trying to use a formulation of $\mbox{Tor}$ in terms of exact sequences. If: -$$0 \to R \stackrel{i}{\hookrightarrow} F \to A \to 0$$ is an exact sequence then $\mbox{Tor}(A,B) =\mbox{ker}(i \otimes 1_B)$. -It seemed to me that if I picked the right sequence I could get that $\mbox{Tor}=0$ implies that the kernel is trivial, which would give the result, but I can't get this to work. -Edit: It appears that this is false from the answers below. -Here is a link to the question. - -REPLY [2 votes]: Suppose $F=\mathbb Z$ and $H=\mathbb Z/2\mathbb Z$. Let $a=2\in F$ and let $\xi\in H$ be the non-zero element. Then $$a\otimes\xi=(2\cdot1)\otimes\xi=1\otimes(2\cdot \xi)=1\otimes 0=0.$$ -It follows that what you want to prove is false. - -REPLY [2 votes]: Here is an answer using Tor: -Consider the exact sequence -$$ 0\rightarrow\mathbb{Z}\rightarrow F\rightarrow K\rightarrow 0,$$ -where the map from $\mathbb{Z}$ to $F$ sends the generator $1$ to $a$, and $K$ is the cokernel. Tensoring with $H$ we get the long exact sequence -$$ \cdots\rightarrow Tor(F,H)\rightarrow Tor(K,H)\rightarrow H\rightarrow F\otimes H\rightarrow K\otimes H\rightarrow 0. $$ -Now $Tor(F,H)=0$ since $F$ is torsion-free. So the kernel of $H\rightarrow F\otimes H$, given by sending $h$ to $a\otimes h$, is $Tor(K,H)$. This is not always zero, and so the problem as stated is incorrect. Consider $H=\mathbb{Z}/2\mathbb{Z}$, $F=\mathbb{Z}$, $a=2$.<|endoftext|> -TITLE: Transcendental Galois Theory -QUESTION [15 upvotes]: Is there a good reference on transcendental Galois Theory? - -More precisely, if $K/k$ admits a separating transcendence basis (or maybe if it is a separably generated extension) it seems to me that many of the usual theorems of Galois theory go through. Moreover, the group $\text{Aut}(K/k)$ seems to have additional structure; namely it should be an algebraic group over $k$. -For example, it seems to me that $k(x_1, ..., x_n)/k$ has automorphism group $GL_n(k)$. (EDIT: As Qiaochu Yuan points out, this is incorrect; the automorphism group at least must contain $PGL_{n+1}(k)$, acting via its action on the function field of $\mathbb{P}_k^n$.) This sort of thing must be well-studied; if so, what are the standard references on the subject? -I have seen Pete L. Clark's excellent (rough) notes on related subjects here but they seem not to address quite these sorts of questions. - -REPLY [11 votes]: For every $n \geq 1$, there is a natural effective action of $\operatorname{PGL}_{n+1}(k)$ on $k(x_1,\ldots,x_n)$. In fact $\operatorname{PGL}_{n+1}(k)$ is the automorphism group of $\mathbb{P}^n_{/k}$, the action being the obvious one induced by the action of $\operatorname{GL}_{n+1}(k)$ on the vector space $k^{n+1}$ in which $\mathbb{P}^n$ is the set of lines. -However, no one said this was the entire automorphism group of $k(x_1,\ldots,x_n)$! It is when $n = 1$ -- for instance because every rational map from a smooth curve to a projective variety is a morphism ("valuative criterion for properness"). However, $\operatorname{PGL}_{n+1}(k)$ is known not to be the entire automorphism group of $k(x_1,\ldots,x_n)$ when $n > 1$. Rather, the full automorphism group is called the Cremona group.
For $n = 2$ we have a problem in the geometry of surfaces, and it was shown (by Max Noether when $k = \mathbb{C}$) that the automorphism group here is generated by the linear automorphisms described above together with a certain set of simple, well-understood birational maps, called quadratic maps or indeed Cremona transformations. But even when $n = 2$ this automorphism group is not an algebraic group: it's bigger than that. -When $n \geq 3$ it is further known that the linear automorphisms and the Cremona transformations do not generate the whole automorphism group, and apparently no one has even a decent guess as to what a set of generators might look like. I had the good fortune of hearing a talk by James McKernan on (in part) this subject within the last few months, so I am a bit more up on this than I otherwise would be. Anyway, he gave us the sense that this is a pretty hopeless problem at present. For instance, see this recent preprint in which a rather eminent algebraic geometer works rather hard to prove a seemingly rather weak result about finite subgroups of the three dimensional Cremona group! -So, yes, this is a different sort of question from the ones considered in my rough note on transcendental Galois theory. To all appearances it's a much harder question...<|endoftext|> -TITLE: Asymptotics for partitions of $n$ with largest part at most $k$ (or into at most $k$ parts) -QUESTION [6 upvotes]: Let $\bar p_k(n)$ be the number of partitions of $n$ with largest part at most $k$ (equivalently, into at most $k$ parts). Is there an elementary formula for the asymptotic behavior of $\bar p_k(n)$ as $n \rightarrow \infty$ that still involves $k$ in a nontrivial way? -I would also like to know of any references that discuss the numbers $\bar p_k(n)$. Browsing Stanley v.1 isn't much help, though it's possible that I am unaware of a section in the later parts or in v.2. - -REPLY [2 votes]: A very useful paper due to Ratsaby (Applicable Analysis and Discrete Mathematics, 2008), though it only gives a good approximation to the number of partitions of $n$ into exactly $k$ parts, each no larger than $N$: -http://www.doiserbia.nb.rs/img/doi/1452-8630/2008/1452-86300802222R.pdf<|endoftext|> -TITLE: Prove that $x^2 \equiv a \pmod{p}$ has a solution if and only if $x^2 \equiv a \pmod{p^e}$ has a solution -QUESTION [6 upvotes]: Problem - -Prove that $x^2 \equiv a \pmod{p}$ has a solution if and only if $x^2 \equiv a \pmod{p^e}$ has a solution. - -I attempted to prove this by induction, but I was struggling with proving the converse. -If $x^2 \equiv a \pmod{p^{e+1}}$ then $x^2 = a + kp^{e+1}$. Hence, $x^2 \equiv a \pmod{p^e}$ is straightforward. However, the converse drove me nuts. -By assuming that $x^2 \equiv a \pmod{p^e}$, then I have $x^2 = a + kp^e$; taking this number modulo $p^{e+1}$, I can't see how this becomes $a \pmod{p^{e+1}}$. I used a calculator to generate many cases to see how the pattern worked, and I realized the key idea is in $k$. But I couldn't figure out how to bring $k$ up from $p^e$ to $p^{e+1}$. Any idea? -Thank you - -REPLY [6 votes]: For $p=2$, the result is not true: taking $p=2$, $a=3$, $e=2$, we have that $x^2 \equiv 3\pmod{2}$ has a solution (any odd integer), but $x^2\equiv 3\pmod{2^2}$ has no solutions. -If $\gcd(a,p)=p$ the result is also not necessarily true: take $a=p$; then $x^2\equiv p\pmod{p}$ has a solution, but $x^2\equiv p\pmod{p^2}$ does not, since $x$ would have to be a multiple of $p$, and hence $x^2\equiv 0\pmod{p^2}$.
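-Both failure cases are easy to confirm by brute force; here is a tiny Python check, added just for illustration:
-
-    def sols(a, m):                        # all x with x^2 = a (mod m)
-        return [x for x in range(m) if (x*x - a) % m == 0]
-
-    print(sols(3, 2), sols(3, 4))          # [1] [] : solvable mod 2, not mod 4
-    p = 5
-    print(sols(p, p), sols(p, p*p))        # [0] [] : solvable mod p, not mod p^2
-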
If $a$ is restricted to lying in $\{0,1,\ldots,p-1\}$, then these two conditions don't matter: for $p=2$, it is clear that both $x^2\equiv 0\pmod{2^e}$ and $x^2\equiv 1\pmod{2^e}$ have solutions for all $e\gt 0$; and if $p$ is odd and $a=0$, then $x^2\equiv 0\pmod{p^e}$ has solutions for all $e\gt 0$. -So, in any case, we can restrict to the case where $p$ is odd and $\gcd(a,p)=1$. In particular, any solution to $x^2\equiv a\pmod{p}$ or to $x^2\equiv a \pmod{p^e}$ must be relatively prime to $p$. -For odd primes, the problem can be solved using Hensel's Lemma, but one does not actually need it; just pushing it through what you are trying to do will do it, if you figure out what you need out of $k$ for things to work out. -Suppose $b^2 \equiv a \pmod{p^r}$, and you want to find $k$ such that $(b+kp^r)^2\equiv a \pmod{p^{r+1}}$. -Doing simple squaring, you have -$$b^2 + 2bkp^r + k^2p^{2r}\equiv b^2 +2bkp^r \pmod{p^{r+1}}.$$ -Now, $b^2 = a + tp^r$ for some $t$, so we want -$$tp^r + 2bkp^r = p^r(t+2bk)\equiv 0 \pmod{p^{r+1}}.$$ -This is equivalent to asking that -$$t + 2bk\equiv 0 \pmod{p}.$$ -So pick $k$ with $k(2b) \equiv -t\pmod{p}$ (which can be done because both $b$ and $2$ are relatively prime to $p$), and we are done. -By the way: one way to think of Hensel's Lemma is that it is the modular version of Newton's Method for approximating roots. -In Newton's Method, if $f'(b)\neq 0$, then you can go from $b$ to $b - \frac{f(b)}{f'(b)}$ as the "next approximation". Hensel's Lemma works the same way: you need $f'(b)$ to not be zero modulo $p$. Here we are working with $f(x) = x^2-a$; as long as $p\neq 2$, the formal derivative is not identically zero, which suggests what to do. -Notice the similarity with Newton's method in what we did: if $f(x) = x^2 - a$, then $f'(x)=2x$, so $f'(b)=2b$, and $f(b) = b^2-a = tp^r$, so -$$ b - \frac{f(b)}{f'(b)} = b - \frac{tp^r}{2b} = b + \left(\frac{-t}{2b}\right)p^r$$ -and what we are going to do is take $b+kp^r$ with $k$ given by -$$k(2b)\equiv -t\pmod{p};$$ -that is, $k$ is congruent to $\frac{-t}{2b}$ modulo $p$; precisely $-\frac{f(b)}{f'(b)}$ modulo $p$.<|endoftext|> -TITLE: Isomorphism between $I_G/I_G^2$ and $G/G'$ -QUESTION [9 upvotes]: OK, this has been bugging me for a while, and I'm sure there's something obvious I'm missing. The references I've looked at for this result in an effort to resolve the issue didn't address it. -$G$ is a group, $\mathbb{Z}[G]$ its integral group ring, $I_G$ the augmentation ideal (i.e. the kernel of the map $\mathbb{Z}[G]\rightarrow\mathbb{Z}$ sending all group elements to 1), and $G/G'$ is the abelianization of $G$. I'm attempting to show that $I_G/I_G^2$ is isomorphic to $G/G'$. -It's straightforward to show that -$$(g-1)+(h-1)\equiv(gh-1)\bmod I_G^2\hskip0.5in(*)$$ -and thus every equivalence class mod $I_G^2$ is equal to some $(g-1)+I_G^2$. -Now all we have to do is define a map $\phi:G/G'\rightarrow I_G/I_G^2$ and show that it has an inverse $\psi:I_G/I_G^2\rightarrow G/G'$. The definitions are obvious enough: -$$\phi(gG')=(g-1)+I_G^2$$ -$$\psi((g-1)+I_G^2)=gG'$$ -and the starred equation shows that these are homomorphisms. But! My problem is showing that these maps are well-defined. For example, if $(g-1)+I_G^2=(h-1)+I_G^2$, i.e. $$(g-1)-(h-1)\equiv (gh^{-1}-1)\equiv0\bmod I_G^2,$$ -we need to show that $$\psi((g-1)+I_G^2)=gG'=hG'=\psi((h-1)+I_G^2),$$ i.e. $gh^{-1}\in G'$. -If we have $gh^{-1}\in G'$, i.e.
$gh^{-1}$ equals some $\prod_{a,b\in G} (aba^{-1}b^{-1})^{n_{a,b}}$, then -$$gh^{-1}-1=\left(\prod_{a,b\in G} (aba^{-1}b^{-1})^{n_{a,b}}\right)-1,$$ -and, unwinding using the starred equation, -$$\left(\prod_{a,b\in G} (aba^{-1}b^{-1})^{n_{a,b}}\right)-1\equiv \sum n_{a,b}(aba^{-1}b^{-1}-1)\equiv$$ -$$\sum n_{a,b}\left[(ab-1)-(a-1)-(b-1)\right]=\sum n_{a,b}(a-1)(b-1)\equiv0\bmod I_G^2$$ -but for some reason I can't make this work the other way. I'm sure I'm being silly; someone please point out where. - -REPLY [7 votes]: You have maps $\phi:\ G\rightarrow I_G/I_G^2$ and $\psi:\ I_G\rightarrow G/G'$. $\phi$ is a homomorphism because $\phi(gh) = (gh-1) = (g-1) + (h-1) + (g-1)(h-1)$, and $(g-1)(h-1)$ lies in $I_G^2$. Since $I_G/I_G^2$ is an abelian group, $\phi$ induces a well-defined map from $G/G'$ to $I_G/I_G^2$. -Now take an element of the form $(g-1)(h-1)$ in $I_G^2$. $(g-1)(h-1) = gh-g-h+1 = (gh-1)-(g-1)-(h-1)$. Thus $\psi((g-1)(h-1)) = \psi(gh-1)-\psi(g-1)-\psi(h-1)=ghg^{-1}h^{-1}G' = [g^{-1},h^{-1}]G'=G'$, so $\psi(I_G^2) = 1$, and so $\psi$ induces a well-defined map from $I_G/I_G^2$ to $G/G'$.<|endoftext|> -TITLE: Presentation of Borel subgroup of GL(2,p) -QUESTION [5 upvotes]: The Borel subgroup $B$ of GL(2,p) is the subgroup of upper triangular matrices. It is easy to see that it is the (internal) semi-direct product of two subgroups: -$B=U\rtimes T$ -where $U$ is the normal subgroup of $B$ consisting of unitriangular matrices, and $T$ is the subgroup of $B$ consisting of diagonal matrices. -In other words, $B\cong C_p\rtimes (C_{p-1}\times C_{p-1})$. -How can one get a presentation of $B$ from this? - -REPLY [7 votes]: The group is generated by the three elements -$$ -\sigma=\begin{pmatrix}1 & 1\\0&1\end{pmatrix}, x=\begin{pmatrix}\alpha & 0\\0&1\end{pmatrix}, \text{ and }y=\begin{pmatrix}1 & 0\\0&\alpha\end{pmatrix}, -$$ -where $\alpha$ is a generator of $(\mathbb{Z}/p\mathbb{Z})^\times$. To get a presentation, you need to determine how these three elements conjugate with each other. Of course, as you have already said, $x$ and $y$ commute, so you just need to compute $x\sigma x^{-1}$ and $y\sigma y^{-1}$. E.g. -$$ -x\sigma x^{-1} = \begin{pmatrix}\alpha & 0\\0&1\end{pmatrix}\begin{pmatrix}1 & 1\\0&1\end{pmatrix}\begin{pmatrix}\alpha^{-1} & 0\\0&1\end{pmatrix}= -\begin{pmatrix}1 & \alpha\\0&1\end{pmatrix}=\sigma^{a}, -$$ -for any lift $a$ of $\alpha\in\mathbb{Z}/p\mathbb{Z}$ to $\mathbb{Z}$. Similarly, you will find that $y\sigma y^{-1}=\sigma^{b}$ where $b$ is a lift of $\alpha^{-1}$ to $\mathbb{Z}$. This gives you the presentation -$$ -B=\langle x,y,\sigma|x^{p-1}=y^{p-1}=\sigma^p = 1,xy=yx,x\sigma x^{-1}=\sigma^a,y\sigma y^{-1}=\sigma^{b}\rangle -$$ -The last relation can also be replaced by $y^{-1}\sigma y = \sigma^a$, if you don't want to introduce $b$.<|endoftext|> -TITLE: How do I figure out the coproduct on graded algebras? -QUESTION [6 upvotes]: I have to figure out the duals to a couple of graded algebras. This requires a comultiplication (also called a coproduct in Hatcher). Hatcher's book shows what form the comultiplication must take using a cohomology argument for the cohomology of H-spaces. I do not know that these algebras are the cohomology of some H-spaces. Is there a purely algebraic argument to figure out a comultiplication for an arbitrary graded algebra, or do I need some extra data? -If you were wondering, I need to figure out the duals of $\Lambda (y)$ and $\Gamma [ \gamma ] $.
- -REPLY [2 votes]: There is no way to "figure out" a comultiplication for an arbitrary graded algebra. Some algebras are not bialgebras in any way, and those algebras that can be made into bialgebras can usually be turned into bialgebras in many different ways. -You will probably have to be more specific about the algebras you have in mind (at the very least, explain the notation you are using!) - -The following, earlier text answers another question, the one in the title... -Doesn't the following obvious construction work? If $A$ and $B$ are graded algebras you can present them as quotients of free algebras $T(V)$ and $T(W)$ modulo homogeneous ideals generated by sets of homogeneous elements $R_A\subset T(V)$ and $R_B\subset T(W)$, so that $A=T(V)/\langle R_A\rangle$ and $B=T(W)/\langle R_B\rangle$. Then $A\sqcup B$ is $T(V\oplus W)/\langle R_A\cup R_B\rangle$.<|endoftext|> -TITLE: Spectrum of the "discrete Laplacian operator" -QUESTION [7 upvotes]: In numerical analysis, the discrete Laplacian operator $\Delta$ on $\ell^2({\bf Z})$ can be written in terms of the shift operator -$\Delta=S+S^*-2I$ -where $S$ is the right shift operator. Since it is self-adjoint, the spectrum should be contained in the real line. On the other hand, a simple calculation shows that one can write the operator $\Delta-\lambda$ as the following -$\Delta-\lambda=-\frac{1}{\mu}(S-\mu)(S^*-\mu)$ (*) -where $\mu$ is such that $\mu+\frac{1}{\mu}=2+\lambda$. -(*) can give the intuition that the spectrum -$\sigma(\Delta)=\{\mu+\frac{1}{\mu}:|\mu|=1\}-2$ -However, for proving it, (*) seems not to work. -Here are my questions: - - -Is $\sigma(\Delta)=\{\mu+\frac{1}{\mu}:|\mu|=1\}-2$ true? -Does the fact that $\sigma(S)$ is purely continuous imply that $\sigma(\Delta)$ is also continuous? - -REPLY [9 votes]: Yes. Let $f(z) = z + z^{-1} - 2$. Since $\Delta = f(S)$, $\sigma(\Delta) = f(\sigma(S))$. And yes, $\sigma(S) = \{z: |z| = 1\}$. Now note that if $z = e^{i\theta}$, $f(z) = 2 \cos(\theta) - 2$, so $\sigma(\Delta) = [-4, 0]$. -The spectrum of $\Delta$ is all continuous: it is easy to see that $\Delta$ has no eigenvalues in $\ell^2({\mathbb Z})$. In fact, it is absolutely continuous, and this follows from the fact that the inverse image under $f$ of any set of measure 0 in $\mathbb R$ has measure 0 in the unit circle.<|endoftext|> -TITLE: $(7a+1)x^3+(7b+2)y^3+(7c+4)z^3+(7d+1)xyz=0$ does not have integer solutions -QUESTION [6 upvotes]: Let $a,b,c,d$ be integers. How can I prove that the equation -$$(7a+1)x^3+(7b+2)y^3+(7c+4)z^3+(7d+1)xyz=0$$ -does not have an integer solution $(x,y,z)$ such that $\gcd(x,y,z)=1$? - -REPLY [7 votes]: Below is a very simple solution that avoids the (omitted) hairy arithmetic in the accepted solution. As there, reduce to $\rm\:x,y,z \not\equiv 0\:.$ Divide by $\rm\:x^3\:$ to get $\rm\: f_{a,b} = 1 + 2\ a^3 + 4\ b^3 + a\:b \equiv 0\ \ (mod\ 7)\:,$ $\rm\: a = y/x\:,\ b = z/x\:.\ $ $\rm\: n\not\equiv 0\ \Rightarrow\ n^3 \equiv \pm1\ (mod\ 7)\:$ yielding $4$ possibilities $\rm\ a^3 \equiv \pm1,\ b^3 \equiv \pm 1\:.\:$ E.g. $\:$ if $\rm\ a^3 \equiv 1,\ b^3 \equiv -1\ $ then $\rm\:f_{a,b}\:$ becomes $\rm\ a\:b \equiv 1\ $ contra $\rm\ (ab)^3 \equiv a^3\ b^3 \equiv (1)(-1) \equiv -1\:.\ $ Reasoning very similarly shows simply and quickly that the remaining $3$ cases are unsolvable.<|endoftext|> -TITLE: Prove $\gcd(a+b, a-b) = 1$ or $2\,$ if $\,\gcd(a,b) = 1$ -QUESTION [26 upvotes]: I want to show that for $a,b \in \mathbb{Z}$ with $\gcd(a,b) = 1$, either $\gcd(a+b, a-b) = 1$ or $\gcd(a+b, a-b) = 2$ holds.
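-(For confidence, a quick brute-force check over random coprime pairs -- a Python sketch, which is of course not a proof:)
-
-    from math import gcd
-    from random import randrange
-
-    checked = 0
-    while checked < 10000:
-        a, b = randrange(1, 10**6), randrange(1, 10**6)
-        if gcd(a, b) == 1:                  # math.gcd uses absolute values
-            assert gcd(a + b, a - b) in (1, 2)
-            checked += 1
-    print("ok")                             # gcd(a+b, a-b) was always 1 or 2
-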
I think the first step should look something like this: -$d = \gcd(a+b, a-b) = \gcd(2a, a-b)$ - -From here I tried to proceed with two cases. -1: $a-b$ is even, which leads to $\gcd(a+b, a-b) = 2$ -2: $a-b$ is odd, which leads to $\gcd(a+b, a-b) = 1$ -My main problem, I think, is that I do not know how I should include $\gcd(a,b) = 1$ in the proof. -Any help is appreciated. Thx in advance. -Cherio Woltan - -REPLY [7 votes]: I advise not doing a proof by cases. That was my initial attempt, but I found myself going through a logical rabbit hole with no end in sight. I suggest attempting to show that the common divisor, call it $d$, satisfies $d\leq2$, which implies that $d=1$ or $2$. -Here is a proof using that strategy. First, a few lemmas that will assist us in our deductions. -I adopt the notation $\gcd(a,b) = (a,b)$. -$\enspace$$\enspace$Lemma 1 If $a \mid b$ and $b\neq0$, then $|a| \leq |b|$. -$\enspace$$\enspace$Lemma 2 If $a \mid b$ and $a \mid c$, then for every integer $m$ and $n$, $a\mid(mb+nc)$. -$\enspace$$\enspace$Lemma 3 For every integer $a$, $b$ and $c$, $(a+bc,b)=(a,b)$. -$\enspace$$\enspace$ Lemma 4 For every integer $a$ and $b$, if $(a,b)=1$, there are integers $x$ and $y$ such that $ax+by=1$. -Now for the proof. -Proof: -Suppose $(a,b)=1$. Then from lemma 4, there are integers $x$ and $y$ such that -$$ax+by=1 \tag{1}$$ -Let $d=(a+b,a-b)$. (Recall that $d>0$ by definition.) -From lemma 3, we can write -$$d=(2a,a-b)$$ -$$d=(2b,a-b)$$ -Since $d$ divides both $2a$ and $2b$ by the two displays above, lemma 2 tells us that $d\mid(2am+2bn)$ for arbitrary integers $m$ and $n$. -By lemma 1, whenever $am+bn\neq 0$ we can translate $d\mid(2am+2bn)$ into the inequality -$$d \leq |2(am+bn)| \tag{2}$$ -Since $(2)$ holds for all integers $m$ and $n$ with $am+bn\neq 0$, we can allow $m$ and $n$ to represent particular integers by universal instantiation* -In our case, let $m=x$ and $n=y$. Then $am+bn = ax+by$, which allows us to conclude from $(1)$ that $am+bn = 1$. -Therefore, we substitute $am+bn=1$ into $(2)$ to conclude that -$$d \leq 2 \tag{3}$$ -which implies that $d=1$ or $d=2$. - -*Universal instantiation is the rule of inference that allows us to conclude that a proposition for a particular element in a domain is true given that the statement is true for every element in the domain. For example, if the statement "All women are wise" is true, then the statement "Kaity is wise" is also true.<|endoftext|> -TITLE: $H_p(\mathbb{R}P^3 \times \mathbb{R}P^2)$ -QUESTION [5 upvotes]: I'm working through an example of the Künneth formula in my book. Without showing any working it states that for $X = \mathbb{R}P^3 \times \mathbb{R}P^2$ -$$H_p(X)=\begin{cases} -\mathbb{Z} & \mbox{if } p=0\\ -\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} & \mbox{if } p=1 \\ -\mathbb{Z}/2\mathbb{Z} & \mbox{if } p=2 \\ -\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} & \mbox{if } p=3 \\ -0 & \mbox{if } p\ge 4 -\end{cases} -$$ -I agree for $0 \le p \le 3$, but for $p=4$ do we not have some contribution from $H_3(\mathbb{R}P^3) \otimes H_1(\mathbb{R}P^2)=\mathbb{Z}\otimes \mathbb{Z}/2\mathbb{Z} \simeq \mathbb{Z}/2\mathbb{Z}$?
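-As a cross-check (a Python sketch I'm adding; it encodes each group by the list of its cyclic summands, with $0$ standing for $\mathbb{Z}$ and $2$ for $\mathbb{Z}/2\mathbb{Z}$), the whole Künneth computation -- tensor terms plus Tor terms -- is mechanical:
-
-    from math import gcd
-
-    HP3 = {0: [0], 1: [2], 2: [], 3: [0]}   # H_*(RP^3): Z, Z/2, 0, Z
-    HP2 = {0: [0], 1: [2]}                  # H_*(RP^2): Z, Z/2, 0
-
-    def tensor(A, B):   # C_a (x) C_b = C_gcd(a,b), with gcd(0,b) = b
-        return [gcd(a, b) for a in A for b in B if gcd(a, b) != 1]
-
-    def tor(A, B):      # Tor(C_a, C_b) = C_gcd(a,b); zero if either is Z
-        return [gcd(a, b) for a in A for b in B if a and b and gcd(a, b) != 1]
-
-    for n in range(6):
-        parts = []
-        for p in range(4):
-            parts += tensor(HP3[p], HP2.get(n - p, []))
-            parts += tor(HP3[p], HP2.get(n - 1 - p, []))
-        print(n, parts)   # 0 [0] | 1 [2, 2] | 2 [2] | 3 [0, 2] | 4 [2] | 5 []
-
-which again shows a $\mathbb{Z}/2\mathbb{Z}$ in degree $4$.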
At first I thought you were neglecting the Tor part of the Künneth exact sequence, but in degree $4$ all of the $Tor(H_p(\mathbb{RP}^3),H_{3-p}(\mathbb{RP}^2))$ terms vanish.<|endoftext|>
-TITLE: What non-symmetric matrices satisfy $x^TAx>0,\forall x\neq 0$
-QUESTION [6 upvotes]: Let $A\in \mathbb{R}^{n\times n}$ be a matrix. It is positive definite if and only if $A$ is symmetric and $x^TAx>0,\forall x\neq 0$.
-My question is: if $x^TAx>0,\forall x\neq 0$ but $A$ is not symmetric, what does $A$ look like?
-I have an example. For a rotation matrix $A$ whose rotation angle is less than 90 degrees, $x^TAx>0,\forall x\neq 0$, but $A$ is not symmetric. Is this the only type of non-symmetric matrix that satisfies $x^TAx>0,\forall x\neq 0$? Can you give any other examples of such matrices? Many thanks.
-
-REPLY [3 votes]: For the sake of having an answer, here is sos440's answer from the comments.
-There is a unique way of decomposing $A$ into the sum of a symmetric matrix $A_{+}$ and an antisymmetric matrix $A_{-}$, namely $A = (A + A^{T})/2 + (A - A^{T})/2$. Then note that $x^{T} A x = x^{T} A_{+} x$. That is, $x^{T} A x$ does not depend on the antisymmetric part $A_{-}$, so the matrices $A$ satisfying $x^{T} A x > 0$ for all $x \neq 0$ are characterized as the sums of a positive definite matrix and an antisymmetric matrix.<|endoftext|>
-TITLE: Interpretation of independence of events
-QUESTION [5 upvotes]: $\{A_i, i \in \mathbb{N} \}$ are defined to be independent if $P(\cap_{k=1}^{n} A_{i_k}) = \prod_{k=1}^{n} P(A_{i_k})$ for any finite subset of $\{A_i, i \in \mathbb{N} \}$.
-
-1) We know $P(\cup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$ iff $\{A_i, i \in \mathbb{N}\}$ are disjoint, which is independent of the probability measure and depends purely on the relation between the sets. I was wondering whether it is possible to similarly characterize/interpret $\{A_i, i \in \mathbb{N}\}$ being independent purely from the relation between the sets, making the characterization as independent of the probability measure as possible, if being completely independent of it is impossible.
-2) Is the definition of $\{A_i, i \in \mathbb{N} \}$ being independent equivalent to $P(\cap_{i=1}^{\infty} A_{i}) = \prod_{i=1}^{\infty} P(A_{i})$? What is the purpose of considering every finite subset instead?
-3) Is the generalization of independence from probability spaces to general measure spaces meaningful?
-4) The only interpretations of independence I know are: the measure can be exchanged with products/intersections on independent sets, and, intuitively, independent events occur independently of each other. Are there other interpretations, especially in the general measure space setting?
-
-Thanks and regards!
-
-REPLY [2 votes]: Not sure your point 2) was addressed, so let me state that defining independence as suggested would lead to a trivial notion, quite different from independence as one wants it.
-To wit, any collection of sets $(A_i)_{i\ge1}$, finite or infinite, could be made part of a larger collection $(A_i)_{i\ge0}$ such that the condition stated in 2) holds: simply add $A_0=\emptyset$. One would be led to say that a sequence is independent while one of its subsequences is not.
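-For a concrete look at how degenerate that condition is, here is a quick numerical check (a sketch in plain Python; the toy space and the events are my own arbitrary choices, and the fractions module keeps the arithmetic exact):
-
-from fractions import Fraction
-
-# Toy probability space: Omega = {0,1,2,3} with the uniform measure.
-omega = {0, 1, 2, 3}
-
-def P(event):
-    return Fraction(len(event), len(omega))
-
-A = {0, 1}
-B = {0, 1}      # A and B coincide, so they are certainly not independent
-A0 = set()      # the empty event
-
-# Pairwise independence fails: P(A n B) = 1/2 while P(A)P(B) = 1/4 ...
-print(P(A & B), P(A) * P(B))
-
-# ... yet the product condition over the enlarged collection holds trivially,
-# since adding the empty set forces both sides to 0.
-print(P(A0 & A & B), P(A0) * P(A) * P(B))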
-So, the problem has nothing to do with infinite sequences: to define the independence of $A$, $B$ and $C$ by the single condition that $P(A\cap B\cap C)=P(A)P(B)P(C)$ (thus forgetting the supplementary conditions that $P(A\cap B)=P(A)P(B)$, $P(B\cap C)=P(B)P(C)$ and $P(A\cap C)=P(A)P(C)$) already leads to a notion too weak to model any kind of independence, since the supplementary conditions I just wrote can fail.<|endoftext|>
-TITLE: How can I find a basis for the kernel of a homomorphism on a free abelian group?
-QUESTION [6 upvotes]: Let $\phi\colon F\to G$ be a homomorphism of finitely generated abelian groups. If $F$ is free, then $\ker(\phi)$ is also free and thus admits a basis.
-Question: Is there a general procedure to find a basis for $\ker(\phi)$?
-
-As an example, consider the homomorphism $\phi\colon\mathbb Z^3\to (\mathbb Z/2)^2$ given by the matrix
-$$
-\begin{pmatrix}
-1&1&0\\0&1&1
-\end{pmatrix}.
-$$
-After playing around, I guess that the elements $\begin{pmatrix}0\\2\\0\end{pmatrix}$, $\begin{pmatrix}1\\1\\0\end{pmatrix}$ and $\begin{pmatrix}0\\1\\1\end{pmatrix}$ form a basis. Is this true? How could I see this in a systematic way?
-
-REPLY [2 votes]: Try also the Smith normal form.<|endoftext|>
-TITLE: What are the polar coordinates of the origin?
-QUESTION [18 upvotes]: In polar coordinates, the origin has $r = 0$, but $\theta$ is not unique.
-What sort of problems does this create, and how can I resolve them? For example, suppose an ant is wandering around a plane. Its speed is
-$$s = \sqrt{\dot{r}^2 + r^2 \dot{\theta}^2}$$
-but if the ant wanders through the origin, a quantity like $\dot{\theta}$ is undefined. In this particular case I can deal with it because the limit
-$$\lim_{r\to 0}\ r^2\dot{\theta}^2$$
-is defined. Similarly, if I want to find an area with integration, I'd need to look at the Jacobian
-$$\left|\begin{array}{cc}\partial r / \partial x & \partial r / \partial y \\ \partial \theta / \partial x & \partial \theta / \partial y\end{array}\right|$$
-which is not defined at the origin. Again I can get around it. If I want the area of the unit circle, for example, I can take
-$$\lim_{\epsilon \to 0} \int_{\theta = 0}^{2\pi}\int_{r=\epsilon}^1 r\ \textrm{d}r\textrm{d}\theta$$
-How do I know I can always work around things like this? If I receive some other coordinate system, how can I tell if the points with no unique coordinates are going to give me trouble?
-
-REPLY [3 votes]: You are correct that the Jacobian from Cartesian to polar coordinates is singular at the origin. In practice what this means is that the transformation loses one degree of freedom at the origin. Put in simple terms, there are directions in which you cannot infinitesimally move at the origin. For example, if your state is $r=0$ and $\theta = 0$, then a move in the $x$ direction of $dx$ is achieved by $dr$ in polar coordinates; however, a move in the $y$ direction of $dy$ requires a change of $dr$ in the $r$ direction but also requires a change of $\pi/2$ (finite) in the $\theta$ direction. Thus the degree of freedom is lost in the $y$ direction and only moves in the $x$ direction are possible. The same problem occurs in the lat-long parameterization of the sphere at the North and South poles.<|endoftext|>
-TITLE: When can we say that a theorem has been proven?
-QUESTION [9 upvotes]: I'm taking a Data Structures and Algorithms course for a CS program. The introductory material was all mathematics, mostly a series of formulas that we are to remember.
I can work through the formulas without a problem. However, proving theorems is also a part of the material.
-I've taken a fair number of math classes, but somehow I never came to terms with proofs. I know that this question may be somewhat broad, so I'm willing to attempt to be specific if necessary.
-Our book briefly explains proofs by induction, counterexample and contradiction. Regardless of the method, I don't know when the proof has been satisfied.
-
-REPLY [16 votes]: "Proof" really refers to a spectrum of related concepts. At one end is the notion of a formal proof. A formal proof is a sequence of statements, each of which is either an axiom or is deduced from the previous statements by some deduction rule. This notion of proof depends on the axioms and deduction rules one uses. If you want to get comfortable with this kind of proof, it might not be a bad idea to study some propositional logic, where the axioms and deduction rules are relatively simple. To get a little closer to actual mathematics, the next step is predicate logic.
-At the other end is the notion that mathematicians actually use in practice. Studying logic might lead you to believe that the notion of proof that mathematicians use is that of a formal proof in ZFC. However, many working mathematicians probably could not describe the axioms of ZFC when asked; in practice, this is not how mathematicians think. And for good reason: writing out formal proofs in ZFC is, for most people, a huge waste of time.
-The notion of proof that mathematicians use in practice is a social construction: certain types of deductions and assumptions are socially acceptable as obvious, and one uses these to construct a socially acceptable proof. I say this not to argue that the proofs that mathematicians describe in practice are invalid, but just to emphasize that what counts as a proof is a subtle question, and the answer depends on historical and cultural context. What counted as a proof for Euler is not the same as what counts as a proof now, for example.
-In your case, I think the best thing you could do to get a handle on what a proof is in practice is to read the beginning (and more, if you want) of Sipser's Introduction to the Theory of Computation, which contains the clearest introduction to mathematics I have ever seen.<|endoftext|>
-TITLE: Short exact sequence of exact chain complexes
-QUESTION [24 upvotes]: If $0 \rightarrow A_{\bullet} \rightarrow B_{\bullet} \rightarrow C_{\bullet} \rightarrow 0$ is a short exact sequence of chain complexes (of $R$-modules), then, whenever two of the three complexes $A_{\bullet}$, $B_{\bullet}$, $C_{\bullet}$ are exact, so is the third.
-
-This is exercise 1.3.1 in Weibel's Introduction to Homological Algebra.
-I was trying to tackle the exercise via diagram chasing in the associated commutative diagram (not reproduced here).
-However, in all three situations I eventually get stuck. I am starting to think that a diagram chase might not be the right approach here?
-Thank you.
-
-REPLY [6 votes]: For completeness, I think I can cover the case where $A_{\bullet}$ and $B_{\bullet}$ are exact myself now. I will try to stick to the notation used in the other posts.
-If $c \in C_n$ with $d(c)=0$, there is, by the surjectivity of $\beta_n$, a $b$ in $B_n$ such that $$\beta_n(b)=c.$$
-Then $$\beta_{n-1}(d (b)) = d(\beta_n (b)) = d(c) = 0$$ and thus, by exactness of the $(n-1)$-th row, there is an $a$ in $A_{n-1}$ with $\alpha_{n-1}(a) = d(b)$.
-For $a$ we have $$\alpha_{n-2}(d(a)) = d(\alpha_{n-1}(a)) = d(d(b)) = 0$$ and, since $\alpha_{n-2}$ is injective, $d(a)=0$ follows.
-Then, by exactness of $A_{\bullet}$, we have an $a'$ in $A_n$ such that $d(a')=a$.
-Consider $b-\alpha_n(a')$ in $B_n$. We have $$d(b - \alpha_n(a'))=d(b) - d(\alpha_n(a')) = d(b) - \alpha_{n-1}(d(a')) = d(b) - d(b) = 0,$$ thus by exactness of $B_{\bullet}$, there is a $b'$ in $B_{n+1}$ such that $d(b')=b-\alpha_n(a')$.
-Finally, $\beta_{n+1}(b')$ is the desired pre-image of $c$ in $C_{n+1}$, since $$d(\beta_{n+1}(b'))=\beta_n(d(b'))=\beta_n(b) - \beta_n(\alpha_n(a')) = c.$$
-Thanks, everyone!<|endoftext|>
-TITLE: Integral points on an elliptic curve
-QUESTION [15 upvotes]: Let's start with an elliptic curve in the form
-$$E : y^2 = x^3 + Ax + B, \qquad A, B \in \mathbb{Z}.$$
-I am wondering about integral points. I know that Siegel proved that $E$ has only finitely many integral points. I know Nagell and Lutz proved that every torsion point other than $\mathcal{O}$ has integer coordinates.
-Can any of you tell me anything else that is known? Here are a few questions I have come up with, but if there is any other interesting thing to say I would love to know it. Answers to any of these would be great.
-
-Can we say anything interesting about non-torsion integral points? (I don't have an idea of what "interesting" means exactly; maybe they have some special form or are related to torsion points somehow.)
-Are there bounds for the number of integral points, or for the biggest one?
-Do people keep track of records for most or biggest integral points?
-If so, any idea of what these records are?
-Anything else?
-
-Thanks
-
-REPLY [16 votes]: There are bounds for the size of the integral points on an elliptic curve over $\mathbb{Q}$. For example, there's Baker's famous result that if $A, B, C, D \in \mathbb{Z}$ are such that $\max{ \{|A|, |B|, |C|, |D| \} } \leq H$, then any integral point $(x, y) \in E(\mathbb{Q})$ satisfies
-$$\max{ \{ |x|, |y| \} } < e^{(10^6 H)^{10^6}}$$
-where $E: Y^2 = AX^3 + BX^2 + CX + D$; this is quoted from Theorem 5.4 in Chapter IX of Silverman's book The Arithmetic of Elliptic Curves.
-Also from Silverman's book, there's Conjecture 7.4 in that same chapter, which says the following.
-(Hall-Lang Conjecture) There exist constants $C$ and $r$ such that for every elliptic curve $E/\mathbb{Q}$ with Weierstrass equation
-$$Y^2 = X^3 + AX + B$$
-where $A, B \in \mathbb{Z}$, and for every integral point $(x, y) \in \mathbb{Z}^2$ with $(x, y) \in E(\mathbb{Q})$, the following inequality holds:
-$$|x| \leq C (\max{ \{ |A|, |B| \} })^r$$
-You can find lots of really interesting information in Chapter IX of Silverman's book.
-
-REPLY [14 votes]: The most important point to note about integral points is that the set of integral points is not a property of the elliptic curve, but rather of the specific Weierstrass model. If $(x,y)=(n,m)$ is an integral solution to $y^2=x^3+Ax+B$, then for any $t\in \mathbb{N}$, the substitution $(x',y')=(t^2x,t^3y)$ gives a new Weierstrass model of the same elliptic curve, but this new model will have the integral point $(t^2n,t^3m)$. So the hunt for records is not only nonsensical if you allow the curve to vary, it doesn't even make sense on a fixed elliptic curve.
-What is a worthwhile research question is to find optimal bounds for the heights of integral solutions on a given Weierstrass equation. That is still an active area of research.
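-To make the role of such a height bound concrete, here is the naive search that a proven bound would render provably complete (a Python sketch; the cutoff of $10^4$ is an arbitrary illustration of mine, not one of the proven bounds):
-
-from math import isqrt
-
-def integral_points(A, B, bound=10**4):
-    """Naively list the integral points on y^2 = x^3 + A*x + B with |x| <= bound."""
-    points = []
-    for x in range(-bound, bound + 1):
-        rhs = x**3 + A*x + B
-        if rhs < 0:
-            continue
-        y = isqrt(rhs)            # integer square root, exact for non-negative rhs
-        if y * y == rhs:
-            points.append((x, y))
-            if y != 0:
-                points.append((x, -y))
-    return points
-
-print(integral_points(0, -2))     # y^2 = x^3 - 2: finds (3, 5) and (3, -5)
-
-Without a proven height bound, of course, no finite search of this kind can certify that it has found every integral point.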
If you have that, then you can start with generators of the Mordell-Weil group and keep multiplying them until you exceed the bound. This gives an effective method to find all integral solutions (provided you can find the generators of the Mordell-Weil group). Sometimes much more elementary methods also get you there; see e.g. Adrián's and my cooperative answer to a similar question. Such elementary methods motivate the study of ideal class groups and units in rings of integers of number fields.<|endoftext|>
-TITLE: Adding and Removing Non-Compounded Percentages does not produce the same result?
-QUESTION [7 upvotes]: If I take the value 100 and I want to add a 10% tax to it and then a 7% tax to it, I am doing the following:
-$$\begin{align*}
-100 \times \left(1 + \frac{10}{100}\right) &= 110\\
-100 \times \left(1 + \frac{7}{100}\right) &= 107\\
-100 + 10 + 7 &= 117.
-\end{align*}$$
-If I remove 10% and then 7% from 117, I do not get 100, I get 98.7094.... My formula is set up like this:
-GrandTotal = 117
-AdjustedTotal = GrandTotal
-Value = GrandTotal - (GrandTotal/(1 + (percent/100)))
-AdjustedTotal = AdjustedTotal - Value
-If I am applying more than one percentage, such as 10% and 7%, I run into problems.
-Am I overcomplicating this, and what am I doing wrong? Basically, I want to take a value, add taxes to it, and then remove the same taxes to retrieve my original value.
-I added a question about this on stackoverflow here, but I am stuck at programming the non-compounding part and the mixture of both.
-
-REPLY [5 votes]: Let's take a simpler example first. If you increase 400 by 25% you get 500. If you reduce 500 by 25% you get 375. With equations this is
-$$400 \times (1+ 0.25) = 500$$
-$$500 \times (1- 0.25) = 375$$
-so it is clear that this multiplication does not work, and we should not expect it to, since $(1+ 0.25) \times (1- 0.25) = 0.9375$, not $1$. So to undo the increase (from 400 to 500), instead we should divide as follows:
-$$500 \div (1+ 0.25) = 400$$
-and since $1 \div (1+ 0.25) = (1 - 0.20)$, we would have
-$$500 \times (1 - 0.20) = 400$$
-so a 25% increase must be undone by a 20% decrease. You seem to understand this in your equation for Value.
-The second issue is that you are not compounding the taxes. If you did compound, then it would be correct that $$100 \times (1 + 0.10) \times (1 + 0.07) = 117.70$$
-and you would undo it with $$117.70 \div (1 + 0.10) \div (1 + 0.07) = 100$$
-but that does not work here.
-So instead you start with
-$$100 \times (1 +0.10 + 0.07) = 117$$
-and to undo it you do
-$$117 \div (1 +0.10 + 0.07) = 100$$
-
-REPLY [2 votes]: Essentially, you've computed your after-tax value using the formula
-$$\begin{align*}
-T &= P + (0.10)P + (0.07)P\\
-&= (1.17)P
-\end{align*}$$
-where $T$ is the after-tax price and $P$ is the before-tax price.
-If you want to get back to $P$ from $T$, just solve for $P$ in the above equation to get
-$$P = \frac{T}{1.17}$$<|endoftext|>
-TITLE: Inverse Image as the left adjoint to pushforward
-QUESTION [10 upvotes]: Assume $X$ and $Y$ are topological spaces and $f : X \to Y$ is a continuous map. Let ${\bf Sh}(X)$, ${\bf Sh}(Y)$ be the categories of sheaves on $X$ and $Y$ respectively. Modulo existence issues, we can define the inverse image functor $f^{-1} : {\bf Sh}(Y) \to {\bf Sh}(X)$ to be the left adjoint to the pushforward functor $f_{*} : {\bf Sh}(X) \to {\bf Sh}(Y)$, which is easily described.
-My question is this: Using this definition of the inverse image functor, how can I show (without explicitly constructing the functor) that it respects stalks?
i.e. is there a completely categorical reason why the left adjoint to the pushforward functor respects stalks?
-
-REPLY [5 votes]: Asked and answered on MathOverflow. :)<|endoftext|>
-TITLE: Counting Elements of Order 2 in $\mathbb{Z}^{\times}_{n}$
-QUESTION [7 upvotes]: The Euler totient function $\varphi(n) = |\mathbb{Z}^{\times}_{n}|$ is even on $\mathbb{N}_{>2}$, so it is feasible that the group $\mathbb{Z}_{n}^{\times}$ can support an element of order $2$. If $n$ is $4$, $p^{r}$ or $2 p^{r}$ for an odd prime $p$ and $r \geq 1$, then $\mathbb{Z}_{n}^{\times}$ is cyclic, and such an element of order 2 necessarily exists, since $\mathbb{Z}_{n}^{\times}$ is then isomorphic to the rotation group of a polygon with an even number of sides. What can be said about such order 2 elements in $\mathbb{Z}_{n}^{\times}$ for general $n > 1$? Do they always exist? If so, is there a way to count them as a function of $n$?
-Here is what I understand. By the Chinese Remainder Theorem, if $n = p_{1}^{r_{1}} \cdots p_{s}^{r_{s}}$, then $\mathbb{Z}^{\times}_{n} \simeq \mathbb{Z}^{\times}_{p_{1}^{r_{1}}} \times \cdots \times \mathbb{Z}^{\times}_{p_{s}^{r_s}}$. So in order to have $\mathbb{Z}^{\times}_{2}$ as a subgroup, $n$ would need to be of the form $2m$ for odd $m$.
-Update 1 Yes, for $n > 2$ the order $\varphi(n)$ is even, so order 2 elements always exist by Cauchy's Theorem (duh).
-Update 2 It seems that this is a non-trivial problem. There is no simple function of $n$ which counts the sequence $\{1, 1, 1, 1, 1, 3, 1, 1, 1, 3, 1, 1, 3, 3, 1, 1, 1, 3, 3, 1, 1, 7, \dots\}$ (A155828), which is the number of non-identity order 2 elements in $\mathbb{Z}^{\times}_{n}$ as a function of $n \geq 3$.
-I used the following Mathematica code to generate this sequence:
-${\tt Table[Count[Table[If[GCD[k, n] > 1, 0, Mod[k^2, n]], \{k, 2, n\}], 1], \{n, 3, 100\}]}$
-
-REPLY [5 votes]: If $n$ is an odd prime power, then the group $\mathbb{Z}_n^{\times}$ is cyclic, so it has exactly one element of order $2$.
-If $n=2^k$, then the group $(\mathbb{Z}_{2^k})^{\times}$ has:
-
-No elements of order $2$ if $k=1$.
-One element of order $2$ if $k=2$.
-Three elements of order $2$ if $k\geq 3$: the group of units is isomorphic to $C_2\times C_{2^{k-2}}$. The elements of exponent $2$ are of the form $(x,y)$ with $x$ and $y$ of exponent $2$ (two possibilities each); this gives $4$ elements of exponent $2$, and removing the identity we get three elements of order $2$.
-
-If $n= 2^k p_1^{a_1}\cdots p_r^{a_r}$ is a prime factorization of $n$, then
-$$\mathbb{Z}_n^{\times} \cong \mathbb{Z}_{2^k}^{\times}\times\mathbb{Z}_{p_1^{a_1}}^{\times}\times\cdots\times \mathbb{Z}_{p_r^{a_r}}^{\times}.$$
-An element $(a,b_1,\ldots,b_r)$ is of exponent $2$ if and only if each coordinate is of exponent $2$. There are $2$ possibilities for each $b_i$, $1\leq i\leq r$, plus the number of choices depending on $k$. Throwing away the identity, we get:
-
-$2^r - 1$ elements of order $2$ if $0\leq k\leq 1$.
-$2^{r+1}-1$ elements of order $2$ if $k=2$.
-$2^{r+2}-1$ elements of order $2$ if $k\geq 3$.
-
-
-See Sequence A060594 in the OEIS. It gives the number of solutions to $x^2\equiv 1 \pmod{n}$, which is one more than the number of elements of order $2$ in $\mathbb{Z}_n^{\times}$.<|endoftext|>
-TITLE: Interpretation of Hyperbolic Metric and Möbius Transforms
-QUESTION [5 upvotes]: I was wondering if someone could explain the interpretation of the following results.
In hyperbolic geometry, we say that lengths are invariant under the action of Mob($\mathbb{H}$) if, given any piecewise-differentiable curve $f:[a,b]\rightarrow \mathbb{H}$ and an element $\gamma \in$ Mob($\mathbb{H}$),
-$\mathrm{length}_\rho (f) = \mathrm{length}_\rho (\gamma \circ f)$,
-namely the arclength of a curve $f$ in $\mathbb{H}$ with respect to some metric $\rho(z)$ is the same even after $\gamma$ has been applied to it. So I am looking for some conditions on this metric $\rho$, and since I know that $\mathrm{length}_\rho (f) = \int_f \rho(z) |dz|$, after doing some manipulations I arrive at the condition
-$\rho(z) - \rho\Big(\gamma(z)\Big) |\gamma'(z)| = 0$.
-Now I know that the group of all Möbius transformations that preserve $\mathbb{H}$ is generated by $g(z) = az+b$, $a,b \in \mathbb{R}$, $a>0$, $h(z) = -\frac{1}{z}$ and $B(z) = -\bar{z}$.
-Now we look at the case when $\gamma$ is a translation by $b$ units, and we arrive at the fact that
-$\rho(z) = \rho(z+b)$, namely the metric $\rho(z)$ in $\mathbb{H}$ is invariant under any translation by a real number, and so depends only on $\operatorname{Im}(z)$. Doing the same with $\gamma(z) = az$ and combining with the previous condition, we get
-$\rho(z) = \frac{c}{\operatorname{Im}(z)}$, where $c$ is some constant. We can show that this form of $\rho(z)$ is consistent with $h(z) = -\frac{1}{z}$ and $B(z) = -\bar{z}$, but we'll assume that for now.
-So here's the question. Does this result mean that if lengths in the hyperbolic plane are to be invariant under Möbius transformations, then the metrics are precisely the ones whose element of arc length is $\frac{c}{\operatorname{Im}(z)} |dz|$?
-But then I can arrive at this metric for the hyperbolic plane without invoking Möbius transformations, namely by looking at the pseudosphere and seeing what happens if I try to map it onto something flat. A natural coordinate system to choose for the pseudosphere would be the coordinates $(\theta, \rho)$, namely latitude and longitude. Say that for a given latitude the radius of my pseudosphere is $R$; then the length subtended on the surface by an angle $d\rho$ is $R\, d\rho$. But if we map it to something flat then the length is just $d\rho$, so the lengths must have been "shrunk" by a factor of $R$. Further investigation reveals that I can arrive at the same hyperbolic metric without invoking Möbius transforms.
-What's the Connection?
-Ben
-
-REPLY [4 votes]: Let us begin with the pseudosphere: Classical differential geometry deals with smooth surfaces in $3$-space, where lengths are lengths of curves lying on the surface. The theory develops various notions of curvature, and in due course the question arises whether there is a surface of revolution having constant Gaussian curvature $\kappa=-1$. Indeed there is: You have to rotate a tractrix about the $z$-axis. The resulting spindle-like surface $S$ is called a pseudosphere but looks nowhere like an ordinary sphere. In particular it has a "rim" along the equator (one of the principal curvatures is $\infty$ there), and this implies that $S$ is incomplete: There are geodesics on $S$ that cannot be prolonged forever but brutally end on the rim. One more thing: $S$ has the topology of a cylinder extending to infinity at one end; therefore there are loops on $S$ which cannot be contracted to a point.
-Now the hyperbolic plane $H$: It is the simplest domain in ${\mathbb C}$ one can think of. Furthermore, in the realm of ${\mathbb C}$ one has the notion of a conformal map. So the next question is: What are the conformal self-maps of $H$?
By means of Schwarz' Lemma one finds the group $G$ of Möbius transformations you have listed. This group is richer than the group of Euclidean self-maps of $H$. Is there something (like angles, areas and the like) that is preserved by all maps $f\in G$? It turns out that one such invariant is the so-called hyperbolic line element $ds:={|dz|\over y}$, where $z=x+i y$, $y>0$. Now $H$ provided with this line element (or conformal metric $\rho(z)={1\over y}$) is what one nowadays calls a Riemannian manifold, on which we can do differential geometry as we did before on surfaces in $3$-space. In particular one can compute the Gaussian curvature of $H$. Since the Möbius group is transitive on $H$, the local geometry around each point is the same for all points, and therefore the Gaussian curvature is constant. The computation gives $\kappa\equiv-1$. But this $H$ has a much richer structure than the pseudosphere $S$; e.g., it is complete and admits all sorts of regular tessellations (whereas the ordinary sphere admits only the five Platonic solids).
-Since $S$ and $H$ both have constant Gaussian curvature $-1$ there has to be a connection between them. As you have indicated in your question, you have constructed a (local) isometry $\psi:\ S\to H_1$ between $S$ and a part $H_1$ of $H$ such that to a rotation $\phi\mapsto \phi+ \delta$ of $S$ corresponds the translation $z\mapsto z+\delta$ of $H_1$. You should look at the inverse $\psi^{-1}:\ H_1\to S$ of this map. It is defined on all of $H_1:=\{z\in H\ |\ y\geq1\}$ and makes $H_1$ the so-called universal cover of $S$, insofar as any two points $z$ and $z+2k\pi$, $\ k\in{\mathbb Z}$, of $H_1$ map onto the same point on $S$. What one has gained is this: The covering surface $H_1$ is simply connected whereas $S$ is not.<|endoftext|>
-TITLE: Conjugacy Classes of subgroups in GL(n,p)
-QUESTION [6 upvotes]: What are the conjugacy classes of subgroups of order $p$ in $GL(n,p)$? (Are all subgroups of order $p$ conjugate in $GL(n,p)$?)
-
-REPLY [2 votes]: (Same as Jack's, different normal form!)
-If $A\in\operatorname{GL}(n,p)$ has order $p$, then $A^p-I=0$. It follows that $A$ satisfies the polynomial $X^p-1$, so the minimal polynomial $m_A$ divides $X^p-1=(X-1)^p$. Since every eigenvalue of $A$ is a root of $m_A$, we conclude that the only eigenvalue of $A$ (in any algebraic closure of $\mathbb F_p$) is $1$.
-It follows at once that the Jordan form $J$ of $A$ and a matrix conjugating $A$ to $J$ are in fact both elements of $\operatorname{GL}(n,p)$. If we want to count the elements of $\{A\in\operatorname{GL}(n,p):A^p=I\}$ up to conjugation in $\operatorname{GL}(n,p)$, then, it is enough to count the conjugacy classes of matrices of $\operatorname{GL}(n,p)$ in Jordan canonical form with minimal polynomial dividing $(X-1)^p$.
-This is a pretty simple exercise. The answer is: the number of partitions of $n$ with parts not larger than $p$.
-Hmm, this is counting conjugacy classes of matrices and not of subgroups...*<|endoftext|>
-TITLE: Does anyone know of any good ways to get good at algebra without as much "grind" as doing hundreds of questions a night
-QUESTION [6 upvotes]: I'm coming to a point in college where I can't avoid my math classes any longer. I need to get better at algebra so I don't flunk out of the class when I take it. However, I've never been able to get good at it without taking forever on one problem. The main problem I always seem to have is just memorizing the rules for various things such as inequalities.
Does anyone know of any good methods to learn and get better at remembering the rules of these equations without having to do hundreds of questions a night? Please note that I know homework will involve doing hundreds of problems a night, and I can deal with that. However, my main issue is that I want to get to a point over the summer where I am good enough not to take ten minutes on one question, without having to spend all day, multiple days a week, doing math questions.
-Also, if anyone can provide me with ideas that involve computer programming in the process, that would be great, as I do enjoy computer programming, and it would make learning how to do the equations more fun.
-Thanks,
-Flyboy
-
-REPLY [5 votes]: A certain amount of repetition is necessary for proficiency. It seems to me that it is better to have a good understanding of how to solve a few problems of a given type rather than to mindlessly solve billions and billions of problems. By this I mean: How does each step in the solution help you get closer to a solution for the problem? Is there a reason the steps are done in this order, or could they be done in some other order?
-It does not matter so much how long the first problem of a given type takes. (This supposes we are talking about minutes, not years.) What matters is that once you start on a new type of problem you have the means to get proficient in a reasonable length of time.
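-Since you mentioned enjoying programming: writing a small drill generator yourself is one way to combine the two, because producing a problem together with its answer forces you to state the rule precisely. A minimal sketch in Python (the problem format and number ranges are my own invention, not from any textbook):
-
-import random
-
-def inequality_drill():
-    """Generate a random inequality a*x + b < c together with its solution,
-    as a check on the sign-flip rule when dividing by a negative number."""
-    a = random.choice([-5, -3, -2, 2, 3, 5])   # never zero
-    b = random.randint(-10, 10)
-    c = random.randint(-10, 10)
-    # a*x + b < c  <=>  x < (c-b)/a if a > 0, but x > (c-b)/a if a < 0
-    relation = '<' if a > 0 else '>'
-    return f"Solve: {a}x + {b} < {c}", f"x {relation} {(c - b) / a}"
-
-problem, answer = inequality_drill()
-print(problem)
-print("One correct form:", answer)
-
-The point is not the program itself: encoding the rule in code is exactly the kind of understanding-over-repetition described above.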