diff --git "a/stack-exchange/math_stack_exchange/shard_102.txt" "b/stack-exchange/math_stack_exchange/shard_102.txt"
deleted file mode 100644
--- "a/stack-exchange/math_stack_exchange/shard_102.txt"
+++ /dev/null
@@ -1,5460 +0,0 @@
-TITLE: Coordinate-free definition of elementary divisors
-QUESTION [6 upvotes]: There is a general mantra in math which says that what is independent of bases shall be defined independently of bases. Well, it is well known that the elementary divisors of a linear map $M\xrightarrow{\ \ f\ \ }N$ of finitely generated free modules over a principal ideal domain $R$ are independent of bases. So I wonder whether there is a simple definition which does not mention chosen bases of $M$ and $N$.
-Perhaps such a characterization would help to make the theory more streamlined. Of course it should be possible to prove that, given elementary divisors $\alpha_1,\dots,\alpha_s$ of $f$, there are bases of $M$ and $N$ such that the corresponding matrix of $f$ has Smith normal form whose entries are precisely the $\alpha_i$, and conversely that, given bases such that the corresponding matrix has Smith normal form, its entries are elementary divisors.
-
-REPLY [3 votes]: Let $T$ be the torsion submodule of $N/\operatorname{im}(f)$ and let $p\in R$ be a prime. Then the valuation of $\alpha_i$ with respect to $p$ (i.e., the number of times that $\alpha_i$ is divisible by $p$) is equal to the greatest $d>0$ such that $p^{d-1}T/p^{d}T$ has dimension $> s-i$ as a vector space over the field $R/(p)$ (or $0$ if no such $d$ exists). This is easy to see from the fact that $T\cong\bigoplus_i R/(\alpha_i)$ and $\alpha_i\mid \alpha_{i+1}$ for all $i$. Since the divisors $\alpha_i$ are only defined up to units, this completely determines them.<|endoftext|>
-TITLE: If a matrix is triangular, is there a quicker way to tell if it can be diagonalized?
-QUESTION [10 upvotes]: I hope it is alright to ask something like this here; I am having trouble keeping up with all the special cases, and my book is being kind of vague.
-I know how to do the standard method of finding diagonal matrices, but I know that triangular matrices are special and my book makes a connection, but it is hard to tell what it is.
-Thanks.
-
-REPLY [3 votes]: There is an easy necessary and sufficient condition. Let $c_1,\ldots,c_k$ be the distinct diagonal entries of the triangular matrix $A$ (so if there are repeated values, list each one just once). Then $A$ is diagonalisable if and only if $(X-c_1)\ldots(X-c_k)$ is an annihilating polynomial for$~A$ (in which case it will in fact be the minimal polynomial), in other words if the matrix product $(A-c_1I)\ldots(A-c_kI)$ is the zero matrix.
-Proof. If $A$ is diagonalisable, then its minimal polynomial is $(X-\lambda_1)\ldots(X-\lambda_k)$ where $\lambda_i$ runs through the distinct eigenvalues of $A$, and for a triangular matrix the set of eigenvalues equals the set of diagonal entries (by an easy calculation of the characteristic polynomial). Conversely, since $(X-c_1)\ldots(X-c_k)$ has simple roots by construction, if it is an annihilating polynomial for$~A$, then $A$ is diagonalisable.
-For instance, for the matrix
-$$ A = \pmatrix {1 & a & b \\ 0 & 2 & c \\ 0 & 0 & 1 } $$
-mentioned in other answers, compute
-$$
- (A-I)(A-2I) =
- \pmatrix {0 & a & b \\ 0 & 1 & c \\ 0 & 0 & 0 }
- \pmatrix {-1 & a & b \\ 0 & 0 & c \\ 0 & 0 & -1 }
-= \pmatrix {0 & 0 & ac-b \\ 0 & 0 & 0 \\ 0 & 0 & 0 },
-$$
-and conclude that $A$ is diagonalisable iff $ac-b=0$.
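-
-As a quick sanity check, the criterion is easy to verify with a computer algebra system. Here is a minimal SymPy sketch for the example above (a sketch assuming SymPy is available; $a,b,c$ are the symbolic entries of $A$):
-
-    # Check the criterion: A (with distinct diagonal entries 1 and 2) is
-    # diagonalisable iff (A - I)(A - 2I) is the zero matrix.
-    from sympy import Matrix, symbols, eye, simplify
-
-    a, b, c = symbols('a b c')
-    A = Matrix([[1, a, b],
-                [0, 2, c],
-                [0, 0, 1]])
-    print(simplify((A - eye(3)) * (A - 2*eye(3))))
-    # Matrix([[0, 0, a*c - b], [0, 0, 0], [0, 0, 0]]): zero iff a*c - b = 0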
- -In the special case that equal entries are always consecutive on the diagonal, then the given condition easily reduces to the condition that for each such sequence of equal diagonal entries, they form the diagonal of a scalar (i.e., multiple of the identity) diagonal submatrix, in other words that the above-diagonal entries in the block "spanned" by those equal diagonal entries are all zero. This is necessary because if the diagonal entry in question is $c_i$, then the corresponding diagonal submatrix of the product $(A-c_1I)\ldots(A-c_kI)$ will only be zero if the submatrix from the factor $A-c_iI$ is already zero: the other factors have an invertible (triangular) submatrix at that location. It is sufficient by a computation of the product $(A-c_1I)\ldots(A-c_kI)$, where the $c_i$ are ordered by their occurrenc on the diagonal of $A$, as block matrices (subdivided in the obvious way according to groups of equal diagonal entries): by induction on$~i$, the product of the first $i$ factors has its first $i$ "block columns" entirely zero, so ($i=k$) the whole product is zero. -The case where all diagonal entries are distinct is a special case of this special case, and since the diagonal blocks now are $1\times1$ in size (without above-diagonal entries), the matrix will always be diagonalisable.<|endoftext|> -TITLE: What is a projective object in $\rm Set$? -QUESTION [6 upvotes]: What property of a set in $\sf{ZF}$ is equivalent to its being a projective object in the category $\rm Set$? Since all sets are projective assuming $\sf AC$ my guess is that it is equivalent to well-orderablility, but a direct transcription of the definition seems different from the definition of well-orderability, so I'm not certain. -If $X$ is a projective object in $\rm Set$, then this means that for every surjection $e:P\to Q$ and every function $f:X\to Q$ there is a function $h:X\to P$ such that $e\circ h=f$. - -REPLY [13 votes]: A set $Q$ is projective iff the axiom of choice is true for collections of $|Q|$ sets. Indeed, if $e:P\to Q$ is a surjection, then the fibers of $e$ are a collection of $|Q|$ nonempty sets, and a choice function for this collection is the same as a right inverse to $e$. Conversely, if $S$ is a collection of $|Q|$ nonempty sets and $f:S\to Q$ is a bijection, then let $P$ be the disjoint union of the elements of $S$ and let $e:P\to Q$ be the map induced by $f$. Then $e$ is surjective, and a right inverse for $e$ gives a choice function for $S$. -So in particular, for instance, finite sets are always projective, and countably infinite sets are projective iff the countable axiom of choice is true.<|endoftext|> -TITLE: Gradient vector field and level sets -QUESTION [5 upvotes]: So assume we have a complete Riemannian manifold $M$, and $f\in C^\infty(M)$. Suppose that $|\nabla f|=1$. Then if we let $p\in f^{-1}(0)$ does that imply that $f(\exp_p(t\nabla f))=t$. I asked an earlier question regarding the Cheeger-Gromell splitting theorem and the Busemann function, and it more or less reduces to this question. - -REPLY [7 votes]: Suppose $|\nabla f| = 1$. This implies -$$ g( \nabla f, \nabla f) = 1 $$ -which implies -$$ \nabla g(\nabla f, \nabla f) = 0 $$ -Using compatibility of the Levi-Civita connection we have (now using index notation to compress notation a bit) -$$ \nabla_a f \nabla_b \nabla^a f = 0 $$ -Using that $f$ is scalar, and Levi-Civita is torsion free, we have that $\nabla_b\nabla_a f = \nabla_a \nabla_b f$. 
So we conclude finally
-$$ \nabla_a f \nabla^a\nabla_b f = 0 $$
-or in index-free notation
-$$ \nabla_{\nabla f} \nabla f = 0 $$
-In particular the gradient vector field is geodesic.
-This means that the flow w.r.t. $\nabla f$ and the geodesic flow starting from $\nabla f(p)$ coincide. Hence it is true that $f(\exp_p (t\nabla f)) = t$.<|endoftext|>
-TITLE: Unramified cocycles and the Selmer group of an elliptic curve
-QUESTION [8 upvotes]: In Silverman's book on elliptic curves, he gives a procedure to compute the Selmer group of an elliptic curve $E$ relative to an isogeny $\phi:E\to E'$. I am confused about one step in the discussion. The particular point in the book where I am confused is Remark X.4.4.5.
-Let $K$ be a number field, let $M_K$ be the set of places of $K$, and let $S$ be a finite set of places of $K$ which includes all archimedean places, all places where $E$ has bad reduction, and all places dividing the degree of $\phi$.
-Let $E[\phi]$ denote the kernel of $\phi$.
-First we have that $S^{(\phi)}(E/K)\subseteq H^1(G_{\bar{K}/K},E[\phi];S)$ (where the $S$ means that we consider only cocycles unramified outside of $S$).
-To check if an element of $H^1(G_{\bar{K}/K},E[\phi];S)$ is in $S^{(\phi)}(E/K)$ we must see if its image in $\prod_{v\in M_K} WC(E/K_v)$ is trivial (where $WC$ is the Weil-Châtelet group).
-Silverman asserts that for any element in $H^1(G_{\bar{K}/K},E[\phi];S)$ it suffices to check only that the image in $WC(E/K_v)$ is trivial for $v\in S$.
-This is the point at which I am confused. Why is the image automatically trivial for $v\notin S$? Does the unramifiedness somehow imply this? I cannot find anything in the book that addresses this, but maybe I am overlooking something.
-Any help or direction would be much appreciated.
-
-REPLY [2 votes]: I had the same question. I've solved the problem partially, so I share my idea.
-I extend the set $S$ to $S' = S \cup \{v \in M_K : E' \mbox{ has bad reduction at } v\}$.
-I use the same notation as Silverman's AEC.
-Let $v \notin S'$, and consider the localization at $v$.
-A commutative diagram of exact sequences of $G_{\bar{k_v}/k_v} = G_v/I_v$-modules
-\begin{eqnarray}
-0 \rightarrow &E[\phi]^{I_v}& \rightarrow &E^{I_v}& \rightarrow &E'^{I_v}& \rightarrow 0\\
-&\downarrow&&\downarrow&&\downarrow&\\
-0 \rightarrow &\tilde{E_v}[\phi]& \rightarrow &\tilde{E_v}& \rightarrow &\tilde{E'_v}& \rightarrow 0
-\end{eqnarray}
-induces a commutative diagram of exact sequences of cohomology
-\begin{eqnarray}
-0 \rightarrow &E'(K_v)/\phi(E(K_v))& \rightarrow &H^1(G_v/I_v,E[\phi]^{I_v})& \rightarrow &H^1(G_v/I_v,E^{I_v})[\phi]& \rightarrow 0\\
-&\downarrow& &\downarrow& &\downarrow&\\
-0 \rightarrow &\tilde{E_v}'(k_v)/\phi(\tilde{E_v}(k_v))& \rightarrow &H^1(G_{\bar{k_v}/k_v},\tilde{E_v}[\phi])& \rightarrow &H^1(G_{\bar{k_v}/k_v},\tilde{E_v})[\phi]& \rightarrow 0.
-\end{eqnarray}
-As in the proof of Theorem X.4.2 in AEC, $E[\phi]^{I_v} = E[\phi]$ and $E[\phi] \cong \tilde{E}[\phi]$ as $G_v/I_v$-modules.
-Therefore the center vertical arrow in the above diagram of cohomology is an isomorphism.
-The left vertical arrow is surjective, so, by the five lemma, the right vertical arrow is an inclusion.
-$H^1(G_{\bar{k_v}/k_v},\tilde{E_v})$ can be regarded as a Weil-Châtelet group, so it is finite.
-The map of cohomology
-$$H^1(G_{\bar{k_v}/k_v},\tilde{E_v}) \xrightarrow{\phi} H^1(G_{\bar{k_v}/k_v},\tilde{E_v})$$
-is surjective between the same finite sets, so it is also injective.
-Therefore $H^1(G_{\bar{k_v}/k_v},\tilde{E_v})[\phi] = 0$.
-
-And from the injectivity of the right vertical arrow of the cohomology diagram,
-$H^1(G_v/I_v,E^{I_v})[\phi] = 0$.
-Now we obtain a commutative diagram of cohomology from an inflation map
-\begin{eqnarray}
-0 \rightarrow &E'(K_v)/\phi(E(K_v))& \rightarrow &H^1(G_v/I_v,E[\phi]^{I_v})& \rightarrow &H^1(G_v/I_v,E^{I_v})[\phi] (= 0)& \rightarrow 0\\
-&\downarrow& &\downarrow& &\downarrow&\\
-0 \rightarrow &E'(K_v)/\phi(E(K_v))& \rightarrow &H^1(G_v,E[\phi])& \rightarrow &H^1(G_v,E)[\phi]& \rightarrow 0,
-\end{eqnarray}
-and an inflation-restriction sequence
-$$0 \rightarrow H^1(G_v/I_v,E[\phi]^{I_v}) \rightarrow H^1(G_v,E[\phi]) \rightarrow H^1(I_v,E[\phi]).$$
-Let $\xi \in H^1(G_{\bar{K}/K}, E[\phi]; S')$ and let $\bar{\xi}$ be the image of $\xi$
-in $H^1(G_v,E[\phi])$.
-Then by definition $Res(\bar{\xi}) \in H^1(I_v,E[\phi])$ is $0$.
-So, by the inflation-restriction sequence, $\bar{\xi}$ comes from
-$H^1(G_v/I_v,E[\phi]^{I_v})$, and by the above diagram its image in $H^1(G_v,E)[\phi]$ is therefore $0$.
-This means the image of $\xi$ in $WC(E/K_v)$ is trivial.<|endoftext|>
-TITLE: What are Different Approaches to Introduce the Elementary Functions?
-QUESTION [57 upvotes]: Motivation
-We all get familiar with elementary functions in high school or college. However, as the system of learning is not that well integrated, we have learned them in different ways, and the connections between these ways are mostly not clarified by teachers. When I read the calculus book by Apostol, I found out that one can define these functions in a systematic way, purely analytically. The approach used in the book, with some minor changes, is as follows.
-$1.$ Firstly, introduce the natural logarithm function by $\ln(x)=\int_{1}^{x}\frac{1}{t}dt$ for $x>0$. Accordingly, one defines the logarithm function by $\log_{b}x=\frac{\ln(x)}{\ln(b)}$ for $b>0$, $b \ne 1$ and $x>0$.
-$2.$ Then introduce the natural exponential function as the inverse of the natural logarithm, $\exp(x)=\ln^{-1}(x)$. Afterwards, introduce the exponential function $a^x=\exp(x\ln(a))$ for $a>0$ and real $x$. Interchanging $x$ and $a$, one can introduce the power function $x^a=\exp(a\ln(x))$ for $x \gt 0$ and real $a$.
-$3.$ Next, define the hyperbolic functions $\cosh(x)$ and $\sinh(x)$ by using the exponential function
-$$\matrix{
-  {\cosh (x) = {{\exp (x) + \exp ( - x)} \over 2}} \hfill & {\sinh (x) = {{\exp (x) - \exp ( - x)} \over 2}} \hfill \cr
-  } $$
-and then defining the other hyperbolic functions. Consequently, one can define the inverse hyperbolic functions.
-$4.$ Finally, the author gives three ways for introducing the trigonometric functions.
-$\qquad 4.1-$ Introducing the $\sin x$ and $\cos x$ functions by the following properties
-\begin{align*}{}
-\text{(a)}\,\,& \text{The domain of $\sin x$ and $\cos x$ is $\mathbb R$} \\
-\text{(b)}\,\,& \cos 0 = \sin \frac{\pi}{2}=1,\, \cos \pi=-1 \\
-\text{(c)}\,\,& \cos (y-x)= \cos y \cos x + \sin y \sin x \\
-\text{(d)}\,\,& \text{For $0 < x < \frac{\pi}{2}$ we have $0 < \cos x < \frac{\sin x}{x} < \frac{1}{\cos x}$}
-\end{align*}
-$\qquad 4.2-$ Using formal geometric definitions employing the unit circle.
-$\qquad 4.3-$ Introducing the $\sin x$ and $\cos x$ functions by their Taylor series.
-and then defining the other trigonometric ones and the inverse trigonometric functions.
-From my point of view, the approach is good, but it seems a little disconnected, as the relation between the trigonometric and exponential functions is not illustrated; the author insisted on staying in the real domain when introducing these functions.
Also, exponential and power functions are just defined for positive real numbers $a$ and $x$, while they can be extended to negative ones.
-
-
-Questions
-$1.$ How many other approaches are used for this purpose? Are there many or just a few? Is there some list for this?
-$2.$ Would you please explain just one of the other heuristic ways to introduce the elementary functions analytically, with appropriate details?
-
-
-Notes
-
-Historical remarks are welcome as they provide good motivation.
-Answers which connect more advanced (not too elementary) mathematical concepts to the development of elementary functions are really welcome. A nice example of this is the answer by Aloizio Macedo given below.
-It is hard to choose the best answer among these nice answers, so I decided to choose none. I just gave the bounties to the ones that are more compatible with the studies from high school. However, please feel free to add new answers including your own ideas or whatever you think is interesting, so we can have a valuable list of different approaches recorded here. This can serve as a nice guide for future readers.
-
-
-Useful Links
-
-Here is a link to a paper by W. F. Eberlein suggested in the comments. The paper deals with introducing the trigonometric functions in a systematic way.
-There are six pdfs created by Paramanand Singh, who has an answer below. They discuss some approaches for introducing logarithmic, exponential and circular functions. I have combined them all into one pdf which can be downloaded from here. I am sure that it will be useful.
-
-REPLY [7 votes]: I quite like the approach taken in some Russian and Bulgarian books, e.g. Fundamentals of Mathematical Analysis by V.A. Ilyin and E.G. Poznyak and Mathematical Analysis by Ilin, Sadovnichi and Sendov. The benefit of the approach is that it uses continuity, monotonicity and (mostly) elementary concepts, which students should know from high school.
-We start with the exponential function. For $a > 0 \text{ and } x = \frac{p}{q} \in \mathbb{Q}$ we know what $a^x = a^\frac{p}{q}$ is (of course, we should have proven the existence of $n$-th roots already). Now prove that this function (for now defined on the rational numbers only) is monotonic. At the end of the day, for $x \in \mathbb{R}$ define $a^x$ as the unique number $y$ with the following property: for all $\alpha < x < \beta$ with $\alpha, \beta \in \mathbb{Q}$ we have $a^\alpha \leq y \leq a^\beta$. In other words, $a^x$ is defined by "extending via monotonicity".
-Now we can define $\log_a (x)$ as the inverse function of $a^x$, that is: the number $t$ such that $a^t = x$. This is exactly the definition students should have been given in high school, so it should come as no surprise.
-Next follow the trigonometric functions: in high school they are often defined like that. To make the definition rigorous we can use functional equations, in a manner similar to what the OP wrote. A student should already know the $\sin(\alpha + \beta)$ and $\cos(\alpha + \beta)$ formulae, and $\sin^2 x + \cos^2 x = 1$, so it should be fairly easy to comprehend that these properties sort of define $\sin$ and $\cos$.
The definition is:
-There exists a unique pair of functions $f$ and $g$, defined over the real numbers, and satisfying the following conditions:
-$1.$ $f(\alpha + \beta) = f(\alpha)g(\beta) + f(\beta) g(\alpha)$
-$2.$ $g(\alpha + \beta) = g(\alpha)g(\beta) - f(\alpha)f(\beta)$
-$3.$ $f^2(x) + g^2(x) = 1$
-$4.$ $f(0) = 0,\ g(0) = 1,\ f(\frac{\pi}{2}) = 1,\ g(\frac{\pi}{2}) = 0$
-We define $\sin(x) = f(x)$ and $\cos(x) = g(x)$.
-After that, we can establish the known properties of the trigonometric functions and find their Taylor series. At the end, one notices the relation $e^{ix} = \cos(x) + i \sin(x)$.
-The number $e$
-We define the number $e:= \lim_{n\to \infty} (1 + \frac{1}{n})^n$. After the definition of $a^x$ for real $x$ we can show that $\lim _{h \to 0} (1 + h)^{\frac{1}{h}} = e$. When we try to find the derivative of $\log_a(x)$ we get:
-$$[\log_a(x)]' = \lim_{h \to 0} \frac{1}{x} \log_a \left( 1 + \frac{h}{x}\right)^\frac{x}{h}$$
-By the continuity of the logarithm and the above limit we get $[\log_a(x)]' = \frac{\log_a (e)}{x}$. Thus, the natural base for the $\log$ is $e$. Because $a^x$ is the inverse of $\log_a(x)$, it is a simple calculation to show that $(a^x)' = a^x \log_e (a)$, and therefore the natural choice of the number $a$ is $e$.
-Remarks: The above definitions use only continuity and monotonicity, no derivatives or integrals. For this reason, they are (arguably) more natural than definitions via differential equations: I highly doubt there is a student who has good intuition for the differential equation $f' = f$ but doesn't have an idea what $a^x$ is. The main disadvantage of this approach is the length: it takes around $15$ pages without the proof of the existence of $\sin$ and $\cos$, and the proof itself is around $10$ pages more.<|endoftext|>
-TITLE: What is the largest abelian subgroup in $S_n$?
-QUESTION [9 upvotes]: I am about to complete my first course in group theory. I have read Dummit and Foote into chapter 5.
-I know that I can always find an abelian subgroup isomorphic to $C_{k_1} \times C_{k_2} \times ... \times C_{k_t}$ in $S_n$ where $n = k_1 + k_2 + ... + k_t$. Specifically, the permutation group $G = \langle(1 \ 2\ ...\ k_1),(k_1+1 \ ...\ k_1+k_2),\dots,(n-k_t+1\ ...\ n)\rangle$.
-There are many ways to express an isomorphism type of an abelian group as a direct product of cyclic factors. If I express the isomorphism type in its elementary divisor (primary) decomposition, this seems to minimize $n$. I am somewhat familiar with Landau's function (Sloane's A000793) which gives the maximal lcm over all the partitions of $n$. I think that this sequence gives the answer to my question, but there are no comments in the database regarding this. At any rate, I am unsure and would like some help on this question.
-
-REPLY [7 votes]: Since all direct factors contribute, you don't need the lcm of the orders. Also note that $|C_2\times C_2\times\cdots\times C_2|$ ($k$ factors, on $2k$ points) has order $2^k\ge 2k$, so it is beneficial to split any cycle of length $>3$ into groups of $2$-cycles. In fact (as Derek Holt observed - $2^3<3^2$ on 6 points) splitting into 3-cycles gives an even larger order, but going to larger cycle lengths will produce worse results.
-This means that the largest subgroup is the direct product of copies of $C_3$ with $2$-cycles thrown in on the remaining points (i.e.
if $n\equiv 0\pmod{3}$ there is no $2$-cycle, if it is $\equiv 2$ there is $1$ two cycle, and if it is $\equiv 1$ we remove one 3-cycle and write two $2$-cycles (or a $V_4$, or a 4-cycle, as $4=2^2$) on the 3 points plus the remaining point).<|endoftext|> -TITLE: Lecture Notes for Hatcher's Algebraic Topology -QUESTION [27 upvotes]: Hatcher's book Algebraic Topology is a standard text in the subject, and I was wondering if there were any lecture notes or even syllabi to accompany it. I am mostly concerned with sequencing, meaning the most useful order for a reader to go through the book the first time. Any additional resources for one going through Hatcher would also be welcome, like hints on exercises. Ideally this would be for a more elementary course in algebraic topology, although I have already completed from lecture 24 on of "introduction to Algebraic Topology" lectures by N. J. Wieldberger (found on youtube), and so I have had a basic foundation in some of the concepts, however this seems at a much lower level than Hatcher. -EDIT: I found MIT's Algebraic Topology and it is a good example of what I am looking for. This one focuses on the homology and cohomology sections of the book, and excludes the homotopy sections. I was looking for more like it, but perhaps focusing on other parts of the book? - -REPLY [23 votes]: Here are some typed up lecture notes from a few people: -$\textbf{1. Arun Debray}$ -https://www.ma.utexas.edu/users/a.debray/lecture_notes/ -He has course notes for a large selection of courses. In particular, for algebraic topology: -https://www.ma.utexas.edu/users/a.debray/lecture_notes/215b_notes.pdf -follows Soren Galatius' course found at: -http://math.stanford.edu/~galatius/215B15/ -$\textbf{2. Evan Chen}$ -http://www.mit.edu/~evanchen/coursework.html -He has notes for various courses. In particular, his algebraic topology notes (which don't follow Hatcher) seem to be at a more elementary level: -http://www.mit.edu/~evanchen/notes/SJSU275.pdf -$\textbf{3. Akhil Mathew}$ -http://math.uchicago.edu/~amathew/notes.html -He has notes for algebraic topology and various other things. -$\textbf{4. Zev Chonoles}$ -http://math.uchicago.edu/~chonoles/expository-notes/ -He has notes for Benson Farb's AT class (in fact, you may look at Farb's notes by pressing the "+") -$\textbf{5. Kiyoshi Igusa}$ -http://people.brandeis.edu/~igusa/Courses.html -Has lecture notes to courses he taught. In particular, there's an algebraic topology course here: -http://people.brandeis.edu/~igusa/Math121b/Math121b.htm -(there are homotopy theory notes too) -$\textbf{6. Anton Geraschenko}$ -http://stacky.net/wiki/index.php?title=Course_notes -Lots of notes for various Berkeley classes (algebraic topology included). -$\textbf{7. Alvin Jin}$ -https://sites.google.com/view/alvinjin/exposition -I've begun typing up notes for various things. - -Other people of interest for other courses/topics (while I'm at it): -$\textbf{1. Keith Conrad}$ -http://www.math.uconn.edu/~kconrad/blurbs/ -Lots of nice notes. -$\textbf{2. Brian Conrad}$ -http://math.stanford.edu/~conrad/ -Lots of nice lecture notes (if you click on his courses and click on "handouts") -$\textbf{3. Moor Xu}$ -https://math.berkeley.edu/~moorxu/oldsite/ -Some course notes typed up. -$\textbf{4. Robert Ash}$ -http://www.math.uiuc.edu/~r-ash/ -Nice set of books with solutions provided. 
-
-I might add more later as they come to mind...<|endoftext|>
-TITLE: Let $f$ be a continuous and open map from $\mathbb R$ to $\mathbb R$. Prove that $f$ is monotonic.
-QUESTION [5 upvotes]: Let $f$ be a continuous and open map from $\mathbb R$ to $\mathbb R$. Prove that $f$ is monotonic.
-Suppose that $f$ is not monotonic. Then there exist $a,b$ with $a>b$ such that $f(a)<f(b)$, and there exist $c,d$ with $c>d$ such that $f(c)>f(d)$. Since $f$ is continuous, there is no break in the graph of $f$. But how should I use the fact that $f$ is open? Any help on how I should do the proof?
-
-REPLY [7 votes]: Here is a direct proof:
-
-$f$ has no local maximum or minimum
-
-If $f$ has a local maximum at $x=a$, then there is $\delta >0$ such that $f(x)\le f(a)$ for all $x\in(a-\delta, a+\delta)$. Thus, $f$ maps the open interval $(a-\delta, a+\delta)$ to an interval of the form $(y,f(a)]$ or $[y,f(a)]$, neither of which is open. Alternatively, $f(a)$ is in the image of $(a-\delta, a+\delta)$, but no interval around $f(a)$ is totally contained in the image.
-
-$f$ is injective
-
-If $f(a)=f(b)$ with $a<b$, then $f$ attains a maximum and a minimum on $[a,b]$. If $f$ is constant on $[a,b]$, every interior point is a local maximum; otherwise the maximum or the minimum differs from $f(a)=f(b)$ and is attained at an interior point of $[a,b]$. Either way $f$ has a local extremum, contradicting the previous step. Finally, a continuous injective map $\mathbb R\to\mathbb R$ is monotonic by the intermediate value theorem.<|endoftext|>
-TITLE: Help me evaluate this infinite sum
-QUESTION [5 upvotes]: I have the following problem:
-For any positive integer $n$, let $\langle n \rangle$ denote the integer nearest to $\sqrt n$.
-(a) Given a positive integer $k$, describe all positive integers $n$ such that $\langle n \rangle = k$.
-(b) Show that $$\sum_{n=1}^\infty{\frac{2^{\langle n \rangle}+2^{-\langle n \rangle}}{2^n}}=3$$
-My progress: The first one is rather easy. As $$\left( k-\frac{1}{2} \right) < \sqrt n < \left( k+\frac{1}{2} \right) \implies \left( k-\frac{1}{2} \right)^2 < n < \left( k+\frac{1}{2} \right)^2 \implies \left( k^2-k+1 \right) \leq n \leq \left( k^2+k \right)$$
-Actually, there would be $2k$ such integers.
-But I have no idea how to approach the second problem. Please give me some hints.
-
-REPLY [12 votes]: Direct computation seems to work. I recall this being an old Putnam problem.
-We have $\langle n \rangle = k$ iff $n \in [k^2 - k + 1, k^2 + k]$.
-Now, let's compute this sum.
-$$
-\sum_{n = 1}^\infty \frac{2^{\langle n \rangle} + 2^{-\langle n \rangle}}{2^n}
-= \sum_{k=1}^\infty \sum_{i=k^2 - k + 1}^{k^2 + k} \frac{2^k + 2^{-k}}{2^i}
-= \sum_{k=1}^\infty \frac{2^{2k}+1}{2^k}\sum_{i=k^2 - k + 1}^{k^2 + k} \frac1{2^i}
-= \sum_{k=1}^\infty \frac{2^{2k}+1}{2^{k^2+1}}\sum_{i=0}^{2k-1} \frac{1}{2^i}
-= \sum_{k=1}^\infty \frac{2^{2k}+1}{2^{k^2 + 1}}\frac{1 - \frac{1}{4^k}}{1-\frac{1}{2}}
-= \sum_{k=1}^\infty \frac{2^{2k}+1}{2^{k^2 + 1}}\frac{2^{2k}-1}{2^{2k-1}}
-= \sum_{k=1}^\infty \frac{2^{4k}-1}{2^{k^2 + 2k}}
-= \sum_{k=1}^\infty \left(2^{1-(k-1)^2} - 2^{1-(k+1)^2}\right)
-= 2^1 + 2^0 = 3$$
-
-REPLY [6 votes]: The idea is to rewrite the sum as a double sum by observing that $$\langle m^2 + k \rangle = m$$ for $k \in \{-m+1, \ldots, m\}$. Therefore, $$\begin{align*} S &= \sum_{n=1}^\infty \frac{2^{\langle n \rangle} + 2^{-\langle n \rangle}}{2^n} \\ &= \sum_{m=1}^\infty \sum_{k=-m+1}^{m} \frac{2^m + 2^{-m}}{2^{m^2+k}} \\ &= \sum_{m=1}^\infty \frac{2^m + 2^{-m}}{2^{m^2}} \sum_{k=-m+1}^m \frac{1}{2^k} \\ &= \sum_{m=1}^\infty \frac{2^m + 2^{-m}}{2^{m^2}} \left(2^m - 2^{-m}\right) \\ &= \sum_{m=1}^\infty \frac{2^{2m} - 2^{-2m}}{2^{m^2}} \\ &= \sum_{m=1}^\infty 2^{-m(m-2)} - 2^{-m(m+2)} \\ &= \sum_{m=1}^\infty 2^{-m(m-2)} - \sum_{k=3}^\infty 2^{-(k-2)k} \\ &= \sum_{m=1}^2 2^{-m(m-2)} = 2^1 + 2^0 = 3.\end{align*}$$<|endoftext|>
-TITLE: How to prove $\sum_{s=0}^{m}{2s\choose s}{s\choose m-s}\frac{(-1)^s}{s+1}=(-1)^m$?
-QUESTION [7 upvotes]: Question: How to prove the following identity? -$$ -\sum_{s=0}^{m}{2s\choose s}{s\choose m-s}\frac{(-1)^s}{s+1}=(-1)^m. -$$ -I'm also looking for the generalization of this identity like -$$ -\sum_{s=k}^{m}{2s\choose s}{s\choose m-s}\frac{(-1)^s}{s+1}=? -$$ -Proofs, hints, or references are all welcome. - -REPLY [8 votes]: This is just a supplement to the nice answer of @tc2718. We show that it is convenient to use the coefficient of operator $[x^k]$ to denote the coefficient of $x^k$ of a series. We can write e.g. -$$\binom{n}{k}=[x^k](1+x)^n$$ - -We obtain - \begin{align*} - \sum_{s=0}^{m}&{2s\choose s}{s\choose m-s}\frac{(-1)^s}{s+1}\\ - &= \sum_{s=0}^{m}[u^{m-s}](1+u)^s(-1)^s[x^s]\frac{1-\sqrt{1-4x}}{2x}\tag{1}\\ - &= [u^{m}]\sum_{s=0}^{m}(1+u)^s(-u)^s[x^s]\frac{1-\sqrt{1-4x}}{2x}\tag{2}\\ - &= [u^{m}]\frac{1-\sqrt{1-4(1+u)(-u)}}{2(1+u)(-u)}\tag{3}\\ - &= [u^{m}]\frac{1-(1+2u)}{2(1+u)(-u)}\\ - &= [u^{m}]\frac{1}{1+u}\\ - &= [u^{m}]\sum_{s=0}^{\infty}(-u)^s\\ - &=(-1)^m - \end{align*} - -Comment: - -In (1) we use the coefficient of operator together with the series expansion of the Catalan-numbers: -$\frac{1-\sqrt{1-4x}}{2x}=\sum_{s=0}^{\infty}\frac{1}{s+1}\binom{2s}{s}x^s$ -In (2) we use the rule $[x^s]f(x)=[x^0]x^{-s}f(x)$ -In (3) we use the substitution rule $f(x):=\sum_{s=0}^{\infty}a_sx^s=\sum_{s=0}^\infty[y^s]f(y)x^s$<|endoftext|> -TITLE: Partitioning $\{1,2,\ldots,k\}$ into $p$ subsets with equal sums, where $p$ is prime -QUESTION [16 upvotes]: Let $p$ be a prime natural number. For which positive integer $k$ can the set $\{1,2,\ldots,k\}$ be partitioned into -$p$ subsets with equal sums of elements ? - -Obviously, $p\mid k(k+1)$. Hence, $p\mid k$ or $p\mid k+1$. All we have to do now is to show a construction. But I can't find one. I have tried partitioning the set and choose one element from each set but that hasn't yielded anything. -Any hint will be appreciated. - -REPLY [10 votes]: Definition. For a prime natural number $p$, we say that a positive integer $k$ is $p$-splittable if $\{1,2,\ldots,k\}$ can be partitioned into $p$ subsets with the same sum. - -If $p=2$, then it follows that -$$\text{$k\equiv 0\pmod{4}$ or $k\equiv -1\pmod{4}$}\,.$$ For an odd prime $p$, we have -$$\text{$k\equiv 0\pmod{p}$ or $k\equiv-1\pmod{p}$}\,.$$ It can be easily seen that, for $k\in\mathbb{N}$ and for any prime natural number $p$, if $k$ is $p$-splittable, then $k+2p$ is $p$-splittable (by adding -$$\text{$\{k+1,k+2p\}$, $\{k+2,k+2p-1\}$, $\ldots$, $\{k+p,k+p+1\}$}$$ to the $p$ partitioning sets of $\{1,2,\ldots,k\}$). -Since $k=3$ and $k=4$ are $2$-splittable, any natural number of the form $4t-1$ or $4t$, where $t\in\mathbb{N}$, is $2$-splittable, and no other number is $2$-splittable. Also, for any odd prime natural number $p$, $k=2p-1$ and $k=2p$ are $p$-splittable, which means that any natural number of the form $2pt-1$ or $2pt$, where $t\in\mathbb{N}$, is $p$-splittable. Clearly, $k=p-1$ and $k=p$ are not $p$-splittable for odd $p$. We, however, claim that $k=3p-1$ or $k=3p$ are $p$-splittable for odd $p$, which would then imply that any natural number of the form $pt-1$ or $pt$ where $t\geq 2$ is an integer is $p$-splittable, and nothing else is $p$-splittable. -First, assume that $p\equiv 1\pmod{4}$, say $p=4r+1$ for some $r\in\mathbb{N}$. 
- -If $k=3p-1=12r+2$, then consider the partition of $\{1,2,\ldots,k\}$ into -$$\text{$\{6r+1,12r+2\}$, $\{6r+2,12r+1\}$, $\ldots$, $\{9r+1,9r+2\}$}\,,$$ -$$\text{$\{1,2,3,6r-2,6r-1,6r\}$, $\{4,5,6,6r-5,6r-4,6r-3\}$}\,,$$ -$$\text{$\ldots$, $\{3r-2,3r-1,3r,3r+1,3r+2,3r+3\}$}\,.$$ -If $k=3p=12r+3$, then consider the partition -$$\text{$\{6r+3,12r+3\}$, $\{6r+4,12r+2\}$, $\ldots$, $\{9r+2,9r+4\}$}\,,$$ -$$\text{$\{1,2,3,6r-1,6r,6r+1\}$, $\{4,5,6,6r-4,6r-3,6r-2\}$}\,,$$ -$$\text{$\ldots$, $\{3r-2,3r-1,3r,3r+2,3r+3,3r+4\}$, $\{3r+1,6r+2,9r+3\}$}\,.$$ - -Now, assume that $p\equiv -1\pmod{4}$, say $p=4r-1$ for some $r\in\mathbb{N}$. - -If $k=3p-1=12r-4$, then consider the partition -$$\text{$\{6r-2,12r-4\}$, $\{6r-1,12r-5\}$, $\ldots$, $\{9r-4,9r-2\}$}\,,$$ -$$\text{$\{1,2,3,6r-5,6r-4,6r-3\}$, $\{4,5,6,6r-8,6r-7,6r-6\}$}\,,$$ -$$\text{$\ldots$, $\{3r-5,3r-4,3r-3,3r+1,3r+2,3r+3\}$, $\{3r-2,3r-1,3r,9r-3\}$}\,.$$ -If $k=3p=12r-3$, then consider the partition -$$\text{$\{6r,12r-3\}$, $\{6r+1,12r-4\}$, $\ldots$, $\{9r-2,9r-1\}$}\,,$$ -$$\text{$\{1,2,3,6r-4,6r-3,6r-2\}$, $\{4,5,6,6r-7,6r-6,6r-5\}$}\,,$$ -$$\text{$\ldots$, $\{3r-5,3r-4,3r-3,3r+2,3r+3,3r+4\}$, $\{3r-2,3r-1,3r,3r+1,6r-1\}$}\,.$$ - - -Question. What if $p$ is not prime? I conjecture the following: -(1) If $p$ is odd, then, for any $j\in\{-1,0,1,2,\ldots,p-2\}$ such that $p\mid j(j+1)$, every integer of the form $tp+j$, where $t\geq 2$ is an integer, is $p$-splittable, and nothing else is $p$-splittable. -(2) If $p$ is even, then, for any $j\in\{-1,0,1,2,\ldots,2p-2\}$ such that $p\mid \dfrac{j(j+1)}{2}$, every integer of the form $2tp+j$, where $t\in\mathbb{N}$, is $p$-splittable, and nothing else is $p$-splittable. - -This question is also posted here: $p$-Splittable Integers.<|endoftext|> -TITLE: Roots of ln of a square -QUESTION [8 upvotes]: Problem: -$$ -y=\ln((3x-2)^2) -$$ - -State the domain and the coordinates of the point where the curve crosses the x-axis - -At first sight, you say that the domain is $x>\frac23$ because $\ln$ is undefined for negative numbers, so you just rearrange $3x-2>0$. -But the input of $\ln$ is squared, which means there are 2 roots, namely $1$ and $\frac13$. -Contradiction: -By the law of logarithms -$$ -\ln(x^2)=2\ln(x) -$$ -Therefore, the function for $y$ can be rewritten as -$$ -y=2\ln(3x-2) -$$ -The problem is that half the graph disappears. Now that the input isn't squared, $y$ is undefined for $x\le\frac23$ ($3x-2$ becomes negative) and the entire left half is gone. - -So what's the answer? How many roots are there? It seems that math is contradicting itself. - -REPLY [15 votes]: Your law of logarithms only works if the domain makes sense. You wouldn't write that $\ln((-4)^2)=2\ln(-4)$ as $\ln(-4)$ isn't defined. What would be better to write is actually: -$$\ln(x^2)=2\ln(|x|)$$ -This would lead you to two solutions as: -$$\ln(3x-2)^2=0$$ -$$2\ln(|3x-2|)=0$$ -$$\ln(|3x-2|)=0$$ -$$|3x-2|=1$$ -$$3x-2=\pm1$$ -$$3x=1,3$$ -$$x=\frac{1}{3},1$$ - -REPLY [7 votes]: The error is in your simplicity, it should be -$$\ln x^2 = \ln |x|^2 = 2\ln |x|$$ -As the square removes the negatives you cannot suddenly shoehorn them in and think it flies. Beyond that it's just a matter to solve for when $(3x-2)\neq 0$<|endoftext|> -TITLE: Can we always multiply some function that is not differentiable everywhere with function that is to obtain differentiable product? -QUESTION [9 upvotes]: First of all, I think that before stating the general question it would be okay to make some concrete example of what do I have in mind. 
-Let us take the function $f(x)=|x|$. -We could write this function as $f(x)= \begin{cases}x&{x>0}\\-x&x<0 \\ 0& {x=0} \end{cases}$. -This function is differentiable everywhere except at the point $x=0$. -Now let us multiply this function with, say, function $g(x)=x$. -Now we have $f(x)g(x)=\begin{cases}{x^2}&{x>0}\\{-x^2}&{x<0}\\ 0&{x=0} \end{cases}$. -Clearly $f(x)g(x)$ is differentiable everywhere. -What did we do here? -We multiplied function which was not differentiable at one point with some other function which is of class $C^{\infty}$ and we obtained function which is everywhere differentiable. -Now I would like to ask the question which deals with the general case: - -Suppose that we have some real function of a real variable $f$ that is continuous on some set $(a,b)$ and that is not differentiable on some subset of $(a,b)$ (the subset could be only one point as in the above described example or it could be the whole set $(a,b)$ so that we have everywhere continuous but nowhere differentiable function). Could it be that there always exists some function $g$ (which could depend on $f$) of class $C^{\infty}$ (function that is infinitely times differentiable) which is not the zero function (so $g$ is not the function $g(x)=0$) and which is such that we have that the function $fg$ (the product of the functions $f$ and $g$) is differentiable on the set $(a,b)$? - -REPLY [10 votes]: This is cheating a little, but if the points of nondifferentiability are isolated, you could form a function which vanishes in a neighborhood of each of those points, and is not identically zero, and multiply by that.<|endoftext|> -TITLE: Discussion on AC and countable unions of countable sets -QUESTION [5 upvotes]: My question stems from the comments following Asaf Karagila's answer here : Why can't you pick socks using coin flips? -At some point there was a discussion about whether or not a countable union of countable sets is countable if one does not assume AC. -I upvoted this comment for I thought it quite right : - -@phs: Maybe your confusion comes from the meaning of the term "countable". It gives you the existence of a bijection with the natural numbers (an enumeration of the set), but it does not give you (choose for you) such a bijection. For one set (or finitely many) existential elimination will hand you a concrete enumeration, but not so for infinitely many sets at a times. – Marc van Leeuwen Mar 19 '14 at 13:11 - -Then I got to this one : - -@Marc: That's not fully correct. You sort of slide the case where the model and the meta-theory disagree on the integers (i.e. non-standard integers), in which case this only holds for the meta-theory integers; but it's true internally for everything the model think is finite. It's an even finer point than the place where choice is used in the above proof. I do agree it's a good intuition, but it's not really the reason why this fails. In fact, it's not the reason at all. (This is quite delicate, and I only understood that about a year ago, and I even made that mistake before.) – Asaf Karagila Mar 19 '14 at 18:57 - -This comment appears quite rich and deeply interesting. I must confess however that I don't fully understand it. I was wondering if anyone could provide me with a more detailed explanation of what is said in this comment. - -REPLY [3 votes]: If you have two sets $A$ and $B$ which are non-empty, then it means that the statement $\exists x\exists y(x\in A\land y\in B)$ is true. 
Now by existential instantiation we get $a,b$ such that $a\in A\land b\in B$. So the function defined as $\{\langle A,a\rangle,\langle B,b\rangle\}$ is a choice function, as wanted.
-One could argue, if so, that finite choice follows from the fact that if you have a finite set of non-empty sets, then you can write that statement as $\exists x_1\ldots\exists x_n(\ldots)$, and then using existential instantiation get the choice function as before.
-But this is not accurate. If $M$ is a model of $\sf ZF$ which has non-standard integers, then there is some $N\in M$ such that $M\models N\text{ is an integer}$, but $N$ does not correspond to any actual integer from our meta-theory. So we cannot write any statement with $N$ quantifiers, because $N$ is not really a finite integer. Therefore applying existential instantiation fails here, and you have an existential crisis:
-
-How do you choose, inside $M$, from $N$-many sets?
-
-If you can't do that, then we cannot say that $\sf ZF$ proves that "For every finite set of non-empty sets there is a choice function". Because in that statement finite means "internally finite", and not "finite in the meta-theory".
-This is solved by the fact that $\sf ZF$ proves induction. So we can do induction internally to $M$, and there you only have to deal with two existential quantifiers: one for the choice function of a set of smaller size (the induction hypothesis) and one for the choice of an element from the additional set (the inductive step).
-So it's true that existential instantiation is how we solve this crisis of choice. But one has to be vigilant and understand that this is not an application from the meta-theory by recursion. It is an internal recursion that just... works!<|endoftext|>
-TITLE: How does one prove transfinite induction in ZFC?
-QUESTION [11 upvotes]: If one is able to use classes, it seems to me that the proof of transfinite induction is a simple extension of the usual proof of induction (and equal to the proof of transfinite induction on sets). However, if one cannot argue with classes, how can one prove transfinite induction?
-PS: I'm sorry if my question is confused. That would be because my knowledge of set theory is not thorough and barely enough for me to be comfortable with it.
-
-REPLY [15 votes]: If $C$ is any well-ordered set, we have the
-Principle of Transfinite Induction. Let $P$ be a property with
-$$ \tag1\forall \alpha\in C\colon (\forall \beta \in C\colon \beta<\alpha \to P(\beta))\to P(\alpha).$$
-Then
-$$ \forall \alpha\in C\colon P(\alpha).$$
-Proof: We can define the set $A=\{\,\alpha\in C\mid \neg P(\alpha)\,\}$. Assume $A\ne \emptyset$. As $C$ is well-ordered, $A$ has a minimal element $\alpha$. By minimality we have $\forall \beta \in C\colon \beta<\alpha \to P(\beta)$, hence from $(1)$ we get $P(\alpha)$, contradicting $\neg P(\alpha)$. Therefore $A=\emptyset$, which is the claim. $\square$
-Interestingly, the principle works also if $C$ is the well-ordered proper class $ \operatorname{On}$ of all ordinals (which also happens to be the most common application of transfinite induction), even though one might be discouraged by the fact that we deal with a proper class here:
-Principle of Transfinite Induction for Ordinals. Let $P$ be a property with
-$$ \tag2\forall \alpha\in \operatorname{On}\colon (\forall \beta \in \operatorname{On}\colon \beta<\alpha \to P(\beta))\to P(\alpha).$$
-Then
-$$ \forall \alpha\in \operatorname{On}\colon P(\alpha).$$
-Proof. Let $\gamma\in \operatorname{On}$ be an arbitrary ordinal.
Then $\gamma$ itself is a well-ordered set of ordinals, so we can apply the Principle of Transfinite Induction above to the well-ordered set $C=\gamma$. We conclude $\forall \beta\in\gamma\colon P(\beta)$, or more wordily $\forall \beta\in \operatorname{On}\colon\beta\in \gamma \to P(\beta)$. As $\beta\in\gamma$ is the same as $\beta<\gamma$, we learn from $(2)$ that $P(\gamma)$. As $\gamma$ was an arbitrary ordinal, we obtain the desired claim. $\square$<|endoftext|>
-TITLE: Questions about the Gerschgorin circle theorem
-QUESTION [5 upvotes]: Q: If $A$ is a strictly diagonally dominant matrix, prove that
-$$|\det A|\ge \prod_{i=1}^n(|a_{ii}|-\sum_{j\neq i}|a_{ij}|)$$
-The proof is:
-By the Gerschgorin circle theorem, the eigenvalues of $A$ lie in the union of the following circles:
-$$|z-a_{ii}|\le\sum_{j\neq i}|a_{ij}|\quad i=1,\cdots,n$$
-so each eigenvalue $\lambda$ satisfies $$|\lambda|\ge|a_{ii}|-\sum_{j\neq i}|a_{ij}|$$
-and we get the conclusion.
-
-My question is that the last inequality only holds for some $i$; for example, all the eigenvalues can lie in the same circle, so the conclusion does not follow. What's the problem?
-
-REPLY [4 votes]: Take the entries of the matrix to be $A = (a_{ij})_{n \times n}$.
-Note that the system of linear equations
-$$a_{i1} + \sum\limits_{j=2}^{n} a_{ij}x_j= 0,\qquad i = 2,3,\cdots, n$$
-has a unique solution, say $(x_2,x_3,\cdots, x_n)$ (by the given condition on $A$, the coefficient matrix of the system is nonsingular, by the Gerschgorin circle theorem).
-So, $$\begin{align}-a_{ii}x_i = a_{i1}+\sum\limits_{j=2,j \neq i}^{n} a_{ij}x_j &\implies |a_{ii}| \le \frac{|a_{i1}|}{|x_i|} +\sum\limits_{j=2,j \neq i}^{n} |a_{ij}|.\frac{|x_j|}{|x_i|}\\&\implies |a_{i1}| +\sum\limits_{j=2,j \neq i}^{n} |a_{ij}| < \frac{|a_{i1}|}{|x_i|} +\sum\limits_{j=2,j \neq i}^{n} |a_{ij}|.\frac{|x_j|}{|x_i|}\\&\implies \max_{2 \le i\le n} |x_i| < 1\end{align}$$
-Now, consider the determinant $\det \Delta_1$ of the system and add the $j^{th}$ column multiplied by $x_j$ to the first column for each $j = 2,3,\cdots, n$.
-Thus, the determinant $\det \Delta_1$ becomes:
-$$\det \Delta_1 = \left(a_{11}+\sum\limits_{j=2}^{n} a_{1j}x_j\right).\det \left(\begin{matrix} a_{22} & \cdots & a_{2n}\\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ a_{n2} & \cdots & a_{nn}\end{matrix}\right) = \left(a_{11}+\sum\limits_{j=2}^{n} a_{1j}x_j\right) \times \det \Delta_2$$
-Since $|x_j| < 1$ for each $j = 2,\ldots,n$, we have:
-$$|\det \Delta_1| \ge \left(|a_{11}|-\sum\limits_{j=2}^{n} |a_{1j}|\right).|\det \Delta_2|$$
-The matrix corresponding to $\det \Delta_2$ is again diagonally dominant, so the conclusion follows by induction.<|endoftext|>
-TITLE: How many possible phone words exist for a phone number of length N when also counting words less than length N within that phone number?
-QUESTION [7 upvotes]: The phone words problem
-Find all possible words that can be derived from a phone keypad. "Words" do not have to be English dictionary words; for this question, words can be any combination of letters which can be mapped from a digit.
-For those not familiar, phone keypads often have letters under most digits: -╔═════╦═════╦═════╗ -║ 1 ║ 2 ║ 3 ║ -║ ║ abc ║ def ║ -╠═════╬═════╬═════╣ -║ 4 ║ 5 ║ 6 ║ -║ ghi ║ jkl ║ mno ║ -╠═════╬═════╬═════╣ -║ 7 ║ 8 ║ 9 ║ -║ pqrs║ tuv ║wxyz ║ -╠═════╬═════╬═════╣ -║ * ║ 0 ║ # ║ -║ ║ ║ ║ -╚═════╩═════╩═════╝ - -Counting only phone words with length $n$ -Here's an example to get an idea of the phone words problem: given a phone number 366, you could generate the following phone words -"dmm", "dmn", "dmo", "dnm", "dnn", "dno", "dom", "don", "doo", "emm", "emn", "emo", "enm", "enn", "eno", "eom", "eon", "eoo", "fmm", "fmn", "fmo", "fnm", "fnn", "fno", "fom", "fon", "foo" - -Because 3 can be replaced by either "d", "e" or "f", and 6 can be replaced by either "m", "n" or "o". The upper bound on the number of phone words of length $n$ for a number with $n$ digits is $4^n$ (most keys have 3 letters but 7 and 9 both map to 4 letters). -Counting all words up to and including length $n$ -The example above gives us 27 words, but we were only counting words whose length equals the number of digits of the phone number 366. What if we want to find all words including those whose length is less than the phone number? For example, given the same input as before, 366, you could also have generated words "no", "on", "n", etc.... How many total phone words can be generated for a phone number of length $n$? -To put this another way -Given a phone number of length $n$, we can generate $4^n$ phone words of length $n$, we can also generate $2(4^{n-1})$ phone words for each sub phone number with length $n-1$ (using the example above, that would be all phone words of 36 and 66), then all phone words for all possible sub phone numbers of length $n-2$ etc.. How many phone words are there for a phone number of length $n$ if we count all phone words possibilities or all lengths less than $n$? -Attempt at an answer -Attempting to string the above description into an equation: -$W = 4^{n} + 2(4^{n-1}) + 3(4^{n-2}) + \ldots + n$ -But not sure where to go with this. -Terminology -Phone number is an ordered sequence of numerical digits. Example: 1234560, 366 etc... -Phone word is an ordered sequence of latin alphabetical characters (found on En-US telephone key pads). - -Each phone word can be directly mapped to exactly one phone number of the same length. For example: "foo" can be mapped to 366. -Each phone number may be mapped to zero or more phone words. - -Sub phone number: Given a phone number, there are $n(n+1)/2$ ways to "slice" the number into smaller phone numbers (without changing the order of the numbers). For example: 366 contains the following sub phone numbers: 36, 66, 3, 6, 6 - -REPLY [5 votes]: [2015-12-10] Update: Thanks to a comment from @JohnMachacek I could include a reference to a paper proving the conjecture about tight upper bounds. - -Note: This is a partial answer addressing some upper bounds and looking at some special cases. But first I like to state the problem. - -Current Situation: -We consider phone numbers as non-empty strings build from an alphabet $$\mathcal{A}=\{2,3,4,5,6,7,8,9\}$$ We ignore the digits $0$ and $1$ as the corresponding keys of OPs phone keypad do not contribute any alphabetical characters. Digits from $\mathcal{A}$ are mapped to either three or four characters. So, the digits are associated with weights of size $3$ or $4$. -For convenience only we simplify the problem and consider the same weight $m>0$ for each digit in $\mathcal{A}$. 
-OP is asking for the number $\varphi(w)$ of words and all subwords, which can be associated with a given phone number $w$ of length $n$. We can reformulate this problem by asking for all substrings of $w$, weighted with weight $m$ accordingly. - -Example: If we look at the two phone numbers $3633$ and $3336$ both having length $w=4$, we get following substrings -\begin{align*} -3633&\quad\rightarrow\quad \{3633,363,633,33,36,63,3,6\}\\ -3336&\quad\rightarrow\quad \{3336,333,336,33,36,3,6\} -\end{align*} -We observe, that even if the words contain the same digits together with the same multiplicities, the number of substrings is different according to the constellation of the blocks consisting of equal digits. While $3633$ has three different substrings of length two, the string $3336$ has two different substrings of length two. We obtain with respect to the number of substrings: -\begin{align*} -\varphi(3633)&=m^4+2m^3+\color{blue}{3}m^2+2m\\ -\varphi(3336)&=m^4+2m^3+\color{blue}{2}m^2+2m\\ -\end{align*} - -Upper bounds -Finding a generating function which provides the distribution of $\varphi(w)$ for all different phone numbers of length $n$ is (regrettably) beyond the scope of this answer. But we can at least provide some upper bounds for all words of length $n$. If we consider a word $w$ of length $n$ and a substring of length $k, 1\leq k \leq n$ there are two limitations: - -The number of substrings of length $k$ is limited by the size of the alphabet $\mathcal{A}$. -There are at most $n-k+1$ substrings of length $k$ in a word of length $n$. - -Since the number of substrings of length $k$ is less or equal $\min\{|\mathcal{A}|^k,n-k+1\}$ we conclude: - An upper bound for $\varphi(w)$ with length of $w$ equal to $n$ is - \begin{align*} -\varphi(w)\leq\sum_{k=1}^{n}\min\left\{|\mathcal{A}|^k,n-k+1\right\}m^k\tag{1} -\end{align*} -If we do not consider the size of the alphabet, we can provide a closed expression for a somewhat larger upper bound and claim -The following is an upper bound for $\varphi(w)$ with length of $w$ equal $n$ -\begin{align*} -\varphi(w)\leq\frac{m\left(m^{n+1}-m(n+1)+n\right)}{(m-1)^2}\tag{2} -\end{align*} - -This holds true since according to (1) -\begin{align*} -\varphi(w)&\leq\sum_{k=1}^{n}\min\left\{|\mathcal{A}|^k,n-k+1\right\}m^k\\ -&\leq\sum_{k=1}^{n}(n-k+1)m^k\\ -&=(n+1)\sum_{k=1}^{n}m^k-\sum_{k=1}^{n}km^k\tag{3} -\end{align*} -Using the formula for the finite geometric series we get -\begin{align*} -\sum_{k=1}^{n}m^k&=\frac{1-m^{n+1}}{1-m}-1=m\frac{1-m^n}{1-m}\\ -\sum_{k=1}^{n}km^k&=m\sum_{k=1}^nkm^{k-1}\\ -&=m\frac{d}{dm}\left(\sum_{k=1}^nm^k\right)\\ -&=m\frac{d}{dm}\left(\frac{m-m^{n+1}}{1-m}\right)\\ -&=m\frac{nm^{n+1}-(n+1)m^n+1}{(1-m)^2} -\end{align*} -Putting these two results into (3) and the claim (2) follows. -Note: The closed expression (2) is not necessarily a tight bound. If we consider e.g. the current problem with an alphabet of size $7$ we observe the expression (1) produces closer bounds for words with length $n>7$. 
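-
-Both bounds are easy to tabulate; here is a short plain-Python sketch of the computation (an illustrative aside, using only the formulas above), which reproduces the table below:
-
-    # Upper bound (1): sum of min(t^k, n-k+1) * m^k for k = 1..n, and the
-    # closed-form bound (2), evaluated for alphabet size t = 7, weight m = 3.
-    def bound1(n, t=7, m=3):
-        return sum(min(t**k, n - k + 1) * m**k for k in range(1, n + 1))
-
-    def bound2(n, m=3):
-        return m * (m**(n + 1) - m * (n + 1) + n) // (m - 1)**2
-
-    print([bound1(n) for n in range(1, 11)])
-    # [3, 15, 54, 174, 537, 1629, 4908, 14745, 44265, 132834]
-    print([bound2(n) for n in range(1, 11)])
-    # [3, 15, 54, 174, 537, 1629, 4908, 14748, 44271, 132843]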
-
-With $|\mathcal{A}|=7$ and weight $m=3$, according to three characters for each digit, we get the following upper bounds for small values of $n$:
-\begin{array}{rcccccccccc}
-n&1&2&3&4&5&6&7&8&9&10\\
-\text{upper bound (1)}&3&15&54&174&537&1629&4908&\color{blue}{14745}&\color{blue}{44265}&\color{blue}{132834}\\
-\text{upper bound (2)}&3&15&54&174&537&1629&4908&14748&44271&132843\\
-\end{array}
-
-Tight upper bounds
-
-In the comment section of the OEIS sequence A094913 an interesting conjecture claims that for binary alphabets the upper bound (1) is even a tight upper bound.
-In fact, even more is true. For each number $n>0$ and each alphabet of size $t>0$, the expression on the RHS of (1) is a maximum.
-This is stated in Theorem 8 of the paper Strings with Maximally Many Distinct Subsequences and Substrings by A. Flaxman et al. The maximum can be achieved by a modified De Bruijn word.<|endoftext|>
-TITLE: Canonical sheaf of a projective bundle
-QUESTION [7 upvotes]: Given a smooth variety $Y$ and a vector bundle $E$ of rank $r+1$ defined on it, call $X$ the variety associated to the projective bundle given by $E$. Call $\pi: X \rightarrow Y$ the natural map.
-For such a situation, I found the formula $K_X=-(r+1)\xi+\pi^*(K_Y+\det(E))$, where $\xi$ denotes the class of the natural line bundle $\mathcal{O}_X(1)$ on $X$. If we clear $\pi^*K_Y$ from the formula, it boils down to saying that the relative canonical is given by $-(r+1)\xi + \pi^*(\det(E))$.
-I have not found a proof of this fact.
-Is it just a "glueing" of the Euler sequence, where $\pi^*\det(E)$ gives the transition functions for the local trivializations we choose to write the Euler sequence locally over an affine of $Y$? Also, does this formula generalize if $Y$ is singular?
-Thank you.
-Edit/Note: In my reference $\mathbb{P}(E)$ is thought of as the projective space of one-dimensional quotients. I believe that in the answers below $\det(E)$ appears with an additional dual because the construction is done with the projective space of lines.
-
-REPLY [9 votes]: Let $\pi:X=\mathbb P(E)\to Y$ be the projection. Here $r+1$ is the rank of $E$. We use the following short exact sequences:
-$$
-\begin{align}
-0&\to T_{X/Y}\to T_X\to \pi^\ast T_Y\to 0\,\,\,\,\,\,\qquad (\star)\\
-0&\to\mathscr O_X(-1)\to \pi^\ast E\to Q\to 0.\qquad (\star\star)
-\end{align}
-$$
-The latter is the tautological exact sequence over $X=\mathbb P(E)$; in particular $Q$ denotes the universal quotient bundle, which has rank $r$.
-Hence $(\star\star)$ says that $$\pi^\ast(\det E)=\mathscr O_X(-1)\otimes\det Q.$$
-From $(\star)$, we get that $$K_X=\det T_{X/Y}^\vee\otimes \pi^\ast(\det T_Y^\vee)=\det T_{X/Y}^\vee\otimes \pi^\ast K_Y.$$
-Using that
-$$
-T_{X/Y}=Hom_X(\mathscr O_X(-1),Q)=\mathscr O_X(1)\otimes Q
-$$
-we see that $\det T_{X/Y}=\mathscr O_X(r)\otimes \det Q$, and dualizing we get $$\det T_{X/Y}^\vee=\mathscr O_X(-r)\otimes\det Q^\vee=\mathscr O_X(-r-1)\otimes \pi^\ast \det E^\vee.$$
-Conclusion:
-$$K_X=\mathscr O_X(-r-1)\otimes \pi^\ast \det E^\vee\otimes \pi^\ast K_Y.$$
-
-Note that one can read, in the above formula, the expression of the relative dualizing sheaf of the (smooth) family $X\to Y$, i.e. $K_{X/Y}=K_X\otimes \pi^\ast K_Y^\vee$. Indeed, in the smooth case we have $K_{X/Y}=\det \Omega^1_{X/Y}=\det T_{X/Y}^\vee$.<|endoftext|>
-TITLE: Is the base of a disc bundle necessarily a strong deformation retract of the total space?
-QUESTION [5 upvotes]: I am reading Algebraic Topology by E.H. Spanier, and in the proof of the Thom-Gysin map for disc bundles (on page 260) he says that $p : E \to B$ is a deformation retraction. I do not understand how this is the case. How do we view $B$ as a subspace of $E$ in the first place? And then how does $p$ become a deformation retraction?
-Also, please advise some reference where I could learn basic properties of disc/sphere bundles. Thanks.
-Edit: Here is the statement of the assumption part of Theorem 5.7.11 (in whose proof the statement appears): Let $(\xi,U_\xi)$ be an oriented q-sphere bundle with base $B$ and projection $\dot{p}=p|_\dot{E}:\dot{E} \to B$. Here $(E,\dot{E})$ is a fiber bundle pair with fiber $(D^{n+1},S^n)$ and $p: E \to B$ is the projection map.
-
-REPLY [3 votes]: First, I suspect Spanier wants all of his sphere and disc bundles to have linear transition maps, hence there is a canonical zero section $B \hookrightarrow E$. If not, then to make sense of his claim, you need a section; obstruction theory + the fact that the fibers are contractible guarantees that one exists, and indeed you can force it to be in the interior of each fiber.
-Once you have a section $s$, note that both $p$ and $s$ are homotopy equivalences by Whitehead + the long exact sequence of homotopy groups of a fibration. Now recall one version of the Whitehead theorem: if $X \hookrightarrow Y$ is a cofibration and also a homotopy equivalence, there is a deformation retraction onto $X$. I would guess that any section of a disc bundle is a cofibration, but if not, pick your section above to be a good one. In any case, once you have this, Whitehead gives you a deformation retraction onto the image of the section, as desired.<|endoftext|>
-TITLE: A is a product of two self-adjoint matrices if and only if A is similar to the adjoint of A?
-QUESTION [7 upvotes]: How can we prove that $A$ is a product of two self-adjoint matrices $X, Y$ if and only if $A$ is similar to $A^\ast$? I'm thinking about proving it but no useful techniques come to mind. Thanks!
-
-REPLY [5 votes]: This is (part of) Theorem 1 from this paper of Radjavi and Williams; the proof is somewhat involved. Here is a sketch of the proof. If $A=BA^*B^{-1}$ for some invertible $B$, then by some simple manipulations you can get the equation $A(B+B^*)=(B+B^*)A^*$, so $A(B+B^*)$ is self-adjoint. If $B+B^*$ were invertible, then we would be done, because we could take $X=A(B+B^*)$ and $Y=(B+B^*)^{-1}$. But $B+B^*$ might not be invertible. To fix this, note that we could have replaced $B$ by $\lambda B$ for any nonzero scalar $\lambda$, and you can show that for any invertible $B$, there exists a scalar $\lambda$ such that $\lambda B+\bar{\lambda}B^*$ is invertible.
-Conversely, suppose $A=XY$ where $X$ and $Y$ are self-adjoint. First note that the properties of being a product of two self-adjoint matrices and being conjugate to your own adjoint are both invariant under conjugation (for the first, use that $BXYB^{-1}=(BXB^*)((B^*)^{-1}YB^{-1})$). So we may assume that $A$ is in Jordan normal form. Splitting $A$ as a block diagonal matrix consisting of an invertible block and a nilpotent block, you can show that the diagonal blocks of $X$ and $Y$ corresponding to the invertible block of $A$ are invertible and conjugate that invertible block to its adjoint. The nilpotent block is also similar to its adjoint (you can just explicitly show that a matrix in Jordan normal form with $0$s on the diagonal is similar to its adjoint).
Putting this together, you can conclude that $A$ is similar to its adjoint.
-As a hint to why the proof is so complicated, Radjavi and Williams note that the result is not true (at least in the direction $A=XY\Rightarrow A\sim A^*$) for operators on an infinite-dimensional Hilbert space, so some special property of finite-dimensional spaces (such as the Jordan normal form used here) must be used.<|endoftext|>
-TITLE: Find $\lim_{x\to 0} \frac{e^{\cos^2x}-e}{ \tan^2x}$
-QUESTION [5 upvotes]: $$\lim_{x\to 0} \frac{e^{\cos^2x}-e}{ \tan^2x}=?$$
-
-I'm at a complete loss here to be quite honest. I'm sure there is a way to simplify the $e$ part somehow. Factoring out the $e$ in the numerator to make $e(e^{-\sin^2x}-1)$ doesn't seem to lead anywhere.
-
-REPLY [2 votes]: Put $\tan x=t$ (so $t\to 0$ as $x\to 0$ and $\cos^2x=\frac{1}{1+t^2}$), which gives $$\lim_{x\to 0} \frac{e^{\cos^2x}-e}{ \tan^2x}=\lim_{t\to 0} \frac{e^{\frac{1}{1+t^2}}-e}{t^2}$$
-Applying L'Hôpital's rule, $$\lim_{t\to 0} \frac{e^{\frac{1}{1+t^2}}-e}{t^2}=\lim_{t\to 0} \frac{-2te^{\frac{1}{1+t^2}}}{2t(1+t^2)^2}=\lim_{t\to 0} \frac{-e^{\frac{1}{1+t^2}}}{(1+t^2)^2}=-e$$<|endoftext|>
-TITLE: Rearrangement of Schauder basis
-QUESTION [5 upvotes]: My question is a part of an exercise in Banach space theory.
-In the space $c_0$, for $n \in \mathbb{N}$, let $s_n=\sum_{k=1}^n e_k=(1,1,\cdots,1,0,\cdots,0)$. It's easy to see that $(s_n)_{n\ge 1}$ is a Schauder basis. I want to find a rearrangement $\sigma:\mathbb N \to \mathbb N$ (a bijection, of course) of $(s_n)_{n\ge 1}$ such that $(s_{\sigma(n)})_{n\ge 1}$ is no longer a Schauder basis.
-There are some properties of Schauder bases that might be helpful. For instance, if some coordinate functional is not bounded then the rearrangement must not be a Schauder basis. But I have no idea how to find such a rearrangement.
-Thanks in advance!
-
-REPLY [2 votes]: First we identify the biorthogonal functionals: Let $$L_nx=x_n-x_{n+1}.$$Then $L_ns_n=1$, while $L_ns_m=0$ for $m\ne n$.
-So if we have $$x=\sum_n a_ns_{\sigma(n)}$$(converging in norm) then $$a_n=L_{\sigma(n)}x.$$Say $$P_Nx=\sum_{n=1}^Na_ns_{\sigma(n)}
-=\sum_{n=1}^NL_{\sigma(n)}x\,s_{\sigma(n)}.$$If the rearrangement is a Schauder basis we must have $||P_N||$ bounded. But $$(P_Nx)_1=\sum_{n=1}^NL_{\sigma(n)}x,$$so we must also have $||\sum_{n=1}^NL_{\sigma(n)}||$ bounded, and hence $||\sum_{n=M}^NL_{\sigma(n)}||$ must be bounded in $M$ and $N$.
-But suppose $\sigma$ is such that $\sigma(M),\sigma(M+1),\dots,\sigma(N)$ is a sequence of even integers. Then there's no cancellation in that last sum; it's easy to see that $||\sum_{n=M}^NL_{\sigma(n)}||=2(N-M+1)$.
-So if $\sigma(n)$ contains arbitrarily long sequences of even integers then $(s_{\sigma(n)})$ is not a Schauder basis.<|endoftext|>
-TITLE: Proof that multiplication of two numbers plus $2$ is a prime number
-QUESTION [6 upvotes]: Let's say we have odd prime numbers $3,5,7,11,13, \dots $ in ascending order $(p_1,p_2,p_3,\dots)$. Prove that this statement is true or false:
-For every $i$,
-$$p_i p_{i+1}+2$$
-is a prime number.
-Any ideas how I can prove this?
-
-REPLY [3 votes]: Building off of Robert Soupe's answer, thinking about modular arithmetic is a great way to come up with counterexamples.
-The first counterexample I found was $31, 37$ - both equal $1$ (mod $3$), so when we multiply them and add $2$, we get a multiple of $3$. I'm slow at mental multiplication, so searching for a pair of consecutive primes which were both 1 or both 2 (mod 3) was easier for me than trying some small examples.
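-
-For readers who would rather let a computer hunt for such pairs, here is a short sketch of the search (my addition, not part of the original answer; it assumes sympy is installed for the primality routines):
-
-from sympy import isprime, nextprime
-
-# Scan consecutive odd primes p, q and collect pairs for which p*q + 2
-# is composite, i.e. counterexamples to the claimed statement.
-p, found = 3, []
-while len(found) < 5:
-    q = nextprime(p)
-    if not isprime(p * q + 2):
-        found.append((p, q, p * q + 2))
-    p = q
-print(found)
-
-The very first pair this turns up is $(11, 13)$, since $11 \cdot 13 + 2 = 145 = 5 \cdot 29$; the pair $(31, 37)$ found above by modular arithmetic appears a few steps later.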
-Similarly, $p_i, p_{i+1}$ form a counterexample if: - -$p_i=1$ (mod $5$), $p_{i+1}=3$ (mod $5$) - for example, $41$ and $43$. -$p_i=4$ (mod $7$), $p_{i+1}=3$ (mod $7$) - for example, $53$ and $59$ ($57$ is not prime! :P). -And so on.<|endoftext|> -TITLE: Limit of a specific sequence involving Fibonacci numbers. -QUESTION [5 upvotes]: Let, $\left\{F_n\right\}_{n=1}^\infty$ be the Fibonacci sequence, i.e, $F_1=1, F_2=1~\&~ F_{n+2}=F_{n+1}+F_n~\forall ~n \in \mathbb{Z}_+$ -Let, $P_1=0, P_2=1$. Divide the line segment $\overline{P_n P_{n+1}}$ in the ratio $F_n:F_{n+1}$ to get $P_{n+2}$. -So, $P_{n+2}=\dfrac{F_n P_{n+1}+F_{n+1}P_n}{F_n+F_{n+1}}=\dfrac{F_n}{F_{n+2}}P_{n+1}+\dfrac{F_{n+1}}{F_{n+2}}P_n$ -What is the limit of the sequence $\left\{P_n \right\}_{n=1}^\infty$ ? -$\textbf{Few things:}$ If we define, -\begin{eqnarray*} -I_n &=& \left[P_n,P_{n+1}\right] \mathrm{,~if~} n \mathrm{~is ~odd~}\\ -&=& [P_{n+1},P_n] \mathrm{,~if~} n \mathrm{~is ~even~} -\end{eqnarray*} -then we see $I_n \supseteq I_{n+1}~\forall~n \in \mathbb{Z}_+$ and $ \lim \limits_{n \to \infty} |I_n|=0$ -So, by Cantor's nested interval theorem, $\bigcap \limits_{n=1}^\infty I_n$ is singleton. Hence, $\lim \limits_{n \to \infty} P_n$ exists. -I tried a little bit, but I couldn't find the limit. - -REPLY [3 votes]: The first few values of $P_n$ for $n \ge 2$ seem to be alternating sums of reciprocal Fibonacci numbers, starting with the second Fibonacci number (the second 1 in the sequence, so denominators go 1,2,3,5,8,13, etc.) -$$P_2=\frac{1}{1},\ P_3=\frac{1}{1} - \frac12, \ P_4=\frac11-\frac12+\frac13, \\ -P_5=\frac11-\frac12+\frac13-\frac15,\ P_6=\frac11-\frac12+\frac13-\frac15+\frac18$$ -So the limiting value of $P_n$ would be the value of this alternating series. One would need to check that defining the $P_n$ this way makes them satisfy the recurrence in the posted question. I may try to work on that part. But it seems so much of a coincidence that it "has to" be true! -Anyway I did use the above method to go for some large $n$ values and got intervals which closed in on the numerical value found by Patrick Stevens in his comment. -Now if the signs are dropped the terms all become positive and that constant has been discussed for example here and at Wolfram on the same topic it is said that the value of the sum for even indexed Fibonacci numbers is a known closed form constant. At the Wiki site the sum of all positive reciprocals is given a name but no known closed form for it seems to exist. There's a lot of material about it, though, like it is irrational as I recall. With the info about even indexed Fibonacci reciprocals it should at least be possible to express the alternating sum with the constant which is the sum of the positive reciprocals, and that would mean no hope for a closed form for the alternating sum either.<|endoftext|> -TITLE: Which functions are derivatives of some other function? -QUESTION [14 upvotes]: There is a fundamental formula in integral calculus which states that $\int_a^bF'(x)dx=F(b)-F(a)$. This formula gives a connection between definite and indefinite integrals. There are plenty of functions which are integrable (in the definite sense)-for example each bounded measurable function is ok. However I don't know precisely which functions do have indefinite integral: in other words which functions are derivatives of some other function? 
-For example, continuity is enough; on the other hand, one can show that every function that is a derivative (although it need not be continuous) has the Darboux property. So my question is
-
-Which functions are derivatives of some other function?
-
-REPLY [11 votes]: The problem is a rather famous (and natural) one. It was first explicitly stated by W. H. Young in 1911. I like quoting this passage so much (even if only to find an excuse to use the word "mooted") that I won't resist that impulse here.
-
-"Recent research [of Lebesgue and Vitali] has provided us with a set of
-  necessary and sufficient conditions that a function may be the
-  indefinite integral..., of another function and the way has thus been
-  opened to important developments. The corresponding, much more
-  difficult, problem of determining necessary and sufficient conditions
-  that a function may be a differential coefficient, has barely been
-  mooted; indeed, though we know a number of necessary conditions no set
-  even of sufficient conditions has to my knowledge ever been
-  formulated, except that involved in the obvious statement that a
-  continuous function is a differential coefficient. The necessary
-  conditions in question are of considerable importance and interest.
-  A function which is a differential coefficient has, in fact, various
-  striking properties. It must be pointwise discontinuous with respect
-  to every perfect set; it can have no discontinuities of the first
-  kind; it assumes in every interval all values between its upper and
-  lower bounds in that interval; its value at every point is one of the
-  limits on both sides of the values in the neighbourhood; its upper and
-  lower bounds, when finite, are unaltered, if we omit the values at any
-  countable set of points; the points at which it is infinite form an
-  inner limiting set of content zero. From these necessary conditions
-  we are able to deduce much valuable information as to when a function
-  is certainly not a differential coefficient . . . . These conditions
-  do not, however, render us any material assistance, even in answering
-  the simple question as to whether the product of two differential
-  coefficients is a differential coefficient, and this not even in the
-  special case in which one of the differential coefficients is a
-  continuous function." ...from W H Young, A note on the property of being a differential coefficient. Proc. London Math. Soc. 1911
-  (2) 9, 360-368.
-
-In the monograph by Andrew M. Bruckner, Differentiation of real functions, Chapter seven contains a discussion of the problem of characterizing derivatives.
-Andy has updated his account of this problem in a survey article for the Real Analysis Exchange:
-
-Bruckner, Andrew M. The problem of characterizing derivatives
-  revisited. Real Anal. Exchange 21 (1995/96), no. 1, 112--133.
-
-Here are a few actual characterizations of derivatives:
-
-C. Neugebauer, Darboux functions of Baire class 1 and derivatives,
-  Proc. Amer. Math. Soc., 13 (1962), 838–843.
-D. Preiss and M. Tartaglia, On Characterizing Derivatives, Proceedings
-  of the American Mathematical Society, Vol. 123, No. 8 (Aug., 1995),
-  2417-2420.
-Chris Freiling, On the problem of characterizing derivatives, Real
-  Analysis Exchange 23 (1997/98), no. 2, 805-812.
-Brian S. Thomson, On Riemann Sums, Real Analysis Exchange 37
-  (2011/12), 1-22.
- -My guess is that you won't find any of these satisfying in the way, for example, that "continuous" functions have multiple necessary and sufficient characterizations, most of them natural and compelling.<|endoftext|> -TITLE: How can I determine B-inverse from an optimal tableau of a LP? -QUESTION [5 upvotes]: (This is NOT a homework question, I am reviewing for my upcoming exam) -Given this linear program: - -and this optimal tableau: - -I am attempting to determine $B$ inverse using the table above. From the table I know that my basic variables are $x_1$, $x_2$ and $e_2$. I previously believed that $B$ inverse could simply be read from the optimal tableau as the values for your non basic variables. Therefore I originally thought $B$ inverse was the columns (excluding the first row) of $s_3$, $a_1$ and $a_2$ in the table above. -The columns of my BVs from my original equation (and therefore the value for B) is: -\begin{bmatrix}1&2&0\\1&-1&-1\\2&1&0\end{bmatrix} -and if I find the inverse of $B$ myself I get: -\begin{bmatrix}-1/3&0&2/3\\2/3&0&-1/3\\-1&-1&1\end{bmatrix} -However this does not match the entries for my non basic variables in the table. Not only are the last two columns flipped, but the last row also has the wrong signs. -I presume the problem is that $x_1$, $x_2$ and $e_2$ in the final solution do not form the identity matrix, and possibly because $e_2$ is an excess variable and not a slack variable, however I can't be certain as my textbook only has an example where in the optimal solution, the BVs columns form the identity matrix and there are only slack variables. -Is it possible to read $B$ inverse from the optimal table in such a question? - -REPLY [2 votes]: Yes, provided that you read the columns of $B^{-1}$ in a right order. -The working principle behind reading $B^{-1}$ from the optimal tableau -The initial tableau -$$\begin{array}{rr|l} -* & * & 0 \\ \hline -N & I & b -\end{array}$$ -Some variables are chosen to be basic so that the solution is optimal. This corresponds to choosing columns in the coefficient matrix $\begin{bmatrix}N & I\end{bmatrix}$ are chosen to form the basis matrix $B$. Then we left multiply $B^{-1}$ to the initial tableau to get the optimal tableau. -The optimal tableau -$$\begin{array}{rr|l} -* & * & * \\ \hline -B^{-1} N & B^{-1} & B^{-1} b -\end{array}$$ -Problem in this case -We set the coefficient matrix (in the initial tableau) to be -$$A = - \begin{bmatrix} - 1 & 2 & 0 & 0 & 1 & 0 \\ - 1 & -1 & -1 & 0 & 0 & 1 \\ - 2 & 1 & 0 & 1 & 0 & 0 - \end{bmatrix}. -$$ -Observe that at the right-hand side, it's not the identity matrix $I$, so we need to rewrite our initial tableau. -Rewritten initial tableau -$$\begin{array}{cc|l} -* & * & 0 \\ \hline -N & I E_1 \cdots E_k & b -\end{array}$$ -Each $E_i$ denotes a matrix formed by swapping the $i$-th and $j$-th columns of $I$. (i.e. For any matrix $A$, the matrix product $A E_i$ means swapping the $i$-th and $j$-th columns of $A$.) $I E_1 \cdots E_k$ denotes a sequence of column swapping opertions $E_1, \dots, E_k$ on $I$, and it can represent any permutation of the columns of $I$. -Rewritten optimal tableau -$$\begin{array}{cc|l} -* & * & * \\ \hline -B^{-1} N & B^{-1} E_1 \cdots E_k & B^{-1} b -\end{array}$$ -Therefore, columns of $B^{-1}$ has been swapped. In exams, one won't try to get rid of $E_1 \cdots E_k$ by matrix multiplication. 
Instead, use the fact that -\begin{align} -I E_1 \cdots E_k &=\begin{bmatrix}\mathbf{e}_{\sigma(1)} \cdots \mathbf{e}_{\sigma(n)}\end{bmatrix} \text{(what you can see from the initial matirx)} \\ -B^{-1} E_1 \cdots E_k &= \begin{bmatrix}B^{-1}\mathbf{e}_{\sigma(1)} \cdots B^{-1}\mathbf{e}_{\sigma(n)}\end{bmatrix} -\end{align} -Here, we use $\sigma$ to represent a permutation on $\{1,\dots,k\}$, and use $\mathbf{e}_i$ to represent the $i$-th column of the identity matrix $I$. -Another ideal situation -From the inital tableau -$$\begin{array}{cc|l} -* & * & 0 \\ \hline -N & I E_1 \cdots E_k & b -\end{array}$$ -you get an optimal tableau -$$\begin{array}{cc|l} -* & * & * \\ \hline -B^{-1} N & B^{-1} E_1 \cdots E_k & B^{-1} b -\end{array}$$ -with $\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n$ with the right order. i.e. In $\begin{bmatrix}B^{-1} N & B^{-1} E_1 \cdots E_k\end{bmatrix}$, $\forall 1 \le i < j \le n$, $\mathbf{e}_i$ is on the left-hand side of $\mathbf{e}_j$. (The identity matrix $I$ is "properly included" in the optimal tableau.) Then what you've done in the question to get the value of the matrix $B$ is correct. -Another problem in the given optimal tableau -$$\begin{array}{r|r|rrrrrr|l} - & z & x_1 & x_2 & s_2 & * & * & * & * \\ \hline - & * & * & * & * & * & * & * & * \\ \hline - x_2 & * & 0 & 1 & 0 & * & * & * & * \\ - x_1 & * & 1 & 0 & 0 & * & * & * & * \\ - s_2 & * & 0 & 0 & 1 & * & * & * & * -\end{array}$$ -In "simple words", "the order is not so proper". We don't see the identity matrix $I$ here. To get $I$ from the given optimal tableau, we may swap the first and the second row. This motivates us to think of the effect on $B$ after swapping two rows. -Swapping two rows in the optimal simplex tableau -We swap two rows in the optimal simplex tableau -$$\begin{array}{cc|l} -* & * & * \\ \hline -B^{-1} N & B^{-1} E_1 \cdots E_k & B^{-1} b -\end{array}$$ -In other words, we left multiply the constraints by $E$, which is a square matrix formed by swapping two rows in the identity matrix $I$. -Then the optimal simplex tableau becomes -$$\begin{array}{cc|l} -* & * & * \\ \hline -E B^{-1} N & E B^{-1} E_1 \cdots E_k & E B^{-1} b -\end{array}$$ -Note that what you can actually see is $E B^{-1} E_1 \cdots E_k$, and we can sort the columns according to the order of $\mathbf{e}_i$'s in the initial tableau (i.e. we know $E B^{-1}$), so we are interested to find the square matrix $\hat{B}$ such that $\hat{B}^{-1}=E B^{-1}$. $$\hat{B} = B E^{-1} = B E$$. Therefore, $\hat{B}$ is formed by swapping the corresponding columns of $B$. We may re-write the optimal simplex tableau in terms of the transformed matrix $\hat{B}$ -$$\begin{array}{cc|l} -* & * & * \\ \hline -\hat{B}^{-1} N & \hat{B}^{-1} E_1 \cdots E_k & \hat{B}^{-1} b -\end{array}$$ -We may generalise this to any permutation of rows in the optimal simplex tableau by expressing $E$ as a finite product of square matrices $E'_1, \dots E'_l$. $$E = E'_l \cdots E'_2 E'_1$$ Then we have -\begin{align} - E B^{-1} &= E'_l \cdots E'_1 B^{-1} \\ - &= E_{l}^{\prime -1} \cdots E_{1}^{\prime -1} B^{-1} \\ - &= (B(E'_1 \cdots E'_l))^{-1} -\end{align} -Thus, we see that when we have changed the order of rows in the optimal simplex tableau, we also need to perform corresponding column operations on the basis matrix $B$ in order to get the correct result. -Computational steps -It will be much better for you to label the basic varible on the LHS of the simplex tableau. 
-$$\begin{array}{r|r|rrrrrr|l} - & z & x_1 & x_2 & s_2 & s_3 & a_1 & a_2 & \text{RHS} \\ \hline - & 1 & 0 & 0 & 0 & \frac73 & M-\frac23 & M & \frac{58}{3} \\ \hline -\color{red}{\large x_2} & 0 & 0 & 1 & 0 & -\frac{1}{3} & \frac{2}{3} & 0 & \frac{2}{3} \\ -\color{red}{\large x_1} & 0 & 1 & 0 & 0 & \frac{2}{3} & -\frac{1}{3} & 0 & \frac{14}{3} \\ -\color{red}{\large s_2} & 0 & 0 & 0 & 1 & 1 & -1 & -1 & 1 -\end{array}$$ -In the leftmost column, the order of basic variable (for top to bottom) is $x_2,x_1,s_2$. Therefore, we should pick the column of coefficients of $x_2$ in the initial tableau first, then the column for $x_1$, and finally the one for $s_2$. -If you set the basis matrix to -$$B=\begin{bmatrix}a_2 & a_1 & a_3\end{bmatrix} = -\begin{bmatrix}2&1&0\\-1&1&-1\\1&2&0\end{bmatrix},$$ -and observe that $\mathbf{e}_i$ (the $i$-th column of the identity matrix $I$) in the intial tableau will be transformed to $B^{-1} \mathbf{e}_i$ (i.e. the $i$-th column of the optimal tableau), you'll be able to extract $B^{-1}$ from the optimal tableau. -That is, the initial optimal tableau -$$\begin{array}{cccccc|c} -* & * & * & * & * & * & 0 \\ \hline -a_1 & a_2 & a_3 & \mathbf{e}_3 & \mathbf{e}_1 & \mathbf{e}_2 & b -\end{array}$$ -is changed to -$$\begin{array}{cccccc|c} -* & * & * & * & * & * & * \\ \hline -B^{-1}a_1 & B^{-1} a_2 & B^{-1} a_3 & B^{-1} \mathbf{e}_3 & B^{-1} \mathbf{e}_1 & B^{-1} \mathbf{e}_2 & b -\end{array}$$ -\begin{align} -B&=\begin{bmatrix}a_2 & a_1 & a_3\end{bmatrix} \\ -B^{-1} B &= I \\ -B^{-1} \begin{bmatrix}a_2 & a_1 & a_3\end{bmatrix} &= \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{bmatrix} \\ -B^{-1} \begin{bmatrix}a_1 & a_2 & a_3\end{bmatrix} &= \begin{bmatrix} \mathbf{e}_2 & \mathbf{e}_1 & \mathbf{e}_3 \end{bmatrix} -\end{align} -Computational results -octave:1> A=[ -> 1 2 0 0 1 0; -> 1 -1 -1 0 0 1; -> 2 1 0 1 0 0] -A = - - 1 2 0 0 1 0 - 1 -1 -1 0 0 1 - 2 1 0 1 0 0 - -octave:2> b = [6;3;10]; c=[4 1 0 0 0 0]; -octave:3> B=[A(:,2) A(:,1) A(:,3)] -B = - - 2 1 0 - -1 1 -1 - 1 2 0 - -octave:4> B^-1 -ans = - - 0.66667 0.00000 -0.33333 - -0.33333 0.00000 0.66667 - -1.00000 -1.00000 1.00000 - -octave:5> B^-1*A -ans = - - 0.00000 1.00000 0.00000 -0.33333 0.66667 0.00000 - 1.00000 0.00000 0.00000 0.66667 -0.33333 0.00000 - 0.00000 0.00000 1.00000 1.00000 -1.00000 -1.00000 - -octave:6> [A(:,4:6)] -ans = - - 0 1 0 - 0 0 1 - 1 0 0<|endoftext|> -TITLE: Group Theory: Show that $G/Z(G) \cong \operatorname{Inn}(G) $? -QUESTION [6 upvotes]: Here is the question. It's rather long so I apologise. "Let $G$ be a group. Let $\operatorname{Aut}(G)$ be the set of all isomorphisms of $G$. This is a group under composition, known as the group of automorphisms of $G$. If $g \in G$ then we know that the map $\theta_{g}:G \rightarrow G$ defined by $\theta_g(a)=g^{-1}ag$ is an isomorphism." -Questions I have answered. -1) Show that if $g,h \in G$ then $\theta_{gh} = \theta_{h} \circ \theta_{g}$. -Answer: Let $ a \in G$ observe that $\theta_{gh}(a)= (gh)^{-1}a(gh) = h^{-1}g^{-1}agh = h^{-1}\theta_g(a)h=\theta_h(\theta_g(a))=(\theta_h \circ \theta_g)(a)$. This is true $\forall a \in G \space \space \therefore\space \space \theta_{gh} = \theta_{h} \circ \theta_{g}$. -2) Define a map $\phi: G \rightarrow \operatorname{Aut}(G)$ by $\phi(g)= \theta_{g^-1}$. Show that $\phi$ is a homomorphism. -Answer: We need to show that $\theta_{gh^{-1}} = \theta_{g^{-1}} \circ \theta_{h^{-1}} $. 
However we know that $\theta_{gh} = \theta_h \circ \theta_g$ and therefore we can conclude that $\theta_{gh^{-1}} = \theta_{h^{-1}g^{-1}} = \theta_{g^{-1}} \circ \theta_{h^{-1}} $. -3) Find $\ker(\phi)$. -Answer: $\theta_g(a) = g^{-1}ag \space \therefore \space \theta_{g^{-1}}(a) = gag^{-1}$. To find $\ker(\phi)$ find $\theta_{g^{-1}}(a)=a$ so we have $gag^{-1}=a$ and then $gag^{-1}g = ag$ so we have $ga=ag$ and so $\ker(\phi)= Z(G)$ where $Z(G)$ is the centre, all the elements in the group that commute with all other elements in the group. -And finally, question 4: The image of $\phi$ is called $ \operatorname{Inn}(G)$, the group of inner automorphisms. Show that $G/Z(G) \cong \operatorname{Inn}(G)$. -I can't work out the answer to this final question. I apologise for the length, I just wanted to explain in as much detail as possible. Can anyone help me? - -REPLY [4 votes]: Given the other answers, question 4 follows directly from the isomorphism theorem: $$G/\ker\phi \cong \phi(G)$$<|endoftext|> -TITLE: Generating function for the number of ways of writing an integer as a sum of distinct integers from a finite set -QUESTION [7 upvotes]: Let $A$ be a finite set of integers. The generating function for the number of ways of writing a given integer $n$ as the sum of $k$ elements from $A$ not necessarily distinct is given by: -$$\left(\sum_{a \in A}{x^a}\right)^k=\sum_n{r(n,k)x^n}$$ -Is there a generating function for the number of ways of writing an integer $n$ as a sum of $k$ distinct elements of $A$? - -REPLY [9 votes]: Using the Polya Enumeration Theorem (PET) the closed form is given by -$$[z^n] Z(P_k)\left(\sum_{a\in A} z^a \right)$$ -where $Z(P_k) = Z(A_k)-Z(S_k)$ is the difference between the cycle -index of the alternating group and the cycle index of the symmetric -group. This cycle index is known in species theory as the set operator -$\mathfrak{P}_{=k}$ (unlabeled) and the species equation here is -$$\mathfrak{P}_{=k}\left(\sum_{a\in A} \mathcal{Z}^a\right).$$ - -Recall the recurrence by Lovasz for the cycle index $Z(P_k)$ of -the set operator $\mathfrak{P}_{=k}$ on $k$ slots, which is -$$Z(P_k) = \frac{1}{k} \sum_{l=1}^k (-1)^{l-1} a_l Z(P_{k-l}) -\quad\text{where}\quad -Z(P_0) = 1.$$ -This recurrence lets us calculate the cycle index $Z(P_n)$ very -easily. For example when $n=3$ the cycle index is -$$Z(P_3) = -1/6\,{a_{{1}}}^{3}-1/2\,a_{{2}}a_{{1}}+1/3\,a_{{3}}.$$ -These cycle indices are also given by the exponential formula, which -says that -$$Z(P_k) = [w^k] -\exp\left(a_1 w - a_2 \frac{w^2}{2} + a_3 \frac{w^3}{3} -- a_4 \frac{w^4}{4} + \cdots\right).$$ -For example suppose $A$ consists of powers of two. By inspection we -should get from -$$\sum_{k\ge 0} [w^k] -\exp\left(\sum_{l\ge 1} (-1)^{l-1} a_l \frac{w^l}{l}\right)$$ -evaluated at -$$a_l = \sum_{q\ge 0} z^{l2^q} -\quad\text{the value}\quad -\frac{1}{1-z}.$$ -And indeed we get for the sum term -$$\sum_{l\ge 1} (-1)^{l-1} \frac{w^l}{l} \sum_{q\ge 0} z^{l2^q} -= \sum_{q\ge 0} \sum_{l\ge 1} (-1)^{l-1} z^{l2^q} \frac{w^l}{l} -= \sum_{q\ge 0} \log (1+wz^{2^q}).$$ -We obtain -$$\sum_{k\ge 0} [w^k] \prod_{q\ge 0} (1+wz^{2^q}) -= \left. 
\prod_{q\ge 0} (1+wz^{2^q}) \right|_{w=1} -\\ = \prod_{q\ge 0} (1+z^{2^q}) -= \frac{1}{1-z}.$$ -The reader is invited to verify that the conversion from the -exponential formula to the product representation of the set operator -does in fact always carry through independent of the choice of $A$ so -that we get -$$[w^k] \prod_{a\in A} (1+wz^a).$$ -Here is some Maple code to explore these cycle indices. We would -always prefer the recurrence in a practical setting. - -pet_cycleind_set := -proc(k) -option remember; - - if k=0 then return 1; fi; - - expand(1/k*add((-1)^(l-1)*a[l]* - pet_cycleind_set(k-l), l=1..k)); -end; - -pet_cycleind_set2 := -proc(k) -option remember; -local gf; - - gf := exp(add((-1)^(l+1)*a[l]*w^l/l, l=1..k)); - - coeftayl(gf, w=0, k); -end; - -Remark. We can derive the recurrence from the exponential formula. -Introducing -$$G(w) = -\exp\left(\sum_{l\ge 1} (-1)^{l-1} a_l \frac{w^l}{l}\right)$$ -we differentiate to obtain -$$G'(w) = G(w) -\left(\sum_{l\ge 1} (-1)^{l-1} a_l w^{l-1}\right) -= G(w) \left( \sum_{l\ge 0} (-1)^{l} a_{l+1} w^{l}\right).$$ -Extracting coefficients we get -$$[w^k] G'(w) = (k+1) [w^{k+1}] G(w) -= \sum_{q=0}^k (-1)^q a_{q+1} [w^{k-q}] G(w) -\\ = \sum_{q=1}^{k+1} (-1)^{q-1} a_q [w^{k+1-q}] G(w).$$ -This is our recurrence precisely. -Remark, II. We may ask why the exponential formula is the OGF of -the set operator $\mathfrak{P}_{=k}$ and the multiset operator -$\mathfrak{M}_{=k}.$ This is obtained from the labeled species of -permutations being factored into disjoint cycles. Marking cycle sizes -with the variable $\mathcal{A}_q$ we obtain the species -$$\mathfrak{P}(\mathcal{A}_1 \mathfrak{C}_{=1}(\mathcal{W}) -+ \mathcal{A}_2 \mathfrak{C}_{=2}(\mathcal{W}) -+ \mathcal{A}_3 \mathfrak{C}_{=3}(\mathcal{W}) -+ \mathcal{A}_4 \mathfrak{C}_{=4}(\mathcal{W}) -+ \cdots).$$ -The exponential formula then follows. (This is an EGF which means the -coefficients include an inverse factorial, which produces an OGF for -the cycle indices as these are averaged over all $k!$ permutations.)<|endoftext|> -TITLE: If $f$ takes Cauchy sequence to Cauchy sequence then $f$ is continuous -QUESTION [8 upvotes]: If $f:X\to Y$ takes Cauchy sequence to Cauchy sequence then prove that $f$ is a continuous function. -Let $x_n$ be a sequence in $X$ such that $x_n\to x\implies x_n$ is Cauchy $\implies f(x_n)$ is Cauchy but that does not guarantee that $f(x_n) \to f(x)$ . -So how is the above result true. Please help. - -REPLY [16 votes]: If $(x_n) \rightarrow x$, make a new sequence $y_{2n} = x_n, y_{2n+1} = x$, so intersperse terms of the sequence with the limit. - -Show that $(y_n)$ is Cauchy. -So the sequence $(f(y_n)) = f(x_0), f(x), f(x_1), f(x),\ldots$ is Cauchy by assumption. -From this show that $f(x_n) \rightarrow f(x)$. - -For the last, there is a more general fact you might know: if a Cauchy sequence has a convergent subsequence (with limit $p$), the whole sequence converges to $p$ as well. But a direct proof is also easy enough.<|endoftext|> -TITLE: Show that a generalized knight can return to its original position only after an even number of moves -QUESTION [41 upvotes]: Source: German Mathematical Olympiad -Problem: - -On an arbitrarily large chessboard, a generalized knight moves by jumping p squares in one direction and q squares in a perpendicular direction, p, q > 0. Show that such a knight can return to its original position only after an even number of moves. - -Attempt: -Assume, wlog, the knight moves $q$ steps to the right after its $p$ steps. 
Let the valid moves for the knight be "LU", "UR", "DL", "RD" i.e. when it moves Left, it has to go Up("LU"), or when it goes Up , it has to go Right("UR") and so on. -Let the knight be stationed at $(0,0)$. We note that after any move its coordinates will be integer multiples of $p,q$. Let its final position be $(pk, qr)$ for $ k,r\in\mathbb{Z}$. We follow sign conventions of coordinate system. -Let knight move by $-pk$ horizontally and $-qk$ vertically by repeated application of one step. So, its new position is $(0,q(r-k))$ I am thinking that somehow I need to cancel that $q(r-k)$ to achieve $(0,0)$, but don't be able to do the same. -Any hints please? - -REPLY [24 votes]: This uses complex numbers. -Define $z=p+qi$. Say that the knight starts at $0$ on the complex plane. Note that, in one move, the knight may add or subtract $z$, $iz$, $\bar z$, $i\bar z$ to his position. -Thus, at any point, the knight is at a point of the form: -$$(a+bi)z+(c+di)\bar z$$ -where $a$ and $b$ are integers. -Note that the parity (evenness/oddness) of the quantity $a+b+c+d$ changes after every move. This means it's even after an even number of moves and odd after an odd number of moves. Also note that: -$$a+b+c+d\equiv a^2+b^2-c^2-d^2\pmod2$$ -(This is because $x\equiv x^2\pmod2$ and $x\equiv-x\pmod2$ for all $x$.) -Now, let's say that the knight has reached its original position. Then: -\begin{align} -(a+bi)z+(c+di)\bar z&=0\\ -(a+bi)z&=-(c+di)\bar z\\ -|a+bi||z|&=|c+di||z|\\ -|a+bi|&=|c+di|\\ -\sqrt{a^2+b^2}&=\sqrt{c^2+d^2}\\ -a^2+b^2&=c^2+d^2\\ -a^2+b^2-c^2-d^2&=0\\ -a^2+b^2-c^2-d^2&\equiv0\pmod2\\ -a+b+c+d&\equiv0\pmod2 -\end{align} -Thus, the number of moves is even. - -Interestingly, this implies that $p$ and $q$ do not need to be integers. They can each be any real number. The only constraint is that we can't have $p=q=0$.<|endoftext|> -TITLE: Conjecture about natural number satisfying $ m(n)^k+1\space\mid\space n^{2k}+1 $ -QUESTION [11 upvotes]: Let $m(n)$ be the greatest proper divisor of $n$. Is there any number $n≥2$ not of the form $p$ or $p^3$ for $p$ prime that satisfies -$$ -m(n)^k+1\space\mid\space n^{2k}+1 -$$ -for all natural numbers $k$? -I haven't found any of them, but I reduced it to the case where $n=pq$ for $p$, $q$ prime and with $pn$ be a prime with $\left(\frac{m}{r}\right)=-1$.$^\dagger$ -Then by Euler's criterion -$$ -m^{\frac{r-1}{2}} \equiv -1 \pmod r \\ -r \mid m^{\frac{r-1}{2}}+1 -$$ -But -$$ -n^{r-1}+1 \equiv 2 \pmod r -$$ -and hence -$$ -m^{\frac{r-1}{2}}+1 \not\mid n^{r-1}+1 -$$ -Hence if $n$ is not of the form $p$ or $p^3$ with $p$ prime, then the condition cannot be satisfied for all $k$. -$\dagger$ Given $m$ prime we can always find a prime $r>n$ with $\left(\frac{m}{r}\right)=-1$. Let $b$ be any quadratic nonresidue mod $m$. By the Chinese Remainder Theorem we can find $r_0$ with $r_0 \equiv 1 \pmod {4}$ and $r_0 \equiv b \pmod m$. Then by Dirichlet's theorem there is a prime $r>n$ with $r\equiv r_0 \pmod {4m}$, and by quadratic reciprocity -$$ -\left(\frac{m}{r}\right) = \left(\frac{r}{m}\right) = \left(\frac{b}{m}\right) = -1 -$$<|endoftext|> -TITLE: Numbers whose powers are almost integers -QUESTION [17 upvotes]: Some real numbers $\alpha$ have the property that their powers get ever closer to being integers -- more precisely, that -$$ \lim_{n\to\infty} \alpha^n-[\alpha^n] = 0 $$ -where $[\cdot]$ is the round-to-nearest-integer function. -This is trivially the case when $\alpha$ is itself an integer as well as when $|\alpha|<1$. 
But there are also other numbers with this property, such as $\frac{1+\sqrt 5}{2}$ (the golden ratio), $2+\sqrt3$, or $\frac{5+\sqrt{13}}2$. (The trick for each of these is that $\alpha^n+\beta^n$ solves a second-order integer recurrence, where $|\beta|<1$). -Just to be sure this is not trivial, there numbers without this property, such as $\sqrt k$ for any nonsquare integer $k$. -Is there a name for this property? Or a general theory of such numbers? Are there nontrivial ones that are not quadratic over $\mathbb Q$? - -REPLY [8 votes]: they are called Pisot numbers, after a 1938 thesis, though Thue in 1912 and Hardy in 1919 also noticed them. Pisot characterized them in a rather beautiful theorem. -here's a wikipedia article https://en.wikipedia.org/wiki/Pisot%E2%80%93Vijayaraghavan_number<|endoftext|> -TITLE: Map Laplacian in terms of covariant derivatives -QUESTION [5 upvotes]: I stumbled upon the following definition: - -Let - -$\mathcal{M}$ be a manifold, -$g_{ij}$, $h_{ij}$ be two Riemannian metrics on $\mathcal{M}$, -$\psi : \mathcal{M} \to \mathcal{M}$ be a twice differentiable map, -$C_1$, $C_2$ be two charts on $\mathcal{M}$, -$\Psi := C_2 \circ \psi \circ C_1^{-1}$. - -The map Laplacian $\Delta_{g,h}$ is defined by $$ (C_2 \circ - (\Delta_{g,h} \psi) \circ C_1^{-1})^q := g^{ij}\left( \partial_i - \partial_j \Psi^q - - \Gamma(g)^k_{ij} \, \partial_k \Psi^q - + (\Gamma(h)^q_{mn} \circ \Psi) \, \partial_i \Psi^m \, \partial_j \Psi^n \right) . $$ - -Basically, my question is how to make sense of the above beast, but since this is probably too broad, I would like to narrow it down to: can the above formula be expressed in terms of covariant derivatives? - -What I got so far: -I recognise the term $\partial_i \partial_j \Psi^q - \Gamma(g)^k_{ij}\, \partial_k \Psi^q = \nabla_i \nabla_j \Psi^q$ as the second covariant derivative of $\Psi^q$ such that $g^{ij} \nabla_i \nabla_j \Psi^q = \nabla^i \nabla_j \Psi^q$ can be interpreted as the Laplacian of $\Psi^q$. Since $\Psi^q$ depends on the chart $C_2$, I also expect there to be some term to correct for that, which is probably the last one, but I cannot make that last statement any more precise. - -Update: Another question would be whether it is possible to give a physical intuition for this operator. For example, if $\mathcal{M} = \mathbb{R}^3$, the map Laplacian becomes the vector Laplacian appearing in the Navier-Stokes equation where it describes the friction in the fluid (diffusion of momentum). This intuition does not generalise to the situation at hand, however, because $\psi$ is a map from the manifold onto itself, not from the tangent space (which is $T\mathcal{M} = \mathbb{R}^3 = \mathcal{M}$ for $\mathcal{M} = \mathbb{R}^3$) onto itself. - -REPLY [3 votes]: The map Laplacian has a very simple coordinate free expression - it is (as you'd expect for something called the Laplacian) the trace of the second derivative: -$$ \Delta_{g,h} \psi = {\rm tr}_g \nabla D \psi.$$ -The complexity here is hidden in that $\nabla$: we need a covariant derivative that can act on $D \psi$. We can interpret $D \psi$ as a section of the bundle $T^* M \otimes \psi^* TM$, which can be naturally equipped with the tensor product connection formed from $\nabla^g$ and $\psi^* \nabla^h$. 
The Christoffel symbols of this connection have components coming from both factors ($\Gamma(g)$ directly and $\Gamma(h)$ via $D\psi$), and thus you get the two correction terms in the coordinate formula.<|endoftext|> -TITLE: The Antipodal Map is Orientation Preserving iff $n$ is Odd -QUESTION [9 upvotes]: The following result is well-known. - -Theorem. The antipodal map on $S^n$ is orientation preserving if and only if $n$ is odd. - -Below I provide a proof in which there must be an error since I reach to a wrong conclusion. Can somebody please point out the error? -Let $a:S^n\to S^n$ be the antipodal map on $S^n$ and $A$ be the antipodal map on $\mathbf R^{n+1}$. -Consider the following diagram: - -Let $\Omega=dx_1\wedge \cdots \wedge dx_{n+1}$ be an orientation form on $\mathbf R^{n+1}$. -Let $X$ be the vector field on $\mathbf R^{n+1}$ defined as -\begin{equation*} -X = \sum_{i=1}^{n+1} x_i \frac{\partial}{\partial x_i} -\end{equation*} -This vector field is nowhere tangent to $S^n$. -The standard orientation on $S^n$ is determined by the contracting $\Omega$ using $X$ and restricting it to $S^n$. -Thus the $n$-form $\omega=i^*(X\lrcorner \Omega)$ determines the standard orientation on $S^n$. -Now we have -$$ -\begin{array}{rcl} -a^*\omega &=& a^*(i^*(X\lrcorner \Omega))\\ -&=& (i\circ a)^*(X\lrcorner \Omega)\\ -&=& (A\circ i)^*(X\lrcorner \Omega)\\ -&=& i^*(A^*(X\lrcorner \Omega)) -\end{array} -$$ -Note that by the very definition of $A$, we have $A^*\eta=(-1)^{k} \eta$ for any $k$-form $\eta$ on $\mathbf R^{n+1}$. -Thus $A^*(X\lrcorner \Omega)=(-1)^{n}(X\lrcorner \Omega)$ and we get $a^*\omega=(-1)^n\omega$. -Therefore $a$ is orientation preserving if $n$ is even and orientation reversing if $n$ is odd. -What is the mistake? - -REPLY [17 votes]: Your mistake is the following statement: - -Note that by the very definition of $A$, we have $A^*\eta=(-1)^{k}\eta$ for any $k$-form $\eta$ on $\mathbf R^{n+1}$. - -This is not true. Any $k$-form is a sum of terms like $f\,dx^{i_1}\wedge\dots\wedge dx^{i_k}$ where $f$ is some smooth function. If $\eta$ is such a form, then the pullback of $\eta$ by $A$ is -$$ -A^*\eta = (f\circ A) (-1)^k dx^{i_1}\wedge\dots\wedge dx^{i_k}. -$$ -This is equal to $(-1)^k\eta$ if and only if the coefficient function $f$ is invariant under $A$. -For example, consider the $1$-form $\eta = x^1\, dx^1$. Then $A^* \eta = \eta\ne (-1)^1\eta$. -One way to see what happens when you pull back $X\lrcorner \Omega$ is to note that $A_*X=X$ and $A^*\Omega = (-1)^{n+1}\Omega$. Thus we can compute what the pullback form does to an $n$-tuple of vector fields: -\begin{align*} -(A^* ( X\lrcorner\Omega)) \left( V_1,\dots,V_n\right) -&= (X\lrcorner\Omega) \left( A_*V_1,\dots, A_*V_n\right)\\ -&= \Omega \left( X, A_*V_1,\dots, A_*V_n\right)\\ -&= \Omega \left( A_*X, A_*V_1,\dots, A_*V_n\right)\\ -&= (A^*\Omega) \left( X, V_1,\dots, V_n\right)\\ -&= (-1)^{n+1}\Omega \left( X, V_1,\dots, V_n\right)\\ -&= (-1)^{n+1}(X\lrcorner\Omega) \left( V_1,\dots, V_n\right). -\end{align*} -It follows that $A^*(X\lrcorner \Omega) = (-1)^{n+1}X\lrcorner \Omega$.<|endoftext|> -TITLE: Intuitive or visual understanding of the real projective plane -QUESTION [10 upvotes]: If we take the definition of a real projective space $\mathbb{R}\mathrm{P}^n$ as the space $S^n$ modulo the antipodal map ($x\sim -x$), it is possible to see that $\mathbb{R}\mathrm{P}^1$ is topologically equivalent to the circle. It is equivalent to the upper half of the circle where the two end points are glued together - i.e. 
another circle. -Is there an intuitive or visual way of understanding the real projective plane, $\mathbb{R}\mathrm{P}^2$? By the same intuitive reasoning, it seems that $\mathbb{R}\mathrm{P}^2$ should be topologically equivalent to the upper half of $S^2$ with the antipodal equivalence on the rim. This is the definition of $\mathbb{R}\mathrm{P}^1$, where the antipodal map doesn't change the topology, so it seems that $\mathbb{R}\mathrm{P}^2$ should simply be the upper half of $S^2$ as well. -This is clearly wrong, however, since $\mathbb{R}\mathrm{P}^2$ cannot be imbedded in $\mathbb{R}^3$. What is the fault in this reasoning, and is there an intuitive or visual way to imagine the real projective plane? - -REPLY [12 votes]: There are some interesting constructive ways to visualize $\mathbb{R}\mathrm{P}^2$. I think the simplest way is as follow: - -First, we prepare a Whitney umbrella, which is homeomorphic to the upper half of $S^2$ : - -Let's check the points to quotient (be glued). - -After gluing, the red and blue paths should match their start points and the end points respectively so that their antipodal points are matched. -In the picture of half $S^2$ their start points are far from each others' start points now, while Whitney Umbrella has the two start points meet together. -Now we can just bend the umbrella and glue the edge: - -And get a cross-cap, which is homeomorphic to $\mathbb{R}\mathrm{P}^2$. - -I cannot color it with two color since it is not "two sided" now. - -Many books suggest Möbius strip for intuition. It is a little bit more difficult but more interesting to visualize that $\pi_1(\mathbb{R}\mathrm{P}^2)$ is isomorphic to $\mathbb{Z}_2$.<|endoftext|> -TITLE: $p$-Splittable Integers -QUESTION [12 upvotes]: Let $p$ be a positive integer. For each nonnegative integer $k$, write $[k]$ for the set $\{0,1,2,\ldots,k\}$. Also, we define $[-1]:=\emptyset$. We say that an integer $k\geq -1$ is $p$-splittable if there is a partition of $[k]$ into $p$ subsets $A_1$, $A_2$, $\ldots$, $A_p$ such that $\sum_{x\in A_1}\,x=\sum_{x\in A_2}\,x=\ldots=\sum_{x\in A_p}\,x$ (i.e., these sets have the same sum). Such a partition $\left\{A_1,A_2,\ldots,A_p\right\}$ is called a $p$-splitting of $[k]$. -What are all $p$-splittable integers for a given $p$? How many $p$-splittings of $[k]$ are there for each of these available $k$'s? If the exact number of $p$-splittings of $[k]$ is not easily computable, then what is the asymptotic answer? - -Clearly, $k=-1$ and $k=0$ are $p$-splittable. We can also ignore the trivial case $p=1$. We know that, if $p=2$, then all $p$-splittable numbers are of the forms $4t-1$ and $4t$, where $t$ is a nonnegative integer. If $p$ is an odd prime, then all $p$-splittable numbers are integers of the forms $tp-1$ and $tp$, where $t\in\{0,2,3,4,\ldots\}$. -For $p=2$, we can show that the number of $2$-splittings of a $2$-splittable integer $k$ is given by the coefficient of $x^{\frac{k(k+1)}{4}}$ in the expansion of $\prod_{r=1}^k\,\left(1+x^r\right)$. For example, if $k=3$, there are two $2$-splittings of $[3]$, namely, $\big\{\{0,1,2\},\{3\}\big\}$ and $\big\{\{1,2\},\{0,3\}\big\}$, whereas $$\prod_{r=1}^3\,\left(1+x^r\right)=1+x+x^2+2x^3+x^4+x^5+x^6$$ whose coefficient of $x^{\frac{k(k+1)}{4}}=x^3$ is also $2$. Similarly, there are $2$, $8$, and $14$ $2$-spittings of $[k]$ for $k=4$, $k=7$, and $k=8$, respectively. I do not know if there is any closed form for this coefficient for an arbitrary $2$-splittable $k$. 
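-
-For anyone who wants to experiment with this count, here is a small sketch (my addition, not part of the original question; plain Python, with the coefficient extracted by a standard subset-sum dynamic program):
-
-# Number of 2-splittings of [k]: the coefficient of x^(k(k+1)/4) in
-# the product (1+x)(1+x^2)...(1+x^k).
-def two_splittings(k):
-    total = k * (k + 1) // 2
-    if total % 2:               # k(k+1)/2 odd, so [k] is not 2-splittable
-        return 0
-    # coeff[s] counts the subsets of {1,...,k} with sum s.
-    coeff = [0] * (total + 1)
-    coeff[0] = 1
-    for r in range(1, k + 1):
-        for s in range(total, r - 1, -1):
-            coeff[s] += coeff[s - r]
-    return coeff[total // 2]
-
-print([two_splittings(k) for k in (3, 4, 7, 8)])   # [2, 2, 8, 14]
-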
-I conjecture the following:
-(1) If $p$ is odd, then, for any $j\in\{-1,0,1,2,\ldots,p-2\}$ such that $p\mid j(j+1)$, every integer of the form $tp+j$, where $t\in\{2,3,4,\ldots\}$, is $p$-splittable, and nothing else (except $-1$ and $0$) is $p$-splittable.
-(2) If $p$ is even, then, for any $j\in\{-1,0,1,2,\ldots,2p-2\}$ such that $2p\mid j(j+1)$, every integer of the form $2tp+j$, where $t$ is a positive integer, is $p$-splittable, and nothing else (except $-1$ and $0$) is $p$-splittable.
-This conjecture is true, at least, if $p$ is a prime power (where $j=-1$ and $j=0$ are the only possible choices of $j$). If you can show that (1) holds for $t=2$ and for $t=3$, and that (2) holds for $t=1$, then you are done. It is worth noting that, if $k$ is $p$-splittable, then $k+2p$ is $p$-splittable.
-Inspiration: Partitioning $\{1,\cdots,k\}$ into $p$ subsets with equal sums
-P.S. I include $k=0$ and $k=-1$ for the sake of completeness. There is nothing subtle about these numbers.
-
-REPLY [3 votes]: likely last edit, formatting for clarity:
-We say $n$ is $p$-splittable if there is a partition $A_1,...,A_p$ of $\{1,...,n\}$ with $\sum A_i := \sum_{a \in A_i} a = \sum A_j$ for all $i,j\leq p$.
-We call $n$ uniformly $p$-splittable if there is such a partition with $|A_i|=|A_j|$ for all $i,j \leq p$.
-We call such a partition a $p$-split of $n$.
-Let $s_p(n)$ $(\bar{s}_p(n))$ denote the number of (uniform) $p$-splits of $n$.
-
-Some truths:
-$\bar{s}_p \leq s_p$
-If $n$ is $p$-splittable then $p\mid \frac{(n+1)n}{2}$
-If we have equality $s_p(n)=1$ (there is exactly one $p$-split of $n$)
-$2p$ and $2p-1$ are $p$-splittable, $2p$ uniformly.
-If $n$ is $p$-splittable and $p'\mid p$ then $n$ is $p'$-splittable and we obtain a lower bound of $s_{p'}(n)\geq \frac {p!}{(\frac{p}{p'}!)^{p'}}$ (the number of ways to make a $p$-split into a $p'$-split by melting groups of $\frac{p}{p'}$ together)
-(it may be possible to get a bound in terms of $s_p(n)$ if one can argue away double counting)
-If $n$ is uniformly $p$-splittable then $mn$ is uniformly $p$-splittable for all $m\in \mathbb{N}$
-If $n$ is $p$-splittable and $k$ is uniformly $p$-splittable, then $n+k$ is $p$-splittable
-If $m,n$ are uniformly $p$-splittable then $m+n$ is uniformly $p$-splittable
-
-The $p$-splittable numbers are then a finite union of (affine) copies of the uniformly $p$-splittables and are generated by finitely many primitive $p$-splittable numbers; a trivial upper bound on the count of primitives is the smallest non-zero uniformly $p$-splittable number $2p$. A better bound is achieved by $\#\{j \in \{-1,0,...,2p-2\} \text{ with } 2p\mid j(j+1)\}$.
-It is conjectured this bound is exact and the primitive $p$-splittables are of the form $2p+j$ for $j$ in this set. (This is numerically verified for $p\leq 184$.)
-If $n$ is uniformly $p$-splittable then $n$ must be a multiple of $p$. I will give a full categorization:
-Let $p \in \mathbb{N}$; then the uniformly $p$-splittables are $p \mathbb{N} \setminus \{p\}$ if $p$ is odd and $2p \mathbb{N}$ if $p$ is even (or $\mathbb{N}$ for $p=1$).
-proof: It is apparent that $2p$ is uniformly $p$-splittable and $p$ is not. It suffices to show that for odd $p$, $3p$ is uniformly $p$-splittable, and that for even $p$ no odd multiple is.
-Let $p$ be even and $k$ odd. Then $\frac{kp(kp+1)}{2}$ is not a multiple of $p$ (it is an odd multiple of $\frac{p}{2}$), so $kp$ is not even $p$-splittable.
-Let $p$ be odd:
-let $[i]$ denote $i \bmod p$.
-Then a uniform $p$-split of $3p$ is given by
-$A_i = \{ 1+[i], (p+1)+[\frac{p-1}{2}+i], 2p+1+[p-1-2i]\}$ for $i=0,...,p-1$.
-Obviously each $A_i$ has 3 elements and each number $\leq 3p$ is represented. We need to show $[i]+[\frac{p-1}{2}+i]+[p-1-2i]$ is independent of $i$.
-This is true because for $i \leq (p-1)/2$ all the arguments are in $(0,...,p-1)$, and for $(p+1)/2 \leq i \leq p-1$ we have $[\frac{p-1}{2}+i]=\frac{p-1}{2}+i-p$ and $[p-1-2i]=p-1-2i+p$; in each case the sum is equal to $\frac{3(p-1)}{2}$. $\square$
-$\bar{s}_p(p^2) \geq (p-1)!$ (by construction of $(p-1)!$ uniform $p$-splits of $p^2$)
-
-Things to look into:
-
-The original conjecture. It can be achieved by
-
-simply constructing $p$-splits for $2p+j$ for the given $j$ (note $j=-1,0$ are trivial) or
-Constructing an algorithm for it. Problems here lie in proving the algorithm finishes; this leads into the study of how to split a set into heaps of (differing) given sizes (splits would be the special case where all heaps have equal size)
-Note that the requirement on $j$ is equivalent to $j \equiv -1$ or $0 \bmod p_i$ for all prime powers $p_i\mid p$ (note this proves the conjecture for prime $p$ after giving a split for $3p-1$ ($p$ odd) like here)
-
-Said study of "fitting" a set into a given tuple of integers $(d_1,...,d_n)$ (i.e. finding a partition with $\sum A_i = d_i$). Here the sets of the form $\{1,...,n\}$ are of special importance for proving things about splittability
-Study of splittability for sets $A\subset \mathbb{N}$, $A \neq \{1,...,n\}$
-Study of $s_p$ and/or $\bar{s}_p$; I didn't look into the number of splits except when a bound just fell out of a proof.<|endoftext|>
-TITLE: What does it mean for a sequence of sheaves to be exact
-QUESTION [9 upvotes]: Let $F, G, H$ be sheaves on a topological space $X$, and let $$F \xrightarrow{\alpha} G \xrightarrow{\beta} H $$ be morphisms of sheaves. By definition, $\textrm{Ker } \beta$ is the subpresheaf $U \mapsto \textrm{Ker}(\beta(U))$ of $G$, and it is in fact a sheaf. If $O$ is the subpresheaf of $G$ given by $U \mapsto \textrm{Image}(\alpha(U))$, then $\textrm{Im } \alpha$ is defined to be any sheafification of $O$. Let $\theta: O \rightarrow \textrm{Im } \alpha$ be a morphism of presheaves such that the pair $(\textrm{Im } \alpha, \theta)$ satisfies the universal property for sheafification.
-I've seen the definition that the sequence above is exact if $\textrm{Ker } \beta = \textrm{Im } \alpha$, but I'm confused as to what this really means. I am aware that the sheafification $(\textrm{Im } \alpha, \theta)$ may be chosen to literally be a subsheaf of $G$ (from the fact that $\theta$ induces an isomorphism on the stalks, and a morphism of sheaves is injective on the sections if and only if it is injective on the stalks). But even so, any sheafification of $O$ which is a subsheaf of $G$ need not be uniquely determined (as far as I can see). Can anyone clarify what it means for a sequence of sheaves to be exact?
-
-REPLY [3 votes]: Let's take an example, a piece of the Poincaré sequence on a smooth manifold $M$
-$$ \Omega^k_M \overset{d_k}{\to}\Omega^{k+1}_M \overset{d_{k+1}}{\to} \Omega^{k+2}_M$$
-It is exact at the term $\Omega^{k+1}_M$. This means two things:
-First, $d_{k+1}\circ d_k=0$.
-Second, for every open subset $U$ of $M$ and $\omega \in \Omega^{k+1}_M(U)$ a $k+1$-differential form with $d \omega = 0$, there exists a covering $V_i$ of $U$ and $\eta_i \in \Omega^k_M(V_i)$ so that -$d_k(\eta_i)= \omega_{|V_i}$ ( any closed form is ${\it locally}$ exact).<|endoftext|> -TITLE: Continuous comparison between two Finsler norms? -QUESTION [7 upvotes]: Let $(E,F_1)$ be a Finsler vector bundle over a manifold $M$. (See precise definition below). -Let $F_2$ be another Finsler function (norm) on $E$. -For any $p \in M \, , \, F_1|_{E_p}:E_p \to \mathbb{R}$ is a norm on a finite dimensional vector space. Hence the corresponding unit sphere $\mathbb{S^p_1}=\{v_p \in E_p |F_1(v_p)=1 \}$ is compact. So $F_2$ attains a minimium on $\mathbb{S^p_1}$. Thus we obtain a function $m:M \to \mathbb{R}$ via $m(p) = \min\{F_2(v_p) | v_p \in \mathbb{S^p_1}\}$ -Question: -Is $m$ always continuous? -Remarks: -(1) I am quite sure $m$ is not smooth in general. For example, if $E=TM$ and $F_i$ are the Finsler norms induces by two Riemannian metric $g_1,g_2$, then a calculation here shows that $m(p)=\min \lambda(G)$ where $G$ is the component matrix $g_{ij}$ of one metric w.r.t an orthonormal frame of the other. -(2) We cannot always choose a continuous minimizing section $s:M \to E$ ,i.e $s$ such that: $s_p \in \mathbb{S^p_1} \, , \, F_2(s_p)=m(p)$ -(This follows from the above example together with this answer) -Of course, when such a continuous choice as described in (2) is possible this imediately implies continuity of $m$. - -A Finsler vector bundle is a (smooth) vector bundle $E$ over a (smooth) manifold $M$ together with a Finsler function $F : E \to \mathbb{R}$ such that for every vector $v \in E$: -(1) $F$ is smooth on the complement of the zero section of $E$. -(2) $F(v) \ge 0$ with equality if and only if $v = 0$ (positive definiteness). -(3) - $F(\lambda v) = |\lambda| F(v)$ for all $\lambda \in \mathbb{R}$ (homogeneity). -(4) -$F(v + w) \le F(v) + F(w)$ for every $w$ which is in the same fiber with v (subadditivity). - -REPLY [3 votes]: Answer: -Yes, $m$ is always continuous. -Remark: -Your definition of a Finsler function is contradictory. -In part (3) you should either assume $F(\lambda v) = |\lambda| F(v)$ or restrict to $\lambda\geq0$. -Some authors do not require Finsler functions to be symmetric. -Details: -Suppose $m$ is not continuous. -Then there is $p\in M$ and a sequence $(p_i)$ converging to $p$ so that $\lim_{i\to\infty}m(p_i)\neq m(p)$. -A priori (from considerations of continuous of functions on metric spaces alone), the limit $\ell:=\lim_{i\to\infty}m(p_i)$ can be anything on $[0,\infty]$. -For each $i$, let us pick $v^i\in E_{p_i}$ so that $F_1(v^i)=1$ and $F_2(v^i)=m(p_i)$. -We can identify the tangent spaces near $p$ (this comes from the very definition of a bundle) and pass to a subsequence to assume that the vectors $v^i$ converge to a limit $v\in E_p$. -Since $F_1$ and $F_2$ depend continuously on the base point, we have $F_1(v)=1$ and $F_2(v)=\ell\in(0,\infty)$. -By the definition of $m$ we must have $\ell\geq m(p)$. -Since we have assumed $\ell\neq m(p)$ (to the end of finding a contradiction), we can conclude that $m(p)<\ell<\infty$. -Let $u\in E_p$ be a vector with $F_1(u)=1$ and $F_2(u)=m(p)$. -With the aforementioned identification we can consider $u=u^i$ as a vector on $E_{p_i}$ as well. -It follows from the continuous dependence of $F_1$ and $F_2$ on the base point that $\lim_{i\to\infty}F_1(u^i)=1$ and $\lim_{i\to\infty}F_2(u^i)=m(p)$. 
-But $F_2(u^i)\geq F_1(u^i)m(p_i)$, so -$$ -m(p) -= -\lim_{i\to\infty}F_2(u^i) -\geq -\lim_{i\to\infty}F_1(u^i) -\times -\lim_{i\to\infty}m(p_i) -= -1\times\ell -> -m(p). -$$ -This is impossible. -(This proof could have been organized differently, of course. I hope this conveys the key ideas clearly enough.)<|endoftext|> -TITLE: Graphical interface in GAP -QUESTION [5 upvotes]: Is there any graphical interface in GAP? Something like RStudio for R or WxMaxima for Maxima. I'm using GAP under a Linux system. -Thanks - -REPLY [4 votes]: I want to follow up on Alexander's answer. Gap.app, which is one of the Undeposited Implementations for GAP that Alexander mentions briefly, is back in active development, with a new release this week. It is a front-end and GAP distribution for macOS. It fully supports the xgap library, and also does some other useful things like provide easy save and load of sessions, command completion, etc. See -https://cocoagap.sourceforge.io/ . -(Disclosure: I am the author of this program.) -Here's a nice screenshot from Samuel Lelièvre: - -Unfortunately, if you're on Linux, Gap.app doesn't currently help you so much. Gap.app uses Objective C, and there is reasonable potential for compiling under Gnustep or similar. I have an undergraduate looking at that possibility, and it is possible that he will make some progress. -By the way, xgap still largely works. It uses the Xathena widgets, so does look and feel very dated. There is a bug in the current release of GAP that affects xgap (and prevents most display), but you can work around it by typing GAPInfo.TermEncoding:="latin1"; as your first command in a session. -One more thing. Alexander attributes xgap to Max Neunhöffer. The project was actually originated by Frank Celler, and taken over by Max Neunhöffer when Frank left mathematics. Now Max Neunhöffer has also left mathematics, and xgap maintenance is handled by Max Horn.<|endoftext|> -TITLE: A connected simple graph $G$ has $14$ vertices and $88$ edges. Prove $G$ is Hamiltonian, but not Eulerian. -QUESTION [6 upvotes]: A connected simple graph $G$ has $14$ vertices and $88$ edges. Prove $G$ is Hamiltonian, but not Eulerian. - -I almost feel like you have to prove these two parts separately. I understand that to be Hamiltonian a vertex tour (a path in which all vertices are touched once) is possible on the graph, and that to be Eulerian an edge tour(a path in which all edges are touched once). Although I am not sure how to construct this in the form a proof. I also know that you can test to see if a graph is Hamiltonian if each vertex has a degree $\ge$ $\frac{1}{2}p$ where $p$ is the number of vertices in the simple graph. - -REPLY [4 votes]: The Hamiltonian part is immediate if you use Ore's Theorem. Let $u$ and $v$ be two distinct vertices of $G$. If $\deg(u)+\deg(v)\leq 13$, then $G-\{u,v\}$ has at least $$88-13=75$$ edges, but it has $12$ vertices and $$\binom{12}{2}=66<75\,,$$ contradicting the assumption that $G$ is simple. Thus, $\deg(u)+\deg(v)\geq 14$ for any pair of distinct vertices $u$ and $v$ in $G$. -Hint for the Eulerian part. Prove that $G$ has at least $8$ vertices of degree $13$.<|endoftext|> -TITLE: Relationship among the function spaces $C_c^\infty(\Omega)$, $C_c^\infty(\overline{\Omega})$ and $C_c^\infty(\Bbb{R}^d)$ -QUESTION [7 upvotes]: I have seen the spaces $C_c^\infty(\Omega)$, $C_c^\infty(\overline{\Omega})$ and $C_c^\infty(\Bbb{R}^d)$ a lot in theorems regarding PDE where $\Omega$ denotes some open subset of $\Bbb{R}^d$. 
There is no doubt about the definitions of $C_c^\infty(\Omega)$ and $C_c^\infty(\Bbb{R}^d)$. But I'm not very clear about the relationships among these three spaces.
-Here are my questions:
-
-What is the definition for $C_c^\infty(\overline{\Omega})$? If one says it consists of functions $f:\overline{\Omega}\to\Bbb{R}$ such that $f$ is smooth (infinitely differentiable) and with compact support, my question concerns the value of $f$ on $\partial\Omega\setminus\Omega$. (I would really appreciate it if one could also come up with a reference for the definition of this space.)
-Could one come up with an example illustrating the difference between $C_c^\infty(\overline{\Omega})$ and $C_c^\infty(\Omega)$?
-Are $C_c^\infty(\overline{\Omega})$ and $C_c^\infty(\Bbb{R}^d)$ "essentially" the same? If so, how? (I've seen these two spaces in different books regarding the same theorem.)
-
-
-[Added:]
-In the book Navier-Stokes Equations--- Theory and Numerical Analysis by Temam, the author defines (page 3)
-
-Let $\mathcal{D}(\Omega)$ (or $\mathcal{D}(\overline{\Omega})$) be the space of $C^\infty$ functions with compact support contained in $\Omega$ (or $\overline{\Omega}$).
-
-I have replaced the symbol $\mathcal{D}$ with $C_c^\infty$ here.
-
-REPLY [2 votes]: Since it seems that the author of the book doesn't explicitly define what it means for a function defined on a closed set to be differentiable, I can only guess that for him differentiable functions on a closed set are restrictions of differentiable functions defined on a larger, open set, and so
-$$ \mathcal{D}(\overline{\Omega}) = \{ f \colon \overline{\Omega} \rightarrow \mathbb{R} \, | \, \exists g \in C^{\infty}(\mathbb{R}^n) \text{ s.t. } f = g|_{\overline{\Omega}}, \ \mathrm{supp}_{\overline{\Omega}} f \subset \subset \overline{\Omega} \}. $$
-Using this definition, we have $\mathcal{D}(\Omega) \subset \mathcal{D}(\overline{\Omega}) \subset C^{\infty}(\Omega)$. To see the difference between $\mathcal{D}(\Omega)$ and $\mathcal{D}(\overline{\Omega})$, take $\Omega = (0,1)$ and $g \colon \mathbb{R} \rightarrow \mathbb{R}$ smooth with compact support such that $g(x) = 1$ for $x \in [0,1]$. Then $g \notin \mathcal{D}((0,1))$ but $g \in \mathcal{D}([0,1])$. More generally, one can characterize $\mathcal{D}(\overline{\Omega})$ as
-$$ \mathcal{D}(\overline{\Omega}) = \{ f \colon \Omega \rightarrow \mathbb{R} \, | \, \exists g \in C^{\infty}(\mathbb{R}^n) \text{ s.t. } f = g|_{\Omega}, \,\, \mathrm{supp}_{\mathbb{R}^n} g \subset \subset \mathbb{R}^n \} $$
-so that functions in $\mathcal{D}(\overline{\Omega})$ are restrictions to $\Omega$ of functions in $\mathcal{D}(\mathbb{R}^n)$. I've found your notation $C^{\infty}_c(\overline{\Omega})$ used together with the latter definition in Elliptic Problems in Nonsmooth Domains (page 24) by Pierre Grisvard.
-If $\Omega = \mathbb{R}^n$, then $\mathcal{D}(\Omega) = \mathcal{D}(\overline{\Omega})$ as the notation suggests.<|endoftext|>
-TITLE: Easy way of memorizing or quickly deriving summation formulas
-QUESTION [13 upvotes]: My math professor recently told us that she wants us to be familiar with summation notation. She says we have to have it mastered because we are starting integration next week. She gave us a bunch of formulas to memorize. I know I can simply memorize the list, but I am wondering if there is a quick, intuitive way of deriving them on the fly. It has to be a really quick derivation because all of her tests are timed.
-Otherwise, is there an easy way you guys remember these formulas?
-$\begin{align}
-\displaystyle
-&\sum_{k=1}^n k=\frac{n(n+1)}{2} \\
-&\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6} \\
-&\sum_{k=1}^n k^3=\frac{n^2(n+1)^2}{4} \\
-&\sum_{k=1}^n k(k+1)=\frac{n(n+1)(n+2)}{3} \\
-&\sum_{k=1}^n \frac{1}{k(k+1)}=\frac{n}{n+1} \\
-&\sum_{k=1}^n k(k+1)(k+2)=\frac{n(n+1)(n+2)(n+3)}{4} \\
-&\sum_{k=1}^n \frac{1}{k(k+1)(k+2)}=\frac{n(n+3)}{4(n+1)(n+2)} \\
-&\sum_{k=1}^n (2k-1)=n^2
-\end{align}$
-Note: Sorry if there is an easy and obvious answer to this question. Most of the students in my class already know these formulas from high school, but I only went up to Algebra 2 and Trig when I was in high school. PS: This is for a calculus class in college.
-
-REPLY [9 votes]: I would like to share the way I ended up remembering these formulas. Most of them are geometric ways of remembering the summation formulas. I still like Raymond Manzoni's answer, so I will leave that as my accepted answer! He really helped me on my test.
-$\sum \:_{n=a}^b\left(C\right)=C\cdot \:\left(b-a+1\right)$: This is one where it is quite easy to remember by just understanding what the summation notation means. It is basically saying keep on adding $C$, so $C+C+C+C+C+...+C$. You are adding $C$ exactly $b-a+1$ times. So, $C\cdot \left(b-a+1\right)$.
-$\sum _{n=1}^k\left(n\right)=\frac{\left(1+k\right)\cdot \:k}{2}$: You can make a dot plot in the form of a triangle. The first column will have 1 dot, the second 2, the third 3, the fourth 4, and you will keep on doing it $k$ times. The total amount of dots in one triangle is what we want to find out. If we make two identical triangles of this format, flip one over, and overlap them, we get a rectangle with height $k$ and width $k+1$. But this gives us the total for two triangles. We only care about one, so we divide by two.
-
-$\sum _{n=1}^k\left(2n-1\right)=k^2$: For this, I like the picture Raymond gave. Look at his answer for the explanation of this.
-
-
-$\sum _{n=1}^k\left(n^2\right)=\frac{\left(\left(2k+1\right)\cdot \:\frac{k\left(k+1\right)}{2}\right)}{3}=\left(\frac{\left(2k+1\right)\cdot k\left(k+1\right)}{6}\right)$: Create squares: first $1\times 1$, $2\times 2$, $3\times 3$, ... until you get up to $k$ squares. Now, take three copies of each of your squares and arrange them as in the picture. Take two copies and arrange them in decreasing order upward. Now cut the third copy of each square and arrange the pieces in the middle. They are color coded so you can see that all these dots come from one of the squares. Now you form a new rectangle with base $2k+1$ and width $\sum _{n=1}^k\left(n\right)$. You already know $\sum _{n=1}^k\left(n\right)$ is equal to $\:\frac{k\left(k+1\right)}{2}$ from the triangle picture above. So now multiply length and width to find the area (total dots), and you get $\left(2k+1\right)\cdot \:\frac{k\left(k+1\right)}{2}$, but this counts three copies of each square. So we divide by 3, leaving us with $\frac{\left(\left(2k+1\right)\cdot \:\frac{k\left(k+1\right)}{2}\right)}{3}$
-
-
-$\sum \:_{n=1}^k\left(n^3\right)=\left(\sum \:\:\:_{n=1}^k\left(n\right)\right)^2=\left(\frac{\left(1+k\right)\cdot \:\:\:k}{2}\right)^2$: Draw a cube with $1^3$ dots, $2^3$ dots, $3^3$ dots, $4^3$ dots, ... till you get to $k$ cubes. Now rearrange them into squares and rectangles as shown in the picture. Every other square is cut in half, while the others are stacked by planes of the next layer. Odd layers: stack. Even layers: cut.
You get a resulting square of width $\sum _{n=1}^k\left(n\right)=\frac{\left(1+k\right)\cdot \:k}{2}$ and height $\sum _{n=1}^k\left(n\right)=\frac{\left(1+k\right)\cdot \:k}{2}$.<|endoftext|>
-TITLE: Does cross product have an identity?
-QUESTION [14 upvotes]: Does cross product have an identity? I.e. Does there exist some $\vec{id}\in \mathbb{R}^3$ such that
-$$\vec{id} \times \vec{v} = \vec{v}\times \vec{id} = \vec{v}
-$$
-for all $\vec{v}\in \mathbb{R}^3$?
-
-REPLY [6 votes]: Perhaps even easier. Suppose such a vector $\vec{id}$ exists. First note that $\vec{id} = 0$ does not work, so $\vec{id} \ne 0$.
-Applying the desired property with $\vec{v} = \vec{id}$ we get
-$$\vec{id} \times \vec{id} = \vec{id}$$
-By antisymmetry, any vector crossed with itself is $0$. So this is a contradiction.<|endoftext|>
-TITLE: Why is this set compact in $L^2(\mathbb{N})$?
-QUESTION [6 upvotes]: Suppose $L^{2}(\mathbb{N})$ is the Hilbert space of sequences
-
-$(a_{n})_{n \in \mathbb N}$ which satisfy $\sum |a_{n}|^{2}<\infty$, with inner product $(a,b) = \sum a_{n} \bar{b_{n}}.$
-
-Prove the set of sequences $\{a_{n}\}_n$ that satisfy the condition $|a_{n}| \le \frac{1}{n}$ for all $n$ is compact in $L^2(\mathbb{N})$.
-I'm rather stuck on this idea, and was curious if I could be given help.
-
-This is not for homework, but self-study. My initial thought would be to use the fact that
-
-suppose we have a compact subset $M$ of a metric space, then $M$ is bounded and complete.
-
-Something I'm trying to use is the Bolzano-Weierstrass property. Thanks!
-
-REPLY [4 votes]: You can use the following criterion of compactness: a metric space is compact if it is complete and, for every $\epsilon>0$, one can cover it by a finite number of balls of radius $\epsilon$.
-For completeness: your set is a closed subset of a complete space, being the intersection of the closed sets $\{\vert a_n\vert \leq 1/n\}$.
-For the balls, let $\epsilon$ be given and $n_0$ an integer such that $\sum _{n\geq n_0} 1/n^2 < \epsilon^2/100$. Note that every point in your set is at distance $\leq \epsilon/10$ from a point with $a_n=0$ for $n>n_0$ (truncate the tail and use the estimate above). Now cover the compact set $\{ (a_n) : a_n=0 \text{ for } n>n_0,\ \vert a_n\vert \leq 1/n \}$ by a finite set of balls of radius $\leq \epsilon /2$. The balls with the same centers and radius $\epsilon$ cover your set.<|endoftext|>
-TITLE: Models in set theory and continuum hypothesis
-QUESTION [11 upvotes]: Some days ago I had the opportunity to listen to a talk about model theory and connections with algebra and geometry. I'm not at all a specialist in this field so my question probably will be naive, but nevertheless I'll try to explain my doubts. As far as I understood: for example, if we consider the theory of groups, a model for such a theory will be a concrete group. For example the statement "there is some $g \in G$ such that $g^2=e$" is a true statement in every model ($g=e$ is ok), therefore this statement is a theorem in the theory of groups. On the other hand the statement "for all $g \in G$ we have $g^2=e$" is no longer a theorem of the theory of groups: one can find examples (models) of the theory where this is satisfied but also one can construct counterexamples. In this sense this statement is independent of the axioms of group theory. So far everything looks clear.
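-(Editorial aside, not part of the original question: the two group-theory statements above can be checked mechanically in small concrete models. A minimal Python sketch, writing $\mathbb{Z}/n$ additively so that $g^2$ becomes $2g$; the function names are just illustrative:)
-
-def exists_square_identity(n):   # "there is some g with g^2 = e" in Z/n
-    return any((2 * g) % n == 0 for g in range(n))
-
-def all_square_identity(n):      # "for all g, g^2 = e" in Z/n
-    return all((2 * g) % n == 0 for g in range(n))
-
-for n in (2, 3):
-    print(n, exists_square_identity(n), all_square_identity(n))
-# prints "2 True True" and "3 True False": the first statement holds in both
-# models, the second only in Z/2, so it is independent of the group axioms.
-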
But I have a problem in understanding what the situation looks like in the context of set theory and the ZFC axioms: for example, I know that the statement "the cardinality of the reals is the next cardinality after the cardinality of the set of natural numbers" is independent of the ZFC axioms. In other words one can construct two different models of set theory where in one model this statement is true but in the second it is false. What exactly does it mean "to construct a model of set theory"? Let me return to the previous example about groups: for group theory a model is a concrete group, so "to construct the model" means nothing more than "provide an example", but how should we understand this in the context of set theory as a whole? What exactly do we have to construct? I'm sure that this question will be naive from the point of view of a specialist; on the other hand I suspect that there are at least a few people who would like to know the answer to such a "meta" question.
-
-REPLY [12 votes]: To construct a model of set theory means to produce a set $A$ and a relation $R$ on $A \times A$ such that all the axioms of ZFC are satisfied if we take "set" to mean "element of $A$" and take "$a \in b$" to mean $aRb$.
-This is not actually any different from the case with groups. The signature is different, and the axioms are different, but the definition of "model" is the same.
-There is one complication, though. Although ZFC proves that there is a model of the group axioms, ZFC does not prove that there is a model of the ZFC axioms. One way we can get around this is by moving to a stronger system of set theory to construct the model of ZFC. For example, Kelley-Morse set theory proves that there is a model of ZFC. Another way is to simply assume there is one model, and use that to construct other models.
-Sometimes, in set theory, we allow a more general kind of model in which $A$ is a proper class and $R$ is a definable relation on pairs of elements of $A$. These are called "class models".<|endoftext|>
-TITLE: Hyperbolic or exponential solutions to differential equation
-QUESTION [8 upvotes]: I have spent the last couple weeks in my Fourier Analysis course solving PDEs with the method of separation of variables. However, I have come up with something that annoys me and I can't really explain it. Let me show an example.
-I have this problem here
-$$u_{xx}+u_{yy}=0 \\
-u(0,y) = u(1,y) = 0 \\
-u(x,0) = u(x,1) = \frac{x^3-x}{6}$$
-So I separate the variables and get these ODEs
-$$
-X''(x)+ \lambda^2 X(x) = 0 \\
-Y''(y)- \lambda^2 Y(y) = 0
-$$
-and these are simple to solve. The first one is just $X(x)= A\cos(\lambda x)+B \sin(\lambda x)$, and now comes the confusing part. For me the solution to the second equation has always been $Y(y)=Ce^{- \lambda y}+ De^{ \lambda y}$, but the book has suddenly started to use the hyperbolic functions $\cosh$ and $\sinh$. Why is that? Is it something I am missing?
-
-REPLY [10 votes]: The awesome thing about hyperbolic (trig) functions is how they can be represented as sums of exponentials (and vice versa). Recall that $\cosh(u) = \dfrac{e^u + e^{-u}}{2}$ and $\sinh(u) = \dfrac{e^u - e^{-u}}{2}$. It's just a matter of how you want to indicate the constants that come about from your initial conditions.
-For example, suppose you're used to the solution $Y(y) = Ce^{\lambda y} + De^{-\lambda y}$. Here's how you could express the same solution with hyperbolic trig functions (namely $\cosh$ and $\sinh$). Let $C = \frac{A+B}{2}$ and $D=\frac{A-B}{2}$.
Then
-$$
-Ce^{\lambda y} + De^{-\lambda y} \\
-= \frac{A+B}{2}e^{\lambda y} + \frac{A-B}{2}e^{-\lambda y} \\
-= \frac{A}{2}e^{\lambda y} + \frac{B}{2}e^{\lambda y} + \frac{A}{2}e^{-\lambda y} - \frac{B}{2}e^{-\lambda y} \\
-= \frac{A}{2}(e^{\lambda y} + e^{-\lambda y}) + \frac{B}{2}(e^{\lambda y} - e^{-\lambda y}) \\
-= A \frac{e^{\lambda y} + e^{-\lambda y}}{2} + B \frac{e^{\lambda y} - e^{-\lambda y}}{2} \\
-= A\cosh(\lambda y) + B \sinh(\lambda y).
-$$
-Again, notice that the only thing that changed was really how you defined the constants, with a relation from $C,D$ to $A,B$. I stress that these are direct results from your initial (or given) conditions.
-
-As far as your question from the comments, "Why write it in that way instead of just the exponentials?", it is really a matter of conveniently denoting the properties of a solution. In an analogous fashion, even your familiar sines and cosines are merely just one way of expressing solutions to almost identical differential equations.
-Since $\cos(u) = \dfrac{e^{iu}+e^{-iu}}{2}$ and $\sin(u) = \dfrac{e^{iu}-e^{-iu}}{2i}$ (where $i^2 = -1$), it depends on the context of your problem, or possibly just your preference, whether to write a solution as $P\cos(x)+Q\sin(x)$ or in the form $Ue^{ix}+Ve^{-ix}$. It is straightforward to relate the constants $P,Q$ to $U,V$ in the same way that $A,B$ and $C,D$ were related for the hyperbolic trig case.
-
-There are several examples I'd love to present to show why expressing your solutions as a sum of hyperbolic trig functions can be convenient. There are properties of hyperbolic trig functions that are so closely related to the familiar circular trig functions that manipulating them can be very natural (as opposed to working with a sum of 2 exponentials).
-For example, let's say your solution was expressed as $y(x) = C\cosh(mx)+D\sinh(mx)$. How would you find the $n$-th derivative of $y(x)$? Since $\frac{d}{dx}[\cosh{mx}] = m\sinh(mx)$ and $\frac{d}{dx}[\sinh{mx}] = m\cosh(mx)$, it's very easy to express the derivatives of $y(x)$:
-$$
-y(x) = y^{(0)}(x) = C\cosh(mx)+D\sinh(mx) \\
-y'(x) = y^{(1)}(x) = Cm\sinh(mx) + Dm\cosh(mx) \\
-y''(x) = y^{(2)}(x) = Cm^2\cosh(mx)+Dm^2\sinh(mx) \\
-y'''(x) = y^{(3)}(x) = Cm^3\sinh(mx) + Dm^3\cosh(mx) \\
-\vdots
-$$<|endoftext|>
-TITLE: Lipschitz continuity is equivalent to absolute continuity with bounded derivative
-QUESTION [7 upvotes]: I am trying to show that a function is Lipschitz continuous with constant $M$ if and only if it is absolutely continuous and $|f'(x)| \leq M$.
-I think I am on the right track:
-Proof:
-(=>) Let $f$ be Lipschitz continuous with constant $M$, i.e., $|f(x)-f(y)| \leq M|x-y|$ for all $x,y \in E=[a,b]$. Now we want to show absolute continuity: $\sum^n_{i=1}|f(x'_i)-f(x_i)|< \epsilon$ if $\sum^n_{i=1}|x'_i-x_i|< \delta$, for any finite collection of disjoint intervals $(x_i,x'_i)$.
-Suppose $\sum^n_{i=1}|x'_i-x_i|< \delta$; then we observe that $\sum^n_{i=1}|f(x'_i)-f(x_i)| \leq M \sum^n_{i=1}|x'_i-x_i|$ by Lipschitz continuity and the triangle inequality. Then define $\epsilon = M \delta$ and we are done.
-To see that $|f'(x)| \leq M$ we can just let $x'_i=x_i+h$ and we have $|f(x_i+h)-f(x_i)| \leq M|x_i+h-x_i|$, thus $\frac{|f(x_i+h)-f(x_i)|}{|h|}\leq M$, and if we take the limit and take it inside the absolute values we are done.
-(<=) Now assume absolute continuity and $|f'(x)|\leq M$. By another theorem we know that $f$ is absolutely continuous if and only if it is an indefinite integral, $f(x)= \int_a ^x f'(t)dt +f(a)$. We can manipulate this to $f(x)-f(a) \leq \int_a ^x M\,dt$. Now we can integrate the RHS to get $\int_a ^x M\,dt = M (x-a)$, so we have $f(x)-f(a) \leq M (x-a)$; we can take the absolute values of both sides and we are done.
-Does this seem correct?
-
-REPLY [7 votes]: Mostly correct. However, in the proof of $\Rightarrow$,
-
-"define $\epsilon=M\delta$" is not logical, since $\epsilon$ is given. You define $\delta=\epsilon/M$.
-Before proving $|f'(x)|\le M$ one has to discuss the existence of $f'(x)$. Cite the theorem saying that an absolutely continuous function is differentiable almost everywhere. The argument for $|f'(x)|\le M$ applies at the points of differentiability.
-
-And in the proof of $\Leftarrow$, "we can take the absolute values of both sides" is too hasty: $A\le B$ does not imply $|A|\le |B|$. Instead, follow the chain of inequalities again starting with $f'(x)\ge -M$ and arriving at $f(x)-f(a) \geq -M (x-a)$.<|endoftext|>
-TITLE: Exact sequence of sheaves if and only if exact on the stalks
-QUESTION [15 upvotes]: This is a follow-up question to something I asked earlier: What does it mean for a sequence of sheaves to be exact
-Let $F, G, H$ be sheaves on a topological space $X$, and let $$F \xrightarrow{\alpha} G \xrightarrow{\beta} H$$ be morphisms of sheaves. Let $\mathcal O = \textrm{Im}^{\textrm{pre}}(\alpha)$ be the presheaf $U \mapsto \textrm{Im }( \alpha(U))$, and let $\textrm{Im } \alpha$ be a sheafification of this presheaf. There is a canonical choice of $\textrm{Im } \alpha$ which is actually a subsheaf of $G$, which is directly obtained by using the universal property on any sheafification of $\mathcal O$, and does not depend on the specific choice of sheafification. Assuming $(\textrm{Im } \alpha, \theta)$ (where $\theta: \mathcal O \rightarrow \textrm{Im } \alpha$ is the universal map) is this canonical sheafification, we say that the sequence is exact if $\textrm{Im } \alpha$ is equal to the sheaf $\textrm{Ker } \beta$.
-I'm having trouble understanding the proof of the result that the sequence is exact if and only if the corresponding sequence on the stalks $F_x \xrightarrow{\alpha_x} G_x \xrightarrow{\beta_x} H_x$ is exact for all $x \in X$.
-For example, let me suppose that $\textrm{Im } \alpha = \textrm{Ker } \beta$. Let $i, i^+$ be the respective inclusion morphisms of $\mathcal O, \textrm{Im } \alpha$ into $G$. Since $i^+ \circ \theta = i$, and $i^+$ maps $\textrm{Im } \alpha$ onto the kernel of $\beta$, we have that also $i$ maps $\mathcal O$ into $\textrm{Ker } \beta$, i.e. $\textrm{Im } (\alpha(U)) \subseteq \textrm{Ker } \beta(U)$ for all $U$. Thus $\beta \circ \alpha$ is the zero morphism, which implies $\beta_x \circ \alpha_x = 0$ for all $x$. Now I'm having trouble seeing that the kernel of $\beta_x$ is contained in the image of $\alpha_x$.
-
-REPLY [3 votes]: You are left to show $Ker(\beta _x)\subset Im(\alpha_x)$.
-We have the sequence of abelian groups $F_x\xrightarrow{\alpha_x}G_x\xrightarrow{\beta_x}H_x$.
-Let us choose an element $g_x\in Ker(\beta_x)$; we need to show $g_x\in Im(\alpha_x)$, i.e., we need to find an element that $\alpha_x$ maps to $g_x$.
-There exist an open set $U\subset X$ containing $x$ and $g\in G(U)$ representing $g_x\in G_x$.
-Look at the commutative diagram
-$\require{AMScd}
-\begin{CD}
-G(U) @>\beta(U)>> H(U)\\
-@VVV @VVV\\
-G_x @>\beta_{x}>> H_x
-\end{CD}$
-$\beta_x(g_x)=0$ implies $(\beta(U)(g))_x=0$. Therefore, there exists some open set $W$ with $x\in W\subset U$ such that $(\beta(U)(g))|_W=0$.
-Let us restrict the previous commutative diagram:
-$\require{AMScd}
-\begin{CD}
-G(W) @>\beta(W)>> H(W)\\
-@VVV @VVV\\
-G_x @>\beta_{x}>> H_x
-\end{CD}$
-Here, $\beta(W)(g|_W)=0$, i.e., $g|_W \in \ker (\beta(W))$, which is equal to $\operatorname{Im}(\alpha)(W)$ by the assumed exactness.
-Now consider the sequence of sheaves $F\rightarrow \mathcal O \rightarrow \operatorname{Im} (\alpha) \rightarrow G$.
-Consider the composition $F\rightarrow \operatorname{Im}(\alpha)$.
-Use the fact $\operatorname{Im}(\alpha)_x=\operatorname{im}(\alpha_x)$ (Hartshorne, Chapter II, Ex. 1.2).
-Now, $g|_W\in \operatorname{Im}(\alpha)(W)$, i.e., $g_x\in \operatorname{Im}(\alpha)_x$; but $\operatorname{Im}(\alpha)_x=\operatorname{im}(\alpha_x) \implies g_x\in \operatorname{im}(\alpha_x)$. Hence proved.<|endoftext|>
-TITLE: Techniques for showing that a subgroup is not normal
-QUESTION [11 upvotes]: To show that a subgroup of a group is normal, I typically construct a homomorphism whose kernel is that subgroup. Are there any general principles or tests that I can use to determine that a subgroup is not normal? Or are such claims usually proved by ad hoc methods?
-In particular, I would like to show that $\textbf{SL}_n(\mathbb{Z})$ is not a normal subgroup of $\textbf{SL}_n(\mathbb{R})$.
-Any help is appreciated!
-
-REPLY [3 votes]: Given any subgroup $H$ of a group $G$, you can describe its conjugate $gHg^{-1}$ conceptually as the stabilizer of the coset $gH$ with respect to the action of $G$ on $G/H$. The intersection $\bigcap_{g \in G} gHg^{-1}$ of these conjugates is then the kernel of this action. You can sometimes identify in fairly explicit terms the action of $G$ on $G/H$, and from there it's sometimes easy to compute this intersection (and verify that it's smaller than $H$) even if it's hard to describe the conjugates of $H$ explicitly.
-In your case, we can say the following. In terms of the action of $SL_n(\mathbb{R})$ on $\mathbb{R}^n$, you can define $SL_n(\mathbb{Z})$ to be the subgroup stabilizing the standard lattice $\mathbb{Z}^n$ in $\mathbb{R}^n$. $SL_n(\mathbb{R})$ acts transitively on all lattices in $\mathbb{R}^n$ with stabilizer $SL_n(\mathbb{Z})$, and so the quotient $X = SL_n(\mathbb{R})/SL_n(\mathbb{Z})$ can be identified with the space of lattices in $\mathbb{R}^n$ (not up to rotation or scaling or anything). The nontrivial conjugates of $SL_n(\mathbb{Z})$ can then be identified with the stabilizers of other lattices in $\mathbb{R}^n$.
-This action is very close to being faithful: I think when $n$ is even the kernel is generated by $-I$ and when $n$ is odd the kernel is just trivial. This is just the statement that any matrix in $SL_n(\mathbb{R})$ that isn't $\pm I$ fails to stabilize at least one lattice.
If you believe that, it follows that the intersection of the conjugates of $SL_n(\mathbb{Z})$ in $SL_n(\mathbb{R})$ is either $\pm I$ or trivial, and so $SL_n(\mathbb{Z})$ is very far from normal.<|endoftext|>
-TITLE: Numerical Solutions of the Telegrapher's Equation
-QUESTION [5 upvotes]: I'm currently working on a brief report on the Telegrapher's Equation for my fractional calculus course. I am still new to Telegrapher's Equations, but I do know they are used to describe electrical signals traveling along a transmission cable (whether it's a coaxial cable, a microstrip, etc).
-Anywho, to make a long story short, I derived the Telegrapher's Equation upon analyzing the elementary components of a transmission line:
-\begin{eqnarray}
-\frac{\partial^2 v}{\partial t^2} + \frac{(LG+RC)}{LC}\frac{\partial v}{\partial t} + \frac{RG}{LC}v & = & \frac{1}{LC}\frac{\partial^2 v}{\partial x^2}
-\end{eqnarray}
-Where $v(x,t) = v$ is the voltage across the piece of wire, $R$ is the resistance, $L$ is the inductance, $G$ is the conductance, and $C$ is the capacitance. For simplicity, I will define $c= 1/\sqrt{LC}$, $a = c^2(LG+RC)$ and $b = c^2RG$; then we get:
-\begin{eqnarray}
- \frac{\partial^2 v}{\partial t^2} + a\frac{\partial v}{\partial t} + bv & = &c^2\frac{\partial^2 v}{\partial x^2}
-\end{eqnarray}
-This is where my question comes in: I'm still new to solving PDEs numerically, let alone fractional PDEs.
-Could someone help me better understand how to solve this problem numerically (not taking into consideration the fractional derivatives for now)? I assume we need boundary conditions. Sadly, I wouldn't know what the boundary conditions of a Telegrapher's Equation would be. I guess to keep things easier, we can assume we are dealing with a lossless system, therefore $R=G =0$. Thus our PDE simplifies greatly to:
-\begin{eqnarray}
- \frac{\partial^2 v}{\partial t^2} & = &c^2\frac{\partial^2 v}{\partial x^2}
-\end{eqnarray}
-which we can use to consider the following approximation:
-\begin{eqnarray}
-\frac{v(x,t+\Delta t)-2v(x,t)+v(x,t-\Delta t)}{\Delta t^2}& = & c^2\frac{v(x+\Delta x,t)-2v(x,t)+v(x-\Delta x,t)}{\Delta x^2}
-\end{eqnarray}
-Then solving for the future time (with $r = c\,\Delta t/\Delta x$) we get:
-\begin{eqnarray}
-v(x,t+\Delta t)& = & r^2[v(x+\Delta x,t)+v(x-\Delta x,t)] + 2(1-r^2)v(x,t)-v(x,t-\Delta t)
-\end{eqnarray}
-Now I have some rough idea of how to set up the solution in MATLAB, but since I'm still new to MATLAB and I don't have an idea of what a boundary condition of a telegrapher's equation could be (since that is not my field of study), I am terribly stuck and have not seemed to make as much progress as I would have liked.
-
-Anywho, I really thank you for all the time you have taken to read this post. I also thank you in advance for your contribution, help, feedback, and more.
-Have a wonderful day.
-
-REPLY [2 votes]: I can't help with the fractional derivatives, but I can with the "normal" PDE. What I usually try to do is use Matlab's ODE tools to take care of the time stepping, and only discretise the spatial derivatives. So, using $v_i(t)$ to denote the solution at the $i$-th spatial grid point, you get equations like this:
-$$ \frac{\partial^2v_i}{\partial t^2}+a\frac{\partial v_i}{\partial t}+bv_i=\frac{c^2}{\Delta x^2}\left(v_{i-1}-2v_i+v_{i+1}\right). $$
-Obviously you could use a different stencil if you want. You also have to think about the boundary conditions; I'll assume they are $v_0=\alpha$ and $v_N=\beta$.
-This gives you a matrix system of differential equations.
Forgetting the left hand side for a moment, notice that we can write the right hand side as a matrix multiplication, -$$\frac{c^2}{\Delta x^2}\begin{bmatrix}\alpha & 0 & 0 & 0 & \ldots & 0 \\ 1 & -2 & 1 & 0 & \ldots & 0 \\ 0 & 1 & -2 & 1 & \ldots & 0 \\ 0 & 0 & 0 & 0 & \ldots & \beta\end{bmatrix}\begin{bmatrix}v_0\\v_1\\ \vdots\\v_N\end{bmatrix},$$ -and I will call the differentiation matrix $M$ from now on. -Now you have to write this as a system of first-order ODEs, which is simple enough. Let $u_{i,1}=v_i$ and $u_{i,2}=v'_i$, then $u'_{i,1}=u_{i,2}$ and -$$ u'_{i,2}=\frac{c^2}{\Delta x^2}(M\vec u)_i-au_{i,2}-bu_{i,1}.$$ -Writing these as vector equations is neater, -$$\vec u'_1=\vec u_2$$ -and -$$ \vec u'_2=\frac{c^2}{\Delta x^2}M\vec u_1-au_2-bu_1.$$ -You can solve these using ode45 in Matlab, with initial conditions that you will have to work out. It's not trivial, but here is the code that solves $$\frac{\partial^2v}{\partial t^2}+a\frac{\partial v}{\partial t}+bv=c\frac{\partial^2v}{\partial x^2}. $$ -a=1;b=1;c=1; %// coefficients of PDE -dx=0.05; %// spatial grid spacing -x=(0:dx:1).'; %// spatial grid -N=length(x); -%// differentiation matrix, using centred differencing at interior points -M=diag(ones(1,N-1),-1)+diag(-2*ones(1,N))+diag(ones(1,N-1),1); -M(1,1:3)=[1 -2 1]; %// and forward/backward differencing at boundaries -M(end,(N-2):N)=[1 -2 1]; - -%// Our u vector is [u1;u2], so the derivative has to be [u2;M*u1-u2-u1] -%// plus correction for boundary conditions -dudt=@(t,u)[[-c*sin(c*t);u(N+2:end-1);-c*sin(1+c*t)]; - c^2*M*u(1:N)/(dx^2)-a*u(N+1:end)-b*u(1:N)]; - -options=odeset('MaxStep',dx); %// some kind of CFL condition, I didn't work - %// it out properly, but this works so it's OK! - -[t,u]=ode45(dudt,[0 15],[cos(x);-c*sin(x)],options); %// time-stepping using ode45 - %// the initial condition is [v;dv/dt] at t=0 - - -for i=1:length(t) %// To see the results - plot(x,u(i,1:N)) - axis([0 1 -1 1]) - drawnow - pause(0.01) -end - -If you set $a=b=0$ then you can compare the numerical solution to the exact solution $\cos(x+ct)$. -I hope that helps!<|endoftext|> -TITLE: For which monic irreducible $f(x)\in \mathbb Z[x]$ , is $f(x^2)$ also irreducible in $\mathbb Z[x]$? -QUESTION [7 upvotes]: Let $f(x) \in \mathbb Z[x]$ be an irreducible monic polynomial such that $|f(0)|$ is not a perfect square . Then is $f(x^2)$ also irreducible in $\mathbb Z[x]$ ? -( It is supposed to have an elementary solution , without using any field-extension etc. ) - -REPLY [2 votes]: Sophisticated answer: Let us suppose $f(x)\in \mathbb{Z}[x]$ is monic and irreducible of degree $n$ but $g(x)=f(x^2)$ is reducible. Let $K\supset\mathbb{Q}$ be a splitting field of $g$ and let $a_1,\dots,a_n\in K$ be the roots of $f$, so $\pm\sqrt{a_1}\dots,\pm\sqrt{a_n}$ are the roots of $g$. Since $f$ is irreducible, $G=Gal(K/\mathbb{Q})$ acts transitively on the set $\{a_1,\dots,a_n\}$. It follows that the $G$-orbit of any root of $g$ must contain at least one of the square roots of $a_i$ for each $i$. Since $g$ is reducible, $G$ does not act transitively on the roots of $g$, so there must be exactly two orbits of roots of $g$, with each one containing exactly one of the square roots of each $a_i$. This means that $g$ has two irreducible factors $h_1$ and $h_2$ of degree $n$, and the roots of $h_1$ are the negatives of the roots of $h_2$. Thus the constant term of $h_1$ is $(-1)^n$ times the constant term of $h_2$ (if we take $h_1$ and $h_2$ to be monic). 
We conclude that the constant term of $g$ is $(-1)^n$ times the square of the constant term of $h_1$. It follows that $|f(0)|$ is a square.
-(Strictly speaking, there is a special case when $a_i=0$ for some $i$, in which case $g$ has a double root and the argument above doesn't work as stated. But if that happens, then $|f(0)|=0$ is trivially a square. Also, the only $f$ for which this happens is $f(x)=x$.)
-
-Less sophisticated answer: Note that since $g(x)=g(-x)$, if $h(x)$ is an irreducible factor of $g(x)$, then so is $h(-x)$. So unless $h(x)=\pm h(-x)$ for some irreducible factor $h$ of $g$, the irreducible factors of $g$ come in pairs which have (up to sign) the same constant term, and so multiplying them all together we get that the constant term of $g$ is a square (up to sign).
-So we just have to rule out $h(x)=\pm h(-x)$ for an irreducible monic factor $h$ of $g$. If $h(x)=-h(-x)$, then every term of $h$ has odd degree, and so $h$ is divisible by $x$. Thus $g(x)$ is divisible by $x$, and in particular its constant term is $0$, which is a square. If $h(x)=h(-x)$, then every term of $h$ has even degree, and we can write $h(x)=h'(x^2)$ for some $h'$. Writing $g(x)=h(x)k(x)$, we see that $k(x)$ also satisfies $k(x)=k(-x)$ (since $g$ and $h$ do), so $k(x)=k'(x^2)$ for some $k'$. But then $f(x)=h'(x)k'(x)$ is a factorization of $f$. This means that actually $k'(x)=1$, so $h'=f$ and $h=g$, so in this case $g$ is irreducible.<|endoftext|>
-TITLE: Question regarding $f(n)=\cot^2\left(\frac\pi n\right)+\cot^2\left(\frac{2\pi}n\right)+\cdots+\cot^2\left(\frac{(n-1)\pi}n\right)$
-QUESTION [12 upvotes]: $$f(n)=\cot^2\left(\frac\pi n\right)+\cot^2\left(\frac{2\pi}n\right)+\cdots+\cot^2\left(\frac{(n-1)\pi}n\right)$$ then how to find the limit of $\dfrac{3f(n)}{(n+1)(n+2)}$ as $n\to\infty$?
-I don't know any series like that. A Riemann sum is not working. What should I do?
-
-REPLY [10 votes]: Recall the expansion: $$\begin{align}\tan nx &= \dfrac{\binom{n}{1}\tan x - \binom{n}{3}\tan^3 x + \cdots }{1-\binom{n}{2}\tan^2 x + \binom{n}{4}\tan^4 x + \cdots } \\&= \frac{\binom{n}{1}\cot^{n-1} x - \binom{n}{3}\cot^{n-3} x + \cdots }{\cot^{n} x - \binom{n}{2}\cot^{n-2} x + \binom{n}{4}\cot^{n-4} x + \cdots }\end{align}$$
-Now, $\displaystyle \left\{\frac{k\pi}{n}\right\}_{k=1}^{n-1}$ are the roots of the equation: $\displaystyle \tan nx = 0$
-Thus, $\displaystyle \binom{n}{1}\cot^{n-1} x - \binom{n}{3}\cot^{n-3} x + \cdots = 0$ whenever $x = \dfrac{k\pi}{n}$ for $1 \le k \le n-1$.
-Thus, using Vieta's formulas we might write:
-$$\begin{align*}\sum\limits_{k=1}^{n-1} \cot^2 \frac{k\pi}{n} &= \left(\sum\limits_{k=1}^{n-1} \cot \frac{k\pi}{n}\right)^2 - 2\sum\limits_{1 \le k_1 < k_2 \le n-1} \cot \frac{k_1\pi}{n}\cot \frac{k_2\pi}{n}\\&= 0 + 2\frac{\binom{n}{3}}{\binom{n}{1}}\\&= \frac{(n-1)(n-2)}{3}\end{align*}$$<|endoftext|>
-TITLE: Prob 8, Sec 7 in Munkres' TOPOLOGY 2nd ed: How do we show these sets have the same cardinality?
-QUESTION [8 upvotes]: Here's Prob. 8, Sec. 7 in Topology by James R. Munkres, 2nd edition:
-
-Let $X$ denote the two-element set $\{0,1\}$; let $\mathscr{B}$ be the set of countable subsets of $X^{\omega}$. Show that $X^{\omega}$ and $\mathscr{B}$ have the same cardinality.
-
-Here $X^{\omega}$ denotes the set of all infinite binary sequences (i.e., the set of all the functions each with domain the set $\mathbb{N}$ of natural numbers and range a (non-empty) subset of $\{0,1\}$).
-
-My effort:
-Using the Schroeder-Bernstein theorem, our aim is to show the existence of injective maps $f \colon X^{\omega} \to \mathscr{B}$ and $g \colon \mathscr{B} \to X^{\omega}$.
-We can define $f$ as follows:
-$$f(s) \colon= \{s\} \ \mbox{ for all } \ s \in X^{\omega}.$$
-How do we define our desired map $g$?
-
-REPLY [5 votes]: You can't quite define an injection from $\scr B$ into $X^\omega$. The reason is that it is consistent with the failure of the axiom of choice that there is no such injection.
-This means that you have to, at some point, resort to using the axiom of choice. Luckily, we can find a surjection from $X^\omega$ onto $\scr B$, by noting that $(X^\omega)^\omega$ and $X^\omega$ have the same cardinality. Therefore we can think of every element of $X^\omega$ as encoding a sequence of elements of $X^\omega$.
-Can you see how that helps you in obtaining the surjection?<|endoftext|>
-TITLE: How can I prove that the hawaiian earring has no universal cover?
-QUESTION [9 upvotes]: I know that the Hawaiian earring is not semi-locally simply connected, so the existence is not guaranteed. Also, the point at which it must fail is the origin, where it isn't even locally simply connected.
-My first thought was to use that the universal cover must be simply connected and locally path connected (since the Hawaiian earring is locally path connected), but this does not imply that the universal cover is locally simply connected, which was my intention, so I don't think this is a good approach.
-Does anyone know how I could solve this?
-
-REPLY [8 votes]: If $X$ is the Hawaiian earring, assume a universal cover $\widetilde{X}$ exists. Let $p$ be the "interesting point" of $X$ (namely, the origin), and let $q$ be a lift of $p$ in $\widetilde{X}$. If $U$ is an evenly covered neighborhood of $p$, then it lifts to a neighborhood $V$ of $q$ such that the projection $V \to U$ obtained by restricting the covering map is a homeomorphism.
-Let $i : U \hookrightarrow X$ be the inclusion map. We can look at the composition $\pi_1(V) \stackrel{\cong}{\to} \pi_1(U) \stackrel{\pi_1(i)}{\to} \pi_1(X)$ where the first map is induced by the projection. This map is the same as $\pi_1(V) \stackrel{\pi_1(j)}{\to} \pi_1(\widetilde{X}) \to \pi_1(X)$ where $j$ is the inclusion $V \hookrightarrow \widetilde{X}$.
-But as $\widetilde{X}$ is simply connected, the composition is the zero map, hence so is $\pi_1(i)$. Thus, $U$ is a neighborhood of $p$ which makes $X$ semilocally simply connected at $p$. Contradiction, as every neighborhood of $p$ contains a small circle, and a loop going around that small circle can never be null-homotoped in $X$.<|endoftext|>
-TITLE: Closed form for Euler sum with $H_{2n}$?
-QUESTION [11 upvotes]: I ran across this Euler sum while trying to evaluate an integral. I mentioned it in another thread, but thought perhaps asking about it separately may be a good idea.
-
-Is there a closed form for this Euler sum?
- $$\sum_{n=1}^{\infty}\frac{H_{2n}}{n(6n+1)}.$$
-
-Numerically, it converges to around $0.502788$.
-I found that sum while trying to evaluate
-$$\int_{0}^{1}\log(1+x^{3})\log(1-x^{3})dx=\int_{0}^{1}\sum_{n=1}^{\infty}\frac{1}{n}\left(H_{n}-H_{2n}-\frac{1}{2n}\right)x^{6n}\,dx$$
-I managed to obtain two of the sums. This is the only one giving me trouble.
-It comes from the identity
-$$\log(1+x)\log(1-x)=\sum_{n=1}^{\infty}\frac{1}{n}\left(H_{n}-H_{2n}-\frac{1}{2n}\right)x^{2n}$$
-Note that:
-$$\sum_{n=1}^{\infty}\frac{H_{n}}{n(6n+1)}=1/4\left(-72+2\gamma^{2}+\pi^{2}+4\gamma\psi(1/6)+2\psi(1/6)-2\psi_{1}(7/6)\right)$$
-and
-$$\sum_{n=1}^{\infty}\frac{1}{n^{2}(6n+1)}=3\sqrt{3}\pi+\zeta(2)+9\log(3)+12\log(2)-36.$$
-I thought maybe the residue method would work, but I am not so sure.
-By considering $f(z)=\frac{\pi \cot(\pi z)(\gamma+\psi(-2z))}{z(6z+1)}$:
-The residue at the origin is $18-\frac{\pi^{2}}{2}$.
-The residue at $-1/6$ is $3\pi (\gamma+\psi(1/3))$.
-But, there does not appear to be a residue for the positive half integers.
-The series for the positive integers is:
-$$\frac{1}{2}\cdot \frac{1}{(z-n)^{2}}+\frac{H_{2n}}{z-n}+...$$
-which gives a residue of:
-$$\frac{-1}{2}\sum_{n=1}^{\infty}\frac{12n+1}{n^{2}(6n+1)^{2}}+\sum_{n=1}^{\infty}\frac{H_{2n}}{n(6n+1)}+....=\frac{1}{2}\psi_{1}(7/6)-\frac{\pi^{2}}{12}+H$$
-At the negative integers:
-$$\frac{H_{2n}}{z+n}+....$$
-giving a residue of:
-$$\sum_{n=1}^{\infty}\frac{H_{2n-1}}{n(6n+1)}$$
-I do not think it quite adds up, though. I could easily have gone astray somewhere in all of that.
-
-REPLY [8 votes]: This is a partial answer but I believe I am making progress. Let $S$ denote the sum. Then $$
-\begin{align*}
-S=\sum_{n=1}^\infty \frac{H_{2n}}{n(6n+1)} &= \sum_{n=1}^\infty\frac{H_{2n}}{n}\int_0^1 x^{6n}\; dx \\
-&= \int_0^1\left( \sum_{n=1}^\infty\frac{H_{2n}}{n}x^{6n}\right)dx \tag{1}\end{align*}
-$$
-Let $f(x)=\sum_{n=1}^\infty \frac{H_n}{n}x^n$ where $|x|<1$. It can be shown that
-$$f(x)=\text{Li}_2(x)+\frac{1}{2}\log^2(1-x) \tag{2}$$
-Then, we can write
-$$\begin{align*}
-\sum_{n=1}^\infty\frac{H_{2n}}{n}x^{6n} &= f(x^3)+f(-x^3) \\
-&= \text{Li}_2(x^3)+\text{Li}_2(-x^3)+\frac{\log^2(1-x^3)+\log^2(1+x^3)}{2}\tag{3}
-\end{align*}
-$$
-Substitute (3) into (1) to get
-$$
-\begin{align*}
-S=\int_0^1 \left(\text{Li}_2(x^3)+\text{Li}_2(-x^3)+\frac{\log^2(1-x^3)+\log^2(1+x^3)}{2} \right) dx \tag{4}
-\end{align*}
-$$
-Note that
-$$\begin{align*}
-\int_0^1\left( \text{Li}_2(x^3)+\text{Li}_2(-x^3)\right)dx &= \frac{1}{2}\int_0^1\sum_{n=1}^\infty\frac{x^{6n}}{n^2} \; dx \\
-&=\frac{1}{2}\sum_{n=1}^\infty\frac{1}{n^2(6n+1)} \\
-&= \frac{1}{2}\sum_{n=1}^\infty \left(\frac{1}{n^2}-\frac{6}{n}+\frac{36}{1+6n} \right) \\
-&= \frac{1}{2}\left(\frac{\pi^2}{6} -6\psi_0\left(\frac{1}{6} \right)-6\gamma_0-36\right) \\
-&= \frac{\pi^2}{12}+\frac{3\pi\sqrt{3}}{2}+6\log 2+\frac{9}{2}\log 3-18 \tag{5}
-\end{align*}
-$$
-$$
-\begin{align*}
-\frac{1}{2}\int_0^1\log^2(1-x^3)dx &= \frac{1}{6}\int_0^1t^{-2/3}\log^2(1-t)dt \quad (t=x^3)\\
-&= \frac{1}{6}\left[\frac{\partial^2}{\partial y^2} B(x,y)\right]_{x=1/3,y=1} \\
-&= \frac{\pi ^2}{8}-\frac{\sqrt{3} \pi }{2}+\frac{9}{2}+\frac{9}{8} \log^2 3-\frac{9 \log 3}{2}+\frac{1}{4} \sqrt{3} \pi \log 3-\frac{\psi_1\left(\frac{4}{3}\right)}{2}\tag{6}
-\end{align*}
-$$
-Substitute (5) and (6) into equation (4) to get
-$$S=-\frac{27}{2}+\frac{5\pi^2}{24}+\frac{9}{8}\log^2 3+\frac{\pi\sqrt{3}}{4}(4+\log 3)+6\log 2-\frac{1}{2}\psi_1\left(\frac{4}{3} \right)+\frac{1}{2}\int_0^1 \log^2(1+x^3)dx \tag{7}$$
-Now, it remains to calculate $\int_0^1\log^2(1+x^3)dx$.
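-(Editorial aside, not part of the original answer: before pressing on, one can sanity-check (7) numerically. A minimal Python sketch using mpmath — the variable names are just illustrative — sums the series directly and evaluates the right-hand side of (7), integral included; the two printed values should agree, and the question quotes the value as roughly 0.502788:)
-
-import mpmath as mp
-mp.mp.dps = 25
-
-# direct numerical value of the sum
-S = mp.nsum(lambda n: mp.harmonic(2*n) / (n*(6*n + 1)), [1, mp.inf])
-
-# right-hand side of (7): closed-form part plus the remaining integral
-rhs = (-mp.mpf(27)/2 + 5*mp.pi**2/24 + mp.mpf(9)/8*mp.log(3)**2
-       + mp.sqrt(3)*mp.pi/4 * (4 + mp.log(3)) + 6*mp.log(2)
-       - mp.psi(1, mp.mpf(4)/3)/2
-       + mp.quad(lambda x: mp.log(1 + x**3)**2, [0, 1])/2)
-
-print(S, rhs)   # both should print ~0.502788 if (7) is correct
-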
-ADDED
-Use integration by parts,
-$$
-\begin{align*}
-\int_0^1\log^2(1+x^3) dx &= \left[x\log^2(1+x^3) \right]_0^1-6\int_0^1 \frac{x^3\log(1+x^3)}{1+x^3}dx \\
-&= \log^2 2-6 \int_0^1 \log(1+x^3)dx +6\int_0^1\frac{\log(1+x^3)}{1+x^3}dx\tag{8}
-\end{align*}
-$$
-The first integral is "elementary":
-$$\int_0^1\log(1+x^3)dx = -3+\frac{\pi}{\sqrt{3}}+2\log 2 \tag{9}$$
-Evaluating $\int_0^1\frac{\log(1+x^3)}{1+x^3}$ is more tedious. Let $\omega=e^{i\pi /3}$. Then note that $1+x^3=(x+1)(x-\omega)(x-\bar{\omega})$ and
-$$\frac{1}{1+x^3}=\frac{1}{3}\frac{1}{1+x}-\frac{i(1+\bar{\omega})}{3\sqrt{3}}\frac{1}{x-\omega}+\frac{i(1+\omega)}{3\sqrt{3}}\frac{1}{x-\bar{\omega}}$$
-Expanding the logarithms and using the above partial fraction decomposition in $\int_0^1\frac{\log(1+x^3)}{1+x^3}$ will result in 9 pieces which can be evaluated in terms of dilogarithms. Ultimately, all this will give a closed form for $\int_0^1 \log^2(1+x^3)dx$ and $\sum_{n=1}^\infty\frac{H_{2n}}{n(6n+1)}$.<|endoftext|>
-TITLE: If $A$ and $B$ are normal such that $AB=0$, does it follow that $BA=0$?
-QUESTION [7 upvotes]: If $A$ and $B$ are normal linear transformations on the finite-dimensional complex inner product space $X$ such that $AB=0$, does it follow that $BA=0$?
-
-REPLY [3 votes]: An alternate solution, described in a pedagogical way, is as follows.
-(This proof avoids the image and the kernel/null-space argument used in the previous answer, and works in both finite and infinite dimensions, and for both real and complex spaces like the previous answer.)
-The answer to the question is "yes, $BA = 0$".
-Proof: We are given that $AB = 0$, and we see that $AA^* = A^*A$ (since $A$ is given as normal) and $BB^* = B^*B$ (since $B$ is given as normal).
-For an arbitrary vector $x$, we find that $\Vert BAx \Vert^2$ $= (BAx, BAx)$ $= (A^*B^*BAx, x)$. We set out to inspect how $A^*B^*BA$ acts on an arbitrary vector, and discover that $A^*B^*BA$ is in fact $0$. How? If $y$ is an arbitrary vector, we observe $\Vert A^*B^*BA y \Vert^2$ $= (A^*B^*BA y, A^*B^*BA y)$ $= (AA^*B^*BA y, B^*BA y)$ $= \big(A^*(AB)B^*A y, B^*BA y\big)$ $= (0y, B^*BA y) = 0$.
-It follows that $\Vert BAx \Vert^2 = (A^*B^*BAx, x) = (0x, x) = 0 \implies BA = 0$.<|endoftext|>
-TITLE: How can I prove that $2^{\sqrt 7}$ is bigger than $5$?
-QUESTION [12 upvotes]: This is one of my tries: $5=2^{\log_2 5}$. Then I should prove that ${\sqrt 7} > {\log_2 5}$.
-So, can you help me end this proof or suggest another?
-
-REPLY [15 votes]: Note that
-$$5\lt2^\sqrt7\iff5^\sqrt7\lt(2^\sqrt7)^\sqrt7=2^7=128$$
-But clearly $\sqrt7\lt3$, so $5^\sqrt7\lt5^3=125$.
-Added later: Going beyond what the OP requests, here's a proof that $6\lt2^\sqrt7$, in enough detail that everything can be checked, I think, without resorting to a calculator.
-Note first that -$$2\lt1+{7\over6}\lt\left(1+{1\over6}\right)^7=\left(7\over6\right)^7\implies 2\cdot6^7\lt7^7$$ -and -$$3^5=243\lt256=2^8\implies3^5\cdot3^7\lt2^8\cdot3^7\implies3^{12}\lt2\cdot6^7$$ -Putting these together, we have -$$3^{12}\lt7^7$$ -From here, using the easy inequality $5/2\lt\sqrt7$ (which can be seen from $5^2=25\lt28=4\cdot7$) and the equality $(\sqrt7+1)(\sqrt7-1)=6$, we have -$$\begin{align} -3^{12}\lt7^7 -&\implies3^6\lt7^{7/2}\\ -&\implies3^6\lt7^{\sqrt7+1}\\ -&\implies3^{(\sqrt7-1)(\sqrt7+1)}\lt7^{\sqrt7+1}\\ -&\implies3^{\sqrt7-1}\lt7\\ -&\implies3^{\sqrt7+1}\lt9\cdot7=63\lt64=2^6=2^{(\sqrt7-1)(\sqrt7+1)}\\ -&\implies3\lt2^{\sqrt7-1}\\ -&\implies6\lt2^\sqrt7 -\end{align}$$ -If there's an appreciably easier proof, I'd be interested to see one; I was a little surprised this one took as many steps as it did. -Added yet later: Here's an appreciably easier proof, using the fact that $7\lt64/9$ implies $\sqrt7\lt8/3$ and the inequality $3^8\lt2^{13}$, which can be verified by noting -$$3^8=81^2\lt90^2=8100\lt8160=8\cdot1020\lt8\cdot1024=2^{13}$$ -Putting these together, we have -$$\begin{align} -3^8\lt2^{13} -&\implies2^8\cdot3^8\lt2^8\cdot2^{13}\\ -&\implies6^8\lt2^{21}\\ -&\implies6^{8/3}\lt2^7\\ -&\implies6^\sqrt7\lt2^7\\ -&\implies6\lt2^\sqrt7 -\end{align}$$<|endoftext|> -TITLE: Integral of $ x \ln( \sin (x))$ from 0 to $ \pi $ -QUESTION [6 upvotes]: $$\int_0^\pi x \ln(\sin (x))dx $$ - -I tried integrating this by parts but I end up getting integral that doesn't converge, which is this $$ \int_0^\pi \dfrac{x^2\cos (x)}{\sin(x)} \ dx$$ -So can anyone help me on this one? - -REPLY [12 votes]: By making the change of variable -$$ -u=\pi -x -$$ you get that -$$ -I=\int_0^\pi x \ln(\sin x)\:dx=\int_0^\pi (\pi-u) \ln(\sin u)\:du=\pi\int_0^\pi \ln(\sin u)\:du-I -$$ giving -$$ -I=\frac{\pi}2\int_0^\pi \ln(\sin u)\:du=\pi\int_0^{\pi/2} \ln(\sin u)\:du. -$$ Then conclude with the classic evaluation of the latter integral: see many answers here.<|endoftext|> -TITLE: Manifolds as Homogeneous Spaces -QUESTION [6 upvotes]: With very little effort one can, for example, show that $S^n$ can be written as a homogeneous space as $S^n\cong G/H$, where $G$ is the group of all rotations in $\mathbb{R}^{n+1}$ about the origin and $H=G_p$ is the stabilizer subgroup consisting of rotations fixing a certain point $p$ in $S^n$ (say, the north pole). -From this example, we can see that, given a certain group acting transitively on for the space $S^n$, we are able to actually construct the space in question as a homogeneous space of said group. This leads me to the following basic questions: - -Can every manifold be constructed similarly, i.e. admit a transitive group action? On a related note, if I handed you a Lie group $G$ and said that it acts transitively on some space $M$ (without telling you the space itself), could you tell me what the space itself is? - -REPLY [9 votes]: You actually asked two different questions. The first one is - -Can every manifold be constructed similarly? - -You're asking if every manifold can be realized as a quotient of a Lie group by a closed subgroup. This is equivalent to asking whether every manifold admits a transitive action by a Lie group (because in that case the manifold is diffeomorphic to the group modulo the isotropy group of a point). -The answer is no -- there are strong topological restrictions. One simple necessary condition is that all connected components of the manifold must be diffeomorphic to each other. 
But for compact manifolds, there's a much more restrictive condition: This 2005 article by G. D. Mostow shows that a necessary condition for a compact smooth manifold to admit a transitive Lie group action is that it have nonnegative Euler characteristic. -Your second question is: - -Or, said differently, if I handed you the symmetry (Lie) group $G$ of some space $M$ (without telling you the space), could you tell me what the space itself is? - -This is not equivalent to the first question. Even if you knew that your manifold admitted a transitive Lie group action, just knowing the group is not enough to recover the manifold. You also have to know the isotropy group of a point. For some very simple examples, just note that the additive Lie group $\mathbb R^2$ acts transitively on itself, on the cylinder $\mathbb R\times \mathbb S^1$, and on the torus $\mathbb S^1\times \mathbb S^1$.<|endoftext|> -TITLE: Normal bundle to complete intersection in $\mathbb{P}^n$ -QUESTION [10 upvotes]: Let $X\subset\mathbb{P}^n$ be a complete intersection defined by irreducible polynomials $f_1,...,f_k$ of degrees $d_1,...,d_k$. How to show that the normal bundle of $X$ is isomorphic to $\bigoplus\limits_{i=1}^k\mathcal{O}_X(d_i)$? - -REPLY [2 votes]: Since every $f_i$ is a section in $H^{0}(\mathbb{P}^n,\mathcal{O}(d_i))$, denote $F=\oplus{f_i}$ the section in $\oplus\mathcal{O}(d_i)$, obviously $Z(F)=X$. Now restric $dF$ to $\mathcal{T}_{\mathbb{P}^n}|_X$, we get an exact sequence: -$$ -0\to\mathcal{T}_X\to\mathcal{T}_{\mathbb{P}^n}|_X\to\oplus\mathcal{O}_X(d_i)\to0 -$$ -Compare with -$$ -0\to\mathcal{T}_X\to\mathcal{T}_{\mathbb{P}^n}|_X\to N\to0 -$$<|endoftext|> -TITLE: Is there a name for this type of polygon? -QUESTION [54 upvotes]: Is there a name for a polygon in which you could place a light bulb that would light up all of its area? (for which there exists a point so that for all points inside it the line connecting those two points does not cross one of its edges) -Examples of "lightable" polygons: - -Examples of "unlightable" polygons: - -REPLY [73 votes]: Yes, those are called star-shaped polygons. They have numerous applications in mathematics, for example in complex analysis. - -REPLY [27 votes]: More generally, such a set is a star domain, and is a trivial example of contractible space. -You may see this as a generalization of a convex set: indeed, - -$C\neq\emptyset$ is a convex domain if for every $x,y\in C$ you have that the line segment $\overline{xy}\subseteq C$ is contained in $C$; while -$S\neq\emptyset$ is a star domain if there exists a point $y$ such that for every $x\in S$ it holds that $\overline{xy}\subseteq S$. - -That is, in a star domain the point $y$ (there might be many such) is fixed. You can easily prove that a set $E\neq\emptyset$ is convex (actually simply connected) if and only if it is a star domain with respect to each center $y\in E$. - -REPLY [14 votes]: I think you are looking for star domains. See also this related question on Mathoverflow.<|endoftext|> -TITLE: If the limit of a derivative is zero as $x \to \infty$, what can we say about $f(x+1)-f(x)$? -QUESTION [8 upvotes]: Given a differentiable function $f$ such that -$$ -\lim_{x \to +\infty}f'(x) = 0 -$$ -what can we say about -$$ -\lim_{x\to\infty}(f(x+1)-f(x)) \text{ ?} -$$ -My first thought was to use mean value theorem on $[x,x+1]$, and I will get that the limit is $0$, is this true? -Thank you for your help. 
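-(An added numeric illustration, not part of the original post: taking the hypothetical example $f(x)=\sqrt{x}$, both $f'(x)=\frac{1}{2\sqrt{x}}$ and $f(x+1)-f(x)=\frac{1}{\sqrt{x+1}+\sqrt{x}}$ tend to $0$; a minimal Python sketch:)
-
-import math
-
-# f(x) = sqrt(x): the derivative and the unit increment both shrink to 0
-for x in (1e1, 1e3, 1e6):
-    derivative = 1 / (2 * math.sqrt(x))
-    difference = math.sqrt(x + 1) - math.sqrt(x)
-    print(x, derivative, difference)
-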
-
-REPLY [15 votes]: Given that $f$ is differentiable, consider the interval $[x,x+1]$ and apply the mean value theorem:
-we get
-$f(x+1)-f(x) = f'(c_x)$ where $c_x\in (x,x+1)$.
-Now taking limits on both sides,
-$$\lim_{x\to\infty}(f(x+1)-f(x)) = \lim_{x\to\infty}(f'(c_x))$$ where $c_x\in (x,x+1)$; so as $x$ tends to infinity, $f'(c_x)$ tends to zero, as given in the hypothesis.
-Therefore $$\lim_{x\to\infty}(f(x+1)-f(x)) =0.$$<|endoftext|>
-TITLE: Gaussian prime factorization.
-QUESTION [8 upvotes]: I have a hard time factoring elements of $\mathbb{Z}[i]$, especially $-19+43i$.
-I know that the primes in $\mathbb{Z}[i]$ are:
-
-$1+i$.
-$p$ from $\mathbb{N}$, $p=4k+3$, $k$ integer ($p\equiv 3\pmod{4}$).
-$a+bi$ from $\mathbb{Z}[i]$, $p=N(a+bi)=a^2 + b^2$ and $p=4k+1$, $k$ integer ($p\equiv1\pmod{4}$).
-
-I wonder if there is an algorithm that tells you how to factorize, or something. I would like to see this working on $-19+43i$.
-
-REPLY [8 votes]: Here is one algorithm to factor $a + bi$ when $a \neq 0$, $b \neq 0$:
-
-Compute the norm of $a + bi$, which is a purely real integer $n$.
-Factor $n$ into $\mathbb{Z}$ primes and if possible further into Gaussian primes.
-From the Gaussian prime factorization of $n$, identify the conjugate pairs. In each conjugate pair, there is one number that belongs in the factorization of $a + bi$ and one that does not. At this point I wish I had something more clever than telling you to try each combination, by dividing $a + bi$ by each number of the conjugate pair and discarding those numbers which don't give Gaussian integers; if both divisions give Gaussian integers, discard one number of the pair arbitrarily.
-If necessary, prefix the factorization with a Gaussian unit (other than 1) to get the signs right.
-
-The most important thing to remember is that the norm function is multiplicative: $N(pq) = N(p) N(q)$. This is something that you can carry over to many real and imaginary rings. Also, if the norm is a number that is prime in $\mathbb{Z}$, that means the corresponding number is prime (or at least irreducible) in the particular domain at hand.
-In $\mathbb{Z}[i]$ we have the additional wrinkle that since $d = -1$ the norm function works out to $a^2 + b^2$, which can be occasionally confusing, compared to something more helpful like $a^2 + 2b^2$ in the case of $\mathbb{Z}[\sqrt{-2}]$ or $a^2 - 3b^2$ in the case of $\mathbb{Z}[\sqrt{3}]$.
-Review the norm function of $\mathbb{Z}[i]$: $$N(a + bi) = (a - bi)(a + bi) = a^2 + b^2.$$ Thus $$N(-19 + 43i) = (-19 - 43i)(-19 + 43i) = 2210$$ and $2210 = 2 \times 5 \times 13 \times 17$.
-Fermat stated and Euler proved that if positive $p = 2$ or $p \equiv 1 \pmod 4$, then $p = a^2 + b^2$. This means that such primes in $\mathbb{Z}$ are composite in $\mathbb{Z}[i]$. The example of $-19 + 43i$ seems to have been contrived specifically to use the first four primes of $\mathbb{Z}^+$ that are composite in $\mathbb{Z}[i]$. We have $$2210 = (1 - i)(1 + i)(1 - 2i)(1 + 2i)(2 - 3i)(2 + 3i)(1 - 4i)(1 + 4i).$$ Since $(-19 - 43i)(-19 + 43i) = 2210$, the factorization of $2210$ overshoots the factorization of $-19 + 43i$ by $-19 - 43i$.
-So we try $$\frac{-19 + 43i}{1 - i} = -31 + 12i$$ and $$\frac{-19 + 43i}{1 + i} = 12 + 31i;$$ since both of these give Gaussian integers, I arbitrarily choose to discard $1 + i$. Things are more clear-cut with the conjugate pair for 5: $$\frac{-19 + 43i}{1 + 2i} = \frac{67 + 81i}{5}.$$
-This leads to $$(1 - i)(1 - 2i)(2 + 3i)(1 + 4i) = 43 + 19i,$$ which is almost correct.
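-(Editorial aside — before finishing the unit adjustment below, note that the trial-division search described in Steps 1-3 can be mechanized. Here is a rough Python sketch, an illustration rather than the answer's own method: it represents $a+bi$ as the pair (a, b) and repeatedly peels off a divisor of smallest norm, which is automatically irreducible. Function names are hypothetical.)
-
-from math import isqrt
-
-def norm(z):                      # z = (a, b) stands for a + bi
-    return z[0]*z[0] + z[1]*z[1]
-
-def quotient(z, d):               # exact z/d in Z[i], or None if d does not divide z
-    a = z[0]*d[0] + z[1]*d[1]     # real part of z * conj(d)
-    b = z[1]*d[0] - z[0]*d[1]     # imaginary part of z * conj(d)
-    n = norm(d)
-    if a % n == 0 and b % n == 0:
-        return (a // n, b // n)
-    return None
-
-def factor(z):                    # naive trial division by increasing norm
-    factors = []
-    while norm(z) > 1:
-        found = None
-        for nn in range(2, norm(z) + 1):
-            for a in range(-isqrt(nn), isqrt(nn) + 1):
-                b = isqrt(nn - a*a)
-                if b*b != nn - a*a:
-                    continue
-                for d in {(a, b), (a, -b)}:
-                    q = quotient(z, d)
-                    if q is not None:
-                        found, z = d, q
-                        break
-                if found: break
-            if found: break
-        factors.append(found)     # a smallest-norm proper divisor is irreducible
-    return factors, z             # the final z is a unit
-
-print(factor((-19, 43)))          # one factorization of -19 + 43i, up to units
-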
We need to swap the real and imaginary parts and change the sign of the new real part. Clearly $-1$ won't do (that would give $-43 - 19i$), and less obviously $-i$ doesn't work either. Then $$i(1 - i)(1 - 2i)(2 + 3i)(1 + 4i) = -19 + 43i.$$
-This has been overly laborious and I'm sure someone will come along with a much cleverer way. But even with the algorithm I have presented here I could have skipped Step 4 if I had made a different choice with the conjugate pair for 2, since $$(1 + i)(1 - 2i)(2 + 3i)(1 + 4i) = -19 + 43i.$$<|endoftext|>
-TITLE: What exactly are the elements of a local homology group?
-QUESTION [8 upvotes]: A local homology group of some space $X$ at $x \in X$ is defined by the relative homology group $H_n(X, X - x)$. So by definition, it contains only cycles that are not entirely contained in $X - x$. So if we consider $X$ as some $2$-dimensional surface, would the local homology group at $x$ contain homology classes of loops which pass through $x$? Is this the right way to think about local homology groups?
-
-REPLY [6 votes]: Suppose $X$ is an $n$-manifold. Then the excision axiom tells you that $$H_k(X,X\setminus\{x\})\cong H_k(D^n,D^n\setminus\{0\}).$$ You can access these latter homology groups using the long exact sequence of a pair. In particular, since $D^n\setminus\{0\}\simeq S^{n-1}$, its reduced homology is always trivial, except in dimension $n-1$. Now the long exact sequence of the pair contains
-$$H_k(D^n)\to H_k(D^n,D^n\setminus\{0\})\to H_{k-1}(D^n\setminus\{0\})\to H_{k-1}(D^n).$$
-Assuming we are working with reduced homology, $H_k(D^n)=0$, so there is an isomorphism $H_k(D^n,D^n\setminus\{0\})\cong H_{k-1}(D^n\setminus\{0\}).$
-So the local homology groups for a manifold are only nonzero in the dimension of the manifold itself. So for your example of a surface, $H_1(X,X\setminus\{x\})=0$. In general, $H_n(X,X\setminus\{x\})\cong \mathbb Z$. In the case of a surface, you can think of a generator as a map of a $2$-simplex (triangle) onto some neighborhood of $x$, where $x$ is in the interior. Indeed, a choice of generator for each local homology group is the same as picking an orientation at each point of the surface. (You get the opposite orientation by a reflection of your original map.)<|endoftext|>
-TITLE: Limit of a sequence with some property
-QUESTION [6 upvotes]: Given
-$$ a_n >0 \text{ and } \lim_{n \to + \infty} a_n \left( \sum_{k=1}^n a_k \right) =2$$
-I need to show that
-$$\lim_{n \to + \infty} \sqrt{n}\ a_n=1$$
-I tried first to compute $$\lim_{n \to + \infty} a_n,$$
-but I don't know how, or how to handle these kinds of questions, so I appreciate any help.
-
-REPLY [3 votes]: First, we do have $a_n\xrightarrow[n\to\infty]{} 0$: indeed, by contradiction, suppose not. Then the series $\sum_n a_n$ is not convergent (why?). But as this is a series of non-negative terms, this means $A_N=\sum_{n=1}^N a_n \xrightarrow[N\to\infty]{} \infty$. But since $a_n A_n \xrightarrow[n\to\infty]{} 2$, we then get that $a_n\sim_{n\to\infty} \frac{2}{A_n} \xrightarrow[n\to\infty]{} 0$, contradicting the assumption.
-Now, write $A_N = \sum_{n=1}^N a_n$ with the convention $A_0=0$. By summing the above, the fact that $a_n A_n \xrightarrow[n\to\infty]{} 2$, and properties of divergent series, we have that
-$$
-\sum_{n=1}^N a_n A_n \operatorname*{\sim}_{N\to\infty} 2N.
-$$
-and similarly
-$$
-\sum_{n=1}^N a_{n+1} A_n \operatorname*{\sim}_{N\to\infty} 2N.
-$$
-(the latter as $a_{n+1} A_n = a_{n+1} A_{n+1} - a^2_{n+1} \xrightarrow[n\to\infty]{} 2$ as well.)
We can now rewrite this as
-$$\begin{align*}
-\sum_{n=1}^N a_n A_n &= \sum_{n=1}^N (A_n-A_{n-1}) A_n
-= \sum_{n=1}^N A_n A_n - \sum_{n=1}^N A_{n-1} A_n
-= \sum_{n=1}^N A_n A_n - \sum_{n=0}^{N-1} A_{n} A_{n+1} \\
-&= A_N^2 + \sum_{n=1}^{N-1} A_n A_n - \sum_{n=1}^{N-1} A_{n} A_{n+1}
-= A_N^2 + \sum_{n=1}^{N-1} A_n(A_n-A_{n+1})
-= A_N^2 - \sum_{n=1}^{N-1} A_n a_{n+1}
-\end{align*}$$
-and rearranging,
-$$
-A_N^2 = \sum_{n=1}^{N-1} A_n a_{n+1} + \sum_{n=1}^N a_n A_n \operatorname*{\sim}_{N\to\infty} 4N.
-$$
-This implies $A_N\operatorname*{\sim}_{N\to\infty} 2\sqrt{N}$, and (finally!)
-$$
-a_n\operatorname*{\sim}_{n\to\infty} \frac{2}{2\sqrt{n}} = \frac{1}{\sqrt{n}}$$
-i.e.
-$$
-\sqrt{n} a_n \xrightarrow[n\to\infty]{}1.
-$$<|endoftext|>
-TITLE: Monoids where $\operatorname{Hom}(M,M) \cong M$
-QUESTION [5 upvotes]: What are some examples of monoids where $\operatorname{End}(M) \cong M$? Is there a nice characterization of such monoids? E.g., they will necessarily have a zero element, since the constant map $x \mapsto 1_M$ is a zero of $\operatorname{End}(M)$ (so no nontrivial group satisfies $\operatorname{End}(G) \cong G$).
-Examples:
-
-$\{1\}$
-$(\mathbb{F}_2, \cdot)$
-
-Non-examples:
-
-$(\mathbb N, +) \mapsto (\mathbb N, \cdot)$
-$(\mathbb F_2, +_2) \mapsto (\mathbb F_2, \cdot)$
-
-REPLY [7 votes]: Partial answer. If $M$ is a finite monoid such that $\operatorname{End}(M) \cong M$, then either $M = \{1\}$ or $M = \{0, 1\}$ with the usual multiplication of integers.
-Proof. Let $G$ be the group of invertible elements of $M$. Since $M$ is finite, $M - G$ is an ideal of $M$. Let $E(M)$ be the set of idempotents of $M$. For each $e \in E(M)$, let $\bar e$ be the endomorphism of $M$ defined by
-$$
-\bar e(x) = \begin{cases}
-1 & \text{if $x \in G$}\\
-e & \text{otherwise}
-\end{cases}
-$$
-Then $\bar e \in E(\operatorname{End}(M))$ and moreover $\bar e \bar f = \bar e$ for all $e, f \in E(M)$. Thus the map $e \mapsto \bar e$ is an injection from $E(M)$ to $E(\operatorname{End}(M))$. Since $M$ and $\operatorname{End}(M)$ are isomorphic, this injection is a bijection. In particular, there exists an idempotent $e$ in $M$ such that $\bar e$ is the identity map on $M$. Since the range of $\bar e$ is $\{1, e\}$, this means that $M = \{1, e\}$, which gives the two possible solutions.<|endoftext|>
-TITLE: For which matrices $A \in \mathscr{M}_n(\mathbb C)$ is the similarity class of $A$ closed?
-QUESTION [7 upvotes]: What are the matrices $A \in \mathscr{M}_n(\mathbb C)$ for which the similarity class is closed?
-What about the same question if we replace $\mathbb C$ by $\mathbb R$?
-
-REPLY [3 votes]: $\newcommand{\cj}{\operatorname{cj}} \newcommand{\diag}{\operatorname{diag}}\newcommand{\nil}{\operatorname{nil}}\newcommand{\Spec}[1]{\operatorname{Spec}\left({#1}\right)}$
-Complex case:
-Let us prove/recall a lemma:
-
-Lemma 1: The function $\dim\ker:\mathscr M_n(\mathbb C)\to \mathbb N$ is upper-semicontinuous, i.e.
- $$\lim_{s\to\infty}M_s= M\implies\dim\ker(M)\ge \limsup_{s\to\infty}\,\dim\ker M_s$$
-
-Indeed, let $q=\limsup_{s\to\infty}\dim\ker M_s$. This means that there exists a subsequence $M_{s_h}=M'_h$ such that $\dim\ker M'_h=q$.
-Let us consider the standard hermitian product on $\mathbb C^n$ and let us pick $(v_1^h,\cdots,v_q^h)$ an orthonormal basis of $\ker M'_h$.
-Recall that $S^{2n-1}=\{v\in \mathbb C^n\, :\, \Vert v\Vert =1\}$.
The sequence $$\{(v_1^h,\cdots,v_q^h)\}_{h\in\mathbb N}\subseteq \underbrace{S^{2n-1}\times\cdots\times S^{2n-1}}_{q\text{ times}}$$ -has values in a compact subspace of $\mathbb C^{qn}$, therefore it admits a converging subsequence. -So, we have a subsequence $(v^{h_r}_1,\cdots,v^{h_r}_q)\to_r (w_1,\cdots, w_q)$. By continuity of the hermitian product, the latter is a family of orthonormal vectors, hence independent. -Let's call $M^{\prime\prime}_r=M_{h_r}'$. -Now, -\begin{align}\Vert Mw_j\Vert=\Vert Mw_j-M_r''v^{h_r}_j\Vert&\le \Vert (M-M_r'')w_j\Vert +\Vert M_r''(w_j-v_j^{h_r})\Vert\le\\&\le \Vert M-M_r''\Vert\cdot\Vert w_j\Vert +\left(\sup_r\Vert M_r''\Vert\right)\cdot\Vert w_j-v^{h_r}_j\Vert\stackrel{r\to\infty}{\longrightarrow} 0\end{align} -So $Mw_1=Mw_2=\cdots=Mw_q=0$, hence $\dim\ker M\ge q$. $\square$ - -Back to our problem: The answer is: $\cj A$ is closed if and only if $A$ is diagonalizable. - -If part: $A\text{ diagonalizable}\implies \cj A=\overline{\cj A}$ -Indeed, let $\Spec{A}=\{\lambda_1,\cdots,\lambda_k\},\ k\le n$ be its distinct eigenvalues. Let $\dim\ker(A-\lambda_sI)=m_s$. -Since $A$ is diagonalizable, it holds that $$M\in\cj A\iff \forall s=1,\cdots, k,\ \dim\ker(M-\lambda_sI)=m_s$$ -Let $\{A_h\}_{h\in \mathbb N}\subseteq \cj A$ be a converging sequence $$A_h\to B$$ -Then, for all $s$, $A_h-\lambda_sI\to_h B-\lambda_sI$. -By lemma 1, $\dim\ker (B-\lambda_sI)\ge m_s$. But $$n=\sum_{s=1}^k m_s\le\sum_{s=1}^k\dim\ker(B-\lambda_sI)\le n$$ so the only possible choice is $\forall s,\ \dim\ker (B-\lambda_sI)=m_s$. Hence, $B\in\cj A$. -Only if part: $ \cj A=\overline{\cj A}\implies A\text{ diagonalizable}$ -The diagonalizable+nilpotent decomposition $A=\diag A+\nil A$ yields that for all $\varepsilon>0,\ \diag A+\varepsilon\cdot \nil A\in \cj A$. But then $\diag A\in\overline{\cj A}=\cj A$. So $A$ is diagonalizable. $\square$ -Real case: -The fact that $\mathscr{M}_n(\mathbb R)$ is embedded in $\mathscr M_n(\mathbb C)$ and that, for all $A\in\mathscr{M}_n(\mathbb R),\ \cj_{\mathbb R}(A)=\cj_{\mathbb C}(A)\cap \mathscr{M}_n(\mathbb R)$ yields that a real matrix which can be diagonalized in $\mathbb C$ has closed real similarity class. The same trick used for the converse in the complex case can be adapted to fit the real case, considering the real Jordan form, as user1551 recalled.<|endoftext|> -TITLE: Why is $\mathbb{Z}[1/p]$ the direct limit of $\mathbb{Z}\xrightarrow{p}\mathbb{Z}\xrightarrow{p}\mathbb{Z}\to...$? -QUESTION [6 upvotes]: This is an example from Algebraic Topology, by Hatcher. -As far as I understand, I have to take the direct sum of all the $G_i$s (in this case, $\mathbb{Z}\oplus\mathbb{Z}\oplus...$) and quotient out all elements of the form $(g_1,g_2-\alpha_1(g_1),...)$ with $\alpha_i$ as given in the definition on the same page. -In this case, what are the elements I need to quotient out? Won't they be elements of the form $(z_1,z_2-pz_1,...)$? Because given that $z_2$ was obtained by taking an integer and multiplying it by $p$, $z_2-pz_1$ will be a multiple of $p$. -I just don't see how setting these to $0$ is the same as $\mathbb{Z}[1/p]$.
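-For a concrete sanity check, here is the guess I tried to verify (a Python sketch; reading a tuple $(z_1,z_2,z_3,\dots)$ in the direct sum as $\sum_i z_i/p^i \in \mathbb{Z}[1/p]$, where the map and the sample numbers are my own choices for illustration):
-
-    from fractions import Fraction
-
-    p = 2
-
-    def to_z_inv_p(coords):
-        # read (z_1, z_2, z_3, ...) in the direct sum as sum of z_i / p**i in Z[1/p]
-        return sum(Fraction(z, p**i) for i, z in enumerate(coords, start=1))
-
-    # an element of the form (z_1, z_2 - p*z_1, z_3 - p*z_2, -p*z_3, 0, ...)
-    z1, z2, z3 = 5, -7, 11
-    rel = (z1, z2 - p*z1, z3 - p*z2, -p*z3)
-    print(to_z_inv_p(rel))  # prints 0: exactly such elements vanish in Z[1/p]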
- -REPLY [8 votes]: There's a better way to see why $\Bbb Z[1/p]$ is the direct limit (in my opinion): set up an isomorphism from the original direct limit to another one that's easier to understand: -$$\require{AMScd} -\begin{CD} -\mathbb{Z} @>{p}>> \mathbb{Z} @>{p}>> \mathbb{Z} @>{p}>> \mathbb{Z} @>{p}>>\cdots \\ -@VV{1}V @VV{1/p}V @VV{1/p^2}V @VV{1/p^3}V \\ -\mathbb{Z} @>{}>> \frac{1}{p}\mathbb{Z} @>{}>> \frac{1}{p^2}\mathbb{Z} @>{}>> \frac{1}{p^3}\mathbb{Z} @>{}>> \cdots -\end{CD}$$ -The unlabelled arrows in the bottom row are simple inclusions. The other arrows are multiplication maps and are labelled by the scaling factor. In other words, the map $\mathbb{Z}\xrightarrow{p}\mathbb{Z}$ looks exactly the same as the inclusion $\mathbb{Z}\hookrightarrow\frac{1}{p}\mathbb{Z}$, and similarly for all the other parts of the commutative diagram. The direct limit in the bottom is then $\bigcup \frac{1}{p^n}\mathbb{Z}=\mathbb{Z}[1/p]$. -I haven't seen the construction you're talking about, but it seems intuitively clear why it should also define the direct limit. Say we have $G_1\to G_2\to G_3\to\cdots$. We want to interpret the arrows as "inclusions" (any noninjectivity means we pretend elements were the same to begin with in order to maintain this interpretation), in which case the direct limit is the "union" of all the $G_i$s. So our direct limit needs elements from all the things, so we can start off with $\bigoplus_i G_i$ and identify things that are supposed to be equal by quotienting by their differences. -In particular, $g\in G_i$ should represent the same element in the direct limit as $\alpha_i(g)\in G_{i+1}$ (where $\alpha_i:G_i\to G_{i+1}$), so we want their difference $(\cdots,0, g,-\alpha_i(g),0,\cdots)$ within the direct sum to be zero, and thus we must quotient by the subobject comprised of elements of the form -$$(g_1,g_2-\alpha_1(g_1),g_3-\alpha_2(g_2),\cdots)$$ -(where of course all but finitely many coordinates are zero). Indeed, the kernel of the homomorphism $\bigoplus_i\mathbb{Z}\to\mathbb{Z}[1/p]$ where the $i$th factor gets multiplied by $\frac{1}{p^i}$ and included is generated by things that look like $(z_1,z_2-pz_1,\cdots,z_n-pz_{n-1},-pz_n,0,\cdots)$.<|endoftext|> -TITLE: Do the singular matrices form a topological manifold -QUESTION [10 upvotes]: So the definition of manifold I'm using is that of a topological manifold (a topological space with an atlas of homeomorphisms to $\mathbb{R}^n$). -I have two related questions: - -Is the set of singular matrices (over $\mathbb{R}$) of dimension $n$ a manifold? My understanding is that it cannot be if it has the subspace topology inherited from $\mathbb{R}^{n^2}$, since in this case it's given by the surface $\det(A)=0$, which has a singularity at $0$. I'm aware of the similar questions here and here, but the answers given seem to be for smooth manifolds and not topological manifolds. -My second question is what exactly is the topological definition of a singularity? Since a topological space being a manifold is a property of the topology itself, there should be a purely topological definition of a singularity. The best I can come up with is that a singular point in a topological space is a point such that no open set which contains it can be homeomorphic to $\mathbb{R}^n$, however this definition seems like a bit of a cop out. - -Update -So as far as I can tell my definition of a topological singularity is correct. However as to whether the surface $\det(A)=0$ is a topological manifold or not, I still couldn't say.
-Considering the determinant as a potential submersion of manifolds $\mathbb{R}^{n^2}\rightarrow\mathbb{R}$, we see by looking at its derivative that it fails to be a submersion whenever the adjoint (or equivalently the cofactor) matrix is identically zero. And since the adjoint of a matrix being zero implies the matrix is singular, this will be some subset of the singular matrices. So it is these matrices which prevent us from using the regular level set theorem to conclude that the singular matrices form a smooth manifold, however it doesn't mean they aren't a topological manifold. -The set of points in $\mathbb{R}^{n^2}$ which are singularities is thus given by the polynomial surface -$$\{\det(A_{ij})=0\; :\; 1\leq i,j\leq n\}$$ -where $A_{ij}$ is the sub-matrix of $A$ obtained by deleting the $i^{th}$ row and $j^{th}$ column. -An example of a singular curve which does admit a $C^0$ structure is $y^2-x^3=0$, via the parametrization $t\mapsto (t^2,t^3)$. -An example of a singular curve which doesn't admit a $C^0$ structure is $xy=0$, since any open set containing the origin has 4 disconnected components when the origin is removed, while removing a point in $\mathbb{R}^k$ results in at most 2 disconnected components. - -REPLY [2 votes]: This post is to expand the details in Orangeskid's argument. -What we have is some open set $U = V \setminus \{0\}$. Hypothetically, it's homeomorphic to $B^3 \setminus \{0\}$, and hence simply connected. -Now, one picks a map $\varphi: S^1 \to T^2 \times (0,\varepsilon)$ that's not null-homotopic. Because $\pi_1(T^2 \times (0,\varepsilon)) \cong \Bbb Z^2$, there are quite a lot. (Concretely: $T^2 = S^1 \times S^1$, so pick the inclusion $\varphi(x) = (x,1,\varepsilon/2)$. That this is not null-homotopic can be proved in a first algebraic topology course after learning that the identity map $S^1 \to S^1$ is not null-homotopic.) -If $U$ was simply connected, then this map would be null-homotopic in $U$. That is, there would be a map $\tilde \varphi: D^2 \to U$ that extended $\varphi$ (in the sense that $\tilde \varphi = \varphi$ on $S^1 \subset D^2$.) Because $U \subset T^2 \times (0,\infty)$, it would be null-homotopic in $T^2 \times (0,\infty)$ as well. But that's precisely the trouble! What an algebraic topologist would say is that the map $T^2 \times (0,\varepsilon) \to T^2 \times (0,\infty)$ is a homotopy equivalence, hence induces an isomorphism on all homotopy groups. Let's be concrete. -Consider the map $f: T^2 \times (0,\infty) \to T^2 \times (0,\infty)$ given by $f(x,t) = (x,\varepsilon/2)$. Now consider $f\tilde \varphi$. This is a map whose image lies in $T^2 \times (0,\varepsilon)$ and that agrees with $\varphi$ on the boundary circle $S^1$. So it is a null-homotopy of our map $S^1 \to T^2 \times (0,\varepsilon)$; this contradicts our choice of $\varphi$. So there was no such $U$ after all! 
-This exact same argument proves that the cone on any non-simply connected manifold is not a manifold; with some more tools one can prove that the cone on a manifold whose homology is not that of $S^n$ is not a manifold; and then putting these together and the Whitehead theorem and the Poincare conjecture (which is now a theorem topologically in all dimensions) one concludes that the cone on a manifold $M$ is only a manifold if $M = S^n$ for some $n$.<|endoftext|> -TITLE: Moore Spaces: explicit CW-complex for $M(\mathbb{Z}_m, n)$ -QUESTION [6 upvotes]: Given an abelian group $G$ and an integer $n \ge 1$ we can construct a $CW$ complex $X$ such that $H_n(X) \cong G$ and $\tilde{H}_i(X)=0$ for all $i \neq n$. We call this $CW$ complex a Moore space and denote it $M(G,n)$. -Hatcher writes that an easy special case is when $G = \mathbb{Z}_m$ and we can take $X$ to be $S^n$ with a cell $e^{n+1}$ attached by a map $S^n \to S^n$ of degree $m$. -I am having a hard time working out this example for myself. In particular, how do I use this attaching map of degree $m$ to compute the homology group? - -REPLY [7 votes]: Let $X$ be the space obtained from attaching $D^{n+1}$ to $S^n$ by a degree $m$ map. The cellular chain complex of the space is the following -$$0 \to \Bbb Z \stackrel{\times m}{\to} \Bbb Z \to 0 \to \cdots \to 0 \to \Bbb Z \to 0$$ -since we have cells only at dimension $0, n$ and $n+1$, with a single cell in each of these dimensions. The chain complex has nontrivial homology only at the $n$-th level, where $H_{n}(X) \cong \Bbb Z/m\Bbb Z$. Thus, the space is an $M(\Bbb Z/m, n)$-space. - -If you don't know cellular homology, use the long exact sequence for $(X, S^n)$ :$$0 \cong H_{n+1}(S^n) \to H_{n+1}(X) \to H_{n+1}(X, S^n) \stackrel{\partial}{\to} H_n(S^n) \to H_n(X) \to H_n(X, S^n) \cong 0$$ -Note $H_{n+1}(X, S^n) \cong H_{n+1}(X/S^n) \cong H_{n+1}(S^{n+1}) \cong \Bbb Z$ (as $(X, S^n)$ is a CW-pair) and $H_n(S^n) \cong \Bbb Z$. -Recall that $\partial : H_{n+1}(X, A) \to H_n(A)$ sends homology class of $(n+1)$-cycles $\xi$ in $X$ relative to $A$ to homology class of the $n$-cycle $\partial \xi$ in $A$. [Hatcher, pg. $117$] Bearing that interpretation in mind, note that the generator of $H_{n+1}(X, S^n)$ corresponding to $1 \in \Bbb Z$ is represented in simplicial homology as the homology class of the relative cycle $\zeta$ in $Z^{n+1}(X, S^n)$ corresponding to a triangulation of the $(n+1)$-disk in $X$ by $(n+1)$-simplices with the sum of the faces being a cycle in $Z^n(S^n)$. This cycle in $Z^n(S^n)$ is precisely $\partial \zeta$, which is in turn represented by the degree $m$ map $S^n \to S^n$ in singular homology, as $X = D^{n+1} \cup_\varphi S^n$ where $\varphi$ is a degree $m$ map. -Thus, we conclude that $\partial$ sends $1$ to $m$. As $\partial$ is injective, $H_{n+1}(X) \cong \text{ker} \partial \cong 0$. The above sequence then boils down to the short exact sequence -$$0 \to \Bbb Z \stackrel{\times m}{\to} \Bbb Z \to H_n(X) \to 0$$ -Hence, $H_n(X) \cong \Bbb Z/m\Bbb Z$. The same argument can be used to prove that $H_i(X) \cong 0$ for other $i$'s. Thus, we conclude $X$ is a $M(\Bbb Z/m\Bbb Z, n)$-space. - -Someone suggested that I should provide a rigorous algebraic argument for the claim that $\partial$ is the multiplication by $m$ morphism. So here goes: -Let $\psi : D^{n+1} \hookrightarrow D^{n+1} \coprod S^n \to X$ be the map obtained from composing the inclusion into the disjoint union with the quotient map. $\psi|_{\partial D^{n+1}} : S^n \to S^n$ is clearly a degree $m$ map.
We thus have a map of pairs $\psi : (D^{n+1}, S^n) \to (X, S^n)$. Use long exact sequence of both pairs coupled with naturality to obtain the commutative diagram -$$\require{AMScd} \begin{CD} H_{n+1}(D^{n+1}, S^n) @>>> H_n(S^n)\\ @VVV @VVV \\ H_{n+1}(X, S^n) @>>> H_n(S^n) \end{CD}$$ -Note that the map $H_{n+1}(\psi) : H_{n+1}(D^{n+1}, S^n) \to H_{n+1}(X, S^n)$ can be canonically identified with the map $H_{n+1}(\bar{\psi}) : H_{n+1}(S^{n+1}) \to H_{n+1}(S^{n+1})$ where $\bar{\psi} : S^{n+1} \to S^{n+1}$ is induced from $\psi$ by quotienting the subspace $S^n$ in both spaces. One can easily see by the local degree formula that $\bar{\psi}$ is a map of degree $1$, hence $H_{n+1}(\bar{\psi})$ and thus $H_{n+1}(\psi)$ is ultimately an isomorphism. The top horizontal map is an isomorphism. The right vertical map is multiplication by $m$ because $\psi|_{\partial D^{n+1}}$ is of degree $m$. By commutativity of the diagram, the bottom horizontal map - the snake map $\partial$ - has to be multiplication by $m$ too. $\blacksquare$<|endoftext|> -TITLE: Cardinality of the Mandelbrot set -QUESTION [8 upvotes]: Is the Mandelbrot set countable or of the cardinality $2^{\aleph_0}$? My intuition says the latter, but I couldn't find a bijection. - -REPLY [18 votes]: $M$ is a subset of $\Bbb R^2$ and contains the open disk of radius $\frac14$ around the origin, i.e., $M$ is "between" two sets of cardinality $2^{\aleph_0}$, hence itself of cardinality $2^{\aleph_0}$.<|endoftext|> -TITLE: Find all irreducible polynomials of degrees 1,2 and 4 over $\mathbb{F_2}$. -QUESTION [7 upvotes]: Find all irreducible polynomials of degrees 1,2 and 4 over $\mathbb{F_2}$. -Attempt: Suppose $f(x) = x^4 + a_3x^3 + a_2x^2 + a_1x + a_0 \in \mathbb{F_2}[x]$. Then since $\mathbb{F_2} = \{0,1\}$, we have either $0$ or $1$ for each $a_i$. Then we have two choices for each of the $4$ coefficients, hence there are 16 polynomials of degree $4$ in $\mathbb{F_2}[x]$. -Recall $f(x)$ is irreducible if and only if it has no roots. Then -$f_1 = x$ is irreducible because it has no roots -$f_2 = x + 1$ is also another irreducible polynomial. -$f_3 = x^4 + x^2 + x = x ( x^3 + x + 1) $ is reducible. -$f_4 = x^4 = x^3* x$ is reducible -$f_5 = x^4 + x + 1$ -Can someone please help me? Is there a way I can save time in finding the irreducible polynomials, other than just trying to come up with polynomials? Any better approach or hint would really help! Thank you ! - -REPLY [8 votes]: Degree $1$, clearly $x$ and $x+1$. -Degree $2$, notice the last coefficient must be one, so there are only two options, $x^2+x+1$ and $x^2+1$. Clearly only $x^2+x+1$ is irreducible. -Degree $4$. There are $8$ polynomials to consider, again, because the last coefficient is $1$, now notice a polynomial is divisible by $x+1$ if and only if the sum of its coefficients is even. So the only polynomials without factors of degree $1$ are four: -$x^4+x^3+x^2+x+1$ -$x^4+x^3+1$ -$x^4+x^2+1$ -$x^4+x+1$. -Of course, we are missing the possibility it is the product of two irreducibles of degree $2$, but the only combination is $(x^2+x+1)(x^2+x+1)=x^4+x^2+1$. -Hence the irreducible ones are: -$x^4+x^3+x^2+x+1,x^4+x^3+1,x^4+x+1$<|endoftext|> -TITLE: In $S_{4}$, find a Sylow 2-subgroup and a Sylow 3-subgroup. -QUESTION [6 upvotes]: In $S_{4}$, find a Sylow 2-subgroup and a Sylow 3-subgroup. - -With everyone's comments and inputs, I have outlined the following answer: -We have $|S_{4}|= 24 = 2^{3}3$.
If $P$ is a Sylow $2$-subgroup then $|P| = 2^3$, and if $K$ is a Sylow $3$-subgroup then $|K|=3$. So we need to find subgroups of $S_4$ of order $8$ and order $3$. For the subgroup of order $3$, since it is of prime order, it is cyclic, thus we need to find an element of $S_4$ of order $3$. So any $3$-cycle of $S_4$ will suffice. As a concrete example, we will call $K = \langle(1\ 2\ 3)\rangle$. -Now we must find $P$. Since $|P| = 8$, every element in $P$ must have order $1$, $2$ or $4$. We can choose a dihedral subgroup $D_4$ to be $P$. For example -$$P= \lbrace e, (1234), (13)(24), (1432),(24) ,(14)(23), (13), (12)(34)\rbrace.$$ - -REPLY [4 votes]: A few tips for finding a Sylow $2$-subgroup without too much manual labor: - -Every element of $S_4$ which is not a $3$-cycle has order $1$, $2$, or $4$, hence is contained in some Sylow $2$-subgroup. -By the Sylow theorems, $n_2$ is either $1$ or $3$. We can exclude $1$ as a possibility because we have more than $8$ elements of order $2$ or $4$. -The Sylow $2$-subgroups are conjugate to each other, hence isomorphic to each other, so they each have the same number of elements of each cycle type. -There are six transpositions ($2$-cycles) in $S_4$. Moreover, we cannot have two overlapping transpositions in the same Sylow $2$-subgroup, because their product is a $3$-cycle, e.g. $(12)(23) = (123)$, which has order $3$ and hence cannot be in any $2$-subgroup. This means that there are exactly two disjoint transpositions in each Sylow $2$-subgroup. - -So, given the above, we know that the three Sylow $2$-subgroups must start like this: -$$S_1 = \{e, (12), (34), (12)(34),\ldots\}$$ -$$S_2 = \{e, (13), (24), (13)(24),\ldots\}$$ -$$S_3 = \{e, (14), (23), (14)(23),\ldots\}$$ -Now reason similarly about the $4$-cycles.<|endoftext|> -TITLE: Collatz $4n+1$ rule? -QUESTION [8 upvotes]: I noticed something about the Collatz Conjecture, (I was literally obsessed with trying to prove it). I of course have NO intention of trying to prove it, clearly it is beyond my reach and I hope not to offend anyone by what may be a nonsensical observation, but I was a bit curious. -This is what I've noticed or "my conjecture". Pick any $\textbf{odd number}$ $n$. Then $4n+1$ takes exactly $+2$ more steps than $n$. -I also came across this $4n+1$ rule a lot of times in other observations, I'm not sure if this is at all important or completely nonsensical. If you expand the set out you get: -$25, 101, 405, 1621$ -$23, 93, 373, 1493$ -$17, 69, 277, 1109$ -$11, 45, 181, 725$ -$9, 37, 149, 597, 2389$ -$7, 29, 117, 469, 1877$ -$3, 13, 53, 213$ -$1, 5, 21, 85, 341$ -I checked this for all values up to 2000001, so nothing concrete at all, but just a bit curious at any explanations for this pattern and (potentially) an explanation of a function to find the number of steps it takes for an odd number to reach 1? -The problem is that the number of steps it takes for the left-most number seems random...., thus making it impossible to determine the other number's behaviour without knowing the "basis". - -REPLY [21 votes]: If $n$ is odd, we have $4n+1 \to 12n+4 \to 6n+2 \to 3n+1$, while $n \to 3n+1$ so the streams join with $4n+1$ taking two extra steps.<|endoftext|> -TITLE: To evaluate limit $\lim_{n \to \infty} (n+1)\int_0^1x^{n}f(x)dx$ -QUESTION [5 upvotes]: Let $f:\Bbb{R} \rightarrow \Bbb{R}$ be a differentiable function whose derivative is continuous; evaluate -$$\lim_{n \to \infty} \left((n+1)\int_0^1x^{n}f(x)dx\right).$$ - -I think I have to use L'Hop rule, but I don't see how.
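-A quick numerical experiment (a Python sketch; the test function $f(x)=e^x$ is my own choice) suggests the limit should be $f(1)$:
-
-    from scipy.integrate import quad
-    import numpy as np
-
-    f = np.exp  # the conjectured limit would then be f(1) = e = 2.71828...
-
-    for n in [10, 100, 1000]:
-        val, _ = quad(lambda x, n=n: x**n * f(x), 0, 1)
-        print(n, (n + 1) * val)
-    # the printed values creep up towards e as n grows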
- -REPLY [6 votes]: You may just integrate by parts: -$$ -(n+1)\int_0^1x^{n}f(x)dx=\left.x^{n+1}f(x)\right|_0^1-\int_0^1x^{n+1}f'(x)dx=f(1)-\int_0^1x^{n+1}f'(x)dx -$$ and use the fact that $f'$ is continuous over $[0,1]$ giving -$$ -|f'(x)|\leq M, \quad x \in [0,1], -$$ and -$$ -\left|\int_0^1x^{n+1}f'(x)dx\right|\leq\int_0^1x^{n+1}|f'(x)|dx\leq\frac{M}{n+2} -$$ to conclude. - -REPLY [3 votes]: We can prove that if $f$ is a continuous function ($f$ being differentiable is not needed) then -$$ -\lim_{n \to \infty} (n+1)\int_0^1x^{n}f(x)dx=f(1) -$$ -Since $f$ is continuous, $f$ is bounded on $[0,1]$. Also -$$ -\forall \epsilon>0, \exists \delta>0, \forall x\in(1-\delta,1],\text{ we have } |f(x)-f(1)|<\epsilon -$$ -\begin{align} -\left|(n+1)\int_{0}^{1}x^nf(x)dx-f(1)\right|&= -\left|\int_{0}^{1-\delta} (n+1)x^nf(x)dx+\int_{1-\delta}^{1} (n+1)x^nf(x)dx-f(1)\right| -\\ -&=\left|f(t_1) \int_{0}^{1-\delta} (n+1)x^ndx+f(t_2)\int_{1-\delta}^{1} (n+1)x^ndx-f(1)\right| \hspace{5 mm} -\\ &\hspace{10 mm}(t_1\in(0,1-\delta),t_2\in(1-\delta,1) \text{ and by IMVT}) -\\ -&=\left|f(t_1)(1-\delta)^{n+1}+f(t_2)(1-(1-\delta)^{n+1})-f(1)\right| -\\ -&\leqslant 2M(1-\delta)^{n+1}+|f(t_2)-f(1)| -\\&<2M(1-\delta)^{n+1}+\epsilon -\end{align} -So -$$ -\varlimsup\limits_{n\to\infty}\left|(n+1)\int_{0}^{1}x^nf(x)dx-f(1)\right|\leqslant \varlimsup\limits_{n\to\infty}2M(1-\delta)^{n+1}+\epsilon=\epsilon -$$ -Since $\epsilon$ is arbitrarily small, -$$ -\varlimsup\limits_{n\to\infty}\left|(n+1)\int_{0}^{1}x^nf(x)dx-f(1)\right|=0 \hspace{5 mm} \text{and}\hspace{5 mm} \varliminf\limits_{n\to\infty}\left|(n+1)\int_{0}^{1}x^nf(x)dx-f(1)\right|=0 -$$ -So -$$ -\lim\limits_{n\to\infty}\left|(n+1)\int_{0}^{1}x^nf(x)dx-f(1)\right|=0\hspace{5 mm} \text{or} \hspace{5 mm}\lim\limits_{n\to\infty}(n+1)\int_{0}^{1}x^nf(x)dx=f(1) -$$ - -REPLY [3 votes]: By assumption and the integration by parts formula, it follows that -\begin{align} -& (n + 1)\int_0^1 x^n f(x) dx \\ -= & \int_0^1 f(x) d(x^{n + 1}) \\ -= & \left.f(x)x^{n + 1} \right|_0^1 - \int_0^1 x^{n + 1}f'(x)dx \\ -= & f(1) - \int_0^1 x^{n + 1}f'(x) dx. -\end{align} -Since $f'$ is continuous, $f'$ is bounded on the interval $[0, 1]$, assume it is bounded by $M \geq 0$, then, -\begin{align} -& \left|\int_0^1 x^{n + 1}f'(x) dx\right| \\ -\leq & \int_0^1 x^{n + 1}|f'(x)| dx \\ -\leq & M\int_0^1 x^{n + 1} dx \\ -= & \frac{M}{n + 2} \to 0 -\end{align} -as $n \to \infty$. -Hence the result of the original limit is $f(1)$.<|endoftext|> -TITLE: Roots of unity filter, identity involving $\sum_{k \ge 0} \binom{n}{3k}$ -QUESTION [7 upvotes]: How do I see that$$3\sum_{k \ge 0} \binom{n}{3k} = (1 + 1)^n + (\omega + 1)^n + (\omega^2 + 1)^n,$$where $\omega = \text{exp}\left({2\over3}\pi i\right)$? What is the underlying intuition behind this equality? - -REPLY [8 votes]: The first thing we need is the binomial theorem which says that -$$(1+x)^n = \sum_{k=0}^n{n\choose k} x^k = \sum_{k\geq 0}{n\choose k} x^k$$ -where the last equality follows since ${n\choose k} \equiv 0$ for all $k > n$. -Next we have that for any sequence $a_n$ we can split the sum of it over all the integers into a sum over their respective residue classes $\mod 3$ so that -$$\sum_{k\geq 0}a_k = \sum_{k\geq 0}a_{3k} + \sum_{k\geq 0}a_{3k+1} + \sum_{k\geq 0}a_{3k+2}$$ -The last thing we need is that $\omega^3 = 1$ and $1+\omega + \omega^2 = 0$ when $\omega = e^{\frac{2\pi i}{3}}$. The first equality is obvious, and the second can be found from the sum of a geometrical series $1+\omega+\omega^2 = \frac{\omega^3-1}{\omega-1} = 0$.
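-Incidentally, the identity is easy to sanity-check numerically before doing the algebra (a Python sketch; $n=10$ is an arbitrary choice of mine, and note the factor $3$ multiplying the binomial sum):
-
-    import cmath
-    from math import comb
-
-    n = 10
-    w = cmath.exp(2j * cmath.pi / 3)
-    lhs = 3 * sum(comb(n, k) for k in range(0, n + 1, 3))
-    rhs = (1 + 1)**n + (1 + w)**n + (1 + w**2)**n
-    print(lhs, rhs)  # 1023 and (1023+0j), up to rounding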
-With these ingredients we can now calculate -$$\matrix{(1+1)^n + (1+\omega)^n + (1+\omega^2)^n &=& \sum_{k\geq 0}{n\choose k}[1 + \omega^k + \omega^{2k}] \\&=& \sum_{k\geq 0}{n\choose 3k}[1 + \omega^{3k} + \omega^{6k}] \\&+& \sum_{k\geq 0}{n\choose 3k+1}[1 + \omega^{3k+1} + \omega^{6k+2}]\\&+& \sum_{k\geq 0}{n\choose 3k+2}[1 + \omega^{3k+2} + \omega^{6k+4}]\\&=&\sum_{k\geq 0}{n\choose 3k}[1 + 1 + 1]\\&+&\sum_{k\geq 0}{n\choose 3k+1}[1 + \omega + \omega^{2}]\\&+&\sum_{k\geq 0}{n\choose 3k+2}[1 + \omega^{2} + \omega]\\&=&3\sum_{k\geq 0}{n\choose 3k}}$$ -The identity can be generalized. If we take $\omega = e^{\frac{2\pi i}{N}}$ then the same type of derivation as above gives us -$$2^n + (1+\omega)^n + \ldots + (1+\omega^{N-1})^n = N\sum_{k\geq 0}{n\choose Nk}$$<|endoftext|> -TITLE: An example of an algebraically open set in $\mathbb{R}^2$ which is not open? -QUESTION [5 upvotes]: Definition. A subset $U$ of a real vector space $V$ is algebraically open if the sets $\{t\in\mathbb{R}:x+tv\in U\}$ are open for all $x,v\in V$. - -In the real vector space $\mathbb{R}^2$ equipped with the usual topology, it is clear that every open set is algebraically open, but how to find an algebraically open set which is not open? The hint says that a line intersects the unit circle in at most two points. - -REPLY [6 votes]: To use the hint... take the plane, and subtract a countable dense subset of the unit circle. It is not open, but it is still algebraically open by the hint. - -REPLY [2 votes]: The sets $\{x+tv\mid t \in \mathbb R\}$ for $x, v \in V$ fixed represent lines in $V$, and the sets $\{t \in \mathbb R \mid x+tv \in U\}$ represent the intersections of those lines with $U$, pulled back to $\mathbb R$. So a subset is defined to be "algebraically open" if it is open along every line. -The hint says that the unit circle intersects with every line at at most two points; this suggests that you take the complement of the unit circle, because in a line, the complement of finitely many points (e.g. at most two points) is automatically open. Of course, the complement of the unit circle is actually open, so you need to make some kind of adjustment. Hint: if you take the complement of any $S \subset S^1$, the same reasoning will still show that the resulting set is algebraically open.<|endoftext|> -TITLE: What does Gödel's Incompleteness Theorem prove? -QUESTION [5 upvotes]: Does Gödel's incompleteness theorem only prove that you can't have a formal system which describes number theory which is both complete and consistent, or is it more general? In other words: does it prove that any formal system is either inconsistent or incomplete? - -REPLY [13 votes]: There are formal systems that are both consistent and complete. -However, there is no formal system that has all of the following properties: - -it is strong enough to cover number theory -it is recursively decidable whether a formula is well-formed, whether a sentence is an axiom, and whether a sequence of sentences is a proof -the system is consistent -the system is complete - -REPLY [5 votes]: Not exactly. Let $L = \{+,\times, 0, 1, <\}$ and define $Th_L(\mathbb{N})= \{\varphi \text{ in } L : \mathbb{N} \models \varphi \}$. -This is complete and consistent (since it clearly has a model) and encompasses arithmetic. Gödel's Theorem states that this collection cannot be recursive (or even recursively enumerable), i.e.
there is no finitary algorithm which tells us which sentences in $L$ are in $Th_L(\mathbb{N})$.<|endoftext|> -TITLE: Maximum Baire class of a Riemann integrable function -QUESTION [5 upvotes]: In this answer (see example 1), Andrés Caicedo gives an example of a function $f : [0,1] \to \mathbb{R}$ which is Riemann integrable and is (strictly) of second Baire class. -Are there Riemann integrable functions of higher Baire class? Arbitrarily high? Are there Riemann integrable functions which are not in any Baire class? -To be precise, recall that for an ordinal $\alpha$, the Baire class $B_{\alpha}$ is defined inductively as follows. $B_0$ is the set of all continuous functions $f : [0,1] \to \mathbb{R}$. Then, once $B_\beta$ are constructed for all $\beta < \alpha$, we say $f \in B_\alpha$ if there exists a sequence $f_n \in \bigcup_{\beta < \alpha} B_\beta$ with $f_n \to f$ pointwise. Note that this stabilizes at $\alpha = \omega_1$. -If $\mathcal{R}$ is the set of all Riemann integrable functions, my question is: do we have $\mathcal{R} \subset B_\alpha$ for some $\alpha$? If so, what is the least such $\alpha$? If not, what is the least $\alpha$ such that $\mathcal{R} \cap B_{\omega_1} \subset B_\alpha$? -Of course, it may be helpful to recall that $f$ is Riemann integrable iff the set of discontinuities of $f$ has Lebesgue measure zero. - -REPLY [7 votes]: To complement your answer that there exist Riemann integrable functions in arbitrarily high Baire classes (incidentally, note that there are $2^c$ many Riemann integrable functions and only $c$ many Baire functions), there is a sense in which the set of Riemann integrable functions has a very, very tiny intersection with each of these Baire classes. -Among other results proved in the following paper, Marcus proves that for each countable ordinal $\alpha,$ the set of Riemann integrable functions in the normed space of bounded Baire-$\alpha$ functions from $[a,b]$ to $\mathbb R$ with the sup norm is a nowhere dense set in this space. -Solomon Marcus, Remarques sur les fonctions intégrables au sens de Riemann [Remarks on functions integrable in the sense of Riemann], Bulletin Mathématique de la Société des Sciences Mathématiques et Physiques de la République Populaire Roumaine (N.S.) 2(50) #4 (1958), 433-439. MR 22 #12180; Zbl 93.05901 -I think there are several stronger and/or more general versions of this result, but I'm not in a position to look into it now.<|endoftext|> -TITLE: Sum of the supremum and supremum of a sum -QUESTION [19 upvotes]: Consider two real-valued functions of $\theta$, $f(\cdot): \Theta \subset\mathbb{R}\rightarrow \mathbb{R}$ and $g(\cdot):\Theta \subset \mathbb{R}\rightarrow \mathbb{R}$. -Is there any relation between -(1) $\sup_{\theta \in \Theta} (f(\theta)+g(\theta))$ -and -(2) $\sup_{\theta \in \Theta} f(\theta)+\sup_{\theta \in \Theta} g(\theta)$ -? -Could you provide some informal proof or intuition behind your answer? - -REPLY [6 votes]: Since $\alpha = \sup f \ge f(x)$ for all $x$, and $\beta = \sup g \ge g(x)$ for all $x$, we have $\alpha + \beta \ge f(x) + g(x)$ for all $x$. So $\sup f + \sup g = \alpha + \beta \ge \sup (f + g)$. -That's what others are calling "intuitive". I like to think of it as a degree of limitation. We have more degrees of freedom for $f$ and $g$ operating independently than together as a fixed sum $(f + g)$. So the sum of the sups is greater than (or equal to) the sup of the sum because we simply have more options.
-But that's a pretty vague definition and if it isn't clear when I first say it, it'll just be confusing. -To show that equality might not hold, simply imagine $f$ and $g$ "get big" at different places. Imagine $f(x) < 1$ for all $x \ne 1$ but $f(1) = 1$ (example $f(x) = 1 - (x-1)^2$), so $\sup f = 1$. Imagine $g(x) < 1$ for all $x \ne 0$ but $g(0) = 1$ (Ex. $g(x) = 1 - x^2$), so $\sup g = 1$. But $f + g$ is always significantly less than 2. (In our examples $f + g = 1 - 2(x^2 - x) \le 3/2$.) -Then $\sup f + \sup g > \sup (f + g)$. In our example $2 = \sup f + \sup g > \sup (f + g) = 3/2$. -Or better yet let $g(x) = -f(x)$ where $f$ is bounded above and below but includes both positive and negative values. $f(x) + g(x) = 0$ so $\sup(f + g) = 0$ but $\sup f > 0$ and $\sup g = - \inf f > 0$.<|endoftext|> -TITLE: Find the maximum of the sum $\sum_{k=1}^{n}(f(f(k))-f(k))$ -QUESTION [13 upvotes]: Let $f:\{1,2,3,\cdots,n\}\to \{1,2,3,\cdots,n\}$ be such that -$$f(1)\le f(2)\le\cdots\le f(n)$$ -Let $g(n)$ be -$$g(n)=\max\left(\sum_{k=1}^{n}(f(f(k))-f(k))\right)$$ -Find $$g(n)$$ - -REPLY [4 votes]: My best effort: -Let $m=\lceil \frac{n+1}{2}\rceil$. So $m=\frac{n+1}{2}$ for odd $n$, and $m=\frac{n+2}{2}$ for even $n$. -Then let $f(k)=m$ for $1 \le k \lt m$ and $f(k)=n$ for $m \le k \le n$. In particular $f(m)=f(n)=n$. -In this case $f(f(k))=n$ for all $k$ and so $\displaystyle \sum_{k=1}^n (f(f(k))-f(k)) = (m-1)(n-m).$ -For odd $n$ this gives $\displaystyle \sum_{k=1}^n (f(f(k))-f(k)) = \left(\frac{n-1}{2}\right)^2 = \frac{n^2}{4}-\frac{n}{2}+\frac14.$ -For even $n$ this gives $\displaystyle \sum_{k=1}^n (f(f(k))-f(k)) = \frac{n}{2} \times \frac{n-2}{2}= \frac{n^2}{4}-\frac{n}{2}.$<|endoftext|> -TITLE: How to find the period of periodic solutions of the van der Pol equation? -QUESTION [7 upvotes]: The equation $$y''+1.115(y^2-1)y'+y=0$$ has solutions that tend towards periodic solutions and I am asked to enter the period of the periodic solutions. How can I find the period without any boundary conditions? And what is the period? - -REPLY [7 votes]: Answer to the first question "How can I find the period without any boundary conditions?": -You don't need boundary conditions since the limit cycle doesn't depend on them. You can choose any initial conditions. -Answer to the second question "What is the period?" (of the limit cycle): -$$y''+\mu (y^2-1)y'+y=0$$ -For small values of $\mu$ the equation is approximately $y''+y\simeq 0$, whose solution is $y=C\:\sin\left(2\pi\frac{t}{T}+\varphi \right)$ where the period is $T=2\pi$. -As a first approximation, you can take $\quad T\simeq 2\pi\quad$ since $\mu=1.115$ is not large. See the empirical graph below. -Semi-empirical formula from M. Cartwright: -$$T\simeq \left(3-2\ln(2) \right)\mu+3\frac{2.2338}{\mu^{1/3}} $$ -This formula, which isn't convenient for small $\mu$, cannot be used in the present case. -An updated equation is represented on the figure below.
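-(The value $T\simeq 6.75$ from direct numerical solving, quoted below, can be reproduced with a short computation: a SciPy sketch, where the initial condition and the crossing-detection details are my own choices.)
-
-    import numpy as np
-    from scipy.integrate import solve_ivp
-
-    mu = 1.115
-
-    def vdp(t, y):
-        return [y[1], -mu * (y[0]**2 - 1) * y[1] - y[0]]
-
-    # run long enough to settle onto the limit cycle, then measure the period
-    sol = solve_ivp(vdp, (0, 200), [2.0, 0.0], max_step=0.01)
-    t, y = sol.t, sol.y[0]
-    up = (y[:-1] < 0) & (y[1:] >= 0) & (t[:-1] > 100)  # late upward zero crossings
-    print(np.diff(t[:-1][up]).mean())  # approximately 6.75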
-When $\mu$ is not large, expanding in powers of $\mu$ leads to: -$$\frac{dt}{dy}\simeq \frac{1}{\sqrt{4-y^2}}+\frac{\mu}{4}y+\frac{\mu^2}{96}(5y^2-2)\sqrt{4-y^2} +...$$ -$$T\simeq 2\int_{-2}^2 \left( \frac{1}{\sqrt{4-y^2}}+\frac{\mu}{4}y+\frac{\mu^2}{96}(5y^2-2)\sqrt{4-y^2} \right)dy$$ -$$T\simeq 2\pi\left(1+\frac{\mu^2}{16}\right)\qquad\text{not large }\mu .$$ -This approximate analytic formula gives $\quad T\simeq 6.77$ -Direct numerical solving of the ODE gives $\quad T\simeq 6.75$ -More simply, the rough approximation $\quad T\simeq 2\pi\simeq 6.28\quad$ is not too bad in case of $\mu=1.115$.<|endoftext|> -TITLE: How do I find the matrix with respect to a different basis? -QUESTION [5 upvotes]: I tried to solve this question but the answer is totally different, can you explain how to solve it - -REPLY [17 votes]: Call $\mathcal E = \{\mathbf e_1, \mathbf e_2, \mathbf e_3\}$ the standard basis for $\Bbb R^3$. Then any vector $\mathbf v \in \Bbb R^3$ can be written as $$\mathbf v = v^1\mathbf e_1 + v^2\mathbf e_2 + v^3\mathbf e_3$$ for some unique triple of numbers $(v^1, v^2, v^3)$ (note: those superscripts are not exponents, they're just indices). The standard basis is particularly useful because it is an orthonormal basis. That means that $\mathbf e_i \cdot \mathbf e_i = 1$ for all $i$ and $\mathbf e_i \cdot \mathbf e_j = 0$ for all $i\ne j$. This property allows a lot of simplifications when you start working through problems. -The matrix representation of the vector $\mathbf v$ with respect to (wrt) $\mathcal E$ is $$[\mathbf v]_{\mathcal E} = \pmatrix{v^1 \\ v^2 \\ v^3}$$ -$\mathcal E$ isn't the only basis for $\Bbb R^3$ however. In fact there are an infinite number of bases. Let $\mathcal B= \{\mathbf b_1, \mathbf b_2, \mathbf b_3\}$ be some arbitrary basis of $\Bbb R^3$. Then by definition we can also expand the vector $\mathbf v$ in the basis $\mathcal B$: $$\mathbf v = \nu^1\mathbf b_1 + \nu^2\mathbf b_2 + \nu^3\mathbf b_3$$ where in general $(v^1, v^2, v^3) \ne (\nu^1, \nu^2, \nu^3)$. Likewise the matrix representation of $\mathbf v$ wrt $\mathcal B$ is $$[\mathbf v]_{\mathcal B} = \pmatrix{\nu^1 \\ \nu^2 \\ \nu^3}$$ - -A linear transformation $T: V \to W$ exists independent of any bases we choose for the spaces $V$ and $W$. However, once we've chosen those bases we can express the action of $T$ on elements of $V$ in matrix form. Say we choose $\mathcal C$ as our basis for $V$ and $\mathcal D$ as our basis for $W$, then the way that $T$ transforms elements $\mathbf v \in V$ is written in matrix form like: $$[T]_{\mathcal D\leftarrow \mathcal C}[\mathbf v]_{\mathcal C} = [T(\mathbf v)]_{\mathcal D}$$ -Notice that we have to specify two bases for any linear transformation $T$ before we can express the action of $T$ in matrix form. However, if $T: V \to V$ then most likely we'd like to choose the same basis for both the domain and codomain of $T$ since they are in fact the same space. Thus often when we have a linear transformation from a space to itself we'll only specify one basis for $T$ and expect that the reader will understand that $[T]_{\mathcal C}$ really means $[T]_{\mathcal C \leftarrow \mathcal C}$. - -An identity transformation $I: V\to V$ is one that doesn't change any vector that it acts on. I.e. $$I(\mathbf v) = \mathbf v$$ for all $\mathbf v\in V$. Most of the time when we talk about an identity matrix we'll be talking about the matrix representation of $I$ wrt the same basis for the domain and codomain as above.
It turns out that $$[I]_{\mathcal B} = \pmatrix{1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1}$$ for any basis $\mathcal B$. However if we decide that we'd like to choose different bases for the domain and codomain of $I$, then the matrix transformation could take the form of any invertible matrix (ask yourself why it must be an invertible matrix). We call the matrix $[I]_{\mathcal C \leftarrow \mathcal B}$ the change of basis matrix from $\mathcal B$ to $\mathcal C$ because it has the property $$[I]_{\mathcal C\leftarrow\mathcal B}[\mathbf v]_{\mathcal B} = [\mathbf v]_{\mathcal C}$$ for all $\mathbf v \in V$. -For instance, using $[\mathbf v]_{\mathcal E}$ and $[\mathbf v]_{\mathcal B}$ from the first section, it is true that $$[I]_{\mathcal B \leftarrow \mathcal E}\pmatrix{v^1 \\ v^2 \\ v^3} = \pmatrix{\nu^1 \\ \nu^2 \\ \nu^3}$$ -The change of basis matrix from a basis $\mathcal C = \{\mathbf c_1, \dots, \mathbf c_n\}$ to $\mathcal D = \{\mathbf d_1, \dots, \mathbf d_n\}$ turns out to be $$[I]_{\mathcal D \leftarrow \mathcal C} = \pmatrix{[\mathbf c_1]_{\mathcal D} & \cdots & [\mathbf c_n]_{\mathcal D}}$$ where this is the matrix whose $i$th column is the vector $[\mathbf c_i]_{\mathcal D}$ for all $i$. Confirm this for yourself by figuring out the action of this matrix on a convenient basis of $V$ (hint: try the basis $\mathcal C$). - -With that change of basis idea in hand it is clear that we should be able to transform the bases associated with the domain and codomain of a linear transformation $T$ by the formula: $$[T]_{\mathcal D} = [I]_{\mathcal D\leftarrow \mathcal C}[T]_{\mathcal C}[I]_{\mathcal C\leftarrow \mathcal D}$$ - -Using all of that info above, let's take a look at your question. You're given $[T] = [T]_{\mathcal E}$ and you're given the basis $\mathcal B$ represented in its coordinates wrt to $\mathcal E$. That is, from your question we know that $$[\mathbf v_1]_{\mathcal E} = \pmatrix{1 \\ 0 \\ 1},\quad [\mathbf v_2]_{\mathcal E} = \pmatrix{0 \\ 1 \\ 1},\quad [\mathbf v_3]_{\mathcal E} = \pmatrix{1 \\ 1 \\ 0}$$ This is all we need to find $[T]_{\mathcal B}$. First we construct the change of basis matrix $[I]_{\mathcal E\leftarrow \mathcal B}$. But notice that it's just $$[I]_{\mathcal E\leftarrow \mathcal B} = \pmatrix{[\mathbf v_1]_{\mathcal E} & [\mathbf v_2]_{\mathcal E} & [\mathbf v_3]_{\mathcal E}} = \pmatrix{1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0}$$ Then of course because $[I]_{\mathcal B\leftarrow \mathcal E}$ undoes the action of $[I]_{\mathcal E\leftarrow \mathcal B}$, we can see that $$[I]_{\mathcal B\leftarrow \mathcal E} = \pmatrix{1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0}^{-1} = \frac 12\pmatrix{1 & -1 & 1 \\ -1 & 1 & 1 \\ 1 & 1 & -1}$$ -Now that you have the change of basis matrices just multiply out $[T]_{\mathcal B} = [I]_{\mathcal B\leftarrow \mathcal E}[T]_{\mathcal E}[I]_{\mathcal E\leftarrow \mathcal B}$ to get $[T]_{\mathcal B}$. It's as easy as that. ;)<|endoftext|> -TITLE: Is there an analytic function $f$ on $B(0,1)$ the open ball with radius 1 such that $f(1/n)=e^{-n}$ for $n=2,3,4,...$? -QUESTION [8 upvotes]: Is there an analytic function $f$ on $B(0,1)\subset\mathbb{C}$ such that $f(1/n)=e^{-n}$ for $n=2,3,4,...$? I know the following doesn't work: -Let $g(z)=\exp(-1/z)$. Then, $f=g$ on a sequence with a limit point in $B(0,1)$ and so $f=g$ on $B(0,1)$. 
Since $g$ is not $\mathbb{C}$-differentiable at $0$, neither is $f$ and so such a function cannot exist. -This is not a valid solution, because the identity principle cannot be used with a function like $\exp(-1/z)$, which is not analytic at $z=0$. Any help using the identity principle another way? - -REPLY [7 votes]: Here's another way to do it. Assume that such an $f$ exists. Then we can write -$$ f(z) = z^k g(z) $$ -for some (positive due to the assumptions) integer $k$ and some holomorphic function $g$ on $B(0,1)$ with $g(0) \neq 0$. Plug in $z=1/n$: -$$ -e^{-n} = f(1/n) = \frac{1}{n^k} g(1/n) -$$ -i.e. -$$ -g(1/n) = n^k e^{-n}. -$$ -Let $n \to \infty$. This gives (by continuity of $g$) that $g(0)=0$, which is a contradiction.<|endoftext|> -TITLE: $\frac{\Gamma(\frac{n_T + n_C - 2}{2})}{\sqrt{\frac{n_T+n_C-2}{2}}\Gamma\left(\frac{n_T+n_C-3}{2}\right)} \approx 1 - \frac{3}{4(n_T+n_C+2)-1}$ -QUESTION [7 upvotes]: In this answer to a question I asked (which derives the variance of Cohen's $d$), the approximation -$$\frac{\Gamma\left(\frac{n_T + n_C - 2}{2}\right)}{\sqrt{\frac{n_T+n_C-2}{2}}\Gamma\left(\frac{n_T+n_C-3}{2}\right)} \approx 1 - \frac{3}{4(n_T+n_C+2)-1}$$ -is used. We can reasonably assume that $n_T, n_C> 0$ are integers. -How is this approximation derived? The answerer states: - -I pulled it from the Hedges paper -- don't know its derivation at the moment but will think about it some more. - -I wish I had more to contribute to this question than that, but the removal of the $\Gamma$ function I find completely baffling, and I wouldn't even know where to start. -Edit: Currently trying out Stirling's approximation, seeing if that leads me anywhere. And so far, I'm quite lost as to how to deal with the division by $2$ in the $\Gamma$ functions. - -REPLY [2 votes]: Here is a simple, elementary derivation, using only the recursion relation $\Gamma(x+1)=x\ \Gamma(x)$ instead of Stirling's approximation. -We define -$$f(x) = \frac{\Gamma(x)}{\sqrt{x}\,\Gamma(x-\frac12)}\ ,$$ -where $x = \frac{n_T+n_C-2}2$. Then what we want to prove is the following asymptotic behavior for large $x$: -$$f(x)=1-\frac3{8x-\frac76}+O(x^{-3})\ .$$ -The trick is to consider the product $f(x)f(x+\frac12)$. The above definition of $f(x)$ leads to -$$f(x)f(x+{\small\frac12}) -= \frac{\Gamma(x)}{\sqrt{x}\,\Gamma(x-\frac12)}\frac{\Gamma(x+\frac12)}{\sqrt{x+\frac12}\,\Gamma(x)} -= \frac{x-\frac12}{\sqrt{x}\sqrt{x+\frac12}} -= \frac{1-\frac1{2x}}{\sqrt{1+\frac1{2x}}} -\ , $$ -where in the 2nd equality we have used the recursion for $\Gamma$. Expanding the square root in the denominator yields -$$f(x)f(x+{\small\frac12}) = 1-\frac3{4x}+\frac7{32x^2}+O(x^{-3})\ .$$ -On the other hand, setting -$$f(x)=1-\frac1{ax+b}+O(x^{-3})\ ,$$ -we obtain another expansion -$$f(x)f(x+{\small\frac12}) -= \left( 1-\frac1{ax}\frac1{1+\frac b{ax}} \right) - \left( 1-\frac1{ax}\frac1{1+\frac{\frac a2 +b}{ax}} \right)+O(x^{-3}) $$ -$$ = 1-\frac2{ax}+\frac{1+\frac a2 +2b}{a^2x^2}+O(x^{-3})\ .$$ -Comparison of the coefficients of the $x^{-1}$ and $x^{-2}$ terms in the two expansions finally gives $a=\frac83$ and $b=-\frac7{18}$, which concludes the proof.<|endoftext|> -TITLE: There are 31 houses on north street numbered from 1 to 57. Show at least two of them have consecutive numbers. -QUESTION [6 upvotes]: I thought to use the pigeonhole principle, but besides that I am not sure how to solve it.
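-For what it's worth, a tiny dynamic program (a Python sketch I wrote to convince myself; the recurrence is the standard "no two adjacent" one) confirms that at most 29 of the numbers 1 to 57 can be chosen with no two consecutive:
-
-    def max_no_consecutive(n):
-        # largest subset of {1, ..., n} containing no two consecutive integers
-        take, skip = 0, 0
-        for _ in range(n):
-            take, skip = skip + 1, max(take, skip)
-        return max(take, skip)
-
-    print(max_no_consecutive(57))  # 29, so 31 houses force a consecutive pair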
- -REPLY [3 votes]: Not sure if this is the answer you were looking for, but it's one of those "so painfully simple that you likely never considered it": -North Street ends in a cul-de-sac. -But more to the point... -If houses are numbered 1 through 57 and NONE of them are consecutively numbered, there can be at most 29 of them: pairing the numbers into the 29 boxes $\{1,2\},\{3,4\},\dots,\{55,56\},\{57\}$, a set with no two consecutive numbers contains at most one number from each box, and the 29 odd numbers from 1 to 57 show that this bound is attained. -The problem cites 31 houses in a range which can only accommodate 29 houses without any adjacent houses. $31-29 = 2$, therefore at least two of the houses must be adjacent to others in order to fit within the parameters. -Even if the question cited 30 houses, at least two would have adjacent numbers. For example, if the 30th house was numbered 2, houses 1, 2, and 3 would all have adjacent numbers.<|endoftext|> -TITLE: What is the advantage of the Fourier Transform over the Hartley Transform? -QUESTION [5 upvotes]: The Hartley transform is defined as -$$ -H(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty -f(t) \, \mbox{cas}(\omega t) \mathrm{d}t, -$$ -with $\mbox{cas}(\omega t) = \cos(\omega t) + \sin(\omega t)$. -The Fourier transform on the other hand is defined very similarly as -$$ -F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty -f(t) \, \mbox{exp}(i \omega t) \mathrm{d}t, -$$ -with $\mbox{exp}(i \omega t) = \cos(\omega t) + i \sin(\omega t)$. -But although the Fourier transform requires complex numbers it is much more widespread than the Hartley transform. Why is that? Are there any properties that make the Fourier transformation much more useful than the Hartley transformation? Or what is the advantage of the Fourier transformation over the Hartley transformation? - -REPLY [2 votes]: Homomorphism property -A big advantage of the kernel $e^{i\omega t}$ over $\operatorname{cas}(\omega t)$ is that the former is a homomorphism of the group $(\mathbb{R},+)$ into the multiplicative group of unimodular complex numbers: $e^{i\omega (t+s)} = e^{i\omega t}e^{i\omega s}$. This identity leads to - -Shift formula: the transform of $f(t-\tau)$ is $e^{i\omega\tau}$ times the transform of $f$ -Convolution formula: the transform of convolution is the product of transforms. - -Both of those can be expressed in terms of the Hartley transform, but in a messy way: essentially one ends up recovering the Fourier transform, applying the simple formula for the Fourier transform, then going back and ending up with an unintuitive sum of several terms. -Amplitude-phase distinction -Avoiding complex numbers is a questionable benefit. Harmonics have amplitude and phase. Complex numbers have magnitude and argument, which work perfectly for representing amplitude and phase. Thus, the interpretation of the complex number $F(\omega)$ is pretty straightforward: it tells us the amplitude and phase at frequency $\omega$. -In contrast, $H(\omega)$ tells us neither: it's some combination of amplitude and phase from which neither can be recovered without also looking at $H(-\omega)$. So we end up combining $H(\omega)$ and $H(-\omega)$ over and over again, which is not any easier than working with real and imaginary parts of $F$, and is less transparent.<|endoftext|> -TITLE: Find $\lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2}$. Is my approach correct?
-QUESTION [6 upvotes]: Find: -$$ -L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2} -$$ -My approach: -Because of the fact that the above limit is evaluated as $\frac{0}{0}$, we might want to try the De L' Hospital rule, but that would lead to a more complex limit which is also of the form $\frac{0}{0}$. -What I tried is: -$$ -L = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right) -$$ -Then, if the limits -$$ -L_1 = \lim_{x\to0}\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{1-\frac{\sin(x)}{x}}, -$$ -$$ -L_2 = \lim_{x\to0}\frac{1}{x^2}\left(1-\frac{\sin(x)}{x}\right) -$$ -exist, then $L=L_1L_2$. -For the first one, by making the substitution $u=1-\frac{\sin(x)}{x}$, we have -$$ -L_1 = \lim_{u\to u_0}\frac{\sin(u)}{u}, -$$ -where -$$ -u_0 = \lim_{x\to0}\left(1-\frac{\sin(x)}{x}\right)=0. -$$ -Consequently, -$$ -L_1 = \lim_{u\to0}\frac{\sin(u)}{u}=1. -$$ -Moreover, for the second limit, we apply the De L' Hospital rule twice and we find $L_2=\frac{1}{6}$. -Finally, $L=1\cdot\frac{1}{6}=\frac{1}{6}$. -Is this correct? - -REPLY [2 votes]: By L' Hospital anyway: -$$\frac{\sin\left(1-\frac{\sin(x)}{x}\right)}{x^2}$$ yields -$$\cos\left(1-\frac{\sin(x)}x\right)\frac{\sin(x)-x\cos(x)}{2x^3}.$$ -The first factor has limit $1$ and can be ignored. -Then with L'Hospital again: -$$\frac{x\sin(x)}{6x^2},$$ -which clearly tends to $\dfrac16$.<|endoftext|> -TITLE: (Proof) If $f$ and $g$ are continuous, then $\max\{f(x),g(x)\}$ is continuous -QUESTION [6 upvotes]: Consider the continuous functions $f,g:\mathbb{R}\rightarrow\mathbb{R}$. -Show that $F:\mathbb{R}\rightarrow\mathbb{R}$ with $x\mapsto \max\{f(x),g(x)\}$ is continuous using the $\epsilon - \delta$ definition of continuity. -I know there must be four cases. -If $f(x)\leq g(x)$ and $f(x_0)\leq g(x_0)$ or -if $g(x)\leq f(x)$ and $g(x_0)\leq f(x_0)$ it is easy. -However, assuming $f(x_0)\neq g(x_0)$, what if -$g(x)\leq f(x)$ and $f(x_0)\leq g(x_0)$ or -$f(x)\leq g(x)$ and $g(x_0)\leq f(x_0)$? -For example: -$|f(x)-g(x_0)|$... how do I get from here to $|x-x_0|$? - -REPLY [4 votes]: Trying to stick as close as possible to the definition of the maximum. -Take a point $x_0 \in \mathbb R$. You want to prove that $h(x)=\max (f(x),g(x))$ is continuous at $x_0$. -In fact you only have 3 cases: - -$f(x_0) < g(x_0)$ -$f(x_0)=g(x_0)$ -$f(x_0) > g(x_0)$ - -$x$ will come into play afterwards. -Case 1. -As $g(x_0)-f(x_0) > 0$, you can find $\delta$ (the smaller of the two provided by continuity of $f$ and $g$) such that for $\vert x - x_0 \vert < \delta$ you have both $\vert g(x) - g(x_0) \vert < \frac{g(x_0)-f(x_0)}{2}$ and $\vert f(x) - f(x_0) \vert < \frac{g(x_0)-f(x_0)}{2}$. Hence for $x \in (x_0-\delta,x_0+\delta)$ you have $f(x) < g(x)$, so $h(x)=g(x)$ and $h$ is therefore continuous at $x_0$. -Case 3. -Is similar to case 1. -Case 2. $f(x_0)=g(x_0)$ -For $\epsilon > 0$, you can find $\delta_f$ such that $\vert f(x)-f(x_0) \vert < \epsilon$ for $\vert x - x_0 \vert < \delta_f$ and $\delta_g$ such that $\vert g(x)-g(x_0) \vert < \epsilon$ for $\vert x - x_0 \vert < \delta_g$. -Now for $\vert x- x_0 \vert < \inf(\delta_f,\delta_g)$, you have $f(x),g(x) \in (f(x_0)-\epsilon,f(x_0)+\epsilon)=(g(x_0)-\epsilon,g(x_0)+\epsilon)$ hence $h(x)=\max(f(x),g(x)) \in (f(x_0)-\epsilon,f(x_0)+\epsilon)$ which allows us to conclude.<|endoftext|> -TITLE: Universal cover of $T^2 \vee \mathbb{R}P^2 $ -QUESTION [9 upvotes]: What is the universal cover of the wedge sum of the torus and the real projective plane?
-I know from Hatcher's Algebraic Topology that the universal cover of $\mathbb{R}P^2 \vee \mathbb{R}P^2 $ is an infinite number of spheres each one of them attached to two other spheres. I tried to mimic this construction somehow for this situation "gluing" together the universal covers of the torus and the projective plane and getting something like $\mathbb{R}^2$ with an infinite number of spheres attached but this doesn't seem to work. -How can I calculate the universal cover of this space? - -REPLY [17 votes]: $\widetilde{\Bbb{RP}^2 \vee T^2}$ is going to look like a tree with vertices corresponding to either $S^2$ or $\Bbb R^2$ and edges corresponding to one-point union of the two spaces corresponding to the vertices it joins. -The tree is a colored tree, with vertices colored by blue and red, each blue vertex adjacent only to red vertices and each red vertex adjacent only to blue vertices. The neighborhood of a red vertex consists of $\Bbb Z/2$-many vertices and the neighborhood of a blue vertex consists of $\Bbb Z^2$-many vertices. This is because the wedge point $x$ in $\Bbb{RP}^2 \vee T^2$ lifts to $\Bbb Z/2$-many points in each $S^2$, and $\Bbb Z^2$-many points in each $\Bbb R^2$. Replacing each red vertex by an $S^2$, each blue vertex by an $\Bbb R^2$ and each edge by one-point union of the two vertex spaces gives me the desired universal cover. -Here is a picture of the part of the graph. While there are infinitely many red vertices adjacent to blue vertices, only finitely many are drawn for obvious reasons and the existence of the rest is indicated by dots. As we see, the graph is a tree with vertex set partitioned into two colors and valence of blue vertices is $|\Bbb Z^2|$ and valence of red vertices is $2$. - -Thus, ultimately, the space $\widetilde{\Bbb{RP}^2 \vee T^2}$ is iterative one-point-union of infinitely many $S^2$'s and $\Bbb{R}^2$'s, with each $S^2$ wedged with two $\Bbb R^2$'s, and each $\Bbb R^2$ wedged with $\Bbb Z^2$-many $S^2$'s. -$\text{Explanation}$: To see this, note that $\Bbb R^2$ is the universal cover of $T^2$, hence $\Bbb R^2 \bigvee_{\Bbb Z^2} \Bbb{RP}^2$ ($\Bbb R^2$ with a copy of the projective plane attached at each integer lattice point) covers $\Bbb{RP}^2 \vee T^2$. Now $S^2$ is the universal cover of $\Bbb{RP}^2$, so you can similarly "unwrap" one of the projective planes from $\Bbb Z^2$-many of them to get the cover $\Bbb R^2 \bigvee_{\Bbb Z^2 - (0, 0)} \Bbb{RP}^2 \vee (S^2 \vee \Bbb R^2 \bigvee_{\Bbb Z^2 - (0, 0)} \Bbb{RP}^2)$. Covering all of the wedged $\Bbb{RP}^2$'s likewise, one will end up with the cover $\Bbb{R}^2 \bigvee_{\Bbb Z^2} (S^2 \vee \Bbb R^2 \bigvee_{\Bbb Z^2} \Bbb{RP}^2)$. "Unwrapping" iteratively in this process will give you a tree-like structure, entirely consisting of $S^2$ and $\Bbb R^2$, hence simply connected and thus a universal cover of your space. - -$\text{Remark}$: The reason that you get a much nicer thing for $\Bbb{RP}^2 \vee \Bbb{RP}^2$ is that your tree consists of vertices corresponding only to $S^2$ and the wedge point lifts only to 2 points in each $S^2$. This implies for every $S^2$-vertex, there are only two $S^2$-vertices adjacent to it in the graph, so globally it looks like an infinite string of $S^2$'s, each two of them touching at a point. Note that the graph is still a tree, with each vertex being of valence $2$.
- -The presence of a space (i.e., $T^2$) with infinite fundamental group ($\pi_1(T^2) \cong \Bbb Z^2$) makes things worse.<|endoftext|> -TITLE: Math induction problem with large numbers -QUESTION [7 upvotes]: I am trying to figure out how to prove $17^{200} - 1$ is a multiple of $10$. I am talking simple algebra stuff once everything is set in place. -I have to use mathematical induction. -I figure I need to split $17^{200}$ into something like $(17^{40})^5 - 1$ and have it as $n = 17^{40}$ and $n^5 - 1$. -I just don't know if that's a good way to start. - -REPLY [8 votes]: Consider a number with a $7$ in its units place. As we take powers of it, the units digit proceeds: -$$7, 9, 3, 1, 7, 9, 3, 1, \ldots$$ -Notice that when raised to a power that is a multiple of $4$, such a number ends up with a $1$ in its units place. Since $17$ has a $7$ in its units place, and since $200$ is a multiple of $4$, we reason that the number $17^{200}$ must have a $1$ in its units place. Thus, $17^{200} - 1$ has a $0$ in its units place, i.e., is a multiple of $10$ as desired. QED.<|endoftext|> -TITLE: Number Theory: Reordering $c_1,\dotsc,c_{10}$ so that $(2k-1)\mid(a_k-b_k)$ -QUESTION [6 upvotes]: I have this homework problem that I'm confused on how to do: -Given any distinct $z_1,\dotsc,z_{10}\in\mathbb{Z}$, show that one can reorder these as $s_5,s_4,\dots,s_1,t_5,\dotsc,t_1$ so that $(2k-1)\mid(s_k-t_k)$; thus $9\mid(s_5-t_5),7\mid(s_4-t_4),$ etc. -I've tried writing $z_i=q_i(2i-1)+r_i$ and comparing the remainders of $s_i$ and $t_i$ modulo $2i-1$, but I haven't been able to solve the problem this way. - -REPLY [3 votes]: Let $A=\{z_1,z_2,\ldots,z_{10}\}.$ Since you have ten integers in this set and there are exactly nine remainders modulo $9,$ the pigeonhole principle implies that there are $i_0\neq j_0$ such that $z_{i_0}\equiv z_{j_0}\pmod9.$ Put $s_5=z_{i_0}$ and $t_5=z_{j_0}.$ Now consider the set $A\setminus\{s_5,t_5\}.$ This set contains exactly eight integers. Then as there are exactly seven remainders modulo $7,$ again by the pigeonhole principle there exist $i_1\neq j_1$ such that $z_{i_1}\equiv z_{j_1}\pmod7$ and of course $i_1,j_1\neq i_0,j_0.$ Now put $s_4=z_{i_1}$ and $t_4=z_{j_1}.$ Continue in this way and you'll get the desired result.<|endoftext|> -TITLE: Smallest symmetric group with subgroup Q -QUESTION [5 upvotes]: What is the smallest $n$ such that the quaternion group is a subgroup of $S_n$? - -REPLY [3 votes]: Equivalently, we want to know the smallest $n$ such that $G = Q_8$ acts faithfully on a set of size $n$. Any action $X$ of $G$ decomposes as a disjoint union of transitive actions -$$X \cong \sum_i G/H_i.$$ -Now, $G$ has the strange property that all of its subgroups are normal, so the kernel of the action of $G$ on $G/H$ is $H$. The action of $G$ on $X$ therefore has kernel $\cap_i H_i$, so the game here is to find subgroups of $G$ whose intersection is trivial such that the sum of their indices in $G$ is minimal. -$G$ has another strange property, which is that all of its nontrivial subgroups contain its center $\pm 1$. Hence the intersection $\cap_i H_i$ can't be trivial unless some $H_i = 1$. This gives $n = 8$.<|endoftext|> -TITLE: Is a single point a closed interval? -QUESTION [6 upvotes]: For example, is $\{0\}$ considered a closed interval? Why or why not? Doesn't it contain its only limit point, $0$? - -REPLY [8 votes]: Intervals are by definition connected subsets of $\Bbb{R}$. Singletons are connected and closed.
Therefore they qualify as closed intervals.<|endoftext|> -TITLE: Why is Completeness not a Topological Property? -QUESTION [31 upvotes]: I am trying to answer the question: -Show why completeness is not a topological property. -My answer: $\mathbb{R}$ and the set $(0,1)$ are homeomorphic, but $\mathbb{R}$ is complete while $(0,1)$ is not. -My question to you all: Does this answer the question? I feel like I am not quite seeing what is going on with completeness and why it is not a topological property. Can someone give me another example? - -REPLY [21 votes]: As the technical component of your question has already been addressed, I would like to tackle the intuitive component, implicit in: - -I feel like I am not quite seeing what is going on with completeness and why it is not a topological property. - -The source of this uncertainty is, I suspect, the following: since topology adds sufficient structure to a set to cater for convergence (in particular, a topology determines whether or not any given sequence converges), surely it should be sufficient to accommodate completeness, which has convergence as its whole concern. -If I'm right about the source of your doubts, here is the answer... -Intuitively, a complete space is one in which all of the sequences that are trying to converge actually do converge. It turns out that while the actually do part can be catered for by the topological structure, the trying to part can't be. Why? -Well, let's take as our example the sequence $$\left(\frac{1}{n}\right)_{n\in\mathbb{N}\setminus\{1\}}:=\frac{1}{2},\frac{1}{3},\frac{1}{4},...\subset(0,1)$$ -This sequence certainly seems to be 'trying to' converge to $0$, but is it really? The reason our eyes say 'yes' is that our eyes add a metrical structure that the topology just doesn't 'see'. The topology doesn't 'see' these things as getting closer and closer to $0$ because it is distance-agnostic. -To visualize this, imagine the interval to be made of rubber. By pinching any two consecutive members of the sequence $\frac{1}{n}$ and $\frac{1}{n+1}$ and stretching them apart, you could make every separation one inch if you wanted, and you wouldn't destroy the topological structure of the thing. The result would be a sequence that no longer looks like it is trying to converge at all. -In short, to introduce the notion of 'trying to converge' you need to add metrical structure on top of the topology and there are infinitely many ways to do this. Some choices will lead to completeness, while others will lead to incompleteness.<|endoftext|> -TITLE: What is the geometric implication of subtracting two Matrices representing linear transformations? -QUESTION [5 upvotes]: If we have two linear transformations denoted by matrices $A, B$ operating on an arbitrary vector $v \in \mathbb R^n$, then how do $Av$ and $Bv$ differ geometrically from $(A-B)v$? Does the difference inherit properties from the two linear transformations, or is there no pattern at all? -The reason why I ask pertains to eigenvectors, for they are sent to the null vector when operated upon by $\lambda I - A$ and I am trying to geometrically understand why. If you can explain this second question and not the first one, that would also be fine.
- -REPLY [3 votes]: One motivation for the concept of eigenvectors and eigenvalues is the following question: “Does a given linear transformation map some line through the origin onto itself?” If there is such a line for the linear transformation $A$, then for every vector $\mathbf v$ on that line we must have $A\mathbf v=\lambda \mathbf v$, where $\lambda$ is a fixed scalar. So, if $\mathbf v$ is an eigenvector corresponding to the eigenvalue $\lambda$, then $(A-\lambda I)\mathbf v=0$ simply says that $A$ maps $\mathbf v$ to another vector on the line through the origin that contains $\mathbf v$ itself.<|endoftext|> -TITLE: Stuck on GEB chapter 9 - is b a MU number? is b a TNT number? -QUESTION [5 upvotes]: I'm reading through Gödel, Escher, Bach, and I found myself stuck at chapter 9. I've been rereading it several times already, but I must be missing something. To clarify my background, I'm a computer scientist, not a mathematician. -On page 273, D. R. Hofstadter states that - -Could it be, therefore, that the means with which to answer any - question about any formal system lies within just a single formal - system-TNT? It seems plausible. Take, for instance, this question: -Is MU a theorem of the MIU-system? -Finding the answer is equivalent to determining whether 30 is a MIU - number or not. Because it is a statement of number theory, we should - expect that, with some hard work, we could figure out how to translate - the sentence "30 is a MIU-number" into TNT-notation, in somewhat the - same way as we figured out how to translate other number-theoretical - sentences into TNT-notation. - -I get how the "statement of number theory" - -b is a power of 2 - -can be translated to TNT. I can imagine that the statement - -b is a power of 10 - -can be translated to TNT, even though it is very hard to do. -I'm beginning to lose it with translating the original statement - -30 is a MIU number - -to TNT. All right, still, maybe we can somehow translate the ideas from Mumon Shows Us How to Solve the MU-puzzle, p. 268 into TNT? Maybe... but from the paragraph following this question it seems that Hofstadter takes the even more difficult road, first translating - -b is a MIU number - -into TNT, and then substituting SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS0 for b! He states that the task of translating b is a MIU number into TNT is difficult, but how should I believe him that it is at all possible (other than having some smart guy invent the translation)? Is it somehow obvious that, even though the translation is difficult, it must exist? -Hofstadter says that Because it is a statement of number theory, we should expect that, with some hard work, we could figure out how to translate the sentence "30 is a MIU-number" into TNT-notation. I must be missing something - I know that some number theory statements can be translated to TNT, but I'm not expecting that this particular one can be translated to TNT, and I'm not expecting at all that all number theory statements can be translated to TNT. I'm not even sure - is "b is a MIU number" really a statement of number theory? -On the next page, 274, Hofstadter states that - -... This state of affairs comes about because of two facts: -Fact 1. Statements such as "MU is a theorem" can be coded into number - theory via Gödel’s isomorphism. -Fact 2. Statements of number theory can be translated into TNT. -It could be said that MUMON is, by Fact 1, a coded message, where the - symbols of the code are, by Fact 2, just symbols of TNT.
- -I'm not sure that I understand these facts correctly. For Fact 1, I imagine something like this: MU is a theorem translates into 30 is a MIU-number. Is this understanding correct? -Fact 2 is the very one that isn't at all clear to me. Does that (amongst other things) mean that all statements of the form -b is a SOME_FORMAL_SYSTEM-number -where SOME_FORMAL_SYSTEM = pq, MIU, TNT, whatever... can be translated into TNT? -These facts seem to be very important to the understanding of the chapter... Hofstadter uses these as a springboard for turning the Gödelization onto TNT itself on p. 278 - -α is a TNT-number -...Now it occurs to us that this new number-theoretical! predicate is expressible by some string of TNT with one free variable, say a. - -Sorry, it does not at all occur to me. Being a TNT-number means being a number that is defined recursively from a set of highly complex arithmetic rules. I cannot even imagine the shape of the resulting TNT formula. Would the TNT form also use recursion in some way? Or is there some trick to "unroll" the recursion? I'm completely lost here. -I feel that without answering these questions, I cannot continue to what's next and at least pretend that I understand what's going on. I feel kinda cheated - reading through 280 pages, and then getting stuck on what was probably supposed to be the climax of the first part of the book, not being able to comprehend. Any help will be really appreciated. - -UPDATE 1 -I'm slowly digging through Mauro ALLEGRANZA's answer. -Comment 1 seems to be easily understandable to me. -Now I'm pondering Comments 2 and 3. This is my current understanding of what's going on: -Having the arithmetical question from Comment 1 - -is c a MIU-producible number? - -We can translate: - -this question (probably inseparably) along with a set of the arithmetic rules for generating MIU-producible numbers -into a TNT predicate with one free variable. - -More light on the method is shed in Comment 3, but basically it is just an immensely more complex variation of translating e. g. the arithmetic question - -is c an even number? - -into TNT (∃a:(SS0.a)=c). -After that, we'll substitute all the occurrences of the free variable c in the TNT predicate by the numeral SSS...(30x)...0 and we'll obtain a TNT formula (no more free variables). -This TNT formula is isomorphically tied with the original question - -is MU a theorem of the MIU-system? - -in the following way: - -If the formula is a TNT theorem (a simple example of such a formula: (SSS0+0)=(0+SSS0)), it means that 30 is a MIU-producible number, and therefore MU is a MIU theorem. -If the TNT formula with a prepended ~ is a theorem, it means that 30 is not a MIU-producible number, and therefore MU is not a MIU theorem - -Is the above correct? I hope so, so I can continue on my journey through the last comment and more GEB... - -REPLY [7 votes]: We can try with a step-by-step approach ... -Comment 1 -The first basic concept is that of a formal system, i.e. a language (usually based on a finite alphabet of symbols), some rules for producing "well-formed" expressions (i.e. finite sequences of symbols), some "initial" expressions (axioms) and some rules for manipulating expressions, i.e. for generating new expressions from the set of already available ones. -$\mathsf {TNT}$ as well as $\mathsf {MIU}$ are formal systems. -You can see Gödel-Numbering the $\mathsf {MIU}$-System [ I'll not refer to page numbers, because my edition has a different paging ...] for the "encoding" of the system with "numerals" (i.e.
symbols denoting numbers). -You can imagine writing a piece of code implementing the $\mathsf {MIU}$-system rules (a sketch of such code is given after Comment 3 below) : running the code you can generate "theorems", i.e. expressions; after encoding, those expressions will look like : $31, 311, 30110, \ldots$. -But you can ask yourself about some "feature" of the system, like the silly one : is the $\mathsf {MIU}$-system able to generate the expression $20$ ? Of course not, because $2$ is not used in the coding of the alphabet of the system, and no rule of the system allows us to "add" (or introduce) the symbol $2$. -This is the gist of the question : - -is $\mathsf {MU}$ a theorem of the $\mathsf {MIU}$-system? - -i.e. are the rules of the system "able" to produce the expression $\mathsf {MU}$ ? which is the same as, given the encoding : - -is $30$ a $\mathsf {MIU}$ producible number ? - - -Comment 2 -The second step is into Seeing Things Both Typographically and Arithmetically to "translate" the rules of the formal system $\mathsf {MIU}$ into rules for manipulating numbers. -Having done this [see Arithmetical Rule 1a, ... as well as the (only) axiom : We can make $31$] we can "play" a double game : with numbers (i.e. Arithmetically) and with symbols denoting numbers : the numerals (i.e. Typographically) : - -Typographical rules for manipulating numerals are actually arithmetical rules for operating on numbers. - -Thus, our original question : - -is $\mathsf {MU}$ a theorem of the $\mathsf {MIU}$-system? - -that we have translated, via encoding, into : - -is $30$ a $\mathsf {MIU}$ producible number ? - -can now be rephrased into the language of $\mathsf {TNT}$ (because the question "speaks of" numbers and numbers are denoted by numerals, and we are asking for : "is $SS \ldots 0$ [with $30$ leading $S$'s] a string of $\mathsf {MIU}$ ?"). -Thus : - -$\mathsf {TNT}$ is now capable of speaking "in code" about the $\mathsf {MIU}$-system. - - -Comment 3 -The next step is one of Gödel's fundamental intuitions : see Gödel Numbering and The Diagonalization Lemma. -If you want to grasp the details, there is no other way than to take a look at some mathematical logic textbook and study it. -But the idea is (nowadays) fundamentally simple; you are a computer scientist, and thus you must be familiar with programming, assembler and machine code. -With "low level" languages we are able to specify data and instructions for a machine; at the ground level, we have only sequences of bits : $0,1$ that codify not only numbers, but every kind of data. In addition, we also encode instructions, i.e. rules, with bits. -This is the gist of Gödel's idea of arithmetization : with the seemingly limited resources of arithmetic, we can "codify" not only numbers and their properties, but also expressions and their properties, provided that they are "managed" according to a formal system (see above), i.e. according to a finite set of "algorithmic" rules. -Thus, we can define suitable arithmetic formulae encoding the properties of the formal system, like: being a (well-formed) expression, being an axiom and - finally - being a theorem of the formal system. -This allows Gödel (and Hofstadter) to express (again : via encoding) the question : - -"is $\mathsf {MU}$ a theorem ?" - -into "pure" arithmetical terms; the result will be something (very very complex in practice, but ideally simple) like the question : -"is $35765492$ divisible by $7$ ?"
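-To make Comments 1-3 concrete, here is a small Python sketch of my own (an illustration only - this is not code from GEB): it implements the four typographical MIU rules, generates all theorems up to a length bound, and reads each theorem as a number via the encoding M→3, I→1, U→0, so that generating theorems is literally generating MIU-producible numbers.
-def miu_successors(s):
-    out = set()
-    if s.endswith("I"):              # rule 1: xI -> xIU
-        out.add(s + "U")
-    if s.startswith("M"):            # rule 2: Mx -> Mxx
-        out.add("M" + 2 * s[1:])
-    for i in range(len(s) - 2):      # rule 3: xIIIy -> xUy
-        if s[i:i+3] == "III":
-            out.add(s[:i] + "U" + s[i+3:])
-    for i in range(len(s) - 1):      # rule 4: xUUy -> xy
-        if s[i:i+2] == "UU":
-            out.add(s[:i] + s[i+2:])
-    return out
-
-def theorems(max_len=8):
-    """All MIU-theorems of length <= max_len, starting from the axiom MI."""
-    found, frontier = {"MI"}, {"MI"}
-    while frontier:
-        frontier = {t for s in frontier for t in miu_successors(s)
-                    if len(t) <= max_len} - found
-        found |= frontier
-    return found
-
-encode = lambda s: int(s.translate(str.maketrans("MIU", "310")))
-numbers = sorted(map(encode, theorems()))
-print(numbers[:6])    # smallest MIU-producible numbers, e.g. 31, 310, 311, ...
-print(30 in numbers)  # is MU (encoded as 30) produced? False
-This bounded search never produces $30$, in line with the fact that $\mathsf{MU}$ is not a theorem of the $\mathsf{MIU}$-system.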
- -Comment 4 -Having said that, how is it possible that the property : - -"$α$ is a $\mathsf {TNT}$-number" is expressible as a number-theoretical predicate by some string of $\mathsf {TNT}$ with one free variable ? - -This fundamental result is obtained through a long exercise into encoding : i.e. a tour de force in machine code. -You can see this post for a concrete little exercise into encoding (alas! with a different encoding schema, i.e. a different "assembler language"). -In it we can find how to "encode" the number-theoretical predicate $Even(x)$, i.e. a predicate (with one free variable) such that, for any number $n$, it outputs $TRUE$ if $n$ is even and outputs $FALSE$ if $n$ is odd. -In first-order arithmetic $Even(x)$ is expressed as : - -$\exists y (x = y \cdot 2)$; - -thus, we have to "encode" : $(\exists v_2 v_1 = v_2 \cdot SS0)$ i.e. : - - -$(\lnot \forall v_2 (\lnot = v_1 \cdot v_2 SS0))$. - - -Its encoding will be : $2^2 \cdot 3^6 \cdot 5^1 \cdot 7^{14} \cdot 11^2 \cdot 13^6 \cdot 17^{10} \cdot 19^{14} \cdot 23^{12} \cdot 29^{11} \cdot 31^{14} \cdot 37^5 \cdot 41^5 \cdot 43^3 \cdot 47^4 \cdot 53^4$. -Having computed it, we have the so-called Gödel number encoding the formula expressing $Even(x)$. -The same will happen with the "$\mathsf {TNT}$-theoretical" predicate $\mathsf {TNT}_{num}(x)$, i.e. a predicate (with one free variable) such that, for any number $\alpha$, outputs $TRUE$ if $\alpha$ is a $\mathsf {TNT}$-number and outputs $FALSE$ if $\alpha$ is not a $\mathsf {TNT}$-number.<|endoftext|> -TITLE: Fun Q6: Side length of the pentagon in a five sided star? -QUESTION [5 upvotes]: Consider a regular pentagon of side length $a$. If you form a 5-sided star using the vertices of the pentagon, then you'll get a pentagon inside that star. What is the side length of that pentagon? - -In general, for an n-sided star, what is the side length of the n-sided regular polygon in the star? Take the distance between two adjacent vertices to be $a$. - -REPLY [5 votes]: Let $x$ be the side length of the small $n$-sided regular polygon in the star & $a$ be the distance between two adjacent vertices, then the angle of a spike of the regular star polygon is given as -$$\alpha=\frac{\pi}{\text{number of vertices (points) in the star}}=\frac{\pi}{n}$$ -Now, draw a perpendicular from one vertex of the star to the side of the small regular polygon to obtain a right triangle. -Using the geometry of the right triangle, the length of the perpendicular drawn to the side of the small regular polygon can be obtained -$$=\frac{a}{2}\csc\frac{\pi}{n}-\frac{x}{2}\cot\frac{\pi}{n}$$ - Hence, in the right triangle, one should have -$$\tan\frac{\pi}{2n}=\frac{\frac{x}{2}}{\frac{a}{2}\csc\frac{\pi}{n}-\frac{x}{2}\cot\frac{\pi}{n}}$$ -$$x=\frac{a\tan\frac{\pi}{2n}\csc\frac{\pi}{n}}{1+\tan\frac{\pi}{2n}\cot\frac{\pi}{n}}$$ -$$x=\frac{a\sin\frac{\pi}{2n}}{\sin\frac{\pi}{n}\cos\frac{\pi}{2n}+\cos\frac{\pi}{n}\sin\frac{\pi}{2n}}$$ -$$x=\frac{a\sin\frac{\pi}{2n}}{\sin\left(\frac{\pi}{n}+\frac{\pi}{2n}\right)}$$ -$$\bbox[5pt, border:2.5pt solid #FF0000]{\color{blue}{x=\frac{a\sin\frac{\pi}{2n}}{\sin\frac{3\pi}{2n}}}}$$ -$$\forall \ \ n=2k+1\ \ (k\in N)$$ - Hence for a regular pentagon in the star, setting $n=5$, the side of the regular pentagon is -$$x=\frac{a\sin\frac{\pi}{10}}{\sin\frac{3\pi}{10}}=a\frac{\sin18^\circ}{\cos36^\circ}=a\frac{\frac{\sqrt 5-1}{4}}{\frac{\sqrt 5+1}{4}}=\color{red}{\frac{a}{2}(3-\sqrt 5)}$$ - -Edited details: If the regular star polygon has $2n$ no.
of vertices, which is obtained by placing two congruent $n$-sided regular polygons one on the other in a symmetrically staggered manner (similar to a hexagram), then a generalized formula for calculating the side $x$ of the $2n$-sided regular polygon in the star (having $2n$ vertices, where $a$ is the distance between two adjacent vertices) can be derived as follows (see figure below) - - -The angle of a spike of the regular star polygon is given as -$$\alpha=\text{interior angle of an n-sided regular polygon}=\frac{(n-2)\pi}{n}$$ -Now, draw a perpendicular from one vertex of the star to the side of the small regular polygon to obtain a right triangle. -Using the geometry of the right triangle, the length of the perpendicular drawn to the side of the small regular polygon can be obtained -$$=\frac{a}{2}\csc\frac{\pi}{2n}-\frac{x}{2}\cot\frac{\pi}{2n}$$ - Hence, in the right triangle, one should have -$$\tan\frac{(n-2)\pi}{2n}=\frac{\frac{x}{2}}{\frac{a}{2}\csc\frac{\pi}{2n}-\frac{x}{2}\cot\frac{\pi}{2n}}$$ -$$\cot\frac{\pi}{n}=\frac{x}{a\csc\frac{\pi}{2n}-x\cot\frac{\pi}{2n}}$$ -$$x=\frac{a\csc\frac{\pi}{2n}\cot\frac{\pi}{n}}{1+\cot\frac{\pi}{n}\cot\frac{\pi}{2n}}$$ -$$x=\frac{a\cos\frac{\pi}{n}}{\cos\frac{\pi}{n}\cos\frac{\pi}{2n}+\sin\frac{\pi}{n}\sin\frac{\pi}{2n}}$$ -$$x=\frac{a\cos\frac{\pi}{n}}{\cos\left(\frac{\pi}{n}-\frac{\pi}{2n}\right)}$$ -$$\bbox[5px, border:2px solid #C0A000]{\color{blue}{x=\frac{a\cos\frac{\pi}{n}}{\cos\frac{\pi}{2n}}}}$$ -$$\forall \ \ \ \ n\ge 3\ \ (n\in N)$$ -Hence for a regular hexagon in the hexagram (see the above diagram), setting $2n=6$ or $n=3$, the side of the regular hexagon is -$$x=\frac{a\cos\frac{\pi}{3}}{\cos\frac{\pi}{6}}=a\frac{\frac{1}{2}}{\frac{\sqrt 3}{2}}=\color{red}{\frac{a}{\sqrt3}}$$<|endoftext|> -TITLE: Projecting data onto a vector -QUESTION [8 upvotes]: I have learned that projecting a vector a onto a vector b is done by multiplying the orthogonal projection of a (say $\mathbf{a_o}$) with the unit vector $\mathbf{\hat{b}}$ in the direction of b = - -Orthogonal projection of a onto b = $\mathbf{a_o}$ * $\mathbf{\hat{b}}$ - -However in the context of principal component analysis, I often read that the projection of the data D onto an eigenvector v is simply $\mathbf{v}^{T}$*$D$ (and so we don't take the orthogonal components of the data vectors of D..?) - why is that? -Thanks - -REPLY [9 votes]: For two vectors $a,b \in \mathbb R^n$, the orthogonal projection of $a$ onto the span of $b$ is given by $P_b(a) = \langle a ,b\rangle \frac{b}{\langle b,b\rangle}$, where the brackets denote the scalar product, i.e. $\langle a, b\rangle = a^Tb = b^Ta$. If $b$ has norm $1$, i.e. $\langle b,b\rangle =1,$ this simplifies to $P_b(a) = \langle a ,b\rangle b = (b^Ta)b$. -If you want to plot this vector, the vector part ($b$) tells you in which direction to go and the scalar factor ($\langle a ,b\rangle = (b^Ta)$) tells you how far. If you already know in which direction to go, the only interesting information is the distance. -The data matrix $D$ in your example consists of a lot of vectors like $a$, written down in columns. -So if you just consider a (normed) vector $b$ and look at $b^T D, $ this will be a $1\times n$ row vector (one entry per data column). -What does this vector tell us? The first entry is the scalar factor for the first column, the second for the second, and so on. What's missing is the vector $b$ to multiply it with. -That is: $b^T D$ is the row vector which contains the coordinates of the columns of $D$ with respect to $b$.
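-As a quick illustration of this last point, here is a small numpy sketch of my own (the data is made up for the example): the row vector $b^TD$ holds one coordinate per data column, and multiplying back by $b$ recovers the actual projected vectors.
-import numpy as np
-
-rng = np.random.default_rng(0)
-D = rng.normal(size=(2, 5))        # 5 data points stored as the columns of a 2x5 matrix
-b = np.array([3.0, 4.0]) / 5.0     # a unit vector (norm 1)
-
-coords = b @ D                     # the row vector b^T D: one scalar factor per column
-print(coords.shape)                # (5,)
-
-projections = np.outer(b, coords)  # columns are the projections P_b of the columns of D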
If you want to plot this, you can forget about the actual direction of $b$, as the aim is to consider $b$ as the 'new x' and the orthogonal complement of $b$ as the new 'y'. We don't care about the direction, only the coordinates: If $c$ is orthogonal to $b$ and has norm 1, then we can calculate the coordinates of the columns of $D$ as $c^TD$. If we plot the first entry of $b^TD$ against the first entry of $c^TD$, we will have the data point corresponding to the first column represented in the new coordinate system, and so on for all the other columns of $D$.<|endoftext|> -TITLE: Doubt in solving PDE. -QUESTION [5 upvotes]: Given that -$yu_x+xu_y=xy$ , $x\geqslant0$, $y\geqslant0$ with $u(0,y)=e^{-y^2}$ for $y>0$ , and $u(x,0)=e^{-x^2}$ for $x>0$. -My doubt is: how do I use the initial values in this case? -The answer given is -$$u=\begin{cases} \frac12y^2 + e^{-(x^2-y^2) } & \text{for } x>y\\ \frac12 x^2 + e^{-(y^2-x^2) } & \text{for } x<y \end{cases}$$ - -REPLY: Given that $yu_x+xu_y=xy$ , $x\geqslant0$, $y\geqslant0$ with $u(0,y)=e^{-y^2}$ for $y>0$ , and $u(x,0)=e^{-x^2}$ for $x>0$. -Dividing it by $xy$, we get, -${u_x\over x}$+ ${u_y\over y}$=1 -${dx\over dt} = { 1\over x}$ with $x(0)=0$ -$\implies$${ x^2\over 2}=t$ -${dy\over dt}={1\over y}$ with $y(0)=y_0$ $\implies$ ${ y^2\over 2}=y_0+t$ -$\implies$ ${ y^2\over 2}-t=y_0 $ $\implies$ ${ y^2\over 2}- { x^2\over 2}=y_0$ -${ du\over dt}=1$ with $u(0)=f(y_0)$ $\implies$ $u=t+f(y_0)$ -$\implies$ $u= { x^2\over 2}+f( -{ x^2\over 2}$ +${ y^2\over 2} )$ -Now $u(x,0)={ x^2\over 2}+ f( -{ x^2\over 2})$ -$\implies$ $e^{-x^2}= { x^2\over 2}+ f( -{ x^2\over 2})$ -put $-{ x^2\over 2}=t$ ,we get $e^{2t}+t=f(t)$ -$\implies$ $u={ x^2\over 2} +e^{2(-{ x^2\over 2} + { y^2\over 2})} -{ x^2\over 2}+{ y^2\over 2}$ -$\implies$ $u= { y^2\over 2}+ e^{ -({ x^2 } - { y^2 })} $ -Also $u(0,y)=e^{-y^2}=f( { y^2\over 2}) $ -Put ${y^2\over2}=t$, we get $e^{-2t}= f( t )$ -$\implies$ $u={x^2\over 2}+e^{ -2(-{ x^2\over2 } + { y^2 \over 2})} $ -$\implies$ $u={x^2\over 2}+e^{ ( { x^2 } - { y^2 })} $ -Or $u={x^2\over 2}+e^{ -({ y^2 } - { x^2 })} $ -I still have a doubt. Why are $x>y$ and $y>x$ written in the final answer?<|endoftext|> -TITLE: Finding an algorithm for the given problem -QUESTION [8 upvotes]: Let's presume that we have a PVP fight scene, where 1 or 2 heroes are fighting 3 monsters. -The monsters that the heroes are fighting are the following: - -Skeleton (2 of this monster) - -Health: 1 - Defense: 0 - Attack: 1 - - - - -Death Knight (1 of this monster) - -Health: 4 - Defense: 1 - Attack: 3 - - -Whenever a hero attacks, the monster's defense is subtracted from the damage the hero deals. -The heroes are identical; each can use one of two attacks: an attack that deals 1 damage to every enemy, or an attack that deals 3 damage to a single enemy. -The objective of the heroes is to sustain as little damage as possible from the monsters. -If a single hero faces this threat, the optimal move would be to use its mass attack, to kill the two Skeletons first with one shot, and then kill the Death Knight in the following two turns, this way sustaining a total damage of: (3)+(3)=6. (if the single hero tries to kill the Death Knight first, he needs two turns to do so, so he will sustain (3+1+1)+(1+1)=7 damage) -But if two heroes are present, it would be wiser for the heroes to gang up on the Death Knight first, with two single attacks (which eliminate him), and then either clear out the skeletons with a single mass attack, or with two simple attacks. This way the total damage sustained would be (1+1)=2.
(if they clear the Skeletons with a single mass attack, and the other hero attacks the Death Knight, they would sustain a total of (3)+(3)=6 damage, as they would still need a second turn to kill the Death Knight.) -NOTE: the parentheses are the damage sustained in a turn -How could I find an algorithm that would tell my heroes which attack is more cost-worthy for them to use? I could try brute forcing it, but in larger scenarios (i.e. 4 heroes vs 9 different monsters) it is just too resource-intensive to achieve. -This would be used in a little script, which makes some calculations for me, which will be used to balance a board game that I am in the process of making. If this isn't the correct stack site for my question, please point me in the correct direction. Note: I only need an algorithm, which I should use, no programming advice needed for the completion of the script. -EDIT 1: Based on this example, I could say that the correct move is always the one which kills the most monsters in a single turn, but this isn't correct if we increase the Death Knight's damage, so the correct move must have a score which is calculated using the monsters' attack, and the number of monsters that can be killed in a single turn. Am I right in assuming this? - -REPLY [3 votes]: The algorithm depends on the definition of "best", i.e., some kind of payoff function based on the outcome of the battle. In your example the negative payoff is the number of rounds before all enemies are dead; the good guys/gals try to minimize that function and the enemies try to maximize it (which may not be the same as dealing as much damage as possible). -In the same vein, one "move" consists not of a single player attacking another player, but all players of one party executing one attack each, followed by all remaining players of the opposing party executing one attack each. Because an attack does not alter the state of your companions, the order in which these individual attacks by the same party are executed is immaterial (but you can pre-optimize the strategy by avoiding moves that involve attacking an opponent that just died during the same round). -If there is a single move that kills all the remaining opponents then that is obviously the best move; we could say that it has penalty 1 because it adds 1 to the total number of moves needed to exterminate the enemy. -In general, the price of a move is the maximum, taken over all possible enemy countermoves, of the price of our best response to that countermove. -This recursive definition results in an algorithm that is guaranteed to terminate because every move strictly decreases the total defense of the enemy, which was finite to begin with. It is probably what you refer to as "brute force" and the technical name is minimax for what should by now be obvious reasons. -The most popular implementation of this algorithm contains an optimization called alpha-beta pruning. It is based on the observation that the brute force attack effectively runs through a tree (all possible evolutions of the battle), and it eliminates certain branches of the tree (thus avoiding having to run through them) based on the observation that these branches are guaranteed to give no better results than branches that have already been examined. -The WP article that commenter Michael Medvinsky refers to mentions further improvements.
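-Here is a minimal Python sketch of the recursion described above (my own illustration: the state encoding and move generation are simplified assumptions, and no alpha-beta pruning is included). Note that in the example battle the monsters make no choices - they all simply attack - so the minimax degenerates into a plain minimization with memoization:
-from functools import lru_cache
-from itertools import product
-
-HEROES = 2   # each hero picks 'mass' (1 damage to all) or a target index (3 damage)
-
-def hero_moves(monsters):
-    options = ['mass'] + list(range(len(monsters)))
-    return product(options, repeat=HEROES)
-
-def apply_moves(monsters, moves):
-    hp = [h for (h, d, a) in monsters]
-    for m in moves:
-        for i, (h, d, a) in enumerate(monsters):
-            if m == 'mass':
-                hp[i] -= max(0, 1 - d)   # defense is subtracted from the damage
-            elif m == i:
-                hp[i] -= max(0, 3 - d)
-    return tuple((hp[i],) + monsters[i][1:] for i in range(len(monsters)) if hp[i] > 0)
-
-@lru_cache(maxsize=None)
-def min_damage(monsters):
-    """Least total damage the heroes must sustain before all monsters are dead."""
-    if not monsters:
-        return 0
-    best = float('inf')
-    for moves in hero_moves(monsters):
-        rest = apply_moves(monsters, moves)   # survivors attack after the heroes act
-        best = min(best, sum(a for (h, d, a) in rest) + min_damage(rest))
-    return best
-
-# monsters as (health, defense, attack): two Skeletons and one Death Knight
-print(min_damage(((1, 0, 1), (1, 0, 1), (4, 1, 3))))   # prints 2, as in the example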
-If the remaining total defense of the enemies is large, then there is a heuristic that will give an immediate result, almost always optimal or nearly optimal, without recursion: choose the attacks that inflict the maximum total damage to all enemies combined. Among such moves that inflict equal total damage, privilege the ones that "level" the score by decreasing as much as possible the highest individual defense score of an enemy (so that your later multi-target attacks remain more efficient). -This heuristic can be used to cut off the tree at a certain depth.<|endoftext|> -TITLE: Non-Circular Proof of $\lim_{x \to 0} \frac{\sin x}{x} = 1$ -QUESTION [13 upvotes]: I'm looking for a convincing proof, using first principles, that $$\lim_{x \to 0}\frac{\sin x}{x} = 1$$ (Please use ordinary unit circle definitions of trigonometric functions.) -It occurred to me that the classic proof, which compares three areas, uses the formula ${1\over 2}r^2\theta$ for the area of a circular sector of angle $\theta$, which in turn assumes the area of a circle is $\pi r^2$. But this fact is almost always proven in texts using an integral, which ends up using the derivatives of $\sin$ and $\cos$, and we're back to that limit again. -So I need a non-circular proof that doesn't rely on playing definition games ("let $\sin$ be the following power series..."). The answer to this question is definitely playing definition games. -Sorry for the pun. - -REPLY [5 votes]: I don't see anything circular in comparing areas to get the inequality $\sin x < x < \tan x$ for $0 < x < \pi/2$. However we need to be very cautious in defining the symbols $\sin x, \tan x$ properly for a real number $x$. -The approach based on areas goes like this. Using the concept of definite integrals it can be proven that a sector of a circle has an area. This does not require anything beyond the continuity of the function $\sqrt{1 - x^{2}}$ on the interval $[0, 1]$. In particular, the justification of the area of a circle is not dependent on the definition of trigonometric functions and $\pi$. -Next consider a unit circle with origin $O$ as center and let $A$ be the point $(1, 0)$. Let $P$ be any point on the circle. For our purposes it is sufficient to consider $P$ to be in the first quadrant. Let the area of sector $AOP$ be $y$ so that $y > 0$. Also let $x = 2y$ and then by definition the point $P$ is $(\cos x, \sin x)$. This is the usual definition of trigonometric functions as studied at the age of 15 years or so. -Note that some textbooks base the definition of $\sin x, \cos x$ on the basis of the length of arc $AP$ which is $x$. The definition is equivalent to the one based on areas of sectors, but comparing areas of figures is simpler than comparing the length of arcs (at least in this context). Consider the tangent $AT$ to the unit circle at the point $A$ such that $OPT$ is a line segment. Also let $PB$ be a perpendicular to $OA$ and let $B$ be the foot of this perpendicular. Now it is easy to show that $$\text {area of }\Delta AOP < \text{ area of sector }AOP < \text{ area of }\Delta AOT$$ (because each region is contained in the next). However it is very difficult to compare the length of arc $AP$ with the length of line segments $PB$ and $AT$ (because there is no containment here). -The above inequality leads to $$\sin x < x < \tan x$$ from which we get $\sin x \to 0$ as $x \to 0$ and then $\cos x = \sqrt{1 - \sin^{2}x} \to 1$. Further the inequality is equivalent to $$\cos x < \frac{\sin x}{x} < 1$$ and hence $(\sin x)/x \to 1$ as $x \to 0$.
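-(As a quick numerical sanity check of the squeeze $\cos x < (\sin x)/x < 1$ - my own snippet, not part of the argument - one can tabulate both sides for shrinking $x$:)
-import numpy as np
-
-for x in [1.0, 0.1, 0.01, 0.001]:
-    print(x, np.cos(x), np.sin(x) / x)   # cos(x) < sin(x)/x < 1, both columns -> 1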
- -Update: It appears from OP's comments that the relation between the length of an arc of a circle and the area of the corresponding sector is something which can't be proven without using any analytic properties of circular functions. However this is not the case. -Let $P = (a, b)$ be a point on the unit circle $x^{2} + y^{2} = 1$ and let $A = (1, 0)$. For simplicity let's consider $P$ in the first quadrant so that $a, b$ are positive. Then the length of arc $AP$ is given by $$L = \int_{a}^{1}\sqrt{1 + y'^{2}}\,dx = \int_{a}^{1}\frac{dx}{\sqrt{1 - x^{2}}}$$ The area of the sector $AOP$ is given by $$A = \frac{ab}{2} + \int_{a}^{1}\sqrt{1 - x^{2}}\,dx$$ We need to prove that $L = 2A$. We will do this using the fact that $b = \sqrt{1 - a^{2}}$ and using integration by parts. -We have -\begin{align} -\int\sqrt{1 - x^{2}}\,dx &= x\sqrt{1 - x^{2}} - \int x\cdot\frac{-x}{\sqrt{1 - x^{2}}}\,dx\notag\\ -&= x\sqrt{1 - x^{2}} - \int \frac{1 - x^{2} - 1}{\sqrt{1 - x^{2}}}\,dx\notag\\ -&= x\sqrt{1 - x^{2}} - \int \sqrt{1 - x^{2}}\,dx + \int \frac{1}{\sqrt{1 - x^{2}}}\,dx\notag\\ -\Rightarrow \int\sqrt{1 - x^{2}}\,dx &= \frac{x\sqrt{1 - x^{2}}}{2} + \frac{1}{2}\int \frac{dx}{\sqrt{1 - x^{2}}}\notag\\ -\end{align} -Hence $$\int_{a}^{1}\sqrt{1 - x^{2}}\,dx = - \frac{a\sqrt{1 - a^{2}}}{2} + \frac{1}{2}\int_{a}^{1}\frac{dx}{\sqrt{1 - x^{2}}}$$ or $$\int_{a}^{1}\frac{dx}{\sqrt{1 - x^{2}}} = 2\left(\frac{ab}{2} + \int_{a}^{1}\sqrt{1 - x^{2}}\,dx\right)$$ or $L = 2A$ which was to be proved. -Contrast the above proof of the relation between length and area with the following totally non-rigorous proof. Let the length of arc $AP$ be $L$. Then the angle subtended by it at the center is also $L$ (definition of radian measure). Divide this angle into $n$ parts of measure $L/n$ each and then the area of sector $AOP$ is the sum of the areas of these $n$ sectors. If $n$ is large then the area of each of these $n$ sectors can be approximated by the area of the corresponding triangles and this area is $$\frac{1}{2}\sin (L/n)$$ so that the area of the whole sector $AOP$ is $(n/2)\sin(L/n)$. As $n \to \infty$ this becomes $L/2$ and here we need the analytic property of $\sin x$ namely $(\sin x)/x \to 1$ as $x \to 0$. Therefore, argued this way, area can't be the basis of a proof of this limit. This is perhaps the reason that proofs of the limit formula $(\sin x)/x \to 1$ look circular. -A proper proof can't be done without integrals as I have shown above. Hence the proof that $(\sin x)/x \to 1$ depends upon Riemann integration and the definition of $\sin x, \cos x$ as inverses to the integrals. This is the same as $e^{x}$ being defined as the inverse of the integral of $1/x$. -Also see another answer of mine to a similar question.<|endoftext|> -TITLE: Uncertainty principle density argument -QUESTION [5 upvotes]: I proved the Heisenberg Uncertainty Principle for $f$ in the Schwartz space $ S(\mathbf R)$: -$$ -\int_{\mathbf R} |\xi \hat{f}(\xi)|^2 \,d\xi \int_{\mathbf R} |xf(x)|^2 \,dx \geq \frac{1}{(4\pi)^2} |f|_{L^2(\mathbf R)}^4. -$$ -Now I am having a lot of trouble extending it to $f\in L^2(\mathbf R)$. Of course we have to use density of $S(\mathbf R)$ in $L^2(\mathbf R)$ and use that $xf(x)$, $\xi\hat{f}(\xi) \in L^2(\mathbf R)$ (otherwise the inequality is trivial). But I can't think of a way of approximating everything at the same time. -Any hints or suggestions? -Thanks! - -REPLY [3 votes]: Let's just take $f_n=\varphi_n *f$, with $\varphi_n(x)=n\varphi(nx)$, and $\varphi\in C_0^{\infty}$, $\int\varphi =1$. We're assuming that $f,xf,\xi\widehat{f}\in L^2$.
-Then $\int_{-L}^L |f_n-f|^2\to 0$ by a standard approximation argument, and if we choose $L>0$ large enough, then both $\int_{|x|>L}x^2|f|^2$ and $\int_{|x|>L}x^2|f_n|^2$ are small. So $\|xf_n-xf\|\to 0$. -Similarly, $\xi\widehat{f}=c\widehat{f'}$, and $f'_n=\varphi_n *f'\to f'$ in $L^2$; alternatively, $\xi \widehat{f_n}-\xi\widehat{f}=\xi\widehat{f}(\widehat{\varphi}(\xi/n)-1)$ goes to $0$ in $L^2$ norm by dominated convergence. -(Essentially, what we do here is make use of the fact that smooth functions are dense in Sobolev spaces.)<|endoftext|> -TITLE: Sheafyness and relative chinese remainder theorem -QUESTION [6 upvotes]: The relative Chinese remainder theorem says that for any ring $R$ with two ideals $I,J$ we have an iso $R/(I\cap J)\cong R/I\times_{R/(I+J)}R/J$. -Let's take $R=\Bbbk [x_1,\dots ,x_n]$ for $\Bbbk $ algebraically closed. If $I,J$ are radical, the standard dictionary tells us $R/(I\cap J)$ is the coordinate ring of the variety $\mathbf V(I)\cup \mathbf V(J)$. Furthermore, if $I+J$ is radical then $R/(I+J)$ is the coordinate ring of the intersection $\mathbf V(I)\cap\mathbf V(J)$. Now the elements in the pullback are just pairs of functions which are consistent on the intersection, and the isomorphism tells us we can glue them to get a function defined on the union. -This has a very sheafy feel to it, yet I find the need for $I,J$ to be radical somewhat disconcerting. I don't know any scheme theory, and I'm not sure exactly how to phrase my question except: - -What's the underlying sheaf here and in what context is it most - natural? - -I guess what I'm hoping for is a setting in which every ideal of a ring has some geometric analog, not just radical ones. - -REPLY [2 votes]: I haven't read all of your questions (and answers) on this topic, so I hope I'm not being redundant, but you might take a look at this article by Ernst Kleinert.<|endoftext|> -TITLE: What happens if you repeatedly take the arithmetic mean and geometric mean? -QUESTION [17 upvotes]: Given two positive real numbers, $A$ and $B$, such that $A\leq B$, take the geometric mean, giving $A'$, and the arithmetic mean, giving $B'$. Repeat ad infinitum. My intuition tells me that, since both means give values between the two original numbers, they will converge as the number of repetitions approaches infinity. Is this correct? Is there a simple formula to determine on what value they converge? - -REPLY [19 votes]: Indeed, they converge to what is known as the Arithmetic–geometric mean. Unfortunately there is no simple formula, but calculating it recursively, at least to a certain precision, could be considered "simple". -To see that it converges, let $a_0=A$, $b_0=B$ and -$$a_{n+1}=\sqrt{a_nb_n}$$ -$$b_{n+1}=\frac{a_n+b_n}{2}$$ -So that, by the AM-GM-inequality $$a_n\le a_{n+1} \le b_{n+1}\le b_n$$ for all $n$. And the difference $$b_n-a_n\le\frac{b_0-a_0}{2^n}$$ -converges to zero. The reason this last inequality holds is that the difference clearly shrinks faster than if we instead let $b_n$ be constant, in which case the difference would be divided by two at each step.<|endoftext|> -TITLE: The "pepperoni pizza problem" -QUESTION [94 upvotes]: This problem arose in a different context at work, but I have translated it to pizza. -Suppose you have a circular pizza of radius $R$. Upon this disc, $n$ pepperoni will be distributed completely randomly. All pepperoni have the same radius $r$. -A pepperoni is "free" if it does not overlap any other pepperoni. -You are free to choose $n$.
-Suppose you choose a small $n$. The chance that any given pepperoni is free is very large. But $n$ is small, so the total number of free pepperoni is small. Suppose you choose a large $n$. The chance that any given pepperoni is free is small. But there are a lot of them. -Clearly, for a given $R$ and $r$, there is some optimal $n$ that maximizes the expected number of free pepperoni. How to find this optimum? -Edit: picking the answer -So it looks like leonbloy's answer gives the best approximation in the cases I've looked at: - r/R n* by simulation n_free (sim) (R/2r)^2 - 0.1581 12 4.5 10 - 0.1 29 10.4 25 - 0.01 2550 929.7 2500 - -(There's only a few hundred trials in the r=0.01 sim, so 2550 might not be super accurate.) -So I'm going to pick it for the answer. I'd like to thank everyone for their contributions, this has been a great learning experience. -Here are a few pictures of a simulation for r/R = 0.1581, n=12: - -Edit after three answers posted: -I wrote a little simulation. I'll paste the code below so it can be checked (edit: it's been fixed to correctly pick points randomly on a unit disc). I've looked at three cases so far. First case, r = 0.1581, R = 1, which is roughly p = 0.1 by mzp's notation. At these parameters I got n* = 12 (free pepperoni = 4.52). Arthur's expression did not appear to be maximized here. leonbloy's answer would give 10. I also did r = 0.1, R = 1. I got n* = 29 (free pepperoni = 10.38) in this case. Arthur's expression was not maximized here and leonbloy's answer would give 25. Finally for r = 0.01 I get roughly n*=2400 as shown here: -Here's my (ugly) code, now edited to properly pick random points on a disc: -from __future__ import division -import numpy as np -# the radius of the pizza is fixed at 1 -r = 0.1 # the radius of the pepperoni -n_to_try = [1,5,10,20,25,27,28,29,30,31,32,33,35] # the number of pepperoni -trials = 10000 # the number of trials (each trial randomly places n pepperoni) - -def one_trial(): - # place the pepperoni - pepperoni_coords = [] - for i in range(n): - theta = np.random.rand()*np.pi*2 # a number between 0 and 2*pi - a = np.random.rand() # a number between 0 and 1 - coord_x = np.sqrt(a) * np.cos(theta) # see http://mathworld.wolfram.com/DiskPointPicking.html - coord_y = np.sqrt(a) * np.sin(theta) - pepperoni_coords.append((coord_x, coord_y)) - - # how many pepperoni are free?
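-    # (a pepperoni is free iff its center is at distance >= 2r from every other center)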
- num_free_pepperoni = 0 - for i in range(n): # for each pepperoni - pepperoni_coords_copy = pepperoni_coords[:] # copy the list so the orig is not changed - this_pepperoni = pepperoni_coords_copy.pop(i) - coord_x_1 = this_pepperoni[0] - coord_y_1 = this_pepperoni[1] - this_pepperoni_free = True - for pep in pepperoni_coords_copy: # check it against every other pepperoni - coord_x_2 = pep[0] - coord_y_2 = pep[1] - distance = np.sqrt((coord_x_1 - coord_x_2)**2 + (coord_y_1 - coord_y_2)**2) - if distance < 2*r: - this_pepperoni_free = False - break - if this_pepperoni_free: - num_free_pepperoni += 1 - - return num_free_pepperoni - -for n in n_to_try: - results = [] - for i in range(trials): - results.append(one_trial()) - x = np.average(results) - print "For pizza radius 1, pepperoni radius", r, ", and number of pepperoni", n, ":" - print "Over", trials, "trials, the average number of free pepperoni was", x - print "Arthur's quantity:", x* ((((1-r)/1)**(x-1) - (r/1)) / ((1-r) / 1)) - -REPLY [8 votes]: A First Approximation -Let $p$ be the probability that a pepperoni is not in conflict with one randomly placed pepperoni, and $P$ the probability that a pepperoni is free. -The exact expression for $p$ is the area suitable for another pepperoni over the total area where pepperoni can be placed : -$$ -p = \frac{\pi (R-r)^2 - A}{\pi (R-r)^2} -$$ -Where $A$ is the area of the portion of a circle of radius $2r$ centered at the pepperoni center that is inside the area where pepperoni can be placed. I could not find an analytic formula for $A$ when the pepperoni is close to the border. Instead, I use the approximation that the pepperoni always covers the same area, regardless of where it is placed. -$$A \approx \pi (2r)^2$$ -And we have : -$$ -p = \frac{\pi (R-r)^2 - \pi (2r)^2}{\pi (R-r)^2} = \frac{(R-r)^2 - (2r)^2}{(R-r)^2} -$$ -This approximation is good when the radius of the pizza is large compared to the radius of a pepperoni. -Now, the probability that pepperoni $i$ is free when $n$ pepperoni are placed: -$$P = p^{n-1} $$ -And the expected number of free pepperoni is : -$$E_n = n p^{n-1}$$ -A plot of this function rises to a single maximum and then decays. -To obtain the maximum, we set the derivative equal to 0 : -$$p^{n-1} + np^{n-1} \ln(p) = 0 \implies n = \frac{-1}{ \ln(p)}$$ -The maximum is either the ceiling or the floor of this value. -$$E_{max} = \max\left\{\left\lfloor \frac{-1}{ \ln(p)} \right\rfloor p^{\lfloor \frac{-1}{ \ln(p)} \rfloor - 1}, \left\lceil \frac{-1}{ \ln(p)} \right\rceil p^{\lceil \frac{-1}{ \ln(p)}\rceil - 1} \right\}$$ -Now some numbers : If $R= 30$cm, $r=2$cm, we have $p\approx 0.98$. Then : -$$ E_{max} = \max\{48 \times 0.98^{48-1}, 49 \times 0.98^{49-1}\}\approx \max\{18.1678, 18.1669\} = \boxed{18.1678 }$$ -Edit : A Better Approximation of A -I found a better way to approximate $A$, the average area in which another pepperoni cannot be placed. It is : -$$ -A(r')= \left\{ \begin{array}{cc} -\pi (2r)^2 & \mbox{ if } r' < R-3r \\ -\frac{\pi (2r)^2}{2} & \mbox{ if } R-3r < r' < R-r -\end{array} -\right. -$$ -where $r'$ is the distance between the center of the pizza and the center of the pepperoni. I think this formula deserves a bit of explaining. When the pepperoni is at distance less than $R-3r$ from the center of the pizza, all the points in a $2r$ radius are not suitable for a pepperoni.
If the distance is larger than $R-3r$, some of the points in the circle of $2r$ radius are outside the zone where pepperoni can be placed (the circle of radius $R-r$). When a pepperoni is inside that region, we consider that the area not suitable for another pepperoni is on average half the area of the circle. This corresponds to making the approximation that the border of the pizza is flat. Again, for a large pizza, the error of this approximation becomes negligible. -Now, the average for $A$ becomes : -$$ -\begin{align} -A =& \frac{\int\limits_{0}^{2\pi} \int\limits_{0}^{R-r} A(r') r' dr' d\theta }{\int\limits_{0}^{2\pi} \int\limits_{0}^{R-r} r' dr' d\theta} \\ - =& \frac{2\pi \left[\int\limits_{0}^{R-3r} \pi(2r)^2 r' dr' + \int\limits_{R-3r}^{R-r} \frac{\pi(2r)^2}{2} r' dr'\right]}{\pi (R-r)^2} \\ - =& \frac{2\pi r^2}{(R-r)^2}\left[ (R-3r)^2 + (R-r)^2 \right] -\end{align} -$$ -And the probability $p$ is : -$$ -\begin{align} -p =& \frac{\pi(R-r)^2 - \frac{2\pi r^2}{(R-r)^2}\left[ (R-3r)^2 + (R-r)^2 \right]}{\pi(R-r)^2} \\ - =& 1-\frac{2r^2}{(R-r)^4}\left[ (R-3r)^2 + (R-r)^2 \right] -\end{align} -$$ -And the rest of the equations stay unchanged. -Now, when $r/R = 0.1$, I get a total of 112 pepperoni placed, with an expected value of 31.74. -For $r/R = 0.01$, I get 2449 pepperoni placed, with an expected value of 901.58.<|endoftext|> -TITLE: Non-measurable sets on $\mathbb{N}$ -QUESTION [7 upvotes]: I'm familiar with the "construction" of non-measurable sets on $\mathbb{R}$. But of interest to me is if there is a way to construct a countably additive probability measure $\mu$ on $\mathbb{N}$ such that we can't extend $\mu$ to $2^{\mathbb{N}}$. Apparently there's no way to do it when $\mu$ is defined on all singletons, as for any set $S \subseteq \mathbb{N}$, we have -\begin{align*} -\mu(S) & = \sum_{k \in S} \mu( \{k \}) \\ -& = \sum_{k = 1}^{\infty} \chi_{S}(k)\, \mu( \{k\} ) -\end{align*} -Clearly we have that $\mu(\mathbb{N}) = \sum_{k = 1}^{\infty} \mu ( \{k\}) = 1$, so we know that $\lim_{N \to \infty} \sum_{k = N + 1}^{\infty} \mu( \{ k \} ) = 0$. But then we use this to show that $\mu(S)$ is well-defined, as the sum converges. Let $\epsilon > 0$, and let $N$ be large enough that $\sum_{k = N + 1}^{\infty} \mu( \{k \}) < \epsilon$. -\begin{align*} -\left| \left( \sum_{k \in S} \mu( \{k\}) \right) - \left( \sum_{k = 1}^{N} \chi_{S}(k)\, \mu( \{ k \} ) \right) \right| & = \left| \sum_{k = N + 1}^{\infty} \chi_{S}(k)\, \mu(\{k\}) \right| \\ -& \leq \sum_{k = N + 1}^{\infty} \chi_{S}(k)\, \mu(\{k\}) \\ -& \leq \sum_{k = N + 1}^{\infty} \mu( \{k\} ) \\ -& < \epsilon . -\end{align*} -So is there a way to define a measure on $\mathbb{N}$ that is finite, but cannot be extended to measure the singletons? Is it possible if we drop the assumption that $\mu$ be finite? Can we show instead that any countably additive measure will extend to all of $2^{\mathbb{N}}$? - -REPLY [2 votes]: Let $\sim$ be any equivalence relation on $\mathbb N$. Define a measure on $\mathbb N/\sim$. Then $\mathbb N$ inherits this measure, and the measurable sets are exactly unions of equivalence classes. In particular, if $\sim$ is not the equality relation, then there are non-measurable sets. -For example, we can define $x\sim y$ if $x,y$ have the same prime factors. -But what this really means is that your probability space has indistinguishable events. -As another answer proves, this is essentially the only sort of measure that can be of this kind.
-Given such a measure, define: $x\sim y$ if every measurable set that contains $x$ contains $y.$ -This is an equivalence relation since - -If there is a measurable set $S$ containing $y$ but not $x$ then $\mathbb N\setminus S$ is a measurable set containing $x$ but not $y$. -If $x\sim y$ and $y\sim z$ then any set containing $x$ contains $y$ and hence contains $z$. -$x\sim x$ fairly obviously. - -If $A_x$ is the equivalence class of $x$, then $A_x=\bigcap_{y\not\sim x} A_{x,y}$ is an intersection of measurable sets not containing the elements not equivalent to $x$, and this is a countable intersection. -Such measures can always be extended pretty trivially to $\mathcal P(S)$, by either picking a single element of each set as the "real" element, leaving all others with measure zero, or by divvying up each equivalence class's measure amongst the individual elements in some way. If every measurable set has non-zero measure, then you can ensure that every element of $\mathbb N$ has non-zero measure.<|endoftext|> -TITLE: If we randomly select 25 integers between 1 and 100, how many consecutive integers should we expect? -QUESTION [65 upvotes]: Question: Suppose we have one hundred seats, numbered 1 through 100. We randomly select 25 of these seats. What is the expected number of selected pairs of seats that are consecutive? (To clarify: we would count two consecutive selected seats as a single pair.) -For example, if the selected seats are all consecutive (eg 1-25), then we have 24 consecutive pairs (eg 1&2, 2&3, 3&4, ..., 24&25). The probability of this happening is 76/($_{100}C_{25}$), since there are 76 possible starting seats for such a run. So this contributes $24\cdot 76/(_{100}C_{25})$ to the expected number of consecutive pairs. -Motivation: I teach. Near the end of an exam, when most of the students have left, I notice that there are still many pairs of students next to each other. I want to know if the number that remain should be expected or not. - -REPLY [12 votes]: Henning Makholm has already answered the question about the expected value. But that says very little about how likely or unlikely it would be to get deviations from the expected value - what if you observed there were 20 pairs out of 25 students left, could that happen by chance or is there some other factor at work? To answer questions like this one would need more details about the distribution, like the variance. Unfortunately, the analytic formula for the distribution (if there is one) is likely to be very complicated. So, from a practical perspective it might be interesting to investigate things numerically. That is the purpose of this post. -Here's a histogram of the number of pairs, generated numerically from 10000 trials. - -You can look at this and draw your own conclusions.
I would conclude the following: - -anywhere from 4 to 8 pairs would be totally reasonable, -0 pairs or 13+ pairs would be very strong evidence of there being another factor -anything in between would be unusual, but possible - -Here's the Matlab code used to generate it: -n=100; -k=25; - -num_trials = 1e4; -num_pairs = zeros(num_trials,1); -for ii=1:num_trials - disp(ii) - filled_seats = sort(randperm(n,k)); - gaps = filled_seats(2:end) - filled_seats(1:end-1); - num_pairs(ii) = length(find(gaps == 1)); -end - -histogram(num_pairs) -title(sprintf('histogram of seating pairs (out of %d trials)',num_trials))<|endoftext|> -TITLE: Powering a sum of two easily powered matrices -QUESTION [5 upvotes]: I am currently studying matrices and in order to understand them better I want to know why I can't do certain things in my calculations. This question is just about that. -The task is to calculate $A^n$ if -$$ - A=\begin{bmatrix} - a & b \\ - 0 & c \\ - \end{bmatrix} -$$ -I started off by calculating smaller powers, $A^2$, $A^3$, but I did not recognize the pattern at first. I tried an alternative approach, writing the matrix as a sum of two matrices that will be easier to power. -$ - A=\begin{bmatrix} - a & 0 \\ - 0 & c \\ - \end{bmatrix} $ $ +\begin{bmatrix} - 0 & b \\ - 0 & 0 \\ - \end{bmatrix} -$ -Let's denote these matrices as $C=\begin{bmatrix} - a & 0 \\ - 0 & c \\ - \end{bmatrix} $ and $D=\begin{bmatrix} - 0 & b \\ - 0 & 0 \\ - \end{bmatrix} $ -When we apply the Binomial Theorem, we get: -$$A^n = (C+D)^n=\binom{n}{0}C^n + \binom{n}{1}C^{n-1}D + \binom{n}{2}C^{n-2}D^2 \dots + \binom{n}{n-1}CD^{n-1} + \binom{n}{n}D^n $$ -I tested powering both $C$ and $D$ for smaller powers to see if there is a pattern. As it turns out: -$C^n = \begin{bmatrix} - a^n & 0 \\ - 0 & c^n \\ - \end{bmatrix}$ and -$ D^n = \begin{bmatrix} - 0 & 0 \\ - 0 & 0 \\ - \end{bmatrix}$ -Every matrix multiplied by the zero matrix $O$ is equal to zero, which leaves us with: -$$A^n = C^n $$ -which is not the correct solution to the problem. -What interests me is: which step did I do wrong, and why? I am aware that it would have been easier to recognize the pattern before turning to the Binomial Theorem, but I want to know why this particular method of solving is wrong. - -REPLY [3 votes]: When applying the binomial theorem in this way you are assuming that the two matrices commute. The usual proof of that theorem for real numbers freely interchanges $x$ and $y.$ - -REPLY [3 votes]: The binomial formula is only true for elements that commute, which is not the case of your two matrices.<|endoftext|> -TITLE: Is there a surjective group homomorphism $\operatorname{GL}_{n}(k) \to \operatorname{GL}_{m}(k)$ where $n > m$? -QUESTION [16 upvotes]: Does there exist a field $k$, two positive integers $n > m > 1$, and a surjective group homomorphism $\operatorname{GL}_{n}(k) \to \operatorname{GL}_{m}(k)$? - -Here $k$ can be any field, and $\operatorname{GL}_{n}(k)$ is viewed as an abstract group (as opposed to group scheme or Lie group), and this group homomorphism doesn't have to be "algebraic" or "smooth" in any sense. Note that if $m = 1$ then the determinant map gives a surjective map. - -REPLY [2 votes]: Here is a solution: -Suppose that there exists a surjective map $GL_n(k) \rightarrow GL_m(k)$ for $n > m > 1$. This induces a surjective map from the commutator subgroup of $GL_n(k)$ to the commutator subgroup of $GL_m(k)$. -For any $j \ge 2$, the commutator subgroup of $GL_j(k)$ is $SL_j(k)$, except when $j = 2$ and $k = \mathbb{F}_2$.
In the case $k = \mathbb{F}_2$ we have $GL_n(k) = SL_n(k)$, so in any case we can assume that we have a surjective map $SL_n(k) \rightarrow SL_m(k)$. -Now $SL_n(k)$ has a composition series, and it has a unique nonabelian simple composition factor, namely $PSL_n(k) = SL_n(k) / Z(SL_n(k))$. Similarly there is a unique nonabelian simple composition factor of $SL_m(k)$, which is $PSL_m(k)$. -Therefore we must have an isomorphism $PSL_n(k) \cong PSL_m(k)$. -But this is a contradiction, by the following old result: - -$PSL_n(k) \cong PSL_m(k')$ implies $n = m$ and $k \cong k'$, except in the following cases: $PSL_2(\mathbb{F}_7) \cong PSL_3(\mathbb{F}_2)$ and $PSL_2(\mathbb{F}_4) \cong PSL_2(\mathbb{F}_5)$. - -See p. 106, §9, chapter IV, in "La géométrie des groupes classiques" by Dieudonné.<|endoftext|> -TITLE: $f:\bf S^1 \to \bf R$, there exist uncountably many pairs of distinct points $x$ and $y$ in $\bf S^1$ such that $f(x)=f(y)$? (NBHM-2010) -QUESTION [5 upvotes]: Let $\bf S^1$ denote the unit circle in the plane $\bf R^2$. True/False ? - -For every continuous function $f:\bf S^1 \to \bf R$, there exist uncountably many pairs of distinct points $x$ and $y$ in $\bf S^1$ such that $f(x)=f(y)$ - -Borsuk-Ulam or by taking the function $g(x)=f(x)-f(-x)$, IVT implies that there exists $x$ such that $f(x)=f(-x)$. But I'm unable to show the existence of uncountably many pairs. I think the fact $RP^1 \cong \bf S^1$ may be helpful. Any ideas? - -REPLY [6 votes]: Here is "almost" the same solution as that of @Anthony Carapetis, written in a slightly different way: -Assume $f$ is not constant. Since $\bf {S}^1$ is connected and compact, $f(\bf S^1)=[a,b]$ with $a<b$. Suppose the preimage of some $y \in (a,b)$ consists of only one point, say $x$. Then $f(\mathbf S^1 \setminus \{x\})$ would have to be connected, since $\mathbf S^1 \setminus \{x\}$ is connected; but it equals $[a,b]\setminus\{y\}$, which is disconnected. Contradiction!<|endoftext|> -TITLE: partial solving of ellipse from 5 points -QUESTION [6 upvotes]: From 5 points on an ellipse I can get the ellipse characteristics (center, radii, angle) by solving a $5\times5$ system (the ellipse equation applied on each point). -But this is costly when called a billion times per second, plus in my case I only want the ellipse center. -> Is there a cheaper way to get the ellipse center only (either geometric, algebraic or numeric), without solving the full $5\times5$ system ? -NB: for now (see end part of here) I am using an iterative solution finding the 2 most distant points, i.e. the main axis, and taking the middle. But it is still costly, and of course inelegant. -EDIT 1: if it helps, I could also provide the tangents at points. -EDIT 2: note that the full ellipse equation is not a quadratic form (since not centred at (0,0)). - -REPLY [4 votes]: Here is how I compute a conic without $5\times5$ equations, based on my background in projective geometry. -Finding the matrix -Start with homogeneous coordinates, i.e. you have five points -$$ -A=\begin{pmatrix}A_x\\A_y\\1\end{pmatrix} -\qquad\cdots\qquad -E=\begin{pmatrix}E_x\\E_y\\1\end{pmatrix} -$$ -Now a point $P$ lies on the conic through these five points iff -$$[A,C,E][B,D,E][A,D,P][B,C,P] - [A,D,E][B,C,E][A,C,P][B,D,P] = 0$$ -where I use $[\cdot,\cdot,\cdot]$ to denote a determinant. Now you may know that you can write a $3\times3$ determinant as a scalar triple product, e.g.
$$[A,D,P] = \langle A\times D,P\rangle$$ -Combine two of these and you have a quadratic form with a rank 1 matrix in the center: -$$[A,D,P][B,C,P] = \langle P,A\times D\rangle\cdot\langle B\times C,P\rangle -= P^T\cdot(A\times D)\cdot(B\times C)^T\cdot P$$ -So the original equation boils down to $P^TMP=0$ using the following matrix: -\begin{align*} -M &=\phantom+ [A,C,E][B,D,E]\cdot(A\times D)\cdot(B\times C)^T \\ -&\phantom=- [A,D,E][B,C,E]\cdot(A\times C)\cdot(B\times D)^T -\end{align*} -You probably should symmetrize your final result as well, i.e. compute $M+M^T$. -So you have to compute four determinants, four cross products, two outer products, two scalar times matrix products, one matrix subtraction and one matrix addition. But all of the vectors involved are $3$-vectors and all the matrices are only $3\times3$, and you never have to pivot, never have to make any case distinctions. If you work with the homogeneous coordinates using the representatives given above, many numbers in your computations will be equal to $1$, which can be used to further simplify an implementation. -You may notice that the determinants in the left part of each line are just $E$ plugged into the quadratic form you get from the right part of the other line. So if evaluating quadratic forms is any easier for you than computing determinants, go ahead and re-use the matrices you need for the right hand side in any case. -Finding the center -Now you want the center of that beast. The center is the pole of the line at infinity. For that you need the dual matrix, which algebraically is cheapest to compute using the classical adjoint. Multiply that matrix by $(0,0,1)$ and you have the homogeneous coordinates of the center. Divide the first two coordinates by the third to get back to inhomogeneous coordinates.<|endoftext|> -TITLE: Classifying left invariant metrics on the 3-dimensional heisenberg group -QUESTION [5 upvotes]: Recently I read that all left invariant metrics on the Heisenberg group are equivalent up to scaling; however, no reference was given for this result. I've made some attempt to prove this myself. In particular the Heisenberg group H can be represented as $$H=\left\{ \begin{bmatrix}1&x&y\\0&1&z\\0&0&1 \end{bmatrix} \Big\vert\, x,y,z\in\mathbb{R}\right\}\tag{1}$$ with $$\mathfrak{g}=\left\{\begin{bmatrix}0&x&y\\0&0&z&\\0&0&0\end{bmatrix}\Big| \,x,y,z\in\mathbb{R}\right\}\tag{2}$$ its associated Lie algebra. Then we can define a left invariant metric $g$ by choosing a basis for $\mathfrak{g}$ and declaring it orthonormal and then translating. I've made a few attempts at this but am not really sure where to start. I've tried starting with two choices of basis $\{E_1,E_2,E_3\}$ and $\{F_1,F_2,F_3\}$ with metrics $g_1,g_2$ respectively. I would then like to say that if $\phi:\mathfrak{g}\rightarrow\mathfrak{g}$ is an automorphism I could extend that to an automorphism $\Phi:H\rightarrow H$ which will hopefully be an isometry. If you can point me in the right direction with either a reference or on the proof itself I would appreciate it. - -REPLY [3 votes]: There are essentially two approaches to this problem, both of which will arrive at identical results. One relies on results pertaining to three-dimensional unimodular Lie groups that can be found in John Milnor's wonderful paper concerning the left invariant Riemannian metrics on Lie groups. The other approach is to directly exploit the automorphisms of the Lie algebra of the Heisenberg group.
Since the Heisenberg group is nilpotent and the bracket structure is incredibly simple, the automorphism group is large and one can find a basis for an arbitrary left invariant metric that is of a particularly simple form. Using the automorphism group to find canonical forms for left invariant metrics on three-dimensional Lie groups has its limitations, however, as almost nothing can be said about the canonical forms of left invariant metrics on $SO(3)$ and $SL_{2}\left(\mathbb{R}\right)$ via an automorphism reduction. In these cases, the results of Milnor are truly wonderful.
-I will outline both approaches below and I have provided a link to Milnor's paper at the end of this answer.
-
-(Automorphism Reduction) Using your notation above, we take the following as a basis for the Lie algebra $\mathfrak{g}$:
-$$
-\mathbf{E}_{1} = \begin{pmatrix} 0&1&0\\ 0&0&0\\ 0&0&0\\\end{pmatrix}, \hskip.25in
-\mathbf{E}_{2} =\begin{pmatrix} 0&0&0\\ 0&0&1\\ 0&0&0\\\end{pmatrix}
-\hskip.25in
-\mathbf{E}_{3} = \begin{pmatrix} 0&0&1\\ 0&0&0\\ 0&0&0\\\end{pmatrix},
-$$
-and we observe that the Lie algebra structure of $\mathfrak{g}$ is completely determined by the non-zero bracket relations
-$$
-\left[ \mathbf{E}_{1}, \mathbf{E}_{2}\right] = \mathbf{E}_{3}.
-$$
-The corresponding structure constants $C_{ij}^{k}$ are defined by $\left[\mathbf{E}_{i}, \mathbf{E}_{j}\right] = C_{ij}^{k}\mathbf{E}_{k}$ (summation convention assumed), and we note that the only nonzero structure constant(s) is $C_{12}^{3} = 1$ (and $C_{21}^{3} = -1$).
-Due to the number of structure constants of the Lie algebra that are zero, the group of automorphisms is quite large. We can take the definition of an automorphism of $\mathfrak{g}$ to be an invertible linear transformation $A : \mathfrak{g} \to \mathfrak{g}$ that satisfies $A\left(\left[\mathbf{E}_{i}, \mathbf{E}_{j}\right]\right) = \left[ A\left(\mathbf{E}_{i}\right), A\left(\mathbf{E}_{j}\right)\right]$ for all basis vectors $\mathbf{E}_{i}$, $\mathbf{E}_{j}$.
-Letting $A : \mathfrak{g} \to \mathfrak{g}$ be a linear transformation defined with respect to the given basis by $A\left(\mathbf{E}_{i}\right) = a_{i}^{j}\mathbf{E}_{j}$, we find that the entries of the matrix representation of $A$ must relate to the structure constants of $\mathfrak{g}$ as follows.
-Computing the Lie brackets of the basis vectors first, we must have
-\begin{align*}
-A\left(\left[\mathbf{E}_{i}, \mathbf{E}_{j}\right]\right) &= A\left(C_{ij}^{k}\mathbf{E}_{k}\right)\\
-&= C_{ij}^{k}A\left(\mathbf{E}_{k}\right)\\
-&= C_{ij}^{k}a_{k}^{p}\mathbf{E}_{p}.
-\end{align*}
-But calculating the Lie bracket $\left[A\left(\mathbf{E}_{i}\right), A\left(\mathbf{E}_{j}\right)\right]$ after mapping, we find that
-\begin{align*}
-\left[A\left(\mathbf{E}_{i}\right), A\left(\mathbf{E}_{j}\right)\right] &= \left[a_{i}^{r}\mathbf{E}_{r}, a_{j}^{s}\mathbf{E}_{s}\right]\\
-&= a_{i}^{r}a_{j}^{s}\left[\mathbf{E}_{r}, \mathbf{E}_{s}\right]\\
-&= a_{i}^{r}a_{j}^{s}C_{rs}^{p}\mathbf{E}_{p}.
-\end{align*}
-Thus, the entries of the matrix representation of $A$ and the structure constants $C_{ij}^{k}$ must satisfy the following system of equations:
-$$
-C_{ij}^{k}a_{k}^{p} = a_{i}^{r}a_{j}^{s}C_{rs}^{p}, \hskip.15in i, j, k, r, s, p = 1..3. 
-$$
-Again, due to the number of structure constants that are zero, the equations above are easily solved and one finds that $A = \left(a^{i}_{j}\right)$ is an automorphism of $\mathfrak{g}$ if and only if $A$ has a matrix representation with respect to the chosen basis of the form
-$$
-A= \begin{pmatrix}
-a^{1}_{1} & a^{1}_{2} & 0\\
-a^{2}_{1} & a^{2}_{2} & 0\\
-a^{3}_{1} & a^{3}_{2} & \Delta
-\end{pmatrix},
-\hskip.25in \Delta =a^{1}_{1}a^{2}_{2} - a^{1}_{2}a^{2}_{1} \ne 0.
-$$
-Now, if we start with an arbitrary left invariant metric $\mathbf{g}$ defined relative to the chosen frame $\mathbf{E}_{1}, \mathbf{E}_{2}, \mathbf{E}_{3}$ by
-$$
-\mathbf{g} = \left(g_{ij}\right) = \left(\mathbf{g}\left(\mathbf{E}_{i}, \mathbf{E}_{j}\right)\right),
-$$
-we can use elements of the automorphism group to change the basis of $\mathfrak{g}$ so that
-
-The matrix representation $\mathbf{g} = \left(g_{ij}\right) = \left(\mathbf{g}\left(\mathbf{E}_{i}, \mathbf{E}_{j}\right)\right)$ of the inner product is as simple as possible, and
-The structure constants of the new basis remain fixed.
-
-Note that as opposed to starting with an arbitrary frame for the Lie algebra and declaring it to be an orthonormal frame that we turn into a left invariant metric $\mathbf{g}$ via left translation, we instead start with a particular frame for the Lie algebra $\mathfrak{g}$ and we let $\mathbf{g}$ be an arbitrary inner product that we turn into a left invariant metric via left translations. The distinction here is essential.
-Observing that the columns of the matrix representation of an element in the automorphism group tell us exactly what we can do to a particular basis vector with an automorphism, we see that we can arrange for the basis vectors $\mathbf{E}_{1}$ and $\mathbf{E}_{2}$ to be any two linearly independent vectors that are also linearly independent of $\mathbf{E}_{3}$.
-Specifically, we can make a change of basis of the form
-\begin{align*}
-\tilde{\mathbf{E}}_{1} &= a^{1}_{1}\mathbf{E}_{1} + a^{2}_{1}\mathbf{E}_{2} + a^{3}_{1}\mathbf{E}_{3}\\
-\tilde{\mathbf{E}}_{2} &= a^{1}_{2}\mathbf{E}_{1} + a^{2}_{2}\mathbf{E}_{2} + a^{3}_{2}\mathbf{E}_{3}\\
-\tilde{\mathbf{E}}_{3} &= \Delta\mathbf{E}_{3},\\
-\end{align*}
-so that $\tilde{\mathbf{E}}_{1}$ and $\tilde{\mathbf{E}}_{2}$ are $\mathbf{g}$-orthogonal unit vectors that are $\mathbf{g}$-orthogonal to $\mathbf{E}_{3}$.
-Furthermore, note that a change of basis of the indicated form will only scale $\mathbf{E}_{3}$.
-The matrix representation of the inner product $\mathbf{g}$ with respect to the new basis takes the form
-$$
-\mathbf{g} = \left(g_{ij}\right) = \left(\mathbf{g}\left(\tilde{\mathbf{E}}_{i}, \tilde{\mathbf{E}}_{j}\right)\right)
-=
-\begin{pmatrix}
-1 & 0 & 0\\
-0 & 1 & 0\\
-0 & 0 & \mathbf{g}\left(\tilde{\mathbf{E}}_{3}, \tilde{\mathbf{E}}_{3}\right)\\
-\end{pmatrix}.
-$$
-Note that the selection of the basis vectors $\tilde{\mathbf{E}}_{1}$ and $\tilde{\mathbf{E}}_{2}$ to be orthogonal unit vectors that are orthogonal to $\mathbf{E}_{3}$ is unique up to a rotation about $\mathbf{E}_{3}$, i.e., an automorphism of the form $\begin{pmatrix} \cos \theta & -\sin \theta & 0\\
\sin \theta & \cos \theta & 0\\
0 & 0& 1\\
\end{pmatrix}$, but applying such an automorphism will do nothing to improve the representation of the left invariant metric $\mathbf{g}$.
-Finally, as you noted in your question, we can extend the automorphism of the Lie algebra $\mathfrak{g}$ to an automorphism of the Heisenberg group $H$. 
As such, we see that the left invariant metrics on the Heisenberg group are of the form
-$$
-\mathbf{g} = \left(g_{ij}\right) = \left(\mathbf{g}\left(\mathbf{E}_{i}, \mathbf{E}_{j}\right)\right)
-=
-\begin{pmatrix}
-1 & 0 & 0\\
-0 & 1 & 0\\
-0 & 0 & \mathbf{g}\left(\mathbf{E}_{3}, \mathbf{E}_{3}\right)\\
-\end{pmatrix},
-$$
-where the Lie algebra structure of $\mathfrak{g}$ is determined by the non-zero bracket $\left[\mathbf{E}_{1}, \mathbf{E}_{2}\right] = \mathbf{E}_{3}$.
-Or equivalently
-$$
-\mathbf{g} = \omega^{1} \otimes \omega^{1} + \omega^{2}\otimes \omega^{2} + \lambda \omega^{3} \otimes \omega^{3},\hskip.25in \lambda \in \mathbb{R}, \lambda > 0
-$$
-where $\omega^{1}, \omega^{2}, \omega^{3}$ constitute the coframe that is dual to $\mathbf{E}_1, \mathbf{E}_{2}$, $\mathbf{E}_{3}$.
-Milnor's Approach (I will supply some details later tonight, but you can find the article linked here: Curvatures of left invariant metrics on Lie groups. The relevant material is in Section 4. If you try using the automorphisms of $SO(3)$ or $SL_{2}\left(\mathbb{R}\right)$ to reduce the left invariant metrics on either of the groups to somewhat canonical forms, you will gain an appreciation for how wonderful Milnor's result(s) is(are). The abstract of the paper also happens to be one of my favorite abstracts.)<|endoftext|>
-TITLE: Are there functions that are Holder continuous but whose variation is unbounded?
-QUESTION [6 upvotes]: I have recently been introduced to the concept of the Hölder condition and I was told that there are functions that are Hölder continuous but whose variation is unbounded.
-Can anyone present an example, with an explanation of both the unboundedness of variation and the Hölder condition? If possible some example that's not too complicated and doesn't require advanced math - Let's say, I looked up the Weierstrass function and that's quite out of my reach at the moment.
-
-REPLY [10 votes]: Let $C\subset [0,1]$ be the standard Cantor set. Define
-$$f(x)=(\operatorname{dist}(x,C))^{\alpha}$$
-with $\alpha\in (0,1)$ to be chosen later. This is an $\alpha$-Hölder continuous function since it's a composition of the Lipschitz function $x\mapsto \operatorname{dist}(x,C)$ with the $\alpha$-Hölder function $t\mapsto t^\alpha$.
-The complement of the Cantor set has $2^{k-1}$ intervals of length $3^{-k}$, for each $k=1,2,\dots$. On such an interval, $f$ increases from $0$ to $(3^{-k}/2)^\alpha$ and then decreases to $0$. Therefore, its total variation is at least
-$$
-\sum_{k=1}^\infty 2^{k-1} (3^{-k}/2)^\alpha
-$$
-and this series diverges when $\alpha\le \log 2/\log 3$.
-This is a fairly simple function that you can sketch by hand. The case $\alpha = \log 2/\log 3$ is particularly nice: each new generation of peaks is half the height of the previous ones, so that the sum of heights is the same in each generation.<|endoftext|>
-TITLE: What is wrong with sets like $\{a,\{a\},\{\{a\}\},\ldots\}$
-QUESTION [7 upvotes]: I know pretty much nothing about set theory beyond first year undergraduate maths, so apologies if this is a stupid question.
-The axiom of regularity in ZFC as I have understood it would forbid the existence of the following sets:
-$\{a,\{a\},\{\{a\}\},\{\{\{a\}\}\},\ldots\}$
-$\{a,\{a\},\{a,\{a\}\},\{a,\{a\},\{\{a\}\}\},\ldots\}$
-Why should such sets not exist? Can it be shown that their existence leads to a contradiction? I know about Russell's paradox but isn't forbidding sets like the above a bit overkill, as they seem not to lead to paradoxes like Russell's set does. 
-Also is it possible to create a set of axioms that allow the maximum number of sets to exist such that they do not lead to a contradiction?
-
-REPLY [5 votes]: You have it backwards: those sets are perfectly fine. In fact, the ordinal numbers are a hugely important class of sets of the form of the second example.
-The axiom of regularity says that there are no infinite descending chains under membership, i.e. that there do not exist infinitely many $S_i$ such that $\cdots \in S_{i+2}\in S_{i+1}\in S_i\in\cdots\in S_1$. It is a consequence of the axiom of regularity that no set is an element of itself.
-The axiom of regularity effectively prevents infinitely nested sets like $\cdots\{\{a\}\}\cdots$. If we look at the example in the title, the issue at stake here is the existence of the limit of the sequence defined by the elements of the sets. That limit does not correspond to a set.<|endoftext|>
-TITLE: Regular Expressions with Repetition
-QUESTION [5 upvotes]: I'm learning about regular expressions and how they represent regular languages of an alphabet. Conceptually, I'm having trouble imagining what a regular expression would look like, representing a language that has at least x amount of a's and y amount of b's, for example, in an alphabet {a,b}.
-The way I understand it, if the question was, for example, a regular expression representing the language with at least one of each, the regular expression would look like $a^*b^*(ab+ba)a^*b^*$ ... But this doesn't seem very efficient. I know that if you have a regular expression $(ab)^*$, the language does not include the word ba, yet once the "least number" of letters in the alphabet increases, the length of the expression roughly increases exponentially! I'm aware of the "+" symbol, but wouldn't that have the same problem as star when applying it to "at least 1 of each" problems?
-Say, for the regular expression representing the language with at least two a's and b's, it seems excessive to write $a^*b^*(aabb+abab+bbaa+baba+baab+abba)a^*b^*$. I'm confused because I feel like there has to be a simpler way to write the expression, but I can't seem to find a solution. If there is a simpler way to write it, is the example I use still valid?
-
-REPLY [2 votes]: Sadly you cannot avoid the exponential blow-up for your required regular expressions, because you have to keep track of all the possible interleavings of the a's and b's. However, the example you give does not seem right, since it does not match baaaaaaaaab for example. I would go for $$(aaa^*ba^*b+abaa^*b+abbb^*a+babb^*a+baaa^*b+bbb^*ab^*a)(a+b)^*$$ instead<|endoftext|>
-TITLE: A finite Hausdorff space is discrete
-QUESTION [13 upvotes]: Theorem: $X$ is a finite Hausdorff space. Show that the topology is discrete.
-
-My attempt: $X$ is Hausdorff, and $T_2 \implies T_1$. Thus for any $x \in X$ we have that $\{x\}$ is closed, and so $X \setminus \{x\}$ is open. Now for any $y\in X \setminus \{x\}$ and $x$, using the Hausdorff property, we get that $\{x\}$ is open.
-Am I right till here? And how to proceed further?
-
-REPLY [4 votes]: Let $X$ be a finite Hausdorff space. Let $x\in X$. For each $y\not =x\in X$ let $U_y$ and $V_y$ be disjoint open sets with $x\in U_y$ and $y \in V_y$. Set $V=\cup_{y\not = x} V_y$. Then $V$ is open... So $X\setminus V=\{x\}$ is closed.
-Thus every point in $X$ is closed. Since $X$ is finite, every point is also open (complement of finite union of closed sets). 
-We could actually say that every finite $T_1$ space is discrete, since points being closed is equivalent to being $T_1$.<|endoftext|>
-TITLE: Prove that the augmentation ideal in the group ring $\mathbb{Z}/p\mathbb{Z}G$ is a nilpotent ideal ($p$ is a prime, $G$ is a $p$-group)
-QUESTION [10 upvotes]: Let $p$ be a prime and let $G$ be a finite group of order a power of $p$ (i.e., a $p$-group).
-
-Prove that the augmentation ideal in the group ring $\mathbb{Z}/p\mathbb{Z}G$ (to be read as $\left( \mathbb{Z}/p\mathbb{Z} \right) G$) is a nilpotent ideal. (Note that this ring may be noncommutative.)
-
-Let $I_G$ be the augmentation ideal of the group ring $\mathbb{Z}/p\mathbb{Z}G$, i.e. $I_G$ consists of formal linear combinations $\sum_i n_i g_i$ ($n_i\in \mathbb{Z}/p\mathbb{Z}$, $g_i\in G$) such that $\sum_i n_i=0$. I cannot show that there is $m \in \mathbb{Z}_+$ such that $I_G^m=0$.
-
-REPLY [7 votes]: Lemma 1. Let $G$ be a group, $H$ be a normal subgroup, $\pi : G\to G/H$ the canonical morphism, $R$ be a commutative ring with unit, and $\varphi : R[G] \to R[G/H]$ the morphism of (perhaps non-commutative) rings induced by $\pi$. Then $\varphi$ is surjective and $$\operatorname{Ker}(\varphi)=\operatorname{Aug}(R[H])R[G] = R[G] \operatorname{Aug}(R[H]).$$
-Here, $\operatorname{Aug}(R[K])$ denotes the augmentation ideal of a group ring $R[K]$.
-Proof of lemma 1. The surjectivity of $\varphi$ is obvious, due to the definition of $\varphi$ and the surjectivity of $\pi$. The second equality is obvious as $H$ is a normal subgroup of $G$. We show now the first equality. Write $H = \{h_i\}$ for distinct $h_i$'s, and let $K = \{k_j\}$ (with distinct $k_j$'s) be a set of coset representatives of $G/H$. Set $g_{i,j} := h_i k_j$. Obviously $G = \{g_{i,j}\}$. Now take a $\zeta\in\operatorname{Ker}(\varphi)$ and write, by what we just said, $\zeta = \sum r_{i,j} g_{i,j}$ with $r_{i,j} \in R$. Then $\sum_j \left(\sum_i r_{i,j}\right)k_j H = \varphi(\zeta) = 0$ and comparing coefficients shows that $\sum_i r_{i,j} = 0$ for all $j$. Therefore $\zeta = \sum r_{i,j} g_{i,j} = \sum_j \left(\sum_i r_{i,j} h_i\right)k_j$, where each $\sum_i r_{i,j} h_i$ lies in $\operatorname{Aug}(R[H])$ because its coefficients sum to $0$; hence $\zeta$ is in $\operatorname{Aug}(R[H])R[G]$. The reverse inclusion is obvious, as you remark that if $\sum_i r_i h_i \in\operatorname{Aug}(R[H])$ then it is mapped to $0$ by $\varphi$. $\square$
-Lemma 2. If $\varphi : R\to S$ is a surjective morphism of rings and if $A\subseteq R$ then $\varphi[(A)] \subseteq (\varphi(A))$. Here, $(A)$ denotes the ideal of $R$ generated by $A$.
-Proof of lemma 2. Obvious. $\square$
-Lemma 3. Let $R$ be a ring, $I,J$ be additive subgroups of $R$, and suppose $I$ to be central, that is, included in $R$'s center. Then $IJ=JI$ and $(IJ)^n = I^n J^n$.
-Proof of lemma 3. Obvious. $\square$
-Theorem. Let $p$ be a prime number and $G$ be a finite $p$-group. Let $R$ be a commutative ring with unit such that $p \cdot 1_R = 0_R$. The augmentation ideal of the group ring $R[G]$ is nilpotent.
-Proof of the theorem. As $G$ is a $p$-group we have $|G| = p^n$ for some $n\in\mathbf{N}$. We proceed by induction on $n$.
-The case $n=0$ is obvious. Let's check the case $n=1$. Then $G$ is cyclic of order $p$, let's say generated by some $x$, and thus $R[G]$ is commutative of characteristic $p$, and you can see that $\operatorname{Aug}(R[G])$ is generated by $x-1$. As $(x-1)^p = x^p - 1^p = x^p - 1 = 1 - 1 = 0$, we see that $\operatorname{Aug}(R[G])$ is nilpotent indeed. Now we proceed to the inductive step. 
Suppose that for $n\geq 1$, for all groups $H$ of order $p^n$, the augmentation ideal of $R[H]$ is nilpotent of exponent $\leq p^n$, and let $G$ be a $p$-group of order $p^{n+1}$. Since $G$ is a non-trivial $p$-group, it has a non-trivial center $Z$ (a classic result on $p$-groups...). By Cauchy's theorem we can find an $x\in Z$ of order $p$. Let $H$ be the subgroup of $G$ generated by $x$. As $x\in Z$ the subgroup $H$ is normal. As in lemma 1 let $\varphi : R[G] \to R[G/H]$ be the ring morphism induced by the projection map $\pi : G\to G/H$. Now as the augmentation ideal of a group ring is generated by the $g-1$'s and as $\pi$ is surjective, we have $\varphi\left(\operatorname{Aug}\left(R[G]\right)\right) = \operatorname{Aug}\left(R[G/H]\right)$. By the induction hypothesis the ideal $\operatorname{Aug}\left(R[G/H]\right)$ is nilpotent of exponent $\leq p^n$, so that its $p^n$-th power is zero, so that $\varphi\left(\operatorname{Aug}\left(R[G]\right)\right)^{p^n} = 0$. As $\varphi$ is a ring morphism, this entails $\varphi\left(\operatorname{Aug}\left(R[G]\right)^{p^n}\right) = 0$, that is, $\operatorname{Aug}\left(R[G]\right)^{p^n}\subseteq \operatorname{Ker}(\varphi)$. But the ideal $\operatorname{Ker}(\varphi)$ is $\operatorname{Aug}(R[H]) R[G]$, and $\operatorname{Aug}(R[H])$ is central. By the base case and by Lemma 3, we thus have $\left(\operatorname{Ker}(\varphi)\right)^p = \left(\operatorname{Aug}(R[H])\right)^p \left( R[G]\right)^p = 0 \left(R[G]\right)^p =0$. We find finally that $\operatorname{Aug}\left(R[G]\right)^{p^{n+1}} = 0$. $\square$<|endoftext|>
-TITLE: Practical drawing of geodesics
-QUESTION [5 upvotes]: I want to use a computer to draw geodesics on a known parameterized surface of revolution, starting from a known point and at a known angle to the meridian.
-What would be the easiest way of doing this?
-Some methods I've considered:
-
-Using Clairaut's relation ($r \cos \theta = C$), and using
-small increments, but I'm afraid that this is not a general solution, as this will cause the geodesic to
-"get stuck" on parallels whenever the angle crosses zero.
-Use the relation that for a surface given by:
-$$\left(\varphi(v)\cos u, \varphi(v)\sin u, \psi(v)\right)$$ The geodesics are given by: $$\frac{du}{dv}=\frac{\sqrt{\varphi'^2+\psi'^2}}{\varphi\sqrt{\varphi^2-c^2}}\ \longrightarrow\ u(v_1) = u_0 + \int_{v_0}^{v_1} \frac{\sqrt{\varphi'^2+\psi'^2}}{\varphi\sqrt{\varphi^2-c^2}} dv$$ This presents the problem of finding the correct value of $c$ given the starting angle, and as user levap pointed out, has the same "getting stuck" problem, since $u(v)$ is no longer a function.
-Various numerical methods which work for any triangulated surface, but I feel it would be a shame to use these when I know the exact form of the surface.
-
-Are there any easier methods or modifications to the above?
-
-REPLY [2 votes]: Triangulation need no longer be adopted once you know that Clairaut's law is operative and that the geodesic trajectory can be found by quadrature for surfaces of revolution.
-EDIT2:
-The parametrization you gave is not fully general (valid for all meridians).
-Without loss of generality you could consider
-$$ X(u,v) = (v \cos u, v \sin u, \psi (v) ) $$
-$C$ is determined by the initial condition: $ C_{initial}= v_0 \sin \theta_{0}. $
-In numerical computations it is beneficial, for easily crossing the parallels, to work on an arc-length basis (primes denote $d/ds$), using Liouville's formula for lines of zero geodesic curvature, a.k.a. geodesics. 
-$$ \theta^{'}(s) = -\sin \theta \sin \phi / v $$
-where the meridian has slope:
-$$ \dfrac{dv}{dz} = \tan \phi $$
-and polar coordinate rotation:
-$$ v\,u^{'} = \sin \theta $$
-EDIT1:
-Say you want to practically compute and construct a great circle on a sphere. Using Runge-Kutta 4, numerically integrate the following set of equations to define the circle in space, supplying the boundary conditions.
-$$ \theta ^{'} = -\sin \theta \sin \phi / v ;\; \phi^{'} = 1/ (a \cos \phi ) ;\; v^{'} = \sin \phi \cos \theta;\; z^{'} = \cos \phi \cos \theta\; ; u^{'}= \sin \theta / v ; $$
-(The following is more advanced, after some experience gained in geodesic tracing:)
-Geodesics appear to cross all parallels before the one corresponding to
-$$ r_{min} = C, \theta = \pi/2 $$
-There are, however, three ways geodesics move during turn-around at the above extreme parallel.
-The returning behavior types are commonly traced for:
-Positive/Negative Gauss curvatures: Geodesics return after meeting the parallel circle $ r_{min}< C_{initial} $ at turn-around.
-Non-returning types are for negative Gauss curvature:
-Geodesics shoot through after meeting $ r_{min}= C$ if $ r_{min} > C_{initial} $
-Geodesics are asymptotic to (never reach) $ r_{min}= C$ if $ r_{min} = C_{initial} $
-The latter are demonstrated in the WolframAlpha Demo:
-ONE_sheet_Hyperboloid_geodesics<|endoftext|>
-TITLE: Deleting one digit yields a divisor
-QUESTION [6 upvotes]: Let $N$ be a positive integer with $d\geq 4$ digits, none of which is
-zero. Suppose that erasing some digit of $N$ yields another number
-$M$ which happens to be a divisor of $N$.
-Examples : 1375 divides 12375. 1875 divides 61875.
-Question : is it true that $M$ must always end with 25 or 75, and that the two final digits of $M$ must be the same as those of $N$ (so that they too will be 25 or 75) ?
-I have checked this with a computer for $d=4$ and $d=5$.
-
-REPLY [4 votes]: Let $n$ be the original number and $m$ the number after a digit is deleted. Suppose that the deleted digit is $d$, that $r$ is the number represented by the $k$ digits to the right of $d$, and that $\ell$ is the number represented by the digits to the left of $d$, so that $m=10^k\ell+r$, and $n=10^{k+1}\ell+10^kd+r$.
-Suppose first that $n=(10-s)m$, where $1\le s\le 8$. Then
-$$10^{k+1}\ell+10^kd+r=10^{k+1}\ell-10^ks\ell+(10-s)r\;,$$
-so $(9-s)r=10^k(d+s\ell)$. Now $(9-s)r$ has at most $k+1$ digits, and $10^k(d+s\ell)$ has at least $k+1$ digits, so in fact $d+s\ell$ is a single digit, and $(9-s)r$ has $k+1$ digits. Thus, $\ell$ is a single digit, and we must have $k\ge 2$ (since $n$ has at least $4$ digits). If $s=4$, then $r=20\cdot10^{k-2}(d+s\ell)$ ends in $0$, which is impossible. Otherwise $25$ divides $(9-s)r$ and is relatively prime to $9-s$, so $25\mid r$, and since $n$ has no zero digit, $r$ must end in $25$ or $75$. (Of course the last two digits of $r$ are the last two digits of $m$ and $n$ as well.)
-Now suppose that $\frac{n}m>10$. If $d$ is not the first digit of $n$, then $11\le\frac{n}m\le 19$. Suppose that $n=(10+s)m$, where $1\le s\le 9$. Then
-$$10^{k+1}\ell+10^kd+r=10^{k+1}\ell+10^ks\ell+(10+s)r\;,$$
-so $(9+s)r=10^k(d-s\ell)$. Clearly $d>s\ell$, so $\ell$ is a single digit, and $k\ge 2$. Thus, $25\mid(9+s)r$. If $s=6$, then $3r=20\cdot10^{k-2}(d-6\ell)$, so $10\mid r$, which is impossible. Otherwise, $25\mid r$, and we're done, as before.
-Finally, suppose that $d$ is the first digit of $n$, so $n=10^kd+m$. Suppose further that $n=sm$, so that $10^kd+m=sm$, and $10^kd=(s-1)m$. 
Now $k\ge 3$, so $125\mid 10^k$, and hence $25\mid m$ (and we're done, as before) unless $25\mid s-1$. Clearly $s<100$, so we need only worry about the cases $s=26$, $s=51$, and $s=76$. In those cases we have $m=4d\cdot10^{k-2}$, $m=2d\cdot10^{k-2}$, and $3m=4d\cdot10^{k-2}$, respectively, and in each case (since $k\ge 3$) $10\mid m$, which is impossible.<|endoftext|>
-TITLE: How do you stay sharp after graduating?
-QUESTION [5 upvotes]: So I received my math bachelors back in 2013 and am now in the "real world". But my job doesn't require the upper level math I learned; I'm not in research. My notion is that there are a lot of people in the same position I am: I feel like my core math skills will stay with me for the rest of my life (I'll never forget algebra, geometry, calc), but for some of the higher level math I learned at university, my memory is fading. I did a lot of work on PDEs but I wouldn't be able to solve the simplest problem right now without a reference.
-My question is, how do you stay sharp and keep your math knowledge from waning?
-
-REPLY [3 votes]: When Grothendieck died it was discovered that he didn't have even one mathematical book in his home. He used to say that "Mathematics is to write and not to read". Practice writing Mathematics and you will keep up.<|endoftext|>
-TITLE: Semidirect product: general automorphism always results in a conjugation
-QUESTION [11 upvotes]: When $G$ is a group, $N$ is a normal subgroup of $G$ and $H$ is another subgroup of $G$ where $ N \cap H = \{1\} $, the normality of $N$ suggests that we can write, for $n_1, n_2 \in N$ and $h_1, h_2 \in H$,
-$$ n_1 h_1 n_2 h_2 = n_1 h_1 n_2 h_1^{-1} h_1 h_2 $$
-and so motivates the definition of an 'external' semidirect product using
-$$ (n_1,h_1) (n_2,h_2) = (n_1 h_1 n_2 h_1^{-1}, h_1 h_2). $$
-However, in general there is no reason to suppose $\textit{a priori}$ that $N$ and $H$ are subgroups of a larger group $G$, so that in general we say that to form the external product we need some groups $N$, $H$, and some homomorphism $\phi \colon H \to \textrm{Aut}(N)$ and define
-$$ (n_1,h_1) (n_2,h_2) = (n_1 \phi(h_1)(n_2), h_1 h_2). $$
-I would expect that this would give something more general than the intuitive external product given above, since now we are using the result of a general automorphism $ \phi(h_1)(n_2) $ rather than the specific conjugation $ h_1 n_2 h_1^{-1} $. But it turns out that for any $\phi$ you come up with, this defines conjugation in the group $ N \rtimes H$.
-I am having difficulty seeing intuitively why this is. Is there any insight anyone can give? Why must the general automorphism in the external construction always correspond to an inner automorphism conjugation in the internal construction?
-
-REPLY [7 votes]: Use the same idea. Observe
-$$\begin{array}{ll} (n_1,h_1)(n_2,h_2) & =(n_1,e_H)(e_N,h_1)(n_2,e_H)(e_N,h_2) \\ & = (n_1,e_H)\color{Blue}{(e_N,h_1)(n_2,e_H)(e_N,h_1^{-1})}(e_N,h_1)(e_N,h_2) \end{array} $$
-and
-$$\begin{array}{ll} (n_1,h_1)(n_2,h_2) & =(n_1\phi_{h_1}(n_2),h_1h_2) \\ & = (n_1,e_H)\color{Blue}{(\phi_{h_1}(n_2),e_H)}(e_N,h_1)(e_N,h_2). \end{array} $$
-This means that when we conjugate elements of $N\times\{e_H\}$ by elements of $\{e_N\}\times H$, we get the same thing as if we applied the elements of $H$ as automorphisms to $N$ and then put the result in $N\times\{e_H\}$.
-Tuples are annoying and obfuscate the algebra in my opinion though. 
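-(Still, as a quick sanity check of the tuple formulas above, here is a minimal computational sketch in Python. The choices $N=\mathbb{Z}/3$ and $H=\mathbb{Z}/2$ acting by inversion, so that $N\rtimes H\cong S_3$, as well as the helper names phi, mul and inv, are purely illustrative and not part of the original argument.)
-N = range(3)  # Z/3, written additively: the normal factor N
-H = range(2)  # Z/2, written additively: the acting factor H
-
-def phi(h, n):
-    # phi(h) is an automorphism of N: the identity for h = 0, inversion for h = 1.
-    return n % 3 if h == 0 else (-n) % 3
-
-def mul(a, b):
-    # External semidirect product law: (n1, h1)(n2, h2) = (n1 + phi(h1)(n2), h1 + h2).
-    (n1, h1), (n2, h2) = a, b
-    return ((n1 + phi(h1, n2)) % 3, (h1 + h2) % 2)
-
-def inv(a):
-    # Inverse: (n, h)^(-1) = (phi(h^(-1))(-n), h^(-1)); in Z/2 we have h^(-1) = h.
-    n, h = a
-    return (phi(h, (-n) % 3), h)
-
-# Conjugating (n, e_H) by (e_N, h) should land on (phi(h)(n), e_H).
-for h in H:
-    for n in N:
-        assert mul(mul((0, h), (n, 0)), inv((0, h))) == (phi(h, n), 0)
-print("conjugation by (0, h) realizes phi(h) on N x {e_H}")
-The assertions check, for every pair, that conjugating $(n,e_H)$ by $(e_N,h)$ yields $(\phi_h(n),e_H)$, exactly as in the blue computation above.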
We should think of the semidirect product $N\rtimes H$ as the free product $N*H$ (whose elements are words formed from using the elements of $N$ and $H$ as letters) modulo the relation that conjugating elements of $N$ by elements of $H$ yields the same thing as if we applied the corresponding automorphism.
-That is, elements of $N\rtimes H$ look like words formed from elements of $N$ and $H$. Their identity elements are identified as the same group element in $N\rtimes H$. Elements of $N$ multiply among themselves as usual, and same for elements of $H$ multiplying among themselves. But every instance of the word $hnh^{-1}$ ($h\in H,n\in N$) may be simplified to $\phi_h(n)$, and that is the only relation imposed on multiplication between elements of the two subgroups $N$ and $H$.
-Using this definition, it's easy to see that $hn=(hnh^{-1})h=\phi_h(n)h$, so $HN=NH$ within $N\rtimes H$, and every element of $H$ can be "slid past" an element of $N$ to the right (although it changes the element of $N$ along the way). As a result, every word $\cdots h_{-1}n_{-1}h_0n_0h_1n_1\cdots$ (finitely many letters of course) can be simplified via this sliding rule to the canonical form $nh$.
-Writing $n_1h_1=n_2h_2$ yields $h_1h_2^{-1}=n_1^{-1}n_2$, but the only element in $N\cap H$ (when we treat $N,H$ as subgroups of $N\rtimes H$) is the identity, so $h_1=h_2$ and $n_1=n_2$. Thus $N\rtimes H$ can be bijected with $N\times H$ set-theoretically. In order to transport the multiplication over, it remains to see how $(n_1h_1)(n_2h_2)$ simplifies to $n_3h_3$, which is something you've essentially already done.<|endoftext|>
-TITLE: How to evaluate $\int_0^1 {\cos(tx)\over \sqrt{1+x^2}}\,\mathrm{d}x$?
-QUESTION [9 upvotes]: I would like to calculate the integral
-$$\int_0^1 {\cos(tx)\over \sqrt{1+x^2}}\,\mathrm{d}x$$
-I have tried to consider the integral as a Fourier coefficient of $f(x)=\dfrac1{\sqrt{1+x^2}}$, however with no success. Besides, letting $F(t)$ denote the above integral as a function of $t$, after differentiating $F(t)$ I could not make any breakthrough. Does anybody have ideas?
-
-REPLY [5 votes]: Let:
-$$E(t)=\int\limits_0^1\dfrac{\exp(jtx)}{\sqrt{1+x^2}}dx,\quad j=\sqrt{-1}$$
-Differentiate it $n$ times with respect to $t$:
-$$E^{(n)}(t)=j^n\int\limits_0^1\dfrac{\exp(jtx)}{\sqrt{1+x^2}}x^ndx$$
-$$E^{(n)}(0)=j^n\int\limits_0^1\dfrac{x^n}{\sqrt{1+x^2}}dx$$
-By parts:
-$$I_n=\int\limits_0^1\dfrac{x^n}{\sqrt{1+x^2}}dx=\left.x^{n-1}\sqrt{1+x^2}\right|_0^1-(n-1)\int\limits_0^1\dfrac{1+x^2}{\sqrt{1+x^2}}x^{n-2}dx$$
-$$nI_n=\sqrt2-(n-1)I_{n-2}$$
-$$I_0=\int\limits_0^1\dfrac{dx}{\sqrt{1+x^2}}=\left.\ln|x+\sqrt{1+x^2}|\right|_0^1 = \ln(1+\sqrt2)$$
-Maclaurin series:
-$$E(t)=\sum\limits_{n=0}^{\infty}\dfrac1{n!}j^nI_nt^n,\quad J(t)=\int\limits_0^1\dfrac{\cos(tx)}{\sqrt{1+x^2}}dx = \operatorname{Re} E(t)$$
-Terms with $n=2k-1$ are purely imaginary, so we need only take into consideration the terms with $n=2k$, for which $j^{2k}=(-1)^k$; writing $J_k=I_{2k}$, we get
-$$\boxed{J(t)=\sum\limits_{k=0}^{\infty}\dfrac{(-1)^k}{(2k)!}J_kt^{2k},\quad J_0=\ln(1+\sqrt2),\quad J_k=\dfrac{\sqrt2}{2k}-\dfrac{2k-1}{2k}J_{k-1}}$$<|endoftext|>
-TITLE: Get bounding rectangle segments of a rotated rectangle (matrix?)
-QUESTION [5 upvotes]: My problem:
-I have: $x$, $y$ & $\alpha$ and the aspect ratio $o$(long):$p$(short) (red rectangle)
-I want to have $n$ & $m$ in dependency of $x, y, \alpha, o, p$
-
-I tried to figure it out with cos, sin and tan but I don't get a solution. My math teacher said something about the matrix of rotation, but I don't know this method. 
-I'm also fine with the points where the red corners are on the black rectangle.
-Is this possible?
-
-Thanks in advance ;)
-
-REPLY [2 votes]: Let $z$ be the length of the hypotenuse of the top left right-angled triangle whose two sides are $m$ and $n$. Then $z$ is the length of one side of the red rectangle. Let the length of the other side be $kz$. If we don't know which side is longer in advance, we don't know whether $k$ is equal to $\frac po$ or $\frac op$. However, we do have
-$$
-\begin{cases}
-z\sin\alpha + kz\cos\alpha = x,\tag{1}\\
-z\cos\alpha + kz\sin\alpha = y.
-\end{cases}
-$$
-Hence we can determine $k$ without using $o$ or $p$:
-$$
-k = \frac{y\sin\alpha - x\cos\alpha}{x\sin\alpha - y\cos\alpha}.\tag{2}
-$$
-Now, from $(1)$, we get
-$$
-z = \frac x{\sin\alpha + k\cos\alpha}.
-$$
-Therefore
-$$
-\begin{align}
-n &= z\sin\alpha = \frac{x\sin\alpha}{\sin\alpha + k\cos\alpha},\\
-m &= z\cos\alpha = \frac{x\cos\alpha}{\sin\alpha + k\cos\alpha}.
-\end{align}
-$$<|endoftext|>
-TITLE: Putnam 2015 B6, sum involving number of odd divisors on an interval.
-QUESTION [9 upvotes]: For each positive integer $k$, let $A(k)$ be the number of odd divisors of $k$ in the interval $[1, \sqrt{2k})$. What is $$\sum_{k=1}^\infty (-1)^{k-1} {{A(k)}\over{k}}?$$
-
-REPLY [10 votes]: We have that $$\sum_{k=1}^\infty (-1)^{k-1}\frac{A(k)}{k}=\left(\sum_{k=1}^\infty \frac{(-1)^{k-1}}{2k-1}\right)^2=\frac{\pi^2}{16}.$$
-
-Firstly, we would like to rearrange the terms in this series, but we have to be very careful in doing so as this is a conditionally convergent series. To do things rigorously, we have to truncate and take limits, and so define $$T(x)=\sum_{k\leq x}(-1)^{k-1}\frac{A(k)}{k}$$
- so that $T=\lim_{x\rightarrow\infty}T(x)$ is the series in question. Writing $$A(k)=\sum_{\begin{array}{c}
-d|k,\ d\text{ odd}\\
-d<\sqrt{2k}
-\end{array}}1$$
- and switching the order of summation we obtain $$T(x)=\sum_{k\leq x}\frac{(-1)^{k-1}}{k}\sum_{\begin{array}{c}
-d|k,\ d\text{ odd}\\
-d<\sqrt{2k}
-\end{array}}1=\sum_{\begin{array}{c}
-d<\sqrt{2x}\\
-d\text{ odd}
-\end{array}}\ \sum_{\begin{array}{c}
-d^{2}/2<k\leq x\\
-d|k
-\end{array}}\frac{(-1)^{k-1}}{k}.$$ Substituting $k=dm$ (note that $(-1)^{dm-1}=(-1)^{m-1}$ since $d$ is odd, and that the condition $d^{2}/2<k\leq x$ becomes $d/2<m\leq x/d$) and letting $x\rightarrow\infty$, this becomes $$T=\sum_{d\text{ odd}}\frac{1}{d}\sum_{m>\frac{d}{2}}\frac{(-1)^{m-1}}{m}=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\sum_{\begin{array}{c}
-d<2k\\
-d\text{ odd}
-\end{array}}\frac{1}{d}.$$ Now, we can write this as $$\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\sum_{j=1}^{k}\left(\frac{1}{2j-1}+\frac{1}{2k-(2j-1)}\right).$$ Truncating the first sum at $k=N$, we have $$T=\lim_{N\rightarrow \infty}\sum_{j=1}^{N}\sum_{k=j}^{N}\frac{(-1)^{k-1}}{(2j-1)(2k-(2j-1))}.$$
-Shifting the second sum to start at $k=1$, this becomes $$T=\lim_{N\rightarrow\infty}\sum_{j=1}^{N}\frac{(-1)^{j-1}}{(2j-1)}\sum_{k=1}^{N-j+1}\frac{(-1)^{k-1}}{(2k-1)}= \lim_{N\rightarrow\infty}\left(\sum_{j=1}^{N}\frac{(-1)^{j-1}}{(2j-1)}\right)^2-\lim_{N\rightarrow \infty}\sum_{j=1}^{N}\frac{(-1)^{j-1}}{(2j-1)}\sum_{k=N-j+2}^{N}\frac{(-1)^{k-1}}{(2k-1)}.$$ Now, the second term goes to zero, and the first term is the square of the well known Leibniz series for $\pi/4$. Thus $$\sum_{k=1}^\infty (-1)^{k-1}\frac{A(k)}{k}=\frac{\pi^2}{16}.$$<|endoftext|>
-TITLE: What are examples of parallelizable complex projective varieties?
-QUESTION [13 upvotes]: A smooth complex projective variety is the zero-locus, inside some $\mathbb{CP}^n$, of some family of homogeneous polynomials in $n+1$ variables satisfying a certain number of conditions that I won't spell out. It is in particular a differentiable manifold. 
A parallelizable manifold is a (differentiable) manifold with a trivial tangent bundle, i.e. $TM \cong M \times \mathbb{R}^n$ (equivalently, a manifold of dimension $n$ is parallelizable if it admits $n$ vector fields that are everywhere linearly independent).
-Being a projective variety is an algebro-geometric condition, whereas being parallelizable is more of an algebro-topological condition. I'd like to know how the two interact. For example, according to Wikipedia, some complex tori are projective. But like all Lie groups, a complex torus is parallelizable.
-
-What are other examples of smooth complex projective varieties that are parallelizable?
-
-REPLY [3 votes]: Here is a small, rather unsatisfying collection of examples arising from the following result:
-
-A (non-trivial) product of spheres is parallelizable if and only if at least one of the spheres is odd-dimensional.
-
-One can ask whether any such products can be given a complex structure which makes them projective. First of all, such a manifold would be Kähler and hence symplectic. By considering the cohomology ring of such a product, we can see that the only possibilities are $(S^1)^{2m}\times(S^2)^k$ with $m > 0$. Such spaces have many projective complex structures. For $k = 0$, we obtain the projective tori that you already mentioned, together with their products. For $k > 0$, we get the product of a non-zero number of algebraic tori with $k$ copies of $\mathbb{CP}^1$.<|endoftext|>
-TITLE: How can I find the two critical points of this system of equations?
-QUESTION [5 upvotes]: I'm currently trying to use Lagrange Multipliers to find the 2 critical points of the function
-$$
-f(x,y,z) = \frac{1}{2}x^{2}+yz+\frac{1}{3} y^{3} - z^{2}
-$$
-subject to
-$$
-h(x,y,z) = x+y+z-2 = 0
-$$
-I have formed the Lagrangian for this, which is given by
-$$
-L= \frac{1}{2}x^{2}+yz+\frac{1}{3} y^{3} - z^{2} - \lambda (x+y+z-2)
-$$
-and I have worked out that
-$$
-L_{x}= x-\lambda = 0 \\
-L_{y}= z+y^{2} - \lambda = 0 \\
-L_{z}= y-2z-\lambda = 0 \\
-L_{\lambda}=-x-y-z+2 =0
-$$
-How would I find the values of $x,y,z, \lambda$ that constitute the critical points of this system?
-EDIT: I have deduced that
-$$
-x=\lambda \\
-y=\frac{4-\lambda}{3} \\
-z=\frac{2-2\lambda}{3}
-$$
-Thus, since $x+y+z=2$, we must solve the equation
-$$
-\lambda + \frac{4-\lambda}{3} + \frac{2-2\lambda}{3} = 2
-$$
-for $\lambda$.
-However, any $\lambda \in \mathbb{R}$ is a solution.
-What am I doing wrong?
-
-REPLY [2 votes]: You got $x$, $y$, and $z$ from the three equations below
-$$\begin{align}
-L_{x}&= x-\lambda = 0 \\
-L_{z}&= y-2z-\lambda = 0 \\
-L_{\lambda}&=-x-y-z+2 =0
-\end{align}$$
-and put them back into one of those ($L_{\lambda}$), which of course always holds.
-You should have put them back into this one:
-$$L_{y}= z+y^{2} - \lambda = 0$$<|endoftext|>
-TITLE: Category-theoretic properties of cardinals
-QUESTION [12 upvotes]: Let $\kappa$ be a cardinal, let $\mathbf{H}_\kappa$ be the set of hereditarily $\kappa$-small sets, and let $\mathbf{Set}_{< \kappa}$ be the full subcategory of $\mathbf{Set}$ corresponding to $\mathbf{H}_\kappa$. I am interested in properties of $\kappa$ that are expressible in terms of $\mathbf{Set}_{< \kappa}$ (regarded as an abstract category, up to equivalence). For example:
-
-$\kappa = 0$ if and only if $\mathbf{Set}_{< \kappa} = \emptyset$.
-$\kappa = 1$ if and only if $\mathbf{Set}_{< \kappa}$ is equivalent to the terminal category.
-$\kappa$ is either $1$ or infinite if and only if $\mathbf{Set}_{< \kappa}$ has finite limits. 
-$\kappa$ is either $1$ or a strong limit cardinal if and only if $\mathbf{Set}_{< \kappa}$ is cartesian closed.
-
-Question. Is the property "$\kappa$ is regular" of this type?
-Answer. Yes. We assume $\kappa$ is infinite, so that $\mathbf{Set}_{< \kappa}$ has finite limits. For each object $X$ in $\mathbf{Set}_{< \kappa}$, let $\Gamma (X)$ be the set of morphisms $1 \to X$ in $\mathbf{Set}_{< \kappa}$, where $1$ is a fixed terminal object in $\mathbf{Set}_{< \kappa}$. Pullback then defines a functor
-$$(\mathbf{Set}_{< \kappa})_{/ X} \to \prod_{x \in \Gamma (X)} \mathbf{Set}_{< \kappa}$$
-and $\kappa$ is regular if and only if this functor is (fully faithful and) essentially surjective on objects for every object $X$ in $\mathbf{Set}_{< \kappa}$.
-
-However, I find the above answer unsatisfactory, for two main reasons:
-
-It relies on the axiom of replacement – which is perhaps unavoidable because of the nature of the definition of "regular cardinal".
-It also exploits the fact that every set is the disjoint union of its elements.
-
-Is there a better answer?
-
-REPLY [3 votes]: I know this is very old by now, but couldn't you say that $\kappa$ is regular iff $\mathbf{Set}_{<\kappa}$ is cocomplete? That is, for every $X\in\text{Ob}(\mathbf{Set}_{<\kappa})$ and every diagram (functor) $D:X\to\mathbf{Set}_{<\kappa}$, the colimit $\text{colim }D$ exists in $\mathbf{Set}_{<\kappa}$? Then you're not talking about elements at least, and the definition doesn't use Replacement to work (but can be seen as postulating the Replacement axiom relativised to $\textbf{Set}_{<\kappa}$).
-It seems like it then holds that $\kappa$ is a strong limit iff $\textbf{Set}_{<\kappa}$ is an elementary topos and that $\kappa$ is strongly inaccessible iff $\textbf{Set}_{<\kappa}$ is a Grothendieck topos, which is quite satisfying.<|endoftext|>
-TITLE: Functional equation $f(xy)=f(x)+f(y)$ and continuity
-QUESTION [6 upvotes]: Prove that if $f:(0,\infty)\to\mathbb{R}$ satisfies $f(xy)=f(x)+f(y)$, and if $f$ is continuous at $x=1$, then $f$ is continuous for $x>0$.
-
-I let $x=1$ and I find that $f(x)=f(x)+f(1)$, which implies that $f(1)=0$. So, $\lim_{x\to1}f(x)=0$, but how can I use this to prove continuity of $f$ for every $x>0$?
-Any help would be appreciated. Thanks
-
-REPLY [6 votes]: Given $x_0>0$,
-$$f(x)-f(x_0)=f\left(x_0\cdot\frac{x}{x_0}\right)-f(x_0)=f\left(\frac{x}{x_0}\right);$$
-since $f$ is continuous at $x=1$ and $\frac{x}{x_0}\to1$ as $x\to x_0$, we get
-$$\lim\limits_{x\to x_0}f(x)=f(x_0).$$<|endoftext|>
-TITLE: Are right continuous functions continuous with the lower limit topology?
-QUESTION [7 upvotes]: Let $f: \Bbb R \to \Bbb R$ be right continuous. Is $f$ continuous as a function from $\Bbb R$ with the lower limit topology to $\Bbb R$ with the standard topology? It clearly seems like it will be, but I'm not sure how to show it.
-
-REPLY [7 votes]: Suppose $f$ is right continuous at $p$. This means (by my definition) that for each $\varepsilon > 0$, there exists some $\delta > 0$ such that for all $x$ with $p < x < p + \delta$ we have that $|f(x) - f(p)| < \varepsilon$.
-This implies, in set theory terms, that $f[[p,p+\delta)] \subseteq B_{\varepsilon}(f(p))$, or: for every open basic neighbourhood $U$ of $f(p)$ we have a basic open neighbourhood $V$ of $p$ (in the lower limit topology) such that $f[V] \subseteq U$. 
So $f: (\mathbb{R},\mathcal{T}_l) \rightarrow (\mathbb{R}, \mathcal{T}_e)$ from the lower limit topology to the Euclidean topology is continuous at $p$.<|endoftext|>
-TITLE: When can a function not be factored?
-QUESTION [5 upvotes]: Are there any general conditions under which a function involving $n$ unknowns cannot be factored into a product of $n$ terms each of which contains only one of the unknowns?
-For example, $xy$ can be factored into $x$ and $y$ but we cannot factor $1/(1+xy)$ into a product of terms, each of which only contains $x$ or $y$ but not both.
-Besides looking at these functions on a case by case basis, I am wondering if anyone knows any general sufficient conditions to determine whether the factorization can definitely $\textit{not}$ hold.
-Thanks!
-
-REPLY [3 votes]: Nice question. Turns out to have a pretty nice answer too.
-Theorem: Let $f:\mathbb R^2\to\mathbb R$. There exist $g,h:\mathbb R\to\mathbb R$ such that $f(x,y) = g(x)h(y)$ if and only if for all $x,x',y,y'\in\mathbb R$ we have
-$f(x,y)f(x',y')=f(x,y')f(x',y)$.
-proof: Suppose $g,h$ exist such that $f(x,y)=g(x)h(y)$. Then
-$$f(x,y)f(x',y')=g(x)h(y)g(x')h(y') = g(x)h(y')g(x')h(y) = f(x,y')f(x',y)$$
-Now suppose that $f$ satisfies $f(x,y)f(x',y')=f(x,y')f(x',y)$ for all $x,x',y,y'$. If $f$ is identically $0$ we can just define $g$ and $h$ to be zero too, so this case is trivial. If $f$ is not identically $0$ then find $a,b\in\mathbb R$ such that $f(a,b)\neq 0$. Then define $g(x) = f(x,b)$ and $h(y) = \frac{f(a,y)}{f(a,b)}$. Now we see that
-$$g(x)h(y) = f(x,b)\frac{f(a,y)}{f(a,b)} = f(x,y)\qquad\square$$
-This can be generalized to higher dimensions but the condition is a little trickier to write down. The condition becomes
-$$f(x_1,\ldots, x_n)f(x_1',\ldots,x_n')^{n-1} = \prod_{i=1}^n f(x_1',\ldots, x_{i-1}',x_i,x_{i+1}',\ldots,x_n')$$<|endoftext|>
-TITLE: Galois Theoretic Proof of Fundamental Theorem of Algebra
-QUESTION [11 upvotes]: The Galois Theory proof involves proving that $F(i)$ is algebraically closed for any real closed field $F$ (where $i$ is a square root of $-1$), where we may define a real closed field as an ordered field in which every odd degree polynomial has a root and every positive number has a square root.
-The last step in the proof seems a bit ad-hoc though; it involves showing that $F(i)$ has no quadratic extension, which in turn involves showing that every element of $F(i)$ has a square root, which involves solving $(a+bi)^2=c+di$ for $a, b \in F$.
-Is there a more Galois-theoretic approach to the last step that I'm not aware of? Or, for that matter, some other deeper reason why the computation works out?
-Edit Here's the proof that I have that $F(i)$ is closed under square roots, to illustrate how it's ad-hoc. I'd prefer a solution that doesn't involve explicitly solving for a square root - is there a more abstract reason why $F(i)$ should be closed under taking square roots?
-We can explicitly solve $(a+bi)^2=c+di$ for $a$ and $b$ in terms of $c$ and $d$. We have $(a+bi)^2 = a^2-b^2+2abi$, yielding the system of equations $$a^2-b^2=c$$ and $$2ab=d.$$ Since every element of $F$ already has a square root in $F(i)$, we may assume $d \neq 0$, and substitute $b = d/(2a)$ into the first equation, which gives $a^2-d^2/(4a^2) = c$, or $4a^4-4ca^2-d^2=0$. 
Observing that the discriminant of this quadratic is positive, we can then see that $$a^2 := \frac{4c + \sqrt{16c^2+16d^2}}{8} \in F$$ is positive, because $16c^2+16d^2>(4c)^2$ (and by $\sqrt{\bullet}$ I mean the positive square root), and therefore has two square roots in $F$. Letting $a$ be such a square root, we set $b = d/(2a)$ and call it a day.
-
-REPLY [4 votes]: Recall :
-Lemma 1 : $F$ has only one quadratic extension.
-proof: Since $F$ is an ordered field with square roots, $F^\times/(F^\times)^2$ has only one non-trivial element. QED
-
-Let $\Omega$ be an algebraically closed field containing $F(i)$ (in the following everything will be in $\Omega$).
-Assume that $F(i)$ has a quadratic extension $K=F(i,x)$ with $x \in \Omega$ such that $x^2 = a+ib \in F(i)$. The $F$-Galois conjugates of $x$ are among $x$, $-x$ and the square roots of $a-ib$. Denote $L=F(i,x,y)$ with $y \in \Omega$ such that $y^2=a-ib$.
-The degree of $L/K$ is either $1$ or $2$. We have the field extensions :
-$$ F \subset F(i) \subset K \subset L.$$
-1st Case : $L/K$ is of degree 2. Then $L/F$ is of degree 8 and $\mathrm{Gal}(L/F(i))$ is isomorphic to $V_4$ (because $L/F(i)$ is generated by $(x,y)$ and the Galois group is generated by $(x,y) \mapsto (-x,y)$ and $(x,y) \mapsto (x,-y)$).
-So $\mathrm{Gal}(L/F)$ is a group of order 8 which contains $V_4$ as a normal subgroup and which does not have two quotients isomorphic to $C_2$ (by Lemma 1).
-
-The groups $(C_2)^3$, $C_2 \times C_4$, $D_8$ and the quaternion group $Q_8$ have at least two quotients isomorphic to $C_2$.
-$C_8$ does not contain $V_4$.
-
-So $\mathrm{Gal}(L/F)$ does not exist.
-2nd Case : $L=K$. Then $L/F$ is Galois of order $4$. If $\mathrm{Gal}(L/F) = C_2 \times C_2$ then it contradicts Lemma 1. If $\mathrm{Gal}(L/F) = C_4$, then it contradicts the following Lemma 2 (since $-1$ is not the sum of two squares in $F$). So $\mathrm{Gal}(L/F)$ does not exist.
-Lemma 2 : If $F$ is any field and $d \in F$ is a non-square element, then the following are equivalent:
-(i) $F(\sqrt{d})$ is included in a Galois extension of $F$ cyclic of degree 4.
-(ii) $d$ is the sum of two squares in $F$.
-Proof : See 'Serre, Topics in Galois Theory, Thm 1.2.4'. I prove (i) $\Rightarrow$ (ii), which is needed.
-Let $L/F$ be cyclic of degree $4$ containing $F(\sqrt{d})$ and $\sigma$ a generator of $\mathrm{Gal}(L/F)$. Let $\alpha \in L$, such that $\alpha^2 \in F(\sqrt{d})$ and $L=F(\sqrt{d},\alpha)$. Write $\alpha^2 = a+b\sqrt{d} \in F(\sqrt{d})$ and let $\beta = \sigma(\alpha)$.
-So we have the field extensions:
-$$F \subset F(\sqrt{d}) \subset L$$
-Claim 1 : $\alpha\beta \in F(\sqrt{d})$. Since $L/F$ is cyclic, the automorphism $\sigma^2$ is the non trivial element of $\mathrm{Gal}(L/F(\sqrt{d}))=$ $\mathrm{Gal}\Big(F(\sqrt{d})(\alpha)/F(\sqrt{d})\Big)$, so $\sigma^2(\alpha) = -\alpha$ and $\sigma^2(\beta) = \sigma^3(\alpha)=-\beta$. So $\alpha\beta$ is fixed by $\mathrm{Gal}(L/F(\sqrt{d}))$.
-Claim 2 : $(\alpha\beta)^2 = a^2 - b^2d$. Since $\sigma_{|F(\sqrt{d})}$ is a generator of $\mathrm{Gal}(F(\sqrt{d})/F)$, one has $\sigma(\sqrt{d})=-\sqrt{d}$ so $\beta^2 = \sigma(\alpha^2) = \sigma(a+b\sqrt{d}) = a-b\sqrt{d}$
-Write $\alpha\beta = u + v\sqrt{d} \in F(\sqrt{d})$. Since $(u + v\sqrt{d})^2 =a^2 - b^2d$ is in $F$ and $u + v\sqrt{d} = \alpha \beta$ is not in $F$ (because it is not fixed by $\sigma$), we conclude that $u=0$.
-So $(\alpha\beta)^2 = (v\sqrt{d})^2 = a^2-b^2d$ and $(v^2+b^2)d = a^2$. 
So $d$ is the sum of two squares, because the quotient of sums of two squares is also a sum of two squares. More precisely, one has:
-$$d = \frac{a^2}{v^2+b^2} = \frac{a^2(v^2+b^2)}{(v^2+b^2)^2}= \left( \frac{av}{v^2+b^2} \right)^2 + \left( \frac{ab}{v^2+b^2} \right)^2.$$
-QED<|endoftext|>
-TITLE: Must eigenvalues be numbers?
-QUESTION [52 upvotes]: This is more a conceptual question than any other kind. As far as I know, one can define matrices over arbitrary fields, and so do linear algebra in different settings than in the typical freshman-year course.
-Now, how does the concept of eigenvalues translate when doing so? Of course, a matrix need not have any eigenvalues in a given field, that I know. But do the eigenvalues need to be numbers?
-There are examples of fields such as that of the rational functions. If we have a matrix over that field, can we have rational functions as eigenvalues?
-
-REPLY [7 votes]: I think it's worth providing a motivated example where we aren't dealing with real or complex numbers as our eigenvalues.
-The Schrödinger Equation is a PDE over the complex numbers that is used to talk about wave-forms in quantum mechanics. The Schrödinger Equation gives a description of a wave that permits many different steady-state solutions, each one corresponding to an eigenvalue/vector pair. The eigenvectors are the wave functions and the eigenvalues are the corresponding energy levels.<|endoftext|>
-TITLE: Small version of the GAP-software or online tool available?
-QUESTION [7 upvotes]: If I only want to calculate the number of groups of order $n$ for large $n$, especially cubefree $n$:
-
-Is there a small version available or an online calculator? Or do I simply have to download the whole GAP-software?
-
-The Magma-Online-Calculator does not support large cubefree orders. I found some useful applets, but they only support very small groups. I do not want to generate the large groups, I only want to determine the number of groups of a given order.
-
-REPLY [4 votes]: I do not have access to Magma, so I do not know how exactly the Small Groups Library in Magma differs from the one in GAP, and whether missing orders that you observe using the online calculator are limitations of the online calculator or are also missing in a proper Magma installation.
-As a test case, I've tried $n=3 \cdot 17^2 \cdot 25= 21675$ and the Magma online calculator returns Runtime error: The groups of order 21675 are not contained in the small groups database. In GAP, NrSmallGroups(n) unsurprisingly returns 9, matching the Small Groups Library page which says that it contains groups of cubefree order at most 50 000 (395 703 groups).
-There is no GAP online calculator, at least at the moment, but you may try to access GAP via SageMathCloud. I've just tried to open a terminal there and called gap. This started GAP 4.7.8 with some selection of packages (which is however smaller than in the GAP installation from sources downloaded from the GAP website and fully compiled, i.e. with packages that require compilation, so your mileage may vary. Of course you may install extra packages or even install your own full copy of GAP there, but then you would have to maintain it yourself). Also, SageMathCloud suggests to consider subscription since "projects running on free servers compete for resources with a large number of other free users". 
-Hope this helps to create a setup which works well for you, perhaps using a combination of working with a local GAP installation when you can, and with SageMathCloud when you are travelling with an internet connection.
-
-Update: the GAP package SCSCP implements a remote procedure call protocol, also called SCSCP. Using this protocol, SCSCP-compliant computer algebra systems (and other compliant software) can communicate with each other.
-NrSmallGroups has just been added to the list of procedures provided by the demo server, so now (important: the IO package must be compiled and the SCSCP package loaded) one could read into GAP the following auxiliary function
-NrSmallGroupsRemote := n -> EvaluateBySCSCP( "NrSmallGroups", [n],
- "chrystal.mcs.st-andrews.ac.uk",26133).object;
-
-and then use it as follows:
-gap> NrSmallGroupsRemote(24);
-15
-gap> List([1024,1536,21675],NrSmallGroupsRemote);
-[ 49487365422, 408641062, 9 ]
-
-If you ask for an order for which no information is available, you will get an error message, for example:
-gap> NrSmallGroupsRemote(2048);
-Error, chrystal.mcs.st-andrews.ac.uk.0.0.0.0:26133 reports :
- the library of groups of size 2048 is not available
-rec(
- cd := "scscp1",
- name := "error_system_specific" ) called from
-...
-
-For cube-free groups of order greater than 50000, the GAP SCSCP server now provides two functions from the CubeFree package: NumberCFGroups and NumberCFSolvableGroups. You can call them remotely as follows (after LoadPackage("scscp");)
-gap> EvaluateBySCSCP("NumberCFGroups",[50231],
-> "chrystal.mcs.st-andrews.ac.uk",26133);
-rec( attributes := [ [ "call_id", "chrystal.mcs.st-andrews.ac.uk.0.0.0.0:26133:12088:c7Ikdmtp" ] ],
-object := 1 )
-
-and
-gap> EvaluateBySCSCP("NumberCFSolvableGroups",[50234],
-> "chrystal.mcs.st-andrews.ac.uk",26133);
-rec( attributes := [ [ "call_id", "chrystal.mcs.st-andrews.ac.uk.0.0.0.0:26133:12088:n3sJKTbZ" ] ],
-object := 2 )
-
-The number of the groups is the object component of the returned record; the rest is technical information that may be ignored.
-So in theory one could have a minimalistic GAP installation with stripped-off data libraries and only 4 packages: GAPDoc, IO, OpenMath and SCSCP, which will do the job.
-However, SCSCP is a protocol implemented in several systems and languages - see the list of compatible tools here, so it's completely plausible that instead of GAP you may have e.g. a short Python script that will perform only a single purpose: contact the GAP SCSCP server, sending one integer (the order of a group) and getting back another integer (the number of groups of that order) or an error message.<|endoftext|>
-TITLE: Where does the wedge product arise in the definition of an integral?
-QUESTION [7 upvotes]: For a given function $f(x,y)$, its double integral is defined as:
-$$\iint_R f(x,y)\;\mathrm{d}x\;\mathrm{d}y=\lim_{n\to\infty}\; \sum_{i=1}^{n }f(x_{i},y_{i})\Delta A_{i}$$
-Where $\Delta A_{i}=\Delta x_{i}\Delta y_{i}$.
-Later I was told it's $\mathrm{d}x\ \wedge \mathrm{d}y$ not $\mathrm{d}x\;\mathrm{d}y$. $\wedge$ being the wedge product. 
-This implies, for instance, that in polar coordinates the differential with respect to which one integrates is given by:
-$\mathrm{d}x\ \wedge \mathrm{d}y=r\mathrm{d}r\ \wedge \mathrm{d} \theta\ $
-Whereas if it were the ordinary product $dxdy$ then we would have
-\begin{align}
-dxdy & = \left(dr \cos \theta - r \sin \theta\ d\theta \right) \left( dr \sin \theta + r \cos \theta\ d\theta\right)\\
-& = dr^2 \cos \theta \sin \theta - r^2 d\theta^2 \cos \theta \sin \theta + r\ dr\ d\theta\ (\cos^2 \theta\ - \sin^2\theta )
-\end{align}
-But nowhere in the above definition
-$$\iint_R f(x,y)\;\mathrm{d}x\;\mathrm{d}y=\lim_{n\to\infty}\; \sum_{i=1}^{n }f(x_{i},y_{i})\Delta A_{i}$$
-do I see the wedge product. It seems to me that $$\lim_{n\to\infty}\; \sum_{i=1}^{n }\Delta A_{i}=\iint \mathrm{d}A=\iint dxdy$$
-Then where does the wedge product arise in the definition?
-
-REPLY [9 votes]: Sorry for being incredibly late to the party, but I recently stumbled upon this question after falling into a similar rabbit hole with my research, and I had to piece together my understanding from several sources. After a lot of thinking, I feel as though I have a solid answer to this question, and even though it is late, hopefully it may aid those who may stumble upon this question in future as I did.
-There are three main motivations for favouring the wedge product $\text dx\wedge \text dy$ over $\text dx \text dy$. The first is that the latter is simply a special case of the former--only applicable in the case of orthogonal axes. The second reason is that the latter does not generalise orientation in a way that is consistent with $1$D integration. The third (which was covered by Thomas' answer) is that it makes the algebra for transformations self-consistent.
-$\textbf{Infinitesimal Area}$
-To expand on the first motivation, consider the question: "just what is $\text dx \text dy$"? Well, intuitively it should be a generalisation of the $1$D infinitesimal line element $\text dx$. Extending this, we expect $\text dx\text dy$ to be an infinitesimal area element. However, there are many ways in which we can extend a $1$D line to a $2$D area. We focus primarily on infinitesimal parallelograms, as parallelograms are obtained via linear transformations on squares and any transformation can be approximated locally by a linear transformation (this is essentially where the Jacobian comes into play for changing of integration variables, as it is a first order approximation to a coordinate transformation).
-In the special scenario where $\hat x$ and $\hat y$ are orthogonal directions, the parallelogram $\text dx \text dy$ corresponds to a square and its area is given exactly by the product $\text dx \text dy$. In contrast, if we have two directions $\hat u$ and $\hat v$ which are not orthogonal (but still linearly independent), then the area of the parallelogram would not be given by their product, but rather by the parallelogram area formula. Since $\text du$ and $\text dv$ are difference vectors, they lie in the plane, and the magnitude of their cross product $\text du\times \text dv$ gives the area of the parallelogram that we seek. It is quite clear from this analysis that the cross product (and in higher dimensions, the wedge product) should be the default operation, and the fact that the standard product works is only a lucky coincidence afforded to us by a convenient choice of coordinates.
-$\textbf{Orientation}$
-We know that integration is oriented, that is $\int_a^b f(x) \text dx = - \int_b^a f(x) \text dx$. 
However, this is not captured in $2$D by the standard area product $\text dx\text dy$. If we imagine tracing out the square $\text dx\text dy$, then we expect these four expressions to carry the same orientation (and thus be equivalent)
-\begin{align*}
-    \int_c^d\int_a^b f(x,y) \text dx\text dy\\
-    \int_d^c\int_b^a f(x,y) \text dx\text dy\\
-    \int_a^b\int_d^c f(x,y) \text dy\text dx\\
-    \int_b^a\int_c^d f(x,y) \text dy\text dx
-\end{align*}
-The first two correspond to tracing out the $\hat x$ direction first, and the latter two correspond to tracing out the $\hat y$ direction first. It seems I don't have the reputation to post an image yet, but here is the link to an album which shows all four orders of integration. The remaining four expressions that we can form by permutation of boundaries and integration order will represent the reverse orientation, and hence be the negatives of the above expressions. Focusing on one relation in particular, we expect
-\begin{equation}
-    \int_c^d\int_a^b f(x,y) \text dx\text dy = -\int_a^b\int_c^d f(x,y) \text dy\text dx
-\end{equation}
-In the case of the simple product $\text dx\text dy$, only half (by combinatoric exhaustion) of the above-mentioned relations are consistent. To fix this, we simply enforce anticommutativity by replacing $\text dx\text dy$ with $\text dx\wedge\text dy$, which automatically carries the rule $\text dx\wedge\text dy = -\text dy\wedge\text dx$.
-$\textbf{Conclusion}$
-The takeaway message from these two examples (as well as the transformation rule given in Thomas' answer) is that the naïve area element $\text dx\text dy$ just isn't well equipped to handle any of the subtler points that arise when formulating a good definition of multi-variable integration. $\text dx\text dy$ is really just an idealised case of $\text dx \times\text dy$, and the latter is what should've been in your integration formula from the start, as it is the more general formula for the area $\text{dA}$. Then, when generalising to more dimensions still, the cross product should be replaced with the wedge product.<|endoftext|>
-TITLE: Why can't an $\omega$-stable theory have finitely many countable models?
-QUESTION [8 upvotes]: This is a "well-known fact," but I'm at a loss to find a proof. I could swear I've read it somewhere, but checking the handful of places I'm used to checking doesn't help. Google gives nothing, so let's have a proof here!
-Theorem: Suppose $T$ is a (countable complete) $\omega$-stable theory. Then $T$ is either $\aleph_0$-categorical or $T$ has infinitely many countable models.
-Does anyone know a proof?
-
-REPLY [5 votes]: I won't give a proof, but I will give some historical background and references.
-Let $I(T,\kappa)$ be the number of isomorphism types of models of $T$ of cardinality $\kappa$. A countable complete theory $T$ with $1 < I(T,\aleph_0) < \aleph_0$ cannot be $\omega$-stable; this is due to Lachlan, who proved more generally that a countable superstable theory has either exactly one or infinitely many countable models.<|endoftext|>
-TITLE: Why do parametrizations to the normal of a sphere sometimes fail?
-QUESTION [7 upvotes]: If I take the upper hemisphere of the sphere $x^2 + y^2 + z^2 = 1$ to be the graph of $f(x,y)=\sqrt{1 - x^2 - y^2}$, then the normal is given by $\langle -f_x, - f_y, 1\rangle$ at any point.
-This leads to an odd result: on the plane $z = 0$, while one might expect all normals of the sphere to not have any $z$ component (i.e., to only point radially outwards), the $\vec{k}$ component is still 1.
-A similar computation for the spherical-coordinates parametrization gives the normal:
-$\langle \sin(\phi)^2 \cos(\theta), \sin(\phi)^2 \sin(\theta), \sin(\phi) \cos(\phi)\rangle$.
-
-At $\phi = 0$ and $\theta = 0$, which corresponds to the point $(0,0,1)$ in Cartesian coordinates, the normal is $\langle 0,0,0 \rangle$ while one would expect $(0,0,1)$.
-
-REPLY [3 votes]: $\newcommand{\Reals}{\mathbf{R}}\newcommand{\Vec}[1]{\mathbf{#1}}$Your intuition is probably based on unit normal vectors. The (outward) unit normal vector $\Vec{n}$ to the unit sphere at the point $(x, y, z)$ is $x\Vec{i} + y\Vec{j} + z\Vec{k}$, and indeed, the unit normals along the equator $\{z = 0\}$ have vanishing $z$ component while the unit normal at the north pole $(0, 0, 1)$ is $\Vec{k}$.
-What happens in your examples is this: Let $\Vec{X}$ be a smooth parametrization of a surface $S$ in $\Reals^{3}$, and let $\Vec{X}_{u}$ and $\Vec{X}_{v}$ denote the partial derivatives of $\Vec{X}$. If the partials are linearly independent for parameter values $(u, v)$, their cross product
-$$
-\Vec{X}_{u} \times \Vec{X}_{v}
-  = \|\Vec{X}_{u} \times \Vec{X}_{v}\|\, \underbrace{\frac{\Vec{X}_{u} \times \Vec{X}_{v}}{\|\Vec{X}_{u} \times \Vec{X}_{v}\|}}_{\Vec{n}}
-$$
-is normal to $S$ at $\Vec{X}(u, v)$, but has length $\|\Vec{X}_{u} \times \Vec{X}_{v}\|$.
-Write $f(u, v) = \sqrt{1 - u^{2} - v^{2}}$, so that $f_{u} = -u/f$ and $f_{v} = -v/f$. For the graph parametrization $\Vec{X}(u, v) = \bigl(u, v, f(u, v)\bigr)$, you have
-$$
-\Vec{X}_{u} \times \Vec{X}_{v} = -f_{u}\Vec{i} - f_{v} \Vec{j} + \Vec{k} = \frac{1}{f(u, v)} \underbrace{\bigl(u \Vec{i} + v \Vec{j} + f(u, v)\Vec{k}\bigr)}_{\Vec{n}};
-$$
-the $z$ component of the unit normal approaches $0$ along the equator, as expected, while the magnitude of the cross product itself grows without bound in such a way that the $z$ component is identically equal to $1$.
-For the spherical coordinates parametrization
-$$
-\Vec{X}(\theta, \phi) = a\cos\theta \sin\phi \Vec{i} + a\sin\theta \sin\phi \Vec{j} + a\cos\phi \Vec{k},
-$$
-the partial $\Vec{X}_{\theta} = a\sin\phi(-\sin\theta \Vec{i} + \cos\theta \Vec{j})$ vanishes at the poles $\phi = 0$ and $\phi = \pi$ (geometrically, longitudes all meet at the poles, and so latitude circles shrink to points), so the cross product vanishes at the poles, too:
-$$
-\Vec{X}_{\phi} \times \Vec{X}_{\theta} = a^{2} \sin\phi \underbrace{(\cos\theta \sin\phi \Vec{i} + \sin\theta \sin\phi \Vec{j} + \cos\phi \Vec{k})}_{\Vec{n}}.
-$$
-The unit normal, however, has the "expected" behavior even at the poles.<|endoftext|>
-TITLE: Euler characteristic of covering space of CW complex
-QUESTION [15 upvotes]: I am trying to prove the following statement: if $X$ is a finite CW complex and if $Y \to X$ is an $n$-sheeted covering then $Y$ is a finite CW complex and $\chi(Y)=n \cdot \chi(X)$.
-I know that a covering space of a CW complex is again a CW complex, but I am just not sure how to prove that for each $k$-cell of $X$ the covering space has $n$ $k$-cells. The $0$-cell case is easy. For the $1$-cells of the covering space, we can determine them by considering how the $1$-cells of the CW complex attach to the points. But for higher cells it is not very clear to me.
-
-REPLY [10 votes]: Given an $m$-dimensional CW-complex $X$, one can lift the CW-structure to a CW-structure on $Y$ by lifting the characteristic maps $\varphi_\alpha : D^k \to X$ to the cover $p : Y \to X$, which can be done since $\pi_1(D^k) \cong 0$.
-If the degree of $p : Y \to X$ is $n$, there are exactly $n$ lifts of $\varphi_\alpha$ to $Y$. So for each $k$-cell $e^k$ in $X$, there exist $n$ $k$-cells in the lifted CW-structure on $Y$ which are mapped homeomorphically onto $e^k$.
-
-Let $C_i$ be the number of $i$-cells in $Y$, and $C'_i$ be the number of $i$-cells in $X$. From the above analysis, we derive that $C_i = n \cdot C_i'$ for all $0 \leq i \leq m$. Using the fact that the Euler characteristic of a CW-complex $X$, namely the alternating sum of its Betti numbers, is the same as the alternating sum of its numbers of cells (the dimensions of the cellular cochain groups), we conclude
-$$\chi(Y) = \sum_{i = 0}^m (-1)^i C_i = \sum_{i = 0}^m (-1)^i n C'_i = n \chi(X)$$
-as desired $\blacksquare$
-
-I missed the essential question of OP up there. If $p : Y \to X$ is a covering map and $A \subset X$ is a subspace, then $p|_{p^{-1}(A)} : p^{-1}(A) \to A$ is also a covering map.
-In particular, take $A = e^k$ where $e^k$ is one of the cells in $X$. As the only covering spaces of disks are trivial ones (of the form $e^k \times D$ where $D$ is a discrete set), and $p$ is of degree $n$, $e^k$ must lift to $n$-many $k$-cells in $Y$.<|endoftext|>
-TITLE: the relation between cardinality, L1-norm and L2-norm of a vector
-QUESTION [8 upvotes]: For every $u\in \mathbb{R}^n$, $\textbf{Card}(u)=q$ implies ${\lVert u \rVert}_1 \leq \sqrt{q} {\lVert u \rVert}_2$
-where $\textbf{Card}(u)$ is the number of non-zero elements (so the L0-norm).
-Why does the condition ${\lVert u \rVert}_1 \leq \sqrt{q} {\lVert u \rVert}_2$ hold? Is there any place I can find a proof of this?
-
-REPLY [9 votes]: This is an example of the Cauchy-Schwarz inequality:
-\begin{align*}
-\|u\|_1 &=\sum_{i = 1}^n |u_i|\\
-&= \sum_{i = 1}^n |u_i\cdot 1| \\
-&\le \left(\sum_{i = 1}^n |u_i|^2\right)^{1/2} \left(\sum_{i = 1}^n 1 \right)^{1/2} \\
-&= \|u\|_2 \sqrt{n}
-\end{align*}
-To improve the result to $\sqrt{q}$, use a mix of $1$'s and $0$'s (namely, the indicator of the support of $u$) rather than the constant sequence $1$ in the second sum. I'll leave it to you to work out the details.<|endoftext|>
-TITLE: Why is this definite integral antisymmetric in $s\mapsto s^{-1}$?
-QUESTION [20 upvotes]: I recently happened into the following integral identity, valid for $s>0$:
-$$\int_0^1 \log\left[x^s+(1-x)^{s}\right]\frac{dx}{x}=-\frac{\pi^2}{12}\left(s-\frac{1}{s}\right).$$
-The obvious question is how to show this (feel free to do so!). But what stirs my curiosity is that the right-hand expression implies the integral to be antisymmetric under $s\mapsto s^{-1}$, which I would not have expected. Is there a simple explanation for this property?
-
-REPLY [6 votes]: One can do this as follows:
-$ I =\displaystyle \int _{ 0 }^{ 1 }{ \frac { \ln { ({ x }^{ s }+{ (1-x) }^{ s }) } }{ x } dx } $
-Write it as follows:
-$ \displaystyle I = \int _{ 0 }^{ 1 }{ \frac { s\ln { (1-x) } }{ x } dx } +\int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ (\frac { x }{ 1-x } ) }^{ s } } ) }{ x } dx } $
-$\Rightarrow I = J+K $
-where $ \displaystyle J = \int _{ 0 }^{ 1 }{ \frac { s\ln { (1-x) } }{ x } dx } $
-and $ \displaystyle K = \int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ (\frac { x }{ 1-x } ) }^{ s } } ) }{ x } dx } $
-For evaluating $J$ use the Taylor expansion of $\ln(1-x)$:
-$ \displaystyle J = -s\int _{ 0 }^{ 1 }{ \sum _{ r=1 }^{ \infty }{ \frac { { x }^{ r-1 } }{ r } } dx } $
-Interchanging summation and integral we have:
-$ \displaystyle J = (-s)\sum _{ r=1 }^{ \infty }{ \frac { 1 }{ r } \int _{ 0 }^{ 1 }{ { x }^{ r-1 }dx } } $
-$\displaystyle J = (-s)\sum _{ r=1 }^{ \infty }{ \frac { 1 }{ { r }^{ 2 } } } = -s\zeta(2)=\dfrac { -s{ \pi }^{ 2 } }{ 6 } $
-For evaluating $K$ we will substitute $ y = \dfrac{x}{1-x} $
-$ \displaystyle K = \int _{ 0 }^{ \infty }{ \frac { \ln { (1+{ y }^{ s }) } }{ y(1+y) } dy } $
-Split it into two parts:
-$ \displaystyle K =\int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ y }^{ s }) } }{ y(1+y) } dy } +\int _{ 1 }^{ \infty }{ \frac { \ln { (1+{ y }^{ s }) } }{ y(1+y) } dy } $
-In the second part put $ t=\dfrac{1}{y} $
-$ \displaystyle K = \int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ y }^{ s }) } }{ y(1+y) } dy } +\int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ t }^{ s }) } -s\ln { (t) } }{ (1+t) } dt } $
-$ \displaystyle \Rightarrow K = \int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ y }^{ s }) } }{ y } -\frac { s\ln { (y) } }{ 1+y } dy } $
-We can write $ K = L-M $
-where $ \displaystyle L=\int _{ 0 }^{ 1 }{ \frac { \ln { (1+{ y }^{ s }) } }{ y } dy } $
-$ \displaystyle M = \int _{ 0 }^{ 1 }{ \frac { s\ln { (y) } }{ 1+y } dy } $
-For evaluating $L$ we have to put $ {y}^{s}=t $ to get:
-$ \displaystyle L = \frac { 1 }{ s } \int _{ 0 }^{ 1 }{ \frac { \ln { (1+t) } }{ t } dt } $
-Using Taylor series and interchanging summation and integral we have:
-$ \displaystyle L = \frac { 1 }{ s } \sum _{ r=1 }^{ \infty }{ \frac { { (-1) }^{ r-1 } }{ r } \int _{ 0 }^{ 1 }{ { t }^{ r-1 }dt } } $
-$ \Rightarrow \displaystyle L = \frac { 1 }{ s } \sum _{ r=1 }^{ \infty }{ \frac { { (-1) }^{ r-1 } }{ { r }^{ 2 } } } $
-$ \displaystyle L = \frac { 1 }{ s } (1-{ 2 }^{ 1-2 })\zeta (2)=\frac { { \pi }^{ 2 } }{ 12s } $
-Now $ \displaystyle f(a) = \int _{ 0 }^{ 1 }{ { y }^{ a }dy } =\frac { 1 }{ a+1 } $
-Differentiating both sides with respect to $a$ we have:
-$ \Rightarrow \displaystyle \int _{ 0 }^{ 1 }{ { y }^{ a }\ln { (y) } dy } =\frac { -1 }{ { (a+1) }^{ 2 } } $
-For evaluating $M$ we will use the series of $ \dfrac{1}{1+x} $; interchanging summation and integral we have:
-$ \displaystyle M = s\sum _{ r=0 }^{ \infty }{ { (-1) }^{ r }\int _{ 0 }^{ 1 }{ { y }^{ r }\ln { (y) } dy } } $
-Using the property I have proved above we have:
-$ \displaystyle M = s\sum _{ r=0 }^{ \infty }{ \frac { { (-1) }^{ r+1 } }{ { (r+1) }^{ 2 } } } =s\sum _{ r=1 }^{ \infty }{ \frac { { (-1) }^{ r } }{ { r }^{ 2 } } } =\frac { -s{ \pi }^{ 2 } }{ 12 } $
-Finally we have $ K = L-M = \dfrac { { \pi }^{ 2 } }{ 12s } +\dfrac { s{ \pi }^{ 2 } }{ 12 } = \dfrac { { \pi }^{ 2 } }{ 12 } \left(s+\dfrac { 1 }{ s } \right) $
-Using all the results we have:
-$ \displaystyle I = J+K = \frac { { \pi }^{ 2 } }{ 12 } (s+\frac { 1 }{ s } )-\frac { s{ \pi }^{ 2 } }{ 6 } = \frac { { \pi }^{ 2 } }{ 12 } (\frac { 1 }{ s } -s) $
-Hence proved.
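-
-As a quick numerical sanity check of the identity (my own addition, not part of the derivation above; it assumes a Python environment with NumPy and SciPy), one can compare the integral against the closed form for a few values of $s$:
-# Numerically verify: integral_0^1 log(x^s + (1-x)^s)/x dx = -pi^2/12 * (s - 1/s)
-import numpy as np
-from scipy.integrate import quad
-
-def lhs(s):
-    # the integrand is integrable at x = 0 for every s > 0
-    integrand = lambda x: np.log(x**s + (1 - x)**s) / x
-    value, _ = quad(integrand, 0, 1)
-    return value
-
-def rhs(s):
-    return -np.pi**2 / 12 * (s - 1 / s)
-
-for s in [0.5, 2.0, 3.7]:
-    print(s, lhs(s), rhs(s))  # the two columns should agree to quadrature tolerance
-
-In particular, replacing $s$ by $1/s$ numerically flips the sign of the result, as the closed form predicts.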
-
-Comment: I can't comment quantitatively on why it is antisymmetric, but qualitatively it can be seen as follows: for $s>1$, $x^s$ …<|endoftext|>
-TITLE: Does $\infty$ mean $+\infty$ in "English mathematics"?
-QUESTION [10 upvotes]: The background for my question is the set $\mathbb R$.
-I often see the symbol $\infty$. Does it always mean $+\infty$ or can it have the meaning of $\pm \infty$? In particular, what does it mean that a real sequence $(a_n)$ has $\infty$ for limit? That it has $+\infty$ for limit? Or that the sequence $(\vert a_n \vert)$ has $+\infty$ for limit?
-I'm French and we usually don't use $\infty$ without a sign in the context of real numbers.
-
-REPLY [2 votes]: If you write $\lim\limits_{x\to\infty} f(x)$ and $x$ is understood to be real, then it means $\lim\limits_{x\to+\infty} f(x)$ unless there is some context saying it means something else. If you write $\lim\limits_{z\to\infty} f(z)$ where $f$ is a complex-valued function of a complex, rather than real, variable $z$, then it means neither $+\infty$ nor $\pm\infty$. Rather it essentially means the limit as the absolute value $|z|$ approaches $+\infty$. In other words $\text{“} \lim \limits_{z\to\infty} f(z) = c\text{”}$, where $c$ is some complex number, means $f(z)$ can be made as close as desired to $c$ by making $|z|$ big enough. Or more precisely:
-$$\forall \varepsilon>0\ \exists R>0\ \forall z\in\mathbb C\ \big[ \text{if } |z|>R \text{ then } |f(z)-c|<\varepsilon \big].$$
-Sometimes one writes $\lim\limits_{x\uparrow\pi/2} \tan x = \infty$ or $\lim\limits_{x\uparrow\pi/2} \tan x = +\infty$ and $\lim\limits_{x\downarrow\pi/2} \tan x = -\infty$, but I think it makes sense to write $\lim\limits_{x\to\pi/2} \tan x = \infty$ where $\text{“}\infty\text{''}$ means the infinity that is at both ends of the line and is approached by going in either direction. That makes the tangent function continuous everywhere on $\mathbb R$ (with values in the projectively extended real line). But I would accompany this with an explanation of the intended meaning.<|endoftext|>
-TITLE: $\mathbb C[X_1, \ldots, X_n]$ is a free module over $\mathbb C[X_1, \ldots, X_n]^G$
-QUESTION [6 upvotes]: Let $G$ be a finite subgroup of $GL_n( \mathbb C )$. Let $\mathbb C[X_1, \ldots, X_n]^G$ be the set of all $G$-invariant polynomials of $\mathbb C[X_1, \ldots, X_n]$. Is there any rule by which we can find out when $\mathbb C[X_1, \ldots, X_n]$ is a free module over $\mathbb C[X_1, \ldots, X_n]^G$?
-
-REPLY [6 votes]: An element of $G$ is called a $\textit{pseudoreflection}$ if it fixes a codimension one subspace of $V:= \mathbb{C}^n$ pointwise. Let us denote $\mathbb{C}[X_1, \ldots, X_n] = \mathbb{C}[V]$. Then as alluded to in the comments, we have the following:
-$\textbf{Theorem (Chevalley-Shephard-Todd)}$ The following are equivalent:
-
-$G$ is generated by pseudoreflections;
-$\mathbb{C}[V]^G$ is isomorphic to a polynomial ring;
-$\mathrm{Spec} (\mathbb{C}[V]^G)$ is nonsingular;
-$\mathbb{C}[V]$ is a free module over $\mathbb{C}[V]^G$.<|endoftext|>
-TITLE: How to show $\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}\cong \mathbb{Z}/n\mathbb{Z}$
-QUESTION [6 upvotes]: I'm trying to do exercise 1.9 from the following PDF: http://websites.math.leidenuniv.nl/algebra/Lenstra-Profinite.pdf
-RELATED: Elements in $\hat{\mathbb{Z}}$, the profinite completion of the integers
-I'm primarily interested in part (a) showing the titular isomorphic relation and part (c) describing the open/closed subgroups of $\hat{\mathbb{Z}}$.
-For part (a) I cannot get far by trying to come up with an isomorphism from $\mathbb{Z}/n\mathbb{Z}$ to $\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}$. A hint here would be appreciated. -For part (c) I know the fact that every closed subgroup will be profinite (i.e. compact, Hausdorff, and totally disconnected), but is this a complete description of the closed subgroups of $\hat{\mathbb{Z}}$? - -REPLY [4 votes]: $\newcommand{\ZZ}{\mathbb{Z}}$ -The elements of $\hat{\ZZ}$ have very explicit descriptions in terms of compatible sequences $(x_i)_{i\in\mathbb{N}}$ where each $x_i\in\ZZ/i\ZZ$. -There is a very natural map $\hat{\ZZ}\rightarrow\ZZ/n\ZZ$ given by "projection onto the $n$th coordinate". This is a homomorphism because the elements $(x_i)$ are compatible sequences. It is a good exercise to show that this is surjective, and that its kernel is $n\hat{\ZZ}$. -The open subgroups of a finitely generated profinite group are precisely the subgroups of finite index (this is a deep theorem of Nikolov and Segal). You also know that all open subgroups are closed. -The closed subgroups however need not be open and in general are far more numerous. Since $\hat{\ZZ}$ is abelian, every closed subgroup $H\le\hat{\ZZ}$ is normal and defines a quotient, which is finite iff $H$ is open. It's easy to see that the finite quotients of $\hat{\ZZ}$ are precisely the finite cyclic groups (you can prove this directly or use the universal property of profinite completions). However, there are many more non-open closed subgroups. For example, let $\pi$ be a set of prime numbers, and let $\hat{\ZZ}(\pi)$ be the set of compatible sequences $(x_i)$ where $i$ ranges over only natural numbers which are divisible only by the primes in $\pi$. Then projection onto such coordinates gives you a surjective map -$$\hat{\ZZ}\rightarrow\hat{\ZZ}(\pi)$$ -whose kernel is a closed but not open subgroup of infinite index. The group $\hat{\ZZ}(\pi)$ is the pro-$\pi$ completion of $\ZZ$. The kernel of the map is the product $\prod_{p\notin\pi}\ZZ_p$ of the Sylow-$p$ subgroups for $p\notin\pi$ (note that $\hat{\ZZ} = \prod_p\ZZ_p$, where $\ZZ_p$ is the additive group of the $p$-adic integers). -I believe that all closed subgroups of $\hat{\ZZ}$ can be obtained as intersections and joins of the closed subgroups described above, though there are details to be worked out.<|endoftext|> -TITLE: Let $X \subset \mathbb{R}^n$. Suppose that $0 \in X$ and $\|x-y\| = 1$ for $x,y \in X, x \neq y$. Then the maximum number of elements in $X$ is $n+1$. -QUESTION [5 upvotes]: Let $X \subset \mathbb{R}^n$. Suppose that $0 \in X$ and $\|x-y\| = 1$ for $x,y \in X, x \neq y$. Then the maximum number of elements in $X$ is $n+1$. -My attempt -By contradiction, let's suppose that $X$ has more than $n+1$ elements. -Then, take $x_0 = 0$, and $x_1,...,x_{n+1}\in X$. -I realized the following: -$\|x\| = 1$ for all $x\in X-\{0\}.$ -$\|x_i-x_j\|^2 = \langle\,x_i-x_j,x_i-x_j\rangle = 1$ $\Rightarrow$ -$\langle\, x_i,x_j\rangle = 1/2 $ if $i\neq j.$ -And I'm stuck here. -Can anyone help? - -REPLY [3 votes]: Let $X=\{x_0=0,x_1,\ldots,x_m\}$, with $\Vert x_i-x_j\Vert=1$ if $i\ne j$. It follows that $\Vert x_i\Vert=1$ for $i=1,2,\ldots,m$ and $\langle x_i,x_j\rangle=\dfrac12$ for distinct $i\ne j$ from $\{1,2,\ldots,m\}$. -We want to prove that $m\le n$. Clearly we may suppose that $m>1$, because if $m=1$ there is nothing to prove. -Now, let $v=\sum\limits_{i=1}^mx_i$. 
Clearly we have
-$$\eqalign{ \Vert v\Vert^2 &=m+\frac{m(m-1)}{2}=\frac{m(m+1)}{2},\tag{1}\cr
-\langle x_i,v\rangle&=\frac{m+1}{2},\qquad\hbox{for $i=1,2,\ldots,m$.} }$$
-For a given real $t$ (to be determined later), we consider $y_1,y_2,\ldots,y_m$ defined by $y_i=tx_i-v$.
-Using $(1)$ we have, for $i\ne j$ from $\{1,2,\ldots,m\}$:
-$$\eqalign{\Vert y_i\Vert^2&=t^2-2t\frac{m+1}{2}+\frac{m(m+1)}{2}
-\cr
-&=(t-\frac{m+1}{2})^2+\frac{m^2-1}{4}>0.\cr
-\langle y_i,y_j\rangle&=\frac{1}{2}t^2-2t\frac{m+1}{2}+\frac{m(m+1)}{2}\cr
-&=\frac{(t-m-1)^2-m-1}{2}
-}$$
-Now, choosing $t=m+1+\sqrt{m+1}$, we see that $\langle y_i,y_j\rangle=0$ for every $i,j$ from $\{1,2,\ldots,m\}$ with $i\ne j$. Therefore $(y_1,y_2,\ldots,y_m)$ is a system of orthogonal non-zero vectors in $\mathbb{R}^n$, so they are linearly independent and consequently $m\le n$, that is
-$\vert X\vert=m+1\le n+1$, which is the desired conclusion.<|endoftext|>
-TITLE: Suppose $dX_t = a(X_t) dt + b(X_t) dW_t$ and $Y_s=X_t$ where $s=t^2$. What SDE does $Y_s$ satisfy in the weak sense?
-QUESTION [5 upvotes]: Suppose $dX_t = a(X_t) dt + b(X_t) dW_t$ and $Y_s=X_t$ where $s=t^2$. What SDE does $Y_s$ satisfy in the weak sense? Hint: calculate $E[ dY | \mathcal{F}_s]$ where $dY = Y_{s-ds} - Y_s$.
-
-This is from an old course and can be found here as question 21.
-Does anyone have ideas on how to approach this?
-
-REPLY [2 votes]: Let $h(t) = \int_0^t h'(s) ds$ be an absolutely continuous function with $h'>0$. Then $\{V_t = W_{h(t)},t\ge 0 \}$ has the same distribution as $\int_0^t \sqrt{h'(s)} dW_s$.
-For $Y_s = X_{h(s)}$, write
-$$
-dY_s = Y_{s+ds} - Y_s = X_{h(s) + h'(s) ds} - X_{h(s)} = a(X_{h(s)})h'(s)ds + b(X_{h(s)}) dV_s\\ = a(Y_s)h'(s) ds + b(Y_s) dV_s.
-$$
-Hence, by the above remark, $Y$ is a weak solution to
-$$
-dY_s = a(Y_s)h'(s) ds + b(Y_s)\sqrt{h'(s)} dW_s.
-$$<|endoftext|>
-TITLE: Elimination of quantifiers
-QUESTION [5 upvotes]: What does it mean that a theory admits constructive elimination of quantifiers?
-A theory admits elimination of quantifiers when each formula of the theory is equivalent to a quantifier-free formula, right?
-But what is meant when we use the term "constructive"?
-
-REPLY [6 votes]: I would interpret this to mean that there is a computable procedure to find for any given formula $\varphi$ a quantifier-free formula $\varphi^*$ that is equivalent to it modulo the theory.
-If the theory $T$ is c.e. axiomatizable, then this is equivalent to ordinary quantifier-elimination, because if a theory $T$ admits elimination of quantifiers, then given any $\varphi$ we can search through all proofs from $T$ until we find a quantifier-free $\varphi^*$ that is $T$-provably equivalent, and then output the first such $\varphi^*$ that we find. So if the theory $T$ is c.e., then QE is the same as constructive QE.
-This fact is often important in proving the decidability of a theory. If a c.e. theory admits QE and the quantifier-free sentences have decidable truth values, then it is decidable, because for any sentence, we can search for a quantifier-free equivalent of it, by searching for proofs, and decide the truth of that (quantifier-free) formulation, thereby deciding the theory.
-But if the theory $T$ is not c.e., then it could be a stronger requirement that the QE is effective.
If you want, I'll try to make an example of a theory that has QE, but not constructive QE.<|endoftext|>
-TITLE: Understanding the maps in the long exact sequence of $\operatorname{Ext}$
-QUESTION [5 upvotes]: Suppose I have a short exact sequence in an abelian category (say abelian groups for simplicity)
-$$0 \to B \to X \to A \to 0.$$
-If I apply $\operatorname{Ext}^*(C, \bullet)$ to this sequence, I get a long exact sequence. Let's take $C = \mathbb{Z}/2\mathbb{Z}$ for simplicity:
-$$0 \to \operatorname{Hom}(\mathbb{Z}/2\mathbb{Z}, B) \to \operatorname{Hom}(\mathbb{Z}/2\mathbb{Z}, X) \to \operatorname{Hom}(\mathbb{Z}/2\mathbb{Z}, A) \to \operatorname{Ext}^1(\mathbb{Z}/2\mathbb{Z}, B)$$
-$$ \to \operatorname{Ext}^1(\mathbb{Z}/2\mathbb{Z}, X) \to \operatorname{Ext}^1(\mathbb{Z}/2\mathbb{Z}, A) \to 0.$$
-I can figure out that $\operatorname{Ext}^1(\mathbb{Z}/2\mathbb{Z}, A) \cong A/2A$ by looking at a projective resolution of $\mathbb{Z}/2\mathbb{Z}$ and applying $\operatorname{Hom}(\bullet, A)$ to it. So my exact sequence looks like
-$$0 \to \operatorname{Hom}(\mathbb{Z}/2\mathbb{Z}, B) \to \operatorname{Hom}(\mathbb{Z}/2\mathbb{Z}, X) \to \operatorname{Hom}(\mathbb{Z}/2\mathbb{Z}, A) \overset{f}{\to} B/2B$$
-$$ \overset{g}{\to} X/2X \overset{h}{\to} A/2A \to 0.$$
-My Question: How can I understand the maps $f$, $g$, and $h$? I can understand the analogous maps when I apply the contravariant functor $\operatorname{Ext}^*(\bullet, C)$, because I can find projective resolutions for $B$ and $A$, apply the Horseshoe Lemma and get a short exact sequence of projective resolutions, and just look at where everything goes. But in order to do this for the covariant $\operatorname{Ext}$ functor, I'd have to construct injective resolutions, which I don't know how to do concretely. So how can I figure out what the maps $f$, $g$, and $h$ should be?
-
-REPLY [6 votes]: As you said, you can compute $\operatorname{Ext}(\mathbb{Z}/2\mathbb{Z},\cdot)$ using a projective resolution of $\mathbb{Z}/2\mathbb{Z}$. But you can also use this projective resolution to understand what the maps $f,g,h$ are! You will find out that the maps $g$ and $h$ are the obvious ones, namely a map $X\rightarrow Y$ induces a map $X/2X\rightarrow Y/2Y$, and those are the ones that arise in the long sequence of $\operatorname{Ext}$.
-The map $f$ is a connecting homomorphism and as such it is not always very obvious.
As often it comes from the snake lemma, but here you can draw the entire diagram to get everything:
-$$\require{AMScd}
-\begin{CD}
-0@>>>\operatorname{Hom}(\mathbb{Z}/2\mathbb{Z},B)@>>>\operatorname{Hom}(\mathbb{Z}/2\mathbb{Z},X)@>>>\operatorname{Hom}(\mathbb{Z}/2\mathbb{Z},A)\\
-@.@VVV@VVV@VVV\\
-0@>>>\operatorname{Hom}(\mathbb{Z},B)@>>>\operatorname{Hom}(\mathbb{Z},X)@>>>\operatorname{Hom}(\mathbb{Z},A)@>>>0\\
-@.@V{\times 2}VV@V{\times 2}VV@V{\times 2}VV\\
-0@>>>\operatorname{Hom}(\mathbb{Z},B)@>>>\operatorname{Hom}(\mathbb{Z},X)@>>>\operatorname{Hom}(\mathbb{Z},A)@>>>0\\
-@.@VVV@VVV@VVV\\
-@.\operatorname{Ext}(\mathbb{Z}/2\mathbb{Z},B)@>>>\operatorname{Ext}(\mathbb{Z}/2\mathbb{Z},X)@>>>\operatorname{Ext}(\mathbb{Z}/2\mathbb{Z},A)@>>>0
-\end{CD}
-$$
-or using the identifications $\operatorname{Hom}(\mathbb{Z},X)=X$ and $\operatorname{Hom}(\mathbb{Z}/2\mathbb{Z},X)=X[2]$, this diagram can be written:
-$$\require{AMScd}
-\begin{CD}
-0@>>>B[2]@>>>X[2]@>>>A[2]\\
-@.@VVV@VVV@VVV\\
-0@>>>B@>>>X@>>>A@>>>0\\
-@.@V{\times 2}VV@V{\times 2}VV@V{\times 2}VV\\
-0@>>>B@>>>X@>>>A@>>>0\\
-@.@VVV@VVV@VVV\\
-@.B/2B@>>>X/2X@>>>A/2A@>>>0
-\end{CD}
-$$
-with the natural maps. The connecting homomorphism from the snake lemma gives a long exact sequence
-$$0\rightarrow B[2]\rightarrow X[2]\rightarrow A[2]\overset{f}\rightarrow B/2B\rightarrow X/2X\rightarrow A/2A\rightarrow 0$$
-Finally the map $f$ is the following: take an element $a$ of order 2 in $A$, lift it to an element $x$ of $X$, multiply $x$ by 2 (it will land in $B$), and take its class modulo $2B$.<|endoftext|>
-TITLE: Prove that a sequence is square summable
-QUESTION [6 upvotes]: Let $a_n$ be a sequence of real numbers such that $\sum_{n=1}^{\infty}a_nb_n < \infty$ whenever $\sum_{n=1}^{\infty}b_n^{2}< \infty.$ Prove that $\sum_n a_n^2 < \infty.$
-Can anyone provide a hint to prove this? I don't know where to start. I am thinking along the lines of Schwarz inequality.
-
-REPLY [3 votes]: EDIT: Here is an easier proof than the one using the closed graph theorem below.
-For $N \in \Bbb{N}$, define the bounded(!) functional $$\varphi_N : \ell^2 \to \Bbb{K}, (b_n)_n \mapsto \sum_{n=1}^N a_n b_n.$$
-By your assumption, you have $\varphi_N ((b_n)_n) \to \varphi((b_n)_n) = \sum_n a_n b_n$ for all $(b_n)_n \in \ell^2$.
-But (as a consequence of the uniform boundedness principle), a pointwise limit of bounded functionals on a Banach space is automatically bounded.
-As below, it is easy to see that boundedness of $\varphi$ implies $(a_n)_n \in \ell^2$.
-Additional EDIT: Boundedness of $\varphi$ implies $(a_n)_n \in \ell^2$ in at least two ways:
-
-By the Riesz representation theorem, there is $(a_n ')_n \in \ell^2$ with $\varphi((b_n)_n)=\sum a_n ' b_n$ for all $(b_n)_n \in \ell^2$. This easily yields $a_n = a_n '$ for all $n$.
-For each $N$, let $a^N$ be the sequence $(a_n)_n$, but with all but the first $N$ terms set to zero. Then $\|a^N \|_2^2 = \varphi (a^N) \leq \|\varphi\| \|a^N\|_2$. Hence, $\|a^N\|_2 \leq \|\varphi\|$ for all $N$.
-
-
-For the sake of completeness, I still include the original argument:
-Your assumptions imply (why exactly?) that the linear map
-$$
-\Phi :\ell^2 \to \ell^1, (b_n)_n \mapsto (a_n b_n)_n
-$$
-is well defined.
-It is straightforward to check that $\Phi$ has closed graph. Thus, it is bounded by the closed graph theorem.
-Thus, the map
-$$
-\ell^2 \to \Bbb{K}, (b_n)_n \mapsto\sum_n a_n b_n
-$$ is a bounded linear functional.
From this, the claim easily follows.<|endoftext|>
-TITLE: Quantifier: "For all sets"
-QUESTION [13 upvotes]: I've seen the following statement a few times:
-
-"Let $A$ be a set, then $\emptyset\subseteq A$".
-Or, written 'more formally':
- $$
-\forall A\,\, \emptyset\subseteq A
-$$
-
-My doubt is: I've always seen the quantifier $\forall x$ to mean "for all $x$ elements of some set $S$". However, when talking about all the sets, how do we define this quantification?
-
-REPLY [2 votes]: $\forall A$ means "for all $A$" or more completely "for all objects $A$ that the theory we are working in is talking about". If the theory is set theory (say, we started off by writing down the axioms of ZFC), then our $A$ is a set; when working in other theories it may instead mean that $A$ is a natural number or that $A$ is a real number. However, this doesn't happen by magic or even by convention, it happens by the very fact that we accept the appropriate axioms to hold for our thingies.
-So by the very fact that we work with $\forall A\,\forall B\,\bigl(\forall C\,(C\in A\leftrightarrow C\in B)\leftrightarrow A=B\bigr)$ and $\forall A\forall B\exists C\forall D\colon(D\in C\leftrightarrow (D=A\lor D=B))$ and so on as axioms, it happens that our thingies behave exactly as sets are supposed to behave.<|endoftext|>
-TITLE: A converse to Stokes' Theorem in $\mathbb{R}^n$
-QUESTION [5 upvotes]: In a lecture of advanced calculus, my teacher made a very interesting remark about the generalized Stokes' Theorem (actually, he left it as an exercise!), which, if I understood it right, is something like:
-
-Let $\Omega \subseteq \mathbb{R}^n$ be an open set and $\tau:\mathrm{F}_{n-1}(\Omega) \to \mathrm{F}_n(\Omega)$ a linear operator satisfying
- $$ \int_{\Theta}\tau\omega = \int_{\partial\Theta} \omega,\quad \forall \omega\in\mathrm{F}_{n-1}(\Omega),\forall\Theta\in\mathrm{C}_{n}(\Omega),$$
- where $\mathrm{C}_n(\Omega)$ is the set of singular $n$-chains in $\Omega$. Then $\tau = \mathrm{d}$, the exterior derivative operator.
-
-My attempt: From Stokes' Theorem, we have that $\int_{\Theta} (\tau-\mathrm{d})\omega = 0~ (\forall\omega,\forall \Theta)$, and we have to prove that this implies $(\tau-\mathrm{d})\omega = 0~ (\forall\omega)$.
-Suppose WLOG that $\Theta=\xi\in S_n(\Omega)$ (the set of singular $n$-simplices in $\Omega$). Thus
-$$ \int_{\xi}(\tau-\mathrm{d})\omega = \int_{\sigma=[v_0,\cdots,v_n]}\left((\tau-\mathrm{d})\omega\right)_\xi,$$
-but here there's a problem, as it's not clear whether I can commute the operator $(\tau-\mathrm{d})$ with the pullback or not. Is this the wrong way?
-
-REPLY [2 votes]: As $(\tau - d)\omega \in F^n(\Omega)$, write
-$$ (\tau - d)\omega = f(x) dx^1\cdots dx^n,$$
-where $f(x)$ is differentiable. If $(\tau - d)\omega \neq 0$, then there is $x\in \Omega$ so that $f(x) > 0$ (or $<0$; the argument is the same). Then $f(x) >0$ on an open set $\Omega' \subset \Omega$. Consider a singular chain $\Theta$ with image in this $\Omega'$ (for instance a positively oriented affine $n$-simplex); then you get
-$$\int_\Theta (\tau - d)\omega >0,$$
-which is a contradiction.
-(You just treat $(\tau - d)\omega$ as one whole thing and show that it is zero).<|endoftext|>
-TITLE: Can a subspace have a larger dual?
-QUESTION [8 upvotes]: I can't manage to figure this out. For instance, $L^{1}[0,1]$ has $L^{\infty}[0,1]$ as dual, and $C[0,1]$ (a subspace of $L^{1}[0,1]$) has the signed measures of bounded variation as dual. I can't manage to prove anything regarding cardinality relations between the measures and $L^{\infty}[0,1]$, though.
Intuitively it feels like we have more measures.
-Hints?
-
-REPLY [18 votes]: Let $X$ be a Banach space and let $Y \subset X$ be a subspace. When you compare the duals, two important situations appear:
-
-$Y$ is a closed subspace of $X$ (and is consequently equipped with the same norm).
-$Y$ is a dense, proper subspace of $X$, but is a Banach space with respect to a stronger norm.
-
-What happens?
-
-In this case, we get $Y^* \subset X^*$ in the following sense (here, $Y^*$ are the linear functionals on $Y$ which are continuous w.r.t. the norm in $X$):
-Let $y^* \in Y^*$ be given. Then, $y^*$ is a continuous map on a subspace of $X$, and we can extend it by Hahn-Banach to a functional in $X^*$ (with the same norm). If $Y$ is a proper subspace, then this extension is not unique. Thus, $X^*$ is larger than $Y^*$.
-In this case, we have $X^* \subset Y^*$ in the following sense (here, $Y^*$ are the linear functionals which are continuous w.r.t. the stronger norm of $Y$).
-Since $\|y\|_X \le C \, \|y\|_Y$, we get for $x^* \in X^*$:
-$$|x^*(y)| \le \|x^*\|_{X^*} \, \| y \|_X \le C \, \|x^*\|_{X^*} \, \|y\|_Y.$$
-Hence, $x^* \in Y^*$. Moreover, if we have two different functionals in $X^*$, their values on $Y$ differ (since $Y$ is dense in $X$). Thus, $Y^*$ is larger than $X^*$.
-
-Examples
-
-Let me give some examples for the first case.
-
-$\mathbb{R}^n$ can be treated as a subspace of $\mathbb{R}^m$ for $n \le m$ (identify $x \in \mathbb{R}^n$ with $(x_1, \ldots, x_n, 0,\ldots,0) \in \mathbb{R}^m$). Then, $\mathbb{R}^n \subset \mathbb{R}^m$ and we get the same inclusion for the dual spaces.
-$C([0,1]) \subset L^\infty(0,1)$. The dual of $C([0,1])$ consists of the regular, signed Borel measures. The dual of $L^\infty(0,1)$ consists of less regular (thus more) measures (namely finitely additive measures). This situation is also a little bit delicate: the Dirac measure $\delta_{1/2}$ lives in the dual of $C([0,1])$, but cannot be applied to arbitrary functions in $L^\infty(0,1)$. However, we can extend it by Hahn-Banach to a finitely additive measure in the dual of $L^\infty(0,1)$ which coincides with $\delta_{1/2}$ on the subspace $C([0,1])$.
-
-Examples for the second case:
-
-You already had a good example: $C([0,1])$ is a dense subspace of $L^1(0,1)$. Note that each function $f$ in $L^\infty(0,1)$ induces a measure via $\mu_f(A) = \int_A f \, \mathrm{d}x$.
-$H_0^1(0,1) \subset L^2(0,1)$ and the converse embedding holds for the duals.<|endoftext|>
-TITLE: Yoneda's Lemma in Vakil's notes
-QUESTION [8 upvotes]: Vakil's Notes
-In the exercise 1.3Y, what does it mean that 1.3.10.2 commutes with the maps 1.3.10.1? I can't see any relation between 1.3.10.1 and 1.3.10.2.
-
-REPLY [8 votes]: It means that given any two objects $B$ and $C$ and a map $f:B\to C$, the diagram
-$$\require{AMScd}
-\begin{CD}
-\operatorname{Mor}(C,A) @>{i_C}>> \operatorname{Mor}(C,A')\\
-@V{}VV @V{}VV \\
-\operatorname{Mor}(B,A) @>{i_B}>> \operatorname{Mor}(B,A')
-\end{CD}$$
-commutes, where the vertical maps are given by 1.3.10.1 via $f$. Note that in 1.3.10.2, you are supposed to have a map $i_C:\operatorname{Mor}(C,A)\to \operatorname{Mor}(C,A')$ for each object $C$; in this diagram we are using two of these maps (for $C$ and for $B$).<|endoftext|>
-TITLE: Must perpendicular (resp. orthogonal) lines meet?
-QUESTION [19 upvotes]: In geometry books in French, there has traditionally been a very clear distinction between 'orthogonal' lines and 'perpendicular' lines in space. Two lines are called perpendicular if they meet at a right angle.
Two lines are called orthogonal if they are parallel to lines that meet at a right angle. Thus orthogonal lines could be skew (i.e., they need not meet), whereas perpendicular lines always intersect. [Edit: Evidence shows that this distinction most likely arose around the turn of the twentieth century. See below for details.] -Looking around on Quora, Answers.com and here, I have found numerous assertions that, in English, there is no difference whatsoever between 'orthogonal' and 'perpendicular.' However, given the situation in French, I have a gut feeling that the same distinction must once have been observed in English as well, but since there is now a greater focus on vectors (for which the concepts coincide) than on lines, it has gradually been lost. I would like confirmation of this, if possible. -My question, then, is as follows: -How have the two concepts referred to above as 'orthogonal' and 'perpendicular' lines historically been denoted in English and other major languages? -The best answers will include references to authoritative sources. -Edit. Zyx has provided an answer referring to Rouché and Comberousse's geometry text from 1900, where the word perpendiculaire is used for what we have called orthogonal here. This strongly suggests that, contrary to what I had assumed, even French usage has not been unchanging over time. -So Zyx may be correct in questioning my premise, and I am beginning to suspect that even in France, the use of orthogonal in the sense discussed here may have been introduced in the twentieth century. Let me give an example taken from 1952 geometry text that illustrates this usage (Géométrie dans l'espace: Classes de Première C et Moderne, 1952, Dollon and Gilet): - -Deux droites sont orthogonales, si leurs angles sont droits. -Deux droites coplanaires formant quatre angles droits ont été appelées droites perpendiculaires [presumably in a lower-level book in the series]; on peut dire aussi qu'elles sont orthogonales. -Dans ce qui suit, nous réserverons en général l'expression droites orthogonales, pour deux droites non coplanaires et dont les angles sont droits. - -Nearly identical conventions are found in Géométrie: Classe de Seconde C, 1964, by Hémery and Lebossé, except that they allow "orthogonal" lines to meet (thus perpendicular implies orthogonal, but not conversely): - -Nous conviendrons d'appeler droites perpendiculaires deux droites à la fois concourantes et orthogonales. - -However, Hadamard's Leçons de géométrie élémentaire, 1901, uses the word perpendiculaire to include both cases: - -On dit que deux droites, situées ou non dans un même plan, sont perpendiculaires si leur angle, défini comme il vient d'être dit, est droit. - -And Géométrie Élémentaire, 1903, by Vacquant and Macé de Lépinay agrees with Hadamard. -My conclusion is that I was much too quick in my question to call the distinction "traditional." It is most likely to have appeared in France sometime in the early to mid-twentieth century. (To pinpoint the date better, it would be best to check what was done in textbooks in the 1925-1940 period, such as those of P. Chenevier and H. Commissaire, but I don't have access to these. Vectors evidently first appeared in French school curricula in 1905. However, the scalar product was not taught systematically until 1947, so that would seem a possible time for the expression "orthogonal lines" to have been introduced.) -The examples given by Zyx show that usage in English in fact mirrors the earlier French usage, i.e. 
"perpendicular" is used everywhere. And I presume that the terms "skew perpendicular" and "intersecting perpendicular" would only be used where an author felt the distinction was needed. (In many cases, it will be clear from context whether two lines meet.) -Edit. The "new" French terminology dates at least from the turn of the century. Here is an excerpt from Cours de Géométrie élémentaire: à l'usage des élèves de mathématiques élémentaires, de mathématiques spéciales; des candidats aux écoles du Gouvernement et des candidats à l'Agrégation (1899) by Niewenglowski and Gérard, which was intended for both high-school and university-level students. This book is in fact referred to by Lebesgue in his Leçons sur l'intégration. - -Considérons deux droites AB, CD, non situées dans un même plan; menons par un point quelconque O, des parallèles X'X et Y'Y à ces deux droites. [...] -Si les deux droites X'X et Y'Y sont perpendiculaires, nous dirons que les droites AB et CD sont orthogonales. Nous dirons aussi quelquefois qu'elles sont perpendiculaires, même si elles ne se rencontrent pas. - -Thus these authors, unlike Hadamard, Rouché and Vacquant, appear to have a preference for droites orthogonales when the lines are not coplanar. However, this was not a hard-and-fast rule, and they allow that perpendiculaires can also "sometimes" be used in this case. The distinction only seems to have become settled later on. - -REPLY [7 votes]: From "Earliest Known Uses of Some of the Words of Mathematics (O)" ... - -ORTHOGONAL is found in English in 1571 in A geometrical practise named Pantometria by Thomas Digges (1546?-1595): "Of straight lined angles there are three kindes, the Orthogonall, the Obtuse and the Acute Angle." (In Billingsley's 1570 translation of Euclid, an orthogon (spelled in Latin orthogonium or orthogonion) is a right triangle.) (OED2). - -also - -ORTHOGONAL VECTORS. The term perpendicular was used in the Gibbsian version of vector analysis. Thus E. B. Wilson, Vector Analysis (1901, p. 56) writes "the condition for the perpendicularity of two vectors neither of which vanishes is A·B = 0." When the analogy with functions was recognised the term "orthogonal" was adopted. It appears, e.g., in Courant and Hilbert's Methoden der Mathematischen Physik (1924). - -There are also notes on orthogonal matrix and orthogonal function, and orthocenter (the last of which includes an anecdote about the coining of the term in 1865). -The site's "P" page has less to say about the other term: - -PERPENDICULAR was used in English by Chaucer about 1391 in A Treatise on the Astrolabe. The term is used as a geometry term in 1570 in Sir Henry Billingsley's translation of Euclid's Elements. - -Note that Billingsley also appears in the "orthogonal" entry above. A deeper dive into his translation of Elements may be in order, to see if he explains his own thinking about the distinction between "perpendicular" and "ortho[gonal]". - -Anecdotally, I (an American) was formally introduced to "orthogonal" in the context of vectors in Pre-Calculus. (The term may have been mentioned in passing when we learned about orthocenters in Geometry.) So, the term to me has always connoted a directional relationship independent of position. I've also seen the term "perpendicularly skew" for lines in space. Be that as it may ... 
I don't appear to be alone in using "orthogonal" and "perpendicular" interchangeably ---"perpendicular" just seems friendlier to use with students--- but in formal circumstances, I would probably be inclined to follow the French convention of sharp distinction. (That said, I'd feel obliged to explicitly acknowledge the convention, to avoid confusing my audience.)<|endoftext|>
-TITLE: For $f \in L^1(\mathbb{R})$ and $y > 0$, we have ${1\over{\sqrt{y}}} \int_\mathbb{R} f(x - t)e^{{-\pi t^2}\over{y}}dt \in L^1(\mathbb{R})$?
-QUESTION [6 upvotes]: For $f \in L^1(\mathbb{R})$ and $y > 0$, let$$f_y(x) := {1\over{\sqrt{y}}} \int_\mathbb{R} f(x - t)e^{{-\pi t^2}\over{y}}dt.$$
-
-Do we have $f_y \in L^1(\mathbb{R})$ for every $ y > 0$?
-Do we have $\lim_{y \to 0} \int_\mathbb{R} |f(x) - f_y(x)|\,dx = 0$?
-Does there exist $C > 0$ such that, for every $f \in L^1(\mathbb{R})$ and every $\lambda > 0$,$$\left|\left\{x \in \mathbb{R} : \sup_{y > 0} \left|f_y(x)\right| > \lambda\right\}\right| \le {C\over\lambda} \int_\mathbb{R} \left|f(t)\right|\,dt?$$
-If $f \in L^1(\mathbb{R})$, then do we have that for a.e. $x \in \mathbb{R}$, $\lim_{y \to 0} f_y(x) = f(x)$?
-
-REPLY [3 votes]: This is an attempt to answer all 4 questions by mostly elementary means.
-Write $T_y$ for the linear operator that maps $f\in L^1$ to $f_y.$
-For the first question we have
-$$\eqalignno{
-\|T_y f\|_1&=\frac1{\sqrt y}\int_x\left|\int_t f(x-t)\exp(-\frac{\pi t^2}y)dt\right|dx\\
-&\leq\frac1{\sqrt y}\int_x\int_t\left|f(x-t)\right|\exp(-\frac{\pi t^2}y)dtdx\\
-&=\frac1{\sqrt y}\int_t\int_x|f(x-t)|dx\exp(-\frac{\pi t^2}y)dt&(*)\\
-&=\|f\|_1\left(\frac1{\sqrt y}\int_t\exp(-\frac{\pi t^2}y)dt\right)\\
-&=\|f\|_1.
-}$$
-where in (*) we use the Fubini-Tonelli theorem.
-The second question is the assertion that $\lim_{y\to0}T_yf=f$ in $L^1.$ This is easy to verify for $f$ the indicator function of an interval, and from there to finite linear combinations (block functions). But block functions are dense in $L^1,$ so if $f_b$ is a block function close to $f$ then
-$$\eqalign{
-\|T_yf-f\|_1&\leq\|T_y f-T_y f_b+T_y f_b-f_b+f_b-f\|_1\\
-&\leq\|T_y(f-f_b)\|_1+\|T_yf_b-f_b\|_1+\|f_b-f\|_1\\
-&\leq2\|f-f_b\|_1+\|T_yf_b-f_b\|_1
-}$$
-in view of the inequality in the answer to question 1.
-Question 4 is about convergence almost everywhere. Note that convergence in $L^1$ (question 2) only gives almost everywhere convergence along a subsequence $y_n\to0$, so it does not settle question 4 by itself. The full statement follows from the maximal inequality of question 3 below by the standard argument: for continuous, compactly supported $g$ one has $T_yg\to g$ everywhere as $y\to0$, such $g$ are dense in $L^1$, and applying the maximal inequality to $f-g$ shows that the set where $\limsup_{y\to0}|T_yf(x)-f(x)|>\varepsilon$ has arbitrarily small measure, hence measure zero.
-For question 3 note that the function
-$$\mathbb R^+\to\mathbb R:y\mapsto\frac1{\sqrt y}\exp(-\frac{\pi t^2}y)$$
-reaches its maximum when $y=2\pi t^2,$ so we have
-$$\sup_y\frac1{\sqrt y}\exp(-\frac{\pi t^2}y)=\frac{e^{-1/2}}{\sqrt{2\pi}\,|t|}.$$
-This majorant is not integrable in $t$, so there is no hope of bounding $\int_x\sup_y|T_yf(x)|dx$ by a multiple of $\|f\|_1$: in general $\sup_y|T_yf|$ fails to be integrable, and only a weak-type estimate holds. To obtain it, write the kernel as a dilation of a fixed profile: with $\delta=\sqrt y$ and $\psi(t)=\exp(-\pi t^2)$ we have
-$$\frac1{\sqrt y}\exp(-\frac{\pi t^2}y)=\frac1\delta\,\psi\Big(\frac t\delta\Big),$$
-and $\psi$ is nonnegative, radially decreasing and integrable with $\|\psi\|_1=1.$ For such kernels one has the pointwise domination
-$$\sup_{y>0}|T_yf(x)|\leq C_0\,\|\psi\|_1\,Mf(x)$$
-for an absolute constant $C_0,$ where $M$ denotes the Hardy-Littlewood maximal operator (see e.g. Stein, Singular Integrals and Differentiability Properties of Functions, Chapter III). Since $M$ is of weak type $(1,1),$ there is an absolute constant $C$ such that for any $\lambda>0$ the measure of
-$$\left\{x\in\mathbb{R}:\sup_{y>0}\left|f_y(x)\right|>\lambda\right\}$$
-is bounded by
-$$\frac{C}\lambda\|f\|_1.$$<|endoftext|>
-TITLE: To prove that the norm of a tower of field extensions is the composition of norms
-QUESTION [6 upvotes]: Let $K$, $F$ and $L$ be fields, with $L$ a finite extension of $F$ and $F$ a finite extension of $K$.
Then we have the norm equality $$N_{L/K}=N_{F/K}\circ N_{L/F}$$ The common proof is by using rational canonical forms and elementary matrices. I was wondering if there is a way only by using the tools of field extensions. Thank you.
-
-REPLY [2 votes]: We prove the theorem for separable field extensions $F$ over $K$ and $L$ over $F$. By the theorem of the primitive element in field theory there exists a monic irreducible polynomial $f(x)=x^m+a_{m-1}x^{m-1}+\dots+a_0$ such that
-$$F=K[x]:=K[X]/\langle f(X)\rangle.$$
-Similarly there exists a monic irreducible polynomial $g(y)=y^n+b_{n-1}y^{n-1}+\dots+b_0$ with coefficients $b_k\in F$ such that
-$$L=K[x,y]=F[y]:=F[Y]/\langle g(Y)\rangle.$$
-The field $L$ has dimension $m\cdot n$ over the field $K$. By the theorem of the primitive element there exists a monic irreducible polynomial $h(z)=z^{mn}+c_{mn-1}z^{mn-1}+\dots+c_0$ with coefficients $c_k\in K$ such that
-$$L=K[z]:=K[Z]/\langle h(Z)\rangle.$$
-Every zero $z_i$ of the polynomial $h(z)$ produces an isomorphic field
-$$L\cong L_i=K[z_i]:=K[Z_i]/\langle h(Z_i)\rangle$$
-and therefore there exists an isomorphism $\hat\sigma_i$ such that $L_i=\hat\sigma_i(L)$ with $z_i=\hat\sigma_i(z)$. But because the field $L$ is also generated by adjoining the zero $x$ over $K$ and the zero $y$ over $F$
-$$L=K[x,y]$$
-this homomorphism $\hat\sigma_i$ must send the zero $x$ to a zero $x_i:=\hat\sigma_i(x)$ of the polynomial $f(x)$ and the zero $y$ to a zero $y_{ik}:=\hat\sigma_i(y)$ of the polynomial
-$$g_i(y):=\hat\sigma_i(g(y))=y^n+\hat\sigma_i(b_{n-1})y^{n-1}+\dots+\hat\sigma_i(b_0).$$
-For every $i$ there are $n$ zeros $y_{ik}$ because the polynomial $g_i(y)$ remains irreducible over the field $K[x_i]$. We define the homomorphism $\sigma_{ik}$ over the field $L$ to be the homomorphism that sends the zero $x$ to the zero $x_i$ and the zero $y$ to the zero $y_{ik}$ that was adjoined over the field $K[x_i]$:
-$$L_{ik}:=K[x_i,y_{ik}]=\sigma_{ik}(K[x,y]).$$
-In adjoining the zero $y_{ik}$ to the field $K[x_i]$ there exist $n$ homomorphisms $\tau_{ik}$ such that
-$$L_{ik}:=K[x_i,y_{ik}]=\tau_{ik}(K[x_i,y_{i0}])$$
-and the field $K[x_i]$ remains invariant. Now we have developed the means to prove the formula in the question. The norm of an element $\zeta\in L$ is defined by
-$$\tag{1}N_{L/K}(\zeta):=\prod_{0\le i\lt mn}\hat\sigma_{i}(\zeta)=\prod_{0\le i\lt m}\prod_{0\le k\lt n}\sigma_{ik}(\zeta).$$
-We reorder the zeros such that $x_0=x$. Let the homomorphism $\sigma_{0,k}$ be the identity on the zero $x$. Each factor in the product $\prod_{0\le k\lt n}\sigma_{0k}(\zeta)$ contains exactly one of the zeros $y_{0,k}$ of the polynomial $g(y)$ that can be adjoined above the field $F$. But then this product must be a symmetric polynomial in the zeros $y_{0,k}$ and we can write the product as
-$$\tag{2}\prod_{0\le k\lt n}\sigma_{0k}(\zeta)=d_mx^m+d_{m-1}x^{m-1}+\dots+d_0$$
-with coefficients $d_j\in K[x,S_{00},\dots,S_{0n}]$ with the symmetric polynomials
-$$S_{00}:=1, S_{01}:=y_{01}+\dots+y_{0n}, \dots, S_{0n}:=y_{01}\cdots y_{0n}$$
-and $y_{0k}:=\tau_{0k}(y)$. But these symmetric polynomials are known to be $S_{0k}=(-1)^kb_{n-k}\in F=K[x]$, where the $b_k$ are the coefficients of the polynomial $g(y)$ above. Since the divisors $\alpha_i$ are only defined up to units, this completely determines them. Therefore the product $(2)$ gives an element in the field $F=K[x]$.
-
-In taking the product $\prod_{0\le k\lt n}\sigma_{ik}(\zeta)$ for a fixed $i$ we get a similar result to equation $(2)$ with
-$$\tag{3}\prod_{0\le k\lt n}\sigma_{ik}(\zeta)=\sigma_{ik}(d_m)x_i^m+\sigma_{ik}(d_{m-1})x_i^{m-1}+\dots+\sigma_{ik}(d_0)$$
-with coefficients $\sigma_{ik}(d_j)\in K[x_i,\sigma_{ik}(S_{00}),\dots,\sigma_{ik}(S_{0n})]$. But those are just the coefficients of the polynomial $g_i$, or $S_{ik}=\sigma_{ik}(S_{0k})=(-1)^k\sigma_{ik}(b_{n-k})\in K[x_i]$.
-The homomorphism $\sigma_{ik}$ of the field $L$ over $K$ can be restricted to a homomorphism $\kappa_i$ onto the field $K[x_i]$ over the field $K$. But these homomorphisms $\kappa_i$ just map the zero $x$ to the zero $x_i$ and therefore they are the field homomorphisms of the field $F=K[x]$ over the field $K$. In the formulas $(1)$ and $(3)$ we have seen that the inner product over $k$ gives an element in the field $K[x_i]$. Because the formulas $(2)$ and $(3)$ give
-$$\prod_{0\le k\lt n}\sigma_{ik}(\zeta)=\kappa_i\left(\prod_{0\le k\lt n}\sigma_{0k}(\zeta)\right)$$
-we can rewrite the formula $(1)$ as
-$$N_{L/K}(\zeta)=\prod_{0\le i\lt m}\kappa_i\left(\prod_{0\le k\lt n}\sigma_{0k}(\zeta)\right).$$
-But the homomorphisms $\sigma_{0k}$ are the homomorphisms $\tau_k:=\tau_{0,k}=\sigma_{0k}$ of the field $L$ over the field $F$. This gives directly
-$$N_{L/K}(\zeta)=N_{F/K}\circ N_{L/F}(\zeta)=\prod_{0\le i\lt m}\kappa_i\left(\prod_{0\le k\lt n}\tau_{k}(\zeta)\right).$$<|endoftext|>
-TITLE: Symmetric power of tautological representation of $U(n)$
-QUESTION [7 upvotes]: Let $S^kV$ be the $k$-th symmetric power of the tautological representation of $U(n)$; how does one see that it is irreducible? I'm trying to do it using weights, but without success.
-
-REPLY [2 votes]: A direct argument is possible, but it's a bit messy. It's nicer to work at the Lie algebra level because there it's easier to write down maps that move between weight spaces. An even nicer argument uses Schur-Weyl duality, which also implies that the exterior powers $\wedge^k V$ are irreducible and a bunch of other nice things.<|endoftext|>
-TITLE: Every topological space can be realized as the quotient of some Hausdorff space.
-QUESTION [8 upvotes]: Prove that every topological space can be realized as the quotient of some Hausdorff space.
-
-I tried to show this by using the intersection of two open sets in $x$ (for $f:z\to x$).
-
-REPLY [3 votes]: Let me elaborate on the comment of mine that Slade mentioned above. Using convergence of ultrafilters, there is a very conceptually simple way to do this. First, let us suppose $X$ is a $T_1$ space. Then the topology on $X$ is completely determined by the convergence of nonprincipal ultrafilters on $X$: for each nonprincipal ultrafilter $F$ on $X$, there is some subset $L(F)\subseteq X$ of points that $F$ converges to, and the topology on $X$ is the finest topology such that $F$ converges to $x$ for each $F$ and each $x\in L(F)$. Given a nonprincipal ultrafilter $F$ on $X$ and a point $x\in X$, we can let $X_{F,x}$ be $X$ equipped with the finest topology for which $F$ converges to $x$. Explicitly, every point of $X_{F,x}$ other than $x$ is isolated, and the neighborhoods of $x$ are the sets in $F$ (together with $x$ itself). Since $F$ is nonprincipal, this topology is Hausdorff. We can form the disjoint union $Y=\bigsqcup X_{F,x}$ where $(F,x)$ ranges over all pairs of a nonprincipal ultrafilter $F$ and a point $x\in L(F)$. There is then a canonical continuous map $p:Y\to X$ which is the identity map on each $X_{F,x}$.
Since $p$ is continuous with respect to a topology on $X$ iff $F$ converges to $x$ for each $(F,x)$, we conclude that the given topology on $X$ is the finest topology that makes $p$ continuous. That is, $p$ is a quotient map.
-What can we do if $X$ is not $T_1$? Well, we just have to detect the convergence of principal ultrafilters as well. For instance, for every pair $(x,y)\in X\times X$ such that the principal ultrafilter at $x$ converges to $y$, we could take the space $\mathbb{N}\cup\{\infty\}$ (with the topology of the one-point compactification) and map each element of $\mathbb{N}$ to $x$ and $\infty$ to $y$. This map is continuous with respect to a topology on $X$ iff the principal ultrafilter at $x$ converges to $y$. So we can let $Z$ be the disjoint union of the $Y$ constructed above and a copy of $\mathbb{N}\cup\{\infty\}$ for each such $(x,y)$, and define a map $q:Z\to X$ as the joining of $p$ and the maps defined above. Then by a similar argument as in the $T_1$ case, $q$ is a quotient map.
-Let me further remark that the space $Y$ used here is not just Hausdorff but normal. It suffices to check that each $X_{F,x}$ is normal. But this is trivial: given two disjoint closed sets, one of them does not contain $x$, and so is clopen.
-(Note that the use of ultrafilters (and thus of the axiom of choice) can be eliminated by using all convergent filters containing the cofinite filter, rather than just nonprincipal ultrafilters. In fact, it suffices to just consider for each point $x\in X$ the pair $(F,x)$, where $F$ is the filter generated by the neighborhoods of $x$ and the cofinite sets.)<|endoftext|>
-TITLE: How to show that the function $x^\alpha \sin\left(\frac{1}{x}\right)$ ($\alpha > 1$) is of bounded variation on $(0,1]$?
-QUESTION [6 upvotes]: I am given the function
-$$f(x) = \begin{cases} x^\alpha \sin\left(\frac{1}{x}\right) &\text{on }(0,1], \\ 0 & \text{if }x = 0.
-\end{cases}$$
- How do I show that this function is of bounded variation on $[0,1]$ if $\alpha >1$?
-The variation is given by
-$$Vf=\sup\left\{\sum_{n=1}^N|f(x_n)-f(x_{n-1})| : 0=x_0<x_1<\cdots<x_N=1\right\}.$$<|endoftext|>
-TITLE: How many groups of order $n$ with center {e} exist?
-QUESTION [6 upvotes]: For which numbers $n$ does there exist a group of order $n$ with center $\{e\}$?
- And how many such groups are there for a given order?
-
-The first such numbers are $6,10,12,14,18,20,21,...$ For orders $32,40,64$,
-for example, no group has center $\{e\}$. For $n=18$, we have $2$ groups with center $\{e\}$.
-Is there an easy criterion or formula?
-
-REPLY [5 votes]: I am pretty sure there is no general criterion or a formula known, especially for finding the number of groups with trivial center.
-But there are plenty of examples of families of such numbers $n$. For example $n$ the order of any nonabelian simple group, $n = pq$ with $p > q$ primes such that $q \mid p-1$ (so that a nonabelian group of order $pq$ exists, and it has trivial center), etc..
-Furthermore, if $G$ and $H$ have trivial center, then so does $G \times H$. Thus the set of such numbers $n$ is closed under multiplication.
-Also note that if a group is nilpotent, then it has nontrivial center. There is a simple description of integers $n$ (in terms of prime factorization) such that every group of order $n$ is nilpotent. Hence if there is a group of order $n$ with trivial center, then $n$ cannot be of this form.
-However, there do exist numbers $n$ such that every group of order $n$ has nontrivial center, but not all groups of order $n$ are nilpotent.
Some examples are $n = 28, 40, 44, 63, 76, 88, 92, \cdots$ (OEIS) Describing numbers like this does not seem feasible to me.<|endoftext|>
-TITLE: Conditions of the Taylor Theorem
-QUESTION [6 upvotes]: I'm confused about the assumptions behind the Taylor Theorem because I found different versions of them across several books.
-Consider the function $f:\mathbb{R}\rightarrow \mathbb{R}$.
-(1) If and only if $f$ is infinitely many times differentiable at $a$ I can write $$f(x)=f(a)+\sum_{k=1}^{\infty}\frac{f^{(k)}(a)(x-a)^k}{k!}$$
- Correct?
-(2) If and only if $f$ is $n$ times continuously differentiable at $a$ (which implies that $f$ is $n$ times differentiable in a neighbourhood of $a$) I can write
-$$
-f(x)=f(a)+\sum_{k=1}^{n}\frac{f^{(k)}(a)(x-a)^k}{k!}+ o(||x-a||^n)
-$$
-Correct?
-(3) If and only if $f$ is $n$ times continuously differentiable at each point between $x$ and $a$ I can write
-$$
-f(x)=f(a)+\sum_{k=1}^{n}\frac{f^{(k)}(a)(x-a)^k}{k!}+ \frac{f^{(n+1)}(c)(x-a)^{n+1}}{(n+1)!}
-$$
-for $c$ between $x$ and $a$. Correct?
-My confusion is related in particular to the necessity of the conditions.
-
-REPLY [5 votes]: I suspect you need to think more about how much $C^\infty$ functions can be wilder than analytic functions.
-(1) False. A $C^\infty$ function need not be faithfully represented by its power series at a point. As @Chappers observes in a comment, $\mathrm{e}^{-1/x^2}$ is not faithfully represented by its power series at $0$. For a little more discussion on this example, see Why doesn't the identity theorem for holomorphic functions work for real-differentiable functions?
-(2) False as an equivalence. The "if" direction is true, and needs much less: the mere existence of $f^{(n)}(a)$ already gives the Peano form of the remainder. The "only if" direction fails: let $f(x) = x^{n+1}\sin(x^{-n-1})$ for $x \neq 0$ and $f(0) = 0$. Since $|f(x)| \leq |x|^{n+1}$, the expansion at $a = 0$ holds with all Taylor coefficients equal to $0$ and error $o(|x-a|^n)$, yet $f'$ is unbounded near $0$, so $f$ is not even once continuously differentiable at $0$, let alone $n$ times.
-(3) Probably false. This is the Lagrange form of the remainder term. This is usually stated with an additional hypothesis. The hypotheses are "$f$ is $n+1$ times differentiable on the open interval between $a$ and $x$, and $f^{(n)}$ is continuous on the closed interval $[a,x]$". I suspect an example in the spirit of (2) can be adapted to satisfy the hypotheses you give, but fail to satisfy the hypotheses I give. One probably needs to place the bad point at the end of the interval, and arrange for that point to be the $c$ needed in the error estimate.<|endoftext|>
-TITLE: Kernel in Modern Algebra
-QUESTION [8 upvotes]: What is a kernel, and how can it be described in the real world? How can it be defined well and precisely? I tried asking my professor and he just tells us it is an abstract idea.
-
-REPLY [2 votes]: One area in which kernels of linear operators have a real-world physical meaning is electrical networks. There, you have a network consisting of a set of nodes and a set of branches connecting these nodes, together with a “boundary operator,” $\partial$, that basically tells you which nodes are at the ends of each branch and assigns a direction to each branch. You can define a vector space indexed by the branches that describes assignments of currents to the branches and a related vector space indexed by the nodes that gives the net current flowing into or out of each node. The boundary operator then tells you how to compute the net node currents given a set of branch currents.
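-
-As a small concrete sketch (my own illustration, not from the original answer: the three-node loop, the matrix, and all names are invented for the example, and Python with NumPy is assumed), the boundary operator can be written as a node-by-branch incidence matrix:
-# Boundary operator of a tiny 3-node circuit as an incidence matrix.
-# branches: b0: n0 -> n1, b1: n1 -> n2, b2: n2 -> n0
-# entry D[node, branch] is -1 if the branch leaves the node, +1 if it enters it
-import numpy as np
-
-D = np.array([
-    [-1,  0,  1],   # node 0
-    [ 1, -1,  0],   # node 1
-    [ 0,  1, -1],   # node 2
-])
-
-branch_currents = np.array([1.0, 1.0, 1.0])  # the same current around the loop
-print(D @ branch_currents)                   # -> [0. 0. 0.]: net node currents vanish
-The loop current above is sent to zero by the incidence matrix, i.e. it lies in the kernel of the boundary operator, which is exactly the situation described next.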
The kernel of $\partial$ consists of those branch current assignments for which the net current at all nodes is zero, but that’s exactly Kirchhoff’s current law for electrical networks, so the kernel of $\partial$ captures all of the physically possible current distributions. There are other operators that can be defined on these spaces and their duals whose kernels and images also have important physical meanings.<|endoftext|> -TITLE: How to find all subgroups of a group in GAP -QUESTION [12 upvotes]: With the following group: -gap> G:=Group((1,2,3,4),(1,2)); - -Group([ (1,2,3,4), (1,2) ]) - -gap> Order(G); - -24 - -gap> Elements(G); - -[ (), (3,4), (2,3), (2,3,4), (2,4,3), (2,4), (1,2), (1,2)(3,4), - (1,2,3), (1,2,3,4), (1,2,4,3), (1,2,4), (1,3,2), (1,3,4,2), - (1,3), (1,3,4), (1,3)(2,4), (1,3,2,4), (1,4,3,2), (1,4,2), - (1,4,3), (1,4), (1,4,2,3), (1,4)(2,3) ] - -Is there any function to list the subgroups of $G$? -Thanks, and sorry if the question looks too easy; I'm a newbie in GAP. - -REPLY [15 votes]: One should think in terms of conjugacy classes of subgroups: -gap> G:=Group((1,2,3,4),(1,2)); -Group([ (1,2,3,4), (1,2) ]) -gap> cc:=ConjugacyClassesSubgroups(G); -[ Group( () )^G, Group( [ (1,3)(2,4) ] )^G, Group( [ (3,4) ] )^G, - Group( [ (2,4,3) ] )^G, Group( [ (1,4)(2,3), (1,3)(2,4) ] )^G, - Group( [ (3,4), (1,2)(3,4) ] )^G, Group( [ (1,3,2,4), (1,2)(3,4) ] )^G, - Group( [ (3,4), (2,4,3) ] )^G, Group( [ (1,4)(2,3), (1,3)(2,4), (3,4) ] )^G, - Group( [ (1,4)(2,3), (1,3)(2,4), (2,4,3) ] )^G, - Group( [ (1,4)(2,3), (1,3)(2,4), (2,4,3), (3,4) ] )^G ] - -For example, Group( [ (1,3)(2,4) ] )^G denotes the conjugacy class of the subgroup Group( [ (1,3)(2,4) ] ). The 1st element of the returned list in this example is Group( () )^G - the conjugacy class of the trivial subgroup, but in general the order in which the classes are listed depends on the method chosen by GAP, so you should not rely on it being always the 1st. -Now take, for example, the 2nd conjugacy class from this list. It contains 3 subgroups: -gap> c:=cc[2]; -Group( [ (1,3)(2,4) ] )^G -gap> Size(c); -3 - -We can use Representative to get the representative of the class -gap> Representative(c); -Group([ (1,3)(2,4) ]) - -and AsList to get all subgroups from this class: -gap> AsList(c); -[ Group([ (1,3)(2,4) ]), Group([ (1,4)(2,3) ]), Group([ (1,2)(3,4) ]) ] - -GAP also has a function AllSubgroups (since GAP 4.5), but it is intended primarily for use in class for small examples, and quickly becomes inefficient for larger groups. Instead of using AllSubgroups one should think algorithmically. Do you need all subgroups or only up to conjugacy? Do you want only normal subgroups, or maximal, or maybe of a particular order or isomorphism type? In this case, look at: - -NormalSubgroups -IsomorphicSubgroups -RepresentativesPerfectSubgroups -RepresentativesSimpleSubgroups -ConjugacyClassesMaximalSubgroups -ConjugacyClassesPerfectSubgroups - -If only $p$-subgroups are needed, it is often faster to compute the subgroup lattice of the Sylow $p$-subgroup (see SylowSubgroup), and then use conjugation to find more, if it is not normal. -Furthermore, LatticeByCyclicExtension and SubgroupsSolvableGroup accept optional arguments which allow one to put restrictions on computed subgroups. In the latter case the functions SizeConsiderFunction and ExactSizeConsiderFunction may be used.
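-As a small usage sketch in the same spirit (added here; not part of the original answer, but using only the standard commands already shown above), one can combine these functions to pick out, say, representatives of the classes of subgroups of order 4: -gap> reps:=List(ConjugacyClassesSubgroups(G), Representative);; -gap> Filtered(reps, H -> Size(H) = 4); - -The expected result, assembled from the class list printed above, consists of the three representatives Group([ (1,4)(2,3), (1,3)(2,4) ]), Group([ (3,4), (1,2)(3,4) ]) and Group([ (1,3,2,4), (1,2)(3,4) ]).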
-For more advanced techniques, see also the Chapter "Tables of Marks" and in particular Section "Accessing Subgroups via Tables of Marks" - but that will work only for groups whose tables are contained in the tables of marks library. -Further reading: - -How do I get the subgroups of my group from GAP F.A.Q. -Computing Conjugacy Classes of Subgroups in GAP -$p$-subgroups in GAP<|endoftext|> -TITLE: Does $\sqsubset$ have any special meaning? -QUESTION [10 upvotes]: What is the meaning of $\sqsubset$ and $\sqsubseteq$? Do they have any special meaning, or are they just alternatives to writing $\subset$ and $\subseteq$ respectively (for proper subsets and subsets)? -I have been looking for an explanation everywhere, but so far I could not find it. This may have to do with the fact that I am not even sure what this symbol is called (which makes it difficult to search for it), but I have tried several things (like searching for $\sqsubset$ on this site), and nothing came up, other than lists of mathematical symbols for LaTeX without any explanation. -I have seen it used in papers (e.g., http://www.cril.univ-artois.fr/~marquis/everaere-konieczny-marquis-ecai10.pdf on page 4, footnote 5), but never explained. I am starting to think that $\sqsubset$ and $\sqsubseteq$ are equivalent to $\subset$ and $\subseteq$. However, sometimes there are subtle differences, so I want to be certain about this. I want to be sure that I understand the intended meaning when reading future papers, to avoid any misunderstandings. -Thanks in advance. - -REPLY [2 votes]: The square subset symbol is sometimes used to indicate a prefix, so that $x \sqsubseteq y$ denotes that $x$ is a prefix of $y$. This defines a binary relation on strings, called the prefix relation, which is a particular kind of prefix order. - -This interpretation seems to make sense for the example you cited: - -$(E_n)_{n \in \mathbb{N}}$ satisfies $\forall i \in \mathbb{N}, E_i \sqsubseteq E_{i+1}$ - -meaning $E_i$ is a prefix of $E_{i+1}$.<|endoftext|> -TITLE: Continuous functions and uncountable intersections with the x-axis -QUESTION [8 upvotes]: Let $f : \mathbb{R} \to \mathbb{R}$ be such that the set $X = \{x \in \mathbb{R} : f(x) = 0\}$ does not contain any interval (i.e. there is no interval $I \subset X$). -Of course the set $X$ can be uncountable (see the Cantor set). If we add that $f$ is continuous, is it true that $X$ is countable? I have been thinking about this for a while, and couldn't find any counterexamples - my intuition says the answer is yes. I tried to start a proof but really couldn't move forward. -My attempt (by contradiction): assume $X$ is uncountable. Then there exists $[a, b] \subset \mathbb{R}$ such that $X \cap [a, b]$ is uncountable. Now, let $g$ be the restriction of $f$ to $[a, b]$. Then $g$ is uniformly continuous. I don't know what to do next, though... -Any hints appreciated. - -REPLY [17 votes]: No, the conclusion is not true. Take an uncountable compact set $E$ that contains no intervals (for example, the $1/3$ Cantor set) and define -$$f(x) = \operatorname{dist}(x, E)$$ -This is zero if and only if $x \in E$, and is actually Lipschitz continuous: one checks $|\operatorname{dist}(x,E)-\operatorname{dist}(y,E)|\le|x-y|$ directly from the triangle inequality.<|endoftext|> -TITLE: Find the moment generating function of the sum of exponential random variables $S=X_1+X_2+X_3+X_4$ -QUESTION [6 upvotes]: Let $X_1,X_2,X_3,X_4$ be iid exponential random variables with parameter $\lambda$, and let $S=X_1+X_2+X_3+X_4$. -$S$ follows the gamma distribution with parameters $\lambda$ and $r=4$.
-We know that an exponential random variable $X$ with parameter $\lambda$ has moment -generating function -$E[e^{tX}]=\frac{\lambda}{\lambda-t}$ if $t<\lambda$ and $+\infty$ if $t \geq \lambda$. -a) Find the MGF $M_S(t)$ (don't forget to declare the domain). -b) Find the 1st and 2nd moments of $S$, then find the variance. -I know that for b) you can just find $M_S'(0)$ and $M_S''(0)$, but for a) I'm not sure how to get started on finding the moment generating function. - -REPLY [3 votes]: The mgf of a sum of independent random variables is the product of the individual mgfs. -In our case each $X_i$ has mgf $\frac{\lambda}{\lambda-t}$, and therefore the sum has mgf $\left(\frac{\lambda}{\lambda-t}\right)^4$ for $t\lt \lambda$. -You know how to do b) using the mgf. As a check, note that the mean of $S$ is the sum of the means of the $X_i$, namely $4/\lambda$, and the variance of $S$ is the sum of the variances of the $X_i$, namely $4/\lambda^2$.<|endoftext|> -TITLE: Prime Exponent Polynomials -QUESTION [8 upvotes]: This was a problem that my friend had on his final for a discrete math class that he mentioned he couldn't figure out. I tried, but I don't really know how to get started. -Let $f(x)\neq 0$ be a polynomial in $\mathbb{Z}[x]$. Prove that there exists a non-zero polynomial $g(x)$ such that $f(x)g(x)$ has only prime exponents. - -REPLY [3 votes]: Let $n$ be the degree of $f$, and let $V_n$ be the $\mathbb{Q}$-vector space of polynomials of degree less than $n$. The vector space dimension of $V_n$ is $n$. Consider the $n + 1$ monomials$$x^{p_1}, x^{p_2}, \dots, x^{p_{n+1}},$$where $p_1, p_2, \dots, p_{n+1}$ are any $n+1$ distinct prime numbers. Consider their remainders modulo $f$. They are $n + 1$ elements of $V_n$, and thus they are linearly dependent. Hence, there are $a_1, a_2, \dots, a_{n+1} \in \mathbb{Q}$, not all zero, such that$$h(x) = a_1x^{p_1} + a_2x^{p_2} + \dots + a_{n+1}x^{p_{n+1}}$$is divisible by $f$. Clearly, we can assume that $a_1, a_2, \dots, a_{n+1} \in \mathbb{Z}$. Thus, $g$ is the quotient of $h$ by $f$. Scaling $h$ by a suitable integer factor, we can also assume that $g \in \mathbb{Z}[x]$ if needed.<|endoftext|> -TITLE: Is the tangent bundle of $S^2 \times S^1$ trivial or not? -QUESTION [8 upvotes]: As the question title suggests, is the tangent bundle of $S^2 \times S^1$ trivial or not? -Progress: I suspect yes. If I could construct three independent vector fields, I would be done. But I'm not so sure how to do that. Could anyone help? - -REPLY [2 votes]: A more explicit answer: identify $S^2\times\mathbb{R}$ with $\mathbb{R}^3\setminus\{0\}$ by the diffeomorphism $\psi(x,t):=2^t x$ (here we think of $x\in S^2\subset\mathbb{R}^3$) and identify $S^1$ with $\mathbb{R}/\mathbb{Z}$. -Calling $\delta:\mathbb{R}^3\setminus\{0\}\to\mathbb{R}^3\setminus\{0\}$ the dilation by a factor $2$, i.e. $\delta(x):=2x$, it suffices to find three independent vector fields $X_1,X_2,X_3$ on $\mathbb{R}^3\setminus\{0\}$ s.t. -$$d\delta((X_i)_x)=(X_i)_{\delta(x)}\quad\quad(*)$$ -(for $i=1,2,3$) because then you can define three independent vector fields $Y_1,Y_2,Y_3$ on $S^2\times S^1$ with the formula -$$(Y_i)_{(x,[t])}:=d(\pi\circ\psi^{-1})((X_i)_{\psi(x,t)})$$ -(here $\pi:S^2\times\mathbb{R}\to S^2\times S^1$ is the product of $\mathrm{id}_{S^2}$ by the standard projection $\mathbb{R}\to S^1$). Note that this is a good definition: calling $\tau:S^2\times\mathbb{R}\to S^2\times\mathbb{R}$, $\tau(x,t):=(x,t+1)$, it suffices to check that -$$ d(\pi\circ\psi^{-1})((X_i)_{\psi(x,t)})=d(\pi\circ\psi^{-1})((X_i)_{\psi\circ\tau(x,t)}).
$$ -But $\psi\circ\tau=\delta\circ\psi$, so you can rewrite the RHS as -$$ d(\pi\circ\psi^{-1})((X_i)_{\psi\circ\tau(x,t)}) -=d((\pi\circ\tau^{-1})\circ\psi^{-1})((X_i)_{\delta\circ\psi(x,t)}) \\ -=d(\pi\circ(\psi\circ\tau)^{-1})\circ d\delta((X_i)_{\psi(x,t)}) -=d(\pi\circ(\delta\circ\psi)^{-1}\circ\delta)((X_i)_{\psi(x,t)}) -=LHS $$ -thanks to $(*)$. Besides checking these computations, I strongly suggest that you draw a picture and understand what is going on. -Now we have to build the vector fields $X_1,X_2,X_3$ s.t. $(*)$ holds, but this is very easy: -choose $(X_i)_x:=r(x)\frac{\partial}{\partial x^i}$ for $i=1,2,3$ (here $r:\mathbb{R}^3\setminus\{0\}\to\mathbb{R}$ is the distance from the origin).<|endoftext|> -TITLE: Is $\sum_{p}\frac{1}{p^{2}}$ irrational? -QUESTION [6 upvotes]: Is $\sum\limits_{p}\frac{1}{p^{2}}$ irrational, where $p$ ranges over the primes? How does one prove it? - -REPLY [5 votes]: The function $P(s)=\sum_p \frac{1}{p^s}$ is called the prime zeta function. Unfortunately $P(2)$ does not have a known closed form like $\zeta(2)=\frac{\pi^2}{6}$. Similarly for $P(3), P(4)$, etc. For a related discussion see this MO-question. I think it is not known whether or not $P(2)$ is irrational.<|endoftext|> -TITLE: Is $f(x) = x^T A x$ a convex function? -QUESTION [5 upvotes]: Is $f(x) = x^T A x$ a convex function, where $x \in \mathbb{R}^n$, and $A$ is an $ n\times n$ matrix? -If not, my question can be reformulated as: when is $f(x)$ convex? Is there any restriction on $A$, for example positive definite and symmetric? - -REPLY [12 votes]: The necessary and sufficient condition for the function $f(x)=x^TAx$ to be convex is that $A+A^T$ is positive semidefinite. -Reason: the gradient is $\nabla f(x) = (A+A^T)x$, so $A+A^T$ is the (constant) Hessian matrix of $f$, and a twice-differentiable function is convex if and only if its Hessian is positive semidefinite everywhere.<|endoftext|> -TITLE: Basis for Skew Symmetric Matrix -QUESTION [5 upvotes]: I'm trying to find a basis for the kernel of the following mapping: consider the linear transformation $T: M_{3,3} \rightarrow M_{3,3}$ defined by $T(A) = 0.5(A + A^T)$. I know that this is basically asking for a basis under the condition that $T(A)=0$, which means that $A+A^T=0$, so $A^T = -A$. I found that the matrices that fit this condition are the skew symmetric matrices. However, I'm not sure how to find a basis for the kernel. - -REPLY [3 votes]: Let $a_{ij}$ denote the entries of $A$. If $A \in \ker T$, then all of the entries of $T(A)$ are zero. In other words, -$$ a_{ij} + a_{ji} = 0. $$ -This forces diagonal entries to vanish: -$$ a_{ii} = 0. $$ -Define the matrix unit $E_{ij}$ to be the $3 \times 3$ matrix, all of whose entries are $0$ except for the $(i,j)$ entry, which is $1$. These nine matrices form a basis for $M_{3,3}$, the space of all $3 \times 3$ matrices. -Now, we can build a basis $\{ B_{12}, B_{13}, B_{23} \}$ for the space of skew symmetric matrices out of the matrix units: -\begin{align} B_{12} = E_{12} - E_{21} &= \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\!, \\[2pt] B_{13} = E_{13} - E_{31} &= \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}\!, \\[2pt] B_{23} = E_{23} - E_{32} &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}\!. \end{align} -An arbitrary skew symmetric matrix decomposes as -$$ \begin{pmatrix} 0 & a_{12} & a_{13} \\ -a_{12} & 0 & a_{23} \\ -a_{13} & -a_{23} & 0 \end{pmatrix} = a_{12} B_{12} + a_{13} B_{13} + a_{23} B_{23}\!, $$ -showing that the set $\{ B_{12}, B_{13}, B_{23} \}$ spans.
It's pretty clear that these three are linearly independent as well: if we set an arbitrary linear combination to zero, then each entry of the resulting matrix is $0$, so $a_{12} = a_{13} = a_{23} = 0$, which is the trivial combination. In other words, the decomposition of any skew symmetric matrix is unique.<|endoftext|> -TITLE: Integrability of two independent random variables -QUESTION [5 upvotes]: Suppose that $X$ and $Y$ are two independent random variables. -How does one show that if $X+Y\in L^1$, then both $X$ and $Y$ are in $L^1$? -I know that one can approach this problem using the fact that the joint law of $X+Y$ is the convolution of the laws of $X$ and $Y$, but I would like to know more "elementary" ways of proving the claim. -That is, how does one establish that -$$E[|X|: |X|>M] \to 0 $$ as $M \to \infty$? - -REPLY [4 votes]: Use the following law -$$\mathbb{E}(|Z|) \leq \sum_{n=0}^{\infty} \mathbb{P}(|Z| > n) \leq \mathbb{E}(|Z|) + 1$$ -which is known as the layered representation of expectation. From the assumption that $X+Y$ is in $L^1$, we have -$$\sum_{n=0}^{\infty} \mathbb{P}(|X+Y| > n) \leq \mathbb{E}(|X+Y|) + 1 < \infty$$ -Then note that -\begin{align*} \mathbb{P}(|X+Y| > n) &\geq \mathbb{P}(|X| - |Y| > n)\\ &\geq \mathbb{P}(|X| > n + m \text{ and } |Y| < m)\\ &\geq \mathbb{P}(|X| > n + m) \cdot \mathbb{P}(|Y| < m) &\text{by independence} \end{align*} -for any $m$. Choose $m$ sufficiently large so that $\mathbb{P}(|Y| < m) > 0$ (such an $m$ must exist, since $\lim_{m \rightarrow \infty} \mathbb{P}(|Y| < m) = 1$) so that we can bound the tail sum -$$\sum_{k \geq m} \mathbb{P}(|X| > k) = \sum_{n \geq 0} \mathbb{P}(|X| > n + m) \leq \frac{\sum_{n=0}^{\infty} \mathbb{P}(|X+Y| > n)}{\mathbb{P}(|Y| < m)} < \infty$$ -This shows that -$$\sum_{k \geq 0} \mathbb{P}(|X| > k) < \infty$$ -and so by the layered representation inequality, -$$\mathbb{E}(|X|) \leq \sum_{n=0}^{\infty} \mathbb{P}(|X| > n) < \infty$$ -hence $X$ is in $L^1$. Likewise for $Y$.<|endoftext|> -TITLE: Why are homogeneous coordinates needed in image projection? -QUESTION [5 upvotes]: The above image shows how a 3D object is projected onto a 2D image by a camera, which makes perfect sense to me. - -However, it's then said that division by $z$ is nonlinear (why?), so homogeneous coordinates should be used instead of Euclidean coordinates. Why does this "trick" help? -When transforming from 3D object space to homogeneous coordinates, the final coordinate and the final column of the transformation matrix aren't even used, so why are they necessary? - -REPLY [9 votes]: It isn’t so much that homogeneous coordinates are needed. Rather, they make computation simpler by making many important transformations linear. They can then all be represented as $4\times4$ matrices and composition is just matrix multiplication. It’s unfortunate that this source presents the use of homogeneous coordinates as a mere trick since there’s a natural connection between them and projective geometry. -Even before projective transformations were introduced, you probably already had to deal with rotation, translation and scaling. For instance, the object-to-world and world-to-camera coordinate transformations often involve rotations and translations. Although rotations about the origin in $\mathbb R^3$ are linear transformations and so can be represented as $3\times 3$ matrices, neither translations nor rotations about an arbitrary point (which are just a composition of a rotation about the origin and a couple of translations) can be.
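-To see concretely why a translation cannot be a $3\times3$ matrix acting on $(x,y,z)$ (a one-line check added here; not in the original answer): any linear map $T$ satisfies $T(0)=0$, whereas translation by a nonzero vector $t$ sends the origin to $t$, so no matrix $M$ can satisfy $Mx=x+t$ for all $x$ when $t\neq0$.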
By introducing homogeneous coordinates, you can represent all affine transformations of $\mathbb R^3$ as $4\times4$ matrices. This includes all of the linear transformations that were possible with $3\times3$ matrices as well as translations (and combinations thereof). For example, a translation by $(\Delta x,\Delta y,\Delta z)$ is represented by the matrix $$
\pmatrix{1&0&0&\Delta x \\ 0&1&0&\Delta y \\ 0&0&1&\Delta z \\ 0&0&0&1}.
$$ -Moving to perspective projection, we place the center of projection—pinhole or eye—at the origin in camera coordinates, locate the image plane perpendicular to the $z$-axis at $(0,0,f)$, with $f<0$, and direct the line of sight along the negative $z$-axis. (This arrangement keeps the image plane coordinate system right-handed.) The ray to a point $P$ then intersects the image plane at the point $\left(f\frac{P_x}{P_z},f\frac{P_y}{P_z},f\right)=\frac f{P_z}\left(P_x,P_y,P_z\right)$. This is just a form of scaling transformation, but unfortunately it’s non-linear because the scale factor depends on the point’s $z$-coordinate. That is, if you tried to write this transformation as a $3\times 3$ matrix, it’d be something like $$\pmatrix{\frac f{P_z}&0&0 \\ 0&\frac f{P_z}&0 \\ 0&0&\frac f{P_z}},$$ but that doesn’t give you a linear transformation because the entries of this matrix aren’t constants—one of the coordinates of the point being transformed appears in it. -Notice something interesting here: Homogeneous coordinates have the property that the tuples $(x_1, x_2, \dots, 1)$ and $(\alpha x_1, \alpha x_2, \dots, \alpha)$ represent the same point (for $\alpha\ne0$). If you interpret the coordinates of a point in $\mathbb R^3$ as the homogeneous coordinates of a point in the image plane, then the perspective projection is just the identity map! This is no coincidence. When you use homogeneous coordinates, you’ve actually moved from the Euclidean space $\mathbb R^{n+1}$ to the projective space $\mathbb P^n(\mathbb R)$ by identifying points in $\mathbb R^{n+1}$ that lie on the same line through the origin (minus the origin itself) with each other. I.e., all of the points that have the same projection are an equivalence class that is a single point in the projective space. -Taking advantage of this scaling property, we can write the coordinates of the projected point on the image plane as the homogeneous coordinates $\left(P_x,P_y,P_z,\frac{P_z}f\right)$, which can be obtained from $P=(P_x,P_y,P_z,1)$ by applying the linear transformation given by the matrix:$$
\pmatrix{ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&\frac1f&0 }.
$$ The upper-left submatrix is indeed the identity map, as noted above. -Using this representation of perspective projection, you can continue to combine all of the necessary image transformations by simply multiplying their matrices together. This representation has other advantages besides. This homogeneous perspective matrix can be modified slightly so that the transformed $z$-coordinate encodes depth for $z$-buffering. Also, since you can multiply by an arbitrary scale factor without changing its effect, it can be tweaked to make the view volume after projection a cube with sides at $-1$ and $+1$, which makes clipping easy. Historically, when floating-point operations were very expensive compared to integer operations, homogeneous coordinates allowed you to use integer (really rational) arithmetic by carrying a denominator along in the last coordinate.
This isn’t as important any more, although being able to rescale the coordinates can still be handy.<|endoftext|> -TITLE: Why do we use the Borel sigma algebra for the codomain of a measurable function? -QUESTION [5 upvotes]: In several measure theory books, I see that a measurable function $f:\mathbb{R} \rightarrow \mathbb{R}$ is often taken with the domain equipped with the Lebesgue $\sigma$-algebra and the codomain with the Borel $\sigma$-algebra. However, if $g$ is another such function, then the composition of $f$ and $g$ may fail to be measurable. -Why do we not use the Lebesgue $\sigma$-algebra for both the domain and the codomain? - -REPLY [3 votes]: I asked Terry Tao a similar question about a year ago: - -The (real) Lebesgue measurable functions are $({\mathcal L},{\mathcal B}_{\bf R})$ measurable by definition. What if one considers the $({\mathcal L},{\mathcal L})$ measurable functions only? -For doing integration of functions $f:(X,{\mathcal M})\to(Y,{\mathcal N})$, why is it enough to use Borel measure in the target space? - -Here is his answer: - -For the purposes of integration theory, the range $(Y,{\mathcal N})$ and domain $(X,{\mathcal M})$ are treated quite differently (in contrast to the more category-theoretic areas of mathematics, in which the domain and range of a morphism are consciously treated on a very equal footing). In particular, the range needs the structure of a complete vector space in order to have any sensible (linear) integration theory. (There are some nonlinear generalisations of the integration concept, but they usually fit better with the more classical Riemann theory of integration than the Lebesgue theory.) -One way to think of a $({\mathcal L}, {\mathcal B}_{\bf R})$-measurable function is as the pointwise limit of simple functions (finite linear combinations of indicator functions of measurable sets). This ties in with the underlying intuition of Lebesgue integration as coming from “horizontally” slicing up the graph of a function (as opposed to Riemann integration, which is focused instead on “vertically” slicing up the graph). -Strengthening the measurability requirement by placing the Lebesgue sigma algebra, rather than the Borel sigma algebra, on the range leads to some pathologies, for instance the very natural function $x \mapsto (x,0)$ from $({\bf R}, {\mathcal L})$ to $({\bf R}^2, {\mathcal L}^2)$ is now a non-measurable function! (if $E$ is a non-measurable subset of ${\bf R}$, then $E \times \{0\}$ is a null set and is thus Lebesgue measurable, but not Borel measurable in ${\bf R}^2$.) One can achieve similar pathologies in one-dimension by using an essentially bijective map between, say, the unit interval and the Cantor set.<|endoftext|> -TITLE: Verify integration of $ \int\frac{\sqrt{2-x-x^2}}{x^2}dx $ -QUESTION [7 upvotes]: This is exercise 6.25.40 from Tom Apostol's Calculus I. I would like to ask someone to verify my solution, as the result I got differs from the one provided in the book. -Evaluate the following integral: $ \int\frac{\sqrt{2-x-x^2}}{x^2}dx $ -$ x \in [-2,0) \cup (0,1] $ -As suggested in the book, we multiply both numerator and denominator by $ \sqrt{2-x-x^2} $. This removes the endpoints from the integrand's domain, but the definite integral we calculate with the antiderivative of this new function will still be the proper integral between any two points of the original domain: we only remove a finite number of points from the original domain and the domain of the resulting antiderivative will be the same as the original domain.
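-To spell out the algebra behind the split in the first line below (an added step, easily verified): multiplying numerator and denominator by $\sqrt{2-x-x^2}$ gives $$\frac{\sqrt{2-x-x^2}}{x^2}=\frac{2-x-x^2}{x^2\sqrt{2-x-x^2}}=\frac{2-x}{x^2\sqrt{2-x-x^2}}-\frac{1}{\sqrt{2-x-x^2}}.$$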
-$$ I=I_1+I_2=\int\frac{2-x}{x^2\sqrt{2-x-x^2}}dx-\int\frac1{\sqrt{2-x-x^2}}dx \tag{1} $$ -Evaluating first $ I_1 $ by substituting $ t=\frac1{x} \; \text, \; dx=-\frac1{t^2}dt \text: $ -$$ I_1=-\frac1{\sqrt2}\int\frac{2t-1}{\frac{t}{|t|}\sqrt{\left(t-\frac14\right)^2-\left(\frac34\right)^2}}dt \tag{2} $$ -Substituting again $ \frac34\sec{u}=t-\frac14 \; \text, \; dt=\frac34\sec{u}\tan{u}du \; \text, \; t=\frac{3\sec{u}+1}{4} \; \text, \; u=\operatorname{arcsec}{\frac{4t-1}{3}} $, by considering the sign of $ t $ and $ \tan{u} $ in the integrand's two sub-domains: -a) $ x \in (-2, 0): t<-\frac12 \; \text, \; \frac{4t-1}{3}<-1 \; \text, \; u \in(\frac\pi2,\pi) \; \text, \; \tan{u}<0 $ -b) $ x \in (0, 1): t>1 \; \text, \; \frac{4t-1}{3}>1 \; \text, \; u \in(0,\frac\pi2) \; \text, \; \tan{u}>0 $ -$$ I_1=-\frac{1}{2\sqrt2}\int\left(3\sec^2{u}-\sec{u}\right)du=-\frac1{2\sqrt2}\left(3\tan{u}-\log\left|\tan{u}+\sec{u}\right|\right)+C_1 \tag{3} $$ -$$ \sec{u}=\sec\operatorname{arcsec}\frac{4t-1}{3}=\frac{4t-1}{3}=\frac{4-x}{3x} \tag{4} $$ -$$ \tan^2{u}=\tan^2\operatorname{arcsec}\frac{4t-1}{3}=\left(\frac{4t-1}{3}\right)^2-1=\frac89\frac{2-x-x^2}{x^2} \tag{5} $$ -Considering again cases a) and b): -$$ \tan{u}=\frac{2\sqrt2}{3}\frac{\sqrt{2-x-x^2}}{x} \tag{6} $$ -$$ I_1=-\frac{\sqrt{2-x-x^2}}{x}+\frac1{2\sqrt2}\log\left|\frac{2\sqrt2}{3}\left(\frac{\sqrt{2-x-x^2}+\sqrt2}{x}-\frac1{2\sqrt2}\right)\right|+C_1= \tag{7} $$ -$$ -\frac{\sqrt{2-x-x^2}}{x}+\frac1{2\sqrt2}\log\left|\frac{\sqrt{2-x-x^2}+\sqrt2}{x}-\frac1{2\sqrt2}\right|+C'_1 \tag{8} $$ -Evaluating now $ I_2 $: -$$ I_2=-\int\frac1{\sqrt{\left(\frac32\right)^2-\left(x+\frac12\right)^2}}dx \tag{9} $$ -Substituting $\frac32\sin{z}=x+\frac12 \; \text, \; dx=\frac32\cos{z}dz \; \text, \; z=\operatorname{arcsin}{\frac{2x+1}{3}} $: -$$ I_2 = -\int dz = -\operatorname{arcsin}{\frac{2x+1}{3}}+C_2 \tag{10} $$ -The final result: -$$ I = I_1 + I_2 = -\frac{\sqrt{2-x-x^2}}{x}+\frac1{2\sqrt2}\log\left|\frac{\sqrt{2-x-x^2}+\sqrt2}{x}-\frac1{2\sqrt2}\right|-\operatorname{arcsin}{\frac{2x+1}{3}}+C \tag{11} $$ -The solution provided in the book: -$$ I = -\frac{\sqrt{2-x-x^2}}{x}+\frac1{2\sqrt2}\log\left(\frac{\sqrt{2-x-x^2}}{x}-\frac1{2\sqrt2}\right)-\operatorname{arcsin}\frac{2x+1}{3}+C $$ - -REPLY [2 votes]: The solution in the book is most certainly a typo; your proof seems fine to me.
As a confirmation, Mathematica evaluates the integral to be: $$I=-\dfrac {\sqrt {2 - x - x^2}} x + \dfrac1 {2\sqrt {2}}\left[\log\left |4 - x + 2\sqrt {2}\sqrt {2 - x - x^2} \right| - \log |x| \right] - \arcsin\left (\dfrac{2 x + 1} {3} \right) + \rm C_1,$$ which is the same as your proposed solution since $$\begin{align} \log\left |4 - x + 2\sqrt {2}\sqrt {2 - x - x^2} \right| - \log |x|&=\log\left|\dfrac{{2\sqrt{2}\sqrt{2-x-x^2}}+{4-x}}{2\sqrt{2}x}\right|+{\rm C_2} \\ &=\log\left|\frac{\sqrt{2-x-x^2}+\sqrt2}{x}-\frac1{2\sqrt2}\right|+{\rm C_2},\end{align}$$ so they only differ by a constant.<|endoftext|> -TITLE: A conditional asymptotic for $\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}$, when $\alpha>-1$ -QUESTION [5 upvotes]: I have followed some notes that show how to obtain a similar asymptotic using the Abel summation formula. In my case $a_n=\chi(n)$ is the characteristic function taking the value $1$ if $p$ is prime and is the smaller member of a twin prime pair (caution: I have defined $\chi(p+2)$ as zero), and $f(x)=x^{\alpha}$ with $\alpha>-1$; in place of the Prime Number Theorem, I am assuming the Twin prime conjecture. The author takes much care to justify the computations involving L'Hopital's rule (I understand all of it, but he claims that the application of L'Hopital's rule gives the same result as the more rigorous approach, which is to take an $\epsilon$ and compute the asymptotic limit of the main term with a limit superior; I emphasize again that the author claims the computations come out the same whether one uses L'Hopital or takes an epsilon and computes with limits superior). Applied in my case, $$\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}$$ is asymptotic to $$2C_2\frac{x^{\alpha+1}}{\log^2 x},$$ multiplied by a constant defined precisely by -$$\lim_{x\to\infty}1-\alpha\frac{\int_2^{x}\left(\frac{2C_2t}{\log ^2 t}+o\left(\frac{t}{\log ^2 t}\right)\right)t^{\alpha-1}dt}{2C_2\frac{x^{\alpha+1}}{\log^2 x}}=\frac{1}{1+\alpha}.$$ -Thus, using his method I compute for $\alpha>-1$ -$$\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}\sim 2C_2\frac{x^{\alpha+1}}{(1+\alpha)\log^2 x},$$ -where $C_2$ is the twin prime constant. - -Question. Assuming the Twin prime conjecture, can you justify rigorously an asymptotic for $\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}$, when $\alpha>-1$? Thanks in advance. - -I have defined the characteristic function and the sum $\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}$ as above, in which only the term $p^{\alpha}$ is added, to follow a method similar to the author's. I do not know whether it would be better to add the terms $(p+2)^{\alpha}$ as well. - -REPLY [4 votes]: This can be done using partial summation in a way that is similar to this answer: How does $\sum_{p}$…<|endoftext|> -TITLE: Why does $\sqrt{x^2}$ seem to equal $x$ and not $|x|$ when you multiply the exponents? -QUESTION [9 upvotes]: I understand that $\sqrt{x^2} = |x|$ because the principal square root is nonnegative. -But since $\sqrt x = x^{\frac{1}{2}}$, shouldn't $\sqrt{x^2} = (x^2)^{\frac{1}{2}} = x^{\frac{2}{2}} = x$ because of the exponents multiplying together? -Also, doesn't $(\sqrt{x})^2$ preserve the sign of $x$? But shouldn't $(\sqrt{x})^2 = (\sqrt{x})(\sqrt{x}) = \sqrt{x^2}$? -How do I reconcile all this? What rules am I not aware of?
-Edit: Since someone voted to close my question, I should probably explain the difference between my question and Proving square root of a square is the same as absolute value, regardless of how much I think the difference should be obvious to anyone who reads the questions. Cole Johnson was asking if there's any way to prove that $\sqrt{x^2} = |x|$. I am not asking that; I already accept the equation as fact. I'm asking how to resolve some apparent contradictions that arise when considering square roots of squares, and how I should approach these types of problems. (Cameron, please read.) - -REPLY [4 votes]: A common source of confusion or "paradoxes" comes from not paying close attention to the (perhaps rarely exercised) restrictions or boundary conditions. These restrictions are necessary to ensure that paradoxes like you're considering do not arise (i.e., otherwise the definitions would fail to be well-defined). For example, here's a proper definition of rational exponents from Michael Sullivan's College Algebra (given as an image in the original post): - -Note that we only consider real numbers here. Now, to answer your questions: - -But since $\sqrt{x} = x^{\frac{1}{2}}$, shouldn't $\sqrt{x^2} = (x^2)^{\frac{1}{2}} = x^{\frac{2}{2}} = x$ because of the exponents multiplying together? - -The first assertion is not generally true; $\sqrt{x} = x^{\frac{1}{2}}$ only provided that $\sqrt{x}$ exists (that is, not for negative $x$). In your chained equality, the second equality is false, because the exponent in $x^{\frac{2}{2}}$ contains common factors (i.e., is not in lowest terms). These statements would, however, be true if $x$ were restricted to positive real numbers only. - -Also, doesn't $(\sqrt{x})^2$ preserve the sign of $x$? But shouldn't $(\sqrt{x})^2 = (\sqrt{x})(\sqrt{x}) = \sqrt{x^2}$? - -All of these equalities are false for negative $x$, because in that case the expression $\sqrt{x}$ does not exist in the real numbers (i.e., it's undefined). Likewise, if you look carefully at the rule for multiplying radicals, then you'll see the same restriction against square roots of negative numbers. -Edit: Added text from Precalculus: a right triangle approach by Ratti & McWaters (also as images in the original post). Hopefully this clarifies the rule for products of square roots (namely that only positive radicands can be generally combined or separated). Also, note the warning from the section on complex numbers that doing so in that case is illegitimate.<|endoftext|> -TITLE: Why a group of order 6 has to have just two elements of order 3 -QUESTION [5 upvotes]: In an attempt to prove that every group $G$ of order 6 is isomorphic to either $\mathbb{Z}_6$ or $S_3$, I stumbled upon one peculiar issue. -We can use Cauchy's Theorem to argue that since $|G|=3\cdot2$, $G$ must necessarily have elements of orders 1, 2, and 3. Another possibility is that there exists an element of order 6 in $G$. Now, suppose this is true; then $G$ is cyclic of order 6, which implies that $G\cong \mathbb{Z}_6$. -Alternatively, suppose that $G$ contains only elements of orders 1, 2, and 3. Then... (and here's where it appears to be the most difficult part) - -There must necessarily be only two elements of order 3. For if there are more than two elements of order 3 then for some $a\in G$ s.t. $a^3=1$, $a^2=b$, where $|b|=2$ or $|b|=3$. Suppose that $b$ has order 2. Then $a^2a=1$ implies that $a^2 = a^{-1}$, thus $|b|\ne 2$. We conclude that $b$ has order 3. Then $a_1^2 = b_1$, $a_2^2=b_2$, $a_3^2=b_1$ or $a_3^2=b_2$. But this implies that $a_3=a_1$ or $a_3 = a_2$.
Hence, $G$ must contain two elements of order 3. - -We can now define a bijective homomorphism between $G$ and $S_3$, which implies that $G\cong S_3$. - -I'm wondering, however, if there's a simpler way to prove that there must necessarily be exactly two elements of order 3 in $G$. - -REPLY [4 votes]: $G$ has an element $h$ of order $3$ by Cauchy's theorem. Then $H = \langle h \rangle$ is a subgroup of order $3$ containing two elements of order $3$, namely $h$ and $h^{-1}$. -If $G$ has some other element $k$ of order $3$, then $K = \langle k \rangle$ and $H = \langle h \rangle$ are distinct subgroups of order $3$. Now $H \cap K$ must be trivial, since its order must divide $|H| = |K| = 3$ and cannot be $3$ (otherwise $H$ and $K$ are equal). Therefore: -$$|HK| = \frac{|H||K|}{|H \cap K|} = \frac{3\cdot 3}{1} = 9$$ -But this is absurd because $G$ only has six elements.<|endoftext|> -TITLE: Why exactly does a function need to be continuous on a closed interval for the intermediate value theorem to apply? -QUESTION [5 upvotes]: I apologize if this question is too basic. The intermediate value theorem states that if $f$ is continuous on a closed interval $[a,b]$, then for every value $c$ between $f(a)$ and $f(b)$ there exists some $x \in (a,b)$ such that $f(x) = c$. This, or some very similar variant thereof, is how the intermediate value theorem is usually presented in textbooks. What bugs me, however, is the condition that $f$ need be continuous on the closed interval $[a,b]$ rather than the less strict condition of only being continuous on the open interval $(a,b)$. To illustrate this point, consider $f:[-1,1] \rightarrow \mathbb{R}$ where $f(x)= e^x$. This is only continuous on the open interval $(-1,1)$ but surely the IVT applies to it. A less artificial example would be the inverse sine function. Would this reasoning not apply to all such (continuous/well-behaved) functions? -As an alternative, wouldn't the following definition from Proofwiki be superior (in that it is slightly more general)? -Let $I$ be a real interval. Let $a,b \in I$ such that $(a,b)$ is an open interval. Let $f:I \rightarrow \mathbb{R}$ be a real function continuous in $(a,b)$. Then for every value $c \in \mathbb{R}$ between $f(a)$ and $f(b)$ there exists some $x \in (a,b)$ such that $f(x) = c$ - -REPLY [2 votes]: I think 5xum's answer gives a good example. The general reason you need continuity on a closed interval is that you need to tie the values $f(a)$ and $f(b)$ to the values the function takes on the rest of the interval. If the function is not continuous at the end points then its value at the endpoints need have nothing to do with the values the function takes on the interior of the interval. -If you did want to change the IVT to work for an open interval you could use the following modification. -Let $f(x)$ be continuous on an open interval $(a,b)$ and let $c,d\in (a,b)$ be such that $f(c)\neq f(d)$. Then for every value $y$ between $f(c)$ and $f(d)$ there exists some $x$ between $c$ and $d$ such that $f(x)=y$.<|endoftext|> -TITLE: Computing $ \int_0^{2\pi} \frac{1}{a^2\cos^2 t+b^2 \sin^2 t} dt \;; a,b>0$. -QUESTION [5 upvotes]: Using the Residue Theorem, find $\displaystyle \int_0^{2\pi} \frac{1}{a^2\cos^2 t+b^2 \sin^2 t} dt \;; a,b>0$. - -My Try: -So, I am going to use the ellipse $\Gamma = \{a\cos t+i b \sin t: 0\leq t\leq 2\pi\}$. -On $\Gamma$, $z=a\cos t+i b \sin t$, so $|z|^2=z\bar{z}=a^2\cos^2 t+b^2 \sin^2 t$. -Now, $dz=(-a\sin t+i b \cos t)\,dt$. -Hence, the integral becomes $\displaystyle \int_\Gamma \frac{dz}{z(iab+(\sin t \cos t)(b^2-a^2))}$. I know that $\displaystyle \int_\Gamma \frac{dz}{z}=2\pi i$. Now, how do I get rid of the $\sin t \cos t$ part?
I am stuck here. Can somebody please explain how? - -REPLY [2 votes]: Enforcing the change of variables $x= t-\pi/2$ (and using the periodicity of the integrand to keep the limits $0$ to $2\pi$) leads to -$$I=\int_{0}^{2\pi} \frac{dx}{a^2\sin^2x+b^2\cos^2x}=2\int_{0}^{\pi} \frac{dx}{a^2\sin^2x+b^2\cos^2x}=4\int_{0}^{\pi/2} \frac{dx}{a^2\sin^2x+b^2\cos^2x}$$ -and we have \begin{align}\frac{1}{a^2\sin^2x+b^2\cos^2x}&=\frac1{ab}\cdot\frac{b^2\cos^2x}{b^2\cos^2x+a^2\sin^2x}\cdot\frac{a}{b\cos^2x}\\&=\frac1{ab}\cdot\frac1{1+\left(\frac ab \tan x\right)^2}\cdot \frac a{b\cos^2x}\\&=\frac1{ab}\left[\arctan{\left(\frac ab\tan x\right)}\right]'\end{align} - -$$I=4\int_{0}^{\pi/2}\frac1{ab}\left[\arctan{\left(\frac ab\tan x\right)}\right]'dx = \frac{2\pi}{ab} $$<|endoftext|> -TITLE: When does Rudin's change of variables formula break? -QUESTION [7 upvotes]: Here is Rudin's change of variables Theorem (quoted as an image in the original post): - -My question is this: what are examples where $\varphi$ not being strictly increasing or not mapping the interval $[a,b]$ to $[A,B]$ breaks the theorem? For example, I am considering $\varphi$ a parabola with minimum point in the unit interval. Using Rudin's special case remark ($\alpha(x)=x$ and $\beta(x)=\varphi$) I keep computing -$$\int_0^1 f(x)dx=\int_0^1f(\varphi(y))\varphi'(y)dy$$ -with various choices of $f$, and I keep finding that these integrals are equal. Does this mean $\varphi$ doesn't need such strict conditions? -If we let $a$ be the minimum point this seems surprising since $\varphi$ is only strictly increasing on $[a,1]$. - -REPLY [6 votes]: If you don't mind loosening up the hypotheses a bit: -Theorem: Suppose $\phi:[a,b] \rightarrow I$, where $I$ is an interval, is a $C^1$ function and $f$ is a continuous function on $I$. Then -$$\int_{[\phi(a),\phi(b)]} f=\int_{[a,b]} f \circ \phi \cdot \phi',$$ -where $\int_{[\phi(a),\phi(b)]} f:=-\int_{[\phi(b),\phi(a)]} f$ if $\phi(b) < \phi(a)$. -Proof: Since $f$ is continuous, pick a function $F$ such that $F'=f$ (this follows from FTC). Define $H:=F \circ \phi$. We then have that $H'=f\circ \phi \cdot \phi'.$ Therefore, -$$\int_{[a,b]} f \circ \phi \cdot \phi'=\int_{[a,b]} H'=H(b)-H(a)$$ -$$=F \circ \phi(b)- F \circ \phi(a)=F(\phi(b))-F(\phi(a))=\int_{[\phi(a),\phi(b)]} f.$$ -$\blacksquare$ -Note that we did not need $\phi$ to be increasing.<|endoftext|> -TITLE: Stalks and direct image -QUESTION [5 upvotes]: Let $f: X \rightarrow Y$ be a continuous map of topological spaces, and $F$ a sheaf of rings on $X$. The direct image sheaf $f_{\ast}F$ on $Y$ is given by the formula $V \mapsto F(f^{-1}V)$. If $x \in X$, is it true in general that $F_x \cong (f_{\ast}F)_{f(x)}$? -We have $$(f_{\ast}F)_{f(x)} = \varinjlim\limits_{V \ni f(x)} F(f^{-1}V) = \varinjlim\limits_{f^{-1}V \ni x} F(f^{-1}V)$$ so it appears that this limit equals $F_x = \varinjlim\limits_{U \ni x} F(U)$ if for any neighborhood $U$ of $x$, there exists a neighborhood $V$ of $f(x)$ such that $f^{-1}V \subseteq U$. -Obviously this last statement isn't true for all continuous maps. For example I could take $X = \{a,b\}$ in the discrete topology, $Y = \{a,b\}$ in the indiscrete topology, and $f: X \rightarrow Y$ the map $a,b \mapsto a$. Then I can take $U = \{a\}$ (and $x = a$). -It appears that there is at least a canonical homomorphism $(f_{\ast}F)_{f(x)} \rightarrow F_x$. - -REPLY [4 votes]: This isn't true in general, for exactly the reason you mention. For instance, suppose $Y$ has only one point.
Then $(f_*F)_{f(x)}=F(X)$, and you can easily find a sheaf $F$ on some space $X$ and a point $x$ such that $F(X)$ is not isomorphic to $F_x$.<|endoftext|> -TITLE: Definition of a morphism of locally ringed spaces -QUESTION [8 upvotes]: Let $(X, \mathcal O_X), (Y, \mathcal O_Y)$ be locally ringed spaces. A morphism of ringed spaces is defined to be a pair $(f,f^{\#}):(X, \mathcal O_X) \rightarrow (Y, \mathcal O_Y)$, where $f:X \rightarrow Y$ is continuous, and $f^{\#}: \mathcal O_Y \rightarrow f_{\ast} \mathcal O_X$ is a morphism of sheaves. We consider $(f,f^{\#})$ to also be a morphism of locally ringed spaces if for each $x \in X$, the homomorphism on the stalks $$f_x^{\#}: \mathcal O_{Y,f(x)} \rightarrow \mathcal O_{X,x}$$ is a local homomorphism (preimage of the unique maximal ideal remains maximal). My question is, what exactly is the map $f_{x}^{\#}$? I know since $f^{\#}$ is a morphism of sheaves, we have a homomorphism on the stalks $$f_{f(x)}^{\#}: \mathcal O_{Y,f(x)} \rightarrow (f_{\ast} \mathcal O_X)_{f(x)}$$ Are we getting $f_x^{\#}$ by composing $f_{f(x)}^{\#}$ with some homomorphism $(f_{\ast} \mathcal O_X)_{f(x)} \rightarrow \mathcal O_{X,x}$? - -REPLY [2 votes]: There is another way to describe $f^{\#}_x: \mathcal{O}_{Y,f(x)}\rightarrow \mathcal{O}_{X,x}$. -Let $(X, \mathcal{O}_X)$ and $(Y,\mathcal {O}_Y)$ be two schemes. -A map between $(X, \mathcal{O}_X)$ and $(Y,\mathcal {O}_Y)$ is a pair -1) $f:X\rightarrow Y$ and -2) $f^{\#}:\mathcal{O}_Y\rightarrow f_{*}\mathcal{O}_X$ -such that the induced map $f_x^{\#}: \mathcal O_{Y,f(x)} \rightarrow \mathcal O_{X,x}$ is a local homomorphism of rings. -Let us try to write down the induced map: -Note that $\mathcal{O}_{Y,f(x)}=\varinjlim_{ f(x)\in V}\mathcal{O}_Y(V)$. -Now by 2) we have a map $f^{\#} (V):\mathcal{O}_Y(V)\rightarrow f_{*}\mathcal{O}_X(V)=\mathcal {O}_X(f^{-1}(V))$. -Now, $f(x)\in V \implies x\in f^{-1}(V)$. Therefore, there exist maps $g_x(f^{-1}(V)):\mathcal {O}_X(f^{-1}(V))\rightarrow \mathcal{O}_{X,x}$ (property of the direct limit). -Therefore we have the composition of these two maps -$\mathcal{O}_Y(V)\xrightarrow{f^{\#} (V)}\mathcal {O}_X(f^{-1}(V))\xrightarrow{g_x(f^{-1}(V))} \mathcal{O}_{X,x}$ -Now, by the universal property of the direct limit $\mathcal{O}_{Y,f(x)}$, one gets an induced map $\mathcal{O}_{Y,f(x)}\rightarrow \mathcal{O}_{X,x}$, which is the desired map $f^{\#}_x$, which we require to be a local ring homomorphism. - -Universal property of the direct limit (which also serves as its definition): Let $\{M_\lambda, f_{\lambda \mu}\}_{\lambda \in I}$, where $f_{\lambda \mu}:M_\lambda \rightarrow M_\mu$ for all $\lambda \leq \mu$, be a directed system of rings. A ring $M$ is said to be the direct limit of the directed system if - -There exist maps $g_{\lambda}:M_\lambda \rightarrow M$ for all $\lambda\in I$ such that $g_\lambda=g_\mu \circ f_{\lambda \mu}$ for all $\lambda\leq \mu$ -If there exists another ring $M'$ with maps $g'_{\lambda}:M_\lambda \rightarrow M'$ for all $\lambda\in I$ such that $g'_\lambda=g'_\mu \circ f_{\lambda \mu}$ for all $\lambda\leq \mu$ - -Then there exists a unique map $M\rightarrow M'$<|endoftext|> -TITLE: Sign of first eigenvalue of conformal Laplacian -QUESTION [5 upvotes]: Let $(M^n,g)$ be some manifold of dimension $n \geq 3$. The conformal Laplacian is given by -$L=-4 \frac{n-1}{n-2} \Delta+ R$, where $R$ is the scalar curvature of $M$ and $\Delta= -\operatorname{div}\circ \operatorname{grad}$. - -Now assume that $g$ is a Riemannian metric, $M$ is orientable, compact and has no boundary.
- Show that the sign of the first eigenvalue of $L$ is a conformal invariant. - -Why is that so? -I was able to show a transformation law for $L$. -Namely, if $\tilde{g}=v^{4/(n-2)}g$, then for every function $\varphi >0$ we have -$L(\varphi)=v^{\frac{n+2}{n-2}} \tilde{L} (v^{-1} \varphi)$. -(Here $\tilde{L}$ is the conformal Laplacian with respect to the metric $\tilde{g}$.) I originally hoped I could conclude something similar to: -$(\varphi, \lambda)$ eigenpair for $L$ $\Leftrightarrow$ $(v^{-1} \varphi, \lambda \cdot \max v)$ eigenpair for $\tilde{L}$. -However I did not have any luck with that so far. -(And obviously the above line is NOT true. I just wanted to give an impression of what kind of statement I was looking for.) -How could I go about this exercise? - -REPLY [3 votes]: Observe that the $L^2$ pairing satisfies (writing $\nu$ for the conformal factor called $v$ in the question, to avoid a clash with the volume element $\mathrm{d}v_g$) -$$ \langle L(\phi),\phi\rangle_g = \int L(\phi)\phi \,\mathrm{d}v_g = \int \tilde{L}(\nu^{-1} \phi)\, \nu^{-1} \phi\, \nu^{\frac{2n}{n-2}} \,\mathrm{d}v_g = \int\tilde{L}(\nu^{-1}\phi)\, \nu^{-1}\phi \,\mathrm{d}v_{\tilde{g}} = \langle \tilde{L}(\nu^{-1}\phi) , \nu^{-1}\phi\rangle_{\tilde{g}}$$ - -The first eigenvalue of $L$ is negative if and only if there exists $\phi$ such that $\langle L(\phi),\phi\rangle_g < 0$ (by the variational characterization of the first eigenvalue). But then $\nu^{-1}\phi$ is such that $\langle \tilde{L}(\nu^{-1}\phi),\nu^{-1}\phi\rangle_{\tilde{g}}< 0$, showing that the first eigenvalue of $\tilde{L}$ is negative. -The first eigenvalue of $L$ is nonnegative if and only if $\forall \phi, \langle L(\phi),\phi\rangle_g \geq 0$. Given the equation above we see that this is also conformally invariant. -Furthermore, the first eigenvalue of $L$ is zero if and only if additionally there exists $\phi$ such that $L(\phi) = 0$. But then $\tilde{L}(\nu^{-1}\phi) = 0$, showing that the first eigenvalue of $\tilde{L}$ is also zero.<|endoftext|> -TITLE: Computing the Jordan Form of a Matrix -QUESTION [12 upvotes]: I apologize if this has already been answered, but I've seen multiple examples of how to compute Jordan canonical forms of a matrix, and I still don't really get it. Could someone help me out with this? -What I know for certain is that I must start off by finding my eigenvalues and corresponding eigenvectors. OR (how it was taught in class, from my understanding), I can simply plug the eigenvalues into my original matrix and find the rank. I have no clue what to do from there though... I also know that my Jordan normal forms should look like these: -$$\begin{pmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\\ \end{pmatrix}$$ -or -$$\begin{pmatrix} \lambda_1 & 1 & 0\\ 0 & \lambda_1 & 0\\ 0 & 0 & \lambda_2\\ \end{pmatrix}$$ -And if we switch $\lambda_1$ and $\lambda_2$, then the $1$ will be on the other side of the top. Lastly, -$$\begin{pmatrix} \lambda & 1 & 0\\ 0 & \lambda & 1\\ 0 & 0 & \lambda\\ \end{pmatrix}$$ -I've seen from many sources that given a matrix $J$ (specifically $3\times3$) that is our Jordan normal form, and given our matrix $A$, there is some $P$ such that $PAP^{-1}=J$. -Here's an example matrix, in case I could get an explanation of how this works through an example: -$$\begin{pmatrix} -7 & 8 & 2\\ -4 & 5 & 1\\ -23 & 21 & 7\\ \end{pmatrix}$$
- -REPLY [4 votes]: If you are not interested in computing $P$, then the Jordan form can be computed by using this: - -The number of Jordan blocks with diagonal entry as $\lambda$ is the geometric multiplicity of $\lambda$. -The number of Jordan blocks of order $k$ with diagonal entry $\lambda$ is given by $rank(A-\lambda I)^{k-1}-2\, rank(A-\lambda I)^k + rank(A-\lambda I)^{k+1}.$ - -Here, the geometric multiplicities of $\lambda =1,2$ are each $1.$ And $1$ has algebraic multiplicity $1$ where as of $2$ the algebraic multiplicity is $2.$ So, using the condition (1) only, we see that there is a Jordan block of order $1$ with $\lambda=1$ and one Jordan block with $\lambda=2.$. So, the Jordan form is as computed above. (of course, upto a permutation of the Jordan blocks.)<|endoftext|> -TITLE: $f:X\rightarrow Y$ is a closed immersion iff $f:f^{-1}(U_i)\rightarrow U_i$ is a closed immersion. -QUESTION [5 upvotes]: I came across the following property of closed immersions on Wikipedia - - -A morphism $f:Z\rightarrow X$ is a closed immersion iff for some - (equivalently every) open covering $X=\bigcup U_j$ the induced map - $f:f^{-1}(U_j)\rightarrow U_j$ is a closed immersion. - -I am having trouble with the "equivalenty every cover" part - assuming that the property of a morphism being a closed immersion holds for some open cover iff it holds for every cover I am able to prove the above result as follows - -If $f$ is a closed immersion then cover $X$ just by $X$ and we have found "some" cover of $X$ satisfying the induced map is a closed immersion. -Conversely, if there is "some" cover for which the induced map is a closed immersion then as it is true (as per the assumption I made) for every cover, we can take the cover to be $X$ and are done. (I hope this is correct!) -Now all I have to do is prove the assumption. But I have no idea where to start. -I know this lemma - Let $X$ be a scheme and let $\mathcal P$ be a property. Suppose $\mathcal P$ satisfies the following conditions - - -If $\mathcal P$ is true for $\operatorname{Spec }R$ then it is true for $\operatorname{Spec }R_g$ for every $g\in R$ -If $\langle g_1,\cdots,g_n\rangle=R$ and $\mathcal P$ is true for each $\operatorname{Spec }R_{g_i}$ then $\mathcal P$ is true for $\operatorname{Spec }R$ - -Let $X=\bigcup U_i$ be an affine open cover of $X$ and suppose $\mathcal P$ is true for each $U_i$ then $\mathcal P$ is true for every affine open subset of $X$. -But this Lemma can only be applied for an affine open cover and Wikipedia's statement about closed immersions is for every open cover so I don't know how to prove it. Any help will be greatly appreciated. -Thank you. - -REPLY [2 votes]: If you are a Vakilian, here is how to do it using the affine communication lemma. First note that by 7.3.4 in Vakil's notes we have that the affine property of a morphism is affine local on the target. -Let $X = \mathrm{Spec A}$ and $Y = \mathrm{Spec B}$, $\pi: X \to Y$. Now let $D(f) \subset Y$. Then $\pi^{-1}(D(f)) = D(\pi^{\#}(f))$. -Suppose that $\pi^{\#}: B \to A$ is surjective. Then $B_f \to A_{\pi^{\#}(f)}$ is surjective. -Now suppose that there exist $f_1,\dots,f_k$ in $B$ s.t $(f_1,\dots,f_k) = B$ such that for all $i$ we have that $B_{f_i} \to A_{\pi^{\#}(f_i)}$ is surjective. Then by algebra for all $a \in A$ we have there exist $n_i$ s.t ${\pi^{\#}(f_i)}^{n_i}a \in Im(\pi^{\#})$. Since $({\pi^{\#}(f_1)}^{n_1},\dots,{\pi^{\#}(f_k)}^{n_k})_{Im(\pi^{\#})} \supset \{1\}$ we have that $a \in Im(\pi^{\#})$. 
We have shown surjectivity of $\pi^{\#}: B \to A$.<|endoftext|> -TITLE: Determining the number of zeros in the upper half plane -QUESTION [5 upvotes]: I have the following function:$$z^4+3iz^2+z-2+i$$ -I need to find the number of zeros in the upper half plane. I wonder how one should go about solving such a problem. Help would be greatly appreciated! -P.S. This question has been asked before, but the answer in that question is sloppy, and the links provided do not work. -Thanks! - -REPLY [10 votes]: As suggested by another answer, we can deduce the desired number of roots -by consideration of the argument. I'd like to present a slightly more geometric -view of this in the language of winding numbers. -Let $f(z) = z^4+3iz^2+z-2+i$ be the polynomial at hand. -For any $R > 0$, let $D_R$ be the open half-disk of radius $R$ centered at $z = 0$ in the upper half-plane. More precisely, -$$D_R = \bigg\{ z \in \mathbb{C} : |z| < R \land \Im z > 0 \bigg\}$$ -Let $C_R = \partial D_R$ be the boundary of $D_R$ (oriented in the counterclockwise direction). If $R$ is chosen so that -$f(z) \ne 0$ on $C_R$, then the number of roots of $f(z)$ inside $D_R$ is given by -a contour integral -$$N_R = \frac{1}{2\pi i} \int_{C_R} \frac{f'(z)}{f(z)} dz\tag{*1}$$ -If $R$ is large enough so that $D_R$ contains all roots of $f(z)$ in the upper half-plane, then $N_R$ is the number of roots we seek. -We don't really need to evaluate $(*1)$ directly. -The function $\frac{f'(z)}{f(z)}$ has a local antiderivative $\log f(z)$. Start from some point on $C_R$, say $-R$, walk counterclockwise along $C_R$, and analytically continue $\log f(z)$ along the way. By the time $z$ reaches the starting point $-R$ again, the final value of $\log f(z)$ will differ from the initial value by an amount $2\pi i N_R$. -Taking the exponential of the analytic continuation of $\log f(z)$ along $C_R$, the number $N_R$ becomes the number of times $f(z)$ wraps around the origin as $z$ walks along $C_R$. This is the winding number of the "curve" $f(C_R)$ with respect to the origin. -Back to the original problem, assume we have picked an $R$ large enough. -$C_R$ consists of two pieces, a line segment and a semicircle -$$C_R = [ -R, R ] \cup \big\{ R e^{i\theta} : \theta \in [0,\pi ] \big\}$$ -For $z \in [ -R, R ] \subset \mathbb{R}$, we have -$$\begin{cases} \Re f(z) &= z^4 + z - 2,\\ \Im f(z) &= 3z^2+1 \end{cases} \quad\implies\quad \Im f(z) > 0 $$ -This means $f([-R,R])$ lies completely inside the upper half-plane. -Notice that when $z \to \pm \infty$, $\Re f(z) \to \infty$, $\Im f(z) \to \infty$ while $\frac{\Im f(z)}{\Re f(z)} \to 0$. The endpoints of $f([-R,R])$ lie in the $1^{st}$ quadrant near the positive $x$-axis. The contribution from $[-R,R]$ to the winding number is smaller than $\frac14$ -and vanishes as $R \to \infty$. -On the semicircle $z = Re^{i\theta}$, when $R$ is large, $f(z)$ is dominated by the $z^4$ term. As $\theta$ varies from $0$ to $\pi$, $z^4 = R^4 e^{i4\theta}$ wraps around the origin twice. So the contribution from the semicircle to the -winding number is $2$ plus another small number. -Combining these two pieces, we find the desired winding number -$$N_R = \verb/small number/ + ( 2 + \verb/another small number/ )$$ -Since $N_R$ is always an integer, $N_R = 2$ for large $R$. -As a result, the function $f(z)$ has two roots in the upper half-plane. -At the end is a picture showing what happens to $f(C_R)$ when $R = 2$.
The red section is $f([-R,0])$, the orange section is $f([0,R])$ and -the green and blue sections are those for $f(R e^{i\theta})$ where $\theta$ belongs to $[0,\frac{\pi}{2}]$ and $[\frac{\pi}{2},\pi]$ respectively. As one can see, -$R = 2$ is large enough and the image $f(C_2)$ wraps around the origin twice. -This means the two roots in the upper half-plane satisfy $|z| < 2$.<|endoftext|> -TITLE: Relation between points of inflection and saddle points -QUESTION [5 upvotes]: Let $I$ be an interval and $f\colon I \to \mathbb{R}$ a differentiable function. Consider the following definitions: -For $x_0 \in I$ the point $(x_0,f(x_0))$ is called a saddle point if $f'(x_0) = 0$ but $x_0$ is not a local extremum of $f$. -For $x_W \in I$ the point $(x_W,f(x_W))$ is called a point of inflection if there is a neighborhood $U$ of $x_W$ in $I$ such that $f'$ is strictly monotonic increasing (resp. decreasing) for $x < x_W$ on $U$ and strictly monotonic decreasing (resp. increasing) for $x > x_W$ on $U$. -What is the logical relation between saddle points and points of inflection? -My first intuitive guess was that a point $(x,f(x))$ is a saddle point iff it is a point of inflection and $f'(x) = 0$. However the implication "$\implies$" seems to be wrong. Consider the following counterexample: -$$ f(x) = \begin{cases} x^4 \cdot \sin\left(\frac{1}{x}\right) & x \neq 0 \\ 0 & x = 0 \end{cases} $$ -Then $(0,0)$ is a saddle point but not a point of inflection because the derivative oscillates on every neighborhood of $0$. -Is this correct so far? Is the other implication true? If so, how to prove it? - -REPLY [2 votes]: Your example does indeed show that a saddle point need not be an inflection point. (The function $x^2\sin(1/x)$ also works, but your example has the virtue of being continuously differentiable.) -In the other direction, if $(a,f(a))$ is a point of inflection and $f'(a) = 0,$ then $(a,f(a))$ is a saddle point. To see this, suppose WLOG that for some small $\delta > 0$, $f'$ strictly increases in $[a-\delta,a]$ and strictly decreases in $[a,a+\delta].$ In $[a-\delta,a)$ we have $f'(x)<0,$ because these values must be less than $f'(a)=0.$ The same reasoning shows that $f'(x) < 0$ for $x\in (a,a+\delta].$ The mean value theorem then shows $f$ strictly decreases on both $[a-\delta,a]$ and $[a,a+\delta].$ Hence $f$ strictly decreases on $[a-\delta,a+\delta].$ It follows that $f(a)$ is neither a local max. nor min. for $f$ at $a.$<|endoftext|> -TITLE: $\pi$ in imaginary numbers? -QUESTION [8 upvotes]: Look at the result of $(-1)^{1/10000000}$ on the Google calculator. You should get $$1 + 3.14159265 \times 10^{-7} i$$ -Why does $\pi$ occur in imaginary number operations that don't include $\pi$? - -REPLY [3 votes]: Let me refer to Euler's formula -$$e^{ix} = \cos x + i \sin x.$$ -There are many ways to 'derive' the formula, but I am not going to present a derivation here; you may find nice derivations on Wikipedia. -In some textbooks the exponential function $e^x$ is defined in other ways (for example, by an infinite series), and its inverse $\ln x$ is provided. If we have the exponential function and the logarithmic function, we can define arbitrary powers as -$$a^x = e^{x \ln a}.$$ -We hope that such a definition works for complex numbers. Unfortunately, a technical problem arises: the complex logarithm is not well-defined. -You can check that $e^{i\pi} = e^{3i\pi} = -1$. This says that the possible values of $\ln(-1)$ include $\pi i$ and $3\pi i$.
In fact, $e^z = e^{z+2n\pi i}$ for all integers $n$, so the complex exponential has period $2\pi i$. That problem is easily resolved: although the exponential function $\exp : \Bbb{C}\to\Bbb{C}-\{0\}$ is not one-to-one (so we cannot find its inverse), its restriction to the set of complex $z$ with $-\pi < \operatorname{Im}z \le \pi$ is bijective. Let us call its inverse $\operatorname{Ln} z$; many calculators adopt this definition of the complex logarithm.
-Now we can define the complex exponential as
-$$a^z := e^{z\operatorname{Ln} a}.$$
-In particular we get $(-1)^z = e^{\pi i z}$. For small real $t$,
-$$(-1)^t = \cos (\pi t) + i \sin (\pi t) \approx 1 + i\pi t$$
-(To derive the last approximation, consider the Taylor expansions of $\cos$ and $\sin$.) This is the reason you got that computation result.<|endoftext|>
-TITLE: On products of ternary quadratic forms $\prod_{i=1}^3 (ax_i^2+by_i^2+cz_i^2) = ax_0^2+by_0^2+cz_0^2$
-QUESTION [7 upvotes]: The equation,
-$$ (ax_1^2+by_1^2)(ax_2^2+by_2^2) = ax_0^2+by_0^2\tag1$$
-has the well-known solution when $a=b=1$,
-$$ (x_1^2+y_1^2)(x_2^2+y_2^2) = (x_1 y_2 + x_2 y_1)^2 + (x_1 x_2 - y_1 y_2)^2$$
-Hence the product of two sums of two squares is itself a sum of two squares.
-If we use one more factor, then there is an identity for general $a,b$, namely,
-$$ (ax_1^2+by_1^2)(ax_2^2+by_2^2)(ax_3^2+by_3^2) = ax_0^2+by_0^2\tag2$$
-where,
-$$x_0 =a \color{blue}{x_1 x_2 x_3} + b \big(-\color{blue}{x_1} y_2 y_3 + \color{blue}{x_2} y_1 y_3 + \color{blue}{x_3} y_1 y_2 \big)$$
-$$y_0 =a \big(-x_2 x_3 \color{brown}{y_1} + x_1 x_3 \color{brown}{y_2} + x_1 x_2 \color{brown}{y_3}\big) + b \color{brown}{y_1 y_2 y_3}$$
-Highlighted this way, one can immediately see the pattern.
-Q: Is there a similar identity for ternary quadratic forms,
-$$ (ax_1^2+by_1^2+cz_1^2)(ax_2^2+by_2^2+cz_2^2)(ax_3^2+by_3^2+cz_3^2) = ax_0^2+by_0^2+cz_0^2\tag3$$
-such that $x_0, y_0, z_0$ are integer functions of the other $x_i, y_i, z_i$, just as for $(2)$?
-
-REPLY [3 votes]: No, there cannot be a completely general identity. There cannot even be such an identity for $(a,b,c)=(1,1,1)$, since
-$$
-(1+0+0)(1+1+1)(0+1+4) = 15
-$$
-and by Legendre's three-square theorem $15$ cannot be written as the sum of three squares.
-Since we can add factors of $(1+0+0)$ without changing the product, it is clear that additional factors cannot resolve the problem in this case.<|endoftext|>
-TITLE: Show that $y^2=x^3+1$ has infinitely many solutions over $\mathbb Z_p$.
-QUESTION [5 upvotes]: I first compared it with how I would solve this over the real numbers. You would say:
-
-$y^2=\alpha$ has a solution for all $\alpha>0$, of which there are infinitely many.
-$x^3+1>0$ for all $x>-1$, of which there are also infinitely many.
-
-However I can't seem to extend this way of thinking to $\mathbb Z_p$. I have a strong hunch that I need to use Hensel's Lemma in some way, but I just can't see how.
-
-REPLY [2 votes]: Instead of appealing either to the Binomial Theorem, as Qiaochu did, or to Hensel, you can do it "analytically", knowing that in terms of the local uniformizing parameter $x$ at $(0,1)$, you must be able to expand $\eta=y-1$ as a series in $x$ with no constant term. Making that substitution, you get from $y^2=x^3+1$ to $\eta=(x^3-\eta^2)/2$, which you can view as a recursive procedure for getting the expansion of $\eta$.
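-(To watch the recursion converge, here is a small Python sketch I am adding; it iterates $\eta \leftarrow (x^3-\eta^2)/2$ on truncated polynomials with exact rational coefficients, and names like TRUNC are my own:)
-from fractions import Fraction
-
-TRUNC = 16                         # keep only exponents below 16
-
-def mul(p, q):                     # truncated product of sparse polynomials {exponent: coefficient}
-    r = {}
-    for i, a in p.items():
-        for j, b in q.items():
-            if i + j < TRUNC:
-                r[i + j] = r.get(i + j, Fraction(0)) + a * b
-    return r
-
-eta = {}                           # start from eta = 0
-for _ in range(6):                 # each pass fixes three more degrees of the expansion
-    sq = mul(eta, eta)
-    new = {3: Fraction(1, 2)}      # the x^3/2 term of (x^3 - eta^2)/2
-    for e, c in sq.items():
-        new[e] = new.get(e, Fraction(0)) - c / 2
-    eta = new
-
-print(sorted(eta.items()))         # degrees 3, 6, 9, 12, 15 with coefficients 1/2, -1/8, 1/16, -5/128, 7/256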
The result is
-$$
-\eta=\frac12x^3 - \frac18x^6 + \frac1{16}x^9 - \frac5{128}x^{12} + \frac7{256}x^{15}-\cdots\,,
-$$
-and of course this is exactly the Binomial expansion. Just make a substitution $x\mapsto x_0$ for $x_0$ small enough, and you'll get a convergent series.
-I should say that this procedure is much more fun if you do it in the neighborhood of the point $\Bbb O$ at infinity of this curve, that is, $(0:1:0)$ projectively. There, the description of the curve in terms of $\xi=x/y$ and $\zeta=1/y$ is $\zeta=\xi^3+\zeta^3$, the l.u.p. being $\xi$, so you get a $\Bbb Z$-series for $\zeta$ in terms of $\xi$. As far as I can see, this expansion doesn't come out of the Binomial Theorem.<|endoftext|>
-TITLE: Determinant of Tridiagonal matrix
-QUESTION [9 upvotes]: I'm a bit confused by this determinant.
-We have the determinant
-$$\Delta_n=\left\vert\begin{matrix}
-5&3&0&\cdots&\cdots&0\\
-2&5&3&\ddots& &\vdots\\
-0&2&5&\ddots&\ddots&\vdots\\
-\vdots&\ddots&\ddots&\ddots&\ddots&0\\
-\vdots& &\ddots&\ddots&\ddots&3\\
-0&\cdots&\cdots&0&2&5\end{matrix}
-\right\vert$$
-I compute $\Delta_2=19$, $\Delta_3=65$.
-Then I would like to find a relation for $n\geq 4$ which links $\Delta_n, \Delta_{n-1}$ and $\Delta_{n-2}$ and thus find an expression for $\Delta_n$. How could we do that for $n\geq 4$?
-Thank you
-
-REPLY [4 votes]: Let us prove the result. Write the determinant of the tridiagonal matrix as $\Delta_{n}$ and perform the following calculation (the second step subtracts $\frac{b_{n-1}}{a_{n}}$ times the last row from row $n-1$; the last step expands along the final column and then uses linearity of the determinant in the modified corner entry).
-$$
-\begin{align}
-\Delta_{n}=& \det
-\begin{bmatrix}
-a_{1} & b_{1} & 0 & \cdots & 0 & 0 & 0 \\
-c_{1} & a_{2} & b_{2} & \ddots & \vdots & \vdots & \vdots \\
-0 & c_{2} & a_{3} & \ddots & a_{n-2} & b_{n-2} & 0 \\
-\vdots & \vdots & \vdots & \ddots & c_{n-2} & a_{n-1} & b_{n-1} \\
-0 & 0 & 0 & \cdots & 0 & c_{n-1} & a_{n}
-\end{bmatrix}
-\\ \\ =& \det
-\begin{bmatrix}
-a_{1} & b_{1} & 0 & \cdots & 0 & 0 & 0 \\
-c_{1} & a_{2} & b_{2} & \ddots & \vdots & \vdots & \vdots \\
-0 & c_{2} & a_{3} & \ddots & a_{n-2} & b_{n-2} & 0 \\
-\vdots & \vdots & \vdots & \ddots & c_{n-2} & a_{n-1}-c_{n-1}\frac{b_{n-1}}{a_{n}} & 0 \\
-0 & 0 & 0 & \cdots & 0 & c_{n-1} & a_{n}
-\end{bmatrix}
-\\ \\ =& a_{n} \Delta_{n-1} - b_{n-1}c_{n-1} \Delta_{n-2}
-\end{align}
-$$
-Based on this formula, the recurrence can be written as a matrix-vector product, which we then use to compute $\Delta_{n}$.
-$$
-\begin{align}
-\begin{bmatrix}
-\Delta_{n} \\
-\Delta_{n-1}
-\end{bmatrix}
-=&
-\begin{bmatrix}
-a_{n} & -b_{n-1}c_{n-1} \\
-1 & 0
-\end{bmatrix}
-\begin{bmatrix}
-\Delta_{n-1} \\
-\Delta_{n-2}
-\end{bmatrix}
-\\ =&
-\prod_{k=4}^n
-\begin{bmatrix}
-a_{n+4-k} & -b_{n-k+3}c_{n-k+3} \\
-1 & 0
-\end{bmatrix}
-\begin{bmatrix}
-\Delta_{3} \\
-\Delta_{2}
-\end{bmatrix}
-\end{align}
-$$
-In this problem the entries are constant along each diagonal, $a_{i}=5$, $b_{i}=3$, $c_{i}=2$, so the equation simplifies as follows.
-$$
-\begin{bmatrix}
-\Delta_{n} \\
-\Delta_{n-1}
-\end{bmatrix}
-=
-\begin{bmatrix}
-5 & -6 \\
-1 & 0
-\end{bmatrix}
-^{n-3}
-\begin{bmatrix}
-65 \\
-19
-\end{bmatrix}
-$$
-Next, we compute the eigenvalues and eigenvectors and diagonalize the matrix.
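-(Before the diagonalization, a quick numerical cross-check of the recurrence; a NumPy sketch I am adding, not part of the derivation:)
-import numpy as np
-
-def delta(n):                      # determinant of the n x n tridiagonal matrix with diagonals 5 / 3 / 2
-    M = 5*np.eye(n) + 3*np.eye(n, k=1) + 2*np.eye(n, k=-1)
-    return round(np.linalg.det(M))
-
-for n in range(4, 9):
-    assert delta(n) == 5*delta(n-1) - 6*delta(n-2)   # the recurrence just derived
-print(delta(2), delta(3))                            # 19 65, matching the base values above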
-
-$$
-\begin{align}
-\begin{bmatrix}
-\Delta_{n} \\
-\Delta_{n-1}
-\end{bmatrix}
-=&
-\begin{bmatrix}
-3 & 2 \\
-1 & 1
-\end{bmatrix}
-\begin{bmatrix}
-3^{n-3} & 0 \\
-0 & 2^{n-3}
-\end{bmatrix}
-\begin{bmatrix}
-3 & 2 \\
-1 & 1
-\end{bmatrix}
-^{-1}
-\begin{bmatrix}
-65 \\
-19
-\end{bmatrix}
-\\ =&
-\begin{bmatrix}
-3^{n-2}-2^{n-2} & 3 \cdot 2^{n-2} -2 \cdot 3^{n-2} \\
-3^{n-3} - 2^{n-3} & 3 \cdot 2^{n-3} - 2 \cdot 3^{n-3}
-\end{bmatrix}
-\begin{bmatrix}
-65 \\
-19
-\end{bmatrix}
-\end{align}
-$$
-Finally, $\Delta_{n}$ is
-$$
-\begin{align}
-\Delta_{n}=&
-65(3^{n-2}-2^{n-2})+19(3 \cdot 2^{n-2} -2 \cdot 3^{n-2}) \\ =&
-3^{n+1} - 2^{n+1}
-\end{align}
-$$<|endoftext|>
-TITLE: What is the geometrical meaning of the total differential?
-QUESTION [5 upvotes]: Could anybody give me a geometrical explanation of the total differential, if there is one? For me (a non-mathematician) it just looks like the generalization of the derivative to more dimensions, but in those higher dimensions it doesn't seem to have a geometrical meaning. Am I right? Thank you.
-
-REPLY [4 votes]: Just to make sure we talk about the same thing: the differential of a (differentiable) map $\mathbb{R}^n\rightarrow \mathbb{R}$ is defined as
-$$df = \frac{\partial f}{\partial x^1} dx^1 + \dots+ \frac{\partial f}{\partial x^n} dx^n$$ There are more complex cases (vector-valued maps or complex functions) which I'll ignore here.
-I assume you know what $\frac{\partial f}{\partial x^i} $ is, so the question remains what $dx^i$ is. Actually, if you do it stringently, it's just the same: it's the total differential of the coordinate function $x^i$, which maps a vector $v$ with coordinates $v= \sum v^l e_l$ to its $i$th component, i.e. to the coefficient of $e_i$. So $x^i(v)= v^i$.
-Now in general the differential of a map is the first-order (linear) approximation of that map, which in the case of the coordinate functions (which are linear maps) is just the map itself: $dx^i(v) = v^i$. This looks a bit tautological, and in Euclidean space it, to some extent, is, but in more general spaces it is important to keep track of the additional information at which point this is evaluated. You don't just look at some vector $v$ but at some vector $v$ attached to some point of the space, and also the differential is restricted to the 'tangent' space at that point. In Euclidean space you can, up to some extent, ignore this additional complexity, at the cost of being less precise, which can then cause confusion.
-Back to the (geometrical) meaning of the total differential: with the excursus about coordinate functions in mind you can now easily verify that $$df(v) =\frac{\partial f}{\partial x^1} dx^1 (v) + \dots+ \frac{\partial f}{\partial x^n} dx^n (v) = \sum_l \frac{\partial f}{\partial x^l}v^l $$
-is just the directional derivative of $f$ in direction $v$ (at some point $p$, say), which you can also write as $$\frac{d}{dt}f(p+tv)\Big|_{t=0}$$ which is just the rate of change of $f$ in direction $v$ at $p$, or the slope of the tangent to the graph of $f$ at $p$ in direction $v$ (similar to the case of real functions).
-Yet another way you may have already encountered to write this down is $$\langle \nabla f(p),v\rangle$$
-the scalar product of the gradient of $f$ (at $p$) with $v$, which has, of course, the same geometrical meaning. The difference is that in the first case it is expressed by using a linear map ($df(p)$), in the second case by a vector ($\nabla f(p)$).
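-(A tiny numerical illustration of $df(p)(v)=\langle\nabla f(p),v\rangle$; a Python sketch I am adding, with a sample function of my own choosing:)
-import numpy as np
-
-def f(p):                       # sample scalar field on R^2: f(x, y) = x^2 y + sin y
-    x, y = p
-    return x**2 * y + np.sin(y)
-
-def grad_f(p):                  # its gradient, computed by hand
-    x, y = p
-    return np.array([2*x*y, x**2 + np.cos(y)])
-
-p = np.array([1.0, 2.0])
-v = np.array([0.3, -0.7])
-t = 1e-6
-print((f(p + t*v) - f(p)) / t)  # directional derivative d/dt f(p+tv) at t=0, approximately
-print(grad_f(p) @ v)            # <grad f(p), v>, which agrees to about six decimal places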
In linear algebra you may have learned that linear functionals and vectors are in one-to-one correspondence with each other, and in case you have a scalar product this correspondence is defined through the mapping of a vector $X$ to the linear map $v\mapsto \langle X,v\rangle$. This is exactly the correspondence between $\nabla f$ and $df$ mentioned earlier.
-So, as far as I'm concerned, it has a very precise geometrical meaning (and is used throughout a branch of mathematics called differential geometry to analyze the geometry of surfaces, curves and more general objects called manifolds).
-(My apologies to all those who may think this is not exact enough, and similarly to everyone for whom this is still too complicated. And to all those who have already seen this a thousand times) ;-)<|endoftext|>
-TITLE: Is a path-connected bijection $f\colon \Bbb{R}^n \to \Bbb{R}^n$ continuous?
-QUESTION [9 upvotes]: While thinking about this question I was asking myself whether a path-connected bijection $f\colon \Bbb{R}^n \to \Bbb{R}^n$ has to be continuous for $n>1$.
-If we drop the requirement that $f$ is bijective, then it is not true, as in the connected case.
-I was wondering if this question is maybe easier? I have no intuition whether it is true or not. On one hand I think there are so many path-connected sets that there will be some ugly counterexample; on the other hand the reals are "nice".
-By a path-connected function I mean a function between topological spaces under which the image of every path-connected set is path-connected.
-
-REPLY [10 votes]: Here's how you can get a counterexample for $n>2$ assuming the continuum hypothesis. Note that if $A\subseteq\mathbb{R}^n$ is path-connected and $x,y\in A$, there is a path-connected closed subset $B\subseteq A$ containing $x$ and $y$ (namely, the image of a path between them). So for $f$ to be path-connected, it suffices for $f(A)$ to be path-connected whenever $A$ is a closed path-connected subset of $\mathbb{R}^n$.
-This is very useful, because there are only $\mathfrak{c}$ different closed subsets of $\mathbb{R}^n$. First, partition $\mathbb{R}^n$ into $\mathfrak{c}$ sets $(S_\alpha)_{\alpha<\mathfrak{c}}$, each of which intersects every closed path-connected subset with more than one point in $\mathfrak{c}$ points (you can do this by an induction of length $\mathfrak{c}$, where at the $\alpha$th stage you add a point of the $\gamma$th closed path-connected set to $S_\beta$ for all $\beta,\gamma<\alpha$). Also, enumerate all quadruples $(A,x,y,u)$, where $A$ is a path-connected closed subset of $\mathbb{R}^n$, $x,y\in A$, and $u\in\mathbb{R}^n$, with order-type $\mathfrak{c}$.
-Now define $f$ by an induction of length $\mathfrak{c}$. At the $\alpha$th stage of the induction, we want to define $f$ on $\{x,y\}\cup (A\cap S_\alpha)$ such that $f(A\cap S_\alpha)$ contains a smooth path from $f(x)$ to $f(y)$, where $(A,x,y,u)$ is the $\alpha$th quadruple in our enumeration. To do this, we need the following lemma (here is where we use CH and the fact that $n>2$):
-
-Lemma: Let $n>2$, let $a,b\in \mathbb{R}^n$ and let $g_1,g_2,\ldots:[0,1]\to\mathbb{R}^n$ be a countable collection of smooth paths and $c_1,c_2,\ldots\in\mathbb{R}^n$ be a countable collection of points distinct from $a$ and $b$. Then there is a smooth path from $a$ to $b$ that does not intersect any of the $g_i$ except possibly at $a$ or $b$ and does not pass through any of the $c_i$.
-
-To apply this lemma, let $a=f(x)$ and $b=f(y)$ (we may have already defined these values of $f$; if not, define them arbitrarily to be some points not in the image of $f$), let the $g_i$ be the smooth paths which we have already defined to be in the image of $f$ (there is one such path from each previous stage, so by CH, there are only countably many such paths), and let the $c_i$ be the other various points we have already defined to be in the image of $f$ (there are finitely many such points from each previous stage, so countably many in total by CH). This gives us a smooth path from $a$ to $b$ which we can make the image of $A\cap S_\alpha$ under $f$ (except for the countably many points of $A\cap S_\alpha$ where we may have already defined $f$). In addition, let us define $f$ at a couple more points to make sure $u$ is in both the domain and image of $f$.
-At the end of this induction, we will have a path-connected bijection $f:\mathbb{R}^n\to\mathbb{R}^n$. It is easy to see that we can arrange for $f$ to be discontinuous (at the $\alpha$th stage, we are free to choose any bijection at all between $A\cap S_\alpha$ and our path).
-It remains to prove the Lemma. To do so, note that the space $P$ of all smooth paths from $a$ to $b$ is a complete metric space in the natural $C^\infty$ topology. For each $i$, let $U_i$ be the set of smooth paths that do not pass through $c_i$. For each $i$, let $V_i$ be the set of smooth paths that do not intersect $g_i$ unless $a$ and/or $b$ is in the image of $g_i$, in which case they intersect only at $a$ and/or $b$ and have different derivatives there. Then by standard transversality theory (using the fact that $n>2$), each $U_i$ and $V_i$ is an open dense subset of $P$. By the Baire category theorem, there is an element of $P$ that lies in every $U_i$ and $V_i$, and this is our desired path.
-As a final note, I expect that CH is not actually necessary here. However, I don't quite see how to prove a version of the Lemma that works for a collection of $<\mathfrak{c}$ paths, rather than just for a countable collection of paths. This question on MO seems related to this issue.<|endoftext|>
-TITLE: Using epsilon delta, prove the max function of continuous functions f,g is also continuous
-QUESTION [5 upvotes]: First of all I know that this question has been asked already, but I'm looking for a proof simply using the definition of continuity ($\epsilon$, $\delta$).
-Suppose $f,g:D \to \mathbb R$ are both continuous on $D$. Define $h:D \to \mathbb R$ by $h(x)=\max\{f(x),g(x)\}$. Show $h$ is continuous on $D$.
-So, there should be two cases. Let $a$ be fixed.
-Case 1: $\lvert f(a)-g(a) \rvert >0$
-Case 2: $f(a)=g(a)$
-I'm not sure what to do from here.
-I'm having trouble grasping how to carry out proofs regarding continuous functions. If anyone can give me some insight, that'd be much appreciated.
-
-REPLY [3 votes]: The first case is easy. If $|f(a) - g(a)|>0$, then by continuity there is a small neighborhood of $a$ on which $f-g$ keeps the same (nonzero) sign. Hence $h(x)$ is equal to $f$ or to $g$ throughout this neighborhood, and $h(x)$ is continuous at $a$.
-The second case: $f(a) = g(a)$. Let $\epsilon >0$.
-
-Since $f$ is continuous there is a $\delta_1 > 0$ such that $|f(x) - f(a)|<\epsilon$ for $|x - a|< \delta_1$.
-Since $g$ is continuous there is a $\delta_2 > 0$ such that $|g(x) - g(a)|<\epsilon$ for $|x - a|< \delta_2$.
-$$|h(x) - h(a)| = |\max\{f(x),g(x)\} - h(a)| \leq \max\{|f(x) - h(a)|,|g(x)-h(a)|\}$$
-Since $h(a) = f(a) = g(a)$,
-$$\max\{|f(x) - h(a)|,|g(x)-h(a)|\} = \max\{|f(x) - f(a)|,|g(x)-g(a)|\}$$
-Let $\delta = \min\{\delta_1, \delta_2\}$; if $|x - a|<\delta$, then
-$$|f(x) -f(a)|<\epsilon \qquad \text{and} \qquad |g(x) - g(a)|<\epsilon$$
-and then
-$$|h(x) - h(a)| \leq \max\{|f(x) - f(a)|,|g(x)-g(a)|\} < \epsilon$$<|endoftext|>
-TITLE: unique fixed point problem
-QUESTION [5 upvotes]: Let $f: \mathbb{R}_{\ge0} \to \mathbb{R} $ where $f$ is continuous and differentiable on $\mathbb{R}_{\ge0}$, such that $f(0)=1$ and $|f'(x)| \le \frac{1}{2}$.
-Prove that there exists exactly one $ x_{0}$ such that $f(x_0)=x_0$.
-
-REPLY [10 votes]: Consider $g(x)=f(x)-x$. We have that $g'(x)=f'(x)-1\le-1/2<0$ and that $g(0)=1$. So $g(x)$ is strictly decreasing and $g(0)>0$, so there must be a unique $x_0$ such that $g(x_0)=0=f(x_0)-x_0$.
-Remark: of course we have that $\lim_{x \to +\infty}g(x) = -\infty$, which gives existence of such an $x_0$.
-
-REPLY [4 votes]: Note that $1-{1 \over 2} x \le f(x) \le 1+ {1 \over 2}x$ and so
-$f(x) -x \le 1-{1 \over 2}x$.
-Hence $f(0) -0 = 1$, but for $x \ge 2$ we have $f(x) -x \le 0$. Hence there
-must be some $x \in [0,2]$ such that $f(x) -x = 0$.
-Now suppose $f(x_1) = x_1, f(x_2) = x_2$. Then
-$|x_1-x_2| = |f(x_1)-f(x_2)| \le {1 \over 2} |x_1-x_2|$ by the mean value
-theorem, so we must have $x_1 = x_2$.<|endoftext|>
-TITLE: Why do people present isotopies to demonstrate that two spaces are homeomorphic?
-QUESTION [7 upvotes]: In looking for a rigorous proof of the fact that Alexander's Horned Sphere along with the volume that it bounds is homeomorphic to a 3-ball, I have found a number of posts on the web that attempt to explain this fact by showing that we can continuously deform a 3-ball into Alexander's Horned Sphere and the volume that it bounds without moving two distinct points into the same place at any time. The thing is, that's not a homeomorphism, that's an isotopy. In general, when I would like to find a proof of the fact that two spaces are homeomorphic, I often just find numerous links to some video or picture that illustrates an isotopy between the two spaces. However, as far as I can tell, the fact that two topological spaces are isotopic is in no way a sufficient condition for those spaces being homeomorphic, and I never see any justification for the fact that these isotopies yield homeomorphisms. I find this very frustrating because I can generally deduce that two spaces are isotopic with relative ease, but find it much trickier to see that two spaces are homeomorphic. Thus, I end up learning nothing from the time that I spend researching these questions. I guess what I'm asking, then, is: why do I see these two concepts equated so frequently in specific examples? Moreover, under what conditions is isotopic equivalent to homeomorphic? My guess is that there must be some well-known fact relating the two concepts that I am unaware of, but if this is the case, then I can't seem to find this fact anywhere. Thank you for your time, any answers will be greatly appreciated.
-
-REPLY [6 votes]: 1) Isotopy is an equivalence relation on subsets of some space $Y$. (Better yet, on embeddings $X \to Y$.)
It usually says that $f: X \to Y$ and $g: X \to Y$ are isotopic if there is a homotopy through embeddings between them. In particular, it's not subsets that are usually called isotopic, it's maps. (One can pass to subsets if one so desires, by declaring two subsets isotopic when there are isotopic embeddings of $X$ whose images are those subsets, but... blah.) But in particular, even if you do this, it's automatic that the subsets are homeomorphic. One isn't really interested in the homeomorphism type of the subset, but rather in how it's embedded. (See 3.)
-2) As mentioned in the comment, this is no longer the same thing if your map is just injective: there are lots of injective maps that are not embeddings. One example is the letter 8: there is an injective map from $\Bbb R$ whose image is the figure 8, which is not homeomorphic to $\Bbb R$. But if your domain $X$ is compact (and the codomain Hausdorff, but I have never heard anybody care about isotopy when the codomain isn't, like, a manifold) then this is the same thing as an embedding.
-3) Isotopy, as stated, is a dumb equivalence relation. Take knots in $S^3$. Unless we modify the definition, every "tame" knot is isotopic to the unknot (and tame knots include every knot you've ever seen). You really want to either work with i) ambient isotopy or ii) some smooth/locally flat/PL condition on your embeddings and your isotopies. (i and ii are basically equivalent.) Then, say, the trefoil and circle are no longer isotopic.<|endoftext|>
-TITLE: Site percolation model that cannot be obtained from a bond percolation model
-QUESTION [6 upvotes]: It is easy to obtain a site percolation model from a bond percolation model on a graph $G$ using the covering graph $G_c$ of $G$. I wondered if one can obtain any site percolation model from some bond percolation model, and I read in Geoffrey Grimmett's book that this is not true. Nevertheless he does not give any counterexample, and I cannot think of one. Can anybody give me a counterexample?
-
-REPLY [2 votes]: It seems as though "covering graph" is just another name for "line graph".
-As you can find in the "characterization" section of that wiki page, not every graph appears as the line graph of some other graph. A small 5-vertex example is shown there. I am not familiar with percolation, but I think this is basically what you are asking.<|endoftext|>
-TITLE: How can Hamilton's quaternion equation be true?
-QUESTION [8 upvotes]: I'm reading Ken Shoemake's explanation of quaternions in David Eberly's book Game Physics. In it, he describes the $\mathbf{i}, \mathbf{j}, \mathbf{k}$ components of quaternions as all equal to $\sqrt{-1}$. Then it states Hamilton's quaternion equation:
-$\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{ijk} = \mathbf{-1}$
-If $\mathbf{i} = \mathbf{j} = \mathbf{k} = \sqrt{-1}$, then it makes sense how $\mathbf{i}^2 = \mathbf{-1}$. But $\mathbf{ijk}$ should equal $\mathbf{i}^3$, not $\mathbf{i}^2$. How does $\mathbf{ijk} = \mathbf{-1}$?
-The book's notation says that lowercase bold letters denote a vector, so I'm thinking of $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ as the basis of the quaternion, similar to the basis of a vector, and can be written $(\sqrt{-1}, \sqrt{-1}, \sqrt{-1})$. Having the result of $\mathbf{ijk}$ as a bold $\mathbf{-1}$ to me implies that it is the vector $(-1, -1, -1)$. Is this understanding correct? In this context, what does it mean to square vector $\mathbf{i}$?
If it equals another vector, then the only operation that makes sense is the cross product, but the cross product of a vector and itself is the zero vector.
-
-REPLY [2 votes]: In it, he describes the $\mathbf{i}, \mathbf{j}, \mathbf{k}$ components of quaternions to all equal $\sqrt{-1}$.
-
-Unless he is constructing quaternions from complex numbers, there is no $\sqrt{-1}$ available to be equated with anything.
-Instead, these are three independent equations $i^2 = -1, \quad j^2=-1, \quad k^2=-1$ that are used as part of the definition of a multiplication rule on the 4-dimensional vector space generated by a set of four different vectors that are (arbitrarily) assigned the names $1,i,j,k$. There are many un-interesting multiplication rules such as $x \ast y = x$ for all vectors $x,y$, but Hamilton found a much more interesting one.
-
-Then it states Hamilton's quaternion equation:
-$\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{ijk} = \mathbf{-1}$
-
-The equation $ijk=-1$ is a mnemonic device for reproducing the full set of defining equations, but is not itself part of the definition.
-There are $4\times 4=16$ products of ordered pairs of the generators, and all of them need to be specified in order to define multiplication. Here only 3 products of pairs are given, and one is supposed to infer the rest from $ijk=-1$ by multiplying that equation on the left or the right by $i,j$ and $k$ in all possible ways, assuming associativity, and applying the previous three rules.
-The other rules of quaternion multiplication are $1x = x1 = x$ for all $x$; the $i,j,k$ anticommute when distinct pairs are multiplied (so $ij = -ji$ et cetera); and the cyclic permutations of $ij = k$ (so $jk=i$ and $ki=j$).
-This multiplication law is linear in each variable, distributive, associative, noncommutative, and (most unusually) every nonzero element has a multiplicative inverse.
-
-But $\mathbf{ijk}$ should equal $\mathbf{i}^3$, not $\mathbf{i}^2$.
- How does $\mathbf{ijk} = \mathbf{-1}$?
-
-For example, $ijk = i(jk) = i(i) = -1$.
-
-The book's notation says that lowercase bold letters denote a vector, so I'm thinking of $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ as the basis of the quaternion, similar to the basis of a vector,
-
-They are 3 of the 4 basis vectors. Every quaternion has a unique expression as $a\mathbf{1} + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ for some numbers $a,b,c,d$.
-
-Having the result of $\mathbf{ijk}$ as a bold $\mathbf{-1}$ to me implies that it is the vector $(-1, -1, -1)$.
-
-It is the vector $(-1)\mathbf{1}$, where the $-1$ is a real number and the boldface $\mathbf{1}$ is one of the basis elements of the quaternions as a vector space.
-
-In this context, what does it mean to square vector $\mathbf{i}$?
-
-To use Hamilton's multiplication law to multiply that vector by itself. The result will, by construction, be equal to some other quaternion. The quaternion that it equals happens to be $(-1)\mathbf{1}$, also known as $-1$.<|endoftext|>
-TITLE: Given $2n$ points in the plane, prove we can connect them with $n$ nonintersecting segments
-QUESTION [6 upvotes]: Given $2n$ points in the plane such that no three points lie on one line. Prove that it is possible to draw $n$ segments such that each segment connects a pair of these points and no two segments intersect.
-
-So I know there is a solution that examines 4 points at a time and their two line segments, and if the segments intersect, changes the positioning of the two segments for those points so they don't.
It can be shown that the sum of the lengths of the line segments strictly decreases, so the process must terminate eventually, as there are only a finite number of possible configurations given the $2n$ points.
-I was wondering if this can be done in the following way:
-Lemma: Given $2n$ points in the plane, we can partition the plane into two sets $A_1,A_{n-1}$, where $A_1$ contains 2 points, $A_{n-1}$ contains the other $2n-2$ points, in a way such that the segment connecting the two points in $A_1$ does not intersect any possible line segment formed by connecting two points of $A_{n-1}$ (basically, keeping $A_1,A_{n-1}$ in disjoint regions).
-Using this lemma, we first partition the $2n$ points into $A_1,A_{n-1}$. Then we could apply the lemma to $A_{n-1}$, partitioning the plane into sets $A_1,A_2,A_{n-2}$ such that $A_1,A_2$ contain 2 points each, with the segments connecting those pairs of points intersecting neither any of the possible line segments formed by connecting two points of $A_{n-2}$, nor each other.
-Then we could repeat the lemma until we can finally partition the plane $P=A_1\cup A_2\cup A_3\cup\cdots\cup A_{n}$, with all the $A_i$ disjoint.
-The lemma seems kind of obvious (based on trying examples): just partition the plane using a line that separates two of the extreme points in the plane from the rest (e.g. the two "lowest" ones in the plane); however I do not know how to rigorously prove it.
-Does anyone have any idea of how to prove this lemma? Also could someone check if this argument works?
-Thanks!
-
-REPLY [2 votes]: Your argument works fine in an inductive proof, under the hypothesis of no collinear triples. For $n>1$ the convex hull of your points is a convex polygon that contains all the points, and any side of this polygon contains exactly two of the points (vertices of the polygon) and is disjoint from the convex hull of the remaining points. Those two points can be joined by a segment, and the disjointness condition guarantees that any segments provided by the induction hypothesis for the remaining points will not cross that segment.
-I think it could even be adapted to work without the non-collinear hypothesis (as long as the points are distinct), though you need to be a bit more careful since one cannot necessarily pair two vertices of the convex hull; rather an appropriate pair of points on the boundary must be chosen.<|endoftext|>
-TITLE: What is an intuitive definition for "conjugate" in Group Theory?
-QUESTION [34 upvotes]: In Abstract Algebra, I learned about "conjugation" in the context of a subgroup $H$ being a 'normal' subgroup of $G$ when $xhx^{-1}\in H$ for any $x\in G$ and $h\in H$. But this is not the first time I've seen the word 'conjugate'. The other times I've seen this are in pre-calculus, when trying to rationalize a denominator, or in the case where $(x+y)$ is the conjugate of $(x-y)$. Does the Group Theory version of conjugate have any link to the pre-calculus version (and other uses)?
-
-REPLY [4 votes]: The notion of conjugacy was first developed by Cauchy for permutation groups. For permutation groups it's extremely simple to understand, and in fact since all groups are isomorphic to some permutation group, you can take this as a general understanding.
-Say I've got $10$ different objects which I'm swapping about. I can consider two permutations: one in which I swap the first two, and another in which I swap the last two. But these permutations are really the same thing, right?
They're the same permutation, just done on different objects - I perform the same "action" in both cases, if you see what I mean. For a more complex case, imagine if I take the first five objects and rotate their positions, then swap the first and fifth. This is "the same" as if I took the last five objects and performed the same action on them (the mere fact that you're able to understand what I mean by "perform the same action on them" without my having to explain it in detail is proof of this).
-Two permutations are conjugate if they are "the same permutation done to different objects" in this sense. How can we make this more precise? Let's work out an example: the permutations $P = (1\ 2)$ and $Q = (3\ 4)$, on, say, the set of numbers between $1$ and $10$. The second one is just $P$, but done to $3$ and $4$ rather than $1$ and $2$. So consider a permutation $S$ which sends $4$ to $2$ and $3$ to $1$ (say, $S = (1\ 3)(2\ 4)$). We have:
-$$P=S^{-1}QS$$
-We first of all apply $S$ - this is basically just "renaming" the elements of our set. Then we apply $Q$, and then we undo $S$ to get back to our original "naming scheme". It's admittedly a bit hard to explain this in writing, but hopefully with some thinking you should be able to see that the existence of an $S$ such that $P=S^{-1}QS$ really does capture the idea that $P$ and $Q$ are "the same thing done to different objects".
-By the way, for finite permutation groups every permutation can be written as a product of disjoint cycles, and conjugacy is equivalent to having the same "structure" in one's cycle decomposition. Defining exactly what I mean by "structure" is one of those things that's hard to explain in words but easy to see with an example. The permutations $(3\ 4)(5\ 6\ 8)$ and $(2\ 3)(1\ 4\ 9)$ are clearly "the same", but done to different objects. The fact that both are the product of a 2-cycle and a disjoint 3-cycle suffices to show that they're conjugate.<|endoftext|>
-TITLE: Computing $(1+\cos \alpha +i\sin \alpha )^{100}$
-QUESTION [6 upvotes]: How to prove that
-$$(1+\cos \alpha +i\sin \alpha )^{100} = 2^{100}\left( \cos \left(\frac{\alpha}{2}\right)\right)^{100} \left( \cos \left(\frac{100\alpha}{2}\right)+i\sin \left(\frac{100\alpha}{2}\right)\right)$$
-I just need a hint. I tried to write $1+\cos \alpha +i\sin \alpha$ in polar form and use De Moivre's theorem. But it was impossible to compute $\arctan \frac{\sin \alpha}{1+\cos \alpha}$.
-
-REPLY [3 votes]: The figure below should make it easy to simplify the expression
-$\arctan \dfrac{\sin\alpha}{1+\cos\alpha}$.
-Note the isosceles triangle; you are looking for the angle at vertex $A$. (Algebraically: $1+\cos\alpha = 2\cos^2\frac{\alpha}{2}$ and $\sin\alpha = 2\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}$, so that angle is $\frac{\alpha}{2}$.)<|endoftext|>
-TITLE: Trouble finding the upper bound for a certain sum.
-QUESTION [5 upvotes]: I have encountered the following problem and I am curious how to solve it.
-$\textrm{Given }a_{n+1} = a_{n}(1 - \sqrt{a_{n}}) \textrm{, where } a_{i} \in (0,1) \textrm{, } i = \overline{1,n}$
-I have proved that $(a_{n})_{n\in\mathbb{N}}$ is decreasing and now I have to prove that $a_{1}$ is an upper bound of
-$b_{n} = {a_{1}^2} + {a_{2}^2} + \cdots + {a_{n}^2}$.
-I have no idea how to do this. I have tried in all sorts of ways but only get to something like
-$b_{n} < {n}\cdot{a_{1}^2}$ or $b_{n} < {n}\cdot{(1-\sqrt{a_{1}})^2}$
-which is not even close to what the upper bound must be.
-I feel like it's a common trick that you have to use to solve this, but I cannot find it.
-
-REPLY [2 votes]: We induct on $n$. This is clearly true when $n=1$, as $a_1^2\leq a_1$.
-
-Suppose this is true for a sequence of length $n$. Let's show this for a sequence of length $n+1$.
-We notice that $a_1^2+a_2^2+a_3^2+\dots+a_n^2+a_{n+1}^2=a_1^2+\left(a_2^2+a_3^2+\dots+a_n^2+a_{n+1}^2\right)$, where the bracketed part is a sum over a sequence of length $n$.
-Indeed, $a_2^2+a_3^2+\dots+a_{n+1}^2$ runs over a sequence of length $n$: if we relabel $a_i$ as $a_{i-1}$, it looks like $a_1^2+a_2^2+\dots+a_n^2$. This can be done since the sequence is defined recursively, with $a_{i+1}$ depending solely on $a_i$ in the same way that $a_{i+2}$ depends on $a_{i+1}$, independently of where $a_i$ sits in the sequence.
-By the induction hypothesis, we have $\left(a_2^2+a_3^2+\dots+a_n^2+a_{n+1}^2\right)\leq a_2$.
-Hence it suffices to show that $a_1^2+a_2\leq a_1$.
-Substituting the value of $a_2$, we have $a_1^2+a_1\left(1-\sqrt{a_1}\right)\leq a_1$.
-This reduces to $a_1^2\leq a_1\sqrt{a_1}$, which is true as $a_1\in(0,1)$.
-By mathematical induction, this is true for all $n$.<|endoftext|>
-TITLE: $p$-adic logarithm is injective if $p > 2$?
-QUESTION [5 upvotes]: Define the $p$-adic logarithm $$\log_p(1 + x) = \sum_{i = 1}^\infty (-1)^{i-1}x^i/i.$$ I know that $\log_p$ is a homomorphism from $U_1$ to the additive group of $\mathbb{Q}_p$, where $U_1$ is the subset of elements of $\mathbb{Q}_p$ of the form $1 + x$, with $|x|_p < 1$. How do I see that it is injective if $p > 2$?
-
-REPLY [8 votes]: This is a really interesting issue when you look at it from a more advanced standpoint. But here's how you do it when you consider the log to be defined for elements $z$ of $\Bbb Q_p$ with $|z|_p<1$. Since you are restricting to elements of $\Bbb Q_p$, this means that $z=p\zeta$, where $\zeta\in\Bbb Z_p$, the $p$-adic integers.
-Keeping this in mind, I'll substitute $x=pt$ in your series expression, to give $$
-\log(1+pt)=\sum_{i=1}^\infty(-1)^{i-1}p^it^i/i = p\sum_{i=1}^\infty(-p)^{i-1}t^i/i=p\biggl[t-\frac{pt^2}2+\frac{p^2t^3}3-\cdots\biggr]\,,
-$$
-in which you see that the powers of $p$ upstairs more than compensate for the possible divisibility by $p$ of the $i$ downstairs. In other words, $\log(1+pt)=pg(t)$ where $g(t)\in\Bbb Z_p[[t]]$ with the form $g(t)=t+$(higher terms). But any such series as $g$ in this form has an inverse $h(t)\in\Bbb Z_p[[t]]$ in the sense that $g(h(t))=h(g(t))=t$. This is enough to make $g$ injective.
-But since I am who I am, I have to tell you that restricting to inputs from $\Bbb Q_p$ misses the interesting behavior of the logarithm. In fact it makes sense to plug in for the original variable $x$ any element $z$ of an algebraic extension of $\Bbb Q_p$ with $|z|_p<1$, and now the logarithm series vanishes whenever $z+1$ is a $p$-power root of unity: the log is most emphatically not injective, even though it still is a homomorphism. For the case $p=2$, the value $z=-2$ gives the good old $2$-power root of unity $z+1=-1$, so the prime two is not so special after all.<|endoftext|>
-TITLE: Is there a coproduct in the category of path connected spaces?
-QUESTION [6 upvotes]: Well, first of all, does the coproduct exist in the category of path-connected spaces, and if not how would you prove it? If it does exist, what is it and how do you find it? The usual coproduct of topological spaces is disjoint union, but it is not path-connected.
-As a follow-up question, given a full subcategory of a category in which products/coproducts do exist, what is the strategy for finding out if that category also has products/coproducts?
-
-REPLY [10 votes]: They don't exist.
For instance, suppose there existed a coproduct $Y=X\coprod X$, where $X$ is a point. Then there would be a unique map $f:Y\to [0,1]$ sending the first copy of $X$ to $0$ and the second copy of $X$ to $1$. Since $Y$ is path-connected, $f$ must be surjective. But now let $g:[0,1]\to[0,1]$ be any continuous non-identity map that sends $0$ to $0$ and $1$ to $1$. Then $gf$ is another map $Y\to[0,1]$ sending the first copy of $X$ to $0$ and the second copy of $X$ to $1$, and $gf\neq f$ since $f$ is surjective. This is a contradiction.
-In general, when trying to find limits/colimits in a full subcategory, you are trying to find the "canonical approximation" to the (co)limit in the ambient category which lies in your subcategory. So if the (co)limit in the ambient category lies in your subcategory, it is the (co)limit. If it doesn't, you try to modify it to lie in your subcategory in a "universal" way (more precisely, such that maps between the (co)limit and objects of your subcategory are in bijection with maps between the modification and objects in your subcategory). If there doesn't seem to be any sort of universal way to modify it, that's a good sign that the (co)limit doesn't exist in your subcategory. For instance, in this case, there doesn't seem to be any particularly "universal" way to take a disjoint union of two path-connected spaces and make it path-connected. You can then try to make some concrete argument that it doesn't exist like the one above; the details of how you do this will depend a lot on what your category is.<|endoftext|>
-TITLE: Jigsaw-style proofs of the Pythagorean theorem with non-square squares
-QUESTION [5 upvotes]: The two squares on the legs of a right triangle can be chopped up (or "dissected") into several pieces that can be reassembled jigsaw-style into a square congruent to the one whose side is the hypotenuse.
-If a plane region of some other shape than a square is used, with a side having the length of one of the sides of the triangle, the theorem remains true if the same shape is glued onto all three sides.
-If a different shape is used, might this dissection proof become simpler or more comprehensible or more enlightening or otherwise better? If not, can that negative result be made precise and proved?
-
-REPLY [4 votes]: The following is a minimalist answer. Let $\triangle ABC$ be right-angled at $C$. Drop a perpendicular from $C$ to $P$ on $AB$. This divides $\triangle ABC$ into two similar right triangles which, without any cutting at all, can be reassembled (without any motion) to make the triangle on the hypotenuse.
-If we prefer the figures to be erected "outside" the original triangle, reflect the pieces across the sides.<|endoftext|>
-TITLE: How to prove this inequality $a+b+c\ge \sqrt{3}+\frac{1}{4}c^2(a-b)^2$
-QUESTION [8 upvotes]: Let $a,b,c\ge 0$ with $ab+bc+ca=1$; show that
-(1): $$a+b+c\ge \sqrt{3}$$
-(2):
-$$a+b+c\ge \sqrt{3}+\dfrac{1}{4}c^2(a-b)^2$$
-For $(1)$ I have a proof. First see that:
-$$(a-b)^2+(b-c)^2+(c-a)^2\ge 0$$
-Hence
-$$a^2+b^2+c^2\ge ab+bc+ca\Longrightarrow a^2+b^2+c^2+2ab+2bc+2ac\ge 3(ab+bc+ac)$$
-or $$ (a+b+c)^2\ge 3(ab+bc+ca)=3$$
-$$\Longrightarrow a+b+c\ge\sqrt{3}$$
-But for $(2)$ I can't prove it.
-
-REPLY [6 votes]: Suppose we are given $c$.
-Let $x=a+b$ and let $y=a-b$.
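-(Before the algebra, a quick random-sampling check of inequality (2); a Python sketch I am adding, not part of the proof, and the sampling scheme is my own choice:)
-import random, math
-
-# sample a, b >= 0 with ab <= 1, then solve for c >= 0 from ab + bc + ca = 1
-worst = float('inf')
-for _ in range(100000):
-    a, b = random.uniform(0, 1), random.uniform(0, 3)
-    if a * b > 1 or a + b == 0:
-        continue
-    c = (1 - a * b) / (a + b)
-    gap = (a + b + c) - (math.sqrt(3) + c**2 * (a - b)**2 / 4)
-    worst = min(worst, gap)
-print(worst)   # stays >= 0; the minimum approaches 0 near a = b = c = 1/sqrt(3)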
-(Assume $c>0$ throughout; if $c=0$ the inequality reduces to part (1).)
-Note that $y^{2}=x^{2}+4cx-4$, using the condition $ab+bc+ac=1$.
-Now our inequality becomes:
-$x+c \geq \sqrt3+\frac{c^{2}y^{2}}{4}$ which, substituting in $y^{2}=x^{2}+4cx-4$, yields:
-$0\geq c^{2}x^{2}+x(4c^{3}-4)-4c^{2}-4c+4(3^{1/2})$
-Hence this is the inequality we must prove.
-Noting from the first part that
-$x\geq\sqrt3 -c$, and also that:
-$(a+b)^{2} \geq (a-b)^2 \Rightarrow x^{2}\geq y^{2} \Rightarrow x^{2}\geq x^{2}+4cx-4 \Rightarrow$ $\frac{1}{c}\geq x$.
-And so we have the condition:
-$\frac{1}{c}\geq x\geq \sqrt3 -c$
-And by showing that this interval $[\sqrt3 -c,\frac{1}{c}]$ is contained within the interval of $x$ such that $0\geq c^{2}x^{2}+x(4c^{3}-4)-4c^{2}-4c+4(3^{1/2})$,
-our inequality will be proven.
-The interval of $x$ within which $0\geq c^{2}x^{2}+x(4c^{3}-4)-4c^{2}-4c+4(3^{1/2})$ is true is between its roots, namely:
-$[\frac{2(1-c^{3})-2\sqrt{(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)}}{c^2},\frac{2(1-c^{3})+2\sqrt{(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)}}{c^2}]$
-Now denote $f(c)=4c^{2}-c(4\sqrt3+1)+4$. Its turning point is at $c=\frac{4\sqrt3 +1}{8}$, and $f(\frac{4\sqrt3 +1}{8})=0.07..>0$, hence $f(c)$ is positive.
-Now,
-$4c^{2}-c(4\sqrt3+1)+4>0 \Rightarrow 4c^{2}-4c\sqrt3>c-4 \Rightarrow 4+4c^{3}-4c^{2}\sqrt3>c^{2}-4c+4 \Rightarrow 4c^{6}-8c^{3}+4+4c^{4}+4c^{3}-4c^{2}\sqrt3>4c^{6}+4c^{4}-8c^{3}+c^{2}-4c+4 \Rightarrow 4[(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)]>(c-2+2c^{3})^{2}\geq 0 \Rightarrow
-\frac{2(1-c^{3})+2\sqrt{(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)}}{c^2}>\frac{1}{c}$
-We are now left to show that:
-$\sqrt3 -c\geq \frac{2(1-c^{3})-2\sqrt{(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)}}{c^2}$
-Note that:
-$c^{4}(\sqrt3\,c-1)^{2} \geq 0$, hence
-$4c^{6}-8c^{3}+4+4c^{4}+4c^{3}-4c^{2}\sqrt3 \geq c^{6}+2\sqrt3 c^{5}+3c^{4}-4c^{3}-4c^{2}\sqrt{3}+4 \Rightarrow 4[(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)] \geq (c^{3}+c^{2}\sqrt3 -2)^{2} \geq 0 \Rightarrow \sqrt3 -c\geq \frac{2(1-c^{3})-2\sqrt{(c^{3}-1)^{2}-c^{2}(-c^{2}-c+\sqrt3)}}{c^2}$
-And we are done.<|endoftext|>
-TITLE: Prove that $p$ divides the algebraic multiplicity of the eigenvalue
-QUESTION [7 upvotes]: I need help with the following exercise from a qualifying exam:
-Let $A$ be a matrix of size $m$ by $m$ over the finite field $\mathbb{F}_p$ such that $\operatorname{trace}\left(A^n\right)=0$ for all $n$. If $\lambda$ is a nonzero eigenvalue of $A$, prove that the algebraic multiplicity of $\lambda$ is divisible by $p$.
-Thank you for any hints.
-
-REPLY [5 votes]: Darij's argument is undoubtedly much smarter (I don't understand it), but the following simple approach works too. In the algebraic closure, we have the Jordan normal form available, so if $m_j$ denotes the algebraic multiplicity of $\lambda_j$, then your assumption now says that $\sum m_j \lambda_j^n=0$ for all $n\ge 1$, or, equivalently, $\sum m_j q(\lambda_j)=0$ for all polynomials $q$ with $q(0)=0$. We can now take $q(\lambda)=\lambda\prod_{k\not= j} (\lambda-\lambda_k)$, where the $\lambda_k$ run over the distinct eigenvalues, to see that $m_j\equiv 0\mod p$ if $\lambda_j\not= 0$.<|endoftext|>
-TITLE: Hausdorff dimension of a countable set
-QUESTION [5 upvotes]: I don't understand why the Hausdorff dimension of a countable set in $\mathbb{R}^n$ is $0$.
-Can someone please give me a hint?
-Thank you!
-
-REPLY [7 votes]: Hausdorff dimension is defined as the inf of those $d$ such that the $d$-dimensional Hausdorff measure vanishes. Measures are countably additive, so any measure which vanishes on all singletons vanishes on all countable sets; and it is fairly obvious that Hausdorff measures of all dimensions $d>0$ vanish on a singleton.
So, for a countable set just as for a singleton, the inf is $0$.
-If you want a more concrete way of looking at it: if $C = \{x_n : n\in\mathbb{N}\}$ is countable, then $C$ can be covered by a sequence of balls centered on the $x_n$, whose radii are any sequence of positive real numbers, and in particular one which tends arbitrarily rapidly to $0$ (so, in particular, one such that the sum of the $d$th powers of the diameters of the balls is arbitrarily small).<|endoftext|>
-TITLE: How do you compute eigenvalues/vectors of big $n\times n$ matrix?
-QUESTION [7 upvotes]: The product of the non-zero eigenvalues of the matrix is ____.
-$$\pmatrix{1&0&0&0&1\\0&1&1&1&0\\0&1&1&1&0\\0&1&1&1&0\\1&0&0&0&1}$$
-
-My attempt:
-Well, the answer is $6$. It's a big matrix, so finding the eigenvalues via the characteristic equation would be a lengthy process. I'm looking for a short trick to find the eigenvalues of a big $n \times n$ matrix.
-This explained "eigenvalues by inspection", but I don't understand it properly.
-
-Can you explain eigenvalues and eigenvectors by inspection for this matrix, in steps, please?
-
-REPLY [2 votes]: You can find the characteristic polynomial, and then the last nonzero coefficient is $\pm$ the product of the non-zero eigenvalues, depending on the degree.
-For example, the characteristic polynomial for the matrix above is:
-$$p(\lambda)=\det(\lambda I-A) = \lambda^5-5\lambda^4+6\lambda^3,$$ so the product of the non-zero eigenvalues is $6$.
-This is only true, in general, if you count with multiplicity, so if there is a repeated non-zero eigenvalue, then you include it in the product multiple times.
-You can get the non-repeated product of the non-zero eigenvalues by computing the GCD of the two polynomials, $p(x),p'(x)$, and dividing it out of $p(x)$. So:
-$$q(x)=\frac{p(x)}{\gcd(p(x),p'(x))}$$
-is a polynomial with the same roots as $p(x)$, but no repeated roots. Then the product of the non-zero roots, without multiplicity, will be $\pm$ the last non-zero coefficient of $q(x)$.<|endoftext|>
-TITLE: $f$ continuous on $(a,b)$ and $|f|$ differentiable on $(a,b)$; is $f$ differentiable in $(a,b)$?
-QUESTION [10 upvotes]: Let $f:(a,b) \to \mathbb R$ be a continuous function such that $|f|$ is differentiable in $(a,b)$; is $f$ then differentiable in $(a,b)$?
-
-REPLY [2 votes]: Let's write $g(x)=\left|f(x)\right|$ to save typing.
-First, let us get rid of the easy case: If $f(x_0)\ne 0$, then due to continuity, there's an open interval $I$ around $x_0$ such that $f(x)\ne 0$ for any $x\in I$. Since $f(x)$ is real, this implies either $f(x)=g(x)$ or $f(x)=-g(x)$ for all $x\in I$; in both cases $f$ is obviously differentiable at $x_0$.
-So let's now consider the case $f(x_0)=0\iff g(x_0)=0$.
-We know by definition that $g(x)\ge 0$, and $g(x)$ is differentiable by assumption. We will now show that for any $x_0$ with $g(x_0)=0$ we also have $g'(x_0)=0$:
-On one hand,
-$$g'(x_0) = \lim_{x\to x_0+0}\frac{g(x)-g(x_0)}{x-x_0} = \lim_{x\to x_0+0}\frac{g(x)}{x-x_0}$$
-Now since $g(x)\ge 0$, for $x>x_0$, clearly $\frac{g(x)}{x-x_0}\ge 0$ and thus $g'(x_0)\ge 0$.
-On the other hand,
-$$g'(x_0) = \lim_{x\to x_0-0}\frac{g(x)-g(x_0)}{x-x_0} = \lim_{x\to x_0-0}\frac{g(x)}{x-x_0}$$
-The analogous argument gives $g'(x_0)\le 0$. Therefore $g'(x_0)=0$.
-Now
-$$\lim_{x\to x_0} \frac{f(x)-f(x_0)}{x-x_0} = \lim_{x\to x_0}\frac{f(x)}{x-x_0}$$
-Consider any sequence $(x_n)$ converging to $x_0$.
Then obviously
-$$\frac{-g(x_n)}{|x_n-x_0|} \le \frac{f(x_n)}{x_n-x_0} \le \frac{g(x_n)}{|x_n-x_0|}$$
-(the absolute values in the denominators take care of the case $x_n<x_0$, where the naive inequalities would flip). But the left and right side both converge to $0$, and thus so does the sequence in the center. Since this holds for any $(x_n)$, it follows that also
-$$\lim_{x\to x_0}\frac{f(x)}{x-x_0}=0$$
-in other words, $f'(x_0)$ exists and is $0$.<|endoftext|>
-TITLE: if $p\mathcal{O}_K$ splits completely in Galois extension, are all primes lying over $p$ generated by one class in $CL(K)$?
-QUESTION [6 upvotes]: L.S.,
-Studying for my exam on algebraic number theory, I was thinking of writing down tricks for computing the class group of a number field fast. I thought of the following one, but I don't know if it is true. Could someone help me show whether it is or isn't?
-Trick:
-Let $K:\mathbb{Q}$ be a Galois extension of degree $n$. Let $(p)$ split completely in $\mathcal{O}_K$ as $(p) = \prod_{i = 1}^n \mathfrak{p}_i$, with all $\mathfrak{p}_i$ of norm $p$. Now all $[\mathfrak{p}_i] \in CL(K)$ are generated by one $[\mathfrak{p}_j]$ for some $1 \leq j \leq n$.
-The reason I suspect this might somehow be true is that I already know that the Galois group acts transitively on the set of prime ideals lying over some $(p)$, and also that it happened in most class groups I have computed so far.
-Many thanks!
-
-REPLY [3 votes]: After some playing around with Sage, here is a counterexample.
-Let $K=\mathbb Q(\sqrt[3]{11}, \zeta)$ where $\zeta$ is a cube root of unity. Then $K$ is Galois with class group $C_2\times C_2$.
-The prime $19$ splits completely in $K$. Two of the primes of $K$ lying above $19$ are $$(19, \zeta-\sqrt[3]{11}-2)\qquad\text{ and }\qquad(19, \zeta-\sqrt[3]{11}+9)$$
-Neither of these primes is principal. Hence, if their ideal classes are to lie in a cyclic subgroup of $C_2\times C_2$ (which must be of order $2$), then both ideals must be in the same ideal class, and hence their product should be principal. However, their product is not principal.
-Sage code:
-K.<a> = NumberField(x^3-11)
-L.<b> = K.extension(x^2+x+1)
-C = L.class_group(); C
-OL = L.ring_of_integers()
-
-split = [C(P) for P,e in (19*OL).factor()]
-split
-
-split[0].is_principal()
-split[1].is_principal()
-test = split[0]*split[1]
-test.is_principal()
-
-and output:
-Class group of order 4 with structure C2 x C2 of Number Field in b with defining polynomial x^2 + x + 1 over its base field
-[Fractional ideal class (19, b - a - 2), Fractional ideal class (19, b - a + 9), Fractional ideal class (19, b - a - 9), Fractional ideal class (19, b - a - 6), Fractional ideal class (19, b - a + 5), Fractional ideal class (19, b - a + 6)]
-False
-False
-False<|endoftext|>
-TITLE: Some basic questions regarding rank-1 matrices
-QUESTION [6 upvotes]: If an $n\times n$ matrix $B$ has rank 1, and $A$ is another $n\times n$ matrix, then why does $AB$ also have rank 1? This showed up in a solution that I read through, but it doesn't seem like an obvious fact.
-And one more thing that came up in this solution: it says that since this matrix has rank 1, then it must have $(n-1)$ eigenvalues that are all zero, and only one non-zero eigenvalue. I don't see how this has to be true either.
-Any ideas are welcome.
-Thanks,
-
-REPLY [4 votes]: It comes from the associativity of matrix multiplication. If $B$ has rank $1$, then it can be written in the form $B = u v^T$ for some vectors $u$ and $v$. So,
-$$AB = A (uv^T) = (Au) v^T$$
-but $Au$ is a vector itself, so now we have a rank-1 expression for $AB$.
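-(A quick numerical confirmation; a NumPy sketch I am adding:)
-import numpy as np
-
-rng = np.random.default_rng(0)
-u = rng.standard_normal((5, 1))
-v = rng.standard_normal((5, 1))
-B = u @ v.T                             # a rank-1 matrix
-A = rng.standard_normal((5, 5))
-print(np.linalg.matrix_rank(B))         # 1
-print(np.linalg.matrix_rank(A @ B))     # 1 here (it would be 0 in the special case Au = 0)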
Maybe it will be helpful to see a picture of the shapes of the matrices during this process:
-$$\underbrace{\begin{bmatrix}\cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot\end{bmatrix}}_{A} \left(\underbrace{\begin{bmatrix}\cdot \\ \cdot \\ \cdot\end{bmatrix}}_{u}\underbrace{\begin{bmatrix}\cdot & \cdot & \cdot\end{bmatrix}}_{v^T}\right)=
-\left(\underbrace{\begin{bmatrix}\cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}\cdot \\ \cdot \\ \cdot\end{bmatrix}}_{u}\right)\underbrace{\begin{bmatrix}\cdot & \cdot & \cdot\end{bmatrix}}_{v^T} =
-\underbrace{\begin{bmatrix}\cdot \\ \cdot \\ \cdot\end{bmatrix}}_{Au}\underbrace{\begin{bmatrix}\cdot & \cdot & \cdot\end{bmatrix}}_{v^T} $$
-(Of course, note that if $Au=0$, then the result is the zero matrix, so the rank would be zero instead of 1.)<|endoftext|>
-TITLE: Calculate connecting line and circular arc between two points and angles
-QUESTION [6 upvotes]: Given two points and angles, how can I calculate the connection between the two points in the form of a straight line and a circular arc?
-
-
-Context of the question: I am trying to make a program that converts a model railway layout created in XtrkCAD into a SCARM layout (scarm.info). Both design applications have a different approach, and a different way of saving the layout to a file. Luckily both use a human-readable text file format.
-XtrkCAD knows the concept of the easement or transition curve, but SCARM doesn't. When converting a layout I now get gaps in the track where XtrkCAD had an easement. When I close that gap using SCARM, it creates a piece of track consisting of a straight part and a circular curve.
-The following image shows an example. A straight track (length 50) has to be connected to a curved track (radius 276, angular length 56.5 degrees). The red track created by SCARM is a straight track of length 56, followed by a curved track with radius 360 and an angular length of 16.8 degrees.
-
-What I would like to know is how I can calculate the straight track and the curved track, based on the end points of the two pieces of track that should be connected and their angles.
-
-REPLY [6 votes]: Suppose you must connect two straight segments $AB$ and $CD$, or two arcs, or a segment and an arc (colored blue in the picture), ending at points $B$ and $C$.
-Extend the segments (or draw the lines tangent to the arcs at $B$ and $C$) so that they meet at $F$. If $BF>CF$ (as in the picture), construct point $G$ on $BF$ such that $FG=FC$ and draw the perpendicular lines to $GF$ and $CF$ at points $G$ and $C$.
-Let $K$ be the point where these perpendicular lines meet and draw arc $GC$ centered at $K$: this arc, together with segment $BG$, provides the needed connection (red in the picture).<|endoftext|>
-TITLE: Can it be that $f$ and $g$ are everywhere continuous but nowhere differentiable but that $f \circ g$ is differentiable?
-QUESTION [12 upvotes]: So, I was just asking myself: can something like this happen? I was thinking about some everywhere continuous but nowhere differentiable functions $f$ and $g$, and the natural question arose: can the composition $f \circ g$ be differentiable? In other words, can the operation of composition somehow "smooth out" the irregularities of $f$ and $g$ which make them non-differentiable, in such a way that the composition becomes differentiable?
-So here is the question again:
-
-Suppose that $f$ and $g$ are everywhere continuous but nowhere differentiable functions.
Can $f \circ g$ be differentiable?
-
-If such an example exists it would be interesting because the rule $(f(g(x)))'=f'(g(x)) \cdot g'(x)$ would not hold, and not only would it not hold, it would not make any sense, because $f$ and $g$ are not differentiable.
-
-REPLY [7 votes]: No, the composition of two continuous nowhere differentiable functions cannot be everywhere differentiable. To prove this, we need two facts:
-
-A function of bounded variation is differentiable almost everywhere
-A differentiable function has bounded variation on some subinterval
-
-Suppose that $f\circ g$ is differentiable. Then it has bounded variation on some interval $[c,d]$. Since $g$ is nowhere differentiable, it is nowhere monotone; thus, we can arrange that $g(c) < g(d)$ by shrinking the interval further. Let $a=g(c)$ and $b=g(d)$.
-The function $f$, being nowhere differentiable, has infinite variation on $[a,b]$. That is, for every $M$ we can find a partition $a=x_0<x_1<\dots<x_n=b$ such that
-$$
-\sum_{i=1}^{n}|f(x_i)-f(x_{i-1})| > M
-$$
-Let $t_i=\inf\{t\in [c,d] : g(t) \ge x_i\}$. The continuity of $g$ implies that $g(t_i)=x_i$ and $c=t_0< t_1<\dots<t_n\le d$. Hence
-$$
-\sum_{i=1}^{n}\big|(f\circ g)(t_i)-(f\circ g)(t_{i-1})\big| = \sum_{i=1}^{n}|f(x_i)-f(x_{i-1})| > M
-$$
-and since $M$ was arbitrary, this contradicts $f\circ g$ having bounded variation on $[c,d]$.<|endoftext|>
-TITLE: What am I doing wrong in this proof?
-QUESTION [8 upvotes]: The question is this:
-Let $f:\mathbb{R}\to\mathbb{R}$ be differentiable at $x=0$ and suppose that there is a number $L$ such that $$\lim_{x\rightarrow0}\frac{f(x)-f(x/2)}{x/2}=L.$$ Prove that $f'(0)=L$.
-Here's my answer, with all theorems referenced being from Rudin:
-Let $a_n$ be a positive sequence converging to zero and
-$$\varphi_n(x)=\frac{a_nf'(0)+2\big(f(x)-f(x/2)\big)}{x+a_n}.$$
-Then
-$$\lim_{n\rightarrow\infty}\lim_{x\rightarrow0}\varphi_n(x)=f'(0)$$
-while
-$$\lim_{x\rightarrow0}\lim_{n\rightarrow\infty}\varphi_n(x)=L.$$
-By theorem 7.11 then, if $\varphi_n(x)$ converges uniformly to $\varphi(x)=\frac{f(x)-f(x/2)}{x/2}$ over a set $E$ and $0$ is a limit point of $E$, then $L=f'(0)$. Let $E=[0,1]$. Then for $x\in E$,
-$$\big|\varphi_n(x)-\varphi(x)\big|=a_n\bigg|\frac{xf'(0)-2\big(f(x)-f(x/2)\big)}{x+a_n}\bigg|=a_n\big|f'(0)-\varphi_n(x)\big|\leq a_n\big(|f'(0)|+|\varphi_n(x)|\big)\leq a_n\bigg(|f'(0)|+\bigg|\frac{a_nf'(0)}{x+a_n}\bigg|+\bigg|\frac{2\big(f(x)-f(x/2)\big)}{x+a_n}\bigg|\bigg)< a_n\big(|2f'(0)|+|L|\big)\rightarrow0.$$
-So by theorem 7.9, $\varphi_n(x)$ converges uniformly to $\varphi(x)$ over $E$ and therefore $f'(0)=L$.
-What I don't understand is this: couldn't I have put basically anything, say $\pi$, in place of $f'(0)$ in $\varphi_n(x)$ and shown that in fact $L=\pi$? Not sure where I went wrong. Any help is greatly appreciated.
-
-REPLY [25 votes]: Alternative proof:
-\begin{align}
-\lim_{x\rightarrow0}\frac{f(x)-f(x/2)}{x/2}&=L\\
-&=\lim_{h\rightarrow0}\frac{f(2h)-f(h)}{h}\\
-&=\lim_{h\rightarrow0}\frac{f(2h)-f(0)+f(0)-f(h)}{h}\\
-&=\lim_{h\rightarrow0}2\frac{f(2h)-f(0)}{2h}-\frac{f(h)-f(0)}{h}\\
-&=2f'(0)-f'(0)\\
-&=f'(0)
-\end{align}
-Therefore $f'(0)=L$.
-
-REPLY [3 votes]: I think you forgot a $1/x$ in the difference of the two functions when you start proving uniform convergence.<|endoftext|>
-TITLE: Proof: Is there a line in the xy plane that goes through only rational coordinates?
-QUESTION [11 upvotes]: Question: Is there a line in the $xy$ plane that has all rational coordinates? Prove your answer.
-Idea: There is most certainly not. I believe it can be shown that between any 2 rational points there is at least one point with an irrational coordinate.
Therefore, there cannot be a line that contains only rational points. The issue is that I am not sure how to show this. Any ideas? I am also open to any other ideas of how to do this. Thanks.
-Note: this is for an intro to proofs study guide. So, I would prefer not to use advanced theorems.
-
-REPLY [2 votes]: Well no.
-We can nitpick. Is it assumed your plane is the usual two-dimensional $\mathbb R \times \mathbb R$ plane? We can argue that if your "universe" is just the rational numbers, then $\mathbb Q \times \mathbb Q$ is also a plane; but this "universe" is defined to only have rationals, so lines will only have rational coordinates.
-But that's me being pedantic and trying to intimidate.
-If your "universe" is the real numbers then:
-In general a line has the formula $y = mx + b$ (there is one type of exception). And we know irrational numbers exist. So for an irrational $z$ the point $(z, mz + b)$ exists, and this is not a rational coordinate as $z$ is not rational.
-The exception is vertical lines. These are of the form $x = c$. But these lines pass through every value of $y$. (Actually, all non-horizontal lines pass through all values of $y$ too. Vertical lines pass through all $y$, horizontal lines pass through all $x$, and all other lines pass through all $x$ and all $y$.) So this will have the point $(c, z)$, which is not a rational coordinate.<|endoftext|>
-TITLE: Is there a non-singular matrix with $k$ ones on each row?
-QUESTION [5 upvotes]: Let $n$ be a fixed positive integer. I would like to know for what values of $k$ there exists an $n$ by $n$ $0/1$ matrix that is non-singular with exactly $k$ ones per row.
-Clearly if $k=1$ then the identity matrix is non-singular. Also if $k=n$ there are no non-singular matrices.
-What can one say about $1 < k < n$?
-If $n$ is large, is it true for almost all $k$ in the range?
-
-REPLY [3 votes]: I edited my answer because this new proof is much simpler.
-Thm: For every $k$ with $1 \le k < n$ there is a non-singular $n \times n$ $0/1$ matrix with exactly $k$ ones in each row.<|endoftext|>
-TITLE: After $6n$ rolls of a die, what is the probability each face was rolled exactly $n$ times?
-QUESTION [8 upvotes]: This is closely related to the question "If you toss an even number of coins, what is the probability of 50% head and 50% tail?", but for dice with 6 possible results instead of coins (with 2 possible results). Actually I would like a more general approximation formula, for dice with $m$ faces.
-
-REPLY [3 votes]: To add to the other two answers, you can use Stirling's approximation of the factorial to get a sense of the asymptotic behavior of this expression:
-$$
-\frac{(mn)!}{(n!)^mm^{mn}} \approx \frac{\sqrt{2\pi mn}(mn/e)^{mn}}{(2\pi n)^{m/2}(n/e)^{mn}m^{mn}}=\frac{\sqrt{m}}{(2\pi n)^{(m-1)/2}}
-$$
-So, for fixed $m$, the probability decays like $n^{-(m-1)/2}$; in particular, when $m=6$, it decays like $n^{-5/2}$.<|endoftext|>
-TITLE: Prove that $n^{n^{n^{n}}}- n^{n^{n}}$ is divisible by $1989$
-QUESTION [13 upvotes]: Problem
-
-Let $n$ be a positive integer with $n \geq 3$. Show that $$n^{n^{n^{n}}}- n^{n^{n}}$$ is divisible by $1989$.
-
-I don't really know where to begin with this question. Maybe I could do some case work on $n$ being even or odd, but I am not sure if that would work or not.
-
-REPLY [9 votes]: $1989$ factors as $3^2\cdot 13\cdot 17$, so it is enough to show that $n\uparrow\uparrow 4 - n\uparrow\uparrow 3 \equiv 0 \pmod a$ for $a=9,13,17$.
-ANALYSIS. Suppose $n$ is coprime to $a$ -- otherwise the congruence clearly holds.
Then,
-$$ n\uparrow\uparrow 4 - n\uparrow\uparrow 3 = n^{n\uparrow\uparrow 2} (n^{n\uparrow\uparrow 3-n\uparrow\uparrow 2} - 1) $$
-This will be divisible by $a$ if
-$$ n^{n\uparrow\uparrow 3-n\uparrow\uparrow 2} \equiv 1 \pmod a $$
-which is the case if
-$$ n\uparrow\uparrow 3 - n\uparrow\uparrow 2 \equiv 0 \pmod{\lambda(a)} $$
-When $a=9,13,17$ we have $\lambda(a)=6,12,16$, so it will be enough to show
-$$ n\uparrow\uparrow 3 - n\uparrow\uparrow 2 \equiv 0 \pmod b \qquad\text{for } b=3, 16$$
-Again, we can suppose without loss of generality that $n$ is coprime to $b$ (this wouldn't work if $n=2$, but fortunately we're assuming $n\ge 3$; if $n$ is an even number that is at least $4$, both $n\uparrow\uparrow 3$ and $n\uparrow\uparrow 2$ will be divisible by 16). As before we have
-$$ n\uparrow\uparrow 3 - n\uparrow\uparrow 2 = n^{n^n}- n^n = n^n (n^{n^n-n} - 1) $$
-so we look for a proof that
-$$ n^n - n \equiv 0 \pmod{\lambda(b)} $$
-and $\lambda(b)=2,4$ when $b=3,16$, so we look for
-$$ n^n-n \equiv 0 \pmod 4 $$
-This is not actually true when $n$ is an odd multiple of $2$, but luckily for the $b=16$ case we have already assumed that $n$ is odd, and for $b=3$ we only need $n^n-n$ to be even, which is certainly always the case.
-Thus we can start SYNTHESIZING a proof from the bottom up:
-Lemma 1. $n^n-n$ is always even. Proof. Trivial.
-Lemma 2. If $n$ is odd, then $n^n-n \equiv 0 \pmod 4$.
-Proof. $n^n-n=n(n^{n-1}-1)$, and since $n-1$ is even, it is a multiple of $\lambda(4)=2$, so $n^{n-1} \equiv 1 \pmod 4$.
-Lemma 3. $n^{n^n}-n^n \equiv 0 \pmod 3$.
-Proof. This is immediate if $3\mid n$. Otherwise, we have $n^{n^n}-n^n = n^n(n^{n^n-n}-1)$, and $n^n-n$ is even by Lemma 1; since $\lambda(3)=2$ we have $n^{n^n-n}\equiv 1\pmod 3$.
-Lemma 4. If $n\ge 3$, then $n^{n^n}-n^n \equiv 0 \pmod{16}$.
-Proof. If $n$ is even and $\ge 4$, then this is immediate. Otherwise $n$ is coprime to 16, and we have $n^{n^n}-n^n = n^n(n^{n^n-n}-1)$. By Lemma 2, $n^n-n$ is a multiple of $\lambda(16)=4$, so $n^{n^n-n}\equiv 1\pmod{16}$.
-Lemma 5. If $n\ge 3$, then $n^{n^n}-n^n \equiv 0$ modulo each of $6$, $12$, and $16$.
-Proof. By Lemmas 3 and 4.
-Lemma 6. If $n\ge 3$, then $n^{n^{n^n}}-n^{n^n} \equiv 0$ modulo $p\in\{13,17\}$.
-Proof. This is immediate if $p\mid n$. Otherwise $n^{n^{n^n}}-n^{n^n} = n^{n^n}(n^{(n^{n^n}-n^n)}-1)$, and the exponent $n^{n^n}-n^n$ is a multiple of $\lambda(p)$ by Lemma 5, so $n^{(n^{n^n}-n^n)}\equiv 1\pmod p$.
-Lemma 7. If $n\ge 3$, then $n^{n^{n^n}}-n^{n^n} \equiv 0 \bmod 9$.
-Proof. Immediate if $3\mid n$, since the exponents $n^{n^n}$ and $n^n$ are both $\ge 2$. Otherwise, $n$ is coprime to $9$, and the same reasoning as in Lemma 6 applies.
-Corollary. If $n\ge 3$, then $n^{n^{n^n}}-n^{n^n} \equiv 0 \bmod{1989}$. Proof. By Lemmas 6 and 7.<|endoftext|>
-TITLE: $W^{s,p}(\mathbb{R}^{n})$ Is Not Closed Under Multiplication when $s\leq n/p$
-QUESTION [10 upvotes]: For $s\in\mathbb{R}$ and $1<p<\infty$, consider the Sobolev space $W^{s,p}(\mathbb{R}^{n})$. If $sp > n$, one can show that $W^{s,p}$ is closed under pointwise multiplication. More precisely,
-
-Theorem. For $n\geq 1$, $1<p<\infty$, and $s>n/p$, $$\|fg\|_{W^{s,p}}\lesssim_{n,p,s}\|f\|_{W^{s,p}}\|g\|_{W^{s,p}},\quad\forall f,g\in W^{s,p}(\mathbb{R}^{n})$$
-
-I want to show that this result is sharp for any $1<p<\infty$ and $0<s\leq n/p$. Fix a bump function $\psi$ supported in an annulus $\{|x|\sim 1\}$ with $\int\psi \,dx=c>0$, and define
-$$f_{N}(x)=N^{-1}\sum_{j=1}^{N}2^{-sj}2^{jn/p}\widehat{\psi}(2^{j}x).$$
-Writing $P_{k}$ for the Littlewood-Paley projection onto frequencies $|\xi|\sim 2^{k}$, we have
-$$P_{k}\widehat{\psi}(2^{j}\cdot)=0,\qquad |k-j| > 3$$
-Whence,
-$$2^{sk}P_{k}f_{N}=N^{-1}\sum_{j=k-3}^{k+3}P_{k}2^{s(k-j)}2^{jn/p}\widehat{\psi}(2^{j}\cdot)=N^{-1}\sum_{j=-3}^{3}2^{-sj}P_{k}2^{(k+j)n/p}\widehat{\psi}(2^{j+k}\cdot)$$
-where terms with negative indices are defined to be zero.
Thus by nesting of $\ell^{p}$ spaces and the above observations, we have that
-\begin{align*}
-\left(\sum_{k=1}^{\infty}|2^{sk}P_{k}f_{N}|^{2}\right)^{1/2}\lesssim_{n,s,p} N^{-1}\sum_{j=1}^{N}\sum_{k=j-3}^{j+3}|P_{k}2^{nj/p}\widehat{\psi}(2^{j}\cdot)|
-\end{align*}
-By the triangle inequality, Young's inequality, and dilation invariance, we obtain that
-$$\left\|N^{-1}\sum_{j=1}^{N}\sum_{k=j-3}^{j+3}|P_{k}2^{nj/p}\widehat{\psi}(2^{j}\cdot)|\right\|_{L^{p}}\lesssim_{n,s}N^{-1}\sum_{j=1}^{N}\|2^{nj/p}\widehat{\psi}(2^{j}\cdot)\|_{L^{p}}=\|\widehat{\psi}\|_{L^{p}}$$
-But since $\widehat{\psi}(0)=\int\psi dx=c>0$, it follows that
- $$\|f_{N}\|_{L^{\infty}}\gtrsim N^{-1}\sum_{k=1}^{N}2^{k(n/p-s)}\rightarrow\infty,$$
-as $N\rightarrow\infty$, since $s<n/p$. When $s=n/p$, drop the normalizing factor $N^{-1}$ and consider instead
-$$f_{N}(x)=\sum_{j=1}^{N}2^{-sj}2^{jn/p}\widehat{\psi}(2^{j}x);$$
-by the same argument $\|f_{N}\|_{L^{\infty}}\gtrsim N$, so it suffices to prove the claim that $\|f_{N}\|_{W^{s,p}}\lesssim_{n,p}N^{1/p}$. As before,
-$$P_{k}\widehat{\psi}(2^{j}\cdot)=0,\qquad |k-j| > 3$$
-Whence,
-$$2^{sk}P_{k}f_{N}=\sum_{j=k-3}^{k+3}P_{k}2^{s(k-j)}2^{jn/p}\widehat{\psi}(2^{j}\cdot)=\sum_{j=-3}^{3}2^{-sj}P_{k}2^{(k+j)n/p}\widehat{\psi}(2^{j+k}\cdot)$$
-where terms with negative indices are defined to be zero. By the upper Littlewood-Paley inequality,
-\begin{align*}
-\left\|\left(\sum_{k=1}^{\infty}2^{2sk}|P_{k}f_{N}|^{2}\right)^{1/2}\right\|_{L^{p}}&\lesssim_{n,s,p}\left\|\left(\sum_{j=1}^{N}2^{2jn/p}|\widehat{\psi}(2^{j}\cdot)|^{2}\right)^{1/2}\right\|_{L^{p}}\\
-&\leq\left\|\sum_{j=1}^{N}2^{jn/p}|\widehat{\psi}(2^{j}\cdot)|\right\|_{L^{p}}
-\end{align*}
-by the nesting property. Since $\widehat{\psi}$ is a Schwartz function adapted to a ball of radius $O(1)$, we have that
-$$\sum_{j=1}^{N}2^{jn/p}|\widehat{\psi}(2^{j}x)|\lesssim_{M}\sum_{j=1}^{N}\dfrac{2^{jn/p}}{(1+|2^{j}x|)^{M}}$$
-for any $M\geq 0$. So for integer $N-1\geq k\geq 0$,
-\begin{align*}
-\int_{2^{-k-1}\leq|x|\leq 2^{-k}}\left|\sum_{j=1}^{N}2^{jn/p}|\widehat{\psi}(2^{j}x)|\right|^{p}dx&\lesssim_{M}\int_{2^{-k-1}\leq|x|\leq 2^{-k}}\left|\sum_{j=1}^{k}2^{jn/p}+\sum_{j=k+1}^{N}\dfrac{2^{jn/p}}{(1+|2^{j}x|)^{M}}\right|^{p}dx\\
-&\lesssim\int_{2^{-k-1}\leq|x|\leq 2^{-k}}\left|2^{kn/p}+\sum_{j=k+1}^{N}\dfrac{2^{jn/p}}{(1+|2^{j}x|)^{M}}\right|^{p}dx\\
-\end{align*}
-The second term in the integrand is bounded from above by a decreasing, convergent geometric series, provided $M$ is sufficiently large, and is therefore comparable to its first term $\sim 2^{kn/p}$. Whence the above is majorized by
-\begin{align*}
-\lesssim\int_{2^{-k-1}\leq|x|\leq 2^{-k}}2^{kn}dx\sim_{n} 1
-\end{align*}
-For the remaining ball $|x|\leq 2^{-N}$, we have the estimate
-$$\int_{|x|\leq 2^{-N}}\left|\sum_{j=1}^{N}2^{jn/p}|\widehat{\psi}(2^{j}x)|\right|^{p}dx\lesssim\int_{|x|\leq 2^{-N}}2^{Nn}dx\sim_{n}1$$
-For $|x|\geq 1$, rapid decay gives the estimate
-\begin{align*}
-\int_{|x|\geq 1}\left|\sum_{j=1}^{N}2^{jn/p}|\widehat{\psi}(2^{j}x)|\right|^{p}dx\lesssim_{M}\int_{|x|\geq 1}\left(\sum_{j=1}^{N}2^{jn/p}|2^{j}x|^{-M}\right)^{p}dx\lesssim_{n,M,p}1,
-\end{align*}
-provided $M$ is sufficiently large. Combining the estimates and adding up the pieces of the integral, we conclude that
-\begin{align*}
-\int_{\mathbb{R}^{n}}\left|\sum_{j=1}^{N}2^{jn/p}|\widehat{\psi}(2^{j}x)|\right|^{p}dx&\lesssim_{n,p} N
-\end{align*}
-Taking $p^{th}$ roots completes the proof of the claim.
-
-REPLY [3 votes]: Unless I'm missing something obvious, I believe I have a complete answer that demonstrates the existence of an $f\in W^{s,p}(\mathbb{R}^{n})$, for $0<s\leq n/p$, such that $f^{2}\notin W^{s,p}(\mathbb{R}^{n})$.<|endoftext|>
-TITLE: Is the intersection of two connected subspaces of a connected topological space connected?
-QUESTION [7 upvotes]: I think it is true that the intersection of two connected subspaces of a connected topological space is connected, and there are two different situations here to be discussed.
-
First of all, when the intersection is the empty set, do we consider it connected or not?
-For the other situation, if the intersection is not empty, how could I get the conclusion?
-
-REPLY [8 votes]: Consider the intersection of a line segment and a circle in $\mathbb R^2$.<|endoftext|>
-TITLE: Inversion of n x n matrix.
-QUESTION [5 upvotes]: What is the inverse of this $n \times n$ matrix, which is an $n \times n$ matrix of $1$s minus the $n \times n$ identity matrix?
-
-REPLY [2 votes]: Use the Sherman-Morrison formula for rank-one updates:
-$(A + B)^{-1} = A^{-1} - {1 \over 1 + Tr[B A^{-1}]} A^{-1} B A^{-1}$, where $B$ is the (rank-one) $n \times n$ matrix of $1$s and $A$ is the negative of the $n \times n$ identity matrix $I$.
-Of course, here $A^{-1} = -I$, so $A^{-1} B A^{-1} = B$.
-Also, $Tr[B A^{-1}] = -Tr[B] = -n$.
-Thus: $(A + B)^{-1} = -I - {1 \over 1 - n} B = -I + {1 \over n - 1} B$.<|endoftext|>
-TITLE: True or False: If $A + A^2$ is invertible, then $A$ is also invertible
-QUESTION [7 upvotes]: A is a square $n$ by $n$ matrix here.
-I understand the proof for $A^2$ being invertible given that $A$ is invertible, but I fail to see how to incorporate the $A + A^2$ factor into it.
-What I have tried so far is a rough factoring to give:
-$A(I_n + A)$
-But that is where I am stuck. Any help is much appreciated!
-
-REPLY [2 votes]: This will be true in every ring: if $a+a^2$ is invertible then so is $a$. It follows from another general fact: if $ab$ has a right inverse and $ca$ has a left inverse, then $a$ is invertible. Indeed, $(a b) b_1=1$ implies $a(b b_1) = 1$ and $c_1 (ca) = 1$ implies $(c_1 c) a= 1$, so $a$ has both left and right inverses, and so they coincide (standard proof) and $a$ is invertible.
-Now, to the proof of the statement. If $a+a^2$ is invertible then $a(1+a)$ is invertible and $(1+a)a$ is invertible, and by the above it follows that $a$ itself is invertible.<|endoftext|>
-TITLE: What does this joke mean?
-QUESTION [15 upvotes]: I saw this written on a blackboard in the math department building the other day:
-Gas Law: $PV=nRT$
-Ideal Gas Law: $(P)(V)=(n)(R)(T)$
-I know the ideal gas law is something from chemistry, but I'm assuming this is meant to be some sort of joke involving math. Any ideas?
-
-REPLY [14 votes]: In a ring, "$(a)$" is common notation for the principal ideal generated by $a$. (See https://en.wikipedia.org/wiki/Principal_ideal.)<|endoftext|>
-TITLE: Showing that the operator is bounded and finding its norm.
-QUESTION [5 upvotes]: I have this operator $T: L^p(0,\infty)\rightarrow L^p(0,\infty)$, $1<p<\infty$, defined by $$T(f)(x)=\frac{1}{x}\int_{0}^{x}f(t)\,dt,$$ and I want to show that $T$ is bounded and find its norm.
-
-REPLY: Assume first that $f$ is a non-negative continuous function with compact support in $(0,\infty)$. Now an integration by parts gives that $$||T(f)||_{p}^{p}= x\cdot T(f)(x)^{p} \,|_{0}^{\infty} +p\int_{0}^{\infty}\left(T(f)(x)^{p} -T(f)(x)^{p-1}f(x)\right) \, dx= $$ $$p\cdot ||T(f)||_{p}^{p} -p\int_{0}^{\infty}T(f)(x)^{p-1}f(x) \, dx$$
-where the boundary term vanishes: $T(f)$ stays bounded near $0$, and $T(f)(x)=O(1/x)$ as $x\rightarrow\infty$, so $x\cdot T(f)(x)^{p}\rightarrow 0$ at both ends since $p>1$.
-Collecting the terms involving $||T(f)||_{p}^{p}$, dividing, and using Hölder's inequality, we get $$||T(f)||_{p}^{p}= \frac{p}{p-1}\int_{0}^{\infty}T(f)(x)^{p-1}f(x)\, dx\leq \frac{p}{p-1}\left(\int_{0}^{\infty}T(f)(x)^{(p-1)q}\, dx\right)^{1/q}\cdot ||f||_{p}=$$ $$\frac{p}{p-1}\left(\int_{0}^{\infty}T(f)(x)^{p}\, dx\right)^{1/q}\cdot ||f||_{p}= \frac{p}{p-1}\cdot ||T(f)||_{p}^{\frac{p}{q}}\cdot ||f||_{p}$$ where $\frac{1}{p}+\frac{1}{q}=1$. Now assuming $||T(f)||_{p}$ is non-zero, otherwise the claim is trivial, we divide and obtain $$||T(f)||_{p} \leq \frac{p}{p-1}\cdot ||f||_{p}$$ for every non-negative continuous function $f$ with compact support. A quick numerical sanity check of this bound is sketched below.
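-Here is that check (an editor's sketch, not part of the original argument; the test function, grid, and truncation point are arbitrary choices), discretizing $T(f)(x)=\frac{1}{x}\int_{0}^{x}f(t)\,dt$ with a cumulative Riemann sum:
-
-import numpy as np
-
-def hardy_check(f, p, x_max=200.0, n=400_000):
-    # Compare ||Tf||_p with (p/(p-1))*||f||_p on a uniform grid, where
-    # T(f)(x) = (1/x) * integral_0^x f(t) dt is approximated by a
-    # left Riemann sum of the running integral.
-    x = np.linspace(1e-6, x_max, n)
-    dx = x[1] - x[0]
-    fx = f(x)
-    Tf = np.cumsum(fx) * dx / x
-    lhs = (np.sum(np.abs(Tf) ** p) * dx) ** (1 / p)
-    rhs = p / (p - 1) * (np.sum(np.abs(fx) ** p) * dx) ** (1 / p)
-    return lhs, rhs
-
-# A non-negative continuous test function with compact support in (1, 3).
-f = lambda x: np.maximum(0.0, 1.0 - np.abs(x - 2.0))
-for p in (1.5, 2.0, 4.0):
-    lhs, rhs = hardy_check(f, p)
-    print(f"p={p}: ||Tf||_p = {lhs:.4f} <= {rhs:.4f} = p/(p-1)*||f||_p")
-
-For this test function the inequality holds with plenty of room; the classical near-maximizers are truncations of $x^{-1/p}$, for which the two sides come arbitrarily close.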
-
Now using the simple fact that $|T(f)(x)| \leq T(|f|)(x)$ for each $x\in (0,\infty)$, the result immediately extends to all continuous functions with compact support on $(0,\infty)$. Now to finish off the proof, let $g$ be an arbitrary function in $\mathcal{L}^{p}(0,\infty)$ and $\left\{f_{n}\right\}_{n=1}^{\infty}$ a sequence of continuous functions with compact support that converges to $g$ in $\mathcal{L}^{p}$-norm. We can extract a subsequence $\left\{f_{n_{j}}\right\}_{j=1}^{\infty}$ which converges to $g$ almost everywhere on $(0,\infty)$.
-Now using Fatou's lemma, we finally get that $$||T(g)||_{p}\leq \liminf_{j\rightarrow \infty} \, ||T(f_{n_{j}})||_{p}\leq \liminf_{j\rightarrow \infty} \, \frac{p}{p-1}||f_{n_{j}}||_{p}\leq \limsup_{j\rightarrow \infty} \, \frac{p}{p-1}\left(||f_{n_{j}}-g||_{p} +||g||_{p}\right)= \frac{p}{p-1}||g||_{p}$$<|endoftext|>
-TITLE: Prove that the series converges absolutely
-QUESTION [6 upvotes]: Let $\{a_n\}$ and $\{r_n\}$ be two sequences of real numbers such that $\sum_{n=1}^\infty |a_n|< \infty.$ Prove that
-$$
-\displaystyle \sum_{n=1}^{\infty} \frac{a_n}{\sqrt{|x-r_n|}}
-$$
-converges absolutely for almost every $x \in \mathbb{R}$.
-Can anyone provide a useful hint to solve the problem? I am unable to figure out how almost every $x$ comes into the picture. Should I use the Lebesgue integral?
-
-REPLY [4 votes]: For any bounded measurable $A$
-$$\int_A\sum_{n=1}^{\infty}\frac{|a_n|}{\sqrt{|x-r_n|}}\,dx=\sum_{n\ge 1}\int_A\frac {|a_n|}{\sqrt{|x-r_n|}}\,dx<\infty$$
-Hence the sum is finite a.e. This can be extended to show a.e. absolute convergence of $$\sum_{n=1}^{\infty}a_nf_n(x)$$
-for any locally uniformly integrable $f_n$.<|endoftext|>
-TITLE: Jacobi theta with a matrix
-QUESTION [5 upvotes]: I would like to evaluate
-$$
-\sum_{q_1 = -\infty}^{\infty} \cdots \sum_{q_N = -\infty}^{\infty} e^{-\sum_{j}\sum_{k} q_{k} A_{kj} q_{j}}
-$$
-with $A$ a real $N\times N$ symmetric matrix.
-I know how to compute this when $q$ is continuous (the sum is an integral), and I know how to compute this when $A$ is a scalar (a $1\times 1$ matrix; this leads to the Jacobi theta), but if I try to diagonalize $A$, I end up with a transformation for the $q$s that I don't know how to write down in terms of a sum. The transformed $q$s, $q' = O^{T}q$, are linear combinations of the $q$s, and I don't know what the analogous Jacobian-like object would be for summation (in place of integration). Thanks.
-
-REPLY [2 votes]: This cannot be expressed in terms of elementary (or Jacobi theta) functions: in fact the series
-$$\Theta\left(\mathbf z | \Omega\right)=\sum_{\mathbf q\in\mathbb Z^N}e^{\pi i \mathbf q\cdot \Omega\cdot \mathbf q+2\pi i \mathbf q \cdot \mathbf z}$$
-is a multidimensional generalization of the Jacobi theta function called the Riemann theta function. Your case corresponds to setting $\mathbf z=\mathbf 0$, $\Omega=\frac{iA}{\pi}$, i.e. to Riemann theta constants.
-For the numerical evaluation, you may have a look at this paper.<|endoftext|>
-TITLE: Suppose $X_t$ is a Brownian motion with $X_0 \sim u_0$. What is the probability density of $X_t$? (heat equation)
-QUESTION [5 upvotes]: Suppose $u_0(x) = 2x$ for $0 \leq x \leq 1$ and $u_0(x)=0$ otherwise. Suppose $X_t$ is a Brownian motion with $X_0 \sim u_0$. What is the probability density of $X_t$?
-
-Since $X_t$ is a Brownian motion, the density should satisfy the heat equation
-$$ \partial_t u = \frac{1}{2} \partial_x^2 u .$$
-The first part of the question gives us initial conditions for the heat equation: $u_0(x) = 2x$ for $0 \leq x \leq 1$ and $u_0(x)=0$ otherwise.
-I think the problem boils down to solving the heat equation for this initial condition. How would this be done?
-
-REPLY [5 votes]: If $(B_t^x)_{t \geq 0}$ is a Brownian motion started at $x \in \mathbb{R}^d$, then the density of $B_t^x$ is given by
-$$p_t^x(y) = \frac{1}{(2\pi t)^{d/2}} \exp \left(- \frac{|y-x|^2}{2t} \right).$$
-Using the Dirac measure we can rewrite this as follows:
-$$p_t^x(y) = \frac{1}{(2\pi t)^{d/2}} \int \exp \left(- \frac{|y-z|^2}{2t} \right) \delta_x(dz) $$
-Note that $\delta_x$ is the initial distribution of the Brownian motion $(B_t^x)_{t \geq 0}$. If we replace $\delta_x$ by some general distribution, say $\mu$, we get
-$$p_t^{\mu}(y) = \frac{1}{(2\pi t)^{d/2}} \int \exp \left(- \frac{|y-z|^2}{2t} \right) \, \mu(dz).$$
-Roughly this means that we mix the densities $(p_t^x)_{x \in \mathbb{R}^d}$ according to our given initial distribution. In particular, if $\mu(dz) = u_0(z) \, dz$ for some density $u_0$, we have
-$$p_t^{\mu}(y) = \frac{1}{(2\pi t)^{d/2}} \int \exp \left(- \frac{|y-z|^2}{2t} \right) u_0(z) \, dz.$$
-A straightforward calculation shows that this function does indeed satisfy the heat equation. Moreover, one can show that $p_t^{\mu}(y) \to u_0(y)$ as $t \to 0$. Note that the density $p_t^{\mu}$ is the convolution of the initial distribution and the heat kernel.
-This is the probabilistic approach. The more popular approach to solve the heat equation uses Fourier methods (i.e. take the Fourier transform of the heat equation, do some calculations and then invert the Fourier transform to get the solution $u$). You will find this in (almost) any book on PDEs.
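-For the concrete initial density $u_0(x)=2x$ on $[0,1]$ from the question, here is a short numerical sketch (an editor's addition, not part of the original answer; the time $t=0.25$, sample size, and binning are arbitrary choices). It checks the convolution formula above against a direct simulation of $X_t=X_0+B_t$, where $X_0$ is drawn by inverse-CDF sampling: $P(X_0\le x)=x^2$ on $[0,1]$, so $X_0=\sqrt{U}$ with $U$ uniform.
-
-import numpy as np
-
-def density_convolution(y, t, m=2000):
-    # p_t(y) = int_0^1 2z * (2*pi*t)^(-1/2) * exp(-(y-z)^2 / (2t)) dz,
-    # evaluated with a simple Riemann sum over z.
-    z = np.linspace(0.0, 1.0, m)
-    kernel = np.exp(-(y[:, None] - z[None, :]) ** 2 / (2 * t))
-    kernel /= np.sqrt(2 * np.pi * t)
-    return np.sum(2 * z * kernel, axis=1) * (z[1] - z[0])
-
-rng = np.random.default_rng(0)
-t, n = 0.25, 1_000_000
-x0 = np.sqrt(rng.uniform(size=n))              # X_0 with density 2x on [0, 1]
-xt = x0 + np.sqrt(t) * rng.standard_normal(n)  # X_t = X_0 + B_t, B_t ~ N(0, t)
-
-ys = np.linspace(-1.0, 2.0, 7)
-hist, edges = np.histogram(xt, bins=300, range=(-2.0, 3.0), density=True)
-centers = 0.5 * (edges[:-1] + edges[1:])
-mc = np.interp(ys, centers, hist)              # empirical density at test points
-exact = density_convolution(ys, t)
-for y, a, b in zip(ys, mc, exact):
-    print(f"y={y:+.2f}: simulated ~ {a:.4f}, convolution ~ {b:.4f}")
-
-The two columns agree up to Monte Carlo and binning error, and rerunning with small $t$ shows the convolution collapsing back onto $u_0$, matching the remark that $p_t^{\mu}(y)\to u_0(y)$ as $t \to 0$.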