Proofs of $\lim\limits_{n \to \infty} \left(H_n - 2^{-n} \sum\limits_{k=1}^n \binom{n}{k} H_k\right) = \log 2$ Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$ I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first. (The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.) (Added: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them. Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? This question was asked and answered a while back; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.)
I made a quick estimate in my comment. The basic idea is that the binomial distribution $2^{-n} \binom{n}{k}$ is concentrated around $k= \frac{n}{2}$. Simply plugging this value into the limit expression, we get $H_n-H_{n/2} \sim \ln 2$ for large $n$. Fortunately, formalizing the intuition isn't that hard. Call the giant sum $S$. Notice that $S$ can be written as $\newcommand{\E}{\mathbf{E}}$ $$ \sum_{k=0}^{\infty} \frac{1}{2^{n}} \binom{n}{k} (H(n) - H(k)) = \sum_{k=0}^{\infty} \Pr[X = k](H(n) - H(k)) = \E \left[ H(n) - H(X) \right], $$ where $X$ is distributed according to the binomial distribution $\mathrm{Bin}(n, \frac12)$. We need the following two facts about $X$: (1) with probability $1$, $0 \leqslant H(n) - H(X) \leqslant H(n) = O(\ln n)$; (2) from the Bernstein inequality, for any $\varepsilon \gt 0$, we know that $X$ lies in the range $\frac{1}{2}n (1\pm \varepsilon)$, except with probability at most $e^{- \Omega(n \varepsilon^2) }$. Since the function $x \mapsto H(n) - H(x)$ is monotone decreasing, we have $$ S \leqslant \color{Red}{H(n)} \color{Blue}{-H\left( \frac{n(1-\varepsilon)}{2} \right)} + \color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}. $$ Plugging in the standard estimate $H(n) = \ln n + \gamma + O\Big(\frac1n \Big)$ for the harmonic sum, we get: $$ \begin{align*} S &\leqslant \color{Red}{\ln n + \gamma + O \Big(\frac1n \Big)} \color{Blue}{- \ln \left(\frac{n(1-\varepsilon)}{2} \right) - \gamma + O \Big(\frac1n \Big)} +\color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)} \\ &\leqslant \ln 2 - \ln (1- \varepsilon) + o_{n \to \infty}(1) \leqslant \ln 2 + O(\varepsilon) + o_{n \to \infty}(1). \tag{1} \end{align*} $$ An analogous argument gives the lower bound $$ S \geqslant \ln 2 - \ln (1+\varepsilon) - o_{n \to \infty}(1) \geqslant \ln 2 - O(\varepsilon) - o_{n \to \infty}(1). \tag{2} $$ Since the estimates $(1)$ and $(2)$ hold for all $\varepsilon > 0$, it follows that $S \to \ln 2$ as $n \to \infty$.
Suggest a textbook on calculus I am taking single-variable calculus this semester, and the course is using Thomas' Calculus as the textbook. But this book is just too huge; a single chapter contains 100 exercise questions! Now I'm looking for a concise and complete textbook. I'm not interested in routine, computational exercises, but rather in some challenging problem sets. I have quite a strong basic knowledge of calculus from high school, but I still have difficulties solving a few questions from past exam papers. So I'm looking for more challenging exercises. In fact, I'm looking forward to solving Putnam-level questions. Please suggest some textbooks with these features. Thanks in advance.
Here's a new book to add to the growing list of books that handle calculus more rigorously than usual and include numerous hard problems: Amol Sasane's The How and Why of One Variable Calculus, released in August 2015 by Wiley. https://books.google.se/books?id=HMnvCQAAQBAJ&printsec=frontcover&dq=sasane+calculus&hl=sv&sa=X&ved=0CCQQ6wEwAGoVChMIsYe1utLQyAIVJotyCh2XsQaF#v=onepage&q=sasane%20calculus&f=false I have tried a newer edition of Thomas' Calculus, which I do not recommend: the exercises are so basic that you won't be challenged at all.
What did Simon Norton do after 1985? Simon Norton is a mathematician who worked on finite simple groups and co-authored the Atlas of Finite Groups. With John Conway he proved there is a connection between the Monster group and the j-function: monstrous moonshine. There's now a book out titled The Genius in my Basement by Alexander Masters, where he says that Simon Norton stopped doing mathematics after 1985 when John Conway left for America. I find this hard to believe because the book also talks about his immense talent and natural attraction to the subject as a youngster: while still at school, he gained a first-class external degree from London University, and he won gold at the IMO between '67 and '69 plus special prizes twice. Did he continue to produce mathematical papers after 1985? I suspect the answer is yes, and would love to know what he did after 1985.
He was teaching postgraduate students in the year 1987-88; I did Part III in Cambridge that year, where he lectured on Reflection Groups. It was the most interesting subject of all that I took that year, and I regret not taking that subject in the final exam.
Invertibility of elements in a left Noetherian ring Let $A$ be a left Noetherian ring. How do I show that every element $a\in A$ which is left invertible is actually two-sided invertible?
Recall that we get an isomorphism $A^\text{op} \to \operatorname{End}_A(A)$ by sending $a$ to the endomorphism $x \mapsto xa$. Here $A^\text{op}$ is the opposite ring. If $a$ is left invertible then the corresponding endomorphism $f$ is surjective, and if we can show that $f$ is injective then $f$ is invertible in $\operatorname{End}_A(A)$, whence $a$ is invertible in both $A^\text{op}$ and $A$. It isn't any harder to prove a more general statement: If an endomorphism of a Noetherian module is surjective, then it is an isomorphism. Here are some ideas for this. If $g$ is such an endomorphism then the increasing sequence of submodules $\{\operatorname{Ker}(g^n)\}$ must stabilize. Use this and the fact that each $g^n$ is surjective to show that the kernel is trivial.
The infinity of a random variable The problem is: For infinite independent Bernoulli trials, prove that the total number of successful trials $N$ has the following property: $$ [N < \infty] = \bigcup\limits_{n=1}^{\infty}\,[N \le n] $$ Actually this is just part of a bigger problem in a book, and the equation is given as an obvious fact and as a hint without any explanation. What exactly does the equation mean? I guess the square brackets denote a set, but what's the definition of $[N < \infty]$?
Forget everything except that $N$ is a function from $\Omega$ to $\mathbb R^+$. Then $[N<\infty]$ is the set of $\omega$ in $\Omega$ such that $N(\omega)$ is finite and $[N\le n]$ is the set of $\omega$ in $\Omega$ such that $N(\omega)\le n$. Hence $[N\le n]\subseteq[N<\infty]$ for every $n$. For the other inclusion, note that $N(\omega)$ finite implies there exists $n$ such that $N(\omega)\le n$. Hence the equality.
Motivation for the term "separable" in topology A topological space is called separable if it contains a countable dense subset. This is standard terminology, but I find it hard to associate the term with its definition. What is the motivation for using this term? More vaguely, is it meant to capture any suggestive image or analogy that I am missing?
On Srivatsan's request I'm making my comment into an answer, even if I have little to add to what I said in the MO-thread. As Qiaochu put it in a comment there: My understanding is it comes from the special case of ℝ, where it means that any two real numbers can be separated by, say, a rational number. In my answer on MO I provided a link to Maurice Fréchet's paper Sur quelques points du calcul fonctionnel, Rend. Circ. Mat. Palermo 22 (1906), 1-74 and quoted several passages from it in order to support that view: The historical importance of that paper is (among many other things) that it is the place where metric spaces were formally introduced. Separability is defined as follows: Amit Kumar Gupta's translation in a comment on MO: We will henceforth call a class separable if it can be considered in at least one way as the derived set of a denumerable set of its elements. And here's the excerpt from which I quoted on MO with some more context — while not exactly accurate, I think it is best to interpret classe $(V)$ as metric space in the following: Felix Hausdorff, in his magnum opus Mengenlehre (1914, 1927, 1934) wrote (p.129 of the 1934 edition): My loose translation: The simplest and most important case is that a countable set is dense in $E$ [a metric space]; $E = R_{\alpha}$ has at most the cardinality of the continuum $\aleph_{0}^{\aleph_0} = \aleph$. A set in which a countable set is dense is called separable, with a not exactly very suggestive but already established term by M. Fréchet. A finite or separable set is called at most separable.
A conjecture about the form of some prime numbers Let $k$ be an odd number of the form $k=2p+1$, where $p$ denotes any prime number; then it is true that for each such number $k$, at least one of $6k-1$, $6k+1$ is a prime number. Can someone prove or disprove this statement?
$p = 59 \implies k = 2p + 1 = 119$. Neither $6k+1 = 715$ nor $6k-1 = 713$ is prime. Some values of $p$ that give counterexamples are: 59 83 89 103 109 137 139 149 151 163 193 239 269 281
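For anyone who wants to reproduce the search, here is a short Python sketch (with a plain trial-division primality test, which is plenty for numbers this small):

    def is_prime(n):
        # trial division; fine for the small numbers involved here
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    counterexamples = [p for p in range(2, 300)
                       if is_prime(p)
                       and not is_prime(6 * (2 * p + 1) - 1)
                       and not is_prime(6 * (2 * p + 1) + 1)]
    print(counterexamples)   # starts 59, 83, 89, 103, 109, ...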
Example of a model in set theory where the axiom of extensionality does not hold? I recently started a course in set theory and it was said that a model of set theory consists of a nonempty collection $U$ of elements and a nonempty collection $E$ of ordered pairs $(u,v)$, the components of which belong to $U$. Then the elements of $U$ are sets in the model and a set $u$ is interpreted as an element of $v$ if $(u,v) \in E$. It was also said that $U$ can also be a set and then $E$ is a relation in the set $U$ so that the ordered pair $(U,E)$ is a directed graph and, conversely, any directed graph $(U,E)$ can be used as a model of set theory. There have been examples of different models now where some of the axioms of ZFC do not hold and some do, but the axiom of extensionality has always held, and for some reason I don't seem to comprehend that axiom and its usage well enough. Can someone give an example of collections $E$ and $U$ where the axiom of extensionality wouldn't hold?
The axiom of extensionality says: $$\forall x\forall y\left(x=y\leftrightarrow \forall z\left(z\in x\leftrightarrow z\in y\right)\right)$$ Obviously, if two sets are the same then they have the same elements. So in order to violate this axiom we need to have different sets which the model would think have the same elements. If you just want a model of sets in which the axiom of extensionality does not hold, consider for $a\neq b$ the following: $$\left(U=\Big\{\{a,b\},\{a\},a\Big\}, \in\right)$$ We have that $a\in\{a\}$, and $a\in\{a,b\}$. Since $a\neq b$ we have that $\{a\}\neq\{a,b\}$, however for all $x\in U$ we have $x\in\{a\}\leftrightarrow x\in\{a,b\}$. This is because $U$ does not know about $b$. It just knows that $\{a,b\}$ and $\{a\}$ are two distinct beings. It is unable to say why, in terms of the $\in$ relation. The problems begin when you want more axioms. The more axioms you want to have, the more complicated your universe will have to get.
How do I reflect a function about a specific line? Starting with the graph of $f(x) = 3^x$, write the equation of the graph that results from reflecting $f(x)$ about the line $x=3$. I thought that it would be $f(x) = 3^{-x-3}$ (aka shift it three units to the right and reflect it), but it's wrong. The right answer is $f(x) = 3^{-x+6}$ but I just can't get to it! An explained step by step would be appreciated so I can follow what is being done. Thanks in advance!
Your idea will work if you just carry it fully through. First shift three units to the left, so the line of reflection becomes the y axis, then flip, and finally remember to shift three units back to the right to put the center line back where it belongs. (This gives the $f(6-x)$ solution you already know).
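Spelling the composition out on the given function: shifting three units to the left gives $3^{x+3}$; reflecting about the $y$-axis replaces $x$ with $-x$, giving $3^{-x+3}$; shifting three units back to the right replaces $x$ with $x-3$, giving $3^{-(x-3)+3}=3^{-x+6}$, which is the stated answer.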
Gandalf's adventure (simple vector algebra) So, I found the correct answer to this homework question, but I was hoping there was an easier way to find the same answer. Here's the question: Gandalf the Grey started in the Forest of Mirkwood at a point with coordinates $(-2, 1)$ and arrived in the Iron Hills at the point with coordinates $(-1, 6)$. If he began walking in the direction of the vector $\bf v = 5 \mathbf{I} + 1 \mathbf{J}$ and changes direction only once, when he turns at a right angle, what are the coordinates of the point where he makes the turn. The answer is $((-1/13), (18/13))$. Now, I know that the dot product of two perpendicular vectors is $0$, and the sum of the two intermediate vectors must equal $\langle 1, 5\rangle$. Also, the tutor solved the problem by using a vector-line formula which had a point, then a vector multiplied by a scalar. I'm looking for the easiest and most intuitive way to solved this problem. Any help is greatly appreciated! I'll respond as quickly as I can.
His first leg is $a(5,1)$ and his second leg is $b(1,-5)$ (because it is perpendicular to the first), with total displacement $(1,5)$. So $5a+b=1$ and $a-5b=5$. Then the turning point is $(-2,1)+a(5,1)$, which (as a check) should equal $(-1,6)-b(1,-5)$.
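If you would rather let a computer do the algebra, here is a small NumPy sketch solving that $2\times2$ system and checking both expressions for the turning point:

    import numpy as np

    # 5a + b = 1  and  a - 5b = 5
    M = np.array([[5.0, 1.0],
                  [1.0, -5.0]])
    a, b = np.linalg.solve(M, np.array([1.0, 5.0]))   # a = 5/13, b = -12/13

    turn = np.array([-2.0, 1.0]) + a * np.array([5.0, 1.0])
    check = np.array([-1.0, 6.0]) - b * np.array([1.0, -5.0])
    print(turn, check)   # both approximately (-1/13, 18/13)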
Is $M^2-[M]$ a local martingale when $M$ is a local martingale? I've learned that for each continuous local martingale $M$, there's a unique continuous adapted non-decreasing process $[M]$ such that $M^2-[M]$ is a continuous local martingale. For a local martingale $M$, is there a adapted non-decreasing process $[M]$ such that $M^2-[M]$ is a local martingale? (i.e. Do we have an analogous result for discontinuous local martingales?) Thank you. (The notes I have only consider the continuous case. I tried to adapt the argument, but ran into various problems...)
The answer is yes. For a good exposition of semimartingale theory (which includes local martingales that are not necessarily continuous), I recommend Peter Medvegyev's "Stochastic Integration Theory". The general discontinuous (but still cadlag) theory is harder than the continuous case, but also fun to learn!
What is the logical operator for but? I saw a sentence like, I am fine but he has flu. Now I have to convert it into logical sentence using logical operators. I do not have any idea what should but be translated to. Please help me out. Thanks
I agree with Jiri on their interpretation. But coming from an AI background, I have a different sort of take: Your example "I am fine but he has flu" has to do with the common knowledge between the speaker and the audience. The speaker has a certain belief about the above common knowledge. The attempt is to warn the audience that the proposition next to 'but' is unexpected, given the proposition before 'but'. Let us denote the proposition of a sentence $S$ before 'but' as $before(S)$ and after 'but' as $after(S)$. Let's denote the information content of a proposition $B$ when $A$ is already known as $I(B|A)$. Then, 'but' means: $I(after(S)|before(S)) > I(\lnot after(S)|before(S))$. That is, the information content (surprise) of $after(S)$ is more than that of $\lnot after(S)$ when $before(S)$ is already known.
Interpolating point on a quad I have a quad defined by four arbitrary points, A, B, C and D all on the same plane. I then have a known point on that quad, P. I want to find the value of 's' as shown in the diagram above, where t and s are parametric values in the range (0, 1) that interpolate along the edges.
Let $E$ be the point you show on $AB$ and $F$ be the point on $CD$. Then $E=A+t(B-A), F=C+t(D-C), P=E+s(F-E)$, where you can write these out componentwise.
Is there any math operation defined to obtain the vector $[4,3,2,1]$ from $[1,2,3,4]$? I mean, has it been studied, and does it have a name? Like Transpose, Inverse, etc. have names. I wonder if the "inversion" of the components' positions has a name, so that I could then search for material on this topic. Thanks
I do not know anything about reversal specifically, but it is a special case of what is known as a permutation matrix.
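To make that concrete, here is a tiny NumPy example: the reversal is carried out by the permutation matrix with ones on the anti-diagonal (often called the exchange matrix).

    import numpy as np

    v = np.array([1, 2, 3, 4])
    J = np.fliplr(np.eye(4, dtype=int))   # ones on the anti-diagonal
    print(J @ v)                          # [4 3 2 1]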
Computing Grades-getting average from a weighted test Ok well I have this basic problem in which there are grades (4 grades). There's an upcoming final that is weighted to be some fraction of the final grade (2/3). I have to find what the final exam grade has to be to get an average grade of 80, and then 90. I completely forgot the procedure for how to tackle this problem. Anybody have any hints/tricks to start me off?
The key to solving this problem is to realize that there are essentially two components that will go into the final grade: 1) The average of the previous four tests 2) The grade on the final Thus we can set it up as follows: Let $G =$ Grade in the class, $a =$ average score from the previous 4 tests, and $f =$ final exam score. \begin{align*} G = \frac{2f}{3} + \frac{a}{3} \end{align*} Using your numbers you can solve for whatever possibilities you need. EDIT: you can also use this approach for any different weightings by simply changing the fractional amounts that $a$ and $f$ are worth; for example, if you want the final, $f$, to be worth $3/4$ of the final grade, then it would be: \begin{align*} G = \frac{3f}{4} + \frac{a}{4} \end{align*}
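For example, with made-up numbers: if the average of the previous four tests is $a = 75$ and you want $G = 80$, then $80 = \frac{2f}{3} + \frac{75}{3} = \frac{2f}{3} + 25$, so $f = 82.5$.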
Are column operations legal in matrices also? In linear algebra we have been talking a lot about the three elementary row operations. What I don't understand is why we can't multiply by any column by a constant? Since a matrix is really just a grouping of column vectors, shouldn't we be able to multiply a whole column by a constant but maintain the same set of solutions for the original and resulting matrices?
I think that the reason "elementary" column operations are not valid comes from our conventions about the form of linear equations. That is, we define linear equations to be written in the way we are used to writing them now! You can try to write these equations down and apply column operations to them, and see what happens.
Showing group with $p^2$ elements is Abelian I have a group $G$ with $p^2$ elements, where $p$ is a prime number. Some (potentially) useful preliminary information I have is that there are exactly $p+1$ subgroups with $p$ elements, and with that I was able to show $G$ has a normal subgroup $N$ with $p$ elements. My problem is showing that $G$ is abelian, and I would be glad if someone could show me how. I had two potential approaches in mind and I would prefer if one of these were used (especially the second one). First: The center $Z(G)$ is a normal subgroup of $G$, so by Lagrange's theorem, if $Z(G)$ contains anything other than the identity, its size is either $p$ or $p^2$. If $p^2$ then $Z(G)=G$ and we are done. If $|Z(G)|=p$ then the quotient group of $G$ factored out by $Z(G)$ has $p$ elements, so it is cyclic, and I can prove from there that this implies $G$ is abelian. So can we show there's something other than the identity in the center of $G$? Second: I list out the elements of some other subgroup $H$ with $p$ elements such that the intersection of $H$ and $N$ is only the identity (if there were any more, then due to the prime order the common elements would generate both entire subgroups). Let $N$ be generated by $a$ and $H$ be generated by $b$. We can show $NH = G$, i.e. every element in $G$ can be written like $a^k b^l$. So for this method, we just need to show $ab=ba$ (remember, these are not general elements in the set, but the generators of $N$ and $H$). Do any of these methods seem viable? I understand one can give very strong theorems using Sylow theorems and related facts, but I am looking for an elementary solution (no Sylow theorems, facts about p-groups, or centralizers), though definitions of centres and normalizers are fine.
Here is a proof of $|Z(G)|\not=p$ which does not invoke the proposition "if $G/Z(G)$ is cyclic then $G$ is abelian". Suppose $|Z(G)|=p$. Let $x\in G\setminus Z(G)$. Lagrange's theorem and $p$ being prime dictate that the centralizer $Z(x)$ of $x$ has order $1$, $p$, or $p^2$. Note that $\{x\}\uplus Z(G)\subset Z(x)$, so $Z(x)$ has at least $p+1$ elements, hence $|Z(x)|=p^2$; but this implies $x\in Z(G)$, a contradiction.
Question regarding counting poker dice Problem Poker dice is played by simultaneously rolling 5 dice. How many ways can we form "1 pair", "2 pairs"? For one pair, I got the answer right away. First I consider that there are 5 spots for 5 dice. Then I pick 2 places out of 5, which means there are 3 places left, so we have to choose 3 out of 3, which is 1 way. Hence, I have: $${{5}\choose{2}} \cdot 6 \cdot {{3}\choose{3}} \cdot 5 \cdot 4 \cdot 3 = 3600.$$ However, I couldn't figure out why I got two pairs wrong. First, I pick 2 places for the first pair, then its rank. Next, 2 places for the second pair, and its rank. Since there is only 1 place left, I pick the rank for the last die. $${{5}\choose{2}} \cdot 6 \cdot {{3}\choose{2}} \cdot 5 \cdot 4 = 3600.$$ But the correct answer is 1800, which means I need to divide by a factor of 2. I guess that might be because the order of the two pairs can be switched, but I wonder, is there a better way to count it? I'm so confused! Any idea?
You’ve correctly identified the mistake: you’ve counted each hand twice, because the pairs can be chosen in either order. For instance, you’ve counted the hand $11223$ once with $22$ as the first pair and $33$ as the second pair, and once again with $33$ as the first pair and $22$ as the second pair. Here’s a way to count that avoids that problem. First pick the two denominations that will be pairs; this can be done in $\binom62$ ways. Then pick the remaining denomination; this can be done in $4$ ways. Now choose which of the $5$ dice will show the singleton; this can be done in $5$ ways. Finally, choose which $2$ of the remaining $4$ dice will show the smaller of the two pairs; this can be done in $\binom42$ ways. The total is then $\binom62\cdot4\cdot5\cdot\binom42=1800$ ways. The key to avoiding the double counting is to choose the positions for a specific pair. Once you know where the smaller pair and the singleton are, you automatically know where the larger pair is: there’s nothing to choose.
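If you want to double-check both counts, the sample space has only $6^5 = 7776$ equally likely rolls, so a brute-force enumeration in Python takes a moment (the pattern test below is just one way to classify a roll):

    from itertools import product
    from collections import Counter

    one_pair = two_pair = 0
    for roll in product(range(1, 7), repeat=5):
        pattern = sorted(Counter(roll).values())
        if pattern == [1, 1, 1, 2]:
            one_pair += 1
        elif pattern == [1, 2, 2]:
            two_pair += 1

    print(one_pair, two_pair)   # 3600 1800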
Why is lambda calculus named after that specific Greek letter? Why not “rho calculus”, for example? Where does the choice of the Greek letter $\lambda$ in the name of “lambda calculus” come from? Why isn't it, for example, “rho calculus”?
Dana Scott, who was a PhD student of Church, addressed this question. He said that, in Church's words, the reasoning was "eeny, meeny, miny, moe" — in other words, an arbitrary choice for no reason. He specifically debunked Barendregt's version in a recent talk at the University of Birmingham.
Questions about averaging I have some trouble with averages. Here are two questions rolled into one: why is $$\frac{\prod _{n=1}^N \left(1-\text{rnd}_n\right)}{N} \neq \prod _{n=1}^N \frac{1-\text{rnd}_n}{N}, \quad\mbox{where $\text{rnd}_n$ is a random Gaussian real?}$$ And how can I get $\frac{\prod _{n=1}^N \left(1-\text{rnd}_n\right)}{N}$ using only the mean and the variance of rnd, not the actual values? So I only know how rnd is shaped, but not the values, which are supposed to average out anyway. What rule about averaging am I violating?
As Ross has mentioned, you cannot know the actual value of the expressions you wrote based only on the characteristics of random variables such as mean or a variance. You can only ask for the distribution of these expressions. E.g. in the case $\xi_n$ (rnd$_n$) are iid random variables, you can use the fact that $$ \mathsf E[(1-\xi_i)(1-\xi_j)]=\mathsf E(1-\xi_i)\mathsf E(1-\xi_j) = (\mathsf E(1-\xi_i))^2$$ which leads to the fact that $$ \mathsf E \pi_N = \frac1N[\mathsf E(1-\xi_1)]^N = \frac{(1-a)^N}{N} $$ where $a = \mathsf E\xi_1$. Here I denoted $$ \pi_N = \frac{\prod\limits_{n=1}^N(1-\xi_n)}{N}. $$ This holds regardless of the distribution of $\xi$, just integrability is needed. In the same way you can also easily calculate the variance of this expression based only on the variance and expectation of $\xi$ (if you want, I can also provide it). Finally, there is a small hope that for the Gaussian r.v. $\xi$ the distribution of this expression will be nice since it includes the products of normal random variables. On your request: variance. Recall that for any r.v. $\eta$ holds $V \eta = \mathsf E \eta^2 - (\mathsf E\eta)^2$ hence $\mathsf E\eta^2 = V\eta+(\mathsf E\eta)^2$. As I told, you don't need to know the distribution of $\xi$, just its expectation $a$ and variance $\sigma^2$. Since we already calculated $\mathsf E\pi_N$, we just need to calculate $\mathsf E\pi^2_N$: $$ \mathsf E\pi_N^2 = \frac1{N^2}\mathsf E\prod\limits_{n=1}^N(1-\xi_n)^2 = \frac{1}{N^2}\prod\limits_{n=1}^N\mathsf E(1-\xi_n)^2 = \frac{1}{N^2}\left(\mathsf E(1-\xi_1)^2\right)^N. $$ Now, $$ \mathsf E(1-\xi_1)^2 = \mathsf E(1-2\xi_1+\xi^2_1) = 1-2a+\mathsf E\xi_1^2 = 1-2a+a^2+\sigma^2 = (1-a)^2+\sigma^2 $$ and $$ \mathsf E\pi_N^2 = \frac{1}{N^2}\left((1-a)^2+\sigma^2\right)^N. $$ As a consequence, $$ V\pi_N = \frac{1}{N^2}\left[\left((1-a)^2+\sigma^2\right)^N - (1-a)^{2N}\right]. $$
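As a sanity check of the two formulas, here is a small Monte Carlo sketch; the particular values of $a$, $\sigma$ and $N$ are arbitrary:

    import numpy as np

    a, sigma, N = 0.3, 0.5, 4
    rng = np.random.default_rng(0)
    xi = rng.normal(a, sigma, size=(500_000, N))
    pi_N = np.prod(1 - xi, axis=1) / N

    # empirical mean and variance vs. the closed forms derived above
    print(pi_N.mean(), (1 - a) ** N / N)
    print(pi_N.var(), (((1 - a) ** 2 + sigma ** 2) ** N - (1 - a) ** (2 * N)) / N ** 2)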
What kind of quadrilateral is ABCD? ABCD is a quadrilateral; given $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}$, what kind of quadrilateral is ABCD? I guess it's a rectangle, but how to prove it? If the situation becomes $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}=\overrightarrow{DA}\cdot\overrightarrow{AB}$, I can easily prove ABCD is a rectangle. So, the question is: given $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}$, can we get $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}=\overrightarrow{DA}\cdot\overrightarrow{AB}$? Thanks.
Some informal degrees-of-freedom analysis: (1) An arbitrary quadrilateral on a plane is described by $8$ parameters: the coordinates of each vertex (to simplify matters, I don't take the quotient by isometries). (2) A rectangle on a plane is described by $5$ parameters: the endpoints of one side and the (signed) length of the other side. We should not expect two equations to restrict the $8$-dimensional space of quadrilaterals down to the $5$-dimensional space of rectangles. Three equations (also given in the post) are enough. The above is not a rigorous proof because two equations $f=0=g$ can be made into one $f^2+g^2=0$, etc. One needs some transversality consideration to make it work. But it's easier to just quote a geometric argument given by Henning Makholm in the comments. If you place $B$, $C$, and $D$ arbitrarily, then each of the two equations between dot products defines a line that $A$ must lie on. Putting $A$ at the intersection between these two lines gives you a quadrilateral that satisfies the condition. So you cannot conclude anything about the angle at $C$ (i.e., it doesn't have to be a rectangle) -- nor anything about the relative lengths of $BC$ versus $CD$. A concrete example would be $A(2,5)$, $B(-1,1)$, $C(0,0)$, $D(1,0)$. Doesn't look like anything that has a nice name. Neither does $A(1,1)$, $B(1,2)$, $C(0,0)$, $D(0,2)$. The first of Henning's examples is below (the second isn't even convex)
Prove that every element of a finite group has an order I was reading Nielsen and Chuang's "Quantum Computation and Quantum Information" and in the appendices was a group theory refresher. In there, I found this question: Exercise A2.1 Prove that for any element $g$ of a finite group, there always exists a positive integer $r$ such that $g^r=e$. That is, every element of such a group has an order. My first thought was to look at small groups and try an inductive argument. So, for the symmetric groups of small order, e.g. $S_1, S_2, S_3$, the integer $r$ is less than or equal to the order of the group. I know this because the groups are small enough to calculate without using a general proof. For example, in $S_3$ there is an element that rearranges the identity element $\langle ABC \rangle$ by shifting one character to the left, e.g. $s_1 = \langle BCA \rangle$. Multiplying this element by itself produces the terms $s_1^2 = \langle CAB \rangle$ and $s_1^3 = \langle ABC \rangle$, which is the identity element, so the order of this element is three. I have no idea if this relation holds for $S_4$, which means I am stuck well before I get to the general case. There's a second question I'd like to ask related to the first. Is the order or period of any given element always less than or equal to the order of the group it belongs to?
Sometimes it is much clearer to argue the general case. Consider any $g \in G$, a finite group. Since $G$ is a group, we know that $g\cdot g = g^2 \in G$. Similarly, $g^n \in G$ for any $n$. So there is a sequence of elements, $$g, g^2, g^3, g^4, g^5, \ldots, g^n, \ldots $$ in $G$. Now since $G$ is finite, there must be a pair of numbers $m \neq n$ such that $g^m = g^n$ (well, there are many of these, but that's irrelevant to this proof). Can you finish the proof from this point? What does $g^m = g^n$ imply in a group? Hope this helps!
Extension of $3\sigma$ rule For the normally distributed r.v. $\xi$ there is a rule of $3\sigma$ which says that $$ \mathsf P\{\xi\in (\mu-3\sigma,\mu+3\sigma)\}\geq 0.99. $$ Clearly, this rule does not necessarily hold for other distributions. I wonder if there are lower bounds for $$ p(\lambda) = P\{\xi\in (\mu-\lambda\sigma,\mu+\lambda\sigma)\} $$ regardless of the distribution of the real-valued random variable $\xi$. If we are focused only on absolutely continuous distributions, a naive approach is to consider the variational problem $$ \int\limits_{\int\limits xf(x)\,dx - \lambda\sqrt{\int\limits x^2f(x)\,dx-(\int\limits xf(x)\,dx)^2}}^{\int\limits xf(x)\,dx + \lambda\sqrt{\int\limits x^2f(x)\,dx-(\int\limits xf(x)\,dx)^2}} f(x)\,dx \to\inf\limits_f $$ which may be too naive. The other problem is that distributions are not necessarily absolutely continuous. So my question is: are there known lower bounds for $p(\lambda)$?
In general this is Chebyshev's inequality $$\Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}.$$ Equality is achieved by the discrete distribution $\Pr(X=\mu)=1-\frac{1}{k^2}$, $\Pr(X=\mu-k\sigma)=\frac{1}{2k^2}$, $\Pr(X=\mu+k\sigma)=\frac{1}{2k^2}$. This can be approached arbitrarily closely by an absolutely continuous distribution. Letting $k=3$, this gives $$\Pr(|X-\mu|\geq 3\sigma) \leq \frac{1}{9} \approx 0.11;$$ while letting $k=10$, this gives $$\Pr(|X-\mu|\geq 10\sigma) \leq \frac{1}{100} =0.01.$$ so these bounds are relatively loose for a normal distribution. This diagram (from my page here) compares the bounds. Red is Chebyshev's inequality; blue is a one-tailed version of Chebyshev's inequality; green is a normal distribution; and pink is a one-tailed normal distribution.
Equicontinuous set Let $\mathcal E$ be the set of all functions $u\in C^1([0,2])$ such that $u(x)\geq 0$ for every $x\in[0,2]$ and $|u'(x)+u^2(x)|<1$ for every $x\in [0,2]$. Prove that the set $\mathcal F:=\{u_{|[1,2]}: u\in\mathcal E\}$ is an equicontinuous subset of $C^0([1,2]).$ The point I am stuck on is that I can't see how to combine the strange hypothesis imposed on every $u\in\mathcal E$; in particular I solved the two differential equations $$u'(x)=1-u^2(x),\qquad u'(x)=-1-u^2(x),$$ which turn out to be the extremal cases of the given condition. In particular the two solutions are $$u_1(x)=\frac{ae^t-be^{-t}}{ae^t+be^{-t}},\qquad u_2(x)=\frac{a\cos(x)-b\sin(x)}{a\cos(x)+b\sin(x)}.$$ I feel however I'm not on the right path, so any help is appreciated. P.S. Those above are a big part of my efforts and thoughts on this problem so I hope they won't be completely useless :P Edit In the first case the derivative is $$u'_1(x)=\frac{2ab}{(ae^t+be^{-t})}\geq 0$$ while for the other function we have, for $x\in[0,2],$ $$u'_2(x)=-\frac{\sin(2x) ab}{(a\cos(x)+b\sin(x))^2}\leq 0.$$ Moreover $u_1(1)>u_2(1)$, since $$\frac{ae-be^{-1}}{ae+be^{-1}}>\frac{a\cos(1)-b\sin(1)}{a\sin(1)+b\cos(1)}\Leftrightarrow (a^2e+be^{-1})(\sin(1)-\cos(1)),$$ and $\sin(1)>\cos(1).$ Now, are all these bounds I've found useful for solving the problem?
Suppose $u \in \mathcal{E}$. It's enough to show $u(t) \le 3$ for all $t \in [1,2]$, since then we'll have $-10 \le u'(t) \le 1$, and any set of functions with uniformly bounded first derivatives is certainly equicontinuous. We also know that $u' \le 1$ on $[0,2]$, and so by the mean value theorem it suffices to show that $u(1) \le 2$. If $u(0) \le 1$ we are also done, so assume $u(0) > 1$. Let $v$ be the solution of $v'(t) = 1 - v(t)^2$ with $v(0) = u(0) > 1$. This is given by your formula for $u_1$ with, say, $b=1$ and some $a < -1$. I claim $u(t) \le v(t)$. This will complete the proof, since it is easy to check that $v(1) < 2$. (We have $v(1) = \frac{ae-e^{-1}}{ae+e^{-1}}$, which is increasing in $a$; compute its value at $a=-1$.) Set $w(t) = v(t) - u(t)$. We have $w(0)=0$ and $w'(t) > u(t)^2 - v(t)^2$. Suppose to the contrary there exists $s \in [0,1]$ such that $u(s) > v(s)$; let $s_0$ be the infimum of all such $s$. Then necessarily $u(s_0) = v(s_0)$, so $w(s_0)=0$ and $w'(s_0) > u(s_0)^2 - v(s_0)^2 = 0$. So for all small enough $\epsilon$, $w(s_0 + \epsilon) > 0$. This contradicts our choice of $s_0$ as the infimum. So in fact $u \le v$ and we are done.
What is the significance of the three nonzero requirements in the $\varepsilon-\delta$ definition of the limit? What are the consequences of the three nonzero requirements in the definition of the limit: $\lim_{x \to a} f(x) = L \Leftrightarrow \forall$ $\varepsilon>0$, $\exists$ $\delta>0 :\forall$ $x$, $0 < \lvert x-a\rvert <\delta \implies \lvert f(x)-L \rvert < \varepsilon$ I believe I understand that: 1. if $0 = \lvert x-a\rvert$ were allowed the definition would require that $f(x) \approx L$ at $a$ ($\lvert f(a)-L \rvert < \varepsilon$); 2. if $\varepsilon=0$ and $\lvert f(x)-L \rvert \le \varepsilon$ were allowed the definition would require that $f(x) = L$ near $a$ (for $0 < \lvert x-a\rvert <\delta$); and 3. if $\delta=0$ were allowed (and eliminating the tautology by allowing $0 \le \lvert x-a\rvert \le \delta$) the definition would simply apply to any function where $f(a) = L$, regardless of what happened in the neighborhood of $f(a)$. Of course if (2'.) $\varepsilon=0$ were allowed on its own, the definition would never apply ($\lvert f(a)-L \rvert \nless 0$). What I'm not clear about is [A] the logical consequences of (3'.) allowing $\delta=0$ on its own, so that: $\lim_{x \to a} f(x) = L \Leftrightarrow \forall$ $\varepsilon>0$, $\exists$ $\delta\geq0 :\forall$ $x$, $0 < \lvert x-a\rvert <\delta \implies \lvert f(x)-L \rvert < \varepsilon$ and [B] whether allowing both 1. and 2. would be equivalent to requiring continuity?
For (3), if $\delta = 0$ were allowed, the definition would apply to everything: since $|x-a| < 0$ is impossible, it implies whatever you like.
Proving the AM-GM inequality for 2 numbers $\sqrt{xy}\le\frac{x+y}2$ I am having trouble with this problem from my latest homework. Prove the arithmetic-geometric mean inequality. That is, for two positive real numbers $x,y$, we have $$ \sqrt{xy}≤ \frac{x+y}{2} .$$ Furthermore, equality occurs if and only if $x = y$. Any and all help would be appreciated.
Since $x$ and $y$ are positive, we can write them as $x=u^2$, $y=v^2$. Then $$(u-v)^2 \geq 0 \Rightarrow u^2 + v^2 \geq 2uv$$ which is precisely it.
Calculating Basis Functions for DFTs (64 Samples) I am attempting to graph some 64-sample basis functions in MATLAB, and getting inconsistent results -- which is to say, I'm getting results that are still sinusoidal, but don't have the frequency they ought. Here's a graph of what is supposed to be my c8 basis function: Unfortunately, it only has 7 peaks, which indicates that I seem to have botched the frequency somehow. I'm assuming my problem lies somewhere within how I'm trying to graph in MATLAB, and not an error in the function itself. Here's my code:

    n = linspace(0, 2*pi*8, 64)
    x = cos(2*pi*8*n/64)
    plot(n,x)

I'm inclined to believe x has the correct formula, but I'm at a loss as to how else to formulate an 'n' to graph it with. Why am I getting a result with the incorrect frequency?
You're plotting the function $\cos n\pi/4$, which has period $8$, and thus $8$ full periods from $0$ to $64$, but you're only substituting values from $0$ to $16\pi$. Since $16\pi\approx50$, you're missing a bit less than two of the periods. From what it seems you're trying to do, you should be plotting the function from $0$ to $64$, i.e., replace 2*pi*8 by 64 in the first line.
How to prove that the Binet formula gives the terms of the Fibonacci Sequence? This formula provides the $n$th term in the Fibonacci Sequence, and is defined using the recurrence formula: $u_n = u_{n − 1} + u_{n − 2}$, for $n > 1$, where $u_0 = 0$ and $u_1 = 1$. Show that $$u_n = \frac{(1 + \sqrt{5})^n - (1 - \sqrt{5})^n}{2^n \sqrt{5}}.$$ Please help me with its proof. Thank you.
Alternatively, you can use the linear recursion difference formula. This works for any linear recursion (i.e. a recursion of the form $a_n=qa_{n-1}+ra_{n-2}$). Step 1 for the closed form of a linear recursion: Find the roots of the equation $x^2=qx+r$. For Fibonacci, this equation is $x^2=x+1$. The roots are $\frac{1\pm\sqrt5}2$. Step 2: The closed form is of the form $a(n)=g\cdot\text{root}_1^n+h\cdot\text{root}_2^n$. For Fibonacci, this yields $a_n=g(\frac{1+\sqrt5}2)^n+h(\frac{1-\sqrt5}2)^n$. Step 3: Solve for $g$ and $h$. All you have to do now is plug two known values of the sequence into this equation. For Fibonacci, using $u_0=0$ and $u_1=1$, you get $g=1/\sqrt5$ and $h=-1/\sqrt5$. You are done!
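If you want a quick numerical sanity check of the resulting closed form against the recursion, here is a short Python sketch:

    from math import sqrt

    phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
    g, h = 1 / sqrt(5), -1 / sqrt(5)

    def binet(n):
        return g * phi ** n + h * psi ** n

    # build the sequence from the recursion u_0 = 0, u_1 = 1
    u = [0, 1]
    for _ in range(2, 25):
        u.append(u[-1] + u[-2])

    assert all(round(binet(n)) == u[n] for n in range(25))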
How to find primes between $p$ and $p^2$ where $p$ is an arbitrary prime number? What is the most efficient algorithm for finding the prime numbers which belong to the interval $(p,p^2)$, where $p$ is some arbitrary prime number? I have heard of the Sieve of Atkin, but is there some better way for the specific case I described?
This is essentially the same as asking for the primes below x for arbitrary x. There are essentially only two practical sieves for the task: Eratosthenes and Atkin-Bernstein. Practically, the sieve of Eratosthenes is fastest; the Atkin-Bernstein sieve might overtake it eventually but I do not know of any implementations that are efficient for large numbers. Unless your range is very small, it will not fit in memory. In that case it is critical to use a segmented sieve; both Eratosthenes and Atkin-Bernstein do this naturally. If you're looking for an existing program, try yafu, primesieve, or primegen. The first two are modified sieves of Eratosthenes and the last is an Atkin-Bernstein implementation, though efficient only to $2^{32}$ (or p = 65521 in your case).
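If you just want something simple to experiment with (and $p$ is small enough that the whole range fits in memory), a plain, unsegmented sieve of Eratosthenes is only a few lines; here is a Python sketch (the function name is made up):

    def primes_strictly_between(p):
        """All primes q with p < q < p*p, via an unsegmented sieve of Eratosthenes."""
        limit = p * p
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if is_prime[i]:
                for j in range(i * i, limit + 1, i):
                    is_prime[j] = False
        return [q for q in range(p + 1, limit) if is_prime[q]]

    print(len(primes_strictly_between(97)))   # count of primes between 97 and 9409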
How to predict the tolerance value that will yield a given reduction with the Douglas-Peucker algorithm? Note: I'm a programmer, not a mathematician - please be gentle. I'm not even really sure how to tag this question; feel free to re-tag as appropriate. I'm using the Douglas-Peucker algorithm to reduce the number of points in polygons (in a mapping application). The algorithm takes a tolerance parameter that indicates how far I'm willing to deviate from the original polygon. For practical reasons, I sometimes need to ensure that the reduced polygon doesn't exceed a predetermined number of points. Is there a way to predict in advance the tolerance value that will reduce a polygon with N points to one with N' points?
Here is a somewhat nontraditional variation of the Douglas-Peucker algorithm. We will divide a given curve into pieces which are well approximated by line segments (within tolerance $\varepsilon$). Initially, there is only one piece, which is the entire curve.
1. Find the piece $C$ with the highest "deviation" $d$, where the deviation of a curve is the maximum distance of any point on it from the line segment joining its end points.
2. If $d < \varepsilon$, then all pieces have sufficiently low deviation. Stop.
3. Let $p_0$ and $p_1$ be the end points of $C$, and $q$ be the point on $C$ which attains deviation $d$. Replace $C$ with the piece between $p_0$ and $q$, and the piece between $q$ and $p_1$.
4. Repeat.
It should be easy to see how to modify step 2 so that the algorithm produces exactly $n-1$ pieces, i.e. $n$ points, for any given $n$. Exercises (things I am too lazy to do myself):
1. Show that for (almost) any result of the modified algorithm, there is a corresponding tolerance on which Douglas-Peucker would produce the same result.
2. Use priority queues for efficient implementation of step 1.
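Below is a rough Python sketch of the point-budget variant described above; the helper names are made up, and deviation is measured to the segment between the end points. (The largest deviation left on the heap when the loop stops is, roughly, the tolerance that plain Douglas-Peucker would need to produce the same reduction; compare the first exercise above.)

    import heapq

    def dist_to_segment(pt, a, b):
        # perpendicular distance from pt to segment a-b, clamped to the end points
        (px, py), (ax, ay), (bx, by) = pt, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        cx, cy = ax + t * dx, ay + t * dy
        return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

    def simplify_to_n_points(points, n):
        """Split the polyline at the worst-deviation vertex until n vertices are kept."""
        def worst(i, j):
            # deviation of the piece points[i..j], and the index attaining it
            best = (-1.0, None)
            for k in range(i + 1, j):
                d = dist_to_segment(points[k], points[i], points[j])
                if d > best[0]:
                    best = (d, k)
            return best

        keep = {0, len(points) - 1}
        heap = []
        d, k = worst(0, len(points) - 1)
        if k is not None:
            heapq.heappush(heap, (-d, 0, len(points) - 1, k))
        while len(keep) < n and heap:
            _, i, j, k = heapq.heappop(heap)
            keep.add(k)
            for lo, hi in ((i, k), (k, j)):
                d, kk = worst(lo, hi)
                if kk is not None:
                    heapq.heappush(heap, (-d, lo, hi, kk))
        return [points[i] for i in sorted(keep)]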
Right identity and Right inverse in a semigroup imply it is a group Let $(G, *)$ be a semigroup. Suppose * *$ \exists e \in G$ such that $\forall a \in G,\ ae = a$; *$\forall a \in G, \exists a^{-1} \in G$ such that $aa^{-1} = e$. How can we prove that $(G,*)$ is a group?
It is conceptually very simple that a right inverse is also a left inverse (when there is also a right identity). It follows from the axioms above in two steps: 1) Any element $a$ with the property $aa = a$ [i.e. idempotent] must be equal to the identity $e$ in the axioms, since in that case: $$a = ae = a(aa^{-1}) = (aa)a^{-1} = aa^{-1} = e$$ This already proves the uniqueness of the [right] identity, since any identity by definition has the property of being idempotent. 2) By the axioms, for every element $a$ there is at least one right inverse element $a^{-1}$ such that $aa^{-1}=e$. Now we form the product of the same two elements in reverse order, namely $a^{-1}a$, to see if that product also equals the identity. If so, this right inverse is also a left inverse. We only need to show that $a^{-1}a$ is idempotent, and then its equality to $e$ follows from step 1: $$[a^{-1}a][ a^{-1}a] = a^{-1}(a a^{-1})a = a^{-1}ea = a^{-1}a $$ 3) It is now clear that the right identity is also a left identity. For any $a$: $$ea = (aa^{-1})a = a(a^{-1}a) = ae = a$$ 4) To show the uniqueness of the inverse: Given any elements $a$ and $b$ such that $ab=e$, then $$b = eb = a^{-1}ab = a^{-1}e = a^{-1}$$ Here, as above, the symbol $a^{-1}$ was first used to denote a representative right inverse of the element $a$. This inverse is now seen to be unique. Therefore, the symbol now signifies an operation of "inversion" which constitutes a single-valued function on the elements of the set. See Richard A. Dean, “Elements of Abstract Algebra” (Wiley, 1967), pp 30-31.
Real-world uses of Algebraic Structures I am a computer science student, and in discrete mathematics I am learning about algebraic structures. In it I am covering concepts like groups, semigroups, etc. Previously I studied graphs, and I can see an excellent real-world application for that; I strongly believe that in the future I can use much of it in my coding and algorithms related to graphics. Could someone tell me some real-world applications for algebraic structures too?
The fact that electrons, positrons, quarks, neutrinos and other particles exist in the universe is due to the fact that the quantum states of these particles respect Poincare invariance. Put in simpler terms: if Einstein's theory of relativity is to hold, some arguments using group theory show that the kinds of particles I mentioned respect Einstein's theory and that there is no fundamental reason they shouldn't exist. Scientists have used group theory to predict the existence of many particles. We use a special kind of groups called Lie groups, which are groups and manifolds at the same time. For example, $GL(n,R)$ is the Lie group of invertible linear transformations of the n-dimensional Euclidean space. Symmetry operations correspond to elements living inside groups. If you map these symmetry elements to the group of invertible (and unitary) transformations of a Hilbert space (an infinite-dimensional vector space where particle quantum states live), you can study how these particle states transform under the action of the group.
If $a, b, c$ are integers and $\gcd(a,b) = 1$, then $\gcd (a,bc)=\gcd(a,c)$ If $a, b, c$ and $k$ are integers, $\gcd(a,b) = 1$ and $\gcd(a, c)=k$, then $\gcd (bc, a)=k$.
Since $\gcd(a,b)=1$, there exist two integers $x$ and $y$ such that $$ax+by=1\tag{1}$$ Also, since $\gcd (a,c)=k$, there exist two integers $x_{1}$ and $y_{1}$ such that $$ax_{1}+cy_{1}=k\tag{2}$$ Now multiplying $(1)$ and $(2)$ we get $$a^{2}xx_{1}+acxy_{1}+bayx_{1}+bcyy_{1}=k$$ $$\Rightarrow a(axx_{1}+cxy_{1}+byx_{1})+bc(yy_{1})=k$$ Hence every common divisor of $a$ and $bc$ divides the left-hand side and therefore divides $k$; in particular $\gcd(a,bc)\mid k$. Conversely, $k$ divides $a$ and $k$ divides $c$ (hence $bc$), so $k\mid\gcd(a,bc)$. It follows that $\gcd(a,bc)=k.$
Why does this expected value simplify as shown? I was reading about the German tank problem and they say that in a sample of size $k$, from a population of integers from $1,\ldots,N$, the probability that the sample maximum equals $m$ is: $$\frac{\binom{m-1}{k-1}}{\binom{N}{k}}$$ This makes sense. But then they take the expected value of the sample maximum and claim: $$\mu = \sum_{m=k}^N m \frac{\binom{m-1}{k-1}}{\binom{N}{k}} = \frac{k(N+1)}{k+1}$$ And I don't quite see how to simplify that summation. I can pull the denominator and a $\frac{1}{(k-1)!}$ factor out and get: $$\mu = \frac{1}{(k-1)!\binom{N}{k}} \sum_{m=k}^N m(m-1) \ldots (m-k+1)$$ But I get stuck there...
Call $B_k^N=\sum\limits_{m=k}^N\binom{m-1}{k-1}$. Fact 1: $B_k^N=\binom{N}{k}$ (because the sum of the masses of a discrete probability measure is $1$ or by a direct computation). Fact 2: For every $n\geqslant i\geqslant 1$, $n\binom{n-1}{i-1}=i\binom{n}{i}$. Now to the proof. Fact 2 for $(n,i)=(m,k)$ gives $\sum\limits_{m=k}^Nm\binom{m-1}{k-1}=\sum\limits_{m=k}^Nk\binom{m}{k}=k\sum\limits_{m=k+1}^{N+1}\binom{m-1}{(k+1)-1}=kB_{k+1}^{N+1}$. Fact 1 gives $B_{k+1}^{N+1}=\binom{N+1}{k+1}$. Fact 2 for $(n,i)=(N+1,k+1)$ (or a direct computation) gives $(k+1)B_{k+1}^{N+1}=(N+1)B_k^N$. Finally, $\mu=\dfrac{kB_{k+1}^{N+1}}{B_k^N}=k\dfrac{N+1}{k+1}$. Edit The same method yields, for every $i\geqslant0$, $$ \mathrm E(X(X+1)\cdots(X+i))=\frac{k}{k+i+1}(N+1)(N+2)\cdots(N+i+1). $$
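A quick numerical check of the identity for one arbitrary choice of $N$ and $k$ (Python):

    from math import comb

    N, k = 50, 7
    mu = sum(m * comb(m - 1, k - 1) for m in range(k, N + 1)) / comb(N, k)
    print(mu, k * (N + 1) / (k + 1))   # both equal 44.625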
How can the following be calculated? How can the following series be calculated? $$S=1+(1+2)+(1+2+3)+(1+2+3+4)+\cdots+(1+2+3+4+\cdots+2011)$$
Let $S$ be our sum. Then $$S=\binom{2}{2}+\binom{3}{2}+\binom{4}{2} + \cdots + \binom{2012}{2}=\binom{2013}{3}=\frac{2013\cdot 2012\cdot 2011}{3 \cdot 2 \cdot 1}.$$ Justification: We count, in two different ways, the number of ways to choose $3$ numbers from the set $$\{1,2,3,4,\dots, n,n+1\}.$$ (For our particular problem we use $n=2012$.) First Count: It is clear that there are $\binom{n+1}{3}$ ways to choose $3$ numbers from $n+1$ numbers. Second Count: The smallest chosen number could be $1$. Then there are $\binom{n}{2}$ ways to choose the remaining $2$ numbers. Or the smallest chosen number could be $2$, leaving $\binom{n-1}{2}$ choices for the remaining $2$ numbers. Or the smallest chosen number could be $3$, leaving $\binom{n-2}{2}$ choices for the remaining $2$ numbers. And so on, up to smallest chosen number being $n-1$, in which case there are $\binom{2}{2}$ ways to choose the remaining $2$ numbers. Thus the total count is $$\binom{n}{2}+\binom{n-1}{2}+\binom{n-2}{2}+\cdots +\binom{3}{2}+\binom{2}{2}.$$ Comparing the two counts, we find that $$\binom{2}{2}+\binom{3}{2}+\binom{4}{2}+\cdots +\binom{n-1}{2}+\binom{n}{2}=\binom{n+1}{3}.$$ Comment: Similarly, it is easy to see that in general $\sum_{k=r}^n \binom{k}{r}=\binom{n+1}{r+1}.$ These natural binomial coefficient identities give a combinatorial approach to finding general formulas for the sums of consecutive squares, consecutive cubes, and so on.
Trouble counting the number of "ace high" hands in poker I'm trying to count the number of "ace high" hands in a five card poker hand. The solution from my answer key puts the count at 502,860; however, I have an argument for why this number is too high. Please help me understand where my logic is flawed. Instead of coming up with an exact answer for the number of ace high hands I will show an upper bound on the number of ace high hands. First, go through the card deck and remove all four aces leaving a deck of 48 cards. We will use this 48 card deck to form the four card "non ace" part of the "ace high" hand. First, how many ways are there to form any four card hand from a 48 card deck? This is (48 choose 4) = 194,580. Now, not all of these hands when paired with an ace will form an "ace high" hand. For example A Q Q K K would be two pair. In fact, any four card hand with at least two cards of the same rank (e.g. Queen of Spades, Queen of Hearts) will not generate an ace high hand. So let's find the number of such hands and subtract them from 194,580. I believe the number of such hands can be found by first selecting a rank for a pair from these 48 remaining cards, that is, (12 choose 1)--times the number of ways to select two suits for our rank (4 choose 2)--times the number of ways to pick the remaining 2 required cards from 46 remaining cards, that is, (46 choose 2). So, restated, given our 48 card deck we can create a four card hand that contains at least one pair this many ways: (12 choose 1)(4 choose 2) (46 choose 2) = 74,520 [pair rank] [suits of pair] [remaining 2 cards] Thus the number of four card hands that do not include at least one pair is: (48 choose 4) - 74,520 = 120,060 We can pair each of these four card sets with one of our four aces to form the number of five card hands that contain an ace, but not any single pair (or better). This is 120,060 * 4 = 480,240 hands. However, this is already less than 502,860 shown by the key... and I haven't even begun to start subtracting out straights. Clearly I have made a mistake, but what is it?
In your method, "two pair" hands would be subtracted twice, "three of a kind" hands would be subtracted three times, and "full house" hands 5 times.
I need to define a family (one parameter) of monotonic curves I want to define a function family $f_a(x)$ with a parameter $a$ in $(0,1)$, where: For any $a$, $f_a(0) = Y_0$ and $f_a(X_0) = 0$ (see image) For $a = 0.5$, this function is a straight line from $(0,Y_0)$ to $(X_0, 0)$. For $a < 0.5$, up to zero (asymptotically perhaps), I want $f_a$ to be a curve below, and for $a > 0.5$, the curve should be to the other side. I didn't fill the diagram with many examples, but I hope you get the idea. Different values of $a$ always produce a distinct, monotonic curve, below all curves of larger values of $a$, and above all curves for smaller values of $a$. E.g.: when I decrease $a$, the distance of the $(0,0)$ point from the curve decreases, and if I increase $a$, it increases. Sorry for the clumsy description but I hope you got the intuition of what I'm trying to define! Any suggestion of how this function $f_a(x)$ could look like?
How about $$f_a(x) = y_0\left(1-(x/x_0)^{\frac{a}{1-a}}\right)$$ ?
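A small Python sketch (with arbitrary $x_0$, $y_0$) that plots a few members of this family; it makes it easy to see that $a=0.5$ gives the straight line and that changing $a$ moves the curve monotonically from one side to the other:

    import numpy as np
    import matplotlib.pyplot as plt

    x0, y0 = 2.0, 3.0

    def f(x, a):
        return y0 * (1 - (x / x0) ** (a / (1 - a)))

    x = np.linspace(0, x0, 200)
    for a in (0.2, 0.35, 0.5, 0.65, 0.8):
        plt.plot(x, f(x, a), label=f"a = {a}")
    plt.legend()
    plt.show()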
Examples of a mapping between two sets where the fact that the two sets are isomorphic doesn't imply that the mapping is also 1 to 1 I am struggling to come up with an example of two sets $S$ and $T$ and an (onto) mapping $f$, where the fact that $S$ and $T$ are isomorphic does not imply that $f$ is also 1-1. If possible, could you also give an example in which the fact that they are isomorphic would imply that the mapping is 1-1? Thank you!
Hint: Can you do this for finite $T$ and $S$? What happens if $T$ and $S$ are infinite? Think of shift-like maps $\mathbb{N}\rightarrow \mathbb{N}$
What is the cardinality of the set of all topologies on $\mathbb{R}$? This was asked on Quora. I thought about it a little bit but didn't make much progress beyond some obvious upper and lower bounds. The answer probably depends on AC and perhaps also GCH or other axioms. A quick search also failed to provide answers.
Let me give a slightly simplified version of Stefan Geschke's argument. Let $X$ be an infinite set. As in his argument, the key fact we use is that there are $2^{2^{|X|}}$ ultrafilters on $X$. Now given any ultrafilter $F$ on $X$ (or actually just any filter), $F\cup\{\emptyset\}$ is a topology on $X$: the topology axioms easily follow from the filter axioms. So there are $2^{2^{|X|}}$ topologies on $X$. Now if $T$ is a topology on $X$ and $f:X\to X$ is a bijection, there is exactly one topology $T'$ on $X$ such that $f$ is a homeomorphism from $(X,T)$ to $(X,T')$ (namely $T'=\{f(U):U\in T\}$). In particular, since there are only $2^{|X|}$ bijections $X\to X$, there are only at most $2^{|X|}$ topologies $T'$ such that $(X,T)$ is homeomorphic to $(X,T')$. So we have $2^{2^{|X|}}$ topologies on $X$, and each homeomorphism class of them has at most $2^{|X|}$ elements. Since $2^{2^{|X|}}>2^{|X|}$, this can only happen if there are $2^{2^{|X|}}$ different homeomorphism classes.
How to prove $\log n \leq \sqrt n$ over natural numbers? It seems like $$\log n \leq \sqrt n \quad \forall n \in \mathbb{N} .$$ I've tried to prove this by induction where I use $$ \log p + \log q \leq \sqrt p \sqrt q $$ when $n=pq$, but this fails for prime numbers. Does anyone know a proof?
Here is a proof of a somewhat weaker inequality that does not use calculus: Put $m:=\lceil\sqrt{n}\>\rceil$. The set $\{2^0,2^1,\ldots,2^{m-1}\}$ is a subset of the set $\{1,2,\ldots,2^{m-1}\}$; therefore we have the inequality $m\leq 2^{m-1}$ for all $m\geq1$. It follows that $$\log n=2\log\sqrt{n}\leq 2\log m\leq 2(m-1)\log2\leq 2\log2\>\sqrt{n}\ ,$$ where $2\log2\doteq1.386$.
Cantor's completeness principle I hope everyone who has taken a fundamental course in Analysis knows about Cantor's completeness principle. It says that for a nest of closed intervals, the intersection of all the intervals is a single point. I hope I can get an explanation as to why this principle holds good in the case of closed intervals but not in the case of open intervals.
The intersection of all the open intervals centered at $0$ is just $\{0\}$, since $0$ is the only point that is a member of all of them. But the intersection of all the open intervals whose lower boundary is $0$ is empty. (After all, what point could be a member all of them?) And they are nested, in that for any two of them, one is a subset of the other.
Why is it wrong to express $\mathop{\lim}\limits_{x \to \infty}x\sin x$ as $k\mathop{\lim}\limits_{x \to \infty}x$; $\lvert k \rvert \le 1$? Why is it wrong to write $$\mathop{\lim}\limits_{x \to \infty}x\left(\frac{1}{x}\sin x-1+\frac{1}{x}\right)=(0k-1+0)\cdot\mathop{\lim}\limits_{x \to \infty}x,$$ where $\lvert k \rvert \le 1$? And, as an aside, is there an idiom or symbol for compactly representing, in an expression, a number that is always within a range so that "where ..." can be avoided?
You can’t rewrite that comment this way: $\sin x$ is always between $-1$ and $1$, but it isn’t a constant, which is what you’re implying when you pull it outside the limit. You could write $$\lim\limits_{x\to\infty}x\left(\frac{1}{x}\sin x - 1 + \frac{1}{x}\right) = (0-1+0)\cdot\lim\limits_{x\to\infty}x,$$ provided that you explained why $\lim\limits_{x\to\infty}\dfrac{\sin x}{x}=0$. For that you really do need to write an explanation, not an equation. In an elementary course you should give more detail rather than less, so it might look something like this: $\vert \sin x\vert \le 1$ for all real $x$, so $\dfrac{-1}{x} \le \dfrac{\sin x}{x} \le \dfrac{1}{x}$ for all $x>0$, and therefore by the sandwich theorem $$0 = \lim\limits_{x\to\infty}\frac{-1}{x} \le \lim\limits_{x\to\infty}\frac{\sin x}{x} \le \lim\limits_{x\to\infty}\frac{1}{x} = 0$$ and $\lim\limits_{x\to\infty}\dfrac{\sin x}{x}=0$. In a slightly higher-level course you could simply say that $\lim\limits_{x\to\infty}\dfrac{\sin x}{x}=0$ because the numerator is bounded and the denominator increases without bound. But it’s just as easy to multiply it out to get $$\lim\limits_{x\to\infty}(\sin x - x + 1)$$ and argue that $0 \le \sin x + 1 \le 2$ for all $x$, so $-x \le \sin x - x + 1 \le 2-x$ for all $x$, and hence (again by the sandwich theorem) the limit is $-\infty$.
Combinatorics-number of permutations of $m$ A's and at most $n$ B's Prove that the number of permutations of $m$ A's and at most $n$ B's equals $\dbinom{m+n+1}{m+1}$. I'm not sure how to even start this problem.
By summing all possibilities of $n$, we get that the number of permutations $P_n$ satisfies $$P_n = \binom{m+n}{n} + \binom{m+(n-1)}{(n-1)} + \ldots + \binom{m+0}{0} = \sum_{i=0}^n \binom{m + i}{i}$$ Note that $$\binom{a}{b} = \binom{a-1}{b} + \binom{a-1}{b-1}$$ Repeatedly applying this to the last term, we get $$\begin{array}{rcl} \binom{a}{b} &=& \binom{a-1}{b} + \binom{a-1}{b-1} \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-2}{b-2} \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-3}{b-2} + \binom{a-3}{b-3} \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-3}{b-2} + \binom{a-4}{b-3} + \ldots \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-3}{b-2} + \binom{a-4}{b-3} + \ldots + \binom{a-b-1}{0} \\ &=& \sum_{i=0}^b \binom{a-b-1+i}{i} \end{array}$$ Substituting $b$ by $a-b$ we similarly get $$\binom{a}{a-b} = \sum_{i=0}^{a-b} \binom{b-1+i}{i}$$ Replacing $b = m + 1$ and $a = n + m + 1$ we thus get $$\binom{n + m + 1}{n} = \sum_{i=0}^{n} \binom{m+i}{i} = P_n$$
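Not part of the proof, but a quick numerical sanity check of the identity derived above (the ranges for $m$ and $n$ are arbitrary):

```python
from math import comb

# Check sum_{i=0}^{n} C(m+i, i) == C(m+n+1, n) == C(m+n+1, m+1)
# for a range of small m, n (the identity derived above).
for m in range(6):
    for n in range(6):
        lhs = sum(comb(m + i, i) for i in range(n + 1))
        assert lhs == comb(m + n + 1, n) == comb(m + n + 1, m + 1)
print("identity verified for m, n = 0..5")
```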
Intuitive explanation of $(a^b)^c = a^{bc}$ What is an intuitive explanation for the rule that $(a^b)^c = a^{bc}$. I'm trying to wrap my head around it, but I can't really do it.
I will assume that $b$ and $c$ are positive integers and that $a$ is any "number" (it doesn't really matter much what $a$ is...). Suppose I have $b \times c$ copies of the number $a$. I can arrange them into a $b \times c$ rectangular array: i.e., with $b$ rows and $c$ columns. When I multiply all $b \times c$ of these $a$'s together, I get $a^{bc}$. On the other hand, suppose I look at just one column of the array. In this column I have $b$ $a$'s, so the product of all the entries in a column is $a^b$. But now I have $c$ columns altogether, so the product of all the entries is obtained by multiplying the common product of all the entries in a given column by itself $c$ times, or $(a^b)^c$. Thus $a^{bc} = (a^b)^c$. If you want to justify this identity when $b$ and $c$ are other things besides positive integers -- e.g. real numbers, or infinite cardinals -- that's another matter: please ask.
Why is the PA=LU matrix factorization better than the A=LU matrix factorization? While finding A=LU for a matrix, if a zero is encountered in a pivot position then a row exchange is required. However, if the PA=LU form is used then no row exchange is required, and apparently this also requires less computation. What I am not able to understand is how finding a correct permutation matrix involves less effort than doing a row exchange during the A=LU process. Edit: The matrix PA will already have a form in which all rows are in the correct order. What I am not able to understand is why the PA=LU computation is going to be better than the A=LU computation.
Not sure if I understand your point. The purpose of a permutation matrix is exactly to do the row exchange for you. So consider $\bf PA = LU$ as the more general form of $\bf A = LU$, in that it also takes care of row exchanges when they are needed. Let's recap for my own sake: By performing some row operations on $\bf A$ (a process called elimination), you want to end up with $\bf U$. You can represent each row operation through a separate matrix, so say you need two row operations, $\bf E_1$ followed by $\bf E_2$, before you get $\bf U$; then you have $\bf E_2E_1A = U \Rightarrow \bf L = (E_2E_1)^{-1} = E_1^{-1}E_2^{-1}$. But the neat thing is that you don't need matrix inversion to find those inverses. Say for example $\bf A$ is $2 \times 2$ and $\bf E_1$ is the operation "subtract 2 times row 1 from row 2"; then $\bf E_1^{-1}$ is just "add 2 times row 1 to row 2". In matrix language: $$ {\bf E_1} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} \Rightarrow {\bf E_1^{-1} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} } $$ This translates into computational ease, because all you have to do is change the sign on the non-diagonal elements to get the inverse. Now on to permutation matrices. A permutation matrix is just an identity matrix with two of its rows exchanged. And even more conveniently, its inverse is just itself (because you get the original matrix back once you reapply the row exchange a second time), i.e. $\bf PPA = IA = A \Rightarrow P^{-1} = P$. So if row exchanges are needed, we add the additional step and write $\bf PA = LU$. Once again, a computational cakewalk.
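If it helps to see the bookkeeping concretely, here is a minimal NumPy sketch (not production code) of elimination with partial pivoting; the example matrix is made up, chosen so that the very first pivot would be zero without a row exchange:

```python
import numpy as np

def lu_with_pivoting(A):
    """Return P, L, U with P @ A == L @ U (partial pivoting)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # Swap in the row with the largest pivot candidate.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate below the pivot, storing the multipliers in L.
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)

A = np.array([[0., 2., 1.],   # leading pivot is 0: plain A = LU breaks here
              [1., 1., 1.],
              [2., 1., 3.]])
P, L, U = lu_with_pivoting(A)
print(np.allclose(P @ A, L @ U))  # True
```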
$\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$? Viewing $\mathbb{Z}$ and $\mathbb{Q}$ as additive groups, I have an idea to show that $\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$. You can take $a/n+\mathbb{Z}$ where $(a,n)=1$, and this element has order $n$. Why would such an element exist in any subgroup $H$ of order $n$? If not, you could reduce every representative, and then every element would have order less than $n$, but where is the contradiction?
We can approach the problem using elementary number theory. Look first at the subgroup $K$ of $\mathbb{Q}/\mathbb{Z}$ generated by (the equivalence class of) $q/r$, where $q$ and $r$ are relatively prime. Since $q$ and $r$ are relatively prime, there exist integers $x$ and $y$ such that $qx+ry=1$. Divide both sides by $r$. We find that $$x\frac{q}{r}+y=\frac{1}{r}.$$ Since $y$ is an integer, it follows that $\frac{1}{r}$ is congruent, modulo $1$, to $x\frac{q}{r}$. It follows that (the equivalence class of) $1/r$ is in $K$, and therefore generates $K$. Now let $H$ be a subgroup of $\mathbb{Q}/\mathbb{Z}$ of order $n$. Let $h$ be an element of $H$. If $h$ generates $H$, we are finished. Otherwise, $h$ generates a proper subgroup of $H$. By the result above, we can assume that $h$ is (the equivalence class of) some $1/r_1$, and that there is some $1/b$ in $H$ such that $b$ does not divide $r_1$. Let $d=\gcd(r_1,b)$. There are integers $x$ and $y$ such that $r_1x+by=d$. Divide through by $r_1b$. We find that $$x\frac{1}{b}+y\frac{1}{r_1}=\frac{d}{r_1b}.$$ It follows that (the equivalence class of) $d/(r_1b)$ is in $H$. But $r_1b/d$ is the least common multiple of $r_1$ and $b$. Call this least common multiple $r_2$. Then since $r_1$ and $b$ both divide $r_2$, the subgroup of $H$ generated by (the equivalence class of) $1/r_2$ contains both $1/r_1$ and $1/b$. If $1/r_2$ generates all of $H$, we are finished. Otherwise, there is a $1/b$ in $H$ such that $b$ does not divide $r_2$. Continue.
Given a function $f(x)$, is there an analytic way to determine which integer values of $x$ give an integer value of $f(x)$? Basically, I have some function $f(x)$ and I would like to figure out which integer values of $x$ make it such that $f(x)$ is also an integer. I know that I could use brute force and try all integer values of $x$ in the domain, but I want to analyze functions with large (possibly infinite) domains so I would like an analytical way to determine the values of $x$. The function itself will always be well-behaved and inversely proportional to the variable. The domain will be restricted to the positive real axis. I thought about functions like the Dirac delta function but that only seemed to push the issue one step further back. I get the feeling that I am either going to be told that there is no way to easily determine this, or that I am misunderstanding something fundamental about functions, but I thought I'd let you all get a crack at it at least.
It's not just "some function", if it's inversely proportional to the variable $x$ that means $f(x) = c/x$ for some constant $c$. If there is any $x$ such that $x$ and $c/x$ are integers, that means $c = x c/x$ is an integer. The integer values of $x$ for which $c/x$ is an integer are then the factors of $c$. If the prime factorization of $c$ is $p_1^{n_1} \ldots p_m^{n_m}$ (where $p_i$ are primes and $n_i$ positive integers), then the factors of $c$ are $p_1^{k_1} \ldots p_m^{k_m}$ where $k_i$ are integers with $0 \le k_i \le n_i$ for each $i$.
Finding $p^\textrm{th}$ roots in $\mathbb{Q}_p$? So assume we are given some $a\in\mathbb{Z}_p^\times$ and we want to figure out if $X^p-a$ has a root in $\mathbb{Q}_p$. We know that such a root must be unique, because given two such roots $\alpha,\beta$, the quotient $\alpha/\beta$ would need to be a non-trivial $p^\textrm{th}$ root of unity and $\mathbb{Q}_p$ does not contain any. Now we can't apply Hensel, which is the canonical thing to do when looking for roots in $\mathbb{Q}_p$. What other approaches are available?
Let's take $\alpha\equiv 1\mod p^2$. Then $\alpha = 1 + p^2\beta\in\mathbb{Z}_p$, where $\beta\in\mathbb{Z}_p$. What we want to do is make Hensel's lemma work for us, after changing the equation a bit. We have $f(x) = x^p - \alpha = x^p - (1 + p^2\beta)$. We see that if $x$ is to be a solution, it must satisfy $x\equiv 1\mod p$, so $x = 1 + py$ for some $y\in\mathbb{Z}_p$. Now we have a new polynomial equation: $$ f(y) = (1 + py)^p - (1 + p^2\beta) = \sum_{i = 0}^p \begin{pmatrix} p \\ i\end{pmatrix}(py)^i - (1 + p^2\beta), $$ which reduces to $$ f(y) = \sum_{i = 1}^p \begin{pmatrix} p \\ i\end{pmatrix}(py)^i - p^2\beta. $$ So long as $p\neq 2$, we can set this equal to zero and cancel a $p^2$ from each term, and get $$ 0 = p^2 y + \begin{pmatrix} p \\ 2\end{pmatrix}(py)^2 + \ldots + (py)^p - p^2\beta = y + \begin{pmatrix} p \\ 2\end{pmatrix} y^2 + \ldots + p^{p-2}y^p - \beta. $$ Mod $p$, we can solve this equation: $$ y + \begin{pmatrix} p \\ 2\end{pmatrix} y^2 + \ldots + p^{p-2}y^p - \beta \equiv y - \beta \equiv y - \beta_0\mod p, $$ where $\beta = \beta_0 + \beta_1 p + \beta_2 p^2 + \ldots$ by $y = \beta_0$. Mod $p$, our derivative is always nonzero: $$ \frac{d}{dy}\left[y - \beta_0\right] \equiv 1\mod p, $$ so we can use Hensel's lemma and lift our modified solution mod $p$ to a solution in $\mathbb{Q}_p$. Therefore, if $\alpha\in 1 + p^2\mathbb{Z}_p$ and $p\neq 2$, there exists a $p$th root of $\alpha$ in $\mathbb{Q}_p$.
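As a concrete sanity check (not a proof, and with example parameters $p=5$, $\alpha=51\equiv 1 \bmod 25$), one can search for $p$-th roots modulo a prime power:

```python
# Look for x with x^p = alpha (mod p^k), where alpha = 1 (mod p^2) and p is odd.
p, k = 5, 4
alpha = 1 + 2 * p**2          # 51, congruent to 1 mod p^2
mod = p**k                    # 625
roots = [x for x in range(mod) if pow(x, p, mod) == alpha % mod]
print(roots)                  # nonempty, matching the existence statement above
print(all(x % p == 1 for x in roots))   # every root is = 1 (mod p)
```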
Projective Tetrahedral Representation I can embed $A_4$ as a subgroup into $PSL_2(\mathbb{F}_{13})$ (in two different ways in fact). I also have a reduction mod 13 map $$PGL_2(\mathbb{Z}_{13}) \to PGL_2(\mathbb{F}_{13}).$$ My question is: Is there a subgroup of $PGL_2(\mathbb{Z}_{13})$ which maps to my copy of $A_4$ under the above reduction map? (I know that one may embed $A_4$ into $PGL_2(\mathbb{C})$, but I don't know about replacing $\mathbb{C}$ with $\mathbb{Z}_{13}$).
Yes. Explicitly one has: $ \newcommand{\ze}{\zeta_3} \newcommand{\zi}{\ze^{-1}} \newcommand{\vp}{\vphantom{\zi}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\GL}{\operatorname{GL}} $ $$ \SL(2,3) \cong G_1 = \left\langle \begin{bmatrix} 0 & 1 \vp \\ -1 & 0 \vp \end{bmatrix}, \begin{bmatrix} \ze & 0 \\ -1 & \zi \end{bmatrix} \right\rangle \cong G_2 = \left\langle \begin{bmatrix} 0 & 1 \vp \\ -1 & 0 \vp \end{bmatrix}, \begin{bmatrix} 0 & -\zi \\ 1 & -\ze \end{bmatrix} \right\rangle $$ and $$G_1 \cap Z(\GL(2,R)) = G_2 \cap Z(\GL(2,R)) = Z = \left\langle\begin{bmatrix}-1&0\\0&-1\end{bmatrix}\right\rangle \cong C_2$$ and $$G_1/Z \cong G_2/Z \cong A_4$$ This holds over any ring R which contains a primitive 3rd root of unity, in particular, in the 13-adics, $\mathbb{Z}_{13}$. The first representation has rational (Brauer) character and Schur index 2 over $\mathbb{Q}$ (but Schur index 1 over the 13-adics $\mathbb{Q}_{13}$), and the second representation is the unique (up to automorphism of $A_4$) 2-dimensional projective representation of $A_4$ with irrational (Brauer) character. You can verify that if $G_i = \langle a,b\rangle$, then $a^2 = [a,a^b] = -1$, $ a^{(b^2)} = aa^b$, and $b^3 = 1$. Modulo $-1$, one gets the defining relations for $A_4$ on $a=(1,2)(3,4)$ and $b=(1,2,3)$.
$\log_{12} 2=m$ what's $\log_6 16$ in function of $m$? Given $\log_{12} 2=m$ what's $\log_6 16$ in function of $m$? $\log_6 16 = \dfrac{\log_{12} 16}{\log_{12} 6}$ $\dfrac{\log_{12} 2^4}{\log_{12} 6}$ $\dfrac{4\log_{12} 2}{\log_{12} 6}$ $\dfrac{4\log_{12} 2}{\log_{12} 2+\log_{12} 3}$ $\dfrac{4m}{m+\log_{12} 3}$ And this looks like a dead end for me.
Here is a zero-cleverness solution: write everything in terms of the natural logarithm $\log$ (or any other logarithm you like). Recall that $\log_ab=\log b/\log a$. Hence your hypothesis is that $m=\log2/\log12$, or $\log2=m(\log3+2\log2)$, and you look for $k=\log16/\log6=4\log2/(\log2+\log3)$. Both $m$ and $k$ are functions of the ratio $r=\log3/\log2$, hence let us try this. One gets $1=m(r+2)$ and one wants $k=4/(1+r)$. Well, $r=m^{-1}-2$ hence $k=4/(m^{-1}-1)=4m/(1-m)$. An epsilon-cleverness solution is to use logarithms of base $2$ from the outset and to mimic the proof above (the algebraic manipulations become a tad simpler).
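A quick floating-point check of the final formula $\log_6 16 = 4m/(1-m)$:

```python
from math import log

m = log(2, 12)                 # log base 12 of 2
lhs = log(16, 6)               # log base 6 of 16
rhs = 4 * m / (1 - m)
print(lhs, rhs)                # both approximately 1.5474
```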
About Borel's lemma Borel's lemma states that for $x_0 \in \mathbf{R}$ and for a real sequence $(a_n)_{n \in \mathbf{N_0}}$ there exists a smooth function $f: \mathbf{R} \rightarrow \mathbf{R}$ such that $f^{(n)}(x_0)=a_n$ for $n \in \mathbf{N_0}$. However, is it true that for $x_1, x_2 \in \mathbf{R}$, $x_1 \neq x_2$, and for real sequences $(a_n)_{n \in \mathbf{N_0}}, (b_n)_{n \in \mathbf{N_0}}$ there exists a smooth function $f: \mathbf{R} \rightarrow \mathbf{R}$ such that $f^{(n)}(x_1)=a_n$, $f^{(n)}(x_2)=b_n$ for $n \in \mathbf{N_0}$? Thanks
Since Giuseppe has already given an answer, I will give the details of the extension. Let $A\subset \mathbb R$ be a set consisting of isolated points, and for each $a\in A$ let $\{x(a)_n\}$ be a sequence of real numbers. We can find a smooth function $f$ such that for all $a\in A$ and $n\in\mathbb N$, we have $f^{(n)}(a)=x(a)_n$. Since $\mathbb R$ is separable, this set has to be countable, hence we can write $A=\{a_n,n\in\mathbb N\}$. We write $b_{n,k}=x(a_n)_k$ for each $n$ and $k$. For each $n$, we choose $r_n>0$ such that $\left]a_n-3r_n,a_n+3r_n\right[\cap A=\{a_n\}$. Let $g_n$ be a bump function, $g_n=1$ on $\left]-r_n,r_n\right[$ and $\mathrm{supp}\,g_n\subset \left[-2r_n,2r_n\right]$. Put $$f_{n,j}(x)= \frac{b_{n,j}}{j!}(x-a_n)^jg_n\left(\frac{x-a_n}{\varepsilon_{n,j}}\right)$$ where $\varepsilon_{n,k}=\frac 1{1+|b_{n,k}|}\frac 1{4^kM_{n,k}}$, with $\displaystyle M_{n,k}:=\max_{0\leq j\leq k}\sup_{x\in\mathbb R}|g_n^{(j)}(x)|$. Note that $f_{n,k}$ has compact support, and putting, for all $n$, $\displaystyle h_n(x):=\sum_{k=0}^{+\infty}f_{n,k}(x)$, the function $h_n$ is smooth. Indeed, for $n$ fixed and $k\geq d+1$ we have $\sup_{x\in\mathbb R}|f_{n,k}^{(d)}|\leq \frac 1{2^k}$, hence the series of each derivative converges normally on the whole real line. A (boring but not hard) computation which uses the Leibniz rule gives $h_n^{(k)}(a_n)=b_{n,k}$. Now, put $$f(x):=\sum_{n=0}^{+\infty}\sum_{j=0}^{+\infty}f_{n,j}(x).$$ Since the supports of the $h_n$ are pairwise disjoint, the series in $n$ converges to a smooth function. Note that we cannot extend the result to sets $S$ which contain a non-isolated point, because the continuity of the derivatives puts a restriction on the sequences $\{x(s)_n\}$ for $s$ in a neighborhood of the non-isolated point. Namely, if $s_0$ is a non-isolated point and $\{s_n\}\subset S$ is a sequence which converges to $s_0$, then the sequence $\{x(s_n)_0\}$ has to converge to $x(s_0)_0$ (hence we can't get all the sequences).
Separability of a group and its dual Here is the following exercise: Let $G$ be an abelian topological group. $G$ has a countable topological basis iff its dual $\hat G$ has one. I am running into difficulties with the compact-open topology while trying this. Any help?
Let $G$ be a locally compact abelian topological group. Suppose first that $G$ has a countable topological basis $(U_n)_{n \in \mathbb{N}}$; we show that $\hat{G}$ has one. For every finite subset $I$ of $\mathbb{N}$, let $O_I=\cup_{i \in I}U_i$. We define $B:=\{\bar{O_I} \mid \bar{O_I}$ is compact $\}$. $B$ is countable, because its cardinality is at most the cardinality of the set of finite subsets of $\mathbb{N}$, which is countable. $U(1)$ has a countable topological basis $(V_n)_{n \in\mathbb{N}}$. Let $O(K,V)=\{ \chi \in \hat{G} \mid \chi(K) \subset V\}$, with $K$ compact in $G$, $V$ open in $U(1)$. $O(K,V)$ is open in the compact-open topology on $\hat{G}$. Let $B'=\{O(K,V_n)\mid K \in B, n \in \mathbb{N}\}$. $B'$ is countable and is a topological basis of $\hat{G}$. Conversely, if $\hat{G}$ has a countable topological basis, then so does $\hat{\hat{G}}$ by the first part. But $\hat{\hat{G}}=G$ (Pontryagin duality), so $G$ has a countable topological basis.
If $A$ and $B$ are positive-definite matrices, is $AB$ positive-definite? I've managed to prove that if $A$ and $B$ are positive definite then $AB$ has only positive eigenvalues. To prove $AB$ is positive definite, I also need to prove $(AB)^\ast = AB$ (so $AB$ is Hermitian). Is this statement true? If not, does anyone have a counterexample? Thanks, Josh
EDIT: Changed example to use strictly positive definite $A$ and $B$. To complement the nice answers above, here is a simple explicit counterexample: $$A=\begin{bmatrix}2 & -1\\\\ -1 & 2\end{bmatrix},\qquad B = \begin{bmatrix}10 & 3\\\\ 3 & 1\end{bmatrix}. $$ Matrix $A$ has eigenvalues (1,3), while $B$ has eigenvalues (0.09, 10). Then, we have $$AB = \begin{bmatrix} 17 & 5\\\\ -4 & -1\end{bmatrix}$$ Now, pick vector $x=[0\ \ 1]^T$, which shows that $x^T(AB)x = -1$, so $AB$ is not positive definite.
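A NumPy sanity check of this counterexample:

```python
import numpy as np

A = np.array([[2., -1.], [-1., 2.]])
B = np.array([[10., 3.], [3., 1.]])

print(np.linalg.eigvalsh(A))   # [1., 3.]               -> A is positive definite
print(np.linalg.eigvalsh(B))   # approx [0.0917, 10.91] -> B is positive definite
AB = A @ B
x = np.array([0., 1.])
print(AB)                      # [[17., 5.], [-4., -1.]]
print(x @ AB @ x)              # -1.0, so AB is not positive definite
```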
Determine a point Given the triangle $ABC$ with vertices $A(4,2)$, $B(-2,1)$, $C(3,-2)$, find a point $D$ such that the following equality holds: $$5\vec{AD}=2\vec{AB}-3\vec{AC}$$
The idea is a geometric construction. First of all you will need to find the point $E$: use that $E$ lies on the line $p(A,B)$ and that $\left\vert AB \right\vert = \left\vert BE \right\vert$. Since $p(A,C)\parallel p(F,E)$ we may write the equation $\frac{y_C-y_A}{x_C-x_A}=\frac{y_E-y_F}{x_E-x_F}$, and $\left\vert EF \right\vert=3 \left\vert AC \right\vert$, so we may find the point $F$. Since $\left\vert AF \right\vert=5 \left\vert AD \right\vert$ we may write the equations $x_D=\frac{x_F+4x_A}{5}$ and $y_D=\frac{y_F+4y_A}{5}$.
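For comparison, here is a direct coordinate computation (independent of the construction above), reading the condition simply as $\vec{AD}=\tfrac15\left(2\vec{AB}-3\vec{AC}\right)$:

```python
import numpy as np

A = np.array([4., 2.])
B = np.array([-2., 1.])
C = np.array([3., -2.])

AB = B - A            # (-6, -1)
AC = C - A            # (-1, -4)
AD = (2 * AB - 3 * AC) / 5
D = A + AD
print(D)              # [2.2  4. ]  i.e. D = (11/5, 4)
```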
questions about residue Let $f(z)$ be a rational function on $\mathbb{C}$. If the residues of $f$ at $z=0$ and $z=\infty$ are both $0$, is it true that $\oint_{\gamma} f(z)\mathrm dz=0$ ($\gamma$ is a closed curve in $\mathbb{C}$)? Thanks.
In fact, to have $\oint_\gamma f(z)\ dz =0$ for all closed contours $\gamma$ that don't pass through a pole of $f$, what you need is that the residues at all the poles of $f$ are 0. Since the sum of the residues at all the poles (including $\infty$) is always 0, it's necessary and sufficient that the residues at all the finite poles are 0.
When is the preimage of prime ideal is not a prime ideal? If $f\colon R\to S$ is a ring homomorphism such that $f(1)=1$, it's straightforward to show that the preimage of a prime ideal is again a prime ideal. What happens though if $f(1)\neq 1$? I use the fact that $f(1)=1$ to show that the preimage of a prime ideal is proper, so I assume there is some example where the preimage of a prime ideal is not proper, and thus not prime when $f(1)\neq 1$? Could someone enlighten me on such an example?
Consider the "rng" homomorphism $f:\mathbb{Z}\to\mathbb{Q}$ where $f(n)=0$; then $(0)$ is a prime ideal of $\mathbb{Q}$, but $f^{-1}(0)=\mathbb{Z}$ is not proper, hence not prime. A different example would be $f:\mathbb{Z}\to\mathbb{Z}\oplus\mathbb{Z}$ where $f(1)=(1,0)$; then for any prime ideal $P\subset \mathbb{Z}$, we have that $I=\mathbb{Z}\oplus P$ is a prime ideal of $\mathbb{Z}\oplus\mathbb{Z}$, but $f^{-1}(I)=\mathbb{Z}$ is not proper, hence not prime.
Can we use Peano's axioms to prove that integer = prime + integer? Every integer greater than 2 can be expressed as sum of some prime number greater than 2 and some nonegative integer....$n=p+m$. Since 3=3+0; 4=3+1; 5=3+2 or 5=5+0...etc it is obvious that statement is true.My question is: Can we use Peano's axioms to prove this statement (especially sixth axiom which states "For every natural number $n$, $S(n)$ is a natural number.")?
Yes, we can use the Peano axioms to prove that integer = prime + integer. Think of $0 + a = a$.
Normalizer of the normalizer of the sylow $p$-subgroup If $P$ is a Sylow $p$-subgroup of $G$, how do I prove that normalizer of the normalizer $P$ is same as the normalizer of $P$ ?
Another proof We have that $P\in\text{Syl}_p(\mathbf{N}_G(P))$ and $\mathbf{N}_G(P)\trianglelefteq \mathbf{N}_G(\mathbf{N}_G(P))$. By Frattini's Argument: $$\mathbf{N}_G(\mathbf{N}_G(P))=\mathbf{N}_{\mathbf{N}_G(\mathbf{N}_G(P\,))}(P)\cdot\mathbf{N}_G(P)=\mathbf{N}_G(P),$$ where the last equality holds because $\mathbf{N}_{\mathbf{N}_G(\mathbf{N}_G(P\,))}(P)\subseteq \mathbf{N}_G(P)$.
rotation vector If $t(t),n(t),b(t)$ are rotating, right-handed frame of orthogonal unit vectors. Show that there exists a vector $w(t)$ (the rotation vector) such that $\dot{t} = w \times t$, $\dot{n} = w \times n$, and $\dot{b} = w \times b$ So I'm thinking this is related to Frenet-Serret Equations and that the matrix of coefficient for $\dot{t}, \dot{n}, \dot{b}$ with respect to $t,n,b$ is skew-symmetric. Thanks!
You have sufficient information to compute it yourself! :) Suppose that $w=aT+bN+cB$, with $a$, $b$ and $c$ some functions. Then you want, for example, that $$\kappa N = T' = w\times T = (aT+bN+cB)\times T=b N\times T+cB\times T=-bB+cN.$$ Since $\{T,N,B\}$ is a basis, this gives you some information about the coefficients. Can you finish?
The sum of a polynomial over a boolean affine subcube Let $P:\mathbb{Z}_2^n\to\mathbb{Z}_2$ be a polynomial of degree $k$ over the boolean cube. An affine subcube inside $\mathbb{Z}_2^n$ is defined by a basis of $k+1$ linearly independent vectors and an offset in $\mathbb{Z}_2^n$. [See "Testing Low-Degree Polynomials over GF(2)" by Noga Alon, Tali Kaufman, Michael Krivelevich, Simon Litsyn, Dana Ron - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.1235 - for more details] Why does taking such a subcube and evaluating the sum of $P$ on all $2^{k+1}$ elements of it, always results in zero ?
Let the coordinates be $z_1,z_2,\ldots,z_n$. It suffices to do the case, when $P$ is a monomial, say $P=z_{i_1}z_{i_2}\cdots z_{i_k}$. Let's use induction on $k$. If $k=0$, then $P=1$, and the claim is clear. In the general case let us consider the coordinates $z_{i_j},1\le j\le k$. If all these take only a single value on the affine subcube, then the restriction of $P$ to the subcube is a constant, and the claim holds. On the other hand, if one of these coordinates, say $z_{i_m}$, takes both values within the subcube, then $P$ obviously vanishes identically in the zero-set of $z_{i_m}=0$, so we need to worry about the restriction of $P$ to the affine hyperplane $H_m$ determined by the equation $z_{i_m}=1$. The intersection of the subcube and $H_m$ will be another affine subcube of dimension one less, i.e. at most $k$. Fortunately also the restriction of $P$ to that smaller cube coincides with that of the monomial $P/z_{i_m}$ of degree $k-1$. Therefore the induction hypothesis applies, and we are done. [Edit:] The logic of the inductive step was a bit unclear in the first version. I think that it is clearer to first restrict to a smaller cube, and then observe that the degree also goes down. Not the other way around. [/Edit] Remark: In coding theory this is a standard duality property of the so called Reed-Muller codes. The polynomial $P$, when evaluated at all the points of $\mathbf{Z}_2^n$, gives a word of the code $RM(k,n)$. The characteristic function of the affine hypercube is of degree $n-k-1$, and is thus a word of the dual code $RM(n-k-1,n)$ that is also known to be equal to the dual: $RM(n-k-1,n)=RM(k,n)^\perp$. The duality means that these two functions both take value $=1$ at an even number of points, and the claim follows.
How to prove the sum of squares is minimum? Given $n$ nonnegative values. Their sum is $k$. $$ x_1 + x_2 + \cdots + x_n = k $$ The sum of their squares is defined as: $$ x_1^2 + x_2^2 + \cdots + x_n^2 $$ I think that the sum of squares is minimum when $x_1 = x_2 = \cdots = x_n$. But I can't figure out how to prove it. Can anybody help me on this? Thanks.
You can use Lagrange multipliers. We want to minimize $\sum_{i=1}^{n} x_{i}^{2}$ subject to the constraint $\sum_{i=1}^{n} x_{i} = k$. Set $J= \sum x_{i}^{2} + \lambda\sum_{i=1}^{n} x_{i}$. Then $\frac{\partial J}{\partial x_i}=0$ implies that $x_{i} = -\lambda/2$. Substituting this back into the constraint gives $\lambda = -2k/n$. Thus $x_{i} = k/n$, as you thought.
Interesting but elementary properties of the Mandelbrot Set I suppose everyone is familiar with the Mandelbrot set. I'm teaching a course right now in which I am trying to convey the beauty of some mathematical ideas to first year students. They basically know calculus but not much beyond. The Mandelbrot set is certainly fascinating in that you can zoom in and get an incredible amount of detail, all out of an analysis of the simple recursion $z\mapsto z^2+c$. So my plan is to show them a movie of a deep fractal zoom, and go over the definition of the Mandelbrot set. But I'd like to also show them something mathematically rigorous, and the main interesting properties I know about the Mandelbrot set are well beyond the scope of the course. I could mention connectedness, which is of course a seminal result, but that's probably not that interesting to someone at their level. So my question is whether anyone has any ideas about an interesting property of the Mandelbrot set that I could discuss at the calculus level, hopefully including an actual calculation or simple proof.
A good candidate: the proof that once $|z| > 2$, the recursion $z \mapsto z^2 + c$ takes it off to infinity. This is the standard "escape radius" fact behind the pictures, and it only needs the triangle inequality.
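If you want something the students can run themselves, here is a tiny illustration (the parameter $c$ is an arbitrary example): once an iterate passes $2$ in absolute value, the orbit blows up within a few more steps.

```python
# Iterate z -> z^2 + c for a parameter c outside the Mandelbrot set
# and watch |z| explode once it passes 2.
c = 0.5 + 0.5j          # an arbitrary example point
z = 0
for n in range(20):
    z = z * z + c
    print(n, abs(z))
    if abs(z) > 2:
        print("escaped: from here on |z| can only grow")
        break
```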
Do real matrices always have real eigenvalues? I was trying to show that orthogonal matrices have eigenvalues $1$ or $-1$. Let $u$ be an eigenvector of $A$ (orthogonal) corresponding to eigenvalue $\lambda$. Since orthogonal matrices preserve length, $ \|Au\|=|\lambda|\cdot\|u\|=\|u\|$. Since $\|u\|\ne0$, $|\lambda|=1$. Now I am stuck to show that lambda is only a real number. Can any one help with this?
It depends whether you are working with vector spaces over the real numbers or over the complex numbers. Over the complex numbers the answer is no: a real matrix can have non-real eigenvalues, and for an orthogonal matrix the complex eigenvalues have modulus $1$ (for example, the rotation $\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ has eigenvalues $\pm i$). Over the real numbers every eigenvalue is real by definition, but a real matrix need not have any eigenvalues at all, so your argument shows only that every real eigenvalue of an orthogonal matrix is $\pm 1$.
Sequence sum question: $\sum_{n=0}^{\infty}nk^n$ I am very confused about how to compute $$\sum_{n=0}^{\infty}nk^n.$$ Can anybody help me?
If you know the value of the geometric series $\sum\limits_{n=0}^{+\infty}x^n$ at every $x$ such that $|x|<1$ and if you know that for every nonnegative integer $n$, the derivative of the polynomial function $x\mapsto x^n$ is $x\mapsto nx^{n-1}$, you might get an idea (and a proof) of the value of the series $\sum\limits_{n=0}^{+\infty}nx^{n-1}$, which is $x^{-1}$ times what you are looking for.
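A quick numerical check of the resulting closed form $\sum_{n\ge 0} nk^n = k/(1-k)^2$ at one sample value of $k$ with $|k|<1$:

```python
# Compare a partial sum of sum_{n>=0} n*k^n with k/(1-k)^2 for |k| < 1.
k = 0.3
partial = sum(n * k**n for n in range(200))
closed_form = k / (1 - k)**2
print(partial, closed_form)     # both approximately 0.6122448...
```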
Group of order 15 is abelian How do I prove that a group of order 15 is abelian? Is there any general strategy to prove that a group of particular order (composite order) is abelian?
(Without: Sylow, Cauchy, semi-direct products, cyclic $G/Z(G)$ argument, $\gcd(n,\phi(n))=1$ argument. Just Lagrange and the class equation.) Firstly, if $|G|=pq$, with $p,q$ distinct primes, say wlog $p>q$, then $G$ can't have $|Z(G)|=p,q$, because otherwise there's no way to accomodate the centralizers of the noncentral elements between the center and the whole group (recall that, for all such $x$, it must strictly hold $Z(G)<C_G(x)<G$). Next, if in addition $q\nmid p-1$, then a very simple counting argument suffices to rule out the case $|Z(G)|=1$. In fact, if $Z(G)$ is trivial, then the class equation reads: $$pq=1+kp+lq \tag 1$$ where $k$ and $l$ are the number of conjugacy classes of size $p$ and $q$, respectively. Now, there are exactly $lq$ elements of order $p$ (they are the ones in the conjugacy classes of size $q$). Since each subgroup of order $p$ comprises $p-1$ elements of order $p$, and two subgroups of order $p$ intersect trivially, then $lq=m(p-1)$ for some positive integer $m$ such that $q\mid m$ (because by assumption $q\nmid p-1$). Therefore, $(1)$ yields: $$pq=1+kp+m'q(p-1) \tag 2$$ for some positive integer $m'$; but then $q\mid 1+kp$, namely $1+kp=nq$ for some positive integer $n$, which plugged into $(2)$ yields: $$p=n+m'(p-1) \tag 3$$ In order for $m'$ to be a positive integer, it must be $n=1$ (which in turn implies $m'=1$, but this is not essential here). So, $1+kp=q$: contradiction, because $p>q$. So we are left with $|Z(G)|=pq$, namely $G$ abelian (and hence, incidentally, cyclic).
Diverging sequence I can't understand diverging sequences. How can I prove that $a_n=1/n^2-\sqrt{n}$ is diverging? Where to start? What picture should I have in my mind? I tried to use $\exists z \forall n^* \exists n\ge n^*: |a_n-A|\ge z$, but how should I see this? And how can I solve the question with this property?
Now I got this: $|a_n-A| \ge \epsilon$ $\frac{1}{n^2} - \sqrt{n} \ge \epsilon + |A|$ Suppose $u=\sqrt{n}$ ($u \ge 0$) $u^{-4}-u \ge \epsilon + |A|$ $u^{-4} \ge u^{-4} - u$ $u^{-4} \ge \epsilon + |A|$ $u \ge (\epsilon + |A|)^\frac{-1}{4}$ $n \ge \frac{1}{\sqrt{\epsilon + |A|}}$ And what may I conclude now?
Cutting a Möbius strip down the middle Why does the result of cutting a Möbius strip down the middle lengthwise have two full twists in it? I can account for one full twist--the identification of the top left corner with the bottom right is a half twist; similarly, the top right corner and bottom left identification contributes another half twist. But where does the second full twist come from? Explanations with examples or analogies drawn from real life much appreciated. edit: I'm pasting J.M.'s Mathematica code here (see his answer), modified for version 5.2. twist[{f_, g_}, a_, b_, u_] := {Cos[u] (a + f Cos[b u] - g Sin[b u]), Sin[u] (a + f Cos[b u] - g Sin[b u]), g Cos[b u] + f Sin[b u]}; With[{a = 3, b = 1/2, f = 1/2}, Block[{$DisplayFunction = Identity}, g1 = ParametricPlot3D[Evaluate[Append[twist[{f - v, 0}, a, b, u], {EdgeForm[], FaceForm[SurfaceColor[Red], SurfaceColor[Blue]]}]], {u, 0, 2 Pi}, {v, 0, 2 f}, Axes -> None, Boxed -> False]; g2 = ParametricPlot3D[Evaluate[Append[twist[{f - v, 0}, a, b, u], EdgeForm[]]], {u, 0, 4 Pi}, {v, 0, 2 f/3}, Axes -> None, Boxed -> False]; g3 = ParametricPlot3D[Evaluate[Append[twist[{f - v, 0}, a, b, u], {EdgeForm[], FaceForm[SurfaceColor[Red], SurfaceColor[Blue]]}]], {u, 0, 2 Pi}, {v, 2 f/3, 4 f/3}, Axes -> None, Boxed -> False, PlotPoints -> 105]]; GraphicsArray[{{g1, Show[g2, g3]}}]];
Observe that the boundary of a Möbius strip is a circle. When you cut, you create more boundary; this is in fact a second circle. During this process, the Möbius strip loses its non-orientability. Make two Möbius strips with paper and some tape. Cut one and leave the other uncut. Now take each and draw a line down the middle. The line will come back and meet itself on the Möbius strip; on the cut Möbius strip, it won't.
How to calculate point y with given point x of an angled line I dropped out of school too early I guess, but I bet you guys can help me here. I've got a sloped line starting from point a(0|130) and ending at b(700|0). I need an equation to calculate the y-coordinate when the point x is given, e.g. 300. Can someone help me please? Sorry for asking such a dumb question, can't find any answer here, probably just too silly to get the math slang ;)
You want the two point form of a linear equation. If your points are $(x_1,y_1)$ and $(x_2,y_2)$, the equation is $y-y_1=(x-x_1)\frac{y_2-y_1}{x_2-x_1}$. In your case, $y=-\frac{130}{700}(x-700)$
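In code (the value $x=300$ is the one from the question):

```python
def y_on_line(x, p1=(0, 130), p2=(700, 0)):
    """Two-point form: y - y1 = (x - x1) * (y2 - y1) / (x2 - x1)."""
    (x1, y1), (x2, y2) = p1, p2
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

print(y_on_line(300))   # 74.285..., i.e. y = 130 - (130/700) * 300
```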
Why is the zero factorial one i.e ($0!=1$)? Possible Duplicate: Prove $0! = 1$ from first principles Why does 0! = 1? I was wondering why, $0! = 1$ Can anyone please help me understand it. Thanks.
Answer 1: The "empty product" is (in general) taken to be 1, so that formulae are consistent without having to look over your shoulder. Take logs and it is equivalent to the empty sum being zero. Answer 2: $(n-1)! = \frac {n!} n$ applied with $n=1$ Answer 3: Convention - for the reasons above, it works.
What does the exclamation mark do? I've seen this but never knew what it does. Can any one let me in on the details? Thanks.
For completeness: Although in mathematics the $!$ almost always refers to the factorial function, you often see it in quasi-mathematical contexts with a different meaning. For example, in many programming languages it is used to mean negation, for example in Java the expression !true evaluates to false. It is also commonly used to express inequality, for example in the expression 1 != 2, read as '1 is not equal to 2'. It is also used in some functional languages to denote a function that modifies its input, as in the function set! in Scheme, which sets its first argument to the value of its second argument.
How would I solve $\frac{(n - 10)(n - 9)(n - 8)\times\ldots\times(n - 2)(n - 1)n}{11!} = 12376$ for some $n$ without brute forcing it? Given this equation: $$ \frac{(n - 10)(n - 9)(n - 8)\times\ldots\times(n - 2)(n - 1)n}{11!} = 12376 $$ How would I find $n$? I already know the answer to this, all thanks to Wolfram|Alpha, but just knowing the answer isn't good enough for me. I want to know how I would go about figuring out the answer without having to multiply each term, then using algebra to figure out the answer. I was hoping that there might be a more clever way of doing this.
$n(n-1)\cdots(n-10)/11! = 2^3 \cdot 7 \cdot 13 \cdot 17$. It's not hard to see that $11! = 2^8 3^4 5^2 7^1 11^1$; this is apparently known as de Polignac's formula although I didn't know the name. Therefore $n(n-1) \cdots (n-10) = 2^{11} 3^4 5^2 7^2 11^1 13^1 17^1$. In particular 17 appears in the factorization but 19 does not. So $17 \le n < 19$. By checking the exponent of $7$ we see that $n = 17$ (so we have $17 \cdot 16 \cdots 7$, which includes $7$ and $14$), not $n = 18$. Alternatively, there's an analytic solution. Note that $n(n-1) \cdots (n-10) < (n-5)^{11}$ but that the two sides are fairly close together. This is because $(n-a)(n-(10-a)) < (n-5)^2$. So we have $$ n(n-1) \cdots (n-10)/11! = 12376 $$ and using the inequality we get $$ (n-5)^{11}/11! > 12376 $$ where we expect the two sides to be reasonably close. Solving for $n$ gives $$ n > (12376 \times 11!)^{1/11} + 5 \approx 16.56.$$ Now start trying values of $n$ that are greater than 16.56; the first one is 17, the answer. Implicit in here is the approximation $$ {n \choose k} \approx {(n-(k-1)/2)^k \over k!} $$ which comes from replacing every factor of the product $n(n-1)\cdots(n-k+1)$ by the middle factor.
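A one-line numerical confirmation, since $n(n-1)\cdots(n-10)/11!$ is just $\binom{n}{11}$:

```python
from math import comb, factorial

# n(n-1)...(n-10)/11! is C(n, 11); check n = 17.
print(comb(17, 11))                                      # 12376
print(factorial(17) // (factorial(6) * factorial(11)))   # 12376 again
```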
Square root of differential operator If $D_x$ is the differential operator. eg. $D_x x^3=3 x^2$. How can I find out what the operator $Q_x=(1+(k D_x)^2)^{(-1/2)}$ does to a (differentiable) function $f(x)$? ($k$ is a real number) For instance what is $Q_x x^3$?
It probably means this: Expand the expression $(1+(kt)^2)^{-1/2}$ as a power series in $t$, getting $$ a_0 + a_1 t + a_2 t^2 + a_3 t^3 + \cdots, $$ and then put $D_x$ where $t$ was: $$ a_0 + a_1 D_x + a_2 D_x^2 + a_3 D_x^3 + \cdots $$ and then apply that operator to $x^3$: $$ a_0 x^3 + a_1 D_x x^3 + a_2 D_x^2 x^3 + a_3 D_x^3 x^3 + \cdots. $$ All of the terms beyond the ones shown here will vanish, so you won't have an infinite series.
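Here is a small SymPy sketch of exactly this recipe, applied to $f(x)=x^3$ (the truncation order is arbitrary, since only finitely many terms survive on a polynomial):

```python
import sympy as sp

x, t, k = sp.symbols('x t k')

# Taylor coefficients a_i of (1 + (k t)^2)^(-1/2) around t = 0.
N = 6
expansion = sp.series((1 + (k*t)**2)**sp.Rational(-1, 2), t, 0, N).removeO()

# Replace t^i by the i-th derivative operator and apply to f(x) = x^3.
f = x**3
Qf = sum(expansion.coeff(t, i) * sp.diff(f, x, i) for i in range(N))
print(sp.expand(Qf))    # x**3 - 3*k**2*x (higher-order terms vanish on a cubic)
```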
Does taking closure preserve finite index subgroups? Let $K \leq H$ be two subgroups of a topological group $G$ and suppose that $K$ has finite index in $H$. Does it follow that $\bar{K}$ has finite index in $\bar{H}$ ?
The answer is yes in general, and here is a proof, which is an adaptation of MartianInvader's: Let $K$ have finite index in $H$, with coset reps. $h_1,\ldots,h_n$. Since multiplication by any element of $G$ is a homeomorphism from $G$ to itself (since $G$ is a topological group), we see that each coset $h_i \overline{K}$ is a closed subset of $G$, and hence so is their union $h_1\overline{K} \cup \cdots \cup h_n \overline{K}$. Now this closed set contains $H$, and hence contains $\overline{H}$. Thus $\overline{H}$ is contained in the union of finitely many $\overline{K}$ cosets, and hence contains $\overline{K}$ with finite index. [Note: I hadn't paid proper attention to Keenan and Kevin's comments on MartianInvader's answer when I wrote this, and this answer essentially replicates the content of their comments.]
Condition For Existence Of Phase Flows I am a bit confused about the existence of one-parameter groups of diffeomorphisms/phase flows for various types of ODE's. Specifically, there is a problem in V.I. Arnold's Classical Mechanics text that asks to prove that a positive potential energy guarantees a phase flow, and also one asking to prove that $U(x) = -x^4$ does not define a phase flow- and these having me thinking. Consider the following two (systems of) differential equations: $\dot x(t) = y(t)$, $\dot y(t) = 4x(t)^3$ and $\dot a(t) = b(t)$, $\dot b(t) = -4a(t)^3$. Both phase flows might, as far as I see it, have issues with the fact that the functions $\dot y(t)$ and $\dot b(t)$ have inverses which are not $C^\infty$ everywhere. However, the $(x,y)$ phase flow has an additional, apparently (according to Arnold's ODE text) more important issue- it approaches infinity in a finite time. Why, though, do I care about the solutions "blowing up" more than I care about the vector fields' differentiability issues? $\textbf{What is, actually, the criterion for the existence of a phase flow, given a (sufficiently differentiable) vector field?}$
How does Arnold define "phase flow"? As far as I know, part of the definition of a flow requires the solutions to exist for all $t > 0$. If they go to $\infty$ in a finite time, they don't exist after that. On the other hand, I don't see why not having a $C^\infty$ inverse would be an issue.
Expectation of supremum Let $x(t)$ a real valued stochastic process and $T>0$ a constant. Is it true that: $$\mathbb{E}\left[\sup_{t\in [0,T]} |x(t)|\right] \leq T \sup_{t\in [0,T]} \mathbb{E}\left[|x(t)|\right] \text{ ?}$$ Thanks for your help.
Elaborating on the comment by Zhen: just consider $x(t) = 1$ a.s. for all $t$ and $T = 0.5$. Then the left-hand side equals $1$ while the right-hand side equals $0.5$, so the proposed inequality fails.
Notations involving squiggly lines over horizontal lines Is there a symbol for "homeomorphic to"? I looked on Wikipedia, but it doesn't seem to mention one? Also, for isomorphism, is the symbol a squiggly line over an equals sign? What is the symbol with a squiggly line over just one horizontal line? Thanks.
I use $\cong$ for isomorphism in a category, which includes both homeomorphism and isomorphism of groups, etc. I have seen $\simeq$ used to mean homotopy equivalence, but I don't know how standard this is.
Determining the truth value of a statement I am stuck with the following question: Determine the truth value of each of the following statements (a statement is a sentence that evaluates to either true or false, but you cannot be indecisive). If 2 is even then New York has a large population. Now I don't get what truth value means here. I would be thankful if someone could help me out. Thanks in advance.
If X Then Y is an implication. In other words, the truth of X implies the truth of Y. The "implies" operator is defined in exactly this manner. Google "implies operator truth table" to see the definition for every combination of values. Most importantly, think about why it's defined in this manner by substituting in place of X and Y statements that you know to be either true or false. One easy way to summarise the definition, is that either X is false (in which case it doesn't matter what the second value is), or Y is true. So applying this to your statement: 2 is indeed even (so now that X is true we only need to check that Y is true to conclude that the implication is indeed valid). NY does indeed have a large population, and so we conclude that the implication is valid, and the sentence as a whole is true!
summation of x * (y choose x) binomial coefficients What does this summation simplify to? $$ \sum_{x=0}^{y} \frac{x}{x!(y-x)!} $$ I was able to realize that it is equivalent to the summation of $x\dbinom{y}{x}$ if you divide and multiply by $y!$, but I am unsure of how to further simplify. Thanks for the help!
Using the generating function technique as in the answer to your other question: Using $g_1(t) = t \exp(t) = \sum_{x=0}^\infty t^{x+1} \frac{1}{x!} = \sum_{x=0}^\infty t^{x+1} \frac{x+1}{(x+1)!} = \sum_{x=-1}^\infty t^{x+1} \frac{x+1}{(x+1)!} = \sum_{x=0}^\infty t^{x} \frac{x}{x!}$ and $g_2(t) = \exp(t)$. $$ \sum_{x=0}^{y} x \frac{1}{x!} \frac{1}{(y-x)!} = [t^y] ( g_1(t) g_2(t) ) = [t^y] ( t \exp(2 t) ) = \frac{2^{y-1}}{(y-1)!} = \frac{y 2^{y-1}}{y!} $$
Why is this map a homeomorphism? A few hours ago a user posted a link to this pdf: There was a discussion about Proposition 3.2.8. I read it, and near the end, there is a map given $$ \bigcap_{i_1,\dots,i_n,\dots\in\{0,1\}}X_{i_1,\dots,i_n,\dots}\mapsto (i_1,\dots,i_n,\dots). $$ And it says this is a homeomorphism. Is there a more explicit explanation why it's a homeomorphism?
If you examine the construction of $C$, you’ll see that each set $Y_{i_1,\dots,i_n}$ is the closure of a certain open ball; to simplify the notation, let $B_{i_1,\dots,i_n}$ be that open ball. The map in question is a bijection that takes $B_{i_1,\dots,i_n}\cap C$ to $$\{(j_1,j_2,\dots)\in\{0,1\}^{\mathbb{Z}^+}: j_1=i_1, j_2=i_2,\dots,j_n=i_n\},$$ which is a basic open set in the product $\{0,1\}^{\mathbb{Z}^+}$. Every open subset of $C$ is a union of sets of the form $B_{i_1,\dots,i_n}\cap C$, so the map is open. Every open set in the product $\{0,1\}^{\mathbb{Z}^+}$ is a union of sets of the form $$\{(j_1,j_2,\dots)\in\{0,1\}^{\mathbb{Z}^+}: j_1=i_1, j_2=i_2,\dots,j_n=i_n\},$$ so the map is continuous. Finally, a continuous, open bijection is a homeomorphism.
Is there a rule of integration that corresponds to the quotient rule? When teaching the integration method of u-substitution, I like to emphasize its connection with the chain rule of integration. Likewise, the intimate connection between the product rule of derivatives and the method of integration by parts comes up in discussion. Is there an analogous rule of integration for the quotient rule? Of course, if you spot an integral of the form $\int \left (\frac{f(x)}{g(x)} \right )' = \int \frac{g(x) \cdot f(x)' - f(x) \cdot g(x)'}{\left [ g(x)\right ]^2 }$, then the antiderivative is obvious. But is there another form/manipulation/"trick"?
I guess you could arrange an analog to integration by parts, but making students learn it would be superfluous. $$ \int \frac{du}{v} = \frac{u}{v} + \int \frac{u}{v^2} dv.$$
Gaussian Elimination Does simple Gaussian elimination work on all matrices? Or are there cases where it doesn't work? My guess is yes, it works on all kinds of matrices, but somehow I remember my teacher pointed out that it doesn't work on all matrices. But I'm not sure, because I have been given a lot of methods, and maybe I have mixed them all up.
Gaussian elimination without pivoting works only for matrices all whose leading principal minors are non-zero. See http://en.wikipedia.org/wiki/LU_decomposition#Existence_and_uniqueness.
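A tiny example, if SciPy is available: the matrix below is invertible, but its leading $1\times1$ principal minor is $0$, so elimination without row exchanges fails while the pivoted factorization works fine.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 1.],
              [1., 0.]])
# The leading 1x1 principal minor of A is 0, so elimination without a row
# exchange fails immediately (it would divide by the pivot A[0,0] = 0).
# With a row exchange there is no problem; SciPy returns P, L, U with
# A = P @ L @ U (equivalently, P.T @ A = L @ U).
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))   # True
```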
On the GCD of a Pair of Fermat Numbers I've been working with the Fermat numbers recently, but this problem has really tripped me up. If the Fermat numbers are defined as $f_a=2^{2^a}+1$, then how can we say that for an integer $b<a$, $\gcd(f_b,f_a)=1$?
Claim. $f_n=f_0\cdots f_{n-1}+2$. The result holds for $f_1$: $f_0=2^{2^0}+1 = 2^1+1 = 3$, $f_1=2^{2}+1 = 5 = 3+2$. Assume the result holds for $f_n$. Then $$\begin{align*} f_{n+1} &= 2^{2^{n+1}}+1\\ &= (2^{2^n})^2 + 1\\ &= (f_n-1)^2 +1\\ &= f_n^2 - 2f_n +2\\ &= f_n(f_0\cdots f_{n-1} + 2) -2f_n + 2\\ &= f_0\cdots f_{n-1}f_n + 2f_n - 2f_n + 2\\ &= f_0\cdots f_n + 2, \end{align*}$$ which proves the formula by induction. $\Box$ Now, let $d$ be a common factor of $f_b$ and $f_a$. Then $d$ divides $f_0\cdots f_{a-1}$ (because it's a multiple of $f_b$) and divides $f_a$. That means that it divides $$f_a - f_0\cdots f_{a-1} = (f_0\cdots f_{a-1}+2) - f_0\cdots f_{a-1} = 2;$$ but $f_a$ and $f_b$ are odd, so $d$ is an odd divisor of $2$. Therefore, $d=\pm 1$. So $\gcd(f_a,f_b)=1$.
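A quick computational check of both the product formula and the coprimality, for the first few Fermat numbers:

```python
from math import gcd

def fermat(n):
    return 2**(2**n) + 1

# Check f_n = f_0 * f_1 * ... * f_{n-1} + 2 and pairwise coprimality.
for n in range(1, 7):
    prod = 1
    for i in range(n):
        prod *= fermat(i)
    assert fermat(n) == prod + 2
assert all(gcd(fermat(a), fermat(b)) == 1 for a in range(7) for b in range(a))
print("checked for the first few Fermat numbers")
```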
The chain rule for a function to $\mathbf{C}$ Let $f:U\longrightarrow \mathbf{C}$ be a holomorphic function, where $U$ is a Riemann surface, e.g., $U=\mathbf{C}$, $U=B(0,1)$ or $U$ is the complex upper half plane, etc. For $a$ in $\mathbf{C}$, let $t_a:\mathbf{C} \longrightarrow \mathbf{C}$ be the translation by $a$, i.e., $t_a(z) = z-a$. What is the difference between $df$ and $d(t_a\circ f)$ as differential forms on $U$? My feeling is that $df = d(t_a\circ f)$, but why?
The forms will be different if $a\not=0$, namely if $\mathrm{d} f = w(z) \mathrm{d}z$ locally, then $\mathrm{d}\left( t_a \circ f\right) = w(z-a) \mathrm{d} z$. Added: Above, I was using the following, unconventional definition for the composition, $(t_a \circ f)(z) = f(t_a(z)) = f(z-a)$. The conventional definition, though, is $(t_a \circ f)(z) = t_a(f(z)) = f(z)-a$. With this definition $\mathrm{d} (t_a \circ f) = \mathrm{d}(f-a) = \mathrm{d} f$.
Solve $t_{n}=t_{n-1}+t_{n-3}-t_{n-4}$? I missed the lectures on how to solve this, and it's really kicking my butt. Could you help me out with solving this? Solve the following recurrence exactly. $$ t_n = \begin{cases} n, &\text{if } n=0,1,2,3, \\ t_{n-1}+t_{n-3}-t_{n-4}, &\text{otherwise.} \end{cases} $$ Express your answer as simply as possible using the $\Theta$ notation.
Let's tackle it in the general way. Define the ordinary generating function: $$ T(z) = \sum_{n \ge 0} t_n z^n $$ Writing the recurrence as $t_{n + 4} = t_{n + 3} + t_{n + 1} - t_n$, the properties of ordinary generating functions give: $$ \begin{align*} \frac{T(z) - t_0 - t_1 z - t_2 z^2 - t_3 z^3}{z^4} &= \frac{T(z) - t_0 - t_1 z - t_2 z^2}{z^3} + \frac{T(z) - t_0}{z} - T(z) \\ T(z) &= \frac{z}{(1 - z)^2} \end{align*} $$ This has the coefficient: $$ t_n = [z^n] \frac{z}{(1 - z)^2} = [z^{n - 1}] \frac{1}{(1 - z)^2} = (-1)^{n - 1} \binom{-2}{n - 1} = n $$ So $t_n = n$ exactly, and in particular $t_n = \Theta(n)$.
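A quick check of the closed form against the recurrence:

```python
# Verify t_n = n against the recurrence t_n = t_{n-1} + t_{n-3} - t_{n-4}.
t = [0, 1, 2, 3]
for n in range(4, 50):
    t.append(t[n-1] + t[n-3] - t[n-4])
print(all(t[n] == n for n in range(50)))   # True, so t_n = n = Theta(n)
```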
Sum of a series of minimums I should get sum of the following minimums.Is there any way to solve it? $$\min\left\{2,\frac{n}2\right\} + \min\left\{3,\frac{n}2\right\} + \min\left\{4,\frac{n}2\right\} + \cdots + \min\left\{n+1, \frac{n}2\right\}=\sum_{i=1}^n \min(i+1,n/2)$$
If $n$ is even, your sum splits as $$\sum_{i=1}^{\frac{n}{2}-2} \min\left(i+1,\frac{n}{2}\right)+\frac{n}{2}+\sum_{i=\frac{n}{2}}^{n} \min\left(i+1,\frac{n}{2}\right)=\sum_{i=1}^{\frac{n}{2}-2} (i+1)+\frac{n}{2}+\frac{n}{2}\sum_{i=\frac{n}{2}}^{n} 1$$ If $n$ is odd, you can perform a similar split: $$\sum_{i=1}^{\frac{n-3}{2}} (i+1)+\frac{n}{2}\sum_{i=\frac{n-1}{2}}^{n} 1$$
Prove that $\cos(x)$ is identically zero using integration by parts Consider $$\int\cos(t-x)\sin(x)dx,$$ where $t$ is a constant. Evaluating the integral by parts, let \begin{align} u = \cos(t-x),\ dv = \sin(x), \\ du = \sin(t-x),\ v = -\cos(x), \end{align} so $$ \int\cos(t-x)\sin(x)dx = -\cos(t-x)\cos(x) - \int\sin(t-x)\cdot-\cos(x)dx. $$ Evaluating the integral on the right by parts again (with a slight abuse of notation), \begin{align} u = \sin(t-x),&\quad dv = -\cos(x), \\ du = -\cos(t-x),&\quad v = -\sin(x), \end{align} we get \begin{align} \int\cos(t-x)\sin(x)dx &= -\cos(t-x)\cos(x) - \left( -\sin(t-x)\sin(x)-\int\cos(t-x)\sin(x)dx\right) \\ &= -\cos(t-x)\cos(x) + \sin(t-x)\sin(x) + \int\cos(t-x)\sin(x)dx, \end{align} and subtracting the integral from both sides, we obtain the dazzling new identity $$\sin(t-x)\sin(x)-\cos(t-x)\cos(x)=0$$ for all $t$ and $x$! Pushing it further, the LHS expression is $-\cos(t)$, and as $t$ was just an arbitrary constant, this implies $\cos(x)$ is identically zero! Now I obviously know something's wrong here. But what, and where? Where's the flaw in my reasoning? P.S. I can evaluate the integral to get the proper answer lol. But this was rather interesting.
A standard trigonometric identity says that $$\sin(t-x)\sin(x)-\cos(t-x)\cos(x)$$ is equal to $$ -\cos((t-x)+x) $$ and that is $-\cos t$. As a function of $x$, this is a constant, i.e. since there's no "$x$" in this expression, it doesn't change as $x$ changes. Since the "dazzling new identity", if stated correctly, would say, not that the expression equals $0$, but that the expression is constant, it seems your derivation is correct. Except that you wrote "$=0$" where you needed "$=\text{constant}$".
Primes sum ratio Let $$G(n)=\begin{cases}1 &\text{if }n \text{ is a prime }\equiv 3\bmod17\\0&\text{otherwise}\end{cases}$$ And let $$P(n)=\begin{cases}1 &\text{if }n \text{ is a prime }\\0&\text{otherwise.}\end{cases}$$ How to prove that $$\lim_{N\to\infty}\frac{\sum\limits_{n=1}^N G(n)}{\sum\limits_{n=1}^N P(n)}=\frac1{16}$$ And what is $$\lim_{N\to\infty} \frac{\sum\limits_{n=1}^N n\,G(n)}{\sum\limits_{n=1}^N n\,P(n)}?$$ And what is $O(f(n))$ of the fastest growing function $f(n)$ such that the following limit exists: $$\lim_{N\to\infty} \frac{\sum\limits_{n=1}^N f(n)\,G(n)}{\sum\limits_{n=1}^N f(n)\,P(n)}$$ And does this all follow directly from the asymptotic equidistribution of primes modulo most thing, if such a thing were known? And is it known?
The first limit follows from the Siegel–Walfisz theorem: the primes are asymptotically equidistributed among the $\varphi(17)=16$ admissible residue classes modulo $17$. Summation by parts on the second sum should yield, for large $N$: $$\frac{\sum\limits_{n=1}^N n\,G(n)}{\sum\limits_{n=1}^N n\,P(n)}=\frac{(N\sum\limits_{n=1}^N G(n))-\sum\limits_{n=1}^{N-1}\sum\limits_{k=0}^{n} G(k)}{\sum\limits_{n=1}^N n\,P(n)}=\frac{(N\sum\limits_{n=1}^N P(n)/16)-\sum\limits_{n=1}^{N-1}\sum\limits_{k=0}^{n} P(k)/16}{\sum\limits_{n=1}^N n\,P(n)}=\frac1{16}$$
Confusion about a specific notation In the following symbolic mathematical statement $n \in \omega $, what does $\omega$ stand for? Does it have something to do with the continuum, or is it just another way to denote the set of natural numbers?
The notation $\omega$ comes from the ordinals, and it denotes the least ordinal number which is not finite. The von Neumann ordinals are transitive sets which are well ordered by $\in$. We can define these sets by induction: $0=\varnothing$; $\alpha+1 = \alpha\cup\{\alpha\}$; and if $\beta$ is a limit ordinal and $\alpha$ has been defined for all $\alpha<\beta$, then $\displaystyle\beta=\bigcup_{\alpha<\beta}\alpha$. That is to say, after we have defined all the natural numbers, we define $\omega=\{0,1,2,3,\ldots\}$; we can then continue and define $\omega+1 = \omega\cup\{\omega\}$ and so on. In set theory it is usual to use $\omega$ to denote the least infinite ordinal, as well as the set of finite ordinals. It does relate to the continuum: since $\omega$ is countably infinite, $\mathcal P(\omega)$ has the cardinality of the continuum.
What is a good book for learning math, from middle school level? Which books are recommended for learning math from the ground up and review the basics - from middle school to graduate school math? I am about to finish my masters of science in computer science and I can use and understand complex math, but I feel like my basics are quite poor.
It depends what your level is and what you're interested in. I think a book that's not about maths but uses maths is probably more interesting for most people. I've noticed this in undergraduates as well: give someone a course using the exact same maths but with the particulars of their subject area subbed in, and they'll like it much better. Examples being speech therapy, economics, criminal justice, meteorology, kinesiology, ecology, philosophy, audio engineering…. (An image captioned "cohomology of the tribar", from JSTOR article 10.2307/1575844, appeared here.) That said … for me a great introduction was Penrose's The Road to Reality. It pulls no punches unlike many popular physics books. I've always been interested in "the deep structure of the universe/reality" so ... that was a topic in line with my advice from above. But also Penrose is an excellent writer and takes the time to draw pictures.
Equation of a rectangle I need to graph a rectangle on the Cartesian coordinate system. Is there an equation for a rectangle? I can't find it anywhere.
I found recently a new parametric form for a rectangle, that I did not know earlier: $$ \begin{align} x(u) &= \frac{1}{2}\cdot w\cdot \mathrm{sgn}(\cos(u)),\\ y(u) &= \frac{1}{2}\cdot h\cdot \mathrm{sgn}(\sin(u)),\quad (0 \leq u \leq 2\pi) \end{align} $$ where $w$ is the width of the rectangle and $h$ is its height. I have used this in modelling parametric ruled surfaces, where it seems to be rather handy.
Integrating $\int \frac{1}{1+e^x} dx$ I wish to integrate $$\int_{-a}^a \frac{dx}{1+e^x}.$$ By symmetry, the above is equal to $$\int_{-a}^a \frac{dx}{1+e^{-x}}$$ Now multiply by $e^x/e^x$ to get $$\int_{-a}^a \frac{e^x}{1+e^x} dx$$ which integrates to $$\log(1+e^x) |^a_{-a} = \log((1+e^a)/(1+e^{-a})),$$ which is not correct. According to Wolfram, we should get $$2a + \log((1+e^{-a})/(1+e^a)).$$ Where is the mistake? EDIT: Mistake found: was using log on calculator, which is base 10.
Both answers are equal. Split your answer into $\log(1+e^a)-\log(1+e^{-a})$, and write this as $$\begin{align*}\log(e^a(1+e^{-a}))-\log(e^{-a}(1+e^a))&=\log e^a+\log(1+e^{-a})-\log e^{-a}-\log(1+e^a)\\ &=2a+\log((1+e^{-a})/(1+e^a))\end{align*}$$
How does one prove if a multivariate function is constant? Suppose we are given a function $f(x_{1}, x_{2})$. Does showing that $\frac{\partial f}{\partial x_{i}} = 0$ for $i = 1, 2$ imply that $f$ is a constant? Does this hold if we have $n$ variables instead?
Yes, it does, as long as the function is continuous on a connected domain and the partials exist (let's not get into anything pathological here). And the proof is exactly the same as in the one-variable case: if there are two points whose values we want to compare, they lie on the same line. Use the multivariable mean value theorem to show that they must have the same value. This proof is easier if you know directional derivatives and/or believe that the partial derivative in the direction of this line is zero because all the basis-partials are zero.
The law of sines in hyperbolic geometry What is the geometrical meaning of the constant $k$ in the law of sines, $\frac{\sin A}{\sinh a} = \frac{\sin B}{\sinh b} = \frac{\sin C}{\sinh c}=k$ in hyperbolic geometry? I know the meaning of the constant only in Euclidean and spherical geometry.
As given by Will Jagy, $k$ must be inside the argument: $$ \frac{\sin A}{\sinh(a/k)} = \frac{\sin B}{\sinh(b/k)} = \frac{\sin C}{\sinh(c/k)} $$ This is the law of sines of hyperbolic trigonometry, where $k$ is the pseudoradius and the constant Gauss curvature is $K= -1/k^2$. Please also refer to "Pan-geometry", a set of relations mirrored from spherical to hyperbolic trigonometry, typified by $(\sin,\cos) \to (\sinh,\cosh)$, in Roberto Bonola's book on non-Euclidean geometry. There is nothing imaginary about the pseudoradius. It is as real, palpable and solid as the radius of the sphere in spherical trigonometry, now that hyperbolic geometry has been so firmly established. I wish the practice of using $K=-1$ were done away with, always using $K = -1/k^2$ or $K = -1/a^2$ instead.
Why are samples always taken from iid random variables? In most mathematical statistic textbook problems, a question always ask: Given you have $X_1, X_2, \ldots, X_n$ iid from a random sample with pdf:(some pdf). My question is why can't the sample come from one random variable such as $X_1$ since $X_1$ itself is a random variable. Why do you need the sample to come from multiple iid random variables?
A random variable is something that has one definite value each time you do the experiment (whatever you define "the experiment" to be), but possibly a different value each time you do it. If you collect a sample of several random values, the production of all those random values must -- in order to fit the structure of the theory -- be counted as part of one single experiment. Therefore, if you had only one variable, there couldn't be any different values in your sample.
How many different combinations of $X$ sweaters can we buy if we have $Y$ colors to choose from? How many different combinations of $X$ sweaters can we buy if we have $Y$ colors to choose from? According to my teacher the right way to think about this problem is to think of partitioning $X$ identical objects (sweaters) into $Y$ different categories (colors). Well,this idea however yields the right answer but I just couldn't convince my self about this way to thinking,to be precise I couldn't link the wording of the problem to this approach,could any body throw some more light on this?
The classical solution to this problem is as follows: Order the $Y$ colors. Write $n_1$ zeroes if there are $n_1$ sweater of the first color. Write a single one. Write $n_2$ zeroes where $n_2$ is the number of sweaters of the second color. Write a single one, and so on. You get a string of length $X+Y-1$ that has exactly $X$ zeroes and $Y-1$ ones. The map that I have described above is a 1-1 correspondence between number of different combinations of $X$ sweaters with $Y$ colors and the set of such binary strings. Now, each string is uniquely determined by the positions of the ones in the string. How many ways are there to place $Y-1$ ones in an array of length $X+Y-1$? Does this help? Edit: With my reasoning, you arrive exactly at Brian Scott's formula. He uses bars where I have ones and stars where I have zeroes.
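A brute-force check of the resulting count against the stars-and-bars formula, for small $X$ and $Y$:

```python
from math import comb
from itertools import combinations_with_replacement

# Number of multisets of size X drawn from Y colors vs. the stars-and-bars count.
for X in range(1, 6):
    for Y in range(1, 6):
        brute = sum(1 for _ in combinations_with_replacement(range(Y), X))
        assert brute == comb(X + Y - 1, Y - 1) == comb(X + Y - 1, X)
print("counts agree for X, Y = 1..5")
```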
Closed form for a pair of continued fractions What is $1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{1}{4+\cdots}}}$ ? What is $1+\cfrac{2}{1+\cfrac{3}{1+\cdots}}$ ? It does bear some resemblance to the continued fraction for $e$, which is $2+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\cdots}}}$. Another thing I was wondering: can all transcendental numbers be expressed as infinite continued fractions containing only rational numbers? Of course for almost all transcendental numbers there does not exist any method to determine all the numerators and denominators.
I don't know whether either of the continued fractions can be expressed in terms of common functions and constants. However, all real numbers can be expressed as continued fractions containing only integers. The continued fraction terminates for rational numbers, repeats for quadratic irrational numbers, and neither terminates nor repeats for other reals (a short algorithm for producing such an expansion is sketched below). Shameless plug: There are many references out there for continued fractions. I wrote a short paper that is kind of dry and covers only the basics (nothing close to the results that J. M. cites), but it goes over the results that I mentioned.
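To make the existence claim concrete, here is a small Python sketch (my illustration, not part of the original answer) of the standard algorithm that produces the integer continued fraction of a real number; the function name and the cutoff `max_terms` are of course arbitrary choices.

```python
import math
from fractions import Fraction

def continued_fraction(x, max_terms=20):
    """Integer continued fraction [a0; a1, a2, ...] of x.

    Terminates for rationals; for irrationals it stops after max_terms
    (and floating-point error eventually creeps in if x is a float).
    """
    terms = []
    for _ in range(max_terms):
        a = math.floor(x)          # next integer term
        terms.append(a)
        frac = x - a               # fractional part
        if frac == 0:              # rational input: expansion terminates
            break
        x = 1 / frac               # recurse on the reciprocal
    return terms

# continued_fraction(Fraction(355, 113))  -> [3, 7, 16]            (terminates)
# continued_fraction(math.sqrt(2))        -> [1, 2, 2, 2, ...]      (periodic)
# continued_fraction(math.e)              -> [2, 1, 2, 1, 1, 4, ...] (neither)
```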
What is wrong with my reasoning regarding finding volumes by integration? The problem from the book is (this is Calculus 2 stuff): Find the volume common to two spheres, each with radius $r$, if the center of each sphere lies on the surface of the other sphere. I put the center of one sphere at the origin, so its equation is $x^2 + y^2 + z^2 = r^2$. I put the center of the other sphere on the $x$-axis at $r$, so its equation is $(x-r)^2 + y^2 + z^2 = r^2$. By looking at the solid down the $y$- or $z$-axis it looks like a football. By looking at it down the $x$-axis, it looks like a circle. So, the spheres meet along a plane as can be confirmed by setting the two equations equal to each other and simplifying until you get $x = r/2$. So, my strategy is to integrate down the $x$-axis from 0 to $r/2$, getting the volume of the cap of one of the spheres and just doubling it, since the solid is symmetric. In other words, I want to take circular cross-sections along the $x$-axis, use the formula for the area of a circle to find their areas, and add them all up. The problem with this is that I need to find an equation for $r$ in terms of $x$, and it has to be quadratic rather than linear, otherwise I'll end up with the volume of a cone rather than a sphere. But when I solve for, say, $y^2$ in one equation, plug it into the other one, and solve for $r$, I get something like $r = \sqrt{2 x^2}$, which is linear.
The analytic geometry of $3$-dimensional space is not needed to solve this problem. In particular, there is no need for the equations of the spheres. All we need is some information about the volumes of solids of revolution. Draw two circles of radius $1$, one with center $(0,0)$, the other with center $(1,0)$. (Later we can scale everything by the linear factor $r$, which scales volume by the factor $r^3$.) The volume we are looking for is twice the volume obtained by rotating the part of the circle $x^2+y^2=1$, from $x=1/2$ to $x=1$, about the $x$-axis. (This is clear if we have drawn a picture.) So the desired volume, in the case $r=1$, is $$2\int_{1/2}^1 \pi y^2\,dx.$$ There remains some work to do, but it should be straightforward; the computation is carried out below. Comment: There are other approaches. Volumes of spherical caps were known well before the formal discovery of the calculus. And if we do use calculus, we can also tackle the problem by taking slices perpendicular to the $y$-axis, or by using the "shell" method. It is worthwhile experimenting with at least one of these methods. The algebra gets a little more complicated.
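Carrying out the remaining computation (not spelled out in the original answer, but it is standard arithmetic): $$2\int_{1/2}^1 \pi y^2\,dx = 2\pi\int_{1/2}^1 (1-x^2)\,dx = 2\pi\left[x - \frac{x^3}{3}\right]_{1/2}^1 = 2\pi\left(\frac{2}{3} - \frac{11}{24}\right) = \frac{5\pi}{12},$$ so after scaling by $r^3$ the common volume is $\dfrac{5\pi r^3}{12}$.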
Proving two lines trisect a line A question from my vector calculus assignment. Geometry, anything visual, is by far my weakest area. I've been literally staring at this question for hours in frustration and I give up (and I do mean hours). I don't even know where to start... not feeling good over here. Question: In the diagram below $ABCD$ is a parallelogram with $P$ and $Q$ the midpoints of the sides $BC$ and $CD$, respectively. Prove $AP$ and $AQ$ trisect $BD$ at the points $E$ and $F$ using vector methods. Image: Hints: Let $a = OA$, $b = OB$, etc. You must show $ e = \frac{2}{3}b + \frac{1}{3}d$, etc. I figured as much without the hints. Also I made $D$ the origin and simplified to $f = td$ for some $t$. And $f = a + s(q - a)$ for some $s$, and $q = \frac{c}{2}$ and so on... but I'm just going in circles. I have no idea what I'm doing. There are too many variables... I am truly frustrated and feeling dumb right now. Any help is welcome. I'm going to go watch Dexter and forget how dumb I'm feeling.
Note that $EBP$ and $EDA$ are similar triangles (since $BP \parallel AD$). Since $2BP = AD$, it follows that $2EB = ED$, and thus $3EB = BD$; that is, $E$ lies one third of the way from $B$ to $D$. The same argument, with $Q$ in place of $P$, shows that $3DF = BD$, so $AP$ and $AQ$ together trisect $BD$.
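Since the question asks specifically for vector methods, here is one way to phrase the same fact with vectors (my own addition, not part of the original answer). Because $ABCD$ is a parallelogram, $c = b + d - a$, so $p = \tfrac12(b + c) = \tfrac12(2b + d - a)$. A point of line $AP$ has the form $$a + t(p - a) = \left(1 - \tfrac{3t}{2}\right)a + t\,b + \tfrac{t}{2}\,d,$$ while a point of $BD$ has the form $(1-s)b + s\,d$. Writing the same point both ways as an affine combination of $a$, $b$, $d$ (which is unique, since $A$, $B$, $D$ are affinely independent) forces $1 - \tfrac{3t}{2} = 0$, i.e. $t = \tfrac23$, and then $e = \tfrac23 b + \tfrac13 d$. The same computation with $q = \tfrac12(c + d)$ gives $f = \tfrac13 b + \tfrac23 d$, so $E$ and $F$ trisect $BD$.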
What is an easy way to prove that $r > s > 0$ implies $x^r > x^s$? I have been using simple inequalities of fractional powers on a positive interval and keep abusing the inequality for $x>1$. I was just wondering if there is a nice way to prove the inequality in a couple of lines: Let $x \in (1,\infty)$ and $r,s \in \mathbb{R}$. What is an easy way to prove that $r > s > 0$ implies $x^r > x^s$?
If you accept that $x^y\gt 1$ for $x\gt 1$ and $y \gt 0$, then $x^r=x^{r-s}x^s \gt x^s$ for $x\gt 1$ and $r \gt s$.
Is the class of cardinals totally ordered? In a Wikipedia article http://en.wikipedia.org/wiki/Aleph_number#Aleph-one I encountered the following sentence: "If the axiom of choice (AC) is used, it can be proved that the class of cardinal numbers is totally ordered." But isn't the class of ordinals totally ordered (in fact, well-ordered) without the axiom of choice? Being a subclass of the class of ordinals, isn't the class of cardinals obviously totally ordered?
If I understand the problem correctly, it depends on your definition of cardinal. If you define the cardinals as initial ordinals, then your argument works fine, but without choice you cannot show that every set is equinumerous to some cardinal. (Since AC is equivalent to every set being well-orderable.) On the other hand, if you have some definition which implies that each set is equinumerous to some cardinal number, then without choice you cannot show that any two sets (any two cardinals) are comparable. (AC is equivalent to: for any two sets $A$, $B$ there exists either an injective map $A\to B$ or an injective map $B\to A$. It is listed as one of the equivalent forms of AC on the Wikipedia page.)