How do I find roots of a single-variate polynomial whose integer coefficients are symmetric with respect to their powers? Given a polynomial such as $X^4 + 4X^3 + 6X^2 + 4X + 1,$ where the coefficients are symmetric, I know there's a trick to quickly find the zeros. Could someone please refresh my memory?
Hint: This particular polynomial is very nice, and factors as $(X+1)^4$. Take a look at Pascal's Triangle and the Binomial Theorem for more details. Added: Overly complicated formula The particular quartic you asked about had a nice solution, but let's find all the roots of the more general $$ax^{4}+bx^{3}+cx^{2}+bx+a.$$ Since $0$ is not a root, we are equivalently finding the zeros of $$ax^{2}+bx+c+bx^{-1}+ax^{-2}.$$ Let $z=x+\frac{1}{x}$ (as suggested by Aryabhatta). Then $z^{2}=x^{2}+2+x^{-2}$ so that $$ax^{2}+bx+c+bx^{-1}+ax^{-2}=az^{2}+bz+\left(c-2a\right).$$ The roots of this are given by the quadratic formula: $$\frac{-b+\sqrt{b^{2}-4a\left(c-2a\right)}}{2a},\ \frac{-b-\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}.$$ Now, we then have $$x+\frac{1}{x}=\frac{-b\pm\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}$$ and hence we have the two quadratics $$x^{2}+\frac{b+\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}x+1=0,$$ $$x^{2}+\frac{b-\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}x+1=0.$$ This then gives the four roots: $$\frac{-b+\sqrt{b^{2}-4a\left(c-2a\right)}}{4a}\pm\sqrt{\frac{1}{4}\left(\frac{b-\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}\right)^2-1}$$ $$\frac{-b-\sqrt{b^{2}-4a\left(c-2a\right)}}{4a}\pm\sqrt{\frac{1}{4}\left(\frac{b+\sqrt{b^{2}-4a\left(c-2a\right)}}{2a}\right)^2-1}.$$ If we plug in $a=1$, $b=4$, $c=6$, we find that all four of these are exactly $-1$, so our particular case does work out.
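As a quick sanity check of the formula, you can compare it against a numerical root finder (a sketch, assuming NumPy is available; the coefficients are the ones from the question):

```python
import numpy as np

# Coefficients of X^4 + 4X^3 + 6X^2 + 4X + 1, i.e. a=1, b=4, c=6 in the
# palindromic form a x^4 + b x^3 + c x^2 + b x + a.
coeffs = [1, 4, 6, 4, 1]
print(np.roots(coeffs))  # all four roots are (numerically) -1, as expected for (X+1)^4

# General palindromic quartic: solve a z^2 + b z + (c - 2a) = 0 for z = x + 1/x,
# then x^2 - z x + 1 = 0 for each value of z.
a, b, c = 1.0, 4.0, 6.0
for z in np.roots([a, b, c - 2 * a]):
    print(np.roots([1.0, -z, 1.0]))  # again -1, -1 for this example
```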
Quick ways for approximating $\sum_{k=a_1}^{k=a_2}C_{100}^k(\frac{1}{2})^k(\frac{1}{2})^{100-k}$? Consider the following problem: A fair coin is to be tossed 100 times, with each toss resulting in a head or a tail. Let $$H:=\textrm{the total number of heads}$$ and $$T:=\textrm{the total number of tails},$$ which of the following events has the greatest probability? A. $H=50$ B. $T\geq 60$ C. $51\leq H\leq 55$ D. $H\geq 48$ and $T\geq 48$ E. $H\leq 5$ or $H\geq 95$ What I can think of is the direct calculation: $$P(a_1\leq H\leq a_2)=\sum_{k=a_1}^{k=a_2}C_{100}^k(\frac{1}{2})^k(\frac{1}{2})^{100-k}$$ Here is my question: Is there any quick way to solve this problem other than the direct calculation?
Chebyshev's inequality, combined with mixedmath's and some other observations, shows that the answer has to be D without doing the direct calculations. First, rewrite D as $48 \leq H \leq 52$. A is a subset of D, and because the binomial distribution with $n = 100$ and $p = 0.5$ is symmetric about $50$, C is less likely than D. So, as mixedmath notes, A and C can be ruled out. Now, estimate the probability of D. We have $P(H = 48) = \binom{100}{48} 2^{-100} > 0.07$. Since $H = 48$ and $H=52$ are equally probable and are the least likely outcomes in D, $P(D) > 5(0.07) = 0.35$. Finally, $\sigma_H = \sqrt{100(0.5)(0.5)} = 5$. So the two-sided version of Chebyshev says that $P(E) \leq \frac{1}{9^2} = \frac{1}{81}$, since E asks for the probability that $H$ takes on a value 9 standard deviations away from the mean. The one-sided version of Chebyshev says that $P(B) \leq \frac{1}{1+2^2} = \frac{1}{5}$, since B asks for the probability that $H$ takes on a value 2 standard deviations smaller than the mean. So D must be the most probable event. Added: OP asks for more on why $P(C) < P(D)$. Since the Binomial($100, 0.5$) distribution is symmetric about $50$, $P(H = i) > P(H = j)$ when $i$ is closer to $50$ than $j$ is. Thus $$P(C) = P(H = 51) + P(H = 52) + P(H = 53) + P(H = 54) + P(H = 55)$$ $$< P(H = 50) + P(H=51) + P(H = 49) + P(H = 52) + P(H = 48) = P(D),$$ by directly comparing probabilities.
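If you do want to confirm the ranking, the exact probabilities are also cheap to compute with exact integer arithmetic (a sketch using only the standard library; the event labels follow the question):

```python
from math import comb

def prob_range(lo, hi, n=100):
    """P(lo <= H <= hi) for H ~ Binomial(n, 1/2)."""
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2**n

events = {
    "A: H = 50":            prob_range(50, 50),
    "B: T >= 60 (H <= 40)": prob_range(0, 40),
    "C: 51 <= H <= 55":     prob_range(51, 55),
    "D: 48 <= H <= 52":     prob_range(48, 52),
    "E: H <= 5 or H >= 95": prob_range(0, 5) + prob_range(95, 100),
}
for name, p in events.items():
    print(f"{name}: {p:.4f}")
# D comes out largest (about 0.38), consistent with the Chebyshev argument above.
```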
equivalent definitions of orientation I know two definitions of an orientation of a smooth n-manifold $M$: 1) A continuous pointwise orientation for $M$. 2) A continuous choice of generators for the groups $H_n(M,M-\{x\})=\mathbb{Z}$. Why are these two definitions equivalent? In other words, why is a choice of basis of $\mathbb{R}^n$ equivalent to a choice of generator of $H_n(\mathbb{R}^n,\mathbb{R}^n-\{0\})=\mathbb{Z}$? See comments for precise definitions. Thanks!
Recall that an element of $H_n(M,M-\{x\})$ is an equivalence class of singular $n$-chains, where the boundary of any chain in the class lies entirely in $M-\{x\}$. In particular, any generator of $H_n(M,M-\{x\})$ has a representative consisting of a single singular $n$-simplex $\sigma\colon \Delta^n\to M$, whose boundary lies in $M-\{x\}$. Moreover, the map $\sigma$ can be chosen to be a differentiable embedding. (Think of $\sigma$ as an oriented simplex in $M$ that contains $x$.) Now, the domain $\Delta^n$ of $\sigma$ is the standard $n$-simplex, which has a canonical orientation as a subspace of $\mathbb{R}^n$. Since $\sigma$ is differentiable, we can push this orientation forward via the derivative of $\sigma$ onto the image of $\sigma$ in $M$. This gives a pointwise orientation on a neighborhood of $x$.
Why are horizontal transformations of functions reversed? While studying graph transformations I came across horizontal and vertical scaling and translations of functions. I understand the ideas below. * *$f(x+a)$ - grouped with $x$, horizontal translation, inverse: the $x$-coordinate shifts left (right for $-a$) *$f(ax)$ - grouped with $x$, horizontal scaling, inverse: the $x$-coordinate is multiplied by $1/a$ *$f(x)+a$ - not grouped with $x$, vertical translation: the $y$-coordinate shifts up (down for $-a$) *$af(x)$ - not grouped with $x$, vertical scaling: the $y$-coordinate is multiplied by $a$ I have mostly memorized this part but I am unable to figure out why the horizontal transformations are reversed/inverse? Thanks for your help.
For the horizontal shift: the logical reason is that for the parent function $f(x)=x$ the graph passes through the origin $(0,0)$, while for the shifted function $f(x-2)=x-2$ it passes through $(2,0)$. To make the shifted function zero we must plug in $2$, because the parent function is zero when we plug in $0$, but the shifted function is only zero when we plug in $2$. The input has to move in the direction opposite to the sign inside the parentheses.
Proving an integer $3n+2$ is odd if and only if the integer $9n+5$ is even How can I prove that the integer $3n+2$ is odd if and only if the integer $9n+5$ is even, where n is an integer? I suppose I could set $9n+5 = 2k$, to prove it's even, and then do it again as $9n+5=2k+1$ Would this work?
HINT $\rm\ \ 3\ (3\:n+2)\ -\ (9\:n+5)\:\ =\:\ 1$ Alternatively note that their sum $\rm\:12\:n + 7\:$ is odd, so they have opposite parity.
Zero divisors of ${\Bbb Z}_n = $ integers $\!\bmod n$ Consider the following proposition: A nonzero element $m\in{\bf Z}_n$ is a zero divisor if and only if $m$ and $n$ are not relatively prime. I don't know if this is a classical textbook result. (I didn't find it in Gallian's book). For the "only if" part, one may like to use the Euclid's lemma. But I cannot see how can one prove the "if" part: If $m_1>0$, $(m_1,n)=d>1$, and $n|m_1m_2$, then $n\nmid m_2$. Edit: The "if" part, should be: If $m_1>0$ and $(m_1,n)=d>1$, then there exists $m_2$ such that $n|m_1m_2$, and $n\nmid m_2$. Does one need any other techniques other than "divisibility"? Questions: * *How to prove the proposition above? *How many different proofs can one have?
Hint $\rm\,\ d\mid n,m\,\Rightarrow\,\ mod\ n\!:$ $\rm\displaystyle\:\ 0\equiv n\:\frac{m}d\ =\ \frac{n}d\: m\ $ and $\rm\, \dfrac{n}d\not\equiv 0\,$ if $\rm\,d>1$
Hardy Ramanujan Asymptotic Formula for the Partition Number I am needing to use the asymptotic formula for the partition number, $p(n)$ (see here for details about partitions). The asymptotic formula always seems to be written as, $ p(n) \sim \frac{1}{4n\sqrt{3}}e^{\pi \sqrt{\frac{2n}{3}}}, $ however I need to know the order of the omitted terms, (i.e. I need whatever the little-o of this expression is). Does anybody know what this is, and a reference for it? I haven't been able to find it online, and don't have access to a copy of Andrews 'Theory of Integer Partitions'. Thank you.
The original paper addresses this issue on p. 83: $$ p(n)=\frac{1}{2\pi\sqrt2}\frac{d}{dn}\left(\frac{e^{C\lambda_n}}{\lambda_n}\right) + \frac{(-1)^n}{2\pi}\frac{d}{dn}\left(\frac{e^{C\lambda_n/2}}{\lambda_n}\right) + O\left(e^{(C/3+\varepsilon)\sqrt n}\right) $$ with $$ C=\frac{2\pi}{\sqrt6},\qquad \lambda_n=\sqrt{n-1/24},\qquad \varepsilon>0. $$ If I compute correctly, this gives $$ e^{\pi\sqrt{\frac{2n}{3}}} \left( \frac{1}{4n\sqrt3} -\frac{72+\pi^2}{288\pi n\sqrt{2n}} +\frac{432+\pi^2}{27648n^2\sqrt3} +O\left(\frac{1}{n^2\sqrt n}\right) \right) $$
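If you just want a feel for how large the omitted terms are, you can compare the leading term with exact values of $p(n)$ (a sketch; the exact values come from the standard dynamic-programming recurrence, and plain Python integers are enough here):

```python
from math import exp, pi, sqrt

def partition_numbers(n_max):
    """Exact p(0..n_max) via the classic coin-style DP over parts 1..n_max."""
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for total in range(part, n_max + 1):
            p[total] += p[total - part]
    return p

def hardy_ramanujan_leading(n):
    return exp(pi * sqrt(2 * n / 3)) / (4 * n * sqrt(3))

exact = partition_numbers(1000)
for n in (10, 100, 500, 1000):
    approx = hardy_ramanujan_leading(n)
    print(n, exact[n], approx, approx / exact[n])
# The ratio tends to 1, but only slowly: the relative error decays like 1/sqrt(n),
# matching the size of the next term in the expansion above.
```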
Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$ I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals: $$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$ I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?
Notice that $(k+1)^3 - k^3 = 3k^2 + 3k + 1$ and hence $$(n+1)^3 = \sum_{k=0}^n \left[ (k+1)^3 - k^3\right] = 3\sum_{k=0}^n k^2 + 3\sum_{k=0}^n k + \sum_{k=0}^n 1$$ which gives you $$\begin{align} \sum_{k=1}^n k^2 & = \frac{1}{3}(n+1)^3 - \frac{1}{2}n(n+1) - \frac{1}{3}(n+1) \\ & = \frac{1}{6}(n+1) \left[ 2(n+1)^2 - 3n - 2\right] \\ & = \frac{1}{6}(n+1)(2n^2 +n) \\ & = \frac{1}{6}n(n+1)(2n+1) \end{align}$$
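The identity is also easy to spot-check numerically; a short sketch:

```python
# Quick check of sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6 for a range of n.
for n in range(1, 50):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
print("formula verified for n = 1..49")
```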
Is it possible for a function $f : \mathbb{R} \to \mathbb{R}$ to have a maximum at every point in a countable dense subset of its domain? Is it possible for a function $f : \mathbb{R} \to \mathbb{R}$ to have a maximum at every point in a countable dense subset of its domain? The motivation for this question is I have a sequence of functions $\{f_n\}$ where the number of maxima increases with $n$ and I am interested to know what happens to the sequence of functions. PS: every function of the sequence has a finite number of maxima. EDIT: $f$ should not be a constant function.
Sample paths of Brownian motion have this property (with probability $1$), see here.
Reference book on measure theory I post this question with some personal specifications. I hope it does not overlap with old posted questions. Recently I strongly feel that I have to review the knowledge of measure theory for the sake of starting my thesis. I am not totally new to measure theory, since I have taken and passed one course at the graduate level. Unfortunately, because the lecturer was not so good at teaching, I followed the course by self-study. Now I feel that all the knowledge has gone after the exam and I still don’t have a clear overview of the structure of measure theory. And here come my specified requirements for a reference book. * *I wish the book elaborates the proofs, since I will read it on my own again, sadly. And this is the most important criterion for the book. *I wish the book covers most of the topics in measure theory. Although the topic of my thesis is on stochastic integration, I do want to review measure theory at a more general level, which means it could emphasize both aspects of analysis and probability. If such a condition cannot be achieved, I'd like it to focus more on probability. *I wish the book could deal with convergences and uniform integrability carefully, as Chung’s probability book does. My expectation is that after thorough reading, I could have a strong background to start a thesis on stochastic integration at an analytic level. Sorry for such a tedious question. P.S: the textbook I used is Schilling’s book: Measures, Integrals and Martingales. It is a pretty good textbook, but misprints really ruin the fun of reading.
Donald L. Cohn-"Measure theory". Everything is detailed.
Many convergent sequences imply the initial sequence zero? In connection to this question, I found a similar problem in another Miklos Schweitzer contest: Problem 8./2007 For $A=\{a_i\}_{i=0}^\infty$ a sequence of real numbers, denote by $SA=\{a_0,a_0+a_1,a_0+a_1+a_2,...\}$ the sequence of partial sums of the series $a_0+a_1+a_2+...$. Does there exist a non-identically zero sequence $A$ such that all the sequences $A,SA,SSA,SSSA,...$ are convergent? If $SA$ is convergent then $A \to 0$. $SSA$ convergent implies $SA \to 0$. We have * *$SSA=\{a_0,2a_0+a_1,3a_0+2a_1+a_2,4a_0+3a_1+2a_2+a_3...\}$ *$SSSA=\{a_0,3a_0+a_1,6a_0+3a_1+a_2,10a_0+6a_1+3a_2+a_3...\}$. I suppose when the number of iteration grows, the coefficients of the sequence grow very large, and I suppose somehow we can get a contradiction if the initial sequence is non-identically zero.
I would suggest you try using the alternating harmonic series. It is conditionally convergent, so you can try rearrangements that might come out convergent to zero.
Finding the real roots of a polynomial Recent posts on polynomials have got me thinking. I want to find the real roots of a polynomial with real coefficients in one real variable $x$. I know I can use a Sturm Sequence to find the number of roots between two chosen limits $a < x < b$. Given that $p(x) = \sum_{r=0}^n a_rx^r$ with $a_n = 1$ what are the tightest values for $a$ and $b$ which are simply expressed in terms of the coefficients $a_r$ and which make sure I capture all the real roots? I can quite easily get some loose bounds and crank up the computer to do the rest, and if I approximate solutions by some algorithm I can get tighter. But I want to be greedy and get max value for min work.
I actually had to do this for school about a month ago, and the method I came up with was as follows: * *Note that all zeros of a polynomial are between a local minimum and a local maximum (including the limits at infinity). However, not all adjacent pairs of a min and a max have a zero in between, but that is irrelevant. *Therefore, one can find the mins and maxes and converge on the root in between by using the bisection method (if they're on opposite sides of the x-axis). *Finding the mins and maxes is accomplished by taking the derivative and finding its zeros. *Considering that this is a procedure for finding zeros, step 3 can be done recursively. *The base case for recursion is a line. Here, $y=ax+b$ and the zero is $-\frac{b}{a}$. This is a very easy and quick way to find all real zeros (to theoretically arbitrary precision). :D
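Here is roughly what that recursive scheme can look like in code (a rough sketch under simplifying assumptions: real coefficients, a crude Cauchy-type bound for bracketing the outermost roots, and no special handling of roots where the curve only touches the axis):

```python
def evaluate(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] (highest power first)."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def derivative(coeffs):
    n = len(coeffs) - 1                          # degree of the polynomial
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def bisect(coeffs, lo, hi, iters=200):
    """Converge on the root in [lo, hi]; assumes a sign change between lo and hi."""
    flo = evaluate(coeffs, lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = evaluate(coeffs, mid)
        if (flo <= 0) == (fmid <= 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def real_roots(coeffs):
    """All real roots, found recursively from the roots of the derivative."""
    coeffs = [c / coeffs[0] for c in coeffs]     # make the polynomial monic
    n = len(coeffs) - 1
    if n == 1:                                   # base case: x + a0 has root -a0
        return [-coeffs[1]]
    bound = 1.0 + max(abs(c) for c in coeffs[1:])        # Cauchy bound on |roots|
    crit = [x for x in real_roots(derivative(coeffs)) if -bound < x < bound]
    points = sorted([-bound] + crit + [bound])   # partition [-bound, bound] by the critical points
    roots = []
    for lo, hi in zip(points, points[1:]):
        flo, fhi = evaluate(coeffs, lo), evaluate(coeffs, hi)
        if flo == 0.0:
            roots.append(lo)
        elif (flo < 0) != (fhi < 0):             # sign change: exactly one root in between
            roots.append(bisect(coeffs, lo, hi))
    return roots

print(sorted(real_roots([1, 0, -7, 6])))   # x^3 - 7x + 6 = (x-1)(x-2)(x+3)  ->  [-3, 1, 2]
```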
Question about proof for $S_4 \cong V_4 \rtimes S_3$ In my book they give the following proof for $S_4 \cong V_4 \rtimes S_3$ : Let $j: S_3 \rightarrow S_4: p \mapsto \left( \begin{array}{cccc} 1 & 2 & 3 & 4 \\ p(1) & p(2) & p(3) & 4 \end{array} \right)$ Clearly, $j(S_3)$ is a subgroup of $S_4$ isomorphic to $S_3$, hence $j$ is injective. We identify $S_3$ with $j(S_3)$. Also $V_4 \triangleleft S_4$ and clearly $V_4 \cap S_3 = \{I\}$. We now only have to show that $S_4 = V_4S_3$. Since $V_4\cap S_3 = \{I\}$, we know that $\#(V_4S_3) = \#V_4 \#S_3 = 4 \cdot 6 = 24 = \#S_4$, thus $S_4 = V_4S_3$, which implies that $S_4 \cong V_4 \rtimes S_3$. However, I am wondering what the function $j$ is actually used for in the proof? (I do not see the connection.)
It is only used to identify the subgroup S3 of S4, and is only needed as a technicality. If you view S3 as bijections from {1,2,3} to {1,2,3} and S4 as bijections from {1,2,3,4} to {1,2,3,4}, and you view functions as having domains and ranges (not just rules), then no element of S3 is an element of S4. The function j allows you to view elements of S3 as bijections of {1,2,3,4} that happen to leave 4 alone. Then the elements of S3 (really j(S3)) are elements of S4, and so you can talk about it being a subgroup. The statement of the theorem appears to mention external semi-direct products, but the proof uses internal semi-direct products. To use an internal semi-direct product, you need subgroups.
Paths with DFA? My teacher made an example to explain DFA, it was about paths (URL paths), the rules were as follows:

```
S ::= /
S ::= /O
O ::= [a-z]
O ::= [a-z]R
O ::= [a-z]S
R ::= [a-z]
R ::= [a-z]R
R ::= [a-z]S
```

Examples of paths could be: /foo, /foo/, foo/bar and so on. However, I don't understand why you would need the R rules since they are equal to the O rules. Can I write it without the R? If not, why?
You don't need them, in fact. The grammar you wrote is equivalent to the one obtained by deleting the R rules and substituting the second O rule by O ::= [a-z]O ... No idea why your teacher wrote it that way, sorry.
Why are the periods of these permutations often 1560? I ran across a math puzzle that went like this: Consider the list $1,9,9,3, \cdots$ where the next entry is equal to the sum mod 10 of the prior 4. So the list begins $1,9,9,3,2,3,7,\cdots$. Will the sequence $7,3,6,7$ ever occur? (Feel free to pause here and solve this problem for your own amusement if you desire. Spoiler below.) So the answer is "yes", and we can solve this by noticing that the function to derive the next digit is invertible so we can derive digits going to the left as well. Going left, we find $7,3,6,7$ pretty quickly. I wrote a program and found that the period (equivalently the length of the permutation's cycle) is 1560. But surprisingly (to me) altering the starting sequence from 1,9,9,3 to most any other sequence left the period at 1560. There are a few cases where it changes; for example, starting with 4,4,4,4 we get a period of length only 312. So, my question: what's special about 1560 here? Note: This feels a lot like LFSRs, but I don't know much about them.
Your recurrence is linear: you can add two sequences satisfying it term by term and the result still satisfies it. The period of (0,0,0,1) is 1560, so all periods will be a divisor of that. To get 1560 you just have to avoid the shorter cycles.
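Both numbers (the 1560 for generic starting tuples, and the shorter 312 cycle mentioned in the question) are easy to confirm by brute force; a small sketch:

```python
def period(start):
    """Length of the cycle containing the 4-digit state `start` under
    x_{n+1} = (x_n + x_{n-1} + x_{n-2} + x_{n-3}) mod 10."""
    state = tuple(start)
    steps = 0
    while True:
        state = state[1:] + (sum(state) % 10,)
        steps += 1
        if state == tuple(start):
            return steps

print(period((1, 9, 9, 3)))  # 1560
print(period((0, 0, 0, 1)))  # 1560
print(period((4, 4, 4, 4)))  # 312, the exceptional shorter cycle
```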
Independence of sums of Gaussian random variables Say, I have independent Gaussian random variables $t_1, t_2, t_3, t_4, t_5$ and I have two new random variables $S = t_1 + t_2 - t_3$ and $K = t_3 + t_4$. Are $S$ and $K$ independent, or is there any theorem about independence of random variables formed by sums of independent Gaussians?
In fact, the distribution of the $t_i$ plays no significant role here, and, moreover, existence of the covariance is not necessary. Let $S=X-Y$ and $K=Y+Z$, where $X$, $Y$, and $Z$ are independent random variables generalizing the role of $t_1+t_2$, $t_3$, and $t_4$, respectively. Note that, by independence of $X$, $Y$, and $Z$, for any $u_1,u_2 \in \mathbb{R}$ it holds that $$ {\rm E}[e^{iu_1 S + iu_2 K} ] = {\rm E}[e^{iu_1 X + iu_1 ( - Y) + iu_2 Y + iu_2 Z} ] = {\rm E}[e^{iu_1 X} ]{\rm E}[e^{iu_1 ( - Y) + iu_2 Y} ]{\rm E}[e^{iu_2 Z} ] $$ and $$ {\rm E}[e^{iu_1 S} ] {\rm E}[e^{iu_2 K} ] = {\rm E}[e^{iu_1 X} ]{\rm E}[e^{iu_1 (-Y)} ]{\rm E}[e^{iu_2 Y} ]{\rm E}[e^{iu_2 Z} ]. $$ The following basic theorem then shows that $S$ and $K$ are generally not independent. Theorem. Random variables $\xi_1$ and $\xi_2$ are independent if and only if $$ {\rm E}[e^{iu_1 \xi _1 + iu_2 \xi _2 } ] = {\rm E}[e^{iu_1 \xi _1 } ]{\rm E}[e^{iu_2 \xi _2 } ] $$ for all $u_1,u_2 \in \mathbb{R}$. (In particular, note that if $-Y$ and $Y$ are not independent, then there exist $u_1,u_2 \in \mathbb{R}$ such that ${\rm E}[e^{iu_1 ( - Y) + iu_2 Y} ] \ne {\rm E}[e^{iu_1 ( - Y)} ]{\rm E}[e^{iu_2 Y} ]$.)
Need a hint: prove that $[0, 1]$ and $(0, 1)$ are not homeomorphic I need a hint: prove that $[0, 1]$ and $(0, 1)$ are not homeomorphic without referring to compactness. This is an exercise in a topology textbook, and it comes far earlier than compactness is discussed. So far my only idea is to show that a homeomorphism would be monotonic, so it would define a poset isomorphism. But there can be no such isomorphism, because there are a minimal and a maximal element in $[0, 1]$, but neither in $(0, 1)$. However, this doesn't seem like the elementary proof the book must be asking for.
There is no continuous and bijective function $f:(0,1) \rightarrow [0,1]$. In fact, if $f:(0,1) \rightarrow [0,1]$ is continuous and surjective, then $f$ is not injective, as proved in my answer in Continuous bijection from $(0,1)$ to $[0,1]$. This is a consequence of the intermediate value theorem, which is a theorem about connectedness. Are you allowed to use that?
Raising a square matrix to the k'th power: From real through complex to real again - how does the last step work? I am reading Applied linear algebra: the decoupling principle by Lorenzo Adlai Sadun (btw very recommendable!) On page 69 it gives an example where a real, square matrix $A=[(a,-b),(b,a)]$ is raised to the k'th power: $$A^k.(1,0)^T$$ The result must be a real vector. Nevertheless it seems easier to do the calculation via the complex numbers:$$=((a+bi)^k+(a-bi)^k).(1,0)^T/2-i((a+bi)^k-(a-bi)^k).(0,1)^T/2$$ At this stage the result seems to be complex. But then comes the magic step and everything gets real again:$$=Re[(a+bi)^k].(1,0)^T+Im[(a+bi)^k].(0,1)^T$$ Now I did some experiments and made two observations: First, this step seems to yield the correct results - yet I don't know why. Second, the raising of this matrix to the k'th power even confuses CAS (e.g. WolframAlpha aka Mathematica, see e.g. the plots here) because they most of the time seem to think that the results are complex. My question Could you please give me a proof/explanation for the correctness of the last step. Perhaps you will even know why CAS are confused too (perhaps it is because their algorithms also go through the complex numbers and they too have difficulties in seeing that the end result will be real?)
What you are using is that for a given complex number $z=a+bi$, we have $\frac{z+\overline{z}}{2}=a={\rm Re}(z)$ and $\frac{z-\overline{z}}{2}=ib=i{\rm Im}(z)$ (where $\overline{z}=a-bi$). Also check that $\overline{z^k}=\overline{z}^k$ for all $k \in \mathbb{N}$.
How to factor quadratic $ax^2+bx+c$? How do I shorten this? How do I have to think? $$ x^2 + x - 2$$ The answer is $$(x+2)(x-1)$$ I don't know how to get to the answer systematically. Could someone explain? Does anyone have a link to a site that teaches basic stuff like this? My book does not explain anything and I have no teacher; this is self-study. Please help me out; thanks!
Given $$A: x^2 + x - 2$$ you're trying to do the 'magic' in your head in order to get backwards to $$B: (x+2)(x-1)$$ What is it that you are trying to do backwards? It's the original multiplication of $(x+2)(x-1)$. Note that * *the -2 in $A$ comes from multiplying the +2 and -1 in $B$ *the +1 (it's kind of invisible; it's the coefficient of $x$) in $A$ comes from: * *$x$ in the first part times -1 in the second, plus *+2 in the first part times $x$ in the second or $(-1)+2 = +1$. So that's how the multiplication works going forward. Now you have to think of that to go backwards. In $x^2 + x - 2$: * *where does the -2 come from? From two things that multiply to get -2. What could those possibly be? Usually we assume integers so the only possibilities are the two pairs 2, -1, and -2, 1. *of those two pairs, they have to add up to the coefficient of $x$, which is just plain positive 1. So the answer has to be the pair 2 and -1. Another example might help: given $$x^2-5x+6$$ what does this factor to? (that is, find $(x-a)(x-b)$ which equals $x^2 -5x + 6$). So the steps are: * *what are the factors of 6? (you should get 2 pairs, all negative.) *for those pairs, which pair adds up to -5? The main difficulty is keeping track in your head of what is multiplying, what is adding, and what is positive and negative. The pattern for any sort of problem solving skill like this that seems like magic (but really is not) is to: * *Do more examples to get a speedier feel for it. *Check your work. Since you're going backwards, once you get a possible answer, you can do the non-magic (multiplying) to see if you can get the original item in the question.
Infinite shortest paths in graphs From Wikipedia: "If there is no path connecting two vertices, i.e., if they belong to different connected components, then conventionally the distance is defined as infinite." This seems to negate the possibility that there are graphs with vertices connected by an infinite shortest path (as opposed to being not connected). Why is it that for every (even infinite) path between two vertices there is a finite one? Note that infinite paths between vertices do exist - e.g. in the infinite complete graph -, but they are not the shortest.
To expand on my comment: It's clear that if an infinite path is defined as a map from $\mathbb N$ to the edge set such that consecutive edges share a vertex, then any vertices connected by such an infinite path are in fact connected by a finite section of the path. To make sense of the question nevertheless, one might ask whether it is possible to use a different ordinal than $\omega$, say, $\omega\cdot2$, to define an infinite path. But that doesn't make sense either, since there's no way (at least I don't see one) to make the two parts of such a path have anything to do with each other -- at each limit ordinal, the path can start wherever it wants, since there's no predecessor for applying the condition that consecutive edges share a vertex. Note that the situation is different in infinite trees, which can perfectly well contain infinite paths connecting the root to a node. This is because the definition of a path in an infinite tree is different; it explicitly attaches the nodes on levels corresponding to limit ordinals to entire sequences of nodes, not to individual nodes; such a concept doesn't exist in graphs.
How can I solve this infinite sum? I calculated (with the help of Maple) that the following infinite sum is equal to the fraction on the right side. $$ \sum_{i=1}^\infty \frac{i}{\vartheta^{i}}=\frac{\vartheta}{(\vartheta-1)^2} $$ However I don't understand how to derive it correctly. I've tried numerous approaches but none of them have worked out so far. Could someone please give me a hint on how to evaluate the infinite sum above and understand the derivation? Thanks. :)
Several good methods have been suggested. Here's one more. $$\eqalign{\sum{i\over\theta^i}&={1\over\theta}+{2\over\theta^2}+{3\over\theta^3}+{4\over\theta^4}+\cdots\cr&={1\over\theta}+{1\over\theta^2}+{1\over\theta^3}+{1\over\theta^4}+\cdots\cr&\qquad+{1\over\theta^2}+{1\over\theta^3}+{1\over\theta^4}+\cdots\cr&\qquad\qquad+{1\over\theta^3}+{1\over\theta^4}+\cdots\cr&\qquad\qquad\qquad+{1\over\theta^4}+\cdots\cr&={1/\theta\over1-(1/\theta)}+{1/\theta^2\over1-(1/\theta)}+{1/\theta^3\over1-(1/\theta)}+{1/\theta^4\over1-(1/\theta)}+\cdots\cr}$$ which is a geometric series which you can sum to get the answer.
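If you want to double-check the closed form numerically before trusting the rearrangement, a short sketch (any $\vartheta>1$ works):

```python
theta = 2.5
partial = sum(i / theta**i for i in range(1, 200))   # the tail is negligible by i = 200
closed_form = theta / (theta - 1) ** 2
print(partial, closed_form)                           # both approximately 1.1111
```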
Slick way to define p.c. $f$ so that $f(e) \in W_{e}$ Is there a slick way to define a partial computable function $f$ so that $f(e) \in W_{e}$ whenever $W_{e} \neq \emptyset$? (Here $W_{e}$ denotes the $e^{\text{th}}$ c.e. set.) My only solution is to start by defining $g(e) = \mu s [W_{e,s} \neq \emptyset]$, where $W_{e,s}$ denotes the $s^{\text{th}}$ finite approximation to $W_{e}$, and then set $$ f(e) = \begin{cases} \mu y [y \in W_{e, g(e)}] &\text{if } W_{e} \neq \emptyset \\ \uparrow &\text{otherwise}, \end{cases} $$ but this is ugly (and hence not slick).
Perhaps the reason your solution seems ugly to you is that you appear to be excessively concerned with the formalism of representing your computable function in terms of the $\mu$ operator. The essence of computability, however, does not lie with this formalism, but rather with the idea of a computable procedure. It is much easier and more enlightening to see that a function is computable simply by describing an algorithm that computes it, and such arguments are pervasive in computability theory. (One can view them philosophically as instances of the Church-Turing thesis.) The set $W_e$ consists of the numbers that are eventually accepted by program $e$. These are the computably enumerable sets, in the sense that there is a uniform computable procedure to enumerate their elements. We may now define the desired function $f$ by the following computable procedure: on input $e$, start enumerating $W_e$. When the first element appears, call it $f(e)$. It is now clear both that $f$ is computable and that $f(e)\in W_e$ whenever $W_e$ is not empty, as desired.
Show that $f \in \Theta(g)$, where $f(n) = n$ and $g(n) = n + 1/n$ I am a total beginner with the big theta notation. I need to find a way to show that $f \in \Theta(g)$, where $f(n) = n$, $g(n) = n + 1/n$, and that $f, g : Z^+ \rightarrow R$. What confuses me with this problem is that I thought that "$g$" is always supposed to be "simpler" than "$f$." But I think I missed something here.
You are sort of right about thinking that "$g$" is supposed to be simpler than "$f$", but not technically right. The formal definition says nothing about simpler. However, in practice one is essentially always comparing something somewhat messy, on the left, with something whose behaviour is sort of clear(er) to the eye, on the right. For the actual verifications in this exercise, it would have made no difference if the functions had been interchanged, so probably the "colloquially standard" version should have been used. But maybe not, once or twice. Now you know a little more about the symmetry of the notion.
Bounding ${(2d-1)n-1\choose n-1}$ Claim: ${3n-1\choose n-1}\le 6.25^n$. * *Why? *Can the proof be extended to obtain a bound on ${(2d-1)n-1\choose n-1}$, with the bound being $f(d)^n$ for some function $f$? (These numbers describe the number of some $d$-dimensional combinatorial objects; claim 1 is the case $d=2$, and is not my claim).
First, let's bound things as easily as possible. Consider the inequality $$\binom{n}{k}=\frac{n(n-1)\cdots(n-k+1)}{k!}\leq\frac{n^{k}}{k!}\leq e^{k}\left(\frac{n}{k}\right)^{k}.$$ The $n^k$ comes from the fact that $n$ is bigger than each factor of the product in the numerator. Also, we know that $k!e^k>k^k$ by looking at the $k^{th}$ term in the Taylor series, as $e^k=1+k+\cdots +\frac{k^k}{k!}+\cdots $. Now, let's look at the similar $3n$ and $n$ instead of $3n-1$ and $n-1$. Then we see that $$\binom{3n}{n}\leq e^{n}\left(3\right)^{n}\leq\left(8.16\right)^{n}$$and then for any $k$ we would have $$\binom{kn}{n}\leq\left(ke\right)^{n}.$$ We could use Stirling's formula, and improve this more. What is the most that this can be improved? Apparently, according to Wolfram the best possible is $$\binom{(k+1)n}{n}\leq \left(\frac{(k+1)^{k+1}}{k^k}\right)^n.$$ (Notice that when $k=2$ this gives $27/4 = 6.75$, which is close to the $6.25$ in the question.) Hope that helps.
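A quick numerical comparison of these bounds against exact binomial coefficients (a sketch using the standard library):

```python
from math import comb, e

for n in (5, 20, 80):
    exact = comb(3 * n, n)
    crude = (3 * e) ** n        # the (ke)^n bound with k = 3
    sharp = (27 / 4) ** n       # the ((k+1)^(k+1)/k^k)^n bound with k = 2
    print(n, exact, f"{crude:.3e}", f"{sharp:.3e}")
# The exact value always sits below both bounds, and the 27/4 bound is much tighter.
```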
Subspace intersecting many other subspaces V is a vector space of dimension 7. There are 5 subspaces of dimension four. I want to find a two dimensional subspace that intersects each of the 5 subspaces nontrivially. Edit: All the 5 given subspaces are chosen randomly (with a very high probability, the intersection of any two of them is a line). If I take any two of the 5 subspaces and find the intersection, it results in a line. Similarly, we can take another two subspaces and find another line. From these two lines we can form a 2 dimensional subspace which intersects 4 of the 5 subspaces. But can someone tell me how we can find a two dimensional subspace which intersects all 5 subspaces? It would be very useful if you can tell me what kind of concepts in mathematics I can look for to solve problems like this. Thanks in advance. Edit: the second paragraph is one way in which I tried the problem. But taking the intersection of the subspaces puts more constraints on the problem and the solution becomes infeasible.
Assuming your vector space is over $\mathbb R$, it looks to me like "generically" there should be a finite number of solutions, but I can't prove that this finite number is positive, nor do I have a counterexample. We can suppose your two-dimensional subspace $S$ has an orthonormal basis $\{ u, v \}$ where $u \cdot e_1 = 0$ (where $e_1$ is a fixed nonzero vector). There are 10 degrees of freedom for choosing $u$ and $v$. The five subspaces are the kernels of five linear operators $F_j$ of rank 3; for $S$ to have nonzero intersection with ${\rm ker} F_j$ you need scalars $a_j$ and $b_j$ with $a_j^2 + b_j^2 = 1$ and $F_j (a_j u + b_j v) = 0$. This gives 5 more degrees of freedom for choosing points $(a_j, b_j)$ on the unit circle, minus 15 for the equations $F_j (a_j u + b_j v) = 0$, for a net of 0 degrees of freedom, and thus a discrete set of solutions (finite because the equations are polynomial). For actually finding solutions in particular cases, I found Maple's numerical solver fsolve worked pretty well - the system seems too complicated for the symbolic solvers.
Prove that any shape of area less than 1 unit can be placed on a tiled surface Given a surface of equal square tiles where each tile side is 1 unit long, prove that a single region $A$, of any shape but with area just less than 1 square unit, can be placed on the surface without touching a vertex of any tile. The shape $A$ may have holes.
Project $A$ onto a single square by "stacking" all of the squares in the plane. Then translating $A$ on this square corresponds to moving $A$ on a torus with surface area one. As the area of $A$ is less than one, there must be some point which it does not cover. Then position $A$ so that this uncovered point sits at the four corners of the square (which are all the same point on the torus), and unravel the torus.
Constructing self-complementary graphs How does one go about systematically constructing a self-complementary graph, on say 8 vertices? [Added: Maybe everyone else knows this already, but I had to look up my guess to be sure it was correct: a self-complementary graph is a simple graph which is isomorphic to its complement. --PLC]
Here's a nice little algorithm for constructing a self-complementary graph from a self-complementary graph $H$ with $4k$ or $4k+1$ vertices, $k = 1, 2, ...$ (e.g., from a self-complementary graph with $4$ vertices, one can construct a self-complementary graph with $8$ vertices; from $5$ vertices, construct one with $9$ vertices). See this PDF on constructing self-complementary graphs.
Fractions with radicals in the denominator I'm working my way through the videos on the Khan Academy, and have a hit a road block. I can't understand why the following is true: $$\frac{6}{\quad\frac{6\sqrt{85}}{85}\quad} = \sqrt{85}$$
No one seems to have posted the really simple way to do this yet: $$ \frac{85}{\sqrt{85}} = \frac{\sqrt{85}\sqrt{85}}{\sqrt{85}} $$ and then cancel the common factor.
Examples of results failing in higher dimensions A number of economists do not appreciate rigor in their usage of mathematics and I find it very discouraging. One of the examples of rigor-lacking approach are proofs done via graphs or pictures without formalizing the reasoning. I would like thus to come up with a few examples of theorems (or other important results) which may be true in low dimensions (and are pretty intuitive graphically) but fail in higher dimensions. By the way, these examples are directed towards people who do not have a strong mathematical background (some linear algebra and calculus), so avoiding technical statements would be appreciated. Jordan-Schoenflies theorem could be such an example (though most economists are unfamiliar with the notion of a homeomorphism). Could you point me to any others? Thanks.
Maybe this one: every polygon has a triangulation, but not all polyhedra can be tetrahedralized (the Schönhardt polyhedron).
Trouble with absolute value in limit proof As usual, I'm having trouble, not with the calculus, but the algebra. I'm using Calculus, 9th ed. by Larson and Edwards, which is somewhat known for racing through examples with little explanation of the algebra for those of us who are rusty. I'm trying to prove $$\lim_{x \to 1}(x^2+1)=2$$ but I get stuck when I get to $|f(x)-L| = |(x^2+1)-2| = |x^2-1| = |x+1||x-1|$. The solution I found says "We have, in the interval (0,2), |x+1|<3, so we choose $\delta=\frac{\epsilon}{3}$." I'm not sure where the interval (0,2) comes from. Incidentally, can anyone recommend any good supplemental material to go along with this book?
Because of the freedom in the choice of $\delta$, you can always assume $\delta < 1$, which implies you can assume $x$ belongs to the interval $(0, 2)$. Edit: $L$ is the limit of $f(x)$ for $x$ approaching $x_0$ iff for every $\epsilon > 0$ there exists a $\delta_\epsilon > 0$ such that: $$\left\vert f(x) - L\right\vert < \epsilon$$ for each $x$ in the domain of $f$ satisfying $\left\vert x - x_0\right\vert < \delta_\epsilon$. Now if $\delta_\epsilon$ verifies the above condition, the same happens for each $\delta_\epsilon'$ such that $0 < \delta_\epsilon' < \delta_\epsilon$, therefore we can choose $\delta_\epsilon$ arbitrarily small, in particular less than 1.
Find control point on piecewise quadratic Bézier curve I need to write an OpenGL program to generate and display a piecewise quadratic Bézier curve that interpolates each set of data points: $$(0.1, 0), (0, 0), (0, 5), (0.25, 5), (0.25, 0), (5, 0), (5, 5), (10, 5), (10, 0), (9.5, 0)$$ The curve should have continuous tangent directions, the tangent direction at each data point being a convex combination of the two adjacent chord directions. I am not good at math, can anyone give me some suggestions about what formula I can use to calculate control point for Bézier curve if I have a starting point and an ending point. Thanks in advance
You can see that it will be difficult to solve this satisfactorily by considering the case where the points to be interpolated are at the extrema of a sinusoidal curve. Any reasonable solution should have horizontal tangents at the points, but this is not possible with quadratic curves. Peter has described how to achieve continuity of the tangents with many arbitrary choices. You can reduce those choices to a single choice by requiring continuity in the derivatives, not just their directions (which determine the tangents). This looks nice formally, but it can lead to rather wild curves, since a single choice of control point at one end then determines all the control points (since you now have to take equal steps on both sides of the points in Peter's method), and these may end up quite far away from the original points – again, take the case of the extrema of a sinusoidal; this will cause the control points to oscillate more and more as you propagate them. What I would try in order to get around these problems, if you really have to use quadratic Bézier curves, is to use some good interpolation method, e.g. cubic splines, and calculate intermediate points between the given points, along with tangent directions at the given points and the intermediate points. Then you can draw quadratic Bézier curves through all the points, given and intermediate, and determine control points by intersecting the tangents. This wouldn't work without the intermediate points, because the tangents might not intersect at reasonable points – again, think of the extrema of a sinusoidal, where the desired tangents are in fact parallel – but I think it should work with the intermediate points – for instance, in the sinusoidal example, the intermediate points would be at the inflection points of the sinusoidal, and the tangents would intersect at suitable control points.
Qualitative interpretation of Hilbert transform The well-known Kramers-Kronig relations state that for a function satisfying certain conditions, its imaginary part is the Hilbert transform of its real part. This often comes up in physics, where it can be used to relate resonances and absorption. What one usually finds there is the following: where the imaginary part has a peak, the real part goes through zero. Is this a general rule? And are there more general statements possible? For Fourier transforms, for example, I know the statement that a peak with width $\Delta$ in the time domain corresponds to a peak with width $1/\Delta$ (missing some factors of $\pi$, I am sure...) in the frequency domain. Is there some rule of thumb that tells me what the Hilbert transform of a function with finite support (e.g. with a bandwidth $W$) looks like, approximately? Thanks, Lagerbaer
Never heard of the Kramers-Kronig relations, so I looked them up. They relate the real and imaginary parts of an analytic function on the upper half plane that satisfies certain growth conditions. This is a big area in complex analysis and there are many results. For example, in the case of a function with compact support, its Hilbert transform can never have compact support, or even vanish on a set of measure greater than $0$. Many books on analytic functions (especially ones on $H^p$ spaces and bounded analytic functions) cover this topic. Some books on signal processing also cover this, but from a different perspective, and in most cases less rigorously.
Homology of the loop space Let $X$ be a nice space (manifold, CW-complex, what you prefer). I was wondering if there is a computable relation between the homology of $\Omega X$, the loop space of $X$, and the homology of $X$. I know that, almost by definition, the homotopy groups are the same (but shifted a dimension). Because the relation between homotopy groups and homology groups is very difficult, I expect that the homology of $\Omega X$ is very hard to compute in general. References would be great.
Adams and Hilton gave a functorial way to describe the homology ring $H_\ast(\Omega X)$ in terms of the homology $H_\ast(X)$, at least when $X$ is a simply-connected CW complex with one $0$-cell and no $1$-cells. You'll find a more modern discussion of their construction here.
Integral inequality $$ \left( {\int\limits_0^1 {f^2(x)\ \text{d}x} }\right)^{\frac{1} {2}} \ \geqslant \quad \int\limits_0^1 {\left| {f(x)} \right|\ \text{d}x} $$ I can't prove it )=
$$\int_0^1 |f(x)| \, dx = \int_0^1 |1||f(x)| \, dx \leq \sqrt{\int_0^1 1 \, dx} \sqrt{\int_0^1 |f(x)|^2 \, dx} = \sqrt{\int_0^1 |f(x)|^2 \, dx}$$ By Cauchy-Schwarz.
How do I get the square root of a complex number? If I'm given a complex number (say $9 + 4i$), how do I calculate its square root?
Here is a direct algebraic answer. Suppose that $z=c+di$, and we want to find $\sqrt{z}=a+bi$ lying in the right half-plane. So what are $a$ and $b$? Precisely we have $$a=\sqrt{\frac{c+\sqrt{c^{2}+d^{2}}}{2}}$$ and $$b=\frac{d}{|d|}\sqrt{\frac{-c+\sqrt{c^{2}+d^{2}}}{2}}.$$ (The factor of $\frac{d}{|d|}$ is used so that $b$ has the same sign as $d$.) To find this, we can use brute force and the quadratic formula. Squaring, we would need to solve $$a^2-b^2 +2abi=c+di.$$ This gives two equations and two unknowns (separate into real and imaginary parts), which can then be solved by substitutions and the quadratic formula. I hope that helps!
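The formulas are easy to sanity-check on the $9+4i$ from the question (a sketch; Python's `cmath.sqrt` returns the same principal square root):

```python
import cmath

def complex_sqrt(c, d):
    """Square root of c + di with nonnegative real part, d != 0, via the formulas above."""
    r = (c * c + d * d) ** 0.5
    a = ((c + r) / 2) ** 0.5
    b = (d / abs(d)) * ((-c + r) / 2) ** 0.5
    return complex(a, b)

w = complex_sqrt(9, 4)
print(w, w * w)               # w*w comes back to (9+4j)
print(cmath.sqrt(9 + 4j))     # agrees with the built-in principal square root
```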
$\tan(\frac{\pi}{2}) = \infty~$? Evaluate $\displaystyle \int\nolimits^{\pi}_{0} \frac{dx}{5 + 4\cos{x}}$ by using the substitution $t = \tan{\frac{x}{2}}$ For the question above, by changing variables, the integral can be rewritten as $\displaystyle \int \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}$, ignoring the upper and lower limits. However, after changing variables from $dx$ to $dt$, when $x = 0~$, $~t = \tan{0} = 0~$, but when $x = \pi~$, $~t = \tan{\frac{\pi}{2}}~$, so can the integral technically be written as $\displaystyle \int^{\tan{\frac{\pi}{2}}}_{0} \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}~$, and if so, is it also reasonable to write it as $\displaystyle \int^{\infty}_{0} \frac{\frac{2dt}{1+t^2}}{5 + 4\cos{x}}$ EDIT: In response to confusion, my question is: Is it technically correct to write the above integral in the form with an upper limit of $\tan{\frac{\pi}{2}}$ and furthermore, is it reasonable to equate $\tan{\frac{\pi}{2}}$ with $\infty$ and substitute it on the upper limit?
Continuing from my comment, with $t = \tan(x/2)$ you have $$\cos(x) = \cos^2(x/2) - \sin^2(x/2) = {1-t^2\over 1+ t^2}.$$ Restating the integral with the transformation gives $$\int_0^\infty {1\over 5 + 4\left({1-t^2 \over 1 + t^2}\right)}{2\, dt\over 1 + t^2} = 2\int_0^\infty {dt\over 9 + t^2}.$$
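As a numerical cross-check, both forms of the integral agree with the closed-form value $\pi/3$ (a sketch, assuming SciPy is available for the quadrature):

```python
from math import cos, pi, inf
from scipy.integrate import quad

lhs, _ = quad(lambda x: 1.0 / (5 + 4 * cos(x)), 0, pi)   # original integral over [0, pi]
rhs, _ = quad(lambda t: 2.0 / (9 + t * t), 0, inf)       # transformed integral over [0, inf)
print(lhs, rhs, pi / 3)                                  # all three are approximately 1.0472
```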
Involuted vs Idempotent What is the difference between an "involuted" and an "idempotent" matrix? I believe that they both have to do with inverse, perhaps "self inverse" matrices. Or do they happen to refer to the same thing?
A matrix $A$ is an involution if it is its own inverse, i.e. if $$A^2 = I$$ A matrix $B$ is idempotent if it squares to itself, i.e. if $$B^2 = B$$ The only invertible idempotent matrix is the identity matrix, which can be seen by multiplying both sides of the above equation by $B^{-1}$. An idempotent matrix is also known as a projection. Involutions and idempotents are related to one another. If $B$ is idempotent then $I - 2B$ is an involution, and if $A$ is an involution, then $\tfrac{1}{2}(I\pm A)$ is idempotent. Finally, if $B$ is idempotent then $I-B$ is also idempotent, and if $A$ is an involution then $-A$ is also an involution.
Proof that the set of incompressible strings is undecidable I would like to see a proof or a sketch of a proof that the set of incompressible strings is undecidable. Definition: Let $x$ be a string; we say that $x$ is $c$-compressible if $K(x) \leq |x|-c$. If $x$ is not $c$-compressible, we say that $x$ is incompressible by $c$. $K(x)$ represents the Kolmogorov complexity of a binary string $x$. Theorem: incompressible strings of every length exist. Proof: The number of binary strings of length $n$ is $2^{n}$, but there exist $\displaystyle\sum\limits_{i=0}^{n-1} 2^i = 2^{n}-1$ descriptions of length less than $n$. Since each description describes at most one string, there is at least one string of length $n$ that is incompressible. From here I feel it is natural to ask whether or not the set of incompressible strings is decidable; the answer is $\textit{no}$, but I would like to see the justification via a proof or proof sketch. Edit: I would like to add that I am already familiar/comfortable with the proof that Kolmogorov complexity is uncomputable.
Roughly speaking, incompressibility is undecidable because of a version of the Berry paradox. Specifically, if incompressibility were decidable, we could specify "the lexicographically first incompressible string of length 1000" with the description in quotes, which has length less than 1000. For a more precise proof of this, consider the Wikipedia proof that $K$ is not computable. We can modify this proof as follows to show that incompressibility is undecidable. Suppose we have a function IsIncompressible that checks whether a given string is incompressible. Since there is always at least one incompressible string of length $n$, the function GenerateComplexString that the Wikipedia article describes can be modified as follows:

```
function GenerateComplexString(int n)
    for each string s of length exactly n
        if IsIncompressible(s)
            return s
    quit
```

This function uses IsIncompressible to produce roughly the same result that the KolmogorovComplexity function is used for in the Wikipedia article. The argument that this leads to a contradiction now goes through almost verbatim.
A construction in the proof of "any local ring is dominated by a DVR" Let $O$ be a noetherian local domain with maximal ideal $m$. I want to prove: for a suitable choice of generators $x_1,\dots,x_n$ of $m$, the ideal $(x_1)$ in $O'=O[x_2/x_1,\dots,x_n/x_1]$ is not equal to the unit ideal. This statement originates from Ex.4.11, Chapter 2 of Hartshorne.
If one is willing to use the results already proved in Hartshorne in the context of the valuative criterion, that is before exercise 4.11, I see the following approach: there exists a valuation ring $O_v$ of the field $K$ (for the moment I ignore the finite extension $L$ that appears in the exercise) that dominates the local ring $O$. In particular we have $v(x_k)>0$ for any set $x_1,\ldots ,x_n$ of generators of the maximal ideal $m$ of $O$. Suppose $v(x_1)$ is minimal among the values $v(x_k)$. Then $O^\prime\subseteq O_v$ and $q:=M_v\cap O^\prime$, $M_v$ the maximal ideal of $O_v$, is a proper prime ideal of $O^\prime$. By definition $x_1\in q$ and thus $x_1O^\prime\neq O^\prime$. The "suitable choice" is just relabelling the elements $x_k$ if necessary.
Application of Galois theory I have a question regarding roots of equations: find all $a$ such that the cubic polynomial $x^3-bx+a=0$ has three integer roots. How can you solve this using Galois theory, and what do reducible polynomials, splitting fields, and field extensions have to do with it? Please explain each of them in some detail, because this serves as an introduction to Galois theory for me. For example, take $b$ to be 3 and list all $a$ such that the equation has three integer roots.
Suppose the polynomial has three integer roots $r_1, r_2, r_3$. Then $(x - r_1)(x - r_2)(x - r_3) = x^3 - bx + a$, hence $$r_1 + r_2 + r_3 = 0$$ $$r_1 r_2 + r_2 r_3 + r_3 r_1 = -b$$ $$r_1 r_2 r_3 = -a.$$ Squaring the first equation gives $r_1^2 + r_2^2 + r_3^2 = 2b$, which immediately tells you that for fixed $b$ there are only finitely many possibilities for the roots, and from here it's casework for any fixed $b$. For example, for $b = 3$ we get $r_1^2 + r_2^2 + r_3^2 = 6$, which has solutions $(\pm 1, \pm 1, \pm 2)$ up to cyclic permutation, and of these solutions only $(-1, -1, 2)$ and $(1, 1, -2)$ add up to zero. Hence the possible values in this case are $a = \pm 2$.
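For any particular $b$, the casework can also be handed to a tiny brute-force search over the bounded range $|r_i|\le\sqrt{2b}$ (a sketch):

```python
from itertools import product

def integer_root_cubics(b):
    """All a such that x^3 - b x + a has three integer roots (with multiplicity)."""
    bound = int((2 * b) ** 0.5)          # each root satisfies r^2 <= 2b
    results = set()
    for r1, r2, r3 in product(range(-bound, bound + 1), repeat=3):
        if r1 + r2 + r3 == 0 and r1 * r2 + r2 * r3 + r3 * r1 == -b:
            results.add(-r1 * r2 * r3)   # a = -r1*r2*r3
    return sorted(results)

print(integer_root_cubics(3))   # [-2, 2]
```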
Atiyah-Macdonald, Exercise 8.3: Artinian iff finite k-algebra. Atiyah Macdonald, Exercise 8.3. Let $k$ be a field and $A$ a finitely generated $k$-algebra. Prove that the following are equivalent: (1) $A$ is Artinian. (2) $A$ is a finite $k$-algebra. I have a question in the proof of (1$\Rightarrow$2): By using the structure theorem, we may assume that $(A,m)$ is an Artin local ring. Then $A/m$ is a finite algebraic extension of $k$ by Zariski lemma. Since $A$ is Artinian, $m$ is the nilradical of $A$ and thus $m^n=0$ for some $n$. Thus we have a chain $A \supseteq m \supseteq m^2 \supseteq \cdots \supseteq m^n=0$. Since $A$ is Noetherian, $m$ is finitely generated and hence each $m^i/m^{i+1}$ is a finite dimensional $A/m$-vector space, hence a finite dimensional $k$-vector space. But now how can I deduce that $A$ is a finite dimensional $k$-vector space?
The claim also seems to follow from the Noether normalization lemma: Let $B := k[x_1, \dotsc, x_n]$ with $k$ any field and let $I \subseteq B$ be any ideal. Since $A$ is a finitely generated $k$-algebra you may let $A := B/I$. By the Noether normalization lemma it follows that there is a finite set of elements $y_1, \dotsc, y_d \in A$ with $d = \dim(A)$ and the property that the subring $k[y_1, \dotsc, y_d] \subseteq A$ generated by the elements $y_i$ is a polynomial ring. The ring extension $k[y_1, \dotsc, y_d] \subseteq A$ is an integral extension of rings. If $d = 0$, it follows from the same lemma that the ring extension $k \subseteq A$ is integral, and since $A$ is finitely generated as a $k$-algebra by the elements $\overline{x_i}$ and since each element $\overline{x_i}$ is integral over $k$, it follows that $\dim_k(A) < \infty$. Question: “But now how can I deduce that $A$ is a finite dimensional $k$-vector space?” Answer: It seems from the argument above you can use the Noether normalization lemma to give another proof of your implication, different from the proofs given above. Hence now you have two proofs of your result.
Infinite area under a curve has finite volume of revolution? So I was thinking about the harmonic series, and how it diverges, even though every subsequent term tends toward zero. That meant that its integral from 1 to infinity should also diverge, but would the volume of revolution also diverge (for the function $y=1/x$)? I quickly realized that its volume is actually finite, because to find the volume of revolution the function being integrated has to be squared, which would give $1/x^2$, and, as we all know, that converges. So, my question is, are there other functions that share this property? The only family of functions that I know that satisfies this is $1/x$, $2/x$, $3/x$, etc.
$\frac{1}{x^p}$ with $\frac{1}{2} < p \leq 1$ all satisfy these properties. Then, by the limit comparison test, any positive function $f(x)$ with the property that there exists a $\frac{1}{2} < p \leq 1$ so that $$ \lim_{x \to \infty} x^p f(x) = C \in (0, \infty) \,,$$ also has this property... This allows you to create lots and lots of examples; just add to $\frac{\alpha}{x^p}$ any "smaller" function (i.e. $o(\frac{1}{x^p} )$).
Is an integer uniquely determined by its multiplicative order mod every prime Let $x$ and $y$ be nonzero integers and $\mathrm{ord}_p(w)$ be the multiplicative order of $w$ in $ \mathbb{Z} /p \mathbb{Z} $. If $\mathrm{ord}_p(x) = \mathrm{ord}_p(y)$ for all primes (Edit: not dividing $x$ or $y$), does this imply $x=y$?
[This is an answer to the original form of the question. In the meantime the question has been clarified to refer to the multiplicative order; this seems like a much more interesting and potentially difficult question, though I'm pretty sure the answer must be yes.] I may be missing something, but it seems the answer is a straightforward no. All non-identity elements in $\mathbb{Z} /p \mathbb{Z}$ have the same order $p$, which is different from the order $1$ of the identity element; so saying that all the orders are the same amounts to saying that $x$ and $y$ are divisible by the same primes. But different powers of the same prime, e.g. $x=2$ and $y=4$, are divisible by the same primes, and hence have the same orders.
Equality of outcomes in two Poisson events I have a Poisson process with a fixed (large) $\lambda$. If I run the process twice, what is the probability that the two runs have the same outcome? That is, how can I approximate $$f(\lambda)=e^{-2\lambda}\sum_{k=0}^\infty\frac{\lambda^{2k}}{k!^2}$$ for $\lambda\gg1$? If there's a simple expression about $+\infty$ that would be best, but I'm open to whatever can be suggested.
Fourier transforms yield a fully rigorous proof. First recall that, as explained here, for every integer valued random variable $Z$, $$ P(Z=0)=\int_{-1/2}^{1/2}E(\mathrm{e}^{2\mathrm{i}\pi tZ})\mathrm{d}t. $$ Hence, if $X_\lambda$ and $Y_\lambda$ are independent Poisson random variables with parameter $\lambda$, $$ f(\lambda)=P(X_\lambda=Y_\lambda)=\int_{-1/2}^{1/2}E(\mathrm{e}^{2\mathrm{i}\pi tX_\lambda})E(\mathrm{e}^{-2\mathrm{i}\pi tY_\lambda})\mathrm{d}t. $$ For Poisson distributions, one knows that $E(s^{X_\lambda})=\mathrm{e}^{-\lambda(1-s)}$ for every complex number $s$. This yields $$ f(\lambda)=\int_{-1/2}^{1/2}\mathrm{e}^{-2\lambda(1-\cos(2\pi t))}\mathrm{d}t=\int_{-1/2}^{1/2}\mathrm{e}^{-4\lambda\sin^2(\pi t)}\mathrm{d}t. $$ Consider the change of variable $u=2\pi\sqrt{2\lambda}t$. One gets $$ f(\lambda)=\frac1{\sqrt{4\pi\lambda}}\int_\mathbb{R} g_\lambda(u)\mathrm{d}u, $$ with $$ g_\lambda(u)=\frac1{\sqrt{2\pi}}\mathrm{e}^{-4\lambda\sin^2(u/\sqrt{8\lambda})}\,[|u|\le\pi\sqrt{2\lambda}]. $$ When $\lambda\to+\infty$, $g_\lambda(u)\to g(u)$ where $g$ is the standard Gaussian density, defined by $$ g(u)=\frac1{\sqrt{2\pi}}\mathrm{e}^{-u^2/2}. $$ Furthermore, the inequality $$4\lambda\sin^2(u/\sqrt{8\lambda})\ge2u^2/\pi^2, $$ valid for every $|u|\le\pi\sqrt{2\lambda}$, shows that the functions $g_\lambda$ are uniformly dominated by an integrable function. Lebesgue dominated convergence theorem and the fact that $g$ is a probability density yield finally $$ \int_\mathbb{R} g_\lambda(u)\mathrm{d}u\to1,\qquad\text{hence}\ \sqrt{4\pi\lambda}f(\lambda)\to1. $$
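Numerically this is easy to check, since the sum in the question equals $e^{-2\lambda}I_0(2\lambda)$ with $I_0$ the modified Bessel function of the first kind (a sketch, assuming SciPy is available; `scipy.special.ive` is the exponentially scaled Bessel function $e^{-|x|}I_\nu(x)$):

```python
from math import pi, sqrt
from scipy.special import ive

for lam in (1, 10, 100, 1000, 10000):
    exact = ive(0, 2 * lam)           # e^{-2*lam} * I_0(2*lam) = f(lambda)
    approx = 1 / sqrt(4 * pi * lam)
    print(lam, exact, approx, exact / approx)
# The ratio tends to 1 as lambda grows, in line with sqrt(4*pi*lam) * f(lambda) -> 1.
```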
Binomial distribution, finding the probability of at least $x$ successes When calculating the probability of at least $x$ successes, one uses $P(\text{at most } x-1)$ instead, and then takes $1- P(\text{at most } x-1)$. This works, and I understand it: we use the complement to calculate it, because the calculator supports it. But what I do not understand is the following. When calculating a combination of these, $P(\text{at most}\ x\ \text{and at least}\ y)$, we can just forget about the $1 - P(\text{at most } x-1)$ part and use $P(\text{at most } x-1)$ directly. For example: $$P(\text{at least 150 sixes and at most 180 sixes}) = P(\text{at most 180 sixes}) - P(\text{at most 149 sixes}).$$ And then we don't have to do the $1-\cdots$ part. Why is this?
If you threw 1000 dice, you might want to know $$\Pr(\text{at least 150 sixes and at most 1000 sixes}) = \Pr(\text{at most 1000 sixes}) - \Pr(\text{at most 149 sixes}).$$ But you cannot get more than 1000 sixes from 1000 dice, so $\Pr(\text{at most 1000 sixes}) =1$, and you can rewrite this more briefly as $$\Pr(\text{at least 150 sixes}) = 1 - \Pr(\text{at most 149 sixes}).$$ In other words, the method in your first case is a particular case of the method in your second case. Incidentally, by the time you get to 150 sixes you could be using the central limit theorem, in which case you are using "max" because many tables and calculators give the cumulative distribution function of a standard normal $\Phi(x)=\Pr(X \le x)$.
Completeness and Cauchy Sequences I came across the following problem on Cauchy sequences: Prove that every compact metric space is complete. Suppose $X$ is a compact metric space. By definition, every sequence in $X$ has a convergent subsequence. We want to show that every Cauchy sequence in $X$ is convergent in $X$. Let $(x_n)$ be an arbitrary Cauchy sequence in $X$ and $(x_{n_{k}})$ a subsequence that converges to $a$. Since $(x_{n_{k}}) \to a$ we have the following: $$(\forall \epsilon >0) \ \exists N \ni m,n \geq N \implies |x_{n_{m}}-x_{n_{n}}| < \epsilon$$ Using this, can we conclude that every Cauchy sequence in $X$ is convergent in $X$? Or do we inductively create subsequences and use Cauchy's criterion to show that it converges?
Let $\epsilon > 0$. Since $(x_n)$ is Cauchy, there exists $\eta_1\in \mathbb N$ such that $$ \left\vert x_n - x_m\right\vert < \frac \epsilon 2$$ for each pair $n, m > \eta_1$. Since $x_{k_n} \to a$, there exists $\eta_2 \in \mathbb N$ such that $$ \left\vert x_{k_n} - a\right\vert < \frac \epsilon 2$$ for each $n > \eta_2$. Let $\eta = \max\{\eta_1, \eta_2\}$; if $n > \eta$ then also $k_n \ge n > \eta$. Therefore we have $$ \left\vert x_n - a\right\vert \le \left\vert x_n - x_{k_n}\right\vert + \left\vert x_{k_n} - a\right\vert < \frac \epsilon 2 + \frac \epsilon 2 = \epsilon,$$ so $x_n \to a$.
Does the converse of uniform continuity -> Preservance of Cauchy sequences hold? We know that if a function $f$ is uniformly continuous on an interval $I$ and $(x_n)$ is a Cauchy sequence in $I$, then $f(x_n)$ is a Cauchy sequence as well. Now, I would like to ask the following question: The function $g:(0,1) \rightarrow \mathbb{R}$ has the following property: for every Cauchy sequence $(x_n)$ in $(0,1)$, $(g(x_n))$ is also a Cauchy sequence. Prove that g is uniformly continuous on $(0,1)$. How do we go about doing it?
You can also prove it by contradiction. Suppose that $f$ is not uniformly continuous. Then there exists an $\epsilon >0$ so that for each $\delta>0$ there exist $x,y \in (0,1)$ with $|x-y| < \delta$ and $|f(x)-f(y)| \geq \epsilon$. For each $n$ pick $x_n, y_n$ so that $|x_n-y_n| < \frac{1}{n}$ and $|f(x_n)-f(y_n)| \geq \epsilon$. Pick $x_{k_n}$ a Cauchy subsequence of $x_n$ and $y_{l_n}$ a Cauchy subsequence of $y_{k_n}$. Then the alternating sequence $x_{l_1}, y_{l_1}, x_{l_2}, y_{l_2},..., x_{l_n}, y_{l_n}, ...$ is Cauchy but $$\left| f(x_{l_n}) - f(y_{l_n}) \right| \geq \epsilon \,.$$
Validity of $\sum\limits_{i=1}^n(a_i^2+b_i^2+c_i^2+d_i^2)\lambda_i\geq\lambda_1+\lambda_2+\lambda_3+\lambda_4$? Suppose that $\lambda_1\leq\lambda_2\leq\dots\leq\lambda_n$ is a sequence of real numbers. Clearly, if $a=(a_1,\dots, a_n)$ is a unit vector, then $\sum\limits_{i=1}^na_i^2\lambda_i\geq \lambda_1$. I want to see if the following generalization is true or not: If $a=(a_1,\dots, a_n)$, $b=(b_1,\dots, b_n)$, $c=(c_1,\dots, c_n)$, and $d=(d_1,\dots, d_n)$ ($n\geq 4$) form an orthonormal set, I wonder if we have $\sum\limits_{i=1}^n(a_i^2+b_i^2+c_i^2+d_i^2)\lambda_i\geq\lambda_1+\lambda_2+\lambda_3+\lambda_4$.
It doesn't hold: if $\lambda_1=x<0$ and $\lambda_i=0, i=2..n$, your inequality becomes $(a_1^2+b_1^2+c_1^2+d_1^2)x\geq x$, which becomes false if we find an orthogonal system $(a,b,c,d)$ such that $ a_1^2+b_1^2+c_1^2+d_1^2>1$. For example $a=(\frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2},0,...,0)$, $b=(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{6},0,...,0)$, $c=(\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{6},-\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3},0...,0)$, $d=(0,0,0,0,1,...,0)$.
Connected planar simple Graph: number of edges a function of the number of vertices Suppose that a connected planar simple graph with $e$ edges and $v$ vertices contains no simple circuit with length less than or equal to $4.\;$ Show that $$\frac 53 v -\frac{10}{3} \geq e$$ or, equivalently, $$5(v-2) \geq 3e$$
As Joseph suggests, one of two formulas you'll want to use for this problem is Euler's formula, which you may know as $$r = e - v + 2 \quad\text{(or}\quad v + r - e = 2)\qquad\qquad\quad (1)$$ where $r$ is the number of regions in a planar representation of $G$ (e: number of edges, v: number of vertices). (Note, for polyhedra which are clearly not planar, this translates into $r = F$, where $F$ is the number of faces of a polyhedron.) Now, a connected planar simple graph drawn in the plane divides the plane into regions, say $r$ of them. The degree of each region, including the unbound region, must be at least five (assuming graph $G$ is a connected planar graph with no simple circuit with length $\leq 4$). For the second formula you'll need: remember that the sum of the degrees of the regions is exactly twice the number of edges in the graph, because each edge occurs on the boundary of a region exactly twice, either in two different regions, or twice in the same region. Because each region $r$ has degree greater than or equal to five, $$2e = \sum_{\text{all regions}\;R} \mathrm{deg}(R) \geq 5r\qquad\qquad\qquad\qquad (2)$$ which gives us $r \leq \large\frac 25 e$. Now, using this result from (2), and substituting for r in Euler's formula, (1), we obtain $$e - v + 2 \leq \frac 25 e,$$ $$\frac 35 e \leq v - 2,$$ and hence, we have, as desired: $$e \leq \frac 53 v - \frac {10}{3} \quad\iff \quad \frac 53 v - \frac{10}{3} \geq e \quad \iff \quad 5(v-2) \geq 3e$$
How to calculate hyperbola from data points? I have 4 data points, from which I want to calculate a hyperbola. It seems that the Excel trendline feature can't do it for me, so how do I find the relationship? The points are: (x,y) (3, 0.008) (6, 0.006) (10, 0.003) (13, 0.002) Thanks!
A hyperbola takes the form $y = k \frac{1}{x}$. This may be difficult to deal with directly. So instead, let's consider the reciprocals of our $x$ values, as J.M. suggested. For example, instead of looking at $(3, 0.008)$, we consider $(\frac{1}{3}, 0.008)$. Then, since we have flipped all of our $x$ values, we are looking to fit something of the form $y = k u$ with $u = \frac{1}{x}$, which is a straight line in $u$. This can be accomplished by any standard linear regression technique. This is just an extension of J.M.'s comment.
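As an illustration, here is a minimal least-squares sketch on the data from the question (a through-the-origin fit is only one reasonable choice, not the only one):

```python
import numpy as np

x = np.array([3.0, 6.0, 10.0, 13.0])
y = np.array([0.008, 0.006, 0.003, 0.002])

u = 1.0 / x                              # flipped x values
k = np.sum(u * y) / np.sum(u * u)        # least-squares slope for y = k * u through the origin
print(k)
print(k / x)                             # fitted curve y = k / x at the data points
```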
Multi-dimensional sequences I was just wondering if it is possible to consider sequences in multiple dimensions? Denote $(x_{t})^{n}$ to be a sequence in dimension $n$. So the "normal" sequences we are used to are denoted by $(x_{t})^{1}$. Likewise, $(x_{t})^{2} = \left((x_{1}(t)), x_{2}(t) \right)$, etc.. It seems that for an $n$-dimensional sequence to converge, all of its component sequence must converge. Is there any utility in looking at $n$ dimensional sequences that have a "significant" number of its component sequences converge? More specifically: Let $$(x_{t})^{n} = \left(x_{1}(t), \dots, x_{n}(t) \right)$$ be an $n$ dimensional sequence. Suppose $p$ of the component sequences converge where $p <n$. What does this tell us about the behavior of $(x_{t})^{n}$?
Why not look at a simple example? Consider $(0,0,0),(0,0,1),(0,0,2),(0,0,3),\dots$. Two of the three component sequences converge. What would you say about the behavior of this sequence of triples?
Is the factorization problem harder than RSA factorization ($n = pq$)? Let $n \in \mathbb{N}$ be a composite number, and $n = pq$ where $p,q$ are distinct primes. Let $F : \mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N}$ (*) be an algorithm which takes as an input $x \in \mathbb{N}$ and returns two primes $u, v$ such that $x = uv,$ or returns FAIL if there is no such factorization ($F$ uses, say, an oracle). That is, $F$ solves the RSA factorization problem. Note that whenever a prime factorization $x = uv$ exists for $x,$ $F$ is guaranteed to find it. Can $F$ be used to solve the prime factorization problem in general? (i.e. given $n \in \mathbb{N},$ find primes $p_i \in \mathbb{N},$ and integers $e_i \in \mathbb{N},$ such that $n = \prod_{i=0}^{k} p_{i}^{e_i}$) If yes, how? A rephrased question would be: is the factorization problem harder than factoring $n = pq$? (*) abuse of the function type notation. More appropriately $F : \mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N} \bigcup \mbox{FAIL} $ Edit 1: $F$ can determine $p,q,$ or FAIL in polynomial time. The general factoring algorithm is required to be polynomial time. Edit 2: The question is now cross-posted on cstheory.SE.
Two vague reasons I think the answer must be "no": If there were any inductive reason that we could factor a number with k prime factors in polynomial time given the ability to factor a number with k-1 prime factors in polynomial time, then the AKS primality test has already provided a base case. So semiprime factorization would have to be considered as a new base case for anything like this to work. The expected number of prime factors is on the order of log(log(n)) which is unbounded although it is very slow. So for sufficiently large n there is unlikely to be a prime or a semiprime which differs from it by less than any given constant. For large enough k, it seems like the ability to factor p*q won't help us factor (p*q)+k, similarly to how the ability to prove p is prime won't help us factor p+k. Interesting question. I hope someone more knowledgeable than me can answer this with a reference and a decisive statement. EDIT: I found this paper entitled Breaking RSA May Be Easier Than Factoring which argues for a "no" answer and states the problem is open.
What is the background for $\sum_{k=1}^n|f(x_k)-f(x_{k-1})|$? The question is from the following problem: If $f$ is the function whose graph is indicated in the figure above, then the least upper bound (supremum) of $$\big\{\sum_{k=1}^n|f(x_k)-f(x_{k-1})|:0=x_0<x_1<\cdots<x_{n-1}<x_n=12\big\}$$ appears to be $A. 2\quad B. 7\quad C. 12\quad D. 16\quad E. 21$ I don't know what the set above means. And I am curious about the background of the set in real analysis.
Total variation sums up how much a function bobs up and down. Yours does this 16 units. Therefore choose D.
Permutation/Combinations in bit Strings I have a string of length 10 over the letters {a, b, c}. How many such strings can be made that have exactly 3 a's, or exactly 4 b's? I thought that it would be C(7,2) + C(6,2), but that's wrong (the answer is 24,600).
Hint: By the inclusion-exclusion principle, the answer is equal to $$\begin{align} & \text{(number of strings with exactly 3 a's)}\\ + & \text{(number of strings with exactly 4 b's)}\\ - &\text{(number of strings with exactly 3 a's and 4 b's)} \end{align}$$ Suppose I want to make a string with exactly 3 a's. First, I need to choose where to put the a's; the number of ways of choosing 3 places to put the a's, out of 10 places, is $\binom{10}{3}$. Now, I need to choose how to fill in the other places with b's or c's; there are 2 choices of letters and 7 places left. Thus, the number of strings that have exactly 3 a's is equal to $$\binom{10}{3}\cdot 2^7$$ You should be able to use similar reasoning to find the other numbers.
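A quick check of the resulting count (not part of the original hint), in Python:

```python
from math import comb

exactly_3a = comb(10, 3) * 2**7          # place the a's, fill 7 spots with b or c
exactly_4b = comb(10, 4) * 2**6          # place the b's, fill 6 spots with a or c
both       = comb(10, 3) * comb(7, 4)    # 3 a's and 4 b's; the remaining 3 spots must be c
print(exactly_3a + exactly_4b - both)    # 24600
```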
Problem finding zeros of complex polynomial I'm trying to solve this problem $$ z^2 + (\sqrt{3} + i)|z| \bar{z}^2 = 0 $$ So, I know $ |z^2| = |z|^2 = a^2 + b ^2 $ and $ \operatorname{Arg}(z^2) = 2 \operatorname{Arg} (z) - 2k \pi = 2 \arctan (\frac{b}{a} ) - 2 k\pi $ for a $ k \in \mathbb{Z} $. Regarding the other term, I know $ |(\sqrt{3} + i)|z| \bar{z}^2 | = |z|^3 |\sqrt{3} + i| = 2 |z|^3 = 2(a^2 + b^2)^{3/2} $ and because of de Moivre's theorem, I have $ \operatorname{Arg} [(\sqrt{3} + i ) |z|\bar{z}^2] = \frac{\pi}{6} + 2 \operatorname{Arg} (z) - 2Q\pi $. Using all of this I can rewrite the equation as follows $$\begin{align*} &|z|^2 \Bigl[ \cos (2 \operatorname{Arg} (z) - 2k \pi) + i \sin (2 \operatorname{Arg}(z) - 2k \pi)\Bigr]\\ &\qquad \mathop{+} 2|z|^3 \Biggl[\cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right) + i \sin \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right)\Biggr] = 0 \end{align*} $$ Which, assuming $ z \neq 0 $, can be simplified as $$\begin{align*} &\cos (2 \operatorname{Arg} (z) - 2k \pi) + i \sin (2 \operatorname{Arg} (z) - 2k \pi) \\ &\qquad\mathop{+} 2 |z|\Biggl[\cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q \pi \right) + i \sin \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right)\Biggr] = 0 \end{align*} $$ Now, from this I'm not sure how to go on. I tried a few things that got me nowhere like trying to solve $$ \cos (2 \operatorname{Arg}(z) - 2k \pi) = 2 |z| \cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right) $$ I'm really lost here, I don't know how to keep going and I've looked for error but can't find them. Any help would be greatly appreciated.
Here is an alternative to solving it using polar form. Let $z=a+bi$, so that $\bar{z}=a-bi$ and $|z|=\sqrt{a^2+b^2}$. Then you want to solve $$(a+bi)^2+(\sqrt{3}+i)\sqrt{a^2+b^2}(a-bi)^2=0,$$ which expands to $$(a^2-b^2)+2abi+(\sqrt{3}+i)\sqrt{a^2+b^2}\left((a^2-b^2)-2abi\right)=0$$ Thus, we need both the real part and the imaginary part of the left side to be 0, i.e. $$(a^2-b^2)+\sqrt{a^2+b^2}\left(\sqrt{3}\cdot (a^2-b^2)+2ab\right)=0$$ and $$2ab+\sqrt{a^2+b^2}\left(-2ab\sqrt{3}+(a^2-b^2)\right)=0.$$ It should be possible to solve these equations by simple manipulations, though I haven't worked it out myself yet.
Convex hull problem with a twist I have a 2D set and would like to determine from them the subset of points which, if joined together with lines, would result in an edge below which none of the points in the set exist. This problem resembles the convex hull problem, but is fundamentally different in its definition. One approach to determine these points might be to evaluate the cross-product of only x_1, x_2 and x_3, where x_1 is on the 'hull', x_2's 'hull'-ness is being evaluated and x_3 is another point on the set (all other points in the set should yield positive cross products if x_2 is indeed on the hull), with the additional constraint that x_1 < x_2 in one dimension. I realize that this algorithm is not entirely perfect; the plot below shows that some valid points would be missed as a result of the convex hull constraint. How else can I define this edge? Hope the question is clear.
It looks like you are looking for the lower [convex] hull. Some algorithms, such as Andrew's variant of the Graham scan, actually compute this: they compute the lower hull and the upper hull and then merge the two to obtain the convex hull. Andrew's algorithm can also be seen as a sweep algorithm, so if you want a quick implementation, you could just use a vertical sweep algorithm (see the Wiki link for details).
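For reference, a short sketch of the lower-hull half of Andrew's monotone chain (Python; assumes points are given as `(x, y)` tuples):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); <= 0 means a clockwise or collinear turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Points on the lower convex hull, ordered left to right."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

print(lower_hull([(0, 0), (1, 1), (2, 0), (1, -1), (3, 2)]))
# [(0, 0), (1, -1), (2, 0), (3, 2)]
```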
One divided by Infinity? Okay, I'm not much of a mathematician (I'm an 8th grader in Algebra I), but I have a question about something that's been bugging me. I know that $0.999 \cdots$ (repeating) = $1$. So wouldn't $1 - \frac{1}{\infty} = 1$ as well? Because $\frac{1}{\infty} $ would be infinitely close to $0$, perhaps as $1^{-\infty}$? So $1 - 1^{-\infty}$, or $\frac{1}{\infty}$ would be equivalent to $0.999 \cdots$? Or am I missing something? Is infinity something that can even be used in this sort of mathematics?
There is one issue that has not been raised in the fine answers given earlier. The issue is implicit in the OP's phrasing and it is worth making it explicit. Namely, the OP is assuming that, just as $0.9$ or $0.99$ or $0.999$ denote terminating decimals with a finite number of 9s, so also $0.999\ldots$ denotes a terminating decimal with an infinite number of 9s, the said infinite number being denoted $\infty$. Changing the notation from $\infty$ to $H$ for this infinite number so as to avoid a clash with traditional notation, we get that indeed that 0.999etc. with an infinite number $H$ of 9s falls infinitesimally short of $1$. More specifically, it falls short of $1$ by the infinitesimal $\frac{1}{10^H}$, and there is no paradox. Here one does not especially need the hyperreal number system. It is sufficient to use the field of fractions of Skolem's nonstandard integers whose construction is completely constructive (namely does not use the axiom of choice or any of its weaker forms). As the OP points out, the infinitesimal $\frac{1}{H}$ (or more precisely $\frac{1}{10^H}$) is infinitely close to $0$ without being $0$ itself.
Using Horner's Method I'm trying to evaluate a polynomial recursively using Horner's method. It's rather simple when I have every value of $x$ (like: $x+x^2+x^3...$), but what if I'm missing some of those? Example: $-6+20x-10x^2+2x^4-7x^5+6x^7$. I would also appreciate it if someone could explain the method in more detail, I've used the description listed here but would like some more explanation.
You can also carry it out in a synthetic division table. Suppose you want to evaluate $f(x) = x^4 - 3x^2 + x - 5$ for $x = 3$. Set up a table like this (note the $0$ standing in for the missing $x^3$ term):

    1   0  -3   1  -5        3
    -------------------------
    1

Now multiply the number on the bottom by $3$, add it to the next coefficient, and record the total:

    1   0  -3   1  -5        3
        3
    -------------------------
    1   3

Work your way across in this manner:

    1   0  -3   1  -5        3
        3   9  18  57
    -------------------------
    1   3   6  19  52

We have $f(3) = 52$. Let's run a check: $$ f(3) = 81 - 3\cdot 9 + 3 - 5 = 54 - 2 = 52.$$ This is a clean, tabular way to see Horner's method work.
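The same bookkeeping is easy to automate; here is a small sketch (Python, my own addition) in which missing powers are simply written as zero coefficients, which also addresses the question above about gaps such as the absent $x^3$ and $x^6$ terms:

```python
def horner(coeffs, x):
    """Evaluate a polynomial from its coefficients, highest degree first.
    Missing powers appear as explicit zeros, e.g.
    -6 + 20x - 10x^2 + 2x^4 - 7x^5 + 6x^7  ->  [6, 0, -7, 2, 0, -10, 20, -6]."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([1, 0, -3, 1, -5], 3))              # 52, matching the table above
print(horner([6, 0, -7, 2, 0, -10, 20, -6], 2))  # 570
```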
For which $n$ is $ \int \limits_0^{2\pi} \prod \limits_{k=1}^n \cos(k x)\,dx $ non-zero? I can verify easily that for $n=1$ and $2$ it's $0$, $3$ and $4$ nonzero, $4$ and $5$ $0$, etc. but it seems like there must be something deeper here (or at least a trick).
Write $\cos(kx)=(e^{ikx}+e^{-ikx})/2$. Obtain $$\begin{array}{ll} \int_0^{2\pi}\prod_{k=1}^n\cos(kx)dx & =\int_0^{2\pi} \prod_{k=1}^n \frac{e^{k i x} + e^{- k i x}}{2} dx \\ & = 2^{-n}\int_0^{2\pi} e^{-(1+2+\cdots+n) \cdot i x} \prod_{k=1}^n \left( 1 + e^{2 k i x} \right) dx \\ & =2^{-n}\int_0^{2\pi}e^{-n(n+1)/2\cdot ix}\sum_{\sigma\in\Sigma} e^{2\sigma ix}dx \\ & =2^{-n}\sum_{\sigma\in\Sigma}\int_0^{2\pi}e^{(2\sigma -n(n+1)/2)\cdot ix}dx\end{array}$$ where $\Sigma$ is the multiset of numbers comprised of the sums of subsets of $\{1,\cdots,n\}$. The integral in the summand equals $2\pi$ if $2\sigma=n(n+1)/2$ and $0$ otherwise. Therefore the sum is nonzero if and only if there is an $n(n+1)/4\in\Sigma$, i.e. $n(n+1)/4$ can be written as a sum of numbers taken from the set $\{1,\cdots,n\}$. Firstly $4\mid n(n+1)\Leftrightarrow n\equiv 0,-1$ mod $4$ is necessary, and moreover Lemma. Any number $0\le S\le n(n+1)/2$ may be written as a sum of numbers in $\{1,\cdots,n\}$. Proof. $S=0$ corresponds to the empty sum. $S=1$ corresponds to the term $1$ itself. Otherwise suppose the claim holds true for $n$ as induction hypothesis, and we seek to prove the claim still holds true for $n+1$. Let $0\le S\le (n+1)(n+2)/2$. If $S\le n(n+1)/2$ then simply take the numbers from $\{1,\cdots,n\}$ via induction hypothesis, otherwise $0\le S-(n+1)\le n(n+1)/2$ and we may invoke the induction hypothesis on $S-(n+1)$, then add $n+1$ to that sum to obtain a sum of elements from $\{1,\cdots,n,n+1\}$ which add up to $S$. Therefore, $n\equiv 0,-1$ mod $4$ is both necessary and sufficient for the integral to be positive.
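A quick numerical sanity check of the conclusion (this snippet is illustrative, not part of the original argument; it assumes `numpy` and `scipy` are available):

```python
import numpy as np
from scipy.integrate import quad

def I(n):
    integrand = lambda x: np.prod([np.cos(k * x) for k in range(1, n + 1)])
    return quad(integrand, 0, 2 * np.pi)[0]

for n in range(1, 13):
    expected = "nonzero" if n % 4 in (0, 3) else "zero"
    print(n, round(I(n), 8), "expected:", expected)
```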
How to compute the transition function in a non-deterministic finite accepter (NFA)? I'm currently teaching myself automata theory using Peter Linz's book - An Introduction to Formal Languages and Automata, 4th edition. While reading chapter 2 about NFAs, I got stuck on this example (page 51): According to the author, the transition function $$\delta^{*}(q_1,a) = \{q_0, q_1, q_2\},$$ and I have no idea how this works, since the definition given in the book is the following: For an nfa, the extended transition function is defined so that $\delta^{*}(q_i,w)$ contains $q_j$ if and only if there is a walk in the transition graph from $q_i$ to $q_j$ labeled $w$. This holds for all $q_i, q_j \in Q$ and $w \in \Sigma^{*}.$ From my understanding, there must be a walk labeled $a$ for a state $q_k$ to be in the set. In the example above, there is no such walk labeled $a$ from $q_1$ to $q_0$ or $q_2$. Perhaps I missed some important points, but I honestly don't understand how the author got that answer, i.e. $\{q_0, q_1, q_2\}$. Any suggestion? Thank you. Note: I already posted this question at https://cstheory.stackexchange.com/questions/7009/how-to-compute-the-transition-function-in-non-determinism-finite-accepter-nfa. However, it was closed because it's not at graduate research level.
Be careful: your machine has to read an $a$ somewhere on the walk to the destination state, but it may also take $\lambda$-transitions before and after that. In your NFA, before reading $a$, two $\lambda$-transitions can be taken: first to go to $q_2$, and then to go to $q_0$. After that the machine can read an $a$ and land on $q_1$. Finally, from $q_1$ the states $q_2$ and $q_0$ are reached again by one and two $\lambda$-transitions respectively, which is why all three states appear in $\delta^{*}(q_1,a)$. Good luck.
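To make the bookkeeping concrete, here is a generic sketch of the extended transition function with $\lambda$-closures (Python). The transition table below is only my reading of the moves described in this answer, since the book's figure is not reproduced here:

```python
def lam_closure(states, delta):
    """All states reachable from `states` by zero or more lambda-moves ('' = lambda)."""
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in delta.get((q, ''), set()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return closure

def ext_delta(state, word, delta):
    """Extended transition function delta*(state, word) for an NFA with lambda-moves."""
    current = lam_closure({state}, delta)
    for symbol in word:
        step = set()
        for q in current:
            step |= delta.get((q, symbol), set())
        current = lam_closure(step, delta)
    return current

# Assumed transitions: lambda-moves q1 -> q2 -> q0 and an a-move q0 -> q1.
delta = {('q0', 'a'): {'q1'}, ('q1', ''): {'q2'}, ('q2', ''): {'q0'}}
print(sorted(ext_delta('q1', 'a', delta)))   # ['q0', 'q1', 'q2']
```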
Express $\int^1_0x^2 e^{-x^2} dx$ in terms of $\int^1_0e^{-x^2} dx$ (Apologies, this was initially incorrectly posted on MathOverflow.) In the MIT 18.01 practice questions for Exam 4, problem 3b (link below), we are asked to express $\int^1_0x^2 e^{-x^2} dx$ in terms of $\int^1_0e^{-x^2} dx$. I understand that this should involve using integration by parts, but the given solution doesn't show working and I'm not able to obtain the same answer regardless of how I set up the integration. Link to the practice exam: http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/exams/prexam4a.pdf
You can use this result as well: $$\int e^{x} \bigl[ f(x) + f'(x)\bigr] \ dx = e^{x} f(x) +C$$ So your integral can be rewritten as \begin{align*} \int\limits_{0}^{1} x^{2}e^{-x^{2}} \ dx & = -\int\limits_{0}^{1} \Bigl[-x^{2} -2x\Bigr] \cdot e^{-x^{2}} -\int\limits_{0}^{1} 2x \cdot e^{-x^{2}}\ dx \end{align*} The second part of the integral can be $\text{easily evaluated}$ by putting $x^{2}=t$.
Proof that a function is holomorphic How can I show that the function $$f\colon\mathbb{C}\setminus\{-i\}\rightarrow\mathbb{C}\quad \text{defined by}\quad f(z)= \frac{1+iz}{1-iz}$$ is a holomorphic function?
One way is by differentiating it. You have $f(z)=\frac{1+iz}{1-iz}=-1+2\cdot\frac{1}{1-iz}$, so when $iz\neq 1$, $\begin{align*}\lim_{h\to0}\frac{f(z+h)-f(z)}{h}&=\lim_{h\to 0}\frac{2}{h}\left(\frac{1}{1-i(z+h)}-\frac{1}{1-iz}\right)\\ &=\lim_{h\to 0}\frac{2}{h}\cdot\frac{1-iz-(1-i(z+h))}{(1-i(z+h))(1-iz)}\\ &\vdots \end{align*}$ The next steps involve some cancellation, after which you can safely let $h$ go to $0$. This is not a very efficient method, but it illustrates that it only takes a bit of algebra to work directly with the definition of the derivative in this case. More simple would be to apply a widely applicable tool, namely the quotient rule, along with the simpler fact that $1\pm iz$ are holomorphic.
Calculate the area on a sphere of the intersection of two spherical caps Given a sphere of radius $r$ with two spherical caps on it defined by the radii ($a_1$ and $a_2$) of the bases of the spherical caps, given a separation of the two spherical caps by angle $\theta$, how do you calculate the surface area of that intersection? To clarify, the area is that of the curved surface on the sphere defined by the intersection. At the extreme where both $a_1,a_2 = r$, we would be describing a spherical lune. Alternatively define the spherical caps by the angles $\Phi_1 = \arcsin(a_1/r)$ and $\Phi_2 = \arcsin(a_2/r)$.
Here's a simplified formula as a function of your 3 variables, $a_1$, $a_2$, and $\theta$: $$ 2\cos(a_2)\arccos \left ( \frac{-\cos(a_1) + \cos(\theta)\cos(a_2)}{\sin(\theta)\sin(a_2)} \right ) \\ -2\cos(a_1)\arccos \left ( \frac{\cos(a_2) - \cos(\theta)\cos(a_1)}{\sin(\theta)\sin(a_1)} \right ) \\ -2\arccos \left ( \frac{-\cos(\theta) + \cos(a_1)\cos(a_2)}{\sin(a_1)\sin(a_2)} \right ) \\ -2\pi\cos(a_2) $$ As previously stated, the caps must intersect and one cannot entirely contain the other. This solution is copied from a graphics presentation by AMD, and is originally from a biochemistry paper. I have no proof that it works, so take it for what it's worth. TOVCHIGRECHKO, A. AND VAKSER, I.A. 2001. How common is the funnel-like energy landscape in protein-protein interactions? Protein Sci. 10:1572-1583
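For convenience, here is a literal transcription of the expression above into code (Python). I read $a_1$ and $a_2$ as the angular radii $\Phi_1,\Phi_2$, the result as a solid angle on the unit sphere (multiply by $r^2$ for a sphere of radius $r$), and, as stated above, the caps must partially overlap; I have not verified the formula beyond transcribing it:

```python
from math import acos, cos, sin, pi

def cap_intersection_area(a1, a2, theta):
    """Area of the intersection of two spherical caps on the unit sphere.
    a1, a2: angular radii of the caps; theta: angular separation of their centres."""
    return (2 * cos(a2) * acos((-cos(a1) + cos(theta) * cos(a2)) / (sin(theta) * sin(a2)))
            - 2 * cos(a1) * acos((cos(a2) - cos(theta) * cos(a1)) / (sin(theta) * sin(a1)))
            - 2 * acos((-cos(theta) + cos(a1) * cos(a2)) / (sin(a1) * sin(a2)))
            - 2 * pi * cos(a2))
```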
How to determine number with same amount of odd and even divisors Given a number $N$, how do I determine the first number after $N$ with the same amount of odd and even divisors? For example, if we have $N=1$, then the next number we are searching for is $2$, because its divisors are: odd: $1$; even: $2$. I figured out that this special number can't be odd, and obviously it can't be prime. I can't find any formula for this; do I just have to compute candidates one by one and check whether each is such a special number? Obviously $1$ and the number itself are divisors of this number. Cheers
For a given integer $n$, every divisor larger than $\sqrt{n}$ is paired with a divisor smaller than $\sqrt{n}$. Use this to figure out a general principle.
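If you do end up checking candidates one by one, as the question contemplates, the check itself is just a direct divisor count; a minimal Python sketch (the hint above can be used to speed the count up):

```python
def equal_odd_even_divisors(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    odd = sum(1 for d in divisors if d % 2 == 1)
    return odd == len(divisors) - odd     # odd count equals even count

def first_after(N):
    m = N + 1
    while not equal_odd_even_divisors(m):
        m += 1
    return m

print(first_after(1))   # 2, as in the example
```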
Deriving the rest of trigonometric identities from the formulas for $\sin(A+B)$, $\sin(A-B)$, $\cos(A+B)$, and $\cos (A-B)$ I am trying to study for a test and the teacher suggest we memorize $\sin(A+B)$, $\sin(A-B)$, $\cos(A+B)$, $\cos (A-B)$, and then be able to derive the rest out of those. I have no idea how to get any of the other ones out of these, it seems almost impossible. I know the $\sin^2\theta + \cos^2\theta = 1$ stuff pretty well though. For example just knowing the above how do I express $\cot(2a)$ in terms of $\cot a$? That is one of my problems and I seem to get stuck half way through.
Maybe this will help? $\cot(x) = \cos x / \sin x$, so $\cot(2a) = \cos(a + a) / \sin(a + a)$, and then I assume you know how to expand these two. Edit: Had it saved as a tab and didn't see the posted answer, but I still think it would have been best to let you compute the rest by yourself so that you could learn it by doing instead of reading.
A double integral (differentiation under the integral sign) While working on a physics problem, I got the following double integral that depends on the parameter $a$: $$I(a)=\int_{0}^{L}\int_{0}^{L}\sqrt{a}e^{-a(x-y+b)^2}dxdy$$ where $L$ and $b$ are constants. Now, this integral obviously has no closed form in terms of elementary functions. However, it follows from physical considerations that the derivative of this integral $\frac{dI}{da}$ has a closed form solution in terms of exponential functions. Unfortunately, my mathematical abilities are not good enough to get this result directly from the integral. So, how does a mathematician solve this problem?
Nowadays many mathematicians (including me :-)) would be content to use some program to obtain $$I'(a)=\frac{e^{-a (b+L)^2} \left(2 e^{a L (2 b+L)}-e^{4 a b L}-1\right)}{4 a^{3/2}}.$$ As for the proof, put $t=1/a$ and let $G(b,t)=e^{-b^2/t}/\sqrt{\pi t}\ $ be a fundamental solution of the heat equation $u_t-u_{bb}/4=0\ $. Then $$ u(b,t)=I(1/a)/\sqrt\pi =\int_{0}^{L}\int_{0}^{L}G(b+x-y,t)\,dxdy. $$ Tinkering a bit with what happens when $t\to+0$, we'll have that $u$ is a solution of the Cauchy problem with initial condition $u(b,0)=\psi(b)$, where $\psi(b)=L-|b|$ when $|b|\le L$ and $\psi(b)=0$ otherwise. So $u(b,t)=\int_{-\infty}^\infty G(b-z,t)\psi(z)\,dz\,\,\,$. Taking the Fourier transform with respect to $b$ we have $$ \tilde u(\xi,t)=\tilde \psi(\xi) \tilde G(\xi,t)=-\frac{e^{-i L \xi} \left(-1+e^{i L \xi}\right)^2}{\sqrt{2 \pi } \xi^2} \frac{e^{-\frac{\xi ^2 t}{4}}}{\sqrt{2 \pi }}= $$ $$ -\frac{\left(-1+e^{i L \xi}\right)^2 e^{-\frac{\xi ^2 t}{4}-i L \xi}}{2 \pi \xi^2},$$ $$ \tilde u_t(\xi,t)=\frac{\left(-1+e^{i L \xi }\right)^2 e^{-\frac{1}{4} \xi (\xi t+4 i L)}}{8 \pi }. $$ Taking the inverse Fourier transform etc. will give the answer above.
Descriptive examples for beta distribution Do you have descriptive/typical examples of processes whose results are described by a beta distribution? So far I only have one: You have a population of constant size with $N$ individuals and you observe a single gene (or gene locus). The descendants in the next generation are drawn from a binomial distribution, so some individuals have several descendants, others have no descendants. The gene can mutate at a rate $u$ (for example, blue eyes become brown eyes in $10^{-5}$ of the cases in which you draw an individual with blue eyes). The rate at which brown-eyed individuals have blue-eyed descendants is the same. The beta distribution describes how likely it is to find X% of the individuals having a certain eye colour. Thereby $2Nu$ is the value for both parameters of the beta distribution. Do you have more examples? For which things is the beta distribution used? Sven
Completely elementary is the fact that for every positive integers $k\le n$, the distribution of the order statistics of rank $k$ in an i.i.d. sample of size $n$ uniform on the interval $(0,1)$ is beta $(k,n-k+1)$. Slightly more sophisticated is the fact that, in Bayesian statistics, beta distributions provide a simple example of conjugate priors for binomial proportions. If $X$ conditionally on $U=u$ is binomial $(n,u)$ for every $u$ in $(0,1)$, then the distribution of $U$ conditionally on $X=x$ which is called the conjugate prior of the binomial is beta $(x,n-x)$. This result is a special case of the multinomial Dirichlet conjugacy. Still more sophisticated is the fact that beta distributions are stationary distributions of Dubins-Freedman processes. These are Markov chains $(X_t)$ on $(0,1)$ moving from $X_t=x$ to $X_{t+1}=xU_t$ with probability $p$ and to $X_{t+1}=x+(1-x)U_t$ with probability $1-p$, where $p$ is a fixed parameter in $(0,1)$ and the sequence $(U_t)$ is an i.i.d. sequence with values in $(0,1)$. If the distribution of $U_t$ is uniform on $(0,1)$, then $(X_t)$ is ergodic and its stationary distribution is beta $(1-p,p)$. The seminal paper on the subject is due to Dubins and Freedman in the Fifth Berkeley Symposium. Later on, Diaconis and Freedman wrote a very nice survey. And the specific result mentioned above was somewhat generalized here.
Edge coloring a graph to find a monochromatic $K_{2,n}$ I am trying to prove or disprove the following statement: Let $n>1$ be a positive integer. Then there exists a graph $G$ of size $4n-1$ such that if the edges of $G$ are colored red or blue, no matter in which way, $G$ definitely contains a monochromatic $K_{2,n}$. I tried to check a few cases in the hope of discovering a counter-example. For $n=2$, $G$ has to have size $7$. The graph certainly is of the form "square + 3 edges". Moreover, it should have the property that if any 3 of the 7 edges are deleted, the remaining graph is a square. I couldn't construct any such graph. Is there any justification why such a graph can't exist, thereby negating the statement?
The claim does not hold for $n = 2$. Consider the following observations for any graph $G$ hoping to satisfy the claim.

* $G$ is a $K_{2,2}$ with three edges appended.
* Without loss of generality, $G$ is connected and has no leaves.
* $G$ has at least five vertices.

Draw a $K_{2,2}$ plus one more vertex. Since the vertex is not isolated and is not a leaf, it has two edges adjoining it to the $K_{2,2}$, which can be done in two nonisomorphic ways. Note now that we have only one edge remaining, so we can't add another vertex, as this would be a leaf. Thus, $G$ has exactly five vertices. The last edge can be added to the two nonisomorphic six-edge graphs in a total of four nonisomorphic ways (two for each - you can check the cases). For each of these candidates, it is easy to find an edge-coloring that avoids a monochromatic $K_{2,2}$.
How to find maximum $x$ such that $k^x$ divides $n!$ Given numbers $k$ and $n$, how can I find the maximum $x$ such that $n! \equiv 0 \pmod{k^x}$? I tried to compute $n!$ and then do a binary search over some range, say $[0,1000]$: for example, compute $k^{500}$; if $n! \bmod k^{500}$ is greater than $0$, then compute $k^{250}$, and so on. But each time I have to work with the value $n!$ (storing it in a bigint and manipulating it every time is a little ridiculous), and the time to compute $n!$ is $O(n)$, so very bad. Is there any faster, more mathematical solution to this problem? Math friends? :) Cheers Chris
Computing $n!$ is a very bad idea for large numbers $n$. To find the desired exponent you should develop something similar to the Legendre formula. You could also search for Legendre in the following document.
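To spell out how Legendre's formula gets used here, a short sketch (Python; the factorization helper and function names are my own, not from the answer):

```python
def prime_factorization(k):
    factors, d = {}, 2
    while d * d <= k:
        while k % d == 0:
            factors[d] = factors.get(d, 0) + 1
            k //= d
        d += 1
    if k > 1:
        factors[k] = factors.get(k, 0) + 1
    return factors

def legendre(n, p):
    """Exponent of the prime p in n!, via Legendre's formula, without computing n!."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def max_power(n, k):
    """Largest x such that k**x divides n!."""
    return min(legendre(n, p) // a for p, a in prime_factorization(k).items())

print(max_power(10, 6))   # 4: indeed 6**4 divides 10! but 6**5 does not
```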
Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$? We know $$i^2=-1 $$then why does this happen? $$ i^2 = \sqrt{-1}\times\sqrt{-1} $$ $$ =\sqrt{-1\times-1} $$ $$ =\sqrt{1} $$ $$ = 1 $$ EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.
Any non-zero number has two distinct square roots. There's an algebraic statement which is always true: "a square root of $a$ times a square root of $b$ equals a square root of $ab$", but this does not tell you which square root of $ab$ you get. Now if $a$ and $b$ are positive, then the positive square root of $a$ (denoted $\sqrt{a}$) times the positive square root of $b$ (denoted $\sqrt{b}$) is a positive number. Thus, it's the positive square root of $ab$ (denoted $\sqrt{ab}$). Which yields $$\forall a,b \ge 0, \ \sqrt{a} \sqrt{b} = \sqrt{ab}$$ In your calculation, because $i$ is a square root of $-1$, then $i^2$ is indeed a square root of $1$, but not the positive one.
2D Epanechnikov Kernel What is the equation for the $2D$ Epanechnikov Kernel? The following doesn't look right when I plot it: $$K(x) = \frac{3}{4} * \left(1 - \left(\left(\frac{x}{\sigma} \right)^2 + \left(\frac{y}{\sigma}\right)^2\right) \right)$$ (The resulting plot is omitted here.)
I have an equation for the $p$-dimensional Epanechnikov kernel. Maybe you will find it useful. $$ \begin{equation} K(\hat{x})=\begin{cases} \frac{1}2C_p^{-1}(p +2)(1-||\hat{x}||^2)& ||\hat{x}||<1\\ 0& \text{otherwise} \end{cases} \end{equation} $$ where $\hat{x}$ is a vector with $p$ dimensions and $C_p$ is defined as: $$C_1 = 2,\ C_2=\pi,\ C_3=\frac{4\pi}3$$ I would like to see an equation for $C_p$ for every $p$; the volume of the $p$-dimensional unit ball, $C_p=\pi^{p/2}/\Gamma(\tfrac p2+1)$, fits the three values above.
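A small sketch (Python/NumPy, my own addition) that evaluates this kernel for $p=2$; note it is zero outside the unit ball, a restriction the expression in the question does not enforce:

```python
import numpy as np

def epanechnikov(points, p, C_p):
    """p-dimensional Epanechnikov kernel, using the normalizing constant C_p above."""
    points = np.atleast_2d(points)            # shape (n_points, p)
    r2 = np.sum(points ** 2, axis=1)
    value = 0.5 * (p + 2) / C_p * (1.0 - r2)
    return np.where(r2 < 1.0, value, 0.0)     # compact support: zero outside ||x|| < 1

xs = np.linspace(-1.5, 1.5, 7)
grid = np.array([(a, b) for a in xs for b in xs])
print(epanechnikov(grid, p=2, C_p=np.pi).reshape(7, 7))
```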
Given $N$, count $\{(m,n) \mid 0\leq m<N,\ 0\leq n<N,\ m\perp n\}$ I'm confused at exercise 4.49 on page 149 from the book "Concrete Mathematics: A Foundation for Computer Science": Let $R(N)$ be the number of pairs of integers $(m,n)$ such that $0\leq m < N$, $0\leq n<N$, and $m\perp n$. (a) Express $R(N)$ in terms of the $\Phi$ function. (b) Prove that $$R(N) = \displaystyle\sum_{d\geq 1}\left\lfloor\frac{N}{d}\right\rfloor^2 \mu(d)$$

* $m\perp n$ means $m$ and $n$ are relatively prime
* $\mu$ is the Möbius function
* $\Phi(x)=\sum_{1\leq k\leq x}\phi(k)$
* $\phi$ is the totient function

For question (a), my solution is $R(N) = 2 \cdot \Phi(N-1) + [N>1]$ (where $[\;\;]$ is the Iverson bracket, i.e. [True]=1, [False]=0). Clearly $R(1)$ has to be zero, because the only possibility of $(m,n)$ for testing is $(0,0)$, which doesn't qualify. This agrees with my answer. But here is the book's answer: Either $m<n$ ($\Phi(N-1)$ cases) or $m=n$ (one case) or $m>n$ ($\Phi(N-1)$ again). Hence $R(N) = 2\Phi(N-1) + 1$. $m=n$ is only counted when $m=n=1$, but how could that case appear when $N=1$? I thought the book assumed $R$ is only defined over $N\geq 2$. But their answer for question (b) relies on $R(N) = 2\Phi(N-1) + 1$ and proves the proposition also for the case $N=1$. They actually prove $2\Phi(N-1) + 1 = \text{RHS}$ for $N\geq 1$. And if my assumption about the $R(1)$ case is true, then the proposition in (b) cannot be valid for $N=1$, for $\text{LHS}=0$ and $\text{RHS}=1$. But the fact that it's invalid just for one value seems a little fishy to me. My question is, where am I confused? What is wrong in my understanding about the case $R(1)$? Thank you very much.
I did a search and found the 1994-1997 errata for the book. So, the question was changed to: Let R(N) be the number of pairs of (m,n) such that 1≤m≤N, 1≤n≤N, and m⊥n This also slightly changes the solution for R(N), and everything makes sense. I don't post the solution to prevent spoilers. I'm sorry for having wasted everybody's time.
How can I compute the integral $\int_{0}^{\infty} \frac{dt}{1+t^4}$? I have to compute this integral $$\int_{0}^{\infty} \frac{dt}{1+t^4}$$ to solve a problem in a homework. I have tried in many ways, but I'm stuck. A search in the web reveals me that it can be do it by methods of complex analysis. But I have not taken this course yet. Thanks for any help.
Let the considered integral be I i.e $$I=\int_0^{\infty} \frac{1}{1+t^4}\,dt$$ Under the transformation $t\mapsto 1/t$, the integral is: $$I=\int_0^{\infty} \frac{t^2}{1+t^4}\,dt \Rightarrow 2I=\int_0^{\infty}\frac{1+t^2}{1+t^4}\,dt=\int_0^{\infty} \frac{1+\frac{1}{t^2}}{t^2+\frac{1}{t^2}}\,dt$$ $$2I=\int_0^{\infty} \frac{1+\frac{1}{t^2}}{\left(t-\frac{1}{t}\right)^2+2}\,dt$$ Next, use the substitution $t-1/t=u \Rightarrow (1+1/t^2)\,dt=du$ to get: $$2I=\int_{-\infty}^{\infty} \frac{du}{u^2+2}\Rightarrow I=\int_0^{\infty} \frac{du}{u^2+2}=\boxed{\dfrac{\pi}{2\sqrt{2}}}$$ $\blacksquare$
Null Sequences and Real Analysis I came across the following problem during the course of my study of real analysis: Prove that $(x_n)$ is a null sequence iff $(x_{n}^{2})$ is null. For all $\epsilon>0$, $|x_{n}| \leq \epsilon$ for $n > N_1$. Let $N_2 = \text{ceiling}(\sqrt{N_1})$. Then $(x_{n}^{2}) \leq \epsilon$ for $n > N_2$. If $(x_{n}^{2})$ is null then $|x_{n}^{2}| \leq \epsilon$ for $n>N$. Let $N_3 = N^2$. Then $|x_n| \leq \epsilon$ for $n> N_3$. Is this correct? In general, we could say $(x_{n})$ is null iff $(x_{n}^{n})$ is null?
You could use the following fact: If a function $f:X\to Y$ between two topological spaces is continuous and $x_n\to x$, then $f(x_n)\to f(x)$. (In case you have not learned it in this generality, you might at least know that this is true for real functions or for functions between metric spaces. In fact, in the case of real functions the above condition is equivalent to continuity.) You can obtain your first claim by applying the fact to the continuous functions: $f: \mathbb R\to\mathbb R$, $f(x)=x^2$ (one implication) $f: \langle 0,\infty)\to \mathbb R$, $f(x)=\sqrt{x}$ (reverse implication)
bijective morphism of affine schemes The following question occurred to me while doing exercises in Hartshorne. If $A \to B$ is a homomorphism of (commutative, unital) rings and $f : \text{Spec } B \to \text{Spec } A$ is the corresponding morphism on spectra, does $f$ bijective imply that $f$ is a homeomorphism? If not, can anyone provide a counterexample? The reason this seems reasonable to me is because I think that the inverse set map should preserve inclusions of prime ideals, which is the meaning of continuity in the Zariski topology, but I can't make this rigorous.
No. Let $A$ be a DVR. Let $k$ be the residue field, $K$ the quotient field. There is a map $\mathrm{Spec} k \sqcup \mathrm{Spec} K \to \mathrm{Spec} A$ which is bijective, but not a homeomorphism (one side is discrete and the other is not). Note that $\mathrm{Spec} k \sqcup \mathrm{Spec}K = \mathrm{Spec} k \times K$, so this is an affine scheme. As Matt E observes below in the comments, one can construct more geometric examples of this phenomenon (e.g. the coproduct of a punctured line plus a point mapping to a line): the point is that things can go very wrong with the topology.
Fractional part of $b \log a$ From the problem... Find the minimal positive integer $b$ such that the first digits of $2^b$ are 2011 ...I have been able to reduce the problem to the following instead: Find minimal $b$ such that $\log_{10} (2.011) \leq \operatorname{frac}(b~\log_{10} (2)) < \log_{10} (2.012)$, where $b$ is a positive integer Is there an algorithm that can be applied to solve this or would you need to step through all possible b until you find the right solution?
You are looking for integers $b$ and $p$ such that $b\log_{10}2-\log_{10}(2.011)-p$ is small and positive. The general study of such things is called "inhomogeneous diophantine approximation," which search term should get you started, if you want something more analytical than a brute force search. As 6312 indicated, continued fractions come into it.
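If you just want the specific exponent, a direct scan is short (continued fractions, as suggested above, would locate it with far less work); this is an illustrative Python sketch of my own:

```python
import math

def first_exponent(limit=10**7):
    lo, hi = math.log10(2.011), math.log10(2.012)
    log2 = math.log10(2)
    for b in range(1, limit):
        if lo <= (b * log2) % 1.0 < hi:
            return b

b = first_exponent()
print(b, str(2**b)[:4])   # the second value should read '2011'
```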
Isomorphism on commutative diagrams of abelian groups Consider the following commutative diagram of homomorphisms of abelian groups $$\begin{array}{ccccccccccc} 0&\stackrel{f_1}{\longrightarrow}&A& \stackrel{f_2}{\longrightarrow}&B& \stackrel{f_3}{\longrightarrow}&C&\stackrel{f_4}{\longrightarrow}& D &\stackrel{f_5}{\longrightarrow}&0\\ \downarrow{g_1}&&\downarrow{g_2}&&\downarrow{g_3}&&\downarrow{g_4}&&\downarrow{g_5}&&\downarrow{g_6}\\ 0&\stackrel{h_1}{\longrightarrow}&0& \stackrel{h_2}{\longrightarrow}&E& \stackrel{h_3}{\longrightarrow}&F&\stackrel{h_4}{\longrightarrow} &0 &\stackrel{h_5}{\longrightarrow}&0 \end{array}$$ Suppose the horizontal rows are exact ($\mathrm{ker}(f_{i+1})=\mathrm{Im}(f_i)$), and suppose we know that $g_4:C\rightarrow F$ is an isomorphism. How can I deduce that $D=0$? All I could get is that $h_3:E\rightarrow F$ is an isomorphism and $f_4:C\rightarrow D$ is surjective.
This is wrong. Consider \begin{array}{ccccccccccc} 0 & \to & 0 & \to & 0 & \to & A & \to & A & \to & 0\\ \downarrow & & \downarrow & & \downarrow & & \downarrow & & \downarrow & & \downarrow\\ 0 & \to & 0 & \to & A & \to & A & \to & 0 & \to & 0 \end{array} where all maps $A \to A$ are the identity.
Does the cartesian product have a neutral element? Let $A$ be any set. Is there a set $E$ such that $A \times E = E \times A = A$? I thought of the empty set, but Wikipedia says otherwise. This operation changes dimension, so an isomorphism might be needed for such element to exist.
In some sense, the whole reason we have these things called addition and multiplication and the ring axioms is because of certain properties satisfied by the Cartesian product and disjoint union. Both are associative and commutative (up to natural isomorphism). One distributes over the other (up to natural isomorphism). Both have identity elements (up to natural isomorphism). Decategorify, restricting to finite sets, and you get the non-negative integers. Take the Grothendieck group, and you get the integers, and then at some point you are led to write down the ring axioms in general. But it's good to keep in mind where it all comes from.
Compound angle formula confusion I'm working through my book, on the section about compound angle formulae. I've been made aware of the identity $\sin(A + B) \equiv \sin A\cos B + \cos A\sin B$. Next task was to replace B with -B to show $\sin(A - B) \equiv \sin A\cos B - \cos A \sin B$ which was fairly easy. I'm struggling with the following though: "In the identity $\sin(A - B) \equiv \sin A\cos B - \cos A\sin B$, replace A by $(\frac{1}{2}\pi - A)$ to show that $\cos(A + B) \equiv \cos A\cos B - \sin A\sin B$." I've got $\sin((\frac{\pi}{2} - A) - B) \equiv \cos A\cos B - \sin A\sin B$ by replacing $\sin(\frac{\pi}{2} - A)$ with $\cos A$ and $\cos(\frac{\pi}{2} - A)$ with $\sin A$ on the RHS of the identity. It's just the LHS I'm stuck with and don't know how to manipulate to make it $\cos(A + B)$. P.S. I know I'm asking assistance on extremely trivial stuff, but I've been staring at this for a while and don't have a tutor so hope someone will help!
Note that you can also establish: $$\sin\left(\left(\frac{\pi}{2} - A\right) - B\right) =\sin\left(\frac{\pi}{2} - (A + B)\right) = \cos(A+B)$$ by using the second identity you figured out above, $\sin(A - B) \equiv \sin A\cos B - \cos A\sin B$, giving you: $$\sin\left(\left(\frac{\pi}{2} - A\right) - B\right) = \sin\left(\frac{\pi}{2} - (A+B)\right)$$ $$ = \sin\left(\frac{\pi}{2}\right)\cos(A+B) - \cos\left(\frac{\pi}{2}\right)\sin(A+B)$$ $$= (1)\cos(A+B) - (0)\sin(A+B)$$ $$ = \cos(A+B)$$
If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$. I am not very experienced with proof. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$ I looked up the hint and it says let $g(x) = \log f(a^x) $ The wikipedia page for functional equations only states the form of the solutions without proof. Attempt Using the hint (which was like pulling a rabbit out of the hat) Restricting the codomain $f:(0,+\infty)\rightarrow (0,+\infty)$ so that we can define the real function $g(x) = \log f(a^x)$ and we have $$g(x+y) = g(x)+ g(y)$$ i.e $g(x) = xg(1)$ as $g(x)$ is continuous (assuming $f$ is). Letting $\log_a f(a) = p$ we get $f(a^x) =a^p $. I do not have a rigorous argument but I think I can conclude that $f(x) = x^p$ (please fill any holes or unspecified assumptions) Different solutions are invited
Both the answers above are very good and thorough, but given an assumption that the function is differentiable, the DE approach strikes me as the easiest. $ \frac{\partial}{\partial y} f(x y) = x f'(xy) = f(x)f'(y) $ Evaluating y at 1 gives: $ xf'(x) = f(x)f'(1) $ The above is a separable DE: Let $ p = f'(1) $ and $ z = f(x) $ $ x\frac{dz}{dx} = pz \implies \int \frac{dz}{z} = p\int \frac{dx}{x}$ $ \therefore \ln|z| = p\ln|x| + C $ Let $ A = e^C $. $ \implies C = \ln(A) $ $ x > 0 \implies |x| = x $ $ \therefore \ln|z| = p\ln(x) + \ln(A) = \ln(x^p) + \ln(A) = \ln(Ax^p) $ Hence $ |z| = Ax^p $; $ z = \pm Ax^p = f(x)$ Let $ B = \pm A $ and now $ f(x) = Bx^p $ Now using the initial property: $ f(x)f(y) = Bx^p By^p = B^2 (xy)^p = f(xy) = B (xy)^p $ $B^2 = B \implies B $ is $0$ or $1$. If B is zero, that provides the constant function $ f(x) = 0 $, otherwise the solution is $ f(x) = x^p $. As can be seen from the other answers, this does not capture all possible solutions, but sometimes that's the price of simplicity.
Why does a diagonalization of a matrix B with the basis of a commuting matrix A give a block diagonal matrix? I am trying to understand a proof concerning commuting matrices and simultaneous diagonalization of these. It seems to be a well known result that when you take the eigenvectors of $A$ as a basis and diagonalize $B$ with it, then you get a block diagonal matrix: $$B= \begin{pmatrix} B_{1} & 0 & \cdots & 0 \\ 0 & B_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_{m} \end{pmatrix},$$ where each $B_{i}$ is an $m_{g}(\lambda_{i}) \times m_{g}(\lambda_{i})$ block ($m_{g}(\lambda_{i})$ being the geometric multiplicity of $\lambda_{i}$). My question: Why is this so? I calculated an example and, lo and behold, it really works :-) But I don't understand how it works out so neatly. Can you please explain this result to me in an intuitive and step-by-step manner? Thank you!
Suppose that $A$ and $B$ are matrices that commute. Let $\lambda$ be an eigenvalue for $A$, and let $E_{\lambda}$ be the eigenspace of $A$ corresponding to $\lambda$. Let $\mathbf{v}_1,\ldots,\mathbf{v}_k$ be a basis for $E_{\lambda}$. I claim that $B$ maps $E_{\lambda}$ to itself; in particular, $B\mathbf{v}_i$ can be expressed as a linear combination of $\mathbf{v}_1,\ldots,\mathbf{v}_k$, for $i=1,\ldots,k$. To show that $B$ maps $E_{\lambda}$ to itself, it is enough to show that $B\mathbf{v}_i$ lies in $E_{\lambda}$; that is, that if we apply $A$ to $B\mathbf{v}_i$, the result ill be $\lambda(B\mathbf{v}_i)$. This is where the fact that $A$ and $B$ commute comes in. We have: $$A\Bigl(B\mathbf{v}_i\Bigr) = (AB)\mathbf{v}_i = (BA)\mathbf{v}_i = B\Bigl(A\mathbf{v}_i\Bigr) = B(\lambda\mathbf{v}_i) = \lambda(B\mathbf{v}_i).$$ Therefore, $B\mathbf{v}_i\in E_{\lambda}$, as claimed. So, now take the basis $\mathbf{v}_1,\ldots,\mathbf{v}_k$, and extend it to a basis for $\mathbf{V}$, $\beta=[\mathbf{v}_1,\ldots,\mathbf{v}_k,\mathbf{v}_{k+1},\ldots,\mathbf{v}_n]$. To find the coordinate matrix of $B$ relative to $\beta$, we compute $B\mathbf{v}_i$ for each $i$, write $B\mathbf{v}_i$ as a linear combination of the vectors in $\beta$, and then place the corresponding coefficients in the $i$th column of the matrix. When we compute $B\mathbf{v}_1,\ldots,B\mathbf{v}_k$, each of these will lie in $E_{\lambda}$. Therefore, each of these can be expressed as a linear combination of $\mathbf{v}_1,\ldots,\mathbf{v}_k$ (since they form a basis for $E_{\lambda}$. So, to express them as linear combinations of $\beta$, we just add $0$s; we will have: $$\begin{align*} B\mathbf{v}_1 &= b_{11}\mathbf{v}_1 + b_{21}\mathbf{v}_2+\cdots+b_{k1}\mathbf{v}_k + 0\mathbf{v}_{k+1}+\cdots + 0\mathbf{v}_n\\ B\mathbf{v}_2 &= b_{12}\mathbf{v}_1 + b_{22}\mathbf{v}_2 + \cdots +b_{k2}\mathbf{v}_k + 0\mathbf{v}_{k+1}+\cdots + 0\mathbf{v}_n\\ &\vdots\\ B\mathbf{v}_k &= b_{1k}\mathbf{v}_1 + b_{2k}\mathbf{v}_2 + \cdots + b_{kk}\mathbf{v}_k + 0\mathbf{v}_{k+1}+\cdots + 0\mathbf{v}_n \end{align*}$$ where $b_{ij}$ are some scalars (some possibly equal to $0$). So the matrix of $B$ relative to $\beta$ would start off something like: $$\left(\begin{array}{ccccccc} b_{11} & b_{12} & \cdots & b_{1k} & * & \cdots & *\\ b_{21} & b_{22} & \cdots & b_{2k} & * & \cdots & *\\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\ b_{k1} & b_{k2} & \cdots & b_{kk} & * & \cdots & *\\ 0 & 0 & \cdots & 0 & * & \cdots & *\\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & * & \cdots & * \end{array}\right).$$ So, now suppose that you have a basis for $\mathbf{V}$ that consists entirely of eigenvectors of $A$; let $\beta=[\mathbf{v}_1,\ldots,\mathbf{v}_n]$ be this basis, with $\mathbf{v}_1,\ldots,\mathbf{v}_{m_1}$ corresponding to $\lambda_1$ (with $m_1$ the algebraic multiplicity of $\lambda_1$, which equals the geometric multiplicity of $\lambda_1$); $\mathbf{v}_{m_1+1},\ldots,\mathbf{v}_{m_1+m_2}$ the eigenvectors corresponding to $\lambda_2$, and so on until we get to $\mathbf{v}_{m_1+\cdots+m_{k-1}+1},\ldots,\mathbf{v}_{m_1+\cdots+m_k}$ corresponding to $\lambda_k$. Note that $\mathbf{v}_{1},\ldots,\mathbf{v}_{m_1}$ are a basis for $E_{\lambda_1}$; that $\mathbf{v}_{m_1+1},\ldots,\mathbf{v}_{m_1+m_2}$ are a basis for $E_{\lambda_2}$, etc. 
By what we just saw, each of $B\mathbf{v}_1,\ldots,B\mathbf{v}_{m_1}$ lies in $E_{\lambda_1}$, and so when we express it as a linear combination of vectors in $\beta$, the only vectors with nonzero coefficients are $\mathbf{v}_1,\ldots,\mathbf{v}_{m_1}$, because they are a basis for $E_{\lambda_1}$. So in the first $m_1$ columns of $[B]_{\beta}^{\beta}$ (the coordinate matrix of $B$ relative to $\beta$), the only nonzero entries in the first $m_1$ columns occur in the first $m_1$ rows. Likewise, each of $B\mathbf{v}_{m_1+1},\ldots,B\mathbf{v}_{m_1+m_2}$ lies in $E_{\lambda_2}$, so when we express them as linear combinations of $\beta$, the only places where you can have nonzero coefficients are in the coefficients of $\mathbf{v}_{m_1+1},\ldots,\mathbf{v}_{m_1+m_2}$. So the $(m_1+1)$st through $(m_1+m_2)$st column of $[B]_{\beta}^{\beta}$ can only have nonzero entries in the $(m_1+1)$st through $(m_1+m_2)$st rows. And so on. That means that $[B]_{\beta}^{\beta}$ is in fact block-diagonal, with the blocks corresponding to the eigenspaces $E_{\lambda_i}$ of $A$, exactly as described.
"Counting Tricks": using combination to derive a general formula for $1^2 + 2^2 + \cdots + n^2$ I was reading an online article which confused me with the following. To find out $S(n)$, where $S(n) = 1^2 + 2^2 + \cdots + n^2$, one can first write out the first few terms: 0 1 5 14 30 55 91 140 204 285 Then, get the differences between adjacent terms until they're all zeroes: 0 1 5 14 30 55 91 140 204 285 1 4 9 16 25 36 49 64 81 3 5 7 9 11 13 15 17 2 2 2 2 2 2 2 all zeroes this row Then it says that therefore we can use the following method to achieve $S(n)$: $S(n) = 0 {n\choose 0} + 1 {n\choose 1} + 3 {n\choose 2} + 2 {n\choose 3}$. I don't understand the underlying mechanism. Someone cares to explain?
The key word here is finite differences. See Newton series.
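To make the mechanics concrete, here is a small Python sketch (mine, not part of the answer) that reads off the leading entry of each difference row and then rebuilds $S(n)$ from binomial coefficients, reproducing the coefficients $0,1,3,2$ from the question:

```python
from math import comb

def newton_coefficients(values):
    """Leading entries of the successive difference rows: the c_k in f(n) = sum_k c_k * C(n, k)."""
    coeffs, row = [], list(values)
    while any(row):
        coeffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return coeffs

S = [sum(i * i for i in range(1, n + 1)) for n in range(10)]   # 0, 1, 5, 14, 30, ...
c = newton_coefficients(S)
print(c)                                                        # [0, 1, 3, 2]
print(all(sum(ck * comb(n, k) for k, ck in enumerate(c)) == S[n] for n in range(10)))  # True
```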
Proving $\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$ How do I show that: $$\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$$ This is actually problem B $4371$ given at this link. Looks like a very interesting problem. My attempts: Well, I have been thinking about this for the whole day, and I have got some insights. I don't believe my insights will lead me to a $\text{complete}$ solution. * *First, I wrote $\sin\frac{5\pi}{14}$ as $\sin\frac{9 \pi}{14}$ so that if I put $A = \frac{\pi}{14}$ so that the given equation becomes, $$\frac{1}{\sin^{2}{A}} + \frac{1}{\sin^{2}{3A}} + \frac{1}{\sin^{2}{9A}} =24$$ Then I tried working with this by taking $\text{lcm}$ and multiplying and doing something, which appeared futile. *Next, I actually didn't work it out, but I think we have to look for a equation which has roots as $\sin$ and then use $\text{sum of roots}$ formulas to get $24$. I think I haven't explained this clearly. * *$\text{Thirdly, is there a trick proving such type of identities using Gauss sums ?}$ One post related to this is: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$ I don't know how this will help as I haven't studied anything yet regarding Gauss sums.
Use $\sin(x) = \cos(\frac{\pi}2 - x)$, we can rewrite this as: $$\frac{1}{\cos^2 \frac{3\pi}{7}} + \frac{1}{\cos^2 \frac{2\pi}{7}} + \frac{1}{\cos^2 \frac{\pi}{7}}$$ Let $a_k = \frac{1}{\cos \frac{k\pi}7}$. Let $f(x) = (x-a_1)(x-a_2)(x-a_3)(x-a_4)(x-a_5)(x-a_6)$. Now, using that $a_k = - a_{7-k}$, this can be written as: $$f(x) = (x^2-a_1^2)(x^2-a_2^2)(x^2-a_3^2)$$ Now, our problem is to find the sum $a_1^2 + a_2^2 + a_3^2$, which is just the negative of the coefficient of $x^4$ in the polynomial $f(x)$. Let $U_6(x)$ be the Chebyshev polynomial of the second kind - that is: $$U_6(\cos \theta) = \frac{\sin 7\theta }{\sin \theta}$$ It is a polynomial of degree $6$ with roots equal to $\cos(\frac{k\pi}7)$, for $k=1,...,6$. So the polynomials $f(x)$ and $x^6U_6(1/x)$ have the same roots, so: $$f(x) = C x^6 U_6(\frac{1}x)$$ for some constant $C$. But $U_6(x) = 64x^6-80x^4+24x^2-1$, so $x^6 U_6(\frac{1}x) = -x^6 + 24 x^4 - 80x^2 + 64$. Since the coefficient of $x^6$ is $-1$, and it is $1$ in $f(x)$, $C=-1.$ So: $$f(x) = x^6 - 24x^4 +80x^2 - 64$$ In particular, the sum you are looking for is $24$. In general, if $n$ is odd, then the sum: $$\sum_{k=1}^{\frac{n-1}2} \frac{1}{\cos^2 \frac{k\pi}{n}}$$ is the absolute value of the coefficient of $x^2$ in the polynomial $U_{n-1}(x)$, which turns out to have closed form $\frac{n^2-1}2$.
When can two linear operators on a finite-dimensional space be simultaneously Jordanized? In a comment to Qiaochu's answer here it is mentioned that two commuting matrices can be simultaneously Jordanized (sorry that this sounds less appealing than "diagonalized" :P ), i.e. can be brought to a Jordan normal form by the same similarity transformation. I was wondering about the converse - when can two linear operators acting on a finite-dimensional vector space (over an algebraically closed field) be simultaneously Jordanized? Unlike the case of simultaneous diagonalization, I don't think commutativity is forced on the transformations in this case, and I'm interested in other natural conditions which guarantee that this is possible. EDIT: as Georges pointed out, the statement that two commuting matrices are simultaneously Jordanizable is in fact wrong. Nevertheless, I am still interested in interesting conditions on a pair of operators which ensure a simultaneous Jordanization (of course, there are some obvious sufficient conditions, e.g. that the two matrices are actually diagonalizable and commute, but this is not very appealing...)
I am 2 years late, but I would like to leave a comment, because for matrices of order 2 exists a very simple criterion. Thm: If $A,B$ are complex matrices of order 2 and not diagonalizable then $A$ and $B$ can be simultaneously Jordanized if and only if $A-B$ is a multiple of the identity. Proof: Suppose $A-B=aId$. Since $B$ is not diagonalizable then $B=RJR^{-1}$, where $J=\left(\begin{array}{cc} b & 1 \\ 0 & b\end{array}\right)$ Thus, $A= RJR^{-1}+aId=R(J+aId)R^{-1}=R\left(\begin{array}{cc} b+a & 1 \\ 0 & b+a\end{array}\right)R^{-1}$. Therefore $A$ and $B$ can be simultaneously Jordanized. For the converse, let us suppose that $A$ and $B$ can be simultaneously Jordanized. Since $A$ and $B$ are not diagonalizable then $A=RJ_AR^{-1}$ and $B=RJ_BR^{-1}$, where $J_A=\left(\begin{array}{cc} a & 1 \\ 0 & a\end{array}\right)$ and $J_B=\left(\begin{array}{cc} b & 1 \\ 0 & b\end{array}\right)$. Therefore, $A-B=RJ_AR^{-1}-RJ_BR^{-1}=R(J_A-J_B)R^{-1}=R\left(\begin{array}{cc} a-b & 0 \\ 0 & a-b\end{array}\right)R^{-1}=(a-b)Id$. $\ \square$ Now, we can find many examples of matrices that commute and can not be simultaneously Jordanized. Example: The matrices $\left(\begin{array}{cc} a & 1 \\ 0 & a\end{array}\right), \left(\begin{array}{cc} b & -1 \\ 0 & b\end{array}\right)$ are not diagonalizable and their difference is not a multiple of the identity, therefore they can not be simultaneously Jordanized. Notice that these matrices commute.
Why is an empty function considered a function? A function by definition is a set of ordered pairs, and also according the Kuratowski, an ordered pair $(x,y)$ is defined to be $$\{\{x\}, \{x,y\}\}.$$ Given $A\neq \varnothing$, and $\varnothing\colon \varnothing \rightarrow A$. I know $\varnothing \subseteq \varnothing \times A$, but still an empty set is not an ordered pair. How do you explain that an empty function is a function?
The empty set is a set of ordered pairs. It contains no ordered pairs but that's fine, in the same way that $\varnothing$ is a set of real numbers though $\varnothing$ does not contain a single real number.
Reduction formula for $I_{n}=\int {\cos{nx} \over \cos{x}}\rm{d}x$ What would be a simple method to compute a reduction formula for the following? $\displaystyle I_{n}=\int {\cos{nx} \over \cos{x}} \rm{d}x~$ where $n$ is a positive integer I understand that it may involve splitting the numerator into $\cos(n-2+2)x~$ (or something similar to this form...), but how would one intuitively recognize that manipulating the expression into such a random arrangement is the way to proceed on this question? Moreover, are there alternative methods, and possibly even some way of directly computing this integral without the need for a reduction formula?
The complex exponential approach described by Gerry Myerson is very nice, very natural. Here are a couple of first-year calculus approaches. The first is kind of complicated, but introduces some useful facts. The second one, given at the very end, is quick. Instead of doing a reduction formula directly, we separate out a fact that is far more important than our integral.

Lemma: There is a polynomial $P_n(x)$ such that $$\cos(nx)=P_n(\cos x).$$ Moreover, $P_n$ contains only terms of odd degree if $n$ is odd, and only terms of even degree if $n$ is even.

Proof: The cases $n=1$ and $n=2$ are familiar. Suppose we know the result for $n$. We establish the result for $n+2$. Note that $$\cos((n+2)x)=\cos(2x)\cos(nx)-\sin(2x)\sin(nx).$$ The $\cos(2x)\cos(nx)$ part is expressible as a polynomial in $\cos x$, by the induction hypothesis. But $\sin(nx)$ is the derivative of $(-1/n)\cos(nx)$, so it is $(1/n)(\sin x)P_n'(\cos x)$. Thus $$\sin(2x)\sin(nx)=(1/n)(2\sin x\cos x)(\sin x)P_n'(\cos x),$$ and now we replace $\sin^2 x$ by $1-\cos^2 x$. As we do the induction, we can easily check that all degrees are even or all are odd as claimed. Or else we can obtain the degree information afterwards from symmetry considerations.

Now to the integral! If $n$ is odd, then $\cos(nx)=P_n(\cos x)$, where $P_n(x)$ has only terms of odd degree. Then $\frac{\cos(nx)}{\cos x}$ is a polynomial in $\cos x$, and can be integrated using the standard reduction procedure. If $n$ is even, pretty much the same thing happens, except that $P_n(x)$ has a non-zero constant term. Divide as in the odd case. We end up with a polynomial in $\cos x$ plus a term of shape $k/(\cos x)$. The integral of $\sec x$, though mildly unpleasant, is standard.

Remark: If $n$ is odd, then $\sin(nx)$ is a polynomial in $\sin x$, with only terms of odd degree. If $n$ is even, then $\sin(nx)$ is $\cos x$ times a polynomial in $\sin x$, with all terms of odd degree.

Added: I should also give the simple reduction formula that was asked for, even at the risk that people will not get interested in the polynomials. Recall that $$\cos(a-b)+\cos(a+b)=2\cos a \cos b.$$ Take $a=(n-1)x$ and $b=x$, and rearrange a bit. We get $$\cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x).$$ Divide through by $\cos x$, and integrate. $$\int\frac{\cos(nx)}{\cos x}dx =2\int \cos((n-1)x)dx-\int\frac{\cos((n-2)x)}{\cos x}dx $$ The first integral on the right is easy to evaluate, and we get our recurrence, and after a while arrive at the case $n=0$ or $n=1$. Now working "forwards" we can even express our integral as a simple explicit sum.
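If one wants to see the recurrence $\cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x)$ in action, here is a throwaway numeric spot-check in plain Python (a sanity check only, not part of the argument):

    import math, random

    # spot-check the identity cos(nx) = 2 cos(x) cos((n-1)x) - cos((n-2)x)
    for _ in range(1000):
        n = random.randint(2, 20)
        x = random.uniform(-10.0, 10.0)
        lhs = math.cos(n * x)
        rhs = 2 * math.cos(x) * math.cos((n - 1) * x) - math.cos((n - 2) * x)
        assert abs(lhs - rhs) < 1e-9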
Converting a QBF's matrix into CNF, maintaining equisatisfiability I have a fully quantified boolean formula in Prenex Normal Form $\Phi = Q_1 x_1, \ldots Q_n x_n . f(x_1, \ldots, x_n)$. As most QBF solvers expect $f$ to be in CNF, I use Tseitin's Transformation (denoted by $TT$). This does not give an equivalent formula, only an equisatisfiable one, which leads to my question: Does $Q_1 x_1, \ldots Q_n x_n . f(x_1, \ldots, x_n) \equiv Q_1 x_1, \ldots Q_n x_n . TT(f(x_1, \ldots, x_n))$ hold?
To use Tseitin's Transformation for predicate formulas, you'll need to add new predicate symbols of the form $A(x_1, ..., x_n)$. Then the formula $Q_1 x_1, ..., Q_n x_n TT(f(x_1,...,x_n))$ will imply "something" about these new predicate symbols, so the logical equivalence (which I assume is what is meant by $\equiv$) does not hold. However, $Q_1 x_1 ,..., Q_n x_n TT(f(x_1,...,x_n))$ is a conservative extension of $Q_1 x_1, ..., Q_n x_n f(x_1,...,x_n)$; that is, everything provable from $Q_1 x_1, ..., Q_n x_n TT(f(x_1, ..., x_n))$ that does not use the extra symbols is already provable from $Q_1 x_1, ..., Q_n x_n f(x_1, ..., x_n)$.
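To make the transformation itself concrete, here is a minimal propositional Tseitin sketch in Python (the tuple encoding of formulas and the DIMACS-style integer literals are ad hoc choices of mine, not any particular solver's API); in the QBF setting the fresh variables it introduces are usually placed in an innermost existential block:

    from itertools import count

    # formulas as nested tuples: ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g)
    def tseitin(formula):
        """Return DIMACS-style clauses (lists of ints, negative = negated variable)
        of an equisatisfiable CNF, together with the name -> variable-number map."""
        fresh, names, clauses = count(1), {}, []

        def lit(f):
            if f[0] == 'var':
                if f[1] not in names:
                    names[f[1]] = next(fresh)
                return names[f[1]]
            if f[0] == 'not':
                return -lit(f[1])
            a, b, t = lit(f[1]), lit(f[2]), next(fresh)
            if f[0] == 'and':   # clauses encoding t <-> (a and b)
                clauses.extend([[-t, a], [-t, b], [-a, -b, t]])
            else:               # clauses encoding t <-> (a or b)
                clauses.extend([[-t, a, b], [-a, t], [-b, t]])
            return t

        top = lit(formula)
        return clauses + [[top]], names

    clauses, names = tseitin(('or', ('and', ('var', 'a'), ('var', 'b')), ('var', 'c')))
    print(names)      # which integer stands for which original variable
    print(clauses)    # CNF equisatisfiable with (a and b) or c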
Find an absorbing set in the table: fast algorithm Consider an $m\times n$ ($m\leq n$) matrix $P = (p_{ij})$ such that $p_{ij}$ is either $0$ or $1$ and for each $i$ there is at least one $j$ such that $p_{ij} =1$. Denote $s_i = \{1\leq j\leq n:p_{ij} = 1\}$, so $s_i$ is always non-empty. We call a set $A\subseteq [1,m]$ absorbing if $s_i\subset A$ holds for all $i\in A$. If I apply my results directly, I will get an algorithm with complexity $\mathcal{O}(m^2n)$ which finds the largest absorbing set. On the other hand, I was not focused on developing this algorithm, and hence I wonder if you could suggest some faster algorithms. P.S. please retag if my tags are not relevant. Edited: I reformulated the question (otherwise it was trivial). I think this problem can be viewed as searching for the largest loop in the graph (if we connect $i$ and $j$ iff $p_{ij} = 1$).
Since you have to look at every entry at least once to find $A_{\max}$ (the largest absorbing set), the time complexity of any algorithm cannot be lower than $\mathcal{O}(n\times m)$. I think the algorithm below achieves that. Let $A_i$ be the smallest absorbing set containing $i$, or empty if $i$ is not part of any absorbing set. To find $A_i$, the algorithm starts with $s_i$ and joins it with every $A_j$ for $j\in s_i$. It uses caching to avoid calculating $A_j$ twice. $A_{\max}$ should be the union of all $A_i$s.

    A_max := empty set
    for i from 1 to m
        merge A_max with result from explore(i)

    explore(i)
        if i is already explored
            return known result
        else
            for j from m + 1 to n
                if p_ij = 1
                    return empty set
            A_i := empty set
            for j from 1 to m
                if p_ij = 1
                    add j to A_i
                    if i not equal to j
                        A_j = explore(j)
                        if A_j is empty then
                            return empty set
                        else
                            merge A_i with A_j
            return A_i
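For what it's worth, here is a minimal Python sketch of the same idea (the 0-indexed list-of-lists representation and all names are my own). It computes each $A_i$ by an explicit breadth-first closure instead of recursion with caching, which sidesteps any subtlety with cycles at the price of a worst-case $\mathcal{O}(m^2 n)$ running time:

    def largest_absorbing_set(P, m, n):
        """P is an m-by-n 0/1 matrix as a list of lists; rows/columns are 0-indexed.
        Returns the largest absorbing subset of {0, ..., m-1} (possibly empty)."""
        A_max = set()
        for start in range(m):
            closure, frontier, valid = {start}, [start], True
            while frontier and valid:
                i = frontier.pop()
                for j in range(n):
                    if P[i][j]:
                        if j >= m:            # s_i leaves {0,...,m-1}: start is in no absorbing set
                            valid = False
                            break
                        if j not in closure:
                            closure.add(j)
                            frontier.append(j)
            if valid:
                A_max |= closure              # a union of absorbing sets is absorbing
        return A_max

    # tiny example: row 0 points to rows 0 and 1, row 1 points to row 0, row 2 points to column 3
    print(largest_absorbing_set([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]], 3, 4))  # {0, 1}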
How can I complexify the right hand side of this differential equation? I want to get a particular solution to the differential equation $$ y''+2y'+2y=2e^x \cos(x) $$ and therefore I would like to 'complexify' the right hand side. This means that I want to write the right hand side as $q(x)e^{\alpha x}$ with $q(x)$ a polynomial. How is this possible? The solution should be $(1/4)e^x(\sin(x)+\cos(x))$ but I cannot see that.
The point is that (for real $x$) $2 e^x \cos(x)$ is the real part of $2 e^x e^{ix} = 2 e^{(1+i)x}$. Find a particular solution of $y'' + 2 y' + 2 y = 2 e^{(1+i)x}$, and its real part is a solution of $y'' + 2 y' + 2 y = 2 e^x \cos(x)$.
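Here is a small SymPy sketch of that computation, if it helps (the symbol names are mine); it determines the coefficient of $e^{(1+i)x}$ by undetermined coefficients and checks that the real part solves the original equation:

    import sympy as sp

    x = sp.symbols('x', real=True)
    # try y = A*exp((1+i)x) in y'' + 2y' + 2y = 2exp((1+i)x); plug in to solve for A
    A = 2 / ((1 + sp.I)**2 + 2*(1 + sp.I) + 2)
    y_complex = A * sp.exp((1 + sp.I) * x)
    y_p = sp.re(sp.expand_complex(y_complex))      # real part solves the real equation

    print(sp.simplify(y_p))                        # (1/4)*exp(x)*(sin(x) + cos(x)), possibly in an equivalent form
    residual = sp.diff(y_p, x, 2) + 2*sp.diff(y_p, x) + 2*y_p - 2*sp.exp(x)*sp.cos(x)
    print(sp.simplify(residual))                   # 0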
Effect of adding a constant to both Numerator and Denominator I was reading a text book and came across the following: If a ratio $a/b$ is given such that $a \gt b$, and given $x$ is a positive integer, then $$\frac{a+x}{b+x} \lt\frac{a}{b}\quad\text{and}\quad \frac{a-x}{b-x}\gt \frac{a}{b}.$$ If a ratio $a/b$ is given such that $a \lt b$, $x$ a positive integer, then $$\frac{a+x}{b+x}\gt \frac{a}{b}\quad\text{and}\quad \frac{a-x}{b-x}\lt \frac{a}{b}.$$ I am looking for more of a logical deduction on why the above statements are true (than a mathematical "proof"). I also understand that I can always check the authenticity by assigning some values to a and b variables. Can someone please provide a logical explanation for the above? Thanks in advance!
How about something along these lines: Think of a pot of money divided among the people in a room. In the beginning, there are $a$ dollars and $b$ persons. Initially, everyone gets $a/b>1$ dollars since $a>b$. But new people are allowed into the room at a fee of 1 dollar per person. The admission fees are put into the pot. The average will always stay greater than 1, but since each new person is not charged as much as he (or she) is getting back, the average has to drop, and so $$\frac{a+x}{b+x}<\frac ab.$$ Similar reasoning applies to the other inequalities.
Modus Operandi. Formulae for Maximum and Minimum of two numbers with a + b and $|a - b|$ I came across the following problem in my self-study of real analysis: For any real numbers $a$ and $b$, show that $$\max \{a,b \} = \frac{1}{2}(a+b+|a-b|)$$ and $$\min\{a,b \} = \frac{1}{2}(a+b-|a-b|)$$ So $a \geq b$ iff $a-b \ge0$ and $b \ge a$ iff $b-a \ge 0$. At first glance, it seems like an average of distances. For the first case, go to the point $a+b$, add $|a-b|$ and divide by $2$. Similarly with the second case. Would you just break it up in cases and verify the formulas? Or do you actually need to come up with the formulas?
I know this is a little bit late, but here is another way to arrive at that formula. If we want to know $\min(a,b)$, we can tell which number is smaller from the sign of $b-a$. The sign is defined as $sign(x)=\frac{x}{|x|}$ (for $x\neq 0$), and $msign(x)=\frac{sign(x)+1}{2}$ takes the values $0$ or $1$; if $msign(a-b)$ is $1$ it means that $a$ is bigger, and if it is $0$, $a$ is smaller. To pick out the minimum we weight each number by the indicator that the other one is bigger: $a$ gets the weight $msign(b-a)$ and $b$ gets the weight $msign(a-b)$. So we have $$\min(a,b)=msign(b-a)a+msign(a-b)b$$ and $$\max(a,b)=msign(a-b)a+msign(b-a)b,$$ and simplifying (the final formulas remain valid even when $a=b$, where $sign$ is undefined) $$\min(a,b)=\frac{1}{2}\left(a+b-|a-b|\right)$$ $$\max(a,b)=\frac{1}{2}\left(a+b+|a-b|\right)$$ All this comes from these equations: $$\min(a,b)= \begin{cases} a & msign(a-b)=0\\ b & msign(a-b)=1 \end{cases} $$ $$\max(a,b)= \begin{cases} a & msign(a-b)=1\\ b & msign(a-b)=0 \end{cases} $$
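If it helps, here is a quick spot-check of the two closed forms against Python's built-ins (a throwaway sketch; integer inputs are used so the arithmetic stays exact):

    import random

    def min_formula(a, b):
        return (a + b - abs(a - b)) / 2

    def max_formula(a, b):
        return (a + b + abs(a - b)) / 2

    for _ in range(10_000):
        a, b = random.randint(-1000, 1000), random.randint(-1000, 1000)
        assert min_formula(a, b) == min(a, b)
        assert max_formula(a, b) == max(a, b)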
A simple question about Iwasawa Theory There has been a lot of talk over the decades about Iwasawa Theory being a major player in number theory, and one of the most important objects in said theory is the so-called Iwasawa polynomial. I have yet to see an example anywhere of such a polynomial. Is this polynomial hard/impossible to compute? I've read the definition in the standard literature; however, none of the texts/books/papers that I've seen provides any examples of this polynomial. Sigh... Any sightings of those polynomials out there? I would appreciate some feedback on this. Thanks.
Here is a function written for Pari/GP which computes Iwasawa polynomials. See in particular the note.
$F[a] \subseteq F(a)?$ I think this is probably an easy question, but I'd just like to check that I'm looking at it the right way. Let $F$ be a field, and let $f(x) \in F[x]$ have a zero $a$ in some extension field $E$ of $F$. Define $F[a] = \left\{ f(a)\ |\ f(x) \in F[x] \right\}$. Then $F[a]\subseteq F(a)$. The way I see this is that $F(a)$ contains all elements of the form $c_0 + c_1a + c_2a^2 + \cdots + c_na^n + \cdots$ ($c_i \in F$), hence it contains $F[a]$. Is that the "obvious" reason $F[a]$ is in $F(a)$? And by the way, is $F[a]$ standard notation for the set just defined?
(1) Yes, you are correct. Note that $F(a)=\{\frac{f(a)}{g(a)}:f,g\in F[x], g(a)\neq 0\}$; in other words, $F(a)$ is the field of fractions of $F[a]$ and therefore certainly contains $F[a]$. (2) Yes, the notation $F[a]$ is standard for the set you described. Exercise 1: Prove that if $a$ is algebraic over $F$, then $F[a]=F(a)$. (Hint: prove first that $\frac{1}{a}\in F[a]$ (if $a\neq 0$) using an algebraic equation of minimal degree of $a$ over $F$.) Exercise 2: Prove that if $a$ is transcendental over $F$, then $F[a]\neq F(a)$. (Hint: Prove that $F[a]\cong F[x]$ where $F[x]$ denotes the polynomial ring in the variable $x$ over $F$. Note that $F[x]$ is never a field if $F$ is a field.)
Evaluate $\sum\limits_{k=1}^n k^2$ and $\sum\limits_{k=1}^n k(k+1)$ combinatorially $$\text{Evaluate } \sum_{k=1}^n k^2 \text{ and } \sum_{k=1}^{n}k(k+1) \text{ combinatorially.}$$ For the first one, I was able to express $k^2$ in terms of the binomial coefficients by considering a set $X$ of cardinality $2k$ and partitioning it into two subsets $A$ and $B$, each with cardinality $k$. Then, the number of ways of choosing 2-element subsets of $X$ is $$\binom{2k}{2} = 2\binom{k}{2}+k^2$$ So sum $$\sum_{k=1}^n k^2 =\sum_{k=1}^n \binom{2k}{2} -2\sum_{k=2}^n \binom{k}{2} $$ $$ \qquad\qquad = \color{red}{\sum_{k=1}^n \binom{2k}{2}} - 2 \binom{n+1}{3} $$ I am stuck at this point to evaluate the first of the sums. How to evaluate it? I need to find a similar expression for $k(k+1)$ for the second sum highlighted above. I have been unsuccessful this far. (If the previous problem is done then so is this, but it would be nice to know if there are better approaches or identities that can be used.) Update: I got the second one. Consider $$\displaystyle \binom{n+1}{r+1} = \binom{n}{r}+\binom{n-1}{r}+\cdots + \binom{r}{r}$$ Can be shown using recursive definition. Now multiply by $r!$ and set $r=2$
For the first one, $\displaystyle \sum_{k=1}^{n} k^2$, you can probably try this way. $$k^2 = \binom{k}{1} + 2 \binom{k}{2}$$ This can be proved using combinatorial argument by looking at drawing $2$ balls from $k$ balls with replacement. The total number of ways to do this is $k^2$. The other way to count it is as follows. There are two possible options either you draw the same ball on both trials or you draw different balls on both trials. The number of ways for the first option is $\binom{k}{1}$ and the number of ways for the second option is $\binom{k}{2} \times \left( 2! \right)$ Hence, we have that $$k^2 = \binom{k}{1} + 2 \binom{k}{2}$$ $$\displaystyle\sum_{k=1}^{n} k^2 = \sum_{k=1}^{n} \binom{k}{1} + 2 \sum_{k=1}^{n} \binom{k}{2} $$ The standard combinatorial arguments for $\displaystyle\sum_{k=1}^{n} \binom{k}{1}$ and $\displaystyle\sum_{k=1}^{n} \binom{k}{2}$ gives us $\displaystyle \binom{n+1}{2}$ and $\displaystyle \binom{n+1}{3}$ respectively. Hence, $$ \sum_{k=1}^{n} k^2 = \binom{n+1}{2} + 2 \binom{n+1}{3}$$ For the second case, it is much easier than the first case and in fact this suggests another method for the first case. $k(k+1)$ is the total number of ways of drawing 2 balls from $k+1$ balls without replacement where the order is important. This is same as $\binom{k+1}{2} \times \left(2! \right)$ Hence, $$\sum_{k=1}^{n} k(k+1) = 2 \sum_{k=1}^{n} \binom{k+1}{2} = 2 \times \binom{n+2}{3}$$ This suggests a method for the previous problem since $k^2 = \binom{k+1}{2} \times \left(2! \right) - \binom{k}{1}$ (It is easy to give a combinatorial argument for this by looking at drawing two balls from $k+1$ balls without replacement but hide one of the balls during the first draw and add the ball during the second draw) and hence $$\sum_{k=1}^{n} k^2 = 2 \times \binom{n+2}{3} - \binom{n+1}{2} $$
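Both closed forms are easy to spot-check numerically; here is a throwaway Python snippet doing so (math.comb needs Python 3.8 or later):

    from math import comb

    for n in range(1, 200):
        assert sum(k * k for k in range(1, n + 1)) == comb(n + 1, 2) + 2 * comb(n + 1, 3)
        assert sum(k * (k + 1) for k in range(1, n + 1)) == 2 * comb(n + 2, 3)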
Sorting a deck of cards with Bogosort Suppose you have a standard deck of 52 cards which you would like to sort in a particular order. The notorious algorithm Bogosort works like this: * *Shuffle the deck *Check if the deck is sorted. If it's not sorted, goto 1. If it's sorted, you're done. Let B(n) be the probability that Bogosort sorts the deck in n shuffles or less. B(n) is a monotonically increasing function which converges toward 1. What is the smallest value of n for which B(n) exceeds, say, 0.9? If the question is computationally infeasible then feel free to reduce the number of cards in the deck.
An estimate. The probability that Bogosort doesn't sort the deck in a particular shuffle is $1 - \frac{1}{52!}$, hence $1 - B(n) = \left( 1 - \frac{1}{52!} \right)^n$. Since $$\left( 1 - \frac{x}{n} \right)^n \approx e^{-x}$$ for large $n$, the above is approximately equal to $e^{- \frac{n}{52!} }$, hence $B(n) \approx 0.9$ when $$- \frac{n}{52!} \approx \log 0.1 \approx -2.30.$$ This gives $$n \approx 2.30 \cdot 52! \approx 2.30 \cdot \sqrt{104\pi} \left( \frac{52}{e} \right)^{52} \approx 1.86 \times 10^{68}$$ by Stirling's approximation. By comparison, the current age of the universe is about $4.33 \times 10^{17}$ seconds, or about $4.33 \times 10^{32}$ floating-point operations if your computer runs at $1$ petaflops.
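For the record, the same estimate in a couple of lines of Python (names are mine); log1p is used because $1-\frac{1}{52!}$ rounds to exactly $1$ in double precision, which would make a naive log(1 - p) useless:

    from math import factorial, log, log1p

    p = 1 / factorial(52)              # probability that a single shuffle sorts the deck
    # smallest n with B(n) = 1 - (1 - p)**n >= 0.9, i.e. n = log(0.1) / log(1 - p)
    n = log(0.1) / log1p(-p)
    print(f"about {n:.3e} shuffles")   # roughly 1.86e+68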
Finding double coset representatives in finite groups of Lie type Is there a standard algorithm for finding the double coset representatives of $H_1$ \ $G/H_2$, where the groups are finite of Lie type? Specifically, I need to compute the representatives when $G=Sp_4(\mathbb{F}_q)$ (I'm using $J$ the anti diagonal with top two entries $1$, and the other two $-1$), $H_1$ is the parabolic with $4=2+2$, and $H_2=SL_2(\mathbb{F}_q)\ltimes H$, where $H$ is the group of matrices of the form: $$\begin{bmatrix} 1&x&y&z \\\\ 0&1&0&y \\\\ 0&0&1&-x \\\\ 0&0&0&1 \end{bmatrix}$$ which is isomorphic to the Heisenberg group, and $SL_2$ is embedded in $Sp_4$ as: $$\begin{bmatrix} 1&&& \\\\ &a&b& \\\\ &c&d& \\\\ &&&1 \end{bmatrix}$$
Many such questions yield to using Bruhat decompositions, and often succeed over arbitrary fields (which shows how non-computational it may be). Let P be the parabolic with Levi component GL(2)xSL(2). Your second group misses being the "other" maximal proper parabolic Q only insofar as it misses the GL(1) part of the Levi component. Your double coset space fibers over $P\backslash G/Q$. It is not trivial, but is true that P\G/Q is in bijection with $W_P\backslash W/W_Q$, with W the Weyl group and the subscripted version the intersections with the two parabolics. This is perhaps the chief miracle here. Since the missing GL(1) is normalized by the Weyl group, the fibering is trivial. Then some small bit of care is needed to identify the Weyl group double coset elements correctly (since double coset spaces do not behave as uniformly as "single" coset spaces). In this case, the two smaller Weyl groups happen to be generated by the reflections attached to the two simple roots, and the Weyl group has a reasonable description as words in these two generators.
Alternative to imaginary numbers? In this video, starting at 3:45 the professor says There are some superb papers written that discount the idea that we should ever use j (imaginary unit) on the grounds that it conceals some structure that we can explain by other means. What is the "other means" that he is referring to?
Maybe he meant the following: A complex number $z$ is in the first place an element of the field ${\mathbb C}$ of complex numbers, and not an $a+bi$. There are indeed structure elements which remain hidden when thinking in terms of real and imaginary parts only, e.g., the multiplicative structure of the set of roots of unity.
Angle of a javelin at any given moment I am using the following formula to draw the trajectory of a javelin (this is very basic, I am not taking into consideration the drag, etc.).

    speedX = Math.Cos(InitialAngle) * InitialSpeed;
    speedY = Math.Sin(InitialAngle) * InitialSpeed;
    javelin.X = speedX * timeT;
    javelin.Y = speedY * timeT - 0.5 * g * Math.Pow(timeT, 2);

How do I know at what angle my javelin is for a given timeT?
I am making the assumption that the javelin is pointed exactly in the direction of its motion. (This seems dubious, but may be a close enough approximation for your purposes). The speed in the X direction is constant, but the speed in the Y direction is $\text{speedY} -g\cdot \text{timeT}$. So the direction of motion has angle $\text{angle}\theta$ from the positive X direction satisfying $$\tan(\text{angle}\theta)=\frac{\text{speedY}-g\cdot\text{timeT}}{\text{speedX}}.$$ If the initial angle is in $\left(0,\frac{\pi}{2}\right)$, then the angle always lies in $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, and you can use the ordinary $\arctan$ function to get $\text{angle}\theta$.
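If a code version is useful, here is the same computation as a small Python function (the question's snippet looks like C#, but the translation is direct). It uses atan2 rather than a plain arctangent so that it also behaves sensibly if speed_x were ever zero or negative:

    import math

    def javelin_angle(initial_speed, initial_angle, t, g=9.81):
        """Angle of the javelin at time t, in radians from the positive X axis,
        assuming the javelin always points along its velocity vector."""
        speed_x = math.cos(initial_angle) * initial_speed
        speed_y = math.sin(initial_angle) * initial_speed - g * t
        return math.atan2(speed_y, speed_x)

    # example: launched at 25 m/s and 40 degrees, after 1.5 seconds of flight
    print(math.degrees(javelin_angle(25.0, math.radians(40.0), 1.5)))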
Is there a name for the matrix $X(X^tX)^{-1}X^{t}$? In my work, I have repeatedly stumbled across the matrix (with a generic matrix $X$ of dimensions $m\times n$ with $m>n$ given) $\Lambda=X(X^tX)^{-1}X^{t}$. It can be characterized by the following: (1) If $v$ is in the span of the column vectors of $X$, then $\Lambda v=v$. (2) If $v$ is orthogonal to the span of the column vectors of $X$, then $\Lambda v = 0$. (we assume that $X$ has full rank). I find this matrix neat, but for my work (in statistics) I need more intuition behind it. What does it mean in a probability context? We are deriving properties of linear regressions, where each row in $X$ is an observation. Is this matrix known, and if so in what context (statistics would be optimal but if it is a celebrated operation in differential geometry, I'd be curious to hear as well)?
This should be a comment, but I can't leave comments yet. As pointed out by Rahul Narain, this is the orthogonal projection onto the column space of $X$.
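In regression texts this matrix is usually called the hat matrix (or influence/projection matrix), since it sends the observations $y$ to the fitted values $\hat y = \Lambda y$. Here is a minimal NumPy check of the two defining properties, with arbitrary random test data:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10, 3))          # m > n; full column rank with probability 1
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # the projection onto the column space of X

    v_in = X @ rng.standard_normal(3)         # a vector in the column space of X
    Q, _ = np.linalg.qr(X, mode='complete')
    v_out = Q[:, 3:] @ rng.standard_normal(7) # a vector orthogonal to the column space

    print(np.allclose(H @ H, H), np.allclose(H, H.T))      # idempotent and symmetric
    print(np.allclose(H @ v_in, v_in), np.allclose(H @ v_out, 0))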
Dense and locally compact subset of a Hausdorff space is open Let $X$ be a Hausdorff space and let $D \subseteq X$ be locally compact and dense in $X$. Why is $D$ open? I can see that $D$ is regular but don't see why $D$ is in fact open.
Here is a straightforward proof inspired by Theorem 2.70 in Aliprantis and Border's Infinite Dimensional Analysis (3rd ed., p. 56). Let $p \in D$. Since $D$ is locally compact, there is a neighborhood $U$ of $p$ in $D$ which is compact in $D$, and a neighborhood $V$ of $p$ in $D$ which is open in $D$ with $V \subset U$. First, it is easy to see that $U$ is also compact in $X$. Since $X$ is Hausdorff, this implies that $U$ is closed (see, for example, Proposition 4.24 in Folland's Real Analysis (2nd ed.), p.128), and consequently $\overline{U}=U$. By definition, there is an open set $W$ in the topology of $X$ such that $V = W\cap D$. Since $D$ is dense in $X$ and $W$ is open, it follows that $$W \subset \overline{W} = \overline{W \cap D} = \overline{V} \subset \overline{U} = U \subset D.$$ Hence for every $p \in D$ there is a neighborhood of $p$ open in $X$ which is included in $D$, i.e., $D$ is open in $X$.