Birthday paradox for adjacent dates: exact probability I've seen a lot of questions on here that answer the birthday paradox, e.g. the probability that no two of $n$ people in a room share the same birthday, where a year is 365 days. So I won't rehash the proof that the formula is $$p(n) = \frac{364}{365} \cdot \frac{363}{365} \cdots \frac{365-(n-1)}{365}$$ But I was wondering about the exact probability $p(n)$ that two people either share the same birthday or have a birthday one day apart. I was able to figure things out roughly. Each day you choose most likely wipes out two other days, so my initial guess $q(n)$ was $$q(n) \approx \frac{362}{365} \cdot \frac{359}{365} \cdots \frac{365-3(n-1)}{365}$$ The problem with this is that person $A$ may have a birthday on January 3, and if person $B$ has one on January 1, then the 2nd term of $q(n)$ is $360/365$ and not $359/365$. Now if we just want to see the number of people needed to make $p(n)$ be about $.5$, this won't matter much. I can do a bit of hand waving to show that if $Z$ is the number such that $p(Z) < .5 < p(Z-1)$, then the $Z'$ with $q(Z') < .5 < q(Z'-1)$ is roughly $Z/\sqrt{3}$. So if we write the formula for $q'(n)$ as $$q'(n) = \prod_{i=0}^n r(i)$$ where $r(i)$ is the probability that person $i$ has no birthday equal or adjacent to those of the previous $i-1$ people, there are a few observations. $r(n)$ must be close to $\frac{368-3n}{365}$ and it clearly must be more than $\frac{369-2n}{365}$ because, except for December 31, each date takes out the next day...or there's been a date ahead of it that takes out the next and previous days. For the purposes of finding $Z'$, $r(n)$ seems much closer to $\frac{368-3n}{365}$, as the possibility that day $X$ lands exactly 2 apart from one of days $1\dots X-1$ (so the blocked ranges overlap) is quite low at step $Z'$, less than $\frac{3Z'-3}{365}$, in fact. But in the general case the estimation doesn't work so well, and I am wondering about an exact value, because I got lost in the calculations. Any help? Thanks!
A: two or more persons share a birthday; B: two or more have adjacent birthdays. We are interested in the probability: $\mathbb{P}{\{A \cup B\}} = 1 - \mathbb{P}{\{(A \cup B)'\}} = 1 - \mathbb{P}{\{A' \cap B'\}} = 1 - \mathbb{P}{\{A'\}}\ \mathbb{P}{\{B'\ |\ A'\}}$ I can work out the example for 3 persons, with $d$ days in a year: $1 - \frac{d - 1}{d} \frac{d - 2}{d} \cdot \frac{d-3}{d-1} \left(\frac{2}{d - 1} \frac{d - 5}{d - 2} + \left(1 - \frac{2}{d - 1}\right) \frac{d - 6}{d - 2}\right)$ The big expression in parentheses is the (double-conditioned) probability of $B'$, given $A'$, conditioned on where the birthday of person #2 landed in relation to #1, either exactly two days off or not. I think one can condition the general term $k$ of $\mathbb{P}{\{B'\ |\ A'\}}$ -- which relates to person $k+1$ -- in this way, by summing over $i \in \{0, 1, ..., k\}$ with $i$ the number of persons that have birthdays exactly two days off each other.
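For concreteness, here is a Monte Carlo sanity check (my own sketch, not part of the original posts) against which any exact formula derived from the conditioning above can be tested. It uses $d=365$ and, following the question's remark about December 31, treats adjacency without wrap-around; the function name and the trial count are arbitrary choices.

```python
import random

def prob_shared_or_adjacent(n, d=365, trials=100_000):
    """Monte Carlo estimate of P(A ∪ B): among n people, some pair has
    equal birthdays or birthdays one day apart (no Dec 31 / Jan 1 wrap)."""
    hits = 0
    for _ in range(trials):
        days = [random.randrange(d) for _ in range(n)]
        s = set(days)
        if len(s) < n or any(x + 1 in s for x in s):
            hits += 1
    return hits / trials

for n in (10, 14, 20):
    print(n, prob_shared_or_adjacent(n))
```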
Special Orthogonal Group $SO(2)$ The special orthogonal group for $n=2$ is defined as: $$SO(2)=\big\{A\in O(2):\det A=1\big\}$$ I am trying to prove that if $A\in SO(2)$ then: $$A=\left(\begin{array}{cc} \cos\theta& -\sin\theta\\ \sin\theta&\cos\theta \end{array}\right)$$ My idea is to show that $\Phi:S^1\to SO(2)$ defined as: $$z=e^{\theta i}\mapsto \Phi(z)=\left(\begin{array}{cc} \cos\theta& -\sin\theta\\ \sin\theta&\cos\theta \end{array}\right)$$ is an isomorphism of Lie groups. It is easy to prove that it is a monomorphism of Lie groups. How can I prove that it is also surjective?
Let $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in \mathrm{SO}(2)$. Then, $$\begin{pmatrix}a&c\\b&d\end{pmatrix}\begin{pmatrix}a&b\\c&d\end{pmatrix}=\begin{pmatrix}a^2+c^2&ab+cd\\ab+cd&b^2+d^2\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$$ and $$\det\begin{pmatrix}a&b\\c&d\end{pmatrix}=ad-bc=1.$$ Thus, $\mathrm{SO}(2)$ is the subset of $\mathbb{R}^4$ satisfying the following four equations: $$ \begin{align*} a^2+c^2 &= 1 \\ b^2+d^2 &= 1\\ ad-bc &= 1\\ ab+cd &= 0. \end{align*} $$ The first two equations imply that $(a,c)$ and $(b,d)$ lie on the unit circle, so $$a=\cos\alpha,\quad c=\sin\alpha,\quad b=\cos\beta,\quad d=\sin\beta$$ for some angles $\alpha,\beta\in\Bbb R$. Inserting in the last two equations, we get $$ \begin{align*} \cos\alpha\sin\beta-\cos\beta\sin\alpha &= 1 \\ \cos\alpha\cos\beta+\sin\alpha\sin\beta &= 0. \end{align*} $$ Using the angle sum trigonometric identities, these equations are $$ \begin{align*} \sin(\beta-\alpha) &= 1 \\ \cos(\beta-\alpha) &= 0. \end{align*} $$ Hence, $\beta-\alpha\in \pi/2+2\pi\Bbb Z$, so $b=\cos\beta=-\sin\alpha$ and $d=\sin\beta=\cos\alpha$, and we get $$\begin{pmatrix}a&b\\c&d\end{pmatrix}=\begin{pmatrix}\cos\alpha&-\sin\alpha\\\sin\alpha&\cos\alpha\end{pmatrix}.$$
Which of these set systems is a sigma algebra? Given the set $Ω = \left\{ 3,4,5,6,7\right\} $, determine which of the given set systems is a sigma algebra and justify it. $1. \left\{\left\{\right\},\left\{3\right\},\left\{5\right\},\left\{3,4\right\},\left\{3,4,6,7\right\},\left\{3,4,5,6,7\right\}\right\}$ $2. \left\{\left\{\right\},\left\{5\right\},\left\{3,4,6,7\right\},\left\{3,4,5,6,7\right\}\right\}$ $3. \left\{\left\{\right\},\left\{3\right\},\left\{4\right\},\left\{5\right\},\left\{6\right\},\left\{7\right\},\left\{3,4\right\},\left\{3,5\right\},\left\{3,6\right\},\left\{3,7\right\},\left\{4,5\right\},\left\{4,6\right\},\left\{4,7\right\},\left\{5,6\right\},\left\{5,7\right\},\left\{6,7\right\},\left\{3,4,5\right\},\left\{3,4,6\right\},\left\{3,4,7\right\},\left\{3,5,6\right\},\left\{3,5,7\right\},\left\{3,6,7\right\},\left\{4,5,6\right\},\left\{4,5,7\right\},\left\{4,6,7\right\},\left\{5,6,7\right\},\left\{3,4,5,6\right\},\left\{3,4,5,7\right\},\left\{3,4,6,7\right\},\left\{3,5,6,7\right\},\left\{4,5,6,7\right\},\left\{3,4,5,6,7\right\}\right\}$ $4. \left\{\left\{\right\},\left\{5\right\}\right\}$ I think that 1. and 3. are sigma algebras. Is that correct, and if so, how do I justify it?
1. is not a $\sigma$-algebra because $\{3\}\cup \{5\}=\{3,5\}$ is not in 1.
2. is a $\sigma$-algebra because $\{\}$, $\Omega$, and all the possible complements, intersections and unions of elements of 2. are in 2.
3. works because all the $2^{|\Omega|}=32$ subsets of $\Omega$ are in 3.
4. does not work because $\Omega$ is not in 4.
A real function problem Let $f : \mathbb{R} \to \mathbb{R}$ be a function with continuous derivative such that $f(\sqrt{2}) = 2$ and $$f(x) = \lim_{t \to 0}\dfrac{1}{2t}\int_{x-t}^{x+t}sf'(s)\,ds \ \text{for all} \ x \in \mathbb{R}.$$ Then $f(3)$ equals $(a) \ \ \sqrt{3} \hspace{1.25 in} (b) \ \ 3\sqrt{2} \hspace{1.25 in} (c) \ \ 3\sqrt{3} \hspace{1.25 in} (d) \ \ 9$ Can someone help me solve this problem on real functions? Here is my try: I differentiated both sides with respect to $x$ and got $f'(x)=0$, so I concluded that $f(x)$ is a constant function. So the answer should be $2$. But $2$ is not an option. How to solve this?
We have: \begin{align} f(x)&=\lim_{t \rightarrow 0}\frac{1}{2t}\int_{x-t}^{x+t}sf'(s)\,ds\\ &=\lim_{t \rightarrow 0}\frac{1}{2t}sf(s)\,\Big|_{x-t}^{x+t}-\lim_{t \rightarrow 0}\frac{1}{2t} \int_{x-t}^{x+t}f(s)\,ds\\ &=\lim_{t \rightarrow 0}\frac{1}{2t}sf(s)\,\Big|_{x-t}^{x+t}-f(x) \end{align} or \begin{align} 2f(x)&=\lim_{t \rightarrow 0}\frac{1}{2t}s f(s)\,\Big|_{x-t}^{x+t}\\ &=\lim_{t \rightarrow 0}\frac{1}{2t}\cdot \left[ (x+t)f(x+t) - (x-t)f(x-t) \right]\\ &=\lim_{t \rightarrow 0}\frac{1}{2t}\cdot \left[ x(f(x+t) - f(x-t)) +t(f(x+t) + f(x-t)) \right]\\ &=x \lim_{t \rightarrow 0}\frac{f(x+t) - f(x-t)}{2t} + \lim_{t \rightarrow 0} \frac{f(x+t) + f(x-t)}{2}\\ &=xf'(x) + f(x) \end{align} Or $$f(x)=xf'(x)$$ which is $$f(x)=c x$$ or, from $f(\sqrt{2})=2$, $$f(x)=\sqrt{2} x$$ the answer is (b)
Nonabelian groups of order $p^3$ using the centre and semidirect products I'm stuck in the classification of groups of order $p^3$. These were the steps I followed. I showed that if $|Z(G)|=p^2$ or $p^3$ then $G$ is abelian. Hence the order of the centre is $p$. As the centre is normal, $G$ can be written as a semidirect product $H\rtimes K$, where $H$ is $Z(G)$ and $K$ is a group of order $p^2$. Let $\pi$ be the homomorphism from $K$ to $\operatorname{Aut}(H)$. $\operatorname{Aut}(H)$ is isomorphic to $Z_{p-1}$. Now there are 2 cases depending on what $K$ is: (1) $K$ is $Z_{p^2}$. Now $\pi : Z_{p^2} \to Z_{p-1}$. There aren't any homomorphisms (except the trivial one), as no element in $Z_{p-1}$ has order $p$ or $p^2$. (2) $K$ is $Z_p\times Z_p$. Again the same argument as in (1) holds and there are no homomorphisms except the trivial one. If the homomorphism is trivial then the group becomes abelian ($Z_{p^2}\times Z_p$ and $Z_p\times Z_p \times Z_p$ respectively). I ended up proving that there is no nonabelian group of order $p^3$, which is not true, hence the proof is wrong somewhere. I couldn't find the mistake. Any help/partial progress will be appreciated.
Since $H$ is the center, $H$ commutes with everything, so in particular with $K$. Thus the resulting homomorphism is trivial (and the semidirect product would in fact be a direct product). Moreover, there is no $K$ in $G$ such that $G=H\times K$, since every nontrivial normal subgroup of $G$ intersects $Z(G)$ nontrivially.
Conjecture: $\lim_{N \to +\infty}\frac{1}{N}\sum_{k=1}^{N}\frac{\phi(k)}{k}=\frac{6}{\pi^2}$ I was playing around with $f(N)=\frac{1}{N}\sum_{k=1}^{N}\frac{\phi(k)}{k}$ and I found with Wolfram that $f(10,000)=0.607938$, which I noticed was very close to $\frac{6}{\pi^2}$. I am led to make the following Conjecture: $\lim_{N \to +\infty}\frac{1}{N}\sum_{k=1}^{N}\frac{\phi(k)}{k}=\frac{6}{\pi^2}$ Well, is it true? Note that it's obvious that the sum is bounded above by $1$ (since $\phi(k)/k<1$), so it definitely doesn't diverge to infinity. It's also almost always decreasing. So it most likely converges.
Here's a start: It is well known that $(a,b) = 1$ for random $a,b$ with probability $6/\pi^2.$ Then $\phi(k) \sim k \cdot \frac{6}{\pi^2}$ for large $k$, because the density of primes is zero and by the definition of $\phi(x)$, so $$\frac{1}{n}\sum_{n\geq k\geq 1}\frac{\phi(k)}{k}\sim \frac{1}{n}\sum_{n\geq k\geq 1} \frac{6}{\pi^2}=\frac{6}{\pi^2}.$$ However this is not very rigorous.
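As a quick numerical check of the conjecture (not part of the original posts), the following sketch computes the average directly; the totient implementation and the cutoff $N$ are arbitrary choices.

```python
from math import pi

def phi(n):
    # Euler's totient via trial-division factorization, fine for small n
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

N = 100_000
avg = sum(phi(k) / k for k in range(1, N + 1)) / N
print(avg, 6 / pi**2)   # both are approximately 0.6079
```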
Suppose a die is rolled $14$ times, and let $X$ be the number of faces that appear exactly 2 times. How can I find $E(X)$? It seems that this question is relatively straightforward as it resembles the binomial. However, I cannot complete the entire argument if I try to do it by considering $X=0$, $X=1$, and so on and then computing the probability of each. Does anyone have any ideas?
I am using the binomial distribution to solve this question. $p(\text{particular face}) = \frac{1}{6}$ and $q(\text{not that face}) = 1 - \frac{1}{6} = \frac{5}{6}$. Now according to the binomial distribution, $P(Y=r) = C(n,r)\,(q)^{n-r} (p)^r$, where $n$ is the number of rolls and $Y$ is the number of times the particular face appears. Here $r = 2$ and $n = 14$, so $P(Y=2) = C(14,2) \cdot \left(\frac{5}{6}\right)^{12} \cdot \left(\frac{1}{6}\right)^2$. There are exactly 6 faces, so summing over them (linearity of expectation) gives $E(X) = 6 \cdot C(14,2) \cdot \left(\frac{5}{6}\right)^{12} \cdot \left(\frac{1}{6}\right)^2$.
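A short simulation (not part of the original answer) confirming that $E(X)=6\cdot C(14,2)(1/6)^2(5/6)^{12}\approx 1.70$; the trial count is an arbitrary choice.

```python
import random
from math import comb

exact = 6 * comb(14, 2) * (1 / 6) ** 2 * (5 / 6) ** 12
trials = 200_000
total = 0
for _ in range(trials):
    counts = [0] * 6                  # how often each face shows in 14 rolls
    for _ in range(14):
        counts[random.randrange(6)] += 1
    total += sum(1 for c in counts if c == 2)
print(exact, total / trials)          # ~1.701 for both
```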
A more concise way of showing the limit of $a_n := (n^{3n})/(n3^n)$ I am trying to show the limit of the sequence $a_n:=\frac{n^{3n}}{n3^n}$ I have done it using the ratio test for limits as I will illustrate below, but I feel like there is an easier method but cannot find one; it would sate my curiosity to see it done in less lines: Let $a_{n+1}:=\frac{(n+1)^{3n+3}}{(n+1)3^{n+1}}$ $$\frac{a_{n+1}}{a_n}=\frac{(n+1)^{3n+3}}{(n+1)3^{n+1}}\times\frac{n3^n}{n^{3n}}=\frac{n}{3}(n+1)^2(1+\frac{1}{n})^{3n}\ge \frac{n}{3}(n+1)$$ We are utilising the comparison test to observe that $\frac{a_{n+1}}{a_n}$ is greater than a sequence which clearly tends to infinity, and thus by the ratio test $lim\frac{a_{n+1}}{a_n}>1$ as n tends to infinity, so the limit of $a_n=+\infty$ as $n \rightarrow+\infty$ Could anyone provide a shorter method which doesn't use the ratio test maybe? Thanks!
$$ a_n=\frac{n^{3n}}{n3^n}=\frac{n^{3n-1}}{3^n}=\left(\frac{n}{3}\right)^{n}\cdot n^{2n-1} $$ Do you see why this tends to infinity?
Prove the uniqueness of $\sin x_i=1$ (only) in this problem If $$\sin x_1+ \sin x_2+ \sin x_3=3 $$ then prove that $$\cos x_1+ \cos x_2+ \cos x_3=0 $$ My try: since each $\sin x_i$ must equal $1$, each $\cos x_i=0$, so $\cos x_1+ \cos x_2+ \cos x_3=0$. But is there a better proof than this? What I mean is to prove that $\sin x_i=1$ (for every $i$) is the only possibility.
Since $\sin x \le 1$, we have $$\sin x_1+\sin x_2 + \sin x_3 \le 3$$ and the equality holds only if $\sin x_1=\sin x_2 = \sin x_3=1 \quad (1)$, because if for some $i$ we had $\sin x_i <1 $ then: $$\sin x_1+\sin x_2 + \sin x_3 < 3$$ From $(1)$ we then conclude that: $$\cos x_1=\cos x_2 = \cos x_3=0$$
Summation of a series $1+\frac{1}{3}-\frac{1}{2}+\frac{1}{5}+\frac{1}{7}-\frac{1}{4}++-\cdots$ I need to calculate the sum of the series: $T_{3n}=$ $1+\frac{1}{3}-\frac{1}{2}+\frac{1}{5}+\frac{1}{7}-\frac{1}{4}++-\cdots$ I know that $T_{3n}=\sum_{k=1}^{n}\left(\frac{1}{4k-3}+\frac{1}{4k-1}-\frac{1}{2k}\right)$. And they gave a hint that $u_n=S_n-I_n$, where $S_n=\sum_{k=1}^n \frac{1}{k}$ and $I_n=\log(n)$, converges. Can anyone give me a direction? I've tried to write $T_{3n}$ in terms of $S_n$ but without success.
Let $H_n=\sum\limits_{k=1}^n\frac{1}{k}$. Then, as is well known, $H_n-\ln n$ converges to $\gamma$. Now $$T_{3n}=H_{4n-1}-\frac{1}{2}H_{2n-1}-\frac{1}{2}H_n,$$ thus $$T_{3n}-\ln (4n-1)+\frac{1}{2}\ln (2n-1)+\frac{1}{2}\ln n \to 0,$$ and since $\ln (4n-1)-\frac{1}{2}\ln (2n-1)-\frac{1}{2}\ln n \to \ln\frac{4}{\sqrt{2}}=\frac{3}{2}\ln 2$, we get $$T_{3n}\to \frac{3}{2}\ln 2.$$
Choose $a, b$ so that $\cos(x) - \frac{1+ax^2}{1+bx^2}$ is an infinitesimal of as high an order as possible as ${x \to 0}$, using Taylor polynomials $$\cos(x) - \frac{1+ax^2}{1+bx^2} \text{ as } x \to 0$$ If $\displaystyle \cos(x) = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \cdots $ then we should choose $a, b$ in such a way that the Taylor series of the second term is close to this. However, I'm not sure how to approach this. I tried to take several derivatives of the second term to see its values at $x_0 = 0$, but it becomes complicated and I don't see a general formula for the $n$-th derivative at zero to find $a$ and $b$.
Your function is $$\cos(x)-\frac{a}{b}-\frac{1-\frac{a}{b} }{ bx^2+1 }$$ $$=1-\frac{x^2}{2}+\frac{x^4}{24}-\frac{a}{b}-1+\frac{a}{b}+bx^2-ax^2-b(b-a)x^4+x^4\epsilon(x).$$ Thus, we need $b-a=\frac{1}{2}$ and $b(b-a)=\frac{1}{24}$, which gives $b=\frac{1}{12}$ and $a=-\frac{5}{12}$.
Is there an equation for every graph? Is it possible to write a formula for any line imaginable, for example a drawing? I once saw this long, and complicated, formula made by a mathematician with loads of ceilings etc. The explanation was something to do with something he was programming for computers using pixels - I'm really not sure - but basically, the entire graph had every single pixel drawing there was, and then limits were set to section off the image you wanted. Is this possible with actual lines, not pixels. -- I did a search and it's this: Tupper's self-referential formula
There are fantastic formulas, like the Batman equation, but in general I believe it is not possible. If we restrict ourselves to the graphs of continuous functions of one variable, for our formulas we usually stick to a set of elementary functions, which is exhausted rapidly for more exotic cases and must be extended by special functions, integral functions, solutions of differential equations, etc.
Limit of $\lim_{x \to \infty} f(x)$ using power series of $f$ Suppose we want to find $\lim_{x \to \infty} f(x)$. Moreover, we know that $f(x)$ has a power series expansion for all $x \in \mathbb{R}$ \begin{align} f(x)=\sum_{n=0}^\infty a_n x^n \end{align} and we have a nice closed expression for all $a_n$'s. My question: can we compute $\lim_{x \to \infty} f(x)$ based on the knowledge of the $a_n$'s? Case 1: If $a_n \ge 0$ then \begin{align} f(x) \ge a_0+a_1x \end{align} so \begin{align} \lim_{ x\to \infty} f(x)= \infty. \end{align} Case 2: An interesting case which I would like to understand is when the $a_n$'s have alternating signs and $|a_{n+1}| \ge |a_n|$. Thanks for any help.
It seems like a very delicate question; take for example $f(x)=\sin(x)=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}x^{2k+1}$ and $g(x)=\frac{\sin(x)}{x}=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}x^{2k}$. Then $f$ and $g$ both have infinite radius of convergence and very similar coefficients, but $\lim_{x\to\infty}f(x)$ doesn't exist, while $\lim_{x\to\infty}g(x)=0$. So it seems that it's difficult to directly tell something about the limit solely by considering the sequence of coefficients, without calculating $f$ (and thereby the limit) explicitly.
Is $\sum\limits_{n=1}^{\infty}\frac{1}{n^k+1}=\frac{1}{2} $ for $k \to \infty$? This series $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^k+1}$ is convergent for every $k>1$; it seems that it has a closed form for every $k >1$, and some calculations in Wolfram Alpha show that the sum approaches $\frac{1}{2}$ for large $k$. My question here is: Question: Does $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^k+1}\to\frac{1}{2} $ for $k \to \infty$?
Yes, of course; we have the following: $\frac{1}{2}\leq \sum\limits_{n=1}^\infty \frac{1}{n^k+1}\leq\frac{1}{2}+\int\limits_1^\infty \frac{1}{x^k+1}dx\leq \frac{1}{2}+ \int\limits_1^\infty \frac{1}{x^k}dx=\frac{1}{2}+\frac{1}{k-1}$. Now use the squeeze theorem.
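A quick numerical illustration of the squeeze (not part of the original answer); the truncation point is an arbitrary choice and only needs to make the tail negligible.

```python
def s(k, terms=100_000):
    # truncated series; the neglected tail is tiny for these k
    return sum(1.0 / (n**k + 1) for n in range(1, terms + 1))

for k in (2, 5, 10, 20):
    print(k, s(k), 0.5 + 1.0 / (k - 1))   # sum stays between 1/2 and the bound
```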
Stars and bars with at least 1 group of 2 adjacent bars I am trying to figure out the number of $n$ permutations for $N$ stars and $K \ge{2}$ bars such that there is at least 1 group of 2 adjacent bars. For example, in this trivial example $N = 1$ and $K = 3$ then $n = 4$ $|||* \\ ||*| \\ |*|| \\ *|||$ This coincides with your typical counting argument $\frac{4!}{3!1!}$. However, for a more non-trivial example where $N = 2$ and $K = 3$, we have $n = 9$. My initial instinct was to group the 2 bars as a "group" and permute it with the rest of the stars and bars (so for $N = 2$ and $K = 3$, we have $\frac{4!}{1!1!2!} = 12 \ne 9$). The only counting argument I can come up with is to find the total # of permutations and subtract the # of permutations where there are no adjacent bars ($\frac{5!}{3!2!} - 1 = 9$), but this would involve tedious case work with groups of $*|*$ for the # of perms without adjacent bars. Clearly this does not work when $K > 2N$.
Think of it as selecting the positions of the $k$ bars among the $n+k$ objects such that no two are consecutive. This can be done in $\binom{n+1}{k}$ ways (see here). So the answer is $\binom{n+k}{k}-\binom{n+1}{k}$.
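A small brute-force check of this formula (not part of the original posts), enumerating bar positions directly for a few $(N, K)$ pairs, including the $N=2, K=3$ example from the question.

```python
from itertools import combinations
from math import comb

def count_with_adjacent_bars(N, K):
    # place K bars among the N+K positions; count arrangements in which
    # at least one pair of bars sits in adjacent positions
    total = 0
    for bars in combinations(range(N + K), K):
        if any(b - a == 1 for a, b in zip(bars, bars[1:])):
            total += 1
    return total

for N, K in [(1, 3), (2, 3), (4, 2), (3, 5)]:
    formula = comb(N + K, K) - comb(N + 1, K)
    print(N, K, count_with_adjacent_bars(N, K), formula)   # the two counts agree
```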
Proof verification for $x_ny_n$ converges if and only if $x_n,y_n$ both converge. The first part of the statement is false, because if we let $x_n=(-1)^n$ and $y_n=(-1)^{n+2}$, which are both divergent sequences, but $x_ny_n=(-1)^n(-1)^{n+2}=(-1)^{2n+2}$ is convergent to 1. But I think the second part of this statement is true. To prove the second part, suppose $x_n \rightarrow L$ and $y_n \rightarrow M$ for some $L,M \in \mathbb{R}$. Now, let $N \in \mathbb{N}$ such that $n \geq N$ implies $|x_n-L| \lt \epsilon$ and $|y_n-M| \ \lt \epsilon$ where $N$ corresponds to $\epsilon \gt 0$. $|(x_n-L)(y_n-M)|=|(x_n-L)||(y_n-M)| \lt \epsilon^2 \lt \epsilon$, which is true when $0 \lt \epsilon \lt 1$. Expanding the two factors, $|(x_n-L)(y_n-M)| = |x_ny_n-x_nM-y_nL+LM|=$ $|x_ny_n-x_nM-y_nL+LM-2LM+2LM|$ = $|x_ny_n-LM-x_nM+LM-y_nL+LM|=$ $|x_ny_n-LM-M(x_n-L)-L(y_n-M)| \geq |x_ny_n-LM| - |M(-x_n+L)|-|L(-y_n+M)| =\\ |x_ny_n-LM| - |M||(-x_n+L)|-|L||(-y_n+M)| $ Since $|x_n-L| \lt \epsilon$ and $|y_n-M| \lt \epsilon$, the last two terms will go to $0$. Thus, we are left with $|x_ny_n-LM| \leq |(x_n-L)(y_n-M)| \lt \epsilon^2 \lt \epsilon$. Only thing that bothers me is my restriction for values of $\epsilon.$ To be exact, $\epsilon \geq 1$ should also be allowed, but for that case, my proof won't work. But I thought since $\epsilon$ is some number that is very very small, I assumed I could do such restriction. Is this okay to do? Edit: Also, I used $|a-b| \geq |a|-|b|$, is this true?
Your proof is correct. However, I would replace "Since $|x_n-L| \lt \epsilon$ and $|y_n-M| \lt \epsilon$, the last two terms will go to $0$. Thus, we are left with $|x_ny_n-LM| \leq |(x_n-L)(y_n-M)| \lt \epsilon^2 \lt \epsilon$." by "This shows that $$\begin{align} |x_ny_n-LM|&\leq |(x_n-L)(y_n-M)|+ |M||(-x_n+L)|+|L||(-y_n+M)|\\ &<\epsilon+|M|\epsilon+|L|\epsilon=(1+|M|+|L|)\epsilon \end{align}$$ Thus, if we repeat the calculation with $\epsilon$ replaced by $\frac{\epsilon}{1+|M|+|L|}$, we are left with $|x_ny_n-LM| \lt \epsilon$." Alternatively, instead of taking $N$ such that $n>N$ implies $$|x_n-L|<\epsilon,\quad |y_n-M|<\epsilon,$$ you could take $N$ such that $n>N$ implies $$|x_n-L|<\frac{\epsilon}{1+|M|+|L|},\quad |y_n-M|<\frac{\epsilon}{1+|M|+|L|}.$$ Then, the end of the proof could be "This shows that $$\begin{align} |x_ny_n-LM|&\leq |(x_n-L)(y_n-M)|+ |M||(-x_n+L)|+|L||(-y_n+M)|\\ &<\frac{\epsilon}{1+|M|+|L|}+|M|\frac{\epsilon}{1+|M|+|L|}+|L|\frac{\epsilon}{1+|M|+|L|}=\epsilon \end{align}$$" Yes, it is ok to do that restriction, because if $|x_ny_n-LM|<\epsilon$ for some $\epsilon<1$ then $|x_ny_n-LM|<\varepsilon$ for all $\varepsilon\geq 1$. In general, in delta-epsilon proofs, the values of $\epsilon$ can always be restricted to some small interval of the form $(0,k)$ (in your case, $k=1$). With respect to the reverse triangle inequality, see this.
Stabilisation by feedback I'm just going through some questions about stability from my course. The questions relate to determining for which $F$ the matrix $A−BF$ is asymptotically stable. The question which I am confused by is: Determine for which $F$ the matrix $A-BF$ is stable, where $$A=\begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix} \quad\text{and}\quad B=\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$ In the lecture notes and some of the other questions on problem sheets the lecturer always considers for which values of $F$ the characteristic polynomial $\det(A-BF-\lambda I)$ has stable roots. But in the question above he only considers $\det(A-BF)$ and I am unsure why - as it gives a different result than when considering $\det(A-BF-\lambda I)$.
The closed loop system matrix is $$ A-BF=\begin{bmatrix}-1 & 0\\0 & 2\end{bmatrix}-\begin{bmatrix}0\\1\end{bmatrix}\begin{bmatrix}F_1 & F_2\end{bmatrix}=\begin{bmatrix}-1 & 0\\-F_1 & 2-F_2\end{bmatrix}. $$ It has two eigenvalues $-1$ and $2-F_2$ (easy to see as the matrix is triangular, so the eigenvalues are on the main diagonal). To be stable both must have negative real parts. The first one is no problem, the second one should satisfy $$ 2-F_2<0\quad\Leftrightarrow\quad F_2>2. $$
Existence of a Vector in an Inner Product Space Let V be an inner product space and let $v \neq v'$ be vectors in $V$ . Show that there exists a vector $w \in V$ satisfying $\langle v,w\rangle\:\neq \:\langle v',w\rangle$. As $v-v'\neq 0$, we get $$\langle v-v',v-v'\rangle >0$$ which implies $$\langle v,v-v'\rangle -\langle v',v-v'\rangle >0$$ $$\langle v,v-v'\rangle >\langle v',v-v'\rangle $$ Let $w=v-v'$. I am not sure if my argument is correct. I appreciate any help.
$v-v'\ne 0\,$ is assumed. And $w$ is wanted with $\langle v-v',w\rangle\neq 0\,$. Then $w=v-v'$ does the job because an inner product is positive definite. This is merely a short reformulation of the correct argument given in the OP.
Prove that $\lim_{x\rightarrow 0}|x|=0$ Please check my proof There exist $\delta $such that $0<|x|<\delta $ imply $|x|< \epsilon $ Then choose $\delta =\epsilon $ for x is real number and $0<|x|<\delta \rightarrow |f(x)-0|=|0-0|< \delta =\epsilon $ Then limit is 0
There exist $\delta$ such that $0<\lvert x\rvert<\delta$ imply $\lvert x\rvert < \epsilon$. In such a type of proof for $$ \lim_{x\to 0} \lvert x \rvert = 0 $$ you want to establish, that it is possible to have $\lvert x \rvert$ come arbitrary close to $0$ (here $0$ being the value of the limit), if $x$ can be chosen from a neighbourhood around $0$ (here $0$ being the argument where the limit is asked for). The arbitrary closeness is formalized as a challenge for any positive $\epsilon$: $$ \lvert x - 0 \rvert < \epsilon \quad (*) $$ has to be achieved by being able to come up with a positive $\delta$ which confines $x$ to a neighbourhood around $0$: $$ \lvert x - 0 \rvert < \delta \quad (**) $$ and will imply that $(*)$ holds. In this case it is easy, just choose $\delta = \epsilon$ and this $\delta$ will do the job. Then $(**)$ will lead to $(*)$.
If you didn't already know that $e^x$ is a fixed point of the derivative operator, could you still show that some fixed point would have to exist? Let's suppose you independently discovered the operator $\frac{d}{dx}$ and know only its basic properties (say, the fact it's a linear operator, how it works on polynomials, etc.) If you didn't know that $e^x$ was a fixed-point of this operator, would there be any way to (possibly nonconstructively) show that such a fixed point would have to exist? I'm curious because just given the definition of a derivative it doesn't at all seem obvious to me that there would be some function that's it's own derivative. Note that I'm not asking for a proof that $\frac{d}{dx} e^x = e^x$, but rather a line of reasoning providing some intuition for why $\frac{d}{dx}$ has a fixed-point at all. (And let's exclude the trivial fixed point of 0, since that follows purely from linearity rather than any special properties of derivatives).
Suppose you have a small positive number $\epsilon$. Consider functions which map integer multiples of $\epsilon$ onto $\mathbb{R}$. These functions have an approximate derivative $$f'(i\epsilon)\approx \frac{f((i+1)\epsilon)-f(i\epsilon)}{\epsilon}$$ (for integer $i$) I think that it is intuitively clear that for these functions and this approximate derivative, the approximate derivative has a fixed point. It can be constructed trivially as follows: define $f(0)=1$ and $f((i+1)\epsilon)=f(i\epsilon)+\epsilon f(i\epsilon)$, so that the approximate derivative at $i\epsilon$ equals $f(i\epsilon)$ itself. Of course, this approximate derivative approaches the true derivative in the limit $\epsilon\rightarrow 0$. If you expect that this line of reasoning still works "in the limit", then you expect that $\frac{d}{dx}$ has a fixed point.
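A tiny sketch of this construction (not in the original answer): iterating $f((i+1)\epsilon)=f(i\epsilon)+\epsilon f(i\epsilon)$ from $f(0)=1$ up to $x=1$ gives a value close to $e$, as the limiting argument suggests. The step size and endpoint are arbitrary choices.

```python
import math

eps = 1e-4
x_max = 1.0
f = [1.0]                              # f(0) = 1
for _ in range(int(round(x_max / eps))):
    # choose the next value so the forward-difference derivative equals f itself
    f.append(f[-1] + eps * f[-1])

print(f[-1], math.e)                   # ~2.71814 vs 2.71828...
```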
Continuity and differentiability of a piecewise function Let $f(x)=x^3-9x^2+15x+6$ and $$g(x)= \begin{cases} \min\limits_{0\leq t \leq x} f(t) &\mbox{if } 0 \leq x \leq6, \\ x-18 &\mbox{if }x\geq 6. \\ \end{cases} $$ Then discuss the continuity and differentiability of $g(x)$. Could someone explain to me how to deal with $\min f(t)$ where $0\leq t \leq x$, $0 \leq x \leq6$?
To deal with the minimum, I think you should study the variations of $f$ (using its derivative) on $[0, 6]$. As you probably noticed, $f$ is a polynomial, and the minimum in the definition of $g$ is taken over a closed interval (the inequalities are non-strict), which means that the minimum is attained (a polynomial is continuous, and a continuous function attains its minimum on a closed bounded interval). Like so, you will be able to write $g$ as either a constant value or $f$ itself on $[0, 6]$ (and probably both, if the minimum is not reached at $0$ or $6$). P.S.: there shouldn't be "=0" at the end of the definition of $f$, right?
$\mathcal{L}^1$-integrability with restriction to a set $E$ Let $E \in \Sigma$. We look at the measurable space $(E,\Sigma_E)$, where $\Sigma_E = \{E \cap F: F \in \Sigma\}$, which is a $\sigma$-algebra. We consider integration to this space. Let $f$ be a measurable function and denote by $f_E$ its restriction to $E$. Now, I want to prove by standard machinery that \begin{align} f_E \in \mathcal{L}^1(E,\Sigma_E,\mu_E) \iff \mathbb{1}_E\ f \in \mathcal{L}^1(S,\Sigma,\mu), \end{align} in which case the identity $\mu_E(f_E) = \mu(\mathbb{1}_E\ f):=\int_E f\ d\mu$ holds. Now, I am wondering if we can use the fact that $f_E = f\ \mathbb{1}_E$ in the elaboration of the standard machinery. If this is the case, the results follows easily. Otherwise, I do not know how to prove the above identity. Any suggestions?
For a simple positive function $f$ that takes values $c_1,\ldots,c_n$ on sets $E_1,\ldots,E_n \in \Sigma$, by definition $$\mu(f 1_E)=\sum_{i=1}^n c_i \mu(E \cap E_i) = \mu_E(f_E).$$ For an arbitrary positive $\Sigma$-measurable function $f$, $$\mu(f 1_E)=\sup\{ \mu(g) \mid g \in \Gamma_1 \}$$ $$\mu_E(f_E)=\sup\{ \mu_E(g) \mid g \in \Gamma_2 \}$$ where $\Gamma_1$ and $\Gamma_2$ are the sets of all $\Sigma$- and $\Sigma_E$-measurable simple positive functions $g_1$ and $g_2$ such that $g_1(s) \leq f 1_E(s)$ and $g_2(s) \leq f(s)$ for all $s$ in $S$ and $E$ respectively. But it is easy to show that $$\Gamma_2 = \{ g_E \mid g \in \Gamma_1 \}.$$
How to calculate the expected value for a piecewise constant distribution function? The distribution function of a discrete random variable $X$ is given by $F_X(x)=\begin{cases} 0, &x<1\\ \frac{5}{13},& 1\leq x< 2 \\ \frac{10}{13}, & 2\leq x<3 \\ \frac{11}{13}, & 3\leq x<4 \\ 1, & 4\leq x \end{cases} $ $A=(X=2)\cup (X=4)$ Calculate $P(A)$ and $E(X)$. I was thinking of solving for $P(A)$ with the formula $P(a)=\begin{pmatrix} n \\ a \end{pmatrix} p^a (1-p)^{n-a} $, but I don't know $p$ and $n$. Which formula should I use?
the random variable $X$ can take four values, which are exactly the points of discontinuity of $F_X$: $$ \mathbb P (X=1)= \frac 5 {13}, \quad \mathbb P (X=2)=\frac {10} {13}- \frac 5 {13}, \quad \mathbb P (X=3)=\frac {11} {13} -\frac {10} {13}, \quad \mathbb P (X=4)=1- \frac {11} {13}. \quad $$ Therefore $$ \mathbb P(A)=\mathbb P(X=2)+\mathbb P(X=4) = \frac 7 {13}, $$ and $$ \mathbb E [X]= 1 \cdot \frac {5}{13}+2 \cdot \frac {5}{13}+3 \cdot \frac {1}{13}+4 \cdot \frac {2}{13}=2. $$
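The same computation, sketched in Python with exact fractions (not part of the original answer): the PMF is read off as the jumps of the CDF at its points of discontinuity.

```python
from fractions import Fraction as F

# jump of the CDF at each point of discontinuity = P(X = that point)
cdf = {1: F(5, 13), 2: F(10, 13), 3: F(11, 13), 4: F(1)}
prev = F(0)
pmf = {}
for x, v in sorted(cdf.items()):
    pmf[x] = v - prev
    prev = v

P_A = pmf[2] + pmf[4]
E_X = sum(x * p for x, p in pmf.items())
print(P_A, E_X)   # 7/13 and 2
```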
Algebra generated is a subset of sigma algebra generated Let $\mathcal C$ be a set of subsets of a sample space $\Omega$. I want to show that $a(\mathcal C) \subset \sigma(\mathcal C)$ and $\sigma(a(\mathcal C)) = \sigma(\mathcal C)$. I know the definitions of each but can't get my head around these facts. Anyone care to help?
$a(C)$ is the smallest algebra containing $C$, and $\sigma(C)$ is the smallest sigma-algebra containing $C$. Since all sigma-algebras are in particular algebras, we clearly see $a(C) \subset \sigma(C)$. Now $\sigma(a(C))$ is the smallest sigma algebra containing $a(C)$. Since $a(C)$ contains $C$, $\sigma(a(C))$ is a sigma algebra containing $C$ and so $\sigma(C) \subset \sigma(a(C))$. Conversely, as we just showed, $\sigma(C)$ is a sigma algebra that contains $a(C)$ and so $\sigma(a(C)) \subset \sigma(C)$. In summary, the trick is to just play around with the definitions, with emphasis on the use of minimality conditions.
Left/Right Eigenvectors Let $M$ be a nonsymmetric matrix; suppose the columns of matrix $A$ are the right eigenvectors of $M$ and the rows of matrix $B$ are the left eigenvectors of $M$. In one of the answers to a question on left and right eigenvectors it was claimed that $AB=I$. Is that true, and how would you prove it?
No. Let $$ M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 4 & 2 \\ 0 & 0 & 0 & 2 \end{bmatrix} $$ Then two right eigenvectors for $1$ are $e_1$ and $e_2$, and two left eigenvectors for $1$ are $(\sqrt{2}/2) ( e_1 \pm e_2 )$. When you put these into $A$ and $B$, the upper left corner of your product will not be the $2 \times 2$ identity. I have a bad feeling this question's about to be edited to add the hypothesis that the eigenvalues be distinct... When that happens, the answer will still be no, for if we permute the columns of $A$ to form $A'$, its columns are still all eigenvectors for $M$, but the product changes: even if $AB = I$, we have $A'B = APA^{-1} \ne I$ for a non-identity permutation matrix $P$.
Question about very simple lemma on cauchy sequences over real numbers I'm trying to prove the following: If $(a_n)$ is a cauchy sequence that does NOT tend to 0, then $\exists N$ s.t. $\forall n > N, a_n \neq 0$. Here's my proposed proof (please excuse the poor writing): $(a_n)$ does not tend to 0 implies that there exists a smallest $\epsilon > 0$ s.t. $\forall M \exists k > M$ s.t. $|a_k| > \epsilon$. This smallest $\epsilon$ must exist, else we could choose arbitrarily smaller $\epsilon$ s.t. $|a_n| < \epsilon$ and then $(a_n)$ would tend to 0, which is false. Since $(a_n)$ is a cauchy sequence, choose $N$ s.t. $|a_n - a_m| < \epsilon$ (the $\epsilon$ above) $\forall n, m > N$. Then $|a_n - a_k| < \epsilon$ (substituting $a_k$ for $a_m$ since one of the $a_m$ must satisfy the property that $a_k$ has) $-\epsilon < a_n - a_k < \epsilon$ $a_k - \epsilon < a_n < a_k + \epsilon$ If $a_k > \epsilon$, then $0 < a_k - \epsilon < a_n$ If $a_k < -\epsilon$, then $a_n < a_k + \epsilon < 0$ In either case, $a_n \neq 0$ Is my proposed proof correct? Is there a simpler, more elegant way to do it? The part I'm most wary of is where I claim the existence of a smallest $\epsilon$ since the sequence doesn't tend to 0. It makes sense to me, but I'm not sure if I'm allowed to. P.S. We haven't proved that all cauchy sequences in the reals converge, so I'm not sure if I'm allowed to use that fact.
Instead of choosing $N$ s.t. $\lvert a_n-a_m\rvert<\epsilon$, you might choose $N$ s.t. $\lvert a_n-a_m\rvert<\epsilon/2$. Then for $n>\max\{M,N\}$ and $k>n$ with $|a_k|>\epsilon$, use $|x-y|\ge |x|-|y|$: $$\lvert a_n\rvert=\lvert a_k-(a_k-a_n)\rvert\ge \lvert a_k\rvert-\lvert a_k-a_n\rvert> \epsilon-\epsilon/2=\epsilon/2.$$ This means that $a_n$ cannot be $0$.
XOR (exclusive OR) problem Given 2 multisets A and B with |A| = |B|, the task is to find an X such that A ^ X = B, where A ^ X means xoring each element of A with X; or say that there's no such X. |A| <= 10^5, and the elements of A and B are decimal numbers from [0; 10^18]. For example, if A = {0, 2} and B = {1, 3} then 1 is the answer: A ^ 1 = {1, 3}. I am looking for an algorithm with asymptotic complexity much better than the straightforward O(|A|^3 * log(|A|)) brute force. Edit: well, it seems that the intended solution is some 'smart' brute force. FML
We will exploit a property of the xor operation: even $\oplus$ even = even $\tag {1}$ and similarly odd $\oplus$ odd = even $\tag{2}$ while even $\oplus$ odd = odd $\oplus$ even = odd $\tag{3}$. We have two sets A and B, containing integers ($\ge0$) according to the question. The basic concept is that there are only two kinds of numbers, even and odd, which is also the key fact in our solution to the problem. The xor operation acts like a function here, because we want a single X which maps A to B, that is $$B=f(X)=A\oplus X.$$ All even elements of A are sent to elements of the same parity, and likewise all odd elements of A. So basically, if the set A has m even numbers, then m numbers in B must be either all even or all odd. Suppose those m numbers in B are odd; then it is something like this: $$A(\text{evens}) \oplus \text{odd}=B(\text{odds}),$$ i.e. X must be odd, the m evens of A are mapped to m odds of B, and the remaining elements of A, which are odd, have to be mapped to the evens of B, which is consistent. If at any point the corresponding parity match is not found, then X doesn't exist. Algorithm: the algorithm is recursive; you can convert it into an iterative one, but recursion is more obvious here. Hint: XXOR(A,B): (1) match A to B by parity as above. Initially we get two sets of equivalent mappings from A to B, say $f_1$ (A(even) to B(odd)) and $f_2$ (A(odd) to B(even)), i.e. X must be odd. If no parity split is possible that gives the same last bit of X in both $f_1$ and $f_2$, then no X exists, so exit; else, if we have reached the m.s.b. of the numbers, return X. (2) Right-shift each of the integers in both sets A and B by one position. (3) Recurse as in the example: XXOR(A(even)>>1, B(odd)>>1) and XXOR(A(odd)>>1, B(even)>>1). Complexity: the worst case is when we have 2 classes of divisions. In that case: $$T(|A|,10^{18})=4T\left(\frac{|A|}{2},\frac{10^{18}}{2}\right)+4|A|,$$ terminating when $$T\left(\frac{|A|}{k},1\right)$$ is reached. The complexity of this algorithm is $O(|A|^2\times 10^{18})$ in the worst case.
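To have a correctness reference for any implementation of the recursion above, here is a minimal sketch (not from the original post) of the simpler candidate-check approach: X must map A[0] to some element of B, so there are at most |B| candidates, each verified with a multiset comparison. This is roughly the O(|A|^2 log |A|) "smart brute force" (O(|A|^2) with hashing), not the parity recursion; names and the example values are illustrative.

```python
from collections import Counter

def find_x(A, B):
    """Return some X with {a ^ X for a in A} == B as multisets, or None."""
    if len(A) != len(B):
        return None
    target = Counter(B)
    for x in {A[0] ^ b for b in B}:        # X must send A[0] into B
        if Counter(a ^ x for a in A) == target:
            return x
    return None

print(find_x([0, 2], [1, 3]))   # prints a valid X; here both 1 and 3 work
```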
Combinatorics - $5$ cards, $4$ different suits I have the following question: In a deck of $52$ cards with $4$ suits ($13$ of each), how many different ways are there to choose $5$ different cards such that every suit appears at least once. the correct answer is: $4×13^3×{13\choose 2}=685464$ My question is, why is the following wrong: $\frac{52*39*26*13*48}{5!}$ As $52$ is the first card, then we want $39$ as we don't want from the first suit, then $26$, and $13$, and then $48$ as for the last one we can choose again any suit. Then divide by $5!$ as we don't care about the order. Now I know this is wrong obviously as wee don't even get an integer.... the interesting thing is that when dividing by $2*4!$ as in $\frac{52*39*26*13*48}{2*4!}$ we get the same result as above, and also when just multiplying $52*39*26*13$ we get the correct result... I can't figure out where did I go wrong, thanks for the help!
First, $\frac{52\cdot 39\cdot 26\cdot 13\cdot 48}{5!}$ and $52\cdot 39\cdot 26\cdot 13$ are not the same. Second: after picking the 5 cards we have 2 cards of the same suit, and these 2 can be arranged in 2 ways; so each unordered hand is counted $2\cdot 4!=48$ times (the four distinct-suit cards in the first four positions in $4!$ ways, and either of the 2 same-suit cards last), not $5!$ times. Hope you can understand now.
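A brute-force confirmation of the count (not from the original posts); it enumerates all $\binom{52}{5}=2{,}598{,}960$ hands, so it takes a few seconds to run.

```python
from itertools import combinations

deck = [(suit, rank) for suit in range(4) for rank in range(13)]
count = sum(1 for hand in combinations(deck, 5)
            if len({card[0] for card in hand}) == 4)   # all four suits present
print(count)   # 685464 = 4 * 13**3 * C(13, 2)
```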
Why do I first need to bring $-4x$ into the numerator in $\lim_{x\to \infty} 4x^2/(x-2) - 4x$ I tried solving the question in the title as follows: $$\lim_{x\to \infty} \frac{4x^2}{x-2} - 4x \to 4x - 4x \to 0$$ However, apparently that first step ($\to 4x - 4x$) was wrong, and I should first have brought the second $4x$ into the numerator. My question is not how I need to solve the question, as I know that now. My question is why what I did was wrong, as I lack any intuition for it, and it seems a mystery to me.
As $x \to \infty$, $\frac{4x^2}{x-2}$ and $4x$ are asymptotically equivalent. However, the notion of asymptotic equivalence is of relative equivalence (in the sense that their ratios tend to $1$). We cannot deduce anything about the differences. The differences may be fixed (e.g. $x^2 \sim x^2 + 1$), tend to $0$ (e.g. $x^2 \sim x^2 + \frac1x$) or tend to infinity (e.g. $x^2 \sim x^2 + x$). Therefore when working with limits which involve differences, the asymptotic equivalence becomes essentially irrelevant. This is why what you're doing is incorrect.
A regular function on an open set of Affine variety that can not be extended to a regular function I have a problem with Exercise 2.3.8(iv) of Brodmann and Sharp. Here is a brief introduction to this exercise. Let $V$ be the affine variety defined by the ideal $$\mathfrak{p}= \left( x_1^2x_2-x_3^2, x_2^3-x_4^2, x_2x_3-x_1x_4, x_1x_2^2-x_3x_4\right) \subset \mathbb{C}[x_1,x_2,x_3,x_4]$$ and $U = V \setminus \{(0,0,0,0)\}$. Note that $U$ is a quasi-affine variety which is isomorphic to $\mathbb{A}_\mathbb{C}^2 \setminus \{(0,0)\}$, by previous parts of this exercise. Now, take the regular function $\beta_2\colon U \to \mathbb{C}$ defined by: $$\beta_2 \left( \left( c_1,c_2, c_3, c_4 \right) \right) = \begin{cases} c_3/c_1 & \text{if } c_1 \neq 0 \\ c_4/c_2 & \text{if } c_2 \neq 0 \end{cases} $$ The aim is to show that $\beta_2$ can not be extended to a regular function on $V$. I would be thankful for any possible response or hint to this problem. Many thanks,
A short calculation with Macaulay2 shows that the ideal $\mathfrak{p}$ is indeed prime and $S=R/\mathfrak{p}$ an integral domain ($R=\mathbb{C}[x_1,\ldots,x_4]$). First let us verify that $\beta_2$ is well defined: $$\frac{x_3}{x_1} - \frac{x_4}{x_2} = \frac{x_3 x_2 - x_1 x_4}{x_1 x_2} = 0 \text{ in } Q(S)$$ because $x_3 x_2 - x_1 x_4 \in \mathfrak{p}$. If $\beta_2$ were a regular function on $\mathrm{Spec}(R/\mathfrak{p})$, that is $x_4 - x_2 f \in \mathfrak{p}$ for a polynomial $f \in R$, then a fortiori it would be a regular function on $R/\mathfrak{p}'$ with $$\mathfrak{p}' = \mathfrak{p} + (x_1,x_3) = (x_1,x_3,x_2^3-x_4^2)$$ But this is essentially a semicubical parabola $T=k[x,y]/(y^2-x^3)$, where $y/x \in Q(T)$ is well known not to be in $T$; instead we have that $T[y/x] \subseteq Q(T)$ is the integral closure of $T$ in $Q(T)$ ($(y/x)^2-x = 0$, so $y/x$ is integral over $T$; that it generates the integral closure can be verified for $k=\mathbb{Q}$ with Macaulay2).
Alternating binomial sum over even coefficients. Given a positive integer $n$, I'm looking for a nicer closed form for the expression $$\sum_{\substack{k=0\\2\mid k}}^n(-1)^{\frac k2}2^k\binom{n}{k}.$$ If it helps, it is fine to assume that $n$ is even. This comes from looking for solutions to $$x^2+y^2=5^n=(1+2i)^n(1-2i)^n,$$ if there's a nicer way to find solutions I'd be happy to know.
Let $k = 2j$, so the sum becomes $$\sum_j (-1)^j 2^{2j} \binom{n}{2j}$$ Now in $$\frac{1}{2} [(1+x)^n + (1-x)^n] = \sum_j \binom{n}{2j} x^{2j}$$ Let $x = 2i$, where $i^2 = -1$. The result is $$\frac{1}{2} [(1+2i)^n + (1-2i)^n] = \sum_j (-1)^j 2^{2j}\binom{n}{2j}$$ which is the desired sum. Some further simplification is possible by noting $1+2i = \sqrt{5} e^{\alpha i}$ and $1-2i = \sqrt{5} e^{-\alpha i}$, where $\alpha = \tan^{-1} 2$.
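A short numeric check (not part of the original answer) that this closed form matches the sum for small $n$: exact integer arithmetic on the left, complex arithmetic rounded to the nearest integer on the right.

```python
from math import comb

def lhs(n):
    # the original alternating sum over even k
    return sum((-1) ** (k // 2) * 2**k * comb(n, k) for k in range(0, n + 1, 2))

def rhs(n):
    z = ((1 + 2j) ** n + (1 - 2j) ** n) / 2
    return round(z.real)

for n in range(15):
    assert lhs(n) == rhs(n), n
print([lhs(n) for n in range(8)])   # 1, 1, -3, -11, -7, 41, 117, 29
```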
Linear algebra: help with proof of equality In an exercise, I need to show that $$(I_n - A)^{-1} + (I_n - A^{-1})^{-1} = I_n + 2 (I_n - A)^{-1}A$$ What I've tried: $$(I_n - A)^{-1} + (I_n - A^{-1})^{-1} = I_n + 2 (I_n - A)^{-1}A$$ $$\iff(I_n - A)[(I_n - A)^{-1} + (I_n - A^{-1})^{-1}](I_n - A^{-1})=(I_n - A)[I_n + 2 (I_n - A)^{-1}A](I_n - A^{-1})$$ $$\iff [I_n-(I_n-A)(I_n-A^{-1})^{-1}](I_n-A^{-1})=I_n-A+[2(I_n-A)(I_n-A)^{-1}A)(I_n-A^{-1})]$$ $$\iff I_n-A^{-1}+I_n-A=(I_n-A^{-1})(I_n+A)$$ $$\iff 2I_n-A-A^{-1}=-A^{-1}+A$$ $$\iff 2I_n=2A$$ $$\iff A=I_n$$ Which is obviously not what I had to prove. Any help with this would be appreciated :)
Actually $(I_n - A)[(I_n - A)^{-1} + (I_n - A^{-1})^{-1}](I_n - A^{-1})=I_n-A^{-1} + I_n -A$
Find rational numbers $(x,y)$ such that $ (x^2 + y^2 - 2x)^2 = x^2 + y^2$ The general limaçon has both a polar equation: $r = b + a \cos \theta $ and an algebraic equation: $$ (x^2 + y^2 - ax)^2 = b^2 (x^2 + y^2)$$ Can we find all the rational points on a curve like this? I want to consider the case $a = 2b$ and $b = 1$: $$ (x^2 + y^2 - 2x)^2 = x^2 + y^2$$ One solution is $(x,y) = (0,1)$ and another is $(x,y) = (1,0)$. How can I generate the other solutions over $\mathbb{Q}$ ?
An even easier route for generating rational points is to start from the parametric equations of the limaçon: $\left((1+2\cos\theta)\cos\theta,(1+2\cos\theta)\sin\theta\right)^\top$, and then perform the Weierstrass substitution $\cos\theta\mapsto\frac{1-u^2}{1+u^2},\;\sin\theta\mapsto\frac{2u}{1+u^2}$. Due to the nature of the substitution, you will not be able to obtain the point $(1,0)$, but all other rational points correspond to rational values of $u$.
Matrix determinant lemma derivation While reading this wikipedia article on the determinant lemma, I stumbled upon this expression (in a proof section): \begin{equation} \begin{pmatrix} \mathbf{I} & 0 \\ \mathbf{v}^\mathrm{T} & 1 \end{pmatrix} \begin{pmatrix} \mathbf{I}+\mathbf{uv}^\mathrm{T} & \mathbf{u} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \mathbf{I} & 0 \\ -\mathbf{v}^\mathrm{T} & 1 \end{pmatrix} = \begin{pmatrix} \mathbf{I} & \mathbf{u} \\ 0 & 1 + \mathbf{v}^\mathrm{T}\mathbf{u} \end{pmatrix}. \end{equation} Although I see that this equation "works", I'm interested in HOW this thing was invented. For example, why we have $u$ term in a central block matrix of the left side? UPD A little clarification of the question above. Let \begin{equation}L = \begin{pmatrix} \mathbf{I} & 0 \\ \mathbf{v}^\mathrm{T} & 1 \end{pmatrix} \end{equation} I see that \begin{equation} L^{-1}= \begin{pmatrix} \mathbf{I} & 0 \\ -\mathbf{v}^\mathrm{T} & 1 \end{pmatrix} \end{equation} and hence the first equation looks like \begin{equation} L\begin{pmatrix}\mathbf{I + uv^T} & \mathbf{u} \\ 0 & 1\end{pmatrix}L^{-1} = \begin{pmatrix} \mathbf{I} & \mathbf{u} \\ 0 & 1 + \mathbf{v}^\mathrm{T}\mathbf{u} \end{pmatrix}. \end{equation} I see that $\det(L) = \det(L^{-1}) = 1 $. Hence determinants of RHS and LHS are equal as well. What I do not understand is how we jumped from simple $\begin{pmatrix}\mathbf{I + uv^T} & \mathbf{0} \\ 0 & 1\end{pmatrix}$ or $\begin{pmatrix}\mathbf{I} & \mathbf{u} \\ \mathbf{-v}^T & 1\end{pmatrix}$ to $\begin{pmatrix}\mathbf{I + uv^T} & \mathbf{u} \\ 0 & 1\end{pmatrix}$ for a central part of LHS. Thank you.
I have no idea how this was invented and what was the original motivation, but let me outline a different proof for the formula $\det(I + uv^T) = 1 + v^T u$ which I think is much less magical and more natural. Set $$ B = uv^T = \begin{pmatrix} u_1 v_1 & \dots & u_1 v_n \\ u_2 v_1 & \dots & u_2 v_n \\ \vdots & \ddots & \vdots \\ u_n v_1 & \dots & u_n v_n \end{pmatrix} \in M_n(\mathbb{F}). $$ Let us start by finding the characteristic polynomial $p_B(x) = \det(B - xI)$ of $B$. Since $B$ has $\operatorname{rank}(B) \leq 1$, we know that it has at least $n - 1$ linearly independent eigenvectors associated to the eigenvalue $0$. Since the sum of all eigenvalues must be $\operatorname{tr}(B) = v^T u$, we see that $$ p_B(x) = \det(B - xI) = (-1)^{n} x^{n-1}(x - v^T u). $$ Now plug in $x = -1$ and deduce the required formula: $$ p_B(-1) = \det(B - (-1)I) = \det(B + I) = \det(I + uv^T) = (-1)^n (-1)^{n-1} (-1 - v^T u) = 1 + v^T u.$$
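A quick numerical check of the identity (not part of the original answer), using NumPy with a few random sizes; the seed and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 5, 8):
    u = rng.standard_normal((n, 1))
    v = rng.standard_normal((n, 1))
    lhs = np.linalg.det(np.eye(n) + u @ v.T)
    rhs = 1 + (v.T @ u).item()
    print(n, lhs, rhs)   # the two numbers agree up to rounding error
```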
Calculate limit $\lim_{x \to 2} \frac{(2^x)-4}{\sin(\pi x)}$ without L'Hopital's rule How do I calculate the limit $\lim_{x \to 2} \frac{(2^x)-4}{\sin(\pi x)}$ without L'Hopital's rule? At $x = 2$ I get the indeterminate form $\frac{0}{0}$.
Put $x=t+2$ and compute $$4\lim_{t\to 0}\frac{e^{t\ln(2)}-1}{\sin(\pi t)}$$ $$=\frac{4\ln(2)}{\pi}\lim_{t\to 0}\frac{e^{t\ln(2)}-1}{t\ln(2)}\cdot\frac{\pi t}{\sin(\pi t)}$$ $$=\frac{4\ln(2)}{\pi}.$$
Calculating the limit of $\frac{x^2\sin(\frac{1}{x})}{\sin x}$ $$ \lim _{x\to 0} \frac{x^2\sin(\frac{1}{x})}{\sin x}$$ Okay, since sine is bounded, ${x^2\sin(\frac{1}{x})} \to 0$ and $\sin x\to 0$, so we can apply L'Hospital's rule to it. Applying L'Hospital's we get: $$\lim_{x\to 0}\frac{2x\sin(\frac{1}{x})-\cos(\frac{1}{x})}{\cos x}$$ Here we can't find a way out. What would you advise me to do? Is there any way to compute the limit? Does this imply that the limit doesn't exist, by the way? No, I don't think so; I think the conditions of L'Hospital's rule are just not met.
L'Hôpital's rule would give $$ \lim_{x\to 0}\frac{(x^2\sin(\frac{1}{x}))'}{\cos x},$$ but the derivative in the numerator is discontinuous (it has no limit at zero), so the value must be calculated by definition: $$\lim _{x\to 0} \frac{x^2\sin(\frac{1}{x})}{x}=\lim _{x\to 0} x\sin(\frac{1}{x})=0.$$ So the limit is zero. Another way: $$\lim _{x\to 0} \frac{x^2\sin(\frac{1}{x})}{\sin x}=\lim _{x\to 0}\frac{x}{\sin(x)}\, x\sin(\frac{1}{x})=1\cdot\lim _{x\to 0} x\sin(\frac{1}{x})=0.$$
Why study finite-dimensional vector spaces in the abstract if they are all isomorphic to $R^n$? Timothy Gowers asks Why study finite-dimensional vector spaces in the abstract if they are all isomorphic to $R^n$? and lists some reasons. The most powerful of these is probably There are many important examples throughout mathematics of infinite-dimensional vector spaces. If one has understood finite-dimensional spaces in a coordinate-free way, then the relevant part of the theory carries over easily. If one has not, then it doesn't. I mean sure, but what else? Does anyone know examples of specific vector spaces?
If you decided that you are only going to call "vector space" those of the form $\mathbb R^n$, then you find yourself in the position that now subspaces are no longer vector spaces.
How to differentiate $y=\ln(x+\sqrt{1+x^2})$? I'm trying to differentiate the equation below but I fear there must have been an error made. I can't seem to reconcile to the correct answer. The problem comes from James Stewart's Calculus Early Transcendentals, 7th Ed., Page 223, Exercise 25. Please differentiate $y=\ln(x+\sqrt{1+x^2})$ My Answer: Differentiate using the natural log rule: $$y'=\left(\frac{1}{x+(1+x^2)^{1/2}}\right)\cdot\left(x+(1+x^2)^{1/2}\right)'$$ Now to differentiate the second term, note the chain rule applied and then simplification: $$\left(x+(1+x^2)^{1/2}\right)'=1+\frac{1}{2}\cdot(1+x^2)^{-1/2}\cdot(2x)$$ $$1+\frac{1}{2}\cdot(1+x^2)^{-1/2}\cdot(2x)=1+\frac{x}{(1+x^2)^{1/2}}$$ Our expression is now: $$y'=\left(\frac{1}{x+(1+x^2)^{1/2}}\right)\cdot\left(1+\frac{x}{(1+x^2)^{1/2}}\right)$$ Distribute the left term across the two right terms for my result: $$y'=\left(\frac{1}{x+(1+x^2)^{1/2}}\right)+\left(\frac{x}{\left(x+(1+x^2)^{1/2}\right)\left(1+x^2\right)^{1/2}}\right)$$ $$y'=\left(\frac{1}{x+(1+x^2)^{1/2}}\right)+\left(\frac{x}{\left(x(1+x^2)^{1/2}\right)+(1+x^2)^{1}}\right)$$ At this point I can see that if I simplify further by adding the fractions I'll still have too many terms, and it will get awfully messy. The answer per the book (below) has far fewer terms than mine. I'd just like to know where I've gone wrong in my algebra. Thank you for your help. Here's the correct answer: $$y'=\frac{1}{\sqrt{1+x^2}}$$
Hint: Your function is the hyperbolic arcsine (the inverse hyperbolic sine). In order to differentiate such a function one should use the theorem for the derivative of an inverse, that is $$(f^{-1}(x))'=\frac{1}{f^{'}(f^{-1}(x))}.$$ See the link for more information: https://en.m.wikipedia.org/wiki/Inverse_hyperbolic_function
Two mappings with constant composition Can you tell me if my answer is correct? Thank you so much!!! Here is the problem: If $f,g$ are mappings of $S$ into $S$ and $f\circ g$ is a constant function, then (a) What can you say about $f$ if $g$ is onto? (b) What can you say about $g$ if $f$ is 1-1? My answer: As $f\circ g$ is a constant function, $f\circ g$ is neither onto nor 1-1. If $g$ is onto, $f$ cannot be onto because if $f$ is onto and $g$ is onto, $f\circ g$ would be onto. If $f$ is $1-1$, $g$ cannot be 1-1 because if $f$ is 1-1 and $g$ is 1-1, $f\circ g$ would be 1-1. In any case, the range of $g$ must be a subset of the domain of $f$.
1) If $g$ is onto then for any $x \in S$ there is a $y \in S$ so that $g(y) = x$. So for every $x \in S$ there is some $y \in S$ so that $f(x) = f(g(y))$. So what does that tell us about $f$? 2) If $f$ is 1-1 then for any $x \ne y$ we know $f(x) \ne f(y)$ so what can we say about $f(g(x))$ and $f(g(y))$ are those ever equal? Never equal? What does that tell us about $g(x)$ and $g(y)$?
What are the parts of a logarithm called? $$\log_x y = z$$ $x$ is the base. $z$ is the exponent or power. What's $y$ called?
As I learned it, $y$ in your equation is the "power." $z$ is very sharply the "exponent" (or "logarithm"), not the power. However, I also learned that few people make this sharp of a distinction. "The fifth power of two" is equal to $32$. Is thirty-two an exponent? Of course not. Is it a power? Well, I just said so in the question, didn't I? Which power is it? The fifth power of two, of course. $5$ isn't the power. It's which power (of what base) is being referred to. $2$ is the base. $5$ is the exponent. $32$ is the power. You can retain these words regardless of whether the equation you reference is a logarithm or exponentiation.
Precise definition of a limit- do i understand this concept? Can someone tell me if I'm thinking about this definition correctly? I tried to summarize the precise definition of a limit by saying: $F(x)=L$ over some range of $y$ values correlates to the range of possible $x$ values that make a limit statement plausible. So as the range of $f(x)$ values get smaller and closer to $L$ , the range of $x$ values also get smaller and closer to $a$. And the same situation if one range increases, the other range also increases. Even if I am on the right track, can someone explain it in different way so that I may better understand this?
"So as the range of f(x) values get smaller and closer to L, the range of x values also get smaller and closer to a" No offense, but that is a pretty confused sentence, if you ask me. I don't understand what exactly you mean by most of the words you use, so I will rather write my own explanation. The definition of a limit is that $\lim_{x\to a} f(x)=L$ is true if and only if, for every $\epsilon>0$, there exists $\delta>0$ such that if $0<|x-a|<\delta$, then $|f(x)-L|<\epsilon$. This definition tells you that, no matter what $\epsilon$ I pick, you can find me a $\delta$ such that $f(x)$ will be $\epsilon$-close to $L$ if $x$ is $\delta$-close to $a$. With fewer words, this would translate to: I can get $f(x)$ to be as close to $L$ as I want, if I only put $x$ close enough to $a$.
Prove that it is not a primitive root module $p^2$ I don't even know how to start to prove the following... Let $p$ be an odd prime. Prove that if $a$ is a primitive root modulo $p$ then exactly one of the following integers $a,a+p,a+2p,...,a+(p-1)p$ is not a primitive root module $p^2$. Thanks a lot :)
As $a+rp\equiv a\pmod p$ and $a$ is a primitive root $\pmod p$, the numbers $a+rp$, $0\le r\le p-1$, are also primitive roots $\pmod p$. By Question about primitive roots of p and $p^2$, ord$_{p^2}(a+rp)=p(p-1)$ or $p-1$ for $0\le r\le p-1$. Now we have $a^{p-1}=1+kp$ where $k$ is some integer, and $(a+rp)^{p-1}\equiv a^{p-1}+(p-1)a^{p-2}rp\pmod{p^2}\equiv1+kp-a^{p-2}rp$. Now this will be $\equiv1\pmod{p^2}\iff k\equiv a^{p-2}r\pmod p\iff r\equiv ka\pmod p$, as $a^{p-1}\equiv1\pmod p$. Clearly, there is exactly one such $r$ with $0\le r\le p-1$; for it ord$_{p^2}(a+rp)=p-1$, so $a+rp$ is not a primitive root mod $p^2$, while for every other $r$ the order is $p(p-1)$.
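A small sketch (my own check, not from the answer) that verifies the statement for one concrete case, $p=7$ with primitive root $a=3$; the brute-force order function is only meant for tiny moduli.

```python
def order(a, m):
    # multiplicative order of a modulo m (a assumed coprime to m)
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

p, a = 7, 3                       # 3 is a primitive root mod 7
assert order(a, p) == p - 1
bad = [r for r in range(p) if order(a + r * p, p * p) != p * (p - 1)]
print(bad)                        # [4]: exactly one r fails
print(order(a + bad[0] * p, p * p))   # 6 = p - 1, so 31 is not a primitive root mod 49
```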
Why does the special limit for Euler's number fail at higher powers? In "Calculus, 6th edition" by Edwards and Penney, in the chapter on transcendental functions, there is an interesting question about the special limit that leads to the famous Euler's number 2.718281828. It is given as: $$\lim_{x\to\infty}\left(1+ \frac{1}x\right)^x$$ However, if you raise the variable $x$ to a higher power, say $10$, the graph literally goes crazy as $x$ goes to infinity and then falls off to $1$. Here is the graph of this situation: This special limit states that the further you go with $x$, the closer you approach $e$. So, as you see, the limit seems to fail at higher powers. Please help me to understand this situation. I suppose it has something to do with the capability of computer systems to calculate. Thank you!
The flat part of the graph is a rounding error: once $1/x^{10}$ drops below the machine precision, $1+1/x^{10}$ evaluates to exactly $1$, so the computer is effectively computing $1^{x^{10}}=1$. The crazy part is probably caused by the small floating point error in the base being amplified by the huge exponent. In short, both of these problems are caused by the finite precision allowed by computers.
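A small sketch (not from the original posts) that reproduces both effects in double precision and shows a more stable way to evaluate the expression via log1p; the sample values of $x$ are arbitrary.

```python
import math

for x in (1e4, 1e8, 1e15, 1e16, 1e17):
    naive = (1.0 + 1.0 / x) ** x
    # exp(x*log1p(1/x)) avoids the rounding of 1 + 1/x to exactly 1.0
    stable = math.exp(x * math.log1p(1.0 / x))
    print(f"{x:.0e}  naive={naive:.12f}  stable={stable:.12f}")

print(1.0 + 1e-17 == 1.0)   # True: the tiny addend is absorbed entirely
```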
How do I solve this: $\sigma(n)=\frac{3n(n-1)}{2}$? I would like to know the relationship between the sum-of-divisors function $\displaystyle\sigma(n)=\sum_{d|n} d$ and the pentagonal numbers, and then I'm seeking to solve this equation: $$\displaystyle\sigma(n)=\frac{3n(n-1)}{2}$$ Note: for instance, $n=2$ is a solution; are there others? Thank you for any help!!!
Since the sum of divisors function $\sigma(n)$ must be less than or equal to the sum of all the numbers less than or equal to $n$, which is $\sum_{i=1}^n i=\frac{n(n+1)}{2}$, and since $3n(n-1) > n(n+1)$ if $n>2$, there are no solutions other than $n=2$. Perhaps you meant to ask when $\sigma(n)=\frac{3k(k-1)}{2}$ for some $k$. In other words, when is $\sigma(n)$ pentagonal?
How can I easily double any size number in my head? I'm a software engineer, and I often double numbers especially when doing binary to decimal conversions. When numbers get large, I have trouble doubling a number in my head without using paper. For example, I can double 128 in my head easily because it's common and I have it memorized, but numbers like 183 get more difficult. Is there some clever trick I can use to mentally double any number? I'm probably being idealistic, but it would be nice to have $4$-digit numbers be just as easy to double as $2$-digit numbers.
I usually look for an easy calculation that is close by the original one. For example: $2 \times 183 = 2 \times 180 + 6=366$ or $ 2\times 1481= 2 \times 1500 - 38 = 2960+2$
$\lim \limits_{n \to \infty} \frac{\sin x_n}{x_n}$ if $\lim \limits_{n \to \infty} x_n =0$ How to easily prove that $$\lim \limits_{n \to \infty} \frac{\sin x_n}{x_n}=1,$$ if $\lim \limits_{n \to \infty} x_n =0$? I proved it using inequality $$ 1-\frac{x^2}{2}<\frac{\sin x}{x}<1$$ therefore, $$1\xleftarrow[\text{$x_n \to 0$}]{}1-\frac{x_n^2}{2}<\frac{\sin x_n}{x_n}<1 \longrightarrow 1$$
Thanks everyone for the help! I've come up with the proper estimate, which can be obtained without 'magic logic circles': $$\sin x < x < \tan x \quad (0<x<\pi/2)$$ $$\frac{1}{\sin x} > \frac{1}{x} > \frac{1}{\tan x}$$ $$1\leftarrow1=\frac{\sin x_n}{\sin x_n} > \frac{\sin x_n}{x_n} > \frac{\sin x_n}{\tan x_n} = \cos x_n \xrightarrow[\text{$x_n \to 0$}]{} 1$$
Prove $10^{n+1}+3\cdot 10^n+5$ is divisible by $9$? How do I prove that an integer of the form $10^{n+1}+3\cdot 10^{n}+5$ is divisible by $9$ for $n\geq 1$? I tried proving it by induction and could prove it for the base case $n=1$, but got stuck while proving the general case. Any help on this? Thanks.
Proof by induction:
* Base case: $10^{0+1}+3\cdot10^{0}+5=18$
* Assumption: $10^{n+1}+3\cdot10^{n}+5=9k$
* Inductive step: $10^{n+2}+3\cdot10^{n+1}+5=$ $10^{n+2}+3\cdot10^{n+1}+50-45=$ $10(\color\red{10^{n+1}+3\cdot10^{n}+5})-45=$ $10(\color\red{9k})-45=$ $9(10k)-45=$ $9(10k-5)$
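As a quick sanity check of the induction (not part of the original answer), a few lines of Python verifying the divisibility for small $n$:

```python
for n in range(20):
    value = 10 ** (n + 1) + 3 * 10**n + 5
    assert value % 9 == 0, (n, value)
print("10^(n+1) + 3*10^n + 5 is divisible by 9 for n = 0..19")
```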
Using the Discriminant to find the value of 'k'. Find the value(s) of $k$ for which the equation, $(x+2)(x+k)=-1$, has equal roots. (I cannot get the two values as stated in the answer $k=0$ and $k=4$. My final line of working doesn't seem to factorize, it is $k^2-4k+8=0$)
Expanding $(x+2)(x+k)=-1$ gives $x^2+(k+2)x+(2k+1)=0$, so the discriminant is $$(k+2)^2-4(2k+1)=k^2-4k.$$ For equal roots this must vanish, so $$k(k-4)=0,$$ giving $$k=0$$ or $$k=4.$$
Limit of Lebesgue Integrals $\lim_{n\to \infty} \int_{[0,\infty]} \frac{n\sin(\frac{x}{n})}{x(1+x^2)}dm$ Can you assist me in solving this limit: $$\lim_{n\to \infty} \int_{[0,\infty]} \frac{n\sin(\frac{x}{n})}{x(1+x^2)}\,dm$$ where $m$ is the Lebesgue Measure on $\mathbb{R}$? I thought I should try to use the dominated convergence theorem, but didn't succeed in bounding that integrand, through substitution either.
One may use that $$ |\sin x | \le |x|,\qquad x \in \mathbb{R}, $$ giving, as $n \to \infty$, $$ \left|\sin\Big(\frac{x}{n}\Big)\right|\le \left|\frac{x}{n}\right| \implies n\left|\sin\Big(\frac{x}{n}\Big)\right|\le |x|,\qquad x \in \mathbb{R}, $$ then $$ \left|\int_{[0,\infty)} \frac{n\sin(\frac{x}{n})}{x(1+x^2)}\:dm\right|\le \int_{[0,\infty)} \frac{ |x|}{x(1+x^2)}\:dm=\int_{[0,\infty)} \frac{ 1}{1+x^2}\:dm=\frac{\pi}2. $$ Can you take it from here?
Why does the Pythagorean Theorem not work on this problem in the way that I used it? To begin with, I apologize for the vagueness of my question. It's hard to explain what exactly my question entails without seeing what process I went through to try to solve the problem. My question is just that I don't understand why my method did not work. The problem In Figure, $\mathrm{P}$ is a point in the square of side-length $10$ such that it is equally distant from two consecutive vertices and from the opposite side $\mathrm{AD}$. What is the length of $\mathrm{BP}$? (A) 5 (B) 5.25 (C) 5.78 (D) 6.25 (E) 7.07 (I apologize for the crude drawing, the problem was in my book so I had to improvise using Paint.) Figure What I did: Since $\mathrm{BC}$ and $\mathrm{CD}$ are both $10$, I used the Pythagorean Theorem to get the length of diagonal $\mathrm{BD}$ as $\sqrt{200}$ and divide it by $2$. My answer was therefore (E) 7.07. What my book did: Let $\mathrm{T}$ be the midpoint of $\mathrm{AB}$. Set $\mathrm{BP}$ to $x$, and the length of $\mathrm{BT}$ to $10-x$. To complete the triangle, they set the length of $\mathrm{PT}$ to $5$. Then they used the Pythagorean Theorem to do $x^2 = (10-x)^2 + 5^2$, yielding an answer of (D) 6.25. While I understand how they did it, I simply cannot understand why my method didn't work. Is there some law that I'm not aware of pertaining to this problem? Since my incorrect answer was an answer choice, I assume there is a common error I'm making that was set as a trap. Could someone explain this to me? Thank you very much.
Here is the diagram redrawn in correct proportion; to get the square edge length $10,$ multiply all lengths by $$ \frac{10}{8} = \frac{5}{4} = 1.25. $$ Drawn to scale, it shows why your method fails: $P$ is equidistant from two adjacent vertices and from the opposite side, not from all four vertices, so it is not the centre of the square and $BP$ is not half the diagonal.
The valid interval of the maclaurin series for $\frac{1}{1+x^2}$ The Maclaurin series for $\frac{1}{1-x}$ is $1 + x + x^2 + \ldots$ for $-1 < x < 1$. To find the Maclaurin series for $\frac{1}{1+x^2}$, I replace $x$ by $-x^2$. The Maclaurin series for $\frac{1}{1+x^2} = 1 - x^2 + x^4 - \ldots$. This is valid for $-1 < -x^2 < 1$ if I replace $x$ by $-x^2$. So if I multiply each side by $-1$, I get $-1 < x^2 < 1$. If I take the square roots, I get $i < |x| < 1$. And I am stuck. And my book says that this Maclaurin series is valid for $-1 < x < 1$ anyway. There is no further explanation. How can I derive $-1 < x < 1$ from $i < |x| < 1$? Please help.
Taking square roots in $a<x^2<b$ to get $\sqrt a < |x| < \sqrt b$ is valid if $a\ge0$ and $b\ge0$. But notice that the solution of $-4< x^2$ is $-\infty<x<\infty$ since every square of a real number is $>-4.$ Since the square of a real number is never negative, the inequality $-1<x^2<1$ is equivalent to $x^2<1.$ That is equivalent to $x^2-1<0$, which is $(x-1)(x+1)<0$. A product of two numbers is negative only if one of them is negative and the other is positive. Since the second factor is bigger than the first, it needs to be the one that is positive. So we have $x+1>0$ and $x-1<0$. Thus $x>-1$ and $x<1.$
The New Year will be 2017: how many pairs of integer solutions to $x^2 + y^2 = (2017)^3$? We're almost in 2017. I wonder how many pairs of integer solutions the following Diophantine equation has: $$x^2 + y^2 = (2017)^3$$ Thanks in advance.
I want to share with everyone that I just found an algebraic method to calculate primitive triples in my specific case of raising to the third power: $$[x(x^2-3y^2)]^2+[y(3x^2-y^2)]^2=n^3,\qquad n=x^2+y^2.$$ I do not know why, but on specialist internet sites this is not often found. Happy New Year to all.
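As a sanity check of the identity (the use of SymPy and the seed representation $2017 = 9^2 + 44^2$ are my own additions), it can be verified symbolically and used to produce one explicit solution for $2017^3$:

```python
from sympy import symbols, expand

x, y = symbols('x y')
lhs = (x*(x**2 - 3*y**2))**2 + (y*(3*x**2 - y**2))**2
print(expand(lhs - (x**2 + y**2)**3))   # prints 0, confirming the identity

a, b = 9, 44                             # 9**2 + 44**2 == 2017
X, Y = a*(a**2 - 3*b**2), b*(3*a**2 - b**2)
print(X**2 + Y**2 == 2017**3)            # True
```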
Please help me evaluate the following infinite sum: $\sum\limits_{k=1}^{\infty}\frac{1}{p_k(p_k-1)}$ I'm trying to evaluate: $\displaystyle{\sum_{k=1}^{\infty}\frac{1}{p_k(p_k-1)}}$, where $p_k$ is the $k$-th prime number. But I cannot even figure out how to begin. I have a feeling that this could involve the prime zeta function, but I'm not sure. In fact, I'm not even sure an analytic closed-form solution is possible. Could you please help?
We can rewrite this as $\sum \limits_{k=1}^\infty ( \frac{1}{p_k^2}+\frac{1}{p_k^3}+\dots)$. So we want $\sum\limits_{k=2}^\infty P(k)$ where $P$ is the prime zeta function.
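For a numerical feel (just a rough sketch, not a closed form; the truncation bound $10^6$ is an arbitrary choice, and the tail beyond it is tiny since the terms behave like $1/p^2$), one can sum the series directly over primes:

```python
from sympy import primerange

total = sum(1.0 / (p * (p - 1)) for p in primerange(2, 10**6))
print(total)   # roughly 0.7731
```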
Definite Integral of sin(arccosh(x)+7) How would I go about computing $\displaystyle\int_{10}^{16}\sin(\cosh^{-1}(x)+7)\mathrm dx$? I haven't attempted anything yet, because I don't even know how to integrate the inverse hyperbolic cosine.
Making the problem of the antiderivative more general, let us consider $$I=\int \sin \left(a+\cosh ^{-1}(x)\right)\,dx$$ As Fib1123 suggested, changing variable $$y=\cosh ^{-1}(x)\implies x=\cosh(y)\implies dx=\sinh(y)\,dy$$ the integrand becomes $$\sinh (y) \sin (a+y)=-i \sin(i y)\sin(a+y)$$ Now, using $$\sin(p)\sin(q)=\frac 12\left(\cos(p-q)-\cos(p+q)\right)$$ $$\sinh (y) \sin (a+y)=-\frac{i}{2} (\cos (a+y-i y)-\cos (a+y +i y))$$ Integrate now to get $$I=\frac i 2 \left(\frac{\sin (a+y-i y)}{ (i-1)}+\frac{\sin (a+y+i y)}{ (i+1)}\right)$$ Now expand the sines taking into account $\sin(iy)=i \sinh(y)$ and $\cos(iy)=\cosh(y)$ and simplify the complex terms. You should arrive to $$I=\frac{1}{2} (\cosh (y) \sin (a+y)-\sinh (y) \cos (a+y))$$ Edit In the same spirit, the following integrals could be computed $$\int \sin \left(a+b \cosh ^{-1}(x)\right)\,dx=\frac{\cosh (y) \sin (a+b y)-b \sinh (y) \cos (a+b y)}{b^2+1}$$ $$\int \cos \left(a+b \cosh ^{-1}(x)\right)\,dx=\frac{\cosh (y) \cos (a+b y)+b \sinh (y) \sin (a+b y)}{b^2+1}$$ $$\int \sin \left(a+b \sinh ^{-1}(x)\right)\,dx=\frac{\sinh (y) \sin (a+b y)-b \cosh (y) \cos (a+b y)}{b^2+1}$$ $$\int \cos \left(a+b \sinh ^{-1}(x)\right)\,dx=\frac{\sinh (y) \cos (a+b y)+b \cosh (y) \sin (a+b y)}{b^2+1}$$
If a cyclotomic integer has (rational) prime norm, is it a prime element? Let $p$ be a rational prime. Consider the ring of integers $\mathbb{Z}[\zeta_p] $ of the $p$-th cyclotomic field $\mathbb{Q}(\zeta_p)$. If the norm $N(\alpha)$ of $\alpha \in \mathbb{Z}[\zeta_p]$ is a rational prime, must $\alpha$ be a prime element of $\mathbb{Z}[\zeta_p] $? If it helps, I only need the case where $N(\alpha) \equiv 1$ mod $p$.
As pointed out by user1952009, we can look at the norm of the ideal $(\alpha) \subset \Bbb Z[\zeta_n]$. Definition. Let $K$ be a number field and let $I$ be a non-zero ideal of $\mathcal O_K$. Then the (absolute) norm of $I$ is defined as the cardinality of the quotient ring $\mathcal O_K / I$, which is finite. Proposition. Let $x \in \mathcal O_K$ be non-zero. Then $$N( x\mathcal O_K) = |N_{K/\Bbb Q}(x)|$$ Finally, if $\alpha$ has a prime norm, then the ideal $(\alpha)$ has a prime absolute norm, say $p$, which means that the ring $\mathcal O_K/(\alpha)$ has $p$ elements. It implies that this ring is the field $\Bbb F_p$ (see also here), and $(\alpha)$ is a maximal ideal of $\mathcal O_K$ and $\alpha$ is prime in $\mathcal O_K$.
Writing mathematically a formula that checks every digit I want to write mathematically a formula that checks the amount of the digit $0$ on even and on odd position of a given number $N$. So for example $N=2000$ has $2$ zeros on odd position and $1$ zero on even. Or if $N=51601$ then $0$ zeros on odd position and $1$ zero on even. How do I write this mathematically, I have no clue how to write a loop that checks every digit of a number of the size $n$? Something like this: $$O=\sum\limits_{\substack{pos = 0\\\ x=0}}^{pos = n} \mathbf{1}_{odd}\qquad \text{and}\qquad E=\sum\limits_{\substack{pos = 0\\\ x=0}}^{pos = n} \mathbf{1}_{even}$$ Where $x$ is the digit at position $pos$. And the variables $O$ stand for $\#$ odd zeros and $E$ for $\#$ of even zeros. Thank you for help Edit: It would be nice if it also works for a binary representation like $N=100101$
HINT: Look at the last two digits, $N \bmod 100$ (one occupies an odd position, the other an even position), then replace $N$ by $\lfloor N/100\rfloor$ and repeat.
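Spelling the hint out as code (a sketch; positions are counted from the least significant digit starting at $1$, which matches the examples in the question, and passing base $2$ covers the binary case mentioned in the edit):

```python
def count_zeros(n, base=10):
    """Return (#zeros in odd positions, #zeros in even positions)."""
    odd = even = 0
    pos = 1
    while n > 0:
        n, digit = divmod(n, base)   # peel off the lowest digit
        if digit == 0:
            if pos % 2 == 1:
                odd += 1
            else:
                even += 1
        pos += 1
    return odd, even

print(count_zeros(2000))              # (2, 1), as in the question
print(count_zeros(51601))             # (0, 1)
print(count_zeros(0b100101, base=2))
```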
Solve $\cos x-\sin(2x)=0$ Solve $\cos x-\sin(2x)=0$ I did: $$\cos x=\color{blue}{\sin(\pi /2-x)}$$ therefore: $$\color{blue}{\sin(\pi /2-x)}=\sin(2x)$$ Can I do that:?? now to solve only for $\pi/2-x=2x$ so $x=\pi/6+2\pi k$
$\cos x-\sin(2x)=0$ implies $\cos x-2\sin x\cos x=0$, i.e. $\cos x\,(1-2\sin x)=0$. It means $\cos x=0$ or $\sin x=\tfrac12$, which gives $$x=2n\pi\pm\frac\pi2 \qquad\text{or}\qquad x=n\pi+(-1)^n\frac\pi6,\qquad n\in\mathbb Z.$$ I hope it helps you.
Do we assume $f_n$'s map into $\Bbb{R}$ or $\Bbb{C}$ in Theorem 7.8 Rudin's *Principles of Mathematical Analysis*? Theorem 7.8 The sequence of functions $\{f_n\}$ defined on $E$ converges uniformly on $E$ if and only if for every $\epsilon > 0$ there exists an integer $N$ such that $m \geq N, n \geq N, x \in E$ implies \begin{equation} |f_n(x)-f_m(x)| \leq \epsilon \end{equation} For the backwards direction, since the codomain of $f$ is not given, how can we use Theorem 3.11 (Cauchy sequence in a compact metric space (or $\mathbb{R}^k$) converges to some point in the metric space) to prove pointwise convergence of $f$?
I cannot quickly scan the whole text to check all references to 7.8. Based on the contents of the chapter on uniform convergence and continuity (pages 149-151), Theorem 7.8 is used to prove Theorem 7.15, which is essentially about complex-valued, bounded and continuous functions $\mathcal{C}(X)$. Theorem 3.11 is about metric spaces that are either compact, $\mathbb{R}^k$ or $\mathbb{C}$. So $E$ in Theorem 7.8 is a typo; actually, it should be $\mathbb{C}$, as we are dealing with complex-valued functions. This book has had three editions and many readers, but there is no mention of this point in the errata. The errata from George Bergman do not include this error: https://math.berkeley.edu/~gbergman/ug.hndts/m104_Rudin_notes.pdf Notes from Jiří Lebl point to complex-valued functions for Theorem 7.8 (page 3 of 21 in the PDF): https://math.okstate.edu/people/lebl/uw522-s12/lec1.pdf
Surfaces with constant Gaussian curvature Must the surfaces embedded in $\mathbb{R}^3$ with constant Gaussian curvature be (a part of) surfaces of revolution? It seems that on the text book, the examples are only those surfaces of revolution if one talks about constant Gaussian curvature. Any counter-examples or proof? Thanks.
Every "generalized cylinder" or "generalized cone" has Gaussian curvature identically zero. Most of these are not surfaces of rotation. More interestingly, for each surface of rotation having constant Gaussian curvature, there is a one-parameter family of "helical" surfaces of constant Gaussian curvature, the parameter being the "pitch" of the helix. (The family arising this way from the pseudosphere is Dini's surface.) There are also surfaces of constant negative Gaussian curvature having no ambient symmetries. This is physically clear if you imagine a hyperbolic patch made from a sheet of rubber or paper: The patch is "floppy", and most of its physical configurations cannot be "slid along themselves". Edit: Parametric formulas for helical surfaces of constant curvature (joint work with J. M. Antonio, 2008, unpublished) are given in terms of differential equations, just as for surfaces of rotation. Fix real numbers $k$, $C > 0$, and $B > 0$, and let $$ G(u) = \begin{cases} \phantom{\pm} B^{2} - u^{2} & K = 1, \\ \phantom{\pm} B^{2} + C^{2}u & K = 0, \\ \pm B^{2} + u^{2} & K = -1. \end{cases} $$ If we define \begin{align*} h_{2}(u) &= \sqrt{G(u) - k^{2}}, \\ h_{1}'(u) &= \sqrt{\frac{G(u)}{G(u) - k^{2}} \left[\frac{1}{G(u)} - \frac{G'(u)^{2}}{4(G(u) - k^{2})}\right]}, \\ \psi'(u) &= -\frac{k}{G(u)}\, h_{1}'(u), \end{align*} then $$ \mathbf{x}(u, v) = \left[ \begin{array}{@{}c@{}} h_{1}(u) + k(v + \psi(u)) \\ h_{2}(u) \cos(v + \psi(u)) \\ h_{2}(u) \sin(v + \psi(u)) \\ \end{array}\right] $$ parametrizes a helical surface of constant Gaussian curvature in each interval where $$ 0 < G(u) - k^{2}\quad\text{and}\quad \frac{G(G')^{2}}{4(G - k^{2})} \leq 1. $$ The surface is immersed, but not necessarily embedded, and is not geodesically complete unless $k = 0$ and $G(u) = 1 - u^{2}$ (the sphere), or $C = 0$ in the flat case (a cylinder). Taking $B = 0$ in the negative-curvature case gives helical surfaces converging to the pseudosphere as $k \to 0$.
Do the eigenvalues of the product of a positive diagonal matrix and a skew-symmetric matrix still have zero real part? Let $A$ be a $n\times n$ diagonal matrix with real, positive entries on the diagonal. Further, let $B$ be a $n\times n$ invertible, skew-symmetric and real matrix. Now let $n$ be even. Then it is known that the eigenvalues of $B$ all have zero real part. Is this also true of the product $AB$? I tried doing some simulations in Matlab: The real parts were not zero, but always $\sim 10^{15}$ times smaller than the imaginary parts, so I suspect this is a rounding error on Matlabs part. But how to prove it?
Let $R$ be the (symmetric) root of $A$. Then, $AB$ and $RBR$ are similar and we have $$ (RBR)^T = RB^T R = -RBR. $$ That is, $RBR$ is skew-symmetric too. In particular the eigenvalues of $RBR$ are imaginary (including zero) and the same as $AB$.
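A quick numerical illustration of this similarity argument (a sketch using NumPy; the matrix size and random entries are arbitrary choices, and the printed residuals are only at rounding level):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.diag(rng.uniform(0.1, 2.0, size=n))      # positive diagonal matrix
M = rng.standard_normal((n, n))
B = M - M.T                                      # real skew-symmetric matrix

print(np.max(np.abs(np.linalg.eigvals(A @ B).real)))   # tiny: real parts vanish up to rounding

R = np.sqrt(A)                                   # A is diagonal, so elementwise sqrt is its matrix root
S = R @ B @ R
print(np.max(np.abs(S + S.T)))                   # essentially 0: S is again skew-symmetric
```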
Applications of complex numbers to solve non-complex problems Recently I asked a question regarding the diophantine equation $x^2+y^2=z^n$ for $x, y, z, n \in \mathbb{N}$, which to my surprise was answered with the help complex numbers. I find it fascinating that for a question which only concerns integers, and whose answers can only be integers, such an elegant solution comes from the seemingly unrelated complex numbers - looking only at the question and solution one would never suspect that complex numbers were lurking behind the curtain! Can anyone give some more examples where a problem which seems to deal entirely with real numbers can be solved using complex numbers behind the scenes? One other example which springs to mind for me is solving a homogeneous second order differential equation whose coefficients form a quadratic with complex roots, which in some cases gives real solutions for real coefficients but requires complex arithmetic to calculate. (If anyone is interested, the original question I asked can be found here: $x^2+y^2=z^n$: Find solutions without Pythagoras!) EDIT: I just wanted to thank everyone for all the great answers! I'm working my way through all of them, although some are beyond me for now!
$\newcommand{\SLp}[1]{\mathrm L^{#1}}\newcommand{\norm}[1]{\lVert#1\rVert}$ The famous Riesz-Thorin Interpolation Theorem: Let $(X,\mathcal M,\mu),(Y,\mathcal N,\nu)$ be measure spaces, $p_0,p_1,q_0,q_1 \in [1,\infty]$. (If $q_0 = q_1 = \infty$, $\nu$ is also required to be semi-finite) Define, for $t \in \left]0,1\right[$, $$ \frac{1}{p_t} = \frac{1-t}{p_0} + \frac{t}{p_1}, \qquad \frac{1}{q_t} = \frac{1-t}{q_0} + \frac{t}{q_1} $$ If $\Phi\in\operatorname{Hom}(\SLp{p_0}(\mu) + \SLp{p_1}(\mu),\SLp{q_0}(\nu) + \SLp{q_1}(\nu))$ such that $\Phi\restriction_{\SLp{p_0}},\Phi\restriction_{\SLp{p_1}}$ are bounded. Then $$ \norm{\Phi}_{\SLp{p_t} \to \SLp{q_t}} \leq \norm{\Phi}_{\SLp{p_0} \to \SLp{q_0}}^{1-t}\norm{\Phi}_{\SLp{p_1} \to \SLp{q_1}}^t $$ which does not seem to be related to complex analysis, is usually proved using the Three Lines Lemma (a.k.a. Hadamard Three Line Theorem): Let $f$ be a bounded continuous complex function on the strip $E = \{z \in \mathbb{C}\mid a \leq \Re z \leq b\}$ that is holomorphic on $E^\circ$. Define $m(x) = \sup_{y \in \mathbb{R}} |f(x + iy)|$. Then $m(x)$ is logarithmically convex on $[a,b]$.
Why can't I integrate trigonometric functions without making a substitution? For example, $\int\sin^3x\,dx$ is turned into $\int \sin x\sin^2x\,dx$, then a substitution for $\sin^2x$ is made. What I would have done is take $\int(\sin x)^3\,dx$ and integrate via recognition: $\int (ax+b)^n\, dx =\frac{(ax+b)^{n+1}}{a(n+1)} + C$. However, this will give a different answer from the correct one. Why is that? Why can't this reverse chain rule work for trigonometric functions?
This is basically boils down to the fact that $$ (x^n)' = n \, x^{n-1} $$ but $$ (\sin^n x)' = n \, \sin^{n-1} x \, \cos x. $$ You can see how these are not 'analogous' so you can't generalize the integral formula for $(a \, x + b)^n$
If $A(z_1)$ and $B(z_2)$ are two points in the Argand plane, find $\angle ABO$. If $A(z_1)$ and $B(z_2)$ are two points in the Argand (complex) plane such that $$\frac{z_1}{z_2}+\frac{\overline{z_1}}{\overline{z_2}}=2,$$ find the value of $\angle ABO$, where $O$ is the origin. Using the given condition, I found that the real part of $\frac{z_1}{z_2}$ is $1$, but I am not able to use this to find $\angle ABO$. Could someone help me with this?
If we write $z_1=\rho_1 e^{i\theta_1}$ and $z_2=\rho_2 e^{i\theta_2}$ (with $z_1\ne z_2$, if $z_1=z_2$ we have trivially $\beta=0$), then from the equation we get: $${\rho_1 e^{i\theta_1}\over\rho_2 e^{i\theta_2}}+{\rho_1 e^{-i\theta_1}\over\rho_2 e^{-i\theta_2}}=2$$ $$e^{i(\theta_1-\theta_2)}+e^{-i(\theta_1-\theta_2)}=2{\rho_2\over\rho_1}$$ $${e^{i(\theta_1-\theta_2)}+e^{-i(\theta_1-\theta_2)}\over2}={\rho_2\over\rho_1}$$ $$\cos(\theta_1-\theta_2)={\rho_2\over\rho_1}$$ so we get $\rho_2\le\rho_1$. Now we can apply the law of sines to $\triangle ABO$: $${\sin\beta\over\rho_1}={\sin(\theta_1-\theta_2)\over\overline {AB}}\longrightarrow\sin\beta=\rho_1{\sin(\theta_1-\theta_2)\over\overline {AB}}$$ where $\beta=\angle ABO$. We know that: $$\sin(\theta_1-\theta_2)=\sqrt{1-\cos^2(\theta_1-\theta_2)}={\sqrt{\rho_1^2-\rho_2^2}\over\rho_1}$$ and applying the cosine rule to $\triangle ABO$: $$\overline{AB}=\sqrt{\rho_1^2+\rho_2^2-2\rho_1\rho_2\cos(\theta_1-\theta_2)}=\sqrt{\rho_1^2+\rho_2^2-2\rho_1\rho_2{\rho_2\over\rho_1}}=\sqrt{\rho_1^2-\rho_2^2}$$ hence we get: $\sin\beta=1\longrightarrow\beta={\pi\over2}$. The angle $\beta\equiv\angle ABO$ is always ${\pi\over2}$.
Why do arrows point towards the codomain in function diagrams? The definition of a function given in Kenneth Rosen's Discrete Math Book is Let A and B be nonempty sets. A function from A to B is an assingment of exactly one element of B to each element of A. [Emphasis mine] It seems the author wants us to think of the elements in the codomain being assigned to the elements in the domain, not the elements in the domain being assigned to the ones in the codomain. However, in this book and most others, functions are often illustrated with such diagrams: If it is more useful to think of the elemnts in the codomain as assigned to the elements in the domain, is there a reason why the arrows are not pointing in the other direction?
I would say that this is more about conventions than mathematics. As long as the mathematical meaning is clear, what is the big deal? For instance, in literature, the open interval $(a,b)$ can be also denoted as $]a,b[$. Arguing which notation is more reasonable is really not a mathematical question.
Find coefficient of $x^n$ in $(1+x+2x^2+3x^3+.....+nx^n)^2$ Find coefficient of $x^n$ in $(1+x+2x^2+3x^3+.....+nx^n)^2$ My attempt:Let $S=1+x+2x^2+3x^3+...+nx^n$ $xS=x+x^2+2x^3+3x^4+...+nx^{n+1}$ $(1-x)S=1+x+x^2+x^3+....+x^n-nx^{n+1}-x=\frac{1-x^{n+1}}{1-x}-nx^{n+1}-x$ $S=\frac{1}{(1-x)^2}-\frac{x}{1-x}=\frac{1-x+x^2}{(1-x)^2}$. (Ignoring terms which have powers of x greater than $x^n$) So one can say that coefficient of $x^n$ in $(1+x+2x^2+3x^3+.....+nx^n)^2$ =coefficient of $x^n$ in $(1-x+x^2)^2(1-x)^{-4}$ Is there a shorter way.
Write $S=\sum_{k=0}^n c_k x^k$ with $c_0=1$ and $c_k=k$ for $k\ge1$ (note $c_1=1$). The coefficient of $x^n$ in $S^2$ is $\sum_{k=0}^n c_kc_{n-k}$; the two end terms $c_0c_n$ and $c_nc_0$ contribute $2n$, so the coefficient is $$ 2n+\sum_{k=1}^{n-1} k(n-k) = 2n+\frac{n^3-n}{6} = \frac{n(n^2+11)}{6}.$$
I'm stuck on how to show this sequence is monotonically decreasing: $U_{n+1}=U_{n}^{2}+\frac18$ and $U_0=\frac12$. The information I have: $U_{0}=\frac{1}{2}$, $U_{n+1}=U_{n}^{2}+\frac{1}{8}$. I have already proved that $U_{n}$ is positive, as the exercise requested, but I still don't know how to use that together with $U_{n+1}$ to show that $U_{n}$ is monotonically decreasing. Thanks for your attention.
Proceed by induction. First verify that $u_1 < u_0$. Next suppose $u_{n}<u_{n-1}$ for some positive integer n. As you've already shown, the terms of the sequence are positive, hence $$u_n<u_{n-1}$$ $$\Rightarrow {u_n}^2 < {u_{n-1}}^2$$ $$\Rightarrow {u_n}^2 + \frac{1}{8} < {u_{n-1}}^2 + \frac{1}{8}$$ $$\Rightarrow u_{n+1} < u_n$$ which completes the induction.
proving $t^6-t^5+t^4-t^3+t^2-t+0.4>0$ for all real $t$ Proving $t^6-t^5+t^4-t^3+t^2-t+0.4>0$ for all real $t$: for $t\leq 0,$ the left-side expression is $>0$, since every term is nonnegative and the constant is positive; for $t\geq 1,$ the left side equals $t^5(t-1)+t^3(t-1)+t(t-1)+0.4$, which is $>0$. I wasn't able to prove it for $0<t<1$; could someone help me with this?
For $0< t < 1$: $$\frac {t^7 + 1}{t+1} \ge 0.6 \iff t^7 + 1 \ge 0.6t + 0.6 \iff t^7 - 0.6t \ge -0.4.$$ Now $\frac {d}{dt}(t^7 - 0.6t) = 7t^6 - 0.6 = 0$ when $t = \sqrt[6]{\frac 6{70}}$, and $\frac{d^2}{dt^2}(t^7 - 0.6t) = 42t^5 > 0$ for $t > 0$, so $t = \sqrt[6]{\frac 6{70}}$ gives the minimum value of $t^7 - 0.6t$. Hence $$t^7 - 0.6t \ge \left(\sqrt[6]{\tfrac 6{70}}\right)^7 - 0.6\sqrt[6]{\tfrac 6{70}} = \sqrt[6]{\tfrac 6{70}}\left(\tfrac 6{70} - 0.6\right)\approx -0.341 > -0.4,$$ so $t^7 - 0.6t \ge -0.4$ for $0 < t < 1$. Therefore $\frac {t^7 + 1}{t+1}= t^6 - t^5 + t^4 - t^3 +t^2 -t + 1 \ge 0.6$, and $t^6 - t^5 + t^4 - t^3 +t^2 -t + 0.4 > 0$ for $0 < t < 1$.
particular integral in partial differential equation Solve the partial differential equation $$\left[D^2+{D^\prime}^2\right]z=\cos mx\cdot\cos ny.$$ I have a problem in finding particular integral where $\displaystyle\frac\partial{\partial x}=D$ and $\displaystyle\frac\partial{\partial y}=D^\prime$.
$$\frac{\partial^2z}{\partial x^2}+\frac{\partial^2z}{\partial y^2}=\cos (mx)\,\cos (ny)\qquad (1)$$ The particular solution of (1) is sought in the form $$z=A\,\cos (mx)\,\cos (ny)$$ Substitution this in (1) yields $$-A\, {{n}^{2}} \cos{\left( m x\right) } \cos{\left( n y\right) }-A\, {{m}^{2}} \cos{\left( m x\right) } \cos{\left( n y\right) }=\cos{\left( m x\right) } \cos{\left( n y\right) }$$ Then $$A=-\frac{1}{n^2+m^2},$$ $$z_p=-\frac{\cos (mx)\cdot\cos (ny)}{n^2+m^2}$$
Why is the solution to $x-\sqrt 4=0$ not $x=\pm 2$? If the equation is $x-\sqrt 4=0$, then $x=2$. If the equation is $x^2-4=0$, then $x=\pm 2$. Why is it not $x=\pm 2$ in the first equation?
Your confusion is understandable! You are thinking: if the square root of some number $x$ is that number $y$ such that $y^2 = x$, then why is the solution to $x = \sqrt{4}$ different from $x^2 - 4 = 0$ ?!? And the answer is: it wouldn't be different! That is, if the square root of $x$ would indeed be defined as "that number $y$ such that $y^2 = x$", then $\sqrt{4}$ could be either 2 or -2. But, obviously, that is not how we look at $\sqrt{4}$ since we all know that that is just 2. This means: thinking of the square root of $x$ as "that number $y$ such that $y^2 = x$" is apparently not how we think about the square root. OK, but why not? Well, notice that the whole "that number" part would be misleading in the first place: it suggest that there is one number with this property, but obviously there is not. So, if anything, we would have to say that the square root of $x$ is "any number such that $y^2 = x$" ... and we could have done so ... ... but we didn't. OK, but then we (you!) ask once again: why not? Well, one simple reason is that we want the square root to act like a function, meaning that for any $x$, there is only one $y$ that is "the" square root of $x$. And we want functions, because functions are super useful: one thing in, and one thing out! Calculations can be done, etc. etc. OK, but how can we ensure a function? Well, one thing we can do is to define the square root of $x$ as "that positive number $y$ such that $y^2 = x$" ... and that's exactly what we did... and hence the difference between $x = \sqrt{4}$ and $x^2 - 4 = 0$ Of course, we could also have defined it as that negative $y$ such that $y^2 = x$" ... but in most practical cases, the positive one is the one you want, and the one that most often makes concrete sense in real life applications. As a final reason for defining the square root of a number to be what it is, is that it allows us to explicitly distinguish between the two solutions to $x^2-2=0$, those being $\sqrt{2}$ and $-\sqrt{2}$: if the square root of 2 was any number that when squared gives you 2, then there no longer is a difference between $\sqrt{2}$ and $-\sqrt{2}$. In fact, it would not even be clear how many, and which specific, numbers would be solutions to $x^2-2=0$ if you said $x = \sqrt{2}$! Indeed, without the square root picking out a specific number, how would you refer to these different solutions? There probably is a way, but the square root function certainly makes our mathematical lives a lot easier for something like this!
Volume comparison for minimal submanifolds I am reading the book "A course in Minimal Surfaces" by Colding and Minicozzi. I don't understand a step in the proof of Corollary 1.13. Let $\Sigma^k \subset \mathbb{R}^n$ be a $k$-dimensional minimal submanifold. Fix $x_0 \in \Sigma$. I want to study the behaviour of the function: $$ s\mapsto \Theta_{x_0}(s) \,\, \colon = \frac{ \text{Vol}\big( B_s(x_0) \cap \Sigma \big) }{\text{Vol}\big(B_s \subset \mathbb{R}^k \big)} $$ where $B_s(x_0)$ is the $n$-dimensional euclidean ball of radius $s$ centred in $x_0$ and $B_s$ is the $k$-dimensional euclidean ball of radius $s$ centred in the origin. I know that $\Theta_{x_0}(s)$ is monotone nondecreasing. I want to show that $$ \Theta_{x_0}(s) \ge 1. $$ In the book, the authors say that since $\Sigma$ is smooth and proper, it is infinitesimally Euclidean and hence $$ \lim_{s \rightarrow 0}\Theta_{x_0}(s) \ge 1. $$ Can you explain me this better?
If $\Sigma$ is smooth and proper, then near $x_0\in \Sigma$ the submanifold looks locally like a $k$-dimensional disk (by the definition of a manifold). The small ball $B_s(x_0)$ can intersect $\Sigma$ in several such pieces, but it intersects $\Sigma$ in at least one of them, and each piece has volume approximately $\text{Vol}(B_s\subset\mathbb{R}^k)$ as $s\to0$. So the numerator in the definition of $\Theta$ is, for small $s$, at least approximately $\text{Vol}(B_s\subset\mathbb{R}^k)$, and hence $\lim_{s\to 0}\Theta_{x_0}(s)\geq 1$.
Spectrum of bounded operators Let $A$ be a bounded operator on complex Hilbert space $H$ such that $$(1+A^6)(1+A^2+A^4)=0.$$ Let $k\in\mathbb{C}$ be an element of the spectrum $\sigma(A)$. How do I show that $k^{12}=1$? What I know: The spectrum of $A$ is $\sigma(A)=\{\lambda\in\mathbb{C}:A-\lambda I\text{ is not invertible }\}$. So there exists a non-zero vector $x\in H$ in the kernel of $A-kI$, otherwise said $Ax=kx$. Then we compute $(1+A^6)(1+A^2+A^4)=1+A^2+A^4+A^6+A^{8}+A^{10}=0$. This gives $(1+k^2+k^4+k^6+k^{8}+k^{10})x=0$. How do I get the statement from this? Edit: Following @DanielFischer's hint (thank you), we multiply by $(1-A^2)$ to get $$(1-A^2)(1+A^2+A^4+A^6+A^{8}+A^{10})=1-A^{12}=0.$$ This gives $k^{12}=1$.
Note that a priori you do not know that the spectrum of $A$ consists of real eigenvalues. That is, if $A - \lambda I$ is not invertible, in the infinite dimensional setting it doesn't always mean you can find $0 \neq x \in H$ such that $Ax = \lambda x$. However, you are given that $A$ satisfies some polynomial equation $p(A) = 0$ with $$ p(x) = (1 + x^6)(1 + x^2 + x^4). $$ Since $$ (1 - x^2)(1 + x^6)(1 + x^2 + x^4) = 1 - x^{12}$$ we also know that $g(A) = 0$ with $g(x) := 1 - x^{12}$. By the spectral mapping theorem, we must have $p(\sigma(A)) = \sigma(p(A)) = \{ 0 \}$ which implies that $$\sigma(A) \subseteq \{ z \in \mathbb{C} \, | \, p(z) = 0 \} = \{ z \in \mathbb{C} \, | \, z^{12} = 1 \}. $$
Singular Value Decomposition - Proof With singular value decomposition we can write the following: \begin{equation} A = U \Sigma V^{T} \end{equation} \begin{equation} U^{T}AV=U^{T}U\Sigma V^{T} V \end{equation} Since $U,V$ orthogonal, the above equation leads to the following: \begin{equation} \Sigma =U^{T}AV \end{equation} I've seen a proof that says the following \begin{equation} \Sigma^{-1}=V^TA^{-1}U \end{equation} Can someone help with to understand how we ended up to the latter equation.
If all the matrices involved are square and invertible, we have $U^T = U^{-1}$ and $V^{T} = V^{-1}$, so $$ \Sigma^{-1} = (U^{-1}AV)^{-1} = V^{-1}A^{-1}U = V^TA^{-1}U $$ as desired.
Find how many solutions are to $(x-3)(x-2)=\sin x$ Find how many solutions exist to the equation $(x-3)(x-2)=\sin x$ Let $E(x) = (x-3)(x-2)-\sin x$. $E(0)>0,\quad E(1,5) < 0, \quad E(3) > 0$ Because $E(x)$ is continuous and changes signs 3 times according to Intermediate Value theorem there're at least 2 roots. $E(x)$ is differentiable for all $x$ if there're 3 or more roots by Rolle's theorem $E'(x)$ would have 2 roots. $E'(x) = 2x-5-\cos x$ $2x = 5+\cos x$ How do I prove that $2x = 5+\cos x$ has less than 2 roots? Or did I do something wrong along the way?
The analogy that you are using only applies to polynomials. For example, let the function $f(x) = 5x^4 + 3$ and let $g(x)$ be a primitive of $f(x)$. Then we have that $g(x) = x^5 + 3x + C$. Depending on the constant $C$, the number of roots will change. Likewise, let $f(x) = \cos{x} + 10$. This equation does not have any roots, however, the preceding derivatives have an infinite number of roots.
Approximation with a normal distribution. Every day Alice practises the tennis stroke until she reaches $50$ successful strokes. If each stroke is good with a probability of $0,4$, independently of the others, approximately what is the probability that at least $100$ attempts are necessary to succeed? Let $X$ be a binomial random variable with parameters $$n=100$$ and $$p=0,4$$ Since $n$ is large we can approximate with a normal distribution with parameters $$\mu=40$$ and $$\sigma=\sqrt{0,4*100*0,6}=\sqrt{24}$$ Applying the normal approximation \begin{align}P(X> 49,5)&=1-P(X<49,5)\\&=1-P((X-\mu )/\sigma < (49,5-40 )/ \sqrt{24} )\\&=1-P((X-\mu )/\sigma < 1,939 )\\&=1-\Phi (1,939)\\&=1-0,9737\\&=0,0263\end{align} But the solution in the book is $0,974$ (it could be $P(X<49,5)$).
When $n$ is large, a binomial random variable with parameters $n$ and $p$ has approximately the same distribution as a normal random variable with the same mean and variance as the binomial. If the random variable is negative binomial, is it possible to apply this approximation?
Need help on understanding the theorem in real analysis! Theorem 1.3.3: Let $L \in R$ and let $x_n$ be a sequence of real numbers. Then $x_n \to L$ if and only if $\limsup x_n = \liminf x_n = L$. Proof: First, suppose $x_n \rightarrow L.$ Thus if $\epsilon > 0$ there exists $N \in \mathbb{N}$ such that $ n \geq N$ implies $|x_n-L| < \epsilon/2$, which implies $s_n = \sup(T_n) \leq L + \epsilon/2$ and $i_n = \inf(T_n) \geq L-\epsilon/2$. Thus $$L-\frac{\epsilon}{2} \leq i_n \leq s_n \leq L +\frac{\epsilon}{2}$$ which implies that $|s_n-L| \leq \frac{\epsilon}{2} \lt \epsilon$ and $|i_n-L| \leq \frac{\epsilon}{2}<\epsilon$, for all $\epsilon > 0$. Thus $s_n \rightarrow L $ = lim sup $x_n$ and $i_n \rightarrow L$ = lim inf $x_n$. I only included the first part of the proof. I boxed the part where I was confused. Why did they use inequalities $s_n = \sup(T_n) \leq L + \epsilon/2$ and $i_n = \inf(T_n) \geq L-\epsilon/2$ instead of $s_n = \sup(T_n) \lt L + \epsilon/2$ and $i_n = \inf(T_n) \gt L-\epsilon/2$.
Maybe you are thinking that the strict inequality is valid because $|x_n-L|<\epsilon/2 $ is true, but note that this inequality is true for any element $x_n$ in the sequence beyond $N$, but the supremum or infimum of such elements in $T_n$ may not be any of the elements in the tail of the sequence, so you need to write it as a weak inequality. Also note that this inequality does not affect the proof anyway.
What were some major mathematical breakthroughs in 2016? As the year is slowly coming to an end, I was wondering which great advances have there been in mathematics in the past 12 months. As researchers usually work in only a limited number of fields in mathematics, one often does not hear a lot of news about advances in other branches of mathematics. A person who works in complex analysis might not be aware of some astounding advances made in probability theory, for example. Since I am curious about other fields as well, even though I do not spend a lot of time reading about them, I wanted to hear about some major findings in distinct fields of mathematics. I know that the question posed by me does not allow a unique answer since it is asked in broad way. However, there are probably many interesting advances in all sorts of branches of mathematics that have been made this year, which I might have missed on and I would like to hear about them. Furthermore, I think it is sensible to get a nice overview about what has been achieved this year without digging through thousands of different journal articles.
Personally, I was kind of fascinated by the solution to the Boolean Pythagorean triples problem which was finally solved in May. The problem asked whether or not the set of natural numbers $\mathbb{N}$ can "be divided into two parts, such that no part contains a triple $(a, b, c)$ with $a^2+b^2=c^2$". Heule, Kullmann and Marek managed to prove (with the help of a lot of computing power) that this is in fact not possible. References: Heule, Marijn J. H.; Kullmann, Oliver; Marek, Victor W. (2016-05-03). "Solving and Verifying the Boolean Pythagorean Triples problem via Cube-and-Conquer".
Show that $\sqrt{4 + 2\sqrt{3}} - \sqrt{3}$ is rational. Show that $\sqrt{4 + 2\sqrt{3}} - \sqrt{3}$ is rational. I've tried to attempt algebra on this problem. I noticed that there is some kind of nesting effect when trying to solve this. Please help me to understand how to attempt to denest this number. Any help would be greatly appreciated.
Let $x=\sqrt{4+2\sqrt{3}}-\sqrt{3}$. Then: $$(x+\sqrt3)^2=4+2\sqrt{3}\implies x^2-1=2(1-x)\sqrt3\implies(x-1)(x+1+2\sqrt3)=0$$ So, certainly $x=1$ or $x=-1-2\sqrt3$. But a moment's thought (e.g. considering that $x>0$) convinces us that the first of these must be the case - i.e., $x=1$ (a known rational number).
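As an aside, an observation added here rather than part of the original argument: the radicand denests directly, since $$4+2\sqrt3=3+2\sqrt3+1=\left(\sqrt3+1\right)^2,$$ so $\sqrt{4+2\sqrt3}=\sqrt3+1$ and the expression equals $\sqrt3+1-\sqrt3=1$ immediately.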
proving $ \binom{n}{0}-\binom{n}{1}+\binom{n}{2}+\cdots \cdots +(-1)^{n-1}\binom{n}{m-1}=(-1)^{m-1}\binom{n-1}{m-1}$ proving $\displaystyle \binom{n}{0}-\binom{n}{1}+\binom{n}{2}+\cdots \cdots +(-1)^{\color{red}{m}-1}\binom{n}{m-1}=(-1)^{m-1}\binom{n-1}{m-1}.$ $\displaystyle \Rightarrow 1-n+\frac{n(n-1)}{2}+\cdots \cdots (-1)^{n-1}\frac{n.(n-1)\cdot (n-2)\cdots(n-m+2)}{(m-1)!}$ Added writting LHS as $\displaystyle \binom{n}{0}-\left(\binom{n-1}{0}+\binom{n-1}{1}\right)+\left(\binom{n-1}{1}+\binom{n-1}{2}\right)+\cdots \cdots +(-1)^{n-1}\left(\binom{n-1}{m-2}+\binom{n-1}{m-1}\right)=(-1)^{m-1}\binom{n-1}{m-1}.$ $\displaystyle \binom{n}{0}-\binom{n-1}{0}+\binom{n-1}{1}-\cdots +(-1)^{n-1}\binom{n-1}{m-2}-\left(\binom{n-1}{1}-\binom{n-1}{2}+\cdots +(-1)^n\binom{m-1}{m-1}\right)$ wan,t be able to solve after that, help me to solve it
Extended HINT: The result is incorrect as originally stated; it should read $$\binom{n}0-\binom{n}1+\binom{n}2-\ldots+(-1)^{\color{crimson}m-1}\binom{n}{m-1}=(-1)^{m-1}\binom{n-1}{m-1}\;,$$ or, more compactly, $$\sum_{k=0}^{m-1}(-1)^k\binom{n}k=(-1)^{m-1}\binom{n-1}{m-1}\;.\tag{1}$$ Fix $n\in\Bbb N$. For $m=1$ the desired result is $$\binom{n}0=(-1)^0\binom{n-1}0\;,$$ which is indeed true, since both sides are equal to $1$. Suppose as an induction hypothesis that $(1)$ holds for some $m$; for the induction step we want to prove that $$\sum_{k=0}^m(-1)^k\binom{n}k=(-1)^m\binom{n-1}m\;.\tag{2}$$ Using the induction hypothesis we can rewrite the lefthand side of $(2)$: $$\sum_{k=0}^m(-1)^k\binom{n}k=(-1)^m\binom{n}m+\sum_{k=0}^{m-1}(-1)^k\binom{n}k=(-1)^m\binom{n}m+(-1)^{m-1}\binom{n-1}{m-1}\;,$$ so to complete the induction step we need only show that $$(-1)^m\binom{n}m+(-1)^{m-1}\binom{n-1}{m-1}=(-1)^m\binom{n-1}m\;.$$ This is easily done using one of the most basic identities involving binomial coefficients. Added: After some thought I realize that $(1)$ can be proved by direct calculation: $$\begin{align*} \sum_{k=0}^{m-1}(-1)^k\binom{n}k&=\sum_{k=0}^{m-1}(-1)^k\left(\binom{n-1}k+\binom{n-1}{k-1}\right)\\ &=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k+\sum_{k=0}^{m-1}(-1)^k\binom{n-1}{k-1}\\ &=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k+\sum_{k=1}^{m-1}(-1)^k\binom{n-1}{k-1}\\ &=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k+\sum_{k=0}^{m-2}(-1)^{k+1}\binom{n-1}k\\ &=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k-\sum_{k=0}^{m-2}(-1)^k\binom{n-1}k\\ &=(-1)^{m-1}\binom{n-1}{m-1}+\sum_{k=0}^{m-2}(-1)^k\binom{n-1}k-\sum_{k=0}^{m-2}(-1)^k\binom{n-1}k\\ &=(-1)^{m-1}\binom{n-1}{m-1}\;. \end{align*}$$
How to calculate Limit of $(1-\sin x)^{(\tan \frac{x}{2} -1)}$ when $x\to \frac{\pi}{2}$. How to calculate Limit of $(1-\sin x)^{(\tan \frac{x}{2} -1)}$ when $x\to \frac{\pi}{2}$. We can write our limit as $\lim_{x\to \frac{\pi}{2}}e^{(\tan \frac{x}{2} -1) \log(1-\sin x)}~ $ but I can not use L'Hopital rule. Is there another way?
Making the substitution $ x = \dfrac{\pi}{2} + y$ the required limit is $\lim_{y \to 0} \exp h(y)$ where $h(y)= \ln(1-\cos y) \left( \tan(\pi/4 + y/2) - 1 \right) = \ln(1-\cos y) \times \dfrac{2\tan(y/2)}{1-\tan(y/2)}$. Since $1-\cos y = 2 \sin^2(y/2)$ we have $$h(y) = (\sqrt{2}\sin(y/2)) \ln(2\sin^2(y/2)) \times \dfrac{2}{\sqrt{2}} \times \dfrac{\dfrac{\tan (y/2)}{y/2}}{\dfrac{\sin(y/2)}{y/2}} \times \dfrac{1}{1-\tan(y/2)} $$. Since $\lim_{x\to0}x\ln(x^2) = 2\lim_{x \to 0} x \ln |x| = 0$ and so we have $\lim_{y\to 0}(\sqrt{2}\sin(y/2)) \ln(2\sin^2(y/2)) = 0$ and $\lim_{y\to 0}h(y) = 0 \times \dfrac{2}{\sqrt{2}} \times \dfrac{1}{1} \times 1 = 0.$ So the required limit is 1.
Calculate limit with floor function or prove it doesnt exist Please help calculating the following limit: $$ \lim_{x\to\frac{\pi}{2}} \frac{\lfloor{}\sin(x)\rfloor}{\lfloor x\rfloor} $$ I used $$ t = x - \frac{\pi}{2} $$ and got: $$ \lim_{t\to 0} \frac{\lfloor{}\sin(t+\frac{\pi}{2})\rfloor}{\lfloor t+\frac{\pi}{2}\rfloor} $$ for t close to 0 we get from arithmetic of limits that the denominator is 1 but not sure how to go from here.. Thanks
For $x\in\left(1,\pi\right)$ with $x\neq\pi/2$ we have $0<\sin x<1$, so the numerator $\lfloor\sin x\rfloor$ is zero, while the denominator $\lfloor x\rfloor\ge 1$ is nonzero. Hence the function is identically zero on a punctured neighbourhood of $\pi/2$, and the limit is $0$.
problem in the demo of an equinumerosity I'm trying to understand the proof of a theorem about equinumerosity of 2 sets, but I am facing a problem. Here is a summary of my issue: Let $h$ be a bijective function from $E$ to $h(E)$ with the particularity that $h(E)\subset E$, $h^0(E)=E$, Let $h^{n+1}(E) = h(h^n(E))$ be the sequence of sets given by the image of $E$ by $h$ with the particularity that $h^{n+1}(E)\subset h^n(E)$. Finally, let $(A_n)$ be a sequence defined by $A_0=E \setminus h(E)$, $A_{n+1}=h(A_n)=h^n(E)\setminus h^{n+1}(E)$. The proof continues, but I don't understand why $h(A_n)=h^n(E)\setminus h^{n+1}(E)$. To reduce the problem, and because a recurrence is possible, let's consider only the first iteration of the equality: $$ h(E\setminus h(E))=h(E)\setminus h^2(E) $$ Alone, I would not have produced this equality. Why is it true? What are the requirements concerning h? thank you, lowley
If we think about the set $E \setminus h(E)$, this is the set $\{ x : x \in E \mbox{ and } x \not\in h(E) \}$. So if we apply $h$ to this set, we see that for such an $x$, $h(x) \in h(E)$ (obviously), but we cannot have $h(x) \in h^{2}(E)$ (since $x$ was not in $h(E)$). This uses the fact that $h$ is one-to-one; by definition, an element $y$ is in $h^{2}(E)$ if there exists some $x$ in $h(E)$ with $h(x) = y$; since $h$ is one-to-one, you are guaranteed that there is not an $x^{\prime} \neq x$ (and so possibly not in $h(E)$) that also has $h(x^{\prime}) = y \in h^{2}(E)$. This shows that $h(E \setminus h(E)) \subset h(E) \setminus h^{2}(E)$. Now you just need to show the reverse inclusion to finish the proof.
Should I pick entirely different numbers on each lottery ticket? I was discussing optimal lottery ticket purchasing strategies with a friend, and an interesting question came up. Suppose you doing the following: * *You purchase multiple tickets for one draw *You select the option to pick the numbers at random for all tickets It occurred to me that if the numbers are selected at random, then it would be possible - indeed quite likely if you buy several tickets - that some number(s) may appear on multiple tickets. A quick Google confirms what I expected - that the random number selection process for my local lottery is independent for each ticket even when you buy them together and for the same draw, so this would be entirely possible. This had me wondering, does this factor decrease your odds at all, and if it does, could one improve upon the process of randomly selecting each ticket independently to improve things? Perhaps this is just a more specific version of the general question - should you avoid repeatedly selecting the same number across multiple tickets on the same draw? The parameters of the draw are: * *Numbers are 1-59 *Six numbers are drawn *Prizes start at three numbers, increasing in size up to all six Having not studied maths in any depth since my college days, I'm unsure how to frame the problem mathematically, so I'm interested both from a mathematical point of view and practically.
For the jackpot, you only care whether the set of six numbers is different between your tickets. Having tickets 1,2,3,4,5,6 and 1,2,3,4,5,7 gives you the jackpot on two different winning combinations and gives you twice the chance of winning you would have from buying only one ticket. The downside of these two tickets is that many combinations of three numbers are repeated. You are not increasing your chances of the smaller prizes as much by buying these two tickets as you would by buying two tickets that do not share numbers. Even for this, you only care if at least three numbers match between two tickets, so having 1,2,3,4,5,6 and 1,2,7,8,9,10 is as good as having two tickets that disagree completely. Presumably if you get a set of three on multiple tickets you get paid multiple times. That means the expected value of two tickets with overlap is the same as two without overlap. You will win less often, but some of the time when you do win, you win more money. The arguments for picking unpopular numbers only matter if there is a jackpot that is divided among the winners. In that case you want unpopular numbers so you share less. If the payout even for six of six is a fixed amount, you don't care about the popularity of the numbers, but the operators do.
Transformation matrices affine vs. euclidean is there a difference between an affine transformation an the euclidean transformation mentioned in this tutorial? Is an affine transformation a type of euclidean transformation?
No; the "Euclidean warping" is a special type of affine transformation. Affine transformations are very general. They are made up of a nonsingular linear transformation plus a translation. The author explicitly describes Euclidean warping as encompassing scale, rotation and translation only. In other words, he wants to carry out the geometry of Euclidean similarity. Examples of affine transformations that are not Euclidean similarity transformations (as described in the paper): * *Reflections *Shear mappings
Prove $4-\sqrt{2}-\sqrt[3]{3}-\sqrt[5]{5} \gt 0$ Is it possible to know if $4-\sqrt{2}-\sqrt[3]{3}-\sqrt[5]{5} \gt 0$ without using decimal numbers?
It is not hard to verify the following inequalities (just raise both sides to the appropriate power, which reduces everything to simple inequalities between natural numbers): \begin{align} \frac{4}{3} &< \sqrt{2} < \frac{5}{3}\\ \frac{4}{3} &< \sqrt[3]{3} < \frac{5}{3}\\ \frac{4}{3} &< \sqrt[5]{5} < \frac{5}{3}\\ \end{align} Summing these up gives $$ 4 < \sqrt{2}+\sqrt[3]{3}+\sqrt[5]{5} < 5, $$ so in fact $4-\sqrt{2}-\sqrt[3]{3}-\sqrt[5]{5} < 0$: the quantity is negative, and no decimal approximations are needed to see it.
A question about straight lines tangent to a sphere in 3-dimensional Euclidean space Let E(3) be 3-dimensional Euclidean space with its standard metric and let S(2) be the 2-dimensional boundary of a fixed ball of E(3) whose radius is positive. Does there exist a set L of straight lines which satisfies the following conditions? (1) Each straight line in L is a subset of E(3) that is tangent to S(2). (2) Each point of S(2) belongs to exactly one straight line in L. (3) No pair of distinct straight lines in L is co-planar.
I believe such a set of lines can be constructed by transfinite recursion, using an argument similar to the one in this answer on MathOverflow. Let $\alpha$ be the smallest ordinal of cardinality $\lvert S \rvert$. Fix a bijection between $\alpha$ and $S$, so that each ordinal $\beta \lt \alpha$ corresponds to a point $x_\beta \in S$. Let $\gamma$ be an ordinal $\lt \alpha$ and suppose that we already have a set of lines $\{ l_\beta \mid \beta \lt \gamma \}$ such that * *$l_\beta$ is tangent to $S$ at $x_\beta$ for all $\beta$; *$l_\beta$ and $l_{\beta'}$ are not coplanar whenever $\beta \neq \beta'$. The set of lines that are tangent to $S$ at $x_\gamma$ has cardinality $2^{\aleph_0}$. The lines that are coplanar to at least one of the $l_\beta$ form a subset of cardinality at most $\lvert \gamma \rvert$; indeed, for each $\beta \lt \gamma$, there is exactly one line that is both tangent to $S$ at $x_\gamma$ and coplanar to $l_\beta$. Since $\lvert \gamma \rvert \lt \lvert S \rvert = 2^{\aleph_0}$, it follows that it is possible to choose a line $l_\gamma$ tangent to $S$ at $x_\gamma$ without creating a pair of coplanar lines.
$f(P)=f(Q)$ implies that $P=Q$ Let $(X,\mathbb{H})$ and $(Y,\mathbb{F})$ be two measurable spaces. Assume that $P$ and $Q$ be probability measures on $(X,\mathbb{H})$ and that $f:X\to Y$ is a $\mathbb{H}/\mathbb{F}$-measurable mapping. What are the weakest (alternatively some weak) conditions on $f$ for which $$ f(P)=f(Q) \implies P=Q, $$ holds? Here $f(P)$ and $f(Q)$ are push-forward measures. If we work under the assumption that $X$ and $Y$ are metric spaces and $P,Q$ are Borel measures then it is sufficient to say that $f$ is a homeomorphism, but what does this translate to when we have no topology on $X$ and $Y$ only $\sigma$-algebras.
What you are struggling with is the measure determination problem and I do not think that it is fruitful to try finding a function $f$ such that $f(P)=f(Q)\Rightarrow P=Q$. A family $\mathscr{M}$ of bounded $\mathbb{R}$-measurable functions is said to be measure determining whenever two finite measures $P$ and $Q$ on $(X,\mathbb{H})$ satisfy $$ \int \varphi dP=\int \varphi dQ\quad (\forall \varphi \in \mathscr{M}), $$ then $P=Q$. Let me show some examples. * *Let $\mathscr{P}$ be a $\pi$-class (i.e., a class that is closed under finite intersection) that generates $\mathbb{H}$. Then, $\{1_{A}:A\in \mathscr{P}\}$ is measure determining (Billingsley, Probability and Measure, 3rd ed., p.163, Theorem 10.3 or Kallenberg, Foundations of Modern Probability, 2nd ed., p.9, Lemma 1.17), where $1_A$ is the indicator function of the set $A$. If $X$ is a topological space and $\mathbb{H}$ is generated by the topology, then trivially the family of open (or closed) sets is a $\pi$-class generating $\mathbb{H}$ (the result you found is reduced to this case). If $X$ is a second countable topological space, a countable base $\mathscr{B}$ is a $\pi$-class that generates the Borel-$\sigma$-algebra. If $X=\mathbb{R}$, then $\{(-\infty ,x]:x\in \mathbb{R}\} $ is a $\pi$-class that generates the Borel-$\sigma$-algebra. *If $X$ is a metric space, the family of bounded, uniformly continuous $\mathbb{R}$-valued functions is measure determining. This can be proved by using the the dominated convergence theorem.
what are the different applications of group theory in CS? What are some applications of abstract algebra in computer science an undergraduate could begin exploring after a first course? Gallian's text goes into Hamming distance, coding theory, etc., I vaguely recall seeing discussions of abstract algebra in theory of computation / automata theory but what else? I'm not familiar with any applications past this. Bonus: What are some textbooks / resources that one could learn about said applications?
Cryptography (including elliptic curve cryptography) comes to mind. I suppose one could also argue that for performance reasons, a lot of problems get simplified by some sort of group action on the data set under consideration to reduce the size of the problem set to something more manageable; so it certainly doesn't hurt to have a good grounding in the basics.
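To make the cryptography point concrete, here is a toy sketch of the classical Diffie-Hellman exchange in the group $(\mathbb Z/p\mathbb Z)^\times$ (the prime, base and exponents are illustrative choices only and far too small to be secure):

```python
p, g = 2147483647, 7                      # 2^31 - 1 is prime; g is just a convenient base
a_secret, b_secret = 1234567, 7654321     # private exponents

A = pow(g, a_secret, p)                   # Alice publishes g^a mod p
B = pow(g, b_secret, p)                   # Bob publishes g^b mod p

shared_alice = pow(B, a_secret, p)        # (g^b)^a
shared_bob = pow(A, b_secret, p)          # (g^a)^b
print(shared_alice == shared_bob)         # True: both land on the same group element g^(ab)
```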
Unexpected Proofs Using Generating Functions I recently came across this beautiful proof by Erdős that uses generating functions in a unique way: Let $S = \{a_1, \cdots, a_n \}$ be a finite set of positive integers such that no two subsets of $S$ have the same sum. Prove that $$\sum_{i=1}^n \frac{1}{a_i} < 2.$$ Question: Are there any more examples of surprising or unexpected proofs using generating functions that this community is aware of? (Please refrain from posting answers that are widely known such as change making, closed form for Fibonacci, etc.) The proof of the above theorem: Proof: Suppose $0< x < 1$. We have $$\prod_{i=1}^n (1 + x^{a_i}) < \sum_{i = 0}^{\infty} x^i = \frac{1}{1-x}.$$ Thus, $$\begin{align*} \sum_{i=1}^n \log(1+x^{a_i}) &< - \log(1-x) \\ \sum_{i=1}^n \int_0^1 \frac{\log(1+x^{a_i})}{x} \ dx &< - \int_0^1 \frac{\log(1-x)}x \ dx . \end{align*}$$ Putting $x^{a_i} = y$, we obtain $$\begin{align*} \sum_{i=1}^{n} \frac{1}{a_i} \int_0^1 \frac{\log(1+y)}{y} \ dy < - \int_0^1 \frac{\log(1-x)}{x} \ dx \end{align*}$$ i.e., $$\sum_{i=1}^n \frac{1}{a_i} \left( \frac{\pi^2}{12} \right) < \frac{\pi^2}6.$$ Thus, $\sum_{i=1}^n \frac{1}{a_i} < 2$ and the theorem is proved.
I remember this problem from Stein & Shakarchi's Complex Analysis. Let $\mathbb{N}$ denote the set of positive integers. A subset $S\subset \mathbb{N}$ is said to be in arithmetic progression if $$ S=\{a, a+d, a+2d, a+3d, \ldots\} $$ where $a, d\in \mathbb{N}$. Here $d$ is called the step of $S$. Show that $\mathbb{N}$ cannot be partitioned into a finite number of subsets that are in arithmetic progression with distinct steps (except for the trivial case $a=d=1$). Hint: Write $\sum_{n\in\mathbb{N}} z^n$ as a sum of terms of the type $\frac{z^a}{1-z^d}$. In the hint of this problem, the terms $\frac{z^a}{1-z^d}$ represents the ordinary generating function for the characteristic function of the arithmetic progression $S=\{a, a+d, a+2d, \ldots\}$. Another good example is this problem, and the answer by @barto
Formula for adjugate of matrix: $\operatorname{adj}(s\mathbf{I}-\mathbf{A}) = \mathrm{\Delta} p(s,\mathbf{A})$ The following (roughly) is written in the Adjugate Matrix Wikipedia page: If $$ p(t)~{\stackrel {\text{def}}{=}}~\det(t\mathbf {I} -\mathbf {A} )=\sum _{i=0}^{n}p_{i}t^{i}$$ is the characteristic polynomial of the real matrix $n$-by-$n$ matrix $\mathbf A$, then $$ \operatorname{adj}(s\mathbf{I}-\mathbf{A}) = \mathrm{\Delta} p(s,\mathbf{A})$$ where $$ \mathrm {\Delta } p(s,t)~=\sum _{j=0}^{n-1}\sum _{i=0}^{n-j-1}p_{i+j+1}s^{i}t^{j} $$ is the first divided difference of $p$. Can anyone prove this, or provide a reference, please. I'm most interested in the case where $s=1$, since this then gives me a nice polynomial expansion of $\operatorname{adj}(\mathbf{I}-\mathbf{A})$ in terms of powers of $\mathbf{A}$, which is well-known already, I would guess. A proof or a reference for this special case would be welcome, too.
The statement is almost trivial if you employ the method of universal identities and assume that $A$ is diagonal. A more concrete proof is still not hard if you know the definition of an adjugate operator (as opposed to an adjugate matrix). Without going down the full-fledged multilinear algebra path, an adjugate operator can be defined --- in a coordinate-free manner --- as follows. Let $f$ be the characteristic polynomial of a linear endomorphism $T$ on an $n$-dimensional vector space. Then $\operatorname{adj}(T)=g(T)$, where $g$ is the polynomial defined by $g(t)=(-1)^{n+1}\frac{f(t)-f(0)}{t-0}$. When a basis is chosen, so that the matrix of $T$ is $B$, the matrix $g(B)$ will be equal to the adjugate matrix $\operatorname{adj}(B)$ as conventionally defined by cofactor calculations. In your case, the characteristic polynomial of $B=sI-A$ is $$ f(t)=\det(tI-(sI-A))=(-1)^np(s-t) =(-1)^n\sum_{k=0}^np_k(s-t)^k. $$ Therefore $\operatorname{adj}(B)=g(B)$, where $$ g(t)=(-1)^{n+1}\frac{f(t)-f(0)}{t-0}=-\sum_{k=1}^np_k\frac{(s-t)^k-s^k}{t}. $$ It follows that $$\begin{align*} g(s-t)&=-\sum_{k=1}^np_k\frac{t^k-s^k}{s-t} =\sum_{k=1}^np_k\sum_{j=0}^{k-1} s^{k-j-1}t^j\\ &=\sum_{j=0}^n\sum_{k=j+1}^n p_k s^{k-j-1}t^j =\sum_{j=0}^{n-1}\sum_{i=0}^{n-j-1} p_{i+j+1} s^it^j\\ &=\Delta p(s,t). \end{align*}$$ and $\operatorname{adj}(sI-A)=g(sI-A)=\Delta p(s,A)$.
Proof without induction of the inequalities related to product $\prod\limits_{k=1}^n \frac{2k-1}{2k}$ How do you prove the following without induction: 1)$\prod\limits_{k=1}^n\left(\frac{2k-1}{2k}\right)^{\frac{1}{n}}>\frac{1}{2}$ 2)$\prod\limits_{k=1}^n \frac{2k-1}{2k}<\frac{1}{\sqrt{2n+1}}$ 3)$\prod\limits_{k=1}^n2k-1<n^n$ I think AM-GM-HM inequality is the way, but am unable to proceed. Any ideas. Thanks beforehand.
1) For $k\ge2$, $$ \frac{2k-1}{2k}\gt\frac12 $$ Therefore, for $n\ge2$ $$ \begin{align} \left[\prod_{k=1}^n\frac{2k-1}{2k}\right]^{1/n} &=\left[\frac12\prod_{k=2}^n\frac{2k-1}{2k}\right]^{1/n}\\ &\gt\left[\frac12\prod_{k=2}^n\frac12\right]^{1/n}\\ &=\left[\prod_{k=1}^n\frac12\right]^{1/n}\\[6pt] &=\frac12 \end{align} $$ 2) Squaring and cross-multiplication show that for $k\ge1$ $$ \frac{2k-1}{2k}\lt\sqrt{\frac{k-\frac12}{k+\frac12}} $$ Therefore, $$ \begin{align} \prod_{k=1}^n\frac{2k-1}{2k} &\lt\prod_{k=1}^n\sqrt{\frac{k-\frac12}{k+\frac12}}\\ &=\sqrt{\frac{\frac12}{n+\frac12}}\\ &=\frac1{\sqrt{2n+1}} \end{align} $$ 3) The AM-GM says $$ \begin{align} \left(\prod_{k=1}^n(2k-1)\right)^{\large\frac1n} &\le\frac1n\sum_{k=1}^n(2k-1)\\[6pt] &=n \end{align} $$
Does there exist any continuous function whose partials doesn't exist? Does there exist a continuous function of $f : \mathbb R^2 \longrightarrow \mathbb R$ such that it is continuous whose both the partial derivatives don't exist. I think the function $f : \mathbb R^2 \longrightarrow \mathbb R$ defined by $f(x,y) = |x|(1 + y)$, where $(x,y) \in \mathbb R^2$ has the above property at $(0,0)$. But I can't prove that $f$ is continuous at $(0,0)$ by $\epsilon-\delta$ method. Please help me. Thank you in advance.
Just consider the sum of two one-variable Weierstrass functions, one in the $x$ variable, one in the $y$ variable. This is even better, it has continuity everywhere and no differentiability or partial derivatives anywhere.
If $A$ is nilpotent matrix then $tr(A)=0$ I want to prove if $A$ is nilpotent matrix then $tr(A)=0$. I have seen some answers on MSE about this question but none of them were clear enough to be understood for me. I appreciate if someone explains the proof in mathematical notation rather than a general explanation about it.
I will try to give some explanation of the proofs given earlier. First of all note that trace is cyclic which means $tr(ABC)=tr(CAB)=tr(BCA)$, which gives us following equality $tr(P^{-1}AP)=tr(A)$. So trace is invariant under change of basis. Now as $A$ is nilpotent, there exist some positive integer $k$ such that $A^k=0$. Now because $A^k=0$, the kernel of $A^k$ is the whole space. Also note that if $v \in \ker(A^j)$ then $v\in \ker (A^{j+1})$, as $A^{j+1}v=AA^jv=0$. So we have the following relation $\{0\}=\ker A^0\subseteq\ker A^1\subseteq\ker A^2\subseteq\cdots \subseteq \ker A^k=V $. Choosing a basis of $\ker A^1$, then extending to a basis of the next space $\ker A^2$, and so on, eventually gives a basis of the whole space, because $\ker A^k=V$. By construction change of basis of $A$ to this new basis leads to a strictly upper triangular matrix, to see this note that for $v\in \ker A^j$, we have $A^jv=0 \rightarrow A^{j-1}Av=0 \rightarrow Av \in \ker A^{j-1}$. So for any $v \in \cup_{i=1}^j \ker A^i$, we have $Av \in \cup_{i=1}^{j-1} \ker A^i$. And so A is strictly upper triangular matrix in our constructed basis. So in this new basis we have trace equal 0 and because trace is invariant to change of basis, $tr(A)=0$.
Why does this process generate the factorial of the exponent? Consider the process of taking a series of numbers and constructing a new series consisting of the difference between consecutive terms, and repeating this until a constant is reached: $$2,8,18,32,50\\6,10,14,18\\4,4,4$$ When this process is applied to sequences of the form $f(n) = n^a$, the constant reached seems to always be $a!$: $$1,2,3\\1,1$$ $$1,4,9,16\\3,5,7\\2,2\\$$ $$1,8,27,64,125\\7,19,37,61\\12,18,24\\6,6$$ $$1,16,81,256,625,1296\\15,65,175,369,671\\50,110,194,302\\60,84,108\\24,24$$ Can it be proven?
Yes, it always yields the factorial. The construction of each new sub-series is the forward difference operator $\Delta f(n) = f(n+1) - f(n)$, a discrete analogue of taking the derivative of a power function. By the binomial theorem, $\Delta n^a = (n+1)^a - n^a = a\,n^{a-1} + (\text{lower-order terms})$, so each application of $\Delta$ lowers the degree of the polynomial by one and multiplies its leading coefficient by the current degree, just as $\frac{d}{dx}x^a=ax^{a-1}$ does. Repeatedly differencing until a constant is reached (the degree is $0$) means that the final constant is $a(a-1)(a-2)\dots(2)(1)$, or $a!$.
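To see the mechanism in action (as an illustration, not a proof), one can apply the difference operation repeatedly with NumPy:

```python
import numpy as np
from math import factorial

for a in range(1, 7):
    seq = np.arange(1, a + 3) ** a    # 1^a, 2^a, ..., just enough terms
    for _ in range(a):                # apply the difference operation a times
        seq = np.diff(seq)
    print(a, seq, factorial(a))       # the remaining constant equals a!
```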
Consider a function $f(x, y) = Ax^5 + Bx^4y + Cx^3y^2 + Dx^2y^3 + xy^4 − y^5$ Consider a function $f(x, y) = Ax^5 + Bx^4y + Cx^3y^2 + Dx^2y^3 + xy^4 − y^5$ where $A$, $B$, $C$, $D$ are unspecified real numbers. Determine the values of $A$, $B$, $C$, $D$ such that $f(x,y)$ satisfies $f_{xx}(x,y) + f_{yy}(x,y) = 0$. What I did so far: I took the second partial derivatives with respect to $x$ and $y$: $f_{xx}(x,y) = 20Ax^3+12Bx^2y+6Cxy^2+2Dy^3$ $f_{yy}(x,y) = 2Cx^3+6Dx^2y+12xy^2-20y^3$ How can I satisfy $f_{xx}(x,y) + f_{yy}(x,y) = 0$? None of the terms obviously cancel.
Add up and you get $[20A+2C]x^3 + [12B+6D]x^2y + [6C+12]xy^2 + [2D-20]y^3=0$. Since this must hold for all $x,y$, each coefficient must vanish: $2D-20=0 \Rightarrow D = 10$, $6C+12=0 \Rightarrow C = -2$, $12B+6D=0 \Rightarrow B = -5$, and $20A+2C=0 \Rightarrow A = \frac{1}{5}$.
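If you want to double-check the result symbolically, a short SymPy sketch (assuming SymPy is available) confirms that the Laplacian vanishes with these coefficients:

```python
import sympy as sp

x, y = sp.symbols('x y')
A, B, C, D = sp.Rational(1, 5), -5, -2, 10

f = A*x**5 + B*x**4*y + C*x**3*y**2 + D*x**2*y**3 + x*y**4 - y**5
print(sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2)))   # prints 0
```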
Inverse of a factorial I'm trying to solve hard combinatorics problems that involve complicated factorials with large values. In a simple case such as $8Pr = 336$, find the value of $r$, it is easy to write this as $$\frac{8!}{(8-r)!} = 336.$$ Then $(8-r)! = \frac{8!}{336} = 120$ and by inspection, clearly $8-r = 5$ and $r = 3$. Now this is all well and good, and I know that factorials don't have a standard inverse function the way sin, cos, tan etc. do, but how would you solve an equation that involves very large values compared to the above problem, without the tedious guessing and checking of values? Edit: For e.g. if you wanted to calculate a problem like this (it's simple I know but a good starting problem): let's say 10 colored marbles are placed in a row, what is the minimum number of colors needed to guarantee at least $10000$ different patterns? WITHOUT GUESS AND CHECKING. Any method or explanation is appreciated!
The inverse function of $y = x!$ means getting $x$ in terms of $y$: successively divide $y$ by $2$, then by $3$, then by $4$, and so on; when the quotient reaches $1$, the last divisor used is $x$. For example, let $5040 = x!$; what is $x$? Dividing out, $5040= 7\times 6\times 5\times 4\times 3\times 2\times 1$, and $7$ is the largest factor in that factorial, so $x = 7$. In your problem, $8!/336 = (8 - r)!$; what is $r$? We have $8!/336 = 120$. Let $(8 - r) = x$, so $120 = x!$. Since $120 = 5\times 4\times 3\times 2\times 1$, the largest factor of that factorial is $x = 5 = 8 - r$, hence $r = 3$.
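The successive-division idea translates directly into code. A small Python sketch (the function name is my own); the last line also addresses the marble question in the edit, where, on the usual reading, the smallest number of colors $c$ must satisfy $c^{10} \ge 10000$:

```python
import math

def inverse_factorial(y):
    """Return x with x! == y, or None if y is not a factorial."""
    d = 2
    while y > 1:
        if y % d:            # y is not divisible by the next integer
            return None
        y //= d
        d += 1
    return d - 1

print(inverse_factorial(120))     # 5
print(inverse_factorial(5040))    # 7
print(inverse_factorial(336))     # None: 336 is not a factorial

# The marble question from the edit: smallest c with c**10 >= 10000
print(math.ceil(10000 ** (1 / 10)))   # 3
```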
If $f$ is homogeneous of degree $n$, set $p = xt$ and $q = yt$ and define $h(x,y,t) = f(p,q) = t^nf(x,y)$ A function $f : \mathbb R^2 \rightarrow \mathbb R$ is called homogeneous of degree $n$ if it satisfies $f(tx,ty) = t^nf(x,y)$. If $f$ is homogeneous of degree $n$, set $p = xt$ and $q = yt$ and define $h(x,y,t) = f(p,q) = t^nf(x,y)$. Apply the chain rule to $h(x,y,t)$ to show that $x\frac {\partial {f}}{\partial x} + y\frac {\partial {f}}{\partial y} = nf(x,y)$. I don't know how to start; usually with homogeneous functions I substitute $x$ and $y$ with $tx$ and $ty$, which gives me the form $f(tx,ty) = t^nf(x,y)$ with the degree. I'm not quite sure how to tackle this question.
I think I got it, let me know if I'm right. We are given $h(x,y,t) = f(p,q) = t^nf(x,y)$ with $p = xt$ and $q = yt$. Differentiate both sides with respect to $t$, using the chain rule on the left-hand side and the power rule on the right-hand side: $$x\frac {\partial {f}}{\partial p} + y\frac {\partial {f}}{\partial q} = nt^{n-1}f(x,y)$$ Now set $t = 1$, so that $p = x$ and $q = y$: $$x\frac {\partial {f}}{\partial x} + y\frac {\partial {f}}{\partial y} = nf(x,y)$$
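The final identity is Euler's theorem for homogeneous functions. Here is a quick symbolic check on one concrete homogeneous function of degree $3$ (the example function is my own choice, assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + 2*x*y**2 - 5*y**3              # homogeneous of degree n = 3

lhs = x*sp.diff(f, x) + y*sp.diff(f, y)
print(sp.simplify(lhs - 3*f))             # prints 0, i.e. lhs = 3*f
```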
Generating multivariate normal samples - why Cholesky? Hello everyone and happy new year! May all your hopes and aspirations come true and the forces of evil be confused and disoriented on the way to your house. With that out of the way... I am trying to write a computer code that gets a vector $\mu \in R^n $ and matrix $\Sigma \in \mathbb R^{n \times n}$ and generates random samples from the multivariate normal distribution with mean $\mu$ and covariance $\Sigma$. The problem: I am only allowed to use the program to sample from the single variable normal distribution with mean $0$ and variance $1$: $N(0, 1)$. The proposed solution: Define a vector of zeros (initially) $v \in \mathbb R^n$, now for all $i$ from $1$ to $n$, draw from a single variable normal dist: $v_i \overset{}{\sim} N(0, 1)$. Now do a Cholesky decomposition on $\Sigma$: $\Sigma = LL^T$. Now finally the random vector we want that is distributed from the multivariate gaussian is $Lv + \mu$. My question is why? I don't understand the intuition, if it was a single dimensional distribution $N(\mu, \sigma^2)$ then I understand why $\sigma ^2 v + \mu$ is a good idea, so why cholesky? Wouldn't we want $\Sigma v + \mu$?
If all the variables in the multivariate Gaussian were independent, we would simply use the formula $X_i =\sigma_i \nu_i+\mu_i$. Since they are correlated, we have (for example, in the bivariate case) $X_1 = \sigma_1\nu_1+\mu_1$ and $X_2 = \sigma_2[\rho_{12}\nu_1+\sqrt{1-\rho_{12}^2}\nu_2]+\mu_2$, and this construction extends to $n$ dimensions. Note: $$\Sigma = \begin{bmatrix}\sigma_1^2 &\rho_{12} \sigma_1\sigma_2 &\rho_{13} \sigma_1\sigma_3 & \dots \\ \rho_{12} \sigma_1\sigma_2 &\sigma_2^2 &\rho_{23} \sigma_2\sigma_3 & \dots \\ \vdots & \vdots & & \ddots \end{bmatrix}$$ The reason the Cholesky factor is the right object is that if $\nu$ has independent $N(0,1)$ components, then $X=L\nu+\mu$ has covariance $\operatorname{Cov}(X)=L\operatorname{Cov}(\nu)L^T=LL^T=\Sigma$, which is exactly what we want: $L$ plays the role of a "square root" of $\Sigma$, just as $\sigma$ (rather than $\sigma^2$) does in one dimension. By decomposing $\Sigma=LL^T$ through Cholesky, we get our $X = L\nu+\mu$ without manual calculations, which are otherwise quite tedious in higher dimensions.
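A minimal NumPy sketch of the whole procedure (all names are my own), using only standard-normal draws and then verifying that the sample mean and covariance come out close to $\mu$ and $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 0.5]])   # symmetric positive definite

L = np.linalg.cholesky(Sigma)         # Sigma = L @ L.T

nu = rng.standard_normal((100_000, 3))   # only N(0,1) draws are used
X = nu @ L.T + mu                        # each row is L @ nu_i + mu

print(np.allclose(X.mean(axis=0), mu, atol=0.02))               # True up to Monte Carlo error
print(np.allclose(np.cov(X, rowvar=False), Sigma, atol=0.05))   # True up to Monte Carlo error
```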
Evaluate the triple integral problem Evaluate $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x^2-2xy)e^{-Q}dxdydz$ , where $Q=3x^2+2y^2+z^2+2xy$.
\begin{align} I:&=\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz\left(x^2-2xy\right)\exp\left(-3x^2 - 2y^2 - z^2 - 2xy\right)\\ &=\sqrt{\pi}\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}dy\left(x^2-2xy\right)\exp\left(-3x^2-2y^2-2xy\right) \tag1\\ \end{align} We split the integral in $(1)$ into two parts, $I=I_1-I_2$. The first part is \begin{align} I_1&=\sqrt{\pi}\int_{-\infty}^{\infty}dx\,x^2\exp\left(-3x^2\right)\int_{-\infty}^{\infty}dy\exp\left(-2y^2-2xy\right) \tag2 \end{align} We complete the square: \begin{align} -2y^2-2xy&=-2\left(y^2+xy\right)\\ &=-2\left(\left(y+\frac{x}{2}\right)^2-\frac{x^2}{4}\right)\\ &=-2\left(y+\frac{x}{2}\right)^2+\frac{x^2}{2} \end{align} Thus, $(2)$ becomes \begin{align} I_1&=\sqrt{\pi}\int_{-\infty}^{\infty}dx\,x^2\exp\left(-\frac{5}{2}x^2\right)\int_{-\infty}^{\infty}dy\exp\left(-2\left(y+\frac{x}{2}\right)^2\right)\\ &=\frac{\pi}{\sqrt{2}}\int_{-\infty}^\infty dx\,x^2\exp\left(-\frac{5}{2}x^2\right)\\ &=\frac{\pi^{\frac{3}{2}}}{5\sqrt{5}} \tag3 \end{align} Then we look at the second integral; completing the square in the same way and substituting $u=y+\frac{x}{2}$ gives \begin{align} I_2&=2\sqrt{\pi}\int_{-\infty}^\infty dx\, x\exp\left(-3x^2\right)\int_{-\infty}^\infty dy\,y\exp\left(-2y^2-2xy\right)\\ &=2\sqrt{\pi}\int_{-\infty}^\infty dx\, x\exp\left(-\frac{5}{2}x^2\right)\int_{-\infty}^\infty dy\,y\exp\left(-2\left(y+\frac{x}{2}\right)^2\right)\\ &=2\sqrt{\pi}\int_{-\infty}^\infty dx\, x\exp\left(-\frac{5}{2}x^2\right)\int_{-\infty}^\infty du\left(u-\frac{x}{2}\right)\exp\left(-2u^2\right) \tag4 \end{align} Splitting $(4)$ again as $I_2=I_{2,1}-I_{2,2}$, we have \begin{align} I_{2,1}&=2\sqrt{\pi}\int_{-\infty}^\infty dx\, x\exp\left(-\frac{5}{2}x^2\right)\int_{-\infty}^\infty du\,u\exp\left(-2u^2\right)\\ &= 0 \end{align} since the inner integrand is odd, and \begin{align} I_{2,2}&=\sqrt{\pi}\int_{-\infty}^\infty dx\, x^2\exp\left(-\frac{5}{2}x^2\right)\int_{-\infty}^\infty du\exp\left(-2u^2\right)\\ &=\frac{\pi}{\sqrt{2}}\int_{-\infty}^\infty dx\, x^2\exp\left(-\frac{5}{2}x^2\right)\\ &=\frac{\pi^{\frac{3}{2}}}{5\sqrt{5}} \tag5 \end{align} Hence $I_2=-I_{2,2}$ and $I=I_1+I_{2,2}$; adding up $(3)$ and $(5)$, we have \begin{align} \int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz\left(x^2-2xy\right)\exp\left(-3x^2 - 2y^2 - z^2 - 2xy\right)=\frac{2\pi^{\frac{3}{2}}}{5\sqrt{5}} \end{align}
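A crude numerical sanity check of the final value, truncating the integral to a finite box (purely illustrative):

```python
import numpy as np

t = np.linspace(-6, 6, 121)
dt = t[1] - t[0]
x, y, z = np.meshgrid(t, t, t, indexing='ij')

Q = 3*x**2 + 2*y**2 + z**2 + 2*x*y
approx = ((x**2 - 2*x*y) * np.exp(-Q)).sum() * dt**3

exact = 2 * np.pi**1.5 / (5 * np.sqrt(5))
print(approx, exact)   # both are approximately 0.996
```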
From where do we get $\varphi(w) = w + \frac 1 w$ for $w^2$ and $z^2 - 2$? In Iteration of Quadratic Polynomials, Julia Sets, we must find some (invertible?) $\varphi$ s.t. $g = \varphi^{-1}(f(\varphi(z)))$ in order to study $\{g^{\circ n}(z)\}_{n=1}^{\infty}$ the iterates of $g$ assuming we have studied $\{f^{\circ n}(z)\}_{n=1}^{\infty}$ the iterates of $f$ Case 1. For $f(z) = z^2 + c$ and $g(z) = az^2+bz+d$ we can study $\{g^{\circ n}(z)\}_{n=1}^{\infty}$ by studying $\{f^{\circ n}(z)\}_{n=1}^{\infty}$, the iterates of $f$ because $$g^{\circ n}(z) = \varphi_1^{-1}(f^{\circ n}(\varphi_1(z)))$$ where $$\varphi_1(z) = az + \frac b 2$$ for some appropriate domain and range Case 2. For $g(w) = w^2$ and $f(z) = z^2 - 2$ we can study $\{f^{\circ n}(z)\}_{n=1}^{\infty}$ by studying $\{g^{\circ n}(z)\}_{n=1}^{\infty}$ because $$f^{\circ n}(z) = \varphi^{-1}_2(g^{\circ n}(\varphi_2(z)))$$ where $$\varphi_2(w) = w + \frac 1 w$$ for some appropriate domain and range From where did $\varphi_2(w)$ come? It doesn't seem to be in the form $\varphi_1(z) = az + \frac b 2$, though I'm thinking there's some substitution to be done (hence the use of $w$ and not $z$)
I think you just have a mix up in the notations for $\varphi_2(w) = w + \frac{1}{w}$. Do not use the $w$ variable, stick to just $z$. So assume the conformal change of variables is $$\varphi_2(z) = z + \frac{1}{z}$$ Assume $g(z) = z^2$ and $f(z) = z^2 - 2$. Then you can show that $$\varphi_2 \circ g = f \circ \varphi_2$$ This is equivalent to $$f = \varphi_2 \circ g \circ \varphi^{-1}_2$$ Let's check $\varphi_2 \circ g = f \circ \varphi_2$. It is enough to do that and everything else follows from it. Thus we have to verify the identity $$ \varphi_2 \big( g(z)\big) = \varphi_2( z^2 ) = f\big(\varphi_2(z)\big) = \big(\varphi_2(z)\big)^2 - 2$$ Indeed $$ \varphi_2 \big( g(z)\big) = \varphi_2( z^2 ) = z^2 + \frac{1}{z^2} $$ $$f\big(\varphi_2(z)\big) = \big(\varphi_2(z)\big)^2 - 2 = \left( z + \frac{1}{z}\right)^2 - 2 = z^2 + 2 \, z \, \frac{1}{z} + \frac{1}{z^2} - 2 =$$ $$= z^2 + 2 + \frac{1}{z^2} - 2 = z^2 + \frac{1}{z^2}$$
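One can also check the semiconjugacy $\varphi_2 \circ g = f \circ \varphi_2$ numerically on a few random complex points (purely as a sanity check):

```python
import numpy as np

rng = np.random.default_rng(2)

phi = lambda z: z + 1/z
g = lambda z: z**2
f = lambda z: z**2 - 2

z = 2 + rng.standard_normal(5) + 1j * rng.standard_normal(5)   # random points away from 0
print(np.allclose(phi(g(z)), f(phi(z))))                       # True
```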
mathematical analysis problem about intermediate value theorem Suppose $f(x)\in C^2(-\infty ,\infty )$, $|f(x)|\le 1$, and $(f(0))^2+(f'(0))^2=4$. Prove that $\exists \xi $ such that $f(\xi )+f''(\xi )=0$. I think the function $(f(x))^2+(f'(x))^2$ may help but I don't know how to use this function.
Let $g(x)=f(x)^2 + f'(x)^2$. We know $g(0)=4$ and $g'(x)=2f'(x)(f(x)+ f''(x))$. We can prove the proposition by analyzing several cases.

First case: $g'(0)=0$. Then either $f'(0)=0$ or $f(0)+f''(0)=0$. But $f'(0)=0\implies |f(0)|=2$, which contradicts $|f(x)|\le 1$. Hence $f(0)+f''(0)=0$ and we may take $\xi=0$.

From now on, assume by contradiction that $f(x)+f''(x)\ne 0$ for all $x$. Since $f+f''$ is a continuous function, it must have only one sign over $(-\infty, \infty)$ (this, again, can be proved with the intermediate value theorem).

Second case: $g'(0)>0$. Since $|f(x)|\le 1$ for all $x$, there must be some $x>0$ s.t. $g'(x)<0$: otherwise $g$ would be nondecreasing on $[0,\infty)$, so $f'(x)^2\ge g(x)-f(x)^2\ge 4-1=3$ for all $x\ge 0$; then $f'$ never vanishes, hence keeps a constant sign, and $|f(x)-f(0)|\ge \sqrt{3}\,x\to\infty$, contradicting $|f|\le 1$. By the intermediate value theorem, there is some $y\in (0,x)$ s.t. $g'(y)=0$. Assume this is the "first $y>0$" s.t. $g'(y)=0$ (or, the minimum of such values, if you will). Then $g'>0$ on $[0,y)$, hence $g(y)>g(0)=4$. But since we assumed $f(y) + f''(y)\ne 0$, we must have $f'(y)=0$, hence $4 < g(y)=f(y)^2\le 1$, which can't be.

The case $g'(0)<0$ is analogous. By the same argument as in the previous case, there must be some $y<0$ s.t. $g'(y)=0$. Assume this is "the last" $y<0$ s.t. $g'(y)=0$ (or, the maximum of such values, if you will). Then $g'<0$ on $(y,0]$, hence $g(y)>g(0)=4$. But since we assumed $f(y) + f''(y)\ne 0$, we have $f'(y)=0$, hence $4 < g(y)=f(y)^2\le 1$, which can't be.
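For intuition only (this is not part of the proof), here is one concrete function satisfying the hypotheses; it falls into the first case above with $\xi = 0$:

```python
import numpy as np

f   = lambda x: np.sin(2*x)      # |f(x)| <= 1 for all x
fp  = lambda x: 2*np.cos(2*x)
fpp = lambda x: -4*np.sin(2*x)

print(f(0)**2 + fp(0)**2)        # 4.0, so the hypothesis f(0)^2 + f'(0)^2 = 4 holds
xi = 0.0
print(f(xi) + fpp(xi))           # 0.0, so f(xi) + f''(xi) = 0 at xi = 0
```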