Find all $n$ such that $a+b+c+d=0\implies a^7+b^7+c^7+d^7=0$ in $\mathbb{Z}/n\mathbb{Z}$ The problem is to find all $n>1$ such that for any $a,b,c,d\in\mathbb{Z}/n\mathbb{Z}$ the following implication holds: $$a+b+c+d=0\implies a^7+b^7+c^7+d^7=0.$$ One can note that when $n=7$ we have $(a+b+c+d)^7=a^7+b^7+c^7+d^7$, so the implication holds. If $n=2,3$ then $x^7=x$, so it's also true. For $n=4$ it's false; a counterexample is $(a,b,c,d)=(2,3,3,0)$. How can one find all other such $n$ and prove that those are the only ones?
The examples $(2,-1,-1,0)$ and $(3,-1,-1,-1)$ show that you need $2^7=2$ and $3^7=3$. This means that $n$ must divide both $2^7-2$ and $3^7-3$, and their greatest common divisor is $42$. It turns out that $k^7-k$ is divisible by $42$ for every natural $k$ (divisible by $7$ by Fermat's little theorem, by $2$ and $3$ by checking), so the $n$ that work are $1,2,3,6,7,14,21$ and $42$.
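Since the moduli involved are small, the claim is easy to sanity-check by brute force; a minimal sketch (the cutoff $50$ is just an illustrative choice):

```python
def implication_holds(n):
    # Check: a+b+c+d ≡ 0 (mod n) implies a^7+b^7+c^7+d^7 ≡ 0 (mod n).
    for a in range(n):
        for b in range(n):
            for c in range(n):
                d = (-a - b - c) % n
                if (pow(a, 7, n) + pow(b, 7, n) + pow(c, 7, n) + pow(d, 7, n)) % n:
                    return False
    return True

good = [n for n in range(2, 50) if implication_holds(n)]
print(good)  # the divisors of 42 that exceed 1
```

The search confirms that exactly the divisors of $42$ (other than $1$) pass.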
Cubic Equation with one real root Question: Suppose the equation $x^3-hx^2+kx-9=0$ has only one real root, which has the value $1$. Find the range of values of $k$. I really have no idea when it comes to cubic equations. Any advice on how to solve this?
I would start by factoring the polynomial using the known root, i.e. by doing the polynomial division $(x^3-hx^2+kx-9) : (x-1)$. This gives a remainder that must be zero (since you are factoring out a root), which yields one restriction on $k$ and $h$ ensuring that $1$ is a root. In addition, you are left with a quadratic polynomial that has to have complex roots. Requiring the discriminant in the $pq$-formula to be negative gives an additional restriction on $k$ and $h$.
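To illustrate the route (with my own algebra filled in, so treat the specific numbers as an assumption rather than part of the answer above): the division leaves remainder $k-h-8$, forcing $h=k-8$, and quotient $x^2+(9-k)x+9$, whose discriminant must be negative.

```python
# Synthetic division of x^3 - h x^2 + k x - 9 by (x - 1); with h = k - 8 the
# remainder vanishes and the quotient is x^2 + (9 - k)x + 9.  The quotient has
# no real roots exactly when its discriminant (9 - k)^2 - 36 is negative.
def one_real_root(k):
    h = k - 8
    # synthetic division at x = 1
    b2 = 1
    b1 = -h + b2
    b0 = k + b1
    remainder = -9 + b0
    assert remainder == 0          # 1 really is a root once h = k - 8
    disc = b1 * b1 - 4 * b2 * b0
    return disc < 0

print([k for k in range(0, 20) if one_real_root(k)])  # integer k with 3 < k < 15
```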
Is this equation solvable analytically? Are there any specific conditions on the coefficients of a polynomial of degree 6 that allow one to solve it, or at least find one of its roots? I do not need a numerical method; I want an analytical method to find the roots if they exist. The polynomial is: $$ \ x^{6} + 6 \Delta x^{5} + (4 \Delta ^{2} -36) x^{4} + (-24 \Delta ^{2} -96 \Delta ) x^{3}+(192+112 \Delta ^{2}-32 \Delta ^{4} ) x^{2} +(256 \Delta ^{2} +256 \Delta ) x-512 \Delta ^{2} -256=0\,. $$ $\Delta$ is an anisotropy parameter between 0 and 10.
Generally speaking, for nearly every polynomial of degree greater than 4, you cannot express the roots in terms of arithmetic operations and radicals; you need some sort of special function (e.g. the quintic can be solved with the Bring radical). But if you're going to use special functions anyway, you might as well use the special function "root of this polynomial", since it is a fairly simple special function and has an extremely simple relationship to the roots of your polynomial. Among the rare exceptions, you can identify some (maybe all) of them by: *Using a factoring algorithm. *Computing the Galois group of the polynomial over its coefficient field. WolframAlpha doesn't (immediately) find a simpler form for the roots; thus I'm inclined to expect that this isn't one of the exceptions.
Power series expansion of $\frac{z}{(z^3+1)^2}$ around $z=1$ I want to expand $f(z)=\frac{z}{(z^3+1)^2}$ around $z=1$. That is, I want to find the coefficients $c_n$ such that $f(z) = \sum_{n=0}^\infty c_n (z-1)^n$. So far, my first strategy was using long division after I expanded both the numerator and the denominator about $z=1$. To this end, we first apply a substitution $u:= z-1$, after which we'll expand around $u=0$. For the numerator we simply get $$1 + u.$$ For the reciprocal of the denominator we have $$ \frac{1}{\big((u+1)^3+1\big)^2} = \frac{1}{2^2} \frac{1}{\big(1 + \frac{u^3+3u(u+1)}{2}\big)^2} \quad \mbox{put $x:= \frac{u^3+3u(u+1)}{2}$}\\ =-\frac{1}{2^2}\frac{\partial}{\partial x}\frac{1}{x+1} \\ = -\frac{1}{2^2}(-1+2x-3x^2 +\ \ldots\ ) \\ =-\frac{1}{2^2}\big(1+6u-21u^2-52u^3+\ \ldots \ \big) $$ After using long division, we could try to see a pattern in the coefficients of $u^n$, but this doesn't seem to bear any quick results. My second attempt was using the binomial theorem to expand the power series of $$f(z) = \sum_{n=0}^\infty a_n z^{3n+1} \quad \mbox{with} \quad a_n = (n+1)(-1)^n$$ about $z=1$. To this end, we need to reorganize the $u^n$ in the following sum $$ \sum_{n=0}^\infty (u+1)^n = \sum_{n=0}^\infty \sum_{k=0}^n \binom nk u^k . $$ This seems fairly laborious. Below we have the terms for increasing $n$. $$ 1 \\ 1 + u \\ 1 + 2u + u^2 \\ 1 + 3u + 3u^2 + u^3 \\ \vdots \\ 1 + mu + \binom m2 u^2 + \ldots + mu^{m-1} + u^m \\ 1 + (m+1)u + \binom{m+1}{2}u^2 + \ldots + (m+1)u^m + u^{m+1} \\ 1 + (m+2)u + \binom{m+2}{2}u^2 + \ldots + (m+2)u^{m+1} + u^{m+2} \\ \vdots $$ If we look at the terms of $1$, $u$ and $u^2$, their coefficients $b_n$ in the sum over $n$ suggest the following coefficients for $1$, $u$ and $u^2$ respectively. $$ b_0 = \sum_{n=0}^\infty 1 \\ b_1 =\sum_{n=0}^\infty n \\ b_2 = \sum_{n=0}^\infty \binom n2 \ . 
$$ Expanding on this, we arrive at $$ \sum_{i=0}^\infty b_i u^i = \sum_{i=0}^\infty \bigg( \sum_{k=0}^\infty \binom ki \bigg)u^i $$ , which seems very incorrect, since we have a double infinite sum.
Notice that $$\frac z{(1+z^3)^2}=-z\frac{\partial}{\partial z^3}\frac1{1+z^3}=-z\frac\partial{\partial z^3}\sum_{n=0}^\infty(-1)^n(z^3)^n=\sum_{n=0}^\infty n(-1)^{n+1}z^{3n-2}$$ This expansion works for $|z|<1$, and since we know the expansion exists at $z=1$, we may apply the method described in this answer to get the expansion at $z=1$. $$=\sum_{j=0}^\infty \left(\sum_{k=j}^\infty \binom kj a_k \right)(z-1)^j$$ where $a_{3n+1}=(-1)^n(n+1)$ for $n\in\mathbb N$, else $a_k=0$. However, note that this method is usually only viable when the center for the radius of convergence was within the radius of convergence of the original series. For example, plugging $z=1$ into our "expansion" yields $$\frac14\stackrel?=1-2+3-4+5-6+\dots$$ However, an interesting case happens that we may regularize our divergent series to equal our original function by applying an Euler sum to it: $$f(z)=\sum_{j=0}^\infty\sum_{k=\lfloor j/3\rfloor}^\infty \binom{3k+1}j (-1)^k(k+1)(z-1)^j\\=\sum_{j=0}^\infty\sum_{i=\lfloor j/3\rfloor}^\infty\frac1{2^i}\sum_{k=\lfloor j/3\rfloor}^i\binom ik\binom{3k+1}j (-1)^k(k+1)(z-1)^j$$ which now converges as desired.
Prove that : $\sum n^{3}\sin\frac{1}{3^{n}}$ converges. I'm thinking of applying the ratio test. That would give me $\lim\limits_{n \to \infty } \dfrac {(n+1)^3}{n^3} \dfrac {\sin\frac {1}{3^{n+1}}}{\sin \frac {1}{3^n}}$. The first part is $1$ as $n$ approaches infinity, but what about the sine part? Or am I not doing this correctly?
For large $n$, we can Taylor expand $\sin(1/3^n)\sim 1/3^n$, which is enough (by limit comparison) to reduce the problem to $$ \sum n^3\sin 1/3^n\sim \sum n^3/3^n, $$ which converges, e.g. by the ratio test, since $\frac{(n+1)^3/3^{n+1}}{n^3/3^n}\to\frac13<1$.
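A quick numerical check that the partial sums stabilize (the cutoffs 60 and 200 are arbitrary):

```python
import math

partial_60 = sum(n**3 * math.sin(1 / 3**n) for n in range(1, 60))
partial_200 = sum(n**3 * math.sin(1 / 3**n) for n in range(1, 200))
# The tail beyond n = 60 is negligible, since n^3 / 3^n decays geometrically.
print(abs(partial_200 - partial_60) < 1e-12)  # True
```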
What is a proof that a diameter bisects a circle? One of the major contributions Thales is said to have given is the proof that a diameter of a circle bisects the circle, yet Euclid doesn't even bat an eye. Then again, Euclid skipped over other things like needing to assume that the plane was complete. First off, what do you think Thales meant by 'circle'? What about 'bisect'? Second, how would you give a formal proof? The closest answer I've found by searching is at https://proofwiki.org/wiki/Circle_is_Bisected_by_Diameter and the top is the most confusing proof I have ever seen.
See: Elements I.Def.18 : A semicircle is the figure contained by the diameter and the circumference cut off by it. And the center of the semicircle is the same as that of the circle. We do not know that the semicircle is "half" of a circle. See III.Prop.31 : In a circle the angle in the semicircle is right, that in a greater segment less than a right angle, and that in a less segment greater than a right angle; further the angle of the greater segment is greater than a right angle, and the angle of the less segment is less than a right angle. See III.Def.11: Similar segments of circles are those which admit equal angles, or in which the angles equal one another. Therefore, the two semicircles of a circle are similar segments. Then we need III.Prop.24 : Similar segments of circles on equal straight lines equal one another. See : * *Robert Simson, (181) The Elements of Euclid, viz. the First Six Books, Together with the Eleventh and Twelfth. The Errors, by which Theon, Or Others, Have Long Ago Vitiated These Books are Corrected, and Some of Euclid's Demonstrations are Restored, page 297 : Note on Def.XVII, Bk.I.
Find the numbers in A.P. whose sum is $24$ and product is $440$ If the sum of three numbers in A.P. is $24$ and their product is $440$, find the numbers. My Approach: Let the numbers be $a,a+d,a+2d$. So, according to the question, $$3a+3d=24$$ $$a+d=8$$ and $$a(a+d)(a+2d)=440$$ $$8a+ad=55$$ I can't proceed from here. Please help.
Another way to view it is to let $m$ equal the middle term. So the numbers are $m -d, m, m+d$. $(m-d) + m + (m+d) = 3m = 24$ so $m = 8$. So the numbers are $8-d, 8, 8+d$ and $(8-d)\cdot 8\cdot(8+d) = 8(64 -d^2)=440$ $64 -d^2 = 55$ $d^2 = 64 - 55 = 9$ $d = \pm 3$ so the numbers are $5,8,11$. Or $11, 8, 5$. Even if we did it your way: $a + d = 8$ and $8a + ad = 55$, so we'd have $d = 8 -a$ $8a + a(8-a) = 8a +8a - a^2 = 55$ $a^2 - 16a + 55 = 0$ so $a = \frac {16 \pm \sqrt{16^2 - 4\cdot 55}}{2} = \frac {16 \pm \sqrt {256 - 220}}{2} = 8 \pm \frac{\sqrt{36}}2 = 8 \pm 3 = 5;11$ so $d = 8 -5 = 3$ or $d = 8 -11 = -3$. And the numbers are $5, 8, 11$ or $11, 8, 5$.
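The middle-term approach is easy to confirm with a one-line search (the window $\pm10$ for $d$ is arbitrary):

```python
m = 24 // 3  # middle term, from 3m = 24
solutions = [(m - d, m, m + d) for d in range(-10, 11) if (m - d) * m * (m + d) == 440]
print(solutions)  # the two orderings with d = -3 and d = 3
```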
Wrong solution - but why? Find all solutions to the ODE $$y'=\begin{pmatrix}0 & 1 \\ \frac{-2}{1-x^2} & \frac{2x}{1-x^2}\end{pmatrix}y$$ What I did: Guess $y_1=\begin{pmatrix}x \\1\end{pmatrix}$ and reduce the order: complete $y_1$ to an invertible matrix such that $$H^{-1}=\begin{pmatrix}0 & 1 \\ 1 & -x\end{pmatrix}$$ Calculate $$B=\begin{pmatrix}0 & 1 \\ 1 & -x\end{pmatrix} \begin{pmatrix}0 & 1 \\ \frac{-2}{1-x^2} & \frac{2x}{1-x^2}\end{pmatrix}\begin{pmatrix}1 \\0\end{pmatrix}-\begin{pmatrix}0 \\0\end{pmatrix}$$ So $$B=\frac{2}{x^2-1}\begin{pmatrix}1 \\-x\end{pmatrix}$$ and $B_1=\frac{2}{x^2-1}$ and $B_2=\frac{-2x}{x^2-1}$. Solve $z'=B_2z$ $\Rightarrow C_2=x^2-1$ and $C_1=\int B_1 C_2dx = \int 2dx=2x$. Calculate $HC$: $$HC=\begin{pmatrix}x & 1 \\ 1 & 0\end{pmatrix} \begin{pmatrix}2x \\ x^2-1\end{pmatrix}=\begin{pmatrix}3x^2-1 \\ 2x\end{pmatrix}$$ This should be a solution, but it doesn't work and I don't know why :(
In the wording of the question some symbols are undefined, such as $B$, $C_1$, $C_2$, $z$. This is confusing and makes it difficult to answer with any certainty. Nevertheless, the end of the calculation might be: $$z'=-\frac{2x}{x^2-1}z \quad\to\quad z=\frac{c_2}{x^2-1}$$ $$\int \frac{1}{(x^2-1)^2}dx=\frac{1}{2}\ln\left|\frac{x+1}{x-1}\right|-\frac{x}{x^2-1}$$ $$y=c_1\left(\begin{matrix}x\\1 \end{matrix}\right)+c_2\left(\begin{matrix}\frac{1}{2}x\ln\left|\frac{1+x}{1-x}\right|-1\\ \frac{1}{2}\ln\left|\frac{1+x}{x-1}\right|-\frac{x}{x^2-1} \end{matrix}\right)$$
How to factorize this $\sqrt{8 - 2\sqrt{7}}$? When I was in high school, our teacher showed us a technique to simplify square roots like $\sqrt{8 - 2\sqrt{7}}$ that I have forgotten. It was something like $8 = 7+1$, $7 = 7\cdot 1$, and using these we could represent $\sqrt{8 - 2\sqrt{7}}$ in a simpler form. I would be happy if you could show how it works, and what this technique is called.
Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ for $X,Y\in\mathbb{R}$ with $X\ge Y\ge 0$. Here we have $X=8,\ Y=2\sqrt7=\sqrt{28}$, so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$
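A floating-point check of the denesting:

```python
import math

lhs = math.sqrt(8 - 2 * math.sqrt(7))
rhs = math.sqrt(7) - 1          # since 8 - 2*sqrt(7) = (sqrt(7) - 1)^2
print(abs(lhs - rhs) < 1e-12)   # True
```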
Proving that the graph of a function is a closed subset of $\mathbb{R}^{n+m}$ I have a closed subset $A\subset \mathbb{R}^{n}$ and a continuous function $f:A\rightarrow\mathbb{R}^{m}$. Now I would like to prove that $$\text{graph}(f) = \{(x,f(x)):x\in A\}$$ is a closed subset of $\mathbb{R}^{n+m}$. I am not quite sure how to prove this. I suppose if I work with Cauchy sequences here, this might do the job, but I do not know where to start. Any help would be greatly appreciated!
This is actually quite easy: consider the function $$g:\begin{cases}A\times\Bbb R^m\to \Bbb R^m \\ (x,y)\mapsto y-f(x)\end{cases}$$ which is continuous since $f$ is. Then $\text{graph}(f) = g^{-1}(\{0\})$ is the preimage of a closed set under the continuous function $g$, hence closed in $A\times\Bbb R^m$. Since $A$ is closed in $\Bbb R^n$, the set $A\times\Bbb R^m$ is closed in $\Bbb R^{n+m}$, and a set closed in a closed subspace is closed in the whole space, so $\text{graph}(f)$ is closed in $\Bbb R^{n+m}$.
Proving $ \sum_{n=1}^{\infty} nz^{n} = \frac{z}{(1-z)^2}$ for $z \in (-1, 1)$ I do not know where to start, any hints are welcome.
Another way: $$S(z)=z+2z^2+3z^3+4z^4+\cdots\Rightarrow $$ $$zS(z)=z^2+2z^3+3z^4+4z^5+\cdots\Rightarrow$$ $$\Rightarrow S(z)-zS(z)=S(z)(1-z)=z+z^2+z^3+\cdots=-1+(1+z+z^2+z^3+\cdots)$$ $$=-1+\frac{1}{1-z}=\frac{z}{1-z}\Rightarrow S(z)=\frac{z}{(1-z)^2}\Rightarrow \sum_{n=1}^{+\infty}nz^n=\frac{z}{(1-z)^2}\quad (|z|<1).$$ EDIT: I didn't see Zaid Alyafeai answer.
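Numerically, truncated partial sums do approach $z/(1-z)^2$ for sample points $|z|<1$ (the sample values and truncation length are arbitrary):

```python
def partial_sum(z, terms=2000):
    # Truncation of sum_{n>=1} n z^n
    return sum(n * z**n for n in range(1, terms + 1))

for z in (0.5, -0.8, 0.9):
    assert abs(partial_sum(z) - z / (1 - z) ** 2) < 1e-9
print("checked")
```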
Limit I don't know how to start solving. This is the limit: $$\lim_{n\to\infty}\frac{2^{n-1}-4^n}{3^n\sin n + 4^{n+2}}$$ I have a proposed solution with steps, but I still haven't understood how it's done: $$\lim_{n\to\infty}\frac{2^{n-1}-4^n}{3^n\sin n + 4^{n+2}}=\lim_{n\to\infty}\frac{\frac12(\frac24)^n-1}{(\frac34)^n\sin n+16}=-\frac1{16}$$ WolframAlpha says this is correct, but I haven't understood where all the fractions came from... (they come from dividing numerator and denominator by $4^n$, right?)
As is often the case, the simplest approach is to use equivalents: $2^{n-1}=_\infty o(4^n)$, hence $2^{n-1}- 4^n\sim_\infty- 4^n$. Similarly $\;3^n\sin n+4^{n+2}\sim_\infty4^{n+2}$, whence $$\frac{2^{n-1}- 4^n}{3^n\sin n+4^{n+2}}\sim_\infty\frac{- 4^n}{4^{n+2}}=-\frac 1{4^2}.$$
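A numerical spot-check of the limit (the sample index $n=60$ is arbitrary, large enough that the dominant terms $-4^n$ and $4^{n+2}$ have taken over):

```python
import math

def a(n):
    return (2 ** (n - 1) - 4**n) / (3**n * math.sin(n) + 4 ** (n + 2))

print(abs(a(60) - (-1 / 16)) < 1e-6)  # True
```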
Interpreting the cross-product of polynomials viewed as vectors Suppose one interprets quadratic polynomials (i.e., parabolas) \begin{eqnarray} F(x) &=& a x^2 + b x + c\\ G(x) &=& d x^2 + e x + f \end{eqnarray} as vectors $(a,b,c)$ and $(d,e,f)$ in $\mathbb{R}^3$. Then the formal cross-product of these vector coefficients is $$ (a,b,c) \times (d,e,f) = (-c e + b f, c d - a f, -b d + a e) \;, $$ which might be interpreted as $$ F{\times}G(x) = (-c e + b f) x^2 + (c d - a f) x + (-b d + a e) \;. $$ Is there some way to view the cross-product of polynomial vectors as another polynomial that is orthogonal to the originals, in a sense analogous to vectors in $\mathbb{R}^3$? So I would expect that, for generic polynomials, for some inner product (perhaps with a weighting function), $$ \int F(x) [F(x) \times G(x)] dx = 0 $$ In other words, Is there some viewpoint from which the cross-product of two polynomials $F$ and $G$ yields a polynomial $F {\times} G$ which is orthogonal to both $F$ and $G$? I illustrate a few special cases below, e.g., when $G(x) = s F(x)$ for some scale factor, then $F{\times}G(x)=0$. But in general I do not see a way to interpret $F{\times}G$ as "orthogonal" in some sense to $F$ and $G$. $G$ is a constant times $F$.     $F$ and $G$ are linear: $a=d=0$. $F$ and $G$ have no constant term: $c=f=0$.     $F$ and $G$ are centered on the $y$-axis: $b=e=0$.
Yes, but I'm not sure how interesting it is. You have a linear isomorphism $T \colon \mathbb{R}_{\leq 2}[x] \rightarrow \mathbb{R}^3$ sending a polynomial $p(x) = ax^2 + bx + c$ to the coefficients vector $(a,b,c)$. On $\mathbb{R}^3$, you have the cross product operation $\times$ and so you can use $T$ to transfer it to $\mathbb{R}_{\leq 2}[x]$ by defining $$ p \times' q := T^{-1}(T(p) \times T(q)). $$ The space $\mathbb{R}^3$ also has the standard inner product $\left< \cdot, \cdot \right>$ and so you can also use $T$ to transfer this inner product to $\mathbb{R}_{\leq 2}[x]$ by defining $$ \left< p, q \right>' := \left<T(p), T(q) \right>.$$ With this definition, $T$ becomes an isometry and so it preserves angles and lengths. In particular, since $T(p) \times T(q)$ is orthogonal to both $T(p)$ and $T(q)$ in $\mathbb{R}^3$, we'll have that $p \times' q$ is orthogonal to both $p,q$: $$ \left< p \times' q, p \right>' = \left<T(p \times' q), T(p) \right> = \left< T(p) \times T(q), T(p) \right> = 0 $$ (and similarly for $q$). You can write down the formulas for $\times'$ and $\left< \cdot, \cdot \right>'$ explicitly. They will look like the familiar formulas from $\mathbb{R}^3$: $$ (ax^2 + bx + c) \times' (dx^2 + ex + f) = (bf - ce)x^2 - (af - cd)x + (ae - bd), \left< ax^2 + bx + c, dx^2 + ex + f \right>' = ad + be + cf. $$ You can even describe the inner product $\left< p, q \right>'$ as an integration on some interval times an appropriate weight function (there are infinitely many possible choices for such a weight function).
$4x^2-5xy+4y^2=19$ with $Z=x^2+y^2$, find $\dfrac1{Z_{\max}}+\dfrac1{Z_{\min}}$ I have no idea how to approach this question at all. I've tried to find the maximum and minimum of the quadratic, but I am confused about what to do afterwards.
$$4x^2-5xy+4y^2-19=0$$ By quadratic formula we have, $$x=\frac{1}{8} (5 y\pm \sqrt{304-39y^2})$$ So, $$z=y^2+\frac{1}{64} (5 y\pm \sqrt{304-39y^2})^2$$ Now take derivative, set it to zero, and examine sign changes.
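As a cross-check by a different route (a polar substitution of my own, not part of the answer above): with $x=r\cos\theta$, $y=r\sin\theta$ the constraint reads $r^2\big(4-\tfrac52\sin 2\theta\big)=19$, so $Z=r^2=\frac{19}{4-2.5\sin 2\theta}$, and sampling $\theta$ recovers $\frac1{Z_{\max}}+\frac1{Z_{\min}}=\frac{(4-2.5)+(4+2.5)}{19}=\frac8{19}$:

```python
import math

# Z(theta) = 19 / (4 - 2.5*sin(2*theta)); the extremes occur at sin(2*theta) = ±1.
zs = [19 / (4 - 2.5 * math.sin(2 * math.pi * k / 100000)) for k in range(100000)]
result = 1 / max(zs) + 1 / min(zs)
print(abs(result - 8 / 19) < 1e-6)  # True
```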
Knockout tournament. P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, P11, P12, P13, P14, P15, P16 are 16 players who play a knockout tournament. In any match between P(i) and P(j), P(i) wins if i is less than j. Find the probability that P6 reaches the final. I tried making cases, but they seem endless. We know for sure that P1 will win the tournament and P16 will be eliminated in the first round. But there are many other cases for the first round only. Please help.
WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5; the rest of the choices are irrelevant for this question. P6 makes the final if and only if P1, P2, ..., P5 are all assigned positions on the side of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the "other side". There are $\binom{8}{5}$ ways to choose 5 of the "other side" spots for P1,...,P5, out of a total of $\binom{15}{5}$. Thus the probability is $$ \frac{\binom{8}{5}}{\binom{15}{5}} = \frac{8}{429}$$
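The counting argument and a direct simulation of the bracket agree (seed and trial count are arbitrary choices):

```python
import random
from fractions import Fraction
from math import comb

def p6_in_final(bracket):
    # bracket: players 1..16 in tree order; the smaller index always wins.
    while len(bracket) > 2:
        bracket = [min(bracket[i], bracket[i + 1]) for i in range(0, len(bracket), 2)]
    return 6 in bracket            # the last two players are the finalists

exact = Fraction(comb(8, 5), comb(15, 5))
print(exact)  # 8/429

random.seed(0)
trials = 100000
hits = sum(p6_in_final(random.sample(range(1, 17), 16)) for _ in range(trials))
print(abs(hits / trials - float(exact)) < 0.003)  # True
```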
What is the difference between non-reflexive or irreflexive? What is the difference between a non-reflexive and irreflexive relation? Is one stronger than the other (i.e. all non-reflexive relations are irreflexive or vice-versa)?
Is this a trick question? [Original question: What's the difference between "reflexible" and "irreflexible"?] None: in both cases, there's no such thing :) You mean "reflexive" and "irreflexive". A relation $R \subseteq A \times A $ is reflexive on $A$ if $aRa$ for every $a\in A$. Thus $R$ is not reflexive on $A$ iff for some $a\in A, \text{not } aRa$. A stronger condition than "not reflexive" is irreflexive. $R$ is irreflexive on $A$ iff for all $a\in A, \text{not } aRa$. If $A\neq \emptyset$, then "$R \text{ is irreflexive on } A$" implies "$R \text{ is not reflexive on } A$", but the converse is not in general true unless $A$ is a singleton.
How do I calculate a weighted average from two averages? I have two sets of averaged data I am working with, each consisting of a score and the number of users the average was computed from. For example: Average Score $4$, Total Number of participants (which the average is derived from): $835$. Average Score $3.5$, Total Number of participants: $4,579$. Can I calculate a weighted mean from these two averages and participant counts, or would that be inaccurate?
Assume that group 1 has an average of $m_1$ and size $N_1$, and group 2 has an average of $m_2$ and size $N_2$. Then you can take the weighted average of the two means to get the overall mean: $$ m= m_1\cdot \frac{N_1}{N_1+N_2}+m_2\cdot \frac{N_2}{N_1+N_2}. $$ This recovers the combined mean exactly when the group means are exact; if the reported averages were rounded, the result inherits that rounding error.
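In code, using the question's numbers:

```python
def weighted_mean(m1, n1, m2, n2):
    # Weight each group's mean by its share of the combined population.
    return (m1 * n1 + m2 * n2) / (n1 + n2)

overall = weighted_mean(4, 835, 3.5, 4579)
print(round(overall, 4))  # ≈ 3.5771
```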
For what values of $s$ and $t$ is the matrix negative semidefinite? Let $$A = \pmatrix{s&s&0\\ s&s&0\\ 0&0&t}$$ where $s, t \in \mathbb R$ are parameters, and let $Q_A : \mathbb R^3 \to \mathbb R$ be the corresponding quadratic form. Determine for which values of $s$ and $t$ the matrix $A$ (and hence $Q_A$) is negative semidefinite, and for which it is indefinite. So far I calculated the product $X^T A X$, which gave me this: (https://i.stack.imgur.com/9Xd9d.jpg) I then said this: (https://i.stack.imgur.com/JmZUe.jpg) Is that ok? If so, how would I go about saying which values give an indefinite form? Many thanks for your help.
If $\mathrm A$ is negative semidefinite then $-\mathrm A$ is positive semidefinite. Rather than Sylvester's criterion (which, for semidefiniteness, requires checking all principal minors, not just the leading ones), note that the eigenvalues of $\mathrm A$ are $0$, $2s$ and $t$ (the upper-left block $\pmatrix{s&s\\s&s}$ has eigenvalues $0$ and $2s$). Hence $\mathrm A$ is negative semidefinite exactly when $$s, t \leq 0,$$ and indefinite exactly when $2s$ and $t$ have opposite signs, i.e. $st < 0$.
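One can also read this off from the quadratic form directly, since $X^TAX = s(x+y)^2 + tz^2$. A randomized spot-check (the seed and sample ranges are arbitrary):

```python
import random

def quad_form(s, t, x, y, z):
    # X^T A X for A = [[s, s, 0], [s, s, 0], [0, 0, t]]
    return s * (x + y) ** 2 + t * z**2

random.seed(1)
ok = all(
    quad_form(-random.random(), -random.random(),
              random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5)) <= 0
    for _ in range(1000)
)
print(ok)  # True: with s, t <= 0 the form is never positive
```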
An explanation of the pointwise limit of $x^{1+1/(2n-1)}$ when $n\to\infty$ Define $h_n(x)=x^{1+\frac{1}{2n-1}}$ on the domain $[-1,1]$. Then, according to my real analysis book, the pointwise limit of $h_n(x)$ is $|x|$. I tried to look at this limit informally (calc 1 style, in a sense) and I figured that the limit on this domain would be $x$. This was due to separating the function into $x\cdot x^{\frac{1}{2n-1}}$ and claiming that the second term in the product approaches $1$ as $n$ approaches infinity. Perhaps there is some fundamental flaw in my thinking. I'd like to know how the absolute value function is the pointwise limit and so any help is appreciated.
For negative $x$, the odd-degree roots are also negative (when defined at all), so $x^{\frac{1}{2n-1}}\to -1$, not $1$, for $x<0$; thus in the product $x\cdot x^{\frac{1}{2n-1}}$ you get a positive quantity. You can also rewrite the expression as $$ \left(x^2\right)^{\frac{n}{2n-1}} $$ to get a clearer convergence towards the absolute value.
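Numerically, the rewritten form $(x^2)^{n/(2n-1)}$ is visibly close to $|x|$ for large $n$ (the sample points and the value $n=10^6$ are arbitrary):

```python
def h(x, n):
    # x^(1 + 1/(2n-1)) rewritten as (x^2)^(n/(2n-1)), matching the odd real root
    return (x * x) ** (n / (2 * n - 1))

ok = all(abs(h(x, 10**6) - abs(x)) < 1e-5 for x in (-1.0, -0.4, 0.0, 0.7, 1.0))
print(ok)  # True
```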
Show that $\sum\limits_{n=1}^{32}\frac1{n^2}=1 + \frac{1}{4} +\frac{1}{9} +\dots+ \frac{1}{1024}<2$ Show that $$1 + \frac{1}{4} +\frac{1}{9} +\dots+ \frac{1}{1024} <2$$ I know that the denominators are the perfect squares of the integers from $1$ to $32$. I also know the identity $$\frac{1}{n(n+1)} > \frac{1}{(n+1)^2} > \frac{1}{(n+1)(n+2)},$$ but I am not able to apply it. Please help me.
Another way: $$\begin{align} \sum_{k=1}^{2^5} \frac{1}{k^2} &\leq 1 + \sum_{k=2}^{2^5}\int_{k-1}^{k}\frac{1}{t^2} dt\\ &=1+\int_1^{2^5}\frac{1}{t^2} dt \\ &=2-\frac{1}{2^5}\\ &<2 \end{align}$$
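The integral bound can be checked directly:

```python
partial = sum(1 / k**2 for k in range(1, 2**5 + 1))
bound = 2 - 1 / 2**5          # 1 + integral of t^(-2) from 1 to 32
print(partial < bound < 2)    # True
```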
Meridians on surface of revolution A curve $\alpha(t)=(r(t),z(t))$ in the $(r,z)$-plane, where $r(t)>0$, is rotated around the $z$-axis. We can parameterise it with $x(t,\phi)=(r(t) \cos\phi,r(t) \sin\phi,z(t))$ for $t \in (a,b)$ and $\phi \in (0,2\pi)$. Now given the second fundamental form $(L_{ij})$ $$\frac{1}{\sqrt{\dot r^2+ \dot z^2}} \begin{pmatrix} \dot r \ddot z- \dot z \ddot r & 0\\ 0& r \dot z \end{pmatrix}$$ I've got to prove that $\det(L_{ij})=0$ if and only if every meridian is a straight line. Well, calculating the determinant, I get the equation $\dot z \ddot r= \dot r \ddot z$. Now I don't know how to go on. I'd be grateful for any help.
Hints: Using primes instead of dots to denote derivatives with respect to $t$, you have $$ r' z'' - r'' z' = 0\quad\text{for all $t$.} \tag{1} $$ * *If $r'z'$ is non-vanishing in some interval, (1) is equivalent to $$ \left[\log\left(\frac{z'}{r'}\right)\right]' = \frac{z''}{z'} - \frac{r''}{r'} = 0. $$ *If $r'z' = 0$, either $r' = 0$ or $z' = 0$. Continuity and the first bullet point show that $r'z'$ cannot vanish on a set with empty interior if the profile curve $\alpha$ is regular. On the other hand, if $r'z'$ vanishes identically on some interval and $\alpha$ is regular, then either $r'$ or $z'$ vanishes identically. (These claims leave non-trivial details to check, so I trust this outline doesn't spoil all your fun.)
find all entire functions that are 1-1 That is all the given information. I should find all the entire functions that are one-to-one, and I have no clue how to do that. There is also another one, which goes like this: find all entire functions $f$ such that $$\forall z \in \mathbb C^* \;\;\;\;\; \lvert f(z)\rvert \le \left\lvert \frac{\sin z}{z} \right\rvert,$$ where $\mathbb{C}^*$ is the complex plane without the origin.
Hint: Observe that if a function $f$ is entire and one-to-one, then $f$ cannot have an essential singularity at $\infty$, by the great Picard theorem. This means the Taylor series expansion of $f$ at $0$ has only finitely many terms, and thus $f$ is a polynomial. Now note that a polynomial is one-to-one if and only if its degree is $1$.
Equation of plane from normal A line is given by $x = 2 + 5t,\ y = 1 + 2t,\ z = 3 - 3t$. Define an equation in normal form for the plane which is perpendicular to the line and which passes through the point $(2, -3, 4)$. When I look at the answer, it says that the normal to the plane is $n = [5; 2; -3]$, which is also the direction vector of the line, which I understand. Then we get to the part that confuses me. Here they claim that $5(x - 2) + 2(y + 3) - 3(z - 4) = 0$, or $5x + 2y - 3z + 8 = 0$, is the equation of the plane we seek, but I don't follow. What's a more detailed way of describing this approach?
Hint: If $(a,b,c)$ is a normal vector to a plane, then the Cartesian equation of this plane is of the form $$ax+by+cz=d.$$ If the plane contains the point $(x_0,y_0,z_0)$, the equation becomes $$a(x-x_0)+b(y-y_0)+c(z-z_0)=0$$ with $d=ax_0+by_0+cz_0$. For parametric equations of the plane, the direction vectors of the plane can be taken as $u=(-b,a,0)\perp(a,b,c)$ and $v=(-c,0,a)\perp(a,b,c).$
True or False: Any surjective homomorphism between 2 isomorphic groups is an isomorphism. Suppose that $G$ and $G'$ are two groups isomorphic to each other. Is it true that any onto homomorphism from $G$ to $G'$ is an isomorphism, i.e. any surjective homomorphism has a trivial kernel? If it was not the case, then $G≈G'$ also Image of homomorphism $ =G'≈G/K$. Hence $G≈G/K$. This implies K is trivial. Am I right ?
The additive groups $(\mathbb C,+)$ and $(\mathbb R,+)$ are isomorphic; the map $z\mapsto\Re z$ is a surjective homomorphism but not an isomorphism. Let $F$ be the free group generated by a countably infinite set $X.$ Any non-injective surjection from $X$ to $X$ extends to a surjective homomorphism from $F$ to $F$ which is not an isomorphism.
Prove that $\inf(\{A, B\}) = A \cap B$ Say we have the partial order $\subseteq$ over the power set $P(X)$ from the set X. Prove that $\inf(\{A, B\}) = A \cap B$. So I've started with proving that $\inf(\{A, B\}) \subseteq A \cap B$ as follows: Take $M = \inf(\{A, B\})$, then by definition $M \subseteq A$ and $M \subseteq B$, so $M \subseteq (A \cap B)$, therefore $\inf(\{A, B\}) \subseteq (A \cap B)$. Is this part of the proof correct? And I could use some help on finding a proof for $A \cap B \subseteq \inf(\{A, B\})$. Thanks.
You can do a little bit better than that: you don't need to assume, additionally, that $\inf(A,B)$ already exists. Simply show that $A\cap B$ has the defining property of an infimum: certainly $A\cap B \subseteq A$ and $A\cap B \subseteq B$, and if $C$ is another set with $C \subseteq A$ and $C\subseteq B$, then for all $x\in C$ we have $x\in A$ and $x\in B$, hence $x\in A\cap B$. Therefore $C\subseteq A\cap B$. This proves that $A\cap B$ is an infimum of $A$ and $B$.
Connectedness of $\mathbb{C}^{n}\setminus \mathbb{R}^{n}$ Hi everyone: How would you prove this classic result: if $ n\geq2 ,$ then $\mathbb{C}^{n}\setminus \mathbb{R}^{n}$ is connected? Any reference? Thanks.
The set that you are interested in is path connected. Consider $(z_1,\cdots,z_n),(w_1,\cdots,w_n)\in\mathbb{C}^n\setminus\mathbb{R}^n$. Since $(z_1,\cdots,z_n)\in\mathbb{C}^n\setminus\mathbb{R}^n$ there is at least one index $i$ where $z_i$ is not real (some of the other indices can be real, points in $\mathbb{R}^n$ have all coordinates real). Similarly, there is some index $j$ where $w_j$ is not real. We construct two paths. There are two cases: Case 1: $i\not=j$. Step 1: While keeping $z_i$ fixed, transform all $z_1,\cdots,z_{i-1},z_{i+1},\cdots,z_n$ into $w_1,\cdots,w_{i-1},w_{i+1},\cdots,w_n$. This can be done with a linear homotopy $(1-t)z_k+tw_k$ changes $z_k$ into $w_k$ as $t$ varies between $0$ and $1$. This never intersects $\mathbb{R}^n$ since the $i$-th coordinate is never real. Step 2: While keeping all other coordinates fixed, transform $z_i$ into $w_i$. Since the $j$-th coordinate is not real, this path never intersects $\mathbb{R}^n$. Case 2: $i=j$. In this case, let $k$ be an index other than $i$. Consider the path that keeps all other coordinates fixed and transforms $z_k$ into the complex number $0+1i$ ($i$ is not an index here). Then use Case 1 on $k$ and $j$.
Martingale Central Limit Theorem for Triangular Array Martingale Difference Sequence I was wondering if anyone out there would know of an appropriate central limit theorem (or be able to apply a central limit theorem type argument) that would allow me to find an asymptotic distribution/weak convergence of the following martingale difference sequence (MDS): Let $\left\{\xi_j^n\right\}_{j=1}^n \sim $ MDS $(0,\sigma^4)$. Then: \begin{equation*} \frac{1}{\sqrt{n}}\sum_{j=1}^n \xi_j^n \quad \overset{\mathcal{D}}\longrightarrow \quad ?? \end{equation*} By the ...?... theorem under ...?... (some assumptions), as $n\rightarrow\infty$? I know this is not a very "well posed" problem per se, and if there are any other "assumptions" that are required in order to obtain this sort of weak convergence, please let me know. Could you also please provide a reference or references so that I may study this in a bit more depth myself. Many thanks
You could use the result of McLeish (D. L. McLeish, "Dependent Central Limit Theorems and Invariance Principles", Annals of Probability, 1974), which is perfectly suited to the setting of arrays of martingale differences.
Explain a mapping? Let $Z = \{(a, b) : a \ge 0 \text{ and } b = 0, \text{ or } a = 0 \text{ and } b \ge 0\}$. Find a $C^\infty$ smooth function $\phi : \mathbb{R} \to \mathbb{R^2}$ which is $1$−$1$ and onto $Z$. Q) Can someone please explain what $\phi$ is? I think it's a tangent vector to my function evaluated with respect to or dependent on time?
The $\phi$ in question needs to be a function. If we think about the input parameter as "time", then $\phi$ defines a trajectory in $\Bbb R^2$. Our function needs to be 1-1, which means we can't pass through the same point twice, and it needs to be onto $Z$, which means the path needs to cover all of $Z$. A function that almost works, but fails to be differentiable, is $$ f(t) = \begin{cases} (|t|,0) & t < 0\\ (0,t) & t \geq 0 \end{cases} $$ To get a function that will work, we can use any smooth function $g(t)$ such that $g(t) = 0$ when $t<0$, $g(t)$ is increasing when $t \geq 0$, and $g(t) \to \infty$ as $t \to \infty$. With such a function, we may define $$ \phi(t) = (g(-t),g(t)) $$ note that this is exactly the same as defining $$ \phi(t) = \begin{cases} (g(-t),0) & t < 0\\ (0,g(t)) & t \geq 0 \end{cases} $$ One example of such a $g$ is $$ g(t) = \begin{cases} 0 & t \leq 0\\ t e^{-1/t} & t > 0 \end{cases} $$
Skyscraper sheaves cohomology Let $X$ be a topological space and $G$ an abelian group. Denote by $\mathcal{S}$ the skyscraper sheaf with group $G$ at the point $x\in X$. How can I prove that $\mathcal{S}$ has no higher cohomology, i.e. $H^i(X,\mathcal{S})=0, \: \forall i>0$?
You can also see this using Čech cohomology (assuming the point $x$ is closed, so that the sets $U_i\setminus\{x\}$ below are open): Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets contains the point $x$: pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$. In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the skyscraper sheaf has no sections over these. Hence all higher Čech cocycles are $0$ and all higher cohomology groups vanish. (I believe this approach can also be found in Forster's book on Riemann surfaces.)
Is the sequence $\frac{1}{\arctan(-n)}\cdot\frac{3n-2}{n^2+n+10}$ increasing or decreasing? Is the following sequence increasing or decreasing $$\frac{1}{\arctan(-n)}\cdot\frac{3n-2}{n^2+n+10}$$? I managed to come to conclusion that $a_n=\frac{1}{\arctan(-n)}$ is strictly increasing and that for $b_n=\frac{3n-2}{n^2+n+10}, b_1<b_2<b_3>b_4>b_5...$. So $b_n$ is increasing for the first $3$ terms and after that it's decreasing. So what can I conclude about the product of theses $2$ sequences? I know that the product of $2$ increasing sequences doesn't have to be increasing.
Let $(a_n)$ be $\ge 0$ and decreasing, and $(b_n)$ be $\le 0$ and increasing. Then: $$a_{n+1}b_{n+1} - a_n b_n = a_{n+1}(b_{n+1} - b_n) + (a_{n+1} - a_n)b_n \ge 0,$$ since both products on the right are $\ge 0$. Apply this with $a_n = \frac{3n-2}{n^2+n+10}$ (which is $\ge 0$ and decreasing for $n \ge 3$) and $b_n = \frac{1}{\arctan(-n)}$ (which is $\le 0$ and increasing): your sequence is increasing from $n=3$ on. The first few terms must be checked by hand; numerically the sequence in fact decreases from $n=1$ to $n=3$ and increases afterwards.
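As a numerical sanity check (not a proof) of how the lemma applies to the sequence in the question:

```python
import math

def c(n):
    # the sequence from the question: (1/arctan(-n)) * (3n-2)/(n^2+n+10);
    # here 1/arctan(-n) is <= 0 and increasing, while (3n-2)/(n^2+n+10)
    # is >= 0 but only decreasing from n = 3 on
    return (1.0 / math.atan(-n)) * (3 * n - 2) / (n * n + n + 10)

vals = [c(n) for n in range(1, 51)]
print(all(vals[i] < vals[i + 1] for i in range(2, 49)))   # increasing for n >= 3
print(vals[0] > vals[1] > vals[2])                        # but decreasing at first
```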
Height : Normal distribution I am looking at the following: The average tallest men live in the Netherlands and Montenegro, with $1.83$m $=183$cm. The average shortest men live in Indonesia, with $1.58$m $=158$cm. The standard deviation of the height in the Netherlands/Montenegro is $9.7$cm and in Indonesia it is $7.8$cm. The height of a giant of Indonesia is exactly 2 standard deviations over the average height of an Indonesian. He goes to the Netherlands. What proportion of men in the Netherlands is taller than that giant? I thought to do the following: Since the height of a giant of Indonesia is exactly 2 standard deviations over the average height of an Indonesian, we get that his height is $158+2\cdot 7.8=173.6$cm, right? We have the following: Now we want to compute $P(x>173.6)=1-P(x\leq 173.6)$, right? On the graph we have $173.6$; how could we compute $P(x\leq 173.6)$?
You are right. $X$ is distributed as $\mathcal N(183, 9.7^2)$. To compute $P(X\leq 173.6)$ you use the standardized random variable $Z=\frac{X-\mu}{\sigma}$, where $Z\sim \mathcal N(0,1)$: $P(X\leq 173.6)=\Phi\left(\frac{173.6-183}{9.7}\right)\approx\Phi(-0.97)$. Here $\Phi(z)$ is the cdf of the standard normal distribution; you can look up $\Phi(-0.97)$ in a table or an online calculator. For orientation, the value is between $14\%$ and $18\%$, so the proportion of Dutch men taller than the giant is $1-\Phi(-0.97)\approx 83\%$. Let $m$ be the minimal acceptable height, then $P(X>m)=0.01$, or not? It is also equivalent to $P(X\leq m)=0.99$, right? You are right that both equations are equivalent. You have made the right transformations. $\frac{m-158}{7.8}=2.32 \Rightarrow m=176.174\ cm$ Is this correct? More or less. We usually say that $\Phi(2.33)=0.99$. It is $\Phi(2.32)=0.98983$ and $\Phi(2.33)=0.99010$; the second value is nearer to $0.99$ than the first. But the funny thing is that if I use $2.33$ the result is $m=176.174$. Maybe you have used $2.33$ on the RHS. Your answer to the second question is right. $\large \checkmark$
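Both computations can be reproduced with the standard library alone, since the normal cdf is $\Phi(z)=\tfrac12(1+\operatorname{erf}(z/\sqrt2))$ (the bisection below is just one simple way to invert it):

```python
import math

def phi(z):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# part 1: the giant is 173.6 cm tall; Dutch heights are N(183, 9.7^2)
p_shorter = phi((173.6 - 183.0) / 9.7)
p_taller = 1.0 - p_shorter
print(round(p_shorter, 3), round(p_taller, 3))  # about 0.166 and 0.834

# part 2: minimal height m with P(X <= m) = 0.99 for X ~ N(158, 7.8^2),
# found by bisecting for the 0.99 quantile of the standard normal
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if phi(mid) < 0.99:
        lo = mid
    else:
        hi = mid
m = 158.0 + 7.8 * lo
print(round(m, 2))  # about 176.15 (using the table value 2.33 gives 176.17)
```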
$\lim_{x \to 0} \frac{a^x - b^x}{cx^3 + dx^2}$ $$\lim_{x \to 0} \frac{a^x - b^x}{cx^3 + dx^2}$$ where $a,b>0$ and $c^2 + d^2 >0$. I think that the limit does not exist, as the one-sided limits at $0$ go to plus infinity and minus infinity, but I'm not sure. Any help appreciated.
It is $0/0$ indeterminate, so we may apply L'Hospital's rule to get $$\lim_{x\to0}\frac{\ln(a)a^x-\ln(b)b^x}{3cx^2+2dx}$$ If $a \neq b$, the numerator tends to $\ln(a/b) \neq 0$ while the denominator tends to $0$, so the quotient diverges. Watch the signs, though. If $d \neq 0$, the denominator behaves like $2dx$ near $0$ and changes sign at $0$, so the two one-sided limits are $+\infty$ and $-\infty$ and the two-sided limit does not exist; your suspicion is correct in this case. If $d = 0$ (and hence $c \neq 0$), the denominator behaves like $3cx^2$, which has constant sign, and the limit is $+\infty$ or $-\infty$ according to the signs of $\ln(a/b)$ and $c$. Finally, if $a = b$ the expression is identically $0$ and so is the limit.
Stick of unit length is broken into three random pieces, what is the expected length of the longest piece? In regards to this question: Average length of the longest segment Can anybody explain why the cumulative distribution function is in the form that is given?
Take a one-foot stick, lay it down, and cut it simultaneously at two points in order to get three pieces. This is equivalent to choosing two points $X$ and $Y$ uniformly on the stick. From left to right, the first piece will have length $A=\min(X,Y)$. The piece on the right will have length $1-B$, where $B=\max(X,Y)$. The one in the middle has length $X-Y$ if $X>Y$, or $Y-X$ if $X \leq Y$; in other words, $|X-Y|=B-A$. Finally, let $C$ be the longest piece, that is, $C=\max(A,1-B,B-A)$. The cdf is then defined as follows: $$F_C(a)=P(C \leq a)= P(\max(A,1-B,B-A)\leq a)$$ The maximum of $A$, $1-B$ and $B-A$ is smaller than $a$ if and only if each of them is smaller than $a$, so we can rewrite the cdf as $$F_C(a)=P(C \leq a)= P(A\leq a,1-B\leq a,B-A\leq a)$$ Since $X$ and $Y$ play symmetric roles, you can write $$P(A\leq a,1-B\leq a,B-A\leq a)=P(A\leq a,1-B\leq a,B-A\leq a,X<Y)+P(A\leq a,1-B\leq a,B-A\leq a,X \geq Y)$$ Thus, $$P(A\leq a,1-B\leq a,B-A\leq a)=2P(X\leq a,1-Y\leq a,Y-X\leq a,X<Y)$$ A bit of geometry will give you the result: draw the square $[0,1]\times[0,1]$, the vertical line $\{x=a\}$, the horizontal line $\{y=1-a\}$, and the straight lines $y=a+x$ and $y=x$. Locate the intersection of the regions defined by the event above and you will get the result.
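A short Monte Carlo check of the setup (not needed for the derivation): the expected length of the longest piece is known to be $11/18\approx 0.611$, and simulation agrees:

```python
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    x, y = random.random(), random.random()
    a, b = min(x, y), max(x, y)
    total += max(a, b - a, 1.0 - b)   # C = longest of the three pieces

est = total / N
print(est)   # close to 11/18 ~ 0.6111
```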
Calculating chance of results when dropping lowest result of rolling 4 dice Say I want to roll four 6-sided dice, then take the 4 results, drop the lowest result (or just one of the lowest values, if it is rolled more than once), and add the remaining 3 dice together to get the number. (For those interested, this is the same as rolling for abilities in D&D.) I've managed to make a Python script that runs every possible combination to get percentages for each result (for example, I know that the chance of rolling an 18 is approx. 1.62%), but I am curious if there is a way to mathematically calculate it, without simulating each outcome, or counting them out. I am specifically interested in the chances of rolling a 3 and an 18, but if there is a way to calculate each of the numbers, that would be even better. Again, I'm not interested in the result, as I already have it. I am interested in the process of calculating it, if possible.
You get $3$ only if you roll $[1,1,1,1]$. Therefore, the probability of $3$ is $1/6^4$. You get $18$ only if you roll any of the following, where $X$ stands for any value from $1$ to $5$: * *$[6,6,6,6]$, for which there is $1$ option *$[6,6,6,X]$, for which there are $5$ options *$[6,6,X,6]$, for which there are $5$ options *$[6,X,6,6]$, for which there are $5$ options *$[X,6,6,6]$, for which there are $5$ options Therefore, the probability of $18$ is $(1+5+5+5+5)/6^4 = 21/1296 \approx 1.62\%$.
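The counting can be cross-checked by exhaustive enumeration of the $6^4=1296$ ordered rolls, which also recovers the $\approx 1.62\%$ figure:

```python
from collections import Counter
from itertools import product

# enumerate all 6^4 ordered rolls; drop one lowest die, sum the rest
counts = Counter(sum(sorted(roll)[1:]) for roll in product(range(1, 7), repeat=4))

total = 6 ** 4
print(counts[3], counts[18])      # 1 and 21 favourable outcomes
print(counts[18] / total)         # 21/1296, about 1.62%
```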
Coupled ODE's with constraints How do we solve a a system of ODE's: $\frac{dc_i}{dx} = f_i (c_i, x) $ Where $f_i $ can be linear or non-linear. Subject to: $\sum{\alpha_i c_i} = 0$? More specifically, this is the system of equations I'm looking at:
The system had better be consistent with the constraint, i.e. (assuming $\alpha_i$ are constants) $$\dfrac{d}{dx} \sum_i \alpha_i c_i = \sum_i \alpha_i f_i(c_i, x) = 0$$ If so, just solve the constraint for one of the $c_i$ and substitute in to the differential equations for the other $c_i$'s.
What is the rank of correlation matrix and its estimate? For an n-dimensional vector $\mathbf{x}$, a $n\times n$ correlation matrix $\mathbf{R}$ is https://en.wikipedia.org/wiki/Covariance_matrix#Correlation_matrix \begin{equation} \mathbf{R} = {E}\big[(\mathbf{x}-E(\mathbf{x}))(\mathbf{x}-E(\mathbf{x}))^T\big]\tag{1a} \end{equation} where $E(.)$ is the expectation operator. If $E(\mathbf{x})=0$, the correlation $\mathbf{R}$ reduces to \begin{equation} \mathbf{R} = {E}\big[\mathbf{x}^{}\mathbf{x}^T\big]\tag{1b} \end{equation} The estimate of $\mathbf{R}$, call it $\mathbf{R_{xx}}$, can be computed by collecting $N$ independent n-dimensional sample vectors $\mathbf{x}$ (http://perso-math.univ-mlv.fr/users/banach/workshop2010/talks/Vershynin.pdf) \begin{equation} \mathbf{R_{xx}} = \frac{1}{(N-1)}\sum_{i=1}^{N} \mathbf{x}_i\mathbf{x}_i^T \tag{2} \end{equation} My questions are * *what is the $rank(\mathbf{R})$ *what is the $rank(\mathbf{R_{xx}})$ when $N>>n$ From (1b), $rank(\mathbf{R})$ should be 1. For (2), I searched for "rank of sum of rank-1 matrices" and found this post Rank of sum of rank-1 matrices which essentially says that the rank of a sum of rank-1 matrices can be as high as $n$ for independent vectors. These are two conflicting things and I am not able to understand what I am missing here.
$rank(\mathbf{R})$ equals the number of linearly independent random variables in $\mathbf{x}$. If $\mathbf{R}$ is full rank ($rank(\mathbf{R}) = n$), then all components of $\mathbf{x}$ are linearly independent. If $rank(\mathbf{R}) = k \lt n$, that means there are only $k$ independent random variables in $\mathbf{x}$; the other $n-k$ random variables can be constructed as linear combinations of other components of $\mathbf{x}$. Your equation (1b) doesn't lead to $rank(\mathbf{R}) = 1$: the expectation averages the rank-one matrices $\mathbf{x}\mathbf{x}^T$ over the whole distribution of $\mathbf{x}$, so it is a mixture of many different rank-one matrices, which is generally full rank. Under certain conditions (for example, $\mathbf{x}_i$ i.i.d. normal), your equation (2) approaches $\mathbf{R}$ as $N \to \infty$, and $rank(\mathbf{R_{xx}})$ approaches $rank(\mathbf{R})$.
$A^H=A^{-1}$ implies $\|x\|_2 = \|Ax\|_2$ for any $x\in \mathbb{C}^n$ Show that the following conditions are equivalent. 1) $A\in \mathbb{C}^{n\times n}$ is unitary. ($A^H=A^{-1}$) 2) for all $x \in \mathbb{C}^n$, $\|x\|_2 = \|Ax\|_2$, where $\|x\|_2$ is the usual Euclidean norm of $x \in \mathbb{C}^n. $ I am totally lost in this problem, I appreciate any hint. And here is my argument. From $1\to 2$, we get $A^{H}A=I$. By this problem "Prove that $\|A\|_2 = \sqrt{\|A^* A \|_2}$", I can say $\|Ax\|_2=\sqrt{\|A^HAx\|_2}$. Therefore, I get $\|Ax\|_2^2=\|x\|_2$. I also have problem to show the equality in this problem "Prove that $\|A\|_2 = \sqrt{\|A^* A \|_2}$".
Notice that for any complex $n\times n$ matrix $A$, $$ \| Ax \|^2 = \langle Ax, Ax \rangle = \langle x, A^H A x \rangle, \qquad \forall x \in \Bbb{C}^n. $$ So the implication $1) \Rightarrow 2)$ is straightforward. For the other direction, the following lemma is useful: Lemma. If $A$ is an $n\times n$ matrix such that $\langle x, Ax\rangle = 0$ for all $x \in \Bbb{C}^n$, then $A = 0$. Then the implication $2) \Rightarrow 1)$ immediately follows by applying the lemma to the matrix $A^H A - I$. Proof of Lemma. Plug $x = \alpha \mathrm{e}_k + \beta \mathrm{e}_l$, where $\alpha, \beta \in \Bbb{C}$ and $l \neq k$. Then $$0 = |\alpha|^2 A_{kk} + \bar{\alpha}\beta A_{kl} + \alpha\bar{\beta}A_{lk} + |\beta|^2 A_{ll}. $$ Plugging $(\alpha, \beta) = (1, 0)$, we find that all the diagonal entries of $A$ are zero. Plugging $(\alpha, \beta) = (1, 1)$ and $(\alpha, \beta) = (1, i)$ respectively, we obtain a system of equations $$ A_{kl} + A_{lk} = 0, \qquad A_{kl} - A_{lk} = 0. $$ Solving these equations shows that all the off-diagonal entries of $A$ are zero. //// Remark. The lemma above is no longer true on real vector spaces. Consider $\frac{\pi}{2}$-rotation, for instance.
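For intuition, the identity $A^HA=I$ and the norm preservation of $1)\Rightarrow 2)$ are easy to check numerically; one common way to sample a unitary matrix (an illustration, not part of the proof) is the QR decomposition of a complex Gaussian matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# a random unitary matrix from the QR decomposition of a complex Gaussian
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A, _ = np.linalg.qr(G)

print(np.allclose(A.conj().T @ A, np.eye(n)))    # A^H A = I, so A is unitary

x = rng.normal(size=n) + 1j * rng.normal(size=n)
print(np.linalg.norm(A @ x), np.linalg.norm(x))  # equal up to rounding: 1) => 2)
```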
Consider $I = \int^{14}_{8} e^{-x^4} dx$. Match each riemann sum. $I = \int^{14}_{8} e^{-x^4} dx$ We have $L_{1000}, L_{10}, R_{1000}$. We have $I= 0.335, 0.368, 0.367$. Match each sum with each $I$ value. I know that the graph is decreasing and approaches $0$ on the interval $[8,14]$, so $R_n \leq \text{Actual Area } I \leq L_n$. I know $R_{1000} = 0.335$, as it is the number furthest from the actual area $I$. I think $L_{10} = 0.367$ and $L_{1000} = 0.368$, since $1000$ rectangles is much more accurate than $10$ rectangles, and thus it will be larger than the actual area $I$. Is this correct? It's the first time I'm doing these types of questions so I'm not $100\%$ sure. (not real approximations)
Since the function is decreasing, $L_n \ge \int_a^b f(x)\, dx \ge R_n$: the right sum is less than the true value and the left sum is greater than it. As $n$ gets larger, $L_n$ gets closer to the true value, so your logic is good there, except that $L_{10} \ge L_{1000}$: the $1000$-piece partition refines the $10$-piece one, and for a decreasing function refining makes the left sum smaller, not larger. So the matching is $L_{10}=0.368$, $L_{1000}=0.367$ and $R_{1000}=0.335$.
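The values in the exercise are stated not to be real approximations of $\int_8^{14}e^{-x^4}\,dx$ (which is astronomically small), but the ordering of left and right sums for a decreasing function is easy to illustrate with a stand-in such as $e^{-x}$ on $[0,2]$:

```python
import math

# demonstrate L_10 > L_1000 > I > R_1000 > R_10 for a decreasing function
f = lambda x: math.exp(-x)
a, b = 0.0, 2.0
I_exact = 1.0 - math.exp(-2.0)   # exact integral of e^{-x} on [0, 2]

def left_sum(n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def right_sum(n):
    h = (b - a) / n
    return h * sum(f(a + (i + 1) * h) for i in range(n))

print(left_sum(10), left_sum(1000), I_exact, right_sum(1000), right_sum(10))
```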
Basis for kernel and range for a linear transformation of polynomials mapped to the vector space of 2x2 real matrices Let $V$ be the vector space $P_2$(x) of polynomials in $x$ of degree 2 or less, $W$ be the vector space $M_{2,2}$ of 2×2 real matrices and $T$ be the linear transformation $T: V → W:$ $a + bx + cx^2$ $\mapsto$ \begin{bmatrix} a-b & b-c \\ 0 & c-a \end{bmatrix} Compute bases for the kernel and range of $T$ and for $V$ and hence verify the general rank-nullity theorem for $T : V → W$. I don't understand how to get the kernel and range when the linear transformation maps to a vector space of matrices, and so cannot compute bases. Any help/solutions would be appreciated.
For the kernel of $T$ you want those polynomials in $V$ that map to the zero matrix in $W$. So \begin{align*} T(a+bx+cx^2) & = \begin{bmatrix}0&0\\0&0\end{bmatrix}\\ \begin{bmatrix} a-b & b-c \\ 0 & c-a \end{bmatrix} & = \begin{bmatrix}0&0\\0&0\end{bmatrix}. \end{align*} This gives $a=b=c$. So the polynomials which lie in the kernel are of the form $a(1+x+x^2)$, where $a \in \mathbb{R}$. So a basis for the kernel is $$\mathcal{B}_{\text{ker}}=\{1+x+x^2\} \implies \dim(\text{Ker} T)=1$$ Likewise we can go for a basis for the range. First we can get the range: assume that $\begin{bmatrix}p&q\\r&s\end{bmatrix} \in \text{Range }(T)$, then there exists some polynomial $a+bx+cx^2 \in V$ such that \begin{align*} T(a+bx+cx^2) & =\begin{bmatrix}p&q\\r&s\end{bmatrix}\\ \begin{bmatrix} a-b & b-c \\ 0 & c-a \end{bmatrix}& = \begin{bmatrix}p&q\\r&s\end{bmatrix} \end{align*} This gives the following system: $$ \begin{align*} a-b & = p\\ b-c & = q\\ 0 & = r\\ c-a&=s \end{align*} \Longrightarrow \begin{bmatrix} 1&-1&0&|&p\\ 0&1&-1&|&q\\ 0&0&0&|&r\\ -1&0&1&|&s \end{bmatrix} \Longrightarrow \begin{bmatrix} 1&-1&0&|&p\\ 0&1&-1&|&q\\ 0&0&0&|&r\\ 0&0&0&|&p+q+s \end{bmatrix} $$ From this it follows that the range only consists of matrices of the form $\begin{bmatrix}p&q\\0&-p-q\end{bmatrix}$. Now we can go for a basis for the range of $T$ as follows: $$\begin{bmatrix}p&q\\0&-p-q\end{bmatrix}=p\begin{bmatrix}1&0\\0&-1\end{bmatrix}+q\begin{bmatrix}0&1\\0&-1\end{bmatrix}$$ This shows that $$\mathcal{B}_{\text{range}}=\left\{\begin{bmatrix}1&0\\0&-1\end{bmatrix},\begin{bmatrix}0&1\\0&-1\end{bmatrix}\right\} \implies \dim(\text{Range } T)=2$$
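One way to double-check the rank-nullity bookkeeping is to write down the matrix of $T$ with respect to the bases $\{1,x,x^2\}$ of $V$ and $\{E_{11},E_{12},E_{21},E_{22}\}$ of $W$ and let a computer compute the rank (a verification, not part of the hand proof):

```python
import numpy as np

# rows list the entries a-b, b-c, 0, c-a of T(a + bx + cx^2)
M = np.array([[ 1, -1,  0],
              [ 0,  1, -1],
              [ 0,  0,  0],
              [-1,  0,  1]])

rank = int(np.linalg.matrix_rank(M))
nullity = 3 - rank
print(rank, nullity, rank + nullity)   # 2, 1 and 2 + 1 = 3 = dim V

# the kernel basis vector 1 + x + x^2 corresponds to (1, 1, 1)
print(M @ np.array([1, 1, 1]))         # the zero vector
```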
Show that the set of odd numbers is countable Problem: Show that the set of odd numbers is countable Attempt: So for this problem, I just need to find a bijection from the natural numbers to the set of odd numbers. However, I find the claim "odd numbers" a bit ambiguous because it can be odd natural numbers or odd integers. However, I think that problem is about the odd integers. Would it be sufficient to show a bijection in the following manner? 1->1 2->-1 3->3 4->-3 5->5 6->-5 and so on.
Your answer is correct, but in case you want an explicit function, the following helps. Define $f:\mathbb{N}\to \{\dots,-5,-3,-1,1,3,5,\dots\}$ by $$ f(n)= \begin{cases} n&, \mbox{if $n$ is odd}\\ 1-n&, \mbox{if $n$ is even} \end{cases} $$ for all $n\in\mathbb{N}$.
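A small check, for illustration, that $f$ really is a bijection onto the odd integers on an initial segment of $\mathbb N$:

```python
def f(n):
    """Send the natural number n to an odd integer, as in the answer."""
    return n if n % 2 == 1 else 1 - n

# on {1, ..., 2N} the image is exactly the odd integers from -(2N-1) to 2N-1
N = 100
image = {f(n) for n in range(1, 2 * N + 1)}
expected = set(range(-(2 * N - 1), 2 * N, 2))

print(image == expected)        # onto that range
print(len(image) == 2 * N)      # no collisions, so injective there
```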
countable intersection of open set argument I know that a countable intersection of open sets could be closed or open. But I was wondering what is wrong with my argument here: Given an element $x$ in a countable intersection of open sets, denoted $\bigcap_{i\in\mathbb{N}}A_{i}$, where each $A_i$ is an open set. Since $x$ belongs to each $A_i$, for each $A_i$ we have an $\epsilon_i>0$ such that $(x-\epsilon_i, x+\epsilon_i)$ is entirely contained in $A_i$. Then out of all the $\epsilon_i$, we can pick the minimum, say $\epsilon^*$, such that $\epsilon^* \leq \epsilon_i$ for all $i$. Then we have $(x-\epsilon^*, x+\epsilon^*)$ within each $A_i$. Since we can do this for an arbitrary element $x$ of $\bigcap_{i\in\mathbb{N}}A_{i}$, we can conclude that $\bigcap_{i\in\mathbb{N}}A_{i}$ is open. I know it is incorrect, but which part is incorrect? Is it because of the "countable intersection" part? I know that instead of countable, if I have a finite intersection, then the finite intersection of those open sets will be open. But I am not sure how to get the concept right for the countable intersection, to understand that a countable intersection of open sets is not necessarily open, without using a counterexample. I saw many counterexamples showing that an intersection of open sets can be closed. But I just want to know which part of my thinking above is incorrect. Thank you.
The problem is that while a finite set of positive real numbers has a minimum (which is then positive), an infinite set of positive real numbers may not. Instead it is only guaranteed to have an infimum, which could be zero. In your example, consider the case where $A_i=(-\frac{1}{i},\frac{1}{i})$ and $x=0$. Then we could take $\epsilon_i=\frac{1}{i}$, but then no minimum $\epsilon^*$ exists. And indeed $\cap_{i}A_i=\{0\}$ is not open.
Probability density of a stochastic process Good morning, recently I had to solve the two-dimensional SDE, and the solution process I found was $$\left\{\begin{array}{rcl}\xi^1_t&=&\xi^1_0+\int_0^t dw(s),\\ \xi^2_t&=&\xi^2_0+\int_0^t(\xi^1_s)^2ds\end{array}\right.,$$ where $w(t)$ is a standard one-dimensional Brownian motion. Then $\xi^1_t$ has a normal distribution with mean $\xi^1_0$ and variance $t$. What kind of process is $\xi^2_t$? More generally, what can I say about the whole process $(\xi^1_t,\xi^2_t)$ (as a joint process)? I didn't find any reference in the literature unfortunately, so that's why I'm asking. Thank you for all your kind replies.
This is only a partial answer, but I hope it could be useful. Let us write $$X_s=\xi^1_s \sim N(\xi^1_0,s)$$ Recalling that the sum of $k$ squared standard normal distributions is a chi-squared distribution $\chi_k^2$ (where $k$ denotes the degrees of freedom), for a single non-standard normal distribution $Y=N(\mu, \sigma^2)$ we have $$ \left (\frac {Y-\mu}{\sigma} \right)^2=\chi_1^2$$ Applying this to $X_s$, we get $$\left (\frac {X_s-\xi^1_0}{\sqrt {s}} \right)^2=\chi_1^2$$ and then $$(X_s-\xi^1_0)^2=s \, \chi_1^2$$ $$X_s^2 = 2X_s \xi^1_0 -(\xi^1_0)^2+s \, \chi_1^2$$ Note that in the RHS of the last equation, the first term is our normal variable $X_s$ scaled by a factor $2 \xi^1_0$, so that its expected value is $2 (\xi^1_0)^2$. The second term is a constant, and the third term is a chi-square distribution with one degree of freedom (which by definition has an expected value of $1$) scaled by a factor $s$. So, due to linearity of expectation, we get $$E (X_s^2)=(\xi^1_0)^2+s$$ We can now rewrite the expression giving $\xi^2_t$ as $$\xi^2_t=\xi^2_0 +2 \xi^1_0 \int_0^t X_s\,ds - (\xi^1_0)^2\, t + \int_0^t s\, \chi_1^2(s)\, ds$$ and grouping the deterministic terms into $J(t)=\xi^2_0-(\xi^1_0)^2 t$, $$ \xi^2_t= 2 \xi^1_0 \int_0^t X_s\,ds + \int_0^t s\,\chi_1^2(s)\,ds +J(t) $$ Therefore the distribution of $\xi^2_t$ is given by the sum of a deterministic term, a scaled integral of the normal process $X_s=\xi^1_s$, and an integral of the time-scaled chi-square variables corresponding to the squares of the standardized $X_s$.
The integral of $X$ can be expressed in terms of the so-called error function, considering that the CDF $F(x)$ of a generic normal distribution with mean $\mu$ and variance $\sigma^2$ is $$F (x)=\frac {1}{2} \left[ 1+erf \left( \frac {x-\mu}{\sigma \sqrt {2}} \right) \right] $$ The integral of $\chi_1^2$ can be estimated by recalling that the CDF $C_r(x)$ of a generic chi-square distribution $\chi_r^2$ with $r$ degrees of freedom is $$C_r (x)=P \left( \frac {1}{2} r, \frac {1}{2} x \right) $$ where $P(m,n)$ is a regularized gamma function.
How to find a distribution of the exponential of a negative lognormal distribution? I wish to find the distribution of $\exp(-\exp(X))$, when $X$ is normally distributed. The first steps are straightforward: $$X \sim \operatorname{Normal}(\mu,\sigma^2),$$ $$ Y = \exp(X) \sim \log \mathcal{N}(\mu,\sigma^2) $$ However, I now want to find the distribution of a negative log-normal distribution, $-Y$, and in particular the distribution of its exponential, but I am completely clueless how to tackle this: $$Z = \exp(-Y) \sim ???$$ I have found that $-Y$ might be handled using some distributions in the Johnson family, but I could not find how to do it. Any help would be appreciated!
I think it should be easy enough to find the cumulative distribution function $\Pr(Z \le z) = \Pr(X \ge \log_e(-\log_e(z))) = 1 - \Phi\left(\dfrac{\log_e(-\log_e(z))-\mu}{\sigma}\right)$ for $0 \lt z \lt 1$ and then the density $p(z) = \dfrac{1}{-\sigma z \log_e(z)}\phi\left(\dfrac{\log_e(-\log_e(z))-\mu}{\sigma}\right)$ for $0 \lt z \lt 1$ based on the cdf $\Phi$ and pdf $\phi$ of a standard normal distribution. Sketching the density for varying $\mu$ and $\sigma$ suggests it produces some slightly odd results, especially varying $\sigma$ between $0.1$ and $1$, so I might doubt it has a commonly used name.
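A Monte Carlo cross-check of the derived cdf at one point (simulation only, with arbitrary illustrative parameters $\mu=0$, $\sigma=1$):

```python
import math
import random

mu, sigma = 0.0, 1.0
z0 = 0.5                     # point at which to compare the cdfs

def phi(z):                  # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

cdf_formula = 1.0 - phi((math.log(-math.log(z0)) - mu) / sigma)

random.seed(1)
N = 200_000
hits = sum(math.exp(-math.exp(random.gauss(mu, sigma))) <= z0 for _ in range(N))
cdf_empirical = hits / N

print(cdf_formula, cdf_empirical)   # agree to roughly two decimal places
```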
Two circles touch each other externally at the point O. .. I am stuck on the following elementary problem that says: Two circles touch each other externally at the point O. If PQ and RS are two diameters of these two circles respectively and $PQ \parallel RS$, then prove that P, O, S are collinear. My try: I joined the points P, O, S and also the points Q, O and R, O. From the given condition, $\angle POQ=90^{\circ}=\angle ROS$, since any angle inscribed in a semicircle is a right angle. Now, from the figure we see $\angle POR=180^{\circ}-90^{\circ}=\angle QOS$. And hence $\angle POQ+\angle QOS=90^{\circ}+90^{\circ}=180^{\circ}=\angle POS$. Hence we can conclude P, O, S are collinear. Can someone verify it? Am I right? Thanks in advance for your time.
$\angle POQ$ and $\angle ROS$ are right angles because each is inscribed in a semicircle, by Thales' theorem, so that part is fine. The gap is the step $\angle POR=180^{\circ}-90^{\circ}$: it silently assumes that $Q$, $O$, $R$ are collinear, which is essentially what has to be proved. One way to close the gap: let $C_1, C_2$ be the centers of the two circles. Since the circles touch at $O$, the points $C_1, O, C_2$ are collinear. The triangles $C_1PO$ and $C_2SO$ are isosceles (two sides of each are radii), and since $PQ \parallel RS$ with $P$ and $S$ on opposite sides of the line $C_1C_2$, the apex angles $\angle PC_1O$ and $\angle SC_2O$ are alternate angles along the transversal $C_1C_2$, hence equal. Therefore the base angles are equal as well: $\angle C_1OP=\angle C_2OS$. Then $$\angle POS=\angle POC_2+\angle C_2OS=\angle POC_2+\angle C_1OP=180^{\circ},$$ since $C_1, O, C_2$ lie on one line, so $P$, $O$, $S$ are collinear.
How to prove that $\lVert x + y\rVert = \lVert x\rVert + \lVert y \rVert \implies \lVert tx + (1-t)y\rVert = t\lVert x\rVert + (1-t)\lVert y \rVert$? If a norm on a vector space is given by an inner product then we have $\lVert x + y\rVert = \lVert x \rVert + \lVert y \rVert$ only when $x=\lambda y$ for some constant $\lambda$ (this follows from the Cauchy-Schwarz inequality). This does not hold for general (unitary invariant) norms. E.g. when considering generalised Ky Fan norms: $$ \lVert x\rVert_s = \sum_i s_i \lvert x\rvert_i^\downarrow$$ where $s_i\geq 0$ and $\lvert x\rvert^\downarrow$ is $\lvert x\rvert$ rearranged with the components from biggest to smallest. So for example the operator (sup) norm has $s_1 = 1$ and $s_i=0$ for $i>1$, while the trace norm has $s_i=1$ for all $i$. For these norms it is in general possible to find $x$ and $y$ so that the norm is additive for these vectors, while $x$ and $y$ are not multiples of each other. Now my question is the following. Given a unitarily invariant norm and vectors $x$ and $y$ such that $\lVert x + y\rVert = \lVert x\rVert + \lVert y \rVert$, can we then conclude that $\lVert tx + (1-t)y\rVert = t\lVert x\rVert + (1-t)\lVert y \rVert$ for $0\leq t \leq 1$? This is true for these generalised Ky Fan norms and for all norms that come from inner products. It is also true for all $p$-norms as this additivity still only holds for $x$ and $y$ that are multiples of each other (even though $p$-norms don't come from an inner product). My intuition behind this is that this additivity property (when $x$ and $y$ aren't multiples of each other) forces the norm to be "linear" in a certain way, such that the norm has to be very similar to a generalised Ky Fan norm. So as a bonus question: If there are $x$ and $y$ that aren't multiples of each other, such that the norm is additive for these $x$ and $y$, show that the norm is a generalised Ky Fan norm. 
Note that this can only be true when we assume the norm to be unitarily invariant, since otherwise we can construct a counter-example: take $(X_1,\lVert\cdot\rVert_1)\oplus (X_2,\lVert\cdot\rVert_2)$ with the norm $\sup\{\lVert\cdot\rVert_1,\lVert\cdot\rVert_2\}$, where $\lVert\cdot\rVert_1$ is a Ky Fan norm and $\lVert\cdot\rVert_2$ is some other norm.
The conclusion holds for any vector norm. If $\lVert x + y\rVert = \lVert x\rVert + \lVert y \rVert$ then for $0 \le t \le 1$: $$ \lVert x\rVert + \lVert y \rVert = \lVert x + y\rVert \\ = \lVert (1-t)x + tx + (1-t)y + ty\rVert \\ \le (1-t)\lVert x \rVert + \lVert tx + (1-t)y \rVert + t \lVert y\Vert \\ \stackrel{(*)}{\le} (1-t)\lVert x \rVert + t \lVert x \rVert + (1-t)\lVert y \rVert + t\lVert y\rVert \\ = \lVert x\rVert + \lVert y \rVert $$ so that equality holds everywhere, in particular at $(*)$: $$ \lVert tx + (1-t)y\rVert = t\lVert x\rVert + (1-t)\lVert y \rVert $$ It follows even that for all $s, t > 0$ $$ \lVert sx + ty\rVert = s\lVert x\rVert + t\lVert y \rVert \, , $$ as can be seen by replacing $t$ by $\frac{s}{s+t}$ in the previous identity.
How many moves do I need to solve the leap frog puzzle? This is a standard puzzle that all of us have seen (and also probably appears in Conway's combinatorial game theory books). There are $n$ green frogs and $n$ red frogs sitting on $2n+1$ lily pads in the given configuration GGG_RRR The frogs can only leap to an empty lilypad. They can jump over at most one frog. The problem is to change the original configuration to RRR_GGG I want to show that this can be done optimally in $(n+1)^2 - 1$ steps for each $n$. I tried doing this by trying to find a recurrence on $n$, but I failed.
To get the number of moves, look at how many spaces each frog must move. Each frog moves $n+1$ spaces, so there are a total of $2n(n+1)$ spaces moved. There are $n^2$ jumps where a frog moves two spaces, so the number of moves is $2n(n+1)-n^2=n^2+2n=(n+1)^2-1$ To prove that a frog jumping another of the same color cannot be part of the optimum solution, assume it is a green frog. We then must have GG_ with other frogs in the row. How did we get here? The first G could have jumped backwards, but then jumping forwards undoes the move and we would be better without the pair. An R could have moved backwards, but then the front G should have jumped it and we would be farther along. Finally, a G could have moved forwards, but then we had GG_ before (one space to the right) and we look at the move before that.
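For small $n$ the claimed optimum can be confirmed by brute force, assuming (as the backward-move cases in the argument above suggest) that frogs may slide or jump in either direction:

```python
from collections import deque

def min_moves(n):
    """BFS over configurations: a frog may slide one step into the gap, or
    jump over a single frog into the gap, in either direction."""
    start = "G" * n + "_" + "R" * n
    goal = "R" * n + "_" + "G" * n
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            return dist[s]
        i = s.index("_")
        for j in (i - 1, i + 1, i - 2, i + 2):   # slide or jump into the gap
            if 0 <= j < len(s):
                t = list(s)
                t[i], t[j] = t[j], t[i]
                t = "".join(t)
                if t not in dist:
                    dist[t] = dist[s] + 1
                    queue.append(t)

for n in (1, 2, 3):
    print(n, min_moves(n), (n + 1) ** 2 - 1)   # BFS matches the formula
```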
Is $(X+Y)^3- (X^2+Y^2)\in \mathbb{C}[X,Y]$ irreducible? Is $(X+Y)^3- (X^2+Y^2)\in \mathbb{C}[X,Y]$ irreducible? I can't apply the Eisenstein criterion.
The brute force method: see it as a polynomial of degree $3$ in $X$ with coefficients in $K = \mathbb{C}(y)$. Expanding, the polynomial is $X^3+(3y-1)X^2+3y^2X+y^3-y^2$. If it is not irreducible then, being of degree $3$, it has a linear factor: $$X^3+(3y-1)X^2+3y^2X+y^3-y^2=(X+a)(X^2+bX+c)=X^3+(a+b)X^2+(ab+c)X+ac$$ with $a,b,c \in K$. Comparing coefficients gives $a+b=3y-1$, $ab+c=3y^2$ and $ac=y^3-y^2$, so $c = \frac{y^3-y^2}{a}$ and $b = 3y-1-a$, and the middle equation becomes $$\frac{y^3-y^2}{a}+a(3y-1-a) =3y^2.$$ It remains to show that this equation has no solution $a \in \mathbb{C}(y)$.
Multiplying every element in Covariance matrix This is regarding the article in this link, which explains the Kalman filter. At one point, in equation 4, the author says: If we multiply every point in a distribution by a matrix A, then what happens to its covariance matrix Σ? Well, it's easy. I'll just give you the identity. How can I derive this equation?
I have a small addition: The OP has not specified that $x$ is zero mean. In general, ${\rm COV}[x]$ is defined as $\mathbb{E}[(x - \mathbb{E}[x])(x - \mathbb{E}[x])^T]$. If $x$ is zero mean, i.e., $\mathbb{E}[x]=0$, Omnomnomnom has already given the answer. If it is not, it still works, like this: $$\begin{align}{\rm COV}[A x] & = \mathbb{E}[(Ax - \mathbb{E}[Ax])(Ax - \mathbb{E}[Ax])^T] \\ & = \mathbb{E}[(Ax - A\mathbb{E}[x])(Ax - A\mathbb{E}[x])^T ]\\ & = \mathbb{E}[A(x - \mathbb{E}[x])(x - \mathbb{E}[x])^T A^T ] \\ & = A \mathbb{E}[(x - \mathbb{E}[x])(x - \mathbb{E}[x])^T ]A^T \\ & = A {\rm COV}[x] A^T \\ \end{align}$$
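The identity holds exactly for the sample covariance as well, since subtracting the (transformed) mean commutes with $A$; a quick check with some hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-dimensional sample points as columns, with a nonzero mean, and a 2x3 map A
X = rng.normal(loc=3.0, size=(3, 1000))
A = rng.normal(size=(2, 3))

lhs = np.cov(A @ X)            # covariance of the transformed points
rhs = A @ np.cov(X) @ A.T      # A COV[x] A^T
print(np.allclose(lhs, rhs))   # True, even though the mean is nonzero
```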
Proof for tangent chord angle formula I want a proof for tangent chord angle formula by using the following method: Drawing a parallel line - See the diagram. I know the other proofs and I want to prove it with drawing a parallel line.
If $AB \parallel CD$, then $\angle BAC=\angle ACD$. Because $AB$ is tangent (and $AB \parallel CD$), then $|AC|=|AD|$, so the triangle $ACD$ is isosceles and $\angle ADC=\angle ACD$. $\angle AOC$ is central angle with inscribed angle $\angle ADC$, so $$(\widehat{AC}=)\angle AOC = 2 \angle ADC = 2 \angle ACD = 2 \angle BAC$$ We have then $$\frac{\widehat{AC}}{2}=\angle BAC$$
Solving $\sum_{i=3}^{n+1} i$ trying to solve $\sum_{i=3}^{n+1} i$ First I attempt to change the lower/upper bounds: $\sum_{i=3}^{n+1} i = \sum_{i=1}^{n-1} i$ in order to use $\sum_{i=1}^{n} i = \frac{n(n+1)}2$ so, $\sum_{i=1}^{n-1} i = \frac{(n-1)(n-1+1)}2 = \frac{n^2-n}2$ This is just practice out of a textbook that doesn't have answers - but I tried to input the summation in wolframalpha and my result is not one of the answers there. Where have I messed up? Additionally, is modifying the lower/upper bound of a summation in order to use an equality like the one above an ok way to approach these problems?
That's wrong. Take $n=2$: then you are claiming $3=1$. Use $$ \sum_{i=3}^{n+1} i = \sum_{i=1}^{n+1} i-\sum_{i=1}^{2} i = \frac{(n+1)(n+2)}{2}-3. $$ If you want to shift the index instead, set $m=i-2$. Then $m$ goes from $1$ to $n-1$ and $i=m+2$, so $$ \sum_{i=3}^{n+1} i=\sum_{m=1}^{n-1} (m+2)=\frac{(n-1)n}{2}+2(n-1)=\frac{n^2+3n-4}{2}, $$ which agrees with the first computation. Changing the bounds is fine, but you must change the summand accordingly.
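A quick numerical check of both routes (subtracting the first two terms, and shifting the index) against the direct sum:

```python
# cross-check the two evaluations against the direct sum, for several n
for n in range(2, 20):
    direct = sum(range(3, n + 2))                  # i = 3, ..., n+1
    via_subtraction = (n + 1) * (n + 2) // 2 - 3   # full sum minus i = 1, 2
    via_shift = sum(m + 2 for m in range(1, n))    # m = i - 2 runs 1, ..., n-1
    assert direct == via_subtraction == via_shift
print("all agree")
```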
Show the following inequality Basically I want to show the following: $$\sqrt{2}\ |z|\geq\ |\operatorname{Re}z| + |\operatorname{Im}z|$$ So what I did is the following: Let $z = a + bi$ Consider the following: $$2|z|^2 = 2a^2 + 2b^2 = a^2 + b^2 +a^2+b^2$$ Since $(a-b)^2\geq0$, hence $a^2+b^2\geq 2ab$ Thus $2|z|^2 \geq a^2+b^2+2ab = (a+b)^2$ Hence $\sqrt{2}|z| \geq a + b$ but $a = |\operatorname{Re}z|~,~ b = |\operatorname{Im}z|$ Did I make a mistake somewhere? If yes, I would appreciate it if it could be pointed out, and perhaps some guidance on how to prove this.
Let $z = r\cos\theta + ri\sin\theta$ Then $|Re(z)| + |Im (z)| = \pm\sqrt 2 r\sin(\theta \pm \frac{\pi}{4}) \leq \sqrt 2 r$ All four possible sign combinations will cover $\theta$ in all quadrants.
Finding the expected value of of $\int_0^s \sqrt{t+B_t^2}dB_t$? How can I find the expected value of $\int_0^s \sqrt{t+B_t^2}dB_t$? I know one condition is to show that if: $f: (0,\infty) \times \Omega \to \mathbb{R}$ is progressively measurable and $$\mathbb{E} \left( \int_0^s |f(t)|^2 \, dt \right)<\infty \quad \text{for all $s \geq 0$}$$ then $$M_s := \int_0^s f(t) \, dB_t, \qquad s \geq 0,$$ is a martingale. However, I don't know how to prove the above nor know of any good names for it. Is there a way to do it by Ito's formula for space and time or any other direct methods? Thanks.
I think you already wrote down everything you need to solve your problem. About measurability. I assume that we are speaking about the filtration $\mathcal F_t$ that is generated by the given Brownian motion $B_t$. Then $f(t)=\sqrt{t+B_t^2}$ is progressively measurable with respect to $\mathcal F_t$. Indeed, $B_t$ itself is progressively measurable w.r.t. $\mathcal F_t$ (it is adapted to $\mathcal F_t$ by the definition of this filtration and has a.s. continuous paths). The function $g(t)=t$ is also progressively measurable w.r.t. $\mathcal F_t$ (it is non-random). Since $f(t)$ is a measurable function of $B_t$ and $g(t)$, it is progressively measurable as well. About expectation. We have $$\mathbb E \left(\int_0^s|f(t)|^2 dt\right)=\mathbb E \left(\int_0^st+B_t^2 dt\right)=\mathbb E \left(\frac{s^2}{2}+\int_0^sB_t^2 dt\right)=\frac{s^2}{2}+\mathbb E \left(\int_0^sB_t^2 dt\right).$$ Now by Tonelli's theorem, we may interchange expectation and integration in the last term: $$\mathbb E \left(\int_0^sB_t^2 \ dt\right)=\int_0^s\mathbb E(B_t^2)\ dt=\int_0^s t \ dt=\frac{s^2}{2}.$$ We obtain that $$\mathbb E \left(\int_0^s|f(t)|^2 dt\right)=s^2<\infty.$$ Now we conclude that your integral is a martingale, and its expected value is therefore zero.
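A crude Euler-Maruyama simulation can serve as a numerical sanity check (numerics only, not part of the argument): with left-point (Itô) evaluation the sample mean of $\int_0^s \sqrt{t+B_t^2}\,dB_t$ should be near $0$, and the sample mean of $\int_0^s (t+B_t^2)\,dt$ near $s^2$:

```python
import math
import random

random.seed(0)
s, steps, paths = 1.0, 100, 10_000
dt = s / steps

sum_I = 0.0   # samples of the Ito integral of sqrt(t + B_t^2)
sum_Q = 0.0   # samples of the time integral of (t + B_t^2)
for _ in range(paths):
    B, t, I, Q = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        f = math.sqrt(t + B * B)        # evaluate at the left endpoint (Ito)
        dB = random.gauss(0.0, math.sqrt(dt))
        I += f * dB
        Q += (t + B * B) * dt
        B += dB
        t += dt
    sum_I += I
    sum_Q += Q

print(sum_I / paths)   # near 0: the stochastic integral is a mean-zero martingale
print(sum_Q / paths)   # near s^2 = 1, matching the computation above
```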
Proving a limit equals zero I have to prove, without L'Hôpital's rule, the following limit: $$\lim_{x \to \infty}\sqrt{x} \sin \frac{1}{x} =0$$ I tried a change of variable, setting $t=\frac{1}{x}$ and reaching the following: $$\lim_{t\to 0} \sqrt{\frac{1}{t}} \sin t $$ But I can't prove this one either. I tried the second version with the squeeze theorem, but I can't finish the proof. Thanks!
$$\lim_{t\to0}\frac{\sin t}{\sqrt t}=\lim_{t\to0}\frac{\sin t}t\sqrt t=\lim_{t\to0}\frac{\sin t}t\cdot\lim_{t\to0}\sqrt t=1\cdot0.$$
cardinality / linear algebra Let $E$ be a $\mathbb C$-vector space of finite dimension, and $u$ an endomorphism of $E$. Denote $K=\{\operatorname{Ker}P(u) : P\in\mathbb C[X]\}$ and $I=\{\operatorname{Im}P(u) : P\in\mathbb C[X]\}$. Show that $K$ and $I$ are finite and have the same cardinality. The confusion I have with this problem is that those two sets seem infinite, because if a vector is in $K$, every translation of the vector is also in $K$?
Let's prove that $K$ is finite. Let $m\in\mathbb C[X]$ be the minimal polynomial of $u$. (Any polynomial $m$ such that $m(u)=0$ works.) Let $p\in\mathbb C[X]$. Write $d = \gcd(m,p) = rm+sp$ with $r,s\in\mathbb C[X]$. Then $d(u)=s(u)p(u)$ and $\ker d(u) \supseteq \ker p(u)$. Write $p=qd$ with $q\in\mathbb C[X]$. Then $p(u)=q(u)d(u)$ and $\ker d(u) \subseteq \ker p(u)$. Therefore, $\ker p(u) = \ker d(u)$. Since $m$ has only finitely many (monic) factors, there are only finitely many possibilities for $d$ and so there are only finitely many possibilities for $\ker d(u)$. The same argument proves that $I$ is finite. $\def\im{\operatorname{im}}$ Indeed, $d(u)=p(u)s(u)$ implies $\im d(u) \subseteq \im p(u)$ and $p(u)=d(u)q(u)$ implies $\im d(u) \supseteq \im p(u)$. Therefore, $\im p(u) = \im d(u)$. It remains to prove that $K$ and $I$ have the same cardinality.
What is the cartesian product of cartesian products? If I have four sets $A$, $B$, $C$ and $D$, what is the meaning of $(A \times B) \times (C \times D)$? Can I define it to be a matrix, such that the first index of each element will be given by its order in the bigger cartesian product i.e.$(A \times B) \times (C \times D)$, and the second index will be given by its order within the smaller cartesian product i.e. $(A \times B)$ or $(C \times D)$?
$(A \times B)\times (C \times D)=\{(u,v): u \in A \times B, v \in C \times D\}=\{((a,b),(c,d)):a \in A, b \in B, c \in C, d \in D\}$.
Sum of reciprocals of the triangle numbers Consider the sum of $n$ terms : $S_n = 1 + \frac{1}{1+2} + \frac {1}{1+2+3} + ... + \frac {1}{1+2+3+...+n}$ for $n \in N$. Find the least rational number $r$ such that $S_n < r$, for all $n \in N$. My attempt : $S_n = 2(1-\frac{1}{2} + \frac {1}{2} - \frac{1}{3} + .... + \frac {1}{n} - \frac {1}{n+1}) = 2(1 - \frac {1}{n+1}) $ Now what to do with that '$r$' thing ? How to proceed ?
First, let's point out that it isn't obvious why there is a "least" such rational number. For instance, if $S_n = 2$ for all $n$, then there isn't such a least rational number. So let's first prove that $r$ exists. Let $A = \{x \in \Bbb Q: S_n < x, \forall n\}$. The question should read: Prove that $\min A$ exists and find its value. $$S_n = \sum_{k=1}^n \frac1{\sum_{t = 1}^k t} = \sum_{k=1}^n \frac1{\frac{k(k+1)}2} = 2\left( \sum_{k=1}^n \frac1k - \sum_{k=1}^n \frac1{k+1}\right) \\ = 2 \left(1 - \frac1{n+1} \right)$$ If $x \in A$, then $S_n < x, \forall n \implies \lim S_n \le x \implies 2 \le x$, so $2$ is a lower bound of $A$. On the other hand, $2 \in A$. Thus, $2 = \min A$; in particular, $\min A$ exists.
Proving a relation is an equivalence relation specifically proving transitivity I'm currently studying for an exam and I've come across this question: * *Define a relation R on Z by xRy ⇔ 6|($x^{2} − y^{2}$) for x, y ∈ Z. Prove that R is an equivalence relation and describe the equivalence classes of R I understand how to prove it's reflexive, and I've tried to prove it's symmetric but I used the fact that -6|($y^{2}-x^{2}$) which doesn't seem like the correct way to answer this question, and I have no idea how to prove it's transitive, any help would be greatly appreciated. Thanks in advance!
Your proof that it's symmetric is almost certainly correct (but if you want to post the details, I could critique it). To prove transitivity, assume $x \sim y$ and $y\sim z$. By definition, $6| (x^2-y^2)$ and $6|(y^2-z^2)$. Another way of putting this would be to say there are integers $m,n$ such that $x^2-y^2 = 6m$ and $y^2-z^2 = 6n$. If we just add those two equations together, we get $x^2 - z^2 = 6(m+n)$. That is, $6|(x^2 -z^2)$, or $x\sim z$.
Proving divisibility of sequences For $a_n=6^{(2^n)}+1$, how can I show that $a_n\mid(a_{n+1}-2)$ ?
Observe that $$a_{n+1}-2=6^{2^{n+1}}+1-2$$ $$=6^{2^{n}\cdot2}-1$$ $$=\left(6^{2^{n}}\right)^2-1$$ $$=\left(6^{2^{n}}-1\right)\left(6^{2^{n}}+1\right)$$ $$=a_n\left(6^{2^{n}}-1\right)$$ So $a_{n+1}-2$ is a multiple of $a_n$, and therefore $a_n\mid(a_{n+1}-2)$.
Intersection of lines in higher dimensions Given two lines in the parametric form (where $p$ is a point on the line, $\hat{v}$ is a unit direction vector and $t$ is the parameter) $q_0 = p_0 + t_0 \hat{v_0} \\ q_1 = p_1 + t_1 \hat{v_1}$ What is the general solution for detecting the intersection of lines in arbitrary dimensions? The 3D formula I know is based on 2-ary cross product, which doesn't generalize to higher dimensions. In 2D you can use the perp dot product instead. What about dimensions 4 and higher?
The question is basically whether there exist scalars $t_0$ and $t_1$ such that $$p_0 + t_0v_0 = p_1 + t_1 v_1,$$ which is the same as $$(p_0 - p_1) + t_0 v_0 - t_1 v_1 = 0.$$ So you need to know whether the vectors $p_1-p_0$, $v_0$, and $v_1$ are linearly dependent, with $p_1-p_0$ actually involved in the dependence. Arrange $p_1-p_0$, $v_0$, and $v_1$ as the columns of a matrix and test whether it has full rank. If it does not, you can find a linear dependence among the columns; after normalizing the coefficient of $p_1-p_0$ to $1$ (possible unless the lines are parallel), this dependence yields an intersection of the lines.
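This rank/dependence test is easy to run numerically. A sketch in Python with NumPy (the helper name and the sample lines are made-up illustrations): it solves the $n\times 2$ least-squares system $t_0 v_0 - t_1 v_1 = p_1 - p_0$ and accepts the answer only when the residual vanishes, which covers skew and parallel lines as well.

```python
import numpy as np

def intersect_lines(p0, v0, p1, v1, tol=1e-9):
    """Return the intersection point of the lines p0+t0*v0 and p1+t1*v1
    in R^n, or None if they do not meet."""
    p0, v0, p1, v1 = map(np.asarray, (p0, v0, p1, v1))
    # Solve t0*v0 - t1*v1 = p1 - p0 in the least-squares sense ...
    A = np.column_stack([v0, -v1])
    b = p1 - p0
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    q0 = p0 + t[0] * v0
    q1 = p1 + t[1] * v1
    # ... and accept it only if both parametrizations land on the same point.
    return q0 if np.allclose(q0, q1, atol=tol) else None

# Two lines in R^4 that meet at (1, 1, 0, 0):
print(intersect_lines([0, 1, 0, 0], [1, 0, 0, 0],
                      [1, 0, 0, 0], [0, 1, 0, 0]))  # [1. 1. 0. 0.]
```

For skew lines the least-squares residual is nonzero, so the final check returns `None`.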
Probability with an "OR" Marbles: 3 Yellow, 1 Purple, 2 Blue Probability of choosing a yellow first or a blue second. I understand that the OR situation will result in my adding the 2 probabilities and then subtracting the $P(A\text{ and }B)$. For me, I can see that P(yellow first or blue second) = $1/2 + 1/3 -P(A\text{ and }B)$. If I physically draw out the sample space I can see that $P(A\text{ and }B) = 1/5$. My question is how do I see this without drawing the sample space and physically counting the choices -- far too time consuming for a test. I would have thought that $P(A\text{ and }B)$ would be = $(1/2)(1/3)=1/6$ since that is $P(A)P(B)$. Can you help me see this?
The simplest way, I think, is to follow what happens: on the first pick there are $3$ favourable balls out of $6$, and on the second pick $2$ favourable balls among the remaining $5$, so $P(A\text{ and }B)=\frac{3}{6}\cdot\frac{2}{5}=\frac{1}{5}$.
Develop the Taylor series for the function $f(x)={\frac{1-x}{\sqrt{1+x}}}$ Develop the Taylor series for the function $$f(x)={\frac{1-x}{\sqrt{1+x}}}$$ for $$ a=0$$ I have tried to differentiate it, knowing that the Taylor series has the form: $F(x)=\sum_n{\frac{f^{(n)}(a)}{n!}(x-a)^n}$ But it's far too complicated after 2/3 derivatives
The generalized binomial theorem says that $$ (1+x)^{-1/2}=\sum_{n\ge0}\binom{-1/2}{n}x^n \tag{1} $$ (the radius of convergence is discussed later on). Then $$ \frac{1-x}{\sqrt{1+x}}= (1-x)\sum_{n\ge0}\binom{-1/2}{n}x^n $$ Now you can distribute and collect terms: \begin{align} \frac{1-x}{\sqrt{1+x}} &=(1-x)\sum_{n\ge0}\binom{-1/2}{n}x^n \\[6px] &=\sum_{n\ge0}\binom{-1/2}{n}x^n-\sum_{n\ge0}\binom{-1/2}{n}x^{n+1} \\[6px] &=1+\sum_{n\ge1}\binom{-1/2}{n}x^n-\sum_{n\ge1}\binom{-1/2}{n-1}x^{n} \\[6px] &=1+\sum_{n\ge1}\left(\binom{-1/2}{n}-\binom{-1/2}{n-1}\right)x^n \end{align} The radius of convergence of a power series doesn't change when it's multiplied by a polynomial (verify it), so it's sufficient to look at the radius of convergence of $(1)$. With the ratio test, $$ \left|\frac{\dbinom{-1/2}{n+1}x^{n+1}}{\dbinom{-1/2}{n}x^{n}}\right|= \frac{n+1/2}{n+1}|x| $$ because $$ \binom{k}{n}=\frac{k(k-1)\dotsm(k-n+1)}{n!} $$ so $$ \frac{\dbinom{k}{n+1}}{\dbinom{k}{n}}= \frac{k(k-1)\dotsm(k-n)}{(n+1)!}\cdot \frac{n!}{k(k-1)\dotsm(k-n+1)}= \frac{k-n}{n+1} $$ Since the limit at $\infty$ of the ratio is $|x|$, the radius of convergence is $1$.
More precise expression of the answer So the problem says "Let $X$ be the sum of outcomes of rolling $2$ dice, where the outcome for each dice appears with equal probability. What is the pmf of $X$?" I got: $X$ could take on any value in the set $\{2,3,4,5,6,7,8,9,10,11,12\}$ The probability mass function is $$f(x) = \begin{cases}\frac1 {36} & \text{ if } x\in\{2,12\}\\\frac1 {18} & \text{ if } x\in\{3,11\}\\ \frac1 {12} & \text{ if } x\in\{4,10\}\\ \frac1 {9} & \text{ if } x\in\{5,9\}\\\frac5 {36} & \text{ if } x\in\{6,8\}\\\frac1 {6} & \text{ if } x=7\\ 0 & \text{ otherwise}\end{cases}$$ Is the answer correct? If so, is there any way I can write the answer in a more precise way instead of listing all the values of $x$?
Your answer is correct and precise; it may not be concise. For a more concise answer (which is not always possible, but is possible in this case) try to rewrite each fraction so it has the same denominator $36$, and try to see if there's a formula tying the numerator to the result(s) with that probability!
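Following that hint, the common denominator is $36$ and the numerators turn out to be $6-|x-7|$; a brute-force check in Python:

```python
from fractions import Fraction
from itertools import product

# pmf of the sum of two fair dice, by enumerating all 36 outcomes
pmf = {s: Fraction(0) for s in range(2, 13)}
for d1, d2 in product(range(1, 7), repeat=2):
    pmf[d1 + d2] += Fraction(1, 36)

# the concise closed form: f(x) = (6 - |x - 7|) / 36
for x in range(2, 13):
    assert pmf[x] == Fraction(6 - abs(x - 7), 36)
print(pmf[7])  # 1/6
```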
Prove trigonometric identity using combinatorics Prove that $\tan{nA}=\frac{\binom{n}{1}\tan{A}-\binom{n}{3}\tan^3{A}+\binom{n}{5}\tan^5{A}-\cdots}{\binom{n}{0}\tan^0{A}-\binom{n}{2}\tan^2{A}+\binom{n}{4}\tan^4{A}-\cdots}$ This is a question from the chapter permutations and combinations and I have no idea as to how to apply those concepts in this question and it would be great if I could get a hint...
Hint Perhaps this can help. $$\left(\frac{1 + i \tan A}{1 - i \tan A}\right)^n=\left(\frac{\cos A + i \sin A}{\cos A - i \sin A}\right)^n=\left(\frac{e^{iA}}{e^{-iA}}\right)^n=\frac{\cos nA + i \sin nA}{\cos nA - i \sin nA}=\frac{1+i \tan nA}{1-i \tan nA}$$
Differential equation for path in radial field Suppose that in a plane parallel to the yz-plane, the index of refraction $n$ is a function of the distance from the origin, $R$, i.e., $n = n(R)$. We know that (e.g., http://aty.sdsu.edu/explain/atmos_refr/invariant.html), that $n(R)R\sin\theta = \mathrm{constant}$ at every point, where $\theta$ is analogous to the "$z$" angles shown in the diagram: With the relation above I have the information to calculate $\theta$ at every point, but I'd like to recast it into a differential equation that gives me the path of a ray in terms of $y$ and $z$. Is it possible to find a form for $\frac{dz}{dy}$? I could write, for instance, $\theta = \tan^{-1}\left(\frac{z}{y}\right) - \tan^{-1}\left(\frac{dz}{dy}\right)$. Or would I have to parametrize $y$ and $z$? How would I do this? What is the easiest way of numerically finding the path?
I think it is better to work in polar coordinates and then convert back to Cartesian if necessary. Specifically, consider the $\{\log w\}$ plane (or rather, strip) for $w = y + iz$. The circles $|w|=\mathrm{const}$ will map to vertical lines $\mathop{\mathrm{Re}} \log w = \mathrm{const}$, the ray you need to find will map to... well, something, but because the logarithm is conformal, the angles between the two at each point will stay the same. Therefore, for $\log(y + iz) = \log R + i\phi$, you will get (schematically) $$n(R)R\sin(\arctan (d\phi/d\log R)) = C,$$ $$d\phi/d\log R = \tan\arcsin(C / n(R)R) = \frac{1}{\sqrt{[n(R)R/C]^2 - 1}}\,,$$ $$\frac{d\phi}{dR} = \frac{C}{R\sqrt{[n(R)R]^2 - C^2}}\,,$$ which looks a lot more tractable to me.
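For the numerical part of the question, the invariant can be turned directly into a quadrature: from $n(R)R\sin\theta = C$ and $\tan\theta = R\,d\phi/dR$ (with $\theta$ the angle between the ray and the radial direction) one gets $d\phi/dR = C/\bigl(R\sqrt{[n(R)R]^2-C^2}\bigr)$, which any one-dimensional integrator handles. A minimal midpoint-rule sketch in Python (the function name and the test profile $n\equiv 1$ are illustrative choices, not from the source):

```python
import math

def trace_ray(n_of_R, C, R0, R1, steps=100_000):
    """Integrate dphi/dR = C / (R * sqrt((n(R)*R)**2 - C**2))
    from R0 to R1 with the midpoint rule; returns the swept polar angle."""
    dR = (R1 - R0) / steps
    phi = 0.0
    for i in range(steps):
        R = R0 + (i + 0.5) * dR
        disc = (n_of_R(R) * R) ** 2 - C ** 2
        if disc <= 0:  # at or inside the turning point n(R)*R = C
            raise ValueError("ray does not reach this radius")
        phi += C / (R * math.sqrt(disc)) * dR
    return phi

# Sanity check with n = 1 (no refraction): the ray is then a straight line
# at perpendicular distance C from the origin, with phi(R) = arccos(C/R) + const.
swept = trace_ray(lambda R: 1.0, 1.0, 2.0, 10.0)
print(swept, math.acos(0.1) - math.acos(0.5))
```

Near the turning point the integrand has an integrable singularity, so an adaptive integrator (or a substitution removing the square root) is preferable there.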
Limit of $a_n = \sum\limits_{k=1}^{n} \left(\sqrt{1+\frac{k}{n^2}}-1\right)$ Given $a_n = \sum\limits_{k=1}^{n} \left(\sqrt{1+\frac{k}{n^2}}-1\right)$, find $\lim\limits_{n \to \infty} a_n$. My try: To simplify, $$a_n = \frac{\displaystyle\sum_{k=1}^{n}\sqrt{n^2+k}-n}{n}$$ and I'm stuck from there. In addition, I have made a program to find the limit, which says it's 1/4. Can anybody give me a hint to start? Thanks for your time!
Generalization Suppose $f$ is differentiable at $x=0$ with $f(0)=0$ and consider, for all $n\in\mathbb{N}^\star$ : $$S_n=\sum_{k=1}^nf\left(\frac{k}{n^2}\right)$$ Then we have : $$\lim_{n\to\infty}S_n=\frac{1}{2}f'(0)$$ It's relatively easy to prove this, using a Taylor expansion. We know that : $$f(x)=\underbrace{f(0)}_{=0}+xf'(0)+x\alpha(x)$$ with $\lim_{x\to 0}\alpha(x)=0$. Hence : $$S_n=\frac{f'(0)}{n^2}\sum_{k=1}^{n}k+\frac{1}{n^2}\sum_{k=1}^nk\,\alpha\left(\frac{k}{n^2}\right)$$ The first piece has limit $\frac{1}{2}f'(0)$ because $\sum_{k=1}^nk=\frac{n(n+1)}{2}$. We prove now that the second piece has limit $0$ : Given $\epsilon>0$, there exists $\delta>0$ such that : $$\forall x\in\mathbb{R},\vert x\vert\le\delta\implies\left|\alpha(x)\right|\le\epsilon$$ If $n$ is large enough (and $n\ge\frac{1}{\delta}$ is enough), we have : $$\forall k\in\{1,\cdots,n\},\,0\le\frac{k}{n^2}\le\frac{1}{n}\le\delta\;\mathrm{and}\;\mathrm{therefore}\;\left|\alpha\left(\frac{k}{n^2}\right)\right|\le\epsilon$$ so that : $$\left|\frac{1}{n^2}\sum_{k=1}^nk\,\alpha\left(\frac{k}{n^2}\right)\right|\le\frac{1}{n^2}\sum_{k=1}^nk\,\left|\alpha\left(\frac{k}{n^2}\right)\right|\le\frac{\epsilon}{n^2}\frac{n(n+1)}{2}=\frac{\epsilon(n+1)}{2n}\le\epsilon$$
Proof that N to the power 0 is 1 using square roots I wondered if using square roots to prove that $N^0=1$ is valid (where $N$ is any real number). The way I propose to do this is as follows: We consider when $x > 0$. If we do an iterative method: $x_2=\sqrt{x_1}$ $x_3=\sqrt{x_2}$ and so on... we get that this would tend towards 1, so $x_n=1$ as $n \rightarrow \infty $. This is equivalent to writing $$(x)^{\frac{1}{2}*\frac{1}{2}*...*\frac{1}{2}}$$ which tends towards $(x)^0$, and we know that this tends towards 1. Would this be valid? And how would one prove this for $x \leq 0$, and perhaps formulate it better than I have managed to?
Assuming that you are using one of the inequalities $$n\le\sqrt n\le1\text{ or }1\le\sqrt n\le n$$ to squeeze, you are indeed showing that $$\lim_{x\to0}n^x=1,$$ if the limit exists. But * *this is not sufficient to prove that the limit exists (as you just use the particular exponents $x=2^{-k}$), *this does not "prove" $n^0=1$, which is a pure matter of convention, but proves that the function $n^x$ is continuous at $0$ when you admit that $n^0:=1$.
Find $xyz$ given that $x + z + y = 5$, $x^2 + z^2 + y^2 = 21$, $x^3 + z^3 + y^3 = 80$ I was looking back in my junk, then I found this: $$x + z + y = 5$$ $$x^2 + z^2 + y^2 = 21$$ $$x^3 + z^3 + y^3 = 80$$ What is the value of $xyz$? A) $5$ B) $4$ C) $1$ D) $-4$ E) $-5$ It's pretty easy, any chances of solving this question? I already have the answer for this, but I didn't fully understand. Thanks for the attention.
\begin{align} x + z + y &= u=5 \tag{1}\label{1} ,\\ x^2 + z^2 + y^2 &= v=21 \tag{2}\label{2},\\ x^3 +z^3 + y^3 &= w=80 \tag{3}\label{3}. \end{align} What is the value of $xyz$? Surprisingly, the Ravi substitution works in this case, despite that not all the numbers $x,y,z$ are positive, and hence, the corresponding triangle is "unreal". So, let \begin{align} a &= y + z ,\quad b = z + x ,\quad c = x + y \tag{4}\label{4} ,\\ x&=\rho-a ,\quad y=\rho-b ,\quad z=\rho-c \tag{5}\label{5} , \end{align} where the triplet $a, b, c$ represents the sides of a triangle with semiperimeter $\rho$, inradius $r$ and circumradius $R$. Then \begin{align} x + z + y &= \rho \tag{6}\label{6} ,\\ x^2 + z^2 + y^2 &= \rho^2-2(r^2+4rR) \tag{7}\label{7} ,\\ x^3 + z^3 + y^3 &= \rho(\rho^2-12rR) \tag{8}\label{8} ,\\ xyz&=\rho\,r^2 \tag{9}\label{9} . \end{align} Excluding $rR$ from \eqref{7}-\eqref{8}, we get \begin{align} r^2&= \tfrac16\,\rho^2-\tfrac12\,v+\tfrac13\,\frac w{\rho} \tag{10}\label{10} , \end{align} and from \eqref{9} we have the answer \begin{align} xyz&= \tfrac16\,\rho^3-\tfrac12\,v\rho+\tfrac13\,w = \tfrac16\,5^3-\tfrac12\,21\cdot5+\tfrac13\,80 =-5 \tag{11}\label{11} . \end{align} As a bonus, we can find that \begin{align} rR &= \tfrac1{12}\,\frac{\rho^3-w}{\rho} \tag{12}\label{12} \end{align} and $x=\rho-a,\ y=\rho-b,\ z=\rho-c$ are the roots of cubic equation \begin{align} x^3-\rho\,x^2+(r^2+4rR)\,x-\rho r^2&=0 \tag{13}\label{13} ,\\ \text{or }\quad x^3-\rho\,x^2+\tfrac12\,(\rho^2-v)\,x-\tfrac13\,w-\tfrac16\,\rho\,(\rho^2-3v)&=0 \tag{14}\label{14} ,\\ x^3-5\,x^2+2\,x+5&=0 \tag{15}\label{15} . \end{align} One of the roots of \eqref{15} is \begin{align} x &= \tfrac53+ \tfrac23\,\sqrt{19}\,\cos\Big(\tfrac13\,\arctan(\tfrac9{25}\,\sqrt{331})\Big) \approx 4.253418 \tag{16}\label{16} ,\\ \text{the other two are }\quad y&\approx -0.773387 ,\quad z\approx 1.519969 \tag{17}\label{17} . \end{align}
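As a numerical sanity check, the three roots of the cubic $x^3-5x^2+2x+5=0$ from (15) should reproduce the three given power sums and have product $-5$. A quick check with NumPy (assuming NumPy is available):

```python
import numpy as np

# roots of x^3 - 5x^2 + 2x + 5 (three real roots, cf. (16)-(17))
r = np.roots([1, -5, 2, 5])

print(np.sum(r))     # ≈ 5
print(np.sum(r**2))  # ≈ 21
print(np.sum(r**3))  # ≈ 80
print(np.prod(r))    # ≈ -5
```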
Generating function for counting problem. How many 3 letter words can be formed using the letters of the word “TESTBOOK”? I'm clueless to solve this question using generating function, can anyone please help me or at least provide some hint.
The generating function is $f(z)=\left(1+z+\frac{z^2}{2!}\right)^2\cdot (1+z)^4$ The first two factors represent the repeated letters $t$ and $o$. The other four factors represent $e,s,b,k$. If you expand it you get: $f(z)=1+\frac{6}{1!}z+\frac{32}{2!}z^2+\frac{150}{3!}z^3+\frac{606}{4!}z^4+...$ There can be formed $6$ words with $1$ letter, $32$ words with $2$ letters, and $150$ words with $3$ letters.
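These coefficients can be double-checked by brute force, enumerating distinct ordered selections from the multiset of letters:

```python
from itertools import permutations

# permutations() treats equal letters at different positions as distinct,
# so collecting the results in a set deduplicates repeated T's and O's.
letters = "TESTBOOK"
for r in (1, 2, 3):
    print(r, len(set(permutations(letters, r))))
# 1 6
# 2 32
# 3 150
```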
No lattice point dominates another Suppose that $A,B$ are distinct lattice points in $\mathbb Z^n$. We say that $A$ dominates $B$ if all the components of $A-B$ are non-negative. Given positive integers $a_1,a_2,\dots ,a_n$, let $S$ be a set of lattice points in the integer lattice $L=[0,a_1]\times[0,a_2]\times\dots\times[0,a_n]$ such that no element of $S$ dominates another element. What is the maximal value of $|S|$? I encountered this problem in the context of posets and Dilworth's theorem. If we consider domination as a partial order on the elements of $L$, then by Dilworth's theorem, we need to find the minimal number of chains whose union is $L$. However, I'm unsure how to do this.
We are asked for a formula for the maximum size of an antichain in the product of $n$ chains, $L=[0,a_1]\times\cdots\times[0,a_n]$. The lattice $L$ is graded, rank unimodal, and has the Sperner property. These facts reduce this problem to a counting problem where inclusion/exclusion is applicable. To define terms, the statement that $L$ is ranked means that all maximal chains from a given $x\in L$ to the least element of $L$ have the same length, which is called the rank of $x$. Let $L_i$ be the set of elements of $L$ of rank $i$. The statement that $L$ is rank unimodal means that $$|L_0|\leq |L_1|\leq \cdots \leq |L_j|\geq |L_{j+1}|\geq \ldots\geq |L_K|$$ for some $j$, where $K=\sum a_i$. That is, the sizes of the $L_i$'s increase monotonically to some maximum, then decrease monotonically. Finally, $L$ is Sperner if the largest $L_i$ is a maximal antichain. Sperner proved in 1928 that in a product $L$ of $n$ chains which each have length 2 a maximal antichain is a middle rank, which has size $\binom{n}{\lfloor n/2\rfloor}$. In de Bruijn, N. G.; van Ebbenhorst Tengbergen, Ca.; Kruyswijk, D. On the set of divisors of a number. Nieuw Arch. Wiskunde (2) 23, (1951). 191-193. one finds that, in any product $L=[0,a_1]\times\cdots\times[0,a_n]$ of $n$ chains, some middle rank is a maximal antichain. That is, if $K=\sum a_i$, then $L_{\lfloor K/2\rfloor}$ is a maximal antichain in $L$. This problem is therefore the problem of computing $|L_{\lfloor K/2\rfloor}|$. This can be done through inclusion/exclusion. The number of $(x_1,\ldots,x_n)\in L_{\lfloor K/2\rfloor}$ is the number of solutions to $$x_1+x_2+\cdots+x_n = {\lfloor K/2\rfloor}$$ subject to the restrictions that $0\leq x_i\leq a_i$. Introduce some notation: $R_i$ is the condition $x_i>a_i$. It represents the failure of the restriction $0\leq x_i\leq a_i$. $R = \{R_1,\ldots,R_n\}$.
If $S\subseteq R$, then $N(S)$ is the number of solutions to $x_1+x_2+\cdots+x_n = {\lfloor K/2\rfloor}$ for which $R_i$ is true for every $R_i\in S$. (If $S=\emptyset$, then $N(S)$ is the number of solutions to $x_1+x_2+\cdots+x_n = {\lfloor K/2\rfloor}$ in nonnegative integers, with no further restrictions on the $x_i$'s. If $S = \{R_1\}$, then $N(S)$ is the number of solutions to $x_1+x_2+\cdots+x_n = {\lfloor K/2\rfloor}$ in nonnegative integers, with $x_1>a_1$. In this case, $N(S)$ is the number of solutions with $x_1$ too big.) The final answer is $N(\emptyset) - \sum_{i} N(\{R_i\}) + \sum_{i,j} N(\{R_i,R_j\})-\cdots$. (Inclusion/Exclusion formula.) If $t=\lfloor K/2\rfloor$, then $N(\emptyset)=\binom{n-1+t}{n-1}$, $N(\{R_i\})=\binom{n-1+(t-a_i-1)}{n-1}$, $N(\{R_i, R_j\})=\binom{n-1+(t-a_i-a_j-2)}{n-1}$, etc. (Here I am using that the number of nonnegative integer solutions to $x_1+x_2+\cdots+x_n = t$ is $\binom{n-1+t}{n-1}$.) Let me write out the answer explicitly in the first nontrivial case, namely the case where $L=[0,a_1]\times [0,a_2]\times [0,a_3]$ and $t=\lfloor K/2\rfloor=\lfloor (a_1+a_2+a_3)/2\rfloor$. The answer is $$\binom{2+t}{2}-\left(\binom{2+(t-a_1-1)}{2}+\binom{2+(t-a_2-1)}{2}+\binom{2+(t-a_3-1)}{2}\right)+\left(\binom{2+(t-a_1-a_2-2)}{2}+\binom{2+(t-a_1-a_3-2)}{2}+\binom{2+(t-a_2-a_3-2)}{2}\right)-\binom{2+(t-a_1-a_2-a_3-3)}{2}.$$ Some of these terms must be zero, but I left them in to show the pattern.
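The inclusion/exclusion count is easy to mechanize. A Python sketch (the function name is ad hoc), treating each $N(S)$ as zero when the remaining total is negative, together with a brute-force check on small cases:

```python
from itertools import combinations, product
from math import comb

def max_antichain(a):
    """Size of the middle rank of [0,a1] x ... x [0,an], by inclusion/exclusion."""
    n, t = len(a), sum(a) // 2
    total = 0
    for k in range(n + 1):
        for S in combinations(a, k):          # force x_i > a_i for i in S
            m = t - sum(S) - k                # remaining coordinate sum
            if m >= 0:
                total += (-1) ** k * comb(n - 1 + m, n - 1)
    return total

# brute-force check: count points of L with coordinate sum floor(K/2)
a = (2, 3, 4)
t = sum(a) // 2
brute = sum(1 for x in product(*(range(ai + 1) for ai in a)) if sum(x) == t)
print(max_antichain(a), brute)  # 11 11
```

For $a=(1,\dots,1)$ of length $n$ this reproduces Sperner's value $\binom{n}{\lfloor n/2\rfloor}$.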
Jacobson radical of upper triangular matrix rings I suspect that the Jacobson radical of the ring $$A = \begin{bmatrix} R & M \\ 0 & S \\ \end{bmatrix}$$ should be $$\mathfrak{J}(A) = \begin{bmatrix} \mathfrak{J}(R) & M \\ 0 & \mathfrak{J}(S) \\ \end{bmatrix}$$ I know how to classify the left/right/two-sided ideals of this ring (T.Y. Lam, Chapter 1, Page 18) but I don't know how to find the maximal ones! I'd like to see a proof of my conjecture if it's indeed true. EDIT: I believe that this is not a duplicate of the linked question because first of all, this question is more general. Secondly, it requests the proof for something that is worth having its own separate question and thirdly, there's no proof in that linked question.
You can confirm that if $m(R)$ is any maximal right ideal of $R$ and $m(S)$ is any maximal right ideal of $S$, then $\begin{bmatrix}m(R)&M\\0&S\end{bmatrix}$ and $\begin{bmatrix}R&M\\0&m(S)\end{bmatrix}$ are maximal right ideals of $A$. The intersection of these as we range over the maximal right ideals of both rings is precisely $\begin{bmatrix}J(R)&M\\0&J(S)\end{bmatrix}$.
What points in the complex plane satisfy $|z|=arg(z)$. Question: What points in the complex plane satisfy $|z|=\arg(z)$? Work thus far: Let $z=a+bi$ and $\arg(z)$ is the angle between the vector $z$ and the real axis. So $|z|=\sqrt{a^2+b^2}$ and $a=|z|\cos(\theta),b=|z|\sin(\theta)$. I would expect from intuition that the shape traced out by the points is a spiral. From here I don't know how to proceed. Any hints would be appreciated.
Try thinking the other way: Fix some $\theta_0$ and imagine a line starting at the origin and making an angle $\theta_0$ with the real axis. How many points $z $, in that line, have modulus $|z|$ equal to $\theta_0$? Can you write an expression for it/them?
$k$-regular graph Let $G$ be a $k$-regular graph with $m$ edges and $k$ odd. Prove that $k\mid m$. We can see this statement is true by example, but how can we prove it?
HINT: Use the handshaking lemma: if $V$ is the vertex set of $G$, then $\sum_{v\in V}\deg v=2m$. Say there are $n$ vertices; what is $\sum_{v\in V}\deg v$ in terms of $n$ and $k$?
Why doesn't $(x+y)^2 $equal $x^2+y^2$? I need to explain this to someone. I know obviously the expanded form gives you $x^2 + 2xy + y^2$ but technically don't the individual exponents multiply to give $x^2 + y^2$ I might say I'm looking for an interesting geometrical explanation for this. Any help is appreciated!
If $x=y=1$, then it should be obvious that $$(1+1)^2=2^2=4$$ $$1^2+1^2=1+1=2$$ So it should be trivial that $$(1+1)^2\ne1^2+1^2$$ And it cannot hold in general if it fails for at least one case.
Proving that $\cos(\arcsin(x))=\sqrt{1-x^2}$ I am asked to prove that $\cos(\arcsin(x)) = \sqrt{1-x^2}$ I have used the trig identity to show that $\cos^2(x) = 1 - x^2$ Therefore why isn't the answer denoted with the plus-or-minus sign? as in $\pm \sqrt{1-x^2}$. Thank you!
$\cos^2+\sin^2=1\;$ gives $\;\cos(\arcsin(x))=\pm\sqrt{1-\sin^2(\arcsin(x))}=\pm\sqrt{1-x^2}$, and the $+$ sign is the right one: $\arcsin$ takes values in $[-\pi/2,\pi/2]$, where the cosine is nonnegative, so $\cos(\arcsin(x))\ge0$ and hence $\cos(\arcsin(x))=\sqrt{1-x^2}$.
How to plot graph online Need help plotting $0\leq t\leq 2\pi$, $z(t)=e^{(1+i)t}$ and $z(t)=e^{(-1+i)t}.$ How can I plot them online or any software that I should use to get the graph of such curves? I am familiar with Wolfram but not able to get this.
Specify that you want a parametric plot and explicitly break it up into the real and imaginary parts like so: parametric plot (Re(exp((-1+i)*t)), Im(exp((-1+i)*t)), t=0..2pi) That input yields the desired curve; for $z(t)=e^{(-1+i)t}$ it is a spiral winding in toward the origin.
A question on positive elements bigger than $1_A$ in a unital C*-algebra $A$ I want to show the following statement but I do not know how. Let $A$ be a unital C*-algebra. Let $a$ be a positive element such that $a\geq 1_A$. Show that there exists $r$ in $A$ with $1_A=rar^*$. Thanks for all helps!
Since $1\leq a$, we know $a-1$ is positive, so $\sigma(a-1)\subset[0,\infty)$, thus $\sigma(a)\subset[1,\infty)$ and $a$ is invertible (here $\sigma(a)$ denotes the spectrum of $a$). Hence its positive square root $a^{1/2}$ is also invertible. Thus, we have $$ 1=1\cdot1=\left(a^{-1/2}a^{1/2}\right)\cdot\left(a^{1/2}a^{-1/2}\right) =a^{-1/2}\cdot\left(a^{1/2}a^{1/2}\right)\cdot a^{-1/2}=a^{-1/2}\cdot a\cdot a^{-1/2} $$ If we let $r=a^{-1/2},$ then $r=r^*$ and $$ 1=rar=rar^*. $$
Linear equation with different variable on the denominator? I have this given problem. That asking me to solve for $x$. Although this example has answered. I've had troubles on a certain part. Here's the equation with answer \begin{align*} \frac{2x-a}b &= \frac{4x-b}a\\ a(2x-a) &= b(4x-b) \\ 2ax-a^2 &= 4bx - b^2 \\ 2ax-4bx &= a^2 - b^2 \\ x(2a-4b) &= a^2 - b^2 \\ x&=\frac{a^2-b^2}{2a-4b} \end{align*} How did we arrive to $$a(2x -a) = b(4x-b)?$$
Maybe we need to write again the equation: $$\frac{2x-a}{b}=\frac{4x-b}{a}.$$ Using the Multiplication Property of Equality, we get $$ab\cdot\left(\frac{2x-a}{b}\right)=ab\cdot\left(\frac{4x-b}{a} \right).$$ Simplifying, we get $$\frac{a\cdot b\cdot (2x-a)}{b}=\frac{a\cdot b\cdot (4x-b)}{a}.$$ Apply Cancellation Law in Multiplication (meaning we can cancel $b$ at the left hand side and same to $a$ at the right hand side), we get $$a\cdot(2x-a)=b\cdot(4x-b).$$ Hope this help.
no of ways to fill a row (1xN grid) with a set of 1D bars? Given a row of length $N$, and a set of 1D bars having lengths $A[1...M]$, how many ways I can fill the row? A is an integer array, the bars are having dimensions $\{ 1\times A_1, 1\times A_2, 1\times A_3,..., 1\times A_M \}$ The row can also be considered as $1\times N$ grid. 2 Bars with equal length has to be considered as $2$ distinct bar not $1$. Bars are same from either direction, so just by reversing bars there wont be any new arrangement I thought solving the problem by converting it into : Let, Each space is an element of $1\times 1$, and all spaces are indistinguishable and similar. No of space $K=N-sum(A_1,A_2,...,A_M)$ Now the answer will be No of arrangements of $M$ distinct elements and $K$ similar elements. But again with no hope as could not find any solution for both the problems. **Examples : ** $N=3, A=\{1,1\} $ $ M=2, K=1$ $Ans = 6 $ $N=3, A=\{1,2\} $ $ M=2, K=0$ $Ans = 2 $ $N=4, A=\{1,2\} $ $ M=2, K=1$ $Ans = 6 $ $N=3, A=\{2\} $ $ M=1, K=1$ $Ans = 2 $ $N=10, A=\{7\} $ $ M=1, K=3$ $Ans = 4 $
I found one easy approach: convert the problem further to the number of distinct permutations of $M$ distinct letters (the bars) and $K$ letters (the spaces) of the same type. Here the total number of letters is $n=M+K$, and the single repeated category has $n_1=K$ letters. Hence the total number of distinct permutations is $\frac{(\sum_i n_i)!}{\prod_i n_i!}=\frac{(M+K)!}{K!}$.
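A sketch in Python checking the formula $(M+K)!/K!$ against all the examples in the question by direct enumeration (the helper names are ad hoc):

```python
from math import factorial
from itertools import permutations

def count_fillings(N, A):
    """Ways to fill a 1xN row with distinct bars of lengths A,
    the remaining cells being indistinguishable 1x1 gaps."""
    K = N - sum(A)
    if K < 0:
        return 0
    M = len(A)
    return factorial(M + K) // factorial(K)

def brute(N, A):
    """Brute force: permute M labelled bars and K identical gaps, dedup."""
    K = N - sum(A)
    items = [('bar', i) for i in range(len(A))] + [('gap',)] * K
    return len(set(permutations(items)))

for N, A in [(3, (1, 1)), (3, (1, 2)), (4, (1, 2)), (3, (2,)), (10, (7,))]:
    assert count_fillings(N, A) == brute(N, A)
print(count_fillings(10, (7,)))  # 4
```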
Triangulation of Polytope It seems to me that, in two dimensions, a triangulation of a polytope with odd amount of vertices gives us an odd amount of simplices while a triangulation of a polytope with even amount of vertices gives us an even amount of simplices. (We can add vertices to the boundary of $P$ during the triangulation, but then they must be counted as vertices of $P$.) See the following images for example. Just count the vertices on the boundary of the polytope and the number of simplices that it is composed of. Does anyone know if this claim is true? If yes, where can I find a proof for it? Otherwise, a counterexample would be nice. Thanks.
This is a simple proof that, in two dimensions, a triangulation of a polytope with odd amount of vertices gives us an odd amount of simplices while a triangulation of a polytope with even amount of vertices gives us an even amount of simplices. You have a plane graph, bounded by a cycle $C$, in which every internal region has three edges. Let $C$ have length $k$, and let there be $n-k$ vertices in the interior, so n vertices altogether. Let this graph be $G$. If we add another vertex and make it adjacent to every vertex of $C$ we obtain a maximal planar graph $H$. It is known (an easy consequence of Euler's formula) that every maximal planar graph with $p>3$ vertices has exactly $3p-6$ edges and $2p-4$ regions. $H$ has $n+1$ vertices, so it has $3n-3$ edges and $2n-2$ regions. But $H$ has $k$ regions more than $G$, so $G$ has $2n-2-k$ regions (not counting the infinite region). This is even if and only if $k$ is even.
Which of the integers cannot be formed with $x^2+y^5$ So, I was asked by my teacher in school to solve this problem it really had me stumped.The problem is as follows:Given that $x$ and $y$ are integers, which of the following cannot be expressed in the form $x^2+y^5$? $1.)\ 59170$ $2.)\ 59012$ $3.)\ 59121$ $4.)\ 59149$ $5.)\ 59130$ Is it possible for an elegant solution and not tedious trial and error?
Since $9^5=59049$ the choice $y=9$ can obtain 1.), 4.), 5.) with $x=11,\,10,\,9$. Similarly, $x=162,\,y=8$ addresses 2). Proving 3) is impossible requires only a modulo $11$ analysis; note that $59121\equiv 7\pmod{11}$. Squares mod $11$ are $0,1,4,9,5,3$. Since $11$ is prime, nonzero fifth powers square to $1$ by Fermat's Little Theorem, so fifth powers mod $11$ are $0,1,10$. Notice that no sum of a square and a fifth power is $\equiv 7\pmod{11}$.
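The residue bookkeeping is quick to verify by machine:

```python
squares = {x * x % 11 for x in range(11)}
fifths = {pow(y, 5, 11) for y in range(11)}
sums = {(s + f) % 11 for s in squares for f in fifths}

print(sorted(squares))  # [0, 1, 3, 4, 5, 9]
print(sorted(fifths))   # [0, 1, 10]
print(59121 % 11)       # 7
assert 59121 % 11 not in sums
# the four other options are attainable, e.g. 59170 = 11^2 + 9^5
for n in (59170, 59012, 59149, 59130):
    assert n % 11 in sums
```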
Show that $\overline{f(z)}$ is holomorhpic in $D(0;1)$ if and only if $f$ is constant. Let $f$ be holomorphic in $D(0;1)$. Show that $\overline{f(z)}$ is holomorhpic in $D(0;1)$ if and only if $f$ is constant. It is clear to me that if $f$ is constant then $\overline{f(z)}$ is holomorphic since the Cauchy-Riemann equations will be satisfied. However, I'm not sure about how to show the other direction. How do I do that?
There are several ways to show that. The most elementary is probably to look at the Cauchy-Riemann equations again. A little bit more advanced, but shorter would be: If $\overline f$ is holomorphic, then so are $f+\overline f$ and $f-\overline f$. But these are $\mathbb R$-valued and $i\mathbb R$-valued functions respectively, hence not open mappings. Then they must both be constant. Now $f$ is the sum of constant functions.
Proving that a form is of the type $\xi \wedge \eta$ Let $V $ be a vector space. Let $\xi$ be a non zero $1$ form. I want to show that if some $k$ form $\omega$ satisfies $\xi \wedge \omega=0$ then it is of the type $\omega=\xi \wedge \eta$. I have been able to prove this in case $\xi$ is one of the $dx_i$ but I am unable to prove it when it is a sum of these. Any hints will be appreciated. Thanks.
If $\xi$ is a non-zero one form, you can complete $\{\xi\}$ into a basis of $\Lambda^1(V)$ and write $\omega$ in this basis, which brings you back to the case $\xi = dx_i$.
Summation of $\arcsin $ series. What is $a $ if $$\sum _{n=1} ^{\infty} \arcsin \left(\frac {\sqrt {n}-\sqrt {n-1}}{\sqrt {n (n+1)}}\right) =\frac {\pi }{a} \,?$$ Attempt: What I tried is to convert the series to $\arctan$ and then convert it telescoping series. So in terms of $\arctan $ it becomes $$\arctan \left(\frac {\sqrt {n}-\sqrt {n-1}}{\sqrt {n}+\sqrt {n-1}}\right) $$ but now if I divide by $n$ it simplifies as $n\frac {\pi}4-\sum _1^{\infty} \arctan \left(\frac {\sqrt {n-1}}{\sqrt {n}}\right) $ but as $n$ is tending towards infinity it will lead to infinity which seems wrong. Also note that $a$ is an integer . Thanks!
Taking the principal branch of $\arcsin$ (with values in $\bigl[-\frac{\pi}{2}, \frac{\pi}{2}\bigr]$), we have $$\tan\bigl(\arcsin s\bigr) = \frac{\sin \bigl(\arcsin s\bigr)}{\cos \bigl(\arcsin s\bigr)} = \frac{s}{\sqrt{1 - s^2}}.$$ With $s = \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}}$, we get \begin{align} 1 - s^2 &= 1 - \frac{(\sqrt{n} - \sqrt{n-1})^2}{n(n+1)}\\ &= \frac{n^2 + n - (n - 2\sqrt{n(n-1)} + n-1)}{n(n+1)}\\ &= \frac{n(n-1) + 2\sqrt{n(n-1)} + 1}{n(n+1)}\\ &= \frac{(1 + \sqrt{n(n-1)})^2}{n(n+1)}, \end{align} and so \begin{align} \tan \biggl(\arcsin \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}}\biggr) &= \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}}\cdot \frac{\sqrt{n(n+1)}}{1 + \sqrt{n(n-1)}} \\ &= \frac{\sqrt{n} - \sqrt{n-1}}{1 + \sqrt{n} \sqrt{n-1}} \\ &= \tan\bigl(\arctan \sqrt{n} - \arctan \sqrt{n-1}\bigr), \end{align} whence we obtain $$\sum_{n = 1}^{\infty} \arcsin \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}} = \sum_{n = 1}^\infty \bigl( \arctan \sqrt{n} - \arctan \sqrt{n-1}\bigr) = \frac{\pi}{2}.$$
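The telescoping is easy to confirm numerically: the $N$-th partial sum equals $\arctan\sqrt N$, which tends to $\pi/2$, so $a=2$.

```python
import math

def partial_sum(N):
    """N-th partial sum of the arcsin series."""
    return sum(math.asin((math.sqrt(n) - math.sqrt(n - 1))
                         / math.sqrt(n * (n + 1))) for n in range(1, N + 1))

# partial sums telescope to arctan(sqrt(N)) ...
for N in (10, 100, 1000):
    assert abs(partial_sum(N) - math.atan(math.sqrt(N))) < 1e-12
# ... which approaches pi/2 slowly, like 1/sqrt(N)
print(partial_sum(1000), math.pi / 2)
```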
If $j$ is the inclusion $A\hookrightarrow X$ then $j_{*}:H_1(X)\to H_1(X,A)$ is surjective. I need to prove that: Let $X$ be a path-connected topological space and $A\subset X$. Given $j:X\hookrightarrow (X,A), x\mapsto j(x)=x$, prove that $j_{*}:H_1(X)\to H_1(X,A)$ is surjective. Idea: Show that the diagram $A\stackrel{i}{\hookrightarrow} X\stackrel{j}{\hookrightarrow}(X,A)$ induces the following exact sequence: $$0\to H_1(A)\to H_1(X)\to H_1(X,A)\to 0$$ Is there any theorem that affirms this fact?
This is not true in general. Take $A=\partial I$, $X=I$, where $I=[0,1]$. If $A$ is path-connected, then it is true. It can be seen to follow from the reduced homology long exact sequence: $$\cdots \to H_1(A) \to H_1(X) \to H_1(X,A) \to \widetilde{H_0}(A)=0 \to \widetilde{H_0}(X) \to \cdots$$
Discrete question. Prove that two propositions are logically equivalent. I have a question about biconditional equivalences and how to prove that two propositions are logically equivalent. The question states: show that $\neg\;(p \leftrightarrow q)$ and $p \leftrightarrow\neg\; q$ are logically equivalent. I tried multiple approaches using several laws on the left side or the right side, but I can't find the right path that leads to the answer. Any ideas? Please help. Thanks in advance!
$p \leftrightarrow q$ is true iff $p$ and $q$ have the same truth-value, and thus: $\neg(p \leftrightarrow q)$ iff $p$ and $q$ do not have the same truth-value iff $p$ and $q$ have opposite truth-values iff $p$ and $\neg q$ have the same truth-value iff $p \leftrightarrow \neg q$
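The argument above can be confirmed by an exhaustive truth table (a Python sketch; for booleans, `==` plays the role of $\leftrightarrow$):

```python
# Exhaustive truth-table check that ¬(p ↔ q) and p ↔ ¬q agree on all four rows.
rows = []
for p in (True, False):
    for q in (True, False):
        lhs = not (p == q)      # ¬(p ↔ q)
        rhs = p == (not q)      # p ↔ ¬q
        rows.append((p, q, lhs, rhs))

equivalent = all(lhs == rhs for _, _, lhs, rhs in rows)
print(equivalent)  # → True
```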
determine chromatic polynomial I need to determine the chromatic polynomial of the following graph: I know that $P(k) = k(k − 1)(k − 2)· · ·(k − n + 1)$, so every vertex has one color, so I assume that the total is 10 different colors. But I do not know how to find the polynomial — do I need to change the graph? Any help is appreciated. Thanks
The formula you propose is only true for complete graphs, so it does not hold here. Decompose your graph $G$ into three graphs $G_1,G_2,G_3$ where * *$G_1$ is the cycle of length $4$ on the left (vertices $\{i,g,h,j\}$), with chromatic polynomial $$ P_{G_1}(k)=(k-1)^4+(-1)^4(k-1) $$ *$G_2$ is the tree in the middle (vertices $\{g,f,e\}$), with chromatic polynomial $$ P_{G_2}(k)=k(k-1)^2 $$ *$G_3$ is the cycle of length $5$ on the right (vertices $\{a,b,c,d,e\}$), with chromatic polynomial $$ P_{G_3}(k)=(k-1)^5+(-1)^5(k-1) $$ Then, use the fact that if two graphs $G$ and $H$ share a single vertex, the chromatic polynomial of $G \cup H$ is given by $$ P_{G\cup H}= \frac{P_G(k)P_H(k)}{k} $$ If I am not mistaken, you should get $$ \boxed{ P_{G=G_1\cup G_2 \cup G_3} = \frac{(k-1)^{11}+(k-1)^{8}-(k-1)^{7}-(k-1)^{4}}{k} } $$ To check if your answer is sensible, remember that the chromatic number of $G$ is the smallest integer that is not a root of the polynomial. Here, the real roots are $0$, $1$ and $2$. It is straightforward to color the graph with $3$ colors, and since $G_3$ is a pentagon, at least three colors are necessary to color the graph. Therefore, this is in accordance with our chromatic polynomial.
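The boxed polynomial can be cross-checked by brute force for small $k$ (a sketch; the exact ordering of the 4-cycle on $\{g,h,i,j\}$ is an assumption from the figure, but any 4-cycle gives the same polynomial):

```python
from itertools import product

# Vertices 0..9 stand for a,b,c,d,e,f,g,h,i,j.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # pentagon a-b-c-d-e
         (4, 5), (5, 6),                           # path e-f-g
         (6, 7), (7, 8), (8, 9), (9, 6)]           # 4-cycle on g,h,i,j
n = 10

def count_colorings(k):
    # Brute-force count of proper k-colorings.
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

def formula(k):
    return ((k - 1)**11 + (k - 1)**8 - (k - 1)**7 - (k - 1)**4) // k

agree = all(count_colorings(k) == formula(k) for k in (2, 3))
```

With $k=2$ both give $0$ (the pentagon needs three colors), and with $k=3$ both give $720$.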
How prove $\left(1+\frac{4a}{b+c}\right)\left(1+\frac{4b}{c+a}\right)\left(1+\frac{4c}{a+b}\right)\ge 25$ Let $a,b,c>0$. Show that $$\left(1+\dfrac{4a}{b+c}\right)\left(1+\dfrac{4b}{c+a}\right)\left(1+\dfrac{4c}{a+b}\right)\ge 25.$$ It seems hard to prove with AM-GM or Cauchy–Schwarz.
Clearing denominators, the inequality is equivalent to $$a^{3}+b^3+c^3+7abc\geq ab(a+b)+bc(b+c)+ca(c+a),$$ which holds (strictly, since $abc>0$) by Schur's inequality: $$a^{3}+b^3+c^3+7abc> a^{3}+b^3+c^3+3abc\geq ab(a+b)+bc(b+c)+ca(c+a)$$
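A quick random sanity check (a Python sketch, not a proof; the sampling range is arbitrary) — the product never drops below $25$, and equality is approached on the boundary, e.g. $a=b$, $c\to 0$:

```python
import random

def lhs(a, b, c):
    return (1 + 4*a/(b + c)) * (1 + 4*b/(c + a)) * (1 + 4*c/(a + b))

random.seed(0)
# Random positive triples: the product should never be below 25.
worst = min(lhs(*(random.uniform(1e-3, 10) for _ in range(3)))
            for _ in range(10_000))

# Equality is approached as one variable tends to 0 with the other two equal.
near_equality = lhs(1, 1, 1e-9)
```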
Find $\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\mathrm dx$ Recently, I came across the integral below: \begin{align*} \int_{0}^{\infty }\frac{\sin x-\sin x^2}{x}\mathrm{d}x&=\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x-\int_{0}^{\infty }\frac{\sin x^{2}}{x}\mathrm{d}x\\ &=\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x-\frac{1}{2}\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x\\ &=\frac{1}{2}\int_{0}^{\infty }\frac{\sin x}{x}\mathrm{d}x=\frac{\pi }{4} \end{align*} The same approach doesn't seem to work for $$\int_{0}^{\infty }\frac{\cos x-\cos x^2}{x}\mathrm dx$$ — but why? And how can one evaluate it? Thx!
We begin by noting that we can write the integral of interest as $$\begin{align} \int_0^\infty \frac{\cos(x)-\cos(x^2)}{x}\,dx&=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\int_0^\infty\frac{\sin(x)-\sin(x^2)}{x}\,dx\\\\ &=\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 \tag 1 \end{align}$$ Using Cauchy's Integral Theorem, we can write the right-hand side of $(1)$ as $$\int_0^\infty \frac{e^{ix}-e^{ix^2}}{x}\,dx-i\pi/4 =\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx \tag 2$$ Integrating by parts the integral on the right-hand side of $(2)$ with $u=e^{-x}-e^{-x^2}$ and $v=\log(x)$ reveals $$\begin{align} \int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx &=\int_0^\infty \log(x) e^{-x}\,dx-\int_0^\infty 2x\log(x)e^{-x^2}\,dx\\\\ &=\frac12 \int_0^\infty e^{-x}\log(x)\,dx\\\\ &=-\frac12\gamma \end{align}$$ And we are done! NOTE: In the note at the end of THIS ANSWER, I showed that $\gamma$ as given by $\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$ is equal to $\gamma$ as expressed by the limit $\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$. EDIT: A SECOND METHODOLOGY: It is straightforward to show that for $t>0$ $$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=0\tag3$$ Simply enforce the substitution $x^t\mapsto x$ to reduce the integral in $(3)$ to $$\int_0^\infty \frac{\cos(x^t)-e^{-x^t}}{x}\,dx=\frac1t \int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx$$ Then, exploiting a property of the Laplace Transform, we can write $$\int_0^\infty \frac{\cos(x)-e^{-x}}{x}\,dx=\int_0^\infty \left(\frac{x}{x^2+1}-\frac{1}{x+1}\right)\,dx$$ which is easily evaluated as $0$. Hence, we can write $$\int_0^\infty \frac{\cos(x^2)-\cos(x)}{x}\,dx=\int_0^\infty \frac{e^{-x^2}-e^{-x}}{x}\,dx\tag4$$ The integral on the the right-hand side of $(4)$ is identical to the integral on the the right-hand side of $(2)$, from which we obtain the previous result!
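The real-integral form $\int_0^\infty \frac{e^{-x}-e^{-x^2}}{x}\,dx=-\frac{\gamma}{2}$ can be checked numerically (a sketch; the truncation at $x=40$ and the use of Simpson's rule are my choices — the tail beyond $40$ is of order $e^{-40}/40$ and negligible):

```python
import math

def g(x):
    # Integrand (e^{-x} - e^{-x^2})/x; near x = 0 it tends to -1,
    # with series expansion -1 + (3/2)x + ...
    if x < 1e-8:
        return -1.0 + 1.5 * x
    return (math.exp(-x) - math.exp(-x * x)) / x

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

I = simpson(g, 0.0, 40.0, 100_000)
gamma = 0.5772156649015329   # Euler–Mascheroni constant
print(I, -gamma / 2)
```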
Inclusion Exclusion principle for coloring a board The questions says, how many ways are there to color a 3 by 3 board with colors red and blue, so that there are no 2 by 2 red squares. Since I'm required to solve the problem using inclusion-exclusion principle only, I started by making a 3 x 3 board and analyzing how many 2 x 2 boards can it consist of, +---+---+---+ +---+---+---+ | 0 | 0 | | | | 1 | 1 | +---+---+---+ +---+---+---+ | 0 | 0 | | | | 1 | 1 | +---+---+---+ +---+---+---+ | | | | | | | | +---+---+---+ +---+---+---+ +---+---+---+ +---+---+---+ | | | | | | | | +---+---+---+ +---+---+---+ | 2 | 2 | | | | 3 | 3 | +---+---+---+ +---+---+---+ | 2 | 2 | | | | 3 | 3 | +---+---+---+ +---+---+---+ I assumed that only 4 of the 2 x 2 boxes can be possibly made. There's 4 possible conditions which could lead to a 2 x 2 box with only red color. So, I make the 4 conditions $C_1 = C_2 = C_3 = C_4$ = 2 x 2 box has red color N($\bar{C_1}\bar{C_2}\bar{C_3}\bar{C_4}) = S_0 - S_1 + S_2 - S_3 + S_4$ Where $S_0$ is all the possible ways to fill a 3 x 3 board with 2 colors. $S_1=N(C_1) + N(C_2) + N(C_3) + N(C_4)$ $S_2=N(C_1C_2) + N(C_1C_3) + N(C_1C_4) + N(C_2C_3) + N(C_2C_4) + N(C_3C_4)$ and so on according to the notation of generalization of inclusion exclusion principle. $\bar{N} = N(\bar{C_1}\bar{C_2}...\bar{C_t}) = N - \sum_{(1≤i≤t)}N(C_i) + \sum_{(1≤i<j≤t)}N(C_iC_jC_k) + ...$ => $S_0 = 2^9$ Because there's 4 possible choices and 2 colors to pick from, using the pigeon hole principle => $S_1$ = $4 \choose 1$ $4 + 2 - 1 \choose 2$ => $S_2$ = $4 \choose 2$$3 + 1 - 1 \choose 1$ => $S_3 = S_4 = 0$ I wanted to know if I'm on the right path
To carry on from where you seem to have left the track a bit: $S_1=N(C_1) + N(C_2) + N(C_3) + N(C_4)$ $S_2=N(C_1C_2) + N(C_1C_3) + N(C_1C_4) + N(C_2C_3) + N(C_2C_4) + N(C_3C_4)$ $S_3=N(C_1C_2C_3) + N(C_1C_2C_4) + N(C_1C_3C_4) + N(C_2C_3C_4) $ $S_4=N(C_1C_2C_3C_4) $ $N(C_i)$ involves colouring the requisite block red and then free choices for the other five squares, giving $N(C_i) = 2^5$ options and thus $S_1 = 4\cdot 2^5 = 128$ There are two varieties when looking at the components of $S_2$: adjacent and diagonal blocks. These give respectively $3$ and $2$ free squares and there are four adjacent and two diagonal cases, giving $S_2=4\cdot 2^3+2\cdot 2^2 = 32 + 8 = 40$ The components of $S_3$ only have one free square each, so $S_3=4\cdot 2 = 8$ Finally there is only one way to colour every square red as required to make all four blocks red. Then as you observe, $S_0=2^9$ and the inclusion-exclusion calculation gives: $R = 512-128+40-8+1 = 417$ cases where there is no red $2\times 2$ block Cross-checking by a different method; we will have no red block if the centre square is blue, so that is $A_1=2^8=256$ possibilities immediately. Then if the centre square is red, we can avoid red blocks if opposite mid-side squares are blue. There's a little inclusion-exclusion exercise to get that $A_2=2\cdot 2^6-2^4 = 112$ Then with one mid-side square blue but the opposite red, we can avoid red blocks either by having an adjacent mid-side and the remaining corner blue or having the opposite corners blue. This works out to $A_3 = 4\cdot2^3+4\cdot2^2 = 48$ Finally if everything else is red and the four corners are blue the red blocks are also defeated, $A_4=1$. Giving $R = 256+112+48+1 = 417$ again.
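Both counts can be confirmed by brute force over all $2^9$ boards (a Python sketch):

```python
from itertools import product

def has_red_2x2(grid):
    # grid[r][c] == 1 means the square is red.
    return any(grid[r][c] and grid[r][c + 1] and grid[r + 1][c] and grid[r + 1][c + 1]
               for r in range(2) for c in range(2))

count = 0
for bits in product((0, 1), repeat=9):
    grid = [bits[0:3], bits[3:6], bits[6:9]]
    if not has_red_2x2(grid):
        count += 1
print(count)  # → 417
```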
Dimensions of sum and intersection of vector subspaces $L_1$ and $L_2$ are vector subspaces of the finite-dimensional vector space $V$. Prove: If $\dim(L_1+L_2) = 1 + \dim(L_1\cap L_2)$ then the sum $L_1+L_2$ equals one of the subspaces and the intersection $L_1\cap L_2$ equals the other one. I can see why it's true, and I've tried to use the dimension theorem, but couldn't make it work. Any ideas?
Hint: ($L_1\cap L_2)\subset L_i \subset (L_1 + L_2)$. What if you apply $\dim(\cdot)$?
Norm inequality in euclidean space Let $V$ be a Euclidean space and $(e_1,e_2,\dots,e_n)$ an orthonormal system of vectors in $V$. Show that, for every $x \in V$, the following holds: $$\sum_{i=1}^n (e_i|x)^2 \leq \| x\|^2. $$ Can someone help me?
$(e_1,e_2,\ldots,e_n)$ can be extended to an orthonormal basis $(e_1,e_2,\ldots,e_m)$. Let $a_1,a_2,\ldots,a_m$ be real numbers such that $x=a_1e_1+\ldots+a_me_m$. Clearly, $a_i=(e_i|x)$. Then, $$\|x\|^2=\|e_1a_1+\ldots+e_ma_m\|^2=\sum_{i=1}^m\|e_i\|^2a_i^2+\sum_{i\ne j}(e_i|e_j)a_ia_j=\sum_{i=1}^m a_i^2\ge \sum_{i=1}^n a_i^2$$ The middle equality uses orthonormality ($\|e_i\|=1$ and $(e_i|e_j)=0$ for $i\ne j$), and the final inequality holds because we are dropping the non-negative terms $a_{n+1}^2,\ldots,a_m^2$.
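A numerical illustration of Bessel's inequality (a sketch; the orthonormal pair in $\mathbb{R}^3$ is a hand-picked example):

```python
import random

# An orthonormal system in R^3 that is not a full basis:
# e1 = (1,0,0), e2 = (0, 3/5, 4/5).
e = [(1.0, 0.0, 0.0), (0.0, 0.6, 0.8)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
ok = True
for _ in range(1000):
    x = tuple(random.uniform(-5, 5) for _ in range(3))
    bessel = sum(dot(ei, x) ** 2 for ei in e)   # sum of squared coefficients
    if bessel > dot(x, x) + 1e-12:              # must not exceed ||x||^2
        ok = False
```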
Verify that a set is a submanifold I'm stuck on the following exercise: "Verify that $M:=\{(x,y,z)\in\mathbb{R}^3:x^2+3y^2+2z^2=3, x+y+z=0\}$ is a submanifold. Also, say what its dimension is and compute the Jacobian matrix of the function." The definition of $C^k (k\geq 1)$ submanifold of dimension $p$ that I have in my notes is: "$V\subset\mathbb{R}^n$ is a $C^k (k\geq 1)$ submanifold of dimension $1\leq p\leq n-1$ iff $\forall x_0\in V \exists U$ neighbourhood of $x_0$ in $\mathbb{R}^n$ and a function $f\colon U\to\mathbb{R}^{n-p}$, $f\in C^k(U)$ such that: (1) $x_0\in U$, (2) $V\cap U=\{x\in U:f(x)=0\}$, (3)rank $Jf(x)=n-p\ \forall x\in U$." It's the first exercise I do on (sub)manifolds and I don't know how to get started, so I would appreciate if someone explained to me how to do this kind of exercise. Best regards, lorenzo.
You have a set in $\mathbb{R}^3$ defined by 2 equations, so, as in linear algebra, you can start by guessing that it will have dimension 1. If you are familiar with the equations, you will know that you are cutting an ellipsoid with a plane, which results in a curve. Following your definition, I would define the function $f(x,y,z)= ( x^2+3y^2+2z^2-3, x+y+z )$. It is obvious that $f(p)=0$ if and only if $p\in M$. Now you have to check the other conditions.
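The rank condition can be checked numerically at sample points (a sketch; the two points below were found by hand from the equations, and rank $2$ of the $2\times 3$ Jacobian is equivalent to the two rows having a nonzero cross product):

```python
import math

def f(p):
    x, y, z = p
    return (x*x + 3*y*y + 2*z*z - 3, x + y + z)

def jacobian_rows(p):
    x, y, z = p
    return (2*x, 6*y, 4*z), (1.0, 1.0, 1.0)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# Two sample points on M (they satisfy both defining equations).
points = [(1.0, 0.0, -1.0), (0.0, math.sqrt(0.6), -math.sqrt(0.6))]
on_M = all(max(abs(v) for v in f(p)) < 1e-12 for p in points)
# Jf has rank 2 iff its rows are linearly independent, i.e. cross != 0.
full_rank = all(max(abs(c) for c in cross(*jacobian_rows(p))) > 1e-9
                for p in points)
```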
How are the Gaussian integers generated/derived? I have quite some problems understanding the definition/derivation of the Gaussian integers. We defined the notation $\mathbb{Z}[i]\subset \mathbb{C}$ the following way: $\mathbb{Z}[i]:=<\mathbb{Z} \cup i>$ and $<\mathbb{Z} \cup i>:=\bigcap_{R\subset \mathbb{C} , E \subset R}R$. Now as far as I understand it, $\mathbb{Z}[i]$ is the smallest subring of $\mathbb{C}$ containing $\mathbb{Z} \cup \{i\}$. But how do you get from this to $<\mathbb{Z} \cup i>= \{a+b*i \mid a,b\in \mathbb{Z}\}$? Why can't there be a subring like $\{\ldots,-2,-1,0,1,2,\ldots,i\}$, or would this just not contain everything needed? Or is it just not a ring? And if so, how do I get to the mentioned set as the smallest subring?
In general, if you have a ring $R$ and you would like to add an element $\alpha$ to $R$, you also have to add the elements $$r_0 + r_1\alpha + r_2\alpha^2 + \ldots + r_n\alpha^n,\ n\in\mathbb{N},\ r_i\in R$$ to $R$ to get another ring. (A ring has to be closed under addition and multiplication.) In the case $R=\mathbb{Z}$ and $\alpha = \mathrm{i}$ we have $\alpha^2=-1$, so every element above has the form $r_0 + r_1\alpha$ with $r_i\in\mathbb{Z}$.
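A small sketch makes the closure computation concrete (representing $a+bi$ as the integer pair $(a,b)$ is my choice of encoding — the point is that sums and products of such pairs stay of the same form, because $i^2=-1$):

```python
# (a + bi) + (c + di) = (a + c) + (b + d)i
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

# (a + bi)(c + di) = (ac - bd) + (ad + bc)i, using i^2 = -1
def mul(u, v):
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

# Cross-check against Python's built-in complex arithmetic.
u, v = (3, -2), (1, 5)
prod = mul(u, v)
builtin = complex(*u) * complex(*v)
print(prod, builtin)  # → (13, 13) (13+13j)
```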
Quick Logs question - Use $\log_{24} 12$ to find $\log_{24} 2$ Full Question is: Given that $\log_{24} 12 =0.782$, find the value of $\log_{24} 2$. Also, how should I set this out formally?
$$\log_{24}12=\log_{24}\left(\frac{24}{2}\right)=\log_{24}24-\log_{24}2=1-\log_{24}2,$$ so $\log_{24}2 = 1-\log_{24}12 = 1-0.782=0.218$.
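A quick numerical confirmation with Python's `math.log` (just a sanity check):

```python
import math

log24_12 = math.log(12, 24)
log24_2 = math.log(2, 24)
# Since 12 * 2 = 24, the two logs must sum to log_24(24) = 1.
print(log24_12, log24_2, log24_12 + log24_2)
```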
Even or Odd function I understand the rules: if $f(-x)=-f(x)$ then $f$ is odd, and if $f(-x) = f(x)$ then $f$ is even. I also know it's possible for the function to be neither even nor odd. For simple polynomials these rules are easy to apply; for trig functions with phase shifts, not so easy. Without using a visual of symmetry about the origin or the $y$-axis, how would I determine if something like $f(x)=\sin(x-\pi/8)$ is even, odd or neither? When I compute $f(-x)$ I get $f(-x)=\sin(-x-\pi/8)$, and it's not easy to see whether this is the same as $-\sin(x-\pi/8)$.
This is a simple example; you just have to apply the definition. Given $k$ real, define $$ f_k(x):=\sin(x+k\pi). $$ Then $f_k$ is odd if and only if $f_k(-x)=-f_k(x)$, i.e. $$ \sin(-x+k\pi)=-\sin(x+k\pi). $$ iff $$ -\sin(-x+k\pi)=\sin(x+k\pi) $$ But $-\sin(-y)=\sin(y)$, hence the above is equivalent to $$ \sin(x-k\pi)=\sin(x+k\pi)=\sin((x-k\pi)+2k\pi). $$ Hence $f_k$ is an odd function if and only if $k$ is an integer.
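For the particular function in the question, a single sample point already rules out both symmetries (a numerical sketch; the test points are arbitrary):

```python
import math

f = lambda x: math.sin(x - math.pi / 8)

# If f were even we would need f(-1) == f(1); if odd, f(-1) == -f(1).
# Neither holds, so f is neither even nor odd.
neither = abs(f(-1) - f(1)) > 1e-6 and abs(f(-1) + f(1)) > 1e-6

# By contrast, sin(x + k*pi) with integer k is odd, e.g. k = 1:
g = lambda x: math.sin(x + math.pi)
odd = all(abs(g(-x) + g(x)) < 1e-12 for x in (0.3, 1.7, 2.9))
```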
Proof that $\frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b} \ge \frac{3}{7}$ for all $a,b,c > 0$ So I am trying to prove that $\frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b} \ge \frac{3}{7}$ for all $a,b,c > 0$. First I tried the Cauchy–Schwarz inequality but got nowhere. Now I am trying to find a convex function so I can use Jensen's inequality, but I can't come up with one that works. Does anyone have an idea?
Note that \begin{align*} (a+b+c)^2 \ge 3ab+3bc+3ac. \end{align*} Therefore, \begin{align*} &\ \frac{a}{a+3b+3c}+\frac{b}{b+3a+3c}+\frac{c}{c+3a+3b}\\ =&\ \frac{a^2}{a^2+3ab+3ac}+\frac{b^2}{b^2+3ab+3bc}+\frac{c^2}{c^2+3ac+3bc}\\ \ge&\ \frac{(a+b+c)^2}{a^2+b^2+c^2 + 6ab + 6ac+6bc}\\ =&\ \frac{(a+b+c)^2}{(a+b+c)^2 + 4ab + 4ac+4bc}\\ \ge&\ \frac{(a+b+c)^2}{(a+b+c)^2 + \frac{4}{3}(a+b+c)^2}\\ =&\ \frac{3}{7}. \end{align*}
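A quick random sanity check of the inequality and its equality case $a=b=c$ (a Python sketch, not a proof; the sampling range is arbitrary):

```python
import random

def expr(a, b, c):
    return a/(a + 3*b + 3*c) + b/(b + 3*a + 3*c) + c/(c + 3*a + 3*b)

random.seed(0)
worst = min(expr(*(random.uniform(1e-3, 10) for _ in range(3)))
            for _ in range(10_000))

# Equality holds at a = b = c, where each term is 1/7.
at_equal = expr(1, 1, 1)
```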
Proof of mutual information property that $I((1-\beta)Z + \beta X; X) \geq I(Z; X)$ Suppose $\beta$ is a Bernoulli random variable taking values in $\lbrace 0, 1\rbrace$, and $X, Z$ are random variables defined on the same probability space. Is it true or false that $$ I((1-\beta)Z + \beta X; X) \geq I(Z; X)\,? $$
I don't think that this is true. The conditional mutual information $$I((1-\beta)Z+\beta X;X|\beta)$$ certainly exceeds $I(Z;X)$, but the same may not hold for the unconditional mutual information. Consider the following example, where $X$ and $\beta$ are independent Bernoulli-$1/2$ random variables taking values in $\{0,1\}$. Let $Z$ take values in $\{0,1\}$ and suppose that $Z=0$ if and only if $X=1$. We hence have $I(Z;X)=1$. However, $(1-\beta)Z+\beta X$ is independent of $X$, hence $I((1-\beta)Z+\beta X;X)=0$.
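The counterexample can be verified by direct computation over the four equally likely outcomes of $(X,\beta)$ (a Python sketch computing the mutual informations in bits):

```python
from math import log2
from itertools import product
from collections import defaultdict

# X and beta independent fair coins; Z = 1 - X; W = (1 - beta)*Z + beta*X.
joint_wx = defaultdict(float)            # joint_wx[(w, x)] = probability
for x, beta in product((0, 1), repeat=2):
    z = 1 - x
    w = x if beta else z
    joint_wx[(w, x)] += 0.25

def mutual_information(pab):
    # I(A;B) in bits from a joint pmf {(a, b): p}.
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in pab.items():
        pa[a] += p
        pb[b] += p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in pab.items() if p > 0)

I_WX = mutual_information(joint_wx)                    # W independent of X
I_ZX = mutual_information({(1 - x, x): 0.5 for x in (0, 1)})  # Z determines X
print(I_WX, I_ZX)  # → 0.0 1.0
```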
Surface of revolution between two circles I want to compute the surface of revolution between those two circles $(x_1,y_1)$ and $(x_2,y_2)$. I assume that the radius of the circles follows a well-known law $x(y)$. What I don't understand is that in my book they say that an infinitesimal surface is $2 \pi r(y) ds$ with $ds=\sqrt{dx^2+dy^2}$, but I would choose $2 \pi r(y) dy$. Indeed, if I wanted to compute the volume I would write $\pi r^2(y) dy$, so why should I take $ds$ for the surface and $dy$ for the volume? I don't understand. (I know that the fact I take $dy$ for the volume is not a justification to take $dy$ for the surface; what I want to know is how I can tell which differential element is the right one.) How can I be sure of the right differential element to take?
The formula $dA = 2 \pi r(y)\, ds$ mixes two notations for the axes, which is probably part of the confusion. With $(x,y)$ axes (rotation about the $y$-axis), the arc-length element is $ds =\sqrt{dx^2+dy^2}$ and the surface element is $dA = 2 \pi x\, ds = 2 \pi x(y)\, ds$; with $(r,z)$ axes, $ds =\sqrt{dr^2+dz^2}$ and $dA = 2 \pi r\, ds = 2 \pi r(z)\, ds$. As for why $ds$ and not $dy$: a thin band of the surface is a frustum of slant width $ds$, with area $2\pi r\, ds$, and replacing $ds$ by $dy$ ignores the slope — an error that does not vanish as the slices shrink. For the volume, approximating each slice by a flat cylinder $\pi r^2\, dy$ makes an error of higher order in $dy$, so it disappears in the limit.
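A concrete case with a known answer shows the difference numerically (a Python sketch; the cone profile $x=y$ on $[0,1]$ and the midpoint rule are my choices for illustration). Rotating $x=y$ about the $y$-axis gives a cone with $r=1$ and slant height $\sqrt2$, so the exact lateral area is $\pi r \ell = \sqrt{2}\,\pi$; using $2\pi x\,ds$ reproduces this, while $2\pi x\,dy$ gives only $\pi$ — yet the cylinder-slice volume $\pi x^2\,dy$ is exact:

```python
import math

n = 100_000
h = 1.0 / n
area_ds = 0.0   # dA = 2*pi*x*ds with ds = sqrt(1 + (dx/dy)^2) dy = sqrt(2) dy
area_dy = 0.0   # the (wrong) guess dA = 2*pi*x*dy
vol = 0.0       # dV = pi*x^2*dy
for i in range(n):
    y = (i + 0.5) * h          # midpoint rule
    x = y                      # profile x(y) = y, so dx/dy = 1
    area_ds += 2 * math.pi * x * math.sqrt(2) * h
    area_dy += 2 * math.pi * x * h
    vol += math.pi * x * x * h

exact_area = math.sqrt(2) * math.pi   # pi * r * slant = sqrt(2) * pi
exact_vol = math.pi / 3               # cone volume pi r^2 h / 3
```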
Centralizer of semi-simple element in semi-simple Lie algebra Let $L$ be a finite dimensional semi-simple Lie algebra, and $H$ a toral (maximal abelian) subalgebra. For any $h\in H$ I want to prove that $C_L(h)$ is reductive, i.e. its radical (=maximal solvable ideal) is equal to its center. How should I proceed? What I did: Let $L=H\oplus (\bigoplus_{\alpha} L_{\alpha})$ be the root space decomposition of $L$ w.r.t. $H$. Let $L_{\alpha}=\langle x_{\alpha}\rangle.$ Then an element $h'+\sum_{i=1}^k x_{\alpha_i}$ with $h'\in H$ will commute with $h$ if and only if $h$ commutes with each $x_{\alpha_i}$. But $[h,x_{\alpha_i}]=\alpha_i(h)x_{\alpha_i}$, which forces $\alpha_i(h)=0$ for $i=1,2,\cdots,k$. After this I couldn't do anything. Any hint?
Hint: You have observed that $C_L(h)=\mathfrak h\oplus\bigoplus_{\alpha:\alpha(h)=0}\mathfrak g_{\alpha}$. Since this is the decomposition into joint eigenspaces for the adjoint action of $\mathfrak h\subset C_L(h)$, it follows from standard arguments (the projection onto an eigenspace of a linear map can be written as a polynomial in the map) that an $\mathfrak h$-invariant subspace of $C_L(h)$ must be the direct sum of a linear subspace of $\mathfrak h$ and some of the root spaces. In particular, this applies to any ideal in $C_L(h)$ and hence to the radical $\mathfrak r$. Now first argue directly that an ideal containing a root space cannot be solvable, and second that an ideal contained in $\mathfrak h$ must actually be contained in the joint kernel of $\{\alpha:\alpha(h)=0\}$, which is exactly the center of $C_L(h)$.
Identifying left- and right-Riemann sums of $\int_9^{14}e^{-x^4}\ dx$ My attempt: Relooking at it, I think $L_{20}$ would be the highest, so like $R_{1200} < L_{1200} < L_{20}$, but I have no way to justify it, any help is appreciated.
Your last thought is correct. One rigorous justification goes as follows: $\displaystyle L_{20} = \sum_{i=0}^{19}\frac{5}{20}f\left(9+\frac{5i}{20}\right)$, and $\displaystyle L_{1200} = \sum_{i=0}^{1199}\frac{5}{1200}f\left(9+\frac{5i}{1200}\right)$. Now, we are going to split up the $L_{1200}$ sum into groups of 60. In particular, $\displaystyle L_{1200} = \sum_{i=0}^{59}\frac{5}{1200}f\left(9+\frac{5i}{1200}\right) + \sum_{i=60}^{119}\frac{5}{1200}f\left(9+\frac{5i}{1200}\right) + \ldots + \sum_{i=1140}^{1199}\frac{5}{1200}f\left(9+\frac{5i}{1200}\right)$. Now, since $f$ is decreasing, replacing each argument by the left endpoint of its group of $60$ can only increase each term, so this sum is at most $\displaystyle\sum_{i=0}^{59}\frac{5}{1200}f\left(9\right) + \sum_{i=60}^{119}\frac{5}{1200}f\left(9+\frac{5\cdot 60}{1200}\right) + \ldots + \sum_{i=1140}^{1199}\frac{5}{1200}f\left(9+\frac{5\cdot 1140}{1200}\right) \\ =60\cdot\frac{5}{1200}f\left(9\right) + 60\cdot\frac{5}{1200}f\left(9 + \frac{5}{20}\right) + \ldots + 60\cdot\frac{5}{1200}f\left(9 + \frac{5\cdot 19}{20}\right) =: L_{20}$. Therefore, $L_{1200} \leq L_{20}$.
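A direct high-precision computation confirms the ordering (a sketch; `Decimal` arithmetic is needed because $e^{-9^4}\approx 10^{-2850}$ underflows double precision, and the grid sizes $20$ and $1200$ are those of the problem):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def f(x):
    # f(x) = exp(-x^4); near x = 9 the values are ~1e-2850, far below
    # the double-precision range, hence Decimal throughout.
    return (-x ** 4).exp()

a, b = Decimal(9), Decimal(14)

def left_sum(n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def right_sum(n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(1, n + 1))

L20, L1200, R1200 = left_sum(20), left_sum(1200), right_sum(1200)
```

Since $f$ is strictly decreasing on $[9,14]$, every left sum overestimates the integral, every right sum underestimates it, and refining the left sum decreases it — so the computed values satisfy $R_{1200} < L_{1200} < L_{20}$.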