Show bijective correspondence Let $f : X \rightarrow Y$ be a continuous map. Let $\Im$ be a sheaf on $Y$ and $U \subseteq Y$ be an open subset. I need to show that there is a bijective correspondence between $(f^{-1} \Im)(f^{-1}(U))$ and $(f_{*}f^{-1}\Im)(U)$. Here, $f^{-1} \Im$ is the inverse image of $\Im$ and $f_{*} \Im$ is the direct image of $\Im$. Now since $f_{*} \Im(U) := \Im(f^{-1}(U))$, so $$ (f^{-1} \Im)(f^{-1}(U)) = f^{-1} f_* \Im(U) $$ But I'm not able to make any progress. Any help with this!
Given a continuous function $f:X\to Y$ and a sheaf $F$ on $X$, by definition the direct image sheaf $f_*F$ is given by $f_*F(U) : = F(f^{-1}U)$. Now, the inverse image sheaf $f^{-1}\mathfrak{J}$ where $\mathfrak{J}$ is a sheaf on $Y$, is a sheaf on $X$. Hence by definition we have that $$f_*f^{-1}\mathfrak{J}(U)= f^{-1}\mathfrak{J}(f^{-1}U). $$
Is the function $f(x, y) := \frac{\sin(xy)}{x^2 + y^2}$ Lebesgue-integrable? Given the function $f: \Bbb R^2 \rightarrow \Bbb R$, $f(x, y) := \begin{cases} \frac{\sin(xy)}{x^2 + y^2}, & (x, y) \in \Bbb R^2 \setminus \{0\} \\ 0, & \text{otherwise} \end{cases}$ decide whether it is Lebesgue-integrable or not. Hint: $\int_{(0, \infty)} \frac{1}{r} \, d\lambda(r) = \infty$. Where to start with something like that? I know that a function is Lebesgue-integrable if it is measurable and if $f(x, y)_+$ and $f(x, y)_-$ are integrable. I think it is measurable since it is continuous. So now, I would have to determine what $f(x, y)_+$ is and prove (or disprove) that $\int_{\Bbb R^2} f(x, y)_+ < \infty$? Edit: Since I didn't receive further help in the comments, I am searching for another approach. There is a similar problem to this: Prove that function is not Lebesgue integrable. If you take a look at the answer with 8 upvotes, you'll find that the hint that I was given was applied there. So by switching to polar coordinates $(x, y) = (r \cos \phi, r \sin \phi)$, we would receive something like: $\int_0^{2\pi} \int_0^{\infty} \frac{\sin(r^2 \cos \phi \sin \phi)}{r} \, dr \, d\phi.$ If there was a way to "clear" the numerator here, it would be fairly easy to apply the hint here too, which would yield that the function is not Lebesgue-integrable directly. But is there a way to do it? Edit 2: On the other hand, isn't $\int_0^{\infty} \frac{\sin(r^2 \cos \phi \sin \phi)}{r} \, dr \le \int_0^{\infty} \frac{1}{r} \, dr$?
The statement is equivalent to $$ \iint \frac{|\sin(xy)|}{x^2+y^2} \,dx\,dy=\infty. $$ Let us try substituting $t=xy$, $s=x^2+y^2$; then we will have $$ dt\,ds = \left|\det\begin{pmatrix}y&x\\2x&2y\end{pmatrix}\right| dx\,dy = 2|y^2-x^2| \,dx\,dy < 2s \,dx\,dy, $$ so $$ dx\,dy > \frac{dt\,ds}{2s}. $$ If $s>2t>0$ then the system $x^2+y^2=s$, $xy=t$ has four solutions, so $$ \iint \frac{|\sin(xy)|}{x^2+y^2} dx\,dy \ge 4\iint_{s>2t>0} \frac{|\sin t|}{s} \frac{dt\,ds}{2s} = 2\int_{t=0}^\infty |\sin t| \left(\int_{s=2t}^\infty \frac{ds}{s^2}\right) dt = \int_{t=0}^\infty \frac{|\sin t|}{t} dt = \infty. $$
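Not part of the proof, but a quick numeric sanity check of the last step: the partial integrals $\int_0^{N\pi}\frac{|\sin t|}{t}\,dt$ grow without bound, roughly like $\frac{2}{\pi}\log T$. A minimal sketch, assuming NumPy and SciPy are available:

```python
# Illustrative only: integrate |sin t| / t period by period so that
# scipy's quad never has to deal with many oscillations at once.
import numpy as np
from scipy.integrate import quad

def partial_integral(num_periods):
    total = 0.0
    for k in range(num_periods):
        val, _ = quad(lambda t: abs(np.sin(t)) / t if t > 0 else 1.0,
                      k * np.pi, (k + 1) * np.pi)
        total += val
    return total

for n in (10, 100, 1000):
    T = n * np.pi
    print(f"T = {T:9.1f}   integral ~ {partial_integral(n):7.3f}   "
          f"(2/pi)*log T ~ {2 / np.pi * np.log(T):7.3f}")
```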
limit of $\frac{F(c_n)}{F(d_n)}$, where $\frac{c_n}{d_n} \to 1$ as $n\to\infty$ Suppose $F$ (continuous) is the cdf of a non-negative random variable, and $c_n$, $d_n$ are two positive sequences going to zero as $n \to \infty$, such that $\frac{c_n}{d_n}\to 1$. Can it be said that $\frac{F(c_n)}{F(d_n)}$ also goes to 1? Are there any additional assumptions which could make this statement true?
It seems that there is no general way of answering this question without further information. One assumption which makes this statement true (which I have mentioned in reply to RSerrao's comment) is that $F$ is differentiable at $0$ with $F'(0)>0$. In that case we can write $\frac{F(c_n)}{F(d_n)}$ as $\frac{F(c_n)}{c_n}\cdot\frac{d_n}{F(d_n)}\cdot\frac{c_n}{d_n}$ and take limits.
Question on indefinite special orthogonal group I'm stuck on this question: Construct an element $x \in SO(1,2)$ which is not diagonalizable. Any help would be appreciated
You may construct $M\in M_3({\Bbb R})$ so that it maps (independent) vectors $u,v,w$ to e.g. $u,v+u,w+v$. It is then clearly in a Jordan form, so not diagonalizable. So suppose we have such a matrix. How to find $u,v,w$? Writing $(x,y)=x^t g y$ with $g={\rm diag} (1,-1,-1)$ for the $SO(1,2)$ scalar product we need $M$ to preserve the scalar product. First, $(u,v)=(Mu,Mv)=(u,v+u)$ implies $(u,u)=0$ so the vector $u$ must be a null vector (in the light cone). It is unique up to normalization and rotations in the $y-z$ plane. We may take $u=[1 \ 1 \ 0]^t$. Second, $(v,v)=(v+u,v+u) \Rightarrow (u,v)=0$ so $v$ must be ($g$)-orthogonal to $u$. Incidently this shows that the Jordan situation may not occur in $SO(1,1)$ because only $u$ is orthogonal to $u$ when $u$ is null in $SO(1,1)$. Anyway, we are in $SO(1,2)$ and we may take $v=[0 \ 0 \ 1]^t$. This is unique up to adding multiples of $u$. Finally $(w,v)=(w+v,v+u)$ and $(w,w)=(w+v,w+v)$ implies $2(w,v)+(v,v)=0$ and $(w,u)+(v,v)=0$ which leads to $w=[0 \ -1 \ -1/2]^t$. This is unique up to adding multiples of $u$ and $v$. Putting this together we want the matrix $M$ to verify: $$ M \left( \begin{matrix} 1 & 0 & 0 \\ 1 & 0 & -1\\ 0 & 1 & -1/2 \end{matrix} \right) =\left( \begin{matrix} 1 & 1 & 0 \\ 1 & 1 & -1\\ 0 & 1 & 1/2 \end{matrix} \right) $$ and solving for $M$ yields: $$ M = \left( \begin{matrix} 1.5 & -0.5 & 1 \\ 0.5 & 0.5 & 1\\ 1 & -1 & 1 \end{matrix} \right) $$ You may verify that $M$ is the identity matrix + a nilpotent and that $M^t g M = g$ as wished. Given the various choices above I think that the nil-manifold, i.e. the manifold of $M$'s for which $M$ is not diagonalizable has dimension $1+1+2=4$ in $M_3({\Bbb R})$. But if you only need one example, there is no need to worry about what the full set of solutions may be.
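A quick numerical check of the constructed matrix (a sketch, assuming NumPy is available): it confirms $M^t g M = g$, $\det M = 1$, and that $N = M - I$ is nilpotent with $N^2 \neq 0$, so $M$ is a single Jordan block for the eigenvalue $1$ and hence not diagonalizable.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0])
M = np.array([[1.5, -0.5, 1.0],
              [0.5,  0.5, 1.0],
              [1.0, -1.0, 1.0]])

# M preserves the (1,2) scalar product and has determinant 1:
print(np.allclose(M.T @ g @ M, g))       # True  -> M is in O(1,2)
print(np.isclose(np.linalg.det(M), 1))   # True  -> M is in SO(1,2)

# M = I + N with N nilpotent of index 3, so M is a single Jordan block
# for the eigenvalue 1 and therefore not diagonalizable.
N = M - np.eye(3)
print(np.allclose(np.linalg.matrix_power(N, 2), 0))  # False: N^2 != 0
print(np.allclose(np.linalg.matrix_power(N, 3), 0))  # True:  N^3 == 0
```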
On being able to write an arbitrary $ C^{3} $-solution of a particular third-order PDE in a special way Suppose that $ u: \mathbb{R}^{3} \to \mathbb{R} $ is a $ C^{3} $-function that satisfies the PDE $$ \forall (x,y,z) \in \mathbb{R}^{3}: \qquad (\partial_{2} \partial_{2} \partial_{1} u)(x,y,z) = 2 \sin(x). $$ Then $$ \forall (x,y,z) \in \mathbb{R}^{3}: \qquad (\partial_{2} \partial_{1} u)(x,y,z) = 2 \sin(x) y + f(x,z) $$ for some function $ f: \mathbb{R}^{2} \to \mathbb{R} $, which implies that $$ \forall (x,y,z) \in \mathbb{R}^{3}: \qquad (\partial_{1} u)(x,y,z) = \sin(x) y^{2} + f(x,z) y + g(x,z) $$ for some function $ g: \mathbb{R}^{2} \to \mathbb{R} $. As $ u \in {C^{3}}(\mathbb{R}^{3}) $ and $$ \forall (x,z) \in \mathbb{R}^{2}: \qquad (\partial_{1} u)(x,0,z) = g(x,z), $$ we find that $ g \in {C^{2}}(\mathbb{R}^{2}) $. Then because $$ \forall (x,z) \in \mathbb{R}^{2}: \qquad (\partial_{1} u)(x,1,z) = \sin(x) + f(x,z) + g(x,z), $$ we find that $ f \in {C^{2}}(\mathbb{R}^{2}) $ as well. Let $ F $ and $ G $ be, respectively, anti-derivatives of $ f $ and $ g $ with respect to their first arguments. Such anti-derivatives exist by the Fundamental Theorem of Calculus; for example, we could define \begin{align} \forall (x,z) \in \mathbb{R}^{2}: \qquad F(x,z) & \stackrel{\text{df}}{=} \int_{0}^{x} f(t,z) ~ \mathrm{d}{t}, \qquad (\clubsuit) \\ G(x,z) & \stackrel{\text{df}}{=} \int_{0}^{x} g(t,z) ~ \mathrm{d}{t}. \qquad (\spadesuit) \end{align} Then $$ \forall (x,y,z) \in \mathbb{R}^{3}: \qquad u(x,y,z) = - \cos(x) y^{2} + F(x,z) y + G(x,z) + h(y,z) \qquad (\star) $$ for some function $ h: \mathbb{R}^{2} \to \mathbb{R} $. Question. Can we find $ C^{3} $-functions $ F $, $ G $ and $ h $ so that $ u $ may be written as in $ (\star) $? We do not necessarily require $ F $ and $ G $ to be defined according to $ (\clubsuit) $ and $ (\spadesuit) $ respectively. Why this is a non-trivial question can be explained as follows. Suppose that we defined $ F $ according to $ (\clubsuit) $ and then tried to take the partial derivative of $ F $ with respect to its second argument, twice. Using the fact that $ f \in {C^{2}}(\mathbb{R}^{2}) $ and differentiating under the integral sign, we would obtain $$ \forall (x,z) \in \mathbb{R}^{2}: \qquad (\partial_{2} \partial_{2} F)(x,z) = \int_{0}^{x} (\partial_{2} \partial_{2} f)(t,z) ~ \mathrm{d}{t}. $$ However, $ f $ is not regular enough to justify differentiation under the integral sign one more time to obtain a formula for $ \partial_{2} \partial_{2} \partial_{2} F $. Herein lies the difficulty. Of course, if $ F $, $ G $ and $ h $ were arbitrary $ C^{3} $-functions, then the function $ u $ defined by $ (\star) $ would be a $ C^{3} $-solution of the given PDE. The question is thus asking if any $ C^{3} $-solution may be so expressed. Thank you for your help!
First of all, the right hand side is irrelevant; a general solution is $-y^2\cos x$ plus a general solution of the homogeneous PDE. So I'll consider the homogeneous case. Let $h(y,z)=u(0,y,z)$, this is clearly $C^3$. Let $G(x,z) = u(x,0,z)-u(0,0,z)$, also $C^3$. Finally, let $$F(x,y,z) = \frac{u(x,y,z) - G(x,z) - h(y,z)}{y} \tag1$$ The numerator vanishes when $y=0$, so the quotient is defined there (as a partial derivative in $y$) and therefore is $C^2$ smooth. So far we have written $$ u(x,y,z) = yF(x,y,z) +G(x,z)+h(y,z) \tag2 $$ with $G,h$ in $C^3$ and $F\in C^2$. Note that $yF\in C^3$, which implies $F$ is $C^3$ except possibly where $y=0$. Apply the PDE $\partial_2^2\partial_1u=0$ to (2) and deduce that $y\partial_1F$ is linear in $y$, hence $\partial_2\partial_1 F\equiv 0$. The latter implies $$ F(x,y,z) = F(x,0,z) + F(0, y, z) - F(0, 0, z) $$ because a $C^2$ function $u$ of two arguments with $\partial_2\partial_1 u\equiv 0$ is the sum of functions of one argument. But $F(0, y, z)=0$, by plugging $x=0$ in (1). Thus, $F$ is independent of $y$. Since we already noted $F$ is $C^3$ except possibly at $y=0$, the final conclusion is that $F\in C^3$.
Pullback of a differential form by a local diffeomorphism Suppose I have two smooth oriented manifolds, $M$ and $N$, and a local diffeomorphism $f : M \rightarrow N$. Let $\omega$ be a differential form of maximum degree on $N$, let's say, $r$. How can I rewrite $$\int_N \omega$$ in terms of the integral of the pullback of $\omega$ by $f$, $f^*\omega$? So, I know that when $f$ is a diffeomorphism, then $$\int_N\omega = \pm \int_M f^*\omega$$ depending on whether $f$ preserves the orientation or not. But that's in part due to the fact that $f$ is bijective, but that condition is removed when assuming $f$ is a local diffeomorphism. So is there a nice way to write that integral in terms of the pullback?
Yeah, and you can make it work even if $f$ is not a local diffeomorphism, only a proper map. The relevant notion is that of the degree of a smooth proper map between oriented manifolds of the same dimension. Assume for simplicity that $M,N$ are closed (compact, without boundary), non-empty and connected and let $f \colon M \rightarrow N$ be an arbitrary smooth map. Since $M,N$ are oriented, we can choose a generator $[\omega_1] \in H^{\text{top}}(M)$ such that $\omega_1$ is consistent with the orientation on $M$ and $\int_M \omega_1 = 1$. Choose $[\omega_2]$ similarly for $N$. The map $f$ induces a map $f^{*} \colon H^{\text{top}}(N) \rightarrow H^{\text{top}}(M)$ on cohomology and since the top cohomology groups are one-dimensional, we must have $f^{*}([\omega_2]) = c [\omega_1]$ for some $c \in \mathbb{R}$. This $c = \deg(f)$ is called the degree of $f$ and is a priori a real number. However, it can be shown that $c$ is in fact an integer which can be computed by counting the number of preimages of a regular value $p \in N$ with appropriate signs which take the orientations of $M,N$ into consideration. Knowing that, given $\omega \in \Omega^{\text{top}}(N)$, write $[\omega] = c[\omega_2]$ for some $c \in \mathbb{R}$. Then $$ [f^{*}(\omega)] = f^{*}([\omega]) = f^{*}(c[\omega_2]) = c \deg(f) [\omega_1] $$ so $$ \int_M f^{*}(\omega) = \deg(f) c \int_M \omega_1 = \deg(f) c = \deg(f) c\int_N \omega_2 = \deg(f) \int_N \omega. $$ In particular, if $f$ is a diffeomorphism, $\deg(f) = \pm 1$ so this generalizes your starting point. If $f$ is a local diffeomorphism then $f \colon M \rightarrow N$ is a covering map and $\deg(f)$ will be the number of points in an arbitrary fiber, counted with appropriate signs. For many more details and proofs, see the book "Differential Forms in Algebraic Topology".
What does $f(\cdot)$ mean in math Let $f:\mathbb{R} \to \mathbb{R}$ be a function, what does $f(\cdot)$ mean usually? Is it another way of writing this function, or is it a real number?
It is another way of writing the function, emphasising that the value of $f$ at, for instance, $5$ is written as $f(5)$, and not $f5$ or $5f$ or $(5)f$ or $f|_5$ or anything else. The dot is just a placeholder. Some would write this as $f(x)$ instead of $f(\cdot)$, but this is a slightly different emphasis again. The notation $f(x)$ tends to be associated to a specific description of $f$, for instance $f(x) = 4x-3$.
Sylow subgroup of $S_{p^2}$ Find Sylow p-subgroup (at least 1) of $S_{p^2}$. We know that $|S_{p^2}|=(p^2)!$ and hence $|P|=p^{p+1}$ (where P is p-subgroup). How can I describe this subgroup?
Consider the subgroup of permutations on $1,2,3,\dots, p^2$ that permute the subsets $\{1,2,3,\dots,p\},\{p+1,p+2,\dots,2p\}, \{2p+1,2p+2,\dots, 3p\},\dots, \{p(p-1)+1,\dots,p^2\}$ cyclically, and also permute the elements inside each subset cyclically. There are $p^p$ ways to permute the subsets internally and there are $p$ ways to permute them externally. So this subgroup has $p^{p+1}$ elements.
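A small computational sanity check (a sketch, assuming SymPy is available; the generators below are one concrete choice, not the only one): for $p=3$ the group generated by the three internal $3$-cycles together with the shift that cycles the blocks has order $3^{4}=81=p^{p+1}$.

```python
# Sketch: verify the order p^(p+1) of the described subgroup for p = 3.
from sympy.combinatorics import Permutation, PermutationGroup

p = 3
n = p * p

# One p-cycle inside each block {0,...,p-1}, {p,...,2p-1}, ...
internal = [Permutation([list(range(i * p, (i + 1) * p))], size=n)
            for i in range(p)]

# The shift j -> j + p (mod p^2) cycles the blocks.
block_shift = Permutation([(j + p) % n for j in range(n)])

G = PermutationGroup(internal + [block_shift])
print(G.order(), p ** (p + 1))  # both print 81
```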
visualization regarding triple integration problem I think our readers here may also have had this query when they initially began triple integration. The query is: in double integration we integrate a function $f$ over an area element $dx\,dy$, and there the function is considered as the height, the third dimension, which we can comfortably visualize. And if the function is the constant $1$, it's just the area of the plane region. Now in the case of triple integration, if $f$ is the constant $1$ then we get the volume. But if $f$ is not the constant $1$, then what "dimension" would we consider $f$ to be with respect to the volume element $dV = dx\,dy\,dz$? Can someone give a good visualisation for a starter to have a good base about triple integration?
You can visualize $f(x,y,z)$ in a triple integral as a density, for example the temperature at each point of the solid region; the level sets of $f$ are then the surfaces consisting of points having the same temperature.
Parametrization for intersection of sphere and plane. $x^2+y^2+z^2=81$, $x+y+z=15$. How can I find the parametrization of the curve given by the intersection of the sphere $x^2+y^2+z^2=81$ and the plane $x+y+z=15$. Also it says that this intersection is a circle (clearly) with center $(5,5,5)$, and that $(7,4,4)$, $(4,7,4)$, $(4,4,7)$ are points on the curve (I don't know if this information is useful for the parametrization) My try was to substitute $z$ in the sphere equation with the expression obtained for $z$ in the plane equation, but I can't conclude this way.
Method 1 (following your first step) The projection in the $xy$ plane that you get is an ellipse, but it is neither a horizontal nor a vertical one. If the equation in $x$ and $y$ that you got was a quadratic form, then you could reduce it and find its canonical representation by standard procedures seen in linear algebra. In this case we can use the same trick that was used in the answer here as follows. Note that $$ 2x^2+2xy=2\left(x+\frac{y}{2}\right)^2-\frac{y^2}{2}\tag{1} $$ and let $X:=x+\frac{y}{2}$ and $Y:=y$, so that $x=X-\frac{Y}{2}$ and $y=Y$. Substituting in the equation you got (and making use of $(1)$), we obtain, after many simplifications, $$ \frac{(X-\frac{15}{2})^2}{(\sqrt{3})^2}+\frac{(Y-5)^2}{2^2}=1 $$ This is the ellipse in its canonical form and we have a standard parametrization for it (with $\theta\in[0,2\pi]$): $$ X(\theta)=\frac{15}{2}+\sqrt{3}\cos\theta\\ Y(\theta)=5+2\sin\theta $$ Hence, $$ x(\theta)=5+\sqrt{3}\cos\theta-\sin\theta\\ y(\theta)=5+2\sin\theta $$ and finally $$ z(\theta)=15-x(\theta)-y(\theta)=5-\sqrt{3}\cos\theta-\sin\theta $$ Here's a parametric plot of the parametrization. Note: To be precise, the system $$ \begin{cases}x^2+y^2+z^2=81\\x+y+z=15\end{cases} $$ is equivalent to the system $$ \begin{cases}X:=x+\frac{y}{2}\\Y:=y\\\frac{(X-\frac{15}{2})^2}{(\sqrt{3})^2}+\frac{(Y-5)^2}{2^2}=1\\z=15-x-y\end{cases} $$ We need the equivalence in order to conclude that their solution sets are the same, that is, to conclude that our parametrization answers the initial problem. Method 2 In $\mathbb{R}^3$, the parametrization $$ P(\theta):={\bf c}+r\cos(\theta){\bf u}+r\sin(\theta){\bf v},\quad\theta\in[0,2\pi] $$ where ${\bf u}$ and ${\bf v}$ are unit orthogonal vectors is one for the circle with center ${\bf c}$ and radius $r$ lying in the plane generated by ${\bf u}$ and ${\bf v}$ at ${\bf c}$. With this in mind, the information that the intersection is a circle with center ${\bf c}:=(5,5,5)$ and that one of its points is ${\bf p}:=(7,4,4)$ is enough to find a parametrization. First, $r=\|{\bf p}-{\bf c}\|_2=\sqrt{6}$. Now, one can take ${\bf u}:=\frac{1}{\|{\bf p}-{\bf c}\|_2}({\bf p}-{\bf c})=\frac{1}{\sqrt{6}}(2,-1,-1)$. Let ${\bf w}:=(w_1,w_2,15-w_1-w_2)$ and ${\bf v}:=\frac{1}{\|{\bf w}-{\bf c}\|_2}({\bf w}-{\bf c})$. Since ${\bf v}\perp{\bf u}$ if and only if $w_1=5$, we can take $w_2:=0$ for simplicity and we get ${\bf v}=\frac{1}{\sqrt{50}}(0,-5,5)$. Here's a parametric plot of the parametrization. Note: Of course, if you weren't given the extra piece of information, then you would have to check that the plane and the sphere do intersect. Also you would have to make sure that their intersection is not a single point, so that you could assume that it is a circle (which is geometrically clear?). This was noted in the comments by user @Leafar. Since this is most likely a crafted exercise, you could have proceeded by inspection and tried integral values of $x$, $y$ and $z$ to discover that the point $(7,4,4)$ is in the intersection. Then from symmetry you would also see that $(4,7,4)$ and $(4,4,7)$ are in it. To find the center of the circle, you could follow a vector normal to the plane from the center ${\bf o}$ of the sphere until it intersects the plane (as noticed by user @Doug M here): the point of intersection is the center ${\bf c}$. That this is plausible can be seen, perhaps, by translating the plane so that it becomes a tangent to the sphere at ${\bf t}$. Then the radius $\overline{{\bf ot}}$ is perpendicular to the plane and by symmetry (?)
${\bf c}$ has to lie on $\overline{{\bf ot}}$. Here the center of the sphere is the origin, a vector normal to the plane is $(1,1,1)$ and $\alpha(1,1,1)$ is in the plane if and only if $3\alpha=15$, i.e. $\alpha=5$. Hence ${\bf c}=(5,5,5)$. Final note: You can verify that if $P(\theta)=(x(\theta),y(\theta),z(\theta))$ then with both methods we have $$ x(\theta)^2+y(\theta)^2+z(\theta)^2=81\\ x(\theta)+y(\theta)+z(\theta)=15 $$ as it should be. Also the arc lengths given by WolframAlpha in the links are nothing more than the circumference $2\pi\sqrt{6}$ of the circle.
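The final verification suggested above is easy to automate. A minimal sketch (assuming NumPy) that checks both parametrizations against the sphere and plane equations on a grid of angles:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 1000)

# Method 1
x1 = 5 + np.sqrt(3) * np.cos(theta) - np.sin(theta)
y1 = 5 + 2 * np.sin(theta)
z1 = 15 - x1 - y1

# Method 2: c + sqrt(6) * (cos(t) u + sin(t) v)
c = np.array([5.0, 5.0, 5.0])
u = np.array([2.0, -1.0, -1.0]) / np.sqrt(6)
v = np.array([0.0, -5.0, 5.0]) / np.sqrt(50)
P = c[:, None] + np.sqrt(6) * (np.outer(u, np.cos(theta)) + np.outer(v, np.sin(theta)))
x2, y2, z2 = P

for x, y, z in ((x1, y1, z1), (x2, y2, z2)):
    print(np.allclose(x**2 + y**2 + z**2, 81),  # on the sphere
          np.allclose(x + y + z, 15))           # on the plane
```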
Probability of three successes before three failures with additional conditions There is a random variable $X$ that takes integer values from 1 to 20. An event $A = \{10 \leq X \leq 19\}$ counts as a success. An event $B = \{2 \leq X \leq 9\}$ counts as a failure. An event $C = \{X = 20\}$ counts as three successes. An event $D = \{X = 1\}$ counts as two failures. What is the probability that three successes happen before three failures? My solution The probability that there are at least three failures equals $p=P(D)^2+P(D)P(B)+P(B)^3=\frac{1}{20}\frac{1}{20}+\frac{1}{20}\frac{8}{20}+\left(\frac{8}{20}\right)^3=\frac{173}{2000}$ I then count the probability of zero, one and two successes. $P\{\text{there are no successes}\}=p=\frac{173}{2000}$ $P\{\text{there is only one success}\}=p*P\{A\}=\frac{173}{2000}\frac{1}{2}$ $P\{\text{there are only two successes}\}=p*P\{A\}^2=\frac{173}{2000}\frac{1}{4}$ So, my probability equals $P(Q)=1 - \frac{173}{2000}\frac{7}{4}=\frac{6769}{8000}\approx84.86\%$ Am I right?
Probabilities: * *Single Success: $1/2$ *Triple Success: $1/20$ *Single Failure: $2/5$ *Double Failure: $1/20$ To get three successes before three failures you must obtain one of the following disjoint sequences: $$\boxed{\begin{array}{l:l} \text{A triple success.}&\tfrac 1{20}\\ \hdashline \text{ Three single successes }&{(\tfrac 12)}^3\\ \hdashline \text{A single failure and two successes, then a single success.} &\tbinom{3}{1}(\tfrac{2}{5}){(\tfrac{1}{2})}^3\\ \hdashline \text{ A single failure, then a triple success.}& \\ \hdashline \text{ } & \\ \hdashline \text{ } & \\ \hdashline \text{ }& \\ \hdashline \text{ }& \end{array}}$$ Complete the table, sum and simplify.
How to explain this geometry problem to an 8th grader? This is somewhat embarrassing for me. So, I have been asked the following question (a similar one actually) by my friend who is currently an eighth-grader: Suppose $a$, $b$, and $c$ are known, find the length of $AI$. Credit image: Wolfram MathWorld I'm able to tackle this problem using the cosine rule and the cosine double-angle formula. I obtained this result: $$AI=r\sqrt{\frac{4bc}{2bc+a^2-b^2-c^2}}$$ but unfortunately, she hasn't been taught the cosine rule nor trigonometry (sine, cosine, and tangent). I haven't figured it out using any 'simple methods'. Is it even possible? I guess I'm missing something obvious here. My question is how to deal with this problem using elementary methods, preferably without using trigonometry? Any help would be greatly appreciated. Thank you.
We know that $AM_b = AM_c$ and so on, because lengths of tangents from a point to a circle are equal. So we let $AM_b=x, BM_c=y, CM_a=z$. Now, $$AM_b + AM_c+ BM_a + BM_c + CM_b + CM_a = a+b+c$$ $$2(x+y+z)= a+b+c$$ $$x+y+z = s$$ We also know that $BC = y+z$. Thus $ x = s - BC = s-a$. Now since $AI$ is the hypotenuse of right triangle $AIM_b$, we have: $$AI^2= r^2 + (s-a)^2$$ Now using Heron's formula and $rs =\Delta$, we can represent $AI$ in terms of $a,b,c$ as: $$AI = \sqrt{\frac{(s-a)(s-b)(s-c)}{s} + (s-a)^2}$$
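A quick numeric cross-check of the final formula (a sketch, assuming NumPy; the side lengths $a=5$, $b=6$, $c=7$ are an arbitrary example): place the triangle in coordinates, compute the incenter directly, and compare $|AI|$ with $\sqrt{\frac{(s-a)(s-b)(s-c)}{s}+(s-a)^2}$.

```python
import numpy as np

a, b, c = 5.0, 6.0, 7.0          # a = BC, b = CA, c = AB (arbitrary example)
s = (a + b + c) / 2

# Coordinates: A at the origin, B on the x-axis, C determined by b and a.
A = np.array([0.0, 0.0])
B = np.array([c, 0.0])
Cx = (b**2 + c**2 - a**2) / (2 * c)
C = np.array([Cx, np.sqrt(b**2 - Cx**2)])

# Incenter as the side-length-weighted average of the vertices.
I = (a * A + b * B + c * C) / (a + b + c)

formula = np.sqrt((s - a) * (s - b) * (s - c) / s + (s - a)**2)
print(np.linalg.norm(I - A), formula)   # the two numbers agree
```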
Limit of $\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$ How do I calculate this? $$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$$ If I tried using l'Hopital's rule, it would become $$\lim_{x\to0^+}\frac{\cos x}{\frac{1}{2\sqrt{x}}\cos \sqrt{x}}$$ which looks the same. I can't seem to find a way to proceed from here. Maybe it has something to do with $$\frac{\sin x}{x} \to 1$$ but I'm not sure what to do with it. Any advice? Oh and I don't understand series expansions like Taylor's series.
By the equivalence $\sin x\approx x$ near zero we have $$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{x}{\sqrt{x}}=0$$ or $$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{\sin x}{x}\cdot\frac{\sqrt{x}}{\sin \sqrt{x}}\cdot\sqrt{x}=1\times1\times0=0$$
The ratio of their $n$-th term. The sums of the first $n$ terms of two arithmetic series are in the ratio $(7n+ 1) : (4n+ 27)$. We have to find the ratio of their $n$-th terms. I tried to find the ratio by using the formula for the sum of an A.P., but it becomes too long due to the many variables, that is, $a_1,a_2,d_1,d_2$.
We have $$\frac{7n+1}{4n+27}=\dfrac{\dfrac n2\{2a_1+(n-1)d_1\}}{\dfrac n2\{2a_2+(n-1)d_2\}}=\dfrac{a_1+\dfrac{(n-1)}2\cdot d_1}{a_2+\dfrac{(n-1)}2\cdot d_2}$$ Replace $\dfrac{n-1}2$ with $m-1\iff n=2m-1$ to find the ratio of their $m$th term.
Twice differentiable function to infinity Let $f: \mathbb R \to \mathbb R$ be a twice differentiable function for which both $f'(x) > 0$ and $f''(x) > 0$ for all $x \in \mathbb R$. Show that $\lim_{x\to\infty} f(x) = \infty$. I tried using the definitions of differentiation but got nowhere.
We can even show that $f$ grows faster than linearily. By the Mean value theorem, for all $x<y\in\mathbb R$ there is $u\in [x,y]$ such that $$f(y) = f(x) + f'(u)(y-x).$$ Of course, this applies to $f'$ as well, so $$f'(u) = f'(x) + f''(t)(u-x)$$ for some $t\in[x,u]$. Plugging in $x=0$: $$f(y) = f(0) + f'(u)\cdot y = f(0) + \left(f'(0) + \underbrace{f''(t) \cdot u}_{>0}\right)\cdot y > f(0) + f'(0)\cdot y\text{ for all }y>0$$ Since $f'(0) > 0$, this yields $f(y)\to_{y\to\infty}\infty$
Prove that the system of congruences has a solution I'm doing the following exercise and I don't know how to prove it. Suppose $a,b \in \mathbb{Z}$, and $d=\gcd(a,b)$. I have to prove that if $x\equiv y \bmod d$, then the system $$X\equiv x \mod a$$ $$X\equiv y \mod b$$ has a solution. I don't even how to start... Thanks for your help :)
Bezout's Lemma: $a,b$ are relatively prime if and only if there exist integers $m,n$ such that $am+bn=1$. Another way to phrase this is that $am\equiv1\pmod{b}$ and $bn\equiv1\pmod{a}$. See if you can use this statement (for instance applied to $a/d$ and $b/d$) to solve your problem.
Number of non-negative solutions of an equation with restrictions Q: Find the number of non-negative solutions of the equation $$r_1+r_2+r_3+\ldots +r_{2n+1}=R$$ when $0 \le r_i \le \min(N,R)$ and $0\le R\le (2n+1)N$. My Attempt: I tried the stars and bars method but it did not work properly. If the upper-bound for $r_i$ was not there, then the answer would have been $\binom{2n+1+R-1}{R}=\binom{2n+R}{R}$. But how do I deal with this problem in the given situation? EDIT: For the problem, you can simply consider $R$ as fixed and I wish to calculate the number of non-negative solutions to the given equation only.
So we have $$ \left\{ \begin{gathered} 0 \leqslant R \leqslant \left( {2n + 1} \right)N \hfill \\ 0 \leqslant r_{\,i} \leqslant \min (N,R) \hfill \\ r_{\,1} + r_{\,2} + \cdots + r_{\,2n + 1} = R \hfill \\ \end{gathered} \right. $$ understanding (from the context) that the $r_i$ are integers. The formulation that you give for the bounds is quite peculiar, since from it we get $$ r_{\,\text{avg}} = \frac{R}{2n + 1} \leqslant N\qquad M = \min (N,R) = \begin{cases} N & \text{if } \frac{R}{2n + 1} \leqslant N \leqslant R \\ R & \text{if } R \leqslant N \end{cases} $$ Now, if the variables are bounded above by $M=N$, then the average will certainly be not greater than $N$, while if $M=R$, then the upper bound is already implied by the sum. In any case, your question comes down to finding (apart from the change in naming the parameters) $$N_{\,b} (s,r,m) = \text{No. of solutions to}\;\left\{ \begin{gathered} 0 \leqslant \text{integer }x_{\,j} \leqslant r \hfill \\ x_{\,1} + x_{\,2} + \cdots + x_{\,m} = s \hfill \\ \end{gathered} \right.$$ where $N_{\,b} (s,r,m)$ is given by the closed summation $$ N_b (s,r,m)\quad \left(0 \leqslant \text{integers }s,m,r\right) \quad = \sum\limits_{\left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,\frac{s}{r}\, \leqslant \,m} \right)} (-1)^k \binom{m}{k}\binom{s + m - 1 - k(r + 1)}{s - k(r + 1)} $$ as explained in this post and in this other one.
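A quick check of the closed formula against brute-force enumeration for small parameters (a sketch, assuming only the Python standard library; the function name `n_bounded` and the tested ranges are illustrative choices):

```python
from itertools import product
from math import comb

def n_bounded(s, r, m):
    """Inclusion-exclusion count of integer solutions of
    x_1 + ... + x_m = s with 0 <= x_j <= r."""
    total = 0
    k = 0
    while k * (r + 1) <= s and k <= m:
        total += (-1) ** k * comb(m, k) * comb(s + m - 1 - k * (r + 1), s - k * (r + 1))
        k += 1
    return total

def brute_force(s, r, m):
    return sum(1 for xs in product(range(r + 1), repeat=m) if sum(xs) == s)

for m in range(1, 5):
    for r in range(0, 4):
        for s in range(0, m * r + 1):
            assert n_bounded(s, r, m) == brute_force(s, r, m)
print("formula matches brute force on all tested cases")
```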
Prove $a = 2 \cdot 10^{2016} + 16$ cannot be a perfect square Prove the number $a = 2 \cdot 10^{2016} + 16$ cannot be a perfect square. I tried studying $a \mod b$ for different integers $b$, without succes.
Note that we have $$ 2\cdot 10^{2016} + 16 \equiv 2\cdot(-1)^{2016} + 5 = 7\pmod {11} $$ but the only squares modulo $11$ are $0, 1, 3, 4, 5, 9$.
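A one-line sanity check of both claims (a sketch, assuming only the Python standard library):

```python
# Residue of 2*10^2016 + 16 modulo 11, and the squares mod 11.
print((2 * pow(10, 2016, 11) + 16) % 11)          # 7
print(sorted({(k * k) % 11 for k in range(11)}))  # [0, 1, 3, 4, 5, 9] -- 7 is absent
```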
Rank Nullity for Vector Space over Finite Field We know that the standard inner product of a vector space over a finite field might not be positive definite. e.g. $$x = (1, 1) \in \mathbb{Z_2}\times\mathbb{Z_2}$$ $$x \cdot x = 1 + 1 = 2 = 0$$ Also we defined two vectors $x,y$ to be orthogonal if $x \cdot y = 0$ So my question is, Let $V$ be an $n$ dimensional vector space over a finite field $F$, Let $B = \{x_1, x_2,...x_n\}$ be a basis such that $$x_i \cdot x_j = 0 \text{ for } i \neq j \ \ (1)$$ $$x_1 \cdot \ x_1 = 0 \ \ (2)$$ Then any $1 \times n$ matrix with $x_1$ being its row would fail the rank-nullity theorem. I know that this is impossible because you cannot have a non-zero vector being orthogonal to everything in the space. Therefore the construction of $B$ is impossible. But you take a basis $C = \{x_1, v_2, v_3, ... v_n\}$, apply gram-schmidt to that without normalizing the vectors, then won't you get an orthogonal basis that satisfy the condition $(1), (2)$ So I'm just wondering what is wrong here? Nevermind I figured out.... because $x_1 \cdot x_1 = 0$, gram-schmidt would blow up as soon as I try to work out the second vector.
The rank nullity theorem just says something about the dimensions of a pair of related subspaces. Whether or not you can find orthogonal bases of those subspaces using a given inner product is a different question. Your example shows the answer may be "no".
How many solutions does $1=x^π$ have? I was wondering how many solutions there are to $1 = x^\text{irrational number}$, since the cube root of 1 has 3 solutions and the 4th root has 4, etc., and since the number of solutions to $1 = x^{a/b}$ is $b$ (where $a$ and $b$ share no factors), how many would $x^π=1$ have? Infinity, none or something else?
We have $$1^\pi = (\mathrm e^{2\pi\mathrm in})^\pi = \mathrm e^{2\pi^2\mathrm in}$$ Now $2\pi^2$ is incommensurable with $2\pi$. Therefore we have a countable infinity of solutions, one for each $n\in\mathbb Z$. In particular, the solutions are dense on the unit circle.
The map $f(z) = \frac{z-a}{1-\bar{a}z}$ preserves unit circle and open unit ball. Given $f(z) = \dfrac{z-a}{1-\bar{a}z}$, with $|a|<1$. I showed that if $|z|=1$, then $|f(z)|=1$; if $|z|<1$, then $|f(z)|<1$. However, I am stuck at showing that the map $f$ is "onto". Is there any elementary way of showing that this map is onto? I looked at similar questions here, but they are only showing that $f$ is "into". Thank you very much!
Hint: $$f(z) = \frac{z-a}{1-\bar{a}z} \iff z = \frac{f(z)+a}{1+\bar a f(z)}$$
Does the fact that $A^{17} = I_2$ imply that the matrix $A$ must be $I_2$? Does the fact that $A^{17} = I_2$ imply that the matrix $A$ must be $I_2$? Since the question does not specify whether the entries of $A$ are allowed to be complex valued functions, I said that IF the entries can indeed be complex numbers, then the answer must be no, because we can take $A = (\cos(\frac{2\pi k}{17})+i \sin(\frac{2\pi k}{17}))I_2, k = 0,1,...,16$. (i.e. the identity matrix times the complex root of the equation $x^{17}=1$) I was wondering whether, if the entries are only taken over real numbers, there exist $A$ that satisfy the condition $A^{17} = I_2$.
Yes, define $$ A=\begin{bmatrix}\cos\Big(\frac{2\pi}{17}\Big)&-\sin\Big(\frac{2\pi}{17}\Big)\\\sin\Big(\frac{2\pi}{17}\Big)&\cos\Big(\frac{2\pi}{17}\Big)\end{bmatrix}$$ $A$ is a counterclockwise rotation by $\frac{2\pi}{17}$ radians, hence $A^{17}=I$.
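A quick numerical confirmation (a sketch, assuming NumPy):

```python
import numpy as np

t = 2 * np.pi / 17
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

print(np.allclose(np.linalg.matrix_power(A, 17), np.eye(2)))  # True: A^17 = I
print(np.allclose(A, np.eye(2)))                              # False: A itself is not I
```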
Are all roots of polynomials complex numbers? I'm sure this is a terrible question but I'd like to make sure. Given a polynomial, will its roots always be complex numbers? What else could they even be?
Usually one considers polynomials within a (number) system. Then you ask yourself whether there are roots within that system. The prototypical example is with the usual number systems, where one enlarges the system (integers, rationals, reals, complex) until within the complex numbers we get the fundamental theorem of algebra: that every nonconstant polynomial with complex coefficients has a complex root. But, in more generality, one can consider polynomials with whatever objects can be added and multiplied: this applies to the coefficients and also to the "variable": and they don't have to be the same. As an example, since square matrices can be added and multiplied, and also multiplied by scalars, one can consider polynomials with complex coefficients but where the variable is a square matrix: things like $$ p(A)=3A^3-2A^2-A. $$
How can I solve this integral using residues? $$\int_{|z| = 1}\frac{1}{(1-3z)(1-2z)^2}dz$$ $z$ here is a complex number.
Set $\mathcal{D}=\{z\in\mathbb{C}\,:\, |z|\le1\}$. It is clear that both poles, $z=\frac{1}{2}$ and $z=\frac{1}{3}$, lie in $\mathcal{D}$. We have $$\text{Res}_{z=\frac 12}f(z)=\lim_{z\to\frac12}\frac{d}{dz}\left(\left(z-\frac 12\right)^2\frac{1}{(1-3z)(1-2z)^2}\right)=\frac{1}{4}\lim_{z\to\frac12}\frac{3}{(1-3z)^2}=3$$ and $$\text{Res}_{z=\frac 13}f(z)=\lim_{z\to\frac13}\left(\left(z-\frac 13\right)\frac{1}{(1-3z)(1-2z)^2}\right)=-3$$ By the residue theorem, we have $$\oint_{|z|=1}\frac{1}{(1-3z)(1-2z)^2}dz=2\pi\text{i}(-3+3)=0$$
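To double-check, one can evaluate the contour integral numerically by parametrizing $|z|=1$ as $z=e^{i\theta}$ (a sketch, assuming NumPy):

```python
import numpy as np

n = 200000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = np.exp(1j * theta)
f = 1.0 / ((1 - 3 * z) * (1 - 2 * z) ** 2)

# dz = i * e^{i*theta} * d(theta); a uniform Riemann sum over the full period
# converges very fast for this smooth periodic integrand.
integral = np.sum(f * 1j * z) * (2.0 * np.pi / n)
print(abs(integral))   # approximately 0, since the residues cancel
```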
Proving a well-known inequality using S.O.S Using $AM-GM$ inequality, it is easy to show for $a,b,c>0$, $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3.$$ However, I can't seem to find an S.O.S form for $a,b,c$ $$f(a,b,c) = \frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}S_A(b-c)^2 \ge 0.$$ Update: Please note that I'm looking for an S.O.S form for $a, b, c$, or a proof that there is no S.O.S form for $a, b, c$. Substituting other variables may help to solve the problem using the S.O.S method, but those are S.O.S forms for some other variables, not $a, b, c$.
Because $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a} -3=\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$ it's enough to prove $$ab^2 +bc^2 +ca^2 -3abc \geqslant 0$$ by SOS. Let $$\text{P}=2(a+b+c)\cdot (ab^2 +bc^2 +ca^2 -3abc)$$ We have $$\text{P}=a \left( a+2\,b \right) \left( b-c \right) ^{2}+c \left( c+2\,a \right) \left( a-b \right) ^{2}+b \left( b+2\,c \right) \left( c-a \right) ^{2}$$ Let me explain the method: let $$\text{P}_{\text{sos}}=\sum \left( {\it QQ}_{{1}}{a}^{2}+{\it QQ}_{{4}}ab+{\it QQ}_{{5}}ac+{\it QQ}_{{2}}{b}^{2}+{\it QQ}_{{6}}bc+{\it QQ}_{{3}}{c}^{2} \right) \left( a-b \right) ^{2} $$ Requiring this to be an identity gives $$\left\{\begin{matrix} -{\it QQ}_{{1}}-{\it QQ}_{{2}}=0&\\2\,{\it QQ}_{{1}}-{\it QQ}_{{4}}-{\it QQ}_{{6}}=0&\\2\,{\it QQ}_{{2}}-{\it QQ}_{{4}}-{\it QQ}_{{5}}+ 2=0&\\2\,{\it QQ}_{{3}}+{\it QQ}_{{5}}+{\it QQ}_{{6}}-4=0&\\-{\it QQ}_{{1}}-{\it QQ}_{{2}}-2\,{\it QQ}_{{3}}+2\,{\it QQ}_{{4}}+2=0 & \end{matrix}\right.$$ Solving this with ${\it QQ}_i \geqslant 0$ $(i=1,\dots,6)$ gives us $$ \left\{ {\it QQ}_{{1}}=0,{\it QQ}_{{2}}=0,{\it QQ}_{{3}}=1,{\it QQ}_{{4}}=0,{\it QQ}_{{5}}=2,{\it QQ}_{{6}}=0 \right\} $$ Substituting into $\text{P}_{\text{sos}}$ gives the SOS form. In the same way, we have $$\text{P}=\frac{1}{3} \sum (2ab-bc-ca)^2 +\sum 2ab(b-c)^2$$
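Both identities for $\text{P}$ are easy to confirm symbolically (a sketch, assuming SymPy is available):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
P = 2 * (a + b + c) * (a*b**2 + b*c**2 + c*a**2 - 3*a*b*c)

sos1 = a*(a + 2*b)*(b - c)**2 + c*(c + 2*a)*(a - b)**2 + b*(b + 2*c)*(c - a)**2
sos2 = (sp.Rational(1, 3) * ((2*a*b - b*c - c*a)**2
                             + (2*b*c - c*a - a*b)**2
                             + (2*c*a - a*b - b*c)**2)
        + 2*a*b*(b - c)**2 + 2*b*c*(c - a)**2 + 2*c*a*(a - b)**2)

print(sp.expand(P - sos1) == 0)  # True
print(sp.expand(P - sos2) == 0)  # True
```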
Find $\sqrt{1.1}$ using Taylor series of the function $\sqrt{x+1}$ in $x^{}_0 = 1$ with error smaller than $10^{-4}$ I should find $\sqrt{1.1}$ using Taylor series of the function $\sqrt{x+1}$ in $x^{}_0=1$ with error smaller than $10^{-4}$. The first derivatives are $$f'(x)=\frac{1}{2\sqrt{x+1}}$$ $$f''(x)=\frac{-1}{4\sqrt{x+1}^ 3}$$ $$f'''(x)=\frac{3}{8\sqrt{x+1}^5}$$ Applying $x^{}_0$ we have: $$f(1)=\sqrt{2}$$ $$f'(1)=\frac{1}{2\sqrt{2}}$$ $$f''(1)=\frac{-1}{4\sqrt{2}^ 3}$$ $$f'''(1)=\frac{3}{8\sqrt{2}^5}$$ And we can build the Taylor polynomial $$T(x)=\sqrt2 + \frac{1}{2\sqrt{2}}(x+1)+\frac{-1}{2!·4\sqrt{2}^3}(x+1)^2+\frac{3}{3!·8\sqrt{2}^5}(x+1)^3+R(\xi)$$ Is everything right until here? What I don't understand is how can I check that $R(\xi) > 10^{-4}$
$f(x) = \sqrt{1 + x}$, $f'(x) = \frac{1}{2\sqrt{1+x}}$, $f(0) = 1$, $f'(0) = 1/2$, and $f(x + h) = f(x) + h f'(x) +\dots$, so $\sqrt{1.1} = f(0.1) = f(0 + 0.1) \approx 1 + 0.1 / 2 = 1.05$. You got confused by the fact that they are using $x + 1$ in the function. I made a start for you; with two terms it is already getting closer to the answer. Do you see the important part? $\sqrt{1.1} = \sqrt{1 + 0.1} = f(0.1)$. Since it is easy to work out $f(0), f'(0), f''(0)$ without any square roots, you then need to centre the expansion around $0$, i.e. use $f(0 + 0.1)$.
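A small sketch (assuming only the Python standard library) that adds binomial-series terms of $\sqrt{1+x}$ at $x=0.1$ until the next term drops below $10^{-4}$, and compares with the true value; the term recurrence used here is one convenient way to generate the coefficients, not the only one:

```python
import math

x = 0.1
term = 1.0          # k = 0 term of (1 + x)^(1/2)
total = 0.0
k = 0
while abs(term) >= 1e-4:
    total += term
    # next binomial-series term: multiply by (1/2 - k)/(k + 1) * x
    term *= (0.5 - k) / (k + 1) * x
    k += 1

print(k, total, math.sqrt(1.1), abs(total - math.sqrt(1.1)))
# 3 terms give 1.04875, within 1e-4 of sqrt(1.1) = 1.0488088...
```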
Show that $\text{arg}(f(z))$ is a constant $\Rightarrow$ $f(z)$ is constant in $D$. The full question is as follows Let $f(z)$ be an analytic function in a region $D$ and $f(z) \neq 0$ in $D$. Show that $\text{arg}(f(z))$ is a constant $\Rightarrow$ $f(z)$ is constant in $D$. My approach would be to use Cauchy Riemann in terms of polar coordiantes. In polar coordinates the Cauchy-Riemann equations become $$\dfrac{du}{dr}=\dfrac{1}{r}\dfrac{dv}{d\theta} ~~,~~ \dfrac{dv}{dr} = -\dfrac{1}{r}\dfrac{du}{d\theta}$$ The derivative in polar version at a point $z$ whose polar coordinates are $(r,\theta)$ is then $$f^{'}(z) = e^{-i\theta}(\dfrac{du}{dr}+i\dfrac{dv}{dr}) = \dfrac{1}{r}e^{-i\theta}(\dfrac{dv}{d\theta}-i\dfrac{du}{d\theta})$$ So how do i go on from here? Since $arg(f(z))$ is equivalent to the $\theta$ in question, can i just say that $v_\theta = u_\theta = 0$? Any help would be appreciated.
Let $\arg f(z)=\theta_0$, and then $$f(z)=|f(z)|e^{\rm{i}\theta_0},\qquad \forall z\in D.$$ Let $$F(z)=e^{-\rm{i}\theta_0}f(z)=|f(z)|\in \mathbb R,\qquad \forall z\in D,$$ then $F$ is a real-valued holomorphic function $(\text{Im}\, F(z)=0)$; by the Cauchy-Riemann equations a real-valued holomorphic function on a region has vanishing derivative, so $F$ is constant, which implies $f$ is constant.
If $ \sin B=3 \sin (2A+B)$, prove that $2\tan A+\tan (A+B)=0$ Given $\sin B=3\sin(2A+B)$, prove $ 2\tan A+\tan(A+B)=0$. My book uses componendo and dividendo approach to do this which I feel is bit unintuitive. I tried to do this by using identity for $\sin(x+y)=\sin x\cos y+\cos x\sin y$ but could not reach to answer. How do I do this?
I think that using the addition identity makes it more difficult to solve. So you should use componendo and dividendo.
Prove that $a^{\frac {1}{4}}+b^{\frac {1}{4}}+c^{\frac {1}{4}}=0\Rightarrow a+b+c =2(\sqrt {ab}+\sqrt {bc}+\sqrt {ca})$ If $a^{\frac {1}{4}}+b^{\frac {1}{4}}+c^{\frac {1}{4}}=0$, prove that $a+b+c =2(\sqrt {ab}+\sqrt {bc}+\sqrt {ca})$. I guess that the given expression is true if and only if $a=b=c=0$. Is it true? Or,is there any other alternatives?
We have $$a^{1/4} + b^{1/4} + c^{1/4} =0$$ $$\Rightarrow (a^{1/4} + b^{1/4} + c^{1/4})^2 = a^{1/2}+b^{1/2} + c^{1/2} + 2 [(ab)^{1/4}+(bc)^{1/4}+(ac)^{1/4}]=0$$ $$\Rightarrow a^{1/2}+b^{1/2}+c^{1/2}=-2 [(ab)^{1/4}+(bc)^{1/4}+(ac)^{1/4}] $$ $$\Rightarrow (a^{1/2}+b^{1/2}+c^{1/2})^2 =(-2 [(ab)^{1/4}+(bc)^{1/4}+(ac)^{1/4}])^2$$ $$\Rightarrow a+b+c+2 [\sqrt{ab}+\sqrt {bc}+\sqrt {ac}] = 4 [\sqrt {ab} + \sqrt {bc} + \sqrt {ac} +2 [(a^2bc)^{1/4}+(ab^2c)^{1/4}+(abc^2)^{1/4}]] $$ $$\Rightarrow a+b+c-2 [\sqrt {ab}+\sqrt {bc}+\sqrt {ac}] =8 (abc)^{1/4}[a^{1/4}+b^{1/4}+c^{1/4}] =0$$ $$\boxed {a+b+c =2[\sqrt {ab}+\sqrt {bc}+\sqrt {ac}]}$$ Hope it helps.
Let $\omega=e^{i2\pi/2015}$, evaluate $\sum_{k=1}^{2014}\frac{1}{1+\omega^k+\omega^{2k}}$ Let $\omega=e^{i2\pi/2015}$, evaluate: $$S=\sum_{k=1}^{2014}\frac{1}{1+\omega^k+\omega^{2k}}$$ My Attempt: Clearly, $1,\omega,\omega^2,\omega^3,...,\omega^{2014}$ are the $2015$th roots of unity. Thus, $\omega^{2014}=\bar\omega$, $\omega^{2013}=\bar\omega^2$, and so on. Note that $$\dfrac{1}{1+\omega^k+\omega^{2k}}+\dfrac{1}{1+\bar\omega^k+\bar\omega^{2k}}=\dfrac{1}{1+\omega^k+\omega^{2k}}+\dfrac{\omega^{2k}}{1+\omega^k+\omega^{2k}}=\dfrac{1+\omega^{2k}}{1+\omega^k+\omega^{2k}}$$ hence $$S=\sum_{k=1}^{1007}\frac{\omega^k+\omega^{-k}}{1+\omega^k+\omega^{-k}}=\sum_{k=1}^{1007}\frac{2\cos\left(\frac{2k\pi}{2015}\right)}{2\cos\left(\frac{2k\pi}{2015}\right)+1}$$ How to proceed from here. Appears to be standard trigonometric expression but I am not able to recall.
For simplicity I will write $n=2015$. Since $X^n-1=\prod\limits_{k=0}^{n-1}(X-\omega^k)$ we conclude that $$\frac{nX^{n-1}}{X^n-1}=\sum_{k=0}^{n-1}\frac{1}{X-\omega^k}$$ Substituting $X=j=e^{2i\pi/3}$ and $X= \overline{j}$ and then subtracting we get $$\frac{nj^{n-1}}{j^n-1}-\frac{n\bar{j}^{n-1}}{\bar{j}^n-1}=\sum_{k=0}^{n-1}\left(\frac{1}{j-\omega^k}-\frac{1}{\bar{j}-\omega^k}\right)=(\bar{j}-j)\sum_{k=0}^{n-1}\frac{1}{1+\omega^k+\omega^{2k}} $$ Finally, since $n=2015\equiv 2\mod 3$ we see that $j^{n-1}=j$ and $j^n=j^2=\bar{j}$, we get $$\frac{1}{-i\sqrt{3}}\left(\frac{nj}{j^2-1}-\frac{n\bar{j}}{j-1}\right)=\frac{1}{3}+\sum_{k=1}^{n-1}\frac{1}{1+\omega^k+\omega^{2k}}$$ The final step is easy and we get $$\sum_{k=1}^{n-1}\frac{1}{1+\omega^k+\omega^{2k}}=\frac{2n-1}{3}=1343$$ This conclusion is valid for every $n$ which is equal to $2\pmod3$.
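A direct numerical check of the value (a sketch, assuming NumPy):

```python
import numpy as np

n = 2015
k = np.arange(1, n)                 # k = 1, ..., 2014
w = np.exp(2j * np.pi * k / n)      # omega^k
total = np.sum(1.0 / (1.0 + w + w**2))
print(total)   # approximately 1343 + 0j
```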
When I was solving an equation with the floor function in my class I was teaching floor-function equations in my class. I asked my students to solve $\lfloor x\rfloor+2x=1$ by writing $x=n+p$, $n=\lfloor x\rfloor$, $0 \leq p <1$. All of my students made the same mistake; their solution was $$x=n+p \to \lfloor x\rfloor+2x=1\\ n+(2n+2p)=1 \to \begin{cases}3n=1 \to n=\dfrac13 \\p=0 \end{cases}$$ and $\dfrac13$ does not belong to $\mathbb Z$. After that I solved it by the methods below: $$x=n+p \to \lfloor x\rfloor+2x=1\\ 3n+2p=1 \to (1) \to \begin{cases}3n=1 \to n=\dfrac13 \\p=0 \end{cases}\\ (2) \to \begin{cases}3n=0 \to n=0 \\2p=1 \to p=\dfrac12 \checkmark \end{cases} \to x=n+p=0+\dfrac12=\dfrac12$$ For better thinking, I did this: $$\lfloor x\rfloor+2x=1 \to \lfloor x\rfloor=1-2x \to 1-2x =k \in \mathbb Z\\x=\dfrac{1-k}{2} \to \lfloor \dfrac{1-k}{2}\rfloor=k\\ k \leq \dfrac{1-k}{2} <k+1 \\\begin{cases}2k \leq 1-k \to k\leq \dfrac13 \to k=\dots,-2,-1,0\\1-k <2k+2 \to \dfrac{-1}{3} <k \to k=0,1,2,\dots\end{cases} \to \color{red}{k=0} \\ x=\dfrac{1-k}{2}=\dfrac{1-0}{2}\checkmark$$ And... $$\lfloor x\rfloor+2x=1 \to \lfloor x\rfloor=1-2x\\f(x)=\lfloor x\rfloor ,\quad g(x)=1-2x$$ plot them together and find the intersection $$\to x=\dfrac12$$ That class ended. One of my students came to me and asked for more idea(s) to solve this (and problems like it). I said I would think and answer... Now I am asking for other solution(s), if they exist, or other observations. (K-12 class.) Thanks in advance.
First, $\left \lfloor x\right \rfloor=1-2x$. On the other hand, by the floor property $$\left \lfloor x\right \rfloor\leq x <\left \lfloor x\right \rfloor+1$$ or $$1-2x\leq x < 1+1-2x.$$ Adding $2x$ gives $$1\leq 3x < 2$$ or $$\frac{1}{3}\leq x< \frac{2}{3}.$$ But then in any case $\left \lfloor x\right \rfloor = 0$. So $$\left \lfloor x\right \rfloor+2x=1$$ becomes $$2x=1$$ so $$x=\frac{1}{2}$$
Find all prime solutions of equation $5x^2-7x+1=y^2.$ Find all prime solutions of the equation $5x^2-7x+1=y^2.$ It is easy to see that $y^2+2x^2=1 \mod 7.$ Since the quadratic residues $\bmod\ 7$ are $1,2,4$, it follows that $y^2=4 \mod 7$, $x^2=2 \mod 7$ or $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ In the same way from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$ or $y^2=-1 \mod 5$ and $x=4 \mod 5.$ How do I put the two cases together? A computer search finds two prime solutions, $(3,5)$ and $(11,23).$
Can we just charge straight at this? $y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime. $x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime) so $kx=y\pm1$, $k$ even $k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $>5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible. Considering the two cases: * *(1) $x=\frac 12(y+1)$, $y=2x-1$: $x(5x-7) = 4x(x-1) \implies x = 3, y=5$ *(2) $x=\frac 12(y-1)$, $y=2x+1$: $x(5x-7) = 4x(x+1) \implies x = 11, y=23$ Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.
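A brute-force search (a sketch, assuming only the Python standard library; the bound $10^5$ is arbitrary) confirms that $(3,5)$ and $(11,23)$ are the only prime solutions in that range:

```python
from math import isqrt

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

solutions = []
for x in range(2, 10**5):
    if not is_prime(x):
        continue
    v = 5 * x * x - 7 * x + 1
    y = isqrt(v)
    if y * y == v and is_prime(y):
        solutions.append((x, y))
print(solutions)   # [(3, 5), (11, 23)]
```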
Find when three numbers have the same remainder when divided by the same number If three numbers 112, 232, and 400 are each divided by the number D, each of their quotients will have the same remainder R. Find R where R>1 How should I approach this?
Hint: $$112 - R \equiv 0 \mod D$$ $$232 - R \equiv 0 \mod D$$ $$400 - R \equiv 0 \mod D$$ $$120 \equiv 0 \mod D$$ $$168 \equiv 0 \mod D$$ Hence $D$ is a common factor of both $120$ and $168$. $$24 \equiv 0 \mod D$$ $D \in \{1,2,3,4,6,8,12,24 \}$ Use $R>1$ to identify which $D$ is possible.
Probability of a machine throwing up balls A machine can throw n balls of different color up in the air. The probability of throwing up r balls is directly proportional to r. If it is given that a particular ball is the first to be thrown up in the air, then what is the probability that all the balls have been thrown up by the machine. Answer:$\frac{2}{n+1}$ I have no idea on how to even begin solving this problem and any suggestions or solutions would be highly appreciated.
Let one of the colors be red and let $R$ be the event that the first ball is red. Let $n$ be the event that $n$ balls are thrown. Then the probability the the red ball is first given that all $n$ are thrown up must be $$P(R|n)= \frac{1}{n}$$ by symmetry. Similarly, for any number $1\le j \le n$ of balls thrown, by symmetry the probability the red ball is first must be $1/n$ so $$P(R|i) = \frac{1}{n}$$ for all $i.$ By Bayes, we have $$ P(n|R) = \frac{P(R|n)P(n)}{\sum_{i}P(R|i)P(i)} = \frac{P(n)}{\sum_{i=1}^n P(i)}= P(n).$$ So the probability that there are $n$ balls thrown given that the first is red is the same as the probabilty there are $n$ balls thrown. (Which you can compute by normalizing $P(n)\propto n$ is $2/(n+1)$. To do this set $P(i) = \alpha i$ and fix $\alpha$ so that $\sum_{i=1}^n P(i) = 1$.) When you think about it, this makes sense. The fact that the red ball is first gives you no additional information about $n$.
Proof of Hilbert's nullstellensatz, Let $k$ be an algebraically closed field and $$K=\frac{k[x_1,\dots,x_n]}{m}$$ be a finitely generated $k$-algebra, where $m$ is a maximal ideal. $K$ is algebraic over $k$. Then why is $k$ isomorphic to $K$? Sorry if this is obvious.
Hint: (Zariski) If a field $L$ is ring-finite over a subfield $K$, then $L$ is module-finite (and hence algebraic) over $K$. So if $I$ is a maximal ideal of $k[x_1,x_2,\ldots,x_n]$, then $k[x_1,x_2,\ldots,x_n]/I$ is a field, and so an algebraic extension of $k$. Since $k$ is algebraically closed, we get $k[x_1,x_2,\ldots,x_n]/I = k$.
Find the limit $\lim_{x\to 1}\frac{\sin{(\pi\sqrt x)}}{\sin{(\pi x)}}$ Find the following limit: $$\lim_{x\to 1}\frac{\sin{(\pi\sqrt x)}}{\sin{(\pi x)}}$$ My attempt: $$t:=x-1,\ x \rightarrow 1 \Rightarrow t\rightarrow 0,\ x=t+1$$ $$\lim_{x\to 1}\frac{\sin{(\pi\sqrt x)}}{\sin{(\pi x)}}=\lim_{t\to 0}\frac{\sin{(\pi\sqrt{t+1})}}{\sin{(\pi(t+1))}}=\lim_{t\to 0}\frac{\frac{\sin{(\pi\sqrt{t+1})}}{\pi(\sqrt{t+1})}\cdot \pi \sqrt{(t+1)}}{\frac{\sin{(\pi(t+1))}}{\pi(t+1)}\cdot \pi(t+1)}=\lim_{t\to 0}\frac{1\cdot \pi \sqrt{(t+1)}}{1\cdot\pi(t+1)}=\frac{\pi\sqrt{(0+1)}}{\pi\sqrt{(0+1)}}=1$$ The soulution should be $\frac{1}{2}$. What am I doing wrong?
As $\sin(\pi-y)=\sin y$ $$\dfrac{\sin(\pi\sqrt x)}{\sin(\pi x)}=\dfrac{\sin\pi(1-\sqrt x)}{\sin\pi(1-x)}$$ Now set $1-\sqrt x=u$
MLE of poisson random variable. Let $x_1=x_2=x_3=1, x_4=x_5=x_6=2\ $ be a random sample from a Poisson random variable with mean $\theta$, where $\theta\in \{1,2\}$. Then, the maximum likelihood estimator of $\theta$ is equal to... What I know is that, the MLE of poisson distribution is given by $$ \hat{\theta}_{MLE}=\sum_{i=1}^n\frac{X_i}{n} .$$ If we evaluate here then $\hat{\theta}_{MLE}$ is coming $1.5$, which is not there in the range of $\theta$. Then how will I find the MLE in this case?
You have to decide under which one of the two possible parameters your sample is more probable (literally). Under $\theta = 1$ you have $$ \prod_{i=1}^3P(X_i=1)\prod_{i=4}^6P(X_i=2)=(e^{-1}/1!)^3(e^{-1}/2!)^3. $$ Under $\theta = 2$ you have $$ \prod_{i=1}^3P(X_i=1)\prod_{i=4}^6P(X_i=2)=(e^{-2}2^1/1!)^3\times (e^{-2}2^2/2!)^3 $$ so check which expression is greater.
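The two likelihoods are easy to compare directly (a sketch, assuming only the Python standard library):

```python
from math import exp, factorial

sample = [1, 1, 1, 2, 2, 2]

def likelihood(theta):
    L = 1.0
    for x in sample:
        L *= exp(-theta) * theta**x / factorial(x)
    return L

for theta in (1, 2):
    print(theta, likelihood(theta))
# theta = 2 gives the larger likelihood, so the MLE over {1, 2} is 2.
```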
Find the value of $a^4-a^3+a^2+2$ when $a^2+2=2a$ Find the value of $a^4-a^3+a^2+2$ when $a^2+2=2a$ My Attempt, $$a^4-a^3+a^2+2=a^4-a^3+2a$$ $$=a(a^3-a^2+2)$$ What's next?
$a^4-a^3+a^2+2=(a^2+a+1)(a^2-2a+2)$, and $a^2+2=2a$ means $a^2-2a+2=0$, so the value is $0$. We can also check directly with $a=1+i$ and $a=1-i$.
Quotient topology on $\mathbb{R}^2$ such that the result is homeomorphic to a sphere, to a rectangle Define a quotient topology on $\mathbb{R}^2$ such that the result is homeomorphic to a sphere and second to a closed rectangle with all the interior points included. For the first one: I know that $\mathbb{R}^2$ is homeomorphic to a sphere (without the north pole). But I don't know how to get 'some extra point' by defining a quotient topology. I would rather say you get 'fewer points' by identifying some points. For the second: You have to identify all points outside some rectangle? Then you get a rectangle and one point that identifies all points outside the rectangle? Is that the result? Thanks in advance
Your second solution is actually very close to the answer for the first problem: identify all points outside some open disc to each other: the open disc is homeomorphic to a sphere minus the north pole, and the identification makes all the rest of the plane into said north pole. As for the rectangle, draw the rectangle in the plane, and identify each point outside the rectangle with the point on the boundary of the rectangle that it is closest to.
Coefficient of $x^2$ in $(x+\frac 2x)^6$ I did $6C4 x^2\times (\dfrac 2x)^4$ and got that the coefficient of $x^2$ is $15$, but the answer is $60$, why? Did I miss a step?
You have a mistake. $\binom{6}{4} \cdot x^2 \cdot \left(\frac 2x\right)^4 = 15 \cdot x^2 \cdot \frac{16}{x^4} = 240 \cdot \frac{1}{x^2}$. Here $240$ is the coefficient of $x^{-2}$, not $x^2$. The correct term is $\binom{6}{2} \cdot x^4 \cdot \left(\frac 2x\right)^2 = 15 \cdot x^4 \cdot \frac{4}{x^2} = 60 \cdot x^2$. So the answer is $60$.
not surjective implies degree 0 This seems like an easy question, but I can't get my head around it. Consider manifolds $M$, $N$, $F \in C^\infty(M,N)$, $\dim(M) = \dim(N)$, $M$ compact, $N$ connected. The degree of $F$ is defined as the number of points in the preimage of some regular value $q$ of $F$ in $N$, that is $$\deg_2(F) := \operatorname{card}(F^{-1}(\{q\})) \mod 2.$$ (This is well defined by homotopy invariance.) Why is $\deg_2(F)=0$ if $F$ is not surjective?
Consider $N \setminus F(M)$. Since $F$ is not surjective, this set is non-empty. By definition, $q \in N \setminus F(M)$ is a regular value (it has no pre-image) and $F^{-1}(q) = \emptyset$ so $\deg_2(F) = |\emptyset| \mod 2 = 0$.
Why does the trigonometric Pythagorean theorem work outside the unit circle? I thought the Pythagorean identity "$\sin^2+\cos^2 = 1$" was derived inside the unit circle when the hypotenuse of the triangle was one. So why does this formula work outside of the unit circle? Does the calculator just always assume that the hypotenuse is one? Would it work if the hypotenuse wasn't one? Thanks in advance.
Every triangle outside the unit circle is basically the same as one on the unit circle. i.e., we can always scale it down and $\sin^2+\cos^2=1$ holds all the way.
How to factorize this cubic equation? In one of the mathematics book, the author factorized following term $$x^3 - 6x + 4 = 0$$ to $$( x - 2) ( x^2 + 2x -2 ) = 0.$$ How did he do it?
If $P$ is a polynomial with real coefficients and if $a\in\mathbb{R}$ is a root, which means that $P(a)=0$, then there exists a real polynomial $Q$ such that $\forall x\in\mathbb{R},\quad P(x)=(x-a)\,Q(x)$. In this case, you can see by inspection that $P(2)=0$. It remains to find real constants $A,B,C$ such that: $$\forall x\in\mathbb{R},\quad x^3-6x+4=(x-2)(Ax^2+Bx+C)$$ Identification of coefficients leads to $A=1$, $-2C=4$ and, for example, $B-2A=0$ (equating the coefficients of $x^2$ in both sides), so $B=2$ and $C=-2$.
Is there any slick way of computing the matrix of rotating about a given line passing through the origin by $\pi$? Let's just consider the lines in $\mathbb{R}^3$. For any line $l$, suppose it is determined by $(x_1:x_2:x_3)$. For any point $(a,b,c)$, after the rotation it is mapped to (as we can compute) $(a',b',c')=(2\lambda x_1-a,2\lambda x_2-b,2\lambda x_3-c)$, where $\lambda=\frac{ax_1+bx_2+cx_3}{x_1^2+x_2^2+x_3^2}$. I computed this by using the condition that their midpoint is on the line $(x_1:x_2:x_3)$, and $\overline{xx'}$ is orthogonal to the line $l$. And we can of course compute the matrix of this transformation. But I have trouble checking that it indeed satisfies the conditions for a rotation matrix, $\det(A)=1, A^T=A^{-1}$. I am wondering whether there is a more algebraic way of finding the matrix. Also, I wish someone could verify whether my answer is correct. Thanks!
Consider a vector $v=(a,b,c)$ and associate to it the skew matrix $A$ given by $$\begin{pmatrix} 0 & -c & b \\ c & 0 & -a \\ -b & a & 0 \end{pmatrix}$$ Then $R=e^A$ is the rotation about the direction determined by $v$ with angle determined by $\theta=\lVert v\rVert$. Because the powers of $A$ can be computed in terms of $A$ and $A^2$, you can find a simple expression for $R$. See here.
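A numerical sketch of this recipe for a rotation by $\pi$ (assuming NumPy and SciPy; the axis $(1,2,2)/3$ is an arbitrary example): take $v = \pi\,\hat n$ for a unit vector $\hat n$, exponentiate the skew matrix, and check the expected properties.

```python
import numpy as np
from scipy.linalg import expm

n_hat = np.array([1.0, 2.0, 2.0]) / 3.0      # unit axis (arbitrary example)
a, b, c = np.pi * n_hat                       # v = pi * n_hat, so theta = ||v|| = pi

A = np.array([[0.0, -c,  b],
              [c,  0.0, -a],
              [-b,  a, 0.0]])
R = expm(A)

print(np.allclose(R.T @ R, np.eye(3)))        # orthogonal
print(np.isclose(np.linalg.det(R), 1.0))      # determinant 1
print(np.allclose(R @ n_hat, n_hat))          # the axis is fixed
print(np.allclose(R @ R, np.eye(3)))          # rotation by pi squares to the identity
```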
Need help on understanding uniform continuity. The definition of uniform continuity is that for any $\epsilon > 0$, we can find specific fixed $\delta>0$ such that for any $x_1,x_2 \in D_f$ we have $|x_1-x_2|<\delta \to |f(x_1)-f(x_2)|<\epsilon$. For any $\epsilon$ we chose, the same $\delta$ must be able to make $|f(x_1)-f(x_2)|<\epsilon$ for any $x_1,x_2 \in D_f$. But I know there's a theorem that says any continuous function is uniformly continuous when its domain is closed. I have a hard time understanding this because a function that is very steep will make $\delta > 0$ to be changed. The function is becoming more steep as it gets closer to the right end point, $b$. And clearly, this is function is defined on the closed interval, $[0,b]$. But, we can still see that $\delta$ has to change at different points on the curve to make $|f(x_1)-f(x_2)|<\epsilon$. What am I missing? EDIT: http://www.math.uconn.edu/~kconrad/blurbs/analysis/metricspaces.pdf I got my intuition from this paper, on page 30. It shows a graph where one function is continuous and other is uniformly continuous.
Yes, $\delta$ must be smaller at the right end of your example. So given $\epsilon$, choose a $\delta$ that works at $b$. That $\delta$ will work everywhere.
If P is prime, prove that $(p-1)!$ is congruent to $(p-1)$ $\pmod {1+2+3...+(p-1)}$ I have this as a problem to solve, but I'm not sure I'm headed in the right direction. I tried to simplify a bit and got $(p-1)(1-(p-2))$ and use the definition of congruence to show $1+2+3...+n-1$ divides $(1-(p-2)!)$ I hit a dead end with that though and am not sure how to proceed.
The $n$ should be $p$ I suppose. Use the following facts: $1 + 2 + \cdots + (p-1) = \frac{p(p-1)}{2}$, $(p-1)! \equiv -1 \pmod p$, $(p-1)! \equiv 0 \pmod{(p-1)/2}$, and $p$ and $\frac{p-1}{2}$ are relatively prime. Then $(p-1)! \equiv (p-1) \pmod p$ (Wilson's Theorem) and $(p-1)! \equiv (p-1) \pmod{(p-1)/2}$ together give that $$(p-1)! \equiv (p-1) \pmod{\frac{p(p-1)}{2}}.$$
Does this function sequence have a convergent subsequence? Let $\langle f_n \rangle$ be sequence of equi-continuous real-valued functions on $\mathbb R$ such that $f_n(0)=0$ for every $n$. Does $\langle f_n\rangle$ have a converging subsequence? I found that even equi-continuity condition and the value at $x=0$ does not guarantee the uniform convergence, as there is a counter-example of $f_n (x) = x/n$, which does not converge uniformly, but do the conditions of the question guarantee pointwise convergence of some subsequence?
We can extract a subsequence which converges uniformly on compact sets. Indeed, first we notice that for each $N$, the sequence $\left(f_n\right)_n$ is uniformly bounded on $[-N,N]$. Indeed, using the definition of equicontinuity there exists a $\delta\gt 0$ such that if $\left\lvert x-y\right\rvert\lt\delta$ then $\left\lvert f_n(x)-f_n(y)\right\rvert\lt 1$. For a fixed $x\in [-N,N]$, split the segment from $0$ to $x$ into $m=\lfloor N/\delta\rfloor + 1$ pieces of equal length, so that each piece has length smaller than $\delta$, with endpoints $0=t_0,t_1,\dots,t_m=x$. Then, since $f_n(0)=0$, $$ \left\lvert f_n(x)\right\rvert = \left\lvert f_n(x)-f_n(0)\right\rvert \leqslant \sum_{i=1}^m\left\lvert f_n\left(t_i\right)-f_n\left(t_{i-1}\right)\right\rvert $$ and all the terms in the sum are smaller than one, hence $$ \left\lvert f_n(x)\right\rvert\leqslant \lfloor N/\delta\rfloor + 1. $$ By the Arzelà-Ascoli theorem, there exists a subsequence converging uniformly on $[-N,N]$. Now, in order to get a subsequence for which the convergence is uniform on each compact set, we proceed as follows. We construct a non-increasing sequence of subsets $\left(I_N\right)_{N\geqslant 1}$ of $\mathbb N$ such that each $I_N$ is infinite and the sequence $\left(f_n\right)_{n\in I_N}$ converges uniformly on $[-N,N]$. Then let $n_k$ be the $k$-th element of $I_k$. The subsequence $\left(f_{n_k}\right)_k$ is uniformly convergent on compact sets.
How to show that if $\int_a^b|{f(x)}|^p\,d\alpha = 0$ then $\int_a^b|{f(x)}|\,d\alpha = 0$ How can I prove that if $\int_a^b|{f(x)}|^p\,d\alpha = 0$ then $\int_a^b|{f(x)}|\,d\alpha = 0$? (Assuming that $\alpha$ is increasing and $p > 1$). Intuitively this is clear but I'm having trouble proving it rigorously.
Let $q$ be such that $\dfrac{1}{p}+\dfrac{1}{q}=1$, so $q>1$, and by Hölder's inequality $$\int_a^b|f|\,d\alpha\leq\Big(\int_a^b|f|^p\,d\alpha\Big)^\frac1p\Big(\int_a^b1\,d\alpha\Big)^\frac1q=0\cdot\big(\alpha(b)-\alpha(a)\big)^\frac1q=0$$ Without Hölder: since $|f|^p\geq0$ and $\int_a^b|f|^p\,d\alpha=0$, we get $|f|^p=0$, hence $|f|=0$, almost everywhere with respect to $\alpha$ on $[a,b]$. That is, $$\int_a^b|f|\,d\alpha=0$$
Prove that $\frac {\sec (16A) - 1}{\sec (8A) - 1}=\frac {\tan (16A)}{\tan (4A)}$ Prove that:$$\frac {\sec (16A) - 1}{\sec (8A) - 1}=\frac {\tan (16A)}{\tan (4A)}$$. My Attempt: $$L.H.S= \frac {\sec (16A)-1}{\sec (8A)-1}$$ $$=\frac {\frac {1}{\cos (16A)} -1}{\frac {1}{\cos (8A)} -1}$$ $$=\frac {(1-\cos (16A))\cos (8A)}{\cos (16A)(1-\cos (8A))}$$. What should I do next?
As $\cos2y=1-2\sin^2y,\sin2y=2\sin y\cos y$ $$\dfrac{1-\sec16A}{\tan16A}=\cdots=\dfrac{\cos16A-1}{\sin16A}=-\dfrac{2\sin^28A}{2\cos8A\sin8A}=-\tan8A$$ Similarly, $$\dfrac{1-\sec8A}{\tan8A}=-\tan4A$$ Can you take it home from here?
left adjoints preserve pushouts I need to show that left adjoints preserve epimorphisms. I want to follow the following idea: $f: A\longrightarrow B$ is epic if and only if the diagram: $$ \begin{array}{ccc} A & \overset{f}\longrightarrow & B \\ {\scriptstyle{f}}{\downarrow} & & {\downarrow}\scriptstyle{1_B} \\ B & \underset{1_B}{\longrightarrow} & B \end{array} $$ is a pushout. So I suppose that $A,B\in \mathcal{C}$, $f$ in $\mathcal{C}$ and that the diagram above is a pushout. I consider the functors $F:\mathcal{C}\longrightarrow\mathcal{D}$ and $G:\mathcal{D}\longrightarrow\mathcal{C}$ such that $F\dashv G$ and I want to show that the diagram $$ \begin{array}{ccc} F(A) & \overset{F(f)}\longrightarrow & F(B) \\ {\scriptstyle{F(f)}}{\downarrow} & & {\downarrow}\scriptstyle{F(1_B)} \\ F(B) & \underset{F(1_B)}{\longrightarrow} & F(B) \end{array} $$ is a pushout. Hence the conclusion will follow. However, I don't think this way of reasoning is totally fine: it turns out that I don't use the hypothesis of $F$ being left adjoint to $G$ in the proof that the second diagram above is a pushout. I know it is really simple and basic, but I feel like I am not seeing something. Can anyone please help me? (For the record, this is the sketch of my proof that the second diagram is a pushout. For sure it commutes ($F$ respects composition, being a functor). I consider an object $Y\in \mathcal{D}$ and maps such that the diagram $$ \begin{array}{ccc} F(A) & \overset{F(f)}\longrightarrow & F(B) \\ {\scriptstyle{F(f)}}{\downarrow} & & {\downarrow}\scriptstyle{y'} \\ F(B) & \underset{y}{\longrightarrow} & Y \end{array} $$ commutes. I need to find a unique $\overline{y}:F(B)\longrightarrow Y$ such that $\overline{y}\circ 1_{F(B)}=y'$. Take $\overline{y}\equiv y'=y$.)
That the image diagram is a pushout follows from the general fact that left adjoints preserve all colimits. Assuming you haven't proven that yet, you can compute $$ \begin{align*} \mathcal{D}(F(\operatorname{colim} X_j), Y) &\cong \mathcal{C}(\operatorname{colim} X_j, GY) \\&\cong \lim \mathcal{C}(X_j, GY) \\&\cong \lim \mathcal{D}(FX_j, Y) \end{align*} $$ which shows that $F(\operatorname{colim} X_j) \cong \operatorname{colim} F(X_j)$.
How to solve this limit: $\lim\limits_{x \to 0}\left(\frac{(1+2x)^\frac1x}{e^2 +x}\right)^\frac1x$ $$\lim\limits_{x \to 0}\left(\frac{(1+2x)^\frac{1}{x}}{e^2 +x}\right)^\frac{1}{x}=~?$$ Can not solve this limit, already tried with logarithm but this is where i run out of ideas. Thanks.
All such limits can be mechanically computed easily using asymptotic expansions. One should not use L'Hopital unless it is obvious that it works well. $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} \def\wi{\subseteq} $ The complete solution produced by the mechanical computation is as follows. As $x \to 0$: $\Big(\lfrac{(1+2x)^{1/x}}{e^2+x}\Big)^{1/x}$ $= \Big(\lfrac{\exp(\lfrac1x\ln(1+2x))}{e^2+x}\Big)^{1/x}$ $\in \Big(\lfrac{\exp(\lfrac1x(2x-2x^2+O(x^3)))}{e^2+x}\Big)^{1/x}$ $= \Big(\lfrac{\exp(2-2x+O(x^2))}{e^2+x}\Big)^{1/x}$ $= e^{-2} \Big(\lfrac{\exp(O(x^2))}{1+e^{-2}x}\Big)^{1/x}$ $\wi e^{-2} \Big(\lfrac{1+O(x^2)}{1+e^{-2}x}\Big)^{1/x}$ $\wi e^{-2} \Big((1+O(x^2))(1-e^{-2}x)\Big)^{1/x}$ $\wi e^{-2} \Big(1-e^{-2}x+O(x^2)\Big)^{1/x}$ $= e^{-2} \exp\!\Big(\lfrac1x\ln(1-e^{-2}x+O(x^2))\Big)$ $\wi e^{-2} \exp\!\Big(\lfrac1x(-e^{-2}x+O(x^2))\Big)$ $= e^{-2} \exp(-e^{-2}+O(x))$ $= e^{-2-e^{-2}} \exp(O(x))$ $\wi e^{-2-e^{-2}}(1+O(x))$ $= e^{-2-e^{-2}}+O(x)$. The two asymptotic expansions used in the above solution are: * *$\exp(x) \in 1+O(x)$ if $x \in o(1)$. *$\ln(1+x) \in x-\lfrac1{2}x^2+O(x^3)$ if $x \in o(1)$. One question is how I know how many terms of the asymptotic expansions to use. The answer is I do not know in advance. I just start with using the first two terms, and if at some point I cannot simplify due to the terms cancelling and leaving only asymptotic classes, then I trace where those remaining terms arose from and increase the number of terms in previous asymptotic expansions where needed. This is a mechanical process and is actually used in computer algebra systems to find such limits.
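As a sanity check on the final value (my own addition, not part of the expansion above), evaluating the expression for small $x$ should approach $e^{-2-e^{-2}}\approx 0.1182$; a minimal Python script:

```python
import math

def g(x):
    return ((1 + 2*x)**(1/x) / (math.e**2 + x))**(1/x)

print(math.exp(-2 - math.exp(-2)))   # the claimed limit, about 0.1182
for x in (1e-2, 1e-3, 1e-4):
    print(x, g(x))                   # approaches the value above as x -> 0
```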
Find all complex numbers $z$ such that $|z|=\frac{1}{|z|}=|1-z|$ Problem: Find all complex numbers $z$ such that $|z|=\frac{1}{|z|}=|1-z|$. Basically I have an idea how to solve this and I get $x=\frac12$ but how should I express it mathematically? Should I go and find $y$ also?
Hint: $\require{cancel}$ $|z|^2=|1-z|^2$ $\iff$ $z \bar z = (1-z)(1-\bar z)$ $\iff$ $\cancel{z \bar z}=1-z-\bar z+\cancel{z \bar z}\,$ Since $1=|z|^2=z \bar z$ it follows that $\bar z = \cfrac{1}{z}$ so the last equality becomes $z^2-z+1=0\,$, which is a simple quadratic having the two possible values of $z\,$ as roots.
Find all the solutions of the system $AX=B$ if $B$ is the difference between the first and the fourth column of $A$. Let $$A \sim \begin{pmatrix} 1 &2 &4&1\\ 0&0&1&2\\ 1&3&1&1\\ 0&0&0&0\\ \end{pmatrix}$$ such that the equivalence is achieved by elementary row transformations. Find all the solutions of the system $AX=B$ if $B$ is the difference between the first and the fourth column of $A$. I hope I understood this: $$B=\begin{pmatrix} 0\\ -2\\ 0\\ 0 \end{pmatrix}$$ But I don't know what should I do here since I don't know what $A$ is exactly...I know $X$ should be a $4$x$1$ matrix and that's about it.
Hint - You do not need to know $A$ itself: elementary row operations preserve the solution set, and they send the first and fourth columns of $A$ to the first and fourth columns of the reduced matrix, so you may solve the system with the reduced matrix: $$\begin{pmatrix} 1 &2 &4&1\\ 0&0&1&2\\ 1&3&1&1\\ 0&0&0&0\\ \end{pmatrix} \cdot \begin{pmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{pmatrix} = \begin{pmatrix} 0\\ -2\\ 0\\ 0 \end{pmatrix}$$
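A small symbolic check of the resulting solution set (my own sketch; sympy is an assumption, any CAS or hand elimination gives the same):

```python
from sympy import Matrix, linsolve, symbols

R = Matrix([[1, 2, 4, 1],
            [0, 0, 1, 2],
            [1, 3, 1, 1],
            [0, 0, 0, 0]])
b = R[:, 0] - R[:, 3]          # difference of the first and fourth columns: (0, -2, 0, 0)^T
x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
print(linsolve((R, b), x1, x2, x3, x4))
# a one-parameter family of solutions; note that (1, 0, 0, -1) is an obvious particular
# solution, since B was defined as (first column of A) - (fourth column of A)
```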
What is the second derivative of a B-spline? A B-spline of degree $j$ is defined at knots $\vec k$ by the Cox-de Boor recursion formula \begin{align} B_{i,1}(x) &= \left\{ \begin{matrix} 1 & \mathrm{if} \quad k_i \leq x < k_{i+1} \\ 0 & \mathrm{otherwise} \end{matrix} \right. \\ B_{i,j}(x) &= \frac{x - k_i}{k_{i+j-1} - k_i} B_{i,j-1}(x) + \frac{k_{i+j} - x}{k_{i+j} - k_{i+1}} B_{i+1,j-1}(x) \end{align} and has derivative \begin{equation} \frac{\text{d}B_{i,j}(x)}{\text{d}x} = (j-1) \left( \frac{-B_{i+1,j-1}(x)}{k_{i+j}-k_{i+1}} + \frac{B_{i,j-1}(x)}{k_{i+j-1}-k_i} \right). \end{equation} I am trying to implement the O'Sullivan penalty which requires second derivatives. What is the second derivative of a B-spline?
This document gives (with a corrected typo) \begin{equation} \frac{\text{d}^{(n)}B_{i,j}(x)}{\text{d}x^{(n)}} = (j-1) \left( \frac{- \text{d}^{(n-1)} B_{i+1,j-1}(x) / \text{d}x^{(n-1)}}{k_{i+j}-k_{i+1}} + \frac{\text{d}^{(n-1)} B_{i,j-1}(x) / \text{d}x^{(n-1)}}{k_{i+j-1}-k_i} \right). \end{equation}
How many real roots has the equation? How many real roots (depending on the parameter $a$) has the equation $x^{13}=a(x^{14}+1)?$ I plotted the left and right sides and observed that for all $a \neq 0$ there is only one root. Is it true? Edit. Correct typos.
The equation is equivalent to $P_a(x)=ax^{14}-x^{13} +a=0$ Check out Descartes' Rule of Signs: https://en.wikipedia.org/wiki/Descartes'_rule_of_signs It is very useful for this kind of problem. In this specific case it gives: * *for $a>0$: the coefficients of $P_a(x)$ have signs $+,-,+$, so there are $2$ sign changes ($2$ or $0$ positive roots), while $P_a(-x)=ax^{14}+x^{13}+a$ has no sign change (no negative roots); *for $a<0$: $P_a(x)$ has no sign change (no positive roots), while $P_a(-x)$ has $2$ sign changes ($2$ or $0$ negative roots). Since $P_a(0)=a\neq 0$, the equation has at most two real roots, and they have the same sign as $a$, so it is not true that there is always exactly one. Whether there are $2$, $1$ (a double root) or $0$ depends on $|a|$: the tangency conditions $x^{13}=a(x^{14}+1)$ and $13x^{12}=14ax^{13}$ give $x^{14}=13$, i.e. the critical value $|a|=\frac{13^{13/14}}{14}\approx 0.773$; for $0<|a|$ below this value there are two real roots, at it a single (double) root, and above it none.
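To back this up, here is a small exact check of the real-root count for a few sample values of $a$ (my own addition; it assumes sympy is available):

```python
from sympy import Rational, Symbol, real_roots

x = Symbol('x')
for a in (Rational(1, 2), Rational(3, 4), 1, Rational(-1, 2), -1):
    print(a, len(real_roots(a*x**14 - x**13 + a)))
# the counts are 2, 2, 0, 2, 0 respectively: the number of real roots depends on |a|,
# and it is never exactly one except at the critical value where the two roots merge
```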
Prove that the following function is continuous. I need to prove that the function: $$f(x)=\tan{x}+\frac{\tan^2 x}{2^2}+\frac{\tan^3 x}{3^2}+\cdots+\frac{\tan^n x}{n^2}+\cdots$$ is continuous for $x \in \left[-\dfrac{\pi}{4}, \dfrac{\pi}{4} \right]$. I don't really know where to begin. I see, that for $-\dfrac{\pi}{4}$ and $ \dfrac{\pi}{4}$ it is a sum of a certain series, but don't know how to show the continuity.
The sequence of partial sums is a uniformly convergent sequence of continuous functions (hint: $\|\tan\|_{\infty} =1$ on $\left[-\frac{\pi}{4},\frac{\pi}{4}\right]$, so the Weierstrass $M$-test applies with $M_n=\frac1{n^2}$). So its limit, $f$, must be continuous.
6th Grade (12 Years Old) Mathematics - Triangle Inequality Question My son's math homework has this question which I find to be extremely confusing. Neither my wife or I can wrap our heads around what they are expecting as an answer: Two sides of a triangle have lengths $7$ and $9$. The third side has length $x$. Tell whether each of the following statements is always, sometimes but not always or never true. a) $x = 12$ b) $x = 2$ c) $x < 2$ d) $x < 16$ e) $x < 15$ To my mind the answer is sometimes to all of them. For $x = 12$ and $x = 2$, when considering all the possible values of $x$ it will sometimes be those values. For $x < 2$, $x < 15$, and $x < 16$, it's sometimes because all values less than those make it true, until you hit $0$ which forms a line and no longer a triangle. It looks like I've found the the teachers edition of the book (scroll to page 6 in the pdf; question 12), that just confuses me even further!
Think of a "hinge" of two sides with fixed lengths 7 and 9, respectively. You can put it at any opening between $0^\circ$ and $180^\circ$, but not including the extreme values $0^\circ$ and $180^\circ$. The third side will be given once you fix the degree of opening of your hinge. In the limit where the angle is extremely close to $0^\circ$, the third side tends to $9-7=2$. In the other limit, $180^\circ$, the third side tends to $9+7=16$. So the third side can have all lengths between 2 and 16, not including those two extreme values. (Once you guys have thought a bit about this, both you, your wife and your son will understand the triangle inequality well.)
Trapezoids - Which definition has a stronger case? Today my daughter Ella asked me "Is a trapezoid an irregular polygon?" and I realized I cannot give her a definitive answer. According to the Internet, trapezoids are alternately defined as having only one pair of parallel lines, and also at least one pair of parallel lines. My understanding is that this is simply an unresolved ambiguity in mathematics. My question is, which definition has the stronger case? So far I have this: The case for "only one": * *Many people seem to think this is more intuitive and/or traditional The case for "at least one": * *Inclusive definitions are generally more useful (if this true I'd like to learn why) *It's the only definition that fits with the concept of trapezoidal sums in calculus What am I missing?
Short answer is that the definition being used should be explicitly spelled out in every context where it actually matters. Inclusive definitions are generally more useful (if this true I'd like to learn why) Take for example the area formula: if a quadrilateral has a pair of parallel sides, then its area is half the distance between the two parallels times the sum of those two sides. The theorem obviously holds true for arbitrary trapezoids, but also for parallelograms, rhombi, rectangles and squares. If you defined each one in the strictest non-inclusive sense, then you would technically have to restate and/or prove the theorem separately for each shape in turn.
Confused about two series I know this question is probably really trivial but I really just don't get it and was hoping someone could explain it to me. With the following two series (where $c$ is a constant): $$\sum_{i=0}^n\ c = cn +c $$ $$\sum_{i=1}^n\ c = cn $$ I just don't get why they equal what they do. I suppose I'm confused as there is no $i$ term in the expression to which I can substitute actual values into to get the terms, it is just the constant $c$. I just don't know how the terms $cn$ and $cn + c$ came about. Thank you.
Hint: $$\underbrace{c+c+c+\dots+c}_n=cn$$ Likewise... $$c(n+1)=cn+c$$
$A+B=AB$ does it follows that $AB=BA$? If $A$, $B$ are two normal operators such that: $A+B=AB$ does it follow that $AB=BA$?
Rearrange the equation to \begin{eqnarray*} (A-1)B=A, \qquad\text{i.e.}\qquad (A-1)(B-1)=1. \end{eqnarray*} In particular $A-1$ has a right inverse, so it is surjective; since $A-1$ is normal, $\ker(A-1)=\ker(A-1)^{*}=\big(\operatorname{ran}(A-1)\big)^{\perp}=\{0\}$, so $A-1$ is in fact invertible, with inverse $B-1$. Hence \begin{eqnarray*} B=(A-1)^{-1}A. \end{eqnarray*} Now $(A-1)^{-1}$ commutes with $A$ (an inverse commutes with everything that commutes with $A-1$, in particular with $A$ itself), so $B$ commutes with $A$, that is $AB=BA$.
Distance between compact sets in a metric space. Let $K_1$ and $K_2$ be two disjoint compact sets in a metric space $(X,d).$ Show that $$a = \inf_{x_1 \in K_1, x_2 \in K_2} d(x_1, x_2) > 0.$$ Moreover, show that there are $x \in K_1$ and $y \in K_2$ such that $a = d(x,y)$. For the first part, suppose to the contrary that $\inf d(x_1, x_2) = 0$. Then $\epsilon$ is not a lower bound, so $d(x_1, x_2) < \epsilon$ for all $\epsilon > 0$. Since $K_1$ and $K_2$ are compact subsets of a metric space, they are closed and bounded. So, then $B(x_1, \epsilon) \cap K_2 \neq \emptyset$. Thus, $x_1$ is an adherent point to $K_2$. Since $K_2$ is closed, this means $x_1 \in K_2$, a contradiction. I'm stuck on the moreover part. I tried supposing to the contrary that $d(x,y) > a$, but I did not get far.
By the definition of an infimum, you can find a sequence of pairs $((x_1^n,x_2^n))_{n \ge 1}$ in $K_1 \times K_2$ such that $d(x_1^n,x_2^n) \le a+\frac{1}{n}$ for each $n \ge 1$. Since we are in a metric space, you can use sequential compactness to take subsequences of each sequence and find respective limits $x_1 \in K_1$ and $x_2 \in K_2$. Using the triangle inequality, we have for any $\delta>0$, $$d(x_1,x_2) \le d(x_1,x_1^n) + d(x_1^n,x_2^n) + d(x_2^n,x_2) \le \delta + a + \frac{1}{n} + \delta,$$ for all sufficiently large $n$. Thus $d(x_1,x_2)=a$. Proving the "moreover" part allows you to prove the first part of the question immediately, since if $a=0$, then $x_1=x_2 \in K_1 \cap K_2$, a contradiction. By the way, I don't think your proof of the first part is correct.
Is the following limit finite ....? I would like to see some clue for the following problem: Let $a_1=1$ and $a_n=1+\frac{1}{a_1}+\cdots+\frac{1}{a_{n-1}}$, $n>1$. Find $$ \lim_{n\to\infty}\left(a_n-\sqrt{2n}\right). $$
As other answers indicate, this sequence obeys to the following recurrence relation : $$a_1=1\quad\mathrm{and}\quad\forall n\in\mathbb{N},\,a_{n+1}=a_n+\frac{1}{a_n}$$ This proves that the sequence is increasing. Supposing its convergence to some finite limit $L>0$ would lead to $L=L+\dfrac{1}{L}$, a contradiction. Hence the sequence diverges towards $+\infty$. Now, for all $n\in\mathbb{N}$, we have : $$a_{n+1}^2-a_n^2=\left(a_n+\frac{1}{a_n}\right)^2-a_n^2=2+\frac{1}{a_n^2}\longrightarrow 2$$ By Cesaro's lemma : $$\frac{1}{n}\left(a_n^2-a_0^2\right)=\frac{1}{n}\sum_{k=0}^{n-1}\left(a_{k+1}^2-a_k^2\right)\longrightarrow 2$$ Thus $$a_n\sim\sqrt{2n}$$ Now, we will use ... Lemma Given a sequence $(u_n)_{n\ge1}$ of positive real numbers such that $u_n\sim\dfrac{1}{n}$, we have : $$\sum_{k=1}^nu_k\sim\ln(n)$$ (See below for a proof.) Since $a_{n+1}^2-a_n^2-2\sim\dfrac{1}{2n}$ and by the previous lemma : $$a_n^2-2n\sim\frac{\ln(n)}{2}$$ which can be written : $$a_n=\sqrt{2n}\,\sqrt{1+\frac{\ln(n)}{4n}+o\left(\frac{\ln(n)}{n}\right)}$$ Using now the Taylor expansion $\sqrt{1+t}=1+\frac{t}{2}+o(t)$ as $t\to0$, we get finally : $$\boxed{a_n=\sqrt{2n}\left(1+\frac{\ln(n)}{8n}+o\left(\frac{\ln(n)}{n}\right)\right)}$$ In particular, we see that $\lim_{n\to\infty}\left(a_n-\sqrt{2n}\right)=0$, but the result above is much more accurate. Proof of the above lemma Given $\epsilon>0$, there exists $N\in\mathbb{N}^\star$ such that : $$k>N\implies\left|u_k-\frac{1}{k}\right|\le\epsilon$$ As soon as $n>N$, we have : $$\left|\sum_{k=1}^nu_k-\sum_{k=1}^n\frac{1}{k}\right|\le\underbrace{\left|\sum_{k=1}^N\left(u_k-\frac{1}{k}\right)\right|}_{=A}+\sum_{k=N+1}^n\left|u_k-\frac{1}{k}\right|\le A+\epsilon\sum_{k=N+1}^n\frac{1}{k}$$ And a fortiori : $$\left|\sum_{k=1}^nu_k-\sum_{k=1}^n\frac{1}{k}\right|\le A+\epsilon\sum_{k=1}^n\frac{1}{k}$$ Since $\lim_{n\to\infty}\sum_{k=1}^n\frac{1}{k}=+\infty$, there exists $N'\in\mathbb{N}^\star$ such that : $$n>N'\implies\sum_{k=1}^n\frac{1}{k}>\frac{A}{\epsilon}$$ Finally : $$n>\max\{N,N'\}\implies\left|\sum_{k=1}^nu_k-\sum_{k=1}^n\frac{1}{k}\right|\le2\epsilon\sum_{k=1}^n\frac{1}{k}$$ This proves that : $$\sum_{k=1}^nu_k\sim\sum_{k=1}^n\frac{1}{k}$$ But we know that $\sum_{k=1}^n\frac{1}{k}\sim\ln(n)$ as $n\to\infty$, hence the conclusion (by transitivity of $\sim$).
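A quick numerical illustration of the final expansion (my own addition, plain Python): iterate the recurrence directly and compare with $\sqrt{2n}$.

```python
import math

a, N = 1.0, 10**6
for n in range(1, N):              # a_1 = 1 and a_{n+1} = a_n + 1/a_n
    a += 1.0 / a
print(a - math.sqrt(2*N))                                 # small, and -> 0 (slowly)
print(a / (math.sqrt(2*N) * (1 + math.log(N)/(8*N))))     # very close to 1
```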
Is $(11)$ a prime ideal of $\mathbb{Z}[\sqrt{-5}]$? Is $(11)$ a prime ideal of $\mathbb{Z}[\sqrt{-5}]$? I know that $11$ is an irreducible element in $\mathbb{Z}[\sqrt{-5}]$. Now to determine whether it is prime we can say $\mathbb{Z}[\sqrt{-5}]$ isomorphic to $\mathbb{Z}[x]/(x^2 + 5)$. So we get an isomorphism $$ \mathbb{Z}[\sqrt{-5}]/(11) \;\;\simeq\;\; \mathbb{Z}_{11}[x]/(x^2 + 5) \,.$$ Since $\mathbb{Z}_{11}$ is a field, $\mathbb{Z}_{11}[x]$ is a PID, and since $(x^2 + 5)$ is irreducible over $\mathbb{Z}_{11}[x]$, the ring $\mathbb{Z}_{11}[x]/(x^2 + 5)$ is a field. Hence $(11)$ can be treated as a maximal ideal as well as a prime ideal in the ring $\mathbb{Z}[\sqrt{-5}]$.
As indicated in the comments, yes, you are correct. It might be wise to justify why $(x^2+5)$ is irreducible over $\mathbb{Z}_{11}[x]$ though: a quadratic over a field is irreducible iff it has no root there, and $x^2+5$ has a root in $\mathbb{Z}_{11}$ iff $-5\equiv 6$ is a square mod $11$; the squares mod $11$ are $\{1,3,4,5,9\}$, so it is not. I'm posting this CW answer so that users who confidently concur have something to vote on, and so this question doesn't stagnate in the Unanswered Questions Queue. If however anyone would like to write a more substantial response to the question, please downvote this answer and post your response.
Find the points of local maximum and minimun of the function: $\sin^{-1}(2x\sqrt{1-x^2}), ~x\in (-1,1)$ Find the points of local maximum and minimun of the function: $$f(x)=\sin^{-1}(2x\sqrt{1-x^2})~~~~;~~x\in (-1,1)$$ I know $$f'(x)=-\frac{2}{\sqrt{1-x^2}}$$ How to find the local maximum and minimum? I have drawn the fig and seen the points of local maximum and minimum. But how to find then analytically?
As suggested in the comment using my answer here in Solving $\arcsin\left(2x\sqrt{1-x^2}\right) = 2 \arcsin x$, $$ \arcsin(2x\sqrt{1-x^2}) =\begin{cases}2\arcsin x \;\;;-\dfrac1{\sqrt2}\le x\le \dfrac1{\sqrt2}\iff-\dfrac\pi4\le\arcsin x\le\dfrac\pi4\\ \pi - 2\arcsin x\;\;; \dfrac1{\sqrt2}< x\le 1\iff\dfrac\pi4<\arcsin x\le\dfrac\pi2\\ -\pi -2\arcsin x\;\;;-1< x \le-\dfrac1{\sqrt2}\iff-\dfrac\pi2\le\arcsin x<-\dfrac\pi4\end{cases} $$ Can you take it from here?
Introduction to group rings (reference request) I'm looking for a thorough introduction to group rings, specifically the simple case of group rings over the integers where the group is abelian and finitely generated. I realise that these are quotients of polynomial rings over the integers in finitely many variables, but I'm interested also in the perspective from group rings. Specifically I'd like to know, given such a group, methods to determine whether the group ring is connected, reduced and/or a domain, what its unit group is, etc.
You can read "Groups, Rings and Galois Theory" by Victor P. Snaith. It provides a good introduction to the topic. Also, "Groups, Rings and Modules" by Auslander and Buchsbaum is quite good. Hope it helps.
Integral of product of two error complementary functions (erfc) Could you please help me to show that the integral $$ \int_0^{\infty} \mathrm{erfc}(ax) \, \mathrm{erfc}(bx)\, \mathrm{d}x $$ is equal to $$ \frac{1}{ab\sqrt{\pi}} (a+b-\sqrt{a^2+b^2}), $$ where $$ \mathrm{erfc}(y)=\frac{2}{\sqrt{\pi}} \int_y^{\infty} \exp(-t^2)\, \mathrm{d} t. $$ I have tried to expand the integral as $$ \frac{4}{\pi }\int_0^{\infty} \int_{ax}^{\infty} \int_{bx}^{\infty} \exp(-t^2 -s^2) \, \mathrm{d}s \, \mathrm{d}t \, \mathrm{d} x $$ but I could not come up with the right change of variables. Any ideas on how to proceed? Thank you in advance!
Use the probabilistic way, which shows this is a problem of geometry in the plane... That is, first note that, for every $x$, $$\mathrm{erfc}(x)=2P(X\geqslant \sqrt2 x)$$ where $X$ is standard normal hence, considering $(X,Y)$ i.i.d. standard normal and assuming that $a$ and $b$ are positive, one sees that the integral to be computed is $$I=4\int_0^\infty P(A_x)dx$$ where $$A_x:=[X\geqslant ax\sqrt2,Y\geqslant bx\sqrt2]$$ Now, $$(X,Y)=(R\cos\Theta,R\sin\Theta)$$ where $(R,\Theta)$ is independent, $\Theta$ is uniform on $(0,2\pi)$ and $R\geqslant0$ is such that $$P(R\geqslant r)=e^{-r^2/2}$$ for every $r\geqslant0$, hence $A_x\subset A_0=[\Theta\in(0,\pi/2)]$ and, on the event $A_0$, $$P(A_x\mid\Theta)=P(R\cos\Theta\geqslant ax\sqrt2,R\sin\Theta\geqslant bx\sqrt2\mid\Theta)=e^{-x^2u(\Theta)^2}$$ with $$u(\theta):=\max\left(\frac{a}{\cos\theta},\frac{b}{\sin\theta}\right)$$ Thus, still on $A_0$, $$\int_0^\infty P(A_x\mid\Theta)dx=\int_0^\infty e^{-x^2u^2(\Theta)}dx=\frac{\sqrt{\pi}}{2u(\Theta)}$$ This proves that $$\sqrt\pi I=4\sqrt\pi E\left(\int_0^\infty P(A_x\mid\Theta)dx\right)=2\pi E\left(\frac1{u(\Theta)}\mathbf 1_{A_0}\right)$$ that is, $$\sqrt\pi I=2\pi E\left(\min\left(\frac{\cos\Theta}a,\frac{\sin\Theta}b\right)\mathbf 1_{A_0}\right)$$ which is, using the distribution of $\Theta$, $$\sqrt\pi I=\int_0^{\pi/2}\min\left(\frac{\cos\theta}a,\frac{\sin\theta}b\right)d\theta=\frac1b\int_0^{\vartheta(a,b)}\sin\theta d\theta+\frac1a\int_{\vartheta(a,b)}^{\pi/2}\cos\theta d\theta$$ where the angle $\vartheta(a,b)$ in $(0,\pi/2)$ is uniquely defined by the condition that $$\tan\vartheta(a,b)=b/a$$ hence $$\sqrt\pi I=\frac{1-\cos\vartheta(a,b)}b+\frac{1-\sin\vartheta(a,b)}a$$ Finally, $$\cos\vartheta(a,b)=\frac{a}{\sqrt{a^2+b^2}}\qquad\sin\vartheta(a,b)=\frac{b}{\sqrt{a^2+b^2}}$$ hence $$I=\frac1{\sqrt{\pi}ab}\left(a-\frac{a^2}{\sqrt{a^2+b^2}}+b-\frac{b^2}{\sqrt{a^2+b^2}}\right)=\frac1{\sqrt{\pi}ab}\left(a+b-\sqrt{a^2+b^2}\right)$$
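For what it's worth, the closed form is easy to confirm numerically (my own check; numpy and scipy are assumptions here):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def check(a, b):
    numeric, _ = quad(lambda x: erfc(a*x) * erfc(b*x), 0, np.inf)
    exact = (a + b - np.sqrt(a*a + b*b)) / (a * b * np.sqrt(np.pi))
    return numeric, exact

print(check(1.0, 1.0))    # the two numbers agree
print(check(0.5, 2.0))
```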
Does there exist continous function $f(x)$ defined on $(-\infty ; +\infty)$? Does there exist continous function $f(x)$ such that $$f(x)=\begin{cases} \frac{m}{n} & \text{if } x \text{ is irrational,} \\ \text{irrational} & \text{if } x \text{ is rational} \end{cases}$$ I think it's impossible, as definition of that function is similar to Dirichlet Function or Thomae's function. And these functions are always discontinous somewhere. Please help, I don't know what to start with. I'm first year undergraduate
No, there is no such function. First, $f$ cannot be constant (a constant value would be rational, as the image of an irrational, and irrational, as the image of a rational, at the same time), so by the intermediate value theorem, there exists an interval $[a,b]$ such that $[a,b] \subset f(\Bbb R)$ ($a < b$). So we have $[a,b] \cap (\Bbb{R}-\Bbb{Q}) \subset f( \Bbb{Q} )$, since every irrational value of $f$ comes from a rational argument. But $\Bbb{Q}$ is countable, so $f( \Bbb{Q} )$ is countable (or finite) too. On the other hand, $[a,b] \cap (\Bbb{R}-\Bbb{Q})$ is uncountable: contradiction.
Evaluate a limit involving definite integral Evaluate the following limit: $$\lim_{n \to \infty} \left[n - n^2 \int_{0}^{\pi/4}(\cos x - \sin x)^n dx\right]$$ I've tried to rewrite the expression as follows: $$\lim_{n \to \infty} \left[n - n^2 \sqrt{2}^n \int_{0}^{\pi/4}\sin^n \left( \frac{\pi}{4} - x \right) dx\right]$$ However, this doesn't seem to help too much. Thank you!
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Note that \begin{align} &\lim_{n \to \infty}\braces{n - n^{2}\int_{0}^{\pi/4}\bracks{\cos\pars{x} - \sin\pars{x}}^{\,n}\,\dd x} \\[5mm] = &\ \lim_{n \to \infty}\bracks{n - 2^{n/2}\,n^{2} \int_{0}^{\pi/4}\cos^{n}\pars{x + {\pi \over 4}}\,\dd x}\label{1}\tag{1} \end{align} When $\ds{n \to \infty}$, the main contribution to the integral comes from values of $\ds{x \gtrsim 0}$ such that it's is a 'candidate' to be evaluated by means of the Laplace Method. Namely, \begin{align} \int_{0}^{\pi/4}\cos^{n}\pars{x + {\pi \over 4}}\,\dd x & = \int_{0}^{\pi/4}\exp\pars{n\ln\pars{\cos\pars{x + {\pi \over 4}}}}\,\dd x \\[5mm] & \sim \int_{0}^{\infty}\exp\pars{n\bracks{-\,{\ln\pars{2} \over 2} - x}} \pars{1 - nx^{2}}\,\dd x \\[5mm] & = 2^{-n/2}\,\pars{{1 \over n} - {2 \over n^{2}}}\quad \mbox{as}\ n \to \infty\label{2}\tag{2} \end{align} With \eqref{1} and \eqref{2}: \begin{align} &\lim_{n \to \infty}\braces{n - n^{2}\int_{0}^{\pi/4}\bracks{\cos\pars{x} - \sin\pars{x}}^{\,n}\,\dd x} = \lim_{n \to \infty}\braces{n - 2^{n/2}\,n^{2} \bracks{2^{-n/2}\,\pars{{1 \over n} - {2 \over n^{2}}}}} \\[5mm] = &\ \bbx{\ds{2}} \end{align}
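A numerical check of the limit (my own addition, assuming scipy is available): the quantity $n-n^2\int_0^{\pi/4}(\cos x-\sin x)^n\,dx$ should creep up towards $2$.

```python
import numpy as np
from scipy.integrate import quad

for n in (10, 50, 200, 1000):
    I, _ = quad(lambda x: (np.cos(x) - np.sin(x))**n, 0, np.pi/4)
    print(n, n - n**2 * I)      # values approach 2 as n grows
```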
Increasing or decreasing permutation of $N$ numbers Given $N$ numbers, how many ways can we arrange those numbers such that they are Strictly increasing or decreasing. For eg - $N=2$ we have $1,2$ so possible rearrangements are- {{1,2} , {2,1}} Can someone help me out ? EDIT : The given answer is for non-strictly but it would be great if I have the answer converted to its special case when it's strictly increasing or decreasing.
This answer is for the original version of the question before the author changed the question. An example: Let's just look at nondecreasing sequences. To deal with nonincreasing sequences, the approach is the same (and to deal with them together, you need to subtract the number of constant sequences). You have the set $\{1,2,3\}$. Let $x_1$ be the number of $1$'s, $x_2$ be the number of $2$'s and $x_3$ be the number of $3$'s. Once the counts of $1$, $2$, and $3$ are known, the nondecreasing and nonincreasing sequences are known. Since the sequences must be of length $3$, you're considering the problem of finding $x_1\geq 0$, $x_2\geq 0$, and $x_3\geq 0$ such that $x_1+x_2+x_3=3$. This is a standard formulation of the stars-and-bars problem. The number of such combinations is $$ \binom{3+3-1}{3}=\binom{5}{3}=10. $$ We can check this with \begin{align*} (1,1,1) && (1,1,2) && (1,1,3)\\ (1,2,2) && (1,2,3) && (1,3,3)\\ (2,2,2) && (2,2,3) && (2,3,3)\\ (3,3,3). \end{align*} In this case, the total is $2\binom{5}{3}-3=17$ ways to be nonincreasing or nondecreasing. (10 sequences that are nonincreasing, 10 sequences that are nondecreasing, and 3 sequences that are both nonincreasing and nondecreasing - the constant ones). In general, the formula should be $$ 2\binom{2N-1}{N}-N. $$
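A brute-force check of that formula for small $N$ (my own script, standard library only): count length-$N$ sequences over $\{1,\dots,N\}$ that are nondecreasing or nonincreasing.

```python
from itertools import product
from math import comb

for N in range(2, 6):
    mono = sum(
        1 for s in product(range(1, N + 1), repeat=N)
        if all(a <= b for a, b in zip(s, s[1:])) or all(a >= b for a, b in zip(s, s[1:]))
    )
    print(N, mono, 2*comb(2*N - 1, N) - N)   # the two counts agree (e.g. 17 for N = 3)
```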
find Asymptotes of $f(x) = \arcsin(\frac{2x}{1+x^2})$ I'm trying to find the asymptotes of $f(x) = \arcsin(\frac{2x}{1+x^2})$. * *I've found that this function has no vertical asymptote, since $f$ is bounded between $[-\pi/2 , \pi/2 ]$, and since $\arcsin x$ is continuous where it is defined - for every $x_0 \in R$, $\lim_{x\to x0^+}|f(x)| = |f(x_0)| \neq \infty $. Hopefully this once is correct, please correct me if it isn't. *I think I'm wrong in the calculation of the horizontal asymptotes : if $y=ax+b$ is a horizontal asymptote at $\infty$, then $a = \lim_{x\to\infty}\frac{f(x)}{x} = 0$. Now, $b= \lim_{x\to\infty}(f(x)-ax) = \lim_{x\to\infty}f(x) = 0$ So I'm getting that this function has no vertical asymptotes, which I guess is correct, but I also get $y=0$ as a horizontal asymptotes which I'm pretty sure is wrong.. Where is my mistake?
Horizontal asymptotes are of the form $y=\text{constant}$; the form $a\,x+b$ is for oblique asymptotes. And there is no mistake: since $\frac{2x}{1+x^2}\to 0$ as $x\to\pm\infty$, the function tends to $0$ at both ends, so $y=0$ really is its horizontal asymptote (a plot of the function confirms this).
Is this logarithm-based way to solve an exponential equation valid? A friend of mine solved the following exponential equation in the following way: $$4^{x+1}-8^{2x}=0$$ Since $\log_{4}8=\frac{3}{2}$, $4^{x+1}-(4^{2x})^{\frac{3}{2}}=0 \Leftrightarrow 4^{x+1}-4^{3x}=0$. Now that both $x$-terms are base 4, this equation is trivial to solve. However, I don't understand how that process can work. Is there any base to this process or was it just a lucky coincidence? Note: I know how to solve that equation, I'm only asking if my friend's approach is valid or not
Another, less convoluted way is to convert everything to base 2: $$ 0 = (2^2)^{x+1} - (2^3)^{2x} = 2^{2x+2} - 2^{6x}... $$
When does pullback along a continuous map admit a left adjoint? In this MSE question I asked about conditions ensuring the existence of a right adjoint to base change along a continuous map, possibly in a convenient subcategory of topological spaces. What about the existence of a left adjoint? Does it always exist in the category of topological spaces? What are sufficient conditions/nice subcategories?
For any morphism $f : A \to B$ in any category $\mathcal{C}$, there is a functor $\Sigma_f : \mathcal{C}/A \to \mathcal{C}/B$ given by composing objects with $f$. If the pullback functor $f^* : \mathcal{C}/B \to \mathcal{C}/A$ exists, then $\Sigma_f \dashv f^*$. (I'm not sure if we still use the name $\Sigma_f$ should $f^*$ not exist.) The general case reduces to the case where $B$ is a terminal object, where we substitute $\mathcal{C}/1 \equiv \mathcal{C}$. In this case it's easy to see the adjunction: given an object $X \to A$ of $\mathcal{C}/A$ and $Y$ of $\mathcal{C}$, we need to prove the natural isomorphism $$ \hom_\mathcal{C}(X, Y) \cong \hom_{\mathcal{C}/A}(X \to A, Y \times A \to A)$$
Evaluate integral with exponential and polynomial How can I show that \begin{align} \int_{-\infty}^\infty e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx=\infty \end{align} for $t\neq0$. I started as follows: \begin{align} \int_{-\infty}^\infty e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx=\int_{-\infty}^0 e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx+\int_{0}^\infty e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx. \end{align} Here I can 'see' that the right integrand goes to infinity, and the left one would just be a real value I'd say, because the integrand goes to 0. Is there a way for me to evaluate these integrals in a somewhat rigorous way?
Suppose first that $t>0$. You have $$\lim_{x\to\infty}\frac{e^{tx}}{\pi(1+x^2)}=\infty.$$ As the integrand is positive everywhere, \begin{align} \int_{-\infty}^\infty e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx\geq \int_0^\infty e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx=\infty. \end{align} For $t<0$, you use that $$\lim_{x\to-\infty}\frac{e^{tx}}{\pi(1+x^2)}=\infty,$$ and now \begin{align} \int_{-\infty}^\infty e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx\geq \int_{-\infty}^0 e^{tx}\frac{1}{\pi(1+x^2)}\,\mathrm dx=\infty. \end{align}
Find a graph $G$ with 8 vertices such that $G$ and its complement are both planar. I have no problem producing a 8 vertices planar graph $G$, but I couldn't figure out how to find complement with $G$ that's also planar. Is there any systematic way of drawing such graph? Thanks for any help.
Take an $8$-cycle $v_1v_2v_3v_4v_5v_6v_7v_8v_1$ and add two more edges $v_1v_5$ and $v_2v_6.$ The resulting graph $G$ is clearly planar. I leave it to you to verify that the complement $\overline G$ is also planar. ($\overline G$ is a cube with $6$ additional edges, namely, a diagonal drawn on each face.) Methodology? It just seemed like a good idea to start with a maximal planar graph; the more edges in the graph, the fewer edges in the complement, and the easier it will be to draw the complement in the plane. So I took a cube (no particular reason, just because it's a nice planar graph with $8$ vertices) and triangulated each of the faces. The complement turned out to be a cycle plus two diagonals, which was planar and easy to describe. P.S. To see that $G$ is planar, let $v_1=(0,0),\ v_2=(1,1),\ v_3=(1,2),\ v_4=(1,3),$ $v_5=(1,4),\ v_6 (2,4),\ v_7=(2,2),\ v_8=(2,0),$ and draw the edges $v_1v_2,\ v_2v_3,\ v_3v_4,\ v_4v_5,\ v_5v_6,\ v_6v_7,\ v_7v_8,\ v_8v_1,\ v_1v_5,\ v_2v_6$ as straight line segments.
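If you want to check the claim by machine rather than by drawing, here is a small sketch using networkx (my own addition, and the library is an assumption); the vertices $v_1,\dots,v_8$ become $0,\dots,7$.

```python
import networkx as nx

G = nx.cycle_graph(8)                  # the 8-cycle v1 v2 ... v8 v1
G.add_edges_from([(0, 4), (1, 5)])     # the two chords v1 v5 and v2 v6

print(nx.check_planarity(G)[0])                  # True
print(nx.check_planarity(nx.complement(G))[0])   # True as well
```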
How to evaluate $\sum\limits_{k=0}^{n}\arctan f(k)$ where $f(k)$ is a rational fraction Find the sum closed form $$\sum_{k=0}^{n}\arctan{\dfrac{k^4+6k^3+10k^2-k-9}{(k+1)(k+2)(k+3)(k^3+7k^2+15k+8)}}$$ For problems involving sums, the idea is to use trigonometric identities to write the sum in the form $$\sum_{k=1}^{n}[g(k)-g(k-1)]$$ and I initially considered pairing every two terms up to use the $\arctan x+\arctan y$ trick, but it doesn't work because each $\arctan$ term has a different coefficient.
By setting $f(k)=3+(7+2i)k+(5+i)k^2+k^3$ we are dealing with: $$ \sum_{k=1}^{n}\text{arg}\left(f(k)\;\overline{f(k+1)}\right) =\text{arg }f(1)-\text{arg }f(n+1).\tag{1}$$ In order to notice that, I factored $(k+1)(k+2)(k+3)(k^3+7k^2+15k+8)+i(k^4+6k^3+10k^2-k-9)$, getting: $$\left(3+(7+2 i) k+(5+i) k^2+k^3\right) \left((16-3 i)+(20-4 i) k+(8-i) k^2+k^3\right)\tag{2}$$ and checked that $(2)$ has the $f(k)\,\overline{f(k+1)}$ structure.
$|f(z)|\leq 1$, then possible value of $(e^f)''(0)$ Let $f$ be an analytic function on $\bar{D}=\{z\in \mathbb{C}:|z|\leq1\}$. Assume that $|f(z)|\leq1~ \forall z\in \bar{D}$. Then, which of the following is NOT a possible value of $(e^f)''(0)$. (i)6 (ii)2 (iii)$\frac{7}{9}e^{\frac{1}{9}}$ (iv)$\sqrt{2}+i\sqrt{2}$. I calculated $(e^f)'=f'e^f$. Here if $(e^f)'(0)$ were $0$ then I could have used Schwarz Lemma to find the estimate. But here it is not $0$. Don't know how to proceed.
If you've seen the Cauchy integral formula for derivatives of a complex analytic function $f(z)$ then you can use that to get a bound on both the first and second derivatives of $f(z)$ at $z=0$. The Cauchy integral formula for derivatives (on the unit disk for simplicity) says that if $f(z)$ is analytic on $\bar{D}$ and $|f(z)|\leq M$ then $$f^{(n)}(0)=\frac{n!}{2\pi i}\int_{|z|=1}\frac{f(z)}{z^{n+1}}\,dz$$ Now if you parametrize the boundary by $z=e^{i\theta}$ you can get $$f^{(n)}(0)=\frac{n!}{2\pi i}\int_0^{2\pi}f\left(e^{i\theta}\right)e^{-in\theta}i\,d\theta$$ So that $$|f^{(n)}(0)|\leq\frac{n!}{2\pi}\int_0^{2\pi}|f\left(e^{i\theta}\right)|\,d\theta$$ and therefore $$|f^{(n)}(0)|\leq n!M$$ since $|f\left(e^{i\theta}\right)|\leq M$ by hypothesis. Now apply that for $n=2$ on your second derivative of $e^f$ (so define $f$ above as $f:=e^f$) and you'll have an upper bound with $M=\left|e^f\right|\leq e^{|f|}=e$ and hence $\left|\left(e^f\right)''\right|(0)\leq 2e<6$
show $ x \to (x,f(x))$ is an embedding Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a continuous function and $(\mathbb{R}^n,\mathcal{O}_{\mathbb{R}^n}) $ the real numbers equipped with the Euclidean topology. Why is the map $F:\mathbb{R}^n \to \mathbb{R}^{m+n}$ with $ x \to (x,f(x))$ an embedding? Of course $F$ is naturally injective and $F$ is also continuous. But it's not clear to me why the map $F: (\mathbb{R}^n,\mathcal{O}_{\mathbb{R}^n}) \to (F(\mathbb{R}^n),\mathcal{O}_{\mathbb{R}^{m+n} | \mathbb{R}^n})$ is open? Where $\mathcal{O}_{\mathbb{R}^{m+n} | \mathbb{R}^n}$ is the subspace topology from $\mathbb{R}^n$ in $\mathbb{R}^{m+n}$. Can someone give me a little hint?
It is easier to show that it is closed. Indeed, let $A$ be closed in $\mathbb{R}^n$. We will show that $F(A)$ is closed. Pick any convergent sequence $(X_n)$ in $F(A)$. Obviously $X_n=(x_n, f(x_n))$ for some sequence $(x_n)$ in $A$. Since $(X_n)$ is convergent, so is $(x_n)$, because projection to the first coordinate is continuous. Since $A$ is closed then $(x_n)$ converges to $x\in A$ and since $f$ is continuous then $f(x_n)$ converges to $f(x)\in f(A)$. Therefore $(x_n, f(x_n))$ converges to $(x, f(x))\in F(A)$ so $F(A)$ is closed. Thus $F$ is both closed and bijective on image. Therefore it's an embedding.
How do I disprove this by counterexample? If $2^n-1$ is composite then $n$ is composite, where $n$ is larger than $1$. How can I disprove this using a counterexample?
It is well known that $2^n-1$ can only be prime if $n$ is prime (a prime of the form $2^n-1$ is called a Mersenne prime). But the converse is not true: there are primes $n$ such that $2^n-1$ is composite, for example $n=11$ (the smallest one, since $2^{11}-1=2047=23\cdot 89$) or $n=23$, refuting the claim. The first few counterexamples are ? forprime(p=1,100,if(isprime(2^p-1)==0,print1(p," "))) 11 23 29 37 41 43 47 53 59 67 71 73 79 83 97 ?
How can I find the stability of the Fitzhugh-Nagumo model? I am studying the 1961 Fitzugh-Nagumo model paper (download it here), and I am lost in the stability study (p. 450). Specifically, I do not understand the Taylor series development. How does he reach the equations (6) p. 450? From the nullclines (p. 449): $x = -x + x^3/3 - z$ $y = (a-x)/b$ Fitzhugh reaches by Taylor series (p. 450) ($ ξ = x - x_{1}$ and $ η = y - y_{1} $): $ dξ/dt = c [η + (1 - x_{1}^2)ξ + x_{1}ξ^2 + ξ^3/3]$ $ dη/dt = -(ξ + bη) / c $ Can someone help me understand?
You don't get equation (6) from equations (4) and (5); those are only for determining what $x_1$ and $y_1$ are. What you do is substitute $x(t)=x_1+\xi(t)$ and $y(t)=y_1+\eta(t)$ into the actual system of ODEs, which consists of equations (1) and (2) on p. 447. Then $\dot x(t)$ is replaced by $\dot \xi(t)$ (since $x_1$ is just a constant), $y+x-x^3/3$ is replaced by $(y_1+\eta)+(x_1+\xi)-(x_1+\xi)^3/3$, and so on. After you've done that, and expanded everything on the right-hand side of the equations, you can simplify them by using that $x_1$ and $y_1$ are solutions of (4) and (5). This is important, since it will make all the constant terms cancel out, leaving only terms which contain powers of $\xi$ and/or $\eta$. And then, to get a linear system, one omits all terms of degree greater than one. All this is a standard procedure called “linearization at an equilibrium point (or fixed point, or singular point)”, which is explained in any textbook on dynamical systems, so if you're unfamiliar with how to do it or what it's useful for, it's perhaps a good idea to find a book where you can read more about it.
Definite integral of modified bessel functions of the first kind and trigonometric functions I have the following definite integral: $$ \int_{0}^{\pi} cos(n\theta)[I_{1}(acos\theta) + cos\theta I_{0}(acos\theta)]d\theta $$ with $I_{1}$ and $I_{0}$ the first and zero-order modified bessel functions of the first kind respectively ; $n \in \mathbb{N}^{*}$ and $a \in \mathbb{R}^{+*}$. with the hint that, $$ I_{n}(z) = \frac{1}{\pi} \int_{0}^{\pi} e^{zcos\theta}cos(n\theta)d\theta $$ we can transform the previous integral, $$ \int_{0}^{\pi} cos(\theta)I_{n}(acos\theta) + \frac{1}{2}(I_{n+1}(acos\theta) + I_{n-1}(acos\theta))d\theta $$ $$ \int_{0}^{\pi} cos(\theta)I_{n}(acos\theta) - \frac{1}{asin\theta}(I_{n}^{'}(acos\theta))d\theta $$ I couldn't go any further in the computation. But using the power series representation and considering $n$ odd and $n=2m+1$, we obtain: $$ \pi \sum_{k=0}^{\infty} \frac{U^{2k+n-1}}{2^{2k+n}\Gamma (k+n+1) k!} \prod_{i=1}^{k+m} (\frac{2i-1}{2i}) (U\frac{2(k+m)+1}{2(k+m+1)} + 2k+n) $$ If someone has an idea ... Thanks!
Take the integral $$ \int_{0}^{\pi} \cos(\theta) I_{n}(a\cos(\theta)) + \frac{1}{2}\left(I_{n+1}(a\cos(\theta)) + I_{n-1}(a\cos(\theta))\right) d\theta $$ and apply the solution of the integral here, p724, 6.681-3: $$ \int_{0}^{\frac{\pi}{2}} \cos(2\mu x)\,I_{2\nu}(2a\cos x)\,dx = \frac{\pi}{2}I_{\nu-\mu}(a)I_{\nu+\mu}(a) $$ This is also valid for the sine version by symmetry in our case. Setting $n=2m+1$ we obtain $$ \pi I_{m}(\tfrac{a}{2})I_{m+1}(\tfrac{a}{2}) + \frac{\pi}{2}I_{m+1}(\tfrac{a}{2})^{2} + \frac{\pi}{2}I_{m}(\tfrac{a}{2})^{2} $$
Derivative of $e^x:$ What's wrong with this proof? Note that $$\lim_{h \rightarrow 0} (1 +h)^{1/h} = e$$ Let $f(x) = e^x$. Then $f'(x)$ is given by, $$\lim_{h \rightarrow 0} \frac{e^{x+h}-e^x}{h}=\lim_{h \rightarrow 0} \frac{e^x(e^h-1)}{h} = \lim_{h \rightarrow 0} e^x\cdot \frac{(e^h-1)}{h} $$ Now since $$e = \lim_{h \rightarrow 0} (1 +h)^{1/h},$$ we have $$\lim_{h \rightarrow 0} e^x\cdot \frac{((\lim_{h \rightarrow 0} (1 +h)^{1/h})^h-1)}{h} =e^x\cdot \lim_{h \rightarrow 0} \frac{((\lim_{h \rightarrow 0} (1 +h)^{h/h})-1)}{h} = \\ e^x \cdot \lim_{h \rightarrow 0} \frac{((\lim_{h \rightarrow 0} (1 +h))-1)}{h} = e^x \cdot \lim_{h \rightarrow 0} \frac{((\lim_{h \rightarrow 0} (h))+1-1)}{h} = \\e^x \cdot \lim_{h \rightarrow 0} \frac{\lim_{h \rightarrow 0} h}{h} = e^x \cdot 1 = e^x$$
The problem is you have two different limits and you're using $h$ to represent the variable in both limits. You need to call one of them something besides $h$. Note that this makes the step where you simplify $(1+h)^{h/h} = (1+h)$ invalid.
Evaluate $\int_{0}^{2\pi} \frac{d\theta}{1+\sin(\theta)\cos(\alpha)}$, ($\alpha = $ const.) by contour integration I'm having trouble with this one. Letting $z = e^{i\theta}$ lets me rearrange the equation to $$\int_{0}^{2\pi} \frac{2dz}{2iz+(z^2-1)\cos(\alpha)}$$ where the roots are $$z= \frac{-i\pm i|\sin(\alpha)|}{\cos(\alpha)}$$ However when finding the residue from here I get an extra factor of $\cos(\alpha)$ where there shouldn't be one. Any help is appreciated!
Using contour integration ... \begin{eqnarray*} \oint \frac{2dz}{2iz+(z^2-1)\cos(\alpha)} \end{eqnarray*} where the contour of the integral is the unit circle (as stated in the question ?). Rearrange to \begin{eqnarray*} \frac{2}{\cos(\alpha)} \oint \frac{dz}{\left( z+i\frac{1+\mid\sin(\alpha)\mid}{\cos(\alpha)} \right)\left( z+i\frac{1-\mid\sin(\alpha)\mid}{\cos(\alpha)} \right)} \end{eqnarray*} Now Doug M is right: one of these poles is inside the unit circle & the other is outside. The pole inside is $z=-i\frac{1-\mid\sin(\alpha)\mid}{\cos(\alpha)}$ (its modulus is $\frac{1-\mid\sin\alpha\mid}{\mid\cos\alpha\mid}<1$), and its residue is \begin{eqnarray*} \frac{\cos(\alpha)}{2i \mid \sin(\alpha) \mid}. \end{eqnarray*} Now $2 \pi i $ times the sum of the residues inside (the residue theorem), together with the prefactor $\frac{2}{\cos\alpha}$, gives \begin{eqnarray*} \int_{0}^{2\pi} \frac{d\theta}{1 + \sin\theta \cos\alpha}=\frac{2 \pi}{\mid \sin \alpha \mid}. \end{eqnarray*}
The solution to a summation $4^n/(4^n+2 )$ I found this problem on summations, and I'm not really sure how to solve it. Could someone give a hint as to how to do so? Find the value of $$\sum_{i=1}^{1000}f\left(\frac{i}{1000}\right),\qquad f(x) = \frac{4^x}{4^x+2}$$ It came on an exam where we couldn't use calculators, and it apparently is an integer answer, though Wolfram Alpha disagrees...(Even if it isn't, I would still like to know how to do it)
$$\sum_{n=1}^{1000}\frac{4^n}{4^n+2} = \sum_{n=1}^{1000}\frac{2^{2n}}{2^{2n}+2} = \sum_{n=1}^{1000}\frac{2^{2n-1}}{2^{2n-1}+1} = \frac23 + \frac89 + \frac{32}{33} + \frac{128}{129} + \cdots + \frac{2^{1999}}{{2^{1999}+1}} \approx 999.5149000482058.$$
Find the derivative of $f(x)=7x\ln |x|$ Find the derivative of $$f(x)=7x\ln|x|$$ How do they get the answer $$f'(x)= 7\ln|x|+7$$
Since $\dfrac{d}{dx}\vert x\vert=\dfrac{\vert x\vert}{x}$ \begin{eqnarray} \frac{d}{dx}7x\ln\vert x\vert &=&7\ln\vert x\vert+7x\cdot\frac{1}{\vert x\vert}\cdot\frac{\vert x\vert}{x}\\ &=&7\ln\vert x\vert+7 \end{eqnarray} Proof of derivative of $\vert x\vert$: \begin{eqnarray} \dfrac{d}{dx}\vert x\vert&=&\lim_{h\to0}\frac{\vert x+h\vert-\vert x\vert}{h}\\ &=& \lim_{h\to0}\frac{\vert x+h\vert-\vert x\vert}{h}\cdot\frac{\vert x+h\vert+\vert x\vert}{\vert x+h\vert+\vert x\vert}\\ &=&\lim_{h\to0}\frac{\vert x+h\vert^2-\vert x\vert^2}{h\cdot\left(\vert x+h\vert+\vert x\vert\right)}\\ &=&\lim_{h\to0}\frac{2xh+h^2}{h\cdot\left(\vert x+h\vert+\vert x\vert\right)}\\ &=&\lim_{h\to0}\frac{2x+h}{\vert x+h\vert+\vert x\vert}\\ &=&\frac{2x}{2\vert x\vert}\\ &=&\frac{x}{\vert x\vert}=\frac{\vert x\vert}{x} \end{eqnarray}
Does the Monty Hall Paradox hold true if the Game Host can open either door? The Monty Hall Paradox is the answer to this question. Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? (wikipedia) If you are familiar with the paradox, you know that the answer is to always switch because the odds compress into the other door with 2/3 odds. But if there are three different things, and the game host is allowed to open either non-picked door, and the revealed item is not the desired item, is it still advantageous to switch? Does the removal of the mechanic where the game host might be limited to only opening one door keep the odds at 1/3 for each door? My initial instinct is that it stays the same, but this is already a mind puzzle anyway, so I'm not sure.
The Monty Hall problem is basically a shell game. The dealer puts a ball under one of three cups and shuffles them around. You pick one cup, the dealer lifts another, and you have a choice of whether to stay with your original choice, or move. Basically there are three events. $A$ your first choice is correct, $B$ the dealer lifts an empty cup, and $C$ changing cups is a good idea. In the classic scenario, the dealer always chooses the empty cup and you are asked to evaluate the probability that changing cups is a good idea given that certain event. $$\begin{align}\mathsf P(C\mid B) ~&=~ \mathsf P(C\mid A,B)~\mathsf P(A\mid B)+\mathsf P(C\mid A^\complement,B)~\mathsf P(A^\complement\mid B) \\ &=~ 0\cdot \tfrac 13+1\cdot \tfrac 23 \\ &=~ \tfrac 23\end{align}$$ Since it is certain that the dealer always chooses an empty cup, the event that your first guess is right is independent of the dealer's revelation. So, how does uncertainty about whether the dealer will lift an empty cup change this? We go to Bayes' Rule. Assume that, when your first guess was wrong, the dealer's choice between the two remaining cups is such that he reveals the empty cup with probability $p$ (and the ball with probability $1-p$). $$\begin{align}\mathsf P(C\mid B) &=~ \dfrac{\mathsf P(C\mid A,B)~\mathsf P(B\mid A)~\mathsf P(A)~+~\mathsf P(C\mid A^\complement,B)~\mathsf P(B\mid A^\complement)~\mathsf P(A^\complement)}{\mathsf P(B\mid A)~\mathsf P(A)+\mathsf P(B\mid A^\complement)~\mathsf P(A^\complement)} \\[1ex] &=~ \dfrac{0\cdot 1\cdot \tfrac 13 ~+~1\cdot p\cdot \tfrac 23}{1\cdot \tfrac 13 ~+~ p\cdot \tfrac 23}\\[1ex] &=~ \dfrac {2p}{1+2p}\end{align}$$ So when the choice is unbiased, then $\mathsf P(C\mid B)= \tfrac 12$. This is what most people's intuition about the classic Monty Hall game tells them, and it's based on failing to realise the host cheats in favour of the player by never picking the prize.
Methods for choosing $u$ and $dv$ when integrating by parts? When doing integration by parts, how do you know which part should be $u$ ? For example, For the following: $$\int x^2e^xdx$$ $u = x^2$? However for: $$\int \sqrt{x}\ln xdx$$ $u = \ln x$? Is there a rule for which part should be $u$ ? As this is confusing.
There is an acronym called "LIATE": Set $u$ to be the first function you see on this list (ordered): * *logarithm *inverse trigonometric function *algebraic function *trigonometric function *exponential Doesn't always work perfectly, but it's your best bet. In your first integral, the algebraic function $x^2$ takes precedence. In the second, the logarithm $\ln x$ takes precedence.
$A=\emptyset $ if and only if $B = A \bigtriangleup B$ Is this true? If $A$ and $B$ are sets, then $A=\emptyset $ if and only if $B = A \bigtriangleup B$. If $A=\emptyset$ then $B=\emptyset$ too? Could someone help me please?
I hope you know that $A\bigtriangleup B=(A\setminus B)\cup(B\setminus A)$. Then, if $A=\emptyset$, obviously $A\bigtriangleup B=(\emptyset\setminus B)\cup(B\setminus\emptyset)=B$. For the other direction, note that $A\setminus B$ never is a subset of $B$, unless it is empty. So $A\subseteq B$. But $B\setminus A=B$, so if $A\neq\emptyset$, then $B\setminus A\subsetneq B$. Thus $A=\emptyset$.
Using Darboux theorem for $f'$ check whether the given function $f(x)=x-\left[x\right], x\in[0,2]$ is a derivative of a function Using Darboux theorem for $f'$ check whether the given function $f(x)=x-\left[x\right], x\in[0,2]$ is a derivative of a function. Please help me to solve the problem. I see that $f(x)=0=f(2)$. What to do?
Hint: Note that $$f(x)=\begin{cases} x, &0\leq x<1\\ x-1, &1\leq x<2\\ 0, &x=2\end{cases}.$$ Thus $f$ has a jump discontinuity at $x=1$. On the other hand Darboux theorem tells you that the derivative of a function satisfies the intermediate value property. Now can you see that $f$ fails to satisfy this property?
smallest positive integer $r$ so that $5^{33333}≡ r (mod 11)$ Title is pretty self explanatory. I tried using the division algorithm to get some a hint about the $5^{33333}$. This was not helpful.
First check that $5^{10} \equiv 1 \pmod{11}$. Now divide the exponent to get $33333 \equiv 3 \pmod{10}$. This gives $$5^{33333} \equiv 5^{3} \equiv 4 \pmod{11}.$$
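This is trivial to confirm with Python's built-in three-argument modular exponentiation (my own remark, not needed for the proof):

```python
print(pow(5, 33333, 11))   # prints 4
```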
How to Solve The Redundant Literal Rule (AND)? (A . B = A . (A̅ +B)) How do I prove it? (Tried so many times)
Use the distributive law and you get $$ A\cdot\bar{A} + A\cdot B. $$ $\text{True AND False}$ results in $\text{False}$, which gives $A\cdot \bar{A} = 0$. Again, $\text{False OR X}$ always results in $X$, hence we get $0 + AB = AB$. Thus, $$ A(\bar{A} + B) = A\cdot\bar{A} + AB = AB$$
Solving a linear least squares problem with trigonometric functions We want to calculate the amplitude $A$ and the phase angle $\phi$ of the oscillation $b(t)=A\sin(2t+\phi)$. We have $t_k=(0,\pi/4, \pi/2, 3\pi/4)$ and $b_k=(1.6,1.1,-1.8,0.9)$ Use $\sin(A+B)=\sin(A)\cos(B)+\cos(A)\sin(B)$ and $\alpha=A\cos(\phi), \beta=A\sin(\phi)$ to get a linear problem. We get $b(t)=\alpha\sin(2t)+\beta\cos(2t)$ Using the above, we get $b^T=A (\alpha, \beta)^T$ Using QR and/or normal equation [ code:https://hastebin.com/otezejobaj.pl ] we get $\alpha=0.1, \beta=1.7$ Now, I should write down the residual vectors for QR and for the normal equation Question 1: Are the residual vectors here: $Ax_1 - b$ and $Ax_2-b$ with $x_1=(\alpha,\beta)$ from QR Method and $x_2=(\alpha,\beta)$ from the normal equation. (It's the same result here)? Now, I should calculate $A$ and $\phi$. How should I do that numerically? Also, I noted the following: (1)$\beta = A\sin(\phi) \Rightarrow 1.7=a\sin(\phi)$ and $b(0)=A\sin(\phi)=1.6$ which can't be. Question 2: Is there a reason that (1) isn't legit? Edit: Since I'm in a least square problem, I can't actually expect (1) to work, right? Anyway, Question 1 is the important question here.
The four residuals are $$b(t_k)-b_k$$ evaluated with the computed $\alpha,\beta$, which you can group as a sum of squares $$\sum_{k=1}^4(b(t_k)-b_k)^2.$$ Also, $$A=\sqrt{\alpha^2+\beta^2},\\\tan\phi=\frac\beta\alpha.$$
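A compact numpy sketch of the whole computation (my own addition; numpy is an assumption), reproducing $\alpha=0.1$, $\beta=1.7$ and then the amplitude and phase:

```python
import numpy as np

t = np.array([0, np.pi/4, np.pi/2, 3*np.pi/4])
b = np.array([1.6, 1.1, -1.8, 0.9])
M = np.column_stack([np.sin(2*t), np.cos(2*t)])     # design matrix for (alpha, beta)

coef, *_ = np.linalg.lstsq(M, b, rcond=None)
alpha, beta = coef                                   # approximately 0.1 and 1.7
residual = M @ coef - b                              # the residual vector M x - b
A = np.hypot(alpha, beta)                            # amplitude A = sqrt(alpha^2 + beta^2)
phi = np.arctan2(beta, alpha)                        # phase, since tan(phi) = beta / alpha
print(alpha, beta, residual, A, phi)
```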
Fibonacci sequence as basis of a vector space Let $\mathbb{R}^\infty$ be the vector space of real sequences $x=(x_1,x_2,x_3,\dots)$ and let $W$ be the subspace of the sequences $y$ such that $y_n=y_{n-1}+y_{n-2}$ for $n \ge 3$. Which is its dimension? I think it is 2, since $y_3=y_1+y_2$ (that are two independent parameters), $y_4=y_2+y_3=y_1+2y_2$; $y_5=y_3+y_4=2y_1+3y_2$, $y_6=3y_1+5y_2$ and so on. Moreover, the base will be $$ Span \{(1,0,1,1,2,3,\dots), (0,1,1,2,3,5,\dots) \} $$ i.e. we obtain in both cases the Fibonacci sequence. Is the exercise well done?
Each element of $W$ is determined by its two initial values $y_1,y_2$. In other words it's determined by the vector $(y_1,y_2)\in\mathbb{R}^2$, so $W\cong\mathbb{R}^2$ and $\dim W=2$. Under this correspondence a basis of $\mathbb{R}^2$ gives a basis of $W$; for the basis $\{(1,0),(0,1)\}$ of $\mathbb{R}^2$ the corresponding basis of $W$ is the one you already wrote.
Positive-definite derivative implies injective function Suppose $f:\mathbb R^n\to \mathbb R^n$ is differentiable and that $Df$ is positive-definite at every point. As homework, I need to prove $f$ is injective. I thought about proving by contradiction. If $f(a) = f(b) ,a\neq b$, consider the straight line $\gamma:a\to b$ in $\mathbb R^n$, the composition $f\circ \gamma:\mathbb R\to \mathbb R$ (we take the codomain as $\mathbb R$ only) allows using the mean value theorem to find some $c\in (a,b)$ for which $(f\circ \gamma)^\prime (c)=0$. Then, by the chain rule $(f\circ \gamma)^\prime(c)=Df|_{\gamma c} \gamma^\prime(c)=0$. At this point I'm stuck. I don't see how to get to an inner product to contradict positive definiteness. Positive-definiteness implies invertibility whence $\gamma^\prime(c)=0$, which can't happen for a straight line, so it looks like the weaker hypothesis that $Df$ is everywhere invertible implies $f$ is injective. But that doesn't sound right...
Hint: You are almost there. You just have to take the scalar product with $(b-a)$, i.e. look at $\langle f(b)-f(a),b-a \rangle$, apply your argument and show that this quantity is strictly positive whenever $b\neq a$. More details: Let $\gamma(t)=a+t(b-a)$, $t\in [0,1]$, be our path from $a$ to $b$ and consider the real-valued function $$F(t)=\langle f(\gamma(t))-f(a), b-a \rangle$$ Suppose for a contradiction that $f(b)=f(a)$; this implies that $F(1)=F(0)=0$. Then by the MVT there is $0<c<1$ so that $$ F'(c) = \langle Df(\gamma(c)).(b-a), b-a \rangle = 0 $$ But $Df(\gamma(c))$ is positive definite (the derivative is positive definite at every point), so we conclude that $b-a=0$, a contradiction, and hence the function $f$ is injective. The proof shows in fact that for distinct $a$ and $b$ we have $F(1)>0$, i.e. that $$\langle f(b)-f(a),b-a \rangle > 0$$
Exact Values of the integal $\int_0^\infty \frac{r^{n-1}}{(1+r^2)^{\frac{s}{2}}}\,dr$ Does any one know the exact expression of the integral, $$E_n(s)=\int_0^\infty \frac{r^{n-1}}{(1+r^2)^{\frac{s}{2}}}\,dr~~~~s>n, n\in \mathbb{N}$$ or more generally, $$E_a(s)=\int_0^\infty \frac{r^{a-1}}{(1+r^2)^{\frac{s}{2}}}\,dr~~~~s>a, a\in \mathbb{R}$$ For the special case $s=n, n+2$ I find out by induction that $$ E_{n-1}(n)=\frac{\omega_{n-1}}{2\omega_{n-2}}~~\text{and}~~E_{n-1}(n+2)=\frac{\omega_{n-1}}{2n\omega_{n-2}}. $$ where $\omega_{n-1} = \frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}$ is the surface measure of the n-dimensional sphere of $\mathbb{R}^n$. Further result is welcome
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{r^{n - 1} \over \pars{1 + r^{2}}^{s/2}}\,\dd r & \stackrel{r^{2}\ \mapsto\ r}{=}\,\,\, {1 \over 2}\int_{0}^{\infty}{r^{n/2 - 1} \over \pars{1 + r}^{s/2}}\,\dd r \,\,\,\stackrel{\pars{r + 1}\ \mapsto\ r}{=}\,\,\, {1 \over 2}\int_{1}^{\infty}{\pars{r - 1}^{n/2 - 1} \over r^{s/2}}\,\dd r \\[5mm] & \stackrel{r\ \mapsto\ 1/r}{=}\,\,\, {1 \over 2}\int_{1}^{0}{\pars{1/r - 1}^{n/2 - 1} \over \pars{1/r}^{s/2}}\, {\dd r \over -r^{2}} = {1 \over 2}\int_{0}^{1}r^{s/2 - n/2 - 1}\,\pars{1 - r}^{n/2 - 1}\,\dd r \end{align} The integral converges whenever $$ \left.\begin{array}{lcl} \ds{\Re\pars{{s \over 2} - {n \over 2} - 1}} & \ds{>} & \ds{-1} \\[2mm] \ds{\Re\pars{{n \over 2} - 1}} & \ds{>} & \ds{-1} \end{array}\right\} \qquad\implies\qquad \bbx{\ds{0 < \Re\pars{n} < \Re\pars{s}}} $$ In such a case \begin{align} \int_{0}^{\infty}{r^{n - 1} \over \pars{1 + r^{2}}^{s/2}}\,\dd r & = {1 \over 2}\,\mrm{B}\pars{{s \over 2} - {n \over 2},{n \over 2}} = \bbx{\ds{{\Gamma\pars{s/2 - n/2}\Gamma\pars{n/2} \over 2\Gamma\pars{s/2}}}} \end{align} $\ds{\mrm{B}}$: Beta Function. $\ds{\quad\Gamma}$: Gamma Function.
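A numerical spot-check of the final Gamma-function formula (my own addition; scipy is assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def check(n, s):
    numeric, _ = quad(lambda r: r**(n - 1) / (1 + r*r)**(s/2), 0, np.inf)
    exact = gamma((s - n)/2) * gamma(n/2) / (2 * gamma(s/2))
    return numeric, exact

print(check(3, 7.5))      # the two values agree
print(check(2.2, 6.0))
```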
Does flipping the negative in a fraction flip all terms both sides? This is an expression that contains a negative in the denominator. If I were to take the negative and place it in the numerator, would this change all positive terms to negative and vice-versa? Negative in denominator: $\frac{a+3-y^2}{-2}$ After flipping negative to the numerator: $\frac{-a-3+y^2}{2}$ (Is this correct?)
This is not an equation. Rather, this is an expression. For the change, multiply both the numerator and the denominator by $-1$: that turns $\frac{a+3-y^2}{-2}$ into $\frac{-a-3+y^2}{2}$, so yes, what you wrote is correct.
Limit Calculation of $I_{n}=\int_{0}^{n}\sqrt[n]{x}\cdot e^{-x}dx$ with the dominated convergence theorem I considered the sequence $I_{n}=\int_{0}^{n}\sqrt[n]{x}\cdot e^{-x}dx$ and while calculating $\lim_{n\rightarrow \infty}I_n$ I wasn't sure how to use the dominated convergence theorem, which allows me to exchange limit and integration. I verified the conditions to use it and I know how to calculate, but the problem is the correct notation because $n$ is also a part of the bounds of integration.
Note that $\sqrt[n]{x}=e^{\frac{1}{n}\ln x}\leq e^{\ln x}=x$ if $x\geq 1$, while $\sqrt[n]{x}\leq 1$ if $0\leq x\leq 1$. So $$\mathbb{1}_{[0,n]}(x)\,\sqrt[n]{x}\,e^{-x}\leq \mathbb{1}_{[0,\infty)}(x)\,(1+x)e^{-x}.$$ Since $\mathbb{1}_{[0,\infty)}(x)(1+x)e^{-x}$ is integrable, by the dominated convergence theorem we have: $$ \lim_{n\rightarrow \infty}\int_{0}^{n}\sqrt[n]{x}\,e^{-x}\,dx=\lim_{n\rightarrow \infty}\int_{\mathbb{R}}\mathbb{1}_{[0,n]}(x)\sqrt[n]{x}\,e^{-x}\,dx=\int_{\mathbb{R}}\lim_{n\rightarrow \infty}\mathbb{1}_{[0,n]}(x)\sqrt[n]{x}\,e^{-x}\,dx=\int_{\mathbb{R}}\mathbb{1}_{(0,\infty)}(x)\,e^{-x}\,dx=1. $$
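A quick numerical illustration of the convergence $I_n\to 1$ (my addition, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# I_n = ∫_0^n x^(1/n) e^(-x) dx should approach ∫_0^∞ e^(-x) dx = 1 as n grows.
for n in [1, 5, 50, 500]:
    I_n, _ = quad(lambda x: x**(1 / n) * np.exp(-x), 0, n)
    print(f"n={n:4d}  I_n={I_n:.6f}  |I_n - 1|={abs(I_n - 1):.2e}")
```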
Show the determinant of a matrix Let $R$ be a commutative ring and let $x_0,\dots,x_{n-1} \in R$. Show for the matrix $$M = \begin{pmatrix} 0 & & & x_0 \\ 1 & 0 & & x_1\\ & 1 & \ddots & \vdots \\ & & 1 & x_{n-1} \end{pmatrix}$$ and $v \in R$ that $\det(vI_n - M) = v^n - \sum\limits^{n-1}_{i=0}x_iv^i$. I get the general idea of determining the determinant, but right now I just can't figure out any good way. Maybe using the Leibniz formula?
First of all, a remark: I advised you to expand the determinant using the first column, but I was distracted. I meant to write 'the first row', apologies for that. Anyway, here is the solution (perhaps try it yourself using the first column). Let us prove this statement by induction. Suppose $n = 1$. We have the matrix $[x_0]$ in this case and we easily find that $$\det(vI - x_0) = v-x_0.$$ Now suppose that the statement holds for matrices of dimension $n-1$. Consider a matrix of dimension $n$: \begin{equation} A_{n} = \begin{pmatrix} 0 & 0& 0 & \ldots & 0& x_0\\ 1 &0& 0& \ldots &0 & x_1\\ 0 &1&0& \ldots & 0& x_2\\ \vdots & \vdots & \vdots & &\vdots & \vdots\\ 0 & 0 & 0 & \ldots & 1 & x_{n-1} \end{pmatrix}. \end{equation} Therefore the matrix $vI - A_{n}$ is given by \begin{equation} vI - A_{n} = \begin{pmatrix} v& 0& 0 & \ldots & 0& -x_0\\ -1 &v& 0& \ldots &0 & -x_1\\ 0 &-1&v& \ldots & 0& -x_2\\ \vdots & \vdots & \vdots & &\vdots & \vdots\\ 0 & 0 & 0 & \ldots & -1 & v - x_{n-1} \end{pmatrix}. \end{equation} We use the expansion in minors along the first $\textbf{row}$ to determine the determinant of $vI - A_n$. We find that this determinant is equal to \begin{equation} v\cdot(-1)^{1+1} \det \begin{pmatrix} v& 0& \ldots &0 & -x_1\\ -1&v& \ldots & 0& -x_2\\ \vdots & \vdots & \vdots & &\vdots \\ 0 & 0 & \ldots & -1 & v - x_{n-1} \end{pmatrix} -x_0\cdot (-1)^{n+1} \det \begin{pmatrix} -1 & v & 0 &\ldots & 0\\ 0 & -1 & v &\ldots & 0\\ \vdots & \vdots &\vdots& & \vdots\\ 0 & 0 & 0&\ldots & -1 \end{pmatrix}. \end{equation} Due to the induction hypothesis, the first determinant is equal to \begin{equation} v^{n-1} - \sum_{i = 0}^{n-2}x_{i+1}v^i. \end{equation} The second determinant is the determinant of an upper triangular matrix and hence equal to the product of the diagonal elements, so we find for this part $(-1)^{n-1}$. Putting all this info together, we find: \begin{align} \det(vI - A_n) &= v\cdot (-1)^2(v^{n-1} - \sum_{i = 0}^{n-2}x_{i+1}v^i) -x_0 \cdot (-1)^{n+1} \cdot (-1)^{n-1}\\ &= v^n - \sum_{i = 0}^{n-2}x_{i+1}v^{i+1} - x_0 \cdot (-1)^{2n}\\ &= v^n - \sum_{i = 0}^{n-2}x_{i+1}v^{i+1} - x_0. \end{align} We can now rewrite the sum by introducing a new variable $j = i+1$; we then find that \begin{equation} \det(vI - A_n) = v^n - \sum_{j = 1}^{n-1}x_jv^j - x_0 = v^n - \sum_{j = 0}^{n-1}x_jv^j, \end{equation} where we could write the last equation since $x_0 = x_0v^0$. This completes the induction step, so we can conclude our proof.
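As a concrete cross-check (my addition, assuming SymPy), one can verify the formula symbolically for a small case such as $n=4$; this is only an illustration over $\mathbb{Q}[x_0,\dots,x_3]$, not a proof over a general commutative ring:

```python
from sympy import symbols, zeros, eye, expand

v, x0, x1, x2, x3 = symbols('v x0 x1 x2 x3')
xs = [x0, x1, x2, x3]
n = 4

M = zeros(n, n)
for i in range(1, n):
    M[i, i - 1] = 1          # ones on the subdiagonal
for i in range(n):
    M[i, n - 1] = xs[i]      # last column carries x_0, ..., x_{n-1}

char_poly = (v * eye(n) - M).det()
expected = v**n - sum(xs[i] * v**i for i in range(n))
print(expand(char_poly - expected))  # prints 0
```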
Given $G=\mathbb R$ and $x*y\equiv x+y+x^3 y^3$, is there an inverse? I am given a set $G = \mathbb{R}$ and $x*y = x+y+x^3y^3$. I need to find out whether there exists an inverse. Please note that this is not a group since $*$ is not associative. The identity element $e$ is $0$. So I did the following: for the inverse to exist we must have $$x*y = x+y+x^3y^3 = e = y*x = 0.$$ So $x+y+x^3y^3$ must equal $0$. Imagine we fix $x$; then we get a cubic in $y$, which has one, two, or three real solutions. For the inverse to exist and be unique, we need exactly one real root. Now, from here I am confused how to show whether there is always exactly one real root or not. I tried considering the derivative, that is: $\frac{dy}{dx} = \frac{-1-3x^2y^3}{1+3x^3y^2}$. For $x>0$: $\frac{dy}{dx}$ is always negative and so the graph is decreasing and thus will cross the $x$-axis once, so there will be one real root. For $x = 0$, the inverse is $0$. As for $x<0$: I cannot say much since $\frac{dy}{dx}$ becomes $\frac{-1-3x^2y^3}{1-3|x^3|y^2}$ and it can be both negative and positive. So, would that mean that the inverse does not exist for all $x$? And so there is no inverse in $G$? I hope my reasoning makes sense and I would appreciate any help! Thanks!
I think you have already said the answer. Since $x + y + x^3y^3 = 0$ is a cubic in $y$ (for $x\neq 0$), it has at least one real root: for any $x$ there exists a $y$ such that $x*y = y*x = 0$. However, we have not proven that for any $x$ there exists a unique $x^{-1}$. Rather than using implicit differentiation, continue with the assumption that $x$ is constant: $$\frac {\partial}{\partial y} (x+y+x^3y^3) = 1+3x^3 y^2.$$ For $x>0$ this is always positive, so the cubic in $y$ is strictly increasing and the inverse is unique. For $x<0$ there are critical points: $$1+3x^3y^2 = 0 \iff y = \pm \frac 1 {\sqrt {3|x|^3}}.$$ Substituting these values of $y$ back in, the two critical values of $x+y+x^3y^3$ are $$x \pm \frac 2 {3|x|\sqrt {3|x|}}.$$ If they have opposite signs, then there are three real roots. Since $x<0$, we have $x - \frac 2 {3|x|\sqrt {3|x|}} < 0$ automatically, so three roots occur exactly when $$x + \frac 2 {3|x|\sqrt {3|x|}} > 0 \iff 3|x|^2\sqrt{3|x|} < 2 \iff 27|x|^5 < 4 \iff |x| < \left(\frac 4{27}\right)^{\frac 15}.$$ So for $x\in \left(-\left(\frac {4}{27}\right)^{\frac 15},\, 0\right)$ there are three $y$'s such that $x + y + x^3y^3 = 0$, and the inverse is not unique there.
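A quick numeric spot check of the root counts (my addition, not from the original answer), using NumPy to solve the cubic $x^3y^3 + y + x = 0$ in $y$ for a few fixed values of $x$:

```python
import numpy as np

def real_solutions(x):
    # roots in y of x^3*y^3 + 0*y^2 + y + x = 0 (assumes x != 0, so this really is a cubic)
    roots = np.roots([x**3, 0.0, 1.0, x])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-7)

print("threshold (4/27)^(1/5) ≈", (4 / 27) ** 0.2)
for x in [2.0, 0.5, -0.8, -0.5]:
    sols = real_solutions(x)
    print(f"x = {x:+.1f}: {len(sols)} real solution(s) -> {sols}")
# Expected: one solution for x = 2.0, 0.5 and -0.8, but three for x = -0.5 (|x| below threshold)
```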
Defining $\pmod{1}$ over $\mathbb{R}$ I am trying to find a natural way to define $\pmod{1}$ over $\mathbb{R}$, and I would do this the same way as I would over the integers (where $\pmod{n}$ is defined as taking the quotient group $\mathbb{Z}/(n)$, with $(n)$ the ideal generated by $n$), but the ideal generated by $1$ in $\mathbb{R}$ is all of $\mathbb{R}$, not $\mathbb{Z}$, so I can't write $\mathbb{R}/(1)$. Is there a way to unify these definitions?
You've got the right idea: $\Bbb Z/(n)$ is $\Bbb Z$ modulo the subgroup generated by $n$, and the same works for $\Bbb R\bmod 1$. It is $\Bbb R/(1)=\Bbb R/\Bbb Z$, so two real numbers $x,y$ are equivalent $\bmod 1$ iff $x-y\in\Bbb Z$. Note that here $(1)$ is taken as a subgroup, not as an ideal; I think that's where you're stumbling. This is inherently a group operation, not one in rings. And in particular this is a good thing, because $\Bbb R$ is a field, so if this were a question about quotienting by an ideal to make a quotient ring you would be out of luck: the only ideals of a field $F$ are $\{0\}$ and $F$, so any proper quotient would be trivial and therefore useless for your purposes.
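As a small illustration (my addition), picking the fractional part as a representative of each class in $\mathbb{R}/\mathbb{Z}$ makes the equivalence and the group law easy to compute with; exact rationals are used here only to avoid floating-point noise:

```python
from fractions import Fraction as F

def mod1(x):
    # fractional-part representative of the class of x in R/Z (Python's % uses floor division)
    return x % 1

print(mod1(F(11, 4)), mod1(F(-1, 4)))          # both 3/4, since 11/4 - (-1/4) = 3 is an integer
print(mod1(F(3, 2) + F(27, 10)))               # 1/5
print(mod1(mod1(F(3, 2)) + mod1(F(27, 10))))   # also 1/5: addition is well defined on classes
```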
Synthetic division question. Synthetic division is possible when the divisor is of the form $x+a$ or $x-a$, but what if the divisor is of the form $x^2+a$, $x^2-a$, $x^3-a$, ... or higher powers? How can we perform synthetic division in such cases? Thanks
For the particular case of divisors of the form $x^n-a$ it is possible to replace the long division with $n$ synthetic divisions. For a polynomial $P(x)$ being divided by $x^3-a$, for example, group the powers of $x$ in $P$ according to the remainder $\bmod 3$ and write it as: $$ P(x) = P_0(x^3) + x P_1(x^3) + x^2P_2(x^3) $$ Use synthetic division to calculate the quotients and remainders of the following: $$ P_k(x) = (x-a)Q_k(x) + r_k \quad\quad \text{for} \;\; k=0,1,2 $$ Then: $$ P(x)=(x^3-a)Q(x) + R(x) $$ where $Q(x) = Q_0(x^3) + xQ_1(x^3)+x^2Q_2(x^3)$ and $R(x)=r_0+r_1x+r_2x^2\,$. [ EDIT ] Following is a fully worked out example for $P(x)=x^4-6x^3+16x^2-25x+10$ (the polynomial was borrowed from another, unrelated question) being divided by $x^3-2$. Step 1 — group the powers: $$P(x)=x^4-6x^3+16x^2-25x+10= (-6x^3+10) + x\cdot (x^3-25) + x^2 \cdot 16$$ $$ \iff \begin{cases} \begin{aligned} P_0(x) & = -6x+10 \\ P_1(x) & = x - 25 \\ P_2(x) &= 16 \end{aligned} \end{cases} $$ Step 2 — divide each $P_k$ by $x-2$ and determine $Q_k,r_k$ by synthetic division: $$ \begin{cases} \begin{alignedat}{3} P_0(x) & = -6x+10 && = -6(x-2) - 2\\ P_1(x) & = x - 25 && = (x-2) - 23\\ P_2(x) & = 16 && = 16 \end{alignedat} \end{cases} $$ $$ \iff \begin{cases} \begin{aligned} Q_0(x) & = -6 \,,\;\; r_0 = -2\\ Q_1(x) & = 1 \,,\;\; r_1 = - 23\\ Q_2(x) & = 0 \,,\;\; r_2 = 16 \end{aligned} \end{cases} $$ Step 3 — calculate $Q,R$: $$ Q(x) = Q_0(x^3) + xQ_1(x^3)+x^2Q_2(x^3) = -6 +x+ x^2 \cdot 0 = x-6\\ R(x)=r_0+r_1x+r_2x^2=16x^2-23 x-2\,$$ Step 4 — verify that indeed: $$x^4-6x^3+16x^2-25x+10=(x^3-2)(x-6)+ 16x^2-23 x -2$$
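If you want to double-check the worked example mechanically, a small sketch (my addition, assuming SymPy) confirms the quotient and remainder:

```python
from sympy import symbols, div, expand

x = symbols('x')
P = x**4 - 6*x**3 + 16*x**2 - 25*x + 10

Q, R = div(P, x**3 - 2, x)          # polynomial division of P by x^3 - 2
print(Q, "|", R)                    # x - 6 | 16*x**2 - 23*x - 2
print(expand((x**3 - 2)*(x - 6) + 16*x**2 - 23*x - 2 - P))  # 0, so the identity checks out
```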