Inner product of scaled Hermite functions I'm attempting to find a closed-form expression for $$\int_{-\infty}^{\infty}e^{-\frac{x^2\left(1+\lambda^2\right)}{2}}H_{n}(x)H_m(\lambda x)dx$$ where $H_n(x)$ are the physicist's Hermite polynomials, but haven't had any luck. Does anyone know of a way to compute this?
The integral when $n=m$ is $$ I_{nn} = 2^{2n}\sqrt{2\pi}\left(n!\right)^{2} \frac{\lambda^n}{{\left(\lambda^{2} + 1\right)^{n + \frac{1}{2}}}} {\sum_{k=0}^{\left \lfloor \frac{n}{2} \right \rfloor} \frac{\left(-1\right)^{k} }{2^{4k} (k!)^{2} \left(n-2k \right)!}}\left(\frac{{\lambda^{2} - 1}}{\lambda}\right)^{2k}. $$ The integral is zero whenever $n$ and $m$ have opposite parity. When $n\ge m$, define $s=\frac{n-m}{2}$ (which is guaranteed to be an integer) and the integral becomes $$ I_{nm}=2^{2m}\sqrt{2\pi}m! n!\frac{\lambda^{m}{\left(1-\lambda^{2} \right)}^{s} }{{\left(\lambda^{2} + 1\right)}^{m + s + \frac{1}{2}}}{\sum_{l=0}^{\left \lfloor \frac{m}{2} \right \rfloor} \frac{\left(-1\right)^{l} }{2^{4l} \left(l + s\right)! l! \left(m-2l\right)!}\left(\frac{{\lambda^{2} - 1}}{\lambda}\right)^{2l}} $$ The general case, written with $\mu=\operatorname{min}(n,m)$ and $s=\frac{|n-m|}{2}$ so that it reduces to the formula above when $n\ge m$, is $$ I_{nm}=2^{2\mu}\sqrt{2\pi}\,m!\, n!\,\frac{\lambda^{\mu}\,\sigma^{s}}{{\left(\lambda^{2} + 1\right)}^{\mu + s + \frac{1}{2}}}{\sum_{l=0}^{\left \lfloor \frac{\mu}{2} \right \rfloor} \frac{\left(-1\right)^{l} }{2^{4l} \left(l + s\right)! l! \left(\mu-2l\right)!}\left(\frac{{\lambda^{2} - 1}}{\lambda}\right)^{2l}} $$ where $\sigma=1-\lambda^2$ if $n\ge m$ and $\sigma=\lambda^2-1$ if $n<m$ (the case $n<m$ follows from the case $n\ge m$ via the substitution $u=\lambda x$, which gives $I_{nm}(\lambda)=\frac{1}{\lambda}I_{mn}(1/\lambda)$). In order to derive this result, first change to probabilists' Hermite polynomials $$ H_n(x)=2^{\frac{n}{2}}\operatorname{He}_n(\sqrt{2}x) $$ so the integral becomes $$ I_{nm} = 2^{\frac{n+m}{2}}\int_{-\infty}^\infty e^{-\frac{x^2}{2}(1+\lambda^2)}\operatorname{He}_n(\sqrt{2}x)\operatorname{He}_m(\lambda\sqrt{2}x)dx. $$ Change the integration variable in order to recover the probabilists' weighting function, $y=x\sqrt{1+\lambda^2}$: $$ I_{nm} = \frac{2^{\frac{n+m}{2}}}{\sqrt{1+\lambda^2}} \int_{-\infty}^\infty e^{-\frac{y^2}{2}} \operatorname{He}_n\left(y\sqrt{\frac{2}{1+\lambda^2}}\right)\operatorname{He}_m\left(y\sqrt{\frac{2\lambda^2}{1+\lambda^2}}\right)dy. $$ Use the scaling identity $$ \operatorname{He}_n(\gamma x) = n!\sum_{k=0}^{\left \lfloor \frac{n}{2} \right \rfloor}\frac{1}{2^kk!(n-2k)!}\gamma^{n-2k}\left(\gamma^2-1\right)^k \operatorname{He}_{n-2k}(x) $$ on both polynomials; this leads to a very long expression, from which all three cases ($n=m$, $n\ge m$, $n\le m$) obtain: $$ I_{nm} = \frac{2^{\frac{n+m}{2}}}{\sqrt{1+\lambda^2}}n!m!\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\sum_{l=0}^{\left\lfloor\frac{m}{2}\right\rfloor}\frac{(-1)^k\left(\sqrt{\frac{2}{1+\lambda^2}}\right)^{n-2k+m-2l}\lambda^{m-2l}\left(\frac{\lambda^2-1}{\lambda^2+1}\right)^{k+l}}{2^kk!(n-2k)!2^ll!(m-2l)!}\\ \times\int_{-\infty}^\infty\operatorname{He}_{n-2k}(x)\operatorname{He}_{m-2l}(x)e^{-\frac{x^2}{2}}dx. $$ Orthogonality will constrain one of the sums -- always choose the sum associated with the maximum of $n,m$. Take the case $n\ge m$; as previously stated, the parity of $n$ and $m$ has to be the same, otherwise the integrand is odd and the integral vanishes. Therefore write $n=m+2s$ for $s\in \mathbb{Z}_{\ge 0}$. The orthogonality constraint can be written $k=l+s$ and, after algebraic manipulation, the result above obtains.
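As a numerical sanity check of the diagonal case $I_{nn}$, one can compare a direct quadrature against the closed form (a sketch using SciPy; the value $\lambda=0.7$ and the range of $n$ are arbitrary choices):

import numpy as np
from math import factorial, sqrt, pi
from scipy.special import eval_hermite   # physicists' Hermite polynomials
from scipy.integrate import quad

def I_direct(n, lam):
    f = lambda x: np.exp(-x**2*(1 + lam**2)/2)*eval_hermite(n, x)*eval_hermite(n, lam*x)
    return quad(f, -np.inf, np.inf)[0]

def I_closed(n, lam):
    # the closed form for I_{nn} stated above
    pref = 2**(2*n)*sqrt(2*pi)*factorial(n)**2*lam**n/(lam**2 + 1)**(n + 0.5)
    tot = sum((-1)**k/(2**(4*k)*factorial(k)**2*factorial(n - 2*k))
              * ((lam**2 - 1)/lam)**(2*k) for k in range(n//2 + 1))
    return pref*tot

for n in (0, 1, 2, 3):
    print(n, I_direct(n, 0.7), I_closed(n, 0.7))   # the two columns should agree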
Complex numbers multiplication I know how to multiply complex numbers (with the formula), but I can't figure out what is really happening to them. I was able to understand that if $z_1z_2=z_3$ then $z_3$ will have the argument of $z_1$ plus the argument of $z_2$ (it's a kind of rotation). My question is: what happens to the modulus of $z_3$? I want an intuitive answer, not a mathematical proof; I want to understand the phenomenon.
Denote by $|z|$ the modulus of the complex number $z$. Let $z,w\in \mathbb{C}$. Then $|zw|=|z||w|$ and $\text{arg}(zw)=\text{arg}(z)+\text{arg}(w)$. This completely determines a complex number if you think in terms of the polar coordinates of such a number. So multiplication by a complex number can be seen as first rotating and then rescaling.
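A quick numerical illustration of both facts (a minimal sketch; the sample values are arbitrary):

import cmath
z, w = 3 + 4j, 1 - 2j
print(abs(z*w), abs(z)*abs(w))                          # moduli multiply
print(cmath.phase(z*w), cmath.phase(z) + cmath.phase(w))  # arguments add (mod 2*pi)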
How to convert a random matrix to Unitary Matrix? I know that an $n \times n$ complex matrix $A$ is said to be unitary if $AA^*=A^*A=I$, or equivalently if $A^*=A^{-1}$. But what if we have a random matrix and we want to turn it into a unitary matrix? Please also give an example.
If you have access to Matlab or Octave, then producing such matrices is as easy as issuing the following commands:

n = 10;
A = rand(n) + 1i*rand(n);
[U,~] = qr(A);

I recommend that you use software such as this to generate these matrices in general. For small matrices that you would like to work out by hand, the Gram-Schmidt procedure is what you want. Here is a simple example involving a real $3\times 3$ matrix. Let $$ A = \begin{bmatrix} 1 & 1 & 1\\ 1 & 0 & 2\\ 0 & 2 & 1 \end{bmatrix} $$ We are going to construct a $3\times 3$ unitary matrix $U$ from the columns of $A$. For simplicity I will first construct a matrix $\hat{U}$ with orthogonal columns, and then normalize at the end to get a unitary matrix. The first step is easy: we set the first column of $\hat{U}$ equal to the first column of $A$, i.e., $\hat{u}_1 = a_1$. The second column of $\hat{U}$ is equal to $a_2$ minus any contributions from $\hat{u}_1$, that is, $$ \hat{u}_2 = a_2 - \frac{\hat{u}_1\cdot a_2}{\hat{u}_1\cdot \hat{u}_1}\hat{u}_1 = \begin{bmatrix} 1\\ 0\\ 2 \end{bmatrix} - \frac{1}{2} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} = \begin{bmatrix} \phantom{-}1/2\\ -1/2\\ \phantom{-}2 \end{bmatrix} $$ The last step is the same as the second, except now we must remove from $a_3$ any contributions from $\hat{u}_1$ or $\hat{u}_2$. Thus, \begin{align} \hat{u}_3 &= a_3 - \frac{\hat{u}_1\cdot a_3}{\hat{u}_1\cdot \hat{u}_1}\hat{u}_1 - \frac{\hat{u}_2\cdot a_3}{\hat{u}_2\cdot \hat{u}_2}\hat{u}_2\\[1mm] &= \begin{bmatrix} 1\\ 2\\ 1 \end{bmatrix} - \frac{3}{2} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} - \frac{1}{3} \begin{bmatrix} \phantom{-}1/2\\ -1/2\\ \phantom{-}2 \end{bmatrix}\\[1mm] &= \begin{bmatrix} -2/3\\ \phantom{-}2/3\\ \phantom{-}1/3 \end{bmatrix} \end{align} The last step to obtain $U$ from $\hat{U}$ is to normalize the columns of $\hat{U}$. Doing so, we obtain the unitary matrix $$ U = \begin{bmatrix} 1/\sqrt{2} & \phantom{-}1/(3\sqrt{2}) & -2/3\\ 1/\sqrt{2} & -1/(3\sqrt{2}) &\phantom{-}2/3\\ 0 & \phantom{-}2\sqrt{2}/3 &\phantom{-}1/3 \end{bmatrix} $$ As you can see from this simple example, the procedure is quite tedious by hand and is best left to a computer. You can read more about the Gram-Schmidt procedure here.
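For completeness, a NumPy equivalent of the Matlab/Octave snippet (a sketch; the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.random((n, n)) + 1j*rng.random((n, n))
Q, _ = np.linalg.qr(A)                           # columns of Q are orthonormal
print(np.allclose(Q.conj().T @ Q, np.eye(n)))    # True: Q is unitary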
Showing $(ab)^n=a^nb^n$ given $(ab)^2=a^2b^2$. This is part of Exercise 2.1.15 of F. M. Goodman's "Algebra: Abstract and Concrete". Let $G$ be a group. Suppose $(ab)^2=a^2b^2$ for all $a, b\in G$. Show that $(ab)^n=a^nb^n$ for all $a, b\in G$, $n\in\mathbb{N}$. My Attempt: I'm using induction on $n$. Let $a, b\in G$. Obviously $(ab)^1=ab=a^1b^1$ so the result holds for $n=1$. Assume $(ab)^r=a^rb^r$ for some $r\in\mathbb{N}$. Consider when $n=r+1$: we have $$\begin{align} (ab)^{r+1}&=(ab)^rab \\ &=a^rb^rab \\ &=a^{r+1}a^{-1}b^rab, \end{align}$$ but I don't know where to go from here.
Since $abab = a^2 b^2$, we have $ab = b^{-1} a b^2$, and then $a^r b^r ab = a^r b^r b^{-1} a b^2 = a^r b^{r-1} a b^2$; you can "shift" the $a$ successively to the left past each $b$ to obtain the expression $a^{r+1}b^{r+1}$, as written out below.
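Spelled out, each step of the shifting chain replaces the innermost $ab$ using the identity $ab=b^{-1}ab^2$: $$ a^r b^{r-1} a b^2 = a^r b^{r-1} (ab)\, b = a^r b^{r-1} \left(b^{-1} a b^2\right) b = a^r b^{r-2}\, a\, b^3 = \cdots = a^r\, a\, b^{r+1} = a^{r+1} b^{r+1}. $$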
Solving: $(b+c)^2=2011+bc$ Solve $$(b+c)^2=2011+bc$$ for integers $b$ and $c$. My tiny thoughts: $(b+c)^2=2011+bc\implies b^2+c^2+bc-2011=0\implies b^2+bc+c^2-2011=0$. Solving for $b$ as a quadratic: $$b=\frac{-c\pm \sqrt{8044-3c^2}} {2}.$$ So $8044-3c^2=k^2$ for some integer $k$, as $b$ and $c$ are integers. We also have the inequalities $8044>3c^2$ and $8044>3b^2$, so $|c|\le 51$ and $|b|\le 51$. How to proceed further? Help.
With what you have done so far we can just try the possibilities. We can require $c \ge 0$ and note that we can change the signs of both $b$ and $c$ in a solution to get another; $b$ and $c$ can also be interchanged. I find $(10,39)$ and $(-10,49)$ as base solutions. A spreadsheet with copy-down makes it simple, as does the short search below.
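A brute-force search over the range established above (a minimal sketch; the bound of 60 is just a safe over-estimate of $|b|,|c|\le 51$):

sols = [(b, c) for b in range(-60, 61) for c in range(-60, 61)
        if b*b + b*c + c*c == 2011]
print(sols)   # all integer solutions, including sign flips and swaps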
Quotient groups and isomorphisms Let $H$ and $K$ be normal subgroups of $G$ such that $G = HK$. Prove that $G/(H \cap K) \cong (G/H) \times (G/K)$. To be honest, I'm not sure how to approach this. I know that the statement that $H$ and $K$ are normal is there to allow the quotient groups to be formed. I have studied the three isomorphism theorems and the correspondence theorem, but I do not really have a complete grasp of the concepts.
Hint: Define a homomorphism $G\to (G/H)\times (G/K)$ in an obvious way. What is its kernel?
What is the Probability that 1 out of 7 letters is chosen out of 2 draws? So let's say we have the letters AAABBCD. We write them down on cards and shuffle them, then we randomly pick 2 cards. I would like to know the probability of at least 1 of those cards being A, so either the 1st or 2nd or both being A. I'm trying to wrap my head around this. In my mind it's the probability of choosing A twice, added to the probability of choosing A first (and then a non-A), and the probability of choosing it second (after a non-A): so (3/7 * 1/3) + (3/7 * 2/3) + (4/7 * 1/2). Would this be correct? Thank you in advance.
Yes, your calculations are correct. Note: it could perhaps have been easier to approach the problem indirectly instead of directly, by calculating the probability of choosing no $A$'s. Picking no $A$'s occurs with probability $\frac{4}{7}\cdot \frac{3}{6}=\frac{2}{7}$, implying picking at least one $A$ occurs with probability $1-\frac{2}{7}=\frac{5}{7}$. Alternatively calculated, picking no $A$'s occurs with probability $\binom{4}{2}/\binom{7}{2}$, which again simplifies to $\frac{2}{7}$. As a general suggestion, keep an eye out for whether calculating directly or indirectly will require more arithmetic or complication, and pick the easier route.
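A two-line check of the complementary count (a sketch; math.comb requires Python 3.8+):

from math import comb
print(1 - comb(4, 2)/comb(7, 2))   # 0.714285... = 5/7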
Primitive of a holomorphic function Why does a holomorphic function have a primitive on a simply connected space? Also, does it have a primitive only on a simply connected space?
Let $f(z)$ be your holomorphic function, defined on $U \subset \mathbb C$. Define your primitive by picking a point $z_0 \in \mathbb C$, and writing $$ F(z) = \int_{z_0}^z f(w) dw. $$ But for this to make sense, this integral needs to be independent of the choice of integration path between $z_0$ and $z$! If $U$ is simply connected, the integral is indeed independent of the path, by Cauchy's theorem. That's not to say that it's always impossible to define primitives on non-simply-connected spaces. For example, if $U = \mathbb C - \{0\}$, then $f(z) = 1$ has a primitive, namely $F(z) = z$, but $f(z) = 1/z$ is an example of a function which does not.
PCI-ring which is not semisimple. Is there an example of a finite right (left) PCI-ring which is not a right (left) semisimple ring? (A ring $R$ is called a right (left) PCI-ring if each proper right (left) cyclic $R$-module $C$ is injective; proper cyclic means that $C$ is cyclic but $C$ is not isomorphic to $R$.)
By the Faith-Cozzens theorem, if it is not semisimple, then it is simple (and right hereditary and right Ore and a $V$-domain). But a finite and simple ring is semisimple. So, no, there is no such finite ring. (Follow-up: Thanks for answering. May I know if there exists a commutative PCI-ring that is not a semisimple ring?) No: a commutative $V$-ring is von Neumann regular, and a commutative VNR domain is a field.
Let A be a totally ordered alphabet. Let L be the lexicographic ordering on A* Let A be a totally ordered alphabet. Let L be the lexicographic ordering on A*, and S the standard ordering on A*.
A) L is well-founded and S is well-founded
B) L is not well-founded and S is well-founded
C) L is well-founded and S is not well-founded
D) L is not well-founded and S is not well-founded
Is B the correct answer? If it is, can anyone please explain why? Thank you
It is unclear what you mean by standard order as opposed to lexicographical order. I would have assumed the standard order on an alphabet was the lex-order. Presumably your standard order checks if one word is a prefix of another. That means the standard order on the Latin alphabet works like sad $<$ saddle $<$ saddlebag $<$ saddlebags. It's easy to see this is well-founded because a descending chain has to decrease word length at each step, so all descending chains are finite. To see the lex-order is not well-founded, consider the following infinite descending chain: $AC > ABC > ABBC > ABBBC > \ldots$. The set of all elements $AB \ldots B C$ has no minimal element.
$1)$ For $ 0\le \theta \le\frac{\pi}{2}$, show that $\sin \theta \ge \frac{2}{\pi} \theta$ Not sure how to go about this: $1)$ For $ 0\le \theta \le\frac{\pi}{2}$, show that $\sin \theta \ge \frac{2}{\pi} \theta$ $2)$ By using Part 1, or by any other method, show that if $\lambda \le 1,$ then $$\lim_{R \to \infty} R^{\lambda} \int_{0}^{\frac{\pi}{2}}e^{-R\sin\theta}d\theta=0$$ The other method part threw me off a bit. EDIT: After working on it I have two questions: 1) If I were to use the integral inequality such that: $$ J=\int_{0}^\frac{\pi}{2}e^{-R\sin\theta}Rd\theta \le \int_{0}^\frac{\pi}{2}e^{-2R\sin\theta}Rd\theta =-\pi e^{-2R\sin\theta}|^\frac{\pi}{2}_{0} \le \pi$$ Is that correct? 2) How would I go about finishing this second part using Jordan's Lemma: $$ R^{\lambda} \int_{0}^{\frac{\pi}{2}} e^{-R\sin\theta}d\theta= R^{\lambda} \int_{0}^{\frac{\pi}{3}} e^{-R\sin\theta}d\theta+R^{\lambda} \int_{\frac{\pi}{3}}^{\frac{\pi}{2}} e^{-R\sin\theta}d\theta$$
For (1), just draw the graphs of $y=\sin x$ and $y=\frac {2}{\pi}x$. Since the graph of $y=\sin x$ is concave when $0<x< \pi/2$, it lies above the line $y=\frac 2{\pi}x$ when $0<x<\frac {\pi}2$. For (2), just use the result of (1).
Compute the limit of $\int_{n}^{e^n} xe^{-x^{2016}} dx$ when $n\to\infty$ Find the following limit $$I = \lim_{n \to\infty} \int_{n}^{e^n} xe^{-x^{2016}} dx$$ My attempt Assumption: as $n \to \infty$ we can consider an interval on the positive real axis $[n,e^n]$. Here the function $e^{-x^{2016}}$ is a decreasing function; using this fact we use the sandwich lemma to evaluate I $$LHS = e^{-(e^n)^{2016}} \int_{n}^{e^n}x dx \leq I \leq e^{-(n)^{2016}} \int_{n}^{e^n}x dx =RHS$$ The limits of $LHS$ and $RHS$ can be shown to be zero, hence $I=0$. Evaluation of LHS $$\lim_{n \to \infty} e^{-(e^n)^{2016}} \frac{(e^n)^2 - n^2}{2} = \lim_{n \to \infty} e^{-(e^n)^{2016}} e^{2n}\frac{1 - \frac{n^2}{e^{2n}}}{2} = 0.$$ I need to know whether I can make the assumption I have made above. I also need some help in verifying whether the evaluation of the limit of LHS is done correctly.
Your solution is perfectly fine. But note that $f(x)=xe^{-x^{2016}}$ is decreasing monotonically for $x\ge \left(\frac{1}{2016}\right)^{1/2016}$. Hence, for $n\ge 1$, we can write $$(e^n-n)\,e^n\,e^{-e^{2016\,n}}\le \int_n^{e^n}xe^{-x^{2016}}\,dx\le (e^n-n)\,n\,e^{-n^{2016}}$$ whence applying the squeeze theorem yields the expected result.
Closed form for the sums $S(n)={(2n-1)!\over \sqrt2}\cdot{\left(4/ \pi\right)^{2n}}\cdot\sum\limits_{k=0}^{\infty}(-1)^{k(k+1)/2}(2k+1)^{-2n}$ Consider the sums $$S(n)={(2n-1)!\over \sqrt2}\cdot{\left(4\over \pi\right)^{2n}}\cdot\sum_{k=0}^{\infty}{(-1)^{k(k+1)\over 2}\over (2k+1)^{2n}}$$ We have $S(1)=1$, $S(2)=11$, $S(3)=361$, $S(4)=24611$. I cannot spot any pattern within this sequence. How can we work out a closed form for $S(n)$?
The above series is the expansion of the function $\sin(x)/\cos(2x)$ (in Mathematica notation, Sin[x]/Cos[2x]).
Inequality for positive real numbers less than $1$: $8(abcd+1)>(a+1)(b+1)(c+1)(d+1)$ If $a,b,c,d$ are positive real numbers, each less than 1, prove that the following inequality holds: $$8(abcd+1)>(a+1)(b+1)(c+1)(d+1).$$ I tried using $\text{AM} > \text{GM}$, but I could not prove it.
Let $f(a,b,c,d)=8(abcd+1)-(a+1)(b+1)(c+1)(d+1)$. Then $f$ is an affine (degree at most one) function of each of $a$, $b$, $c$ and $d$ separately, so its minimum over the closed cube is attained at a vertex: $$\min_{\{a,b,c,d\}\subset[0,1]}f=\min_{\{a,b,c,d\}\subset\{0,1\}}f=f(1,1,1,1)=0$$ and since the variables are strictly less than $1$, this minimum is not attained, so the inequality is strict and we are done!
Let $a,b,c\in \Bbb R^+$ such that $(1+a+b+c)(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c})=16$. Find $(a+b+c)$ Let $a,b,c\in \Bbb R^+$ such that $(1+a+b+c)(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c})=16$. Find $(a+b+c)$. I computed the whole product; if $(a+b+c)=x$ then $(1+x)(1+\frac{bc+ca+ab}{abc})=16$. I am unable to see how to proceed further. Please help.
By applying the Cauchy-Schwarz inequality we get $$(1+a+b+c)\left(1+\frac1a+\frac1b+\frac1c\right)\ge (1+1+1+1)^2=16.$$ Equality holds when $a^2 = b^2 = c^2 = 1$, therefore $a = b = c = 1$ and $a+b+c=3$.
Find the area of a spherical triangle made by the points $(0, 0, 1)$, $(0, 1, 0)$ and $(\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}})$. Calculate the area of the spherical triangle defined by the points $(0, 0, 1)$, $(0, 1, 0)$ and $(\dfrac{1}{\sqrt{2}}, 0, \dfrac{1}{\sqrt{2}})$. I have come up with this: From the spherical Gauss-Bonnet Formula, where $T$ is a triangle with interior angles $\alpha, \beta, \gamma$. Then the area of the triangle $T$ is $\alpha + \beta + \gamma - \pi$. How do I work out the interior angles in order to use this formula? Any help appreciated.
See the Mathcad implementation of the equation for finding the area when three coordinates are given, for a sphere centered at $(0,0,0)$. See also the paper "On the Measure of Solid Angles" by Folke Eriksson.
Angle bisector in a trapezoid - surface area ratio In trapezoid ABCD (AB || CD) the angle bisector of angle ABC is perpendicular to side AD and intersects it in point P. Point P divides the side AD in ratio 2:1. Find the ratio of the surface areas of the triangle and the quadrilateral. $h = \frac{2}{3}H$ $S_{triangle}= \frac{1}{3}|AB|H$ $S_{quadrilateral} = S_{trapezoid} -S_{triangle}$ $S_{trapezoid}=\frac{1}{2}(|AB|+|CD|)H$ Now I am stuck. I need to find CD in terms of H and AB, but I simply don't know how to do this. How should I proceed to solve this in the easiest possible way?
Let $E$ be the point where the lines $AD$ and $BC$ intersect. Let $k = [ABE]$. In $\Delta ABE$, segment $BP$ is both an angle bisector and an altitude, hence it's also a median. It follows that $[APB] = [EPB] = \frac{1}{2}k$. Since $CD\,||\,AB$, it follows that $\Delta DCE$ is similar to $\Delta ABE$. Since $AP = EP$ and $\displaystyle{{\small{\frac{DP}{AP} = \frac{1}{2}}}}$, it follows that $\displaystyle{{\small{\frac{DE}{AE} = \frac{1}{4}}}}$, hence $[DCE] = \frac{1}{16}k$. Then $[BCDP] = [EPB] - [DCE] = \frac{1}{2}k - \frac{1}{16}k = \frac{7}{16}k$. Therefore $\, \displaystyle{ \frac{[APB]}{[BCDP]} = \frac {\left(\frac{1}{2}k\right)} {\left(\frac{7}{16}k\right)} = \boxed{{\small{\frac{\,8\,}{\,7\,}}}} }$
Integrated $\int_0^{2\pi}\frac{ae^{i\theta}+b}{ce^{i\theta}+d}e^{in\theta}d\theta$ symbolically but interested in derivation I came across the following integral which seems pretty interesting when I evaluated it with Woflram Alpha: $$ I = \int_0^{2 \pi} \frac{a e^{i\theta} + b}{c e^{i\theta} + d} e^{i n \theta} d\theta $$ I found the following: \begin{equation} I = \begin{cases} \displaystyle \frac{2\pi b}{d} &\quad n = 0, \\ 0 & \quad n \neq 0. \end{cases} \end{equation} Does anyone know of a method to arrive at this result?
It is enough to exploit the orthogonality relation $$ \int_{0}^{2\pi}e^{-mi\theta}e^{ni\theta}\,d\theta = 2\pi\,\delta(m,n) $$ and expand $\frac{ae^{i\theta}+b}{c e^{i\theta}+d}$ as a geometric series in $e^{i\theta}$ or $e^{-i\theta}$, according to $|c|>|d|$ or $|d|>|c|$. If $|c|=|d|$ we have a singular integral since $e^{i\pi}+1=0$.
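To make the hint concrete, take the case $|d|>|c|$ (a sketch; the case $|c|>|d|$ is analogous, with a series in $e^{-i\theta}$): $$ \frac{ae^{i\theta}+b}{ce^{i\theta}+d}=\frac{ae^{i\theta}+b}{d}\sum_{k\ge 0}\left(-\frac{c}{d}\right)^{k}e^{ik\theta}, $$ so only non-negative powers of $e^{i\theta}$ occur, and the constant coefficient is $\frac{b}{d}$. The orthogonality relation then gives $I=\frac{2\pi b}{d}$ for $n=0$ and $I=0$ for $n>0$.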
Proving that every third Fibonacci number is divisible by F2=2 In our notation we have that $F_{n-1}$ is the $n$th Fibonacci number, since we start with $F_{0}=1$. We want to prove that every third Fibonacci number is divisible by $F_{2}=2$. The proof is the following, using induction: $F_{3n+2}=F_{3n+1}+F_{3n}$ $F_{3n+2}=F_{3n}+F_{3n-1}+F_{3n-1}+F_{3n-2}$ (A) $F_{3n+2}=F_{3n-1}+F_{3n-2}+F_{3n-1}+F_{3n-2}$ (B) $F_{3n+2}=2(F_{3n-1}+F_{3n-2})$ I don't understand how you go from step (A) to step (B); can anyone explain this to me?
There is a well-known property of Fibonacci numbers: $$ \gcd(F_n,F_m)=F_{\gcd(n,m)} \tag{1} $$ Since $F_3=2$, $$ \gcd(F_{3n},2)=\gcd(F_{3n},F_3)=F_{\gcd(3,3n)} = F_3 = 2 \tag{2} $$ hence every Fibonacci number of the form $F_{3n}$ is even.
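A quick empirical check of the claim, in the standard indexing $F_1=F_2=1$, $F_3=2$ used in this answer (a minimal sketch):

fib = [0, 1]                      # F_0 = 0, F_1 = 1
for _ in range(30):
    fib.append(fib[-1] + fib[-2])
print(all(fib[3*n] % 2 == 0 for n in range(1, 10)))   # True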
What are examples of vectors that are not usually called vectors? In algebra, a vector is an element of a vector space. An example of such an element is a matrix. In linear algebra, a vector is a shorthand name for a $1 \times m$ or an $n \times 1$ matrix. (Whereas a matrix itself is also a vector, by definition, but is rarely referred to as such.) In (analytic) geometry, a (Euclidean) vector is a geometric object with a magnitude and direction. These can be represented by tuples. Both matrices and tuples are clearly vector elements in some spaces, but are for some reason not called vectors by name, unlike Euclidean vectors and row vectors. Are there any other examples of vectors that are not called vectors, like matrices and tuples?
The quality of a "vector" you are describing is the ability to write it as a set of coordinates with respect to some basis of the space in which they reside. In finite dimensions, provided that you maintain the ordering of the basis elements, this takes the form of a "tuple", or a row or column matrix. This "picture" of a vector space gets somewhat complicated once you move past finite dimensions, since our basis now contains infinitely many elements. Sometimes we can still write the vectors in our space as some kind of ordered list (like the space of all real sequences) even though the vector space is infinite dimensional. However we cannot write down a basis for these spaces, but we know that one must exist (assuming the axiom of choice). For such vector spaces, the basis becomes fairly useless, and we have to adopt other tools to study them. There are many examples of vector spaces which do not "look" like vectors in the traditional sense. The set of functions from a given set into the real or complex numbers will do, where we take the "pointwise operations" of addition and scalar multiplication.
Generalizing Poisson's binomial distribution to the multinomial case. If in a binomial distribution the Bernoulli trials are independent and have different success probabilities, then it is called a Poisson binomial distribution. Such a question has been previously answered here and here. How can I do a similar analysis in the case of a multinomial distribution? For instance, if a $k$-sided die is thrown $n$ times and the probabilities of each side showing up change every time instead of being fixed (as in the case of the regular multinomial distribution), how can I calculate the probability mass function of such a distribution? We assume that we have access to $\{\mathbb{p_i}\}_1^n$, where $\mathbb{p_i}$ is a vector of length $k$ denoting the probabilities of each of the $k$ sides showing up in the $i^{th}$ trial. Note: I have asked this question on stats.stackexchange as well, but I feel it is more pertinent here.
I have come across very few resources on the topic. Such a problem is called the Generalized Poisson Binomial (GPB) distribution. I am listing a few of them here to help others:
* An old paper describing the calculation of its approximate pdf.
* A 2018 paper describing an algorithm to compute its PDF using Fourier transforms.
* An R package implementing the same.
How to implement the formula without computing any matrix inverse? Given that $A, B, C$ are $n \times n$ matrices, $B, C$ are nonsingular, $b$ is an $n$-vector, and $$x=B^{-1}(2A+I)(C^{-1}+A)b,$$ how can $x$ be computed without forming any matrix inverse? This is a question coming from my computational mathematics course, and right now we are learning LU factorization. However, I don't see how it can be used to solve this question. I also tried to move the elements to the other side, but things were getting too complicated to solve. Any answers will be appreciated!
I actually got the answer myself. Since $$x=B^{-1}(2A+I)(C^{-1}+A)b,$$ multiplying both sides by $B$ and expanding the brackets, we get $$Bx=2AC^{-1}b+2AAb+C^{-1}b+Ab.$$ Let $k$ be the $n$-vector $k=C^{-1}b$; then $Ck=b$, and by LU factorization we can solve for $k$. Plugging $k$ into the equation above, we get $$Bx=2Ak+2AAb+k+Ab.$$ Let $y=2Ak+2AAb+k+Ab$, an $n$-vector, so that $$Bx=y.$$ Again by LU factorization, we can find $x$, with no inverse ever formed. QED. (A short numeric sketch is given below.)
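The same procedure with SciPy's LU routines (variable names are my own; the random data is just for illustration):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n*np.eye(n)   # keep B, C nonsingular
C = rng.standard_normal((n, n)) + n*np.eye(n)
b = rng.standard_normal(n)

k = lu_solve(lu_factor(C), b)             # solve C k = b, i.e. k = C^{-1} b
y = 2*A @ k + 2*A @ (A @ b) + k + A @ b   # y = (2A + I)(C^{-1} + A) b
x = lu_solve(lu_factor(B), y)             # solve B x = y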
Intuition about Stirling numbers Our professor gave us the theorem: $S(n+1,m+1)=\sum\limits_{k=m}^n \binom{n}{k} S(k,m)$, where $S(n,m)$ denotes the number of partitionings of a set with $n$ elements into $m$ blocks. I don't want a proof of this equation as Wikipedia and our professor gave it to us. I just don't understand the intuition behind it. Could someone explain using for example gifts and packages ?
The real key is that $\binom{n}{k}=\binom{n}{n-k}$. So you take $n+1$ gifts labeled $\{1,\dots, n+1\}$. Take gift $n+1$ and $n-k$ other gifts, and put them together in the package containing gift $n+1$; then partition the remaining $k$ gifts into $m$ packages. Summing over $k$ gives the identity (numerically checked below).
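A quick numerical verification of the identity (a sketch; the recursion is the standard one for Stirling numbers of the second kind):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def S(n, m):                      # Stirling numbers of the second kind
    if n == m:
        return 1
    if m == 0 or m > n:
        return 0
    return S(n - 1, m - 1) + m*S(n - 1, m)

n, m = 7, 3
print(S(n + 1, m + 1) == sum(comb(n, k)*S(k, m) for k in range(m, n + 1)))   # True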
Finding The Derivative Using $\frac{d}{dx}x^n=nx^{n-1}$ So I am learning how to differentiate now, and I came across this problem $$f(x)=\frac{1-x}{2+x}$$ We are asked to find $f'(x)$. When I use $$\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$$ I find that $f'(x)=\frac{-3}{(2+x)^2}$, but when I try to find $f'(x)$ the easy way, i.e. $\frac{d}{dx}x^n=nx^{n-1}$, I cannot do it for some reason. Is it not possible to use that derivative property when dealing with quotients? I know we can use it for polynomial addition, subtraction and multiplication, but I am struggling with quotients. Can someone please explain what it is I am not seeing? My Attempt: $$\frac{d}{dx}\frac{1-x}{2+x}=\frac{d}{dx}(1-x)(2+x)^{-1}=(1)(-1)(2+x)^{-2}$$ Which is obviously wrong, so can someone please break this down for me. :)
You will need to apply the quotient rule. Let the whole function be denoted $h(x)=\frac{f(x)}{g(x)}$, with the numerator of your fraction $f(x)$ and the denominator $g(x)$. Then the derivative of your function can be found by the following formula: $$h'(x)=\frac{f'(x)g(x)-g'(x)f(x)}{(g(x))^2}.$$ Note that you can compute the derivatives of $f(x)$ and $g(x)$ individually using the method you have described above, and can simply plug the results into the equation.
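Applied to the function at hand, with $f(x)=1-x$ and $g(x)=2+x$ (so $f'(x)=-1$ and $g'(x)=1$), this recovers the limit-definition answer: $$h'(x)=\frac{(-1)(2+x)-(1)(1-x)}{(2+x)^2}=\frac{-2-x-1+x}{(2+x)^2}=\frac{-3}{(2+x)^2}.$$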
Partial derivatives using polar coordinates I was given the following problems as practice, and I've solved all but one. However, I am not sure that my answers are correct. Calculate $(\partial r/\partial x)_y$, $(\partial r/\partial y)_x$, $(\partial \theta/\partial x)_y$, $(\partial y/\partial x)_r$, $(\partial r/\partial \theta)_x$ where $x=r\cos(\theta)$, $y=r\sin(\theta)$. I have that $(\partial r/\partial x)_y=\cos(\theta)$ $(\partial r/\partial y)_x=\sin(\theta)$ $(\partial y/\partial x)_r=-\cot(\theta)$ $(\partial \theta/\partial x)_y=-\sin(\theta)/r$ Would $(\partial r/\partial \theta)_x$ then be equal to $r\tan\theta$?
Note that $r=\sqrt{x^2+y^2}$. Holding $y$ fixed we have $$\begin{align} \frac{\partial r}{\partial x}&=\frac{x}{\sqrt{x^2+y^2}}\\\\ &=\frac xr\\\\ &=\cos(\theta) \end{align}$$ Similarly, holding $x$ fixed we have $$\begin{align} \frac{\partial r}{\partial y}&=\frac{y}{\sqrt{x^2+y^2}}\\\\ &=\frac yr\\\\ &=\sin(\theta) \end{align}$$ Then, holding $r$ fixed we see that $\sqrt{x^2+y^2}$ is constant. Hence, $$\begin{align} \frac{\partial r}{\partial x}&=0\\\\ &=\frac{x}{\sqrt{x^2+y^2}}+\frac{y}{\sqrt{x^2+y^2}}\frac{\partial y}{\partial x} \end{align}$$ whereupon solving for $\frac{\partial y}{\partial x}$ we find that $$\begin{align} \frac{\partial y}{\partial x}&=-\frac xy\\\\ &=-\cot(\theta) \end{align}$$ Finally, holding $y$ fixed, we have $x=y\cot(\theta)$ so that $$\begin{align} \frac{\partial x}{\partial x}&=1\\\\ &=y\left(-\csc^2(\theta)\frac{\partial \theta}{\partial x}\right) \end{align}$$ whereupon solving for $\frac{\partial \theta}{\partial x}$ we find that $$\begin{align} \frac{\partial \theta}{\partial x}&=-\sin^2(\theta)/y\\\\ &=-\sin^2(\theta)/(r\sin(\theta))\\\\ &=-\sin(\theta)/r \end{align}$$
Cutting a rectangle and receiving a square How can I cut a $1 \times 10$ rectangle $7$ times so that I can obtain a square? I have tried doing it, but wasted lots of paper.
The solution in the linked image ("Rectangle to square") uses 4 cuts. None of the pieces needs to be rotated. It is generated using a dissection based on a strip: imagine layering the long, thin rectangle across the square and cutting at the edges of the square. If exactly 7 cuts are needed, the extra cuts can be added arbitrarily.
How to prove this inequality $\sum\tan{\frac{A}{2}}-\frac{\sqrt{3}-1}{8}\prod\csc{\frac{A}{2}}\le 1$ In $\Delta ABC$, show that $$\tan{\frac{A}{2}}+\tan{\frac{B}{2}}+\tan{\frac{C}{2}}-\frac{\sqrt{3}-1}{8}\csc{\frac{A}{2}}\csc{\frac{B}{2}}\csc{\frac{C}{2}}\le 1$$ I also tried $$\left(\tan{\frac{A}{2}}+\tan{\frac{B}{2}}+\tan{\frac{C}{2}}\right)^2\ge 3\left(\tan{\frac{A}{2}}\tan{\frac{B}{2}}+\tan{\frac{A}{2}}\tan{\frac{C}{2}}+\tan{\frac{C}{2}}\tan{\frac{B}{2}}\right)=3,$$ but from there we get something which is impossible to finish during a competition.
Easy to show that $\sum\limits_{cyc}\tan\frac{\alpha}{2}=\frac{4R+r}{p}$ and $\prod\limits_{cyc}\sin\frac{\alpha}{2}=\frac{r}{4R}$. Also, if $M$ is the centroid of the triangle and $I$ is the incenter, then $MI^2=\frac{p^2+5r^2-16Rr}{9}$, which gives $p\geq\sqrt{16Rr-5r^2}$. Let $R=2xr$; by Euler's inequality $R\geq 2r$, so $x\geq 1$. Hence, we need to prove that $$\frac{4R+r}{p}-\frac{(\sqrt3-1)R}{2r}\leq1$$ or $$p\geq\frac{2r(4R+r)}{2r+(\sqrt3-1)R},$$ for which it's enough to prove that $$\sqrt{16Rr-5r^2}\geq\frac{2r(4R+r)}{2r+(\sqrt3-1)R}$$ or $$(32x-5)((\sqrt3-1)x+1)^2\geq(8x+1)^2$$ or $$(x-1)(32(2-\sqrt3)x^2-5(2-\sqrt3)x+3)\geq0,$$ which is obviously true for $x\geq1$. Done!
How can we show that $\int_{0}^{1}(\ln{x})^n\ln(-\ln{x})\mathrm dx=(-1)^n(a-b\gamma)\cdot{n!\over b}?$ Consider the integral $(1)$ $$\int_{0}^{1}(\ln{x})^n\ln(-\ln{x})\mathrm dx=I\tag1$$ Here $H_n={a\over b}$ is the $n$th harmonic number, $H_n:=1,{3\over 2},{11\over 6},{25\over 12},...$ for $n:=1,2,3,4,...$ respectively. We have $$I=(-1)^n(a-b\gamma)\cdot{n!\over b}$$ [We think the closed form is a sort of interesting one, because we haven't seen Euler's constant mixing with the numerator and denominator of a harmonic number.] An attempt: $u=-\ln{x}$, then $-x\,du=dx$, which simplifies to $$\int_{0}^{\infty}\color{red}{u^n\ln{u}}e^{-u}\mathrm du\tag2$$ We can apply the Laplace transform to $(2)$, but I can't find the Laplace transform of $F(u)=u^n\ln{u}$. Or else we can use $$e^{-x}=1-x+{x^2\over 2!}-{x^3\over 3!}+\cdots$$ $$\int_{0}^{\infty}\left(u^n-u^{n+1}+{u^{n+2}\over 2!}-{u^{n+3}\over 3!}+\cdots\right)\ln{u}\mathrm du\tag3$$ $(3)$ does not converge; let $n=1$ for example, the first term of the integral is $$\int_{0}^{\infty}u\ln{u}\mathrm du={1\over 2}u^2\ln{u}-{u^2\over 4}\Big|_{0}^{\infty}$$ How else can we go about tackling $(1)?$
Hint. One may recall the standard integral representation of the Euler gamma function, $$ \Gamma(s+1)=\int_{0}^{\infty}u^{s} e^{-u}\mathrm du,\quad s>-1, $$ then by differentiating under the integral sign, one gets $$ \Gamma'(s+1)=\int_{0}^{\infty}u^{s}\cdot\ln u \cdot e^{-u}\mathrm du,\quad s>-1, $$ or $$ \frac{\Gamma'(s+1)}{\Gamma(s+1)}=\frac1{\Gamma(s+1)}\int_{0}^{\infty}u^{s}\cdot\ln u \cdot e^{-u}\mathrm du,\quad s>-1, $$ then one may recall that the digamma function $\psi$ is such that $$ \psi(n+1):=\frac{\Gamma'(n+1)}{\Gamma(n+1)}=H_{n}-\gamma, \quad n=1,2,\cdots, $$ where $\gamma$ is the Euler-Mascheroni constant and we conclude with the change of variable $u=-\ln x$ using $\Gamma(n+1)=n!$.
Nonnegative nonlinear optimization by squaring I would like to minimize a function $f({\mathbf x})$, subject to the constraint that each element of ${\mathbf x}$ is nonnegative. In surveying the literature I see many complicated methods: exterior penalty, barrier functions, etc. However, there seems to be a simple solution: replace the objective function with one that squares each argument, i.e. optimize $f({\mathbf y}^2)$ over unconstrained ${\mathbf y}$ instead. Is there a well-known name for this technique and why isn't it mentioned more often?
Consider the following inequality-constrained (convex) quadratic program $$\begin{array}{ll} \text{minimize} & (x_1 - 1)^2 + (x_2 - 2)^2\\ \text{subject to} & x_1, x_2 \geq 0\end{array}$$ Let $x_i =: y_i^2$. We then have an unconstrained quartic optimization problem $$\text{minimize} \quad (y_1^2 - 1)^2 + (y_2^2 - 2)^2$$ Unfortunately, this quartic objective function is non-convex and has several local minima (surface plot omitted). Visual inspection of the plot tells us that, in total, there are $3^2 = 9$ critical points: $4$ local minima, $1$ local maximum and $4$ saddle points. Peeking from "below", we have a tooth-like plot (also omitted). Taking the partial derivatives and finding where they vanish, $$y_1 (y_1^2 - 1) = 0 \qquad \qquad \qquad y_2 (y_2^2 - 2) = 0$$ we obtain the $3^2 = 9$ critical points $$\left( y_1 = 0 \lor y_1 = \pm 1 \right) \land \left( y_2 = 0 \lor y_2 = \pm \sqrt 2 \right)$$ Squaring, we obtain $2^2 = 4$ points $$\left( x_1 = 0 \lor x_1 = 1 \right) \land \left( x_2 = 0 \lor x_2 = 2 \right)$$ which can be plotted together with the quadratic objective function on the nonnegative quadrant (plot omitted). In this case, the objective function is a convex, positive definite quadratic form written as a nice sum of squares and without bilinear terms. Hence, we know that the minimum is attained at $(1,2)$, which is in the feasible region.
Question on effective rate of interest given rate continuous compounding $\mathbf{\text{(i) Given r= $0.12 $ annually compounded, find the monthly}}$ $\mathbf{\text{repayments on a $ 1 000 000$ Dollar loan to be repaid completely in 5 years}}$ $\mathbf{\text{Solution}}$ Finding the equivalent rate of monthly compounding: $(1+ 0.12)= (1+\frac{r_{12}}{12})^{12} \rightarrow r_{12}=0.11387$ Then $1 000 000= C [\frac{1-(1+\frac{0.1139}{12})^{-60}}{\frac{0.1139}{12}}] \rightarrow C= 21 937.42$ $\mathbf{\text{(ii) Given r= $0.12 $ with continuous compounding, find the monthly}}$ $\mathbf{\text{ repayments on a $ 1 000 000$ Dollar loan to be repaid completely in 5 years}}$ $e ^ {-0.12}=(1+\frac{r_{12}}{12})^{12}$ I have been getting $r_{12}$ as negative. Please help.
The factor for continuous compounding is $e^{r}$, not $e^{-r}$. There is a well-known identity: $$\lim_{n \to \infty} \left(1+\frac{x}{n}\right)^n=e^x$$ Since in your case $n$ is not very large, the equation doesn't hold exactly. But you can say that the left side is monthly compounding (with $n=12$) and the right side is continuous compounding; the $x$ on the two sides has to be different for the equation to hold. $\left(1+\frac{r_{12}}{12}\right)^{12}=e^{0.12}$ Taking the 12-th root on both sides: $\left(1+\frac{r_{12}}{12}\right)^{1}=e^{0.01}$ $\frac{r_{12}}{12}=e^{0.01}-1$ $r_{12}=12\cdot (e^{0.01}-1)=0.120602...$
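A short numeric sketch of part (ii) (variable names are my own; the annuity-payment formula is the one already used in part (i)):

from math import exp
i = exp(0.01) - 1                            # monthly rate equivalent to e^{0.12}
print(12*i)                                  # 0.120602..., the nominal annual rate r_12
C = 1_000_000 / ((1 - (1 + i)**-60) / i)     # monthly repayment over 60 months
print(C)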
Show that $R$ is a ring Let $R$ be a set with $+$ and $\cdot$ s.t. 1) $(R,+)$ is a group (not necessarily commutative) 2) $\cdot$ is associative and distributive over $+$, i.e. $a\cdot (b\cdot c)=(a\cdot b)\cdot c$, $a\cdot (b+c)=a\cdot b+a\cdot c$ and $(a+b)\cdot c=a\cdot c+b\cdot c$ 3) there is $1\in R$ s.t. $1\cdot x=x\cdot 1=x$. Show that $R$ is a ring. I really don't know how to do it. I think that I have to show that $(R,+)$ is commutative, but I didn't succeed.
Hint: Compute $$ (1 + 1) \cdot (a + b) $$ in two ways, using the two distributive laws.
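Written out, the two expansions the hint refers to are $$(1+1)\cdot(a+b)=1\cdot(a+b)+1\cdot(a+b)=a+b+a+b$$ and $$(1+1)\cdot(a+b)=(1+1)\cdot a+(1+1)\cdot b=a+a+b+b.$$ Cancelling $a$ on the left and $b$ on the right (using the group structure of $(R,+)$) gives $b+a=a+b$, so addition is commutative.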
Multiple choice question of indefinite integral, $\int \frac{x + 9}{x^3 + 9x} dx$. If $\int \frac{x + 9}{x^3 + 9x} dx = k\arctan(mx) + n\ln (x) + p \ln (x^2 + 9) + c$, then $(m+n)/(k+p) = $ (A) 6 (B) -8 (C) -3 (D) 4 I tried solving it by differentiating the R.H.S. but couldn't arrive at the answer.
The derivative of the right hand side is $$ \frac{km}{1+m^2x^2}+\frac{n}{x}+\frac{2px}{x^2+9} $$ which should equal $$ \frac{x + 9}{x^3 + 9x} $$ Thus you need $m=1/3$ and then you can go on.
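For reference, the computation can also be carried through by partial fractions (a sketch consistent with the stated answer form): $$\frac{x+9}{x^3+9x}=\frac{x+9}{x(x^2+9)}=\frac{1}{x}-\frac{x}{x^2+9}+\frac{1}{x^2+9},$$ so $$\int\frac{x+9}{x^3+9x}\,dx=\ln x-\tfrac12\ln(x^2+9)+\tfrac13\arctan\tfrac{x}{3}+c,$$ giving $k=\tfrac13$, $m=\tfrac13$, $n=1$, $p=-\tfrac12$, and hence $\frac{m+n}{k+p}=\frac{4/3}{-1/6}=-8$, option (B).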
Exercise from Munkres, and a surjective loop from $[0, 1]$ onto $S^2$ I'm working through some Munkres exercises in preparation for an exam, and found this one (59.2) that I'm unsure about. "Criticize the following 'proof' that $S^2$ is simply connected: Let $f$ be a loop in $S^2$ based at $x_0$. Choose a point $p$ of $S^2$ not lying in the image of $f$. Since $S^2 - p$ is homeomorphic with $\mathbb{R}^2$, and $\mathbb{R}^2$ is simply connected, the loop $f$ is path homotopic to the constant loop." The only issue I can find with this argument, i.e. the only thing I'm not convinced is kosher, is the assumption that such a $p$ exists. If there existed a loop in $S^2$ whose image was all of $S^2$, this move would be illegal. Though I can imagine such a space-filling curve existing, I haven't the first clue how it'd be constructed, and am agnostic on whether such a curve could exist. Is this the problem? Or am I supposed to be noticing some other flaw? If the answer to the former is yes, how would one construct such a loop? Thanks in advance.
Pick a standardly-constructed space-filling $c: I \to I\times I$ (in the sense that $c(0)$ and $c(1)$ belong to the boundary of $I \times I$). Let $\pi: I \times I \to I \times I / \sim$ be the quotient map identifying the boundary, and $h: I \times I/ \sim \to S^2$ be a homeomorphism. Then $h \circ \pi \circ c$ does the job.
Why Use Arbitrary Unions and Finite Intersections in Topology? Why is the definition of a topological space given in terms of finite intersections and arbitrary unions? What if we change the conditions to arbitrary intersections and finite unions?
While defining topology in general, mathematicians wanted to generalize the understanding of "openness" from metric spaces such as $\mathbb{R}^N$, where it is defined by means of open balls. Thus, for example, the intersection of the sets of the form $(-\frac{1}{n},\frac{1}{n})$ (where $n$ ranges over the positive integers) should not be open, since it equals the singleton $\{0\}$, which is not open in the metric topology. Similarly, the union of the sets of the form $(-r,r)$ (where $r$ is any positive real number) should be open, since the union equals $\mathbb{R}$, which is open in the metric topology.
Show that $\sqrt{2-\sqrt{3}}=\frac{1}{2}(\sqrt{6}-\sqrt{2})$ Prove the following: $\sqrt{2-\sqrt{3}}=\frac{1}{2}(\sqrt{6}-\sqrt{2})$ How do you go about proving this?
Hint: In general a double radical $\sqrt{a\pm \sqrt{b}}$ can be denested if $a^2-b$ is a perfect square, and in this case we have: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ In your case $a^2-b=1$, so....
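Carrying the hint through with $a=2$ and $b=3$ (so $a^2-b=1$): $$\sqrt{2-\sqrt3}=\sqrt{\frac{2+1}{2}}-\sqrt{\frac{2-1}{2}}=\sqrt{\frac32}-\sqrt{\frac12}=\frac{\sqrt6-\sqrt2}{2}=\frac12\left(\sqrt6-\sqrt2\right).$$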
How many combinations of pennies, dimes, nickels, and quarters create 0.32$? I need help solving this. I cannot find the complete number of combinations. I have already found $5$, but I can't find any more.
Leaving out the pennies in each combo ...
There are 2 combos with a quarter:
25+5 (that is: 1 quarter + 1 nickel ... so 2 pennies ...)
25 (so just a quarter ... so 7 pennies ... for the combos below, you'll have to figure out how many pennies to add ...)
There is 1 combo with 3 dimes:
10+10+10
There are 3 combos with 2 dimes:
10+10+5+5
10+10+5
10+10
There are 5 combos with 1 dime:
10+5+5+5+5
10+5+5+5
10+5+5
10+5
10
There are 7 combos without dimes or quarters:
5+5+5+5+5+5
5+5+5+5+5
5+5+5+5
5+5+5
5+5
5
(32 pennies)
Total: 18 combos
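The count can be confirmed with a short enumeration (a sketch; any choice of quarters, dimes and nickels worth at most 32 cents is completed uniquely by pennies):

count = sum(1 for q in range(2) for d in range(4) for n in range(7)
            if 25*q + 10*d + 5*n <= 32)
print(count)   # 18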
How to show that $\lim\limits_{x \to 0} \frac{3x-\sin 3x}{x^3}=9/2$? $\lim\limits_{x \to 0} \frac{3x-\sin 3x}{x^3}$ I need to prove that this limit equals to $\frac{9}{2}$. Can someone give me a step by step solution? EDIT: I am sorry. The $x$ goes to $0$, not $1$.
By elementary means: From $$\sin 3x=3\sin x-4\sin^3x$$ we draw $$L=\lim_{x\to0}\frac{3x-3\sin x+4\sin^3x}{x^3}=\lim_{x\to0}\frac{3x-3\sin x}{x^3}+4.$$ But $$\lim_{x\to0}\frac{x-\sin x}{x^3}=\lim_{3x\to0}\frac{3x-\sin3x}{27x^3}$$ so that $$L=\frac L9+4.$$
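A quick numerical sanity check (a sketch; $x$ is small, but not so small that floating-point cancellation dominates):

from math import sin
x = 1e-4
print((3*x - sin(3*x))/x**3)   # approximately 4.5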
Show that if $c_n = \int_{-1}^1 (1-t^2)^\frac{n-1}{2}dt$ then $c_n = \frac{n-1}{n}c_{n-2}$ Question: Show that if $c_n = \int_{-1}^1 (1-t^2)^\frac{n-1}{2}dt$ then $c_n = \frac{n-1}{n}c_{n-2}$. I have tried two things both of which I didn't get super far with. First, I tried dividing n into two cases odd and even and applying the binomial theorem and second I tried integration by parts with $u = (1-t^2)^\frac{n-1}{2}$ and $dv = dt$. Maybe one of these approaches are right and I just screwed up somewhere but some help would be appreciated.
Observe that $$ c_n = \int_{-1}^{0}(1-t^2)^{\frac{n-1}{2}}dt + \int_{0}^{1}(1-t^2)^{\frac{n-1}{2}}dt = \\ =2\int_{0}^{1}(1-t^2)^{\frac{n-1}{2}}dt, $$ where you make the substitution $t\mapsto -t$ in the first integral (or just observe that the function being integrated is even). Then, we do partial integration: $$ \int_{0}^{1}(1-t^2)^{\frac{n-1}{2}}dt = t(1-t^2)^{\frac{n-1}{2}}\big|_{0}^{1} + (n-1)\int_{0}^{1}t^2(1-t^2)^{\frac{n-3}{2}}dt = \\ =(n-1) \int_{0}^{1}t^2(1-t^2)^{\frac{n-3}{2}}dt. $$ Then, we conclude that $$ (n-1)\int_{0}^{1}(1-t^2)^{\frac{n-3}{2}}dt - \int_{0}^{1}(1-t^2)^{\frac{n-1}{2}}dt = (n-1)\int_{0}^{1}(1-t^{2})^{\frac{n-1}{2}}dt, $$ which means $$ \int_{0}^{1}(1-t^2)^{\frac{n-1}{2}}dt = \frac{n-1}{n}\int_{0}^{1}(1-t^2)^{\frac{n-3}{2}}dt. $$ Then, it follows that $$ \frac{c_n}{2}=\frac{n-1}{n}\frac{c_{n-2}}{2} \implies c_n = \frac{n-1}{n}c_{n-2}. $$
Questions about root system for arbitrary Lie algebra According to the book "Introduction to Lie Algebras" by Erdmann and Wildon, I understand that we can find a root space decomposition for a semisimple Lie algebra. However, when the book goes to the definition of a root system, it appears to me that a root system is NOT only for semisimple Lie algebras but for any arbitrary Lie algebra. If my assertion is correct, how can I find roots for an arbitrary Lie algebra if it cannot be root-space decomposed? For example, how can I know the roots of $\mathfrak{gl}(n,\mathbb{C})$? Thanks in advance!
There is a weight space decomposition for all finite-dimensional Lie algebras $L$ over a field $K$ of characteristic zero, which later is used for the root space decomposition in the semisimple case. For this, let $H$ be a Lie subalgebra of $L$, and consider the restriction of the adjoint representation to $H$, i.e., ${\rm ad}:H\rightarrow \mathfrak{gl}(L)$, and define the generalized eigenspace $$ L_{\lambda}(h)=\{ x\in L\mid (ad(h)-\lambda id)^nx=0 \text{ for some } n\}, $$ for $h\in H$. If $K$ is algebraically closed, the Jordan decomposition of $ad(h)$ gives $$ L=L_0(h)\oplus \bigoplus_{i=1}^p L_{\lambda_i}(h), $$ where $0,\lambda_1,\ldots ,\lambda_p$ are the distinct eigenvalues of $ad(h)$. Now for each function $\alpha:H\rightarrow K$ let $$ L_{\alpha}=\bigcap_{h\in H}L_{\alpha(h)}(h). $$ Then the result is: Theorem: Let $K$ be algebraically closed and $H\subseteq L$ be a nilpotent subalgebra. Then we have the weight space decomposition $$ L=\bigoplus_{\alpha:H\rightarrow K}L_{\alpha}, $$ such that $H\subseteq L_0$, $[L_{\alpha},L_{\beta}]\subseteq L_{\alpha+\beta}$, and $[H,L_{\alpha}]\subseteq L_{\alpha}$.
Is this function coercive? We are given $A \in \mathcal{M}_{n,n}(\mathbb{R})$ a positive definite matrix, $b \in \mathbb{R}^n$ and $c \in \mathbb{R}. $ Our function is $f : \mathbb{R}^n \to \mathbb{R}$ defined by : $$f(x) = \frac{1}{2}\langle Ax, x \rangle + \langle b, x \rangle + c$$ A function is said to be coercive if: $$f(x) \to +\infty \text{ as } \lvert \lvert x \rvert \rvert _{2} \to +\infty.$$ I normally attempt to bound the function from below by some function that is also coercive but am unable to do so here.
By applying the Spectral Theorem, we can write $A = \sum_{i =1}^k \lambda_i P_{\lambda_i}$, where $\lambda_i$ is the $i$-th distinct eigenvalue of $A$ and $P_{\lambda_i}$ is the orthogonal projection matrix onto the eigenspace of $\lambda_i$. Since $A$ is positive definite, each $\lambda_i >0$. If we set $\lambda$ to be the smallest eigenvalue of $A$, then $\langle Ax,x \rangle = \sum_{i=1}^k \lambda_i\|P_{\lambda_i} x\|_2^2 \geq \lambda \sum_{i=1}^k \|P_{\lambda_i} x\|_2^2 = \lambda \|x\|_2^2$. By Cauchy-Schwarz, $\langle b, x\rangle \geq -\|b\|_2\|x\|_2$. Thus we get the lower bound $f(x) \geq \frac{\lambda}{2}\|x\|_2^2 -\|b\|_2\|x\|_2 + c$, which goes to infinity with $\|x\|_2$.
How does the curves-fields correspondence fail in higher dimensions? It's embarrassing how little I know about higher dimensional varieties. Let $k$ be an algebraically closed field. Then, the category of smooth projective curves over $k$ and nonconstant morphisms is equivalent to the category of fields $K/k$ of transcendence degree one. How does this fail in higher dimensions? I'm looking for examples of smooth projective varieties $X,Y$ which have isomorphic function fields, but are not isomorphic. Also, in higher dimensions, how can a rational map between smooth proj. varieties fail to be a morphism?
Perhaps this will aid your intuition. Consider this rational map from a smooth curve to a projective space: $$ f: \mathbb A^1 -\!\! \rightarrow\mathbb P^1, \ \ f(t) = [t:1/t^2] $$ At first glance, it looks like $f$ is not regular at $t = 0$. But that is an illusion. Clearing denominators, you see that $f$ can be rewritten as $$ f(t) = [t^3:1]$$ which is manifestly regular. So what is the algebraic property that makes the "clearing denominators" trick work? It's fact that the local ring on $\mathbb A^1$ at $t = 0$ is a discrete valuation ring. In this example, $t$ has valuation $1$ and $1/t^2$ has valuation $-2$ in the local ring, so the denominators are cleared by multiplying by $t^2$ which has valuation $+2$. How does this generalise when you have a rational map $$ f: X -\!\! \rightarrow \mathbb P^n $$ where $X$ is smooth, but of arbitrary dimension? Well, the local rings of $X$ in codimension 1 are discrete valuation rings. Using the same trick of "clearing denominators", you see that $f$ is regular everwhere except on a codimension 2 subspace of $X$. But as MooS points out, you can't do better than this - consider the blow up of a surface.
repeated integration of lebesgue integrable function I am currently working with Wheeden Zygmund's Measure and Integral. I got stuck on one of the exercises (Ch. 6, Ex. 13). Let $f \in L(-\infty,\infty)$ (Lebesgue integrable), and let $h > 0$ be fixed. Prove that $$ \int_{-\infty}^\infty (\frac{1}{2h} \int_{x-h}^{x+h} f(y) \, dy) \, dx = \int_{-\infty}^\infty f(x) \, dx $$ I believe it suffices to prove that $ \frac{1}{2h} \int_{x-h}^{x+h} f(y) \, dy = f(x) $ using the mean value theorem, but I am not entirely sure. Can anyone give me a hint on this problem? Thanks in advance!
Just note by Fubini's theorem that \begin{align*} \frac{1}{2h}\int_{-\infty}^\infty \int_{x-h}^{x+h} f(y) dy dx &= \frac{1}{2h}\int_{-\infty}^\infty \int_{-\infty}^{\infty} 1_{(x-h, x+h)}(y) \cdot f(y) dy dx \\ &= \frac{1}{2h}\int_{-\infty}^\infty f(y) \cdot \int_{-\infty}^{\infty} 1_{(y-h, y+h)}(x) dx dy = \int_{-\infty}^\infty f(y) dy, \end{align*} since $1_{(x-h,x+h)}(y) = 1_{(y-h, y+h)}(x)$. I leave it to you to verify that Fubini's theorem is indeed applicable.
$B^2A - A$ is invertible $\Rightarrow $ $AB - A$ is invertible I'm trying to solve the problem in the title. So far I have concluded: 1) $\det(B^2A - A) \neq 0 \Rightarrow \det(A)\neq0$ and $\det(B^2 - I)\neq0$ 2) Suppose $\det(AB - A) = 0 \Rightarrow \det(A)=0$ or $\det(B - I) = 0$. By (1): $\det(A)\neq0$, and so I'd like to show: $\det(B^2 - I)\neq0 \Rightarrow \det(B - I)\neq0$. Suggestions?
Note that $\det(AB-A)=\det(A)\det(B-I)$. Now $\det(A)=0$ would imply $\det(B^2A-A)=0$, and $\det(B-I)=0$ would imply $\det\big((B-I)(B+I)\big)=\det(B^2-I)=0$ and hence $\det(B^2A-A)=\det(B^2-I)\det(A)=0$; either way, this contradicts the invertibility of $B^2A-A$.
If $\sin A+\sin^2 A=1$ and $a\cos^{12} A+b\cos^{8} A+c\cos^{6} A-1=0$ If $\sin A+\sin^2 A=1$ and $a\cos^{12} A+b\cos^{8} A+c\cos^{6} A-1=0$. Find the value of $2b + \dfrac {c}{a}$. My Attempt: $$\sin A+\sin^2 A=1$$ $$\sin A + 1 - \cos^2 A=1$$ $$\sin A=\cos^2 A$$ Now, $$a(\cos^2 A)^{6}+b(\cos^2 A)^{4}+c(\cos^2 A)^{3}=1$$ $$a\sin^6 A+ b\sin^4 A+c\sin^3 A=1$$ How do I proceed further?
I write it step by step: With $\sin A+\sin^2 A=1$ we have $\sin A=1-\sin^2 A=\cos^2A$ so \begin{eqnarray} && a\cos^{12} A+b\cos^{8} A+c\cos^{6} A-1=0\\ && a(\cos^2A)^6+b(\cos^2A)^4+c(\cos^2A)^3-1=0\\ && a\sin^6A+b\sin^4A+c\sin^3A-1=0\\ && a(1-\cos^2A)^3+b(1-\cos^2A)^2+c\sin A(1-\cos^2A)-1=0\\ && a(1-\sin A)^3+b(1-\sin A)^2+c\sin A(1-\sin A)-1=0\\ && a(1-3\sin A+3\sin^2A-\sin^3A)+b(1-2\sin A+\sin^2A)+c(\sin A-\sin^2 A)-1=0\\ && a-3a\sin A+3a\sin^2A-a\sin^3A+b-2b\sin A+b\sin^2A+c\sin A-c\sin^2 A-1=0\\ && a-3a\sin A+3a(1-\cos^2A)-a\sin A(1-\cos^2A)+b-2b\sin A+b(1-\cos^2A)+c\sin A-c(1-\cos^2A)-1=0\\ && a-3a\sin A+3a(1-\sin A)-a\sin A(1-\sin A)+b-2b\sin A+b(1-\sin A)+c\sin A-c(1-\sin A)-1=0\\ && a-3a\sin A+3a-3a\sin A-a\sin A+a\sin^2A+b-2b\sin A+b-b\sin A+c\sin A-c+c\sin A-1=0\\ && a-3a\sin A+3a-3a\sin A-a\sin A+a-a\sin A+b-2b\sin A+b-b\sin A+c\sin A-c+c\sin A-1=0\\ && (a+3a+a+b+b-c-1)+(-3a-3a-a-a-2b-b+c+c)\sin A=0\\ && (5a+2b-c-1)+(-8a-3b+2c)\sin A=0 \end{eqnarray} Since $\sin A=\frac{\sqrt5-1}{2}$ (the positive root of $s^2+s-1=0$) is irrational, for rational $a,b,c$ both coefficients must vanish, giving $5a+2b-c=1$ and $8a+3b=2c$.
How to design code for arbitrary low error rate with achievable code rate under Binary Erasure Channel (BEC) "I am looking for a code that can lower my error rate with a fixed code rate." Assume a binary erasure channel in which a bit is erased with probability 0.5, and no bit is ever flipped. Then an achievable rate R is defined as follows: For a channel $(\mathcal{X}, p(y|x), \mathcal{Y})$, a rate R is achievable if and only if for every ε>0 there exists an (M,n) code with (log M)/n ≥ R and λ ⩽ ε. I would like to verify that R=1/2 is an achievable rate, so I design the code: the index set is {1,2} only, and the codebook is {00, 11}, respectively. In this code, M = 2 and n = 2, and it satisfies (log M)/n ≥ R. How about the error rate? An error only occurs when the received bits are both erased, which has probability 0.5*0.5=0.25. If R = 1/2 is an achievable rate, I should be able to find another (M,n) code that has a lower error rate ε. We are sure R = 1/2 is an achievable rate because the channel capacity is 0.5 (calculated as 1 - 0.5). Does anyone have an idea about such a code? Thanks in advance!
To add to Stelios' answer (+1): there is indeed a very simple (practically trivial) code ... but only if feedback is allowed: just resend the previous symbol if the decoder finds out it was erased. In this scheme, the number of times each symbol is sent follows a geometric distribution with mean $2$ - hence it achieves the capacity. Of course, one often does not have the luxury of having feedback in the coding-decoding process. Nevertheless, it's important to know that Shannon's channel coding theorem is also valid in that case.
Ordering pairs of integers $(n, m)$ by the value of ${k_1}^n{k_2}^m$ Ok, so I have two positive integers $k_1$ and $k_2$ raised to positive integer powers $n$ and $m$ respectively. My question is how could I create a list of $(n, m)$ values such that the corresponding values of ${k_1}^n{k_2}^m$ are in ascending order. Is there a pattern which could be used to create a list of $n$ and then $m$ values, for example? Or perhaps a simple algorithm that could do the trick - I've had a go and didn't seem to get anywhere (without of course using brute force). If there is a nice solution, something else I would like to know is how this might be extended to the product of any number of integers $k$ and power values. EDIT: I've just had a go with $k$ values 2 and 3, and have found that for $n$ (the power of 2) the $j^2+j$th $n$ in the list where $j$ is a positive integer seems to be 0. Thanks :)
Here is an algorithm to generate the first $N$ numbers of the form $k_1^n k_2^m$. Clearly the first number in the list is one, taking $n=m=0$ (if the exponents must be at least $1$, simply multiply every element of the resulting list by $k_1k_2$). After this, note that every following number is $k_1$ or $k_2$ times a previous number. Also, since we generate the list in increasing order, we can multiply each element by $k_1$ and $k_2$ in increasing order as well. In this fashion, at each step, there are only two candidates for the next element in the sequence: we only need to compare these two to see which of them is smaller and advance the corresponding pointer. In runnable Python, the algorithm goes as such:

def generate_list(k1, k2, N):
    lst = [1]
    p1 = p2 = 0   # positions of the elements next to be multiplied by k1 and k2
    while len(lst) < N:
        c1, c2 = lst[p1] * k1, lst[p2] * k2   # the two candidates
        nxt = min(c1, c2)
        lst.append(nxt)
        if c1 == nxt:
            p1 += 1   # advance both pointers on ties to avoid duplicates
        if c2 == nxt:
            p2 += 1
    return lst

If you want to generate all $N$ elements in the list, this algorithm has $O(N)$ complexity and you can hardly do better asymptotically. If you just want to find the $N$th element, there might be better methods. If you are looking for a general rule, I doubt that one exists. This problem is equivalent to minimising $n \log k_1 + m \log k_2$, and there is no straightforward rule for that unless the logarithms are rational multiples of each other.
Euler to Trigonometric - Particular solution I am back to studying after some years out and I simply forgot the steps that allowed this to function. I recall this is basic, but I can't seem to crack the thing. This is a particular kind of 2nd degree differential equation - one in which the damping is neglected and thus the structure should vibrate endlessly. The textbook has: $$ A_1e^{i\lambda t}+A_2e^{-i\lambda t} $$ Which yields by using Euler's formula the following form: $$ A_1\sin(\gamma t)+A_2\sin(\gamma t) $$ I am aware of Euler's formula. What I can't do is making the first equation resemble the exponential => sine equation to consume the exponential and turn them into pure sines. It is shameful, I know, but any help would be greatly appreciated. Thanks!
You have confused $\gamma$ and $\lambda$ and must have a typo (please check closely). But assuming you fix that: Use Euler's equation, $e^{i \lambda t} = \cos (\lambda t) + i \sin (\lambda t)$ and substitute, knowing that $\cos (x) = \cos (-x)$ and $\sin (x) = -\sin (-x)$.
Conditions for M to be a Positive-definite matrix Let $M$ be a symmetric $n$ by $n$ real matrix. On Wikipedia it is said that all the eigenvalues of $M$ being positive is equivalent to $M$ being positive definite, which means that if a matrix $M$ is positive definite then $\det(M)>0$. But our teacher told us today that the converse is also true ($M$ is a symmetric positive definite matrix if and only if $\det(M)>0$), and I'm pretty sure this is wrong, because $M$ could have some negative eigenvalues and at the same time a positive determinant. Please can someone shed some light on this, give me a solid counterexample, and tell me when the converse is true. Thank you
The determinant is just one of the leading principal minors. For $M$ to be positive definite all its leading principal minors have to be positive. I think this is the easiest way to check positive definiteness. For a counterexample to your teacher's claim, take $M=\begin{bmatrix}-1&0\\0&-1\end{bmatrix}$: its determinant is $1>0$, but its first leading principal minor is $-1$, so this matrix is not positive definite (indeed both eigenvalues are negative). Just as an example from Wikipedia: $\begin{bmatrix}2&-1&0\\-1&2&-1 \\ 0 & -1 &2 \end{bmatrix}$. The first leading principal minor is just $\Delta_1 = 2 >0$. The second one is: $\Delta_2 = \det\begin{bmatrix}2&-1\\-1 &2\end{bmatrix} = 4-1=3 > 0$. And finally, the last one is just the determinant of the whole matrix: $\Delta_3 = \det \begin{bmatrix}2&-1&0\\-1&2&-1 \\ 0 & -1 &2 \end{bmatrix} = 4 > 0$
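A quick check of the leading principal minors with NumPy (a minimal sketch):

import numpy as np

M = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
minors = [np.linalg.det(M[:k, :k]) for k in (1, 2, 3)]
print(minors)   # [2.0, 3.0, 4.0] -- all positive, so M is positive definite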
Integral of monomials over the sphere Let $$ I = \int_{S^2} x^iy^jz^k dx dy dz \int_{S^1} u^l v^m du dv $$ be an integral to be computed ($S^n$ being the $n$-sphere). Can someone confirm that as soon as one of the exponents $i,j,k,l,m$ is odd, then $I$ is zero?
This is trivial: an odd power of a coordinate is an odd function and by symmetry the integral vanishes.
Why aren't all functions considered a power function of 1 and integrated using some sort of chain rule? When integrating, say, $f(x) = x^{3} - 2x$, why do we go straight to integrating each term independently? Why is it not considered an implicit function of the power one and integrated as so? I'm not sure if that's a valid feat since I'm a beginner, but I was thinking: $\int \left [ f(x) \right ]^{1} dx$ would be $\frac{f(x)^{2}}{2}$ as another example to the usual: $\int f(x)^{n} dx = \frac{f(x)^{n+1}}{n+1}$ where $n \neq -1 $. As for what's inside, the $f(x)$, as I said I'm a beginner, but I know of u-substitution and maybe we could use it and work it out from there? Is that possible or just plain wrong?
You forget the chain rule. Indeed, $$\frac d{dx}\frac{y^2}2=y\frac{dy}{dx}\ne y$$ Which is why integration is so much harder than differentiation. However, it is true that $$\int[f(x)]^nf'(x)\ dx=\frac{[f(x)]^{n+1}}{n+1}+c$$ Which should follow from u-substitution or differentiating both sides.
Dual Commutes with Base Change Let $R$ be a ring, and let $S$ be an $R$-algebra. Let $M$ be an $R$-module. Under what conditions on $R$, $S,$ and $M$ is it true that we have the following isomorphism of $S$-modules? $$S \otimes_R \operatorname{Hom}_R(M, R) \simeq \operatorname{Hom}_S(S \otimes_R M, S)$$ I know there is an obvious map from the left-hand side above to the right-hand side (namely, send a map $\phi \colon M \to R$ to the map $\operatorname{id} \otimes_R\, \phi$), but I do not know what conditions are required for there to be an inverse map. Does this have anything to do with $M$ being torsion-free as an $R$-module?
a) Your morphism $f:S \otimes_R \operatorname{Hom}_R(M, R) \to \operatorname{Hom}_S(S \otimes_R M, S)$ is in general not bijective, as shown by the example $(R=\mathbb Z,\ S=\mathbb Z/2,\ M=\mathbb Z/2)$: $$\mathbb Z/2 \otimes_\mathbb Z \operatorname{Hom}_\mathbb Z(\mathbb Z/2,\mathbb Z )=0 \to \operatorname{Hom}_{\mathbb Z/2}(\mathbb Z/2 \otimes_\mathbb Z \mathbb Z/2, \mathbb Z/2)=\mathbb Z/2$$ b) Your morphism is an isomorphism whenever at least one of the two $R$-modules $S$ or $M$ is projective of finite type: Bourbaki, Algebra, Chapter II, §5.4, Prop. 8, page 283.
Why is the Axiom of Choice not needed when the collection of sets is finite? According to Wikipedia: Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin. In many cases such a selection can be made without invoking the axiom of choice; this is in particular the case if the number of bins is finite... How do we know that we can make a selection when the number of bins is finite? How do we even know that we can make a selection from a single bin of finite elements? Then it gives an example: To give an informal example, for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate selection, but for an infinite collection of pairs of socks (assumed to have no distinguishing features), such a selection can be obtained only by invoking the axiom of choice. But how can we even make a selection out of a single pair of socks if they don't have any distinguishing features? Is there another axiom being assumed here?
The axiom of choice says precisely that if $I$ is a set and, for each $i \in I$, we have a non-empty set $X_i$, then the product $\prod_{i \in I} X_i$ is non-empty. (Elements of products can be identified with choice functions, so non-emptiness of the product is equivalent to existence of a choice function.) When $I$ is finite, this is a statement which can be proved by induction on $|I|$, which does not require choice.
What is the limit of this sequence created using Pascal's triangle? Let us use Pascal's triangle to create a sequence. We sum from the first term on the left to the term in the middle and then we take the inverse root times two. We then get that the terms are \begin{eqnarray} &&a_0 := 2 \\ &&a_1 = 2 \frac{1}{1^{1/1}} = 2\\ &&a_2 = 2 \frac{1}{(2+1)^{1/2}} = 1.154700\dots\\ &&a_3 = 2 \frac{1}{(3+1)^{1/3}} = 1.25992\dots\\ &&a_4 = 2 \frac{1}{(6+4+1)^{1/4}} = 1.098200\dots\\ &&a_5 = 2 \frac{1}{(10+5+1)^{1/5}} = 1.14869\dots\\ &&a_6 = 2 \frac{1}{(20+15+6+1)^{1/6}} = 1.0727\dots\\ &&a_7 = 2 \frac{1}{(35+21+7+1)^{1/7}} = 1.10408\dots\\ \end{eqnarray} The first row of the triangle creates $a_0$, the second creates $a_1$, and so on. Then I have my conjecture: This sequence converges to 1 But I couldn't prove it or disprove it. What I did was try to relate it to the binomial expansion: Let $0\leq x \leq 1$, so we have that $$(1+x^2)^2 = 1 + 2x^2 + x^4 \geq x^2 + 2x^2 = 3x^2$$ So we have that $$\frac{x}{x^2+1} \leq \frac{1}{\sqrt{3}}$$ And I tried to translate this to some limit using this $x$. Is there a way to solve this problem like this? Does this sequence converge?
The sum of the elements in "half a row" of Pascal's triangle is either $2^{n-1}$ (if we are talking about the $n$-th row with $n$ being odd) or $2^{n-1}+\frac{1}{2}\binom{n}{n/2}$ if $n$ is even, due to $\binom{n}{k}=\binom{n}{n-k}$. The central binomial coefficient $\binom{2n}{n}$ behaves in the following way $$ \binom{2n}{n}\sim \frac{4^n}{\sqrt{\pi n}} $$ hence you are essentially asking about $$ \lim_{n\to +\infty} 2\cdot\frac{1}{(2^{n-1}+E(n))^{1/n}} $$ that is clearly $1$.
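A small numerical experiment is consistent with the limit (a sketch; the half-row sum below runs from the middle term to the right edge, which by symmetry matches the question's construction, and logs are used to avoid huge powers):

```python
from math import comb, exp, log

def a(n):
    # half-row sum from the middle binomial coefficient to the right edge
    S = sum(comb(n, k) for k in range((n + 1) // 2, n + 1))
    return 2 * exp(-log(S) / n)   # equals 2 / S**(1/n)

for n in (10, 100, 1000):
    print(n, a(n))   # about 1.05, 1.006, 1.0006: tending to 1
```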
Find two vectors in span=$\{u_1,u_2,u_3\}$ that are not in span=$\{v_1,v_2\}$. Let $v_1=(1,1,-1)$, $v_2=(-1,1,1)$ and $u_1=(0,1,0)$ $u_2=(1,0,-1)$, $u_3=(1,1,1)$. Find two vectors in span=$\{u_1,u_2,u_3\}$ that are not in span=$\{v_1,v_2\}$. I have gotten up to the point of a RREF, but I'm stuck there somewhat. From this $\left({\begin{array}{cc|c|c} 1&-1&2&3\\ 1&1&2&1\\ -1&1&0&-1\end{array}}\right)$, I have reduced it through Gauss-Jordan elimination to $\left({\begin{array}{cc|c|c} 1&-1&2&3\\ 1&1&2&1\\ 0&0&2&2 \end{array}}\right)$. Since the system is inconsistent, we can conclude that span=$\{u_1,u_2,u_3\}$ is not in span=$\{v_1,v_2\}$. Where do I go from here?
As we want to avoid vectors that are linear combinations of $v_1$ and $v_2$, let us find what those vectors look like: $a(1,1,-1) +b(-1,1,1)=(a-b,a+b,b-a)$. This shows that we have to avoid vectors $(x,y,z)$ such that $x+z=0$. We find that $u_1,u_2$ are both vectors satisfying that condition, so we have to avoid anything which is a combination of $u_1$ and $u_2$ alone. The vectors of the form $au_1+ bu_3=(b,a+b,b)$ with $b\ne 0$ provide infinitely many examples of the kind you desire.
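A quick rank computation confirms that such vectors avoid $\operatorname{span}\{v_1,v_2\}$ (a NumPy sketch, taking $a=1$ for concreteness):

```python
import numpy as np

V = np.column_stack([(1, 1, -1), (-1, 1, 1)])   # v1, v2 as columns
for b in (1, 2):                                 # vectors a*u1 + b*u3 with a = 1, b != 0
    w = np.array([b, 1 + b, b])
    rank = np.linalg.matrix_rank(np.column_stack([V, w]))
    print(w, rank)   # rank 3: w is not a combination of v1 and v2
```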
How many integers between $10000$ and $99999$ ( inclusive) are divisible by $3$ or $5$ or $7$? How many integers between $10000$ and $99999$ (inclusive) are divisible by $3$ or $5$ or $7$ ? My Try : Total Integers between $10000$ and $99999$ are $89999$. $\left\lfloor\frac{89999}{3}\right\rfloor+\left\lfloor\frac{89999}{5}\right\rfloor+\left\lfloor\frac{89999}{7}\right\rfloor$ - $\left\lfloor\frac{89999}{3\times5}\right\rfloor-\left\lfloor\frac{89999}{3\times7}\right\rfloor-\left\lfloor\frac{89999}{5\times7}\right\rfloor$ + $\left\lfloor\frac{89999}{3\times5\times7}\right\rfloor$ = $48857$ I don't have an answer for this. Am I right here ?
The number of integers between $3$ and $4$ inclusive divisible by $3$ is not $\lfloor\frac{(4-3+1)}{3}\rfloor=\lfloor\frac{2}{3}\rfloor=0$ but is instead $\lfloor \frac{4}{3}\rfloor - \lfloor\frac{2}{3}\rfloor = 1$. In the same way, you should not be using $\lfloor \frac{90000}{7}\rfloor$; instead you should be using $\lfloor\frac{99999}{7}\rfloor - \lfloor\frac{9999}{7}\rfloor$. That is to say, the integers $n$ in the range $a\leq n\leq b$ which are divisible by $7$ are those integers in the range $1\leq n\leq b$ divisible by seven which are not in the range $1\leq n\leq a-1$.
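Both a brute-force count and the corrected inclusion-exclusion give the same total (a sketch; `N` is my shorthand for the count of multiples in the range):

```python
brute = sum(1 for n in range(10000, 100000)
            if n % 3 == 0 or n % 5 == 0 or n % 7 == 0)

def N(d):
    # multiples of d with 10000 <= n <= 99999
    return 99999 // d - 9999 // d

incl_excl = N(3) + N(5) + N(7) - N(15) - N(21) - N(35) + N(105)
print(brute, incl_excl)   # 48857 48857
```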
Proving there is a unique line m so that l ∩ m = P In my geometry class I'm supposed to prove the following theorem. Given line l and point P ∈ l, there is a unique line m so that l ∩ m = P and the angles formed at the intersection are all right. I was given the hint that I will need to prove two statements: first that the line m exists, and secondly that m is unique. However, I'm confused how I would show that line m exists. This is what I'm thinking.... Let A and B be distinct points that are not on line l. Then by definition A and B create line m which is unique. I don't really understand how I'm supposed to prove this; could someone please explain or show how I'm supposed to do this proof.
Since you have $P \in m$, you have a point lying on the required line $m$; and since the angle between $m$ and $l$ is a right angle, you have the slope of the line $m$. Since there is a unique line with a given slope passing through a given point, you can claim that the line $m$ is unique.
within sequence of $n^2$ there need not be a monotonic subsequence of length $n+1$ I am reading the book "Ramsey theory on the integers" by Bruce M. Landman and Aaron Robertson. There they proved that within any sequence of $n^2+1$ numbers there exists a monotonic subsequence of length $n+1$ by using the pigeonhole principle. But I am trying to prove the following: given a sequence of only $n^2$ numbers, there need not be a monotonic subsequence of length $n+1$. I found the following sequence for $n=2$: $1, 0, 6, -1$ But I can't find one even for $n=3$, and I am not able to generalize for arbitrary $n$. Could anyone help, please?
$3,6,9,2,5,8,1,4,7$ right? This should work. Generalisation should be straightforward, and if it isn't I'd be happy to explain. Just comment. Okay, for general $n$ take $n,2n,3n,4n,\cdots,n^2,n-1,2n-1,3n-1,n^2-1,n-2,\cdots,\cdots,1,n+1,2n+1,3n+1,\cdots,n^2-n+1$ EDIT: Take $n$ APs, each of length $n$ with common difference $n$. The first starts at $n$, the second at $n-1$, the third at $n-2$ etc all the way down to 1. Then we concatenate them.
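The construction is easy to verify programmatically (a sketch; the quadratic-time subsequence checker is my own helper):

```python
def longest_monotone(seq):
    def lis(s):  # longest strictly increasing subsequence, O(n^2)
        best = [1] * len(s)
        for i in range(len(s)):
            for j in range(i):
                if s[j] < s[i]:
                    best[i] = max(best[i], best[j] + 1)
        return max(best)
    return max(lis(seq), lis([-v for v in seq]))

n = 4
seq = [j * n - i for i in range(n) for j in range(1, n + 1)]
print(seq)                    # [4, 8, 12, 16, 3, 7, 11, 15, 2, 6, 10, 14, 1, 5, 9, 13]
print(longest_monotone(seq))  # 4 = n, so no monotone subsequence of length n+1
```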
$Why$ is the axis of symmetry of a parabola $-{b\over 2a}$ and ${not}$ ${b\over 2a}$? I'm working on a lesson plan for my students regarding completing the square for a parabola, and I've done the following: $$\begin{align}ax^2+bx+c &= a\left(x^2+{b\over a}x\right) + c \\ & = a\left(x^2+{b\over a}x+{b^2\over 4a^2}-{b^2\over 4a^2}\right)+c \\& = a\left(x^2+{b\over a}x+{b^2\over4a^2}\right) - a{b^2\over4a^2} + c \tag{$*$}\\ &= a\left(x^2-2hx+h^2\right)-ah^2+c \tag{Let $h=-{b\over 2a}$}\\ &=a(x-h)^2 + c-ah^2 \\ &= a(x-h)^2+k.\tag{Let $k=c-ah^2$} \end{align}$$ This gives us the standard form of a parabola we all know and love to pick out the vertex of the parabola, but how am I going to justify the selection of $\displaystyle h = -{b\over2a}$ over $\displaystyle h={b\over 2a}$? The selection is to give us the axis of symmetry $\displaystyle x=-{b\over2a}$, of course, but how do I explain why the negative gives us the axis of symmetry? I found a demonstration here, but for my purposes it seems like a chicken-and-egg situation: I haven't discussed the quadratic formula yet, which I will be proving the quadratic formula as part of the lecture, but I need to discuss completing the square first. So it seems kind of silly to use the quadratic formula to prove a result that will be used to prove the quadratic formula. I know that by definition, the axis of symmetry is the line where any two $x$ values of the quadratic equation $ax^2+bx+c$ have the same $y$ value. But I'm stuck with trying to find these $x$-values otherwise. Do I need to reorganize my lesson plan?
The stationary point occurs at $x=-\frac{b}{2a}$. Therefore demonstrate that $$f(-\frac{b}{2a}+\epsilon)= f(-\frac{b}{2a}-\epsilon)$$ for all $\epsilon$.
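Here is a way to demonstrate that symmetry by direct expansion, with no quadratic formula needed (a SymPy sketch; by hand the difference is $4ax_0\epsilon+2b\epsilon=-2b\epsilon+2b\epsilon=0$):

```python
import sympy as sp

a, b, c, e = sp.symbols('a b c epsilon')
f = lambda t: a * t**2 + b * t + c
x0 = -b / (2 * a)

# f is symmetric about x0: the difference expands to exactly 0
print(sp.expand(f(x0 + e) - f(x0 - e)))  # 0
```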
Can this symmetric matrix be an orthogonal matrix? So the question is to prove whether this matrix M is orthogonal? $$ M = \begin{bmatrix} a & k & k \\ k & a & k \\ k & k & a \\ \end{bmatrix} $$ My attempt to find the inverse of M : $$det(M)=(a+2k)(a-k)(a-k)$$ The inverse of $M$ is : $$ \begin{matrix} (a+k)/((a-k)(a+2k)) & (-k)/((a-k)(a+2k)) & (-k)/((a-k)(a+2k))\\ (-k)/((a-k)(a+2k)) & (a+k)/((a-k)(a+2k)) & (-k)/((a-k)(a+2k))\\ (-k)/((a-k)(a+2k)) & (-k)/((a-k)(a+2k)) & (a+k)/((a-k)(a+2k))\\ \end{matrix} $$ $M$ is orthogonal if $M^T = M^{-1}$ So I end up with a system of 2 equations to solve : $a=\dfrac{a+k}{(a-k)(a+2k)}$ and $k=\dfrac{-k}{(a-k)(a+2k)}$ and now I am stuck. I would really appreciate feedback about whether my steps are all correct and, if yes, what to do next?
So the question is to prove whether this matrix M is orthogonal? It's not clear exactly what you mean by this. Perhaps your question is: Is $M$ orthogonal for all $a,k \in \Bbb R$? The answer to this is clearly no. For instance: if $a = k$, then $M$ fails to be invertible, let alone orthogonal. It is also notable that orthogonal matrices have a determinant of $\pm 1$. Perhaps your question is: For which $a,k \in \Bbb R$ is $M$ orthogonal? Note that $M$ is orthogonal if and only if $MM^T = I$. However, $M$ is symmetric, so $M = M^T$, and the above can be rewritten as $M^2 = I$. To that end: $$ M^2 = \pmatrix{a^2 + 2k^2 & 2ak + k^2 & 2ak + k^2 \\ \vdots & \ddots} $$ So, our matrix will be orthogonal if and only if $M^2$ is the identity matrix, which is to say that $$ a^2 + 2k^2 = 1\\ 2ak + k^2 = 0 $$ The last equation can be written as $k(2a + k) = 0$. If $k = 0$, then the first equation becomes $a^2 = 1$ so that $a = \pm 1$ are the values for which $M$ is orthogonal. If $k = -2a$, then $9a^2 = 1 \implies a = \pm 1/3$. So, we have the additional solutions $a=1/3,k=-2/3$ and $a = -1/3, k = 2/3$. This is all made much easier using eigenvalues, and noting that $M$ has the form $$ M = k xx^T + (a - k)I $$ where $x$ is the column vector of $1$s and $I$ is the identity matrix.
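As a numerical sanity check of the four solutions (a NumPy sketch, building $M$ as $kxx^T+(a-k)I$):

```python
import numpy as np

for a, k in [(1.0, 0.0), (-1.0, 0.0), (1/3, -2/3), (-1/3, 2/3)]:
    M = k * np.ones((3, 3)) + (a - k) * np.eye(3)
    print(a, k, np.allclose(M @ M.T, np.eye(3)))  # True for all four pairs
```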
Sum of some natural numbers equal to $n$ In how many ways can we have some natural numbers whose sum is equal to $n$ and none of which is greater than $k$, for given $n$ and $k$? NOTE: We don't know the number of the elements. Can anyone help me with this problem? I can't find anything on the internet. For example, for $n = 5$ and $k = 2$ the answer is $8$: 2 + 2 + 1, 2 + 1 + 2, 1 + 2 + 2, 1 + 1 + 1 + 2, 1 + 1 + 2 + 1, 1 + 2 + 1 + 1, 2 + 1 + 1 + 1, 1 + 1 + 1 + 1 + 1
How about using generating functions? The number of ways to add $m$ positive integers each of which is less than or equal to $k$ so that their sum is $n$ is the coefficient of $x^n$ in $(x+x^2+\ldots+x^k)^m$. So the coefficient of $x^5$ in $\sum_{m=1}^{5} (x+x^2)^m$ is the answer for your example, which is 8.
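The coefficient extraction can be phrased as a short dynamic program (a sketch; `compositions` is my own helper name):

```python
def compositions(n, k):
    # number of ordered sums of positive integers, each at most k, totalling n
    c = [1] + [0] * n
    for total in range(1, n + 1):
        c[total] = sum(c[total - j] for j in range(1, min(k, total) + 1))
    return c[n]

print(compositions(5, 2))  # 8, matching the example in the question
```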
Absolute value of differentiable function Given a differentiable function $f:\mathbb{R}\to\mathbb{R}$, find the points where $|f|$ is not differentiable. I think it's not differentiable at all zeros of $f$ and differentiable everywhere else. Is that true? Does it suffice to show that the left and right derivatives are different at these points?
In order to show a function is not differentiable at a point, it does suffice to show that the left and right derivatives are different at that point. In general that is not an equivalent condition, but in the case of the absolute value of a differentiable function, that is the only thing that can go wrong. The problem is not just when $f(a) = 0$, but when $f$ crosses the $x$ axis at nonzero slope at $x=a$. Here is an exhaustive list of cases, given $a\in\mathbb R$: * *$f(a)\neq 0$. You can use continuity to find a neighborhood of $a$ on which $|f(x)|=f(x)$, or on which $|f(x)|=-f(x)$, depending on the sign of $f(a)$. *$f(a) = 0$ and $f'(a)\neq 0$. You can use the definition of the derivative to show the left and right derivatives of $|f|$ at $a$ are unequal. *$f(a) = 0$ and $f'(a) = 0$. You can use the definition of the derivative to show that $|f|$ is differentiable at $a$ with $|f|'(a) = 0$.
A function whose domain is a singleton is always continuous. How? According to a text: Suppose f is a function defined on a closed interval [a, b]; then for f to be continuous, it needs to be continuous at every point in [a, b], including the end points a and b. Continuity of f at a means the right hand limit of f at a equals f(a), and continuity at b means the left hand limit of f at b equals f(b), because the left hand limit of f at a and the right hand limit of f at b have no meaning. As a consequence of this definition, if f is defined only at one point, it is continuous there, i.e., if the domain of f is a singleton, f is a continuous function. How does the fact that a function whose domain is a singleton is continuous follow from the above definition? Can someone explain, and please don't use the epsilon-delta definition of limit; use the basic or old definition, or just explain theoretically.
abstract mathematical answer: The most fundamental definition of continuity is "the preimage of open sets is open". And a singleton set has trivial topology so there is nothing non-open there. intuitive answer (hopefully): continuity of a function $y=f(x)$ at a point $x$ means: IF you approach $x$ in any way (either from right, left, or somehow jumping around), THEN you will always also approach the correct $y$ value. But if the domain of your function is only a single point, it is impossible to approach $x$ from anywhere. Therefore the "IF" part of the condition can never happen, so there is no condition to check. Therefore any such function is continuous.
A relationship between Fourier transform and convolution. If $f , g , \hat{f} \hat{g} \in L^1(\mathbb{R})$, I have to prove that $$ \int_{\mathbb{R}} \hat{f}(y) \hat{g}(y) e^{2 \pi i x y} \, dy = (f*g)(x) $$ for almost all $x \in \mathbb{R}$. I use the next definition for Fourier transform and for the convolution: $$ \hat{f}(x) = \int_{\mathbb{R}} f(y) e^{- 2 \pi i x y} \, dy \quad \mbox{ and } \quad (f*g)(x) = \int_{\mathbb{R}} f(x - y) g(y) \, dy\mbox{.} $$ I tried to use Fubini and change of variable theorems and I didn't obtain any result. Maybe I didn't change correctly the variables when I tried to show the equality using change of variable theorem. Thank you very much.
You can apply the definition: for the Fourier transform of $\displaystyle(f*g)(x) = \int_{\mathbb{R}} f(x - t) g(t) dt$ we have $$\hat{(f*g)}(y) = \int_{\mathbb{R}}\,\left(\int_{\mathbb{R}} f(x - t) g(t) dt\right)e^{- 2 \pi i x y} \, dx=\int_{\mathbb{R}}\,\int_{\mathbb{R}} f(x - t) g(t) e^{- 2 \pi i x y} \,dt\, dx$$ Take the substitution $x-t=u$: $$\int_{\mathbb{R}}\,\int_{\mathbb{R}} f(u) g(t) e^{- 2 \pi i (t+u) y} \,du\, dt=\int_{\mathbb{R}} f(u) e^{- 2 \pi i u y} \,du \int_{\mathbb{R}} g(t) e^{- 2 \pi i t y} \,dt =\hat{f}(y) \hat{g}(y)$$
Differential equation maybe tricky I had a problem with part 2. The solution didn't come out in the form of y in terms of x; it came out as a relation between x and y, in contrast to part 1). Is that sufficient? If not, then what is the solution of this differential equation: 2) $(1+x)ydx=(y-1)xdy$?
It is absolutely fine to leave it in an implicit form (Your solution is correct). Otherwise, you'd have to express it in terms of the Lambert W function to obtain an explicit solution. Here is the definition given by Wikipedia: Definition: In mathematics, the Lambert-W function, also called the omega function or product logarithm, is a set of functions, namely the branches of the inverse relation of the function $f(z) = ze^z$ where $e^z$ is the exponential function and $z \in \mathbb{C}$. If you insist on an explicit solution, here is how to do this: Let's start from the solution you obtained: $$ce^{y-x}=xy$$ Let $u=-y$. Then: $$ce^{-u-x}=-xu$$ Taking reciprocals on both sides: $$\frac{e^{x+u}}{c}=-\frac{1}{xu}$$ One can rearrange this to obtain: $$ue^u=-\frac{c}{xe^{x}}$$ Now, notice that we may now apply the definition of the Lambert W! Doing so gives: $$u=W\left(-\frac{c}{xe^{x}}\right)$$ If we substitute back, and simplify arbitrary constants, we obtain the general solution: $$\bbox[5px,border:2px solid #C0A000]{y(x)=-W\left(\frac{k}{xe^x}\right)}$$
Find probability of addition of random variables Suppose you toss a coin four times. The sample space Ω = {HHHH,HHHT,HHTH,...,TTTT} contains 16 outcomes and you should assume each outcome is equally likely. Let X be the Binomial random variable that corresponds to the number of heads in an outcome, e.g., X(HTHT) = 2. Let Y be the Bernoulli random variable that evaluates to 1 if there is an even number of heads in the outcome, e.g., Y(HHHT) = 0 and Y(HTHT) = 1. Let Z = X + Y, e.g., Z(HTHT) = X(HTHT)+Y(HTHT) = 2+1 = 3. What are the values of: P(Z=0) P(Z=1)...P(Z=5) Let's talk about Z=0. In order for Z=0 to be true, X=0 and Y=0. So then, to get P(Z=0), do I have to find the intersection of X=0 and Y=0 or do I add P(X=0) and P(Y=0)?
Remember $Y= \begin{cases} 0 &:& X\in \{1,3\} \\ 1 &:& X\in\{0,2,4\}\end{cases}$ so $$Z=\begin{cases}1 &:& X\in\{0,1\}\\3 &:& X\in\{2,3\}\\ 5 &:& X\in\{4\}\end{cases}$$
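Enumerating all $16$ equally likely outcomes confirms this (a short sketch):

```python
from itertools import product
from collections import Counter

counts = Counter()
for outcome in product('HT', repeat=4):
    X = outcome.count('H')          # number of heads
    Y = 1 if X % 2 == 0 else 0      # 1 exactly when the head count is even
    counts[X + Y] += 1

print({z: f'{c}/16' for z, c in sorted(counts.items())})
# {1: '5/16', 3: '10/16', 5: '1/16'}
```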
Special values of $j$-invariant Let $j(\tau)$ be Klein's absolute invariant defined for $\tau \in \mathbb{H}$ by $$j(\tau) = q^{-1} + 744 + 196884q + 21493760q^2 + 864299970q^3 + \cdots$$ with $q := e^{2\pi i\tau}$. Are there any known special values of $j(\tau)$ for which $\textrm{Re}(\tau)$ is irrational? Wikipedia has a list of special values but in all of those cases $\textrm{Re}(\tau)$ is rational.
Since it wasn't specified that $\tau$ need to be an algebraic number, then yes, there are infinitely many special values of $j(\tau)$ where $\Re(\tau)$ is irrational. However, $\tau$ has a hypergeometric closed-form. It is known that the equation, $$j(\tau) = n$$ can be solved for $\tau$ in terms of the hypergeometric function. Using Method 4, we find, $$\tau = \frac{_2F_1\big(\tfrac16,\tfrac56,1,1-\alpha\big)}{_2F_1\big(\tfrac16,\tfrac56,1,\alpha\big)}\sqrt{-1}$$ where $$\alpha=\frac{1+\sqrt{1-\frac{1728}n}}{2}$$ For example, negating the year $n=-2017$, then, $$\tau \approx -0.273239 + 0.6868913\,i$$ such that, $$j(\tau)=-2017$$
What is the intuition behind mapping one range of numbers onto another range? If I have a range $x \to y$ and I want to map it to $x' \to y'$, what is the logic behind the mapping process? I found the formula online as: $$R = (y' - x') / (y - x)$$ $$\text{output} = (\text{input} - x) * R + x'$$ I understand that $R$ is the ratio of both ranges, but I don't understand the second part where we subtract $x$ from the input? Why do we do this, I can't seem to wrap my head around this.
As per the hint given by the comments, I managed to derive the equation by using the point slope formula. I found the slope with the two points as described by dxiv and the rest smoothly followed. EDIT: The real confusion was actually the part where I didn't realize that the range could be thought of as being $$f(a) = a'$$ $$f(b) = b'$$ Once I realized this it all came together and made more sense. Nothing really beats understanding how to derive the formula instead of just remembering it.
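For what it's worth, here is the formula as a small function (a sketch; `remap` is my own name for it). Subtracting $x$ rebases the input to an offset from the start of the old range, the ratio $R$ rescales that offset, and adding $x'$ shifts it into the new range.

```python
def remap(value, x, y, x_new, y_new):
    r = (y_new - x_new) / (y - x)   # ratio of the two range lengths
    return (value - x) * r + x_new  # rebase, rescale, then shift

print(remap(5, 0, 10, 100, 200))   # 150.0: the midpoint maps to the midpoint
print(remap(0, 0, 10, 100, 200))   # 100.0: endpoints map to endpoints
```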
What is wrong in my solution? 2nd order ODE I have an ODE, however I can't find the solution as it's given in the book. \begin{align} \begin{cases} y''(x)=f(x), \qquad 0<x< 1 \\ y(0)=y(1)=0 \end{cases} \end{align} My solution: \begin{align} y'(x)&= \int f(x) \, \mathrm{d}x=F(x)+C_1 \\ y(x)&= \int (F(x)+C_1)\, \mathrm{d}x=\int F(x) \, \mathrm{d}x+C_1x + C_2 \end{align} The solution: \begin{align} y'(x)&=\int_0^x f(y) \, \mathrm{d} y +C_1 \\ y(x)&=\int_0^x \int_0^z f(y) \, \mathrm{d}y \mathrm{d}z+ C_1x+C_2 \end{align} Questions: Why definite integrals in the solution? Where are the limits from? Why is there a double integral in the solution? Thanks!
You have no initial condition for $y'$ (I mean $y'(0)$ isn't given), so all you can do for the first integration is an indefinite integral, $$y'(x)=\int f(x)\,dx+C=F(x)+C.$$ Now for the second integration, you can use the fact that $y(0)=0$ and use a definite integral $$y(x)=0+\int_0^xy'(u)\,du=\int_0^x(F(u)+C)\,du=\int_0^xF(u)\,du+Cx.$$ Then, plugging in the other boundary condition $y(1)=0$, $$\int_0^1F(u)\,du+C=0$$ and $$y(x)=\int_0^xF(u)\,du-\left(\int_0^1F(u)\,du\right)x.$$ If you don't want to use the intermediate $F$, then a double definite integral can do, such as $$y(x)=\int_{u=0}^x\int_{t=u_0}^u f(t)\,dt\,du-\left(\int_{u=0}^1\int_{t=u_0}^u f(t)\,dt\,du\right)x,$$ where $u_0$ is arbitrary.
Why is the equation $u_t + (a u)_x = 0$ conservative but $u_t + a u_x = 0$ isn't? Consider the following advection equations: $$u_t + a u_x = 0$$ $$u_t + (au)_x = 0$$ I have heard the second one described as being conservative, yet the first one isn't. Why is this?
Physically speaking, "conservative" means that the total mass of $u$ is conserved in time. To see this, suppose $u : [0,T] \times [0,1] \to \mathbb R$ with $u(t,0) = u(t,1)=0$ for all $t \in [0,T]$ and define the mass of $u$ at time $t$ by $$M(t)= \int_0^1 u(t,x)dx.$$ If $u$ satisfies the second (conservative) equation, you get $$ M'(t) = \int_0^1 \partial_t u(t,x)dx = -\int_0^1 \partial_x(a(t,x)u(t,x))dx = u(t,0)a(t,0)-u(t,1)a(t,1) = 0$$ Therefore the mass of $u$ is constant in time, so $u$ is conserved. In the first equation things go wrong because you get, after an integration by parts, $$M'(t) = -\int_0^1 a(t,x) \partial_x u(t,x)dx = \int_0^1 u(t,x) \partial_x a(t,x)dx + u(t,0)a(t,0)-u(t,1)a(t,1)$$ $$= \int_0^1 u(t,x) \partial_x a(t,x)dx,$$ and so $M$ is in general not constant in time: the equation is not conservative.
Digits of irrationals I've been studying floating point arithmetic and I've read somewhere that numbers with infinitely many decimal digits without recursion are irrational. But since we can't know all the digits of such a number then how did we come to the conclusion that its digits have no recursion? Does it have anything to do with formulae used to compute the $n$-th digit of a number? (This is a question simply out of curiosity.)
I've read somewhere that numbers with infinitely many decimal digits without recursion are irrational. This property can indeed be used to determine whether some numbers are irrational. For instance, the prime constant, defined by $$ \rho =\sum _{{p}}{\frac {1}{2^{p}}}=\sum _{{n=1}}^{\infty }{\frac {\chi _{{{\mathbb {P}}}}(n)}{2^{n}}}, $$ where the sum goes over all primes $p$ and ${\displaystyle \chi _{\mathbb {P} }}$ is the characteristic function of the primes, has a $1$ as its $k^{\rm th}$ digit if $k$ is prime, and $0$ otherwise. It can be shown using an argument similar to yours that $\rho$ is irrational. Another example is the constant whose every $2^{k^{\rm th}}$ digit is $1$ and every other is $0$: $$ 0.0101000100000001\ldots, $$ which is also irrational. But since we can't know all the digits of such a number then how did we come to the conclusion that its digits have no recursion? As you rightly observe, in other cases irrationality is proved by other means. For instance, the irrationality of $\sqrt 2$ is proved by assuming first that it is rational, i.e., equal to some $\frac ab$, where $a\neq b$ and $a,b>1$ are positive integers, and coming to a contradiction (for a nice presentation see here). A slightly more difficult proof using different techniques is used to show that the number denoted by $\zeta(3)$, where $\zeta$ is the Riemann zeta function, is irrational.
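For a concrete feel, the first binary digits of the prime constant are easy to generate (a sketch using SymPy's `isprime`):

```python
from sympy import isprime

# binary digits of rho: the k-th binary digit is 1 exactly when k is prime
bits = ''.join('1' if isprime(k) else '0' for k in range(1, 31))
rho = sum(2.0**-k for k in range(1, 31) if isprime(k))
print(bits)  # 011010100010100010100010000010
print(rho)   # approximately 0.41468...
```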
Proof of this property: $x=0$ if and only if $|x| < z$ for all $z>0$ I'm trying to prove a property that we've seen in our analysis lecture, but I have a little doubt about my way of doing it. The property is the following: $$x=0 \Leftrightarrow |x| < z \text{ for all } z>0$$ ...and we work with real numbers only. I tried to prove it this way: 1) First step: $x=0 \Rightarrow |x|<z \text{ for all } z>0$ We know by a property of absolute values that $x=0 \Leftrightarrow |x|=0$, so $x=0$ means that $|x|=0$, which is smaller than $z$ for all $z>0$. Thus, $x=0 \Rightarrow |x|<z \text{ for all } z>0$. 2) Second step: $|x|<z \text{ for all }z>0 \Rightarrow x=0$ Let's suppose, on the contrary, that $|x|<z \text{ for all }z>0 \Rightarrow x \ne 0$, i.e. $x<0$ or $x>0$. If $x<0$, then $|x|=-x < z$. This must be true for all $z \in \mathbb{R}$, but if $x=-2$, for example, then there exists $z$ (for example: $z=1$) such that the inequality above doesn't hold (we would have 2 < 1). So it's a contradiction. If $x>0$, then $|x|=x < z$. Again, if $x=2$ and $z=1$, we have a contradiction. As $x$ cannot be smaller or higher than $0$ without leading to a contradiction, the only possibility is that $x=0$. I'm not sure about the second part of my proof, as it doesn't look as formal as usual... we usually only use axioms and properties derived from those axioms, so I'm not very happy with using counterexamples here. Does anyone know if there is another and nicer way to prove it?
Your proof is flawed because you cannot prove a general fact from listing examples. Also the negation of "$|x| < y$ for all positive $y$" is "there exists some positive $y$ such that $|x| \geq y$", whereas you have its negation written as "$|x| < z$ for all $z \implies x \neq 0$, i.e. $x < 0$ or $x > 0$". Here is how you could go about doing this: Suppose first that $x = 0$. Then $|x| = 0$, which is smaller than any positive real number. You attempted to prove the next part by contradiction, which could actually work, but the cleanest way is by contrapositive. Here is how that might work: Suppose that $x \neq 0$. Then $|x| \neq 0$ and $|x| \geq 0$, so $|x| > 0$. In particular there is a positive number which is less than or equal to $|x|$, namely $|x|$ itself. By contrapositive, if $x \in \mathbb{R}$ is such that $|x| < y$ for every positive $y \in \mathbb{R}$, then $x = 0$.
Discriminant of an almost cyclotomic field Let $k$ be a positive even square-free integer and let $$L:=\mathbb{Q}(\zeta_k,\sqrt{2}).$$ This is the maximal abelian extension of $\mathbb{Q}$ contained in $\mathbb{Q}(\zeta_k,2^{\frac{1}{k}})$. I would like to ask whether there is an explicit formula for the discriminant of $L$ only in terms of $k$, as the one that is given here for example.
If $k$ is squarefree and even then $L = \mathbf{Q}(\zeta_{k/2}, \sqrt{2})$, hence $L$ is the compositum of the linearly disjoint (over $\mathbf{Q}$) fields $K_1 = \mathbf{Q}(\zeta_{k/2}) = \mathbf{Q}(\zeta_k)$ and $K_2 = \mathbf{Q}(\sqrt{2})$. Since the discriminants are relatively prime (only odd primes are ramified in $K_1$ and $K_2$ is ramified at 2), there is an explicit formula for the discriminant of the compositum, namely $D_L = (D_{K_1})^{[K_2:\mathbf{Q}]} (D_{K_2})^{[K_1:\mathbf{Q}]}$, cf. Proposition I.2.11 in Neukirch's Algebraic Number Theory. To be more explicit, the discriminant of $L$ is $$D_L = \frac{(8k^2)^{\phi(k)}}{\displaystyle \prod_{p \mid k} p^{2\phi(k)/(p-1)}}. $$
Adjoint of a symmetric operator acting on the complement of the graph Let $A:\mathcal D(A)\to\mathcal H$ be a closed symmetric (densely defined) operator on a Hilbert space. In an online lecture the lecturer remarks that the following is trivial: Viewing $\mathrm{Gr}(A)$ as a subspace of the graph of $A^*$, the map $\sigma : \mathrm{Gr}(A)^\perp\to\mathrm{Gr}(A)^\perp, x\mapsto -iA^*x$ is a self-adjoint involution. How can one see this? I can see that if $x$ lies in the orthogonal complement of the graph of $A$ (as a subspace of $\mathrm{Gr}(A^*)$ ) that then $x$ also lies in $\mathcal D(A^*)$. But I do not see any further steps, which is dispiriting given the remark that this is supposedly trivial. The context is that this step was apparently an important insight by von Neumann about the existence of self-adjoint extensions of symmetric operators.
The orthogonal complement of $\mathcal{G}(A)$ in $\mathcal{G}(A^*)$ consists of all $(y,A^*y)\in\mathcal{G}(A^*)$ such that $$ \langle (x,Ax),(y,A^*y)\rangle_{X\times X} = 0, \;\;\; x\in\mathcal{D}(A) \\ (x,y)_{X}+(Ax,A^*y)_{X} = 0,\;\;\; x\in\mathcal{D}(A) \\ (Ax,A^*y)_{X} = -(x,y)_{X},\;\;\; x\in\mathcal{D}(A). $$ The last relation is equivalent to $A^*y\in\mathcal{D}(A^*)$ and $(A^*)^2y=-y$. That is, $y\in\mathcal{D}((A^*)^2)$ and $(A^*)^2y=-y$. So, $$ \mathcal{G}(A)^{\perp}\cap\mathcal{G}(A^*)=\{ (y,A^*y)\in X\times X : y\in\mathcal{N}((A^*)^2+I)\}. $$ You can see how applying $A^*$ to both coordinates of $(y,A^*y)\in\mathcal{G}(A)^{\perp}\cap\mathcal{G}(A^*)$ defines a map $J$ such that $J^2=-I$: $$ J(y,A^*y) = (A^*y, (A^*)^2y)=(A^*y,-y) \\ J(A^*y,-y) = ((A^*)^2y,-A^*y) = (-y,-A^*y)=-(y,A^*y) \\ \therefore J^2 = -I. $$ You can check that $iJ$ is then selfadjoint.
Two different expansions of $\frac{z}{1-z}$ This is exercise 21 of Chapter 1 from Stein and Shakarchi's Complex Analysis. Show that for $|z|<1$ one has $$\frac{z}{1-z^2}+\frac{z^2}{1-z^4}+\cdots +\frac{z^{2^n}}{1-z^{2^{n+1}}}+\cdots =\frac{z}{1-z}$$and $$\frac{z}{1+z}+\frac{2z^2}{1+z^2}+\cdots \frac{2^k z^{2^k}}{1+z^{2^k}}+\cdots =\frac{z}{1-z}.$$ Justify any change in the order of summation. [Hint: Use the dyadic expansion of an integer and the fact that $2^{k+1}-1=1+2+2^2+\cdots +2^k$.] I don't really know how to work this through. I know that $\frac{z}{1-z}=\sum_{n=1}^\infty z^n$ and each $n$ can be represented as a dyadic expansion, but I don't know how to progress from here. Any hints solutions or suggestions would be appreciated.
Hint for the first sum. Note that each positive integer $n$ can be written in a unique way as the product of a power of $2$, $2^k$, and an odd number $(2j+1)$. Hint for the second sum. Note that if $n=2^k(2j+1)$ then the coefficient of $z^n$ of the left-hand side is $$-1-2-2^2-\cdots -2^{k-1}+2^{k}.$$
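Before proving the identities, one can test both numerically (a sketch; with $|z|<1$ the tails vanish extremely fast, so 40 terms are far more than float precision needs):

```python
z = 0.5
target = z / (1 - z)

s1 = sum(z**(2**n) / (1 - z**(2**(n + 1))) for n in range(40))
s2 = sum(2**k * z**(2**k) / (1 + z**(2**k)) for k in range(40))
print(s1, s2, target)  # all three equal 1.0 to machine precision
```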
How do I add the terms in the binomial expansion of $(100+2)^6$? So, I stumbled upon the following question. Using binomial theorem compute $102^6$. Now, I broke the number into 100+2. Then, applying binomial theorem $\binom {6} {0}$$100^6(1)$+$\binom {6} {1}$$100^5(2)$+.... I stumbled upon this step. How did they add the humongous numbers? I am really confused. Kindly help me clear my query.
$10^{12} + 12\times10^{10} + 6\times10^9 + 16\times10^7 + 24\times10^5 + 192\times10^2 + 64$ $= 10^{12} + 10^{11} + 2\times10^{10} + 6\times10^9 + 10^8 + 6\times10^7 + 2\times10^6 + 4\times10^5 + 10^4 + 9\times10^3 + 2\times10^2 + 6\times10^1 + 4\times10^0= 1126162419264$
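The whole computation is a few lines to check (a sketch):

```python
from math import comb

# the seven terms of the binomial expansion of (100 + 2)^6
terms = [comb(6, j) * 100**(6 - j) * 2**j for j in range(7)]
print(terms)       # [1000000000000, 120000000000, 6000000000, 160000000, 2400000, 19200, 64]
print(sum(terms))  # 1126162419264
print(102**6)      # 1126162419264
```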
Show that if $ab \equiv ac$ mod $n$ and $d=(a,n)$, then $b \equiv c$ mod $\frac{n}{d}$ What I know so far: We know by the definition of congruence that $n$ divides $ab-ac$. So, there exists an integer $k$ such that $a(b-c)=kn$, and since $d=(a,n)$ we know that $a=ds$ and $n=dt$ from some integers $s$ and $t$. Then substituting for $a$ and $n$, we see that $ds(b-c)=k(dt)$.
Let $\,x=b\!-\!c,\ $ and write $(u,v)$ for $\gcd(u,v)$. Then $\ n\mid ax\!\iff\! n\mid ax,nx$ $\iff\! n\mid(ax,nx)=(a,n)x\!\iff\! \dfrac{n}{(a,n)}\mid x.$ (The first step uses that $n\mid nx$ trivially, the second the gcd distributive law $(ax,nx)=(a,n)x$, and the last that $n\mid dx\iff \frac nd\mid x$ when $d\mid n$.)
Metric on sphere. In the book titled Space, time, and gravity by Robert Wald, I found the following, and I am quoting him: "the metric of the sphere is $ds^2=R^2[(d\psi)^2+\cos^2\psi(d\phi)^2].$ 1- where $ds$ denotes the distance between two nearby points, $d\psi$ their difference in latitude, $d\phi$ their difference in longitude, and $R$ the radius of the sphere." 2- Is the above formula correct? I think the cosine should be replaced by a sine. 3- Also, is the exponent 2 on $ds$ a square, or just the symmetric product $ds\,ds$? I am confused why he squared the other two quantities, i.e. $(d\psi)^2$; why not just write the metric the normal way? 4- And is there a way to explain to a layperson what the two in $ds^2$ is without confusing them that it's a square? 5- If I tell you, for example, that between two points the difference in longitude and in latitude is $\pi$, and the radius is 3, is the distance between the two points then $18\pi^2$, since from this information we get $ds^2= 9[\pi^2+( \cos^2(\pi) ) \pi^2]=18\pi^2$? Thank you.
* *"Nearby" here means "infinitesimally nearby", or "close enough that the deviation from flat space doesn't matter". Mathematically, it means an implied limit to zero. *The formula is correct. Note that latitude is zero on the equator and maximal at the north pole, with latitude of the north pole being $\pi/2$. You are probably more familiar with the spherical coordinates commonly used in physics, where instead of the latitude $\psi$ (angle from the equator), the angle from the north pole, $\theta=\frac\pi2-\psi$, is used. For spherical coordinates, you indeed get a sine, as $\cos\psi=\cos(\frac\pi2-\theta)=\sin \theta$. *The exponent is square (but note my remark on point 1). Basically, the formula is Pythagoras: When looking at sufficiently small distances, you cannot distinguish the sphere from an Euclidean plane (again, mathematically this informal description is formalized by the limit process), and therefore you can apply the Pythagoras rule: Distance squared ($ds^2$) equals "vertical" distance squared ($(R\,\mathrm d\psi)^2$) plus "horizontal" distance squared ($(R\cos\theta\,\mathrm d\phi)^2$). *It's a square of an extremely small quantity. Not mathematically exact, but close enough to understand. To make it rigorous, you cannot get around using mathematics anyway. *A difference of $\pi$ is not small by any measure. However, if the difference in both quantities is $10^{-6}$ each, then simply inserting in the formula will indeed give you a very good approximation to the actual distance.
Sum of digits of the $100$ th power of a continuous function in single variable Let $f(x)$ be a continuous function such that $f(x) > 0$ for all non-negative $x$ And, $$ (f(x))^{101} = 1 + \int_{0}^x f(t)dt $$ Then, $$ (f(101))^{100} = 100A+10B+C $$ Where $A,B,C$ are integers from $[0,9]$ . So, $ A + B + C = ?$ I've tried using the Newton-Leibniz theorem on the initial statement, but all I ended up with was that $f(101)$ is an integral multiple of $101$. I do not know how to proceed beyond that.
Note that $$(f(0))^{101}=1+\int_{0}^{0}f(t)\mathrm{d}t \implies (f(0))^{100}=1$$ Differentiating each side of $$ (f(x))^{101} = 1 + \int_{0}^x f(t)\mathrm{d}t $$ with respect to $x$ gives us that $$101(f(x))^{100}f'(x)=f(x) \iff (f(x))^{99}f'(x)=\frac{1}{101} \tag{1}$$ Note that $$g(x)=(f(x))^{100}\Rightarrow g'(x)=\frac{100}{101} \Rightarrow g(x)=\frac{100}{101}x+C$$ From $(1)$. Thus, we have that $(f(x))^{100}=\frac{100}{101}x+1$ from the fact that $(f(0))^{100}=1$. So, since the value is $101$, the answer is $2$.
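Since the solution is $f(x)=\left(\frac{100}{101}x+1\right)^{1/100}$, everything can be checked numerically (a SciPy sketch):

```python
from scipy.integrate import quad

f = lambda t: ((100 / 101) * t + 1) ** (1 / 100)

for x in (0.5, 10.0, 101.0):
    integral, _ = quad(f, 0, x)
    print(f(x) ** 101, 1 + integral)  # the two sides agree at each x

print(f(101.0) ** 100)  # 101.0, so 100A + 10B + C = 101 and A + B + C = 2
```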
An integration $ \int_0^1 x\ln \left ( \sqrt{1+x}+\sqrt{1-x}\right)\ln \left ( \sqrt {1+x} -\sqrt{1-x} \right)\mathrm{d}x.$ How can we evaluate $$\int_0^1 x\ln \left ( \sqrt{1+x}+\sqrt{1-x}\right)\ln \left ( \sqrt {1+x} -\sqrt{1-x} \right)\mathrm{d}x?$$ Usually when having as the integrand a logarithmic function, the first thing would be to try to integrate by parts, however in this case there's a product of logarithms and trying to integrate by parts would just produce two harder integrals and it might not be a good approach. There is also An integral by O. Furdui $\int_0^1 \log^2(\sqrt{1+x}-\sqrt{1-x}) \ dx$ which is quite similar to this one, however the substitution $\sqrt{1+x}-\sqrt{1-x}=2\sin t$ doesn't seem to be useful due to the $\sqrt{1+x}+\sqrt{1-x}$ term. Is there a better approach to this integral that reduces it to something simpler?
One may prove that $$ \int_0^1 x\ln \left( \sqrt{1+x}+\sqrt{1-x}\right)\ln \left( \sqrt {1+x} -\sqrt{1-x}\right)\:\mathrm{d}x=\frac{\ln^2 2}8-\frac{3\ln2}8+\frac1{16}. \tag1 $$ Hint. By using $$ \begin{align} 4ab&=(a+b)^2-(a-b)^2 \end{align} $$ giving $$ \begin{align} 4\ln b \ln c&=\ln^2 (bc)-\ln^2 \left(\frac bc\right) \end{align} $$ and by observing that $$ \begin{align} &\left ( \sqrt{1+x}+\sqrt{1-x}\right)\left ( \sqrt {1+x} -\sqrt{1-x} \right)=2x \\ &\frac{\sqrt{1+x}-\sqrt{1-x}}{\sqrt {1+x} +\sqrt{1-x}}=\frac{x}{1+\sqrt{1-x^2}},\quad 0<x<1, \end{align} $$ one gets $$ \begin{align} 4&\int_0^1 x\ln \left ( \sqrt{1+x}+\sqrt{1-x}\right)\ln \left ( \sqrt {1+x} -\sqrt{1-x} \right)\:\mathrm{d}x\\=&\int_0^1 x\ln^2 \left ( 2x \right)\:\mathrm{d}x -\int_0^1 x \ln^2\left( \frac{x}{1+\sqrt{1-x^2}}\right)\:\mathrm{d}x. \tag2 \end{align} $$ Then integrating by parts twice one gets $$ \int_0^1 x\ln^2 \left ( 2x \right)\:\mathrm{d}x=\frac{\ln^2 2}2-\frac{\ln2}2+\frac14. \tag3 $$ By the change of variable $u=\dfrac{x}{1+\sqrt{1-x^2}}$, one has $$ \begin{align} \int_0^1 x \ln^2\left( \frac{x}{1+\sqrt{1-x^2}}\right)\:dx&=\int_0^1\frac{4u(1-u^2)}{(1+u^2)^3}\cdot \ln^2 u\:du \\&=\frac12\int_0^1\frac{(1-v)}{(1+v)^3}\cdot \ln^2 v\:dv \\&= \ln 2,\tag4\end{align} $$ obtained by parts. By inserting $(4)$ and $(3)$ into $(2)$ and dividing by $4$ one gets $(1)$.
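A numerical integration agrees with this constant (a SciPy sketch; both values come out near $-0.13737$):

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    a, b = np.sqrt(1 + x), np.sqrt(1 - x)
    return x * np.log(a + b) * np.log(a - b)

value, _ = quad(g, 0, 1)
closed = np.log(2)**2 / 8 - 3 * np.log(2) / 8 + 1 / 16
print(value, closed)  # both approximately -0.13737
```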
Find all entire functions with $f(z) = f(\frac{1}{z})$ Find all entire functions with $f(z) = f(\frac{1}{z})$, for all $z\ne 0$. I tried to use the power series of $f$ but this did not help. I also tried to use Liouville's Theorem on $\frac{f(z)}{f(\frac{1}{z})}$ but then I have to deal with possible singularities. Can someone help me with this problem?
I presume your $f$ is defined on the whole of $\mathbb C$, but the equality $f(z) = f(1/z)$ is only meant to hold for $ z \in \mathbb C \backslash \{ 0 \}$? Well, $f$ is holomorphic, so it is bounded on $\{ z \in \mathbb C : |z | \leq 1 \}$. But since $f(z) = f(1/z)$, this means that $f$ is also bounded on $\{ | z | \geq 1 \}$. Do you see where this is going?
Compute the radius of convergence of the power series $\sum_{n=0}^{\infty} n!x^n$ I need help with how to compute the radius of convergence of the power series $\sum_{n=0}^{\infty} n!x^n$? I was thinking of using the ratio test but am unsure of how to go about it. Any help would be greatly appreciated, thanks
The ratio test does apply! Putting $a_n=n!$ we see that: $$\left|\frac{a_{n+1}}{a_n}\right|=\frac{(n+1)!}{n!}=n+1\underset{n\to\infty}{\longrightarrow}+\infty$$ Hence $R=0$. In other words, this power series converges solely for $x=0$.
Does this vector lie in the vector subspace $U$? Given are the vectors $v_{1}=\begin{pmatrix} 1\\ 3\\ 5 \end{pmatrix}, v_{2}=\begin{pmatrix} 4\\ 5\\ 6 \end{pmatrix}, v_{3}=\begin{pmatrix} 6\\ 4\\ 2 \end{pmatrix}$ from $\mathbb{R}^{3}$. The vector subspace $U = \text{span}\left\{v_{1},v_{2},v_{3}\right\}$ Does the vector $\begin{pmatrix} 2\\ 4\\ 5 \end{pmatrix}$ lie in $U$? I'm not sure how this is done correctly, but I think it's done by checking if the vector $\begin{pmatrix} 2\\ 4\\ 5 \end{pmatrix}$ is a linear combination of $v_{1},v_{2},v_{3}$. So I have written it all like this: Let $a,b,c \in \mathbb{R}$. $$a\begin{pmatrix} 1\\ 3\\ 5 \end{pmatrix}+b\begin{pmatrix} 4\\ 5\\ 6 \end{pmatrix}+c\begin{pmatrix} 6\\ 4\\ 2 \end{pmatrix}=\begin{pmatrix} 2\\ 4\\ 5 \end{pmatrix}$$ And then calculated each variable by using Gauss. In the end I had a wrong statement, i.e. no solution, and from this I conclude that the vector $\begin{pmatrix} 2\\ 4\\ 5 \end{pmatrix}$ doesn't lie in $U$. I don't want to write down all the calculation steps with Gauss because it would be too long, but is the way I did it the correct way? Or is it done completely differently?
Equivalently, we want $a+4b+6c=2$, $3a+5b+4c=4$, and $5a+6b+2c=5$. Subtract the second equation from twice the third: $7a+7b=6$. Subtract the first equation from three times the third: $14a+14b=13$. Multiplying $7a+7b=6$ by two gives $14a+14b=12$. No $a$ and $b$ can satisfy both $14a+14b=13$ and $14a+14b=12$, so there are no numbers $a$, $b$, $c$ that make the vector equation true: $\begin{pmatrix} 2 \\ 4 \\ 5\end{pmatrix}$ cannot be written as a linear combination of $v_1$, $v_2$, and $v_3$, so it is not in the subspace $U$.
Solving $DEF+FEF=GHH$, $KLM+KLM=NKL$, $ABC+ABC+ABC=BBB$ She is in the third grade and is $8$ years old (you can imagine how ashamed I felt when I said so to her). I helped her with lots of maths stuff today already, but this one is a mystery to me. Sorry it's in German but I have translated it :) It's saying "Each letter represents a digit. Determine them". First question: what is "them"? The letters, I guess? How shall I determine them when they are unknown? Or is it simply $A=1, B=2, C=3, D=4, E=5, F=6, G=7, H=8, K=11, L=12, M=13, N=14$? Alright... With this we gave a) a first try: It doesn't seem to make sense to set $A=1, B=2, ...$ Or we did something wrong... Any ideas how this could be solved? :s
The teacher did not see fit to write "each letter stands for a different digit." Therefore, "all letters are equal to the number zero" is a valid answer, and should earn maximum points. I would argue that it should earn above maximum, since it also has higher educational value, and homework is all about educational value. Regarding math, this answer provides an important lesson: the solution to a problem sometimes isn't the most obvious one. Sometimes, when making a proof, a non-obvious approach yields an easier result. Regarding engineering, this answer provides an even more important lesson: sometimes, the best solution to a problem is to get rid of the problem. Having to argue why this answer is valid also has educational value. I think this rather pointless homework provides a wonderful opportunity to teach her several life-saving skills, that will help her immensely in all stages of her life: * *Lateral thinking (i.e., out of the box). *Critical thinking (i.e., question blind trust in authority) *And, last but not least, the subtle and holy art of Trolling. Teaching her at an early age that every word matters will be of immense help when she later has to parse lawyer language in contracts. Critical thinking means every word written on a piece of paper might not be true. In fact, considering the state of our press, one should never read it without one's bullshit filter set to maximum. There is also a hidden bonus. When the teacher inevitably grades her paper as FAIL, because most teachers are idiots, your dad will get to file a complaint, and have fun in the director's office. And this will earn him (and you) her unconditional respect.
Find joint density of $X$ and $\exp(-X)$ $X$ is a uniformly distributed random variable in $[-a,a]$. Find the joint density of $X$ and $\exp(-X)$. Added: This is in the context of a first undergraduate course in probability. I tried to solve it using Y = exp(-X). Then solving for: P(X=x,Y=y) = P(Y=y|X=x)P(X=x) gives the following result, which I am doubtful about: P(X=x,Y=y) = f(x) when y = exp(-x) and zero otherwise. Also, as pointed out in an answer, the Lebesgue measure of the support is actually zero, so a density distribution will not exist, which is what I could conclude after seeing the distribution I got.
The pair $\left(X,e^{-X}\right)$ does not have a joint density because their joint distribution is concentrated on the set $$\{(x,y)\mid y=e^{-x}, \ -a\le x\le a\}$$ of Lebesgue measure $0$. There is neither a density nor a pmf. (The Lebesgue measure and the measure belonging to the joint distribution of $\left(X,e^{-X}\right)$ are mutually singular.)
Find the number of generators of the cyclic group $\mathbb{Z}_{p^r}$ Let $p$ be a prime number. Find the number of generators of the cyclic group $\mathbb{Z}_{p^r}$, where $r \in \mathbb{Z} \geq 1$ I'm trying to understand the question and am experimenting with $p=5$ and $r=1,2,3$. When $r=1$ it generates $\mathbb{Z_5}$, where every non-zero element is a generator of the group. When $r=2$ it generates $\mathbb{Z_{10}}$. All the elements relatively prime to $10$ are $1,3,7,$ and $9$, also $4$ generators. When $r=3$ it generates $\mathbb{Z_{15}}$. All of the elements relatively prime to $15$ are $1,2,4,7,8,11,13$, and $14$, which are $8$ generators. So I'm trying to figure out how to find the number of relatively prime elements for the general group $\mathbb{Z}_{p^r}$
I think the formula to calculate the answer of the above question is $p^r-p^{r-1}$: the generators of $\mathbb{Z}_n$ are exactly the residues relatively prime to $n$, and among $1,\dots,p^r$ the only ones not relatively prime to $p^r$ are the $p^{r-1}$ multiples of $p$. Please re-check it, and notify me if there is a counterexample.
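A direct count agrees with the formula (a short Python sketch):

```python
from math import gcd

p, r = 5, 3
n = p**r
num_generators = sum(1 for g in range(n) if gcd(g, n) == 1)
print(num_generators, p**r - p**(r - 1))  # 100 100
```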
Solving a Problem with Bearing A plane flies 1.3 hr at 110mph on a bearing of 38 degrees. It then turns and flies 1.5 hr at the same speed on a bearing of 128 degrees. How far is the plane from its starting point? To solve this, I have drawn a rough sketch. I have also labeled the appropriate sides at 143 miles (distance = rate * time; 1.3hr * 110mph) and 165 miles (1.5hr * 110mph). But I am having a difficult time concluding which angle within the sketch is the 90 degree angle to allow me to use a trigonometric function or the Pythagorean theorem to solve it. Edit: It is really important for me to know how to determine the measure of any angle that can be inferred. Not only because I am expecting the professor to attempt to fool us by providing measures where the triangle in question is not a right triangle, but because I thought I had already mastered this skill previously.
Since you turn a total of $90^{\circ}$ (the bearing changes from $38^\circ$ to $128^\circ$), the path of the plane forms a right triangle and the two given paths are the legs. Using the Pythagorean theorem $a^2+b^2=c^2$, where here $a=1.3\cdot 110=143$ and $b=1.5\cdot 110=165$, we get $c=\sqrt{(1.3\cdot 110)^2+(1.5\cdot 110)^2} \approx 218.344 \text{ mi}$
Expectation of a normal distribution Given that $y$ is generated according to $N(x_0^Tw,\sigma^2)$ ($N$ = normal distribution), why is the following true? $$E[y_0^2]= \sigma^2 +(x_0^Tw)^2 $$
Starting from the definition of variance $$\sigma^2 = E[(x-E[x])^2]$$ we can write $$\sigma^2 = E[x^2 - 2x \cdot E[x]+(E[x])^2] $$ and because of linearity of expectation $$ \sigma^2= \\ E[x^2] - E[2x \cdot E[x]]+E[(E[x])^2] \\ = E[x^2] - 2(E[x])^2+(E[x])^2 \\ = E[x^2] - (E[x])^2 $$ This can be written as $$E[x^2] = \sigma^2+(E [x])^2$$
Showing that linear subset is not a subspace of the Vector space $V$ I am given the following $V = \mathbb R^4$ $W = \{(w,x,y,z)\in \mathbb R^4|w+2x-4y+2 = 0\}$ I have to prove or disprove that $W$ is a subspace of $V$. Now, my linear algebra is fairly weak as I haven't taken it in almost 4 years, but for a subspace to exist I believe that: 1) The $0$ vector must be in $W$ 2) Vector addition must be closed under $W$ 3) Scalar multiplication must be closed under $W$ I don't think the first condition is true, because if I plug the zero vector into the defining equation I get $2 \neq 0$. Is that correct or am I doing something very wrong?
I feel I should indicate that conditions 2 and 3 fail as well. The vector $v = (-2, 0 , 0 , 0)$ belongs to the subset (since $-2+2=0$), but neither $2v$ nor $v+v$ belongs to the subset. It is thus not a subspace.
How to solve that logarithmic inequality? $$\log_{1\over31} (4x-5)^2 > \log_{1\over31} (5x-7)^2$$ $\begin{cases} x\neq {7\over 5} \\ x\neq {5\over 4} \\ (4x-5)^2<(5x-7)^2 \end{cases}$ $(4x-5)^2-(5x-7)^2<0, \quad (4x-5-(5x-7))(4x-5+5x-7)<0, \quad (2-x)(x-{4\over3})<0, \quad x \in \left(-\infty; {4\over3}\right) \cup \left(2; +\infty\right)$ $x \in \left(-\infty; {5\over4}\right)\cup\left({5\over4};{4\over3}\right)\cup\left(2;+\infty\right)$ But the right answer is $x \in \left({4\over3};{7\over5}\right) \cup \left({7\over5}; 2\right)$
$\log_b$ with base less than $1$ is decreasing. Therefore the inequality is equivalent to $$(4x-5)^2<(5x-7)^2\iff 9x^2-30x+24=3(3x^2-10x+8)>0.$$ Use the rational root theorem to find that $2$ is a root; hence by Vieta's relations, $4/3$ is the other root. The solutions are the $x$ outside the interval between the roots: $$(-\infty,4/3)\cup(2,+\infty)\smallsetminus\{5/4,7/5\}=(-\infty,5/4)\cup(5/4,4/3)\cup(2,+\infty).$$
Why raising both sides of this equation yields incorrect results? I was solving a simple trigonometric equation and found two ways to solve it: one is correct and pretty straightforward, and the other one is a bit more complicated and is really close to being correct, but has extra solutions that came out of nowhere (or at least I think so). We started doing trigonometry about 3 months ago, so don't be too mad at me if I do some silly mistakes :) Here's the equation (we need to find $x$): $\sqrt2\sin(x) - \sqrt2\cos(x) = \sqrt3$ Here's the incorrect solution: $\sqrt2\sin(x) - \sqrt2\cos(x) = \sqrt3$ $\sin(x) - \cos(x) = \frac{\sqrt3}{\sqrt2}$ $1 - 2\sin(x)\cos(x) = 1.5$ $2\sin(x)\cos(x) = -0.5$ $\sin(2x) = -0.5$ $2x = (-1)^{n+1}\arcsin(\frac{1}{2}) + \pi n$ $x = (-1)^{n+1}\frac{\pi}{12}+\frac{\pi n}{2}$, where $n \in\mathbb {Z}$ I don't know why, but after I raise both sides of $\sin(x) - \cos(x) = \frac{\sqrt3}{\sqrt2}$ to the power of 2 (lines 2-3), extra solutions for $x$ appeared. Here's what I mean: This is the graph before I raised both sides of my equation to the power of 2 And this is after As you can see, after I've risen both sides of my equation to the power of 2 extra solutions for $x$ have appeared. And I don't know why that happened. Can somebody please explain this? Thanks in advance. P.S. Here's the correct solution: $\sqrt2\sin(x) - \sqrt2\cos(x) = \sqrt3$ $\frac{\sqrt2}{2}\sin(x) - \frac{\sqrt2}{2}\cos(x) = \frac{\sqrt3}{2}$ $\cos(\frac{\pi}{4} + x) = -\frac{\sqrt3}{2}$ $x = \pm\frac{5\pi}{6} - \frac{\pi}{4} + 2\pi n$, where $n \in\mathbb {Z}$
When you have $f(x)=g(x)$ and you raise them to the power of $2$, then note that the solutions of $f(x)=-g(x)$ are also solutions of the new equation $[f(x)]^2=[g(x)]^2$. That's why new solutions appear.
Determine: $S = \frac{2^2}{2}{n \choose 1} + \frac{2^3}{3}{n \choose 2} + \frac{2^4}{4}{n \choose 3} + \cdots + \frac{2^{n+1}}{n+1}{n \choose n}$ We are given two hints: consider $(n+1)S$; and use the Binomial Theorem. But we are not to use calculus. My consideration of $(n+1)S$ goes like this: \begin{align*} \sum\limits_{k=1}^{n}\frac{2^{k+1}}{k+1}{n \choose k} &= \frac{1}{n+1}\sum\limits_{k=1}^{n}(n+1)\frac{2^{k+1}}{k+1}{n \choose k} \\ &= 2\frac{1}{n+1}\sum\limits_{k=1}^{n}2^k{n+1 \choose k+1} \\ &= 2\frac{1}{n+1}\sum\limits_{k=1}^{n}(1+1)^k{n+1 \choose k+1} \\ \end{align*} Now I think I'm in a position to use the Binomial Theorem, giving \begin{equation*} 2\frac{1}{n+1}\sum\limits_{k=1}^{n}\sum\limits_{i=0}^{k}{k \choose i}{n+1 \choose k+1} \end{equation*} I don't know if I am on the right track, but I do know that I'm stuck. Can anyone offer any advice on how to proceed?
HINT: Like Determine: $S = \frac{1}{2}{n \choose 0} + \frac{1}{3}{n \choose 1} + \cdots + \frac{1}{n+2}{n \choose n}$, $$\sum_{k=1}^n\dfrac{a^{k+1}}{k+1}\binom nk=\dfrac1{n+1}\sum_{k=1}^n\binom{n+1}{k+1}a^{k+1}$$ Now $$\sum_{k=-1}^n\binom{n+1}{k+1}a^{k+1}=(a+1)^{n+1}$$
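Carrying the hint to its end, the sum equals $\frac{3^{n+1}-2(n+1)-1}{n+1}$, since the $k=-1$ and $k=0$ terms removed from $(2+1)^{n+1}$ are $1$ and $2(n+1)$. A quick exact check in Python (a sketch):

```python
from fractions import Fraction
from math import comb

for n in range(1, 12):
    s = sum(Fraction(2**(k + 1), k + 1) * comb(n, k) for k in range(1, n + 1))
    closed = Fraction(3**(n + 1) - 2 * (n + 1) - 1, n + 1)
    assert s == closed
print('identity holds for n = 1..11')
```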
Is Cosine the only function satisfying $ f'(x)= f(x+\frac{\pi}{2})$? Basically most of us know that $\frac {\textrm{d}}{\textrm{d}x} \cos x = -\sin x$ . Also $ \cos (x+\frac{\pi}{2})=-\sin x$ That makes $$ \frac {\textrm{d}}{\textrm{d}x} \cos x = \cos (x+\frac{\pi}{2}) $$ So, out of curiosity I wondered what other functions could have this property. It's pretty interesting that by translating the graph of a function by some vector we get the graph of its derivative. Therefore, here's my attempt: I tried to find the defining property of this function. Here we go: $$ f'(x)= f(x+\frac{\pi}{2}) $$ That makes $$ f''(x)= \left( f'(x)\right)'= f(x+\frac{\pi}{2})'=f'(x+\frac{\pi}{2})=f(x+\pi)$$ Generally $$f^n(x)=f(x+n\frac{\pi}{2})$$ With $f^n(x) $ being the $n$-th derivative of $f$. And I'm stuck at this step. What makes cosine adapt to this property is that $ \cos (x+\pi)=-\cos x$ which allows it to be the solution of the following differential equation: $$f''(x)+f(x)=0$$ Though I don't want to admit that $f(x+\pi)=-f(x) $ So, could you please help me? Isn't there any way to determine all the functions that satisfy the previously acknowledged property? Are they infinite?
We have: $$\sin(x+\pi/2)=\cos x=(\sin x)',$$ so $\sin$ has the property too. Actually, $\cos(x+a)$ for any $a$ will work. Applying Fourier transform rules, your original equation becomes: $$i\omega \hat f(\omega)=e^{i\omega \pi/2}\hat f(\omega)$$ So, when $i\omega \neq e^{i\omega\pi/2}$, $\hat f$ must be zero. The set of solutions to $i\omega =e^{i\omega\pi/2}$ is discrete, and contains only $-1$ and $1$ on the real line. Are there complex solutions? I don't know. The result is that $\hat f$ must be a linear combination of delta functions at the roots $\omega_1,\omega_2,...$, and thus that $f$ is a linear combination of $e^{i\omega_k x}$. The cases $\omega=1,-1$ give $\sin x$ and $\cos x$. (This corresponds to the comment given above, where $\frac{\ln \lambda}{\lambda}=\frac{\pi}{2}$, by setting $\lambda =i\omega$.)
Simple Linear Regression: why do we estimate the conditional mean when we can estimate the parameters? If we can estimate $\beta_0$, $\beta_1$ and $\sigma^2$ in a simple linear regression model, why do we want to estimate the conditional mean $\beta_0+\beta_1x_0$ at a value $x_0$? I mean, we already have all the information that we might possibly need. If I want to estimate the conditional mean I can just substitute the estimates $\hat{b_1}$ and $\hat{b_0}$ to obtain $\hat{y}=\hat{b_0}+\hat{b_1}x_0$, which is indeed the fitted value.
You have to distinguish between the prediction of the conditional mean at $x=x_0$ and the prediction of a given point at $x=x_0$. Recall that your population conditional mean model is $E[Y|X=x]=g(x) = \beta_0 + \beta_1x$; thus you estimate these regression coefficients and then just plug in $x_0$ and the point estimators of the $\beta$-s, i.e., $\widehat{E[Y|X=x_0]}=\hat{\beta_0} + \hat{\beta_1}x_0$. This estimator depends on a random sample, hence it has a distribution. If you assume that your data generating model is $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma^2)$, then it is straightforward to show that $$ \widehat{E[Y|X=x_0]} \sim N\left(\beta_0 + \beta_1x_0,\; \sigma^2 \left(1/n + (x_0 - \bar{x})^2/S_{xx}\right)\right). $$ But what happens if you are interested in the prediction of $Y(x_0)$ itself and not its (conditional) mean? The point estimator is the same, i.e., $$ \hat{Y}(x_0)=\hat{\beta_0} + \hat{\beta_1}x_0, $$ however now you have to consider also the noise term $\epsilon_0$; as such its distribution should be calculated from $$ \hat{Y}(x_0)=\hat{\beta_0} + \hat{\beta_1}x_0 + \epsilon_0, $$ that is, $$ \hat{Y}(x_0) \sim N\left(\beta_0 + \beta_1x_0,\; \sigma^2 \left(1+1/n + (x_0 - \bar{x})^2/S_{xx}\right)\right). $$ Intuitively, the point estimator does not change because the best "guess" (in the given model) of the $\epsilon_0$ value is $0$; however this consideration introduces much more noise to the estimator.
What do all the common maps "preserve"? I am trying to organize all the different "morphisms" in my head. To help me remember, I am trying to imagine what these maps "preserve." By "preserve" I mean an invariant under the mapping. Can you help me fill in the gaps? An isometry of a manifold preserves lengths and angles. An isometry of a vector space preserves the inner product. A homeomorphism of a manifold preserves topological properties (how do we quantify this?) A diffeomorphism of a manifold preserves "smoothness" as well as topological properties. An injection between any two sets preserves "uniqueness" A surjection between any two sets preserves the "wholeness" of a set An isomorphism between vector spaces preserves operations: scalar multiplication and vector addition. A morphism preserves...? Is this a good way of categorizing these maps? Should I be remembering them differently?
Though your examples might require some refinements (as noted in the comments), you basically got the idea: there is something common to every kind of 'morphism'. The most general thing they have in common is that identities are always included and they are all closed under composition, which makes their class a category. Most of your lines specifically pick out the invertible morphisms of the given type. (E.g. homeomorphisms are the invertible morphisms in the category [objects: topological spaces, morphisms: continuous functions].)
Is the slope of a function bounded by the supremum of the derivative? For any function $f : \mathbb{R}^{m} \to \mathbb{R}^n$, is it always true that for $x$, $y \in \mathbb{R}^{m}$, $$\frac{\lVert f(x) - f(y) \rVert}{\lVert x-y \rVert} \leq \sup_{z \in \mathbb R^m}{\lVert D_f(z) \rVert} $$ where $D_f(z)$ is the Jacobian of $f$ evaluated at $z$, assuming $D_f(z)$ exists everywhere? I'd like to use this as a step in a proof, but am not sure how to show this holds without making any assumptions on $x$ and $y$.
Yes, provided the supremum is finite: this is the mean value inequality, the correct multivariate form of the mean value theorem. For $C^1$ maps, write $$f(x)-f(y)=\int_0^1 D_f\big(y+t(x-y)\big)\,(x-y)\,dt,$$ and take norms to get $\lVert f(x)-f(y)\rVert\leq \sup_{z}\lVert D_f(z)\rVert\,\lVert x-y\rVert$. Note that the equality form of the mean value theorem fails for vector-valued $f$ (try $f(t)=(\cos t,\sin t)$ on $[0,2\pi]$), but the inequality survives, and it holds even when $f$ is merely differentiable everywhere.
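To make this concrete, here is an illustrative numeric check (assuming NumPy; the map $f$ is just an arbitrary example): the difference quotient between two random points stays below the supremum of the Jacobian's spectral norm along the joining segment, which is in turn bounded by the global supremum in the question:

```python
# Difference quotient of f vs sup of ||Df|| along the segment [p, q].
import numpy as np

def f(v):
    x, y = v
    return np.array([np.sin(x) + y ** 2, x * y])

def jac(v):
    x, y = v
    return np.array([[np.cos(x), 2 * y],
                     [y,         x    ]])

rng = np.random.default_rng(1)
p, q = rng.normal(size=2), rng.normal(size=2)

quotient = np.linalg.norm(f(p) - f(q)) / np.linalg.norm(p - q)
sup_norm = max(np.linalg.norm(jac(p + t * (q - p)), 2)   # spectral norm
               for t in np.linspace(0, 1, 201))
print(quotient <= sup_norm + 1e-12)   # True
```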
$f = 0$ almost everywhere implies Lebesgue integrability and $\int f = 0$ Let $f \in \mathcal{B}[a,b]$ so that $f = 0$ a.e. in $[a,b]$. Show that $f \in \mathcal{L}[a,b]$ and $\int_a^b f = 0$. Proof. Since $f$ is bounded, then there is some $L \in \mathbb{R}$ such that $$\sup_{x\in[a,b]} |f(x)| < L.$$ Furthermore, since $f = 0$ almost everywhere then $m(\{ f \neq 0 \}) = 0$. Consider the upper and lower sums over a measurable partition $P = \{E_{j}\}_{j=1}^{n}$, i.e. \begin{align*} U[f,P] = \sum_{j=1}^{n}M_j\cdot m(E_{j}) \\ L[f,P] = \sum_{j=1}^{n}m_j\cdot m(E_{j}) \end{align*} Now, consider only those $E_{k}$ where $f \neq 0$, call this $\widetilde{P} = \{E_{k}\}_{k=1}^{m}$, $m < n$. Note that $m(\widetilde{P}) = 0 \implies m(E_{k}) = 0$ for all $k=1,\ldots,m$, on account of the assumption. This then implies $$ 0 = L[f,\widetilde{P}] \leqslant U[f,\widetilde{P}] = 0 \implies L[f,\widetilde{P}] = U[f,\widetilde{P}] = 0$$ which implies that $f \in \mathcal{L}[a,b]$ and $\int_a^b f = 0$. I'm looking for critiques of my proof. One thing I notice is that I haven't really used the boundedness.
If I interpreted the question correctly (see the comments), then I can only make the following remarks: The Lebesgue measure is complete. This means that any subset of a null measurable set is also measurable and null. From this, it should be straightforward to see that any $f : \mathbb R \to \mathbb R$ that is zero almost everywhere is measurable, whether $f$ is bounded or not. For instance, the preimage $f^{-1}(a,\infty)$ with $a > 0$ is a subset of the null set on which $f$ is nonzero, hence $f^{-1}(a,\infty)$ is itself measurable (and null). You can check the other preimages for yourself. Secondly, if $f$ and $g$ are measurable functions with $f$ integrable and $f = g$ almost everywhere, then $g$ is also integrable and $\int g = \int f$. Essentially this is because any simple function that sits underneath $f$ also sits underneath $g$, once you "trim off" some null bits that do not affect the measurability or the measures of the domains on which the "blocks" are supported. The same is true with $f$ and $g$ reversed. So it looks like the question is trivial. If anyone disagrees, please do leave a comment.
What is $\det{(I-\alpha vv^T)}$? Let $v \in \mathbb{R^n}$ and $\alpha \in \mathbb{R}$ (a) What is the determinant of the matrix $M = I - \alpha vv^T$, where $I$ is the $n \times n$ identity matrix? (b) For what values $\alpha$ is $M$ nonsingular? For (a) I used a theorem: $\det(I + uv^T) = 1 + u^Tv$. So, $\det(I-\alpha vv^T)= 1 -(\alpha v)^Tv = 1 - v^T\alpha^Tv$ We get for $\alpha$ orthogonal, $-\|v\|_{2}^2 \leq v^T\alpha^Tv \leq \|v\|_{2}^2$ Thus, $1-\|v\|_{2}^2 \leq \det{(I-\alpha vv^T)} \leq 1+ \|v\|_{2}^2$. I don't know how I can answer (b).
For the first part: since $\alpha$ is a scalar, $(\alpha v)^Tv=\alpha\, v^Tv$, so $$\det(I-\alpha vv^T)=1-\alpha v^Tv=1-\alpha\lVert v\rVert^2.$$ For (b), set this equal to $0$ and solve for $\alpha$: the matrix is singular exactly when $\alpha=1/\lVert v\rVert^2$ (assuming $v\neq 0$), and nonsingular for every other value of $\alpha$.
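As a quick numeric sanity check of both parts (a sketch assuming NumPy), compare $\det(I-\alpha vv^T)$ with $1-\alpha\lVert v\rVert^2$ and confirm singularity at $\alpha=1/\lVert v\rVert^2$:

```python
# Verify det(I - a v v^T) = 1 - a ||v||^2 and the singular value of alpha.
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=5)
a = 0.3

M = np.eye(5) - a * np.outer(v, v)
print(np.linalg.det(M), 1 - a * (v @ v))     # the two numbers agree

a_sing = 1 / (v @ v)                         # the unique singular alpha
print(np.linalg.det(np.eye(5) - a_sing * np.outer(v, v)))  # ~0
```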
The Laplace Transform(s) of a Certain Family of Generalized Hypergeometric Functions Using the standard notation for a generalized hypergeometric function, given a non-negative integer $p$, define: $\mathcal{G}_{p}\left(x\right)={}_{p}F_{p}\left(\underbrace{\frac{1}{2},...,\frac{1}{2}}_{p\textrm{ times}};\underbrace{\frac{3}{2},...,\frac{3}{2}}_{p\textrm{ times}};-x^{2}\right)$ for all $x$ (in $\mathbb{R}$, or in $\mathbb{C}$). That is to say (as can be easily shown, since $\frac{(1/2)_n}{(3/2)_n}=\frac{1}{2n+1}$): $\mathcal{G}_{p}\left(x\right)=\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n}}{n!}\frac{x^{2n}}{\left(2n+1\right)^{p}}$ This function is well behaved (bounded, decaying to $0$ as $x\to\pm\infty$), and it is analytic everywhere. I would like to be able to find the Laplace transforms $\mathcal{L}\left\{ \mathcal{G}_{p}\right\} \left(s\right)$ of the $\mathcal{G}_{p}$s for every $p$. At a minimum, I want a closed-form expression for the value of the Laplace transforms at $s=1$; i.e., the value of the integral: $\int_{0}^{\infty}\mathcal{G}_{p}\left(x\right)e^{-x}dx$ I cannot do this integration term-by-term, since that leads to a divergent series (and is not valid, due to an absence of uniform convergence of the integrand for $x\geq0$). Any ideas?
Following Wang, I would suggest $$\mathcal{L}\{\mathcal{G}_{p}(x);s\}= {}_{p+1}F_{q}\left[\begin{matrix} 1, & a_{1}, & \dots & ,a_{p} \\ & b_{1}, & \dots & ,b_{q} \end{matrix};1/s\right]$$
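Given how delicate a closed form is here, a direct numeric reference value is useful for testing any candidate formula, including the one above. The sketch below (assuming mpmath) evaluates $\int_0^\infty \mathcal{G}_p(x)e^{-x}\,dx$ for small $p$; for $p=1$ one can also cross-check against $\mathcal{G}_1(x)=\frac{\sqrt{\pi}}{2}\operatorname{erf}(x)/x$:

```python
# Numerically evaluate the Laplace transform of G_p at s = 1,
# as a reference value for testing candidate closed forms.
import mpmath as mp

def G(p, x):
    # G_p(x) = pFp(1/2,...,1/2; 3/2,...,3/2; -x^2)
    return mp.hyper([mp.mpf(1) / 2] * p, [mp.mpf(3) / 2] * p, -x ** 2)

for p in (1, 2):
    val = mp.quad(lambda x, p=p: G(p, x) * mp.exp(-x), [0, mp.inf])
    print(p, val)
```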
Is this right? Trig integration $\int \sec^3 (x)\tan^2 (x)\,dx$ $$\begin{align} \int\sec^3(x)\tan^2(x)\,dx&=\int\sec^3(x)(\sec^2(x)-1)\,dx \\ &=\int\sec^5(x)\,dx-\int\sec^3(x)\,dx \end{align}$$ $$\int\sec^3(x)\,dx=\frac12(\sec(x)\tan(x)+\ln|\sec(x)+\tan(x)|)+C$$ $$\int\sec^5(x)\,dx$$ Let $u=\sec^3(x)$, then $du=3\sec^3(x)\tan(x)\,dx $. Let $dv=\sec^2(x)\,dx$, then $v=\tan(x)$ $$\begin{align} \int\sec^5(x)\,dx&=\sec^3(x)\tan(x)-3\int\sec^3(x)(\sec^2(x)-1)\,dx \\ &=\sec^3(x)\tan(x)-3\int\sec^5(x)\,dx+3\int\sec^3(x)\,dx \\ &=\frac14\sec^3(x)\tan(x)+\frac38(\sec(x)\tan(x)+\ln|\sec(x)+\tan(x)|)+C \end{align}$$ $$\int\sec^5(x)\,dx-\int\sec^3(x)\,dx=\frac14\sec^3(x)\tan(x)-\frac18(\sec(x)\tan(x)+\ln|\sec(x)+\tan(x)|)+C$$
Yep, everything's correct here! A quick way to double-check is to differentiate your final expression and confirm that you recover $\sec^3(x)\tan^2(x)$.
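If you want a machine check as well, here is a short sketch (assuming SymPy) that does exactly that differentiation:

```python
# Differentiate the proposed antiderivative and compare with the integrand.
import sympy as sp

x = sp.symbols('x')
F = (sp.sec(x) ** 3 * sp.tan(x) / 4
     - (sp.sec(x) * sp.tan(x) + sp.log(sp.sec(x) + sp.tan(x))) / 8)

residual = sp.simplify(sp.diff(F, x) - sp.sec(x) ** 3 * sp.tan(x) ** 2)
print(residual)  # expect 0, i.e. F' matches the integrand
```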
Prove that $m=[\sqrt{n}+(1/2)]$ for the given conditions For any real number $x$, let $[x]$ denote the largest integer which is less than or equal to $x$. Let $N_1=2$, $N_2=3$, $N_3=5$, and so on be the sequence of non-square positive integers. If the $n$th non-square positive integer satisfies $m^2<N_n<(m+1)^2$, then show that $m=\left[\sqrt{n}+\frac{1}{2}\right]$. Please help me to solve this. Thanks in advance.
There are $(m+1)^2-m^2-1=2m$ integers $N_n$ strictly between $m^2$ and $(m+1)^2$. It follows that there are $$2+4+\ldots+2(m-1)=(m-1)m$$ such integers $<m^2$. This means that $m^2<N_n<(m+1)^2$ implies $$(m-1)m<n\leq m(m+1)\ .\tag{1}$$ Since all entries in $(1)$ are integers, and since $\left(m-{1\over2}\right)^2=(m-1)m+{1\over4}$ and $\left(m+{1\over2}\right)^2=m(m+1)+{1\over4}$, the bounds in $(1)$ give $$\left(m-{1\over2}\right)^2<n<\left(m+{1\over2}\right)^2\ .$$ From this we conclude that $$m<\sqrt{n}+{1\over2}<m+1\ ,$$ or $$m=\left\lfloor \sqrt{n}+{1\over2}\right\rfloor\ .$$
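For reassurance, here is a brute-force sketch (plain Python) testing $m=\left\lfloor\sqrt{n}+{1\over2}\right\rfloor$ against the first few thousand non-squares:

```python
# Check m = floor(sqrt(n) + 1/2) for every non-square N_n up to 10^4.
import math

nonsquares = [k for k in range(2, 10001) if math.isqrt(k) ** 2 != k]
for n, N in enumerate(nonsquares, start=1):
    m = math.isqrt(N)                      # the m with m^2 < N_n < (m+1)^2
    assert m == math.floor(math.sqrt(n) + 0.5)
print("formula verified for the first", len(nonsquares), "non-squares")
```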