There are infinitely many prime numbers $p$ for which $p + 2$ and $p + 4$ are also prime numbers. I need help with this question: There are infinitely many prime numbers $p$ for which $p + 2$ and $p + 4$ are also primes. Where should I start? What proof techniques will be useful? Any input will be very useful to me.
No, there are not infinitely many such primes among the integers, $\mathbb Z$. In fact, there are only precisely two of them: $p = -7$ or 3. With the former, we have $p + 2 = -5$ and $p + 4 = -3$. With the latter, we have $p + 2 = 5$ and $p + 4 = 7$. So we have either the three primes $-7, -5, -3$ or the three primes 3, 5, 7. This suggests that in such a triple, one of the numbers must be a multiple of 3. Since you can't use congruences, the solution is somewhat more laborious, but not by much. Suppose $p = 3k + 1$. Then $p + 2 = 3k + 3 = 3(k + 1)$, and that's composite unless $k = -2$ or 0. So suppose instead that $p = 3k + 2$. Then maybe $p + 2$ is prime, but then $p + 4 = 3k + 6 = 3(k + 2)$. The case $p = 3k$ (with implied plus zero) should now be obvious.
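For readers who want to see the mod-3 phenomenon concretely, here is a small brute-force check (my own addition, not part of the original answer; the bound 10**4 is arbitrary):

```python
def is_prime(n):
    """Trial-division primality test, good enough for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Search the positive primes p < 10**4 for which p + 2 and p + 4 are also prime.
triples = [(p, p + 2, p + 4) for p in range(2, 10**4)
           if is_prime(p) and is_prime(p + 2) and is_prime(p + 4)]
print(triples)  # [(3, 5, 7)] -- exactly one of p, p+2, p+4 is always divisible by 3
```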
How to solve $2\tan^{-1}\sqrt{x-x^2}=\tan^{-1}x+\tan^{-1}(1-x)$ $$2\tan^{-1}\sqrt{x-x^2}=\tan^{-1}x+\tan^{-1}(1-x)$$ I wasn't sure how to proceed: I tried to draw triangles for each inverse tangent, but that didn't work, and I know that I can't simply distribute the tangent over both sides. (Though I do enjoy complex methods of solving this, I would appreciate a high-school-level process for doing this problem.)
Letting $u = 1-x,v=x,$ you need to solve $2\arctan \sqrt{uv} = \arctan u + \arctan v.$ Taking tangents on both sides and applying tangent addition and double angle formulae gives $\frac{2\sqrt{uv}}{1-uv}=\frac{u+v}{1-uv}\implies 2\sqrt{uv}=u+v\implies 2\sqrt{x-x^2}=1\implies 4(x-x^2)=1$ which is a simple quadratic to solve.
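As a quick sanity check (not in the original answer), the quadratic $4(x-x^2)=1$ has the double root $x=\tfrac12$, and one can confirm numerically that it satisfies the original equation:

```python
import math

x = 0.5  # double root of 4*(x - x**2) = 1, i.e. (2x - 1)**2 = 0
lhs = 2 * math.atan(math.sqrt(x - x**2))
rhs = math.atan(x) + math.atan(1 - x)
print(lhs, rhs)  # both approximately 0.927295
```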
Finding directrix of a parabola Let $3x - y - 8 = 0$ be the equation of tangent to a parabola at the point $(7, 13)$. If the focus of the parabola is at $(-1, -1)$, then the equation of its directrix is?
HINT: Let the equation of the directrix be $y=mx+c$ So, the equation of the parabola : $$(x+1)^2+(y+1)^2=\dfrac{(mx-y+c)^2}{m^2+1}$$ First of all, it passes through $(7,13)$ Find the equation of the tangent at $(7,13)$ and compare with $3x-y-8=0$
Numerical approximation of $\displaystyle \sum_{n=0}^{\infty}\frac{1}{2^{n+\sqrt{n}}}$, using a Markov chain on $\mathbb{N}_0.$ Using the Markov chain $\{X_n\}$ on $\mathbb{N}_0$, with transition probabilities $p(x,x+1)=p(x,0)=1/2$ for all $x\in \mathbb{N}_0$, how can we compute (numerically) the sum of the series $$\sum_{n=0}^{\infty}\frac{1}{2^{n+\sqrt{n}}}~?$$ I don't seem to understand how a simulation of our process can approximate the desired sum. Any help will be appreciated. Thank you in advance.
Following the hints by @Ian, we present an answer using the ergodic theorem and Monte Carlo simulation. First of all, we easily see that there is a unique invariant distribution $\pi:$ $$\pi=\pi P\iff \pi(k)=\frac{1}{2}~\pi(k-1),~k\in \mathbb{N} \Rightarrow~ \pi(k)=\frac{1}{2^k}\pi(0),~k\in \mathbb{N}$$ and $\displaystyle \sum_{k=0}^{\infty}\pi(k)=1\iff \pi(0)=1/2$, so $\pi(k)=1/2^{k+1},~k\in \mathbb{N}_0.$ Since the given Markov chain is irreducible, has an invariant distribution and is aperiodic, by the ergodic theorem, with probability one: $$\frac{1}{n}(f(X_1)+\ldots+f(X_n)) \rightarrow E^\pi(f)=\sum_{n=0}^{\infty}f(n)\pi(n).$$ By choosing $f(n)=2^{1-\sqrt{n}},~n\in \mathbb{N}_0$, we get with probability one: $$\lim\frac{1}{n}(f(X_1)+\ldots+f(X_n)) =\sum_{n=0}^{\infty}\frac{1}{2^{n+\sqrt{n}}}.$$ For the numerical part, we can use the Monte Carlo method, taking the mean of a large enough sample of $f(X_n),~n\geq 0,$ to approximate the above series.
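Here is a minimal simulation sketch of the procedure described above (my own code, not from the answer; the step count, seed, and truncation at 200 terms are arbitrary choices):

```python
import math
import random

def f(n):
    """The test function f(n) = 2**(1 - sqrt(n)) chosen in the answer."""
    return 2.0 ** (1 - math.sqrt(n))

random.seed(0)
state, total, steps = 0, 0.0, 10**6
for _ in range(steps):
    # transition: x -> x + 1 or x -> 0, each with probability 1/2
    state = state + 1 if random.random() < 0.5 else 0
    total += f(state)
ergodic_estimate = total / steps

# truncated partial sum of the series, for comparison
direct = sum(1.0 / 2 ** (n + math.sqrt(n)) for n in range(200))
print(ergodic_estimate, direct)  # both should be roughly 1.41
```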
If $m$th term and $n$th term of arithmetic sequence are $1/n$ and $1/m$ then the sum of the first $mn$ terms of the sequence is $(mn+1)/2$ If the $m$th term and the $n$th term of an arithmetic sequence are $1/n$ and $1/m$ respectively, then prove that the sum of the first $mn$ terms of the sequence is $(mn+1)/2$. My attempt: $$t_{m}=\dfrac{1}{n}$$ $$a + (m-1)d =\dfrac{1}{n}$$ And, $$t_{n}=\dfrac{1}{m}$$ $$a+(n-1)d=\dfrac{1}{m}$$ What do I do further?
Let $u_t = u_0 + a \cdot t$, with $$ u_m = \frac{1}{n}, \qquad u_n = \frac{1}{m}. $$ Therefore $$ u_n - u_m = a \cdot \left( n-m \right)= \frac{1}{m} - \frac{1}{n} \implies a=\frac{1}{m \cdot n}. $$ Substituting back, $$ u_n = u_0 + \frac{n}{m \cdot n} = u_0 + \frac{1}{m} = \frac {1}{m} \implies u_0 = 0. $$ We have $$ u_t = \frac{t}{m\cdot n}. $$ Since $u_0 = 0$, the sum of the first $mn$ terms is $$\sum_{t=0}^{m\cdot n} u_t = \sum_{t=0}^{m\cdot n} \frac{t}{m \cdot n} = \frac{1}{m \cdot n} \sum_{t=0}^{m\cdot n} t = \frac{1}{m \cdot n}\cdot\frac{(m \cdot n)\cdot(m \cdot n +1)}{2} = \frac{m \cdot n +1}{2}.$$
Good books on graph theory for self-study? Recently I plan to study graph theory. I tried to read the book A Course in Combinatorics, yet I found the text hard to follow and problems too difficult. I'm just midway in chapter 2 and I already found several problems that I can't solve even after reading the hint and thinking for hours. (Is it just me or is it that the problems in this book are indeed really hard? And should I pursue reading this book?) So I want another book that (hopefully) satisfies the following: (1) comprehensive; (2) has good problems with detailed hints (compared to the book above).
Douglas B. West's book Introduction to Graph Theory is a good book for self-study. You can find it by searching for "West graph theory".
$\frac{1}{\sin x}-\frac{1}{x}$ bounded on $[0,\pi/2]$. Why is $$\frac{1}{\sin x}-\frac{1}{x}$$ bounded when $x\in [0,\pi/2]$. I've come across this fact in Fourier series, but I can't figure out why this is true. I would appreciate any help.
This function is continuous on the half-open interval $(0,\,\pi/2].$ At $x=0$ it can be extended so as to be continuous by letting its value be its limit as $x\to0$, if that exists. Then one has a continuous function on a closed bounded interval, and such functions are bounded. And if that extended function is bounded, then so is the function you started with before doing the extension. So the question is whether the following exists (meaning it's a finite number and not $+\infty$ or $-\infty$): \begin{align} \lim_{x\to0} \left( \frac 1 {\sin x} - \frac 1 x \right) = \lim_{x\to0} \frac{x-\sin x}{x\sin x}. \end{align} L'Hopital's rule handles that.
Finding the Image of a Linear Transformation? Just a simple question, but I had a hard time finding a decent explanation. I'm confused about finding the image when given a matrix and what exactly this represents. For example the matrix A = \begin{bmatrix}1&2&3\\1&3&4\\1&4&5\end{bmatrix} What is the image of $F_A$?
The image is the set of all vectors of the form $Av$ for an arbitrary vector $v$. For example for the standard basis vector $e_1=\begin{bmatrix}1\\0\\0\end{bmatrix}$, we know $Ae_1=\begin{bmatrix}1\\1\\1\end{bmatrix}$ is in the image. Hint 1: what happens for the other standard basis vectors $e_2$ and $e_3$? Hint 2: if you know $Ae_1$, $Ae_2$, and $Ae_3$, then for any $v = c_1e_1+c_2e_2+c_3e_3$ you can write $Av = c_1(Ae_1)+c_2(Ae_2)+c_3(Ae_3)$. Can you use this to describe the image of $A$ neatly?
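A quick numerical illustration of the hints (my addition; it assumes we are working over the reals): the third column of $A$ is the sum of the first two, so the image is a plane.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 3, 4],
              [1, 4, 5]])

print(A @ np.array([1, 0, 0]))   # A e1 = [1 1 1]
print(A @ np.array([0, 1, 0]))   # A e2 = [2 3 4]
print(A @ np.array([0, 0, 1]))   # A e3 = [3 4 5] = A e1 + A e2
print(np.linalg.matrix_rank(A))  # 2: the image is the plane spanned by (1,1,1) and (2,3,4)
```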
Riemann-Stieltjes integral problem: $\int_{a} ^{b} g\, d\beta=\int_{a} ^{b} fg\, d\alpha$ Help, I've been stuck with this for hours; so far I've tried expanding the $\alpha$ integral using the definition of upper and lower integrals U and L, but it doesn't seem to be a good way. Let $\alpha,\ f,\ g\ :[a,b]\to\mathbb{R}$ be continuous, $\alpha$ non-decreasing and $f(x) \ge 0$. Let $\beta(x) = \int_a^x f\, d\alpha$. Show $\int_a^b g\ d\beta = \int_a^b gf\, d\alpha$.
Proof: With $I = \int_a^b gf \, d\alpha$, apply the mean value theorem for integrals (since $f$ is continuous and $\alpha$ is non-decreasing) to a Riemann-Stieltjes sum. There exists $\eta_j \in (x_{j-1},x_j)$ for all $j$ such that $$\beta(x_j) - \beta(x_{j-1}) = \int_{x_{j-1}}^{x_j} f \, d \alpha = f(\eta_j)(\alpha(x_j) - \alpha(x_{j-1})) ,$$ and $$\left|\sum_{j=1}^n g(\xi_j)(\beta(x_j) - \beta(x_{j-1})) - I\right| \\ = \left|\sum_{j=1}^n g(\xi_j)f(\eta_j)(\alpha(x_j) - \alpha(x_{j-1}))- I \right|\\ \leqslant \left|\sum_{j=1}^n g(\xi_j)f(\xi_j)(\alpha(x_j) - \alpha(x_{j-1}))- I \right| + \left|\sum_{j=1}^n g(\xi_j)(f(\eta_j)-f(\xi_j))(\alpha(x_j) - \alpha(x_{j-1}))\right|. $$ For all sufficiently fine partitions the first term on the right-hand side is smaller than $\epsilon/2$ since $I = \int_a^b gf d\alpha$ exists. The second term on the right-hand side is also smaller than $\epsilon/2$ with sufficiently fine partitions since $f$ is uniformly continuous on $[a,b]$, $g$ is bounded, and $\alpha$ is non-decreasing. We have $|g(x)| \leqslant M$ for $x \in [a,b]$ and $\delta > 0$ such that if $|x-y| < \delta$ then $|f(x) - f(y)| < \epsilon/(2M (\alpha(b) - \alpha(a)))$ for all $x,y \in [a,b]$. Hence, if the partition norm is less than $\delta$, then $$\left|\sum_{j=1}^n g(\xi_j)(f(\eta_j)-f(\xi_j))(\alpha(x_j) - \alpha(x_{j-1}))\right| \leqslant \sum_{j=1}^n |g(\xi_j)||f(\eta_j)-f(\xi_j)||\alpha(x_j) - \alpha(x_{j-1})| \\ \leqslant M(\alpha(b) - \alpha(a))\frac{\epsilon}{2M(\alpha(b)-\alpha(a))} \\ = \frac{\epsilon}{2}.$$ Thus, $$\int_a^b g d \beta = I = \int_a^b gf d \alpha.$$
Use of commutative property in calculating $4765 + (-896) + (896 + 477) + (-4765 + 23)$ The pre-calculus question reads --> state the various properties to easily and mentally compute: $ 4765+(-896)+ (896+477)+(-4765+23)$ It is easy for me to see how the regrouping allows for easy mental math, so I would have said the associative property. I also see how $-896 + 896$ is the additive inverse property. However, the instructor indicated that this is problem also uses the commutative property and I don't see how this would apply here. Can someone explain how this can be the commutative prop? Teacher just indicated that it is clear that those 3 properties were used.
4765+(−896)+(896+477)+(−4765+23) = 4765+ (−896)+ 896 + 477 + (−4765) + 23 [associative property -- we can re-group the additions in any combinations as long as we keep to addition of negatives, not subtraction] = 4765+ ((−896)+ 896) + 477 + (−4765) + 23 [associative property again] = 4765 + 0 + 477 + (−4765) + 23 [property of opposite or additive inverse of a real number] = 4765 + 477 + (−4765) + 23 [property of zero as additive identity] = 4765 + (−4765) + 477 + 23 [commutative property of addition, re-order a + b = b + a] = (4765 + (−4765)) + 477 + 23 [associative property again] = 0 + 477 + 23 [property of opposite or additive inverse again] = 477 + 23 [property of zero as additive identity again] = 400 + 70 + 7 + 20 + 3 [place value conventions of number system] = 400 + 70 + 20 + 7 + 3 [commutative property again] = 400 + 90 + 10 = 400 + 100 = 500 [addition facts and using properties of base ten system to "carry"] This all may look insanely obvious to you. Spend a little time helping some kids who are having problems with arithmetic in Grades 1 to 3 and you will learn a new respect for the complexities and subtleties of numbers which you blissfully skim over every day, because you were lucky enough to master all of these skills when you were young.
$\int_{0}^{\frac{\pi}{4}}\frac{\tan^2 x}{1+x^2}\text{d}x$ on 2015 MIT Integration Bee So one of the questions on the MIT Integration Bee has baffled me all day today: $$\int_{0}^{\frac{\pi}{4}}\frac{\tan^2 x}{1+x^2}\text{d}x$$ I have tried a variety of things to do this, starting with Integration By Parts Part 1 $$\frac{\tan x-x}{1+x^2}\bigg\rvert_{0}^{\frac{\pi}{4}}-\int_{0}^{\frac{\pi}{4}}\frac{-2x(\tan x -x)}{\left (1+x^2 \right )^2}\text{d}x$$ but that second integral is not promising, so then we try Integration By Parts Part 2 $$\tan^{-1} x\tan^2 x\bigg\rvert_{0}^{\frac{\pi}{4}}-\int_{0}^{\frac{\pi}{4}}2\tan^{-1} x\tan x\sec^2 x\text{d}x$$ which also does not seem promising. Trig Substitution $x=\tan\theta$ results in $$\int_{0}^{\tan^{-1}\frac{\pi}{4}}\tan^2 \left (\tan\theta\right )\text{d}\theta$$ which I think is too simple to do anything with (which may or may not be a valid reason for stopping here). I had some ideas following this, like power reducing $\tan^2 x=\frac{1-\cos 2x}{1+\cos 2x}$, which didn't spawn any new ideas. Then I thought maybe something could be done with differentiation under the integral, but I could not figure out how to incorporate that. I also considered something with symmetry, which availed no results. I'm also fairly certain no elementary indefinite integral exists. Now the answer MIT gave was $\frac{1}{3}$, but Wolfram Alpha gave $\approx .156503$. Note: the integral I gave was a simplified version of the original; here is the original in case someone can do something with it: $$\int_{0}^{\frac{\pi}{4}}\frac{1-x^2+x^4-x^6...}{\cos^2 x+\cos^4 x+\cos^6 x...}\text{d}x$$ My simplification is verifiably correct. I'd prefer no complex analysis; this is from a YouTube video, close to the end.
Let $f(x)=\tan x-x=\sum_{n=2}^\infty a_nx^{2n-1}$ where $a_n=\frac{(-1)^{n-1}2^{2n}(2^{2n}-1)B_{2n}}{(2n)!}$ and $B_{2n}$ is the $2n$-th Bernoulli number. Then $f'(x)=\tan^2 x=\sum_{n=2}^\infty (2n-1)a_nx^{2n-2}$. Hence, $$\int_0^{\pi/4}\frac{\tan^2x}{x^2+1}dx=\sum_{n=2}^\infty (2n-1)a_nb_n$$ where $$b_n=\int_0^{\pi/4}\frac{x^{2n-2}}{x^2+1}dx.$$
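A direct numerical check of the value quoted from Wolfram Alpha (this sketch is my addition; any quadrature rule would do, Simpson's rule is just a simple choice):

```python
import math

def integrand(x):
    return math.tan(x) ** 2 / (1 + x ** 2)

# composite Simpson's rule on [0, pi/4]
n = 1000                       # even number of subintervals
a, b = 0.0, math.pi / 4
h = (b - a) / n
s = integrand(a) + integrand(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * integrand(a + k * h)
print(s * h / 3)               # about 0.1565, matching the approximate value above
```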
Euler's formula proof with Calculus I was reading this source here and it provides a proof of Euler's formula using calculus. Although I technically understand the reasoning, I can't quite wrap my head around one particular step: if $f(x)= \cos(x)+i \sin(x)$, then $f'(x)=if(x) \implies f(x)=e^{ix}$. I can kind of get this, as differentiating $e^{ix}$ gives $ie^{ix}$, and I know that $e^x$ is the only non-constant function s.t. $f'(x)=f(x)$. Is there a clearer way to think about this step? Can we extend that property of $e^x$ to when it has more in the exponent? And why does it have that property anyways? Would there be a way of obtaining this step without simply knowing that $e^{ix}$ would work?
As a complement to the explanation of @Brevan Ellefsen. Have a look at the figure below. It explains geometrically that $e^{ix}$, imagined as a turning vector at unit speed, has a unitary "speed vector" turning at the same speed, orthogonal to it. This illustrates the fact that differentiation is equivalent to a $\pi/2$ shift, i.e., in geometrical terms, a $\pi/2$ rotation. If you analyse it componentwise: $$\begin{cases}\cos'(x)=-\sin(x)=\cos(x+\pi/2)\\ \sin'(x)=\cos(x)=\sin(x+\pi/2)\end{cases}$$ This boils down to saying that the derivative of $e^{ix}$ is $ie^{ix}$, which is interpreted as $e^{ix}$ "turned by" $\pi/2$.
Maximizing the logarithm of a rational function over a polytope Which optimization technique/algorithm can be used to solve such problems? I want to know the name of a technique because some problems I need to solve are more complex than this one. \begin{align} \max_{x_1,x_2}\quad \log_{2}(1+\dfrac{x_1}{x_2+0.1})+\log_{2}(1+\dfrac{x_2}{x_1+0.1})\\ s.t\quad\quad \log_{2}(1+\dfrac{x_2}{x_1+0.1})\geq0.1\\\quad \log_{2}(1+\dfrac{x_1}{x_2+0.1})\geq0.1 \\x_1\geq0,x_2\geq0\\ x_1\leq5,x_2\leq5 \end{align}
Take the following inequality constraint $$\log_{2} \left( 1 + \dfrac{x_j}{x_i+0.1} \right) \geq 0.1$$ and rewrite it as follows $$\dfrac{x_j}{x_i+0.1} \geq 2^{0.1} - 1$$ Given the nonnegativity constraint $x_i \geq 0$, we can multiply both sides by $x_i+0.1$, which yields the following linear inequality $$\left(2^{0.1} - 1\right) x_i - x_j \leq -\left(2^{0.1} - 1\right) 0.1$$ Thus, we conclude that the feasible region is a polygon, namely, a quadrilateral. (1) Parametrizing each of the $4$ line segments that form the quadrilateral, we can find the maximum of the objective function on the boundary of the feasible region. (2) Computing the gradient of the objective function and finding whether it vanishes in the interior of the feasible region and, if so, where it does vanish, we can find the maximum of the objective function in the interior of the feasible region (when it actually does exist). Taking the maximum of these two maxima, we obtain the desired maximum.
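If a numerical answer is wanted as well, here is a hedged sketch using scipy's general-purpose solver (my own addition; the starting point is arbitrary, and since the problem is not convex a local solver may need several starting points):

```python
import numpy as np
from scipy.optimize import minimize

def neg_objective(x):
    x1, x2 = x
    return -(np.log2(1 + x1 / (x2 + 0.1)) + np.log2(1 + x2 / (x1 + 0.1)))

t = 2 ** 0.1 - 1  # the log constraints rewritten as linear ones, as above
constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - t * (x[1] + 0.1)},
    {"type": "ineq", "fun": lambda x: x[1] - t * (x[0] + 0.1)},
]
result = minimize(neg_objective, x0=[1.0, 1.0],
                  bounds=[(0, 5), (0, 5)], constraints=constraints)
print(result.x, -result.fun)
```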
Show that a harmonic function is constant by using the maximum principle Let $u \in C^2(\mathbb{R}^n)$ be harmonic and $f(x)=\tan(u(x))-e^{|x|^2}$ bounded above. Show that $u$ is constant. (Hint: use the maximum principle) If $f$ is bounded above then there must exist a $\delta>0$ such that $u(x) \in (-\frac{\pi}{2} + \delta , \frac{\pi}{2} - \delta) $ (modulo multiples of $\pi$), otherwise $f$ is not bounded. But I can't show that $u$ is constant. Can someone give me a little hint?
A priori the existence of such $\delta$ is far from clear. What is clear is that $u(x) \in (-\frac{\pi}{2} + k \pi, \frac{\pi}{2} + k\pi)$ for some $k \in \mathbb{Z}$. Then we can apply Liouville's theorem, even in its simplest case of bounded $u$.
Find all non-negative integers satisfying the conditions Question 1. Find all non-negative integers a, b, c, d, e such that $$ a+b+c+d+e = 8$$ Question 2. Find all non-negative integers a, b, c, d such that $$ a+b+c+d = 8$$ Question 3. Find all non-negative integers a, b, c such that $$a+b+c = 8$$ Is there any method for this? I have no idea; I can just fix the limit. Thanks! I think I must calculate this with Maple or the Sage math program. I hope that someone can help. Thanks!
I will answer question 3. For the other questions you can follow a similar reasoning, but using Sage would probably be a better solution. 1) Assume $a,b,c>0$. Then by the stars and bars method you know that there are $\binom{8-1}{3-1}=21$ possible combinations of values between $1$ and $6$ forming your solutions: (1, 1, 6), (1, 2, 5), (1, 3, 4), (1, 4, 3), (1, 5, 2), (1, 6, 1), (2, 1, 5), (2, 2, 4), (2, 3, 3), (2, 4, 2), (2, 5, 1), (3, 1, 4), (3, 2, 3), (3, 3, 2), (3, 4, 1), (4, 1, 3), (4, 2, 2), (4, 3, 1), (5, 1, 2), (5, 2, 1), (6, 1, 1). 2) Assume that there is one (and only one) $0$ among $a, b, c$. Then you should find $\binom{3}{1}\binom{8-1}{2-1}=21$ solutions: (0, 1, 7), (0, 2, 6), (0, 3, 5), (0, 4, 4), (0, 5, 3), (0, 6, 2), (0, 7, 1); the same as before but exchanging the first two components; and the same as before but exchanging the first and last components. 3) Assume that $2$ variables out of $3$ are zero. Then you should look for $\binom{3}{2}\binom{8-1}{1-1}=3$ solutions: (8, 0, 0), (0, 8, 0), (0, 0, 8), and you are done. But as you can see, Sage would be much quicker! Edit: here is a Sage script for creating the list of all solutions in the case of 5 variables (looping over $j$, the number of variables set to $0$):

    sols = []
    for j in range(5):                         # j = number of variables equal to 0
        for p in Partitions(8, length=5 - j):  # partitions of 8 into 5 - j positive parts
            sols.extend(Permutations(list(p) + [0] * j).list())

Solutions are listed according to the number of variables that are set to $0$.
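For question 3, the full list can also be produced without Sage; this plain-Python enumeration (my addition) confirms the count $21+21+3=45$:

```python
from itertools import product

# All non-negative integer solutions of a + b + c = 8 (question 3).
solutions = [(a, b, c) for a, b, c in product(range(9), repeat=3) if a + b + c == 8]
print(len(solutions))  # 45 = C(10, 2), matching the case count 21 + 21 + 3 above
```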
A series involving a combination I want to find another idea for the sum of $\left(\begin{array}{c}n+3\\ 3\end{array}\right)$ from $n=1$ to $n=47$, i.e. $$\sum_{n=1}^{47}\left(\begin{array}{c}n+3\\ 3\end{array}\right)=?$$ I did it first by turning $\left(\begin{array}{c}n+3\\ 3\end{array}\right)$ into $\dfrac{(n+3)(n+2)(n+1)}{3!}=\dfrac16 (n^3+6n^2+11n+6)$ and finding the sum by separation, using $$\sum i=\dfrac{n(n+1)}{2}\\\sum i^2=\dfrac{n(n+1)(2n+1)}{6}\\\sum i^3=(\dfrac{n(n+1)}{2})^2$$ then I thought more and did as below ... I think there are more ideas for finding this summation. Please give a hint; thanks in advance.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{n = 1}^{47}{n + 3 \choose 3} & = -1 + \bracks{z^{47}}\sum_{k = 0}^{\infty}z^{k} \bracks{\sum_{n = 0}^{k}{n + 3 \choose n}} = -1 + \bracks{z^{47}} \sum_{n = 0}^{\infty}{-4 \choose n}\pars{-1}^{n}\sum_{k = n}^{\infty}z^{k} \\[5mm] & = -1 + \bracks{z^{47}} \sum_{n = 0}^{\infty}{-4 \choose n}\pars{-1}^{n}{z^{n} \over 1 - z} = -1 + \bracks{z^{47}}\bracks{{1 \over 1 - z} \sum_{n = 0}^{\infty}{-4 \choose n}\pars{-z}^{n}} \\[5mm] & = -1 + \bracks{z^{47}}\bracks{{1 \over 1 - z}\,\pars{1 - z}^{-4}} = -1 + \bracks{z^{47}}\sum_{k = 0}^{\infty}{-5 \choose k}\pars{-z}^{k} = -1 - {-5 \choose 47} \\[5mm] & = \bbx{\ds{-1 + {51 \choose 47}}} = -1 + {51 \times 50 \times 49 \times 48 \over 4 \times 3 \times 2} = \bbx{\ds{249899}} \end{align}
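A one-line check of the final value (my addition), using the hockey-stick identity $\sum_{n=0}^{47}\binom{n+3}{3}=\binom{51}{4}$:

```python
from math import comb  # Python 3.8+

print(sum(comb(n + 3, 3) for n in range(1, 48)))  # 249899
print(comb(51, 4) - 1)                            # same value via the hockey-stick identity
```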
Can we solve $2 \int_0^{\frac{\pi}{4}} \sin^2x dx$ using any property also? $$\int_{\frac{-\pi}{4}}^{\frac{\pi}{4}} \sin^2 x dx$$ My way of solving - $$2 \int_0^{\frac{\pi}{4}} \frac{1- \cos2x}{2} dx$$ On solving I get $\dfrac{\pi}4 - \dfrac 12$ as an answer. My main question is: can we use any other property here also? Something like - $$2 \int_0^{\frac{\pi}{4}} \sin^2x dx$$ $$2 \int_0^{\frac{\pi}{4}} \sin^2(\frac{\pi}{4}-x)dx$$ Then how to proceed?
The method you suggest is the best, but if you really want a different method, you could try integration by parts to get $$I=-\sin x\cos x+\int \cos^2 x dx$$ and then use $\cos^2 x =1-\sin^2 x$ to get an expression for $2I$. And so on...
Defining $R\times R$ as a ring? I feel a bit stupid, but I know that the normal definition of $R\times R$ as $R \times S = \{(r, s) : r \in R, s \in S\}$, under $(r, s) + (r', s')=(r+r',s+s')$ and $(r, s) \cdot (r', s')=(rr', ss')$ is a ring. But, can you define $R \times R$ otherwise as a ring? I'm trying to decide whether $R \times R$ has any non-zero nilpotent elements. Obviously it does not under the normal definition, but can you define $R \times R$ as a ring otherwise such that there are non-zero nilpotent elements?
Yes, you can, but it is not necessarily useful. There is a very precise meaning to the symbol $\times$: it is the product in the category you are considering. It is, in some sense, the smallest object that contains the whole structure of both of the objects you are taking the product of (more precisely, it satisfies a certain universal property that you can find here). If you define the ring structure in another way, you lose this property.
$\lim_{n\to\infty}\int_0^{\pi/2} \frac{\sin^n(x)}{1+x^2} \, dx$ $$\lim_{n\to\infty}\int_0^{\pi/2} \frac{\sin^n(x)}{1+x^2} \, dx$$ Is it right answer ? $$ \begin{cases} 0, & x \ne \pi/2\\[8pt] \dfrac{2\pi}{4 + \pi^2}, & x = \pi/2 \end{cases} $$
Consider the sequence of functions $f_n (x) = {\sin^n (x)}/({1+x^2})$ on $[0,2\pi]$. If $a \neq \pi/2, 3\pi/2$, then $|\sin(a)| < 1$, so as $n \to \infty$ we have $f_n (a) \to 0$. Thus, pointwise almost everywhere, as $n\to\infty$ we have $f_n \to 0$. Let $g(x) = 1/(1+x^2)$. Note that $|f_n (x)| \le |g(x)|$ and $g(x)$ is integrable on $[0,2\pi]$. Hence, by the dominated convergence theorem, it follows that $$\lim_{n\to\infty} \int_{0}^{\pi/2} \frac{\sin^n (x)}{1+x^2} \, dx = \lim_{n\to\infty} \int_{0}^{\pi/2} f_n (x) \, dx = \int_{0}^{\pi/2} \lim_{n\to\infty} f_n (x) \, dx = \int_{0}^{\pi/2} 0 \, dx = 0.$$
Prove the sum of two consecutive primes cannot be written as a product of two primes Suppose we have two consecutive primes, say $p$ and $p'$. Prove their sum cannot be written as a product of two primes, say $p_1$ and $p_2$. I wanted to prove this by contradiction. I started by thinking about the parity of the sum. Suppose $p=2$, then $p'=3$. But this sum cannot be written as a product of two primes $p_1$ and $p_2$. So we know that $p>2$; this implies that both $p$ and $p'$ are odd, so $p+p'$ is even. This means that either $p_1$ or $p_2$ must equal $2$. This results in: \begin{equation} p+p'=2p_1.\end{equation} Now, how can I finish the proof?
For $p=2$ it's immediate. For odd primes $p,p'$ you would have $p_1p_2$ even, so (say) $p_1=2$ and $p_2=\frac{p+p'}{2}$. But then $p_2$ is a prime that lies strictly between the two consecutive primes $p$ and $p'$. Contradiction.
How is a Generator Matrix for a (7, 4) Hamming code created? I see that a generator matrix is created with the following formulae: $$G = \left[I_{k}|P\right]$$ I do not understand what P is in this case. In my notes, I am told that in a (7, 4) Hamming code situation my $$G = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{pmatrix}$$ where P would be $$P=\begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$ How is this P generated?
1. We name the data bits d1, d2, d3, d4. 2. We name the parity bits p1, p2, p3. 3. We build a G matrix, or generator matrix; for the Hamming code (7,4) it might look like this: \begin{matrix}\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{0}\\\mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{1}\\\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{1}\\\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\\\end{matrix} 4. How do we make the G matrix? The data columns are d1 =\begin{matrix}1\\0\\0\\0\\\end{matrix} d2=\begin{matrix}0\\1\\0\\0\\\end{matrix} d3 =\begin{matrix}0\\0\\1\\0\\\end{matrix} d4 =\begin{matrix}0\\0\\0\\1\\\end{matrix} Pay attention to the bit 1: each of these columns contains exactly one 1, with zeros everywhere else. The positions of d1, d2, d3, d4 remain the same, but you may find some formulations that put the parity before the data, so the columns are arranged not as d1,d2,d3,d4,p1,p2,p3 but as p1,p2,p3,d1,d2,d3,d4. You may also find variations in which p3 and p1 are interchanged: p1 could be in the p3 position or vice versa. 5. The parity columns are p1 =\begin{matrix}0\\1\\1\\1\\\end{matrix} p2=\begin{matrix}1\\0\\1\\1\\\end{matrix} p3=\begin{matrix}1\\1\\0\\1\\\end{matrix} Pay attention to the bit 0: each of these columns contains exactly one 0, with ones everywhere else. So don't be confused if you sometimes find a p4 or parity 4; it would be p4 =\begin{matrix}1\\1\\1\\0\\\end{matrix} In both cases the single 1 (in a data column) or the single 0 (in a parity column) walks down the rows.
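To make the construction concrete, here is a small numpy sketch (my addition) that builds $G=[I_4\mid P]$ with the $P$ from the question and encodes one data word; arithmetic over GF(2) is just arithmetic mod 2:

```python
import numpy as np

P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])  # G = [I_4 | P]

message = np.array([1, 0, 1, 1])          # an arbitrary 4-bit data word
codeword = message @ G % 2                # reduce mod 2 to stay in GF(2)
print(codeword)                           # [1 0 1 1 0 0 0]
```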
Integrating factor: DE $ f(xy)ydx+g(xy)xdy=0 $ Let the DE be of the following form: $ f(xy)ydx+g(xy)xdy=0 $ If $xy(f(xy)-g(xy)) \neq 0$ show that $\mu(x,y)=\frac 1 {xy(f(xy)-g(xy))}$ is an integrating factor for this DE. EDIT: Please note it's $f(xy) , g(xy)$ and not $f(x,y), g(x,y)$ EDIT#2: I set $f(xy)x=P$ and $g(xy)y=Q$ $(\mu P)_y = (\mu Q)_x$ $f\mu + y f_y\mu+ yf\mu_y = g\mu + xf_x\mu+xf\mu_x$ dividing by $\mu$ $\frac {\mu_y} \mu yf +f_y y + f = \frac {\mu_x} \mu xg + g_x\mu + g$ $\frac {\mu_y} \mu = \frac {-x(f-g)-xy(f_y-g_y)}{xy(f-g)}=\frac{-1}y - \frac{f_y - g_y}{(f-g)}$ analogously we get $\frac {\mu_x}\mu=\frac{-1}x - \frac{f_x - g_x}{(f-g)}$ Now if I carry on I get to $x(fg_x-f_xg)=y(fg_y-f_yg)$ and I don't know what to do here, actually quite stuck, and wondering if I am missing an easier path. EDIT#3: While writing this I noticed that $f=f(xy) , g=g(xy)$. Could it be that I missed it? $\frac 1 y f_x=\frac 1 x f_y$ and the same for $g(xy)$. Expressing the last solution I got to $f(yg_y-xg_x)=g(yf_y-xf_x)$, making the solution trivial once we substitute the newly found $\frac 1 y f_x=\frac 1 x f_y$ and $\frac 1 y g_x=\frac 1 x g_y$. NB: $f_y = \frac{\partial}{\partial y} f(xy)$
For the DEQ of the form: $$\tag 1 M~dx + N~dy = y~ f(xy)~dx + x~ g(xy)~dy=0 $$ show that, for $M x - N y \neq 0$, the expression $\dfrac{1}{M x - N y}$ is an integrating factor, that is, $$\tag 2 \mu(x,y)=\dfrac 1 {xy(f(xy)-g(xy))}$$ Multiplying $(1)$ by $(2)$ gives $\tag 3 \dfrac{y f(xy)}{xy(f(xy)-g(xy))}~dx+\dfrac{x g(xy)}{xy(f(xy)-g(xy))}~dy = 0$ Now we have to show that $(3)$ is exact. Step 1: For the new $dx$-coefficient $M = f [x(f-g)]^{-1}$, we use the Product Rule and have $$\tag 4 M_y = f_y [x(f-g)]^{-1} -f[x(f-g)]^{-2}x(f_y-g_y) =\dfrac{f g_y-g f_y }{x(f-g)^2 }$$ Step 2: For the new $dy$-coefficient $N = g[y(f-g)]^{-1}$, we have $$\tag 5 N_x = g_x [y(f-g)]^{-1} -g[y(f-g)]^{-2}y(f_x-g_x)=\dfrac{f g_x -g f_x}{y(f-g)^2 }$$ Step 3: Find $M_y - N_x$ as $$M_y - N_x = \dfrac{f g_y-g f_y }{x(f-g)^2 } -\dfrac{f g_x -g f_x}{y(f-g)^2 } = \dfrac{f(y g_y - x g_x)+g(-yf_y+xf_x)}{xy (f-g)^2 }$$ The last result is identically zero because $$y \dfrac{\partial g(xy)}{\partial y} = x\dfrac{\partial g(xy)}{\partial x} \\ y \dfrac{\partial f(xy)}{\partial y} = x\dfrac{\partial f(xy)}{\partial x}$$ so $$\frac{f\cdot 0+g\cdot 0}{xy(f-g)^2}=0$$
Why is it necessary to talk about a pushforward measure? I understand that a random variable $X$ and a probability measure $P$ on a space $(\Omega,\mathcal{A})$ induce the distribution $P_X$ on a space $(\Omega',\mathcal{A}')$. But is there an example where it is important to differentiate between the distribution $P_X$ (the pushforward measure) and the probability measure $P$? Is there a theorem that deals with different distributions $(P_X)_n$ but only with one probability measure $P$? Or is this distinguishing between the two measure only formal?
Many characteristics of a random variable (the mean, variance, characteristic function, etc.) depend only on the distribution of that random variable. In some sense, writing down a triple like $(\Omega, \mathcal A, \mathbb P)$ is quite artificial. I was once working on some problem with a probabilist. When I mentioned $\omega \in \Omega$, he remarked that means I was doing not probability but measure theory.
Probability and independence If (Ω,Pr) is the probability space where |Ω| is prime, and Pr is the uniform probability distribution on Ω, how would you show that any two non-trivial events A and B cannot be independent, (they must be either positively or negatively correlated). I know that 2 events are independent if Pr(A∩B) = Pr(A)⋅Pr(B), positively correlated if Pr(A∩B) > Pr(A)⋅Pr(B) and negatively correlated if Pr(A∩B) < Pr(A)⋅Pr(B).
Since Pr is the uniform probability distribution on $\Omega$, $Pr(A) = |A|/|\Omega|$ and $Pr(B) = |B|/|\Omega|$, so $Pr(A) \cdot Pr(B) = \frac{|A|\cdot|B|}{|\Omega|^2}$. Suppose, for a contradiction, that A and B are independent. Writing $q = |A \cap B|$, we have $Pr(A \cap B) = q/|\Omega|$, so independence gives $\frac{|A|\cdot|B|}{|\Omega|^2} = \frac{q}{|\Omega|}$, that is, $q \cdot |\Omega|= |A| \cdot |B|$. Since $|\Omega|$ is prime, $|\Omega|$ divides either $|A|$ or $|B|$ by Euclid's Lemma. But A and B are non-trivial events, so $0 < |A| < |\Omega|$ and $0 < |B| < |\Omega|$, and neither can be divisible by $|\Omega|$. This is a contradiction.
Expected Value of $\max(X,3)$ I've seen a few of these questions of the form $\max(X,\text{constant})$ but they get more math heavy than I know mine should get. Could someone help me understand how to solve this? What we are given is that the distribution of our random variable X is {$1,2,3,4,5$} with probabilities {$1,\frac{39}{51},\frac{26}{50},\frac{13}{49},1$} respectively. Given this, I am told to find $E(\max(X,3))$. How would I do this? I know there shouldn't be any integrals involved.
Let us define $Z:=\max\left\{X,3\right\}.$ Now, first let us consider the values of $Z$ conditioned on the values of $X$. That is, it should be clear that $$[Z|X=x] = \left\{\begin{matrix} 3, & x = 1,\\ 3, & x =2,\\ 3, & x = 3,\\ 4, & x = 4,\\ 5, & x = 5. \end{matrix}\right.$$ Now, because $[Z|X=x]$ is always constant, its expectation is simply its constant value, i.e., $$[Z|X=2] = 3 \Rightarrow \mathbb{E}(Z|X=2) = 3.$$ Therefore, by conditioning we can simply compute the expectation in the normal way, i.e., $$\mathbb{E}(Z) = \sum_{x=1}^{5}\mathbb{E}(Z|X = x)\mathbb{P}(X=x) \\ = 3\left(\mathbb{P}(X=1) + \mathbb{P}(X=2) + \mathbb{P}(X=3)\right) + 4\mathbb{P}(X=4) + 5\mathbb{P}(X=5).$$ Here you can insert the PMF appropriately (it seems like in the question there is a problem with the PMF you gave as it doesn't sum to 1). ---edit--- A simpler method would be to use the following fact: for a discrete random variable $X$ and function $f$ $$\mathbb{E}[f(X)] = \sum_x f(x)\mathbb{P}(X=x).$$ In our case, $f(X)=\max \{X,3\}$, thus the solution is $$\mathbb{E}(\max \{X,3\}) =\sum_{x=1}^5\max\{x,3\}\mathbb{P}(X=x).$$ The technique in the original answer is more general though and will aid in getting the expectation of more complicated random variables, e.g., $\max\{X,Y\}$ where $X$ and $Y$ are both random variables.
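The computation is easy to script; this sketch (my addition) uses a placeholder uniform pmf, since, as noted above, the probabilities quoted in the question do not sum to 1:

```python
# E[max(X, 3)] for a discrete X on {1,...,5} with a placeholder (uniform) pmf.
pmf = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2}

expectation = sum(max(x, 3) * p for x, p in pmf.items())
print(expectation)  # 3*(0.2 + 0.2 + 0.2) + 4*0.2 + 5*0.2 = 3.6
```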
How can $\int \frac{2^x3^xdx}{9^x-4^x}$ be found? I am stuck on this question: $\int \frac{2^x3^xdx}{9^x-4^x}$. I have tried to solve it by writing it down as $\int \frac{2^x3^xdx}{(3^x-2^x)(3^x+2^x)}$ and making some substitution, but I still can't find the solution. Could you please suggest any hints or methods?
HINT: $$\dfrac{2^x3^x}{9^x-4^x}=\dfrac1{(3/2)^x-(2/3)^x}$$ Choose $(3/2)^x$ OR $(2/3)^x=u$ Utilize the fact $(3/2)^x\cdot(2/3)^x=1$
homomorphisms over direct sum I am trying to compute some homology groups and for that I need to figure out, what are all homomorphisms from $\mathbb Z \oplus \mathbb Z$ into $\mathbb Z$. I would really appreciate any effort.
$\mathbf Z \oplus \mathbf Z \cong \mathbf Z^2$ is a free abelian group. So you can choose the images of the standard basis vectors $e_1=(1,0)$ and $e_2=(0,1)$ of $\mathbf Z^2$ under a homomorphism in any abelian group in an arbitrary way. So any homomorphism from $\mathbf Z^2$ to $\mathbf Z$ is as follows: $$ \varphi( x_1 e_1+x_2 e_2)=x_1 k + x_2 m $$ where $k,m $ are fixed integers, the images of $e_1$ and $e_2,$ respectively.
Does there exist a triple of $distinct$ numbers $a,b,c$ such that $(a-b)^5 + (b-c)^5 + (c-a)^5 = 0$? Does there exist a triple of distinct numbers $a,b,c$ such that $$(a-b)^5 + (b-c)^5 + (c-a)^5 = 0$$ ? SOURCE : Inequalities (PDF) (Page Number 4 ; Question Number 220.1) I tried expanding the brackets and I ended up with this messy equation : $$-5 a^4 b + 5 a^4 c + 10 a^3 b^2 - 10 a^3 c^2 - 10 a^2 b^3 + 10 a^2 c^3 + 5 a b^4 - 5 a c^4 - 5 b^4 c + 10 b^3 c^2 - 10 b^2 c^3 + 5 b c^4 = 0$$ There is no hope of setting $a=b$ or $a=c$ as the question specifically asks for distinct numbers. So, at last I started collecting, grouping, factoring and manipulating the terms around but could find nothing. Wolfram|Alpha gives a solution as : $$c=\dfrac{1}{2}\big(\pm\sqrt{3}\sqrt{-(a-b)^2} + a+b\big)$$ How can this solution be found? Another thing I notice about the solution is that it contains a negative term inside the square root, so does that mean that the solution involves complex numbers and that there is no solution for $\big(a,b,c\big)\in \mathbb {R}$ ? I am very confused about how to continue. Can anyone provide a solution/hint on how to 'properly' solve this problem ? Thanks in Advance ! :)
Assume that $a,b,c$ are distinct. Let $a-b=x \neq 0,\; b-c=y \neq 0$. Note that $a-c=x+y \neq 0$ Note that the equation becomes $$x^5+y^5=(x+y)^5$$ So $$(x+y)^5-x^5-y^5=5xy(x^3+2x^2y+2xy^2+y^3)=0$$ Note that this becomes $$xy(x+y)(x^2+y^2+xy)=0 \iff x^2+xy+y^2=0$$ Using the quadratic formula, we can find $x,y$. Note that there are only non-real solutions.
Minimize $\big(3+2a^2\big)\big(3+2b^2\big)\big(3+2c^2\big)$ if $a+b+c=3$ and $(a,b,c) > 0$ Minimize $\big(3+2a^2\big)\big(3+2b^2\big)\big(3+2c^2\big)$ if $a+b+c=3$ and $(a,b,c) > 0$. I expanded the brackets and applied AM-GM on all of the eight terms to get : $$\big(3+2a^2\big)\big(3+2b^2\big)\big(3+2c^2\big) \geq 3\sqrt{3}abc$$ , which is horribly weak ! I can not use equality constraint whichever method I use. Thanks to Wolfram|Alpha, I know the answer is $125$ when $(a,b,c) \equiv (1,1,1).$ Any help would be appreciated. :)
Another way. Let $c=\max\{a,b,c\}$. Hence, $c\geq1$, $a+b\leq2$ and $$(2a^2+3)(2b^2+3)\geq\left(\frac{(a+b)^2}{2}+3\right)^2$$ because it's just $$(a-b)^2(12-a^2-6ab-b^2)\geq0$$ and $$12-a^2-6ab-b^2=12-(a+b)^2-4ab\geq12-2(a+b)^2\geq12-8>0.$$ Thus, it remains to prove that $$\left(\frac{(a+b)^2}{2}+3\right)^2(2c^2+3)\geq125$$ or $$\left(\frac{(3-c)^2}{2}+3\right)^2(2c^2+3)\geq125$$ or $$(c-1)^2(2c^4-20c^3+93c^2-190c+175)\geq0,$$ which is obvious. Done! I have other proofs, but I think the last is the best.
How to find vector answers over GF(2)? Alright, I am being asked to solve this problem: Problem: For the vectors v = [0, one, one] and u = [one, one, one] over GF(2), find v + u and v + u + u. I am stuck and I need some help. So what I assumed I need to do, is: v = [0, one, one] = 0, 1, 1 u = [one, one, one] = 1, 1, 1 So: v + u = 0,1,1 1,1,1 + -------- 1,0,0 Why? Because I am assuming that GF(2) is based on XOR. First question: is this correct? Second: v + u + u = 0,1,1 1,1,1 1,1,1 + -------- ?,?,? I have no idea what to do. Is GF(2) the same as modulo 2 or should I also work with XOR here? As you can see, I have just no clue what GF(2) essentially means / is and I can't use it correctly because of that. Could someone help me with this please? I want to understand it.
Addition modulo $2$ is the (possibly more) familiar XOR operation on bits; we have $1 + 1 = 0$, and $x + 0 = x = 0 + x$ for all $x \in GF(2)$. However, there's something that may not be obvious when it comes to expressions like $x + y + z$. For sums with more than two terms, by definition we must add one pair at a time; $x + y + z = (x + y) + z$, and this can be viewed as a sequence of binary (meaning, having two inputs) XOR operations if you like, where the exact parenthesization doesn't matter, since $+$ is associative. So for example, $1 + 1 + 1 = (1 + 1) + 1 = 0 + 1 = 1$ (and more generally, a sum is $1$ if and only if an odd number of inputs are $1$). This might be a bit unusual, depending on how you think about XOR with more than two inputs (see for example this answer on EE.SE). Evidently some define this as a sequence of binary XOR operations, as we do here, while others output $1$ if and only if exactly one input is $1$. I hadn't given much thought to the latter viewpoint, and am not sure whether these differing views of multi-input XOR came up in practice, or in what fields one definition might be more common than the other. So with only two inputs, addition modulo $2$ acts exactly as you'd expect, but you may need some (mental) recalibration with more than two inputs.
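A minimal sketch (my addition) of the two requested sums; it confirms the XOR intuition and shows that adding $u$ twice gives $v$ back, since every vector over GF(2) is its own additive inverse:

```python
# "one" = 1, so v = [0, 1, 1] and u = [1, 1, 1]; addition in GF(2) is addition mod 2 (XOR).
v = [0, 1, 1]
u = [1, 1, 1]

def add(a, b):
    return [(x + y) % 2 for x, y in zip(a, b)]  # equivalently x ^ y on bits

print(add(v, u))          # [1, 0, 0]
print(add(add(v, u), u))  # [0, 1, 1], i.e. v again, since u + u = 0
```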
Solution of the differential equation $y=x \cdot \frac{dy}{dx}+\frac{b}{\frac{dy}{dx}}$ Solve the following differential equation: $$y=x \cdot \frac{dy}{dx}+\frac{b}{\frac{dy}{dx}}$$ where $b$ is a real constant. I tried solving it by taking $\frac{dy}{dx}=t$, then solving quadratic equation for roots. But this method gives complicated differential equation once I write expression for roots. Could someone suggest a better approach.
I don't know if this helps. Let $m = \frac{dy}{dx}.$ Now we can write your differential equation $y=x \cdot \frac{dy}{dx}+\frac{b}{\frac{dy}{dx}}$ as $$y = xm + \frac b m $$ Solving this quadratic equation in $m$, you have $$2x\ m = y \pm \sqrt{y^2 - 4bx}\to 2x\frac{dy}{dx} = y\pm\sqrt{y^2 - 4bx} $$
How to prove this theorem about limit of integral? I am working on this problem. I know that I need to find $x_0$ that is the supremum of a function $f$. If $f$ is non negative continuous function on the interval $[a,b]$ then there exists $x_0$ such that $$\lim_{n\to \infty}\left( \int_a^b f(x)^n dx\right)^{1/n} = f(x_0).$$
Supposing the limit exists: Let $M = \max_{x\in[a,b]}f(x)$ and $m = \min_{x\in[a,b]}f(x)$. Then $$\left(\int_a^b f(x)^ndx\right)^{\frac{1}{n}}\le\left(\int_a^b M^ndx\right)^{\frac{1}{n}} = (b-a)^{\frac{1}{n}}M\xrightarrow{n\to\infty}M$$ and, similarly, $$\left(\int_a^b f(x)^ndx\right)^{\frac{1}{n}}\ge\left(\int_a^b m^ndx\right)^{\frac{1}{n}} = (b-a)^{\frac{1}{n}}m\xrightarrow{n\to\infty}m\ .$$ Therefore, $$\lim_{n\to\infty}\left(\int_a^b f(x)^ndx\right)^{\frac{1}{n}}\in[m,M]$$ and the result follows by continuity of the function $f$.
Derivative of $x^{x^x}$ I need to compute the derivative of $$z=x^{x^x}$$ I made it like this: $$y=x^x$$ so $$z=y^x$$ then $$\ln(z)=x\ln(y)$$ taking derivative from both sides: $$\frac{z'}{z}=\ln(y)+x\left(\frac{1}{y}\right)y'$$ I know that $$(x^x)'=x^x(1+\ln(x))$$ So : $$z'=z*(\ln(x^x)+x*\left(\frac{1}{x^x}\right)*(x^x)'))=x^{x^x}(x\ln(x)+x\ln(x)+x)=x^{x^x}\left(2x\ln(x)+x\right)$$ I know I did something wrong. Where is my Mistake?? Thanks alot!
I think your confusion stems from what you think $x^{x^x}$ means. $x^{x^x}$ means taking $x$ to the power of $x^x$ which is distinctly different from taking $x^x$ to the power of $x$. As an example, $3^{3^3} = 3^{(3^3)} = 3^{27}$. $3^{3^3} \neq (3^3)^3=27^3$. So $z \neq y^x$. Instead, $z = x^y$.
For a faithful, exact functor $F$, $M=0$ if $FM = 0$ Here is my attempted proof of the question in title: It is given that $R$ is a commutative, unital ring. Assume that $F$ is faithful and $FM = 0$, for some $R$-module $M$ and $\alpha:M\to N$. We have an obvious exact sequence: $$0\to\ker\alpha\to M\to \Im\alpha\to 0.$$ Since $F$ is exact, it preserves images and kernels. That is, $F(\ker\alpha) = \ker(F\alpha)$ and $F(\Im\alpha) = \Im(F\alpha).$ Hence, applying $F$ to the given exact sequence above yields: $$0\to\ker(F\alpha)\to 0=FM\to\Im(F\alpha)\to 0 .$$ This means that $\ker(F\alpha) = \Im(F\alpha) =0$, because the corresponding homomorphisms must be injective. Hence, $F\alpha = 0\Rightarrow \alpha = 0$. But this gives an exact sequence $\ker\alpha = 0\to M\to 0 =\Im\alpha$, which then forces $M$ to be zero. Is there a problem in my proof? If there isn't, is it unnecessarily long? Thanks in advance.
Suppose that $F$ is defined on $R$-modules and $R$ has a unit. $F(M)=0$ implies that $F(id_M)=F(2id_M)=0$, and since $F$ is faithful this gives $2id_M=id_M$; we deduce $2x=x$, hence $x=0$, for every $x\in M$. Alternatively: let $0_M:M\rightarrow M$ be the zero map; then $F(0_M)=F(Id_M)$, and this implies that $0_M=Id_M$ since $F$ is faithful.
Proof if $M$ is bounded, then so is its closure I want to prove that if $M\subseteq \mathbb R$ is bounded, then so is $\overline{M}$, or more precisely that if $s$ is the supremum of $M$, then it is the supremum of $\overline{M}$. I came up with a proof but I am not sure if it is correct: Let $s := \sup M$. If $M$ is closed we are done, so suppose $M$ is not closed. Suppose $s$ was not the supremum of $\overline{M}$. Then $\exists x\in \overline{M}$ with $x>s$. By the definition of the closure there exists a sequence $(a_n)_{n\in \mathbb N_0}$ in $M$ with $\lim_{n\to \infty} a_n = x$, i.e. for all $\epsilon> 0, |a_n -x| < \epsilon$ for all big enough $n\in \mathbb N_0$. But that means $$-\epsilon < a_n -x < \epsilon \Longleftrightarrow x < a_n + \epsilon \leq s+\epsilon$$ This is a contradiction to $x> s$.
I think you show that $s$ is an upper bound of $\overline{M}$, but you don't actually show that it's the least upper bound of $\overline{M}$. It should, however, be straightforward to show that for any upper bound $z$ of $\overline M$, $s \le z$. This should follow from the fact that any such $z$ will also be an upper bound of $M$. You also might want to think carefully about whether you really needed a proof by contradiction here. (See, for example, Tim Gowers' thoughts on it here.) I don't think you do, and dropping the attempt entirely but maintaining the rest of the argument still works. Consider this reframing of your proof (with some details you've supplied omitted): Suppose $M$ is not closed, and consider $x \in \overline{M} \cap M^c$. There must then exist a sequence $\{ a_n \} \subset M$ such that $a_n \to x$. This means that for any $\varepsilon > 0$, $x < s + \varepsilon$, which implies that $x\le s$. $x$ was arbitrary, so $s$ is an upper bound of $\overline M$.
Particle starts at $(0,-3)$ and moves clockwise around origin on graph $x^2+y^2=9$, find parametric equation Question: a particle starts at $(0,-3)$ and moves clockwise around the origin on the graph $x^2+y^2=9$, completing one revolution in $9$ seconds; find a parametric equation in terms of $t$. What I've done so far: I first thought that the graph ought to be $x^2+y^2=9$, so I said that $x=\frac{9}{2}\cos{t}$ and $y=\frac{9}{2}\sin{t}$, but the particle on this graph travels CCW, so I changed it to $x=\frac{9}{2}\cos{t}$ and $y=\frac{9}{2}\sin{(-t)}$; but then I found out that when I plug in $t=0$, I do not get $-3$. How do I phase shift this parametric equation so that it satisfies the fact that the particle starts at $(0,-3)$?
Let's start with the standard clockwise parametrization, $$x=3\cos (t)$$ $$y=-3\sin (t)$$ With $t \in [0,2\pi]$. This starts at $(3,0)$ and then moves clockwise. We want it to start at $(0,-3)$. If we rotate $(3,0)$ clockwise $90$ degrees we get $(0,-3)$. So shift $\frac{\pi}{2}$ radians clockwise. Let, $$x=3\cos (t+\frac{\pi}{2})$$ $$y=-3\sin (t+\frac{\pi}{2})$$ Now we need to deal with the fact that it should take $9$ seconds to do a full revolution. Right now it takes $2\pi$ seconds because the period of both is $2\pi$. If the new period is $9$ then $2\pi$ divided by the coefficient $a$ of $t$ should be $9$. $$\frac{2\pi}{a}=9$$ $$a=\frac{2\pi}{9}$$ So, $$x=3\cos (\frac{2\pi}{9}t+\frac{\pi}{2})$$ $$y=-3\sin (\frac{2\pi}{9}t+\frac{\pi}{2})$$
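A quick numerical check of the final parametrization (my addition):

```python
import math

def position(t):
    x = 3 * math.cos(2 * math.pi * t / 9 + math.pi / 2)
    y = -3 * math.sin(2 * math.pi * t / 9 + math.pi / 2)
    return round(x, 9) + 0.0, round(y, 9) + 0.0  # "+ 0.0" turns -0.0 into 0.0

print(position(0))     # (0.0, -3.0): starts at (0, -3)
print(position(2.25))  # (-3.0, 0.0): a quarter turn later it has moved clockwise
print(position(9))     # (0.0, -3.0): one full revolution takes 9 seconds
```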
How do I prove this assumption f: X → Y is a function and that f is surjective? Prove that for all $B ⊆ Y$, we have $f (f^{-1}(B)) = B$? Where can I start this problem? I know that f is surjective if ∀ b ∈ Y ∃ a ∈ X such that f(a) = b. How do you show that B is a subset of Y to get the inverse image or pre-image of B, that is $f (f^{-1}(B)) = B$? I'm acquainted with the definition of surjectivity, which states that for every element y of Y (the codomain) there is some element x of X (the domain) with f(x) = y. I sincerely appreciate your guidance.
First, we don't need to show that $B$ is a subset of $Y$, since that's given. Now, to prove two sets are equal, we need to prove that either set is a subset of the other. Suppose that $x\in f(f^{-1}(B))$. Then there exists $y\in f^{-1}(B)$ such that $f(y)=x$. Further, since $y\in f^{-1}(B)$, there exists $z\in B$ such that $f(y)=z$. Then we have $x=f(y)=z\in B$, so $x\in B$. Note that we didn't need surjectivity for this part. Now suppose $x\in B$. Since $f$ is surjective, there exists $y\in X$ such that $f(y)=x$. Then we have that $y\in f^{-1}(B)$, and so $x\in f(f^{-1}(B))$. Therefore, the two sets are equal.
How and why are these two similar looking functions different? I have two functions 1) $y=x^{x^{x}}$ 2) $y=(x^x)^x$ These two functions seem the same to me, and I see the difference as a mere matter of writing style, but when I graph them using an online graph plotter they have different curves; also, when I find their derivatives using logarithmic differentiation I get different results. For 1 and 2 I got $dy/dx$ as $x^{x^{x}}[x^x\cdot\ln(x)[1+\ln(x)]+x^{(x-1)}]$ and $(x^x)^x[x[2\ln(x)+1]]$ respectively. So my question is: are these two functions really different? If yes, how? If no, how can you justify their similar looking expressions?
Note that the exponent of $y=x^{x^x}$ can be represented as $k=x^x$ so $y=x^k$. For $y=(x^x)^x$ the exponent inside of the bracket is multiplied with the outside exponent so $k=x\cdot x= x^2$ then $y=x^k=x^{x^2}$
Homotopy groups of spectra A spectrum $\mathbf{E}$ consists of a sequence of pointed spaces $\{E_n \mid n \in \mathbb{Z}\}$ (CW-complexes or compactly generated spaces) together with a sequence of maps $$ \sigma_n: \Sigma E_n \rightarrow E_{n+1}, $$ where $\Sigma$ denotes the reduced suspension. The homotopy groups of a spectrum for all $k \in \mathbb{Z}$ are defined by $$ \pi_k(\mathbf{E})= \operatorname{colim}_{n\rightarrow \infty} \pi_{n+k}(E_n), $$ where the inductive system is given by $$ \pi_{n+k}(E_n) \xrightarrow{\Sigma_*} \pi_{n+k+1} (\Sigma E_n) \xrightarrow{(\sigma_n)_*} \pi_{n+k+1} (E_{n+1}). $$ Now an $\Omega$-spectrum is a spectrum where the adjoints $E_n \rightarrow \Omega E_{n+1}$ of the structure maps are weak homotopy equivalences. Here $\Omega$ denotes the space of based loops with the compact-open topology, which is right adjoint to the reduced suspension. My question concerns the homotopy groups of $\Omega$-spectra. They are supposed to be $\pi_k(\mathbf{E})= \pi_k(E_0)$ for $k \geq 0$ and $\pi_k(\mathbf{E})=\pi_0(E_{-k})$ for $k \leq 0$. It makes sense that this is true, since for any $k \ge 0$ there are isomorphisms $\pi_{n+k}(E_n)\cong \pi_{n+k-1}(\Omega E_n) \cong \pi_{n+k-1}(E_{n-1})\cong \dots \cong \pi_k(E_0)$. A similar argument can be made for $k \leq 0$. But this is no proof, or is it? The isomorphisms somehow need to be compatible with the colimit. Thank you.
You could say that the reason to consider $\Omega$-spectra is precisely that you can calculate the homotopy groups directly, without having to pass to a colimit. For example, consider the sphere (pre)spectrum $\mathbb S := \Sigma^\infty S^0$. It's the spectrum whose $n^{\text{th}}$ space (for $n\ge 0$) is $S^n$ and whose $n^{\text{th}}$ structure map is the homeomorphism $\Sigma S^n\to S^{n+1}$. (For negative $n$, the spaces and maps are trivial.) The homotopy groups of $\mathbb S$ are the stable homotopy groups of the spheres $\pi_n^S := \operatorname{colim} \pi_{n+k}(S^n)$. The colimit is essential: $\pi_n^S \ne \pi_n(S^0)$, because $\mathbb S$ isn't an $\Omega$-spectrum: $S^n$ is not weakly equivalent to $\Omega S^{n+1}$. If $E$ is an $\Omega$-spectrum, however, you have a weak equivalence $E_m\cong\Omega^{n-m} E_n$ whenever $m\le n$. By definition, $\pi_{n+k}(E_n) = \pi_k(\Omega^n E_n)$, and since $\Omega^n E_n$ and $E_0$ are weakly equivalent, then $\pi_k(\Omega^n E_n)\cong\pi_k(E_0)$. Thus, $\pi_{n+k}(E_n)\cong\pi_k(E_0)$ for all $n$, so the colimit for determining $\pi_k(E)$ is the colimit of a constant system in $\pi_k(E_0)$ — the maps in the colimit are the identity, because they're $\pi_k$ applied to the weak equivalences $E_m\stackrel\simeq\to \Omega E_{m-1}$. Thus, for $\Omega$-spectra, you can use $\pi_k(E) = \pi_k(E_0)$. (The case for negative $k$ is similar.)
Randomly swapping two balls in two urns with 3 balls each. In total $3$ are black and 3 are white. Is this process a Markov chain? Three white and three black balls are distributed in two urns in such a way that each contains three balls. We say that the system is in state i, i = 0, 1, 2, 3, if the first urn contains i white balls. At each step, we draw one ball from each urn and place the ball drawn from the first urn into the second, and conversely with the ball from the second urn. Let $X_n$ denote the state of the system after nth step. Now how to prove that $(X_n=0,1,2,...)$is a markov chain and how to calculate its transition probability matrix. Solution:If at the initial stage both the urns have three balls each and we draw one ball from each urn and place them into urn different from the urn from which it is drawn. So after nth step state of the system will be 3 and it will remain it forever. So this is not a markov chain. I also want to understand the meaning of bold line. If I am wrong, explain me why and how I am wrong and what is the transition matrix of this markov chain. Would any one answer this question?
Define $F_n$ to be the indicator random variable with value $1$ if at the $n$th step a white ball is chosen from the first urn, and $0$ otherwise. Similarly define an indicator random variable $S_n$ for the second urn. Now, to check the Markov property we need to check P($X_n$= j|($X_{n-1}$,$X_{n-2}$,..$X_0$)=($i_{n-1}$,$i_{n-2}$,...$i_0$))=P($X_n$= j|$X_{n-1}$=$i_{n-1}$). First observe that the conditional range of $X_n$ given $X_{n-1}$,$X_{n-2}$,...,$X_0$ is {$X_{n-1}-1$, $X_{n-1}$, $X_{n-1}+1$}. Hence it is enough to check the cases below. If j = $i_{n-1}+1$, rewrite this probability as P($F_n$=0,$S_n$=1|($X_{n-1}$,$X_{n-2}$,..$X_0$)=($i_{n-1}$,$i_{n-2}$,...$i_0$)). If j= $i_{n-1} -1$, rewrite this probability as P($F_n$=1,$S_n$=0|($X_{n-1}$,$X_{n-2}$,..$X_0$)=($i_{n-1}$,$i_{n-2}$,...$i_0$)). If j = $i_{n-1}$, rewrite this probability as P($F_n$=1,$S_n$=1|($X_{n-1}$,$X_{n-2}$,..$X_0$)=($i_{n-1}$,$i_{n-2}$,...$i_0$)) + P($F_n$=0,$S_n$=0|($X_{n-1}$,$X_{n-2}$,..$X_0$)=($i_{n-1}$,$i_{n-2}$,...$i_0$)). Now, observe that $F_n$ and $S_n$ depend only on $X_{n-1}$ and not on $X_{n-2}$,$X_{n-3}$,...$X_0$. So from here you can conclude P($X_n$= j|($X_{n-1}$,$X_{n-2}$,..$X_0$)=($i_{n-1}$,$i_{n-2}$,...$i_0$))= P($X_n$= j|$X_{n-1}$=$i_{n-1}$). $\textbf{NOTE:}$ This is a long-hand mathematical approach to arguing the Markov property of $X_n$. If one wants to keep life simple, one can argue in one line that the $n$th draw depends only on $X_{n-1}$, by giving appropriate arguments.
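For completeness, the transition matrix itself (which the answer does not write out) follows from the same reasoning: from state $i$ the chain moves up when a black ball leaves urn 1 and a white ball leaves urn 2, and down in the opposite case. The sketch below is my own derivation under that reading of the problem:

```python
from fractions import Fraction

# State i = number of white balls in urn 1 (urn 1 holds i white, 3-i black;
# urn 2 holds 3-i white, i black); one ball is drawn from each urn and swapped.
def transition_row(i):
    up   = Fraction(3 - i, 3) * Fraction(3 - i, 3)  # black from urn 1, white from urn 2
    down = Fraction(i, 3) * Fraction(i, 3)          # white from urn 1, black from urn 2
    stay = 1 - up - down
    return [down if j == i - 1 else stay if j == i else up if j == i + 1 else Fraction(0)
            for j in range(4)]

P = [transition_row(i) for i in range(4)]
for row in P:
    print([str(p) for p in row])
# rows: [0, 1, 0, 0], [1/9, 4/9, 4/9, 0], [0, 4/9, 4/9, 1/9], [0, 0, 1, 0]
```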
$\lim_{x\to2}\frac{\sqrt{1+\sqrt{x+2}-\sqrt3}}{x-2}$ $\lim_{x\to2}\frac{\sqrt{1+\sqrt{x+2}-\sqrt3}}{x-2}$ My try: $\lim_{x\to2}\frac{\sqrt{1+\sqrt{x+2}-\sqrt3}}{x-2}=$ $\lim_{x\to2}\frac{\sqrt{1+\sqrt{x+2}-\sqrt3}}{x-2}\times\frac{\sqrt{1+\sqrt{x+2}+\sqrt3}}{\sqrt{1+\sqrt{x+2}+\sqrt3}}=$ $\lim_{x\to2}\frac{\sqrt{x+2\sqrt{x+2}}}{(x-2)\sqrt{1+\sqrt{x+2}+\sqrt3}}$ I am stuck here.
$$\lim\limits_{x\to 2^{\pm}}\frac{\sqrt{1+\sqrt{x+2}-\sqrt{3}}}{x-2} = \left[\frac{\sqrt{3-\sqrt{3}}}{0^{\pm}}\right]=\pm\infty$$ As you can see, the left- and right-side limits are different, so the limit $\lim\limits_{x\to 2}\frac{\sqrt{1+\sqrt{x+2}-\sqrt{3}}}{x-2}$ does not exists.
A problem with four circles and a square I was doing random things when I noticed something which seemed strange to me. What I did was the following. * *Take two points $A$ and $C$ of the plane. We denote $\ell$ the distance $AC$. *Draw a circle $\mathscr C_0$ of center $A$ and radius $r_0$. *Then define $M_0$ to be a point of $\mathscr C_0$, and $M_1$ the middle of $[M_0C]$. *Finally, define $M_2$ and $M_3$ such that $M_0M_1M_2M_3$ is square. It should look like something like this: Then we are interested in the trajectory of $M_1$, $M_2$ and $M_3$ when $M_0$ move along the circle $\mathscr C_0$. We notice that every $M_i$ seems to move on a circle $\mathscr C_i$ of a unique radius $r_i$. But I don't get why this would be true, which leads us to the first questions: Question 1. Are all trajectories $\mathscr C_i$ circles? Question 2. What are the radius $r_i$ in terms of $\ell$ and $r_0$? Question 3. Where are located the centers of those circles? What I noticed is that if the radius $r_0$ changes, we still get three other circles, and they are all concentric: And this is the case even when $r_0\geqslant \ell$: It looks like this if $r_0$ varies continuously: What I did to try to find the centers (since all three circles seems to have the same three centers for different radius $r_0$) is to see what it would look like with $r_0$ really small: So the centers seems to be: * *the middle point $A_1$ of $[AC]$, *the two points $A_2$, $A_3$ such that $AA_1A_2A_3$ is a square.
Consider that vector ${CM_0} = {CA}+{AM_0} $, a combination of a fixed vector and a rotating constant-length vector. Then: ${CM_1} = \frac 12{CA}+\frac 12{AM_0} $ ${CM_2} = {CM_1} +{CM_1}^\perp = \frac 12({CA}+ {CA}^\perp) +\frac 12({AM_0} +{AM_0}^\perp)$ ${CM_3} = {CM_0} +{CM_1}^\perp = ({CA} + \frac 12 {CA}^\perp) + ({AM_0} +\frac 12{AM_0}^\perp)$ So at the other points of the square we have in each case a fixed vector to the centre of the circle and a rotating constant-length vector to a point on the circle. Due to the combination of perpendicular vectors we should see $M_2$ and $M_3$ with shifted phase by $45°$ and about $26°$ respectively, which is borne out by your graphics.
Are all functions sets? I am studying Zermelo-Fraenkel set theory from Jech's Set Theory book. I understood that functions are sets, but the book uses the phrase "If a class F is a function" in the Axiom Schema of Replacement. Why does it call F a class instead of a set?
You're right that technically that's a bad phrasing. It's informal language to help motivate the axiom, which is a bit technical. More precisely, Replacement says: If $F$ is a class such that $(i)$ each element of $F$ is an ordered pair, and $(ii)$ for each $(x, y), (x, y')\in F$ we have $y=y'$, then [the rest of the axiom]. A class satisfying $(i)$ and $(ii)$ is called a class function; basically, Replacement says "any class function whose domain is a set, is a set" (actually it says that the range is a set, but it's easy to see that these statements are equivalent).
Odd / Even integrals My textbook doesn't really have an explanation for this, so could someone explain this to me. If f(x) is even, then what can we say about: $$\int_{-2}^{2} f(x)dx$$ If f(x) is odd, then what can we say about $$\int_{-2}^{2} f(x)dx$$ I guessed they both are zero? For the first one, if it's even, wouldn't this be the same as $$\int_{a}^{a} f(x)dx = 0$$ Now if it's odd, f(-x) = -f(x). Would the FTOC make this zero as well?
Start by splitting the integral into two pieces, the part over negatives values of $x$ and the part over positive values. $$ \int_{-2}^{2} f(x)\,dx = \int_{-2}^{0} f(x)\,dx + \int_{0}^{2} f(x)\,dx$$ From here you can apply the definition of an even or odd function
Question about complement events and probability I have a question about this specific problem: A communications network has a built-in safeguard system against failures. In this system, if line I fails, it is bypassed and line II is used. If line II also fails, it is bypassed and line III is used. The probability of failure of any one of these three lines is 0.01, and the failures of these lines are independent events. What is the probability that this system of three lines does not completely fail? My intuition, which is wrong for this problem, tells me to do (0.99)^3, since that is the probability of the network's safeguard not failing. This is however wrong, and I am supposed to do 1-(0.01)^3. I guess I am wondering what the difference is and if anybody can give me any intuition into understanding how these two solutions are different.
The probability that all three lines fail is $0.01^3$. The complement of this is that at least one of the three lines works, which is $1$ minus this probability. What you calculated was the probability that all three lines worked. This isn't quite what you needed. The system can work with three lines working, two lines, or just one: $$1 - 0.01^3 = 0.99^3 + 3 \cdot 0.99^2 \cdot 0.01 + 3 \cdot 0.99 \cdot 0.01^2.$$ So in other words you got the first term on the right side, but not the second and third term (which are the probabilities that exactly two and exactly one line worked, respectively).
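A quick numerical comparison makes the difference visible: $$1-0.01^3=0.999999,\qquad 0.99^3=0.970299,$$ and the gap $0.0297$ is exactly the probability that one or two (but not all three) lines fail.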
How do I show that $\frac{1}{x^{2/x}} \to 1$ as $x\to\infty$? From looking at the graph, it looks like this function converges to 1 as $x\to\infty$. But with mathematical rigours, how would I show this?
$$\lim _{ x\rightarrow \infty }{ x }^{ -\frac { 2 }{ x } } =\lim _{ x\rightarrow \infty }{ e }^{ -\frac { 2\ln x }{ x } }=e^{ -\lim _{ x\rightarrow \infty }\frac { 2\ln x }{ x } }\overset { \text{L'Hospital} }{ = } e^{ -\lim _{ x\rightarrow \infty }\frac { 2/x }{ 1 } }=e^{ 0 }=1$$
differential linear equation of order one $(2xy+x^2+x^4)\,dx-(1+x^2)\,dy=0$ I have no idea how to solve it. Should be linear equation of order one since I am passing through this chapter, but I can't put into the form of $$y'+P(x)y=Q(x)$$ Here is the equation: $$(2xy+x^2+x^4)\,dx-(1+x^2)\,dy=0$$ It is not exact since partial derivatives are not equal. Any help would be appreciated.
To put it into the form you requested: $$ -(1+x^2) \,dy + (2xy + x^2 + x^4) \,dx = 0 \implies \frac{dy}{dx} - \frac{2xy + x^2 + x^4}{1+x^2} = 0 \\ \implies \frac{dy}{dx} + \left(-\frac{2x}{1+x^2}\right) y = \frac{x^2 + x^4}{1+x^2}\\ \implies \frac{dy}{dx} + \left(-\frac{2x}{1+x^2}\right) y = x^2 $$
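For completeness, one way to finish from here with an integrating factor (same notation): $$\mu(x)=e^{\int -\frac{2x}{1+x^2}\,dx}=\frac{1}{1+x^2},\qquad \left(\frac{y}{1+x^2}\right)'=\frac{x^2}{1+x^2}=1-\frac{1}{1+x^2},$$ so $\dfrac{y}{1+x^2}=x-\arctan x+C$, i.e. $y=(1+x^2)\left(x-\arctan x+C\right)$.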
Linear system: 3 variables, 2 equations w/o all variables The question asks to find the direction numbers for the line of intersection of the planes: $$ x + y + z = 1 , x + z =0 $$ I'm comfortable solving these sorts of linear systems when both equations include each variable. However here I'm slightly stuck. If I parameterize $y = t$ for instance I have: $$x = -z$$ $$-z + t + z = 1$$ $$t=1$$ From this I would guess that the directional numbers would be $(-1, 1, 1)$ (or $(1,1,-1)$ depending on if you substitute for $x$ or $z$). However the book lists that it is: $(1,0,-1)$. How did they get $0$ for $y$?
Build the augmented matrix $$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 0\end{array}\right]$$ and then use Gauss-Jordan elimination to obtain the RREF of the augmented matrix $$\left[\begin{array}{ccc|c} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\end{array}\right]$$ Hence, the solution space is the line parametrized by $$\begin{bmatrix} x\\ y\\ z\end{bmatrix} = \begin{bmatrix} 0\\ 1\\ 0\end{bmatrix} + t \begin{bmatrix} -1\\ 0\\ 1\end{bmatrix}$$
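As a quick check of the parametrization: the point $(x,y,z)=(-t,\,1,\,t)$ satisfies $x+z=0$ and $x+y+z=1$ for every $t$, so $y$ stays equal to $1$ along the whole line. That is why the direction numbers have a $0$ in the $y$-slot, and $(-1,0,1)$ is the book's $(1,0,-1)$ up to an irrelevant sign.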
denseness in Sobolev spaces is the follwing true: Let $I\subseteq \mathbb{R}^n$ open, bounded. Then $C^2(\overline{I},\mathbb{R}^n)$ is dense in $W^{1,p}(I,\mathbb{R}^n)$ with respect to the $W^{1,p}$-norm? If yes, do you have a reference? Thanks in advance.
This is theorem 8.7 in Brezis: Functional analysis, Sobolev spaces and partial differential equations. Brezis starts with Sobolev spaces on intervals, which makes it quite accessible for a start.
Very difficult calculus related rates question I was looking at a math problem from a few years ago that I could not solve. I was wondering if anyone knows where to even begin. I have the answer along with the question, however, I do not know how to arrive at this answer - Here is the question: The answer is:
To simplify this problem, we can change the perspective by noting that climbing a mountain with decreasing velocity is equivalent to climbing with constant velocity a mountain that grows larger as we rise up. In particular, based on the data of the problem, we can see our progressively enlarging mountain as a cylinder: in fact, since at any height $z$ the corresponding radius of the cone is $r_0\,(h-z)/h$, its circumference is $2\,\pi\, r_0(h-z)/h$ and the velocity is $v_0\,(h-z)/h$, the time needed by a climber to cover the circumference is $2\,\pi \, r_0/v_0$, i.e. it is independent of the height $z$. In other words, we can simplify this problem by imagining a man climbing a cylindrical mountain having radius $r_0$ with constant velocity $v_0$. The problem therefore reduces to that of calculating the position of our original target point on such a cylinder. To do this, we can note that if we call $L$ the slant of the initial cone and $x$ the distance from its top at a given instant of our ascent, the instantaneous velocity is $v_0 \cdot \frac{x}{L}$. So, at any instant of our ascent, to cover an infinitesimal distance $dx$ we need a time equal to $\frac{L}{v_0\,x} dx$. Integrating in the range between $L$ and $L/2$ (i.e., from the beginning of the ascent to the height corresponding to our target point) and multiplying by $v_0$ to get the distance covered on the cylinder, this leads to a distance of $$\displaystyle \int_{L/2}^L \frac{L}{x} dx=L \left[\log L - \log (L/2) \right]=L \log 2$$ Therefore, to reach the original target point climbing on the cylinder, unwrapping the lateral surface of the cylinder onto a plane to see it as a rectangle, we have to cover a distance equal to the hypotenuse of a right triangle whose legs are $L \log 2$ and $ \pi \, r_0$. This directly yields a distance of $$\sqrt {(L \cdot \log 2)^2 + (\pi r_0)^2}$$ which divided by the velocity $v_0$ (recall that climbing on our cylinder we have assumed the velocity to be constant) gives a time $T$ equal to $$T= \sqrt {\frac {(L \cdot \log 2)^2}{v_0^2} + \frac{(\pi r_0)^2}{v_0^2}}$$ Because the slant $L$ of the initial cone is equal to $\sqrt{h^2 + r_0^2}$, we obtain $$T=\sqrt {\frac {(h^2 + r_0^2) \cdot \log^2 2}{v_0^2} + \frac{(\pi r_0)^2}{v_0^2}}$$ and substituting $h=4$, $r_0=100$, and $v_0=2$, we obtain for $T$ the value $$\sqrt {\frac {(4^2 + 100^2) \cdot \log^2 2}{2^2} + \frac{(100 \pi)^2}{2^2}} \\ = \sqrt {2504 \cdot \log^2 2 + 2500 \pi^2} $$
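Numerically (in whatever time units the problem uses) this works out to $$T=\sqrt{2504\cdot\log^2 2+2500\,\pi^2}\approx\sqrt{1203.1+24674.0}\approx 160.9.$$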
Show that no two of the spaces $(0, 1), (0, 1], [0, 1]$ are homeomorphic Show that no two of the spaces $(0, 1), (0, 1], [0, 1]$ are homeomorphic My Attempted Proof We have $(0, 1) \subset (0, 1] \subset [0, 1]$. We also have $(0, 1] = (0, 1) \cup \{1\}$ and thus $|(0, 1)| \neq |(0, 1]|$ and thus no bijective mapping $f : (0, 1) \to (0, 1]$ exists, hence $(0,1)$ and $(0, 1]$ are not homeomorphic regardless of the topology defined on them. The proof for the other cases are similar. $\ \square$ Is my proof correct? I've seen proofs which form a contradiction through connectedness. Is there any error in my arguments?
Use the commonly used method of removing points from your space. Removing any point from $ (0, 1) $ disconnects it, while you can remove at most $ 1 $ point from $ (0, 1] $ without disconnecting it, and at most $ 2 $ from $ [0, 1] $. Since homeomorphism preserves connected components, the result follows.
How would you calculate the Fractal Dimension of this asymmetric Cantor Set? This construction removes the second quarter after each iteration. Picture from Wikipedia: Wikipedia gives the Hausdorff Dimension as $\log_2 \phi$ where $\phi$ is the Golden Ratio. Intuitively, the dimension tells me that this set, scaled down by a factor of two, will "fit inside of itself" 1.618... times. My intuition is leaning on the definition of the "self-similarity" dimension though, which I realize is not the same as the Hausdorff Dimension given by Wikipedia, but I also know that for simple fractal sets like this, the Hausdorff and self-similarity dimensions usually coincide. In my analysis class last year, we talked briefly about the definition of the Hausdorff-measure and Hausdorff-dimension, but I've found it very difficult to locate examples of people actually showing how to calculate this dimension for any but the most basic objects.
You may compute the similarity dimension as follows. The set is composed of two copies of itself - one scaled by the factor $1/2$ and the other scaled by the factor $1/4$. Thus, the similarity dimension is the unique, positive number $s$ satisfying $1/2^s + 1/4^s = 1$. Since $1/4=1/2^2$, this yields $$\frac{1}{2^s} + \left(\frac{1}{2^s}\right)^2 = 1.$$ This is a quadratic equation in the expression $1/2^s$, which can be solved to obtain $$\frac{1}{2^s} = \frac{-1+\sqrt{5}}{2} = \frac{1}{\varphi},$$ where $\varphi=\frac{1+\sqrt{5}}{2}$ is the golden ratio. Thus $2^s=\varphi$ and $s=\log(\varphi)/\log(2)\approx 0.694$, in agreement with the value quoted from Wikipedia. It's also possible to compute the dimension using a box counting technique as in this answer.
How to calculate $\sum_{x \in \mathbb{Z}^2} \frac{2\pi(1 - 2x_1)}{|2\pi x + \hat{\pi}|^4}$? Consider the series $$\sum_{x=-\infty}^\infty \frac{2\pi(1 - 2x)}{(2\pi x + \pi)^4}.$$ This series converges to $\frac{\pi}{12}$ as can be seen in WolframAlpha . Now instead of a scalar $x$ I would like to consider instead a vector $x$ in $\mathbb{Z}^2$ and calculate the value of $$\sum_{x \in \mathbb{Z}^2} \frac{2\pi(1 - 2x_1)}{|2\pi x + \hat{\pi}|^4},$$ where $\hat{\pi} = [\pi, \pi]^T$. So this could be viewed as a generalization of the previous summation to two dimensions. How can the value that this series converges to be calculated?
Pulling out the $\pi$ factor, $$ -\frac{2}{\pi^3}\sum_{x\in\mathbb{Z}}\frac{(2x-1)}{(2x+1)^4}=-\frac{2}{\pi^3}\sum_{n\geq 0}\left(\frac{(2n-1)}{(2n+1)^4}+\frac{-3-2n}{(2n+1)^4} \right)$$ equals $$ \frac{8}{\pi^3}\sum_{n\geq 0}\frac{1}{(2n+1)^4} =\frac{8}{\pi^3}\left(\zeta(4)-\frac{1}{16}\zeta(4)\right)=\frac{8}{\pi^3}\cdot\frac{15}{16}\cdot\frac{\pi^4}{90}=\frac{\pi}{12}.$$ In a similar way, the second series equals $$ \frac{16}{\pi^3}\sum_{x\geq 0}\sum_{y\geq 0}\frac{1}{\left((2x+1)^2+(2y+1)^2\right)^2}$$ that is deeply related to a Dirichlet series. Since each $n\equiv 2\pmod 4$ has exactly $r_2(n)/4$ representations $n=(2x+1)^2+(2y+1)^2$ with $x,y\geq 0$, this equals $$ \frac{4}{\pi^3}\cdot\!\!\!\!\sum_{\substack{n\geq 1 \\ n\equiv 2\pmod{4}}}\!\!\!\! \frac{r_2(n)}{n^2}$$ where $r_2(n)$ stands for the number of ways of writing $n$ as a sum of two squares. From the classical identity $\sum_{(m,k)\neq(0,0)}\frac{1}{(m^2+k^2)^s}=4\,\zeta(s)\beta(s)$, together with $r_2(2m)=r_2(m)$ and $r_2(n)=0$ for $n\equiv 3\pmod 4$, one gets $$\sum_{\substack{n\geq 1\\ n\equiv 2\pmod 4}}\frac{r_2(n)}{n^2}=\frac{3}{16}\cdot 4\,\zeta(2)\beta(2)=\frac{\pi^2}{8}\,G,$$ where $G=\beta(2)$ is Catalan's constant. Hence the explicit value of the two-dimensional series is $$ \frac{4}{\pi^3}\cdot\frac{\pi^2}{8}\,G=\frac{G}{2\pi}\approx 0.1458. $$
Explicit computation of a conditional expectation Let $\Omega=[0,1], \mathcal{F}=\mathcal{B}([0,1]), \mathbb{P}=\lambda$ on $[0,1]$. Let $\mathcal{G}$ be the smallest $\sigma$-algebra containing the Borel subsets of $[0,\frac{1}{2}]$. Compute for $X\in L^1$ the conditional expectation $\mathbb{E}[X|\mathcal{G}]$. My attempt: we can 'take out what is known', so it makes sense to rewrite $X=X\chi_{[0,\frac{1}{2}]}+X\chi_{(\frac{1}{2},1]}$. Then, using linearity and the 'taking out what is known'-property, we have $\mathbb{E}[X|\mathcal{G}]=X\chi_{[0,\frac{1}{2}]}+\mathbb{E}[X\chi_{(\frac{1}{2},1]}|\mathcal{G}]$. The answer file, however, says $$\mathbb{E}[X\chi_{(\frac{1}{2},1]}|\mathcal{G}]=2\mathbb{E}[X\chi_{(\frac{1}{2},1]}]\chi_{(\frac{1}{2},1]}$$ Could anyone explain me how to derive the last equality? EDIT: After a 2nd thought I got: $$\mathbb{E}[X\chi_{(\frac{1}{2},1]}|\mathcal{G}]=\frac{1}{\lambda(\frac{1}{2},1]} \int_{(\frac{1}{2},1]} X d\mathbb{\lambda}=2\mathbb{E}[\chi_{(\frac{1}{2},1]}X]$$ Which I guess kinda solves the problem, yet I don't understand why we should multiply it with $\chi_{(\frac{1}{2},1]}$
Recall that $\mathbb E(X|\mathcal G)=Y$ a.s., where $Y$ is a r.v. on $(\Omega, \mathcal G)$. You find that for $\omega\in[0,\frac12]$ $$Y(\omega)=X(\omega)\text{ a.s.}$$ And the second summand is the (constant) value of $Y$ for $\omega\in(\frac12,1]$: $$Y(\omega) \stackrel{a.s.}{=} 2\mathbb E[X\chi_{(\frac12,1]}]:=C $$ So, $$\mathbb E(X|\mathcal G) \neq X(\omega)\chi_{[0,\frac12]}+C,$$ but $$\mathbb E(X|\mathcal G) \stackrel{a.s.}{=}\begin{cases} X(\omega), & \omega \in [0,\frac12]\cr C, & \omega\in (\frac12,1]\end{cases} = X\chi_{[0,\frac12]} + C\chi_{(\frac{1}{2},1]}$$
Given the positive numbers $a, b, c$. Prove that $\frac{a}{\sqrt{a^2+1}}+\frac{b}{\sqrt{b^2+1}}+\frac{c}{\sqrt{c^2+1}}\le \frac{3}{2}$ Given the positive numbers $a, b, c$ satisfy $a+b+c\le \sqrt{3}$. Prove that $\frac{a}{\sqrt{a^2+1}}+\frac{b}{\sqrt{b^2+1}}+\frac{c}{\sqrt{c^2+1}}\le \frac{3}{2}$ My Try (Edited from Comments): By Cauchy Schwarz, we have that $$ (a^2+1)(1+3) \ge \left(a+\sqrt {3}\right)^2 \rightarrow \frac{a}{\sqrt{a^2+1}} \le \frac{2a}{a+\sqrt{3}} $$ I need a new method
Another way. Since $ab+ac+bc\leq\frac{1}{3}(a+b+c)^2\leq1$, by AM-GM we obtain: $$\sum_{cyc}\frac{a}{\sqrt{1+a^2}}\leq\sum_{cyc}\frac{a}{\sqrt{ab+ac+bc+a^2}}=\sum_{cyc}\frac{a}{\sqrt{(a+b)(a+c)}}\leq$$ $$\leq\frac{1}{2}\sum_{cyc}\left(\frac{a}{a+b}+\frac{a}{a+c}\right)=\frac{1}{2}\sum_{cyc}\left(\frac{a}{a+b}+\frac{b}{a+b}\right)=\frac{3}{2}.$$ Done!
calculating a limit with problem with l'hopital's law I need some help in calculating this limit: $\lim_{x\rightarrow2}(x-1)^{\frac{2x^2-8}{x^2-4x+4}}$ Thanks a lot.
HINTS: First, note that $$\frac{2x^2-8}{x^2-4x+4}=\frac{2(x+2)}{x-2}$$ Hence, we can write $$(x-1)^{\frac{2(x+2)}{x-2}}=e^{\frac{2(x+2)}{x-2}\log(x-1)}$$ Finish by using L'Hospital's Rule to show $$\lim_{x\to 2}\frac{2(x+2)}{x-2}\log(x-1)=\lim_{x\to 2}\left(2\log(x-1)+\frac{2(x+2)}{x-1}\right)$$
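Carrying this through: the right-hand side tends to $2\log(1)+\dfrac{2\cdot 4}{1}=8$, so the exponent tends to $8$ and $$\lim_{x\to 2}(x-1)^{\frac{2x^2-8}{x^2-4x+4}}=e^{8}.$$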
boundary and optimization over convex sets Suppose we are maximizing a concave objective function $f_0$ over a convex set $C$. Must the boundary of $C$ contain a point $x_b^*$ which maximizes our objective function? That is, if $S = \{x: x = \text{argmax} f_0(x), \text{ } x \text{ feasible}\}$, will there always be a point $x_b^* \in \partial C$ such that $x_b^* \in S$? If $f_0$ is linear, this is apparent. If$f_0$ takes one value on the entire feasible set, then we have optimal points both on the boundary and in the interior. So is there always an $x_b^*$ on the boundary that maximizes $f_0$?
No, just maximize $-x^2$ over $[-1,1]$.
Show: $p\text{ is prime}\iff p\text{ has no divisor }d\text{ where }1<d\leq\sqrt p$ Let $p\in\mathbb Z$, $p>1$. Prove that $$ p\text{ is prime}\iff p\text{ has no divisor }d\text{ where }1<d\leq\sqrt p. $$ It is easy to show "$\implies$"; if $p$ is prime, then its only positive divisors are 1 and $p$ itself. Therefore, there are no divisors $d$ for which it holds that $1<d\leq\sqrt p<p$. However, I'm having trouble with "$\impliedby$". I was thinking of using the contrapositive. So assume $p$ is not prime. Then we would like to show that $p$ has a divisor $d$ such that $1<d\leq \sqrt p$. I was thinking of using the fact that we can write $p$ as the (unique) product of finitely many prime numbers; $p=p_1\cdots p_k$, for some $k\in\mathbb N$. From here on I wouldn't know how to continue. Could someone help me out? EDIT This is my proof then, based on the hints given: Assume $p$ is not prime. Then $p$ must have at least one divisor $d$, such that $1<d<p$. We can therefore write $p=qd$, for some $q\in\mathbb Z$. This automatically means that $q$ is also a divisor of $p$. Now assume both $q$ and $d$ are greater than $\sqrt p$. Then $p=qd>p$. Contradiction. Therefore it holds that $q\leq\sqrt p$ or $d\leq\sqrt p$.
Check the lemma (which also proves the existence of prime divisors): If $n$ is not prime, the smallest non-trivial (i.e. $\ne 1, n$) divisor of $n$ is prime. Indeed, if this smallest divisor is not prime, it has a non-trivial divisor, which is also a divisor of $n$, contradicting the ‘smallest divisor’ property. Corollary: If $n$ is not prime, the smallest non-trivial divisor $d$ of $n$ is $\le \sqrt n$. Indeed, suppose $d>\sqrt n$. Then $\;e=\dfrac nd<\dfrac n{\sqrt n}=\sqrt n<d$. Contradiction.
Properties of Determinant=0 Okay, so I am dealing with a problem with determinant equal to 0. Admittedly, I do not know that much about determinant equal to 0 other than that it can cause no solution or infinitely many solutions.I think the first choice is true, but I do not know which of the others would be true or why. (My gut tells me that A&D would be true while the others are false but this is not based on any solid mathematical understanding. $$ \begin{bmatrix} a1 \\a2 \end{bmatrix}=\begin{bmatrix} m11 & m12 \\m21 & m22 \end{bmatrix}*\begin{bmatrix} x1 \\x2 \end{bmatrix} $$ is abbreviated as A=MX If det(M)=0, then which are true? A. some values of A (such as A=0) will allow more than one X to satisfy the equation. B. given any X there is one and only one A which will satisfy the equation. C. there is no value of X which satisfies the equation when A=0. D. some values of A will have no values of X which will satisfy the equation. E. given any A there is one and only one X which will satisfy the equation.
"Admittedly, I do not know that much about determinant equal to 0 other than that it can cause no solution or infinitely many solutions." A. some values of A (such as A=0) will allow more than one X to satisfy the equation. D. some values of A will have no values of X which will satisfy the equation. E. given any A there is one and only one X which will satisfy the equation. From the one thing you say you know. Can you not see how that statement reflects directly on these 3. B. given any X there is one and only one A which will satisfy the equation. This does not hinge on your statement above. But what if I told you that $M\mathbf x$ is a function of $\mathbf x$. Would that give you any insight? C. there is no value of X which satisfies the equation when A=0. $M\mathbf 0 = \mathbf 0$ for any $M$
Any linear operator $T$ can be realised as the strong limit of compact operators I'm able to show that the strong limit of compact operators need not be compact. In Stein and Shakarchi however, Question 21.(b) of Chapter 4 reads as follows: Show that for any bounded operator $T$ there is a sequence $\{ T_n \}$ of bounded operators of finite rank so that $T_n \to T$ strongly as $n \to \infty$. Clearly since $T_n$ has finite rank, it is compact. And note that strong convergence means that for all $f \in \mathcal{H}$, $T_nf \to Tf$. I'm unsure of how to go about this though.
Since the Hilbert space $\mathcal H$ is separable, it has a countable orthonormal basis $\{e_k\}_{k=1}^\infty$. Now let $P_n$ be the projection onto the first $n$ coordinates, i.e. $$ P_n\left(\sum_{k=1}^\infty\alpha_ke_k\right)=\sum_{k=1}^n\alpha_ke_k.$$ Now given an operator $T\in B(\mathcal H),$ put $T_n=P_nT$ for each $n\in\mathbb N$. Then each $T_n$ is finite-rank, and for any $f\in\mathcal H$, writing $Tf=\sum_{k=1}^\infty\alpha_ke_k$ we have $$ \|(T_n-T)f\|=\left(\sum_{k=n+1}^\infty|\alpha_k|^2\right)^\frac{1}{2}\to 0 $$ as $n\to\infty$. Since $f\in\mathcal H$ was arbitrary, we know $\{T_n\}$ converges strongly to $T$, and since $T\in B(\mathcal H)$ was arbitrary, the result is proven.
Why is $\mathbb F_3[x]/(x^2 + 1) \cong \mathbb F_9$ I want to show that $\mathbb Z[i]/(3) \cong \mathbb F_9$, and I know that $\mathbb Z[i]/(3) \cong \mathbb Z[x]/(3, x^2 + 1) \cong \mathbb F_3[x]/(x^2 + 1)$. How do we know that $\mathbb F_3[x]/(x^2 + 1) \cong \mathbb F_9$? I know that $x^2 + 1$ is irreducible, is this important?
Yes, the irreducibility matters: since $x^2+1$ has no root in $\mathbb F_3$ ($0^2+1=1$, $1^2+1=2$, $2^2+1=2$), it is irreducible, so $\mathbb F_3[x]/(x^2 + 1)$ is a field. It is an extension of degree 2 of $\mathbb F_3$, so its cardinality is $3^2=9$. Two finite fields which have the same order are isomorphic. https://en.wikipedia.org/wiki/Finite_field
Total number of maximal ideals in the quotient ring $\frac{\mathbb Q[x]}{x^4-1}$ Let $\mathbb Q[x]$ be the ring of polynomials over $\mathbb Q$. Then the total number of maximal ideals in the quotient ring $\frac{\mathbb Q[x]}{x^4-1}$ is $2$, because $i$ and $-i$ are not in $\mathbb Q$. Is that correct? Thank you.
Although this question is completely answered here, there is also an opportunity to address other problems for you. This question appeared in the related questions on the right, and therefore probably appeared in the list of possible duplicates while you were entering your question. You really ought to actually pay attention and look for duplicates before submitting. Anyway. No, your solution is incorrect and aside from that, very incompletely expressed. It has three maximal ideals because the maximal ideals correspond to prime divisors of $x^4-1$, of which there are three since $(x-1)(x+1)(x^2+1)$ is a complete factorization. Be sure to include a complete explanation especially if it is brief. "Two maximal ideals because two random numbers aren't in $\mathbb Q$ " is not very enlightening, although we can tell you might be factoring.
Why is the equation $dy/dx + P(x)y=Q(x)$ said to be standard form? Well, I know that in a linear differential equation the variable and its derivatives are raised to the power of $1$ or $0$. But I am confused about where the standard form of a linear differential equation came from. That is, why is the equation $dy/dx + P(x)y=Q(x)$ said to be in standard form?
The standard form of a linear differential equation is $$f_n\frac{d^ny}{dx^n}+f_{n-1}\frac{d^{n-1}y}{dx^{n-1}}+\cdots+f_1\frac{dy}{dx}+f_0y=g.$$ For $n=1$ this is called a differential equation of the first order, i.e. $$f_1\frac{dy}{dx}+f_0y=g.$$ Dividing through by $f_1$ (where it is nonzero) you get $$\frac{dy}{dx}+py=q,$$ where $p$ and $q$ are continuous functions of $x$; this is called a linear differential equation of the first order, and that is the standard form you asked about.
If $\left|z^3 + {1 \over z^3}\right| \le 2$ then $\left|z + {1 \over z}\right| \le 2$ $\displaystyle \left|z^3 + {1 \over z^3}\right| \le 2$ prove that $\displaystyle \left|z + {1 \over z}\right| \le 2$ $$\left|z^3 + {1 \over z^3}\right| = \left(z^3 + {1 \over z^3}\right)\left(\overline z^3 + {1 \over \overline{z}^3}\right) = \left(z + {1\over z}\right)\left(z^2 - 1 + {1\over z^2} \right)\left(\overline z + {1\over \overline z}\right)\left(\overline z^2 - 1 + {1\over\overline z^2} \right)$$ $$=\left|z + {1\over z}\right|^2\left|z^2 - 1 + {1\over z^2} \right|^2 \le 2$$ $$\therefore \left|z + {1\over z}\right| \le \sqrt{2}$$ where $\displaystyle \left|z^2 - 1 + {1\over z^2} \right| \ge 1$. But I am not able to prove $\displaystyle \left|z^2 - 1 + {1\over z^2} \right| \ge 1$, need some help on this.
It's all about identities. Note that $\left( z + \frac 1z\right)^3 = z^3 + \frac 1{z^3} + 3\left(z + \frac 1z\right)$. Apply the triangle inequality: $$ \left|\left( z + \frac 1z\right)^3 \right| \leq \left|z^3 + \frac 1{z^3}\right| + 3\left|\left(z + \frac 1z\right)\right| $$ Using what you know: $$ \left|\left( z + \frac 1z\right) \right|^3 - 3\left|\left(z + \frac 1z\right)\right| \leq \left|z^3 + \frac 1{z^3}\right| \leq 2 $$ Suppose that $x = \left|\left( z + \frac 1z\right) \right|$, then $x^3 - 3x \leq 2$ is true, along with $x \geq 0$. To solve this, note that $x^3-3x-2=(x-2)(x+1)^2$, so the inequality $x^3-3x\leq 2$ is equivalent to $(x-2)(x+1)^2\leq 0$; since $(x+1)^2\geq 0$, this forces $x\leq 2$. Hence we can conclude that $0 \leq x \leq 2$ is true, which is the conclusion of the problem.
Augmentation Ideal of Universal Enveloping Algebra I am confused with the statement that "consider the augmentation map $\epsilon_L :\mathfrak{U}(L) \rightarrow \mathbb{F}$ which is the unique algebra homomorphism induced by $\epsilon_{L}(x)=0 , \forall x \in L$ (I hope $L$ here is seen as the copy $T^1$ in $\mathfrak{U}(L)$) where $\mathfrak{U}(L)$ is the universal enveloping algebra of the Lie algebra $L$ and the kernel of $\epsilon_{L}$ is said to be the augmentation ideal. My confusion is that if $\epsilon_{L}(x)=0 , \forall x \in L$ then does not that imply $ker(\epsilon_{L})$ is just $\mathfrak{U}(L)/\mathbb{F}$?
You presumably meant to write $\mathfrak U(L)\backslash\mathbb F$ (a set difference), since $\mathfrak U(L)/\mathbb F$ does not make sense: $\mathbb F$ is not an ideal of $\mathfrak U(L)$. But the kernel is not a set-theoretic complement either: for instance $1+x$ with $x\in L$ lies outside $\mathbb F$, yet $\epsilon_L(1+x)=1\neq 0$, so it is not in the kernel. What is true is that $\ker\epsilon_L$ is the two-sided ideal of $\mathfrak U(L)$ generated by (the image of) $L$, and that as vector spaces $\mathfrak U(L)=\mathbb F\oplus\ker\epsilon_L$, where $\mathbb F$ denotes the subalgebra of $\mathfrak U(L)$ generated by $1.$
Product rule for Hadamard product differentation? Is there a "simple" solution to $\bf \frac{\partial}{\partial w}\big(w \odot f(w)\big)$ assuming the matrix $\bf \frac{\partial f}{\partial w}$ is known? With simple I mean something like in the normal vector multiplication case $\bf \frac{\partial}{\partial w}\big(w^Tf(w)\big) = f(w) + \big[\frac{\partial f(w)}{\partial w}\big]^T w$ such that no other knowledge of $\bf f(w)$ is required.
Let Diag denote the function which creates a diagonal matrix from a vector. Define some variables for convenience $$\eqalign{ F &= \operatorname{Diag}(f) \cr W &= \operatorname{Diag}(w) \cr h &= w\odot f = f\odot w \cr J &= \frac{\partial f}{\partial w} \cr \cr }$$ Now find the differential and gradient of $h$ $$\eqalign{ dh &= f\odot dw + w\odot df \cr &= F\,dw + W\,(J\,dw) \cr \cr \frac{\partial h}{\partial w} &= F + WJ \cr }$$
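If it helps, here is a quick numerical sanity check of $\frac{\partial h}{\partial w}=F+WJ$ — just a sketch, where the particular $f$ below is an arbitrary smooth test function (any other choice would do):

```python
import numpy as np

def f(w):
    # arbitrary smooth test function R^3 -> R^3 (only for checking the identity)
    return np.array([np.sin(w[0]) + w[1]**2,
                     w[0] * w[1],
                     np.exp(w[2])])

def num_jacobian(func, w, eps=1e-6):
    # central finite differences, column j = d func / d w_j
    n = w.size
    m = func(w).size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (func(w + e) - func(w - e)) / (2 * eps)
    return J

w = np.array([0.3, -1.2, 0.7])
h = lambda v: v * f(v)                       # Hadamard (elementwise) product v ⊙ f(v)

J_h = num_jacobian(h, w)                     # numerical d h / d w
J_f = num_jacobian(f, w)                     # numerical d f / d w
formula = np.diag(f(w)) + np.diag(w) @ J_f   # F + W J from the answer above

print(np.allclose(J_h, formula, atol=1e-5))  # expect: True
```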
Infinitely differentiable functions and their bounds We know that if $f(x)$ is analytic, i.e. $f(x) \in C^\infty$ in an open set $D$ and $f(x)$ has a convergent Taylor series at any point $x_{0}\in D$ for $x$ in some neighborhood of $x_{0}$, we can write $\left|{\frac {d^{k}f}{dx^{k}}}(x)\right|\leq C^{k+1}k!$ Is there a counterpart of this derivative bound for infinitely differentiable functions $g(x) \in C^\infty$? $\left|{\frac {d^{k}g}{dx^{k}}}(x)\right|\leq h(k)?$
Given any sequence $h_n$, you can find a $C^\infty$ function $g$ such that $g^{(n)}(0) = h_n$. This implies the answer is "no." Consider $$g(x) = \sum_{k=0}^\infty g_k(x), \qquad g_k(x) = \frac{h_k}{k!} x^k \alpha(c_k x),$$ where $\alpha:\mathbb R \to [0,1]$ is a $C^\infty$ function which is $1$ on $[-1,1]$ and $0$ on $\mathbb R \setminus (-2,2)$, and $c_k$ will be chosen later. So let's try to show that $\sum_k g_k^{(n)}$ converges uniformly for each $n$. Using Leibnitz' formula, we see that for $k \ge n$ $$ {\|g_k^{(n)}\|}_\infty \le \frac{|h_k|}{k!} \sum_{m=0}^n \binom nm \frac{k!}{(k-m)!}\big[\sup_{c_k x \in [-2,2]}|x|^{k-m}\big] c_k^{n-m} {\|\alpha^{(n-m)}\|}_\infty \le K_n |h_k| 2^kc_k^{n-k} $$ where $K_n$ depends only on $n$ and $\alpha$. So if $c_k \ge 3 |h_k|^{1/k}$, then $\sum_k {\|g_k^{(n)}\|}_\infty$ converges. Hence $g = \sum_k g_k$ converges in $C^\infty$. Then it can be easily shown that $g^{(n)}(0) = h_n$.
List all the elements of $A = \{ n \in \mathbb{Z} \mid \frac{n^2-n+2}{2n+1} \in \mathbb{Z}\}$ I was given the following set $A = \{ n \in \mathbb{Z} \mid \frac{n^2-n+2}{2n+1} \in \mathbb{Z}\}$ I have to list all the elements of $A$. I started using the Euclidean division: $$n^2-n+2=(2n+1)(\frac{1}{2}n-\frac{3}{4}) + \frac{11}{4}$$ In order to eliminate the fractions, I multiplied the whole expression with 4: $$4(n^2-n+2)=(2n+1)(2n-3) + 11$$ I´m stuck here, maybe I´m on the wrong way. Could anyone give me some hints how can I approach this problem. Thank you in advance.
The answer based on the hint of @lhf $$4\frac{n^2-n+2}{2n+1}= 2n-3+ \frac{11}{2n+1}$$ $$4\frac{n^2-n+2}{2n+1} \in \mathbb{Z} \Rightarrow \frac{11}{2n+1} \in \mathbb{Z}$$ $$\Leftrightarrow 2n+1 \in D_{11} \Leftrightarrow 2n+1 \in \{-11, -1, 1 , 11\}$$ $$\Leftrightarrow 2n \in \{-12, -2, 0 , 10\}$$ $$\Leftrightarrow n \in \{-6, -1, 0 , 5\}$$ $$\Rightarrow A = \{-6, -1, 0 , 5\}$$ Another approach: $$\frac{n^2-n+2}{2n+1} \in \mathbb{Z}$$ $$\Leftrightarrow n^2-n+2 \equiv 0[2n+1]$$ $$\Rightarrow 4(n^2-n+2) \equiv 0[2n+1]$$ $$\Leftrightarrow (2n+1)(2n-3) + 11 \equiv 0[2n+1]$$ $$\Rightarrow 11 \equiv 0[2n+1]$$ $$\Rightarrow (2n+1) \mid 11$$ $$\Leftrightarrow (2n+1) \in D_{11}$$ Another approach: $$4(n^2-n+2)=(2n+1)(2n-3) + 11$$ According to the successive division: $$4(n^2-n+2)\wedge (2n+1)=(2n+1) \wedge 11$$ We have $$(\forall n \in \mathbb{Z}) 4 \wedge (2n+1)= 1$$ Since $(\forall n \in \mathbb{Z}) n^2 - n + 2 \in \mathbb{Z^*}$, then $$4 \wedge (2n+1)= 1 \Rightarrow 4(n^2-n+2)\wedge (2n+1) = (n^2-n+2)\wedge (2n+1)$$ $$\Leftrightarrow (n^2-n+2)\wedge (2n+1) = (2n+1) \wedge 11$$ We have $ \frac{n^2-n+2}{2n+1} \in \mathbb{Z}$, then $(2n+1) \mid (n^2-n+2)$ $$\Leftrightarrow (n^2-n+2)\wedge (2n+1) = \mid2n+1\mid$$ $$\Leftrightarrow (2n+1) \wedge 11 = \mid2n+1\mid$$ $$\Leftrightarrow (2n+1) \in D_{11} $$
How to prove that $11...111$ is not the sum of two perfect squares I'm stuck with this problem: Show that $a=11...111$ is not the sum of two perfect squares. That is to say, there are no pair of integers ($b$ , $c$) so that $b^2+c^2=a$. I think I am supposed to use equivalence classes in some way, but I do not know how to approach it.
It's easy to prove that for every perfect square number, the remainder when it is divided by 4 must be 0 or 1. And now the solution is clear.
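To spell out the last step (assuming the number has at least two ones): a sum of two squares is congruent to $0+0$, $0+1$ or $1+1$, i.e. to $0,1,2\pmod 4$, while $$\underbrace{11\ldots1}_{\geq 2\text{ digits}}\equiv 11\equiv 3\pmod 4,$$ since a number is congruent to its last two digits modulo $4$. So it cannot be a sum of two squares.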
Where is the mistake in this solution of $\lim_{x \to 1}{\frac{1-x^2}{\sin (\pi x)}}$? I'm trying to solve this limit: $$\lim_{x \to 1}{\frac{1-x^2}{\sin (\pi x)}}$$ The answer ought to be $\frac{2}{\pi}$, but I end up with $0$: $\lim\limits_{x \to 1}{\frac{1-x^2}{\sin (\pi x)}} = $ $\lim\limits_{y \to 0}{\frac{1-(y+1)^2}{\sin (\pi (y+1))}} = $ $\lim\limits_{y \to 0}{\frac{\pi(y+1)}{\sin (\pi (y+1))} \frac{1-(y+1)^2}{\pi(y+1)}} = $ $\lim\limits_{y \to 0}{\frac{1-(y+1)^2}{\pi(y+1)}} = 0$ Where and why is my solution incorrect? Note: I'm aware of this post, however I believe mine is different because I'm asking where and why my solution went wrong, not why my answer was wrong.
Your third equality attempts to make use of the rule $\lim\limits_{x\to0}\frac{x}{\sin x} = 1$, but note that yours has $y\to 0$ yet the argument is not $y$, it is $\pi(y+1)$, which does not go to zero. That's where your work goes wrong.
Linear Algebra Terminology Trouble I have been reading up on how to bring a matrix to diagonal form, and I learned that a matrix is diagonalizable if and only if the eigenvectors "span the space." What does it mean for eigenvectors to "span the space"? I am inferring that it means to "fill in" the the columns of a matrix $\textbf{S}^{-1}$ that diagonalizes $\textbf{T}$ using the eigenvectors of $\textbf{T}$. For example, if a matrix $\textbf{T}$ has two colums, it will need two linearly independent eigenvectors to "span the space." Thank you.
I think it's probably a convoluted way of saying that the matrix has a full set of eigenvectors. Let's assume that we are talking about real numbers $\mathbb{R}$ and an $n\times n$ matrix. Informally speaking, two things can go wrong when looking for eigenvalues and eigenvectors: 1. Some of the eigenvalues could be complex and 2. Some of the eigenvalues could be "defective". Defective means that the algebraic multiplicity is higher than the geometric multiplicity. (The relevant discussion comes 4 lessons after this one: http://lem.ma/JZ) In both of these "bad" cases, the matrix is not cleanly diagonalizable. In the case of complex eigenvalues, you will end up with $2\times 2$ blocks on the diagonal, and in the defective case you will end up with Jordan blocks, e.g. like this $$\begin{bmatrix}1 & 2 \\ 0&1 \end{bmatrix}$$ Both cases are characterized by having fewer than $n$ linearly independent real eigenvectors. So you have fewer (linearly independent) eigenvectors than the dimension of the space. And because there are too few of them, they don't span the space. In the opposite case, when all eigenvalues are real and none are defective, the matrix is diagonalizable and you have $n$ (linearly independent) eigenvectors. To sum up: Diagonalizable = eigenvectors span the space Nondiagonalizable = eigenvectors fail to span the space
Why is $-\ \frac{1}{2}\ln(\frac{1}{9})$ equal to $\frac{\ln(9)}{2}$? I solved this problem in my textbook but noticed their solution was different than mine. $1. \ 9e^{-2x}=1$ $2. \ e^{-2x}=\frac{1}{9}$ $3. -2x=\ln(\frac{1}{9})$ $4. \ x=-\ \frac{1}{2}\ln(\frac{1}{9})$ However, the answer that my textbook gives is $\frac{\ln(9)}{2}$ I plugged these expressions into my calculator and they are indeed equivalent, however I don't see what properties I could use to get from my messy answer to the textbook's much cleaner one. Any help would be greatly appreciated. Thank you.
There exists the following property for logarithms: $$n \ln{x} = \ln{x^n}$$ So for your problem you have: $$ -\frac{1}{2} \ln{\left(\frac{1}{9}\right)}=\frac{1}{2}\ln{\left(\left(\frac{1}{9}\right)^{-1}\right)}=\frac{1}{2}\ln{9}= \frac{\ln9}{2}$$ I hope this is sufficient as an explanation.
Correlation($U,V$)=Correlation($X,Y$) Let $X$ and $Y$ be random variables such that $0<\sigma^2_X<\infty$ and $0<\sigma^2_Y<\infty$. Suppose that $U=aX+b$ and $V=cY+d$, where $a \not= 0$ and $c \not= 0$. Show that $\rho(U,V)=\rho(X,Y)$ if $ac>0$, and $\rho(U,V)=-\rho(X,Y)$ if $ac<0$.
Might be best to start with covariance and $a,c > 0$. $$Cov(U,V) = Cov(aX+b, cY+d) = Cov(aX,cY) + Cov(aX,d) + Cov(b,cY) + Cov(b,d)\\ = Cov(aX,cY) = acCov(X,Y).$$ Then $\rho(U,V) = Cor(U,V) = \frac{Cov(U,V)}{SD(U)SD(V)}.$ Finally, finish by finding $\sigma_U = SD(U)$ and $\sigma_V = SD(V).$ I hope my notation is sufficiently similar to notation in your text so that you can follow this.
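To finish (a sketch): $SD(U)=|a|\,SD(X)$ and $SD(V)=|c|\,SD(Y)$, so $$\rho(U,V)=\frac{ac\,Cov(X,Y)}{|a|\,SD(X)\,|c|\,SD(Y)}=\frac{ac}{|ac|}\,\rho(X,Y),$$ which equals $\rho(X,Y)$ when $ac>0$ and $-\rho(X,Y)$ when $ac<0$.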
How to validate the following inequality? $\sqrt{2(x+y+z)}\leq a\sqrt{x+y}+b\sqrt{z}.$ I want to find the least values of $a$ and $b$ for which the above inequality holds good for all nonnegative real values of $x, y, z.$
Assume that $z \neq 0$ and $x, y$ are not simultaneously zero. (For example, if $z$ is zero, $b$ can be any arbitrary negative number.) Consider the equation $$b = -\sqrt{\frac{x + y}{z}}a + \sqrt{\frac{2(x + y + z)}{z}}.$$ Let $a^* = \sqrt{\frac{2(x + y + z)}{x + y}}$ and $b^* = \sqrt{\frac{2(x + y + z)}{z}}$, i.e., $a^*$ is the $a$-intercept and $b^*$ is the $b$-intercept. The least value of $a$, $b$ (hence $a + b$) is then occurring at $(a^*, 0)$ if $a^* < b^*$ and $(0, b^*)$ otherwise.
$(x,y) \in \mathbb Z \times\mathbb Z$ with $336x+378y=\gcd(336,378)$ $(x,y) \in \mathbb Z \times\mathbb Z$ with $336x+378y=\gcd(336,378)$ Question: How can I get every possible combination of $x$ and $y$? My solution so far: First I have calculated the $\gcd(336,378)=42$. So using that I have $42= 1\cdot378-1\cdot336$ So $x=1$ and $y=-1$. To get the general formula I have tried two things with gcd and lcm as a factor: gcd: $42=1 \cdot 42 \cdot x \cdot 378- 1 \cdot 42 \cdot x \cdot 336, x \in \mathbb Z$ lcm: $42=1 \cdot 3024 \cdot x \cdot378 - 1 \cdot 3024 \cdot x, x\in \mathbb Z$ Unfortunately both do not make sense - it was just a guess.
You have found the minimum solution to Bezout's identity where $x=-1,y=1$ Now all the solutions are given in pairs by: $x+k\frac{b}{\gcd(a,b)},y-k\frac{a}{\gcd(a,b)}$ For example, $k=1 \rightarrow x=8,y=-7$ and $8\cdot 336 + (-7)\cdot 378 = 42$ $k=2 \rightarrow x=17,y=-15$ and $17\cdot 336 + (-15)\cdot 378 = 42$
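Explicitly, with $a=336$, $b=378$ and $\gcd(a,b)=42$, all integer solutions of $336x+378y=42$ are $$(x,y)=(-1+9k,\;1-8k),\qquad k\in\mathbb Z.$$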
Infinite torsion group with finitely many conjugacy classes Do there exist infinite torsion groups with finitely many conjugacy classes? One can easily see that there are no such groups with only two conjugacy classes. Note also that one can construct torsion-free groups with finitely many conjugacy classes via HNN extensions.
The first finitely generated examples were first constructed by Ivanov (I found this fact in Osin's paper, below). Note that Denis Osin constructed the first examples of finitely generated groups with exactly two conjugacy classes. He points out on page 2 of his paper that Ivanov's ideas cannot be extended to prove Osin's result. (Osin's paper appeared in the Annals of Mathematics and is the culmination of a large piece of work on small cancellation theory for relatively hyperbolic groups.) Ivanov's construction is as a limit of hyperbolic groups, so $G$ is such that there exist normal subgroups $N_1\lhd N_2\lhd\cdots \lhd F$ of a free group $F$ such that $F/N_i$ is hyperbolic for all $i$ and $G=F/N$ where $N=\cup N_i$. Suppose that $G$ has two conjugacy classes. Then there exist elements $g, t$ such that $t^{-1}gt=g^2$, and hence there exists some $i$ such that this identity holds in $F/N_i$. However, this identity never holds in hyperbolic groups (Osin cites here the old texts of hyperbolic groups, but I think this is due to Gersten and Short and is slightly later). This identity is also the reason why a group with two conjugacy classes must be torsion-free (just in case anyone is interested!). Suppose that $G$ has two conjugacy classes and contains torsion. Then every non-trivial element has prime order $p>2$. Note that there exists $g, t$ such that $t^{-1}gt=g^2$, and so we have $g^{2p-1}=t^{-p}gt^pg^{-1}=gg^{-1}=1$, and hence $2p-1=0\mod p$. But of course, by Fermat's little theorem we have that $p-1=0\mod p$, so $2p-2=0\mod p$, a contradiction.
Factor $9(a-1)^2 +3(a-1) - 2$ I got the equation $9(a-1)^2 +3(a-1) - 2$ on my homework sheet. I tried to factor it by making $(a-1)=a$ and then factoring as a messy trinomial. But even so, I couldn't seem to get the correct answer; they all seemed incorrect. Any help would be greatly appreciated. Thank you so much in advance!
$\begin{align}{\bf Hint}\,\ {\rm Let}\,\ x = 3(a\!-\!1).\ {\rm Then}\qquad &9(a\!-\!1)^2 +3(a\!-\!1)-2\\ =\ &x^2 + x - 2\\ =\ &(x+2)(x-1)\end{align}$ Remark $ $ Above is a special case of the AC-method, which gives a general way to change variables to transform polynomials to have leading coefficient $=1.\,$ This general method is well-worth learning since it often proves useful.
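Undoing the substitution $x=3(a-1)=3a-3$ gives $$(x+2)(x-1)=(3a-1)(3a-4),$$ which you can check expands back to $9a^2-15a+4=9(a-1)^2+3(a-1)-2$.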
Metric spaces and the absolute value I tried to use the absolute value properties to solve it but I couldn't manage it. Let $(X,\rho)$ be a metric space: a) for all $x,y,z$ show that $\left|\rho(x,z)-\rho(y,z)\right|\leq\rho(x,y)$; b) for all $x,y,z,w$ show that $\left|\rho(x,y)-\rho(z,w)\right|\leq\rho(x,z)+\rho(y,w)$.
You could rewrite the inequalities and drop the absolute values. For example the first inequality is the same as: $$-\rho(x,y) \le \rho(x,z) - \rho(y,z) \le \rho(x,y)$$ Then you can split that in two and shuffle around the terms. To elaborate: by the triangle inequality we have: $$\rho(x,z) \le \rho(x,y) + \rho(y,z)$$ $$\rho(x,z) - \rho(y,z) \le \rho(x,y)\tag1$$ also by the triangle inequality we have: $$\rho(y,z) \le \rho(y,x) + \rho(x,z) = \rho(x,y) + \rho(x,z)$$ $$\rho(y,z)-\rho(x,y) \le \rho(x,z)$$ $$-\rho(x,y) \le \rho(x,z)-\rho(y,z)\tag2$$ Now combining (1) and (2) we get: $$-\rho(x,y) \le \rho(x,z)-\rho(y,z) \le \rho(x,y)$$ Now since $-a \le b \le a$ is eqivalent to $|b|\le a$ we have: $$|\rho(x,z)-\rho(y,z)| \le \rho(x,y)$$ The second inequality is proven in similar manner, but you need to extend the triangle inequality to involve more intermediate points: $$\rho(a,d) \le \rho(a,b) + \rho(b,c) + \rho(c,d)$$ (you can actually generalize this to arbitrary number of intermediate points).
Volume form for a warped product I would like to know if there exists some useful formulas that let us compute the volume form for a warped product, that is a $n$-dimensional Riemannian manifold of the form $dr^2 + \phi(r)^2 g_{N^{n-1}}$, where $N^{n-1}$ is a $n-1$-dimensional riemannian manifold and $g_{N^{n-1}}$ is the metric induced on it. Also the case where $N^{n-1}$ is the hypershere in $\mathbb{R}^n$ would be useful: in this case computations could be carried out by direct computation, but I find it awkward. I thank you in advance for any suggestion, Mattia.
Let me call your warped product manifold as $(M,g)$. Choose a local coordinate system $\{x^i\}_{i=1}^{n-1}$ on $N$, and denote $\partial_i:=\frac{\partial}{\partial x^i}$. Then $\partial_1,\ldots,\partial_{n-1},\partial_r$ form a basis for the tangent spaces of $M$. Denote $[g]$ to be the matrix whose entries are the components of the warped product metric $g$ with respect to this basis. Then we have \begin{align} [g]=\begin{pmatrix} 1 & \\ & \phi(r)^2[g_N] \end{pmatrix} \end{align} where $[g_N]$ has the similar meaning. Then I guess it is not difficult to compute $\det[g]$. Of course it will be in terms of $\det[g_N]$ and the warping function $\phi$.
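In particular, since $\det[g]=\phi(r)^{2(n-1)}\det[g_N]$, the Riemannian volume form — which is probably what you were after — is $$dV_g=\sqrt{\det[g]}\;dr\wedge dx^1\wedge\cdots\wedge dx^{n-1}=\phi(r)^{n-1}\,dr\wedge dV_{g_N},$$ where $dV_{g_N}$ is the volume form of $(N,g_N)$. For $N^{n-1}=S^{n-1}$ the round sphere this is $\phi(r)^{n-1}\,dr\wedge d\sigma_{S^{n-1}}$, recovering the familiar $r^{n-1}\,dr\wedge d\sigma$ when $\phi(r)=r$ (Euclidean space in polar coordinates).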
Is $f$ continuous at zero? $$\require{cancel}$$ $$f(x) = \begin{cases} \frac{\sin x}{|x|} &\text{ if }x \neq0 \\ \hspace{0.3cm}1 &\text{ if }x=0. \end{cases}$$ My Attempt 1)$$\lim_{x\rightarrow0}\frac{\sin x}{|x|} = \lim_{x\rightarrow0}\frac{\sin x}{x} \frac{x}{|x|} = 1\lim_{x \rightarrow0}\frac{x}{|x|} \\$$ 2)$$\lim_{x\rightarrow0^{-}}\frac{x}{|x|}=-1 \hspace{0.3cm}\text{and}\hspace{0.3cm} \lim_{x\rightarrow0^{+}}\frac{x}{|x|}=1 $$ Therefore: $$1\lim_{x\rightarrow0}\frac{x}{|x|}=DNE $$ so, $f$ is not continuous at $0$. My question is does my solution actually prove that $f$ is not continuous at $0$? or is it continuous at zero because $f(x)=1$ when $x=0$?
1)$$\lim_{x\rightarrow0}\frac{\sin x}{|x|} = \lim_{x\rightarrow0}\frac{\sin x}{x} \frac{x}{|x|} = 1\lim_{x \rightarrow0}\frac{x}{|x|} \\$$ Note that the second equality does not hold, since for two sequences $(a_n)$, $(b_n)$ you only have $$ \lim_{n \to \infty} a_n b_n = \lim_{n \to \infty} a_n \lim_{n \to \infty} b_n $$ provided that both sequences converge. Hence in your case it would be better to start directly with a modification of part 2) : If $f(x)$ is continuous at $0$ then the following equality is necessary: $$ \lim_{x \to 0^+} f(x) = \lim_{x \to 0^-} f(x). $$ But on the one hand you have $$ \lim_{x \to 0^+} f(x) = \lim_{x \to 0^+} \frac{\sin(x)}{|x|} = \lim_{x \to 0^+} \frac{\sin(x)}{x} = 1 $$ and on the other hand $$ \lim_{x \to 0^-} f(x) = \lim_{x \to 0^-} \frac{\sin(x)}{|x|} = \lim_{x \to 0^-} \frac{\sin(x)}{-x} = -\lim_{x \to 0^+} \frac{\sin(x)}{x} = -1. $$ So $f$ can't be continuous at $0$.
How can we show that $\sum_{n=0}^{\infty}{2n^2-n+1\over 4n^2-1}\cdot{1\over n!}=0?$ Consider $$\sum_{n=0}^{\infty}{2n^2-n+1\over 4n^2-1}\cdot{1\over n!}=S\tag1$$ How does one show that $S=\color{red}0?$ An attempt: $${2n^2-n+1\over 4n^2-1}={1\over 2}+{3-2n\over 2(4n^2-1)}={1\over 2}+{1\over 2(2n-1)}-{1\over (2n+1)}$$ $$\sum_{n=0}^{\infty}\left({1\over 2}+{1\over 2(2n-1)}-{1\over (2n+1)}\right)\cdot{1\over n!}\tag2$$ $$\sum_{n=0}^{\infty}\left({1\over 2n-1}-{2\over 2n+1}\right)\cdot{1\over n!}=\color{blue}{-e}\tag3$$ Not sure what is the next step...
$$\sum_{n\geq 0}\frac{1}{(2n-1)n!}=-1+\int_{0}^{1}\sum_{n\geq 1}\frac{x^{2n-2}}{n!}\,dx =-1+\int_{0}^{1}\frac{e^{x^2}-1}{x^2}\,dx$$ $$ \sum_{n\geq 0}\frac{2}{(2n+1)n!}=2\int_{0}^{1}e^{x^2}\,dx $$ and due to integration by parts: $$ \int_{0}^{1}\frac{e^{x^2}-1}{x^2}\,dx = \left.-\frac{e^{x^2}-1}{x}\right|_{0}^{1}+2\int_{0}^{1}e^{x^2}\,dx $$ proving your $(3)$, and then $(2)=S=0$.
First derivative equation with $\cosh^{-1}$ set equal to zero solving This is the equation I'm left with when I took the derivative of a function. I would like to optimise so I'm trying to find a min/max by setting equal to zero. I have been having trouble solving. $$\frac{2\text{arcosh}(y)}{\sqrt{y^2-1}}+2y-4=0$$
You want to minimize $$ f(x)=d^2=x^2+(\cosh x-2)^2. $$ Differentiating, we find that $$ f'(x)=2x+2\cosh x\sinh x-4\sinh x, $$ and that $$ f'(0)=0. $$ Now, $$ f''(x)=8\cosh x\sinh^2(x/2)\geq 0 $$ and $f''(x)=0$ if and only if $x=0$. Thus, you have a strictly convex function. Can you conclude from here? Update If you are not too familiar with convexity, you can argue like this: $f'(0)=0$, and $f''(x)>0$ if $x>0$. This means that $f'$ is increasing for positive $x$. In particular $f'(x)>f'(0)=0$ if $x>0$. But if $f'(x)>0$ for all $x>0$ it means, in turn, that $f$ is increasing for positive $x$. In particular, $f(x)>f(0)=1$ if $x>0$. On the other hand $f$ is even ($f(-x)=f(x)$), so $f(x)>f(0)$ also if $x<0$. We conclude that $f$ has a global minimum at $x=0$.
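In particular, the minimum of $d^2$ is $f(0)=0^2+(\cosh 0-2)^2=1$, attained at $x=0$; reading $f$ as the squared distance from the point $(0,2)$ to the graph of $\cosh$, the closest point is $(0,1)$ and the minimal distance is $1$. Consistently, $y=1$ satisfies your first-derivative equation in the limit, since $\frac{\operatorname{arcosh} y}{\sqrt{y^2-1}}\to 1$ as $y\to 1^{+}$, giving $2\cdot 1+2\cdot 1-4=0$.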
Find $g'(1)$ where $g(x) = \int_{x^2}^{2x}\, \sin(\pi u^2)\,du $ Find $g'(1)$, where $$g(x)=\int_{x^2}^{2x}\, \sin(\pi u^2)\,du, $$ I just want to make sure that my work is correct. I started with setting $f(u)=\sin(\pi u^2)$ then I used some properties: $(-F'(x^2)2x + 2F'(2x))$ therefore, \begin{align} g'(-1) &= -2f(1) + 2f(2)\\ &= 2(-f(1)+f(2))\\ &= 2(\sin(\pi) + \sin(4\pi)) \end{align} is it correct?
Hint Let $f:u\mapsto \sin(\pi u^2 )$ and $F:x\mapsto \int_0^x f(u)du$. then $$g(x)=F(2x)-F(x^2)$$ and $$g'(x)=2F'(2x)-2xF'(x^2)$$ with $F'(t)=f(t)$.
Prove using mathematical induction: for $n \ge 1, 5^{2n} - 4^{2n}$ is divisible by $9$ I have to prove the following statement using mathematical induction. For all integers, $n \ge 1, 5^{2n} - 4^{2n}$ is divisible by 9. I got the base case which is if $n = 1$ and when you plug it in to the equation above you get 9 and 9 is divisible by 9. Now the inductive step is where I'm stuck. I got the inductive hypothesis which is $ 5^{2k} - 4^{2k}$ Now if P(k) is true than P(k+1) must be true. $ 5^{2(k+1)} - 4^{2(k+1)}$ These are the step I gotten so far until I get stuck: $$ 5^{2k+2} - 4^{2k+2} $$ $$ = 5^{2k}\cdot 5^{2} - 4^{2k} \cdot 4{^2} $$ $$ = 5^{2k}\cdot 25 - 4^{2k} \cdot 16 $$ Now after this I have no idea what to do. Any help is appreciated.
$\begin{align}{\bf Hint}\qquad\qquad\qquad\qquad\,\ \color{#c00}{25} &=\,\ \color{#c00}{16 + 9}\\ 25^{\large N} &=\,\ 16^{\large N}\! +\! 9j\\ \Rightarrow\,\ 25^{\large N+1}\! = \color{#c00}{25}\cdot 25^{\large N} &= (16^{\large N}\!+\!9j)(\color{#c00}{16+\!9}) = 16^{\large N+1} +9\,(\cdots)\ \end{align}$ Or, said mod $\,9\!:\,\ \begin{align} 25&\equiv 16\\ 25^{\large N}&\equiv 16^{\large N}\end{align}\ \Rightarrow\, 25^{\large N+1}\equiv 16^{\large N+1}\,$ by the Congruence Product Rule Or, $ $ equivalently, $\ \big[25\equiv 16\big]^{\large N}\!\Rightarrow\, 25^{\large N}\!\equiv 16^{\large N}\, $ by the Congruence Power Rule, which is an inductive extension of the Product Rule.
Identifying cardinals in $\alpha$-recursion theory Throughout, we work in $V=L$. Fix an uncountable cardinal $\kappa$. $\kappa$-recursion theory is the natural generalization of recursion theory from $\omega$ to $\kappa$, using the following analogy: * *Finite = element of $L_\omega$ $\approx$ element of $L_\kappa$ *C.e. = $\Sigma_1$ over $L_\omega$ $\approx$ $\Sigma_1$ over $L_\kappa$ *Computable = $\Delta_1$ over $L_\omega$ $\approx$ $\Delta_1$ over $L_\kappa$. The vast majority of computability-theoretic concepts - e.g. productivity, immunity, etc. - generalize naturally to this setting. However, the converse is not true: there are natural notions at the $\kappa$ level which have no analogues, or no nontrivial analogues, on $\omega$. My question is about one of these - namely, the cardinality predicate: Is the relation "is a cardinal" computable in the sense of $L_\kappa$? It is easy to show that it is $\Pi_1$, but I don't see a $\Sigma_1$ definition, and indeed I don't think there is one. But I don't immediately see how to show that there is none . . .
If the cardinals were $\Sigma_1$-definable, then taking any countable elementary submodel $M$ when $\kappa>\omega_1$, we get that there are some uncountable cardinals in $M$. Collapsing $M$ gives us some $L_\gamma$, for a countable $\gamma$. But now the collapse of those cardinals also satisfy being a cardinal in $L_\gamma$, and being a $\Sigma_1$ property, this is upwards absolute. But now we run into a bit of a problem, since countable ordinals are not usually cardinals. Of course, for $\kappa=\omega_1$, the set of cardinals is $\omega+1$ which is indeed $\Sigma_1$ definable.
Derivative of a large product I need help computing $$ \frac{d}{dx}\prod_{n=1}^{2014}\left(x+\frac{1}{n}\right)\biggr\rvert_{x=0} $$ The answer provided is $\frac{2015}{2\cdot 2013!}$, however, I do not know how to arrive at this answer. Does anyone have any suggestions?
HINT: Write $\prod_{n=1}^{2014}\left(x+\frac1n\right)=e^{\sum_{n=1}^{2014}\log\left(x+\frac1n\right)}$ Can you proceed now?
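Carrying the hint through: writing $P(x)=\prod_{n=1}^{2014}\left(x+\frac1n\right)$, logarithmic differentiation gives $$P'(x)=P(x)\sum_{n=1}^{2014}\frac{1}{x+\frac1n},\qquad P(0)=\prod_{n=1}^{2014}\frac1n=\frac{1}{2014!},$$ so $$P'(0)=\frac{1}{2014!}\sum_{n=1}^{2014}n=\frac{1}{2014!}\cdot\frac{2014\cdot2015}{2}=\frac{2015}{2\cdot 2013!},$$ matching the stated answer.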
Modulo Proof Help? I'm working on a proof for my number theory course and I am a bit confused on how to prove a certain case of the question.. * *If $n$ is a odd positive integer or if $n$ is divisible by $4$ then $$1^3 + 2^3 + 3^3 + ... + (n-1)^3 \equiv 0 \pmod n$$is this statement true if $n$ is even but not divisible by $ 4$? So for the case "$n$ is a positive integer divisible by $4$" I set $n = 4k$ and try to plug $n$ into $\frac{n^{2}(n-1)^{2}}{4}$ But I can't seem find a way to simplify it and otherwise prove its congruent to $0 \pmod n$.
If $n=4k$ then $n^2 = 4kn$; hence $n^2 (n-1)^2 / 4 = n \cdot k \cdot (n-1)^2$, which is divisible by $n$. If $n$ is odd, then $(n-1)^2$ is divisible by $4$, so $n^2(n-1)^2/4 = n \cdot \big(n(n-1)^2/4\big)$, which is $\equiv 0 \pmod n$.
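As for the last part of the question ($n$ even but not divisible by $4$): writing $n=2m$ with $m$ odd, the sum is $$1^3+\cdots+(n-1)^3=\frac{n^2(n-1)^2}{4}=m^2(n-1)^2,$$ and since $m(n-1)^2$ is odd this is $\equiv m=n/2\pmod{n}$, so the statement fails there (e.g. $n=6$: $1+8+27+64+125=225\equiv 3\pmod 6$).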
To find value of $\sin 4x$ Given $\tan x = \frac { 1+ \sqrt{1+x}}{1+ \sqrt{1-x}}$. i have to find value of $\sin 4x$. i write $\sin 4x=4 \frac{ (1-\tan^2 x)(2 \tan x)}{(1+\tan^2 x)^2}$ but it seems very complicated to do this? Any other methods? Thanks
Write $\tan x=\dfrac{1+\sqrt{1+y}}{1+\sqrt{1-y}}$ As $-1\le y\le1$ for real $\tan x$ WLOG let $y=\cos2u$ where $0\le2u\le\pi$ $$\dfrac{1+\sqrt{1+y}}{1+\sqrt{1-y}}=\dfrac{1+\sqrt2\cos u}{1+\sqrt2\sin u}=\dfrac{\dfrac1{\sqrt2}+\cos u}{\dfrac1{\sqrt2}+\sin u}=\dfrac{2\cos\dfrac{45^\circ +u}2\cos\dfrac{45^\circ-u}2}{2\sin\dfrac{45^\circ +u}2\cos\dfrac{45^\circ-u}2}=\cot\dfrac{45^\circ +u}2$$ So, $\tan x=\tan\left(90^\circ-\dfrac{45^\circ +u}2\right)$ $\implies x=n180^\circ+90^\circ-\dfrac{45^\circ +u}2$ where $n$ is any integer $\implies\sin4x=-\sin(90^\circ+2u)=-\cos2u$
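Since $y=\cos2u$, this says simply $\sin 4x=-y$, i.e. minus the quantity appearing under the square roots.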
Prove a metric space in which every infinite subset has a limit point is compact.(Question about one particular proof) This is an exercise from "Baby Rudin" and I don't understand one step of a particular proof based on the textbook's hint. $\mathbf{Statement:}$Let X be a metric space in which every infinite subset has a limit point. Prove that X is compact. $\mathbf{Hint:}$ By Exercises 23 and 24, X has a countable base. It follows that every open cover of X has a countable subcover$G_n$, n=1,2,3,... If no finite subcollection of $G_n$ covers X, then complement $F_n$ of $G_1 \cup.. \cup G_n$ is nonempty for each n, but $\cap F_n$ is empty. If E is a set which contains a point from each $F_n$, consider a limit point of E, and obtain a contradiction. $\mathbf{Proof:}$ Following the hint, we consider a set consisting of one point from the complement of each finite union, e.t.,$x_n\notin G_1∪⋯∪G_n$. Then E cannot be finite. By hypothesis E must have a limit point z, which must belong to some set $G_n$;and since $G_n$ is open, there is a $\epsilon >0 $ such that $B(z,\epsilon) \subset G_n$ . But then $B(z,\epsilon)$ cannot contain $x_m$ if $m \gt n$, and so z cannot be a limit point of ${x_m}$. We have now reached a contradiction. $\mathbf{Question:}$ I don't understand the causal relation between "z cannot be a limit point of ${x_m}$." and that z is not a limit point of E.
The only elements of $E$ that $B(z,\epsilon)$ can possibly contain are among $x_1, \dots, x_n$. Let $a$ be the minimum of $\epsilon$ and the distances $d(z,x_i)$ for those values of $i = 1, \dots, n$ such that $z \ne x_i$. Then $E \cap (B(z,a) - \{z\}) = \varnothing$, hence $z$ cannot be a limit point of $E$. Edit: In response to stud_iisc below, the proof says that $E$ consists of all elements $x_m$, and that $x_m \not\in B(z,\epsilon)$ for $m > n$.
Regular in codimension one VS Singular locus is codimension at least two In Hartshorne, a scheme is regular in codimension one if the local ring at any (non-closed) point representing a codimension one subscheme is a regular local ring (of Krull dimension one). For varieties, the most naive notion of being regular in codimension one (at least to me!) is just to say that the set of singular points is subvariety of codimension at least two. Is my naive definition of "regular in codim one" equivalent to the definition in Hartshorne? (I ask this because my naive definition is easy to verify: for instance, a surface with only ADE singularities is clearly regular in codimension one by my naive definition - there is no need to do any commutative algebra, which I'm terrible at. But I do need to know if this singular variety satisfies the condition in Hartshorne, because having DVRs in codimension one allows me to define Weil divisors.)
The answer is yes. The Hartshorne definition means that the generic point of any irreducible closed subset of codimension one is a regular point. Since the local ring at the generic point can be obtained from the local ring at any other point by localizing (and localizing a regular local ring gives you a regular local ring again) this is equivalent to say that any irreducible closed subset of codimension one admits at least one regular point. The latter is of course equivalent of saying that the singular locus (assuming we have already shown it is closed) has at least codimension two.
Prove that $\sum \limits_{n=0}^{\infty} \frac{n!}{(n+1)!+(n+2)!} = \frac{3}{4}$ I was playing around with factorials on Wolfram|Alpha, when I got this amazing result : $$\sum \limits_{n=0}^{\infty} \dfrac{n!}{(n+1)!+(n+2)!} = \dfrac{3}{4}.$$ Evaluating the first few partial sums makes it obvious that the sum converges to $\approx 0.7$. But I am not able to prove this result algebraically. I tried manipulating the terms and introducing Gamma Function, but without success. Can anyone help me with this infinite sum ? Is there some well-known method of evaluating infinite sums similar to this ? Any help will be gratefully acknowledged. Thanks in advance ! :-) EDIT : I realized that $(n!)$ can be cancelled out from the fraction and the limit of the remaining fraction as $n \to \infty$ can be calculated very easily to be equal to $0.75$. Very silly of me to ask such a question !!! Anyways you can check out the comment by @Did if this "Edit" section does not help.
Thanks to pjs36 and Did, Notice that: $$\begin{align}a_n&=\frac{n!}{(n+1)!+(n+2)!}\\&=\frac1{(n+1)+(n+1)(n+2)}\\&=\frac1{(n+1)(n+3)}\\&=\frac12\left(\frac1{n+1}-\frac1{n+3}\right)\end{align}$$ Thus, we get a telescoping series, leaving us with: $$\sum_{n=0}^\infty\frac{n!}{(n+1)!+(n+2)!}=\frac12\left(\frac1{0+1}+\frac1{1+1}\right)=\frac34$$
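For completeness, the partial sums telescope explicitly: $$\sum_{n=0}^{N}a_n=\frac12\left(1+\frac12-\frac{1}{N+2}-\frac{1}{N+3}\right)\;\xrightarrow[N\to\infty]{}\;\frac34.$$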
Prove that for an open and continuous map $f:X \to Y$ between topological spaces, it is $f(\tau_X)=\tau_Y$ Let $(X,\tau_x),(Y,\tau_Y)$ be topological spaces and let $f:X \to Y$ be a continuous and open function. Prove that $f(\tau_X)=\tau_Y$ This is an excercise of the problems set of my course on Introductory Topology notes. I think what I've been asked to prove is not true since if I chose $X,Y$ both equiped with the discrete topology and $Y$ having more than one element, then any constant function $f$ $(f(x)=c \in Y$ for all $x \in X)$ is continuous and open but $f(\tau_X)=\{ \emptyset, \{c\} \}\not=\tau_Y$ Is this right? Perhaps I'm missunderstanding it. Any clarification or suggestion is welcome. Thanks
Right. In fact, a continuous function is open if and only if $f(\tau_X) \subseteq \tau_Y$, but the converse inclusion needn't hold, in general.
Is it logical to attempt differentiation of y=1? Is y=1 different from y=x^0? I am wary of getting in over my head, after initial searching it is apparent to me that I don't know how to properly phrase this question for the answer I want. I am only working at high school level but in class we learnt differentiation and how when $y = ax^n$, $ \frac{dy}{dx} = anx^{(n-1)}$. I was wondering if this applies to when $y = 1$, $\frac{dy}{dx} = 1\times1^0 = 1$. and also when $y = 1^2$, $\frac{dy}{dx} = 2\times1^1 = 2$. Obviously the gradient should be 0, and when calculated in terms of x it makes sense $y = x^0$, $\frac{dy}{dx} = 0\times \frac{1}{x} = 0$. I was wondering how this should be approached, since to me it implies that you CANNOT have a line without a $y$ AND $x$ term, yet $y=1$ CAN be drawn. Is y=1 a different thing to $y=x^0$? Does $y=1$ even exist in 2D space? Or do you just have to simplify your equation before differentiating it?
In calculus you differentiate functions, not formulas. Much of your confusion comes from thinking about variables $x$ and $y$ instead of functions. But since that's how you're learning I will try to use that vocabulary. The function you have in mind when you write $y=1$ is the function whose value is always $1$, independent of the value of the independent variable which you are implicitly thinking of as "$x$". The graph of that function is a horizontal line of height $1$. Its slope at every point is $0$, so the derivative of the function is $dy/dx=0$. Some functions that you encounter have formulas. For example, the formula $$ y= x $$ has graph a straight line sloping up from the origin while $$ y= x^2 $$ describes a parabola. Early in the study of calculus you show that when a function is given by a formula $$ y = x^n $$ for a positive integer $n$ then its derivative is given by the formula $$ \frac{dy}{dx} = n x^{n-1}. $$ Later in your study you will show that formula works for all values of $n$ (not just integers) except for $n=0$. In that case you know the answer right away because the intent it to describe a constant function, whose graph is horizontal. The formula just happens to give the the right answer except when $x=0$, but you shouldn't use the formula. That's the long answer to your good question.
How to perform this manipulation? (1) $ z^2y+xy^2+x^2z-(x^2y+xz^2+y^2z) $ (2) $ (x-y)(y-z)(z-x) $ How to go from STEP (1) to STEP (2). Nothing I do seems to work. I tried combining terms but that doesn't help. I do not want to go from step 2 to step 1. I arrived at step 1 in some question and I need to go from 1 to 2 to match my answer given in the textbook. I see a lot of comments asking me to just expand 2 and arrive at 1. I could see that too but I am really curious to know how it is done the other way around. Added to that, if you get (1) while solving some question, you obviously have to go from 1 to 2 and not from 2 to 1.
Hint: the first expression can easily be seen to be $0$ for $x=y$ and for $x=z$. Considering it as a polynomial in $x$ of degree $2$, this means that it factors as $\lambda(x-y)(x-z)$, where the leading coefficient $\lambda$ must match the coefficient of $x^2$ in the original expression, i.e. $\lambda=z-y$. Finally, $(z-y)(x-y)(x-z)=(x-y)\bigl(-(y-z)\bigr)\bigl(-(z-x)\bigr)=(x-y)(y-z)(z-x)$, which is expression (2).
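For what it's worth, a quick symbolic check (a SymPy sketch, assuming that library is available) confirms that the two expressions agree:

```python
from sympy import symbols, factor, expand

x, y, z = symbols('x y z')
expr = z**2*y + x*y**2 + x**2*z - (x**2*y + x*z**2 + y**2*z)
print(factor(expr))                            # the three linear factors (up to ordering and sign)
print(expand(expr - (x - y)*(y - z)*(z - x)))  # 0, so (1) and (2) are identical
```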
Computing $7^{13} \mod 40$ I wanted to compute $7^{13} \mod 40$. I showed that $$7^{13} \equiv 2^{13} \equiv 2 \mod 5$$ and $$7^{13} \equiv (-1)^{13} \equiv -1 \mod 8$$. Therefore, I have that $7^{13} - 2$ is a multiple of $5$, whereas $7^{13} +1$ is a multiple of $8$. I wanted to make both equal, so I solved $-2 + 5k = + 8n$ for natural numbers $n,k$ and found that $n = 9, k = 15$ gave a solution (just tried to make $3 + 8n$ a multiple of $5$). Therefore, I have that $$7^{13} \equiv -73 \equiv 7 \mod 40.$$ Is this correct? Moreover, is there an easier way? (I also tried to use the Euler totient function, but $\phi(40) = 16$, so $13 \equiv -3 \mod 16$, but I did not know how to proceed with this.)
$$\phi(40)=16 \implies 7^{16}\equiv 1 \pmod{40}$$ $$7^{16}\equiv 1 \implies 7^{13}\cdot 7^3\equiv 1 \implies 343\cdot 7^{13}\equiv 1 \implies (320+23)\cdot 7^{13}\equiv 1 \implies 23\cdot 7^{13}\equiv 1 \pmod{40}$$ $$23\cdot 7^{13}\equiv 1 \equiv 1+40 \equiv 1+80 \equiv 1+120 \equiv 1+160 \pmod{40}$$ $$23\cdot 7^{13}\equiv 161 \pmod{40}, \qquad 161 = 7\cdot 23$$ Divide by $23$, which is allowed since $(23,40)=1$, so $$23\cdot 7^{13}\equiv 7\cdot 23 \pmod{40} \implies 7^{13}\equiv 7 \pmod{40}$$
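Both the result and a shortcut can be checked directly (Python assumed, purely for illustration):

```python
print(pow(7, 13, 40))  # 7 -- the value asked for
print(pow(7, 4, 40))   # 1 -- 7 actually has order 4 modulo 40
```

Since $7^4\equiv 1\pmod{40}$ (because $7^2=49\equiv 9$ and $9^2=81\equiv 1$), an even shorter route is $7^{13}=(7^4)^3\cdot 7\equiv 7\pmod{40}$.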
Problem involving sequence and inequality Let $(x_n)_{n \ge 1}$ be a sequence of natural numbers such that $(n+1)x_{n+1}-nx_n \gt 0$. Show that, if the sequence is bounded, then there exists $k \in \Bbb N, k \ge 1$ such that $x_n=x_k, \forall n \ge k$.
Since all $(x_n)$ are natural numbers, the given assumption $(n+1)x_{n+1}-nx_n>0$ can be rewritten as $$x_{n+1}>\frac{n}{n+1}x_n=\left(1-\frac{1}{n+1}\right)x_n=x_n-\frac{x_n}{n+1}.$$ If the sequence is bounded, then there exists $M\in\mathbb{N}$ such that all $x_n\le M$. Then for any $n\ge M$: $$\frac{x_n}{n+1}\le\frac{M}{M+1}<1 \quad \implies \quad x_{n+1}>x_n-\frac{x_n}{n+1}>x_n-1 \quad \implies \quad x_{n+1}\ge x_n,$$ the last inequality being true because the terms of the sequence are integers. If $x_{n+1}>x_n$ held for infinitely many $n\ge M$, the sequence would be unbounded (all terms are integers, so every time the sequence grows, it grows at least by one), contradicting the assumption. Therefore $x_{n+1}>x_n$ holds for at most finitely many $n\ge M$. If it never holds for $n\ge M$, take $k=M$; otherwise let $k$ be the index of the larger element (i.e. $k=n+1$) in the last pair with $x_{n+1}>x_n$. In either case $x_{n+1}=x_n$ for all $n\ge k$, hence $x_n=x_k$ for all $n\ge k$.
Functions as arrows in a category with more complex objects I'm beginning to learn category theory and, seeing some examples, I wonder whether (or why) it is necessary that arrows preserve some structure of the objects. I know that there are functors that "forget" some information of a category, but my point is to do this inside the category. For example, can I define a category $Top^*$ with the class of all topological spaces as objects and arbitrary functions as arrows, or a category $Rng^*$ with rings as objects and group homomorphisms as arrows? I know it doesn't make much sense to study this (because what's the point of objects in $Top^*$ having topologies), however my question is whether the category axioms allow such classes.
The definition of category does not require that objects have any kind of structure or that arrows preserve structure. You can define a category by drawing some dots and putting some arrows between dots, as long as the requirements for composition, existence of identities, and associativity are satisfied. The definition of category is abstract in the sense that it says a category has objects and arrows and imposes axioms on the objects and arrows. In particular, both of your examples $Top^*$ and $Rng^*$ are perfectly legitimate categories; they just aren't the ones usually studied, because their arrows ignore part of the structure of the objects.
Show that $x^e\le e^x$ for all $x\gt 0$ and $x \in \mathbb {R}$ I understand that this question may seem quite simple, but although I can see different ways of showing this, I don't understand how it follows from the context I was given (i.e why the second part of the question begins with "hence"). It may also help to bear in mind that I have only just started teaching myself calculus and so far I have only covered differential calculus up to the level taught in secondary schools. The question is in two parts (I understand that to answer (i) you simply find $f'(x)$ and evaluate $f'(e)$ to show that it is equal to zero and is therefore, since there is only one stationary point which is a maximum point as implied by the question, the maximum point): $f(x) = {\ln x\over x}$, $x\gt 0$ (i) Show that the maximum point on the graph of $y = f(x)$ occurs at the point $\left(e,\frac{1}{e}\right)$. (ii) Hence, show that $x^e\le e^x$ for all $x\gt 0$ Any help would be greatly appreciated.
By i) you have determined that $f(x) = \frac {\ln x}x \le \frac 1e$ for all $x > 0$. So $e \ln x \le x$. So $e^{e\ln x} \le e^x$. And $e^{e\ln x}=(e^{\ln x})^e = x^e \le e^x$. == Part i) is a matter of taking $f'(x)$ and setting it to zero. By the product rule, $f'(x) = \left(\frac {\ln x}x\right)' = \ln x\left(-\frac 1{x^2}\right) + \frac 1x\cdot\frac 1x = \frac{1- \ln x}{x^2}$. $f'(x) = 0 \implies \ln x = 1 \implies x = e$. Differentiating once more, $f''(x) = \frac{2\ln x - 3}{x^3}$, which is negative at $x = e$, so $x=e$, $f(x) = \frac 1e$ is a maximum, and for all $x > 0$, $\frac {\ln x}x \le \frac 1e$.
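A rough numerical sanity check of the resulting inequality (a Python sketch; the grid of sample points is an arbitrary choice):

```python
import math

for x in (0.1, 0.5, 1.0, 2.0, math.e, 5.0, 10.0):
    # compare x**e against e**x; the final column should always be True
    print(x, x**math.e, math.e**x, x**math.e <= math.e**x)
```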
Difference between gradient descent and finding stationary points with calculus? Take the example of trying to optimize a regression line: $$ y = b + mx $$ around some data. Method #1 You can do this by getting the partial derivatives of the error function: $$ z = (1/2) \Sigma(f(x) - y)^2 $$ with respect to b and m, and then setting these equations to zero to find the stationary points. Method #2 Use the gradient descent algorithm to find a local minimum. Question: Method one 'seems' superior. Why couldn't you find all the stationary points along the two parameter axes (z is the error going upwards) and just pick the parameter values that give the minimum? Why can't you do this and avoid the iterative process associated with gradient descent? Where am I going wrong with my intuition..?
Because the objective function (the sum of the errors squared, over the data points) is precisely a quadratic function, setting its gradient to zero gives a pair of linear equations that can be solved exactly, and gradient descent on it converges very quickly: with an exact line search each steepest-descent step shrinks the remaining error by a fixed factor, and in the special case of circular contours a single step lands exactly on the minimum. This is true not only for a one-dimensional line, but for any multivariate linear fit. The calculations needed to take a gradient step are essentially the same as those needed to set up the simultaneous equations. And indeed, the practical person would use Method 1. However, if your objective function is not a perfect quadratic form, then two things happen. Method 1 becomes impossible, since you can't solve the simultaneous non-linear equations in closed form, and the gradient descent method will choose a somewhat inferior initial direction, so that multiple iterations will be needed. Here, the practical person is forced to use Method 2 (or better, some method like conjugate gradient that deals with issues like the slow zigzagging toward the solution which often happens in naive steepest descent).
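A minimal sketch of the comparison (NumPy assumed; the data set, learning rate, and iteration count are made-up illustration values, not anything prescribed by the question): both methods land on the same line for this quadratic objective.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(x.size)   # noisy line, made up for the demo

# Method 1: set the partial derivatives of the squared error to zero and
# solve the resulting linear (normal) equations in one shot.
A = np.column_stack([np.ones_like(x), x])
b_exact, m_exact = np.linalg.lstsq(A, y, rcond=None)[0]

# Method 2: plain gradient descent on E(b, m) = 0.5 * sum((b + m*x - y)**2).
b_gd, m_gd, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    r = b_gd + m_gd * x - y       # residuals
    b_gd -= lr * r.sum()          # dE/db = sum of residuals
    m_gd -= lr * (r * x).sum()    # dE/dm = sum of residuals * x

print(b_exact, m_exact)   # closed-form answer
print(b_gd, m_gd)         # agrees closely after enough iterations
```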
What is the relationship of bias and consistency of an estimator? Let $(\mathfrak{X}_n, (P_\vartheta)_{\vartheta \in \Theta})$ be a statistical model with $n$ samples. Then $\hat{\vartheta}_n: \mathfrak{X}_n \rightarrow \tilde{\Theta}$ is called an estimator for $\vartheta$. * *$\hat{\vartheta}_n$ is called unbiased, if $\mathbb{E}(\hat{\vartheta}_n) = \vartheta$. *$\hat{\vartheta}_n$ is called consistent, if $\lim_{n \rightarrow \infty} P(|\hat{\vartheta}_n - \vartheta| > \varepsilon) = 0$ for every $\varepsilon > 0$. There are estimators which are both unbiased and consistent: Let $X_1, \dots, X_n \stackrel{iid}{\sim} Bin(1, \vartheta)$ with $\vartheta \in (0, 1)$. Then $\hat{\vartheta}_n = \frac{1}{n} \sum_{i=1}^n x_i$ is unbiased and consistent. There are estimators which are neither unbiased nor consistent. The estimator $\hat{\vartheta} = 0.5$ for the setting from before (if $\vartheta \neq 0.5$). But are there unbiased estimators which are not consistent? Are there consistent estimators which are not unbiased?
Yes to both. For the sequence of random variables you give: $\hat{\vartheta}^\prime_n =x_n$ is unbiased but clearly not consistent. $\hat{\vartheta}^{\prime\prime}_n =\frac1n\sum_{i=1}^n x_i + \frac1n$ is biased but consistent.
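Both claims are easy to see in a small simulation (a Python/NumPy sketch; $\vartheta=0.3$, the number of repetitions, and the sample sizes are arbitrary choices): the last-observation estimator stays unbiased but its spread never shrinks, while the shifted mean carries a $1/n$ bias that, together with its spread, vanishes as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3   # arbitrary "true" parameter for the illustration

for n in (10, 100, 1000):
    samples = rng.binomial(1, theta, size=(2000, n))   # 2000 repetitions of n Bernoulli draws
    last = samples[:, -1]                              # theta'_n  = x_n        (unbiased, not consistent)
    shifted = samples.mean(axis=1) + 1.0 / n           # theta''_n = mean + 1/n (biased, consistent)
    # mean of `last` stays near 0.3 but its std never shrinks;
    # mean of `shifted` is about 0.3 + 1/n and its std shrinks like 1/sqrt(n)
    print(n, last.mean(), last.std(), shifted.mean(), shifted.std())
```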