Limit of a function with powers (L'Hôpital doesn't work) I have a little problem with the limit of this function: $\lim_{x \to \infty} x^2(2017^{\frac{1}{x}} - 2017^{\frac{1}{x+1}})$ I have tried L'Hôpital's rule twice, but it doesn't work. Now I have no idea how to do it.
Let us consider $$A=x^2(a^{\frac{1}{x}} -a^{\frac{1}{x+1}})$$ $$a^{\frac{1}{x}}=e^{\frac{\log(a)}x}=1+\frac{\log (a)}{x}+\frac{\log ^2(a)}{2 x^2}+\frac{\log ^3(a)}{6 x^3}+O\left(\frac{1}{x^4}\right)$$ Do the same for the other term; subtract one from the other, use a common denominator and so on. For the first term, $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\log(a)\left(\frac{1}{x}-\frac{1}{x+1} \right)+\cdots=\log(a)\frac{x+1-x}{x(x+1)}+\cdots=\frac{\log(a)}{x(x+1)}+\cdots$$ and so on. You should arrive at $$a^{\frac{1}{x}} -a^{\frac{1}{x+1}}=\frac{\log (a)}{x^2}+\frac{\log ^2(a)-\log (a)}{x^3}+O\left(\frac{1}{x^4}\right)$$ which will show the limit and how it is approached.
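A quick numerical sanity check of this in R (a minimal sketch, not part of the argument): the expansion predicts the limit $\log(2017)\approx 7.6094$.

    a <- 2017
    f <- function(x) x^2 * (a^(1/x) - a^(1/(x + 1)))
    f(10^4)    # 7.6094... for large x
    log(a)     # 7.6094..., the predicted limit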
Minimal polynomial of $\sqrt3 + i$ over $\mathbb{R}$ and $\mathbb{C}$ I was able to show that $x^4-4x^2+16$ is the minimal polynomial of $\sqrt3 + i$ over $\mathbb{Q}$. Does this imply that this is also the minimal polynomial over $\mathbb{R}$ and $\mathbb{C}$?
No. Every polynomial in $\mathbb R[X]$ can be factored into linear factors and quadratic factors, depending on how many roots are real and how many are not, thus the minimal polynomial has degree at most $2$. In this case it is of degree $2$, i.e. $(x-\alpha)(x-\overline{\alpha})$. Note that when there is a real root $\beta$ you can factor out $(x-\beta)$ and repeat; when there is a non-real root $\gamma$, i.e. $p(\gamma)=0$, you can see that $p(\overline{\gamma})=\overline{p(\gamma)}=0$ also, thus you can factor out $(x-\gamma)(x-\overline{\gamma})$, which is of course in $\mathbb R[X]$. Every polynomial in $\mathbb C[X]$ can be factored into linear factors because every polynomial of degree at least $1$ has a root, by the Fundamental Theorem of Algebra. Thus the minimal polynomial has degree $1$, i.e. $(x- \alpha) \in \mathbb C[X]$.
Expected value of a coin flip game A fair coin is flipped 100 times in a row. For each flip, if it comes up heads you win \$2, if it comes up tails you lose \$1. You start with \$50; if you run out of money you must stop prematurely. If you don't run out of money you stop after 100 flips. What is the expected value of this game? So for the case where you have no stopping condition before 100 flips you get that $E(100)=50$, and I assume the possibility of stopping will reduce this expectation somewhat. But how exactly is it changed?
Try it with a smaller number. Assume you are allowed eight flips and start with $4$. As you say, without the constraint of the starting value, the expected value is $4$. You run out of money if you throw TTTT, HTTTTTT, THTTTTT, TTHTTTT, TTTHTTT. Throwing four tails deprives you of four flips worth $2$ and has a chance of $\frac 1{16}$. Each of the rest deprives you of one throw worth $\frac 12$ and has a chance of $\frac 1{128}$. The expected value is then $$4-2\cdot \frac1{16}-4 \cdot \frac12 \cdot \frac 1{128}=3\frac {55}{64}$$ Similarly you lose chances if you throw $50$ tails in a row, or one head and $52$ tails among the first $53$ throws (where the head is among the first $50$), or so on. Each of these is a very small chance so the reduction in expected value is very small.
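Here is a minimal R simulation of the scaled-down game (eight flips, starting with \$4; my own sketch, returning net winnings), which should land near $3\frac{55}{64}\approx 3.8594$:

    sim_winnings <- function(n_flips = 8, start = 4) {
      money <- start
      for (i in 1:n_flips) {
        if (money == 0) break          # ruined: forced to stop early
        money <- money + sample(c(2, -1), 1)
      }
      money - start                    # net winnings over the game
    }
    mean(replicate(10^5, sim_winnings()))   # approx 3.859 = 3 + 55/64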
Why is $2\pi$ the period of $e^{iz}$? My teacher said that $2\pi$ the period of $e^{iz}$. I tried to show this but I am not sure how to continue $$e^{iz+2\pi}=e^{ix+2\pi-y}=e^{2\pi-y}(\cos x+i\sin x)$$ But I am not sure how this equals $$e^{-y}(\cos x+i\sin x)=e^{-y+i x}=e^{iz}$$ Is $2\pi$ the period of $e^{iz}$?
Let $f(z) = e^{iz}$. Then $f(z + 2\pi) = e^{i(z + 2\pi)} = e^{iz + 2\pi i}= e^{iz}e^{2\pi i}$ and $e^{2\pi i} = \cos 2\pi + i \sin 2\pi = 1$. So $f(z + 2\pi) = e^{iz} = f(z)$, and the period of $e^{iz}$ as a function of $z$ is $2\pi$. But the period of $e^z$ (notice $z$ is not multiplied by $i$; $z$ is a variable complex number) as a function of $z$ is $2\pi i$.
Possible projective duality between two determinantal formulas for triangle area The shoelace formula for the area of a polygon in terms of consecutive vertices is well-known. In the particular case of a triangle, this may be written using a 3-by-3 determinant as $$\text{area of triangle}=\frac{1}{2}\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1\end{vmatrix}$$ where $(x_k,y_k)_{k=1,2,3}$ are the three vertices. Several proofs have appeared on this site already. What may be surprising is that there's another determinantal formula for the area in terms of the three lines. Specifically, a triangle with lines $a_k x+b_k y+c_k=0$ for $k=1,2,3$ satisfies $$\text{area of triangle}=\frac{1}{2C_1C_2C_3}\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{vmatrix}^2$$ where $C_1,C_2,C_3$ are the cofactors of the third column. (An older question has multiple proofs.) The parallels between these formulas intrigue me: Both express the area of a triangle in terms of a determinant, but one in terms of points and the other in terms of lines. This is reminiscent of the duality of lines and points in projective geometry. Hence my question: Can these two formulas indeed be understood through projective duality?
I believe the connection between these two formulas is just a consequence of Cramer's rule. Let $A=\begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{pmatrix}$ and $B=\begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1\end{pmatrix}$. Let us assume that $a_jx_i+b_jy_i+c_j=0$, whenever $i\neq j$, and $d_i=a_ix_i+b_iy_i+c_i$. Since this is a triangle then $d_i\neq 0$, for every $i$. Hence, $AB=\begin{pmatrix} d_1 & 0 & 0 \\ 0 & d_2 & 0 \\ 0 & 0 & d_3\end{pmatrix}$ and $\begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{pmatrix}\begin{pmatrix} \frac{x_1}{d_1} & \frac{x_2}{d_2} & \frac{x_3}{d_3} \\ \frac{y_1}{d_1} & \frac{y_2}{d_2} & \frac{y_3}{d_3} \\ \frac{1}{d_1} & \frac{1}{d_2} & \frac{1}{d_3}\end{pmatrix}=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}$. By Cramer's rule, $\frac{1}{\det(A)}\begin{vmatrix} a_1 & b_1 & 1 \\ a_2 & b_2 & 0 \\ a_3 & b_3 & 0\end{vmatrix}=\frac{1}{d_1}$, $\frac{1}{\det(A)}\begin{vmatrix} a_1 & b_1 & 0 \\ a_2 & b_2 & 1 \\ a_3 & b_3 & 0\end{vmatrix}=\frac{1}{d_2}$, $\frac{1}{\det(A)}\begin{vmatrix} a_1 & b_1 & 0 \\ a_2 & b_2 & 0 \\ a_3 & b_3 & 1\end{vmatrix}=\frac{1}{d_3}$. So $\det(AB)=d_1d_2d_3=\dfrac{\det(A)^3}{C_1C_2C_3}$. Thus, $\det(B)=\dfrac{\det(A)^2}{C_1C_2C_3}$.
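To see the identity $\det(B)=\det(A)^2/(C_1C_2C_3)$ in action, here is a small R check on a triangle of my own choosing, with vertices $(0,0),(1,0),(0,1)$ (line $j$ passes through the two vertices other than vertex $j$):

    A <- rbind(c(1, 1, -1),   # x + y - 1 = 0, line opposite vertex 1
               c(1, 0,  0),   # x = 0, line opposite vertex 2
               c(0, 1,  0))   # y = 0, line opposite vertex 3
    B <- rbind(c(0, 1, 0),    # columns are the vertices (0,0), (1,0), (0,1)
               c(0, 0, 1),
               c(1, 1, 1))
    cof <- function(M, i, j) (-1)^(i + j) * det(M[-i, -j, drop = FALSE])
    C <- sapply(1:3, function(i) cof(A, i, 3))   # cofactors of third column
    det(B); det(A)^2 / prod(C)   # both equal 1; the area is det(B)/2 = 1/2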
Double counting the number of proper divisors Suppose $n$ is a composite natural number. Then $n$ has a unique prime factorization. To count the number of proper divisors, simply take the product of the exponents plus 1 in the prime factorization. $$n = \prod_{i = 1}^{k} p_i^{a_i}$$ $$\mbox{proper divisors} = \prod_{i = 1}^{k}(a_i+1)$$ Is 1 counted multiple times by doing this? For instance, I can choose from $a_1+1$ factors contributed from $p_1$, namely $ 1, p_1, p_1^2, \dots , p_1^{a_1} $ Don't I count 1 multiple times?
In order to count one, the exponent on each prime factor has to be zero; that is, the divisor $1$ corresponds to the single choice of all exponents equal to zero. So you are only counting it once.
Finding values that make the series converge For which values of $\theta \in [0,2\pi)$ does the sum converge? And then for these values of $\theta$, find the sum of the series. The given series for this question is $\sum_{n=0}^{\infty} (\sin\theta)^n$ So this particular series is a geometric series, and geometric series converge when $r<1$ and converge to $\frac{a}{1-r}$, and diverge when $r\geq 1$. So referring to the unit circle, wouldn't all possible values of $\theta$ include $0, \frac{\pi}{6}, \frac{\pi}{4}, \frac{\pi}{3},\frac{2\pi}{3},\frac{3\pi}{4}, \frac{5\pi}{6}, \pi, \frac{7\pi}{6}, \frac{5\pi}{4}, \frac{4\pi}{3}, \frac{5\pi}{3}, \frac{7\pi}{4},\frac{11\pi}{6}$? Is this logic right? So then $a$ would be 1 for this equation, and $r$ would be all of these values listed above?
The geometric series $\sum_{n=0}^{\infty} x^n$ converges if and only if $|x|<1$. You have noted that for the most part. When is $|\sin (\theta)|<1$? Almost always, namely except when $|\sin (\theta)|=1$. So the series diverges only when $\sin (\theta)=1$ or $\sin (\theta)=-1$, that is, at $\theta=\frac{\pi}{2}$ and $\theta=\frac{3\pi}{2}$ in $[0,2\pi)$. There are infinitely many admissible $\theta \in [0,2\pi)$, not just the ones you've listed; for each of them the sum is $\frac{1}{1-\sin\theta}$.
Is there any continuous function that satisfies the following conditions? Is there any continuous function $y=f(x)$ on $[0,1]$ that satisfies the following conditions? $$ f(0) = f(1) = 0, $$ and $$ f(a)^2-f(2a)f(b)+f(b)^2<0, $$ for some $0<a<b<1$. I tried to test with several functions (with different $a,b$) but none of them satisfied the conditions. Any help is appreciated. Thank you in advance.
All you have to do is choose numbers $a,b$ so that $0\lt a\lt 2a\lt b\lt1$ (or $0\lt a\lt b\lt2a\lt1$), then find numbers $y_1,y_2,y_3$ satisfying $y_1^2-y_2y_3+y_3^2\lt0,$ and then construct a continuous function (e.g. a fourth degree polynomial) $f(x)$ with $f(0)=f(1)=0,\ f(a)=y_1,\ f(2a)=y_2,\ f(b)=y_3.$ For example: $$f(x)=x(x-1)(4x-1)(8x-7),\ a=\frac14,\ b=\frac34$$
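A one-line R check of the proposed example, evaluating the constraint at $a=\frac14$, $b=\frac34$:

    f <- function(x) x * (x - 1) * (4*x - 1) * (8*x - 7)
    a <- 1/4; b <- 3/4
    c(f(0), f(1))                     # both 0
    f(a)^2 - f(2*a)*f(b) + f(b)^2     # -9/64, negative as required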
How to determine if a polynomial is bounded from below? Given the coefficients of a multivariable polynomial, how can I determine if it's bounded from below? In other words, given a polynomial $p\left(x_1,x_2,...x_n\right)$, does there exist a real number $C$ such that $p\left(x_1,x_2,...x_n\right) \ge C$? I do not need to find the minimum value of the polynomial, I just want to know whether or not it is bounded.
Deciding whether the polynomial $$p (x_1, x_2, \dots, x_n) - c$$ is globally nonnegative is not easy. Fortunately, deciding whether $p (x_1, x_2, \dots, x_n) - c$ can be written as a sum of squares (SOS) is easy, as the decision procedure is a semidefinite program, which is in P. However, not all globally nonnegative polynomials can be written as a sum of squares. The classic example clarifying this gap, discussed in Parrilo's work [0], is the Motzkin polynomial $x^4y^2+x^2y^4-3x^2y^2+1$, which is nonnegative everywhere yet is not a sum of squares. [0] Pablo Parrilo, Semidefinite programming relaxations for semialgebraic problems, 2001.
How to draw a regular pentagon with compass and straightedge I remember reading that Gauss managed to construct a regular pentagon with just a compass and straightedge, but I don't remember the particulars of how he did this. Could someone help me out and give me instructions on how to do this?
I'm not sure if this one is Gauss's, but here's the one I use:

1. Draw a circle. Let the center be $O$.
2. Define a direction as "left" and draw a line from the center going "left" until you hit the circle. This segment is $OA$.
3. Draw another line segment, this time going "up" (this is perfectly legal - you should know how to construct a perpendicular line to a segment). This segment is $OB$.
4. Find the midpoint of $OA$, calling it $M$.
5. Draw $BM$.
6. Find the angle bisector of $BMO$ and draw it until you hit $OB$. Call this intersection $I$.
7. Through $I$, draw a line perpendicular to $OB$ going "left" until you hit the circle at a point $C$. $BC$ is now one side of the pentagon, and the rest is relatively simple (just draw the circle centered at $C$ passing through $B$ to get the third vertex, etc.).
$E \sim \text{Binom}(n, .5)$ and $F \sim \text{Binom}(n+1, .5).$ What is $P(F > E)?$ Erica tosses a fair coin $n$ times and, independently, Fred tosses a fair coin $n+1$ times. What is the probability Fred gets more heads than Erica? I have been able to solve this problem theoretically, and found that the probability is $1/2.$ The next step is to code it in R, and I am not sure where to begin.
If you want R code for a simulation, here is about the simplest possible version. I used a million iterations with $n = 9.$ With a million iterations, you can expect about three place accuracy.

    m = 10^6;  n = 9
    erica = rbinom(m,n,.5);  fred = rbinom(m,n+1,.5)
    mean(fred > erica)  # mean of logical vector is proportion of TRUEs
    ## 0.500542         # aprx P(F > E) = 1/2

Extras:

    MAT = cbind(fred, erica)
    head(MAT)  # first 6 rows of m x 2 matrix
         fred erica
    [1,]    3     4
    [2,]    4     4
    [3,]    3     3
    [4,]    3     5
    [5,]    5     6
    [6,]    5     3
    mean(fred);  mean(erica)
    ## 4.999831  # aprx 5 = 10(.5)
    ## 4.498146  # aprx 4.5 = 9(.5)
    d = fred - erica
    summary(d)
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    -9.0000 -1.0000  1.0000  0.5017  2.0000 10.0000
    sd(d)
    ## 2.178673
    hist(d, br=(-10:10)+.5, prob=T, col="skyblue2")
    abline(v=mean(d), col="red", lty="dashed", lwd=2)
    curve(dnorm(x, mean(d), sd(d)), col="blue", lwd=2, add=T)
Pole of the function $f(z)=\frac{z}{(1-e^z)\sin z}$. Let $f(z)$ be a meromorphic function given by $$f(z)=\frac{z}{(1-e^z)\sin z}.$$ Prove or disprove that "$z=0$ is a pole of order $2$". 1st Argument: Firstly we can rewrite the given function as $\displaystyle f(z)=\frac{z}{\sin z}\cdot\frac{1}{1-e^z}$. Now $\displaystyle \frac{z}{\sin z}$ has a removable singularity at $z=0$. Also, $\displaystyle \frac{1}{1-e^z}$ has a pole of order $1$ at $z=0$. So finally the function has a pole of order $1$ at $z=0$. 2nd Argument: But we know that "A function $f$ has a pole of order $m$ if and only if $(z-a)^mf(z)$ has a removable singularity at $z=a$." So here, if we can show that $\displaystyle z^2f(z)=\frac{z^3}{(1-e^z)\sin z}=g(z)$ (say) has a removable singularity at $z=0$ then we can show that $f$ has a pole of order $2$ at $z=0$. Now, as $\displaystyle \lim_{z\to 0}z\cdot g(z)=0$, $g$ has a removable singularity at $z=0$ and consequently $f$ has a pole of order $2$ at $z=0$. I'm confused about which is correct. I could not find any mistake in either of my arguments. Can anyone detect the fallacy?
You remember the result wrongly: a holomorphic function $f$ on $B_r(a)\setminus\{a\}$ (the punctured disc of radius $r$ around $a$) has a pole at $a$ if and only if it does not have a removable singularity at $a$ but $(z-a)^nf(z)$ has a removable singularity at $a$ for some integer $n>0$. The minimum such integer is the order of the pole. It is easy to see that this is the same as saying that there exists an integer $m$ such that $(z-a)^mf(z)$ has a removable singularity and $$ \lim_{z\to a}(z-a)^mf(z)\ne0 $$ In this case, $m$ is uniquely determined and is the order of the pole. It can be shown that if $f$ has a pole of order $m$ at $a$, then its representation as a Laurent series $$ f(z)=\frac{c_{-m}}{(z-a)^m}+\dotsb+ \frac{c_{-1}}{z-a}+c_0+c_1(z-a)+\dotsb $$ holds for every $z\in B_r(a)\setminus\{a\}$. In your case, the singularity at $0$ is not removable, but $$ zf(z)=-\frac{z}{e^z-1}\cdot\frac{z}{\sin z} $$ has a removable singularity at $0$. Hence the order of the pole is $1$. Note that $\lim_{z\to0}zf(z)=-1\ne0$.
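A quick numerical check in R of the limit $\lim_{z\to0}zf(z)=-1$ (complex arithmetic works out of the box):

    f <- function(z) z / ((1 - exp(z)) * sin(z))
    z <- 1e-4 + 0i
    z * f(z)   # approx -1+0i, nonzero, so the pole at 0 has order 1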
Why does there not exist a surjective morphism from the affine line onto a variety of two points? Sorry if the question is simple. Why does there not exist a surjective morphism (as a morphism of schemes) from $\mathbb{A}^1$ onto a variety $Y$ with only two points? I know that there does not exist such a morphism but I don't know why. J. Harris states this in "Algebraic Geometry (A First Course)", Lecture 10, Quotients (p. 124).
Here $k$ is a field. Such a morphism is induced by a morphism of algebras $f:k\oplus k\rightarrow k[X]$ with $f(1,1)=1$. You also have $f(1,0)f(0,1)=f(0,0)=0$. You can suppose $f(1,0)=0$ without loss of generality, and this implies $f(0,1)=1$. If $P$ is an irreducible polynomial, then $f^{-1}((P))=(1,0)$, since $f(0,1)=1$ is not in $(P)$. So $(1,0)$ is the only point in the image of the map $\operatorname{Spec}(k[X])\rightarrow \operatorname{Spec}(k\oplus k)$ induced by $f$, which is therefore not surjective.
Finding the hypotenuse I have the question "The force vectors in the following diagrams are all coplanar but not drawn to scale. Use appropriate trigonometry to answer the following questions. Calculate the resultant force on the following objects and the acceleration it produces." For this I have made a triangle and have used Pythagoras to find the length of the hypotenuse R. However, the solutions say that the answer for the length R should be 4.2 N, and I do not understand how this is achieved.
First combine the two horizontal forces. As they act in opposite directions, you get 3 N as the resultant horizontal force. Now, $\sqrt{(3)^2 + (3)^2} = \sqrt{18} \approx 4.24$ N.
Proof of simple conditional logic with resolution I have a problem where there are two traffic lights in a junction. If the red light $R_i$ in traffic light $i=1,2$ is on, then either green $G$ or yellow $Y$ is on in the other light, and vice versa. I think this can be expressed as $$(R_1 \leftrightarrow (G_2 \lor Y_2)) \land (R_2 \leftrightarrow(G_1 \lor Y_1))$$ Also we need to enforce that two lights cannot be on simultaneously and that exactly one light is on at each time, \begin{align}\neg (G_i \land Y_i)\land \neg(G_i \land R_i) \land \neg (Y_i \land R_i)\land(G_i\lor Y_i \lor R_i)\end{align} How would one prove with resolution that two red lights cannot be on simultaneously? Obviously if $R_i=1$, then either $G_{i^\prime}$ or $Y_{i^\prime}$ is 1 ($i^\prime = 1$ if $i = 2$ and vice versa), because otherwise $R_i \leftrightarrow (G_{i^\prime} \lor Y_{i^\prime}) =0$. Thus because $G_{i^\prime}$ or $Y_{i^\prime}$ is 1, $R_{i^\prime}$ cannot be 1 simultaneously, because we would violate the clauses that exactly one light is on (to be more specific, either $\neg(G_{i^\prime} \land R_{i^\prime})$ or $\neg (Y_{i^\prime} \land R_{i^\prime})$ would be $0$). It seems clear cut, but I cannot formulate the formal resolution proof. Any help?
While @MauroALLEGRANZA has given the systematic solution, let me show a shortcut I'd use if I were to solve this problem with paper and pencil. These three clauses, which are part of your assumptions, $$(\neg G_i \vee \neg R_i) \wedge (\neg Y_i \vee \neg R_i) \wedge (G_i \vee Y_i \vee R_i)$$ are easily verified to be the CNF equivalent of $(G_i \vee Y_i) \leftrightarrow (\neg R_i)$. By transitivity of equivalence you are done: $$ R_i \leftrightarrow (G_{3-i} \vee Y_{3-i}) \leftrightarrow (\neg R_{3-i}) ~~~\text{ for } i \in \{1,2\} \enspace.$$ As an aside, real traffic lights may have simultaneous red lights for both intersecting roads, because the actual safety requirement they have to satisfy is the weaker $$ (G_{3-i} \vee Y_{3-i}) \rightarrow R_i ~~~\text{ for } i \in \{1,2\} \enspace. $$
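If you want to double-check the claim semantically, independently of the resolution argument, here is a small brute-force truth-table sweep in R (my own sanity check) over the six atoms:

    vals <- expand.grid(rep(list(c(FALSE, TRUE)), 6))
    names(vals) <- c("G1", "Y1", "R1", "G2", "Y2", "R2")
    ok <- with(vals,
      (R1 == (G2 | Y2)) & (R2 == (G1 | Y1)) &          # the biconditionals
      !(G1 & Y1) & !(G1 & R1) & !(Y1 & R1) & (G1 | Y1 | R1) &
      !(G2 & Y2) & !(G2 & R2) & !(Y2 & R2) & (G2 | Y2 | R2))
    any(vals$R1[ok] & vals$R2[ok])   # FALSE: no model has both reds on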
Equivalence of convergence in metric spaces Let $\{x^{(k)}\}$ be a sequence in $\mathbb{R}^n$ with $x^{(k)} = (x_1^{(k)}, x_2^{(k)}, \ldots, x_n^{(k)})$, and $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n.$ Then $\{x^{(k)}\}$ converges to $x$ with respect to the $l^2$-metric $\rho_2$ if and only if $\{x^{(k)}\}$ converges to $x$ with respect to the $l^1$-metric $\rho_1$. Well, I need to show that for all $\epsilon > 0$, there exists an $N$ such that $||x^{k}-x||_2 < \epsilon$ if and only if $||x^{k}-x||_1 < \epsilon$. We have that $\rho_2(x^{(k)},x) = ||x^{k}-x||_2 = (\sum |x_j^{(k)} - x_j|^2)^{1/2}$, and that $\rho_1(x^{(k)}, x) = ||x^{k}-x||_1 = \sum |x_j^{(k)} - x_j|$. Previously, we showed that convergence of sequences for metric spaces is equivalent if we can find a constant $c > 1$ such that $\frac{1}{c} \rho_p(x,y) \leq \rho_q(x,y) \leq c\rho_p(x,y)$, so I thought maybe finding a $c$ such that $\frac{1}{c} \rho_2(x,y) \leq \rho_1(x,y) \leq c\rho_2(x,y)$ would be easier than playing with epsilons, but I can't find such a c. Can I have some help?
Your intuition is right. Let's see how to find these constants. We'll use the Cauchy-Schwarz inequality on $\mathbb{R}^n$, which tells us that $| a \cdot b | \le \Vert a \Vert_2 \Vert b \Vert_2$ for all $a,b \in \mathbb{R}^n$. To start off we use Cauchy-Schwarz to estimate $$ \Vert x \Vert_1 = \sum_{i=1}^n |x_i| = \sum_{i=1}^n |x_i | 1 \le \left(\sum_{i=1}^n |x_i|^2 \right)^{1/2} \left(\sum_{i=1}^n 1^2 \right)^{1/2} = \sqrt{n} \Vert x \Vert_2 $$ for any $x \in \mathbb{R}^n$. Next we note that $$ |x_j| \le \sum_{i=1}^n |x_i| = \Vert x \Vert_1 \text{ for }j=1,\dotsc,n $$ and hence $$ \max_{1\le j \le n} |x_j| \le \Vert x \Vert_1. $$ In turn we use this to estimate $$ \Vert x \Vert_2^2 = \sum_{i=1}^n |x_i|^2 \le \left( \sum_{i=1}^n |x_i| \right) \left(\max_{1\le j \le n} |x_j| \right) \le \left( \sum_{i=1}^n |x_i| \right)^2, $$ which tells us (after taking square roots) that $$ \Vert x\Vert_2 \le \Vert x \Vert_1. $$ Combining the above we now know that $$ \Vert x \Vert_2 \le \Vert x \Vert_1 \le \sqrt{n} \Vert x \Vert_2 $$ for all $x \in \mathbb{R}^n$. Using this, you can complete the sketch of your argument.
Function with a hash I have a question involving a function: $\tau(n) := \#\{d\in\Bbb{Z}_{>0} \mid d \text{ divides } n\}$ The hash symbol before the first brace is confusing me; I don't know what this means. And could you please give an example for if I was to substitute in a value for $n$? Thanks
Here it is the cardinality (number of elements) of the set behind the hash mark. The hash symbol is also commonly used as an abbreviation for "number". For example, $\tau(12)=\#\{1,2,3,4,6,12\}=6$, since $12$ has exactly six positive divisors.
How to find if a metric space is complete? Specifically, I need to find whether the metric space $(\mathbb{R}, d)$ with the metric $$d(x,y)=\dfrac{|x-y|}{\sqrt{1+x^2}\sqrt{1+y^2}}$$ is complete or not, but I don't know how to show whether a space is complete and, in case it isn't, how to find a counterexample.
Thinking about Cauchy completeness, one observation is that $d(x,y)$ can get small not only if $x$ and $y$ are close to each other in the usual metric, but also if one or both of them are large. This is likely the area where 'weird' effects can happen, leading to investigate sequences that are unbounded. The simplest one is $(n)_{n\in \mathbb{N}}$. The square roots in the definition are 'almost' $x$ or $y$, but just slightly modified to allow for the cases $x=0$ or $y=0$. So intuitively, $d(x,y)$ is about the same as $\frac{|x-y|}{|x|\cdot |y|}$, for large numbers $x,y$: $$\frac{1}{2}\cdot \left| \frac{1}{x} - \frac{1}{y}\right| = \frac{|x-y|}{2 |x| \cdot |y|} \leq d(x,y)\leq \frac{|x-y|}{|x|\cdot |y|} = \left|\frac{1}{x}-\frac{1}{y}\right|$$ for $|x|,|y|\geq 1$. Consider the sequence $(n)_{n\in \mathbb{N}}$. For $n,m\geq n_0$ we have $$d(n,m) \leq \left|\frac{1}{n}-\frac{1}{m}\right|\leq \frac{2}{n_0}$$ So it is a Cauchy sequence. For every $x\in\mathbb{R}$ with $|x|\geq 1$, $$d(x,n) \geq \frac{1}{2}\left|\frac{1}{x} - \frac{1}{n}\right|,$$ which converges to $\frac{1}{2|x|}>0$ for $n\to \infty.$ For $x\in\mathbb{R}$ with $|x|< 1$, $$d(x,n)\geq \frac{|1-n|}{\sqrt{2}\cdot \sqrt{2 n^2}} = \frac{1}{2} \cdot \left|\frac{1}{n} - 1\right|,$$ which converges to $\frac{1}{2}>0$. So the (Cauchy) sequence $(n)_{n\in \mathbb{N}}$ does not converge to any $x\in\mathbb{R}$ and the metric is not complete.
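A short R illustration of this behaviour (my own numerical aside, not part of the proof): widely separated large numbers are $d$-close, yet the sequence stays a fixed $d$-distance away from any candidate limit such as $x=5$.

    d <- function(x, y) abs(x - y) / (sqrt(1 + x^2) * sqrt(1 + y^2))
    d(10^3, 10^6)                            # about 0.001: d-close
    sapply(10^(1:6), function(n) d(5, n))    # tends to 1/sqrt(26) > 0, not 0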
How to find a polynomial of which this field is a splitting field? Consider the field $Q(\sqrt{2} + i \sqrt{5})$. I've proven that $\mathbb{Q}(\sqrt{2} + i \sqrt{5}) = \mathbb{Q}(\sqrt{2}, i \sqrt{5})$. Also, by the degree product rule for extension degrees, I have $$ [\mathbb{Q}(\sqrt{2}, i \sqrt{5}) : \mathbb{Q}] = 4. $$ I'm now being asked to give a polynomial $f \in \mathbb{Q}[X]$ such that $Q(\sqrt{2} + i \sqrt{5})$ is the splitting field of $f$ over $\mathbb{Q}$. I don't know how to handle this problem. I tried setting $x = \sqrt{2} + i \sqrt{5}$ and squaring etc. but I cannot get a polynomial over $\mathbb{Q}$. The problem gives me a hint, saying I should look for a suitable field $E$ such that $$ \mathbb{Q} \subset E \subset Q(\sqrt{2} + i \sqrt{5})$$ but I'm not sure how this will help me. I know that $\mathbb{Q}(\sqrt{2}, i \sqrt{5}) = (\mathbb{Q}(\sqrt{2})(i\sqrt{5})$. So then I would maybe let $E = \mathbb{Q}(\sqrt{2})$. Then $x^2 + 5$ is the minimal polynomial of $i \sqrt{5}$ over $\mathbb{Q}(\sqrt{2})$. But how to find a polynomial from this over $\mathbb{Q}$?
Let $\alpha = \sqrt 2 + \mathrm i \sqrt 5$ and consider successive powers: \begin{eqnarray*} \alpha &=& 0+1\sqrt 2 + \mathrm i \sqrt 5 + 0\sqrt{10}\\ \\ \alpha^2 &=& -3+0\sqrt 2 + 0\sqrt 5 + 2\mathrm i \sqrt{10} \\ \\ \alpha^3 &=& 0-13\sqrt 2+\mathrm i \sqrt 5 + 0\sqrt{10}\\ \\ \alpha^4 &=& -31+0\sqrt 2 + 0\sqrt 5 -12\mathrm i \sqrt{10} \end{eqnarray*} Putting this into a matrix equation gives $$\left[\begin{array}{c} \alpha \\ \alpha^2 \\ \alpha^3 \\ \alpha^4 \end{array}\right] = \left[\begin{array}{cccc} 0 & 1 & \mathrm i & 0 \\ -3 & 0 & 0 & 2\mathrm i \\ 0 & -13 & \mathrm i & 0 \\ -31 & 0 & 0 & -12\mathrm i \end{array}\right] \left[\begin{array}{c} 1 \\ \sqrt 2 \\ \sqrt 5 \\ \sqrt{10} \end{array}\right]$$ The four-by-four matrix has non-zero determinant, and hence: $$\frac{1}{98}\left[\begin{array}{cccc} 0 & -12 & 0 & -2 \\ 7 & 0 & -7 & 0 \\ -91\mathrm i & 0 & -7 \mathrm i & 0 \\ 0 & -31\mathrm i & 0 & 3\mathrm i \end{array}\right] \left[\begin{array}{c} \alpha \\ \alpha^2 \\ \alpha^3 \\ \alpha^4 \end{array}\right] = \left[\begin{array}{c} 1 \\ \sqrt 2 \\ \sqrt 5 \\ \sqrt{10} \end{array}\right]$$ Expanding the first row gives $-\frac{12}{98}\alpha^2-\frac{2}{98}\alpha^4=1$, i.e. $$2(\alpha^4 + 6\alpha^2 + 49) = 0$$ The polynomial in question is then $x^4+6x^2+49$.
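A quick numerical confirmation in R that $\alpha$ satisfies this polynomial, and that its four roots are exactly $\pm\sqrt2\pm\mathrm i\sqrt5$ (all of which lie in $\mathbb{Q}(\sqrt 2, \mathrm i\sqrt 5)$, as a splitting field requires):

    alpha <- sqrt(2) + 1i * sqrt(5)
    alpha^4 + 6 * alpha^2 + 49    # approx 0+0i
    polyroot(c(49, 0, 6, 0, 1))   # roots approx +-1.414 +- 2.236i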
If X and Y are two independent standard normal r.v.s, calculate $P(X+Y \in [0,1] \mid X \in [0,1])$ I'm not sure how to solve this. This is my attempt so far. So if X,Y two independent standard normal r.v.s, we have: $$\mathbb{P}(X+Y\in [0,1] \mid X \in [0,1])=\frac{\mathbb{P}(\{X+Y\in [0,1]\} \cap \{X \in [0,1]\})}{\mathbb{P}(X \in [0,1])}.$$ Moreover, we have: \begin{split} \mathbb{P}(\{X+Y\in [0,1]\} \cap \{X \in [0,1]\}) = {} & \int_0^{1}dx\frac{1}{\sqrt{2 \pi}}e^{-\frac{x^2}{2}}\int_{-x}^{1-x} dy\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}. \end{split} To calculate the integral, it looks like it might be better to switch to polar coordinates (?). Then we have: \begin{split} \int_0^{1}dx\frac{1}{\sqrt{2 \pi}}e^{-\frac{x^2}2}\int_{-x}^{1-x} dy \frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} = & \frac{1}{2\pi}\int_{-\frac\pi4}^{0} \, d \varphi \int_0^{\frac{1} { \cos \varphi}}r e^{-\frac{r^2}2} \, dr \\ & + \frac{1}{2\pi}\int_{0}^{\frac\pi2} \, d \varphi \int_0^{\frac1 { \sin \varphi + \cos \varphi}}r e^{-\frac{r^2}2} \, dr \\ = &- \frac{1}{2\pi}\int_{-\frac\pi4}^{0} \, d \varphi \int_0^{-\frac1 { 2\cos^2 \varphi}} e^t \, dt \\ & - \frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}d \varphi \int_{0}^{-\frac{1}{2 ( \sin \varphi + \cos \varphi) ^2}} e^t\,dt, \\ \end{split} and I don't know what to do from here. Although, I'm not even sure if my procedure doesn't have any mistakes. Thanks for any insights.
Here is a simulation of your problem in R statistical software, which illustrates and confirms the results of @heropup and (just now) @SangChul Lee (both +1). I generate a million realizations of $X, Y,$ and $S = X + Y,$ and focus on the values of $S|X \in (0,1),$ and then find the proportion of these conditional $S$'s in $(0,1),$ which is 0.3695 correct to about three places.

    m = 10^6;  x = rnorm(m);  y = rnorm(m);  s = x+y
    cond = (abs(x-.5) < .5)
    mean(abs(s[cond]-.5) < .5)
    ## 0.3695232

In order to visualize this graphically, I reduce the number of simulated values to 100,000 (to keep the scatterplot from being too crowded). The scatterplot at left suggests the bivariate distribution of $S$ and $X.$ The vertical blue band shows the points representing the conditional distribution of $S.$ The denominator of the desired probability is the proportion of points in the vertical blue band, and its numerator is the proportion of points in the square bounded by red and blue lines. The (blue) conditional points of $S$ are shown in the histogram at the right, and the desired probability is the area under the histogram between the vertical red lines.
Trouble calculating a simple limit I'm not sure if my solution is correct. The limit is: $\lim_{x\to0}\cot(x)-\frac{1}{x}$. Here is how I tried to solve it:

1. $\lim_{x\to0}\cot(x)-\frac{1}{x}$ = $\lim_{x\to0}\cot(x) - x^{-1}$
2. Since $\cot(0)$ is not valid, apply the de l'Hôpital rule: $(\cot (x) )' = -\frac{1}{\sin^2 x}=\sin^{-2} x $ and $(x^{-1})'$ = $x^{-2}$
3. $\lim_{x\to0}\sin^{-2} (x)-x^{-2} = 0 - 0 = 0$

However I'm not sure that my logic is correct.
\begin{align} \lim_{x \rightarrow 0} \cot(x)-x^{-1} &= \lim_{x \rightarrow 0} \frac{x\cos (x)-\sin(x)}{x\sin(x)} \\ &= \lim_{x \rightarrow 0} \frac{\cos(x)-x\sin(x)-\cos(x)}{\sin(x)+x\cos(x)}, \text{ L'Hôpital's} \\ &= \lim_{x \rightarrow 0} \frac{-x\sin(x)}{\sin(x)+x\cos(x)} \\ &= \lim_{x \rightarrow 0} \frac{-\sin(x)-x \cos(x)}{\cos(x)-x\sin(x)+\cos(x)}\text{, L'Hôpital's} \\&=0 \end{align}
How to convert the L1 optimization problem to an interior method using primal and dual (path following algorithm)? I have implemented a program to solve the primal and dual problems defined by: A) Primal: maximize $cx$ subject to $Ax=b$ and $x \geq 0$; the corresponding dual: minimize $kb$ subject to $kA-z=c$. In the above, $A$ is a matrix and all other variables are vectors. Now I want to solve the minimization problem: B) minimize $(\mathrm{norm}(Ax-b)+\mathrm{norm}(x,1))$, where norm(.) is the usual L2 norm and norm(x,1) means the L1 norm. Can I solve B using the same solving method as A? If so, how can I change the variables of B to be similar to A? Any guidance would be appreciated.
Here is a more complete answer about the dual problem (since that is what you asked about originally): We start with the primal problem, an L1-regularized LLS $\min ||Ax-b||_2^2 + \lambda||x||_1$ Now, let us define $z$ s.t. $z := Ax - b$. Thus, our problem can be written as $\min z^Tz + \lambda ||x||_1$ s.t. $z = Ax-b$. Associate $\mu_i$ with the $i$th constraint; thus the Lagrangian is simply $L(x,z,\mu) = z^Tz + \lambda||x||_1 + \mu^T(Ax-b-z)$. Therefore, the dual function is given by $\displaystyle g(\mu) = \inf_{x,z}\left\{L(x,z,\mu)\right\}$. Now, we want to find $\max_{\mu}g(\mu)$, and so first we note that if, for some $i$, $|\sum_j a_{ji}\mu_j| > \lambda_i$, then we have a direction of unboundedness; this is because the only terms in the Lagrangian involving $x_i$ are $\displaystyle \left(\sum_ja_{ji}\mu_j\right)x_i$ and $\displaystyle \lambda_i|x_i|$. Thus for any such $\mu$ we have $g(\mu) = -\infty$. Therefore, we can restrict our attention to $\mu$ s.t. $|(A^T\mu)_i| \leq \lambda_i \forall i$. Now, for each such $\mu$ it can be seen that the $x$ attaining the infimum is $x=0$. Thus, for all $\mu$ of interest, we have that $\displaystyle g(\mu) = \inf_{x,z}\{L(x,z,\mu)\} = \inf_z\left\{z^Tz - \mu^Tz - \mu^Tb\right\}$. Fortunately, we can differentiate this and set the derivative to zero to see that at optimality, $z_i = \mu_i/2.$ Hence, for all $\mu$ s.t. $|(A^T\mu)_i| \leq \lambda_i \forall i$ we have that $\displaystyle g(\mu) = -\frac{1}{4}\mu^T\mu-\mu^Tb$. Therefore, the Lagrangian dual is given by $\displaystyle \max_\mu -\frac{1}{4}\mu^T\mu-\mu^Tb$ s.t. $|(A^T\mu)_i| \leq \lambda_i \forall i$. Therefore, the dual is a convex optimization problem in $\mu$, and because the primal problem satisfies Slater's condition, we have that the duality gap is zero.
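If the aim is simply to solve problem B numerically rather than to hand-roll the interior-point machinery, a disciplined convex programming layer handles it directly. A minimal sketch using the CVXR package (assuming CVXR is installed; A, b and lambda below are placeholder data of my own):

    library(CVXR)
    set.seed(1)
    A <- matrix(rnorm(20 * 5), 20, 5); b <- rnorm(20); lambda <- 0.1
    x <- Variable(5)
    obj <- Minimize(sum_squares(A %*% x - b) + lambda * p_norm(x, 1))
    result <- solve(Problem(obj))
    result$getValue(x)   # the L1-regularized least-squares solution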
Prove $\sin2x+\sin4x+\sin6x=4\cos x\cos2x\sin3x$ Prove $\sin{2x}+\sin{4x}+\sin{6x}=4\cos{x}\cos{2x}\sin{3x}$ I have reached the point where the LHS equation has turned into $2\cos{x}\cos2x\sin{x}(2\sin2x+1)$ But I have no idea how to turn $\sin{x}(2\sin2x+1)$ into $2\sin3x$. A quicker method, if it exists, would be greatly appreciated. Thanks in advance
To solve this problem, we can use the identities: $$ \sin A + \sin B = 2\sin \frac{A+B}{2} \cos \frac{A - B}{2}, $$ $$ \cos A + \cos B = 2\cos \frac{A+B}{2} \cos \frac{A - B}{2}, $$ and $$ \sin 2\phi = 2\sin \phi \cos \phi. $$ Going back to the question, $$\begin{align} \text{LHS} &= \sin 2x + \sin 4x + \sin 6x \\ &= 2\sin 3x \cos x + \sin 6x \\ &= 2\sin 3x \cos x + 2\sin 3x \cos 3x \\ &= 2\sin 3x (\cos x + \cos 3x) \\ &= 2\sin 3x \times 2\cos 2x \cos x \\ &= 4\cos x \cos 2x \sin 3x \\ &= \text{RHS}. \end{align}$$ Hence, proved.
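Not a proof, but a quick numerical spot check of the identity in R at an arbitrary angle:

    x <- 0.7
    sin(2*x) + sin(4*x) + sin(6*x)    # 0.4489...
    4 * cos(x) * cos(2*x) * sin(3*x)  # same value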
Why is it legal to consider $\frac{dy}{dx}$ for a parametric curve when y may not be a valid function of x? When deriving the formula for the derivative of a parametric curve, in the form of $x = x(t)$ and $y = y(t)$, the chain rule is applied to $\frac{dy}{dt}$ to obtain $\frac{dy}{dt} = \frac{dy}{dx}\cdot\frac{dx}{dt}$, from which the slope of $y$ with respect to $x$ can be obtained. My question is: why is it legal to use $\frac{dy}{dx}$ when $y$ is often not a function of $x$? For example, the curve described by the parametric equations $x=6\sin(t)$ and $y=t^2+t$ (image here) is clearly not a function of $x$, since it fails the vertical line test infinitely many times, and yet $\frac{dy}{dx} = \frac{2t+1}{6\cos(t)}$. What is the intuition behind $\frac{dy}{dx}$ in this case? I usually think of $\frac{dy}{dx}$ as the (unique) slope induced from a small change in $x$, but that doesn't make sense here, since a small change in $x$ corresponds to infinitely many changes in $y$.
The important facts are that both $x$ and $y$ are functions of $t$ and that for each value of $t$ there is only one point of the curve. If the curve has a non-vertical tangent at that point (i.e. $\frac{dx}{dt}\neq 0$ there), then the slope of the tangent will be found by substituting the value of $t$ into the expression for $\dfrac{dy}{dx}$ at that point.
Regular polygonal wheels. Side-view of unicycle. Can the trajectory of the center of the wheels be re-created using round wheels and distorted floors? Scenario 1 So imagine a rolling wheel, call it a unicycle if you'd wish. Now the wheel should be considered as being one of the regular polygons which are the building blocks of the platonic solids, or could even be considered generally as being any regular polygon. Now imagine that the floor is flat. Let the wheel roll. We imagine a mathematical continuous contact between the floor and the wheel: a perfect roll. If you look at the side of the unicycle, and the unicycle would be continuously sputtering out ink from its sides, onto a blank page behind it (we imagine no other force, such as gravity, changing the direction of the ink sputtered out ... a simple projection), one would neatly see the trajectory of the center of the wheel while it had been rolling. Now the question is the following. Scenario 2 After having imagined all of these, for some interesting cases of regular polygons on a flat floor. Now imagine that the wheel would be perfectly round. Can you, or can you not create the same drawings (of scenario 1) on the paper, by varying the surface of the floor? Are there any limitations to this re-creation of the drawings, compared to the previous scenario. Note: any illustrations / visualizations would be appreciated.
When a convex polygon "rolls (without slipping)" along a flat floor, it "pivots" about a single vertex at a time. Consequently, each interior point traces a continuous path made of arcs of circles. When a round wheel rolls along a floor made of (suitable) arcs of circles, its center also traces a path made of arcs of circles. Particularly, the path traced by the center of a regular $n$-gon when rolling without slipping along a flat floor can be expressed as the path made by the center of a circular disk rolling without slipping along a floor made of arcs of circles. In more detail, let $n \geq 3$ be an integer, and let $P = P_{n}$ be a regular $n$-gon whose sides have length $2$. Each side of $P$ subtends an angle $\frac{2\pi}{n}$ at the center $C$, so * *The distance from $C$ to the midpoint of a side is $\cot \frac{\pi}{n}$; *The distance from $C$ to each vertex is $\csc \frac{\pi}{n}$. Place $P$ (green) in a Cartesian coordinate system with its center at $(0, \cot \frac{\pi}{n})$ and with the midpoint of one side at the origin. As $P$ rolls to the right (successively more blue), the center $C$ traces an arc of circle of center $(1, 0)$ and radius $\csc \frac{\pi}{n}$, with the arc subtending an angle $2\pi/n$ and carrying $C$ to $(2, \cot \frac{\pi}{n})$. It's easy to check that when a circular disk rolls over a floor made of arcs of circles, its center also traces a continuous path made of arcs of circles. (The exact shape of the floor depends trivially on the radius of the rolling disk.)
Given that $H(x)$ is a Heaviside function, how would I graph $H(x)+2H(x-3)-3H(x-5)$ Question Given that $H(x)$ is a Heaviside function, how would I graph $H(x)+2H(x-3)-3H(x-5)$? I was thinking about shifting the function, but then realized that there are multiple $H$'s Or do I need to plot points?
It helps to write the function like this: $$f(x)=\begin{align}&\;\;\;\;\;\color{#eb6235}{0}\\&+(\color{#8778b3}{1}-\color{#eb6235}{0})H(x-\color{#e19c24}{0})\\&+(\color{#5d9ec7}{3}-\color{#8778b3}{1})H(x-\color{#e19c24}{3})\\&+(\color{#996633}0-\color{#5d9ec7}{3})H(x-\color{#e19c24}{5})\end{align}$$ where the numbers being subtracted from $x$ in the unit step function represent the breakpoints, and the matching constants correspond to the pieces that make up your piecewise function. As an example, note that $\color{#5d9ec7}{3}$ shows up in the $\color{#5d9ec7}{3}-\color{#8778b3}{1}$ and $\color{#996633}0-\color{#5d9ec7}{3}$ factors corresponding to $H(x-\color{#e19c24}{3})$ and $H(x-\color{#e19c24}{5})$, so your function takes the value $\color{#5d9ec7}{3}$ in the interval $[\color{#e19c24}{3},\color{#e19c24}{5})$. In this form, you can then actually make a direct conversion: $$f(x)=\begin{cases}\color{#eb6235}0&\text{if }x<\color{#e19c24}0\\\color{#8778b3}1&\text{if }\color{#e19c24}0\le x<\color{#e19c24}3\\\color{#5d9ec7}3&\text{if }\color{#e19c24}3\le x<\color{#e19c24}5\\\color{#996633}0&\text{if }\color{#e19c24}5\le x\end{cases}$$
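To see the steps, here is a minimal R sketch that plots the function (using the convention $H(0)=1$; the vertical segments drawn at the jumps are just plotting artifacts):

    H <- function(x) as.numeric(x >= 0)               # Heaviside step, H(0) = 1
    f <- function(x) H(x) + 2*H(x - 3) - 3*H(x - 5)
    x <- seq(-2, 7, by = 0.01)
    plot(x, f(x), type = "l", ylab = "f(x)")          # levels 0, 1, 3, 0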
If $ \sin\theta + \cos\theta = \frac 1 2$, what does $\tan\theta + \cot\theta$ equal? A SAT II question asks: If $ \sin\theta + \cos\theta = \dfrac 1 2$, what does $\tan\theta + \cot\theta$ equal? Which identity would I need to solve this?
Use the fact that the sum of the squares of sine and cosine is unity. Square the given expression to obtain the product of $\sin\theta$ and $\cos\theta$: squaring gives $1+2\sin\theta\cos\theta=\frac14$, so $\sin\theta\cos\theta=-\frac38$. Then note that the unknown quantity is the reciprocal of this product, since $\tan\theta + \cot\theta = \frac{\sin^2\theta+\cos^2\theta}{\sin\theta\cos\theta}$. So the answer is $-\frac{8}{3}$.
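A numerical sanity check in R, picking one concrete $\theta$ with $\sin\theta+\cos\theta=\frac12$ (using $\sin\theta+\cos\theta=\sqrt2\sin(\theta+\frac\pi4)$):

    th <- asin(1 / (2 * sqrt(2))) - pi/4
    sin(th) + cos(th)     # 0.5
    tan(th) + 1/tan(th)   # -2.6667 = -8/3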
Prove $n^5+n^4+1$ is not a prime I have to prove that for any $n>1$, the number $n^5+n^4+1$ is not a prime. With induction I have been able to show that it is true for the base case $n=2$, since $n>1$. However, I cannot break down the given expression involving fifth and fourth powers into simpler terms. Any help?
$$n^5+n^4+1=n^5-n^2+n^4+n^2+1=n^2(n-1)(n^2+n+1)+(n^2+n+1)(n^2-n+1)=$$ $$=(n^2+n+1)(n^3-n^2+n^2-n+1)=(n^2+n+1)(n^3-n+1)$$ I think the best way is the following: $$n^5+n^4+1=n^5+n^4+n^3-(n^3-1)=(n^2+n+1)(n^3-n+1)$$ Since $n>1$, both factors are greater than $1$ (indeed $n^2+n+1\geq 7$ and $n^3-n+1\geq 7$ for $n\geq 2$), so $n^5+n^4+1$ is composite.
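A one-line R check of the factorization for small $n$:

    n <- 2:10
    n^5 + n^4 + 1 - (n^2 + n + 1) * (n^3 - n + 1)   # all zeros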
If $f(1)$ and $f(i)$ are real, then find the minimum value of $|a|+|b|$ A function $f$ is defined by $$f(z)=(4+i)z^2+a z+b$$ $(i=\sqrt{-1})$ for all complex numbers $z$, where $a$ and $b$ are complex numbers. If $f(1)$ and $f(i)$ are both purely real, then find the minimum value of $|a|+|b|$. Now $f(1)=4+i+a+b$, which means the imaginary part of $a+b$ is $-1$, and $f(i)=-4-i+a \cdot i+b$, which gives the imaginary part of $a \cdot i+b$ as $1$; but how to proceed further to find the desired value?
You have $\text{Im}(a+b)=-1$ and $\text{Im}(a\text{i}+b)=1$. That is, $$\text{Im}\big(a(1-\text{i})\big)=\text{Im}(a+b)-\text{Im}(a\text{i}+b)=-2\,.$$ This means $$\sqrt{2}|a|=\big|a(1-i)\big|\geq \Big|\text{Im}\big(a(1-\text{i})\big)\Big|=\big|-2\big|=2\,.$$ Hence, $|a|\geq \sqrt{2}$. The equality holds if and only if $a(1-\text{i})=-2\text{i}$, or $a=1-\text{i}$. From the result above, $$|a|+|b|\geq|a|\geq\sqrt{2}\,.$$ The equality holds iff $a=1-\text{i}$ and $b=0$. Note that $(a,b)=(1-\text{i},0)$ satisfies the required conditions, so the minimum value of $|a|+|b|$ is indeed $\sqrt{2}$.
The notion of equality when considering composition of functions When going through some (very introductory) calculus homework I was given the function $f(x) = \frac{x}{1+x}$ and asked to find the composition $(f\circ f)(x)$ and then find its domain. Substituting, we find that $$(f\circ f)(x)= \frac{\frac{x}{1+x}}{1+\frac{x}{1+x}}$$ The domain is then found by solving $\frac{x}{1+x} = -1$ and finding that the composition is undefined at $-\frac{1}{2}$. It is also, of course, undefined at $-1$. Thus our domain is $\{x \in \mathbb{R} \mid x \neq -\frac{1}{2} \text{ and } x \neq -1\}$. My question comes from noticing that if we take the algebra further we find that $$(f\circ f)(x)= \frac{\frac{x}{1+x}}{1+\frac{x}{1+x}} = \frac{x}{2x+1}$$ The domain is still surely unchanged. However, suppose I never did this problem and for some reason I simply desired to write down the function $\frac{x}{2x+1}$ on a sheet of paper and find its domain. I would find that it is defined for all reals except $-\frac{1}{2}$. (Wolfram Alpha also verifies this.) This would then imply that $$(f\circ f)(x)= \frac{\frac{x}{1+x}}{1+\frac{x}{1+x}} \neq \frac{x}{2x+1}$$ since the domains of the two functions are unequal. Couldn't we also work backwards from $\frac{x}{2x+1}$ in the following manner? $$\frac{x}{2x+1} = \frac{\frac{x}{1+x}}{\frac{1+2x}{1+x}} = \frac{\frac{x}{1+x}}{\frac{1+x}{1+x} + \frac{x}{1+x}} = \frac{\frac{x}{1+x}}{1 + \frac{x}{1+x}}$$ This is the function we originally found the domain for. Did I somehow just remove a point from the domain by just doing algebraic manipulations? My guess is perhaps there are two (or more?) notions of equality going on here. One notion would perhaps be the idea of two functions $f$ and $g$ being "formally" equivalent if $f$ can be algebraically manipulated to $g$ and vice versa. The other notion would be the more intuitive one where two functions are equal if they have the same domain and map the elements of the domain to the same points in the codomain. Thanks.
Note that the notation $(f\circ f)(x)$ gives some intuitive notion that we are going to do $2$ different operations. The first will be to "feed" $x$ to $f$, and then "feed" $f(x)$ to $f$. Indeed we have $(f\circ f)(x) = f(f(x))$. We know that $f$ is not defined for $x = -1$ and therefore the inner $f$ in $f(f(x))$ cannot be "fed" $-1$. However, the outer $f$ is fed values from $f(x)$ and it just so happens that $f(x) = -1 \iff x = -\frac12$. We can then proceed to write the algebraic manipulations you wrote assuming that $1+x \not= 0$ and $\frac{x}{1+x} \not= -1$, given that otherwise you would be dividing by $0$. On the other hand, starting from $\frac{x}{2x+1}$ one cannot write $$\frac{\frac{x}{1+x}}{\frac{1+2x}{1+x}}$$ if we don't explicitly state that $1+x\not=0$, otherwise you would be dividing by $0$. Therefore one can always do the algebraic manipulations, given that one carries the excluded points along. Therefore, one finds $f(f(x))$ to be defined for $x\not\in \{-\frac12, -1\}$, and for the points where it is defined we have $f(f(x)) = \frac{x}{2x+1}$. Similarly, working backwards like you did, we get $$\frac{\frac{x}{1+x}}{\frac{1+2x}{1+x}} = \frac{\frac{x}{1+x}}{1 + \frac{x}{1+x}}$$ except for the points $x = -\frac12, -1$, because we had to exclude them. What is more, you are right when you say that two functions are equal if they have the same domain/codomain and if they map the same objects to the same images. Having that in mind, the functions $f(f(x))$ and $\frac{x}{2x+1}$ are not the same function, unless you restrict the second one to the points where we know $f(f(x))$ is well-defined.
How to easily identify different probability distributions? I am currently studying further option mathematics but I can't identify which distribution to use in which question (these are generally word questions). We are currently studying the geometric, Poisson, binomial, and negative binomial distributions. I was wondering if someone can explain how to identify which problem uses which distribution?
The binomial distribution is used for counting successes among a fixed number of independent boolean (true/false) trials; the negative binomial is used to model the number of successes (true) before a specified number of failures. The geometric distribution has two different types (https://en.wikipedia.org/wiki/Geometric_distribution). Taking the first definition, it is used to describe how many trials you would expect to do before you had your first success. The Poisson distribution is used to describe the number of 'events' you would observe in a given time.
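For intuition, all four families have built-in samplers in R (note that R's conventions count failures for the geometric and negative binomial, which differs slightly from the 'number of trials' textbook versions):

    rbinom(5, size = 10, prob = 0.3)   # successes in 10 true/false trials
    rnbinom(5, size = 4, prob = 0.3)   # failures before the 4th success
    rgeom(5, prob = 0.3)               # failures before the 1st success
    rpois(5, lambda = 2)               # event counts in a window, mean rate 2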
Finding a matrix using diagonalization I am working on a problem and I am a little bit stuck. Useful hints will be appreciated greatly. I want to find a matrix $A$ such that $$A^TA =X,$$ where $$ X= \left[ {\begin{array}{cc} 0 & 1\ \\1 &0 \end{array} } \right].$$ The professor gave us a hint that I should diagonalize the matrix $X$ to obtain a diagonal matrix $D$ with $X= PDP^{-1}$, and find a matrix $B$ such that $$B^TB =D.$$ He said that once I find the matrix $B$, I can use the matrices $B$ and $P$ to obtain the matrix $A$. Here is my approach. I diagonalized $X$ and obtained my matrix $D$ to be $$D=\left[ {\begin{array}{cc} 1 & 0\ \\0 &-1 \end{array} } \right]. $$ I also got the matrix $P$ to be $$P=\left[ {\begin{array}{cc} 1 & -1\ \\1 &1 \end{array} } \right]. $$ Now I need to figure out a matrix $B$ such that $B^TB=D$. Suppose that $$B= \left[ {\begin{array}{cc} a & b\ \\c &d \end{array} } \right],$$ where $a,b,c,d \in \mathbb{C}$. I obtained the following equations from $B^TB=D$: $$a^2 + c^2 =1$$ $$b^2 + d^2 =1$$ $$ab+cd=0.$$ I am having trouble solving this system of equations. I am stuck here at the moment.
The equations can be solved as follows. For $c=0$ we have $a=\pm 1$ and $ab=0$, hence $b=0$. This gives $a^2=d^2=1$. Hence $B=\pm I$. Otherwise we can write $d=-ab/c$ and substitute this into the first two equations. We obtain $$ a^2(1+a^2+c^2)=0. $$ Now $1+a^2+c^2>0$ for real $a,c$ hence $a^2=0$, so that $a=d=0$. For complex $a,c$ this does not hold.
Optimization and maximum geometry What is the side length of the largest square that will fit inside an equilateral triangle with sides of length 1? I created two equations: Square: Area $=x^2$; Triangle: Area $=\frac{\sqrt{3}}{4}$. However, how can I find the maximum? I set $\frac{\sqrt{3}}{4}=x^2$ and got $x = \frac{\sqrt[4]{3}}{2}$, which is wrong.
Proof with some words You can create a rectangle inside your triangle (in the figure, now omitted, the blue and green points are the bottom corners of the rectangle on the $x$-axis, and the red and yellow points are the top corners on the left and right slanted sides of the triangle). Let $x_B$ and $x_G$ be the $x$-coordinates of the blue and green points, respectively. It is clear that $y_B = y_G = 0$. Moreover, it is easy to see that: $$x_B < 0, x_G >0 ~\text{and}~ x_B = -x_G.$$ The base of the rectangle is $$b = x_G - x_B = 2 x_G.$$ The height of the rectangle is given by the $y$-coordinate of the yellow or red point, let's call them $y_Y$ and $y_R$. Note also that the $x$-coordinates of these two points are: $$\begin{cases} x_R = x_B \\ x_Y = x_G \end{cases}. $$ Then, using the equations of the slanted sides of your triangle, we have that: $$\begin{cases} y_R = \frac{\sqrt{3}}{2} + \sqrt{3}x_R = \frac{\sqrt{3}}{2} + \sqrt{3}x_B \\ y_Y = \frac{\sqrt{3}}{2} - \sqrt{3}x_Y = \frac{\sqrt{3}}{2} - \sqrt{3}x_G \end{cases}. $$ As said, $y_R = y_Y$, indeed: $$y_R = \frac{\sqrt{3}}{2} + \sqrt{3}x_B = \frac{\sqrt{3}}{2} - \sqrt{3}x_G = y_Y.$$ Moreover, $h = y_R$, then: $$h = \frac{\sqrt{3}}{2} - \sqrt{3}x_G.$$ Finally, we found that the rectangle is defined by: $$\begin{cases} b = 2x_G\\ h = \frac{\sqrt{3}}{2} - \sqrt{3}x_G \end{cases},$$ while its area is $$ A =bh = 2x_G\left(\frac{\sqrt{3}}{2} - \sqrt{3}x_G\right).$$ Your rectangle is a square only when $b=h$. That is, when: $$2x_G = \frac{\sqrt{3}}{2} - \sqrt{3}x_G \Rightarrow x_G = \frac{\sqrt{3}}{2(2+\sqrt{3})}.$$ Then the side length is: $$b = h = \frac{\sqrt{3}}{2+\sqrt{3}} = \sqrt{3}(2-\sqrt{3}) = 2\sqrt{3}-3 \simeq 0.4641$$ and the area is $$A = \left(\frac{\sqrt{3}}{2+\sqrt{3}}\right)^2 = \frac{3}{7+4\sqrt{3}}.$$ It is very important to notice that this square is the only one possible by this construction. Indeed, we started from a rectangle, and we "proved" that this rectangle is a square only if $b=h= \frac{\sqrt{3}}{2+\sqrt{3}}$. So, the maximum area is $$A =\frac{3}{7+4\sqrt{3}} = 21 - 12 \sqrt{3} \simeq 0.215390309173472.$$
Solve this limit $\lim_{x\to \frac{1}{2}^-}\frac{\arcsin{2x}-\frac{\pi}{2}}{\sqrt{x-2x^2}}$ I am trying to figure out how to evaluate this limit, even with L'Hôpital. I've tried using L'Hôpital twice, but the 0/0 situation is still there. I've tried to solve it using Wolfram, but I don't understand the solution. Even with rationalization + L'Hôpital nothing comes out. $$\lim_{x\to \frac{1}{2}^-}\frac{\arcsin{2x}-\frac{\pi}{2}}{\sqrt{x-2x^2}}$$ I wonder if there is some way to solve it, and would really appreciate any suggestion.
\begin{align} \lim_{x\to \frac{1}{2}^-}\frac{\arcsin{2x}-\frac{\pi}{2}}{\sqrt{x-2x^2}}&=\lim_{x\to \frac{1}{2}^-} \frac{2}{\sqrt{1-4x^2}}\frac{2\sqrt{x-2x^2}}{1-4x}, \text{ L'Hôpital's}\\ &= \lim_{x\to \frac{1}{2}^-} \frac{2}{\sqrt{1-2x}\sqrt{1+2x}}\frac{2\sqrt{x}\sqrt{1-2x}}{1-4x}\\ &=\lim_{x\to \frac{1}{2}^-} \frac{2}{\sqrt{1+2x}}\frac{2\sqrt{x}}{1-4x}\\ &=\frac{2}{\sqrt{2}}\frac{2\sqrt{\frac{1}{2}}}{1-2} \\&=-2\end{align}
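A quick numerical check in R, approaching $\frac12$ from the left:

    f <- function(x) (asin(2*x) - pi/2) / sqrt(x - 2*x^2)
    f(0.5 - 1e-9)   # approx -2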
Logic: Using Quantifiers To Express "At Least 2?" Let $been(x,y)$ denote "person x has been to place y." Express “Every person has been to 2 or more places” using quantifiers and $been(x, y)$. This one is completely stumping me. I'm sure that I haven't come across this kind of concept before, and can't wrap my head around how I would approach it. I suspect the quantifiers should be $∀x$, $∀y$, and that it may involve negating the statement "every person has been to one or zero places," but that's as far as I can manage to go. Any help greatly appreciated.
Try this one: $$ \forall x \in \text{Persons}\ \big(\exists y_1, y_2\in \text{Places}\ (y_1 \neq y_2 \land \text{been}(x, y_1)\land \text{been}(x, y_2))\big) $$
Components of an open set in product topology Suppose $U = U_1 \times U_2 \times \cdots \times U_n $ is an open set in the topological space $X = X_1 \times X_2 \times \cdots \times X_n $, equipped with the box topology. (Note that $n$ is finite here). Then are $U_1, U_2, \cdots U_n$ open?
Your statement is correct, and your proof is nearly correct. A union of products is not the product of the unions, as @positrón0802 has pointed out. For example, place two squares in the plane and observe that their union need not be a square (or even a product), even if the squares intersect. For your statement: since $U = \cup_\alpha B_1^\alpha \times \dots \times B_n^\alpha$, define $V=\cup_\alpha B_1^\alpha$. Then $V$ is an open set. Now observe that $V=U_1$.
Probability of getting an odd number of heads if n biased coins are tossed once. The question is basically to find out the probability of getting an odd number of heads when $n$ biased coins, with the $m$th coin having probability of throwing a head equal to $\frac{1}{2m+1}$ ($m=1,2,\cdots,n$), are tossed once. The results for each coin are independent. If we consider first that only one head turns up, the probability is $$\sum_{m=1}^{n} \left[\frac{1}{2m+1} \prod_{k=1,k \neq m}^{n} \left(1-\frac{1}{2k+1}\right)\right]$$ which seems very difficult to evaluate. It gets more complicated if we increase the number of heads. I could not find a simple way to do this. Any suggestion would be highly appreciated. Thanks.
A note for solving lulu's recursion We have $$P_{n+1}=P_n \left(1-\frac 2{2n+3}\right)+\frac 1{2n+3}= P_n \frac{2n+1}{2n+3} + \frac 1{2n+3}$$ with $P_0 =0$ Multiplying by $2n+3$: $$P_{n+1} (2n+3)=P_{n+1}(2 (n+1)+1)=P_n \left({2n+1}\right)+1$$ Calling $A_n = P_n (2n+1)$ this gives $$A_{n+1}=A_n+1$$ with $A_0=0$. Of course, this implies $A_n=n$ and hence $$ P_n = \frac{n}{2n+1}$$
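A short R simulation corroborating the closed form (here with $n=6$, where $\frac{n}{2n+1}=\frac{6}{13}\approx0.4615$):

    n <- 6
    p <- 1 / (2 * (1:n) + 1)                            # P(head) for each coin
    sim <- replicate(10^5, sum(runif(n) < p) %% 2 == 1)
    mean(sim); n / (2*n + 1)                            # both approx 0.4615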
Trouble with finding shortest distance between $y=x^2$ and $y=x-1$ with Lagrangian multipliers I've found another thread with a similar question, but none of the answers help with the specific part I'm stuck on. Just to make things simpler, I've used the square of the distance $f(x_1,x_2,y_1,y_2)=(x_1-x_2)^2+(y_1-y_2)^2$, and I have constraints $G_1=x_1^2-y_1$, and $G_2=x_2-y_2-1$. Taking the gradients of $f$, $G_1$, and $G_2$, I have the system of equations $2(x_1-x_2)=2\lambda_1x_1$ $2(y_1-y_2)=-\lambda_1$ $-2(x_1-x_2)=\lambda_2$ $-2(y_1-y_2)=-\lambda_2$ What I've done so far is, I've first observed that the immediate implication of the 2nd and 4th equations is that $\lambda_1=-\lambda_2$. Using this, I cancelled out the 1st and 3rd equations with the substituted value for $\lambda_2$, and got that $2\lambda_1x_1=\lambda_1\implies x_1=\frac{1}{2}$, and by constraint 1 that $y_1=\frac{1}{4}$. From here though, I'm not sure how to pin down the value of $\lambda$, and therefore determine $x_2$ and $y_2$.
The distance between the curves is given by $$\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$$ with $$y_1=x_1^2,\qquad y_2=x_2-1.$$ With these equations we obtain $$\sqrt{(x_2-x_1)^2+(x_2-1-x_1^2)^2}.$$ Now you must differentiate this (or, more conveniently, its square) with respect to $x_1$ and $x_2$ and set both partial derivatives to zero.
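For a numerical cross-check of whatever comes out of the Lagrangian approach, here is a minimal R sketch minimizing the squared distance directly (the minimum should occur at $x_1=\frac12$, with distance $\frac{3}{4\sqrt2}\approx0.5303$):

    d2 <- function(p) (p[2] - p[1])^2 + ((p[2] - 1) - p[1]^2)^2
    res <- optim(c(0, 1), d2)   # Nelder-Mead from an arbitrary start
    res$par                     # approx (0.500, 0.875)
    sqrt(res$value)             # approx 0.5303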
Exercise: Initial Value Problem for non-homogeneous wave equation on $\mathbb R^3$ Let $u(x,t)$ be the solution of the following initial value problem: $u_{tt} - \Delta u = 1\;\forall x \in \mathbb R^3 ,\;t \gt 0\\u(x,0)=0\\u_t(x,0)=0$ Is it true or false that $u(x,t) \neq 0\;\forall x \in \mathbb R^3 \text{ and } \forall t \gt 0$? Well, for a general non-homogeneous wave equation I.V.P. on $\mathbb R^3$, such as: $u_{tt} - \Delta u = f(x,t)\;\forall x \in \mathbb R^3 ,\;t \gt 0\\u(x,0)=φ(x)\\u_t(x,0)=ψ(x)\\$ we know that the solution $u$ is given by this: $\frac{1}{4\pi t^2} \int_{\partial B(x,t)} φ(y) + \nabla φ(y)\cdot (y-x) +t ψ(y) \;\;dS(y)+\frac{1}{4\pi} \int_{0}^t \int_{\partial B(x,t-s)} \frac{f(y,s)}{t-s}\; dS(y) \;ds$ So, I substituted $φ=0\;,ψ=0\;\text{and}\; f=1\;$ in the previous formula and I concluded this: $u(x,t)=\frac {-\vert \partial B(x,t-s) \vert }{4\pi} [\ln \vert t-s \vert ]_{0}^t \;$ where $\vert \partial B(x,t-s) \vert =\int_{\partial B(x,t-s)} dS(y) $. I have trouble handling the above... If $t=s$ the $\ln$ is not defined. Could somebody help me see what I'm missing? Hints or other solutions than this are also welcome. I would appreciate any help! Thanks in advance.
I've made a mistake in the above calculation. After substitution we get $u(x,t)=\frac{1}{4\pi} \int_{0}^t \int_{\partial B(x,t-s)} \frac{1}{t-s}\, dS(y)\,ds\;=\;\frac{1}{4\pi} \int_{0}^t \frac{1}{t-s} \int_{\partial B(x,t-s)} dS(y)\,ds\;=\;\frac{1}{4\pi} \int_{0}^t \frac{4\pi (t-s)^2}{t-s}\,ds\;=\;\int_{0}^t (t-s)\,ds\;=\;[st-\frac{s^2}{2}]_{0}^t\;=\;\frac{t^2}{2}$ Now it is obvious that $u(x,t)\neq 0\;\forall x \in \mathbb R^3\;\text{and}\;\forall t \gt 0$
Sum of series in GP $a+ ar+ar^2 + ar^3 +ar^4+ \cdots+ ar^{n-1}=S_n$ $a\left(1+r+r^2 +r^3+\cdots+r^{n-1} \right) = S_n$ I am trying to get $S_n = \dfrac{a(r^n -1)}{r-1}$ I don't know how to get $1+r+r^2 +r^3+\cdots+r^{n-1} =\dfrac{r^n -1}{r-1}$ Any help will be appreciated $:)$
Equation (1): $S_n = a_1 + a_2 + a_3 + a_4 + \dots +a_n$ Putting in the value of each term, Equation (2): $S_n = a + ar + ar^2 + ar^3 + ar^4 + \dots + ar^{n-1}$ Multiply equation (2) by $r$: Equation (3): $rS_n = ar + ar^2 + ar^3 + ar^4 + \dots + ar^{n}$ Subtract equation (2) from (3): $rS_n - S_n = ar^n - a$ $S_n = \frac{a(r^n - 1)}{r - 1}$ (assuming $r \neq 1$; for $r = 1$ every term equals $a$, so $S_n = na$).
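A quick R check of the closed form with arbitrary values (and $r \neq 1$):

    a <- 3; r <- 1.5; n <- 10
    sum(a * r^(0:(n - 1)))      # direct sum
    a * (r^n - 1) / (r - 1)     # closed form, same value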
"Half"-linear real function Let $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $f(x+y)=f(x)+f(y)$, I want to show that: (a) If $f$ bounded in an interval $\Rightarrow$ $f$ continuous. (b) If $f$ bounded in a set with positive Lebesgue measure $\Rightarrow$ $f$ continuous. (c) If $f$ Lebesgue-measurable $\Rightarrow$ $f$ bounded around the origin Edit: I forgot to put what I have done so far, which isn't much sadly. I showed that $(a)\Rightarrow (b)$, since that set has to contain a compact set with positive measure, i.e. a closed interval. Edit 2: As pointed out in the comments my previous approach was wrong, I think I corrected it: Since $\lambda(A)>0\Rightarrow A-A$ contains a neighbourhood of the origin, where $f$ is bounded, therefore $(a)$ implies $(b)$.
We use the following Lemma: If $f \colon \mathbb{R} \to \mathbb{R}$ is an additive function (that is, $f(x+y) = f(x) + f(y)$ for all $x,y \in \mathbb{R}$), then $f$ is continuous on $\mathbb{R}$ if and only if there is an $x_0\in \mathbb{R}$ such that $f$ is continuous at $x_0$. The one direction is immediate, and for the other we note that $f$ is continuous at $x_0$ if and only if $f$ is continuous at $0$: $f$ is continuous at $x_0$ if and only if for every $\varepsilon > 0$ there is a $\delta > 0$ such that $\lvert y-x\rvert < \delta \implies \lvert f(y) - f(x)\rvert < \varepsilon$. That is, $\lvert h\rvert < \delta \implies \lvert f(x+h) - f(x)\rvert < \varepsilon$, and since $f(x+h) - f(x) = f(h)$ and $f(0) = 0$, this is equivalent to $\lvert h-0\rvert < \delta \implies \lvert f(h) - f(0)\rvert < \varepsilon$. Next we show that if an additive $f$ is bounded on some neighbourhood of $0$, then it is continuous at $0$ (and by the lemma, on all of $\mathbb{R}$). So suppose there is a $c > 0$ and an $M \in \mathbb{R}$ with $\lvert x\rvert < c \implies \lvert f(x)\rvert \leqslant M$. By the additivity, we have $f(k\cdot y) = k\cdot f(y)$ for all $k\in \mathbb{Z}$ and $y\in \mathbb{R}$. Given $\varepsilon > 0$, choose $n\in \mathbb{N}\setminus \{0\}$ so large that $\frac{M}{n} < \varepsilon$. Then for $\lvert x\rvert < \frac{c}{n}$ we have $$\lvert f(x)\rvert = \biggl\lvert \frac{n}{n}f(x)\biggr\rvert = \biggl\lvert \frac{1}{n} f(nx)\biggr\rvert = \frac{1}{n}\lvert f(nx)\rvert \leqslant \frac{1}{n}M < \varepsilon.$$ Thus $f$ is continuous at $0$. (a) If $f$ is bounded on a nondegenerate interval, say $\lvert f(x)\rvert \leqslant M$ for $x \in (a,b)$ with $a < b$, then $f$ is bounded on a neighbourhood of $0$: Let $\mu = \frac{a+b}{2}$ and $c = \frac{b-a}{2}$. For $\lvert x\rvert < c$, we have $a < x+\mu < b$ and therefore $\lvert f(x+\mu)\rvert \leqslant M$. Hence $$\lvert f(x)\rvert = \lvert f(x+\mu) - f(\mu)\rvert \leqslant \lvert f(x+\mu)\rvert + \lvert f(\mu)\rvert \leqslant 2M,$$ so $f$ is bounded on $(-c,c)$, and by the above, $f$ is continuous. (b) If $f$ is bounded on a set with positive Lebesgue measure, say $\lvert f(x)\rvert \leqslant M$ for $x\in A$ with $\lambda(A) > 0$, then $f$ is bounded on some nondegenerate interval, and by (a), it follows that $f$ is continuous. To see that $f$ is bounded on some nondegenerate interval, note that since $\lambda(A) > 0$, the set $A + A$ contains a nondegenerate interval, and for $x = a_1 + a_2 \in A + A$ we have $$\lvert f(x)\rvert = \lvert f(a_1) + f(a_2)\rvert \leqslant \lvert f(a_1)\rvert + \lvert f(a_2)\rvert \leqslant 2M.$$ (c) If $f$ is Lebesgue measurable, then $f$ is bounded on a set of positive Lebesgue measure, and hence - by (b) - continuous (and therefore bounded on some neighbourhood of $0$). Since $f$ is measurable, the sets $$A_n := \{ x \in \mathbb{R} : \lvert f(x)\rvert \leqslant n\}$$ are Lebesgue measurable, and we have $A_n \subset A_{n+1}$ for all $n\in \mathbb{N}$ and $$\mathbb{R} = \bigcup_{n = 0}^{\infty} A_n.$$ By the continuity from below of measures, we have $$\lim_{n\to \infty} \lambda(A_n) = \lambda(\mathbb{R}) = +\infty,$$ in particular $\lambda(A_n) > 0$ for all sufficiently large $n$.
Weird change of variable in summation? I don't understand the change of variable in this summation $$ \frac{1}{T}\sum_{k=-\infty}^{\infty} H \Big (\frac{F-k}{T}\Big )=1 $$ Change of variable: $f=\frac{F}{T}$ so $$ \frac{1}{T}\sum_{k=-\infty}^{\infty} H \Big (f-\frac{k}{T}\Big )=1 $$ Isn't this wrong? $T$ isn't changed? Shouldn't it instead be $$ \frac{f}{F}\sum_{k=-\infty}^{\infty} H \Big (f-\frac{f}{F}k\Big )=1 \quad \text{?} $$
In fact, both forms are correct. However, when solving certain problems it is convenient not to substitute the new variable into every occurrence of the old one. You have your summation: $$\frac{1}{T}\sum_{k=-\infty}^{\infty} H \Big (\frac{F-k}{T}\Big )=1$$ The fractions may be separated to give: $$\frac{1}{T}\sum_{k=-\infty}^{\infty} H \Big (\frac{F}{T}-\frac{k}{T}\Big )=1$$ And so now, you can apply the substitution $f=\frac{F}{T}$ to only the $\frac{F}{T}$ term: $$\boxed{\frac{1}{T}\sum_{k=-\infty}^{\infty} H \Big (f-\frac{k}{T}\Big )=1}$$ Note that: $$f=\frac{F}{T} \Rightarrow T=\frac{F}{f}$$ Therefore, you can substitute for $T$ to give your version of the summation, where the summation is not in terms of $T$ at all. $$\frac{1}{(\frac{F}{f})}\sum_{k=-\infty}^{\infty} H \Big (f-\frac{k}{(\frac{F}{f})}\Big )=1$$ $$\boxed{\frac{f}{F}\sum_{k=-\infty}^{\infty} H \Big (f-\frac{f}{F} k\Big )=1}$$
Geometrical meaning of $ \begin{pmatrix} a\\ b \end{pmatrix} \mapsto \begin{pmatrix} a&-b\\ b&a \end{pmatrix}$ Geometrical meaning of $\tau: \mathbb{R^2} \rightarrow \mathbb{R^{2,2}}, \begin{pmatrix} a\\ b \end{pmatrix} \mapsto \begin{pmatrix} a&-b\\ b&a \end{pmatrix}$ I have trouble reading matrices and interpreting them. How can you interpret them? My best guess here is that a vector is mapped into a higher-dimensional space. In the image, do I have to look at the columns to see the vectors? Does that even make sense, or do any of the images below?
The space $\mathbb{R}^{2,2}$ is isomorphic (as a vector space over $\mathbb{R}$) to $\mathbb{R}^4$, and there is no simple way to represent this four-dimensional space graphically. You can interpret the matrix geometrically as the linear transformation from $\mathbb{R}^2$ to $\mathbb{R}^2$ that maps the basis vector $[1,0]^T$ to $[a,b]^T$ and $[0,1]^T$ to $[-b,a]^T$; note that these two image vectors are orthogonal and have the same length, so the map is a rotation composed with a scaling. In fact, under the identification of $\mathbb{R}^2$ with $\mathbb{C}$, this is exactly multiplication by the complex number $a+bi$.
How to solve $x^3-x^2+2x-1=0$ Recently I took an exam where the following equation appeared: $$x^3+x^2-1=0$$ On the exam we only needed to approximate the root to the first decimal. However WolframAlpha says that there is a "computable" root. I tried to work it out by trigonometric substitutions and the following: $$(x^3+x^2-1)(x^3-x^2+1)=x^6-x^4+2x^2-1$$ Now: $$x^2\to x$$ We get $$x^3-x^2+2x-1=0$$ From which I get nothing. Is there any way to work out the roots by hand? And can a third-degree polynomial equation which you know has a "computable" root always be worked out manually? Thanks in advance.
If we start with your first line: $$x^3+x^2-1=0$$ and let $x=\frac23y-\frac13$ and multiply both sides by $\frac{27}2$ so that we end up with $$4y^3-3y-\frac{25}2=0$$ Recall that $4y^3-3y=\cosh(3\operatorname{arccosh}(y))$ so that we have $$\cosh(3\operatorname{arccosh}(y))=\frac{25}2$$ and solving for $y$, $$y=\cosh\left(\frac13\operatorname{arccosh}\left(\frac{25}2\right)\right)$$ $$x=\frac23\cosh\left(\frac13\operatorname{arccosh}\left(\frac{25}2\right)\right)-\frac13\approx0.754877666$$ and this is the only real root.
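For a quick numerical sanity check, here is a small Python sketch (not part of the original argument; it just plugs the closed form back into the cubic):

```python
import math

# Evaluate x = (2/3) cosh( (1/3) arccosh(25/2) ) - 1/3 and test it in the cubic.
x = (2.0 / 3.0) * math.cosh(math.acosh(25.0 / 2.0) / 3.0) - 1.0 / 3.0
print(x)                 # ~0.7548776662466927
print(x**3 + x**2 - 1)   # ~0.0 up to floating-point error
```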
Why is $\sum_{n\ge1} \binom{s}{n}\left(1 - \zeta(n-s)\right)=2^s$ for all $s \in \mathbb{C}$? Probably an easy question, but I found the following identity that seems true for all $s \in \mathbb{C}$: $$\sum_{n=1}^{\infty} \binom{s}{n}\big(1 - \zeta(n-s)\big)=2^s$$ Why is this the case? Do similar identities exist for $3^s,4^s,...$? Additional observation: For $s \in \mathbb{N}$, only a finite sum up to $s+1$ is required to get the exact power, i.e.: $$\sum_{n=1}^{s+1} \binom{s}{n}\big(1 - \zeta(n-s)\big)=2^s$$ and also: $$\sum_{n=1}^{s+1} \binom{s}{n}\big(1+\frac{1}{2^{n-s}} - \zeta(n-s)\big)=3^s$$ etc. This trick doesn't seem to work for negative integers or non-integer values of $s$.
For $\operatorname{Re}(s) < 0$: $$F(s) = -\sum_{m=2}^\infty \sum_{n=1}^\infty {s \choose n} m^{s-n} = -\sum_{m=2}^\infty m^s\left(\left(1+\frac{1}{m}\right)^s-1\right) = \sum_{m=2}^\infty (m^s-(m+1)^s) = 2^s $$ Also, for fixed $N$ and $m \to \infty$: $\left(1+\frac{1}{m}\right)^s-1 = \sum_{n=1}^N {s \choose n} m^{-n}+ \mathcal{O}(m^{-N-1})$, so inverting the double sum is allowed: $$F(s) = -\sum_{n=1}^\infty \sum_{m=2}^\infty {s \choose n} m^{s-n} = \sum_{n=1}^\infty {s \choose n}(1-\zeta(n-s))$$ For any $s \not \in \mathbb{N}$, as $n \to \infty$: $1-\zeta(n-s) = \mathcal{O}(2^{s-n})$, so $\sum_{n=1}^\infty {s \choose n}(1-\zeta(n-s))$ converges and is analytic $\implies$ by analytic continuation $$\sum_{n=1}^\infty {s \choose n}(1-\zeta(n-s)) = 2^s$$ is true for any $s$ that is not a natural number.
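For what it's worth, a hedged numerical check (a Python sketch assuming the mpmath library is available; the test point $s$ is arbitrary and the binomial is built by the falling-factorial recurrence to avoid any assumptions about library binomials):

```python
import mpmath as mp

# Check sum_{n>=1} C(s,n)(1 - zeta(n-s)) = 2^s at a non-integer complex s.
mp.mp.dps = 30
s = mp.mpc('0.5', '1.3')          # arbitrary test point, not from the post
total, c = mp.mpc(0), mp.mpf(1)
for n in range(1, 200):
    c *= (s - (n - 1)) / n        # C(s,n) = C(s,n-1) * (s-n+1)/n
    total += c * (1 - mp.zeta(n - s))
print(total)
print(mp.power(2, s))             # the two values should agree closely
```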
Abelian group (Commutative group) Prove that if in a group $(ab)^2= a^2 b^2$ then the group is commutative. I am having a hard time doing this. Here is what I have so far: Proof: $a^2 b^2= a^1 a^1 b^1 b^1$ =$aa^{-1}bb$ =ebb Hence,$aa^{-1}=e$ I am stuck, I do not know if this is the right process in proving this
$aabb=(ab)^2=abab$ implies that $a^{-1}aabbb^{-1} = a^{-1}ababb^{-1}$. So, $eabe = ebae$. Hence $ab=ba$.
Checking uniform continuity I want to show that $x^2\sin \frac {1}{x}$ is not uniformly continuous on $(0,\infty)$. I tried to find sequences $(x_n),(y_n)$ in $(0,\infty)$ such that $|x_n-y_n|\to 0$ but $|f(x_n)-f(y_n)|$ does not go to $0$. Please help.
For $f(x) = x^2 \sin(1/x)$ we have $$f'(x) = 2x \sin(1/x) - \cos(1/x).$$ For $ x \in (0,\infty)$ we have $$|f'(x)| \leqslant 2|x\sin(1/x)| + |\cos(1/x)| \leqslant 3.$$ Since the derivative is bounded, the function is uniformly continuous on $(0,\infty).$ Note that $|x\sin(1/x)| \leqslant 1$ for $x \in (0,\infty)$. On $[1,\infty)$ we have $0 < 1/x \leqslant 1$ and $|x \sin(1/x)| = \frac{\sin(1/x)}{1/x} \leqslant 1$ since $\frac{\sin y}{y} \uparrow 1$ as $y \to 0+$. On $(0,1)$ we have $|x \sin(1/x)| \leqslant x \leqslant 1$ since $|\sin(1/x)| \leqslant 1$.
Is the interior of a smooth manifold with boundary connected? Let $M$ be a smooth connected manifold with boundary. Is it true that the interior of $M$ is also connected? (As usual I am assuming $M$ is second-countable and Hausdorff). I am trying to rule out something like two "tangent disks" (or circles) where the (topological) interior is obviously not connected. This case is not a counter-example since the union of two tangent disks (or circles) is not a manifold with boundary (the point of tangency is pathological).
Assume you have two connected components $M_1$ and $M_2$ of $\operatorname{int} M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\mathbb{H}^n\to U\subset M$ be a coordinate chart around $p$, where $\mathbb{H}^n$ is the upper half of $\mathbb{R}^n$ (including its boundary). Then each $X^{-1}(M_i)$ is an open set in $\mathbb{R}^n$ whose boundary is a subset of the boundary of $\mathbb{H}^n$. But then each $X^{-1}(M_i)$ must be the whole of $\mathbb{H}^n\setminus \partial\mathbb{H}^n$, which contradicts the disjointness of $M_1$ and $M_2$.
Suppose $\mathcal{V}$ is a subspace of $\mathbb{R}^n$ Suppose $\mathcal{V}$ is a subspace of $\mathbb{R}^n$ and suppose $P: \mathbb{R}^n\to\mathbb{R}^n$ is a projection; then I need to prove the following: $$ \mathcal{V}\cap\text {im } P=P^{-1}\mathcal{V}\cap\text{im }P$$ Suppose $x\in \mathcal{V}\cap\text {im } P\Rightarrow x\in\mathcal{V}\text { and } \exists y\in \text{ im } P\text{ with } x=Py\in \mathcal{V}\Rightarrow Px=Py=x\in\mathcal{V}\Rightarrow x\in P^{-1}\mathcal{V}\Rightarrow x\in P^{-1}\mathcal{V}\cap \text{im } P$. Now suppose $x\in P^{-1}\mathcal{V}\cap \text{im } P$, so $Py=x$; I am not able to prove this direction, please help.
I would go and try to prove both inclusions separately. So let's focus on $$ (\mathcal{V}\cap\text {im } P) \subset (P^{-1}\mathcal{V}\cap\text{im }P) $$ first. $$ x \in (\mathcal{V}\cap\text {im } P) \Rightarrow x \in \mathcal{V} \land x \in \text{im } P $$ $$ \Rightarrow x \in \mathcal{V} \land \exists y \in \mathbb{R}^n : Py=x $$ By the definition of a projection ($P^2 = P$) we get: $$ \Rightarrow x \in \mathcal{V} \land Px = P(Py) = Py = x $$ $$ \Rightarrow x \in (P^{-1}\mathcal{V}\cap\text{im }P) $$ Now for the second part I'll give you the following hint: because $P$ is a projection we know $$ x \in \text {im } P \iff Px = x. $$ I hope that will help you a bit.
Quotient rule of derivatives Quotient rule of derivatives is: $(\frac{f}{g})^{\prime}$ = $(\frac{f^{\prime}g - g^{\prime}f}{g^2})$ but when I compute the derivative of $\frac{1}{(1-x)}$, it gives $\frac{1}{(1-x)^2}$, which is right, but taking the second derivative of this gives me $\frac{2}{(1-x)^4}$, while the right answer is $\frac{2}{(1-x)^3}$. Where do I make a mistake?
Let's differentiate: $$\left({1\over (1-x)^2}\right)^{'}={(1)'\cdot (1-x)^2-1\cdot\left((1-x)^2\right)^{'}\over (1-x)^4}$$ Now $(1)'=0$ and $\left((1-x)^2\right)^{'}=-2(1-x)$, so the derivative rewrites as $$\left({1\over (1-x)^2}\right)^{'}={2(1-x)\over (1-x)^4}={2\over (1-x)^3}$$
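If it helps, here is a quick symbolic double-check (a sketch assuming the sympy library; sympy may print the results in an equivalent rearranged form):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 - x)
print(sp.simplify(sp.diff(f, x)))      # equivalent to 1/(1 - x)**2
print(sp.simplify(sp.diff(f, x, 2)))   # equivalent to 2/(1 - x)**3
```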
Why is $9 \times 11{\dots}12 = 100{\dots}08$? While I was working on a Luhn algorithm implementation, I discovered something unusual. $$ 9 \times 2 = 18 $$ $$ 9 \times 12 = 108 $$ $$ 9 \times 112 = 1008 $$ $$ 9 \times 1112 = 10008 $$ Hope you can observe the pattern here. How can one prove this? What is its significance?
$$9 \times 11\cdots12 = 9 \times (11\cdots 11+1) = 99\cdots99 + 9.$$
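And a trivial brute-force confirmation of the pattern for the first few cases (plain Python sketch):

```python
# n runs through 2, 12, 112, 1112, ...
for k in range(5):
    n = int('1' * k + '2')
    print(n, 9 * n)   # 18, 108, 1008, 10008, 100008
```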
What is the probability that 4 cards drawn randomly from a deck are all hearts, given that there are at least three hearts? Here is the question: $4$ cards are drawn at random from a standard $52$-card deck. At least $3$ of them are hearts. What is the probability that they are all hearts? I was able to obtain the correct solution as follows: $\frac{ \binom{13}{4}}{\binom{13}{4} + 39 \binom{13}{3}} = \frac{5}{83}$. since there are $\binom{13}{3} \cdot 39$ ways to choose a heart and a non-heart. I want to solve this using the Principle of Inclusion-Exclusion, but I'm having difficulty arriving at the same answer. To compute the number of 4-card hands with at least three hearts, I tried the following: $ \binom {4}{3} \binom{13}{3}49 - 3\binom{13}{4}$ but this is giving me the wrong answer. Reasoning: $\binom {4}{3}$ is to choose which three cards out of four are hearts, $\binom{13}{3}$ is to count which ranks of hearts are selected, and then there are 49 choices for the last (fourth) card. Since we have over-counted four-heart hands, we subtract $3\binom{13}{4}.$
I think you made a typo in your answer; it should be $\frac{ \binom{13}{4}}{\binom{13}{4} + 39 \binom{13}{3}}$. Now, about your other approach of counting the ways of choosing 4 cards with at least 3 hearts: note that it will be $\binom{13}{3}\cdot 49 - 3 \binom{13}{4}$, which is the same as above. The reasoning goes as follows. First note that order does not matter; now there are $\binom{13}{3}\cdot 49$ ways of choosing at least three hearts, with double counting of the hands which have all hearts. The all-heart hands are counted 4 times, so we subtract three copies of them to get the right answer.
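A brute-force enumeration over all $\binom{52}{4}$ hands confirms both counts (a Python sketch; suit $0$ plays the role of hearts):

```python
from itertools import combinations
from fractions import Fraction

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
at_least_3 = all_4 = 0
for hand in combinations(deck, 4):
    hearts = sum(1 for _, suit in hand if suit == 0)
    if hearts >= 3:
        at_least_3 += 1
        if hearts == 4:
            all_4 += 1
print(all_4, at_least_3, Fraction(all_4, at_least_3))   # 715 11869 5/83
```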
Solve $2^x\cdot 6^{x-2}=5^{2x}\cdot 7^{1-x}$ I have to solve this equation, $2^x\cdot 6^{x-2}=5^{2x}\cdot 7^{1-x}$. Now, I started by taking logs on both sides which gives me this funny looking equation $x\log{2}+(x-2)\log(2\cdot3)=2x\log(\frac{10}{2})+(1-x)\log{7}$ I have been stuck on this step for a while now and can't see how I can go further from here. Is there a way out?
$$\begin{aligned} 2^x\cdot 6^{x-2} & = 5^{2x}\cdot 7^{1-x} \\ \left(\dfrac{2\cdot 6\cdot 7}{5^2}\right)^x & = 6^2 \cdot 7 \\ \left(\dfrac{84}{25}\right)^x & = 252 \\ x & = \dfrac{\ln\left(252\right)}{\ln\left(\dfrac{84}{25}\right)} \end{aligned}$$
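Plugging the closed form back into both sides confirms it numerically (plain Python sketch):

```python
import math

x = math.log(252) / math.log(84 / 25)
lhs = 2**x * 6**(x - 2)
rhs = 5**(2 * x) * 7**(1 - x)
print(x)          # ~4.562...
print(lhs, rhs)   # the two sides agree up to floating-point error
```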
Determining if a binary addition has overflow I'm trying to figure out how to determine whether a binary addition has overflow. My understanding is that if the carry-in is not equal to the carry-out then there is overflow. So in that case, the following example: 01110101 + 10111011 = 00110000 has overflow. Is that correct?
You have overflow if there is carry out, regardless of whether there is carry in or not. If you have just two bits, $10+10$ overflows without a carry in. $10+10+1$ (carry in) overflows even though the carry in and carry out are equal.
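To make the two common conventions concrete (an assumption on my part, since the question does not say whether the operands are signed): for unsigned arithmetic, overflow means a carry out of the top bit; for signed two's-complement arithmetic, the usual rule compares the carry *into* the most significant bit with the carry *out of* it. A small Python sketch applying both rules to the question's example:

```python
def add8(a, b, carry_in=0):
    """8-bit add; returns (result, carry_out, carry_into_msb)."""
    low = (a & 0x7F) + (b & 0x7F) + carry_in   # add the low 7 bits
    c_into_msb = low >> 7                      # carry into the sign bit
    total = a + b + carry_in
    return total & 0xFF, total >> 8, c_into_msb

r, c_out, c_in_msb = add8(0b01110101, 0b10111011)
print(format(r, '08b'))                        # 00110000
print('unsigned overflow:', c_out == 1)        # True  (carry out of top bit)
print('signed overflow:', c_in_msb != c_out)   # False (carries are equal)
```

So under the signed rule the question's example does not overflow, while under the unsigned reading it does.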
Let $G$ be a group and $H$ a normal subgroup of order 2. Prove that if $G/H$ is cyclic, then $G$ is abelian. Let $G$ be a group and $H$ a normal subgroup of order 2. Prove that if $G/H$ is cyclic, then $G$ is abelian. I understand that if $G/H$ is cyclic, it is generated by a single element, i.e. $G/H = \langle xH\rangle$, $x\in G$, and hence any $g \in G/H$ is $g = (xH)^n, n\in \mathbb{Z}$, so $g = x^nH$. Thus for any 2 elements, $f,g \in G/H, fg = x^nHx^mH = x^{n+m}H = x^{m+n}H = x^mHx^nH = gf$, so $G/H$ is abelian. Is this correct, and if so, can I use it to show $G$ is abelian? What is the significance of $H$ being of order 2? Any help appreciated!
A subgroup of order $2$: $\{e,h\}$ is normal if and only if $ghg^{-1}=h$ for all $g\in G$. In other words if $h$ commutes with every element of $G$. Now use that $G/H$ is cyclic, in other words there is a $c$ such that $Hc^k$ covers all of $G/H$, in other words every element is of the form $c^k$ or $hc^k$. Clearly any two such elements of this form commute.
Big O definition and showing equality On this problem, I'm not sure which definition of Big O they are referring to. How would the Big O definition help show this? Use the definition of $O$ to show that if $y = y_h + O(h^p)$, then $hy = hy_h + O(h^{p+1})$.
The first statement is saying that, taking the big O as $h$ approaches $0$, there is some $M>0$ with $$ |y-y_h|<M|h^p|\implies \frac{|y-y_h|}{|h^p|}<M $$ for all $|h|<\delta$, for some $\delta>0$. Now what happens if you multiply the LHS by $\frac{|h|}{|h|}=1$?
How can this function be a valid PDF? We have a bivariate random variable with the joint PDF $$f(x_1,x_2)=0.5\,I_{[0,4]}(x_1)\, I_{[0,0.25x_1]}(x_2)$$ then find the joint CDF? It is a homework problem and I know how to find the joint CDF, but I do not understand how this function can be a valid PDF, since $x_1 \in [0,4] $ and $x_2 \in [0,0.25x_1]$; if we integrate over the entire domain we must get $1$, but I get $2$: $$\int_0^4 \int_0^1 0.5 \, dx_2 \, dx_1 = 2 $$ Any help please!
The correct integral is $$ \int_0^4 \int_0^{x_1 / 4} \frac{1}{2} \, \mathrm d x_2 \,\mathrm dx_1 $$ which does, in fact, integrate up to $1$.
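A quick symbolic check that the density integrates to $1$ over its actual support (a sketch assuming the sympy library):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
# Inner integral over x2 in [0, x1/4], outer over x1 in [0, 4].
total = sp.integrate(sp.Rational(1, 2), (x2, 0, x1 / 4), (x1, 0, 4))
print(total)   # 1
```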
Prove $k+1>k $ where $k$ is an integer I'm being asked to prove that if $k∈ \Bbb Z$, $k+1>k$. Judging from our instructions, it appears (I am unsure) as though I cannot use the law of induction to solve this. A hint gives that the proof depends on $1∈\Bbb N$. I was thinking of approaching this by using if $k∈\Bbb N$ then $x+k∈\Bbb N$. Problem is, we were not given $k∈\Bbb N.$ Is this solvable without induction? We are given associativity, commutivity, and the identity elements for addition and multiplication, the additive inverse, the properties of $=$, and basic set theory and logic. Order for the integers is given by Let $m,n,p∈\Bbb Z$. If $m<n$ and $n<p$, $m<p$. Also assumed, and possibly relevant is if $x∈\Bbb Z$, then $x∈\Bbb N$ or $-x∈ \Bbb N$ or $x=0$.
If you assume that $1 > 0$, then it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequality } \quad k + 1 > k \qquad \mbox{ is equivalent to } \qquad 1 > 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequality } \quad x > y \qquad \mbox{ is equivalent to } \qquad x + z > y + z $$ and you can replace "$>$" by "$\geq$", "$<$", "$\leq$" or "$=$".
vector times cross product We have vectors $x, y, z$ where $z = x \times y$. What is $x \cdot z$? From my intuition, the cross product is perpendicular to both vectors, so dot product should be 0?
The geometric reasoning you mentioned is correct, and it is probably the best way to understand what's going on. Another option is as follows: We know that $x \cdot z = x \cdot (x \times y)$. In general, the value of the triple product $u \cdot (v \times w)$ is given by the determinant that has $u$, $v$, $w$ as its rows. But, in our case, since $x$ occurs twice in our triple product, the determinant will have two identical rows, so its value will be zero. In short $$ x \cdot z = x \cdot (x \times y) = \det \left[\begin{matrix} \leftarrow & x & \rightarrow \\ \leftarrow & x & \rightarrow \\ \leftarrow & y & \rightarrow \\ \end{matrix} \right] = 0 $$
fundamental group of manifold, Lee's text topological manifolds I am reading Lee's text "Introduction to Topological Manifolds", and I have a question about his proof of Theorem 7.21. I include his proof below for reference. My question is about the statement underlined in red. I know that $U$ and $U'$ are connected since they are coordinate balls, but how do we know that their intersection cannot have uncountably many components? I couldn't think of a proof to show that they are countable. Any help would be great. Thank you.
An open subset of $\mathbb{R}^n$ does not have an uncountable family of pairwise disjoint nonempty open subsets, because it is separable: each member of such a family would contain a point of a fixed countable dense subset, and distinct members would pick up distinct points.
How to show that $\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)?$ How do we show that? $$\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)\tag1$$ $(1)$ it a bit difficult to start with $$\int_{0}^{\pi}(1+2x)\cdot{\sin(x)[1-\sin^2(x)]\over 1+\cos^2(x)}\mathrm dx\tag2$$ Setting $u=\cos(x)$ $du=-\sin(x)dx$ $$\int_{-1}^{1}(1+2x)\cdot{(u^2)\over 1+u^2}\mathrm du\tag3$$ $$\int_{-1}^{1}(1+2\arccos(u))\cdot{(u^2)\over 1+u^2}\mathrm du\tag4$$ $du=\sec^2(v)dv$ $$\int_{-\pi/4}^{\pi/4}(1+2\arccos(\tan(v)))\tan^2(v)\mathrm dv\tag5$$ $$\int_{-\pi/4}^{\pi/4}\tan^2(v)+2\tan^2(v)\arccos(\tan(v))\mathrm dv=I_1+I_2\tag6$$ $$I_1=\int_{-\pi/4}^{\pi/4}\tan^2(v)\mathrm dv=2-{\pi\over2}\tag7$$ As for $I_2$ I am sure how to do it.
$J=\displaystyle \int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx$ Perform the change of variable $y=\pi-x$, $\displaystyle J=\int_0^{\pi} (1+2(\pi-x))\dfrac{\sin^3 x}{1+\cos^2 x}dx$ Therefore, $\begin{align}\displaystyle 2J&=(2+2\pi)\int_0^{\pi}\dfrac{\sin^3 x}{1+\cos^2 x}dx\\ 2J&=-(2+2\pi)\int_0^{\pi}\dfrac{\sin^2 x}{1+\cos^2 x}\text{d}(\cos x)\\ 2J&=-(2+2\pi)\int_0^{\pi}\dfrac{(1-\cos^2 x)}{1+\cos^2 x}\text{d}(\cos x)\\ \end{align}$ Perform the change of variable $y=\cos x$ in the latter integral, $\begin{align}\displaystyle 2J&=(2+2\pi)\int_{-1}^{1}\dfrac{1-x^2}{1+x^2}dx\\ \displaystyle 2J&=(2+2\pi)\int_{-1}^{1}\dfrac{1}{1+x^2}dx-(2+2\pi)\int_{-1}^{1}\dfrac{x^2}{1+x^2}dx\\ &=(2+2\pi)\Big[\arctan x\Big]_{-1}^{1}-(2+2\pi)\left(\int_{-1}^1 \dfrac{1+x^2}{1+x^2}dx-\int_{-1}^1 \dfrac{1}{1+x^2}dx\right)\\ &=4(2+2\pi)\dfrac{\pi}{4}-2(2+2\pi)\\ &=2(\pi+1)(\pi-2) \end{align}$ Therefore, $\boxed{J=(\pi+1)(\pi-2)}$
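A crude midpoint-rule check of the closed form (plain-Python numerical sketch, just to build confidence in the symmetry argument):

```python
import math

N = 200_000
h = math.pi / N
total = 0.0
for k in range(N):
    x = (k + 0.5) * h
    total += (1 + 2 * x) * math.sin(x)**3 / (1 + math.cos(x)**2) * h
print(total)                                 # ~4.7283...
print((math.pi + 1) * (math.pi - 2))         # same value
```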
Customers choose candies with different flavours Each of eight customers chooses, randomly and independently of the others, one flavour of candy. In the bowl there are many candies of five different flavours. a) What is the probability that each flavour is chosen? And what would happen if there were 100 customers? b) What is the probability that all of the customers have agreed on the same flavour? a) $P=1 - \left[(\frac{1}{5})^{8} + (\frac{2}{5})^{8} + (\frac{3}{5})^{8} + (\frac{4}{5})^{8}\right]$ My intuition is that one flavour wasn't chosen, two weren't chosen, three weren't chosen... Is it correct? For 100 customers it is the same, just with the exponent equal to 100. b) It is just $P=(\frac{1}{5})^8$, right?
I think in the first case it should be $$P=1-\binom 51(\frac{4}{5})^8+\binom 52(\frac{3}{5})^8-\binom 53(\frac{2}{5})^8+\binom 54(\frac{1}{5})^8$$ Here I have utilised the Inclusion-Exclusion Principle, where the property is taken as "a particular flavour is not chosen". b) is incorrect. Edit: following @lulu's comment below, the correct answer to b) should be $5(\frac{1}{5})^8$; read @lulu's comment below for the explanation.
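Here is an exact evaluation of the inclusion-exclusion sum together with a Monte-Carlo sanity check (a Python sketch):

```python
from math import comb
from fractions import Fraction
import random

# Exact P(every flavour is chosen by 8 customers).
p_all = sum((-1)**j * comb(5, j) * Fraction(5 - j, 5)**8 for j in range(5))
print(p_all, float(p_all))   # ~0.3225

# Simulation should land close to the exact value.
trials, hit = 200_000, 0
for _ in range(trials):
    if len({random.randrange(5) for _ in range(8)}) == 5:
        hit += 1
print(hit / trials)
```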
Formula for $\sum_{k=1}^n \frac{1}{k(k+1)(k+2)}$? I was trying to find a formula for the series $$\sum_{k=1}^n \frac{1}{k(k+1)(k+2)} =? $$ I tried to break this into partial fractions...To see if I could telescope this series.. The partial fraction went like this $$\frac{1}{k(k+1)(k+2)}=\frac{1}{2k}-\frac{1}{k+1}+\frac{1}{2k+4}$$ But the terms are so random that they hardly cancel.... I also tried partial sums but couldn't make much headway....Any help to solve this would be appreciated
$$\dfrac2{k(k+1)(k+2)}=\dfrac{k+2-k}{k(k+1)(k+2)}=\dfrac1{k(k+1)}-\dfrac1{(k+1)(k+2)}=f(k)-f(k+1)$$ where $f(m)=\dfrac1{m(m+1)}$. Hence the sum telescopes: $$\sum_{k=1}^n \frac{1}{k(k+1)(k+2)}=\frac{f(1)-f(n+1)}{2}=\frac14-\frac{1}{2(n+1)(n+2)}.$$
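An exact check of the closed form with rational arithmetic (plain-Python sketch):

```python
from fractions import Fraction

def f(m):
    return Fraction(1, m * (m + 1))

for n in (1, 2, 5, 10):
    direct = sum(Fraction(1, k * (k + 1) * (k + 2)) for k in range(1, n + 1))
    closed = Fraction(1, 2) * (f(1) - f(n + 1))
    print(n, direct, closed, direct == closed)   # always True
```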
$\frac{A}{B}\otimes_C D\cong \frac{A\otimes_C D}{B\otimes_C D}$? Let $B<A$ and $D$ be $C$-modules. Am I correct in saying that $D$ being flat over $C$ means that $$\frac{A}{B}\otimes_C D\cong \frac{A\otimes_C D}{B\otimes_C D}$$ To me this seems an immediate consequence of the exact sequence $$0\to B\to A\to A/B\to 0$$ and then by exactness $$0\to B\otimes D\to A\otimes D\to (A/B)\otimes D\to 0$$ In particular it would follow, I believe, that for $I$ an ideal in $\mathbb{Z}[x_1,\dots,x_n]$ we have that $$\frac{\mathbb{Z}[x_1,\dots,x_n]}{I}\otimes_{\mathbb{Z}}\mathbb{Q}\cong \frac{\mathbb{Q}[x_1,\dots,x_n]}{I\otimes \mathbb{Q}}$$ I'm not very confident about my understanding of tensor products, so some confirmation/correction would be appreciated.
The quotient $$ \frac{A\otimes_CD}{B\otimes_CD} $$ doesn't make sense in general. The tensor of the embedding $B\to A$ is in general not injective, so there's no way to identify $B\otimes_CD$ with a submodule of $A\otimes_CD$. Flatness of $D$ is precisely the statement that for every module $A$ and every submodule $B$ of $A$, the tensor $B\otimes_CD\to A\otimes_CD$ of the inclusion map $B\to A$ is injective. In general, for every exact sequence $X\xrightarrow{f}Y\xrightarrow{g}Z\to0$ of $C$-modules, if you denote by $[X,D]$ the image of the map $X\otimes_CD\to Y\otimes_CD$, then $$ Z\cong \frac{Y\otimes_CD}{[X,D]} $$ which is an easy consequence of right exactness of the tensor product.
In a finite-dimensional norm-induced metric space, does every bounded sequence have a convergent subsequence? What I know is that every bounded sequence in $\Bbb R^n$ has a convergent subsequence regardless of the chosen norm, since this is true under the Euclidean norm, and all norms on $\Bbb R^n$ are equivalent. My question is: does every bounded sequence have a convergent subsequence in any finite-dimensional norm-induced metric space? If not, a counter-example is appreciated. Thanks!
I assume that by finite dimensional norm-induced metric space you mean a vector space in the first place because otherwise the concept of a norm or dimension doesn't make any sense. If $(V, \|\cdot\|)$ is any $n$-dimensional normed $\mathbb F$-Vector space, where $\mathbb F\in\{\mathbb R,\mathbb C\}$ and $\{b_1,...,b_n\}$ is a basis of $V$, then the unique linear map $\varphi$ that maps $b_i$ to $e_i$ where $\{e_1,...,e_n\}$ is the standard base of $\mathbb F^n$ is an isomorphism. It is easily checked that $|\cdot|$ defined by $|x| := \|\varphi^{-1}(x)\|$ defines a norm on $\mathbb F^n$. For this norm it is true that each bounded sequence has a convergent subsequence. If $(x_n)_{n\in\mathbb N}$ is a bounded sequence in $V$, then $(\varphi(x_n))_{n\in\mathbb N}$ is bounded in $(\mathbb F^n, |\cdot|)$, so it has a convergent subsequence $(\varphi(x_{n_k}))_k$, but then $(x_{n_k})_k$ must converge in $V$ because $\varphi^{-1}$ is continuous.
Find all the functions for which $f(x^{2})-f(y^{2})=(x+y)(f(x)-f(y)):\forall x, y\in\mathbb{R}$ Hey guys I need help on solving the following problem Find all the functions for which : $$f(x^{2})-f(y^{2})=(x+y)(f(x)-f(y)):\forall x, y\in\mathbb{R}$$ I started with taking $x=y$, and there you see that the both sides are equal to $0$, therefore the solution is the function $y=x$, but after that I can't prove that that is the only solution (if it is the only). Would appreciate some help, thanks.
We suppose $f(0)=0$. With $y=0$ we obtain $f(x^2)=xf(x)$. Thus $$ xf(x)-yf(y)=f(x^2)-f(y^2)=(x+y)(f(x)-f(y))=xf(x)-xf(y)+yf(x)-yf(y)\implies\\ yf(x)=xf(y) $$ for all $x,y\in\mathbb{R}$. With $y=1$ this yields $f(x)=f(1)x$ thus $f$ is of the form $f(x)=cx$ for some constant $c$. We see that whatever the value of $c$ such a function satisfies the equation. If $f(0)\neq 0$, we see that $g(x)=f(x)-f(0)$ still satisfies the equation and $g(0)=0$ so $g(x)=cx$ and thus all solutions are of the form $f(x)=cx+a$ for constants $c$ and $a$.
How to show $f(x) =(\sqrt[3]{x}+x)\sqrt{x}$ is injective and surjective? $f : [0, \infty) \to [0,\infty)$ and $f(x) =(\sqrt[3]{x}+x)\sqrt{x}$. How should I go about showing that it is bijective?
For injectivity it is enough to see that: $$f'(x)=\frac{5}{6x^{1/6}}+\frac{3x^{1/2}}{2}>0$$ so $f$ is monotone increasing. For surjectivity, take $y_0 \in (0, \infty)$ (because if $y_0=0$ we can take $x=0$); then $$f(x)=x^{5/6}+x^{3/2}=y_0$$ Put $p=x^{1/6}$; then $$g(p)=p^5+p^9-y_0=0$$ The polynomial $g$ has real coefficients and odd degree, so it has an odd number of real roots. Also there is a positive root because the product of all the roots is $y_0>0$ (alternatively: $g(0)=-y_0<0$ and $g(p)\to+\infty$ as $p\to\infty$, so the intermediate value theorem gives a positive root).
How many 100-digit numbers' sums of digits equal 3? How many 100-digit numbers' sums of digits equal 3? How do I solve this?
We have a few possibilities (the leading digit cannot be $0$). A leading 1 with two more 1s somewhere among the other 99 digits: $99\choose{2}$. A leading 1 with a 2 somewhere else: $99\choose{1}$. A leading 2 with a 1 somewhere else: $99\choose{1}$. A leading 3 followed by 99 zeroes: $1\choose{1}$. $${99\choose2} + {99\choose1} + {99\choose1} + {1\choose1} = 4851 + 99 + 99 + 1 = 5050$$
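The same casework can be sanity-checked by brute force on 4-digit numbers, where enumeration is cheap (plain-Python sketch; the formula below is the 4-digit analogue of the one above):

```python
from math import comb

brute = sum(1 for n in range(1000, 10000)
            if sum(int(d) for d in str(n)) == 3)
formula = comb(3, 2) + comb(3, 1) + comb(3, 1) + 1   # cases 1+1+1, 1&2, 2&1, 3
print(brute, formula)   # 10 10
```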
Derivative of arcsin In my assignment I need to analyze the function $f(x)=\arcsin \frac{1-x^2}{1+x^2}$ And so I need to compute the first derivative, and my result is: $-\dfrac{4x}{\left(x^2+1\right)^2\sqrt{1-\frac{\left(1-x^2\right)^2}{\left(x^2+1\right)^2}}}$ But in the solution of this assignment it says $f'(x)=-\frac{2x}{|x|(1+x^2)}$ I don't understand how they get this. I checked my answer with an online calculator and it is the same.
You should have developed your result a bit more to obtain the assignment solution. \begin{align} -\frac{4x}{(x^2+1)^2\sqrt{1-\frac{(1-x^2)^2}{(x^2+1)^2}}}&=-\frac{4x}{(x^2+1)^2\sqrt{\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\ &=-\frac{4x}{(x^2+1)^2\sqrt{\frac{4x^2}{(x^2+1)^2}}}\\ &=-\frac{4x}{(x^2+1)^2\frac{2|x|}{(x^2+1)}}\\ &=-\frac{2x}{|x|(x^2+1) } \end{align} Voila!
Find all possible positive integer $n$ such that $3^{n-1} + 5^{n-1} \mid 3^n + 5^n $ Find all possible positive integers $n$ such that $3^{n-1} + 5^{n-1} \mid 3^n + 5^n $. I don't know how to start with. Any hint or full solution will be helpful.
Hint: $A:=3^{n-1}+5^{n-1}$, and $B:=3^n+5^n$. Compare $3A$ and $5A$ to $B$ to get the possible values of the integer $B/A$.
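A brute-force scan supporting the hint's conclusion that $n=1$ is the only solution (plain-Python sketch; exact integer arithmetic, so no overflow concerns):

```python
for n in range(1, 60):
    A = 3**(n - 1) + 5**(n - 1)
    B = 3**n + 5**n
    if B % A == 0:
        print(n, B // A)
# prints only: 1 4
```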
Find all $x$ such that $x^6 = (x+1)^6$. Find all $x$ such that $$x^6=(x+1)^6.$$ So far, I have found the real solution $x= -\frac{1}{2}$, and the complex solution $x = -\sqrt[3]{-1}$. Are there more, and if so what/how would be the most efficient find all the solutions to this problem? I am struggling to find the rest of the solutions.
Since $0$ is not a solution, this is equivalent to solving $$ \frac{x+1}{x}=\zeta $$ where $\zeta$ is any sixth root of $1$. Then $x+1=\zeta x$, so $$ x=\frac{1}{\zeta-1} $$ Of course $\zeta=1$ should not be considered. There are five other sixth roots of $1$, namely $$ e^{k\pi i/3} $$ for $1\le k\le 5$.
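The five solutions, computed and verified numerically (plain-Python sketch; note that $k=3$ recovers the real solution $x=-\frac12$):

```python
import cmath

for k in range(1, 6):
    zeta = cmath.exp(1j * cmath.pi * k / 3)   # nontrivial sixth roots of 1
    x = 1 / (zeta - 1)
    print(k, x, abs(x**6 - (x + 1)**6))       # residual ~0
```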
Approximating derivative as parameter decreases Here is the question and below my understanding/attempt: Let $f(x) = \arctan (x)$. Use the derivative approximation: $f'(x) = \frac{8f(x+h) - 8f(x-h) - f(x+2h) +f(x-2h)}{12h} $ to approximate $f'(\frac14\pi)$ using $h^{-1}$ = 2, 4, 8. Try to take h small enough that the rounding error effect begins to dominate the mathematical error. For what value of h does this begin to occur? (You may have to restrict yourself to working in single precision.) So, from what I gather I just have to do $\frac{8 \times \arctan(\frac\pi4 + \frac12) -8\times\arctan(\frac\pi4 - \frac12)..........}{12h}$ And then the same for $h^{-1}$= 4 and 8. How does the decreasing h come into play? Thanks!
As $h$ decreases, the results should get closer to the exact answer. The error of such approximations usually behaves like $c h^m$ for some real $c$ and integer $m$. Therefore, as you go from $h$ to $h/2$, the error would go approximately from $ch^m$ to $c(h/2)^m =ch^m/2^m$, so it would decrease by a factor of $2^m$. Considering this further can lead to the concept of Richardson extrapolation. See here for an explanation: https://en.wikipedia.org/wiki/Richardson_extrapolation
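Here is a direct implementation of the formula from the question (plain-Python sketch; since $\frac{d}{dx}\arctan x = \frac{1}{1+x^2}$, the exact value is known and the error ratio can be observed):

```python
import math

def d4(f, x, h):
    """Fourth-order central-difference approximation from the problem."""
    return (8*f(x + h) - 8*f(x - h) - f(x + 2*h) + f(x - 2*h)) / (12*h)

x0 = math.pi / 4
exact = 1 / (1 + x0**2)
for h in (1/2, 1/4, 1/8):
    approx = d4(math.atan, x0, h)
    print(h, approx, abs(approx - exact))
# Each halving of h should cut the error by roughly 2**4 = 16,
# until rounding error starts to dominate at much smaller h.
```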
Surjectivity of an $\mathbb{N}^2$ to $\mathbb{N}$ function (This is my first post on here - let me know if I need to edit anything.) I'm just starting an introductory topology course and I've come across a problem that I've been trying to solve for a few hours now. I'm supposed to prove the bijectivity of the $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ function $f(i,j)=i+\frac{(i+j-2)(i+j-1)}{2}$. Most of my struggle is coming from proving that the function is onto. I understand that for each $z \in \mathbb{N}$ I need to find an $i$ and $j$ so that $f(i,j)=z$, but I'm having a hard time finding definitions of $i$ and $j$ that don't depend on each other. And I have a start to a proof of one-to-one, where I started with $f(i,j)=f(h,k)$ and pretty much factored and simplified down to $(i+j)^2-i-3j=(h+k)^2-h-3k$, but I'm not sure where to go from there. I think I'm mainly having conceptual problems because the last time I did this sort of proof was in my discrete math course, which was much lower-level. All that aside, any hints would help immensely :)
It is unnecessary to modify the function, since it is one-to-one and onto as proposed. Show that $f(i,j)=i+\dfrac{(i+j-2)(i+j-1)}{2}$ is one-to-one and onto from $\mathbb{N}^2\,\to\,\mathbb{N}$, where $\mathbb{N}=\{1,2,3,\cdots\}$. The onto part is fairly simple, since $\left\{\dfrac{(k-2)(k-1)}{2}\right\}_{k=2}^\infty$ is an increasing sequence of non-negative integers: $$0,1,3,6,10,15,21,28,36,45,55,66,78,\cdots$$ For each $m\in\mathbb{N}$ define ${\langle m \rangle}=\max\left\{k:\frac{(k-2)(k-1)}{2}<m\right\}$ and write $T_m=\dfrac{({\langle m \rangle}-2)({\langle m \rangle}-1)}{2}$; let $i=m-T_m$ and $j={\langle m \rangle}-i$. Then $f(i,j)=i+\dfrac{(i+j-2)(i+j-1)}{2}=i+T_m=m$. Using this we can see, for example, that $f(1,1)=1$, $f(1,2)=2$, $f(2,1)=3$, $f(1,3)=4$. Or we can skip ahead and see that for $m=25$, ${\langle m \rangle}=8$ since $\frac{(8-2)(8-1)}{2}=21$ is the largest value of $\frac{(k-2)(k-1)}{2}$ smaller than $25$. So $i=25-21=4$ and $j={\langle m \rangle}-i=8-4=4$. Thus $f(4,4)=25$. But what if, in the case of $m=25$, rather than letting $i+j={\langle m \rangle}=8$ we let $i+j={\langle m \rangle}-1=7$? Then $i=25-\frac{(7-2)(7-1)}{2}=10$, and since $i+j=7$ then $j=-3$, which is not in $\mathbb{N}$. To show that the function $f$ is one-to-one it is necessary to show that if $i+j={\langle m \rangle}-p$ for some $1\le p\le{\langle m \rangle}-2$, where $i=m-\dfrac{({\langle m \rangle}-p-2)({\langle m \rangle}-p-1)}{2}$, then $j\le0$. This is in fact the case. We have \begin{eqnarray} j&=&({\langle m \rangle}-p)-i\\ &=&{\langle m \rangle}-p+\frac{({\langle m \rangle}-p-2)({\langle m \rangle}-p-1)}{2}-m\\ &=&T_m-m+{\langle m \rangle}+\frac{p\left(p+1-2{\langle m \rangle}\right)}{2} \end{eqnarray} For $1\le p\le{\langle m \rangle}-2$ the last two terms are decreasing in $p$, so they are largest at $p=1$, where they equal ${\langle m \rangle}+\frac{2-2{\langle m \rangle}}{2}=1$; hence $$j\le T_m-m+1\le 0,$$ since $T_m<m$, i.e. $T_m\le m-1$. So $j\le0$. Thus $f$ is one-to-one and onto. For $m\in\mathbb{N}$, \begin{eqnarray} f^{-1}(m)=\left(m-T_m,\ \frac{3+\sqrt{1+8T_m}}{2}-(m-T_m)\right) \end{eqnarray} since solving $T=\frac{(k-2)(k-1)}{2}$ for $k$ gives $k=\frac{3+\sqrt{1+8T}}{2}$. For example, find $f^{-1}(53)$: here $\langle 53\rangle=11$ and $T_{53}=45$, so $i=53-45=8$ and $j=\dfrac{3+\sqrt{1+8\cdot 45}}{2}-8=11-8=3$, so $f^{-1}(53)=(8,3)$, which can be verified by substitution back into $f$.
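The construction above is easy to implement and test (plain-Python sketch of $f$ and the inverse described in the answer):

```python
def f(i, j):
    return i + (i + j - 2) * (i + j - 1) // 2

def f_inv(m):
    k = 2
    while (k - 1) * k // 2 < m:   # find <m>: largest k with (k-2)(k-1)/2 < m
        k += 1
    t = (k - 2) * (k - 1) // 2    # T_m
    i = m - t
    return i, k - i

for m in (1, 2, 3, 4, 25, 53):
    i, j = f_inv(m)
    print(m, (i, j), f(i, j) == m)   # (1,1), (1,2), (2,1), (1,3), (4,4), (8,3)
```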
Sum of series $\sum \limits_{k=1}^{\infty}\frac{\sin^3 3^k}{3^k}$ Calculate the following sum: $$\sum \limits_{k=1}^{\infty}\dfrac{\sin^3 3^k}{3^k}$$ Unfortunately I have no idea how to handle with this problem. Could anyone show it solution?
An overkill. Let $\mathfrak{M}\left(*,s\right)$ denote the Mellin transform. Using the identity $$\mathfrak{M}\left(\underset{k\geq1}{\sum}\lambda_{k}g\left(\mu_{k}x\right),\, s\right)=\underset{k\geq1}{\sum}\frac{\lambda_{k}}{\mu_{k}^{s}}\mathfrak{M}\left(g\left(x\right),s\right) $$ we have $$\mathfrak{M}\left(\underset{k\geq1}{\sum}\frac{\sin^{3}\left(3^{k}x\right)}{3^{k}},\, s\right)=\underset{k\geq1}{\sum}\left(\frac{1}{3^{s+1}}\right)^{k}\mathfrak{M}\left(\sin^{3}\left(x\right),s\right) $$ and since $$\mathfrak{M}\left(\sin^{3}\left(x\right),s\right)=\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)}{4}\left(3-\frac{1}{3^{s}}\right) $$ we have for $\textrm{Re}\left(s\right)>-1 $ $$\mathfrak{M}\left(\underset{k\geq1}{\sum}\frac{\sin^{3}\left(3^{k}x\right)}{3^{k}},\, s\right)=\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)}{4}\left(3-\frac{1}{3^{s}}\right)\frac{1}{3^{s+1}-1}=\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)3^{-s}}{4} $$ and so inverting we get $$\underset{k\geq1}{\sum}\frac{\sin^{3}\left(3^{k}x\right)}{3^{k}}=\frac{1}{2\pi i}\int_{1/2-i\infty}^{1/2+i\infty}\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)3^{-s}}{4}x^{-s}ds $$ now taking $x=1$ and shifting the complex line to the left we have, from the residue theorem, that we have to evaluate the residues $$\textrm{Res}_{s=-(2k+1)} \frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)3^{-s}}{4}=\frac{\left(-1\right)^{k}3^{2k+1}}{4\left(2k+1\right)!} $$ hence $$\sum_{k\geq1}\frac{\sin^{3}\left(3^{k}\right)}{3^{k}}=\frac{1}{4}\sum_{k\geq 0}\frac{\left(-1\right)^{k}}{\left(2k+1\right)!}3^{2k+1}=\color{red}{\frac{\sin\left(3\right)}{4}}.$$
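A direct numerical check of the closed form (plain-Python sketch; summation stops at $k=33$ because $3^{33}$ is still exactly representable as a double, and the remaining tail is smaller than $3^{-34}$):

```python
import math

total = sum(math.sin(3.0**k)**3 / 3.0**k for k in range(1, 34))
print(total)               # ~0.0352812...
print(math.sin(3.0) / 4)   # same value
```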
Maclaurin series of $f(g(x))$ I was doing some exercise about Maclaurin expansion when I notice something, I used to remember the series formula of some common functions with $x$ as argument, but when I had to calculate the expansion for the same function but with $x^2$ as argument, for example, I always recalculate the series from scratch. Then I started to realise that I could have just substituted $x$ with $x^2$. So is it wrong to say that, given a polynomial function $P(x)$ which represent the series of Maclaurin for a function $f(x)$, the series of Maclaurin for $f(g(x))$ is equal to $P(g(x))$ when $g(x)$ approach to $0$? If it's not completely wrong can you give me some hints in order to understand when it's correct?
Check out page 66 of Serge Lang's complex analysis textbook (4th edition). He states and proves a theorem which tells you sufficient conditions for composing two power series formally (making no allusions to the analytic properties of the series). In addition to the radius of convergence conditions alluded to in answers above, the odd requirement is that the inner series, $h$ in $$ f(h(x))=g(x) $$ has constant term zero.
Can probabilities be calculated using cardinality? The probability that a positive natural number drawn at random is odd, is $\frac{1}{2}$. Is there a way of calculating this fact using the cardinalities of the sets? Or is there some measure of cardinality, or of the number of elements within each set, which captures this fact? If we say the sample space is $\mathbb{N}$ and the successful outcomes are $O=\{2n-1:n\in\mathbb{N}\}$ then we might (obviously incorrectly) infer from the cardinalities of the sets that: $(\lvert\mathbb{N}\rvert=\lvert O\rvert)\implies P(n$ is odd$)=1$ It would seem to me that the closest we can come is to say the densities of $O$ and $\mathbb{N}\cap O$ in $\mathbb{N}$ are equal to each other and make our deduction by that means. Is that the normal approach within set theory?
No, there isn't any way to do it using the cardinality of the set. The cardinality of the set of odd numbers, the cardinality of the set of prime numbers, and the cardinality of the set of natural numbers are all the same: $\aleph_0$. But obviously the probability of choosing an odd number, a prime number, or a number at all are not the same - respectively, they should be $\frac{1}{2}$, $0$, and $1$. Within the field of set theory, then, there really isn't a way to calculate the density. You can define probability using limiting density: take $\lim_{n \to \infty}\frac{|X \cap n|}{n}$, where $X \cap n$ is the set of members of $X$ less than $n$. This limit will give the correct probabilities for everything I noted above. However, I wouldn't call this a set-theoretic approach - this is more number-theoretic in character, because it depends heavily on the order of the natural numbers.
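The limiting-density definition is easy to explore numerically (plain-Python sketch using naive trial division for primality, which is fine at this scale):

```python
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

for n in (10**3, 10**4, 10**5):
    odds = sum(1 for k in range(1, n) if k % 2 == 1) / n
    primes = sum(1 for k in range(1, n) if is_prime(k)) / n
    print(n, odds, primes)   # odds -> 1/2, primes -> 0 (roughly 1/log n)
```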
Prove that if $f$ is a continuous mapping of a compact metric space $X$ into $\mathbb{R}^k$, then $f(X)$ is closed and bounded. Prove that if $f$ is a continuous mapping of a compact metric space $X$ into $\mathbb{R}^k$, then $f(X)$ is closed and bounded.Thus $f$ is bounded. My Attempted Proof We have $f: X \to \mathbb{R}^k$. Since a continuous mapping of a compact metric space $A$ into an arbitrary metric space $B$ implies $f(A)$ is compact, it follows that $f(X)$ is compact in our case. Let $\{V_{\alpha}\}$ be an open cover of $f(X)$. Then there exists finitely many indices such that $f(X) \subset V_{{\alpha}_1} \cup \ ... \ \cup V_{{\alpha}_n} $. Since we have each $V_{\alpha_{i}} \subset \mathbb{R}^k$, $\sup V_{\alpha_{i}}$ and $\inf V_{\alpha_{i}}$ exists in $\mathbb{R}^k$. Therefore $\sup f(X) \leq \sup \bigcup_{i=1}^n V_{\alpha_{i}}$ and $\inf f(X) \geq \inf \bigcup_{i=1}^n V_{\alpha_{i}}$, so that $f(X)$ is bounded. It remains to be shown that $f(X)$ is closed. Let $\gamma$ be a limit point of $f(X)$, and let $U_{\gamma}$ be a neighbourhood of $\gamma$. Then $U_{\gamma} \cap f(X) = C$ where $C$ is nonempty and $C \neq \{\gamma\}$ (by the definition of a limit point). Suppose $\gamma \not\in f(X)$, then there exists a $\epsilon > 0$ such that $U_{\gamma} = B_d(\gamma, \epsilon) \cap f(x) = \emptyset$ (where $B_d(\gamma, \epsilon)$ is the $\epsilon$-ball centered at $\gamma$ with respect to the metric on $\mathbb{R}^k$) . Thus $\gamma \in f(X)$ and we have $f(X)$ closed. $ \square$ Is my proof correct? If so how rigorous is it? Any comments on my proof writing skills and logical arguments are greatly appreciated.
Let $f_i=\pi_i°f:X\rightarrow R$. Thus, $f_i$ is a continuous function and since $X$ is compact $f_i (X)$ is also compact. So $f_i (X )$ is closed and bounded. Now since $f ( X )\subseteq f_1 (X)×f_2 (X)×...×f_k ( X)$, $f (X)$ must be bounded. On the other hand, since $X$ is compact $f (X) $ is a compact subset of the hausdorff space $R^k$. Thus it is closed.
Find $x$ in equation having Euler's Number I have to find $x$ in this equation. I have already tried cross multiplying but I am not sure what to do after. $$ \frac{e^x - e^{-x}}{e^x + e^{-x}} = \frac{3}{4} $$
We know that $$\sinh x =\frac {e^x -e^{-x}}{2} $$ and that $$\cosh x =\frac {e^x + e^{-x}}{2} $$ where $\sinh x $ and $\cosh x $ are the hyperbolic trigonometric functions, so the equation says $\tanh x = \frac34$. Can you now proceed? We can otherwise put $k =e^x $ and then get our equation as $$\frac {k-\frac {1}{k}}{k+\frac {1}{k}} =\frac {3}{4} $$ $$4 (k^2-1) =3 (k^2+1) $$ $$\Rightarrow k = \sqrt {7} \text { as the acceptable solution,} $$ and then $x=\ln k=\ln\sqrt{7}=\tfrac{1}{2}\ln 7$. Hope it helps.
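A quick numerical confirmation (plain-Python sketch):

```python
import math

x = 0.5 * math.log(7)
print((math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x)))  # 0.75
print(math.tanh(x))                                                 # 0.75
```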
Property of Dirichlet character (Apostol 8.17) We may use the following theorem, Let $\chi$ be a Dirichlet character modulo $k$ and assume $d|k , d<k$. Then the following two statement are equivalent: * *$d$ is an induced modulus for $\chi$ *There is a character $\psi$ modulo $d$ such that $\chi(n) = \psi(n)\chi_{1}(n)$ for all $n$, where $\chi_1$ is the principal character modulo $k$. And I wanna show that, if $k$ and $j$ are induced moduli for $\chi$ then so is their gcd $(k, j)$. Using the theorem presented above, I can roughly see that the statement is true. But I do not know how to start the proof.
Here is the proof of Theorem 8.17 given in Tom Apostol's book (page 170, fifth edition): Assume that $2.$ holds. Choose $n$ satisfying $(n,k)=1$ and $n\equiv 1 \bmod d$. Then $\chi_1(n)=\psi(n)=1$ so that $\chi(n)=1$ and hence $d$ is an induced modulus. Thus $2.$ implies $1.$ For the converse (which is much longer), see Apostol's text. One exhibits a character $\psi$ modulo $d$ for which $\chi(n)=\psi(n)\chi_1(n)$ holds for all $n$ using Dirichlet's theorem. More precisely, if $(n,d)>1$ we can just take $\psi(n)=0$. For $(n,d)=1$ we need to find an integer $m$ such that $m\equiv n \bmod d$ with $(m,k)=1$. This is where Dirichlet's theorem on infinitely many primes in arithmetic progressions is used.
Prove that a series of integrals converges to zero exponentially I have the following series, which I want to show converges to $0$ exponentially fast in $N$: $$S_N=\sum_{n = N}^{\infty}\frac{1}{n!}M^{n}\int_{0}^{1}e^{- Mv}v^{n}\left(1-v^{r}\right)^{n}\frac{dv}{\sqrt{v}}\,,$$ with any given $r\ge1$ an integer, and $M$ is a constrained parameter that may be chosen appropriately to prove our statement. So, we need to show that there exists a choice $M\left( N,r \right)$ for which $$S_N =\mathcal{O}\left( C^N\right) \,,\quad \left|C\right|<1 \,.$$ The constraint on $M$ is that I want the theorem to hold for sufficiently large $M$, i.e. that the proof will only require a minimal value of $M$. Specifically, if this can be shown for $M(N,r)\ge\left( \log r+ r \log 2\right) N$, then this is sufficient for me. Indeed, for $r=1$ this is relatively straight-forward: $$ \begin{align*} S_N & = \sum_{n=N}^{\infty}\frac{M^{n}}{n!}\int_{0}^{1}e^{-Mv}v^{n}\left(1-v\right)^{n}\frac{dv}{\sqrt{v}}\\ & <\sum_{n=N}^{\infty}\frac{M^{n}}{n!}\int_{0}^{1}e^{-Mv}v^{n}e^{-nv}\frac{dv}{\sqrt{v}}\\ & =\sqrt{\frac{1}{M}}\sum_{n=N}^{\infty}\left[\frac{M}{M+n}\right]^{n+\frac{1}{2}}\int_{0}^{M+n}e^{-u}u^{n-\frac{1}{2}}\frac{du}{n!}\\ & <\sqrt{\frac{1}{M}}\sum_{n=N}^{\infty}\left[\frac{M}{M+n}\right]^{n+\frac{1}{2}}\frac{\Gamma\left(n+\frac{1}{2}\right)}{n!}\\ & <\sqrt{\frac{1}{M}}\sum_{n=N}^{\infty}\left[\frac{M}{M+n}\right]^{n+\frac{1}{2}}\frac{1}{\sqrt{n}}\\ & <\sqrt{\frac{1}{M}}\frac{1}{\sqrt{N}}\sum_{n=N}^{\infty}\left[\frac{M}{M+N}\right]^{n+\frac{1}{2}}\\ & =\sqrt{\frac{1}{M}}\sqrt{\frac{M}{N\left(M+N\right)}}\frac{\left[\frac{M}{M+N}\right]^{N}}{1-\left[\frac{M}{M+N}\right]}\\ & <\sqrt{\frac{M+N}{N^{3}}}\left[\frac{M}{M+N}\right]^{N}\,. \end{align*} $$ so we have $C=\frac{M}{M+N}<1$ for any choice of $M\left(N\right)$, as required. However, I don't see a similar approach to show this claim for $r>1$. For $r=2$ I thought about again bounding $(1-v^2)^n<e^{-nv^2}$, so now the integral may be expressed as some form of Mill's ratio, but I couldn't find any inequalities for it which include the additional factor of $v^n$ in the integrand. The difficult point is to somehow bound the integrals but not too loosely, so that the sum of the bound over $n$ is convergent, and may be itself bounded by some behavior in $N$. It is worth mentioning that I checked this claim numerically, up to r = 4, and it seems to hold.
This can be shown by bounding the integrand: $$ \begin{align*} S_N & = \sum_{n=N}^{\infty}\frac{M^{n}}{n!}\int_{0}^{1}e^{-Mv}v^{n}\left(1-v^r\right)^{n}\frac{dv}{\sqrt{v}} \\ & \le \sum_{n=N}^{\infty}\frac{M^{n}}{n!}\int_{0}^{1}e^{-Mv-nv^r}v^{n-\frac{1}{2}}dv \end{align*} $$ Noting that $v^{r}$ is a convex function for $r\ge1$, it is bounded below by any line drawn tangent to it. Denoting by $v_{0}>0$ the arbitrary point where we draw the tangent, we can bound \begin{align*} v^{r} & \ge rv_{0}^{r-1}\left(v-v_{0}\right)+v_{0}^{r}=rv_{0}^{r-1}v-\left(r-1\right)v_{0}^{r}\,. \end{align*} Plugging this in, we have \begin{align*} S_N & <\sum_{n=N}^{\infty}\frac{1}{n!}M^{n}\int_{0}^{1}e^{-Mv-nrv_{0}^{r-1}v+n\left(r-1\right)v_{0}^{r}}v^{n-\frac{1}{2}}dv\\ & <\sum_{n=N}^{\infty}\frac{1}{n!}M^{n}e^{n\left(r-1\right)v_{0}^{r}}\int_{0}^{\infty}e^{-\left(M+nrv_{0}^{r-1}\right)v}v^{n-\frac{1}{2}}dv\\ & =\sum_{n=N}^{\infty}\frac{1}{n!}M^{n}e^{n\left(r-1\right)v_{0}^{r}}\frac{\Gamma\left(n+\frac{1}{2}\right)}{\left(M+nrv_{0}^{r-1}\right)^{n+\frac{1}{2}}}\\ & <\sqrt{\frac{1}{\left(M+Nrv_{0}^{r-1}\right)}}\sum_{n=N}^{\infty}\frac{1}{\sqrt{n}}\left(\frac{Me^{\left(r-1\right)v_{0}^{r}}}{M+nrv_{0}^{r-1}}\right)^{n}\\ & <\sqrt{\frac{1}{N\left(\alpha+rv_{0}^{r-1}\right)}}\sum_{n=N}^{\infty}\left(\frac{\alpha e^{\left(r-1\right)v_{0}^{r}}}{\alpha+rv_{0}^{r-1}}\right)^{n}\,, \end{align*} where we have denoted $M=\alpha N$. This is a geometric series which only converges if its quotient is smaller than one, namely if \begin{equation} \alpha\left(e^{\left(r-1\right)v_{0}^{r}}-1\right)-rv_{0}^{r-1}<0\,. \end{equation} This condition is trivially satisfied for $r=1$. For $r>1$, we note that for $v_{0}=0$ the left-hand side of this condition equals $0$. However, since $e^{\left(r-1\right)v_{0}^{r}}$ can be expanded in powers of $v_{0}^{r}$, the first non-vanishing derivative of the lhs is the $\left(r-1\right)^{\text{th}}$, giving $\left(-r!\right)<0$. Thus, this inequality is satisfied at least in a neighborhood of $0^{+}$. Indeed, for $v_{0}\ll1$, we can expand $$ \alpha\left(r-1\right)v_{0}^{r}-rv_{0}^{r-1}<0\,, $$ giving us a self-consistent solution as long as $v_{0}<\frac{r}{\left(r-1\right)\alpha}$. To summarize, this implies that for any value of $M$, there exists a choice for $v_0$ for which the quotient of the geometric series is smaller than unity, i.e. some $C<1$, so that the sum converges and behaves as $C^N$, which tends to zero exponentially.
Finding a strict Liapunov function I need to show that the equilibrium point $(0,0)$ is asymptotically stable using a Liapunov function. That means I shall find some strict Liapunov function. I am given the non-linear system: \begin{cases} x_1' = -x_1 -\frac{1}{3}x_1^3 - x_1^2\sin(x_2) \\ x_2' = -x_2 -\frac{1}{3}x_2^3 \end{cases} My attempt: $V(x_1, x_2) = ax_1^2 + bx_2^2$, $a, b > 0$. I would like to determine $a$ and $b$. Then, $V(0,0) = 0$ and $V(x_1, x_2) > 0$, for all $(x_1, x_2) \neq (0,0)$. In order to decide if $V(x_1, x_2)$ is a strict Liapunov function, I wish that $\left<\nabla V(x_1, x_2), (x_1', x_2')\right> < 0$. \begin{align} \left<\nabla V(x_1, x_2), (x_1', x_2')\right> &= \left<(2ax_1, 2bx_2), ( -x_1 -\frac{1}{3}x_1^3 - x_1^2\sin(x_2), -x_2 -\frac{1}{3}x_2^3)\right>\\ &= 2ax_1(-x_1 -\frac{1}{3}x_1^3 - x_1^2\sin(x_2)) + 2bx_2(-x_2 -\frac{1}{3}x_2^3) \\ &= -2a(x_1^2 + \frac{1}{3}x_1^4) -2b(x_2^2 + \frac{1}{3}x_2^4) -2ax_1^3\sin(x_2) \end{align} I do not know what to do with the term that involves $\sin(x_2)$...
As I can remember, we have to prove that $\left<\nabla V(x_1, x_2), (x_1', x_2')\right> < 0$ on some neighborhood of $(0,0)$, minus the origin; in fact here it holds on all of $\mathbb{R}^2\setminus\{(0,0)\}$. First, since $-1\leq\sin(x_2)\leq 1$ we have $$-2ax_1^3\sin(x_2)\le 2a|x_1|^3,$$ and therefore $$-2a\left(x_1^2 + \frac{1}{3}x_1^4\right)-2ax_1^3\sin(x_2)\leq 2a|x_1|^3-2a\left(x_1^2 + \frac{1}{3}x_1^4\right)=:(E).$$ Now let's study the sign of the expression $(E)$: \begin{align*} (E)&=2ax_1^2\left(|x_1|-1-\frac{1}{3}x_1^2\right)\\ &=-\frac{2a}{3}\,x_1^2\left(x_1^2-3|x_1|+3\right). \end{align*} The quadratic $u^2-3u+3$ (with $u=|x_1|$) has discriminant $\Delta=9-12=-3<0$, so it is positive for every $u$; hence $(E)<0$ whenever $x_1\neq 0$ (and $(E)=0$ when $x_1=0$). Now we add the remaining term $-2b(x_2^2+\frac{1}{3}x_2^4)$, which is negative whenever $x_2\neq 0$. Thus we conclude that $$\left<\nabla V(x_1, x_2), (x_1', x_2')\right><0\quad\forall(x_1,x_2)\in \mathbb{R}^2\setminus\{(0,0)\},$$ so $V$ is a strict Liapunov function for any choice of $a,b>0$, and the origin is asymptotically stable.
Show that an outcome in $S$ belongs to the event $\bigcup_{n=1}^\infty C_{n}$ if and only if it belongs to all the events $A_{1},A_{2},\dots$ except possibly finitely many Show that an outcome in $S$ belongs to the event $ \bigcup_{n=1}^\infty C_{n}$, where $C_{n}=\bigcap_{i=n}^\infty A_{i}$, if and only if it belongs to all the events $A_{1},A_{2},\dots$ except possibly a finite number of those events. P.S. This is my first post so I probably did something wrong in posting this.
Suppose $\omega\in \bigcup_{n=1}^\infty C_{n}$. Then there is an $N$ such that $\omega\in C_N$. Since $C_{N}=\bigcap_{i=N}^\infty A_{i}$, this means that $\omega\in A_i$ for all $i\geqslant N$. So if $\omega\notin A_i$, then $i<N$, i.e. there are only finitely many of the events $A_i$ with $\omega\notin A_i$. Conversely, if $\omega$ belongs to all the $A_i$ except finitely many, choose $N$ larger than every exceptional index; then $\omega\in A_i$ for all $i\geqslant N$, so $\omega\in C_N\subseteq\bigcup_{n=1}^\infty C_n$.
A power of a matrix I am trying to find $$\begin{bmatrix}t&1-t\\1-t&t\\\end{bmatrix}^n$$ in $\mathbb{Z}_n[t^{\pm 1}]/(t-1)^2$, where $n$ is a positive integer. My guess is that the result is the identity matrix but am not sure about a proof. Thanks for any help or hints!
If $$M = \pmatrix{t & 1-t\cr 1-t & t}$$ show by induction that $$ M^j = \pmatrix{j t - (j-1) & j - j t\cr j - jt & jt - (j-1)\cr} $$ and then take $j=n$: the diagonal entries become $nt-(n-1)=1+n(t-1)\equiv 1$ and the off-diagonal entries $n-nt=-n(t-1)\equiv 0$ over $\mathbb{Z}_n$, so $M^n$ is the identity matrix, confirming your guess.
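The induction formula is easy to spot-check numerically (plain-Python sketch with hand-rolled $2\times2$ multiplication; the test value of $t$ is arbitrary):

```python
def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

t = 0.37                      # arbitrary test value, not from the question
M = [[t, 1 - t], [1 - t, t]]
P = [[1, 0], [0, 1]]
for j in range(1, 8):
    P = matmul(P, M)          # P = M^j
    F = [[j*t - (j - 1), j - j*t], [j - j*t, j*t - (j - 1)]]
    err = max(abs(P[r][c] - F[r][c]) for r in range(2) for c in range(2))
    print(j, err)             # ~0 for every j
```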
If $p>0$ and $q>0$, and $p^2-q^2$ is a prime number, then $p-q$ is...? Please help. The answer is part d) none of the above. How do I proceed with such questions?
Note that $$p^2-q^2=(p-q)(p+q)$$ For positive integers $p$ and $q$, it follows from the definition of prime numbers that $p-q=1$: if $p-q \neq 1$, then $p-q$ is a divisor of $p^2-q^2$ that is not $1$, and it is not $p^2-q^2$ either (that would force $p+q=1$, which is impossible), contradicting primality.
How to prove: $2^\frac{3}{2}<\pi$ without writing the explicit values of $\sqrt{2}$ and $\pi$ How to prove: $2^\frac{3}{2}<\pi$ without writing the explicit values of $\sqrt{2}$ and $\pi$. I am trying to use calculus but don't know how to apply it to this problem. Any idea?
If we take the definition of $\pi$ to be based on the circumference of a circle (namely that $\frac{C}{d} = \pi$), then we can see a nice geometric proof of this fact. Consider a square with side length $1$. Now, draw a circle around this square such that all four corners of the square fall on the circle. This circle has a diameter of $\sqrt{2}$. The perimeter of the square involved is $4$. $\frac{4}{\sqrt{2}} = 2^{\frac{3}{2}}$ Now, by looking at this picture it is quite obvious that the circumference of the circle is larger than the perimeter of the square. If you were constructing these shapes this could be easily verified. But now we can say that: \begin{align} C &> P_{square} \\ C &> 4 \\ \frac{C}{d} &> \frac{4}{d} \ \ \ \ \textbf{Since $d > 0$} \\ \pi &> 2^{\frac{3}{2}} \end{align}
Does this simple proof-by-contradiction, also require contrapositive? Simple exercise 6.2 in Hammack's Book of Proof. "Use proof by contradiction to prove" "Suppose $n$ is an integer. If $n^2$ is odd, then $n$ is odd" So my approach was: Suppose instead, IF $n^2$ is odd THEN $n$ is even Alternatively, then you have the contrapositive, IF $n$ is not even ($n$ is odd), then $n^2$ is not odd ($n^2$ is even). $n = 2k+1$ where $k$ is an integer. (definition of odd) $n^2 = (2k+1)^2$ $n^2 = 4k^2 + 4k + 1$ $n^2 = 2(2k^2 + 2k) + 1$ $n^2 = 2q + 1$ where $q = 2k^2 + 2k$ therefore $n^2$ is odd by definition of odd. Therefore we have a contradiction. Contradictory contrapositive proposition said $n^2$ is not odd, but the derivation says $n^2$ is odd. Therefore the contradictory contrapositive is false, therefore the original proposition is true. Not sure if this was the efficient/correct way to prove this using Proof-By-Contradiction.
Your proof looks correct to me, but I would like to share with you my strategy for proof by contradiction. Consider the if/then statement $p\Rightarrow q$. In your case, $p$ represents "$n^{2}$ is odd" and $q$ represents "$n$ is odd". To achieve the proof by contradiction, we want to show that when $p$ is true, then it is impossible for $\neg q$, that is to say $n$ is even, to also be true. So we will assume that $p$ and $\neg q$ are both true (i.e., at the same time). Applying this to your particular problem, we have $n^{2}$ is odd and $n$ is even. By definition, $n=2k$ for some integer $k$. Then $$n^{2}=(2k)^{2}=4k^{2}=2\cdot(2k^{2}).$$ Since the integers are multiplicatively closed, we have that $n^{2}$ is $2$ times an integer. Hence $n^{2}$ is even, which contradicts the assumption that $n^{2}$ is odd. This tells us two things: * *The logical statement "$n^{2}$ is odd and $n$ is even" is a false statement. *The logical statement "$n$ is even implies $n^{2}$ is even" is a true statement. This is the contrapositive ($\neg q\Rightarrow\neg p$) of the original statement, which always has the same truth value as the original statement.
Differentiability in $(0,0)$ of $f(x,y)=\frac{x^4-y^3}{x^2+y^2}$ Let the function $f$ be defined as $$f(x,y)=\frac{x^4-y^3}{x^2+y^2}$$ If $(x,y)=(0,0)$ then $f$ is equal to zero. My problem is to prove that this function isn't differentiable at the point $(0,0)$. My solution: First idea is maybe to show that $f$ is not continuous at $(0,0)$. That will show that $f$ isn't differentiable at $(0,0)$. But it is easy to see that $f$ is continuous at $(0,0)$, so we need to check the partial derivatives at $(0,0)$. Partial derivative in $x$-direction is $$\lim_{h\to 0}\frac{f(h,0)-0}{h}=0$$ Partial derivative in $y$-direction is $$\lim_{h\to 0}\frac{f(0,h)-0}{h}=-1$$ We now know how our Jacobian matrix at the point $(0,0)$ looks. We now use the definition of derivatives: $$\lim_{(x,y)\rightarrow(0,0)}\frac{\frac{x^4-y^3}{x^2+y^2}+y}{\sqrt{(x^2+y^2)}}=\frac{x^4+x^2y}{(x^2+y^2)^{\frac{3}{2}}}$$ But this looks like having a limit in $(0,0)$ and it's equal to $0$?
A necessary condition for being differentiable at $(0,0)$ is that the directional derivative: $$ Df_{(0,0)}(h,k) = \lim_{t\rightarrow 0} \frac{1}{t} (f(th,tk)-f(0,0)) = \lim_{t\rightarrow 0} \frac{t^4 h^4 - t^3 k^3}{t^3(h^2+k^2)} = \frac{-k^3}{h^2+k^2}$$ should be a linear function of $h$ and $k$ (i.e. of the form $ah+bk$ for real constants $a$ and $b$), which is visibly not the case. In your own solution you may simply calculate $Df_{(0,0)} (1,1)$ and show that it is not the sum of $Df_{(0,0)} (1,0)$ and $Df_{(0,0)} (0,1)$, so it is not differentiable.
Why is the homogenization of a radical ideal radical? Let $I\subseteq k[X_1,\ldots,X_n]$ be a radical ideal and $I^*\subseteq k[X_1,\ldots,X_{n+1}]$ be its homogenization which by definition is generated by the set $\{F^*:F\in I\}$ where $F^*$ is the homogenization of the polynomial $F$ with respect to $X_{n+1}$ (e.g. $(X_1^3+X_1X_2+1)^*=X_1^3+X_1X_2X_3+X_3^3$). Why is $I^*$ a radical ideal? I know that it being a homogeneous ideal it's enough to show that if a power of a homogeneous polynomial belongs to the ideal then the polynomial itself belongs but even this turns out to be difficult to me.
Suppose $G$ is homogeneous with $G^N\in I^*$. Then you dehomogenize it and arrive at $(G_{*})^N \in I$, and use that $I$ is radical. So $G_{*}\in I$, and then $(G_{*})^* \in I^*$ gives $x_{n+1}^r\cdot (G_{*})^* \in I^*$; since $G = x_{n+1}^r\,(G_{*})^*$ for some $r\ge0$, therefore $G \in I^*$ (Prop. 5 of Algebraic Curves by Fulton may be helpful).
Calculate the probability of winning Suppose you are playing a game where you flip a coin to determine who plays first. You know that when you play first, you win the game 60% of the time and when you play the game second, you lose 52% of the time. A Find the probability that you win the game? Let $A = \{ \text{Play first}\}$ and $\overline{A} = \text{Play second}$, let $B = \{ \text{win} \}$ We want $P(B)$. I know that $P(B | \overline{A}) = 0.48$ and $P(B | A) = 0.6$ Actual problem: I get that $P(B) = P(B | A)P(A) + P(B | \overline{A}) P(\overline{A}) = 0.6P(A) + 0.48P(\overline{A})$ I'm not sure how to move ahead. Can someone give me a hint?
Hint: the probability of getting a head is $\frac 12$ and of getting a tail is $\frac 12$. Then use the law of total probability (conditioning on who plays first) to proceed further.
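Working out the hint explicitly (plain-Python sketch of the total-probability computation set up in the question):

```python
# P(B) = P(B|A)P(A) + P(B|A')P(A') with a fair coin deciding who plays first.
p_first, p_second = 0.5, 0.5
p_win = 0.6 * p_first + 0.48 * p_second
print(p_win)   # 0.54
```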
If $f$ is holomorphic except for $z_0$, then $\lim_{n\to\infty} \frac{a_n}{a_{n+1}}=z_0$ The question is from Stein & Shakarchi - Complex Analysis Chapter 2, Exercise 14. Suppose that $f$ is holomorphic in an open set $\Omega$ containing the closed unit disc, except for a pole at $z_0$ on the unit circle. Show that if $f$ is given by a power series expansion $$f(z)=\sum^\infty_{n=0}a_n z^n$$ in the unit disc $D_1(0)$, then $$\lim_{n\to\infty} \frac{a_n}{a_{n+1}}=z_0.$$ I solved this problem by using the pole formula $$f(z)=(z-z_0)^{-m}g_0(z)\implies f^{(n)}(z)=(z-z_0)^{-m-n}g_n(z)$$ where $g_0$ and $g_n$ are holomorphic and not zero at $z_0$, and $m$ is a positive integer. These are defined on $D_{1+\epsilon}(0)\subset\Omega$, which contains $z_0$, with $\epsilon$ sufficiently small. I've got the following: $$\lim_{n\to\infty} \frac{a_{n+1}}{a_{n}}=\lim_{n\to\infty} \dfrac{\dfrac{f^{(n+1)}(0)}{(n+1)!}}{\dfrac{f^{(n)}(0)}{(n)!}}=\lim_{n\to\infty}\dfrac{1}{n+1}(\dfrac{-m-n}{0-z_0}+H(0))=\dfrac{1}{z_0}.$$ where $H(z)$ is holomorphic in the disc. Actually, the pole formula I've used appears in Chapter 3 so I should not use it, but I think it is worth a try. Is there something wrong with this solution? Many thanks.
I see you are trying a different approach; that's good. However, your argument does not work, since you do not use the condition that $f$ is holomorphic in an open set $\Omega$ containing the closed unit disc except for a pole at $z_0$ on the unit circle. Since your argument is a local one, if $g_0(z)$ is defined on $D_1(0)\cup \{|z-z_0|<\delta \}$, holomorphic and nonzero at $z_0$, your argument goes through and concludes $\lim_{n\to \infty}\frac{a_{n+1}}{a_n}=\frac{1}{z_0}$. If your argument were correct, you could prove the following: Suppose that $f$ is holomorphic in an open set $\Omega$ containing the closed unit disc, except for two poles at $z_0$ and $z_1$ on the unit circle. If $f$ is given by a power series expansion $$f(z)=\sum^\infty_{n=0}a_n z^n$$ in the unit disc $D_1(0)$, then $$ \lim_{n\to\infty} \frac{a_n}{a_{n+1}}=z_0\quad \text{and}\quad \lim_{n\to\infty} \frac{a_n}{a_{n+1}}=z_1.$$ Proof. Consider $f(z)=(z-z_0)^{-m}g_0(z)$ on $D_1(0)\cup \{|z-z_0|<\delta \}$. Then $$ \lim_{n\to\infty} \frac{a_n}{a_{n+1}}=z_0.$$ Consider $f(z)=(z-z_1)^{-m}g_1(z)$ on $D_1(0)\cup \{|z-z_1|<\delta \}$. Then $$ \lim_{n\to\infty} \frac{a_n}{a_{n+1}}=z_1.$$ Since $z_0\neq z_1$, the limit cannot equal both, which is absurd. Do you see now that your argument is wrong?
Number of field homomorphisms extending a given one Given an extension of number fields $F\subseteq K$, I'm trying to prove that the characteristic polynomial of an element $\alpha\in K$ (which is defined to be the characteristic polynomial of the $F$-endomorphism of $K$ given by multiplication, $x\mapsto \alpha x$) is a power of its minimal polynomial. I know that if $F\subseteq K \subseteq E$ is the Galois closure of $K/F$, then the minimal polynomial of $\alpha$ is $$ f^{min}_\alpha (x)=\prod_{\sigma\in Gal(E/F)}(x-\sigma(\alpha))$$ while if $\text{Hom}_F(K,\mathbb{C})=\{\sigma_1, \dots, \sigma_n\}$, its characteristic polynomial is given by $$ f^{char}_\alpha (x)=\prod_{i=1}^n (x-\sigma_i(\alpha)).$$ The result is the following: $$ f^{char}_\alpha (x)=f^{min}_\alpha(x)^{[K:F(\alpha)]}$$ and the textbook proves it as follows. The multiplicity of the factor $(x-\sigma_i(\alpha))$ in the characteristic polynomial is the number of $\sigma_j$ such that $\sigma_j(\alpha)=\sigma_i(\alpha)$, which is the number of $F$-homomorphisms $K\to E$ extending $\sigma_i\big|_{F(\alpha)}$. How do I prove that this number is $[K:F(\alpha)]$? If $K/F$ were Galois I would use the fundamental theorem, but what about this general case?
Let $\beta$ be a primitive element for $K/F(\alpha)$, and let $p_\beta(x)\in F(\alpha)[x]$ be its minimal polynomial, with $\deg p_{\beta}(x)=k$. Then $$K\cong F(\alpha)[x]/(p_\beta(x)).$$ Any $F(\alpha)$-embedding of $K=F(\alpha)(\beta)$ into the Galois closure must send $\beta$ to a root of $p_\beta(x)$, and each choice of root gives at most one embedding, so the number of embeddings is bounded by $[K:F(\alpha)]$, i.e. there are at most that many. However, each root gives a different embedding: number the roots $\beta=\beta_1,\ldots,\beta_k$ and write $$\phi_i: F(\alpha)(\beta_i)=K_i\to F(\alpha)[x]/(p_{\beta}(x))$$ for the isomorphism mapping $\beta_i$ to the equivalence class $[x]\in F(\alpha)[x]/(p_{\beta}(x))$ as usual. For $i\ne j$, the composite $\phi_j^{-1}\circ \phi_i$ is an isomorphism between $K_i$ and $K_j$ as subfields of the Galois closure sending $\beta_i\mapsto\beta_j$, and it is not the identity since $\beta_i\neq\beta_j$; hence these embeddings are pairwise distinct, and the number of embeddings is at least $[K:F(\alpha)]$, proving equality. Note that the first inequality holds in general; the second relies on the fact that an irreducible polynomial in characteristic $0$ is separable, i.e. all its roots are distinct. This is critical for the equality, as in the general case one only gets that the number of embeddings is at most $[K:F(\alpha)]$.
Prove that a convex and a concave function can intersect in at most 2 points Let $a,b \in \mathbb{R}$. Let $f : [a,b] \rightarrow \mathbb{R}$ be a continuous function on $[a,b]$, differentiable and strictly convex on $(a, b)$, and let $g : [a,b] \rightarrow \mathbb{R}$ be a continuous function on $[a,b]$, differentiable and strictly concave on $(a, b)$. How can I prove that the equation $f(x)-g(x)=0$ has at most two roots, i.e. that the graphs of $f$ and $g$ intersect in at most two points?
$h(x)=f(x)-g(x)$ is a strictly convex function on $[a,b]$, so it suffices to prove that a strictly convex function has at most $2$ roots in $[a,b]$. If $h(x)$ had $3$ roots, say $p<q<r$, then by Rolle's theorem there would be $c \in (p,q)$ with $h'(c)=0$ and $d \in (q,r)$ with $h'(d)=0$. But since $h$ is strictly convex and differentiable, $h'$ is strictly increasing on $(a,b)$, so it cannot vanish at two distinct points $c<d$. This contradiction shows that $h$ has at most two roots.
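A concrete illustration (my own example, not from the problem): on $[-2,2]$ take the convex $f(x)=x^2$ and the concave $g(x)=1-x^2$. Then $h(x)=2x^2-1$ is strictly convex and has exactly the two roots $x=\pm\frac1{\sqrt2}$, so the bound of two intersection points is attained.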
Explain why $(a-b)^2 = a^2 - b^2$ if and only if $b = 0$ or $b = a$. This is a question out of "Precalculus: A Prelude to Calculus", second edition, by Sheldon Axler, on page 19, problem number 54. The problem is: Explain why $(a-b)^2 = a^2 - b^2$ if and only if $b = 0$ or $b = a$. So I started by expanding $(a-b)^2$ to $(a-b)^2 = (a-b)(a-b) = a^2 -2ab +b^2$. To prove that $(a-b)^2 = a^2 - b^2$ if $b = 0$, I substituted $b$ with zero both in the expanded expression and in the original, and I got $(a-b)^2 = (a-0)^2 = (a-0)(a-0) = a^2 - a(0)-a(0)+0^2 = a^2$, and the same with $a^2 -2ab +b^2$, which resulted in $a^2 - 2a(0) + 0^2 = 2a$, or if I do not substitute the $b^2$ I end up with $a^2 + b^2$. That's what I got when I tried to prove the expression true for $b=0$. As for the part where $b=a$: $(a-b)^2 = (a-b)(a-b) = a^2-2ab+b^2$; if $a$ and $b$ are equal, let $a=b=x$, and I substitute $a^2-2ab+b^2 = x^2-2(x)(x) + x^2 = x^2-2x^2+x^2 = (1-2+1)x^2=0$. I do not see where any of this can be reduced to $a^2-b^2$ unless that equals zero... I do see where it holds, but I do not see how a written-out solution would look. After typing this it seems a lot clearer, but I just can't see how to phrase a "solution". P.S.: This is my first time asking a question here, so whatever I did wrong I am sorry in advance, and I appreciate the feedback.
which resulted in $a^2 - 2a(0) + 0^2 = 2a$ $a^2 - 2a(0) + 0^2 = a^2$, not $2a$. or if I do not substitute the $b^2$ I end up with $a^2 + b^2$. Why would you not substitute the $b^2$? If you're substituting $b=0$ then you need to do it in all occurrences of $b$. This includes $b^2$. When you do this you'll see that the equation $(a-b)^2 = a^2 - b^2$ reduces to $a^2 = a^2$, which is certainly a true statement for all values of $a$. Also, you're overcomplicating the $a=b$ case. No need to introduce a new variable $x$. If $a = b$, then you can simply substitute either one in for the other. Let's replace $a$ with $b$. Then we have $$ (a-b)^2 = (b-b)^2 = 0^2 = 0$$ and on the other hand we have $$ a^2 - b^2 = b^2 - b^2 = 0 $$ This shows that $(a-b)^2 = a^2 - b^2$ if $a = b$. I should also point out that the work you've done (or at least the work you've shown us) only proves one direction. You're asked to prove the following: $$ (a-b)^2 = a^2 - b^2 \text{ if and only if } b = 0 \text{ or } a = b $$ But what you've done so far is: $$ (a-b)^2 = a^2 - b^2 \text{ if } b = 0 \text{ or } a = b $$ In other words, you need to handle the "only if" part. Do this by assuming $(a-b)^2 = a^2 - b^2$ and showing that the only two possibilities are $b=0$ or $a=b$. At least one of the other current answers offers guidance on this matter.
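For instance, one way to finish the "only if" direction (a sketch of the algebra): if $(a-b)^2 = a^2 - b^2$, expanding the left side gives $$a^2-2ab+b^2=a^2-b^2\;\Longrightarrow\;2b^2-2ab=0\;\Longrightarrow\;2b(b-a)=0,$$ so $b=0$ or $b=a$.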
Calculate $\lim\limits_{x\to4} \frac {\sqrt {1+2x}-3} {\sqrt x -2}$ After learning that there are ways to cancel a $0$ in the denominator of a fraction when taking limits, I looked back at a problem I had given up on: $\lim\limits_{x\to4} \frac {\sqrt {1+2x}-3} {\sqrt x -2}$. I tried to find a way to get rid of the $0$ here as well. Am I just not seeing the solution, or is there no way to do it?
With $\sqrt x=t+2$ (so that $x\to4$ corresponds to $t\to0$), the limit becomes $$\lim_{t\to0}\frac{\sqrt{9+8t+2t^2}-3}t.$$ Then multiplying and dividing by the conjugate, $$\lim_{t\to0}\frac{8t+2t^2}t\cdot\frac1{\sqrt{9+8t+2t^2}+3}=8\cdot\frac1{3+3}=\frac43.$$
Difficult exponential equation $6^x-3^x-2^x-1=0$ $$6^x-3^x-2^x-1=0$$ I don't have any idea how to solve this equation. I don't know how to show that the function $f(x)=6^x-3^x-2^x-1$ is strictly increasing (if that is even the case); the derivative doesn't show me anything. I also tried to use the Lagrange theorem (as in this equation, for example: $2^x+9^x=5^x+6^x$), but it doesn't apply here! I need a solution without calculus. Please help! Edit: In my country, the theorem that states roughly that $f'(c)=\dfrac{f(b)-f(a)}{b-a}$ for some $c\in(a,b)$, where $f:[a,b]\to\mathbb{R}$, is called the Lagrange theorem.
I quite liked your question, so here is something that might be of assistance next time you face a similar one: you can employ mathematical software, e.g. Maple. You can rewrite the equation to look like this $$ f(x) = g(x) $$ where $f(x)= 6^x - 3^x -2^x$ and $g(x)=1$ is a constant function, so it's an intersection problem. (The plot of the two graphs is omitted here.) Clearly, they intersect at $x=1$, or you can use the following code to be $100\%$ sure (a minimal sketch; the first two lines define $f$ and $g$ in Maple, since fsolve needs them): f := x -> 6^x - 3^x - 2^x; g := x -> 1; fsolve({f(x) = g(x)}, {x}, x = -1 .. 2);
what does it mean for a vector to be in a plane? 4th edition, Linear Algebra and Its Applications, Gilbert Strang, exercise 2.1, question 17: Let $P$ be the plane in $\mathbb{R}^3$ with equation $x+y-2z = 4$. The origin $(0,0,0)$ is not in $P$! Find two vectors in $P$ and check that their sum is not in $P$. Answer at the back: $(4,0,0)$ is on the plane, $(0,4,0)$ is on the plane, but their sum $(4,4,0)$ is not on the plane. Questions: What does it mean for a vector to be in a plane? Does the end point of the vector being on the plane mean that the vector is in the plane? This is a central theme in the second chapter, where Strang presents the necessity of a subspace passing through the origin as a consequence of closure under addition and closure under scalar multiplication. To quote: "the distinction between a subset and a subspace is made clear by example. In each case can you add vectors and multiply by scalars, without leaving the space?" and he goes on to demonstrate with a few examples (Can you ever add two vectors and leave the space, or perform scalar multiplication to the same result?)
Take the $x,y$ and $z$ coordinates of the vector, insert them for $x,y$ and $z$ in the equation for the plane, and see what you end up with. In this case we get $4+0-2\cdot0=4$. Is that equality correct? If so, the vector is on the plane. If not, then the vector is not in the plane. Do the same thing for $(0,4,0)$ and $(4,4,0)$ and check whether those two are on the plane too.
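Carrying out the two remaining checks from the book's answer: for $(0,4,0)$ we get $0+4-2\cdot0=4$, so it lies on $P$; for the sum $(4,4,0)$ we get $4+4-2\cdot0=8\neq4$, so it does not. This is exactly the failure of closure under addition that Strang is pointing at: $P$ does not pass through the origin, so it is not a subspace.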
Why is $\{x\in M: F(x)=G(x), DF_x=DG_x\}$ closed? Let $F$ and $G$ be smooth maps between smooth manifolds $M$ and $N$. Denote their differentials at $x\in M$ by $DF_x$ and $DG_x$. Why is the set $\{x\in M:F(x)=G(x), DF_x=DG_x\}$ closed in $M$? I know of the result that if $f$ and $g$ are continuous maps between topological spaces $X$ and $Y$ with $Y$ being Hausdorff then $\{x\in X:f(x)=g(x)\}$ is closed in $X$. Unfortunately it doesn't seem like I can directly apply this result since I have $DF_x$ and $DG_x$ which are now maps between tangent spaces.
Take a sequence $x_n\to x$ in $M$ and note that if $$ F(x_n)=G(x_n)\quad\text{and}\quad DF_{x_n}=DG_{x_n} $$ for all $n$, then $$ F(x)=G(x)\quad\text{and}\quad DF_{x}=DG_{x}. $$ The first equality follows from the continuity of $F$ and $G$. For the second, work in smooth charts around $x$ and around $F(x)=G(x)$: in coordinates, the entries of the Jacobian matrices of $F$ and $G$ are continuous functions of the base point, so equality at every $x_n$ passes to the limit $x$. Since manifolds are metrizable, being sequentially closed implies being closed.
Prove that $\lim_{n\to\infty}\frac{6n^4+n^3+3}{2n^4-n+1}=3$ How do I prove, using the definition of the limit of a sequence, that $$\lim_{n\to\infty}\frac{6n^4+n^3+3}{2n^4-n+1}=3\,?$$ Subtracting $3$ and taking the absolute value I get $$\left|\frac{6n^4+n^3+3}{2n^4-n+1}-3\right|=\frac{n^3+3n}{2n^4-n+1}<\frac{n^3+3n}{2n^4-n},$$ but it's hard to get any further...
Let $\epsilon>0$. Choose $N\in\mathbb{N}$ such that $\frac{1}{N}<\frac{\epsilon}{4}$. If $n\geq N$, then we get $$\begin{align}\left|\frac{6n^4+n^3+3}{2n^4-n+1}-3 \right|& =\left|\frac{n^3+3n}{2n^4-n+1}\right|\\&=\frac{n^3+3n}{2n^4-n+1}\\ &<\frac{n^3+3n}{2n^4-n}\\ &\leq\frac{n^3+3n}{2n^4-n^4}\\ &\leq\frac{n^3+3n^3}{n^4}\\ &=\frac{4}{n}\leq\frac{4}{N}<\epsilon. \end{align}$$ Hope it helps.
How to derive $\bot \vdash P$ (EFQ)? I want to prove this statement (EFQ) using the natural deduction rules presented in the book Mathematical Logic (Chiswell & Hodges, 2007). It does not explicitly state EFQ as one of the fundamental rules of inference, so I am trying to derive it somehow. The book provides the usual rules for $\wedge, \vee, \to$ introduction and elimination, which seem to agree with every other text that I have seen, but each text seems to differ on the rest of the rules. C&H uses the following:

($\neg$E): $P, \neg P \vdash \bot$
($\neg$I): If $P \vdash \bot$, then $\vdash \neg P$
(RAA): If $\neg P \vdash \bot$, then $\vdash P$

Is the following proof sufficient?

1. $\bot$ (initial premise)
2. Assume $\neg P$
3. Therefore, $P$ (RAA using 1 and 2, discharging the assumption on line 2)

I'm a little sketchy as to whether the use of RAA in line 3 is proper, because the assumption is introduced after the $\bot$. Do you need to derive $\bot$ from $\neg P$ in order to use RAA? If this proof is not legal, then how would I prove it?
I looked at the $RAA$ rule defined by the book. It says that if you have a derivation $D$ of $\bot$, i.e. if you have $$\begin{array}{c} D \\ \hline \bot \end{array}$$ then you can obtain a derivation of any statement $\phi$ as follows: $$\begin{array}{c} D \\ \hline \bot \\ \hline \phi \end{array}\;(RAA)$$ And it says: "Its assumptions are those of $D$, except possibly $\neg \phi$" (by which they mean that if $\neg \phi$ is among the undischarged assumptions of $D$, then you may remove $\neg \phi$). OK, since you have $\bot$ as a premise, your 'derivation' of $\bot$ is just $$\bot$$ So, using the $RAA$ rule, this means we can now have the following derivation: $$\begin{array}{c} \bot \\ \hline P \end{array}\;(RAA)$$ And that's it! So, no assumption of $\neg P$ is necessary: you can immediately infer $P$ from $\bot$ as a special case of the $RAA$ rule.
If we assume $\sup A \leq \sup B$, why is there not necessarily an element of $B$ that is an upper bound for $A$? I know that it is the case when $\sup A < \sup B$: then there is some $b\in B$ with $\sup A < b$, and such a $b$ is an upper bound for $A$. Would the failure be because set $A$ could be the same as set $B$, or at least have the same supremum?
What if $A=B=(0,1)$? Then $\sup A=\sup B=1$, but no element $b\in B$ is an upper bound for $A$: for any $b\in(0,1)$ there is some $a\in A$ with $b<a<1$. I think your intuition is correct. If $\sup A\leq\sup B$ but strict inequality doesn't hold, then equality certainly holds, and that is exactly the case where the claim can fail.