Is it true that there will always be two among the selected integers so that one of them is equal to twice the other? Let $n \geq 2$. Select $n+1$ different integers from the set $\{1,2,\dots,2n\}$. Is it true that there will always be two among the selected integers so that one of them is equal to twice the other? Is it true that there will always be two among the selected integers so that one is a multiple of the other? Attempt: The first statement is false. To see this, consider the case $n=2$. Then we select $3$ different integers from $\{1,2,3,4\}$; choosing $1,3,4$ shows it does not work. Can someone please help me with the second part? Thank you.
Every number can be written uniquely in the form $2^ak$ with $k$ odd. There are exactly $n$ different possible values of $k$ in $\{1,2,\dots,2n\}$ (the $n$ odd numbers below $2n$), so by the pigeonhole principle there exist an odd $k$ and exponents $a\neq b$ such that $2^ak$ and $2^bk$ are both in the subset of selected integers; the one with the smaller exponent then divides the other.
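As a quick sanity check of this pigeonhole argument (not a proof), one can brute-force all $(n+1)$-element subsets for small $n$; the helper name below is mine, chosen for illustration:

```python
from itertools import combinations

def has_divisibility_pair(subset):
    """True if some element of the subset divides a different, larger element."""
    s = sorted(subset)
    return any(s[j] % s[i] == 0
               for i in range(len(s)) for j in range(i + 1, len(s)))

# Every (n+1)-element subset of {1, ..., 2n} contains such a pair.
for n in range(2, 7):
    assert all(has_divisibility_pair(c)
               for c in combinations(range(1, 2 * n + 1), n + 1))
```

Note that an $n$-element subset need not contain one (e.g. $\{n+1,\dots,2n\}$), so $n+1$ is sharp.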
Finding $\mathop{\sum\sum}_{0 \leq i < j\leq n}(i+j)\binom{n}{i}\binom{n}{j}$ Finding $\displaystyle \mathop{\sum\sum}_{0 \leq i < j \leq n}(i+j)\binom{n}{i}\binom{n}{j}$. Expanding the sum: $\displaystyle (0+1)\binom{n}{0}\binom{n}{1}+(0+2)\binom{n}{0}\binom{n}{2}+\cdots +(0+n)\binom{n}{0}\binom{n}{n}+(1+2)\binom{n}{1}\binom{n}{2}+(1+3)\binom{n}{1}\binom{n}{3}+\cdots+(1+n)\binom{n}{1}\binom{n}{n}+\cdots +(n-1+n)\binom{n}{n-1}\binom{n}{n}$. I can't get any further; please help.
You may use a diagonal argument. Observe that $$\sum_{0\leq i<j\leq n}(i+j)\binom{n}{i}\binom{n}{j}=\sum_{0\leq j<i\leq n}(i+j)\binom{n}{i}\binom{n}{j},$$ therefore $$\begin{aligned}\sum_{0\leq i<j\leq n}(i+j)\binom{n}{i}\binom{n}{j}&=\frac{1}{2}\sum_{0\leq i,j\leq n,i\neq j}(i+j)\binom{n}{i}\binom{n}{j}\\ &=\frac{1}{2}\sum_{0\leq i,j\leq n}(i+j)\binom{n}{i}\binom{n}{j}-\sum_{i=0}^n i\binom{n}{i}^2.\end{aligned}$$ We have $$\begin{aligned}\sum_{0\leq i,j\leq n}(i+j)\binom{n}{i}\binom{n}{j}&=\sum_{0\leq i,j\leq n}i\binom{n}{i}\binom{n}{j}+\sum_{0\leq i,j\leq n}j\binom{n}{i}\binom{n}{j}\\ &=2\sum_{i=0}^n\sum_{j=0}^ni\binom{n}{i}\binom{n}{j}\\ &=2\sum_{i=0}^ni\binom{n}{i}\sum_{j=0}^n\binom{n}{j}\\ &=2\sum_{i=0}^ni\binom{n}{i}(2^n)\\ &=2(n2^{n-1})(2^n)\\ &=n2^{2n}. \end{aligned}$$ On the other hand $$\sum_{i=0}^n i\binom{n}{i}^2=n\binom{2n-1}{n-1}.$$ For the proof of this identity you can see here. Therefore $$\sum_{0\leq i<j\leq n}(i+j)\binom{n}{i}\binom{n}{j}=n2^{2n-1}-n\binom{2n-1}{n-1}.$$
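The closed form is easy to verify for small $n$ with exact integer arithmetic; a minimal check using Python's `math.comb`:

```python
from math import comb

def lhs(n):
    """Direct double sum over 0 <= i < j <= n."""
    return sum((i + j) * comb(n, i) * comb(n, j)
               for i in range(n + 1) for j in range(i + 1, n + 1))

def rhs(n):
    """Closed form n*2^(2n-1) - n*C(2n-1, n-1) derived above."""
    return n * 2 ** (2 * n - 1) - n * comb(2 * n - 1, n - 1)

assert all(lhs(n) == rhs(n) for n in range(1, 15))
```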
How do I solve $(x+1)^5 +36x+36 = 13(x+1)^3$? I tried $$(x+1)^5 + 36 x + 36 = 13 (x +1)^3\\ (x+1)^5 + 36(x+1) = 13 (x +1)^3\\ (x+1)^4 +36 = 13 (x+1)^2 $$ But I don't understand how to solve it further. Can somebody please show the steps? Thanks!
Hint: $$(x+1)^5-13(x+1)^3+36(x+1)=0$$ $$\left[(x+1)^4-13(x+1)^2+36\right](x+1)=0$$ $$\left((x+1)^2-9\right)\left((x+1)^2-4\right)(x+1)=0$$
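To double-check the hint, the five roots read off from the factorization ($x+1\in\{\pm3,\pm2,0\}$) can be substituted back into the original equation; integer arithmetic makes the check exact:

```python
def P(x):
    """Original equation brought to one side: (x+1)^5 + 36x + 36 - 13(x+1)^3."""
    return (x + 1) ** 5 + 36 * x + 36 - 13 * (x + 1) ** 3

roots = [-4, -3, -1, 1, 2]   # from (x+1) in {-3, -2, 0, 2, 3}
assert all(P(r) == 0 for r in roots)
```

Note the root $x=-1$, which is lost if one divides both sides by $x+1$ as in the question.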
Prove that $\frac {2r+5}{r+2}$ is always a better approximation of $\sqrt {5}$ than $r$. Prove that $\frac {2r+5}{r+2}$ is always a better approximation of $\sqrt {5}$ than $r$. SOURCE: Inequalities (Page Number 4; Question Number 207) I tried a lot of approaches, but without success. I rewrote $\frac {2r+5}{r+2}$ as $2 + \frac {1}{r+2}$. $\sqrt {5} \approx 2.2360679775$. Equating $\frac {1}{r+2}$ and $0.2360679774$, I get $r=2.23606797929$. So $\frac {2r+5}{r+2}$ is still a better approximation than $r$. How to proceed? Any hints/ideas/pointers?
Observe if $r<\sqrt{5}$, then there exists $\epsilon>0$ such that \begin{align} r+\epsilon<\sqrt{5} \ \ \Leftrightarrow& \ \ (r+\epsilon)^2 <5\\ \Leftrightarrow& \ \ r^2-5+2\epsilon r+ \epsilon^2=P(\epsilon)<0 \end{align} which is true if we sketch $P(\epsilon)$ as a function of $\epsilon$. In particular, we see that the positive root of $P(\epsilon)$ is \begin{align} \epsilon = \frac{-2r+\sqrt{20}}{2} = -r+\sqrt{5}= \frac{-r^2+5}{r+\sqrt{5}}>\frac{-r^2+5}{r+2}=\epsilon_0>0 \end{align} since $\sqrt{5}>2$. Hence \begin{align} r<r+\epsilon_0 = r+\frac{-r^2+5}{r+2} = \frac{r^2+2r-r^2+5}{r+2} = \frac{2r+5}{r+2}. \end{align} Remark: The idea is to find $\epsilon_0>0$ rational such that $(r+\epsilon_0)^2<5$.
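Numerically, the improvement is easy to see: a little algebra gives the identity $\frac{2r+5}{r+2}-\sqrt5=\frac{(2-\sqrt5)(r-\sqrt5)}{r+2}$, so the error shrinks by a factor of $\frac{\sqrt5-2}{r+2}<1$ for $r>0$. A quick random check (a sketch, not a proof; the function name is mine):

```python
import random
from math import sqrt, isclose

def step(r):
    """One iteration r -> (2r + 5) / (r + 2)."""
    return (2 * r + 5) / (r + 2)

random.seed(0)
for _ in range(1000):
    r = random.uniform(0.01, 10.0)
    # error shrinks by the factor (sqrt(5) - 2) / (r + 2) < 1
    assert abs(step(r) - sqrt(5)) < abs(r - sqrt(5)) or isclose(r, sqrt(5))
```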
Is there any relation between the number of distinct embeddings of a number field into $\mathbb C$ and the number of $\mathbb Q$-automorphisms of it? Suppose that $K$ is a finite extension of $\mathbb Q$, say of degree $n$. By the primitive element theorem, $K=\mathbb Q(\alpha)$. Then $\alpha$ has $n$ conjugates and we correspondingly get $n$ embeddings of $K$ into $\mathbb C$. But I believe that these embeddings need not all have distinct images in $\mathbb C$ (as an example, $\mathbb Q(\sqrt 2)$ and $\mathbb Q(-\sqrt 2)$ are really the same field). Is there any relation between the number of embeddings with distinct images and the order of the group $\operatorname{Aut}(K/\mathbb Q)$? On working a few examples, it seems that if $|\operatorname{Aut}(K/\mathbb Q)|=a$ and there are $b$ embeddings with distinct images in $\mathbb C$, then $ab=n$, the degree of the extension. Is this true? If so, how can I prove it? I don't seem to be having much success. I'd appreciate some help.
Yes, of course there is; this is one of the main results of field extensions and Galois theory. Theorem: If $\;K/F\;$ is a field extension of degree $\;n\;$ and $\;S,\,F\subset S\subset K\;$, is the separable closure of $F$ in this extension, then there are $\;[S:F]\;$ different embeddings $\;K\to\overline{F}\;$, where $\;\overline F\;$ is any algebraically closed field containing $\;F\;$ (we can take the algebraic closure of $\;F\;$). If we assume the extension $\;K/F\;$ is finite and separable, then the number of embeddings equals $\;n=[K:F]\;$, and every such embedding is an automorphism of $\;K/F\;$ iff $\;K/F\;$ is normal, which would mean the extension $\;K/F\;$ is Galois. Here "is an automorphism" means $\;\sigma K=K\;$ for any embedding $\;\sigma:K\to\overline F\;$, and $\;\sigma f=f\;,\;\;\forall\,f\in F\;$.
Reformulate linear program as semidefinite program We consider the linear program $$\min_{x \in R^n} \{c^Tx \mid a_1^Tx \le b_1, a_2^Tx \le b_2\}$$ where $c, a_1, a_2 \in \mathbb R^n$ and $b_1, b_2 \in \mathbb R$ are given. Now we need to reformulate this LP as an SDP. Can someone help with this task? Thank you!
Use the following linear matrix inequality (LMI) instead of the two linear inequalities $$\begin{bmatrix} b_1 - \mathrm a_1^{\top} \mathrm x & 0\\ 0 & b_2 - \mathrm a_2^{\top} \mathrm x\end{bmatrix} \succeq \mathrm O_2$$ Take a look at Sylvester's criterion for positive semidefiniteness.
An exercise in number theory and divisibility The following was given as an exercise to me and I'm stuck. If $a$ and $b$ are integers with $b > 0$, then show that there are unique integers $c$ and $d$ so that $a = cb + d$ and $-{b \over 2}<d \le {b\over2}$. If $b$ is even, then by setting $S= \{ a-kb + {b\over2}:k\in \Bbb Z, a-kb + {b\over2} < 0\}$ it's not hard to show it by using the well-ordering principle. But if $b$ is odd then $S$ is a set of rationals and I can't use the well-ordering principle. Can you help me? I'm new to number theory, sorry if this is trivial or already answered.
Apply the division algorithm first: you get unique $c'$ and $d'$ such that $a=c'b+d'$ with $0\leq d'<b$. If $d'\le b/2$, then we can take $d = d'$ and $c = c'$. If not, then $b>d'>b/2$, so $-b/2<d'-b<0$ and $a=(c'+1)b+(d'-b)$; take $c =c'+1$ and $d =d'-b$. Uniqueness follows from the uniqueness of $c'$ and $d'$.
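The construction in this answer translates directly into code; a small sketch of the centered division (the function name is mine), with the condition $-b/2<d\le b/2$ written as $-b<2d\le b$ to stay in integers:

```python
def centered_divmod(a, b):
    """Return (c, d) with a == c*b + d and -b/2 < d <= b/2, for b > 0."""
    c, d = divmod(a, b)          # ordinary division: 0 <= d < b
    if 2 * d > b:                # shift the remainder into (-b/2, b/2]
        c, d = c + 1, d - b
    return c, d

for a in range(-50, 50):
    for b in range(1, 12):
        c, d = centered_divmod(a, b)
        assert a == c * b + d and -b < 2 * d <= b
```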
$G$ is a simple undirected, regular, and planar graph with 9 vertices. Prove or disprove that $\bar G$ must have a Hamiltonian cycle. Original question: $G$ is a simple undirected planar graph with 9 vertices. Suppose that every vertex in G has the same degree (regular). Prove or disprove that the complementary graph $\bar G$ must have a Hamiltonian cycle. This is part of a past paper for a discrete math course, but the answers were not provided. I have tried to do it for a while now, but I don't even have any idea how to start this proof. Could someone please point me in the right direction, or even better, show me how to do this?
A planar graph on $n$ vertices has at most $3n - 6$ edges if $n \ge 3$ (this follows from Euler's formula). Since $G$ is a regular graph on $9$ vertices, we have $\deg G \le \lfloor \frac{2\cdot 21}{9}\rfloor = 4$. Then $\deg \overline{G} \ge 4$. If $\deg \overline{G} \ge 5$, then $\overline{G}$ is Hamiltonian by Dirac's theorem. The remaining case is $\deg \overline{G} = 4$. From here my solution becomes rather involved, even though OEIS says there are at most 16 possible graphs $\overline{G}$ left to consider. So I will just point out that all of them are Hamiltonian.
Prove that if $p$ and $8p-1$ are prime numbers then $8p+1$ is composite Prove that if $p$ and $8p-1$ are prime numbers then $8p+1$ is composite. I don't know where to start... I would appreciate some hints. Thank you!
If $p=3$, then $8p+1=25=5^2$ is composite. Otherwise $p$ is a prime not divisible by $3$, so $8p$ is not divisible by $3$; and $8p-1$, being a prime greater than $3$, is not divisible by $3$ either. But among the three consecutive integers $8p-1$, $8p$, $8p+1$, one must be divisible by $3$. Hence $3$ divides $8p+1$, and since $8p+1>3$, it is composite.
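A quick empirical check of the statement (naive trial division, just for illustration):

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

checked = 0
for p in range(2, 2000):
    if is_prime(p) and is_prime(8 * p - 1):
        assert not is_prime(8 * p + 1)   # 8p+1 > 1 and not prime => composite
        checked += 1
assert checked > 0   # the hypothesis is satisfied for at least one p (e.g. p = 3)
```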
What real numbers do algebraic numbers cover? Hardy and Wright mention (though don't give a proof) that any finite combination of real quadratic surds is an algebraic number. For example $\sqrt{11+2\sqrt{7}}$. Are all finite combinations of cube roots, fourth roots, ..., $n$th roots also algebraic? Such as $\sqrt[3]{2+3\sqrt[7]{5+3\sqrt{6}}}+\sqrt[9]{2}$.
The question in the title itself: What real numbers do algebraic numbers cover? has a rather amusing answer: Practically none. There are only countably many algebraic numbers, since each one is a root of some polynomial over $\mathbb{Q}$, and there are countably many such polynomials each of which has finitely many roots in $\mathbb{C}$. But there are uncountably many real numbers.
Intermediate value theorem Wikipedia proof I have stumbled upon the following proof on Wikipedia. I wonder if it is right. First, I think that $S$ must be an open interval, right? That is not explicit. Second, what troubles me is the definition of $a^*$: I think the interval for $a^*$ should be open at $c$, so that $u-\epsilon$ would be higher than $f(a^*) -\epsilon$, since $u$ never reaches $f(b)$.
The set $S$ is not necessarily an interval, but it is open, which is what is needed for the proof; this follows from the continuity of $f$. It is true that $a^*$ can never be equal to $c$ (because of $S$ being open). But the interval doesn't play any role. All that you need is that $c-\delta<a^*<c$.
Two inequalities involving the rearrangement inequality Well, there are two more inequalities I'm struggling to prove using the Rearrangement Inequality (for $a,b,c>0$): $$ a^4b^2+a^4c^2+b^4a^2+b^4c^2+c^4a^2+c^4b^2\ge 6a^2b^2c^2 $$ and $$a^2b+ab^2+b^2c+bc^2+ac^2+a^2c\ge 6abc $$ They seem somewhat similar, so I hope there's an exploitable link between them. They fall easily under Muirhead, yet I cannot figure out how to prove them using the Rearrangement Inequality. Any hints greatly appreciated.
$(a,b,c)$ and $\left(\frac{1}{a},\frac{1}{b},\frac{1}{c}\right)$ are opposite ordered. Thus, by Rearrangement $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}\geq a\cdot\frac{1}{a}+b\cdot\frac{1}{b}+c\cdot\frac{1}{c}=3,$$ which gives $$a^2c+b^2a+c^2b\geq3abc$$ Similarly we'll get $$a^2b+b^2c+c^2a\geq3abc$$ and after summing we are done!
Calculate $\int \frac{1}{\sqrt{4-x^2}}dx$ Calculate $$\int \dfrac{1}{\sqrt{4-x^2}}dx$$ Suppose that I only know regular substitution, not trig. I tried to get help from an integral calculator, and what they did was: $$\text{Let u = $\frac{x}{2}$} \to\dfrac{\mathrm{d}u}{\mathrm{d}x}=\dfrac{1}{2}$$ Then the integral became: $$={\displaystyle\int}\dfrac{1}{\sqrt{1-u^2}}\,\mathrm{d}u = \arcsin(u) = \arcsin(\frac{x}{2})$$ And I'm not sure how they accomplished this, where did the 4 go? I understand the arcsin part but not sure how they got rid of the 4? Also how did they know to substitute $\frac{x}{2}$? It doesn't seem very obvious to me.
Recall that $\frac{d}{dx}\arcsin x =\frac{1}{\sqrt{1-x^2}}$. As for where the $4$ went: factor it out of the square root, $\sqrt{4-x^2}=2\sqrt{1-\left(\frac x2\right)^2}$. Hence, setting $u=\frac{x}{2}$, so that $dx=2\,du$, and substituting, you have $ \int \frac{2\,du}{2\sqrt{1-u^2}} = \int \frac{du}{\sqrt{1-u^2}} = \arcsin u + C = \arcsin \frac{x}{2} + C$. The substitution $u=x/2$ is chosen precisely so that $4-x^2$ becomes $4(1-u^2)$.
Let $a,b,c \in \mathbb{Z} \backslash \{ 0\}$. Prove there exist $x$, $y$ such that $ax+by=c$ if and only if $(a,b)\mid c$ Title really says most of it. I tried reverse induction, but it got too convoluted, so I figured it probably wasn't the best way of proving it.
Let $(a,b) = g$. Suppose that $g \mid c$. Then there exist $x',y' \in \mathbb Z$ such that $ax' + by' = g$. Since $g \mid c$, we have $c = c'g$ for some $c' \in \mathbb Z$. So $$c = c'g= c'(ax'+by') = a(c'x') + b(c'y') = ax + by,$$ where $x = c'x'$ and $y = c'y'$. So the equation $ax+by = c$ has a solution. Conversely, suppose that the equation $ax+by = c$ has a solution. Since $g \mid a$ and $g \mid b$, there exist $a',b' \in \mathbb Z$ such that $a = ga'$ and $b=gb'$. So $$c = ax + by = (ga')x + (gb')y = g(a'x + b'y).$$ Hence $g \mid c$.
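The forward direction is constructive: the extended Euclidean algorithm produces the Bézout pair $(x', y')$, which is then scaled by $c/g$. A sketch in Python (function names are mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), for b >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x1, y1 = extended_gcd(b, a % b)
    return (g, y1, x1 - (a // b) * y1)

def solve(a, b, c):
    """Return (x, y) with a*x + b*y == c, or None if gcd(a, b) does not divide c."""
    g, x0, y0 = extended_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    return (x0 * k, y0 * k)

x, y = solve(6, 10, 8)           # gcd(6, 10) = 2 divides 8, so solvable
assert 6 * x + 10 * y == 8
assert solve(6, 10, 7) is None   # 2 does not divide 7
```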
$y=x^2-x^3$ Area under curve I have to use the limit process to find the area of the region between the graph of the function and the x-axis over the given interval. $f(x)=x^2-x^3$ on the interval $[-1,0]$. I started the problem by solving for $\Delta x$, and got $\Delta x=\frac{1}{n}$. Then I used $c_i=a+i\Delta x$, and got $c_i=-1+\frac{i}{n}$. Then I plugged it into the limit process: $\lim_\limits{n \to \infty}\sum_\limits{i=1}^n(f(c_i)\Delta x)$. Long story short, I got $1$ as my answer. Is it correct? If not, how do I get the correct answer?
I am getting $\sum_{i=1}^{n}f(c_{i})\Delta x=\frac{5-12n+7n^2}{12n^2}$ so that the limit is $\frac{7}{12}$, which is the same as $\int_{-1}^{0}(x^2-x^3)dx$.
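The sum can be checked with exact rational arithmetic, confirming both the closed form and the limit $\frac{7}{12}$:

```python
from fractions import Fraction

def riemann_sum(n):
    """Right-endpoint Riemann sum of f(x) = x^2 - x^3 on [-1, 0], exactly."""
    dx = Fraction(1, n)
    total = Fraction(0)
    for i in range(1, n + 1):
        c = -1 + i * dx
        total += (c ** 2 - c ** 3) * dx
    return total

# matches the closed form (5 - 12n + 7n^2) / (12 n^2) exactly
for n in range(1, 30):
    assert riemann_sum(n) == Fraction(5 - 12 * n + 7 * n * n, 12 * n * n)

# and the limit is 7/12
assert abs(float(riemann_sum(2000)) - 7 / 12) < 1e-3
```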
Derive $\cos(3\theta)=(4\cos\theta)^3 − 3\cos\theta$ I'm having trouble with the following derivation: Q: We can use Euler's Theorem ($e^{i\theta} = \cos\theta + i\sin\theta$), where $e$ is the base of the natural logarithms, and $i = \sqrt{-1}$, together with the binomial theorem as above, to derive a number of trigonometric identities. E.g., if we consider $(e^{i\theta})^2$, we can evaluate it two different ways. First, we can multiply exponents, obtaining $e^{i \, 2\theta}$ and then applying Euler's formula to get $\cos(2\theta) + i \sin(2\theta)$, or we can apply Euler's formula to the inside, obtaining $(\cos\theta + i \sin\theta)^2$, which we then evaluate via the binomial theorem: \begin{align} \cos(2\theta) + i \sin(2\theta) &= (\cos\theta + i \sin\theta)^2 \\ &= (\cos\theta)^2 + 2 i \cos\theta \sin\theta + i^2 (\sin\theta)^2 \\ &=(\cos\theta)^2 + 2 i \cos\theta \sin\theta − (\sin\theta)^2 \end{align} Equating real and imaginary parts gives us \begin{align} \cos(2\theta) &= (\cos\theta)^2 − (\sin\theta)^2 \\ \sin(2\theta) &= 2 \cos\theta \sin\theta \end{align} We can then rewrite the first of these identities, using $1=(\sin\theta)^2+(\cos\theta)^2$ to get $(\cos\theta)^2=1−(\sin\theta)^2$, whence the familiar $$\cos(2\theta)=1−2(\sin\theta)^2$$ Use this same approach to show $\cos(3\theta)=(4\cos\theta)^3−3\cos\theta$. A: This is my work so far: \begin{align} e^{i \, 3\theta} &= \cos(3\theta) + i \sin(3\theta) = (\cos\theta)^3 + 3 i (\cos\theta)^2 \sin\theta - 3\cos\theta (\sin\theta)^2 - i(\sin\theta)^3 \\ \cos(3\theta) &= (\cos\theta)^3 - 3(\sin\theta)^2 \cos\theta \\ \sin(3\theta) &= 3\sin\theta(\cos\theta)^2 - (\sin\theta)^3 \end{align} But now I'm unsure how to get $\cos(3\theta) = (4 \cos\theta)^3 − 3\cos\theta$ from what I've derived.
I believe your claim is actually incorrect. You should have the identity \begin{align} \cos 3\theta = 4\cos^3\theta -3\cos\theta \end{align} not \begin{align} \cos 3\theta = (4\cos\theta)^3 -3\cos\theta. \end{align} Observe \begin{align} e^{i3\theta} =&\ (\cos\theta + i\sin\theta)^3 = \cos^3\theta+3(i\sin\theta)\cos^2\theta+3(i\sin\theta)^2\cos\theta +(i\sin\theta)^3\\ =&\ \cos^3\theta -3\sin^2\theta \cos\theta+i(3\sin\theta\cos^2\theta-\sin^3\theta). \end{align} Hence taking the real part yields \begin{align} \cos 3\theta =&\ \cos^3\theta -3\sin^2\theta \cos \theta\\ =&\ \cos^3\theta - 3(1-\cos^2\theta)\cos \theta\\ =&\ 4\cos^3\theta -3\cos\theta. \end{align}
How to prove that $\sum_{n \, \text{odd}} \frac{n^2}{(4-n^2)^2} = \pi^2/16$? The series: $$\sum_{n \, \text{odd}}^{\infty} \frac{n^2}{(4-n^2)^2} = \pi^2/16$$ showed up in my quantum mechanics homework. The problem was solved using a method that avoids evaluating the series and then by equivalence the value of the series was calculated. How do I prove this directly?
HINT $$\sum_{n \, \text{odd}}^{\infty} \frac{n^2}{(n^2-4)^2}=\sum_{n=1}^{\infty} \frac{(2n-1)^2}{((2n-1)^2-4)^2}$$ Using partial fraction expansion, note $$\frac{(2n-1)^2}{((2n-1)^2-4)^2}=\left(\frac{1}{4(2n+1)^2}+\frac{1}{4(2n-3)^2}\right)-\left(\frac{1}{8(2n+1)}-\frac{1}{8(2n-3)}\right)$$ Note that the second part has cancelling terms.
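Both the partial-fraction identity in the hint and the value of the series can be sanity-checked numerically:

```python
from math import pi

# check the partial-fraction identity for several n
for n in range(1, 50):
    t = (2 * n - 1) ** 2 / ((2 * n - 1) ** 2 - 4) ** 2
    pf = (1 / (4 * (2 * n + 1) ** 2) + 1 / (4 * (2 * n - 3) ** 2)
          - (1 / (8 * (2 * n + 1)) - 1 / (8 * (2 * n - 3))))
    assert abs(t - pf) < 1e-12

# partial sums over odd n approach pi^2 / 16
s = sum(n ** 2 / (4 - n ** 2) ** 2 for n in range(1, 2 * 10 ** 5, 2))
assert abs(s - pi ** 2 / 16) < 1e-4
```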
What is the greatest area difference between two "nice triangles"? We call a triangle "nice" if all angles are between $45$ and $90$ degrees (including $45$ and $90$ themselves) and all sides are between $1$ and $2$ (including $1$ and $2$ themselves). What is the greatest area difference between two "nice triangles"? My attempt: Because we have side and angle limits, the best way of finding the area is the formula $S=\frac12 bc\sin{A}$. We should find the greatest and the lowest area. But here I got stuck, and I don't know how to handle both kinds of limits at once. First I thought that the maximum area is at $A=90^\circ$ and $b=c=2$. But then I saw that we would have $a=2\sqrt{2}>2$. Could you please suggest a way?
By the isoperimetric inequality, the largest area of a nice triangle is $\sqrt{3}$, i.e. the area of an equilateral triangle with side length $2$. By the formula $2\Delta = ab\sin\gamma$, the smallest area of a nice triangle is $\frac{1}{2}\cdot 1\cdot 1\cdot\sin 60^\circ = \frac{1}{4}\sqrt{3}$. It follows that the largest area difference between two nice triangles is $\color{red}{\frac{3}{4}\sqrt{3}}$.
limit of a $C^1$ is $0$ at $\infty$ Let $f:\mathbb{R}\to\mathbb{R}$, $f\in C^1(\mathbb{R})$. Suppose $\int_0^\infty f$ converges and $f'$ is bounded. Prove that $\lim_{x\to\infty}f(x)=0$. My attempt: W.l.o.g. assume that $\lim_{x\to\infty}f(x)=L>0$. From some point $x_0\in [0,\infty)$ on, $f$ is positive, so we can write: $\int_0^\infty f$ = $\int_0^{x_0} f$ + $\int_{x_0}^\infty f$. $f'$ is bounded, thus $f$ is uniformly continuous on $[0,x_0]$, and the integral $\int_0^{x_0} f$ is equal to $I$ for some finite $I$. $\lim_{x\to\infty}\frac{f(x)}{1}=L>0$, and we get that $\int_{x_0}^\infty f$ and $\int_{x_0}^\infty 1$ converge and diverge together. $\int_{x_0}^\infty 1$ diverges, so $\int_{x_0}^\infty f$ diverges too. Contradiction.
Since $I=\lim\limits_{x\to\infty}\int_0^x f(t)\,dt$ exists and is finite, you have $\limsup\limits_{x\to\infty} f(x)\ge0$ and $\liminf\limits_{x\to\infty} f(x)\le 0$. Assume, for contradiction, that $\limsup\limits_{x\to\infty} f(x)>0$. Since $\liminf\limits_{x\to\infty} f(x)\le 0$, you have that $f'(x)$ is negative in at least one point (actually, it must be frequently negative as $x\to\infty$). Therefore, $$A=\inf_{x> 0} f'(x)<0$$ Again, since $\liminf\limits_{x\to\infty} f(x)\le 0<\limsup\limits_{x\to\infty} f(x)$ and $f$ is continuous, there is a positive real number $M$ such that $f(x)=M$ frequently as $x\to\infty$. That is, there is a sequence $x_n\to\infty$ such that $f(x_n)=M$. By Lagrange's theorem, $$\begin{align}&f(x)\ge A(x-x_n)+M&\text{if }x>x_n\\\end{align}$$ A fortiori, this holds for $x_n< x<x_n-\frac{M}{A}$. Hence, $$\int_{x_n}^{x_n-\frac MA}f(t)\,dt\ge \left[\frac A2t^2+(M-Ax_n)t\right]^{x_n-\frac MA}_{x_n}=-\frac{M^2}{2A}$$ But $x_n-\frac MA\stackrel{n\to\infty}\longrightarrow \infty$ and $$\liminf\limits_{n\to\infty}\int_0^{x_n-\frac MA}f(t)\,dt=\liminf\limits_{n\to\infty}\left(\int_0^{x_n}f(t)\,dt+\int_{x_n}^{x_n-\frac MA}f(t)\,dt\right)=\\=\lim\limits_{n\to\infty}\int_0^{x_n}f(t)\,dt+\liminf\limits_{n\to\infty}\int_{x_n}^{x_n-\frac MA}f(t)\,dt\ge I-\frac{M^2}{2A}>I$$ Which contradicts $\lim\limits_{x\to\infty} \int_0^x f(t)\,dt=I$. Therefore $\limsup\limits_{x\to \infty}f(x)\le0$ must hold, and hence $\limsup\limits_{x\to \infty}f(x)=0$. Now, by using the previous result on $g=-f$ (which satisfies the hypothesis of the theorem), we obtain that $$0\ge\limsup\limits_{x\to\infty} g(x)=-\liminf\limits_{x\to\infty} f(x)$$ So $\liminf\limits_{x\to\infty}f(x)\ge 0$ as well. Hence $\liminf\limits_{x\to\infty} f(x)=0$. This proves that $\lim\limits_{x\to\infty} f(x)=0$.
Prove that $23$ does not divide $2^n + 3^m$, for any $m, n \in \mathbb{N}$ Prove that $23$ does not divide $2^n + 3^m$, for any $m, n \in \mathbb{N}$. I've tried to prove that the equation $2^n + 3^m = 0$ has no solutions in $\mathbb{Z}_{23}$, but didn't succeed. Thank you!
Hint: Render $2\equiv 5^2$, $3\equiv 7^2$ mod $23$. Can a sum of nonzero squares have zero residue mod $23$ when $23$ is a $4k+3$ prime? Another approach, if the above is slightly above your level, as suggested in the comments: If $2^n+3^m$ were divisible by $23$ then so would be $8^n+27^m$, as $a+b$ divides $a^3+b^3$. Then $8^n+4^m=2^{3n}+2^{2m}$ would also be a multiple of $23$. Through division by the smaller power of $2$ in $\{8^n,4^m\}$ we find that $2^k+1$ should be a multiple of $23$ for some whole number $k(=|3n-2m|)$. Now try to find a value of $k$ that makes $2^k\equiv-1\bmod 23$ (you will find there is none).
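Since $2^{11}\equiv 3^{11}\equiv 1\pmod{23}$, the powers of $2$ and $3$ mod $23$ each cycle with period dividing $11$, so an $11\times11$ check covers every pair $(n,m)$:

```python
M = 23
# both bases have multiplicative order dividing 11 mod 23
assert pow(2, 11, M) == 1 and pow(3, 11, M) == 1

# exhaustive over one full period of each exponent
assert all((pow(2, n, M) + pow(3, m, M)) % M != 0
           for n in range(11) for m in range(11))
```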
Homomorphism and indexes of subgroups Let $G$ be a finite group, $f:G \to M$ group homomorphism and $H \leq G$. Show that $[f(G):f(H)]$ divides $[G:H]$. I have already shown that $f(H) \leq f(G) \leq M$. I guess I should use Lagrange's theorem somehow, but I don't have any other ideas on how to proceed. Any help would be appreciated!
Sketch: Let $K$ be the kernel of $f$. (1) Since $K$ is normal, $HK$ is a subgroup of $G$. (2) There is a bijection between cosets of $HK$ in $G$ and cosets of $f(H)$ in $f(G)$: the coset $gHK$ corresponds to $f(g)f(H)$. To see where this comes from, observe that $f^{-1}(f(H))=HK$. One direction of the proof: Suppose that the cosets $f(g_1)f(H)$ and $f(g_2)f(H)$ are equal. Then, there exists some $h\in H$ such that $f(g_1)f(h)=f(g_2)$. Therefore, $f(g_2^{-1}g_1h)=e_M$. Therefore, $g_2^{-1}g_1h\in K$, so there is some $k\in K$ such that $g_2^{-1}g_1h=k$, or $g_2^{-1}g_1=kh^{-1}\in KH$. This implies that $g_2HK=g_1HK$. For the other direction, starting with $g_2^{-1}g_1\in HK$, write $g_1=g_2hk$ and use this to show that $f(g_1)f(H)=f(g_2)f(H)$ by a similar argument. (3) The result follows since $[G:H]=[G:HK][HK:H]$ and $[G:HK]=[f(G):f(H)]$.
How is the infinite sum of a series calculated by symbolic math? I wonder how Wolfram can solve this series and provide the solution symbolically: $$\sum_{k=1}^\infty\frac 1{(2k-1)^4}$$ In this particular case I know how to use a Fourier series of a triangle function to get the result by employing Parseval's theorem, but this is only a particular example. The proof of $\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$ was found by Euler and uses a Taylor series of a special function. But is there a recipe that works correctly for every possible series? I cannot imagine that such an algorithm exists. But how can Wolfram do it?
I bet that WolframAlpha has a database of the most common expressions. Wolfram publicly provides a huge list of functions, containing various series expansions of most functions, so it's not very hard to build an optimized database with the mapping series$\to$function. For example, WolframAlpha returns $$\sum_{k=1}^\infty\frac 1{(k+a)^n} = \zeta(n, a+1)\text.$$ To evaluate your input, apply a general technique: try to eliminate any integer factors from the summation variable. $$\sum_{k=1}^\infty\frac 1{(2k-1)^4}=\sum_{k=1}^\infty\frac1{2^4}\cdot\frac 1{(k-\frac12)^4}=\frac1{16}\cdot\sum_{k=1}^\infty\frac 1{(k-\frac12)^4}=\frac1{16}\cdot\zeta(4, 1/2)$$ Now, without even knowing what this $\zeta$ function is, we just need to know the value at the point $(4,1/2)$, and we're done. There is a database of special values of this function. It does not contain $(4, 1/2)$ though. So, try the transformation database. It has $$\zeta(n, 1/2)=(2^n-1)\zeta(n)$$ which brings us to another $\zeta$ function (one argument). Again, we need to know nothing about the function, just its value at $n=4$. Here, the database yields $$\zeta(4)=\frac{\pi^4}{90}$$ and we're done. Disclaimer: I don't know exactly how WolframAlpha works, so this is just a guess.
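The chain of lookups can be verified numerically: the partial sums match $\zeta(4,1/2)=(2^4-1)\,\zeta(4)$ and the final value $\frac{1}{16}\zeta(4,1/2)=\frac{\pi^4}{96}$:

```python
from math import pi

zeta4 = pi ** 4 / 90                       # known special value zeta(4)

# Hurwitz zeta(4, 1/2) approximated by a partial sum
hurwitz = sum(1 / (k - 0.5) ** 4 for k in range(1, 10 ** 5))
assert abs(hurwitz - 15 * zeta4) < 1e-9    # transformation zeta(4,1/2) = (2^4-1) zeta(4)

# the original series over odd denominators equals pi^4 / 96
s = sum(1 / k ** 4 for k in range(1, 2 * 10 ** 5, 2))
assert abs(s - pi ** 4 / 96) < 1e-9
assert abs(hurwitz / 16 - s) < 1e-9        # the factor 1/16 pulled out above
```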
How to find the length of a part of a curve? How can I find the length of a curve, for example $f(x) = x^3$, between two limits on $x$, for example $1$ and $8$? I was bored in a maths lesson at school and posed myself the question: What's the perimeter of the region bounded by the $x$-axis, the lines $x=1$ and $x=8$ and the curve $y=x^3$? Of course, the only "difficult" part of this is finding the length of the part of the curve between $x=1$ and $x=8$. Maybe there's an established method of doing this, but I as a 16-year-old calculus student don't know it yet. So my attempt at an approach was to superimpose many triangles onto the curve so that I could sum all of their hypotenuses. Just use many triangles like the above, $$ \lim_{\delta x\to 0}\frac{\sqrt{\left(1+\delta x-1\right)^2+\left(\left(1+\delta x\right)^3-1^3\right)^2}+\sqrt{\left(1+2\delta x-\left(1+\delta x\right)\right)^2+\left(\left(1+2\delta x\right)^3-\left(1+\delta x\right)^3\right)^2}+\cdots}{\frac7{\delta x}} $$ I'm not entirely sure if this approach is correct though, or how to go on from the stage I've already got to.
Element of curve length is $\sqrt{(\Delta x)^2+(\Delta y)^2}$ to find curve length you can add these elements $$\sum\sqrt{(\Delta x)^2+(\Delta y)^2}$$ this is equal to find $$\sum\sqrt{(\Delta x)^2+(\Delta y)^2}=\\\sum\sqrt{(\Delta x)^2\left(1+\left(\dfrac{\Delta y}{\Delta x}\right)^2\right)}\\=\\ \sum\Delta x\sqrt{1+\left(\dfrac{\Delta y}{\Delta x}\right)^2} $$ to better approximation we can calculate limit of $\displaystyle\sum\Delta x\sqrt{1+\left(\dfrac{\Delta y}{\Delta x}\right)^2}$ when $n \to \infty$ so $$\lim_{n \rightarrow \infty}\sum\Delta x\sqrt{1+\left(\dfrac{\Delta y}{\Delta x}\right)^2}=\\\int_{a}^{b}dx\sqrt{1+\left(\dfrac{d y}{d x}\right)^2}=\\\int_{a}^{b}dx\sqrt{1+(f'(x))^2}$$And now , in your case $$y=x^3 \to y'=3x^2\\\int_{1}^{8}dx\sqrt{1+(3x^2)^2}=\\\int_{1}^{8}\sqrt{1+9x^4}dx$$
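The antiderivative of $\sqrt{1+9x^4}$ is not elementary, but the polygonal-sum picture and the arc-length integral can be compared numerically; a sketch using plain trapezoids:

```python
import numpy as np

a, b, n = 1.0, 8.0, 200_000
x = np.linspace(a, b, n + 1)
y = x ** 3

# polygonal approximation: sum of hypotenuse lengths sqrt(dx^2 + dy^2)
polygonal = float(np.sum(np.hypot(np.diff(x), np.diff(y))))

# trapezoidal rule for the arc-length integral of sqrt(1 + (3x^2)^2)
g = np.sqrt(1.0 + 9.0 * x ** 4)
integral = float(np.sum((g[1:] + g[:-1]) * np.diff(x)) / 2.0)

assert abs(polygonal - integral) / integral < 1e-6
assert 511.0 < integral < 511.5   # dominated by the integral of 3x^2, which is 511
```

Both numbers agree to many digits, confirming that the limit of the polygonal sums is exactly the integral derived above.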
What strategy should Alice choose to win this game? The following game was proposed to me: * *Alice has a coin which has a probability $p$ of landing on heads, $1-p$ of landing on tails. Alice doesn't know $p$. Alice is not allowed to lie. *Bob knows $p$. Bob is allowed to lie. *Alice tells Bob two functions $f, g: [0,1]\mapsto\mathbb R$ and asks Bob for the value of $p.$ Let Bob's answer be $\pi$. *Alice throws the coin. If it lands on heads, Bob is awarded $f(\pi)$ points, if it lands on tails, Bob is awarded $g(\pi)$ points. Alice wants to find a pair of functions $f,g$ such that if Bob tries to maximize the expected value for the number of points he is awarded, then $\pi=p$. Is there a simple way to describe the entire solution space?
Bob tries to maximize his expected win, so he calculates \begin{equation} \max_{\pi \in[0,1]} [p f(\pi) + (1-p) g(\pi)] \end{equation} To start with, and for the sake of simplicity, assume the maximizer satisfies the first-order condition $\frac{d}{d \pi}[p f(\pi) + (1-p) g(\pi)] = 0$. Write $q(p, \pi) = p f'(\pi) + (1-p) g'(\pi)$. Now Alice wants to construct $q$ such that $q(p,p) = 0$ for all $p$. Now comes the fun part: finding a solution to $ p f'(p) + (1-p) g'(p) = 0$. We can start e.g. with $f(p) = \ln(p)-p$; then we have $ p (\frac{1}{p} -1) + (1-p) g'(p) = 0$, equivalent to $1+g'(p)=0$, so $g = -p$. Now we can check the second derivative to be sure that it is actually a maximum: $p f''(\pi)+(1-p)g''(\pi) = -p\frac{1}{\pi^2}<0$. Voilà. The whole solution space should be obtainable via \begin{equation} -\frac{1-p}{p} g'(p) = f'(p) ~\text{with the constraint}~ \forall \pi : p f''(\pi)+(1-p)g''(\pi) < 0 \end{equation}
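A small simulation of Bob's optimization for the particular pair $f(\pi)=\ln\pi-\pi$, $g(\pi)=-\pi$ found above (grid search, just to illustrate that Bob's best report is the true $p$):

```python
import numpy as np

def f(t):                     # Alice's choice f(pi) = ln(pi) - pi
    return np.log(t) - t

def g(t):                     # Alice's choice g(pi) = -pi
    return -t

grid = np.linspace(0.001, 1.0, 100_000)
for p in (0.1, 0.3, 0.5, 0.9):
    expected = p * f(grid) + (1 - p) * g(grid)   # Bob's expected score
    best = grid[np.argmax(expected)]
    assert abs(best - p) < 1e-3                  # maximized at pi = p
```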
Why not $1$/irrational? An older math teacher told me that I shouldn't leave a fraction with an irrational in the denominator. But lately, I keep hearing this from every math teacher that I have ever had. Thus if I have this fraction $$\frac1{2^{1/2}}$$ I should always convert it to one that has no irrationals in the denominator $$\frac{2^{1/2}}{2}$$ But why? My thoughts are that, just like the fact that we can't write out irrationals exactly, we can't write fractions with irrational denominators as decimals either, but we can approximate them, so I don't see the problem. If we approximate $1/\sqrt{2}$, it is around $0.70710678118$, so I don't understand why I shouldn't include irrationals in denominators, at least if I don't want points deducted on a test.
If I want to compute $1/\sqrt{2}$, I have to divide by $1.414$, at first glance. Ick! 4-digit long division! But if I convert it to $\sqrt{2}/2$, I just have to divide by $2$. Heck, I can do that in my head.
What is the Mathematical Property that justifies equating coefficients while solving partial fractions? The McGraw Hill Precalculus textbook gives several good examples of solving partial fractions, and they justify all but one step with established mathematical properties. In the 4th step of Example 1, when going from: $$1x + 13 = (A+B)x+(4A-5B)$$ they say to "equate the coefficients", writing the linear system $$A+B = 1$$ $$4A-5B=13$$ It is a simple step, color-coded in the textbook for easy understanding, but McGraw Hill does not justify it with any mathematical property, postulate or theorem. Addition and/or multiplication properties of equality don't seem to apply directly. Can someone help me justify this step?
The general principle is: two polynomials are equal at every point if and only if their coefficients are equal. "If their coefficients are equal then the polynomials are equal" is clear. Proving the reverse is not so easy in general. It follows from a stronger result from linear algebra, which says that the Vandermonde matrix for $d+1$ distinct real numbers is invertible, and so there is a unique polynomial of degree at most $d$ passing through any $d+1$ points, provided they all have different $x$ coordinates. This is probably not accessible to you at your level, but it is probably the best way to see it overall. Another way to see it, though making this rigorous requires some calculus, is to note that if two polynomials are equal at each point, then their constant terms must be the same. Subtracting off the constant term from each and dividing by $x$, you have two polynomials that now again have to be equal at each point. So you plug in $x=0$, which gives agreement of the linear coefficients of the original polynomials. Doing this a total of $d+1$ times gives the desired result. Where the lack of rigor comes in is in saying that $x/x=1$ even when $x=0$, which is not properly true. What we are really doing here is noticing that if two differentiable functions are equal everywhere then their derivatives are equal everywhere, and that if $p(x)=\sum_{k=0}^n a_k x^k$ then $a_k=\frac{p^{(k)}(0)}{k!}$, where $p^{(k)}$ denotes the $k$th derivative of $p$.
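For the McGraw Hill example, "equating coefficients" amounts to solving a $2\times2$ linear system; a quick check that the resulting $A$ and $B$ make the two polynomials agree at every point:

```python
import numpy as np

# equate coefficients of x and the constant term:
#   A + B = 1   and   4A - 5B = 13
M = np.array([[1.0, 1.0], [4.0, -5.0]])
rhs = np.array([1.0, 13.0])
A, B = np.linalg.solve(M, rhs)
assert abs(A - 2.0) < 1e-12 and abs(B - (-1.0)) < 1e-12

# sanity check: the two sides agree as polynomials, hence at every point
for x in (-3.0, 0.0, 2.5):
    assert abs((x + 13) - ((A + B) * x + (4 * A - 5 * B))) < 1e-9
```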
Find all conditions for $x$ such that the equation $1\pm 2 \pm 3 \pm 4 \pm \dots \pm n=x$ has a solution. Find all conditions for $x$ so that the equation $1\pm 2 \pm 3 \pm 4 \pm \dots \pm 1395=x$ has a solution. My attempt: $x$ cannot be odd because the left-hand side is always even, so we must have $x=2k$ $(k \in \mathbb{Z})$. Also, it has a maximum and a minimum: $1-2-3-4-\dots-1395\le x \le 1+2+3+4+\dots +1395$. But I can't show whether these conditions are enough, or find other conditions.
Let $k=\sum_{i=1}^{1395}i=\frac 12\cdot 1395\cdot 1396$, which is the maximum sum you can attain. Claim: you can achieve any even sum from $-k+2$ to $k$ except $k-2$ and $-k+4$. We proceed by strong induction over numbers of the form $3 \bmod 4$. The base cases are $n=3$, where we should be able to achieve $6,2,0,-4$, which we can do with $1+2+3,\ 1-2+3,\ 1+2-3,\ 1-2-3$, and $n=7$, where we can achieve $-26,-22,-20,-18,\ldots, 24,28$. Then we show that if the claim is true up to $m$, it is true for $m+4$. If our target is within the range we can obtain with $m$, we can put plus signs before $m+1, m+4$ and minus signs before $m+2, m+3$ and use the solution for the target with $m$. If it is greater than $\frac 12m(m+1)$, it is less than or equal to $\frac 12(m+4)(m+5)=\frac 12m(m+1)+4m+10$. For $m \ge 11$ we can negate all the top four terms and reduce our target with numbers up to $m-4$ by $4m-10$. This will not reduce the target below $0$, so we can use the solution for $m$ and the new target. A similar argument works for negative targets, with plus signs before the four largest numbers.
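The claim, including the two exceptional values $k-2$ and $-k+4$, can be brute-forced for the base cases and a couple more $n\equiv 3\pmod 4$:

```python
from itertools import product

def achievable(n):
    """All values of 1 ± 2 ± 3 ± ... ± n."""
    vals = set()
    for signs in product((1, -1), repeat=n - 1):
        vals.add(1 + sum(s * t for s, t in zip(signs, range(2, n + 1))))
    return vals

for n in (3, 7, 11):
    k = n * (n + 1) // 2
    expected = set(range(-k + 2, k + 1, 2)) - {k - 2, -k + 4}
    assert achievable(n) == expected
```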
Limit and a series $$\lim_{n\to+\infty}\frac{\cos(\frac1n)+\cos(\frac2n)+\cdots+\cos(\frac nn)}{n}$$ What I have tried: I rewrote the numerator as $\sum_{i=1}^n \cos(i/n)$. Then I used the formula $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ and substituted for $i$, which gave $\lim_{n\to\infty}\frac1n\cos\left(\frac{n(n+1)}{2n}\right)$, and that would be just $0$.
Applying Bounds and the Squeeze Theorem Aside from Riemann sums, we can evaluate the limit of interest by noting that the cosine function, $\cos(x)$, is monotonically decreasing and positive for $0 \le x\le 1$. Hence, we have $$\frac1n \int_1^{n+1} \cos(x/n)\,dx\le \frac1n\sum_{k=1}^n\cos(k/n)\le \frac1n \cos(1/n)+\frac1n \int_1^n \cos(x/n)\,dx \tag 1$$ Carrying out the integrals in $(1)$ we find that $$\sin(1+1/n)-\sin(1/n)\le \frac1n\sum_{k=1}^n\cos(k/n)\le \frac1n \cos(1/n)+\sin(1)-\sin(1/n)$$ whereupon application of the squeeze theorem yields the coveted result $$\lim_{n\to \infty}\frac1n\sum_{k=1}^n\cos(k/n)=\sin(1)$$
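A quick numerical check (a Python sketch; `riemann` is my name for the average): the average approaches $\sin(1)\approx 0.8415$ as $n$ grows, and it stays between the two bounds of the squeeze.

```python
import math

def riemann(n):
    # the average (1/n) * sum_{k=1}^{n} cos(k/n)
    return sum(math.cos(k / n) for k in range(1, n + 1)) / n

for n in (10, 100, 10_000):
    print(n, riemann(n), abs(riemann(n) - math.sin(1)))
```

The error shrinks roughly like $1/n$, as expected for a right-endpoint Riemann sum of a decreasing integrand.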
A closed form for a triple integral with sines and cosines $$\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin(x)\sin(y)\sin(z)}{xyz(x+y+z)}(\sin(x)\cos(y)\cos(z) + \sin(y)\cos(z)\cos(x) + \sin(z)\cos(x)\cos(y))\,dx\,dy\,dz$$ I saw this integral $I$ posted on a page on Facebook. The author claims that there is a closed form for it. My Attempt This can be rewritten as $$3\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z)}{xyz(x+y+z)}\,dx\,dy\,dz$$ Now consider $$F(a) = 3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz(x+y+z)}\,dx\,dy\,dz$$ Taking the derivative $$F'(a) = -3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz}\,dx\,dy\,dz$$ By symmetry we have $$F'(a) = -3\left(\int^\infty_0 \frac{\sin^2(x)e^{-ax}}{x}\,dx \right)\left( \int^\infty_0 \frac{\sin(x)\cos(x)e^{-ax}}{x}\,dx\right)^2$$ Using W|A I got $$F'(a) = -\frac{3}{16} \log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)$$ By integration we have $$F(0) = \frac{3}{16} \int^\infty_0\log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)\,da$$ Let $x = 2/a$ $$\tag{1}I = \frac{3}{8} \int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx$$ Question I cannot verify that (1) is correct, nor find a closed form for it. Any ideas?
Ok I was able to find the integral $$\int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx$$ First note that $$\int \frac{\log(1+x^2)}{x^2}\,dx = 2 \arctan(x) - \frac{\log(1 + x^2)}{x}+C$$ Using integration by parts $$I = \frac{\pi^3}{12}+2\int^\infty_0\frac{\arctan(x)\log(1 + x^2)}{(1+x^2)x}\,dx$$ For the integral let $$F(a) = \int^\infty_0\frac{\arctan(ax)\log(1 + x^2)}{(1+x^2)x}\,dx$$ By differentiation we have $$F'(a) = \int^\infty_0 \frac{\log(1+x^2)}{(1 + a^2 x^2)(1+x^2)}\,dx $$ Letting $1/a = b$ we get $$\frac{1}{(1 + a^2 x^2)(1+x^2)} = \frac{1}{a^2} \left\{ \frac{1}{((1/a)^2+x^2)(1+x^2)}\right\} =\frac{b^2}{1-b^2}\left\{ \frac{1}{b^2+x^2}-\frac{1}{1+x^2} \right\}$$ We conclude that $$\frac{b^2}{1-b^2}\int^\infty_0 \frac{\log(1+x^2)}{b^2+x^2}-\frac{\log(1+x^2)}{1+x^2} \,dx = \frac{b^2}{1-b^2}\left\{ \frac{\pi}{b}\log (1+b)-\pi\log(2)\right\}$$ Where we used that $$\int^\infty_0 \frac{\log(a^2+b^2x^2)}{c^2+g^2x^2}\,dx = \frac{\pi}{cg}\log \frac{ag+bc}{g}$$ By integration we deduce that $$\int^1_0 \frac{\pi}{a^2-1}\left\{ a\log \left(1+\frac{1}{a} \right)-\log(2)\right\}\,da = \frac{\pi}{2}\log^2(2)$$ For the last one I used Wolfram Alpha, however it shouldn't be difficult to prove. Finally we have $$\int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx = \frac{\pi^3}{12}+\pi \log^2(2)$$
Can anyone recommend a book about three-dimensional geometry? I want to learn differential geometry, but I have read that it requires some background in three-dimensional geometry, and I have little background in it. Can anyone help me by recommending a book about 3-D geometry?
I personally recommend "Analytical Geometry: 2D and 3D" by P. R. Vittal; all concepts are explained very thoroughly. But I feel that there are not many books exclusively devoted to this topic, so try a Google search as well. Hope it helps.
What is $x$, if $3^x+3^{-x}=1$? I came across a really brain-racking problem. Determine $x$, such that $3^x+3^{-x}=1$. This is how I tried solving it: $$3^x+\frac{1}{3^x}=1$$ $$3^{2x}+1=3^x$$ $$3^{2x}-3^x=-1$$ Let $A=3^x$. $$A^2-A+1=0$$ $$\frac{-1±\sqrt{1^2-4\cdot1\cdot1}}{2\cdot1}=0$$ $$\frac{-1±\sqrt{-3}}{2}=0$$ I end up with $$\frac{-1±i\sqrt{3}}{2}=0$$ which yields no real solution. And this is not the expected answer. I'm a 7th grader, by the way. So, I've very limited knowledge on mathematics. EDIT I made one interesting observation. $3^x+3^{-x}$ can be the middle term of a quadratic equation: $$3^x\cdot\frac{1}{3^x}=1$$ $$3^x+3^{-x}=1$$
You have proceeded wrongly. We have $$3^x +3^{-x} =1 $$ $$\Rightarrow 3^{2x} +1 =3^x $$ $$\Rightarrow 3^{2x} -3^x +1=0$$ giving us $$3^x = \frac {1\pm \sqrt {3}i}{2}$$ Notice that the LHS is always real but we have an imaginary part on the RHS. This is enough evidence to suggest that the above equation has no solutions. Hope it helps.
Can points on an integer lattice form the vertices of a regular hexagon? Is it possible for six points in the integer lattice to form the vertices of a regular hexagon?
No. Otherwise you would have two segments with endpoints in the lattice whose lengths are in ratio $1$ to $\sqrt 3$ (one side of the hexagon and the diagonal perpendicular to it), but this cannot happen in the lattice: it would mean that there are integers $a,b,c,d$ such that $$a^2+b^2=3(c^2+d^2),$$ and by infinite descent this is impossible. Indeed, the LHS would be a multiple of $3$. By examining squares modulo $3$, the only way is that both $a$ and $b$ are multiples of $3$, but then you can repeat the same argument for the RHS, etc. Note that this fails in $3D$ because you can also have $1+1+1\equiv 0\pmod 3$.
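A finite brute-force search (not a proof, just evidence consistent with the descent argument) finds no nonzero solutions in a modest range; a Python sketch:

```python
N = 200  # search bound; a finite check, not a proof
# all values a^2 + b^2 with 0 <= a, b < N
norms = {a * a + b * b for a in range(N) for b in range(N)}
# look for (c, d) != (0, 0) with 3(c^2 + d^2) expressible as a^2 + b^2
hits = [(c, d) for c in range(N) for d in range(N)
        if (c, d) != (0, 0) and 3 * (c * c + d * d) in norms]
print(hits)  # → []
```

No pair $(c,d)\neq(0,0)$ in range yields a value $3(c^2+d^2)$ of the form $a^2+b^2$, exactly as the descent predicts.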
If $f$ is continuous, $F_n$ is pointwise convergent to $f$ and $F_n$ is uniformly convergent... If $f$ is continuous, $F_n$ is pointwise convergent to $f$ and $F_n$ is uniformly convergent, then $F_n$ is uniformly convergent to $f$. To me, this result makes perfect sense, but I cannot seem to find it in my analysis notes. Is this implication correct?
Your implication is correct. Suppose that the sequence converges uniformly to a function $g$, and show that for every $\epsilon>0$ and every $x$ in the domain, the difference between $f(x)$ and $g(x)$ is less than $\epsilon$. What can we conclude?
Physics and the Apéry constant, an example for mathematicians The Wikipedia entry for Apéry's constant tells us that the constant $$\zeta(3)=\sum_{n=1}^\infty\frac{1}{n^3}$$ arises in physical problems. Question. Can you tell us, from a divulgative viewpoint but with mathematical details if possible, about a nice physical problem involving Apéry's constant? Many thanks. I know this is a curiosity, but if you know both the mathematics and a concise physics problem (such as the ones referenced in the Wikipedia article, or others), an answer that introduces the physics problem and then shows or explains the calculations would be nice for all of us.
Zeta values appear in a large number of physical applications. Just to point out a relevant field, one of these is the study of black bodies, an issue that can be extended to the larger theory of fundamental particles such as photons, electrons and positrons. For example, a field where both $\zeta (3)$ and $\zeta (4)$ are commonly used is to quantify black body energy radiation. A black body of surface $S$ and temperature $T$ radiates energy at a rate equal to $\sigma ST^4$, where $$\sigma =\frac {2 \pi^5}{15 } \frac{k^4}{h^3c^2} = 12 \pi \zeta (4) \frac{k^4}{h^3c^2}=5.67 \cdot 10^{-8} \text {J }\, \text { m}^{-2}\,\text { s}^{-1} \,\text { K}^{-4}$$ is the Stefan-Boltzmann constant, defined in terms of Planck's constant $h$, the speed of light $c$, and Boltzmann's constant $k$. The presence of $\zeta (4)$ results from the integral $$\int_0^\infty \frac {2 \pi x^3 dx}{e^x-1}=12 \pi \zeta (4) \approx 40.8026...$$ calculated over the black body spectrum (here is the numerical estimation of the integral by WA for $x=0$ to $10^6$). A similar expression, given by $\sigma' ST^3$, provides the rate of emission of photons over time by a black body. In this case, $\sigma'$ is given by $$\sigma'=4 \pi \zeta (3) \frac{k^3}{h^3c^2}$$ where the presence of Apéry's constant $\zeta (3)$ results from the integral $$\int_0^\infty \frac {2 \pi x^2 dx}{e^x-1}=4 \pi \zeta (3) \approx 15.1055...$$ again calculated over the black body spectrum (here is the numerical estimation by WA). As a confirmation of the extension of these concepts to the study of subatomic particles, another similar expression including Apéry's constant gives the estimated average density of photons for the cosmic microwave background radiation, given by $$16 \pi \zeta (3) \left ( \frac{kT_0}{hc} \right)^3 \approx 413 \, \text {cm}^{-3}$$ where $T_0$ is the temperature of the radiation. A nice derivation of this result is provided here.
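The two integrals over the black-body spectrum are easy to verify numerically; a Python sketch using a plain midpoint rule (the function name and the truncation at $x=60$ are my choices; the integrand decays like $e^{-x}$, so the tail is negligible):

```python
import math

def bose_integral(p, upper=60.0, steps=200_000):
    """Midpoint rule for ∫_0^upper 2π x^p / (e^x - 1) dx."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += 2 * math.pi * x ** p / math.expm1(x)  # expm1 is accurate near 0
    return total * h

zeta4 = math.pi ** 4 / 90
zeta3 = sum(1 / n ** 3 for n in range(1, 100_000))  # truncated series, error ~ 5e-11

print(bose_integral(3), 12 * math.pi * zeta4)  # both ≈ 40.8026
print(bose_integral(2), 4 * math.pi * zeta3)   # both ≈ 15.1055
```

The numeric integrals match $12\pi\zeta(4)=\frac{2\pi^5}{15}$ and $4\pi\zeta(3)$ to several decimal places.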
Solving an exponential equation How can I solve for $n$ in the following equation: $10 = 2^n + 3^n$ Thank you for the assistance.
Note: I will assume that $n\in \mathbb{R}$, since there does not exist a solution for $n\in \mathbb{N}$. There does not exist a closed form solution for $n$ in terms of elementary functions. However, you can solve this numerically. I will use the Newton-Raphson method. Note 2: From now on, I will use $x$ instead of $n$ as the variable to find, just for convenience of the method. $$2^x+3^x=10$$ The process is as follows: $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)} \tag{1}$$ The function $f$ to consider is: $$f(x)=2^x+3^x-10$$ Evaluating its derivative: $$f'(x)=2^x\ln(2)+3^x\ln(3)$$ Substituting into equation $(1)$: $$x_{n+1}=x_n-\frac{2^{x_n}+3^{x_n}-10}{2^{x_n}\ln(2)+3^{x_n}\ln(3)}\tag{2}$$ We can iterate $(2)$ by using a spreadsheet or using more sophisticated software such as MATLAB. Let's start with a reasonable guess for the solution $x_0=2$. $$\begin{array}{c|c}n&x_n\\\hline0&2\\1&1.76304\\2&1.72982\\3&1.72926\\4&1.72926\\5&1.72926\end{array}$$ Note that as the iterations $n\to \infty$, $x_n\to x$. Doing this iteration repeatedly gives: $$x\approx 1.729255558981860$$ The function is strictly increasing for all $x\in \mathbb{R}$, therefore we know that there exists only one root.
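The iteration $(2)$ is a few lines of code; a Python sketch (the initial guess and iteration count are my choices):

```python
import math

def f(x):
    return 2 ** x + 3 ** x - 10

def fprime(x):
    return 2 ** x * math.log(2) + 3 ** x * math.log(3)

x = 2.0  # reasonable starting guess
for _ in range(20):  # Newton converges quadratically; 20 steps is plenty
    x = x - f(x) / fprime(x)
print(x)  # ≈ 1.7292555589819
```

The iterates stabilize after a handful of steps, matching the table in the answer.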
A retract in normal space Let $X$ be a normal space and $K$ be a closed subset of $X$ homeomorphic to $\mathbb R$. Can I always find a retract of $X$ onto $K$? From some examples, such as $X=\mathbb R^n$ and $K=\mathbb R$, I guess it is true, but I can't find the way to prove it rigorously.
Fix a homeomorphism $h:K\to\mathbb R$. Let $\overline{\mathbb R}$ denote the extended real line, $\mathbb R\cup\{\infty,-\infty\}$. Since $\overline{\mathbb R}$ is homeomorphic to the interval $[0,1]$ and since $K$ is closed in the normal space $X$, the Tietze extension theorem lets you extend $h$ to a continuous map $\bar h:X\to\overline{\mathbb R}$. Now let $L=\bar h^{-1}(\{\infty,-\infty\})$ and observe that $L$ is a closed set disjoint from $K$. By normality, find a closed set $M$ with $K\cap M=\varnothing$ and $L\subseteq\text{Int}(M)$, and then use Urysohn's lemma to get a continuous $f:X\to[0,1]$ that is identically 1 on $K$ and identically 0 on $M$. I claim that the function $g:X\to\mathbb R$ defined to be product $f\cdot\bar h$ on $X-L$ and 0 on $\text{Int}(M)$ is well-defined; the point is that, on the overlap of the two open sets $X-L$ and $\text{Int}(M)$, $f$ is identically zero and $\bar h$ has values in $\mathbb R$, so their product is identically zero. It follows that $g$ is continuous on $X$, because its restrictions to the open sets $X-L$ and $\text{Int}(M)$ are obviously continuous. Also note that $g$ maps to $\mathbb R$, not $\overline{\mathbb R}$, because $L$ is included in the interior of $M$ where $g$ is zero. Finally, since $f$ is identically 1 on $K$, $g$ coincides with $\bar h$ and thus with $h$ on $K$. Therefore, the composition $h^{-1}\circ g$ is a retraction of $X$ onto $K$.
Is this Egyptian fractions algorithm the same as the greedy algorithm? How to prove it? I was thinking about possible new algorithms for Egyptian fractions expansion. We know that for any $p,q,m>0$: $$\frac{p}{q}>\frac{p}{q+m}$$ Here we assume $p<q$ and are coprime. Now it would make sense to find the smallest $m$ such that: $$\mod(q+m,p)=0$$ Then we represent the initial fraction as: $$\frac{p}{q}=\frac{p}{q+m}+\frac{pm}{q(q+m)}$$ Getting everything in lowest terms we obtain: $$\frac{p_0}{q_0}=\frac{1}{a_0}+\frac{p_1}{q_1}$$ Here $a_0=(q_0+m_0)/p_0$ (an integer) and we repeat the process for $p_1$ and $q_1$. Experimenting with Mathematica (and pen and paper of course) I noticed a very curious fact - this algorithm gives the exact same expansions as the greedy algorithm! But why? And if it's true, how do we prove it? Here is the Mathematica code I used. To speed up the process for large denominators and numerators I numerically check FractionalPart instead of using Mod, so far there were no errors related to this approximation. x=18/23; p0=Numerator[x]; q0=Denominator[x]; S=0; While[p0>1&&q0<10^21, M=Catch[Do[If[FractionalPart[(q0+k)/p0]<10^(-35),Throw[k]],{k,1,q0}]]; p1=Numerator[M p0/(q0(q0+M))]; q1=Denominator[M p0/(q0(q0+M))]; S+=p0/(q0+M); Print[StandardForm[p0/(q0+M)]," ",StandardForm[M p0/(q0(q0+M))]," ",M]; p0=p1; q0=q1] If[p0==1,S+=p0/q0]; N[(x-S)/x,10]
It is the greedy algorithm. If $$k-1 < \frac{q}{p} \leqslant k,$$ then $(k-1)p < q \leqslant kp$, and the smallest $m$ such that $q+m \equiv 0 \pmod{p}$ is $kp - q$, thus $$\frac{p}{q+m} = \frac{p}{kp} = \frac{1}{k}$$ where $$k = \biggl\lceil\frac{q}{p}\biggr\rceil.$$
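The claimed equivalence can also be checked experimentally; a Python sketch with `fractions` (the function names are mine, and the linear search for $m$ is fast because $m=kp-q<p$):

```python
from fractions import Fraction

def greedy(fr):
    """Fibonacci–Sylvester greedy expansion: repeatedly take 1/ceil(q/p)."""
    terms = []
    while fr.numerator != 1:
        k = -(-fr.denominator // fr.numerator)  # ceil(q/p)
        terms.append(Fraction(1, k))
        fr -= Fraction(1, k)
    terms.append(fr)
    return terms

def smallest_m(fr):
    """The 'smallest m with p | q+m' algorithm from the question."""
    terms = []
    while fr.numerator != 1:
        p, q = fr.numerator, fr.denominator
        m = next(m for m in range(1, q + 1) if (q + m) % p == 0)
        terms.append(Fraction(p, q + m))  # reduces to a unit fraction
        fr -= Fraction(p, q + m)
    terms.append(fr)
    return terms

for fr in (Fraction(18, 23), Fraction(5, 121), Fraction(4, 17)):
    assert greedy(fr) == smallest_m(fr)
    print(fr, greedy(fr))  # e.g. 18/23 → 1/2 + 1/4 + 1/31 + 1/2852
```

Both procedures produce identical expansions, as the proof above predicts.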
Find a finite generating set for $Gl(n,\mathbb{Z})$ I need to find a finite generating set for $Gl(n,\mathbb{Z})$. I heard somewhere once that this group is generated by the elementary matrices - of course, if I'm going to prove that $GL(n,\mathbb{Z})$ has a finite generating set, I would need to prove that any matrix $M\in GL(n,\mathbb{Z})$ can be generated by only finitely many of them. At first, I didn't have a clue as to how to do this, so I did a bit of scouring the internet for any information that might be useful. There were a few proofs or hints at proofs, including here on MSE and also on MathOverflow, but they were either too advanced, didn't give enough details, assumed theory I can't assume at this point (for example about rings or principal ideal domains), or were extremely complicated (as in 4 pages with 4 lemmas that needed to be proven first - and this example didn't even prove exactly what the finite generating set of $GL(n,\mathbb{Z})$ is). This looks promising. In their notation, essentially, if $n$ is even, then $GL(n,\mathbb{Z})$ is generated by $s_{1}$ and $s_{3}$, and when $n$ is odd, $-s_{1}$ and $s_{3}$ generate $GL(n,\mathbb{Z})$, where $s_{1}=\begin{pmatrix} 0&0&0&\cdots &0&1\\ 1&0&0&\cdots & 0&0\\0&1&0&\cdots & 0 &0 \\ \vdots & \vdots & \vdots & & \vdots &\vdots \\ 0&0&0&\cdots & 0&0\\ 0&0&0&\cdots &1 & 0\end{pmatrix}$ and $s_{3}=\begin{pmatrix} 1&1&0&\cdots &0&0\\ 0&1&0&\cdots & 0&0\\0&0&1&\cdots & 0 &0 \\ \vdots & \vdots & \vdots & & \vdots &\vdots \\ 0&0&0&\cdots & 1&0\\ 0&0&0&\cdots &0& 1\end{pmatrix}$ What is a relatively simple way to prove this that does not invoke rings or ideals at all (only group theory is permissible), and does not make reference to papers or to $\operatorname{Hom}(G,C_{p})$ (whatever that is)?
Since the group operation in $GL(n,\mathbb{Z})$ is matrix multiplication, I'm guessing I'd have to show that any matrix $A$ can be generated by multiplying various combinations of $s_{1}$ and $s_{3}$ in the case when $n$ is even, and various combinations of $-s_{1}$ and $s_{3}$ in the case when $n$ is odd. But what do those combinations look like when we're dealing with matrix multiplication? Do they include scalar multiples, like integer linear combinations do when the operation is addition? And how do we know what order to put them in, since matrix multiplication is not commutative? Thank you.
The elementary matrices generate $GL(n, \mathbb{Z})$. Use the row-switching transformations to reduce to the case $n = 2$. Then $SL(2, \mathbb{Z})$ is a finite-index subgroup of $GL(2, \mathbb{Z})$, and it's an amalgamated free product of two finite groups (see Serre's Trees, for example).
If $A\cong B$ and $C\cong D$ how can I prove that $A/C\cong B/D$? Let $A,B$ be groups, $C\leq A$ and $D\leq B$. How can I prove that $A/C\cong B/D$? I tried as follows: Let $f:A\to B$ be an isomorphism and $g:C\to D$ an isomorphism. I define $$\tau : A\to B/D$$ by $\tau(a)=f(a)+D$. The surjectivity is clear. For the injectivity, $$\tau(a)=0\implies f(a)\in D$$ Does this imply that $a\in C$? Since $f|_C :C\to D$ need not be an isomorphism, I'm not sure. My second try would be to define an isomorphism $f:C\to D$ and extend it to an isomorphism $A\to B$, but is that possible? If yes, then I can apply my previous argument. If not, what can I do?
A finite counterexample: $A=B=\mathbb Z_4\times \mathbb Z_2$, and $C=\{(0,0),(0,1)\}$, $D=\{(0,0),(2,0)\}$. Then $A/C$ is cyclic of order 4, but $B/D$ is the Klein 4-group.
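This counterexample is small enough to verify mechanically; a Python sketch computing the order of each coset in the two quotients (the helper names are mine):

```python
from itertools import product

A = list(product(range(4), range(2)))  # the group Z4 x Z2
add = lambda u, v: ((u[0] + v[0]) % 4, (u[1] + v[1]) % 2)

def coset_orders(H):
    """Order of each coset gH in A/H, found by adding g until we land in H."""
    orders = set()
    for g in A:
        x, n = g, 1
        while x not in H:
            x = add(x, g)
            n += 1
        orders.add(n)
    return orders

C = [(0, 0), (0, 1)]
D = [(0, 0), (2, 0)]
print(coset_orders(C))  # {1, 2, 4}: A/C has an element of order 4, so it is cyclic
print(coset_orders(D))  # {1, 2}: every nontrivial coset has order 2, the Klein group
```

Since both quotients have order $4$, an element of order $4$ forces $A/C\cong\mathbb Z_4$, while all orders $\le 2$ forces $A/D\cong\mathbb Z_2\times\mathbb Z_2$.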
Number of $32$-character alphanumeric strings with certain conditions I'm seeking a solution to one of the most complicated math problems of my life. Here it is: First we need to figure out how many strings over the set [a-zA-Z0-9] (which is 26 small letters, 26 capital letters, and the digits 0 to 9) of length 32 characters are possible to construct. Then we need to subtract these 3 out of our result. * *Possible 32-character strings which have only small letters. [a-z] *Possible 32-character strings which have only big letters. [A-Z] *Possible 32-character strings which have only digits. [0-9] Let me know if you have any questions.
Let $U,L,D$ be the sizes of the sets of strings containing only uppercase letters (26 usable characters), lowercase letters (26) or digits (10) respectively. Then the number of admissible strings is $$62^{32}-U-L-D=62^{32}-2\cdot26^{32}-10^{32}$$ $$=2.272\dots\times10^{57}$$
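The exact count is a one-liner in any big-integer language; a Python sketch:

```python
# all alphanumeric strings, minus all-lowercase, all-uppercase, all-digit ones
total = 62 ** 32 - 2 * 26 ** 32 - 10 ** 32
print(total)
print(f"{total:.3e}")  # ≈ 2.272e+57
```

The three subtracted sets are pairwise disjoint, so no inclusion-exclusion correction is needed.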
Showing that the probability of the intersection of countably many events equals 1 How would I show that if $P(A_{i}) = 1$ for all $i \geq 1$, then $P\displaystyle\left(\bigcap_{i=1}^{\infty }A_{i}\right)=1$? I am rather stuck on how to prove this. I could rewrite the complement of $\bigcap_{i=1}^{\infty }A_{i}$ as a union of the complements of the $A_{i}$, and then try to show that its probability equals zero: if the probability of the union of the complements is zero, then the probability of the intersection is 1. Any suggestions on the first steps, or a way to approach this, would be great!
You're basically already there; now use countable sub-additivity and squeeze the value out: $$1\ge P\left(\bigcap_{i=1}^\infty A_i\right) = 1 - P\left(\bigcup_{i=1}^\infty A_i^c\right)\ge 1-\sum_{i=1}^\infty P(A_i^c)=1$$
Is the given function Lebesgue Integrable? Is this function $f:\Bbb R\to \Bbb R$ given by $f(x)=\lim f_n(x)$ where $$f_n(x)= \begin{cases} \dfrac{n}{n+1}&\text{;$x\in \Bbb Q^c$}\\0 &;x\in \Bbb Q\end{cases}$$Lebesgue integrable? My try: I got $$f(x)=\begin{cases} \dfrac{n}{n+1}& \text{;$x\in \Bbb Q^c$}\\0& ;x\in \Bbb Q\end{cases}$$ and hence it is measurable. But $\int_\Bbb R f=\int _{\Bbb Q^c} f=m(\Bbb Q^c)=\infty$. So it is not integrable. Is my answer correct?
The limit function $f$ is just $1$ (a.e.) (just calculate the limit pointwise) so it is trivially not integrable over $\mathbb{R}$. Without actually calculating that $f=1$ you could also argue that, in this case, $f=\sup_nf_n$ so, since the $f_n$'s are measurable, $f$ is measurable; and, for the integrability just notice that $$\int_\mathbb{R}f\,dm\geq\int_\mathbb{R}f_1\,dm=\frac{1}{1+1}m(\mathbb{R}).$$
Is this algorithm to solve inequalities correct? My book says: If P(x) and Q(x) are polynomials, then the inequalities regarding $\frac{P(x)}{Q(x)}$ can be solved by ALGORITHM: $(1)$ Plot the critical points on the number line. Note that $n$ critical points will divide the number line into $n+1$ regions. $(2)$ In the rightmost region, the expression will be positive, and in the other regions it will be alternately negative and positive. So, mark a positive sign in the rightmost region and then mark alternately negative and positive signs in the remaining regions. I have seen some examples following and contradicting the algorithm. Some of the examples are: $(1)$ Example which follows the algorithm: $$f(x)=\frac{4x^2+1}{x}>0$$ In the example I get critical points $-1/2$ and $1/2$, so our inequality is positive on $(-\infty,\frac{1}{2})\cup(\frac{1}{2},\infty)$ $(2)$ Example which contradicts: $$f(x)=(\frac{x}{2+x})^2\frac{1}{1+x}>0$$ Here I get critical points $-1$, $0$, but the algorithm doesn't give me the correct result. The actual solution is $(-1,0)\cup(0,\infty)$; here we neglected zero and then plotted the regions, but WHY? $(3)$ In another case, zero is a solution and we take zero into account when plotting regions: $$f(x)=x^4-2x^2>0$$ solution is: $(-1,0)\cup(1,\infty)$ Someone please explain the algorithm and where I am wrong in understanding my book.
Here is a method to determine on which regions a rational function $f(x)=\frac{P(x)}{Q(x)}$ is strictly positive. * *First you find the critical points of $P(x)$ and $Q(x)$ and the corresponding multiplicities. *Break the number line up by the critical points you found in $1$. *Look at the leading coefficients of $P(x)$ and $Q(x)$. If they share the same sign then $f(x)$ is positive in the rightmost region, otherwise it is negative. *Working from right to left, once you go over a critical point, if the multiplicity is odd, then the sign of $f(x)$ changes. If it is even, then the sign of $f(x)$ remains unchanged. Since the inequality is strict, the critical points themselves are never part of the solution set: zeros of $P(x)$ give $f(x)=0$, and zeros of $Q(x)$ leave $f(x)$ undefined. Note, if $P(x)$ and $Q(x)$ share a critical point, do as above but look at the difference in multiplicities to determine if it is odd or even.
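A quick numerical spot-check of the method on example $(2)$ from the question (the sample points and names are my choices):

```python
def f(x):
    return (x / (2 + x)) ** 2 / (1 + x)

# critical points: poles at -2 (even multiplicity) and -1 (odd),
# zero of the numerator at 0 (even multiplicity)
samples = [-5.0, -1.5, -0.5, 3.0]  # one point per region
signs = [f(x) > 0 for x in samples]
print(signs)  # [False, False, True, True]
```

The sign chart predicts: positive in the rightmost region, no change across the even point $0$, a flip across the odd point $-1$, and no change across the even point $-2$, giving the solution $(-1,0)\cup(0,\infty)$ once the excluded point $x=0$ (where $f=0$) is removed, exactly as in the question.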
Sum of random decreasing numbers between 0 and 1: does it converge? Let's define a sequence of numbers between $0$ and $1$. The first term, $r_1$, will be chosen uniformly at random from $(0, 1)$; we then iterate this process, choosing $r_2$ from $(0, r_1)$, and so on, so $r_3\in(0, r_2)$, $r_4\in(0, r_3)$... The set of all possible sequences generated this way contains the sequence of the reciprocals of all natural numbers, whose sum diverges; but it also contains all geometric sequences in which all terms are less than $1$, and they all have convergent sums. The question is: does $\sum_{n=1}^{\infty} r_n$ converge in general? (I think this is called almost sure convergence?) If so, what is the distribution of the limits of all convergent series from this family?
I just ran a quick simulation and got a mean sum of one (standard deviation of about $0.7$). Caveat: I'm not sure I coded it all right, especially since I didn't test convergence.
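Such a simulation is easy to reproduce (a Python sketch; the truncation threshold is my choice). As a heuristic cross-check: writing $S=r_1(1+S')$ with $S'$ an independent copy of $S$ gives $E[S]=\frac12(1+E[S])$, so $E[S]=1$, and $E[S^2]=\frac13E[(1+S)^2]$ gives $E[S^2]=\frac32$, hence $\operatorname{Var}(S)=\frac12$ and a standard deviation of $\sqrt{1/2}\approx0.707$, which is consistent with the reported $0.7$:

```python
import random

random.seed(1)

def sample_sum(eps=1e-12):
    """One realization of r1 + r2 + ..., truncated once the terms are tiny."""
    total, r = 0.0, random.random()
    while r > eps:
        total += r
        r = random.uniform(0.0, r)
    return total

N = 100_000
sums = [sample_sum() for _ in range(N)]
mean = sum(sums) / N
var = sum((s - mean) ** 2 for s in sums) / N
print(mean, var ** 0.5)  # ≈ 1.0 and ≈ 0.707
```

The terms decay roughly like $e^{-n}$, so truncating at $10^{-12}$ loses a negligible tail.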
Dilation, shrink a triangle by $30\%$ I would like to know how to shrink a triangle by $30\%$ using a dilation, without changing its center.
I presume you want to shrink the area by $30\%$. Assume the center is at $p$. Then you start by translating the triangle so that its center is at $0$, and then note that the area scales like the square of the scaling factor, so you have to scale the coordinates by $\sqrt{0.7}$, and then translate it back to its original position. Composing these maps gives: $$f(\vec{x})=\sqrt{0.7}\ (\vec{x}-\vec{p})+\vec{p}$$
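A quick check of the formula on a concrete triangle (a Python sketch; the triangle and the shoelace helper are my choices, with the centroid as the center):

```python
def shoelace_area(pts):
    # shoelace formula for the area of a simple polygon
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
px = sum(x for x, _ in tri) / 3  # centroid
py = sum(y for _, y in tri) / 3

s = 0.7 ** 0.5  # linear scale factor, so the area scales by s^2 = 0.7
new_tri = [(s * (x - px) + px, s * (y - py) + py) for x, y in tri]

print(shoelace_area(new_tri) / shoelace_area(tri))  # ≈ 0.7
```

The centroid of the new triangle is unchanged, and the area ratio is exactly $0.7$ up to floating-point error.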
Continuity and integration using Dominated Convergence theorem I have to use the Dominated Convergence Theorem to show that $\lim \limits_{n \to \infty}$ $\int_0^1f_n(x)dx=0$ where $f_n(x)=\frac{n\sqrt{x}}{1+n^2x^2}$. I did the following: $$\frac{n\sqrt{x}}{1+n^2x^2} <\frac{n\sqrt{x}}{n^2x^2} = \frac{x^{-\frac{3}{2}}}{n}\leq x^{-\frac{3}{2}} $$ But $$\int_0^1x^{-\frac{3}{2}}dx=\frac{-2}{\sqrt{x}}\biggr|_0^1$$ which doesn't seem right. Any help will be appreciated.
A worse bound, a slightly different approach: $$ \frac{n\sqrt x}{1+n^2x^2} \le \frac{n\sqrt x}{\sqrt{1+n^2x^2}} = n \sqrt{ \frac{x}{1+n^2x^2}} = \frac{1}{\sqrt{x}} \sqrt{\frac{1}{1+\frac{1}{n^2x^2}}} \le \frac{1}{\sqrt{x}} $$ where we used the facts that $x \ge \sqrt{x}$ if $x \ge 1$ (applied to $1+n^2x^2\ge1$) and $\sqrt{\frac{1}{1+x}} \le 1 $ on $[0,+\infty)$. Since $f_n(x)\to 0$ pointwise on $(0,1]$ and $x\mapsto \frac{1}{\sqrt{x}}$ is integrable on $[0,1]$, the dominated convergence theorem gives $\lim_{n\to\infty}\int_0^1 f_n(x)\,dx=0$.
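A numerical sanity check that the integrals do tend to $0$ (a Python sketch with a plain midpoint rule; by the substitution $u=nx$ one expects decay on the order of $1/\sqrt n$):

```python
import math

def integral(n, steps=100_000):
    """Midpoint rule for ∫_0^1 n√x / (1 + n²x²) dx."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h  # midpoints avoid the endpoint x = 0
        total += n * math.sqrt(x) / (1 + n * n * x * x)
    return total * h

for n in (10, 100, 1000):
    print(n, integral(n))  # decreasing toward 0
```

The values shrink monotonically, consistent with the dominated-convergence conclusion.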
$\int_1^\infty \sin(e^x(x-2))dx$ converges or diverges Is the following integral convergent or divergent? $$\int_1^\infty \sin(e^x(x-2))dx$$ I tried the substitution $t=e^x$ and got: $$\int_1^\infty \sin(e^x(x-2))dx=\int_e^\infty \frac{\sin(t(\ln\ t-2))}{t}dt$$ I thought of comparing it to $\int_1^\infty \frac{1}{x}dx$ and I am stuck.
In general, if $f(x)$ is an increasing function growing faster than $x$, say $\Omega(x^{1+\alpha})$ with $\alpha>0$, from some point on, then $$ \int_{1}^{+\infty}\sin(f(x))\,dx $$ is a convergent integral (in the improper Riemann sense): the substitution $x=f^{-1}(t)$ turns it into $\int \sin(t)\,(f^{-1})'(t)\,dt$, and Dirichlet's test applies, since $\sin(t)$ has a bounded antiderivative and (for $f$ as here, with $f'$ increasing to infinity) the factor $(f^{-1})'(t)$ decreases monotonically to $0$.
Is 0 an eigenvalue of the right shift operator? My friend and I are a little bit confused about this, the question we are working on asks us to find the eigenvalues of $R$, the right shift operator. The main point of contention is that $\sigma(R)$ is non-empty, and so should surely(?) contain an eigenvalue of $R$. Does the non-emptiness of $\sigma(R)$ only apply in finite dimensional Hilbert spaces? A detailed reply to clear up the confusion would be appreciated.
For a bounded linear operator on a complex Hilbert space, the spectrum is always non-empty. But it may lack eigenvalues. If $H=\ell^2(\mathbb N)$ and $R$ is the shift $$ R(a_1,a_2,\ldots)=(0,a_1,a_2,\ldots), $$ then $R$ has no eigenvalues. Its spectrum, though, is $\{\lambda:\ |\lambda|\leq1\}$. The easiest way to see it is to consider the left shift $$ L(a_1,a_2,\ldots)=(a_2,a_3,\ldots). $$ For any $\lambda$ with $|\lambda|<1$, one can check that $\lambda$ is an eigenvalue of $L$ with eigenvector $(\lambda,\lambda^2,\lambda^3,\ldots)$. As the spectrum is closed and $\|L\|\leq1$, one deduces that $$\sigma(L)=\{\lambda:\ |\lambda|\leq1\}.$$ Then $$ \sigma(R)=\{\bar\lambda:\ \lambda\in\sigma(L)\}=\{\lambda:\ |\lambda|\leq1\}. $$
Do adjoint functors "preserve" commutative diagrams? Let $(L,R)$ be a pair of adjoint functor. How to show that the commutativity of the left diagram induces the commutativity of the right one?
Let $(\eta,\epsilon):F \dashv G:\mathfrak{C} \to \mathfrak{D}$ be an adjoint pair of functors and assume you have the commuting diagram $$ \array{ A & \xrightarrow{f} & GX \\ h \downarrow & & \downarrow G(k) \\ B & \xrightarrow{g} & GY } $$ in $\mathfrak{C}$. Then applying the functor $F$ to the diagram gives the commuting diagram $$ \array{ FA & \xrightarrow{F(f)} & F(GX) \\ F(h) \downarrow & & \downarrow F(G(k)) \\ FB & \xrightarrow{F(g)} & F(GY) } $$ in $\mathfrak{D}$. Using the counit allows us to produce the commuting diagram $$ \array{ FA & \xrightarrow{F(f)} & F(GX) & \xrightarrow{\epsilon_{X}} & X \\ F(h) \downarrow & & \downarrow F(G(k)) & & \downarrow k\\ FB & \xrightarrow{F(g)} & F(GY) & \xrightarrow{\epsilon_Y} & Y } $$ which contracts to the commuting diagram $$ \array{ FA & \xrightarrow{\epsilon_X \circ F(f)} & X \\ F(h) \downarrow & & \downarrow k \\ FB & \xrightarrow{\epsilon_Y \circ F(g)} & Y } $$ in $\mathfrak{D}$. There is some work to check that the middle rectangle actually commutes, but that square is precisely the naturality square of the counit $\epsilon:FG\Rightarrow \mathrm{Id}$ at the morphism $k$, so it commutes by naturality.
Let $a,b\in \mathbb{N}$ where $\gcd(a,b)=1$. Describe the set $\mathcal{S}=\{ax+by\mid x,y\ge0 \text{ and } x,y\in \mathbb{Z}\}$. Let $a,b\in \mathbb{N}$ where $\gcd(a,b)=1$. Describe the set $$\mathcal{S}=\{ax+by \mid x,y\geq0 \hspace{0.25cm} \text{ and } \hspace{0.25cm} x,y\in \mathbb{Z}\}.$$ Since $a,b\in \mathbb{N}$ and $\gcd(a,b)=1$, there exist $x',y'\in\mathbb{Z}$ such that $$ax'+by'=1.$$ Let $\alpha\in\mathcal{S}$. Then $$\alpha=ax+by$$ where $a,b\in \mathbb{N}$ with $\gcd(a,b)=1$ and $x,y\in \mathbb{Z}$ with $x,y\geq 0$. If we multiply the first equation by $\alpha$ we get $$\alpha ax'+\alpha by'=\alpha.$$ But $\alpha=ax+by$. So, $$\alpha ax'+\alpha by'=\alpha=ax+by.$$ We can rewrite this equation as follows: $$a(\alpha x'-x)=b(y-\alpha y').$$ Then $a(\alpha x'-x)\mid b$ or $a(\alpha x'-x)\mid (y-\alpha y')$ or $a(\alpha x'-x)\mid b(y-\alpha y')$. From here I am unsure how to proceed. Any tips?
I'm not certain about your divisibility conditions; it seems like only the third will hold. I proceed slightly differently. Let's assume, without loss of generality, that $1<a<b$. Then we can construct an array of non-negative integers that is $a$ units wide and $b$ units long. This gives us a way to count all numbers from $1$ to $ab$. It is clear that all positive multiples of $a$ will be in $S$. Ask yourself this: what happens to multiples of $b$? For $1\leq k\leq a-1$ we have that $bk$ will land in some nonzero congruence class $\mod a$. In fact, each will be distinct (by coprimality). In addition, every element beneath $bk$ will be in $S$, since you just add multiples of $a$ to get to the next row. Once you get to $(a-1)b$, you have all congruence classes filled $\mod a$. If you subtract $a$ from this value ($ab-b-a$), you have the last element that is not in $S$. Every natural number greater than or equal to $ab-b-a+1$ will be in $S$! Let's see this with $a=3$ and $b=5$ \begin{align*} 1\hspace{1cm}2\hspace{1cm}s_1\\ 4\hspace{.95cm}s_2\hspace{.9cm}s_3\\ 7\hspace{.95cm}s_4\hspace{.9cm}s_5\\ s_6\hspace{.85cm}s_7\hspace{.9cm}s_8\\ s_9\hspace{.8cm}s_{10}\hspace{.7cm}s_{11} \end{align*} where the $s$ values are 3, 5, 6, 8, 9, 10, 11, 12, 13, 14, and 15. Once we get to $3\cdot5-3-5+1=8$, we hit all integers.
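The picture can be verified by listing the representable numbers directly; a Python sketch (the search bound is my choice):

```python
from math import gcd

def representable(a, b, limit):
    """All values ax + by with x, y >= 0 that do not exceed limit."""
    return {a * x + b * y
            for x in range(limit // a + 1)
            for y in range(limit // b + 1)
            if a * x + b * y <= limit}

a, b = 3, 5
assert gcd(a, b) == 1
limit = 3 * a * b
S = representable(a, b, limit)
missing = sorted(set(range(limit + 1)) - S)
print(missing)  # [1, 2, 4, 7]: the largest gap is ab - a - b = 7
```

For $a=3$, $b=5$, the only non-representable positive integers are $1,2,4,7$, and the largest is the Frobenius number $ab-a-b=7$, matching the argument above.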
*NOT* [quantifier]. Why is *NOT* [for every] equivalent to [there is]? Given a proposition such as, "For every real number $x \ge 2$, $x^2 + x - 6 \ge 0$", I am told that the negation, "NOT [For every real number $x \ge 2$, $x^2 + x - 6 \ge 0$]", would be "There is a real number $x \ge 2$ such that $x^2 + x - 6 < 0$". I am specifically confused with regards to why NOT [for every] is equivalent to [there is] rather than [for none]? It seems logical to me that the negation of everything (for all) should actually be equivalent to nothing? I would greatly appreciate it if someone could please take the time to clarify this concept.
As an addendum to Casper's answer: there are logics in which $\neg (\forall x) P(x)$ is not equivalent to $(\exists x) [\neg P(x)]$. For example (and you might like to look up "constructive mathematics" for more on this), take $(\exists x)$ to mean "there is $x$ and moreover we can in principle compute an explicit such $x$", rather than the usual "there is $x$". Let $P(x)$ (where $x$ is restricted to be an integer between $0$ and $9$) be the statement that the digit $x$ appears only finitely often in the decimal expansion of $\pi$. Then the statement $\neg (\forall x) P(x)$ is certainly true - if not, $\pi$ would have a finite decimal expansion. But $(\exists x) [\neg P(x)]$ is not currently known to be true, because we don't know of any digit which definitely appears infinitely often in $\pi$'s decimal expansion.
How to Disprove $A-(B-C)=A-(B\cup C)$? For $A,B,C$ sets. I want to show that it is not always the case that $A-(B-C)$ equals $A-(B\cup C)$. I also know the definition of set difference: $x\in A-B$ means $x$ is an element of $A$ but not of $B$. But disproving this is getting a bit difficult.
While the other answers already gave counterexamples, and while truth tables will always point you to sufficient counterexamples, the following is more about how to "look for" easy counterexamples. I want to show that it is not the case that $A-(B-C)$ is always equal to $A-(B\cup C)$. The difference between the two expressions is $\;B \setminus C\;$ vs. $\;B \cup C\;$. The former is a (smaller) subset of $B$ since $\;B \setminus C \subseteq B\;$, while the latter is a (larger) superset of $B$ since $B \cup C \supseteq B\,$. When does this difference get exacerbated the most? When $B$ is the "smallest" possible, so consider $B = \emptyset\,$. For $B=\emptyset$ obviously $\;B \setminus C = \emptyset\;$ and $B \cup C = C\,$, so $A \setminus (B \setminus C) = A$ and $A \setminus (B \cup C)=A \setminus C$. Again, look at the difference between the two sides; it's the $\setminus C$ on the RHS. When does that difference get exacerbated the most? When $C$ is the "largest" possible, and since we are only interested in the relation to $A$, that means $C=A$. So in the end, one natural counterexample is $B=\emptyset, C=A$ for which: $$A \setminus (B \setminus C) = A \setminus (\emptyset \setminus A) = A \setminus \emptyset = \;A \quad\ne\quad \emptyset\; = A \setminus A = A \setminus(\emptyset \cup A) = A \setminus (B \cup C) $$ (Note: the two sides do coincide when $A = \emptyset$, so take any nonempty $A$; since the question was about arbitrary sets, this is a sufficient counterexample.)
Prove that $\sum\limits_{n=1}^{\infty} \frac{n}{3^n} = \frac{3}{4}$ How can you derive that $$ \sum\limits_{n=1}^{\infty} \frac{n}{3^n} = \frac{3}{4} \, ?$$ I suspect some clever use of the geometric series will do, but I don't know how.
Let $$S= \sum_{n=1}^{\infty} \frac{n}{3^n}\tag1$$ $$\frac13S=\sum_{n=1}^{\infty} \frac{n}{3^{n+1}}=\sum_{n=1}^{\infty} \frac{n+1}{3^{n+1}}-\sum_{n=1}^{\infty}\frac1{3^{n+1}}\tag2$$ $(1)-(2)$, $$\begin{align}\frac23S&=\sum_{n=1}^{\infty} \frac{n}{3^n}-\sum_{n=1}^{\infty} \frac{n+1}{3^{n+1}}+\sum_{n=1}^{\infty}\frac1{3^{n+1}}\\ &=\sum_{n=1}^{\infty} \frac{n}{3^n}-\sum_{n=2}^{\infty} \frac{n}{3^{n}}+\sum_{n=1}^{\infty}\frac1{3^{n+1}}\\ &=\frac13+\sum_{n=1}^{\infty}\frac1{3^{n+1}}\\ &=\sum_{n=1}^{\infty}\frac1{3^{n}}=\frac12\end{align}$$ Thus $\frac23S=\frac12$, and so $$S=\frac34$$
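As a quick numerical sanity check (an editor's addition, not part of the derivation), the partial sums of the series do approach $3/4$; the cutoff at $n=60$ is arbitrary, since the tail is negligible by then:

```python
# partial sum of sum_{n>=1} n/3^n; the tail beyond n = 60 is negligible
s = sum(n / 3**n for n in range(1, 60))
print(s)  # ≈ 0.75
```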
Proof that convergent sequence in $\Bbb R$ is bounded. Convergent sequence in $\Bbb R$ is bounded. Proof: In the definition of a convergent sequence: $$(\forall \varepsilon>0), (\exists n_\varepsilon\in\Bbb N), (\forall n\in\Bbb N), ((n>n_\varepsilon)\Rightarrow(|a_n-a|<\varepsilon))$$ let $\varepsilon=1$, then there exists $n_\varepsilon\in\Bbb N$ such that $(n>n_\varepsilon)\Rightarrow(|a_n-a|<1)$. Now for $n>n_\varepsilon$ we have $|a_n|\leq|a_n-a|+|a|\leq 1+|a|$. Let $M=\max\{|a_1|,...,|a_{n_\varepsilon}|,1+|a|\}$. Then $\forall n\in\Bbb N, \ |a_n|\leq M$, i.e. the sequence is bounded. What's the idea behind this proof? I understand the first part, but then when they define $M$ I'm lost. I don't see how that is related to the first part of the proof.
The idea behind this proof is to separate the values of the sequence into 2 parts: the ones that are close to the limit (within distance 1 of it) and the ones that are "far" from the limit. By definition of convergent sequences, there are only finitely many of the latter, so the set of their absolute values is bounded by some real $M$. And the set of the others is bounded by $|l|+1$. So the whole set of absolute values is bounded by $\max(|l|+1,M)$.
what is the probability given some constraints This is an exercise question on probability: If each coded item in a catalog begins with 2 distinct letters followed by 3 distinct nonzero digits, find the probability of randomly selecting one of these coded items with the first letter a vowel and the last digit even. Here's how I attempted it. The sample space is: 26*25*9*8*7 (unique letters and non zero digits) The numerator is: 5(number of possible vowels)* 25(can be anything)* 4 (possible even digits other than zero) * 8 (rest of the digits)*7(rest of the digits). Is this approach ok?
Yes it is correct. Probability = $\frac{5 × 25 × 7 × 8 × 4}{26 × 25 × 9 × 8 × 7}$
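For what it's worth, the exact value can be confirmed with Python's `fractions` module (a sanity check added in editing, not part of the original answer):

```python
from fractions import Fraction

favourable = 5 * 25 * 4 * 8 * 7   # vowel first, any remaining letter, even nonzero last digit
total = 26 * 25 * 9 * 8 * 7
p = Fraction(favourable, total)
print(p, float(p))  # 10/117 ≈ 0.0855
```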
Let $x\in X$ and $S\subset X$ be countable and dense. Is there a sequence in $S$ converging to $x$? Let $(X,\tau)$ be a separable topological space and $S\subset X$ be a countable dense subset. $(1)$ For any $x\in X$ can we find a sequence of elements in $S$ that converges to $x$? $(2)$ For any non-isolated $x\in X$, can we find a sequence of elements in $X\setminus \{x\}$ that converges to $x$? If $(X,\tau)$ is $T_1$ then $(1)\Rightarrow (2)$. If $(X,\tau)$ is first countable then $(1)$ and $(2)$ hold. If $\tau$ is the cofinite topology then $(1)$ and $(2)$ hold. To prove $(1)$ it suffices to show the following: Let $\mathcal{F}$ be a family of subsets of $\mathbb{N}$ with the finite intersection property; then there exists a sequence $(a_n)_n$ in $\mathbb{N}$ such that, for every $F\in\mathcal{F}$, the sequence $(a_n)_n$ is eventually in $F$.
The statement you wish to prove is not true. A counterexample is given by the Stone-Čech compactification of $\mathbb N$, denoted $\beta \mathbb N$. Since it is a compactification of $\mathbb N$, it follows that $\mathbb N$ itself is a (countable) dense subset. One property of this space is that all convergent sequences are eventually constant, meaning that if $( x_n )_n$ is a sequence in $\beta \mathbb N$ which converges to $x \in \beta \mathbb N$, then there is an $N$ such that $x_n = x$ for all $n \geq N$. In particular, no point in the remainder $\beta \mathbb N \setminus \mathbb N$ is the limit of a sequence in $\mathbb N$. More details about these facts can be found on Dan Ma's Topology Blog: Stone-Cech Compactification of the Integers – Basic Facts
Let $x$ be a real number. Prove that if $x^2\lt x$, then $x\lt1$ Let $x$ be a real number. Prove that if $x^2\lt x$, then $x\lt1$ I do not know how to proceed. I assume its proof is by contrapositive?
Proof by contrapositive is the easiest: let $x \ge 1$; multiplying both sides of $x \ge 1$ by $x > 0$ gives $x^2 = x \cdot x \ge x$, so $x^2 \lt x$ fails. Q.E.D.
Help in evaluating conditional probabilities. I need some help in evaluating some probabilities. My probability course is slightly too theoretical and abstract for me- we have been shown identities and theorems relating these probabilities but never were we shown an example or given any insight into how to actually evaluate them, perhaps I lack the assumed intuition but going purely off the definitions given to me I have no way that I'm aware of of actually doing the following (very simple) question. Ideally some insight rather than a full solution to the below would be great- perhaps an outline or the first few lines or even a link to a similar question and a solution to that that I can apply here. But I'm aware that's a lot to ask so anything at all is appreciated. Parliament contains a proportion $p$ of Labour members, who are incapable of changing their minds about anything, and a proportion $1−p$ of Conservative members who change their minds completely at random (with probability $r$) between successive votes on the same issue. A randomly chosen member is noticed to have voted twice in succession in the same way. What is the probability that this member will vote in the same way next time? So presumably we start with (say they have voted for $A$ so far) $\mathbb P($will vote $A$ again$)=\mathbb P($will vote $A\space|\space$voted for $A$ twice$)=\mathbb P($will vote $A\space\cap$ voted for $A$ twice)$/\mathbb P($voted for $A$ twice$)$ but I don't know how to now evaluate these probabilities. Thank you
Well, this person is either one of those that are incapable of changing their minds, or it is one of the others that randomly change their minds.
Vector subspace equality proof/disproof. Given $R,S,T$ are subspaces of vector space $V$, and $R+S=R+T$, does it follow $S=T$? Please don't give a full proof, but some general help would be much appreciated. I get the basic idea that to show $S=T$ would be to show them to be subsets of one another. Not sure how to do this in a concrete way though.
Another kind of example in infinite dimension: Take $V=\mathbb R[X]$ and * *$R=\mathbb R[X]_{\leq 6}$ *$S=\mathbb R[X]_{\leq 3}$ *$T=\mathbb R[X]_{\leq 2}$. Then $R+S=R+T$ but $S\ne T$.
Prove that three medians divide a triangle into 6 triangles with equal surface area. I have attempted to solve this task: I have drawn the medians and described the lengths of the segments. I also noticed that triangles ADC and BCD share the same height, thus $S_{total}= xh$, whereas the area of each of the triangles ACD and BCD is $S= xh/2$. And so I have already made it so far as to get the halves. But I haven't got the faintest idea how to get to the sixths. I would be most grateful if you gave your answer in simple terms. Geometry is my Achilles' heel.
Apply an affine transformation to produce an equilateral triangle. Such a transformation preserves relative areas, and all the little triangles are now the same.
Prove that $\int _0^1x^a\left(1-x\right)^bdx$ = $\int _0^1x^b\left(1-x\right)^adx$, where $a,b\in \mathbb{R}$ Prove that $$\int _0^1x^a\left(1-x\right)^bdx = \int _0^1x^b\left(1-x\right)^adx$$ How can I even get started on this? I evaluate the integral with parts, but it just gets more and more tedious since I'm working with these constants here.
Substitute $u=(1-x)$. We then have $du=-dx$ and when $x=0$, we have $u=1$, $x=1$ gives $u=0$. Thus $$\int_0^1x^a(1-x)^bdx=-\int_1^0(1-u)^au^bdu=\int_0^1(1-u)^au^bdu$$ No integration by parts or anything necessary, just a straight substitution.
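A quick numerical check of the symmetry, with arbitrarily chosen exponents (an editorial sketch using a composite Simpson rule, not part of the proof):

```python
def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

p, q = 2.5, 1.5   # arbitrary real exponents playing the roles of a and b
I1 = simpson(lambda x: x**p * (1 - x)**q, 0.0, 1.0)
I2 = simpson(lambda x: x**q * (1 - x)**p, 0.0, 1.0)
print(I1, I2)  # the two estimates agree
```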
Do integrable functions vanish at infinity? If $f$ is a real-valued function that is integrable over $\mathbb{R}$, does it imply that $$f(x) \to 0 \text{ as } |x| \to \infty? $$ When I consider, for simplicity, positive function $f$ which is integrable, it seems to me that the finiteness of the "the area under the curve" over the whole line implies that $f$ must decay eventually. But is it true for general integrable functions?
There are already good answers; I only wanted to make it more visual. Observe that \begin{align} -\infty &< \sum_{k=0}^{\infty} k\ \cdot\ \ \ 2^{-k}\ \ =\hspace{10pt}2 < \infty \\ -\infty &< \sum_{k=0}^{\infty} k\cdot(-2)^{-k} =-\frac{2}{9} < \infty \end{align} (it's easy enough to do by hand, but if you want, here and here are links to WolframAlpha). Thus, we can use: $$ f(x) = \sum_{k = 0}^{\infty}k\cdot(-1)^k \cdot \max(0,1-2^k\cdot|x-k|) $$ Below are diagrams for $|f|$ and $f$: I hope this helps $\ddot\smile$
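The two sums quoted at the start can be checked numerically (a sanity check only; the truncation at $k=60$ is an arbitrary choice, the tails being negligible by then):

```python
s1 = sum(k * 2.0**-k for k in range(60))
s2 = sum(k * (-2.0)**-k for k in range(60))
print(s1, s2)  # ≈ 2.0 and ≈ -2/9
```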
What are the prerequisites for regression analysis? I want to be able to read this book: Data Analysis using Multilevel and Hierarchal models? I have read "Think Stats" published by O'Reilly and have taken some higher math classes in algebra and analysis. Not a lot of probability and statistics experience though. What books would I have to read as prerequisites? Or should I just go ahead and buy it?
You can check the preface, which you can see by checking Amazon's "look inside" feature. Here's what it says: "The prerequisite is statistics up to and including an introduction to multiple regression. Advanced mathematics is not assumed - it is important to understand the linear model in regression, but it is not necessary to follow the matrix algebra in the derivation of least squares computations. It is useful to be familiar with exponents and logarithms, especially when working with generalized linear models."
How to solve $\nabla^2\phi(\textbf{r})=m^2\phi(\textbf{r})$? How can I solve an equation of the form $\nabla^2\phi(\textbf{r})=m^2\phi(\textbf{r})$ where $m^2$ is a real positive constant and $\textbf{r}=(x,y,z)$ is a point 3-D space. I want to write the solution at least formally and Fourier transform doesn't help. The Fourier transform $$\phi(\textbf{r})=\int\frac{d^3\textbf{k}}{(2\pi)^{3/2}}e^{i\textbf{k}\cdot\textbf{r}}\tilde{\phi}(\textbf{k})\tag{1}$$ gives $$(\textbf{k}^2+m^2)\tilde{\phi}(\textbf{k})=0\tag{2}.$$I want to find a nonzero solution $\phi(\textbf{r})$ from (1) with the constraint $k^2+m^2\neq 0$. I have no idea how to proceed next.
The only physical solution here is $\phi \equiv 0$ as your equation says that $\tilde{\phi}(\textbf{k})=0$ since $k^2+m^2\not = 0$. Note that this derivation assumes that the Fourier transform of $\phi$ exists (which in most physical applications is a very reasonable assumption). Another way to derive this: if we have a general region $V$ and impose the boundary condition $\phi = 0$ on $\partial V$ then by multiplying the PDE by $\phi$ and integrating by parts we get $$0 = \int_V m^2 \phi^2 - \phi\nabla^2\phi\,{\rm d}{\bf x} = \int_V (\nabla \phi)^2 + m^2 \phi^2\,{\rm d}{\bf x}$$ and both terms in the last integrand are nonnegative, which forces $\phi \equiv 0$ in $V$. In your case $V = \mathbb{R}^3$ and $\partial V$ is "at infinity" and I'm assuming here $\phi$ decays "fast enough" to $0$ as ${\bf r}\to \infty$ for the integral above to exist. The integral on the right hand side above is what we usually call the "energy" of the field so if any other solution exists then it must correspond to infinite "energy". There are infinitely many solutions of this kind: $\phi({\bf r}) = c\frac{\sinh(mr)}{r}$ where $r = \|{\bf r}\|$ and $\phi({\bf r}) = Ae^{ax + by + cz}$ if $a^2+b^2+c^2 = m^2$ are two simple examples. What breaks down in our original derivation above for these solutions is that the Fourier transform simply does not exist.
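One can check numerically that $\phi(\mathbf r)=\sinh(mr)/r$ satisfies the equation, using the radial form $\nabla^2\phi = \phi'' + \frac{2}{r}\phi'$; the values of $m$, $r$ and the finite-difference step $h$ below are arbitrary choices for the check:

```python
from math import sinh

m, r, h = 2.0, 1.5, 1e-4   # arbitrary test values
phi = lambda s: sinh(m * s) / s

# radial Laplacian phi'' + (2/r) phi' via central differences
d2 = (phi(r + h) - 2 * phi(r) + phi(r - h)) / h**2
d1 = (phi(r + h) - phi(r - h)) / (2 * h)
print(d2 + (2 / r) * d1, m**2 * phi(r))  # the two values agree
```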
Semisimple Frobenius Algebras I'm reading this paper and am stuck on the proof of theorem 3.4 on page 7. In particular, how does the author justify the statement below If some component A′ of A is not a field, then it contains nontrivial nilpotents
Context: $A$ is assumed to be a commutative Frobenius algebra over some field $k$, and $A^{\prime}$ is a factor in the decomposition of $A$ as a product of local $k$-algebras. The author's claim then follows since (as a finite-dimensional $k$-algebra) $A^{\prime}$ is Artinian local, hence its maximal ideal is nilpotent. Therefore $A^{\prime}$ is a field if and only if its maximal ideal is the zero ideal, if and only if it has no nontrivial nilpotents.
Minimal distance to a cube in 2D and 3D from a point lying outside This is kind of a geometrical question. For my program I want to compute the minimal distance $r$ from a given point to the cube. Here is a drawing which shows what I mean: I have two vectors $\vec{p}$ and $\vec{q}$ which indicate the position of my two points. Point $p$ can be anywhere outside the cube. Point $q$ is exactly in the middle of the cube. The distance from point $q$ to the cube's surface is always $d$. I can easily compute $R$ which is the distance from point $q$ to point $p$. But what I need is the minimal distance $r$ from point $p$ to the cube. I am sure that I have to distinguish several cases depending on where point $p$ is located. I think there are three cases: 1) The minimal distance $r$ from point $p$ is to the edge of the cube (as drawn in the picture) 2) The minimal distance $r$ from point $p$ is to the corner of the cube 3) The minimal distance $r$ from point $p$ is to the surface of the cube After hours of trying to find a nice solution I hope someone can give me a hint.
For an axis aligned cube, there are nice tricks. Consider, for example, the axis aligned cube with corners $(\pm 1,\pm 1,\pm 1)$ (after a scaling and a shift, everything reduces to this case). For a point $(x,y,z)$: * *If $|x|\leq 1$, $|y|\leq 1$ and $|z|>1$, then the distance is $|z|-1$ (the face is closest). *If $|x|\leq 1$, $|y|>1$, and $|z|>1$, then the distance is $\sqrt{(|y|-1)^2+(|z|-1)^2}$ (the edge is closest). *If $|x|>1$, $|y|>1$, and $|z|>1$, then the distance is $\sqrt{(|x|-1)^2+(|y|-1)^2+(|z|-1)^2}$ (the vertex is closest). All other cases are similar. To visualize what is going on, draw a square, but extend the edges into (infinite) lines. This breaks up the space outside the box into $8$ regions, all points in each region are closest to the same edge or point. Now, doing the same thing in three dimensions results in $26$ regions and you need to figure out which region you're in. Pseudocode: if |x|<=1 { if |y|<=1 d=|z|-1 else { if |z|<=1 d=|y|-1 else d=sqrt((|y|-1)^2+(|z|-1)^2) } } else { if |y|<=1 { if |z|<=1 d=|x|-1 else d=sqrt((|x|-1)^2+(|z|-1)^2) } else { if |z|<=1 d=sqrt((|x|-1)^2+(|y|-1)^2) else d=sqrt((|x|-1)^2+(|y|-1)^2+(|z|-1)^2) } } Also as @YvesDaoust mentions in the comments, this can be rewritten as $$ \sqrt{\max\{0,|x|-1\}^2+\max\{0,|y|-1\}^2+\max\{0,|z|-1\}^2} $$ although, after unpacking the maximums, you get, essentially, the series of inequalities above.
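Here is the $\max$ formula from the end of the answer as runnable Python; for a cube centred at $q$ with half-width $d$, apply the function to the components of $(p-q)/d$ and multiply the result by $d$:

```python
from math import sqrt

def dist_to_unit_cube(x, y, z):
    # distance from (x, y, z) to the axis-aligned cube [-1, 1]^3
    dx = max(0.0, abs(x) - 1)
    dy = max(0.0, abs(y) - 1)
    dz = max(0.0, abs(z) - 1)
    return sqrt(dx * dx + dy * dy + dz * dz)

print(dist_to_unit_cube(0.5, 0.2, 3.0))  # face closest: 2.0
print(dist_to_unit_cube(0.5, 2.0, 3.0))  # edge closest: sqrt(5)
print(dist_to_unit_cube(2.0, 2.0, 2.0))  # corner closest: sqrt(3)
```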
A closed form for $\sum _{j=0}^{\infty } -\frac{\zeta (-j)}{\Gamma (j)}$ Is there a closed form for $$\sum _{j=0}^{\infty } -\frac{\zeta (-j)}{\Gamma (j)}$$ where $\zeta (-j)$ is the Riemann zeta function and $\Gamma (j)$ the gamma function? I tried everything, but I still cannot solve it. Any ideas?
Reflection formula for $\zeta(s)$ transforms the sum into $\sum_{n=1}^{\infty}\left(2\pi i\right)^{-2n}\left(2-4n\right)\zeta\left(2n\right)$. The latter can be computed by differentiating the well-known generating function $\sum_{n=0}^{\infty}\zeta\left(2n\right)z^{2n}=-\frac{\pi z\cot \pi z}{2}$, with the result $$1-\frac{1}{2\cosh1-2}.$$
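The closed form can be sanity-checked numerically using $\zeta(-j) = -B_{j+1}/(j+1)$ together with the standard recurrence for the Bernoulli numbers; truncating at $j=30$ is more than enough, since the terms decay roughly like $(2\pi)^{-j}$ (this snippet is an editorial check, not part of the answer):

```python
from fractions import Fraction
from math import comb, cosh, factorial

def bernoulli(m):
    # B_0 .. B_m from sum_{j=0}^{n} C(n+1, j) B_j = 0 (convention B_1 = -1/2)
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1)
    return B

B = bernoulli(31)

def zeta_neg(j):
    # zeta(-j) = -B_{j+1}/(j+1) for j >= 1
    return -B[j + 1] / (j + 1)

# the j = 0 term vanishes since 1/Gamma(0) = 0, and Gamma(j) = (j-1)! for j >= 1
s = sum(-float(zeta_neg(j)) / factorial(j - 1) for j in range(1, 31))
print(s, 1 - 1 / (2 * cosh(1) - 2))  # both ≈ 0.07933
```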
Solving limit without L'Hopital rule Is there a way to solve such limit without using L'Hopital rule? $$\lim_{h \to 0} \frac{2\sin(\frac{1}{2}\ln(1+\frac{h}{x}))}{h} $$ Result should be $\frac{1}{x}$. Edit: Thank you for your help, everything got really simple knowing, that $\frac{\ln(1+x)}{x} \to 1$ when $x \to 0$.
This quotient is a rate of change w.r.t. h: $$\frac{2\sin\Bigl(\frac{1}{2}\ln\bigl(1+\frac{h}{x}\bigr)\Bigr)}{h}=\frac{2\sin\Bigl(\frac{1}{2}\ln\bigl(1+\frac{h}{x}\bigr)\Bigr)-2\sin\Bigl(\frac{1}{2}\ln\bigl(1+\frac{0}{x}\bigr)\Bigr)}{h-0},$$ so its limit is the derivative at $h=0$: \begin{align}\biggl(2\sin\Bigl(\frac{1}{2}\ln\Bigl(1+\frac{h}{x}\Bigr)\Bigr)\biggr)'_{h=0}&= \left[2\cos\Bigl(\frac{1}{2}\ln\Bigl(1+\frac{h}{x}\Bigr)\Bigr)\frac1{2x\Bigl(1+\dfrac{h}{x}\Bigr)}\right]_{h=0}\\ &=\left[\frac{\cos\Bigl(\frac{1}{2}\ln\Bigl(1+\frac{h}{x}\Bigr)\Bigr)}{x+h}\right]_{h=0}=\frac1x. \end{align}
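Numerically, the difference quotient does approach $1/x$ as $h\to 0$ (here with the arbitrary choice $x=3$):

```python
from math import sin, log

def quotient(x, h):
    return 2 * sin(0.5 * log(1 + h / x)) / h

x = 3.0
for h in (1e-2, 1e-4, 1e-6):
    print(h, quotient(x, h))  # tends to 1/x ≈ 0.3333
```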
List all subgroups of $\Bbb Z_{12}$ List all subgroups of $\Bbb Z_{12}$ $\Bbb Z_{12}$ is cyclic so all its subgroups are also cyclic $$\begin{aligned} \langle 1\rangle &= \Bbb Z_{12} \\ \langle 2\rangle &= \{ 0,2,4,6,8,10\} \\ \langle 3\rangle &= \{ 0,3,6,9\} \\ \langle 4\rangle &= \{ 0,4,8 \} \\ \langle 6\rangle &= \{ 0,6\} \end{aligned}$$ Are there more?
In general you have: In the cyclic group of order $n$, there is exactly one subgroup for each divisor of $n$. Since the divisors of $12$ are $1,2,3,4,6,12$, you have found all subgroups except the subgroup $\langle 12 \rangle$ corresponding to $12$, which is the trivial subgroup.
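Since every subgroup of a cyclic group is cyclic, one can confirm the count by brute force, enumerating the subgroups $\langle g\rangle$ for all $g$ (an editorial check):

```python
n = 12
# the cyclic subgroup generated by g is the set of multiples of g mod n
subgroups = {frozenset(g * k % n for g_k in [g] for k in range(n)) for g in range(n)}
print(len(subgroups))  # 6, one subgroup per divisor of 12
for H in sorted(subgroups, key=len):
    print(sorted(H))
```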
Showing matrices in $SU(2)$ are of form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ Matrices $A$ in the special unitary group $SU(2)$ have determinant $\operatorname{det}(A) = 1$ and satisfy $AA^\dagger = I$. I want to show that $A$ is of the form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ with complex numbers $a,b$ such that $|a|^2+|b|^2 = 1$. To this end, we put $A:= \begin{pmatrix} r & s \\ t & u\end{pmatrix}$ and impose the two properties. This yields \begin{align}\operatorname{det}(A) &= ru-st \\ &= 1 \ ,\end{align} and \begin{align} AA^\dagger &= \begin{pmatrix} r & s \\ t & u\end{pmatrix} \begin{pmatrix} r^* & t^* \\ s^* & u^* \end{pmatrix} \\&= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \ .\\ \end{align} The latter gives rise to \begin{align} |r|^2+|s|^2 &= 1 \\ &= |t|^2+|u|^2 \ , \end{align} and \begin{align} tr^*+us^* &= 0 \\ &= rt^*+su^* \ . \end{align} At this point, I don't know how to proceed. Any hints would be appreciated. @Omnomnomnom's remark \begin{align} A A^\dagger &= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\ &= \begin{pmatrix} |r|^2+|t|^2 & sr^* +ut^* \\ rs^*+tu^* & |s|^2 + |u|^2\end{pmatrix} = A^\dagger A \ , \end{align} gives rise to $$ |t|^2 = |s|^2 \\ |r|^2 = |u|^2 $$ and $$ AA^\dagger :\begin{pmatrix} rt^* +su^* = sr^* +ut^* \\ tr^*+us^* = rs^*+tu^* \end{pmatrix}: A^\dagger A $$ At this point, I'm looking in to find a relation between $t,s$ and $r,u$ respectively.
We have $tr^\ast=-us^\ast$ so $\left| r\right|^2 \left| t\right|^2 = \left| s\right|^2 \left| u\right|^2$ and $\left| r\right|^2 -\left| r\right|^2\left| u\right|^2 = \left| s\right|^2 \left| u\right|^2$ so $\left| r\right|^2 =\left| u\right|^2$. Hence $r,\,u$ have the same modulus, as do $s,\,t$. If $tu\ne 0$ define $k:=\dfrac{r^\ast}{u}=-\dfrac{s^\ast}{t}$ so $u=\dfrac{r^\ast}{k},\,1=\dfrac{r^\ast r+s^\ast s}{k}$ and $k=1$. Hence $u=r^\ast$ and similarly $s^\ast=-t$. If $u=0$ $st=-1$ with $\left| s\right|=\left| t\right|=1$ so $s^\ast=-t$, and $\left| r\right|=\left| u\right|=0$ so $u=r^\ast$. If $t=0$ then $ru=1$ so $u=r^{-1}=r^\ast$ and $s^\ast=0=-t$ because $\left| s\right|=\left| t\right|$.
logarithmic equation $\log_2(3^x-8)=2-x$ I've been unable to solve the following equation: $$\log_2(3^x-8)=2-x$$ I can arrive at $$3^x-8=2^{2-x}$$ but I'm clueless afterwards. I know that the answer is $x=2$ but cannot arrive to that analytically. Thank you for any hint.
$\log_2(3^x-8)=2-x \implies 3^x-8=2^{2-x}$. Note that the equation only makes sense when $3^x\gt 8$, i.e. $x\gt\log_3 8$. On this domain the left-hand side $3^x-8$ is strictly increasing, while the right-hand side $2^{2-x}$ is strictly decreasing, so the two sides can agree for at most one value of $x$. Since $3^2-8=1=2^{2-2}$, $x=2$ is a solution, and by monotonicity it is the only one.
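A numerical cross-check that the root is $x=2$: bisection on $g(x)=\log_2(3^x-8)-(2-x)$, which is strictly increasing on its domain $x>\log_3 8$ (the starting bracket below is an arbitrary choice with $g$ negative at the left end and positive at the right):

```python
from math import log2

def g(x):
    return log2(3**x - 8) - (2 - x)

lo, hi = 1.9, 3.0   # g(1.9) < 0 < g(3.0)
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)  # ≈ 2.0
```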
How to calculate the nth term of sequence that increases by n? I have the following recursive formula for a sequence: \begin{cases}V_{1} = 1\\V_{n} = V_{n-1} + n & n > 1\end{cases} This sequence increases by $n$ at the $n$th step. Now I need to find if $3003$ is a value of this sequence. I managed to solve the problem by finding an explicit formula for the sequence and solving in terms of $n$. Like this: \begin{align} 3003 = \dfrac {\left( n-1\right) n} {2}+n \end{align} which led to \begin{align} n = 77 &\vee n = -78 \end{align} So since there are no negative indices in a sequence, I concluded that when $n=77$, $3003$ is the value of the sequence. What I would like to know is if there is some other simpler/direct way to solve this. I realized that the value of the $n$th term is the sum of all integers from $1$ to $n$ and came up with that explicit formula, but it isn't obvious.
Is it obvious now? $$\begin{align}V_n&=n+V_{n-1}\\&=n+(n-1)+V_{n-2}\\&=n+(n-1)+(n-2)+V_{n-3}\\&=\ \vdots\\&=V_1+2+3+\dots+n\\&=1+2+3+\dots+n\end{align}$$ To avoid the explicit formula for $V_n$, one could use calculus, which quickly shows, using integrals, that $$\frac{n^2}2<V_n<\frac{(n+1)^2}2$$ Once you've done that and figured out that $V_n=\frac{n(n+1)}2$, it follows that we want to solve $$3003=\frac{n(n+1)}2$$ which may be 'solved' by noticing that $$\frac{n^2}2<\frac{n(n+1)}2<\frac{(n+1)^2}2$$ which gives $$76.498=\sqrt{6006}-1<n<\sqrt{6006}=77.498$$ If $n$ is to be a whole number, then $$n=77$$ without ever having to solve hard quadratics.
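The "is it triangular?" test can also be done directly with an integer square root, avoiding the quadratic formula entirely (a small helper sketch; the function name is my own choice):

```python
from math import isqrt

def triangular_index(m):
    # if m = n(n+1)/2 for a positive integer n, return n; otherwise None
    n = (isqrt(8 * m + 1) - 1) // 2
    return n if n * (n + 1) // 2 == m else None

print(triangular_index(3003))  # 77
print(triangular_index(3004))  # None
```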
For what range of $\beta$ is random walk heavy-tailed? I have $\beta > 0$ and $S_0 = 0$, and $S_n = \varepsilon_1 + \cdots + \varepsilon_n, n \geq 1$, a random walk with i.i.d increments $\lbrace \varepsilon_n \rbrace$ having a common distribution $$ P(\varepsilon_1 = -1) = 1 - C_{\beta} \text{ and } P (\varepsilon_1 > t) = C_{\beta}e^{-t^{\beta}}, t \geq 0,$$ where $C_{\beta} \in (0,1)$ such that $\mathbb{E}\varepsilon_1 = -1/2$. I want to find the range of values of $\beta$ for which this random walk is heavy-tailed. So far I have tried using the fact that a heavy-tailed distribution has infinite moment-generating function, as follows: \begin{align*} \varphi(t) = \mathbb{E}e^{t \varepsilon} = e^{-t}(1-C_\beta) + \int_0^\infty x (1-C_\beta e^{-t^\beta}) \, dx \end{align*} but that route doesn't seem to lead anywhere, given that the integral itself is always infinite. I have the same problem when trying to determine the value of $C_\beta$ by using fact that the expectation of $\varepsilon_1$ is $-1/2$. Can anybody see what I'm doing wrong? Or have any ideas for me to try?
I used a different definition of heavy-tailedness: a distribution is heavy-tailed when $\lim_{t \to \infty} e^{\lambda t} \overline{F}(t) = \infty$ for all $\lambda >0$. Here $\overline{F}(t) = C_{\beta} e^{-t^{\beta}}$, so $e^{\lambda t}\overline{F}(t) = C_{\beta}e^{\lambda t - t^{\beta}}$, and the condition is that the exponent $\lambda t - t^{\beta}$ tends to $\infty$ for every $\lambda > 0$. If $\beta < 1$ then $t^{\beta} = o(t)$, so $\lambda t - t^{\beta} \to \infty$ for every $\lambda>0$. If $\beta \ge 1$ then taking, say, $\lambda = \tfrac12$ gives $\lambda t - t^{\beta} \to -\infty$, so the condition fails. Therefore the distribution is heavy-tailed if and only if $\beta \in (0,1)$.
Prove $Re (z) = \frac{z + z^* }{2}$ and $Im (z) = \frac{z − z^*}{2i}$ In texts on complex numbers I often see an exercise that asks to prove the following: $$Re (z) = \frac{z + z^*}{2}$$ $$Im (z) = \frac{z − z^*}{2i}$$ where $z = x + iy$ and $z^* = x - iy$ I understand the meaning of complex numbers, but can't seem to find a path to proving these two identities. Any insight would be appreciated.
Solve the following equations in $x,y$: $z = x + iy \cdots(1)$ and $z^* = x - iy \cdots (2)$ Adding $1$ and $2$ we get $z + z^* = 2x \implies x = \frac{z + z^*}{2}$ i.e. $Re (z) = (z + z^* )/2 $ Subtracting $1$ and $2$ we get $z - z^* = 2iy \implies y = \frac{z - z^*}{2i}$ i.e. $Im (z) = (z − z^*)/2i$
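In Python the two identities can be checked directly on a sample value, since complex conjugation is `z.conjugate()` (an editorial illustration):

```python
z = 3 - 4j
re = (z + z.conjugate()) / 2
im = (z - z.conjugate()) / 2j
print(re, im)  # (3+0j) (-4+0j), matching z.real and z.imag
```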
If domain $R$ is a free $\mathbb{Z}$-module, and $I$ is an integral ideal of $R$, have $R$ and $I$ the same rank as $\mathbb{Z}$-modules? If an integral domain $R$ is a free $\mathbb{Z}$-module, and $I$ is an ideal of $R$, do $R$ and $I$ have the same rank as $\mathbb{Z}$-modules? Assume $I$ is non-zero.
I believe so. Let $\{b_1,\cdots,b_n\}$ be a $\Bbb Z$-basis for $R$, and let $w\in I$ be a nonzero element. Then $\{wb_i\}$ are still $\Bbb Z$-linearly independent (since $R$ is a domain), so the rank of $I$ is at least $n$. And it is at most $n$ as well, since $I$ is a submodule of the free $\Bbb Z$-module $R$ of rank $n$, and a submodule of a free module of rank $n$ over a PID has rank at most $n$.
Continuous Local Martingale and Quadratic Variation $X$ is a continuous local martingale with $X_0=0$. I'm trying to show that $\lim_{t\to \infty}X_t$ is finite then $[X]_\infty<\infty$, where $[X]$ denotes quadratic variation. I want to do this by showing the containments $$\lbrace \lim X_t \text{is finite}\rbrace\subset\cup_{i\ge 1}\lbrace \tau_i=\infty\rbrace\subset\lbrace [X]_\infty<\infty \rbrace,$$ where $\tau_i=\inf \lbrace t\ge 0: |X_t|=i\rbrace$ but I can't quite get this to work: If the limit is finite then $[X]_{\tau_n}$ is bounded. That's all I have for this direction. The rest of my ideas are only helpful in proving the reverse inclusions. How can I show the containments?
For the first inclusion note that if $\lim_{t \to \infty} X_t(\omega)$ exists, then it follows from the continuity of the sample paths that $t \mapsto X_t(\omega)$ is bounded. This, in turn, implies $\tau_i(\omega)=\infty$ for $i$ sufficiently large. To prove the second inclusion we use that, by the optional stopping theorem, $$X_{t \wedge \tau_i}^2 - [X]_{t \wedge \tau_i}$$ is a local martingale. If we denote by $(\sigma_k)_k$ a localizing sequence of stopping times, we get $$\mathbb{E}(X_{t \wedge \tau_i \wedge \sigma_k}^2) = \mathbb{E}([X]_{t \wedge \tau_i \wedge \sigma_k})$$ implying $$\mathbb{E}([X]_{t \wedge \tau_i \wedge \sigma_k}) \leq i^2.$$ It follows from the monotone convergence theorem that $$\mathbb{E}([X]_{t \wedge \tau_i}) \leq i^2$$ and $$\mathbb{E}\left(\sup_{t \geq 0} [X]_{t \wedge \tau_i} \right) \leq i^2.$$ Conclude from the last inequality that $[X]_{\infty}(\omega)<\infty$ almost surely for any $\omega \in \{\tau_i = \infty\}$.
Mixture and Alligation question: There is a vessel holding 40 litres of milk. First, 4 litres of milk is taken out of the vessel and 4 litres of water is poured in. After this, 5 litres of the mixture is replaced with six litres of water, and finally six litres of the mixture is replaced with six litres of water. How much milk is there in the vessel? I have tried: Initially the vessel contains 40 litres of milk: 4 litres out means -> 36 litres; 4 litres of water is poured in -> 4 litres, so now the total quantity is 40 litres. The mixture contains milk and water in the ratio 36:4, i.e. 9:1. Again, 5 litres of mixture is replaced with six litres of water; for that: 9x - 9/10*5 : x - 1/10*5. Now the ratio becomes: 90x - 45 : 10x - 5, i.e. 9x - 9 : 2x - 1. Six litres of water is added: 9x - 9 : 2x - 5. Again six litres of mixture is replaced, then 9x - 9 - 9/10*6 : 2x - 5 - 9/10*6, that is 90x - 144 : 10x - 84; after adding six litres of water again we get 90x - 144 : 10x - 78. So the milk content satisfies 90x - 144 + 10x - 78 = 41, 100x - 222 = 41, 100x = 263, x = 2.63. Again substituting the value x = 2.63 in 90x - 144, I get 92.7 litres of milk, but the vessel itself only holds 41 litres. Can anyone please point out where my mistake is?
Initial configuration is $40$ litres of milk and no water, say $m=40$ and $w=0$. Remove $4$ litres of milk and add $4$ litres of water: $m=36, w=4$. The mixture is $36:4$ or $9:1$. $5$ litres of mixture contains $5\times \frac{9}{10}=\frac 92$ litres of milk and $5\times \frac{1}{10}=\frac 12$ litres of water. Remove $5$ litres of mixture: $m=36-\frac 92=\frac{63}2, w=4-\frac 12 = \frac 72$. Add $6$ litres of water: $m=\frac{63}2, w=\frac 72+6=\frac{19}2$. The mixture is now $63:19$. $6$ litres of mixture contains $6\times \frac{63}{82}=\frac{189}{41}$ litres of milk and $6\times \frac{19}{82}=\frac{57}{41}$ litres of water. Remove $6$ litres of mixture: $m=\frac{63}2-\frac{189}{41}=\frac{2205}{82}, w=\frac{19}2-\frac{57}{41}=\frac{665}{82}$. Add $6$ litres of water: $m=2205/82, w=\frac{665}{82}+6 = \frac{1157}{82}$. Final configuration: $\frac{2205}{82}$ litres of milk (approx $26.9$) and $\frac{1157}{82}$ litres of water (approx $14.1$).
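The bookkeeping above is easy to replicate exactly with Python's `fractions` (a check of the arithmetic, nothing more):

```python
from fractions import Fraction as F

def remove(milk, water, litres):
    # removing mixture removes milk and water in their current proportion
    total = milk + water
    return milk - litres * milk / total, water - litres * water / total

milk, water = F(40), F(0)
milk -= 4; water += 4                              # 4 L milk out, 4 L water in
milk, water = remove(milk, water, 5); water += 6   # 5 L mixture out, 6 L water in
milk, water = remove(milk, water, 6); water += 6   # 6 L mixture out, 6 L water in
print(milk, water)  # 2205/82 1157/82
```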
Solving 3 simultaneous equations It's been a while since I've had to do a simultaneous equation and I'm rusty on a few of the particulars. Say for example I have the following equations: x + y = 7 2x + y + 3z = 32 2y + z = 13 I know that I need to combine the above 3 equations into 2 other equations, for example, if I combine (counting down) 1 + 2 I'd get 3x + 2y + 3z = 39 And combining 2 with 3 I'd get 2x + 3y + 4z = 45 Which is fine, and I understand. It's the next steps I have trouble understanding. A lot of the examples I've been looking at have a value for each of the x, y, z. Looking at this site here I'm not sure what is going on in step 2. I can see that they are multiplying one line by 2. Is that something you always do? Like, with simultaneous equations do you always multiply one of the equations by 2? If not, how do you determine which number to use? My understanding of simultaneous equations is extremely limited.
You can "eliminate" one or more of the equations using one of the techniques already listed here. For systems of equations with three variables, a technique my intermediate calculus class uses to find critical points is to solve for one of the variables in (1), substitute that variable into (2) to solve for the unknown, and check validity by subbing both into (3).
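The elimination procedure described in this thread can be mechanised; here is a small Gaussian-elimination sketch (with partial pivoting, an editorial addition) applied to the system from the question:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a small square system A x = b
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                     # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[1, 1, 0], [2, 1, 3], [0, 2, 1]]
b = [7, 32, 13]
print(solve3(A, b))  # [4.0, 3.0, 7.0]
```

Substituting back: $4+3=7$, $8+3+21=32$, $6+7=13$, so the solution checks out.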
Simplifying a recursive fraction involving Lambert function Is there a simplification for the following recursive fraction : $$\frac{\frac{\frac{n}{W(n)}}{W\left(\frac{n}{W(n)}\right)}}{W\left(\frac{\frac{n}{W(n)}}{W\left(\frac{n}{W(n)}\right)}\right)}$$ The above formula uses a recursion 3 times. I'm looking for a simplification when we have such a finite recursion, for instance when this one appears $i$ times. I would like to remove the recursion, i.e. obtain a single fraction. Thank you.
Since $u=W(u)e^{W(u)}$, it follows that $\frac u{W(u)}=e^{W(u)}$ and we get $$\frac{\frac{\frac{n}{W(n)}}{W\left(\frac{n}{W(n)}\right)}}{W\left(\frac{\frac{n}{W(n)}}{W\left(\frac{n}{W(n)}\right)}\right)}=e^{W\left(e^{W(e^{W(n)})}\right)}$$ Which is the best simplification I can see.
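A numerical check of the identity, using a small Newton iteration for the principal branch of $W$ (the implementation of `W` below is my own sketch, not a library call):

```python
from math import exp, log

def W(x, tol=1e-14):
    # principal branch of Lambert W for x > 0, via Newton on w*e^w - x = 0
    w = log(1 + x)   # reasonable starting point for x > 0
    for _ in range(100):
        e = exp(w)
        step = (w * e - x) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

n = 10.0
f = lambda u: u / W(u)                 # one level of the recursion
lhs = f(f(f(n)))
rhs = exp(W(exp(W(exp(W(n))))))
print(lhs, rhs)  # the two values agree
```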
Evaluate the integral $\int \frac{x^2(x-2)}{(x-1)^2}dx$ Find $$\int \frac{x^2(x-2)}{(x-1)^2} dx .$$ My attempt: $$\int \frac{x^2(x-2)}{(x-1)^2}dx = \int \frac{x^3-2x^2}{x^2-2x+1}dx $$ By applying polynomial division, it follows that $$\frac{x^3-2x^2}{x^2-2x+1} = x + \frac{-x}{x^2-2x+1}$$ Hence $$\int \frac{x^3-2x^2}{x^2-2x+1}dx = \int \left(x + \frac{-x}{x^2-2x+1}\right) dx =\int x \,dx + \int \frac{-x}{x^2-2x+1} dx \\ = \frac{x^2}{2} + C + \int \frac{-x}{x^2-2x+1} dx $$ Now using substitution $u:= x^2-2x+1$ and $du = (2x-2)\,dx $ we get $dx= \frac{du}{2x+2}$. Substituting dx in the integral: $$\frac{x^2}{2} + C + \int \frac{-x}{u} \frac{1}{2x-2} du =\frac{x^2}{2} + C + \int \frac{-x}{u(2x-2)} du $$ I am stuck here. I do not see how using substitution has, or could have helped solve the problem. I am aware that there are other techniques for solving an integral, but I have been only taught substitution and would like to solve the problem accordingly. Thanks
$\int \frac{x^2(x-2)}{(x-1)^2} dx$ $\int {x^2(x-1-1)\over(x-1)^2}dx$ $\int {x^2(x-1)-x^2\over(x-1)^2}dx$ $\int ({x^2\over(x-1)}-{x^2\over(x-1)^2})dx$ $\int {x^2\over(x-1)}dx-\int{x^2\over(x-1)^2}dx$ $t=x-1 \implies dt=dx$ $\int {(t+1)^2\over(t)}dt-\int{(t+1)^2\over(t)^2}dt$ $\int {(t^2+1+2t)\over(t)}dt-\int(1+{1\over t})^2dt$ $\int (t+{1\over t}+2)dt-\int(1+{1\over t})^2dt$ $\int t dt+\int{1\over t}dt+\int2dt-\int(1+{1\over t})^2dt$ ${t^2\over2}+\ln t+2t-t+{1\over t}-2\ln t+C$ ${(x-1)^2\over2}+(x-1)+{1\over x-1}-\ln |x-1|+C$
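A quick check of the final antiderivative against a numerical integral over $[2,4]$ (the interval is an arbitrary choice, kept away from the singularity at $x=1$):

```python
from math import log

def f(x):  # the integrand
    return x**2 * (x - 2) / (x - 1)**2

def F(x):  # the antiderivative found above
    return (x - 1)**2 / 2 + (x - 1) + 1 / (x - 1) - log(abs(x - 1))

a, b, N = 2.0, 4.0, 2000   # N even, for Simpson's rule
h = (b - a) / N
simpson = (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, N))) * h / 3
print(F(b) - F(a), simpson)  # the two values agree
```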
If $f(x,y)=f(y,x)$ then $\frac{\partial f(x,y)}{\partial x}=\frac{\partial f(y,x)}{\partial y}$ Why is it that if $f(x,y)=f(y,x)$ then $\frac{\partial f(x,y)}{\partial x}=\frac{\partial f(y,x)}{\partial y}$ for all $(x,y)$ in $\Bbb R^2$? My lecturer just went over it like it was obvious, but I can't seem to come up with a proof of why it is so. I thought maybe starting from the limit definition of a partial derivative would get me somewhere, then I could jumble it till they were equal, but this got me nowhere. Any help would be very much appreciated.
The formula is correct as long as you interpret it as $$ \partial_1 f(x,y) = \partial_2 f(y,x). $$ Indeed, $$ \begin{align} f(x,y) & =f(y,x) \\ f(x+h,y) &= f(y,x+h). \end{align} $$ Now subtract, divide by $h$, and let $h \to 0$.
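A finite-difference check of the identity $\partial_1 f(x,y)=\partial_2 f(y,x)$ on a sample symmetric function (illustrative only; the function and step size are arbitrary choices):

```python
def f(x, y):                 # symmetric: f(x, y) == f(y, x)
    return x * y + (x + y) ** 2

def d1(g, x, y, h=1e-6):     # central difference in the first slot
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def d2(g, x, y, h=1e-6):     # central difference in the second slot
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

for x, y in [(1.0, 2.0), (-0.5, 3.0)]:
    # partial_1 f(x, y) agrees with partial_2 f(y, x)
    assert abs(d1(f, x, y) - d2(f, y, x)) < 1e-5
```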
Integral of a differential form on the ellipsoid. Let $$\omega = \frac{1}{z}\mathrm{d}x\wedge\mathrm{d}y - \frac{1}{y}\mathrm{d}x\wedge\mathrm{d}z + \frac{1}{x}\mathrm{d}y\wedge\mathrm{d}z$$ defined on the open subset of $\mathbb{R}^3$ $\{(x,y,z)\mid xyz \neq 0\}$. Find the integral of $\omega$ on the ellipsoid $$\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$$ Now what I did was taking the following parametrization of the ellipsoid: $$x = a\cos\theta\sin\phi,\; y= b\sin\theta\sin\phi,\; z=c\cos\phi$$ where $(\theta, \phi) \in [0,2\pi]\times [0,\pi]$. Now something that is annoying me is the fact that the parametrization is not injective in $[0,2\pi]\times [0,\pi]$, so I thought about removing a parallel and a meridian of the ellipsoid, that is, taking $(0,2\pi)\times(0,\pi)$ instead, since what I'm removing has measure zero and therefore doesn't contribute at all to the value of the integral (I'm not really sure if this is how I should proceed or if it doesn't make sense at all, so some help clarifying that would be helpful too). Now I compute $\mathrm{d}x, \mathrm{d}y$ and $\mathrm{d}z$ in terms of $\theta, \phi$, and I get $$\begin{align*} \mathrm{d}x &= -a\sin\theta\sin\phi\mathrm{d}\theta + a\cos\theta\cos\phi\mathrm{d}\phi\\ \mathrm{d}y &= b\cos\theta\sin\phi\mathrm{d}\theta + b\sin\theta\cos\phi\mathrm{d}\phi\\ \mathrm{d}z &=-c\sin\phi\,\mathrm{d}\phi \end{align*}$$ and computing the wedge products, I finally get $$\varphi^{*}\omega = -\left(\frac{ab}{c} + \frac{ac}{b} + \frac{bc}{a} \right)\sin\phi \mathrm{d}\theta\wedge\mathrm{d}\phi$$ so if we let $M$ denote the ellipsoid with the meridian and parallel removed as explained above, then $$\begin{align*} \int_M \omega & = \int_U \varphi^*\omega = -\left(\frac{ab}{c} + \frac{ac}{b} + \frac{bc}{a} \right)\int_0^{\pi}\int_0^{2\pi}\sin\phi\,\mathrm{d}\theta\,\mathrm{d}\phi\\ & = -4\pi \left(\frac{ab}{c} + \frac{ac}{b} + \frac{bc}{a} \right) \end{align*}$$ So is this correct?
Is there any way to simplify the calculations and arrive to the same result more easily? Thank you!
Here's a more conceptual (but no quicker) approach to the computation. Start by observing that on the unit sphere, the area $2$-form $$dA = \frac{dx\wedge dy}z = \frac{dz\wedge dx}y = \frac{dy\wedge dz}x.\tag{$\star$}$$ You can check this in spherical coordinates (computing as you did) or as follows: Since the unit normal of the (unit) sphere is $(x,y,z)$, we have $dA = x\,dy\wedge dz + y\,dz\wedge dx+z\,dx\wedge dy$. On the other hand, on the sphere, we have $x\,dx+y\,dy+z\,dz=0$ (why?), so we can substitute and deduce ($\star$). Next, the linear map $T(x,y,z)=(ax,by,cz)$ maps the unit sphere diffeomorphically to our ellipsoid (and preserves the outward orientations, assuming $a,b,c>0$). Note that $T^*(\frac{dx\wedge dy}z) = \frac{ab}c\cdot\frac{dx\wedge dy}z$, $T^*(\frac{dz\wedge dx}y) = \frac{ac}b\cdot\frac{dz\wedge dx}y$, and $T^*(\frac{dy\wedge dz}x) = \frac{bc}a\cdot\frac{dy\wedge dz}x$. Thus, $T^*\omega = \left(\frac{ab}c+\frac{ac}b+\frac{bc}a\right)dA$. The result is now immediate, since we know that the surface area of the unit sphere is $4\pi$.
What is the smallest integer with 100 non-trivial factors? Excluding 1 and itself as factors, what is the smallest integer with 100 different factor values? Example: 20 has 4 non-trivial factors (2*10 and 4*5)
This can be found with OEIS entry A061799, for which $a_n$ is the smallest number with at least $n$ divisors. Your definition excludes 1 and the number itself, so we're looking for $a_{102}$ which is 50400. Alternatively, use OEIS entry A005179, for which $a_n$ is the smallest number with exactly $n$ divisors. In this case $a_{102}$ is 2949120.
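Both claims are easy to confirm by brute force (a sketch; the threshold 102 counts the trivial divisors 1 and $n$ back in):

```python
# sieve the divisor counts of 1..50400
N = 50400
ndiv = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        ndiv[m] += 1

# smallest number with AT LEAST 102 divisors (i.e. 100 non-trivial factors)
first = next(n for n in range(1, N + 1) if ndiv[n] >= 102)
assert first == 50400

# for the "exactly 102" variant, count divisors from the factorization
def tau(n):
    cnt, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        cnt *= e + 1
        d += 1
    if n > 1:
        cnt *= 2
    return cnt

assert tau(2949120) == 102
```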
Variant of Gödel sentence Let's take Peano Arithmetic for concreteness. Gödel's sentence $G$ indirectly talks about itself and says "I am not a PA-theorem." Then we come to the conclusion that $G$ cannot be a PA-theorem (since PA proves only true things), and hence $G$ is true. What about a sentence $H$ that says "I am a PA-theorem"? I think I saw on the internet some references about this issue, but now I cannot find them. Can someone provide references? (Either $H$ is a PA-theorem and it is true, or it is not a PA-theorem and it is false. In either case, it's not so interesting. But which one is it? I think $H$ is false, because, in order to prove $H$, you would first have to prove $H$. In other words, suppose for a contradiction that $H$ has a proof in PA, and let $X$ be the shortest proof. Then, presumably, $X$ would be of the form: "$Y$ is a proof of $H$, hence $H$ is a PA-theorem, hence $H$ holds." But then $Y$ would be a shorter proof. Contradiction.)
Your sentence is constructed to have the property that $$ H \leftrightarrow (\mathsf{PA}\vdash H) $$ In particular, then, PA proves $$ (\mathsf{PA}\vdash H) \to H $$ This is the premise for Löb's theorem which then concludes that PA proves $H$ itself. So $H$ is true!
Probabilistic classical algorithm on Deutsch-Jozsa problem Here is the description of Deutsch-Jozsa problem: Let $f: \{0,1\}^n \mapsto \{0,1\}$ be a function promised to be either constant or balanced ('balanced' means that $f$ outputs as many 0's as 1's). I need to show that a probabilistic classical algorithm making two evaluations of $f$ can with probability at least 2/3 correctly determine whether $f$ is constant or balanced. There is also a hint: Your guess does not need to be a deterministic function of the results of the two queries. Your result should not assume any particular a priori probabilities of having a constant or balanced function. I'm a bit lost here. My thinking is that if the two evaluations are the different, then $f$ is definitely balanced. Otherwise, $f$ could be either constant or balanced. But the chance of success would be depending on the probability of $f$ being constant, which is against the given hint. How should I approach this problem?
Algorithm: * *Independently and uniformly sample two queries from $\{0, 1\}^n$, denoted as $q_1$ and $q_2$. *Evaluate $f(q_1)$ and $f(q_2)$. If $f(q_1) \neq f(q_2)$, output $\color{blue}{\mathsf{balanced}}$; otherwise, output $\color{blue}{\mathsf{balanced}}$ with probability $\frac{1}{3}$ and $\color{red}{\mathsf{constant}}$ with probability $\frac{2}{3}$. Analysis: We have \begin{align} &P(\text{output }\color{blue}{\mathsf{balanced}} \mid f \ \color{blue}{\mathsf{balanced}}) \\[5pt] =\ &P(\text{output }\color{blue}{\mathsf{balanced}}, f(q_1) \neq f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\ &\quad + P(\text{output }\color{blue}{\mathsf{balanced}}, f(q_1) = f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\[5pt] =\ &P(\text{output }\color{blue}{\mathsf{balanced}}\mid f(q_1) \neq f(q_2), f \ \color{blue}{\mathsf{balanced}})\cdot P(f(q_1) \neq f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\ &\quad +P(\text{output }\color{blue}{\mathsf{balanced}}\mid f(q_1) = f(q_2), f \ \color{blue}{\mathsf{balanced}})\cdot P(f(q_1) = f(q_2) \mid f \ \color{blue}{\mathsf{balanced}}) \\[5pt] =\ &1\cdot \frac{1}{2} + \frac{1}{3} \cdot \frac{1}{2} = \frac{2}{3} \end{align} and \begin{align} P(\text{output }\color{red}{\mathsf{constant}} \mid f\ \color{red}{\mathsf{constant}}) = \frac{2}{3} \end{align} Therefore, \begin{align} &P(\text{output is correct}) = P(\text{output }\color{blue}{\mathsf{balanced}}, f \ \color{blue}{\mathsf{balanced}}) + P(\text{output }\color{red}{\mathsf{constant}}, f\ \color{red}{\mathsf{constant}}) \\ =\ &P(\text{output }\color{blue}{\mathsf{balanced}} \mid f \ \color{blue}{\mathsf{balanced}})\cdot P(f\ \color{blue}{\mathsf{balanced}}) \\ &\quad + P(\text{output }\color{red}{\mathsf{constant}}\mid f\ \color{red}{\mathsf{constant}})\cdot P(f\ \color{red}{\mathsf{constant}}) \\ =\ &\frac{2}{3}\cdot P(f\ \color{blue}{\mathsf{balanced}}) + \frac{2}{3} \cdot P(f\ \color{red}{\mathsf{constant}}) = \frac{2}{3} \end{align}
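A Monte Carlo check of this analysis (a sketch: the particular balanced function used here, input parity, and the trial count are arbitrary choices):

```python
import random

def guess(f, n):
    """One run of the algorithm: two uniform queries, then a randomized guess."""
    q1, q2 = random.randrange(2 ** n), random.randrange(2 ** n)
    if f(q1) != f(q2):
        return "balanced"
    return "balanced" if random.random() < 1 / 3 else "constant"

random.seed(0)
n, trials = 4, 200_000
balanced = lambda q: bin(q).count("1") % 2   # parity: one balanced function
constant = lambda q: 0

for f, truth in ((balanced, "balanced"), (constant, "constant")):
    correct = sum(guess(f, n) == truth for _ in range(trials))
    assert abs(correct / trials - 2 / 3) < 0.01   # success probability ~ 2/3
```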
Chinese Remainder Theorem for rings. Please, take a look at The Chinese Remainder Theorem for Rings. for the theorem. My text gives an example to show that the theorem is not true for the non-commutative case. I do not understand the example, so I hope that you can explain it to me. Example: Consider the ring $R$ of non-commutative real polynomials in $X$ and $Y$. Let $I$ be the principal two-sided ideal generated by $X$ and $J$ be the principal two-sided ideal generated by $XY + 1$. Then $I + J = R$ but $I \cap J \neq IJ$. Can you explain to me why $I + J = R$ and why $I \cap J \neq IJ$? Thanks.
\begin{align*} &xy \in I \text{ and } xy + 1 \in J\\[6pt] \implies\; &1 \in I+J\\[6pt] \implies\; &I+J = R \end{align*} For the other question, let $r = (xy + 1)x$. Then $r \in I$ and $r \in J$, hence $r \in I\cap J$. However, the lack of commutativity of the variables $x,y$ implies $r \notin IJ$. Therefore $I\cap J \ne IJ$.
solving second order ODE with $\operatorname{sech}^2(x)$ I am trying to solve an equation of the following type: $$u''(x) = (a-b\operatorname{sech}^2(cx))u(x),$$ where $a,b,c$ are positive real constants. It is the first time I have met an equation that includes a $\operatorname{sech}^2(x)\,u$ term. Any hint will be good for me! Thank you in advance!
There is a class of solution possible provided you allow for a relation between coefficients a and b. Try the solution $(\operatorname{sech}(cx))^p$ and find two algebraic equations detailing the dependence of the power $p$ on $a$ and $b$.
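Carrying the hint out: substituting $u=\operatorname{sech}^p(cx)$ gives $u''=\big(p^2c^2-p(p+1)c^2\operatorname{sech}^2(cx)\big)u$, so such a solution exists when $a=p^2c^2$ and $b=p(p+1)c^2$. A finite-difference check of that relation (a sketch; the values of $p$ and $c$ are arbitrary):

```python
from math import cosh

p, c = 2.0, 1.5
a, b = p * p * c * c, p * (p + 1) * c * c   # the required relation between a and b

def u(x):
    return cosh(c * x) ** (-p)              # trial solution sech(cx)^p

h = 1e-5
for x in (-1.0, 0.3, 2.0):
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2   # numerical u''
    rhs = (a - b / cosh(c * x) ** 2) * u(x)
    assert abs(upp - rhs) < 1e-4
```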
Constant rate of change for $f(x) = -x^2 +7$ I'm taking Calculus I and we are going over Average Rate of Change and the Difference Quotient in preparation to discuss derivatives. The current investigation states: Let $f(x) = -x^2 + 7$. The question asked if $f(x)$ varies at a constant rate of change, which I decided it does not, since it is exponential and thus has some acceleration/deceleration. What I'm having trouble grasping is the next part. It asks What is the constant rate of change of the linear function $g$ that has the same change in output values over the interval $x=-3$ to $x=5$ as the function $f$? I was searching for resources trying to help with this problem and found How are the average rate of change and the instantaneous rate of change related for ƒ(x) = 2x + 5? So I applied the idea to my problem: $$\frac{(b^2+7)-(a^2+7)}{b-a} = \frac{b^2-a^2}{b-a}=\frac{(b+a)(b-a)}{b-a}=b+a$$ But I didn't really know how to use this afterwards, since in the answer there, they weren't left with the variables they substituted in. How can I find the rate of change like this?
The relevant points on the graph of the function $f$ are $(-3, -2)$ and $(5, -18)$ (evaluate $f(-3)$ and $f(5)$ to determine this). A linear function $g$ passing through these two points would have slope (i.e., constant rate of change) equal to $$ \frac{-18-(-2)}{5-(-3)} = -2. $$ Your attempt at a general solution could be made to work (there was just a typo in your starting point). The slope of the line through the points $(a, f(a))$ and $(b, f(b))$ is given by $$ \frac{f(b) - f(a)}{b-a} = \frac{(-b^2 + 7) - (-a^2 + 7)}{b-a} = \frac{a^2 - b^2}{b-a} = -(a + b), $$ which gives $-2$ in our particular case.
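The arithmetic is quick to verify (illustrative only):

```python
def f(x):
    return -x ** 2 + 7

a, b = -3, 5
slope = (f(b) - f(a)) / (b - a)   # average rate of change of f on [-3, 5]
assert slope == -2.0              # the constant rate of change of g
assert slope == -(a + b)          # matches the general formula -(a + b)
```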
Common solution of $Ax=0$ and $Bx=0$ Could anyone tell me when $Ax=0=Bx$ has a common non-trivial solution, where $A,B\in\mathbb{R}^{m\times n}$? Suppose $x_1\ne 0$ is that common solution; then what we get is $Ax_1=Bx_1=0$, but I am not getting any relation between them! Are they similar? Does there exist a non-singular matrix $P$ such that $P^{-1}AP=B$?
The solutions of $Ax=0$ and $Bx=0$ are two vector subspaces of $\Bbb R^n$, say $W$ and $U$. Clearly the intersection between them contains the zero vector, but if the intersection isn't trivial, then every $v\neq 0$ such that $v\in W\cap U$ is a common solution; in particular, the entire subspace generated by $v$ is contained in the intersection, so you find infinitely many common solutions. In general this happens even without any relation between $A$ and $B$.
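For instance, whenever the stacked matrix $\begin{bmatrix}A\\B\end{bmatrix}$ has rank less than $n$ (automatic if $2m<n$), a common non-trivial solution exists, and numerically it can be read off an SVD. A sketch assuming NumPy, with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 6))
B = rng.standard_normal((2, 6))

# common solutions of Ax = 0 and Bx = 0 form the null space of the stack
M = np.vstack([A, B])                 # 4 x 6, so rank(M) <= 4 < 6
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]                            # a unit vector in the null space

assert np.allclose(A @ v, 0) and np.allclose(B @ v, 0)
assert np.linalg.norm(v) > 0.5        # non-trivial (a unit vector, in fact)
```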
Spectral norm minimization via semidefinite programming Given symmetric matrices $A_0, A_1, \dots, A_n \in \mathbb R^{m \times m}$, let $A(x) := A_0 + x_1 A_1 +\cdots + x_n A_n$. How to formulate the following unconstrained spectral minimization problem as a semidefinite program? $$\min_{x \in \mathbb R^n} \|A(x)\|_2$$ Can anyone please help on this problem? Thanks!
Introducing variable $s > 0$ and rewriting the minimization problem in epigraph form, $$\begin{array}{ll} \text{minimize} & s\\ \text{subject to} & \| \mathrm A (\mathrm x) \|_2 \leq s\end{array}$$ Note that $\| \mathrm A (\mathrm x) \|_2 \leq s$ is equivalent to $\sigma_{\max} \left( \mathrm A (\mathrm x) \right) \leq s$, which is equivalent to $$\lambda_{\max} \left( (\mathrm A (\mathrm x))^{\top} \mathrm A (\mathrm x) \right) \leq s^2$$ Hence, $$s^2 - \lambda_{\max} \left( (\mathrm A (\mathrm x))^{\top} \mathrm A (\mathrm x) \right) = \lambda_{\min} \left( s^2 \mathrm I_m - (\mathrm A (\mathrm x))^{\top} \mathrm A (\mathrm x) \right) \geq 0$$ and, thus, we obtain $$s^2 \mathrm I_m - (\mathrm A (\mathrm x))^{\top} \mathrm A (\mathrm x) \succeq \mathrm O_m$$ Dividing both sides by $s > 0$, $$s \mathrm I_m - (\mathrm A (\mathrm x))^{\top} \left( s \mathrm I_m \right)^{-1} \mathrm A (\mathrm x) \succeq \mathrm O_m$$ Using the Schur complement test for positive semidefiniteness, the inequality above can be rewritten as the following linear matrix inequality (LMI) $$\begin{bmatrix} s \mathrm I_m & \mathrm A (\mathrm x)\\ (\mathrm A (\mathrm x))^{\top} & s \mathrm I_m\end{bmatrix} \succeq \mathrm O_{2m}$$ Thus, we obtain the following semidefinite program (SDP) in $\mathrm x \in \mathbb R^n$ and $s > 0$ $$\begin{array}{ll} \text{minimize} & s\\ \text{subject to} & \begin{bmatrix} s \mathrm I_m & \mathrm A (\mathrm x)\\ (\mathrm A (\mathrm x))^{\top} & s \mathrm I_m\end{bmatrix} \succeq \mathrm O_{2m}\end{array}$$ Alternatively, since $\mathrm A (\mathrm x)$ is symmetric for all $\mathrm x \in \mathbb R^n$, we can use the following SDP $$\begin{array}{ll} \text{minimize} & s\\ \text{subject to} & -s \mathrm I_m \preceq \mathrm A (\mathrm x) \preceq s \mathrm I_m\end{array}$$ which can be rewritten as follows $$\begin{array}{ll} \text{minimize} & s\\ \text{subject to} & \begin{bmatrix} s \mathrm I_m - \mathrm A (\mathrm x) & \mathrm O_{m}\\ \mathrm O_{m} & s \mathrm I_m 
+ \mathrm A (\mathrm x)\end{bmatrix} \succeq \mathrm O_{2m}\end{array}$$
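The Schur-complement step can be sanity-checked numerically: for a fixed matrix, the block LMI is positive semidefinite exactly when $s \geq \|A\|_2$. A sketch assuming NumPy, with a random test matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2                        # symmetric, as in the problem
spec = np.linalg.norm(A, 2)              # spectral norm of A

def lmi_psd(s):
    """Is [[s*I, A], [A.T, s*I]] positive semidefinite?"""
    m = A.shape[0]
    M = np.block([[s * np.eye(m), A], [A.T, s * np.eye(m)]])
    return np.linalg.eigvalsh(M).min() >= -1e-9

assert lmi_psd(spec + 0.1)               # feasible above the spectral norm
assert not lmi_psd(spec - 0.1)           # infeasible below it
```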
Expected Value for conditional chosen with repetition problem I have been thinking about this for more than two weeks now and did not find any help in the literature: I have this experiment: There is an urn with 9 balls inside, numbered 1-9. You draw a ball and record the number. After that you put the ball back inside the urn. You do this until you have drawn one number 8 times (it does not matter which one). How many draws do you expect to do until you get one number 8 times? The minimum number of draws is obviously 8 and the maximum is 64, but I did not find any distribution for this kind of problem. The only thing I know from simulation is that the expected value is about 39.309. Any ideas?
This problem is quite simple combinatorially but finding closed forms is difficult; indeed the intermediate results indicate there may not be any. Suppose we treat the case of $n$ coupons where we wait until some coupon has been seen $n-1$ times. We have from first principles that the probability for this to happen after $m$ draws is given by $$P[T=m] = \frac{1}{n^m} (m-1)! [z^{m-1}] \frac{d}{du} \left.\left(\sum_{q=0}^{n-3} \frac{z^q}{q!} + u\frac{z^{n-2}}{(n-2)!} \right)^n\right|_{u=1}.$$ This is $$P[T=m] = \frac{n}{n^m} (m-1)! [z^{m-1}] \frac{z^{n-2}}{(n-2)!} \left(\sum_{q=0}^{n-2} \frac{z^q}{q!}\right)^{n-1}.$$ We can now compute the expectation as follows.

    F := n -> z^(n-2)/(n-2)!*add(z^q/q!, q=0..n-2)^(n-1);

    X := proc(n)
    local FF;
    option remember;
        FF := expand(F(n));
        add(m*n/n^m*(m-1)!*coeff(FF, z, m-1), m=n-1..1+(n-2)*n);
    end;

For $n=9$ we thus obtain

    > X(9);
    96899089924114484187946852578422805520046700098386996168
    --------------------------------------------------------
    2465034704958067503996131453373943813074726512397600969

    > evalf(%);
    39.30942219

The form of this result indicates there may not be a simple answer. If there were any possibility of potential cancellation it would have appeared at this point. Code.

    #include <stdlib.h>
    #include <stdio.h>
    #include <assert.h>
    #include <time.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int n = 6, trials = 1000;

        if(argc >= 2){
            n = atoi(argv[1]);
        }
        if(argc >= 3){
            trials = atoi(argv[2]);
        }

        assert(1 <= n);
        assert(1 <= trials);

        srand48(time(NULL));

        long long data = 0;

        for(int tind = 0; tind < trials; tind++){
            int dist[n];
            int steps = 0;

            for(int cind = 0; cind < n; cind++){
                dist[cind] = 0;
            }

            while(1){
                int coupon = drand48() * (double)n;
                steps++;

                if(dist[coupon] == n-2) break;
                dist[coupon]++;
            }

            data += steps;
        }

        long double expt = (long double)data/(long double)trials;
        printf("[n = %d, trials = %d]: %Le\n", n, trials, expt);

        exit(0);
    }
Trick to these square root equations Okay, so this is a high school level assignment: $$ \sqrt{x+14}-\sqrt{x+5}=\sqrt{x-2}-\sqrt{x-7} $$ Here's a similar one: $$ \sqrt{x}+\sqrt{x-5}=\sqrt{x+7}+\sqrt{x-8} $$ When solving these traditionally, I get a polynomial with exponent to the 4th which I cannot solve (I can guess the solutions via the free term and divide the polynomial accordingly, but I don't think that is the intended method here). Is there a trick to either of these two tasks that would prevent exponents from getting out of control? How would a high schooler solve them?
write like this $$\sqrt{x+14}+\sqrt{x-7}=\sqrt{x-2}+\sqrt{x+5}$$ square both sides: $$2x+7+2\sqrt{(x+14)(x-7)}=2x+3+2\sqrt{(x-2)(x+5)}\\ 2+\sqrt{(x+14)(x-7)}=\sqrt{(x-2)(x+5)}$$ square again $$4\sqrt{(x+14)(x-7)}+x^2+7x-94=x^2+3x-10\\ \sqrt{(x+14)(x-7)}=21-x \to (x+14)(x-7)=(21-x)^2$$ Can you finish? Don't forget to test the result in the original equation.
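For checking your own work: the last step $(x+14)(x-7)=(21-x)^2$ reduces to $49x = 539$, i.e. $x = 11$, and the suggested test against the original equation passes:

```python
from math import sqrt, isclose

# (x+14)(x-7) = (21-x)^2  =>  x^2+7x-98 = x^2-42x+441  =>  49x = 539
x = 539 / 49
assert x == 11.0
# x = 11 survives the check against the original equation
assert isclose(sqrt(x + 14) - sqrt(x + 5), sqrt(x - 2) - sqrt(x - 7))
```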
Why we can't have radicals with negative index? I am a high school student and we just learned about radical and radical notation. Our teacher says index of radical must be integer and greater than 2 by definition. But I can’t understand why we can’t have radical with negative or rational indexes? For example why can’t we have either of these? $$\sqrt[\frac32]8=8^{\frac1{\left(\frac32\right)}}=8^{\frac23}=\sqrt[3]{8^2}=\sqrt[3]{64}=4$$ $$\sqrt[-2]4=4^{\frac1{-2}}=4^{-\frac12}=\sqrt[2]{4^{-1}}=\sqrt[2]{\frac14}=\frac12$$ Our teacher says it’s because negative and rational indexes are not defined for radical notations but why they are not defined? They certainly have answers.
Nothing prevents you from choosing to define a meaning for something like $\sqrt[-2/3]{x}$. It's just not something that is usually done, because the only "reasonable" choice of definition would be to make it mean the same as what we already have the notation $x^{-3/2}$ for -- and since the latter notation is simpler and easier to read, there is no particular demand for also writing it $\sqrt[-2/3]{\cdots}$. In short, the fact that we usually don't define this is not out of any kind of mathematical necessity, but simply because there doesn't seem to be any need for it.
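Numerically, the asker's two examples behave exactly as the reciprocal-exponent reading predicts:

```python
from math import isclose

# index 3/2 read as exponent 1/(3/2) = 2/3
assert isclose(8 ** (1 / (3 / 2)), 4)
# index -2 read as exponent -1/2
assert isclose(4 ** (1 / -2), 0.5)
```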
Is F open in the pointwise convergence topology? Let $F=\{f\in \mathcal{C}([a,b]): f(t)>0,\ \forall t\in [a,b]\}$. Is $F$ open in the pointwise convergence topology $\mathcal{O}_{ptc}$ on $\mathcal{C}([a,b])$? Given a basis of $\mathcal{O}_{ptc}$, consisting of the family of all sets $\mathcal{B}(x_1,...,x_n;t_1,...t_n;\epsilon_1,...\epsilon_n)=\{f\in\mathcal{C}([a,b]): f(x_i)\in B_{\epsilon_i}(t_i),i=1,...,n\}\\ n\in\mathbb{N}, x_1,...,x_n\in[a,b]; t_1,...,t_n\in\mathbb{R};\epsilon_1,...,\epsilon_n>0$ I have to prove or disprove the statement. Now my answer (just in words) would be that I can always choose elements from the basis to "model" every function of $F$, so that $F$ is necessarily contained in a particular union of elements of the basis and never touches the $x$-axis. My solution says F is NOT open, thus I am quite confused.
Suppose $f \in F$ and let $f \in U$ where $U$ is open. Then there is some basis element $B$ such that $f \in B \subset U$. By choosing a smaller basis element as necessary, we can assume that $B$ has the form $B= \{ g \mid g(x_k) \in (t_k-\epsilon, t_k+\epsilon), k=1,...,n \}$ for some fixed $t_k, x_k$ and $\epsilon >0$. Note that the only constraint that membership of $B$ imposes is that $g$ must lie in the interval $(t_k-\epsilon, t_k+\epsilon)$ at $x_k$, for a finite number of $x_k$. Choose $x^* \in [a,b]$ to be distinct from the $x_k$ and choose $h$ to be any polynomial such that $h(x_k) = f(x_k)$ and $h(x^*) = -1$. Then $h \in U$ but clearly $h \notin F$. Hence $F$ cannot be open since any non-empty open set contains elements that are not in $F$.
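The polynomial in the proof is just an interpolation; for instance with NumPy (the sample points and values are arbitrary):

```python
import numpy as np
from numpy.polynomial import Polynomial

xk = np.array([0.2, 0.5, 0.8])       # the finitely many constrained points
fk = np.array([1.0, 2.0, 1.5])       # values of some positive f there
x_star = 0.65                        # any other point of [a, b]

# degree-3 polynomial through (x_k, f(x_k)) and (x_star, -1): 4 points,
# 4 coefficients, so the least-squares fit interpolates exactly
pts = np.append(xk, x_star)
vals = np.append(fk, -1.0)
h = Polynomial.fit(pts, vals, deg=3)

assert np.allclose(h(xk), fk)        # h meets every basis constraint...
assert h(x_star) < 0                 # ...yet h is not everywhere positive
```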
$\lim_{z \to \exp(i\pi/3)} \frac{z^3+8}{z^4+4z+16}$ Find $$\lim_{z \to \exp(i \pi/3)} \dfrac{z^3+8}{z^4+4z+16}$$ Note that $$z=\exp(\pi i/3)=\cos(\pi/3)+i\sin(\pi/3)=\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}$$ $$z^2=\exp(2\pi i/3)=\cos(2\pi/3)+i\sin(2\pi/3)=-\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}$$ $$z^3=\exp(3\pi i/3)=\cos(\pi)+i\sin(\pi)=1$$ $$z^4=\exp(4\pi i/3)=\cos(4\pi/3)+i\sin(4\pi/3)=-\dfrac{1}{2}-i\dfrac{\sqrt{3}}{2}$$ So, \begin{equation*} \begin{aligned} \lim_{z \to \exp(i \pi/3)} \dfrac{z^3+8}{z^4+4z+16} & = \dfrac{1+8}{-\dfrac{1}{2}-i\dfrac{\sqrt{3}}{2}+4\left(-\dfrac{1}{2}+\dfrac{\sqrt{3}}{2}\right)+16} \\ & = \dfrac{9}{\dfrac{27}{2}+i\frac{3\sqrt{3}}{2}} \\ & = \dfrac{6}{9+i\sqrt{3}} \\ & = \dfrac{9}{14}-i\dfrac{\sqrt{3}}{2} \\ \end{aligned} \end{equation*} But, when I check my answer on wolframalpha, their answer is $$\dfrac{245}{626}-i\dfrac{21\sqrt{3}}{626}.$$ Can someone tell me what I am doing wrong?
First, note that: $$z^3=\cos(\pi)+i\sin(\pi)=\color{red}{-1}$$ And also, you've evaluated $z$ correctly, but the substitution into your limit is wrong. Therefore, you should be using: $$\lim_{z\to \exp(i\pi/3)} \frac{z^3+8}{z^4+4z+16}=\frac{\color{red}{-1+8}}{\left(-\frac{1}{2}-i\frac{\sqrt{3}}{2}\right)+4\color{red}{\left(\frac{1}{2}+i\frac{\sqrt{3}}{2}\right)}+16}$$ Which gives you the correct answer.
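A one-line numerical confirmation, matching the WolframAlpha value quoted in the question:

```python
import cmath
from math import sqrt

z = cmath.exp(1j * cmath.pi / 3)
val = (z ** 3 + 8) / (z ** 4 + 4 * z + 16)
# equals (245 - 21*sqrt(3)*i)/626 up to rounding
assert abs(val - (245 / 626 - 21 * sqrt(3) / 626 * 1j)) < 1e-12
```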
Integral of a function times the square root of its derivative $$\int_0^{2\pi}(t-\sin t){\sqrt{1-\cos t}}\, dt$$ I can notice that I have something of the form $$\int{f(x){\sqrt {f'(x)}}\,dx}$$ but I don't know anything that could simplify it
Well, we have that $$\mathcal{I}:=\int_0^{2\pi}\left(t-\sin\left(t\right)\right)\sqrt{1-\cos\left(t\right)}\space\text{d}t=$$ $$\int_0^{2\pi}t\sqrt{1-\cos\left(t\right)}\space\text{d}t-\int_0^{2\pi}\sin\left(t\right)\sqrt{1-\cos\left(t\right)}\space\text{d}t\tag1$$ Now, we get that: * *Substitute $\text{u}=1-\cos\left(t\right)$: $$\int_0^{2\pi}\sin\left(t\right)\sqrt{1-\cos\left(t\right)}\space\text{d}t=\int_0^0\sqrt{\text{u}}\space\text{d}\text{u}=0\tag2$$ *Substitute $\text{s}=\frac{t}{2}$: $$\int_0^{2\pi}t\sqrt{1-\cos\left(t\right)}\space\text{d}t=\sqrt{2}\int_0^{2\pi}t\sin\left(\frac{t}{2}\right)\space\text{d}t=4\sqrt{2}\int_0^\pi\text{s}\sin\left(\text{s}\right)\space\text{d}\text{s}\tag3$$ Now, using integration by parts: $$4\sqrt{2}\int_0^\pi\text{s}\sin\left(\text{s}\right)\space\text{d}\text{s}=4\sqrt{2}\cdot\left(\pi+\int_0^\pi\cos\left(\text{s}\right)\space\text{d}\text{s}\right)=$$ $$4\sqrt{2}\cdot\left(\pi+\sin\left(\pi\right)-\sin\left(0\right)\right)=4\pi\sqrt{2}\tag4$$ So, we get that: $$\mathcal{I}=4\pi\sqrt{2}$$
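A midpoint-rule check of $4\pi\sqrt 2 \approx 17.77$ (the number of panels is an arbitrary choice):

```python
from math import cos, sin, sqrt, pi

def integrand(t):
    return (t - sin(t)) * sqrt(1 - cos(t))

N = 100_000                                    # midpoint rule on [0, 2*pi]
h = 2 * pi / N
approx = sum(integrand((k + 0.5) * h) for k in range(N)) * h
assert abs(approx - 4 * pi * sqrt(2)) < 1e-4
```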
Long Division with imaginary $i$ The problem asks for the student to divide $(ix^4 + 3x^3 -ix^2 + ix + 4i +6)$ by $(x-2i)$ using both synthetic division as well as long division. When using synthetic division it seems that I MUST group the $4i+6$ together in order to get the correct answer and I don't understand why I need to do that. Additionally, when I do the long division it seems that to get the same answer as the synthetic, I must also group the last 2 terms. Is there a math reason for this? Why won't I get a valid answer by non-grouping? I get a different answer when I don't group those last 2 terms.
The reason is that $4i+6$ is the constant term. Both the $4i$ and the $6$ together make up the constant term. Even though there is a real part and an imaginary part, these two parts must be taken together because the constant term of the polynomial consists of both of them. Treating them separately would be the same as, for example, treating $5$ as $2+3$ if you were dividing $x^2 + 2x + 5$ by $x-1$. This isn't specific to the constant term also. For example, you'd have to do something similar if instead of just $+ix$ you also had, say, $+ix + 2x$. Then you'd have to treat that as $(2+i)x$ and keep the $2$ and the $i$ together.
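In Python's native complex arithmetic, grouping $6+4i$ as the single constant coefficient makes the synthetic division come out cleanly (a sketch; the quotient in the assertion is produced by the division itself, not assumed):

```python
# synthetic division of i*x^4 + 3x^3 - i*x^2 + i*x + (6 + 4i) by (x - 2i)
coeffs = [1j, 3, -1j, 1j, 6 + 4j]      # descending powers; 6+4i kept together
root = 2j

q = [coeffs[0]]
for c in coeffs[1:]:
    q.append(c + root * q[-1])         # standard synthetic-division step
remainder = q.pop()

assert q == [1j, 1, 1j, -2 + 1j]       # quotient: i*x^3 + x^2 + i*x + (i - 2)
assert remainder == 4
```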
Continuity of a linear operator in Schwartz Space Let $f: \mathbb{R}\rightarrow\mathbb{R}$ be a $C^{\infty}$ function which is bounded. Define $A:\mathcal{S}(\mathbb{R})\rightarrow\mathcal{S}(\mathbb{R})$ as $A(\phi)=f\phi$. Is $A$ continuous? Intuitively, I think this should not be true. If we take a function $f$ which is bounded but has an unbounded derivative then for a sequence $\{\phi_n\}\rightarrow0$ in $\mathcal{S}(\mathbb{R})$, $\{f\phi_n\}\nrightarrow0$ in $\mathcal{S}(\mathbb{R})$. I thought of taking $f=\sin(x^2)$. However, I'm unable to find a suitable $\{\phi_n\}$. Note: $\mathcal{S}(\mathbb{R})$ is the Schwartz space.
Consider the bounded smooth function $f(x)=\sin(\exp(x^2))$ and $g(x)=\exp(-x^2/2)$. Then $g$ belongs to $\mathcal S(\mathbb R)$ but $fg$ does not because $$(fg)'(x)= 2x \exp(x^2/2)\cos(\exp(x^2)) - x\exp(-x^2/2)f(x)$$ does not converge to $0$. The "multiplier space" of $\mathcal S(\mathbb R)$ is calculated in Laurent Schwartz' book on distribution theory (which I do not have at hand, right now).
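One can watch the derivative blow up numerically: at $x_k=\sqrt{\log(2\pi k)}$ the cosine factor equals $1$, so $(fg)'(x_k)\approx 2x_k e^{x_k^2/2}\to\infty$. A sketch using the formula above (the sampled values of $k$ are arbitrary):

```python
from math import exp, sqrt, log, pi, cos, sin

def fg_prime(x):
    """(fg)'(x) for f = sin(exp(x^2)), g = exp(-x^2/2)."""
    return (2 * x * exp(x ** 2 / 2) * cos(exp(x ** 2))
            - x * exp(-x ** 2 / 2) * sin(exp(x ** 2)))

for k in (10, 100, 1000):
    x = sqrt(log(2 * pi * k))          # here exp(x^2) = 2*pi*k, so cos(...) = 1
    assert abs(fg_prime(x)) > sqrt(k)  # grows without bound: fg is not Schwartz
```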
law of large numbers: zero chance error The following is from Freedman et al $(2005)$ Statistics. (a) A die will be rolled some number of times, and you win $1$ dollar if it shows a one more than $20$ percent of the time. Which is better: $60$ rolls, or $600$ rolls? (d) As in (a); but we win if the percentage of ones is exactly $16$ $2/3$ percent. According to a key I found online the solution is the following: (a) $60$ rolls. To win, you need a large percentage error, and that is more likely in $60$ rolls. (d) $60$ rolls because to get exactly the expected value means getting exactly zero chance error, and that is more likely with fewer rolls. Aren't these answers contradictory? I think, I understand answer (a): The more I roll the die, the smaller chance I have to reach 20 percent (and the more I approach $16$ $2/3$ percent). But how can I get large percentage error and zero chance error at the same time?
They're not contradictory. To see why, calculate the probability of getting exactly 50% heads in 2 tosses versus 4 tosses, or exactly 25% heads in 4 tosses versus 8.
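Sketching the suggested calculation in code (coin examples first, then the die from part (d)):

```python
from math import comb

def p_exact(n, k, p):
    """P(exactly k successes in n independent trials with success probability p)"""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# hitting exactly 50% heads is LESS likely with more tosses
assert p_exact(2, 1, 0.5) > p_exact(4, 2, 0.5)        # 0.5 > 0.375
# same effect for the die: exactly 1/6 ones
assert p_exact(6, 1, 1 / 6) > p_exact(60, 10, 1 / 6)
```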
Suppose that $W_1$ and $W_2$ are both four-dimensional subspaces of a vector space $V$ of dimension seven. Explain why $W_1 \cap W_2 \neq \{0\}$. Suppose that $W_1$ and $W_2$ are both four-dimensional subspaces of a vector space $V$ of dimension seven. Explain why $W_1 \cap W_2 \neq \{0\}$. Suppose $W_1\cap W_2 = \{0\}$, since $\dim(W_1+W_2)=\dim(W_1)+\dim(W_2)-\dim(W_1\cap W_2)$ and $\dim(W_1)=\dim(W_2)=4$, $\dim(V)=7$, $$\dim(W_1+W_2)=\dim(W_1)+\dim(W_2)-\dim(\{0\})=4+4-0= 8 > \dim(V)=7$$ which does not make sense since both $W_1$, $W_2$ are subspaces of $V$. Therefore, $W_1 \cap W_2 \neq \{0\}$ This is how I solved it. Is it right?
Your proof is fine, but there's no need to use contradiction. By Grassmann’s formula $$ 7=\dim V\ge\dim(W_1+W_2)=\dim W_1+\dim W_2-\dim(W_1\cap W_2)= 4+4-\dim(W_1\cap W_2) $$ Therefore $$ \dim(W_1\cap W_2)\ge 4+4-7=1 $$
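Numerically: for two random 4-dimensional subspaces of $\mathbb R^7$, $\dim(W_1\cap W_2)$ follows from ranks via Grassmann's formula, and the bound is typically attained with equality. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
B1 = rng.standard_normal((7, 4))     # columns: a basis of W1
B2 = rng.standard_normal((7, 4))     # columns: a basis of W2

dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))   # dim(W1 + W2) <= 7
dim_int = 4 + 4 - dim_sum                              # Grassmann's formula
assert dim_int >= 1                  # the intersection is never just {0}
```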