Is the Natural Functor on the Quotient Category Ever Not the Identity Functor on Objects? Let $C$ be a category and $C/R$ its quotient category. According to Wikipedia: There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor). Question: Notice the bolded phrase above ("bijective on objects"); why doesn't Wikipedia just state that this functor is the identity mapping on objects? Is it ever not the identity mapping (on objects)?
The short answer is yes: according to the definition provided in the link, the object part of the functor is the identity, which is precisely the reason why the functor is bijective on objects. I can only guess at possible reasons why the authors stressed the fact that the functor is bijective on objects instead of simply saying that it is the identity on objects. A possible reason could be that if you replace $\mathcal C/R$ with an isomorphic category you still get a category which deserves the name of quotient category (one of category theory's mottos is that we are interested in things up to, at least, isomorphism), but such a replacement does not ensure that the objects of the new quotient category are the same as the objects of $\mathcal C$.
Proof by induction of divisibility Prove by induction that $$x_n=10^{3n+2} + 4(-1)^n\text{ is divisible by 52 for } n\in \mathbb N$$ For now I did it like this: $$\text{for } n=0:$$ $$10^2+4=104$$ $$104=2\cdot 52$$ $$\text{Let's assume that:}$$ $$x_n=10^{3n+2} + 4(-1)^n=52k$$ $$\text{so}$$ $$4(-1)^n=52k-10^{3n+2}$$ $$\text{for } n+1:$$ $$\text{after transformations I get something like:}$$ $$52k=10^{3n+3}$$ But I'm sure that I did the last step wrong. Actually I don't know when the proof is done; if you could help me I would be thankful.
$\begin{align}{\bf Hint}\qquad\qquad {\rm mod}\,\ 52\!:\qquad \color{#0a0}{10^{\Large 3}} (\color{#0a0}{-4})\, &\equiv\, (\color{#0a0}{-4})(\color{#0a0}{-1})\\[.3em] 10^{\Large 3n+2} &\equiv\, (\color{}{{-}4})(-1)^{\Large n}\qquad\! {\rm i.e.}\ \ P(n) \\[.3em] {\rm scale\ prior\ by\ 10^{\Large 3}} \Rightarrow\ \ \color{}{10^{\Large 3}}10^{\Large 3n+2} &\equiv\ \color{#0a0}{10^{\Large 3}}\,(\color{#0a0}{{-}4})(-1)^{\Large n} \\[.3em] \Rightarrow\ 10^{\Large 3(\color{#c00}{n+1})+2}\! &\equiv (\color{#0a0}{-4})(\color{#0a0}{-1})(-1)^{\Large n}\\[.3em] &\equiv (-4)(-1)^{\Large \color{#c00}{n+1}}\ \ \ {\rm i.e.}\ \ P(\color{#c00}{n\!+\!1})\\ \end{align}$ Remark $\ $ More generally the same method shows $$\begin{align} a^{\Large 2}&=\, b\\ a^{\Large3}b\, &=\, c\ (= ab^2)\\ \Rightarrow\ a^{\Large 3n+2} &=\, b\, c^{\large n}\end{align}$$ OP is the special case $\ a,b,c = 10,-4,-1\,$ in $\,\Bbb Z_{52} = $ integers mod $52$
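A quick numeric check of the claim (not part of the induction argument, just reassurance), in Python:

```python
# Verify 52 | 10^(3n+2) + 4*(-1)^n for small n, using exact integer arithmetic.
for n in range(10):
    assert (10**(3*n + 2) + 4*(-1)**n) % 52 == 0
print("divisibility holds for n = 0..9")
```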
Symmetry of the domain? For the triple integral $$\iiint_D (z^2+z) \,dx\,dy\,dz$$ over the domain $D:x^2+y^2+z^2\leq4,\quad z^2\leq x^2+y^2$ The textbook states that by symmetry of the domain the integral simplifies to $$\iiint_D z^2 \,dx\,dy\,dz.$$ How exactly do they arrive at that conclusion?
This is because $$ \iiint_D z\,dV = 0 $$ Why is that? If $(a,b,c)$ is in $D$, then so is $(a,b,-c)$ (both inequalities have $z^2$ in them). So each point $P$ in $D$ has a mirror-image point $P'$ on the opposite side of the $xy$-plane. If $f(x,y,z) = z$, then $f(P') = -f(P)$. Accumulating all the points $P$ above the $xy$-plane, their $f$-values cancel in pairs with their corresponding points $P'$.
The norm of conjugate operator in Banach space I want to prove that $\|T\| = \|T^{*}\|$ where $T: E_1 \to E_2$, $T^{*}: E^{*}_2 \to E^{*}_1$ are operators between two Banach spaces and their dual spaces respectively. Let $f \in E^{*}_2$, $x \in E_1$. From $T^{*}f(x) = f(Tx)$ one gets by the composition rule that $|T^{*}f(x)| \leq \|T\|\,\|f\|\,\|x\|$, and taking $\sup$ concludes $\|T^{*}\| \leq \|T\|$. Now I need to show the converse inequality, but how can I do that? The hint says that the Hahn-Banach theorem is needed. Or is there an easier way than the one I chose?
Fixing $x\in E_1$ with $\|x\|\leq1$, the Hahn-Banach theorem implies there is some $f\in E_2^*$ such that $\|f\|=1$ and $|T^*f(x)|=|f(Tx)|=\|Tx\|$. Thus $$\|Tx\|=|T^*f(x)|\leq\|T^*\|\|f\|\|x\|\leq\|T^*\|$$ Now taking the supremum over $x$, we obtain $\|T\|\leq\|T^*\|$.
if $\sum_0^\infty a_n x^n = (\sum_0^\infty x^n )(\sum_0^\infty x^{2n})$ what is $a_n$? if $$\sum_0^\infty a_n x^n = (\sum_0^\infty x^n )(\sum_0^\infty x^{2n})$$ what is $a_n$? Here is my approach: let $b_n= 1$ and $c_n= x^n$. Then by forming / relating to the Cauchy product we can conclude that the product is equal to: $$\sum_0^\infty a_n x^n$$ where $a_n = \sum_{k=0}^{n} b_k c_{n-k} = \sum x^{n-k}= x^n+x^{n-1}+...+x^0 = \frac{1-x^{n+1}}{1-x}$ What do you think about my approach? I feel like there is something wrong with it. If not - could you provide a different approach? By the way, depending on the way I solve it, I keep finding different answers. I also ended up with $a_n=\frac{3+2n+(-1)^n}{4}$ without using the Cauchy product.
Generally, for $$\sum_0^\infty c_n x^n = (\sum_0^\infty a_n x^n )(\sum_0^\infty b_nx^{n})$$ you can define $b_{2k+1} =0$ so that $$\sum_0^\infty c_n x^n = (\sum_0^\infty a_n x^n )(\sum_0^\infty b_{2k}x^{2k})$$ Then you can use the Cauchy product formula. Now for the Cauchy product $$\sum_0^\infty c_n x^n = (\sum_0^\infty a_n x^n )(\sum_0^\infty b_nx^{n})$$ let $a_n=1$ and $b_{2k}=1,b_{2k+1} = 0$; we conclude $$c_n = \sum_{k=0}^{n} b_k a_{n-k} = \sum_{k=0}^nb_k $$ Let us see the case for even $n = 2j$: $$\sum_{k=0}^{2j} b_k =\sum_{k=0}^{j} b_{2k} +\sum_{k=0}^{j-1} b_{2k+1} = \sum_{k=0}^{j} 1 = j + 1 = \frac{n}{2} +1 $$ Let us see the case for odd $n = 2j +1$: $$\sum_{k=0}^{2j+1} b_k =\sum_{k=0}^{j} b_{2k} +\sum_{k=0}^{j} b_{2k+1} = \sum_{k=0}^{j} 1 = j + 1 = \frac{n-1}{2} +1 $$ We deduce that $$\left(\sum_{n=0}^\infty x^n \right) \left(\sum_{n=0}^\infty x^{2n} \right) = \sum_{n=0}^\infty \left(\left\lfloor \frac{n}{2} \right\rfloor+1 \right)x^n$$ Now since $$\left\lfloor \frac{n}{2} \right\rfloor+1 =\frac{3+(-1)^n+2n}{4}$$ we conclude that $$\left(\sum_{n=0}^\infty x^n \right) \left(\sum_{n=0}^\infty x^{2n} \right) = \sum_{n=0}^\infty \left(\frac{3+(-1)^n+2n}{4} \right)x^n$$ Another example: calculate the series expansion at $x=0$ of the integral $\int \frac{xy\arctan(xy)}{1-xy}dx$
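As a sanity check of the closed form, one can multiply truncated coefficient lists directly; a small Python sketch:

```python
# Cauchy product of sum x^n (all coefficients 1) with sum x^{2n}
# (coefficients 1, 0, 1, 0, ...), compared against floor(n/2) + 1.
N = 20
a = [1] * N
b = [1 if k % 2 == 0 else 0 for k in range(N)]
c = [sum(b[k] * a[n - k] for k in range(n + 1)) for n in range(N)]
assert c == [n // 2 + 1 for n in range(N)]
assert c == [(3 + (-1)**n + 2*n) // 4 for n in range(N)]
print(c[:8])  # [1, 1, 2, 2, 3, 3, 4, 4]
```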
Directional Derivative of functions $f:S\rightarrow\mathbb{R}$ Suppose $S\subseteq\mathbb{R}^3$ is a regular surface and let $f:S\rightarrow \mathbb{R}$ be differentiable. I know that a vector $v\in T_pS$ is of the form $v=\alpha'(0)$, where $\alpha:]-\epsilon,\epsilon[\rightarrow S$ is such that $\alpha(0)=p$. Now we have defined the directional derivative of f with direction v at point p as $Df_p(v)=(f\circ\alpha)'(0)$. My goal is to prove that $Df_p$ is linear. And this should be easy but I don't know how to do it. Let $v,w\in T_pS$ and choose $\alpha,\beta$ curves such that $\alpha'(0)=v$, $\beta'(0)=w$. I want to prove that $D_{v+w}f(p)=D_vf(p)+D_wf(p)$, right? Should I fix $\gamma$ such that $\gamma'(0)=v+w$? I don't seem to go anywhere. Thank you very much!
Yes, this sort of abstract notation can be confusing sometimes. Let me write your expression for the directional derivative in coordinates, expanding it out using the chain rule: $$ Df_p (v) = \frac{d(f \circ \alpha)} {dt} (0)= \frac{\partial f}{\partial x}(p) \frac{d \alpha_x}{dt}(0) + \frac{\partial f}{\partial y}(p) \frac{d \alpha_y}{dt}(0) + \frac{\partial f}{\partial z}(p) \frac{d \alpha_z}{dt}(0) $$ $$ \ \ \ \ = \frac{\partial f}{\partial x}(p) v_x + \frac{\partial f}{\partial y}(p) v_y + \frac{\partial f}{\partial z}(p)v_z.$$ Similarly, replacing $v$ with $w$ and replacing $\alpha$ with $\beta$, we get $$ Df_p (w) = \frac{\partial f}{\partial x}(p) w_x + \frac{\partial f}{\partial y}(p) w_y + \frac{\partial f}{\partial z}(p)w_z.$$ Finally, replacing $v$ with $v+ w$ and replacing $\alpha$ with $\gamma$, we get $$ Df_p (v+w) = \frac{\partial f}{\partial x}(p) (v_x+ w_x) + \frac{\partial f}{\partial y}(p) (v_y + w_y) + \frac{\partial f}{\partial z}(p) (v_z + w_z).$$ Now that we have written all of this out, it should be clear that $$ Df_p (v+w) = Df_p(v) + Df_p(w).$$ which is the same as saying that $Df_p$ acts linearly on tangent vectors in $T_p$.
$T := \inf \{ t \geq 0 : B_t = S_1 \}$ not a stopping time Let $B_t$ be a Brownian motion, and let $S_t = \sup_{0\leq s\leq t}B_s$. Show that $T := \inf \{ t \geq 0 : B_t = S_1 \}$ is not a stopping time. It suffices to show e.g. $\{T \leq 1/2\} \not\in \mathscr{F}_{1/2}$ . This amounts to proving, $$\left\{ \sup_{s\in [0, 1/2]} B_s \geq \sup_{s\in [1/2, 1]} B_s \right\} \not\in \mathscr{F}_{1/2}$$ This seems pretty obvious to me: at time $t = 1/2$, you know $\sup_{s\in [0, 1/2]} B_s$ is some number, but cannot tell whether or not $B_s$ will cross above that number in $s\in [1/2, 1]$. But I am stuck figuring out how to make this intuition rigorous.
Let $M = \sup_{0 \le t \le 1/2} B_t$ and $M' = \sup_{1/2 \le t \le 1} (B_t - B_{1/2})$. Then $M$ is in $\mathcal{F}_{1/2}$ and $M'$ is independent of $\mathcal{F}_{1/2}$. Let $F$ be the cdf of $M'$. Note that $F(x) < 1$ for all $x$. Clearly $T \le 1/2$ iff $M' \le M - B_{1/2}$. Verify using properties of conditional expectation that $$P(T \le 1/2 \mid \mathcal{F}_{1/2}) = P(M' \le M - B_{1/2} \mid \mathcal{F}_{1/2}) = F(M - B_{1/2}) < 1 \qquad \text{a.s.}$$ If $\{T \le 1/2\}$ were $\mathcal{F}_{1/2}$-measurable, then $P(T \le 1/2 \mid \mathcal{F}_{1/2}) = 1_{\{T \le 1/2\}}$ almost surely. The above would then imply $1_{\{T \le 1/2\}} < 1$ a.s., which is to say $P(T \le 1/2) = 0$. That is absurd, since the maximum of $B$ over $[0,1]$ is attained on $[0,1/2]$ with positive probability. Alternative, slick solution: clearly $T \le 1$. If $T$ is a stopping time, then it is a bounded stopping time, so by the optional stopping theorem we have $E[B_T] = E[B_0] = 0$. But obviously $B_T = S_1 \ge 0$, so we conclude $B_T = S_1 = 0$ almost surely. That is likewise absurd.
Find a function $f:X\times X\to \mathbb{R}$ such that restrictions to some sets are continuous but $f$ is not continuous. Find a function $f:X\times X\to \mathbb{R}$ such that, for every $x\in X$, $f\restriction{X\times \{x\}}$ is continuous and $f\restriction{\{x\}\times X}$ is continuous but $f$ is not continuous. I have been thinking, and I thought of functions continuous in either $\{x\}\times X$ or $X\times \{x\}$, and not continuous, but not in both. For example, take $f:X\times X\to \mathbb{R}$ such that $f(x,y)=\frac{1}{x}$ if $x\in \mathbb{Q}$ and $f(x,y)=0$ if $x\not\in \mathbb{Q}$. This function restricted to $\{x\}\times X$ will be continuous for every $x\in X$, but not in $X\times \{x\}$. Any help would be appreciated.
One traditional example is $\displaystyle f(x,y) = \begin{cases} \displaystyle\frac{xy}{x^2+y^2}, &\text{if } (x,y)\ne(0,0), \\ 0, &\text{if } (x,y)=(0,0). \end{cases} $
Finding points within a multi-variable calculus function Consider the following function. $f(x, y) = (y + 6)\ln x - xe^{2y} - x(y - 5)^5$ (a) Find $f_x(1, 0)$. (b) Find $f_y(1, 0)$. I know that I should separate them into two different equations but I do not know how to separate it. I believe that after that I should plug the numbers of the point into the equations to get the final answer, but I do not know how to make them into two equations. I could really use the help.
$f_x(x,y)=\frac{y+6}{x}-e^{2y}-(y-5)^5$, so $f_x(1,0)=\frac{0+6}{1}-e^{2(0)}-(0-5)^5 = 6 - 1 + 3125 = 3130$, and $f_y(x,y) = \ln x-2xe^{2y}-5x(y-5)^4$, so $f_y(1,0) = \ln(1)-2(1)e^{2(0)}-5(1)(0-5)^4 = 0 - 2 - 3125 = -3127$
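A short sympy check of both values, assuming the function is $f(x,y)=(y+6)\ln x - xe^{2y} - x(y-5)^5$ as reconstructed above:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = (y + 6)*sp.log(x) - x*sp.exp(2*y) - x*(y - 5)**5
print(sp.diff(f, x).subs({x: 1, y: 0}))  # 3130
print(sp.diff(f, y).subs({x: 1, y: 0}))  # -3127
```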
How to represent this parametrically? For the purpose of solving a problem involving manifolds, I want to know how to represent this situation... I have an $S^2$ sphere $(x^2 +y^2 + z^2 =1)$ and a point $(a,b)$ in the plane $\mathbb{R}^2$. I want to connect it with the north pole of the sphere $(0,0,1)$ by a line to compute its intersection with $S^2$. How do I find the equation of this line?
The line goes through the points: $$P_1(0,0,1)$$ $$P_2(a,b,0)$$ The direction vector, from $P_1$ to $P_2$ is, $\langle a-0,b-0,0-1 \rangle$. This is $\langle a,b,-1 \rangle$. Then the equation of the line is given by the position vector function, $$\vec r(t)=\langle a,b,-1 \rangle t+\langle 0,0,1 \rangle$$ So we may parametrize as follows, $$x=at$$ $$y=bt$$ $$z=1-t$$
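To finish the computation asked for in the question: substituting this parametrization into $x^2+y^2+z^2=1$ gives $$a^2t^2+b^2t^2+(1-t)^2=1 \quad\Longleftrightarrow\quad t\left((a^2+b^2+1)t-2\right)=0,$$ so besides $t=0$ (the north pole itself) the line meets the sphere at $t=\frac{2}{a^2+b^2+1}$, i.e. at the point $\left(\frac{2a}{a^2+b^2+1},\ \frac{2b}{a^2+b^2+1},\ \frac{a^2+b^2-1}{a^2+b^2+1}\right)$, which is precisely the inverse of the stereographic projection from the north pole.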
How to prove floor function inequality $\sum\limits_{k=1}^{n}\frac{\{kx\}}{\lfloor kx\rfloor }<\sum\limits_{k=1}^{n}\frac{1}{2k-1}$ for $x>1$ Let $x>1$ be a real number. Show that for any positive $n$ $$\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor }<\sum_{k=1}^{n}\dfrac{1}{2k-1}\tag{1}$$ where $\{x\}=x-\lfloor x\rfloor$ My attempt: I try use induction prove this inequality. It is clear for $n=1$, because $\{x\}<1\le \lfloor x\rfloor$. Now if assume that $n$ holds, in other words: $$\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor }<\sum_{k=1}^{n}\dfrac{1}{2k-1}$$ Consider the case $n+1$. We have $$\sum_{k=1}^{n+1}\dfrac{\{kx\}}{\lfloor kx\rfloor }=\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor }+\dfrac{\{(n+1)x\}}{\lfloor (n+1)x\rfloor}<\sum_{k=1}^{n}\dfrac{1}{2k-1}+\dfrac{\{(n+1)x\}}{\lfloor (n+1)x\rfloor}$$ It suffices to prove that $$\dfrac{\{(n+1)x\}}{\lfloor (n+1)x\rfloor}<\dfrac{1}{2n+1}\tag{2}$$ But David gives an example showing $(2)$ is wrong, so how to prove $(1)$?
I tried all day and couldn't prove it but I made a little progress: Let's define $\{x\}'$ to be 1 if $x$ is an integer and $\{x\}$ otherwise, and note that the LHS of the original inequality satisfies $$\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor} \leq \sum_{k=1}^{n}\dfrac{\{kx\}'}{\lceil kx\rceil-1}\tag{1}$$ If $a=\lceil nx\rceil$ then $\lceil kx\rceil=\lceil k\frac an\rceil$ for $k=1,2,... n$ (can be proved by contradiction), and the modified fractional part is non-decreasing, so it suffices to prove that $$\sum_{k=1}^{n}\dfrac{\{k\frac an\}'}{\lceil k\frac an\rceil-1}<\sum_{k=1}^{n}\dfrac{1}{2k-1}$$ for integers $a\in (n,2n)$ (since we can assume $1<x<2$). The RHS of (1) can be rewritten as $$\sum_{k=1}^{n}\dfrac{\{kx\}'}{kx-\{kx\}'}=\sum_{k=1}^{n}\dfrac{1}{kx/\{kx\}'-1}$$ since $\lceil kx\rceil=kx+(1-\{kx\}')$. Letting $x=\frac an$, if $\gcd(a,n)=1$ then $\{\{kx\}':k=1,2,...n\}=\{\frac 1n,\frac 2n,...\frac nn\}$. For $k\in[1,n-1]$, let $t\in[1,n]$ be the unique value such that $t\equiv ak\pmod{n}$. Then $\{kx\}'=\frac tn$ and $k=[a^{-1}t]_n$ so we can write our summation with index $t$: $$\frac1{a-1}+\sum_{k=1}^{n-1}\dfrac{1}{kx/\{kx\}'-1}=\frac1{a-1}+\sum_{t=1}^{n-1}\dfrac{1}{k\frac an/\frac tn-1}$$ $$=\frac1{a-1}+\sum_{t=1}^{n-1}\dfrac{1}{k\frac at-1}$$ Now since $ka\equiv t\pmod n$ we have $ka=u_tn+t$ for some $u_t\in[1,a]$, so this then becomes $$=\frac1{a-1}+\sum_{t=1}^{n-1}\dfrac{1}{\frac {u_tn+t}t-1}$$ $$=\frac1{a-1}+\frac 1n\sum_{t=1}^{n-1}\dfrac{t}{u_t}$$ I stopped at this point but I'll try to see if I can turn it into a proof tomorrow. Comment if you have any ideas!
Prove $g(x)=\frac{f(x)-f(a)}{x-a}$ is increasing Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be a differentiable function such that $f'$ is increasing. Setting $a\in\mathbb{R}$, prove: the function $g(x)=\frac{f(x)-f(a)}{x-a}$ is also an increasing function on $(a,\infty)$ and $(-\infty,a)$ My thoughts: By MVT: $\exists c\in(a,x_0) \ s.t. f'(c)=\frac{f(x_0)-f(a)}{x_0-a}, \ \exists d\in(x_0,x_1) \ s.t. \ f'(d)=\frac{f(x_1)-f(x_0)}{x_1-x_0}$ $d>c \Rightarrow f'(d)\geq f'(c)$ Now I got stuck since I couldn't show $\frac{f(x_1)-f(a)}{x_1-a} \geq \frac{f(x_1)-f(x_0)}{x_1-x_0}$. Any help appreciated.
Note that $g$ is differentiable when $x\ne a$. Thus, we only need to show that $$g'(x)=\frac{f'(x)(x-a)-(f(x)-f(a))\cdot1}{(x-a)^2}\ge 0,$$ or $$f'(x)(x-a)\ge f(x)-f(a).$$ If $x>a$, we only need to show that $$f'(x)\ge\frac{f(x)-f(a)}{x-a}.$$ By MVT, the RHS is $f'(c)$ for some $c\in[a,x]$. And clearly $$f'(x)\ge f'(c).$$ The case $x<a$ is analogous.
Is there a binary system where $1000 = -1 $? And if not, why? My first thought, after seeing that $0000 = 0$ and $1000 = -0$ waste one possible number, was to start the negative numbers with $1000 = -1$, so that you just have to write the number, flip the first bit, and subtract one. It's way easier to see which number is meant, for humans at least. Why isn't it used or mentioned on Wikipedia as a possible approach?
A different but similar system as you describe is used to store signed integers in computers: Two's complement The following reason is probably why two's complement is used instead of your system: (from Wikipedia) The two's-complement system has the advantage that the fundamental arithmetic operations of addition, subtraction, and multiplication are identical to those for unsigned binary numbers (as long as the inputs are represented in the same number of bits and any overflow beyond those bits is discarded from the result). This property makes the system both simpler to implement and capable of easily handling higher precision arithmetic. So a processor needs only one add instruction, not one for adding two unsigned (non-negative) numbers and one for adding two signed numbers, which would be the case with your system.
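A small Python sketch of 4-bit two's complement, illustrating the quoted point: the plain unsigned adder works for signed values once overflow beyond 4 bits is discarded.

```python
BITS, MASK = 4, 0b1111

def to_twos(n):    # encode a signed value in 4 bits
    return n & MASK

def from_twos(u):  # decode a 4-bit pattern back to a signed value
    return u - (1 << BITS) if u & 0b1000 else u

a, b = to_twos(-3), to_twos(5)       # 0b1101 and 0b0101
s = (a + b) & MASK                   # ordinary unsigned addition
print(bin(a), bin(b), from_twos(s))  # 0b1101 0b101 2
```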
Limit of ratio of integrals Consider three continuous functions $f,u,v:[0,1] \rightarrow (0,1)$ and a sequence of integers $0 \leq a_n \leq n $ such that \begin{equation} \lim_{n \rightarrow + \infty}{\frac{a_n}{n}}=\alpha \in (0,1) \end{equation} Suppose that there is a unique $y$ (resp. $z$) such that $u(y)=\alpha$ (resp. $v(z)=\alpha$). I am trying to show that \begin{equation} \lim_{n \rightarrow +\infty} \frac {\int{f(x) u(x)^{a_n} (1-u (x))^{n-a_n}}dx}{\int{f(x) v(x)^{a_n} (1-v(x))^{n-a_n}} dx} = \frac{f(y)}{f(z)} \end{equation} The reason why I suspect that this property might be valid is because an analogous result holds true in the discrete case. Take $N$ real numbers $(x_1,\cdots,x_N)$ in $[0,1]$ and three sequences $(f_1,\cdots,f_N)$, $(u_1,\cdots,u_N)$ and $(v_1,\cdots,v_N)$ with values in $(0,1)$. Suppose that there is a unique index $i \in \{1,\cdots,N\}$ (resp. $j \in \{1,\cdots,N\}$) such that $u_i=\alpha$ (resp. $v_j=\alpha$). Then it is easy to show that for all $k \ne i$ \begin{equation*} \lim_{n \rightarrow +\infty} \dfrac{f_k u_k^{a_n} (1-u_{k})^{n-a_n}}{f_i u_{i}^{a_n} (1-u_{i})^{n-a_n}} = 0 \end{equation*} And therefore \begin{equation*} \displaystyle \sum_{k=1}^{N}{f_k u_k^{a_n} (1-u_{k})^{n-a_n}} \displaystyle \sim f_i \alpha^{a_n} (1-\alpha)^{n-a_n} \end{equation*} Similarly, \begin{equation*} \sum_{k=1}^{N}{f_k v_k^{a_n} (1-v_{k})^{n-a_n}} \sim f_j \alpha^{a_n} (1-\alpha)^{n-a_n} \end{equation*} And thus \begin{equation*} \dfrac{\sum_{k=1}^{N}{f_k u_k^{a_n} (1-u_{k})^{n-a_n}}}{\sum_{k=1}^{N}{f_k v_k^{a_n} (1-v_{k})^{n-a_n}}} \sim \dfrac{f_i}{f_j} \end{equation*} I have been unable to make any progress in the continuous case. Any advice would be much appreciated. Thanks!
The result fails. Suppose we define $a_n = n/2$ for $n$ even, $a_n = (n+1)/2$ for $n$ odd. Then we have $\alpha = 1/2.$ Simply take $f\equiv c$ for some constant $c\in (0,1).$ Finally, define $$u(x) = \frac{1-|x-1/2|}{2},\, v(x) = \frac{1-|x-1/2|^{1/2}}{2}.$$ For both $u,v$ the unique point where these functions take on the value $\alpha = 1/2$ is $x=1/2.$ If the result were true in this case, then the limit of the ratios of the integrals would be $1.$ I'll show that this limit is actually $\infty.$ Since $f(x)$ is constant, it slides out of the integrals and cancels. Suppose $n$ is even. The expression equals $$\frac {\int_0^1 [u(x)(1-u(x))]^{n/2} \, dx}{\int_0^1 [v(x)(1-v(x))]^{n/2} \, dx} = \frac {\int_0^1 [(1-(x-1/2)^2)/4]^{n/2} \, dx}{\int_0^1 [(1-|x-1/2|)/4]^{n/2} \, dx} $$ $$\tag 1=\frac {\int_0^1 (1-(x-1/2)^2)^{n/2} \, dx}{\int_0^1 (1-|x-1/2|)^{n/2} \, dx}$$ Now here's the thing: In the numerator of $(1)$ we have $1-(x-1/2)^2,$ which is an upside down parabola that peaks with value $1$ at $x=1/2.$ Downstairs we have $1-|x-1/2|,$ which also peaks at $x=1/2$ with value $1.$ But notice that as $x$ moves away from $1/2,$ $1-(x-1/2)^2$ decreases from $1$ at a slower rate than does $1-|x-1/2|.$ That will lead to the numerator $\to 0$ as $n\to \infty$ at a slower rate than does the denominator. Claim: As $n\to \infty,$ the numerator in $(1)$ is on the order of $1/\sqrt n,$ while the denominator is on the order of $1/n.$ Thus the ratio in $(1)$ is on the order of $\sqrt n \to \infty.$ Here's the proof of the claim for the denominator: By symmetry, the integral is twice the integral from $0$ to $1/2.$ For this integral, let $x=1/2-y$ to get $$\int_0^{1/2}(1-y)^{n/2}\, dy.$$ Now let $y = z/n.$ We get $$\frac{1}{n}\int_0^{n/2}(1-z/n)^{n/2}\, dz.$$ As $n\to \infty,$ the last integral $\to \int_0^\infty e^{-z/2}\, dz.$ Thus these integrals are on the order of $1/n$ as claimed. I'll leave the claim for the numerators to you for the moment.
Finding Tens Digit of Very Large Number I think I can figure out the leading digit or the units digit, but how would one find the tens digit of, say, $$7^{100}$$ without having a calculator or web app that actually displays the full integer. Is there a way to find the tens digit given a scientific calculator that can only display the answer in scientific notation? Also, I have no idea what to tag this question.
The tens digit of $n$ can be read off of $n\pmod {100}$. In this case, $7^4\equiv 1 \pmod {100}$ so $7^{100}\equiv 1 \pmod {100}$, so the answer is $0$.
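The same computation done by machine, via Python's built-in modular exponentiation:

```python
print(pow(7, 100, 100))  # 1, so 7^100 ends in ...01: the tens digit is 0
```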
Series with radius of convergence 1 that diverges on roots of unity, converges elsewhere on the circle. I recall from complex analysis a while ago that the series $$\sum_{n=1}^\infty \frac{z^{n!}}{n}$$ has radius of convergence 1, diverges on roots of unity, and converges elsewhere on the circle. It's simple enough to see that it has to diverge on roots of unity: consider $z$ such that $z^k = 1$ for some $k \in \mathbb{N}.$ Plug this into the series and you see that past the $k$'th term, $z^{n!} = (z^k)^m = 1$ for some $m \in \mathbb{N}$; hence, the series becomes harmonic and must diverge. But I don't recall how to show that the series converges for other values on the circle. If this is simple I'd like just a hint, if it is involved I would prefer a sparse outline of the proof. Thank you!
Convention: On the unit circle, we write $z=e^{2\pi i \theta}$ with $\theta\in\mathbb{R}$. Then the roots of unity are exactly the points with $\theta\in\mathbb{Q}$. Let $\{\alpha\}$ be the fractional part of $\alpha$. Since $z^{n!} = e^{2\pi i \theta n!}$, the distribution of $\{\theta n!\}$ becomes important to consider. There is an irrational $\theta$ for which the series diverges: If we take $\theta = e$, then the series diverges. This is because $\{en!\}$ converges to $0$. The set of all $\theta$ for which the series diverges is a $G_{\delta\sigma}$ set: This result is intensively discussed in this post in MO. This is because the set of all $\theta$ for which the series converges is a $F_{\sigma\delta}$ set. For almost all $\theta$ the series converges: This is due to a result by Lebesgue. The function $g(\theta, n) = \theta n!$ satisfies the hypotheses of Theorem 1 in the paper. Therefore, for every $\epsilon>0$, the series $$ \sum_{n=1}^{\infty} \frac{ e^{2\pi i \theta n!} } {n^{1/2+ \epsilon}} $$ converges for almost all $\theta$. This shows that the original series (take $\epsilon=\frac12$) converges for almost all $\theta$. Hence the set of all $\theta$ for which the series diverges is a $G_{\delta\sigma}$ set of Lebesgue measure zero which contains all rational numbers and $e$, and it does not contain $e/2$ by @user254665's answer.
Residues of Gamma Function at Poles I am struggling to understand the result that the analytic continuation of the Gamma function $\Gamma(z)$ is analytic on $\mathbb C \setminus \{0,-1,-2,\dots\}$, with poles at $\{0,-1,-2,\dots\}$ and residue $(-1)^k/k!$ at the pole $-k$. How may this be extended, through the functional relation, to give the residues of $\Gamma(z+1)$ or $z \Gamma(z)$?
Everything just follows from $\Gamma(z+1)=z\,\Gamma(z)$. For instance, since $\Gamma(1)=1$ and $\Gamma(z)=\frac{\Gamma(z+1)}{z}$, $z=0$ is a simple pole for the $\Gamma$ function with residue $1$. By induction, every negative integer is a simple pole for the $\Gamma$ function and $$\text{Res}\left(\Gamma(z),z=-n\right) = \lim_{z\to -n}(z+n)\Gamma(z)=\lim_{z\to -n}\frac{\Gamma(z+n+1)}{z(z+1)\cdots(z+n-1)}=\color{red}{\frac{(-1)^n}{n!}}. $$ Similarly, for any negative integer $-n$ we have $$\text{Res}\left(\Gamma(z+1),z=-n\right)=\color{red}{\frac{(-1)^{n-1}}{(n-1)!}}.$$
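A numerical sanity check with mpmath: near $z=-n$ the function behaves like $\text{Res}/(z+n)$, so $\epsilon\,\Gamma(-n+\epsilon)$ should approach $(-1)^n/n!$.

```python
from mpmath import mp, gamma, factorial

mp.dps = 30
eps = mp.mpf('1e-15')
for n in range(5):
    # eps * gamma(-n + eps) ~ residue at z = -n
    print(n, eps * gamma(-n + eps), (-1)**n / factorial(n))
```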
Why is $ab=(a-b)(a+b)$ false when $a,b$ are coprime? I could not find this, and do not have a background in number theory so sorry if this sounds trivial. If $a$ and $b$ are coprime integers, why is the statement $$ab=(a-b)(a+b)$$ a contradiction?
If $a$ and $b$ are coprime integers, they cannot both be even; and they cannot both be odd either when $ab=(a-b)(a+b)$, for then $ab$ would be odd while $(a-b)(a+b)$ would be even. So either $a$ is even and $b$ is odd or the other way round. In any case, $ab$ would then be even while $(a-b)(a+b)$ would be odd, quod est absurdum.
How can I 'prove' the derivative of this function? Consider the function $$ f: (-1, 1) \rightarrow \mathbb{R}, \hspace{15px} f(x) = \sum_{n=1}^{\infty}(-1)^{n+1} \cdot \frac{x^n}{n} $$ I am required to show that the derivative of this function is $$ f'(x) = \frac{1}{1+x} $$ I have attempted to do this using the elementary definition of a limit, as follows: $$ f'(x) = \lim_{a \rightarrow x} \left( \frac{f(a)-f(x)}{a-x} \right) = \lim_{a \rightarrow x} \left( \frac{ \sum_{n=1}^{\infty} \left[ (-1)^{n+1} \cdot \frac{a^n}{n} \right]- \sum_{n=1}^{\infty}\left[ (-1)^{n+1} \cdot \frac{x^n}{n} \right]}{a-x} \right) \\ = \lim_{a \rightarrow x} \left( \frac{ \sum_{n=1}^{\infty} \left[ (-1)^{n+1} \cdot \frac{a^n - x^n}{n} \right]}{a-x} \right) $$ but I am unsure of how to proceed from here (assuming my approach is correct). Can someone help me to show this?
Use the fact that $a^n-x^n=(a-x)\sum_{i=1}^na^{n-i}x^{i-1}$; canceling with the denominator and taking the limit as $a\to x$ we get $(-1)^{n+1}nx^{n-1}$ in the numerator. Cancel the $n$ and now prove that the resulting sum $\sum_{n\geq 1}(-1)^{n+1}x^{n-1}$ converges to $\frac{1}{1+x}$ (it is a geometric series with ratio $-x$, convergent for $|x|<1$).
Relationship between the quaternionic group and quaternionic numbers? The quaternionic group $\mathcal{Q}$ consists of the elements $1$, $-1$, $i$, $-i$,$j$,$-j$,$k$,$-k$ that satisfy the multiplication rules $$i^2=j^2=k^2=-1$$ $$ ij=-ji=k$$ $$jk=-kj=i$$ $$ki=-ik=j$$ The quaternionic numbers $$a+ib+cj+dk$$ form a division algebra. In Group Theory in a Nutshell on p61 A.Zee writes that those two structures are completely unrelated, but I almost can't swallow this. Are the quaternionic group and the quaternionic numbers really completely unrelated?
It's silly to say that they are completely unrelated. If you take just those quaternions for which precisely one of $a,b,c,d$ is non-zero, and the one that is non-zero is either 1 or $-1$, you obtain the quaternionic group. On the other hand, if you use the elements of the quaternionic group as the basis of an 8-dimensional vector space over $\mathbb R$, and then add relations to make $-x$ the negation of $x$ for each element of the group (creating a 4-dimensional vector space), and define a multiplication on that vector space by using the multiplication rules of the quaternionic group, you get precisely the algebra of the quaternions. Presumably, your book just wanted to emphasize that you should not confuse the two structures, as one is a group under multiplication, while one is an ($\mathbb R$-, or $\mathbb C$-)algebra.
Curvature and Torsion of a Curve I am currently working on a problem and think I know the answer but need verification. The question reads: Let there be a curve with non-zero curvature and zero torsion. Show this curve is planar. If the curve is allowed zero curvature at one point, does this above statement still hold? I have shown that the curve is planar with non-zero curvature and zero torsion. But when the curve has zero curvature $\textit{and}$ zero torsion, isn't the curve a straight line there? And if so, doesn't this straight line remain in the original plane normal to the constant $\boldsymbol b$?
Let $$f(t)=\begin{cases}(t,t^3,0)&t\leq 0\\(t,0,t^3)&t\geq 0\end{cases}$$ Here $f$ has zero torsion but is not planar. It has zero curvature only at $t=0$.
Algorithm for getting the point where line cuts the circle I have a circle and a few lines intersecting the circle. What I know about the problem is: * *The radius (R) of circle. *The center(C) of circle. *The start (S) and end (E) of lines. Using this information, how can I calculate the (green) points on the circle? I won't be doing this on paper but writing a method in C++. So, I need possibly a pseudocode algorithm which can do this.
Define your line segment parametrically: $$\begin{bmatrix} x \\ y \end{bmatrix} = S + (E-S)t \tag{P1}$$ Note that at $t = 0$, $\begin{bmatrix} x \\ y \end{bmatrix} = S$, and at $t = 1$, $\begin{bmatrix} x \\ y \end{bmatrix} = E$. Then your circle is given by $$(x - C_x)^2 + (y - C_y)^2 = r^2$$ Plug the line (P1) into the circle to find the $t$ value: $$(S_x + (E_x - S_x)t - C_x)^2 + (S_y + (E_y - S_y)t - C_y)^2 = r^2$$ This is a quadratic equation in $t$: $$At^2 + Bt + D = 0 \tag{P2}$$ where $A = (S_x - E_x)^2 + (S_y - E_y)^2$, $B = 2\left[(S_x - C_x)(E_x - S_x) + (S_y - C_y)(E_y - S_y)\right]$, and $D = (S_x - C_x)^2 + (S_y - C_y)^2 - r^2$. Solve (P2) for $t$ using the quadratic formula. Only the solutions (if there are any) with $0 \le t \le 1$ are on the line segment. Plug each such $t$ into (P1) to get the intersection.
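A minimal Python sketch of this recipe (function and variable names are mine, not from any particular library); it translates line for line to C++:

```python
from math import sqrt

def segment_circle_intersections(S, E, C, r):
    dx, dy = E[0] - S[0], E[1] - S[1]
    fx, fy = S[0] - C[0], S[1] - C[1]
    A = dx*dx + dy*dy                 # quadratic coefficients from (P2)
    B = 2*(fx*dx + fy*dy)
    D = fx*fx + fy*fy - r*r
    disc = B*B - 4*A*D
    if disc < 0:
        return []                     # the infinite line misses the circle
    hits = []
    for t in ((-B - sqrt(disc)) / (2*A), (-B + sqrt(disc)) / (2*A)):
        if 0 <= t <= 1:               # keep only points on the segment
            hits.append((S[0] + dx*t, S[1] + dy*t))
    return hits

print(segment_circle_intersections((-2, 0), (2, 0), (0, 0), 1))
# [(-1.0, 0.0), (1.0, 0.0)]
```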
Finding the Maclaurin series of a rational function confusion So I am told to find the Maclaurin series of this function: $f(x)=\frac{10}{x^2-2x-24}$ . The question is, how would I do that? A few people told me to use partial fractions, geometric series or binomial series. I can see the reasoning, but I don't see why though. A Maclaurin series by definition is $$f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n$$ So I would imagine that I would just take my $f(x)$, put it inside the summation and be done with it right there. The question doesn't ask me to find any number of terms or anything... It just says "Find the Maclaurin series for the following functions:"
$$f(x)=\frac{10}{(x-6)(x+4)} = \frac{1}{x-6}-\frac{1}{x+4}\tag{1}$$ and since in a neighbourhood of the origin $$ \frac{1}{x-6}=-\frac{1}{6-x}=-\frac{1/6}{1-x/6}=-\sum_{n\geq 0}\frac{x^n}{6^{n+1}}, $$ $$ \frac{1}{4+x}=\frac{1/4}{1+x/4}=\sum_{n\geq 0}\frac{(-1)^n x^n}{4^{n+1}}\tag{2} $$ we have: $$ f(x) = -\sum_{n\geq 0}\left(\frac{1}{6^{n+1}}+\frac{(-1)^n}{4^{n+1}}\right) x^n.\tag{3} $$ That is a power series with a positive convergence radius, hence it is the Taylor series of $f(x)$ at the origin (by the unicity of the Taylor series). In particular: $$ f^{(n)}(0) = -n!\left(\frac{1}{6^{n+1}}+\frac{(-1)^n}{4^{n+1}}\right) \tag{4}$$ that is not that easy to get from repeated differentiation, unless $(1)$ is performed before. That gives an alternative viable way: $(1)\mapsto RD\mapsto(4)\mapsto(3)$.
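One can confirm the first few coefficients with sympy:

```python
import sympy as sp

x, n = sp.symbols('x n')
f = 10 / (x**2 - 2*x - 24)
print(sp.series(f, x, 0, 4))  # -5/12 + 5*x/144 - 35*x**2/1728 + ...
coeff = -(sp.Rational(1, 6)**(n + 1) + (-1)**n * sp.Rational(1, 4)**(n + 1))
print([coeff.subs(n, k) for k in range(4)])  # matches the series coefficients
```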
How to solve the differential equation $x^2 \cos{y} + \dfrac{dy}{dx} x \sin{y} =\sin^2y$, given in a competitive exam? How do I solve $x^2 \cos{y} + \dfrac{dy}{dx} x \sin{y} =\sin^2y$? I found the problem in a competitive exam.
$$x^2 \cos{y} + \dfrac{dy}{dx} x \sin{y} =\sin^2y$$ The change of function $Y(x)=\cos(y(x))$ leads to a Riccati ODE, then to a second order linear ODE. HINT : Solving this second order linear ODE involves Bessel functions.
Why is it that $x^4+2x^2+1$ is reducible in $\mathbb{Z}[x]$ but has no roots in $\mathbb{Q}$? $\textbf{ Lemma:}$ A non-constant primitive polynomial $f(x) \in \mathbb{Z}[x]$ is irreducible in $\mathbb{Q}[x]$ if and only if $f(x)$ is irreducible in $\mathbb{Z}[x]$. I am reading a book in which it is given that $f(x)=x^4+2x^2+1$ is primitive in $\mathbb{Z}[x]$ and it has no roots in $\mathbb{Q}$ but it is reducible over $\mathbb{Z} $ as $f(x)=(x^2+1)(x^2+1)$. My question is: doesn't it contradict this lemma? Since irreducible over $\mathbb{Z}$ and irreducible over $\mathbb{Q}$ are the same thing for primitive polynomials.
It is not true that a polynomial is reducible over a field iff it has a root in the field. A polynomial is reducible when we can express it as a product of non-constant polynomials over the field. There is a result that a polynomial of degree 2 or 3 is reducible iff it has a root in the field, but in your example the degree of the polynomial is 4.
Given $n+1$ subsets of $\{1,2,...,n\}$, we can find two families of subsets with the same union Let $F=\{X_1,..X_{n+1}\}$ be a family of nonempty subsets of $\{1,2,...,n\}$. Show that there exist two disjoints subsets $I$ and $J$ of $\{1,2,...,n+1\}$ such that $\bigcup_{i \in I} X_i=\bigcup_{j \in J} X_j$ Apparently there is a solution using linear algebra but I don't see how.
Associate $X_i$ with the vector $\mathbf{v}_i$ of length $n$ which has a $1$ in coordinate $r$ if $r\in X_i$, and $0$ otherwise. All the $\mathbf{v}_i$ are in $\mathbb R^n$, which has dimension $n$, but there are $n+1$ of them so they are linearly dependent, i.e. there is some expression $\sum a_i\mathbf{v}_i=\mathbf{0}$, where the $a_i$ are not all zero. Write $I=\{i:a_i>0\}$ and $J=\{i:a_i<0\}$. Then $$\sum_{i\in I} a_i\mathbf{v}_i=\sum_{i\in J} -a_i\mathbf{v}_i.$$ Now the LHS has a positive coordinate anywhere in the union corresponding to $I$, and is $0$ everywhere else, and the RHS corresponds to $J$ in the same way, so the unions are equal. $a_i$ are not all zero, so at least one of $I,J$ is nonempty, but none of the $X_i$ is empty so they must both be nonempty.
Proving probabilistic classifier is optimal In my studies of probability within machine learning, I have come across the following setting: We have a domain $ \mathcal{X} \times \{0,1\} $ and two random variables $ X \subset \mathcal{X} $ and $ Y \in \{0,1\} $ with a joint probability distribution defined on $ \mathcal{X} \times \{0,1\} $, say $ \mathcal{D} $. We look at label functions of the form $ f : \mathcal{X} \to \{0,1\} $, and we seek a label function $ f $ such that the probability $ \mathcal{D}(\mathcal{X}=x,\mathcal{Y}=y : y \neq f(x)) $ is minimal. My textbook says that this is the Bayes classifier, which is: $ f(x) = \left\{ \begin{array}{ll} 1 & \mbox{if } \mathcal{D}(Y=1|X=x) \geq \frac{1}{2} \\ 0 & else \end{array} \right. $ I understand this intuitively, but can someone please provide a rigorous proof of this? I thank all helpers.
Since $D(Y\neq f(X))=1-D(Y= f(X))$, we can also focus on maximizing the latter in order to minimize the former (I am simplifying notation here a bit, feel free to let me know if it is unclear). We have that \begin{align*} D(Y= f(X))&=\sum_{x\in X}\sum_{y=f(x)}D(X=x,Y=y)\\ &=\sum_{x\in X} D(x)\sum_{y=f(x)}D(Y=y|X=x)\\ &=\sum_{x\in X} D(x) D(Y=f(x)|X=x) \end{align*} We thus have that \begin{align*} \sup_{f}D(Y= f(X))&=\sup_{f}\sum_{x\in X} D(x) D(Y=f(x)|X=x)\\ &=\sum_{x\in X} D(x) \sup_{f}D(Y=f(x)|X=x) \end{align*} In other words we are trying to find the function $f$ that maximizes $D(Y=f(x)|X=x)$ for all $x$. That function is precisely $$f(x)=\text{argmax}_{y=0,1}D(Y=y|X=x)$$ where by argmax I mean that you maximize the function but then return the argument (i.e. the $y$) that gave you the maximum instead of the maximum itself. Since $D(Y=0|X=x)+D(Y=1|X=x)=1$, this is exactly the Bayes classifier that was given.
Pigeonhole Principle of 18 pennies in a square of 6x6 I have a question regarding the Pigeonhole Principle. I essentially understand what it is but I am stuck on a problem: 18 pennies are placed on the squares of a 6x6 chess board randomly, up to one penny on each square of the 36. a) Prove that there is a row and column of the board that contain at least 5 pennies together. b) Show that there is an arrangement of the 18 pennies in a way such that no row-column combination would contain more than 6 pennies. How would I go about this problem? I just can't seem to grasp the concept 100% especially one that would involve some drawings. Thank you.
1) Assume that every row-column pair contained at most 4 pennies. Add up the number of pennies in each row-column pair across all row-column pairs. This gives at most $4\cdot 36$ pennies. However, we have counted each penny several times, specifically $12$ times (once for each of the $6$ pairs sharing its row and once for each of the $6$ pairs sharing its column). Dividing by $12$ yields at most $12$ pennies on the board. Therefore any arrangement with $4$ or fewer pennies in each row-column pair cannot have more than $12$ pennies, as adding one more penny on any square will break the rule. As noted in the comments, this same argument can be used to show that $5$ pennies in each row-column pair doesn't work (you find that there are at most $15$ pennies), and so in fact any arrangement of $18$ pennies requires at least $6$ pennies in some row-column pair. 2) Generally, "greedy" approaches (where you build the answer iteratively by making the best choice at each step) work for problems like this. If you work row-by-row filling each row with $3$ pennies in a way that makes no column have more than $3$ pennies, pretty much every possible arrangement works. For example, both the two-blocks-of-nine approach and the shift-the-row-by-one approach work. You'll note that both of these approaches produce $6$ pennies in every row-column pair. I postulate that this is necessarily the case, and that $19$ pennies requires at least $7$ in some row-column pair.
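For part 2, a quick script confirms the shift-the-row-by-one arrangement (row $i$ holds pennies in columns $i$, $i+1$, $i+2$ mod 6), counting a row-column pair as row total plus column total, as in the argument above:

```python
board = [[1 if (c - r) % 6 < 3 else 0 for c in range(6)] for r in range(6)]
rows = [sum(row) for row in board]
cols = [sum(board[r][c] for r in range(6)) for c in range(6)]
assert sum(rows) == 18 and rows == cols == [3] * 6
print(max(rows[r] + cols[c] for r in range(6) for c in range(6)))  # 6
```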
Find the maximum and minimum of a function (involving a trigonometric integral) Question: How can I find the maximum and minimum of this function, for a value of $\text{n}$? $$\text{G}_\text{sc}\left(\text{n}\right)=\alpha\cdot\left(1+\epsilon\cos\left(\theta\right)\right)^2\tag1$$ where $\alpha,\epsilon\in\mathbb{R}^+$. And, for $\theta$ we know that: $$\frac{\text{n}}{\text{A}}\int_0^{2\pi}\frac{1}{\left(1+\epsilon\cos\left(x\right)\right)^2}\space\text{d}x=\int_0^\theta\frac{1}{\left(1+\epsilon\cos\left(x\right)\right)^2}\space\text{d}x\space\Longleftrightarrow\space\theta\left(\text{n}\right)=\dots\tag2$$ where $\text{A}\in\mathbb{R}$ and $0<\text{n}\le\text{A}$; when you solve for $\theta$ you will get that $\theta$ is a function of $\text{n}$. My work: In order to find the minimum and maximum: $$\text{G}'_\text{sc}\left(\text{n}\right)=\frac{\text{d}}{\text{d}\text{n}}\left(\text{G}_\text{sc}\left(\text{n}\right)\right)=0\tag3$$ But I do not understand how to proceed.
Let $$I(\theta) = \int_0^\theta \frac1{(1+\epsilon\cos x)^2}\,dx,$$ then $$\frac{dn}{d\theta} = \frac{A}{I(2\pi)(1+\epsilon\cos\theta)^2}.$$ The first order condition of $G' = 0$ becomes $$0 = \frac{dG}{dn} = \frac{dG}{d\theta}\frac{d\theta}{dn} = -2\alpha\epsilon \frac{I(2\pi)}{A}\sin\theta(1+\epsilon\cos\theta)^3.$$ The zeros occur at $\sin\theta = 0$ or $1+\epsilon\cos\theta = 0$ (the latter is impossible when $\epsilon < 1$).
Calculate max size of rectangle in pie chart I'm trying to get the maximum possible width and height of a rectangle inside a pie chart. All fields have the same angle. $\alpha$ is never bigger than $90^{\circ}$. I have the variables $\alpha$, $r$, $b$ and I know that $w = 3h$. I'm searching for $w$, $h$ and $P_1 (x_1, y_1)$. I'm a programmer, so I'm not that good with math and I have to translate this to code afterwards. Thanks for your help! Edit: I'm using JavaScript. Thanks to Paul, who explained to me that I have to use radians instead of degrees with Math.tan. In addition, there is no Math.cot in JavaScript. That's why I had to create two more functions. const tan = (deg) => Math.tan(deg * Math.PI / 180); const cot = (value) => 1 / tan(value);
Using the figure as our guide, place the origin at the center. Your rectangle is symmetric about the line $x = 0$. It is bound on the upper-left corner by the line $y=x$. It is bound on the lower left corner by the circle $x^2 + y^2 = R^2$, where $R$ is the radius of the circle that bounds the rectangle. We can call these two coordinates $(x,x), (x,y)$ with $x<0, y<0, |x|<|y|$. $H = x-y\\ W = -2x$ $W = 3H\\ -2x = 3x-3y\\ y = \frac 53 x$ $x^2 + (\frac 53 x)^2 = R^2\\ \frac {34}{9} x^2= R^2\\ x = - \frac {3}{\sqrt {34}} R$ The 4 corners are: $(- \frac {3}{\sqrt {34}} R, - \frac {3}{\sqrt {34}} R) , ( \frac {3}{\sqrt {34}} R, - \frac {3}{\sqrt {34}} R), (\frac {3}{\sqrt {34}} R,- \frac {5}{\sqrt {34}} R), (-\frac {3}{\sqrt {34}} R,- \frac {5}{\sqrt {34}} R)$ and the area $= \frac 43 x^2 = \frac {12}{34} R^2 = \frac{6}{17}R^2$
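Translated directly to code (a sketch; $R$ is the radius of the bounding circle):

```python
from math import sqrt

def rectangle_corners(R):
    x = 3 * R / sqrt(34)   # half the rectangle's width
    y = 5 * R / sqrt(34)   # depth of the lower edge
    return [(-x, -x), (x, -x), (x, -y), (-x, -y)]

print(rectangle_corners(1.0))  # width 6R/sqrt(34), height 2R/sqrt(34)
```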
Five Consecutive Integers divisible by a square greater than 1 I have the task of finding 5 consecutive integers of the form $\{x, x+1, x+2, x+3, x+4, x+5\}$ where each number in the sequence is divisible by a square $k$ greater than 1. I tried to write a simple Java program to find a sequence of that form by checking if each number in the sequence is $\equiv 0 \bmod k$. But that can't be possible; how would I go about finding these numbers?
Chinese remainder theorem: $$\begin{align}x&\equiv 0\pmod{4}\\ x&\equiv -1\pmod{9}\\ x&\equiv -2\pmod{25}\\ x&\equiv -3\pmod{49}\\ x&\equiv -5\pmod{121} \end{align}$$ That's gonna be ugly. If you only need five, then you can ignore the last line and you get $$x\equiv 29348\pmod{4\cdot 9\cdot 25\cdot 49}.$$ Then $x,x+4$ are divisible by $4$, $x+1$ is divisible by $9$, $x+2$ is divisible by $25$, and $x+3$ is divisible by $49$.
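A short verification of the five-number solution:

```python
x = 29348
assert x % 4 == 0          # x divisible by 2^2
assert (x + 1) % 9 == 0    # x+1 divisible by 3^2
assert (x + 2) % 25 == 0   # x+2 divisible by 5^2
assert (x + 3) % 49 == 0   # x+3 divisible by 7^2
assert (x + 4) % 4 == 0    # x+4 reuses the square 4
print([x + i for i in range(5)])
```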
Algebra on Equivalent Infinitesimals Question: Is algebra on equivalent infinitesimals valid? Example: $$\sqrt{1+x} - 1 \sim_0 \frac{x}{2} \longrightarrow \sqrt{1+x} \sim_0 \frac{x}{2} + 1$$ I'm aware that both the left and right equalities are true, but is this just a coincidence that it looks like I added $1$ to both sides of the left equality to get the the right?
We can be more rigorous using remainders: $$\sqrt{1+x}=1+\frac x2+\mathcal O(x^2)$$ Where $\mathcal O(x^2)$ here means that as $x\to0$, then the remainder is at most $Cx^2$ for some $C$. Using Taylor polynomials and their remainders, we can see that for $|x|<0.5$, $$\left|\sqrt{1+x}-1-\frac x2\right|\le2^{-2.5}x^2$$ Indeed, try this out and you will see it holds true, which allows you to show that $$\sqrt{1+x}\sim_01+\frac x2$$ Anyways, in general, when you write something like $$\sqrt{1+x}-1\sim_0\frac x2$$ It means that they both have the same growth rates (their difference is bounded by something that goes to zero faster than them) and from there it's easy to show addition and multiplication can be moved around.
Can a Homogeneous Differential Equation be Nonlinear? First of all, I am not asking about the $v=y/x$-transformation kind of homogeneous. Can we say this nonlinear differential equation is homogeneous: $$y^{\prime}=ty^2?$$ Here there is no term without $y$; this is okay. But this is a nonlinear equation. I saw some discussions here and some people said it is just for linear equations. I want to give two links: Dummies Series (assumes it can be nonlinear) (I also saw some university documents too). Wolfram MathWorld (assumes it has to be linear) Which one is the definition: 1) No term without $y$, 2) Linear and no term without $y$, 3) if $y$ is a solution, then $\lambda y$ is a solution for all $\lambda \in \mathbb R$. Considering the references, I will use the 3rd one. This option is also meaningful.
The answer to the title-question is: Yes, it can. Here are two references. The paper On the second order homogeneous quadratic differential equation by Roger Chalkley considers homogeneous quadratic differential equations of the form \begin{align*} Q(y)\equiv a\left(y^{\prime\prime}\right)^2+by^{\prime\prime}y^{\prime}+cy^{\prime\prime}y+d\left(y^{\prime}\right)^2 +ey^\prime y+fy^2=0\tag{1} \end{align*} from which we conclude that nonlinearity and homogeneity are two concepts which do not exclude each other. Note: The MathWorld page does not include the term linear in the definition of Homogeneous Ordinary Differential Equation. It rather uses a linear differential equation as an easy to follow example for a homogeneous ordinary differential equation. Another example is the paper Classification and Analysis of Two-Dimensional Real Homogeneous Quadratic Differential Equation Systems by Tsutomu Date.
Solving for zero using differentiation I need help finding the min and max to the equation $y'= 4.9385 \cos(0.017(x-80)).$ I've tried this by making $y' = 0$ and tried to solve for $x$ but I don't understand how to do it. So, can someone please help find the min and max for $0 = 4.9385 \cos(0.017(x-80))$ ?
Given the equation $$ y = A \sin( \kappa\, (x-a) ) $$ the minima and maxima of $y$ occur where $$ y' = A \kappa \cos(\kappa\, (x-a) ) =0.$$ This is solved by $\cos(\kappa\, (x-a) )=0$, i.e. $$ \kappa\, (x-a) = \frac{\pi}{2} + n \pi \quad\Longrightarrow\quad x = a + \frac{\pi/2 + n\pi}{\kappa}.$$ For your equation $\kappa = 0.017$ and $a = 80$, so the critical points are $x \approx 80 + 92.4 + 184.8\,n$ for integer $n$.
Mean value theorem for convex functions Let $f$ be a real function with left and right derivatives $f'_-$ and $f'_+$ on the open interval $(a,b)$, and continuous on $[a,b]$ (e.g., let $f$ be convex on $[a,b]$). Then, Is there something like the mean value theorem for $f$?
By the mean value version of Taylor's theorem we have: \begin{align} f(y) &=f(x)+f'(x)(y-x)+\dfrac{1}{2}f''(z)(y-x)^2, \text{for some }z\in[x,y].\\ \end{align}
Asymptotic expansion of cosine integral Can anybody help with this problem to find the full asymptotic expansion of $\int_1^\infty \frac{\cos(xt)\, d t}{t}$ (from Bender & Orszag)? Does it work by Taylor expansion?
Considering $$\int\frac{\cos (xt)}{t}\,dt=\text{Ci}(t x)$$ $$I=\int_1^\infty \frac{\cos (xt)}{t}\,dt=-\text{Ci}( x)$$ provided $x>0$. Now, looking here, you will find that $$I=\cos(x)\left(\frac{1}{x^2}-\frac{6}{x^4}+\frac{120}{x^6}+O\left(\frac{1}{x^7} \right) \right)-\sin(x)\left(\frac{1}{x}-\frac{2}{x^3}+\frac{24}{x^5}+O\left(\frac{1}{x^7}\right) \right)$$ For illustration purposes, let us use it for $x=100 \pi$. The above expansion would give $$\frac{3-1500 \pi ^2+2500000 \pi ^4}{25000000000 \pi ^6}\approx 0.0000101315025301179$$ while the exact value should be $$0.0000101315025300648$$
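The $x=100\pi$ example can be checked with mpmath's cosine integral:

```python
from mpmath import mp, ci, pi

mp.dps = 25
x = 100 * pi
print(-ci(x))                      # exact: 0.0000101315025300648...
print(1/x**2 - 6/x**4 + 120/x**6)  # asymptotic: 0.0000101315025301179...
```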
Line in an affine space of dim 3 I'm trying to understand the theory of the affine space $A^3(R)$ and its affine subspace of the line. A line is defined by the intersection of two planes: $r : \begin{cases}AX+BY+CZ+D=0\\ A'X+B'Y+C'Z+D'=0\end{cases}$ The direction of r is the one-dimensional subspace of V defined by: \begin{cases}AX+BY+CZ=0\\ A'X+B'Y+C'Z=0\end{cases} The directional vector of r is (and I don't understand why): $l=\begin{vmatrix} B & C \\ B' & C' \end{vmatrix}$, $m=-\begin{vmatrix} A & C \\ A' & C' \end{vmatrix}$, $n=\begin{vmatrix} A & B \\ A' & B' \end{vmatrix}$ because $(l,m,n)$ is the solution of the homogeneous system. Then I've tried with an example. I have the line r : \begin{cases}x-y+2z+1=0\\ -x+y-z+2=0\end{cases} the subspace is defined by \begin{cases}x-y+2z=0\\ -x+y-z=0\end{cases} with the coefficient matrix $$A=\begin{bmatrix}1 & -1 & 2\\-1 & 1 & -1\end{bmatrix}$$ that can be reduced to $$\begin{bmatrix}1 & -1 & 0\\0 & 0 & 1\end{bmatrix}$$ and the solution is $(t,t,0)$ with $t \in \mathbb R$ PS: I've checked the exercise
The directional vector of r is (and I don't understand why): $l=\begin{vmatrix} B & C \\ B' & C' \end{vmatrix}$, $m=-\begin{vmatrix} A & C \\ A' & C' \end{vmatrix}$, $n=\begin{vmatrix} A & B \\ A' & B' \end{vmatrix}$ because $(l,m,n)$ is the solution of the homogeneous system. You can look at this in different ways: * The line is given as the solution set to a system of two planes. This system has an infinite number of solutions (all the points of the line, of course!) and the solution set is 1-dimensional. In parametric form, the solution set is simply a parametric representation of the line. You can solve the system and verify that the general solution is given by the formulas above, so this gives you the directional vector. * Since the line is given as the intersection of two planes and those planes are given in cartesian form, you can simply read the normal vectors of both planes: $(A,B,C)$ and $(A',B',C')$ respectively. The direction of the intersection (and thus of the line) is perpendicular to both normal vectors so you can find the directional vector by taking the cross product of the normal vectors of the two planes: $(A,B,C) \times (A',B',C')$; you'll find the formulas given above. Using this formula on your example will give you the directional vector $(-1,-1,0)$. Note that any non-zero multiple of a directional vector represents the same direction so this agrees with the solution set $(t,t,0)$ you found (take $t=-1$) by manually solving the system.
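A two-line check of the example via the cross product of the two normals:

```python
A1 = (1, -1, 2)    # normal of x - y + 2z + 1 = 0
A2 = (-1, 1, -1)   # normal of -x + y - z + 2 = 0
direction = (A1[1]*A2[2] - A1[2]*A2[1],
             A1[2]*A2[0] - A1[0]*A2[2],
             A1[0]*A2[1] - A1[1]*A2[0])
print(direction)   # (-1, -1, 0), a multiple of the solution (t, t, 0)
```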
Solving equations with e of the form $1.22e^{0.015x} - 0.22 e^{-6x} = 3$ I'm wondering how equations of the form $1.22e^{0.015x} - 0.22 e^{-6x} = 3$ can be solved for $x$ without the use of an advanced calculator or plotting tool? I tried substituting $e^x$ by $u$ and solving for $u$; however, the resulting expression is also complicated, with several complex answers, which leads me to guess that this might not be the smartest approach. Any ideas if there are simpler ways, or are there no "shortcuts" for these types of equations?
Your equation is not going to have an analytical solution. However, as other answers suggested, you can neglect one term to obtain an equation which does have an analytical solution. But of course you need to check that the solution of this new equation is close to the solution of the original equation. Note that this original solution is unique by considering the derivatives and the limits at $\pm \infty $. So if you neglect the term $ -0.22 e^{-6x}$ the equation becomes $ 1.22e^{0.015x} = 3$ which gives you as an approximate solution $ x_0 = \frac{\log \left( \frac{3}{1.22} \right) }{0.015} \approx 59.984 \dots $ Now let's set $ x = x_0 + \epsilon $ and see how to evaluate the error $\epsilon $ assuming $ \epsilon \ll x_0 $. Let $a=1.22$, $b=0.015$, $c=-0.22$, $d=-6$ and $f = -3 $ so your equation is: $$a\cdot e^{bx} +c\cdot e^{dx} +f = 0$$ And we know: $$ a\cdot e^{bx_0} + f = 0$$ Therefore: $$a\cdot e^{b(x_0 + \epsilon)} +c\cdot e^{d(x_0 + \epsilon)} +f = 0$$ $$a\cdot e^{bx_0} e^{b \epsilon} +c\cdot e^{dx_0}e^{d\epsilon} +f = 0$$ $$c\cdot e^{dx_0}e^{d\epsilon} = f \cdot \left(e^{b \epsilon} -1 \right)$$ Only considering the first order in $ \epsilon$ we have: $$e^{d\epsilon} \approx 1 \text{ and } e^{b \epsilon} - 1 \approx b\epsilon$$ So in the end: $$ \epsilon \approx \frac{c\, e^{dx_0}}{b\, f} \approx 2.4 \cdot 10^{-156} $$ So even if we neglected one term in the original equation, we know that our approximate result is extremely close to the true solution.
Solve integral $\iiint_Ax^p\,dx\,dy\,dz$ on $A=\{(x,y,z):x^2+y^2+z^2<x^{1/3}\}$ We have the following integral to compute: $$ \iiint_Ax^p\,dx\,dy\,dz $$ where $p$ is a constant real number and $A=\{(x,y,z):x^2+y^2+z^2<x^{\frac{1}{3}}\}$. I tried spherical substitution and cylindrical.
By setting $x=w^3$, the triple integral turns into $$3\iiint_{w^6+y^2+z^2<w}w^{3p+2}\,dw\,dy\,dz \tag{1}$$ that equals: $$ 3 \int_{0}^{1}\iint_{y^2+z^2<w-w^6}w^{3p+2}\,dy\,dz \,dw\tag{2}$$ or: $$ 3\pi \int_{0}^{1}w^{3p+2}(w-w^6)\,dw = \color{red}{\frac{5\pi}{(p+3)(3p+4)}}.\tag{3}$$
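The closed form can be spot-checked with sympy for a concrete exponent, say $p=1$, where $(3)$ predicts $\frac{5\pi}{(1+3)(3\cdot1+4)}=\frac{5\pi}{28}$:

```python
import sympy as sp

w = sp.symbols('w')
p = 1
val = sp.integrate(3*sp.pi*w**(3*p + 2)*(w - w**6), (w, 0, 1))
print(val)  # 5*pi/28
```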
Probability of winning sweepstakes when some information is not given I wanted to give my students an interesting probability problem, and found myself perplexed by it. Below is the probability problem I came up with. To win some prize a participant in a sweepstakes has to guess a four digit number. The designers of the sweepstakes created it in such a way that for every digit, any number from 0 to 9 is possible. The creators of the sweepstakes did not inform the participants well about how the digits were selected. What is the probability that the first participant wins the prize if she thinks that no number can be repeated in the sequence? Intuition: The probability of the first participant winning when understanding the rules is 1/10000 (1 out of 10^4 possibilities). By thinking that no number can be repeated, the participant may not consider numbers that may very well be the winning one. Thus, the probability of winning should then be lower than 1/10000. Let W = win (chose winning number), C = winning number has no repeated digits. Then P(W) = P(W|C)P(C) + P(W|~C)P(~C) Naturally, if the winning number had repeated digits, then the participant can't win due to the misunderstanding. Thus P(W|~C)=0 and P(W) = P(W|C)P(C) Now P(W|C) = 1/5040 (1 out of 10*9*8*7 possibilities) and P(C) = 5040/10000 (from 10*9*8*7/10000). Therefore P(W) = 1/5040 (5040/10000)=1/10000, same as if the participant did not misunderstand the rules. Where did I go wrong?
It makes no difference how you come up with your guess, the probability of guessing correctly will always be $\frac 1{10^4}$. To write this out in your case: Let $p(n)$ be the probability that $n$ is the correct answer (so $p(n)=\frac 1{10^4}$ as we are assuming that the true distribution is uniform). Let $\psi(n)$ be the probability that you guess $n$. Now, under the false rule you have introduced, you are choosing uniformly at random from the $5040$ possible four digit numbers with distinct digits. Thus: $$ \psi(n) = \begin{cases} \frac {1}{5040}, & \text{if $n$ has distinct digits} \\ 0, & \text{if $n$ does not have distinct digits} \end{cases}$$ The probability of guessing correctly is given by $$P=\sum_{n=0}^{9999} p(n)\psi(n)=\frac 1{10^4}\times \sum_{n=0}^{9999} \psi(n)$$ But that last sum is $\frac 1{5040}$ times the number of $n$ with distinct digits, hence the sum is $1$, making $P=\frac 1{10^4}$ as claimed. The problem with your calculation comes when you write $P(C)=\frac {5040}{10000}$. What does that mean? $P(C)$ is, I believe, the probability that, following the false rule, I choose a number consistent with the false rule...so $P(C)=1$. If, instead, you meant that $P(C)$ was the probability that a uniformly random number in the range happened to satisfy the false rule, then $P(C)$ is what you say but in that case of course $P(\sim C)$ would not be $0$. Indeed we'd have $P(\sim C)=\frac {4960}{10000}$ in which case your calculation would give the correct answer of $\frac 1{10^4}$. Note that whatever you might have meant by $P(C)$ we should certainly have $P(C)+P(\sim C)=1$.
Complete categories are cocomplete? I've read in a paper and on Wikipedia that any (small) category is complete if and only if it is cocomplete. Now obviously if one shows that complete $\implies$ cocomplete, then it's easy to conclude from there, but I have no idea why that would be true. Would anyone care to explain it to me?
The partially ordered class of sets has all suprema, but not all infima, since it lacks a largest element. Hence, its dual category is complete and not cocomplete. (Note that this is a large category; for genuinely small categories completeness and cocompleteness are indeed equivalent, which is the statement you read.)
Expressing a statement with predicate logic Given the interpretation $\mathbb N$ with signature $\langle 0, 1; +, \cdot; =\rangle$, I need to express the following statement using predicates: $x$ is not divisible by any prime smaller than $y$
Hint: Zeroth: How do we write out "w is larger than y"? Let $w > y$ be this formula. First: How do we write "w divides x"? Let $d(w,x)$ be this formula. Second: How do we say "w is prime"? - Let $P(w)$ be this formula. Third: Notice that "$x$ is not divisible by any prime smaller than $y$" is equivalent to saying that "every prime which divides $x$ is greater than or equal to $y$". Hence we can conclude that the formula looks as follows: $$(\forall w) ( [d(w,x) \wedge P(w)] \to [(w > y) \vee (w =y)])$$ I've left it to you to write out $d(w,x)$, $P(w)$, and $w > y$.
Why is there another transformation matrix for the bases of the image and of the preimage of this mapping? There is the following matrix: \begin{pmatrix}1&1&1\\ a&b&c\\ a^2&b^2&c^2\end{pmatrix} At one point it is necessary to calculate the determinant of the matrix. In the official solution it is written: $\det\begin{pmatrix}1&1&1\\ a&b&c\\ a^2&b^2&c^2\end{pmatrix} = (c-b)(c-a)(b-a)$ And I don't see how they get this. If I calculate the determinant I am always getting this: $\det\begin{pmatrix}1&1&1\\ a&b&c\\ a^2&b^2&c^2\end{pmatrix} = (bc^2-b^2c)-(ac^2-a^2c)+(ab^2-a^2b)=bc(c-b)-ac(c-a)+ab(b-a).$ But after that point I don't know how to proceed and get the form above. Can you help me?
Substracting the first column from the second and third, you get $\begin{vmatrix} 1 & 1&1 \\ a&b &c \\ a^{2} & b^{2} &c^{2} \end{vmatrix}=\begin{vmatrix} 1 & 0&0 \\ a&b-a &c-a \\ a^{2} & b^{2}-a^{2} &c^{2} -a^{2} \end{vmatrix}=$ $=(b-a)\cdot (c-a)\begin{vmatrix} 1 & 0&0 \\ a&1 &1 \\ a^{2} & b+a &c+a \end{vmatrix}=(b-a)\cdot (c-a)\begin{vmatrix} 1 &1 \\ b+a &c+a \end{vmatrix}=$ $=(b-a)\cdot (c-a)\cdot (c+a-b-a)=(b-a)\cdot (c-a)\cdot(c-b).$
Verify that the following definition of the delta function is valid I want to verify that the following is a valid definition of the delta function: $$\delta(x)=\lim_{a \to 0} \frac{1}{\pi}\frac{a}{a^2+x^2}$$ This satisfies $$\delta(x)=\begin{cases} 0 & \text{if } x \neq 0 \\ \infty & \text{if } x=0 \end{cases}$$ I think I also need to verify that $\int_\mathbb{R} \delta(x) \, dx=1$. How do I do this?
$F(a) =\frac 1{\pi}\int_{-\infty}^{\infty} \frac{a}{a^2+x^2}\, dx = \frac {1}{\pi}\arctan \frac xa \Big|_{-\infty}^\infty = 1$ for every $a>0$, hence $\lim\limits_{a\to 0^+} F(a) = 1$, which is the required normalization $\int_{-\infty}^{\infty} \delta (x)\ dx = 1$.
Derivative of $x|x|$ I am trying to find the derivative of $f(x)=x|x|$ using the definition of derivative. For $x > 0$ I found that $f'(x)=2x$ and for $x<0$ the derivative is $f'(x)=-2x$. Everything is fine up to here. Now I want to check what happens at $x=0$. By the way, I know that $|x|$ is not differentiable at $x=0$. So I am checking the left & right limits of the difference quotient as $x$ approaches $0$. * *$\lim_{x \to 0^-}\cfrac{x|x|}{x} = \lim_{x \to 0^-}\cfrac{x(-x)}{x}=\lim_{x \to 0^-}\cfrac{(-x)}{1} = -0? = 0. $ *$\lim_{x \to 0^+}\cfrac{x|x|}{x} = \lim_{x \to 0^+}\cfrac{x(x)}{x}=\lim_{x \to 0^+}\cfrac{(x)}{1} = 0. $ I think that $f$ is not differentiable at $x=0$ since $|x|$ is not differentiable at that point. So , what do I do wrong? Should I write something like $\lim_{x \to 0^-}\cfrac{x|x|}{x} = -0^{-}$ and $\lim_{x \to 0^+}\cfrac{x|x|}{x} =0^{+}$ so that $f'$ does not exist at $x=0$?
For information, you have the following theorem which you can use Let $I$ be an interval, $a\in I$. If a function $f$ is continuous on $I$, has a derivative on $I\smallsetminus\{a\}$ and if $f'(x)$ has a limit at $a$, then $f $ has a derivative at $a$, and $$f'(a)=\lim_{x\to a}f'(x).$$
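Applied to this problem (a quick sketch): $f(x)=x|x|$ is continuous on $\mathbb{R}$ and differentiable on $\mathbb{R}\smallsetminus\{0\}$ with $f'(x)=2x$ for $x>0$ and $f'(x)=-2x$ for $x<0$, i.e. $f'(x)=2|x|$ there. Since $$\lim_{x\to 0}f'(x)=\lim_{x\to 0}2|x|=0,$$ the theorem gives that $f$ has a derivative at $0$ with $f'(0)=0$ — so, contrary to your guess, $f$ is differentiable everywhere, even though $|x|$ alone is not differentiable at $0$.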
How do I find the area of the inscribed triangle within the ellipse without analytic geometry? I have the following problem where I need to solve for the inradius of a particular inscribed triangle in an ellipse: ∆ABC is situated within an ellipse whose major axis is of length 10 and whose minor axis is of length 8. Point A is a focus of the ellipse, point B is an endpoint of the minor axis and point C is on the ellipse such that the other focus lies on BC. Compute the inradius of ∆ABC. Hint: recall the area formula for a triangle involving the inradius. At the end I'm given a hint to use the area formula for a triangle $K = rs$ in solving for the inradius. I divided both sides by $s$ to get $K/s = r$, and then tried to solve for both $K$ and $s$. It was easy to solve for $s$, because if we let $A'$ be the other focus then $$ \begin{equation} \begin{split} \begin{gathered} P = AB + BC + AC = AB + BA' + A'C + CA\\ AB = a\\ BA' = a\\ A'C + CA = 2a\\ P/2 = 2a \end{gathered} \end{split} \end{equation} $$ However, I'm having trouble solving for $K$. I realize it's easy to just write up an analytic equation for the ellipse and find the equation of the line $BA'$ then solve for $C$, but I'm interested if there's a better way to do this. I noticed that since B connects line segments passing through both foci the question may have something to do with the reflective property of the ellipse, but I'm not sure. How do I find the area of the triangle non-analytically?
* *You know the angle(ABC) is 2t, where cos(t) = b/a. *You know the length(BC) is a + 2ax, and the length(AC) is (1-x)2a, where x is unknown between 0 and 1. *Define two equations for Area: * *One equation is the side-angle-side equation: Area = (1/2)·length(AB)·length(BC)·sin(angle(ABC)) *One equation is Heron's Formula with the semiperimeter s = P/2: Area = sqrt(s(s-length(AB))(s-length(BC))(s-length(AC))) *Now you have two simultaneous equations with two unknowns (x, Area), to plug into your favorite symbolic solver — or see the sketch below for how the numbers work out.
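If you want to avoid the symbolic solver, here is a sketch of the numbers for this ellipse (semi-axes $a=5$, $b=4$), replacing Heron's formula by the law of cosines so that $x$ is pinned down without squaring twice. From $\cos(t)=b/a=4/5$ we get $\cos(2t)=2(16/25)-1=7/25$ and $\sin(2t)=24/25$. With $AB=5$, $BC=5+10x$, $AC=10-10x$, the law of cosines $AC^2=AB^2+BC^2-2\cdot AB\cdot BC\cos(2t)$ reduces to $100-200x=36+72x$, so $x=\frac{4}{17}$ and $BC=\frac{125}{17}$. Then $$K=\frac12\cdot 5\cdot\frac{125}{17}\cdot\frac{24}{25}=\frac{300}{17},\qquad r=\frac{K}{s}=\frac{300/17}{10}=\frac{30}{17}.$$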
Why are vector spaces defined over a field? I have a general intuition of what vectors are/look like in the context of physics and what not. But it doesn't seem too closely related to properties of a field. So why are they defined over a field and not some arbitrary ring or something. Might be missing something...
They are defined over general rings, but vector spaces all have a basis, while modules need not have one. Modules which have a basis are known as free modules (of finite type if they have a finite basis). Worse, if the ring is not commutative, it may happen that a free module has bases with different cardinalities. Most modules have no basis. The simplest example would be the ideal $(X,Y)\subset K[X,Y]$ ($K$ a field), which has $\{X,Y\}$ as a minimal set of generators, but they're not linearly independent since $\;Y\cdot X-X\cdot Y=0$.
Why is $\int \frac{f'(x)}{f(x)}=\ln |f(x)|$ ignored in differential equations? I've noticed that whenever $\int \frac{f'(x)}{f(x)}$ comes up in a differential equation the answer is always given as $\ln f(x)$ rather than $\ln |f(x)|$ as I was taught it should be. Is it because of the arbitrary constant? In other words, since $$\int \frac{f'(x)}{f(x)}=\ln |f(x)|+\ln A$$ for some constant $A$, then the answer is $\ln A|f(x)|$ and because $A$ can be positive or negative it follows that there is no point including the absolute signs? Hence the answer is given as $\ln f(x)+C$ for some constant $C$ rather than $\ln |f(x)|+C$. Is this why?
Writing $ \int \frac{f'(x)}{f(x)} = \ln f(x)$ implicitly supposes that $f(x) > 0$. The logarithm is not defined for negative numbers. Why isn't the general formula needed? Consider the differential equation $$ y' = ty. $$ To find the solutions to such equations, we can proceed in two steps. Step 1. Suppose we have a solution $y: I \rightarrow \mathbb{R}$, where $I$ is an open interval. Further assume $y(t) \not = 0$, for all $t$. Since $I$ is an interval, then either $y > 0$ or $y < 0$. In the first case, we find $\frac{y'(t)}{y(t)} = t$, from which $\ln y(t) = t^2/2 + C$ and $y(t) = \alpha e^{t^2/2}$, for some $\alpha > 0$. Step 2. We verify that $y(t) = \alpha e^{t^2/2}$, $I = \mathbb{R}$, is indeed a solution for all $\alpha \in \mathbb{R}$ and that all solutions have been found. As you can see, when everything is done so carefully, there is no need to use the general formula $\int \frac{f'}{f} = \ln|f|$. I guess this is what teachers have in mind when presenting an abbreviated argument and writing "$\int \frac{y'(t)}{y(t)} = \ln y(t)$".
How to prove that $f_n(x):=x^n$ is not a Cauchy sequence in $C[0,1]$ under the norm $\|f\|= \sup|f(x)|$? How to prove that $f_n(x)=x^n$ is not a Cauchy sequence in $C[0,1]$ under the norm $\|f\|= \sup_{x\in [0,1]}|f(x)|$, by showing that it does not satisfy the definition of a Cauchy sequence?
You want to show that $$ \forall\epsilon>0,\exists N,\forall n,m:n,m>N\implies\|x^n-x^m\|=\max_{x\in[0,1]}|x^n-x^m|<\epsilon\tag{$\star$} $$ is not satisfied. Proposition: Every $\epsilon\in(0,1)$ fails to satisfy $(\star)$. Proof: Choose any $\epsilon\in(0,1)$, $N\in\mathbb{N}$ and $m>N$. Let $x_0\in(0,1)$ be such that $$ x_0^m=\epsilon+\frac{1-\epsilon}{2} $$ Now, since $x_0^n\to0$ as $n\to\infty$, we see that upon choosing $n>m$ large enough we obtain \begin{align} |x_0^n-x_0^m| &= x_0^m-x_0^n\\ &=\epsilon+\frac{1-\epsilon}{2}-x_0^n\\ &>\epsilon \end{align} Remark: The idea is that for fixed $x_0$ and $m$, the value $x_0^m-x_0^n$ tends to $x_0^m$. Hence all we needed was to make sure that $x_0^m>\epsilon$.
What is the number $\;'c'\;$ that satisfies the conclusion of Rolle's theorem? I have a problem What is the number $\;'c'\;$ that satisfies the conclusion of Rolle's theorem for the function $$f(x)=(x^2-1)(x-2)\;\;\text{in}\;(1,\;2]$$ I've tried $$f'(x)= 3x^2-4x-1$$ we know, to find 'c' $$f'(x)=0$$ So, $$\Rightarrow 3x^2-4x-1=0\\x=\dfrac {4+ \sqrt 4}{6}=1\\ x=\dfrac {4- \sqrt 4}{6}=0.33\\\therefore 1\notin(1,\;2]\\0.33\notin(1,\;2]$$ Please help. Where have I made a mistake?
Solve the quadratic equation correctly: $$3x^2-4x-1=0\\\Rightarrow x=\dfrac{4\pm\sqrt{16-4(3)(-1)}}{2(3)}\\\Rightarrow x=\dfrac {2\pm \sqrt 7}{3}\\\Rightarrow x=\dfrac {2+ \sqrt 7}{3}\;\text{and}\; x=\dfrac {2- \sqrt 7}{3}\\x=\dfrac {2+ \sqrt 7}{3}=1.55\in (1,\;2]\\ x=\dfrac {2- \sqrt 7}{3}=-.215\notin(1,\;2]$$ Hence, $\; c=\dfrac {2+ \sqrt 7}{3}\;$ satisfies the conclusion of Rolle's theorem for the given function.
Help me understand linear functions. Today in my high school lesson I learned about linear functions. I know that each linear function has the form $f(x)=kx+n$ where $k,x,n ∈ \mathbb R$. Now let's say we have those two functions: $f_1(x)=kx+n_1\\ f_2(x)=kx+ n_2$ Since $k$ is the same in both functions, we know that their graphs will be parallel, but why, and how do we prove that those graphs are parallel?
Suppose $n_1 \ne n_2$. If there was a value $x=a$ where $f_1(a) = f_2(a)$ then $ka+n_1 = ka+n_2$. But then $n_1=n_2$, which is a contradiction. Hence the two graphs share no point, which is exactly what it means for two distinct lines to be parallel.
An inequality of symmetric polynomials using AGM inequality? Let $x_1,x_2,x_3,x_4>0$ be positive real numbers and suppose that \begin{equation}\tag{1} x_1x_2x_3x_4 = \frac{x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4}{6}. \end{equation} I want to try and show that the following inequality holds: $$\tag{2} 1+\frac{\sqrt{x_1x_2x_3x_4}-1}{\sqrt{2}} \leq\frac{x_1+x_2+x_3+x_4}{4} \leq 1+\frac{x_1x_2x_3x_4-1}{\sqrt{2}} $$ I believe this is true, but trying to use Lagrange multipliers isn't getting me anywhere. This conjecture was born out of investigations I was making into using the AGM inequality on symmetric polynomials with constraints. From the AGM inequality, we see that $$ \sqrt{x_1x_2x_3x_4}\leq\frac{x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4}{6} $$ always holds for all nonnegative $x_i$. Hence if the inequality in (1) holds, then $\sqrt{x_1x_2x_3x_4}\leq x_1x_2x_3x_4$ holds. Unfortunately this doesn't help me prove (2). I also believe that equality in either of the inequalities in (2) holds if and only if $x_1=x_2=x_3=x_4=1$. But I can't show this either. Can anyone help me find a proof?
We'll prove the right inequality (the left inequality can be proved in a similar way). Let $f(x_1,x_2,x_3,x_4)=1+\frac{x_1x_2x_3x_4-1}{\sqrt2}-\frac{x_1+x_2+x_3+x_4}{4}+\lambda\left(x_1x_2x_3x_4 -\frac{x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4}{6}\right).$ Hence, since a continuous function attains a minimal value on a compact set, if $(x_1,x_2,x_3,x_4)$ is a minimal point, we obtain: $$\frac{\partial f}{\partial x_1}=\frac{x_2x_3x_4}{\sqrt2}-\frac{1}{4}+\lambda\left(x_2x_3x_4 -\frac{x_2+x_3+x_4}{6}\right)=0$$ or $$x_1x_2x_3x_4\left(\frac{1}{\sqrt2}+\lambda\right)=\frac{x_1}{4}+\frac{\lambda x_1(x_2+x_3+x_4)}{6},$$ which gives $$\frac{x_1}{4}+\frac{\lambda x_1(x_2+x_3+x_4)}{6}=\frac{x_2}{4}+\frac{\lambda x_2(x_1+x_3+x_4)}{6}$$ or $$(x_1-x_2)\left(\frac{3}{2}+\lambda(x_3+x_4)\right)=0.$$ Similarly we obtain in the minimal point: $$(x_1-x_3)\left(\frac{3}{2}+\lambda(x_2+x_4)\right)=0,$$ $$(x_1-x_4)\left(\frac{3}{2}+\lambda(x_2+x_3)\right)=0,$$ $$(x_2-x_3)\left(\frac{3}{2}+\lambda(x_1+x_4)\right)=0,$$ $$(x_2-x_4)\left(\frac{3}{2}+\lambda(x_1+x_3)\right)=0,$$ $$(x_3-x_4)\left(\frac{3}{2}+\lambda(x_1+x_2)\right)=0.$$ Now, if $x_1\neq x_2$, $x_1\neq x_3$ and $x_1\neq x_4$, then we get $x_2=x_3=x_4$. If $x_1=x_2$ but $x_1\neq x_3$ and $x_1\neq x_4$, then $x_3=x_4$. Thus, it remains to prove our inequality in the following cases. * *$x_1=x_2=x_3=x_4$; *$x_1=a$ and $x_2=x_3=x_4=b$; *$x_1=x_2=a$ and $x_3=x_4=b$. The rest is for you.
evans book pde estimate question I'm reading Evans' book on PDE and I'm having troubles understanding one estimate. He defines the fundamental solution to Laplace' equation as $$ \Phi(x) = \begin{cases} -\frac{1}{2\pi} \, \log(|x|), \, & n=2, \\ \frac{1}{n \, (n-2) \, \omega_n} \, \frac{1}{|x|^{n-2}}, \, & n\geq 3, \end{cases} $$ where $\omega_n$ is the volume of the $n$-ball. For the solution of Poisson's equation $ -\Delta u = f$ he computes the Laplace acting on the convolution of $f$ and $\Phi$, involving this estimate: $$ \bigg|\int_{B(0,\varepsilon)} \Phi(y) \, \Delta_y f(x-y) \, dy \bigg| \leq C \, \lVert D^2f \rVert_{L^\infty} \int_{B(0,\varepsilon)} |\Phi(y)| \, dy \leq \begin{cases} C \, \varepsilon^2 \, |\log(\varepsilon)|, & n=2, \\ C \, \varepsilon^2, & n\geq 3. \end{cases} $$ How do you obtain the last inequality?
Hint: use polar coordinates to integrate the Green's function on the $\epsilon$-ball. Edit: Observe for $n\geq 3$ we have \begin{align} \int_{B(0, \epsilon)} \frac{dx}{|x|^{n-2}} = \int^\epsilon_0 \int_{|x|=r} \frac{dS(x)}{|x|^{n-2}}\ dr = C\int^\epsilon_0 \frac{r^{n-1}}{r^{n-2}}\ dr = C'\epsilon^2. \end{align}
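For $n=2$ the same idea works with the logarithm (a sketch): in polar coordinates, for small $\epsilon$ (say $\epsilon\leq e^{-1}$, so that $|\log\epsilon|\geq 1$), $$\int_{B(0,\epsilon)}\big|\log|x|\big|\,dx = 2\pi\int_0^\epsilon r\,|\log r|\,dr = 2\pi\left(\frac{\epsilon^2}{4}-\frac{\epsilon^2}{2}\log\epsilon\right) \leq C\,\epsilon^2\,|\log\epsilon|,$$ using that $-\log r>0$ on $(0,1)$ and $\int_0^\epsilon -r\log r\,dr = \frac{\epsilon^2}{4}-\frac{\epsilon^2}{2}\log\epsilon$.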
Fibonacci Numbers proof, circular reasoning? I got this from "Number Theory" by George E. Andrews (1971), at the end of the first chapter, he asks for proofs by mathematical induction about Fibonacci Numbers as exercises. In one of them I am asked to show that $$(\forall \, n \in \Bbb Z^+)((F_{n+1})^2-F_nF_{n+2}=(-1)^n)$$ which has already been asked on Math S.E.: "Fibonacci numbers and proof by induction" but I am not looking for a complete solution here; rather, since I am learning the subject on my own I would appreciate it if someone could simply tell me if my reasoning is correct, it does not "feel" concrete to me. Going through the usual steps I first check that the base case $F_2^2-F_1F_3=1-2=(-1)^1$ is true such that the inductive step can be taken. Then, assuming $(F_{k+1})^2-F_kF_{k+2}=(-1)^k$ is true, I try to show that it implies that $(F_{k+2})^2-F_{k+1}F_{k+3}=(-1)^{k+1}$ is also true. Then, doing a bit of algebra I get, $$\begin{align} (F_{k+2})^2-F_{k+1}F_{k+3} & = (-1)^{k+1}=(-1)^k(-1)\\ & = ((F_{k+1})^2-F_kF_{k+2})(-1)\\ & = F_kF_{k+2}-(F_{k+1})^2\\ (F_{k+2})^2-F_{k+1}F_{k+3} + (F_{k+1})^2-F_kF_{k+2} & = 0\\ (F_{k+2})^2-F_{k+1}F_{k+3} + (-1)^k & = 0\\ (F_{k+2})^2-F_{k+1}F_{k+3} & = -(-1)^k = (-1)(-1)^k\\ & = (-1)^{k+1}\\ \end{align}$$ $$\tag*{$\blacksquare$}$$ I am not sure about this, it feels like circular reasoning, is it? Thanks a lot for the valuable input!
Yes, your answer is wrong, because you assume that $(F_{k+2})^2 - F_{k+1}F_{k+3} = (-1)^{k+1}$ but it's what you want to prove. By the way, here is the algebra: $$\begin{align} (F_{k+2})^2 - F_{k+1}F_{k+3} &= (F_{k+1}+F_k)^2 - F_{k+1}(F_{k+2} + F_{k+1})\\ &= (F_{k+1}^2 +2F_{k+1}F_k +F_k^2) - F_{k+1}(F_{k+1} + F_k + F_{k+1})\\ &= -F_{k+1}^2+F_{k+1}F_k +F_k^2\\ &= -(F_{k+1}^2-F_k(F_{k+1} +F_k))\\ &= -(F_{k+1}^2-F_kF_{k+2})\\ &= -(-1)^k=(-1)^{k+1} \end{align}$$
Is it possible to rewrite division (÷) in sigma notation? Disclaimer: I'm a beginner with summation and sigma notation. Background: Since division is the inverse of multiplication, and multiplication is repeated addition, it seems (at first glance) possible to rewrite $\frac a b = c$ in sigma notation--i.e., summation notation. Assumptions: I've read that the upper bound must be a whole number, so when a, b, and c are whole numbers, this seems correct: a = $\sum_{i=1}^b c$ Problem: But that fails when b is $\frac m n$ since b isn't a whole number. Question: Is it not always, or is it ever, possible to represent division in terms of sigma notation?
Careful there, cowboy! You need to be consistent in your assumptions throughout your reasoning. Division is always the inverse of multiplication, but multiplication is not always repeated addition. Take, for example, $ \frac{3}{2} \times \frac{5}{4} $. How do you add $\frac{3}{2}$ to itself $\frac{5}{4}$ times (or, for that matter, add $ \frac{5}{4} $ to itself $\frac{3}{2}$ times)? Multiplication is defined as repeated addition for integers: numbers that have vanishing fractional part. For those sorts of numbers, I'm sure you can see that $$ a = \sum_{k=1}^b{c} $$ indeed. Defining multiplication elsewhere requires new techniques. For fractions $\frac{a}{b}$, $\frac{c}{d} $, we reuse our previous definition of multiplication of integers to define $$ \frac{a}{b}\times\frac{c}{d} = \frac{a\times c}{b\times d} $$ Defining multiplication for the real numbers (numbers with decimal expansions, even infinitely long ones) gets a bit tricky, because you need some sort of process capable of handling things with infinite precision. The trick is to define $ r\times s $ to be (assuming $r,s\geq0$) the smallest real number $ k $ characterized by the following property: given any pair of fractions $ p,q $ in the ranges $ 0\leq p\leq r$ and $ 0\leq q\leq s$, we have $pq<k$. It's not immediately obvious how to turn these last two examples into some sort of sigma notation, although it can be done. (In fact, I think the way to do it most consonant with the rest of my answer involves reinventing the Riemann integral from calculus!) But there's a bigger point here. You'll notice I haven't mentioned division at all. In order to write division in sigma form, you have to transform it into multiplication first, so you might as well ask the question "When and how can multiplication be written in sigma form?" As I hope I've shown, the best way to approach that is to think about how you define multiplication on whatever sort of numbers you are working with.
Intuition and the fundamental theorem of calculus I think the following example explains the fundamental theorem of calculus quite intuitively. Or more precisely, that's what I thought; now I'm starting to have some doubts. Suppose $v(t)$ is the velocity of a car driving along the highway. The units for $t$ are in hours and the units for $v(t)$ are in miles per hour. Assume $v(t)$ is continuous and nonnegative. What is the displacement of the car over one hour (ie., $t \in [0,1]$)? Well, if we subdivide $[0,1]$ into $n$ subintervals of equal length, in each subinterval $\left[\frac{k}{n}, \frac{k+1}{n}\right]$ the velocity doesn't change too much for large $n$ and hence can be approximated by $v(\frac{k}{n})$. Therefore, the displacement in $\left[\frac{k}{n}, \frac{k+1}{n}\right]$ is equal to $\frac{1}{n} v(\frac{k}{n}) + \epsilon(k, n)$ where $\epsilon(k, n)$ is a small error dependent on $k$ and $n$. Hence $$ \text{Displacement} = x(1) - x(0) = \sum_{k=0}^{n-1} \frac{1}{n} v\left(\frac{k}{n}\right) + \sum_{k=0}^{n-1} \epsilon(k, n) $$ Note that the above equality holds for all $n$, since we have accounted for the error. If we assume that $ \sum_{k=0}^{n-1} \epsilon(k, n) \to 0$ as $n \to \infty$, then it's easy to see that $$x(1) - x(0) = \lim \sum_{k=0}^{n-1} \frac{1}{n} v\left(\frac{k}{n}\right) = \int_0^1 v(t) \ dt$$ However, it's not obviously clear to me why $\sum_{k=0}^{n-1} \epsilon(k, n) \to 0$ as $n \to \infty$. Why should this hold intuitively?
The function $v$ is uniformly continuous on $[0,1]$. Given any $\epsilon > 0$, there is an $N$ such that whenever $n \geq N$, we have, for all $k$ and all $x \in [k/n, (k+1)/n]$, the inequality $|v(x) - v(k/n)| \leq \epsilon$. Thus your error $\epsilon(k,n)$ is bounded in absolute value by $\epsilon/n$. Summing over $k$, the total error in the displacement is bounded in absolute value by $\epsilon$, so long as $n \geq N$.
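Spelled out (a sketch of the bound just mentioned): $$|\epsilon(k,n)| = \left|\int_{k/n}^{(k+1)/n} v(t)\,dt - \frac1n\,v\!\left(\frac kn\right)\right| = \left|\int_{k/n}^{(k+1)/n} \left(v(t)-v\!\left(\frac kn\right)\right)dt\right| \leq \frac{\epsilon}{n},$$ and summing over the $n$ subintervals gives $\left|\sum_{k=0}^{n-1}\epsilon(k,n)\right|\leq\epsilon$ for all $n\geq N$, which is exactly the statement $\sum_{k}\epsilon(k,n)\to 0$ as $n\to\infty$.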
Polynomial is irreducible over $\mathbb{Q}$ I've been scratching my head lately over this problem in my textbook: Prove that the polynomial $x^4 + x^3 +x^2 +x +1$ is irreducible over $\mathbb{Q}$. I've done some research and found this link, but they talk about Eisenstein's criterion, which we haven't covered in our math class just yet. Is there a general strategy where we can show whether a polynomial is irreducible over a field? My textbook really didn't go into depth on the topic of irreducibility of polynomials, but this Wiki link somewhat helps. Perhaps the rational root theorem may be helpful here, but how would I go about starting this proof? EDIT Please see the first comment for my initial strategy at showing the required result.
There exists a simple general method for monic quartic polynomials in $\mathbb{Z}[x]$. Suppose you have verified that the polynomial $$p(x) = x^4 + a x^3 + b x^2 + c x + d$$ has no rational roots. Then we want to see if it has quadratic factors, and if so we want to factor it. Also we don't want to use very elaborate trial and error methods. The first test that needs to be performed is to see whether p(x) is the square of a quadratic polynomial, that's easy to see by computing the GCD of $p(x)$ with its derivative. If there is a nontrivial GCD, then you're done. If not, then we proceed as follows. We reduce $p(x)$ modulo $x^2 - p x - q$, this yields: $$\left(p^3 + a p^2+ 2 p q + a q+ b p+c\right) x + p^2 q + q^2 +a p q+ b q+d$$ Then if $p(x)$ has a quadratic factor, this has to be identically zero. We thus need to equate the coefficient of $x$ and the constant term to zero and solve the two equations for $p$ and $q$. It's then convenient to start with eliminating the highest powers of $p$ in favor of lower powers until $p$ has been completely eliminated in favor of $q$. We then end up with an equation for $q$, and an expression for $p$ in terms of $q$. Then since $q$ had to divide $d$, you only have a few cases to check. If none of them work then $p(x)$ is irreducible. If $p(x)$ is reducible you'll find the factorization, unless both factors have the same value for $q$. In the latter case the single solution for $q$ obviously cannot tell you what both values for $p$ are (and they are different because $p(x)$ was verified to be square-free). If this exceptional case does not occur, then we have: $$p = \frac{a q^2+c q}{d-q^2}\tag{1}$$ and $$q^6 +b q^5 +(a c-d)q^4 + \left(a^2 d-2 b d+c^2\right)q^3 +\left(a c d-d^2\right)q^2 +b d^2 q+d^3 = 0$$ So, $q$ must be an integer that divides $d$ that satisfies this equation. If two such values for $q$ are found then you have found the factorization, the corresponding values for $p$ follow from Eq. (1). If only one solution for $q$ is found, then Eq. (1) will be singular, the two values for $p$ are then solutions of the equation: $$a p^2 + a^2 p+a b-2 c = 0$$
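To make the recipe concrete, here is a sketch of how it runs on the polynomial of the question, $x^4+x^3+x^2+x+1$, where $a=b=c=d=1$. The rational root theorem leaves only $\pm1$ as candidate roots, and neither works, so there are no linear factors. The sextic in $q$ becomes $$q^6+q^5+q+1=0,$$ and among the divisors $q=\pm1$ of $d=1$, only $q=-1$ is a root. So exactly one value of $q$ is found and Eq. (1) is singular ($p=\frac{q^2+q}{1-q^2}=\frac00$ at $q=-1$): we are in the exceptional case, and the two values of $p$ solve $$ap^2+a^2p+ab-2c = p^2+p-1=0,\qquad p=\frac{-1\pm\sqrt5}{2},$$ which are irrational. Hence there is no factorization into monic quadratics with integer coefficients, and by Gauss's lemma the polynomial is irreducible over $\mathbb{Q}$. (Indeed, over $\mathbb{R}$ it factors as $\left(x^2+\frac{1+\sqrt5}{2}x+1\right)\left(x^2+\frac{1-\sqrt5}{2}x+1\right)$.)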
Is this statement true? $ \pi $ is a factor of $ 3 \pi $ but 3 isn't? What this question attempts to ask is, perhaps, the rigorous definition of factor itself. I browsed the web for a definition of factor, and everywhere it was defined loosely (That common sense definition of factors). But I have 3 questions - * *Can we extend the concept of factors to the realm of real numbers? like the one my original question seems to ask? *Factors must necessarily divide a number perfectly (i.e., integrally) but the question is - must that factor also be an integer? *if we agree that $\pi$ is a factor of $3\pi$ then that means every number has infinite number of factors like $\pi /2$, etc. Should we, then, redefine prime numbers?
Here is some rather elementary information which might be helpful. The answer to your title-question is no, the statement is false. Reasoning: Whenever you have a valid representation of a number as product of other numbers, then each of these numbers is called a factor. Let's assume we are working with real numbers. A representation \begin{align*} 3\cdot \pi \end{align*} is valid and we can conclude $3$ is a factor of $3\pi$ as well as $\pi$. A few remarks to your questions: * *Ad (1) Can we extend the concept of factors to the realm of real numbers? The answer is: Yes, we can. Whenever we have a multiplication $\cdot$, as we do have when working with reals, we can also talk about factors which are the constituents, the building blocks to do this multiplication. * *Ad (2) Factors must necessarily divide a number perfectly (i.e., integrally) but the question is - must that factor also be an integer? The formulation of this question is somewhat problematic. But first this answer: If there is a valid representation of real numbers given as product of factors, the factors need not be integers. Example 1: The product $3\pi$ consists of two factors $3$ and $\pi$. One of them is an integer, the other is not an integer. No problem at all. Example 2: The product $\pi \cdot \pi$ consists of two factors $\pi$ and $\pi$. None of them is an integer. No problem at all. Note: The formulation divide a number perfectly (i.e., integrally) is problematic, since perfect is not a technical term in this context and integrally is not necessary in this context (see e.g. $\pi \cdot \pi$). * *Ad (3) if we agree that $\pi$ is a factor of $3\pi$ then that means every number has infinite number of factors like $\pi /2$, etc. Should we, then, redefine prime numbers? This is really a clever question and it touches algebraic structures and number theory. It addresses questions like * *Which kind of numbers do we want to work with? Are these natural numbers, integers, reals or complex numbers or $\ldots$? *What kind of multiplication is useful for these numbers? *If we have specified a certain kind of multiplication, what are the consequences. What are the factors, what is divisibility, which numbers of them should be called prime. The appropriate definition of a number being prime is part of algebra and will be defined there, when studying ring theory. Final note: Sometimes you might work with and stick to integers without using any other numbers. In this specific context whenever you consider a product \begin{align*} a\cdot b \end{align*} then $a$ and $b$ are integers and nothing else. In such cases a representation $3\cdot \pi$ is not valid and not under consideration, simply because $\pi$ is not an integer. The question "is $\pi$ a factor?" is in this context meaningless.
Why does the subgraph remain connected at each stage of Fleury's algorithm? On pages 42-43 in [1], it says: We conclude our introduction to Eulerian graphs with an algorithm for constructing an Eulerian trail in a give Eulerian graph. The method is know as Fleury's algorithm. THEOREM 2.12 Let $G$ be an Eulerian graph. Then the following construction is always possible, and produces an Eulerian trail of $G$. Start at any vertex $u$ and traverse the edges in an arbitrary manner, subject only to the following rules: (i) erase the edges as they are traversed, and if any isolated vertices result, erase them too; (ii) at each stage, use a bridge only if there is no alternative. Proof. We show first that the construction can be carried out at each stage. Suppose that we have just reached a vertex $v$, erasing the edges as we go. If $v \neq u$, then the subgraph $H$ that remains is connected and has only two vertices of odd degrees, $u$ and $v$. ... Why is $H$ connected? I can't prove it. Does anyone has any idea? Thanks in advance. [1] Robin J. Wilson, Introduction to Graph Theory, 5th ed., Prentice Hall, 2012.
A "bridge" is an edge which when removed disconnects the graph. You never cross one until forced. Until then, $H$ remains connected simply by this choice. So consider when you are forced to cross the bridge. Why are you required to cross the bridge? Because there are no longer any other edges connecting to this vertex. What happens when you cross the bridge? The graph becomes disconnected. It now has two pieces. But one piece is that lone vertex you just left, which is then removed since it is now isolated. Now only the other piece is left. Therefore the remaining graph is still connected.
Simplifying $\sum\limits_{k=0}^{\infty}\frac 1{2^{k+1}}\sum\limits_{n=0}^{\infty}\binom kn\frac 1{2(n+1)(3n+1)}$ Question: Is there a way to simplify $$\sum\limits_{k=0}^{\infty}\dfrac 1{2^{k+1}}\sum\limits_{n=0}^{\infty}\binom kn\dfrac 1{2(n+1)(3n+1)}\tag{1}$$ Into a single summation symbol? $\sum\limits_{k=0}^{\infty}\text{something}$ I inputed it into WolframAlpha and got a really complicated expression$$\sum\limits_{k=0}^{\infty}\dfrac {1-2^{k+1}+3\left(_2F_1\left[\begin{array}{c c}\frac 13,-k\\\frac 43\end{array};-1\right]\right)+3k\left(_2F_1\left[\begin{array}{c c}\frac 13,-k\\\frac 43\end{array};-1\right]\right)}{2^{k+3}(k+1)}$$ Which isn't what I really wanted because the inner sum is significantly more complex than before. Is there a way? I'm still relatively new to this. If you have a hint, it would mean a lot if you commented it!
Notice that $$\binom kn=0\forall n>k$$ Thus, it simplifies down to $$\sum_{k=0}^\infty\frac1{2^{k+1}}\sum_{n=0}^k\binom kn\frac1{2(n+1)(3n+1)}$$ And by an inverse Euler sum, this reduces down to $$\sum_{n=0}^\infty\frac1{2(n+1)(3n+1)}=\frac1{24}(\pi\sqrt3+9\ln(3))$$
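For the record, here is a sketch of how that last value comes out. Partial fractions give $$\frac1{2(n+1)(3n+1)}=\frac34\left(\frac1{3n+1}-\frac1{3n+3}\right),$$ and writing each term as an integral, $$\sum_{n=0}^\infty\frac1{2(n+1)(3n+1)}=\frac34\int_0^1\frac{1-x^2}{1-x^3}\,dx=\frac34\int_0^1\frac{1+x}{1+x+x^2}\,dx=\frac34\left(\frac{\ln 3}{2}+\frac{\pi}{6\sqrt3}\right)=\frac1{24}(\pi\sqrt3+9\ln 3)\approx 0.6387.$$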
Let $A$ and $B$ be square matrices of order $n$. Suppose that $\mathrm{rank}(A)=\mathrm{rank}(B)$ and $A^2B=A$. I am having difficulty trying to solve this problem during my practice: Let $A$ and $B$ be square matrices of order $n$. Suppose that $\mathrm{rank}(A)=\mathrm{rank}(B)$ and $A^2B=A$. Prove the following: * *$ B $ and $ A^2 $ have the same nullspace *$ B^2A = B $ I am given the following hints: For 1) Use the fact that: (i) $\mathrm{rank}(AB) \le \min\{\mathrm{rank}(A),\mathrm{rank}(B)\}, A = (a_{ij})_{m \times n} $ and $ B = (b_{ij})_{n \times p} $ (ii) $\mathrm{rank}(A) + \mathrm{nullity}(A) = n$ to get a relation in $\mathrm{nullity}(B)$ and $\mathrm{nullity}(A^2)$. Then use the fact if $ U \subseteq V $ are vector spaces such that $\dim(U) = \dim(V) $, then $ U = V $. For 2) (i) Note that $ Bu = 0 \iff A^2u = 0 $ (ii) Prove that $(B^2A - B)x = 0 $ for all $ x \in \mathbb{R}^n $ For 1), my attempt: $$ A^2B = A $$ $$ A^2Bx = 0 \iff Ax = 0, x \in \mathbb{R}^n $$ $$ Bx \in \mathrm{nullity}(A^2) $$ I am stuck here. My direction of thought seems wrong. But I really do not know what else to do. For 2), my attempt: $$ A^2x = 0, x \in \mathrm{nullity}(A^2), \mathrm{nullity}(B) $$ $$ Bx = 0, x \in \mathrm{nullity}(B), \mathrm{nullity}(A^2) $$ $$ B^2A = BBA = Bu \ \text{(Let }BA = u)$$ $$ \therefore Bu = 0 $$ $$ A^2u = 0 $$ $$ Bu - A^2u = 0 $$ $$ (B - A^2)u = 0 $$ I cannot seem how to get to my desired answer from here. I spent hours trying to utilize the hints but to no avail. The final product of the proof seems too far from the given conditions. Could someone please advise me?
A hint for 1): Observe a) $\dim\ker A=\dim\ker B$ by the rank-nullity theorem. b) Since $A^2B=A$, we have $\ker B\subseteq \ker A$. Further, $\ker A\subseteq\ker A^2$, so $\ker B\subseteq\ker A^2$. Moreover, $A^2B=A$ also gives $\operatorname{rank}(A)=\operatorname{rank}(A^2B)\leq\operatorname{rank}(A^2)\leq\operatorname{rank}(A)$, so $\operatorname{rank}(A^2)=\operatorname{rank}(A)$, and hence $\dim\ker A^2=\dim\ker A=\dim\ker B$ — two nested subspaces of the same dimension…
Find the determinant of a Vandermonde-like matrix I reduced a certain determinant to the following which has the form: \begin{vmatrix} 1&1&\cdots & 1 \\ -a_1&a_2&\cdots &a_n\\ -a_1^{2}&a_2^{2}&\cdots&a_n^{2}\\ \vdots&\vdots&\ddots& \vdots\\ -a_1^{n-1}&a_2^{n-1}&\cdots&a_n^{n-1}\\ \end{vmatrix} To clarify a bit, it is exactly the Vandermonde determinant except the first column is negative, while the $(1,1)$-entry is still $1$. I think the key is to apply Vandermonde determinant yet I can't proceed. There may be a quick answer to this, however. Any hints?
It is also possible to evaluate a more general determinant, where the clever observation of @user1551 isn't available. Let $c_0,\dots, c_{n-1}$ be arbitrary and let $X$ be a polynomial variable. Then we will evaluate $$ \begin{vmatrix} c_0&1&\cdots & 1 \\ c_1 X&a_2&\cdots &a_n\\ c_2 X^2&a_2^{2}&\cdots&a_n^{2}\\ \vdots&\vdots&\ddots& \vdots\\ c_{n-1}X^{n-1}&a_2^{n-1}&\cdots&a_n^{n-1}\\ \end{vmatrix}. $$ By a Laplace expansion on the first column we get that this is $$ \sum_{0}^{n-1} (-1)^{k} c_k X^k V_k \hspace{10em}\text{(*)} $$ where $V_k$ is the determinant of the corresponding $(n-1)\times(n-1)$ minor in $$ \begin{vmatrix} 1&\cdots & 1 \\ a_2&\cdots &a_n\\ a_2^{2}&\cdots&a_n^{2}\\ \vdots&\ddots& \vdots\\ a_2^{n-1}&\cdots&a_n^{n-1}\\ \end{vmatrix}. $$ But now consider the special case where all $c_k=1$; this is just the standard Vandermonde determinant, so we have $$ \sum_{0}^{n-1} (-1)^{k} X^k V_k = \prod_{r=2}^{n} (X-a_r)\prod_{2\leqslant s < t \leqslant n} (a_t - a_s). $$ We write, in the usual way, $$ \prod_{r=2}^{n} (X-a_r)=\sum_{k=0}^{n-1}(-1)^{k} \sigma_{n-1-k} X^k; $$ these are just the usual symmetric functions of the $a_2,\dots, a_{n}$ taking their sum, their sum two at a time, and so on. The coefficients $V_k$ then all have a common factor $\prod_{2\leqslant s < t \leqslant n} (a_t - a_s)$, the Vandermonde of $a_2, \dots, a_n$ (which deals with the skew-symmetry in these variables); the other factor in each case is just the symmetric function $\sigma_{n-1-k}$: $$ V_k= \sigma_{n-1-k}\prod_{2\leqslant s < t \leqslant n} (a_t - a_s).$$ Taking $c_0=1,\ c_1=\dots=c_{n-1}=-1$ and substituting $X=a_1$ in (*) gives the value for the determinant of the original question.
Chinese Remainder Theorem Puzzle I'm working on another CRT problem and I'm having a bit of trouble understanding the question at hand. A group of seven men have found a stash of silver coins and are trying to share the coins equally among each other. Finally, there are six coins left over, and in an ensuing fight, one man is slain. The remaining six men, still unable to share equally because two silver coins are left over, fight again — and another man lies dead. The remaining men attempt to share, except that one coin is left over. One more fight, one more dead man, and now an equal sharing is possible. So I'm assuming this means $$x\equiv 2 \pmod {6}$$ $$x\equiv 1 \pmod {5}$$ $$x\equiv 0 \pmod {4}$$ right?
No need to apply CRT for this problem: for $x\equiv 2 \pmod {6}$ $x\equiv 1 \pmod {5} $ $x\equiv 0 \pmod {4}$ The number is a multiple of $4$, so its last digit must be one of $(4,8,2,6,0)$ The number gives remainder $1$ when divided by $5$. So the last digit of the number must be one of $(6,1)$ From these two, we know the last digit of the number $=6$ Therefore we are looking for numbers of the form $6k+2$ whose last digit is $6$ For $k=9$ we get $56$ which is the required number
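It may be worth adding that the full solution set is $x\equiv 56\pmod{60}$, since $\operatorname{lcm}(6,5,4)=60$: any two solutions differ by a common multiple of $6$, $5$ and $4$. (Quick check: $116=56+60$ again leaves remainders $2$, $1$ and $0$.)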
Generalizing open subsets to $\Bbb{R}$ Prove that if $X$ and $Y$ are open subsets of $\Bbb{R}$ then $X\times Y$ is an open subset of $\Bbb{R}^2$. State and prove a generalization to $\Bbb{R}^n$. I figure the best approach is just to start with the generalization. So if $X_1,...,X_n$ are open subsets of $\Bbb{R}^n$ then $X_1\times\cdots\times X_n$ are open subsets of $\Bbb{R}^n$. Prove this statement. This is where I'm struggling to figure out what to prove exactly. Do I use open balls? Or perhaps the complement? I solved a similar problem involving proving all subsets were closed.
Can you prove that if $(x_1, \ldots, x_n) \in O = (a_1,b_1) \times (a_2, b_2) \times \ldots \times (a_n, b_n)$ there exists $r > 0$ such that $B(x,r) \subset O$?
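In case it helps, here is a sketch of how to choose $r$: take $$r=\min_{1\leq i\leq n}\,\min(x_i-a_i,\;b_i-x_i)>0.$$ If $y\in B(x,r)$, then $|y_i-x_i|\leq\|y-x\|<r$ for every coordinate $i$, so $a_i<y_i<b_i$ and $y\in O$. Since every open subset of $\Bbb{R}$ is a union of open intervals, $X_1\times\cdots\times X_n$ is a union of such boxes $O$, and a union of open sets is open, which gives the general statement.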
What combination length best secures my key safe? I have the following key safe and need to decide upon a combination for it. It's fairly simple mechanically with these features:- * *It allows the characters 0-9 and A and B (12 possible digits in total). *The combination can be 4 to 12 characters long. *Each button can be used only once. *A weirdness is that the buttons can be pressed in any order for the same code, so 1234 is the same code as 4321. *Once you've entered the correct code, you turn the knob to open the safe. It doesn't spring open immediately upon correct code entry. From the manufacturers website FAQ:- "The C500 Police approved key safe has 4,096 possible code combinations" It seems to me that the security /difficulty of the combination will depend on how many digits I set, but I can't figure out how many is best. Intuition suggests to me that the longest possible code is the most secure. But. Clearly if it's 12 digits long, and they're identical in any order, there can only be 1 combination that's 12 characters long. That's not good. Any ideas?
There are $4096$ subsets of the twelve element set of possible keys. That's the number advertised - it's not quite right since it includes keys with fewer than four elements, but that's not very many. The length of the key is really part of the key since when you turn the knob and it fails you have to start over. That means there's no advantage to choosing the length for the maximum number of possibilities (that would be $6$). You could argue that short codes are easier to test by brute force (there are "only" $495$ of length $4$) and that someone trying to break in systematically would try them first since they are easiest to type.
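If you want the exact count (a quick check of the arithmetic): the number of usable codes of each length $k$ is $\binom{12}{k}$, giving $$\sum_{k=4}^{12}\binom{12}{k}=495+792+924+792+495+220+66+12+1=3797=2^{12}-299,$$ where $299=\binom{12}{0}+\binom{12}{1}+\binom{12}{2}+\binom{12}{3}$ counts the excluded short codes.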
How many three-digit numbers can be formed using digits {$1$, $6$, $9$, $9$, $9$}? If I had $5$ distinct digits $(1,2,3,4,5)$ I would do it like so: $\frac{5!}{(5-3)!} = 60$ But I don't understand what to do if I have $3$ repeating digits.
You can just make a decision tree, branching at each of the first two positions on whether you pick a $9$ or one of the other digits ($1$ or $6$) — those are the stages where the choice affects subsequent options — while the factor for the last position just counts the available choices. Then multiply down each branch and add: $$2\cdot 1\cdot 1 + 2\cdot 1\cdot 2 + 1\cdot 2\cdot 2 + 1\cdot 1\cdot 3 = 2+4+4+3 = \fbox {13}$$ An inclusion-exclusion approach would be to regard this as a choice with restrictions on how many $1$s and $6$s we're allowed. The unrestricted choice would be $3^3=27$ options. Excluding choices with excess $1$s involves any cases with two $1$s, $6$ cases, or three $1$s, $1$ case - in total $7$ cases to remove. Excluding choices with excess $6$s likewise gives $7$ cases to remove. Total valid choices then is $27-2\cdot 7=13$.
Tensor Products of Modules (restriction and extension of scalars) Let $\phi:A \rightarrow B$ be a ring homomorphism, $M$ be an $A$-module, and $N$ a $B$-module. Prove that $$N \otimes_B (B \otimes_A M) \cong N \otimes_A M$$ as either $A$ or $B$-modules. We know that $B \otimes_AM$ is a $B$-module and $N$ is an $A$-module via extension and restriction of scalars. We have that $N \cong N \otimes_BB$. Is it legal to do this: $$N \otimes_B (B \otimes_AM) \cong (N \otimes_BB) \otimes_AM \cong N \otimes_AM$$
Yep; and that's exactly the approach I'd use. But if you're nervous, that argument also suggests how to write down an actual function that gives the isomorphism: * *The forward direction is $n \otimes b \otimes m \mapsto nb \otimes m $ *The backward direction is $n \otimes m \mapsto n \otimes 1 \otimes m $ and then you simply need to show that both functions are well-defined and are inverses. To show these definitions are well-defined, we use the fact that I've expressed their values on pure tensors. Recall: Let $U$ be a right $R$-module and $V$ be a left $R$-module. There is a one-to-one correspondence between * *Functions $f : U \times V \to W$ such that $f(u,v)$ is $R$-linear in both variables *Linear maps $g : U \otimes_R V \to W$ And these are related by $f(u,v) = g(u \otimes v)$. Furthermore, if $U,W$ are left $S$-module, then if $f(u,v)$ is $S$-linear in $u$ if and only if $g$ is $S$-linear. This extends inductively to repeated tensors; e.g. Let $U$ be a right $R$-module, $V$ be a left $R$ module and a right $S$-module, and $W$ be a left $S$-module. There is a one-to-one correspondence * *Functions $f : U \times (V \times W) \to X$ such that $f(u,v,w)$ is $R$-linear in $u$ and $v$, and $S$-linear in $v$ and $w$ *Linear maps $g : U \otimes_R (V \otimes_S W) \to X$ And these are related by $f(u,v,w) = g(u \otimes v \otimes w)$. You can derive this by applying the two-factor version to $U$ and $V \otimes W$, and then again to $V$ and $W$.
Prove $\sum_{j=0}^n \left(-\frac{1}{2}\right)^j \binom{n}{j}\binom{n+j}{j}\binom{j}{k} = 0$ when $n+k$ is odd An integral led me to a power series with these coefficients: $$a_k = \sum_{j=k}^n \left(-\frac{1}{2}\right)^j \binom{n}{j}\binom{n+j}{j}\binom{j}{k}$$ I strongly suspect that the series should have $a_k = 0$ when $n+k$ is odd, and I've verified it for $k,n\leq 10$. I'm looking for a direct proof of this. Does anyone have a suggestion?
Suppose we seek to verify that $$a_{n,k} = \sum_{j=k}^n \left(-\frac{1}{2}\right)^j {n\choose j} {n+j\choose j} {j\choose k}$$ is zero when $n+k$ is odd. We have $${n+j\choose j} {j\choose k} = \frac{(n+j)!}{n! k! (j-k)!} = {n+k\choose k} {n+j\choose n+k}$$ and obtain for the sum $${n+k\choose k} \sum_{j=k}^n \left(-\frac{1}{2}\right)^j {n\choose j} {n+j\choose n+k}$$ Introduce $${n+j\choose n+k} = {n+j\choose j-k} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{j-k+1}} (1+z)^{n+j} \; dz$$ Note that this vanishes when $j\lt k$ so we may lower $j$ to start at zero, getting for the inner sum $$\frac{1}{2\pi i} \int_{|z|=\epsilon} z^{k-1} (1+z)^{n} \sum_{j=0}^n \left(-\frac{1}{2}\right)^j {n\choose j} \frac{(1+z)^j}{z^j} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} z^{k-1} (1+z)^{n} \left(1-\frac{1+z}{2z}\right)^n \; dz \\ = \frac{1}{2^n} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n-k+1}} (1+z)^{n} (z-1)^n\; dz \\ = \frac{1}{2^n} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n-k+1}} (z^2-1)^n \; dz.$$ We thus have the closed form $$\frac{1}{2^n} {n+k\choose k} [z^{n-k}] (z^2-1)^n \\ = \begin{cases} \frac{1}{2^n} {n+k\choose k} (-1)^{(n+k)/2} {n\choose (n-k)/2} \quad\text{if}\quad n-k\quad\text{is even} \\ 0\quad\text{otherwise.} \end{cases}.$$ We see having reached the result that we did not make use of the differential in the integral which means the above also works using formal power series only.
How do we find the inverse Laplace transform of $\frac{1}{(s^2+a^2)^2}$? How do we find the inverse Laplace transform of $\frac{1}{(s^2+a^2)^2}$? Do I need to use the convolution theory? It doesn't match any of the known laplace inverse transforms. It matches with the Laplace transform of $\sin(at)$ but I don't know if that helps or not. Also there seems to be a formula with limits and imaginary numbers. How do I just know to apply that here?
Well, we can use the 'time-domain integration' property of Laplace transform: $$\mathscr{L}_t\left[\int_0^t\text{f}\left(\tau\right)\space\text{d}\tau\right]_{\left(\text{s}\right)}:=\int_0^\infty\left\{\int_0^t\text{f}\left(\tau\right)\space\text{d}\tau\right\}\cdot e^{-\text{s}t}\space\text{d}t=\frac{\text{F}\left(\text{s}\right)}{\text{s}}\tag1$$ Now, we know that (when $\Re\left(\text{s}\right)>0$): $$\mathscr{L}_t\left[\cos\left(\omega t\right)\right]_{\left(\text{s}\right)}:=\int_0^\infty\cos\left(\omega t\right)\cdot e^{-\text{s}t}\space\text{d}t=\frac{\text{s}}{\omega^2+\text{s}^2}\tag2$$ So, when we have: $$\frac{1}{\omega^2+\text{s}^2}=\frac{1}{\text{s}}\cdot\frac{\text{s}}{\omega^2+\text{s}^2}\space\space\space\to\space\space\space\int_0^t\cos\left(\omega\tau\right)\space\text{d}\tau=\frac{\sin\left(\omega t\right)}{\omega}\tag3$$ Using the derivative we get: $$\frac{\partial}{\partial\text{s}}\left\{\frac{1}{\omega^2+\text{s}^2}\right\}=-\frac{2\text{s}}{\left(\omega^2+\text{s}^2\right)^2}\space\Longleftrightarrow\space-\frac{1}{2\text{s}}\cdot\frac{\partial}{\partial\text{s}}\left\{\frac{1}{\omega^2+\text{s}^2}\right\}=\frac{1}{\left(\omega^2+\text{s}^2\right)^2}\tag4$$ So, we get: $$\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\left(\omega^2+\text{s}^2\right)^2}\right]_{\left(t\right)}=\mathscr{L}_\text{s}^{-1}\left[-\frac{1}{2\text{s}}\cdot\frac{\partial}{\partial\text{s}}\left\{\frac{1}{\omega^2+\text{s}^2}\right\}\right]_{\left(t\right)}=$$ $$-\frac{1}{2}\cdot\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\text{s}}\cdot\frac{\partial}{\partial\text{s}}\left\{\frac{1}{\omega^2+\text{s}^2}\right\}\right]_{\left(t\right)}=-\frac{1}{2}\int_0^t\mathscr{L}_\text{s}^{-1}\left[\frac{\partial}{\partial\text{s}}\left\{\frac{1}{\omega^2+\text{s}^2}\right\}\right]_{\left(\tau\right)}\space\text{d}\tau\tag5$$
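Carrying out the remaining steps (a sketch): since $\mathscr{L}^{-1}_\text{s}\left[\text{F}'\left(\text{s}\right)\right]_{\left(t\right)}=-t\,\text{f}\left(t\right)$ and $\mathscr{L}^{-1}_\text{s}\left[\frac{1}{\omega^2+\text{s}^2}\right]_{\left(t\right)}=\frac{\sin\left(\omega t\right)}{\omega}$, the integrand in $(5)$ equals $-\tau\,\frac{\sin\left(\omega\tau\right)}{\omega}$, so $$\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\left(\omega^2+\text{s}^2\right)^2}\right]_{\left(t\right)}=\frac{1}{2\omega}\int_0^t\tau\sin\left(\omega\tau\right)\space\text{d}\tau=\frac{1}{2\omega}\left(\frac{\sin\left(\omega t\right)}{\omega^2}-\frac{t\cos\left(\omega t\right)}{\omega}\right)=\frac{\sin\left(\omega t\right)-\omega t\cos\left(\omega t\right)}{2\omega^3},$$ which matches the standard table entry.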
How do I solve the differential equation: $ \frac{dy}{dx} = \frac{x+y}{x-y} $? $$ \frac{dy}{dx} = \frac{x+y}{x-y} $$ I have tried this problem for so long; please help.
Well, we have that: $$\text{y}'\left(x\right)=\frac{x+\text{y}\left(x\right)}{x-\text{y}\left(x\right)}\tag1$$ Let $x\cdot\text{r}\left(x\right)=\text{y}\left(x\right)$: $$x\cdot\text{r}'\left(x\right)+\text{r}\left(x\right)=\frac{x+x\cdot\text{r}\left(x\right)}{x-x\cdot\text{r}\left(x\right)}\space\Longleftrightarrow\space\int-\frac{\text{r}'\left(x\right)\cdot\left(\text{r}\left(x\right)-1\right)}{1+\text{r}\left(x\right)^2}\space\text{d}x=\int\frac{1}{x}\space\text{d}x\tag2$$ Now, use: * *For the LHS, substitute $\text{u}=\text{r}\left(x\right)$: $$\int-\frac{\text{r}'\left(x\right)\cdot\left(\text{r}\left(x\right)-1\right)}{1+\text{r}\left(x\right)^2}\space\text{d}x=\int\frac{1-\text{u}}{1+\text{u}^2}\space\text{d}\text{u}=\arctan\left(\text{u}\right)-\frac{\ln\left|1+\text{u}^2\right|}{2}+\text{C}_1\tag3$$ *For the RHS: $$\int\frac{1}{x}\space\text{d}x=\ln\left|x\right|+\text{C}_2\tag4$$ So, we get: $$\arctan\left(\text{r}\left(x\right)\right)-\frac{\ln\left|1+\text{r}\left(x\right)^2\right|}{2}=\ln\left|x\right|+\text{C}\tag5$$ Now, set $x\cdot\text{r}\left(x\right)=\text{y}\left(x\right)$ back: $$\arctan\left(\frac{\text{y}\left(x\right)}{x}\right)-\frac{\ln\left|1+\left(\frac{\text{y}\left(x\right)}{x}\right)^2\right|}{2}=\ln\left|x\right|+\text{C}\tag6$$ Simplify a bit: $$2\arctan\left(\frac{\text{y}\left(x\right)}{x}\right)=\ln\left(x^2+\text{y}\left(x\right)^2\right)+\text{C}\tag7$$
Prove that $a\mid c$; given that $a\mid bc$ and $\gcd(a,b) = 1$. Please help answer the number theory problem. The proof I came across goes like this: Let $d = \gcd(a,b)$ then $d = ma + nb$. $ma + nb = 1$. then $mac + nbc = c$. $a\mid mac$, $a\mid nbc$, therefore $a\mid c$. Not sure how last step unfolded. Thanks. (This is my personal question I have not posted it anywhere else, and the context of my question is my own)
We know $\gcd(a,b)=1\iff au+bv=1$ and $a\mid bc\iff bc=ak$ for some $k\in\Bbb Z$. Then $$au+bv=1\implies auc+bvc=c\implies auc+akv=c\implies a(uc+kv)=c\implies a\mid c$$ Note, after the second implication we introduced $bc=ak$.
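A quick numerical sanity check of the argument: take $a=3$, $b=4$, $c=6$, so $a\mid bc$ (here $bc=24=3\cdot 8$, i.e. $k=8$) and $\gcd(3,4)=1$ with $u=-1$, $v=1$. Then $a(uc+kv)=3(-6+8)=6=c$, so indeed $3\mid 6$.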
What is the value of the $X$? With the details, what could be the value of X?
Let $d(x, y)$ be the distance of two vectors $x, y \in \mathbb{R}^2$. Let $p$ be the point where the two triangles touch. We aim to find $||p||$. $d((15/2, 0), p) = 15/2$ and $d((0, 10), p) = 5$. Let $p = (p_1, p_2)$. Then \begin{align*} \sqrt{(15/2 - p_1)^2 + p_2^2} = 15/2, \ \sqrt{p_1^2 + (10-p_2)^2} = 5 \end{align*} The first gives $p_2 = \sqrt{15 p_1 - p_1^2}$, assuming it is positive. Inserting this into the second we get \begin{align*} \sqrt{p_1^2 + (10-\sqrt{15 p_1 - p_1^2})^2} = 5 \end{align*} And solving for $p_1$ we get $p_1 = 3$, so that $p_2$ must be $6$, using the other equations. $||p||$ must therefore be $\sqrt{ 3^2 + 6^2} = \sqrt{45} = 3 \sqrt{5}$ More detail upon request. Edit: it has been pointed out by Andrei in the comments below that the equations that solve for the coordinates of $p$ can be geometrically interpreted as finding the point at the intersection of two circles.
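For the record, the algebra behind "solving for $p_1$ we get $p_1 = 3$" (a sketch): substituting $p_2 = \sqrt{15 p_1 - p_1^2}$ into the squared second equation gives $$p_1^2 + 100 - 20\sqrt{15p_1-p_1^2} + 15p_1 - p_1^2 = 25 \;\Longrightarrow\; 4\sqrt{15p_1-p_1^2} = 3p_1+15,$$ and squaring once more yields $25p_1^2-150p_1+225=0$, i.e. $(p_1-3)^2=0$, so $p_1=3$ is the unique solution.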
Number of 5-card Charlie hands in blackjack A five-card Charlie in blackjack is when you have a total of 5 cards and you do not exceed a point total of 21. How many such hands are there? Of course, the natural next question concerns six-card Charlies, etc. It seems like one way of determining the answer might be to determine the total number of 5-card hands and then subtract out the number of hands that exceed 21, but I am at a loss as to how to do this effectively. Is there some use of the inclusion-exclusion principle at work here? The condition that the cards do not exceed 21 is the difficulty I am having a hard time addressing. Any ideas?
Here's a recursive Java code

import java.util.Arrays;

public class MSE2183749 {

    // Counts ordered draws (permutations counted separately) of n cards
    // totalling at most t1 points; cards[j] = how many cards of value j remain.
    public static long countp(final int t1, final int n, final int[] cards) {
        if (n > t1) return 0; // every card is worth at least 1 point
        if (n == 0) return 1;
        long ac = 0;
        for (int j = 1; j < cards.length && j + n - 1 <= t1; j++) {
            if (cards[j] == 0) continue;
            int[] cards2 = Arrays.copyOf(cards, cards.length);
            cards2[j]--;
            ac += cards[j] * countp(t1 - j, n - 1, cards2);
        }
        return ac;
    }

    public static void main(String[] args) {
        // 4 cards each of values 1..9 (aces counted as 1), 16 ten-valued cards
        long res = countp(21, 5, new int[] { 0, 4, 4, 4, 4, 4, 4, 4, 4, 4, 16 });
        System.out.println(res / (5 * 4 * 3 * 2)); // divide by 5! for unordered hands
    }
}

The result, as other answers have pointed out, is 139972
Lower bound for the 'de Polignac constant' Let's introduce the 'de Polignac constant' $ K_{Pol} : =\sum_{i>0}2^{-k_{i}} $ , where $ k_{i} $ is the $ i $ -th positive integer such that $ 2k_{i} $ is a Polignac number, i.e a number that is the difference of two consecutive primes in infinitely many ways. De Polignac's conjecture is equivalent to $ K_{Pol}=1 $ . Do we know a non trivial lower bound for $ K_{Pol} $? Edit : a proof that $ K_{Pol}>1/2 $ would entail the truth of the twin prime conjecture.
We can give much better estimates on $K_{Pol}$ than ones mentioned in Charles answer. All the results used are already proven in the same paper by Polymath, but instead of the results cited in the introduction, we can use stronger results encapsulated in Theorem 3.2, the relevant parts of which I will quote for completeness. Definition Denote by $DHL[k,j]$ the statement "For any admissible $k$-tuple $\mathcal H=(h_1,\dots,h_k)$ there exist infinitely many translates $n+\mathcal H=(n+h_1,\dots,n+h_k)$ of $\mathcal H$ which contain at least $j$ primes". Theorem 3.2 The following hold: * *$DHL[50,2]$, unconditionally. *$DHL[3,2]$, under the assumption of GEH conjecture. Also, this result isn't listed in the theorem, but on page 46 it is mentioned that EH implies $DHL[6,2]$, as proven by Goldston, Pintz and Yıldırım. I won't be doing full calculations in all cases, but I will prove an estimate using the result under GEH. Claim Under GEH, either $2$ is a Polignac number, or, for every $n\geq 1$, either $6n$ or $6n-2$ is a Polignac number. Proof The triple $(0,2,6n)$ is rather clearly admissible, so under GEH infinitely many translates of this triple contain two primes. If $2$ is not Polignac, then for each $n$, such pairs of primes must have difference $6n-2$ or $6n$. One of these must appear infinitely often. Therefore either $K_{Pol}$ is at least $2^{-1}$, or at least $\sum_{n=1}^{\infty}2^{-3n}=\frac{1}{7}$. In either case, $K_{Pol}\geq\frac{1}{7}$. I don't think you can improve this bound, since every admissible triple will have a difference divisible by $3$, so it appears to be consistent that Polignac numbers are precisely the ones divisible by $6$.
Prove $\lim\limits_{x \to +\infty } \frac{{f(x)}}{x} = \lim\limits_{x \to +\infty } f'(x)$ if both limits exist Let $f:\Bbb R \to \Bbb R$ be differentiable, and $\mathop {\lim }\limits_{x \to \infty } \frac{{f(x)}}{x}$ (the slope of some asymptote) exists and the limit of the derivative $\mathop {\lim }\limits_{x \to + \infty } f'(x)$ exists as well, then show $$\mathop \lim\limits_{x \to +\infty } \frac{{f(x)}}{x} = \lim \limits_{x \to +\infty } f'(x)$$ or $$\lim\limits_{x \to +\infty } \frac{{f(x)}}{x} = \lim\limits_{x \to +\infty } \lim\limits_{h \to 0} \frac{{f(x + h) - f(x)}}{h}$$ This is intuitively true, but I don't know how to show it and I don't know how to deal with the mixed limit. Thanks!
Let $f:\Bbb R \to \Bbb R$ be differentiable, and $\mathop {\lim }\limits_{x \to \infty } \frac{{f(x)}}{x}$ (the slope of some asymptote) exists and the limit of the derivative $\mathop {\lim }\limits_{x \to + \infty } f'(x)$ exists as well, then show $\mathop \lim\limits_{x \to +\infty } \frac{{f(x)}}{x} = \lim \limits_{x \to +\infty } f'(x)$ The proposition is true, but the hypothesis is stronger than needed. Instead of assuming that $\mathop {\lim }\limits_{x \to \infty } \frac{{f(x)}}{x}$ exists, it is enough to require that $\mathop {\lim }\limits_{x \to \infty } |f(x)| = \infty$. Then, provided that $\mathop {\lim }\limits_{x \to \infty } f'(x)$ exists, the result follows by direct application of L'Hôpital's rule.
Logarithm Properties Does $$\log_{a^2}x=\log_a 2x$$ If it does, then how would you prove it? If it doesn't, then how would I simplify $\log_{a^2} x$ so that it is in the form of $\log_a n$ where $n$ is some polynomial?
Here's an explanation which doesn't explicitly use the change of base formula: Let $c = \log_{a^{2}}(x)$. Then, from the definition of the $\log$ function: $(a^{2})^{c} = x$ $\log(a^{2c}) =\log x$ $2c\log a = \log x \Rightarrow c = \frac{\log x}{2\log a}$ Let $d = \log_{a}(2x)$. Then $a^{d} = 2x$, so $d = \frac{\log 2 + \log x}{\log a}$. Comparing, $d = c$ would force $\frac{\log 2+\log x}{\log a} = \frac{\log x}{2\log a}$, i.e. $\log x = -2\log 2$, which holds only for $x = \frac14$; so in general $d \neq c$. For the second part of your question: $c = \frac{\log x}{2\log a} = \frac{\log\sqrt{x}}{\log a}$, so $\log_{a^{2}}x = \log_{a}\sqrt{x}$.
Matrix reciprocal positive to prove $λ_{max⁡}=n$ Suppose we have $n\times n$ matrix $A$ having only positive elements and satisfying the property $a_{ij}=1/a_{ji}$ (a matrix satisfying this property is called a reciprocal matrix). If its largest eigenvalue $λ_{max}$ is equal to $n$, then the matrix $A$ satisfies the property (consistency property) $a_{ij}a_{jk}=a_{ik}$ where $i,j,k=1,2,...,n$. I already found an example of $5\times 5$ reciprocal matrix to show this condition: \begin{bmatrix}1&1/2&1&1&1/4\\2&1&2&2&1/2\\1&1/2&1&1&1/4\\1&1/2&1&1&1/4\\4&2&4&4&1\end{bmatrix} It has simple characteristic equation $λ^5-5λ^4=0$, so the eigenvalues of matrix $A$ are $λ=0$ and $λ=5$ and it's shown that $λ_{max}=n=5$. Is there anyone can help me to give another positive reciprocal matrix example that has simple characteristic equation and integer number as its eigenvalues ($4\times 4, 5\times 5$ or $6\times 6$ matrix)?
You get an endless supply of such matrices of any size from the following recipe. * *Let $B$ be an $n\times n$ matrix of all $1$. *Let $D$ be a diagonal $n\times n$ matrix with some positive entries. *Let $A=DBD^{-1}$. Here the eigenvalues of $B$ are $\lambda_{max}=n$ (multiplicity one) and $\lambda=0$ (multiplicity $n-1$). Therefore the same applies to $A$. Furthermore, $B$ is trivially reciprocal, and so is $A$, because $a_{ij}=d_i/d_j$. Your example matrix is gotten from this recipe with $D=diag(1,2,1,1,4)$. Actually all the reciprocal matrices satisfying the consistency property are of this form. If $a_{ij}=1/a_{ji}$, then we can use $D=diag(a_{11},a_{21},\ldots,a_{n1})$. This works because the consistency property implies that $$ a_{ij}=\frac{a_{i1}}{a_{j1}} $$ for any pair $i,j$.
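For instance, taking $n=4$ and $D=\operatorname{diag}(1,2,4,8)$ in the recipe gives the positive reciprocal matrix $$A=\begin{bmatrix}1&1/2&1/4&1/8\\2&1&1/2&1/4\\4&2&1&1/2\\8&4&2&1\end{bmatrix}$$ with characteristic equation $\lambda^4-4\lambda^3=0$, hence integer eigenvalues $\lambda=0$ (three times) and $\lambda_{max}=4=n$, just as requested.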
The Fourier transform of $|x|^{-a}$, where $0<a<n$ 1) When checking the calculation of the Fourier transform of $|x|^{-a}$, where $\frac{n}{2}<a<n$, it can be proved that the Fourier transform of this function is rotationally invariant and is a homogeneous function of degree $a-n$; then it states that we can write it in the form $c_{a,n}|\xi|^{a-n}$, and I don't know why. 2) When proving the case when $0<a<\frac{n}{2}$, it uses the Fourier inversion formula, but I don't know why it can be applied to $|\xi|^{-a}$
* *Being rotationally invariant implies being constant on the unit sphere: $\hat f(\xi)=c$ when $|\xi|=1$. Being homogeneous of degree $a-n$ means $$\hat f(\xi) = |\xi|^{a-n}\hat f(\xi/|\xi|)$$ Put the two things together, and you have the conclusion. *Fourier inversion property (the inverse being almost the same transform with a different sign and perhaps normalization) holds for tempered distributions. The function $|x|^{-a}$, $0<a<n$, is a tempered distribution. Some references are in comments here.
Proof that hexagonal numbers are triangular While solving Project Euler problem 45, I tried to prove that all hexagonal numbers are triangular. I came up with the following. We need to prove that a number of the form (hexagonal formula) $$2n_h^2 - n_h$$ can be written as (triangular formula) $$\frac{n_t^2 + n_t}{2}.$$ This seems easy enough: $$ \begin{align} & 2n_h^2 - n_h \\ =& \frac{4n_h^2 - 2n_h}{2} \\ =& \frac{(2n_h)^2 - 2n_h}{2} \\ =& \frac{(-2n_h)^2 + (-2n_h)}{2} \square \end{align} $$ This proof seems to imply that the hexagonal number with index $n_h$ is the triangular number with index $-2n_h$. Clearly, there is something wrong here (the minus should not be there) but I cannot figure out where I went wrong.
Let's adopt a new notation $T(n)=\frac{n(n+1)}{2}$ and $H(n)=2n^2-n$ The aim is to prove $T(n)=H(m)$ for some $n,m$. This gives $\frac{n(n+1)}{2}=2m^2-m=m(2m-1)\iff n(n+1)=(2m-1)2m$ Under this form it is obvious that $n=2m-1$ works and $T(2m-1)=H(m)$ or equivalently $H(\frac{n+1}{2})=T(n)$ (for $n$ odd). Don't be surprised by your formula though, because in fact $T(-n)=\frac{-n(-n+1)}{2}=\frac{n(n-1)}{2}=T(n-1)$ And you get $H(n)=T(-2n)=T(2n-1)=H(n)$ and the cycle is complete.
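As a quick numerical check of $T(2m-1)=H(m)$: $H(2)=6=T(3)$, $H(3)=15=T(5)$, and $H(4)=28=T(7)$.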
Inequality with square root on both sides I have an inequality like this: $$\sqrt{\vphantom{|}\ 3x^2-7x-20}<\sqrt{\vphantom{|}\ 8x+22}$$ I'm unsure how to solve it. I'm guessing I have to square both sides, but I don't know what happens with the inequality sign. I guess there are four cases, depending on the sign of each side of the inequality $(++,\ +-,\ -+,\ --)$. And then I have to check if the solution fits the case, i.e. if both sides are of appropriate sign for the resulting interval. But all this seems like a lot of work. There is an easier way, right? How is it usually done?
HINT: As $3x^2-7x-20=(3x+5)(x-4)$ $\sqrt{3x^2-7x-20}$ will be real if $x\ge$max$(4,-5/3)=4$ or if $x\le$min$(4,-5/3)=-5/3$ Now $\sqrt{8x+22}$ will be real if $8x+22\ge0\iff x\ge-11/4$ So, we need $-11/4\le x\le-5/3$ or $x\ge4$ Now use $\sqrt a<\sqrt b\iff a< b$ (for $a,b\ge0$)
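Finishing the computation from the hint (a sketch): on the common domain, squaring gives $3x^2-7x-20<8x+22$, i.e. $3x^2-15x-42<0$, i.e. $3(x-7)(x+2)<0$, so $-2<x<7$. Intersecting with the domain $\left[-\frac{11}4,-\frac53\right]\cup[4,\infty)$ found above, the solution set is $$x\in\left(-2,\,-\frac53\right]\cup[4,\,7).$$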
Prove that $a * b = a + b - ab$ defines a group operation on $\Bbb R \setminus \{1\}$ So, basically I'm taking an intro into proofs class, and we're given homework to prove something in abstract algebra. Being one that hasn't yet taken an abstract algebra course I really don't know if what I'm doing is correct here. Prove: The set $\mathbb{R} \backslash \left\{ 1 \right\}$ is a group under the operation $*$, where: $$a * b = a + b - ab, \quad \forall \,\, a,b \in \mathbb{R} \backslash \left\{ 1 \right\} .$$ My proof structure: After reading about abstract algebra for a while, it looks like what I need to show is that if this set is a group, it has to satisfy associativity with the operation, and the existence of an identity and inverse element. So what I did was that I assumed that there exists an element in the set $\mathbb{R} \backslash \left\{ 1 \right\}$ such that it satisfies the identity property for the set and another element that satisfies the inverse property for all the elements in the set. However I'm having trouble trying to show that the operation is indeed associative through algebra since $$\begin{align} a(b * c) & = a(b+c) - abc \\ & \ne (a+b)c - abc = (a*b)c \\ \end{align}$$ So in short, I want to ask if it's correct to assume that an element for the set exists that would satisfy the identity and inverse property for the group. Also, is this even a group at all since the operation doesn't seem to satisfy the associativity requirements.
Associativity is satisfied: $$a*(b*c)=\\a*(b+c-bc)=\\a+b+c-bc-a(b+c-bc)=\\a+b+c-ab-bc-ac+abc$$ and $$(a*b)*c=\\(a+b-ab)*c=\\a+b-ab+c-(a+b-ab)c=\\a+b+c-ab-bc-ac+abc$$ For the other axioms you don't assume the identity and inverses exist; you exhibit them. Since $a*0=a+0-a\cdot0=a$, the element $0$ is the identity. Solving $a*b=0$ gives $b=\frac{a}{a-1}$, which is defined precisely because $a\neq1$, and $\frac{a}{a-1}\neq1$ for every $a$, so inverses stay in the set. Finally, the set is closed under $*$: $a*b=1$ would mean $(1-a)(1-b)=0$, which is impossible on $\mathbb{R}\setminus\{1\}$.
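As a follow-up (my addition), one can let SymPy confirm associativity along with the identity and inverse just exhibited:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
star = lambda x, y: x + y - x * y

print(sp.simplify(star(a, star(b, c)) - star(star(a, b), c)))  # 0 -> associative
print(sp.simplify(star(a, 0) - a))                             # 0 -> identity is 0
print(sp.simplify(star(a, a / (a - 1))))                       # 0 -> inverse of a is a/(a-1)
```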
Conditional Probabilities, babies using Bayes There are $3$ boys and $x$ girls in a hospital (babies). A mother gives birth to a child but doesn't know its gender. A midwife picks a random baby; given that she picked a boy, what is the probability it comes from the mother? I used Bayes, but I am struggling to specify (understand/make sense of) the probabilities. Can someone please explain what each of the following probabilities are? $$P(\text{Boy}\,|\,\text{Mother}),\, P(\text{Boy}),\, P(\text{Mother})$$
With the information given and reading "There are $3$ boys and $x$ girls in a hospital" as the position after the mother has given birth, I would have thought the probabilities were independent, i.e. * *$P(\text{mother's child picked})= \dfrac{1}{3+x}$ *$P(\text{boy picked})= \dfrac{3}{3+x}$ *$P(\text{mother's child picked and it is a boy})= \dfrac{3}{(3+x)^2}$ *$P(\text{mother's child picked}\mid \text{boy picked})= \dfrac{1}{3+x}$ *$P(\text{boy picked}\mid \text{mother's child picked})= \dfrac{3}{3+x}$ (Added later) If the "There are $3$ boys and $x$ girls in a hospital" is the position before the mother gives birth, then I would have said * *$P(\text{mother's child is a boy})= \dfrac{1}{2}$ *$P(\text{mother's child picked})= \dfrac{1}{4+x}$ *$P(\text{mother's child picked} \mid \text{mother's child is a boy})= \dfrac{1}{4+x}$ *$P(\text{boy picked} \mid \text{mother's child is a boy})= \dfrac{4}{4+x}$ *$P(\text{boy picked} \mid \text{mother's child is a girl})= \dfrac{3}{4+x}$ *$P(\text{boy picked})=\dfrac12 \dfrac{4}{4+x} + \dfrac12 \dfrac{3}{4+x} = \dfrac{7}{2(4+x)}$ *$P(\text{mother's child picked and it is a boy}) = \dfrac12 \dfrac{1}{4+x} = \dfrac{1}{2(4+x)}$ *$P(\text{mother's child picked} \mid \text{boy picked}) = \dfrac{\frac{1}{2(4+x)}}{\frac{7}{2(4+x)}}= \dfrac{1}{7}$ which is the same as Cato's answer
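To make the second reading concrete, here is a Monte Carlo sketch (my addition): $3$ boys and $x$ girls are present before the birth, the mother's child is a boy with probability $1/2$, and one of the $4+x$ babies is then picked uniformly at random. The value of $x$ below is an arbitrary choice; the estimate should come out near $1/7$ regardless.

```python
import random

x = 5
trials, hits, boys = 10**6, 0, 0
for _ in range(trials):
    mothers_is_boy = random.random() < 0.5
    pick = random.randrange(4 + x)      # index 0 = mother's child, 1..3 = the boys
    picked_boy = mothers_is_boy if pick == 0 else pick <= 3
    if picked_boy:
        boys += 1
        hits += pick == 0
print(hits / boys)  # approximately 1/7 = 0.1428...
```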
Proofs with inequalities I am asked to prove the following, given that $a<b$ and $c<d$, with $a,b,c,d>0$: (1) $a+c<b+d$ (2) $ac<bd$ I have been struggling with these kinds of questions in my course, and am wondering if there's a good way to approach them.
We want to show the following two things, (1) $a+c<b+d$, and (2) $ac<bd$. The first is quite straightforward. First, add $c$ to both sides of $a<b$ to get $a+c<b+c$. Then add $b$ to both sides of $c<d$ to get $b+c<b+d$. Hence, we have $$a+c<b+c<b+d\to a+c<b+d,$$ which is what we wanted to show. For the second part, multiply $a<b$ by $c$. (Granted that $a,b,c,d>0$, we won't worry about reversal of the inequality when multiplying through by them.) We have $ac<bc$. Then multiply $c<d$ by $b$. We have $bc<bd$, and so, $$ac<bc<bd\to ac<bd,$$ which is again what we wanted to show. It is useful to have a list of fundamental rules next to you when you are approaching questions like this.
Uniform Lp bound implies finite Lp norm with random index? Suppose that $\{X_N:N\in\mathbb N\}$ is a sequence of random variables uniformly bounded in $L^p$ norm for all $p>1$: $$\sup_N\|X_N\|_{L^p}<C(p).$$ Let $\mathcal N$ be some random variable taking values in $\mathbb N$ that is independent of the sequence $\{X_N\}$, and define $X_{\mathcal N}$ as the random variable $$\omega\to X_{\mathcal N(\omega)}(\omega).$$ Is it then possible to prove that $\|X_{\mathcal N}\|_{L^{p}}<\infty$? It is easy to see that $$\|X_{\mathcal N}\|_{L^{p}}=\left\|\sum_{N}X_N\cdot1_{\{N=\mathcal N\}}\right\|_{L^{p}},$$ and thus we get the result by Minkowski's and Hölder's inequalities if $$\sum_N\|1_{\{N=\mathcal N\}}\|_{L^{2p}}<\infty.$$ However, is it possible to prove this with weaker (or ideally no) conditions on the distribution of $\mathcal N$?
Notice that for all $\omega$ $$ X_{\mathcal N}\left(\omega\right)=\sum_{n=1}^{+\infty}X_{n}(\omega)\mathbf 1\left\{\mathcal N(\omega)=n\right\} $$ and using pairwise disjointness of the events $\left\{\omega\mid \mathcal N(\omega)=n\right\}$, we get $$ \left\lvert X_{\mathcal N}\left(\omega\right)\right\rvert^p=\sum_{n=1}^{+\infty}\left\lvert X_{n}(\omega)\right\rvert^p\mathbf 1\left\{\mathcal N(\omega)=n\right\}. $$ Integrating and using independence between $\mathcal N$ and the sequence $\left(X_n\right)_{n\geqslant 1}$, we get $$ \mathbb E\left\lvert X_{\mathcal N}\right\rvert^p=\sum_{n=1}^{+\infty}\mathbb E\left\lvert X_{n}\right\rvert^p\,\mathbb P\left\{\mathcal N=n\right\}\leqslant C(p)^p\sum_{n=1}^{+\infty} \mathbb P\left\{\mathcal N=n\right\}=C(p)^p, $$ since $\mathbb E\left\lvert X_{n}\right\rvert^p=\|X_n\|_{L^p}^p\leqslant C(p)^p$ for every $n$.
Is $K_n$ compact in $l^{\infty}(\mathbb{R})$ Consider $X=l^{\infty}(\mathbb{R})$ the space of all bounded sequences of real numbers endowed with the sup norm. I would like to know two things about this Banach space: * *Are the sets $K_n=[-n,n]^{\mathbb{N}}$ compact? *Do the sets $K_n=[-n,n]^{\mathbb{N}}$ have non-empty interior? Note: I'd like to know these things to overcome some technical issues in a problem that I'm working on.
Note that $K_{n}$ is the closed ball centered at $0$ with radius $n$ in your space. So it has a non-empty interior. It is not compact: the sequence $(s_{k})$ defined by $s_{k}(m) = \delta_{k,m}$ lies in $K_{n}$ and has no convergent subsequence, because $\lVert s_{k}-s_{l} \rVert = 1$ whenever $k\neq l$.
Show that $\lim_{n\to \infty}\sin{n\pi x} =0$ if $x\in \mathbb{Z},$ but the limit fails to exist if $x\notin \mathbb{Z}.$ Show that $\lim_{n\to \infty}\sin{n\pi x} =0$ if $x\in \mathbb{Z},$ but the limit fails to exist if $x\notin \mathbb{Z}.$ 1st part If $x\in \mathbb{Z}$ then $\sin{n\pi x}=0$ for all $n,$ giving the first part. Edit: 2nd part If $x\notin \mathbb{Z},$ I want to show that the limit doesn't exist. How to do that?
Fix $0<x<1.$ Define $A$ to be the closed arc on the unit circle centered at $(0,1),$ having arc length $\pi x.$ Good to draw a picture here, because it makes things transparent. A little geometry shows that if $e^{it}\in A,$ then $\sin t \ge \sin [(\pi/2)(1-x)] >0.$ Claim: $e^{in\pi x}\in A$ for infinitely many $n.$ Proof of claim: The points $e^{in\pi x}, n=1,2,\dots$ travel around the circle infinitely many times, in steps of arc length $\pi x.$ The closed arc $A$ has arc length $\pi x.$ Thus there is no way $e^{in \pi x}$ can "jump over" $A$ in this process, so it will land in $A$ at least once in every orbit of the circle. That proves the claim. The claim implies $\sin (n\pi x) \ge \sin [(\pi/2)(1-x)]$ for infinitely many $n.$ But exactly the same kind of reasoning applies to the arc $B,$ the closed arc on the unit circle centered at $(0,-1),$ having arc length $\pi x.$ The conclusion of the claim holds for $B,$ hence $\sin (n\pi x) \le - \sin [(\pi/2)(1-x)]$ for infinitely many $n.$ Now in general, if $a>0$ and $y_n$ is a sequence such that $y_n\ge a$ for infinitely many $n,$ and $y_n \le -a$ for infinitely many $n,$ then $y_n$ cannot converge. Since that's the situation with $\sin (n\pi x)$ here, we see $\sin (n\pi x)$ diverges. Now $0<x<1$ above, but since $\sin(t+\pi)=-\sin t,$ replacing $x$ by $x+m$ only changes the signs of the terms, so the same result holds if $x\in (m,m+1)$ for any $m\in \mathbb Z.$ Thus $\sin (n\pi x)$ diverges for all $x\notin \mathbb Z.$
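A quick numerical illustration of the two claims (my addition; $x=\sqrt2$ is just a sample irrational):

```python
import math

x = math.sqrt(2)
vals = [math.sin(n * math.pi * x) for n in range(1, 2001)]
print(max(vals), min(vals))  # values near +1 and near -1 both occur
```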
Proving a limit of an integral I don't know how to approach this problem. Prove that $\lim_{n\to \infty} (n+1)I_n = \frac 12$ where $I_n = \int_{0}^{1} \frac {x^n}{x+1}dx$.
Hint: $$(n+1)\int_0^1 \frac{x^n}{1+x}\, dx - \frac{1}{2} = (n+1)\int_0^1 x^n\left ( \frac{1}{1+x} - \frac{1}{2}\right )\,dx.$$
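To see the hint's target value emerge numerically, here is a short check (my addition) using mpmath's adaptive quadrature:

```python
from mpmath import mp, quad

mp.dps = 30
for n in (10, 100, 1000):
    I = quad(lambda x: x**n / (1 + x), [0, 1])
    print(n, (n + 1) * I)  # tends to 0.5
```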
How to evaluate the angles next to the median in a triangle? Suppose that $ABC$ is a triangle with a median $AM$. Then we can look at these angles: $\angle ABC, \angle ACB, \angle BAM$ and $\angle CAM$. Is there any "nice relation" between them, so that I am able to express two of them in terms of the others? I have tried the sine rule and Stewart's theorem for medians (and some other trigonometry theorems), but these techniques allow me to express only a jungle of trigonometric functions of these angles. Can anyone find anything better?
$BA\cdot AM\cdot\sin\widehat{BAM} = AM\cdot AC\cdot\sin\widehat{CAM}$ (a median splits a triangle into two parts with the same area) hence $$\frac{\sin\widehat{BAM}}{\sin\widehat{CAM}}=\frac{b}{c}=\frac{\sin\widehat{ABC}}{\sin\widehat{ACB}},$$ where the last equality is the law of sines.
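A numerical spot check on a random triangle (my addition; the vertex coordinates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 2)) * 10   # random triangle vertices in the plane
M = (B + C) / 2                     # midpoint of BC

def angle(P, Q, R):
    """Angle at vertex Q in triangle PQR."""
    u, v = P - Q, R - Q
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

b, c = np.linalg.norm(C - A), np.linalg.norm(B - A)  # side lengths AC and AB
print(np.sin(angle(B, A, M)) / np.sin(angle(C, A, M)), b / c)  # should agree
```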
Find the supremum of $(0,1)\setminus \mathbb{Q}$. My intuition tells me that $ A = (0,1)\setminus\mathbb{Q}$ has no supremum. But I'm not sure if the following arguments are sufficient or if my intuition is misguided... We have that 1 is an upper bound of A, since: $$1 > a \quad \forall a \in A$$ then because $\mathbb{Q}$ is dense in $\mathbb{R}$, there exists a $q$ in $\mathbb{Q}$ such that: $$a < q < 1 \quad \forall a \in A$$ hence we can find an $r$ in $\mathbb{Q}$ such that $$a < r < q \quad \forall a \in A$$ and the process repeats indefinitely, hence we will never find a least upper bound for the set. I assume I can use something similar to prove that $A$ has no infimum... Any help is appreciated!
I suspect your problem is this: you think that the supremum of a set $S$ must belong to $S$. This is not true! If you keep that in mind, I think you will easily see what the supremum must be. By the way, the supremum is different from the maximum: the maximum of a set $S$ does have to belong to $S$. So not every set that is bounded above has a maximum, although every non-empty set that is bounded above has a supremum.
gambler's ruin with unequal bet Like all existing gambler's ruin problems, assume P(i) represents the probability that the gambler wins N dollars given that his current wealth is i dollars (he has i dollars at the moment). In most of the existing solutions, it is assumed that the gambler bets 1 dollar. That means, if the gambler wins the current step, his state is changed to (i+1), and if he loses, the state is changed to (i-1). In this case, P(i) is calculated using the iterative equation below: P(i+1) - P(i) = (q/p)(P(i) - P(i-1)) where p is the probability of winning each single game and q = 1-p. Moreover, P(0) = 0 and P(N) = 1. Here I have a slightly different assumption. In a single gamble, assume the gambler wins 2 dollars with probability p and loses 1 dollar with probability q = 1-p (I just changed the bet in case he wins). In this special case, if the gambler wins, then the current state i is changed to i+2. If the gambler loses, the current state is changed to i-1. Does anybody know how to calculate P(i) in such a special case? I have tried the analogous iterative equation: P(i+2) - P(i) = (q/p)(P(i) - P(i-1)) But I cannot reach a general solution for P(i). The result through the iterative method is quite complicated: using the iterative solution, you calculate P(i) from P(i-1); then, since P(0) = 0, P(1) is calculated easily and the other P(i) follow. But I could not find a general equation to calculate P(i) directly. Please note that P(N+1) = P(N) = 1 and P(0) = 0.
The problem is simply an ordinary homogeneous linear recurrence $$ P(i+2) = \frac{1}{p} P(i) - \frac{q}{p} P(i-1)$$ which can be solved with standard methods. Every solution to this will be a linear combination of the three basis solutions of the form $$ P(n) = r^n $$ which we can plug in to the recurrence $$ r^{i+2} = \frac{1}{p} r^i - \frac{q}{p} r^{i-1} $$ $$ r^3 - \frac{1}{p} r + \frac{q}{p} = 0 $$ and solve for the three possible values for $r$. (If this equation has repeated roots, then there will be basis solutions of different forms) Note that $r=1$ is one of the three solutions, so after dividing out $r-1$ you only need to use the quadratic formula to get the other roots. Wolfram Alpha gives the general form. (I was only able to figure out how to specify the boundary condition at zero) Incidentally, you could alternatively solve this by repeating the trick you used. Writing $D(n) = P(n) - P(n-1)$, the recurrence you wrote becomes $$ D(n+2) + D(n+1) = \frac{q}{p} D(n) $$ Then, writing $E(n) = D(n) - u D(n-1)$ for an unspecified $u$, we get $$ E(n+2) + u D(n+1) + E(n+1) + u D(n) = \frac{q}{p} D(n) $$ $$ E(n+2) + u E(n+1) + u^2 D(n) + E(n+1) + u D(n) = \frac{q}{p} D(n) $$ so if we pick $u$ such that $u^2 + u = \frac{q}{p}$, the equation becomes $$ E(n+2) + (u+1) E(n+1) = 0 $$ whose general form is easily solved for.
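If an explicit closed form is not required, the boundary-value problem can also be solved directly as a linear system. The sketch below is my addition; it uses the boundary convention $P(0)=0$, $P(N)=P(N+1)=1$ stated in the question, and the values of $p$ and $N$ are arbitrary:

```python
import numpy as np

p, q, N = 0.4, 0.6, 20
M = np.zeros((N + 2, N + 2))
rhs = np.zeros(N + 2)
M[0, 0] = 1.0                        # P(0) = 0
M[N, N] = M[N + 1, N + 1] = 1.0      # P(N) = P(N+1) = 1
rhs[N] = rhs[N + 1] = 1.0
for i in range(1, N):                # P(i) = p*P(i+2) + q*P(i-1)
    M[i, i] = 1.0
    M[i, i + 2] = -p                 # win 2 dollars with probability p
    M[i, i - 1] = -q                 # lose 1 dollar with probability q
P = np.linalg.solve(M, rhs)
print(P[:6])                         # P(0), ..., P(5)
```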
Prove matrix derivative identity I found the following identity in the Matrix Cookbook: \begin{align*} \frac{\partial }{\partial X} \text{Trace}\left( X B X ^{\mathrm{T}} \right) = XB^{\mathrm{T}} + XB, \end{align*} where $X\in\mathbb{R}^{n\times m}$ and $B\in\mathbb{R}^{m \times m}$. Any hints on how I prove this?
A solution that avoids coordinates. Define $f : \mathbb{R}^{n\times m} \to \mathbb R$ by $$f(X) = \text{Trace}\left( X B X ^{\mathrm{T}} \right)$$ You have $f=h \circ g$ where $g(X) = X B X ^{\mathrm{T}}$ and $h(Y) = \text{Trace}(Y)$. According to the chain rule you have $$(h \circ g)^\prime (X) = h^\prime(g(X)) \circ g^\prime(X)$$ $h$ is a linear map. So its (Fréchet) derivative is $$h^\prime(Y).V = \text{Trace}(V)$$ In turn, $g$ is a quadratic map (the diagonal of the bilinear map $(X,Y)\mapsto XBY^{\mathrm{T}}$). So we have $$g^\prime(X).U = U B X ^{\mathrm{T}} + X B U ^{\mathrm{T}}$$ Using the chain rule with the above derivatives, you get $$f^\prime(X).U = \text{Trace}(U B X ^{\mathrm{T}} + X B U ^{\mathrm{T}})$$ As $\text{Trace}$ is linear and $\text{Trace}(X)=\text{Trace}(X^{\mathrm{T}})$ for every square matrix, we also have $$f^\prime(X).U = \text{Trace}((X B^{\mathrm{T}} + X B)U ^{\mathrm{T}})$$ which is exactly the result you're looking for.
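A finite-difference check of the identity (my addition; the dimensions and the random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
X = rng.standard_normal((n, m))
B = rng.standard_normal((m, m))

f = lambda Y: np.trace(Y @ B @ Y.T)
grad = X @ B.T + X @ B               # claimed gradient

eps, num = 1e-6, np.zeros_like(X)
for i in range(n):
    for j in range(m):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)  # central difference
print(np.max(np.abs(num - grad)))    # tiny, on the order of 1e-9
```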
Put identical coins into identical boxes In how many ways can I put 200 (identical) coins into 3 identical boxes? I started to count, and I saw that the possibilities go this way: $200, 0, 0$ $199, 1, 0$ $198, 2, 0$ $198, 1, 1$ $197, 3, 0$ $197, 2, 1$ etc., which means I can count it this way: $2 \cdot 1 + 2 \cdot 2 + 2 \cdot 3$ etc. I'm wondering if there is a better way to count all the possibilities.
See OEIS sequence A001399. You are asking for the value of $a_{200}$ where $a_n$ is the number of partitions of the number $n$ into at most $3$ parts, or equivalently, the number of partitions of $n$ into parts of size at most $3$, that is, the number of ways we can write $n$ as a sum of $1$s, $2$s, and $3$s without regard to order. Let $A_{n,i}$ be the set of all partitions of $n$ into parts of size at most $3$ with at least one part equal to $i$. By the in-and-out formula, for $n\gt0$ we have $$a_n=|A_{n,1}\cup A_{n,2}\cup A_{n,3}|$$ $$=|A_{n,1}|+|A_{n,2}|+|A_{n,3}|-|A_{n,1}\cap A_{n,2}|-|A_{n,1}\cap A_{n,3}|-|A_{n,2}\cap A_{n,3}|+|A_{n,1}\cap A_{n,2}\cap A_{n,3}|$$ $$=a_{n-1}+a_{n-2}+a_{n-3}-a_{n-3}-a_{n-4}-a_{n-5}+a_{n-6}$$$$=a_{n-1}+a_{n-2}-a_{n-4}-a_{n-5}+a_{n-6}.$$ Now, the homogeneous linear recurrence $$a_n=a_{n-1}+a_{n-2}-a_{n-4}-a_{n-5}+a_{n-6}$$ has the characteristic polynomial $$t^6-t^5-t^4+t^2+t-1=(t-1)^3(t+1)(t^2+t+1)$$ with roots $$1,\ 1,\ 1,\ -1,\ e^{2\pi i/3},\ e^{-2\pi i/3};$$ so the general solution is $$a_n=An^2+Bn+C+D(-1)^n+E\cos\frac{2n\pi}3+F\sin\frac{2n\pi}3$$ where $A,B,C,D,E,F$ are arbitrary constants. We use the initial values $a_0=1$ and $a_n=n$ for $n=1,2,3,4,5$ (or $a_n=0$ for $n=-1,-2,-3,-4,-5$) to evaluate the constants and get the particular solution $$a_n=\frac{6n^2+36n+47+9(-1)^n+16\cos\frac{2n\pi}3}{72}.$$ Finally, $$a_{200}=\frac{240000+7200+47+9-8}{72}=\boxed{3434}.$$ By the way, since $$a_n=\frac{(n+2)(n+4)}{12}+\frac{-1+9(-1)^n+16\cos\frac{2n\pi}3}{72}$$ and since $$\left|\frac{-1+9(-1)^n+16\cos\frac{2n\pi}3}{72}\right|\lt\frac12,$$ it follows that $a_n$ is the nearest integer to $\frac{(n+2)(n+4)}{12}$. For $n=200$ we have $$\frac{(n+2)(n+4)}{12}=\frac{202\cdot204}{12}=3434=a_{200}.$$
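A brute-force confirmation of both the count and the nearest-integer formula (my addition):

```python
n = 200

# partitions n = p1 + p2 + p3 with p1 >= p2 >= p3 >= 0
count = sum(1 for p3 in range(n // 3 + 1)
              for p2 in range(p3, (n - p3) // 2 + 1))
print(count)                          # 3434

print(round((n + 2) * (n + 4) / 12))  # 3434 as well
```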
Get a good approximation of $\int_0^1 \left(H_x\right)^2 dx$, where $H_x$ is the generalized harmonic number The code integrate (H_x)^2 dx, from x=0 to x=1 in the Wolfram Alpha online calculator, where as you see $H_x$ is a generalized harmonic number, tells us that $$\int_0^1 \left(H_x\right)^2 dx\approx 0.413172.$$ I'm curious about the following. Question. How can one calculate, with analysis or numerical analysis, an approximation of $$\int_0^1 \left(H_x\right)^2 dx?$$ You are welcome to use your knowledge of harmonic numbers; or, if your approach uses numerical analysis, tell us what your numerical method is and how it works. Many thanks.
This is an interesting question that can be tackled in many ways; there is a good chance a nice piece of math will come out of it. For now, I will just keep collecting and rearranging observations, till reaching a complete answer. We have $H_x=\gamma+\psi(x+1)$ and $\int_{0}^{1}\psi(x+1)\,dx = \log\frac{\Gamma(2)}{\Gamma(1)}=0$, hence our integral equals $\gamma^2+\int_{0}^{1}\psi(x+1)^2\,dx$. The function $\psi(x+1)^2$ is positive and convex on $(0,1)$ and values of the $\psi$ function at rational points in $(0,1)$ can be computed in an explicit way through Gauss' Digamma Theorem, hence the numerical evaluation of the given integral is pretty simple through Simpson's rule or similar approaches. In a right neighbourhood of the origin we have $$ H_x = \zeta(2)x-\zeta(3)x^2+\zeta(4)x^3-\zeta(5)x^4+\ldots\tag{1} $$ hence $$ \int_{0}^{1}H_x^2\,dx = \sum_{m,n\geq 2}\frac{(-1)^{m+n}}{m+n-1}\zeta(m)\zeta(n) = \sum_{j\geq 3}\frac{(-1)^{j+1}}{j}\sum_{k=2}^{j-1}\zeta(k)\,\zeta(j+1-k) \tag{2}$$ where we may recall Euler's theorem about $\sum_{n\geq 1}\frac{H_n}{n^q}$: $$ \sum_{k=2}^{j-1}\zeta(k)\,\zeta(j+1-k) = (2+j)\,\zeta(j+1)-2\sum_{n\geq 1}\frac{H_n}{n^j}=j\,\zeta(j+1)-2\sum_{n\geq 1}\frac{H_{n-1}}{n^j}. \tag{3}$$ This approach should allow us to convert the original integral into a simple series, since $$ \sum_{j\geq 3}(-1)^{j+1}\zeta(j+1) \stackrel{\text{Abel reg.}}{=} 1-\zeta(2)+\zeta(3).$$ In particular, the problem boils down to the approximation/evaluation of the following series: $$ \sum_{n\geq 1}\left[\frac{1-2n}{2n^2}+\log\left(1+\frac{1}{n}\right)\right]H_{n-1} \tag{4}$$ whose general term behaves like $\frac{\log n}{n^3}$, leading to pretty fast convergence. If we apply summation by parts, we get a general term that is simpler but with a slower decay towards zero: $$ \begin{eqnarray*}(4)&=&\lim_{N\to +\infty}\left[\left(-\gamma+\frac{\pi^2}{12}\right)H_{N-1}-\sum_{n=1}^{N-1}\frac{\frac{1}{2}H_n^{(2)}-H_n+\log(n+1)}{n}\right]\\&=&\frac{1}{2}\zeta(3)+\sum_{n\geq 1}\frac{H_n-\log(n+1)-\gamma}{n}\tag{5} \end{eqnarray*}$$ Now we may employ the asymptotic series for harmonic numbers in order to write $(5)$ in terms of Bernoulli numbers, values of the Riemann $\zeta$ function and the series $$ \sum_{n\geq 1}\frac{\log(n+1)-\log(n)}{n}\stackrel{SBP}{=}\sum_{n\geq 1}\frac{\log(n+1)}{n(n+1)}=\int_{0}^{1}\frac{(1-x)\log(1-x)}{x\log x}\,dx \approx 1.25775 \tag{6}$$ that can be re-written in terms of Gregory coefficients or just as $\sum_{m\geq 1}\frac{(-1)^{m+1}\zeta(m+1)}{m}$. (Continues)
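For the purely numerical side, the representation $H_x=\gamma+\psi(x+1)$ already gives a one-liner; this check is my addition:

```python
from mpmath import mp, quad, digamma, euler

mp.dps = 20
print(quad(lambda x: (euler + digamma(x + 1))**2, [0, 1]))  # 0.41317...
```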
Confusion about the classification of groups of order $8$ The article "Classification of Groups of Order $n \le 8$" proves that there are only 5 different groups of order 8 up to isomorphism. In this proof (especially on pages 3--4), it checks the multiplication $ba$ over and over again. I can follow the proof. However, I don't quite understand why $ba$ is so important and sufficient for determining a group (or its multiplication table) in this proof.
So let's retrace a piece of the proof, namely when $a\in G$ is of order $4$. You take $b\in G$ which is not in $H:=\langle a\rangle$ so $H\neq Hb$. And since cosets are disjoint then $$G=\{e, a, a^2, a^3, b, ab, a^2b, a^3b\}$$ So this presentation already tells you the following: * *You know how to multiply $a^k$ and $a^m$, the product is $a^{k+m}$ *You know how to multiply $a^k$ and $a^mb$, the product is $a^{k+m}b$ In other words you have just filled a piece of the multiplication table. All that you need to know is how to multiply $a^mb$ by $a^k$ and how to multiply $a^mb$ by $a^kb$ (reversed order of operands). But note that if you can express $ba$ as an element of the form $a^k$ or $a^kb$ (just like the author does) then you will be able to do appropriate reductions for almost all (I will get back to one special case) products you are looking for. Note that you always can express $ba$ like this, however not every possibility is valid. Now $ba$ might not determine the group uniquely. The problem is that you still need to know what $b^2$ is (the only unknown product that involves neither $ab$ nor $ba$). But if you do know that then you know everything you need. For example, if $ba=ab$ then $$(a^2b)(a^3)=a^2baa^2=a^2aba^2=a^3ba^2=a^4ba=a^5b=ab$$ since $a$ is of order $4$. So you now know what element of $G$ is $(a^2b)a^3$. Similarly the entire multiplication table can be filled simply by knowing what $ba$ is. Another example, let $ba=a^2b$. Then $$(a^2b)(a)=a^2ba=a^2a^2b=a^4b=b$$ Some combinations might be harder to calculate (e.g. $abab$), you need to assume something additional about $b$ (like for example the author considers the case when $b^2=a^2$). So all in all $ba$ and $b^2$ uniquely determine $G$ in the case when $a$ is of order $4$ and $b\notin\langle a\rangle$. I know that my explanation is not formally perfect but I hope you get the intuition.
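To see concretely how $ba$ and $b^2$ fill the table, here is a small sketch (my addition) for one particular case, $b^2=e$ and $ba=a^3b$ (which yields the dihedral group of order $8$); elements are encoded as pairs $(i,j)$ standing for $a^ib^j$:

```python
def mult(x, y):
    (i1, j1), (i2, j2) = x, y
    # moving a^(i2) past b^(j1) uses b a = a^(-1) b, i.e. b a^k = a^(-k) b
    i = (i1 + (-1) ** j1 * i2) % 4
    j = (j1 + j2) % 2
    return (i, j)

elems = [(i, j) for j in range(2) for i in range(4)]
for x in elems:
    print([mult(x, y) for y in elems])   # the full multiplication table
```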
Commutators and scalar product. I have trouble understanding a fairly simple equality: $\langle x, [A,B]x\rangle = \langle Ax, Bx \rangle + \langle Bx, Ax \rangle$ where $x$ is a vector in $\mathbb{R}^n$ and $[A,B]$ is symmetric with $A,B \in M_{nn}(\mathbb{R}) $ and $A^T = A, B = -B^T$. I can only decompose the expression in terms of $ABx$ and $BAx$ but I don't know how to obtain $Ax$ and $Bx$ terms.
This formula is not correct without the constraints: If we take $$A = B = \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$$ we have $[A,B] = 0$, but the proposed formula would give, for all $x \in \mathbb{R}^2$, $$0 = \langle x, 0\rangle = 2 \langle x, x \rangle = 2\| x \|^2$$ which is only possible if $x = 0$. This gives a counterexample. Moreover, note that $[A,B] = AB - BA$, so you would need at least a minus in your right hand side (this would make the previous counterexample work, since we would find that $\langle x, 0 \rangle = \langle x, x \rangle - \langle x, x \rangle = 0$). $\textbf{EDIT}$: With the edited question (so adding the constraints that $A = A^T$ and $B = - B^T$), we find the following: \begin{align} \langle x, [A,B]x \rangle &= x^T (AB - BA)x\\ &= x^TABx - x^TBAx\\ &= (A^Tx)^T(Bx) - (B^Tx)^T(Ax)\\ &= (Ax)^T(Bx) + (Bx)^T(Ax)\\ &= \langle Ax, Bx \rangle + \langle Bx, Ax \rangle \end{align} where we used in equation two the linearity of the inner product, in equation three we used that $(AB)^T = B^TA^T$ and in the fourth equation, we used the assumption on $A$ and $B$ and $(\lambda B)^T = \lambda B^T$ for any scalar $\lambda$. Note that you do not need to assume that $[A,B]$ is symmetric, since this immediately follows from your assumptions on $A,B$ (convince yourself) :)
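A quick numerical confirmation of the corrected identity (my addition; dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
S = rng.standard_normal((n, n)); A = (S + S.T) / 2  # A^T =  A
R = rng.standard_normal((n, n)); B = (R - R.T) / 2  # B^T = -B
x = rng.standard_normal(n)

lhs = x @ (A @ B - B @ A) @ x
rhs = (A @ x) @ (B @ x) + (B @ x) @ (A @ x)
print(lhs, rhs)  # equal up to rounding error
```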
Some "converse" of mean value theorem The mean value theorem states that if $f:\Bbb R \to \Bbb R$ is continuous over $[x_0,x]$, and differentiable over $(x_0,x)$, then there exists $x^*\in(x_0,x)$ s.t. $f'({x^*}) = \frac{{f(x) - f({x_0})}}{{x - {x_0}}}$. Now suppose $f$ is differentiable over $[x_0,+\infty)$, for any $x^*>x_0$, can we always choose some $x>x^*$ s.t. $f'({x^*}) = \frac{{f(x) - f({x_0})}}{{x - {x_0}}}$?
Counterexample: For $f(x) = x^3$ we have $f'(0) = 0$, but for all $x_0 < x$ in $[-1,1]$ we have $$\frac{f(x) - f(x_0)}{x - x_0} = \frac{x^3 - x_0^3}{x-x_0} = x^2 + xx_0 + x_0^2 > 0.$$ In general, you need more restrictions for the converse to hold.
Proof that $a_n < b_n \implies \lim(a_n) \le \lim(b_n)$ First of all, I'm aware that there are many questions like this on the site, but they all seem to be related to either $\limsup$ or $\liminf$ and I couldn't find anything that would help me with my problem. I've done some Googling and found some great resources, but I'm still not quite sure how to get to some steps and would like your assistance. The problem is as follows: Given $a_n < b_n$ prove that $\lim_{n\to \infty}(a_n) \le \lim_{n\to\infty}(b_n)$. The proof is then done by contradiction, assuming that $a = \lim_{n\to \infty}(a_n) > b =\lim_{n\to\infty}(b_n)$. We take an $\epsilon = \frac{a-b} 2$, so that the $\epsilon$-neighborhoods of $a$ and $b$ are disjoint. From the definition of limits, we now know that there is such a $N$, so that $\forall n > N : |a_n-a|<\frac\epsilon2$ and $|b_n-b|<\frac\epsilon2$. The next step is absolutely always confusing. Two variants I've found are either: $a_n>a-\epsilon=a-\left(\frac{a-b} 2\right)=b+\left(\frac{a-b} 2\right)=b+\epsilon>b_n$ In which I do not understand why any two terms of that (in)equality are like that, or it is said that if $a > b$, there must be such an $\epsilon$ so that $a - \epsilon > b + \epsilon$. Then, $a - \epsilon > b + \epsilon > b_n, a_n > a - \epsilon > b + \epsilon > b_n$, which contradicts $a_n \le b_n$. Here I simply cannot comprehend how we came to the conclusion that $a_n > a - \epsilon$ and $b + \epsilon > b_n$. The definition of the limit uses absolute values everywhere, so surely the values depends on the signs of $a_n$ and $a$, and $b_n$ and $b$. Please help me understand what is it that I'm missing here. Thanks in advance!
$\newcommand{\eps}{\varepsilon}$If $\eps > 0$ is real, then for every real number $x$, $$ \text{$|x| < \eps$ if and only if $-\eps < x < \eps$.} $$ Particularly, \begin{align*} |a_{n} - a| < \eps &\quad \text{if and only if}\quad -\eps < a_{n} - a < \eps, \\ &\quad \text{if and only if}\quad a - \eps < a_{n} < a + \eps. \end{align*} Second, if $a$ and $b$ are real numbers such that $b < a$, then $b < \frac{1}{2}(b + a) < a$ (the mean/midpoint lies between). A bit of algebra shows that if $\eps = \frac{1}{2}(a - b)$, then $\eps > 0$, and $$ b + \eps = \tfrac{1}{2}(b + a) = a - \eps. $$ In each case, it's a pleasant and instructive exercise to sketch a number line and see the obvious geometric fact these inequalities express.
Prove that $(ab+ac+bc)\sum\limits_{cyc}\frac{1}{(a-7b)^2}\geq\frac{1}{4}$ Let $a$, $b$ and $c$ be non-negative numbers such that $\prod\limits_{cyc}(a-7b)\neq0$. Prove that: $$(ab+ac+bc)\left(\frac{1}{(a-7b)^2}+\frac{1}{(b-7c)^2}+\frac{1}{(c-7a)^2}\right)\geq\frac{1}{4}$$ I think this inequality is very interesting because it's similar to the known Ji Chen's inequality (Iran 1996): $$(ab+ac+bc)\left(\frac{1}{(a+b)^2}+\frac{1}{(b+c)^2}+\frac{1}{(c+a)^2}\right)\geq\frac{9}{4}$$ An example of my attempts: BW (the Buffalo Way) does not help. Let $a=\min\{a,b,c\}$, $b=a+u$ and $c=a+v$. Hence, we need to prove that: $$44064(u^2-uv+v^2)a^4+864(38u^3+17u^2v+73uv^2+38v^3)a^3-$$ $$-24(217u^4-1478u^3v-2157u^2v^2-3494uv^3+217v^4)a^2+$$ $$+4(98u^5-1631u^4v+3938u^3v^2+15698u^2v^3-1463uv^4+98v^5)a+$$ $$+uv(196u^4-2793u^3v+10490u^2v^2-2457uv^3+196v^4)\geq0,$$ which gets me nowhere. Thank you!
Hints: 1) Put $a=A$, $b=AB$, $c=AC$. With this substitution you can eliminate a variable (the expression is homogeneous of degree $0$). We obtain $$(B+C+BC)(\frac{1}{(1-7B)^2}+\frac{1}{(B-7C)^2}+\frac{1}{(C-7)^2})$$ 2) Try to prove this: $(x+\alpha+\frac{1}{x+\beta}+\frac{x+\alpha}{x+\beta})(\frac{1}{(1-7(x+\alpha))^2}+\frac{1}{((x+\alpha)-7\frac{1}{x+\beta})^2}+\frac{1}{(\frac{1}{x+\beta}-7)^2})$ $\geq (x+\alpha+\frac{1}{x+\alpha}+1)(\frac{1}{(1-7(x+\alpha))^2}+\frac{1}{((x+\alpha)-7\frac{1}{x+\alpha})^2}+\frac{1}{(\frac{1}{x+\alpha}-7)^2}) $ with $0\leq \beta\leq \alpha \leq x$. For the last inequality, it's easy to see that $$(x+\alpha+\frac{1}{x+\alpha}+1)(\frac{1}{(1-7(x+\alpha))^2}+\frac{1}{((x+\alpha)-7\frac{1}{x+\alpha})^2}+\frac{1}{(\frac{1}{x+\alpha}-7)^2})$$ is a translation of the function $$(x+\frac{1}{x}+1)(\frac{1}{(1-7x)^2}+\frac{1}{(x-7\frac{1}{x})^2}+\frac{1}{(\frac{1}{x}-7)^2})$$ and the minimum of this last function is $0.25$.
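To probe the final claim numerically, here is an exploratory scan of that one-variable function (my addition; the grid and range are arbitrary choices, and no grid point lands exactly on the poles at $x=1/7$ and $x=\sqrt7$):

```python
import numpy as np

x = np.linspace(0.01, 10.0, 200_001)
g = (x + 1/x + 1) * (1/(1 - 7*x)**2 + 1/(x - 7/x)**2 + 1/(1/x - 7)**2)
i = np.argmin(g)
print(x[i], g[i])  # minimum close to 0.25, attained near x = 1 (i.e. a = b = c)
```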
Confused about proof of Leibniz Integral Rule If we set $G(x) = \int_{0}^{x} f(x,y) dy$, then \begin{align} \frac{G(x+d)-G(x)}d =& \frac{\int_{0}^{x+d} f(x+d,y)dy - \int_{0}^{x} f(x,y)dy}d \\ =& \frac{\int_{0}^{x}f(x+d,y)dy+\int_{x}^{x+d}f(x+d,y)dy - \int_{0}^{x}f(x,y)dy}d \end{align} By grouping, $$\int_{0}^{x}\frac{f(x+d,y)-f(x,y)}ddy + \frac1d\int_{x}^{x+d}f(x+d,y)dy,$$ which leads to: $$\int_{0}^{x}f'(x,y)dy + \frac{\int_{0}^{x+d}f(x+d,y)dy-\int_{0}^{x}f(x+d,y)dy}d.$$ Why is the second term not equal to $f(x+d,x)$? I know it isn't, I'm just trying to see where I'm going wrong in the proof.
I think the main sticking point is understanding why $$\lim_{d\to 0}\frac{\int_{x}^{x+d}f(x+d,y)dy}{d}=f(x,x).$$ To prove this note that $$d\inf_{y\in[x,x+d]} f(x+d,y)\leq\int_{x}^{x+d}f(x+d,y)dy\leq d\sup_{y\in[x,x+d]} f(x+d,y),$$ assuming that $f$ is continuous on a compact set. Now simply divide through by $d$ and then take the limit as $d\to 0$. The squeeze theorem does the rest.
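A numerical check of this limit for a sample smooth $f$ (my addition; the particular $f$ and the point $x$ are arbitrary):

```python
import math
from scipy.integrate import quad

f = lambda u, y: math.sin(u) * math.exp(y)
x = 0.7
for d in (1e-1, 1e-2, 1e-3, 1e-4):
    I, _ = quad(lambda y: f(x + d, y), x, x + d)
    print(d, I / d)          # tends to f(x, x)
print("f(x, x) =", f(x, x))
```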
Thorpe 15.4 Differential Geometry (Parameterized Surfaces) I was going through practice problems in my textbook and I came across this And promptly managed to get stuck for a few hours. I don't entirely understand the question. To the best of my understanding, the surface produced by a parameterized n-surface IS the image of some subset of its domain that it's one to one on. Based off of that, the question seems like an oxymoron? I'm not entirely sure how to proceed or even begin for that matter. Thanks in advance. Edit: Forgot to post theorem 2
I asked my professor. The example he gave was $x \mapsto (x^2, x^3)$, since the normal doesn't exist at $x = 0$.