Finding the angle between two 3 dimensional vectors This should be a fairly simple problem, but I've gotten it wrong a dozen times. I have two vectors, vector $\vec{N}$ in the x-z plane with a slope of $\frac{1}{8}$, and $\vec{E}$ in the y-z plane with a slope of $\frac{1}{4}$. The angle between two vectors is defined by: $$\theta=\arccos(\frac{\vec{N}\cdot\vec{E}}{\vert{N}\vert\vert{E}\vert})$$ I start by finding the component form of each vector for the $i,j,$ and $k$ directions. $$\vec{N}=\langle{i,0,k_N}\rangle$$ since the slope is $\frac{1}{8}$, $k_N=\frac{i}{8}$, $$\vec{N}=\langle{i,0,\frac{i}{8}}\rangle$$ Same goes for $k_E$ of $\vec{E}$ with the slope of $\frac{1}{4}$. $$\vec{E}=\langle{0,j,\frac{j}{4}}\rangle$$ The dot product of the two is simple because the first two products are zero. $$\vec{N}\cdot\vec{E}=\frac{ij}{32}$$ Now I look for the magnitude of each vector. $$\vert{N}\vert=\sqrt{i^2+0^2+(\frac{i}{8})^2}$$ $$=i\frac{\sqrt{65}}{8}$$ Then the $\vec{E}$ vector comes out to $$j\frac{\sqrt{17}}{4}$$ so $$\vert{N}\vert\vert{E}\vert=\frac{ji\sqrt{1105}}{32}$$ This works really well for finding $\frac{\vec{N}\cdot\vec{E}}{\vert{N}\vert\vert{E}\vert}$ because $32, i,$ and $j$ cancel out. $$\frac{\vec{N}\cdot\vec{E}}{\vert{N}\vert\vert{E}\vert}=\frac{1}{\sqrt{1105}}$$ Finally, I solve for $\arccos(\frac{\vec{N}\cdot\vec{E}}{\vert{N}\vert\vert{E}\vert})$ to get $\theta$. $$\theta=\arccos(\frac{1}{\sqrt{1105}})$$ $$\approx{1.54}$$ in radians. This is wrong. The answer is $\approx{1.6}$ radians. Where did I go wrong? EDIT: corrected the planes, this won't have an effect on the answer
So $\vec{N}\neq \langle i,0,i/8\rangle$, likewise $\vec{E} \neq \langle 0,j,j/4 \rangle$. Try $\vec{N}= \langle 0,j,k/8\rangle$ and $\vec{E}=\langle i,0,k/4 \rangle$.
Can numbers in a sequence be equal after performing some operations I have a sequence of 21 numbers $(-10, -9, -8, \dots, -1, 0, 1, 2, \dots, 10)$. I can take any pair $(a, b)$ from the sequence and replace it with the pair $\left(\frac{3a - 4b}{5}, \frac{4a + 3b}{5}\right)$. Is it possible that all numbers in the sequence will be equal after performing some operations?
Note that after any replacement, all terms of the sequence are still rational numbers. Suppose in a given step, you replace the pair $\small{\displaystyle{(a, b)}}$ by the pair $\small{\displaystyle{\left(\frac{3a - 4b}{5}, \frac{4a + 3b}{5}\right)}}$. Observing that \begin{align*} \left(\frac{3a - 4b}{5}\right)^2 + \left(\frac{4a + 3b}{5}\right)^2 &= \frac{9a^2 - 24ab + 16b^2}{25} + \frac{16a^2 + 24ab + 9b^2}{25}\\[6pt] &=\frac{25a^2+25b^2}{25}\\[6pt] &=a^2 + b^2 \end{align*} it follows that, after any replacement, the sum of the squares of the terms remains the same. Suppose that after some number of replacements, the terms are all equal to $x$ say. Since the sum of the squares must be equal to the original sum of the squares, we get \begin{align*} &x^2 + x^2 + x^2 + \cdots + x^2 = (-10)^2 + (-9)^2 + (-8)^2 + \cdots + 8^2 + 9^2 + {10}^2\\[6pt] \implies\; &21x^2=2(1^2+2^2+3^2 + \cdots + 10^2)\\[6pt] \implies\; &21x^2=2\left(\frac{(10)(10+1)(2(10)+1)}{6}\right)\\[6pt] \implies\; &21x^2=(10)(11)(7)\\[6pt] \implies\; &x^2 = {\small{\frac{110}{3}}}\\[6pt] \end{align*} contradiction, since $\large{\frac{110}{3}}$ is not the square of a rational number. It follows that it's not possible, after some number of replacements, to get all equal terms.
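For readers who want to see the invariant in action, here is a minimal Python sketch (exact rational arithmetic via the standard fractions module is an assumption of this check, not part of the argument) that applies random replacements, confirms the sum of squares never changes, and shows the would-be common value squared would have to be $110/3$:

    import random
    from fractions import Fraction

    seq = [Fraction(k) for k in range(-10, 11)]     # -10, -9, ..., 10
    invariant = sum(x * x for x in seq)             # sum of squares: should never change

    for _ in range(1000):
        i, j = random.sample(range(len(seq)), 2)    # pick a pair (a, b)
        a, b = seq[i], seq[j]
        seq[i], seq[j] = (3 * a - 4 * b) / 5, (4 * a + 3 * b) / 5
        assert sum(x * x for x in seq) == invariant

    print(invariant, invariant / 21)                # 770 and 110/3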
An integral of rational function with third power of cosine hyperbolic function Prove $$\int_{-\infty}^{\infty}\frac{1}{(5 \pi^2 + 8 \pi x + 16x^2) }\frac{\cosh\left(x+\frac{\pi}{4} \right)}{\cosh^3(x)}dx = \frac{2}{\pi^3}\left(\pi \cosh\left(\frac{\pi}{4} \right)-4\sinh\left( \frac{\pi}{4}\right) \right)$$ Attempt Note that $$\cosh\left( x+\frac{\pi}{4}\right) = \cosh(x)\cosh\left(\frac{\pi}{4} \right)+\sinh(x)\sinh\left( \frac{\pi}{4}\right)$$ Then the integral could be rewritten as $$I = \cosh\left(\frac{\pi}{4} \right)\int_{-\infty}^{\infty}\frac{\mathrm{sech} ^2(x)}{(5 \pi^2 + 8 \pi x + 16x^2) }dx\\+\sinh\left(\frac{\pi}{4} \right)\int_{-\infty}^{\infty}\frac{\sinh(x)}{(5 \pi^2 + 8 \pi x + 16x^2) \cosh(x)^3}dx$$ You can then integrate by part the second integral $$\int^{\infty}_{-\infty}\left[\frac{\cosh\left(\frac{\pi}{4} \right)}{(5 \pi^2 + 8 \pi x + 16x^2)}-\frac{ 4\sinh\left( \frac{\pi}{4}\right)(\pi+ 4 x)}{(5 \pi^2 + 8 \pi x + 16 x^2)^2}\right]\mathrm{sech}^2(x)\,dx $$ Integrating again $$I=-\int^{\infty}_{-\infty}\left[\frac{(8 (4 x + \pi) (32 x + 8 \pi) \sinh(\pi/4))}{(16 x^2 + 8 \pi x + 5 \pi^2)^3} - \frac{(16 \sinh(\pi/4)}{(16 x^2 + 8 \pi x + 5 \pi^2)^2}\\ - \frac{((32 x + 8 \pi) \cosh(\pi/4)}{(16 x^2 + 8 \pi x + 5 \pi^2)^2} \right]\tanh(x)\,dx$$ Note that $$\tanh(x) = 8 \sum_{k=1}^\infty \frac{x}{(1 - 2 k)^2 \pi^2 + 4 x^2}$$ Consider $R(x)$ a rational function then $$\int^{\infty}_{-\infty}R(x) \tanh(x) = 8 \sum_{k=1}^\infty \int^{\infty}_{-\infty}R(x)\frac{x}{(1 - 2 k)^2 \pi^2 + 4 x^2} \,dx$$ Any integral of that form could be found (I think) using the residue theorem then the resulting sum can be evaluated using the Digamma function. Question * *Although I think this approach will result in the correct answer I feel that a contour method will be so much easier, any idea ? *Maybe there is an easier method considering the nice closed form ?
$$I=\int_{-\infty}^{\infty}\frac{1}{(5 \pi^2 + 8 \pi x + 16x^2) }\frac{\cosh\left(x+\frac{\pi}{4} \right)}{\cosh^3(x)}dx$$ $$I=\int_{-\infty}^{\infty}\frac{1}{(4x+\pi+2i\pi)(4x+\pi-2i\pi) }\frac{\cosh\left(x+\frac{\pi}{4} \right)}{\cosh^3(x)}dx$$ $$I=\frac{-i}{16\pi}\int_{-\infty}^{\infty}\left(\frac{1}{x+\frac{\pi}{4}-\frac{i\pi}{2} }-\frac{1}{x+\frac{\pi}{4}+\frac{i\pi}{2}}\right)\frac{\cosh\left(x+\frac{\pi}{4} \right)}{\cosh^3(x)}dx$$ Make the substitution $x+\pi/4 \to x$ $$I=\frac{-i}{16\pi}\int_{-\infty}^{\infty}\left(\frac{1}{x-\frac{i\pi}{2} }-\frac{1}{x+\frac{i\pi}{2}}\right)\frac{\cosh\left(x \right)}{\cosh^3(x-\pi/4)}dx$$ We now split into two integrals and investigate each. $$I_1=\int_{-\infty}^{\infty}\frac{1}{x+\frac{i\pi}{2}}\frac{\cosh\left(x \right)}{\cosh^3(x-\pi/4)}dx$$ Let $x+\frac{i\pi}{2} \to x$ $$\color{red}{I_1=\int_{\frac{i\pi}{2}-\infty}^{\frac{i\pi}{2}+\infty}\frac{\sinh(x)}{x\cosh^3(\pi/4 - x)} dx}$$ $$I_2=\int_{-\infty}^{\infty}\frac{1}{x-\frac{i\pi}{2}}\frac{\cosh\left(x \right)}{\cosh^3(x-\pi/4)}dx$$ Let $x-\frac{i\pi}{2} \to x$ $$\color{red}{I_2=\int_{-\frac{i\pi}{2}-\infty}^{-\frac{i\pi}{2}+\infty}\frac{\sinh(x)}{x\cosh^3(\pi/4 - x)} dx}$$ We now need to calculate $I_2 - I_1$ and multiply the result by $\frac{-i}{16\pi}$, but I am stumped. There are clear symmetries in $I_1$ and $I_2$, as only the domain of integration changes; the function inside the integral stays the same. If anyone has any suggestions, I will gladly take them.
Is there a sequence that summed up minus the last entry is equal to 2 times the last entry So basically the title is the question. Is there a sequence that summed up minus the last entry is equal to 2 times last entry. After a long time spend trying I don't even know if this is possible. For example we have the following sequence: $\text{1 2 3 6 12 24 48 ... }$ Which is a sequence that summed up minus the last entry is equal to 1 times the last entry. $\text{(1 + 2 + 3 + 6 + 12 + 24 + 48) - 48 = 48}$ PS: First post, so if I did anything wrong please tell me. (Edit: I am looking for an infinite sequence)
To go a little farther than the comments, your condition is $$2a_n = \sum_{k=0}^{n-1} a_k$$ (unlike the comments, I am calling the first element of the sequence $a_0$). Note that $$\sum_{k=0}^{n-1} a_k = a_{n-1} + \sum_{k=0}^{n-2} a_k$$ and $$2a_{n-1} = \sum_{k=0}^{n-2} a_k$$ So $$2a_n = a_{n-1} + 2a_{n-1} = 3a_{n-1}\\a_n =\left(3\over2\right)a_{n-1}$$ Hence your sequence is $$a_n = \left(3\over 2\right)^na_0$$
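A small Python sketch of this conclusion (building the sequence directly from the stated condition $2a_n=\sum_{k=0}^{n-1}a_k$, with an arbitrary starting value): every ratio $a_n/a_{n-1}$ with $n\ge 2$ comes out as $3/2$, matching the derivation above.

    from fractions import Fraction

    a = [Fraction(1)]                    # arbitrary first entry a_0
    for n in range(1, 12):
        a.append(sum(a) / 2)             # defining condition: 2*a_n = a_0 + ... + a_{n-1}

    print([str(x) for x in a])
    print([str(a[n] / a[n - 1]) for n in range(2, 12)])   # every ratio is 3/2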
Application of the Archimedean Property Prove that if $0<a<b$ where $a,b \in \mathbb{R}$, then there exists some $n \in \mathbb{N}$ such that $\frac{1}{n} < a$ and $b < n$. The question states to use the Archimedean Property: if $a,b \in \mathbb{R}$ where $a < b$, then there exists an $n \in \mathbb{N}$ such that $b <na$. My guess is to begin with the result of the Archimedean Property ($b<na$) and try to manipulate that to arrive at $\frac1n < a$ and $b<n$, but I'm having trouble doing so. Is this the right idea? Any guidance is appreciated!
I'll get you started. Using el_tenedor's comments: Consider $b$ and $1$. By the Archimedean Property, there exists $N\in\mathbb{N}$ such that $b<N$. Now consider $a$ and $1$. By the Archimedean property, there exists $M\in\mathbb{N}$ such that $1<aM$, which implies that $\frac{1}{M}<a$. Now, what happens if you take the maximum of $\{M,N\}$? Here, we've assumed that $a<1$ and $b>1$, by the way. That is also something you'll need to work around.
Matrices: Rotations as products of reflections in $\Bbb R^2$ Ok so I have two matrices: The reflection about the line $y=x$: $$A=\pmatrix{0 & 1 \\ 1 & 0}$$ And the reflection about the line $y=0$: $$B= \pmatrix{ 1 & 0 \\ 0 & -1} $$ I need to show that both $AB$ and $BA$ represent rotations of $\mathbb{R}^2$. I know that they are rotations of 90 degrees in opposite directions, but how can I show this? I don't think giving a few examples is enough. Also, if I need to compute $ABABABAB$ and $BABABABA$, can I just bunch them up as $(AB)(AB)(AB)(AB)$ and $(BA)(BA)(BA)(BA)$?
$$AB = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \ \ \textbf{and}\ \ BA=\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$ Now take any vector $\textbf{x} = (u,v)$ then $AB(u,v) \cdot (u,v) = 0$ and $BA(u,v) \cdot (u,v) = 0$. Hence what can you say about the matrices since the vectors they give are always perpendicular? Once you see what their rotations correspond to, you can easily compute the matrix order.
Recursive sequence convergence.: $s_{n+1}=\frac{1}{2} (s_n+s_{n-1})$ for $n\geq 2$, where $s_1>s_2>0$ The problem is the following: suppose $s_1>s_2>0$ and let $s_{n+1}=\frac{1}{2} (s_n+s_{n-1})$ for $n\geq 2$. Show that ($s_n$) converges. Now, here is what I figured out: * *$s_2<s_4$: Base Case for induction that $s_{2n}$ is an increasing sequence. *Assume $s_{2n-2}<s_{2n}$. *Induction step: $s_{2n}<s_{2n+2}$ *$s_1>s_3$: Base Case for induction that $s_{2n-1}$ is a decreasing sequence. *Assume $s_{2n-1}<s_{2n-3}$. *Induction step: $s_{2n+1}<s_{2n-1}$. I have proved those two. However arguing in favor of convergence has me going around in circles. Since $s_1>s_2$ and (as discovered during the formulation of Base Cases) $s_3>s_4$, I figured it might be a good idea ot assume that if every odd member of the original sequence ($s_n$) is greater than the following even member, then the limit would be somewhere in between, the two (odd and even) sequences won't cross. Hence the upper and lower bounds would be $s_1$ and $s_2$ respectively. Here is how I approach this: * *Assume $s_{2n-1}>s_{2n}$ *Show that $s_{2n+1}>s_{2n+2}$ The proof as I mentioned has me running in circles. Any assistance?
$(s_{2n})$ is increasing and $(s_{2n-1})$ is decreasing and we have $s_2\leqslant s_{2n}<s_{2n+1}<s_{2n-1}\leqslant s_1$ so $(s_{2n})$ converges as well as $(s_{2n-1}),$ say to $a$ and $b,$ respectively. Now since $s_{2n}=\frac{1}{2}(s_{2n-1}+s_{2n-2})$ then $a=\frac{1}{2}(b+a)$ so $a=b.$ Thus, $s_n\to a=b$ as $n\to\infty.$
How to know if there is an extraneous solution in a radical expression I was trying to solve this problem: $$\sqrt{3x+13} = x+ 3$$ So I was pretty confident about this problem and started solving: $$(\sqrt{3x+13})^2 = (x+ 3)^2$$ $$(3x+13) = (x+ 3)^2$$ $$3x+13 = x^2 + 6x + 9$$ $$0 = x^2 + 3x - 4$$ $$0 = (x+4)(x-1)$$ So my final answer was $x = -4$ and $ x = 1 $. However, it was incorrect, because when I plug $-4$ back into the original equation I get an extraneous solution. My question is: do I always need to plug my answers back into a radical expression and check if they are valid? Or is there any other way to deduce that there will be an extraneous solution?
On its domain ($A\ge 0$), note that $$\sqrt A=B\iff A=B^2\enspace\textbf{and}\enspace B\ge 0,$$ since the symbol$\sqrt{\phantom{h}}$ denotes the non-negative square root of a non-negative real number.
Solve $\lim_{x \rightarrow 3} \frac{\ln(\sqrt{x-2})}{x^2-9}$ without using L'Hôpital's rule $\lim_{x \rightarrow 3} \frac{\ln(\sqrt{x-2})}{x^2-9}$ To do this I tried 2 approaches: 1: If $\lim_{x \rightarrow 0} \frac{\ln(x+1)}{x} = 1$, $\lim_{x\to1}\frac{\ln(x)}{x-1}=1$ and $\lim_{x\to0}\frac{\ln(x+1)}x=\lim_{u\to1}\frac{\ln(u)}{u-1}$, then I infer that $\frac{\ln(x)}{y} = 1$ where $x \rightarrow 1$ and $y \rightarrow 0$, then I have: $$\sqrt{3-2} \rightarrow 1 \\ 3^2-9 \rightarrow 0 $$ and so $$\lim_{x \rightarrow 3} \frac{\ln(\sqrt{x-2})}{x^2-9} = 1$$ But this is wrong. So I tried another method: 2: $$\lim_{x \rightarrow 3} \frac{\ln(\sqrt{x-2})}{x^2-9} = \frac{\ln(\sqrt{x-2})}{(x-3)(x+3)} = \lim_{x \rightarrow 3} \frac{\ln(\sqrt{x-2})}{(x-3)} \cdot \lim_{x \rightarrow 3} \frac{1}{x+3} = \frac{1}{6}$$ Which is also wrong. My questions are: What did I do wrong in each method and how do I solve this? EDIT: If $\frac{\ln(x)}{y} = 1$ where $x \rightarrow 1$ and $y \rightarrow 0$, and $\ln(x) \rightarrow 0^+$, does this mean that $\frac{0^+}{0^+} \rightarrow 1$?
Just in case you want to see another approach. Consider $$A=\frac{\ln(\sqrt{x-2})}{x^2-9}=\frac 12\frac{\ln({x-2})}{x^2-9}$$ Now, as Simply Beautiful Art answered, let $x=u+3$ which makes $$A=\frac{\log (1+u)}{2 u (u+6)}$$ Now, use Taylor series around $u=0$ $$\log(1+u)=u-\frac{u^2}{2}+O\left(u^3\right)$$ which makes $$A=\frac{u-\frac{u^2}{2}+O\left(u^3\right)}{2 u (u+6)}=\frac{1-\frac{u}{2}+O\left(u^2\right)}{2 (u+6)}$$ Now, long division $$A=\frac{1}{12}-\frac{u}{18}+O\left(u^2\right)$$ which shows the limit and how it is approached.
Find the points of contact of the tangent planes to the conicoid $2x^2-25y^2+2z^2=1$. Find the points of contact of the tangent planes to the conicoid $2x^2-25y^2+2z^2=1$ which pass through the line joining the points $(-12,1,12)$ and $(13,-1,-13)$. I can't understand the meaning of this question; please, somebody, help me to understand it and solve it.
I was able to work it out; the solution was given as images, which are not reproduced here.
Solution of the differential equation $\ln \bigg(\frac{dy}{dx} \bigg)=e^{ax+by}$ Solve the given differential equation: $$\ln \bigg(\frac{dy}{dx} \bigg)=e^{ax+by}$$ Could someone give me guidance to proceed in this question. I am not able to deal with $\frac{dy}{dx}=e^{e^{ax+by}}$
Let $z= ax + by$. Then $\frac{dz}{dx} = a + b \frac{dy}{dx}$, so we have that $\frac{1}{b} \frac{dz}{dx} - \frac{a}{b} = \frac{dy}{dx}$, thus: $$\frac{1}{b} \frac{dz}{dx} - \frac{a}{b} = e^{e^z}$$ Rearranging, we have $$ \frac{1}{ a+b e^{e^z} } dz = 1 dx$$ To make the left side nicer consider the substitution $L = e^{e^z}, dL = e^z e^{e^z} dz \rightarrow dz = \frac{dL}{L \ln(L)}$ This yields $$ \int \frac{1}{\ln(L)}\frac{1}{aL + bL^2} dL = x+C$$ as a new, nicer-looking problem to solve. Next we note that $(\sqrt{b}L +q)^2 =bL^2 + 2q\sqrt{b}L +q^2$, so it follows that we have $$ \int \frac{1}{\ln(L)}\frac{1}{(\sqrt{b}L + \frac{a}{2\sqrt{b}})^2 - \frac{a^2}{4b}} dL = x+C$$ And I don't think this can be done by elementary means.
Can't figure out a step in the proof of $\cosh^{-1}x=\ln(x+\sqrt{x^2-1}), \forall x\ge1$? I can't figure out the part start from $*$ in the proof from my book: Set $$y=\cosh^{-1}x,\,x\ge1$$ and note that $$\cosh y=x\quad and\quad y\ge0.$$ ... . $(*)$Since $y$ is nonnegative, $$e^y=x\pm\sqrt{x^2-1}$$ cannot be less than $1$. This renders the negative sign impossible. ... .
Note that we have $$\cosh y=\frac{e^y+e^{-y}}{2}=x$$ So we have by multiplying $2e^{y}$ on each side, $$e^{2y}-2xe^{y}+1=0$$ Setting $e^{y}=t$ and applying the quadratic formula, we have that $$e^{y}=x \pm \sqrt{x^2-1}$$ Note that $\cosh^{-1} x$ is a function that goes from the real numbers which are greater than $1$ to the non-negative reals. So we have that $y \ge 0$. Since $e^{y}$ is an increasing function, $$e^{y}\ge e^{0}=1$$ However,$$x-\sqrt{x^2-1}=\frac{1}{x+\sqrt{x^2-1}} \le \frac{1}{1+\sqrt{1^2-1}}=1$$ So $e^{y} \neq x-\sqrt{x^2-1}$ given that $x \neq 1$. In the case when $x=1$, we have that $x-\sqrt{x^2-1}=x+\sqrt{x^2-1}$. Thus, $e^{y}=x+\sqrt{x^2-1}$.
LCM(a,b)=a IFF $b\mid a$ My approach: $LCM(a,b)=a$ $\rightarrow$ $b\mid a$ By definition of LCM, $a=bx$ therefore $b \mid a$ $b\mid a$ $\rightarrow$ $LCM(a,b)=a$ Since $b\mid a$ then $a=bk$. Since $a$ is a multiple of $a$ and $a$ is a multiple of $b$, then $a$ is a common multiple of $(a,b)$. But can I claim it is the lowest common multiple? $a \times 1$ is clearly the smallest multiple of $a$ that I can generate. And since $b\mid a$, that doesn't make a smaller common multiple. But I'm not sure if this is clearest way to prove or describe this.
Yes, it is the lowest common multiple, no doubt. Observe that if $b\mid a$, then $a=bk$ for some positive integer $k$, and $$\operatorname{lcm}(a,b)=\operatorname{lcm}(bk,b)=bk=a$$
How come a rearrangement of a convergent series may not converge to the same value, or may not converge at all By looking at Riemann's Rearrangement theorem I wonder: how come a particular rearrangement of a convergent series may not converge to the same value as the original series? Isn't the below true? Let $\phi : \mathbb{N} \to \mathbb{N}$ be a bijection, where $\mathbb{N}$ is the set of natural numbers. A rearrangement of a series $\sum a_n$ would be $\sum a_{\phi(n)}$. Then give me an example of a series that has a rearrangement which does not converge, or does not converge to the same value. I mean... $4+1+2+2+9+6 = 1+2+2+6+4+9 = 24$??
I think you need to understand the nature of a conditionally convergent series. I restrict the discussion to series whose terms are real numbers. If a series $\sum a_{n}$ is conditionally convergent and we separate the positive terms of this series into $\sum b_{n}$ and negative terms into another series $\sum c_{n}$ then we have to understand following two things: * *$\lim_{n \to \infty} a_{n} = \lim_{n \to\infty}b_{n} = \lim_{n \to \infty}c_{n} = 0$ *$\sum b_{n}$ diverges to $\infty$ and $\sum c_{n}$ diverges to $-\infty$. Consider the expression $$S(M,N) =\sum_{n=1}^{M} b_{n} + \sum_{n=1}^{N} c_{n}$$ and we can see that it is of the indeterminate form $\infty - \infty$ as $M, N$ tend to $\infty $ independently. Since the series on right diverge it is possible to take desired number of terms from $\sum b_{n}$ and $\sum c_{n}$ to add up to any particular sum (including $\pm\infty$). To add some details, suppose I want to achieve the sum $2$. Then I choose terms from $\sum b_{n}$ so that the sum of these chosen terms is greater than $2$ (this is possible because $\sum b_{n}$ diverges and we can exceed any number by choosing sufficient number of terms). Next I add terms from the series $\sum c_{n}$ (note the terms here are negative so that adding them effectively reduces the sum) so that the sum is less than $2$. Repeating this procedure ad infinitum I get a series whose sum is $2$. Formally given any extended real number $S$, we can prove that there exist sequences $m_{k}, n_{k} $ taking positive integer values and tending to $\infty$ as $k\to\infty$ such that $S(m_{k}, n_{k}) \to S$ as $k\to\infty$. The above argument when formalized with all the details constitutes a proof of Riemann's rearrangement theorem.
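Here is a minimal Python sketch of that greedy procedure applied to the conditionally convergent series $1-\frac12+\frac13-\frac14+\cdots$ (which sums to $\ln 2\approx 0.693$ in its usual order); the target value $2$ and the number of terms below are arbitrary choices for the illustration.

    target = 2.0
    pos = (1.0 / n for n in range(1, 10**7, 2))    # positive terms 1, 1/3, 1/5, ...
    neg = (-1.0 / n for n in range(2, 10**7, 2))   # negative terms -1/2, -1/4, ...

    total = 0.0
    for _ in range(200000):
        # take positive terms while at or below the target, negative terms otherwise
        total += next(pos) if total <= target else next(neg)

    print(total)   # close to 2.0, even though the usual ordering sums to ln(2)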
What is the remainder when $15^{40}$ is divided by $1309$? I know that $$ 15 \equiv 1\pmod{7}, \quad\text{so}\quad 15^{40} \equiv 1\pmod{7},$$ but cannot proceed further.
Note that $1309=7 \times 11 \times 17$. So $$15^{40} \equiv 1^{40} \equiv 1 \pmod {7}$$ Using $15 \equiv 1 \pmod {7}$. Then, $$15^{40} \equiv \left(15^{10}\right)^4\equiv 1 \pmod {11}$$ And $$15^{40} \equiv (15+17 \times 2)^{40} \equiv 7^{80} \equiv \left(7^{16}\right)^5 \equiv 1 \pmod {17}$$ From Fermat's Little Theorem. So $$15^{40} \equiv 1 \pmod {1309}$$ Using CRT. We are done.
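These congruences are easy to confirm with Python's built-in modular exponentiation; this is only a numerical sanity check, not a substitute for the CRT argument above.

    print(1309 == 7 * 11 * 17)                                 # True
    print(pow(15, 40, 7), pow(15, 40, 11), pow(15, 40, 17))    # 1 1 1
    print(pow(15, 40, 1309))                                   # 1, so the remainder is 1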
Inductive proof of $a_n = 3*2^{n-1} + 2(-1)^n$ if $a_n = a_{n-1} + 2*a_{n-2}, a_1 = 1, a_2 = 8$ I would appreciate a little help in finalizing a proof for the following: Let $a_n$ be the sequence defined as $a_1 = 1$, $a_2 = 8$, and $a_n = a_{n-1} + 2*a_{n-2}$ when $n \geq 3$. Prove that $a_n = 3*2^{n-1} + 2(-1)^n$. I decided to use strong induction and show that if the statement is true for $1,...,n$ , then it is true for $n+1$ (going off the fact that the statement is true for $n=3$). I have: $$a_{n+1} = a_n + 2*a_{n-1} = 3*2^{n-1} + 2(-1)^n + 2(3*2^{n-2} + 2(-1)^{n-1}) = 3*2^n + 2(-1)^n + 4(-1)^{n-1}$$ Which is close to the result I want $(3*2^n + 2(-1)^{n+1})$ but the signs are switched for the $(-1)$ terms; therefore I suspect I missed a $-1$ somewhere, but I could not see where the error is. I appreciate all and any help. Thank you kindly!
$2(-1)^n=-2(-1)^{n+1}\;$ and $\;4(-1)^{n-1}=4(-1)^{n+1}$. Note your linear recurrence equation of order $2$ is the discrete version of linear second order differential equations, and can be solved likewise: You try to obtain as solutions geometric sequences $a^n\enspace (a\ne 0)$ (the equivalent of the exponential solutions for differential equations). This is equivalent to $a$ being a solution of the quadratic equation: $$a^2-a-2=0\qquad(\text{characteristic equation})$$ There are two integer solutions: $\;-1$ and $\;2$. Hence the general solution of the linear recurrence equation is $$u_n=\alpha 2^n+\beta(-1)^n,$$ the coefficients $ \alpha$ and $\beta$ being determined by the initial conditions $u_1=1$, $u_2=8$. One finds $\alpha=\frac32$ and $\beta=2$.
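A short Python check (exact arithmetic, just as a sketch) that the coefficients $\alpha=\frac32$, $\beta=2$ reproduce the recurrence and agree with the formula $3\cdot 2^{n-1}+2(-1)^n$ from the question:

    from fractions import Fraction

    alpha, beta = Fraction(3, 2), Fraction(2)

    def closed(n):
        return alpha * 2**n + beta * (-1)**n

    a = {1: 1, 2: 8}
    for n in range(3, 15):
        a[n] = a[n - 1] + 2 * a[n - 2]              # the given recurrence

    print(all(closed(n) == a[n] for n in a))                           # True
    print(all(closed(n) == 3 * 2**(n - 1) + 2 * (-1)**n for n in a))   # True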
Tangent line at $(0,0)$ of plane curve $y^3=x^5$ Let $f(x,y)=y^3-x^5$. $\nabla f(x,y)=(-5x^4,3y^2)$. $\nabla f(0,0)=(0,0)$ so $(0,0)$ is a singular point of the curve $y^3-x^5=0$. On the other hand, it makes sense to define $g(x)=x^{5/3}$ for $x \in \mathbb{R}$. Then the curve $y^3-x^5=0$ is the graph of the function $g$. But $g'(x)=\frac{5}{3} x^{2/3}$ so $g'(0)=0$, so the graph of $g$ has the tangent line $y=0$ at the point $(0,0)$. Therefore, there is something I am not understanding by what it means to say that $(0,0)$ is a singular point of the curve $y^3-x^5=0$, because there seems to be a tangent line at this point. So either being a singular point does not imply not having a tangent line, or I am somehow being sloppy in the definition of tangent line. I would dearly like to clear this up. I am not a differential or algebraic geometer but an analyst; I've been thinking about this for teaching a multivariable calculus course I've been teaching.
In algebraic geometry, for a curve defined by an equation $P(x, y)=0$, where $P\in K[X,Y]\;$ ($K$ a field), the tangents at origin have equation the homogeneous part of lowest degree in the equation. In the present case, it means the tangents are given by $\;y^3=0$, so it is the line $y=0$, counting for three.
Distributing pebbles The rules to this "game" are simple, but after checking 120 starting positions, i can still not find a single pattern that consistantly holds. I am grateful for the smallest of suggestions. Rules: You have two bowls with pebbles in each of them. Pick up the pebbles from one bowl and distribute them equally between both bowls. If the number of pebbles is odd, place the remaining pebble in the other bowl. If the number of distributed pebbles was even, repeat the rule by picking up the pebbles from the same bowl. If the number of pebbles was odd, repeat the rule by picking up the pebbles from the other bowl. Continue applying the previous rules until you have to pick up exactly 1 pebble from a bowl, at which point you win. There are some starting positions, however, for which you will never be able to win. If the number of pebbles in the bowls are 5 and 3 respectively, you will cycle on forever. Question: Depending on the number of pebbles in each bowl at the starting position, can you easily predict if that position will be winnable/unwinnable? Edit: Here is some python code i wrote to generate answers for given starting values: http://codepad.org/IC4pp2vH Picking up $2^n$ pebbles will guarantee a win. Edit: As shown by didgogns, starting with n pebbles in both bowls always results in a win.
Here's another image (1000x1000), similar to that of Simon Marynissen. Here the red pixels correspond to the unwinnable states. The rest are painted dark or light according to its "normalized distance" $k/(n+m)$ to the final winner state - black pixels take (relatively) few moves to win, white pixels take many moves. Not very illuminating, I'm afraid. Here's a zoom of the first 100x100 values.
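For anyone who wants to experiment further, here is a small self-contained simulation in Python. It is my reading of the rules in the question (with the player free to choose which bowl to pick up first), not the poster's original codepad script, so treat it as a sketch.

    def run(a, b, pick):
        """Play the forced game starting by picking up bowl `pick`; True means a win."""
        bowls, seen = [a, b], set()
        while True:
            n = bowls[pick]
            if n == 1:                      # forced to pick up exactly 1 pebble: win
                return True
            state = (bowls[0], bowls[1], pick)
            if state in seen:               # a state repeats: we cycle forever
                return False
            seen.add(state)
            half, rem = divmod(n, 2)
            bowls[pick] = half              # distribute the picked-up pebbles equally...
            bowls[1 - pick] += half + rem   # ...an odd leftover goes to the other bowl
            if n % 2 == 1:                  # odd pick-up: switch bowls next time
                pick = 1 - pick

    def winnable(a, b):
        return run(a, b, 0) or run(a, b, 1)

    print(winnable(5, 3))                               # False: the cycling example
    print(all(winnable(n, n) for n in range(1, 50)))    # True: equal bowls always win
    print(all(run(2**k, m, 0) for k in range(1, 8) for m in range(1, 20)))  # True: picking up 2^n wins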
Proving that $T^2-5T+6I = 0$. The question is: Suppose that $T$ is a self-adjoint operator on a finite-dimensional inner product space and that $2$ and $3$ are the only eigenvalues of $T$. Prove that $T^2-5T+6I = 0$. To prove this, can I take the inner product of $(T^2-5T+6I)v$ and $Tv$, where $Tv = 2v$ and $v \neq 0$, and then show that the inner product of those two vectors is equal to $0$? Since $v \neq 0$, would that imply that $T^2-5T+6I = 0$? Or am I coming at this question completely wrong? Thanks.
As $T$ is self-adjoint, it is diagonalizable, so the minimal polynomial splits into linear factors and has only simple roots. This together with the fact that the only eigenvalues are $2$ and $3$ implies that the minimal polynomial is $(X-2)(X-3) = X^2-5X+6$. By the definition of the minimal polynomial, we have $T^2-5T+6I=0$.
Showing the existence of an infinite (strong) antichain Suppose $P$ is a poset such that there exist (strong) antichains of size $n$ for all $n \in {\bf N}$; i.e. there exist sets $S_n$ of size $n$ in $P$ such that no pair of elements of $S_n$ has a common lower bound. Must $P$ have an infinite antichain?
If there is an infinite set of minimal elements (or rather, of elements below which the order is linear), we're done. So let's work under the assumption that there are no minimal elements (or rather, that every element has two incompatible smaller elements), as there are only finitely many of them, and they have to be in every maximal antichain. Now proceed by induction: pick an element $a$, and two incompatible elements smaller than $a$, call them $a_0$ and $b_0$. Now $b_0$ has two incompatible elements below it, so we can take them to be $a_1$ and $b_1$. Proceed by induction, splitting $b_n$ into $a_{n+1}$ and $b_{n+1}$. Then $\{a_n\mid n\in\Bbb N\}$ is the antichain you seek. Choice is necessary, since without choice it is consistent that there are counterexamples. For example, if $S$ is a set which is a countable union of pairs such that no infinite set of these pairs admits a choice function, then the tree of choice functions defined on finitely many pairs will satisfy this.
Prove the given equation for the two pairs of lines. If one of the straight lines given by equation $$ax^2 + 2hxy + by^2 = 0$$ coincide with one of those given by $$a_2x^2 + 2h_2xy + b_2y^2 = 0$$ and the other represented by them be perpendicular, prove that $${ha_2b_2\over b_2 - a_2} = {h_2ab\over b - a} = \sqrt{-aa_2bb_2} \tag{+}$$ All straight lines pass through origin. Let the inclination of four lines be $m, m_2, m_3, m_4$ Now $m = m_3$ and $\displaystyle m_2 = {-1\over m_4}$ $ax^2 + 2hxy + by^2 = 0$ can be represented as $$b(y - mx)(y - m_2x) = 0$$ On expanding we get $$b(y^2 - xy(m + m_2) + mm_2x^2) = 0$$ From which we get $-b(m + m_2) = 2h$ and $bmm_2 = a$. Also $a_2x^2 + 2h_2xy + b_2y^2 = 0$ can be represented as $b_2(y- m_3x)(y - m_4x) = 0$ Which is same as $$b_2(y- mx)\left(y + {1\over m_2}x\right) = 0$$ On doing same procedure as we did with first pair of lines, we get $\displaystyle 2h_2 = \left({1\over m_2} - m\right)b_2$ and $\displaystyle {-bm\over m_2} = a_2$ On substituting these values in (+) we get $${ha_2b_2\over b_2 - a_2} = {h_2ab\over b - a} = {bb_2m\over 2}$$ but $\displaystyle \sqrt{-aa_2bb_2} = b_2bm$ From this I am getting $$\sqrt{-aa_2bb_2} \ne {ha_2b_2\over b_2 - a_2} = {h_2ab\over b - a}$$. Where did I go wrong ?
The equations of two perpendicular lines can be put in the form $\alpha x+\beta y=0$ and $c(\beta x-\alpha y)=0$ ($c\ne0$). We now add an arbitrary third, shared line $\gamma x+\delta y=0$. The resulting equations for pairs of lines that meet the conditions of the problem are then $$\begin{align}(\alpha x+\beta y)(\gamma x+\delta y)=\alpha\gamma x^2+(\alpha\delta+\beta\gamma)xy+\beta\delta y^2&=0 \\ c(\beta x-\alpha y)(\gamma x+\delta y)=c\beta\gamma x^2+c(\beta\delta-\alpha\gamma)xy-c\alpha\delta y^2&=0\end{align}$$ from which we have $$\begin{align}a&=\alpha\gamma \\ b&=\beta\delta \\ h&= \frac12(\alpha\delta+\beta\gamma)\end{align}$$ and $$\begin{align}a_2&=c\beta\gamma \\ b_2&=-c\alpha\delta \\ h_2&=\frac12c(\beta\delta-\alpha\gamma).\end{align}$$ So, $${h_2ab\over b-a}={c(\beta\delta-\alpha\gamma)\cdot\alpha\gamma\cdot\beta\delta\over2(\beta\delta-\alpha\gamma)}=\frac12c\alpha\beta\gamma\delta$$ and $${ha_2b_2\over b_2-a_2}=-{(\alpha\delta+\beta\gamma)\cdot c\beta\gamma\cdot c\alpha\delta\over2\cdot(-c\alpha\delta-c\beta\gamma)}=\frac12c\alpha\beta\gamma\delta.$$ (The denominators are non-zero, otherwise the third line coincides with one of the two orthogonal lines.) Finally, $$aa_2bb_2 = \alpha\gamma\cdot c\beta\gamma\cdot\beta\delta\cdot (-c\alpha\delta) = -(c\alpha\beta\gamma\delta)^2$$ so $$\sqrt{-aa_2bb_2}=|c\alpha\beta\gamma\delta|.$$ Unfortunately, this doesn’t equal the previous expressions. The extra factor of 2 can be eliminated by removing it from the middle term of the original line pair equations, but without other conditions, I don’t see a way to guarantee that $c\alpha\beta\gamma\delta$ is positive.
Green's Function for 2D Poisson Equation In two dimensions, Poisson's equation has the fundamental solution, $$G(\mathbf{r},\mathbf{r'}) = \frac{\log|\mathbf{r}-\mathbf{r'}|}{2\pi}. $$ I was trying to derive this using the Fourier transformed equation, and the process encountered an integral that was divergent. I was able to extract the correct function eventually, but the math was sketchy at best. I am hoping someone could look at my work and possibly justify it. Here goes. First off, make the assumption that $G$ only depends on the difference $\mathbf{v}=\mathbf{r}-\mathbf{r'}$. Now, let's write $G$ as an inverse Fourier Transform and take the Laplacian, $$\nabla^2G(\mathbf{v}) = \int\frac{d^2k}{(2\pi)^2}(-k^2)e^{i\mathbf{k} \cdot \mathbf{v}} \hat{G}(\mathbf{k}) = \delta(\mathbf{v}) $$ For this to be a delta function, we require that $\hat{G}(\mathbf{k}) = -1/k^2$. Now taking the inverse Fourier Transform of $G$... \begin{align*} G(\mathbf{v}) &= -\int\frac{d^2k}{(2\pi)^2} \frac{e^{i\mathbf{k}\cdot\mathbf{v}}}{k^2} = -\int\limits_{0}^{\infty} \int\limits_{0}^{2\pi} \frac{dkd\theta}{(2\pi)^2} \frac{e^{i|\mathbf{k}||\mathbf{v}|\cos\theta}}{k}\\ &= - \int\limits_0^{\infty}\frac{dk}{2\pi}\frac{J_0(kv)}{k} \end{align*} Here $J_0$ is a Bessel function of the first kind. This integral is divergent as far as I can tell, but let's continue onward and take a derivative with respect to $|\mathbf{v}|$. \begin{align*} \frac{dG}{dv} &= \int\limits_0^{\infty}\frac{dk}{2\pi} J_1(kv)\\ &= \frac{1}{2\pi v} \end{align*} Then integrating this and setting the constant to zero we get the desired result... $$ G(\mathbf{v}) = \frac{\log v}{2\pi} $$ Clearly this was a lot of heuristics, but I am hoping someone could justify some of this with distributions etc... Could someone tell me what on earth I have done and why it worked?
Strictly speaking the Green function isn't Fourier transformable, as it is not $L^2$ integrable. Any math that attempts to show the FT relation directly is necessarily flimsy. One remedy is to multiply by an exponential function that decreases to $0$ toward infinity but remains effectively a constant $1$ within any finite region of interest. This way you should be able to justify the otherwise flimsy math rigorously by carefully considering the FT integral toward infinity. However, the result implies that the Green function is the limit of a sequence of $L^2$ functions whose FTs converge to $-1/k^2$ pointwise but not in $L^2$, as the $L^2$ limit does not exist.
How to derive the formula $\cos(A+B)$ from the formula $\cos(A-B)$? How do I derive the formula $\cos(A+B)=\cos A\cos B-\sin A\sin B$ from the formula $\cos(A-B)=\cos A\cos B+\sin A\sin B$? The only difference that I noticed is the negative and positive sign. I was thinking that first I replace $B$ with $-B$, but then how does $\cos(-B)$ turn into $\cos(B)$, and $\sin(-B)$ into $-\sin(B)$? Thank you, can someone please explain this to me? I hope my question was not too confusing.
$$\cos (A-(-B)) = \cos A \cos -B + \sin A \sin -B = \cos A \cos B - \sin A \sin B= \cos (A+B)$$ Based on the even odd properties of $\sin $ and $\cos $
Limit as $x \to \infty$ of $\frac{x^5+x^3+4}{x^4-x^3+1}$ Suppose we have to find the following limit $\lim_{x\to\infty}\frac{x^5+x^3+4}{x^4-x^3+1}$ Now, if we work with the De L'Hopital rule with successive differentiations we get $L=+\infty$ But if we work like this instead: $$L=\lim_{x\to\infty}\frac{x^5(1+\frac{1}{x^2}+\frac{4}{x^5})}{x^5(\frac{1}{x}-\frac{1}{x^2}+\frac{1}{x^5})}$$ then $L$ does not exist. What is correct and what is false here? I'm a little confused.
As a side note: Unless the sign is obvious, as in $\sum\limits_{i=1}^{\infty}$, get used to explicitly writing the sign of infinity, $+\infty$ or $-\infty$, especially with the limit operator. Without a sign, you implicitly consider the two limits at $\pm\infty$. Here it happens that $\frac1x$ has the same limit $0$ at $+\infty$ and at $-\infty$, so it doesn't matter. Yet that explains why $\frac10$ is indeterminate: if $x\to0^+$, that is $x\to0$ and $x\ge0$, then $\frac1x\to+\infty$; if $x\to0^-$, that is $x\to0$ and $x\le0$, then $\frac1x\to-\infty$. Thus $\frac10$ has two limits, and this is the definition of being indeterminate. But if you write $\displaystyle{x=\frac1{\frac 1x}}$, then $x\to+\infty$ transforms to $\frac1{0^+}$, which is not indeterminate anymore. Same on the other side.
What does "open set" mean in the concept of a topology? Given the following definition of topology, I am confused about the concept of "open sets". 2.2 Topological Space. We use some of the properties of open sets in the case of metric spaces in order to define what is meant in general by a class of open sets and by a topology. Definition 2.2. Let $X$ be a nonempty set. A topology $\mathcal{T}$ for $X$ is a collection of subsets of $X$ such that $\emptyset,X\in\mathcal{T}$, and $\mathcal{T}$ is closed under arbitrary unions and finite intersections.     We say $(X,\mathcal{T})$ is a topological space. Members of $\mathcal{T}$ are called open sets.     If $x\in X$ then a neighbourhood of $x$ is an open set containing $x$. It seems to me that the definition of an open subset is that subset $A$ of a metric space $X$ is called open if for every point $x \in A$ there exists $r>0$ such that $B_r(x)\subseteq A$. What is the difference of being open in a metric space and being open in a topological space? Thanks so much.
In the topological space $( X, \mathcal T)$ a subset of $X$ is simply called open iff it is an element of $\mathcal T$. Nothing else. This terminology is justified by the fact that the set of open sets (by the definition that you know) in a metric space is a topology and that some of the theorems about open sets, continuity and so on carry over from metric spaces to topological spaces. There will surely be some examples shortly after the definition in your book.
Defining a ring homomorphism from $\mathbb{C}\left [ x_1, \dots, x_n \right ]/\left ( f_1, \dots, f_r \right )$ I am working on a problem in Artin's Algebra related to the algebraic geometry talked in Chapter 11. The problem number is 9.2., F.Y.I. Here goes the problem: Let $f_1, \dots, f_r$ be complex polynomials in the variables $x_1, \dots, x_n$, let $V$ be the variety of their common zeros, and let $I$ be the ideal of the polynomial ring $R = \mathbb{C}\left [ x_1, \dots, x_n \right ]$ they generate. Define a ring homomorphism from the quotient ring $\bar{R} = R/I$ to the ring $\mathcal{R}$ of continuous, complex-valued functions on $V$. I attempted to use the correspondence theorem w.r.t. the variety of a set of polynomials, i.e. the maximal ideals bijectively correspond to the point in $V$ and we may somehow define the continuous functions there. However I cannot come up with any idea further. Also, the term 'continuous' here seems redundant since I expect the homomorphism will carry polynomials to polynomials. I appreciate your participation and will be thankful to anything from hints to full solution.
You can define $\psi\colon\mathbb{C}[x_1,x_2,\dots,x_n]\to\mathcal{R}$ by defining $\psi(g)$, for $g\in\mathbb{C}[x_1,x_2,\dots,x_n]$, as $$ \psi(g)\colon (t_1,\dots,t_n)\in V\mapsto g(t_1,\dots,t_n)\in\mathbb{C} $$ It's easy to see that $\psi(g)$ is continuous and that $\psi$ is a ring homomorphism. Since $f_1,f_2,\dots,f_r\in\ker\psi$, we have $I\subseteq\ker\psi$. Therefore the homomorphism factors through $\mathbb{C}[x_1,x_2,\dots,x_n]/I$: there exists a unique ring homomorphism $\phi\colon\mathbb{C}[x_1,x_2,\dots,x_n]/I\to\mathcal{R}$ such that $\phi\circ\pi=\psi$, where $\pi\colon\mathbb{C}[x_1,x_2,\dots,x_n]\to\mathbb{C}[x_1,x_2,\dots,x_n]/I$ is the canonical map to the quotient ring.
Calculating values of the Riemann Zeta Function The Riemann Zeta Function is most commonly defined as $$\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$$ There is some sort of million dollar prize that involves proving the real part of the complex number $s$ must be $\frac{1}{2}$ for all nontrivial zeros. Of course this intrigued me, because, well, it's a million dollars. Odds are I won't solve it, but still. Anyway, I started looking at it and realized that you'd be raising a number to a complex power. This made no sense to me, so I went online and found Euler's formula that explains how that would work: $$e^{i\pi}=-1$$ It turns out that the smallest nontrivial zero is at about $\frac{1}{2}+14.1345i$, so I plugged it into the zeta function. I used Desmos.com, and used separate summations for the real and imaginary parts. I expected to get zero. I did not get zero. In fact, the bigger I made the summation (say, summing to 1000000000 instead of 1000000), the further off I would get from zero. So tell me, how exactly are values for the Riemann Zeta Function computed?
One may note that when $\Re(s)\le1$, $$\sum_{k=1}^\infty\frac1{k^s}\approx\int_1^\infty\frac1{x^s}\ dx\to\infty$$ Thus, we'll need a different representation of the zeta function. If we let $\eta(s)$ be the alternating form of the zeta function, $$\eta(s)=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k^s}$$ Then, $$\zeta(s)-\eta(s)=\sum_{k=1}^\infty\frac{1+(-1)^k}{k^s}=\sum_{k=1}^\infty\frac2{(2k)^s}=2^{1-s}\zeta(s)$$ Thus, it follows that $$\zeta(s)=\frac1{1-2^{1-s}}\eta(s)$$ By taking the Euler sum of the $\eta$ function, we get $$\eta(s)=\sum_{n=0}^\infty\frac1{2^{1+n}}\sum_{k=0}^n\binom nk\frac{(-1)^k}{(k+1)^s}$$ and thus, we reach a globally convergent form of the Riemann zeta function: $$\zeta(s)=\frac1{1-2^{1-s}}\sum_{n=0}^\infty\frac1{2^{1+n}}\sum_{k=0}^n\binom nk\frac{(-1)^k}{(k+1)^s}$$ and when testing for zeroes, the $\frac1{1-2^{1-s}}$ part is negligible.
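A numerical sketch of this formula in Python with mpmath (the working precision and the truncation point $N$ below are ad hoc choices): evaluated at the first nontrivial zero $\frac12+14.1347\ldots i$, the truncated double sum is very small in modulus, in agreement with mpmath's built-in zeta used here only for comparison.

    from math import comb
    from mpmath import mp, mpf, mpc, zeta

    mp.dps = 60
    s = mpc(mpf("0.5"), mpf("14.134725141734693790457251983562"))

    def zeta_series(s, N=100):
        total = mpc(0)
        for n in range(N):
            inner = sum((-1)**k * comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
            total += inner / 2**(n + 1)
        return total / (1 - 2**(1 - s))

    print(abs(zeta_series(s)))   # very small
    print(abs(zeta(s)))          # mpmath's built-in value, also very small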
Determinant of non-triangular block matrix We have the following determinant property $$\det \begin{bmatrix} U & O \\ V & W \end{bmatrix} = \det(U) \cdot \det(W)$$ where $U \in R^{n\times n}$, $V \in R^{m\times n}$, $W \in R^{m\times m}$ and $O \in R^{n\times m}$ (the zero matrix). Now suppose the zero block appears in the top left corner instead. Does there in that case also exist a rule to calculate the determinant of the matrix more easily? The matrices I am thinking of here are of the form $$Z = \begin{bmatrix} O & A \\ A^T & B \end{bmatrix}$$ with all matrices conformable. An example would be $$Z = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & -9 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & -1 \\ 1 & -9 & 0 & -1 & 2 & 0 \\ 1 & 0 & 0 & 2 & 1 & 0 \\ 1 & 1 & -1 & 0 & 0 & 1 \end{bmatrix}$$
There is no such rule for calculating the determinant easily, as there is when the zero block is in the top-right or bottom-left corner. Here you can see all the rules you can apply to block matrices: https://en.wikipedia.org/wiki/Determinant. Instead, you can transform your matrix with Gaussian transformations to an upper triangular matrix and just multiply the elements on the diagonal.
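As a numerical illustration (a sketch assuming numpy is available), one can confirm the top-right-zero rule on random blocks and simply let an LU-based determinant routine do the elimination for the $6\times 6$ example in the question:

    import numpy as np

    # the known rule: zero block in the top-right corner
    U, W = np.random.rand(3, 3), np.random.rand(4, 4)
    V, O = np.random.rand(4, 3), np.zeros((3, 4))
    M = np.block([[U, O], [V, W]])
    print(np.isclose(np.linalg.det(M), np.linalg.det(U) * np.linalg.det(W)))   # True

    # the question's example: no block shortcut, just eliminate numerically
    Z = np.array([[ 0,  0,  0,  1,  1,  1],
                  [ 0,  0,  1, -9,  0,  1],
                  [ 0,  1,  1,  0,  0, -1],
                  [ 1, -9,  0, -1,  2,  0],
                  [ 1,  0,  0,  2,  1,  0],
                  [ 1,  1, -1,  0,  0,  1]], dtype=float)
    print(np.linalg.det(Z))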
Find the area of an infinitesimal elliptical ring. I have an ellipse given by, $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=c$$ and another ellipse that is infinitesimally bigger than the previous ellipse, i.e. $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=c+dc$$ I want to find the area enclosed by the ring from $x$ to $x+dx$ but I don't know how. Please don't solve the question, just point me in the right direction, I want to solve it myself. Here is a picture of what I want to do.
Note that you just scaled up both the major and minor axes by a common factor $ \sqrt c$ You want to look at $$ (\sqrt c\, y )\ d (\sqrt c x ) - y\ dx = (c-1) y\ dx . $$
Single set in a topology What is meant by a single set in a topological space? The statement goes as: "let $X$ and $X'$ denote a single set in the topologies $\mathcal{T}$ and $\mathcal{T'}$ respectively".
Here it means, for example, $\mathbf{one}$ set, say $X_{0}$, endowed with two different topologies $\mathcal{T}$ and $\mathcal{T}'$: in view of the different topologies, a $\mathbf{single}$ set $X_{0}$ can give two different topological spaces, namely $X$ and $X'$ respectively. But as sets, $X=X'$.
Finding linear dependency in two different ways Find the value(s) of h for which the vectors are linearly dependent. Justify each answer: $ \begin{bmatrix} 1 & -2 & 3 \\ 5 & -9 & h \\ -3 & 6 & -9 \end{bmatrix} $ The correct answer is all $h$, but I'm confused on how to arrive at this answer. At first, I looked at the 2nd and 3rd column and noticed the values in the 2nd column could be multiplied by $\frac{-3}{2}$ to get the values in the 3rd column. I concluded $h$ must be $\frac{27}{2}$. I also tried to put the matrix in row-reduced echelon form to get: $ \begin{bmatrix} 1 & 0 & 2h-27 \\ 0 & 1 & h-15 \\ 0 & 0 & 0 \end{bmatrix} $ , so I see that there is a free variable in th 3rd column, so $h$ can be any value since the system is already linearly dependent. I was wondering what was wrong with the way I did it with my first method.
More ways to solve this (among others): * *You can simply look at the first and last rows of the matrix which are clearly dependent. *Calculating the determinant $\begin{align*} \begin{vmatrix} 1&-2&3\\ 5&-9&h\\ -3&6&-9\\ \end{vmatrix} &= 1\begin{vmatrix}-9&h\\6&-9\end{vmatrix}+ 2\begin{vmatrix}5&h\\-3&-9\end{vmatrix}+ 3\begin{vmatrix}5&-9\\-3&6\end{vmatrix}\\ &=81-6h-90+6h+90-81\\ &=0\end{align*}$ Your first solution simply showed one value of $h$ for which the vectors are dependent. Using the same method as your first one, you could have subtracted the second column from the first to get the third, giving you $h=14$, again just one possible value of $h$.
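A quick numerical confirmation (numpy sketch) that the determinant vanishes, and hence the columns are dependent, no matter which $h$ is chosen:

    import numpy as np

    for h in (-100.0, -1.5, 0.0, 13.5, 14.0, 27.0):
        M = np.array([[1, -2, 3], [5, -9, h], [-3, 6, -9]], dtype=float)
        print(h, np.linalg.det(M))   # 0 (up to rounding) for every h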
How to show that if $9n^2=a^2+b^2$, $a$ and $b$ are multiples of $3$ To be honest, I don't know where to start with this problem: Let $n\in \mathbb{N}$. Prove that if $9n^2$ is the sum of two perfect squares $(a^2,b^2)$, then $a$ and $b$ are multiples of $3$.
Logically speaking, $$9n^2 = a^2 + b^2 \iff \left({a\over3}\right)^2 + \left({b\over3}\right)^2 = n^2.$$ It is clear that $n^2 \in \mathbb Z$, but because $a^2, b^2$ are perfect squares, then $a,b,n\in\mathbb Z$. Can you see where to take it from here?
Understanding a proof about $\delta$-slim triangle equivalence. I am trying to understand a proof from the book "Metric Spaces of Non-Positive Curvature". In the $(3)\Rightarrow (1)$ direction, I can see why the internal point varies continuously. It makes sense, but I am having trouble proving it. Here are the definitions, then the theorem, and right after it the proof, as in the book: I have tried to show that by defining $f:[0,1]\to [y,z]$ by $$f(t)=(\chi _{\Delta _t} |_{[y,z]})^{-1} (o_{\Delta _t})$$ we will get that $f(t)$ gives us an internal point of $\Delta _t$, but I am not sure why it is continuous. I thought of using the fact that $\chi _{\Delta _t} |_{[y,z]}$ is an isometry and that $c$ is continuous. I will be happy for some help!
This is more of a hint/plan of attack: Think about the corresponding tripod (a degenerate tripod at $t=0$) for the triangle, and note that the lengths of the three sides determine a tripod; you should be able to prove that the lengths of the sides change continuously. Since the lengths of the sides change continuously, you get tripods $T(a_t,b_t,c_t)$, and you can prove that $a_t$ (or whatever side you are looking at) changes continuously (formally, you may find the Gromov product useful). From here you should get that the internal point varies continuously.
$Ka=K$ iff $a\in K$ Let $K$ be a subgroup of a group $G$ and let $a \in G$. Prove that $Ka=K$ iff $a\in K$. I need to show $Ka=K \Rightarrow a \in K$ and $a\in K \Rightarrow Ka=K$. $\Rightarrow]$ ($Ka=K \Rightarrow a \in K$): no clue, I would appreciate a hint. $\Leftarrow]$ ($a\in K \Rightarrow Ka=K$): same, I don't know how to approach it. Definition: $$ Ka =\{ka: k\in K \}$$
Maybe this can help. $\Rightarrow$ Assume that $Ka=K$. You know that $ea=a$ where $e$ is the identity in $G$ and hence, an identity in $K$ so that $e\in K$. Since $ea\in Ka$ and $Ka=K$, we get $ea\in K$, that is, $a\in K$. $\Leftarrow$ Assume that $a\in K$. We need to consider the following. $i.$ Let $x\in Ka$. Then there exists $k\in K$ such that $x=ka$. So, we have $a,k\in K$ and because $K$ is a subgroup of $G$, we get $ka\in K$. Because $ka=x$, we get $x\in K$. Hence, $Ka\subset K$. $ii.$ Let $x\in K$. Because $K$ is a subgroup of $G$, we get $a^{-1}\in K$ and hence, $xa^{-1}\in K$. This shows that $(xa^{-1})a\in Ka$. But $(xa^{-1})a=x$. Thus, $x\in Ka$ and so, $K\subset Ka$. Combining $(i)$ and $(ii)$, we get $Ka=K$.
Pivot Row in Simplex Method If I am trying to solve a minimization problem without converting it to a maximization problem how do I decide which variable to pivot ? I think it involves looking at the ratio of that variable with the RHS but unsure if I should choose the variable. Is the one with the smallest ratio or the largest. Here is the problem that I have: Minimize: $x_1+x_2-4x_3$ Subject to: $$ \begin{align} x_1+x_2+2x_3+x_4=9\\ x_1+x_2-x_3+x_5=2\\ -x_1+x_2+x_3+x_6=4\\ x_1,x_2,x_3,x_4,x_5,x_6 \geq 0\\ \end{align}$$ In this case I know the 4 needs to be pivoted to made negative but not sure if I need to use the $x_4,x_5,x_6$ as the first pivot.
Minimising $x_1+x_2-4x_3$ is equivalent to maximising it's additive inverse, so we can simply copy the coefficients in the original problem to the last row of the inital simplex tableau: \begin{array}{r|rrrrrr|rr} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{RHS} & \text{ratio}\\ \hline x_4 & 1 & 1 & 2 & 1 & 0 & 0 & 9 & 9/2 \\ x_5 & 1 & 1 & -1 & 0 & 1 & 0 & 2 & - \\ x_6 & -1 & 1 & 1^* & 0 & 0 & 1 & 4 & 4\\ \hline & 1 & 1 & -4 & 0 & 0 & 0 & 0 \end{array} Choose the most negative number at the $z$-row (-4 in this case.), just like what we do for a standard simplex maximisation problem. Then pick the least nonnegative number at the "ratio" column. (You may consult my other post on choosing the leaving variable for further explanation.) \begin{array}{r|rrrrrr|rr} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{RHS} & \text{ratio}\\ \hline x_4 & 3^* & -1 & 0 & 1 & 0 & -2 & 1 & 1/3\\ x_5 & 0 & 2 & 0 & 0 & 1 & 1 & 6 & - \\ x_3 & -1 & 1 & 1 & 0 & 0 & 1 & 4 & - \\ \hline & -3 & 5 & 0 & 0 & 0 & 4 & 16 \end{array} \begin{array}{r|rrrrrr|r} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{RHS} \\ \hline x_1 & 1 & -1/3 & 0 & 1/3 & 0 & -2/3 & 1/3 \\ x_5 & 0 & 2 & 0 & 0 & 1 & 1 & 6 \\ x_3 & 0 & 2/3 & 1 & 1/3 & 0 & 1/3 & 13/3 \\ \hline & 0 & 4 & 0 & 1 & 0 & 2 & 17 \end{array} Hence our optimal solution is $(x_1,x_2,x_3,x_4,x_5,x_6) = (1/3, 0, 13/3, 0 ,6, 0)$ with optimal value -17.
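For a cross-check of the tableau arithmetic, here is a small sketch using scipy's LP solver on the same minimisation (scipy is an assumption here; it is not needed for the hand computation above):

    from scipy.optimize import linprog

    c = [1, 1, -4, 0, 0, 0]
    A_eq = [[ 1, 1,  2, 1, 0, 0],
            [ 1, 1, -1, 0, 1, 0],
            [-1, 1,  1, 0, 0, 1]]
    b_eq = [9, 2, 4]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
    print(res.x)    # approximately [1/3, 0, 13/3, 0, 6, 0]
    print(res.fun)  # approximately -17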
Real Analysis Sets example The problem: Come up with an example of sets $A$, and $B$ in $\mathbb{R}^2$ such that $A \subset B$, $A \neq B$, and the boundary points of $A$, $bd(A) = B$. Here is what I think. I was going to set $A = \{ \bar{u} \in \mathbb{R}^2 | \|u \| =1 \} $ and $B = \{ \bar{u} \in \mathbb{R}^2 | \| u \| \leq 1 \}$ but I cannot think of good examples. Can someone give me some hints please? Try not to solve the problem! Thank you very much!!
Your example does not work, since $(0,0)$ is not a boundary point of B, just take the ball of radius $\frac{1}{2}$ around it, which clearly does not contain any elements from $A$. You need something dense in $B$ to be your set, and there is nothing quite like $\mathbb{Q}$ as far as dense sets go. Can you finish it from here?
How to show sequence defined by this recursive formula converges to 0? As part of the problem I'm working on, I reached the point where I have to show the sequence of error terms $e_n$ defined by: $$ e_{n+1} = \frac{e_n}{e_n+2} $$ converges to 0 for choice of initial $e_0 > -1$ I've been able to show this for $e_0 \geq 0$, as $e_n \geq 0 \implies 0 < e_{n+1} \leq \frac{1}{2}e_n$ How can one show that convergence to $0$ still holds for $-1 < e_0 < 0$? Is there a way to prove this using only the non-explicit definition of $e_n$?
Note that $$e_{n + 1} = {e_{n} \over e_{n} + 2} \implies {1 \over e_{n + 1}} + 1 = 2\left({1 \over e_{n}} + 1\right) \implies \boxed{{1 \over e_{n + 1}} + 1 = 2^{n + 1}\left({1 \over e_{0}} + 1\right)}$$ With the above expression we can discuss the several possible $e_{n}$-behaviours.
How to take the integral? $\int \frac{x^2-3x+2}{x^2+2x+1}dx$ $$\int \frac{x^2-3x+2}{x^2+2x+1}dx$$ So after all I had $$ \frac{-5x+1}{(x+1)^2} = \frac{A}{(x+1)} + \frac{B}{(x+1)^2}$$ and of course $$ \int xdx $$ but that part is easy to solve; I do not know how to handle the divided terms. Probably I should solve the system, or is there an easier way to find $A$ and $B$? After all steps I finally got: $$-5x + 1 = Ax + A + B$$
I add this answer as a kind of reference on how to handle any partial fraction decomposition when the roots of the polynomial in the denominator are known. This is the general algebraic solution: suppose that you have two normalized polynomials $p,q\in\Bbb C[X]$ (normalized means that the coefficient of the maximum power of each one is $1$) with $\deg(q)>\deg(p)$. Then we can write $$q=\prod_{k=1}^n(X-z_k)^{m_k},\quad\sum_k m_k=\deg(q),\quad q(z_k)=0$$ that is, the denominator is written as a product of linear factors, one for each of its roots $z_k$ with multiplicity $m_k$. Then we want to write $$\frac{p}{q}=\sum_{j=1}^n\sum_{k=1}^{m_j}\frac{a_{jk}}{(X-z_j)^{k}},\quad a_{jk}\in\Bbb C\tag{1}$$ From (1) we make the ansatz $$\frac{p}{q}=\frac{a}{(X-z_1)^{m_1}}+\frac{p_1}{q_1}\tag{2}$$ where $a\in\Bbb C$, $p_1\in\Bbb C[X]$ and $q_1:=\frac{q}{X-z_1}$. Multiplying (2) by $q$ we get $$p=a\prod_{j=2}^n(X-z_j)^{m_j}+(X-z_1)p_1$$ from where we get the solution $$\bbox[2pt, border:2px yellow solid]{a=p(z_1)/\prod_{j=2}^n(z_1-z_j)^{m_j}}\tag{3}$$ Applying (3) recursively through the roots of $q$ you get the desired partial fraction expansion for $p/q$. And you know that $$\int\frac{\mathrm dx}{(x-z_j)^{m_j}}=\begin{cases}\ln|x-z_j|+c,& m_j=1\text{ and } z_j\in\Bbb R\\\ln(x-z_j)+c,&m_j=1\text{ and }z_j\in\Bbb C\setminus\Bbb R\\\frac{-1}{(m_j-1)(x-z_j)^{m_j-1}}+c, &m_j>1\end{cases}$$
How to take the integral $\int\frac{dx}{\sin^3x}$ There's $$\int\frac{\mathrm dx}{\sin^3x}.$$ I tried to write it like $$\int\frac{(\sin^2x+\cos^2x)}{\sin^3x}\,\mathrm dx,$$ and then made partial fractions from it, but it didn't help much, the answer is still incorrect.
write your integrand as $$\frac{1}{\sin(x)}+\frac{\cos(x)^2}{\sin(x)^3}$$ and the first as $$\sin(x)+\frac{\cos(x)^2}{\sin(x)}$$ now you can set $$t=\sin(x)$$
How to find equal gcd for polynomials? In one of my previous questions for finding a value n so that the fraction isn't in the simplest form, the answer stated that $\gcd(2x+5,3x+4) = \gcd(2x+5,x−1)$. Can anyone explain how the answerer (who I'm very thankful to) arrived at this? Thank you!
The greatest common divisor of the leading coefficients of both expressions, that is, $\gcd (3,2)=1$. So, $$3x+4-(1)(2x+5) = x-1$$ This is called the Euclidean Algorithm. Hope it helps.
The inverse of $-1+\dfrac{\cos((\frac{1}{2}+m)w)}{\cos(w)}$ Is it possible to compute the inverse of the following function $f(w)=-1+\dfrac{\cos((\frac{1}{2}+m)w)}{\cos(w)}$ when $0 \leq w \leq 2\pi$.
It is not necessarily invertible on the interval $[0,2\pi]$ as shown in this example when $m=1$. You may try using other values of $m$ or other intervals at this desmos.com link.
Limits Definition and relaxation of the point of accumulation condition I am doing a self study reading Thomson-Bruckner-Bruckner book on analysis, found here: http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf The chapter on continuous functions defines Limit of a function as follows, Definition 5.1: (Limit) Let $f:E \mapsto \mathbb{R}$ be a function with domain E and suppose that $x_o$ is a point of accumulation of $E$. Then we write $\lim_{x \rightarrow x_o} f(x) = L$ if for every $\epsilon > 0$ there is a $\delta > 0$ so that $|f(x) - L| < \epsilon$ whenever $x$ is a point of $E$ differing from $x_o$ and satisfying $|x - x_o| < \delta$. The point being made is that $x_o$ is an accumulation point of $E$. In order to illustrate this point, the author has the following exercise problem, Show that $\lim_{x \rightarrow \ -2} \sqrt{x} = L$ is true for any $L$, if the definition of limits excludes that $x_o$ has to be a point of accumulation of the set $E$. My reasoning was that this ought to be vacuously true since $E \bigcap (-2-\delta, -2+ \delta) = \emptyset$ for any $\delta < 2$. However, I would like to make the case using the $\epsilon - \delta$ argument. I am not sure how to proceed.
You were actually almost done. As soon as you know the implication gets vacuously true if $0 < \delta < 2$, for every $\varepsilon > 0$ you've got a choice of $\delta$ (say $\delta := 1$). Alright, this problem is in fact a little logical issue. Recall that you are to find at least one $\delta > 0$ allowing the condition pertaining to $\delta$. You ended up with finding infinitely many such $\delta$, namely anyone in the open interval $]0,2[$. So for whatever $\varepsilon > 0$, you've got infinitely many choices of such $\delta$, namely those in $]0,2[$. So simply giving them $\delta := 1$, say, suffices.
Sherman-Morrison-Woodbury Proof Suppose $X$ is an $n \times p$ matrix, $Y$ is a $ p \times n $ matrix and $A$ is an $n \times n$ matrix, all over the field of real numbers. Suppose $A$ is invertible and define $W=I+YA^{-1}X$, a $ p \times p $ matrix over the reals. Prove that if $W$ is invertible then so is $A+XY$, and $(A+XY)^{-1} = A^{-1} - A^{-1}XW^{-1}YA^{-1}$. Prove that if $W$ is not invertible then neither is $A+XY$. For the first proof I was thinking about multiplying by $A+XY$ on both sides, but I got lost in that computation. I wanted to show that $I=I$. And for the second proof, is it sufficient to explain that the formula relies on $W^{-1}$?
Part 1: Yes, the computation is cumbersome, but here we go $$ (A+XY)(A^{-1}-A^{-1}X(I+YA^{-1}X)^{-1}YA^{-1}) \\ = AA^{-1} + XYA^{-1} - AA^{-1}X(I+YA^{-1}X)^{-1}YA^{-1} \\ - XYA^{-1}X(I+YA^{-1}X)^{-1}YA^{-1} \\ = I + XYA^{-1} - X(I+YA^{-1}X)^{-1}YA^{-1} \\ - XYA^{-1}X(I+YA^{-1}X)^{-1}YA^{-1} \\ = I + X \big( I - (I+YA^{-1}X)^{-1} - YA^{-1}X(I+YA^{-1}X)^{-1} \big) YA^{-1} \\ = I + X \big( I - (I+YA^{-1}X) (I+YA^{-1}X)^{-1} \big) YA^{-1} \\ = I + X \big( I - I \big) YA^{-1} = I + 0 = I $$ Part 2: If $W$ is not invertible, then there exists a vector $u\neq 0$ with $Wu=0$. Let $v=A^{-1}Xu$. Obviously $v\neq 0$. (Assume $v=0$. Then $0=Wu=u+YA^{-1}Xu = u+Yv = u+0 =u$, which is a contradiction to our choice of $u$) With this $v$, we get $$ (A+XY)v = (A+XY)A^{-1}Xu = Xu+XYA^{-1}Xu = X(I+YA^{-1}X)u =XWu =0 $$ We have found a vector $v\neq 0$ with $(A+XY)v=0$. Therefore $A+XY$ is not invertible either. It is not sufficient to note that the proof relies on the existence of $W^{-1}$. $A+XY$ still could have an inverse that can be expressed without the inverse of $W$.
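A random-matrix sanity check of Part 1 is easy to do in Python (a sketch assuming numpy; the shift added to $A$ below just keeps it comfortably invertible); the identity holds to machine precision:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 5, 3
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
    X = rng.standard_normal((n, p))
    Y = rng.standard_normal((p, n))

    Ainv = np.linalg.inv(A)
    W = np.eye(p) + Y @ Ainv @ X

    lhs = np.linalg.inv(A + X @ Y)
    rhs = Ainv - Ainv @ X @ np.linalg.inv(W) @ Y @ Ainv
    print(np.allclose(lhs, rhs))   # True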
Show the sequence $a_1=1$, $a_{n+1}=(1-\frac{1}{2^n})a_n$ converges. $a_1=1 \,\text{and} \,$$a_{n+1}=(1-\frac{1}{2^n})a_n$ My try:This is a decreasing sequence and bounded below by $1$.So $a_n$ converges.
The first elements of the sequence: $$a_1=1,\,a_2=\frac12,\,a_3=\frac38,\,a_4=\frac{21}{64}\,,\ldots$$ so it looks like a decreasing sequence. With a little induction $$a_{n+1}:=\left(1-\frac1{2^n}\right)a_n\le a_n\iff1-\frac1{2^n}\le1$$ and since the last inequality is trivial we're done. Finally, again with a little induction $$a_{n+1}=\left(1-\frac1{2^n}\right)a_n\ge0\iff a_n\ge0$$ and thus zero is a lower bound, so the sequence converges.
Where did I go wrong: $\int_{0}^{\infty}{\mathrm dx\over (1+x)^2}=0$? Consider $$\int_{0}^{\infty}{e^{-x}\over 1+x}\mathrm dx=-eE_i(-1)=0.596347...\tag1$$ $$\int_{0}^{\infty}\left({1\over 1+x}-e^{-x}\right)\cdot{\mathrm dx\over 1+x}=eE_i(-1)=-0.596347...\tag2$$ $$\int_{0}^{\infty}{\mathrm dx\over (1+x)^2}=1\tag3$$ $E_i(x)$: Exponential integral. Here is the problem I am so confused with: $(1)+(2)$ gives $$\int_{0}^{\infty}{e^{-x}\over 1+x}\mathrm dx+ \int_{0}^{\infty}\left({1\over 1+x}-e^{-x}\right)\cdot{\mathrm dx\over 1+x}=0\tag4$$ Simplifying $(4)$: $$\int_{0}^{\infty}{\mathrm dx\over (1+x)^2}=0\tag5$$ But $(5)$ is supposed to be $\color{red}1$. Where did I go wrong?
The value for integral (2) you wrote is not correct. On $[0,\infty)$, we must have $(1+x)^{-1} \ge e^{-x}$, because $$e^x = \sum_{k=0}^\infty \frac{x^k}{k!} \ge 1 + x.$$ Thus, the integrand is strictly nonnegative for all $x \ge 0$. In Mathematica (Version 10.4 on my system), the command Integrate[(1/(1 + x) - Exp[-x])/(1 + x), {x, 0, Infinity}] returns E ExpIntegralEi[-1] which then evaluated numerically gives a negative real number. However, NIntegrate[(1/(1 + x) - Exp[-x])/(1 + x), {x, 0, Infinity}] gives 0.403653 which is correct. I suspect the issue has to do with branch cut evaluation. It may have been fixed in subsequent versions. Note that the correct value can be evaluated symbolically with the command Integrate[1/(1 + x)^2, {x, 0, Infinity}] - Integrate[Exp[-x]/(1 + x), {x, 0, Infinity}]
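For an independent numerical cross-check outside Mathematica, here is a small SciPy sketch (purely a verification aid, not part of the original answer); it confirms the value quoted from NIntegrate and the closed form $1+e\,E_i(-1)$:

import numpy as np
from scipy.integrate import quad
from scipy.special import expi

val, err = quad(lambda x: (1/(1 + x) - np.exp(-x)) / (1 + x), 0, np.inf)
print(val)                    # about 0.403653, in agreement with NIntegrate
print(1 + np.e * expi(-1))    # the closed form 1 + e*Ei(-1), same value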
Epsilon-delta proof - minimum function Prove that $\displaystyle\lim_{x \to 2} \dfrac{x+2}{(x-2)^4} = \infty$ using epsilon-delta. The following proof is provided: Note that if $x>0$, then $x+2>1$, in which case $\dfrac{x+2}{(x-2)^4} > \dfrac{1}{(x-2)^4}$. If furthermore, $|x-2| < \delta$, then $\dfrac{x+2}{(x-2)^4} > \dfrac{1}{(x-2)^4} > \dfrac{1}{\delta ^4}$. This is larger than a given $N>0$, if $\delta \leq \dfrac{1}{\sqrt[4]{N}}$. Then it goes on to define delta as $\delta= \min(2,\dfrac{1}{\sqrt[4]{N}}). $ Now I don't understand why the $2$ is there in particular instead of some other number. Usually I understand the necessity of the minimum function, but in this case I cannot grok it for the life of me. Can anyone enlighten me, perhaps by using an example where it goes wrong if there is no $\min(2,..)$?
It is there to guarantee that $x>0$, which is used in the first line of the proof: if $\delta\le 2$ and $|x-2|<\delta$, then $0<x<4$, and in particular $x>0$, so the estimate $x+2>1$ applies. Any cap on $\delta$ that keeps $x$ positive would do; $2$ is just a convenient choice.
Example of a r.e. equivalence relation but not recursive I am looking for an example of an equivalence relation which is recursively enumerable but not recursive. I found the following statement: If R is an equivalence relation r.e. which is not recursive. Then for each $n$ there are infinitely many classes whose size is different than $n$. I will appreciate any clue to prove this statement or to construct such relation.
$x R y \iff x = y \vee \phi_x(x) = \phi_y(y)$ (where $\phi_i$ is an enumeration of the partial recursive functions). This works because the halting problem is undecidable, so it cannot be decided recursively whether $\phi_x(x)$ and $\phi_y(y)$ both halt and are equal.
Optimizing Projectile Arclength I was running through some old Putnam problems and came across one from the 1940 exam that asked the following: A stone is thrown from the ground with speed $v$ at an angle $θ$ to the horizontal. There is no friction and the ground is flat. Find the total distance it travels before hitting the ground. Show that the distance is greatest when $\sin θ \cdot\ln (\sec θ + \tan θ) = 1$. My work: We can describe the motion with the following vector: $$\vec r(t)=\langle v\cos\theta \cdot t,v\sin\theta \cdot t-gt^2/2\rangle$$ We know the vector hits the ground when the $y$ component is equal to $0$. Solving for $t_0$, we get that the ball reaches the ground again at $t_0=\frac{2v\sin\theta}{g}$. Let's set up an integral for the arc length $s$: $$s=\int_0^{t_0}\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}dt$$ Some quick differentiation and we can replace $\frac{dx}{dt}$ and $\frac{dy}{dt}$ with the following identities: $$s=\int_0^{t_0}\sqrt{(-v\sin\theta)^2+(v\cos\theta-gt)^2}dt$$ Since we are trying to optimize $\theta$, if we take the derivative of the arc length, set it equal to $0$, and find the maximum, then we can solve for optimal $\theta$. We'll also replace $t_0$ with its identity in terms of $\theta$. $$\frac{ds}{d\theta}=\frac{d}{d\theta}\int_0^{\frac{2v\sin\theta}{g}}\sqrt{v^2-2v\cos\theta\cdot gt+g^2t^2}dt$$ $$\frac{ds}{d\theta}=\frac{2v\cos\theta}{g}\sqrt{v^2-2v\cos\theta\cdot gt+g^2t^2}$$ Here I am unsure how to relate $t$ to $v$ and $\theta$. Do I use the identity for $t_0$, a kinematics equation, or simply treat $t$ as a constant? This, of course, assumes my work thus far has been valid. Next I would either show that the identity holds after finding $\theta$ or use some relation involving $\theta$ and show that it's an identity to the listed equation. Thanks for taking the time to read/respond.
WLOG, $v=1$ and $g=1$ (you can rescale time and space independently), the trajectory is $$x=t\cos\theta,\\y=t\sin\theta-\frac{t^2}2,$$ and the total travel time is $$2\sin\theta.$$ Then you want to maximize $$L=\int_0^{2\sin\theta}\sqrt{(t-\sin\theta)^2+\cos^2\theta}\,dt=\int_{-\sin\theta}^{\sin\theta}\sqrt{t^2+\cos^2\theta}\,dt\\ =\sin^2\theta\int_{-1}^{1}\sqrt{u^2+\cot^2\theta}\,du\\ =\frac12\sin^2\theta\left.\left(u\sqrt{u^2+\cot^2\theta}+\cot^2\theta\log(u+\sqrt{u^2+\cot^2\theta})\right)\right|_{u=-1}^1\\ =\frac12\sin^2\theta\left(2\csc\theta+\cot^2\theta\log\frac{\csc\theta+1}{\csc\theta-1}\right).$$ The claim should follow by differentiation and simplification.
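A quick numerical cross-check of the closed form and of the stated optimality condition (SciPy is used here only as a convenience; the search bounds are an arbitrary choice to stay away from $\theta=0$ and $\theta=\pi/2$):

import numpy as np
from scipy.optimize import minimize_scalar

def arc_length(theta):
    # closed form derived above, with v = g = 1
    s, c = np.sin(theta), np.cos(theta)
    csc, cot2 = 1.0 / s, (c / s) ** 2
    return 0.5 * s**2 * (2 * csc + cot2 * np.log((csc + 1) / (csc - 1)))

res = minimize_scalar(lambda t: -arc_length(t), bounds=(0.1, 1.5), method='bounded')
theta_opt = res.x
print(theta_opt)                                      # about 0.9855 rad (about 56.5 degrees)
print(np.sin(theta_opt) * np.log(1 / np.cos(theta_opt) + np.tan(theta_opt)))   # close to 1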
What is the sum of the solutions to $6x^3+7x^2-x-2=0$ What is the easy way to solve the problem? The sum of the solutions to $6x^3+7x^2-x-2=0$ is: $$A) \ \frac{1}{6}$$ $$B) \ \frac{1}{3}$$ $$C) \ \frac{-7}{6}$$ $$D) -2$$ $$E) \text{ none of above}$$
The constant term is $-2$, so the integer candidates for rational roots are $\pm1,\pm2$. Putting $x=-1$ into the equation shows that $-1$ is a solution, so $x+1$ is a factor. Dividing $6x^3+7x^2-x-2$ by $x+1$ gives the quotient $6x^2+x-2$. Solving this quadratic gives $\frac12$ and $-\frac23$ as the other two solutions. So the sum is $-1+\frac12-\frac23=-\frac76$, which is option $C$. (Equivalently, Vieta's formulas give the sum of the roots directly as $-\frac{b}{a}=-\frac76$.)
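If you want to double-check the arithmetic numerically, NumPy's polynomial root finder (used here purely as a verification aid) does it in a couple of lines:

import numpy as np

roots = np.roots([6, 7, -1, -2])   # coefficients of 6x^3 + 7x^2 - x - 2
print(roots)                       # the roots -1, 1/2, -2/3 (in some order)
print(roots.sum())                 # approximately -7/6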
Multiplication of two imaginary numbers. While reading about complex numbers I found the property $\sqrt{-a} = i \sqrt{a}$. Further it was stated that $\sqrt{-a}×\sqrt{-b}=i \sqrt{a}×i \sqrt{b}= -\sqrt{ab}$. Suppose I take two numbers $\sqrt{-3}$ and $\sqrt{-2}$ and multiply them; then according to the above statement $\sqrt{-3}×\sqrt{-2}= -\sqrt{3×2}= -\sqrt{6}= -2.449$. This gives a negative real number. But when I multiplied them on a calculator it gave me MATH ERROR. Why? Also it was given that writing $\sqrt{-a}×\sqrt{-b} = \sqrt{(-a)×(-b)}$ is wrong, as it further becomes $\sqrt{ab}$, but the answer is $-\sqrt{ab}$. This also means I cannot separate $\sqrt{(-a)×(-b)}$ into $\sqrt{-a}×\sqrt{-b}$. But for $\sqrt{-a} = i \sqrt{a}$, we have separated $\sqrt{(-1)×a}$ into $\sqrt{-1}×\sqrt{a}$, which we further write as $i \sqrt{a}$. Why?
Your answer is right; the calculator you used simply does not support complex arithmetic. See this calculation in WolframAlpha.
Planar graph with triangular faces in which the link of every vertex bounds a face What possible connected planar graphs are there that satisfy the following properties? * *Every face (including unbounded face) is triangular, i.e. is bounded by exactly 3 edges. *For every vertex, the set of faces not incident to that vertex is either singleton or empty. So far I can only think of two genuinely distinct such graphs, namely $K_3$ and $K_4$. I'm hoping that these are the only possibilities.
Let $V$, $E$ and $F$ denote the number of vertices, edges and faces of such a graph. The first condition states that $E=\frac{3F}{2}$, hence $2$ divides $F$. The second condition states that the degree of a vertex is either $F$ or $F-1$. Now use Euler's formula to get $V=2+E-F=2 + \frac{3F}{2}-F=2+\frac{F}{2}$. Now since the sum of the degrees of the vertices is exactly $2E$ we have the inequalities: $$V(F-1) \leq 2E \leq VF$$ Substituting $E=\frac{3F}{2}$ and $V=2+\frac{F}{2}$ we get $$(2+\frac{F}{2})(F-1) \leq 3F \leq (2+\frac{F}{2})F$$ hence $$(4+F)(F-1) \leq 6F \leq (4+F)F$$ This means that $F^2-3F-4 \leq 0$ and $0 \leq F^2-2F$. This means that $2 \leq F \leq 4$. Then you can easily check graphs with $2$ or $4$ faces ($3$ faces is not possible, since $2 \nmid 3$). So the only graphs are indeed $K_3$ and $K_4$.
Sole functions on three dimensional projective space? Let the base field $k$ be algebraically closed. As stated in my question title, are the sole (algebraic) functions on the algebraic variety of $3$-dimensional projective space (over $k$) the constant functions?
If you by "algebraic functions" mean morphisms $\mathbb P^3 \to k$ that are regular everywhere (=no zeroes), then the answer is "yes". This is true for example because $H^0(\mathbb P^3, \mathscr O)=k$: the only global sections of the sheaf of regular functions are the constants. See for example Hartshorne, Theorem 5.1 in Chapter III. If you by "algebraic functions" include rational functions, that is, functions that are locally the quotient of regular functions, then "no", $\mathbb P^3$ have many of these.
Why is a matrix invertible when its row-echelon form has no zero row? If the row echelon form of a square matrix has no zero row, it is invertible. Otherwise, it is singular. Why? If the row echelon form has a zero row, in a linear system, it has either no solution or infinitely many solutions. So, is invertibility linked to having only one solution? Is there a geometrical interpretation for my question?
Excellent. Your request for a geometric interpretation shows me that you are on the right track in learning linear algebra! (Well, at least for visualizing the standard 1-3 dimensions.) Consider the Reduced Row Echelon Form (RREF) of a matrix A; it concisely describes some of the subspace information associated with A. The RREF tells us: * *rank : number of basis vectors in the column space/range *nullity : number of basis vectors in the null space/kernel *Invertibility/linear independence : whether the null space is trivial or not The null space being trivial (i.e., consisting only of the zero vector) implies that the column space of A occupies its full possible dimension (equal to the column count of A) and that no nontrivial linear combination of the columns reduces to the zero vector. The process of matrix inversion is supposed to find a map which, composed with A, gives the appropriate identity matrix. If there is any linear combination of the columns of A that reduces to 0, then it cannot be reversed to recover the original combination, which means that the corresponding vector is nullified (mapped to the zero vector). This is exactly what the rank of a matrix succinctly describes, with mathematical beauty. So such linear combinations, for non-invertible matrices, are consumed by the null space/kernel! The exact same can be witnessed and verified on the column space and null space of the non-invertible matrix A' (which are incidentally, NOT coincidentally, the row space and left-null space of A).
Minimize $P=5\left(x^2+y^2\right)+2z^2$ For $\left(x+y\right)\left(x+z\right)\left(y+z\right)=144$, minimize $$P=5\left(x^2+y^2\right)+2z^2$$ I have no idea. Can you make a few suggestions?
Let $x=y=2$ and $z=4$. Hence, $P=72$. We'll prove that it's a minimal value. Indeed, we need to prove that $$5(x^2+y^2)+2z^2\geq72\left(\sqrt[3]{\frac{(x+y)(x+z)(y+z)}{144}}\right)^2.$$ It's enough to prove last inequality for non-negative variables. Let $x+y=tz$. Since $x^2+y^2\geq\frac{1}{2}(x+y)^2$ and $(x+z)(y+z)\leq\frac{1}{2}(x+y+2z)^2$, it remains to prove that $$\frac{5}{2}t^2+2\geq72\left(\sqrt[3]{\frac{\frac{t(t+2)^2}{4}}{144}}\right)^2$$ or $$(5t^2+4)^3\geq9t^2(t+2)^4,$$ which is C-S and AM-GM: $$(5t^2+4)^3=\left(\frac{(5+4)(5t^2+4)}{9}\right)^3\geq\left(\frac{(5t+4)^2}{9}\right)^3=$$ $$=\left(\frac{(3t+2(t+2))^2}{9}\right)^3\geq\left(\frac{\left(3\sqrt[3]{3t\cdot(t+2)^2}\right)^2}{9}\right)^3=9t^2(t+2)^4.$$ Done!
What space corresponds to the localisation of the ring of continuous functions? Suppose $A$ is a commutative Banach algebra. By Gelfand duality there is a compactum $X$ such that $A = C(X)$ is the ring of continuous functions. The space $X$ can be recovered as the space of characters on $A$. That is to say multiplicative linear functionals $A \to \mathbb R$ under the weak$^*$ topology. Observe this topology on the space of characters does not depend on the topology of $A$. Now let $f \in C(X)$ be any non-invertible element. In other words $f$ has a zero. We can localise the ring $C(X)$ at $f$ to get the ring of 'formal fractions' $C(X)_f = \displaystyle \{\frac{g}{f^n} \colon g \in C(X), n \in \mathbb N\}$. There is a natural embedding $C(X) \to C(X)_f$ but I am unaware if $C(X)_f$ carries a compatible Banach algebra structure. By this I mean a norm under which it is complete and the embedding is an isometry. Nevertheless we can consider the space of characters on $C(X)_f$ and give that the weak$^*$ topology. * *Under what conditions is the character space of $C(X)_f$ some compactum $Y$? *When will we have $C_f(X) = C(Y)$? *Does $Y$ have a topological characterisation in terms of the space $X$ and function $f$?
There are some problems here. It is not true that commutative Banach algebras are isomorphic to $C(X)$ for some $X$. In the case where $A$ is semi-simple, we have an embedding with dense range. There are examples of infinite-dimensional commutative Banach algebras with a unique character! Localisations do not have Banach-algebra structure because they are fields. By the Mazur-Gelfand theorem, the field of complex numbers is the only commutative Banach algebra that is also a field.
Set defined by infimum How can we prove: $$|\alpha|\cdot||x|| = ||\alpha x|| $$ with the norm defined as $$ ||x|| := \inf\left\{ \lambda > 0 \mid x/\lambda\in B \right\} $$ where $B$ is convex, open, symmetric and bounded, and $ 0\in B$? I thought about proving this by contradiction, but I have never encountered a situation where I wanted to pull a constant out of the definition of a set.
Sketch of one direction: Observe that $\|\alpha x\|=\inf\{\lambda>0:\alpha x/\lambda\in B\}$. Since this is an infimum, by the definition of an infimum, for all $\varepsilon>0$, there exists a $\lambda_\varepsilon$ such that * *$\frac{\alpha x}{\lambda_\varepsilon}\in B$ *$\|\alpha x\|\leq \lambda_\varepsilon<\|\alpha x\|+\varepsilon$. Consider $\lambda'_\varepsilon:=\frac{\lambda_\varepsilon}{|\alpha|}$. This lambda satisfies $\frac{x}{\lambda'_\varepsilon}\in B$ since $\frac{x}{\lambda'_\varepsilon}=\frac{|\alpha| x}{\lambda_\varepsilon}$, which we know is in $B$ since $\frac{\alpha x}{\lambda_\varepsilon}\in B$ and $B$ is symmetric about the origin. Therefore, we know that $\lambda'_\varepsilon$ is one of the elements of $\{\lambda>0:x/\lambda\in B\}$. Hence, for the infimum, $\|x\|\leq\lambda'_\varepsilon$. From the inequalities above, we know that $$ \frac{1}{|\alpha|}\|\alpha x\|\leq \lambda'_\varepsilon<\frac{1}{|\alpha|}\|\alpha x\|+\frac{\varepsilon}{|\alpha|}. $$ Combining inequalities, we know that $$ \|x\|\leq\lambda'_\varepsilon<\frac{1}{|\alpha|}\|\alpha x\|+\frac{\varepsilon}{|\alpha|}. $$ In other words, $$ \|x\|<\frac{1}{|\alpha|}\|\alpha x\|+\frac{\varepsilon}{|\alpha|}. $$ Since $\varepsilon$ was arbitrary, we can let it be as small as possible, and, in the limit, we get $$ \|x\|\leq \frac{1}{|\alpha|}\|\alpha x\|. $$ This gives the proof of one side, for the other direction, mimic this proof, but start with $\|x\|$. You can, alternatively, replace $\alpha x$ by $x$ and $\alpha$ by $\frac{1}{\alpha}$ to use this proof as a lemma.
Reflections within an Ellipse I am looking for resources on the reflection properties of the circle, sphere, ellipse, and parabola. However, when looking for articles or entries, the same 3 examples keep coming up (focus to focus in an ellipse, parallel rays converging to the focus in a parabola, etc.). I am looking for more general studies on reflections within these shapes, in particular, orbits and bounds of the "contact points" around the ellipse, and the path of the rays within it. For example (conjecture): If a ray is cast between the two foci (the ray intersects the line between them), the ray (and subsequent reflections) are bounded by the hyperbola defined by the same two foci that is also tangent to the first ray. Likewise, if the ray is cast between either focus and the edge of the ellipse, the ray is bounded by a smaller ellipse with the same foci. This holds in the case of a circle, where you end up with a smaller circle, or with the right set-up, regular/star polygons. Similar behaviors occur with two intersecting ellipses. Example 2: Center a circle within an ellipse, where the circle's radius is less than the minor axis of the ellipse. There are points where the reflections are bounded on both the circle and the ellipse, and others where the reflections become chaotic and change rapidly with minor changes in the initial ray (sometimes settling into some attractor at the ends of the ellipse). I feel like there is a body of work out there on this, but I am not a mathematics researcher (undergraduate math degree only), and have no idea where to even start looking for work on the subject. Does anyone know of a name/paper/field/journal that has anything like this?
There is indeed a lot out there. Google "elliptical billiards". Here's one link: mathworld.wolfram.com/Billiards.html. Your hyperbola conjecture is there.
Resistor bank optimizer I have a real-world problem where the math is beyond me. I'm trying to set up an automated resistor bank much like a decade box. I have $18$ channels and can switch in up to four of them in parallel at a time. Each channel is a fixed resistor value. The system will take a target resistance and calculate which channels to turn on (up to four of the $18$) to get the closest resistance to the target. That part is easy. The part I need help with is picking the fixed resistor values for the $18$ channels. I want to minimize the error between the target resistance and the resistance value of the four channels switched in. Here's what I have: $R_t$ = target resistance $R_1$ = $1$ of $18$ fixed values $R_2$ = $1$ of $17$ fixed values (one fixed value used for $R_1$) $R_3$ = $1$ of $16$ fixed values (two fixed values used for $R_1$ & $R_2$) $R_4$ = $1$ of $15$ fixed values (three fixed values used for $R_1$, $R_2$ & $R_3$) Let's just take the case where we always switch in four channels, so the error would be: $$\left|\frac{1}{R_t} - \left(\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\frac{1}{R_4}\right)\right|$$ Let's put some bounds on it. The target resistance can be from $25$ to $300$, and any of the $18$ fixed resistor channels can be from $25$ to $10,000$. I was thinking the integral from $25$ to $300$ equals zero, but I don't want the sum of the errors to be zero; I want the largest error (for $25\to300$) to be the smallest it can be. Plus I don't know how to deal with the problem that $R_1\to R_4$ can change to any one of the $18$ fixed values at any time. I don't know how to work the fixed values into the equation and solve for them.
Fortunately, you measure the error by the reciprocal resistance, not the resistance itself. This allows us to simply work with reciprocals and their sums. Thus what we are looking for is $N=18$ numbers $a_1,\ldots,a_{18}$ such that the numbers that can be achieved as sums of up to four of these numbers are as uniformly dense as possible in a given range. Without the four-summand limit, we'd simply let $a_k=2^{k-1}u$ for suitable $u$; then we can represent any multiple of $u$ from $0$ up to $(2^N-1)u$. We'll drop the $u$ for now and attempt to produce as many consecutive integers as possible. For the given constraints, let's try what we can get by letting $a_k=A+(k-1)$, for example: With one summand we can achieve $A,A+1,A+2,\ldots,A+n-1$. With two summands we can certainly achieve $2A+1,\ldots,2A+2n-3$. With three summands we get $3A+3,\ldots,3A+3n-6$. And with four summands $4A+6,\ldots,4A+4n-10$. So to cover a contiguous range of integers, we need $A$ to be an integer with $$ 2A+1\le A+n,\quad 3A+3\le 2A+2n-2,\quad 4A+6\le 3A+3n-5$$ or $A\le \min\{n-1, 2n-5, 3n-11\}$. So for $n=18$, we may choose $A=17$, and thus cover all integers from $17$ to $130$. Unfortunately, this method is very wasteful (because many of the ${18\choose 4}+{18\choose 3}+{18\choose 2}+{18\choose 1}=4047$ combinations produce the same total $R$), and with $\frac{130}{17}\approx 7.6 \ll 12=\frac{300\,\Omega}{25\,\Omega}$, we see that we managed to cover only about two-thirds of the range we want. One definitely needs better picks for the $a_k$ in order to be able to make use of more of the possible combinations. One has to get somewhat closer to the powers-of-two concept ... I suppose much of this process involves a good deal of trial and error.
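Since there are only 4047 usable combinations, any proposed set of 18 channel values can also be evaluated by brute force in a fraction of a second. The sketch below (Python) is just such an evaluator; the particular channel values in it are placeholders for experimentation, not a recommendation:

from itertools import combinations

channels = [25, 27, 30, 33, 36, 40, 45, 51, 58, 66,
            76, 88, 103, 123, 150, 190, 255, 380]       # 18 placeholder values (ohms)

# precompute the resistance of every parallel combination of 1..4 distinct channels
combos = []
for k in range(1, 5):
    for subset in combinations(channels, k):
        g = sum(1.0 / r for r in subset)                 # total conductance
        combos.append((1.0 / g, subset))

def best_match(target):
    # pick the combination whose conductance is closest to the target's
    return min(combos, key=lambda c: abs(1.0 / c[0] - 1.0 / target))

# worst-case conductance error over integer targets in the 25..300 ohm range
worst = max(abs(1.0 / best_match(t)[0] - 1.0 / t) for t in range(25, 301))
print(worst)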
What is the probability of the $x$th card not being destroyed? Initially I have $N$ cards with values $1,2,\ldots,N$. Each round I destroy cards based on the number of cards left: if the number left is odd, I keep the smallest-value card aside and destroy half of the remaining cards; if it is even, I destroy half of the cards. What is the probability that the $X$th card is not destroyed until the last round? Note: in each round, the cards to destroy are chosen uniformly at random, so each eligible card is equally likely to be destroyed.
It very much depends on $N$ and not in a simple way. So, let's take thirteen cards as an example. In four steps: * *You keep 1 and destroy six, leaving seven cards. *You keep 1 and destroy three, leaving four cards. *You destroy two cards, leaving two. *You destroy one card. The probability of retaining 1 is $1/4$, and that of retaining any other particular card is $1/16$. Take fourteen cards. * *You destroy seven, leaving seven cards. * *The lowest card left is $k\in\{1,2,3,4,5,6,7,8\}$ with probability $\binom{14-k}{8-k}/\binom{14}{7}$ *You keep the lowest (which?) and destroy three cards, leaving four cards. *You destroy two cards, leaving two cards. *You destroy one card. So the probability of a card being retained is ...?
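If it helps, the process is easy to simulate, which is a good way to check the hand computations above or to explore other $N$. The sketch below encodes the rules as I read them (the cards to destroy are chosen uniformly at random, and when the count is odd the current smallest card is exempted):

import random

def survives(N, target, trials=100_000):
    hits = 0
    for _ in range(trials):
        cards = list(range(1, N + 1))
        while len(cards) > 1:
            n = len(cards)
            if n % 2 == 1:
                smallest = min(cards)
                pool = [c for c in cards if c != smallest]
                doomed = set(random.sample(pool, (n - 1) // 2))   # destroy half of the rest
            else:
                doomed = set(random.sample(cards, n // 2))        # destroy half of all cards
            cards = [c for c in cards if c not in doomed]
        if cards[0] == target:
            hits += 1
    return hits / trials

print(survives(13, 1))   # should be close to 1/4
print(survives(13, 5))   # should be close to 1/16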
A self-inverse function $f(x)=\frac{ax+1}{x-b}$ If $$f(x)=\frac{ax+1}{x-b},\quad \forall x \in\mathbb{R}\setminus\{b\},\ ab\neq1,\ a\neq1$$ is a self-inverse function such that $$\frac{f(4)}{4}=\frac{f(12)}{12}={f\left(\frac{1+b}{1-a}\right)}$$ the question is to find $a$ and $b$. For a self-inverse function $f(f(x))=x$, so I tried to put $f(x)$ in place of $x$ and solve the resulting equation, but it did not help me. Is there a more logical way to solve this problem in limited time? Any ideas? Thanks.
Is there a more logical way to solve this problem in limited time? Hint: if you take the premise to mean that there do in fact exist $a,b$ such that $f=f^{-1}$, then, for a shortcut, you can simply plug in $x=0$ and determine that $f\big(f(0)\big)=f(\frac{-1}{b})=0 \implies a=b\,$. Then use the second condition to determine $a\,$ or, as is the case, prove that no solutions exist.
The deep reason why $\int \frac{1}{x}\operatorname{d}x$ is a transcendental function ($\log$) In general, the indefinite integral of $x^n$ has power $n+1$. This is the standard power rule. Why does it "break" for $n=-1$? In other words, the derivative rule $$\frac{d}{dx} x^{n} = nx^{n-1}$$ fails to hold for $n=0$. Is there some deep reason for this discontinuity?
$\int x^n\,dx = \frac {x^{n+1}}{n+1} +c$ is clearly undefined if $n = -1$, so the rule breaks down there. But since $x^{-1}$ is bounded and continuous on any interval $[a,b]$ with $a>0$, the integral should still exist. Let's tackle this from the other direction. What is $\frac {d}{dx} \log x$? First, $e = \lim\limits_{n\to\infty}(1+\frac 1n)^n$. From the definition of the derivative, $$\frac {d}{dx} \log x = \lim_{h\to 0} \frac {\log (x+h) - \log x}{h} = \lim_{h\to 0} \frac {\log \left(\frac {x+h}{x}\right)}{h} = \lim_{h\to 0}\frac {\log \left(1+\frac hx\right)}h = \lim_{h\to 0} \frac 1x \cdot \frac xh \log \left(1+\frac hx\right) = \frac 1x\lim_{h\to 0} \log \left(1+\frac hx\right)^{\frac xh} = \frac 1x \log \left(\lim_{h\to 0} \left(1+\frac hx\right)^{\frac xh}\right).$$ We can set $n = \frac xh$, and as $h$ goes to $0$, $n$ goes to $\infty$, so $$\frac {d}{dx} \log x = \frac 1x\log e, \qquad \frac {d}{dx} \ln x = \frac 1x.$$
Equality of polynomials We know that $a_nx^n + a_{n-1}x^{n-1} + \dots + a_0 = b_nx^n + b_{n-1}x^{n-1} + \dots + b_0 $. How can we prove that $a_n = b_n , a_{n-1} = b_{n-1} , \dots ,a_0 = b_0$? Also, if on the right side we put $z$ instead of $x$, is this statement still true: $a_n = b_n , a_{n-1} = b_{n-1} , \dots ,a_0 = b_0$?
That depends. * *If you consider them as formal polynomials, then the equality of coefficients is by definition. *If you consider them as functions $\mathbb R\to\mathbb R$, you can insert $n+1$ different values for $x$ to obtain a set of $n+1$ independent linear equations for the $n+1$ variables $b_k$. Such a system has only a single solution, and it is obvious that $b_k=a_k$ is a solution. *If you consider them as functions acting on some other field, then the claim may be false. For example in $\mathbb Z/2\mathbb Z$, $x=x^2=x^3=x^4=\ldots$, therefore as functions in that field, all you can say is that $a_0 = b_0$ (obtained by inserting $0$) and $\sum_{k=0}^n a_k = \sum_{k=0}^n b_k$ (obtained by inserting $1$). *If you consider $x$ not as variable taking arbitrary values, but as a single specific value, then the claim is definitely false. Rather, you've got just a single equation that the coefficients have to fulfill.
Prove that $\int \limits_{0}^{\infty} x^p e^{-g(x)/x} dx \leq e^{p+1} \int \limits_{0}^{\infty} x^p e^{-g'(x)}dx$. For a convex function $g$ with $g(0)=0$ and for any $-1<p<\infty$, prove that $$\int \limits_{0}^{\infty} x^p e^{-g(x)/x} dx \leq e^{p+1} \int \limits_{0}^{\infty} x^p e^{-g'(x)}dx.\,\,\,\,\,\,(♣)$$ It is a generalization of the not-so-famous Carleman's Integral Inequality, which states that: $$\int \limits_0^\infty \text{exp}\left\{\frac{1}{x}\int \limits_{0}^{x }\ln(f(t))\,dt\right\}dx \leq e \int \limits_0 ^{\infty}f(x)\,dx.\,\,\,\,\,(♥)$$ $(♥)$ is a special case of $(♣)$ with $p=0$. $(♥)$ is "OK-ayish", but $(♣)$ is very tricky to prove, and I fail at it horribly. I cannot find any suitable reference or method that can help me prove this monster. Any help will be appreciated. Thanks in advance! :-)
A proof can be found here, at section 2, but I'll go over how it works here. First, the convexity condition on $g$ is used, in particular fact 5 here to show that $$g(k x) \geq g(x) + (k-1) x g'(x)$$ for any $k>1$. Then, consider the integral $$J = \int_0^A x^p \exp\left(-\frac{g(kx)}{kx}\right) dx$$ for $A>0$. By a substitution, you can show $$\begin{align}J &= k^{-p-1}\int_0^{Ak} x^p \exp\left(-\frac{g(x)}{x}\right) dx \\ &\geq k^{-p-1}\int_0^{A} x^p \exp\left(-\frac{g(x)}{x}\right) dx\end{align}$$ On the other hand, use the convexity inequality on $g$ to show $$J \leq \int_0^A x^p \exp\left(-\frac{g(x)}{kx} - \frac{(k-1) g'(x)}{k}\right) dx$$ from which you can use Holder's inequality here in integral form to get $$J \leq \left(\int_0^A x^p \exp\left(-\frac{g(x)}{x}\right) dx \right)^{1/k} \left(\int_0^A x^p \exp\left(-g'(x)\right) dx\right)^{(k-1)/k}$$ which should start to look familiar. Putting our bounds on $J$ together, we get $$k^{-p-1}\left(\int_0^A x^p \exp\left(-\frac{g(x)}{x}\right) dx \right)^{(k-1)/k} \leq \left(\int_0^A x^p \exp\left(-g'(x)\right) dx\right)^{(k-1)/k}$$ So take the limit as $A \to \infty$ and rearrange to get $$\int_0^\infty x^p \exp\left(-\frac{g(x)}{x}\right) dx \leq \left(k^{\frac k{k-1}}\right)^{p+1} \int_0^\infty x^p \exp\left(-g'(x)\right) dx$$ and taking the $k\to 1$ limit finishes off the answer!
How is "point" in geometry undefined? And What is a "mathematical definition"? How is "point" in geometry undefined? I mean, when we say "A point in geometry is a location. It has no size, i.e., no width, no length, and no depth," is it not a definition? If it is not a definition, then how can we know whether some statement is definition or not? What are the characteristics of a definition in math?
I think your question is more about axiomatic systems in general. Maybe this analogy will help: Consider for example the axioms that govern set theory (called "ZFC"). The term "set" there is also undefined - even though we have some intuition about it. From there we then go on to state various properties that sets have to obey. More generally, when defining an axiomatic system (regardless if it's Euclidean geometry or ZFC set theory), you have "primitive notions" (points or lines resp. sets) and then you state property that relate the various primitive notions to each other. The main point though is that while we use our intuition to help us find proofs and derive properties, on a formal level these are just manipulations of symbols that are not bound to our intuition. That allows us, if would like to do so, to replace the names of all primitive notions with other names. Hilbert is famous for making such a remark, where he illustrates this idea taken to the extreme: "One must be able to say at all times--instead of points, straight lines, and planes--tables, chairs, and beer mugs" (source: Provenance of Hilbert quote on table, chair, beer mug , where you can find also a bit of history).
Finding out the permissible value of $k$ If $\log_ka=a$ and $\log_kb=b$ for exactly two distinct positive real numbers $a$ and $b$, then $k$ can't be equal to $A)e^{\frac1{2e}}$ $B)e^{\frac2e}$ $C)e^{\frac12}$ $D)e^{\frac13}$ Let $f(x)=\log_ex\log_ke-x$. We are looking for only two solutions $a$ and $b$ of this equation. For this to happen $f'(x)$ must vanish at one point between $a$ and $b$, and $f''(x)$ at this corresponding point should not be zero, otherwise it would indicate a point of inflexion. I tried differentiating but could not see how to find the plausible range of values of $k$. Any ideas? Thanks.
We may rewrite as follows: $$f(x)=k^x-x$$ And we want to show there is only one or no root for some $k$ by finding the extrema i.e. $f'(x_0)=0$. $$f'(x)=\ln(k)k^x-1=0\implies x_0=\frac{\ln\left(\frac1{\ln(k)}\right)}{\ln(k)}$$ And show that for one of those given values of $k$ that $$f(x_0)>0$$ $$f''(x_0)>0$$
Set of all rational points in unit ball Consider the set $A\subset \mathbb{R}^2$ of all rational points in the unit ball, that is $$A = \{(x,y)\in \mathbb{Q}^2: x^2+y^2<1\}.$$ It's easily seen that $A$ is not path-connected (since every continuous curve must pass through a point with at least one irrational coordinate, by the intermediate value theorem), so $A$ is not connected as well. Now consider the definition of connected subset: A subset S of a topological space $X$ is connected if and only if there are no open sets $U$ and $V$ in $X$ such that $$S\subset U\cup V, S\cap U \neq \varnothing, S\cap V\neq \varnothing \text{ and } S\cap U\cap V = \varnothing.$$ Since $A$ is disconnected, there must be some open sets $U$ and $V$ in $\mathbb{R}^2$ such that $$A\subset U\cup V, A\cap U \neq \varnothing, A\cap V \neq \varnothing \text{ and } A\cap U \cap V = \varnothing.$$ Can we write specifically what $U$ and $V$ are? Thank you very much.
There are many examples of such a $U$ and $V$. One working example is $$ U = \{(x,y) \in \Bbb R^2 : x^2 + y^2 < \frac 1{\sqrt{2}} \}\\ V = \{(x,y) \in \Bbb R^2 : \frac 1{\sqrt{2}} < x^2 + y^2 < 1\} $$ The only important thing about $1/\sqrt{2}$ here is that it's irrational (so no rational point satisfies $x^2+y^2=1/\sqrt{2}$) and strictly between $0$ and $1$.
Prove the dual space of $l^p$ is isomorphic to $l^q$ if $\frac{1}{q}+\frac{1}{p}=1$ Prove the dual space of $\ell^p$ is isomorphic to $\ell^q$ if $\frac{1}{q}+\frac{1}{p}=1$ ($1<p<\infty$) Define a map $J:\ell^q \to (\ell^p)'$ such that $Jy(x)=\sum_{k=1}^\infty x_ky_k,x\in \ell^p,y\in \ell^q$ I have verified that $Jy\in (\ell^p)'$, $J$ is linear and $\lVert Jy \rVert\leq \lVert y \rVert_q$. How to show $\lVert Jy \rVert \geq \lVert y \rVert_q$ and $J$ is surjective?
Surjectivity of the map $J$. Let $\varphi\in (\ell^p)'$, with $\|\varphi\|_*=1$, and set $y_n=\varphi(e_n)$, where $e_n=(0,\ldots,0,1,0,\ldots)$, with exactly one $1$ in the $n$th place. We shall show that $\{y_n\}\in \ell^q$, $\|\{y_n\}\|_q=1$ and $\varphi(\{x_n\})=\sum x_ny_n$. I. $\{y_n\}\in \ell^q$. For every $x_1,\ldots,x_n$, with $|x_1|^p+\cdots+|x_n|^p\le 1$, we have $\|(x_1,\ldots,x_n,0,0,\ldots)\|_p\le 1$ and hence $$ 1\ge \varphi(x_1e_1+\cdots+x_ne_n)=x_1y_1+\cdots+x_ny_n. $$ But $$ \sup_{|x_1|^p+\cdots+|x_n|^p\le 1}x_1y_1+\cdots+x_ny_n=\big(|y_1|^q+\cdots+|y_n|^q\big)^{1/q}, \tag{1} $$ and hence $$ |y_1|^q+\cdots+|y_n|^q\le 1, $$ and since this holds for every $n$, then $\|\{y_n\}\|_q\le 1$ and $\{y_n\}\in \ell^q$. II. $\varphi(\{x_n\})=\sum x_ny_n$. Let $x=\{x_n\}\in\ell^p$ and set $x^n=(x_1,x_2,\ldots,x_n,0,0,\ldots)$. Then $\|x^n-x\|\to 0$ and hence $\varphi(x^n-x)\to0$. But $$ \varphi(x^n)=\sum_{k=1}^n x_ky_k, $$ and as the series $\sum_{n=1}^\infty x_ny_n$, converges then $$ \varphi(x)=\lim_{n\to\infty}\varphi(x^n)=\sum_{n=1}^\infty x_ny_n=(x,y). $$ The fact that $\|y\|_q=1$ can be readily shown. Proof of (1). Clearly, (Hölder) $$ |x_1y_1+\cdots+x_ny_n|\le \left(|x_1|^p+\cdots+|x_n|^p\right)^{1/p} \left(|y_1|^q+\cdots+|y_n|^q\right)^{1/q}\le \left(|y_1|^q+\cdots+|y_n|^q\right)^{1/q} $$ It is easily shown that equality is obtained for $$ x_i=\frac{|y_i|^{q/p}\mathrm{sgn}(y_i)}{\left(|y_1|^q+\cdots+|y_n|^q\right)^{1-1/q}}, \quad i=1,\ldots,n. $$
Which of the following numbers is greater? Which of the following numbers is greater? Without using a calculator and logarithm. $$7^{55} ,5^{72}$$ My try $$A=\frac{7^{55} }{5^{55}×5^{17}}=\frac{ 7^{55}}{5^{55}}×\frac{1}{5^{17}}= \left(\frac{7}{5}\right)^{55} \left(\frac{1}{5}\right)^{17}$$ What now?
We have $7^4 = 49^2 < 50^2 = 4 \times 5^4 < 5^5$. Hence $$7^{55} < 7^{56} = (7^4)^{14} < (5^5)^{14} = 5^{70} < 5^{72}.$$
Markov Chains: Example to show past and future are not independent given any information about past. The Markov property does not imply that the past and the future are independent given any information concerning the present. Find an example of a homogeneous Markov chain $\left\{X_n\right\}_{n\ge0}$ with state space $E=\left\{1,2,3,4,5,6\right\}$ such that $P(X_2=6\mid X_1\in\{3,4\},X_0=2)\neq P(X_2=6\mid X_1\in\{3,4\})$. I don't understand the phrasing of this question. How should I approach this? Could this work? $P(X_2=6\mid X_1\in\{3,4\}) = 1$ But $P(X_2=6\mid X_1\in\{3,4\},X_0=2) =0$
Suppose that $X_0$ is equally likely to be in states $1$ and $2$, each with probability $\frac{1}{2}$. Suppose that from state $3$, there is a $100\%$ chance of going to state $6$ in the next step. Suppose that from state $4$, there is a $100\%$ chance of going to state $5$ in the next step. Suppose that from state $1$, there is a $100\%$ chance of going to state $3$. Suppose that from state $2$, there is a $100\%$ chance of going to state $4$. Now, the LHS of your equation will evaluate to $0$, because given that you were in state $2$ at $X_0$, you must go to state $4$ in $X_1$ and therefore state $5$ in $X_2$, which means there's a $0\%$ chance of being in state $6$ at $X_2$. The RHS of your equation will evaluate to $\frac{1}{2}$, because we are not conditioning on any value of $X_0$, so we use its initial probabilities: $0.5$ chance of $X_0=1$ and $0.5$ chance of $X_0=2$. This means that there's a $0.5$ chance of $X_2=6$ and a $0.5$ chance of $X_2=5$. It will probably help you to draw the directed graph representing the Markov chain's transition probabilities. For good practice, also try to write out the transition matrix (it will be a $6\times 6$ matrix).
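If it helps, here is a small numerical sketch of exactly that chain (the transition matrix encodes the rules above; making states 5 and 6 absorbing is an arbitrary choice that the argument never uses). It confirms the two conditional probabilities by enumerating the two-step paths:

import numpy as np

P = np.zeros((6, 6))   # rows/columns are states 1..6 (0-based indices)
P[0, 2] = 1.0          # 1 -> 3
P[1, 3] = 1.0          # 2 -> 4
P[2, 5] = 1.0          # 3 -> 6
P[3, 4] = 1.0          # 4 -> 5
P[4, 4] = 1.0          # 5 absorbing (arbitrary)
P[5, 5] = 1.0          # 6 absorbing (arbitrary)

init = np.array([0.5, 0.5, 0, 0, 0, 0])   # X0 uniform on {1, 2}

# joint probabilities over (X0, X1, X2)
joint = init[:, None, None] * P[:, :, None] * P[None, :, :]

mid = [2, 3]                              # states 3 and 4
lhs = joint[1, mid, 5].sum() / joint[1, mid, :].sum()   # condition on X0 = 2 as well
rhs = joint[:, mid, 5].sum() / joint[:, mid, :].sum()   # condition only on X1 in {3,4}
print(lhs, rhs)                           # 0.0 versus 0.5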
Prove $\lim _{x\to 0^+}\left(x^{f\left(x\right)}\right)=1$ Let $f(x)$ be a function with a right derivative at $x=0$ and suppose $f(0)=0$. $$\\$$ Prove $\lim _{x\to 0^+}\left(x^{f\left(x\right)}\right)=1$ I tried to apply an exponential of a logarithm to the expression and use L'Hôpital's rule, but couldn't get further. Any help or hints appreciated.
Note that $x^{f(x)}=(x^x)^{f(x)/x}$. As $x\to 0^+$ we have $x^x\to 1$, while $\frac{f(x)}{x}=\frac{f(x)-f(0)}{x-0}\to f'(0^+)$, a finite number, so $x^{f(x)}\to 1^{\,f'(0^+)}=1$.
How Euler showed that the number $2305843008139952128$ is perfect without using computer calculation This number, $2305843008139952128$, is perfect as shown here, and it was proved to be a perfect number by Euler without using a computer. So my question here is: how did Euler show that the number $2305843008139952128$ is perfect without using computer calculation?
Since the number given is $2^{30}(2^{31}-1)$, all that needs to be done is show that $2^{31}{-}1$ is prime. Euler knew that factors of this number must be of the form $k(2\cdot31)+1$, and also must be $8n\pm1$. This gives $84$ primes to check for division up to $\sqrt{2^{31}-1}$. See Modular restrictions on Mersenne divisors from the Prime Pages.
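To get a feel for how small the hand computation really is, here is a short illustrative Python sketch that rebuilds Euler's candidate list (primes of the form $62k+1$ that are also $\equiv\pm1\pmod 8$, up to $\sqrt{2^{31}-1}$) and trial-divides $2^{31}-1$ by them:

def is_prime(m):                      # naive primality test, fine at this size
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

M = 2**31 - 1                          # 2147483647
limit = int(M ** 0.5)                  # about 46341

candidates = [q for q in range(63, limit + 1, 62)
              if q % 8 in (1, 7) and is_prime(q)]
print(len(candidates))                         # the short list Euler had to try
print(all(M % q != 0 for q in candidates))     # True: 2^31 - 1 is prime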
quadratic variation of Brownian motion $B(t)$. Let $\{X_n\}_{n \geq 1}$ be a sequence of random variables with $\mathbb{E}[X_n] = u$. Suppose $\lim_{n \to \infty}\mathrm{Var}[X_n] = 0$. Do we have that $X_n$ converges to constant $u$ almost surely? What I ask actually comes from proving the quadratic variation of Brownian motion $B(t)$ is $t$. I was wondering how above argument for $X_n = \sum_i[B(t_i^n)-B(t_{i-1}^n)]^2$ implies that quadratic variation of Brownian motion $B(t)$ is $t$?
Yes, and the easiest way to prove this is Chebyshev's Inequality. Edit: Good point, but you can use Borel-Cantelli if the variances are summable. I think with your example we already know that there is an a.s. limit for the quadratic variation, so we're just making claims about what it is; I'm blanking on it now, but that's likely the argument.
Showing a function goes to zero exponentially fast I'm trying to find an exponentially decaying upper bound for the function $$ f(\mu)=\frac{\mu(\rho-1)e^{-\mu(\rho-1)a}}{\rho-e^{-\mu(\rho-1)a}}, $$ as $\mu\rightarrow\infty$ where $\rho>1$ and all variables are nonnegative, i.e., in the form $$ f(\mu)\leq C_{1}e^{-C_{2}\mu}. $$ I can bound $f(\mu)$ as follows: $$ f(\mu)=\frac{\mu(\rho-1)e^{-\mu(\rho-1)a}}{\rho-e^{-\mu(\rho-1)a}}\leq\frac{\mu(\rho-1)e^{-\mu(\rho-1)a}}{\rho-1}=\mu e^{-\mu(\rho-1)a}. $$ I know that $\mu(\rho-1)e^{-\mu(\rho-1)a}$ is quasi-concave with a maximum of $1/(ea(\rho-1))$ but I haven't been able to find a bound with $\mu$ in the exponent as above.
Substituting $x = \mu (\rho-1)a$, you're looking for $C_1$ and $C_2$ such that \begin{equation} \frac{x e^{-x}}{\rho - e^{-x}} \le C_1 e^{-C_2 x} \end{equation} Taking the natural log of both sides and rearranging gives \begin{equation} \ln x - \ln\left[C_1(\rho - e^{-x})\right] \le (1 - C_2) x \end{equation} So as long as $C_2 < 1$, your function will be bounded above by $C_1 e^{-C_2 x}$ for sufficiently large $x$.
True or False: Every 3-dimensional subspace of $ \Bbb R^{2 \times 2}$ contains at least one invertible matrix. The true or false question states: "True or False: Every 3-dimensional subspace of $ \Bbb R^{2 \times 2}$ contains at least one invertible matrix." Here $ \Bbb R^{2 \times 2}$ represents the space of all two by two matrices. It seems like this is true, but I am not sure how to prove or disprove the statement. (If it is true, then it's easy to see that every 3-dimensional subspace of $ \Bbb R^{2 \times 2}$ contains infinitely many invertible matrices.)
Here's a quick proof which uses special properties of the field $\mathbb{R}$. Consider the set of matrices of the form $\begin{pmatrix}a & -b \\ b & a\end{pmatrix}$. Note that every nonzero matrix in this set is invertible, since such a matrix has determinant $a^2+b^2$ which is nonzero unless $a=b=0$ (here is where we use the fact that our field is $\mathbb{R}$). But these matrices form a $2$-dimensional subspace of $\mathbb{R}^{2\times 2}$, which must have nontrivial intersection with any $3$-dimensional subspace. So any $3$-dimensional subspace contains a nonzero matrix of this form, which is invertible. OK, now here's a more complicated proof that works over any field. Let $V\subseteq\mathbb{R}^{2\times 2}$ be $3$-dimensional and let $\{e_1,e_2\}$ be a basis for $\mathbb{R}^2$. Let $W$ be the $2$-dimensional subspace of $\mathbb{R}^{2\times 2}$ consisting of all $A$ such that $A(e_1)=0$. Note that $\dim V=\dim V\cap W+\dim V/(V\cap W)$ and $V\cap W$ and $V/(V\cap W)$ are each at most $2$-dimensional. So one has dimension $1$, and the other has dimension $2$. Suppose $\dim V\cap W=1$ so $\dim V/(V\cap W)=2$. Let $A\in V\cap W$ be nonzero, so $A(e_1)=0$ and $A(e_2)\neq 0$. Note that $\dim V/(V\cap W)=2$ means that every element of $\mathbb{R}^{2\times 2}/W$ has a representative in $V$. That is, for any matrix $B$, there is $C\in V$ such that $B-C\in W$, which means $B(e_1)=C(e_1)$. In particular, choosing $B$ such that $B(e_1)$ is linearly independent from $A(e_2)$, there is some $C\in V$ such that $C(e_1)$ is linearly independent from $A(e_2)$. If $C$ is invertible, we're done. Otherwise, $C(e_2)$ is a multiple of $C(e_1)$, and so $C(e_2)+A(e_2)$ is not a multiple of $C(e_1)$. Taking $D=C+A$, we then have that $D(e_1)=C(e_1)$ and $D(e_2)=C(e_2)+A(e_2)$ are linearly independent. Thus $D$ is an invertible element of $V$. The case that $\dim V\cap W=2$ and $\dim V/(V\cap W)=1$ is similar. Let $A\in V\setminus (V\cap W)$, so $A(e_1)\neq 0$. If $A$ is invertible, we're done; otherwise $A(e_2)$ is a multiple of $A(e_1)$. Since $\dim V\cap W=2$, we have $W\subset V$. In particular, let $B$ be a matrix such that $B(e_1)=0$ and $B(e_2)$ is not a multiple of $A(e_1)$. Then $A(e_2)+B(e_2)$ is not a multiple of $A(e_1)$, and $B\in W\subset V$. So $C=A+B\in V$ is invertible since $C(e_1)=A(e_1)$ and $C(e_2)=A(e_2)+B(e_2)$ are linearly independent. (In fact, with a little work you can prove you can always choose $e_1$ so that you're in the first case, so the second case is unnecessary.)
Probability of sum of dice is composite Let $n$ dice be rolled. Let $S_{i}$ be the sum of the first $i$ rolls, for $i=1,\dots,n$. Find $\Pr(\text{all } S_{i} \text{ are composite})$ as $n$ tends to $\infty$. My guess is $0$, but how can I prove this? Or, if I'm wrong, how do I proceed?
Consider the first $n$ rolls for sufficiently large $n$. Let $$\mathbb{P} = \{0\}\cup\{ 1\leq i \leq n-1\mid \text{there is at least one prime in the range }[S_{i} + 1, S_{i} + 6]\}$$ Since there are more than $\frac{n}{\ln n}$ primes in the range $[1, n]$ for $n \geq 17$ (see here), the size of $\mathbb{P}$ is more than $\frac{n}{6\ln n}$. Hence, \begin{align} \Pr(S_1, S_2, \cdots, S_n \text{ are all composite})~\leq~\left(\frac{5}{6}\right)^{n / (6\ln n)} \end{align} When $n$ goes to $\infty$, the probability goes to $0$.
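The bound above is far from tight, but a quick Monte Carlo simulation (purely illustrative) shows the probability does indeed fall towards $0$ as $n$ grows:

import random

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def all_composite(n):
    s = 0
    for _ in range(n):
        s += random.randint(1, 6)
        if s == 1 or is_prime(s):    # this partial sum failed to be composite
            return False
    return True

trials = 20_000
for n in (5, 10, 20, 40):
    print(n, sum(all_composite(n) for _ in range(trials)) / trials)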
Need Help Logical Question There are three small tanks of capacity 35 L, 56 L, 84 L. Find the biggest capacity of a container which will measure the oil in the 3 tanks in exact whole numbers. Ans=71. Please provide a solution and explain.
When words like biggest, highest and greatest are used, you have to find the HCF. The HCF of 35 L, 56 L and 84 L is 7 L, so the answer is 7 L. In the given answer, what comes after the 7 is an "L", not a "1".
Smallest positive integral value of $a$ such that ${\sin}^2 x+a\cos x+{a}^2>1+\cos x$ holds for all real $x$ If the inequality $${\sin}^2 x+a\cos x+{a}^2>1+\cos x$$ holds for all $x \in \Bbb R$ then what's the smallest positive integral value of $a$? Here's my approach to the problem $$\cos^2 x+(1-a)\cos x-a^2<0$$ Let us consider this as a quadratic form respect to $a$. Applying the quadratic formula $a=\frac{-\cos x\pm\sqrt{5\cos^2 x+4\cos x}}2 $ and substituting $\cos x$ with $1$ and $-1$ we get 3 values of where the graph should touch the x axis $-2,0,1$ How should I proceed now?
The smallest positive integer is $1$. That doesn't satisfy the condition, because $$ \sin^2 x + \cos x + 1 > 1 + \cos x $$ has equality when $\sin^2x = 0$, which happens, among other places, at $x=0$. The next positive integer is $2$. Does that work? $$ \sin^2 x + 2\cos x + 4 > 1 + \cos x $$ holds if and only if $$ \sin^2 x + \cos x + 3 > 0 $$ which is easily true -- since $\sin^2 x$ is never less than $0$ and $\cos x$ is never less than $-1$, the left-hand side is always $\ge 2$. So the answer is $$ \Huge 2 $$
Tricky inequality involving 3 variables Let $x, y$ and $z$ be three real numbers satisfying the following conditions: $$0 < x \leq y \leq z$$ AND $$xy + yz + zx = 3$$ Prove that the maximum value of $(x y^3 z^2)$ is $2.$ I tried using the weighted AM-GM inequality, but to no avail as the powers 1,2 and 3 are giving me a hard time. How should I proceed? Thanks in advance.
Let $x=\frac{a}{2\sqrt2}$, $y=\sqrt2b$ and $z=\sqrt2c$. Hence, $c\geq b$ and by AM-GM: $$6=4bc+ab+ac\geq6\sqrt[6]{(bc)^4(ab)(ac)}=6\sqrt[6]{a^2b^5c^5}\geq6\sqrt{a^2b^6c^4},$$ which gives $$1\geq ab^3c^2=\frac{1}{2}xy^3z^2.$$ The equality occurs for $x=\frac{1}{2\sqrt2}$ and $y=z=\sqrt2$ and we are done!
Showing injectivity of non-linear functions? What are some general practices for showing non-linear functions are injective? Particularly, I've learned to do it with linear functions (even in multiple variables), but since one cannot solve non-linear systems of equations by hand, nor do results regarding the Jacobian determinant apply, what does one do with non-linear functions?
Simply use the definition! Let $f: A \rightarrow B$ be a function. Then we call $f$ injective if and only if: $f(x) = f(y) \Rightarrow x = y \quad \forall x,y \in A$, or equivalently: $x \neq y \Rightarrow f(x) \neq f(y) \quad \forall x,y \in A$ (this follows by contraposition).
Find the limit : $\lim_{ x \to 1}\frac{\sqrt[n]{x^n-1}}{\sqrt[n]{nx}-\sqrt[n]{n}-\sqrt[n]{nx-n}}$ Find the limit, without the use of L'Hôpital's Rule: $$\lim_{ x \to 1}\frac{\sqrt[n]{x^n-1}}{\sqrt[n]{nx}-\sqrt[n]{n}-\sqrt[n]{nx-n}}$$ My try: $u=x-1$. Now: $$\lim_{ u \to 0}\frac{\sqrt[n]{(u+1)^n-1}}{\sqrt[n]{n(u+1)}-\sqrt[n]{n}-\sqrt[n]{n(u+1)-n}}$$
We can simplify the term of interest and rationalize terms to obtain $$\begin{align} \frac{\sqrt[n]{x^n-1}}{\sqrt[n]{nx}-\sqrt[n]{n}-\sqrt[n]{nx-n}}&=\frac{\sqrt[n]{x^n-1}}{\sqrt[n]{n}\,(\,\sqrt[n]{x}\,-1\,-\,\sqrt[n]{x-1}\,)}\\\\ &=\frac{\sqrt[n]{x^{n-1}+x^{n-2}+\cdots +1}\,\,\sqrt[n]{x-1}}{\sqrt[n]{n}\,(\,\sqrt[n]{x}\,-1\,-\,\sqrt[n]{x-1}\,)}\\\\ &=\left(\frac{\sqrt[n]{x^{n-1}+x^{n-2}+\cdots +1}}{\sqrt[n]{n}}\right)\left(\frac{\sqrt[n]{x-1}}{\sqrt[n]{x}\,-1\,-\,\sqrt[n]{x-1}}\right)\\\\ &=\left(\frac{\sqrt[n]{x^{n-1}+x^{n-2}+\cdots +1}}{\sqrt[n]{n}}\right)\left(\frac{\sqrt[n]{x-1}}{\frac{x-1}{\sqrt[n]{x^{n-1}+x^{n-2}+\cdots +1}}-\sqrt[n]{x-1}}\right)\\\\ &=\left(\frac{\sqrt[n]{x^{n-1}+x^{n-2}+\cdots +1}}{\sqrt[n]{n}}\right)\left(\frac{1}{\frac{\sqrt[n]{(x-1)^{n-1}}}{\sqrt[n]{x^{n-1}+x^{n-2}+\cdots +1}}-1}\right)\\\\ &\to \left(\frac{\sqrt[n]{n}}{\sqrt[n]{n}}\right)\left(\frac{1}{\frac{0}{\sqrt[n]{n}}-1}\right)=-1 \end{align}$$
Image of the Zero matrix I'm learning some introductory linear algebra and am confused about the zero matrix's image. Is that just the point $\langle 0,0,0 \rangle$ / zero vector?
Let $A$ be a real $m\times n$ matrix. Then $A$ defines a linear mapping $\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}$, where the domain is $\mathbb{R}^{n}$ and the codomain is $\mathbb{R}^{m}$. The image of $A$ is the set of all vectors in $\mathbb{R}^{m}$ that we get when we input any vector in $\mathbb{R}^{n}$ into the mapping, i.e. $$\mathrm{im} (A)=\left\{A\mathbf{x}\ \vert\ \mathbf{x}\in \mathbb{R}^{n}\right\}.$$ In the case of the $m\times n$ zero matrix $\mathbf{0}$, we have $$\mathrm{im} (\mathbf{0})=\left\{\mathbf{0}\mathbf{x}\ \vert\ \mathbf{x}\in \mathbb{R}^{n}\right\}.$$ Since $\mathbf{0}\mathbf{x}=\mathbf{0}\in \mathbb{R}^{m}$ for any $\mathbf{x}\in \mathbb{R}^{n}$, then the image of the zero map is indeed the trivial subspace $\left\{\mathbf{0}\right\}.$
Gram matrix demonstration Given the Gram matrix $A$, where $A_{ij}=(\psi_{i}|\psi_{j})$ (of course $\psi_{i} \in V$), can someone prove that $$ \textrm{if} \hspace{2mm} (\psi|A|\psi)>0 \hspace{2mm} \textrm{for all } \psi\neq0 \hspace{2mm} (\textrm{i.e. } A \textrm{ is positive definite}) \implies \{\psi_{i}\}_{i=1}^{N} \hspace{2mm} \textrm{form a set of linearly independent vectors}? $$ And in this case, show that conversely the Gram matrix of any set of vectors is a positive semidefinite operator.
* *If $A$ is positive definite then it is nonsingular. Let's assume that $\psi_{1} = \lambda_2\psi_{2}+\dots + \lambda_n\psi_{n}$, so that our vectors are linearly dependent. Then the first row (or column) of $A$ is a linear combination of the other rows (columns) with coefficients $\lambda_2, \dots, \lambda_n$, so that $A$ is singular. Contradiction. *The Gram matrix of a set of vectors forming the columns of a matrix $X$ is $G=X^TX$, so that $w^TGw = w^TX^TXw = (Xw)^TXw = \|Xw\|_2^2 \ge 0$.
Please help me with this combination problem Prove that $\sum\limits_{k = 0}^n k{m \choose k}{n \choose k}= n{m+n-1 \choose n}$ We can write ${m \choose k} = m!/(m-k)!(k)!$ similarly ${n \choose k}$ and ${m+n-1 \choose n}$ can also be written but I am confused how to proceed further.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{n}k{m \choose k}{n \choose k} & = \sum_{k = 0}^{n}k{n \choose k}{m \choose m - k} = \sum_{k = 0}^{n}k{n \choose k}\bracks{z^{m - k}}\pars{1 + z}^{m} \\[5mm] & = \sum_{k = 0}^{n}k{n \choose k}\braces{\vphantom{\Large A}\bracks{z^{m}}z^{k}\pars{1 + z}^{m}} = \bracks{z^{m}}\braces{\vphantom{\Large A}\pars{1 + z}^{m}\overbrace{\sum_{k = 0}^{n}{n \choose k}kz^{k}} ^{\ds{nz\,\pars{1 + z}^{n - 1}}}} \\[5mm] & = n\bracks{z^{m - 1}}\pars{1 + z}^{m + n - 1} = \bbx{\ds{n\,{m + n - 1 \choose m - 1}}} \end{align}
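For anyone who wants to confirm the identity experimentally before unwinding the coefficient extraction, here is a short check over small $m,n$ with Python's exact integer binomials (illustrative only):

from math import comb

def lhs(m, n):
    return sum(k * comb(m, k) * comb(n, k) for k in range(n + 1))

def rhs(m, n):
    return n * comb(m + n - 1, n)

print(all(lhs(m, n) == rhs(m, n) for m in range(1, 9) for n in range(9)))   # True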
Find the value of $\arctan(1/3)$ How can I calculate $\arctan\left({1\over 3}\right)$ in terms of $\pi$ ? I know that $\tan^2(\frac{\pi}{6})= {1\over3}$ but don't know if that helps in any way.
You can use Newton-Raphson with $\tan\left(x\right) - 1/3 = 0$ and the 'starting point' $x = \pi/6$: \begin{align} &\texttt{Clear[i, x];} \\ &\texttt{x = Pi/6;} \\ & \\ &\texttt{For[i = 0, i < 5, ++i,} \\ &\texttt{x -= (Tan[x] - 1/3)/Sec[x]^2; Print[N[x]]]} \end{align} \begin{align} &\texttt{0.340586,\ 0.321873,\ 0.321751,\ 0.321751,}\ \color{red}{0.321751} \end{align}
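The same iteration in plain Python, for readers without Mathematica (the fixed five-step loop mirrors the snippet above; it is not a convergence criterion):

import math

x = math.pi / 6                                   # starting point
for _ in range(5):
    x -= (math.tan(x) - 1/3) * math.cos(x)**2     # Newton step: f/f' with f' = sec^2 x
    print(x)                                      # converges to 0.321750554...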
ODE periodic solutions I have a question concerning periodic ODEs: $y'=y^2f (t)$, where $f $ is a real, continuous and periodic function with period $T $, with initial condition $y(0)=y_0\gt0$. It is asked to show that there are always $y_0$ such that there is no global solution, and to find the conditions on $f $ such that there are global non-zero periodic solutions. I solved the equation with separation of variables, getting $$y=\frac{1} {\frac{1}{y_0}-\int_{0}^t f (s)ds}.$$ And I have no other ideas on the first question. As far as the second is concerned, I substituted $s+T$, obtaining the same solution, so I guess no more conditions are needed, but I feel I'm wrong. Can you help me?
We introduce the antiderivative $F(t) = \int_0^t f(s)\, ds$ and the following decomposition of the integral: \begin{equation} F(t) = \int_0^{T\left\lfloor\frac{t}{T}\right\rfloor} f(s)\, ds + \int_{T\left\lfloor\frac{t}{T}\right\rfloor}^t f(s)\, ds = \left\lfloor t/T\right\rfloor\! F(T) + F(t-T\left\lfloor t/T\right\rfloor)) \, . \end{equation} If $F(T)$ is nonzero, then the choice $y_0 = 1/F(T)$ makes the denominator of $y$ vanish at the time $t=T$. Else, $F(T)=0$ and we can find $t^*$ in $[0,T[$ such that $F(t^*)$ is nonzero. The choice $y_0 = 1/F(t^*)$ makes the denominator vanish at the time $t=t^*$. Thus, we can always find $y_0$ such that no global solution exists. Now, let us consider a $T$-periodic global solution $y$. For $t$ such that $y(t)$ is nonzero, one has \begin{equation} y(t+T)^{-1} - y(t)^{-1} = -\int_t^{t+T} f(s)\, ds = -F(T) = 0 \, . \end{equation} Furthermore, the denominator in $y$ cannot vanish, i.e. $1/y_0$ cannot belong to the image of $[0,T[$ through $F$. The inverse $1/y_0$ of the initial value is allowed to be strictly larger than $\max_{t\in[0,T[} F(t)$, or strictly smaller than $\min_{t\in[0,T[} F(t)$.
Solving the homogeneous heat equation using the method of fundamental solutions I'm trying to solve the PDE $u_t=Du_{xx}$ using the method of fundamental solutions. I've used the ansatz $u(x,t)=t^{\alpha/2}f(x/\sqrt{t})$ and this has given me the following ODE: $$\frac{\alpha}{2}f(\phi)-\frac{1}{2}\phi f'(\phi)=Df''(\phi)$$ where $\phi=x/\sqrt{t}$. I'm then told to let $\alpha=-1/2$, leaving me with the ODE $$-\frac{1}{4}f(\phi)-\frac{1}{2}\phi f'(\phi)=Df''(\phi)$$ I'm not sure how to solve this though. Note: $D \in \mathbb{R}_{>0}$
This answer was for the original version of the question asked. I would divide through by $y=f(\phi)$ first, and integrate w.r.t. $y$: $$\int\frac{y''}{y}dy=C\int 2\phi - 1 \,dy=Cy(2\phi-1)+G$$ where $G \in \mathbb{R}$. Then use a substitution; however, I am not sure if the LHS integral has a closed form (exact) solution.
The derivative of $x^0$ For some reason I have not been able to find a straight answer to this. We know that $\frac{d}{dx}x^n=nx^{n-1}$ And this is true for $n=-1$ and $n=1$ $\implies$ $\frac{d}{dx}x^{-1}=-1x^{-2}$ and $\frac{d}{dx}x^1=1$ We also know that $\frac{d}{dx}C=0$ where $C$ is a constant. Suppose that $f(x)=x^0$. Obviously any number to the power of zero is $1$, i.e. $x^0=1$, and $\frac{d}{dx}1=0$, but $x$ is not a constant. So, $$\frac{d}{dx}x^0=x^{-1}$$ Is this true? My thought is possibly. Based on the fact that if $\frac{d}{dx}x^1=1$ and obviously any value to the power of one is equal to that value. I.e. $x^1$ simplifies to be $C$ a constant but $\frac{d}{dx}x^1\not=0$, and we know that $\frac{d}{dx}C=0$. So is it true that $f'(x)=x^{-1}$? Hopefully this is not way more simple than I am making it. UPDATE: I obviously made an error by saying that $\frac{d}{dx}x^0=x^{-1}$ It actually would evaluate directly as $0\times x^{-1}$
Symbols aren't magic. $f(x) = x^0$ means $f(x) = 1$ for $x \ne 0$, with $f(0)$ undefined. So $f'(x) = 0$ for $x \ne 0$, because $f$ is a constant function. That's all there is to it. If one wants to be clever, or so-called clever: $f(x) = x^k$ with $k = 0$, so $f'(x) = kx^{k-1} = 0\cdot x^{-1} = 0$ for $x \ne 0$, so everything is consistent. But it's not magic.
Is there a flaw in the theory of fractional calculus? Let's talk about the function $f(x)=x^n$. It's derivative of $k^{th}$ order can be expressed by the formula: $$\frac{d^k}{dx^k}x^n=\frac{n!}{(n-k)!}x^{n-k}$$ Similarly, the $k^{th}$ integral (integral operator applied $k$ times) can be expressed as: $$\frac{n!}{(n+k)!}x^{n+k}$$ According the the Wikipedia article https://en.wikipedia.org/wiki/Fractional_calculus, we can replace the factorial with the Gamma function to get derivatives of fractional order. So, applying the derivative of half order twice to $\frac{x^{n+1}}{n+1}+C$, should get us to $x^n$. Applying the half-ordered derivative once gives: $$\frac{d^{1/2}}{{dx^{1/2}}}\left(\frac{x^{n+1}}{n+1}+Cx^0\right)=\frac{1}{n+1}\frac{\Pi(n+1)}{\Pi(n+1/2)}x^{n+1/2}+C\frac{1}{\Pi(-1/2)}x^{-1/2}$$ where $\Pi(x)$ is the generalization of the factorial function, and $\Pi(x)=\Gamma(1+x)$ Again, applying the half-ordered derivative gives: $$\frac{1}{n+1}\frac{\Pi(n+1)}{\Pi(n)}x^n+\frac{C}{\Pi(-1)}x^{-1}=x^n$$ which works fine because $\frac{C}{\Pi(-1)}\rightarrow 0$. So, the derivative works good but that's not the case with fractional-ordered integration. Applying the half-ordered integral operator twice to $x^n$ should give us $\frac{x^{n+1}}{n+1}+C$. Applying the half-ordered integral once means finding a function whose half-ordered derivative is $x^n$. So, applying it once gives: $$\frac{\Pi(n)}{\Pi{(n+1/2)}}x^{n+1/2}+C\frac{1}{\Pi(-1/2)}x^{-1/2}$$ Again, applying the half ordered derivative to this function should give a function whose half-ordered derivative is this function. So, again applying the half-integral operator gives: $$\frac{x^{n+1}}{n+1}+C+C'\frac{1}{\Pi(-1/2)}x^{-1/2}\neq \frac{x^{n+1}}{n+1}+C$$ where $C'$ is another constant. So, why does this additional term containing $C'$ get introduced? Is the theory of fractional derivatives flawed? Is there any way to get a single constant $C$ in the end by applying the half-integral operator two times?
One way to deal with these constants is to change from indefinite integration to definite integration. For example, the Riemann-Liouville integral may be used: $$D^{-\alpha}_af(x)=\frac1{\Gamma(\alpha)}\int_a^xf(t)(x-t)^{\alpha-1}\ dt$$ Now our constant of integration is controlled by $a$, and there will exist some $a$ such that $$D^{-1}_a\frac d{dx}f(x)=f(x)$$
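To make the point concrete, here is a small numerical sketch (my own illustration, not part of the answer above); it assumes SciPy is available and checks that the Riemann-Liouville half-integral with base point $a=0$, applied twice to $f(t)=t^n$, reproduces the ordinary integral $x^{n+1}/(n+1)$ with no stray $x^{-1/2}$ term, since the choice of $a$ pins down the constant:

```python
import math
from scipy.integrate import quad

def rl_half_integral(f, x):
    # D^{-1/2} f at x with base point a = 0:
    #   (1/Gamma(1/2)) * int_0^x f(t) (x - t)^(-1/2) dt,
    # after the substitution t = x - s^2, which removes the endpoint singularity.
    integrand = lambda s: 2.0 * f(x - s * s)
    val, _ = quad(integrand, 0.0, math.sqrt(max(x, 0.0)))
    return val / math.sqrt(math.pi)      # Gamma(1/2) = sqrt(pi)

n, x = 3, 2.0
f = lambda t: t ** n

half = lambda y: rl_half_integral(f, y)   # D^{-1/2} f
print(rl_half_integral(half, x))          # applying it twice: ~ x^4/4 = 4.0
print(x ** (n + 1) / (n + 1))
```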
Finding limit involving floor function. Find $\displaystyle \lim_{x\rightarrow 0}x^2\bigg(1+2+3+\cdots +\bigg\lfloor \frac{1}{|x|}\bigg\rfloor \bigg)$, where $\lfloor x \rfloor $ is the floor function of $x$. Attempt: put $\displaystyle x = \frac{1}{y}$, so the limit becomes $\displaystyle \lim_{y\rightarrow \infty}\frac{1+2+3+\cdots +\lfloor y \rfloor }{y^2}$. Could someone help me?
The sum rewrites as $1+2+ \dots +\lfloor y\rfloor = \frac{\lfloor y\rfloor (\lfloor y\rfloor +1)}{2}$. Therefore the expression under study equals $\frac{\lfloor y\rfloor (\lfloor y\rfloor +1)}{2y^2}$, which is asymptotically equivalent to $\frac{1}{2} \left(\frac{\lfloor y\rfloor}{y}\right)^2$ and tends to $1/2$ as $y\to\infty$.
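A quick numerical check of the answer above (my own sketch; it just evaluates the closed-form sum for a few small $x$):

```python
# x^2 * (1 + 2 + ... + floor(1/|x|)) approaches 1/2 as x -> 0.
import math

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    n = math.floor(1 / abs(x))
    print(x, x**2 * n * (n + 1) / 2)   # tends to 0.5
```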
Proof for Property of Complex Numbers Is the inequality $\lvert z_1 + z_2 \rvert \ge \lvert z_1 \rvert - \lvert z_2 \rvert$ incorrect, where $z_1$ and $z_2$ are any two complex numbers? I need an example to prove that it is. And in case it is correct, can you please give the proof? Thanks for any help.
We know that $\forall z_1,z_2 \in \mathbb{C}:|z_1+z_2|\leq |z_1|+|z_2|$ (triangle inequality). Apply it to the pair $z_1-z_2$ and $z_2$: $$|(z_1-z_2)+z_2|\leq |z_1-z_2|+|z_2|\\|z_1|\leq |z_1-z_2|+|z_2| \to \\ |z_1-z_2|\geq |z_1|-|z_2|$$ Now use the fact that $ |a||b|=|ab|$, so $|-z_2|=|-1||z_2|=|z_2|$, and replace $z_2$ by $-z_2$ above: $$|z_1-(-z_2)|\geq |z_1|-|-z_2| \\\to \\ |z_1+z_2|\geq |z_1|-|z_2|$$
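For what it's worth, here is a small random test of the reverse triangle inequality derived above (purely illustrative, not a proof):

```python
# Random sanity check of |z1 + z2| >= |z1| - |z2| (a tiny numeric tolerance
# is used only to absorb floating-point rounding).
import random

random.seed(0)
for _ in range(10000):
    z1 = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    z2 = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    assert abs(z1 + z2) >= abs(z1) - abs(z2) - 1e-12
print("no counterexample found")
```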
Is there a process, similar to long division, to do nth roots where n is any positive integer? I did not even remember how to do square roots from high school, but I vaguely recall it was similar to long division. (Thankfully I remember how to do long division.) I just went to YouTube and refreshed my memory on how to do square roots. Is there a "long root" process to do nth roots? n can be any integer of 2 or more. (n could be 1 too, but that is trivial.) The radicand can be any positive number; it does not have to be a perfect square, perfect cube, or perfect nth power of anything. Btw, I am aware of factorization. So $\sqrt{153} = \sqrt{9 \cdot 17} = 3\sqrt{17}$. In this case, what I want to do is something like 2 into 17, but instead of long division, use a "long root" process for putting 2 into 17. The process would go on forever, much like 25/7 goes on forever because the remainder never "settles"; I would just stop when I get, say, 3 decimal places or however many I think is accurate enough. Examples: $\sqrt{68}$, $\sqrt[3]{401}$, $\sqrt[7]{50}$, $\sqrt[21]{675}$, $\sqrt[n]{x}$.
I'm having trouble understanding exactly what you are asking about, but I'll focus on the following: "The process would go on forever, much like 25/7 goes on forever because the remainder never "settles". I would just stop when I get, say, 3 decimal places or however many I think is accurate enough." You might want to grab a book on introductory real analysis. R. P. Burn's Numbers and Functions has very good chapters on sequences and completeness that will satisfy your needs. For example, for any $a$, $0\leq a-\frac{\lfloor a10^n \rfloor}{10^n}<\frac{1}{10^n}$ (here $\lfloor x \rfloor$ is called the floor of $x$, the greatest integer less than or equal to $x$). From this we can deduce that for any number $a$ there is a sequence of rational numbers which tends to it. This is because the sequence $(1/10^n)$ tends to zero, so by the squeeze rule for null sequences the sequence $(\frac{\lfloor a10^n \rfloor}{10^n}-a)$ tends to zero, and by the definition of limit the result is proved. Remember that $\frac{\lfloor a10^n \rfloor}{10^n}$ is always rational. The chapter on completeness will go over how every $n$th root of a positive number exists and is unique. Hope that points you in the right direction.
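Building on the decimal-approximation idea above, here is one possible "long root"-style sketch in Python (my own illustration; the function name and approach are hypothetical, not from the answer): it computes $\lfloor x^{1/n}\cdot 10^d\rfloor$ with exact integer arithmetic, which gives the first $d$ decimal places of $\sqrt[n]{x}$, applied here to two of the examples in the question:

```python
# Hypothetical sketch: first d decimal places of x**(1/n), using only integers.
def nth_root_digits(x, n, d):
    """Return floor(x**(1/n) * 10**d) via integer binary search,
    i.e. the largest integer r with r**n <= x * 10**(n*d)."""
    target = x * 10 ** (n * d)
    lo, hi = 0, max(1, target)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= target:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(nth_root_digits(68, 2, 3))    # 8246  -> sqrt(68)    ~ 8.246
print(nth_root_digits(675, 21, 3))  # 1363  -> 675**(1/21) ~ 1.363
```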
We have mapping $ \left \langle x,x \right \rangle = x^{T}Ax $. Prove that $ \left \langle x,x \right \rangle > 0 $ iff... Let's suppose we have the mapping $ \left \langle x,y \right \rangle = x^{T}Ay $, where $A$ is the symmetric matrix $\begin{pmatrix} a & b\\ b & c \end{pmatrix} $, $a,c >0$. I need to prove that $ \left \langle x,y \right \rangle$ is a scalar product iff $ac-b ^{2} > 0$. I've already proved that this mapping is linear and that $\left \langle x,y \right \rangle = \left \langle y,x \right \rangle $ for every $x$ and $y$. Now I want to show that $ \left \langle x,x \right \rangle > 0 $ for $ x\neq 0 $ iff $ac-b ^{2} > 0$. Let's suppose that $x=\binom{x_1}{x_2}$. So I came to this: $$x_{1}^{2}a + 2x_{1}x_{2}b + x_{2}^{2}c > 0 $$ $$x_{1}^{2}a + b^{2}\left \langle x,x \right \rangle + x_{2}^{2}c > 0 .$$ Now I see that $x_{1}^{2}+x_{2}^{2}= \left \| x \right \|^{2}$, but I'm stuck here, because I don't know how to use this fact.
As mentioned in the comments and the other answer, this problem can easily be answered using eigenvalues of $A$. However, if you don't know what eigenvalues are, you can simply multiply the inequality $x_{1}^{2}a + 2x_{1}x_{2}b + x_{2}^{2}c > 0$ by $a>0$, which gives you $$x_{1}^{2}a^2 + 2x_{1}x_{2}ab + x_{2}^{2}ac > 0.$$ Now rewrite the left-hand side as $$x_{1}^{2}a^2 + 2x_{1}x_{2}ab +x_2^2b^2+ x_{2}^{2}(ac-b^2)=(ax_1+bx_2)^2+(ac-b^2)x_2^2.$$ Now if $ac-b^2\leq 0$, this is nonpositive for $(x_1,x_2)=(b,-a)\neq (0,0)$. On the other hand, if $ac-b^2>0$, then it is nonnegative, and can only be $0$ when $x_2=x_1=0$.
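A numerical illustration of the criterion (my own sketch, assuming NumPy): sample many vectors $x$ and check whether $x^TAx>0$, comparing with the sign of $ac-b^2$:

```python
# Empirical check: the form x^T A x is positive for all nonzero x exactly
# when a*c - b^2 > 0 (given a, c > 0).
import numpy as np

def looks_positive_definite(a, b, c, trials=10000):
    A = np.array([[a, b], [b, c]], dtype=float)
    xs = np.random.default_rng(0).normal(size=(trials, 2))
    return bool(np.all(np.einsum('ij,jk,ik->i', xs, A, xs) > 0))

print(looks_positive_definite(2.0, 1.0, 3.0), 2 * 3 - 1 ** 2 > 0)   # True True
print(looks_positive_definite(1.0, 2.0, 1.0), 1 * 1 - 2 ** 2 > 0)   # False False
```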
For $a,b,c \in R$ and $a,b,c>0$. Minimize $A=a^3+b^3+c^3$ For $a,b,c \in R$ and $a,b,c>0$ satisfy $a^2+b^2+c^2=27$, minimize $$A=a^3+b^3+c^3$$
By the Power Mean Inequality, $\left(\dfrac{a^3+b^3+c^3}{3}\right)^{1/3}\ge \left(\dfrac{a^2+b^2+c^2}{3}\right)^{1/2}=\sqrt{\dfrac{27}{3}}=3$. Cubing gives $\dfrac{a^3+b^3+c^3}{3}\ge 27$, i.e. $a^3+b^3+c^3 \ge 81$, with equality when $a=b=c=3$.
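As a sanity check (illustrative only, assuming NumPy): sampling random positive triples on the sphere $a^2+b^2+c^2=27$ never produces a value of $a^3+b^3+c^3$ below $81$:

```python
# Random check on the constraint set: positive triples with a^2+b^2+c^2 = 27
# never give a^3+b^3+c^3 below 81 (the value at a = b = c = 3).
import numpy as np

rng = np.random.default_rng(0)
v = np.abs(rng.normal(size=(200000, 3)))
v *= np.sqrt(27.0) / np.linalg.norm(v, axis=1, keepdims=True)  # project onto the sphere
print(np.sum(v ** 3, axis=1).min())   # slightly above 81
print(3 * 3 ** 3)                     # 81
```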
Prove $W$ is closed iff for all $A\subset X$, $int(W\cup int(A))=int(W\cup A)$ Let $(X,\tau)$ be a topological space and $W\subset X$. Prove that if $W$ is closed, then for all $A\subset X$, $(W\cup A^\circ)^\circ=(W\cup A)^\circ$. Here's what I have: $A^\circ \subset A$, so $A^\circ\cup W\subset A\cup W$. Then $(A^\circ \cup W)^\circ \subset (A\cup W)^\circ$. Now, let $p\in (A\cup W)^\circ$. Then there is $U\in \tau$ such that $p\in U\subset A\cup W$. I want to find $V\in \tau$ such that $p\in V\subset W\cup A^\circ$. First I noticed that $U=(U\cap W)\cup (U\cap (X\setminus W))$ and $(U\cap (X\setminus W))\in \tau$. I still can't manage to find an adequate $V$ to conclude that $p\in(W\cup A^\circ)^\circ$. Any help would be appreciated.
Let $(X,\tau)$ be a topological space and let $A\subset X$. First, if $O\subset A$ and $O$ is open, then $O\subset A^\circ$. Proof: $O\subset A \implies O^\circ \subset A^\circ$, and since $O$ is open, $O=O^\circ$; hence $O\subset A^\circ$. Now take $p\in (W\cup A)^\circ$. By the definition of the interior of a set, there is $U\in \tau$ such that $p\in U\subset A\cup W$. We are looking for $V\in \tau$ such that $p\in V\subset W\cup A^\circ$; take $V=U$. Notice that $U=(U\cap W)\cup (U\cap (X-W))=(U\cap W)\cup (U-W)$, and $U\cap (X-W)\in \tau$ because $W$ is closed. Now $U-W\subset A$ and $U-W$ is open, so by the first claim $U-W\subset A^\circ$. Then $U\subset (U\cap W)\cup A^\circ\subset W\cup A^\circ$. So $p\in U\subset W\cup A^\circ\implies p\in (W\cup A^\circ)^\circ$.
Question about $\lim_{n\to\infty} n|\sin n|$ I have a question regarding this limit (of a sequence): $$\lim_{n\to \infty} n|\sin(n)|$$ Why isn't it infinite? The way I thought about this problem is: $|\sin(n)|$ is always positive, and $n$ tends to infinity, so shouldn't the whole limit go to infinity? What is the right way to solve this, and why is my idea wrong?
The limit of the sequence $\{n\left|\sin n\right|\}_{n\geq 0}$ as $n\to +\infty$ does not exist. Indeed, $\left|\sin n\right|$ is arbitrarily close to $1$ for infinitely many natural numbers $n$, making the $\limsup=+\infty$. On the other hand, if $\frac{p_m}{q_m}$ is a convergent of the continued fraction of $\pi$ we have $$ \left|p_m -\pi q_m\right|\leq \frac{1}{q_m} $$ and since $\sin(x)$ is $1$-Lipschitz and $\sin(\pi q_m)=0$, taking $n=p_m$ gives $$ p_m\left|\sin p_m\right| = p_m\left|\sin(p_m)-\sin(\pi q_m)\right| \leq p_m\left|p_m-\pi q_m\right| \leq \frac{p_m}{q_m}, $$ which stays bounded (close to $\pi$). Hence the $\liminf$ is finite, and the sequence cannot have a limit.
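To illustrate the two behaviours numerically (my own sketch): along numerators of convergents of $\pi$ the quantity $n|\sin n|$ stays small, while for typical $n$ it is of order $n$:

```python
# n*|sin n| along numerators of continued-fraction convergents of pi
# stays small, while nearby integers give values of order n.
import math

for n in [3, 22, 333, 355, 103993]:       # numerators p_m of convergents of pi
    print(n, n * abs(math.sin(n)))
print(354, 354 * abs(math.sin(354)))      # a "typical" n for comparison (~ 298)
```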
everywhere differentiable function whose derivative is 0 almost everywhere is a constant There are examples showing that functions with almost everywhere 0 derivative can be increasing. However in those examples, functions are not differentiable everywhere. In fact, invoking theorem 7.21 from Rudin's Real and Complex Analysis, I can deduce that if a function $f$ is differentiable everywhere and its derivative equals $0$ a.e., then $f\equiv constant$. However, I'm wondering if there is some easier proof of such statement, since the proof of theorem 7.21 is quite weird to me. Is there any other theory that I can use to prove the statement?
Well, there is a whole theory of level sets of derivatives of everywhere differentiable functions, based largely on the theory of Henstock-Kurzweil (a.k.a. Denjoy or gauge) integral. A good starting point, with a lot of references, could be [D. Preiss, Level sets of derivatives, TAMS 272(1):161–184], available at http://www.ams.org/tran/1982-272-01/S0002-9947-1982-0656484-0/S0002-9947-1982-0656484-0.pdf This is certainly not simpler, though...
Prove that a torsion module over a PID equals direct sum of its primary components Let $R$ be a P.I.D. with $1$ and $M$ be an $R$-module that is annihilated by the nonzero, proper ideal $(a)$. Let $a=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ be the unique factorization of $a$. Let $M_i$ be the submodule of $M$ annihilated by $p_i^{\alpha_i}$. Prove that $M=M_1\oplus M_2\oplus \cdots \oplus M_k$. My attempt so far: For each $1\leq j \leq k$ define $a_j = \prod_{i\ne j} p_i^{\alpha_i}$. Let $\sum_{i=1}^{n} (a_jr_i)\cdot m_i$ be an arbitrary element of the submodule $(a_j)M$. We have $p_j^{\alpha_j}\cdot (\sum_{i=1}^{n} (a_jr_i)\cdot m_i) = (p_j^{\alpha_j}a_j(r_1 +\cdots + r_n))\cdot (m_1+\cdots +m_n) =(r_1 +\cdots +r_n)\cdot (a \cdot (m_1+\cdots +m_n)) =0$. So $\sum_{i=1}^{n} (a_jr_i)\cdot m_i \in M_j$, so that $(a_j)M\subset M_j$. Next, let $m\in M_j$. Since $R$ is a P.I.D., we know $1= a_jx + p_j^{\alpha_j}y$ for some $x, y \in R$. So $m= 1\cdot m = (a_jx + p_j^{\alpha_j}y)\cdot m = xa_j \cdot m + yp_j^{\alpha_j} \cdot m = xa_j \cdot m +0 \in (a_j)M$. Conclude that $(a_j)M = M_j$. Next, suppose $m\in (a_j)M\cap \sum_{t\ne j} (a_t)M$. We have $1\cdot m = xa_j \cdot m + yp_j^{\alpha_j} \cdot m= xa_j\cdot m + 0$. But note that $xa_j\cdot ((\sum_{t\ne j}a_t)\cdot m) = wa\cdot m$ for some $w\in R$, so that $xa_j\cdot ((\sum_{t\ne j}a_t)\cdot m) = 0$. It follows that $xa_j = 0$, and $m=0+0=0$. Conclude that $ (a_j)M\cap \sum_{t\ne j} (a_t)M = (0)$. Thus, $\sum_{i=1}^{k} (a_i)M$ is a direct sum. At this point, I'm not sure how to actually show this direct sum is equal to $M$. The only thing I tried is applying the Chinese Remainder Theorem as follows, but it doesn't seem to work. We have that $(a)M=(0)$. And since $R$ is a PID, we know that since $(p_i^{\alpha_i}, p_j^{\alpha_j})= (1) = R$ for any $i\ne j$, $(p_i^{\alpha_i})$ and $(p_j^{\alpha_j})$ are comaximal ideals. So apply the Chinese Remainder Theorem to get $M\cong M/(p_1^{\alpha_1})M \times \cdots \times M/(p_k^{\alpha_k})M$. I'd appreciate some help on finishing this.
Based on your argument (and as stated in the question), your PID has a $1$. From there you need only show that $(a_1,\ldots, a_k)=R$: then $1=\sum_{i=1}^k x_ia_i$ for some $x_i\in R$, so every $m\in M$ satisfies $m=\sum_{i=1}^k x_ia_i\cdot m\in\sum_{i=1}^k (a_i)M$, which gives $M=\sum_{i=1}^k M_i$. To show $(a_1,\ldots,a_k)=R$, prove inductively that for $j<k$ we have $$(a_1,a_2,\ldots, a_j)=(p_{j+1}^{\alpha_{j+1}}\cdots p_k^{\alpha_k})$$ Then at the last step you'll have $(a_1,\ldots, a_{k-1},a_k)=(p_k^{\alpha_k}, a_k)=R$, which finishes your approach.
Show that $a+b+c=0$ implies that $32(a^4+b^4+c^4)$ is a perfect square. There are given integers $a, b, c$ satisfying $a+b+c=0$. Show that $32(a^4+b^4+c^4)$ is a perfect square. EDIT: I found a solution by symmetric polynomials, which is posted below.
EDIT (quoting the OP): "I found a solution by symmetric polynomials (in variables $a$, $b$, $c$)." The following more or less transcribes OP's solution as direct calculations, without explicitly using Newton's relations. From the assumption that $a+b+c=0\,$: $$ 0 = (a+b+c)^2 = a^2+b^2+c^2 + 2(ab+bc+ca) $$ $$ \implies 2(ab+bc+ca)=-(a^2+b^2+c^2) \tag{1} $$ $$ \require{cancel} (ab+bc+ca)^2 = a^2b^2+b^2c^2+c^2a^2 + \cancel{2abc(a+b+c) } \tag{2} $$ $$ \begin{align} a^4+b^4+c^4 & = (a^2+b^2+c^2)^2-2(a^2b^2+b^2c^2+c^2a^2) \\[5px] & \overset{(1),(2)}{=} 4(ab+bc+ca)^2 - 2(ab+bc+ca)^2 \\ & = 2 (ab+bc+ca)^2 \end{align} $$ The latter gives $32(a^4+b^4+c^4)=\big(8 (ab+bc+ca)\big)^2\,$.
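A quick empirical confirmation of the identity $32(a^4+b^4+c^4)=\big(8(ab+bc+ca)\big)^2$ on random integer triples with $a+b+c=0$ (illustrative sketch, not part of the proof):

```python
# For random integers with a + b + c = 0, 32*(a^4+b^4+c^4) equals
# (8*(ab+bc+ca))^2 and is therefore a perfect square.
import math, random

random.seed(0)
for _ in range(10000):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    c = -a - b
    lhs = 32 * (a**4 + b**4 + c**4)
    rhs = (8 * (a*b + b*c + c*a)) ** 2
    assert lhs == rhs and math.isqrt(lhs) ** 2 == lhs
print("identity verified on random samples")
```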
Find matrix $B$ such that $BA=4A$ Find (in each part) the unique $B \in \mathbb{R}^{3 \times 3}$ such that for every $A \in \mathbb{R}^{3 \times 3}$ we have: i) $BA=4A$; ii) rows 1, 2 and 3 of $BA$ are rows 3, 2 and 1 of $A$. This problem is in the section where matrix multiplication is defined. My only idea was to set up a giant system of equations, but it was too giant. Is there a smarter way to solve it?
Since each condition must work for every matrix $A\in \mathcal{M}_3(\mathbb{R})$, take $A=I$. Then: i) $BA=4A$ becomes $BI=4I$, hence $B=4I$; ii) let $M$ be the matrix whose rows are rows 3, 2, 1 of $I$; then $BA=BI=M$, therefore $B=M$. Conversely, both candidates do satisfy their condition for every $A$: $(4I)A=4A$, and multiplying $A$ on the left by $M$ permutes the rows of $A$ in the required way, so these $B$ are indeed the unique solutions.
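A small numerical check of both parts (my own sketch, assuming NumPy; the matrices are built exactly as described above):

```python
# Verify B = 4I and B = M (rows 3, 2, 1 of I) on an arbitrary A.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))

B1 = 4 * np.eye(3)                            # part i)
B2 = np.eye(3)[[2, 1, 0], :]                  # part ii): rows 3, 2, 1 of I

print(np.allclose(B1 @ A, 4 * A))             # True
print(np.allclose(B2 @ A, A[[2, 1, 0], :]))   # True: rows of A reversed
```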
Prove that $(\sum_{i=1}^n i)^2 = \sum_{i=1}^n i^3$ by induction Prove that $(\sum_{i=1}^n i)^2 = \sum_{i=1}^n i^3$. I can use the fact that $\sum_{i=1}^n i = \frac{n(n+1)}{2}$ after the inductive hypothesis is invoked. I'm not sure where to start; I would usually break down one side, but there aren't usually two sums involved, so I'm stuck.
Base case $n=1$: $(\sum_{i=1}^1 i)^2=\sum_{i=1}^1 i^3$, i.e. $1=1$. Inductive hypothesis $(\ast)$: for $n=k$, $(\sum_{i=1}^k i)^2=\sum_{i=1}^k i^3$. To show the claim for $n=k+1$, i.e. $(\sum_{i=1}^{k+1}i)^2=\sum_{i=1}^{k+1} i^3$: $$\begin{align}\Big(\sum_{i=1}^{k+1}i\Big)^2&=\Big(\sum_{i=1}^{k}i+(k+1)\Big)^2=\Big(\sum_{i=1}^{k}i\Big)^2+(k+1)^2+2(k+1)\sum_{i=1}^{k}i\\ &\overset{(\ast)}{=} \sum_{i=1}^k i^3+(k+1)^2+2(k+1)\sum_{i=1}^{k}i\\ &= \sum_{i=1}^k i^3+(k+1)^2+2(k+1)\cdot\dfrac{k(k+1)}{2}\\ &= \sum_{i=1}^k i^3+(k+1)^2+k(k+1)^2\\ &= \sum_{i=1}^k i^3+(k+1)^2(1+k)\\ &= \sum_{i=1}^k i^3+(k+1)^3\\ &= \sum_{i=1}^{k+1} i^3\end{align}$$
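A brief empirical check of the identity for small $n$ (illustrative only; the induction above is the proof):

```python
# (1 + 2 + ... + n)^2 == 1^3 + 2^3 + ... + n^3 for small n.
for n in range(1, 50):
    assert sum(range(1, n + 1)) ** 2 == sum(i ** 3 for i in range(1, n + 1))
print("identity holds for n = 1..49")
```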