Is it true that $e^{\frac{i}{2}\pi}+e^{-\frac{i}{2}\pi}=0$? As in the title. Is it true that: $$e^{\frac{i}{2}\pi}+e^{-\frac{i}{2}\pi}=0$$ And if it indeed is, how could one prove it? Maybe it's a silly question, but - frankly I'm kinda new to complex exponentials.
Use Euler's formula: $$e^{i\theta}=\cos \theta + i\sin \theta$$ I think the easiest proof of this is with Taylor series, shown here: How to prove Euler's formula: $e^{it}=\cos t +i\sin t$? so $$e^{i\pi/2}+e^{-i\pi/2}= \cos (\frac \pi 2) +i\sin (\frac \pi 2)+ \cos (-\frac \pi 2)+i\sin (-\frac \pi 2)=i\Big( \sin(\frac \pi 2)+\sin(- \frac \pi 2)\Big)=i(1-1)=0$$
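A quick numerical sanity check (my addition, using Python's standard cmath module):

import cmath

z = cmath.exp(1j * cmath.pi / 2) + cmath.exp(-1j * cmath.pi / 2)
print(z)  # approximately 0, e.g. (1.2e-16+0j), up to floating-point rounding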
Finding the angle of $-3+5i$ To find the angle $\theta$ of this complex number I know that I have to imagine it in the complex graph, draw a triangle and then calculate the arctan. Here is the representation of $-3+5i$ on the graph: (I'll explain $\alpha$ and $\phi$ in a moment) I know that since I have the adjacent and opposite sides of the triangle I can correlate them with the angle using the tangent. Then, I need to calculate the arctan to get the angle. But my problem is finding the tangent, because I don't know which is the right triangle in this case. My question is: is $\tan \theta = \frac{5}{-3}$ or $\tan \theta = \frac{-3}{5}$? How do I know whether $\theta$ is supposed to be $\alpha$ or $\phi$?
By convention your $\theta$ is defined to be the angle made with the positive $X$-axis in the anticlockwise direction. In this case it will be $90^\circ+\alpha$.
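In code, the two-argument arctangent resolves the quadrant ambiguity automatically; a small sketch (my addition, standard library only):

import cmath, math

theta = math.atan2(5, -3)           # angle of -3+5i, measured from the positive x-axis
print(math.degrees(theta))          # ~120.96 degrees, i.e. 90 degrees + alpha
print(cmath.phase(complex(-3, 5)))  # the same value via the complex phase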
Matrix form of differential equation, not diagonalizable So I came across this problem while studying and I'm a little confused. $$ y'(t) = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}y(t) $$ Usually I solve for the eigenvalues and the corresponding eigenvectors and then basically plug into: $$ y = c_{1} e^{\lambda_{1} t} \begin{bmatrix} e1 \\ e2 \end{bmatrix}+ c_{2} e^{\lambda_{2} t} \begin{bmatrix} e3 \\ e4 \end{bmatrix} $$ But in this case it isn't diagonalizable and I only get one eigenvector. How do you solve this?
For a $2\times 2$ matrix with repeated eigenvalue, and only one linearly independent eigenvector, one can obtain a second solution as $\vec y_2(t)=te^{\lambda t}\vec v_1+e^{\lambda t}\vec v_2$, where $\vec v_1$ is the eigenvector for the matrix, and $\vec v_2$ is a "generalized eigenvector", which satisfies $(A-\lambda I)\vec v_2=\vec v_1$, or equivalently satisfies $(A-\lambda I)^2\vec v_2=0$. So the general solution becomes $$\vec y(t)=c_1e^{\lambda t}\vec v_1+c_2(te^{\lambda t}\vec v_1+e^{\lambda t}\vec v_2)$$ If you are familiar with the matrix exponential, then you could also solve this problem using $e^{At}$ as a fundamental matrix. Notice here that $A$ can be written as the sum of the identity matrix, and a nilpotent matrix (i.e. $N^k=0$ for some $k\in\Bbb N$), so $A=I+N$, and $e^{At}=e^{It}e^{Nt}$.
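As a sketch (my addition, not part of the original answer), SymPy's matrix exponential confirms the fundamental matrix for the matrix in the question:

from sympy import Matrix, symbols

t = symbols('t')
A = Matrix([[1, 1], [0, 1]])
print((A * t).exp())
# Matrix([[exp(t), t*exp(t)], [0, exp(t)]]) -- its columns span the solutions
# c1*e^t*v1 + c2*(t*e^t*v1 + e^t*v2) with v1 = (1,0), v2 = (0,1)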
Are there useful algebras between the Cayley-Dickson algebras? Reals, complex numbers, quaternions, octonions, etc. are a hierarchy of algebras which can be constructed in a regular way. One obvious property of this hierarchy is that each such algebra has $2^n$ natural basis elements. Are there related algebras which have 3, 5, 7, or some other intermediate number of natural basis elements? If not, why not?
Well, you can always equip $\mathbb{R}^k$ with the product $(\mathbf{x}\ast \mathbf{y})(i)=\mathbf{x}(i)\cdot \mathbf{y}(i)$ to give you a unital associative commutative algebra, but for all $k\geq 2$ you won't have this be a division algebra. The Frobenius Theorem says that the only finite-dimensional associative real division algebras are $\mathbb{R},\mathbb{C},\mathbb{H}$ (up to isomorphism). Other standard algebras include:
* group algebras (if $G$ is a group, consider the free $\mathbb{R}$-vector space generated by $G$, where we define the multiplication by the group law and extend linearly)
* Clifford algebras, which actually produce the examples of $\mathbb{R},\mathbb{C}$, and $\mathbb{H}$ (but not $\mathbb{O}$)
* field extensions (like $\mathbb{R}[x]$ or $\mathbb{R}(x)$ as $\mathbb{R}$-vector spaces, or $\mathbb{Q}[\sqrt{2}]$ as a $\mathbb{Q}$-vector space)
* the algebra of $n\times n$ matrices $\mathbb{R}^{n\times n}$
* the algebra of continuous functions $C(X)$ for any topological space $X$ (you can just consider $\mathbb{R}$ or $[0,1]$ if you're not comfortable with topological spaces)
* Lie algebras
All of these things are useful and some have entire fields of study devoted to them.
Prove that, if G is a bipartite graph with an odd number of vertices, then G is non-Hamiltonian Continuing with my studies in Introduction to Graph Theory 5th Edition by Robin J Wilson, one of the exercises asked to prove that, if $G$ is a bipartite graph with an odd number of vertices, then $G$ is non-Hamiltonian. This is what I've come up with. Is it strong enough? Let graph $G$ be a bipartite graph with an odd number of vertices and let $G$ be Hamiltonian, meaning that there is a cycle that includes every vertex of $G$ (Wilson 48). As such, there would exist a cycle in $G$ of odd length. However, by Theorem 2.1, a graph $G$ is bipartite if and only if every cycle of $G$ has even length (Wilson 33). Proven by contradiction, if $G$ is a bipartite graph with an odd number of vertices, then $G$ is non-Hamiltonian. As an example, the picture below has 13 vertices so it must be non-Hamiltonian.
We would like to show that a bipartite graph with an odd number of vertices does not have a Hamilton circuit. Let's suppose we have a bipartite graph $G=(V,E)$ where $V$ can be partitioned into disjoint sets $V_1$ and $V_2$, such that no edges connect vertices within $V_1$, and the same applies to those in $V_2$. Assume that we have a Hamilton circuit. This circuit must begin and end at the same vertex, and traverse all of the vertices. It follows that the vertex sequence for the circuit would start at $s_1$ in $V_1$, then $s_2$ in $V_2$, then $s_3$ in $V_1$, $s_4$ in $V_2$, and so on, alternating until $s_n$ in $V_2$, which connects back to $s_1$. We can see that the cardinality of $V_1$, $|V_1|$, is equal to that of $V_2$, $|V_2|$. Because of this, we can see that the number of vertices in this graph $G$ is even ($2m$ where $m$ is either $|V_1|$ or $|V_2|$, since $|V_1| = |V_2|$). It now follows that if a bipartite graph has such a Hamilton circuit, it cannot have an odd number of vertices. The circuit will always, as shown, have an even number of vertices.
Volume of a closed and orientable manifold is positive Let $g_{ij} dx^i \otimes dx^j$ be a Riemannian metric on an orientable and closed manifold $M$ (dimension $n$). Let $d\mathrm{vol}_g=\sqrt{\det(g_{ij})}\, dx^1\wedge \cdots \wedge dx^n$. Show $\int_M d\mathrm{vol}_g>0$. I wonder where the "closedness" of $M$ comes into the proof. Or do we need the manifold to be "closed" at all?
"Closed" in this context means "compact without boundary." If the manifold is not compact, there is no guarantee that the volume is finite. But if the volume is finite, then the conclusion still holds. I suspect that whoever wrote the problem was working in a context where integrals of differential forms had been defined either on compact manifolds with or without boundary, or for compactly supported differential forms. If $M$ is noncompact, then neither of these conditions holds, and you have to interpret the integral as an improper integral (for example, defined as the limit over compact subdomains, or something like that).
Inequality of continuous functions with integral Let $M$ be a positive real number. Let $f:[0,\infty)\to[0,M]$ be a continuous function satisfying $$ \int\limits_{0}^\infty (1+x)f(x)dx<\infty.$$ Prove the following inequality. $$\left( \int\limits_{0}^\infty f(x)dx \right)^2\le 4M\int\limits_{0}^\infty xf(x)dx.$$
Your proposal in the comments is close. Try this: $$ \frac{d}{dt} \left( \int_0^t f(x) dx \right)^2 = 2 \left( \int_0^t f(x) dx \right) \frac{d}{dt} \int_0^t f(x) dx = 2 f(t) \int_0^t f(x) dx \\ \le 2 f(t) \int_0^t M dx = 2 M t f(t). $$ Now integrate from $t=0$ to $t=T$: $$ \left( \int_0^T f(x) dx \right)^2 \le \int_0^T 2 M tf(t) dt $$ and then send $T \to \infty$ to get $$ \left( \int_0^\infty f(x) dx \right)^2 \le \int_0^\infty 2 M xf(x) dx. $$
Manipulating a version of summations to produce the closed formula for the Catalan Numbers. Manipulate $g(x) = \frac{1}{2x}\left(1 - \sum_{k=0}^{\infty}{1/2 \choose k}(-4x)^k\right)$ into $g(x) = \sum_{n=0}^{\infty}{1/2 \choose n+1}(-1)^n2^{2n+1}x^n$ I know that I should substitute n + 1 for k to allow the summation to begin at 0, so I can see where part of the second representation comes from, but am lost on how to manipulate the other parts to obtain the desired form. Can anyone explain how to do so?
A slightly different variation: \begin{align*} \frac{1}{2x}\left(1-\sum_{k=0}^\infty\binom{\frac{1}{2}}{k}(-4x)^k\right)&= \frac{1}{2x}\left(-\sum_{k=1}^\infty\binom{\frac{1}{2}}{k}(-4x)^k\right)\tag{1}\\ &=\frac{1}{2x}\left(-\sum_{k=1}^\infty\binom{\frac{1}{2}}{k}(-1)^k2^{2k}x^k\right)\tag{2}\\ &=\sum_{k=1}^\infty\binom{\frac{1}{2}}{k}(-1)^{k-1}2^{2k-1}x^{k-1}\tag{3}\\ &=\sum_{k=0}^\infty\binom{\frac{1}{2}}{k+1}(-1)^{k}2^{2k+1}x^{k}\tag{4}\\ \end{align*} Comment: * *In (1) the summand with $k=0$ cancels out *In (2) we write $(-4x)^k=(-1)^k2^{2k}x^k$ *In (3) we multiply each summand of the series with $-\frac{1}{2x}$ *In (4) we shift the index $k$ by one to start from zero
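A quick check (my addition) that these coefficients are indeed the Catalan numbers, using exact rational arithmetic in Python:

from fractions import Fraction
from math import comb

def gen_binom(k):
    # generalized binomial coefficient C(1/2, k)
    result = Fraction(1)
    for i in range(k):
        result *= (Fraction(1, 2) - i) / (i + 1)
    return result

for n in range(6):
    coeff = gen_binom(n + 1) * (-1)**n * 2**(2*n + 1)
    catalan = Fraction(comb(2*n, n), n + 1)
    print(n, coeff, catalan, coeff == catalan)  # 1, 1, 2, 5, 14, 42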
Visual proof that $\lim_{x\to \infty} \frac{x^r}{a^x} = 0$ I am looking for a visual proof (in this sense: https://mathoverflow.net/questions/8846/proofs-without-words) for the fact that $$ \lim_{x\to \infty} \frac{x^r}{a^x} = 0 $$ ($r > 0,a>1$) Or at least for a simple (high school level) argument that makes this relation plausible (it doesn't have to be a rigorous mathematical proof - I know how to prove it rigorously, but that's not what I am looking for).
Every time $x$ increases by $1$, the bottom gets multiplied by $a$ and the top gets multiplied by $$\left(\frac{x+1}{x}\right)^r = \left(1+\frac{1}{x}\right)^r \approx 1+\frac{r}{x}$$ So the bottom keeps growing by the same factor while the factor that the top grows by gets smaller and smaller, and will eventually get smaller than $a$. In the long run (i.e. as $x \to\infty$) the bottom will win out, driving the whole expression to $0$. (This does make an appeal to a binomial theorem approximation.) It's not intuitive, and probably doesn't have the "Aha!" appeal of a visual proof, but I think it would help high schoolers start to think about comparing different rates of growth and help them build their intuition. And if I were presenting this to high schoolers I would definitely start off showing how it worked in a specific simple example, like $\frac{x^2}{2^x}$. Write ten terms of $x^2$, note that they're increasing, but then write ten terms of $2^x$ and note how they crush the terms on the top. Then make the generalization, and maybe note that if $a$ is close to $1$ and $r$ is large it will take more terms for the bottom to win, but it will eventually, and we are going to $\infty$.
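The specific example suggested at the end is easy to tabulate (my addition):

for x in range(1, 16):
    top, bottom = x**2, 2**x
    print(x, top, bottom, top / bottom)
# the ratio peaks at x = 3 (9/8 = 1.125) and then decays rapidly toward 0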
We know that $A^{23} = 0$. What are the eigenvalues of $A$? Let $A$ be an $n \times n$ matrix. We know that $A^{23} = 0$. What are the eigenvalues of $A$? I think it's just $0$, but I'm not sure. How should I do this problem?
Since $A^{23}=0$, $A$ is a nilpotent matrix. What about the eigenvalues of a nilpotent matrix? (Hint: if $Av=\lambda v$ with $v\neq 0$, then $0=A^{23}v=\lambda^{23}v$.)
How can I find the volume of this strange body? I need to calculate the volume of a body using multiple integrals. The body's boundary is defined by a surface whose formula is $$0 \leq z \leq c \cdot \sin\left(\pi \sqrt{{{x^2} \over{a^2}} + {{y^2} \over{b^2}}}\right)$$ where $a$, $b$ and $c$ are some positive parameters. The problem is that I have no idea what this body looks like. It's the first time I'm meeting the sine function in multiple integrals and it totally confuses me. I've tried to use the most popular substitutions, such as cylindrical and spherical coordinates, but it didn't help at all. Thanks for any help!
For $a = b = c = 1$, the surface looks like this (the image is omitted here). However, the surface intersects the $z = 0$ plane, hence the body is not connected: the sine is nonnegative only on the annular regions where $\sqrt{x^2+y^2}\in[2k,2k+1]$, so there is one component above each such annulus. The volume of each component can be computed using polar coordinates. The total volume is $\infty$.
How to input presentations of large p-groups in GAP I am trying to find the best way to input large p-groups into GAP. I have tried looking at Print(GapInputPcGroup(G,"g")); but I would like to know whether it is possible to avoid telling GAP by hand the consequences of a commutator/conjugate relation (perhaps I need to use some sort of collector?). To make the question concrete: as an example, how can I best input the group below into GAP by hand: $G=\langle a,b \mid a^{3^{5}}=1,\ b^{3^{3}}=1,\ b^{a}=ba^{9} \rangle$ Can I just give GAP the commutator or conjugacy relation between $b,a$? For your reference, I can get this group in a different way and then GapInputPcGroup gives:

Print(GapInputPcGroup(G,"g"));
g:=function()
local g1,g2,g3,g4,g5,g6,g7,g8,r,f,g,rws,x;
f:=FreeGroup(IsSyllableWordsFamily,8);
g:=GeneratorsOfGroup(f);
g1:=g[1]; g2:=g[2]; g3:=g[3]; g4:=g[4];
g5:=g[5]; g6:=g[6]; g7:=g[7]; g8:=g[8];
rws:=SingleCollector(f,[ 3, 3, 3, 3, 3, 3, 3, 3 ]);
r:=[ [1,g3], [2,g4], [3,g5], [4,g6], [5,g7], [7,g8] ];
for x in r do SetPower(rws,x[1],x[2]); od;
r:=[ [2,1,g5], [4,1,g7], [6,1,g8], [3,2,g7^2*g8^2], [5,2,g8^2], [4,3,g8] ];
for x in r do SetCommutator(rws,x[1],x[2],x[3]); od;
return GroupByRwsNC(rws);
end;
g:=g();
Print("#I A group of order ",Size(g)," has been defined.\n");
Print("#I It is called g\n");

Thank you for your help.
You can do this by first defining the finitely presented group and then computing a homomorphism onto its largest $p$-quotient, which in this case is isomorphic to the group itself. The image of the homomorphism is what you are trying to define.

gap> F:=FreeGroup(2);;
gap> a:=F.1;; b:=F.2;;
gap> rels := [a^(3^5), b^(3^5), b^a/(b*a^9)];;
gap> G := F/rels;;
gap> Size(G);
59049
gap> hom := EpimorphismPGroup(G,3);;
gap> Size(Image(hom));
59049
How to establish a formula for $\int_{-\infty}^\infty \frac{e^x(1-e^x)}{1-e^{n x}}dx$ I am finding $$\int_{-\infty}^{+\infty} \frac{e^x(1-e^x)}{1-e^{nx}}dx$$ for positive integer $n$. I think maybe use contour integral. Let $w = nx$, we will get $$\int_{-\infty}^{+\infty} \frac{e^x(1-e^x)}{1-e^{nx}}dx = \int_{-\infty}^{+\infty} \frac{e^{w/n}(1-e^{w/n})}{1-e^{w}}dw$$ Denominator $=0$ when $w=2\pi ki$, where $k\in\mathbb{Z}$. What is the proper contour for this? Thank you.
As dezdichado suggested, we substitute $e^x = t$ and get \begin{equation*} I = \int_{0}^{\infty}\dfrac{1-t}{1-t^n}\, dt. \end{equation*} To proceed we integrate $\dfrac{\log(z)(1-z)}{1-z^n}$, where \begin{equation*} \log(z) = \ln|z| + i\arg(z),\quad \text{ with } 0<\arg(z) < 2\pi , \end{equation*} around a keyhole contour. Then we get \begin{equation*} \int_{0}^{\infty}\dfrac{\log(t)(1-t)}{1-t^n}\, dt - \int_{0}^{\infty}\dfrac{(\log(t)+i2\pi)(1-t)}{1-t^n}\, dt = 2\pi i\sum_{k=1}^{n-1}{\rm Res}_{z=z_{k}}\dfrac{\log(z)(1-z)}{1-z^n} \end{equation*} where $z_{k} = \exp\left(\dfrac{2k\pi i}{n}\right)$. Consequently \begin{gather*} I = -\sum_{k=1}^{n-1}{\rm Res}_{z=z_{k}}\dfrac{\log(z)(1-z)}{1-z^n} = -\sum_{k=1}^{n-1}\dfrac{2k\pi i}{n}\dfrac{1-\exp\left(\dfrac{2k\pi i}{n}\right)}{-n\exp\left(\dfrac{2k\pi i(n-1)}{n}\right)} =\notag\\[2ex] \dfrac{2\pi i}{n^2}\sum_{k=1}^{n-1}k\left(\exp\left(\dfrac{2k\pi i}{n}\right)-\exp\left(\dfrac{4k\pi i}{n}\right)\right). \tag{1} \end{gather*} To calculate that sum we use that \begin{equation*} \sum_{k=0}^{n-1}z^k = \dfrac{z^n-1}{z-1}, \quad \text{ if } z\neq 1. \end{equation*} Differentiation followed by multiplication by $z$ yields \begin{equation*} \sum_{k=1}^{n-1}kz^k = \dfrac{nz^n}{z-1}-z\dfrac{z^n-1}{(z-1)^2}.\tag{2} \end{equation*} If $z = \exp\left(\dfrac{2\pi i}{n}\right)$ or $z = \exp\left(\dfrac{4\pi i}{n}\right)$ then $z^n = 1$ and we can use (2) to finish (1). \begin{equation*} I = \dfrac{2\pi i}{n^2}\left(\dfrac{n}{\exp\left(\dfrac{2\pi i}{n}\right)-1}- \dfrac{n}{\exp\left(\dfrac{4\pi i}{n}\right)-1}\right) = \dfrac{2\pi i}{n} \dfrac{\exp\left(\dfrac{2\pi i}{n}\right)}{\exp\left(\dfrac{4\pi i}{n}\right)-1} = \dfrac{\pi}{n\sin\left(\dfrac{2\pi}{n}\right)}. \end{equation*}
Relation between $\zeta(s), \ Re(s) < 1$ and the summation $\sum_{k=1}^\infty k^{-s}$ First thing I want to mention is that this is not a topic about why $1+2+3+... = -1/12$ but rather the connection between this summation and $\zeta$. I perfectly understand that the definition using the summation $\sum_{k=1}^\infty k^{-s}$ of the zeta function is only valid for $Re(s) > 1$ and that the function is then extrapolated through analytic continuation in the whole complex plan. However some details bother me : Why can we manipulate the sum and still obtain correct final answer. $$ S_1 = 1-1+1-1+1-1+... = 1-(1-1+1-1+1-...)= 1-S_1 \implies S_1 = \frac{1}{2} \\ S_2 = 1-2+3-4+5-... \implies S_2 - S_1 = 0-1+2-3+4-5... = -S_2 \implies S_2 = \frac{1}{4} \\ S = 1+2+3+4+5+... \implies S-S_2 = 4(1+2+3+4+...) = 4S \implies S = -\frac{1}{12} \\ S "=" \zeta(-1) $$ Clearly these manipulations are not legal since we're dealing with infinite non-converging sums. But it works ! Why ? Is there a real connection between the analytic continuation which yields the "true" value $\zeta(-1) = -1/12$ and these "forbidden manipulations" ? Could we somehow consider these manipulations as "continuation of non-converging sums" ? If so, is there a well-defined framework with defined rules because it is clear that we must be careful when playing with non-converging sums if we don't want to break the mathematics ! (For example Riemann rearrangement theorem) And since it seems that these illegal operations can be used to compute some value of zeta in the extended domain $Re(s) < 1$, are there other examples of such derivations, for example $0 = \zeta(-2) "=" 1^2 + 2^2 + 3^2 + 4^2 + ...$ ? Hopefully this is not an umpteenth vague question about zeta and $1+2+3+4...$ I did some research about it but couldn't find any satisfying answer. Thanks !
They "work" because the manipulations you present work where the original definition was valid. Notice that: $$\eta(s)=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}$$ This function can be made to converge for all $s:$ $$\eta(s)=\lim_{r\to1^{-1}}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}r^n$$ Whereupon we find that for $s=0$, $$\begin{align}S_{1r}&=r-r^2+r^3-\dots\\&=r-r(r-r-r^2+r^3-\dots)\\&=r-rS_{1r}\\\implies S_{1r}&=\frac r{1+r}\end{align}$$ Then take $r\to1$ to get $S_1=1/2$. Notice the similarities and differences between this and your method. In the same manner, $$\begin{align}S_{2r}&=r-2r^2+3r^3-\dots\\&=(r-r^2+r^3-\dots)-r(r-2r^2+3r^3-\dots)\\&=S_{1r}-rS_{2r}\\\implies S_{2r}&=\frac r{(1+r)^2}\end{align}$$ Letting $r\to1$, we get $S_2=1/4$. Notice that in each of the steps above, if we replace $r$ with $1$, we get the methods you present, despite that they don't really make sense in that way. Also notice that, by manipulation of the original definitions, $$\begin{align}\zeta(s)-\eta(s)&=2\left(\frac1{2^s}+\frac1{4^s}+\frac1{6^s}+\dots\right)\\&=2^{1-s}\left(\frac1{1^s}+\frac1{2^s}+\frac1{3^s}+\dots\right)\\&=2^{1-s}\zeta(s)\end{align}$$ $$\zeta(s)=\frac1{1-2^{1-s}}\eta(s)=\frac1{1-2^{1-s}}\lim_{r\to1^{-1}}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}r^n$$ In this way, everything you have presented makes sense in the context of $\Re(s)>1$, so it holds for all $s$ by analytic continuation. Notice things like the Riemann rearrangement theorem does not come in to play due to absolute convergence for $\Re(s)>1$, where we derive these formulas. Also, $S_r$ converges absolutely for any $|r|<1$. Also notice that what determines where a certain set of parentheses are allowed comes from these convergent scenarios. From all of the above, I note that instead of using algebra, derivatives may be used to give $$\zeta(-s)=\frac1{1-2^{1+s}}\lim_{r\to1}\left[\underbrace{r\frac r{dr}r\frac r{dr}\dots r\frac r{dr}}_s\frac{-1}{1+r}\right]$$ For whole numbers $s>0$. The rules that allow this to work is to work only with convergent sums that are analytic continuations of the original sums. Then, everything becomes normal and there are not so many worries.
Prove that $P(\bigcup_{i=1}^{\infty} A_i) = 1$ $A_i$ $(i=1,2,...)$ are independent events $\sum_{i=1}^{\infty}P(A_i) = \infty.$ Prove that: $P(\bigcup_{i=1}^{\infty} A_i) = 1 $ Can someone please help me out with this question?
If $P\left( \bigcup_{n=1}^\infty A_n \right) < 1,$ then $P\left( \bigcap_{n=1}^\infty (\text{not } A_n) \right)>0.$ By independence, we have $$ \prod_{n=1}^\infty P(\text{not } A_n) > 0, \text{ so } \log\prod_{n=1}^\infty (1-P(A_n))>-\infty. $$ $$ \sum_{n=1}^\infty -\log (1-P(A_n)) <\infty. $$ If you can show that when $0\le p<1$ then $p\le -\log(1-p),$ then you get $$ \sum_{n=1}^\infty P(A_n) < \infty. $$ The inequality $p\le -\log(1-p)$ depends on the base of the logarithm being $\le e$ but $>1;$ however, this is not essential, because for any base $>1$ one can find a suitable positive constant and say $p\le -(\text{constant}\cdot\log(1-p)).$ Observe that if $\sum_{n=1}^\infty p_n = \infty$ then $\sum_{n=\mathbf N}^\infty p_n=\infty$ no matter how big $\mathbf N$ gets. Thus after any such index $\mathbf N$, one can find some $n$ for which $A_n$ occurs. Therefore, infinitely many of them occur. Consequently this whole argument amounts to a proof of one of the Borel–Cantelli lemmas.
A curve consists of all of the points $(x, y)$ in the Cartesian plane with the sum of the distances from $(x, y)$ to $(1, 0)$ and to $(−1, 0)$ is 4 A curve consists of all of the points $(x, y)$ in the Cartesian plane such that the sum of the distances from $(x, y)$ to $(1, 0)$ and to $(−1, 0)$ is $4$. Find an equation for the curve that does not employ square roots.
This should help you: the curve is an ellipse with foci $(\pm1,0)$ (the hint here was originally an image, omitted in this copy). The rest will follow...
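For completeness, a sketch of the standard algebraic route (my addition, since the original hint was an image): start from $$\sqrt{(x-1)^2+y^2}+\sqrt{(x+1)^2+y^2}=4,$$ isolate one square root and square both sides to get $$(x-1)^2+y^2 = 16 - 8\sqrt{(x+1)^2+y^2} + (x+1)^2+y^2,$$ which simplifies to $2\sqrt{(x+1)^2+y^2} = 4+x$. Squaring once more gives $4(x^2+2x+1+y^2)=16+8x+x^2$, i.e. $3x^2+4y^2=12$, so $$\frac{x^2}{4}+\frac{y^2}{3}=1.$$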
absolute convergence of power series I'd like to determine the radius of convergence R of the power series $$i) \sum_{k=1}^{\infty} \frac{x^{k}}{1+x^{k}}, \ x\in \mathbb{R} \ \ \ \ ii)\sum_{k=1}^{\infty} 2^{k}\cdot x^{k^{2}}, \ x\in \mathbb{R}$$ . My ideas: i) Root test: $$ \sqrt[k]{\vert \frac{x^{k}}{1+x^{k}} \vert} = \frac{\vert x\vert}{\sqrt[k]{\vert 1+x^{k} \vert}} \Rightarrow \vert x \vert < \sqrt[k]{\vert 1+x^{k} \vert} $$ But I don't know how to determine $$ \vert x \vert $$ for convergence. ii) Root test: $$ \sqrt[k]{\vert 2^{k}x^{k^{2}}\vert}=2\cdot \vert x \vert ^{k} \Rightarrow \limsup\limits_{k \rightarrow \infty}({2 \cdot \vert x \vert^{k} })<1 \Leftrightarrow \vert x \vert <1 \Rightarrow absolute \ convergence \ for \ \vert x \vert < 1$$ ?
i) $\lim_{k\to\infty} \frac {|x|}{\sqrt[k]{|1+x^k|}} = |x|$ when $|x|<1$, so the series converges by the root test. If $|x|\ge 1$ then $\lim_{k\to\infty} \frac {|x|}{\sqrt[k]{|1+x^k|}} = 1$ and the root test is inconclusive. But if $x>1$ then $\lim_{k\to\infty} \frac {x^k}{1+x^k} = 1$, and it is necessary for the terms to approach $0$ in order for the series to converge. So the series converges for $|x|<1$. ii) is a little easier: by the root test, $(2^k|x|^{k^2})^{\frac 1k} = 2 |x|^{k}$, and $\lim_{k\to\infty} 2|x|^k$ is $0$ when $|x|<1$, $2$ when $|x|=1$, and $\infty$ when $|x|>1$. Again $|x|<1$.
Why does changing the bounds of integration result in the negative of the original integral? If we were to have the integral from $a$ to $b$ of some function $f(x)$, then that integral would represent the area under the curve from $a$ to $b$. Now, based on the properties of integrals given to us in school, we are told that changing the bounds from $b$ to $a$ results in the negative of the original integral. However, why would it result in the negative? You would think that it doesn't matter if you go from $a$ to $b$ or from $b$ to $a$, as it would result in the same area under the curve. So my question is: why is the integral negative when changing the bounds?
It seems to me that this is purely conventional. As far as I know, the main advantage of this convention is that the relation $$ \int_a^b f(t) dt + \int_b^cf(t) dt = \int_a^c f(t)dt $$ becomes true independently of the ordering of the values $a, b, c$, which is a great formal benefit. Also note that in Lebesgue integration, the natural notion is that of the integral of $f$ over a set, so one speaks of $$ \int_{[a, b]}f(t) dt $$ The convention for the bounds is not used at all in the Lebesgue integral. There are some domains such as complex analysis or differential forms where the natural notion is that of an integral along an oriented path. The same sign property holds when the orientation of the path is reversed. In these cases the property can be proved from the definition of these integrals.
What does this surface look like? The formula is $({x^4 \over{a^4}} + {y^2 \over{b^2}} + {z^2 \over{c^2}})^2 = {x^2 \over{p^2}}$, where $a,b,c,p$ are positive. The left part is pretty similar to an ellipsoid, but there are a few differences. I have also tried to use the difference of two squares and bracket expansion, but it didn't help. And the second question: what is the volume of this body? Thanks for any help!
After anisotropic scaling you can consider the reduced equation $$(x^4+y^2+z^2)^2=x^2.$$ It is a surface of revolution about the $x$-axis, and its section by the plane $z=0$ is $$(x^4+y^2)^2=x^2,$$ or $$y=\pm\sqrt{|x|-x^4}.$$ The study is not very difficult. So the surface is an hourglass stretched along the three axes. The volume is given by integrating the area of the cross section by planes parallel to the $yz$ plane, $\pi y^2=\pi(|x|-x^4)$, over the whole $x$ range ($|x|\le 1$); then apply the scaling factors.
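In the reduced case the volume integral can be evaluated exactly; a quick check (my addition) with SymPy:

from sympy import symbols, integrate, pi

x = symbols('x')
# cross-sectional area pi*(|x| - x**4), nonzero for |x| <= 1; use symmetry in x
V = 2 * integrate(pi * (x - x**4), (x, 0, 1))
print(V)  # 3*pi/5

For the general parameters one then rescales accordingly, as the answer notes.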
A planar graph $G$ without $4$-cycles or $5$-cycles with $mad(G)\geq\frac{10}{3}$ There is a well-known theorem about the girth number $g(G)$ and $mad(G)$ in planar graphs: $mad(G)\leq\frac{2g(G)}{g(G)-2}$. If a planar graph $G$ has $g(G)=5$, then $mad(G)\leq\frac{10}{3}$. Is there an example of a planar graph $G$ without $4$-cycles or $5$-cycles with $mad(G)\geq\frac{10}{3}$? If it is false, then the class of planar graphs without $4$-cycles or $5$-cycles behaves, with respect to this bound, like a subset of the class of planar graphs without $3$-cycles or $4$-cycles.
The best known bounds on the strong oriented chromatic number of planar graphs with girth $5, 6$ and $12$ are obtained via the maximum average degree. Therefore, to get bounds on the strong oriented chromatic number of planar graphs without cycles of lengths $4$ to $i, i \geq 4$, it is natural to determine the maximum average degree of these classes. The following two lemmas give tight bounds on the maximum average degree of planar graphs without cycles of lengths $4$ to $i$ for all $i \geq 4$. $(1)$ If $G$ is a planar graph without cycles of length $4$, then $\operatorname{mad}(G)<\frac{30}{7}$. $(2)$ For all $\epsilon >0$, there exists a planar graph $G$ without cycles of lengths $4$ to $i$ such that $\operatorname{mad}(G) >3+\frac{3}{i-2}-\epsilon$. $(3)$ For all $i \geq 5$, if $G$ is a planar graph without cycles of lengths $4$ to $i$, then $\operatorname{mad}(G) < 3+ \frac{3}{i-2}$. $(4)$ For all $i\geq 5$, for all $\epsilon >0$, there exists a planar graph $G$ without cycles of length $4$ to $i$ such that $\operatorname{mad}(G) >3+\frac{3}{i-2}-\epsilon$. Thus by $(3)$, we have, that every planar graph $G$ without cycles of length $4$ to $11$ has $\operatorname{mad}(G)<3+\frac{3}{11-2} =\frac{10}{3}$. EDIT: These cases have been discussed in the paper Strong oriented chromatic number of planar graphs without short cycles in page $4$. Hope it helps you in a small way.
Proof of an equivalent definition of strictly convex? $X$ is a normed space. If for all $x,y\in X$ such that $\|x\|=\|y\|=1, x\neq y$, we have that $\|\frac{x+y}{2}\|<1$, then we know that $X$ is strictly convex. How can I show that for all $\lambda \in (0,1)$, $\|\lambda x + (1-\lambda) y\|<1$ always holds?
$$\|(x+y)/2\|< 1 \implies \|x+y\|<2\tag1\label1$$ $$ax+(1-a)y = a(x+y)+(1-2a)y\tag2\label2$$ $$ax+(1-a)y = (1-a)(x+y)+(2a-1)x\tag3\label3$$ Break it into cases: Case $1$: $a\leq1/2$ Use the triangle inequality on $\eqref2$ and substitute $\eqref1$. Case $2$: $a\geq1/2$ Use triangle inequality on $\eqref3$ and substitute $\eqref1$. Note: $\|cx\| = c\|x\|$ for $c\geq0$.
Show that for every curve $\gamma$ in $\mathbb{R^m}$ we obtain the same velocity vector in $\mathbb{R^n}$ Let $f:\mathbb R^m \rightarrow \mathbb R^n$ be differentiable at the point $p\in \mathbb R^m$. Prove that for every curve $\gamma: I \rightarrow \mathbb R^m$, such that $\gamma(0)=p$ and $\gamma'(0)=v$, we obtain the same velocity vector at $t=0$ for the curve in $\mathbb R^n$ given by $\phi(t)=(f \circ \gamma) (t)$. Attempt: Let $\phi(t)=(f \circ \gamma) (t).$ Differentiating, the chain rule gives us that $\phi'(t) = (\nabla f) (\gamma(t)) \cdot \gamma'(t)$. In particular, for $t=0$ we have that $\phi'(0) = (\nabla f)(\gamma(0)) \cdot \gamma'(0)= \nabla f(p) \cdot v$. Edit: I realized that I was using the chain rule for a scalar field, which doesn't make sense, since $f$ is a vector field. So the chain rule should give us for $t=0$: $\phi'(0)=Df(\gamma(0)) \gamma'(0) = Df(p) v$, which is a vector of dimension $n \times 1$. Is the fact that this derivative is independent of the curve $\gamma$ sufficient to conclude that we obtain the same velocity vector?
To expand on Daniel's answer, a more precise statement is that if $\gamma_1$ and $\gamma_2$ are two curves in $\mathbb{R}^m$ with $\gamma_1(0)=\gamma_2(0)$ and $\gamma_1^\prime(0)=\gamma_2^\prime(0)$ and $f$ a map $\mathbb{R}^m \rightarrow \mathbb{R}^n$ then $(f \circ \gamma_1) (0)= (f \circ \gamma_2)(0)$ and $(f \circ \gamma_1)^\prime (0)= (f \circ \gamma_2)^\prime(0)$. To see that $(f \circ \gamma_1) (0)= (f \circ \gamma_2)(0)$, use the definition of composition \begin{align*} (f \circ \gamma_1) (0) &= f(\gamma_1(0)) \\ &= f(\gamma_2(0)) \\ &= (f \circ \gamma_2)(0). \end{align*} To see that $(f \circ \gamma_1)^\prime (0)= (f \circ \gamma_2)^\prime(0)$, we use the chain rule to calculate \begin{align*} (f \circ \gamma_1)^\prime(0) &= \sum_{\mu=1}^m \frac{\partial f }{\partial x^\mu} \frac{d\gamma_1^\mu}{dt}\Bigr|_{t=0} \\ &= \sum_{\mu=1}^m \frac{\partial f }{\partial x^\mu} \frac{d\gamma_2^\mu}{dt}\Bigr|_{t=0} \\ &= (f \circ \gamma_2)^\prime(0) \end{align*} where $x^\mu$ are co-ordinates on $\mathbb{R}^m$ and $\gamma_1^\mu,\gamma_2^\mu$ are the co-ordinates of $\gamma_1,\gamma_2$ respectively (I swept a bit of stuff about where everything is evaluated under the carpet when I used the chain rule here).
Combinatorial coefficients squared If $C_0,C_1,C_2,...,C_n$ are the combinatorial coefficients in the expansion of $(1+x)^n$ then prove that: $$1C_0^2+3C_1^2+5C_2^2+...+(2n+1)C_n^2=\dfrac{(n+1)(2n)!}{n!n!}=(n+1)\binom {2n}n$$ I am able to compute the linear addition but not the squares of coefficients. Thanks!
$$\sum_{k=0}^{n}(2k+1)\binom{n}{k}^2=\sum_{k=0}^{n}(2k+1)\binom{n}{k}\binom{n}{n-k}=[x^n]\left[(1+x)^n\sum_{k=0}^{n}\binom{n}{k}(2k+1)x^k\right] $$ but by setting $x=z^2$ we have: $$\sum_{k=0}^{n}\binom{n}{k}(2k+1)x^k=\frac{d}{dz}\sum_{k=0}^{n}\binom{n}{k}z^{2k+1}=(1+x)^{n-1}(1+(2n+1)x)$$ hence: $$\sum_{k=0}^{n}(2k+1)\binom{n}{k}^2= [x^n]\left[(1+x)^{2n}+2nx(1+x)^{2n-1}\right]=\binom{2n}{n}+2n\binom{2n-1}{n}$$ and simplifying: $$\sum_{k=0}^{n}(2k+1)\binom{n}{k}^2=(n+1)\binom{2n}{n} $$ as wanted.
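A brute-force check of the identity for small $n$ (my addition):

from math import comb

for n in range(1, 9):
    lhs = sum((2*k + 1) * comb(n, k)**2 for k in range(n + 1))
    rhs = (n + 1) * comb(2*n, n)
    print(n, lhs, rhs, lhs == rhs)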
soft question: explaining proportions/percentages in simple terms I know this is a fairly easy question but I haven't been able to word it into Google so as it would give me a substantive list of resources. Here's my question: If a process is 25% efficient, I'd multiply (1/0.25) by the output, which would yield what is necessary for a 100% efficient output. Is there a way to verbalize what this division is actually doing? (i.e. what are the units of 1 and 0.25)
The simple term you are looking for is to scale something. Your example: A process is 25% efficient. To find 100% efficient output, multiply (1/0.25) by the output. You are scaling the process. Here this means to multiply it by a scalar -- in this case a dimensionless quantity in which the units of a percentage-to-percentage ratio cancel out. So: A process is 25% efficient. Scale it by the inverse of that efficiency (1/0.25) to find the 100% efficient output. In general, "scale by the inverse" is one simple and compact way of describing the act of canceling out a ratio through scaling by its inverse. I think this is what you mean by asking how to verbalize "what this division is actually doing."
Differentiate $\ln(100|x|)$ I am asked to find $\frac{dy}{dx}$ for $y = \ln(100|x|)$. \begin{eqnarray} \frac{d\,\ln(100|x|)}{dx} &=& \frac{d\,\ln(100|x|)}{d\,(100|x|)} \cdot \frac{d\,(100|x|)}{dx}\\ &=&\frac{1}{100|x|}\cdot \frac{d\,(100|x|)}{dx} \end{eqnarray} I'm not sure how to calculate the derivative of $100|x|$. Any hints would be appreciated.
Notice that $$\ln(100|x|)=\ln(100)+\ln|x|$$ Thus, for $x\neq 0$ the derivative is simply given as $$\frac d{dx}\ln(100|x|)=\frac1x$$
The ring of real "entire" functions: gcd domain? It is known that the rings of complex entire functions and the rings of real analytic functions are actually gcd domains (see also this MSE post). I just discovered (somewhat to my naive surprise) that for functions $f : \mathbb{R} \to \mathbb{R}$, being real analytic and real entire are not the same. In other words, having a local series expansion does not guarantee a global series expansion, a simple counterexample being provided by the function $f(x) = \frac{1}{1 + x^2}$. I am now wondering about the following Question: Is the ring of real entire functions a gcd domain? It seems that the usual proofs do not work as inverting a non-zero real entire function does not keep it real entire any more. Any help will be most appreciated.
I don't have enough reputation to comment, so I will answer. Since you say that the ring of complex entire functions is a gcd domain, can't you do the following? Take two real entire functions $f$ and $g$, complexify them to form complex entire functions, let $h = \text{gcd} (f, g)$ be written as $h = \alpha f + \beta g$, and then take the real part of the restriction of all of these functions on the real axis. That should give you a gcd.
Absolute convergence of $\sum_{n=0}^{\infty}a_n\implies\sum_{n=0}^{\infty}a_n^2$ is absolutely convergent too I am pretty sure that "absolute convergence of $\sum_{n=0}^{\infty}a_n\implies\sum_{n=0}^{\infty}a_n^2$ is absolutely convergent too" is a true statement, but before I prove it I ask whether this exercise is meant for series in $\mathbb{R}$ respectively $\mathbb{C}$, or whether it is meant for ANY series, like series of vectors in an unknown vector space. Could someone clarify this for me? I don't know whether I'm making sense, so here is what I stumbled on while searching for definitions of absolute convergence: https://proofwiki.org/wiki/Definition:Absolute_Convergence/General
Consider a series on a normed vector space $(V,\|\cdot\|)$ where for every $a\in V$ the element $a^2\in V$ is well defined and $\|a^2\| = {\|a\|}^2$. The series $\sum a_n = {\left\{ \sum_{n=1}^k a_n \right\}}_{k\ge 1}$ is absolutely convergent if $$ \sum \|a_n\| = \lim_{k\to \infty} \sum_{n=1}^k \|a_n\| < +\infty. $$ Let us now turn to the statement made. Since the series $\sum a_n$ converges absolutely, only a finite number of terms $a_n$ have the property $\|a_n\|>1$; otherwise there would be infinitely many terms $a_n$ with $\|a_n\|>1$, so $\sum_{\|a_n\|>1} 1 = +\infty$ and it would follow that $$ \sum \|a_n\| = \sum_{n\colon \|a_n\|>1} \|a_n\| + \sum_{n\colon \|a_n\|\le 1} \|a_n\| > \sum_{n\colon \|a_n\|>1} 1 + \sum_{n\colon \|a_n\|\le 1} 0 = +\infty, $$ which is a contradiction. Then, it follows \begin{align*} \sum \|a_n^2\| = \sum {\|a_n\|}^2 &= \sum_{n\colon \|a_n\|>1} {\|a_n\|}^2 + \sum_{n\colon \|a_n\|\le 1} {\|a_n\|}^2\\ &\le \sum_{n\colon \|a_n\|>1} {\|a_n\|}^2 + \sum_{n\colon \|a_n\|\le 1} \|a_n\| \\ &\le \sum_{n\colon \|a_n\|>1} {\|a_n\|}^2 + \sum \|a_n\| < +\infty, \end{align*} since $\sum_{n\colon \|a_n\|>1} {\|a_n\|}^2$ is a finite sum, ${\|a_n\|}^2 \le \|a_n\|$ if $\|a_n\|\le 1$, and $\sum_{n\colon \|a_n\|\le 1} \|a_n\| \le \sum \|a_n\|$. That is, $\sum a_n^2$ is absolutely convergent. Postscript: I use the notation $$ n\colon \|a_n\|\le 1 \quad\text{for}\quad \{n\in \mathbb{N}\colon \|a_n\|\le 1\}. $$ Also, note that the converse fails: if $a_n=\frac{1}{n},~n\ge 1$, then $$ \sum a_n^2 < +\infty \quad \text{but} \quad \sum a_n = +\infty. $$
Number of real solutions of an exponential and logarithmic equation The number of solutions of the equation $(x-2)+2\log_{2}(2^x+3x)=2^x$ $\bf{My\; Try::}$ We can write it as $2\log_{2}(2^x+3x) = 2^x-x+2$ Now let $f(x) = 2\log_{2}(2^x+3x)$ and $g(x)=2^x-x+2$ Here we have to calculate the number of points where $f(x)$ and $g(x)$ intersect each other. So here we will find the nature of $f(x)$ and $g(x)$. So $$f'(x) = \frac{1}{\ln(2)}\cdot\frac{2\cdot (2^x\ln (2)+3)}{2^x+3x}>0\ \forall x \in \bf{Domain} $$ because $2^x+3x>0$ on the domain of the logarithmic function, and $f(0) = 0$. So the function $f(x)$ is a strictly increasing function. Now $$g'(x) = 2^x\ln 2-1 >0\ \forall x \geq 1$$ and for $0\leq x<1\;,$ we have $1\leq 2^x<2\Rightarrow 1\cdot \ln(2)-1 \leq 2^x\ln(2)-1<2\cdot \ln(2)-1.$ So the minimum of the function is at some $x=\alpha \in (0,1)$. Now $g'(x)>0$ for $x>\alpha$ and $g'(x)<0$ for $x<\alpha$, where $\alpha \in (0,1)$. So a solution of $f(x) = g(x)$ exists when $x>0$. Now how can I solve it after that? Help required, thanks
Plot the function $f(x)=(x-2)+2\log_{2}(2^x+3x)-2^x$; you can find there are only two solutions. So it suffices to: 1. show $f(2)>0$ and $f(\pm \infty) <0$; 2. show $f$ is concave (i.e. its second derivative is negative).
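A numerical sketch (my addition) locating the two roots by plain bisection, with brackets read off from a plot:

import math

def f(x):
    return (x - 2) + 2 * math.log2(2**x + 3*x) - 2**x

def bisect(lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(bisect(0.1, 2.0))  # root below x = 2
print(bisect(2.0, 5.0))  # root above x = 2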
Understanding that there are infinitely many primes of form $4n+3$ I read the proof that there are infinitely many primes of form $4n+3$ and it goes here: Proof. In anticipation of a contradiction, let us assume that there exist only finitely many primes of the form $4n+3$; call them $q_1,q_2,\ldots ,q_s$. Consider the positive integer $$N=4q_1q_2\cdots q_s -1 = 4(q_1q_2\cdots q_s -1)+3$$ and let $N=r_1r_2\cdots r_t$ be its prime factorization. Because $N$ is an odd integer, we have $r_k\ne 2$ for all $k$, so that each $r_k$ is either of the form $4n+1$ or $4n+3$. By the lemma, the product of any number of primes of the form $4n+1$ is again an integer of this type. For $N$ to take the form $4n+3$, as it clearly does, $N$ must contain at least one prime factor $r_i$ of the form $4n+3$. But $r_i$ cannot be found among the listing $q_1,q_2,\ldots ,q_s$, for this would lead to the contradiction that $r_i \mid 1$. The only possible conclusion is that there are infinitely many primes of the form $4n+3$. I need an explanation of the last three lines, i.e. the following. They said that:
* $q_{1},q_{2}, \cdots ,q_{s}$ are of the form $4n+3$ (by assumption).
* $r_{i}$ is of form $4n+3$ (at least one).
* $r_{i}$ cannot be found in the listing $q_{1},q_{2}, \cdots ,q_{s}$.
And the lemma they used is: The product of two or more integers of the form $4n+1$ is of the same form. My problems:
* If the above two hold, then why can $r_{i}$ not be found in the listing $q_{1},q_{2}, \cdots ,q_{s}$?
* And how does $r_{i}\mid 1$ follow?
* If all this holds, then why are there infinitely many $q$'s?
Please give the most elementary explanation you can; any help is worth a lot to me. Thanks. (I took this from David M. Burton's book.)
The proof assumes that there are only finitely many primes of the form $4n+3.$ They are listed as $q_1,\cdots,q_s.$ From them the number $N=4q_1\cdots q_s-1$ is defined. (Note that the proof argues by reductio ad absurdum. Do you remember Euclid's proof of the existence of infinitely many primes? He assumed that there are finitely many and arrived at a contradiction.) Then it is shown that $N$ is of the form $4n+3$ and its prime factorization is considered. Why must at least one of the $r_i$'s be of the form $4n+3?$ If all of them were of the form $4n+1$ then it would be $$N=r_1\cdots r_t=4n+1.$$ This is not possible because $N=4n+3.$ Finally, we have that $$N=4q_1\cdots q_s-1=r_1\cdots r_t.$$ If $r_i=q_j$ for some $j$ then we have that $r_i$ divides $4q_1\cdots q_s$ and $r_1\cdots r_t.$ So, it must divide $$1=4q_1\cdots q_s-r_1\cdots r_t.$$ And this is not possible. Thus we conclude that the number of primes of the form $4n+3$ must be infinite. Why? Because if we assume they are finitely many, say $q_1,\cdots, q_s,$ we find a prime number $r_i$ of the form $4n+3$ that is not in the list above. This contradicts the assumption that all prime numbers of the form $4n+3$ are $q_1,\cdots, q_s.$
NP-completeness and difficulty of all possible solutions Take the SAT problem (or any other NP-complete problem). Is every subset of SAT instances also NP-complete? Or maybe there are classes of instances that can be quickly solved? If I take a random logical formula of $n$ variables, will deciding it always be an NP-complete problem? Are there no exceptions for any particular forms of formulas?
Not all subsets of SAT are NP-complete. Schaefer's dichotomy theorem identifies infinite subsets of SAT that are in P, with the remainder being NP-complete. The known subsets that are in P are 2-SAT, HORN-SAT, ANTIHORN-SAT, XOR-SAT and SAT instances that can be satisfied with all-true or all-false assignments.
How to find $\inf\sqrt[r]{n_{1}n_{2}\cdots n_{r}}$ Let $r,n$ be given positive integers, and let $n_{1},\dots,n_{r}$ be positive integers such that $$n_{1}+n_{2}+\cdots+n_{r}=n.$$ Find $$\inf\sqrt[r]{n_{1}n_{2}\cdots n_{r}}.$$ I know that by AM-GM $$n_{1}+n_{2}+\cdots+n_{r}\ge r\sqrt[r]{n_{1}n_{2}\cdots n_{r}},$$ so we find the sup is $$\dfrac{n}{r}?$$ And how to find the inf?
Let $x>y$, where $\{x,y\}\subset\mathbb N$. Then $xy\geq(x+1)(y-1)$, so pushing one factor down to $1$ while increasing another never increases the product. Hence we get the minimum for $n_1=n_2=...=n_{r-1}=1$ and $n_r=n-r+1$, giving $\inf\sqrt[r]{n_1n_2\cdots n_r}=\sqrt[r]{n-r+1}$.
Can a bivariate polynomial be increasing in one non-zero direction only? Consider a bivariate polynomial $$f(x,y)=\sum_{i=0}^n \sum_{j=0}^m b_{ij}x^iy^j$$ defined on $[0,1]^2$. Suppose that on $[0,1]^2$ this polynomial weakly increases in direction $d=(d_1,d_2) \neq 0$: $$d_1 \frac{\partial f(x,y)}{\partial x} + d_2 \frac{\partial f(x,y)}{\partial y} \geq 0.$$ My question is whether it is possible to have a situation when such $d$ is the only non-trivial direction of monotonicity of $f$? That is, if there is another $\tilde{d} \neq 0$ such that $f$ is increasing in direction $\tilde{d}$ on $[0,1]^2$ then it has to be that $$\tilde{d}=\alpha d $$ for some $\alpha>0$?
I think I figured out an answer to my question. Yes, it can happen. E.g., it seems that the polynomial $$f(x,y)=x y(1-y), \quad (x,y) \in [0,1]^2$$ is increasing in the direction $d=(1,0)$ only. Indeed, along $y=0$ monotonicity in direction $(d_1,d_2)$ forces $d_2\geq0$, while along $y=1$ it forces $d_2\leq0$; hence $d_2=0$, and then $d_1\geq0$.
How many 19-bit strings can be generated from having an even number of ones? So essentially how many 19-bit strings can you make with 2 1's or 4 1's or .... or 18 1's? I know the # of 19-bit strings that can be produced with 2 1's would be 19!/17!2! and the number of 19-bit strings that can be produced with 4 1's would be 19!/15!4! ..... up until 19!/18! in the case where there are 18 1's. The thing I don't understand is how much overlap occurs. I know this problem has to do with inclusion-exclusion principle, I am just confused on how to calculate the intersection of every single possible outcome.
I like the accepted solution but I just wanted to say something about doing the mathematics the "hard" way. You know that there are $\binom m k = \frac{m!}{k!(m-k)!}$ ways to choose a set of $k$ items from a set of $m$ items. Those items can be numbered bits chosen to be 1s or 0s. Of course you know the identity that: $$\sum_{k=0}^m \binom m k = 2^m$$ because of course if you sum up all the ways you might choose $k$ bits to be 1, then you get all possible bitstrings. However there is a simple proof of this which hinges on the idea that these coefficients appear in the binomial expansion, $$\sum_{k=0}^m \binom m k ~x^k ~y^{m-k}= (x + y)^m.$$ Just plug in $x = y = 1$ to find the above formula as a special case. Well now you want to consider only even bitstrings, and so we can still "do it the hard way" by asking for a function of $k$ that is $1$ if $k$ is even or $0$ if $k$ is odd, and a great example is $[1 + (-1)^k]/2.$ Plugging that in, the sum that you want is just $$\sum_{k=0}^m \binom mk \frac{1 + (-1)^k}2 = \frac12 \sum_{k=0}^m \binom mk + \frac12 \sum_{k=0}^m \binom mk (-1)^k.$$ The first sum is clearly $2^m/2 = 2^{m-1}$ and the second sum we can use the above formula to reason that it is actually $(-1 + 1)^m/2 = 0^m/2 = 0.$ So you can do this the hard way if you'd prefer and you'll definitely get the same result.
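A brute-force confirmation of the $2^{m-1}$ count for $m=19$ (my addition):

m = 19
count = sum(1 for i in range(2**m) if bin(i).count('1') % 2 == 0)
print(count, 2**(m - 1))  # both 262144
# note the count includes the all-zeros string, since zero 1s is an even number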
Is the limit of $f(n) = n-n$ zero as $n\rightarrow \infty$? I have been working on a proof which involves sums and products going to infinity. I am wondering whether the following proof of a limit is valid, and whether that result would allow me to come to another conclusion. What is: $$\lim \limits_{n \to \infty} f(n)\text {, where }f(n) = n-n$$ I have worked this out to be $$\lim \limits_{n \to \infty} n-n = \lim \limits_{n \to \infty} n(1-1) = \lim \limits_{n \to \infty} n\cdot 0 = 0$$ I'm not sure whether this is the correct way of proving this limit, or whether the answer is correct. My math teacher had said that the whole limit raised a red flag in his mind, and he wasn't sure why. If my limit is correct, though, I would like to know whether the following is also valid: $$\lim \limits_{n \to \infty} f(n)\cdot n = 0$$
Yes this is correct. If you define $f(n)=n-n$ it is sufficient to notice that $f(n) \equiv 0$
Matrices such that $w^\top A (w+v)=0$ for all $v\in w^\perp$ Given a non-zero vector $w$, is there a way of characterizing matrices $A$ such that $$\forall v\in w^\perp,\ w^\top A (w+v)=0?$$ For example, if $w^\top = [0,\ 1]$, then $v=[\alpha,\ 0]$ and $$w^\top A (w+v)=\begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} \alpha \\ 1 \end{bmatrix} = c\alpha+d$$ which must be zero for all $\alpha$, implying that $c=d=0$. I managed to generalize the result for $w^\top=[w_1,\ w_2]$ with $w_2\neq 0$ using an inelegant approach (expanding the product by hand) and found $c=-a w_1/w_2$ and $d=-bw_1/w_2$. Is there a simple extension in larger dimensions?
Let $z=A^\top w$. Then your condition is simply asking for the hyperplane $\{v : z^\top v = -z^\top w\}$ to be the same as the hyperplane $w^\perp = \{v : w^\top v = 0\}$. This implies both $$z^\top w=0$$ and $$z \in \operatorname{span}(w).$$ Together, we have $z=0$, i.e. $$A^\top w=0.$$ If you check your examples, this exactly characterizes the matrices $A$ you found.
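A quick numerical check of this characterization (my addition, assuming NumPy):

import numpy as np

w = np.array([3.0, 4.0])
q = np.array([4.0, -3.0])     # q is orthogonal to w
A = np.outer(q, [1.0, 2.0])   # any such A satisfies A.T @ w = 0
for t in (-2.0, -1.0, 0.5, 2.0):
    v = t * q                 # v ranges over w-perp
    print(w @ A @ (w + v))    # 0.0 every time (up to rounding)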
Identity function that returns $1$ for the input $0$. I'm looking for a way to write the following function: \begin{equation} id(x) = \begin{cases} x & x \neq 0 \\ 1 & x = 0 \end{cases} \end{equation} However, I want to implement it without using conditionals. Any ideas? The simpler, the better.
As @JohnHughes points out, normal algebra can't help you. If you're willing to use the Kronecker delta function then $$ f(x) = x + \delta(x,0) $$ does what you want.
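In many languages the same trick works without an explicit conditional, because an equality test coerces to $0$ or $1$; a Python one-liner (my addition):

def f(x):
    return x + (x == 0)  # (x == 0) plays the role of the Kronecker delta

print(f(0), f(3), f(-2.5))  # 1 3 -2.5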
If one egg is found to be good, then what is the probability that the other is also good? A basket contains 10 eggs out of which 3 are rotten. Two eggs are taken out together at random. If one egg is found to be good, then what is the probability that the other is also good? I applied conditional probability. It says that one of them is good, so the probability of the other one being good can be found from the 9 eggs left, of which 6 are good, so Probability = $6/9$. Am I right with my understanding?
Let $E_k$ be the event that the $k$th egg is good. $P[E_2|E_1 ] = {P[E_1 \cap E_2] \over P[E_1]}$. $P[E_1] = {7 \over 10}$, $P[E_1 \cap E_2] = { \binom{7}{2}\over \binom{10}{2}} = {7 \over 15}$. Hence $P[E_2|E_1 ] = {10 \over 15} = {2 \over 3}$. Comment: The question is ambiguous, I interpreted "one egg is found to be good" as meaning the first checked egg is good. Another interpretation is to compute $P[N=2|N\ge 1]$ which is easily computed to be ${ 1\over 2}$ (where $N$ is the number of good eggs in the selection). However, I think this interpretation is less likely (unless elaborated otherwise) because if we assert $N\ge 1$, it means both eggs must have been checked, in which case the probability is academic.
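A Monte Carlo check of both interpretations (my addition):

import random

eggs = [1]*7 + [0]*3  # 1 = good, 0 = rotten
first_good = both_after_first = at_least_one = both_given_one = 0
for _ in range(200000):
    pair = random.sample(eggs, 2)
    if pair[0]:                       # the first checked egg is good
        first_good += 1
        both_after_first += pair[1]
    if pair[0] or pair[1]:            # at least one of the two is good
        at_least_one += 1
        both_given_one += pair[0] * pair[1]
print(both_after_first / first_good)  # ~2/3
print(both_given_one / at_least_one)  # ~1/2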
The largest value of $n$ such that $n+10| n^3+100$ If $n\in \mathbb{Z}$ and follow the property $$n+10| n^3+100$$ then what is the largest value of $n$ Please help me to solve this!!!
Notice that: $n+10| n^3+100$ is same as $$n+10| n^3+1000-900$$ $$\implies n+10| (n^3+1000)-900$$ $$\implies n+10| (n+10)(n^2-10n+100)-900\implies n+10|900$$. The greatest integer dividing $900$ is $900$ itself. So, for maximum $n$, $$n+10=900\implies n=900-10=890$$
What is larger? Graham's number or Googolplexian? See YouTube or Wikipedia for the definition of Graham's number. A Googol is defined as $10^{100}$. A Googolplex is defined as $10^{\text{Googol}}$. A Googolplexian is defined as $10^{\text{Googolplex}}$. Intuitively, it seems to me that Graham's number is larger (maybe because of its complex definition). Can anybody prove this?
Googolplex can be bounded from above by a tower of exponents, so $$10^{10^{100}} < (3^3)^{10^{100}} = 3^{3\times 10^{100}} <3^{10^{101}} < 3^{(3^3)^{101}} = 3^{3^{303}} < 3^{3^{3^{3^3}}}$$ In the last step, we have used the fact that $303$ is much, much smaller than $3^{27}$. Now, take the Googolplexian. We can thus easily check that $$10^{10^{10^{100}}} < (3^3)^{10^{10^{100}}} < 3^{3^{3^{3^{3^3}}}}$$ So, Googolplexian is much smaller than a tower of exponents of $3$'s of length $6$, or in other words Googolplexian is less than $3\uparrow \uparrow6$ (using Knuth's up-arrow notation). Now, compare this with just the first layer of Graham's number, i.e., $3\uparrow \uparrow \uparrow \uparrow 3$. Hope it helps.
About Euclid's proof of infinite primes..... I was checking whether "the product of the first $n$ primes, plus 1, is again a prime" holds, and for how many $n$. For example $$2+1=3$$ is a prime $$2\times 3+1=7$$ is a prime $$2\times 3\times 5+1=31$$ is a prime $$2\times 3\times 5\times 7+1=211 $$ is a prime $$2\times 3\times 5\times 7\times 11+1=2311$$ is a prime $$2\times 3\times 5\times 7\times 11\times 13+1=30031$$ is composite. So the prime chain is broken here (and further steps need not give primes). Now, as I understood from the proof of infinite primes, Euclid said: multiply all primes and add 1 and you will get another prime. This proves that there are infinite primes. Then my question is: how can we guarantee that the resulting number is prime? Or should it be that we will get another prime dividing the resulting number? Please help me to clear my confusion!!!!
Neither Euler nor Euclid said any such thing. Euclid wrote that if $p_1,\ldots,p_n$ are primes and $q$ is $1$ more than their product, then any prime divisor of $q$ is a prime not equal to any of $p_1,\ldots,p_n.$ If $p$ is the least $m>1$ that divides $q$, then $p$ is a prime divisor of $q.$ So $q$ does indeed have a prime divisor.
How to factorize $a^2-b^2-a+b+(a+b-1)^2$? The answer is $(a+b-1)(2a-1)$ but I have no idea how to get this answer.
$$(a+b-1)^2=a^2+b^2+1-2a-2b+2ab$$ $$\implies a^2-b^2-a+b+(a+b-1)^2=2a(a+b)-\{2a+(a+b)\}+1$$ $$ =2a\{a+b-1\}-\{a+b-1\}=?$$
Prove: if $n = \frac{k^k-1}{k-1}$ then $k=\Omega\left(\frac{\log{n}}{\log{\log{n}}}\right)$ I have to prove that for integers $n$ and $k$, $k\geq2$, and $$ n = \frac{k^k-1}{k-1} $$ it follows that $$ k=\Omega\left(\frac{\log{n}}{\log{\log{n}}}\right). $$ So it would be sufficient to show that for some constant $c>0$ the following inequality holds: $$ k \geq c\cdot\frac{\log{n}}{\log{\log{n}}} $$ In detail it would look like: $$ \frac{\log{n}}{\log{\log{n}}} = \frac{\log{\left(k^k-1\right)}-\log{\left(k-1\right)}}{\log{\left(\log{\left(k^k-1\right)}-\log{\left(k-1\right)}\right)}} \leq \ldots \leq c\cdot k $$ I'm trying to find an upper bound for the numerator and a lower bound for the denominator of the above fraction, but I haven't worked it out yet. The estimation for the numerator looks e.g. like this: $$ \log{\left(k^k-1\right)}-\log{\left(k-1\right)} \leq \log{k^k} - \log{\left(k-1\right)} = k\cdot\log{k} - \log{\left(k-1\right)} \leq k^2 - \left(1-\frac{1}{k-1}\right) = k^2 - \frac{k-2}{k-1} $$ It would be nice if somebody could help me with this assignment!
Geometric series: $$\frac{k^k-1}{k-1}=1+k+k^2+k^3+\dots+k^{k-1}<k^{k-1}+k^{k-1}+k^{k-1}+\dots+k^{k-1}=k^k$$ Thus, $$n<k^k$$ $$\log n<\log k^k=k\log k$$ $$k>\frac{\log n}{\log k}$$ For the other direction, if $k\ge3$, $$1+k+k^2+\dots+k^{k-1}>k^{k-1}\ge 3^{k-1}>2^k$$ $$n>2^k$$ $$k<\log_2n$$ One can adjust this to see that for any base, the following is eventually true for large enough $k$: $$k<\log n$$ Combining these, we have $$k>\frac{\log n}{\log k}>\frac{\log n}{\log\log n}$$
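A numerical illustration (my addition) of how $k$ compares with $\log n/\log\log n$ when $n=(k^k-1)/(k-1)$:

import math

for k in range(3, 12):
    n = (k**k - 1) // (k - 1)
    bound = math.log(n) / math.log(math.log(n))
    print(k, round(bound, 2), k >= bound)  # True for every k shown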
Convert second order PDE $u_{tt} = u_{xx} + u$ to a system of first order PDE's I am attempting to convert $$ u_{tt} = u_{xx} + u$$ to a system of first order PDE's. I believe that the system will require 3 equations, one for each of $u, u_t,$ and $u_x.$ Here is my attempt: \begin{equation} \frac{\partial}{\partial t} \begin{pmatrix} u \\ u_t \\ u_x \end{pmatrix} - \frac{\partial}{\partial x} \begin{pmatrix} u \\ u_x \\ u_t \end{pmatrix} = \begin{pmatrix} u_t - u_x \\ u \\ 0 \end{pmatrix} \end{equation} This doesn't feel correct. If anyone is familiar with a standard way to do this, any help would be appreciated.
This is a wave equation and you need to change variables as follows: $p = x+t$, $q = x-t$. You will obtain: $$ -4\frac{\partial^2u}{\partial p\,\partial q}=u(p,q) $$ And then set $$ v= \frac{\partial u}{\partial q} $$ and another equation $$\frac{\partial v}{\partial p} = -u(p, q)/4 $$
Calculate Row number from number grid I have the following number grid, which I believe has a cell for every number in $\Bbb N$ (not proven). The way this grid is built is simple. The first row holds every odd number in $\Bbb N$. The other rows hold double the value of the number in the row above. Since every number except for prime numbers is divisible by a smaller integer, and all prime numbers except for 2 are present in the first row, all other numbers must be present as well. So my first question is whether I'm right in assuming that every number in this grid is unique, i.e. no number appears 2 or more times in this grid. +---++----+----+----+-----+-----+-----+-----+ |1. || 1 | 3 | 5 | 7 | 9 | 11 | ... |-> +2 +---++----+----+----+-----+-----+-----+-----+ |2. || 2 | 6 | 10 | 14 | 18 | 22 | ... |-> +4 +---++----+----+----+-----+-----+-----+-----+ |3. || 4 | 12 | 20 | 28 | 36 | 44 | ... |-> +8 +---++----+----+----+-----+-----+-----+-----+ |4. || 8 | 24 | 40 | 56 | 72 | 88 | ... |-> +16 +---++----+----+----+-----+-----+-----+-----+ |5. || 16 | 48 | 80 | 112 | 144 | 176 | ... |-> +32 +---++----+----+----+-----+-----+-----+-----+ | : || : | : | : | : | : | : | : | +---++----+----+----+-----+-----+-----+-----+ | | | | | | V V V V V V *2 *2 *2 *2 *2 *2 My second question depends on the correctness of my assumption. Therefore every number in the grid needs to be unique. I noticed that this grid is special regarding the first value of any row and the value I needed to add to get to the next number. All numbers in row 1 can be described as $2k-1=x$, all numbers in row 2 as $4k-2=x$, all numbers in row 3 as $8k-4=x$ and so on, so basically as $$2^rk-2^{r-1}=x,\; r \in \Bbb N,\; k \in \Bbb N,\; x \in \Bbb N$$ where $r$ represents the row number. My 2nd question is how I would calculate the row number with only $x$ given as input. I tried using Wolfram Alpha, but the result isn't enough since I (or we) don't know $k$. Any help is really appreciated :-)
The answer to both your questions is based on the observation that every number can be written uniquely as $x = 2^rk$ for $k$ odd, $r \geq 0$. So $x$ will only ever appear in the $r+1$ th row and the column with $k$ at its top, i.e. the $(k+1)/2$ th column.
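A small sketch (my addition) recovering the row and column directly from $x$ by splitting off the largest power of two:

def row_col(x):
    r = (x & -x).bit_length() - 1  # exponent of the largest power of 2 dividing x
    k = x >> r                     # the odd part of x
    return r + 1, (k + 1) // 2     # row r+1, column (k+1)/2

print(row_col(1))    # (1, 1)
print(row_col(22))   # (2, 6): 22 = 2 * 11
print(row_col(112))  # (5, 4): 112 = 2^4 * 7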
Laplace transform of an integral? I have the following integral: $$ I = \int_{0}^{t} e^{-\tau}\theta(t-\tau )d\tau $$ where $$\theta(t) = \begin{cases} 0 & 0 \leq t < 1\\ 5 & t\geq 1 \end{cases}$$ is a scaled and shifted Heaviside step function. I need to get the Laplace transform of it. I know there is a rule for it (the convolution theorem): $ \mathcal{L} \left \{ I \right \} = F(s)G(s), $ but I don't know if it is correct to do the transformation as follows: $$ \mathcal{L} \left \{ e^{-\tau} \right \}= e^{-\tau}\mathcal{L}\left \{ 1\right \}$$ because the transformation should be with respect to $t$. Thanks in advance
When you use the rule $\mathcal{L} (I)=F(s)G(s)$, you are assuming that $f(t)=e^{−t}$ and $g(t)=\theta(t)$, so you cannot pull $e^{−\tau}$ out. In this case $\mathcal{L}(I)=F(s)G(s)$, where $F(s)$ is the Laplace transform of $f(t)=e^{-t}$, and $G(s)$ is the Laplace transform of $\theta(t)$. Now for $F(s)$, we can use the translation $\mathcal{L}(e^{at}f(t))=F(s-a)$ to obtain $1/(s+1)$. And since $\theta(t)=5u(t-1)$ is a step of height $5$ delayed to $t=1$, the shift rule gives $G(s)=5e^{-s}/s$. So $\mathcal{L}(I)=5e^{-s}/(s(s+1))$.
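A numerical cross-check (my addition): compute $I(t)$ in closed form, transform it by brute-force quadrature, and compare with $5e^{-s}/(s(s+1))$ at the sample point $s=1$:

import math

def I(t):
    # theta(t - tau) = 5 exactly when tau <= t - 1, so for t >= 1
    # I(t) = 5 * int_0^(t-1) e^(-tau) dtau = 5 * (1 - e^(-(t-1)))
    return 0.0 if t < 1 else 5 * (1 - math.exp(-(t - 1)))

s, h, T = 1.0, 1e-3, 40.0
numeric = sum(math.exp(-s * k * h) * I(k * h) * h for k in range(int(T / h)))
closed = 5 * math.exp(-s) / (s * (s + 1))
print(numeric, closed)  # agree to about 3 decimal places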
Function continuous at odd numbers from $0$ to $99$, but discontinuous everywhere else. I have an idea about this but am not entirely sure if it's right. If I have a function, say $g(x) = (x-1)(x-3)(x-5)...(x-99)$ The roots of g(x) are only odd numbers from 0 to 99. $$ f(x) = \begin{cases} g(x) & x \in\mathbb{Q} \\ 0 & x \notin\mathbb{Q} \end{cases} $$ Is this correct?
I think your answer is correct. The limit of $f(x)$ as $x$ approaches one of the roots $1,3,\dots,99$ is $0$ (because $g$ is continuous and vanishes there) and its value there is $0$ too, so $f$ is continuous there. For rational values of $x$ other than those roots, $f(x)=g(x) \ne 0$ but there are irrationals arbitrarily nearby at which $f$ is $0$. For irrational values of $x$, $f(x) = 0$ but there are rationals $q$ arbitrarily nearby where $f(q) = g(q)$ is bounded away from $0$. If I'm wrong (and you're wrong) someone will comment here and tell me. Then I can delete my answer, or leave it as an instructive false start.
Positive initial data in a bounded domain forces a solution of a nonlinear heat equation to be positive as well Let $U\subset \mathbb{R}^n$ be open and bounded with smooth boundary, and let $f: \mathbb{R} \to \mathbb{R}$ have bounded derivative and satisfy $f(0)=0$. If $u$ solves \begin{align} u_t - \Delta u &= f(u) \text{ in } U \times (0,\infty)\\ u(x,t) &= 0 \text{ on } \partial U\times (0,\infty) \\ u(x,0) &= u_0(x)\text{ on } U\times \{t=0\} \end{align} where $u_0(x)\ge 0$, then $u\ge 0$ for all $(x,t) \in U\times (0,\infty)$. I know the result is true (via strong maximum principle) if $f=0$ on $U_T = U\times (0,T)$ for all $T>0$, so I'm wondering if the maximum principle might also help with this problem. If not, then I'm not sure how to proceed.
Yes, you can use the maximum principle, though I'm not sure if the argument is standard. It would go something like this: Suppose first that $f(0)>0$ and $u_0(x) > 0$ for $x \in U$. Then define $$T = \sup\big\{\tau>0 \, : \, u(x,t)>0 \text{ for all } (x,t) \in U\times [0,\tau)\big\}.$$ By our assumptions $T>0$. We just need to show that $T=\infty$. Assume to the contrary that $T<\infty$. Then $u$ attains its minimum over $U\times [0,T]$ at a point $(x,T)$ where $u(x,T)=0$. Therefore $$f(0)=u_t(x,T) - \Delta u(x,T)\leq 0.$$ This is a contradiction if $f(0)>0$. The rest of the proof boils down to reducing the problem to the case where $f(0)>0$ and $u_0>0$. You should try introducing a small parameter $\varepsilon>0$ and making a small perturbation of $u$. For example, $u_\varepsilon(x) = u(x) + \alpha\varepsilon - \varepsilon \exp(\beta x_1)$ should work for appropriately chosen constants $\alpha$ and $\beta$.
Does the following limit exist? $\lim_{r\to1}\sum_{n=0}^\infty a_nr^n$ Suppose I had a sequence $a_n$ that satisfied the following criteria: $\displaystyle\sum_{n=0}^\infty a_n$ diverges. $\displaystyle\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|=1$ (inconclusive ratio test) $a_na_{n+1}<0$ (the series is alternating) Is it provable that for every $a_n$ satisfying the above, $$\lim_{r\to1^-}\sum_{n=0}^\infty a_nr^n$$ exists? I can clearly see that for any fixed $|r|<1$ the series converges, but I don't know about the behavior as $r\to1$. In all the examples I've tried, this limit exists; for example, if $a_n=(-1)^nn$, the limit is $1/4$. So can someone show whether this limit exists or not?
A counterexample would be $$ \begin{align} a_{2n} &= 1+\frac{1}{n+1} \\ a_{2n+1} &= -1 \end{align} $$ Grouping consecutive terms, for $0<r<1$ we then have $$ \sum_{n=0}^\infty r^n a_n = \sum_{n=0}^\infty r^{2n}\left(1 - r + \frac{1}{n+1}\right) \ge \sum_{n=0}^{\infty} \frac{r^{2n}}{n+1} = -\frac{\log(1-r^2)}{r^2} $$ and the right-hand side of this blows up as $r\to 1^-$. (Thanks to Wolfram Alpha for evaluating the series in the middle.)
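If it helps to see the divergence concretely, a crude partial-sum computation (the cutoff $N$ is arbitrary) shows the sums tracking the logarithmic lower bound as $r\to1^-$:

```python
import math

def S(r, N=200000):
    """Partial sum of sum a_n r^n for the counterexample above."""
    return sum((1 + 1/(n + 1)) * r**(2*n) - r**(2*n + 1) for n in range(N))

for r in (0.9, 0.99, 0.999):
    print(r, S(r), -math.log(1 - r**2) / r**2)   # the sum stays above the bound
```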
Show $\sqrt{n}[\log(X_{(1)}) - \log(\alpha)] \overset{p}{\to} 0$ This is a follow-up question to $W_n = \frac{1}{n}\sum\log(X_i) - \log(X_{(1)})$ with Delta method. Note: $\log = \ln$. Note also that this is from a past qualifying exam, so I am looking for a solution which can be done under time constraints. Please only use the following in your solutions: definitions of convergence in probability and distribution, Delta method, Central Limit Theorem, Slutsky's Theorem, Weak Law of Large Numbers, Continuous Mapping Theorem. I am not familiar with big-$O$ notation or being "bounded in probability." For those of you familiar with the literature, this is at the level of Casella and Berger, so any measure-theoretic approaches should not be used. Suppose $X_1, \dots, X_n \sim \text{Pareto}(\alpha, \beta)$ with $n > \dfrac{2}{\beta}$ are independent. The Pareto$(\alpha, \beta)$ pdf is $$f(x) = \beta\alpha^{\beta}x^{-(\beta +1)}I(x > \alpha)\text{, } \alpha, \beta > 0\text{.}$$ If $X_{(1)}$ is the first order statistic, I wish to show $$\sqrt{n}[\log(X_{(1)}) - \log(\alpha)] \overset{p}{\to} 0\text{.}$$ What can I tell you about $X_{(1)}$ at this point of the exam? * *$X_{(1)} \overset{p}{\to} \alpha$. *$n(X_{(1)} - \alpha)$ converges in distribution to an exponential distribution with mean $\alpha/\beta$. *$F_{X_{(1)}}(x) = \begin{cases} 0, & x \leq \alpha \\ 1-\left(\dfrac{\alpha}{x}\right)^{\beta n}, & x > \alpha\text{.} \end{cases}$ *$\mathbb{E}[X_{(1)}] = \dfrac{\alpha\beta n}{\beta n - 1}$ *$\mathbb{E}[X_{(1)}^2] = \dfrac{\alpha^2\beta n}{\beta n - 2}$ One possible approach: let $G$ be the CDF of $\sqrt{n}[X_{(1)}-\alpha]$. Then the support is over $(0, \infty)$ and for $x$ in the support of $G$, $$\begin{align} G(x) &= F_{X_{(1)}}\left(\dfrac{x}{\sqrt{n}}+\alpha\right) \\ &= 1 - \left[ \dfrac{x/\sqrt{n}+\alpha}{\alpha}\right]^{-\beta n} \\ &= 1 - \left[ \dfrac{x/\alpha}{\sqrt{n}}+1\right]^{-\beta n} \\ \end{align}$$ From here, we could probably apply the Delta method if I could figure out how to compute the limit as $n \to \infty$. The $\sqrt{n}$ is problematic. Given the above, I know that $\log(X_{(1)}) \overset{p}{\to} \log(\alpha)$, but I'm not sure if anything can be done with this. Alternative approaches are welcome as well.
To elaborate on BGM's answer, $X_{(1)} \sim \text{Pareto}(\alpha, \beta n)$, as we can see from its CDF. It follows that $$\ln(X_{(1)})-\ln(\alpha)\sim\text{Exp}(\beta n)$$ (an exponential with rate $\beta n$), as can be seen from "Finding $\mathbb{E}[X]$, $X \sim \text{Pareto}$ under exam conditions" and "Is the family of exponential distributions closed under scaling?". Thus, $$Y_n = \sqrt{n}[\ln(X_{(1)})-\ln(\alpha)] \sim \text{Exp}(\beta n/\sqrt{n}) = \text{Exp}(\beta\sqrt{n})\text{.}$$ Hence, its CDF is given by $$1-e^{-y\beta\sqrt{n}}$$ for $y > 0$. Since this tends to $1$ for every fixed $y>0$ as $n\to\infty$, $Y_n$ converges in distribution to $0$, and convergence in distribution to a constant implies convergence in probability; hence $\sqrt{n}[\log(X_{(1)}) - \log(\alpha)] \overset{p}{\to} 0$.
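A small simulation illustrates the convergence (a sketch; the parameter values and seed are arbitrary, and the Pareto variates are sampled by inverse CDF):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.0, 1.5

for n in (10**2, 10**4, 10**6):
    X = alpha * rng.random(n) ** (-1 / beta)      # Pareto(alpha, beta) via inverse CDF
    Y = np.sqrt(n) * (np.log(X.min()) - np.log(alpha))
    print(n, Y)   # shrinks on the order of 1/(beta * sqrt(n))
```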
a problem in uniform convergence A function $f$ is defined on $[0,1]$ by $f(x)=\frac{1}{n}$ for $\frac{1}{n}\geq x \geq \frac{1}{n+1}$; $n=1,2,3,\ldots$. Prove that $f \in R[0,1]$ and evaluate $\int_{0}^{1}f(x)dx$.
As in the Henry W. answer, this is Riemann integrable. The sum is not a particularly trivial telescoping sum, however. It can be done as follows: $$ \sum_{k=1}^\infty \frac1k\left(\frac1k-\frac1{k+1}\right) = \sum_{k=1}^\infty \frac1{k^2} - \sum_{k=1}^\infty \frac1{k(k+1)} = \frac{\pi^2}{6} - 1 $$ where the first sum is the pretty well-known Basel sum first solved, I think, by Euler, and the second can be done most elegantly by noting that it is $$ \sum_1^\infty {k^{\underline{-2}}} = \left.-\frac{1}{k}\right|_{k=1}^\infty=0 - (-1) = 1 $$ where the underlined $-2$ in $k^{\underline{-2}}$ is the "falling powers" notation.
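One can confirm the value numerically by summing the areas of the blocks directly (a quick sketch):

```python
import math

# f equals 1/n on the block [1/(n+1), 1/n], so the integral is the block sum
total = sum((1/n) * (1/n - 1/(n + 1)) for n in range(1, 10**6))
print(total, math.pi**2 / 6 - 1)   # both ~0.644934
```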
A formula vaguely similar to Sherman and Morrison's If $V$ is a $p\times n$ matrix and $u$ is an $n$-vector, how can I prove the following equality? $$ \frac{u^TV^T(VV^T)^{-1}Vu}{1-u^TV^T(VV^T)^{-1}Vu} = u^TV^T[V(I-uu^T)V^T]^{-1}Vu. $$ My line of attack is to recognize the bottom denominator is a scalar, and tried to bring it into the inverse on top. However, I cannot get equivalence, does anyone know how?
This is a simple consequence of the Sherman-Morrison-Woodbury formula. By a direct application of that formula we have $$ \begin{align} {\begin{pmatrix} V(1 - uu^T) V^T \end{pmatrix}}^{-1} &= (VV^T-(Vu)(Vu)^T)^{-1}\\ &= (VV^T)^{-1} + \dfrac{(VV^T)^{-1}(Vu)(Vu)^T(VV^T)^{-1}}{1 - u^TV^T(VV^T)^{-1}Vu} \end{align} $$ Multiplying the LHS and RHS above first on the left by $(Vu)^T$ and then on the right by $Vu$ we get $u^TV^T{\begin{pmatrix} V(1 - uu^T) V^T \end{pmatrix}}^{-1}Vu = c + \dfrac{c^2}{1-c}$ where $c = u^TV^T(VV^T)^{-1}Vu.$ This simplifies to $\dfrac{c}{1-c}$ and the answer follows.
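A random numerical check of the identity (a sketch; the matrix sizes and the scaling of $u$ that keeps $c<1$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 6
V = rng.standard_normal((p, n))
u = 0.1 * rng.standard_normal(n)   # small u keeps c = u'V'(VV')^{-1}Vu below 1

c = u @ V.T @ np.linalg.inv(V @ V.T) @ V @ u
lhs = c / (1 - c)
rhs = u @ V.T @ np.linalg.inv(V @ (np.eye(n) - np.outer(u, u)) @ V.T) @ V @ u
assert np.isclose(lhs, rhs)
```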
Find the interval in which $m$ lies so that the expression $\frac{mx^2+3x-4}{-4x^2+3x+m}$ can take all real values, $x$ being real Find the interval in which $m$ lies so that the expression $\frac{mx^2+3x-4}{-4x^2+3x+m}$ can take all real values, $x$ being real. I don't know how to proceed with this question. I have equated this equation with $y$ to obtain a quadratic equation: $(m+4y)x^2+(3-3y)x-(4+my)=0$. Now I have no idea as to how I can find the answer. A small hint will be helpful.
Hint $$f(x)=\frac{mx^2+3x-4}{-4x^2+3x+m}$$ The function $f$ must be onto. It means that: $$p=\frac{mx^2+3x-4}{-4x^2+3x+m} \Rightarrow (m+4p)x^2+3x(1-p)-4-pm=0$$ The above equation must have real roots for any $p \in \Bbb R$. It means that: $$\Delta=9(1-p)^2+4(m+4p)(4+pm)\geq 0$$ for any choice of $p$. Expanding, $$(9+16m)p^2+(4m^2+46)p+9+16m \geq0$$ Then we have to analyze that new quadratic (in $p$). Since the above expression must always be non-negative, we need $9+16m>0$ and $$\Delta'=(4m^2+46)^2-4(9+16m)^2=16(m-1)(m-7)(m+4)^2\leq 0$$ Solving the above inequality gives $1 \le m \le 7$; the endpoints $m=1$ and $m=7$, where the numerator and denominator acquire a common linear factor, deserve a separate check.
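One can probe the discriminant condition on a grid of rational $p$ (a sketch; exact rational arithmetic avoids rounding trouble near the boundary, and the sampled $m$ values are illustrative):

```python
from fractions import Fraction as F

def disc_ok(m, p):
    """Discriminant of (m+4p)x^2 + 3(1-p)x - (4+pm), as in the hint."""
    return 9*(1 - p)**2 + 4*(m + 4*p)*(4 + p*m) >= 0

ps = [F(k, 100) for k in range(-2000, 2001)]
for m in (F(1, 2), F(2), F(4), F(6), F(15, 2)):
    bad = [p for p in ps if not disc_ok(m, p)]
    print(float(m), "holds for all sampled p" if not bad else f"fails at p = {float(bad[0])}")
```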
Limit of a martingale given its quadratic variation Given a martingale $M_t$ with $M_0=0$ and $E[M_t^2]<\infty$. Let the quadratic variation of $M_t$ be $[M]_t:=\ln(1+t)$. Calculate $\limsup_{t\to\infty} \dfrac{M_t}{\sqrt{t}}$. Does anyone have an approach for that? I thought of using Itô's formula but this didn't help.
Let $\epsilon>0$, and define $$A_n=\{\max_{2^n\leq t\leq 2^{n+1}}\frac{|M_t|}{\sqrt{t}}>\epsilon\}$$ We have $$A_n\subseteq{\{\exists t \in[2^{n},2^{n+1}] , |M_t|>\epsilon\sqrt{2^n} \}}$$ Thus, $$P(A_n) \leq P({\{\exists t \in[2^{n},2^{n+1}] , |M_t|>\epsilon\sqrt{2^n} \}})$$ or equivalently, $$P(A_n) \leq P({\max_{2^n\leq t\leq 2^{n+1}} |M_t|>\epsilon\sqrt{2^n} })$$ At that stage, you can apply the Doob inequality: $$ P({\max_{2^n\leq t\leq 2^{n+1}} |M_t|>\epsilon\sqrt{2^n} })\leq\frac{E(M_{2^{n+1}}^2)}{\epsilon^22^n}$$ We have that $$M_t^2=U_t+\ln(1+t)$$ where $U_t$ is a martingale, therefore $$ P(A_n)\leq\frac{\ln(1+2^{n+1})}{\epsilon^22^n}$$ which is summable in $n$, so the Borel-Cantelli lemma gives $$P(\limsup_{t \to \infty}\frac{|M_t|}{\sqrt{t}} \leq \epsilon)=1 $$ Moreover, $$\{\limsup_{t\to\infty}\frac{|M_t|}{\sqrt{t}}=0\}=\cap_{k\geq1}\{\limsup_{t \to \infty}\frac{|M_t|}{\sqrt{t}}\leq\frac{1}{k}\}$$ so you can conclude that $\limsup_{t\to\infty} M_t/\sqrt{t}=0$ almost surely.
Yoneda lemma, bijection between sets of natural transformations Let $Y: \cal A\to Set^{\cal A^{op}}$ be the Yoneda embedding and $S:Set^{\cal A^{op}}\to Set$ an arbitrary functor. How do I use the Yoneda lemma to obtain a bijection between the natural transformations $hom(A,-)\to SY$ and the natural transformations $hom(hom(-,A),-)\to S$? See the second displayed formula Here.
Use the Yoneda lemma once to obtain $\text{Nat}(\hom(\hom(-,A),-),S) \cong S(\hom(-,A))$. Use it a second time to obtain $\text{Nat}(\hom(A,-),SY) \cong SY(A)$ and then notice that $SY(A)=S(\hom(-,A))$.
Generalised eigenspace of inverse $\newcommand{\id}{\operatorname{id}}$Let $V$ be a finite-dimensional vector space and $\Phi \in \operatorname{End}(V)$ be invertible. Let $\lambda \in \mathbb{F}, \lambda \neq 0$. $\ker (\Phi - \lambda\id)^{\dim V} = \ker (\Phi^{-1} - \lambda^{-1}\id)^{\dim V}$ How to prove this? My attempt: Let $n = \dim V$. Suppose that $v \in \ker(\Phi - \lambda\id)^n$. Expand $$(\Phi^{-1} - \lambda^{-1}\id)^n = \sum_{k=0}^n \binom{n}{k} (-1)^{n-k}\Phi^{-k} \lambda^{k - n}, \qquad(\Phi - \lambda\id)^n = \sum_{k=0}^n \binom{n}{k} (-1)^{k}\Phi^{n - k} \lambda^{k} $$ Apply: $$(\Phi^{-1} - \lambda^{-1}\id)^n\Phi^n v = \sum_{k=0}^n \binom{n}{k} (-1)^{n-k}\Phi^{n-k} \lambda^{k - n} v = (-\lambda)^{-n} \sum_{k=0}^n \binom{n}{k} (-1)^{k}\Phi^{n-k} \lambda^{k} v = (-\lambda)^{-n} (\Phi - \lambda\id)^n v $$
$\newcommand{\id}{\operatorname{id}}$Let $W=\ker(\Phi-\lambda\id)^{\dim V}$, and $d=\dim W$. Then $(X-\lambda)^d$ is the characteristic polynomial for the restriction $\Phi|_W$, and we can choose a basis of$~W$ for which the matrix of $\Phi|_W$ is upper triangular with diagonal entries all$~\lambda$. Its inverse $(\Phi|_W)^{-1}=\Phi^{-1}|_W$ then has upper triangular matrix with diagonal entries all$~\lambda^{-1}$, so characteristic polynomial $(X-\lambda^{-1})^d$. This is also an annihilating polynomial of $\Phi^{-1}|_W$, so $W$ is contained in $\ker((\Phi^{-1}-\lambda^{-1}\id)^d)$, and a fortiori in the kernel with $d$ replaced by $\dim V$. The other inclusion follows by interchanging $\Phi$ and $\Phi^{-1}$, and $\lambda$ and $\lambda^{-1}$.
Why if $a =b$ then $a = 0$ is not a correct statement Bogus Claim: If $a$ and $b$ are two equal real numbers, then $a = 0$ $a = b$ $a^2 = ab$ $a^2 - b^2 = ab - b^2$ $(a-b)(a+b) = (a-b)b$ $a + b = b$ $a = 0$ I found this in my proof handouts, and correct me if I'm wrong,but is it wrong because after line 4 we divide both sides by $(a - b)$ which would be $0$ if $a = b$ ?
* *You take the equation $(a-b)(a+b)=(a-b)b$ *You divide each side by $(a-b)$ *You get the equation $a + b = b$ But $a=b\implies(a-b)=0$, which means that step #2 is illegal.
Quantifier explanation: $\forall x(R\to Q(x))\equiv R \to \forall x (Q(x))$ I don't understand the proof for this problem and couldn't find the answer anywhere. $$\forall x(R\to Q(x))\equiv R \to \forall x (Q(x))$$
Assume $\forall x(R\to Q(x))$. We want to prove $R\to \forall x Q(x)$. So we assume $R$ and attempt to prove $\forall x Q(x)$. So we let $x$ be arbitrary. For this $x$ we have, by specialization, that $R\to Q(x)$. As $R$, modus ponens gives us $Q(x)$. As $x$ was arbitrary, $\forall x Q(x)$. So we derived $\forall x Q(x)$ based on the hypothesis that $R$ holds. In other words, we have shown $R\to \forall x Q(x)$. This shows the $\implies$ direction of the desired equivalence. Now assume instead that $R\to \forall x Q(x)$. We want to prove $\forall x(R\to Q(x))$. So let $x$ be arbitrary. We want to prove $R\to Q(x)$. So assume $R$. Then by modus ponens $\forall x Q(x)$. In particular, $Q(x)$. Thus we showed $Q(x)$ from $R$, i.e., we showed $R\to Q(x)$. As $x$ was arbitrary, we showed $\forall x(R\to Q(x))$. This shows the other direction.
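Since the equivalence is semantic, it can also be checked mechanically over small finite domains (a sketch; here $Q$ ranges over all predicates on a 3-element domain):

```python
from itertools import product

for R in (False, True):
    for Qvals in product((False, True), repeat=3):      # Q on a 3-element domain
        lhs = all((not R) or q for q in Qvals)          # forall x (R -> Q(x))
        rhs = (not R) or all(Qvals)                     # R -> forall x Q(x)
        assert lhs == rhs
```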
Find integral solution of $a^{b} - b^{a} = 3$ Find the integral solution of $$a^b - b^a = 3$$ I am a student of class 10 and got this question in a maths competitive exam. I have tried converting it into a log equation in one variable, and tried solving it using parity, but couldn't solve it. Thanks in advance
I suppose you are familiar with the inequality $\ln(1+x)\le x$. Now if $b>a$, then $$b^a=a^a\left(1+\frac{b-a} a\right)^a=a^a e^{a\ln(1+(b-a)/a)}\le a^a e^{b-a}.$$ When $a\ge 3$, we have $$3=a^b-b^a\ge a^a(a^{b-a}-e^{b-a})\ge a^a(a-e)\ge27(3-e)>7$$ a contradiction. So $a\le2$ if $b>a$. But it can be checked that there is no solution for $a=1$ (obvious) and $a=2$ (since $2^b-b^2>3$ for $b\ge 5$). We conclude that there is no solution satisfying $b>a$. But $b\neq a$, so the only possibility is that $b<a$. However, for $3\le b<a$, it's well-known that $a^b<b^a$. So it's only possible when $b\le 2$. For $b=1$ the solution is $(a,b)=(4,1)$. For $b=2$, due to the same reason above($a^2-2^a<-3$ for $a\ge 5$), there is no solution. To conclude, the only solution is $(a,b)=(4,1)$.
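A brute-force search over a modest box (the bound $60$ is arbitrary) is consistent with this:

```python
solutions = [(a, b) for a in range(1, 60) for b in range(1, 60)
             if a**b - b**a == 3]
print(solutions)   # [(4, 1)]
```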
Find the last Digit of $237^{1002}$? I looked at a lot of examples online and a lot of videos on how to find the last digit. But the thing with their videos/examples was that the base wasn't a huge number. What I mean by that is you can actually do the calculations in your head. But let's say we are dealing with a $3$ digit base number... then how would I find the last digit. Q: $237^{1002}$ EDIT: UNIVERSITY LEVEL QUESTION. It would be more appreciated if you can help answer in different ways. Since the last digit is 7 --> * *$7^1 = 7$ *$7^2 = 49 = 9$ *$7^3 = 343 = 3$ *$7^4 = 2401 = 1$ $.......$ $........$ *$7^9 = 40353607 = 7$ *$7^{10} = 282475249 = 9$ Notice the pattern of the last digit: $7,9,3,1,7,9,3,1...$ The last digit repeats in a pattern that is 4 digits long. * *Remainder is 1 --> 7 *Remainder is 2 --> 9 *Remainder is 3 --> 3 *Remainder is 0 --> 1 So, reducing the exponent, $1002 = 4 \cdot 250 + 2$ with a remainder of $2$, which refers to $9$. So the last digit has to be $9$.
We show that the last two digits of $237^{1002}$ are $69$. For any $n \in \mathbb N$, the two digits of $n$ are given by $n\bmod{100}$. If $\gcd(n,100)=1$, $n^{\phi(100)}=n^{40} \equiv 1\pmod{100}$, by Euler's theorem. Since $\gcd(237,100)=1$, applying Euler's theorem gives $$ 237^{1002} = \big(237^{40}\big)^{25} \cdot 237^2 \equiv 237^2 \equiv 37^2 \equiv 69\pmod{100}. $$ This proves our claim. $\blacksquare$
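This is easy to confirm with modular exponentiation:

```python
print(pow(237, 1002, 100))       # 69, so the last two digits are 69
assert pow(237, 1002, 10) == 9   # and the last digit is 9
```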
Showing that the union of the interiors of closed sets making up a complete metric space is dense If $X$ is a complete metric space such that $X =\bigcup\limits_{n {\in \mathbb{N}}}F_n$, where each $F_n$ is closed, prove that $\bigcup\limits_{n \in \mathbb{N}}F_n^o$ is dense in $X$. ($F_n^o$ denotes the interior of $F_n$.) I've tried a few ways to prove it, and I'm pretty sure that it uses the Baire Category Theorem somehow, but I'm not sure.
Well, suppose not. Then there exists a nonempty open set $G \subset X$ such that $G \cap \bigcup_{n \in \mathbb{N}}F_n^o = \emptyset$. Hence there exists a nonempty subset $E \subset G$ (a small ball, say) such that $\bar{E} \subset G$, so that $\bar{E} \cap \bigcup_{n \in \mathbb{N}}F_n^o = \emptyset$, where $\bar{E}$ denotes the closure of $E$. Now observe the following set-theoretic property: $(\bar{A} \cap B)^o \subset \bar{A} \cap B^o$. To see this, let $x \in (\bar{A} \cap B)^o.$ Then there exists $\varepsilon \gt 0$ such that $B_\varepsilon(x) \subset \bar{A} \cap B$. Thus $x$ is an interior point of $B$ and a point of $\bar{A}$. That is, $x \in \bar{A} \cap B^o$. Going back to the problem at hand, we have that for each $n \in \mathbb{N}$, $\bar{E} \cap F_n^o = \emptyset$. But then each $(\bar{E} \cap F_n)^o = \emptyset$ (since we just showed $(\bar{E} \cap F_n)^o \subset \bar{E} \cap F_n^o$). Thus for each $n$, $\bar{E} \cap F_n$ is nowhere dense (a set is nowhere dense iff its closure has empty interior). Now, we can write $\bar{E} = \bar{E} \cap X = \bar{E} \cap \bigcup_{n \in \mathbb{N}}F_n = \bigcup_{n \in \mathbb{N}} \bar{E} \cap F_n$. But we just demonstrated that $\bar{E} \cap F_n$ is nowhere dense for each $n \in \mathbb{N}$, so therefore $\bar{E}$ is of the first category. But $\bar{E}$ is a nonempty closed subset of a complete metric space and is thus complete, and therefore is of the second category. Hence we have a contradiction, and so our assumption that $\bigcup_{n \in \mathbb{N}}F_n^o$ wasn't dense in $X$ was wrong.
How to integrate $\int_a^b (x-a)(x-b)\,dx=-\frac{1}{6}(b-a)^3$ in a faster way? $\displaystyle \int_a^b (x-a)(x-b)\,dx=-\frac{1}{6}(b-a)^3$ $\displaystyle \int_a^{(a+b)/2} (x-a)(x-\frac{a+b}{2})(x-b)\, dx=\frac{1}{64}(b-a)^4$ Instead of expanding the integrand, or doing integration by part, is there any faster way to compute this kind of integral?
You can do this with substitution and Cavalieri's formula: $$ \int_0^1 u^n \,du = \frac{1}{n+1} $$ For the first one, let $u= \frac{x-a}{b-a}$. Then $x = (b-a)u +a$, which means $x-a = (b-a)u$ and $x-b = (b-a)(u-1)$. Also $dx=(b-a)\,du$. So \begin{align*} \int_a^b (x-a)(x-b)\,dx &= (b-a)^3 \int_0^1 u (u-1)\,du = (b-a)^3 \int_0^1 (u^2 - u) \\ &= (b-a)^3 \left(\frac{1}{3}-\frac{1}{2}\right) = -\frac{1}{6}(b-a)^3 \end{align*} For the second one, let $u= \frac{2}{a-b}\left(x-\frac{a+b}{2}\right)$. Then: \begin{align*} x-a &= \frac{a-b}{2}(u-1) \\ x-b &= \frac{a-b}{2}(u+1) \\ x - \frac{a+b}{2} &= \frac{a-b}{2}u \\ dx &= \frac{a-b}{2}\,du \end{align*} So \begin{align*} \int_a^{(a+b)/2} (x-a)\left(x-\frac{a+b}{2}\right)(x-b)\,dx &= \left(\frac{a-b}{2}\right)^4\int_1^0(u-1)u(u+1)\,du = -\frac{(b-a)^4}{16}\int_0^1 (u^3-u)\,du \\ &= -\frac{(b-a)^4}{16}\left(\frac{1}{4}-\frac{1}{2}\right) = \frac{1}{64}(b-a)^4 \end{align*} This has less integration, but it took some time to get the substitution right.
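If you want to double-check both identities symbolically, a short SymPy sketch:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
I1 = sp.integrate((x - a)*(x - b), (x, a, b))
I2 = sp.integrate((x - a)*(x - (a + b)/2)*(x - b), (x, a, (a + b)/2))
assert sp.simplify(I1 + (b - a)**3/6) == 0    # I1 = -(b-a)^3/6
assert sp.simplify(I2 - (b - a)**4/64) == 0   # I2 =  (b-a)^4/64
```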
Is this a projective resolution? Let $k$ be a field, $R=k[x]$. Is this a projective resolution of $k$ over $R$? $$0\to k[x]\to k[x]\to k\to 0$$ where the left map is multiplication by $x-1$ and the right map is evaluation at $1$, i.e. $P(x)\mapsto P(1)$? If not, what is a projective resolution in this case?
The projective resolution of $k$ over $k[x]$ is this: $$0\longrightarrow k[x]\xrightarrow{\times (x-1)}\begin{aligned}[t]k[x]&\longrightarrow k\\\scriptstyle P(x)&\longmapsto \scriptstyle P(1)\end{aligned}\!\!\!\!\longrightarrow 0$$ This is exact: multiplication by $x-1$ is injective since $k[x]$ is a domain, evaluation at $1$ is surjective, and its kernel consists precisely of the multiples of $x-1$ by the factor theorem. Since free modules are projective, your complex is indeed a projective resolution.
Proving $\ln \lambda = \int_0^\infty \frac{\mathrm dt}t e^{-\lambda t}$ I am currently reading this paper. On page 5, it writes: For each positive eigenvalue $\lambda$ of the operator $D$ we may write an identity $$\ln \lambda = \int_0^\infty \frac{\mathrm dt}t e^{-\lambda t}.\tag{1.17}$$ This identity is "correct" to an infinite constant, which does not depend on $\lambda$ and, therefore, may be ignored in what follows. Now we use $\ln \det(D)=\mathrm{Tr}\ln(D)$ and extend $(1.17)$ to the I cannot prove this identity. I tried $$\int_0^{\infty} \frac{\mathrm dt}{t}e^{-t\lambda}=\int_0^{\infty} \frac{\mathrm d(\lambda t)}{(\lambda t)}e^{-t\lambda}=\int_0^{\infty} x^{-1}e^{-x}\ \mathrm dx=\Gamma(0),$$ where for the third equality, I used the definition of the Gamma function.
Take a look at the footnote $4$ in the paper. This is not meant to be an identity, since $e^{-\lambda t}/t \sim 1/t$ as $t \to 0$ and the integral diverges. Note that if you differentiate the LHS, you get $\frac{1}{\lambda}$. If you take the derivative with respect to $\lambda$ inside the integral of the RHS (written with the overall minus sign it should carry, $\ln\lambda = -\int_0^\infty \frac{dt}{t}e^{-\lambda t}$ up to the infinite constant), you also get $$-\int_0^\infty \frac{1}{t} \frac{d}{d\lambda} e^{-\lambda t} \, dt = \int_0^\infty e^{-\lambda t}\, dt = \frac{1}{\lambda}.$$ So the "derivative" on both sides is the same, which means that both sides only "differ by an infinite constant". The way I understand it, this is just a heuristic to derive equation (1.18), which is proven rigorously later in section 2.2.
Going wrong with Bayes theorem Let's say we have the following case: Probability of being a drunk driver = $0.10$ Probability of a drinking test coming positive = $0.30$ Probability of a drinking test coming negative, given the subject was not drunk = $0.90$ Then by Bayes theorem, $$P(Not Drunk|Negative Test) = \frac{P(Negative Test|Not Drunk) \times P(Not Drunk) }{P(Negative Test)}.$$ Now, \begin{align} P(Negative Test|Not Drunk)& = 0.90\\ P(Not Drunk)& = 0.90\\ P(Negative test)& = 0.70 = (1 - Probability(Positive Test)) \end{align} Thus, $$P(Not Drunk|Negative Test) = (0.90 * 0.90) / 0.70 \approx 1.16.$$ As far as I understand, the probabilities shouldn't ever become more than 1, and the above result is counterintuitive to me. Is this correct, and if not, where am I going wrong?
The problem is that your initial numbers cannot happen. This is because of the law of total probability, which tells us that $P(A)=P(A|H_1)\cdot P(H_1) + P(A|H_2)\cdot P(H_2)$ if $H_1$ and $H_2$ are two complementary hypotheses. In your case, setting $A$ to be "Test is negative", $H_1$ to be "Driver is drunk" and $H_2$ to be "Driver is not drunk", you get $$0.7 = P(A) = P(A|H_1)\cdot P(H_1) + P(A|H_2)\cdot P(H_2) = P(A|H_1)\cdot 0.1 + 0.9\cdot 0.9 = \\=P(A|H_1)\cdot 0.1 + 0.81$$ meaning that $P(A|H_1)$ would have to be negative.
Choosing $a$ such that equation $f(x, a)=g(x)$ has only one solution with respect to $x$ Let $f(x, a)$ and $g(x)$ be given functions. We want to find all values of $a$ such that the equation $f(x, a)-g(x)=0$ has only one solution with respect to $x$. For example: $a \cdot \log_{a}x=x$ or $ax^4 - e^x=0$
Neat question! For the second, we first need $f=0$ for some pair $(x,a)$, i.e. $$ a=\frac{e^x}{x^4} $$ noting that if $x=0$ we don't have any solutions at all. Then we use a tangency condition of the two curves to guarantee uniqueness, getting a system of two equations in two unknowns, i.e. $$ f'(x)=0\Rightarrow a=\frac{e^x}{4x^3} $$ Combining: $$ x=4,\\ a=\frac{e^4}{4^4} $$ For the first equation, use that $$\log_a x =\frac{\ln x}{\ln a}$$ and the same logic as above to get the system $$ a\frac{\ln x}{\ln a}=x\\ \frac{a}{\ln a}=x\\ \Rightarrow \ln x=1\Rightarrow x=e\Rightarrow a=e $$ with $a$'s value by inspection.
How to see there exists const $C$ such that $\frac{d^{n-m}}{dx^{n-m}}(1-x^2)^n=C(1-x^2)^m\frac{d^{n+m}}{dx^{n+m}}(1-x^2)^n$ This comes up in relation to Legendre functions. The claim is made that for $n =0,1,2,3,\cdots$ and $m=0,1,2,3,\cdots,n$, there is a constant $C_{n,m}$ such that $$ \frac{d^{n-m}}{dx^{n-m}}(1-x^2)^n=C_{n,m}(1-x^2)^{m}\frac{d^{n+m}}{dx^{n+m}}(1-x^2)^n $$ A typical suggested proof is writing out all of the polynomial coefficients. Does anyone have a clever way to see this must be true? I've tried many things with differentiation, but I can't find any clean way to see this. Example: Consider $n=3,m=1$ where $(1-x^2)^n=1-3x^2+3x^4-x^6$, and \begin{align} \frac{d^{3-1}}{dx^{3-1}}(1-x^2)^3&=-6+36x^2-30x^4 \\ (1-x^2)^1\frac{d^4}{dx^4}(1-x^2)^3&=(1-x^2)(72-360x^2) \\ &=72-72x^2-360x^2+360x^4 \\ &=72-432x^2+360x^4 \\ &=(-12)(-6+36x^2-30x^4) \end{align}
Suppose we seek to determine the constant $Q$ in the equality $$ Q_{n,m} \left(\frac{d}{dz}\right)^{n-m} (1-z^2)^n = (1-z^2)^m \left(\frac{d}{dz}\right)^{n+m} (1-z^2)^n$$ where $n\ge m.$ We will compute the coefficients on $[z^q]$ on the LHS and the RHS. Writing $1-z^2 = (1+z)(1-z)$ we get for the LHS $$\sum_{p=0}^{n-m} {n-m\choose p} {n\choose p} p! (1+z)^{n-p} \\ \times {n\choose n-m-p} (n-m-p)! (-1)^{n-m-p} (1-z)^{m+p} \\ = (n-m)! (-1)^{n-m} \sum_{p=0}^{n-m} {n\choose p} {n\choose n-m-p} (1+z)^{n-p} (-1)^p (1-z)^{m+p}.$$ Extracting the coefficient we get $$(n-m)! (-1)^{n-m} \sum_{p=0}^{n-m} {n\choose p} {n\choose n-m-p} (-1)^p \\ \times \sum_{k=0}^{n-p} {n-p\choose k} (-1)^{q-k} {m+p\choose q-k}.$$ We use the same procedure on the RHS and merge in the $(1-z^2)^m$ term to get $$(n+m)! (-1)^{n+m} \sum_{p=0}^{n+m} {n\choose p} {n\choose n+m-p} (-1)^p \\ \times \sum_{k=0}^{n+m-p} {n+m-p\choose k} (-1)^{q-k} {p\choose q-k}.$$ Working in parallel with LHS and RHS we treat the inner sum of the LHS first, putting $${m+p\choose q-k} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q-k+1}} (1+z)^{m+p} \; dz$$ to get $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m+p} \sum_{k=0}^{n-p} {n-p\choose k} (-1)^{q-k} z^k \; dz \\ = \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m+p} (1-z)^{n-p} \; dz.$$ Adapt and repeat to obtain for the inner sum of the RHS $$\frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{p} (1-z)^{n+m-p} \; dz.$$ Moving on to the two outer sums we introduce $${n\choose n-m-p} = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m-p+1}} (1+w)^n \; dw$$ to obtain for the LHS $$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m} (1-z)^{n} \sum_{p=0}^{n-m} {n\choose p} (-1)^p w^p \frac{(1+z)^p}{(1-z)^p} \; dz\; dw \\ = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m} (1-z)^{n} \left(1-w\frac{1+z}{1-z}\right)^n \; dz\; dw \\ = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m} (1-z-w-wz)^n \; dz\; dw.$$ Repeat for the RHS to get $$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1-z)^{m} (1-z-w-wz)^n \; dz\; dw.$$ Extracting coefficients from the first integral (LHS) we write $$(1-z-w-wz)^n = (2-(1+z)(1+w))^n \\ = \sum_{k=0}^n {n\choose k} (-1)^k (1+z)^k (1+w)^k 2^{n-k}$$ and the inner integral yields $$(-1)^q \sum_{k=0}^n {n\choose k} (-1)^k {m+k\choose q} (1+w)^k 2^{n-k}$$ followed by the outer one which gives $$(-1)^q \sum_{k=0}^n {n\choose k} (-1)^k {m+k\choose q} {n+k\choose n-m} 2^{n-k}.$$ For the second integral (RHS) we write $$(1-z-w-wz)^n = ((1-z)(1+w)-2w)^n \\ = \sum_{k=0}^n {n\choose k} (1-z)^k (1+w)^k (-1)^{n-k} 2^{n-k} w^{n-k}$$ and the inner integral yields $$(-1)^q \sum_{k=0}^n {n\choose k} {m+k\choose q} (-1)^q (1+w)^k (-1)^{n-k} 2^{n-k} w^{n-k}$$ followed by the outer one which produces $$\sum_{k=0}^n {n\choose k} {m+k\choose q} {n+k\choose k+m} (-1)^{n-k} 2^{n-k}.$$ The two sums are equal up to a sign and the RHS for the coefficient on $[z^q]$ is obtained from the LHS by multiplying by $$\frac{(n+m)!}{(n-m)!} (-1)^{n-q}.$$ Observe that powers of $z$ that are present in the LHS and the RHS always have the same parity, the coefficients being zero otherwise 
(either all even powers or all odd). Therefore $(-1)^{n-q}$ is in fact a constant not dependent on $q$, the question is which. The leading term has degree $2n-(n-m)=n+m=(2n-(n+m))+2m$ on both sides and the sign on the LHS is $(-1)^n$ and on the RHS it is $(-1)^{n+m}.$ The conclusion is that the queried factor is given by $$\bbox[5px,border:2px solid #00A000]{ Q_{n,m} = (-1)^m \frac{(n+m)!}{(n-m)!}.}$$
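For small $n$ the boxed constant can be verified symbolically (a quick sketch):

```python
import sympy as sp

z = sp.symbols('z')
for n in range(1, 6):
    for m in range(n + 1):
        lhs = sp.diff((1 - z**2)**n, z, n - m)
        rhs = (1 - z**2)**m * sp.diff((1 - z**2)**n, z, n + m)
        Q = (-1)**m * sp.factorial(n + m) / sp.factorial(n - m)
        assert sp.simplify(Q * lhs - rhs) == 0   # Q * d^(n-m) = (1-z^2)^m * d^(n+m)
```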
Non-Convex Optimization and Lagrangian Optimization Let $f,g: \mathcal{X} \to \mathbb{R}$. Let $f$ and $g$ be concave. We want to solve the following optimization problem \begin{align} \max_{x \in \mathcal{X}} f(x) \\ \text{ s.t. } g(x) \le c \end{align} where $c\ge 0$. My question: Since $g(x)$ is not convex, the above problem does not fall into the category of convex optimization. Keeping the above in mind, would the Lagrangian approach still give a necessary condition for optimality? That is, if we define \begin{align} L(x)=f(x)-\lambda (c-g(x)) \end{align} then must the optimal solution be a stationary point of $L(x)$?
Take $f(x) = x$, $c=0$ and $g(x) = \min(0,-x^3)$, reading the constraint with the orientation that matches the Lagrangian above, i.e. $c - g(x) \le 0$, or $g(x) \ge c$. Note that $f,g$ are concave. The feasible set is $\{x \le 0\}$, so the solution is $x=0$, but $g'(0) = 0$, hence for any $\lambda$ we have ${\partial L(0,\lambda) \over \partial x} = 1 \ne 0$. Note: ${\partial L(x,\lambda) \over \partial x} = {\partial f(x) \over \partial x} + \lambda {\partial g(x) \over \partial x}$, and so we have ${\partial L(0,\lambda) \over \partial x} = 1$ for every $\lambda$, so no multiplier makes the optimum stationary.
Help verify the proof that $\min \left({\lfloor{N/2}\rfloor, \lfloor{(N + 2)/p}\rfloor}\right) = \lfloor{(N + 2)/p}\rfloor$ for $N \ge p$ Well, I think that I have this correct, however my proof seems cumbersome, so can someone check these results or suggest a simplified proof. Prove that ($N \ge p$) \begin{equation*} \min \left({\left\lfloor{\frac{N}{2}}\right\rfloor, \left\lfloor{\frac{N + 2}{p}}\right\rfloor}\right) = \left\lfloor{\frac{N + 2}{p}}\right\rfloor \end{equation*} for prime $p \ge 3$ with $N \ge p$ and $N \in \mathbb{Z}^{+}$. To see this, let $N = k\, p + m$ where $k \in \mathbb{Z}^{+}$ and $m \in \left\{{0}\right.$, $1$, $\cdots$, $\left.{p - 1}\right\}$ by the Quotient Remainder Theorem, then show that this equation holds for all values of $k$. Starting with \begin{equation*} \left\lfloor{\frac{N + 2}{p}}\right\rfloor = \left\lfloor{\frac{k\, p + m + 2}{p}}\right\rfloor = k + {\delta}_{2} \end{equation*} where \begin{equation*} {\delta}_{2} = \left\lfloor{\frac{m + 2}{p}}\right\rfloor = \begin{cases} 0, & \text{for } m \le p - 3, \\ 1, & \text{for } m \in \left\{{p - 2, p - 1}\right\}. \end{cases} \end{equation*} Also \begin{equation*} \left\lfloor{\frac{N}{2}}\right\rfloor = \left\lfloor{\frac{k\, p + m}{2}}\right\rfloor = \left\lfloor{\frac{k\, p}{2} + \frac{m}{2}}\right\rfloor. \end{equation*} Therefore \begin{equation*} k + {\delta}_{2} \le \left\lfloor{\frac{k\, p}{2} + \frac{m}{2}}\right\rfloor. \end{equation*} If $k$ is even then we have \begin{equation*} k + {\delta}_{2} \le \frac{k\, p}{2} + \left\lfloor{\frac{m}{2}}\right\rfloor. \end{equation*} Solving for $k$ gives \begin{equation*} k \ge \frac{2 \left({{\delta}_{2} - \left\lfloor{m/2}\right\rfloor}\right)}{p - 2}. \end{equation*} If ${\delta}_{2} = 0$ or ${\delta}_{2} = 1$ then $k \ge 2$ since $k$ is even and ${\delta}_{2} - \left\lfloor{m/2}\right\rfloor \le 1$. Now if $k$ is odd then we have three cases to consider starting with \begin{equation*} k + {\delta}_{2} \le \frac{\left({k + 1}\right) p}{2} + \left\lfloor{\frac{m - p}{2}}\right\rfloor. \end{equation*} Solving for $k$ gives \begin{equation*} k \ge \frac{2 \left({{\delta}_{2} - p/2 - \left\lfloor{\left({m - p}\right)/2}\right\rfloor}\right)}{p - 2}. \end{equation*} With $m = 0$, then ${\delta}_{2} = 0$ and \begin{equation*} k \ge \frac{2}{p - 2} \left({- \frac{p}{2} + \left\lceil{\frac{p}{2}}\right\rceil}\right) = \frac{1}{p - 2} \ge 1 \end{equation*} since $\left\lceil{p/2}\right\rceil = \left({p - 1}\right)/2 + \left\lceil{1/2}\right\rceil = \left({p + 1}\right)/2$ with $p \ge 3$. For the last two cases $m = p - 2$ or $m = p - 1$, $\delta = 1$, then \begin{equation*} k \ge \frac{4 - p}{p - 2} \ge 1 \end{equation*} for $p \ge 3$. Thus the primary equation holds for $p \ge 3$.
Show that \begin{equation*} \left\lfloor{\frac{N}{2}}\right\rfloor < \left\lfloor{\frac{N + 2}{p}}\right\rfloor \end{equation*} is not valid. Let $N = k\, p + m$ where $k \in \mathbb{Z}^{+}$ and $m \in \left\{{0}\right.$, $1$, $\cdots$, $\left.{p - 1}\right\}$ by the Quotient Remainder Theorem. Then show this holds for all values of $k$ and $m$. Starting with the left-hand side \begin{equation*} \left\lfloor{\frac{N}{2}}\right\rfloor = \left\lfloor{\frac{p\, k + m}{2}}\right\rfloor = \left\lfloor{\frac{2\, k + \left({p - 2}\right) k + m}{2}}\right\rfloor = k + \left\lfloor{\frac{\left({p - 2}\right) k + m}{2}}\right\rfloor \end{equation*} then the right-hand side \begin{equation*} \left\lfloor{\frac{N + 2}{p}}\right\rfloor = \left\lfloor{\frac{p\, k + m + 2}{p}}\right\rfloor = k + \left\lfloor{\frac{m + 2}{p}}\right\rfloor \end{equation*} then \begin{equation*} \left\lfloor{\frac{\left({p - 2}\right) k + m}{2}}\right\rfloor < \left\lfloor{\frac{m + 2}{p}}\right\rfloor = {\delta}_{2} = \begin{cases} 0, & \text{for } m \le p - 3, \\ 1, & \text{for } m \in \left\{{p - 2, p - 1}\right\}. \end{cases} \end{equation*} Then \begin{equation*} \min \left({\left\lfloor{\frac{N}{2}}\right\rfloor, \left\lfloor{\frac{N + 2}{p}}\right\rfloor}\right) = \left\lfloor{\frac{N + 2}{p}}\right\rfloor \end{equation*} for $p \ge 3$. To see this, establish a contradiction. Then \begin{equation*} \left\lfloor{\frac{\left({p - 2}\right) k + m}{2}}\right\rfloor < {\delta}_{2}. \end{equation*} Now with $m \in \left\{{0}\right.$, $1$, $\cdots$, $\left.{p - 3}\right\}$, then ${\delta}_{2} = 0$ and $\left({p - 2}\right) k + m \ge 1$ with $\left\lfloor{\left[{\left({p - 2}\right) k + m}\right]/2}\right\rfloor \ge 0$ which leads to $0 < 0$ which is a contradiction. Next $m = p - 2$ and ${\delta}_{2} = 1$ then \begin{equation*} \left\lfloor{\frac{\left({p - 2}\right) \left({k + 1}\right)}{2}}\right\rfloor < 1 \end{equation*} with $\left({p - 2}\right) \left({k + 1}\right) \ge 2$ leads to $1 < 1$ which is a contradiction. Next $m = p - 1$ and ${\delta}_{2} = 1$ then \begin{equation*} \left\lfloor{\frac{\left({p - 2}\right) k + p - 1}{2}}\right\rfloor < 1 \end{equation*} with $\left({p - 2}\right) k + p - 1 \ge 3$ leads to $1 < 1$ which is a contradiction. Therefore the strict inequality never holds, and the claimed minimum follows.
Simple math logic question: $A$ is $B$ if $C$, $A$ is $B$ if $D$ then $C \Leftrightarrow D$? Suppose that $A$ is $B$ if $C$, and $A$ is $B$ if $D$; does it follow that $C \Leftrightarrow D$? Full context: I am looking over something which stated Definition: a matrix $A$ is "Hurwitz-Cantor" if condition $C$ is satisfied. This is then followed by Theorem: A matrix $A$ is "Hurwitz-Cantor" if $D$ is satisfied. Then somewhere in a proof much later on, the authors seem to assert that: Since $D$ is satisfied, therefore $A$ is Hurwitz-Cantor, and thus $C$ is satisfied. To me, the logic is off. Can anyone check?
In a definition of a qualifying adjective, "if" usually really means "if and only if". But it looks like your condition $D$ is only sufficient, not necessary. So I would say that $D\implies C$, but not necessarily that $C\implies D$.
Why is this 'Proof' by induction not valid? I am trying to understand why induction is valid. For instance why would this 'proof' not be valid under the principle of proof by induction ? : $$ \sum_{k=1}^{\infty} \frac{1}{k} \lt \infty$$ because using induction on the statement $$S(n) = \sum_{1}^{n} \frac{1}{k} \lt \infty$$ - "$S(1) < \infty$ is true and "$S(n) < \infty$" implies "$S(n+1) < \infty$" since $S(n+1) \lt S(n) + \frac{1}{n}$
With induction, you can only prove $S(n)$ is true for all positive integers $n$. However, even though $S(n)$ is true for arbitrarily large $n$, the statement "$S(\infty)$" does not follow from induction because $\infty$ is not a positive integer.
Trying to find a perspective on a truth table I'm currently reading "A Book of Set Theory" by Charles C. Pinter and I'm a bit stuck in the first chapter, Exercise 1.1, problem 7. Which states: Prove that for all sentences $P, Q, R, S$, if $P \implies Q$ and $R \implies S$, then a) $P \vee R \implies Q \vee S$ b) $P \wedge R \implies Q \wedge S$ In the same set of exercises the author had me solve the problems using truth tables, which I did. Here is the truth table for a): $$\begin{array}{c|c|c|c|} P, Q, R, S & P \implies Q & R \implies S & P \vee R \implies Q \vee S \\ \hline \text{T, T, T, T} & \text{T} & \text{T} & \text{T} \\ \hline \text{T, T, T, F} & \text{T} & \text{F} & \text{T} \\ \hline \text{T, T, F, T} & \text{T} & \text{T} & \text{T} \\ \hline \text{T, T, F, F} & \text{T} & \text{T} & \text{T} \\ \hline \text{T, F, T, T} & \text{F} & \text{T} & \text{T} \\ \hline \text{T, F, F, T} & \text{F} & \text{T} & \text{T} \\ \hline \text{T, F, T, F} & \text{F} & \text{F} & \text{F} \\ \hline \text{T, F, F, F} & \text{F} & \text{T} & \text{F} \\ \hline \text{F, T, T, T} & \text{T} & \text{T} & \text{T} \\ \hline \text{F, T, T, F} & \text{T} & \text{F} & \text{T} \\ \hline \text{F, T, F, T} & \text{T} & \text{T} & \text{T} \\ \hline \text{F, F, T, T} & \text{T} & \text{T} & \text{T} \\ \hline \text{F, F, F, T} & \text{T} & \text{T} & \text{T} \\ \hline \text{F, F, T, F} & \text{T} & \text{F} & \text{F} \\ \hline \text{F, T, F, F} & \text{T} & \text{T} & \text{T} \\ \hline \text{F, F, F, F} & \text{T} & \text{T} & \text{T} \\ \hline \end{array}$$ Now I'm trying to make sense of what I'm seeing. How do I know if this is valid? If we interpret the original question as $(P \implies Q) \wedge (R \implies S) \equiv P \vee R \implies Q \vee S$. Then we can't prove it, because we have cases where $P \implies Q$ is true and $R \implies S$ is false and yet we see that $P \vee R \implies Q \vee S$ is true. Assuming that the question expects that the statements are indeed true. Is there something I'm missing? Also, could this be proven using inference? If so, how could it be done? As far as I got was that it resembles the Constructive Dilemma (CD). Any help would be greatly appreciated. :)
The problem does not ask you to prove an equivalence, but an implication. Whenever $P \Rightarrow Q$ and $R \Rightarrow S$, then $P \vee R \Rightarrow Q \vee S$. This is confirmed by your truth table. The converse does not need to hold, and in fact it doesn't.
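Equivalently, one can confirm by machine that the implication (rather than the equivalence) holds in every one of the 16 rows (a sketch):

```python
from itertools import product

for P, Q, R, S in product((False, True), repeat=4):
    premises = ((not P) or Q) and ((not R) or S)    # (P => Q) and (R => S)
    conclusion = (not (P or R)) or (Q or S)         # (P or R) => (Q or S)
    assert (not premises) or conclusion             # premises => conclusion
```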
Primitive roots of unity in p-adic integers The question is to find a condition for when $\mathbb{Z}_p$ contains a primitive $m$th root of unity. The first nontrivial and simple case is the fourth root of unity, $m=4$. So, we want to know when $f(x)=x^2+1$ has a solution in $\mathbb{Z}_p$. Suppose it has; then it has to be of the form $x=a_0+a_1p+a_2p^2+\cdots$, i.e., we have $$\sqrt{-1}=a_0+a_1p+a_2p^2+\cdots$$ Squaring and reducing modulo $p$ we have $-1\equiv a_0^2\mod p$. Special case: when $p=5$ we have $a_0=2$. Special case: when $p=3$ we have no solution. I observe that this boils down to the question: given a prime $p$, which roots of unity does $\mathbb{F}_p$ contain? I guess it is sufficient to look for $a_0$. Once we find $a_0$ then the $a_i$ for $i\geq 1$ exist automatically.
In a word, $\Bbb Q_p$ has no $p$-th roots of unity, except for $p=2$. Beyond that, the torsion in the multiplicative group is exactly isomorphic to the multiplicative structure of $\Bbb F_p$: cyclic of order $p-1$. So, to answer your question, you only need look at the relationship between $m$ and $p-1$.
Common proof for $(1+x)(1+x^2)(1+x^4)...(1+x^{2^n})=\dfrac{1-x^{2^{n+1}}}{1-x} $ I'm asking for an alternative (more common?) proof of the following equality, more specifically an alternative proof for the inductive step: $$(1+x)(1+x^2)(1+x^4)...(1+x^{2^n})=\dfrac{1-x^{2^{n+1}}}{1-x} (x\neq 1)$$ This is how I proved it: Base case: substitute $1$ for $n$, everything works out. Inductive step: assume that $$\prod _{i=0}^{n}(1+x^{2^i})=\dfrac{1-x^{2^{n+1}}}{1-x}$$ then $$\prod _{i=0}^{n+1}(1+x^{2^i})=\dfrac{1-x^{2^{n+1}}}{1-x} (1+x^{2^{n+1}})$$ Let $a$, $b$ and $c$ be positive real numbers, then $\dfrac {c}{a}=b\Leftrightarrow ab=c$, thus $$\dfrac {\left( \dfrac{1-x^{2^{n+2}}}{1-x}\right)}{\left( \dfrac{1-x^{2^{n+1}}}{1-x}\right)}=1+x^{2^{n+1}} \Leftrightarrow \dfrac{1-x^{2^{n+1}}}{1-x} (1+x^{2^{n+1}}) = \dfrac{1-x^{2^{n+2}}}{1-x}$$ $$\dfrac {\left( \dfrac{1-x^{2^{n+2}}}{1-x}\right)}{\left( \dfrac{1-x^{2^{n+1}}}{1-x}\right)}=\dfrac {1-x^{2^{n+2}}}{1-x^{2^{n+1}}}$$ Applying polynomial division, we see that indeed $\dfrac {1-x^{2^{n+2}}}{1-x^{2^{n+1}}} = 1+x^{2^{n+1}}$. Thus $\dfrac{1-x^{2^{n+1}}}{1-x} (1+x^{2^{n+1}}) = \dfrac{1-x^{2^{n+2}}}{1-x}$. However, the exercise was in a chapter on binomial coefficients and Pascal's triangle, and furthermore we didn't mention polynomial division in class. Which makes me think that there was another solution that I was "supposed" to see. How was I supposed to prove it?
Expanding $(1+x)(1+x^2)(1+x^4)\cdots(1+x^{2^n})$ gives $1+x+x^2+\cdots +x^{2^{n+1}-1}$ because every integer $k$ with $0 \le k \le 2^{n+1}-1$ has a unique binary representation (as a sum of powers of $2$).
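A quick symbolic check of the identity for the first few $n$ (a sketch):

```python
import sympy as sp
from functools import reduce
from operator import mul

x = sp.symbols('x')
for n in range(6):
    prod = reduce(mul, [1 + x**(2**i) for i in range(n + 1)])
    rhs = sum(x**k for k in range(2**(n + 1)))   # (1 - x^(2^(n+1)))/(1 - x)
    assert sp.expand(prod - rhs) == 0
```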
Password combinatorics on identical notes I have the following two questions: A password consists only of lowercase letters a,…,z * *How many possibilities are there to put in a hat two identical folded notes such that on one of them is a password of length 6 and on the other a password of length 7, such that no letter repeats twice (not on the same note, and not between the two notes)? Is the answer different when the notes are of different colors? Explain. *How many possibilities are there to put in a hat two identical folded notes such that on each of them is a password of length 6? My answers are as follows: * *The notes are identical, hence we can generate a length-13 password and "split" it in the middle into a length-6 and a length-7 password with no repeats, hence the answer is $\frac{26!}{13!}$. When the notes are not identical, we need to multiply the result by 2, since either colored note can carry the length-6 password with the other carrying the length-7 one. Hence $2(\frac{26!}{13!})$ *The letters can repeat within a password and across the two notes, but the notes are identical, hence $(passA, passB) and (passB, passA) \mapsto \left \{ passA, passB \right \}$. Meaning we map 2 to 1, and the answer is $\frac{26^{12}}{2}$. When the colors are different we map 1 to 1, hence the answer is $26^{12}$. We had a debate among us students whether the above are correct; some say the calculations are wrong, some say the colors don't make any difference in the first problem... Any help would be greatly appreciated!
How many possibilities are there to put in a hat $2$ notes such that on one of them is a password of length $6$ and on the other a password of length $7$, such that no letter repeats more than once? If the notes are identical: $\binom{26}{6+7}\cdot(6+7)!$ If the notes are different: $\binom{26}{6+7}\cdot(6+7)!\cdot2!$ How many possibilities are there to put in a hat $2$ notes such that on each one of them is a password of length $6$? If the notes are identical: $(26^{6+6}+26^{6})/2$, the number of unordered pairs in which the two passwords may also coincide (note that $26^{6+6}/2!$ on its own is not an integer, because pairs of equal passwords are counted once rather than twice). If the notes are different: $26^{6+6}$
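The count for identical notes can be checked by enumeration on a toy alphabet (3 letters, passwords of length 2):

```python
from itertools import product

words = [''.join(w) for w in product('abc', repeat=2)]            # 9 toy passwords
ordered = len(words) ** 2                                          # colored notes: 81
unordered = len({tuple(sorted(pair)) for pair in product(words, repeat=2)})
print(ordered, unordered)   # 81 and 45 = (81 + 9)/2, not 81/2
```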
Squared Square Root confusion I am confused about the following: $$\sqrt{x^2}=?\ (\sqrt x)^2 =?\ x^{2/2}$$ The source of confusion is: Let $$f(x)=\sqrt {x^2}$$ $$g(x)=\ (\sqrt x)^2 $$ Then $$f(-3)=3$$ $$g(-3)=\mbox{undefined}?$$ While the range of $f(x)$ and $g(x)$ might be the same, their domain appears to be different ($g(x)$ restricts $x$ to be positive). At the same time, both can be written as $$x^{2/2}$$ What am I missing? -Thank you :)
The problem is $$\sqrt{x^2}=(\sqrt{x})^2$$ only if $x \ge 0$. If we are talking about real numbers. In general we can write: $$\sqrt{x^2}=\left(\sqrt{|x|}\right)^2 \Leftrightarrow \sqrt{x^2}=|x|$$
X red balls indistinguishable and Y green balls indistinguishable Let's say we have 10 (X) red balls and 3 (Y) green balls. All the balls are indistinguishable. Firstly, does "indistinguishable" mean that the order of the balls doesn't matter since they do not have a number? Secondly : * *How many ways can we place these 13 balls in a line? *How many ways can we place these 13 balls in a line, if only their color matters? *How many ways can we choose 3 red balls and 2 green balls? For the question number one, I think it is $$ 13! $$ For two other questions, I have no idea. I do not grasp the reasoning, the logic behind. What if we have K colors?
Yes, indistinguishable means it doesn't matter if you swap two indistinguishable balls, since you can't distinguish the two results (but I assume only in question 2. you mean balls of the same color are indistinguishable, and for the rest the balls are distinguishable). Assuming you mean distinguishable balls (else the second question's formulation wouldn't make sense), some hints: * *Your answer is correct: For the first position you have 13 choices of balls (since they are distinguishable). For the second position 12, and so on. This yields your solution. *Hint: When only the color matters, think of putting 13 balls in one line. Now choose which three you paint green. So it's choose 3 out of 13 (to be green) possibilities (I hope you know binomial coefficients). *Hint: How many ways can you choose 3 balls out of 10? How many ways 2 out of 3? Now multiply these numbers (why?).
Tensor Products and a Basis of $\Bbb R^2 ⊗ \Bbb R^2$ Let ${e_1, e_2}$ be the standard basis of $\Bbb R^2$. Show that $e_1⊗e_2+e_2⊗e_1$ cannot be written in the form $u ⊗ v$ with $u,v \in \Bbb R^2$. I am just being introduced to tensor spaces and I know that $e_1⊗e_1,e_1⊗e_2,e_2⊗e_1,e_2⊗e_2$ is a basis of $\Bbb R^2 ⊗ \Bbb R^2$ but I am not sure how to show a contradiction. Any hints appreciated. Edit: I also know that $e_1⊗e_2+e_2⊗e_1 = (e_1+e_2)\otimes(e_1+e_2) - e_1⊗e_1 - e_2⊗e_2$
Assume that $e_1 \otimes e_2 + e_2 \otimes e_1 = u \otimes v$ and write $$ u = u_1 e_1 + u_2 e_2, \,\,\, v = v_1 e_1 + v_2 e_2. $$ Expanding $u \otimes v$, we have $$ u \otimes v = (u_1 e_1 + u_2 e_2) \otimes (v_1 e_1 + v_2 e_2) = \\ (u_1 v_1) (e_1 \otimes e_1) + (u_1 v_2) (e_1 \otimes e_2) + (u_2 v_1) (e_2 \otimes e_1) + (u_2 v_2) (e_2 \otimes e_2) = \\ 1 \cdot (e_1 \otimes e_2) + 1 \cdot (e_2 \otimes e_1). $$ Since $e_i \otimes e_j$ is a basis of $\mathbb{R}^2 \otimes \mathbb{R}^2$, we must have $$ u_2 v_2 = u_1 v_1 = 0, \,\,\, u_1 v_2 = u_2 v_1 = 1. $$ Show that this leads to a contradiction.
Summation Formula for Series I have a series of the form : \begin{equation} \frac{1}{M-1} + \frac{q}{M-2} + \frac{q^2}{M-3} + \frac{q^3}{M-4} + \frac{q^4}{M-5}+\dots = \sum_{i=1} ^{M-1} \frac{q^{i-1}}{M-i} \end{equation} I want to solve this series to find a general formula that provides its sum. I am not able to figure out the best and easy way to proceed with this. I would be glad if anybody could point me the right direction for solving such series.
Sum it from $M-1$ to $1$, i.e., sum it all up backwards. $$\sum_{i=1}^{M-1}\frac{q^{i-1}}{M-i}=\sum_{k=1}^{M-1}\frac{q^{M-k-1}}{k}=q^{M-1}\sum_{k=1}^{M-1}\frac{q^{-k}}{k}$$ Let $q=r^{-1}:$ $$=r^{1-M}\sum_{k=1}^{M-1}\frac{r^k}{k}\tag{$\star$}$$ Recall the geometric series: $$\frac{1-x^{M-1}}{1-x}=\sum_{k=1}^{M-1}x^{k-1}$$ Integrate wrt $x$ from $0$ to $r:$ $$\int_0^r\frac{1-x^{M-1}}{1-x}\ dx=\int_0^r\sum_{k=1}^{M-1}x^{k-1}\ dx=\sum_{k=1}^{M-1}\frac{r^k}k$$ Thus, you may rewrite your sum as $$\sum_{i=1}^{M-1}\frac{q^{i-1}}{M-i}=q^{M-1}\int_0^{1/q}\frac{1-x^{M-1}}{1-x}\ dx$$ From there, you may use integration techniques to derive closed-form expressions for particular $q,M$.
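A quick numerical check that the integral representation matches the sum (the values $M=8$, $q=3/2$ are arbitrary):

```python
from scipy.integrate import quad

M, q = 8, 1.5
direct = sum(q**(i - 1) / (M - i) for i in range(1, M))
integral = q**(M - 1) * quad(lambda x: (1 - x**(M - 1)) / (1 - x), 0, 1/q)[0]
print(direct, integral)   # both ~18.5616
```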
Evaluation of the integral $\int^{\infty}_{0} \frac{dx}{(x+\sqrt{x^2+1})^n}$ $$ \mbox{If}\ n>1,\ \mbox{then prove that}\quad\int_{0}^{\infty}{\mathrm{d}x \over \left(x + \,\sqrt{\, x^{2} + 1\,}\,\right)^{n}} = {n \over n^{2} - 1} $$ Could someone give me a little hint so that I could proceed with this question. I tried putting $x = \tan\left(A\right)$ but it did not work out.
You asked for a hint, so here is one: Let $$ u=x+\sqrt{1+x^2}. $$
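(For the skeptical: here is a numerical check of the claimed value, for a few sample $n>1$.)

```python
import numpy as np
from scipy.integrate import quad

for n in (2, 3, 5.5):
    val, _ = quad(lambda x: (x + np.sqrt(x**2 + 1))**(-n), 0, np.inf)
    print(n, val, n / (n**2 - 1))   # the two columns agree
```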
Alternative ways to prove that $\int_0^{2\pi} \cos(\sin{x})e^{\cos{x}} dx=2\pi$ without complex analysis? $$\int_0^{2\pi} \cos(\sin{x})e^{\cos{x}} dx=2\pi$$ I derived this rather incredible result via Cauchy's theorem as I was working through some simple contour integrals. I was wondering if this integral can be solved without complex analysis, and if so how? I can't see any obvious substitutions that can be possible, nor any parametrisations that could simplify the integral through Feynman's method.
Notice first that the integrand is symmetrical about the axis $\theta=\pi$, so we have $$\int_0^{2\pi}e^{\cos\theta}\cos(\sin\theta)d\theta = 2\int_0^{\pi}e^{\cos\theta}\cos(\sin\theta)d\theta$$ We define the following function: \begin{align} I(t) :&= \int_{0}^{\pi}e^{t\cos\theta}\cos(t\sin\theta)d\theta\\ &= \frac{1}{2}\int_{0}^{\pi}\exp\left[\frac{t}{2}(e^{i\theta}+e^{-i\theta})\right]\left[\exp\left(\frac{t}{2}(e^{i\theta}-e^{-i\theta})\right)+\exp\left(-\frac{t}{2}(e^{i\theta}-e^{-i\theta})\right)\right]d\theta\\ &= \frac{1}{2}\int_{0}^{\pi}\left[\exp\left(te^{i\theta}\right)+\exp\left(te^{-i\theta}\right)\right]d\theta \end{align} We differentiate with respect to $t$ and obtain \begin{align} I'(t) = \frac{1}{2}\int_0^\pi\left[e^{i\theta}\exp\left(te^{i\theta}\right) + e^{-i\theta}\exp\left(te^{-i\theta}\right)\right]d\theta \end{align} Let $u:=e^{i\theta}$, then $-idu=e^{i\theta}d\theta$, $u(0) = 1$ and $u(\pi)=-1$. Similarly, let $z:=e^{-i\theta}$, then $idz=e^{-i\theta}d\theta$, $z(0) = 1$ and $z(\pi)=-1$. Then \begin{align} I'(t) &= -\frac{i}{2}\int_1^{-1}e^{tu}du + \frac{i}{2}\int_1^{-1}e^{tz}dz\\ &= \frac{i}{2}\left[\int_{-1}^1 e^{tu}du - \int_{-1}^1 e^{tz}dz\right] \\ &= 0 \end{align} since $u$ and $z$ are dummy variables. Since $I'(t) = 0$ for any $t$, $I(t) = c$ for some $c$ for all values of $t$. We easily compute $$I(0) = \int_0^\pi d\theta = \pi$$ Thus we have the following result: $$\int_0^{2\pi}e^{\cos\theta}\cos(\sin\theta)d\theta = 2\pi$$
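As a last sanity check, the value is easy to confirm numerically:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.cos(np.sin(x)) * np.exp(np.cos(x)), 0, 2*np.pi)
print(val, 2*np.pi)   # both ~6.283185
```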
For what $A,B \subset \mathbb R$ is $\{(x,y) \in A \times B \mid x^2-y^2=0\}$ a well defined bijection? For what $A,B \subset \mathbb R$ is the relation $\{(x,y) \in A \times B \mid x^2-y^2=0\}$ a well-defined bijection from $A$ to $B$? This concept is pretty new for me, so I'm not sure how to approach this the right way.
As it stands, $x^2 - y^2 = 0$ gives $x = y$ or $x = -y$. So, how do we restrict $A$ and $B$ so that only one of the above holds? Restrict $A$ to either the positive reals or the negative reals, and $B$ to either the positive reals or the negative reals. Then for every $x\in A$ there will be a unique $y\in B$ that satisfies the relation, and for every $y \in B$ there is exactly one $x\in A$.
How do I solve quadratic equations when the coefficients are complex and real? I needed to solve this: $$x^2 + (2i-3)x + 2-4i = 0 $$ I tried the quadratic formula but it didn't work. So how do I solve this without "guessing" roots? If I guess $x=2$ it works; then I can divide the polynomial and find the other root; but I can't "guess" a root. $b^2-4ac=4i-3$, so now I have to work with $\sqrt{4i-3}$, which I don't know how to handle. Apparently $4i-3$ is equal to $(1+2i)^2$, but I don't know how to get to this answer, so I am stuck.
Sometimes questions have small tricks in them which allow for quick solutions. This question is one of them if the expression is rearranged and factorised. $x^2+(2i−3)x+(2−4i) = 0$ Group the real and imaginary components: $x^2-3x+2+2ix-4i=0$ Now factorise: $(x-1)(x-2)+2i(x-2)=0$ ...and factorise again: $(x-2)(x-1+2i)=0$ ...and applying the null factor theorem, $x=2$ or $x=1-2i.$ I know that this is just a lucky coincidence with this question but it is a technique nonetheless to solve problems. My best advice would be to know as many techniques as possible, such as: * *factorisation (on occasion it gives elegant solutions like the one above) *sum/product of roots (uses simultaneous equations) *trial and error/factor theorem *quadratic formula and more, using simpler techniques wherever you can. Square roots of complex numbers can either be dealt with algebraically or using the polar forms in other solutions here. Doing so algebraically can be done by using an identity with standard forms of complex numbers ($x+iy$, $a+ib$ etc.) and simultaneous equations. let $Z = \sqrt{4i-3}$ where Z is complex. i.e. $Z$ is of the form $Z = a + ib$, where $a$ and $b$ are REAL coefficients. (This is important later on.) Square both sides: $Z^2 = -3+4i$ Expand the standard form of $Z^2$: $(a+ib)^2 = -3+4i$ This becomes $a^2 - b^2 + 2abi = -3+4i$ The above is an identity and so the real and imaginary parts can be equated. Hence two equations are formed: $a^2 - b^2 = -3$................(1) $2abi = 4i$ which becomes $ab = 2$......(2) In (2), $b = 2/a$ so $a^2 - (2/a)^2 + 3 = 0$ i.e. $a^4 + 3a^2 - 4 = 0$ (multiplying all by $a^2$ and rearranging) which is a quadratic-styled quartic: $(a^2 + 4)(a^2 - 1) = 0.$ Solving, $a=1, -1, 2i$ and $-2i$ BUT since $a$ and $b$ are real, we discard $2i$ and $-2i$. Now we simply solve for $b$ using $b = 2/a$. Hence the solutions are $a=1, b=2$ or $a=-1, b=-2$. and so $\sqrt{4i-3} = \pm(1+2i)$ since we defined $Z = a+ib$. Best of luck with any future questions! Hope this helped.
What is the intuition behind the formula for the average? Why is the average for $n$ numbers given by $(a+b+c+\cdots)/n$? I deduced the formula for the average of 2 numbers which was easy because its also the mid point, but I couldn't do it for more than 2 numbers.
In most contexts, what passes for an 'average' can be thought of this way: if you replaced a collection of separate instances with their 'average', you get the same result. The usual mean comes from thinking this way for addition: if you have numbers $a_1,\ldots,a_n$, their sum is $a_1+\cdots+a_n$. If you replaced all of them with their mean $\mu$, you should also get $a_1+\cdots+a_n$. Therefore $\mu$ must satisfy $$ n\mu=a_1+\cdots+a_n, $$ leading to the formula you've seen. As another example: doing the same thing but for multiplication leads to the geometric mean $\sqrt[n]{a_1a_2\cdots a_n}$.
Solve $\left(\sqrt{\sqrt{x^2-5x+8}+\sqrt{x^2-5x+6}} \right)^x + \left(\sqrt{\sqrt{x^2-5x+8}-\sqrt{x^2-5x+6}} \right)^x = 2^{\frac{x+4}{4}} $ Solve $$\left(\sqrt{\sqrt{x^2-5x+8}+\sqrt{x^2-5x+6}} \right)^x + \left(\sqrt{\sqrt{x^2-5x+8}-\sqrt{x^2-5x+6}} \right)^x = 2^{\frac{x+4}{4}} $$ Preface; I think there should be an algebraic method of solving this equation for $x$ since graphing these two graphs We get whole number solutions such as $x=0,2,3$ So I think there is someway of manipulating this equation into a disguised quadratic somehow! So my attempt is this: $$\left(\sqrt{\sqrt{x^2-5x+8}+\sqrt{x^2-5x+6}} \right)^x + \left(\sqrt{\sqrt{x^2-5x+8}-\sqrt{x^2-5x+6}} \right)^x = 2^{\frac{x+4}{4}} $$ Let $u=x^2-5x+8$ and $u-2=x^2-5x+6$ which means we can rewrite our equation as $$\left(\sqrt{\sqrt{u}+\sqrt{u-2}} \right)^x + \left(\sqrt{\sqrt{u}-\sqrt{u-2}} \right)^x = 2^{\frac{x+4}{4}} $$ Squaring both sides we get $$ \left( \sqrt{u} + \sqrt{u-2}\right)^x + 2\left(\sqrt{\sqrt{u}+\sqrt{u-2}} \right)^x\left(\sqrt{\sqrt{u}-\sqrt{u-2}} \right)^x+ \left( \sqrt{u} - \sqrt{u-2}\right)^x = 2^{\frac{x+4}{2}}$$ $$\left( \sqrt{u} + \sqrt{u-2}\right)^x +\left( \sqrt{u} - \sqrt{u-2}\right)^x+ 2(\sqrt{2})^x+ = 2^{\frac{x+4}{2}}$$ Now some little algebra $2^{\frac{x+4}{2}}-2(\sqrt{2})^x=2^{\frac{x}{2}} \cdot 2^2 - 2 \cdot 2^{\frac{x}{2}}=2^{\frac{x}{2}} \cdot 2=2^{\frac{x+2}{2}}$ $$ \left( \sqrt{u} + \sqrt{u-2}\right)^x +\left( \sqrt{u} - \sqrt{u-2}\right)^x = 2^{\frac{x+2}{2}} $$ Square both sides again $$ \left( \sqrt{u} + \sqrt{u-2}\right)^{2x} +\left( \sqrt{u} - \sqrt{u-2}\right)^{2x}+2\left( \sqrt{u} + \sqrt{u-2}\right)^x\left( \sqrt{u} - \sqrt{u-2}\right)^x = 2^{x+2} $$ $$ \left( \sqrt{u} + \sqrt{u-2}\right)^{2x} +\left( \sqrt{u} - \sqrt{u-2}\right)^{2x}+2(2^x) = 2^{x+2} $$ $$ \left( \sqrt{u} + \sqrt{u-2}\right)^{2x} +\left( \sqrt{u} - \sqrt{u-2}\right)^{2x} = 2^{x+1} $$ Now I've hit a roadblock.. :(
HINT: Replace $\sqrt u-\sqrt{u-2}=\dfrac2{\sqrt u+\sqrt{u-2}}$ to form a Quadratic equation in $$\left(\dfrac{\sqrt2}{\sqrt u+\sqrt{u-2}}\right)^{x/2}$$ which is $$\left(\left(\dfrac{\sqrt2}{\sqrt u+\sqrt{u-2}}\right)^{x/2}-1\right)^2=0$$ So either $x=0$ or $\sqrt u+\sqrt{u-2}=\sqrt2$, i.e. $u=2$, which gives $x^2-5x+6=0$, i.e. $x=2$ or $x=3$.
If $x, y \leq 500$ then find the number of nonnegative integer solutions to $4 x - 17y = 1$ If $x, y \leq 500$ then find the number of nonnegative integer solutions to $4 x - 17y = 1$. I don't know how to proceed. Please help me out. Thank you.
$$4x-17y=17-16\iff4(x+4)=17(y+1)\iff\dfrac{17(y+1)}4=x+4$$ which is an integer $\implies4|17(y+1)\implies4|(y+1)$ So $y$ can be written as $4m-1$ where $m$ is any integer $\implies x=17m-4$ We need $0\le17m-4\le500\iff1\le m\le29\ \ \ \ (1)$ and $0\le4m-1\le500\iff1\le m\le125\ \ \ \ (2)$ What is the intersection of $(1),(2)?$
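(For completeness, a brute-force count, my addition: the intersection of $(1)$ and $(2)$ is $1\le m\le29$, so there should be $29$ solutions.)

```python
# Count nonnegative solutions of 4x - 17y = 1 with x, y <= 500 directly
count = sum(1 for y in range(501)
            if (1 + 17 * y) % 4 == 0 and (1 + 17 * y) // 4 <= 500)
print(count)  # 29
```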
Range of function $y=x^2 + \frac{4}{x^2+9}$ Please don't give the solution; I already got the answer by a different method. I want to know why the method in the picture is wrong. Why can't we simply add inequalities like that to get the interval of the range and then find the minimum value from that range? The correct answer to this question is $4/9$ (the minimum value).
You're right that $t\ge9$ and $0<4/t\le 4/9$. Therefore $$ t+\frac{4}{t}-9\ge0 $$ However, this does not tell you that the minimum is $0$, which is not even an attained value. Indeed, you have $t\ge9$ and $4/t>0$, so certainly $$ t+\frac{4}{t}\color{red}{>}9 $$ Thus, just considering those inequalities is not sufficient for determining the minimum value.
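(A numerical illustration, added by me: with $t=x^2+9$ the function is $y=t+4/t-9$, its minimum really is $4/9$ at $x=0$, and $t+4/t-9$ never reaches $0$.)

```python
def y(x):
    t = x * x + 9
    return t + 4 / t - 9          # equals x^2 + 4/(x^2 + 9)

vals = [y(k / 1000) for k in range(-5000, 5001)]
print(min(vals), 4 / 9)           # both ~0.4444..., minimum attained at x = 0
print(all(v > 0 for v in vals))   # True: the lower bound 0 is never attained
```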
example of a metric space in which triangle inequality is equality Is there any example of a metric space $X$ with more than two points such that the triangle inequality is always equality?
In any metric space with at least two points, the triangle inequality is an actual inequality. For if $x\neq y$ and thus $d(x,y) > 0$, then by the triangle inequality, $$ 0 = d(x,x) \leq d(x,y) + d(y,x) = 2d(x,y), $$ and since $d(x,y) > 0$ we must therefore conclude that $$ d(x,x) < d(x,y) + d(y,x). $$ So no metric space with more than two points can have equality in every instance of the triangle inequality. The same conclusion holds for a nontrivial pseudometric.
Vectors in three-dimensional space: locus of a variable point The fixed point $B$ has position vector $b$ relative to a fixed point $O$. A variable point $M$ has position vector $m$ relative to $O$. Find the locus of $M$ if $m \cdot (m - b) = 0$. I am told the answer is derived as such: Can somebody explain the solution to me? I've tried but can't seem to understand... thanks!
If $\vec{OM}=(x,y,z)$ and $\vec{OB}=(a,b,c)$ then $$\vec{OM}\cdot\vec{BM}=0\implies$$ $$x(x-a)+y(y-b)+z(z-c)=0 \implies$$ $$(x-\frac{a}{2})^2+(y-\frac{b}{2})^2+(z-\frac{c}{2})^2=\frac{a^2+b^2+c^2}{4}$$ Hence the point $M$ lies on the sphere of center $(\frac{a}{2},\frac{b}{2},\frac{c}{2})$ and radius $R=\frac{\sqrt{a^2+b^2+c^2}}{2}$.
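(One can let SymPy redo the completing-the-square step; this check is my addition, assuming SymPy is available.)

```python
from sympy import symbols, expand

x, y, z, a, b, c = symbols('x y z a b c')

dot = x * (x - a) + y * (y - b) + z * (z - c)      # OM . BM
sphere = ((x - a / 2) ** 2 + (y - b / 2) ** 2 + (z - c / 2) ** 2
          - (a ** 2 + b ** 2 + c ** 2) / 4)
print(expand(dot - sphere))   # 0: the two equations describe the same locus
```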
Monotone likelihood ratio without densities I would like to find a generalization of the monotone likelihood ratio ordering that does not require that the probability distributions admit a density and have the same support, which would allow me to deal with degenerate probability distributions, and with the case where the supports are disjoint. For instance, if $T_0$ is a degenerate probability distribution localized at $0$ and $T_1$ is a probability distribution with support included in $[1,+\infty)$, I would like the definition to guarantee that $T_1$ dominates $T_0$ according to this ordering. This seems to be a natural extension of the standard definition. I would also like the definition to boil down to the standard criterion if the distributions admit a density and have a common support. Are there any existing generalizations of the monotone likelihood ratio order that would be appropriate for this purpose? Thanks!
A general definition is the following (see Def. 1.C.1 in Shaked and Shanthikumar, Stochastic Orders, Springer, 2007). Let $X$ and $Y$ be continuous [discrete] r.v.'s with densities [probability mass functions] $f$ and $g$; then $Y \succeq X$ (in the likelihood ratio order) if $$f(u)g(v) \ge f(v)g(u)$$ for all $u \le v$. An equivalent formulation is to require that the ratio $$\frac{f(t)}{g(t)}$$ be decreasing over the union of the supports of $X$ and $Y$ (interpreting $a/0$ as $\infty$ when $a > 0$).
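To connect this back to the degenerate example in the question (the illustration below is mine, not from the cited book): take $X$ a point mass at $0$ and $Y$ uniform on $\{1,2,3\}$. The defining inequality holds, so $Y \succeq X$ even though the supports are disjoint.

```python
# Probability mass functions on a common grid of points
points = [0, 1, 2, 3]
f = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}     # X: point mass at 0
g = {0: 0.0, 1: 1/3, 2: 1/3, 3: 1/3}     # Y: uniform on {1, 2, 3}

ok = all(f[u] * g[v] >= f[v] * g[u]
         for u in points for v in points if u <= v)
print(ok)  # True, hence Y dominates X in the likelihood ratio order
```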
Proof on cubic residues This problem is from "An Introduction to the Theory of Numbers" by Ivan Niven. Define $a$ to be a cubic residue if $x^3 \equiv a \mod p$ has solutions. Prove that if $p=3k+2$, then every number in a reduced residue system is a cubic residue. And if $p=3k+1$, only one-third of the reduced residues are cubic residues. I did find a solution here. But I've not covered the portion of the book after section 2.5 (self-study). I attempted the question in the following way and would appreciate any hints to reach the conclusion: As $B=\{1,2,\dots ,p-1\}$ is a reduced residue system, it should be enough to prove that $i^3 \equiv j^3\mod p \Rightarrow i\equiv j\mod p$. So one part of it boils down to: if $p=3k+2$, then show that $p \nmid i^2+j^2+ij$ when $i\neq j \wedge i,j \in B$. I have no idea how to use the $p=3k+2$ part for proving this.
Hint $\ $ The map $\ x\,\mapsto\, x^{\large 3}\ $ has inverse $\ x\,\mapsto\, x^{\large\color{#c00}{ 1+2k}} \ [\,\equiv x^{\large\color{#c00}{1/3}}\,]\ $ since by $\rm\color{#0a0}{Fermat}$ $$(x^{\large 3})^{\large\color{#c00}{ 1+2k}}\equiv x^{\large 1+2(1+3k)}\equiv x^{\large 1+2(p-1)}\equiv x (\color{#0a0}{x^{\large p-1}})^{\large 2}\equiv x\, \color{#0a0}{\bf 1}^{\large 2}\equiv x $$ Remark $\ $ More conceptually it's true because $\, \color{#c00}{1\!+\!2k\equiv 1/3}\ $ exists $\!\!\mod{p\!-\!1}\ $ since $${\rm mod}\,\ p\!-\!1 = 1\!+\!3k:\ \ 1\!+\!2k \equiv -k\equiv \dfrac{1}3\ \ \ {\rm by}\ \ \ 3(-k)\equiv 1\qquad $$ So raise $\ x^{\large 3}\equiv a\ $ to power $\, 1/3\equiv 1\!+\!2k\pmod{p\!-\!1}\ $ to get $\, x\equiv a^{\large 1+2k}$ Note $\ $ It suffices to know that $\,1/3\,$ exists mod $\,p\!-\!1,\,$ or equivalently, by Bezout, that $\,\gcd(3,p\!-\!1) = 1.\,$ We do not need to explicitly compute the inverse as we did above.
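(A small computational check of both cases, which I'm adding for illustration: for $p\equiv2\pmod3$ every reduced residue is a cube, while for $p\equiv1\pmod3$ exactly $(p-1)/3$ of them are.)

```python
def cubic_residues(p):
    return {pow(x, 3, p) for x in range(1, p)}

for p in (5, 11, 17):    # primes of the form 3k+2: cubing permutes the residues
    print(p, len(cubic_residues(p)) == p - 1)          # True
for p in (7, 13, 19):    # primes of the form 3k+1: only one third are cubes
    print(p, len(cubic_residues(p)) == (p - 1) // 3)   # True
```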
Proving finite complement topology is not A1 In my topology course they show that the first countability axiom fails for the finite complement topology. They state the following: Suppose $X$ is an uncountable set with the following topology, $$ \mathcal{T} = \{ A \subset X \mid X \setminus A ~\text{finite}\}.$$ Take a countable neighbourhood basis $\{B_n \mid n\in \mathbb{N}\}$ for the neighbourhood filter of $x$; then $$ X\setminus\{x\} = \bigcup_n (X\setminus B_n). $$ Since the term on the right is a countable union of finite sets and thus countable, a contradiction follows. Now my question is: how do they get the equality?
It is clear that $X\setminus\{x\}\supset \cup_n X\setminus B_n$, since $x\in B_n$ for all $n$. We need to prove the other inclusion. Start with a $y\in X\setminus\{x\}$. Now $U=X\setminus\{y\}$ is an open subset of $X$ containing $x$. Hence there is some $B_r$ such that $x\in B_r\subset U$. So, this means that $y\notin B_r$. Thus $y\in X\setminus B_r\subset\cup_n X\setminus B_n$.
existence of a ring homomorphism given a diagram Let $K$ be a field, $T_i$, $T'_i$ some variables, and $I$, $J$ ideals. Given $\varphi$, a ring homomorphism, and $\mu$, $\mu'$ (the quotient homomorphisms), is there a homomorphism of rings $\phi$ such that the following diagram commutes? Is it unique? I know I need to use the fact that those rings are free, but how? $\require{AMScd} \begin{CD} K[T'_1,\ldots,T'_n] @>{\phi?}>> K[T_1,\ldots,T_m]\\ @VV\mu 'V @VV \mu V \\ K[T'_1,\ldots,T'_n]/J @>{\varphi}>> K[T_1,\ldots,T_m]/I \end{CD}$
The fact that the right column of your diagram has a polynomial ring is irrelevant. So, let's be given $K[X_1,X_2,\dots,X_n]$, an ideal $J$ thereof, a (commutative) $K$-algebra $R$, an ideal $I$ thereof and a $K$-algebra homomorphism $\varphi\colon K[X_1,X_2,\dots,X_n]/J\to R/I$. Denoting by $\alpha\colon K[X_1,X_2,\dots,X_n]\to K[X_1,X_2,\dots,X_n]/J$ and $\beta\colon R\to R/I$ the canonical projections, we want to find a ring homomorphism $\psi\colon K[X_1,X_2,\dots,X_n]\to R$ such that $$ \beta\circ\psi=\varphi\circ\alpha $$ Such a homomorphism is determined once we assign images for $X_1,X_2,\dots,X_n$. Just take $r_i\in R$ (for $i=1,2,\dots,n$) such that $\beta(r_i)=\varphi(\alpha(X_i))$ and you're done, by declaring $\psi(X_i)=r_i$. There is no uniqueness, because the elements $r_i$ can in general be chosen in different ways (differing by elements of $I$).
Problem with Singular Value Decomposition I have a very trivial SVD Example, but I'm not sure what's going wrong. The typical way to get an SVD for a matrix $A = UDV^T$ is to compute the eigenvectors of $A^TA$ and $AA^T$. The eigenvectors of $A^TA$ make up the columns of $U$ and the eigenvectors of $AA^T$ make up the column of $V$. From what I've read, the singular values in $D$ are square roots of eigenvalues from $AA^T$ or $A^TA$, and must be non-negative. However, for the simple example $A = \left[\begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix}\right]$, $A^TA$ and $AA^T$ are both the identity, and thus have eigenvectors $\bigg\lbrace \left[\begin{matrix} 1 \\ 0 \end{matrix}\right]\ ,\ \left[\begin{matrix} 0 \\ 1 \end{matrix}\right]\bigg\rbrace$. Clearly, the eigenvalues are 1 and 1, so our decomposition ought to be: \begin{align} \left[\begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix}\right] = \left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right]\left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right]\left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right] \end{align} What has gone wrong?
To directly answer your question, the problem is that the "typical way" you describe for getting an SVD is incorrect: The typical way to get an SVD for a matrix $A = UDV^T$ is to compute the eigenvectors of $A^TA$ $\color{red}{and}$ $AA^T$. The eigenvectors of $A^TA$ make up the columns of $U$ and the eigenvectors of $AA^T$ make up the column of $V$. One should instead fix $U$ and then solve for $V$ (or fix $V$ and then solve for $U$) using the relation $A=UDV^T$. (Incidentally, with the convention $A = UDV^T$, the columns of $U$ are eigenvectors of $AA^T$ and the columns of $V$ are eigenvectors of $A^TA$, not the other way around; here it happens not to matter, since both products are the identity.) On the other hand, you wrote that ... and thus have eigenvectors $\bigg\lbrace \left[\begin{matrix} 1 \\ 0 \end{matrix}\right]\ ,\ \left[\begin{matrix} 0 \\ 1 \end{matrix}\right]\bigg\rbrace$ But one could equally have $\bigg\lbrace \left[\begin{matrix} -1 \\ 0 \end{matrix}\right]\ ,\ \left[\begin{matrix} 0 \\ 1 \end{matrix}\right]\bigg\rbrace$, or $\bigg\lbrace \left[\begin{matrix} 1 \\ 0 \end{matrix}\right]\ ,\ \left[\begin{matrix} 0 \\ -1 \end{matrix}\right]\bigg\rbrace$, or $\bigg\lbrace \left[\begin{matrix} -1 \\ 0 \end{matrix}\right]\ ,\ \left[\begin{matrix} 0 \\ -1 \end{matrix}\right]\bigg\rbrace$. Any one of these sets consists of independent eigenvectors (with unit length) of the identity matrix. If one chooses from these sets arbitrarily to form $U$ and $V$, there is no reason to expect that $$ A=UDV^T. $$
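(For comparison, my addition, assuming NumPy is installed: `numpy.linalg.svd` returns factors whose signs are chosen consistently, so the product reconstructs $A$, unlike the naive independent pairing.)

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
U, s, Vt = np.linalg.svd(A)
print(U)                    # e.g. the identity
print(Vt)                   # e.g. diag(1, -1): the sign has moved into V^T
print(U @ np.diag(s) @ Vt)  # reconstructs A
```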
Why does this method for finding the square root of a number work? Recently I read this, which teaches a trick for finding the square root of a number very quickly and was quite astounded. How/why does this method work? For those of you with linkphobia, I copied and pasted part of the page below: Step-1: Look at the magnitude of the “hundreds number” (the numbers preceding the last two digits) and find the largest square that is equal to or less than the number. This is the 1st part of the answer. Step-2: Now, look at the last (unit’s) digit of the number. If the number ends in a: $0 \rightarrow$ then the ending digit of the answer is a $0$. $1 \rightarrow$ then the ending digit of the answer is $1$ or $9$. $4 \rightarrow$ then the ending digit of the answer is $2$ or $8$. $5 \rightarrow$ then the ending digit of the answer is a $5$. $6 \rightarrow$ then the ending digit of the answer is $4$ or $6$. $9 \rightarrow$ then the ending digit of the answer is $3$ or $7$. To determine the right answer from $2$ possible answers (other than $0$ and $5$), mentally multiply the findings in step-1 with its next higher number. If the left extremities (the numbers preceding the last two digits) are greater than the product, the right digit would be the greater option $(9,8,7,6)$ and if left extremities are less than the product, the right digit would be the smaller option $(1,2,3,4)$.
Write $n = 10x + y$ where $y \in \{0, 1, \dots, 9\}$. Then $$n^2 = 100x^2 + 20xy + y^2.$$ Note that the "magnitude of the hundreds number" of $n^2$ (as in the link) is $\lfloor n^2/100 \rfloor = x^2 + \lfloor (20xy + y^2)/100 \rfloor$. Since $y < 10$, we have $20xy + y^2 \leq 180x + 81 < 200x + 100$, so this quantity lies between $x^2$ and $(x+1)^2 - 1$. We may therefore characterize $x$ as the largest integer smaller than or equal to the square root of the "magnitude of the hundreds number". That's step 1 in the link. The last digit of $n^2$ is $y^2 \bmod 10$, and the congruence $z^2 \equiv y^2 \pmod{10}$ has exactly two solutions for the last digit $z$, namely $y$ and $10 - y$, unless $y$ is $0$ or $5$. Those are the options for the last digit in step 2. Finally note that $(x + 1)x$ is $x^2 + x$, and that for $x \geq 1$ we have $20xy + y^2 \geq 100x$ exactly when $y \geq 5$; hence the "magnitude of the hundreds" is at least $x^2 + x$ if and only if $y \geq 5$. This allows you to choose one and only one solution from step 2: take the larger candidate precisely when the hundreds number is at least $x(x+1)$. That's what the last step in the method is saying.
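The whole procedure can be written out mechanically. The sketch below is mine (not from the linked page) and handles the two-digit case $10 \le n \le 99$, i.e. $x \ge 1$; note it needs Python 3.8+ for `math.isqrt`.

```python
import math

def trick_sqrt(n_sq):
    """Mental square-root trick, for squares of 10..99."""
    hundreds = n_sq // 100
    x = math.isqrt(hundreds)          # step 1: largest x with x^2 <= hundreds
    d = n_sq % 10                     # last digit of n^2
    candidates = sorted(y for y in range(10) if (y * y) % 10 == d)
    if len(candidates) == 1:          # d is 0 or 5: no ambiguity
        y = candidates[0]
    else:                             # tie-break using x(x+1)
        y = max(candidates) if hundreds >= x * (x + 1) else min(candidates)
    return 10 * x + y

print(all(trick_sqrt(n * n) == n for n in range(10, 100)))  # True
```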
Solving a certain nonlinear transport equation We want to solve \begin{align} (t+u)u_x +tu_t &= x-t \\ u(1,x) &= 1+x \end{align} Normally, I'd rewrite the equation as $\frac{t+u}{t}u_x + u_t = \frac{x-t}{t}$, then as per the method of characteristics, set $\frac{dx}{dt} = \frac{t+u}{t}$. Setting $v(t) = u(t, x(t)),$ we see $\dot{v}(t) = \frac{x(t)-t}{t}$. Now here, I'd normally solve for $v$ and plug it back into the characteristic equation, but I'm not sure how to proceed. Might be something basic from ODE that I'm forgetting. Thanks.
$$(t+u)u_x+tu_t=x-t$$ Characteristic differential equations: $\quad \frac{dx}{t+u}=\frac{dt}{t}=\frac{du}{x-t}$ A first family of characteristic curves comes from: $\frac{dx}{t+u}=\frac{du}{x-t}=\frac{dx+du}{(t+u)+(x-t)}=\frac{dx+du}{x+u}=\frac{d(x+u)}{(x+u)}=\frac{dt}{t} \quad\to\quad \frac{x+u}{t}=c_1$ A second family of characteristic curves comes from: $\frac{dx}{t+u}=\frac{dt}{t}=\frac{dx-dt}{(t+u)-t}=\frac{dx-dt}{u}=\frac{du}{x-t} \quad\to\quad u^2-(x-t)^2=c_2$ With independent $c_1,c_2$, all of the above is valid only on the characteristic curves. Off the characteristics, $c_1$ and $c_2$ are no longer independent. The relationship can be expressed in the form of an implicit equation $\Phi(c_1,c_2)=0$: $$\Phi\left(\frac{x+u}{t}\:,\:u^2-(x-t)^2\right)=0$$ or alternatively $c_1=F(c_2)$ where $F$ is any differentiable function: $$\frac{x+u}{t}=F\left(u^2-(x-t)^2\right)$$ This is an implicit form of the general solution of the PDE. Then, with the condition $u(1,x)=1+x$: $$\frac{x+(1+x)}{1}=F\left((1+x)^2-(x-1)^2\right) \quad\to\quad F(4x)=2x+1$$ This determines the function $F(X)=\frac{X}{2}+1$. With $X=u^2-(x-t)^2$ put into the above general solution: $$\frac{x+u}{t}=\frac{u^2-(x-t)^2}{2}+1 $$ This quadratic equation can be solved for $u$. One root, $u=t-x$, doesn't satisfy the condition. The other root is the solution of the PDE consistent with the condition $u(1,x)=1+x$: $$u(t,x)=x-t+\frac{2}{t}$$
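(As a cross-check, which I'm adding: SymPy confirms that $u(t,x)=x-t+\frac{2}{t}$ satisfies both the PDE and the condition.)

```python
from sympy import symbols, diff, simplify

t, x = symbols('t x', positive=True)
u = x - t + 2 / t

pde = (t + u) * diff(u, x) + t * diff(u, t) - (x - t)
print(simplify(pde))   # 0: the PDE is satisfied
print(u.subs(t, 1))    # x + 1, matching the condition u(1, x) = 1 + x
```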
How would I derive the partial sum formula for $\sum _{ n=1 }^{ \infty }{ (-1)^{ n }\cdot n } $? I was solving a practice competitive programming question when I realized that this seems very similar to things I have done in a Calculus class. So, I tried to figure out the closed-form partial sum for $\sum _{ n=1 }^{ m }{ (-1)^{ n }\cdot n } $, but I realized that I am in over my head. I entered "$-1 + 2 + -3 + 4 + -5 +...$" into Wolfram Alpha, and I got that the partial sum formula is $\sum _{ n=1 }^{ m }{ (-1)^{ n }\cdot n } = \frac { 1 }{ 4 } (2(-1)^{ m }m+(-1)^{ m }-1)$. I feel guilty for blindly applying a formula to solve a problem, and I have not been able to find any Wikipedia article, or any other source, that provides an explanation for this closed-form partial sum formula. I would appreciate it if someone could provide an explanation for this.
You are computing the sum of the even numbers minus that of the odd numbers. Assuming $m=2k+1$, $$\sum_{n=0}^k2n-\sum_{n=0}^k(2n+1)=\sum_{n=0}^k(2n-(2n+1))=-\sum_{n=0}^k1=-k-1.$$ Or assuming $m=2k$, undo the last odd term and get $$-k-1+2k+1=k.$$ To get a closed formula, notice that $$m=2k\to S=\frac m2,\\ m=2k+1\to S=-\frac{m+1}2,$$ which can be summarized as $$(-1)^m\frac{m+\dfrac{1-(-1)^m}2}2$$ or $$(-1)^m\frac{m+m\bmod2}2.$$
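(A direct check of the closed form against the partial sums, added for completeness.)

```python
def partial_sum(m):
    return sum((-1) ** n * n for n in range(1, m + 1))

def closed_form(m):
    return (-1) ** m * (m + m % 2) // 2   # the last summarized formula

print(all(partial_sum(m) == closed_form(m) for m in range(1, 200)))  # True
```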
Combinations n sided polygon Three vertices of a convex $n$-sided polygon are selected. If the number of triangles that can be constructed such that none of the sides of the triangle is also a side of the polygon is $30$, then find $n$.
Assuming $n\ge 5$: Total number of triangles: $$\binom n3$$ Triangles having exactly one side in common with the polygon: $$n(n-4)$$ Triangles having exactly two sides in common with the polygon: $$n$$ Then $$\binom n3-n(n-4)-n=30$$ $$n^3-9n^2+20n-180=0$$ $$(n-9)(n^2+20)=0$$ Since $n^2+20>0$, the only solution is $n=9$.
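(Brute force over all vertex triples of a convex $9$-gon confirms the count; this check is my addition.)

```python
from itertools import combinations

n = 9
def is_side(i, j):
    return (j - i) % n == 1 or (i - j) % n == 1   # adjacent vertices

count = sum(1 for a, b, c in combinations(range(n), 3)
            if not (is_side(a, b) or is_side(b, c) or is_side(a, c)))
print(count)  # 30
```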
What exactly is a scalar? By scalars I am referring only to individual numbers; for this instance, let's take the set $\mathbb{R}$ (real numbers). These numbers lie on the real line, i.e. the x-axis, which together with the perpendicular y-axis forms the Cartesian plane. So how are they scalars? For instance, the number $5$ can very well be represented by $(5, 0)$ in the plane, which could also imply that it is a position vector. If so, let $u$ and $v$ be two vectors in the Cartesian plane; then what would the dot product $u \cdot v$ imply? How can its value be interpreted? It's obviously not just a random number formed by multiplication of two vectors, so why is the value of the dot product so significant?
There's no such thing as a scalar. But there's a notion of field, and if $k$ is a field, we can speak of the vector spaces over $k$. Sometimes, we've fixed a field $k$, and we're studying vector spaces over it. In this case, we sometimes write "let $a$ denote a scalar" instead of the more formal "let $a$ denote an element of $k$." But that's all it is: a terminological gimmick, no more. And you're completely correct that $k$ can itself be viewed as a vector space over $k$. This allows us to say things like "every scalar can be viewed as a vector in a $1$-dimensional vector space" etc.
Prove that if $a,b \in \mathbb R^+$ then $a^2+b^2>ab$ I have tried factoring it already, but it doesn't seem to lead anywhere: First I multiply each side by $2$: $2(a^2+b^2)>2ab$. Then I substitute using the relation $(a+b)^2=a^2+2ab+b^2$ and it becomes $2(a^2+b^2)>(a+b)^2 - (a^2+b^2)$, and then $3(a^2+b^2)>(a+b)^2$. And that's pretty much it; I'm stuck.
We have that $$0\le (a-b)^2=a^2+b^2-2ab.$$ Thus we have $$2ab\le a^2+b^2.$$ Now, if $ab$ is positive then $$ab< 2ab\le a^2+b^2.$$ And if $ab$ is negative then $$ab< 0 <a^2+b^2.$$
Basel problem, but with cosines I found a formula in a book $ \sum_{n=1}^{\infty} \frac{(\cos (\pi n s) - \cos (\pi n s'))^2}{n^2} = \frac{\pi^2}{2} |s-s'|$ but no explanation of where it came from. I checked that it holds true numerically. I wondered whether this series has a name and where could I find more information about it. It looks like some variant of the Basel problem, but other than that I'm clueless. Thanks!
Upon expanding the square, the numerator comes down to $$(\cos(\pi ns)-\cos(\pi ns'))^2=\cos^2(\pi ns)+\cos^2(\pi ns')-2\cos(\pi ns)\cos(\pi ns')$$ Using trigonometric identities, this further reduces to $$=\frac12\cos(2\pi ns)+\frac12\cos(2\pi ns')+1-\cos(\pi n(s+s'))-\cos(\pi n(s-s'))$$ We may then apply the Clausen function to deduce that (refer to formula 7) $$\sum_{n=1}^\infty\frac{\frac12\cos(2\pi ns)+\frac12\cos(2\pi ns')+1-\cos(\pi n(s+s'))-\cos(\pi n(s-s'))}{n^2}\\=\frac12\operatorname C_2(2\pi s)+\frac12\operatorname C_2(2\pi s')+\frac{\pi^2}6-\operatorname C_2(\pi(s+s'))-\operatorname C_2(\pi(s-s'))\\=\pi^2\left(\frac1{12}-\frac12s+\frac12s^2+\frac1{12}-\frac12s'+\frac12s'^2+\frac16-\frac16+\frac12(s+s')-\frac14(s+s')^2-\frac16+\frac12(s-s')-\frac14(s-s')^2\right)\\=\frac{\pi^2}2\left(s-s'\right)$$ assuming that $0\le s'\le s\le1$ (so that all the Clausen arguments lie in $[0,2\pi]$). Since the left-hand side is symmetric in $s$ and $s'$, the general statement for $0\le s,s'\le1$ follows with $|s-s'|$ in place of $s-s'$.
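(Truncating the series numerically reproduces $\frac{\pi^2}{2}|s-s'|$; this check is mine, not part of the derivation.)

```python
from math import cos, pi

def series(s, sp, terms=200_000):
    return sum((cos(pi * n * s) - cos(pi * n * sp)) ** 2 / n ** 2
               for n in range(1, terms + 1))

s, sp = 0.3, 0.7
print(series(s, sp))               # ~1.97392...
print(pi ** 2 / 2 * abs(s - sp))   # pi^2/2 * 0.4 = 1.97392...
```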