Consider the family of functions of domain $\mathbb{R}\setminus \{a\}$, $\frac{2x^2+x-3}{x-a}$. Find $a$ so that there are no vertical asymptotes. Consider the family of functions of domain $\mathbb{R}\setminus \{a\}$ defined by $f(x) = \frac{2x^2+x-3}{x-a}$. Determine $a$ so that $f$ has no vertical asymptotes. I know that one way to solve this is by factoring ${2x^2+x-3}$ into $(x-a)$ times something. I'm not very good at factoring so I messed around a bit to see if I could find the solution. My book says the solutions are $a = 1$ and $a = -1.5$. I looked at the graph of the function and realised these were the zeros of $2x^2+x-3$. With a little help from Wolfram I found out how to factor $2x^2+x-3$: $(x-1)(2x+3)$. And so $a = 1$. To get the other solution I tried this: $${2x^2+x-3} = (x-a)(y+z)$$ $x \cdot y = 2x^2 \Leftrightarrow y = 2x$ $${2x^2+x-3} = (x-a)(2x+z) = 2x^2+xz-2xa-az = 2x^2+x(z-2a) -az$$ $az = 3$ $x(z-2a) = x \Leftrightarrow z-2a = 1$ Then I put the above two conclusions into a system, solved it, and got $a = 1 \lor a = -1.5$, no problem there. My questions: (1) Why does $a$ take the values of the zeros of $2x^2+x-3$? Is it just a coincidence? (2) Is there a simpler way of solving this?
Note that $$f(x) = \frac{2x^2+x-3}{x-a} = \frac{(2x+3)(x-1)}{x-a} = \frac{2(x+1.5)(x-1)}{x-a}.$$ Thus if $a=-1.5$ or $a=1$, then $f(x)$ simplifies to a linear polynomial over the domain $\mathbb{R} \setminus \{a\}$, and linear polynomials do not have vertical asymptotes.
Solve the functional equation $f(x+y)=3^xf(y)+9^yf(x)\ \forall x,y\in \mathbb{R}$ Problem Statement:- Consider a differentiable function $f:\mathbb{R}\rightarrow\mathbb{R}$ for which $f(1)=6$ and $$f(x+y)=3^x\cdot f(y)+9^y\cdot f(x),\;\;\forall x,y\in \mathbb{R}$$ then find $f(x)$. My attempt:- As the function is differentiable we can differentiate it w.r.t. $x$, treating $y$ as an independent variable (so $\dfrac{dy}{dx}=0$), and we get $$f'(x+y)=3^x\ln3\cdot f(y)+9^y\cdot f'(x)\tag{1}$$ From the functional equation given in the problem we have $$f(0+0)=3^0\cdot f(0)+9^0\cdot f(0)=2f(0)\implies f(0)=0$$ $$\therefore (1)\text{ with }x=0\implies f'(y)=\ln{3}\cdot f(y)+c\cdot 9^y\tag{where $c=f'(0)$}$$ I am pretty much stuck after this. If possible can you also tell me the line of thought that I should have while solving these types of problems.
I just happened to come up with an answer that doesn't need any differentiation. We are given $$f(x+y)=3^x\cdot f(y)+9^y\cdot f(x)$$ On interchanging $x$ and $y$, we get $$f(y+x)=3^y\cdot f(x)+ 9^x\cdot f(y)$$ we get $$3^x\cdot f(y)+9^y\cdot f(x)=3^y\cdot f(x)+ 9^x\cdot f(y)\\ \implies f(y)\cdot (3^x-9^x)=f(x)\cdot (3^y-9^y)\\ \implies \dfrac{f(x)}{3^x-9^x}=\dfrac{f(y)}{3^y-9^y}=k(\text{say})$$ So, we get $$f(x)=k({3^x-9^x})$$ Since $f(1)=6$, so $$f(1)=k(-6)=6\implies k=-1$$ $$\therefore \boxed{f(x)=9^x-3^x}$$ I don't know whether there are some other functions that satisfy the given conditions, if there are then please do post that in your answer. And if you have some other approach please do post so that there is something new to learn.
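Not part of the original answer, but a quick numerical sanity check of the boxed solution is easy to run (plain Python; the function name `f` is just for illustration). It confirms $f(1)=6$ and tests the functional equation at random points.

```python
import random

def f(x):
    # candidate solution from the answer: f(x) = 9^x - 3^x
    return 9**x - 3**x

assert abs(f(1) - 6) < 1e-12

for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = f(x + y)
    rhs = 3**x * f(y) + 9**y * f(x)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(lhs), abs(rhs))
print("functional equation satisfied at all sampled points")
```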
Find the eigenvalues of this matrix I was wondering if anyone wanted to give a shot at finding the eigenvalues of this $4\times 4$ system. I have tried without success, mainly because I end up having to solve a very nasty cubic polynomial that wolfram seems to be having trouble with as well. There may be some method to solve this that I didn't try, so I wanted to post this here just in case someone could help me out, $$J(\epsilon)=\begin{bmatrix} -\mu &&0&&0 && -\beta b/\mu\\ 0&& -\sigma && 0&& \beta b/\mu\\0&& \sigma&& -(\phi+d)&& 0\\ 0 &&0&&\alpha&&-(\phi+\epsilon)\end {bmatrix}$$ Thanks!
You can do row reduction of the matrix $$J(\epsilon)=\begin{bmatrix} -\mu &&0&&0 && -\beta b/\mu\\ 0&& -\sigma && 0&& \beta b/\mu\\0&& \sigma&& -(\phi+d)&& 0\\ 0 &&0&&\alpha&&-(\phi+\epsilon)\end {bmatrix}$$ add row 2 and row 3 $$J(\epsilon)=\begin{bmatrix} -\mu &&0&&0 && -\beta b/\mu\\ 0&& -\sigma && 0&& \beta b/\mu\\ 0&& 0&& -(\phi+d)&& \beta b/\mu\\ 0 &&0&&\alpha&&-(\phi+\epsilon)\end {bmatrix}$$ row 3 $\times \alpha /(\phi+d)+$row 4 $$J(\epsilon)=\begin{bmatrix} -\mu &&0&&0 && -\beta b/\mu\\ 0&& -\sigma && 0&& \beta b/\mu\\ 0&& 0&& -(\phi+d)&& \beta b/\mu\\ 0 &&0&&0&&-(\phi+\epsilon)+(\beta b/\mu )(\alpha /(\phi+d))\end {bmatrix}$$ now determinant will be the multiplication of the diagonal elements $$|J-\lambda I|=(-\mu-\lambda)(-\sigma-\lambda)(-(\phi+d)-\lambda)(-(\beta b/\mu )(\alpha /(\phi+d))-\lambda)=0$$
Summation function of the inverse of Euler function How can I show that $$ \sum_{n \leq x} \frac{1}{ \varphi(n)}= \frac{ \zeta(2) \zeta(3)}{ \zeta(6)}\log(x)+O(1)?$$
An elementary proof is as follows: We use the following: $$ \frac1{\phi(n)}=\sum_{d|n} \frac 1d \frac{\mu(\frac nd)^2}{\frac nd \phi(\frac nd)}.$$ This identity follows from first proving $$ \frac n{\phi(n)} = \sum_{d|n} \frac{\mu(d)^2}{\phi(d)}. $$ Then we have $$ \begin{align} \sum_{n\leq x } \frac1{\phi(n)} &= \sum_{n\leq x}\sum_{d|n} \frac 1d \frac{\mu(\frac nd)^2}{\frac nd \phi(\frac nd)} =\sum_{d\leq x}\frac 1d \sum_{k\leq \frac xd} \frac{\mu(k)^2}{k\phi(k)}\\ &=\sum_{k\leq x} \frac{\mu(k)^2}{k\phi(k)} \sum_{d\leq \frac xk} \frac 1d=\sum_{k\leq x} \frac{\mu(k)^2}{k\phi(k)} \left(\log \frac xk + O(1)\right)\\ &=\sum_{k=1}^{\infty} \frac{\mu(k)^2}{k\phi(k)} \log x + O(1). \end{align} $$ It can also be shown in an elementary way (using the Euler product) that $$ \sum_{k=1}^{\infty} \frac{\mu(k)^2}{k\phi(k)}=\frac{\zeta(2)\zeta(3)}{\zeta(6)} $$
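As a purely numerical illustration of this asymptotic (not part of the proof above), the following Python sketch sieves Euler's totient and checks that $\sum_{n\le x}1/\varphi(n)-\frac{\zeta(2)\zeta(3)}{\zeta(6)}\log x$ stays bounded; the hard-coded value of $\zeta(3)$ is just its numerical approximation.

```python
import math

def totients(limit):
    # simple sieve for Euler's totient
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                     # p is prime
            for multiple in range(p, limit + 1, p):
                phi[multiple] -= phi[multiple] // p
    return phi

zeta3 = 1.2020569031595943                  # numerical value of zeta(3)
C = (math.pi**2 / 6) * zeta3 / (math.pi**6 / 945)   # zeta(2)zeta(3)/zeta(6) ~ 1.9436

phi = totients(10**5)
for x in (10**3, 10**4, 10**5):
    S = sum(1.0 / phi[n] for n in range(1, x + 1))
    print(x, S - C * math.log(x))           # stays roughly constant, i.e. O(1)
```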
Show that the function $f(x)=\frac{1}{x}$ is uniformly continuous on the set $A:=[a,\infty )$ where $a$ is a positive constant. Please check my proof. Let $\epsilon >0$, $\delta >0$ and let $a$ be an arbitrary positive constant in the interval. $$|x-a|<\delta \leftrightarrow |f(x)-f(a)|<\epsilon $$ $$\leftrightarrow |\frac{1}{x}-\frac{1}{a}|<\epsilon $$ $$\leftrightarrow |\frac{a-x}{xa}|<\epsilon $$ $$\leftrightarrow \frac{1}{a}|\frac{a-x}{x}|<\epsilon $$ $$|\frac{a-x}{x}|<a\epsilon $$ choose $\delta =a\epsilon $ then $|\frac{1}{x}-\frac{1}{a}|<\delta \leftrightarrow |\frac{1}{x}-\frac{1}{a}|<a\frac{\epsilon }{a}=\epsilon $ then it is uniformly continuous
As pointed out in a comment, you're only looking at values near $x=a$. If you prove that for every $\epsilon > 0$ there exists a $\delta > 0$ such that $|f(x)-f(a)| \le \epsilon$ whenever $|x-a| \le \delta$ (which you did not, your proof is confused), then this would only imply the continuity of $f$ at $a$, not more. I suggest another approach: note that $f$ is continuous in $[a,+\infty)$ (if $a>0$) and $\lim_{x \rightarrow +\infty} f(x) = 0 $. These two properties combined together give the uniform continuity of $f$ in $[a,+\infty)$. The main idea is to combine Heine-Cantor theorem (a continuous function on a compact set is uniformly continuous) with the vanishing condition. Let me know if you manage to conclude or you need a complete proof.
Taylor expansion on the unit sphere I am looking for a Taylor type expansion for functions defined on the unit sphere $S^2$; is the following correct, or what should be the right form: $$f(y)-f(x)=\langle \nabla_g f(x), \gamma\rangle \Theta+\text{higher order terms}$$ where $\nabla_g$ is the gradient with respect to $g$, the round metric, $\Theta$ is the angle which is the geodesic distance between $x$ and $y$, $\langle\cdot, \cdot\rangle$ is the Euclidean inner product and $\gamma$ is the direction between $x$ and $y$ which is tangent to the geodesic between $x$ and $y$ at the initial point $x$. Is there anything wrong? Please help, thanks a lot!
Note that the composition $f\circ\exp_x$ of the exponential map at $x$ with $f$ is a real valued function on the vector space $T_xS^2$. Hence you could use the classical Taylor expansion. See also: http://press.princeton.edu/chapters/absil/Absil_Chap7.pdf
If $H$ is a normal subgroup of $G$ with $G/H$ abelian, then the commutator subgroup of $G$ is in $H$. This is part of Exercise 2.7.9 of F. M. Goodman's "Algebra: Abstract and Concrete". Let $C$ be the commutator subgroup of a group $G$. Show that if $H$ is a normal subgroup of $G$ with $G/H$ abelian, then $C\subseteq H$. The following seems to be wrong. My Attempt: The commutator subgroup $C$ of $G$ is the subgroup generated by all elements of the form $xyx^{-1}y^{-1}$ for $x, y\in G$. Since $G/H$ is abelian, we have for $x, y\in G$, $$\begin{align} xyx^{-1}y^{-1}H&=xyy^{-1}x^{-1}H \\ &=H, \end{align}$$ so that all elements of the form $xyx^{-1}y^{-1}$ are in $H$. Thus $C\subseteq H$. But I don't use the fact that $H$ is normal. What have I done wrong and what is the right proof?
The fact that $G/H$ is abelian gives us the second equality of:$$xyH=(xH)(yH)=(yH)(xH)=yxH$$ Consequently we find: $$x^{-1}y^{-1}xy=(yx)^{-1}xy\in H$$ This for every $x,y\in G$ so we are allowed to conclude that $H$ contains the commutator subgroup.
Congruence Modulo Powers of Primes I need help with the following: Let $p$ be an odd prime and $x, y \in \mathbb{Z}$. Then $x \equiv y$ mod $p^k \implies x^p \equiv y^p$ mod $p^{k+1}$. I know that I can write $x^p - y^p$ as $(x - y)F(x,y)$ for some ugly polynomial $F$, but I'm not sure whether that's too much help. Ta
Why not directly?: $$x\equiv y\pmod{p^k}\implies x=y+mp^k\;,\;\;m\in\Bbb Z\implies$$ $$x^p=\sum_{n=0}^p\binom pny^n(mp^k)^{p-n}\;\;(**)$$ But all the binomial coefficients except the first one and the last one are divisible by $\;p\;$ (either you know this or you can prove it by induction...), so we get: $$(**)=(\text{something divisible by}\;p^{k+1})+m^pp^{pk}+y^p=y^p\pmod{p^{k+1}}$$ since, of course, $\;pk>k+1\;$
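If it helps, here is a brute-force numerical check of the statement (a Python sketch, not part of the proof): it draws random $x\equiv y \pmod{p^k}$ and confirms that $p^{k+1}\mid x^p-y^p$.

```python
import random

def check(p, k, trials=200):
    mod = p**k
    for _ in range(trials):
        y = random.randrange(-10**6, 10**6)
        m = random.randrange(-10**3, 10**3)
        x = y + m * mod                      # so x ≡ y (mod p^k)
        assert (x**p - y**p) % p**(k + 1) == 0

for p in (3, 5, 7, 11):                      # odd primes
    for k in (1, 2, 3):
        check(p, k)
print("all checks passed")
```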
Find all irreducible factors of $x^{10}-1$ in $\mathbb{Z}_3[x]$ I have a problem with finding irreducible factors of $x^{10}-1$ in $\mathbb{Z}_3[x]$. Easily one gets that $$x^{10}-1=(x-1)(x+1)(x^8+x^6+x^4+x^2+1)$$ And now I am stuck. I tried dividing the polynomial $x^8+x^6+x^4+x^2+1$ by irreducible polynomials of degree 2 (only three of them, but no luck there). Do I have to try now the irreducible polynomials of degree 3, 4, etc.? Does anyone have a simpler/faster/neater solution? Without using Wolfram ;) :) Thanks
Certainly one could muse over a faster and neater solution. However, it seems better to apply the Berlekamp algorithm right away, to get $$ x^8+x^6+x^4+x^2+1=(x^4 + 2x^3 + x^2 + 2x + 1)(x^4 + x^3 + x^2 + x + 1). $$ And we do not need Wolfram for it, though it is possible to use it.
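One can at least verify the stated factorization by multiplying the two quartics back together modulo $3$; a small Python sketch (the helper name `polymul_mod` is mine, and this only checks the product, not irreducibility):

```python
def polymul_mod(a, b, m):
    # a, b: coefficient lists, lowest degree first; multiply modulo m
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % m
    return c

f1 = [1, 2, 1, 2, 1]                   # x^4 + 2x^3 + x^2 + 2x + 1
f2 = [1, 1, 1, 1, 1]                   # x^4 + x^3 + x^2 + x + 1
target = [1, 0, 1, 0, 1, 0, 1, 0, 1]   # x^8 + x^6 + x^4 + x^2 + 1
print(polymul_mod(f1, f2, 3) == target)   # True
```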
Need help finding Green's function for $x^2y''-2xy'+2y=x\ln x$ The problem is as follows: Use the given solutions of the homogeneous equation to find a particular solution of the given equation. Do this by the Green's Function Method. $x^2y''-2xy'+2y=x\ln x;\quad x,x^2$ Use initial conditions $y(1)=y'(1)=0$. I keep getting $$G(x,x')= \begin{cases} 0 & 1\leq x<x' \\ -x+\dfrac{x^2}{x'} & x'<x<\infty \end{cases}$$ but when I evaluate $$y(x)=\int_1^x\left(-x+\dfrac{x^2}{x'}\right)\cdot x'\ln x' dx'$$ I am left with the result $$y(x)=\dfrac{1}{4}x\left[-1+4x+x^2(-3+2\ln x)\right].$$ This does not agree with the solution to the differential equation that Mathematica gives: $$y(x)=\frac{1}{2} x \left(2 x-\log ^2(x)-2 \log (x)-2\right).$$ Could someone help me figure out what I'm doing wrong here? Edit: I figured out that the Green function I found corresponds to a $f(x)$ of $\frac{\ln x}{x}$ (obtained when the original equation is divided on both sides by $x^2$), not $x\ln x$. I'm not exactly sure why this is though.
The Green function derivative 'jump' at $x = x'$ is $\frac{1}{x'^{2}}$ (instead of $1$). That yields $-\frac{x}{x'^{2}} + \frac{x^{2}}{x'^{3}}$ when $x > x'$. The result is given by $$ \int_{1}^{x}\left(-\frac{x}{x'^{2}} + \frac{x^{2}}{x'^{3}}\right)x'\ln(x')\,\mathrm{d}x' = -x + x^{2} - x\ln(x) - \frac{1}{2}\,x\ln^{2}(x) $$ Note that $$\int_{x'^{-}}^{x'^{+}}x^{2}\,\frac{\partial^{2}\mathrm{G}(x,x')}{\partial x^{2}}\,\mathrm{d}x = x'^{2}\left[\left.\frac{\partial \mathrm{G}(x,x')}{\partial x}\right\vert_{x=x'^{+}} - \left.\frac{\partial \mathrm{G}(x,x')}{\partial x}\right\vert_{x=x'^{-}}\right] = 1.$$ Moreover, with your 'original Green Function', the 'right integration' is given by $$ \int_{1}^{x}\left(-x + \frac{x^{2}}{x'}\right)\frac{\ln(x')}{x'}\,\mathrm{d}x' = -x + x^{2} - x\ln(x) - \frac{1}{2}\,x\ln^{2}(x) $$ which is the right result.
Why $ \nabla \|g(b)\|_2=\frac{\langle g(b),\nabla g(b)\rangle}{\|g(b)\|_2} $ for any differentiable g? I'm trying to understand this answer to a question I've made before. I said in that same question the following: The gradient of $f = \lVert a \times b \rVert_2$ with respect to $b$ is apparently equivalent to $$\frac{(a \times b) \times a}{\lVert a \times b \rVert_2}$$ Actually, this is my problem, I don't understand why that is the case. So, user LutzL said that I had made the observation: $$ \nabla \|g(b)\|_2=\frac{\langle g(b),\nabla g(b)\rangle}{\|g(b)\|_2} $$ which is actually different from my observation. What he tries to show is that they are equivalent, but I somehow don't manage to follow his explanation. Can someone explain me why are the two equivalent? Why do we have this equality $ \nabla \|g(b)\|_2=\frac{\langle g(b),\nabla g(b)\rangle}{\|g(b)\|_2} $? Notation: $g: \mathbb{R}^3 \to \mathbb{R}^3$, $\nabla g$ denotes the (total) derivative of $g$, i.e. a matrix. (Sometimes denoted in other sources as $Dg$.) The brackets $\langle , \rangle$ refer to matrix multiplication, not the inner product, in other notation $\langle g(b), \nabla g(b) \rangle :=: [g(b)]^T Dg(b)$.
The gradient of $\| g(b) \|^2 = \left< g(b), g(b) \right>$ can be computed by the product rule as $$ \nabla \left( \| g(b) \|^2 \right) = \left< \nabla g(b), g(b) \right> + \left< g(b), \nabla g(b) \right> = 2 \left< g(b), \nabla g (b) \right>. $$ The gradient of $\| g(b) \| = \sqrt{ \left< g(b), g(b) \right>}$ can then be computed using the chain rule as $$ \nabla \| g(b) \| = \frac{2 \left< g(b), \nabla g(b) \right>}{2 \sqrt{\left<g(b), g(b) \right>}} = \frac{ \left< g(b), \nabla g(b) \right>}{\| g(b) \|}. $$ Now apply this formula to $g(b) = a \times b$ noting that this is a linear map.
Finding $f'(x)$ In the following problem I need to consider the function defined by $f(x)=e^{\frac{-1}{x^2}}$ if $x\ne 0$ and $0$ if $x=0$ I am trying to calculate $f'(x)$ for $x\ne0$. Then after calculating that use that to show that $f''(0)=0$ To calculate the first part I need to use the rule that, $$f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$ but how do I plug in the values I need then find my second part of my question?
Let $f$ be given by $$f(x)=\begin{cases}e^{-1/x^2}&,x\ne 0\\\\0&,x=0\end{cases}$$ Then, for $x\ne0$, $$f'(x)=\frac{2e^{-1/x^2}}{x^3}$$ while for $x=0$ we see that $$\begin{align} f'(0)&=\lim_{x\to 0}\frac{e^{-1/x^2}-0}{x-0}\\\\ &=0 \end{align}$$ Finally, we have $$\begin{align} f''(0)&=\lim_{x\to 0}\frac{f'(x)-f'(0)}{x}\\\\ &=\lim_{x\to 0}\frac{\frac{2e^{-1/x^2}}{x^3}-0}{x-0}\\\\ &=0 \end{align}$$
How do I show that partial derivatives exist everywhere? I'm having trouble with a certain multi-variable calculus question. $$ f(x,y) = \begin{cases} \large\frac{2xy^2}{x^2 + y^4}, & \text{$(x,y)\neq 0$} \\[2ex] 0, & \text{$(x,y) = 0$} \end{cases}$$ I need to show that both $\large\frac{∂f}{∂x}$ and $\large\frac{∂f}{∂y}$ exist everywhere. I can easily manage to find both partial derivatives, but I'm not really sure what the question means when it asks to show that they "exist everywhere". Any help would be appreciated, thanks.
The potential problem is at the origin. But note that $$f_x(0,0)=\lim_{h\to 0}\frac{f(h,0)-f(0,0)}{h}=\lim_{h\to 0}\frac{\frac{2h(0^2)}{h^2+0^4}-0}{h}=0$$ and $$f_y(0,0)=\lim_{h\to 0}\frac{f(0,h)-f(0,0)}{h}=\lim_{h\to 0}\frac{\frac{2(0)(h^2)}{0^2+h^4}-0}{h}=0$$ Therefore, $f_x(0,0)=f_y(0,0)$. For $x^2+y^2>0$, we can simply note that $f(x,y)$ is composition of differentiable functions with $$\begin{align} \frac{\partial f(x,y)}{\partial x}&=\frac{2y^2(y^4-x^2)}{(x^2+y^4)^2}\\\\ \frac{\partial f(x,y)}{\partial y}&=\frac{4xy(x^2-y^4)}{(x^2+y^4)^2} \end{align}$$ Hence, we see that $$\begin{align} \frac{\partial f(x,y)}{\partial x}=\begin{cases}\frac{2y^2(y^4-x^2)}{(x^2+y^4)^2}&,x^2+y^2>0\\\\ 0&,x=y=0 \end{cases} \end{align}$$ $$\begin{align} \frac{\partial f(x,y)}{\partial y}=\begin{cases}\frac{4xy(x^2-y^4)}{(x^2+y^4)^2}&,x^2+y^2>0\\\\ 0&,x=y=0 \end{cases} \end{align}$$ NOTE: While the first partial derivatives, $f_x$ and $f_y$, exist everywhere, neither is continuous at the origin.
Is there a unique polynomial function $f(x)$ of degree $\lt n$ such that $f(n) = a_n$ where $\{a_n\}$ is a sequence? Is it true that for every sequence $a$ of $n$ numbers there is exactly one polynomial function $f(x)$ of degree $\leq n$ such that all $f(1)=a_1,f(2)=a_2,\dots f(n)=a_{n}$? If so, is there an algorithm to, given the sequence, generate the coefficients of this function? Intuitively, I feel like this is true, because: (1) Given $a$ and $b$, you can find a polynomial function $f$ that has degree $1$ such that $f(0)=a$ and $f(1)=b$. (2) By induction: given coefficients $a_1,a_2\dots a_n$, you can find a polynomial function of degree $n$ such that $f(x)$ yields a constant value for all $x$ in $\{1,2\dots n-1\}$. Mostly, the reason I want to be able to find such a function is so when my math teacher says "find the function rule" and presents us with an obviously linear function I can give her some strange polynomial that just happens to give the correct answers for those values.
A polynomial degree $\leq n$ modeling $a_1,a_2, \cdots a_n$ would not be unique. Consider $f(1)=1$, $f(2)=4$. Obviously $f(x)=3(x-1)+1$ works, but we can let $f(x)=x^2$. A polynomial of lowest degree that models $\{a_1,a_2,a_3,...a_n\}$ is unique. A polynomial of lowest degree that fits a set of points is called the Lagrange Polynomial for that set of points. Here we want the Lagrange Polynomial for a very special set of points. A "closed form" is possible: $$f(x)=\sum_{i=0}^{n-1} {x-1 \choose i} \Delta^i(1)$$ Here $\Delta^0(1)=f(1)$. Also $\Delta$ is defined as the operation mapping $f(x)$ to $f(x+1)-f(x)$. This operation is called the forward difference.Then $\Delta^i(1)$ is the operation iterated $i$ times then evaluated at $1$. Ex: Find the Lagrange polynomial for $1,3,5$. $$\color{red}{1},3,5$$ Taking forward differences once gets. $$\color{red}{2},2$$ Twice, $$\color{red}{0},$$ This gives a Lagrange polynomial, $$f(x)=\color{red}{1}{x-1 \choose 0}+\color{red}{2}{x-1 \choose 1}+\color{red}{0}{x-1 \choose 2}$$ $$=2x-1$$ Using this formula, you can troll your teacher. For example if your teacher asks find the rule for $1,3,5,$ then you can use the formula to find a rule for a sequence that goes $1,3,5,\pi$. $$1,3,5,\pi$$ $$2,2,\pi-5$$ $$0,\pi-7$$ $$\pi-7$$ This gives, $$f(x)=1+2{x-1 \choose 1}+0+(\pi-7){x-1 \choose 3}$$ $$=1+2(x-1)+(\pi-7)\frac{(x-1)(x-2)(x-3)}{3!}$$
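The forward-difference construction described above is easy to mechanize; here is a small Python sketch (the helper name `newton_forward` is mine, not from the answer) that returns a function interpolating the given values at $x=1,2,\dots$ and reproduces both examples.

```python
from math import comb, pi

def newton_forward(values):
    # values are f(1), f(2), ...; build the forward-difference table and
    # return x -> sum_i Delta^i(1) * C(x-1, i)
    diffs, deltas = list(values), []
    while diffs:
        deltas.append(diffs[0])
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    return lambda x: sum(d * comb(x - 1, i) for i, d in enumerate(deltas))

f = newton_forward([1, 3, 5])
print([f(x) for x in range(1, 6)])      # 1, 3, 5, 7, 9  -- i.e. 2x - 1

g = newton_forward([1, 3, 5, pi])       # the "troll" sequence from the answer
print([g(x) for x in range(1, 5)])      # 1, 3, 5, then pi at x = 4
```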
Graph Theory proving planarity I have these sets of graphs: I used Euler's inequality and the 4 color theorem, which gave an inconclusive result. Using Kuratowski's theorem I was unable to create a $K_{3,3}$ or $K_5$ graph. So does this prove that the graphs are planar? How do we really know that there does not exist a $K_{3,3}$ or $K_5$? I did use planar embedding and was able to put it in a form where the edges do not overlap, but the method does not seem rigorous enough to prove planarity.
Here's an illustration of a subgraph of $G$ which is a subdivision of $K_5$, so Kuratowski's Theorem implies $G$ is non-planar: Here's a planar drawing of $H$, demonstrating that it's planar: If it contained a subgraph which is a subdivision of $K_5$ or $K_{3,3}$, this planar drawing would not be possible.
relation between union and intersection $A$ and $B$ are any two sets. If $A \cup B \neq B$, is it also true that $A \cap B \neq A$? And if so, why? This is just a step I am using for a bigger proof; if the above is not true then I'll have to search for a different direction. Thanks in advance.
Yes, it is true. $A \cup B \neq B$ means $\exists x \in A \setminus B$ (otherwise we would have $A \subset B$, which implies $A \cup B = B$). And since $x \not\in B$, $x \not \in A \cap B$ while $x \in A$, hence $A \cap B \neq A$.
How should I write down the alternating group $A_3$? I didn't understand how to write down the alternating group A3. Is this the group consisting of only the even permutations? Also, what familiar group is this isomorphic to?
Its elements are the $3$-cycles and the identity $\begin{cases} f(1)=2, f(2)=3, f(3)=1 \\ g(1)=3, g(2)=1, g(3)=2 \\ Id \end{cases}$. Since it has order $3$ and is generated by either $3$-cycle, it is isomorphic to the cyclic group $\mathbb{Z}/3\mathbb{Z}$.
Let $g_{ij}$ be a Riemannian metric in normal coordinates at $p$. Why is $\partial_k g_{ij}(p)=0$? Let $(M^n,g)$ be a Riemannian manifold, fix a point $p\in M$, let $e_1,\ldots,e_n$ be an orthonormal basis for $\mathrm{T}_p M$, let $$\exp_p:\mathrm{B}_{\mathrm{T}_p M}(0,\varepsilon)\to\mathrm{B}_M(p,\varepsilon)$$ be a diffeomorphism defining a normal neighborhood, and use the coordinates $e_1,\ldots,e_n$ on $\mathrm{B}_{\mathrm{T}_p M}(0,\varepsilon)$ to induce coordinates $x^1,\ldots,x^n$ on $M$. Then we have the following properties: (1) $p = (0,\ldots,0)$ in coordinates; (2) geodesics starting at $(0,\ldots,0)$ are of the form $\gamma(t) = t\,v$ for some $v\in\mathrm{T}_p M$; (3) $g_{ij}(0,\ldots,0) = \delta_{ij}$. I know how to prove these three properties, but I don't see how to easily compute $g_{ij}$ in all of $\mathrm{B}_{\mathrm{T}_p M}(0,\varepsilon)$. How would I show that $\partial_k g_{ij}=0$?
If $\gamma (t)=\exp_p\ te_i$ and $E_k$ is normal coordinate vector field, then $ E_k(t)=(d\exp_p)_{te_i}\ e_k$, and $ (\gamma '(t),E_k(t))=0$ by Gauss lemma Hence $$0=\gamma'(\gamma',E_k)=(E_i,\nabla_{E_i} E_k) $$ Hence $$ 0=(E_j+E_k,\nabla_{E_j+E_k} E_m) =(E_j,\nabla_{E_k}E_m)+(E_k,\nabla_{E_j}E_m) = E_m (E_k,E_j)$$
Polynomial form of $\det(A+xB)$ Let $A$ and $B$ be two $2 \times 2$ matrices with integer entries. Prove that $\det(A+xB)$ is an integer polynomial of the form $$P(x) = \det(A+xB) = \det(B)x^2+mx+\det(A).$$ I tried expanding the determinant of $\det(A+xB)$ for two arbitrary matrices, but it got computational. Is there another way?
The constant term $\mathrm{det}(A)$ comes from setting $x = 0.$ The coefficient $\mathrm{det}(B)$ is the constant term of $x^2 P(1/x) = \mathrm{det}(xA + B),$ again setting $x = 0.$
Do Functions Have Truth Values? I'm reading through a few analysis books and I am a little confused by some of the definitions that are given for functions. Some texts define functions to be some subset of the cartesian product of two sets, given that the elements of this subset satisfy the properties of a function. This is intuitively clear to me, but others first define the idea of a relation and then define functions to be a sort of relation that satisfies the typical properties of a function. This is confusing to me since I have always thought of relations as having truth values and functions as having no truth value. Is it appropriate to think of relations as having truth values and functions as not having truth values? Are the varying definitions compatible or is one wrong? I appreciate any help.
A function $f$ corresponds to the relation $xRy$ iff $f(x) = y$.
Unique solution to system of nonlinear equations (non-singular Jacobian) Suppose I have a system of $n$ nonlinear, $C^\infty$, real implicit functions with $n$ real variables: $\{f_i(x_1,...x_n)\}_{i=1}^n$. To provide more structure, we have $f_1(x_1,...x_n)= x_1 + g(x_2,...,x_n)$, ... $f_n(x_1,...x_n)= x_n + g(x_1,...,x_{n-1})$, etc. In other words, in each $f_i$, the variable $x_i$ is additively separable. None of the equations are redundant. When can I assert that there exists at most one solution $(x_1^*,...x_n^*)$ to the system? Following the theory of system of linear equations, is it sufficient that the Jacobian matrix of the system is non-singular everywhere? The idea is from implicit function theorem. If there are $m+n$ nonlinear equations with $n$ endogenous variables and $m$ exogenous variables, and the Jacobian matrix is nonsingular at a point, then we can express the $n$ endogenous variables as functions of the $m$ exogenous variables near that point. If nonsingular everywhere, then we can do the same everywhere. The question now is what if $m=0$.
It's cleaner to rewrite the question as asking about a single smooth function $F : \mathbb{R}^n \to \mathbb{R}^n$. If the Jacobian is everywhere nonsingular, then by the inverse function theorem, $F$ is a local diffeomorphism: that is, $F$ locally has an inverse everywhere. It does not necessarily follow that $F$ has a global inverse, because there is no guarantee that we can consistently glue all these local inverses together. For example, if we replace $\mathbb{R}^n$ by a more general manifold $M$, it can be the case that $M$ admits a nontrivial covering map to itself, the simplest example being $M = S^1$. Now this can't happen for $\mathbb{R}^n$, but more subtle things might happen. If $F$ is assumed in addition to be surjective and proper, then by Ehresmann's theorem it's a fibration. Its fibers must be connected, and in fact must be contractible, in addition to being zero-dimensional, so the conclusion is that they are points. Without properness, it appears that counterexamples are known: The strong real Jacobian conjecture was that a real polynomial map with a nowhere vanishing Jacobian determinant has a smooth global inverse. That is equivalent to asking whether such a map is topologically a proper map, in which case it is a covering map of a simply connected manifold, hence invertible. Sergey Pinchuk (1994) constructed two variable counterexamples of total degree 25 and higher.
Degree of vertex in left part of a bipartite graph with distance less than 3 in right part. A friend and I have worked on the following problems and would appreciate your help generalizing the answer for $n\ge5$: Suppose participants $P$ at a conference speak one or more of the languages in the set $L$ and that each pair can communicate in at least one language. We already proved that if $|L|=3$ and $|P| \ge10$ then one language has to be spoken by at least $2/3$ of the participants. That ratio is $3/5$ if $|L|=4$ and $|P|$ is large enough to dodge low-participant irregularities. We generalize easily to the following problem and would appreciate your take: Let $P$ and $L$ be the two sides of a bipartite graph such that for any pair $(x,y)$ of distinct vertices in $P$ we have $dist(x,y)=2$. What is the lowest value for $$p(n)=\max_{\ell\in L} \frac{\deg(\ell)}{n}$$ where $|L|=n$.
Here's an incomplete answer. If there are $m$ participants, and at most $pm$ speak each language, then each language allows conversation between $\binom{pm}{2}$ pairs of people; there are $\binom{m}{2}$ pairs of people total, so we must have $$n \cdot \binom{pm}{2} \ge \binom{m}{2}.$$ This can only hold for arbitrarily large $m$ if $n \cdot \frac{p^2}{2} \ge \frac12$, or $p(n) \ge \frac{1}{\sqrt n}$. To show that this is not a terrible lower bound: we have a nearly-matching upper bound when $n = q^2 + q + 1$ for some prime power $q$. In this case, there is a construction with $n$ languages and $n$ participants in which each participant speaks $q+1$ languages: just let $P$ be the set of points and $L$ be the set of lines of the projective plane of order $q$, and suppose a person speaks a language if the corresponding point lies on the corresponding line. We can reproduce this exactly for $2n, 3n, 4n, \dots$ participants by having groups of size $2, 3, 4,\dots$ speak exactly the same set of languages, and approximately by dividing people into groups as evenly as possible. In this way, no language is spoken by more than $p(q^2+q+1) = \frac{q+1}{q^2+q+1}$ of the participants, which is a $\frac1{\sqrt n} + O(\frac1n)$ fraction. I expect the projective plane construction to be the best possible when it applies, but I don't know what the optimal answer is for, e.g., $p(5)$.
Testing for the Convergence/Divergence of a series Given only that the series $\sum a_n$ converges, either prove that the series $\sum b_n$ converges or give a counterexample, when we define $b_n$ by, i ) $\frac{a_n}{n}$ ii) $a_n \sin(n ) $ iii) $n^{\frac{1}{n}} a_n$ Is there any general approach to such questions?
(i) $b_n = a_n / n$. Here $n$ is always positive, so $b_n$ and $a_n$ have the same sign. If $\sum_{n=1}^{\infty} a_n$ converges absolutely, then $0 \leq |a_n|/n \leq |a_n| \ \ \forall n\geq 1$, so $\sum b_n$ also converges absolutely by comparison. If $\sum_{n=1}^{\infty} a_n$ converges conditionally (say, as an alternating series with decreasing terms), then (a) $b_n$ also has strictly alternating signs, (b) $\lim_{n \rightarrow \infty} a_n = 0$ implies $\lim_{n \rightarrow \infty} b_n = \lim_{n \rightarrow \infty} a_n/n = 0$, (c) $\forall n,\ |a_{n+1}| < |a_n|$ implies $|b_{n+1}|=|a_{n+1}|/(n+1) < |a_{n+1}|/n < |a_n|/n = |b_n|$, so $\sum b_n$ also converges (conditionally) by the alternating series test. (ii) $b_n = a_n \sin (n)$. If $a_n \geq 0 \ \ \forall n \geq 1$, or if $\sum a_n$ converges absolutely, then since $-1 \leq \sin (n) \leq 1$ we have $0 \leq |\sin (n)| \leq 1$, thus $0 \leq |b_n| = |\sin (n)| |a_n| \leq |a_n|$, so $\sum b_n$ converges absolutely by comparison. BUT if $\sum a_n$ converges only conditionally, in this case we cannot be sure about $\sum b_n$. (iii) $b_n = n^{1/n} a_n$. We have $n = e^{\ln n}$, so $n^{1/n} = e^{\ln n \cdot 1/n} = e^{\frac{\ln n}{n}}$. Since $\ln n < n \ \ \forall n > 0$ and, by l'Hopital or other methods, $\lim_{n \rightarrow \infty} (\ln n) / n = 0$, we get $\lim_{n \rightarrow \infty} e^{\frac{\ln n}{n}} = e^0 = 1$. Thus $\lim_{n \rightarrow \infty} b_n/a_n = 1$ and therefore $\sum b_n$ converges by the limit comparison test. Yes, there is a general method. Use the squeeze or sandwich theorem and comparison test where possible and use limits and the limit comparison test otherwise. It helps to know properties of trig functions and exponents and other common functions in the limit. Be very careful with signs.
Integral expansions How to find the leading behaviour of $$\int_0^{\pi^2/2}\int_0^{\pi^2/2} e^{x \cos(\sqrt{q+s})}\,dq\,ds$$ as $x$ tends to infinity? If we consider the first integral w.r.t. $dq$ and try Laplace's method, is it right to take $q=0$ as the maximum? Please help me to proceed.
We first perform the substitution $$y=q+s, \qquad z= q-s$$ in order to obtain $$I = \int_0^{\pi^2/2} \!dy\,y e^{x \cos\sqrt y} + \int_{\pi^2/2}^{\pi^2}\!dy\, (\pi^2-y)e^{x \cos\sqrt y}.$$ As $\cos \sqrt y$ is negative on the interval $y \in[\pi^2/2,\pi^2]$ the second integral is exponentially small for $x\to \infty$ and the dominant contribution comes from the first integral. In the first integral, $\cos \sqrt y$ obtains its maximum at $y=0$. So employing Laplace's method, we obtain the leading order $$I \sim \int_0^\infty \!dy\,y e^{x (1-y/2)} =\frac{4 e^{x}}{x^2}$$
Using complex analysis , how to prove that any holomorphic function $f:\overline{D(0;1)} \to \overline{D(0;1)}$ has a fixed point? Using complex analysis , how to prove that any holomorphic function $f:\overline{D(0;1)} \to \overline{D(0;1)}$ has a fixed point ? From this answer Suppose $f(z)$ is analytic in the closed unit disc... I can see that if $|f(z)|<1$ on $|z|=1$ then I can prove it easily ; but what if that condition doesn't hold ? I am stuck . Please help . Thanks in advance
For every $n \in \mathbb N$, define $g_n : \overline{D(0;1)} \to \mathbb C$ by $g_n(x)=\left(1-\dfrac 1n\right)f(x)$. Then $|g_n(x)|\le 1-\dfrac 1n<1$, so $g_n: \overline{D(0;1)} \to D(0;1)$. By the question you link, which you say you can easily handle, each $g_n$ has a fixed point: for every $n$ there exists $z_n \in \overline {D(0;1)}$ such that $g_n(z_n)=z_n$, i.e. $\left(1-\dfrac 1n\right)f(z_n)=z_n$ for all $n \in \mathbb N$. Now, as $\overline{D(0;1)}$ is compact, $\{z_n\}$ has a convergent subsequence, say $\{z_{r_n}\}$, converging to some $z \in \overline{D(0;1)}$. Since $f$ is continuous and $\dfrac 1{r_n} \to 0$ as $n \to \infty$, we get $f(z)=z$.
Is there a function whose all limits at rational points approach infinity? Is there a function $f\colon\mathbb R \to \mathbb R$, such that its limits at rational points approach infinity?
Suppose $\lim_{x\to q}f(x)=\infty$ for each $q\in\mathbb Q.$ Given $q\in\mathbb Q$ and $n\in\mathbb N,$ there is a neighborhood $U_{n,q}$ of $q$ such that $f(x)\gt n$ for all $x\in U_{n,q}\setminus\{q\}.$ For each $n\in\mathbb N,$ the set $U_n=\bigcup_{q\in\mathbb Q}U_{n,q}$ is a dense open set, and $f(x)\gt n$ for every irrational $x\in U_n.$ By the Baire category theorem, the set $$\left(\bigcap_{n\in\mathbb N}U_n\right)\setminus\mathbb Q=\left(\bigcap_{n\in\mathbb N}U_n\right)\cap\left(\bigcap_{q\in\mathbb Q}(\mathbb R\setminus\{q\})\right)$$ is nonempty, i.e., there is an irrational number $x\in\bigcap_{n\in\mathbb N}U_n.$ Then $f(x)\gt n$ for all $n\in\mathbb N,$ which is absurd.
Show that $\cot(\pi/14)-4\sin(\pi/7)=\sqrt7$ Show that $\cot(\pi/14)-4\sin(\pi/7)=\sqrt7$. This problem is from G.M. 10/2016 and I can't solve it. I tried with an isosceles triangle with angles $3\pi/7, 3\pi/7$ and $\pi/7$ and I tried to find a relation between the sides of the triangle but I couldn't find anything. I also thought to solve it with complex numbers but again I could't find anything. Any ideas?
Let $a=\frac{\pi}{7},c=\cos a$. This answer uses that $x=c$ is a root of $$8x^3-4x^2-4x+1$$ The proof is written at the end of this answer. Multiplying the both sides of $$8c^3-4c^2-4c+1=0$$ by $2$ gives $$16c^3-8c^2-8c+2=0,$$ i.e. $$(16c^2-24c+9)+(16c^3-24c^2+9c)-7+7c=0,$$ i.e. $$(1+c)(16c^2-24c+9)=7(1-c)$$ Multiplying the both sides by $1-c$ gives $$(1-c^2)(4c-3)^2=7(1-c)^2,$$ i.e. $$1-c^2=\frac{7(1-c)^2}{(4c-3)^2}$$ So $$\sqrt{1-c^2}=\frac{\sqrt 7\ (1-c)}{4c-3}$$ since $4c-3\gt 4\cos\frac{\pi}{6}-3=2\sqrt 3-3\gt 0$. So, we have $$\frac{4c-3}{1-c}\sqrt{1-c^2}=\sqrt 7\tag1$$ By the way, $$\begin{align}\cot\frac{a}{2}-4\sin a&=\sqrt{\frac{1+c}{1-c}}-4\sqrt{1-c^2}\\\\&=\sqrt{\frac{1-c^2}{(1-c)^2}}-4\sqrt{1-c^2}\\\\&=\frac{\sqrt{1-c^2}}{1-c}-4\sqrt{1-c^2}\\\\&=\frac{4c-3}{1-c}\sqrt{1-c^2}\tag2\end{align}$$ The claim follows from $(1)(2)$. Finally, let us prove that $x=c$ is a root of $$8x^3-4x^2-4x+1$$ Since $3a+4a=\pi$, we have $$\sin(3a)=\sin(4a)$$ from which we have $$\sin a\ (3-4\sin^2a)=2\sin(2a)\cos(2a)=4\sin a\cos a\cos(2a)$$ Dividing the both sides by $\sin a$ gives $$3-4(1-c^2)=4c(2c^2-1),$$ i.e. $$8c^3-4c^2-4c+1=0$$
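A quick floating-point check of the identity (not a proof, just plain Python):

```python
import math

lhs = 1 / math.tan(math.pi / 14) - 4 * math.sin(math.pi / 7)
print(lhs, math.sqrt(7))        # both are approximately 2.6457513110645907
```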
What is orthogonal transformation? When $A^{T}A = AA^{T} = I$, I am told it is an orthogonal transformation $A$. But don't really get what it means. Hope to hear some explanations. $\begin{bmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix} \begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix} = \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}$
You can transform a vector into another vector by multiplying it by a matrix: $$w=Av$$ Say, you had two vectors $v_1,v_2$, let's transform them into $w_1,w_2$ and obtain the inner product. Note that the inner product is the same as transposing then matrix multiplying: $$w_1\cdot w_2\equiv w_1^Tw_2=v_1^TA^TAv_2$$ Now, if the matrix is orthogonal, you get: $$w_1\cdot w_2=v_1^Tv_2\equiv v_1\cdot v_2$$ So, we see that the inner product is preserved when the transformation is orthogonal. Isn't this interesting? It means that if you have a geometrical object (which can be represented with a set of vectors) then orthogonal transformation $A$ will simply rotate or flip your object preserving its geometry (all angles, and sizes).
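To see this concretely, a short NumPy sketch (not part of the original answer) checks that the rotation matrix above is orthogonal and that it preserves the inner product of two random vectors:

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(2), rng.standard_normal(2)

print(np.allclose(A.T @ A, np.eye(2)))            # A is orthogonal
print(np.isclose((A @ v1) @ (A @ v2), v1 @ v2))   # inner product preserved
```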
Using the symmetry of the function $t \rightarrow 1/t$ to change limits of an integral. I was looking at this very famous answer and unfortunately I could not get through even the first step: \begin{align} & 2 \int_0^{1} dt \frac{t^{-1/2}}{1-t^2} \log{\left (\frac{5-2 t+t^2}{1-2 t +5 t^2} \right )} + 2 \int_1^{\infty} dt \frac{t^{-1/2}}{1-t^2} \log{\left (\frac{5-2 t+t^2}{1-2 t +5 t^2} \right )} \\ &= 2 \int_0^{1} dt \frac{t^{-1/2}}{1-t^2} \log{\left (\frac{5-2 t+t^2}{1-2 t +5 t^2} \right )} + 2 \int_0^{1} dt \frac{t^{1/2}}{1-t^2} \log{\left (\frac{5-2 t+t^2}{1-2 t +5 t^2} \right )} \\ \end{align} Could someone please walk me through this step? How are we using the symmetry of $1/t$ here?
In the integral $$\int \limits _1 ^\infty \frac{t^{-1/2}}{1-t^2} \log{\left (\frac{5-2 t+t^2}{1-2 t +5 t^2} \right )} \Bbb dt$$ make the change $u = \frac 1 t$. This will turn it into $$\int \limits _1 ^0 \frac {u^{\frac 1 2}} {1 - \frac 1 {u^2}} \log \left( \frac {5 - \frac 2 u + \frac 1 {u^2}} {1 - \frac 2 u + \frac 5 {u^2}} \right) \left( - \frac 1 {u^2} \right) \Bbb d u = \int \limits _0 ^1 \frac {u^{\frac 1 2}} {1 - \frac 1 {u^2}} \log \left( \frac {5 - \frac 2 u + \frac 1 {u^2}} {1 - \frac 2 u + \frac 5 {u^2}} \right) \left( \frac 1 {u^2} \right) \Bbb d u = \\ \int \limits _0 ^1 \frac {u^{\frac 1 2}} {u^2 - 1} \log \left( \frac {5u^2 - 2u + 1} {u^2 - 2u + 5} \right) \Bbb d u = - \int \limits _0 ^1 \frac {u^{\frac 1 2}} {u^2 - 1} \log \left( \frac {u^2 - 2u + 5} {5u^2 - 2u + 1} \right) \Bbb d u = \\ \int \limits _0 ^1 \frac {u^{\frac 1 2}} {1 - u^2} \log \left( \frac {5 - 2u + u^2} {1 - 2u + 5u^2} \right) \Bbb d u .$$ Now change the letter $u$ into $t$ and you get the second integral on the second row of your question.
Why remainders of division give base expression? $214 = 3 · 71 + 1$ $71 = 3 · 23 + 2$ $ 23 = 3 · 7 + 2 $ $7 = 3 · 2 + 1 $ $2 = 3 · 0 + 2$ As a result, to obtain a base $3$ expansion of $214$, we take the remainders of divisions and we get that $(214)_{10} = (21221)_3$. Question: why do the remainders give base $3$ expansion of $214$? And why order is from bottom to top?
$$214=(((((((2\times 3)+1)\times 3)+2)\times 3)+2)\times 3)+1$$ Unwinding this nested form, the remainder of each successive division by $3$ is exactly the next digit from the right, and the last remainder (the $2$ in the innermost bracket) is the leading digit; that is why you read the remainders from the bottom up to get $(21221)_3$.
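The same idea as an algorithm: repeated division by the base produces the digits from least significant to most significant, so reversing the remainders gives the expansion. A small Python sketch (the name `to_base` is mine):

```python
def to_base(n, b):
    # repeated division: remainders come out least-significant digit first,
    # so read them bottom-to-top (reverse) to get the expansion
    digits = []
    while n > 0:
        n, r = divmod(n, b)
        digits.append(r)
    return ''.join(str(d) for d in reversed(digits)) or '0'

print(to_base(214, 3))          # '21221'
print(int('21221', 3))          # 214
```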
Expected value : Reciprocal Sum Positive iid Random Variables Let $\{X_i\}_{i=1}^{n}$ be iid positive valued random variables. Show that $\mathbb{E}[\frac{\sum_{i=1}^{m}X_i}{\sum_{i=1}^{n}X_i}]=\frac{m}{n}$ for $1\leq m\leq n$. I know $\mathbb{E}[\frac{\sum_{i=1}^{m}X_i}{\sum_{i=1}^{n}X_i}]=m \mathbb{E}[\frac{X_i}{\sum_{i=1}^{n}X_i}]$.
If you can prove that $\mathbb{E}[\frac{\sum_{i=1}^{m}X_i}{\sum_{i=1}^{n}X_i}]=m \mathbb{E}[\frac{X_i}{\sum_{i=1}^{n}X_i}]$, then simply find $$1=\mathbb{E}[\frac{\sum_{i=1}^{n}X_i}{\sum_{i=1}^{n}X_i}]=n \mathbb{E}[\frac{X_i}{\sum_{i=1}^{n}X_i}]$$ and conclude.
Question About the Logic of my Proof Okay, I am working on the following relatively simple problem: Let $f(x) = |x-3| + |x-1|$ for all $x \in \mathbb{R}$. Find all $t$ for which $f(t+2) = f(t)$. So, if $f(t+2)=f(t)$, this is equivalent to $|t+1| = |t-3|$. Thus, if this holds, one can square both sides and arrive at $t=1$. So, this value of $t$ is a necessary condition, but prima facie it isn't the only value. To show sufficiency, could I let $t = 1 + \epsilon$, plug it into the above equation, deduce that $\epsilon = 0$, and conclude that $t=1$ is the only value? Would that go toward showing this is the only value?
You don't need to square both sides: $$|t+1|=|t-3|\Leftrightarrow t+1=\pm(t-3)$$ But $$t+1=t-3\to1=-3$$ which has no solution, so $$t+1=-t+3\to t=1$$ So $1$ is the only solution.
Satisfying the Cauchy Riemann Equations Where does the function $f(z)=\bar{z} $ satisfy the Cauchy-Riemann Equations?
There are two forms of the Cauchy-Riemann equation we could look at to obtain the answer. The first form asks where $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$$ is satisfied (note: $f(z)= u+iv$). There is also an equivalent relation we satisfy to show that the Cauchy-Riemann equations are satisfied. This has the form: $$\frac{\partial f(z)}{\partial \bar{z}}=0$$ With either of these relationships, it should be fairly simple to solve.
Write $(-\sqrt{3} + i)^{11}$ as $x+iy$, where $x,y \in \mathbb{R}$ Write $(-\sqrt{3} + i)^{11}$ as $x+iy$, where $x,y \in \mathbb{R}$ I was wondering if there was a pattern or method of calculating this more efficiently. I feel like I am not really using complex analysis. Let $x = -\sqrt{3} + i$ $\Rightarrow x^2 = 2 - 2i\sqrt{3}$ $\Rightarrow x^3 = 8i$ $\Rightarrow x^4 = -8 - 8i\sqrt{3}$ $\Rightarrow x^8 = -128 + 128i\sqrt{3}$ $\Rightarrow x^8 \times x^3 = (-\sqrt{3} + i)^{11}$
Yes, there is a pattern. Recall that complex numbers can be written as $re^{2\pi i\theta}$. When $\theta \in\mathbb Q$, there exists a pattern we can exploit. This is that if $x = re^{2\pi ip/q}$, then $x^q = r^q e^{2\pi ip}$, and $e^{2\pi ip} =1$. So, we have that $x^q = r^q$. If you've seen any abstract algebra, this is exactly the order of the element of a multiplicative group. Now, you have that $x^3 = 8i$. We have that $i = e^{\frac{\pi}{2}i} = e^{2\pi i/4}$, so just from this I know that $(x^3)^4 = (8i)^4 = 8^4 = 4096$. So, $x^{11} = 4096(-\sqrt{3}+i)^{-1}$, which is much easier to compute than just iterated multiplication. You can also use the binomial theorem for things like this, but I find using the polar form (either by directly converting, or by repeatedly multiplying until you find something like $8i$ that has a very simple polar form, and then converting) to be much tidier.
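A quick numerical cross-check of the arithmetic (plain Python, not part of the original answer): $(-\sqrt3+i)^{11}$ should come out as $-1024\sqrt3-1024i$, consistent with modulus $2^{11}=2048$ and argument $11\cdot150^\circ\equiv210^\circ$.

```python
import math

x = complex(-math.sqrt(3), 1)
print(x**11)                          # approximately (-1773.62 - 1024j)
print(-1024 * math.sqrt(3), -1024)    # exact form: -1024*sqrt(3) - 1024*i
print(abs(x), math.degrees(math.atan2(x.imag, x.real)))   # modulus 2, argument 150 degrees
```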
Why are primitive roots called generators? I learned recently that the reason that g is commonly used to denote a primitive root is because it stands for "generator". I also know that this has something to do with the non-zero residues. However I don't understand how a g could be used to "generate" all non-zero residues.
Let $p$ be a prime number. Consider all non-zero residues modulo $p$. We can multiply two non-zero residues and obtain a non-zero residue as well, also there is a trivial non-zero residue (equal to $1$), and, finally, each non-zero residue has an inverse. In other words, the set of non-zero residues modulo $p$ is a group over multiplication (it's often denoted by $\mathbb{Z}_p^*$). Let $g$ be any primitive root now. One can see that $g$ is a generator of this group (and hence this group is cyclic), that is, each non-zero residue equals $g^k$ for some integer $k$. If we consider a composite base $m$, it's not enough to consider non-zero residues since they are not a group (for example, $2\times3=0$ modulo $6$). One should instead consider all residues coprime with $m$ (the set of such residues is also a group over multiplication and it's also denoted by $\mathbb{Z}_m^*$). Again, if there is a primitive root modulo $m$, it generates this group.
Simplify boolean expression X'YZ + XY'Z + XYZ' I have this expression to put in the XOR form: X'YZ + XY'Z + XYZ' The steps I did already are these ones: Z(X'Y + XY') + XYZ' Z(X^Y) + XYZ' But if I put the same expression on WolframAlpha it says that the final solution should be: XY ^ XZ ^ YZ ^ XYZ I know that AB' + A'B = A^B, but in this case I don't know how X^Y can be correlated to XY and obtain the full solution. Can someone help me? Thank you
\begin{eqnarray*} XY \hat{ } XZ \hat{ } YZ \hat{ } XYZ & = & (XY (XZ)^{'}+ (XY)^{'}XZ) \hat{ }(YZ (XYZ)^{'}+ (YZ)^{'}XYZ) \\ & = & (XY (X^{'}+Z^{'})+ (X^{'}+Y^{'})XZ) \hat{ } (YZ (X^{'}+Y^{'}+Z^{'})+ (Y^{'}+Z^{'})XYZ) \\ & = & (XY Z^{'}+ Y^{'}XZ)\hat{ }(YZ X^{'}) \\ & = & (XY Z^{'}+ Y^{'}XZ)^{'}(X^{'} YZ )+(XY Z^{'}+ Y^{'}XZ)(YZ X^{'})^{'} \\ & = & ((XY Z^{'})^{'} (Y^{'}XZ)^{'})(X^{'} YZ )+(XY Z^{'}+ Y^{'}XZ)(Y^{'}+Z^{'} +X) \\ & = & (X^{'}+Y^{'}+ Z) (X^{'}+Y+Z^{'})(X^{'} YZ )+XY Z^{'}+ Y^{'}XZ \\ & = & X^{'} YZ +XY Z^{'}+ Y^{'}XZ \\ \end{eqnarray*}
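Because the identity only involves three Boolean variables, it can also be confirmed by brute force over all eight assignments; a short Python sketch (not part of the original derivation):

```python
from itertools import product

def lhs(x, y, z):
    # X'YZ + XY'Z + XYZ'
    return ((not x) and y and z) or (x and (not y) and z) or (x and y and (not z))

def rhs(x, y, z):
    # XY ^ XZ ^ YZ ^ XYZ
    return (x and y) ^ (x and z) ^ (y and z) ^ (x and y and z)

print(all(bool(lhs(*t)) == bool(rhs(*t))
          for t in product([False, True], repeat=3)))   # True
```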
Battle of the Sexes, Mixed Strategy BNE Below is a question pertaining to finding mixed-strategy BNE: So for this question I know how to calculate the probabilities at which each player should be indifferent. Alice will be indifferent in the Mean and Nice case when Pr(pub) = 5/6 and Pr(cafe) = 1/6. Given there is a 50/50 shot of Bob being mean or nice, Bob can either play Pub always when nice and 2/3 when mean, or 2/3 when nice and always when mean. Mean Bob will be indifferent when Alice plays Pr(Cafe) = 1/6 and Pr(Pub) = 5/6. Nice Bob will be indifferent when Alice plays Pr(Cafe) = 5/6 and Pr(Pub) = 1/6. However, after this point I am pretty stuck. I am not sure how to combine these two to determine the mixed-strategy equilibrium. I have the answer (below) but I do not know how to interpret it. Thank you!
Here is a way to think about it. Let $a_c=Pr($Alice plays cafe$)$ and $b_c^i=Pr($Bob of type $i$ plays cafe$)$ where $i\in\{nice(n), mean(m)\}$. The only way Alice will play a mixed strategy is if she is indifferent given Bob's actions, i.e. $$ \underbrace{\frac{1}{2}\left[5b_c^n+2(1-b_c^n)\right] + \frac{1}{2}\left[5b_c^m+2(1-b_c^m)\right]}_{\text{Expected payoff if Alice plays Cafe}} $$ $$ = \underbrace{\frac{1}{2}\left[0b_c^n+3(1-b_c^n)\right] + \frac{1}{2}\left[0b_c^m+3(1-b_c^m)\right]}_{\text{Expected payoff if Alice plays Pub}} $$ or simply $$ b_c^n+b_c^m=\frac{1}{3}. $$ The above expression gives us a relationship between how the different types of Bob must randomize in order to leave Alice indifferent. We don't need both types to randomize in order for Alice to be indifferent, but at least one of Bob's types must. The nice type is indifferent if: $$ \underbrace{3a_c+0(1-a_c)}_{\text{Nice plays Cafe}}=\underbrace{2a_c+5(1-a_c)}_{\text{Nice plays Pub}} $$ which happens only if $a_c=\frac{5}{6}$. Nice Bob strictly prefers Pub if $a_c<\frac{5}{6}$, and strictly prefers Cafe if $a_c>\frac{5}{6}$. The mean type is indifferent if: $$ \underbrace{0a_c+3(1-a_c)}_{\text{Mean plays Cafe}}=\underbrace{5a_c+2(1-a_c)}_{\text{Mean plays Pub}} $$ which happens only if $a_c=\frac{1}{6}$. Mean Bob strictly prefers Cafe if $a_c<\frac{1}{6}$, and strictly prefers Pub if $a_c>\frac{1}{6}$. Now we can construct our mixed-strategy BNE. One is: Alice mixes with $a_c=\frac{5}{6}$, Nice Bob mixes with $b_c^n=\frac{1}{3}$ and mean Bob plays Pub ($b_c^m=0$). In the expression given by the question, this is Alice playing $5/6C+1/6P$, and Bob playing $1/3 CP+2/3PP$. Another is where Alice mixes with $a_c=\frac{1}{6}$, Nice Bob plays Pub for sure ($b_c^n=0$) and mean Bob mixes with $b_c^m=\frac{1}{3}$. In the expression given by the question, this is Alice playing $1/6C+5/6P$, and Bob playing $1/3 PC+2/3PP$. There is no equilibrium where both types of Bob mix simultaneously. Hope this helps.
Integrating $\frac{1}{\sqrt{x^2+1}}$ with the help of a variable change The question I am faced with is as follows: With the help of a variable change $\sqrt{x^2+1}=x+t$, demonstrate that over $\mathbb{R}$, with $k\in \mathbb{R}$ $$\int{\frac{1}{\sqrt{x^2+1}} \ \mathrm{d}}x=\ln(x+\sqrt{x^2+1})+k$$ Now what I did was something different, I built a right triangle with an angle $\alpha$ (its adjacent side being $1$ and opposite side being $x$), and with a hypotenuse of $\sqrt{x^2+1}$. Thus, $x=\tan{\alpha}$ and $\mathrm{d}x=\sec^2{\alpha} \ \mathrm{d}\alpha$. So we have: $$\int{\frac{1}{\sqrt{x^2+1}} \ \mathrm{d}}x=\int{\cos{\alpha} \ \mathrm{d}x}=\int{\frac{\sec^2{\alpha}}{\sec{\alpha}} \ \mathrm{d}\alpha}=\int{\sec{\alpha} \ \mathrm{d}\alpha}$$ I know how to integrate $\sec{\alpha} \ \mathrm{d}\alpha$, and after replacing $\alpha$ with the initial value of $x$, I indeed end up with the result $\ln(x+\sqrt{x^2+1})+k$. But how would one end up with the same result using the given hint? Can someone help me out? Thanks.
With $\sqrt{x^2+1}=t-x$ (slightly better) you have $x^2+1=t^2-2tx+x^2$, so $$ x=\frac{t^2-1}{2t}=\frac{t}{2}-\frac{1}{2t} $$ so also $$ dx=\frac{1}{2}\left(1+\frac{1}{t^2}\right)\,dt=\frac{t^2+1}{2t^2}\,dt $$ and $$ \sqrt{x^2+1}=t-x=t-\frac{t}{2}+\frac{1}{2t}=\frac{t^2+1}{2t} $$ Then the integral becomes $$ \int\frac{2t}{t^2+1}\frac{t^2+1}{2t^2}\,dt=\int\frac{dt}{t}=\ln|t|+k=\ln\left(x+\sqrt{x^2+1}\right)+k $$ since $t=x+\sqrt{x^2+1}>0$.
Is $\mathbb C \setminus [0,\infty)$ simply connected? Is $\mathbb R^2 \setminus ( [0,\infty)\times \{0\}) $ simply connected ? My guess is it is , but I can only show it is path connected , apart from that I am stuck . Please help . Thanks in advance
Hint. $$ (x,y)\in\mathbb R^2 \mapsto -(e^x+iy)^2 \in \mathbb C\setminus[0,\infty)$$ is a homeomorphism.
The sum of a set $A$ with the empty set, $\varnothing$ Given that the sum of two sets is defined as $$ A + B = \big\{ a + b : a \in A, b \in B \big\}, $$ how might one compute the sum $$ A + \varnothing $$ where $A$ may or may not be empty? In his book Functional Analysis, Rudin writes that the sum is simply the empty set itself. From this, I would assume that the sum of any sum with the empty set is the empty set since addition does not seem to be well-defined in this sense (something plus nothing), but I would like some clarity on the subject.
Because the empty set doesn't have any element then there is no element $a+b$ that can belong to $A+\emptyset$, hence $A+\emptyset=\emptyset$ for any $A$ (empty or not).
show that for $n=1,2,...,$ the number $1+1/2+1/3+...+1/n-\ln(n)$ is positive Show that for $n=1,2,\dots,$ the number $1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}-\ln(n)$ is positive, that it decreases as $n$ increases, and hence that the sequence of these numbers converges to a limit between $0$ and $1$ (Euler's constant). I'm trying to prove this by induction on $n$. I did the base step, but I am stuck on the inductive step: suppose that for $n=1,2,\dots,$ it is true that $1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}-\ln(n)$ is positive, and let's see that $1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}+\frac{1}{n+1}-\ln(n+1)$ is positive. We see that \begin{align} &1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}+\frac{1}{n+1}-\ln(n+1)\\ =&1+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{n}-\ln(n)+\frac{1}{n+1}-\ln(n+1)+\ln(n)\\ >&\frac{1}{n+1}-\ln(n+1)+\ln(n) \end{align} But I do not know how to prove that $\frac{1}{n+1}-\ln(n+1)+\ln(n)>0$. What do you say? Can you complete what I did?
There is a different (longer) way that does not use integrals. First observe that $$n=\prod_{k=1}^{n-1}\frac{k+1}k$$ Hence $$\sum_{k=1}^n\frac1k-\ln(n)=\frac1n+\sum_{k=1}^{n-1}\left(\frac1k-\ln\left(\frac{k+1}k\right)\right)$$ Then if we define on $[1,\infty)$ the function $$f(x):=\frac1x-\ln\left(\frac{x+1}x\right)$$ we can see that $f'(x)=-\frac1{x^2(x+1)}$ is negative, hence $f$ is strictly decreasing. From here it is easy to check that $f$ is positive, because $f(1)>0$, $\lim_{x\to\infty}f(x)=0$ and $f$ is continuous. Then $$\frac1k-\ln\left(\frac{k+1}k\right)>0,\quad\forall k\in\Bbb N_{>0}$$
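A numerical illustration of all three claims (positivity, monotone decrease, and a limit near $0.577$, Euler's constant) in plain Python; not a substitute for the proof:

```python
import math

prev = None
for n in (1, 10, 100, 1000, 10**4, 10**5):
    c_n = sum(1.0 / k for k in range(1, n + 1)) - math.log(n)
    assert c_n > 0                      # positivity
    if prev is not None:
        assert c_n < prev               # the sequence decreases
    prev = c_n
    print(n, c_n)                       # approaches 0.5772... (Euler's constant)
```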
CDF of a random variable Consider a random variable $Y_{n}$ such that, $P$($Y_{n}$ = $\frac{i}{n}$) = $\frac{1}{n}$ , $i$ = 1, 2,..., $n$. Is the cdf of $Y_n$ for every integer $n \ge 1$ simply $\sum_{k=1}^i \frac{1}{n}$ = $\frac{i}{n}$? Also, how do you show that for any $u$ $\in$ $R$ the $\lim_{n\to\infty} P($$Y_{n}$ $\le$ $u$) = P(U $\le$ $u$) Where U is uniform $[0,1]$. My thought was that if $P($$Y_{n}$ $\le$ $\frac{i}{n}$) = $\frac{i}{n}$, then $P($$Y_{n}$ $\le$ $u$) = $u$ which is equal to $\int_0^u$ 1 dU. Any suggestions?
The cumulative distribution function (CDF) is obtained by integration of the density: $$\begin{aligned} F_{Y_n}(t) & = \int_{-\infty}^t \sum_{i=1}^n P\!\left(Y_n=\frac{i}{n}\right) \delta_{i/n}(x) \, dx \\ & = \underset{i\leq n t}{\sum_{i=1}^n} \frac{1}{n} \\ & = \frac{1}{n} \min(n,\max (0,\lfloor n t \rfloor)) \, , \end{aligned}$$ where $\delta$ is the Dirac delta, and $\lfloor \cdot \rfloor$ is the floor function. When $n$ goes to infinity, this CDF tends towards the CDF of the uniform distribution over $[0,1]$: $$\begin{aligned} \lim_{n\rightarrow +\infty} F_{Y_n} (t) & = \min(1,\max (0, t )) \\ & = F_U (t) \, . \end{aligned}$$
The additivity axiom of probabiity One of the axioms of probability is: If the sample space is finite and $A$ and $B$ are disjoint events then $Pr[A\cup B]=Pr[A]+Pr[B]$, and if the sample space is infinite, then for for any (possibly infinite number of) disjoint events, like $A_1,A_2,\ldots$, then $Pr[\cup_iA_i]=\sum_iPr[A_i]$ I cannot understand why the first one cannot imply the second one. I have seen several probability books and they have mentioned it and skipped its detail. Can anyone give an example that the union of just two disjoint sets does not imply the union of any (possibly infinite) number of disjoint events?
This is a good example to show that induction can prove things "for all positive integers $n$" but that does not necessarily imply the result "for $n=\infty$." There are still some things you can say with only the finitely additive axiom: Let $\{A_n\}_{n=1}^{\infty}$ be an infinite sequence of disjoint events. Fix $m$ as a positive integer. Since $\cup_{n=1}^{\infty} A_n \supseteq \cup_{n=1}^m A_n$ we conclude: $$P[\cup_{n=1}^{\infty} A_n] \geq P[\cup_{n=1}^m A_n] = \sum_{n=1}^m P[A_n] $$ where the final equality holds by finite additivity. Taking a limit as $m\rightarrow \infty$ gives: $$ P[\cup_{n=1}^{\infty} A_n] \geq \sum_{n=1}^{\infty} P[A_n]$$ The reverse inequality requires the countable additivity axiom (as described in the aduh answer). As a sanity check, notice that the above inequality indeed holds for the "fair integer lottery" example from aduh's answer, using $A_n = \{n\}$ for $n \in \{1, 2, 3, ...\}$ (and redefining that example only over positive integers, rather than all integers, for simplicity), since $1 \geq \sum_{n=1}^{\infty} 0 = 0$.
Some curious binomial coefficient identities I was playing around with some polynomials involving binomial coefficients, and inadvertently proved the following two identities: (1) For all $p, q \in \mathbb{N}$ and $i \in \left[ 0, \min\{p, q\} \right]$: $$ \begin{pmatrix}p \\ i\end{pmatrix} = \sum_{j=0}^i (-1)^j \begin{pmatrix}q \\ j\end{pmatrix} \begin{pmatrix}p + q - j \\ i - j\end{pmatrix} \text{.} $$ (2) For all $q \in \mathbb{N}_{\ge 1}$ and $i \in [0, q]$: $$ \frac{q - i}{q} = \sum_{j=0}^i (-1)^j \begin{pmatrix}i \\ j\end{pmatrix} \begin{pmatrix}2q - 1 - j \\ i - j\end{pmatrix} \begin{pmatrix}q - j \\ i - j\end{pmatrix}^{-1} \text{.} $$ Can either of these identities be proven in any trivial way (e.g., by reduction to known identities)?
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\sum_{j = 0}^{i}\pars{-1}^{\,j}{q \choose j}{p + q - j \choose i - j} = {p \choose i}.\qquad p, q \in \mathbb{N}\,;\quad i \in \bracks{0,\min\braces{p,q}}}$. From the above conditions it's clear, because $\ds{j \leq i}$, that $\ds{p + q - j \geq 0}$. Then, $\ds{{p + q - j \choose i - j}_{j\ >\ i} = 0}$ such that $\color{#f00}{the\ above\ sum\ can\ be\ extended}$ to all numbers $\ds{\in \mathbb{N}}$: \begin{align} &\sum_{j = 0}^{i}\pars{-1}^{\,j}{q \choose j}{p + q - j \choose i - j} = \sum_{j = 0}^{\color{#f00}{\infty}}\pars{-1}^{\,j}{q \choose j}\ \overbrace{{-p - q + i- 1 \choose i - j}\pars{-1}^{i - j}}^{\ds{p + q - j \choose i - j}} \\[5mm] = &\ \pars{-1}^{i}\sum_{j = 0}^{\infty}{q \choose j} \braces{\vphantom{\huge A}\bracks{z^{i - j}}\pars{1 + z}^{-p - q + i - 1}} = \pars{-1}^{i}\bracks{z^{i}}\braces{\vphantom{\huge A}% \pars{1 + z}^{-p - q + i - 1}\ \overbrace{\sum_{j = 0}^{\infty}{q \choose j}z^{j}}^{\ds{\pars{1 + z}^{q}}}} \\[5mm] = &\ \pars{-1}^{i}\bracks{z^{i}}\pars{1 + z}^{-p + i - 1} = \pars{-1}^{i}{-p + i - 1 \choose i} \\[5mm] = &\ \pars{-1}^{i}\ \underbrace{{-\bracks{-p + i - 1} + i - 1 \choose i}\pars{-1}^{i}} _{\ds{-p + i - 1 \choose i}}\ =\ \bbx{\ds{p \choose i}} \end{align}
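Identity (1) can also be spot-checked by brute force for small parameters; a short Python sketch using `math.comb` (not part of the derivation above):

```python
from math import comb

def identity_holds(p, q, i):
    # sum_{j=0}^{i} (-1)^j C(q,j) C(p+q-j, i-j) == C(p,i) ?
    s = sum((-1)**j * comb(q, j) * comb(p + q - j, i - j) for j in range(i + 1))
    return s == comb(p, i)

print(all(identity_holds(p, q, i)
          for p in range(8)
          for q in range(8)
          for i in range(min(p, q) + 1)))   # True
```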
What is the closest point to a plane? I can't solve this question: Let $\mathcal{P}$ be the plane containing the points $(-3,4,-2)$, $(1,4,0)$, and $(3,2,-1)$. Find the point in this plane that is closest to $(0,3,-1)$. I don't know how to do this question. Any hints/solutions? If it is vectors, then I am pretty sure I don't know how to do it. :(
It is the orthogonal projection $H$ of the point $M(0,3,-1)$ onto the plane defined by the points $A(-3,4,-2)$, $B(1,4,0)$ and $C(3,2,-1)$. * *If the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$ are orthogonal, it is given by the formula $$\overrightarrow{AH}=\frac{\overrightarrow{AM}\cdot\overrightarrow{AB}}{\overrightarrow{AB}\cdot\overrightarrow{AB}}\,\overrightarrow{AB}+\frac{\overrightarrow{AM}\cdot\overrightarrow{AC}}{\overrightarrow{AC}\cdot\overrightarrow{AC}}\,\overrightarrow{AC}.\tag{1}$$ To finish determining the point $H$, just calculate $$\overrightarrow{OH}=\overrightarrow{OA}+\overrightarrow{AH}.$$ *If the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC}$ are not orthogonal, you first have to deduce an orthogonal basis of the plane $\bigl\langle\overrightarrow{AB}, \overrightarrow{AC}\bigr\rangle$ by the Gram–Schmidt process: if $$\overrightarrow{AC'}=\overrightarrow{AC}-\frac{\overrightarrow{AC}\cdot\overrightarrow{AB}}{\overrightarrow{AB}\cdot\overrightarrow{AB}}\,\overrightarrow{AB},$$ the vectors $\overrightarrow{AB}$ and $\overrightarrow{AC'}$ are orthogonal, and you can apply formula $(1)$ with $\overrightarrow{AC'}$ in place of $\overrightarrow{AC}$.
Calculate depth for light intensity I'm totally stumped with this one. Don't know where to start. Any hint is appreciated. For every meter a diver descends below the water surface, the light intensity is reduced by 3.5%. At what depth is the light intensity only 25% of that at the surface?
Note that if we are $1$ meter below the water, the intensity of light is $1-0.035=0.965$ the intensity at the surface. If we submerge another meter, the intensity is $0.965\times 0.965=(0.965)^2$. Continuing, if we are $x$ meters below the surface, the intensity is $(0.965)^x$. We need to find $x$ such that $(0.965)^x=0.25$. Proceeding, we find that at $$x=\frac{\log(0.25)}{\log(0.965)}$$ the intensity is 25%.
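As a quick numerical check of that last formula (a minimal Python sketch; the rounding is only for display):

```python
import math

# Depth at which intensity falls to 25% of the surface value,
# assuming a 3.5% loss per metre as in the problem statement.
x = math.log(0.25) / math.log(0.965)
print(round(x, 1))  # roughly 38.9 metres
```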
Prove that $\lim_{t\rightarrow0^{+}}\frac{e^{-\frac{1}{t}}}{t^{k}}=0 $ for every positive integer $k$ without L'Hospital's Rule. Prove that $\lim_{t\rightarrow0^{+}}\frac{e^{-\frac{1}{t}}}{t^{k}}=0 $ for every positive integer $k$. I am trying to show that the function $f:\mathbb{R}\rightarrow\mathbb{R}$ defined by $f(t)=\begin{cases} \exp\left(-\frac{1}{t}\right), & \mbox{if }t>0\\ 0, & \mbox{if }t\leq0 \end{cases}$ is smooth and to prove this, I have to prove that the limit exists. I am having trouble showing that it exists. I know that it can be shown using L'Hospital's Rule, but i would prefer to show it using the $\epsilon - \delta $ definition.
Pick $\epsilon > 0$. By taking the derivative one sees that $e^{-1/t}t^{-k}$ is increasing on some interval $(0,a)$. Choose $m \in \mathbb{N}$ such that $2^{-m} < a$ and $$mk - 2^m < \log_2\epsilon.$$ Therefore if $0 < t < 2^{-m}$ we have $$\frac{e^{-1/t}}{t^k} < \frac{2^{-1/t}}{t^k} < \frac{2^{mk}}{2^{2^m}} = 2^{mk - 2^m} < \epsilon.$$
Find closed form of Wallis's product type $\prod_{n=1}^{\infty}\left(\prod_{k=0}^{m}(n+k)^{{m\choose k}(-1)^k}\right)^{(-1)^n}$ Wallis's product $${2\over 1}\cdot{2\over 3}\cdot{4\over 3}\cdot{4\over 5}\cdots={\pi\over 2}\tag1$$ Generalised of Wallis's product type $$\prod_{n=1}^{\infty}\left(\prod_{k=0}^{m}(n+k)^{{m\choose k}(-1)^k}\right)^{(-1)^n}\tag2$$ Where $m\ge 1$ Setting $m=1$, we get $(1)$ $$\prod_{n=1}^{\infty}\left({n\over n+1}\right)^{(-1)^n}={\pi\over 2}\tag3$$ and $m=2$ we got $$\prod_{n=1}^{\infty}\left(n\cdot{n+2\over (n+1)^2}\right)^{(-1)^n}={\pi^2\over 8}\tag4$$ How can we find the closed form for $(2)$? Conjecture closed form may take the form of $${\pi^{2^{m-1}}\over F(m)}$$
$$ \log P(m) = \sum_{n\geq 1}(-1)^n\sum_{k=0}^{m}(-1)^k \binom{m}{k}\log(n+k) $$ can be written, by exploiting the integral representation for the logarithm given by Frullani's theorem, as $$ \log P(m) = \sum_{n\geq 1}(-1)^{n+1} \int_{0}^{+\infty}\frac{e^{-nx}(1-e^{-x})^m}{x}\,dx =\int_{0}^{+\infty}\frac{\left(1-e^{-x}\right)^m}{\left(1+e^x\right) x}\,dx$$ and always by Frullani's theorem the RHS of the last line is a linear combination of $\log\frac{\pi}{2}$ and logarithms of natural numbers: it is enough to reduce $(1-t)^m$ $\pmod{t+1}$, since: $$ \log P(m) = -\int_{0}^{1}\frac{(1-t)^m}{(1+t)\log(t)}\,dt.$$
Proving $ \forall r, s \in \mathbb{R^+} \sqrt {r. s} = \sqrt r . \sqrt s$ I am new to writing proofs, I have written down a proof for $ \forall r, s \in \mathbb{R^+} \sqrt {r\cdot s} = \sqrt r \cdot \sqrt s$ I know I am a bit messy with my proof, I would love to get some feedback. Also please verify if my proof is good enough. Here is my approach, Proof - Let, $\sqrt r = p$ and $\sqrt s = q$ where $p, q \in \mathbb{R^+}$. Then, $\sqrt r \sqrt s = p\cdot q$ ... (1) Squaring both the sides we get, $(\sqrt r \sqrt s)^2 = (p\cdot q)^2$ ...(2) $(\sqrt r \sqrt s)^2 = (r^{1/2}\cdot s^{1/2})^2$ [as $\sqrt a = a^{1/2}$] Lemma 1 - $\forall a, b \in \mathbb{R} [(a\cdot b )^2 = a^2\cdot b^2] $ Proof - By definition of squaring, $ a^2 = a\cdot a$ It follows from the definition that, $(a \cdot b)^2 = (a \cdot b ) (a\cdot b) = a^2\cdot b^2$ $\blacksquare$ From lemma 1, $(r^{1/2}\cdot s^{1/2})^2 = r^{2 / 2}\cdot s^{2/2} = r\cdot s$ Therefore, $(\sqrt r \sqrt s)^2 = r\cdot s$ From equation 2, we have, $r\cdot s = (p\cdot q)^2 $ ...(3) If the equality holds the LHS of the equation must yield the same result as equation 1, Now from equation 3, $ \sqrt {r\cdot s} = \sqrt{(p\cdot q)^2}$ $\sqrt{(p\cdot q)^2} = ((p\cdot q)^2)^{1/2}$ [as $\sqrt a = a^{1/2}$] Now from the power rule $(a^b)^c = a^{b\cdot c}$, it follows that $((p\cdot q)^2)^{1/2} = (p\cdot q)^{2/2}= p\cdot q$ .. (4) From equation (1) and equation (4). As both sides yield the same result, it is true that, $ \forall r, s \in \mathbb{R^+} \sqrt {r. s} = \sqrt r \cdot \sqrt s$ $\blacksquare$ I have one concern with this proof, by this same reasoning that I have followed, I can even prove the statement for $\forall r, s \in \mathbb{R^-}$, which is not true, so I must be wrong somewhere.
Since, as you say, ''you are new to writing proofs'', I think that you want some advice to improve your proof. The first step is to start with a good definition of the mathematical objects that you are using. In this case we need a definition of $\sqrt{a}$ for $a\in \mathbb{R}^+$. This definition is: $1) \qquad \sqrt{a}$ is a non negative real number $x\ge 0$ such that $x^2=a$ The second step is to well define what we want to prove, and in this case it is the identity: $\sqrt{ab}=\sqrt{a}\sqrt{b}$, where ( by the definition $1)$) we have: $\sqrt{a}=x\ge 0$ , $\sqrt{b}=y\ge 0$ and $\sqrt{ab}=z\ge 0$, such that $z^2=ab$, $y^2=b$ and $x^2=a$ (note that this exclude the possibility that $a,b$ (and $ab$) can be negative numbers). Now, for the proof, we can start from the LHS or from the RHS of this identity. From the LHS: $$ \sqrt{ab}=z \Rightarrow z^2=ab=x^2y^2 $$ So, using commutativity of the product in $\mathbb{R}$: $$ z^2=(xy)^2 $$ and, since $xy\ge 0$ we can write $$ xy=\sqrt{ab}=z $$ that is: $$ \sqrt{a}\sqrt{b}=\sqrt{ab} $$ Note that we don't need fractional exponents, but only the definition of the square root. in particular we did not use the rule $(a^b)^c=a^{bc}$ that is not valid, in general, for a fractional exponent. In your proof you started from the RHS, I leave to you the writing of the proof in this case.
Understanding of connection between $\operatorname{Spec}k[x_1,\dots,x_n]$ and $k[x_1,\dots,x_n]$ Let $\mathbb{A}^n_k$ be defined as the set $\{(a_1,\dots,a_n):a_i\in k\}$. There are standard maps $Z$ and $I$. The former assigns to an ideal its zero set and the latter assigns to a subset of $\mathbb{A}^n_k$ the ideal of the subset, i.e., the set of all polynomials which vanish at all points of that subset. If $k$ is algebraically closed, then the Nullstellensatz says that $I(Z(J))=\sqrt{J}$. What if we replace $\mathbb{A}^n_k$ with $\operatorname{Spec}k[x_1,\dots,x_n]$? Then the map $Z$ can be replaced with another map $Z$ which assigns to an ideal $I$ the set of all prime ideals in $k[x_1,\dots,x_n]$ containing $I$, $Z(I)$. (And just like the sets $Z(I)$ in the previous paragraph are the closed sets for the Zariski topology on $\mathbb{A}^n$, the sets $Z(I)$ that I have just described, are the closed sets for the Zariski topology on $\operatorname{Spec}k[x_1,\dots,x_n]$.) But what about the map $I$ from the previous paragraph? Does it have an analogue in this case? That is, does there exist a map $I: \operatorname{Spec}k[x_1,\dots,x_n]\rightarrow k[x_1,\dots,x_n]$? How it is defined if so? Further, is it true that these new $I$ and $Z$ also satisfy $I(Z(J))=\sqrt{J}$? If so, does it follow from the analogous fact above?
Yes, for $X\subset\operatorname{Spec}k[x_1,\dots,x_n]$, you take $$I(X)=\bigcap_{\mathfrak p\in X}\mathfrak p.$$ In particular, when $X=Z(J)$, we see $I(Z(J))$ is the intersection of all prime ideals containing $J$, and if you know some commutative algebra then it's just about trivial that $I(Z(J))=\sqrt J$.
If $x^2 + y^2 + z^2 + 2xyz = 1$ then $x+ y+ z \leq \frac{3}{2}?$ True or false? If $x \geq 0, y \geq 0, z \geq 0 $ and $x^2 + y^2 + z^2 + 2xyz = 1$ then $x+ y+ z \leq \frac{3}{2}.$ I want to know if there is a way to demonstrate this conditional inequality. I know I can make a connection with two properties known in a triangle. I tried to find an algebraic demonstration of these problems and we did. Thanks in advance for any suggestions.
Let $(a,b,c)=(\lambda x,\lambda y,\lambda z)$, where $\lambda(x+y+z)=\frac32$. Then $a+b+c=\frac32$, and we have $$a^2+b^2+c^2+2abc=\lambda^2(x^2+y^2+z^2+2\lambda xyz),$$ which is increasing in $\lambda$. Hence to show $\lambda\geq1$ it suffices to prove $$a+b+c=\frac32\implies a^2+b^2+c^2+2abc\geq1.$$ We now use the notation for cyclic sums $\sum$ and symmetric sums $\sum_\text{sym}$. Note that $(\sum a)^2=\sum a^2+2\sum ab=\frac94$, so \begin{align*} \sum a^2+2\sum abc&\geq1\\ \iff\frac94\left(\sum a^2+2\sum abc\right)&\geq\sum a^2+2\sum ab\\ \iff18abc&\geq8\sum ab-5\sum a^2\\ \iff27abc&\geq\left(\sum a\right)\left(8\sum ab-5\sum a^2\right)\\ &=8\sum_\text{sym}a^2b+24abc-5\sum a^3-5\sum_\text{sym}a^2b\\ \iff5\sum a^3+3abc&\geq3\sum_\text{sym}a^2b. \end{align*} The last inequality follows from AM-GM ($\sum a^3\geq3abc$) and Schur's inequality ($\sum a^3+3abc\geq3\sum_\text{sym}a^2b$): $$5\sum a^3+3abc\geq3\left(\sum a^3+3abc\right)\geq3\sum_\text{sym}a^2b,$$ and we are done.
Why $1\equiv a^{p-1} \mod p$? Let $\mathbb{Z}_p^*$, where $p$ is prime and let $a\in\mathbb{Z}_p^*$. Consider the following equation:$$(p-1)! \equiv (p-1)! a^{p-1} \mod p$$ I've read that since $\gcd((p-1)!, p) = 1$ we can infer that $$a^{p-1} \equiv 1$$ So I have two questions: * *Why is it true that $\gcd ((p-1)!, p)= 1$? *Why can we infer that $a^{p-1} \equiv 1$?
If you write down $(p-1)!$ as $(p-1)\cdot(p-2)\cdots 1$, you can notice that, since $p$ is prime, it divides none of the factors $1,2,\dots,p-1$, so $p$ and $(p-1)!$ have no common prime factor. Hence $\gcd((p-1)!,p) = 1$. Now use the cancellation law: $$ca \equiv cb \pmod m \implies a \equiv b \pmod{\tfrac{m}{\gcd(c,m)}}.$$ Applying it with $c=(p-1)!$ and $m=p$, where $\gcd((p-1)!,p)=1$, cancels the factor $(p-1)!$ on both sides and gives $a^{p-1} \equiv 1 \pmod p$.
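As a quick computational sanity check of that conclusion (a minimal Python sketch, not part of the argument; the primes tested are arbitrary):

```python
# Check a^(p-1) ≡ 1 (mod p) for every a in Z_p^* for a few small primes.
for p in [5, 7, 11, 13, 29]:
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p)), p
print("a^(p-1) ≡ 1 (mod p) held for all tested primes")
```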
Beverton-Holt population dynamic model I have the model: $$x_{n+1} =\frac{rx_n}{1+x_n}$$ where $r$ is a positive constant. Using the transformation $y=\frac{1}{x_n}$ show that $y_{n+1}$ is a linear function of $y_n$ and find $x_n$ in terms of $n$. Can I use that $y_{n+1}=\frac{1}{x_{n+1}}$?
$$ y_{n+1} = \frac{1}{r}(y_n+1) $$ has a simple closed form: set $y_n=z_n+\frac{1}{r-1}$. The recurrence takes the form $z_{n+1}=\frac{1}{r}z_n$, so: $$ z_n = \frac{z_0}{r^n},\qquad y_n=\frac{z_0}{r^n}+\frac{1}{r-1},\qquad x_n=\frac{1}{\frac{z_0}{r^n}+\frac{1}{r-1}} $$ and at last $$ x_n = \color{red}{\frac{(r-1)r^n x_0}{(r-1)+(r^n-1)x_0}}.$$
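A quick numerical comparison of the closed form with direct iteration (a minimal Python sketch; the values of $r$ and $x_0$ are arbitrary test values, and $r\neq1$ is assumed so that the shift $\frac{1}{r-1}$ is defined):

```python
# Compare iterating x_{n+1} = r*x_n/(1+x_n) with the closed form
# x_n = (r-1)*r^n*x_0 / ((r-1) + (r^n - 1)*x_0).
r, x0 = 2.5, 0.3   # arbitrary test values, r != 1

x = x0
for n in range(1, 11):
    x = r * x / (1 + x)
    closed = (r - 1) * r**n * x0 / ((r - 1) + (r**n - 1) * x0)
    assert abs(x - closed) < 1e-9, (n, x, closed)
print("iteration and closed form agree")
```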
either $y_1(x)=y_2(x)$ or $\{x\in[a,b] \mid y_1(x)=y_2(x) \}$ is finite Let $y_1$ and $y_2$ be two solutions of $y''+P(x)y'+Q(x)y=R(x)$ on $[a,b]$, where $P$, $Q$, $R$ are continuous functions on $[a,b]$. Prove that either $y_1=y_2$ or the set $\{x\in[a,b] \mid y_1(x)=y_2(x) \}$ is finite. Here is what I have tried. Set $g(x)=y_1(x)-y_2(x)$; then the differential equation becomes $$g''(x)+P(x)g'(x)+Q(x)g(x)= 0 \tag 1 $$ and the question is equivalent to showing: when $g(x)$ is a solution to the differential equation (1), then either $g(x)=0$ for all $x\in[a,b]$ or $g(x)=0$ for only finitely many $x\in[a,b]$. Now $g(x)=0$ for all $x\in [a,b]$ is the trivial solution to (1). So we must show that if $g(x)\neq 0$ for some $x\in [a,b]$, then there are only finitely many $x\in[a,b]$ satisfying $g(x)=0$. Assume to the contrary that $g(x)=0$ for infinitely many values $x$. As $[a,b]$ is compact, there exists a sequence $x_n \in [a,b]$ such that $\lim x_n= \alpha$, $x_n\neq\alpha$ and $g(x_n)=0$. Hence $g(\alpha)=\lim(g(x_n))=\lim(0)=0$. We also have $g'(\alpha)=\lim \frac{g(x_n)-g(\alpha)}{x_n-\alpha} = \lim \frac{0-0}{x_n-\alpha }=0$. From differential equation (1) we have $g''(\alpha)=0$. I have no other idea more useful than this. Please help me with a hint.
You got $g(\alpha)=0$ and $g'(\alpha)=0$. With these initial values and a homogeneous linear differential equation, you can only get the zero solution: by the uniqueness theorem for linear ODEs with continuous coefficients, the initial value problem $g''+P(x)g'+Q(x)g=0$, $g(\alpha)=g'(\alpha)=0$ has exactly one solution on $[a,b]$, namely $g\equiv 0$, which contradicts $g(x)\neq 0$ for some $x$.
Optimal Score in A Game You are playing a game with a deck of $r$ red cards and $b$ blue cards. You are also given $p$ red tokens and $r+b-p$ blue tokens ($0 \le p \le r+b$). You will now play a game with $r+b$ turns. On the $i$-th turn, you choose one of your tokens, then draw a card from the deck uniformly at random. You earn a point if the color of the token matches the color of the card; otherwise, your score doesn't change. Afterwards, both the token and the card are discarded. Compute the expected value of your score, assuming you play optimally. I found that the optimal expected score is $\frac{pr + (r+b-p)b}{r+b}$. Can anyone explain the logic behind this?
Let's recap: $R$ red cards, $B$ blue cards, $P$ red tokens, $Q = R+B-P$ blue tokens. Turn 1: Playing optimally leads you to choose a token of the same color as the largest set of cards. For example, if $R\ge B$, meaning there are more red cards, then you would choose the red token and your chances of winning would be: $$ P(\text{Win turn 1})= {\text{number of red cards}\over \text{total number of cards}} = {R\over R+B} $$ If we generalize, at the $n$-th turn your chances of winning are: $$ P(\text{Win turn n})= {\text{biggest stack of colored cards}\over \text{total number of cards}} = {\max(R_n,B_n)\over R_n+B_n} $$ where $R_n$ and $B_n$ are respectively the number of remaining red and blue cards in the deck at the $n$-th turn. At some point you'll run out of options for the tokens; say you end up with only blue tokens, then your chances would be: $$P(\text{Win turn n})={B_n\over R_n+B_n}$$ Now you have to compute the expectation, but I can't help you with that, since I'm still learning Bernoulli trials. Good luck!
Finding functions with the given properties In the following questions I am trying to find a function with the given properties or explain why no such function exists. 1) An infinitely differentiable function on $R$ with the Taylor series that only converges on $(-1,1)$ For this I have chosen the geometric series, $$\frac{1}{1-x}=1+x+x^2+x^3+...=\sum_{n=0}^{\infty}x^n$$ So this series is a geometric series and only converges when $|r|<1$. Thus, converges only for $(-1,1)$ but is this series infinitely differentiable? 2) An inifinitely differentiable function on $R$ with a Taylor Series that only converges for $x\le0$ Would the following series work? $$\sum _{n=0}^{\infty}\frac{(-x)^{n}}{n}$$
* *No. That function fails to be differentiable at $x = 1$. But you're on the right track. The radius of convergence of the power series equals the distance to the nearest singularity in the complex plane. But to be differentiable on $\mathbb R$, the function can't have any singularities for $x \in \mathbb R$. How do we reconcile this? *No. Aside from the fact that, as written, the first term is 1/0, if we start from $n=1$ that is the series for $-\ln(1+x)$, which converges only for $-1 < x \le 1$; and $-\ln(1+x)$ blows up at $x=-1$, so it is not the Taylor series of a function that is infinitely differentiable on all of $\mathbb R$. The answer to this question depends on whether we require the power series to be about 0 and whether we need it to converge for all $x \le 0$ or just for no values $x > 0$. It's also worth noting that there exist infinitely differentiable functions with a convergent power series at $x = 0$, but that series only converges to the value of the function for $x \le 0$.
Finding the closed form for a recurrence relation I'm having trouble finding a closed form for a geometric recurrence relation where the term being recursively multiplied is of the form (x+a) instead of just (x). Here's the recursive sequence: $a_{n} = 4a_{n-1} + 5$ for $n \geq 1$ with the initial condition $a_{0} = 2$. I know that in general the way to solve these problems is to start by writing out all of the arithmetic for the first few values of $a_{n}$, starting with the initial condition. Here's what I have: $a_{0} = 2$ $a_{1} = 4 (2) + 5 \equiv ((2)(2)(2) + 5)$ $a_{2} = 4(4(2)+5)+5 \equiv (2)(2)((2)(2)(2)+5)+5$ $a_{3} = 4(4(4(2)+5)+5)+5 \equiv (2)(2)((2)(2)((2)(2)(2)+5)+5)+5$ $\ldots$ etc. So at this point it's pretty clear to me that $a_{n} = 2^{2n + 1} + something$ My problem is figuring out how to account for all of those 5's. Especially since the first 5 is being multiplied by $2^{3}$ and all of the other 5's are being multiplied by $2^{2}$. I guessed something like this: $a_{n} = 2^{2n+1} + 5(4^{n-1}) + 5^{n-1}$ $\ldots$ and the results were close, but not exact. Can anyone help me out with the correct method for solving these types of problems? Thanks very much for your time.
Finish distributing the terms: $$\begin{array} {rll} a_0 &= 2 \\ a_1 &= 4\cdot 2 + 5 \\ a_2 &= 4\cdot (4\cdot 2 + 5) + 5 &= 4^2\cdot 2 + 4\cdot 5 + 5 \\ a_3 &= 4 \cdot (4^2\cdot 2 + 4\cdot 5 + 5) + 5 &= 4^3 \cdot 2 + 4^2\cdot 5 + 4 \cdot 5 + 5 \\ a_4 &= 4 \cdot (4^3 \cdot 2 + 4^2\cdot 5 + 4 \cdot 5 + 5) + 5 &= 4^4 \cdot 2 + 4^3 \cdot 5 + 4^2\cdot 5 + 4 \cdot 5 + 5 \\ \end{array}$$ Do you see the geometric series? (Factor out the 5.)
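Carrying that pattern out gives $a_n = 2\cdot4^n + 5\,(4^{n-1}+\dots+4+1) = 2\cdot 4^n + 5\cdot\frac{4^n-1}{3}$. A minimal Python sketch checking this completion of the hint against the recursion (the closed form here is my own summing of the geometric series, not part of the original answer):

```python
# a_n = 4*a_{n-1} + 5 with a_0 = 2, versus a_n = 2*4^n + 5*(4^n - 1)/3.
a = 2
for n in range(1, 15):
    a = 4 * a + 5
    closed = 2 * 4**n + 5 * (4**n - 1) // 3
    assert a == closed, (n, a, closed)
print("closed form matches the recursion")
```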
Fastest way to integrate $\frac{1}{t(t^6-1)}$ My friend mentioned to me that there is a very quick way to integrate $\frac{1}{t(t^6-1)}$. The standard method for dealing with integrals of rational functions by partial fraction decomposition is long (but doable) as there are 5 terms to integrate. After playing around for some time I still don't see the fast way of doing this integral. Am I missing something obvious?
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int{\dd t \over t\pars{t^{6} - 1}} & \,\,\,\stackrel{t\ =\ x^{-2}}{=}\,\,\, 2\int{x^{11} \over x^{12} - 1}\,\dd x = {1 \over 6}\,\ln\pars{x^{12} - 1} = {1 \over 6}\,\ln\pars{{1 \over t^{6}} - 1} \\[5mm] & = \bbx{\ds{{1 \over 6}\,\ln\pars{1 - t^{6}} - \ln\pars{t} + \pars{~\mbox{a constant}~}}} \end{align}
Prove that $\frac1{n+1} < \int_n^{n+1}\frac{1}{x}\mathrm dx$. This is probably a basic calculus question, but I have absolutely no idea where to go with this question. Prove: $$ \dfrac{1}{n+1}<\int_{n}^{n+1}{\dfrac{1}{x}dx}=\ln(n+1)-\ln(n) $$ The antiderivatives were given as a hint I presume, but that seems basic enough and I'm not sure how it could be helpful. Edit: I'm going to put the answer in here as use for anyone who comes along in the future and can't figure it out. I'm putting it in spoiler blocks because you should really try it yourself. Proof: \begin{align} \dfrac{1}{n+1}<\dfrac{1}{x}<\dfrac{1}{n} \quad\textrm{for}\quad n<x<n+1 \end{align} \begin{align} \dfrac{1}{n+1}<\dfrac{1}{x} \implies \int_{n}^{n+1}{\dfrac{1}{n+1}dx}<\int_{n}^{n+1}{\dfrac{1}{x}dx} \end{align} \begin{align} \int_{n}^{n+1}{\dfrac{1}{n+1}} = {\dfrac{1}{n+1}x}\rvert{_{n}^{n+1}} = \dfrac{1}{n+1}(n+1-n)=\dfrac{1}{n+1}\end{align} \begin{align} \therefore\dfrac{1}{n+1}<\int_{n}^{n+1}{\dfrac{1}{x}dx} \end{align} Q.E.D.
Just note that $\tfrac{1}{x}$ is decreasing on $(0,\infty)$, so $$\int_{n}^{n+1}\frac{1}{x}dx > \int_{n}^{n+1}\frac{1}{n+1}dx.$$
Intuition difference between derivatives of $\exp (x) $ and $\log (x)$ It is well known that the exponential function $\exp (x)$ has the derivative $$\frac{d}{dx} \exp (x) = \exp (x).$$ However, its inverse, the (natural) logarithm, $\log (x)$, in fact changes after derivation: $$\frac{d}{dx} \log (x) = \frac{1}{x}. $$ I understand why this is correct, but this is not intuitive to me. Why does $\exp (x) $ not change after derivation, while $\log (x) $ does and they are just mirrored at $y = x $? EDIT: Thank you all for your answers, they really helped me understanding this!
The graphs of $f$ and $f^{-1}$ are reflections of each other across the line $y=x$. So if at the point $(k,j)$ the derivative of $f$ is $f'(k)$, then the derivative of $f^{-1}$ at $(j,k)$ will be ${f^{-1}}'(j)=\frac 1 {f'(k)}$. So if $f=\exp$ and $f^{-1}=\log$, we have that at the point $(k,j=\exp (k))$ we know $\exp'(k)=\exp(k)$. Therefore at the point $(j=\exp (k),k)$, we know $\log'(j)=\frac 1 {\exp (k)}=\frac 1j$. It is also interesting to note that by the chain rule, $f\circ f^{-1}(x)=x$, so $(f\circ f^{-1})'(x)=f'(f^{-1}(x)){f^{-1}}'(x)=1$ and therefore ${f^{-1}}'(x)=\frac 1 {f'(f^{-1}(x))}$. And it means $\log'(x)=\frac 1 {\exp' (\log (x))}= \frac 1 {\exp(\log (x))}= \frac 1x$. And if we want to be perverse: $\exp'(x)=\frac 1 {\log'(\exp (x))}=\frac 1 {1/\exp (x)}=\exp (x)$.
How to find $g'(x)$ when $g(x)=\int_{5x+1}^{x^2} \frac{\sin t}{t}dt$ How do I find $g'(x)$ when $$g(x)=\displaystyle\int\limits_{5x+1}^{x^2}\,\dfrac {\sin\,t}{t}\;dt$$ My attempt$$f(t)=\dfrac{\sin t}{t}$$ applying FTC$$g'(x)=\dfrac{\sin(x^2)^{x^2}}{x^2}-\dfrac{\sin 5x+1}{5x+1}$$ But I got the incorrect answer.
Here is a pretty failsafe method: Let $F$ be a primitive of your integrand $t\mapsto\sin(t)/t$. Then $$ g(x)=\int_{5x+1}^{x^2}\frac{\sin t}{t}\,dt=F(x^2)-F(5x+1). $$ Thus, by the chain rule, $$ \begin{aligned} g'(x)&=\frac{d}{dx}F(x^2)-\frac{d}{dx}F(5x+1)\\ &=F'(x^2)\cdot2x-F'(5x+1)\cdot 5\\ &=\frac{\sin(x^2)}{x^2}\cdot 2x-\frac{\sin(5x+1)}{5x+1}\cdot 5. \end{aligned} $$
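A quick numerical cross-check of that formula (a minimal Python sketch; it assumes SciPy is available, the test point $x_0=1.3$ is arbitrary, and both integration limits stay away from the integrand's singularity at $t=0$):

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    # g(x) = integral of sin(t)/t from 5x+1 to x^2, evaluated numerically
    val, _ = quad(lambda t: np.sin(t) / t, 5 * x + 1, x**2)
    return val

def g_prime(x):
    # the chain-rule formula from the answer
    return np.sin(x**2) / x**2 * 2 * x - np.sin(5 * x + 1) / (5 * x + 1) * 5

x0, h = 1.3, 1e-6
finite_diff = (g(x0 + h) - g(x0 - h)) / (2 * h)
print(finite_diff, g_prime(x0))   # the two values should agree to ~6 digits
```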
Why do all elementary functions have an elementary derivative? Considering many elementary functions have an antiderivative which is not elementary, why does this type of thing not also happen in differential calculus?
For the same reason that there are strict rules for grammar, but not for creating poems. Differentiation goes ‘downwards’ – in the direction of gravity, so to speak – whereas integration goes ‘upwards’ – against the pull of gravity, so to speak. The price of defeating gravity is ambiguity, which takes the form of a lack of strict rules. Notice that integration gives you a taste of ambiguity from the very start: changing the value of a function at a single point does not change the value of the integral.
$\ell_1$ norm is tagged as non-convex by Julia I have an optimization problem where the objective function is $$\text{minimize}_{P \in S_n} ~~ \| PXP'-Y \|_1$$ over the set of permutation matrices $S_n$. However, my solver in Julia (convex package) says the objective function is not DCP. Since it is the $\ell_1$ norm, I expect it to be convex. Can anyone explain me why it is not convex?
$$\begin{array}{ll} \text{minimize} & \| \mathrm P \mathrm X \mathrm P^{\top} - \mathrm Y \|_1\\ \text{subject to} & \mathrm P \in \mathbb P_n\end{array}$$ where $\mathbb P_n$ is the set of $n \times n$ permutation matrices. Since $\mathbb P_n$ is a discrete set, we have a discrete optimization problem, which is obviously non-convex. Instead, the feasible region could be the (convex) Birkhoff polytope $\mathbb B_n$, whose $n!$ extreme points are the $n!$ permutation matrices in $\mathbb P_n$. Dropping the constraint $\mathrm P \in \mathbb P_n$ and introducing a new matrix variable $\mathrm Q \in \mathbb R^{n \times n}$, we obtain $$\begin{array}{ll} \text{minimize} & \langle 1_n 1_n^{\top}, \mathrm Q \rangle \\ \text{subject to} & -\mathrm Q \leq \mathrm P \mathrm X \mathrm P^{\top} - \mathrm Y \leq \mathrm Q\end{array}$$ which is not a linear program (LP). It is a quadratically constrained linear program (QCLP). Is it even convex? What does Julia say?
How to find the values that make this interval enclose an integer Suppose I have an interval that looks like this: $\left[\frac{k}{\lfloor m \rfloor }, \frac{k}{\lfloor mr \rfloor}\right)$ $m$ and $r$ are positive real numbers, but are constants in this problem. Here is the question: Which integer values of $k$ make this interval include at least one integer? I have tried various ways to tackle this problem, but none have panned out. The thing is I have no idea where to even start. How does one solve problems like this?
You can think of this graphically. Consider the graphs of the functions $f(x) = \frac{1}{\lfloor m\rfloor} x$ and $g(x) = \frac{1}{\lfloor mr\rfloor} x$ in a Cartesian plane with horizontal coordinate $x.$ Now consider all of the points with integer coordinates that lie on or above the graph of $f(x)$ but below the graph of $g(x).$ The set of $k$ you are looking for is the set of $x$ coordinates of all those points. Clearly, for $k$ large enough that the difference $g(k) - f(k)$ is at least $1,$ the interval $\left[\frac{k}{\lfloor m \rfloor }, \frac{k}{\lfloor mr \rfloor}\right)$ contains at least one integer. You just have to check the finite number of values of $k$ for which $g(k) - f(k) < 1$ in order to find all remaining values of $k$ for which $\left[\frac{k}{\lfloor m \rfloor }, \frac{k}{\lfloor mr \rfloor}\right)$ contains at least one integer.
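A small brute-force sketch of that check (Python; the values of $m$ and $r$ are arbitrary test values, and it assumes $1 \le \lfloor mr\rfloor < \lfloor m\rfloor$ so the interval is nonempty and oriented as written):

```python
from fractions import Fraction
import math

m, r = 7.3, 0.6
a, b = math.floor(m), math.floor(m * r)   # here a = 7, b = 4

def contains_integer(k):
    lo, hi = Fraction(k, a), Fraction(k, b)   # the interval [lo, hi)
    return math.ceil(lo) < hi

# once hi - lo = k*(a-b)/(a*b) >= 1, i.e. k >= a*b/(a-b), the interval is long
# enough that it must contain an integer, so only smaller k need to be checked
k_threshold = math.ceil(Fraction(a * b, a - b))
small = [k for k in range(1, k_threshold) if contains_integer(k)]
print(small, "plus every k >=", k_threshold)
```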
index of a group and element of a center order are co-prime This is a weird question: Let $G$ be a group and $H$ its subgroup of index 5. $a \in Z(G)$ and $ord(a)=3$. Prove that $a\in H$. So my idea was to look at $\langle aH\rangle$, we know that $H \le \langle aH\rangle$ and that: $5=[G:H] = [G: \langle aH\rangle]\cdot [\langle aH\rangle:H]$ therefore since 5 is a prime we have the two situations: i) $[G:\langle aH\rangle]=5$ and $[\langle aH\rangle:H]=1$ - in which $H=\langle aH\rangle$ thus $a\in H$ ii) $[G:\langle aH\rangle]=1$ and $[\langle aH\rangle:H]=5$ - in which we get $G=\langle aH\rangle$ but I fail to see a contradition here. Will appreciate any help, thanks!
Indeed, you are very close. You already know that $[\langle aH\rangle : H] \mid 5$. Suppose for the sake of contradiction that $a \notin H$. Then the elements of $\langle aH\rangle$ are of the form $a^kh$ for $h\in H$. Since the order of $a$ is $3$, this means that the distinct cosets are $\{H, aH, a^2H\}$, so that $[\langle aH\rangle : H] = 3$, which contradicts the fact that $[\langle aH\rangle : H] \mid 5$.
A beginner's question on rational points suppose $k$ is a finite field, and $\widetilde{k}$ a finite extension of $k$ (say, of degree $n$). How does the set of $k$-rational points of $\mathrm{Spec}(\widetilde{k})$ look like ? (As a $k$-rational point corresponds to a morphism $\mathrm{Spec}(k) \mapsto \mathrm{Spec}(\widetilde{k})$, it also corresponds to a $k$-morphism $\widetilde{k} \mapsto k$, but field morphisms are always injective, right ? Or should one see the latter as a $k$-algebra morphism ?) Thanks !
Questions like this are often implicitly meant in the context of schemes over $\operatorname{Spec}(k)$. So, since both schemes are affine, this question is about $k$-algebra homomorphisms $\bar{k} \to k$ (where both rings are $k$-algebras in the suggested way). There are none. (unless $n=1$, in which case there is just the one) And, for the sake of precision, the above is meant with the convention that $k$-algebra homomorphisms are also ring homomorphisms — that is, they preserve $1$. But if you really did mean general schemes, a.k.a. schemes over $\operatorname{Spec} \mathbb{Z}$, $\operatorname{Spec}(\bar{k})$ can have $k$-points — that is, there can be ring homomorphisms $\bar{k} \to k$. For example, if $k = F(x^2)$ and $\bar{k} = F(x)$ (with the field extension given by inclusion), there is a homomorphism $\bar{k} \to k$ sending $x \mapsto x^2$. (Note that this example leaves the finite-field setting of the question: for a proper extension of finite fields, injectivity of field homomorphisms rules out any ring map $\bar{k}\to k$ at all.)
Baire Category Theorem in weak topology We say that a topological space is Baire if intersection of open dense sets is dense. Suppose that we know that a certain space is Baire in a given topology $\tau$. Can we determine whether this space will still be Baire in a weaker topology $\sigma \subset \tau$? What I attempted: Sets which are open in weak topology are also open in the strong topology, however sets that are dense in the weak topology may not be dense in the strong topology. This makes it difficult to make any conclusions about the Baire property.
In general, the weaker topology need not be Baire. Let $H$ be an infinite-dimensional Hilbert space. In the norm topology $\tau$, $H$ is a complete metric space and hence Baire. Let $\sigma$ be the usual weak topology on $H$ (i.e. the weakest topology in which each map $x \mapsto \langle x,y \rangle$ is continuous). It's a standard fact that any nonempty open set in $\sigma$ is unbounded with respect to the norm. So every closed norm-ball in $H$ is closed and nowhere dense in $\sigma$. Since $H$ is a countable union of closed balls, $\sigma$ is not Baire.
How do I find the probability of one correct guess when matching elements? If I have 6 pictures of people and 6 names. I have to match the names to the people and getting just one right results in a victory. I have no knowledge or clues to figure out which name belongs to which person, and I must use each name once. I know that my chances are not 1 in (6!) since that is the chance that I get all of them correct, I only need to get one correct. I know my chances are greater than 1 in 6 since I have a 1 in 6 chance at the beginning to meet a win condition instantly, and if I fail then I have more chances to meet the win condition. Trying to work this out on my own, I know the first name has 1 in 6 chance to be correct when placed to a face. After it is placed, there are two possibilities, either it is correct (which is victory and I no longer need to worry) or it is not and I must continue. In which case I move onto the next name, which has a 1 in 6 chance the face it belongs to is already taken, and 5 in 6 it wasn't already taken. This is where I get stuck and I am not sure how to continue. How do I calculate this probability?
There are $\binom{6}{1}$ ways to match one of the six names to the correct picture and $5!$ ways to assign the remaining names. However, this over counts, because some of the remaining names may be assigned to the correct picture(s). For instance, we have counted each arrangement in which two names are assigned to the correct picture twice, once for each match. However, we only want to count these arrangements once, so we must subtract these arrangements from the total. There are $\binom{6}{2}$ ways to match two of the six names to the correct picture and $4!$ ways to assign the remaining names. However, we have subtracted too much since some of these arrangements have more than two matches. In general, there are $\binom{6}{k}$ ways to match $k$ of the names to a picture and $(6 - k)!$ ways to assign the remaining names. By the Inclusion-Exclusion Principle, the number of ways the names could be assigned to the pictures so that at least one of the names matches the picture is $$\binom{6}{1}5! - \binom{6}{2}4! + \binom{6}{3}3! - \binom{6}{4}2! + \binom{6}{5}1! - \binom{6}{6}0!$$ Dividing this number by the $6!$ possible ways to assign names to pictures gives the probability that at least one name has been correctly matched. In connection to this problem, you may want to read about derangements, permutations which leave no object in its original position.
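A minimal Python sketch that evaluates the inclusion-exclusion sum and cross-checks it by enumerating all $6!$ assignments (nothing here is specific to the problem beyond $n=6$):

```python
from itertools import permutations
from math import comb, factorial

n = 6
# inclusion-exclusion count of assignments with at least one correct match
ie = sum((-1) ** (k + 1) * comb(n, k) * factorial(n - k) for k in range(1, n + 1))

# brute-force cross-check: permutations with at least one fixed point
brute = sum(any(p[i] == i for i in range(n)) for p in permutations(range(n)))

print(ie, brute, ie / factorial(n))   # 455 455 0.6319...
```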
Bounding OR-type coordinates between $n$ Boolean vectors in a ball of radius $r$ Say I have set of $m$ Boolean vectors $$B = \{x_1,\ldots, x_m\}$$ where $x_i = [x_{i,1},\ldots, x_{i,n}] \in \{0,1\}^n$ for all $x_i \in B$. I know the following about the vectors $x_i \in B$: (i) $|x_i| \in [1,n-1]$ for all $x_i \in B$ (at least 1 zero coordinate, and one non-zero coordinate) (ii) $d(x_i, x_j) \geq 1$ for all $x_i, x_j \in B$ (all vectors are distinct) (iii) $d(x_i, x_j) \leq r$ for all $x_i, x_j \in B$ (*all vectors differ by at most $r$ coordinates) Here, $d(x_i, x_j) = |x_i - x_j| = \sum_{k=1}^{k=n} 1[x_{i,k} \neq x_{j,k}]$. My question is the following: Given a vector $x_i \in B$, what is maximum number of coordinates where $x_i$ is zero, but at least one of the other vectors in $B$ is non-zero? That is, what is the largest possible size of the following set: $$Z(x_i) = \left\{k =1,\ldots, n ~\Big|~ x_{i,k} = 0 ~\text{and}~ \sum_{j \neq i} x_{j,k} \geq 1 \right\} $$ Partial answer: I can derive a non-trivial bound using only (iii) as shown below. I am not sure if this bound is tight, or whether it can be improved. In particular, my bound does not account for (i) and (ii). To get the bound, first define: $$S_{01}(x_i,x_j) = \{k ~|~ x_{i,k} = 0, x_{i,j} = 1\}$$ and note that: $$|S_{01}(x_i,x_j)| = \frac{d(x_i,x_j) + |x_j| - |x_i|}{2}$$ Since $\sum_{j \neq i} x_{j,k} \geq 1$ requires that $x_{j,k} = 1$ for at least one $j \neq i$, we have the following upper bound: $$\begin{align} |Z(x_i)| &\leq \max_{j \neq i} |S_{01}(x_i,x_j)| \\ &=\max_{j \neq i} \frac{d(x_i,x_j) + |x_j| - |x_i|}{2} \\ &\leq \frac{r - |x_i| + \max_{j \neq i} |x_j|}{2} \end{align}$$
If I didn't misunderstand the question, the following set is a counterexample to your bound with $r=4,|x_1|=1,|x_j|=2,2\leq j\leq 7$ and $Z(x_1)=4.$ $$ \begin{array}{lr} x_1 & 10000\\ x_2 & 01100\\ x_3 & 01010\\ x_4 & 01001\\ x_5 & 00110\\ x_6 & 00101\\ x_7 & 00011 \end{array} $$
Is this limit valid/defined and if so, what is the value of it? I was wondering if the following limit is even defined/valid; if it makes any sense. If so, what is the value of it? If not, why is it not defined/valid? Define $n$: $$ab=n$$ for $ a\rightarrow \infty$ and $ b\rightarrow 0$ Sure, this might be a weird question due to its perhaps philosophical nature. Please keep in mind that I am not yet that good at maths, so I would greatly appreciate an "understandable" answer. Edit: I have read some of your answers. Does the following clarification change anything? Let $a,b\in$ R There is no "relation" between the two variables, one cannot be expressed using the other (such as $a=1/b$) To rephrase the question: If two variables approach infinity and zero respectively, what would their product be (if it can be determined)?
The limit is undefined because it isn't specified at what rate $a$ and $b$ are approaching $\infty$ and $0$ relative to each other. For example, if you would define $b$ as $1\over a$,then as $a \rightarrow \infty$ and $b \rightarrow 0$, $n \rightarrow 1$ However, if you would define $b$ as $1\over a^2$, then as $a \rightarrow \infty$ and $b \rightarrow 0$, $n \rightarrow 0$ Since the relation between $a$ and $b$ is not given, the limit remains undefined. If as you mention in the edit there is no relation between $a$ and $b$, then there would be no limit for $n$. If $a$ would grow faster proportionally to $b$ getting smaller, $n$ would get larger, and if $b$ would shrink faster, then $n$ would get smaller. Since $n$ is not closing in on a certain number (or $\pm \infty$), $n$ does not have a limit.
Interpretation of sums using $\cdots$ Consider the sum $$\sum_{1\le k_1 < k_2 < \cdots < k_r \le n}k_1k_2\ldots k_r$$ Does this simply mean $$\sum_{\substack{|K|=r}\\\inf(K)\ge 1\\\sup(K)\le n}\prod_{k\in K} k$$ I am specifically worried about the situation when $n=r=0$. In the second notation, this is clearly 1, but I'm not sure if it is 0 or 1 in the first notation.
I set out to write an answer that said that clearly the first notation gives $1$ as well but now I am not so sure anymore. You are essentially defining the set $I = \{\,(k_1, \dots, k_r)\,\vert\,1 \leq k_1 < \dots < k_r \leq n\,\}$. I see three sensible ways to translate the condition $1 \leq k_1 < \dots < k_r \leq n$. * *You can first split it into conditions for each $k_i$, i.e. $1 \leq k_1 < k_2$, $k_1 < k_2 < k_3$, $\cdots$, $k_{r-1} < k_r \leq n$ (and then split each of those into two inequalities). If you proceed like this, you don’t get any conditions in case $r = 0$, thus leaving you with the one-element set $\{()\}$ and a value of $1$. *Alternatively, you translate the condition directly as $1 \leq k_1$, $k_1 < k_2$, $\cdots$, $k_r \leq n$ which would give you $1 \leq n$ in case that $r = 0$ and thus $I$ would be empty if $n = 0$ as well. *You might also translate the condition as $1 \leq k_i$, $k_i \leq n$ for each $i$ and $k_1 < k_2$, $\cdots$, $k_{r-1} < k_r$. Treating the to relations on the outside differently doesn’t seem completely ridiculous, after all, it is a different relation. This would also give the value $1$. It seems to me that the third interpretation is the correct one, but I really can’t give a better reason than that it agrees with your second notation which seems reasonable. If you want to use the first notation and the case $r = n = 0$ is important, you should probably mention which value you intend the sum to denote.
Prove that $f'(0)$ does not exist for the given function $f(x)$ If we want to show this we must show that $f$ is not differentiable at $x=0$. The function is defined as follows: $$f(x)= \begin{cases} x\sin{\frac{1}{x}}, & \text{if $x\ne0$} \\ 0, &\text{if $x=0$} \end{cases} $$ which is a piecewise function. I say $f'(0)$ is not defined because $\lim_{x \to 0} f(x) = 1$ but $f(0) = 0$ is not the same value, so $f'(0)$ DNE, so $f$ is not differentiable at $0$, but my professor say this is completely bad! I say $\lim_{x \to 0} x\sin(\frac{1}{x}) = \lim_{x \to 0} \dfrac{\sin(\frac{1}{x})}{\frac{1}{x}} = 1$. But I think this is not right and I don't know how to show this does not exist!
Since $-1 \le \sin\frac1x \le 1$ for every $x\neq 0$, we have $\left|x\sin\frac1x\right| \le |x|$, that is, $-|x| \le x\sin\frac1x \le |x|$. Because $\lim_{x\to 0}|x| = \lim_{x\to 0}\left(-|x|\right) = 0$, the sandwich (squeeze) theorem gives $$\lim_{x\to 0} x\sin\frac1x = 0,$$ not $1$ as computed in the question.
If $a$ and $b$ are relatively prime integers then prove that ($a$ ,$b^2$) =1 From the title I've stucked in this question for half an hour. Could anyone help me?
We have $$ (a,b)=1 \iff ax+by=1 \text{ for some integer $x,y$} $$ Then $$ (ax+by)^2=1 \implies a(ax^2+2bxy)+b^2 y^2=1 \implies (a,b^2)=1. $$
What would be good example of antisymmetry for this relation? We have set $X=\{a,b,c,d\}$ and relation $R=\{(a,a),(a,b),(a,c),(a,d),(b,b),(c,c),(d,b),(d,c),(d,d)\}$. It is obvious that it is not symmetric and I suppose that it is antisymmetric but I can't come up with some good example to show it. I know the rule of antisymmetry $xRy \wedge yRx \rightarrow x = y$, but I am not sure how to apply it on this relation.
You can't “give an example” for antisymmetry. You could give an example for failure of antisymmetry. How can you show this is antisymmetric? Let's stretch it a bit and suppose we have also proved it is transitive. Then it should be an order relation and the pairs tell us that * *$a$ precedes $b$ *$a$ precedes $c$ *$a$ precedes $d$ *$d$ precedes $b$ *$d$ precedes $c$ and no other relation between distinct elements holds. Now we can see that all pairs necessary for the order relation above are indeed present. So the relation $R$ is indeed antisymmetric and transitive. The list above is exactly what the Hasse diagram of this order records.
Special reptiles - repeating shapes and fractals Let me first explain what a reptile is. A reptile is a two-dimensional object, a shape, that can be dissected into smaller, equally sized copies of the same shape. To illustrate this, see here a couple of reptiles: A shape is called an $n$-reptile, or $n$-rep for short, if the shape can be dissected into $n$ smaller, equally sized copies. So above you see a $2$-rep (recognize the shape? Yup, it's A4 paper! Could be any of the A-series in fact) a $3$-rep (a Sierpinski triangle) and a random $4$-rep. Now obviously, if a shape is an $n$-rep, it's also a $n^2$-rep; simply dissect it once in $n$ pieces, and dissect every piece you made again in $n$ pieces. One can repeat this pattern to see that in fact, if a shape is an $n$-rep, then it must be an $n^k$-rep for integer $k\geq 1$. These dissections in $n^k$ pieces aren't very interesting though, since they're all based on the base case with $n$ pieces. This got me thinking, and so here's my question: Does there exist an $n$-reptile that is simultaneously an $m$-reptile with $n$ and $m$ coprime?
The Viper substitution tiling divides a particular triangle into 9 similar sub-triangles without reflection, in a non-trivial way. The same triangle can also be divided into 4 similar sub-triangles without reflection, see the top half of the triangle in the rule image on that page.
Calculating the derivative $\frac{\partial \mathbf{X}^{-1}}{\partial \mathbf{X}}$ How do I calculate the derivative $\frac{\partial \mathbf{X}^{-1}}{\partial \mathbf{X}}$, where $\mathbf{X}$ is a square matrix? Thanks a lot for your help!
First note that $\frac{\partial F}{\partial X}$ is a 4th order tensor. Let $F=X^{-1},\,$ then working out the problem in index notation yields $$\eqalign{ dF_{ij} &= -F_{ik}\,dX_{kl}\,F_{lj} \cr\cr \frac{\partial F_{ij}}{\partial X_{kl}} &= -F_{ik}\,F_{lj} \cr\cr }$$
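A numerical spot-check of that index formula (a minimal NumPy sketch; the test matrix and step size are arbitrary, and the matrix is shifted to be comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
F = np.linalg.inv(X)

# analytic fourth-order tensor:  dF_ij/dX_kl = -F_ik * F_lj
analytic = -np.einsum('ik,lj->ijkl', F, F)

# central finite differences in each entry of X
h = 1e-6
numeric = np.zeros((n, n, n, n))
for k in range(n):
    for l in range(n):
        E = np.zeros((n, n)); E[k, l] = h
        numeric[:, :, k, l] = (np.linalg.inv(X + E) - np.linalg.inv(X - E)) / (2 * h)

print(np.max(np.abs(numeric - analytic)))   # should be tiny, around 1e-9 or less
```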
Solve $\int \frac{1}{\sin ^{\frac{1}{2}}x \cos^{\frac{7}{2}}x}dx$ So it's given this indefinite integral $$\int \frac{1}{\sin ^{\frac{1}{2}}x \cos^{\frac{7}{2}}x}dx$$ Is there anyone could solve this integral? Thanks in advance.
Generalization: $$\int\dfrac{dx}{\sin^ax\cos^{2b-a}x}=\int\dfrac{(1+\tan^2x)^{b-1}}{\tan^ax}\sec^2x\ dx$$ Set $\tan x=u$. For the given integral, $a=\frac12$ and $2b-a=\frac72$, so $b=2$; the substitution turns it into $$\int\frac{1+u^2}{\sqrt u}\,du=2\sqrt u+\frac25u^{5/2}+C=2\sqrt{\tan x}+\frac25\tan^{5/2}x+C.$$
Find the last two digits of $47^{89}$ Find the last two digits of the number $47^{89}$ I applied the concept of cyclicity: $47\cdot 47^{88}$. I basically divided the power by $4$ and then calculated $7^4=2401$ and multiplied it with $47$, which gave me the answer $47$, but the actual answer is $67$. How?
$(50-3)^2\equiv10-1\pmod{100}$ $\implies47^{4n+1}=47(47^2)^{2n}\equiv47(10-1)^{2n}$ Now $\displaystyle(10-1)^{2n}=(-1+10)^{2n}\equiv1-\binom{2n}110\pmod{100}\equiv1+80n$ Here $4n+1=89\iff n=?$ Can you take it from here?
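A one-line Python check of both the intermediate step and the final answer (only a sanity check, not a replacement for the modular argument):

```python
# 47^2 ≡ 9 (mod 100) and the last two digits of 47^89
print(pow(47, 2, 100), pow(47, 89, 100))   # 9 67
```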
Derivative: matrix quadratic form when $\mathbf{B}$ is dependent on $\mathbf{m}$ I have the following equation: $$ f = \mathbf{m^{T}Bm} $$ where B is: $$ \mathbf{B} = \frac{1}{3(m_{x}^{2} + m_{y}^{2})}\begin{pmatrix} 1 & 0.5 & 1\\ 0.5 & 1 & 0.5\\ 1 & 1 & 0.5 \end{pmatrix} $$ and m is: $$ \mathbf{m} = \begin{pmatrix} m_{x}\\ m_{y}\\ m_{z} \end{pmatrix} $$ also there is another equation: $$ \mathbf{m} = a\mathbf{x} + b\mathbf{y} + c\mathbf{z} $$ UPDATE: I extract the equation from a more complicated one but here $m_x = a$, $m_y = b$ and $m_z = c$ My question is How can I calculate the derivative of f with respect to a ? Thanks very much.
Let $$\eqalign{ P &= \begin{bmatrix} 2 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 2 & 1 \end{bmatrix},\,\, &Q = \begin{bmatrix} 6 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 0 \end{bmatrix} \cr\cr m &= [\,a\,b\,c\,]^T \cr M &= mm^T,\,\,\, &f = \frac{P:M}{Q:M} \cr }$$ where $\,:\,$ represents the inner/Frobenius product, i.e. $A:B=\operatorname{tr}(A^TB)$ Find the differential of the function, then the gradient $$\eqalign{ df &= \frac{(Q:M)\,P:dM-(P:M)\,Q:dM}{(Q:M)^2} \cr &= \Bigg(\frac{(Q:M)\,P-(P:M)\,Q}{(Q:M)^2}\Bigg):dM \cr &= R:dM \cr &= R:(dm\,m^T+m\,dm^T) \cr &= R:dm\,m^T+R:m\,dm^T \cr &= R:dm\,m^T+R^T:dm\,m^T \cr &= (R+R^T)\,m:dm \cr \cr g=\frac{\partial f}{\partial m} &= (R+R^T)\,m \cr \cr }$$ The derivative wrt $a$ is the first component of the gradient $$\eqalign{\frac{\partial f}{\partial a} &= g_x \cr\cr\cr}$$ Update The gradient of the gradient can also be found. To make that calculation easier, let's use symmetric matrices $$\eqalign{ P &= \begin{bmatrix} 4 & 2 & 4 \\ 2 & 4 & 3 \\ 4 & 3 & 2 \end{bmatrix},\,\, &Q = \begin{bmatrix} 12 & 0 & 0 \\ 0 & 12 & 0 \\ 0 & 0 & 0 \end{bmatrix} }$$ This won't change the function value because the skew parts are annihilated by the inner product with the symmetric matrix $M$. This change makes $R$ symmetric, too. Let's also define some scalar coefficients to simplify the expression for $R$ $$\eqalign{ R &= \alpha P-\beta Q,\,\,\,&dR&= P\,d\alpha-Q\,d\beta \cr \alpha&=(Q:M)^{-1},\,\,\,&d\alpha&=-\alpha^2(Q:dM)=-2\alpha^2\,m^TQ\,dm \cr \beta &= \alpha^2(P:M) = \alpha f,\,\,\,&d\beta&=\alpha\,df + f\,d\alpha \cr &\,&\,&= \alpha g^T\,dm - 2f\alpha^2\,m^TQ\,dm \cr }$$ Now find the differential and gradient of $g$ $$\eqalign{ g &= 2Rm \cr\cr dg &= 2\,(R\,dm + dR\,m) \cr &= 2\,(R\,dm + Pm\,d\alpha - Qm\,d\beta) \cr &= 2\,(R\,dm + Pm\,d\alpha - Qmf\,d\alpha - Qm\alpha\,df) \cr &= 2\,(R\,dm + (Pm-fQm)\,d\alpha - \alpha Qmg^T\,dm) \cr &= 2\,(R\,dm - Rm\,\frac{d\alpha}{\alpha}- \alpha Qmg^T\,dm) \cr &= 2\,(R\,dm - Rm(2\alpha\,m^TQ\,dm) - 2\alpha Qmm^TR\,dm) \cr &= 2\,\big(R -2\alpha Rmm^TQ-2\alpha Qmm^TR\big)\,dm \cr &= 2\,\big(R -2\alpha RMQ -2\alpha QMR\big)\,dm \cr\cr \frac{\partial g}{\partial m} &= 2R - 4\alpha(RMQ+QMR) \cr\cr }$$ So that's the second derivative, but please double-check the algebra before trusting it. As a quick check, $(Q,M,R)$ are all symmetric matrices, therefore the second derivative is also a symmetric matrix -- as it should be.
Probability - three-headed dragon and three knights I wrote a test from probability today and totally screw it. There was a exercise about a three-headed dragon and three possible knights. Chosen knight must cut off at least two dragon heads to save princess. When he cut off zero or one head he failed. Every knight also use different weapon with different probability to cut head off. First knight uses a sword with constant probability $\frac{1}{2}$ to cut off one head in a single attempt. He has three attempts. Second knight uses an axe and has constant probability $\frac{1}{2}$ to hit the dragon. Also when he hits the dragon, he has $\frac{1}{2}$ change to cut off one head or $\frac{1}{2}$ to cut off two heads. He has two attempts. Third knight uses a bow. His first attempt has hit probability $\frac{1}{3}$, second attempt has probability $\frac{1}{2}$ and third attempt has probability $\frac{2}{3}$. Every successful attempt cuts off one head. He has three attempts. Which knight should be chosen and what chance to save the princess does each knight have? I would appreciate step by step solution from which I can learn. Thank you.
You will need to calculate the probability of each knight cutting off at least two heads. First Knight: Since he has three attempts and he cuts off either $1$ or $0$ heads per attempt, there are eight possible outcomes for the number of heads he cuts off in each of his three attempts. $000, 001, 010, 011, 100, 101, 110$, and $111$. Since he has a $50$% chance in each attempt, all eight possibilities are equally likely, and in four of them he cuts off at least two heads. Therefore his probability of success is $\frac{4}{8}=50$% Second Knight: There are three ways for the second knight to cut off at least two heads. Either he cuts off two heads on the first try, which has probability $25$% Or he cuts $1$ head on the first try and at least $1$ more on try two. This would have probability $(\frac{1}{4})(\frac{1}{2})=12.5$% Or he misses on try one and cuts off $2$ heads on try two, which has probability $(\frac{1}{2})(\frac{1}{4})=12.5$% Overall, his probability of success is $50$% Third Knight: The third knight also has three ways to cut off at least two heads. Either he cuts off a head on each of his first two attempts, which has probability $(\frac{1}{3})(\frac{1}{2})=16.67$% Or he hits, misses, and then hits, which has probability $(\frac{1}{3})(\frac{1}{2})(\frac{2}{3})=11.11$% Or he misses, then hits twice, which has probability $(\frac{2}{3})(\frac{1}{2})(\frac{2}{3})=22.22$% Overall, his probability of success is $50$% So it looks like it doesn't really matter which knight you choose because they all have a $50$% chance of succeeding. Although Knight 2 does have the best chance of getting all three heads.
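An exact enumeration of the three probabilities (a minimal Python sketch using fractions; it simply encodes the per-attempt distributions described above):

```python
from fractions import Fraction as F
from itertools import product

# Knight 1: three attempts, one head per hit, hit probability 1/2
k1 = F(sum(1 for o in product([0, 1], repeat=3) if sum(o) >= 2), 8)

# Knight 2: two attempts; each cuts 0 heads w.p. 1/2, 1 head w.p. 1/4, 2 heads w.p. 1/4
dist2 = {0: F(1, 2), 1: F(1, 4), 2: F(1, 4)}
k2 = sum(dist2[a] * dist2[b] for a, b in product(dist2, repeat=2) if a + b >= 2)

# Knight 3: hit probabilities 1/3, 1/2, 2/3, one head per hit
p = [F(1, 3), F(1, 2), F(2, 3)]
k3 = F(0)
for o in product([0, 1], repeat=3):
    if sum(o) >= 2:
        prob = F(1)
        for hit, pi in zip(o, p):
            prob *= pi if hit else 1 - pi
        k3 += prob

print(k1, k2, k3)   # 1/2 1/2 1/2
```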
Prove certain properties of the function $d_p(a,b)=p^{-v_p(a-b)}$ I was ill and didn't manage to attend a few classes on number theory. Now, I'm struggling to prove a theorem that was presented during the lectures I didn't addend and intentionally ,,left as an exercise'' by the lecturer: define function $d_p:\mathbb{Z}\times\mathbb{Z}\rightarrow \mathbb{R}$ such that $d_p(a,b)=p^{-v_p(a-b)}$, where $v_p$ is the $p$-adic exponent of $a - b$. Prove that the following properties are true: * *$d_p(a,b)=0\Leftrightarrow a-b=0$ *$d_p(a,b)\ge 0$ *$d_p(a,c)\le d_p(a,b)+d_p(b,c)$ *$d_p(a,b)=d_p(b,a)$ I'm sorry to say the only thing I thought of was writing numbers $a$ and $b$ in the following form: $a=p^{\alpha}\frac{m}{n}$ and $b=p^{\beta}\frac{z}{x}$. Then, I guess, we have: $$ a-b=p^{\alpha}\left(\frac{m}{n}-p^{\beta-\alpha}\frac{z}{x}\right)$$ but I don't know if it's of any use in proving any of the four properties. I'm not looking for a complete solution - I know you guys don't want to do all the work for me, but could somebody drop me at least some hint?
Let me address the properties in order. Note that you must also note that you're taking the convention that $p^{-\infty} = 0$, since $v_p(0) = \infty$ (if you have left $v_p(0)$ undefined, you'll need to separately define $d(a,a) = 0$ for any $a$). Here are some leading questions/suggestions that should help you out. Below, I'll say "$p$ divides $n$ $m$ times" to mean that $n = p^m n'$, where $(n',p) = 1$; in other words, I'm saying that $v_p(n) = m$. * *You essentially want to show that $v_p(x) = \infty$ if and only if $x = 0$. If $n$ is a nonzero number, can $p$ divide $n$ infinitely many times? *Can $p^x$ be negative for any real value of $x$? What if "$x = -\infty$"? *I would recommend proving something stronger: that $d_p(a,c)\leq \max(d_p(a,b),d_p(b,c))$. Translate this into a statement about the $v_p$'s: you should find that you want to show $v_p(a-c)\geq\min(v_p(a-b),v_p(b-c))$. Now, set $a - b = x$, $b - c = y$. Then this statement is equivalent to $$v_p(x + y)\geq\min(v_p(x), v_p(y)).$$ Since $x$ and $y$ are integers, you can write $x = p^{v_p(x)} x'$, $y = p^{v_p(y)} y'$ for some $x',y'\in\Bbb Z$ relatively prime to $p$. Now, consider how many times $p$ can divide their sum. First suppose that $v_p(x)\geq v_p(y)$ without loss of generality, and then use your thought above. *You know that $v_p(n)$ is the number of times $p$ divides $n$. How many times does $p$ divide $-n$?
In cartesian product, is the order of factors important? For example- $\{1,3\} \times \{1,2\}$. Here $4$ elements will be formed. So if i write $\{1,2\}$ before $\{1,3\}$, is there any difference?
Yes, the elements of a Cartesian product are ordered tuples. The Cartesian product is not commutative.   The order of its operation is significant. $$\rm \forall A\neq\emptyset~,\forall B\neq\emptyset\qquad A\neq B~\to~ A\times B\neq B\times A$$ As per your example: $\{1,3\}\times\{1,2\} = \{(1,1),(1,2),(3,1),(3,2)\}$ $\{1,2\}\times\{1,3\} = \{(1,1),(1,3),(2,1),(2,3)\}$ Since the order in the tuples is significant, these are not the same sets. If, however, you meant the order of the tuples within the product; the product is a set so that is not important. $\{(1,1),(1,2),(3,1),(3,2)\}=\{(1,1),(3,1),(1,2),(3,2)\}$ et cetera.
How do I show the following result for two commuting bounded idempotent linear operators on a normed space? I'm trying an old qualifier problem stating that for two commuting idempotent bounded linear operators $A$ and $B$ on a normed linear space, either $A=B$ or the operator norm of $A-B$ is at least one. I'm at a complete loss as to how to approach this. Playing round with norms doesn't seem to get me anywhere and I don't see what other theorem to employ.
From $A^2=A, B^2=B$ and $AB=BA$ we get $(A-B)^3=A-B$ . Let $P:=(A-B)^2$. Then $P^2=P$. Case 1: $P=0$, hence $A+B=2AB$. Multiplying with $A$ gives: $A+AB=2AB$, thus $A=AB$. We have the same result for $B$: $B=AB$. Hence $A=B$ Case 2: $P \ne 0$. Then: $||P||=||P^2|| \le ||P||^2$ and so $||P|| \ge 1$. Consequence: $||(A-B)^2|| \ge 1. $ This gives $1 \le ||(A-B)^2|| \le ||A-B||^2$. Therefore $||A-B|| \ge 1$
What's your explanation of the Raven Paradox? The Raven Paradox starts with the following statement (1) All ravens are black. which is equivalent to the following statement (2) Everything that is not black is not a raven. In all the circumstances where statement (2) is true, (1) is also true. And, if (2) is false, i.e., if we find an evidence against it, then (1) will also be false. Now, whenever we see a Black Raven, we see an evidence which supports the statement 'All Ravens are Black'. So, if we see more and more black Raven, then our belief gets stronger and stronger that all Ravens are black. But since the statements (1) and (2) are equivalent, so collecting evidence supporting statement (2) is also an evidence that all Ravens are black. So, if we see , for example, a red apple, then it's an evidence supporting that 'All Ravens are Black'. It's because 'A red apple' is neither black (because it is red) nor is it a Raven (because it's an Apple. Apples can't be Ravens, can they?). This conclusion seems paradoxical, because it implies that information has been gained about ravens by looking at an apple. Also, the evidence is completely unrelated. I attempted to explain it but I'm not completely convinced by my explanation. How can we resolve this paradox? EDIT: It can be used to collect evidence supporting completely false statements like: 'All dinosaurs are educated'. Because we've seen plenty of things until now which are neither educated nor they're dinosaurs. EDIT2: I think that the paradox still remains. If we have a journey and took a look at every non-black thing in the universe and found it to be non-Raven, then from this argument it should be proved that all Ravens are black. But that's paradoxical because it would mean false statements can also be proved by taking a look at everything else.
An intuitive explanation of why evidence for the second statement carries less weight is that there are far more not black things than ravens. Suppose that you are sampling marbles from a bag. Suppose that you draw 5 and they are all black; what is the probability that all in the bag are black? You need to know how many are in the bag. Try 10, 100, 1000, etc.
How can I prove this trigonometric equation with squares of sines? Here is the equation: $$\sin^2(a+b)+\sin^2(a-b)=1-\cos(2a)\cos(2b)$$ Following from comment help, $${\left(\sin a \cos b + \cos a \sin b\right)}^2 + {\left(\sin a \cos b - \cos a \sin b\right)}^2$$ $$=\sin^2 a \cos^2b + \cos^2 a \sin^2 b + \sin^2 a \cos^2 b + \cos^2 a \sin^2 b$$ I am stuck here, how do I proceed from here? Edit: from answers I understand how to prove,but how to prove from where I am stuck?
On the LHS, you have $$2s_a^2c_b^2+2c_a^2s_b^2$$ after grouping. On the RHS, $$1-(c_a^2-s_a^2)(c_b^2-s_b^2)=(c_a^2+s_a^2)(c_b^2+s_b^2)-(c_a^2-s_a^2)(c_b^2-s_b^2)=2s_a^2c_b^2+2c_a^2s_b^2.$$
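A quick numerical spot-check of the identity at random angles (a minimal NumPy sketch; here $s_a=\sin a$, $c_b=\cos b$, etc., as in the answer above):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(-10, 10, 1000)
b = rng.uniform(-10, 10, 1000)
lhs = np.sin(a + b) ** 2 + np.sin(a - b) ** 2
rhs = 1 - np.cos(2 * a) * np.cos(2 * b)
print(np.max(np.abs(lhs - rhs)))   # ~1e-15, i.e. equal up to rounding
```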
Mr. M receives $10$ messages on New Year's Eve on average. Determine the probability that he receives $3$ messages at most on the next New Year's Eve. Mr. M receives $10$ messages on New Year's Eve on average. Determine the probability that he receives $3$ messages at most on the next New Year's Eve. So, "on average" should mean that $E(X) = 10$, with $X$ being a random variable that counts the number of messages Mr. M received by $n$ persons who could potentially send him such a message. It might be useful to define $$X = X_1 \ + \ ... \ + X_n$$ with $X_k := 1$, if the $k$-th person sends Mr. M a message, $0$ otherwise. Then, each $X_k$ is Bernoulli distributed. All in all, we are looking for $$P(X \le 3).$$ Since each $X_k$ is Bernoulli distributed, $X$ itself is binomially distributed, hence, $$P(X \le 3) = \sum_{k = 0}^3 {n \choose k} p^k q^{n - k}$$ with $p$ being the probability that a person sends him a message. But I cannot see how to determine $p$ with the help of the information that $E(X) = 10$.
It is reasonable to suppose that the number of people who could potentially send him a message is quite large, and the probability to do it for any person is quite small. Binomial distribution in this case can be approximated by Poisson. Remind that the Poisson distribution serves as the limiting distribution of the number of rare events in the large series of independent trials. Here is just the case. $\lambda=np=10$ and $$ P(X\leq 3)=e^{-10}\left(1+10+\frac{10^2}{2!}+\frac{10^3}{3!}\right).$$
Exam problem, functional analysis/Sobolev spaces. In my functional analysis exam I had the following problem, but I was not able to do anything. Right now I am still struggling to find a proof... It cannot be that hard ;) Let $I=(0,1)$, $b>0$ and $f\in L^2(I)$ be given. We had to show that there exists one and only one $u\in W^{2,2}(I)$ s.t. \begin{align} \int_0^1 u''\phi'' + bu'\phi'+u\phi \,\mathrm{d}x = \int_0^1 f\phi\,\mathrm{d}x \end{align} for all $\phi \in W^{2,2}(I)$.
For any $b>0$ $$ \langle u,v\rangle=\int_0^1 (u''v'' + b\,u'v'+u\,v)\,dx $$ is an inner product on $W^{2,2}$ equivalent to the usual one. On the other hand $$ \phi\mapsto\int_0^1 f\,\phi\,dx $$ is a bounded linear functional on $W^{2,2}$. The result now follows from the Riesz representation theorem. Another possibility would be to use the Lax-Milgram theorem on the bilinear form $\langle u,v\rangle$.
Use D'Alembert's solution to find specific solution to wave equation Consider the PDE $$u_{tt}=c^2u_{xx}$$ where $u(x,0)=\phi(x)$ and $u_t(x,0)=\psi(x)$. I'm asked to derive the D'Alembert solution, which I have done and found to be $$u(x,t)=\frac{1}{2}(\phi(x-ct)+\phi(x+ct))+\frac{1}{2c}\int_{x-ct}^{x+ct}\psi(s) \, ds$$ I'm then asked to show that if $\phi(x)=0$ and $\psi(x)=\delta(x)$ then $$u(x,t)=\frac{1}{2c}\left(H(x+ct)-H(x-ct) \right)$$ where $H$ is the Heaviside function. From my previous work I get that $$u(x,t)=\frac{1}{2c}\int_{x-ct}^{x+ct}\delta(s) \, ds$$ So in order to get the solution it seems as though it's the case that $\int_0^{x+ct}\delta(s) \, ds=H(x+ct)$. Is this true? If so, why?
\begin{align} \int_{x-ct}^{x+ct}\delta(s)ds & = \int_{-\infty}^{x+ct}\delta(s)ds-\int_{-\infty}^{x-ct}\delta(s)ds \\ & = H(x+ct)-H(x-ct) \end{align} The last step is exactly the defining property of the Heaviside function: $\int_{-\infty}^{y}\delta(s)\,ds$ equals $1$ when $y>0$ (the spike of $\delta$ at $s=0$ lies inside the interval of integration) and $0$ when $y<0$, which is $H(y)$.
Hard limit of sine function How can I compute this limit : $$\lim_{n \to \infty } {1 \over n}\sum\limits_{k = 1}^n {\left\lvert \sin k\right\rvert} $$ Thank you.
Hint: $|\sin k|=\sin (k \bmod \pi)$. The values of $k \bmod \pi$ will bounce around (equidistribute) in the interval $[0,\pi)$, so you are asked for the average value of $\sin x$ over this interval. What integral can you do to get this?
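For reference, the integral the hint points to is $\frac{1}{\pi}\int_0^\pi \sin x\,\mathrm{d}x=\frac{2}{\pi}$, and a small numerical experiment (my own addition) is consistent with that value:

```python
from math import sin, pi

n = 10**6
avg = sum(abs(sin(k)) for k in range(1, n + 1)) / n
print(avg, 2 / pi)  # both are approximately 0.6366
```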
How to interpret this comment from OEIS A050229? I've been wracking my brain trying to glean some insight into this comment from 2007: Numbers n for which there is a permutation of 0..n-1 such that each number is the sum of all the previous, plus 1, mod n. - R. H. Hardin, Dec 28 2007 The sequence OEIS:A050229 (which is related to OEIS:A001122 "Primes with primitive root 2") begins with: * *1 *2 *3 *5 *11 *13 *19 *29 Can someone provide an example of how the permutation definition in the comment would play out? Background: I was playing around with Lucas' primality test: * *for a number $n$ *$x=n-1$ *let $q$ be the set of factors of $x$ *The test passes if a number $a$ can be found such that: * *$a^{x}\equiv 1 \pmod{n}$ *for each $q$: * *$a^{x/q}\not\equiv 1 \pmod{n}$ In my results, $2$ was a valid $a$ such that the sequence A001122 was generated. However, $a^x$ grows unwieldy quite rapidly, and there are many primes for which $a\neq2$ ($a=3$ for 7, 17; $a=5$ for 23). It seems that "primes with primitive root 2" result in $a=2$, which simplifies things down the road. Naturally, I tossed the sequence into OEIS and up came A001122 and, upon further reading, A050229. Now, to give you insight into my level of understanding: I don't fully understand primitive roots, even after reading the Wikipedia page on them and playing around with numbers over the past several days, so maybe I'm missing something obvious. Ultimately this question is just focused on grokking R. H. Hardin's 2007 comment.
The fact that $2$ is a primitive root $\bmod p$ says that $2^0,2^1,2^2,\ldots 2^{p-1}$ are all distinct. The comment comes from the fact that the sum of all the powers of $2$ up to $2^n$ plus one is $2^{n+1}$. The permutation he refers to is then $0,1,2,4,8,16, \ldots \bmod p$. The fact that $2$ is a primitive root $\bmod p$ guarantees that the statement is true when starting with $0$. I would be worried that it might be true for other numbers with some other starting value, while the statement seems to imply that it is not.
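To make the comment concrete, here is a short Python sketch (my own illustration) that builds the sequence $0, 2^0, 2^1, 2^2, \ldots \bmod n$ for the terms listed in the question and checks both that it is a permutation of $0..n-1$ and that each entry is the sum of all the previous entries plus one, mod $n$:

```python
def hardin_permutation(n):
    """0 followed by the powers of 2 mod n; a permutation of 0..n-1
    exactly when 2 is a primitive root mod n."""
    return [0] + [pow(2, i, n) for i in range(n - 1)]

def check(n):
    seq = hardin_permutation(n)
    is_perm = sorted(seq) == list(range(n))
    # each entry equals (sum of all previous entries + 1) mod n
    property_holds = all(seq[i] == (sum(seq[:i]) + 1) % n for i in range(1, n))
    return is_perm and property_holds

for n in (2, 3, 5, 11, 13, 19, 29):
    print(n, hardin_permutation(n), check(n))
# e.g. for n = 5 the permutation is [0, 1, 2, 4, 3]:
# 1 = 0+1, 2 = 0+1+1, 4 = 0+1+2+1, 3 = (0+1+2+4+1) mod 5
```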
Prove that: $\sum\limits_{cyc}\frac{a}{\sqrt{a^2+3bc}}\leq\frac{9(a^2+b^2+c^2)}{2(a+b+c)^2}$ Let $a$, $b$ and $c$ be positive numbers. Prove that: $$\frac{a}{\sqrt{a^2+3bc}}+\frac{b}{\sqrt{b^2+3ac}}+\frac{c}{\sqrt{c^2+3ab}}\leq\frac{9(a^2+b^2+c^2)}{2(a+b+c)^2}$$ I tried Cauchy-Schwarz: $$\left(\sum\limits_{cyc}\frac{a}{\sqrt{a^2+3bc}}\right)^2\leq(a+b+c)\sum_{cyc}\frac{a}{a^2+3bc}.$$ Hence, it remains to prove that $$\sum_{cyc}\frac{a}{a^2+3bc}\leq\frac{81(a^2+b^2+c^2)}{4(a+b+c)^5},$$ which is wrong for $c\rightarrow0^+$. Also we can use the following C-S: $$\left(\sum\limits_{cyc}\frac{a}{\sqrt{a^2+3bc}}\right)^2\leq(1+1+1)\sum_{cyc}\frac{a^2}{a^2+3bc}.$$ Thus, it remains to prove that $$\sum_{cyc}\frac{a^2}{a^2+3bc}\leq\frac{27(a^2+b^2+c^2)^2}{4(a+b+c)^4},$$ which is wrong again: $b=c=1$.
Using the substitutions $(a, b, c) \to (a^2, b^2, c^2)$, the inequality becomes $$\sum_{\mathrm{cyc}}\frac{a^2}{\sqrt{a^4+3b^2c^2}} \le \frac{9(a^4+b^4+c^4)}{2(a^2+b^2+c^2)^2}.$$ Using the Cauchy-Bunyakovsky-Schwarz inequality, we have \begin{align} \mathrm{LHS}^2 &= \sum_{\mathrm{cyc}} \frac{a^4}{a^4 + 3b^2c^2} + \sum_{\mathrm{cyc}} \frac{2a^2b^2}{\sqrt{(a^4+3b^2c^2)(b^4+3c^2a^2)}}\\ &\le \sum_{\mathrm{cyc}} \frac{a^4}{a^4 + 3b^2c^2} + \sum_{\mathrm{cyc}} \frac{2a^2b^2}{a^2b^2 + 3abc^2}. \end{align} It suffices to prove that $$\frac{81(a^4+b^4+c^4)^2}{4(a^2+b^2+c^2)^4}\ge \sum_{\mathrm{cyc}} \frac{a^4}{a^4 + 3b^2c^2} + \sum_{\mathrm{cyc}} \frac{2a^2b^2}{a^2b^2 + 3abc^2}.$$ After clearing the denominators, it suffices to prove that $f(a, b, c)\ge 0$ where $f(a,b,c)$ is a homogeneous polynomial of degree $26$. Due to symmetry and homogeneity, WLOG, assume that $1 = c \le b \le a$. Let $b = 1 + s, \ a = 1+s + t; \ s,t \ge 0$. $f(1+s+t, 1+s, 1)$ is a polynomial in $s, t$ with non-negative coefficients. We are done.
Finding the probability density function... I've found the probability density function for other functions, but this one seems rather difficult. $$F(x)=P(X\le x)=1-e^{-x^2}$$ My teacher's solution manual is showing: $$ \begin{cases} 0, & x<0 \\ 2xe^{-x^2}, & x>0 \end{cases} $$ Maybe I am just having a hard time finding this integral, but I don't seem to get the same results. If I use u-substitution I am setting $u = x^2$ with $du = 2x\,dx$, but how does that get into the answer? Am I selecting a wrong $u$? Is it integration by parts maybe? Thanks.
If the density function is given by $$ f(x) = \begin{cases} 0, & \text{if $x<0$,} \\[2ex] 2xe^{-x^2}, & \text{if $x\ge0$,} \end{cases} $$ then we have, by the change of variable, $u=t^2$, $du=2tdt$, $$ F(x)=\int_0^x2t e^{-t^2}dt=\int_0^{x^2}e^{-u}du=\left[-e^{-u}\right]_0^{x^2}=1-e^{-x^2} $$ as announced.
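Equivalently, since the density is just the derivative of the CDF, a quick symbolic check (my own addition, using SymPy) recovers the same expression:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = 1 - sp.exp(-x**2)          # the given CDF
print(sp.diff(F, x))           # 2*x*exp(-x**2), i.e. the density for x >= 0
```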
Is $f$ differentiable at $(0, 0)$? I asked a question earlier relating to the function: Let $f : \mathbb{R^2} \rightarrow \mathbb{R}$ be given by $$f(x, y) = \cases{ \frac{xy}{x+y}& if $x+y\neq0$ \\ 0 & if $x+y=0$}$$ I have another question relating to this function which goes as follows: Is $f$ differentiable at $(0, 0)$? I'm really not sure how to determine whether it's differentiable or not, so any help will be appreciated.
The partial derivatives of $f$ at $(0,0)$ exist. They are equal to $$ \partial f/\partial x(0,0) = 0, \ \ \ \partial f/ \partial y (0,0) = 0.$$ Therefore, if $f$ is to be differentiable at $(0,0)$, the derivative cannot be anything other than the linear transformation $ \mathbb R^2 \to \mathbb R$ represented by the matrix $[0 \ 0 ]$. Let's test to see if the linear transformation $[0 \ 0]$ really is a derivative for $f$ at $(0,0)$. If $[0 \ 0]$ is the derivative, then, for any $\epsilon > 0$, there would need to exist a $\delta > 0$ such that $$ \sqrt{h_1^2 + h_2^2} < \delta \implies \left|f(h_1, h_2) - f(0,0) - \left[0 \ 0\right] \left[ \begin{array}{c} h_1 \\ h_2\end{array}\right] \right| < \epsilon \sqrt{h_1^2 + h_2^2},$$ i.e. for any $\epsilon > 0$, there would need to exist a $\delta > 0$ such that $$ h < \delta \implies \begin{cases}\left| h \frac{\cos \theta \sin \theta}{\cos \theta + \sin \theta} \right| < \epsilon h & {\rm \ if \ } \theta \neq \frac {3\pi} 4, \frac {7\pi}4 \\ 0 < \epsilon h & {\rm \ if \ } \theta = \frac {3\pi} 4, \frac {7\pi}4 \end{cases}$$ Here I introduced polar coordinates for $(h_1, h_2)$: $$ h_1 = h \cos \theta, \ \ \ h_2 = h \sin \theta.$$ The problem is that, as $\theta$ approaches $ \frac {3\pi }4$ or $\frac {7\pi}4$, the expression $(\cos \theta \sin \theta)/(\cos \theta + \sin \theta)$ tends towards plus or minus infinity. (You can see this best if you draw a graph.) So for a given $ \epsilon$, the inequality $$ \left| \frac{\cos \theta \sin \theta}{\cos \theta + \sin\theta} \right| < \epsilon$$ cannot be satisfied for all $\theta \in [0, 2\pi) \backslash \{ \frac {3\pi} 4, \frac{7\pi }4 \}$. Therefore, there exists no $\delta > 0$ such that $$ h < \delta \implies \left| h \frac{\cos \theta \sin \theta}{\cos \theta + \sin \theta} \right| < \epsilon h {\rm \ \ \ for \ all \ \ } \theta \in [0, 2\pi) \backslash \{ \tfrac {3\pi} 4, \tfrac{7\pi } 4 \}.$$ The conclusion is that $f$ is not differentiable at $(0,0)$. Alternative argument: If $f$ is differentiable at $(0,0)$, then its directional derivative at $(0,0)$ in the direction of the vector $(\lambda_1, \lambda_2)$ must be $$ df/dt = \lambda_1 \partial f/\partial x + \lambda_2 \partial f/\partial y .$$ This is zero, since both partial derivatives vanish. Now let's compute the directional derivative explicitly. Consider a straight path parametrised by $(x,y) = (\lambda_1 t , \lambda_2 t)$. Then $$ f(t) = t \frac{\lambda_1 \lambda_2}{\lambda_1 + \lambda_2}$$ along this path (assuming that $\lambda_1 + \lambda_2 \neq 0$), so $$ df/dt = \frac{\lambda_1 \lambda_2}{\lambda_1 + \lambda_2},$$ which in general is NOT zero. This shows that $f$ is not differentiable at $(0,0)$.
Find all the positive integers $a$, $b$, and $c$ for which $\binom{a}{b} \binom{b}{c} = 2\binom{a}{c}$. I tried the following equivalence from a different user's post from a while back: $\binom{a}{b} \binom{b}{c} = \binom{a}{c} \binom{a-c}{b-c}$. Where does this equivalence come from? After applying this equivalence and trying to hammer it out algebraically I end up with $\frac{(a-c)!}{(b-c)!(a-b)!}=2$, which doesn't feel any closer than when I didn't use the equivalence. How can I solve this?
You correctly arrived at recognizing that $\binom{a-c}{b-c}=2$. Now, the only remaining step is to recognize that the only binomial coefficient which equals two is $\binom{2}{1}$, thus $a-c=2$ and $b-c=1$, so the solutions form the triples $(a,b,c)=(c+2,c+1,c)$. Why is this? You can prove it by strong induction, noting that $\binom{n}{0}=1, \binom{n}{n}=1$ and $\binom{n}{r}=\binom{n-1}{r}+\binom{n-1}{r-1}$, which for $n\geq 3$ is by inductive hypothesis either $0+1$ or the sum of two positive numbers, one of which is greater than or equal to $2$, and is thus greater than or equal to three. Alternatively, you can note that $\binom{n}{1}=\binom{n}{n-1}=n$ and binomial coefficients are monotonic up until the midpoint, that is $n=\binom{n}{1}<\binom{n}{2}<\binom{n}{3}<\dots<\binom{n}{\lfloor n/2\rfloor}$, and similarly $\binom{n}{\lceil n/2\rceil}>\dots>\binom{n}{n-2}>\binom{n}{n-1}=n$. As for where the quoted equivalence comes from: both sides of $\binom{a}{b}\binom{b}{c}=\binom{a}{c}\binom{a-c}{b-c}$ count the ways to choose nested sets $C\subseteq B\subseteq A$ with $|A|=a$, $|B|=b$, $|C|=c$: either pick $B$ first and then $C$ inside it, or pick $C$ first and then the remaining $b-c$ elements of $B$ from the $a-c$ elements of $A$ outside $C$.
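A brute-force search over small values (a sketch of my own, not part of the original answer) is consistent with the triples $(c+2, c+1, c)$ being the only solutions:

```python
from math import comb

solutions = [
    (a, b, c)
    for a in range(1, 30)
    for b in range(1, a + 1)
    for c in range(1, b + 1)
    if comb(a, b) * comb(b, c) == 2 * comb(a, c)
]
print(solutions)
# every triple found has the form (c+2, c+1, c)
print(all(a == c + 2 and b == c + 1 for a, b, c in solutions))
```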
Prove $\mathbf{\det(I+xy^T+uv^T)}=(1+\mathbf{y^Tx})(1+\mathbf{v^Tu)-(x^Tv)(y^Tu)}$ I am asked to prove $$\mathbf{\det(I+xy^T+uv^T)}=(1+\mathbf{y^Tx})(1+\mathbf{v^Tu)-(x^Tv)(y^Tu)}$$ By first proving that $\mathbf{\det(I+xy^T)}=1+\mathbf{y^Tx}$, where $\mathbf{x}$ and $\mathbf{y}$ are $n$ vector. Assuming that $x\neq0$, we can find vectors $w_1, w_2, \cdots, w_{n-1}$ such that the matrix $Q$ defined by $$Q=[x,w_1,w_2,\cdots, w_{n-1}]$$ is nonsingular and $x=Qe_1$, where $e_1=(1,0,0,\cdots,0)^T$. If we define $$y^TQ=(z_1,z_2,\cdots,z_n),$$ then $$z_1=y^TQe_1=y^TQ(Q^{-1}x)=y^Tx,$$ and $$\det(I+xy^T)=\det(Q^{-1}(I+xy^T)Q)=\det(I+e_1y^TQ).$$ After here I am not quite sure how to close the argument and then go on to prove that $\mathbf{\det(I+xy^T+uv^T)}=(1+\mathbf{y^Tx})(1+\mathbf{v^Tu)-(x^Tv)(y^Tu)}$.
Hint: First off, for any (compatible) matrices $A$ and $B$, we have $\det(I+AB) = \det(I+BA)$. In your case, $A$, $B$ are just column vectors.
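Before writing out the proof, a numerical spot-check of both identities with random vectors (my own addition, using NumPy) can help build confidence that they are what you want to prove:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x, y, u, v = (rng.standard_normal(n) for _ in range(4))

I = np.eye(n)
lhs1 = np.linalg.det(I + np.outer(x, y))
rhs1 = 1 + y @ x
lhs2 = np.linalg.det(I + np.outer(x, y) + np.outer(u, v))
rhs2 = (1 + y @ x) * (1 + v @ u) - (x @ v) * (y @ u)

print(np.isclose(lhs1, rhs1), np.isclose(lhs2, rhs2))  # True True
```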
Solving Linear Congruences. The question is: Solve the linear congruence $3x \equiv 4 \pmod{7}$, and find the smallest positive integer that is a solution of this congruence. My approach: $3x \equiv 4 \pmod{7}$ $\Rightarrow x \equiv 3^{-1}\cdot 4 \pmod{7}$, where $3^{-1}$ is the multiplicative inverse of $3 \bmod 7$. Multiplicative inverse of $3 \bmod 7$: $7=3\cdot 2+1$, $3=1\cdot 3+0$, so $1=1\cdot 7+\left(-2\right)\cdot 3$, thus $-2$ or $5$ is the inverse. Thus I am getting $x \equiv 3^{-1}\cdot 4 \equiv 20 \pmod{7}$. But in the solution they are multiplying both sides by the inverse $5$ and get the equation $15x \equiv 20 \pmod{7}$ and then $x \equiv 15x \equiv 20 \equiv 6 \pmod{7}$. The solution is given here. Please help me out: where am I wrong? Thanks!
Writing this as a formal fraction, we have $$x\equiv \frac{4}{3}\equiv \frac{11}{3}\equiv \frac{18}{3}\equiv 6\pmod 7$$ (adding $7$ to the numerator does not change the residue class, and $18/3$ is an honest integer). Note also that your own computation was not wrong: $x \equiv 20 \pmod 7$ is the same statement as $x \equiv 6 \pmod 7$, since $20 = 2\cdot 7 + 6$; the book has simply reduced $20$ to the smallest positive representative.
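Incidentally (my addition), in Python 3.8+ the modular inverse is available directly through the built-in pow, which gives a one-line sanity check of the answer:

```python
inv = pow(3, -1, 7)        # multiplicative inverse of 3 mod 7, i.e. 5
print(inv, (inv * 4) % 7)  # 5 6 -> the smallest positive solution is x = 6
```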
Boundedness of $a+\frac 1a$ when iterated Here's something I was wondering... Is $$a + \frac 1a$$ for any positive real number $a$ bounded when iterated? For example, if we start at $a=1$, continuing gives us $a= 1+ \frac 11=2$, then $a=2+\frac 12=2.5$ and so on. A quick program shows that it seems to grow without bound, but how would one prove this mathematically? If it is possible, that is... Any hints would be appreciated.
How much do you add at each step? If there is some positive integer $n$ such that you always add at least $1/n$, then obviously the sequence grows without bound. If there is no such number, then for any $n$ you will eventually add less than $1/n$. The only way to add less than $1/n$ is if your sequence grows to more than $n$. So your sequence must grow without bound in this case too.
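To complement the argument, a tiny simulation (my own sketch) shows the iterates passing any fixed bound, if only slowly; the printed values also suggest growth on the order of $\sqrt{2n}$, consistent with $a_{n+1}^2 = a_n^2 + 2 + 1/a_n^2$:

```python
from math import sqrt

a = 1.0
for n in range(1, 1_000_001):
    a += 1 / a
    if n in (10, 100, 1000, 10_000, 100_000, 1_000_000):
        # compare the iterate with sqrt(2n); the two columns stay close
        print(n, a, sqrt(2 * n))
```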
What is the maximum area of a rectangle circumscribing another rectangle? The problem says that we have to find the maximum area of a rectangle that circumscribes another rectangle of sides $a$ and $b$.
A solution based on trigonometry: Let $\alpha$ be the (smallest) angle formed by a side of the original rectangle and a side of the circumscribing rectangle. Then the area $A$ of the circumscribing rectangle is equal to the area $ab$ of the original rectangle plus the area of four right triangles, $a\sin\alpha\cdot a\cos\alpha + b\sin\alpha\cdot b\cos\alpha$. So $$ A = ab + a^2\sin\alpha\cos\alpha + b^2\sin\alpha\cos\alpha = ab + (a^2+b^2)\sin\alpha\cos\alpha $$ $$ = ab + {1\over2}(a^2+b^2)\sin2\alpha. \quad\mbox{(We used: } \ \sin2\alpha=2\sin\alpha\cos\alpha.) $$ The area $A$ attains its maximum when $\sin2\alpha=1$, i.e. when $\alpha=45^\circ$, and we have $$ A_\max = ab + {1\over2}(a^2+b^2) = {(a+b)^2\over2}. $$ (It is easy to check that the circumscribing rectangle with area $A_\max$ is a square with side ${a+b\over\sqrt2}$.)
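As a sanity check (my own addition), scanning over $\alpha$ numerically for sample side lengths reproduces the maximum $(a+b)^2/2$ at $\alpha=45^\circ$:

```python
from math import sin, pi

a, b = 3.0, 1.0  # sample side lengths of the inner rectangle

def area(alpha):
    # area of the circumscribing rectangle when tilted by alpha
    return a * b + 0.5 * (a**2 + b**2) * sin(2 * alpha)

best_area, best_alpha = max((area(k * pi / 2000), k * pi / 2000) for k in range(1001))
print(best_area, (a + b) ** 2 / 2)      # both approximately 8.0
print(best_alpha * 180 / pi)            # approximately 45 degrees
```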
Is this matrix injective/surjective? Is this matrix injective or surjective? $M=\begin{pmatrix} 1 & 2 & -3\\ 2 & 3 & -5 \end{pmatrix}$ I need to calculate rank. $M^{T}= \begin{pmatrix} 1 & 2\\ 2 & 3\\ -3 & -5 \end{pmatrix}$. If we form it with Gauss, we get (it's formed correctly): $\begin{pmatrix} 6 & 12\\ 0 & 5\\ 0 & 0 \end{pmatrix}$ $\Rightarrow rank(M)= 2 \Rightarrow$ not surjective because $M \in \mathbb{R}^{3}$ but $rank(M) \neq 3$ Is it injective? No because $dim(Ker(M)) = 3-2= 1 \neq 0$ Is it good? If not please explain not too complicated. I think there can be trouble at beginning when I transposed? Edit: I'm not sure if $M$ is really in $\mathbb{R}^{3}$, I said that because we have $2$ lines but $3$ columns. That's fine?
Your application $M$ goes from $\mathbb{R}^3$ to $\mathbb{R}^2$, so it can't be injective for a dimensional reason (the kernel of a linear map from a $3$-dimensional space to a $2$-dimensional space is always nontrivial). Your calculation of the rank is right, because the rank of a matrix $A$ is equal to that of its transpose. And since the rank is $2$, which is the dimension of the codomain $\mathbb{R}^2$, the map is in fact surjective. Feel free to ask if I wasn't clear.
Absolute value $\lvert x \rvert >-1$ (precalculus) How can $\lvert x \rvert >-1$ be true for all real $x$? If $x\geq 0$: $x>-1$. If $x<0$: $-x>-1 \iff x<1$. So $-1<x<1$. But if I for instance take $x=-5$ I get $\lvert -5 \rvert >-1 \iff -(-5)>-1 \iff 5>-1$. This is true but contradicts $-1<x<1$. What is wrong here? Update: Would it be any difference if I instead had $\lvert x\rvert \geq -1$?
Your analysis is flawed. Your conclusion "$x>-1$" on the second line only follows under your condition that "$x\geq 0$". But the conclusion "$x<1$" on the third line only follows under your condition that "$x<0$". These are mutually exclusive conditions -- there is no $x$ that simultaneously satisfies them both. You are combining conclusions from different assumptions. Combined correctly: for $x\geq 0$ the inequality becomes $x>-1$, which every $x\geq 0$ satisfies, and for $x<0$ it becomes $x<1$, which every $x<0$ satisfies; so $\lvert x \rvert >-1$ holds for all real $x$. The same reasoning applies to $\lvert x\rvert \geq -1$, which is likewise true for every real $x$.