Formal proof with first order logic axioms. How do I formally prove the following:
$$[\forall y Gy \wedge \exists x Hx] \iff \exists x[\forall y Gy \wedge Hx]$$
and by formally I mean using premises and whatever other first order logical axioms are needed.
I proved it like such, but this is not how I was supposed to do it.
$[\forall y Gy \wedge \exists x Hx] \iff \exists x[\forall y Gy \wedge Hx]$ : Distribute Quantifier
$\implies$ $[\forall y Gy \wedge \exists x Hx] \iff [\forall x\forall y Gy \wedge \exists x Hx]$ :Null Quantification
$\implies$ $[\forall y Gy \wedge \exists x Hx] \iff [\forall y Gy \wedge \exists x Hx]$
Can anyone help? Here are some of the Axioms accepted:
$$M.P: Modus Ponens$$
$$TA : Tautologies$$
$$\forall x(\delta \implies \psi) \implies (\forall x\delta \implies \forall x\psi)$$
$$E.I: Existential Instantiation$$
$$U.G: Universal Generalization$$
$$U.I: Universal Instantiation$$
$$E.G: Existential Generalization$$
| I see you were trying Quantifier Distribution and Null Quantification, so I assume it is OK to use equivalence rules.
Well, this is an immediate application of a Prenex Law:
$Q \land \exists x \phi(x) \Leftrightarrow \exists x (Q \land \phi(x))$
where $Q$ does not contain $x$ as a free variable.
If equivalence rules are not allowed, then we will need to know what inference rules you are allowed.
|
Minimum absolute values of algebraic integers in cyclotomic fields Let $\omega$ be a primitive $p^{th}$ root of unity with $p$ an odd prime.
Consider $\mathbb{Z}[\omega]$, the ring of integers in the cyclotomic field $\mathbb{Q}(\omega)$. I wanted to know how small the elements of $\mathbb{Z}[\omega]$ can be, in the following sense:
Is the set $$\{|\alpha|: \alpha\in\mathbb{Z}[\omega], \alpha\neq 0\}$$
bounded below?
| If you want something less beefy than the equidistribution theorem, just take $p$ to be whatever you like. You know that $\sqrt{p^*}\in\Bbb Z[\omega]$ where $p^*=(-1)^{(p-1)/2}p$, and since $p^*\equiv 1\mod 4$ we have that ${2j+1+\sqrt{p^*}\over 2}\in\Bbb Z[\omega]$ is an algebraic integer for all $j\in\Bbb Z$. If $p\equiv 1\mod 4$, let $n$ be the integer with $n<\sqrt{p^*}=\sqrt{p}<n+1$. There are two cases:
$$\alpha = \begin{cases}
{n-\sqrt{p^*}\over 2} & n = 2k+1 \\
{n+1-\sqrt{p^*}\over 2} & n= 2k
\end{cases}$$
both of which give a number, $\alpha$, with absolute value $<1$; in particular its powers converge to $0$, which gives infinitely many nonzero elements of arbitrarily small absolute value, so the set is not bounded below.
|
Union of Balls around Rationals Let $(r_n)_{n \ge 1}$ be an enumeration of the rationals. Consider the union $A := \cup_n (r_n-\frac{1}{n^2},r_n+\frac{1}{n^2})$. It is unclear a priori whether $A$ covers the real line, since although the rationals are dense in the reals, the $\frac{1}{n^2}$'s might shrink too fast. However, using measure theory, it is very easy to see $A$ does not cover much: indeed, $m(A) \le \sum_n m((r_n-\frac{1}{n^2},r_n+\frac{1}{n^2})) = \sum_n \frac{2}{n^2} = \frac{\pi^2}{3}$.
Since this argument relies much on the convergence of $\sum_n \frac{1}{n^2}$, I am wondering whether $B := \cup_n (r_n-\frac{1}{n},r_n+\frac{1}{n})$ covers the whole real line, or what portion of it? Does the amount covered depend on the enumeration we choose?
| Let $q_3\in \Bbb Q \cap (5/6,1).$
Recursively, for $j\in \Bbb Z^+$ let $q_{3(j+1)} \in \Bbb Q \cap (q_{3j}+\frac {1}{3j}-\frac {1}{6(j+1)}, q_{3j}+\frac {1}{3j}).$
Then $0<q_{3i}< q_{3j}$ when $i<j,$ and we have $\cup_{j=1}^n(-\frac {1}{3j}+q_{3j},\frac {1}{3j}+q_{3j})\supset [1,1+ \sum_{j=1}^n\frac {1}{6j}).$
So $\cup_{j\in \Bbb Z^+}(-\frac {1}{3j}+q_{3j},\frac {1}{3j}+q_{3j})\supset [1,\infty).$
Similarly we can find a discrete $\{q_{3j-1}:j\in \Bbb Z^+\}\subset \Bbb Q$ such that $q_{3j+2}<q_{3j-1}<0$ and $\cup_{j\in \Bbb Z^+}(-\frac {1}{3j-1}+q_{3j-1},\frac {1}{3j-1}+q_{3j-1})\supset (-\infty,-1].$
Let $q_1=0.$
Since the set $S=\{q_1\}\cup \{q_{3j}:j\in \Bbb Z^+\}\cup \{q_{3j-1}:j\in \Bbb Z^+\}$ is discrete, the set $\Bbb Q\setminus S$ is infinite, so we can enumerate $\Bbb Q\setminus S=\{q_{3j+1}:j\in \Bbb Z^+\}.$
And we have $\cup_{j\in \Bbb Z^+}(-1/j+q_j,1/j+q_j)=\Bbb R.$
We can also enumerate $\Bbb Q$ in a different way, to make $\cup_{n\in \Bbb N}(-1/n+q_n,1/n+q_n)$ a set of finite measure.
|
How to obtain an approximate closed-form expression for $\int_0^1 \frac{e^{-a x-\frac{1}{b x}}}{c x} \, dx$? How to obtain an approximate closed-form expression for
$$\int_0^1 \frac{e^{-a x-\frac{1}{b x}}}{c x} \, dx,\quad a>0\land b>0\land c>0,$$
I know there may be a tradeoff between complexity and accuracy and hence multiple solutions exist. The preferred result is expressed with elementary functions. Thank you for your help.
| Let $\mu=\sqrt{\frac{a}{b}}$ and $\nu=\sqrt{ab}$ (here $b$ denotes the coefficient of $1/x$, which is $1/b$ in the question's notation, and the constant prefactor $1/c$ is dropped):
$$\begin{eqnarray*}J(a,b)=\int_{0}^{1}\exp\left(-ax-\frac{b}{x}\right)\frac{dx}{x}&=&\int_{0}^{1}\exp\left[-\nu\left(\mu x+\frac{1}{\mu x}\right)\right]\frac{dx}{x}\\&=&e^{-2\nu}\int_{0}^{\mu}\exp\left[-\nu\left(\sqrt{x}-\frac{1}{\sqrt{x}}\right)^2\right]\frac{dx}{x}\\&=&e^{-2\nu}\int_{-\infty}^{\sqrt{\mu}-\frac{1}{\sqrt{\mu}}}\frac{2 e^{-\nu z^2}\,dz}{\sqrt{4+z^2}}\\&=&2e^{-2\nu}\int_{-\infty}^{\frac{1}{2}\left(\sqrt{\mu}-\frac{1}{\sqrt{\mu}}\right)}\frac{e^{-4\nu z^2}\,dz}{\sqrt{1+z^2}}\end{eqnarray*}$$
and
$ \int_{-\infty}^{0}\frac{e^{-4\nu z^2}}{\sqrt{1+z^2}}\,dz=\frac{1}{2}e^{2\nu}K_0(2\nu)$, so that:
$$ J(a,b) = K_0(2\nu)+2e^{-2\nu}\int_{0}^{\frac{1}{2}\left(\sqrt{\mu}-\frac{1}{\sqrt{\mu}}\right)}\frac{e^{-4\nu z^2}\,dz}{\sqrt{1+z^2}}$$
where $K_0$ is the modified Bessel function of the second kind and the remaining integral is easy to approximate through the Cauchy-Schwarz inequality.
|
Pigeon hole principle solution explanation Having trouble understanding a solution to a textbook problem.
A computer randomly prints three-digit codes, with no repeated digits in any code. What is the minimum number of codes that must be printed in order to guarantee that at least six of the codes are identical?
I have the solution,
$^{10} P_{3} = (10)(9)(8) = 720$ distinct values
The minimum number of codes is $5\times ^{10} P_{3} + 1 = 3601$
I don't understand why $n$ is being multiplied by $5$ and added with $1$.
To my understanding, having $726$ codes would ensure that $6$ of them are repeated. So why does the value need to be multiplied and added to?
| The idea is that you want to guarantee to have one code printed six times.
So, for example, with $3600$ print outs you could potentially have every code printed exactly five times. This is why you need to have at least $3601$ to guarantee one code being printed six times.
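For a quick sanity check, here is a minimal sketch of the counting in Python:
from math import perm
codes = perm(10, 3)   # 720 three-digit codes with no repeated digits
print(5 * codes)      # 3600: every code printed exactly five times, none six times
print(5 * codes + 1)  # 3601: one more print forces some code to appear six times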
|
Finding the variance of the sum of weighted normal distributions I'm trying to find the variance of the PDF of a "center of mass" calculation, where every (discrete) node has a normal distribution, as:
$\frac{\sum_{i=-\frac{N}{2}}^{\frac{N}{2}} i\mathcal{N}(S_i,\sigma^2)}{\sum_{i=-\frac{N}{2}}^{\frac{N}{2}} S_i}$
I first tried to perform the calculation by treating the i's as constants, changing the variance as: $\sigma^2 \Rightarrow i^2 \sigma^2$ but this means that the location of the node changes its variance, which does not make sense. Furthermore, the variance of the sum should be $\sigma^2=\sum_{i=-\frac{N}{2}}^{\frac{N}{2}}i^2 \sigma^2$, but this gives me results for the variance that are quite different than what my simulations of this scenario yield.
So my question is: what am I missing, and what will this new PDF look like?
| Let the denominator be $\alpha = \sum_{i=-\frac{N}{2}}^{\frac{N}{2}} S_i$. So
$$\operatorname{var}\Big(\frac{1}{\alpha}\sum_{i=-\frac{N}{2}}^{\frac{N}{2}} i\mathcal{N}(S_i,\sigma^2)\Big) = \frac{1}{\alpha^2}\operatorname{var}\Big(\sum_{i=-\frac{N}{2}}^{\frac{N}{2}} i\mathcal{N}(S_i,\sigma^2)\Big) = \frac{1}{\alpha^2}\operatorname{var}(A)$$
Let $w$ be an $(N+1) \times 1$ vector as :
$$w = [\mathcal{N}(S_{-\frac{N}{2}},\sigma^2) \ldots \mathcal{N}(S_{+\frac{N}{2}},\sigma^2)]^{T} $$ and
a "counter" vector as
$$c = [-\frac{N}{2} \ldots \frac{N}{2}]^T$$
So $A = c^Tw$, then
$$\operatorname{var}(A) = \operatorname{var}(c^Tw) = c^T\operatorname{Cov}(w)\,c = c^T(\sigma^2 \pmb{I})c = \sigma^2c^Tc = \sigma^2\sum\limits_{i=-\frac{N}{2}}^{\frac{N}{2}}i^2 = 2\sigma^2 \sum\limits_{i=1}^{\frac{N}{2}}i^2$$
you can say that $\sum\limits_{i=1}^{\frac{N}{2}}i^2 = \frac{N}{12}(\frac{N}{2} + 1)(N + 1)$
Then the total variance is
$$\frac{1}{\alpha^2}\operatorname{var}(A) =\frac{\sigma^2}{\alpha^2}\cdot\frac{N}{6}\left(\frac{N}{2} + 1\right)(N + 1) $$
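A Monte Carlo sketch (assuming NumPy; the means $S_i$ are chosen arbitrarily) that checks $\operatorname{var}(A)=\sigma^2\sum i^2$:
import numpy as np
rng = np.random.default_rng(0)
N, sigma = 6, 0.5
i = np.arange(-N // 2, N // 2 + 1)                  # -3, ..., 3
S = rng.uniform(1, 2, size=i.size)                  # arbitrary node means
draws = rng.normal(S, sigma, size=(200_000, i.size))
A = (i * draws).sum(axis=1)
print(A.var(), sigma**2 * (i**2).sum())             # both close to 7.0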
|
Computation of eigenvalues/vectors of a $9\times 9$ matrix I have a symmetric matrix (with real coefficients) and I need to compute its eigenvalues and eigenvectors. My matrix depends on 3 parameters $(\nu_1,\nu_2,\nu_3)$ that are not independent (in fact we have $\nu_1^2+\nu_2^2+\nu_3^2=1$). If I am using Maple to compute the eigenvalues/vectors, I get an ugly answer since I don't know how to use the fact that I know that $\nu_1^2+\nu_2^2+\nu_3^2=1$ and consequently, there is certainly a lot of simplification that Maple doesn't do.
Is there a software that I can use to do this kind of computation?
For example, how could I do this if the matrix is
$$
B := \begin{pmatrix}
0 & \nu_1 & \nu_2 & \nu_3 & 0 & 0 & \nu_3 & \nu_2 & \nu_1 \\
\nu_1 & 0 & 0 & 0 & 0 & 0 &0 & 0 & \nu_2 \\
\nu_2 & 0 & 0 & 0 & 0 & 0 &0 & 0 & \nu_3 \\
\nu_3 & 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 \\
\nu_3 & 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 \\
\nu_2 & 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 \\
\nu_1 & \nu_2 & \nu_3 & 0 & 0 & 0 &0 & 0 & 0 \\
\end{pmatrix}
$$
| One can do this with most CASs, by taking the condition $a^2+b^2+c^2-1=0$ (writing $a,b,c$ for $\nu_1,\nu_2,\nu_3$) as another polynomial equation for computing a Gröbner basis. It depends on your specific matrix whether or not the complexity of the system of polynomial equations is still manageable.
Edit: the characteristic polynomial of $B$ is given by
$$
f(t)=t^9 - 2t^7(a^2 + b^2 + c^2) - 2t^6ab(a + c) + t^5(a^2c^2 - 2ab^2c +
b^4 + b^2c^2 + c^4).
$$
Using $a^2+b^2+c^2=1$ we obtain
$$
f(t)=t^5 \cdot(t^4 - 2t^2 + 2tb( - ac + b^2 + c^2 - 1) - 2ab^2c + b^4 + c^2).
$$
So $\lambda=0$ is an eigenvalue with multiplicity $5$, and the other four eigenvalues are the zeros of the quartic in brackets.
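As a sketch of this in practice (assuming SymPy; the variable names are mine), one can compute the characteristic polynomial and reduce it modulo the relation:
import sympy as sp
v1, v2, v3, t = sp.symbols('nu1 nu2 nu3 t')
B = sp.zeros(9, 9)
first = [0, v1, v2, v3, 0, 0, v3, v2, v1]
for j, e in enumerate(first):
    B[0, j] = B[j, 0] = e          # first row and column of the symmetric matrix
B[1, 8] = B[8, 1] = v2             # remaining nonzero entries
B[2, 8] = B[8, 2] = v3
p = B.charpoly(t).as_expr()
_, rem = sp.reduced(p, [v1**2 + v2**2 + v3**2 - 1], v1, v2, v3, t)
print(sp.factor(rem))              # t**5 times a quartic in t, as quoted above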
|
Does every finitary monad with this propery arise as a free module monad? Let $T:\mathbf{Set} \rightarrow \mathbf{Set}$ denote a finitary monad such that $T(\emptyset) \cong 1$ and $T(A \sqcup B) \cong T(A) \times T(B)$, naturally in $A$ and $B$.
Question. Does $T$ necessarily arise as the free module monad for some unital semiring?
| I think the answer is Yes.
Since $T$ is finitary, we may restrict to finite sets in the domain. We have natural isomorphisms $T(A) \cong T(\{1\})^A$. The set $T(\{1\})$ has a semiring structure defined as follows:
*The unique map $\emptyset \to \{1\}$ induces a map $\{1\} \cong T(\emptyset) \to T(\{1\})$, i.e. an element $0 \in T(\{1\})$.
*The monad unit $\{1\} \to T(\{1\})$ induces another element $1 \in T(\{1\})$.
*The codiagonal $\{1\} + \{1\} \to \{1\}$ induces the addition map $T(\{1\}) \times T(\{1\}) \to T(\{1\} + \{1\}) \to T(\{1\}).$
*There is a natural map $T(A) \times B \to T(A \times B)$, a so-called strength, induced by the natural map $B \to \hom(A,A \times B) \to \hom(T(A),T(A \times B))$. This induces a multiplication map
$$T(A) \times T(A) \to T(A \times T(A)) \cong T(T(A) \times A) \to T(T(A \times A)) \to T(A).$$
In the last step, we have used the monad multiplication.
Of course one has to check that the semiring axioms are satisfied (this requires some work), and that the isomorphism $T(A) \cong T(\{1\})^A$ respects the monad structure (this should be easy). I think the proof may appear somewhere in Durov's thesis.
Edit: It is 4.8.6 in Durov's thesis. But we have to require that the natural maps $T(A + B) \cong T(A) \times T(B)$ are not arbitrary, but rather induced by the monad structure in a certain way, see 4.8.1. Specifically, the projection $T(A + B) \to T(A)$ (and likewise $T(A + B) \to T(B)$) should be the $T$-module map which is induced by the map $A + B \to T(A)$ which is the monad unit on $A$ and the "zero map" $B \to \{1\} \to T(\emptyset) \to T(A)$ on $B$.
|
Can someone please explain why this is the first move when solving this integral. The integral which I am solving is $$\int_0^c\sqrt{c^2-x^2}~\mathrm dx$$ and the first thing which they suggest doing is setting $x=(\sqrt{a})/(\sqrt{b}) \sin u$. I understand everything after this but I do not comprehend where and why this step came to be.
| Maybe it is better to divide the procedure into two steps.
First by letting $X=x/c$ (here we assume $c>0$) we obtain
$$\int_0^c\sqrt{c^2-x^2}\,dx=c^2\int_0^1\sqrt{1-X^2}\,dX$$
Now it seems quite natural to use the substitution $X=\sin u$.
|
How to prove that the ratio of two RANDOM variables is still a RANDOM variable? I know that there is a similar post on this at the following link:
How to prove the ratio of two random variables is also a random variable
However, the answers were not very clear.
So, how to prove that $Q = \frac{X}{Y}$ is a random variable given that $X$ and $Y$ are random variables.
If the solution is using the product of two random variables, how can I prove it? And also, how can I prove that the inverse of a random variable is still a random variable?
Sorry if this question is too basic.
| Let $Y$ be a random variable; then it's easy to show that for each measurable function $f$, $f\circ Y$ is also a random variable, due to $(f\circ Y)^{-1}(A) = Y^{-1}\left(f^{-1}(A)\right)$.
Because $f$ is measurable so is $B=f^{-1}(A)$ for each measurable $A$ and because $Y$ is a RV so is $Y^{-1}(B) = Y^{-1}\left(f^{-1}(A)\right)$.
Take $f: \Bbb R \setminus\{0\} \to \Bbb R \setminus\{0\}, f(x) = \frac{1}{x}$ which is measurable because it's continuous gives you that $\frac{1}{Y}$ is a RV.
To show that the product of two RV is a RV is the same argument for the continuous hence measurable function $u(x,y) = xy$ and the fact that $(XY)^{-1} = \binom{X}{Y}^{-1}\circ u^{-1}$ and $\binom{X}{Y}^{-1}(A \times B) = \binom{X^{-1}(A)}{Y^{-1}(B)}$.
Consider that $\{A \times B \;|\; A,B\in\mathcal{B}(\Bbb R)\}$ is a generator of $\mathcal{B}(\Bbb R^2)$ which is stable under intersections so it's enough to show it for these kinds of sets…
|
Finding the determinant using row operations. So I have to find the determinant of $\begin{bmatrix}3&2&2\\2&2&1\\1&1&1\end{bmatrix}$ using row operations. From what I've learned, the row operations that change the determinant are things like swapping rows makes the determinant negative and dividing a row by a value means you have to multiply it by that value. Once you have an upper triangular matrix then you just multiply the diagonal and you should have the determinant. So how come I can't get the right answer? Here's what I did. I start by dividing the first row by 3. $3\begin{bmatrix}1&2/3&2/3\\2&2&1\\1&1&1\end{bmatrix}$. Then I subtract 2 times the first row from the second row $3\begin{bmatrix}1&2/3&2/3\\0&2/3&-1/3\\1&1&1\end{bmatrix}$. Then I subtract the first row from the third row. $3\begin{bmatrix}1&2/3&2/3\\0&2/3&-1/3\\0&1/3&1/3\end{bmatrix}$. Then I replace the third row by twice itself minus the second row. $3\begin{bmatrix}1&2/3&2/3\\0&2/3&-1/3\\0&0&1\end{bmatrix}$. So now I have an upper triangular matrix so now I just do $(3)(1)(2/3)(1)$ and I get 2. The answer to this question however is not 2 but 1. So where did I go wrong? If the things I read were true then I should've got the right answer but I didn't, which tells me that there's another row operation rule that no one told me about.
| I find it a bit easier to avoid fractions, therefore I would avoid dividing by 3 first. Instead, let's take 2R3-R2=R2 to obtain:
$\begin{bmatrix}3 & 2 & 2\\0 & 0 & 1\\1 & 1 & 1\end{bmatrix}$
Now, take R1-3R3=R3 to obtain
$\begin{bmatrix}3 & 2 & 2\\0 & 0 & 1\\0 & -1 & -1\end{bmatrix}$
Because we took 3R3, the value of the determinant changes: strictly, R3 = R1-3R3 multiplies it by -3, and the earlier R2 = 2R3-R2 multiplied it by -1, so the two sign changes cancel and we need only compensate by multiplying by 1/3. Leaving us with:
1/3$\begin{bmatrix}3 & 2 & 2\\0 & 0 & 1\\0 & -1 & -1\end{bmatrix}$
Now, interchange R2 and R3 to obtain
1/3$\begin{bmatrix}3 & 2 & 2\\0 & -1 & -1\\0 & 0 & 1\end{bmatrix}$
This will change the value of the determinant. We need to add a negative sign to compensate for the interchange. Leaving us with:
-1/3$\begin{bmatrix}3 & 2 & 2\\0 & -1 & -1\\0 & 0 & 1\end{bmatrix}$
Now, multiply the diagonal and then multiply by -1/3, leaving you with 1. (As for your original attempt: replacing the third row by twice itself minus the second row multiplies the determinant by 2, which you never compensated for; dividing your result 2 by that factor gives the correct value 1.)
|
Subgroups of nilpotent groups are subnormal
This lemma is taken from Robinson's A Course in the Theory of Groups. Why does $$\zeta_{i+1}G/\zeta_iG=\zeta(G/\zeta_iG)$$ imply that $H\zeta_iG\triangleleft H\zeta_{i+1}G$? I'll post my answer, if I find one.
| First of all, given any group $G$ and a subgroup $H$ one has that
$$Z(G)\triangleleft C_G(H)\triangleleft N_G(H)$$
So in fact $H\triangleleft H\,Z(G)$. Now consider the general case of a group $G$ with a normal subgroup $N$ and a subgroup $K$ containing $N$ such that $ K/N=Z(G/N)$, and take any $H\leq G$: then
$$HK/N=(HN/N)(K/N)= (HN/N)\,Z(G/N)$$
thus $HN/N\triangleleft HK/N$ and by the correspondence theorem $HN\triangleleft HK$.
Setting $N=\zeta_i(G)$ and $K=\zeta_{i+1}(G)$, the claim above follows.
|
How many ways can 20 different diplomats be assigned to 5 different continents? Well, there are 20 diplomats, and $any$ of the 5 continents can be assigned to them, so this is $5 \cdot 5 \cdot 5 \cdot 5 \cdots 5 = 5^{20}$ (any of the five for the first diplomat, any of the five for the second, etc...)
The main question here is:
What if each continent needs to have 4 diplomats each? How would I do this then?
| There are $\binom{20}{4}$ ways to assign four diplomats to the first continent, $\binom{16}{4}$ ways to assign four diplomats to the second continent, and so on. Thus the answer is
$$\binom{20}{4}\binom{16}{4}\binom{12}{4}\binom{8}{4}.$$
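A plain-Python check that this product agrees with the closed form $20!/(4!)^5$ (the last factor $\binom{4}{4}=1$ is omitted above):
from math import comb, factorial
prod = comb(20, 4) * comb(16, 4) * comb(12, 4) * comb(8, 4)
print(prod, factorial(20) // factorial(4)**5)   # both 305540235000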
|
When does $\sqrt{a b} = \sqrt{a} \sqrt{b}$? So, my friend showed me a proof that $1=-1$ in the following way:
$$1=\sqrt{1}=\sqrt{(-1)\times(-1)}=\sqrt{-1}\times\sqrt{-1}=i\times i=i^2=-1$$
At first sight, I stated "No, $\sqrt{ab}=\sqrt{a}\times\sqrt{b}$ is valid only for $a,b\in\mathbb{R}$ and $a,b\geq0$"
But, I remember that $\sqrt{-4}=\sqrt{4}\times\sqrt{-1}=2i$ which is true (I guess).
Was my statement true? But $\sqrt{ab}=\sqrt{a}\times\sqrt{b}$ is also valid if one of $a$ or $b$ is a negative real number. Why is it not valid for $a$ and $b$ both negative? If my statement was wrong, what is wrong with that proof?
| An alternate way to understand what is happening here is to note that
$$
1=e^{0i}
$$
When we take the square root, we have
$$
1=\sqrt{e^{0i}} = \sqrt{e^{-\pi i}\times e^{\pi i}}
$$
Notice that $e^{-\pi i}=e^{\pi i}=-1$.
Now, since we are working in polar form, we can evaluate the square roots consistently, arriving at
$$
1=e^{-\pi i/2}\times e^{\pi i/2} = -i\times i = 1
$$
Essentially, the problem lies in the "branch cut" that occurs with the square root operation - you must be careful with the evaluation.
To put it another way, $1=e^{2n\pi i}$ for all integer $n$, and the square root function has to respect its specific value (of $n$), as it can take multiple different values depending on that $n$. To get $1=-1$ as in the question, one must simultaneously use $1=e^{0i}$ and $1=e^{2\pi i}$.
|
Average number of arrivals of one poisson process before first arrival of other poisson process. I have two poisson processes: $N_t$ with rate $\lambda$ and $M_t$ with rate $\mu$. I have to calculate the average number of arrivals for $N_t$ before the first arrival of $M_t$.
This is my reasoning:
The average time for the first arrival of $M_t$ is $\mu$, and the average time between arrivals of $N_t$ is $\lambda$, independently of the moment I start to measure the time and counting the arrivals. So in a interval $[0, \mu]$ the average number of arrivals for $N_t$ is $\lambda\mu$ because for an arbitrary $t$ the average number of arrival for $N_t$ is $\lambda t$.
Am I right?
Edit: I applied the same reasoning to calculate the average arrivals for $M_t$ before the first arrival from $N_t$ and it gives $\lambda\mu$ too; there's something wrong here...
Another edit: facepalm the average time between arrivals for $M_t$ is $\frac{1}{\mu}$, so the number of arrivals for $N_t$ in the interval $[0, \frac{1}{\mu}]$ would be $\frac{\lambda}{\mu}$.
On the other hand, the average number of arrivals for $M_t$ before the first arrival of $N_t$ would be $\frac{\mu}{\lambda}$
I think this looks better now.
| Just get
$$E[N(\tau_1)]=\sum_{n=0}^\infty nP(N(\tau_1)=n)=\sum_{n=0}^\infty n\int_0^\infty P(N(t)=n)\mu e^{-t\mu}dt=\frac{\lambda}{\mu}$$
Where $\tau_1$ is the first arrival time of $M(t)$.
Your answer is correct.
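For extra reassurance, a Monte Carlo sketch (assuming NumPy; the rates here are illustrative) confirming $E[N(\tau_1)]=\lambda/\mu$:
import numpy as np
rng = np.random.default_rng(0)
lam, mu = 3.0, 2.0
tau1 = rng.exponential(1 / mu, size=200_000)   # first arrival time of M_t
counts = rng.poisson(lam * tau1)               # N(tau1) given tau1 is Poisson(lam*tau1)
print(counts.mean(), lam / mu)                 # both about 1.5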
|
Elegant generalization of power rule via limits Today I was thinking about how to generalize the derivative of $f(x) = x^n$. Of course, we know the following:
$$\frac{d}{dx}x^n = n\cdot x^{n-1}$$
But how can we show that the power rule behaves this way? I took to the limit process and came up with this:
$$\frac{d}{dx}x^n = \lim_{h\to 0}\frac{(x+h)^n-x^n}{h}$$
First I thought to express $(x+h)^n$ as a sum through the binomial theorem and rewrite the expression.
$$\frac{d}{dx}x^n = \lim_{h\to 0}\frac{1}{h}\left[\sum_{i=0}^n\binom{n}{i}x^{n-i}h^i\;-x^n\right]$$
Here's where I struggle to elegantly finish the generalization.
$$\frac{d}{dx}x^n = \lim_{h\to0}\frac{1}{h}\left[\binom{n}{0}x^n+\binom{n}{1}x^{n-1}h+\binom{n}{2}x^{n-2}h^2\;...\;-x^n\right]$$
Simplifying the first term:
$$\frac{d}{dx}x^n = \lim_{h\to0}\frac{1}{h}\left[{x^n}+\binom{n}{1}x^{n-1}h+\binom{n}{2}x^{n-2}h^2\;...\;-x^n\right]$$
Cancelling end terms:
$$\frac{d}{dx}x^n = \lim_{h\to0}\frac{1}{h}\left[\binom{n}{1}x^{n-1}h+\binom{n}{2}x^{n-2}h^2\;...\right]$$
Factoring out an $h$:
$$\frac{d}{dx}x^n = \lim_{h\to0}\;\frac{h}{h}\cdot\left[\binom{n}{1}x^{n-1}+\binom{n}{2}x^{n-2}h\;...\right]$$
Simplifying and using direct substitution:
$$\frac{d}{dx}x^n = \binom{n}{1}x^{n-1}+0+0\;...$$
We know this:
$$\binom{n}{1} = \frac{n!}{(n-1)!1!} = n$$
Thus:
$$\frac{d}{dx}x^n = n\cdot x^{n-1}$$
I know it may seem petty, but I hate having the points of ellipsis in a formal proof and overall it's a bit messy. How can I clean this up a bit so that it's more presentable?
| If it helps, you could retain the summation notation, assuming $n\ge2$:
$$\eqalign{
\Bigl(\sum_{i=0}^n\binom ni x^{n-i}h^i\Bigr)-x^n
&=x^n+nx^{n-1}h+\Bigl(\sum_{i=2}^n\binom ni x^{n-i}h^i\Bigr)-x^n\cr
&=nx^{n-1}h+h^2\sum_{i=2}^n\binom ni x^{n-i}h^{i-2}\cr}$$
and keep going from there.
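If you want to double-check the endpoint of the computation, here is a short SymPy sketch with a fixed illustrative exponent:
import sympy as sp
x, h = sp.symbols('x h')
n = 7                                            # any fixed positive integer
print(sp.limit(((x + h)**n - x**n) / h, h, 0))   # 7*x**6
print(sp.diff(x**n, x))                          # the same result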
|
Linear Regression about SSR I've been studying ANOVA.
But I wonder
$$SSR=\sum_{i=1}^{30}(\hat{y}_i-\bar{y})^2=(b_1)^2\sum_{i=1}^{30}(x_i-\bar{x})^2$$
In this equation, why does $(b_1)^2$ appear? And how does the summation over $y$ transform into one over $x$?
And finally, why do they use $b_1$ instead of $\beta_1$?
| Are you sure that this is the sum of squares of residuals? I would have expected $\displaystyle \sum_{i=1}^{30}(y_i - \hat{y}_i)^2$
Meanwhile you have something like $\hat{y}_i= b_0 + b_1 x_i$ and $\bar{y}=b_0 + b_1 \bar{x}$ from your linear regression, so $\hat{y}_i-\bar{y}=b_1(x_i-\bar{x})$, giving $$\displaystyle \sum_{i=1}^{30}(\hat{y}_i-\bar{y})^2 = \sum_{i=1}^{30}(b_1(x_i-\bar{x}))^2 = b_1^2 \sum_{i=1}^{30}(x_i-\bar{x})^2$$ but this is what I would expect ANOVA to call the sum of squares of treatment
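A numeric sketch of the identity $\sum(\hat y_i-\bar y)^2=b_1^2\sum(x_i-\bar x)^2$ (assuming NumPy; the synthetic data are my choice):
import numpy as np
rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=30)
b1, b0 = np.polyfit(x, y, 1)                  # least-squares slope and intercept
yhat = b0 + b1 * x
lhs = np.sum((yhat - y.mean())**2)
rhs = b1**2 * np.sum((x - x.mean())**2)
print(np.isclose(lhs, rhs))                   # True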
|
Replace variable by value in GAP I want to generate a list of maps in a function and return this list. For example the list of maps
$$ [k\mapsto k+1,\ k\mapsto k+2,\ k\mapsto k+3] $$
I thought I could do this by the following code:
get_maps := function()
local maps,i;
maps := [];
for i in [1..3] do
Add(maps,k->k+i);
od;
return maps;
end;
But the value of i changes inside the function. So in fact I get three identical mappings $k\mapsto k+3$.
How can I access the value of i inside the function instead of the variable i?
| What happens here is that when the list of k->k+i maps is returned, each map takes the current value of i (a local variable defined in get_maps), which is 3 after the for loop has been completed.
You can avoid this by placing the function creation within a function (in the example called construct_map): then the i added is always the local variable i of construct_map, which for each call will be a different variable, assigned to a different value.
Try the following:
get_maps := function()
local maps, i, construct_map;
maps := [];
construct_map := i -> (k -> k+i);
for i in [1..3] do
Add( maps, construct_map(i) );
od;
return maps;
end;
Then it works as desired:
gap> f:=get_maps();
[ function( k ) ... end, function( k ) ... end, function( k ) ... end ]
gap> List(f, t -> t(1));
[ 2, 3, 4 ]
However, all three functions will print as k->k+i, with the value of i hidden. If you really want the functions to be (e.g.) k->k+3, you would have to produce a string with the function definition (including the concrete number) and evaluate it.
|
Determining a probability of a random variable using its variance and expected value. Given a random variable $X : \mathbb R \to S$ where $S \subset \mathbb R$, its expected value $\mathbb E[X] = 10$ and its variance $var(X) = 20$ how can one prove that $\mathbb P(X > 1) > \frac{3}{4}$.
Trying to split the space $\mathbb R$ into two parts $]-\infty, 1]$ and $]1, \infty[$ doesn't seem to work and neither the inequality of Markov nor the inequality of Chebyshev seem to reduce the problem as $X$ isn't necessarily positive.
| Chebyshev tells us that $$P\left(|X-10|≥k\sigma\right)≤\frac 1{k^2}$$ Here, $\sigma=\sqrt {20}\sim 4.472135955$ so, taking $k=\frac 9{\sigma}=2.01246118$ the relevant expression is $$P(|X-10|≥9)≤ \frac{20}{81}= 0.24691358$$ Your result follows from this, since $\{X\le 1\}\subset\{|X-10|\ge 9\}$ and hence $P(X>1)\ge 1-\frac{20}{81}=\frac{61}{81}>\frac34$.
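The numbers, checked in plain Python:
import math
sigma = math.sqrt(20)
k = 9 / sigma
print(1 / k**2)        # 0.24691..., i.e. 20/81
print(1 - 1 / k**2)    # 0.75308... > 3/4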
|
Puzzle | Missing One Rupee I came across the following riddle.
A group of $30$ Indian merchants go to a hotel and stay there for a night. The hotel owner provides their dinner and breakfast. He serves dinner at the rate of $2$ plates/$\large{₹}$ and breakfast at the rate of $3$ plates/$\large{₹}$. When the merchants check out in the morning they are given a bill of $\large{₹}25$ for their food. As per the owner, $30$ dinner plates at the rate of $2$ plates/$\large{₹}$ come to $\large{₹}15$, and $30$ breakfast plates at the rate of $3$ plates/$\large{₹}$ come to $\large{₹}10$, hence $\large{₹}25$ $(15+10)$.
However the merchants give the owner $\large{₹}24$, claiming that dinner and breakfast together cost $5$ plates for $\large{₹}2$. Therefore, $60$ plates ($30$ of the night and $30$ of the morning) cost $\frac{2}{5}\times 60 = \large{₹}24$.
Whom do you favor, merchants or the owner, and why?
I am unable to decide who is correct as mathematically they both appear correct to me. Is there a way to find out as to which method is wrong and which is correct one?
| The merchants divided 60 plates by 5. They cannot do this, as the 60 consists of 30 dinner and 30 breakfast plates, while the 5 consists of 3 plates of breakfast and 2 plates of dinner.
If the merchants were correct, 60 divided by these 5 would give 12, so there would be 12 such combinations (3 breakfast and 2 dinner), which means the total number of breakfast plates should be 12*3=36 and the total number of dinner plates should be 12*2=24. But that is not the case, hence the merchants are incorrect.
|
How to check whether a complex multivariable function is convex?
Given a convex function $f : \mathbb R^2 \to \mathbb R$ and real numbers $a$ and $b$, I want to check whether the following function $g : \mathbb R^2 \to \mathbb R$ is convex.
First, I think I should use the lemma below:
So I start to write down $g(\alpha x_1 + (1-\alpha)y_1,\ \alpha x_2+(1-\alpha)y_2)$
what I get is like this:
Then I get stuck, the function I get is just too complex and I don't know how to do next. Did I do it in the wrong way? Can anyone give me some hints?
| Hints:
*Sums of convex functions are convex.
*Non-negative multiple of a convex function is convex.
*Composition of a convex function and an affine function is convex.
*Maximum of convex functions is convex.
|
Find the last digit of $3^{1006}$ The way I usually do is to observe the last digit of $3^1$, $3^2$,... and find the loop. Then we divide $1006$ by the loop and see what's the remainder. Is it the best way to solve this question? What if the base number is large? Like $33^{1006}$? Though we can break $33$ into $3 \times 11$, the exponent of $11$ is still hard to calculate.
| HINT:
Find the last digit of $3^{1006\bmod4}$.
Adding a formal solution, just in case anyone finds it useful:
You are looking for $3^{1006}\pmod{10}$.
Since $\gcd(3,10)=1$, by Euler's theorem, $3^{\phi(10)}\equiv1\pmod{10}$.
We know that $\phi(10)=\phi(2\cdot5)=(2-1)\cdot(5-1)=4$.
Therefore $3^{4}\equiv1\pmod{10}$.
Therefore $3^{1006}\equiv3^{4\cdot251+2}\equiv(\color\red{3^4})^{251}\cdot3^2\equiv\color\red{1}^{251}\cdot3^2\equiv1\cdot9\equiv9\pmod{10}$.
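Modular exponentiation reproduces the same last digit directly, and handles a larger base just as easily (plain Python):
print(pow(3, 1006, 10))    # 9
print(pow(33, 1006, 10))   # 9 as well, since 33 is congruent to 3 mod 10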
|
Metric for how symmetric a matrix is Given a square $N\times N$ matrix $A$, what is a measure of how symmetric $A$ is?
I can get the symmetric and antisymmetric parts of A as:
$A_{sym}=\frac{1}{2}(A+A^{T})$
and
$A_{anti}=\frac{1}{2}(A-A^{T})$
Is there some commonly used function, $F(A,A_{sym},A_{anti})$, that gives a measure of how symmetric a matrix is? E.g. something like the ratio of the determinants of $A_{sym}$ and $A_{anti}$?
| One simple possibility:
$s \equiv (|A_{sym}|-|A_{anti}|)/(|A_{sym}|+|A_{anti}|)$
Here $|\cdot|$ is whatever matrix norm you choose. Then $-1\le s \le +1$, with the lower bound saturated for an antisymmetric matrix and the upper bound saturated for a symmetric one.
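A minimal NumPy sketch of this score (the function name is mine; the Frobenius norm is one choice of the free parameter):
import numpy as np
def symmetry_score(A):
    sym, anti = (A + A.T) / 2, (A - A.T) / 2
    ns, na = np.linalg.norm(sym), np.linalg.norm(anti)
    return (ns - na) / (ns + na)
print(symmetry_score(np.array([[1., 2.], [2., 3.]])))    #  1.0 for a symmetric matrix
print(symmetry_score(np.array([[0., 1.], [-1., 0.]])))   # -1.0 for an antisymmetric matrix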
|
Determine the Truth Value of each of these Statements? I need help determining the truth value of each of these statements if the domain of each variable consists of all integers. Justify your answer.
*$\forall x\exists y (x = 3y + 1)$
*$\exists x\forall y (y^2 > x)$
If I take $x$ as $-5$ and take $y$ as $-2$ then it would be
$ -5 = 3(-2) + 1 $
$ -5 = -5$
True
So what I did is true only if I take those values. If I take $x$ as $-5$ and $y$ as $-5$ then the two sides wouldn't be equal, so it's False.
So I am just confused on how to do these types of questions, so can someone help me figure it out?
Thank you.
| For #1 your correct reading is "for all $x$ there exists a $y$ ..."
You correctly say that when $x= -5$ you can take $y= -2$.
Then you make a mistake. The way the sentence is written tells you the value of $y$ is allowed to depend on the value of $x$. All you have to do is find one once you know $x$. That's what you did starting with $x = -5$ to get $y = -2$. You clearly know enough algebra to find a $y$ that goes with any given $x$ you start with. (In this particular case there will be just one value of $y$.) Will that $y$ always be an integer?
For #2 you have to find some particular $x$ for which the statement is true for every possible $y$. Is there such a thing? (Hint: negative numbers are allowed.)
|
Query related to Eq. 3.471.9 of the book of Gradshteyn (Integrals, Series and Products) Equation no. 3.471.9 of Integrals, Series and Products (by Gradshteyn) is written below $$\int_0^{\infty}x^{v-1}e^{-\frac{\beta}{x}-\gamma x}dx=2\left(\frac{\beta}{\gamma}\right)^{\frac{v}{2}}K_{v}(2\sqrt{\beta \gamma})$$ Although it is mentioned that $Re(\beta)>0$ and $Re(\gamma)>0$, there is nothing written about $v$. So my question relates to the values of $v$: is the above equation valid for all possible real values of $v$? And if it is not valid, then how does one solve the above integral for general real values of $v$? Many thanks in advance.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\int_{0}^{\infty}x^{\nu - 1}\expo{-\beta/x - \gamma x}\dd x & =
\int_{0}^{\infty}x^{\nu - 1}\exp\pars{-\root{\beta\gamma}\bracks{\root{\beta \over \gamma}{1 \over x} + \root{\gamma \over \beta}x}}\,\dd x
\end{align}
Set $\ds{x = \root{\beta \over \gamma}\expo{-\theta}}$:
\begin{align}
\int_{0}^{\infty}x^{\nu - 1}\expo{-\beta/x - \gamma x}\dd x & =
\int_{\infty}^{-\infty}\pars{\beta \over \gamma}^{\pars{\nu - 1}/2}
\expo{\pars{1 - \nu}\theta}
\expo{-2\root{\beta\gamma}\cosh\pars{\theta}}\bracks{-\pars{\beta \over \gamma}^{1/2}\expo{-\theta}}\dd\theta
\\[5mm] & =
\pars{\beta \over \gamma}^{\nu/2}\int_{-\infty}^{\infty}\expo{-\nu\theta}
\expo{-2\root{\beta\gamma}\cosh\pars{\theta}}\dd\theta
\\[5mm] & =
\pars{\beta \over \gamma}^{\nu/2}\int_{0}^{\infty}\pars{\expo{-\nu\theta} +
\expo{\nu\theta}}
\expo{-2\root{\beta\gamma}\cosh\pars{\theta}}\dd\theta
\\[5mm] & =
2\pars{\beta \over \gamma}^{\nu/2}\int_{0}^{\infty}\cosh\pars{\nu\theta}
\expo{-2\root{\beta\gamma}\cosh\pars{\theta}}\dd\theta
\\[5mm] &=
\bbx{2\pars{\beta \over \gamma}^{\nu/2}
\,\mrm{K}_{\nu}\pars{2\root{\beta\gamma}}}
\end{align}
where $\ds{\,\mrm{K}_{\nu}}$ is a modified Bessel function of the second kind. Note that nothing in the above derivation restricts $\nu$, so the identity holds for every real value of $\nu$ (given $\Re\beta>0$ and $\Re\gamma>0$), which answers the question.
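A numeric spot-check of 3.471.9 for a non-integer, negative order (assuming SciPy; the test values are arbitrary):
import numpy as np
from scipy.integrate import quad
from scipy.special import kv
beta, gamma, v = 1.3, 0.7, -2.4
lhs, _ = quad(lambda x: x**(v - 1) * np.exp(-beta / x - gamma * x), 0, np.inf)
rhs = 2 * (beta / gamma)**(v / 2) * kv(v, 2 * np.sqrt(beta * gamma))
print(lhs, rhs)   # agree to quadrature accuracy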
|
How to divide 2n members of the club into disjoint teams of 2 members each, when the teams are not labelled? I am doing this and getting $\sum_{i=0}^{2n-2} \frac{(2n-i)!}{2!(2n-i-2)!}$ but the answer given is $\frac{(2n)!}{2^n n!}$. I even tried to get this by simplifying my result, but I do not get the same thing.
| Look at the problem of arranging the letters in the word MAMA. It is $\binom{4}{2}$, because it enumerates all ways to get 2 disjoint subsets from a set of 4 numbers; compare to the case when the Ms and As are of different colors: you need to multiply it by $2!$. The first case is your situation.
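A brute-force sketch in plain Python comparing a recursive pairing count with $(2n)!/(2^n\,n!)$ for a small $n$:
import math
def pairings(people):
    # yield every way to split the tuple into unordered pairs
    if not people:
        yield ()
        return
    first, rest = people[0], people[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield ((first, rest[i]),) + tail
n = 3
count = sum(1 for _ in pairings(tuple(range(2 * n))))
print(count, math.factorial(2 * n) // (2**n * math.factorial(n)))   # 15 15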
|
Calculating the "stillness" of numerical dataset I am a software developer and not very proficient in math. There is a dataset of type [0.84980994, 0.117350034, 47.58483789, ....]
I want to calculate some measure of the rate of change between $X_n$ and $X_{n-1}$ of the list. What kind of measure can be used?
EDIT: Maybe this is not clear from the original question, but I want to compare the evaluation of the dataset with an empirically derived value to test a hypothesis.
My hypothesis is that the level of change of the dataset (differences between the values) is less than a specified threshold.
Should also note that the difference matters only between neighboring samples, not the whole set of values.
| You seem a little vague as to the criterion for stability.
Looking into the literature of a couple of methods that have been
successfully used may help you crystallize what you mean by stability.
Control charts, long and widely used in the field of quality management,
typically consider a variety of indicators that a process may have
gone 'out of control'. [Roughly, a few criteria are: too many observations in a row above
(or below) the mean; too many consecutively increasing (or decreasing)
observations; going beyond limits such as $\bar X \pm S$, where
the mean $\bar X$ and the SD $S$ may be fixed (based on prior data)
or updated at each new observation.]
Also, there are a number of criteria for stability vs. change from
past behavior in the field of time series, especially as used by
economists. You might start by looking at 'autocorrelation' for
various 'lags'. The autocorrelation function (ACF) is also used by statisticians
using Markov Chain Monte Carlo methods to judge when or whether a
simulated Markovian progress reaches 'steady state'.
Also, there is the statistical idea of outliers, often 'detected' by
boxplots or (especially among regression residuals) by Studentized ranges
of excessive absolute value.
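For the narrow hypothesis stated in the question (all neighboring differences below a threshold), a minimal sketch assuming NumPy (names and threshold are illustrative):
import numpy as np
def is_still(samples, threshold):
    return bool(np.all(np.abs(np.diff(samples)) < threshold))
print(is_still([0.84980994, 0.117350034, 47.58483789], threshold=1.0))   # False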
|
What is a spectral invariant? The picture below is from The Spectrum of the Laplacian in Riemannian
Geometry. What is the meaning of "spectral invariant"? I googled "spectral invariant" and found it is a notion in symplectic geometry. But seemingly, there is nothing about symplectic geometry here.
| The term spectral invariant refers to an object, such as a function $Z$ of one real variable, defined in terms of a Riemannian manifold $(M, g)$ but that depends only on the spectrum of the $g$-Laplacian on the space of square-integrable functions on $M$.
Isometric manifolds obviously have equal spectral invariants, but the converse is not a priori apparent (and in fact not true, hence the concept of isospectral manifolds and the field of spectral geometry).
|
When $0<a<b$, $x,y>0$ and $x+y=n$, how to prove $a^n<a^xb^y<b^n$? I am stuck after the following step.
$a^{n+1}<a^{x+1}b^{y}$ and $a^{x}b^{y+1}<b^{n+1}$.
Here $x+y+1=n+1$ but is it enough to prove the result by induction?
Just realised the statement could be rewritten as $a^{n}<a^{x}b^{n-x}<b^{n}$.
Couldn't prove it by induction but got an idea for an alternate proof from the comment of @kingW3 and the answer of @Arnaldo Nascimento.
Proof:
For $0<x$, $0<y$, and $0<n$.
$a^{y}<b^{y}\Rightarrow a^{y}a^{x}<b^{y}a^{x}\Rightarrow a^{n}=a^{x+y}<b^{y}a^{x}$ and $a^{x}<b^{x}\Rightarrow a^{x}b^{y}<b^{x}b^{y}\Rightarrow a^{x}b^{y}<b^{x+y}=b^{n}$. Therefore, $a^{n}<a^{x}b^{y}<b^{n}$.
| $a^x=a^{n-y}=a^n.a^{-y}$ and $b^y=b^nb^{-x}$ so
$$a^xb^y=\frac{a^nb^n}{a^yb^x}<\frac{a^nb^n}{a^ya^x}=\frac{a^nb^n}{a^{x+y}}=b^n$$
$$a^xb^y=\frac{a^nb^n}{a^yb^x}>\frac{a^nb^n}{b^yb^x}=\frac{a^nb^n}{b^{x+y}}=a^n$$
and then
$$a^n<a^xb^y<b^n$$
|
Is $p \rightarrow \lozenge (q \rightarrow q)$ a tautology in K? Intuitively, it is a tautology. Imagine two possible worlds $m0$ and $m1$, such that $m1$ is accessible from $m0$, i.e., we have the following scheme of possible worlds: $m0 \rightarrow m1$. Whatever the truth value of $q$ in $m1$, $\lozenge(q \rightarrow q)$ is true in $m0$. It follows that $p\rightarrow \lozenge(q\rightarrow q)$ is true in $m0$, whatever the truth value of $p$ in $m0$ and $m1$. It is a tautology. But I've evaluated this expression using MOLTAP, and the result was NOT VALID. Is MOLTAP wrong?
| Yet another way to see that $p\rightarrow\Diamond(q\rightarrow q)$ is not a tautology in $K$ is as follows: if it were, then $T\rightarrow\Diamond(q\rightarrow q)$ would be a theorem, and then so would $\Diamond(q\rightarrow q)$, i.e. $\Diamond T$, where $T$ stands for $true$. But $\Diamond T$ is not a theorem in $K$. It is only a theorem in modal logic $D$, which corresponds to serial frames. In fact $\Diamond T$ is equivalent to the $D$ axiom $\Box p \rightarrow \Diamond p$; see the answer to the third question below this one.
|
Group homomorphisms form an abelian group
Let $\mathrm{Hom}(G_1, G_2) = \{f:G_1\to G_2 \mid f $ is a morphism $\}$. Show that $(\mathrm{Hom}(G_1, G_2), +)$ is an abelian group.
I had problems just with the closure property! How do you show that $f+g$ is a morphism? I should add that $(G_1,\cdot)$ and $(G_2,\cdot)$ are abelian groups.
| Show that it has the required properties. In particular,
$$\begin{align}(f+g)(x+y)&=f(x+y)+g(x+y)\\&=f(x)+f(y)+g(x)+g(y)\\&=f(x)+g(x)+f(y)+g(y)\\&=(f+g)(x)+(f+g)(y)\end{align}$$
|
What is an example of a symmetric operator with non-real spectrum? I've been wracking my brains but all the examples I can think of with differential operators end up having either real eigen-values and/or intractable (possibly) complex eigen-values.
For example, take
\begin{align}
D(T)&=\{u\in L^2(0,\pi)~|~u''\in L^2(0,\pi),iu(0)=u'(0),u(\pi)=0\}, \\
Tu&=-u''.
\end{align}
Then integrating by parts shows it's symmetric, and the function $\sin(\kappa x)-i\kappa\cos(\kappa x)$ is an eigen-function for the eigen-value $\kappa^2$ which must satisfy $\tan(\kappa\pi)=i\kappa$. From here I must show that valid $\kappa$'s must be non-real (we exclude $0$ since $T$ has kernel $\{0\}$). I am not sure how to proceed.
Is there a better way? Is it too much of me to ask to be able to compute explicitly a complex eigen-value, as opposed to simply establishing the existence of some other piece of the spectrum, e.g., continuous, essential, etc.?
| $i\frac{d}{dx}$ defined on functions of compact support in $(0,\infty)$ (i.e. with no boundary condition at $0$).
The intuition is that $i\frac{d}{dx}$ is the infinitesimal generator of translations on $(-\infty,\infty)$ but this can't work on $(0,\infty)$ .
|
Is the following intersection complete? In $\mathbb{P}^n$ with coords $[z_0,z_1,...,z_n]$, denote by $g$ the action
\begin{equation}
g:z_i \rightarrow z_{i+1},
\end{equation}
with $z_{n+1}$ identified with $z_0$, so $g$ generates a group $G$ of order $n+1$, which acts on the homogeneous coordinates of $\mathbb{P}^n$. Suppose $f$ is an irreducible homogeneous polynomial of degree $k \geq 2$ such that the $g^j.f$ are all different for $j=0,...,n$. We also assume that $f(1,1,...,1) \neq 0$ to avoid the trivial case! Is the intersection of these $n+1$ hypersurfaces $g^j.f=0$ a complete intersection in $\mathbb{P}^n$?
If not, what can we say about the intersection, e.g. its dimension or degree?
| I doubt whether anything so general can be said. Just to illustrate, let me give an example. Let $F_d=\sum z_i^d$ and consider $f=F_{d+1}-2z_0 F_d$ (I am working over complex numbers and the 2 above in particular assures $f(1,\ldots,1)\neq 0$). If $n$ is not too small, $f$ is irreducible, $g^if$ are distinct and the intersection of all these is just the intersection of $F_{d+1}, F_d$. I am sure more complicated examples can be constructed to make it impossible to say anything general about dimension and degree.
|
Why is $\mathbb N \times \mathbb N$ not a subset of $\mathbb N \times \mathbb N \times \mathbb N \times \mathbb Q \times \mathbb Q$? Why is $\mathbb N \times \mathbb N$ not a subset of $\mathbb N \times \mathbb N \times \mathbb N \times \mathbb Q \times \mathbb Q$?
| Every element of $\Bbb N\times\Bbb N\times\Bbb N\times\Bbb Q\times\Bbb Q$ is an ordered $5$-tuple of the form $\langle n_1,n_2,n_3,q_1,q_2\rangle$, where $n_1,n_2$ and $n_3$ are in $\Bbb N$, and $q_1$ and $q_2$ are in $\Bbb Q$. An element of $\Bbb N\times\Bbb N$ is an ordered pair $\langle n_1,n_2\rangle$, where $n_1$ and $n_2$ are in $\Bbb N$. This isn’t even the right kind of object to belong to $\Bbb N\times\Bbb N\times\Bbb N\times\Bbb Q\times\Bbb Q$: it’s not an ordered $5$-tuple.
What is true is that $\Bbb N\times\Bbb N\times\Bbb N\times\Bbb Q\times\Bbb Q$ has many subsets that have natural bijections with $\Bbb N\times\Bbb N$. Some of them are:
$$\begin{align*}
&\Bbb N\times\Bbb N\times\{0\}\times\{0\}\times\{0\}\\
&\Bbb N\times\{0\}\times\Bbb N\times\{0\}\times\{0\}\\
&\{0\}\times\Bbb N\times\Bbb N\times\{0\}\times\{0\}\\
&\{0\}\times\{0\}\times\{0\}\times\Bbb N\times\Bbb N
\end{align*}$$
The last one works because $\Bbb N\subseteq\Bbb Q$.
|
Limit of $\left(1+\frac{2}{n^2}\right)^n $
Compute$$\lim_{n\to\infty}\left(1+\frac{2}{n^2}\right)^n $$
I don't know how to do it without using continuity of the exponential function.
I mean I cannot do this:
$$\lim_{n\to\infty}a_n =a \wedge \lim_{n\to\infty}b_n =b\Rightarrow \lim_{n\to\infty}{a_n}^{b_n} =a^b$$
| Hint: the expression equals $[(1+2/n^2)^{n^2}]^{1/n}.$
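A one-line confirmation of the resulting limit (assuming SymPy):
import sympy as sp
n = sp.symbols('n', positive=True)
print(sp.limit((1 + 2/n**2)**n, n, sp.oo))   # 1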
|
Help finding $(fg)(-2)$ So I just need a confirmation really, as my friend and I don't agree on the answer.
Problem: let $f(x)= \frac{x^2}{1-x^2}$ and $g(x)=\log_3 (x+3)+1$
Find $fg(-2)$
$f(-2)\cdot g(-2)$
$-4/3\cdot 1= -4/3$
But my friend got 1. Don't you have to find $f(-2)$ and $g(-2)$ first and then perform the multiplication or what?
| I am assuming that you mean $f \circ g(-2)$ ... which means you first find $g(-2)$, which is 1, and then find $f(1)$, which is undefined (the denominator $1-x^2$ vanishes at $x=1$) ... so I'm not really sure what method you are using, but both of the answers seem wrong
|
Is there an integer $n>0$ for which $\displaystyle\frac{n^4+(n+1)^4}{n^2+(n+1)^2}$ is an integer? I have tried to show that
$$\displaystyle \gcd (n^4+(n+1)^4,n^2+(n+1)^2 )=1$$
for every positive integer $n$, using standard number-theoretic tools such as the Bézout and Gauss theorems, but I did not succeed. I am now interested in finding a fixed integer $n>0$ for which the ratio:
$$\displaystyle\frac{n^4+(n+1)^4}{n^2+(n+1)^2}$$
is an integer, if possible.
Thank you for any help.
| $$
\frac{n^4+(n+1)^4}{n^2+(n+1)^2}
=
n^2 + n + \frac32 - \frac1{2 (2 n^2 + 2 n + 1)}
$$
This reduces the question to when is $\displaystyle\frac12 - \frac1{2 (2 n^2 + 2 n + 1)}$ an integer?
The only solutions are $n=0$ and $n=-1$.
|
If $T$ is an operator on a vector space $V$ over $\mathbb{R}$, can the eigenvalues be complex? So if we have an operator $T \in \mathcal{L}(V)$, where $V$ is a vector space over a field $F$, and $F$ = $\mathbb{R}$ or $\mathbb{C}$. Then, the eigenvalues $\lambda$ could be real- or complex-valued. Correct?
Now, if $T \in \mathcal{L}(V)$ but $F$ = $\mathbb{R}$, can $T$ have complex-valued eigenvalues? If so, what criteria must be met?
Thanks. I'm genuinely curious.
| Suppose $T$ is an operator on a real vector space. The minimal polynomial of $T$ may have complex roots, but only the real roots are actually eigenvalues.
Example: Let $T$ be the transformation on $\mathbb{R}^2$ whose matrix with respect to the standard basis is $$ A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$
The minimal polynomial of $T$ is $x^2+1$, whose roots are $i$ and $-i$. But there are no non-zero vectors $v \in \mathbb{R}^2$ such that $Av = iv$ or $Av = -iv$.
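Numerically, the complex roots do show up as eigenvalues once you allow $\mathbb{C}$ (a NumPy sketch):
import numpy as np
A = np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.linalg.eigvals(A))   # [0.+1.j  0.-1.j], and no real eigenvalues exist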
|
Prove G is abelian. Let G be a group. Prove or disprove: We have $(g_1 * g_2)* g_3 = (g_3 * g_2) * g_1 \forall g_1, g_2, g_3 \in G$ if and only if G is abelian.
I am currently studying my old notes but it seems I found this intuitively easy to prove before that I did not write out a formal proof, but am unable to prove it now. Any suggestions are helpful.
| The backward direction (if $G$ is abelian, then the statement in your question holds) should be straightforward - use associativity and the abelian property to change the left-hand side into the right-hand side.
For the forward direction, note that $g_1,g_2$, and $g_3$ have their relative positions swapped in the equation. So in particular, if you set one of them, say $g_1$, equal to the group identity element, then that equation says $g_2*g_3=g_3*g_2$. You are allowed to do so because the equation holds for all $g_1\in G$, and the resulting equation $g_2*g_3=g_3*g_2$ then also holds for all $g_2,g_3 \in G$.
|
Composition of functions is continuous?
Let $f$ and $g$ be two functions defined from $[0,1]$ to $[0,1]$ with $f$ strictly increasing. Then
*if $g$ is continuous, is $f\circ g$ continuous?
*if $f$ is continuous, is $f\circ g$ continuous?
*if $f$ and $f\circ g$ are continuous, is $g$ continuous?
Here, $f\circ g$ denotes the composition of $f$ and $g$. I think the answer to the third is yes, using the fact that the preimage of an open set under a continuous map is open? Any ideas? Thanks.
| I think, for this kind of question, it's best to go with your gut and see if you can't produce a few examples that corroborate or refute it before trying to prove it. Can you think of any examples or counterexamples to either (1) or (2)? Your proposed answer to (3) is indeed correct, so let's focus on (1) and (2).
For (2), the only things we're given are that $f$ is continuous and strictly increasing. But $g: [0,1] \to [0,1]$ could be pretty much anything! So, your first hunch should probably be "no, $f \circ g$ need not be continuous." Can you find a counterexample?
For (1), try to use the same type of reasoning.
|
An elementary exercise from modules over algebras I was trying to prove the following equivalence of statements: let $V$ be an $A$-module, where $A$ is an algebra of finite dimension over a field $F$. Then the following are equivalent:
1) $V$ is completely reducible $A$-module (i.e. it is direct sum of simple $A$-submodules).
2) $J(A)V=0$, where $J(A)$ is Jacobson radical of $A$.
I didn't get any direction to proceed; any hint?
| If $J(R)$ annihilates $V$ (writing $R$ for the algebra $A$), then $V$ is also an $R/J(R)$ module.
Since $R/J(R)$ is Artinian, it is a semisimple ring, so $V$ is a semisimple $R/J(R)$ module. But $R$ and $R/J(R)$ share the same simple modules, so $V$ is also a semisimple $R$ module.
For the other direction, I think you've already seen that $J(R)$, which annihilates all simple $R$ modules, obviously annihilates all semisimple $R$ modules.
|
Transform expression from $2^{-n}$ to $x \times 10^{-m}$ I'm trying to understand how floating-point numbers are stored in the computer. While trying to understand it I ran into the following kind of transformation:
$2^{-n} \approx x \times 10^{-m}$
For example:
$2^{-1074} \approx 5 \times 10^{-324}$
Can you please explain to me how this transformation from the base-2 expression to the base-10 expression is performed?
Thanks in advance
| Log tables are what's used by a computer, as noted in the accepted answer.
However, you can approximate powers of two into powers of ten in your head, if you know that:
$2^{10} = 1024 \approx 1000 = 10^3$
This is a very rough approximation, but worth knowing. It gets you in the right neighborhood.
Applying it to $2^{-1074}$, we get:
$2^{-1074} = 2^{-4} \times 2^{-1070} = 2^{6} \times 2^{-1080} = 2^{6} \times (2^{10})^{-108} \approx $
$2^{6} \times (10^3)^{-108} = 2^{6} \times 10^{-324} = 64 \times 10^{-324} \approx 6 \times 10^{-323} $
However, the rounding error involved adds up to a full order of magnitude in this case; the approximation given in your question is more accurate:
$5 \times 10^{-324}$
Still, this method is worth knowing for sanity-checking an answer within a few orders of magnitude in your head.
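The exact comparison is easy to reproduce in plain Python:
from decimal import Decimal
print(Decimal(2) ** -1074)     # 4.94...E-324, i.e. roughly 5e-324
print(5e-324 == 2.0 ** -1074)  # True: the literal rounds to this smallest positive double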
|
Why do we think of ${\mathbb{R}}^2$ as a plane? It seems like everyone assumes there's an obvious answer to this question but I just can't see it. ${\mathbb{R}}^2$ is just defined as the set of all 2-tuples of real numbers, i.e.
${\mathbb{R}}^2 := \{(a,b)| a,b \in\mathbb{R} \}$
The $xy$-plane clearly has a lot more structure in it than just being a set of points. There is a definitive order to the arrangement of points. We can define curves on it (maybe even closed curves with some enclosed area). We can talk about distances between 2 points & so on. So where does this additional structure come from & how does it relate to the set ${\mathbb{R}}^2$?
| There is a lot of additional structure, yes. However, we usually "construct" this structure on top of the "raw plane" $\mathbb R^2$
*The notion of distance is a metric
*Closeness is a topology
*Curves / solutions of polynomials are a variety
*...
However, we always start with $\mathbb{R}^2$ and add structure on top of it. The choice of structure depends on what you want to study currently!
|
Evaluating $\int_{ - \infty }^\infty {\frac{{{e^{7\pi x}}}}{{{{\left( {{e^{3\pi x}} + {e^{ - 3\pi x}}} \right)}^3}\left( {1 + {x^2}} \right)}}dx}$
How to evaluate this integral?
$$\int_{ - \infty }^\infty {\frac{{{e^{7\pi x}}}}{{{{\left( {{e^{3\pi x}} + {e^{ - 3\pi x}}} \right)}^3}\left( {1 + {x^2}} \right)}}dx}$$
Maybe we can start from $$\int_0^\infty {\frac{{dx}}{{({x^2} + 1)\cosh ax}}} = \frac{1}{2}\left[ {{\psi _0}\left( {\frac{a}{{2\pi }} + \frac{3}{4}} \right) - {\psi _0}\left( {\frac{a}{{2\pi }} + \frac{1}{4}} \right)} \right]$$ and then take the derivative with respect to $a$, but I failed to solve it!
| Symmetrizing the integrand (only the even part of $e^{7\pi z}$ contributes) and using $(e^{3\pi z}+e^{-3\pi z})^3=8\cosh^3(3\pi z)$, we get
$$I=\frac{1}{8} \int_{-\infty}^{\infty}
\frac{\cosh(7\pi z)}{(1+z^2)\cosh^3(3\pi z)} \text{d}z$$
Now we're considering the function
$$f(z)=\frac{\cosh(7\pi z)\psi^{(0)}\left ( 1-iz\right ) }{\cosh^3(3\pi z)}$$
Apply the residue theorem and calculate the residues at $$z=\frac{i}{6},\frac{i}{2},\frac{5i}{6}$$
We finally obtain
$$
\int_{-\infty}^{\infty} \frac{e^{7\pi z}}
{(e^{3\pi z}+e^{-3\pi z})^3(1+z^2)} \text{d}z
=\frac{3375\pi^3-12000\sqrt{3}\pi^2+25760\pi+26784\sqrt{3} }{27000\pi^2}
$$
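A numeric spot-check of this closed form (assuming SciPy; the integrand is rewritten in an overflow-safe form):
import numpy as np
from scipy.integrate import quad
def f(z):
    # e^{7 pi z} / (e^{3 pi z} + e^{-3 pi z})^3, with e^{9 pi |z|} factored out of the cube
    return np.exp(7*np.pi*z - 9*np.pi*abs(z)) / ((1 + np.exp(-6*np.pi*abs(z)))**3 * (1 + z*z))
lhs, _ = quad(f, -np.inf, np.inf)
pi, s3 = np.pi, np.sqrt(3)
rhs = (3375*pi**3 - 12000*s3*pi**2 + 25760*pi + 26784*s3) / (27000*pi**2)
print(lhs, rhs)   # the two values should agree, at about 0.10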
|
Inhibited population growth versus disease growth The rate of population growth within a contained system is proportional to the current population $y$ and the distance between $y$ and the limit capacity $L$. The ODE is given by:
$\frac{dy}{dx} = ky\left(1 - \frac{y}{L}\right)$
The rate of disease growth within a population $L$ is proportional to those already infected, $y$, and those not yet infected, $(L-y)$. This ODE is given by:
$\frac{dy}{dx} = ky(L - y)$
What is the intuitive difference between the 2 equations?
The rate of growth in each case tends to zero as the population tends to the respective limit, but how does the approach behaviour differ between the two?
| Hint: if you rewrite the first equation as
$$\frac{dy}{dx} = \frac {ky} {L} (L-y)$$
you can see that the difference from the second equation is just the constant of proportionality in the RHS ($k/L$ in place of $k$).
|
Number of ways a laser will bounce on a circular reflective material in 4 times Is there a formula for the number of ways a laser can bounce off a circular reflective surface $4$ times, returning to its original position?
So the correct answer is 4, but is there a formula?
Here's a picture showing the 4 ways:
| The points where the beam first hits the circle are at angles $\frac{2k\pi}{n+1}$ from the initial point. If $\gcd(k,n+1)\gt1$, then the beam will return to the start point after $\frac{n+1}{\gcd(k,n+1)}-1$ bounces, so we only want to count the number of $1\le k\le n$ so that $\gcd(k,n+1)=1$. This is $\phi(n+1)$ where $\phi$ is the Euler Totient Function.
Since $\phi(4+1)=4$, we get $4$ ways.
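The totient computation, as a plain-Python sketch:
from math import gcd
def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)
n = 4              # number of bounces
print(phi(n + 1))  # 4 ways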
|
Find the power of operator I have an operator $U$ defined by $Ux(t)=x(t) + \int_0^1 x(st) \, ds$.
Goal is to find $U^2$ without iterated integrals.
I start with:
$$U(Ux(t))=x(t) + \int_0^1 x(st)\,ds+\int_0^1 (x(qt) + \int_0^1 x(sqt)\,ds )\, dq= \\ =x(t) + 2\int_0^1 x(st) \, ds + \int_0^1 \int_0^1 x(sqt) \, ds\,dq.$$
I have a problem with the transformation of $\int_0^1 \int_0^1 x(sqt) \, ds\,dq$. I have used integration by parts for the internal integral, but I do not know what to do next.
| Let $s=\frac{u}{q}$, $ds=\frac{1}{q}\,du$, and integrate by parts
\begin{align}
\int_0^1 \int_0^1 x(sqt)\,ds\,dq &= \int_0^1 \frac 1 q \int_0^q x(ut)\,du\,dq \\
& = \left.\ln q\int_0^q x(ut)\,du\right|_{q=0}^{q=1}-\int_0^1 (\ln q)x(qt) \, dq \\
& = -\int_0^1 x(qt)\ln q \,dq.
\end{align}
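A numeric check of this identity at a fixed $t$ with a test function $x(t)=\cos t$ (assuming SciPy):
import numpy as np
from scipy.integrate import dblquad, quad
t = 2.0
lhs, _ = dblquad(lambda s, q: np.cos(s * q * t), 0, 1, 0, 1)
rhs, _ = quad(lambda q: -np.cos(q * t) * np.log(q), 0, 1)
print(lhs, rhs)   # equal up to quadrature error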
|
Exponential distribution probability example If the time until a neutrino appears follows an exponential distribution with mean $1$ minute, what is the probability that no neutrinos arrive in two minutes?
So the density would be $\lambda e^{-\lambda x}$ where $\lambda=1$; from here, to get $P(X\leq x)$ you take $\int_0^x e^{-s} \, ds = 1-e^{-x}$. I have no idea where I would go from here. Would I do $1-P(X\leq 2)$? The whole exponential distribution doesn't really make sense to me.
| This problem can be solved via the Poisson process.
Let $N_t$ denote the number of neutrinos up to time $t$.
It is well-known that $N_t\sim Poi(\lambda t)$
So the required probability $$P(N_2-N_0=0)=P(N_2=0)=e^{-2}$$
Disclaimer: I have assumed that by $2$ minutes, you meant the first $2$ minutes.
If not, notify me, I will change the answer accordingly.
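Both the formula and a quick simulation give the same number (a sketch assuming NumPy):
import numpy as np
print(np.exp(-2))                                     # 0.1353...
rng = np.random.default_rng(0)
print((rng.exponential(1.0, 500_000) > 2).mean())     # about 0.1353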
|
Understanding a geometric argument in the proof of the strong maximum principle for elliptic operators in Evans's PDE Here is the strong maximum principle in Evans's Partial Differential Equations:
Here $U\subset\mathbb{R}^n$ is open and bounded. Also,
The proof is very short once one has Hopf's lemma.
Here is my question:
Would anyone elaborate on the underlined sentence, i.e., why does such a $y$ exist? (I don't have any intuition at all for why this should be true.)
| We are assuming $u$ attains its maximum inside $U$, so $C$ is nonempty, and we are assuming it's non-constant, so $V$ is nonempty. And $V$ is open by the continuity of $u$. Since $U$ is connected and $U = C \cup V$, $C$ must not be open. So there is a point $z \in C$ which is not an interior point of $C$. Since $z \in U$ and $U$ is open, we can find $r>0$ such that $B(z,r) \subset U$. In particular, $d(z,\partial U) \ge r$. And since $z$ is not an interior point of $C$, we can find $y$ with $|z-y| < r/2$ and $y \notin C$. Since $y \in U$ and $U=C \cup V$, we must have $y \in V$. And we now have $d(y,C) \le |y-z| < r/2 < d(y, \partial U)$.
|
Determine the vectors of components For the vector space $\mathbb{R}[X]$ of polynomials of degree $\leq 3$ we have the following three bases:
$$B_1 = \{1 - X^2 + X^3, X - X^2, 1 - X + X^2, 1 - X\} , \\
B_2 = \{1 - X^3, 1 - X^2, 1 - X, 1 + X^2 - X^3\}, \\
B_3 = \{1, X, X^2, X^3\}$$
How can we determine the following vectors of components in $\mathbb{R}^4$?
$\Theta_{B_1}(b)$ for all $b \in B_1$
and
$\Theta_{B_3}(b)$ for all $b \in B_1$
Could you give me hint?
Do we use the transformation matrix? If yes, how?
EDIT:
I have seen the following notes :
$\Theta_{B_1}(b\in B_1)=$ the $i$-th column of the identity matrix, since each basis vector is mapped to itself, and $\Theta_{B_3}(b\in B_1)=$ the $i$-th column of the matrix of $B_1$ below.
Why does this hold?
| Hint:
$$
B_1=
\begin{bmatrix}
1&X&X^2&X^3
\end{bmatrix}
\begin{bmatrix}
1&0&1&1\\
0&1&-1&-1\\
-1&-1&1&0\\
1&0&0&0
\end{bmatrix}
$$
$$
B_2=
\begin{bmatrix}
1&X&X^2&X^3
\end{bmatrix}
\begin{bmatrix}
1&1&1&1\\
0&0&-1&0\\
0&-1&0&1\\
-1&0&0&-1
\end{bmatrix}
$$
$$
B_3=
\begin{bmatrix}
1&X&X^2&X^3
\end{bmatrix}
\begin{bmatrix}
1&0&0&0\\
0&1&0&0\\
0&0&1&0\\
0&0&0&1
\end{bmatrix}
$$
Note that the columns of the matrices above are the coordinate vectors of the elements of the various bases with respect to $B_3$.
For example, consider the first vector of $B_1$, $1-X^2+X^3$
$$
\Theta_{B_3}\!\left(1-X^2+X^3\right)=\begin{bmatrix}1\\0\\-1\\1\end{bmatrix}
$$
since
$$
1-X^2+X^3=
\overbrace{\begin{bmatrix}
1&X&X^2&X^3
\end{bmatrix}}^{B_3}
\begin{bmatrix}
1\\0\\-1\\1
\end{bmatrix}
$$
This is how the matrix for $B_1$ was created.
|
Suppose that $p(z) = a_nz^n+\cdots+a_0$ and it has maximum modulus $1$ on the boundary of the unit disk. Suppose that $p(z) = a_nz^n+\cdots+a_0$ and it has maximum modulus $1$ on the boundary of the unit disk, show that $|p(z)| \leqslant \max\{1,|z|^{n}\}$. How to show that $|p(z)| \leqslant |z|^n$ for $|z|\ge 1$?
| Note that $q(z) = z^n p(z^{-1})$ is a polynomial. What do you know about $\lvert q(z) \rvert$ when $\lvert z\rvert=1$? Using the maximum modulus principle, what does that say about $\lvert p(z^{-1}) \rvert$ if $0 < \lvert z \rvert\leq 1$?
|
The Most general solution satisfying equations $\tan x=-1$ and $\cos x=1/\sqrt{2}$ The most general value of $x$ satisfying the equations $\tan x=-1$ and $\cos x=1/\sqrt{2}$, is found to be $x=2n\pi+\frac{7\pi}{4}$.
My approach:
$$
\frac{\sin x}{\cos x}=-1\implies \sqrt{2}\sin x=-1\implies\sin x=\frac{-1}{\sqrt{2}}=\sin (\frac{\pi}{4}+\frac{\pi}{2})=\sin\frac{3\pi}{4}\\\implies x=n\pi+(-1)^n\frac{3\pi}{4}
$$
If I consider the cosine function
$$
\cos x=\frac{\sin x}{\tan x}=-\sin x=-\sin\frac{3\pi}{4}=\cos(\frac{3\pi}{4}+\frac{\pi}{2})=\cos\frac{5\pi}{4} \implies x=2n\pi+\frac{5\pi}{4}
$$
Is there anything wrong with my approach? How do I compare different forms of general solutions without plugging in values for $n$?
| Here is what you got wrong:
$$\frac{\sin x}{\cos x}=-1\implies \sqrt{2}\sin x=-1\implies\sin x=\frac{-1}{\sqrt{2}}$$
So you got $\sin x<0$ and $\cos x>0$. Then you are in the $4$th quadrant.
$$\sin x=\frac{-1}{\sqrt{2}}=\sin \left(\frac{7\pi}{4}\right) \Rightarrow x=\frac{7\pi}{4}+2n\pi$$
P.S.: You went wrong because $\frac{5\pi}{4}$ is in the $3$rd quadrant and $\frac{3\pi}{4}$ is in the $2$nd quadrant, while the two conditions together put $x$ in the $4$th.
|
Compactness of a set in a metric space Let $X$ be a metric space, and $A\subset X$. Show that $\bar A$ is compact iff for every sequence in $A$ there exists a subsequence that converges to a point in $X$. I showed the "forward" direction, but am stuck showing the reverse.
Let $(x_n)\subset \bar A$ be a sequence. if $A$ is closed, then $A=\bar A$ and if a sequence in a closed set converges, it converges to an element of the set. But by assumption every sequence in $A$ has a convergent subsequence (to an element in $A$), and the result follows.
Thing is, what happens if $A$ is not closed? Choose a sequence in $\bar A$ such that every element of the sequence is not in $A$, i.e. $(x_n)\subset\partial A \setminus A$. What guarantees us that it has a convergent subsequence? Thanks!
| Hint: if $(x_n)\subset \overline A$, we can find a sequence $(y_n) \subset A$ such that for all $n$, $d(x_n,y_n)\le 1/n$.
|
Question involving Bayes' Rule The question is as follows:
Suppose a screening test for AIDS has the following features: (i) If a blood sample comes from someone with AIDS then the test will be positive 95% of the time. (ii) If the blood sample comes from someone without AIDS then the test will be negative 95% of the time.
Suppose that 5% of the population has AIDS. If a blood sample tests positive, what is the probability that the person whose blood was tested has AIDS?
Now I think the approach to solve this comes from Bayes' Rule. The partition $H_i$ would be if the person has AIDS or not, and the event $A$ would be if the test is positive or not. Would I be correct in assuming this?
Bayes' Rule:
$$\mathbb{P}(H_i\mid A) = \frac{\mathbb{P}(H_i) \mathbb{P}(A\mid H_i)}{\sum_{j=0}^\infty\mathbb{P}(H_j)\mathbb{P}(A\mid H_j)}$$
My guess is :
$$\mathbb{P}(H_i\mid A) = \frac{(0.05)(0.95)}{(0.05)(0.95) + (0.95)(*)(0.95)^2}$$
The star is where I'm not 100% sure.
| The chance of someone testing positive while having AIDS (=0.95*0.05) is the same as the chance of someone testing positive who does not have AIDS (=0.05*0.95)!
So, if you test positive, you are equally likely to have AIDS as to not have AIDS. So, if a person under these conditions tests positive, they have a 50% chance of having AIDS.
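Spelled out with Bayes' rule (so the starred factor in the question should be $P(+\mid \text{no AIDS})=0.05$, multiplying $P(\text{no AIDS})=0.95$):
$$P(\text{AIDS}\mid +)=\frac{P(+\mid\text{AIDS})P(\text{AIDS})}{P(+\mid\text{AIDS})P(\text{AIDS})+P(+\mid\text{no AIDS})P(\text{no AIDS})}=\frac{(0.95)(0.05)}{(0.95)(0.05)+(0.05)(0.95)}=\frac{1}{2}.$$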
|
Proving the Taylor Expansion Series with Newton's Generalized Binomial Theorem
Newton's Generalized Binomial Theorem states:$$(a+b)^n=a^n+na^{n-1}b+\dfrac {n(n-1)}{2!}a^{n-2}b^2+\dfrac {n(n-1)(n-2)}{3!}a^{n-3}b^3+\ldots\tag{i}$$
For any number $n$.
And I was going over a book, and it presented the Taylor Expansion for $a^n$ as$$a^n=1+cn+\dfrac {c^2n^2}{2!}+\dfrac {c^3n^3}{3!}+\dfrac {c^4n^4}{4!}+\ldots+\&c\tag{ii}$$
Where $c=(a-1)-\dfrac 12(a-1)^2+\dfrac 13(a-1)^3-\&c=\ln a$. The book also said that $(\text{ii})$ can be proved by using $(\text{i})$. My question:
Question: How would you go about proving $(\text{ii})$ using $(\text i)$?
The book gave this hint:
Let $a^x=\left\{1+(a-1)\right\}^x$ and expand that using $(\text i)$. Collect the coefficients of $x$ to obtain $c$. Assume $c_2,c_3,c_4,\ldots$ as the coefficients of the succeeding powers of $x$ and with this, expand $a^y$ in the same way. Then expand $a^{x+y}$ and equate the coefficients because $a^x\cdot a^y=a^{x+y}$. This will determine $c_2,c_3,c_4,\ldots$.
| There are some minor issues with the way you have presented the formulas. Newton's general binomial theorem says that
General Binomial Theorem: If $x, n$ are real numbers with $|x| < 1$ then $$(1 + x)^{n} = 1 + nx + \frac{n(n - 1)}{2!}x^{2} + \frac{n(n - 1)(n - 2)}{3!}x^{3} + \cdots$$
The result holds for $x = \pm 1$ also but with certain restrictions on $n$. Putting $x = b/a$ and assuming $|b| < |a|$ we can get the series mentioned in your post.
Next we can also write $(1 + x)^{n} = \exp(n\log (1 + x))$ and then using the exponential series we get $$(1 + x)^{n} = 1 + n\log(1 + x) + \frac{(n\log(1 + x))^{2}}{2!} + \cdots$$ Thus we have two series expansions for $(1 + x)^{n}$ and for the series consisting of $\log(1 + x)$ we can see that the coefficient of $n$ is $\log(1 + x)$. The coefficient of the $n$ in the general binomial expansion of $(1 + x)^{n}$ is given by $$x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \cdots$$ and hence $$\log(1 + x) = x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \cdots$$ The above result holds for $-1 < x \leq 1$.
This is the traditional route mentioned in many textbooks. What you are trying to achieve is to get to the exponential series by using binomial theorem. This is very clumsy because you will need to get the expansion of $(\log(1 + x))$ by some other means. It is much simpler to use other approaches to get to exponential series.
But in case you wish to proceed in this manner you can do it with some more effort. Let's assume that we know the expansion of $\log(1 + x)$ by some means and let $$a^{n} = 1 + c_{1}n + \frac{c_{2}}{2!}n^{2} + \cdots$$ then we can write $a^{n} = (1 + (a - 1))^{n}$ and using binomial theorem get $$a^{n} = 1 + nb + \frac{n(n - 1)}{2!}b^{2} + \frac{n(n - 1)(n - 2)}{3!}b^{3} + \cdots$$ where $b = a - 1$. Equating coefficient of $n$ in both series we get $$c_{1} = b - \frac{b^{2}}{2} + \cdots = \log(1 + b) = \log a$$ Next we note that $a^{m + n} = a^{m}a^{n}$ and hence $$1 + c_{1}(m + n) + \frac{c_{2}}{2!}(m + n)^{2} + \cdots = \left(1 + c_{1}m + \frac{c_{2}}{2!}m^{2} + \cdots\right)\left(1 + c_{1}n + \frac{c_{2}}{2!}n^{2} + \cdots\right)$$ From the above we get $$\frac{c_{2}}{2!}(m + n)^{2} = \frac{c_{2}}{2!}m^{2} + \frac{c_{2}}{2!}n^{2} + c_{1}^{2}mn$$ and hence $c_{2} = c_{1}^{2}$ and similarly we can show that $c_{k} = c_{1}^{k}$ so that $c_{k} = (\log a)^{k}$. And hence we get $$a^{n} = 1 + n\log a + \frac{(n\log a)^{2}}{2!} + \cdots$$ Note that the above approach is based on formal manipulation of power series and does not deal with the issues of convergence hence it is not possible to ascertain the range of values of variables for which the formulas hold true. It is best to use rigorous approaches to deal with exponential, logarithmic and binomial series as presented in my blog posts (linked earlier in this answer).
|
Solving $ax \equiv c \pmod b$ efficiently when $a,b$ are not coprime I know how to compute modular multiplicative inverses for co-prime variables $a$ and $b$, but is there an efficient method for computing variable $x$ where $x < b$ and $a$ and $b$ are not co-prime, given variables $a$, $b$ and $c$, as described by the equation below?
$ a x \equiv c \mod b $
For example, given
$ 154x \equiv 14 \mod 182 $, is there an efficient method for computing all the possibilities of $x$, without pure bruteforce?
Please note that I'm not necessarily asking for a direct solution, just a more optimized one.
I do not believe that the Extended Euclidean Algorithm will work here, because $a$ and $b$ are not co-prime.
Edit:
Follow up question, since the first one had a shortcut:
Could the following be computed efficiently as well?
$12260x \equiv 24560 \mod 24755$.
$107$ needs to be one of the computed answers.
| Solving $154x \equiv 14 \pmod{182}$ is the same as finding all solutions to
$$ 154x + 182y = 14.$$
In this case, we might think of this as finding all solutions to
$$14(11x + 13y) = 14(1),$$
or rather
$$11x + 13 y = 1.$$
Finally, solving this is the same as solving $11x \equiv 1 \pmod {13}$, which has solution $x \equiv 6 \pmod{13}$.
So we learn that $x \equiv 6 \pmod{13}$ is the solution. Of course, this isn't a single residue class mod $182$. Thinking modulo $182$, we see that the solutions are $x \equiv 6, 6+13,6+26,6+39, \ldots, 6+13*13 \equiv 6, 19, 32, \ldots, 175.$
This approach works generally --- factor out the greatest common divisor, consider the resulting modular problem, and then bring it back up to the original problem.
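Here is a small Python sketch of that procedure (the function name and structure are mine; the modular inverse via pow(a, -1, m) needs Python 3.8+):

from math import gcd

def solve_linear_congruence(a, c, b):
    """All x in [0, b) with a*x = c (mod b); empty list if unsolvable."""
    g = gcd(a, b)
    if c % g:
        return []                        # no solutions when g does not divide c
    a2, c2, b2 = a // g, c // g, b // g  # factor out the gcd
    x0 = (c2 * pow(a2, -1, b2)) % b2     # unique solution modulo b2
    return [x0 + k * b2 for k in range(g)]  # lift back up to modulus b

print(solve_linear_congruence(154, 14, 182))
# [6, 19, 32, 45, 58, 71, 84, 97, 110, 123, 136, 149, 162, 175]
print(solve_linear_congruence(12260, 24560, 24755))
# [107, 5058, 10009, 14960, 19911]  -- includes 107, as required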
|
The connection between the Jacobian, Hessian and the gradient? In this Wikipedia article they have this to say about the gradient:
If $m = 1$, $\mathbf{f}$ is a scalar field and the Jacobian matrix is reduced to a row vector of partial derivatives of $\mathbf{f}$—i.e. the gradient of $\mathbf{f}$.
As well as
The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix, which in a sense is the "second derivative" of the function in question.
So I tried doing the calculations, and was stumped.
If we let $f: \mathbb{R}^n \to \mathbb{R}$, then
$$Df = \begin{bmatrix}
\frac{\partial f}{\partial x_1} & \dots & \frac{\partial f}{\partial x_n}
\end{bmatrix} = \nabla f$$
So far so good, but when I try to calculate the Jacobian matrix of the gradient I get
$$D^2f = \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_2 x_1} & \dots & \frac{\partial^2 f}{\partial x_n x_1} \\
\frac{\partial^2 f}{\partial x_1 x_2} & \frac{\partial^2 f}{\partial x_2^2} & \dots & \frac{\partial^2 f}{\partial x_n x_2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_1 x_n} & \frac{\partial^2 f}{\partial x_2 x_n} & \dots & \frac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}$$
Which according to this article, is not equal to the Hessian matrix but rather its transpose, and from what I can gather the Hessian is not generally symmetric.
So I have two questions, is the gradient generally thought of as a row vector? And did I do something wrong when I calculated the Jacobian of the gradient of $f$, or is the Wikipedia article incorrect?
| You did not do anything wrong in your calculation.
If you directly compute the Jacobian of the gradient of $f$ with the conventions you used, you will end up with the transpose of the Hessian. This is noted more clearly in the introduction to the Hessian on Wikipedia (https://en.wikipedia.org/wiki/Hessian_matrix) where it says
The Hessian matrix can be considered related to the Jacobian matrix by $\mathbf{H}(f(\mathbf{x})) = \mathbf{J}(∇f(\mathbf{x}))^T$.
The other Wikipedia article should probably update the language to match accordingly.
As for the gradient of $f$ is being defined as a row vector, that is the way I have seen it more often, but it is noted https://en.wikipedia.org/wiki/Matrix_calculus#Layout_conventions that there are competing conventions for general matrix derivatives.
However, I don't think that should change your answer for the Hessian- with the conventions you are using, you are correct that it should be transposed.
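A quick SymPy sketch (assuming SymPy is available) illustrates the last point: for a twice continuously differentiable scalar field the mixed partials commute (Clairaut's theorem), so the Hessian is symmetric and the transpose distinction disappears in practice:

import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y + sp.sin(x * y)              # an arbitrary smooth scalar field

grad = sp.Matrix([f]).jacobian([x, y])    # gradient as a 1x2 row vector
J_of_grad = grad.jacobian([x, y])         # Jacobian of the gradient
H = sp.hessian(f, (x, y))                 # SymPy's Hessian

print(sp.simplify(J_of_grad - H.T))       # zero matrix, as in the question
print(sp.simplify(H - H.T))               # also zero: mixed partials commute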
|
eigenvector and spectral norm $x^TAx=\lambda$ with $A$ is a semi-positive definite matrix, $x$ is a unit vector, and $\lambda\ne 0$ is the largest eigenvalue of $A$. Then $x$ is an eigenvector of $A$? I.e., $Ax=\lambda x$?
| It can be solved by a Lagrange multiplier for the following optimization:
$$
\text{maximize } x^TAx \quad \text{s.t.} \quad x^Tx=1 .
$$
To solve the above optimization using a Lagrange multiplier, one considers the Lagrangian
$$
x^TAx+\lambda(x^Tx-1) .
$$
Taking the derivative of this function w.r.t. $x$ and setting it to $0$ yields $Ax=\lambda x$ (after absorbing a sign into $\lambda$).
Thus any maximizer $x$ must be an eigenvector of $A$. Since the maximum of $x^TAx$ over the unit sphere is the largest eigenvalue, any unit $x$ with $x^TAx=\lambda$ is a maximizer, and hence an eigenvector of $A$.
|
$\lim_{x \to \pi/4} \frac{x}{4x-\pi} \int_{\pi/4}^x \frac{\tan^2\theta}{\theta^2} d\theta $ I need help solving this limit:
$\lim_{x \to \pi/4} \frac{x}{4x-\pi} \int_{\pi/4}^x \frac{\tan^2\theta}{\theta^2} d\theta $
The limit is solvable using L'Hôpital's rule, which gives $\frac{1}{\pi}$, but I need to see if it is possible to do so without using it. This problem was presented to me by my friend who is taking Calculus 1, so nothing beyond simple integrals and derivatives if possible.
| Using the mean-value theorem, we have for some $\phi \in [\pi/4,x]$
$$\begin{align}
\frac{x}{4x-\pi}\int_{\pi/4}^x \frac{\tan^2(\theta)}{\theta^2}\,d\theta&=\frac{x}{4x-\pi}\left(\frac{\tan^2(\phi)}{\phi^2}\right)(x-\pi/4)\\\\
&=\frac{x}{4}\left(\frac{\tan^2(\phi)}{\phi^2}\right)\\\\
&\to \frac{\pi}{16}\frac{1}{(\pi/4)^2}\,\,\text{as}\,\,x\to \pi/4\\\\
&=\frac{1}{\pi}
\end{align}$$
|
General solution to $(\sqrt{3}-1)\cos x+(\sqrt{3}+1)\sin x=2$ $(\sqrt{3}-1)\cos x+(\sqrt{3}+1)\sin x=2$ is said to have a general solution of $x=2n\pi\pm\frac{\pi}{4}+\frac{\pi}{12}$.
My Approach:
Considering the equation as
$$
a\cos x+b\sin x=\sqrt{a^2+b^2}\Big(\frac{a}{\sqrt{a^2+b^2}}\cos x+\frac{b}{\sqrt{a^2+b^2}}\sin x\Big)=\sqrt{a^2+b^2}\big(\sin y.\cos x+\cos y.\sin x\big)=\sqrt{a^2+b^2}.\sin(y+x)=2
$$
$\frac{a}{\sqrt{a^2+b^2}}=\sin y$ and $\frac{b}{\sqrt{a^2+b^2}}=\cos y$.
$$
{\sqrt{a^2+b^2}}=\sqrt{8}=2\sqrt{2}\\\tan y=a/b=\frac{\sqrt{3}-1}{\sqrt{3}+1}=\frac{\frac{\sqrt{3}}{2}.\frac{1}{\sqrt{2}}-\frac{1}{2}.\frac{1}{\sqrt{2}}}{\frac{\sqrt{3}}{2}.\frac{1}{\sqrt{2}}+\frac{1}{2}.\frac{1}{\sqrt{2}}}=\frac{\sin(\pi/3-\pi/4)}{\sin(\pi/3+\pi/4)}=\frac{\sin(\pi/3-\pi/4)}{\cos(\pi/3-\pi/4)}=\tan(\pi/3-\pi/4)\implies y=\pi/3-\pi/4=\pi/12
$$
Substituting for $y$,
$$
2\sqrt{2}.\sin(\frac{\pi}{12}+x)=2\implies \sin(\frac{\pi}{12}+x)=\frac{1}{\sqrt{2}}=\sin{\frac{\pi}{4}}\\\implies \frac{\pi}{12}+x=n\pi+(-1)^n\frac{\pi}{4}\implies x=n\pi+(-1)^n\frac{\pi}{4}-\frac{\pi}{12}
$$
What's going wrong with the approach?
| Converting the given equation into one in $\tan(x/2)$ (writing $\cos x$ and $\sin x$ in terms of the half-angle tangent), we get
$${\frac { \left( 1+\sqrt {3} \right) \left( 1- \left( \tan \left( x/2
\right) \right) ^{2} \right) }{1+ \left( \tan \left( x/2 \right)
\right) ^{2}}}+2\,{\frac { \left( \sqrt {3}-1 \right) \tan \left( x/2
\right) }{1+ \left( \tan \left( x/2 \right) \right) ^{2}}}-2=0
$$
simplifying and factorizing we obtain
$$-1/3\,{\frac { \left( \sqrt {3}+3 \right) \left( \tan \left( x/2
\right) +2-\sqrt {3} \right) \left( 3\,\tan \left( x/2 \right) -
\sqrt {3} \right) }{1+ \left( \tan \left( x/2 \right) \right) ^{2}}}
=0$$
thus you have to solve
$$- \left( \sqrt {3}+3 \right) \left( -\tan \left( x/2 \right) -2+
\sqrt {3} \right) \left( -3\,\tan \left( x/2 \right) +\sqrt {3}
\right)
=0$$
|
Integrating over a subset (Real Analysis) Let $0 < p < 1$ and $f: X \longrightarrow (0,\infty)$ measuarable with $\int_X f d\mu = 1$.
Prove that $\int_A f^p d\mu \leq \mu(A)^{1-p}$.
Someone please help?
| It's a direct application of Hölder's inequality, using the conjugate exponents $1/p$ and $1/(1-p)$ (note that $p+(1-p)=1$). So
$$
\int_A f^p\,d\mu\leq\left (\int_A (f^p)^{1/p}\,d\mu\right)^p\left (\int_A1^{1/(1-p)}\,d\mu\right)^{1-p}\leq\mu (A)^{1-p},
$$
where the last step uses $\int_A f\,d\mu\leq\int_X f\,d\mu=1$.
|
Which version of Rolle's theorem is correct? #According to my textbook:
Rolle's theorem states that if a function $f$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$ such that $f(a) = f(b)$, then $f′(x) = 0$ for some $x$ with $a ≤ x ≤ b$.
#According to Wikipedia:
If a real-valued function $f$ is continuous on a proper closed interval $[a, b]$, differentiable on the open interval $(a, b)$, and $f(a) = f(b)$, then there exists at least one $c$ in the open interval (a, b) such that
$f'(c)=0$.
So one definition says that $c$ should belong in closed interval $[a,b]$ but the other says that $c$ should be in open interval $(a,b)$.
Which definition is correct ? Why?
| In both theorems two things are common:
1.) the function is continuous on $\left[a,b\right]$;
2.) the function is differentiable on $\left(a,b\right)$ $\Longrightarrow$ it may or may not be differentiable at $x=a$ and $x=b$.
Case 1.
$\Longrightarrow$ the function is differentiable at $x=a$ and $x=b$.
Both theorems are correct (the Wikipedia conclusion, with $c$ in the open interval, implies the textbook one).
Case 2.
$\Longrightarrow$ the function is not differentiable at $x=a$ and $x=b$.
The textbook theorem becomes improper, since it allows the point with $f'(x)=0$ to be an endpoint where $f'$ need not even exist, whereas the Wikipedia theorem is still correct.
|
$\cos(\arcsin(x)) = \cdots $ I've been asked to prove $$y=\frac{\sqrt{3}} 2 x+\frac 1 2 \sqrt{1-x^2}$$
given $x=\sin(t)$ & $y=\sin(t+\frac \pi 6)$
I did $t=\arcsin(x)$ and plugged that into the $y$ equation.
Used the $\sin(a+b)$ identity to get:
$$y=x\cos\left(\frac \pi 6\right)+\frac{\cos(\arcsin(x))}2 = \frac{\sqrt3} 2 x+\frac{\cos(\arcsin(x))} 2$$
Now I'm sure there must be an identity for $\cos(\arcsin(x))$, however I'm unaware of it.
I'm also unaware of how to prove it.
I did a quick google and I found a page which says $$\cos(\arcsin(x))=\sqrt{1-x^2},$$
which is seemingly exactly what I need to complete the question; however, it wouldn't be proving that $y = \text{answer}$ if I didn't show how to get to this result.
Is there a "more correct" way to complete this question without having to fiddle with this formula / arcsins etc.
| The usual proof involves drawing a triangle. The opposite side will be called $\sin(y)=x$, the hypotenuse will be $1$, and the angle will be $\arcsin x$. Can you use the pythagorean theorem to find $\cos(\arcsin x)$?
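For reference, the Pythagorean computation behind that triangle (the standard identity, with the sign justified):
$$\cos(\arcsin x)=\sqrt{1-\sin^2(\arcsin x)}=\sqrt{1-x^2},$$
where the positive square root is the right one because $\arcsin x\in[-\pi/2,\pi/2]$, on which cosine is nonnegative.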
|
Can we derive $N=aa^{\dagger}$ from these conditions? Let $[a,b]:=ab-ba$ for all $a,b\in X$, where $X$ is a non-commutative ring. Suppose that $a,a^{\dagger},N$ are operators satisfying
$$\begin{align}
[a,a^{\dagger}]&=1 \\
[N,a]&=-a \\
[N,a^{\dagger}] &= a^{\dagger}
\end{align}$$
where $1$ is simply the identity operator and $a^{\dagger}$ is the physicist's adjoint of $a$.
Does it follow that
$$
N=aa^{\dagger}
$$
just from these $3$ conditions?
Motivation: My friend ask me a related question, which can be easily solve if the above is true. The $3$ conditions are simply given at the very beginning of the exercise. If we interpret $N$ as the number operator, $a$ as the annihilation operator and $a^{\dagger}$ as the creation operator then it is generally known that $N=aa^{\dagger}$. So I wonder if his professor simply forgot to write that in the exercise sheet or if the fact can be derived.
| If we assume that $N$ is positive (in particular self-adjoint), the third condition is just a restatement of the second one, obtained by taking adjoints. If you multiply the first condition by $-a$ on the left you get the second condition for $N=a^{\vphantom\dagger} a^\dagger$. So $N=a^{\vphantom\dagger} a^\dagger$ is certainly a solution. The question is whether it is unique.
This boils down to the question of whether $Xa-aX=0$ implies $X=0$. Such condition certainly doesn't follow from the given conditions. Actually, $X=\lambda I$ will always commute with $a$, so $N=a^{\vphantom\dagger} a^\dagger+\lambda I$ will satisfy the equations for all $\lambda\in\mathbb R$.
|
Draw $\{z\in\mathbb{C}|z\overline{z}<3+2\text{Im}(z)\}$ I want to draw the set $\{z\in\mathbb{C}|z\overline{z}<3+2\text{Im}(z)\}$. However, I don't know what to do about $2\text{Im}(z)$.
If the set were $\{z\in\mathbb{C}|z\overline{z}<3\}$, it'd be quite easy, since $z\overline z=\vert z\vert^2$, so the set would contain all complex numbers inside the circle around the origin with radius $r=\sqrt{3}$. But how to interpret $2\text{Im}(z)$?
| Since $\operatorname{Im}z = \frac{1}{2i}(z - \bar z)$ the condition can be rewritten as:
$$
z \bar z \lt 3 - i(z-\bar z) \\
z \bar z + i z - i \bar z + 1 \lt 4 \\
(z - i)(\bar z +i) \lt 4 \\
|z-i|^2 \lt 4
$$
Therefore the set is the interior of the disc of radius $2$ centered at $i$.
|
Automorphisms of $\mathbb Q(\sqrt 2)$ I have a really quick question in Galois theory:
If I have a field such as $\mathbb Q(\sqrt2)$, and I want to look at the automorphisms of it, it seems clear that $a+b\sqrt 2\mapsto a+x\sqrt 2$ for some $x$ (as $a\mapsto a$, to ensure that $\sigma(1) =1$, which ensures it's a homomorphism).
My question is why can $x$ only be $\pm b$? What's the barrier with a field automorphism $a+b\sqrt 2\mapsto a+2b\sqrt 2$?
| You need that $\sigma(\sqrt{2})^2 = \sigma(\sqrt{2}^2) = \sigma(2) = 2$.
Thus, $\sigma(\sqrt{2})$ also needs to be a root of $X^2 - 2$.
This can be generalized for any algebraic number: its image must have the same minimal polynomial, so the choice are the various roots of the minimal polynomial.
|
Why are these two limits equal? Why is $$\lim_{n\rightarrow\infty} n(1+{1\over n})^p$$ the same as $$\lim_{n\rightarrow\infty} n(1+{p\over n})$$ ?
I saw this answer in reference to a limit question here, and I am not sure how exactly to prove this to myself. Does it have something to do with the binomial theorem?
This is in reference to $$\lim_{x\to-\infty} \left(\sqrt{x^2+2x}+\sqrt[3]{x^3+x^2}\right).$$
$$\lim_{x\to-\infty}\sqrt{x^2+2x}\color{red}{+}\sqrt[3]{x^3+x^2}\\=\lim_{x\to-\infty}|x|\left(1+{2\over x}\right)^{1\over 2}\color{red}{+}x\left(1+{1\over x}\right)^{1\over3}\\=\lim_{x\to-\infty}|x|\left(1+{1\over 2}\cdot{2\over x}\right)\color{red}{+}x\left(1+{1\over 3}\cdot{1\over x}\right)=-{2\over 3}$$
I understand that the two both equal infinity, but it seems strange to just multiply by the exponent and forget about it. Furthermore, if it can be any number of my choice, would the limit still hold?
| No need for the binomial theorem. Since $x \mapsto x^p$ is continuous over $[0,\infty)$, then
$$
\lim_{n \to \infty} \left(1+\frac1n \right)^p =1
$$ giving
$$
\lim_{n \to \infty} n\left(1+\frac1n \right)^p =\infty \cdot 1=\infty.
$$
One also has
$$
\lim_{n \to \infty} n\left(1+{p\over n}\right)=\infty \cdot 1=\infty.
$$
|
Manifold orientable iff nth exterior power of the cotangent bundle is trivial Let $M$ be a manifold with dimension $n$. Then prove $M$ is orientable if and only if $\Lambda^nT^*M$ is trivial.
For both directions I used the argument of existence of global frame implied by the hypothesis. However it does not seem right.
| Suppose $\bigwedge^n T^\ast M$ is trivial (vector bundle).
That means that there is an $n$-form $\omega$ such that $\omega_p \neq 0$ for each $p\in M$. For each $p \in M$ take a chart $x:U \to \mathbb R^n$, with $p\in U$, such that $$\omega = a\, dx_1 \wedge \ldots \wedge dx_n$$
with $a(q)>0$ for each $q\in U$. This family of charts gives us an oriented atlas.
On the other hand, suppose we have an oriented atlas. For each chart $x:U \to \mathbb R^n$ in this atlas we obtain the $n$-form $dx_1 \wedge \ldots \wedge dx_n$ defined on $U$. Using a partition of unity you can glue these $n$-forms together to produce an $n$-form $\omega$ such that $\omega_p\neq 0$ for each $p\in M$. That means that $\bigwedge^n T^\ast M$ is trivial.
|
Poles, branch cuts, and zeros From what I understand, these three concepts all describe the points where the function is not continuous. How to tell them apart? Thanks!
| *If $f(z)$ is holomorphic/analytic on $0 < |z-z_0| < r$ then $z_0$ is an isolated singularity. From the Cauchy integral formula in an annulus you have the Laurent series $f(z) = \sum_{n=-\infty}^\infty a_n (z-z_0)^n$ converging on $0 < |z-z_0| < r$, and two cases are possible:

*$a_n = 0$ for $n < -k$, so that $(z-z_0)^{k}f(z)$ is analytic on $|z-z_0| < r$ and $z=z_0$ is a pole of order $k$. If $k\le 0$ then $z_0$ was in fact a removable singularity.

*otherwise $z= z_0$ is an essential singularity of $f(z)$.

*Other types of singularities are non-isolated and include:

*branch points: a point around which you can continue $f(z)$ analytically, but with $f(z_0+e^{2i \pi}(z-z_0)) \ne f(z)$;

*and natural boundaries (sometimes called frontiers): $f(z) = \sum_{n=1}^\infty z^{2^n}$ is analytic on $|z| < 1$ but $\lim_{r \to 1^-} f(r e^{2 \pi i m/2^k}) = \infty$ whenever $m,k \in \mathbb{N}$, so you can't continue $f(z)$ analytically beyond $|z| < 1$.
|
What is this group called? Let $X$ denote a set. There's a corresponding group obtained by taking the group freely generated by $X^2$ and then quotienting out by the following families of relations:
*
*$(x,x) = 1$
*$(x,y)(y,z) = (x,z)$
*$(x,y)(y,x) = 1$
*For each quadruple $(x,y,x',y')$ such that $\{x,y\} \cap \{x',y'\} = \emptyset$, we have:
$$(x,y)(x',y') = (x',y')(x,y)$$
Question. What is this group called?
My motivation for considering this group is that it acts on the set $(\{0,1\}^\mathbb{N})^X,$ of $X$-many binary streams, by interpreting $(x,y)$ as the act of taking the first digit of stream $x$, removing it from $x$, and appending it to the beginning of $y$. Each of the four families above can be explained in these terms; for example $(x,x)=1$ is saying that if I take the first digit of stream $x$, and put it back on stream $x$, nothing changes.
| Not a complete answer, but too long for a comment:
The group you described has a neat equivalent characterization:
It can be described by the presentation $\langle a_1, a_2, ... , a_{|X| - 1}| [a_i, a_j] = [a_k, a_j] \forall i \neq j, j \neq k, k \neq i\rangle$
Proof:
Suppose $X = \{0, \ldots , n\}$ (so $|X| = n + 1$). Let's define $a_i := (0, i)$. That results in $(i, 0)$ being equal to $a_i^{-1}$, and for $i \neq j$, $j \neq 0$, $i \neq 0$, $(i, j) = a_i^{-1}a_j$. Now this change of notation already covers the first three rules, making them redundant.
Now, let's deal with the fourth rule. Without loss of generality we can assume that one of the following cases takes place:
1)$x' = 0$, $x, y$ and $y' = z$ are pairwise distinct and non-zero. Then $(0, z)(x, y) = (x, y)(0, y')$, which turns into $a_{z}a_{x}^{-1}a_y = a_{x}^{-1}a_{y}a_{z}$ which results in $a_ya_za_y^{-1} = a_xa_za_{x}^{-1}$, or equivalently $[a_y, a_z] = [a_x, a_z]$
2)$x, y, x' = z$ and $y' = t$ are pairwise distinct and non-zero. Then $(x, y)(z, t)=(z, t)(x, y)$ turns into $a_x^{-1}a_ya_z^{-1}a_t = a_z^{-1}a_ta_{x}^{-1}a_y$ which is $(a_ya_z^{-1}a_y^{-1})(a_ya_ta_y^{-1})=(a_xa_z^{-1}a_x^{-1})(a_xa_ta_x^{-1})$ which is a direct corollary of case 1
Thus $[a_y, a_z] = [a_x, a_z]$ for all distinct $x$, $y$ and $z$ are the only non-trivial relations required.
|
How to integrate $\int \frac{f'(x)}{f^2(x)}$? In my book it states $\int \frac{f'(x)}{f^2(x)}=-\frac{1}{f(x)}$.
I don't really understand why and it doesn't say in my book.
My first idea was that maybe we can cancel, treating $f'$ as if it were a power: $\int \frac{f'(x)}{f^2(x)} = \int \frac{1}{f(x)}$.
But I'm not sure if you can do that, and it also doesn't explain the minus sign in $-\frac{1}{f(x)}$.
| $\displaystyle \int \frac{f'(x) } {(f(x) )^2 }dx =- \frac{1}{f(x)} + c$, so is it possible your expression represents that?
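One way to make both the result and the minus sign transparent is the substitution $u=f(x)$, so that $du=f'(x)\,dx$:
$$\int \frac{f'(x)}{f(x)^2}\,dx=\int\frac{du}{u^2}=-\frac{1}{u}+c=-\frac{1}{f(x)}+c.$$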
|
First hitting time distribution for a discrete random walk Can anyone provide the first hitting time distribution for a discrete random walk?
Edit: Specifically, a 1D random walk, starting at $k=0$. Each step moves either $-1$ or $+1$ without any boundaries. I require the distribution for the first hitting time at some arbitrary point $m>0$.
I cannot find it anywhere. I can only find it for continuous Brownian motion.
| For $n,m\geq 1$,
$$P(\tau(m)=n)=\begin{cases}
\displaystyle{{m\over n}\binom{n}{(n+m)/2}{1\over 2^n}} &\mbox{if } m+n \mbox{ is even}\\[5pt] 0 &\mbox{if }m+n \mbox{ is odd}.\end{cases} $$
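A rough Monte Carlo sanity check of this formula in Python (the helper names are mine; runs are capped at 60 steps, which only slightly undercounts long excursions for these small $n$):

import random
from math import comb   # comb requires Python 3.8+

def first_hit_prob(m, n):
    """The formula above: P(tau(m) = n) for the simple +/-1 walk."""
    if n < m or (m + n) % 2:
        return 0.0
    return (m / n) * comb(n, (n + m) // 2) / 2 ** n

m, trials, hits = 2, 200_000, {}
for _ in range(trials):
    pos, t = 0, 0
    while pos != m and t < 60:
        pos += random.choice((-1, 1))
        t += 1
    if pos == m:
        hits[t] = hits.get(t, 0) + 1

for n in (2, 4, 6, 8, 10):
    print(n, round(hits.get(n, 0) / trials, 4), round(first_hit_prob(m, n), 4))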
|
why is this volume $0$? (double integral, polar coordinates) I got that $$\iint (x^2-y^2)\, dA=0$$ over the region $x^2+y^2\le 1$ (I used polar coordinates). But why is that? How can a volume be $0$?
| Do not forget that an integral sums up function values!
That double integral is telling you to sum up all the function values of $x^2 - y^2$ over the unit circle.
To get $0$ here means that either the function does not exist in that region OR it's perfectly symmetrical over it. Knowing that $x^2 - y^2$ is a hyperbolic paraboloid, it definitely exists in this region, and getting a $0$ answer means that half of the function must be below the plane, while the other half is above it, making the summation of those function values total $0$.
So the volume here isn't actually $0$, but the integral is. If you wanted to know the actual geometric volume, you would have to split up the surface to the part that is above the plane, and then multiply by $2$.
Sorry for the lengthy response, but I hope that clears up how you get "$0$" volume.
|
Geometrical interpretation of cup product. I know geometrically the meaning of cohomology groups of topological spaces. Is their any geometrical interpretation of cup product?
| If you are willing to restrict your attention to a compact, connected, oriented manifold $M$, rather than a general topological space, then yes, there is a geometric interpretation. It's called the Poincare Duality Theorem.
In brief, letting $m = \text{dimension}(M)$, the Poincare Duality isomorphism $f : H^i(M) \to H_{m-i}(M)$ (I'll use $\mathbb{R}$-coefficients) produces for each $c \in H^i(M;\mathbb{R})$ an $m-i$ dimensional chain representing the homology class $f(c)$ which I'll denote $c^\perp$. This has the property that given $c \in H^i(M)$ and $d \in H^j(M)$, and assuming that the chains $c^\perp$ and $d^\perp$ are chosen to be transverse to each other in an appropriate sense, $f(c \cup d) \in H_{m-(i+j)}(M)$ is defined by the $m-(i+j)$ dimensional "intersection chain" $c^\perp \cap d^\perp$.
Just as an example, if $i+j=m$ then $c \cup d$ is equal to a scalar multiple of the "fundamental" cohomology class. In this case, to say that $c^\perp$ and $d^\perp$ are transverse to each other implies that $c^\perp \cap d^\perp$ is a set of points with signed numbers on them, i.e. a 0-chain, the scalar is equal to the sum of these numbers.
|
$gcd(f_k, f_{k+3})$, where $f_k$ is the k'th Fibonacci number I wanted to find the $gcd(f_k, f_{k+3})$, where $f_k$ is the k'th Fibonacci number (i.e. $f_0=0, f_1=1, f_k=f_{k-1} + f_{k-2}$ for $k \geq 2$). So far I've tried to express $f_{k+3}$ as $2f_{k+1}+f_k$. Using this it follows that $gcd(f_k, f_{k+3}) = gcd(f_k, 2f_{k+1})$, but I don't see how I can continue from there on. Can someone help me out?
Thanks!
| Since $\gcd(f_k, f_{k+1})=1$, we have $\gcd(f_k, 2f_{k+1})=\gcd(f_k, 2)$.
Since $f_3=2$ and the Fibonacci numbers are even exactly when the index is divisible by $3$ (the parities repeat as $0,1,1,0,1,1,\dots$), we have $\gcd(f_k, 2)=2$ iff $k$ is a multiple of $3$. Otherwise, $\gcd(f_k, 2)=1$.
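Putting the two steps together:
$$\gcd(f_k,f_{k+3})=\gcd(f_k,2f_{k+1})=\gcd(f_k,2)=\begin{cases}2 & \text{if } 3\mid k,\\ 1 & \text{otherwise.}\end{cases}$$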
|
Use Mathematical Induction to prove equation? Use mathematical induction to prove each of the following statements.
let $$g(n) = 1^3 + 2^3 + 3^3 + ... + n^3$$
Show that the function $$g(n)= \frac{n^2(n+1)^2}{4}$$ for all n in N
so the base case is just $g(1)$, right? So the answer for the base case is $1$, because $\frac{1^2(1+1)^2}{4}=\frac{4}{4} = 1$.
Then for the inductive step, do I replace all of the $n$'s with $n + 1$ and see if there is a concrete answer?
| You want to show that $$\sum_{j=1}^n j^3 = \frac{n^2(n+1)^2}{4}.$$
You've shown that it holds for the base case, which is good.
Now, you assume that it holds for $n=k$:
$$\sum_{j=1}^k j^3 = \frac{k^2(k+1)^2}{4}.$$
Then, a strategy is to write out one part or the other for the $n=k+1$ case, and manipulate it so that you pull out the $n=k$ expression. Let's do the sum:
$$\sum_{j=1}^{k+1} j^3 = \sum_{j=1}^{k} j^3 + (k+1)^3.$$
This is just algebra; I pulled out the last term and wrote it explicitly. But now I can substitute in the assumption for the sum on the right hand side.
The rest is to manipulate the expression on the right hand side to come up with the expression
$$\frac{(k+1)^2(k+2)^2}{4},$$
and then you're done.
Can you take it from here?
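For reference, the remaining algebra runs as follows:
$$\frac{k^2(k+1)^2}{4}+(k+1)^3=\frac{(k+1)^2\left(k^2+4k+4\right)}{4}=\frac{(k+1)^2(k+2)^2}{4}.$$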
|
How can I show that $4^{1536} - 9^{4824}$ can be divided by $35$ without remainder? How can I show that $4^{1536} - 9^{4824}$ can be divided by $35$ without remainder?
I'm not even sure how to begin solving this, any hints are welcomed!
$$(4^{1536} - 9^{4824}) \pmod{35} = 0$$
| Note that, for increasing integer values of $n $, the value of $4^n \mod 35$ (let us call it $r_1$) shows a cyclic behaviour. In particular, for any $n \equiv k\mod 6$, we have
$$ k=0\rightarrow r_1 =1$$
$$ k=1\rightarrow r_1 =4$$
$$ k=2\rightarrow r_1 =16$$
$$ k=3\rightarrow r_1 =29$$
$$ k=4\rightarrow r_1 =11$$
$$ k=5\rightarrow r_1 =9$$
Similarly, considering increasing integer values of $n $, the value of $9^n \mod 35$ (let us call it $r_2$) shows the following cyclic behaviour for any $n \equiv k\mod 6$. We have
$$ k=0\rightarrow r_2 =1$$
$$ k=1\rightarrow r_2 =9$$
$$ k=2\rightarrow r_2 =11$$
$$ k=3\rightarrow r_2 =29$$
$$ k=4\rightarrow r_2 =16$$
$$ k=5\rightarrow r_2 =4$$
Now both $1536$ and $4824$ are $\equiv 0\mod 6$, so that both $4^{1536} $ and $9^{4824} $ are $\equiv 1 \mod 35$.
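If you want to double-check the cycles, Python's built-in modular exponentiation gives a one-line confirmation:

print(pow(4, 1536, 35), pow(9, 4824, 35))   # 1 1, so the difference is 0 mod 35
print((4**1536 - 9**4824) % 35)             # 0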
|
Convergence and absolute convergence of $\sum_{n=1}^{\infty} {(-1)^n \over n + (-1)^{n-1}}$ I am trying to conclude about the convergence and absolute convergence of
$$\sum_{n=1}^{\infty} {(-1)^n \over n + (-1)^{n-1}}$$
For absolute convergence, we can note that
$$\lvert a_n \rvert = {1 \over n + (-1)^{n-1}}$$
$$\sum_{n=1}^{\infty} {1 \over n + (-1)^{n-1}} = {1 \over 2} + 1 + {1 \over 4} + {1 \over 3} + {1 \over 6} + {1 \over 5} + \dots$$
We can see that this is a harmonic series with the terms rearranged. The sequence of partial sums will be strictly monotonic and for even numbers the terms will be equal to the terms of the sequence of partial sums of the harmonic series. This means that the series doesn't converge, so we have no absolute convergence.
Now, how can we conclude about convergence? ${1 \over n + (-1)^{n-1}}$ is not monotonic, so the tests I have covered so far (Leibniz, Dirichlet and Abel) are not applicable.
| In fact, we can evaluate the series in closed form. Proceeding, we write
$$\begin{align}
\sum_{n=1}^{2N}\frac{(-1)^{n}}{(-1)^{n-1}+n}&=\sum_{n=1}^N\left(\frac{1}{2n-1}-\frac{1}{2n}\right)\\\\&=\sum_{n=1}^N\left(\frac{1}{2n-1}+\frac{1}{2n}\right)-\sum_{n=1}^N\frac1n\\\\&=\sum_{n=1}^{2N}\frac1n-\sum_{n=1}^N\frac1n
\\\\&=\sum_{n=1}^{N}\frac{1}{n+N}\\\\
&=\frac1N\sum_{n=1}^{N}\frac{1}{1+n/N} \tag {1}\\\\&\to \int_0^1 \frac{1}{1+x}\,dx\,\,\text{as}\,\,N\to \infty \tag{2}\\\\
&=\log(2)
\end{align}$$
where we used only elementary arithmetic to take us to $(1)$ and recognized the sum in $(1)$ as a Riemann sum to arrive at $(2)$.
An alternative way forward to evaluating the series is to write $$\sum_{n=1}^N\left(\frac{1}{2n-1}-\frac{1}{2n}\right)=\sum_{n=1}^{2N}\frac{(-1)^{n-1}}{n}$$
Then, recalling that $\log(1+x)$ has Taylor series representation $\log(1+x)=\sum_{n=1}^\infty \frac{(-1)^{n-1}x^n}{n}$ for $-1<x\le 1$, we see that $$\sum_{n=1}^\infty\frac{(-1)^{n}}{n+(-1)^{n-1}}=\log(2)$$as expected!
|
Standard error of extremely biased coin OK, so I know that the typical standard error of a coin is estimated by $$\sigma_p=\sqrt{ \frac{p(1-p)}{n} }$$
where $p$ is the estimated probability and and $n$ is the number of samples. This seems reasonable at high $n$ and $p \sim 0.5$; however, it seems unreasonable if I have $p = 1$ and $n = 20$, $\sigma_p = 0$.
Is there a better formula for standard error when $ p \sim 0$ or $p \sim 1$ and $n$ is low?
Note: this is a real-world problem and increasing $n$ is non-trivial.
Thanks!
| If $p = P(S) = 1,$ then $X \sim Binom(n, p)$ has $X \equiv n$ and $Var(X) = 0$, and indeed $\sqrt{p(1-p)/n} = 0$, so the formula for the standard error works fine.
In public opinion polls, the margin of sampling error is often given as $\pm \sqrt{1/n},$ which comes from the largest possible variance at $p = 1/2.$
Then the margin of error for a 95% confidence interval (using the normal approximation for large $n$) is about
$$\pm 1.96\sqrt{p(1-p)/n} = \pm 1.96\sqrt{1/4n} \approx \pm \sqrt{1/n}.$$
Thus, $n = 2500$ subjects give a margin of sampling error $\pm 2$%, and
$n = 1100$ subjects give a margin of sampling error of about $\pm 3$%.
In one sense, this vastly over-estimates the margin of error for cases
in which $p$ is near 0 or 1. But pollsters understand that non-sampling
errors (lack of response, unwillingness to give honest responses, sampled
population differing from target population, and so on) can be especially
serious for such extreme values of $p.$ So they use $\pm \sqrt{1/n}$
anyway, hoping to cover all contingencies.
Perhaps you have some intuition about
practical difficulties in sampling when $p$ is far from 1/2 that is responsible for your doubts about
the variance formula. But as an exact mathematical statement about
sampling error only, the formula is correct.
|
Curious limits with tanh and sin These two limits can be easily solved by using De l'Hopital Rule multiple times (I think), but I suspect that there could be an easier way... Is there?
\begin{gather}
\lim_{x\to 0} \frac{\tanh^2 x - \sin^2 x}{x^4} \\
\lim_{x\to 0} \frac{\sinh^2 x - \tan^2 x}{x^4}
\end{gather}
Thanks for your attention!
| First get rid of the squares with
$$\lim_{x\to 0} \frac{\tanh^2 x - \sin^2 x}{x^4}=\lim_{x\to 0} \frac{(\tanh x - \sin x)(\tanh x + \sin x)}{x^4}=2\lim_{x\to 0} \frac{\tanh x - \sin x}{x^3},$$
since $\frac{\tanh x + \sin x}{x}\to 2$.
Then, as the functions are odd, there will be no quadratic term, and you can substitute $x=\sqrt t$ to skip it. Then by two applications of L'Hospital
$$2\lim_{t\to 0} \frac{\tanh\sqrt t - \sin\sqrt t}{t\sqrt t}=2\lim_{t\to 0}\frac{(1-\tanh^2\sqrt t)-\cos\sqrt t}{2\sqrt t\frac32\sqrt t}\\
=2\lim_{t\to 0}\frac{2\tanh\sqrt t(\tanh^2\sqrt t-1)+\sin\sqrt t}{2\sqrt t\,3}=-\frac13.$$
By the substitution $x\leftrightarrow ix$, the two limits are equal.
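As a quick numerical sanity check in plain Python (note the subtractions suffer some cancellation for very small $x$):

import math

for x in (0.5, 0.1, 0.02):
    print((math.tanh(x)**2 - math.sin(x)**2) / x**4,
          (math.sinh(x)**2 - math.tan(x)**2) / x**4)
# both columns tend to -1/3 as x -> 0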
|
Find all integer solutions $(x,y)$ such that $2x^2 + y^2 = 3x^2y$
Find all integer solutions $(x,y)$ such that $2x^2 + y^2 = 3x^2y$.
We can rearrange the given equation to $$y^2 = x^2(3y-2)\tag1$$ Thus $3y-2$ must be a perfect square and so $3y-2 = k^2$.
How can we continue?
| You have that $3y-2$ divides $y^2$.
Notice that $(3y-2,y^2)| (3y-2,y)^2$.
On the other hand, if $d$ divides $3y-2$ and $y$ we have $d|2$. So $(3y-2,y^2)| 4$.
Therefore $3y-2|4$
Therefore $y=0,1$ or $2$.
If $y=0$ we have $x=0$.
If $y=1$ we have $x=\pm 1$
If $y=2$ we have $x=\pm 1$
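A brute-force sweep over a small box (just a sanity check, not part of the proof) confirms exactly these solutions:

print([(x, y) for x in range(-20, 21) for y in range(-20, 21)
       if 2*x*x + y*y == 3*x*x*y])
# [(-1, 1), (-1, 2), (0, 0), (1, 1), (1, 2)]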
|
Ito's integral converge in probability https://en.wikipedia.org/wiki/Itô_calculus
From wikipedia:
$$\int_0^tH_s\,dB_s\equiv \lim_{n\rightarrow\infty}\sum_{i=1}^nH_{t_{i-1}}(B_{t_i}-B_{t_{i-1}})$$
It can be shown that this limit converges in probability. i.e.
$$P(\lim_{n\rightarrow\infty}\sum_{i=1}^nH_{t_{i-1}}(B_{t_i}-B_{t_{i-1}})\text{ exists})=1$$
Can someone give a proof or a reference how it converges in probability?
It is known that the brownian motion is almost surely of unbounded variation, but can we say the limit doesn't exists almost surely?
| The definition of Itô's integral is based on the fact that $L^2$ is complete, so the limit you mentioned above is indeed in the sense of $L^2$-convergence. Consequently, $L^2$-convergence implies convergence in probability (which is weaker than the almost-sure convergence written in the question).
|
Are the invertible elements dense in a finite dimensional Banach Algebra? Let $A$ be a Complex unital Banach algebra which is finite dimensional. Let $G(A)$ be the set of all invertible elements in $A$. I want to know if the $G(A)$ is dense in $A$ in the norm topology.
| Let $a \in A$.
Case 1: $a \in G(A)$. Then $a \in \overline{G(A)} $.
Case 2: $a \notin G(A)$. Then $0 \in \sigma(a)$ ( = spectrum of $a$). Since $A$ is finite dimensional, $\sigma(a)$ is a finite set. Hence $0$ is an isolated point of $\sigma(a)$.
Therefore we get a sequence $( \mu_n)$ in $ \rho(a) $ (= resolvent set of $a$) such that $ \mu_n \to 0$.
This gives $a- \mu_n e \in G(A)$ and $a- \mu_n e \to a$. Hence again $a \in \overline{G(A)} $.
These two cases show $A = \overline{G(A)} $.
|
Ackermann function $A(m, n)$, all nonnegative integer solutions to $A(m, n) = m + n$? The Ackermann function $A(m, n)$ is given by the recursion$$\begin{cases} A(0, n) \overset{\text{def}}{=} n + 1 \\ A(m + 1, 0) \overset{\text{def}}{=} A(m, 1) \\ A(m + 1, n + 1) \overset{\text{def}}{=} A(m, A(m + 1, n)).\end{cases}$$What are all non-negative integer solutions of the equation $A(m, n) = m + n$?
| $A(m,n) \geq n+m+1$ for all $m,n \geq 0$. Proof: by induction on $m$.
Base case: $A(0,n) =n+1 \geq n+1$.
Induction hypothesis (IH1): $A(m,n) \geq m+n+1$ for all $n \geq 0$ and some $m$.
Inductive step: Within the inductive step, we need to do induction on $n$.
Base case: We have that $$A(m+1,0)=A(m,1)\geq m+2 = (m+1)+0+1$$ where the inequality holds by IH1.
Induction hypothesis (IH2): $A(m+1,n)\geq(m+1)+n+1$ for some $n\geq0$.
Inductive step: We have that \begin{align*} A(m+1,n+1) &=A(m,A(m+1,n)) \\&\geq m+A(m+1,n)+1 \\&\geq m+m+1+n+1+1 \\ &= 2m+n+3 \\&\geq m+n+3 \\&= (m+1)+(n+1)+1 \end{align*}
The first inequality holds because of IH1, and the second holds because of IH2. This completes this inductive step, hence $A(m+1,n) \geq (m+1)+n+1$ for all $n \geq 0$.
This also completes the other inductive step.
Hence $A(m,n) \geq n+m+1>m+n$, so there are no solutions.
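The bound is easy to spot-check on small inputs with a memoized implementation in Python (a sketch; the grid is kept small because the values explode for $m\ge 4$):

from functools import lru_cache
import sys

sys.setrecursionlimit(100_000)   # A(3, n) recurses deeply

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

assert all(A(m, n) >= m + n + 1 for m in range(4) for n in range(8))
print("A(m, n) >= m + n + 1 on the sampled grid")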
|
Show that $a^2 \equiv a \mod (1+i)$ for all $a \in \mathbb{Z}[i]$. This is a problem statement from a section on quotient rings in Abstract Algebra, so I assume it requires the use of the FIT for rings. When looking around for similar problems, I was only able to find examples with number theory, which isn't really what I'm looking for.
My main question here is: what does $\mathbb{Z}[i]$ mean? I'm familiar with the notation $\mathbb{Z}(i) = \{a+bi : a,b\in\mathbb{Z}\}$ and $\mathbb{Z}[X]$ (a polynomial ring), but never have I seen $\mathbb{Z}[i]$.
I think that I'll be able to figure this out knowing that, but I still would very much appreciate a hint on how to start. I assume you can just show that $a^2 - a \equiv a(a - 1) \equiv 0 \mod (1 + i)$, but I'm not really sure where to go from there.
| By the Third Isomorphism Theorem,
$$
\frac{\mathbb{Z}[i]}{(1+i)} \cong \frac{\mathbb{Z}[x]}{(1+x, x^2 +1)} \cong \frac{\mathbb{Z}}{((-1)^2 +1)} = \frac{\mathbb{Z}}{(2)} \, .
$$
By Fermat's Little Theorem, every element $a \in \mathbb{Z}/2\mathbb{Z}$ satisfies $a^2 = a$.
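A quick numerical spot check (a sketch; divisibility by $1+i$ is tested via multiplication by $(1-i)/2$, which is exact in floating point for these small inputs):

def divisible_by_one_plus_i(w):
    v = w * (1 - 1j) / 2          # w/(1+i) = w*(1-i)/2
    return v.real == int(v.real) and v.imag == int(v.imag)

ok = True
for a in range(-6, 7):
    for b in range(-6, 7):
        z = complex(a, b)
        ok = ok and divisible_by_one_plus_i(z * z - z)
print(ok)   # True: a^2 - a is divisible by 1+i for every sampled a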
|
Show that expectation value is finite Let's consider a sequence of independent random variables $X_n$ which have the same distribution, and suppose we know that for each $n$, $EX_n=0$. The problem is to show that $E\left|X_0X_1+X_1X_2+...+X_{n-1}X_n\right|< \infty$ for $n \ge 1$.
My try:
$E|X_0X_1+X_1X_2+...+X_{n-1}X_n|\le E|X_0X_1|+...+E|X_{n-1}X_n|=n \cdot E|X_0X_1| = n\cdot EX_0^2$ but now I don't know how to show this is finite.
| If $\left(X_n\right)_{n\geqslant 1}$ is an independent sequence, so is the sequence $\left(\left|X_n\right|\right)_{n\geqslant 1}$. Since
$$\left|\sum_{j=0}^{n-1}X_jX_{j+1} \right|\leqslant \sum_{j=0}^{n-1}\left|X_j\right|\left|X_{j+1} \right|$$
and $\left|X_j\right|$ is independent of $\left|X_{j+1} \right|$ for any $j\in\left\{0,\dots,n-1\right\}$, we derive that
$$\mathbb E\left|\sum_{j=0}^{n-1}X_jX_{j+1} \right|\leqslant \sum_{j=0}^{n-1}\mathbb E\left|X_j\right|\cdot \mathbb E\left|X_{j +1} \right|, $$
and the right hand side consists of finitely many finite terms.
Notice that we only need pairwise independence and the fact that all the $\left|X_n\right|$ have a finite expectation. In particular, the $X_n$ do not need to be centered.
|
A question regarding $e^{-\lambda}\approx \left(1-\frac{\lambda}{n}\right)^n$ For large $n$ and moderate $\lambda$, $e^{-\lambda}\approx \left(1-\frac{\lambda}{n}\right)^n$. Moderate $\lambda$ however does not say very much, is there some way to be a bit more specific? Is there perhaps some term that shows the remainder from the actual value of $e^{-\lambda}$ that can be used to show what moderate $\lambda$ ought to mean?
| One possible way to get error bounds is to use logarithms on both sides. Note that for example
$$\ln \left(\left(1 - \frac{\lambda}{n}\right)^n\right) = n \ln \left(1 - \frac{\lambda}{n}\right) = n \left(-\frac{\lambda}{n} + O\left(\frac{\lambda^2}{n^2}\right)\right) = -\lambda + O\left(\frac{\lambda^2}{n}\right),$$
which yields
$$e^{-\lambda} = \left(1 - \frac{\lambda}{n}\right)^n \exp\left(-O\left(\frac{\lambda^2}{n}\right)\right).$$
If you want some more explicit bounds, then you need to analyze the remainder of the Taylor expansion of the logarithm more precisely.
For example, the following two bounds hold for all $x > 0$:
$$1 - \frac{1}{x} \le \ln(x) \le x - 1$$
This implies that for $x < 1$ we have
$$\frac{-x}{1 - x} \le \ln(1 - x) \le -x,$$
which gives us the following bounds in the case $\lambda < n$:
$$\exp\left(-\frac{\lambda}{1 - \frac{\lambda}{n}}\right) \le \left(1 - \frac{\lambda}{n}\right)^n \le e^{-\lambda}.$$
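Numerically, the two-sided bound looks like this (a quick Python check with $\lambda=2$, $n=100$):

import math

lam, n = 2.0, 100
lower = math.exp(-lam / (1 - lam / n))
mid = (1 - lam / n) ** n
upper = math.exp(-lam)
print(lower <= mid <= upper, lower, mid, upper)
# True 0.1299... 0.1326... 0.1353...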
|
Describing the space of matrices which "jordanize" a given matrix This is a naive linear algebra question. I apologize for the level but I could not find an answer in the literature.
Let $A$ be a $n$ by $n$ matrix (say over $\mathbb C$). Suppose the Jordan form of $A$ is the matrix $J_\lambda$, for some partition $\lambda$ of $n$. My question is: how can one describe the set of invertible matrices which "jordanize" $A$? In other words, I would like to have an explicit description of $$G_A=\{M\in \textrm{GL}_n\,|\,M^{-1}AM=J_\lambda\}\subset \textrm{GL}_n.$$
I remember some time ago I thought the answer would be that $G_A$ is a product of $\mathbb C^\ast$, as many as there are distinct eigenvalues for $A$, which can be read off from $\lambda$. But now I was looking at a specific example more closely and I am unsure how one would go about proving that. Any help or reference is very much appreciated!
Added. For instance, if $n=2$, one can check by explicit computation that for the (already jordanized) matrix
$$A=
\begin{pmatrix}
\eta & 0\\
0 & \mu
\end{pmatrix},\qquad \eta\mu\neq 0,\,\,\,\eta\neq \mu,
$$
one has $$
G_A=\Big\{\begin{pmatrix}
a & 0\\
0 & b
\end{pmatrix}:a,b\in\mathbb C^\ast\Big\}=\mathbb C^\ast\times \mathbb C^\ast.
$$
I wonder if one can be as explicit as this (expressing $G_A$ in terms of $\mathbb C^\ast$ or maybe $\mathbb C$) for any matrix of given Jordan type.
| Suppose that both $M$ and $N$ are in $G_A$. Then
$$
(A =)\; MJM^{-1} = NJN^{-1} \implies\\
J = (M^{-1}N)^{-1}J(M^{-1}N) \implies\\
(M^{-1}N)J = J(M^{-1}N)
$$
In other words: fix any one element $M \in G_A$. Let
$$
C_J = \{X \in GL : XJ = JX\}
$$
Then $G_A = \{MX: X \in C_J\}$. Note also that $C_J$ may be thought of as the kernel of the linear map $X \mapsto XJ - JX$.
Note: if $J$ is block diagonal with
$$
J = \pmatrix{
J_1\\
&J_2\\
&&\ddots\\
&&& J_k
}
$$
Where each $J_i$ is a Jordan block. If the blocks have pairwise distinct eigenvalues, then a matrix will commute with $J$ if and only if it is block diagonal of the same shape and the corresponding blocks commute (when eigenvalues repeat across blocks, the commutant is larger).
|
Prove that any map from a surface to another one preserves at least one angle Tissot proved that for any kind of representation of a surface onto another, there exists, at every point of the first surface, two perpendicular tangents, which are unique unless the angles are preserved at that point (i.e. the map is conformal), such that the images of these two tangents are perpendicular on the second surface.
I can't find the paper; and don't really know if this is easy to prove.
The only idea I had so far is picking two perpendicular tangents of the first surface and rotate them and try using something like Bolzano, but it hasn't worked so far and I don't think it will.
Does anyone have a reference/knows the general idea of how to prove that? I would really appreciate that. Thanks!
| You need the polar decomposition of a linear map: a (real) linear map is always the composition of an orthogonal map and a symmetric map (which is diagonalizable in an orthonormal basis). Since an orthogonal map does not alter angles, we need only consider the symmetric map. Then we can find an orthonormal basis of the space, with respect to which the map is only a stretch along the basis vectors.
|
Showing that a if a real matrix $A$ satisfies $A^t=A^3$, then $A^2$ is diagonalizable over the reals I tried proving that if $A$ is a real square matrix with $A^t=A^3$, then $A^2$ is diagonalizable over $\mathbb R$.
Since $A^t=A^3, A$ if considered as a complex matrix, commutes with its adjoint. So $A=U^*DU$ for a diagonal matrix $D$ containing the eigenvalues of $A$. Then $A^2=U^*D^2U$ and $D^2$ has entries $\lambda^2$ for each eigenvalue $\lambda$ of $A$. Now note that if $\lambda$ is an eigenvalue for $A$ with eigenvector $v$, $\bar\lambda$ is one of $A^*=A^t$ with the same eigenvector, but then $\bar\lambda$ must also be an eigenvalue of $A^3$ with same vector. So we have $A^3v=\bar\lambda v=\lambda^3 v$. So $\bar\lambda=\lambda^3$ and so $\lambda\bar\lambda=|\lambda|^2=\lambda^4$. Hence $\lambda^4$ is the square of a real number and thus $\lambda^2$ is real for all $\lambda$. So eigenvalues of $A^2$ are real and the matrix $D^2$ above is a real matrix. Consequently, $U$ is actually orthogonal when we write $A^2=U^*D^2U$ and so $A^2$ is indeed diagonalizable over $\mathbb R$.
My question is, is the above solution correct, or is there any flaws in it?
| Your answer is mostly correct. You should take care to fix the statement
So $\bar\lambda=\lambda^3$ and so $\lambda^2=|\lambda|$ is real for all $\lambda$.
I would say $(\lambda^2)^2 = |\lambda|^2 \geq 0$, which means that $\lambda^2$ is real (a complex number whose square is a nonnegative real number must itself be real).
So eigenvalues of $A^2$ are real and the matrix $D^2$ above is a real matrix. Consequently, $U$ is actually orthogonal when we write $A^2=U^*D^2U$ and so $A^2$ is indeed diagonalizable over $\mathbb R$.
|
transforming $\chi^2(n)$ distribution to $t$ distribution Suppose we have two independent random variables $x_1$ and $x_2$, both with a $\chi^2(n)$ distribution.
Prove that the following has a $t$ distribution with $n$ degrees of freedom:
$$\frac{\sqrt n (x_1 +x_2 )}{ \sqrt {x_1 x_2}}$$
| @MichaelHardy and @Henry are trying to explain that you cannot
prove this because it is not true.
Here is something that is true. It is vaguely related to what
you posted and might even be what you intended. It illustrates the definition of
a random variable with Student's t distribution:
Let $X_1 \sim Norm(0,1)$ and, independently, $X_2 \sim Chisq(df=n).$
Then $$T = \frac{\sqrt{n}X_1}{\sqrt{X_2}} = \frac{X_1}{\sqrt{X_2/n}} \sim T(n),$$
Student's t distribution with $n$ degrees of freedom. You can find the
proof in many mathematical statistics texts.
The simulation below in R statistical software demonstrates this relationship
for $df = 15.$
m = 10^6; df = 15; x1 = rnorm(m); x2 = rchisq(m, df)
t = sqrt(df)*x1/sqrt(x2)
mean(t); var(t); df/(df-2)
## -0.0001963736 # aprx E(T) = 0
## 1.15594 # aprs Var(T) = 15/13
## 1.153846 # exact Var(T)
quantile(t) # min, aprx quartiles, max
## 0% 25% 50% 75% 100%
## -1.012026e+01 -6.929461e-01 -5.062215e-04 6.930493e-01 7.579955e+00
qt(c(.25,.75), df)
## -0.6911969 0.6911969 # exact lower and upper quartiles of T(15)
Below is a histogram of the million realizations of $T$ simulated in
this way. Student's t distributions tend to have heavy tails, hence the
'long' horizontal axis to accommodate a few stragling values far from 0
in both directions (which produce bars too short to show at the resolution
of the graph). The curve is the PDF of $T(15).$
|
Area of a surface by rotating the curve about the x-axis. The curve $y=\sqrt{5-x}$ with $a=3$ and $b=5$ is rotated about the $x$-axis. Find the exact area of the surface obtained.
| In order to solve this problem, we need to use the following equation: $$ SA = 2\pi \int_{a}^{b}y\sqrt{1+(\frac{dy}{dx})^2}\hspace{1mm}dx $$ Where y, in this case, is given by: $$ y = \sqrt{5-x} $$ And, as you mentioned in your comment, the derivative with respect to x is given by: $$\frac{dy}{dx}=\frac{-1}{2\sqrt{5-x}} $$We then can substitute these expressions into the equation: $$ SA = 2\pi \int_{3}^{5}\sqrt{5-x}\hspace{0.6mm}\cdot \hspace{0.6mm} \sqrt{1+\frac{1}{4(5-x)}} \hspace{1mm}dx $$ Since the left term and the right term both have an exponent of ½, and we know that: $$ a^c \cdot b^c=(ab)^c $$ We can simplfy the integrand to: $$ \sqrt{5-x+\frac{1}{4}} $$ Which leaves us with the following expression to be integrated:$$ 2\pi \int_{3}^{5} \sqrt{\frac{21}{4}-x} \hspace{1mm}dx $$ Which shouldn't be too difficult of an integral to solve at this point in your calculus studies.
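For reference, carrying out that last integral (so you can check your own work):
$$2\pi\int_{3}^{5}\sqrt{\frac{21}{4}-x}\;dx=2\pi\left[-\frac{2}{3}\left(\frac{21}{4}-x\right)^{3/2}\right]_{3}^{5}=\frac{4\pi}{3}\left(\left(\frac{9}{4}\right)^{3/2}-\left(\frac{1}{4}\right)^{3/2}\right)=\frac{4\pi}{3}\cdot\frac{26}{8}=\frac{13\pi}{3}.$$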
|
Why is the first order derivative of a function equal to the Lipschitz constant? (edited)
I am learning about the concept of Lipschitz constant and function.
I understand you can write:
$$|f(x) - f(y)| \le M |x - y|$$
(where M is the Lipschitz constant) and thus:
$$M \ge \dfrac {|f(x) - f(y)|} {|x - y|}$$
Of course I realise this has similarities with:
$$ f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$$
Could someone please explain what's the actually proper way mathematically to explain or prove that the Lipschitz constant $M$ is actually the same as the first order derivative of the function $f(x)$ (or is it? and if so under which conditions?).
| Say $f$ is defined and continuous on an interval $[a,b]$, and is differentiable on the interior $(a,b)$.
*If $M$ is a Lipschitz constant, then so is any $N>M$. You might be more interested in the fact that $L:= \sup_{x\in (a,b)} |f'(x)|$ is the optimal Lipschitz constant when it is finite.
*By the mean value theorem, for every $x,y\in [a,b]$, $x < y$, there exists some $c\in (x,y)\subseteq (a,b)$ such that
$$ \left | \frac{f(y) - f(x)}{y - x} \right | = |f'(c)| \le L.$$
That is, $L$ is indeed a Lipschitz constant.
*Without loss of generality let $L>0$.
By the definition of supremum, there is a sequence $(x_n)_{n\in\mathbb N}$ in $(a, b)$ such that $\lim_{n\to\infty} |f'(x_n)| = L$.
*For $n\in\mathbb N$, due to differentiability, there exists some $y_n \in (a,b)$ with
$$ \left| \frac{f(y_n) - f(x_n)}{y_n - x_n} - f'(x_n) \right | < \frac{1}{n}. $$
Then, by the (reversed) triangle inequality, it follows
$$ \left | \frac{f(y_n) - f(x_n)}{y_n - x_n} \right | \ge |f'(x_n)| - \left | \frac{f(y_n) - f(x_n)}{y_n - x_n} - f'(x_n) \right | \to L, $$
for $n\to\infty$.
That is, every Lipschitz constant needs to be at least $L$.
|
When will a rational point on elliptic curve be a generator of all other rational points on the curve? I understand that when given a rational point on an elliptic curve, one can find more by method of secants and tangents. This also creates an abelian group of rational points on the curve.
When you generate such a group of points, will it contain all the rational points? I'd appreciate answers, other posts, or good reads on the question.
I have in mind specifically elliptic curves of the form:
$$y^2=x^3-n^2x$$
| No, in general one single point will not generate all the rational points. If $E$ is an elliptic curve defined over $\mathbb{Q}$, then the rational points form a finitely generated abelian group $E(\mathbb{Q})$. The classification of finitely generated abelian groups tells us that
$$E(\mathbb{Q}) \cong E(\mathbb{Q})_\text{tors} \oplus \mathbb{Z}^{R_{E/\mathbb{Q}}},$$
i.e., $E(\mathbb{Q})$ is generated by a finite number of points, some of finite order (those in the torsion subgroup $E(\mathbb{Q})_\text{tors}$), and some of infinite order. In particular, if $R_{E/\mathbb{Q}}>1$, then a single point $P$ in $E(\mathbb{Q})$ will only generate part of the group, but not all. It also can happen that $R_{E/\mathbb{Q}}=0$, but the torsion subgroup is bicyclic, e.g., $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$, and in this case although there are only finitely many rational points, you can't generate them all from one single rational point.
In the case you are interested in, for a curve $E_n:y^2=x^3-n^2x$ (known as the curves related to the congruent number problem) the group of rational points is of the form $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}^{R_n}$. So it is never generated by a single point. If you mean to ask whether $R_n$ is always $1$ when it is positive, then that is not true either. For instance, $R_5=1$, but $R_{34}=R_{41}=R_{65}=2$ (here $34,41,65$ are the only values of $n\leq 100$ such that $R_n\geq 2$).
|
Number of undirected graphs with n vertices and k edges (inclusive of simple, non-simple, isomorphic, and disconnected graphs) Given the constraints (or non-constraints rather), is there a closed solution on a set of labeled vertices?
One way I looked at this problem is by trying to implement a constrained stars-and-bars technique on the diagonal and diagonal-exclusive half of the $n \times n $ adjacency matrix, but I'm not getting any good leads.
It seems there are multiple definitions of a self-loop in undirected graphs. In the context of this question, I define a self-loop to represent a degree of 2 in an undirected graph.
| If you consider labeled vertices and allow loops, there are $\binom{n}{2}+n=\frac{n(n+1)}{2}$ legitimate spots for an edge: the unordered pairs of distinct vertices, plus the $n$ possible loops. Since you accept non-simple graphs (so parallel edges), a graph with $k$ edges is exactly a multiset of $k$ of those spots, and counting multisets by stars and bars gives
$$\binom{\frac{n(n+1)}{2}+k-1}{k}.$$
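A brute-force cross-check of the stars-and-bars count, for small parameters of my own choosing:

```python
from math import comb
from itertools import combinations_with_replacement

n, k = 3, 4                    # small test sizes (arbitrary)
spots = n * (n + 1) // 2       # C(n,2) unordered pairs + n loops

# A multigraph with k unlabeled edges is a multiset of k spots.
brute = sum(1 for _ in combinations_with_replacement(range(spots), k))
assert brute == comb(spots + k - 1, k)
print(brute)  # 126
```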
|
Geometric reason why even unimodular positive definite lattices exist only in dimensions divisible by $8$ It is a well-known fact that even unimodular rank $n$ lattices $L\subseteq \mathbb{R}^n$ only exist if $8\vert n$.
The only proof of this that I know (in the book "Elliptic functions and modular forms" by Koecher/Krieg) is rather ingenious and uses the modularity of the associated theta function $$\Theta(\tau,L)=\sum_{\gamma\in L}e^{i\pi\tau\Vert \gamma\Vert^2}$$ to conclude that
$$\Theta(i,L)=e^{\frac{i\pi n}{4}}\Theta(i,L)$$
and hence $8\vert n$.
While it is quite natural to associate a theta function to a lattice, it seems to me that there has to be a deeper, somehow "purely geometric reason" for this phenomenon (i.e. the condition on the dimension) which does not use the theory of modular forms.
So my question is the following:
What is the "geometric" reason why even unimodular positive definite lattices exist only in dimensions divisible by $8$?
(I am aware that the term "geometric" is not well-defined and can be interpreted broadly: feel free to do so)
| Instead of relying on the theory of modular forms for the proof, one can also use the methods from "geometry of numbers". Such a proof can be found, for example, in Serre's book A Course in Arithmetic, on page $53$, Theorem $2$. Of course, the geometry of numbers might not give you a "geometric" reason in the sense you are looking for; I am not sure that a purely geometric argument suffices.
Perhaps it is helpful to view the more general picture.
Consider unimodular symmetric bilinear modules, which are free $\mathbb{Z}$-modules endowed with an integral symmetric bilinear form of discriminant $\pm 1$. As a real form it has signature $(r,s)$. Then one can show
with "geometric" methods that the following is true:
Proposition: The signature $(r,s)$ of an even unimodular symmetric bilinear module satisfies the congruence $r\equiv s\bmod 8$.
For positive definite even unimodular lattices of rank $r$ the signature is $(r,0)$, so we obtain $r\equiv 0\bmod 8$. For a proof, see Serre, as above.
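As a concrete rank-$8$ check (my own illustration), one can verify that the Gram matrix of the $E_8$ root lattice, i.e. the Cartan matrix of $E_8$, is even and unimodular:

```python
from fractions import Fraction

# Cartan matrix of E8: Dynkin chain 0-1-2-3-4-5-6 with node 7 attached
# to node 4 (legs of lengths 4, 2, 1 from the branch node).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]
G = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for i, j in edges:
    G[i][j] = G[j][i] = -1

def det(M):
    """Determinant via exact Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    d = Fraction(1)
    for i in range(len(M)):
        piv = next(r for r in range(i, len(M)) if M[r][i] != 0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, len(M)):
            f = M[r][i] / M[i][i]
            for c in range(i, len(M)):
                M[r][c] -= f * M[i][c]
    return d

assert all(G[i][i] % 2 == 0 for i in range(8))  # even: all norms lie in 2Z
assert det(G) == 1                               # unimodular
print("E8 Gram matrix is even with determinant", det(G))
```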
|
What is the sum $\sum_{j=1}^{2n} j^2$? What is the formula to calculate the following sum?
For example: $$\sum_{j=1}^{n} j^2 = \frac{1}{6}n(n+1)(2n+1)$$
The sum I need help with:
$$\sum_{j=1}^{2n} j^2$$
Thanks for the help in advance, I am really stuck on this question.
| The formula you mentioned itself contains the answer.
As you know, $\sum_{j=1}^{n} j^2 = \frac{1}{6}n(n+1)(2n+1)$.
Substituting $2n$ in place of $n$, you get: $$\sum_{j=1}^{2n} j^2=\frac{1}{6}(2n)(2n+1)(4n+1)=\frac{1}{3}n(2n+1)(4n+1)$$
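A quick spot check of the formula (with $n=7$, an arbitrary choice):

```python
n = 7  # any small value works as a spot check
lhs = sum(j * j for j in range(1, 2 * n + 1))
rhs = n * (2 * n + 1) * (4 * n + 1) // 3  # the product is always divisible by 3
assert lhs == rhs
print(lhs)  # 1015
```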
|
How can we minimize a function of two variables? Here we are more interested in the method to minimize the function, rather than what the actual result is.
The function I currently have, which I may need to change, is:
$$\frac{b}{d} + \frac{1}{6}\log(b)\bigl(1+\log(b)\bigr)\bigl(1+2\log(b)\bigr)\frac{m}{g\,d}$$
with:
$$b=g \log_2(\log_2{(g)}+d\log_2{(2n^2a^2)}+\log_2{(nm)})$$
Here $a$, $m$, and $n$ are given and known; I'm interested in the method for finding the $d$ and $g$ which minimize the function.
My question is: what method can I use to minimize a function of this type? Can it be as simple as taking derivatives, assuming the derivatives themselves are easy to compute?
| In general, if you have a function of two variables, $f(x,y)$, to find the critical points you need to take partials and set them equal to zero $$\frac {\partial f}{\partial x}=0$$
$$\frac {\partial f}{\partial y}=0$$
The values of $x$ and $y$ which satisfy these equations will be either minima, maxima, or saddle points. You can plug them into the function to see which is bigger and compare them.
If you do not want to manually plug these values into the function, you can instead use the second derivative test. Let $D=f_{xx}f_{yy}-f_{xy}^2$, evaluating $D$ and all second partials at the critical points you have four options:
If $D>0$ and $f_{xx}>0$ you have a local minimum.
If $D>0$ and $f_{xx}<0$ you have a local maximum.
If $D<0$ you have a saddle point.
If $D=0$ you need to use a third order test to determine the nature of the critical point, although from the nature of your function, I believe it will be laborious enough to compute the first derivative, let alone the third partials. I would instead resort to a computer. But partial derivatives are the way to go with multivariate functions when looking for maxima and minima.
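Here is a minimal sketch of the procedure, using a simple stand-in function of my own choosing (not the function from the question) and assuming SymPy is available:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2            # stand-in example, not the question's function

fx, fy = sp.diff(f, x), sp.diff(f, y)
crit = sp.solve([fx, fy], [x, y], dict=True)   # solve both partials = 0

fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(sp.diff(f, x), y)
D = fxx * fyy - fxy**2                         # second derivative test

for pt in crit:
    d, h = D.subs(pt), fxx.subs(pt)
    if d > 0:
        print(pt, "local min" if h > 0 else "local max")
    elif d < 0:
        print(pt, "saddle point")
    else:
        print(pt, "inconclusive (D = 0)")
# {x: -1, y: 0} -> saddle point
# {x: 1, y: 0}  -> local min
```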
|
$\lim_{z\to 0}f(z)=0$ along $y=mx^n$ but for which $\lim_{z\to 0}f(z)$ does not exist Given a nonzero real number $m$, construct a function $f(z)$ such that
$$
\lim_{z\to 0}f(z) = 0
$$
along each curve of the form $y = m\cdot x^n$ (for $n = 1,2,3,4,\dots$), but for which
$$\lim_{z\to 0}f(z)$$
does not exist. I don't know where I should start.
Thank you ...
| $f(z) = 0$ on the union of the curves $y = mx^n$, and $f(z) = 1$ on the complement.
The union of the curves $y = mx^n$ with $n\in\Bbb N$ is a "small" set. In particular, its complement intersects every neighborhood of $0$, so $f$ takes the value $1$ arbitrarily close to the origin and the unrestricted limit does not exist.
If $m$ is also allowed to vary, you can take
$f(x,y) = 0$ when $|y|\ge e^{-1/x^2}$ or $y=0$, and $f(x,y) = 1$ on the complement.
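A quick numerical illustration of the second construction; the sample curve $y=3x^2$ and the test path $y=\frac12 e^{-1/x^2}$ are my choices:

```python
import math

def f(x, y):
    # 0 on the region |y| >= exp(-1/x^2) and on the x-axis, 1 below the envelope
    if y == 0 or abs(y) >= math.exp(-1.0 / x**2):
        return 0
    return 1

for x in (0.5, 0.2, 0.1, 0.05):
    on_curve = f(x, 3 * x**2)                     # along y = 3x^2
    below    = f(x, 0.5 * math.exp(-1.0 / x**2))  # squeezed under the envelope
    print(x, on_curve, below)
# on_curve is 0 (the polynomial beats exp(-1/x^2) near 0), below stays 1,
# so the two-dimensional limit at the origin cannot exist.
```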
|
Value of $k$ for which integral takes minimum value Find the value of $k$ for which the value of given integral is minimum:
$$\int_{0}^{\infty} \frac{x^k}{2+4x+3x^2+5x^3+3x^4+4x^5+2x^6}\,dx$$
Could someone give me a hint on how to begin this question? I am unable to find a starting point.
| The denominator of the integrand is a palindromic polynomial, hence
$$ I_k = \int_{0}^{+\infty}\frac{x^{k-3}\,dx}{q\left(x+\frac{1}{x}\right)}\stackrel{x\to x^{-1}}{=}\int_{0}^{+\infty}\frac{x^{1-k}\,dx}{q\left(x+\frac{1}{x}\right)}=I_{4-k} $$
with $q(z)=2z^3+4z^2-3z-3=(z-1)(3+6z+2z^2)$. Since $k\mapsto I_k$ is strictly convex (differentiating twice under the integral sign brings down a factor $\log^2 x\geq 0$) and $I_k=I_{4-k}$, the minimum is attained at the symmetry point $k=2$:
$$ \color{blue}{\large I_2} = 2\int_{0}^{1}\frac{dx}{x\,q\left(x+\frac{1}{x}\right)}=2\int_{2}^{+\infty}\frac{dz}{q(z)\sqrt{z^2-4}}=0.06822013435\ldots $$
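Assuming SciPy is available, one can confirm the symmetry $I_k=I_{4-k}$ and the location of the minimum numerically:

```python
from scipy.integrate import quad

# Denominator of the integrand, copied from the question.
p = lambda x: 2 + 4*x + 3*x**2 + 5*x**3 + 3*x**4 + 4*x**5 + 2*x**6

def I(k):
    val, _err = quad(lambda x: x**k / p(x), 0, float('inf'))
    return val

for k in range(5):
    print(k, I(k))
# The values satisfy I(k) == I(4 - k) and dip at k = 2, where
# I(2) = 0.06822013..., matching the closed evaluation above.
```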
|