aquarium polluted water differential equation | I think they mean
$$\frac{dy}{dt} = -\frac{y}{2}$$
That is, the rate at which the filter drains and replaces water is $1/2$ the amount of water in the aquarium. Then
$$y(t) = y(0) \, e^{-t/2}$$
To find the time it takes to drain and replace $1/2$ the water in the aquarium, set $y(t)=\tfrac12\,y(0)$ and take logs of both sides:
$$1/2 = e^{-t/2} \implies t = 2 \log{2}$$ |
Probability of picking a random natural number | There is no uniform distribution on the natural numbers. So the question itself is self-contradictory.
Some explanation:
You say that each number is chosen with the same probability. Say probability $c$.
Let's denote the probability to choose $n$ by $p(n)$. So you say $p(n)=c$ for any natural $n$.
For any distribution, the sum of probabilities always equals 1:
$$p(1)+p(2)+p(3)+ \dots = 1.$$
So $c+c+c+\dots = 1.$ But if $c > 0$, you get
$$c+c+c+\dots = \infty.$$
And if $c=0$ you get
$$c+c+c+\dots = 0.$$
There's no value of $c$ for which this sum equals $1$. |
Given $f(x) = \frac{\sqrt{x^2+a^2}}{x - 1}$, check if $f$ is monotone on the interval $(0, \infty) \setminus \{ 1 \}$. | You can say that $f(x)$ is piecewise-monotonic on $(0,\infty)\setminus\{1\}$ because it's monotonically decreasing on each of the intervals $(0,1)$ and $(1,\infty)$. However, the problem with saying it's monotonic on that set is the following. There is a vertical asymptote at $x=1$: $\lim_{x\to 1^-}f(x)=-\infty$ and $\lim_{x\to 1^+}f(x)=+\infty$. This means that we can't say $f(x)$ is decreasing (and hence monotonic) on $(0,\infty)\setminus\{1\}$. In fact, we have $f(0.5)<0<f(1.5)$ for all $a$, which contradicts the idea that $f(x)$ is decreasing on $(0,\infty)\setminus\{1\}$. More generally, $f(x)$ is negative on $(0,1)$ and positive on $(1,\infty)$.
A similar question to ask is whether $g(x)=\frac 1x$ is monotonic on $(-\infty,\infty)\setminus\{0\}$. It's not because of the asymptote at $x=0$. |
Determining if Series involving Complex Numbers Converges | $$\left|\frac{1}{n^2+i}\right| = \frac{1}{\sqrt{n^4+1}} < \frac{1}{\sqrt{n^4}} = \frac{1}{n^2};$$
thus, by the comparison test and $p$-test (with $p=2$) the series converges absolutely. |
How good is this prime zeta function approximation? | You're right, the result must be wrong. If the proof of the main result, $(1 − P(s))^2 =2/\zeta(s)− 1 + P(2s),$ were correct, it would be valid for all $s>1,$ not just integers. That's impossible, since the LHS grows to infinity as $s\rightarrow1+,$ while the RHS $\rightarrow-1+P(2).$ That's because $\zeta(s)$ has a pole at $s=1,$ and $P(s)$ a logarithmic singularity.
Numerically, the difference between both sides is small for large $s,$ predictably, as both approach $1$. For $s=2,$ it's $\approx0.007185545379369163,$ not even a satisfactory approximation. For $s=1.5,$ it's $\approx0.08228197886084833$, for $s=1.25,$ it's $\approx0.4091107555146472$. |
Pythagorean Theorem for imaginary numbers | The Pythagorean Theorem is specific to right triangles in Euclidean space, which have nonnegative real lengths. It is properly generalized by considering inner product spaces, which are vector spaces $V$ equipped with a function $\langle \cdot,\cdot \rangle:V\to \mathbb R$ (or $\mathbb C$) satisfying certain axioms. This allows us to define a right triangle as a triple of points $(\vec a,\vec b,\vec c)$ such that $\langle \vec b-\vec a,\vec c-\vec a\rangle=0$, which can be seen as saying the sides $\vec b-\vec a$ and $\vec c-\vec a$ are orthogonal. It also gives us a notion of length, with the length of a vector $\vec v$ being $\sqrt{\langle \vec v,\vec v\rangle}$. The generalization then states that
$$\langle \vec b-\vec a,\vec b-\vec a\rangle+\langle \vec c-\vec a,\vec c-\vec a\rangle=\langle \vec b-\vec c,\vec b-\vec c\rangle$$
assuming $\vec b-\vec c$ is the longest side, i.e. that the length of the side $\vec b-\vec a$ squared plus the length of the side $\vec c-\vec a$ squared equals the length of the side $\vec b-\vec c$ squared. This is very different from the generalization you gave. |
Calculating the integral $ \int_{0}^{1}\frac{\tan^{-1}(x)}{1+x}dx$ | Let us denote the considered integral by $I$. The change of variables $x=\frac{1-t}{1+t}$ shows that
$$
I=\int_0^1\frac{1}{1+t}\tan^{-1}\left(\frac{1-t}{1+t}\right)dt
$$
But if $f(t)=\tan^{-1}\left(\frac{1-t}{1+t}\right)+\tan^{-1}t$, then it is easy to see that $f'(t)=0$ so $f(t)=f(0)=\pi/4$ for $0\leq t\leq 1$, hence
$$
I=\int_0^1\frac{\frac{\pi}{4}-\tan^{-1}t}{1+t}dt=\frac{\pi}{4}\ln(2)-I
$$
So, $I=\dfrac{\pi}{8}\ln2$. |
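As a quick numerical sanity check of this value (not part of the argument; scipy is assumed to be available):
import numpy as np
from scipy.integrate import quad

# numerically evaluate I and compare with (pi/8) * ln 2
I, _ = quad(lambda x: np.arctan(x) / (1 + x), 0, 1)
print(I, np.pi / 8 * np.log(2))   # both ~ 0.2722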
Obtaining the pmf of a function while knowing its mgf | Yep, you're almost finished! The hard part was computing that
$$ \psi_Y(t) = \frac 1 4 + \frac 1 4 e^t + \frac 1 2 e^{2t}; \tag{$\ast$}$$
and remember that the definition of the moment generating function is
$$ \psi_Y(t) = \mathbb{E}(e^{tY}) .$$
Now, since we are finding a pmf, we have a discrete random variable, and so we appeal to the discrete form of $\psi_Y(t)$. That is, $$ \psi_Y(t) = \mathbb{E}(e^{tY}) = \sum_{y=0}^\infty e^{ty}\,\mathbb{P}(Y=y). $$
But this must be consistent with $(\ast)$ and so, $$\sum_{y=0}^\infty e^{ty}\,\mathbb{P}(Y=y) = \frac 1 4 + \frac 1 4 e^t + \frac 1 2 e^{2t}. $$
The first term of our sum (the one with $y=0$) is
$ e^{t\cdot 0}\,\mathbb{P}(Y=0) = \mathbb{P}(Y=0). $
Similarly, the second term ($y=1$) is $ e^{t\cdot 1}\,\mathbb{P}(Y=1) = e^{t}\,\mathbb{P}(Y=1), $
and the third term ($y=2$) is $ e^{t\cdot 2}\,\mathbb{P}(Y=2) = e^{2t}\,\mathbb{P}(Y=2). $
All we need to do now is equate coefficients of $e^{ty}$ to give us the required pmf. That is,
$$ \mathbb{P}(Y=0) + e^t\,\mathbb{P}(Y=1) + e^{2t}\,\mathbb{P}(Y=2) + \sum_{y=3}^\infty e^{ty}\,\mathbb{P}(Y=y) = \frac 1 4 + \frac 1 4 e^t + \frac 1 2 e^{2t}.$$
We discard $y\geq 3$ since all coefficients of these terms are $0$. |
understanding vector-matrix-vector operation in linear algebra | This $x^t A x$ pattern, in my experience, occurs most often when $A$ is a symmetric matrix (and if it's not, you can replace $A$ by $B = \frac{1}{2}(A + A^t)$, and the result will be the same).
And even more often, $A$ is positive-definite symmetric, in which case it can be factored as $M^t M$ (that's a big theorem), so that
$$
x^t A x = x^t M^t M x = (Mx)^t (Mx)
$$
which is the dot product of $Mx$ with itself, i.e., the squared length of $Mx$. You can think of $M$ as representing a change-of-coordinates, so this computation amounts to computing the length of $x$ in some different coordinate system, or alternatively, computing the length of $x$ in the standard coordinate system, but in some non-standard metric.
Consider the matrix
$$
M = \begin{bmatrix} 1 & 1 \\ 2 & 1 \end{bmatrix}
$$
Then
$$
A = M^t M =
\begin{bmatrix} 5 & 3 \\ 3 & 2 \end{bmatrix}.
$$
Now for $x = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, we have $x^t A x = 5$; we can regard this as a "distorted length", in which the squared length of a vector $
\begin{bmatrix} a\\b\end{bmatrix}$ is $5 a^2 + 6 ab + 2b^2$ instead of $a^2 + b^2$. The "unit circle" in this distance is an ellipse in the ordinary metric (and vice versa).
Alternatively, you can say "If we just treat $$ \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$ and $$\begin{bmatrix} -1 \\ 2 \end{bmatrix}$$ (the columns of $M^{-1}$) as our basis vectors -- i.e., we choose to measure coordinates with respect to a different coordinate system, then these two vectors are in fact orthonormal with respect to this allegedly-distorted metric."
One may regard this as saying that for any ellipse (the "unit circle" for an arbitrary symmetric positive definite metric arising from a bilinear function), there's a linear transformation taking its major and minor axes to the standard basis for 2-space.
In short: there are a lot of ways of looking at this. |
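A small numerical illustration of the example above (a sketch; numpy assumed):
import numpy as np

M = np.array([[1.0, 1.0],
              [2.0, 1.0]])
A = M.T @ M                     # [[5, 3], [3, 2]], positive-definite

x = np.array([1.0, 0.0])
print(x @ A @ x)                # 5.0, the "distorted" squared length of x
print((M @ x) @ (M @ x))        # 5.0 again, as ||Mx||^2

# the columns of M^{-1} are orthonormal in the metric defined by A
Minv = np.linalg.inv(M)
print(Minv.T @ A @ Minv)        # the identity matrix, up to rounding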
"$((A\times B) \to C)$" denotes what? | In this context, it seems to be the set of all mappings, which you can see because on the right of the equation you have $A \rightarrow (B \rightarrow C)$, and this wouldn't really make sense if $B \rightarrow C$ were a single mapping. I think $B \rightarrow C$ is usually written $C^B$. This 'equation' is called currying a function, where if we take a function $f(a, b) = \phi_{a,b}$ with two arguments, we can turn it into a function $g(a)$ which yields a function $g_a(b) = \phi_{a,b}$. It can be expressed with a universal property in the category of sets. |
Globally hyperbolic Lorentzian manifold | The definition of a Cauchy hypersurface you cited only implies that it is a continuously ($C^0$) embedded submanifold. This leads to $M$ being only homeomorphic to $\mathbb{R}\times S$. That the definition of global hyperbolicity (your item 4. is the actual original definition) is equivalent to the existence of a ($C^0$) Cauchy hypersurface and the topological splitting $M=\mathbb{R}\times S$ is the classical result proven by Geroch in the 1970s.
On your questions: The Cauchy hypersurface is smooth if it is a smoothly embedded submanifold (usually $C^\infty$). This is an additional condition for the Cauchy hypersurface. The Cauchy hypersurface is spacelike if additionally the restriction of the metric is Riemannian or, equivalently, all inextendible causal curves hit it at most once. This is another additional condition put on the Cauchy hypersurface.
For a long time it was an open question whether a globally hyperbolic spacetime actually always admits a smooth spacelike Cauchy hypersurface and hence whether it is actually diffeomorphic to $\mathbb{R}\times S$. This was only proven at the beginning of this century by Bernal and Sánchez. You might want to consult their papers for details:
https://arxiv.org/pdf/gr-qc/0306108.pdf
https://arxiv.org/pdf/gr-qc/0401112.pdf
https://arxiv.org/pdf/gr-qc/0512095.pdf |
Is $f:[0,1]\rightarrow\mathbb{R} $ Riemann integrable | It is not difficult to prove directly that under given conditions $f$ is Riemann integrable on $[0,1]$. To prove this we will show that corresponding to any given $\epsilon>0$ there is a partition $P$ of $[0,1]$ such that the difference between upper and lower Darboux sum for $f$ over $P$ is less than $\epsilon $.
Since $f$ is bounded on $[0,1]$, there is a number $M$ such that $|f(x) |<M$ for all $x\in[0,1]$. Now $f$ is Riemann integrable on interval $[\epsilon /4M,1]$ and hence there is a partition $P'$ of $[\epsilon /4M,1]$ such that $$U(f, P') - L(f, P') <\frac{\epsilon} {2}\tag{1}$$ Now consider $P=\{0 \} \cup P'$ so that $P$ is a partition of $[0,1]$ and $$U(f, P) - L(f, P) = (A-B) \frac{\epsilon} {4M}+U(f,P')-L(f,P')\tag{2}$$ where $$A=\sup\, \{f(x) :x\in[0,\epsilon /4M]\},\,B=\inf\,\{f(x):x\in[0,\epsilon /4M]\}$$ Clearly $A-B\leq 2M$ and hence it follows from equations $(1)$ and $(2)$ that $$U(f, P) - L(f, P) <\epsilon $$ and thus our job is done.
Nitpick: Someone may ask "What happens when $\epsilon/4M\geq 1$?" Then we can see that the partition $P=\{0,1\}$ works. |
Find the equations of the bisectors of the angles between the lines $y=x$ and $y=7x+4$. | The bisector of the acute angles between two lines with positive slope
has positive slope.
A line with positive slope and the positive $X$ axis form an acute angle. Then, if two lines with positive slopes $m$ and $m'$ (with $m>m'$) form angles with the positive $X$ axis $\alpha$ and $\beta$ with $\alpha>\beta$, the acute angle between both lines is $\alpha-\beta$.
Now, the slope $m''$ of the bisector of this angle must meet the inequality $m'<m''<m$, so $m''>0$.
Since the bisectors that you have found have slopes of opposite signs (as it must be, since they are perpendicular), only one of them can be the bisector of the acute angle. |
Convert a line in $ \Bbb R^3 $ given as intersection of two planes to parametric form. | What you look for is a point on the line $\left(x_0, y_0, z_0\right)$, and a direction vector of the line $\left(a, b, c\right)$.
To find a point on the line, you can for example fix $x$ and find $y,z$ (there are some cases where this won't work).
To find a direction vector, note that the vector $\left(A_1,B_1,C_1\right)$ is orthogonal to the first plane therefore to the line. Likewise, $\left(A_2,B_2,C_2\right)$ is orthogonal to the line. If you take their vector product you will get a direction vector.
Another way to find a direction vector, is to find another point on the line and subtract one point from the other. |
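A minimal numpy sketch of this recipe (the plane coefficients below are made-up examples):
import numpy as np

# planes A1*x + B1*y + C1*z = D1 and A2*x + B2*y + C2*z = D2
n1, D1 = np.array([1.0, 2.0, -1.0]), 3.0
n2, D2 = np.array([2.0, -1.0, 1.0]), 1.0

d = np.cross(n1, n2)    # direction vector: cross product of the two normals

# a point on the line: fix x = 0 and solve the 2x2 system for y, z
# (if this 2x2 system is singular, fix y or z instead)
yz = np.linalg.solve(np.array([[n1[1], n1[2]], [n2[1], n2[2]]]),
                     np.array([D1, D2]))
p = np.array([0.0, yz[0], yz[1]])

# the line is p + t*d; check that p lies in both planes
assert np.isclose(n1 @ p, D1) and np.isclose(n2 @ p, D2)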
Absolute value of a complex number (can't understand notes) | When dealing with complex numbers $s=x+iy$ we can factor expressions such as $x^2+y^2=(x-iy)(x+iy)$, due to $(i)(-i)=1$. It is essentially a difference of two squares. |
GLM for Poisson Regression for Soccer Ratings Not Converging | The problem here is that your model's parameters are not identifiable: shifting all attack and defence ratings by the same constant produces the same differences in every row. You can fix this degree of freedom by, e.g., setting the defence rating of team $C$ to zero.
Try to estimate the ratings with the following matrix that assumes C_defence = 0. You should be able to find the ratings now as everything is relative to team $C$ defensive rating:
import numpy as np

# design matrix with team C's defence rating fixed to 0
# (its column is dropped, so only five parameters remain)
X = np.array([
    [1, 0, 0, -1, 0],
    [0, -1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, -1, 1],
    [1, 0, 0, 0, 0],
    [0, -1, 0, 0, 1]
])
Note that a better solution may be to impose $L_1$ or $L_2$ regularization for model parameters. This will also enable parameter identification. Moreover, it is especially useful when modelling football data which are quite noisy.
Finally, you may want to introduce explicitly an intercept and home team advantage (if applicable) parameter in your model. |
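As a hedged sketch of the suggested fit with statsmodels (the goal counts `y` below are hypothetical; only the design matrix comes from this answer):
import numpy as np
import statsmodels.api as sm

y = np.array([2, 1, 3, 0, 2, 1])   # hypothetical observed goal counts
# Poisson regression with the default log link and no intercept;
# X is the design matrix above, with C's defence rating fixed at 0
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(result.params)   # the five identifiable ratings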
A monoid can have only one zero | A monoid can have only one zero.
Suppose it has two zeros $z_1$ and $z_2$; that is, for all $x$,
$xz_1=z_1x=z_1$ and $xz_2=z_2x=z_2$.
Then $z_1=z_1z_2=z_2$ (the first equality because $z_1$ is a zero, the second because $z_2$ is a zero), hence the two are the same, and thus there is only one zero. |
Show that $\operatorname{Vol}_{3}(\{0\leq x,y\leq1,\,z=2x+y\})=0$ | HINT: Divide $[0,1]\times [0,1]$ into $n\times n$ squares. Over the $i$th square, you want a box $R_i$ of height $<\epsilon$ that encloses the portion of $D$ over that square. So you want to choose $n$ large enough so that the maximum of $2x+y$ minus the minimum of $2x+y$ on each square will be $<\epsilon$. Can you figure this out? (Spoiler: You will need $3/n<\epsilon$.) |
What is good chalk for lecturing? | I had completely forgotten about this question and came across it again recently, so I decided to email Dr. Strang. To my surprise, he responded almost immediately. The chalk he used is called Railroad Chalk, and you can find it at many popular online retailers. |
Any uniformly continuous real-valued function on $(0,1)$ has unique uniformly continuous extension to $\Bbb R$ | Uniqueness of extension is not true. For example, if $f(x)=x$ for all $x \in (0,1)$ then we can define two extensions $g$ and $h$ to $\mathbb R$ as follows: $g(x)=x$ for all $x$, $h(x)=x$ for $0\leq x \leq 1$, $h(x) =0$ for $x <0$ and $h(x) =1$ for $x >1$. Both of these are uniformly continuous extensions to $\mathbb R$.
Existence: once you extend $f$ to $[0,1]$ you can simply define $f(x)=f(0)$ for $x <0$ and $f(x)=f(1)$ for $x>1$.
Hints for extending to $[0,1]$: let $x_n \to 0$. Then $f(x_n)$ is Cauchy, so it has a limit $l$. If $y_n$ is another sequence converging to $0$ then $(x_1,y_1,x_2,y_2,...)$ tends to $0$, so $(f(x_1),f(y_1),f(x_2),f(y_2),...)$ is convergent. This implies that the odd and even terms of the sequence converge to the same limit. Hence $l$ is independent of $(x_n)$. We can define $f(0)$ as $l$. I leave the rest to you. |
Suprema proof: Prove $\sup(f+g)\leq \sup f+\sup g$ | (1)
$\forall x\ f(x)\le\sup f$ ($\sup$ is upper bound)
$\forall x\ g(x)\le\sup g$ (idem)
$\forall x\ f(x)+g(x) \le \sup f+\sup g$ (add inequalities)
$\sup (f+g) \le \sup f+\sup g$ (because $\sup$ is the least upper bound)
(2) By "amount" do you mean "set"? |
Solve $\frac{2\, \cos{4 x} + 1}{2\, \cos{x} - \sqrt{3}} = \frac{2\, \sin{4 x} - \sqrt{3}}{2\, \sin{x} - 1}$ where $x$ is $-\pi \leq x \leq \pi$ | Hint:
\begin{align}
\frac{2 \cos{4 x} + 1}{2 \cos{x} - \sqrt{3}}
&= \frac{2 \sin{4 x} - \sqrt{3}}{2 \sin{x} - 1}
\tag{1}\label{1}
.
\end{align}
Clearing denominators in \eqref{1} and rearranging, we get
\begin{align}
\tfrac{\sqrt3}2\,\sin(4x)
-\tfrac12\,\cos(4x)
+\tfrac12\sin(x)
+\tfrac{\sqrt3}2\cos(x)
-1
-\sin(3x)
&=0
\qquad
(x\ne \pm\tfrac\pi6)
\tag{2}\label{2}
,
\end{align}
which is equivalent to
\begin{align}
{\color{brown}
(\cos\tfrac\pi6\,\sin(4x)
-\sin\tfrac\pi6\,\cos(4x))
}
+
{\color{blue}
(\cos\tfrac\pi3\sin(x)
+\sin\tfrac\pi3\cos(x))
}
-1-\sin(3x)
&=0
\tag{3}\label{3}
,\\
{\color{brown}\sin(4x-\tfrac\pi6)}
+
{\color{blue}\sin(x+\tfrac\pi3)}
-1-\sin(3x)
&=0
\tag{4}\label{4}
,\\
(\sin(x+\tfrac\pi3)
-\sin(3x))
-
(\sin\tfrac\pi2+\cos(4x+\tfrac\pi3))
&=0
\tag{5}\label{5}
.
\end{align}
which, by using sum-to-product identities, is converted to
\begin{align}
2\cos(2x+\tfrac\pi6)\cos(x+\tfrac\pi3)
-2\cos^2(2x+\tfrac\pi6)
&=0
\tag{6}\label{6}
,\\
\cos(2x+\tfrac\pi6)
(\cos(2x+\tfrac\pi6)-\cos(x+\tfrac\pi3))
&=0
\tag{7}\label{7}
,\\
\cos(2x+\tfrac\pi6)
\sin(\tfrac32\,x+\tfrac\pi4)
\sin(\tfrac12\,x-\tfrac\pi{12})
&=0
\tag{8}\label{8}
.
\end{align} |
Hatcher's definition of direct limit of a sequence of homomorphisms of abelian groups | You are right; if Hatcher wrote that, then that is poor writing.
The subgroup is made up of such elements with the property that
only finitely many of the $g_i$ are nonzero.
I would describe the subgroup as that generated by elements
$(0,0,\ldots,0,g_m,-\alpha(g_m),0,0,\ldots)$. |
Show that the following series is always convergent. | Take $r$ so $\rho(A) < r < R$.
By Gelfand's spectral radius formula,
$\|A^k\|^{1/k} < r$ for sufficiently large $k$, where $\|\cdot\|$ is the operator norm induced by the norm $|\cdot |$, and then $|A^k v|/R^k \le \|A^k\| |v|/R^k < (r/R)^k |v|$, so the series converges. |
Doubts about fundamental theorem of Homomorphism | Let’s simplify the situation for a moment. Forget that $V$ and $U$ are vector spaces and that $T$ is a linear transformation; just think of $V$ and $U$ as sets and of $T$ as a map from $V$ onto $U$. For each $u\in U$ let $V_u=\{v\in V:T(v)=u\}$, the set of points in $V$ that are mapped to $u$ by $T$. Let $\mathscr{P}=\{V_u:u\in U\}$. If $u_0,u_1\in U$ and $u_0\ne u_1$, then $V_{u_0}\cap V_{u_1}=\varnothing$: $T$ cannot send any $v\in V$ both to $u_0$ and to $u_1$. Thus, the map $\varphi:U\to\mathscr{P}:u\mapsto V_u$ is a bijection, and so, of course, is its inverse sending $V_u\in\mathscr{P}$ to $u$.
Now put the linear algebra back into the picture. First, $\ker T=\{v\in V:T(v)=0_U\}$, so in the notation of my first paragraph, $\ker T=V_{0_U}$: it’s one of the members of $\mathscr{P}$. Fix $v_0\in V$ and let $u_0=T(v_0)$; what vectors in $V$ belong to $V_{u_0}$? Suppose that $v\in V_{u_0}$; then $T(v)=u_0=T(v_0)$. Since $T$ is linear, $T(v-v_0)=T(v)-T(v_0)=0_U$, so $v-v_0\in\ker T$, and $v\in v_0+\ker T$, where $v_0+\ker T=\{v_0+v:v\in\ker T\}$. Conversely, you can easily check that if $v\in v_0+\ker T$, then $T(v)=u_0$, and therefore $v\in V_{u_0}$. Thus, $V_{u_0}=v_0+\ker T$. In other words, the members of $\mathscr{P}$ are precisely the sets of the form $v_0+\ker T$ for $v_0\in V$.
By definition the members of $V/\ker T$ are the sets $v_0+\ker T$ for $v_0\in V$, and we’ve just seen that these are the members of $\mathscr{P}$, so in fact $V/\ker T=\mathscr{P}$. Thus, we can just as well think of the map $\varphi$ defined above as a bijection from $U$ onto $V/\ker T$. Its inverse, which I’ll call $h$, is a bijection from $V/\ker T$ onto $U$. What does $h$ look like? Let $v_0+\ker T\in V/\ker T$, and let $u_0=T(v_0)$. We’ve just seen that $v_0+\ker T=V_{u_0}$, and we know from the first paragraph that $h(V_{u_0})=u_0$. In other words, $h(v_0+\ker T)=u_0=T(v_0)$.
We’ve now shown that the map $h:V/\ker T\to U:v+\ker T\mapsto T(v)$ is a bijection; in terms of the sets involved, it’s just the inverse of the bijection $\varphi$ of the first paragraph. To complete the proof that $V/\ker T$ and $U$ are isomorphic, we just check that $h$ is linear, which is a straightforward computation. |
Algebraically finding a Nash equilibrium | First, I want to clarify your payoff functions. The way you defined them does not make sense to me: $i<j$ and $i>j$ should presumably be $s_i<s_j$ and $s_i>s_j$. I guess you mean whoever chooses a strictly smaller number gets $\frac{s_1+s_2}2$ and the other one gets $1-\frac{s_1+s_2}2$; and if they choose the same number, both get $1/2$.
Second, I think you almost have a rigorous proof. Let me try to summarize.
Suppose $(s_1,s_2)$ is a Nash equilibrium; then, using proof by contradiction, we can show that $s_1=s_2$. Suppose not: either $s_1<s_2$ or $s_2<s_1$. In the first case, Player 1 benefits from deviating from this strategy by choosing any $s_1'\in (s_1,s_2)$. The same for the second case.
Now the only type of Nash equilibrium must be $(s,s)$. Consider different cases: If $s>1/2$, either player can get a higher payoff by choosing $s'\in (1-s,s)$.
If $s<1/2$, either player can get a higher payoff by choosing $s'\in (s, 1-s)$.
If $s=1/2$, neither player benefits from deviating, because increasing $s$ to $s'>s$ results in a payoff of $1-\frac{s+s'}2<1/2$ and decreasing $s$ to $s'<s$ results in $\frac{s+s'}2<1/2$. |
Is this subset where the continuous function $y$ and $w$ coincide closed? | Hint: If $z=y-w$ then this set is $z^{-1}(\{0\}).$ |
How to find the slope of $y\ln x = x\ln y$ at $x = e$? | A hint.
Show that if $x^y=y^x$, $x\neq y$, $x>1$ and $y>1$, then $x^y>e^e$,
which says that $(e,e)$ is the needed point.
Indeed, let $y=kx$, where $k>1$.
Thus, $$x^{kx}=(kx)^x$$ or
$$x=k^{\frac{1}{k-1}},$$ which gives $$y=k^{\frac{k}{k-1}}$$ and we need to prove that $f(k)>e^e,$ where
$$f(k)=k^\frac{k^{\frac{k}{k-1}}}{k-1}.$$
Now, $$f'(k)=\frac{f(k)\cdot k^\frac{k}{k-1}}{(k-1)^3}\cdot\left(\frac{k-1}{\sqrt k}+\ln k\right)\left(\frac{k-1}{\sqrt k}-\ln k\right)>0$$ because it is easy to show that $$\frac{k-1}{\sqrt k}-\ln k>0$$ for any $k>1$,
which says that $f$ increases and since
$$\lim_{k\rightarrow1^+}f(k)=e^e,$$ we are done.
We see that $$\inf_{k>1}f=e^e,$$ which says that the curve $x^y=y^x$ with $x\neq y$, $x>1$, $y>1$ meets the line $y=x$ only at the point $(e,e)$.
Now it is easy to get our slope.
For $x\neq y$, by l'Hôpital's rule we obtain:
$$\lim_{k\rightarrow1^+}y'(x)=\lim_{k\rightarrow1^+}\frac{\ln{y}-\frac{y}{x}}{\ln{x}-\frac{x}{y}}=\lim_{k\rightarrow1+}\frac{\frac{k}{k-1}\ln{k}-k}{\frac{1}{k-1}\ln{k}-\frac{1}{k}}=$$
$$=\lim_{k\rightarrow1^+}\frac{k\ln{k}-k^2+k}{\ln{k}-1+\frac{1}{k}}=\lim_{k\rightarrow1^+}\frac{\ln{k}+1-2k+1}{\frac{1}{k}-\frac{1}{k^2}}=\lim_{k\rightarrow1^+}\frac{\frac{1}{k}-2}{-\frac{1}{k^2}+\frac{2}{k^3}}=-1.$$ |
Discrete Math recurrence relation | For all recurrence relations, you will have to define the first term.
I'm not so sure about your notation with $f(n)$... perhaps you mean $T_n$, where the number of teams is $2^n$ and $T_n$ is the number of rounds?
In which case, I think yes.
So your recurrence relation would be:
$T_{0}$ = 0 (a single team plays no rounds)
$T_{n+1}$ = $T_n + 1$
where:
$T_n$ is the number of rounds in the tournament
$2^n$ is the number of teams in the tournament. |
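A tiny illustration of this recurrence (with the convention $T_0=0$ for a single team):
def rounds(n):
    # T_n: rounds needed for 2**n teams; T_0 = 0, T_{n+1} = T_n + 1
    T = 0
    for _ in range(n):
        T += 1   # each round halves the field
    return T

assert [rounds(n) for n in range(5)] == [0, 1, 2, 3, 4]   # T_n = n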
Factorize $x^3-3x+2$ | They are called cubic functions / cubic equations. A closed formula for the solutions exists but it is quite ugly so the common method to factorize the term is to guess one root $x_0$ and then do long division by $(x-x_0)$.
So method one would be to use the formula on
$x^3-3x+2=0$, find that the roots are $1,1,-2$, and conclude that $x^3-3x+2=(x-1)^2(x+2)$. The simpler method is to guess the root $x_0=1$ and use long division by $(x-1)$. |
How can $V$ be a vector space over the field of real numbers when it is explicitly defined as being over the field of complex numbers? | Check the definition of a vector space over a field, for example on Wikipedia: https://en.wikipedia.org/wiki/Vector_space#Definition
Being over $\mathbb{R}$ means that vectors can be multiplied by real numbers and you get again a vector in $V$. This is certainly true in your example since complex numbers multiplied by real numbers are again complex numbers. |
Conversion of explicit to implicit ODEs (uniqueness & algorithm) | If $r$ has an inverse, we see that $f$ is uniquely determined as $f(x)=r'(r^{-1}(x))$. |
Which norm does pseudo-inverse for least square for matrix equation minimizes? | It minimizes the Frobenius norm. |
Find the Taylor series for $\frac{1}{z}$ | Hint:
$$\frac{1}{1+x}=1-x+x^2-x^3+...$$
Applying this with $x=\frac{z-2}{2}$, since $\frac1z=\frac12\cdot\frac1{1+\frac{z-2}{2}}$ (valid for $|z-2|<2$), we get:
$$\frac1z=\frac12\left( 1-\frac{z-2}{2}+\frac{(z-2)^2}{4}-\frac{(z-2)^3}{8}+... \right)=\left( \frac12-\frac{z-2}{4}+\frac{(z-2)^2}{8}-\frac{(z-2)^3}{16}+... \right)$$ |
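One can confirm the expansion with sympy (a quick check, not part of the original hint):
from sympy import symbols, series

z = symbols('z')
print(series(1/z, z, 2, 4))
# 1/2 - (z - 2)/4 + (z - 2)**2/8 - (z - 2)**3/16 + O((z - 2)**4, (z, 2))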
Do the two manifolds intersect at a submanifold | Yes, it is very possible. Consider, for example, the circle $x^2+y^2=1$ and the parabola $y=x^2-\frac54$. You can make higher-dimensional examples quite easily.
EDIT (responding to the totally different question): With polynomials or real analytic functions, this cannot happen unless the hypersurfaces coincide (at least on a connected component). However, in the smooth category, you can use bump functions to create two hypersurfaces that coincide on a large (closed) subset (with interior) and then diverge. Note that they will coincide on a manifold with boundary. If you require that the set of coincidence have no boundary, then they'll have to agree on a whole connected component, of course. |
How to prove that the commuting matrices form a vector space of an image of matrix | Hint:
$$
(aC+B)A-A(aC+B)=aCA+BA-aAC-AB=a(CA-AC)+(BA-AB)
$$ |
Why is the determinant of the following matrix zero | Algebraically (mentioned by t.b.): The columns of the matrix are $b_1\mathbf{a}$, $b_2\mathbf{a}$, $\dots, b_n\mathbf{a}$. If all of $\mathbf{b}$'s components are zero then the result is trivial, while if at least one of the components is nonzero, say $b_1$ without loss of generality, then (for $n>1$)
$$\color{Blue}{-(b_2+b_3+\cdots+b_n)}(b_1\mathbf{a})+\color{Blue}{b_1}(b_2\mathbf{a})+\color{Blue}{b_1}(b_3\mathbf{a})+\cdots+\color{Blue}{b_1}(b_n\mathbf{a})$$
$$=\big((\color{Blue}{-b_2}b_1+\color{Blue}{b_1}b_2)+(\color{Blue}{-b_3}b_1+\color{Blue}{b_1}b_3)\cdots+(\color{Blue}{-b_n}b_1+\color{Blue}{b_1}b_n)\big)\mathbf{a}$$ $$=(0+0+\cdots+0)\mathbf{a}=\mathbf{0}.$$ is a nontrivial linear combination of the matrix columns that evaluates to zero, hence the columns are linearly dependent and thus the matrix's determinant is zero.
Shortcut (via David): Use the multilinearity of the determinant to reduce $$\det\begin{pmatrix}b_1\mathbf{a}&b_2\mathbf{a}&\cdots&b_n\mathbf{a}\end{pmatrix}=b_1b_2\cdots b_n \det\begin{pmatrix}\mathbf{a}&\mathbf{a}&\cdots&\mathbf{a}\end{pmatrix}.$$ For $n>1$, it is impossible for $n$ copies of a vector $\mathbf{a}$ to be linearly independent, since e.g. $$1\mathbf{a}+(-1)\mathbf{a}+0\mathbf{a}+\cdots+0\mathbf{a}=\mathbf{0}.$$ Hence the original determinant must also be zero.
Geometrically: The edges of the parallelepiped formed by the matrix's columns are all contained in the one-dimensional subspace generated by $\mathbf{a}$: for $n>1$ this has zero $n$-dimensional content (hyper-volume), hence the determinant is zero.
Geometrically/Algebraically: As I posted in a different answer (paraphrased here),
We're assuming $\mathbf{a},\mathbf{b}\ne\mathbf{0}$. Then $\mathbf{b}^\perp$, the orthogonal complement of the linear subspace generated by $\mathbf{b}$ (i.e. the set of all vectors orthogonal to $\mathbf{b}$) is therefore $(n-1)$-dimensional. Let $\mathbf{c}_1,\dots,\mathbf{c}_{n-1}$ be a basis for this space. Then they are linearly independent and $$(\mathbf{a}\mathbf{b}^T)\mathbf{c}_i =\mathbf{a}(\mathbf{b}^T\mathbf{c}_i)= (\mathbf{b}\cdot\mathbf{c}_i)\mathbf{a}=\mathbf{0}.$$ Thus the eigenvalue $0$ has geometric multiplicity $n-1\qquad$ [...]
The determinant is the product of the matrix's eigenvalues, so if one of those is $0$ the product is necessarily zero as well.
Analytically/Combinatorially: Via Leibniz formula we have
$$\det(\mathbf{a}\mathbf{b}^T)=\sum_{\sigma\in S_n}(-1)^{\sigma}\prod_{k=1}^na_{\sigma(k)}b_k$$
$$=\left(\sum_{\sigma\in S_n}(-1)^\sigma\right)\prod_{k=1}^na_kb_k=0.$$
Above we observe that $\sum (-1)^{\sigma}$ is zero because the permutations of even and odd parity are in bijection with each other (e.g. take an arbitrary transposition $\tau$ and define the map $\sigma\to\tau\sigma$). |
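A quick numerical illustration of all of the above (a sketch; numpy assumed):
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4)
b = rng.standard_normal(4)

M = np.outer(a, b)                # the matrix ab^T, whose columns are b_j * a
print(np.linalg.det(M))           # ~ 0, up to floating-point noise
print(np.linalg.matrix_rank(M))   # 1: a single independent column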
Is it possible to draw $85 ^{\circ} $ and $110 ^{\circ}$ angles by compass and straightedge construction? | We cannot construct these arcs by straightedge and compass. For if we can draw an $85^\circ$ arc, we can draw a $5^\circ$ arc by subtraction from an easily constructible $90^\circ$ arc, and then construct a $20^\circ$ angle by repeated duplication, again easily done by straightedge and compass. And if we can draw a $110^\circ$ arc, we can draw a $20^\circ$ angle by subtracting a $90^\circ$ arc.
However, it was proved by Wantzel, almost two centuries ago, that the $20^\circ$ angle cannot be drawn with straightedge and compass. Equivalently, we cannot trisect the $60^\circ$ angle with straightedge and compass. For details, please see the Wikipedia article on Angle Trisection.
Comment: If we allow other tools than the classical Euclidean straightedge and compass, the answer changes. There are $3$-legged compass analogues (linkages) that can do trisection. We can also trisect angles by Origami (the Japanese art of paper-folding). From antiquity on, various correct trisection methods have been described. As they must, they all go beyond straightedge and compass. |
A question about class equation | The class equation
$$|G|=|Z(G)|+ \sum_{i=1}^K[G:C_{G}(x_i)],$$ where the representatives $x_i$ are not in the center. Now
$|Z(G)|$ is in fact the sum of the orders of the singleton equivalence classes, if we go back to the definition of the underlying equivalence relation. |
How can I know if several functions are linearly independent? | Notice that in order to show dependence you need to show that there are constants $c_1, c_2, c_3$, not all zero, so that $c_1e^{-t}+c_2e^t+c_3\cdot 1=0$, but where $0$ is the $0$ function, i.e., the function with the constant value $0$.
Now, this means that the above linear combination must equal zero for all t .
Let's show this is not possible: we start with:
$$ c_1e^{-t}+c_2e^t+1=0 $$ (if $c_3\neq0$ we can always rescale so that $c_3=1$; if $c_3=0$, letting $t\to\pm\infty$ in $c_1e^{-t}+c_2e^t=0$ forces $c_1=c_2=0$). Now, set $e^t=u$, and let's treat $u$ as a variable ranging over $(0,\infty)$. We substitute above and multiply through by $u$, to get $$ c_2u^2+u+c_1=0 .$$
Now, we can treat this as a polynomial in $u$ of degree at most $2$. Its coefficient of $u$ is $1\neq0$, so it is not the zero polynomial, and hence it has at most two roots (when $c_2\neq0$ they are given by $\frac{-1 \pm \sqrt{1-4c_2c_1}}{2c_2}$).
But the top equation must hold for all $t$ (remember that the $0$ on the right hand side is the $0$ function, not the number $0$), so the polynomial would have to vanish for every $u=e^t\in(0,\infty)$, i.e., at infinitely many values of $u$. This is a contradiction, so the three functions are linearly independent. |
Proof of Rudin's Theorem 3.10 | You're missing an important point. If $x<c$ for all $x\in S$, then $\sup S\le c$. (For example, take $S = (0,1)$. For every $x\in S$, we have $x<1$, but $\sup S = 1$.) |
I think $A,B$ must be closed and disjoint | Yes, $A$ and $B$ should be closed, disjoint, and non-empty. For continuity, prove that the functions $f_A(x)=d(x,A)$ and $f_B(x)=d(x,B)$ are continuous; then $f$ is the quotient of two continuous real-valued functions, and the function in the denominator is never $0$. (That’s one reason why you need $A$ and $B$ to be disjoint.) |
Finding $\lim_{(x,y) \to 0} \frac{x^2}{ x - y}$ | Hint: What happens if we approach $(0,0)$ along the path $y = x-x^3$? |
Proving that set is (or is not) a field | $P = \mathbb Z[\alpha]$, where $\alpha=\sqrt[3]{3}$. Since $P$ is the image of $\mathbb Z[X]$ under $X\mapsto \alpha$, it is clear that $P$ is a ring. Now, $2\in P$ but $1/2 \not\in P$ because $1,\alpha,\alpha^2$ are linearly independent over $\mathbb Q$. So, $P$ is not a field. |
One counter automata | The hint you were given to construct the grammar also tells you how to construct the automaton. The idea is to use the counter to encode the length of the prefix $\beta$ and the postfix $\delta.$ That is all. The sole addition is that you must also recognize whether $(u,v) = (a,b)$ or $(u,v) = (b,a).$
Imagine the states as two parallel segments/chains of five states each, arranged in a horizontal line if you will. In addition to these two segments there is a start state and an accepting state for a total of twelve states. The start state connects with an $\epsilon$-transition to the top segment and another $\epsilon$-transition to the bottom one. The top one corresponds to $(u,v) = (a,b)$ and the bottom one to $(u,v) = (b,a).$ The first state of each segment counts the symbols in the prefix by pushing $g$ onto the stack, then it $\epsilon$-transitions to the next state in the segment, which has a transition on $a$ to its neighbor, which loops on $a$ and $b$ without counting anything and $\epsilon$-transitions to its right neighbor, which transitions on $b$ to its neighbor, which decrements the counter by popping the symbol $g$ of the stack and transitions to the accepting state on seeing the bottom marker "#". The second (lower) segment works the same way with $a$ and $b$ reversed.
Remark. You can make do with just five total states if you allow a non-deterministic transition function which eliminates the need for the inner transitions on $\epsilon$. Count the length of the prefix in the start state, transition non-deterministically on $a$ to the upper segment, now consisting of only one state, and on $b$ to the lower segment. Loop without counting in each segment and transition on $b$ from the upper state and $a$ from the lower to a common state that decrements the counter and transitions to the accepting state on seeing the marker.
Addendum. Depending on your model you may want to transition out of the accepting state into a rejecting state on seeing extra symbols. It is possible also to merge the last states of the two chains from the first solution, since both do the same thing, i.e. reduce the counter and the pair of non-matching $(u,v)$ has already been found. |
compute H(X|Y) ( conditional probablity) | Let $p\mathop{:=}\mathsf P(X_1,Y_1)$. Then since each column and each row must total $\tfrac 14$, we complete your table as follows:
$$\boxed{\begin{array}{|c|c:c:c:c|}\hline
& Y_1 & Y_2 & Y_3 & Y_4 \\ \hline
X_1 & p & 0 & 0 & \tfrac 14-p \\ \hdashline
X_2 & \tfrac 14-p & 0 & p & 0 \\ \hdashline
X_3 & 0 & p & \tfrac 14-p & 0 \\ \hdashline
X_4 & 0 & \tfrac 14-p & 0 & p \\ \hline
\end{array}}$$
The information you provided is insufficient to determine what $p$ actually is.
If the choice between each of the two $X_k$ for each $Y_h$ were unbiased, then $p=\tfrac 18$. However, nothing indicates this is so.
But since, for any given $Y_h$, the two $X_k$ with nonzero probability have the same pair of conditional probabilities (although which events they match differs), we can calculate the conditional entropy as:
$$\begin{align}\mathsf H(X\mid Y_h) & = - \sum_{k=1}^4 \mathsf P(X_k\mid Y_h)\log_2(\mathsf P(X_k\mid Y_h)) & \big[h\in\{1,2,3,4\}\big]
\\ & = -\big( 4p \log_2 (4p) + (1-4p)\log_2(1-4p) + 0\log 0 + 0\log 0\big)
\\ & = -\big( 4p \log_2 (4p) + (1-4p)\log_2(1-4p) \big), \text{ with the convention } 0\log 0=0
\end{align}$$ |
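A small sketch of the resulting entropy as a function of $p$ (using the corrected two-term form above; numpy assumed):
import numpy as np

def H_cond(p):
    # H(X|Y) = -(4p log2(4p) + (1-4p) log2(1-4p)), with 0 log 0 = 0
    q = np.array([4 * p, 1 - 4 * p])
    q = q[q > 0]                    # drop zero terms (0 log 0 = 0)
    return -np.sum(q * np.log2(q))

print(H_cond(1/8))   # 1.0 bit: the unbiased case p = 1/8
print(H_cond(1/4))   # 0.0 bits: X is determined given Y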
The abelian group G is n-divisible if (n,|G|)=1 | You do not need $G$ to be abelian to derive this.
Take any element $g ∈ G$.
Let $m = \operatorname{ord} g$.
Then $〈g〉 \cong ℤ/mℤ$ by $g ↦ 1$.
Now $m$ has to divide $|G|$, so $(n,m) = 1$ as well.
Therefore there is an $k ∈ ℤ$ such that $\overline{k}\overline{n} = 1$ in $ℤ/mℤ$.
Using the isomorphism from above, what can you conclude about $g^k$ in $G$? |
How to find a specific point in $\mathbb{R}^n$? | Let's translate the conditions into English.
The first condition $\|a' - x_1\| > \|a - x_1\|$ says "$a'$ is further from $x_1$ than $a$ is." The second condition $\|a' - x_2\| > \|a - x_2\|$ says "$a'$ is further from $x_2$ than $a$ is."
Now let's draw a picture. This is really a question about distances. Let's let $R_1$ be the distance from $a$ to $x_1$ and let's let $R_2$ be the distance from $a$ to $x_2$. All the points further from $x_1$ than $a$ lie in the complement of $B_{x_1}(R_1)$ and all the points further from $x_2$ than $a$ lie in the complement of $B_{x_2}(R_2)$. So just pick a point that lies in the intersection of those two sets.
If you need to exhibit a specific point ...
note that all three points lie in a ball of radius $R = \max\{ \|a\|, \|x_1\|, \|x_2\| \}$. Then let $a' = (3R, 0, 0, \ldots, 0)$ and use the triangle inequality to show that it satisfies your conditions. |
Exercise: Application of Hahn-Banach Theorem | You still need a twist in your argument. You can define a norm $p$ on $M$ by
$$
p(\sum_{k=1}^n\lambda_kx_k)=\sum_{k=1}^n|\lambda_k|.
$$
Now, as $M$ is finite-dimensional, all norms on it are equivalent. This in particular tells us that there exists a constant $c$ such that $p(x)\leq c\|x\|$ for all $x\in M$ (since $\|\cdot\|$ is another norm on $M$). So you have
$$
|f(x)|\leq\,c\,\max\{|c_1|,\ldots,|c_n|\}\,\|x\|,\ \ x\in M,
$$
and now you can apply Hahn-Banach.
Or a slightly more direct approach would be to notice that since $M$ is finite dimensional, every functional is continuous, and thus $f$ is necessarily bounded in $M$. |
Existence of upper triangular matrix proof | Bonus Theorem. If $k$ is a field then the following are equivalent: (i) every square matrix over $k$ has an eigenvalue (in $k$) (ii) every square matrix is similar to an upper-triangular matrix (iii) the characteristic polynomial of every square matrix splits over $k$.
Note: As I suspected when I wrote this, (i) is also equivalent to saying $k$ is algebraically closed. See here for that.
Anyway, (ii) implies (iii) is trivial, as is (iii) implies (i). For (i) implies (ii): Assume (i).
First note that if $B=(b_1,\dots,b_n)$ is a basis then
Lemma 1. The matrix for $T$ wrt the basis $B$ is upper triangular if and only if $Tb_k$ is a linear combination of $b_1,\dots,b_k$ for every $k$.
So it's enough to prove this:
Lemma 2. There exists a non-zero $v\in V$ such that if $E$ is the span of $v$ then there is a subspace $W$ with $V=E\oplus W$ and $TW\subset W$.
Proof that Lemma 2 implies the result: By induction on the dimension there is a basis $b_1,\dots,b_{n-1}$ for $W$ such that $Tb_k$ is a linear combination of $b_1,\dots,b_k$ for $1\le k\le n-1$. Let $b_n=v$.
Edit: This is seeming easier. At first I was wondering how to get $v$ as in Lemma 2. But no, we just concentrate on $W$ and then $v$ comes along for free. It's clear that Lemma 2 follows from Lemma 3:
Lemma 3. There is a subspace $W\subset V$ of codimension $1$ such that $TW\subset W$.
(Given $W$ as in Lemma 3, let $v$ be any non-zero element of $V\setminus W$ and Lemma 2 follows.)
And Lemma 3 is finally something we can prove:
Let $\Lambda$ be an eigenvector of the adjoint $T^*:V^*\to V^*$, and let $W$ be the kernel of $\Lambda$. Then $W$ certainly has codimension $1$, and if $x\in W$ then $$\Lambda(Tx)=(T^*\Lambda)x=\lambda\Lambda x=0;$$hence $TW\subset W$. |
Is there a normal subgroup $H$ such that $S_6/H$ is isomorphic to $S_5$ | If $H$ contains an element $h$ which is the product of a $2$-cycle and a $3$-cycle, it contains the $3$-cycle $h^2$.
And as $H$ is supposed to be normal, it contains all the $3$-cycles (all the $3$-cycles are conjugate elements). As the $3$-cycles generate $A_6$, $H$ contains $A_6$. A contradiction. |
Help with a limit of an integral: $\lim_{h\to \infty}h\int_{0}^\infty{{ {e}^{-hx}f(x)} dx}=f(0)$ | Note that $h\int_{0}^\infty{{ {e}^{-hx}} dx} = 1$. Fix $\epsilon> 0$; by continuity there is a $\delta>0$ such that $|x|< \delta \Rightarrow |f(x)-f(0)|\leq \epsilon$. Note also (by boundedness) that there is an $M>0$ such that for every $x \in \Bbb{R}$, $|f(x)| \leq M$. Therefore,
\begin{align}\bigg|h\int_{0}^\infty{{ {e}^{-hx}f(x)} dx} - f(0)\bigg| &=\bigg| h\int_{0}^\infty{{ {e}^{-hx}(f(x) - f(0))} dx} \bigg|\\&=\bigg| h\int_0^\delta {{ {e}^{-hx}(f(x) - f(0))} dx} + h\int_\delta^\infty {{ {e}^{-hx}(f(x) - f(0))} dx} \bigg|\\
&\leq \bigg|h\int_0^\delta {{ {e}^{-hx}}}\epsilon\, dx \bigg|+ \bigg|2M h\int_\delta^\infty {{ {e}^{-hx}} dx}\bigg| \\
&\leq
\epsilon + 2M e^{-h\delta} \leq 2 \epsilon \quad \text{for all sufficiently large } h, \end{align}
since $e^{-h\delta} \longrightarrow 0$ as $h \to \infty$. |
Counting compass and straightedge constructions | You are mostly on the right track, only the combinatorial part is off.
1. The Galois group $G$ of $\Bbb{Q}(\zeta)/\Bbb{Q}$ is cyclic of order $11-1=10$. Because $2$ is a primitive root modulo $11$, a generator for this cyclic group is the mapping $\sigma:\zeta\mapsto \zeta^2$.
2. For $z_S$ to be constructible it is necessary that $n:=[\Bbb{Q}(z_S):\Bbb{Q}]$ is a power of two.
3. Because $z_S\in\Bbb{Q}(\zeta)$ it follows that $n\mid 10$, and we are left with the alternatives $n=1$ and $n=2$.
4. Because $G$ is cyclic the Galois correspondence tells us that there exists a single quadratic intermediate field, namely the fixed points of $\sigma^2:\zeta\mapsto \zeta^4$. All the elements of that quadratic field are constructible, so $z_S$ is constructible if and only if $\sigma^2(z_S)=z_S$.
5. Because $11$ is a prime, the only $\Bbb{Q}$-linear relation among the elements of $R$ is the obvious
$$1+\zeta+\zeta^2+\cdots+\zeta^{10}=0.$$
6. If $J\subseteq \{0,1,\ldots,10\}$ and we denote $z_J=\sum_{j\in J}\zeta^j$, then
$$\sigma^2(z_J)=\sum_{j\in J}\zeta^{4j}=z_{4J},$$ where we calculate $4j$ modulo $11$.
Items 4, 5 and 6 imply that $z_J$ is constructible if and only if $J=4J$ (you seem to have reached this point under your own steam, as $\varphi:\zeta\mapsto \zeta^9$ and $\sigma^2$ generate the same subgroup of $G$).
But at this point something went wrong. You still seem to be clear on the idea that in the interesting cases $j\in J\implies 4j\in J$. To wrap up:
If $1\in J$, then also $4,5=4^2,9=4^3,3=4^4\in J$. Similarly if $4\in J$, then $4^2=5,4^3=9,4^4=3$ and $4^5=1$ must all be in $J$. The same with others. In other words, if $J$ contains any of $1,4,5,9,3$ it must contain all five of them.
Similarly, if $2\in J$, then $8=4\cdot2,$ $10=4^2\cdot2$, $7=4^3\cdot2$ and $6=4^4\cdot2$ must all be in $J$. Repeating the argument, either $2,8,10,7,6$ are all in, or none of them are.
This leaves us with the choices:
$J=\emptyset$,
$J=\{0\}$,
$J=\{1,4,5,9,3\}$,
$J=\{0,1,4,5,9,3\}$,
$J=\{2,8,10,7,6\}$,
$J=\{0,2,8,10,7,6\}$,
$J=\{1,4,5,9,3,2,8,10,7,6\}=\Bbb{Z}_{11}\setminus\{0\}$,
$J=\Bbb{Z}_{11}$.
So a total of $8$ alternatives. Basically you need to make three binary choices: i) either you include or don't include $0$, ii) either you include or don't include $1$ (and hence all of $1,4,5,9,3$), iii) either you include or don't include $2$ (and hence all of $2,8,10,7,6$). This makes $2^3=8$ the answer.
Although (see item 5 again) the empty set and all of $\Bbb{Z}_{11}$ give rise to the same sum $z_J=0$. The others pair up as negatives of each other due to item 5. Namely
$$
z_S=-z_{R\setminus S}
$$
for all $S\subseteq R$.
In a different language: you split the set $R$ into the orbits of the subgroup $\langle \phi\rangle=\langle\sigma^2\rangle\le G$, and use collections of orbits. |
Where can I find an English version of this paper by Gel'fand and Raikov? | Here is a link to the Google Books page. |
Lebesgue Outer Measure | It suffices to show that $m^\ast(A\cap E) = m(A)$ for all measurable sets $A$, because this will yield $m(A) = m^\ast (A\cap E) = m^\ast (B\cap E) = m(B)$.
To see this, note that $m^\ast (A\cap E) \leq m^\ast (A) = m(A)$, so that we can assume (toward a contradiction) that $m^\ast (A \cap E) < m(A)$.
By definition of $m^\ast$, there is some countable family $(I_n)_n$ of measurable sets (usually intervals, depending on the exact definition of $m^\ast$) with $A \cap E \subset \bigcup_n I_n$ and such that $\sum_n m(I_n) < m(A)$.
Let $\varepsilon := \frac{1}{2} \cdot (m(A) - \sum_n m(I_n)) > 0$. Then, there is also a countable family $(J_n)$ covering $A^c = [0,1] \setminus A$ such that $\sum_n m(J_n) < m(A^c) + \varepsilon = 1 - m(A) + \varepsilon$.
This shows $$E = (A \cap E) \cup (A^c \cap E) \subset \bigcup_n I_n \cup A^c \subset \bigcup_n I_n \cup \bigcup_n J_n$$
and hence
$$
m^\ast (E) \leq \sum_n m(I_n) + \sum_n m(J_n) < \sum_n m(I_n) + (1 - m(A) + \varepsilon) = 1 -2 \varepsilon + \varepsilon < 1,
$$
a contradiction. |
$p^n\mid ((p^{n-m}+1)^j-1)$? | I hope this is enough.
Using the LTE lemma, let's define $\alpha:=v_p((p^{n-m}+1)^j-1)$, i.e., $p^\alpha\|(p^{n-m}+1)^j-1$, so $p^\alpha$ exactly divides $(p^{n-m}+1)^j-1$. Using the lemma we get $\alpha=v_p((p^{n-m}+1)-1)+v_p(j)=v_p(p^{n-m})+v_p(j)=n-m+v_p(j)$. It is enough that $\alpha\ge n$, because then we know that $p^n$ will divide $(p^{n-m}+1)^j-1$. So we need $v_p(j)\ge m$, a condition independent of $n$. This means $j=p^m\cdot t$ with $t\in\mathbb{N}$ (writing $j=p^b\cdot k$ with $b,k\in\mathbb{N}$ and $p\nmid k$, the condition $v_p(j)=b\ge m$ gives $b=m+h$ and $t=p^h\cdot k$). |
Inequality for the difference of square roots of complex numbers | Let $a=-1-\epsilon\, i$, $b=2\,\epsilon\, i$. Then $\sqrt{a+b}\sim i$ and $\sqrt{a}\sim-i$ hence $|\sqrt{a+b}-\sqrt{a}|\sim2$ as $\epsilon\to0$, while the right hand side is smaller than $2\,C\,\epsilon$.
For the inequality to hold you will have to take $a$ and $a+b$ in a region of the form $|\arg z|<\theta$ for some $\theta<\pi$. |
If a function $f$ is measurable in the completion space then there is a function $g$ measurable in the original space, $f$ = $g$ a.e | Although your question pertains to equation $(1)$, observe that the same assumption, namely that $D = (D \setminus N) \cup N$, is made earlier when defining $g$ on $D$ by defining it separately on $D \setminus N$ and $N$.
You are right that the author is assuming that $N \subset D$, and that this is not true from the way $N$ is defined. However, if we let $N' := N \cap D$, then the same proof goes through with $N'$ in place of $N$. For instance, it is clear that $N' \in \mathfrak{A}$, $N' \subset D$, $N'$ is a null set and $C_n \subset B_n \subset N'$. Note that you are not using the fact that $D$ is a complete measure space, you only need that $D, N \in \mathfrak{A}$ to conclude that $D \cap N \in \mathfrak{A}$, etc.
So, we can take without loss of generality that $N$ is a subset of $D$. But the author should have mentioned this for the sake of clarity, in my opinion. Good catch. |
Getting the third point from two points on one line | Assume:
$$dx = x_2 - x_1$$
$$dy = y_2 - y_1$$
Then,
$$x_3 = x_1 + k\,dx$$
$$y_3 = y_1 + k\,dy$$
The square of the distance between $p_1$ and $p_3$ is:
$$(dx^2 + dy^2)\,k^2 = 300^2$$
Now you can solve for $k$ and then compute $x_3$ and $y_3$. |
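In code (a direct transcription of the formulas; the coordinates below are placeholders):
import math

x1, y1 = 0.0, 0.0      # p1 (example values)
x2, y2 = 3.0, 4.0      # p2 (example values)
dist = 300.0           # required distance from p1 to p3

dx, dy = x2 - x1, y2 - y1
k = dist / math.hypot(dx, dy)    # from (dx^2 + dy^2) * k^2 = dist^2
x3, y3 = x1 + dx * k, y1 + dy * k
print(x3, y3)   # (180.0, 240.0), exactly 300 units from p1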
Integration and differentiation of Fourier series | Fourier series (as with infinite series in general) cannot always be term-by-term differentiated. For general series we have the following theorem
Theorem: Term-by-term differentiation: If a series $\sum f_k(x_0)$ converges at some point $x_0\in [a,b]$ and the series of derivatives $g(x) = \sum f_k'(x)$ converges uniformly on $[a,b]$ then the series $f(x) = \sum f_k(x)$ converges uniformly for all $x\in[a,b]$ and $f(x)$ is differentiable with $f'(x) = g(x)$.
Your example fails this theorem dramatically, as the term-by-term derivative of the Fourier series of $x$ does not converge anywhere.
We also have some more specific results. The following theorem gives conditions for which this can be done for Fourier series:
Theorem (Term-by-term differentiation of Fourier series): If $f$ is a piecewise smooth function and if $f$ is also continuous on $[-L,L]$, then the Fourier series of $f$ can be term-by-term differentiated if $f(-L) = f(L)$.
The last condition is not satisfied when the Fourier series has a jump-discontinuity at $x=L$, so in general we don't expect to be able to term-by-term differentiate a Fourier series that has a jump-discontinuity (though the theorem does not rule it out). For your example of the Fourier series of $f(x) = x$ the first conditions are satisfied, as $f$ is both smooth and continuous on $[-1,1]$; however $f(-1) \not= f(1)$, so the theorem does not apply.
For integration of Fourier series we have the following theorem
Theorem (Term-by-term integration of Fourier series): The Fourier series of a piecewise smooth function $f$ can always be term-by-term integrated to give a convergent series that always
converges to the integral of $f$ for $x\in[-L,L]$.
Note that the resulting series does not have to be a Fourier series. For example if we have a Fourier series $f(x) = a_0 + \ldots$ with $a_0\not= 0$ then $\int f(x){\rm d}x = a_0x+ \ldots$ and the presence of the $a_0x$ term makes this not be a Fourier series (though for this example one can probably expand $x$ in a Fourier series to get such a series).
The proofs of the theorems above can be found in any good textbook on Fourier series; see also the course notes on Fourier series by P. Laval. |
Why can't we define linear mapping over vector spaces over two different fields? | The Problem
Note carefully that in the equation
$$T(a\alpha + \beta) = aT(\alpha) + T(\beta),$$
the operations on the left and right side are different.
To begin with, the $+$ on the LHS is the vector addition of $V$ and on the RHS is the one of $W$.
More importantly, the scalar multiplication $a\alpha$ is happening in $V$ and $aT(\alpha)$ in $W$.
This definition of scalar multiplication depends crucially on the field you are in.
For example, if $V$ is a vector space over $\Bbb R$ and $W$ over $\Bbb Q$, then given an $\alpha \in \Bbb R\setminus\Bbb Q$, the product $\alpha w$ is not defined for $w \in W$.
A Possible Solution
You could, however, make sense of it if
$$\Bbb F_1 \subset \Bbb F_2,$$
where $\Bbb F_1$ (resp., $\Bbb F_2$) is the field over which $V$ (resp., $W$) is a vector space.
However, this is nothing too enlightening as given any vector space over $\Bbb F_2$, it can naturally be made into a vector space over $\Bbb F_1$ by restricting the scalar multiplication.
The Conclusion
The takeaway is the following: When defining a vector space $V$ over a field $\Bbb F$, the only interaction that these objects have is via the scalar multiplication. (Which is a function from $\Bbb F \times V$ to $V$.)
Linear transformations are defined in a way so as to preserve this interaction. This is why you run into trouble when you change the fields for the "interactions" may not be compatible. |
How to find the period of $\cos(\cos\theta)$? | $$\cos\cos\theta=1\iff\frac{\cos\theta}{2\pi}\in\Bbb Z\iff\cos\theta=0\iff\frac{\theta-\frac\pi2}{\pi}\in\Bbb Z$$
(the middle equivalence holds because $|\cos\theta|\le 1<2\pi$). That is, $\cos\cos\theta$ reaches its maximum only for $\theta\equiv\pi/2\pmod\pi$.
Moreover, $\cos\cos\theta=\cos\cos(\theta+\pi)$ for any $\theta\in\Bbb R$. Thus, the period is $\pi$. |
Looking for a proof of Cleo's result for ${\large\int}_0^\infty\operatorname{Ei}^4(-x)\,dx$ | I would like to thank M.N.C.E. for suggesting the use of the identity $$2\log(x)\log(y) = \log^{2}(x) + \log^{2}(y) - \log^{2} \left(\frac{x}{y} \right) $$ where $x$ and $y$ are real positive values.
As I stated here,
\begin{align} &\int_{0}^{\infty} [\text{Ei}(-x)]^{4} \, dx \\ &= -4 \int_{0}^{\infty} [\text{Ei}(-x)]^{3} e^{-x} \, dx \tag{1} \\ &= 4 \int_{0}^{\infty} \int_{1}^{\infty} \int_{1}^{\infty} \int_{1}^{\infty} \frac{1}{wyz} e^{-(w+y+z+1)x} \, dw \, dy \, dz \,dx \\& =4 \int_{1}^{\infty} \int_{1}^{\infty} \int_{1}^{\infty} \frac{1}{wyz} \int_{0}^{\infty} e^{-(w+y+z+1)x} \, dx \, dw \, dy \, dz \\ &= 4 \int_{1}^{\infty} \int_{1}^{\infty} \int_{1}^{\infty} \frac{1}{wyz} \frac{1}{w+y+z+1} \, dw \, dy \, dz \\ &= 4 \int_{1}^{\infty} \int_{1}^{\infty} \int_{1}^{\infty} \frac{1}{yz} \frac{1}{y+z+1} \left(\frac{1}{w} - \frac{1}{w+y+z+1} \right) \, dw \, dy \, dz \\ &= 4 \int_{1}^{\infty} \int_{1}^{\infty} \frac{1}{yz} \frac{1}{y+z+1} \log(2+y+z) \, dy \, dz \\ &= 8 \int_{1}^{\infty} \int_{1}^{z} \frac{1}{yz} \frac{1}{y+z+1} \log(2+y+z) \, dy \, dz \\ &= 8 \int_{2}^{\infty} \int_{u-1}^{u^{2}/4} \frac{1}{v} \frac{1}{u+1} \log(2+u) \frac{dv \, du}{\sqrt{u^{2}-4v}} \tag{2} \\& =16 \int_{2}^{\infty} \frac{\log(2+u)}{u+1} \int_{0}^{u-2} \frac{1}{u^{2}-t^{2}} \, dt \, du \tag{3} \\& =16 \int_{2}^{\infty} \frac{\log(2+u)}{u+1} \frac{1}{u} \text{arctanh} \left( \frac{u-2}{2}\right) \, du \\ &= 8 \int_{2}^{\infty} \frac{\log(2+u) \log(u-1)}{u(1+u)} \, du \\ &= 8 \Bigg(\int_{0}^{1/2} \frac{\log(1-u) \log(1+2u)}{1+u} \, du - \int_{0}^{1/2} \frac{\log(u) \log(1+2u)}{1+u} \, du \\ &- \int_{0}^{1/2} \frac{\log (1-u) \log(u)}{1+u} \, du + \int_{0}^{1/2} \frac{\log^{2}(u)}{1+u} \, du \Bigg) \tag{4} \end{align}
$(1)$ Integrate by parts.
$(2)$ Make the change of variables $u= y+z$, $v=yz$.
$(3)$ Make the substitution $ t^{2}=u^2-4v$.
$(4)$ Replace $u$ with $\frac{1}{u}$.
Then using the identity I mentioned at the beginning of this post,
$$ \begin{align} &\int_{0}^{\infty} [\text{Ei}(-x)]^{4} \, dx \\ &= -4\int_{0}^{1/2} \frac{\log^{2} \left(\frac{1-u}{1+2u} \right)}{1+u} \, du + 4 \int_{0}^{1/2} \frac{\log^{2} \left(\frac{u}{1+2u} \right)}{1+u} \, du + 4 \int_{0}^{1/2} \frac{\log^{2} \left(\frac{u}{1-u} \right)}{1+u} \, du \\& = -12 \int_{1/4}^{1} \frac{\log^{2} (u)}{(1+2u)(2+u)} \, du +4 \int_{0}^{1/4} \frac{\log^{2} (u)}{(1-2u)(1-u)} \, du + 4 \int_{0}^{1} \frac{\log^{2}(u)}{(1+2u)(1+u)} \, du \\& \approx 14.0394 \end{align}$$
After performing partial fraction decomposition, you can evaluate all the integrals by integrating by parts twice. In some cases you need to pick a particular "$v$" for it to work. Wolfram Alpha can provide the antiderivatives if needed.
It will require additional work to get it in the form provided by Cleo.
EDIT:
Actually, the only tricky one is $$\int \frac{\log^{2}(x)}{2+x} \, dx. $$
Let $u=\log^{2}(x)$ and $dv = \frac{dx}{2+x}$. For $v$ choose the antiderivative $\log \left(\frac{2+x}{2} \right)$.
Then $$ \int \frac{\log^{2}(x)}{2+x} \, dx = \log^{2}(x) \log \left(\frac{2+x}{2} \right) - 2 \int \frac{\log (x) \log \left( \frac{2+x}{x} \right)}{x} \, dx.$$
Now let $u = \log(x)$ and $dv= \frac{ \log \left(1 + \frac{x}{2}\right)}{x} \, dx$. Then by definition $v = - \text{Li}_{2} \left( - \frac{x}{2}\right)$.
So
$$ \begin{align} \int \frac{\log^{2}(x)}{2+x} \, dx &= \log^{2}(x) \log \left(\frac{2+x}{2} \right) +2 \text{Li}_{2} \left(- \frac{x}{2} \right) \log(x) - 2\int \frac{\text{Li}_{2} \left(- \frac{x}{2} \right)}{x} \, dx \\ &= \log^{2}(x) \log \left(\frac{2+x}{2} \right) +2 \text{Li}_{2} \left(- \frac{x}{2} \right) \log(x) -2 \text{Li}_{3} \left(- \frac{x}{2} \right)+C. \end{align} $$
The other 5 are very similar and don't require any tricks at the start. |
Unconstrained matrix factorization vs SVD | Your first question is not technically a question, but your supposition is incorrect. No, it is not true that you can find a factorization that comes closer in the mean square error sense (I assume that's what MSE stands for). The Eckart–Young–Mirsky (EYM) theorem guarantees that the approximation attained by truncating the SVD is optimal.
Your second question is indeed a question, but I don't know what you mean by "extracting property of $M$ from this result". In particular, I don't know what you mean by "extracting", "a property of $M$", or "this result". |
Find the exact value of integration $ \int_0^1 \frac{1}{\sqrt{1-x}+\sqrt{x+1}+2} dx$ | Hint:
$$
\begin{aligned}
& \int\frac{dx}{\sqrt{1-x}+\sqrt{x+1}+2}\\
& \stackrel{x\to\cos2\phi}=
\int\frac{\sin{2\phi}\,d\phi}{1+\frac1{\sqrt2}(\sin\phi+\cos\phi)}
=\int\frac{\sin{2\phi}\,d\phi}{1+\sin(\phi+\frac\pi4)}\\
&\stackrel{\phi\to\theta+\frac\pi4}
=\int\frac{\cos{2\theta}\,d\theta}{1+\cos\theta}=\int\frac{2\cos^2{\theta}-1}{1+\cos\theta}\,d\theta\\
&\stackrel{\theta\to2\arctan t}=\int\left[2\left(1-\frac{2}{t^2+1}\right)^2-1\right]\,dt.
\end{aligned}
$$
The rest should not be complicated. |
B-spline curve fitting with conditions on derivatives | I'm assuming your "points" are in 2D or 3D. So, you have 5 points and 5 derivative vectors. You can insert a cubic segment between each pair of points. This will give you a cubic b-spline that is $C^1$-continuous.
Specifically, suppose your 5 points are $P_1, P_2, P_3, P_4, P_5$ and your 5 derivative vectors are $V_1, V_2, V_3, V_4, V_5$. Assign parameter values $t_1, t_2, t_3, t_4, t_5$ to the 5 points. You can use $t_1=0$, and $t_5=1$, and $t_2, t_3, t_4$ should be spaced according to the spacing of the points. Let $h_i = (t_{i+1}-t_i)/3$ for $i=1,2,3,4$.
Your b-spline should have knot sequence
$$(0,0,0,0, t_2, t_2, t_3, t_3, t_4, t_4, 1,1,1,1)$$
and its control points should be
$$P_1, \quad P_1 + h_1V_1, $$
$$P_2 - h_1V_2, \quad P_2 + h_2V_2,$$
$$P_3 - h_2V_3, \quad P_3 + h_3V_3, $$
$$P_4 - h_3V_4, \quad P_4 + h_4V_4,$$
$$P_5 - h_4V_5, \quad P_5$$
So, 10 control points and 14 knot values, which works out correctly for a b-spline of degree 3 (order 4). |
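Here is a sketch of this construction (assuming SciPy; the points and derivative vectors below are made-up example data, not from the question):

```python
import numpy as np
from scipy.interpolate import BSpline

P = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [6, 0]], dtype=float)
V = np.array([[1, 1], [1, 0], [1, -1], [1, -1], [1, 0]], dtype=float)
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # parameter values
h = np.diff(t) / 3.0                        # h_i = (t_{i+1} - t_i)/3

# Knot sequence (0,0,0,0, t2,t2, t3,t3, t4,t4, 1,1,1,1): 14 knots.
knots = np.concatenate(([0.0] * 4, np.repeat(t[1:-1], 2), [1.0] * 4))

# Control points P_i +/- h V_i exactly as listed above: 10 in total.
ctrl = [P[0], P[0] + h[0] * V[0]]
for i in (1, 2, 3):
    ctrl += [P[i] - h[i - 1] * V[i], P[i] + h[i] * V[i]]
ctrl += [P[4] - h[3] * V[4], P[4]]
ctrl = np.array(ctrl)

spl = BSpline(knots, ctrl, k=3)
assert np.allclose(spl(t), P)                # interpolates the points
assert np.allclose(spl.derivative()(t), V)   # matches the derivatives
```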
How to construct a self orthogonal Latin square of order 5 | Use the elements of $\;\mathbb{Z}_v$ as the names of the rows and columns of your Latin square. Let $\boldsymbol{A}=(a_{ij})$ such that $a_{ij}=2i-j\in \mathbb{Z}_v$. This forms a self-orthogonal Latin square whenever $\gcd(v,6)=1$, so it works for the order $5$ case as well.
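A short verification sketch (my own addition) for $v=5$:

```python
v = 5
L = [[(2 * i - j) % v for j in range(v)] for i in range(v)]

# Latin: every row and every column is a permutation of 0..v-1.
assert all(sorted(row) == list(range(v)) for row in L)
assert all(sorted(L[i][j] for i in range(v)) == list(range(v))
           for j in range(v))

# Self-orthogonal: the pairs (L[i][j], L[j][i]) are all distinct,
# i.e. the square is orthogonal to its own transpose.
pairs = {(L[i][j], L[j][i]) for i in range(v) for j in range(v)}
assert len(pairs) == v * v
```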
When does a function attain its infimum. | This is not true for convex $f$: Take
$$
f(x,y)= \max( e^x + y,0),
$$
which is a maximum of convex functions, hence convex. The minimum $0$ is attained at, e.g., $(x,y)=(0,-1)$.
Define $A=\{(x,0): \ x\in \mathbb R\}$. Minimizing $f$ on $A$ is equivalent to minimizing $e^x$ for $x\in \mathbb R$, where the infimum is not attained. |
$Pr(A+B+C=X+Y+Z)$ with $A, B, C, X, Y, Z$ being iid random variables | Since all six variables are independent, $$\mathsf G_{A+B+C-X-Y-Z}(t)=\mathsf G_A(t)\,\mathsf G_B(t)\,\mathsf G_C(t)\,\mathsf G_X(t^{-1})\,\mathsf G_Y(t^{-1})\,\mathsf G_Z(t^{-1})$$
and since they are identically distributed,
$$\mathsf G_{A+B+C-X-Y-Z}(t)=\mathsf G_A^3(t)\,\mathsf G_A^3(t^{-1})$$
Now... what is the PGF of a uniform discrete random variable over $[0;9]\cap\Bbb N$ and how do you plan to deal with the complication at $t=0$?
$$\mathsf G_A(t)=\tfrac 1{10}\sum_{x=0}^9 t^x$$
So... ?
via:
$\begin{split}\mathsf G_{\sum_k a_kX_k}(t) &=\mathsf E(t^{\sum_k a_kX_k})\\[1ex]&=\mathsf E(\prod_k t^{a_kX_k}) \\[1ex]&\overset{ind.}= \prod_k\mathsf E((t^{a_k})^{X_k}) \\[1ex]&= \prod_k\mathsf G_{X_k}(t^{a_k})\end{split}$
PS: You are not looking for the $t^0$ term. The probability mass function is recovered from the derivatives of $\mathsf G$.$$\mathsf P(S=k)= \mathsf G_S^{(k)}(0)/k!$$ |
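For a direct numerical answer to compare against (a sketch, assuming NumPy): convolve the pmf with itself to get the distribution of $S=A+B+C$, then use $\mathsf P(A+B+C=X+Y+Z)=\sum_s \mathsf P(S=s)^2$, which holds because the two sums are iid.

```python
import numpy as np

pmf = np.full(10, 0.1)                       # uniform on {0,...,9}
s = np.convolve(np.convolve(pmf, pmf), pmf)  # pmf of S = A + B + C
print(np.sum(s**2))  # 0.055252, i.e. the classic count 55252 over 10^6
```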
fiber product of affine varieties is associative? | Fiber products in any category are naturally associative.
Let $X,Y,Z,A,B$ be objects in a category $\mathcal{C}$ with maps $f:X\to A$, $g:Y\to A$, $h:Y\to B$, $j:Z\to B$.
Then the fiber products $(X\times_A Y)\times_B Z$ and $X\times_A(Y\times_B Z)$ both represent the functor from $\mathcal{C}^{\text{op}}$ to $\mathbf{Set}$ which takes an object $W$ to the collection of all triples of morphisms $(\alpha,\beta,\gamma)$, where $\alpha: W\to X$, $\beta: W\to Y$, $\gamma:W\to Z$ such that $f\circ \alpha = g\circ \beta$ and $h\circ \beta = j \circ \gamma$. Since they both represent the same functor, they are naturally isomorphic. |
Continuous functions from compact Hausdorff spaces to the interval | $C(X,I)$ is the closed ball of radius $1/2$ centred at the constant function $1/2$ in the Banach space $C(X,\mathbb R)$. A closed ball of nonzero radius in a Banach space is compact if and only if the Banach space is finite-dimensional. $C(X,\mathbb R)$ is infinite-dimensional. Therefore $C(X,I)$ is not compact. |
Solve an equation; Ax=b regarding matrices | From your problem statement, it seems that:
$$Av_1 = v_1 \,\,\text{ and }\,\, A v_2 = v_2 \,\, \text{ and }\,\, Av_3 = 2v_3.$$
Also, it should be clear that $v_1,v_2,v_3$ span $\mathbb{R}^3$ so
$$b= c_1 v_1 + c_2 v_2 + c_3 v_3$$ for some $c_1,c_2,c_3$ not all zero.
Find the $c_j$'s . Then
$$Ab = A(c_1v_1 + c_2v_2 + c_3 v_3)= c_1 Av_1 + c_2 Av_2 + c_3 Av_3=c_1v_1+c_2v_2+ 2c_3 v_3.$$ |
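A numerical sketch of this recipe (the vectors $v_i$ and $b$ below are placeholders, since the question's data isn't reproduced here):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([0.0, 0.0, 1.0])
b = np.array([2.0, 3.0, 4.0])

# Solve [v1 v2 v3] c = b for the coordinates c.
c = np.linalg.solve(np.column_stack([v1, v2, v3]), b)

# Then A b = c1*v1 + c2*v2 + 2*c3*v3, without ever forming A itself.
Ab = c[0] * v1 + c[1] * v2 + 2 * c[2] * v3
print(Ab)
```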
Problem related to means | Assuming that $a,c\ge0$ and $n$ is a positive integer, we have that $x^n$ is a convex function for $x\ge0$.
For any convex function, $f$,
$$
\frac{f(a)+f(c)}2\ge f\left(\frac{a+c}2\right)
$$ |
Permutation questions | For the second question, are you familiar with the result that says that (the number of elements of a group $G$ commuting with an element $g$) times (the number of elements conjugate to $g$) equals the order of $G$? and the result that says that in $S_n$ elements are conjugate if and only if they have the same cycle structure? and can you work out how many elements of $S_6$ have the same cycle structure as $(12)(34)$? Oh, I suppose you also have to know how many elements there are in $S_6$. |
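If you want to check the count by brute force (my own sketch):

```python
from itertools import permutations

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:   # walk the cycle through i
                seen.add(j)
                j = p[j]
                n += 1
            lengths.append(n)
    return sorted(lengths)

# Elements of S_6 with the cycle structure of (12)(34): type 2+2+1+1.
count = sum(1 for p in permutations(range(6))
            if cycle_type(p) == [1, 1, 2, 2])
print(count, 720 // count)  # 45 conjugates, so a centralizer of order 16
```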
The identity element of the Tensor Product of Vector Bundles | An isomorphism of vector bundles is just a morphism of vector bundles that is an isomorphism on every fiber (as pointed out by san in a comment).
You say you have an isomorphism between the fibers. I'll assume that the isomorphism you have is $\mathbb{R}\otimes F \to F : r\otimes f\mapsto rf$ where $F$ is a fiber.
Its inverse is $F\to\mathbb{R}\otimes F: f\mapsto 1\otimes f$.
The bundle isomorphism between $E$ and $(M\times\mathbb{R})\otimes E$ is obtained by taking the above isomorphism on every fiber:
$$ (M\times\mathbb{R})\otimes E \to E : (m,r)\otimes f_m \mapsto (rf)_m $$
where I used subscripts to indicate basepoints. The addition of a basepoint in the notation is really the only difference with the vector space case.
The inverse is
$$ E \to (M\times\mathbb{R})\otimes E : f_m \mapsto (m,1)\otimes f_m .$$
This answers your question (I hope).
(Optional extra paragraph.)
Note that finding the isomorphism of vector bundles was not actually harder than finding the isomorphism between fibers. We just needed to include a basepoint in the notation. This is often true: if you can find a natural map between vector spaces, you get the same map between vector bundles. For example, if $V$ and $W$ are vector spaces then $\text{Hom}(V,W) \cong V^*\otimes W$ by some natural map.
Using the same map but adding basepoints, we get that $$\text{Hom}(D,E) \cong D^*\otimes E$$ where $D$ and $E$ are vector bundles.
(Here by $\text{Hom}(D,E)$ I mean the vector bundle whose fiber over $m$ is $\text{Hom}(D_m,E_m)$. I don't mean the set of vector bundle morphisms from $D$ to $E$.) |
Integral of a function with compact support | The key of the reasoning is that, defined
$$
V(x)=\left(
\begin{matrix}
fF_1 g\\
fF_2 g\\
\vdots\\
fF_{d-1} g\\
fF_d g
\end{matrix}
\right) \implies V\text{ has compact support in }\mathbb{R}^d
$$
Since
$$
\sum_{i=1}^d \int\limits_{\mathbb{R}^d} \frac{\partial (fF_i g)}{\partial x_i} \mathrm{d}x = \int\limits_{\mathbb{R}^d} \nabla\cdot V\, \mathrm{d}x,
$$
then, by considering a ball $B(0,r)\subset\mathbb{R}^d$ of radius $r>0$, we have that
$$
\begin{split}
\sum_{i=1}^d \int\limits_{\mathbb{R}^d} \frac{\partial (fF_i g)}{\partial x_i} \mathrm{d}x & = \int\limits_{\mathbb{R}^d} \nabla\cdot V\, \mathrm{d}x\\
&\triangleq\lim_{r\to\infty}\int\limits_{B(0,r)} \nabla\cdot V\, \mathrm{d}x\\
&\text{ by Gauss-Green}\\
&=\lim_{r\to\infty}\int\limits_{\partial B(0,r)} V\cdot\nu_x\, \mathrm{d}\sigma_x=0
\end{split}
$$
where $\nu_x$ is the outward unit normal vector to $\partial B(0,r)$ at the point $x\in \partial B(0,r)$. Since $V$ has compact support, $V$ vanishes on $\partial B(0,r)$ (so $V\cdot\nu_x=0$ for all $x$) for every $r$ larger than a fixed finite value $r_0>0$.
A Differential Equation Problem | If you change the independent variable $x = e^t$, the equation becomes a pretty typical delay differential equation. If you have a suitable initial condition, the problem reduces to a sequence of initial value problems for ODEs. It is not actually necessary to change the variable; working in terms of $x$ you can do the same things, only the steps will be nonuniform in size: $[a^k,a^{k+1}]$.
Getting an exact formula for solution, when the DDE has variable coefficients, would take a miraculous coincidence. Don't expect it. |
Sum of Legendre Symbols for evey number less than $p$ | It's perfectly good. Another way: the nonzero squares mod $p$ are $1^2 \pmod p$,
$2^2 \pmod p$,..., $(p-1)^2 \pmod p$. That would be $p-1$, but they're double-counted since $x^2 \equiv (p-x)^2$, so there are $(p-1)/2$ squares mod $p$. |
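A quick computational check (a sketch, assuming SymPy's `legendre_symbol`):

```python
from sympy.ntheory import legendre_symbol

p = 23
syms = [legendre_symbol(a, p) for a in range(1, p)]
print(syms.count(1), (p - 1) // 2)  # both 11: half the residues are squares
print(sum(syms))                    # 0: equal numbers of +1 and -1
```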
limits of integration in polar coordinates | Just do it (the evident integral). Note that the $r < 0$ in $r \,\mathrm{d}r \mathrm{d}\theta$ will occur when the bounds of integration $\int_0^{\cos \theta}$ are "backwards", so we could get twice the value, depending on a special property of $f$.
$f$ need not be defined for negative radii -- this integral is undefined.
$f$ could well be defined for negative $r$, but need not agree with the value it has at the same point expressed with positive radius, so $f(-r, \theta) \neq f(r, \pi + \theta)$. In this case, the result could be "anything", depending on what unrelated values the function has on the two ways to represent the "same point". (They're not the same point in the $r$-$\theta$ coordinates, since we are allowing $(r,\theta) \in \mathbb{R} \times \mathbb{R}$. They have different coordinates. They only appear the same when we plot them using the polar coordinates recipe. This extra constraint is analogous to requiring that $f$ be $2\pi$-periodic in $\theta$.)
$f$ could well be defined for negative $r$ with $f(-r, \theta) = f(r, \pi + \theta)$. Then the positive $r$ part duplicates the negative $r$ part -- because the bounds of the $r$ integral are in the "flipped" position, so we can arrange to eliminate the two minus signs. |
Representation of elements in Cantor set. | Suppose $a_n$ is $1$. We will divide our proof into cases.
Case $1$: If $a_k \ne 0$ for some $k > n$ and the digits $a_k$, $k > n$, are not all equal to $2$, then $x \in \left( \frac{3m+1}{3^n} , \frac{3m+2}{3^n} \right)$ for some integer $m$. But this open interval is taken out of the unit interval during the formation of the Cantor set, so $x$ is not in the Cantor set. (If instead $a_k = 2$ for all $k > n$, then $x$ also has the expansion with $b_k = a_k$ for $k < n$, $b_n = 2$, and $b_k = 0$ for $k > n$, which avoids the digit $1$ at position $n$.)
Case $2$: If $a_k = 0$ for all $k > n$, then $x = \sum_{k=1}^\infty b_k 3^{-k}$ with
$$b_k = \begin{cases} a_k & k < n \\ 0 & k = n \\ 2 & k > n \end{cases}$$
This follows because $0 \le x - \sum_{k=1}^N b_k 3^{-k} \le 3^{-N}$, with equality on the right when $N \ge n$.
Use compactness to show that $f(x_0)=\inf_{x\in K}f(x)$ | A characterization of lower semicontinuity is that for any $a \in \mathbb{R}$, the set $f^{-1}((-\infty, a])$ is closed in $\mathbb{R}$ (I don't know which definition of lower semicontinuity you're using because there are a few, but showing that the definition is equivalent to this is a good exercise, show it!)
Let $m = \inf_{x \in K} f(x)$ (finite by assumption). Let $A_{n} = \{x \in K: f(x) \leq m + \frac{1}{n}\}$. Each $A_{n}$ is non-empty by the definition of the infimum, and by the characterization of lower semicontinuity above, each $A_{n}$ is closed. Any closed subset of a compact set is compact, so each $A_{n}$ is compact. Also, clearly $A_{n+1} \subset A_{n}$ for each $n \in \mathbb{N}$, by definition of $A_n$. Thus,
$$
A := \bigcap_{n \in \mathbb{N}} A_{n} \neq \emptyset
$$
since a nested sequence of non-empty compact sets has a non-empty intersection. Pick any $x_{0} \in A$. Check that $x_{0}$ is a minimizer like you want.
try to understand proof of equivalence of borel-sigma | When you show that $(a,b]$ is the countable intersection of $(a, b + \frac{1}{n})$, you are showing that $\{(a,b]:a, b \in \mathbb{R}\} \subseteq \sigma((a,b))$. Since $\sigma((a,b])$ is the "smallest" $\sigma$-algebra that contains $\{(a,b]:a, b \in \mathbb{R}\}$, we have then that $\sigma((a,b]) \subseteq \sigma((a,b))$.
If you can show that for all $a,b \in \mathbb{R}$, $(a,b) \in \sigma((a,b])$, then you can conclude that $\sigma((a,b)) \subseteq \sigma((a,b])$. Then the double inclusion will imply that $\sigma((a,b)) =\sigma((a,b])$.
Does that make sense? |
Integral curve of vector field | The vector field $X = w_{1}\left(q_{1}\frac{\partial}{\partial p_{1}} - p_{1}\frac{\partial}{\partial q_{1}}\right) + w_{2}\left(q_{2}\frac{\partial}{\partial p_{2}} - p_{2}\frac{\partial}{\partial q_{2}}\right)$ is essentially composed of the rotational fields flowing clockwise about the origin in the $p_1q_{1}$-plane and the $p_{2}q_{2}$-plane, respectively. For example, consider the vector field $ Y = q\frac{\partial}{\partial p} - p\frac{\partial}{\partial q}$ in the $pq$-plane: it circulates clockwise about the origin.
The rotational symmetry of the field is evident, and the factors of $w_{1}$ and $w_{2}$ merely scale the speed with which one flows around the integral curves.
Recognizing this, you should be able to find the integral curves geometrically. The only trick will be to get your curve passing through your desired initial point.
Taking the initial point to be $P(1, 0, 1, 0)$, you should find the integral curve to be
$$
\gamma(t) = \left(p_{1}(t), q_{1}(t), p_{2}(t), q_{2}(t)\right)
$$
where
\begin{align*}
p_{1}(t) &= \cos w_{1}t\\
q_{1}(t) &= -\sin w_{1}t\\
p_{2}(t) &= \cos w_{2}t\\
q_{2}(t) &= -\sin w_{2}t.\\
\end{align*}
Now take the initial point to be $P(p_{1}, 0, p_{2}, 0)$, where $p_{1}$ and $p_{2}$ are both greater than zero. Then the integral curve should be built out of the circle of radius $p_{1}$ centered at the origin in the $p_{1}q_{1}$-plane and the circle of radius $p_{2}$ centered at the origin in the $p_2q_{2}$-plane. Again, both circles should be traced out clockwise.
That is, you should have
$$
\gamma(t) = \left(p_{1}(t), q_{1}(t), p_{2}(t), q_{2}(t)\right)
$$
where
\begin{align*}
p_{1}(t) &= p_{1}\cos w_{1}t\\
q_{1}(t) &= -p_{1}\sin w_{1}t\\
p_{2}(t) &= p_{2}\cos w_{2}t\\
q_{2}(t) &= -p_{2}\sin w_{2}t.\\
\end{align*}
You should be able to pass directly from here to the general integral curves: you merely need to account for the starting angle in each plane.
In particular, let $r_{i}$ denote the distance from a point in the $p_{i}q_{i}$-plane to the origin in the $p_{i}q_{i}$-plane (i.e., $r_{i} = \sqrt{p_{i}^2 + q_{i}^2}$) and $\alpha_{i}$ denote the oriented angle measured clockwise from the $p_{i}$-axis in the $p_{i}q_{i}$-plane.
Then the integral curve beginning at a point $P(p_{1}, q_{1}, p_{2}, q_{2})$ should take the form
$$
\gamma(t) = \left(p_{1}(t), q_{1}(t), p_{2}(t), q_{2}(t)\right),
$$
where
\begin{align*}
p_{1}(t) &= r_{1}(0)\cos\left(w_{1}\left(t + \frac{\alpha_{1}(0)}{w_{1}}\right)\right)\\
q_{1}(t) &= -r_{1}(0)\sin\left(w_{1}\left(t + \frac{\alpha_{1}(0)}{w_{1}}\right)\right) \\
p_{2}(t) &= r_{2}(0)\cos\left(w_{2}\left(t + \frac{\alpha_{2}(0)}{w_{2}}\right)\right)\\
q_{2}(t) &= -r_{2}(0)\sin\left(w_{2}\left(t + \frac{\alpha_{2}(0)}{w_{2}}\right)\right), \\
\end{align*}
and the angles $\alpha_{i}(0)$ are angle measures of the projection $(p_{i}, q_{i})$ in the indicated polar coordinates of the $p_{i}q_{i}$-plane.
Further, note that the quantities $r_{i}(t)= \sqrt{\left(p_{i}(t)\right)^2 + \left(q_{i}(t)\right)^2}$, $i= 1, 2$, are conserved quantities for the flow. |
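As a cross-check (my own sketch, assuming SciPy), one can integrate the field numerically from $P(1,0,1,0)$ and compare with the closed form above:

```python
import numpy as np
from scipy.integrate import solve_ivp

w1, w2 = 1.0, 2.0  # example frequencies (placeholders)

def X(t, u):
    p1, q1, p2, q2 = u
    # dp_i/dt = w_i q_i, dq_i/dt = -w_i p_i, as read off the field.
    return [w1 * q1, -w1 * p1, w2 * q2, -w2 * p2]

ts = np.linspace(0, 5, 50)
sol = solve_ivp(X, (0, 5), [1, 0, 1, 0], t_eval=ts, rtol=1e-10, atol=1e-12)

exact = np.array([np.cos(w1 * ts), -np.sin(w1 * ts),
                  np.cos(w2 * ts), -np.sin(w2 * ts)])
assert np.allclose(sol.y, exact, atol=1e-6)  # matches the closed form
```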
Integral $\int_{\sqrt{33}}^\infty\frac{dx}{\sqrt{x^3-11x^2+11x+121}}$ | Referring to Zacky's comment, it suffices to turn the cubic under the square root into a quadratic, then perform a Landen transformation to arrive at an elliptic integral of the first kind, equivalent to $K(k_{11})$.
How is this inequality called? (And how to improve this process) | Following Daniel Fischer's comments, I'm posting an answer:
Let $G = (a,b)$
We have that $$\lVert u\rVert_{L^\infty(G)}^2 = \lVert u^2\rVert_{L^\infty(G)} \le \int_a^b 2 |u(t)u'(t)|dt \le 2 \lVert u\rVert_{L^2(G)}\lVert u'\rVert_{L^2(G)}$$
where the first inequality is justified by the fact that $u^2$ is absolutely continuous and $u(a) = 0$, so that we can write $u^2(x) = \int_a^x 2u(t)u'(t)\,dt$, and the second inequality is just Hölder's inequality.
3 Ants going at different speeds, when they will be at the same place Motion Problem | Hint. Suppose that the ants meet after $t$ seconds, and measure distance around the circle from where C starts. Then A has travelled $3t$ metres, but had a $60$ metre start, for a total of $60+3t$ from the initial point. Likewise B will be at a distance $30+5t$ from the initial point. However, B may have travelled a number of times around the circle, say $x$ times more than A, and therefore has travelled $90x$ metres further than A. So we have the equation
$$60+3t+90x=30+5t\ .$$
See if you can explain by using similar ideas why
$$60+3t+90y=10t\ ,$$
where $y$ is the number of times C has "lapped" A.
Now eliminate $t$ from these two equations; then find the smallest possible values of $x$ and $y$, remembering that while $t$ could be any positive number, $x$ and $y$ must be positive integers.
Good luck! |
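Once you have worked it out, you can check your answer by brute force (a sketch using the speeds and head starts from the hint, on a $90$-metre track):

```python
# Positions (metres from C's start, modulo 90) after t seconds.
for t in range(1, 10**4):
    if (60 + 3 * t) % 90 == (30 + 5 * t) % 90 == (10 * t) % 90:
        print(t)  # the first time all three ants coincide
        break
```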
Rose curve angles of rotation | "Angles of rotation" is a little verbose. They mean "arguments of the petals' tips" and this entails solving for where $5\sin2\theta$ attains its extrema.
Since $-1\le\sin x\le1$ for real $x$, we only need to solve $\sin2\theta=1$ and $\sin2\theta=-1$, the solutions with $\theta\in[0,2\pi)$ being $\theta=\pi/4\lor\theta=5\pi/4$ and $\theta=7\pi/4\lor\theta=3\pi/4$ respectively. Thus the general solution for the $\theta$ at which petal tips occur is $\theta=(2n+1)\pi/4$, $n\in\Bbb Z$. |
Prove that $\left(x_1+\dots+x_k\right)^j\leq k^{j-1}\left(x_1^j+\dots+x_k^j\right)$ for $x_1,\dots,x_k\geq0$. | For $j \ge 1$ the map $x \mapsto x^j$ is convex on $\mathbb{R}^+$, so we can apply Jensen's inequality:
$$\left(\sum \limits_{i = 1}^k x_i\right)^j = k^j \left(\sum \limits_{i = 1}^k \frac{1}{k} x_i\right)^j \le k^j \sum \limits_{i = 1}^k\frac{1}{k}x_i^j = k^{j - 1} \sum \limits_{i = 1}^k x_i^j$$ |
How to determine the number of combinations in a conditional group? | Suppose the three people who are selected are Anne, Barbara, and Charles.
There are six orders in which those same three people could be selected:
Anne, Barbara, Charles
Anne, Charles, Barbara
Barbara, Anne, Charles
Barbara, Charles, Anne
Charles, Anne, Barbara
Charles, Barbara, Anne
However, all six choices constitute the same committee. Therefore, you need to divide your answer by the $3! = 6$ orders in which you could obtain the same three people, which yields the answer
$$\frac{10 \cdot 8 \cdot 6}{3!} = \frac{10 \cdot 8 \cdot 6}{6} = 10 \cdot 8 = 80$$
Alternate Approach: Choose three of the five couples from which to choose the committee members in $\binom{5}{3}$ ways, then choose one of the two members from each selected couple, which can be done in $2^3$ ways. Thus, the number of possible selections is
$$\binom{5}{3} \cdot 2^3 = 10 \cdot 8 = 80$$ |
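A brute-force confirmation (my own sketch):

```python
from itertools import combinations

# People labelled by (couple index, member); committees of 3 with no couple.
people = [(c, m) for c in range(5) for m in "ab"]
valid = [s for s in combinations(people, 3)
         if len({c for c, _ in s}) == 3]  # 3 distinct couples: no couple
print(len(valid))  # 80
```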
Is the statement $A \in A$ true or false? | It seems that you confused $A\in B$ ($A$ is an element of $B$) and $A\subset B$ ($A$ is a subset of $B$, that is, every element of $A$ is an element of $B$).
Edit: Example: $1\in \{1,2\}$, and $\{1\}\subset \{1,2\}$.
Every set is a subset of itself (this is what you argue in 1), but a set can never be an element of itself (at least in standard set theory, that is, Zermelo-Fraenkel set theory, where the axiom of regularity forbids this). |
Set builder notation: defining the number of elements | The number of elements of a set $A$, called the cardinality of $A$, is denoted $|A|$. Here's what I understand you are trying to say:
I have a set $L$ and a set $S$ which is a subset of $L$. $S$ is the set $\{A, B, C\}$, where: $$A = \{a_1, a_2, ... a_n\}$$ $$B = \{b_1, b_2, ... b_n\}$$ $$C = \{c_1, c_2, ... c_n\}$$ for some integer $n$ such that $1 \le n \lt |L|$. |
Elementary function evaluation | Yes, that is very relevant. Since $f(g(x))=x$, then
$$f^{2011}(g^{1994}(x))=f^{2011-1994}(x)=f^{17}(x)$$
Now look what happens when you compose $f(x)$ with itself:
$$f(x)=\frac{1}{1-x}$$
$$f^2(x)=\frac{x-1}{x}$$
$$f^3(x)=x$$
This is very helpful. Now that we know that composing $f$ with itself three times gives the identity, we can break our problem up into
$$f^{17}(x)=f^{3\cdot5+2}(x)=f^3(f^3(f^3(f^3(f^3(f^2(x))))))=f^2(x)=\frac{x-1}{x}$$
And that should be the answer. |
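A symbolic confirmation (a sketch, assuming SymPy):

```python
from sympy import symbols, simplify

x = symbols('x')
f = lambda u: 1 / (1 - u)

print(simplify(f(f(x))))     # (x - 1)/x
print(simplify(f(f(f(x)))))  # x, so f has order 3 under composition

g = x
for _ in range(17):          # seventeen-fold composition
    g = f(g)
print(simplify(g))           # (x - 1)/x, matching f^2
```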
Is there a proof that there is no such number that makes a pythagorean triple with sum and product of its digits? | Call the number $N$, the sum of its digit $S$ and the product of its digits $P$.
Clearly $N$ must have more than one digit.
It cannot have a 0, because then $P=0$. So in particular its second digit must be $\ge1$.
Suppose its first digit is $a$ and it has $k$ other digits. Then $N\ge10^ka+10^{k-1}$, $P\le 9^ka$ and $S\le 9k+a$, so $S+P\le a\,9^k+9k+a=a\left(9^k+\frac{9k}{a}+1\right)\le a(9^k+9k+1)$.
For $k\ge3$ we have $9^k+9k+1<10^k$, so $S+P\le a(9^k+9k+1)<10^ka\le N$.
For $k=1$, let the second digit be $b\ge1$; then $S=a+b$, $P=ab$, $N=10a+b$, and so $S+P=a(b+1)+b\le 10a+b=N$.
Similarly for $k=2$: let the number be $abc$. Then $S+P=a+b+c+abc$ and $N=100a+10b+c$. Now $ab\le 9a<11a+b$, so $P=abc\le 9\,ab<9(11a+b)$. Hence $S+P<99a+9b+(a+b+c)=N$. So in every case $S+P\le N$; since $S,P\ge1$, this gives $S^2+P^2<(S+P)^2\le N^2$, and since $S,P<N$, the hypotenuse of any triple $(S,P,N)$ would have to be $N$. Hence no Pythagorean triple is possible.
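An exhaustive check (my own sketch, assuming the triple in question is $(S,P,N)$ with $N$ the hypotenuse):

```python
found = []
for n in range(10, 10**6):
    digits = [int(d) for d in str(n)]
    if 0 in digits:
        continue  # a zero digit forces P = 0
    s, p = sum(digits), 1
    for d in digits:
        p *= d
    if s * s + p * p == n * n:
        found.append(n)
print(found)  # []: no example below 10^6, as the argument predicts
```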
Application of Exponential Distribution | Yes, you can use the exponential distribution to model the time between cars: set it up with the appropriate rate (2 cars/min or 20 cars/min or whatever) and then do a cumulative sum (cumsum in R) to find the time in minutes at which each car passes. Here's an example in R:
> waits <- rexp(10,2)
> waits
[1] 0.14713730 0.26549532 0.83353238 0.19284503 0.30513264 0.62242778
[7] 0.01943296 0.25699842 0.40801597 0.31635216
> cumsum(waits)
[1] 0.1471373 0.4126326 1.2461650 1.4390100 1.7441427 2.3665704 2.3860034
[8] 2.6430018 3.0510178 3.3673699
Here we have an average of two cars per minute. The first comes at about .15 minutes, the second at .26 minutes after that (i.e., at .41 minutes), and so on. The tenth one comes at 3.36 minutes.
There's another way that doesn't require doing the cumulative sum, though, which may be easier: a Poisson process, which directly generates the times at which cars come. To do it this way, you first sample the total number of cars $n$ that you see in the time interval from a Poisson distribution with parameter Rate*Time (e.g. if you have 2 cars per minute, and your time interval is an hour, your parameter will be 2*60 = 120), and then sample $n$ points from a uniform distribution on the time interval. The result ends up being the same either way. If you want, you can then calculate the waiting times (what you sampled from an Exponential distribution using the first method) by taking the differences between the ordered samples (diff in R will do this for you).
Here's an example of the alternative approach:
> n <- rpois(1,2*5)
> (a <- runif(n,0,5))
[1] 4.3983792 2.0030962 3.6927402 1.5854187 0.3782581 2.0634806 1.0005454
[8] 0.1659071 2.4213297 3.5768906
> diff(c(0,sort(a)))
[1] 0.16590711 0.21235101 0.62228724 0.58487331 0.41767751 0.06038446
[7] 0.35784905 1.15556095 0.11584959 0.70563900
Here, we first sampled $n$, which turned out to be ten, so we're simulating seeing 10 cars in the first 5 minutes (2 cars per minute on average). Then we sample the times uniformly from 0 to 5, storing them in the vector a. Note that they aren't in order. If we want to get the waiting times back, then we can run the line of code beginning with diff.
if a function f is $C^1$ and its derivative is bounded and $\int_{0}^{\infty}f$ converges then $f(x)\overset{x\rightarrow \infty}{\rightarrow}0$ | Assume that $f$ does not go to $0$. Then, there exist an $\varepsilon>0$ and a sequence $(x_n)_{n\in \mathbb{N}}$ such that $x_n\to\infty$ and $|f(x_n)|\geq \varepsilon$. By passing to a subsequence, the signs of $f(x_n)$ can be taken equal, so without loss of generality $f(x_n)\geq \varepsilon$.
Let $M>0$ such that $|f'(x)|\leq M$ for all $x$ and note that $|f(x+\delta)-f(x)|\leq M\delta$ for all $\delta>0$ and $x\geq 0$ by the mean value theorem. Thus, we have that $f(y)\geq \frac{\varepsilon}{2}$ for all $y\in [x_n, x_n+\frac{\varepsilon}{2M}]$.
Accordingly
$$
\left|\int_0^{x_n+\frac{\varepsilon}{2M}} f(x)\textrm{d}x-\int_0^{x_n} f(x)\textrm{d}x\right|= \int_{x_n}^{x_n+\frac{\varepsilon}{2M}} f(x)\textrm{d}x\geq \frac{\varepsilon^2}{4M}
$$
Thus, if $y_{2n-1}=x_n$ and $y_{2n}=x_n+\frac{\varepsilon}{2M}$, we get that $y_n\to \infty$ but $\left(\int_0^{y_n} f(x)\textrm{d}x\right)_{n\in \mathbb{N}}$ is not Cauchy, and thus, the integral is not convergent. |
Is $\int_x^{\infty}e^{-\frac{t^2}{2}} < \frac{1}{x}e^{-\frac{x^2}{2}}$? | Use
$$e^{-t^2/2} = \frac{t}{t}e^{-t^2/2} < \frac{t}{x}e^{-t^2/2}$$
for $t > x$; integrating the right-hand side over $(x,\infty)$ gives exactly $\frac{1}{x}e^{-x^{2}/2}$, since $\int_x^\infty t\,e^{-t^{2}/2}\,dt = e^{-x^{2}/2}$.
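A numerical sanity check of the bound for a few values of $x>0$ (my own sketch, assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

for x in (0.5, 1.0, 2.0, 3.0):
    tail, _ = quad(lambda t: np.exp(-t**2 / 2), x, np.inf)
    print(x, tail < np.exp(-x**2 / 2) / x)  # True in each case
```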
$M \not \models \phi$ vs $M \models \neg \phi$ | Yes. If $M$ is a structure, then "$M\models\neg\varphi$" means by definition "$M\not\models\varphi$."
However, note that things get more complicated at the level of theories. A theory $\Gamma$ - that is, a set of sentences - is said to satisfy a sentence $\varphi$ if $\varphi$ is true whenever $\Gamma$ is true; that is, $$\mbox{for all structures $M$, if $M\models\Gamma$ then $M\models\varphi$.}$$ (Here "$M\models\Gamma$" is shorthand for "$M\models\psi$ for every $\psi\in\Gamma$.") If $\Gamma$ satisfies $\varphi$, we write (perhaps confusingly) "$\Gamma\models\varphi$."
Now it is no longer true that "$\models\neg$" is the same as "$\not\models$"! This is because theories don't have to decide everything, whereas models do. For example, if we take $\Gamma$ to be the set of group axioms, and let $\varphi$ be the sentence $\forall x\forall y(x*y=y*x)$, then
$\Gamma\not\models\varphi$, since there are nonabelian groups; but also
$\Gamma\not\models\neg\varphi$, since there are abelian groups.
Note: there is nothing special here about first-order logic! Second-order logic, infinitary logic, logic with a cardinality quantifier, etc. also behave the same way. Basically, any logic founded on classical propositional logic (two truth values, excluded middle, etc.) will work this way, since the truth definition for negation will be "$M\models\neg\varphi\iff M\not\models\varphi$". Note that things will be different if we look at logics founded on intuitionistic propositional logic, but that's a whole other can of worms. |
Showing that $f_{xy}(0,0) \neq f_{yx}(0,0)$ for $xy \cdot \frac{x^2-y^2}{x^2+y^2}$ | $$f_{xy}=\lim_{h\to0}\dfrac{f_y(h,0)-f_y(0,0)}{h} = \lim_{h\to0}\dfrac{\lim_{k\to0}\dfrac{f(h,k)-f(h,0)}{k}-\lim_{k\to0}\dfrac{f(0,k)-f(0,0)}{k}}{h}$$
This gives $$f_{xy}=\lim_{h\to0}\dfrac{\lim_{k\to0}\dfrac{hk\frac{h^2-k^2}{h^2+k^2}-0}{k}-0}{h}=\lim_{h\to 0}\frac{h}{h}=1$$
Similarly
$$f_{yx}=\lim_{k\to0}\dfrac{f_x(0,k)-f_x(0,0)}{k} = \lim_{k\to0}\dfrac{\lim_{h\to0}\dfrac{f(h,k)-f(0,k)}{h}-\lim_{h\to0}\dfrac{f(h,0)-f(0,0)}{h}}{k}$$
This gives
$$f_{yx}=\lim_{k\to0}\dfrac{\lim_{h\to0}\dfrac{hk\frac{h^2-k^2}{h^2+k^2}-0}{h}-0}{k}=\lim_{k\to 0}\frac{-k}{k}=-1$$
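A symbolic check of the two iterated limits (a sketch, assuming SymPy; it uses $f(h,0)=f(0,k)=0$, so the inner difference quotients reduce to $f/k$ and $f/h$):

```python
from sympy import symbols, limit

h, k = symbols('h k')
f = lambda x, y: x * y * (x**2 - y**2) / (x**2 + y**2)

fy_h0 = limit(f(h, k) / k, k, 0)   # f_y(h, 0) = h
fx_0k = limit(f(h, k) / h, h, 0)   # f_x(0, k) = -k

print(limit(fy_h0 / h, h, 0))      # f_xy(0,0) = 1
print(limit(fx_0k / k, k, 0))      # f_yx(0,0) = -1
```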
Submodules of finitely generated modules | I don't know from which result this is a corollary in your book, or what you know about modules over a PID, so I will give you a proof from scratch.
Let $M$ be an $R$-module generated by $n$ elements, and let $N$ be a submodule of $M.$
Assume that you know the desired result is true when $M$ is free (I will handle this case later). Now, for a general $M$, you have an isomorphism $M\simeq R^n/P$, where $P$ is a submodule of $R^n.$ Then submodules of $M$ correspond via this isomorphism to submodules of $R^n/P$. These submodules have the form $N'/P$ where $N'$ is a submodule of $R^n$ containing $P.$ By the case of free modules, $N'$ is generated by $m\leq n$ elements, and thus so is $N'/P$. Hence, the result is also true for the submodules of $M.$
It remains to handle the case where $M$ is free. We will proceed by induction on the rank $n$ of $M$. If $n=0$, then $M=0$ and there is nothing to do. Now assume the result is true for any free module of rank $n$, and let $M$ be a free module of rank $n+1$. Fix a basis $(e_1,\ldots,e_{n+1})$ of $M$.
Let $N$ be a submodule of $M$ and let $(x_j)_{j\in J}$ be a family of generators of $N$. We have $$x_{j}=\sum_{i=1}^{n+1} a_{ij}\cdot e_i \mbox{ for all }j\in J.$$
Let $\mathfrak{a}$ be the ideal generated by the $a_{n+1 \ j},j\in J$. Since $R$ is a PID, we have $\mathfrak{a}=(a)$. We may write $$a=\sum_{j\in J} \lambda_j a_{n+1 \ j},$$ where the $\lambda_j$'s are all zero except for finitely many.
Set $$y_0=\sum_{j\in J} \lambda_j\cdot x_j\in N.$$ Then, the $(n+1)$-th coordinate of $y_0$ is $a$.
One may also write $a_{n+1 \ j}=\mu_j a$, and thus the submodule
$N'$ generated by the elements $x_j-\mu_j y_0,j\in J$ is a submodule of $M'=Re_1\oplus\cdots\oplus Re_n$, which is free of rank $n$. By the induction hypothesis, $N'$ is generated by $y_1,\ldots,y_m\in N'\subset N$, $m\leq n.$
One deduces easily that each $x_j$ is a linear combination of $y_0,y_1,\ldots,y_{m}$. Hence, $N$ is generated by the $m+1\leq n+1$ elements $y_0,\ldots,y_{m}$.
This finishes the induction step, and the proof. |
Periodic solutions of $y'''+ay'+byy'=0$ | For solutions such that $y' \ne 0$ on some interval, integrating the equation once gives
\begin{align}
& y'' +ay +\tfrac{b}{2}y ^2=C_1 \\
\cdot\, y' \implies & y'y'' +ayy' +\tfrac{b}{2}y ^2y'=C_1y' \\
\int dx \implies & \frac{1}{2}y'^2+\frac{a}{2}y^2+\frac{b}{6}y^3 = C_1y+ C_2 \\
\text{solve for }y'^2 \implies & y'^2 = -ay^2-\frac{b}{3}y^3 + 2C_1y+ 2C_2 \\
\end{align}
This can be rewritten as
$$
\frac{y'}{\sqrt{-\frac{b}{3}y^3 -ay^2 + 2C_1y+ 2C_2}} = \pm 1
$$
If the conditions you have make the constants $C_1$ and $C_2$ vanish, then it may be integrated further.
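A numerical check (my own sketch, assuming SciPy) that the first integral $y''+ay+\frac{b}{2}y^2$ from the derivation above is indeed constant along solutions:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 0.5  # example coefficients (placeholders)

def rhs(t, u):
    y, yp, ypp = u
    return [yp, ypp, -a * yp - b * y * yp]  # y''' = -a y' - b y y'

sol = solve_ivp(rhs, (0, 20), [0.1, 0.2, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
y, yp, ypp = sol.sol(np.linspace(0, 20, 200))
C1 = ypp + a * y + (b / 2) * y**2
print(C1.std())  # essentially zero: the first integral is conserved
```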