I am having trouble understanding the proof of
**Theorem** If $F : M \to N$ is an embedding, then $F (M)$ is an embedded
submanifold of $N$
Proof. Let $p \in M$. Since $F$ is an immersion, by Theorem 1 (see below) there are charts $(U,\varphi)$ around $p$ and $(V,\psi )$ around $F(p)$ such that $F(U) \subseteq V$ and for all
$x \in \varphi(U)$,
$\psi\circ F \circ \varphi^{-1}(x) = (x, 0_{n-m})\tag{4}$
By applying translations we can assume that $\varphi(p) = 0_m$ and $\psi(F(p)) = 0_n$. Let $\varepsilon > 0$ so that
$B_{\varepsilon}(0_m) \subseteq \varphi(U)$ and $B_{\varepsilon}(0_n) \subseteq \psi(V )$
Then $B_{\varepsilon}(0_m) \times \{0_{n-m}\} \subseteq B_{\varepsilon}(0_n) \subseteq \psi(V)\tag{5}$
Because $F:M\to F(M)$ is a homeomorphism, $F(\varphi^{-1}(B_{\varepsilon}(0_m)))$ is open in $F(M)$ with the subspace topology. So there is an open subset $W \subseteq N$ such that
$F(\varphi^{−1}(B_{\varepsilon}(0_m))) =W\cap F(M) \tag{ 6}$
And because of (4),
$\psi(W\cap F(M)) = (\psi\circ F \circ \varphi^{-1})(B_{\varepsilon}(0_m)) \subseteq B_{\varepsilon}(0_m) \times \{0_{n-m}\}\tag{ 7}$
Let $\tilde {W}=W \cap \psi^{-1}(B_{\varepsilon}(0_n))$
So $(\tilde {W}, \psi|_{\tilde W}) $ is a chart around $F(p)$ with
$\psi|_{\tilde {W}}(\tilde{W} \cap F(M)) $
$= \psi|_{\tilde{W}}( W \cap \psi^{-1}(B_{\varepsilon}(0_n)) \cap F(M))$
$=\psi (W \cap F(M)) \tag{8}$
$=B_{\varepsilon}(0_m)\times \{0_{n-m}\}\tag{9}$
$=\psi|_{\tilde {W}}(\tilde {W})\cap (\Bbb R^{m}\times\{0_{n-m}\})\tag{10}$
**1) Is there a typo in (7)?** I think it should be "$=$" instead of "$\subseteq$",
because I think that is what is being used to go from (8) to (9).
2) **I can't figure out what is going on in the last chain of equalities.** The preceding expressions should be used somehow, but it's not clear how. How does $\psi^{-1}(B_{\varepsilon}(0_n))$ disappear in (8)? How do I get (9)? And how do I go from there to (10)?
--------------------------------------
The definition of embedded submanifolds that is being used in this last part is:
DEF Let $M$ be a smooth manifold of dimension $m$, and $S ⊂ M$. Then
$S$ is an embedded submanifold of dimension $k$ if for every $p \in S $ there is a chart $(U, \varphi)$ of $M$ around $p$ such that
$\varphi(U \cap S) = (\Bbb R^k \times \{0_{m-k}\}) \cap \varphi(U)$
and at the beginning of the proof a particular case of the rank theorem is used:
Theorem 1 (Rank theorem for injective differential). Suppose $M$ is a smooth manifold of dimension $m$, and that $N$ is a smooth manifold of dimension $n$. Suppose $F : M \to N$ is smooth. Let $p \in M$. If $dF_p$ is injective, then there are charts $(U, \varphi)$ of M around p and $(V,\psi )$ of N around $F(p)$ such that $F(U) \subseteq V$
and for all $x \in\varphi(U)$,
$\psi\circ F \circ \varphi^{−1}(x) = (x, 0_{n−m})$
|
I saw this problem: prove that $\sum\limits_{n=3}^ \infty \frac{1}{n \ln(n)(\ln(\ln(n)))^2}$ converges. This is an easy problem that can be proved using the Cauchy condensation test twice.
$$\sum_{n=3}^ \infty \frac{2^n}{2^n n\ln(2)(\ln(n \ln(2)))^2}=\sum_{n=3}^\infty \frac{1}{n \ln(2)(\ln(n \ln(2)))^2} $$
and
$$\sum_{n=3}^\infty \frac{2^n}{2^n \ln(2)(\ln(2^n \ln(2)))^2}=\sum_{n=3}^\infty \frac{1}{ \ln(2)\,(n\ln(2)+\ln(\ln(2)))^2}, $$ which converges by comparison with $\sum 1/n^2$.
-------------
I became curious: what is the value of this sum? I tried every method I know, but all of them led nowhere.
Since it might be impossible to find the exact sum of this series, I want to ask for a numerical approximation of it.
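For a rough numerical estimate, one can sum the first terms directly and account for the remainder with the exact integral $\int_N^\infty \frac{dx}{x\ln x(\ln\ln x)^2}=\frac{1}{\ln\ln N}$. A sketch (the function and cutoff names are mine):

```python
import math

def term(n):
    # summand 1 / (n ln(n) (ln ln n)^2), defined for n >= 3
    return 1.0 / (n * math.log(n) * math.log(math.log(n)) ** 2)

def estimate(N):
    # partial sum up to N, plus the integral tail
    # int_N^oo dx / (x ln x (ln ln x)^2) = 1 / ln(ln N)
    partial = sum(term(n) for n in range(3, N + 1))
    return partial + 1.0 / math.log(math.log(N))
```

The convergence is so slow that the tail correction matters more than the cutoff: with $N$ around $10^5$ the estimate lands between 38 and 39, and pushing $N$ further barely moves it.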
|
I think this is a bad question, because it doesn't really matter if there is a name for sets with this specific property, as the property isn't sufficiently interesting to warrant a name being ascribed to it.
If you wanted to be wordy about it, you could just say, "$\mathbb{N}\setminus A$ is less than a geometric sequence."
But this doesn't really improve on the more precise mathematical description you have given in your question, so I would just describe it the way you have done.
In conclusion, *looking for* names of sets with particular properties seems like a fruitless activity, as it is very easy to come up with infinitely many distinct properties that sets can have and ask, "do sets with this property have a name?" each and every time. Most such properties will not have a name, so just describe the property mathematically if you don't know of one. If someone then informs you of an existing name for that property, you can amend your work/knowledge accordingly.
|
I think this is a bad question, because it doesn't really matter if there is a name for sets with this specific property, as the property isn't sufficiently interesting to warrant a name being ascribed to it.
I also don't see how the sets you describe are "refined notions of natural density."
*Looking for* names of sets with particular properties seems like a fruitless activity (unless the property is sufficiently "interesting"), as it is very easy to come up with infinitely many distinct properties that sets can have and ask, "do sets with this property have a name?" each and every time. Most such properties will not have a name, so just describe the property mathematically if you don't know of one. If someone then informs you of the name, or you stumble across it in a paper, you can amend your work/knowledge accordingly.
|
Let $R$ be a positive integer, $\mathcal{X}$ be the sample space, and $x \in \mathcal{X}$ be an outcome of the sample space; $P(x)$ denotes the probability of occurrence of the outcome $x$. The problem is to upper bound the following expression:
$$
\sum_{x\in \mathcal{X}}x(P(x))^{\frac{1}{R}},
$$
in terms of $$\sum_{x\in \mathcal{X}}xP(x).$$
Is it possible to get a tighter bound for the expression?
|
I'm reading Jean-Pierre Serre's 1970 *Cours d'arithmétique*. I'm having trouble with the beginning of his Chapter 2, devoted among other things to $\mathbb Z_3$, the $3$-adic numbers, which I am discovering and which interest me because of [questions][1] I have about the ring sequence
$$(\mathbb Z/2\mathbb Z,\mathbb Z/6\mathbb Z,\mathbb Z/30\mathbb Z,...,\mathbb Z/p\#\mathbb Z,...)$$
He writes
> Let $n\geq 1, A_n:=\mathbb Z/3^n\mathbb Z$. An element of $A_n$ clearly defines an element of $A_{n-1}$
What I understand here is that, if $\textbf{a}\in A_n$ then $\textbf{a}=a+3^n\mathbb Z$. And if we do the Euclidean division of $a$ by $3^{n-1}$, $$a=3^{n-1}q+r$$ with $0\leq r< 3^{n-1}$. Then $$a+3^n\mathbb Z=3^{n-1}q+r+3^n\mathbb Z=r+3^{n-1}(q+3\mathbb Z)\subset r+3^{n-1}\mathbb Z$$
For example, $\mathbb Z/9\mathbb Z\to \mathbb Z/3\mathbb Z$ $$0\mapsto0$$ $$1\mapsto 1$$ $$2\mapsto 2$$ $$3\mapsto 0$$ $$4\mapsto 1$$ $$5\mapsto 2$$ $$...$$
Then he writes
> This results in a homomorphism $$\varphi_n:A_n\to A_{n-1}$$ which is surjective, and of kernel $p^{n-1}A_n$
No problem here. Then he writes
> [...] By definition, an element of $\mathbb Z_3$ is $x=(...,x_n,...,x_1)$, with $$x_n\in A_n\land \varphi_n(x_n)=x_{n-1} \text{ if }n\geq 2$$
__________________________
1. Is my understanding of what Serre presents as "clear" correct?
____________________
2. I wonder if we can do the same with the sequence $$(\mathbb Z/2\mathbb Z,\mathbb Z/6\mathbb Z,\mathbb Z/30\mathbb Z,...,\mathbb Z/p\#\mathbb Z,...)$$
For example, $\varphi : \mathbb Z/6\mathbb Z\to \mathbb Z/2\mathbb Z$ defined by $$0\mapsto 0$$ $$1\mapsto 1$$ $$2\mapsto 0$$ $$3\mapsto 1$$ $$4\mapsto 0$$ $$5\mapsto 1$$
And if so, what would be the "numbers" we would get then?
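To experiment with question 2, here is a small check (the names and the sample sequence are mine) that reduction modulo the smaller primorial gives exactly the map listed above, and that compatible sequences in the style of Serre's definition exist:

```python
from itertools import accumulate
from operator import mul

primes = [2, 3, 5, 7]
primorials = list(accumulate(primes, mul))  # [2, 6, 30, 210]

def phi(n, a):
    """Reduce a residue mod primorials[n] to a residue mod primorials[n-1]."""
    return a % primorials[n - 1]

# phi(1, -) is the map 0,1,2,3,4,5 -> 0,1,0,1,0,1 from Z/6Z to Z/2Z
assert [phi(1, a) for a in range(6)] == [0, 1, 0, 1, 0, 1]

# a compatible sequence: phi(n, x_n) == x_{n-1};
# here the residues of -1 in each ring
x = [1, 5, 29, 209]
assert all(phi(n, x[n]) == x[n - 1] for n in range(1, len(x)))
```

The resulting inverse limit would consist of all such compatible sequences, one residue per primorial.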
[1]: https://math.stackexchange.com/questions/4886427/questions-about-mathbb-z-30-mathbb-z
|
The surface integral $\iint_S (z^2 + y^2 + x^2) \, dS$ over the cube $S$?
|
Given 9 real numbers $x_1, x_2, ... , x_9\in [-1,1]$ such that $x_1^3+x_2^3+...+x_9^3=0$. Find the maximum value of $S=x_1+x_2+...+ x_9$.
I have tried ordering the numbers from smallest to largest and then dividing the set $\{x_1, x_2, ... , x_9 \}$ into a subset of only negative numbers and a subset of only positive numbers. In particular,
$S_1 =\{ x_1, x_2,..., x_j \}$ and $S_2 =\{ x_{j+1}, x_{j+2},..., x_9 \}$ such that all the elements in $S_1$ are negative and all the elements in $S_2$ are positive.
From there, letting $-(x_1^3+ x_2^3+... x_j^3)= x_{j+1}^3+ x_{j+2}^3+... x_9^3=P$ and evaluating some inequalities, I got:
$\max S=\sqrt[3]{(9-j)^2P}-\left[\left\lfloor P \right\rfloor+\sqrt[3]{P-\left\lfloor P \right\rfloor}\right]$
This expression was impossibly complicated, and I can't seem to find the maximum of $S$ with respect to $j$ and $P$.
Is there a better solution to this? If not, how do I find the maximum of $S$?
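For what it's worth, one classical trick for this kind of cube constraint (not part of my attempt above, so treat it as a sketch) is the pointwise inequality, valid for $x \in [-1, 1]$:

$$(x+1)(2x-1)^2 \ge 0 \quad\Longleftrightarrow\quad 3x \le 4x^3 + 1.$$

Summing over $i = 1, \dots, 9$ gives $3S \le 4\sum_i x_i^3 + 9 = 9$, so $S \le 3$. Equality requires every $x_i \in \{-1, \tfrac12\}$, and the cube constraint $\sum_i x_i^3 = 0$ then forces one entry equal to $-1$ and eight entries equal to $\tfrac12$, which indeed attains $S = 3$.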
|
I am a postgraduate student currently studying some optimization of quadratic forms. From the class lecture, I know that we can always turn any quadratic form into its corresponding canonical form using a change of variables.
For example: $f\left( {x,y} \right) = x \times y$ can be rewritten as $f\left( {X,Y} \right) = \frac{1}{4}{X^2} - \frac{1}{4}{Y^2}$ through the following change of variable $\left\{ {\begin{array}{*{20}{c}}
{X = x + y}\\
{Y = x - y}
\end{array}} \right.$.
Out of pure curiosity, my question is:
Do a "**canonical cubic form**" and a "**canonical quartic form**" for functions of the form $f\left( {x,y,z} \right) = xyz$ and $g\left( {x,y,z,t} \right) = xyzt$, respectively, even exist?
If these canonical forms exist, what would be a systematic way to find them?
Thank you for your enthusiasm!
|
Existence of canonical forms for cubic and quartic forms?
|
I was solving a problem from old exams and got stuck here. I'd appreciate the help.
We have the three variables p, q, and r. There are 8 valuations of the variables. If F is a propositional logic formula containing p, q, and r, and we construct a truth table for F, the table will thus have 8 rows.
Provide a formula F with the variables p, q, and r that is true for the valuations
$\{ p : F, q : T, r : F \}, \{ p : T, q : T, r : F \}, \{ p : T, q : F, r : F \}$
and is false for all other valuations.
I drew a truth table and got an answer which goes:
$((p⋁q⋁r) ⋁ \neg (p⋁q⋁r)) \to ((r \to p) ⋀ r)$
BUT this took a lot of time, to the point that I started to wonder whether this is even how we are supposed to solve the problem. There has got to be another way than just playing with the truth table, given that we have a short time for the exam.
My question: Is there an easier way to solve it? If not, how do I find an easier formula faster?
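One systematic shortcut is disjunctive normal form: write one conjunct per row where the formula must be true, then simplify by inspection. A small check of this approach (names are mine):

```python
from itertools import product

# target rows (p, q, r) where the formula must be True
target = {(False, True, False), (True, True, False), (True, False, False)}

def dnf(p, q, r):
    # one conjunct per target row
    return ((not p and q and not r)
            or (p and q and not r)
            or (p and not q and not r))

def simplified(p, q, r):
    # all three rows share r = False, and within them p or q holds
    return (not r) and (p or q)

# both formulas agree with the target truth table on all 8 rows
for row in product([False, True], repeat=3):
    assert dnf(*row) == simplified(*row) == (row in target)
```

So here the truth table collapses to $\lnot r \wedge (p \vee q)$ almost immediately.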
|
I'm looking for the inverse Laplace transform of
$$F(s) = \frac{1}{s + e^{-s\tau}}$$
where τ is a positive real parameter.
I am trying to use the general inverse formula of the Laplace transform to solve it. But then I need to find the singularities of $F(s)$, that is, the roots of $ s + e^{-s\tau} = 0$.
Rearranging the equation, I get $ \tau = \frac{\log(-s)}{-s}$. It seems that the number of singularities depends on the value of the parameter $\tau$. The question then is: how do I find the residues at those singularities, and how do I proceed with the calculation in the general inverse formula?
Many thanks in advance for your advice.
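One route that sidesteps the residues entirely, assuming the geometric expansion below is valid on the Bromwich contour (so treat this as a sketch): for $|e^{-s\tau}/s| < 1$,

$$\frac{1}{s+e^{-s\tau}} = \frac{1}{s}\cdot\frac{1}{1+e^{-s\tau}/s} = \sum_{k=0}^{\infty} \frac{(-1)^k e^{-k s\tau}}{s^{k+1}},$$

and inverting term by term with $\mathcal{L}^{-1}\left\{e^{-ks\tau}/s^{k+1}\right\}(t) = \frac{(t-k\tau)^k}{k!}\,u(t-k\tau)$ gives the finite sum

$$f(t) = \sum_{k=0}^{\lfloor t/\tau\rfloor} (-1)^k \frac{(t-k\tau)^k}{k!}.$$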
|
There is a standard result in measure/integration theory which I just cannot seem to obtain.
If $f \colon X \to \mathbb{C}$ is measurable ($X$ is any measurable space), there exist simple measurable functions $\phi_k \colon X \to \mathbb{C}$ such that $\phi_k \to f$ pointwise and all $|\phi_k| \le |f|$. This is fine.
_However_, it is often claimed that the $\phi_k$ can be taken such that also $|\phi_k| \le |\phi_{k+1}|$ for all $k$. (See for instance Folland's _Real Analysis_, Proposition 2.10.) This I cannot seem to obtain, at least not with the simplicity with which it is claimed to follow.
It is typically stated that this can be obtained as follows (e.g., Folland): Write $f = u + iv = u^+ - u^- + i(v^+ - v^-)$, the standard decomposition into positive and negative parts $u^\pm$ and $v^\pm$ for $u = \Re f$ and $v = \Im f$. Pick simple measurable functions $s_k^\pm$ and $t_k^\pm$ such that $0 \le s_k^\pm \uparrow u^\pm$ and $0 \le t_k^\pm \uparrow v^\pm$. Put
$$\phi_k := s_k^+ - s_k^- + i(t_k^+ - t_k^-).$$
The $\phi_k$ are simple measurable functions $X \to \mathbb{C}$, and clearly $\phi_k \to f$ pointwise. All good.
_At this point it is typically claimed that also $|\phi_k| \le |\phi_{k+1}|$ for all $k$, but how is this obtained in this exact setup? It appears standard wisdom is that $|\phi_k| = s_k^+ + s_k^- + t_k^+ + t_k^-$, from which it would follow easily indeed, but I am skeptical of this identity._
Suppose for the moment that $f$ is real (but please address the complex case), so that $v = 0$, hence $t_k^\pm = 0$. Then $\phi_k = s_k^+ - s_k^-$, a difference of two positive functions. From this follows
$$|\phi_k| = \phi_k^+ + \phi_k^- \le s_k^+ + s_k^-,$$
but it is easy to cook up positive functions $s_k^\pm$ (below) for which the above inequality is strict, i.e., $|\phi_k| < s_k^+ + s_k^-$. Hence it cannot be the case that $|\phi_k| = s_k^+ + s_k^-$ in general; it appears to me a common pitfall is to confuse oneself from the identity $\phi_k = s_k^+ - s_k^-$ to believe that $s_k^\pm$ must be the positive and negative parts of $\phi_k$ (e.g., last computation [here][1]), but this is not true unless there is a trick involved. What is this trick?
---
Example of strict inequality in a positive decomposition: Take $u := 1_{(-\infty, 1]}$ and $v := 1_{[-1, \infty)}$ on the real line. Take $f := u - v$. Then in general $f^+ \le u$ and $f^- \le v$, and in this case there holds $|f| = f^+ + f^- < u + v$.
---
I suppose I could salvage the proposition by working out at a detailed level, but the proposition is claimed to follow leisurely from the approximation of positive measurables, so surely there is a trick here I am missing?
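For reference, I suspect the intended trick is that the standard dyadic construction produces $s_k^\pm$ that vanish wherever $u^\pm$ vanish. Since $u^+$ and $u^-$ have disjoint supports, so do $s_k^+$ and $s_k^-$ (and likewise $t_k^\pm$), hence the cross terms $s_k^+ s_k^-$ and $t_k^+ t_k^-$ are identically zero and

$$|\phi_k|^2 = (s_k^+ - s_k^-)^2 + (t_k^+ - t_k^-)^2 = (s_k^+)^2 + (s_k^-)^2 + (t_k^+)^2 + (t_k^-)^2.$$

Each square on the right is nondecreasing in $k$, so $|\phi_k| \le |\phi_{k+1}|$. The counterexample with $|f| < u + v$ is consistent with this: there $u$ and $v$ do not have disjoint supports, so they cannot both arise from this construction applied to $u^\pm$.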
[1]: https://proofwiki.org/wiki/Measurable_Function_is_Pointwise_Limit_of_Simple_Functions
|
A random walk $S$ has absorbing barriers at $0$ and $N$; at each step the particle can move right, move left, or stay put with probabilities $p$, $q$, $r$ respectively ($p+q+r=1$). Let $W$ be the event that the particle is absorbed at $0$ rather than at $N$, and let $p_k=\mathbb{P}(W\mid S_0=k)$. Show that if the particle starts at $k$ where $0 < k < N$, the conditional probability that the first step is rightwards, given $W$, equals $\frac{pp_{k+1}}{p_k}$. Deduce that the mean duration $J_k$ of the walk conditional on $W$ satisfies the equation:
$$pp_{k+1}J_{k+1}-(1-r)J_k+qp_{k-1}J_{k-1}=-p_k$$
$\mathbb{E}[T|S_0=k, W]=$
$\mathbb{E}[T|S_0=k, W, X_1=1]\mathbb{P}[X_1=1|S_0=k, W] +$
$\mathbb{E}[T|S_0=k, W, X_1=0]\mathbb{P}[X_1=0|S_0=k, W] + $
$\mathbb{E}[T|S_0=k, W, X_1=-1]\mathbb{P}[X_1=-1|S_0=k, W]$
$J_k = (1+J_{k+1})\frac{pp_{k+1}}{p_k} + (1+J_{k})\frac{rp_{k}}{p_k}+(1+J_{k-1})\frac{qp_{k-1}}{p_k}$
$-p_k = J_{k+1}pp_{k+1} - J_{k}(1- r)p_{k}+J_{k-1}qp_{k-1}$
set $\rho^k=(\frac{q}{p})^k$
$p_k=\frac{\rho^k-\rho^N}{1-\rho^N}$
set $\mu_k=(\rho^k-\rho^N)J_k$
$\rho^N-\rho^k = p\mu_{k+1} - (1- r) \mu_k + q\mu_{k-1}$
solving the homogeneous part
$p\mu_{k+1} - (p+q) \mu_k + q\mu_{k-1}=0$
$p\mu^2-(p+q)\mu+ q=0$
$(\mu-1),(p\mu-q)$
so the homogeneous solution is
$\mu= 1, \frac{q}{p}$
$A + B \rho^k$
The inhomogeneous solution is
$\frac{k(\rho^k +\rho^N)}{p-q}$
I can't seem to get the result above.
Here's my attempt
Taking $C(\rho^k -\rho^N)$
$ C(p(\rho^{k+1} - \rho^N) - (p+q)(\rho^{k} - \rho^N) + q(\rho^{k-1} - \rho^N))=\rho^N-\rho^k$
focusing on the $\rho^k$ part
$ C(p\rho^{k+1} - (p+q)\rho^{k} + q\rho^{k-1})$
$ C(p\rho^{k}(\rho - 1) + q\rho^{k-1}(1 -\rho))$
$ C(p\frac{q^k}{p^k}(\rho - 1) + q\frac{q^{k-1}}{p^{k-1}}(1 -\rho))$
$ C(\frac{q^k}{p^{k-1}}(\rho - 1) + \frac{q^{k}}{p^{k-1}}(1 -\rho))$
$ C(\frac{q^k}{p^{k-1}}(q - p) + \frac{q^{k}}{p^{k-1}}(p - q))$
$=0$
How do you derive the inhomogeneous result given?
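The constant ansatz $C(\rho^k-\rho^N)$ returns $0$ exactly because both $1$ and $\rho^k$ solve the homogeneous equation; for a resonant right-hand side one multiplies by $k$. Trying $\mu_k = \alpha k + \beta k\rho^k$ (my notation, so take this as a sketch), and using $p\rho = q$ and $q/\rho = p$:

$$p\,\alpha(k+1) - (p+q)\alpha k + q\,\alpha(k-1) = \alpha(p-q),$$
$$p\,\beta(k+1)\rho^{k+1} - (p+q)\beta k\rho^k + q\,\beta(k-1)\rho^{k-1} = \beta\rho^k(q-p).$$

Matching the right-hand side $\rho^N - \rho^k$ gives $\alpha(p-q) = \rho^N$ and $\beta(q-p) = -1$, i.e. $\alpha = \frac{\rho^N}{p-q}$ and $\beta = \frac{1}{p-q}$, hence the particular solution $\mu_k = \frac{k(\rho^k+\rho^N)}{p-q}$.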
Focusing on the boundary conditions $\mu_0=0$
$A + B \rho^0 +\frac{0(\rho^k+\rho^N)}{p-q}=0$
$A + B=0 $
$A=-B$
and $\mu_N=N$
$A + B \rho^N +\frac{N(\rho^N+\rho^N)}{p-q}=0$
$A + B \rho^N +\frac{2N\rho^N}{p-q}=0$
$A = -B \rho^N - \frac{2N\rho^N}{p-q}$
$-B = -B \rho^N - \frac{2N\rho^N}{p-q}$
$B(1-\rho^N) = \frac{2N\rho^N}{p-q}$
$B = \frac{2N\rho^N}{(p-q)(1-\rho^N)}$
hence
$A + B \rho^k +\frac{k(\rho^k+\rho^N)}{p-q}$
$=\frac{2N\rho^N(1-\rho^k)}{(p-q)(1-\rho^N)} + \frac{k(\rho^k+\rho^N)}{p-q}$
However the answer given is
$=\frac{1}{p-q}\cdot\frac{1}{\rho^k-\rho^N}\left[\frac{2N\rho^N(1-\rho^k)}{1-\rho^N} + k(\rho^k+\rho^N)\right]$
Where does the $\frac{1}{\rho^k-\rho^N}$ term come from?
|
Recall that a ring $R$ is called right (left) Noetherian if every right (left) ideal $I$ of $R$ is a finitely generated $R$-module, i.e., there exist $x_1,\ldots,x_m \in I$ such that $I=x_1R+\ldots+x_mR$ (or $I=Rx_1+\ldots+Rx_m$).
Suppose that $R$ is finitely generated as an additive group, i.e., $R=\mathbb{Z}x_1+\ldots+\mathbb{Z}x_m$ for some $x_1,\ldots,x_m \in R$. Is it true that every right (or left) ideal of $R$ is finitely generated as an $R$-module?
|
Suppose that $A\subseteq\mathbb{N}$. Then one can look at the function $f(n)=|\{0,...,n\}\setminus A|$ (with $|S|$ denoting the cardinality of $S$). I am interested in the case when $f(n)\leq Cn^\alpha$ with $C>0$ and $\alpha<1$. In this case $A$ obviously has natural density $1$. My question is: is there already a name for this concept, and if not, how "should" one call sets with this property?
I use sets with this property a lot in a work of mine, so I do need some useful name for it, since spelling it out each time seems bad practice. So the point of my question is to avoid the situation where I choose a name for it and then someone tells me that this concept has already been used multiple times and there already exists a standard name for it. As it is hard to search for an answer to my question I am asking it here in case someone has been working with this notion before.
The reason why these sets are interesting for me is that they have the following property: For $A\subseteq \mathbb{N}$, $a\in \mathbb{N}$, $K\in\mathbb{N}$ and $\epsilon>0$ consider the set $F(A,\epsilon,a,K)=\{n\in\mathbb{N}\mid \forall\, 0\leq k\leq \epsilon\log n,\ \forall\, i\leq K a^k:\ na^k+i\in A\}$. Then if $A$ has the above property and $\epsilon>0$ is small enough, $F(A,\epsilon,a,K)$ has this property as well.
|
In machine learning there is one fundamental step, the feedforward step of neural networks. Given a matrix of weights $W$, an input vector $x$, a bias vector $b$, and an activation function $g$ such as tanh, the step is defined as
g(Wx+b)
Nothing very complex; however, things get more complex if the input is not a vector. If the input is a matrix, then the weight array $W$ is no longer a table but a cube. What is the general algorithm that allows me to perform the $Wx$ part of the step regardless of the number of "dimensions"?
Is it useful to consider $W$ as a vector of matrices if $x$ is $N \times M$, or to consider $W$ as a matrix of matrices if $x$ is $N \times M \times P$?
Thank you; I have been searching for an answer for over a week.
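One common convention is to let $W$ carry the output axes followed by a copy of the input axes, and contract over all input axes at once; with NumPy this is `tensordot`. A sketch (shapes and names are mine):

```python
import numpy as np

def feedforward(W, x, b):
    """tanh(Wx + b) where W contracts over every axis of x.

    W has shape out_shape + x.shape, so the same code covers a
    vector input (W is a matrix), a matrix input (W is a rank-3
    tensor), and so on for higher-rank inputs.
    """
    z = np.tensordot(W, x, axes=x.ndim) + b
    return np.tanh(z)

rng = np.random.default_rng(0)

# vector input: ordinary dense layer
out = feedforward(rng.normal(size=(3, 4)), rng.normal(size=4), np.zeros(3))
assert out.shape == (3,)

# matrix input: W gains one axis per extra input axis
out = feedforward(rng.normal(size=(3, 4, 5)), rng.normal(size=(4, 5)), np.zeros(3))
assert out.shape == (3,)
```

Viewing $W$ as a "vector of matrices" is exactly this picture: each output component has its own weight array shaped like the input.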
|
I have $F(G(x))=G(x)$, where $x$ is any real number, and the function $F$ can be computed if the value $G(x)$ is given. I have also proved that the function $G$ is monotone with respect to $x$ and that $G(x)\in[0,1]$.
I want to obtain a mapping $G$ (or an algorithm G).
Is this a fixed-point problem? How to solve it, either mathematically or by algorithm?
----------------------------------------------------------
Original problem explained:
The above problem comes from an economic and market scenario.
We have a function $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, where the random variable $v_i$ is any real number (e.g., person $i$'s money). Across persons $i$, the $v_i$ are i.i.d. $F(v)$ and $f(v)$ are the given cumulative distribution function (CDF) and probability density function (PDF) of $v_i$, respectively. However, we don't know the form of $G(v_i)$; we only know that $G(v_i)\ge 1$.
We want to select the person $i^*$ who has the largest $\psi(v_i)$ compared with any other person. More importantly, we also want the probability that this person $i^*$ has the largest $\psi(v_i)$ to equal $\frac{1}{G(v_i)}$.
So briefly, we want to find a function $G(v_i)$ such that, when we select the largest $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, the maximizer (i.e., person $i^*$) is selected with probability $\frac{1}{G(v_i)}$.
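If for each fixed $x$ the value $G(x)$ is a fixed point of $F$ in $[0,1]$, one pointwise option is damped fixed-point iteration. A sketch, under the assumption that the damped map converges (the toy stand-in for $F$ below is hypothetical):

```python
import math

def solve_fixed_point(F, tol=1e-10, max_iter=100_000):
    """Find y in [0, 1] with F(y) == y by damped iteration y <- (y + F(y)) / 2."""
    y = 0.5
    for _ in range(max_iter):
        # clamp F's output into [0, 1] before averaging
        y_next = 0.5 * y + 0.5 * min(1.0, max(0.0, F(y)))
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("fixed-point iteration did not converge")

# toy stand-in for F: its fixed point in [0, 1] solves cos(y) = y
y = solve_fixed_point(math.cos)
assert abs(math.cos(y) - y) < 1e-6
```

One would call this once per grid point $x$ to tabulate $G$, exploiting monotonicity to warm-start neighboring points. Whether the iteration converges for the actual $F$ depends on its contractivity, which the problem statement alone does not guarantee.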
|
Let $a,b,c\in\mathbb{R}$ be three parameters satisfying
$$abc\ne 0.$$
Consider the following equations of $(u,v,w)\in\mathbb{R}^3$:
$$
(-c) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2vw,
$$
$$
(-b) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = T^2 +w^2 +2vT,
$$
$$
(-a)\bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2uT,
$$
where $$T= -au-bv-cw.$$
Surprisingly, the computer (WolframAlpha) tells me that for any $(a,b,c)$ as above, the only solution of these equations is
$$
(u,v,w)= (0,0,0).
$$
[![enter image description here][1]][1]
How can I prove this claim? I have tried many times but failed...
I notice that these equations are equivalent to
$$
3(v+T)^2-2vT-(u^2+v^2) = \frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b} = \frac{2uT}{-a},
$$
which gives (if $u\ne 0$)
$$
T = \frac{a}{c}\frac{vw}{u}.
$$
Can I use this to find something different?
Another observation is that, if $(u_0,v_0,w_0)\ne (0,0,0)$ is a solution, then for any $t\ne 0$, $(tu_0,tv_0,tw_0)$ is also a solution.
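The scaling observation can be checked numerically: each residual below is homogeneous of degree $2$ in $(u,v,w)$, since $T$ is linear in them (function names are mine):

```python
import random

def residuals(a, b, c, u, v, w):
    # left side minus right side of each of the three equations
    T = -a*u - b*v - c*w
    Q = 3*(v + T)**2 - 2*v*T - (u**2 + v**2)
    return ((-c)*Q - 2*v*w,
            (-b)*Q - (T**2 + w**2 + 2*v*T),
            (-a)*Q - 2*u*T)

# residuals scale by t^2 when (u, v, w) scales by t
random.seed(0)
a, b, c, u, v, w, t = (random.uniform(-2, 2) for _ in range(7))
r1 = residuals(a, b, c, u, v, w)
r2 = residuals(a, b, c, t*u, t*v, t*w)
assert all(abs(y - t*t*x) < 1e-7 * (1 + abs(x)) for x, y in zip(r1, r2))
```

So the solution set is a cone through the origin, and proving uniqueness amounts to showing the cone is trivial.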
[1]: https://i.stack.imgur.com/y5Tqw.jpg
|
The relation between the two functions $\beta$ and $\alpha$ is $\beta(\alpha(x))=\alpha(x)$. What kind of problem is this?
|
Is $\sin\left(\frac{1}{x}\right)$ periodic? If so, with what period?
|
I have $\beta(\alpha(x))=\alpha(x)$, where $x$ is any real number, and the function $\beta$ can be computed if the value $\alpha(x)$ is given. I have also proved that the function $\alpha$ is monotone with respect to $x$ and that $\alpha(x)\in[0,1]$.
I want to obtain a mapping $\alpha$ (or an algorithm).
Is this a fixed-point problem? How to solve it, either mathematically or by algorithm?
----------------------------------------------------------
Original problem explained:
The above problem comes from an economic and market scenario.
We have a function $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, where the random variable $v_i$ is any real number (e.g., person $i$'s money). Across persons $i$, the $v_i$ are i.i.d. $F(v)$ and $f(v)$ are the given cumulative distribution function (CDF) and probability density function (PDF) of $v_i$, respectively. However, we don't know the form of $G(v_i)$; we only know that $G(v_i)\ge 1$.
We want to select the person $i^*$ who has the largest $\psi(v_i)$ compared with any other person. More importantly, we also want the probability that this person $i^*$ has the largest $\psi(v_i)$ to equal $\frac{1}{G(v_i)}$.
So briefly, we want to find a function $G(v_i)$ such that, when we select the largest $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, the maximizer (i.e., person $i^*$) is selected with probability $\frac{1}{G(v_i)}$. This seems to be a fixed-point problem, but I'm not familiar with it.
I appreciate your help!
|
This is an exercise in the book by *B. Daya Reddy*.
For $f \in L^2(0, 1)$, let $u_f$ be the solution of the ODE: $u'' + u' - 2u = f$, $u(0) = u(1) = 0$. Define the functional $\ell$ by
$$
\langle \ell, f \rangle = \int_0^1 u_f(x) dx \ \forall f \in L^2(0, 1)
$$
Show that $\ell$ is a bounded linear functional.
I have shown that $\ell$ is linear, but I struggle to bound it. From the ODE
$$
u'' + u' - 2u = f, u(0) = u(1) = 0
$$
integrating both sides from $0$ to $1$, we have
$$
\langle \ell, f \rangle = \int_0^1 u_f(x)dx = \dfrac{u_f'(1) - u_f'(0)}{2} - \dfrac{1}{2}\int_0^1 f(x)dx
$$
Therefore,
$$
\vert \langle \ell, f \rangle \vert \le \dfrac{\vert u_f'(1) - u_f'(0) \vert}{2} + \dfrac{1}{2}\lVert f \rVert_2
$$
Am I on the right track? How do I bound the term $\dfrac{\vert u_f'(1) - u_f'(0) \vert}{2}$ in terms of the $L^2$ norm of $f$? Any hints are appreciated. Thanks
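A route that may avoid the boundary derivatives entirely is a standard energy estimate (a sketch): multiply the ODE by $u_f$ and integrate over $(0,1)$; the boundary terms vanish since $u_f(0)=u_f(1)=0$, so

$$\int_0^1 u_f'' u_f + \int_0^1 u_f' u_f - 2\int_0^1 u_f^2 = -\int_0^1 (u_f')^2 + 0 - 2\|u_f\|_2^2 = \int_0^1 f u_f.$$

Hence $\|u_f'\|_2^2 + 2\|u_f\|_2^2 = -\int_0^1 f u_f \le \|f\|_2\|u_f\|_2$ by Cauchy–Schwarz, giving $\|u_f\|_2 \le \tfrac12\|f\|_2$, and then $|\langle \ell, f\rangle| = \left|\int_0^1 u_f\right| \le \|u_f\|_2 \le \tfrac12\|f\|_2$.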
|
I have $\beta(\alpha(x))=\alpha(x)$, where $x$ is any real number, and the function $\beta$ can be computed if the value $\alpha(x)$ is given. I have also proved that the function $\alpha$ is monotone with respect to $x$ and that $\alpha(x)\in[0,1]$.
I want to obtain a mapping $\alpha$ (or an algorithm).
Is this a fixed-point problem? How to solve it, either mathematically or by algorithm?
----------------------------------------------------------
Original problem explained:
The above problem comes from an economic and market scenario.
We have a function $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, where the random variable $v_i$ is any real number (e.g., person $i$'s money). Across persons $i$, the $v_i$ are i.i.d. $F(v)$ and $f(v)$ are the given cumulative distribution function (CDF) and probability density function (PDF) of $v_i$, respectively. However, we don't know the form of $G(v_i)$; we only know that $G(v_i)\ge 1$.
We want to select the person $i^*$ who has the largest $\psi(v_i)$ compared with any other person. More importantly, we also want the probability that this person $i^*$ has the largest $\psi(v_i)$ to equal $\frac{1}{G(v_i)}$.
So briefly, we want to find a function $G(v_i)$ such that, when we select the largest $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, the maximizer (i.e., person $i^*$) is selected with probability $\frac{1}{G(v_i)}$. This seems to be a fixed-point problem, which is why I summarize it in the following form
$$
\mathbb{P}\big[{\psi}(G(v_i))\big]= \frac{1}{G(v_i)}.
$$
But I'm not sure whether this is correct.
I appreciate your help!
|
Suppose $A = \begin{bmatrix} x & 1\\ y & 0\end{bmatrix}, B = \begin{bmatrix} z & 1\\ w & 0\end{bmatrix}$, for $x,y,z,w \in \Bbb{R}$.
I have observed by considering many examples of $x,y,z,w$ that:
If all the eigenvalues of $A^2B$ and $AB^2$ are less than one in absolute value, then it is not possible that both $\det(AB+A+I)<0$ and $\det(BA+B+I)<0$.
OR alternatively,
If all the eigenvalues of $A^2B$ and $AB^2$ are less than one in absolute value, then $\det(AB+A+I)\ge 0$ or $\det(BA+B+I)\ge 0$.
I wonder how to actually prove this.
A computational proof using computer package was shown in
https://mathoverflow.net/questions/435267/proof-of-a-matrix-implication/435689#435689
But I am looking for a formal, analytical proof of this statement that can be done with pen and paper.
**EDIT**
**The case $y=x$ or $z=w$ is covered by Andreas in an answer below. It remains to show whether the conjecture still holds when $y\neq x$ and $w \neq z$.**
|
Is an infinite product of Hausdorff spaces Hausdorff in the product topology?
|
I have $\beta(\alpha(x))=\alpha(x)$, where $x$ is any real number, and the function $\beta$ can be computed if the value $\alpha(x)$ is given. I have also proved that the function $\alpha$ is monotone with respect to $x$ and that $\alpha(x)\in[0,1]$.
I want to obtain a mapping $\alpha$ (or an algorithm).
Is this a fixed-point problem? How to solve it, either mathematically or by algorithm?
----------------------------------------------------------
Original problem explained:
The above problem comes from an economic and market scenario.
We have a function $$\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)},$$ where the random variable $v_i$ is any real number (e.g., person $i$'s money). Across persons $i$, the $v_i$ are i.i.d. $F(v)$ and $f(v)$ are the given cumulative distribution function (CDF) and probability density function (PDF) of $v_i$, respectively. However, we don't know the form of $G(v_i)$; we only know that $G(v_i)\ge 1$.
We want to select the person $i^*$ who has the largest $\psi(v_i)$ compared with any other person. More importantly, we also want the probability that this person $i^*$ has the largest $\psi(v_i)$ to equal $\frac{1}{G(v_i)}$.
So briefly, **we want to find a function $G(v_i)$** such that when we select the largest $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, the maximizer (i.e., person $i^*$) is selected with probability $\frac{1}{G(v_i)}$. This seems to be a fixed-point problem, which is why I summarize it in the following form
$$
\mathbb{P}\big[{\psi}(G(v_i))\big]= \frac{1}{G(v_i)}.
$$
But I'm not sure whether this is correct, or how to continue with the computation of $G(v_i)$.
I appreciate your help!
|
> Find all $x\in \mathbb R$ such that $16\sin^{3}x -14\cos^{3}x = \sqrt[3]{\sin x\cos^{8}x + 7\cos^{9}x}$
It's a tough question I've found. I tried the following approach:
$16\tan^{3}x -14 = \sqrt[3]{\tan x + 7 }$
By inspection, $\tan x=1$ is one of the answers.
But according to [WA][1], $\tan x$ is not equal to $1$!
What happened to my solution, and how should I deal with this kind of question?
[1]: https://www.wolframalpha.com/input?i=%2416tan%5E%7B3%7Dx+-14++%3D+%5Csqrt%5B3%5D%7Btanx++%2B+7+%7D%24
|
I have $\beta(\alpha(x))=\alpha(x)$, where $x$ is any real number, and the function $\beta$ can be computed if the value $\alpha(x)$ is given. I have also proved that the function $\alpha$ is monotone with respect to $x$ and that $\alpha(x)\in[0,1]$.
I want to obtain a mapping $\alpha$ (or an algorithm).
Is this a fixed-point problem? How to solve it, either mathematically or by algorithm?
----------------------------------------------------------
Original problem explained:
The above problem comes from an economic and market scenario.
We have a function $$\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)},$$ where the random variable $v_i$ is any real number (e.g., person $i$'s money). Across persons $i$, the $v_i$ are i.i.d. $F(v)$ and $f(v)$ are the given cumulative distribution function (CDF) and probability density function (PDF) of $v_i$, respectively. However, we don't know the form of $G(v_i)$; we only know that $G(v_i)\ge 1$.
We want to select the person $i^*$ who has the largest $\psi(v_i)$ compared with any other person. More importantly, we also want the probability that this person $i^*$ has the largest $\psi(v_i)$ to equal $\frac{1}{G(v_i)}$.
Briefly, **we want to find a function $G(v_i)$** such that when we select the largest $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, the maximizer (i.e., person $i^*$) is selected with probability $\frac{1}{G(v_i)}$. This seems to be a fixed-point problem, which is why I summarize it in the following form
$$
\mathbb{P}\big[{\psi}(G(v_i))\big]= \frac{1}{G(v_i)}.
$$
But I'm not sure whether this is correct, or how to continue with the computation of $G(v_i)$.
I appreciate your help!
|
I have $\beta(\alpha(x))=\alpha(x)$, where $x$ is any real number, and the function $\beta$ can be computed if the value $\alpha(x)$ is given. I have also proved that the function $\alpha$ is monotone with respect to $x$ and that $\alpha(x)\in[0,1]$.
I want to obtain a mapping $\alpha$ (or an algorithm).
Is this a fixed-point problem? How to solve it, either mathematically or by algorithm?
----------------------------------------------------------
Original problem explained:
The above problem comes from an economic and market scenario.
We have a function $$\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)},$$ where the random variable $v_i$ is any real number (e.g., person $i$'s money). Across persons $i$, the $v_i$ are i.i.d. $F(v)$ and $f(v)$ are the given cumulative distribution function (CDF) and probability density function (PDF) of $v_i$, respectively. However, we don't know the form of $G(v_i)$; we only know that $G(v_i)\ge 1$.
We want to select the person $i^*$ who has the largest $\psi(v_i)$ compared with any other person. More importantly, we also want the probability that this person $i^*$ has the largest $\psi(v_i)$ to equal $\frac{1}{G(v_i)}$.
Briefly, **we want to find a function $G(v_i)$** such that when we select the largest $\psi(v_i)=v_i-\frac{G(v_i)-F(v_i)}{f(v_i)}$, the maximizer (i.e., person $i^*$) is selected with probability $\frac{1}{G(v_i)}$. This seems to be a fixed-point problem, so I summarize it in the following form
$$
\mathbb{P}\big[{\psi}(G(v_i))\big]= \frac{1}{G(v_i)}.
$$
But I'm not sure whether this is correct, or how to continue with the computation of $G(v_i)$.
I appreciate your help!
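Since $\beta$ can be evaluated once a trial value of $\alpha(x)$ is supplied, one pointwise approach is to solve $y=\beta(y)$ on $[0,1]$ separately for each $x$, e.g. by bisection. This is only a sketch under the assumption that $y\mapsto\beta(y)-y$ changes sign on $[0,1]$ (which holds, for instance, when $\beta$ is continuous and maps $[0,1]$ into itself); the particular $\beta$ below is a hypothetical stand-in:

```python
import math

def solve_fixed_point(beta, tol=1e-10):
    """Find y in [0, 1] with beta(y) = y by bisection on g(y) = beta(y) - y."""
    g = lambda y: beta(y) - y
    lo, hi = 0.0, 1.0
    if g(lo) == 0.0:
        return lo
    if g(hi) == 0.0:
        return hi
    assert g(lo) * g(hi) < 0, "bisection needs a sign change on [0, 1]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical beta for illustration; in the application it would come from
# evaluating beta at the trial value of alpha(x) for the fixed x at hand
y = solve_fixed_point(lambda t: math.cos(t) / 2)
print(y)  # the unique solution of y = cos(y)/2, about 0.450
```

Monotonicity of $\alpha$ in $x$ could then be exploited to warm-start the bisection bracket when sweeping over $x$.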
|
> Find all $x\in \mathbb R$ such that $16\sin^{3}x -14\cos^{3}x = \sqrt[3]{\sin x\cos^{8}x + 7\cos^{9}x}$
It's a tough question I've found. I tried the following approach:
$16\tan^{3}x -14 = \sqrt[3]{\tan x + 7 }$
By inspection, $\tan x=1$ is one of the answers.
But according to [WA][1], $\tan x$ is not equal to $1$!
What happened to my solution, and how should I deal with this kind of question?
[1]: https://www.wolframalpha.com/input?i=%2416tan%5E%7B3%7Dx+-14++%3D+%5Csqrt%5B3%5D%7Btanx++%2B+7+%7D%24
|
Suppose $A = \begin{bmatrix} x & 1\\ y & 0\end{bmatrix}, B = \begin{bmatrix} z & 1\\ w & 0\end{bmatrix}$, for $x,y,z,w \in \Bbb{R}$.
I have observed by considering many examples of $x,y,z,w$ that:
If all the eigenvalues of $A^2B$ and $AB^2$ are less than one in absolute value, then it is not possible that both $\det(AB+A+I)<0$ and $\det(BA+B+I)<0$.
OR alternatively,
If all the eigenvalues of $A^2B$ and $AB^2$ are less than one in absolute value, then $\det(AB+A+I)\ge 0$ or $\det(BA+B+I)\ge 0$.
I wonder how to actually prove this.
A computational proof using computer package was shown in
https://mathoverflow.net/questions/435267/proof-of-a-matrix-implication/435689#435689
But I am looking for a formal, analytical proof of this statement that can be done with pen and paper.
**EDIT**
**The case $y=x$ or $z=w$ is covered by Andreas in an answer below. It remains to show whether the conjecture still holds when $y\neq x$ and $w \neq z$.**
|
> Find all $x \in \mathbb R$ such that $16\sin^{3}x -14\cos^{3}x = \sqrt[3]{\sin x\cos^{8}x + 7\cos^{9}x}$
It's a tough question I've found. I tried the following approach:
$16\tan^{3}x -14 = \sqrt[3]{\tan x + 7 }$
By inspection, $\tan x=1$ is one of the answers.
But according to [WA][1], $\tan x$ is not equal to $1$!
(Sorry, I saw later that $x = \pi /4$ does work, but I don't know how to find all the roots.)
Can roots of unity solve this?
[1]: https://www.wolframalpha.com/input?i=%2416tan%5E%7B3%7Dx+-14++%3D+%5Csqrt%5B3%5D%7Btanx++%2B+7+%7D%24
|
I'm trying to find an explicit upper bound (explicit meaning without big-O or little-o notation, and with explicitly computed constants) for the class number of a quadratic field.
So far, I've found that it was Littlewood who first addressed the question of how large the class number $h$ of an imaginary quadratic field $\mathbb{Q}(\sqrt{d})$ can be as a function of $|d|$ as $d \rightarrow -\infty$ through fundamental discriminants. In 1927 he showed, assuming the generalized Riemann hypothesis (GRH), that for all fundamental $d<0$
$$
h \leq 2(c+\mathrm{o}(1))|d|^{\frac{1}{2}} \log \log |d|,
$$
where $c=e^\gamma / \pi$ and $\gamma$ is Euler's constant.
Is there any way to get rid of the $o(1)$ in this expression? Or is there a more explicit bound available somewhere?
|
I am trying to solve the following exercise: *Given $\phi: B(\mathcal{H}) \to \mathbb{C}$ linear functional, prove that $\phi$ is continuous with respect to weak operator topology if and only if $\phi$ is continuous with respect to strong operator topology.*
However, I don't know how to approach it, and it is not clear to me how the weak and strong operator topologies enter for a linear functional; I have only studied how these topologies are defined for operators $x \in B(\mathcal{H})$. Can anyone help me understand the exercise and solve it?
Notation: $B(\mathcal{H})$ is the set of bounded, linear operators acting on Hilbert space $\mathcal{H}$. $\mathbb{C}$ is the set of complex numbers.
|
I was looking at [papers about the SYK model](https://arxiv.org/abs/1711.08482) page 33 (equation 112), in which they write
$$\int\mathscr{D}\Sigma\,\mathscr{D}G~\exp\left\{-\frac{N}{2}\int\limits_{\left[0,\beta\right]^2} d\tau~d\tau^{\prime}~\Sigma(\tau,\tau^{\prime})\left(G(\tau,\tau^{\prime})~-~\sum_{i=1}^N\psi_i(\tau)\psi_i(\tau^{\prime})\right)\right\}~=~1.\tag{112}$$
This would follow from $$\int d\sigma\,dg\ e^{-\sigma g}=1.\tag{*}$$
Wolfram Alpha confirms this as a Cauchy principal value.
How would this be proved?
|
I want to define a mathematical model (formula) for the information missed by sampling. In my problem, I have real events whose value is binary (0 or 1) and may change every second. For a given sampling period $p$, I periodically check the value at time $t$ and assume the value stays the same until the next observation time $t+p$. Obviously, some information must be missed, in proportion to the sampling period. Is there a mathematical formula (model) to express the missing-information rate?
[![Top part is the real measurement, and the bottom part shows the sampled information][1]][1]
[1]: https://i.stack.imgur.com/0OFE6.png
In the above figure, the top part shows the real events, and the bottom part shows the events reconstructed from the sampling. The sampling period is two, and the reconstruction misses the real values between times 1 and 2, 3 and 4, and 5 and 6.
Can someone please give me advice on how to model this?
Thanks in advance.
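There may not be one canonical formula, but under a concrete stochastic assumption the rate can be written down. For illustration, if the signal is an i.i.d. fair coin each second (my assumption, not stated in the question), the sample-and-hold reconstruction is wrong with probability $\tfrac12$ at each of the $p-1$ unsampled offsets within a period, giving a missing-information rate of $\tfrac{p-1}{2p}$. A sketch that measures this empirically:

```python
import random

def missing_rate(signal, p):
    """Fraction of seconds where sample-and-hold (period p) differs
    from the true binary signal (one value per second)."""
    held = None
    wrong = 0
    for t, v in enumerate(signal):
        if t % p == 0:
            held = v          # take a sample and hold it
        wrong += (held != v)
    return wrong / len(signal)

rng = random.Random(1)
signal = [rng.randint(0, 1) for _ in range(10000)]
for p in [1, 2, 5, 10]:
    print(p, missing_rate(signal, p))  # close to (p - 1) / (2 * p)
```

For correlated signals (e.g. a Markov chain that flips with probability $q$ per second), the same harness applies, and the rate decreases as $q$ does.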
|
What are the contraharmonic and inverse contraharmonic means? Are there any inequalities that relate them to each other like the AM-GM inequality, for example?
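For two positive numbers the contraharmonic mean is commonly defined as $C(a,b)=\frac{a^2+b^2}{a+b}$; I am not aware of a single standard meaning of "inverse contraharmonic mean", so the sketch below only checks the chain $H \le G \le A \le C$ (harmonic, geometric, arithmetic, contraharmonic) and the identity $C + H = 2A$:

```python
def harmonic(a, b):
    return 2 * a * b / (a + b)

def geometric(a, b):
    return (a * b) ** 0.5

def arithmetic(a, b):
    return (a + b) / 2

def contraharmonic(a, b):
    return (a * a + b * b) / (a + b)

for a, b in [(1, 1), (2, 8), (3, 5), (0.1, 10)]:
    H, G, A, C = harmonic(a, b), geometric(a, b), arithmetic(a, b), contraharmonic(a, b)
    assert H <= G <= A <= C            # equality throughout iff a == b
    assert abs(C + H - 2 * A) < 1e-12  # C sits as far above A as H sits below
```

The identity $C+H=2A$ shows that $C \ge A$ is equivalent to $H \le A$, so the contraharmonic mean slots in above the arithmetic mean in the classical chain of means.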
|
What are the Contraharmonic and Inverse Contraharmonic Means?
|
In statistics, hypothesis testing is used to determine whether the null hypothesis can be rejected or not. For example, we use hypothesis testing to determine the significance of an estimator in linear regression:
$H_0: \beta_1 = 0 \ \ \ H_1 : \beta_1 \neq 0$
My question: is the approach of hypothesis testing in statistics related to the intuitionist idea that proof by contradiction is not permitted? Our main objective here is to show that $\beta_1$ is different from $0$, which would mean that the explanatory variable affects the dependent variable. In the above case we assumed that the parameter $\beta_1$ equals $0$. Thus, if we do not find enough evidence for $\beta_1 = 0$, we reject the null hypothesis and conclude that $\beta_1 \neq 0$.
|
Is hypothesis testing an intuitionist approach?
|
Consider the following logical statement.
$$ (A \land B) \implies (C \land D)$$
**Question:** Under what conditions does the following statement follow from the above?
$$ (A\implies C) \land (B \implies D)$$
---
**Context:** I realised the heart of [my previous question](https://math.stackexchange.com/questions/4889835) is this simpler to state, more general question.
---
**My Thoughts**
As a beginner, my initial error was to separate $A$ from $B$ in the antecedent of the first statement and proceed from there. This is an error because the antecedent is true only if $A$ and $B$ are **both** true.
I then tried reading about rules for distributing conjunctions over implications, but that seemed to be a very mechanistic approach, lacking intuition.
My third attempt was to note that the second statement implies the first fairly easily, but this doesn't seem fruitful for revealing the conditions under which the first implies the second.
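With only four propositional variables, the concrete way to see what extra conditions are needed is to enumerate all 16 assignments and list those where the first statement holds but the second fails; a sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

counterexamples = []
for A, B, C, D in product([False, True], repeat=4):
    first = implies(A and B, C and D)
    second = implies(A, C) and implies(B, D)
    if first and not second:
        counterexamples.append((A, B, C, D))

# the inference fails exactly on these assignments
for row in counterexamples:
    print(row)
```

All four counterexamples have exactly one of $A, B$ true, so the antecedent $A\land B$ is vacuously satisfied while one of the separate implications fails: the second statement follows only under extra hypotheses that rule those rows out.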
|
I am a high school student and there is something I want to ask about the application of digital sums.
Let's say there is a fraction **520/7**. Let **520/7 = a**, so **520 = a × 7**. If we now take digital sums, this becomes **7 = a × 7**, so the digital sum of **a** should be 1 and nothing else, which means the remainder of this fraction upon division by 9 is 1. But when we compute the quotient we get the repeating decimal 74.285714285714..., which does not have any SINGLE digital sum, since it keeps changing as we add more and more digits. Yet our argument says it should be 1. So what's going on? Also, we say the digital sum of any number is the same as the remainder we get when we divide that number by 9, but is this applicable to fractions as well? For example, take the number 18.225: if we divide it by 9, the remainder is 0.225 and not 9, so this statement seems to be applicable only to integers. Am I right?
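For nonnegative integers (from 1 up), the rule "the digital sum equals the remainder on division by 9" (casting out nines, with a digital sum of 9 corresponding to remainder 0) can be checked directly, which supports the conclusion that the statement is about integers only; a sketch:

```python
def digital_root(n):
    """Repeatedly sum the decimal digits of a positive integer."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

for n in [18, 74, 520, 12345]:
    r = n % 9
    assert digital_root(n) == (9 if r == 0 else r), n
```

For 520 this gives digital root 7, i.e. $520 \equiv 7 \pmod 9$; the quotient $520/7$ is not an integer, so the rule simply does not apply to it.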
|
When is the digital sum applicable, and when isn't it?
|
In statistics, hypothesis testing is used to determine whether the null hypothesis can be rejected or not. For example, we use hypothesis testing to determine the significance of an estimator in linear regression:
$H_0: \beta_1 = 0 \ \ \ H_1 : \beta_1 \neq 0$
My question: is the approach of hypothesis testing in statistics related to intuitionism, where proof by contradiction is not permitted? Or to classical reasoning?
Our main objective here is to show that $\beta_1$ is different from $0$, which would mean that the explanatory variable affects the dependent variable. In the above case we assumed that the parameter $\beta_1$ equals $0$. Thus, if we do not find enough evidence for $\beta_1 = 0$, we reject the null hypothesis and conclude that $\beta_1 \neq 0$.
|
I am studying my combinatorics syllabus and came across two claims that are said to be generalisations of the Matrix Tree Theorem:
$G = (V,E)$ is a complete graph without loops. $U$ is a subset of $V$. The matrix $Q^U(G)$ is the matrix obtained from the Laplacian matrix $L(G)$ by removing the rows and columns that correspond to the vertices in $U$.
1) Let $e = (v,w)$ be an element of $E$. Then the number of spanning trees of $G$ that contain $e$ is equal to $\det Q^{\{v,w\}}(G)$.
2) Let $T = (V_T, E_T)$, with $V_T$ a subset of (but not equal to) $V$, be a subtree of $G$. The number of spanning trees of $G$ that contain $T$ is equal to $\det Q^{V_T}(G)$.
The proof is presented as an exercise for the reader, but I cannot figure it out. By drawing random graphs I can see that the claims hold, but I do not yet see why they are true.
Could somebody help me out? Thanks!
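A small sanity check of claim 1 (not a proof): for the complete graph $K_4$ with $U=\{v,w\}$ the endpoints of one edge, the determinant of the reduced Laplacian matches a brute-force count of spanning trees containing that edge. A sketch:

```python
import numpy as np
from itertools import combinations

n = 4
L = n * np.eye(n) - np.ones((n, n))   # Laplacian of K_n: diag n-1, off-diag -1

# claim 1 with e = (0, 1): delete rows/columns 0 and 1
det_count = round(np.linalg.det(L[2:, 2:]))

def is_spanning_tree(tree_edges):
    """n-1 edges forming an acyclic (hence spanning, connected) subgraph."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in tree_edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

edges = list(combinations(range(n), 2))
brute = sum(1 for t in combinations(edges, n - 1)
            if (0, 1) in t and is_spanning_tree(t))
print(det_count, brute)   # both equal 8
```

Claim 1 is claim 2 with $T$ a single edge; I believe the usual route to a proof is to contract $T$ and relate the result to the ordinary Matrix Tree Theorem on the contracted graph.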
|
> Find all $x$ in $\mathbb R$ such that $16\sin^3(x) -14\cos^3(x) = \sqrt[3]{\sin x\cos^8(x) + 7\cos^9(x)}$
It's a tough question I've found. I've tried using
$16\tan^3(x) -14 = \sqrt[3]{\tan x + 7 }$
By inspection, $\tan x=1$ is one of the answers.
but According to [WA][1], $\tan x$ is not equal to $1$.
(Sorry , I've seen later that $x = \frac{\pi}{4}$ , but I don't know how to find all roots of the question.)
Can root of unity solve this?
[1]: https://www.wolframalpha.com/input?i=%2416tan%5E%7B3%7Dx+-14++%3D+%5Csqrt%5B3%5D%7Btanx++%2B+7+%7D%24
|
I was solving a problem from old exams and got stuck here. I'd appreciate the help.
We have three variables p, q, and r. There are 8 valuations of the variables. If F is a propositional logic formula containing p, q, and r, and we construct a truth table for F, the table will thus have 8 rows.
Provide a formula F with the variables p, q, and r that is true for the valuations
$$\lbrace p : F, q : T, r : F \rbrace, \lbrace p : T, q : T, r : F \rbrace, \lbrace p : T, q : F, r : F \rbrace$$
and is false for all other valuations.
I drew a truth table and got an answer which goes:
$((p\lor q\lor r) \lor \neg (p\lor q\lor r)) \to ((r \to p) \land r)$
BUT this took a lot of time, to the point where I started to wonder whether this is even how we are supposed to solve the problem. There has got to be another way than just playing with the truth table, given the short time we have for the exam.
My question: is there another, easier way to solve it? If not, how do I find a simpler formula faster?
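One faster route is to read a disjunctive normal form straight off the table, with one conjunct per valuation that should be true; no search is needed. A sketch verifying this against the three given valuations (encoding truth values as Python booleans):

```python
from itertools import product

# the three valuations (p, q, r) that must make F true
target = {(False, True, False), (True, True, False), (True, False, False)}

def F(p, q, r):
    # DNF: one conjunct per true row of the table
    return ((not p and q and not r)
            or (p and q and not r)
            or (p and not q and not r))

for p, q, r in product([False, True], repeat=3):
    assert F(p, q, r) == ((p, q, r) in target)
```

Here the three conjuncts simplify by absorption to $(p \lor q) \land \neg r$, a short formula obtained mechanically rather than by trial and error.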
|
This is an exercise in the book by *B. Daya Reddy*.
For $f \in L^2(0, 1)$, let $u_f$ be the solution of the ODE: $u'' + u' - 2u = f$, $u(0) = u(1) = 0$. Define the functional $\ell$ by
$$
\langle \ell, f \rangle = \int_0^1 u_f(x) dx \ \forall f \in L^2(0, 1)
$$
Show that $\ell$ is a bounded linear functional.
I have shown $\ell$ is linear, but struggle to bound $\ell$. From the ODE
$$
u'' + u' - 2u = f, u(0) = u(1) = 0
$$
integrate from $0$ to $1$ on both sides; we have
$$
\langle \ell, f \rangle = \int_0^1 u_f(x)dx = \dfrac{u_f'(1) - u_f'(0)}{2} - \dfrac{1}{2}\int_0^1 f(x)dx
$$
Therefore,
$$
\vert \langle \ell, f \rangle \vert \le \dfrac{\vert u_f'(1) - u_f'(0) \vert}{2} + \dfrac{1}{2}\lVert f \rVert_2
$$
Am I on the right track? How can I bound the term $\dfrac{\vert u_f'(1) - u_f'(0) \vert}{2}$ in terms of the $L^2$ norm of $f$? Any hints are appreciated. Thanks
---------
Updated: the Lax-Milgram theorem, as introduced so far in the book, is as follows:
Let $H$ be a Hilbert space and let $a: H \times H \rightarrow \mathbb{R}$ be a continuous, $H$-elliptic (meaning $a(u, u) \ge \alpha \lVert u \rVert^2 \ \forall u \in H$) bilinear form defined on $H$. Then, given any continuous linear functional $\ell$ on $H$, there exists a unique element $u \in H$ such that $a(u, v) = \langle \ell, v \rangle \ \forall v \in H$ and $\lVert u \rVert \le \frac{1}{\alpha} \lVert \ell \rVert.$
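This is not a proof, but a finite-difference experiment is consistent with boundedness: discretising the BVP with central differences (my own choice of scheme and grid) and computing $|\langle \ell, f\rangle| / \lVert f \rVert_2$ for random $f$ keeps the ratio small and stable:

```python
import numpy as np

def ell_over_norm(f_vals, h):
    """Solve u'' + u' - 2u = f, u(0) = u(1) = 0 with central differences
    on the interior grid, then return |integral of u| / (L2 norm of f)."""
    n = len(f_vals)
    main = np.full(n, -2.0 / h**2 - 2.0)
    up = np.full(n - 1, 1.0 / h**2 + 1.0 / (2 * h))
    lo = np.full(n - 1, 1.0 / h**2 - 1.0 / (2 * h))
    A = np.diag(main) + np.diag(up, 1) + np.diag(lo, -1)
    u = np.linalg.solve(A, f_vals)
    return abs(h * u.sum()) / np.sqrt(h * np.sum(f_vals ** 2))

rng = np.random.default_rng(0)
n = 199
h = 1.0 / (n + 1)
ratios = [ell_over_norm(rng.standard_normal(n), h) for _ in range(200)]
print(max(ratios))   # stays well below 1 in this experiment
```

An analytic route in the spirit of the book: in the weak form, $a(u,u)=\int u'^2 - \int u'u + 2\int u^2 = \int u'^2 + 2\int u^2$ (the cross term vanishes by the boundary conditions), so the form is elliptic, Lax-Milgram gives $\lVert u_f \rVert \le C \lVert f \rVert_2$, and then $|\int_0^1 u_f| \le \lVert u_f \rVert_2$ by Cauchy-Schwarz.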
|
Let $R \subseteq A \times A$ and $S \subseteq A \times A$ be two arbitrary equivalence relations.
Prove or disprove that $R \cup S$ is an equivalence relation.
Reflexivity: Let $x \in A$. Then $(x,x) \in R$ (since $R$ is reflexive), so $(x,x) \in R \cup S$.
Now I still have to prove or disprove that $R \cup S$ is symmetric and transitive. How can I do that?
My guess for symmetry: if $(x,y) \in R \cup S$, then $(x,y) \in R$ or $(x,y) \in S$. Since $R$ and $S$ are symmetric, $(y,x)$ lies in the same relation, and hence $(y,x) \in R \cup S$. Is that correct?
Transitivity: ?
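For transitivity, a small counterexample settles the "prove or disprove" part: take two equivalence relations on $A=\{1,2,3\}$ whose union relates $1\sim 2$ and $2\sim 3$ but not $1\sim 3$. A sketch:

```python
from itertools import product

A = {1, 2, 3}
# equivalence relation with classes {1, 2}, {3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
# equivalence relation with classes {1}, {2, 3}
S = {(1, 1), (2, 2), (3, 3), (2, 3), (3, 2)}
U = R | S

def is_transitive(rel):
    return all((x, z) in rel
               for (x, y), (w, z) in product(rel, rel) if y == w)

print(is_transitive(U))  # False: (1, 2), (2, 3) in U but (1, 3) is not
```

So $R\cup S$ is always reflexive and symmetric but need not be transitive; the union is an equivalence relation only in special cases (for instance when $R\subseteq S$ or $S\subseteq R$).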
|
I have the following coinflip game:
We flip a coin infinitely many times (countably many; in practice we stop after a given finite number of flips).
For every pair of consecutive tails (tails twice in a row), A gets a point, while for every tails followed by heads, B gets a point. If the pair is heads-heads or heads-tails, neither gets a point. The winner is the player with the most points when they stop.
I simulated 3 different lengths (47, 100, and 1000 flips); each game was simulated 100000 times, and I took the probability of A or B winning (number of times one won / number of simulations), with tails being 0 and heads 1.
For each length, I found that B had a slightly higher chance of winning than A. This leaves me with two questions:
1. Why is this so? Is there a proof or probability-theoretic explanation?
2. When I run my code there is a bigger difference in A's probability of winning between 47 flips and 100 flips (0.03) than between 100 and 1000 (0.005). What is the explanation for this? (I thought a high number of simulations should avoid big differences with respect to the game length.)
|
Let $I = \int_2^\infty \frac{\sin^2(x)}{x^\alpha(x^\alpha + \sin(x))}dx$. Then $I$ is divergent iff $\exists \epsilon > 0: \forall A \ge 2 : \exists A_1, A_2 > A: \left|\int_{A_1}^{A_2} \frac{\sin^2(x)}{x^\alpha(x^\alpha + \sin(x))}dx\right| \ge \epsilon$. Let $A_1 = n\pi$ and $A_2 = 2n\pi$ for some large enough $n\in\mathbb{N}$. Since we're working on a positive interval and $x \ge \sin(x)$ for $x\ge 2$, we can drop the absolute value and we get the following chain of inequalities:
$\int_{n\pi}^{2n\pi} \frac{\sin^2(x)}{x^\alpha(x^\alpha + \sin(x))}dx \ge \int_{n\pi}^{2n\pi} \frac{\sin^2(x)}{x(x + \sin(x))}dx \ge \int_{n\pi}^{2n\pi} \frac{\sin^2(x)}{x(x + 1)}dx \ge \int_{n\pi}^{2n\pi} \frac{\sin^2(x)}{2n\pi(2n\pi + 1)}dx \ge \frac{n\pi}{2} \cdot \frac 1 {2n\pi(2n\pi + 1)} = \frac 1{4(2n\pi + 1)}$.
My question: if I set $\epsilon = \frac 1{4(2n\pi + 1)}$, would the argument be valid or not? I'm pretty sure it wouldn't be, but I can't construct an integral in which the $n$ cancels out completely.
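Numerics back up the suspicion: for the illustrative choice $\alpha = 1$ (my choice), the pieces $\int_{n\pi}^{2n\pi}$ themselves shrink roughly like $1/n$, consistent with the lower bound $\frac{1}{4(2n\pi+1)}\to 0$, so an $n$-dependent $\epsilon$ establishes nothing. A sketch:

```python
import math
from scipy.integrate import quad

def piece(n, alpha=1.0):
    f = lambda x: math.sin(x) ** 2 / (x ** alpha * (x ** alpha + math.sin(x)))
    # integrate period by period to keep quad accurate on oscillatory ranges
    return sum(quad(f, k * math.pi, (k + 1) * math.pi)[0]
               for k in range(n, 2 * n))

for n in [1, 10, 100]:
    print(n, piece(n))   # decreases roughly like 1/n
```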
|
> Show that for any real random variable $X$ and any proper open subset $U$ of $\mathbb{R}$, we have $$\mathbb P(X \in U)=\sup\limits \left \{\mathbb P(X \in K): K \subseteq U \text { is compact} \right \}.$$
**My Attempt** $:$ Let $U \subseteq \mathbb R$ be open. Let $$\alpha : = \sup\limits \left \{\mathbb P (X \in K)\ :\ K \subseteq U\ \text {compact} \right \}.$$
For each $n \geq 1,$ consider the set $$K_n : = \left \{x \in U\ :\ d \left (x, U^c \right ) \geq \tfrac{1}{n}\ \text {and}\ \left \lvert x \right \rvert \leq n \right \}.$$
Then each $K_n$ is compact since the function $f : x \mapsto d \left (x, U^c \right )$ is
continuous and $$K_n = f^{-1} \left ( \left[ \tfrac{1}{n}, \infty \right) \right ) \cap [-n,n].$$
Thus each $K_n,$ being the intersection of a closed set and a compact set
(in the Hausdorff space $\mathbb R),$ is compact.
Now for each $x \in U,$ there exists
$m_1 \in \mathbb N$ such that $\left \lvert x \right \rvert \leq m_1.$ Since $U^c$ is closed it follows that $d \left (x, U^c \right ) > 0$
and hence by Archimedean property there exists $m_2 \in \mathbb N$ such that
$d \left (x, U^c \right ) \geq \frac {1} {m_2}.$ Let $m_3 : = \max\limits \left \{m_1, m_2 \right \}.$ Then $x \in K_{m_3} \subseteq \bigcup\limits_{n=1}^{\infty} K_n.$ Since
$K_n \subseteq U$ for each $n \geq 1,$ it follows that $$U = \bigcup\limits_{n=1}^{\infty} K_n.$$
Since finite union of compact sets is compact, replacing $K_n$ by
$K_n^{\prime} : = \bigcup\limits_{j=1}^{n} K_j \subseteq U,$ it follows that $U$ can be written as a countable
increasing union of compact sets $K_n^{\prime}$ contained in $U.$ Then by the continuity of the
probability measure from below it follows that $$\mathbb P (X \in U) = \lim\limits_{n \to \infty} \mathbb P \left (X \in K_n^{\prime} \right).$$
Hence $\alpha \geq \mathbb P (X \in U).$ But since $\mathbb P (X \in K) \leq \mathbb P (X \in U),$ for each
compact set $K \subseteq U,$ we also have $\alpha \leq \mathbb P (X \in U).$ Thus $\alpha = \mathbb P (X \in U),$
as required. $\square$
Is it fine what I did? Thanks for reading.
|
A group of *2n* individuals, consisting of *n* couples, is randomly arranged at a round table. You are required to find an upper bound for the probability that none of the couples are seated next to each other.
**Solution:**
This is a combinatorial problem. Let's denote the total number of ways to arrange *2n* individuals around a round table as T, and the number of ways to arrange them such that no couples are seated next to each other as S. The probability that none of the couples are seated next to each other is then given by $P = \displaystyle\frac{S}{T}.$
1. Total arrangements (T): Since the table is round, we can fix one person and arrange the remaining 2n-1 people. This can be done in (2n-1)! ways.
2. Arrangements with no couples together (S): This is a bit trickier. We can think of each couple as a single entity first. So we have n entities to arrange, which can be done in (n-1)! ways (again, because the table is round). Now, within each couple, we have 2 people that can be arranged in 2! ways. Since we have *n* couples, the total number of arrangements is $(n-1)! \times (2!)^n$.
So, the probability $P =\displaystyle\frac{S}{T} = \frac{(n-1)!\,(2!)^n}{(2n-1)!}.$
This is the exact probability, but the author asked for an upper bound. An upper bound for this probability can be obtained by using the facts that $(n-1)! \leq n!$ and $(2n-1)! \geq (n!)^2$ for $n \geq 1.$ So we have:
$P \leq \displaystyle\frac{n!\,(2!)^n}{(n!)^2} =\displaystyle\frac{2^n}{n!}.$
This is an upper bound for the probability that none of the couples are seated next to each other. Please note that this is a very loose upper bound, and the actual probability will be much less than this.
Is this answer correct?
Assuming in the above exercise, that *n* is large, how can we approximate the probability that exactly *k* of the couples are seated next to each other?
**Inclusion-exclusion theorem is used in the following case:**
Consider $n$ independent trials in which each trial results in one of the outcomes $1,\dots, k$ with respective probabilities $p_1,\dots, p_k.$ Suppose we are interested in the probability that each of the $k$ outcomes occurs at least once in the $n$ trials. If we let $A_i$ denote the event that outcome $i$ does not occur in any of the trials, then the desired probability is $1 -P\left(\bigcup_{i=1}^k A_i\right)$ and it can be obtained by using the inclusion-exclusion theorem:
$$P\left(\bigcup_{i=1}^k A_i\right) = \displaystyle\sum_i P(A_i) -\displaystyle\sum_i\sum_{j<i} P(A_i,A_j) + \dots + (-1)^{k+1}P(A_1,\dots,A_k) $$
where
$$P(A_i) = (1-p_i)^n$$
$$P(A_i,A_j)= (1- p_i -p_j)^n, \quad j < i$$
$$P(A_i,A_j,A_r) =(1-p_i -p_j -p_r)^n, \quad r<j <i $$
and so on. The difficulty with this approach, however, is that its computation requires the calculation of $2^k -1$ terms.
Now, in our question, I don't think we should use this inclusion-exclusion theorem.
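Any proposed formula can be checked against brute force for tiny *n* (couples labelled $(0,1), (2,3), \dots$ for concreteness). Inclusion-exclusion over which couples sit together gives $\sum_{k=0}^{n} (-1)^k \binom{n}{k} 2^k (2n-k-1)! \big/ (2n-1)!$, which the brute force below reproduces ($1/3$ for $n=2$, $4/15$ for $n=3$):

```python
from itertools import permutations

def prob_no_couple_adjacent(n):
    """Exact probability, by brute force, that no couple sits adjacently
    at a round table of 2n people; person x belongs to couple x // 2."""
    total = good = 0
    # fix person 0 in seat 0 to quotient out rotations
    for rest in permutations(range(1, 2 * n)):
        seating = (0,) + rest
        total += 1
        good += all(seating[i] // 2 != seating[(i + 1) % (2 * n)] // 2
                    for i in range(2 * n))
    return good / total

print(prob_no_couple_adjacent(2), prob_no_couple_adjacent(3))  # 1/3 and 4/15
```

For the large-*n* part: each couple is adjacent with probability $\frac{2}{2n-1}$, so the number of adjacent couples has mean $\frac{2n}{2n-1}\to 1$ and is approximately Poisson, giving $P(\text{exactly } k \text{ adjacent}) \approx e^{-1}/k!$.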
|
This is an exercise in the book by *B. Daya Reddy*.
For $f \in L^2(0, 1)$, let $u_f$ be the solution of the ODE: $u'' + u' - 2u = f$, $u(0) = u(1) = 0$. Define the functional $\ell$ by
$$
\langle \ell, f \rangle = \int_0^1 u_f(x) dx \ \forall f \in L^2(0, 1)
$$
Show that $\ell$ is a bounded linear functional.
I have shown $\ell$ is linear, but struggle to bound $\ell$. From the ODE
$$
u'' + u' - 2u = f, u(0) = u(1) = 0
$$
integrate from $0$ to $1$ on both sides; we have
$$
\langle \ell, f \rangle = \int_0^1 u_f(x)dx = \dfrac{u_f'(1) - u_f'(0)}{2} - \dfrac{1}{2}\int_0^1 f(x)dx
$$
Therefore,
$$
\vert \langle \ell, f \rangle \vert \le \dfrac{\vert u_f'(1) - u_f'(0) \vert}{2} + \dfrac{1}{2}\lVert f \rVert_2
$$
Am I on the right track? How can I bound the term $\dfrac{\vert u_f'(1) - u_f'(0) \vert}{2}$ in terms of the $L^2$ norm of $f$? Any hints are appreciated. Thanks
|
> Show that for any real random variable $X$ and any proper open subset $U$ of $\mathbb{R}$, we have $$\mathbb P(X \in U)=\sup\limits \left \{\mathbb P(X \in K): K \subseteq U \text { is compact} \right \}.$$
**My Attempt** $:$ Let $U \subseteq \mathbb R$ be open. Let $$\alpha : = \sup\limits \left \{\mathbb P (X \in K)\ :\ K \subseteq U\ \text {compact} \right \}.$$
For each $n \geq 1,$ consider the set $$K_n : = \left \{x \in U\ :\ d \left (x, U^c \right ) \geq \tfrac{1}{n}\ \text {and}\ \left \lvert x \right \rvert \leq n \right \}.$$
Then each $K_n$ is compact since the function $f : x \mapsto d \left (x, U^c \right )$ is
continuous and $$K_n = f^{-1} \left ( \left[ \tfrac{1}{n}, \infty \right) \right ) \cap [-n,n].$$
Thus each $K_n,$ being the intersection of a closed set and a compact set
(in the Hausdorff space $\mathbb R),$ is compact.
Now for each $x \in U,$ there exists
$m_1 \in \mathbb N$ such that $\left \lvert x \right \rvert \leq m_1.$ Since $U^c$ is closed it follows that $d \left (x, U^c \right ) > 0$
and hence by Archimedean property there exists $m_2 \in \mathbb N$ such that
$d \left (x, U^c \right ) \geq \frac {1} {m_2}.$ Let $m_3 : = \max\limits \left \{m_1, m_2 \right \}.$ Then $x \in K_{m_3} \subseteq \bigcup\limits_{n=1}^{\infty} K_n.$ Since
$K_n \subseteq U$ for each $n \geq 1,$ it follows that $$U = \bigcup\limits_{n=1}^{\infty} K_n.$$
Since finite union of compact sets is compact, replacing $K_n$ by
$K_n^{\prime} : = \bigcup\limits_{j=1}^{n} K_j \subseteq U,$ it follows that $U$ can be written as a countable
increasing union of compact sets $K_n^{\prime}$ contained in $U.$ Then by the continuity of the
probability measure from below it follows that $$\mathbb P (X \in U) = \lim\limits_{n \to \infty} \mathbb P \left (X \in K_n^{\prime} \right).$$
Hence $\alpha \geq \mathbb P (X \in U).$ But since $\mathbb P (X \in K) \leq \mathbb P (X \in U),$ for each
compact set $K \subseteq U,$ we also have $\alpha \leq \mathbb P (X \in U).$ Thus $\alpha = \mathbb P (X \in U),$
as required. $\square$
Is it fine what I did? Thanks for reading.
|
So I want to find the length of the function $y = a - 2\sqrt{ax} + x$ in the interval $(0, a)$, assuming $a > 0$.
I found the derivative: $ y' = 1 - \frac {\sqrt{a}}{\sqrt{x}} $
Then using the formula $$l = \int_{0}^{a}\sqrt{1+(y'(x))^2} dx$$ I get the integral:
$$\int_{0}^{a}\sqrt{2-2\sqrt{\frac{a}{x}}+\frac{a}{x}} dx$$
How can I solve this?
|
I want to derive $\sin(A-B) = \sin A \cos B - \cos A \sin B$ from $\cos(A-B)=\cos A \cos B +\sin A \sin B$
$$\cos(A-B)=\cos A \cos B +\sin A \sin B$$ Now replace $A$ with $A + \frac{\pi}{2}$:
$$\cos(A+\frac{\pi}{2}-B)=\cos( A+\frac{\pi}{2}) \cos B +\sin( A+\frac{\pi}{2}) \sin B$$
$$\sin(A-B)=\sin(A) \cos B +\cos(A) \sin B$$
but this is wrong. However, you get the correct answer with $\frac{\pi}{2} - A$.
Why?
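Numerically spot-checking the shift identities involved (a sketch with arbitrary test angles) highlights the signs that the substitution picks up on both sides of the formula:

```python
import math

A, B = 0.7, 0.3   # arbitrary test angles
# shifting by +pi/2 flips a sign: cos(t + pi/2) = -sin t, sin(t + pi/2) = cos t
assert math.isclose(math.cos(A + math.pi / 2), -math.sin(A))
assert math.isclose(math.sin(A + math.pi / 2), math.cos(A))
# hence cos((A + pi/2) - B) = -sin(A - B): replacing A by A + pi/2 in the
# cosine formula produces -sin(A - B) on the left, not +sin(A - B)
assert math.isclose(math.cos((A + math.pi / 2) - B), -math.sin(A - B))
```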
|
I have the following coinflip game:
We flip a coin arbitrarily many times (finitely many).
For every pair of consecutive tails (tails twice in a row), A gets a point, while for every tails followed by heads, B gets a point. If the pair is heads-heads or heads-tails, neither gets a point. The winner is the player with the most points when they stop.
I simulated 3 different lengths (47, 100, and 1000 flips); each game was simulated 100000 times, and I took the probability of A or B winning (number of times one won / number of simulations), with tails being 0 and heads 1.
For each length, I found that B had a slightly higher chance of winning than A. This leaves me with two questions:
1. Why is this so? Is there a proof or probability-theoretic explanation?
2. When I run my code there is a bigger difference in A's probability of winning between 47 flips and 100 flips (0.03) than between 100 and 1000 (0.005). What is the explanation for this? (I thought a high number of simulations should avoid big differences with respect to the game length.)
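A compact simulation harness (a sketch; 'T' = tails). The explanation commonly given for games like this is that the TT and TH counts have the same mean, but the TT count has larger variance because TT occurrences can overlap in runs of tails; with equal means, that tips the head-to-head win frequency toward B:

```python
import random

def score(flips):
    """A scores each consecutive 'TT' pair, B scores each 'TH' pair."""
    a = b = 0
    for prev, cur in zip(flips, flips[1:]):
        if prev == 'T':
            a += cur == 'T'
            b += cur == 'H'
    return a, b

rng = random.Random(0)
n_games, n_flips = 20000, 100
a_wins = b_wins = 0
for _ in range(n_games):
    flips = ''.join(rng.choice('TH') for _ in range(n_flips))
    a, b = score(flips)
    a_wins += a > b
    b_wins += b > a
print(a_wins / n_games, b_wins / n_games)  # B's frequency comes out higher
```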
|
So I want to find the length of the function $y = a - 2\sqrt{ax} + x$ in the interval $(0, a)$, assuming $a > 0$.
I found the derivative: $ y' = 1 - \frac {\sqrt{a}}{\sqrt{x}} $
Then using the formula $$l = \int_{0}^{a}\sqrt{1+(y'(x))^2} dx$$ I get the integral:
$$\int_{0}^{a}\sqrt{2-2\sqrt{\frac{a}{x}}+\frac{a}{x}} dx$$
How can I solve this? And is $l = \sqrt{2}a$? I found that as the answer to a similar question, but without integration.
|
Let $F$ be a field and $M_2 (F)$ denotes the ring of all $2\times 2$ matrices with entries in $F$.
I was wondering if someone could help me about this claim: Why is $M_2 (F)$ a direct sum of two minimal left ideals?
Thanks in advance.
|
Why is $M_2 (F)$ a direct sum of two minimal left ideals?
|
So I want to find the length of the function $y = a - 2\sqrt{ax} + x$ in the interval $(0, a)$, assuming $a > 0$.
I found the derivative: $ y' = 1 - \frac {\sqrt{a}}{\sqrt{x}} $
Then using the formula $$l = \int_{0}^{a}\sqrt{1+(y'(x))^2} dx$$ I get the integral:
$$\int_{0}^{a}\sqrt{2-2\sqrt{\frac{a}{x}}+\frac{a}{x}} dx$$
How can I solve this? And is $l = \sqrt{2}a$? (I found that as the answer to a [similar question][1], but obtained without integration.)
[1]: https://math.stackexchange.com/questions/2763733/what-is-the-length-of-latus-rectum-of-parabola-sqrtx-sqrty-sqrta
|
Pauli matrices satisfy
$$
\sigma_i \sigma^{\dagger}_j = \delta_{ij} \sigma_0 + i\epsilon_{ijk}\sigma_k\,.
$$
Can one construct a set of three complex-valued $2\times 2$ matrices $a_i$, $i=1,2,3$, such that
$$
a_i a^{\dagger}_j = \delta_{ij} \sigma_0 - i\epsilon_{ijk}\sigma_k\,?
$$
If not, is it possible for matrices with dimension $2\times N$, with $N>2$?
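As a quick sanity check, here is a verification of the Pauli product rule with the factor of $i$ on the $\epsilon_{ijk}\sigma_k$ term (without that factor the identity fails, e.g. $\sigma_1\sigma_2 = i\sigma_3$). Plain tuples are used instead of a matrix library:

```python
import itertools

# Pauli matrices as nested tuples (rows), indexed 1..3
S = {1: ((0j, 1 + 0j), (1 + 0j, 0j)),   # sigma_x
     2: ((0j, -1j), (1j, 0j)),          # sigma_y
     3: ((1 + 0j, 0j), (0j, -1 + 0j))}  # sigma_z

def mul(a, b):
    """2x2 matrix product."""
    return tuple(tuple(sum(a[r][k] * b[k][c] for k in range(2))
                       for c in range(2)) for r in range(2))

def dag(a):
    """Conjugate transpose."""
    return tuple(tuple(a[c][r].conjugate() for c in range(2)) for r in range(2))

def eps(i, j, k):
    """Levi-Civita symbol on indices 1..3."""
    return {(1, 2, 3): 1, (2, 3, 1): 1, (3, 1, 2): 1,
            (3, 2, 1): -1, (1, 3, 2): -1, (2, 1, 3): -1}.get((i, j, k), 0)

def rhs(i, j):
    """delta_ij * I + i * eps_ijk * sigma_k, summed over k."""
    m = [[(1 + 0j if (r == c and i == j) else 0j) for c in range(2)] for r in range(2)]
    for k in (1, 2, 3):
        e = eps(i, j, k)
        if e:
            for r in range(2):
                for c in range(2):
                    m[r][c] += 1j * e * S[k][r][c]
    return tuple(tuple(row) for row in m)

def identity_holds():
    """Check sigma_i sigma_j^dagger == delta_ij I + i eps_ijk sigma_k for all i, j."""
    return all(mul(S[i], dag(S[j])) == rhs(i, j)
               for i, j in itertools.product((1, 2, 3), repeat=2))
```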
|
We have the following mod $p$ technique to determine the Galois group of a polynomial in $\mathbb{Z}[x]$:
>Assume $f\in\mathbb{Z}[x]$ is irreducible, $p$ is a prime,
and $\bar{f}$ is the polynomial obtained by $f$ mod $p$.
(Or more precisely, $\bar{f}$ is the image of the homomorphism $\phi: \mathbb{Z}[x]\to F_p[x]$,
induced by $\pi: \mathbb{Z}\to F_p$).
If $\bar{f}$ is separable over $F_p$ and factors over $F_p$ into irreducible polynomials as $\bar f=\bar f_1\cdots \bar f_l$,
with ${\rm deg}\,\bar f_i=d_i$, then ${\rm Gal}(f,\mathbb Q)$ contains a permutation
which is the product of $l$ disjoint cycles of lengths $d_1,\cdots,d_l$.
I wonder: is the converse true? That is, if ${\rm Gal}(f,\mathbb Q)$ contains a permutation
which is the product of $l$ disjoint cycles of lengths $d_1,\cdots,d_l$,
does there always exist a prime $p$ such that the corresponding $\bar{f}$ is separable over $F_p$
and factors over $F_p$ into irreducible polynomials as $\bar f=\bar f_1\cdots \bar f_l$, with ${\rm deg}\,\bar f_i=d_i$?
If this is incorrect for the identity element, then is it correct for the non-identity elements in ${\rm Gal}(f,\mathbb Q)$?
(Attempt: For the simplest case $f(x)=x^2+ax+b$ and the identity element in $S_2$, it is equivalent to determine if exists $u,v\in\mathbb Z$ and $p$ prime s.t. $u+v\equiv a \mod p$ and $uv\equiv b \mod p$.
This already seems hard for me.)
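As a side note, the forward direction can be explored experimentally: the cycle type coming from a prime $p$ is just the multiset of degrees of the irreducible factors of $\bar f$, which distinct-degree factorization computes without finding the factors themselves. A self-contained sketch over $F_p$ (helper names are mine; coefficients are given highest degree first; this is a tool for experimenting, not an answer to the converse question):

```python
def _trim(a):
    """Drop leading zeros of a coefficient list (low degree first)."""
    while a and a[-1] == 0:
        a.pop()
    return a

def _divmod(a, f, p):
    """Quotient and remainder of a by f over GF(p)."""
    a = [c % p for c in a]
    q = [0] * max(len(a) - len(f) + 1, 0)
    inv = pow(f[-1], p - 2, p)
    for shift in range(len(a) - len(f), -1, -1):
        c = a[shift + len(f) - 1] * inv % p
        if c:
            q[shift] = c
            for i, fi in enumerate(f):
                a[shift + i] = (a[shift + i] - c * fi) % p
    return _trim(q), _trim(a)

def _mulmod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                res[i + j] = (res[i + j] + ai * bj) % p
    return _divmod(res, f, p)[1]

def _powmod(base, e, f, p):
    """base**e mod f over GF(p), by repeated squaring."""
    result = [1]
    while e:
        if e & 1:
            result = _mulmod(result, base, f, p)
        base = _mulmod(base, base, f, p)
        e >>= 1
    return result

def _gcd(a, b, p):
    while b:
        a, b = b, _divmod(a, b, p)[1]
    inv = pow(a[-1], p - 2, p)
    return [c * inv % p for c in a]   # monic

def cycle_type_mod_p(coeffs, p):
    """Sorted degrees of the irreducible factors of f mod p,
    or None if f mod p is not separable (distinct-degree factorization)."""
    f = _trim([c % p for c in reversed(coeffs)])
    fp = _trim([i * f[i] % p for i in range(1, len(f))])
    if not fp or len(_gcd(f, fp, p)) > 1:
        return None
    inv = pow(f[-1], p - 2, p)
    f = [c * inv % p for c in f]                    # make monic
    degs, d, h = [], 0, [0, 1]                      # h = x
    while len(f) > 1:
        d += 1
        if 2 * d > len(f) - 1:
            degs.append(len(f) - 1)                 # leftover is irreducible
            break
        h = _powmod(h, p, f, p)                     # now h = x**(p**d) mod f
        hx = h + [0] * (2 - len(h))
        hx[1] = (hx[1] - 1) % p                     # hx = h - x
        g = _gcd(f, hx, p) if _trim(hx) else f[:]   # product of degree-d factors
        if len(g) > 1:
            degs += [d] * ((len(g) - 1) // d)
            f = _divmod(f, g, p)[0]
            h = _divmod(h, f, p)[1]
    return sorted(degs)
```

For instance, $x^4+x+1$ is irreducible mod $2$ (cycle type $[4]$), while $x^3-2$ mod $5$ splits as a linear times an irreducible quadratic (cycle type $[1,2]$).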
|
Is the converse of $\mod p$ method in determining Galois group true? Or is there a prime corresponding to every member of the Galois group?
|
I have been self-studying algebraic geometry and have always struggled to grasp how affine spaces naturally "lie" in projective space; that is, I haven't found any convincing sketches of what that would look like in even the simplest of cases: $\mathbb{A}^1$ and $\mathbb{P}^1$. I do understand the concept of "points at infinity" and that parallel lines intersect in projective space, but this isn't enough for me to say "yeah, I could totally explain this to someone else".
This is how I would visualize it right now: when I hear of projective space $\mathbb{P}^1$, all I think about is a bunch of lines passing through the origin in 2-dimensional space. (Probably as wrong as it gets.)
Does anyone have some nice literature that focuses on explaining these things in a very visual manner?
|
Can the square root of $-9$ have two answers, namely $+3i$ or $-3i$?
Squaring both $3i$ and $-3i$ does give $-9$.
Why does my textbook say only $3i$?
|
Let $a,b,c\in\mathbb{R}$ be three parameters satisfying
$$abc\ne 0.$$
Consider the following equations of $(u,v,w)\in\mathbb{R}^3$:
$$
(-c) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2vw,
$$
$$
(-b) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = T^2 +w^2 +2vT,
$$
$$
(-a)\bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2uT,
$$
where $$T= -au-bv-cw.$$
Surprisingly, the computer (WolframAlpha) tells me that for any $(a,b,c)$ as above, the only solution of these equations is
$$
(u,v,w)= (0,0,0).
$$
[![enter image description here][1]][1]
How can I prove this claim? I have tried many times but failed...
I notice that these equations are equivalent to
$$
3(v+T)^2-2vT-(u^2+v^2) = \frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b} = \frac{2uT}{-a},
$$
which gives that (if $u\ne 0$ )
$$
T = \frac{a}{c}\frac{vw}{u}.
$$
Then from the definition of $T$,
$$
\frac{a}{c} \frac{v}{u} = -a\frac{u}{w} -b \frac{v}{w} -c,
$$
and by the fact
$\frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b}$, we know (dividing both sides by $w^2$)
$$
\frac{a^2}{c^2} \frac{v^2}{u^2} + 1 + 2\frac{a}{c}\frac{v}{w}\frac{v}{u} = 2\frac{b}{c}\frac{v}{w}.
$$
If we set $s= v/u$, $t= u/w$ (we assume $u,w\ne 0$, otherwise it is trivial), then
$$
\frac{a}{c} s = -a t- bst-c, \quad
\frac{a^2}{c^2} s^2 +1 + 2\frac{a}{c} s^2 t =2\frac{b}{c}st .
$$
Moreover, from the first equation and $T = \frac{a}{c}\frac{vw}{u}$, we get another relation
$$
2\left(\frac{a}{c} s + st\right)^2 +\frac{a^2}{c^2} s^2 -t^2 =-\frac{2}{c} st.
$$
Now we have three new equations. I want to solve for $st$ from the first one and substitute it into the other two equations; then I get two quadratic equations in the two variables $(s,t)$, and want to show that the system has only the zero solution. Maybe this thought can help.
Another observation is that, if $(x_0,y_0,z_0)\ne (0,0,0)$ is a solution, then for any $\lambda\ne 0$, $(\lambda x_0,\lambda y_0,\lambda z_0)$ is a solution (the equations are homogeneous of degree two).
[1]: https://i.stack.imgur.com/y5Tqw.jpg
|
So I want to find the length of the function $y = a - 2\sqrt{ax} + x$ in the interval $(0, a)$, assuming $a > 0$.
I found the derivative: $ y' = 1 - \frac {\sqrt{a}}{\sqrt{x}} $
Then using the formula $$l = \int_{0}^{a}\sqrt{1+(y'(x))^2} dx$$
I get the integral:
$$l =\int_{0}^{a}\sqrt{2-2\sqrt{\frac{a}{x}}+\frac{a}{x}} dx$$
This integral is improper, because at the lower bound $0$ both $\sqrt{\frac{a}{x}}$ and $\frac{a}{x}$ approach infinity.
$$ l = \int_{0}^{a}\sqrt{2x-2\sqrt{ax}+a}\cdot x^{-1/2}\, dx = \frac{1}{\sqrt{a}}\int_{0}^{a}\sqrt{2x-2\sqrt{ax}+a}\; d\!\left(2\sqrt{ax}+a\right)$$
How can I solve this? And is $l = \sqrt{2}a$? (I found that as the answer to a [similar question][1], but obtained without integration.)
[1]: https://math.stackexchange.com/questions/2763733/what-is-the-length-of-latus-rectum-of-parabola-sqrtx-sqrty-sqrta
|
Let $a,b,c\in\mathbb{R}$ be three parameters satisfying
$$abc\ne 0,\quad \mbox{and} \quad 2a^2\ne b^2.$$
Consider the following equations of $(u,v,w)\in\mathbb{R}^3$:
$$
(-c) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2vw,
$$
$$
(-b) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = T^2 +w^2 +2vT,
$$
$$
(-a)\bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2uT,
$$
where $$T= -au-bv-cw.$$
Surprisingly, the computer (WolframAlpha) tells me that for any $(a,b,c)$ as above, the only solution of these equations is
$$
(u,v,w)= (0,0,0).
$$
[![enter image description here][1]][1]
How can I prove this claim? I have tried many times but failed...
I notice that these equations are equivalent to
$$
3(v+T)^2-2vT-(u^2+v^2) = \frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b} = \frac{2uT}{-a},
$$
which gives that (if $u\ne 0$ )
$$
T = \frac{a}{c}\frac{vw}{u}.
$$
Then from the definition of $T$ (we assume $w\ne 0$, otherwise $2a^2=b^2$),
$$
\frac{a}{c} \frac{v}{u} = -a\frac{u}{w} -b \frac{v}{w} -c,
$$
and by the fact
$\frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b}$, we know (dividing both sides by $w^2$)
$$
\frac{a^2}{c^2} \frac{v^2}{u^2} + 1 + 2\frac{a}{c}\frac{v}{w}\frac{v}{u} = 2\frac{b}{c}\frac{v}{w}.
$$
If we set $s= v/u$, $t= u/w$, then
$$
\frac{a}{c} s = -a t- bst-c, \quad
\frac{a^2}{c^2} s^2 +1 + 2\frac{a}{c} s^2 t =2\frac{b}{c}st .
$$
Moreover, from the first equation and $T = \frac{a}{c}\frac{vw}{u}$, we get another relation
$$
2\left(\frac{a}{c} s + st\right)^2 +\frac{a^2}{c^2} s^2 -t^2 =-\frac{2}{c} st.
$$
Now we have three new equations. I want to solve for $st$ from the first one and substitute it into the other two equations; then I get two quadratic equations in the two variables $(s,t)$, and want to show that the system has only the zero solution. Maybe this thought can help.
Another observation is that, if $(x_0,y_0,z_0)\ne (0,0,0)$ is a solution, then for any $\lambda\ne 0$, $(\lambda x_0,\lambda y_0,\lambda z_0)$ is a solution (the equations are homogeneous of degree two).
[1]: https://i.stack.imgur.com/y5Tqw.jpg
|
Is the converse of $\bmod p$ method in determining Galois group true? Or is there a prime corresponding to every member of the Galois group?
|
So I want to find the length of the function $y = a - 2\sqrt{ax} + x$ in the interval $(0, a)$, assuming $a > 0$.
I found the derivative: $ y' = 1 - \frac {\sqrt{a}}{\sqrt{x}} $
Then using the formula $$l = \int_{0}^{a}\sqrt{1+(y'(x))^2} dx$$
I get the integral:
$$l =\int_{0}^{a}\sqrt{2-2\sqrt{\frac{a}{x}}+\frac{a}{x}} dx$$
This integral is improper, because at the lower bound $0$ both $\sqrt{\frac{a}{x}}$ and $\frac{a}{x}$ approach infinity.
$$ l = \int_{0}^{a}\sqrt{2x-2\sqrt{ax}+a}\cdot x^{-1/2}\, dx = \frac{1}{\sqrt{a}}\int_{0}^{a}\sqrt{2x-2\sqrt{ax}+a}\; d\!\left(2\sqrt{ax}+a\right)$$
With the substitution suggested in the comments, $y=\sqrt{\frac{a}{x}}$, $dx=-\frac{a}{y^2}dy$:
$$l = -a\int_{\infty}^{1}\sqrt{2-2y+y^2}\cdot\frac{1}{y^2}\,dy=a\int_{1}^{\infty}\sqrt{2-2y+y^2}\cdot\frac{1}{y^2}\,dy$$
How can I solve this? And is $l = \sqrt{2}a$? (I found that as the answer to a [similar question][1], but obtained without integration.)
[1]: https://math.stackexchange.com/questions/2763733/what-is-the-length-of-latus-rectum-of-parabola-sqrtx-sqrty-sqrta
|
Hi, while learning Lean I came across the following property of category equivalences:
>**Definition** An equivalence of categories consists of $(F, G, \eta, \epsilon)$ where $F: C \rightarrow D$ and $G: D\rightarrow C$ are functors and $\eta: 1_C \cong GF$, $\epsilon: FG\cong 1_D$ are natural isomorphisms.
and
> **Theorem** If the composite $F \Rightarrow FGF \Rightarrow F$ is the identity, then so is $G \Rightarrow GFG \Rightarrow G$.
The purpose of this theorem is for showing that the above definition can be equipped with the two triangle equalities in the above theorem, as said in the document of Lean's Mathlib4 [Equivalence of categories](https://leanprover-community.github.io/mathlib4_docs/Mathlib/CategoryTheory/Equivalence.html).
I proved it manually, but I don't understand the proof linked at http://globular.science/1905.001, which is shown as the diagram below and which I found in the source code [here](). What is the meaning of it? How can it be translated into a plain-English proof? Thank you for any help, ideas and references!
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/g9KcH.png
|
To avoid confusion, the following somewhat cumbersome notation seems appropriate to me :
Let us write $(2:1:7)_{ten}$ instead of $217$. It means $$(\color{red}2:\color{green}1:\color{blue}7)_{ten}=\color{red}2\times ten^2+\color{green}1\times ten^1+\color{blue}7\times ten^0$$
That's base *ten*
____________________
So let's look at base *four* because the Martian in the joke image has only four fingers while we have ten.
Very logically, in his world he will form packs of four, then bundles of packs of four, and so on. For example,
[![enter image description here][1]][1]
$$(\color{red}1:\color{green}2:\color{blue}3)_{four}=\color{red}1\times four^2+\color{green}2\times four^1+\color{blue}3\times four^0$$
________________________
To count the four stones in the image, the Martian only needs the four digits $0,1,2$ and $3$. When he sees a pack, very logically, he says: "$1$ pack" and writes $$10$$
He's never heard of $4$ because he doesn't really need $4$. Hence his question: "what is base four?"
[1]: https://i.stack.imgur.com/QwTLl.png
|
So I want to find the length of the function $y = a - 2\sqrt{ax} + x$ in the interval $(0, a)$, assuming $a > 0$.
I found the derivative: $ y' = 1 - \frac {\sqrt{a}}{\sqrt{x}} $
Then using the formula $$l = \int_{0}^{a}\sqrt{1+(y'(x))^2} dx$$
I get the integral:
$$l =\int_{0}^{a}\sqrt{2-2\sqrt{\frac{a}{x}}+\frac{a}{x}} dx$$
This integral is improper, because at the lower bound $0$ both $\sqrt{\frac{a}{x}}$ and $\frac{a}{x}$ approach infinity.
$$ l = \int_{0}^{a}\sqrt{2x-2\sqrt{ax}+a}\cdot x^{-1/2}\, dx = \frac{1}{\sqrt{a}}\int_{0}^{a}\sqrt{2x-2\sqrt{ax}+a}\; d\!\left(2\sqrt{ax}+a\right)$$
With the substitution suggested in the comments, $y=\sqrt{\frac{a}{x}}$, $dx=-\frac{a}{y^2}dy$ (I think I made a mistake, because this one [doesn't converge][1]):
$$l = -a\int_{\infty}^{1}\sqrt{2-2y+y^2}\cdot\frac{1}{y^2}\,dy=a\int_{1}^{\infty}\sqrt{2-2y+y^2}\cdot\frac{1}{y^2}\,dy$$
How can I solve this? And is $l = \sqrt{2}a$? (I found that as the answer to a [similar question][2], but obtained without integration.)
[1]: https://www.wolframalpha.com/input?i2d=true&i=Integrate%5Bsqrt%5C%2840%292-2y%2BPower%5By%2C2%5D%5C%2841%29*Divide%5B-a%2CPower%5By%2C2%5D%5D%2C%7By%2Cinf%2C1%7D%5D
[2]: https://math.stackexchange.com/questions/2763733/what-is-the-length-of-latus-rectum-of-parabola-sqrtx-sqrty-sqrta
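One way to sanity-check the conjectured value is direct numerical quadrature. The substitution $x = at^2$ (equivalently $t=\sqrt{x/a}$) removes the singularity at $0$ and gives $l = 2a\int_0^1\sqrt{2t^2-2t+1}\,dt$; a minimal midpoint-rule sketch (my own, not from the question):

```python
from math import sqrt, log

def arc_length(a, n=200_000):
    """Midpoint rule for l = 2a * integral_0^1 sqrt(2t^2 - 2t + 1) dt,
    obtained from the original integral by substituting x = a*t**2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += sqrt(2 * t * t - 2 * t + 1)
    return 2 * a * total * h

# Completing the square in 2t^2 - 2t + 1 also yields a closed form
# (assuming I haven't slipped in the algebra):
#   l = a * (1 + log(1 + sqrt(2)) / sqrt(2))  ~  1.6232 * a
```

The quadrature agrees with the closed form to high accuracy, and the value is about $1.623\,a$ rather than $\sqrt 2\,a$, which suggests the answer from the linked question refers to a different arc.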
|
Given a set $X$ and a disjoint collection $R \subseteq \operatorname{Pot}(X)$, define a relation $\sim$ on $X$ via $x \sim y \Leftrightarrow x = y \text{ or } \exists S \in R: x \in S \text{ and } y \in S$. Define $X/R = X/\sim$ as the set of equivalence classes. Also, if $X$ is equipped with a topology, equip $X/R$ with the quotient topology.
Consider $S^1 \subseteq \mathbb{C}$.
---
For each $t \in [0, 1]$, let $S_t = S^1$ be a copy of the circle, one for each point of $[0, 1]$. Let $A = \bigsqcup_{t \in [0, 1]} S_t$ be their disjoint union. Let $B = A/\{\{1 \in S_t \mid t \in [0, 1]\}\} $. This means that $B$ is obtained by gluing all circles in $A$ at one point. Let $C = ([0, 1] \sqcup A)/\{\{t \in [0, 1], 1 \in S_t\} \mid t \in [0, 1]\}$. This means that $C$ is obtained by gluing a point of each circle in $A$ to a point of $[0, 1]$.
Is the mapping $\varphi: C \to B$, defined by $t \in [0, 1] \mapsto 1 \in B$ and $z \in S_t \setminus \{1\} \mapsto z \in S_t \setminus \{1\}$, which contracts $[0, 1]$ to a point, a homotopy equivalence? What is the fundamental group $\pi_1(B)$? What is the fundamental group $\pi_1(C)$?
I would guess that $\pi_1(B)$ is the free group on uncountably many generators, since $B$ is the wedge sum of uncountably many circles.
For each $z \in S^1$, let $S_z = S^1$ be a copy of the circle, one for each point of $S^1$. Let $D = \bigsqcup_{z \in S^1} S_z$ be their disjoint union. Let $E = (S^1 \sqcup D)/\{\{z \in S^1, 1 \in S_z\} \mid z \in S^1\}$. This means that $E$ is obtained by gluing a point of each circle in $D$ to a point of $S^1$.
What is the fundamental group $\pi_1(E)$?
|
Let $a,b,c\in\mathbb{R}$ be three parameters satisfying
$$abc\ne 0,\quad \mbox{and} \quad 2a^2\ne b^2.$$
Consider the following equations of $(u,v,w)\in\mathbb{R}^3$:
$$
(-c) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2vw,
$$
$$
(-b) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = T^2 +w^2 +2vT,
$$
$$
(-a)\bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2uT,
$$
where $$T= -au-bv-cw.$$
Surprisingly, the computer (WolframAlpha) tells me that for any $(a,b,c)$ as above, the only solution of these equations is
$$
(u,v,w)= (0,0,0).
$$
[![enter image description here][1]][1]
How can I prove this claim? I have tried many times but failed...
I notice that these equations are equivalent to
$$
3(v+T)^2-2vT-(u^2+v^2) = \frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b} = \frac{2uT}{-a},
$$
which gives that (if $u\ne 0$ )
$$
T = \frac{a}{c}\frac{vw}{u}.
$$
Then from the definition of $T$ (we assume $w\ne 0$, otherwise $2a^2=b^2$),
$$
\frac{a}{c} \frac{v}{u} = -a\frac{u}{w} -b \frac{v}{w} -c,
$$
and by the fact
$\frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b}$, we know (dividing both sides by $w^2$)
$$
\frac{a^2}{c^2} \frac{v^2}{u^2} + 1 + 2\frac{a}{c}\frac{v}{w}\frac{v}{u} = 2\frac{b}{c}\frac{v}{w}.
$$
If we set $s= v/u$, $t= u/w$, then
$$
\frac{a}{c} s = -a t- bst-c, \quad
\frac{a^2}{c^2} s^2 +1 + 2\frac{a}{c} s^2 t =2\frac{b}{c}st .
$$
Moreover, from the first equation and $T = \frac{a}{c}\frac{vw}{u}$, we get another relation
$$
2\left(\frac{a}{c} s + st\right)^2 +\frac{a^2}{c^2} s^2 -t^2 =-\frac{2}{c} st.
$$
Now we have three new equations. I want to solve for $st$ from the first one and substitute it into the other two equations; then I get two quadratic equations in the two variables $(s,t)$; they are:
$$
s^2 \frac{a^2}{c^2} (1-\frac{2}{b}) + s \left(\frac{2a}{c^2}-\frac{2a}{b}+\frac{2a^3}{b^2 c^2}\right) + t \left( \frac{2a}{c}+ \frac{2a^3}{b^2 c}\right)+ 3 + \frac{1}{b^2},
$$
and I want to show that the system has only the zero solution. Maybe this thought can help.
Another observation is that, if $(x_0,y_0,z_0)\ne (0,0,0)$ is a solution, then for any $\lambda\ne 0$, $(\lambda x_0,\lambda y_0,\lambda z_0)$ is a solution (the equations are homogeneous of degree two).
[1]: https://i.stack.imgur.com/y5Tqw.jpg
|
To avoid confusion, the following somewhat cumbersome notation seems appropriate to me :
Let us write $(2:1:7)_{ten}$ instead of $217$. It means $$(\color{red}2:\color{green}1:\color{blue}7)_{ten}=\color{red}2\times ten^2+\color{green}1\times ten^1+\color{blue}7\times ten^0$$
That's base *ten*
____________________
So let's look at base *four* because the Martian in the joke image has only four fingers while we have ten.
Very logically, in his world he will form packs of four, then bundles of packs of four, and so on. For example,
[![enter image description here][1]][1]
$$(\color{red}1:\color{green}2:\color{blue}3)_{four}=\color{red}1\times four^2+\color{green}2\times four^1+\color{blue}3\times four^0$$
________________________
To count, and in particular to count the four stones in the image, the Martian only needs the four digits $0,1,2$ and $3$. When he sees a pack, very logically, he says: "$1$ pack" and writes $$10$$
He's never heard of $4$ because he doesn't really need $4$. Hence his question: "what is base four?"
[1]: https://i.stack.imgur.com/QwTLl.png
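The Martian's bookkeeping is exactly repeated division by four; a tiny sketch (function name is mine):

```python
def to_base_four(n):
    """Write a non-negative integer the way the four-fingered Martian would."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, d = divmod(n, 4)          # d is the current "loose stones" digit
        digits.append(str(d))
    return "".join(reversed(digits))
```

`to_base_four(4)` gives `'10'` (one pack, nothing left over), and `to_base_four(27)` gives `'123'`, matching $(1{:}2{:}3)_{four}$ above.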
|
Give an interpretation where
$$∃_{x} (\neg P(x) ∨ Q(x)) \to (∃_{x} P(x) ∧ ∀_{x} \neg Q(x))$$
is false.
How does someone even begin with questions like this? I have interpreted it in my head and I kind of get it in a sense. But it seems like the only thing I know is that, since it is an implication, the only way it can be false is if it is True -> False. Can someone please help me continue?
This question is part of old exams I am solving.
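One systematic way to begin: an interpretation is just a domain plus subsets for $P$ and $Q$, and for tiny domains you can enumerate all of them until the formula evaluates to false. A hedged sketch (names are mine):

```python
from itertools import product

def holds(P, Q, domain):
    """Evaluate (exists x: not P(x) or Q(x)) -> ((exists x: P(x)) and (forall x: not Q(x)))."""
    antecedent = any((x not in P) or (x in Q) for x in domain)
    consequent = any(x in P for x in domain) and all(x not in Q for x in domain)
    return (not antecedent) or consequent

def find_countermodel(max_size=2):
    """Search all interpretations over domains {0}, {0,1}, ... for one falsifying the formula."""
    for n in range(1, max_size + 1):
        domain = range(n)
        for p_bits, q_bits in product(range(2 ** n), repeat=2):
            P = {x for x in domain if p_bits >> x & 1}
            Q = {x for x in domain if q_bits >> x & 1}
            if not holds(P, Q, domain):
                return set(domain), P, Q
    return None
```

Running it returns a one-element countermodel, which is a good starting point for writing the interpretation out by hand.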
|
Why won't Wolfram Alpha compute my Limit?
|
I know that the dimension of the manifold formed by **$n\times n$ symmetric matrices** is $n(n+1)/2$. But the manifold formed by **$n\times n$ positive definite symmetric matrices** is of what dimension? Thanks!
|
The manifold formed by n-dimensional positive definite symmetric matrices is of what dimension?
|
I have the values of a polynomial $f(x)$ over the Galois field $\mathbb F_p$ (with $p$ prime) at the points zero to around two million. I need an algorithm to find the coefficients of $f(x)$ in time $O(n \log n)$, as $O(n^2)$ is way too slow for this application. In Chapter 3 of Pan's *Structured Matrices and Polynomials* there is an algorithm, but it is stated that $|t_i| = 1$ for all $i$, which is not the case here. Is there any other alternative?
|
I'm reading Jean-Pierre Serre's 1970 *Cours d'arithmétique*. I'm having trouble with the beginning of Chapter 2, devoted for example to $\mathbb Z_3$, the $3$-adic numbers, which I am discovering and which interest me because of [questions][1] I have about the ring sequence
$$(\mathbb Z/2\mathbb Z,\mathbb Z/6\mathbb Z,\mathbb Z/30\mathbb Z,...,\mathbb Z/p\#\mathbb Z,...)$$
He writes
> Let $n\geq 1, A_n:=\mathbb Z/3^n\mathbb Z$. An element of $A_n$ clearly defines an element of $A_{n-1}$
What I understand here is that if $\textbf{a}\in A_n$ then $\textbf{a}=a+3^n\mathbb Z$, and if we do the Euclidean division of $a$ by $3^{n-1}$, $$a=3^{n-1}q+r$$ with $0\leq r< 3^{n-1}$, then $$a+3^n\mathbb Z=3^{n-1}q+r+3^n\mathbb Z=r+3^{n-1}(q+3\mathbb Z)\subset r+3^{n-1}\mathbb Z$$
For example, $\mathbb Z/9\mathbb Z\to \mathbb Z/3\mathbb Z$ $$0\mapsto0$$ $$1\mapsto 1$$ $$2\mapsto 2$$ $$3\mapsto 0$$ $$4\mapsto 1$$ $$5\mapsto 2$$ $$...$$
Then he writes
> This results in a homomorphism $$\varphi_n:A_n\to A_{n-1}$$ which is surjective, and of kernel $p^{n-1}A_n$
No problem here. Then he writes
> [...] By definition, an element of $\mathbb Z_3$ is $x=(...,x_n,...,x_1)$, with $$x_n\in A_n\land \varphi_n(x_n)=x_{n-1} \text{ if }n\geq 2$$
__________________________
1. Is my understanding of what Serre presents as "clear" correct? Probably one would have to complete the argument by taking another representative of $\textbf{a}$ and checking that the same $r$ is obtained, i.e. proving uniqueness.
____________________
2. I wonder if we can do the same with the sequence $$(\mathbb Z/2\mathbb Z,\mathbb Z/6\mathbb Z,\mathbb Z/30\mathbb Z,...,\mathbb Z/p\#\mathbb Z,...)$$
For example, $\varphi : \mathbb Z/6\mathbb Z\to \mathbb Z/2\mathbb Z$ defined by $$0\mapsto 0$$ $$1\mapsto 1$$ $$2\mapsto 0$$ $$3\mapsto 1$$ $$4\mapsto 0$$ $$5\mapsto 1$$
And if so, what would be the "numbers" we would get then?
[1]: https://math.stackexchange.com/questions/4886427/questions-about-mathbb-z-30-mathbb-z
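Regarding question 2, the analogous projections do exist, and compatibility along the tower can be explored computationally with the Chinese Remainder Theorem, since each prime appears to the first power in a primorial. A small sketch (helper names are mine; `pow(m, -1, mi)` needs Python 3.8+), which suggests, though it proves nothing, that a coherent sequence amounts to an independent choice of one residue mod each prime:

```python
PRIMES = [2, 3, 5, 7, 11]

def crt(residues, moduli):
    """Smallest non-negative x with x = r_i (mod m_i); moduli pairwise coprime."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = (r - x) * pow(m, -1, mi) % mi   # solve x + m*t = r (mod mi)
        x += m * t
        m *= mi
    return x

def tower_levels(residues):
    """Elements x_k of Z/(p_k#)Z determined by choosing residues mod 2, 3, 5, ..."""
    return [crt(residues[:k], PRIMES[:k]) for k in range(1, len(residues) + 1)]
```

For instance, `tower_levels([1, 2, 4, 3, 7])` gives `[1, 5, 29, 59, 689]`, and each level reduces to the previous one under the analogue of Serre's $\varphi_n$: $689 \bmod 210 = 59$, $59 \bmod 30 = 29$, $29 \bmod 6 = 5$, $5 \bmod 2 = 1$.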
|
## Setting
Suppose $Q\subset \mathbb{R}$ is a compact metric space. Let $\mathcal{P}(Q)$ be the set of Borel probability measures on $Q$. This set is endowed with the topology of weak-* convergence: a sequence $\left\{ m_{N} \right\}$ of $\mathcal{P}(Q)$ converges to $m \in \mathcal{P}(Q)$ if $\forall \varphi \in C(Q)$
$$
\lim _N \int_Q \varphi(x) d m_N(x)=\int_Q \varphi(x) d m(x) ,
$$
which can be metrized by the distance
$$
\mathbf{d}_1(\mu, \nu)=\sup \left\{\int _{Q} fd\left(\mu -\nu\right) : \operatorname{Lip}\left(f;Q\right)\leq 1\right\},
$$ where $\operatorname{Lip}\left(f;Q\right)$ denotes the minimal Lipschitz constant of $f$. An equivalent distance is given by
$$
d_1(\mu, \nu)=\inf _{M\in\Pi(\mu, \nu)} \int_{Q \times Q} d(x, y) \,d M(x, y),
$$
where $\Pi (\mu,\nu)$ is the set of all couplings of $\mu$ and $\nu$, i.e.
$$
M\left(A\times Q\right)=\mu\left(A\right),\,M\left(Q\times A\right)=\nu\left(A\right) \quad\forall A \in \mathscr{B}\left(Q\right).
$$
## Question
I'm reading Pierre Cardaliaguet's MFG notes about the limit of sequences of symmetric functions. The author says that the function
$$
U\left(m\right)=\sup_{y \in \operatorname{Spt}\left(m\right)}\left| y \right|:\mathcal{P}\left(Q\right)\to \mathbb{R}
$$
is not continuous. Here $\operatorname{Spt}(m)$ denotes the support of $m$, which is defined by
$$
\operatorname{Spt}(m):=\left\{x \in Q \mid \forall N_x\in \mathcal{O}_{x}:\left(x \in N_x \Rightarrow m\left(N_x\right)>0\right)\right\}.
$$
But I have no idea how to prove it. Can someone give me some clues? Thanks so much!
|
Let $a,b,c\in\mathbb{R}$ be three parameters satisfying
$$abc\ne 0,\quad \mbox{and} \quad 2a^2\ne b^2.$$
Consider the following equations of $(u,v,w)\in\mathbb{R}^3$:
$$
(-c) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2vw,
$$
$$
(-b) \bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = T^2 +w^2 +2vT,
$$
$$
(-a)\bigg(3(v+T)^2-2vT-(u^2+v^2)\bigg) = 2uT,
$$
where $$T= -au-bv-cw.$$
Surprisingly, the computer (WolframAlpha) tells me that for any $(a,b,c)$ as above, the only solution of these equations is
$$
(u,v,w)= (0,0,0).
$$
[![enter image description here][1]][1]
How can I prove this claim? I have tried many times but failed...
I notice that these equations are equivalent to
$$
3(v+T)^2-2vT-(u^2+v^2) = \frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b} = \frac{2uT}{-a},
$$
which gives that (if $u\ne 0$ )
$$
T = \frac{a}{c}\frac{vw}{u}.
$$
Then from the definition of $T$ (we assume $w\ne 0$, otherwise $2a^2=b^2$),
$$
\frac{a}{c} \frac{v}{u} = -a\frac{u}{w} -b \frac{v}{w} -c,
$$
and by the fact
$\frac{2vw}{-c} = \frac{T^2+w^2+2vT}{-b}$, we know (dividing both sides by $w^2$)
$$
\frac{a^2}{c^2} \frac{v^2}{u^2} + 1 + 2\frac{a}{c}\frac{v}{w}\frac{v}{u} = 2\frac{b}{c}\frac{v}{w}.
$$
If we set $s= v/u$, $t= u/w$, then
$$
\frac{a}{c} s = -a t- bst-c, \quad
\frac{a^2}{c^2} s^2 +1 + 2\frac{a}{c} s^2 t =2\frac{b}{c}st .
$$
Moreover, from the first equation and $T = \frac{a}{c}\frac{vw}{u}$, we get another relation
$$
2\left(\frac{a}{c} s + st\right)^2 +\frac{a^2}{c^2} s^2 -t^2 =-\frac{2}{c} st.
$$
Now we have three new equations. I want to solve for $st$ from the first one and substitute it into the other two equations; then I get two quadratic equations in the two variables $(s,t)$; they are:
$$
s^2 \frac{a^2}{c^2} (1-\frac{2}{b}) + s \left(\frac{2a}{c^2}-\frac{2a}{b}+\frac{2a^3}{b^2 c^2}\right) + t \left( \frac{2a}{c}+ \frac{2a^3}{b^2 c}\right)+ 3 + \frac{1}{b^2}=0,
$$
and
$$
A s^2 + B s + C t^2 + D t + F=0,
$$
where
$$
A = \frac{a^2}{c^2}(5-\frac{4}{b}), \quad B = 4a (\frac{a^2}{c^2}+1)(1-\frac{1}{b}) -\frac{2a}{c^2},
$$
$$
C = 2a^2-1,\quad D = c (B + \frac{4a}{b}), \quad F = 4a^2 (1-\frac{1}{b}) + 2c^2 -2.
$$
I want to show that they have only the zero solution. Maybe this thought can help.
Another observation is that, if $(x_0,y_0,z_0)\ne (0,0,0)$ is a solution, then for any $\lambda\ne 0$, $(\lambda x_0,\lambda y_0,\lambda z_0)$ is a solution (the equations are homogeneous of degree two).
[1]: https://i.stack.imgur.com/y5Tqw.jpg
|
Consider the divergent series $$\sum_{n=1}^{\infty} \frac{2^n}{n} $$
This can be seen as arising from the function $f(z) = -\ln(1-z)=\sum_{n=1}^{\infty} \frac{z^n}{n} $ and 'evaluating' that power series at $z=2$. I am interested in a divergent renormalization for this expression.
For many such divergent series we can use analytic continuation to find the divergent value. For example to evaluate $\sum_{n=0}^{\infty} 2^n = -1 $ it suffices to analytically continue $\sum_{n=0}^{\infty} z^n = \frac{1}{1-z}$ to the whole complex plane and once you do that it's easy to reason that $\sum_{n=0}^{\infty} 2^n = -1$ (although there are even easier algebraic arguments).
The problem here is that the function $-\ln(1-z)$ has monodromy. As you analytically continue the logarithm, you end up with this multi-valued helix. And so when we ask what $-\ln(-1)$ is, the function can be any one of $\ldots,-3i\pi,-i\pi, i\pi, 3i\pi,\ldots$
If we apply some 'common sense' we might ask 'what are the most reasonable options' and that reduces it to $i\pi$ and $-i\pi$. Depending on if you think it's more natural to move clockwise or counterclockwise around the origin.
At this point picking ONE of those values seems extremely difficult. There's no real reason to pick clockwise over counterclockwise or vice versa, and the notion of 'principal branch' is a human construct, not really dictated by the math itself.
If there were a setting where this series arises that FORCES a choice, that would be nice (for example in physics, or some other part of number theory, etc.), but I don't know of any offhand. How do I actually assign a divergent summation to this? It's probably going to be one of those two choices $\pm i\pi$, but I would like a good argument for which one and why. (Or, even cooler, if you could show that in two different natural settings this series HAS to take on different values.)
|
How to find the divergent renormalization of $\sum_{n=1}^{\infty} \frac{2^n}{n}$?
|
I know that the dimension of the manifold formed by **$n\times n$ symmetric matrices** is $\dfrac {n(n+1)}2$. But the manifold formed by **$n\times n$ positive definite symmetric matrices** is of what dimension? Thanks!
|
I'm studying general topology and a question has come to my mind.
Given a sequence $x$ (i.e. a map from the set $\mathbb N$ of natural numbers) in a topological space and a point $p$ of the latter, we say that *$p$ is a special point for $x$* if the $x$-preimage of every neighbourhood of $p$ is cofinal in the canonical order of $\mathbb N$ (that is, for every neighbourhood $I$ of $p$ and every natural number $n$, there exists a natural number $m>n$ such that $x(m)$ belongs to $I$).
Let's consider the following claim: *if $p$ is a special point for $x$, then $p$ is an accumulation point of the image of $x$*. Initially I believed it to be false, but so far I haven't succeeded in finding a counterexample.
Is it true? Or, if it's not, how could one disprove it?
It would also be appreciated if anyone mentioned a more standard name for the above property.
|
The original question :
> Find all $x$ in $\mathbb R$ such that $16\sin^3(x) -14\cos^3(x) = \sqrt[3]{\sin x\cos^8(x) + 7\cos^9(x)}$
It's a tough question I found. Dividing both sides by $\cos^3(x)$, I've tried using
$16\tan^3(x) -14 = \sqrt[3]{\tan x + 7 }$
By inspection, $\tan x=1$ is one of the answers,
but according to [WA][1], $\tan x$ is not equal to $1$.
(Sorry, I've since seen that $x = \frac{\pi}{4}$ works, but I don't know how to find all roots of the question.)
Can roots of unity solve this?
[1]: https://www.wolframalpha.com/input?i=%2416tan%5E%7B3%7Dx+-14++%3D+%5Csqrt%5B3%5D%7Btanx++%2B+7+%7D%24
|
**Setup**
Let there be a board shaped like a rectangular grid. A piece is placed on any square of the board. Two players play a game, moving the piece in turns. The piece can only be moved to an adjacent square (no diagonal moves). The piece can’t be moved to a square that it has already visited. A player who can’t make a move loses. Who has a winning strategy: the player who makes the first move or their opponent?
[![enter image description here][1]][1]
**Motivation**
This question comes in continuation of [this MathSE thread](https://math.stackexchange.com/questions/4875047/combinatorial-game-played-on-a-grid) discussing a particular case where the starting square is in the corner of the board. It is proven there (by dividing the board into dominoes) that for an odd area board the second player wins, for an even area board the first player wins.
**Reasoning**
We can apply the dominoes argument here, too. If the board has even area, one of its sides has even length. We can divide the board into dominoes along that even side. The first player has the following winning strategy: he makes a move inside the domino where the piece currently is. The second player then moves to a new domino, the first player again makes a move inside of it, and so on. It is clear that the first player always has a possible move, so the second player eventually loses.
[![enter image description here][2]][2]
Now let’s consider a board of odd area. The answer to the question now depends on the starting square! Let us color the board in a chess-like manner. Since the area of the board is odd, its sides are of odd length, and all four corners are of the same color. Let it be the blue color.
[![enter image description here][3]][3]
It is easy to see that if a starting square is blue, then the rest of the board can be divided into dominoes. Since all four corners and the starting square share the same color, the parity of all four distances from the starting square to the borders is the same. If all the distances are odd, we can make a frame around the starting square, and the rest of the board is split into four rectangles with an even side:
[![enter image description here][4]][4]
If all the distances to the borders are even, we can make rows of dominoes towards the borders and, again, get four even-sided rectangles.
[![enter image description here][5]][5]
The second player has a winning strategy. The first player moves into a new domino, the second player makes a move inside of it. This process iterates. The second player always has a move and eventually wins.
Now, what if the starting square is not blue, but grey? The board can’t be split into dominoes, since it has odd area. The rest of the board can’t be split into dominoes either, since the number of blue squares exceeds the number of grey squares by $2$, but each domino takes exactly one square of each color.
It is easy to see that on the $3\times3$ board the first player wins no matter what the players do. It seems the same is true for the $3\times5$ board.
**Question**
Who has a winning strategy in the case of an odd-area board when the starting square is grey? Is it true that the first player can guarantee a win? Is it true that the first player wins no matter what the players do?
[1]: https://i.stack.imgur.com/t2vUa.jpg
[2]: https://i.stack.imgur.com/GKKr0.jpg
[3]: https://i.stack.imgur.com/sZQhT.jpg
[4]: https://i.stack.imgur.com/1ql31.jpg
[5]: https://i.stack.imgur.com/rIBfc.jpg
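For small boards the question can be checked exhaustively. The following memoized game search (my own sketch, feasible up to about $3\times 5$) reports whether the player about to move has a winning strategy:

```python
from functools import lru_cache

def first_player_wins(rows, cols, start):
    """True iff the player about to move wins the no-revisit path game with optimal play."""
    @lru_cache(maxsize=None)
    def win(pos, visited):
        r, c = pos
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in visited:
                if not win(nxt, visited | frozenset([nxt])):
                    return True     # there is a move after which the opponent loses
        return False                # every move loses, or there is no move at all
    return win(start, frozenset([start]))
```

It confirms the domino arguments above: `first_player_wins(3, 3, (0, 0))` is `False` (blue corner start, second player wins), while `first_player_wins(3, 3, (0, 1))` is `True` (grey start).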
|
**The variable $(X,Y)$ is uniformly distributed over $D=\{(x,y)\in\mathbb R^2:|x|+|y|\le 1\}$, and $A=X-Y$, $B=X+Y$.
Are $A$ and $B$ independent?**
I tried to prove $F_{A,B}(a,b)=F_A(a)F_B(b)$, and I tried to find the joint density of $(A,B)$ through a change of variables, $X=(A+B)/2$ and $Y=(B-A)/2$, and got that the joint density is a constant $c/2$ on its support. How do I continue from here? How can I find the marginal densities to check whether the factorization holds?
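A quick Monte Carlo plausibility check of the factorization $F_{A,B}(a,b)=F_A(a)F_B(b)$ at a few grid points (not a proof; the sampling scheme and names are mine):

```python
import random

def sample_AB(n, seed=0):
    """Rejection-sample (X, Y) uniform on |x| + |y| <= 1; return pairs (A, B) = (X - Y, X + Y)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if abs(x) + abs(y) <= 1:
            out.append((x - y, x + y))
    return out

def max_factorization_gap(n=100_000):
    """Max over a small grid of |P(A<=a, B<=b) - P(A<=a) P(B<=b)|, estimated empirically."""
    pts = sample_AB(n)
    gap = 0.0
    for a in (-0.5, 0.0, 0.5):
        for b in (-0.5, 0.0, 0.5):
            joint = sum(1 for A, B in pts if A <= a and B <= b) / n
            fa = sum(1 for A, _ in pts if A <= a) / n
            fb = sum(1 for _, B in pts if B <= b) / n
            gap = max(gap, abs(joint - fa * fb))
    return gap
```

The gap stays at sampling-noise level, which is consistent with independence. A useful identity here is $|x|+|y| = \max(|x+y|,\,|x-y|)$, so $(A,B)$ ranges over the square $[-1,1]^2$.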
|