Under which set of conditions could a function have just one local minimizer?
We aim to prove that the set $V(n, 2) = \{(a, b) \in \mathbb{R}^n \times \mathbb{R}^n : \|a\|^2 = \|b\|^2 = 1 \text{ and } \langle a,b\rangle = 0\}$ is a smooth $(2n-3)$-submanifold of $\mathbb{R}^n \times \mathbb{R}^n$. This will be demonstrated by considering the function $f :\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^3$ given by $f(a, b) = (\|a\|^2, \|b\|^2, \langle a,b\rangle)$ and showing that the derivative $Df(a, b)$ is a surjective linear transformation for all $(a, b) \in V(n, 2)$. Should I use the implicit function theorem for the first part, using that the derivative is a surjective linear map to $\mathbb{R}^3$? And I don't know how to prove the second part. Thanks for your help!
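For what it's worth, here is a sketch of the derivative computation (my own working, not from the original post). Since $f$ is quadratic, its derivative at $(a,b)$ applied to $(h,k)$ is
$$Df(a,b)(h,k) = \bigl(2\langle a,h\rangle,\; 2\langle b,k\rangle,\; \langle a,k\rangle + \langle b,h\rangle\bigr).$$
On $V(n,2)$ one can hit a basis of $\mathbb{R}^3$: $(h,k)=(a,0)$ gives $(2,0,0)$, $(h,k)=(0,b)$ gives $(0,2,0)$, and $(h,k)=(b,a)$ gives $(0,0,2)$, using $\|a\|=\|b\|=1$ and $\langle a,b\rangle=0$. So $Df(a,b)$ is surjective on $V(n,2)$, and the regular value theorem (a consequence of the implicit function theorem) gives $V(n,2) = f^{-1}(1,1,0)$ the structure of a submanifold of dimension $2n - 3$.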
Let $\{X_n\}$ be a sequence of random variables. $X_n\rightarrow X$ a.s. Let $G$ be a $\sigma$-algebra. Suppose each $X_n$ is independent of $G$. Is it true that $X$ is also independent of $G$? I proved it in this way: **Fact 1** a random variable $X$ is independent of a $\sigma$-algebra $G$ if and only if $\forall A\in G$, $X$ is independent of $1_A$. **Fact 2** [Kac's theorem][1]: two random variables $X,Y$ are independent if and only if $\forall \eta,\xi \in \mathbb{R}^d: \mathbb{E}e^{i (X,Y) \cdot (\xi,\eta)} = \mathbb{E}e^{i X \cdot \xi} \cdot \mathbb{E}e^{i Y \cdot \eta}$ By the two facts above, we can let $X=X_n$ and $Y=1_A$ and use the dominated convergence theorem to finish the proof. So my question is: 1. Can somebody help verify the proof or provide some suggestions? 2. Is there any alternative approach instead of using Kac's theorem (which I failed to find on Google)? Thank you very much. [1]: https://math.stackexchange.com/questions/287138/moment-generating-functions-characteristic-functions-of-x-y-factor-implies-x/287321#287321
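A sketch of how I understand the suggested argument (my own wording): for fixed $A\in G$ and $\xi,\eta\in\mathbb{R}$, independence of $X_n$ and $1_A$ gives
$$\mathbb{E}\,e^{i(\xi X_n + \eta 1_A)} = \mathbb{E}\,e^{i\xi X_n}\cdot \mathbb{E}\,e^{i\eta 1_A}.$$
Since $X_n\to X$ a.s. and all integrands are bounded by $1$, dominated convergence lets $n\to\infty$ on both sides, giving the same factorization for $(X, 1_A)$; Kac's theorem then yields independence of $X$ and $1_A$, and Fact 1 concludes.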
Why is a random walk $(X_n)_{n\in \mathbb{N}_0}$ on $\mathbb{R}^d$ (defined below) a time-homogeneous Markov process? In particular, why does $(X_n)$ satisfy [requirement 3 of Def17.3](https://math.stackexchange.com/questions/183945/why-does-a-time-homogeneous-markov-process-possess-the-markov-property)? **Definition of $(X_n)_{n\in \mathbb{N}_0}$** Let $(Y_n)_{n\in \mathbb{N}}$ be i.i.d. $\mathbb{R}^d$-valued random variables and let $$S^x_0=x,\;\;\;\;\;S^x_n=x+\sum_{k=1}^n Y_k\;\;\;(n\in \mathbb{N})\;\;\;\;x\in \mathbb{R}^d $$ Define probability measures $P_x$ on $\bigl((\mathbb{R}^d)^{\mathbb{N}_0},(\mathcal{B}(\mathbb{R}^d))^{\otimes \mathbb{N}_0}\bigr)$ by $P_x=P\circ (S^x)^{-1}$. Then the canonical process $$X_n:(\mathbb{R}^d)^{\mathbb{N}_0}\ni x=(x_k)_{k\in \mathbb{N}_0}\mapsto x_n\in \mathbb{R}^d$$ is a Markov chain with distributions $(P_x)_{x\in \mathbb{R}^d}$.
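A sketch of the standard argument, as I would write it (not from the original source): for bounded measurable $f$ and the non-canonical walk $S^x$,
$$\mathbb{E}\bigl[f(S^x_{n+1}) \mid \mathcal{F}_n\bigr] = \mathbb{E}\bigl[f(S^x_n + Y_{n+1}) \mid \mathcal{F}_n\bigr] = g(S^x_n), \qquad g(z) := \mathbb{E}\,f(z + Y_1),$$
because $Y_{n+1}$ is independent of $\mathcal{F}_n = \sigma(Y_1,\dots,Y_n)$ and the $Y_k$ are identically distributed. The transition kernel $\kappa(z, A) = P(z + Y_1 \in A)$ does not depend on $n$, which is exactly time-homogeneity; the statement then transfers to the canonical process under the $P_x$.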
How can I prove that this stochastic process is a time-homogeneous Markov process?
My Question
----
>Define subsets $G$, $G_0$, and $G_1$ of $\mathbb{R}$ by \begin{align*} G &= \{x:x=r+n\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\},\\ G_0 &= \{x:x=r+2n\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\},\ \text{and}\\ G_1 &= \{x:x=r+(2n+1)\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\}. \end{align*} Define a relation $\sim$ on $\mathbb{R}$ by letting $x \sim y$ hold when $x-y\in G$; the relation $\sim$ is then an equivalence relation on $\mathbb{R}$. Use the axiom of choice to form a subset $E$ of $\mathbb{R}$ that contains exactly one representative of each equivalence class of $\sim$. Let $A = E + G_0$ (that is, let $A$ consist of the points that have the form $e+g_0$ for some $e$ in $E$ and some $g_0$ in $G_0$).

I got confused by the book's remark that "*the set $A$ defined above is not Lebesgue measurable: if it were, then both $A$ and $A^c$ would include (in fact, would be) Lebesgue measurable sets of positive Lebesgue measure*". Could someone please help me explain why this is true?

Background Information
----
The above remark is made after the following proposition:

> **Proposition 1.4.11**$\quad$ *There is a subset $A$ of $\mathbb{R}$ such that each Lebesgue measurable set that is included in $A$ or in $A^c$ has Lebesgue measure zero.*
>
> **Proof**$\quad$ Define subsets $G$, $G_0$, and $G_1$ of $\mathbb{R}$ by \begin{align*} G &= \{x:x=r+n\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\},\\ G_0 &= \{x:x=r+2n\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\},\ \text{and}\\ G_1 &= \{x:x=r+(2n+1)\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\}. \end{align*} One can prove that $G$ and $G_0$ are subgroups of $\mathbb{R}$ (under addition), that $G_0$ and $G_1$ are disjoint, that $G_1 = G_0 + \sqrt{2}$, and that $G = G_0 \cup G_1$. Define a relation $\sim$ on $\mathbb{R}$ by letting $x \sim y$ hold when $x-y\in G$; the relation $\sim$ is then an equivalence relation on $\mathbb{R}$. Use the axiom of choice to form a subset $E$ of $\mathbb{R}$ that contains exactly one representative of each equivalence class of $\sim$. Let $A = E + G_0$ (that is, let $A$ consist of the points that have the form $e+g_0$ for some $e$ in $E$ and some $g_0$ in $G_0$).
>
> We now show that there does not exist a Lebesgue measurable subset $B$ of $A$ such that $\lambda(B)>0$. For this let us assume that such a set exists; we will derive a contradiction. Proposition 1.4.10 implies that there is an interval $(-\epsilon,\epsilon)$ that is included in $\text{diff}(B)$ and hence in $\text{diff}(A)$. Since $G_1$ is dense in $\mathbb{R}$, it meets the interval $(-\epsilon,\epsilon)$ and hence meets $\text{diff}(A)$. This, however, is impossible, since each element of $\text{diff}(A)$ is of the form $e_1-e_2+g_0$ (where $e_1$ and $e_2$ belong to $E$ and $g_0$ belongs to $G_0$) and so cannot belong to $G_1$ (the relation $e_1-e_2+g_0=g_1$ would imply that $e_1=e_2$ and $g_0=g_1$, contradicting the disjointness of $G_0$ and $G_1$). This completes our proof that every Lebesgue measurable subset of $A$ must have Lebesgue measure zero.
>
> One can check that $A^c = E + G_1$ and hence that $A^c = A + \sqrt{2}$. It follows that each Lebesgue measurable subset of $A^c$ is of the form $B+\sqrt{2}$ for some Lebesgue measurable subset $B$ of $A$.
Since $A$ has no Lebesgue measurable subsets of positive measure, it follows that $A^c$ also has no such subsets, and with this the proof is complete. The definition of $\text{diff}$ is the following: > **Definition**$\quad$ Let $A$ be a subset of $\mathbb{R}$. Then $\text{diff}(A)$ is the subset of $\mathbb{R}$ defined by \begin{align*} \text{diff}(A) = \{x-y:x \in A\ \text{and}\ y \in A\}. \end{align*} Proposition 1.4.10 is the following: > **Proposition 1.4.10**$\quad$ *Let $A$ be a Lebesgue measurable subset of $\mathbb{R}$ such that $\lambda(A) > 0$. Then $\text{diff}(A)$ includes an open interval that contains 0.* ---- Any help will be really appreciated!
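A sketch of why the remark holds, as I understand it (my own reasoning, not the book's): suppose $A$ were Lebesgue measurable. If $\lambda(A)=0$, then by translation invariance $\lambda(A^c)=\lambda(A+\sqrt{2})=\lambda(A)=0$, which would force $\lambda(\mathbb{R})=\lambda(A)+\lambda(A^c)=0$, a contradiction; so $\lambda(A)>0$, and likewise $\lambda(A^c)>0$. But then $A$ would itself be a Lebesgue measurable subset of $A$ with positive measure (and similarly for $A^c$), contradicting Proposition 1.4.11. This seems to be exactly the meaning of "would include (in fact, would be)" in the remark.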
Probability of not visiting a recurrent state does not go to zero under an infinite measure?
This is the first exercise in section 6.5 of Robert Ash's abstract algebra; I want to understand the given solution: Noting that $\psi_n(X^p)= \prod_{\omega_i} (X^p- \omega_i)$, the roots of $X^p - \omega_i$ are the $p$-th roots of $\omega_i$, which must be primitive $np$-th roots of unity because the map $\theta\to \theta^p$ is a bijection between primitive $np$-th roots of unity and primitive $n$-th roots of unity (because $\phi(np) = p\phi(n)$ as $p|n$). I am assuming here $\phi$ is Euler's function (we can show the above equality by induction), but what is the link with that bijection?
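A hedged side computation that may supply the missing link (my own working, not from the book): let $\eta^p=\omega_i$ where $\omega_i$ is a primitive $n$-th root of unity, and let $d=\operatorname{ord}(\eta)$. Then $\eta^{np}=(\eta^p)^n=1$, so $d\mid np$, and $\operatorname{ord}(\eta^p)=d/\gcd(d,p)=n$. If $p\nmid d$ this forces $d=n$, but then $\operatorname{ord}(\eta^p)=n/\gcd(n,p)=n/p\neq n$ (this is where $p\mid n$ is used), a contradiction; so $p\mid d$ and hence $d=np$. Thus every $p$-th root of a primitive $n$-th root of unity is a primitive $np$-th root of unity. Counting, there are $p\varphi(n)=\varphi(np)$ such $p$-th roots altogether, which is exactly the number of primitive $np$-th roots; so the roots of $\psi_n(X^p)$ are precisely the primitive $np$-th roots of unity, each once, and $\psi_n(X^p)=\psi_{np}(X)$, both sides being monic of degree $p\varphi(n)$.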
The cyclotomic polynomials satisfy $\psi_{pn}(X)= \psi_n(X^p)$ if $p|n$?
The Constant Elasticity of Variance (CEV) process is a one-dimensional diffusion process given by the following stochastic differential equation: \begin{equation} d X_t = \mu X_t \cdot dt + \sigma X_t^\beta \cdot d B_t \tag{1} \end{equation} where $\mu,\sigma,\beta$ are positive real parameters and $B_t$ is the Brownian motion. In what follows we assume that the starting value of the process reads $X_0 = x > 0$ and that $\beta > 1$. The infinitesimal generator of this process reads ${\mathfrak G}_z := \mu z \frac{d}{dz} + \frac{\sigma^2}{2} z^{2 \beta} \frac{d^2}{dz^2}$. The eigenfunctions of this operator, ${\mathfrak G}_z \phi^{\pm} (z) = \lambda \phi^{\pm}(z)$ for the eigenvalue $\lambda > 0$, are given below: \begin{eqnarray} \phi^{+}(z) &=&U\left(\frac{\lambda}{2(-1+\beta) \mu}, 1+ \frac{1}{2(-1+\beta)}, \frac{\mu z^{2-2\beta} }{(-1+\beta) \sigma^2} \right) \tag{2a} \\ \phi^{-}(z) &=& L_{-\frac{\lambda}{2(-1+\beta) \mu}}^{(\frac{1}{2(-1+\beta)})} \left( \frac{\mu z^{2-2\beta} }{(-1+\beta) \sigma^2}\right) \tag{2b} \end{eqnarray} where $U$ is the confluent hypergeometric function and $L$ is the generalized Laguerre polynomial. We have checked that $U(z)$ is a strictly increasing function of $z$. --------------------------- Now, by using the theory of diffusion processes, see section 4.6, pages 128-134 in <cite authors="Ito, K.; McKean, H. P. jun.">_Ito, K.; McKean, H. P. jun._, Diffusion processes and their sample paths, Berlin-Heidelberg-New York: Springer-Verlag. XVII, 321 p. (1965). [ZBL0127.09503](https://zbmath.org/?q=an:0127.09503).</cite>, we have found the Laplace transform of the first hitting time $\tau_y := \inf\{s>0 : X_s = y\}$ of a horizontal barrier $y$ by this process. The quantity in question reads: \begin{eqnarray} E_x \left[ e^{-\lambda \tau_y} \right] = \frac{\phi^{+}(x)}{\phi^{+}(y)} \quad \mbox{for $x \le y$} \tag{3} \end{eqnarray} Now, by inverting the Laplace transform in $(3)$ using the Bromwich integral and then the Cauchy theorem, we have expressed the probability density function of the first hitting time $n_x(t;y) := P_x\left( \tau_y \in dt\right)/dt$ as follows: \begin{eqnarray} n_x(t;y) = \sum\limits_{p=0}^\infty \underbrace{ \frac{U\left( -\zeta_p^{(y;\mu,\sigma,\beta)}, 1+ \frac{1}{2(-1+\beta)}, \frac{\mu x^{2-2\beta}}{(-1+\beta) \sigma^2} \right)}{ \zeta_p^{(y;\mu,\sigma,\beta)} U^{(1,0,0)}\left( -\zeta_p^{(y;\mu,\sigma,\beta)}, 1+ \frac{1}{2(-1+\beta)}, \frac{\mu y^{2-2\beta}}{(-1+\beta) \sigma^2} \right) } }_{{\mathfrak w}_p^{(y;\mu,\sigma,\beta)}} \cdot \underline{(2 (-1+\beta) \mu \zeta_p^{(y;\mu,\sigma,\beta)} ) \cdot e^{-2 (-1+\beta) \mu \cdot \zeta_p^{(y;\mu,\sigma,\beta)} \cdot t}} \tag{5} \end{eqnarray} As we can see, the quantity in $(5)$ is an infinite linear combination of exponential distributions with weights $\left( {\mathfrak w}_p^{(y;\mu,\sigma,\beta)} \right)_{p=0}^\infty$ that sum up to unity. Here $\left( \zeta_p^{(y;\mu,\sigma,\beta)} \right)_{p=0}^\infty$ are the zeros of the function $ {\mathbb R}_+ \ni \lambda \rightarrow U(-\lambda,1+ \frac{1}{2(-1+\beta)}, \frac{\mu y^{2-2\beta}}{(-1+\beta) \sigma^2}) \in {\mathbb R} $. ------------------------ Now we took the following process parameters $\mu,\sigma,\beta = 3/2,1/2,5/2$, the starting value and the barrier $x,y = 3/2, 5/2$, and we plotted the quantity $(5)$ below. We also verified the normalization numerically.
Here we go:

```
(* mprec and prec were not defined in the original; example values assumed *)
{mprec, prec} = {30, 20};
{\[Mu], \[Sigma], \[Beta]} = {3/2, 1/2, 5/2};
(* Here x <= y *)
{x, y} = {3/2, 5/2};
SetOptions[FindRoot, WorkingPrecision -> mprec, PrecisionGoal -> prec];
mzeros = \[Lambda] /.
   Table[FindRoot[
     HypergeometricU[-\[Lambda], 1 + 1/(2 (-1 + \[Beta])),
        (\[Mu] y^(2 - 2 \[Beta]))/((-1 + \[Beta]) \[Sigma]^2)] == 0,
     {\[Lambda], n}], {n, 0, 50}];
mzeros = Sort[#[[1]] & /@ Tally[mzeros]];
ts = Array[# &, {300}, {1/100, 3}];
vals = {#,
     Total[Table[
       HypergeometricU[-mzeros[[p]], 1 + 1/(2 (-1 + \[Beta])),
          (\[Mu] x^(2 - 2 \[Beta]))/((-1 + \[Beta]) \[Sigma]^2)]/
         Derivative[1, 0, 0][HypergeometricU][-mzeros[[p]],
          1 + 1/(2 (-1 + \[Beta])),
          (\[Mu] y^(2 - 2 \[Beta]))/((-1 + \[Beta]) \[Sigma]^2)] *
        (2 (-1 + \[Beta]) \[Mu]) Exp[-2 (-1 + \[Beta]) \[Mu] mzeros[[p]] #],
       {p, 1, Length[mzeros]}]]} & /@ ts;
ListPlot[vals, PlotRange -> All,
 AxesLabel -> {"t", "\!\(\*SubscriptBox[\(n\), \(x\)]\)(t,y)"}]
f = Interpolation[vals];
NIntegrate[f[xi], {xi, 0.01, 3}]
```

[![enter image description here][2]][2]

As you can see, the distribution in question has the correct shape and the correct normalization. The negative values close to the origin are due to numerical error.

--------------------------

Having said all this, my question would be how we evaluate the limit $\beta \rightarrow 1_+$. In this case the process tends towards the geometric Brownian motion, and as such we should have: \begin{equation} \lim_{\beta \rightarrow 1_+} n_x(t;y) \stackrel{(??)}{=} \frac{\left| \log(\frac{y}{x} )\right|}{\sqrt{2 \pi t^3} \sigma} e^{-\frac{1}{2 \sigma^2 t} \left[ \log(\frac{y}{x}) - (\mu - \frac{\sigma^2}{2} ) t\right]^2} \end{equation} as shown in [a previous question on a similar topic][1]. How do we work out this limit analytically in our framework?

##### Update:

We have verified numerically that the Laplace transform $(3)$ approaches the correct limit when $\beta \rightarrow 1_+$. See code below:

```
{lmb, mu, sig} = RandomReal[{0, 1}, 3, WorkingPrecision -> 50];
x = RandomReal[{0, 1}, WorkingPrecision -> 50];
y = RandomReal[{x, 2}, WorkingPrecision -> 50];
NN = 100;
HypergeometricU[lmb/mu NN, NN, NN (2 mu/sig^2) x^(-1/NN)]/
 HypergeometricU[lmb/mu NN, NN, NN (2 mu/sig^2) y^(-1/NN)]
(x/y)^((-2 mu + sig^2 + 2 Sqrt[2 lmb sig^2 + (mu - sig^2/2)^2])/(2 sig^2))
```

    0.275584165705622319278498109409236772998453 + 0.*10^-50 I
    0.2729011283963510407220223125184479965801097988605

[1]: https://math.stackexchange.com/questions/4643792/the-distribution-of-the-first-hitting-time-for-a-generic-linear-diffusion
[2]: https://i.stack.imgur.com/dYZ1G.png
> Theorem 4 The discretized signature kernel over $k$, $$ \mathrm{k}^{+}: X^{+} \times X^{+} \rightarrow \mathbb{R}, \quad \mathrm{k}^{+}(x, y)=\left\langle\mathrm{S}^{+}\left(\mathrm{k}_x\right), \mathrm{S}^{+}\left(\mathrm{k}_y\right)\right\rangle $$ > 1. is a positive definite kernel, > 2. $$\mathrm{k}^{+}(x, y)=\sum_{m \geq 0} \sum_{\substack{1 \leq i_1<\cdots<i_m<|x| \\ 1 \leq j_1<\cdots<j_m<|y|}} \prod_{r=1}^m \nabla_{i_r, j_r} \mathrm{k}(x, y)$$ > 3. $$\mathrm{k}^{+}(x, y)=1+\sum_{\substack{i_1 \geq 1 \\ j_1 \geq 1}} \nabla_{i_1, j_1} \mathrm{k}(x, y) \cdot\left(1+\sum_{\substack{i_2>i_1 \\ j_2>j_1}} \nabla_{i_2, j_2} \mathrm{k}(x, y) \cdot\left(1+\sum_{\substack{i_3>i_2 \\ j_3>j_2}} \ldots\right)\right)$$ where we use the notation $x=\left(x_i\right)_{i=1}^{|x|}, y=\left(y_i\right)_{i=1}^{|y|} \in X^{+}$and $$ \nabla_{i, j} \mathrm{k}(x, y):=\mathrm{k}\left(x_{i+1}, y_{j+1}\right)+\mathrm{k}\left(x_i, y_j\right)-\mathrm{k}\left(x_i, y_{j+1}\right)-\mathrm{k}\left(x_{i+1}, y_j\right) $$ > Source (page 14): https://jmlr.org/papers/volume20/16-314/16-314.pdf I'll explain what I am finding a bit hard using the first two $m$s. For $m=0$, (1) shows that both the $i_k$ and $j_k$ indices start from $k=1$ and end at $k=m$, but $m$ starts from zero, and, in the second equation, it seems like they are considering the case $i_0$ $j_0$ as $=1$. How does this work exactly? Generally, I don't see where the $1$s are coming from in (2). Let $m=1$, we get $$\sum_{1\leq i_1< |x|}\sum_{1\leq j_1<|y|}\prod_{r=1}^1f(i_r,j_r)$$ If either $|x|=1$ or $|y|=1$, what do I do? Would I be correct in saying that this notation assumes sequences that are longer than $1$? Finally, I wrote this as two sums, but I could be wrong here. What do you think? Or more generally, because of the inequalities in the summation, i.e. $i_1$ starting from $1$, $i_2$ from $i_1+1$, etc., all up to $|x \text{ or } y|-1$, then at least $\min(|x|, |y|)\geq m+2$ if I want to compute the sums at level $m$? [1]: https://i.stack.imgur.com/UskYb.png
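To test my reading of formula (2), here is a naive truncated implementation (my own sketch, not the paper's algorithm; `k` is any base kernel, and indices are 0-based so $1 \le i_1 < \dots < i_m < |x|$ becomes `range(len(x) - 1)`). The $m=0$ term is the empty product, which is where the leading $1$ comes from; for $m \ge 1$ the inner sums are simply empty (and contribute nothing) unless both sequences are long enough.

```python
import itertools
import numpy as np

def disc_sig_kernel(x, y, k, max_m=3):
    # Gram matrix K[i, j] = k(x_i, y_j)
    K = np.array([[k(xi, yj) for yj in y] for xi in x])
    # nabla_{i,j} k(x, y), with 0-based i, j
    nabla = lambda i, j: K[i + 1, j + 1] + K[i, j] - K[i, j + 1] - K[i + 1, j]
    total = 1.0  # m = 0: the empty product contributes 1
    for m in range(1, max_m + 1):
        # strictly increasing index tuples; empty when len(x) - 1 < m
        for I in itertools.combinations(range(len(x) - 1), m):
            for J in itertools.combinations(range(len(y) - 1), m):
                total += np.prod([nabla(i, j) for i, j in zip(I, J)])
    return total

# Example with a linear base kernel on scalars:
print(disc_sig_kernel([0.0, 1.0, 2.0], [0.0, 0.5, 1.5], k=lambda a, b: a * b))
```

If either sequence has length $1$, the loops above produce no terms and the result is just $1$, which is consistent with reading the double constraint as a single sum over pairs of increasing tuples of equal length $m$.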
Let $V, V'$ be full flags in $\mathbb{C}^n$, let $\lambda, \mu$ be admissible partitions, and let $$\sigma_\lambda(V) = \{\Lambda: \dim(\Lambda \cap V_{n-k+i-\lambda_i}) \geq i\}$$ and $$\sigma_\mu(V') = \{\Lambda: \dim(\Lambda \cap V'_{n-k+i-\mu_i}) \geq i\}$$ be general Schubert cycles. Then for each $i$ and any $\Lambda \in \sigma_\lambda(V) \cap \sigma_\mu(V')$, we have $$\dim(\Lambda \cap V_{n-k+i-\lambda_i}) \geq i$$ and $$\dim(\Lambda \cap V'_{n-k+(k-i+1)-\mu_{k-i+1}})\geq k-i+1.$$ This implies that $$\Lambda \cap V_{n-k+i-\lambda_i} \cap V'_{n-i+1-\mu_{k-i+1}} \neq 0.$$ I don't understand why the last statement is true. I assume we have to show that $$\dim(\Lambda \cap V_{n-k+i-\lambda_i} \cap V'_{n-i+1-\mu_{k-i+1}}) \geq 1,$$ but how can this be shown? We can use the dimension formula from linear algebra, but for that to work, I need to show that $$\dim\bigl((\Lambda \cap V_{n-k+i-\lambda_i}) + ( \Lambda \cap V'_{n-i+1-\mu_{k-i+1}})\bigr) \leq k,$$ but I don't know if this last inequality is true. I would appreciate any help. Thank you!
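A sketch of the dimension count I believe is intended (standard linear algebra, my own wording): both intersections are linear subspaces of the $k$-dimensional space $\Lambda$, and for subspaces $U, W \subseteq \Lambda$ one always has
$$\dim(U \cap W) \ge \dim U + \dim W - \dim(U + W) \ge \dim U + \dim W - k,$$
since $U + W \subseteq \Lambda$. With $\dim U \ge i$ and $\dim W \ge k - i + 1$ this gives $\dim(U \cap W) \ge i + (k-i+1) - k = 1$, i.e. the triple intersection is nonzero. Note the sum $U + W$ need not be direct; the bound $\dim(U+W) \le k$ is all that is required, and it is automatic.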
See Theorem 3.3 https://www.sciencedirect.com/science/article/pii/S0022247X12002600?via%3Dihub#s000015 and also a recent preprint https://arxiv.org/abs/2312.07940.
Given a convergent power series in x, can one prove the equivalent power series in (1-x) is convergent?
I have a certain infinite power series: $S(x) = a_0+a_1x+a_2x^2+a_3x^3+ \ldots$, which is defined for $0 \leq x \leq 1$ and I have reasons to assume that it is convergent for all such $x$. Is there any way to prove that if one rewrites this series in the form $S(1-y) = b_0+b_1y+b_2y^2+b_3y^3+ \ldots$ then the infinite expressions (sums) for $b_0, b_1, b_2,$ etc. obtained from this equality are all convergent, and the series in powers of $y$ is convergent, too? I only have approximate numerical values for $a_0, a_1, a_2,$ etc. up to $a_{120}$ (don't have any closed form formula for them). Leszek
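Whether those sums converge is exactly the question, but numerically one can at least compute truncated coefficients from the known $a_n$: expanding $(1-y)^n$ by the binomial theorem gives $b_k = (-1)^k \sum_{n \ge k} \binom{n}{k} a_n$, truncated here at the available data (a sketch, my own code):

```python
from math import comb

def truncated_b(a):
    # b_k = (-1)^k * sum_{n >= k} C(n, k) * a_n, truncated to the known a_n
    N = len(a)
    return [(-1) ** k * sum(comb(n, k) * a[n] for n in range(k, N)) for k in range(N)]
```

Note the binomial weights grow quickly, so the truncation error can be large for moderate $k$ unless the $a_n$ decay fast; this is a numerical probe, not a convergence proof.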
Is there a specific terminology for these upper unitriangular matrices?
[![enter image description here][1]][1] [1]: https://i.stack.imgur.com/iwaf5.jpg The slice maps are defined in the above screenshot. If we take a normal state $\psi$ on $N$, then the slice map $L_\psi$ defined by $L_\psi(\sum x_i\otimes y_i)=\sum \psi(y_i)x_i$ extends to a normal conditional expectation from $M\otimes N$ to $M$. Can we choose a special $\psi$ that makes $M\otimes N$ $*$-isomorphic to $N$?
I am currently studying martingales with Resnick's book *A Probability Path*. He defines a martingale as closed on the right if there is an $X \in L_1$ such that $X_n = \mathbb{E}[X \mid \mathcal{B}_n]$ for all $n$. One can also find that definition [here](https://math.stackexchange.com/questions/246762), defining it as "right-closable." What I am curious about is the "on the right." I did a little bit (perhaps not enough) searching around on the internet and I can't seem to find any definition of what it might mean to be closed "on the left." Does anyone happen to know of such a definition or where I might find one?
I'm trying to solve the 2D heat equation in Cartesian coordinates with an initial condition of the form $1/r$. The problem is as follows: $$ \begin{equation} \begin{cases} \frac{\partial u}{\partial t} = \alpha \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right), & x \in \mathbb{R}, y \in \mathbb{R}, t > 0 \\ u(x, y, 0) = \frac{-y}{x^2 + y^2}, & x \in \mathbb{R}, y \in \mathbb{R}. \end{cases} \end{equation} $$ Separating variables doesn't seem to be the best approach here, and I couldn't find the Fourier transform of this singular initial condition. So I'm trying to solve it via convolution: $u(x, y, t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} G(x - \xi, y - \eta, t) \frac{-\eta}{\xi^2 + \eta^2} \, d\xi \, d\eta$ where: $G(x, y, t) = \frac{1}{4 \pi \alpha t} e^{-\frac{x^2 + y^2}{4 \alpha t}}$ The problem is that the integral for $u(x,y,t)$ doesn't seem to have an analytical solution. Is that really the case? Can this PDE only be solved numerically?

Edit: The Fourier transform of the initial condition can be found explicitly with Mathematica:

```
initialFourierTransform = -((I ky (HeavisideTheta[-kx] + HeavisideTheta[kx]))/(kx^2 + ky^2))
```

However, the evaluation of the inverse Fourier transform to find the solution is taking a lifetime, and I suspect there might be an error.

```
uHat[kx_, ky_, t_] := initialFourierTransform*Exp[-alpha*(kx^2 + ky^2)*t]
(*Compute the inverse Fourier transform to find u(x,y,t)*)
u[x_, y_, t_] := InverseFourierTransform[uHat[kx, ky, t], {kx, ky}, {x, y}]
```
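For what it's worth, a closed form seems to exist here (my own observation, to be checked, not from the original post): the initial datum is the velocity component of a point vortex, and its heat evolution is the corresponding component of the Lamb–Oseen vortex,
$$u(x,y,t) = \frac{-y}{x^2+y^2}\Bigl(1 - e^{-\frac{x^2+y^2}{4\alpha t}}\Bigr).$$
One can verify directly that $f(r,t) = \frac{1}{r}\bigl(1-e^{-r^2/(4\alpha t)}\bigr)$ satisfies $\partial_t f = \alpha\bigl(f'' + \tfrac{1}{r}f' - \tfrac{1}{r^2}f\bigr)$, which is what the scalar heat equation reduces to for the single angular mode $u = -f(r,t)\sin\theta$, and that $u \to u_0$ pointwise as $t \to 0^+$.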
How to use variational calculus on a ratio of two functionals, with inequality constraint associated with the denominator?
I am not understanding how to conclude this problem; I've been trying for days so far. I have to find the minimum of $f(x, y) = (x-2)^2+y$ subject to $y-x^2 \geq 0, y + x^3 \leq 0, y \geq 0$. I have tried to give a sketch of the problem, since it is easy to draw: the level curves of $f$ are parabolas (concave), and the region described by the constraints is the upper left part of the plane under the curve $-x^3$. With the level curves, I only found one "good" candidate, which is the point $(0, 0)$, but that is only a graphical argument. How can I solve the problem via Lagrange multipliers? I thought about the KKT conditions too, but the exercise tells me that "I have to verify the solution doesn't satisfy KKT conditions", which I don't get, since $(0, 0)$ satisfies all the constraints.
Is it okay to use an indicator function and have two statistics in the support? And, if this is possible, would the indicator function be a function of both statistics? We have that $\mathbf{X}$ is a random sample from Uniform$(\theta, \theta+1)$ and we want to re-write the joint pdf from $$f(\mathbf{x}| \theta) = \prod \mathbf{1} [\theta < x_i < \theta +1]$$ to this new pdf $$f(\mathbf{x}|\theta) = \mathbf{1} [\max(\mathbf{x}) - 1 < \theta < \min(\mathbf{x})]$$ Does this new function help us find sufficient statistics or minimal sufficient statistics? Is it okay to treat indicator functions this way?
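A hedged note on the equivalence itself (my own reasoning): the product of indicators is $1$ exactly when every $x_i$ satisfies $\theta < x_i < \theta+1$, i.e.
$$\prod_{i=1}^n \mathbf{1}[\theta < x_i < \theta+1] = \mathbf{1}\bigl[\theta < \min_i x_i \text{ and } \max_i x_i < \theta+1\bigr] = \mathbf{1}\bigl[\max_i x_i - 1 < \theta < \min_i x_i\bigr],$$
so the rewritten pdf is literally the same function of $(\mathbf{x},\theta)$. The indicator is genuinely a function of the two statistics $(\min_i x_i, \max_i x_i)$ jointly (and of $\theta$), which is why the pair appears as the natural sufficient statistic here.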
Indicator functions based on one or more statistics (Is it alright?) (Mathematical stats)
This is a simple exercise needed to prove the Optional Stopping Theorem that I'm working on. Suppose $(X_n)$ is a supermartingale and we have stopping times $T, S$. Then we already know in general that $\mathbb{E}(X_T|F_S)\leq X_{\min(S,T)}$ by splitting $X_T=X_{\min(S,T)} +\sum_{k=0}^n (X_{k+1}-X_k)1_{S\leq k< T}$; however, I want to show directly, with $T\leq S$, that $$\mathbb{E}[X_T|F_S]\leq X_T$$ This would seem trivial if the expectation were conditioned on $F_n$ where $n\leq T$ almost surely, but I'm unsure of the proof for a stopped $\sigma$-algebra. I can do this: $$\mathbb{E}(\mathbb{E}[X_T|F_S]1_{A})\leq \mathbb{E}(X_T 1_A)$$ for any $A \in F_S$, i.e. $A\cap\{S\leq n\}\in F_n$ for any $n\geq 0$. This yields, by the earlier assumption, that $A\cap \{S\leq n\}\cap \{T\leq S\}\in F_n$, but can we then conclude from there that $A\cap \{T\leq n\}\in F_n$? I guess the above is the same as trying to show that $X_T$ is $F_S$-measurable if $T\leq S$ a.s. I would appreciate any brief pointers on how I can go about this, as well as anything I should add to better clarify my current thinking.
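A sketch of the measurability step, assuming $T \le S$ a.s. (my own wording): for a Borel set $B$ and each $n$,
$$\{X_T \in B\} \cap \{S \le n\} = \bigcup_{k=0}^{n} \bigl(\{X_T \in B\} \cap \{T = k\}\bigr) \cap \{S \le n\},$$
since $T \le S \le n$ on $\{S \le n\}$. Each $\{X_T \in B\} \cap \{T = k\} = \{X_k \in B\} \cap \{T = k\} \in F_k \subseteq F_n$, and $\{S \le n\} \in F_n$, so $\{X_T \in B\} \in F_S$. Hence $X_T$ is $F_S$-measurable, and then $\mathbb{E}[X_T \mid F_S] = X_T$ whenever $X_T$ is integrable.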
I'm attempting to solve the following ODE: $$xy''+y'-y=0$$ According to Frobenius' theorem, there exists a solution to this ODE in the form of a series given by the linear combination of two solutions such that $y=C_1y_1+C_2y_2$, where: $$y_1=\displaystyle \sum_{n=0}^\infty a_nx^n$$ $$y_2=y_1\log|x|+\displaystyle \sum_{n=0}^\infty b_nx^n$$ My issue is not how to solve the ODE, I've already managed to do it, but rather how to find the second solution, $y_2$, in a more elegant way instead of differentiating the expression twice, substituting the series in the equation and identifying a pattern for the general term of the sum. In order to do this, following my professor's advice, I have defined the differential operator: $$L=x\frac{d^2}{dx^2}+\frac{d}{dx}-1$$ where, of course, $L[y_1]=L[y_2]=0$ since they are both solutions to my equation. Applying this operator to the second solution (writing $u_2=\sum_{n=0}^\infty b_nx^n$), I get: $$L[y_2]=L[y_1]\log|x|+y_1L[\log|x|]+L[u_2]=0$$ Since the first term is null, I end up with: $$y_1L[\log|x|]+L[u_2]=0$$ and from here I get: $$-y_1+L[u_2]=0$$ although, according to the calculations I did earlier, before using operators, what I should really obtain is: $$2y_1'+L[u_2]=0$$ Why is this? I'm assuming it is because I'm treating my operator as linear when it really isn't, although I don't quite understand why it wouldn't be, since it fulfils the basic properties of linearity.
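A hedged answer sketch (my computation): $L$ *is* linear, but linearity only concerns sums and scalar multiples; $y_1 \log|x|$ is a *product* of two functions, and for a second-order operator the product rule produces a cross term. Explicitly, with $v = \log|x|$,
$$(y_1 v)'' = y_1'' v + 2 y_1' v' + y_1 v'',$$
so
$$L[y_1 v] = x(y_1 v)'' + (y_1 v)' - y_1 v = v\,L[y_1] + y_1\bigl(x v'' + v'\bigr) + 2x\,y_1' v' = 0 + 0 + 2y_1',$$
using $xv'' + v' = x(-1/x^2) + 1/x = 0$ and $2x y_1' v' = 2y_1'$. The expansion $L[y_1 v] = L[y_1]v + y_1 L[v]$ is the product-rule identity for *first-order* operators without a zeroth-order term; here it misses the cross term $2xy_1'v'$. The correct identity $L[y_1 v] = 2y_1'$ recovers $2y_1' + L[u_2] = 0$.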
So, let $A$ be a $C^k$-atlas, and $B$ a set of charts such that each chart in $B$ is compatible with each chart in $A$. The thing is that I can take $(V_1,\psi_1),(V_2,\psi_2) \in B$ such that $V_1\cap V_2 \neq \varnothing$. By the definition of $B$, there is $(U,\phi)\in A$ which is compatible with $(V_1,\psi_1)$ and $(V_2,\psi_2)$. This means: $\phi\circ \psi_{1}^{-1}:\psi_{1}(U\cap V_{1}) \longrightarrow \phi(U\cap V_{1})$ and $\psi_2\circ \phi^{-1}:\phi(U\cap V_{2}) \longrightarrow \psi_2(U\cap V_{2})$ are $C^k$. So they conclude that, by composition, $\psi_2\circ\psi_1^{-1}$ is $C^k$. But as far as I understand, this would only mean that its restriction to $\psi_{1}(U\cap V_1\cap V_2)$ is $C^k$. My teacher told me that it was because of the implicit function theorem, but I don't really see it. So my question is: why can we conclude that the compatibility in fact holds? If someone helps me with that or gives me a resource to read, I will really appreciate it. Edit: to clarify the question, what I mean is: how can I conclude that $\psi_2\circ\psi_1^{-1}$, with its full domain $\psi_1(V_1\cap V_2)$, is in fact $C^k$?
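A hedged remark on what I think resolves this (being $C^k$ is a local property): for each point $q \in \psi_1(V_1 \cap V_2)$, the atlas $A$ covers the manifold, so there is some chart $(U, \phi) \in A$ with $\psi_1^{-1}(q) \in U$. On the neighborhood $\psi_1(U \cap V_1 \cap V_2)$ of $q$ one can factor
$$\psi_2 \circ \psi_1^{-1} = (\psi_2 \circ \phi^{-1}) \circ (\phi \circ \psi_1^{-1}),$$
a composition of $C^k$ maps. Since every point of $\psi_1(V_1 \cap V_2)$ has such a neighborhood, the whole transition map is $C^k$; as far as I can tell no implicit function theorem is needed, only the fact that the charts of $A$ cover $M$ (the chart $(U,\phi)$ may vary from point to point).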
I'm looking for a function (preferably in Python) which takes a complex number $a$ as input and outputs the zeros of the function $f:x \rightarrow J_x(a)$, where $J_x(a)$ is the Bessel function of the first kind. This problem is relevant to solving a Schrödinger equation in a 1D condensed matter system (N.B. I am **not** asking to find the zeros of the Bessel function $f:x \rightarrow J_n(x)$). Does anybody know of a programming language such as Python, Wolfram Language, etc. which has this built in; and if so, what is that function? It would save me the effort of implementing some sort of Newton's-method-esque function myself. Please let me know if this belongs on Stack Overflow.
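I'm not aware of a built-in for zeros in the *order* (mpmath's `besseljzero`, for instance, finds zeros in the argument), but here is a minimal bracket-and-refine sketch using mpmath, assuming real $a$; for genuinely complex $a$ the sign-change test below does not apply, and one would instead run `mpmath.findroot` from a grid of complex starting points.

```python
from mpmath import mp, besselj, findroot

def order_zeros(a, nu_max=20, step=0.05):
    """Real zeros nu of f(nu) = J_nu(a) in (0, nu_max), assuming real a."""
    mp.dps = 30
    f = lambda nu: besselj(nu, a)
    zeros, nu, prev = [], mp.mpf(0), besselj(0, a)
    while nu < nu_max:
        cur = f(nu + step)
        if prev * cur < 0:  # a sign change brackets a root
            zeros.append(findroot(f, nu + step / 2))
        prev, nu = cur, nu + step
    return zeros

print(order_zeros(10.0))
```

This is a grid scan, so double roots or roots closer together than `step` can be missed; it is a sketch, not a robust solver.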
Joe is playing a game where he is given a bag one at a time and has to win the bag by drawing the corresponding color marble from the bag. There are 6 bags. Each bag is assigned a color and contains marbles of that color: the red bag has red marbles, the blue bag has blue marbles, and so on. Each bag also has a proportion of black marbles. If Joe draws from the red bag, he'll either get a red marble or a black marble. If Joe gets a non-black marble from the bag, he "wins" the bag. If he draws a black marble, he doesn't win the color. The proportions of color marbles in their respective bags are as follows.

```
{'Red': 0.25, 'Blue': 0.30, 'Purple': 0.15, 'Green': 0.30, 'Yellow': 0.30, 'Orange': 0.20}
```

In this example, .25 of the marbles are Red and .75 are black.

**Winning outcomes**

To win the game Joe has to win a specific set of colors. Here is a minimum list of events where Joe can win.

```
[('red', 'purple', 'yellow'), ('red', 'yellow', 'blue'), ('orange', 'purple', 'yellow'), ('orange', 'yellow', 'blue'), ('purple', 'yellow', 'blue'), ('red', 'green', 'orange', 'yellow'), ('red', 'green', 'purple', 'blue'), ('red', 'orange', 'purple', 'blue'), ('green', 'orange', 'purple', 'blue')]
```

**Rules**:

- Joe is handed one bag at a time and the order of the bags is completely random.
- Joe draws only one marble from the bag he is handed.
- If Joe draws a non-black marble, he wins that color.
- The game ends when he wins a combination of bags that is contained in the winning outcomes list above, or he draws black marbles from each bag.
- Joe only plays the game once.

The probability of each outcome is the intersection of the events, which can be found by taking the product of the events.

```
P(Red) * P(Purple) * P(Yellow) = .25 * .15 * .30
```

and so on.

**Question**: What is the total probability that he gets any one of the outcomes and wins the game?

I'm really struggling with this. I don't think it's the probability that exactly 1 event occurs.

```
P(exactly 1 occurs) = P(A and not B and not C) + P(not A and B and not C) + P(not A and not B and C)
```

The result of that method would seem like too low of a probability. More to my confusion, if he draws from the red bag first and wins a red marble, but then draws from the purple bag and gets a black marble, then any winning outcome that contains a purple bag is eliminated from his winning outcomes list. In this example, if he draws a black marble from a purple bag, that eliminates five outcomes that he can win. (The outcome with purple, yellow, blue is NOT eliminated because Red, Purple, Yellow, Blue is a superset of purple, yellow, blue). I think this is the source of my confusion. My winning outcomes don't contain supersets, so I might be missing data that otherwise would make this easier to understand. How can I understand this?
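One way I'd sanity-check any closed-form reasoning (my own sketch): since the draws are independent and stopping early never changes whether a winning combination would eventually be covered, Joe wins exactly when the set of colors he would win across all six bags contains at least one of the listed combinations. That probability can be computed by brute force over all $2^6$ outcomes:

```python
from itertools import product

p = {'red': .25, 'blue': .30, 'purple': .15, 'green': .30, 'yellow': .30, 'orange': .20}
wins = [{'red', 'purple', 'yellow'}, {'red', 'yellow', 'blue'},
        {'orange', 'purple', 'yellow'}, {'orange', 'yellow', 'blue'},
        {'purple', 'yellow', 'blue'}, {'red', 'green', 'orange', 'yellow'},
        {'red', 'green', 'purple', 'blue'}, {'red', 'orange', 'purple', 'blue'},
        {'green', 'orange', 'purple', 'blue'}]

colors = list(p)
total = 0.0
for outcome in product([True, False], repeat=len(colors)):
    won = {c for c, w in zip(colors, outcome) if w}
    prob = 1.0
    for c, w in zip(colors, outcome):
        prob *= p[c] if w else 1 - p[c]
    if any(w <= won for w in wins):  # some winning combo is covered
        total += prob
print(total)
```

The subset test `w <= won` automatically handles the superset issue raised above: losing purple only matters if no purple-free combination is still covered.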
What's the probability of Joe winning a set of outcomes?
An elliptic curve over a field $K$ is defined as a 'genus $1$ curve with a base point'. My question is: why don't we adopt the definition 'a smooth curve that is isomorphic to a degree $3$ smooth curve, with a base point'? I think these definitions are equivalent because a genus $1$ curve with a base point is isomorphic to the Weierstrass form over $K$, which is a degree $3$ smooth curve with a base point. On the other hand, every smooth curve that is isomorphic to a degree $3$ curve has genus $\frac{(3-1)(3-2)}{2} = 1$. What is the merit of defining elliptic curves as 'a genus $1$ curve with a base point' rather than as 'a smooth curve that is isomorphic to a degree $3$ smooth curve with a base point'?
Exercise 2.13 on page 46 reads >(Double Induction) Let $P(x,y)$ be a property. Assume > >(**) If $P(k,l)$ holds for all $k,l \in \mathbb{N}$ such that $k < m$ or ($k=m$ and $l<n$), then >$P(m,n)$ holds. > >Conclude that $P(m,n)$ holds for all $m,n \in \mathbb{N}$. Since the authors state on page 44 >(The Induction Principle, Second Version) Let $P(x)$ be a property (possibly with parameters). Assume that, for all $n\in\mathbb{N}$, > >(*) If $P(k)$ holds for all $k<n$, then $P(n)$ holds. > >Then $P(n)$ holds for all $n \in \mathbb{N}$. I was trying to solve the exercise applying the second version of induction (answers [here][1] use a double application of regular induction, which is stated on page 42 of the book) My argument is as follows: Assume (\*\*) is true. Let $n=n_{0}$ be fixed and consider the property $Q(m):P(m,n_{0})$. Then (\*\*) implies the condition (*) for $Q$ and by the second version of induction we have $\forall m\, P(m,n_{0})$. Question: Since $n_{0}$ was arbitrary, can I conclude that $P(m,n)$ holds for all $m,n\in\mathbb{N}$? If not, I guess I can perform an additional symmetric application of the second version of induction. This time fixing $m=m_{0}$ in order to conclude $\forall n\, P(m_{0},n)$. Hence, do I need to apply the second version of induction above once or twice? [1]: https://math.stackexchange.com/questions/1004990/double-induction
I have difficulty understanding the solution below and have already summarized my difficulties as follows, 1. why "the area of the polygon $abcdef$ .... represents the number of ways the three points can be taken, so that the circle circumscribing the triangle will lie wholly within the given polygon." It looks like the ratio of the two areas is the probability that a circle with a given radius lies wholly in the polygon. 2. "An element of the polygon at $G$ is $4 x d x d \psi$", what does the "element" mean? a differential form? and why is it $4 x d x d \psi$; 3. why is an element of the polygon at $R$ equal to $dt$? 4. why is $dt$ only a differential form in $d\theta$? why not include $dx$ too? 5. why do we introduce an auxiliary angle $\phi$? **Problem**: A circle is circumscribed about a triangle formed by joining three points taken at random in the surface of a circumscriptible polygon of $n$ sides; find the chance that the circle lies wholly within the polygon. **Solution**: Let $ABCDEF\ldots$ be the polygon, $PGR$ the triangle formed by joining the three random points $P$, $G$ and $R$, $O$ the center of the circle circumscribing $PGR$ (which I think is a typo; it should be $M$ rather than $O$). Draw the polygon $abcdef\ldots$, making its sides parallel to those of the given polygon, and at a distance from them equal to $MP$, and draw $ONS$ and $MH$ perpendicular to $AB$ and $PG$. Check the figure below [![Illustration][1]][1] Now while $PG$ is given in length and direction, and $\angle PRG$ is given, if $MP$ is less than the radius of the inscribed circle of the given polygon, the area of the polygon $abcdef\ldots$ represents the number of ways the three points can be taken, so that the circle circumscribing the triangle will lie wholly within the given polygon. Let $PG =2x$, $OS =r$, perimeter of $ABCDEF\ldots$ $=s$, area of segment $PRG$ $=t$, area of sector $PMG$ $=v$, area of triangle $PMG$ $=u$, area of polygon $ABCDEF\ldots$ $= \Delta$, $\angle PMH = \theta$, $\phi=\sin ^{-1}(\frac{x}{r})$, and $\psi=$ the angle which $PG$ makes with some fixed line. Then we have $PM = x \csc \theta$, $ON = r - x \csc \theta$, area $abcdef \ldots$ $=(r - x \csc \theta)^{2} \frac{\Delta}{r^{2}}$, $v = \theta x^{2} \csc^{2} \theta$, $u = x^{2} \cot \theta$, $t=v-u$, and $dt = dv - du = 2 x^{2} \csc^{2} \theta(1 - \theta \cot \theta) d \theta$. An element of the polygon at $G$ is $4 x d x d \psi$, or $4 r^{2} \sin \phi \cos \phi d \phi d \psi$, and at $R$ it is $dt$. The limits of $x$ are $0$ and $r$; of $\phi$, $0$ and $\frac{1}{2} \pi$; of $\theta$, $\phi$ and $\pi - \phi$; and of $\psi$, $0$ and $2 \pi$.
Hence, doubling, since $R$ may lie on either side of $PG$, we have for the required chance, \begin{align*} p &=\frac{2}{\Delta^3} \int_0^r 4 x d x \int_{\theta=\phi}^{\theta=\pi-\phi} d t \int_0^{2 \pi} d \psi(r-x \csc \theta)^2 \frac{\Delta}{r^2}\\ &=\frac{16 \pi}{r^{2}\Delta^2} \int_0^r x d x \int_{\theta=\phi}^{\theta=\pi-\phi} d t(r-x \csc \theta)^2\\ &= \frac{32 \pi}{r^2 \Delta^2} \int_0^r x^3 d x \int_\phi^{\pi-\phi}(r - x \csc\theta)^{2}(1 - \theta \cot \theta) \csc^2 \theta d\theta\\ &= \frac{2 \pi r^{4}}{3 \Delta^2} \int_0^{\frac{1}{2}\pi} [2(\pi-2 \phi) \sin 2 \phi+4-4 \cos 4 \phi-3 \sin ^2 2 \phi \cos 2 \phi+64 \sin^{4} \phi \cos \phi \log \tan \frac{1}{2} \phi] d \phi\\ &= \frac{2 \pi^2 r^{4}}{5 \Delta^2}\\ &=\frac{8 \pi^2 r^2}{5 s^2} \\ \end{align*} **Remarks** If $ABCDEF \ldots$ is a circle, $s = 2\pi r$, so $p = 2/5$, and the problem is from the book "Inside Interesting Integrals" written by Prof. Paul J. Nahin, where it is stated as follows: Imagine a circle (let's call it $C_{1}$) that has radius $a$. We then choose at random (i.e., uniformly distributed) and independently, three points from the interior of that circle. These three points, if non-collinear, uniquely determine another circle, $C_{2}$. $C_{2}$ may or may not be totally contained within $C_{1}$. What is the probability that $C_{2}$ lies totally inside $C_{1}$? See the [answer][2] [1]: https://i.stack.imgur.com/hrq3H.png [2]: https://math.stackexchange.com/questions/3555686/probability-of-random-sphere-lying-inside-the-unit-ball?noredirect=1&lq=1
[![enter image description here][2]][2] How do I prove the second part? ___ **My attempt:** I used the following formula: $$(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)} = \int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s$$ $$ \mathbb{E}\left(\left[(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)}\right]\left[(Y \cdot M)_{\min(T,t)} - (Y \cdot M)_{\min(S,t)}\right]\right) = \mathbb{E}\left[\int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s \times \int_{\min(S,t)}^{\min(T,t)} Y_s \, dM_s\right]$$ Is there an Itô formula that I can use to go further? [1]: https://i.stack.imgur.com/doo2O.png [2]: https://i.stack.imgur.com/C997o.png
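A hedged pointer (my suggestion, not from the exercise): the natural tool here is the polarized Itô isometry for square-integrable martingales,
$$\mathbb{E}\left[\int_0^t X_s\,dM_s \int_0^t Y_s\,dM_s\right] = \mathbb{E}\left[\int_0^t X_s Y_s\, d\langle M\rangle_s\right],$$
which follows from the isometry $\mathbb{E}\bigl[(\int X\,dM)^2\bigr] = \mathbb{E}\bigl[\int X^2\,d\langle M\rangle\bigr]$ applied to $X+Y$ and $X-Y$ and subtracting. Applied to the integrands $X\mathbf{1}_{(\min(S,t),\,\min(T,t)]}$ and $Y\mathbf{1}_{(\min(S,t),\,\min(T,t)]}$, it turns the product of the two stochastic integrals above into a single $d\langle M\rangle$ integral over $(\min(S,t), \min(T,t)]$.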
I managed to solve it using the following function: given Cartesian points $A$ and $B$ on the sphere, the geodesic path is parametrized as $r(t) = \sin(1-t)\,A + \sin(t)\,B$, for $t \in [0, 1]$. Can someone explain to me why this works?
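A hedged explanation (my reading of why the formula works): any combination $\alpha A + \beta B$ with $\alpha, \beta \ge 0$ lies in the plane spanned by $A$ and $B$, so after normalizing back to the sphere it lands on the great-circle arc between $A$ and $B$; that is why trigonometric weights of this shape trace the geodesic. The standard spherical linear interpolation (slerp) is
$$r(t) = \frac{\sin\bigl((1-t)\Omega\bigr)}{\sin\Omega}\,A + \frac{\sin(t\Omega)}{\sin\Omega}\,B, \qquad \cos\Omega = \langle A, B\rangle,$$
which stays exactly on the unit sphere and moves at constant speed. The formula in the question coincides with this (up to the harmless overall factor $\sin\Omega$) precisely when $\Omega = 1$ radian; for other angles it still traces the correct arc after normalization, since the weight ratio $\sin(t)/\sin(1-t)$ increases monotonically from $0$ to $\infty$, but the parametrization is no longer uniform.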
I have to understand a thing about this exercise: find the minimum of $f(x, y) = (x-2)^2 + y$ subject to $y-x^3 \geq 0$, $y+x^3 \leq 0$ and $y \geq 0$. Now, I solved the problem quite easily by sketching: the level curves of $f(x, y)$ are concave parabolas ($y = k - (x-2)^2$), and the feasible region is the upper left part of the plane, bounded by the negative $x$-axis and lying under the curve $-x^3$. The candidate solution is $(0, 0)$, at which $f(x, y) = 4$. On the other side, I wanted to solve it with Kuhn-Tucker multipliers, so I set the problem in the standard form for a minimum problem, that is: $$-\max -f(x, y) \qquad \text{s.t.} \qquad \begin{cases} -y+x^3 \leq 0 \\ y+x^3 \leq 0 \\ -y \leq 0 \end{cases}$$ with KKT Lagrangian $$L = -(x-2)^2-y- \lambda(-y+x^3) - \mu(y+x^3) - \Theta(-y)$$ which leads to the optimality conditions $$ \begin{cases} -2(x-2) - 3\lambda x^2 - 3\mu x^2 = 0 \\ -1+\lambda - \mu + \Theta = 0 \\ -y + x^3 \leq 0 \quad ; \quad \lambda(-y+x^3) = 0 \\ y+x^3 \leq 0 \quad ; \quad \mu(y+x^3) = 0 \\ -y\leq 0 \quad ; \quad \Theta y = 0 \\ \lambda, \mu, \Theta \geq 0 \end{cases} $$ From here I have to study $8$ cases. Here are some: $\bullet$ When $\lambda = \mu = \Theta = 0$ the system is impossible. $\bullet$ When $\lambda = \mu = 0$, $\Theta \neq 0$ I obtain $(2, 0)$, which doesn't satisfy the constraints. $\bullet$ When $\lambda = 0$, $\mu \neq 0, \Theta = 0$ I get $\mu = -1$, which is not admissible. $\bullet$ When $\lambda, \mu, \Theta \neq 0$ I eventually manage to get, among the others, $$\begin{cases} \Theta = \mu + 1 - \lambda \\ (\mu+1-\lambda)y = 0 \end{cases} $$ from which either $y =0$ or $\lambda = \mu +1$. For $\lambda = \mu +1$, using the first equation I get $$\mu = \frac{4-2x-3x^2}{6x^2}$$ from which $$\lambda = \frac{4 - 2x + 3x^2}{6x^2}$$ If $y = 0$ then from the complementarity equation $\mu(x^3-y) = 0$ I obtain $(4-2x-3x^2)x = 0$, hence either $x = 0$ or $x = \frac{1}{3}(-1\pm \sqrt{13})$, but those last ones don't satisfy all the constraints. On the other side, $x = 0$ would be good, if not for the fact that I cannot take it, since it would make $\lambda, \mu$ nonsensical. So I ask you: how does one deal with this problem analytically? It looks like the KKT conditions might not be satisfied, but I am not sure of this. I would like other pairs of eyes/minds on this, thank you! Here is the sketch too, with Mathematica code. It's not as good as I thought, since the feasible region should include a missing portion (from the origin to the left and above). [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/yVX5N.png

    plot1 = RegionPlot[{y - x^3 >= 0 && y + x^3 <= 0 && y >= 0}, {x, -3, 3}, {y, -3, 5}, Axes -> True]
    plot2 = Plot[{-(x - 2)^2, 4 - (x - 2)^2, 3 - (x - 2)^2}, {x, -3, 5}, PlotStyle -> {Dashed, Dashed, Dashed}]
    Show[plot1, plot2]

**Second Thought** Or maybe I could just say that $y \leq -x^3$ is the same as $-y \geq x^3$, but this, together with the other condition, gives $\begin{cases} x^3 \leq -y \\ x^3 \leq y \end{cases}$, which can only hold if either $x = 0$ (and then $y = 0$), or $y = 0$ and $x$ negative; though in this last case we would keep increasing the value of $f$ rather than finding the minimum.
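A hedged check of why KKT fails at the optimum (my computation): at $(0,0)$ all three constraints are active, with gradients $\nabla g_1 = (3x^2, -1)\big|_{(0,0)} = (0,-1)$, $\nabla g_2 = (0,1)$, $\nabla g_3 = (0,-1)$, while $\nabla f = (2(x-2), 1)\big|_{(0,0)} = (-4, 1)$. Every linear combination of the constraint gradients has first component $0$, so the stationarity equation $\nabla f + \lambda \nabla g_1 + \mu \nabla g_2 + \Theta \nabla g_3 = 0$ would force $-4 = 0$: no multipliers exist. This is consistent with $(0,0)$ being a genuine minimizer: the feasible region has a cusp at the origin, so the usual constraint qualifications (LICQ/MFCQ) fail there, and the KKT conditions are necessary only *under* a constraint qualification.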
What is a counterexample showing that almost sure convergence does not imply convergence in mean ($L^p$ convergence)? All the examples I've seen were about the fact that convergence in mean does not imply almost sure convergence. I have not found any counterexamples for the converse direction. I really want to prove it, because if I find a counterexample, then I will automatically prove that convergence in measure does not imply convergence in mean. Also, I do not know what the Borel-Cantelli lemma is and would like to understand an example without it.
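For what it's worth, a sketch of the standard counterexample, which needs no Borel-Cantelli (the example is classical; only the write-up here is mine):

```latex
\text{On } \big([0,1],\ \mathcal B([0,1]),\ \lambda\big) \text{ take } X_n = n\,\mathbf 1_{[0,\,1/n]} .
\text{For every fixed } \omega \in (0,1] \text{ we have } X_n(\omega) = 0 \text{ as soon as } n > 1/\omega ,
\text{ so } X_n \to 0 \text{ almost surely; yet}
\quad \mathbb E\,|X_n - 0|^p = n^p \cdot \lambda\big([0,\tfrac1n]\big) = n^{p-1} \not\to 0
\quad \text{for every } p \ge 1 .
```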
>Find the range of the function $$f(x)=\bigg|\sin\bigg(\frac{x}{2}\bigg)\bigg|+|\cos(x)|.$$ Using $0\leq \bigg|\sin\bigg(\frac{x}{2}\bigg)\bigg|\leq 1$ and $0\leq |\cos(x)|\leq 1$, I get $0\leq f(x)\leq 1+1=2$. But the minimum value is $\frac{1}{\sqrt{2}}$. Another way I tried: $f(x)=\bigg|\sin\bigg(\dfrac{x}{2}\bigg)\bigg|+\bigg|1-2\sin^2\bigg(\dfrac{x}{2}\bigg)\bigg|$, so $f(t)=|t|+|1-2t^2|$, where $t=\sin\bigg(\dfrac{x}{2}\bigg)$. How can I get the minimum value?
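A quick brute-force check of the claimed minimum (a sketch; numpy assumed, and note $f$ has period $2\pi$, so the scan below covers two full periods):

```python
import numpy as np

# scan f(x) = |sin(x/2)| + |cos x| on a fine grid over [0, 4*pi]
x = np.linspace(0, 4*np.pi, 1_000_001)
f = np.abs(np.sin(x/2)) + np.abs(np.cos(x))
print(f.min())  # ~0.7071, matching 1/sqrt(2)
print(f.max())  # ~2.0, attained near x = pi where both terms equal 1
```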
Take a Hilbert space $H$, an orthogonal projector $P$, and an unbounded self-adjoint operator $A$. Is $A^{-1} P A$ bounded?
Why is $\sum_{m=0}^x 2 \binom{s}{m} p^m (1-p)^{s-m} \leq 2\exp{\left(-\frac{2(x - sp)^2}{s}\right)}$?
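An assumption on my part, since the post gives no source: this looks like Hoeffding's inequality applied to the lower tail of a $\mathrm{Binomial}(s,p)$ variable, which holds in the regime $x \le sp$. A quick numeric sanity check (the sample values are arbitrary):

```python
from math import comb, exp

# compare both sides of the bound for Binomial(s, p) with x <= s*p
s, p = 100, 0.5
for x in [30, 40, 45]:
    lhs = sum(2*comb(s, m) * p**m * (1-p)**(s-m) for m in range(x + 1))
    rhs = 2*exp(-2*(x - s*p)**2 / s)
    print(x, lhs <= rhs, lhs, rhs)  # prints True in each case
```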
Below I am trying to make the asked points clear; computations do not stay in focus. First of all a picture: [![mse 43881764][1]][1] The problem starts somehow with the middle, making the least number of words in a random language count for exposing the problem. This is not the way we think, and not the way we start solving. Instead, let us introduce the objects in their order. - The circle $\mathcal O$ with center $O$ in the origin and radius one is given. (The general case is translated and rescaled to this one.) - A polygon $\Pi=ABCDEF\dots$ circumscribed to $\mathcal O$ is chosen. So $AB$, $BC$, $CD$, $DE$, $EF$, $\dots$ are all tangent to $\mathcal O$. - The parameter $r$ is the distance from $O$ to the side $AB$, so $r=1$; I must have as few variables to look at as possible. - The situation so far is fixed. - Consider the probability space $\Omega$ of three points $(P,R,G)$ inside $\Pi$, seen as a "solid area", the (closed) convex hull of its vertices. Denote such a triple always by $(P,R,G)$ with $(P,R,G)\in\Pi^3$. A measurable subset $Z\subset \Pi^3$ is an event of $\Omega$; it has the tacitly chosen probability $$ \Bbb P(Z) =\frac 1{\operatorname{Area}(\Pi)^3}\int_Z 1\; dP\; dR\; dG =\frac 1{\Delta^3}\int_Z 1\; dP\; dR\; dG\ . $$ - We can now construct functions depending on $P,R,G$. For instance, $M=M(P,R,G)$, the circumcenter of $\Delta PRG$, is such a function with values in $\Bbb R^2$. And the projection $H=H(P,R,G)$ is another function $H:\Omega\to\Pi\subset \Bbb R^2$. Both are continuous, thus measurable. Also, $x=x(P,R,G)=\frac 12|PG|$, half the length of the side $PG$, is such a function. (It does not depend on $R$.) - And all the other letters that are involved in the quick solution are also (continuous, thus measurable) functions of $(P,R,G)$. For instance the circumradius function $(P,R,G)\mapsto \rho(P,R,G):=|MP|=|MR|=|MG|$. One should once and for all make an alphabet with a clear order and the same letters in all languages; this simplifies typing. Here $M$ is the function $M=M(P,R,G)$ already introduced. Lengths of segments are written from now on without the pipe symbols. - Draw now parallels at distance $\rho=\rho(P,R,G)$ from the sides of $\Pi$, taken on the same side as $O$, inside $\Pi$. Intersect the parallels to obtain a polygon $\Xi=\Xi(P,R,G)=abcdef\dots$ with sides parallel to those of $\Pi=ABCDEF\dots$. They are similar, so the quotient of the areas is the square of the proportionality factor. Here $a,b,c,d,e,f,\dots$ are also seen as functions of $(P,R,G)$. - Introduce now the random variables (where random does not refer to the way letters and their order were chosen) $x,s,t,v,u,\Delta,\theta,\phi,\psi$. - We want to compute the integral using a change of variables. So in a few moments the above letters are variables. We only write dependencies between variables. - $(P,R,G)$ is our starting parameter; we integrate w.r.t. $dP\; dR\; dG$, so we have an integral in $\Bbb R^6$: six parameters / degrees of freedom of integration are needed. If we use the cartesian notations $P=(x_P,y_P)$, $R=(x_R,y_R)$, $G=(x_G,y_G)$, then the measure under the normed integral is explicitly $$ dP\; dR\;dG = dx_P\; dy_P\; dx_R\; dy_R\; dx_G\; dy_G \ . $$ From these six given parameters $(P,R,G)$ we manufacture others. - Consider the map $(P,R,G)\to(P,R,(x,\psi))$. Here $2x$ is the length of $PG$, and $\psi$ the angle it makes with some fixed line, e.g. with the $Ox$ axis.
It corresponds to a change to polar variables in $G$, $$ (x_P,y_P,x_R,y_R,x,\psi) \to (x_P,y_P,x_R,y_R, \underbrace{x_P+2x\cos \psi}_{x_G}, \underbrace{y_P+2x\sin \psi}_{y_G})\ . $$ Here the domain of definition of the new parameter is what it is: $(x_P,y_P,x_R,y_R)\in\Pi^2$, but the last two parameters depend on $(x_P,y_P)$. The domain will suffer further transformations, till we get it under control. The transformed volume element is formally $$ \begin{aligned} & dx_P\; dy_P\; dx_R\; dy_R\; dx_G\; dy_G \\ &\qquad = dx_P\wedge dy_P\wedge dx_R\wedge dy_R\wedge dx_G\wedge dy_G \\ &\qquad= dx_P\wedge dy_P\wedge dx_R\wedge dy_R\wedge d(x_P+2x\cos \psi)\wedge d(y_P+2x\sin \psi) \\ &\qquad= dx_P\wedge dy_P\wedge dx_R\wedge dy_R\wedge d(2x\cos \psi)\wedge d(2x\sin \psi) \\ &\qquad= 4\; dx_P\wedge dy_P\wedge dx_R\wedge dy_R \wedge (\cos \psi\; dx-x\sin\psi \; d\psi) \wedge (\sin \psi\; dx+x\cos\psi \; d\psi) \\ &\qquad= 4x\; dx_P\wedge dy_P\wedge dx_R\wedge dy_R \wedge dx \wedge d\psi \\ &\qquad=4x\; dx_P\; dy_P\; dx_R\; dy_R\; dx\; d\psi \end{aligned} $$ (or use Jacobian determinants in absolute value for the transformation if the wedges are unclear), so we formally replace $dG=4x\; dx\; d\psi$. This is the story at the point with the "volume element of the polygon around $G$". - If I continue in this manner there is no end of the story in sight, so I will just say what gets replaced by which new variables. - We replace $x$ by $\phi$, so that formally $\phi=\arcsin x$, $x=\sin \phi$, $dx=d(\sin\phi)=\cos\phi\; d\phi$, so the part in $x$ in the last volume element, that $4x\; dx$, gets replaced by $4\sin\phi\;\cos \phi\; d\phi$. Our new parameter is $(P,R,\phi,\psi)$. - From $(P,R,\phi,\psi)$ we can quickly recover $G$, and then build $M$ and $\Xi=abcdef\dots$, so consider $M,\Xi$ as functions of the new parametrization. However, from $M,R,\phi,\psi$ we can recover $P$, so we pass to the new parameter $(M,R,\phi,\psi)$, and formally $dP=dM$, since $P,M$ are obtained from each other by a (variable) translation involving the other variables. Our new parameter comes with the volume element $$ 4\sin\phi\;\cos \phi\; dM\;dR\;d\phi\;d\psi \ . $$ - It is time to control the range of some parameters. The main observation, and the reason for the geometric constructions, is that the circumcircle $\odot(PRG)$ is completely contained in $\Pi$ iff $M$ is in $\Xi=abcdef\dots$ - because then $M$ has distance at least $MP$ to all the sides. Then we perform integration first w.r.t. $dM$, and we have an expression for the area of $\Xi$: the proportionality $(ON:OS)^2$ is needed; now recall $OS=r=1$ and $ON=OS-NS=1-MP$. - We have to do now something with the parameter $R$. For this, I will make a quick passage to the following choice of parameters, and their order is important. - We make a choice of $x\in[0,1]$. Recall that the radius of $\mathcal O$ is $r=1$. This constrains in the picture the length $2x$ of the segment $PG$, wherever its final destination and determination in terms of the following parameters will be. We have $PH=HG=x$. We may use $\phi=\phi(x)=\arcsin x$ as a function of $x$ in the resulting integral. - The directions of $PHG$ and $MH$ determine each other; this is the parameter $\psi$. So far we have a fixed segment $PG$ with midpoint $H$, to be moved in the plane parallel to a fixed direction. - Let us now fix the parameter $\theta=\widehat {PMH}$. Together with $x$ it completely determines the measures of $\Delta MPH$, which we now model in wood and search for one of its good places inside $\Pi$.
In particular the circumradius $MP$ of the final $\Delta PRG$ is now nailed down. We use it to get the point $N$ on $OS$ with $NS=MP$. Construct $\Xi$ similar to $\Pi$, so that the boundary of $\Xi$ lies inside $\Pi$, at distance $MP$ from the boundary of $\Pi$. - Fixing $\theta$ is done at the same time as restricting and parametrizing $R$. It lives in one or the other of the sectors bounded by $MP$ and $MG$, with the triangle $\Delta MPG$ removed. This parametrization is not the one I would make; details are then hidden in the computations. - Now it is time to fix the vertex $M$ of our wooden triangle $\Delta MPG$ inside $\Xi$. Then $M,P,G$ are fixed, and $R$ also finds its final place. How many parameters are burned so far? Five: one for each of $x,\psi,\theta$, and two for $M$. ---------- To the questions: **1.** The area of the polygon $\Xi = abcdef\dots$ represents, strictly speaking, the integral over $\Xi=\Xi(x,\theta)$ w.r.t. $dM$, so it takes from the initial integral w.r.t. $dP\; dR\; dG$ a part $dM$, where $dM$ replaces, inside a parametrization, the $dP$. This is all only "lyrics", but the steps above show some details. **2.** The "volume" or rather "area element" is $dG$, and after the parametrization of $G$ as $P+2x(\cos \psi,\sin\psi)$ we have $dP\wedge dG=dP\wedge 4x\; dx\; d\psi$. So in the presence of $dP$ (and $dR$) the change of variables from $G$ to $(x,\psi)$ feels like passing to polar coordinates. We formally replace $dG$ in the whole integral. **3.** It is again a matter of reparametrizing $R$ in terms of $\theta,t$ with the meanings defined by the text. Then $dR$ is expressed in a specific manner in terms of $d\theta\; dt$ (times a Jacobian factor). **4.** Including $dx$ would not change things, since we compute something like $dx\wedge dR$, and the $dx\wedge ?$ two-forms obtained after rephrasing $dR$ are cancelled. Or, put another way: we apply Fubini, and split the set of good cases $Z$ w.r.t. $x$, so for each $x$ we have a set $Z_x$ over which to integrate the remaining parameters. **5.** The angle $\phi$ controls the range for $\theta$, so that the resulting $PM$, the radius of the circle $\odot(M)=\odot(PRG)$, does not exceed one. It is used only as a function of $x$ in a tacit manner, making $\theta$ run only between $\phi$ and $\pi-\phi$, i.e. $PM=\frac x{\sin\theta}=\frac {\sin\phi }{\sin\theta}\le 1$. [1]: https://i.stack.imgur.com/5XKYm.jpg
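As a quick machine check of the polar-style volume element $dG = 4x\,dx\,d\psi$ used above (a sketch of my own; sympy assumed):

```python
import sympy as sp

x_P, y_P, x, psi = sp.symbols('x_P y_P x psi', real=True)

# G = P + 2x*(cos(psi), sin(psi)); Jacobian of (x_G, y_G) w.r.t. (x, psi)
x_G = x_P + 2*x*sp.cos(psi)
y_G = y_P + 2*x*sp.sin(psi)
J = sp.Matrix([[sp.diff(x_G, x), sp.diff(x_G, psi)],
               [sp.diff(y_G, x), sp.diff(y_G, psi)]])
print(sp.simplify(J.det()))  # prints 4*x, i.e. dG = 4x dx dpsi
```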
I am reading a text that shows this expression $$\beta_j = (1 - \lambda\sqrt{p_j}/\|S_j\|)_+ S_j$$ From the text, the expression inside the parenthesis is a number, but what does the $+$ mean in this situation?
What does a subscript $+$ mean?
How can I obtain a polynomial formula for the product of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals? Assume three lines in general position are given by three equations $$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$ A circle with center $(p,q)$ and radius $r$ may be given by the equation $$(x-p)^2+(y-q)^2-r^2=0$$ which can also be written as $$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix} \begin{pmatrix}x\\y\\1\end{pmatrix}=0$$ Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies $$(a,b,c)\begin{pmatrix} p^2-r^2 & pq & p \\ pq & q^2-r^2 & q \\ p & q & 1 \end{pmatrix} \begin{pmatrix}a\\b\\c\end{pmatrix}=0$$ So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. I conjecture that the product of the four circles, i.e. the formula $$0=\prod_{i=1}^4 (x-p_i)^2+(y-q_i)^2-r_i^2$$ can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too. I have checked this conjecture using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle): \begin{align*} a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\ a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\ a_3 &= 3 & b_3 &= -5 & c_3 &= 3 \end{align*} For this I got the product of circles as \begin{align*} 59636082025&x^8 \\ + 238544328100&x^6y^2 \\ + 357816492150&x^4y^4 \\ + 238544328100&x^2y^6 \\ + 59636082025&y^8 \\ - 241134854740&x^6 \\ + 424846368960&x^5y \\ - 1438821671300&x^4y^2 \\ + 849692737920&x^3y^3 \\ - 2154238778380&x^2y^4 \\ + 424846368960&xy^5 \\ - 956551961820&y^6 \\ + 108802118880&x^5 \\ + 265528980600&x^4y \\ - 1771920221760&x^3y^2 \\ + 3788213456560&x^2y^3 \\ - 1880722340640&xy^4 \\ + 3522684475960&y^5 \\ - 12729722876&x^4 \\ + 1691695948800&x^3y \\ - 2241047404232&x^2y^2 \\ + 2683644936960&xy^3 \\ - 6476277331836&y^4 \\ - 484880944704&x^3 \\ + 468260157040&x^2y \\ - 1625087765184&xy^2 \\ + 7083496190160&y^3 \\ + 48256725504&x^2 \\ + 293698179840&xy \\ - 4644695250832&y^2 \\ - 13129818624&x \\ + 1675829775360&y \\ - 243236814336&\quad=0 \end{align*} How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? Background for my question is https://math.stackexchange.com/q/4887813/35416 discussing the probability of the centroid lying within the incircle. For an exact solution, radicals would make one's life really hard.
But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more. I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
I encountered a tough integral and I am wondering if anyone has any ideas on how to evaluate it. $$\displaystyle \int_{0}^{\infty}\frac{x}{x^{2}+a^{2}}\log\left(\left|\frac{x+1}{x-1}\right|\right)dx=\pi\tan^{-1}\frac1a, \;\ a>1$$ I tried breaking up the log and differentiating w.r.t. $a$, but did not make considerable progress. Is it possible to do this one with residues? I attempted to write it as $$\displaystyle \frac{1}{a^{2}}\int_{0}^{\infty}\frac{x}{1+(x/a)^{2}}\log(x+1)dx-\frac{1}{a^{2}}\int_{0}^{\infty}\frac{x}{1+(x/a)^{2}}\log(x-1)dx$$ Now, make the sub $t=x/a, \;\ dx=a\,dt$: $$\displaystyle \int_{0}^{\infty}\frac{t}{1+t^{2}}\log(1+at)dt-\int_{0}^{\infty}\frac{t}{1+t^{2}}\log(at-1)dt$$ Differentiating w.r.t. $a$ gives: $$\displaystyle \int_{0}^{\infty}\frac{t^{2}}{(at+1)(t^{2}+1)}dt-\int_{0}^{\infty}\frac{t^{2}}{(at-1)(t^{2}+1)}dt$$ But this may not be a good idea because of convergence issues. Does anyone have a good idea of how to approach this one? Thanks and take care.
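A quick numeric sanity check of the stated identity (a sketch; scipy assumed, splitting the range at the singular point $x=1$):

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
f = lambda t: t/(t**2 + a**2) * np.log(abs((t + 1)/(t - 1)))
# the log singularity at t = 1 is integrable; splitting there keeps quad happy
val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
print(val, np.pi*np.arctan(1/a))  # the two numbers agree to quadrature accuracy
```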
I am reading this paper about model selection that shows the expression $$\beta_j = (1 - \lambda\sqrt{p_j}/\|S_j\|)_+ S_j$$ for the vectors $\beta_j$ of estimates. From the text, the expression inside the parenthesis is a number, but what does the $+$ mean in this situation?
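In case it helps future readers: assuming the standard positive-part reading $(z)_+ = \max(z, 0)$ (which is how soft-thresholding rules of this shape are usually written; the function and parameter names below are hypothetical, not from the paper), a minimal sketch:

```python
import numpy as np

def positive_part(z):
    # (z)_+ = max(z, 0): keeps z when positive, clips to 0 otherwise
    return max(z, 0.0)

def beta_j(S_j, lam, p_j):
    # hypothetical helper mirroring beta_j = (1 - lam*sqrt(p_j)/||S_j||)_+ * S_j:
    # shrinks S_j toward 0, and sets it exactly to 0 once the factor goes negative
    factor = positive_part(1.0 - lam*np.sqrt(p_j)/np.linalg.norm(S_j))
    return factor * S_j

print(beta_j(np.array([3.0, 4.0]), lam=1.0, p_j=4.0))  # shrunk: factor = 0.6
print(beta_j(np.array([0.3, 0.4]), lam=1.0, p_j=4.0))  # zeroed: factor = 0
```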
I think this is a good place for thinking about the alternative way of framing induction. First, note that $\Bbb N \times \Bbb N$ is well ordered (meaning every non-empty subset has a $\lt$-least element) by the lexicographic ordering $(a, b) \lt (c, d) \iff a \lt c \lor (a=c \land b \lt d)$. This is easy to see. Let $S \subseteq \Bbb N \times \Bbb N$ be non-empty. Since $\Bbb N$ is well-ordered, the set of first coordinates of elements of $S$ has a smallest element $x$. Consider $S_x = \{b \mid (x, b) \in S \}$. Then $S_x$ is a non-empty subset of $\Bbb N$, so it has a least element $y$. By definition, we know that $(x, y) \in S$. If also $(a, b) \in S$ with $(x, y) \neq (a, b)$, then by definition of $x$, we know that $x \leq a$. If $x=a$, then by definition of $y$, we know that $y \lt b$. Thus, we know that $(x, y) \lt (a, b)$ and $(x, y)$ is the $\lt$-least element of $S$, as required. Now assume $(\ast \ast)$ and suppose $P(m, n)$ were false for some $(m, n)$. Then there is a $\lt$-least such $(m, n)$ for which $P$ fails. What can it be? That least counterexample of $P$ can't be $(0, 0)$ because for $(0, 0)$, the hypothesis of $(\ast \ast)$ is vacuously true. The least counterexample also can't have the form $(0, l+1)$, because the only smaller elements have the form $(0, b)$ with $b \leq l$, and since $(0, l+1)$ is assumed to be the least counterexample, $P(0, b)$ holds for all $b \leq l$, which means (via $(\ast \ast)$) that $P(0, l+1)$ holds after all. Now we show the least counterexample of $P$ can't have the form $(k+1, 0)$. Since $(k+1, 0)$ is, by assumption, the least pair for which $P$ fails, we know that $P(a, b)$ holds whenever $a \lt k+1$. We don't have to worry about $(k+1, n+1)$ because it's never the case that $n+1 \lt 0$. Thus, the hypothesis of $(\ast \ast)$ holds and we know that $P(k+1, 0)$ holds, contrary to our assumption that $(k+1, 0)$ is a counterexample. Finally, similar reasoning shows that the least counterexample can't have the form $(k+1, l+1)$. In short, the set of counterexamples can't have a least element. But every non-empty subset of $\Bbb N \times \Bbb N$ has a least element, so the set of counterexamples must be empty. That means $P$ holds for all $(m, n)$.
Yes, your first argument works. However, I think this is a good place for thinking about the alternative way of framing induction as using well ordering to show that the set of counterexamples must be empty. First, note that $\Bbb N \times \Bbb N$ is well ordered (meaning every non-empty subset has a $\lt$-least element) by the lexicographic ordering $(a, b) \lt (c, d) \iff a \lt c \lor (a=c \land b \lt d)$. This is easy to see. Let $S \subseteq \Bbb N \times \Bbb N$ be non-empty. Since $\Bbb N$ is well-ordered, the set of first coordinates of elements of $S$ has a smallest element $x$. Consider $S_x = \{b \mid (x, b) \in S \}$. Then $S_x$ is a non-empty subset of $\Bbb N$, so it has a least element $y$. By definition, we know that $(x, y) \in S$. If also $(a, b) \in S$ with $(x, y) \neq (a, b)$, then by definition of $x$, we know that $x \leq a$. If $x=a$, then by definition of $y$, we know that $y \lt b$. Thus, we know that $(x, y) \lt (a, b)$ and $(x, y)$ is the $\lt$-least element of $S$, as required. Now assume $(\ast \ast)$ and suppose $P(m, n)$ were false for some $(m, n)$. Then there is a $\lt$-least such $(m, n)$ for which $P$ fails. What can it be? That least counterexample of $P$ can't be $(0, 0)$ because for $(0, 0)$, the hypothesis of $(\ast \ast)$ is vacuously true. The least counterexample also can't have the form $(0, l+1)$ because the only smaller elements have the form $(0, b), b \leq l$, and since we are assuming that $(0, l+1)$ is the least counterexample, we know that $P(0, b)$ holds for $b \leq l$, which means (via $(\ast \ast)$) that $P(0, l+1)$ holds after all. Now we show the least counterexample of $P$ can't have the form $(k+1, 0)$. Since $(k+1, 0)$ is, by assumption, the least pair for which $P$ fails, we know that $P(a, b)$ holds whenever $a \lt k+1$. We don't have to worry about $(k+1, n+1)$ because it's never the case that $n+1 \lt 0$. Thus, the hypothesis of $(\ast \ast)$ holds and we know that $P(k+1, 0)$ holds, contrary to our assumption that $(k+1, 0)$ is a counterexample. Finally, similar reasoning shows that the least counterexample can't have the form $(k+1, l+1)$. In short, the set of counterexamples can't have a least element. But every non-empty subset of $\Bbb N \times \Bbb N$ has a least element, so the set of counterexamples must be empty. That means $P$ holds for all $(m, n)$.
I have to understand a thing about this exercise: find the minimum of $f(x, y) = (x-2)^2 + y$ subject to $y-x^3 \geq 0$, $y+x^3 \leq 0$ and $y \geq 0$. Now, I solved the problem quite easily by sketching: the level curves of $f(x, y)$ are concave parabolas ($y = k - (x-2)^2$), and the feasible region is the part of the second quadrant ($x \leq 0$) bounded below by the $x$-axis and above by the curve $y = -x^3$. The candidate solution is $(0, 0)$, at which $f(x, y) = 4$. On the other side, I wanted to solve it with Kuhn-Tucker multipliers, so I set the problem in the standard form for a minimum problem, that is: $$-\max -f(x, y) \qquad \text{s.t.} \qquad \begin{cases} -y+x^3 \leq 0 \\ y+x^3 \leq 0 \\ -y \leq 0 \end{cases}$$ with KKT Lagrangian $$L = -(x-2)^2-y- \lambda(-y+x^3) - \mu(y+x^3) - \Theta(-y)$$ which leads to the optimality conditions $$ \begin{cases} -2(x-2) - 3\lambda x^2 - 3\mu x^2 = 0 \\ -1+\lambda - \mu + \Theta = 0 \\ -y + x^3 \leq 0 \quad ; \quad \lambda(-y+x^3) = 0 \\ y+x^3 \leq 0 \quad ; \quad \mu(y+x^3) = 0 \\ -y\leq 0 \quad ; \quad \Theta y = 0 \\ \lambda, \mu, \Theta \geq 0 \end{cases} $$ From here I have to study $8$ cases. Here are some: $\bullet$ When $\lambda = \mu = \Theta = 0$ the system is impossible. $\bullet$ When $\lambda = \mu = 0$, $\Theta \neq 0$ I obtain $(2, 0)$, which doesn't satisfy the constraints. $\bullet$ When $\lambda = 0$, $\mu \neq 0$, $\Theta = 0$ I get $\mu = -1$, which is not admissible. $\bullet$ When $\lambda, \mu, \Theta \neq 0$ I eventually manage to get, among the others, $$\begin{cases} \Theta = \mu + 1 - \lambda \\ (\mu+1-\lambda)y = 0 \end{cases} $$ from which either $y = 0$ or $\lambda = \mu + 1$. For $\lambda = \mu + 1$, using the first equation I get $$\mu = \frac{4-2x-3x^2}{6x^2}$$ from which $$\lambda = \frac{4 - 2x + 3x^2}{6x^2}$$ If $y = 0$ then from the complementarity equation $\mu(y+x^3) = 0$ I obtain $(4-2x-3x^2)x = 0$, hence either $x = 0$ or $x = \frac{1}{3}(-1\pm \sqrt{13})$, but those last ones don't satisfy all the constraints. On the other side, $x = 0$ would be good if not for the fact that I cannot take it, since it would make $\lambda, \mu$ nonsensical. So I ask you: how do I deal with this problem analytically? It looks like the KKT conditions might not be satisfied, but I am not sure of this. I would like other pairs of eyes/minds on this, thank you! Here is the sketch too, with Mathematica code. It's not as good as I thought, since the feasible region should include a missing portion (the origin to the left and above). [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/yVX5N.png plot1 = RegionPlot[{y - x^3 >= 0 && y + x^3 <= 0 && y >= 0}, {x, -3, 3}, {y, -3, 5}, Axes -> True] plot2 = Plot[{-(x - 2)^2, 4 - (x - 2)^2, 3 - (x - 2)^2}, {x, -3, 5}, PlotStyle -> {Dashed, Dashed, Dashed}] Show[plot1, plot2] **Second Thought** Or maybe I could just say that $y \leq -x^3$ is the same as $-y \geq x^3$; this, together with the other condition, gives $\begin{cases} x^3 \leq -y \\ x^3 \leq y \end{cases}$, which could imply either $x = 0$ and then $y = 0$, or $y = 0$ with $x$ negative, though in this last case we would keep increasing the value of $f$ rather than finding the minimum. **Third Thought** I had perhaps forgotten about the regularity of the constraints. The Jacobian matrix indeed reads $$\mathsf{J} = \begin{pmatrix} 3x^2 & -1 \\ 3x^2 & 1 \\ 0& -1 \end{pmatrix}$$ from which we observe that at $(0, 0)$ we lose the regularity of the constraints, $\mathsf{J}$ being of rank $1$. Perhaps this is what makes the KKT conditions fail to hold.
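If it helps to cross-check the geometry numerically, here is a minimal sketch (scipy assumed; the starting point is arbitrary, and since constraint qualification appears to fail at the optimum, the solver's multipliers shouldn't be trusted, only the minimizer):

```python
import numpy as np
from scipy.optimize import minimize

# f(x, y) = (x-2)^2 + y on the region y - x^3 >= 0, y + x^3 <= 0, y >= 0
f = lambda v: (v[0] - 2)**2 + v[1]
cons = [{'type': 'ineq', 'fun': lambda v: v[1] - v[0]**3},
        {'type': 'ineq', 'fun': lambda v: -(v[1] + v[0]**3)},
        {'type': 'ineq', 'fun': lambda v: v[1]}]
res = minimize(f, x0=[-1.0, 0.5], constraints=cons, method='SLSQP')
print(res.x, res.fun)  # expected: close to (0, 0) with value 4
```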
I managed to solve it using the following function: given Cartesian points A and B, the geodesic path on a sphere is defined as: r(t) = sin(1-t)*A + sin(t)*B, for t in [0, 1]; then normalize r(t)/||r(t)|| to get the points on the sphere [![enter image description here][1]][1] Can someone explain to me why this works? [1]: https://i.stack.imgur.com/wj7Mm.png
I derived a formula for SIP calculations myself, but I'm unsure if it's accurate. Could you please help me verify it?
I'm reviewing some very basic concepts in analysis because I want to get better at it. I am trying to prove that $$\lim_{x\to 9}(\sqrt{x} + 2) = 5$$ using the rigorous definition of a limit. Here is my proof: Let $\epsilon > 0$. And take $\delta = \epsilon$. Then $\lvert x-9\rvert < \delta$ implies that $\lvert \sqrt{x} + 3\rvert\lvert \sqrt{x} - 3\rvert < \delta$ so that $$\lvert \sqrt{x} - 3\rvert < \frac{\delta}{\lvert \sqrt{x} + 3\rvert}$$ For sufficiently small $\epsilon$, $\lvert x - 9\rvert < \delta$ implies that $\lvert \sqrt{x} + 3\rvert \geq 1$, meaning $$\lvert \sqrt{x} - 3\rvert < \frac{\delta}{\lvert \sqrt{x} + 3\rvert} \leq \frac{\delta}{1} = \epsilon$$ Is all of this logically valid? I'm particularly concerned about the part where I say "For sufficiently small $\epsilon$, $\lvert x - 9\rvert < \delta$ implies that $\lvert \sqrt{x} + 3\rvert \geq 1$" All help and criticism is appreciated, thank you.
#### Prompt: > Find the $x$- and $y$-intercepts of the graph of $x = (y+2)\ln(3y+4)$. #### Given solution: For $x = 0$, solve $(y+2)\ln(3y+4) = 0$ for $y$: \begin{align} y + 2 &= 0 & \ln(3y + 4) &= 0 \\ y &= -2 & 3y + 4 &= e^0 \\ \rlap{\text{not a solution}}\qquad && 3y + 4 &= 1 \\ && 3y &= -3 \\ && y &= -1 \end{align} So, only $(0, -1)$. For $y = 0$, evaluate $x = (0 + 2) \ln(3(0) + 4) = 2\ln 4 = 4\ln 2$, so only $(4\ln2, 0)$. ---- #### My question: Why is $y \neq -2$? Can someone explain this to me in simple terms?
>Can we choose a special $\psi$ that makes $M\otimes N$ $*$-isomorphic to $N$? No. For starters, most of the time $M\otimes N$ will not be isomorphic to $N$. You could have $M=N=M_2(\mathbb C)$, and it already fails. Or if you want infinite-dimensional examples, take $N$ amenable, for instance the hyperfinite II$_1$-factor, and $M$ a non-amenable II$_1$-factor, or any infinite von Neumann algebra. Then $M\otimes N$ cannot be isomorphic to $N$. A conditional expectation is a projection, which is about as far from an isomorphism as you can get. They are not multiplicative, and they are not injective. Note that as $\psi$ is a linear functional, it will always have a significant kernel, so the slice map cannot be injective, and it can only be multiplicative if $\psi$ is, which again is something you won't usually have.
Recently I have encountered a paper which imposes a subGaussian condition on the covariate vector $\boldsymbol{x}$ as follows: >There exists some universal constant $\sigma_x \in[1, \infty)$ such that $$ \forall \boldsymbol{v} \in \mathbb{R}^p, \quad \mathbb{E}\left[\exp \left\{\boldsymbol{v}^{\top} \boldsymbol{\Sigma}^{-1 / 2} \boldsymbol{x}\right\}\right] \leq \exp \left(\frac{\sigma_x^2}{2} \cdot\|\boldsymbol{v}\|_2^2\right), $$ where $\boldsymbol{\Sigma}$ is the covariance matrix of $\boldsymbol{x}$. Meanwhile, it imposes a subGaussian condition on the error as follows: >There exists some universal constant $\sigma_{\varepsilon} \in \mathbb{R}^{+}$ such that $$ \forall \lambda \in \mathbb{R}, \quad \mathbb{E}\left[e^{\lambda \varepsilon}\right] \leq e^{\frac{1}{2} \lambda^2 \sigma_{\varepsilon}^2}. $$ **My confusion**: In the first condition, the subGaussian parameter $\sigma_x$ is attached to the standardized vector $\boldsymbol{\Sigma}^{-1/2}\boldsymbol{x}$ instead of $\boldsymbol{x}$. In the second condition, the subGaussian parameter $\sigma_{\varepsilon}$ is attached to the raw random variable $\varepsilon$ instead of $\operatorname{Var}(\varepsilon)^{-1/2}\varepsilon$. **My question**: I understand there is a relationship between the variance and the subGaussian parameter. I read from [here][1] that >$$ X \in \mathcal{S G}\left(\sigma^2\right) \Longrightarrow \operatorname{Var}[X] \leq \sigma^2 $$ In fact, $\operatorname{Var}[X] \leq \sigma^2(X)$ where $\sigma^2(X):=\inf \left\{\sigma^2: \mathbb{E}\left[e^{\lambda(X-\mu)}\right] \leq e^{\lambda^2 \sigma^2 / 2}\right\}$ When I impose a condition on a subGaussian random variable (or vector), is there a rule/criterion that tells me whether or not I should first standardize it by its variance? [1]: https://www.stat.cmu.edu/~arinaldo/Teaching/36710/F18/Scribed_Lectures/Sep5.pdf
> I have $3$ statements $A, B$, and $C$ and I wish to change them from the form: > > $(A \operatorname{xor} B)$ > > $(A \operatorname{xor} C)$ > > Into > > $A \operatorname{xor} B \operatorname{xor} C$ You may wish to do so, but please note that the conjunction of the first two statements is *not* equivalent to $A \operatorname{xor} B \operatorname{xor} C$. In fact, $(A \operatorname{xor} B)$ and $(A \operatorname{xor} C)$ don't even imply $A \operatorname{xor} B \operatorname{xor} C$. Consider: $A$ is False ($F$), and $B$ and $C$ are True ($T$). Then $(A \operatorname{xor} B)$ and $(A \operatorname{xor} C)$ are clearly both true. However, $A \operatorname{xor} B \operatorname{xor} C$ is False, since: $$F \operatorname{xor} T \operatorname{xor} T = F \operatorname{xor} (T \operatorname{xor} T) = F \operatorname{xor} F = F$$ You actually have to be very careful in using expressions where you apply what is normally a binary operator to more than two operands. This may work well for operators like $\land$ and $\lor$, but it is not immediately clear it works for other operators. Indeed, it does *not* work for the material conditional: since $A \to (B \to C)$ is *not* equivalent to $(A \to B) \to C$, you can't just drop parentheses and start using $A \to B \to C$. Of course, you say, but that is because the $\to$ is not 'symmetrical': $P \to Q$ is not equivalent to $Q \to P$. But the $\land$ and $\lor$ are, and therefore so is the $\text{xor}$. But no, that argument doesn't fly! The $\text{NAND}$ is 'symmetrical' (the technical word is commutative), but it is *not* associative: $A \ \text{NAND} \ (B \ \text{NAND} \ C)$ is *not* equivalent to $(A \ \text{NAND} \ B) \ \text{NAND} \ C$, and so we can't drop parentheses there either. So, it isn't even clear we can drop parentheses for the $\text{XOR}$ either. Well, it turns out the $\text{xor}$ *is* associative, and so we *can* drop parentheses. OK, but what do you think an expression like $A \operatorname{xor} B \operatorname{xor} C$ means? Many believe that it says that exactly one of $A$, $B$, and $C$ is true ... OK, well, that's in fact *not* what it means, but let's suppose that is what it means. Note that we could still not infer $A \operatorname{xor} B \operatorname{xor} C$ from $(A \operatorname{xor} B)$ and $(A \operatorname{xor} C)$, since I can use the same counterexample as before: with $A$ False and $B$ and $C$ True, we do have that exactly one of $A$ and $B$ is true, and also exactly one of $A$ and $C$ is true, but it is not the case that exactly one of $A$, $B$, and $C$ is true. Also, the other way around does not work either: if $B$ is True, and $A$ and $C$ are False, then $A \operatorname{xor} B$ is True, but $A \operatorname{xor} C$ is False. But that's not even what the generalized $\operatorname{xor}$ means! It turns out that a general $\operatorname{xor}$ expression is true if and only if an odd number of its operands are true. So yes, it would be true if exactly one of the operands is true, but when you have more than two operands, there will be other situations where it is true. Consider: if $A$, $B$, and $C$ are all true, then $A \operatorname{xor} B \operatorname{xor} C$ turns out to be true as well, which is easily verified: $$T \operatorname{xor} T \operatorname{xor} T = T \operatorname{xor} (T \operatorname{xor} T) = T \operatorname{xor} F = T$$ But note: If $A$, $B$, and $C$ are all true, then $A \operatorname{xor} B$ and $A \operatorname{xor} C$ are both false.
OK, so the moral is: you have to be really careful when it comes to generalizing binary operators to more than two operands!
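To make the parity reading concrete, here is a quick exhaustive check (a sketch in Python; `^` is Boolean xor):

```python
from itertools import product

# Exhaustive check over all 8 assignments: xor is associative, and the chained
# A xor B xor C is true exactly when an odd number of A, B, C are true.
for A, B, C in product([False, True], repeat=3):
    assert ((A ^ B) ^ C) == (A ^ (B ^ C))           # associativity
    assert ((A ^ B) ^ C) == ((A + B + C) % 2 == 1)  # odd-parity reading

# The answer's counterexample: A = False, B = C = True.
A, B, C = False, True, True
print(A ^ B, A ^ C, (A ^ B) ^ C)  # True True False
```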
I'm a beginner in number theory, and in a textbook, right after proving the fundamental theorem of arithmetic, the following problem is stated: Let $H_m$ be the subset of real numbers which can be written in the form $x + y\sqrt{m}$, where $x$ and $y$ are integers and $m$ isn't a square number. Show that besides $\pm1$, $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are units in $H_2$. The book recommends defining divisibility, units and undecomposability in the set $H_m$ first. How should I begin solving the problem? Any kind of help is welcome. (I'm not that familiar with the technicalities in English, I'm sorry.)
How can I obtain a polynomial formula for the product of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals? Assume three lines in general position are given by three equations $$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$ A circle with center $(p,q)$ and radius $r$ may be given by the equation $$(x-p)^2+(y-q)^2-r^2=0$$ which can also be written as $$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix} \begin{pmatrix}x\\y\\1\end{pmatrix}=0$$ Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies $$(a,b,c)\begin{pmatrix} p^2-r^2 & pq & p \\ pq & q^2-r^2 & q \\ p & q & 1 \end{pmatrix} \begin{pmatrix}a\\b\\c\end{pmatrix}=0$$ So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. I conjecture that the product of the four circles, i.e. the formula $$0=\prod_{i=1}^4 (x-p_i)^2+(y-q_i)^2-r_i^2$$ can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too. I have checked this conjecture using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle): \begin{align*} a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\ a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\ a_3 &= 3 & b_3 &= -5 & c_3 &= 3 \end{align*} For this I got the product of circles as \begin{align*} 59636082025&\,x^8 \\ + 238544328100&\,x^6y^2 \\ + 357816492150&\,x^4y^4 \\ + 238544328100&\,x^2y^6 \\ + 59636082025&\,y^8 \\ - 241134854740&\,x^6 \\ + 424846368960&\,x^5y \\ - 1438821671300&\,x^4y^2 \\ + 849692737920&\,x^3y^3 \\ - 2154238778380&\,x^2y^4 \\ + 424846368960&\,xy^5 \\ - 956551961820&\,y^6 \\ + 108802118880&\,x^5 \\ + 265528980600&\,x^4y \\ - 1771920221760&\,x^3y^2 \\ + 3788213456560&\,x^2y^3 \\ - 1880722340640&\,xy^4 \\ + 3522684475960&\,y^5 \\ - 12729722876&\,x^4 \\ + 1691695948800&\,x^3y \\ - 2241047404232&\,x^2y^2 \\ + 2683644936960&\,xy^3 \\ - 6476277331836&\,y^4 \\ - 484880944704&\,x^3 \\ + 468260157040&\,x^2y \\ - 1625087765184&\,xy^2 \\ + 7083496190160&\,y^3 \\ + 48256725504&\,x^2 \\ + 293698179840&\,xy \\ - 4644695250832&\,y^2 \\ - 13129818624&\,x \\ + 1675829775360&\,y \\ - 243236814336&\quad=0 \end{align*} How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? Background for my question is https://math.stackexchange.com/q/4887813/35416 discussing the probability of the centroid lying within the incircle. For an exact solution, radicals would make one's life really hard. 
But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more. I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
We know a hyperbola can be expressed in the form $$ \frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1$$ where $(h,k)$ is its centre. I've learnt that in the parametric form, we take $$x= h + a\sec t$$ and $$ y = k + b\tan t $$ These values satisfy the given equation. But so do $$x= h + a\csc t$$ and $$y=k+b\cot t$$ Then why aren't these second values of $x$ and $y$ taken as parameters?
I'm a beginner in number theory, and in a textbook, right after proving the fundamental theorem of arithmetic, the following problem is stated: Let $H_m$ be the subset of real numbers which can be written in the form $x + y\sqrt{m}$, where $x$ and $y$ are integers and $m$ isn't a square number. Show that besides $\pm1$, $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are also units in $H_2$. The book recommends defining divisibility, units and undecomposability in the set $H_m$ first. How should I begin solving the problem? Any kind of help is welcome. (I'm not that familiar with the technicalities in English, I'm sorry.)
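A possible nudge, hedged since it presumes the usual definition (a unit of $H_2$ is an element whose inverse again lies in $H_2$): the candidate inverses $-1+\sqrt2$ and $3-2\sqrt2$ have integer coordinates, and a quick sympy check confirms the products are $1$:

```python
import sympy as sp

r2 = sp.sqrt(2)
# both products expand to 1, so the candidate inverses lie in H_2 as well
print(sp.expand((1 + r2)*(-1 + r2)))     # 1  => 1 + sqrt(2) is a unit
print(sp.expand((3 + 2*r2)*(3 - 2*r2)))  # 1  => 3 + 2*sqrt(2) is a unit
```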
I managed to solve it using the following function: given Cartesian points A and B, the geodesic path on a sphere is defined as: r(t) = sin(1-t)*A + sin(t)*B, for t in [0, 1]; then normalize r(t)/||r(t)|| to get the points on the sphere [![enter image description here][1]][1] Can someone explain to me why this works? [1]: https://i.stack.imgur.com/wj7Mm.png **Edit:** it works even without the sin
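A hedged note on why this works (my sketch, not the poster's derivation): the formula is spherical linear interpolation (slerp) with the angle argument replaced by $1$; since any combination of A and B stays in the plane they span, normalizing still lands on the great circle through A and B, and only the speed of traversal changes (which is also why dropping the sin, i.e. plain lerp plus normalization, traces the same arc). A minimal Python comparison, numpy assumed:

```python
import numpy as np

def slerp(A, B, t):
    # standard spherical linear interpolation between unit vectors A and B
    theta = np.arccos(np.clip(np.dot(A, B), -1.0, 1.0))
    return (np.sin((1 - t)*theta)*A + np.sin(t*theta)*B) / np.sin(theta)

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
for t in np.linspace(0, 1, 5):
    r = np.sin(1 - t)*A + np.sin(t)*B   # the poster's formula
    r /= np.linalg.norm(r)              # back onto the unit sphere
    print(r, np.linalg.norm(r))         # unit vectors on the great circle AB
```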
- I am measuring the temperature of a metal plate using 4 thermal sensors. - The sensor used is specified to have an accuracy of ±0.1°C. If I have, for example, these readings: [![enter image description here][1]][1] The average of all sensors is 20.45°C. However, how should I report the accuracy? My intuition is that: - The lowest reading (20.3°C) could be as low as 20.2°C because of the sensor's reported accuracy. - The highest reading (20.6°C) could be as high as 20.7°C. - That covers a 0.5°C range. So I would conclude that the accuracy of my sensors' average is ±0.25°C. Is this correct? [1]: https://i.stack.imgur.com/s63wY.jpg
How to report the accuracy of the average of multiple sensors?
Let $x \geq 1$. I wish to show, by interpreting the sum as a Riemann sum, that $$\log(x) = \int_{1}^{x} \frac{dt}{t} \leq \sum_{n \leq x} \frac{1}{n}.$$ Certainly, $f(t) = 1/t$ is continuous on $[1, x]$, so is integrable on $[1, x]$ and may be computed as the limit of a sequence of Riemann sums. I have only the idea of interpreting $\sum_{n \leq x} \frac{1}{n}$ as an "upper sum" of $f$ with respect to a partition $\mathcal{P} = \{1, 2, ..., \lfloor x \rfloor, x\}$, which would necessarily be no less than $\int_{1}^{x} \frac{dt}{t}$. However, I am not certain how to formalise this idea, because, in using $\mathcal{P}$, I should be left with an "excess" area from $\lfloor x \rfloor$ to $x$?
The author of [this](https://math.stackexchange.com/questions/2413529/inverse-of-a-matrix-derivative) question was close to determining the derivative of a function of a dual variable, when we consider the matrices isomorphic (algebraically and topologically) to the dual numbers: $$(a+\epsilon b) \sim \begin{bmatrix} a & 0 \\ b & a \\ \end{bmatrix}.$$ So, using this fact, we can define the derivative (in the Fréchet sense) for functions $F$ with an argument in the form of such a matrix and a value in the form of such a matrix: $$F\big(\begin{bmatrix} x+s & 0 \\ y+t & x+s \\ \end{bmatrix}\big)-F\big(\begin{bmatrix} x & 0 \\ y & x \\ \end{bmatrix}\big)=\begin{bmatrix} u' & 0 \\ v' & u' \\ \end{bmatrix}\begin{bmatrix} s & 0 \\ t & s \\ \end{bmatrix}+o\bigg(\bigg|\bigg|\begin{bmatrix} s & 0 \\ t & s \\ \end{bmatrix}\bigg|\bigg|\bigg),$$ where $\bigg|\bigg|\begin{bmatrix} s & 0 \\ t & s \\ \end{bmatrix}\bigg|\bigg|=\max\{|s|,|t|\}$ and all elements of all matrices are real. Therefore, the existence of such a matrix $\begin{bmatrix} u' & 0 \\ v' & u' \\ \end{bmatrix}$ (which we will call the derivative at $\begin{bmatrix} x & 0 \\ y & x \\ \end{bmatrix}$) means differentiability of $F$ at $\begin{bmatrix} x & 0 \\ y & x \\ \end{bmatrix}$. I'm interested in to what extent this approach can be generalized to define the derivative of a matrix-valued function of a matrix argument. I mean the case when the derivative is an object of the same nature as the variables (as opposed to the definition of the derivative of a function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, which is a (Jacobian) matrix). Can anyone share links to material on such kinds of derivatives?
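A minimal numeric sketch of the idea (my own illustration, numpy assumed): applying a polynomial to the $2\times 2$ representation of $x + \epsilon$ yields both the value and the derivative, i.e. forward-mode differentiation for free:

```python
import numpy as np

def dual(x, y=1.0):
    # the 2x2 matrix representation [[x, 0], [y, x]] of x + eps*y
    return np.array([[x, 0.0], [y, x]])

def f(M):
    # f(x) = x^3 - 2x + 5, applied to the matrix representation;
    # the result is [[f(x), 0], [y*f'(x), f(x)]]
    I = np.eye(2)
    return M @ M @ M - 2.0*M + 5.0*I

M = f(dual(2.0))
print(M[0, 0])  # f(2)  = 9
print(M[1, 0])  # f'(2) = 3*2^2 - 2 = 10
```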
Let $x \geq 1$. I wish to show, by interpreting the sum as a Riemann sum, that $$\log(x) = \int_{1}^{x} \frac{dt}{t} \leq \sum_{n \leq x} \frac{1}{n}.$$ Certainly, $f(t) = 1/t$ is continuous on $[1, x]$, so is integrable on $[1, x]$ and may be computed as the limit of a sequence of Riemann sums. I have only the idea of interpreting $\sum_{n \leq x} \frac{1}{n}$ as an "upper sum" of $f$ with respect to a partition $\mathcal{P} = \{1, 2, ..., \lfloor x \rfloor, x\}$, which would necessarily be no less than $\int_{1}^{x} \frac{dt}{t}$. However, I am not certain how to formalise this idea, because, in using $\mathcal{P}$, I should be left with an "excess" area from $\lfloor x \rfloor$ to $x$?
Let $x \geq 1$. I wish to show, by interpreting the sum as a Riemann sum, that $$\log(x) = \int_{1}^{x} \frac{dt}{t} \leq \sum_{n \leq x} \frac{1}{n} \leq 1 + \int_{1}^{x} \frac{dt}{t}.$$ Certainly, $f(t) = 1/t$ is continuous on $[1, x]$, so is integrable on $[1, x]$ and may be computed as the limit of a sequence of Riemann sums. I have only the idea of interpreting $\sum_{n \leq x} \frac{1}{n}$ as an "upper sum" of $f$ with respect to a partition $\mathcal{P} = \{1, 2, ..., \lfloor x \rfloor, x\}$, which would necessarily be no less than $\int_{1}^{x} \frac{dt}{t}$. However, I am not certain how to formalise this idea, because, in using $\mathcal{P}$, I should be left with an "excess" area from $\lfloor x \rfloor$ to $x$?
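One hedged way to make the partition idea precise without tracking the excess piece separately (my formalization of the standard comparison, not necessarily the intended one):

```latex
\text{For } n \le t \le n+1:\quad \frac{1}{n+1} \le \frac{1}{t} \le \frac{1}{n}
\ \Longrightarrow\ \frac{1}{n+1} \le \int_n^{n+1} \frac{dt}{t} \le \frac{1}{n}.
\text{Lower bound: } \log x
  = \sum_{n=1}^{\lfloor x\rfloor - 1} \int_n^{n+1} \frac{dt}{t}
    + \int_{\lfloor x\rfloor}^{x} \frac{dt}{t}
  \le \sum_{n=1}^{\lfloor x\rfloor - 1} \frac{1}{n} + \frac{x-\lfloor x\rfloor}{\lfloor x\rfloor}
  \le \sum_{n \le x} \frac{1}{n},
\text{ since } x - \lfloor x\rfloor < 1 \text{ absorbs the excess area into the term } \tfrac{1}{\lfloor x\rfloor}.
\text{Upper bound: } \sum_{n \le x} \frac{1}{n}
  = 1 + \sum_{n=2}^{\lfloor x\rfloor} \frac{1}{n}
  \le 1 + \sum_{n=2}^{\lfloor x\rfloor} \int_{n-1}^{n} \frac{dt}{t}
  = 1 + \int_1^{\lfloor x\rfloor} \frac{dt}{t}
  \le 1 + \log x .
```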
How can I obtain a polynomial formula for the product (i.e. union) of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals? Assume three lines in general position are given by three equations $$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$ A circle with center $(p,q)$ and radius $r$ may be given by the equation $$(x-p)^2+(y-q)^2-r^2=0$$ which can also be written as $$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix} \begin{pmatrix}x\\y\\1\end{pmatrix}=0$$ Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies $$(a,b,c)\begin{pmatrix} p^2-r^2 & pq & p \\ pq & q^2-r^2 & q \\ p & q & 1 \end{pmatrix} \begin{pmatrix}a\\b\\c\end{pmatrix}=0$$ So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) These 4 solutions correspond to the incircle and the three excircles of the triangle formed by the lines. The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. Specifically I'm looking for the product of the four circle equations, which corresponds to an algebraic curve of degree 8 that represents the union of the four circles. I conjecture that the product of the four circles, i.e. the formula $$0=\prod_{i=1}^4 (x-p_i)^2+(y-q_i)^2-r_i^2$$ can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too. I have checked this conjecture using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle): \begin{align*} a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\ a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\ a_3 &= 3 & b_3 &= -5 & c_3 &= 3 \end{align*} For this I got the product of circles as \begin{align*} 59636082025&\,x^8 \\ + 238544328100&\,x^6y^2 \\ + 357816492150&\,x^4y^4 \\ + 238544328100&\,x^2y^6 \\ + 59636082025&\,y^8 \\ - 241134854740&\,x^6 \\ + 424846368960&\,x^5y \\ - 1438821671300&\,x^4y^2 \\ + 849692737920&\,x^3y^3 \\ - 2154238778380&\,x^2y^4 \\ + 424846368960&\,xy^5 \\ - 956551961820&\,y^6 \\ + 108802118880&\,x^5 \\ + 265528980600&\,x^4y \\ - 1771920221760&\,x^3y^2 \\ + 3788213456560&\,x^2y^3 \\ - 1880722340640&\,xy^4 \\ + 3522684475960&\,y^5 \\ - 12729722876&\,x^4 \\ + 1691695948800&\,x^3y \\ - 2241047404232&\,x^2y^2 \\ + 2683644936960&\,xy^3 \\ - 6476277331836&\,y^4 \\ - 484880944704&\,x^3 \\ + 468260157040&\,x^2y \\ - 1625087765184&\,xy^2 \\ + 7083496190160&\,y^3 \\ + 48256725504&\,x^2 \\ + 293698179840&\,xy \\ - 4644695250832&\,y^2 \\ - 13129818624&\,x \\ + 1675829775360&\,y \\ - 243236814336&\quad=0 \end{align*} How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? 
Background for my question is https://math.stackexchange.com/q/4887813/35416 discussing the probability of the centroid lying within the incircle. For an exact solution, radicals would make one's life really hard. But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more. I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
Let $X$ be a Banach space and let $(T(t))_{t\geq 0}$ be a semigroup on $X$. If there exist $M\geq 1$ and $\omega\geq 0$ such that $\|T(t)\|\leq Me^{\omega t}$ for all $t\geq 0$, will $T(t)$ be a $C_0$ semigroup? I know the converse holds true, but I'm not sure about this proposition. I would be glad to have a proof if it is valid, and a counterexample if it is not.
Suppose we have groups $G,H$, a subsemigroup $S$ of $G$ generating $G$ as a group, and a homomorphism $f\colon S\to H$. When does $f$ extend to a homomorphism $G\to H$? If $f$ is injective, when does it extend to an injective homomorphism $G\to H$? Aside from the trivial case (when $S=G$, which is the case e.g. if $G$ is torsion), I think this is true when $G$ is abelian. In this case, every element of $G$ can be written as $g_1g_2^{-1}$ for $g_1,g_2\in S$, and $g_1g_2^{-1}\mapsto f(g_1)f(g_2)^{-1}$ is well-defined: if $g_1g_2^{-1}=g_3g_4^{-1}$, then $g_1g_4=g_3g_2$, so $f(g_1g_4)=f(g_1)f(g_4)=f(g_3g_2)=f(g_3)f(g_2)$, so $f(g_1)f(g_2)^{-1}=f(g_3)f(g_4)^{-1}$. It is also easy to check that it defines a homomorphism, which is injective if $f$ is. I think the same argument works more generally if we assume that there is a central semigroup $B\subseteq S^{-1}$ such that $G=SB$. I don't actually have any counterexamples, so I'd be interested in seeing those, too.
How can I obtain a polynomial formula for the product (i.e. union) of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals? Assume three lines in general position are given by three equations $$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$ A circle with center $(p,q)$ and radius $r$ may be given by the equation $$(x-p)^2+(y-q)^2-r^2=0$$ which can also be written as $$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix} \begin{pmatrix}x\\y\\1\end{pmatrix}=0$$ Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies $$(a,b,c)\begin{pmatrix} p^2-r^2 & pq & p \\ pq & q^2-r^2 & q \\ p & q & 1 \end{pmatrix} \begin{pmatrix}a\\b\\c\end{pmatrix}=0$$ So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) These 4 solutions correspond to the incircle and the three excircles of the triangle formed by the lines. The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. Specifically I'm looking for the product of the four circle equations, which corresponds to an algebraic curve of degree 8 that represents the union of the four circles. I conjecture that the product of the four circles, i.e. the formula $$0=\prod_{i=1}^4 (x-p_i)^2+(y-q_i)^2-r_i^2$$ can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too. I have checked this conjecture using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle): \begin{align*} a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\ a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\ a_3 &= 3 & b_3 &= -5 & c_3 &= 3 \end{align*} For this I got the product of circles as \begin{align*} 59636082025&\,x^8 \\ + 238544328100&\,x^6y^2 \\ + 357816492150&\,x^4y^4 \\ + 238544328100&\,x^2y^6 \\ + 59636082025&\,y^8 \\ - 241134854740&\,x^6 \\ + 424846368960&\,x^5y \\ - 1438821671300&\,x^4y^2 \\ + 849692737920&\,x^3y^3 \\ - 2154238778380&\,x^2y^4 \\ + 424846368960&\,xy^5 \\ - 956551961820&\,y^6 \\ + 108802118880&\,x^5 \\ + 265528980600&\,x^4y \\ - 1771920221760&\,x^3y^2 \\ + 3788213456560&\,x^2y^3 \\ - 1880722340640&\,xy^4 \\ + 3522684475960&\,y^5 \\ - 12729722876&\,x^4 \\ + 1691695948800&\,x^3y \\ - 2241047404232&\,x^2y^2 \\ + 2683644936960&\,xy^3 \\ - 6476277331836&\,y^4 \\ - 484880944704&\,x^3 \\ + 468260157040&\,x^2y \\ - 1625087765184&\,xy^2 \\ + 7083496190160&\,y^3 \\ + 48256725504&\,x^2 \\ + 293698179840&\,xy \\ - 4644695250832&\,y^2 \\ - 13129818624&\,x \\ + 1675829775360&\,y \\ - 243236814336&\quad=0 \end{align*} This polynomial will factor into four circles over $\mathbb R$ but doesn't factor at all over $\mathbb Q$. 
How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? Background for my question is https://math.stackexchange.com/q/4887813/35416 discussing the probability of the centroid lying within the incircle. For an exact solution, radicals would make one's life really hard. But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more. I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
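For what it's worth, the rationality conjecture is easy to sanity-check by script (a sketch assuming sympy; the helper names are mine, the incenter/excenter barycentric formulas are classical, and the exact simplification step may take a little while; the result matches the posted polynomial only up to an overall rational scale, since the product below is monic in $x^8$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
lines = [(23, 41, -47), (4, -7, 4), (3, -5, 3)]  # a*x + b*y + c = 0

def meet(l1, l2):
    # intersection point of two lines, as a 2x1 column vector
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    sol = sp.solve([a1*x + b1*y + c1, a2*x + b2*y + c2], [x, y])
    return sp.Matrix([sol[x], sol[y]])

A, B, C = meet(lines[1], lines[2]), meet(lines[0], lines[2]), meet(lines[0], lines[1])
a, b, c = (B - C).norm(), (C - A).norm(), (A - B).norm()  # opposite side lengths

# incenter and the three excenters via the classical barycentric formulas
centers = [( a*A + b*B + c*C)/( a + b + c),
           (-a*A + b*B + c*C)/(-a + b + c),
           ( a*A - b*B + c*C)/( a - b + c),
           ( a*A + b*B - c*C)/( a + b - c)]

def circle(P0):
    # every one of the four circles is tangent to all three lines,
    # so its squared radius is the squared distance to (say) the first line
    la, lb, lc = lines[0]
    r2 = (la*P0[0] + lb*P0[1] + lc)**2 / (la**2 + lb**2)
    return (x - P0[0])**2 + (y - P0[1])**2 - r2

prod = sp.expand(sp.Mul(*[circle(P0) for P0 in centers]))
poly = sp.Poly(prod, x, y)
for mono, coef in zip(poly.monoms(), poly.coeffs()):
    print(mono, sp.simplify(coef))  # the radicals cancel: every coefficient is rational
```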
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions: 1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$. 2) Are these two isomorphic as topological groups? Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22}...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology. The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$ (these satisfy $det(A)=\pm{1}$). The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2. <hr> For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2, A \mapsto(det(A)\cdot A, det(A))$. We have the following facts about $f$: - **It is injective.** If $f(A)=f(B)$ then $(det(A)\cdot A, det(A))=(det(B)\cdot B, det(B))$. Therefore, $det(A)=det(B) \neq 0$ so $A=B$. - **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$, giving $f(dD)=(det(dD)\cdot dD, det(dD))=(d^n\cdot det(D) \cdot dD,d^n \cdot det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd. - **It is a homomorphism.** $f(AB)=(det(AB)\cdot AB, det(AB))=(det(A)det(B)\cdot AB, det(A)det(B))=((det(A)\cdot A)(det(B)\cdot B), det(A)det(B))=f(A)f(B)$. - **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be open. It suffices to consider $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$, since such products form a basis. Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$. $U^{-1}=\{A^{-1} \mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(\mathcal{O})=f^{-1}(U\times V)=U\cup U^{-1}$. Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups. <hr> For even $n$, this mapping is not well-defined: if $det(A)=-1$ then $det(det(A)\cdot A)=(det(A))^{n+1}=-1$, so $A \notin SO(n)$. My question then is: **are they homeomorphic as topological spaces if $n$ is even?** From the related questions, it seems like for even $n$, the two groups cannot be isomorphic due to one being abelian while the other is not and them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces? <hr> Related questions: https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1 https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof
Are $O(n)$ and $SO(n)\times Z_2$ homeomorphic as topological spaces?
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions:

1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$.

2) Are these two isomorphic as topological groups?

**Some preliminaries:**

Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22},...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology.

The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, i.e. those satisfying $A^TA=I$ (so that $det(A)=\pm{1}$). The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2.

**My attempt**

For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2$, $A \mapsto(det(A)\cdot A, det(A))$. We have the following facts about $f$:

- **It is injective.** If $f(A)=f(B)$ then $(det(A)\cdot A, det(A))=(det(B)\cdot B, det(B))$. Therefore $det(A)=det(B) \neq 0$, so $A=B$.
- **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$, giving $f(dD)=(det(dD)\cdot dD, det(dD))=(d^n\cdot det(D) \cdot dD,d^n \cdot det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd.
- **It is a homomorphism.** $f(AB)=(det(AB)\cdot AB, det(AB))=(det(A)det(B)\cdot AB, det(A)det(B))$ $=((det(A)\cdot A)(det(B)\cdot B), det(A)det(B))=f(A)f(B)$.
- **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be a basic open set, say $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$ (it suffices to check preimages of such sets). Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$. $U^{-1}=\{A^{-1} \mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(\mathcal{O})=f^{-1}(U\times V)=U\cup U^{-1}$.

Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups.

<hr>

For even $n$, this mapping is not well-defined: if $det(A)=-1$, then $det(det(A)\cdot A)=(det(A))^{n+1}=-1$, so $det(A)\cdot A \notin SO(n)$.

My question then is **are they homeomorphic as topological spaces if $n$ is even?**

From the related questions, it seems that for even $n$ the two groups cannot be isomorphic, due to one being abelian while the other is not and to them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces?

<hr>

Related questions:

https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups

https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1

https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof
We know a hyperbola can be expressed in the form $$ \frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1$$ where $(h,k)$ is its center. I've learned that in the parametric form, we take $$x= h + a\sec t$$ and $$ y = k + b\tan t $$ These values satisfy the given equation. But so do $$x= h + a\csc t$$ and $$y=k+b\cot t$$ Then why aren't these second values of $x$ and $y$ used as the parametrization?
We know a hyperbola can be expressed in the form $$ \frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1$$ where $(h,k)$ is its center. I've learnt that in the parametric form, we take $$x= h + a\sec t$$ and $$ y = k + b\tan t $$ These values satisfy the given equation. But so do $$x= h + a\csc t$$ and $$y=k+b\cot t$$ Then why aren't these second values of $x$ and $y$ used as the parametrization?
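For what it's worth, here is the verification that both pairs satisfy the equation, together with the relation between them (my addition, not part of the original question):

$$\frac{(a\sec t)^2}{a^2}-\frac{(b\tan t)^2}{b^2}=\sec^2 t-\tan^2 t=1, \qquad \frac{(a\csc t)^2}{a^2}-\frac{(b\cot t)^2}{b^2}=\csc^2 t-\cot^2 t=1.$$

Moreover, since $\csc t=\sec\left(\frac{\pi}{2}-t\right)$ and $\cot t=\tan\left(\frac{\pi}{2}-t\right)$, the second parametrization is the first one with the parameter replaced by $\frac{\pi}{2}-t$, so it traces out exactly the same curve.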
For context: * $\Omega$ is an open subset of $\mathbb{R}^n$. * $\mathcal{D}(\Omega)$ is the space of test (smooth, compactly supported) functions $\Omega\to\mathbb{C}$ with a complete, unmetrizable topology $\tau$. * $\mathcal{D}_K(\Omega)$ is the space of test functions supported on the compact set $K\subseteq \Omega$, with a Fréchet-space topology $\tau_K$ that coincides with the subspace topology inherited from $\tau$. * $\mathcal{D}'(\Omega)$ is the space of distributions i.e. of continuous linear functionals $\mathcal{D}(\Omega)\to\mathbb{C}$. All the above is as explained in chapters 1 and 6 of Rudin's _Functional Analysis_. I had the following written in my personal notes: $\def\L{\Lambda} \def\DDD{\mathcal{D}} \def\W{\Omega} \def\CC{\mathbb{C}}$ **Result:** let $\L_1,\L_2\in\DDD'(\W)$, and $z\in\CC$, then 1. $\L_1+\L_2\in\DDD'(\W)$. 2. $z\L_1\in\DDD'(\W)$. 3. $\L_1\cdot \L_2\in\DDD'(\W)$. As for the proof, I merely wrote that it follows from elementary results on continuity of maps between topological vector spaces, but I could not find any result that would easily imply the result above, so I set out to prove it directly: Fix a compact $K\subseteq \W$, and suppose $$|\L_1(\phi)| \le C_1\|\phi\|_N \ \ \ \ \text{ and } \ \ \ \ |\L_2(\phi)| \le C_2\|\phi\|_M$$ for all $\phi\in\DDD_K(\W)$. For addition, we have $$|(\L_1+\L_2)\phi| = |\L_1(\phi)+\L_2(\phi)|\le |\L_1(\phi)|+|\L_2(\phi)| \le C_1\|\phi\|_N + C_2\|\phi\|_M \le (C_1+C_2)\|\phi\|_{\max(N,M)}.$$ For scalar multiplication, we note $$|z\L_1(\phi)| \le |z|C_1\|\phi\|_N.$$ Finally, for multiplication, we have $$|(\L_1\cdot\L_2)(\phi)| = |\L_1(\phi)\L_2(\phi)| \le C_1\|\phi\|_NC_2\|\phi\|_M \le \ldots$$ --- **Questions:** * Is the proof for addition and scalar multiplication valid? * How could one finish the proof for multiplication?
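A note that may be relevant to the last question (my own observation, worth double-checking): the difficulty is not just with the estimate. The functional $\phi\mapsto\Lambda_1(\phi)\Lambda_2(\phi)$ is generally not linear in $\phi$:

$$(\Lambda_1\cdot\Lambda_2)(2\phi)=\Lambda_1(2\phi)\,\Lambda_2(2\phi)=4\,\Lambda_1(\phi)\Lambda_2(\phi)\neq 2\,(\Lambda_1\cdot\Lambda_2)(\phi)$$

whenever $\Lambda_1(\phi)\Lambda_2(\phi)\neq 0$, so this pointwise product cannot be a linear functional, let alone a distribution.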
Why is $\mathcal{D}'$, the space of distributions, closed under addition, scalar multiplication, and multiplication?
I managed to solve it using the following function: given Cartesian points A and B, the geodesic path on a sphere is defined as

r(t) = sin(1-t)*A + sin(t)*B, for t in [0, 1],

then normalize, r(t)/||r(t)||, to get the points on the sphere.

[![enter image description here][1]][1]

Can someone explain to me why this works?

* Edit: changed the "1" to pi/2 to get uniform sampling

[![enter image description here][2]][2]

  [1]: https://i.stack.imgur.com/wj7Mm.png
  [2]: https://i.stack.imgur.com/VO0bH.png
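Here is a minimal numerical sketch of the construction above (my own illustration, assuming NumPy; `geodesic_points` is a name I made up). The key point is that r(t) is a positive combination of A and B, so it lies in the plane spanned by A and B, and normalizing projects it onto the great circle through them. With the pi/2 factor and perpendicular endpoints the weights become cos(t·pi/2) and sin(t·pi/2), so r(t) is already a unit vector and the samples are uniformly spaced:

    import numpy as np

    def geodesic_points(A, B, n=5):
        """Sample points along the great-circle arc from A to B on the unit sphere."""
        A = A / np.linalg.norm(A)
        B = B / np.linalg.norm(B)
        pts = []
        for t in np.linspace(0.0, 1.0, n):
            # blend with sine weights, as in the question (with the pi/2 fix) ...
            r = np.sin((1 - t) * np.pi / 2) * A + np.sin(t * np.pi / 2) * B
            pts.append(r / np.linalg.norm(r))  # ... then project back onto the sphere
        return np.array(pts)

    A = np.array([1.0, 0.0, 0.0])
    B = np.array([0.0, 1.0, 0.0])
    for p in geodesic_points(A, B):
        print(p)  # for these A, B consecutive points are 22.5 degrees apart

One caveat worth noting: for nearly antipodal A and B the blend can come close to the zero vector, where the usual slerp formula (with sin of the full angle in the denominator) is numerically safer.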
Everything that follows is from Rudin's _Functional Analysis_: $\def\L{\Lambda} \def\DDD{\mathcal{D}} \def\sbe{\subseteq} \def\W{\Omega} \def\RR{\mathbb{R}} \def\CC{\mathbb{C}} $ --- Below $\DDD$ is the space of test (smooth, compactly supported) functions from an open set $\Omega\sbe\RR^n$ to $\CC$. Such space is given a complete, unmetrizable topology $\tau$. Similarly $\DDD_K$ is the space of test functions $\Omega\to\CC$ whose support lies in the compact set $K$. Each $\DDD_K$ has a Fréchet space topology $\tau_K$ which coincides with the subspace topology inherited from $\tau$. **Theorem 6.6:** suppose $\L$ is a linear mapping of $\DDD$ into a lctvs $Y$. Then the following are equivalent: a) $\L$ is continuous. b) $\L$ is bounded. c) If $\phi_i\to 0$ in $\DDD(\W)$, then $\L\phi_i\to0$ in $Y$. d) The restriction of $\L$ to any $\DDD_K\sbe\DDD(\W)$ is continuous. --- The proof shows a)$\implies$b)$\implies$c)$\implies$d)$\implies$a). I struggle to understand the steps b)$\implies$c) and d)$\implies$a): b)$\implies$c): there is a compact subset $K\sbe\W$ that contains the support of every $\phi_i$, thus the $\phi_i$ are members of the subset $\DDD_K$ and $\phi_i\to 0$ holds in $\DDD_K$. The restriction of $\L$ to this $\DDD_K$ is bounded **(why?)** and since $\DDD_K$ is metrizable it follows that $\L\phi_i\to 0$ in $Y$. d)$\implies$a): let $U$ be a convex balanced neighborhood of $0$ in $Y$, and put $V=\L^{-1}(U)$. Then $V$ is convex and balanced **(why?)**. Now $V$ is open if and only if $\DDD_K\cap V\in\tau_K$ for every compact $K\sbe \W$ **(I only understand the "only if" implication)**. This proves the equivalence of a) and d) **(how?)**.
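As a partial note of my own (worth checking against Rudin's definitions), the second **(why?)** at least is mechanical: preimages of convex, balanced sets under a linear map are convex and balanced. If $x,y\in V=\Lambda^{-1}(U)$ and $t\in[0,1]$, then $$\Lambda(tx+(1-t)y)=t\,\Lambda x+(1-t)\,\Lambda y\in U$$ since $U$ is convex, so $tx+(1-t)y\in V$; and if $|\alpha|\le 1$, then $\Lambda(\alpha x)=\alpha\,\Lambda x\in U$ since $U$ is balanced, so $\alpha x\in V$.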
Suppose $P < Q$ and $PQ=N$, where both $P$ and $Q$ are prime (including $2$). What's the maximum distance $P$ can ever be from $\sqrt{N}$, if we hold $Q$ constant and let $P$ range from $0$ up to $Q$?
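A heuristic that may help frame the question (my addition; it treats $P$ as a continuous variable and ignores primality): for $0<P<Q$ we have $P<\sqrt{N}<Q$, so the distance is

$$g(P)=\sqrt{PQ}-P,\qquad g'(P)=\frac{1}{2}\sqrt{\frac{Q}{P}}-1,$$

which vanishes at $P=Q/4$, where $g(Q/4)=\frac{Q}{2}-\frac{Q}{4}=\frac{Q}{4}$. So, with $Q$ held constant, the gap can never exceed $Q/4$; it is largest for $P$ near $Q/4$ and shrinks to $0$ as $P$ approaches either $0$ or $Q$.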
So there are a million (essentially equivalent) ways to make it obvious **algebraically** that the axis of symmetry of the parabola $y=x^2+bx+c$ must be at $x=-\frac{b}{2}$, while its real roots, if they exist, are $\pm\sqrt{\left(\frac{b}{2}\right)^2-c}$ away from that axis. It feels like there must be a way to make this graphically obvious as well: the line $y=bx+c$ is tangent to the parabola at $x=0$, so the graphical interpretations of $b$ and $c$ are as the slope and the $y$-intercept, respectively, of the parabola at $x=0$. The question: Given these (or some other less obvious) graphical interpretations of $b$ & $c$, and without doing the standard algebra (completing the square etc.), is there a way to "read off" from the parabola graph that $\sqrt{\left(\frac{b}{2}\right)^2-c}$ must be the distance between a root and the symmetry axis? What I am looking for is an argument like the following, which makes it visually obvious that $(a+b)^2=a^2+2ab+b^2$: [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/UqFRn.png
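For reference (my addition), the "standard algebra" being referred to is just completing the square:

$$y=x^2+bx+c=\left(x+\frac{b}{2}\right)^2-\left(\left(\frac{b}{2}\right)^2-c\right),$$

so the vertex sits at $x=-\frac{b}{2}$, and $y=0$ forces $x=-\frac{b}{2}\pm\sqrt{\left(\frac{b}{2}\right)^2-c}$. The question asks for a picture that makes this visible without the computation.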
I have to understand a thing about this exercise: find the minimum of $f(x, y) = (x-2)^2 + y$ subject to $y-x^3 \geq 0$, $y+x^3 \leq 0$ and $y \geq 0$.

Now, I solved the problem quite easily by sketching: the level curves of $f(x, y)$ are concave parabolas ($y = k - (x-2)^2$), and the feasible region lies in the upper left quadrant, bounded below by the negative $x$-axis and above by the curve $y=-x^3$. The candidate solution is $(0, 0)$, at which $f(x, y) = 4$.

On the other side, I wanted to solve it with Kuhn-Tucker multipliers, so I set the problem in the standard form for a minimum problem, that is: $$-\max -f(x, y) \qquad \text{s.t.} \qquad \begin{cases} -y+x^3 \leq 0 \\ y+x^3 \leq 0 \\ -y \leq 0 \end{cases}$$ with KKT Lagrangian $$L = -(x-2)^2-y- \lambda(-y+x^3) - \mu(y+x^3) - \Theta(-y)$$ which leads to the optimality conditions $$ \begin{cases} -2(x-2) - 3\lambda x^2 - 3\mu x^2 = 0 \\ -1+\lambda - \mu + \Theta = 0 \\ -y + x^3 \leq 0 \quad ; \quad \lambda(-y+x^3) = 0 \\ y+x^3 \leq 0 \quad ; \quad \mu(y+x^3) = 0 \\ -y\leq 0 \quad ; \quad \Theta y = 0 \\ \lambda, \mu, \Theta \geq 0 \end{cases} $$ From here I have to study $8$ cases. Here are some:

$\bullet$ When $\lambda = \mu = \Theta = 0$ the system is impossible.

$\bullet$ When $\lambda = \mu = 0$, $\Theta \neq 0$ I obtain $(2, 0)$, which doesn't satisfy the constraints.

$\bullet$ When $\lambda = 0$, $\mu \neq 0$, $\Theta = 0$ I get $\mu = -1$, which is not admissible.

$\bullet$ When $\lambda, \mu, \Theta \neq 0$ I eventually manage to get, among the others, $$\begin{cases} \Theta = \mu + 1 - \lambda \\ (\mu+1-\lambda)y = 0 \end{cases} $$ from which either $y =0$ or $\lambda = \mu +1$. For $\lambda = \mu +1$, using the first equation I get $$\mu = \frac{4-2x-3x^2}{6x^2}$$ from which $$\lambda = \frac{4 - 2x + 3x^2}{6x^2}$$ If $y = 0$ then from the complementarity equation $\mu(x^3-y) = 0$ I obtain $(4-2x-3x^2)x = 0$, hence either $x = 0$ or $x = \frac{1}{3}(-1\pm \sqrt{13})$, but these last values don't satisfy all the constraints. On the other hand, $x = 0$ would be good if not for the fact that I cannot take it, since it would make $\lambda, \mu$ nonsensical.

So I ask you: how should one deal with this problem analytically? It looks like the KKT conditions might not be satisfied, but I am not sure of this. I would like another pair of eyes/minds on this, thank you!

Here is the sketch too, with Mathematica code. It's not as accurate as I thought, since the feasible region should also include a missing portion (to the left of the origin and above).

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/yVX5N.png

    plot1 = RegionPlot[{y - x^3 >= 0 && y + x^3 <= 0 && y >= 0}, {x, -3, 3}, {y, -3, 5}, Axes -> True]
    plot2 = Plot[{-(x - 2)^2, 4 - (x - 2)^2, 3 - (x - 2)^2}, {x, -3, 5}, PlotStyle -> {Dashed, Dashed, Dashed}]
    Show[plot1, plot2]

**Second Thought**

Or maybe I could just say that $y \leq -x^3$ is the same as $x^3 \leq -y$; together with the other condition this gives $\begin{cases} x^3 \leq -y \\ x^3 \leq y \end{cases}$, which would imply either $x = 0$ and then $y = 0$, or $y = 0$ with $x$ negative; though in this last case we would keep increasing the value of $f$ rather than finding the minimum.

**Third Thought**

I perhaps have forgotten about the regularity of the constraints. The Jacobian matrix indeed reads $$\mathsf{J} = \begin{pmatrix} 3x^2 & -1 \\ 3x^2 & 1 \\ 0& -1 \end{pmatrix}$$ from which we observe that at $(0, 0)$ we lose the regularity of the constraints, since $\mathsf{J}$ there has rank $1$. Perhaps this is what makes the KKT conditions fail.
In any case, the problem with the solution $x = 0$ remains: it is not valid since it makes $\lambda, \mu$ nonsensical.
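Since the analytic route stalls, here is a minimal numerical sanity check of the candidate $(0,0)$; this is my own sketch, assuming SciPy is available, and it is no substitute for the KKT analysis (indeed, the constraint-qualification failure noted above is exactly why the multiplier search misbehaves at the true minimizer):

    from scipy.optimize import minimize

    f = lambda v: (v[0] - 2) ** 2 + v[1]
    cons = (
        {"type": "ineq", "fun": lambda v: v[1] - v[0] ** 3},     # y - x^3 >= 0
        {"type": "ineq", "fun": lambda v: -(v[1] + v[0] ** 3)},  # y + x^3 <= 0
        {"type": "ineq", "fun": lambda v: v[1]},                 # y >= 0
    )
    # start from a feasible point; SLSQP handles inequality constraints
    res = minimize(f, x0=[-1.0, 0.5], constraints=cons, method="SLSQP")
    print(res.x, res.fun)  # expect roughly (0, 0) and 4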
For ellipses having equation in the form $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$ why are the parametric equations always $$x=a\cos(\theta)$$ $$y=b\sin(\theta)$$ even when $b>a$? As far as I know, for a hyperbola having equation $$\frac{x^2}{a^2}-\frac{y^2}{b^2}=\pm 1$$ the parametric equations are $(x(\theta),y(\theta))=(a\sec(\theta),b\tan(\theta))$ and $(x(\theta),y(\theta))=(a\tan(\theta),b\sec(\theta))$ for $+$ and $-$ respectively, where I suppose the angle is measured from the transverse axis. But why is it always the angle with the $x$-axis in the parametric equations of an ellipse? Thank you.
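A remark that may sharpen the question (my note): verifying the ellipse parametrization never uses $a>b$,

$$\frac{(a\cos\theta)^2}{a^2}+\frac{(b\sin\theta)^2}{b^2}=\cos^2\theta+\sin^2\theta=1,$$

so $(a\cos\theta,\,b\sin\theta)$ parametrizes the ellipse regardless of which axis is major. Also, $\theta$ is in general *not* the polar angle of the point itself: if the point has polar angle $\varphi$, then $\tan\varphi=\frac{b}{a}\tan\theta$, so $\theta=\varphi$ only when $a=b$.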
Why do we always take the parameter of the parametric equation of an ellipse as the angle formed with the x-axis instead of the semi-major axis?
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions:

1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$.

2) Are these two isomorphic as topological groups?

**Some preliminaries:**

Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22},...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology.

The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, i.e. those satisfying $A^TA=I$ (so that $det(A)=\pm{1}$). The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2.

**My attempt**

For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2$, $A \mapsto(det(A)\cdot A, det(A))$. We have the following facts about $f$:

- **It is injective.** If $f(A)=f(B)$ then $(det(A)\cdot A, det(A))=(det(B)\cdot B, det(B))$. Therefore $det(A)=det(B) \neq 0$, so $A=B$.
- **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$, giving $f(dD)=(det(dD)\cdot dD, det(dD))=(d^n\cdot det(D) \cdot dD,d^n \cdot det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd.
- **It is a homomorphism.** $f(AB)=(det(AB)\cdot AB, det(AB))=(det(A)det(B)\cdot AB, det(A)det(B))$ $=((det(A)\cdot A)(det(B)\cdot B), det(A)det(B))=f(A)f(B)$.
- **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be a basic open set, say $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$ (it suffices to check preimages of such sets). Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$. $U^{-1}=\{A^{-1} \mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(\mathcal{O})=f^{-1}(U\times V)=U\cup U^{-1}$.

Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups.

<hr>

For even $n$, this mapping is not well-defined: if $det(A)=-1$, then $det(det(A)\cdot A)=(det(A))^{n+1}=-1$, so $det(A)\cdot A \notin SO(n)$.

My question then is **are they homeomorphic as topological spaces if $n$ is even?**

From the related questions, it seems that for even $n$ the two groups cannot be isomorphic, due to <s>one being abelian while the other is not and</s> them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces?

<hr>

Related questions:

https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups

https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1

https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof
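A candidate construction, offered as a hedged sketch rather than a verified solution: for even $n$ one can give up on the map being a homomorphism and still look for a homeomorphism. Let $J=\operatorname{diag}(1,\dots,1,-1)\in O(n)$ and define

$$g:O(n)\to SO(n)\times Z_2,\qquad g(A)=\begin{cases}(A,\,1) & \text{if } \det(A)=1,\\ (AJ,\,-1) & \text{if } \det(A)=-1,\end{cases}$$

with candidate inverse $(D,d)\mapsto D\,J^{(1-d)/2}$. Since $SO(n)$ and its complement are both open and closed in $O(n)$, $g$ is continuous on each piece and hence continuous, and the same argument applies to the inverse; so $g$ should be a homeomorphism for every $n$, even though it is not a group isomorphism when $n$ is even.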