$\require{AMScd}$I make reference to [this](https://www.math.uchicago.edu/~may/PAPERS/AddJan01.pdf) paper. I've recently become aware that there's an annoying variety of seemingly distinct definitions of triangulated categories, partly because allegedly no one knows an example of a weakly but not "strongly" triangulated category. Anyway, as part of all this I need to check the following assertion from May, in the linked paper: (I apologise for using photos but these and later commutative diagrams are beyond my abilities to draw in LaTeX) > [![enter image description here][1]][1] This, together with the basic TR1) and TR2) axioms (rotation and existence, isomorphism-invariance of triangles), is claimed to imply the weak form of the classical octahedral axiom. May gives the following hint: [![enter image description here][2]][2] And... it **almost** works. It's so close to working but there's a final wrinkle which just does not seem to work. *I'm left wondering if May made an oversight about this final wrinkle or if I'm just missing an easy last step - this is my question.* If anyone has a reference for this equivalence (appropriately stated?) please say. So let $X\overset{f}{\to}Y\overset{g}{\to}Z$ be some composable morphisms, $h$ their composite. Extend $f,g,h$ to distinguished triangles: $$\begin{align}&X\overset{f}{\to}Y\overset{f'}{\to}U\overset{f''}{\to}\Sigma X\\&Y\overset{g}{\to}Z\overset{g'}{\to}W\overset{g''}{\to}\Sigma Y\\&X\overset{h}{\to}Z\overset{h'}{\to}V\overset{h''}{\to}\Sigma X\end{align}$$ The octahedral axiom in its weakest form tasks us with finding fillers $\alpha,\beta$ making the following a distinguished triangle: $$U\overset{\exists \alpha}{\to}V\overset{\exists\beta}{\to}W\overset{\Sigma(f')\circ g''}{\to}\Sigma U$$And such that the following commute: > $$f''=h''\circ\alpha,\,g'=\beta\circ h',\,\alpha\circ f'=h'\circ g,\,g''\circ\beta=\Sigma(f)\circ h''$$ The latter two involve four morphisms. Using May's hint, I can find $\alpha,\beta$ such that **all but one** of these commute, the difficulty being in making both quadrilaterals join up simultaneously. Did May make an oversight, do I need an extra hypothesis to make it tick, or am I missing an easy final resolution? Specifically, I can find $\alpha,\alpha':U\to V,\beta,\beta':V\to W$ such that: > The triangles $U\to V\to W\overset{\Sigma(f')\circ g''}{\to}\Sigma U$ are distinguished; $f''=h''\circ\alpha=h''\circ\alpha',g'=\beta\circ h'=\beta'\circ h'$; > > And $\alpha\circ f'=h'\circ g$ or $g''\circ\beta'=\Sigma(f)\circ h''$, but seemingly I can't make both hold for one pair $(\alpha,\beta)$ or $(\alpha',\beta')$ simultaneously. > > We can also find an automorphism $\gamma:V\cong V$ such that $\alpha'=\gamma\circ\alpha,\,\beta'\circ\gamma=\beta$, but isomorphism is *not* equality! And I don't think this is good enough to conclude, because inserting $\gamma$ anywhere into the picture could potentially alter the other, *fixed* morphisms given to us. What to do? [1]: https://i.stack.imgur.com/SgIS6.png [2]: https://i.stack.imgur.com/2lSZm.png
Is the octahedral axiom really equivalent to the $4\times 4$ lemma?
I assume you meant $U = \{f \in X | \sup_{a \in A} |f(a)| \geq 1\}$ ($f(a)$ in the absolute value instead of $f(x)$)? In that case no, assuming $A$ is $T_1$ and infinite. Indeed, just choose a sequence of distinct points $\{a_n\}_{n \in \mathbb{N}} \subset A$. Then $f_n = 1_{\{a_n\}} \in U$, but $f_n \to 0$ pointwise, so by dominated convergence, $\langle f_n, \mu \rangle \to 0$ for all $\mu \in Y$, i.e., $f_n \to 0$ in the $\sigma(X, Y)$ topology. But $0 \notin U$.
I was trying to come up with a problem whose solutions would be the numbers 5, 6, 10 and 11. It seems like "find all integers $n$ for which $\frac{n(n+1)}{10}$ is a prime power" is a good attempt, as there are at least no other solutions for $n < 1{,}000{,}000$ (tested with Python). However, I didn't manage to prove that there are no other numbers matching this criterion, so I wonder whether there are none, or whether it's just that prime powers are quite rare and there could be very large numbers still matching my criterion. Can anyone help me out here?
Let $(V,+,\cdot)$ be a vector space over a field $\mathbb K$. For $v\in V$, let $-v$ be its additive inverse in $V$, and for $a\in \mathbb K$ let $a'$ be its additive inverse in $\mathbb K$. Now, is $-v$ the same as $1'\cdot v$, where the latter is the vector $v$ scaled by the additive inverse of $1$ in $\mathbb K$? I don't know how to prove this, but we should probably use the identity $1\cdot v=v$.
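A short derivation sketch of the expected answer, using only the axioms mentioned above plus the standard lemma $0_{\mathbb K}\cdot v=0_V$ (which itself follows from $0\cdot v=(0+0)\cdot v$ and cancellation):

    \begin{align*}
    v + 1'\cdot v &= 1\cdot v + 1'\cdot v && \text{(identity } 1\cdot v = v\text{)}\\
                  &= (1 + 1')\cdot v      && \text{(distributivity)}\\
                  &= 0_{\mathbb K}\cdot v = 0_V && \text{(lemma)}
    \end{align*}

so $1'\cdot v$ is an additive inverse of $v$, and by uniqueness of additive inverses $1'\cdot v=-v$.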
Your equation for $C(t)$ can be broken down into equations for $x$ and $y$ separately: $$ x(t) = \frac{1-t^2}{1+t^2} \quad ; \quad y(t) = \frac{2t}{1+t^2} $$ It’s easy to check that $[x(t)]^2 + [y(t)]^2 = 1$ for all $t$. This means that every point $C(t)= (x(t),y(t))$ lies on the unit circle. Also it’s clear that $0 \le x(t) \le 1$ if $0 \le t \le 1$. Can you take it from there? The same sort of reasoning will work whenever you have parametric equations and an implicit equation for a conic. In fact, it will work whenever you have parametric equations and an implicit equation for any curve. A rational quadratic curve will never quite cover an entire conic — there will always be at least one point missing. For example, your parametric equation $C(t)$ will never give you the point $(-1,0)$ on the unit circle no matter what parameter value $t$ you use.
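For what it's worth, here is a small `sympy` sketch (variable names are mine) checking the two claims above: every $C(t)$ lies on the unit circle, and no real $t$ reaches $(-1,0)$:

    # verify x(t)^2 + y(t)^2 == 1 identically, and that x(t) = -1 has no real solution
    import sympy as sp

    t = sp.symbols('t', real=True)
    x = (1 - t**2) / (1 + t**2)
    y = 2*t / (1 + t**2)

    assert sp.simplify(x**2 + y**2 - 1) == 0   # C(t) lies on the unit circle for all t
    assert sp.solve(sp.Eq(x, -1), t) == []     # (-1, 0) is never reached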
I was trying to come up with a problem whose solutions would be the numbers 5, 6, 10 and 11. It seems like "find all integers $n$ for which $\frac{n(n-1)}{10}$ is a prime power" is a good attempt, as there are at least no other solutions for $n < 1{,}000{,}000$ (tested with Python). However, I didn't manage to prove that there are no other numbers matching this criterion, so I wonder whether there are none, or whether it's just that prime powers are quite rare and there could be very large numbers still matching my criterion. Can anyone help me out here?
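A sketch of the kind of search described above (this mirrors, rather than reproduces, the original Python test; `sympy` is assumed available, and the factorizations make it slow-ish):

    # find all n below a bound with n(n-1)/10 an integer prime power
    from sympy import factorint

    def is_prime_power(m: int) -> bool:
        # prime powers p^k with k >= 1 have exactly one distinct prime factor
        return m > 1 and len(factorint(m)) == 1

    bound = 10**6
    hits = [n for n in range(2, bound)
            if n * (n - 1) % 10 == 0 and is_prime_power(n * (n - 1) // 10)]
    print(hits)  # expected, per the observation above: [5, 6, 10, 11]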
Consider the stereographic projection chart on $S^2$ which doesn't include the north pole $$(X,Y)=\varphi(x,y,z)=\left(\frac{x}{1-z}, \frac{y}{1-z}\right).$$ I want to pull back the 1-form $\omega = \frac{-ydx+xdy}{\sqrt{x^2+y^2}}$ from $\mathbb{R}^2$ to $S^2$, but I am not sure about a step in the calculation. Writing $\omega = f\,dX + g\,dY$, the definition of the pullback of a form under a smooth map $\varphi$ is $$\varphi^* \omega=(f \circ \varphi) d\left(X \circ \varphi\right)+(g \circ \varphi) d\left(Y \circ \varphi\right)$$ Then, $$ \begin{aligned} &\varphi^*\omega =\frac{\frac{-y}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(X \circ \varphi\right)+\frac{\frac{x}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(Y \circ \varphi\right) \\ & =\frac{-y}{\sqrt{x^2+y^2}} d(X \circ \varphi)+\frac{x}{\sqrt{x^2+y^2}} d(Y \circ \varphi) \\ & \end{aligned} $$ How do I compute $d(X\circ \varphi)$ and $d(Y\circ \varphi)$? Intuitively, this feels like some sort of product rule would have to occur. $$d\frac{x}{1-z} = \frac{1}{1-z}dx + \frac{x}{(1-z)^2}dz$$ $$d\frac{y}{1-z} = \frac{1}{1-z}dy + \frac{y}{(1-z)^2}dz$$ Then $$\varphi^*\omega = \frac{-y}{\sqrt{x^2+y^2}}\left[\frac{1}{1-z}dx + \frac{x}{(1-z)^2}dz\right] + \frac{x}{\sqrt{x^2+y^2}}\left[\frac{1}{1-z}dy + \frac{y}{(1-z)^2}dz\right]$$ In the end I should get some 1-form on $S^2$. Is this calculation correct? Thank you!
Let $A$ be a compact topological space equipped with the Borel $\sigma$-algebra, and $X=B_b(A)$ be the vector space of bounded measurable functions. Let $Y=\mathcal M(A)$ be the vector space of finite signed measures on $A$. Define the dual pairing $\langle\cdot, \cdot\rangle$ between $(X,Y)$ by $\langle f, \mu \rangle=\int_A f(a)\,\mu(da)$. Let $\sigma(X,Y)$ be the weakest topology such that for all $\mu\in Y$, the linear map $X\ni f\mapsto \langle f,\mu\rangle\in \mathbb R$ is continuous. Define the set $U=\{f\in X\mid \sup_{a\in A}|f(a)|\ge 1\}$. Is the set $U$ closed in the $\sigma(X,Y)$ topology? I am not sure how to proceed to prove or disprove the claim.
I'm trying to prove that if $A$ and $B$ are two convex sets in a **finite dimensional** vector space $V$, then there exists a hyperplane that separates $A$ and $B$, but I'm not understanding how to proceed. Does someone have any ideas?
Consider the following logical statement. $$ (A \land B) \implies (C \land D)$$ **Question:** Under what conditions does the following statement follow from the above? $$ (A\implies C) \land (B \implies D)$$ --- **Context:** I realised the heart of my previous [question][1] is this simpler-to-state, more general question. --- **My Thoughts** As a beginner, my initial error was to separate $A$ from $B$ in the antecedent of the first statement, and proceed from there. This is an error because the antecedent is only true if $A$ and $B$ are **both** true. I then tried reading about rules for distributing conjunctions over implications, but that seemed to be a very mechanistic approach, lacking intuition. My third attempt was to consider that the second statement implies the first fairly easily, but this doesn't seem fruitful in revealing the conditions under which the first implies the second.
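Since there are only $16$ truth assignments, the question can be brute-forced; here is a minimal Python sketch (helper names are mine) listing the assignments where the first statement holds but the second fails:

    # material implication: p -> q is (not p) or q
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for A, B, C, D in product([False, True], repeat=4):
        s1 = implies(A and B, C and D)
        s2 = implies(A, C) and implies(B, D)
        if s1 and not s2:
            print(A, B, C, D)
    # e.g. A=True, B=False, C=False, D=False: s1 holds vacuously but s2 fails,
    # so extra conditions are genuinely needed for the first to imply the second.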
When does $ (A \land B) \implies (C \land D)$ imply $ (A\implies C) \land (B \implies D)$?
It is said that category theory serves as an alternative foundation of mathematics; as such, it must define the natural numbers in terms of categories, just as set theory, when considered as a foundation for mathematics, defines the natural numbers recursively by $0:=\emptyset$ and $n+1:=n\cup \{n\}$. So, what is the definition of the natural numbers in category theory?
I'm trying to prove that if $A$ and $B$ are two convex sets in a **finite dimensional** vector space $V$, with $A\cap B= \emptyset$, then there exists a hyperplane that separates $A$ and $B$, but I'm not understanding how to proceed. Does someone have any ideas?
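Not a proof, but for finite, *strictly* separated point sets a separating hyperplane $w\cdot x=b$ can be computed as a feasibility LP, which may help build intuition. A sketch (the function name and the $\pm 1$ margin normalisation are my own choices; `scipy` assumed available):

    # find w, b with w.a <= b - 1 for a in A and w.x >= b + 1 for x in B
    import numpy as np
    from scipy.optimize import linprog

    def find_separator(A, B):
        A, B = np.asarray(A, float), np.asarray(B, float)
        d = A.shape[1]
        # unknowns z = (w_1, ..., w_d, b); constraints in the form M @ z <= c
        M = np.vstack([np.hstack([A, -np.ones((len(A), 1))]),   # w.a - b <= -1
                       np.hstack([-B, np.ones((len(B), 1))])])  # -w.x + b <= -1
        c = -np.ones(len(A) + len(B))
        res = linprog(np.zeros(d + 1), A_ub=M, b_ub=c,
                      bounds=[(None, None)] * (d + 1))
        return (res.x[:d], res.x[d]) if res.success else None

    print(find_separator([[0, 0], [1, 0]], [[3, 3], [4, 2]]))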
I read in some lecture notes the following definition of contraction rate: Definition (Posterior rate of contraction) The posterior distribution $\Pi_n\left(\cdot \mid X^{(n)}\right)$ is said to contract at rate $\epsilon_n \rightarrow 0$ at $\theta_0 \in \Theta$ if $\Pi_n\left(\theta: d\left(\theta, \theta_0\right)>M \epsilon_n \mid X^{(n)}\right) \rightarrow 0$ in $P_{\theta_0}^{(n)}$ probability, for a sufficiently large constant $M$ as $n \rightarrow \infty$. **Q**: Does this formulation imply that under the posterior: $\epsilon_n^{-1} d\left(\theta, \theta_0\right) \rightarrow 0$? I think the answer is yes but I could not prove it.
> What numbers can be written uniquely as a sum of two squares? I was looking at sequence [A125022](https://oeis.org/A125022), which shows the numbers that can be uniquely written as a sum of two squares. Here are a few things that I noticed from the first numbers. We have $1$, $2$, $4$, $8$, $16$, $32$, $64$, $128$. It is then safe to assume that all numbers of the form $2^{s}$ can be written uniquely, where $s \in \mathbb{Z}_{+} \cup \{0\}$. Moreover, primes of the form $4k+1$, for example $5$ and $13$, also appear and, interestingly enough, $5^2$ and $13^{2}$ do not. So, we could also say that $p^{s}$ has a unique representation only when $s = 0$ or $s = 1$. If we analyze $A125022$ a bit more, we notice that $3^{2}$, $7^{2}$, $11^{2}$ are there, so we can also conjecture that numbers of the form $q^{2}$ have a unique representation, where $q$ is a prime of the form $4k+3$. Furthermore, for reasons I will give later, I believe $d^{2}$, where $d$ has all of its prime factors of the form $4k+3$, can be uniquely represented as a sum of two squares. It is also possible to see that products of these three cases are in the sequence, for example $2^{2}\cdot 5$, $2 \cdot 5 \cdot 3^{2}$ and $2 \cdot 7^{2}$. **Conjecture.** A number $n \in \mathbb{Z}_{+}$ can be written uniquely as a sum of two squares if, and only if, $n = 2^{s} d^{2} p^{e_1}$, where $s \in \mathbb{Z}_{+} \cup \{0\}$, $d$ has all of its prime divisors of the form $4k+3$, $p$ is a prime of the form $4k+1$ and $e_{1} \in \{0,1\}$. It is known that a number can be written as a sum of two squares if, and only if, it can be written as $2^{s} t^{2} l$, where $s \in \mathbb{Z}_{+} \cup \{0\}$ and $l$ is a square-free positive integer with all of its prime factors of the form $4k+1$. Thus, we know the number $n$ we conjectured above can in fact be written as a sum of two squares. We only need to understand uniqueness. It is more natural to study these questions with the Gaussian integers, $\mathbb{Z}[i]$. If, for example, we have $$n = a^{2} + b^{2} = (a+ib)(a-ib) = (\pi_1 \cdots \pi_k) (\overline{\pi_1} \cdots \overline{\pi_k}),$$ where the last expression is the factorization of $n$ into primes of $\mathbb{Z}[i]$, then we may get different sum representations of $n$ by exchanging, say, $\pi_j$ for $\overline{\pi_j}$. That is, $$(\pi_1 \cdots \overline{\pi_j} \cdots \pi_k)(\overline{\pi_1} \cdots \pi_j \cdots \overline{\pi_k})$$ should yield a different sum when $\pi_j \neq \overline{\pi_j}$ and at least one of the other primes, say $\pi_i$, also satisfies $\pi_i \neq \overline{\pi_i}$. This does not seem to occur precisely for the numbers conjectured above, which makes me think those are the only numbers that can be uniquely represented. *Question.* Is my guess correct or am I missing other numbers?
Investigate at which points on $\partial D$ the function can be continuously extended, and in this case, provide the continuous extension. $f : D \rightarrow \mathbb{R}$, $x \mapsto \frac{x_1 \sin(x_2) + x_2 \sin(x_1)}{\sqrt{x_1^2 + x_2^2}}$ I have tried to disprove that $f(x_1, x_2)$ has a continuous extension at $(0,0)$. Specifically, I used straight lines, but the limit always seems to approach zero. Therefore, I plotted the graph, and indeed, it seems like a continuous extension is possible. My question is as follows: if that is the case, how do I show it? It seems impossible to check every possible direction. Can I maybe use sequences, or what should I do in this situation? Any help is appreciated.
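A numeric sanity check of the squeeze bound one might aim for here, namely $|x_1\sin(x_2)+x_2\sin(x_1)|\le 2|x_1x_2|\le x_1^2+x_2^2$, hence $|f(x)|\le\lVert x\rVert\to 0$ (the sampling scheme below is just an illustration, not a proof):

    # sample points on circles of shrinking radius r and check |f| <= r
    import numpy as np

    rng = np.random.default_rng(0)
    for r in [1e-1, 1e-3, 1e-6]:
        theta = rng.uniform(0, 2*np.pi, 1000)
        x1, x2 = r*np.cos(theta), r*np.sin(theta)
        f = (x1*np.sin(x2) + x2*np.sin(x1)) / np.sqrt(x1**2 + x2**2)
        print(r, np.abs(f).max())  # stays below r, consistent with f -> 0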
We know that $$\sin x = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots = \sum_{n\ge 0} \frac{(-1)^n}{(2n+1)!}x^{2n+1}$$ But how could we derive this without calculus? There are some approaches using $e^{ix} = \cos x + i \sin x$; please notice I would also like to avoid such a definition, as to prove $e^{ix} = \cos x + i \sin x$ we again need the expansions of $\sin x$ and $\cos x$. One approach I tried is to start with $\sin^2 x$: let $$\sin^2 x := \sum_{n\ge 1} a_n x^{2n}$$ Note: I guess such an ansatz because $\sin x$ is an odd function -- so $\sin x$ has only odd powers of $x$, and $\sin^2 x$ has only even powers of $x$. Now if I can arrive at $$\sin^2 x = \sum_{n\ge 1} \frac{(-1)^{n+1} 2^{2n-1} }{(2n)!} x^{2n},$$ then via $\cos 2x = 1-2\sin^2 x$ I can get the expansion of $\cos x$ and then $\sin x$. To derive $a_n$, first I use $\lim_{x\rightarrow 0} \frac{\sin x}{x} = 1$ from the geometric interpretation (the arc is almost the opposite side for a small angle $x$), to get $$a_1=1$$ Then from $$\sin^2 2x = 4 \sin^2 x ( 1 - \sin^2 x) \Rightarrow \sin^2 x - \frac14 \sin^2 2x = \sin^4 x, $$ I get $$\sum_{n\ge1} (1-2^{2n-2}) a_n x^{2n} \equiv \left(\sum_n a_n x^{2n}\right)^2$$ This leads to the recursive formula $$ (1-2^{2n}) a_{n+1} = \sum_{k=1}^{n} a_k a_{n+1-k}, $$ with $a_1=1$. I do get $$a_2=-\frac13, a_3=\frac{2}{45}, a_4 = -\frac{1}{315}, a_5 = \frac{2}{14175}$$ etc.; however, I can only get such results via manual calculation -- there is a convolution involved, and I could not derive a closed formula for $a_n$. Is there a way out, please?
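The manual computation can at least be automated; a sketch with exact fractions that also tests the conjectured closed form $a_n=\frac{(-1)^{n+1}2^{2n-1}}{(2n)!}$ (the closed form is read off from the target expansion above):

    # generate a_n from (1 - 2^(2n)) a_{n+1} = sum_{k=1}^{n} a_k a_{n+1-k}, a_1 = 1
    from fractions import Fraction
    from math import factorial

    N = 10
    a = {1: Fraction(1)}
    for n in range(1, N):
        s = sum(a[k] * a[n + 1 - k] for k in range(1, n + 1))
        a[n + 1] = s / (1 - 2**(2 * n))

    for n in range(1, N + 1):
        assert a[n] == Fraction((-1)**(n + 1) * 2**(2*n - 1), factorial(2*n))
        print(n, a[n])  # 1, -1/3, 2/45, -1/315, 2/14175, ...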
Consider the stereographic projection chart on $S^2$ which doesn't include the north pole $$(X,Y)=\varphi(x,y,z)=\left(\frac{x}{1-z}, \frac{y}{1-z}\right).$$ I want to pull back the 1-form $\omega = \frac{-ydx+xdy}{\sqrt{x^2+y^2}}$ from $\mathbb{R}^2$ to $S^2$, but I am not sure about a step in the calculation. Writing $\omega = f\,dX + g\,dY$, the definition of the pullback of a form under a smooth map $\varphi$ is $$\varphi^* \omega=(f \circ \varphi) d\left(X \circ \varphi\right)+(g \circ \varphi) d\left(Y \circ \varphi\right)$$ Then, $$ \begin{aligned} &\varphi^*\omega =\frac{\frac{-y}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(X \circ \varphi\right)+\frac{\frac{x}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(Y \circ \varphi\right) \\ & =\frac{-y}{\sqrt{x^2+y^2}} d(X \circ \varphi)+\frac{x}{\sqrt{x^2+y^2}} d(Y \circ \varphi) \\ & \end{aligned} $$ How do I compute $d(X\circ \varphi)$ and $d(Y\circ \varphi)$? Intuitively, this feels like some sort of product rule would have to occur. $$d\frac{x}{1-z} = \frac{1}{1-z}dx + \frac{x}{(1-z)^2}dz$$ $$d\frac{y}{1-z} = \frac{1}{1-z}dy + \frac{y}{(1-z)^2}dz$$ Then $$\varphi^*\omega = \frac{-y}{\sqrt{x^2+y^2}}\left[\frac{1}{1-z}dx + \frac{x}{(1-z)^2}dz\right] + \frac{x}{\sqrt{x^2+y^2}}\left[\frac{1}{1-z}dy + \frac{y}{(1-z)^2}dz\right]$$ In the end I should get some 1-form on $S^2$. Is this calculation correct? Thank you! ________________ EDIT: Is it actually the case that if I wanted to pull this back onto $S^2$, what I should really do is just consider the map $F:(X,Y)\mapsto (x,y)$, and then the pullback will just be $$F^*\omega = \frac{-YdX+XdY}{\sqrt{X^2+Y^2}}$$ where the upper case $X$ and $Y$ are in stereographic coordinates? Is it this easy?
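A quick `sympy` check of the two total differentials above, treating $x,y,z$ as coordinates on $\mathbb R^3$ (restricting to $S^2$ just pulls these identities back along the inclusion; outputs shown up to sympy's normalisation):

    # coefficients of dx, dy, dz in the total differentials of X and Y composed with the chart
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    X, Y = x / (1 - z), y / (1 - z)

    def total_differential(F):
        return [sp.simplify(sp.diff(F, v)) for v in (x, y, z)]

    print(total_differential(X))  # [1/(1 - z), 0, x/(1 - z)**2]
    print(total_differential(Y))  # [0, 1/(1 - z), y/(1 - z)**2]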
> What numbers can be written uniquely as a sum of two squares? I was looking at sequence [A125022](https://oeis.org/A125022), which shows the numbers that can be uniquely written as a sum of two squares. Here are a few things that I noticed from the first numbers. We have $1$, $2$, $4$, $8$, $16$, $32$, $64$, $128$. It is then safe to assume that all numbers of the form $2^{s}$ can be written uniquely, where $s \in \mathbb{Z}_{+} \cup \{0\}$. Moreover, primes of the form $4k+1$, for example $5$ and $13$, also appear and, interestingly enough, $5^2$ and $13^{2}$ do not. So, we could also say that $p^{s}$ has a unique representation only when $s = 0$ or $s = 1$. If we analyze $A125022$ a bit more, we notice that $3^{2}$, $7^{2}$, $11^{2}$ are there, so we can also conjecture that numbers of the form $q^{2}$ have a unique representation, where $q$ is a prime of the form $4k+3$. Furthermore, for reasons I will give later, I believe $d^{2}$, where $d$ has all of its prime factors of the form $4k+3$, can be uniquely represented as a sum of two squares. It is also possible to see that products of these three cases are in the sequence, for example $2^{2}\cdot 5$, $2 \cdot 5 \cdot 3^{2}$ and $2 \cdot 7^{2}$. **Conjecture.** A number $n \in \mathbb{Z}_{+}$ can be written uniquely as a sum of two squares if, and only if, $n = 2^{s} d^{2} p^{e_1}$, where $s \in \mathbb{Z}_{+} \cup \{0\}$, $d$ has all of its prime divisors of the form $4k+3$, $p$ is a prime of the form $4k+1$ and $e_{1} \in \{0,1\}$. It is known that a number can be written as a sum of two squares if, and only if, it can be written as $2^{s} t^{2} l$, where $s \in \mathbb{Z}_{+} \cup \{0\}$ and $l$ is a square-free positive integer with all of its prime factors of the form $4k+1$. Thus, we know the number $n$ we conjectured above can in fact be written as a sum of two squares. We only need to understand uniqueness. It is more natural to study these questions with the Gaussian integers, $\mathbb{Z}[i]$. If, for example, we have $$n = a^{2} + b^{2} = (a+ib)(a-ib) = (\pi_1 \cdots \pi_k) (\overline{\pi_1} \cdots \overline{\pi_k}),$$ where the last expression is the factorization of $n$ into primes of $\mathbb{Z}[i]$, then we may get different sum representations of $n$ by exchanging, say, $\pi_j$ for $\overline{\pi_j}$. That is, the product $$(\pi_1 \cdots \overline{\pi_j} \cdots \pi_k)(\overline{\pi_1} \cdots \pi_j \cdots \overline{\pi_k})$$ should yield a different sum when $\pi_j \neq \overline{\pi_j}$ and at least one of the other primes, say $\pi_i$, also satisfies $\pi_i \neq \overline{\pi_i}$. This does not seem to occur precisely for the numbers conjectured above, which makes me think those are the only numbers that can be uniquely represented. *Question.* Is my guess correct or am I missing other numbers?
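A brute-force test of the conjecture for small $n$ (representations counted as unordered pairs $0\le a\le b$; `sympy` assumed available; this is evidence, not a proof):

    # compare "exactly one representation" with the conjectured factorization shape
    from sympy import factorint
    from math import isqrt

    def num_reps(n):
        return sum(1 for a in range(isqrt(n // 2) + 1)
                   if (b2 := n - a*a) >= a*a and isqrt(b2)**2 == b2)

    def conjectured(n):
        f = factorint(n)
        ones = [p for p in f if p % 4 == 1]
        threes = [q for q in f if q % 4 == 3]
        return (all(f[q] % 2 == 0 for q in threes)  # squares of 4k+3 primes
                and len(ones) <= 1                  # at most one 4k+1 prime
                and all(f[p] == 1 for p in ones))   # ... with exponent 1

    for n in range(1, 3000):
        assert (num_reps(n) == 1) == conjectured(n), n
    print("conjecture matches up to 3000")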
How can you show that the absolute value of $\sin x$ diverges (i.e. fails to have a limit) as $x \to \infty$?
$\require{AMScd}$I make reference to [this](https://www.math.uchicago.edu/~may/PAPERS/AddJan01.pdf) paper. I've recently become aware that there's an annoying variety of a priori distinct (but "known" to be equivalent up to adding other hypotheses) definitions of triangulated categories, partly because allegedly no one knows an example of a weakly but not "strongly" triangulated category. Anyway, as part of all this I need to check the following assertion from May, in the linked paper: (I apologise for using photos but these and later commutative diagrams are beyond my abilities to draw in LaTeX) > [![enter image description here][1]][1] This, together with the basic TR1) and TR2) axioms (rotation and existence, isomorphism-invariance of triangles), is claimed to imply the weak form of the classical octahedral axiom. May gives the following hint: [![enter image description here][2]][2] And... it **almost** works. It's so close to working but there's a final wrinkle which just does not seem to work. *I'm left wondering if May made an oversight about this final wrinkle or if I'm just missing an easy last step - this is my question.* If anyone has a reference for this equivalence (appropriately stated?) please say. So let $X\overset{f}{\to}Y\overset{g}{\to}Z$ be some composable morphisms, $h$ their composite. Extend $f,g,h$ to distinguished triangles: $$\begin{align}&X\overset{f}{\to}Y\overset{f'}{\to}U\overset{f''}{\to}\Sigma X\\&Y\overset{g}{\to}Z\overset{g'}{\to}W\overset{g''}{\to}\Sigma Y\\&X\overset{h}{\to}Z\overset{h'}{\to}V\overset{h''}{\to}\Sigma X\end{align}$$ The octahedral axiom in its weakest form tasks us with finding fillers $\alpha,\beta$ making the following a distinguished triangle: $$U\overset{\exists \alpha}{\to}V\overset{\exists\beta}{\to}W\overset{\Sigma(f')\circ g''}{\to}\Sigma U$$And such that the following commute: > $$f''=h''\circ\alpha,\,g'=\beta\circ h',\,\alpha\circ f'=h'\circ g,\,g''\circ\beta=\Sigma(f)\circ h''$$ The latter two involve four morphisms. Using May's hint, I can find $\alpha,\beta$ such that **all but one** of these commute, the difficulty being in making both quadrilaterals join up simultaneously. Did May make an oversight, do I need an extra hypothesis to make it tick, or am I missing an easy final resolution? Specifically, I can find $\alpha,\alpha':U\to V,\beta,\beta':V\to W$ such that: > The triangles $U\to V\to W\overset{\Sigma(f')\circ g''}{\to}\Sigma U$ are distinguished; $f''=h''\circ\alpha=h''\circ\alpha',g'=\beta\circ h'=\beta'\circ h'$; > > And $\alpha\circ f'=h'\circ g$ or $g''\circ\beta'=\Sigma(f)\circ h''$, but seemingly I can't make both hold for one pair $(\alpha,\beta)$ or $(\alpha',\beta')$ simultaneously. > > We can also find an automorphism $\gamma:V\cong V$ such that $\alpha'=\gamma\circ\alpha,\,\beta'\circ\gamma=\beta$, but isomorphism is *not* equality! And I don't think this is good enough to conclude, because inserting $\gamma$ anywhere into the picture could potentially alter the other, *fixed* morphisms given to us. What to do? [1]: https://i.stack.imgur.com/SgIS6.png [2]: https://i.stack.imgur.com/2lSZm.png
How can I convert a number into a seemingly random number in a process that is actually *deterministic*? For example I'd like to transform: `123456` into `51`. Or `28`. Doesn't matter as long as it's a number between 1 - 100. The value should be significantly different for similar inputs. Transforming `123456` should give a very different number from transforming `123457`. I recall reading about these processes when I was younger in some paper about cryptography but I can't recall their name. "Cascading functions?" "Waterfall functions?" I don't remember but anyway.. The process should be deterministic. It should give the same output for input `123456` every time I run it.

## Context:

I'm creating a tool that scours logfiles across geographical servers and helps us view the errors and their frequency so we can understand cause-and-effect. Each error occurrence appears as a point on a time plot. The x-axis has the time of the error. The y-axis I don't need to use, but I don't want all the errors that happened at a similar time to appear on the same y-point on my plot. So what I'm doing is the following: I select a random number for that log's y-axis value. But when I run my tool again, that point appears at a different position because of the random number. It's no real problem for the investigative purposes we use it for, but it looks a bit wonky; every time I run the tool I don't want the point to jump around.

[![Image of the tool ][1]][1]

So instead of picking a random number, I've thought of "hashing" the log id (given as e.g. '94827717') into a deterministic "random" number, like so:

    // this abomination actually works but its "randomness" probably
    // depends on the number of concurrent users using the log service.
    // I suspect `id` is an autoincrement value
    hashLogIdToNumber: id => {
        const len = id.length
        const wholeNumber = id.charAt(len - 3)
        const firstDecimal = id.charAt(len - 2)
        const secondDecimal = id.charAt(len - 1)
        return parseFloat(`${wholeNumber}.${firstDecimal}${secondDecimal}`)
    }

[1]: https://i.stack.imgur.com/7VDme.png
Give an example of a sequence of functions in $C^1(\mathbb{R})$ to show that the Poincaré inequality does not hold on a general non-compact domain. I tried to find a function in $C^1(\mathbb{R})$ that does not belong to $L^2$ on an unbounded domain but whose derivative is in $L^2$ on that unbounded domain. Consider $f(x)= x^{-1/3}$ and $f'(x)=-\dfrac{1}{3}x^{-4/3}$ on the domain $[1, \infty)$; we have $$\int_1^{\infty}x^{-2/3}dx=+\infty$$ but $$\int_1^{\infty} \left[\dfrac{1}{3}x^{-4/3}\right]^2dx=\dfrac{1}{15}.$$ It is easy to find an example that is not integrable on an unbounded domain but whose derivative is integrable on that unbounded domain. But the problem is that $f$ is not continuously differentiable at $0$. So I have trouble finding a function that belongs to $C^1(\mathbb{R})$; the counterexamples I found are only continuously differentiable on bounded domains, not on all of $\mathbb{R}$. I also saw a lot of examples with $u(x) \in H^1_0(\Omega)$ where $u$ is not even continuous. Could you provide me any idea? Edit question: I realized that Poincaré's inequality requires $f=0$ on the boundary, and my counterexample above does not satisfy this condition either.
If $X$ has a Poisson$(\lambda)$ distribution, then what would be the coverage probability of its confidence interval?
How can I convert a number into a seemingly random number in a process that is actually *deterministic*? For example I'd like to transform: `123456` into `51`. Or `28`. Doesn't matter as long as it's a number between 1 - 1000. The value should be significantly different for similar inputs. Transforming `123456` should give a very different number from transforming `123457`. I recall reading about these processes when I was younger in some paper about cryptography but I can't recall their name. "Cascading functions?" "Waterfall functions?" I don't remember but anyway.. The process should be deterministic. It should give the same output for input `123456` every time I run it.

## Context:

I'm creating a tool that scours logfiles across geographical servers and helps us view the errors and their frequency so we can understand cause-and-effect. Each error occurrence appears as a point on a time plot. The x-axis has the time of the error. The y-axis I don't need to use, but I don't want all the errors that happened at a similar time to appear on the same y-point on my plot. So what I'm doing is the following: I select a random number for that log's y-axis value. But when I run my tool again, that point appears at a different position because of the random number. It's no real problem for the investigative purposes we use it for, but it looks a bit wonky; every time I run the tool I don't want the point to jump around.

[![Image of the tool ][1]][1]

So instead of picking a random number, I've thought of chopping the log id (given as e.g. '94827717') into a 1-10 decimal, like so:

    // this abomination actually works but its "randomness" probably
    // depends on the number of concurrent users using the log service.
    // I suspect `id` is an autoincrement value
    hashLogIdToNumber: id => {
        const len = id.length
        const wholeNumber = id.charAt(len - 3)
        const firstDecimal = id.charAt(len - 2)
        const secondDecimal = id.charAt(len - 1)
        return parseFloat(`${wholeNumber}.${firstDecimal}${secondDecimal}`)
    }

I use the last parts of the log id because they are the ones with the most variation. I suspect that `log.id` is an autoincremented value. I don't have control over it; it's given by the log service provider.

[1]: https://i.stack.imgur.com/7VDme.png
Give an example of a sequence of functions in $C^1(\mathbb{R})$ to show that the Poincaré inequality does not hold on a general non-compact domain. I tried to find a function in $C^1(\mathbb{R})$ that does not belong to $L^2$ on an unbounded domain but whose derivative is in $L^2$ on that unbounded domain; this function also has to vanish on the boundary. Consider $f(x)= x^{-1/3}$ and $f'(x)=-\dfrac{1}{3}x^{-4/3}$ on the domain $[1, \infty)$; we have $$\int_1^{\infty}x^{-2/3}dx=+\infty$$ but $$\int_1^{\infty} \left[\dfrac{1}{3}x^{-4/3}\right]^2dx=\dfrac{1}{15}.$$ It is easy to find an example that is not integrable on an unbounded domain but whose derivative is integrable on that unbounded domain. But the problem is that $f$ is not continuously differentiable at $0$ and does not vanish on the boundary. So I have trouble finding a function that belongs to $C^1(\mathbb{R})$; the counterexamples I found are only continuously differentiable on bounded domains, not on all of $\mathbb{R}$, and the function should vanish on the boundary. I also saw a lot of examples with $u(x) \in H^1_0(\Omega)$ where $u$ is not even continuous. Could you provide me any idea? Edit question: I realized that Poincaré's inequality requires $f=0$ on the boundary, and my counterexample above does not satisfy this condition either.
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, in the $M_{2 \times 2}$ case, we can always use elimination to convert the $M_{2 \times 2}$ to an upper triangular matrix. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or one of the diagonal entries is 0). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is 0 if one of the diagonal entries is 0 (singular).
Give an example of a sequence of functions in $C^1(\mathbb{R})$ to show that the Poincaré inequality does not hold on a general non-compact domain. I tried to find a function in $C^1(\mathbb{R})$ that does not belong to $L^2$ on an unbounded domain but whose derivative is in $L^2$ on that unbounded domain; this function also has to vanish on the boundary. Consider $f(x)= \begin{cases} x^{-1/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ and $f'(x)= \begin{cases} -\dfrac{1}{3}x^{-4/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ $$\int_0^{\infty}x^{-2/3}dx=+\infty$$ but $$\int_1^{\infty} \left[\dfrac{1}{3}x^{-4/3}\right]^2dx=\dfrac{1}{15}$$ (near $0$ the squared derivative is not integrable, which is part of the problem below). It is easy to find an example that is not integrable on an unbounded domain but whose derivative is integrable on that unbounded domain. But the problem is that $f$ is not continuously differentiable at $0$. So I have trouble finding a function that belongs to $C^1(\mathbb{R})$; the counterexamples I found are only continuously differentiable on bounded domains, not on all of $\mathbb{R}$, and the function should vanish on the boundary. I also saw a lot of examples with $u(x) \in H^1_0(\Omega)$ where $u$ is not even continuous. Could you provide me any idea?
Are elliptic differential operators between vector bundles epimorphisms of sheaves?
How can I convert a number into a seemingly random number in a process that is actually *deterministic*? For example I'd like to transform: `123456` into `51`. Or `28`. Doesn't matter as long as it's a number between 1 - 1000. The value should be significantly different for similar inputs. Transforming `123456` should give a very different number from transforming `123457`. I recall reading about these processes when I was younger in some paper about cryptography but I can't recall their name. <strike>"Cascading functions?" "Waterfall functions?" I don't remember but anyway..</strike> I was talking about the [avalanche effect](https://en.wikipedia.org/wiki/Avalanche_effect). The process should be deterministic. It should give the same output for input `123456` every time I run it.

## Context:

I'm creating a tool that scours logfiles across geographical servers and helps us view the errors and their frequency so we can understand cause-and-effect. Each error occurrence appears as a point on a time plot. The x-axis has the time of the error. The y-axis I don't need to use, but I don't want all the errors that happened at a similar time to appear on the same y-point on my plot. So what I'm doing is the following: I select a random number for that log's y-axis value. But when I run my tool again, that point appears at a different position because of the random number. It's no real problem for the investigative purposes we use it for, but it looks a bit wonky; every time I run the tool I don't want the point to jump around.

[![Image of the tool ][1]][1]

So instead of picking a random number, I've thought of chopping the log id (given as e.g. '94827717') into a 1-10 decimal, like so:

    // this abomination actually works but its "randomness" probably
    // depends on the number of concurrent users using the log service.
    // I suspect `id` is an autoincrement value
    hashLogIdToNumber: id => {
        const len = id.length
        const wholeNumber = id.charAt(len - 3)
        const firstDecimal = id.charAt(len - 2)
        const secondDecimal = id.charAt(len - 1)
        return parseFloat(`${wholeNumber}.${firstDecimal}${secondDecimal}`)
    }

I use the last parts of the log id because they are the ones with the most variation. I suspect that `log.id` is an autoincremented value. I don't have control over it; it's given by the log service provider.

[1]: https://i.stack.imgur.com/7VDme.png
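A sketch of the standard way to get this effect (the function name and range convention are my own): feed the id through a cryptographic hash, which is where the avalanche effect lives, and reduce the digest to the target range. Unlike picking a random number, and unlike Python's per-process salted built-in `hash()`, this is stable across runs:

    import hashlib

    def id_to_bucket(log_id, lo=1, hi=1000):
        # sha256 is deterministic and has the avalanche property
        digest = hashlib.sha256(str(log_id).encode("utf-8")).digest()
        n = int.from_bytes(digest[:8], "big")
        return lo + n % (hi - lo + 1)

    # deterministic across runs; nearby ids land in unrelated buckets
    print(id_to_bucket("123456"), id_to_bucket("123457"))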
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, in the $M_{2 \times 2}$ case, we can always use elimination to convert the $M_{2 \times 2}$ to an upper triangular matrix. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or one of the diagonal entries is 0). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is 0 if one of the diagonal entries is 0 (singular).
Let $n\ge 2$ and $F:\mathbb R\to\mathbb R^n$ be a positive function, meaning that $$ F(x)=(F_1(x),\dots, F_n(x))$$ and $F_i(x)\ge 0$ for any $i\in\{1,\dots, n\}$. My question is: is it always possible to find the minimum (or the maximum) among the $n$ components of $F$? More precisely, is it possible to find $i\in\{1,\dots,n\}$ such that $F_i(x)=\min\{F_1(x),\dots, F_n(x)\}$? On the one hand, I think that the answer is yes because $F_i(x)\in\mathbb R$ for any $i\in\{1,\dots,n\}$, which means that $(F_1(x),\dots, F_n(x))$ is a vector made of real numbers, and I can always find the minimum (or maximum) among $n$ real numbers. On the other hand, I am confused about the presence of the variable $x$. Could anyone please help me in understanding this?
If $F(x)=(F_1(x),\dots, F_n(x))$, is it always possible to find $F_i(x)=\min\{F_1(x),\dots, F_n(x)\}$?
In the paper *On the Closure of Characters and the Zeros of Entire Functions* by Beurling and Malliavin, they make the following claim in the introduction: the closure radius $\rho = \rho(\Lambda)$ is defined as the supremum of the numbers $r$ such that the set $\{e^{i \lambda x}\}_{\lambda \in \Lambda}$ spans the space $L^2(-r, r)$ (by "span" we mean that the span of the set $\{e^{i\lambda x}\}_{\lambda \in \Lambda}$ is dense in $L^2(-r, r)$). The claim is that $\rho(\Lambda)$ does not change if the metric is replaced with any other $L^p$ metric. In other words, if I understand correctly, the claim is that $\rho(\Lambda)$ is independent of $p$ in $L^p$. How can this be true? Surely, the topology should affect this somehow.
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, in the $M_{2 \times 2}$ case, we can always use elimination to convert the $M_{2 \times 2}$ to an upper triangular matrix. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or one of the diagonal entries is 0). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is 0 if one of the diagonal entries is 0 (singular).
Let $n\ge 2$ ($n$ finite) and $F:\mathbb R\to\mathbb R^n$ be a positive function, meaning that $$ F(x)=(F_1(x),\dots, F_n(x))$$ and $F_i(x)\ge 0$ for any $i\in\{1,\dots, n\}$. My question is: is it always possible to find the minimum (or the maximum) among the $n$ components of $F$? More precisely, is it possible to find $i\in\{1,\dots,n\}$ such that $F_i(x)=\min\{F_1(x),\dots, F_n(x)\}$? On the one hand, I think that the answer is yes because $F_i(x)\in\mathbb R$ for any $i\in\{1,\dots,n\}$, which means that $(F_1(x),\dots, F_n(x))$ is a vector made of real numbers, and I can always find the minimum (or maximum) among $n$ real numbers. On the other hand, I am confused about the presence of the variable $x$. Could anyone please help me in understanding this?
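A tiny numeric illustration of the distinction at play (the arrays and component functions are arbitrary choices of mine): the index $i$ achieving the minimum genuinely depends on $x$, but the pointwise minimum $m(x)=\min_i F_i(x)$ is still a perfectly well-defined function of $x$:

    import numpy as np

    x = np.linspace(-2, 2, 9)
    F = np.stack([x**2, np.abs(x - 1), np.full_like(x, 0.5)])  # F_i(x) >= 0
    m = F.min(axis=0)       # m(x) = min_i F_i(x): defined at every x
    i = F.argmin(axis=0)    # the minimizing index i(x) changes with x
    print(np.round(m, 2))
    print(i)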
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, in the $M_{2 \times 2}$ case, we can always use elimination to convert the $M_{2 \times 2}$ to an upper triangular matrix. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or one of the diagonal entries is 0). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is 0 when the matrix is singular.
I want to derive $\sin(A-B) = \sin A \cos B - \cos A \sin B$ from $\cos(A-B)=\cos A \cos B +\sin A \sin B$ $$\cos(A-B)=\cos A \cos B +\sin A \sin B$$ substitute $A$ for $A + \frac{\pi}{2}$ $$\cos(A+\tfrac{\pi}{2}-B)=\cos( A+\tfrac{\pi}{2}) \cos B +\sin( A+\tfrac{\pi}{2}) \sin B$$ $$\sin(A-B)=\sin(A) \cos B +\cos(A) \sin B$$ but this is wrong. However you get the correct answer with $\frac{\pi}{2} - A$. Why?
I confirm your guess for the case $N=3$. From your hint in the comments, > $K$ is a constant in $[-\pi, \pi]$ it follows that $k_{1..3}$ are almost free. That is why I disbelieved your assumption that the minimum is at $k_{1..3}=K/N$. My reasoning: with $k_1=k_2=0$ the "terms" $\cos(k_1)$ and $\cos(k_2)$ have maximal impact on the result, and only $\cos(k_3)$ with $k_3=K$ is what it is. **But**, at $k_n=0$ the effect of a "little change" may be smaller than it is at $k_3=K$. Hence I modified your function $- (\cos k_1 + \cos k_2 + \cos k_3)$ to the *ansatz* $$-\left(2\cos\left(e\right)+\cos\left(K-2e\right)\right)$$ Setting the first derivative with respect to $e$ to $0$ and solving for $e$ yields four solutions: $$\left\{e=\frac{4m\pi+2\pi+K}{3}\mathrm{,}\\e=\frac{4m\pi+K}{3}\mathrm{,}\\e=4n\pi+\pi+K\mathrm{,}\\e=4n\pi-\pi+K\right\}$$ with $m$ and $n$ arbitrary integers (including $0$ and negative values); I opted for $m=n=0$. Next I checked the sign of the second derivative at these solutions, to see whether each yields a minimum or not. It depends on $K$, but not for solution 2 -- **within the given range of $K$**. Here is the plot of the $2^{nd}$ derivative at solutions 1..3: [![Chk sign of df2][1]][1] But I'd also like to see the results of the solutions; green is the second one shown above -- [![The results][2]][2] Contrary to my expectations it turns out that a minimum is achieved by $\displaystyle e=\frac{K}{3}=k_1=k_2$ and $k_3=K-2e$, which is -- well, *youknowit*. Something missing? -- Yes: the same again with an *ansatz* where $k_1\ne k_2$. [1]: https://i.stack.imgur.com/Nyhbh.png [2]: https://i.stack.imgur.com/KL3vS.png
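For the record, the stationarity claim can be checked symbolically; a `sympy` sketch (assuming, as above, that $K\in(-\pi,\pi)$):

    # e = K/3 is a critical point of -(2cos(e) + cos(K - 2e)), and a minimum there
    import sympy as sp

    e, K = sp.symbols('e K', real=True)
    g = -(2*sp.cos(e) + sp.cos(K - 2*e))
    assert sp.simplify(sp.diff(g, e).subs(e, K/3)) == 0
    d2 = sp.simplify(sp.diff(g, e, 2).subs(e, K/3))
    print(d2)  # 6*cos(K/3): positive for K in (-pi, pi), so a local minimum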
Give an example of a sequence of functions in $C^1(\mathbb{R})$ to show that the Poincaré inequality does not hold on a general non-compact domain. I tried to find a function in $C^1(\mathbb{R})$ that does not belong to $L^2$ on an unbounded domain but whose derivative is in $L^2$ on that unbounded domain; this function also has to vanish on the boundary. Consider $f(x)= \begin{cases} x^{-1/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ and $f'(x)= \begin{cases} -\dfrac{1}{3}x^{-4/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ on the domain $(0, \infty)$. $$\int_0^{\infty}x^{-2/3}dx=+\infty$$ but $$\int_1^{\infty} \left[\dfrac{1}{3}x^{-4/3}\right]^2dx=\dfrac{1}{15}$$ (near $0$ the squared derivative is not integrable, which is part of the problem below). It is easy to find an example that is not integrable on an unbounded domain but whose derivative is integrable on that unbounded domain. But the problem is that $f$ is not continuously differentiable at $0$. So I have trouble finding a function that belongs to $C^1(\mathbb{R})$; the counterexamples I found are only continuously differentiable on bounded domains, not on all of $\mathbb{R}$, and the function should vanish on the boundary. I also saw a lot of examples with $u(x) \in H^1_0(\Omega)$ where $u$ is not even continuous. Could you provide me any idea?
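For the sequence formulation, the standard candidate is a dilated bump: with a fixed $\phi\in C^1(\mathbb R)$, set $u_n(x)=\phi(x/n)$, so $\lVert u_n\rVert_{L^2}^2=n\lVert\phi\rVert_{L^2}^2$ grows while $\lVert u_n'\rVert_{L^2}^2=\lVert\phi'\rVert_{L^2}^2/n$ shrinks, and no constant $C$ with $\lVert u\rVert_{L^2}\le C\lVert u'\rVert_{L^2}$ can hold on all of $\mathbb R$. A rough numeric sketch (a Gaussian bump; the discretization details are arbitrary):

    import numpy as np

    x = np.linspace(-60, 60, 400001)
    phi = lambda t: np.exp(-t**2)        # C^1, decays at +-infinity
    for n in [1, 4, 16]:
        u = phi(x / n)
        du = np.gradient(u, x)
        ratio = np.trapz(u**2, x) / np.trapz(du**2, x)
        print(n, ratio)                  # grows like n^2: no uniform constant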
Let $A$ be a compact metric space, and $E\subset A$ be a Borel measurable set. Does there exist a finite positive measure $\mu$ on the Borel sets of $A$ such that $\mu(E)>0$ and $\mu(A\setminus E)=0$? ----- The question is related to [this question](https://math.stackexchange.com/questions/208305/probability-measure-with-predefined-support), except that we don't ask $E$ to be the support of $\mu$.
[![enter image description here][1]][1]

How do I label the position vectors, and what would be an efficacious way to continue the proof?

[1]: https://i.stack.imgur.com/cpS8K.png
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions: >1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$. >2) Are these two isomorphic as topological groups? **Some preliminaries:** Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22}...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology. The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, i.e. with $A^TA=I$ (which forces $det(A)=\pm{1}$). The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2. **My attempt** For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2, A \mapsto(det(A)\cdot A, det(A))$. We have the following facts about $f$: - **It is injective.** If $f(A)=f(B)$ then $(det(A)\cdot A, det(A))=(det(B)\cdot B, det(B))$. Therefore, $det(A)=det(B) \neq 0$ so $A=B$. - **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$, giving $f(dD)=(det(dD)\cdot dD, det(dD))=(d^n\cdot det(D) \cdot dD,d^n \cdot det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd. - **It is a homomorphism.** $f(AB)=(det(AB)\cdot AB, det(AB))=(det(A)det(B)\cdot AB, det(A)det(B))$ $=((det(A)\cdot A)(det(B)\cdot B), det(A)det(B))=f(A)f(B)$. - **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be open; it suffices to consider basic open sets $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$. Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$, and $-U=\{-A\mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(U\times V)$ is $\emptyset$, $U$, $-U$, or $U\cup -U$ according as $V$ is $\emptyset$, $\{1\}$, $\{-1\}$, or $Z_2$; in every case it is open. Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups. <hr> For even $n$, this mapping is not well-defined: if $A \in O(n)$ with $det(A)=-1$ then $det(det(A)\cdot A)=(det(A))^{n+1}=-1$, so $det(A)\cdot A \notin SO(n)$. My question then is **are they homeomorphic as topological spaces if $n$ is even?** From the related questions, it seems like for even $n$ the two groups cannot be isomorphic due to <s>one being abelian while the other is not and</s> them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces? <hr> Related questions: https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1 https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof
I wouldn't say this isomorphism is "really simple" when learning these things for the first time. It involves an isomorphism in infinite Galois theory and an isomorphism between a certain subgroup of $\mathbf Z_p^\times$ and $\mathbf Z_p$. In particular, the group ${\rm Gal}(F(\mu_{p^\infty})/F(\mu_{p}))$ is not really "directly" isomorphic to $\mathbf Z_p$ in any natural way, but rather this comes out as the result of composing some isomorphisms. To start off, the statement you made is not quite true since it has counterexamples when $p = 2$: $$ {\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q(\mu_2)) = {\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q) \cong \mathbf Z_2^\times, $$ which is not isomorphic to $\mathbf Z_2$ since it has nontrivial torsion, namely the number $-1$. That $-1$ corresponds to complex conjugation acting on $\mathbf Q(\mu_{2^\infty})$. More generally, when $F$ is a number field with a real embedding, then viewing $F$ in $\mathbf R$ shows complex conjugation is a nontrivial element in ${\rm Gal}(F(\mu_{2^\infty})/F(\mu_{2})) = {\rm Gal}(F(\mu_{2^\infty})/F)$, so this Galois group has an element of order $2$ and thus can't be isomorphic to $\mathbf Z_2$. Maybe you meant to tell us $p$ is odd? Anyway, the result you ask about is related to how Galois groups behave when forming composite fields: think about $F(\mu_{p^{\infty}})$ as a composite field such as $F \,\mathbf Q(\mu_{p^{\infty}})$, or perhaps better as $F(\mu_p) \,\mathbf Q(\mu_{p^{\infty}})$. When $K$ is a field, $L/K$ is a Galois extension, and $E$ is an arbitrary extension of $K$ that lies in a common field with $L$ (e.g., $E/K$ is algebraic with $L$ and $E$ both in a common algebraic closure of $K$), the extension $LE/E$ is Galois and restricting elements of ${\rm Gal}(LE/E)$ to $L$ is an injective homomorphism ${\rm Gal}(LE/E) \hookrightarrow {\rm Gal}(L/K)$. The image is ${\rm Gal}(L/L \cap E)$, so $$ {\rm Gal}(LE/E) \cong {\rm Gal}(L/L \cap E). $$ For infinite Galois groups, this is an isomorphism not just of groups, but of topological groups. Let's apply this to $K = \mathbf Q(\mu_p)$, $L = \mathbf Q(\mu_{p^\infty})$, and $E = F(\mu_p)$ where $F$ is a number field. Then $LE = F(\mu_{p^\infty})$, so the above isomorphism says $$ {\rm Gal}(F(\mu_{p^\infty})/F(\mu_{p})) \cong {\rm Gal}(\mathbf Q(\mu_{p^\infty})/M) $$ where $M = L \cap E = \mathbf Q(\mu_{p^\infty}) \cap F(\mu_p)$, which is a subfield of $F(\mu_p)$ and thus is a number field. What does the group ${\rm Gal}(\mathbf Q(\mu_{p^\infty})/M)$ look like? The field $M$ is not just a number field: since $\mu_p \subset M$, we have $\mathbf Q(\mu_p) \subset M$, so ${\rm Gal}(\mathbf Q(\mu_{p^\infty})/M)$ is a subgroup of ${\rm Gal}(\mathbf Q(\mu_{p^\infty})/\mathbf Q(\mu_p))$. Moreover, it's an *open* subgroup since $[M:\mathbf Q(\mu_p)]$ is finite (that's infinite Galois theory at work: closed subgroups of finite index are also open subgroups). What is ${\rm Gal}(\mathbf Q(\mu_{p^\infty})/\mathbf Q(\mu_p))$ and what are its open subgroups? Given the context of the question, surely you understand how ${\rm Gal}(\mathbf Q(\mu_{p^\infty})/\mathbf Q) \cong \mathbf Z_p^\times$ for all primes $p$. Inside that group, ${\rm Gal}(\mathbf Q(\mu_{p^\infty})/\mathbf Q(\mu_p))$ corresponds to $1+p\mathbf Z_p$: a $p$-adic unit being used as an exponent on $p$-power roots of unity fixes $\mu_p$ exactly when the exponent is $1 \bmod p$, meaning the unit is in $1 + p\mathbf Z_p$.
Thus $$ {\rm Gal}(F(\mu_{p^\infty})/F(\mu_{p})) \cong {\rm Gal}(\mathbf Q(\mu_{p^\infty})/M) = {\rm open \ subgroup \ of \ } 1 + p\mathbf Z_p. $$ This is true for all primes $p$, including $p = 2$. What are the open subgroups of $1+p\mathbf Z_p$? This is where a distinction arises between $p = 2$ and $p > 2$. (I gave a counterexample at the start when $p = 2$, so things have to break at $p = 2$ somewhere, and we've now reached that step.) When $p$ is *odd*, $1 + p\mathbf Z_p$ is isomorphic to $\mathbf Z_p$. Here are two ways to set that up: (i) if $u \equiv 1 \bmod p$ and $u \not\equiv 1 \bmod p^2$, e.g., $u = 1 + p$, the map $\mathbf Z_p \to 1 + p\mathbf Z_p$ where $x \mapsto u^x$ is an isomorphism, or (ii) the $p$-adic logarithm is an isomorphism $1+p\mathbf Z_p \to p\mathbf Z_p$, and the latter group is obviously isomorphic to $\mathbf Z_p$. Now the key point about $\mathbf Z_p$ is that its open subgroups are the groups $p^k \mathbf Z_p$ where $k \geq 0$ and those are all isomorphic to $\mathbf Z_p$. So anything isomorphic to an open subgroup of $\mathbf Z_p$ is itself isomorphic to $\mathbf Z_p$. Thus, when $p$ is odd, $$ {\rm Gal}(F(\mu_{p^\infty})/F(\mu_{p})) \cong \mathbf Z_p. $$ When $p = 2$ the previous paragraph breaks down since $1+2\mathbf Z_2 = \mathbf Z_2^\times \not\cong \mathbf Z_2$ (there is a nontrivial element of finite order in $1 + 2\mathbf Z_2$ but not in $\mathbf Z_2$). We can fix things in this case by digging a little deeper: $1+4\mathbf Z_2 \cong \mathbf Z_2$ (e.g., the $2$-adic log is an isomorphism $1+4\mathbf Z_2 \to 4\mathbf Z_2 \cong \mathbf Z_2$) and ${\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q(\mu_4)) = 1 + 4\mathbf Z_2$, which is isomorphic to $\mathbf Z_2$. Thus all open subgroups of $1+4\mathbf Z_2$ are isomorphic to $\mathbf Z_2$, so the correct thing to say when $p = 2$ is that $$ {\rm Gal}(F(\mu_{2^\infty})/F(\mu_{4})) \cong {\rm open \ subgroup \ of \ } {\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q(\mu_4)) \cong \mathbf Z_2, $$ where the last isomorphism can be read as saying ${\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q(\mu_4)) \cong \mathbf Z_2$ or as saying open subgroups of ${\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q(\mu_4))$ are isomorphic to $\mathbf Z_2$: both of those statements are true. The $F$ that are counterexamples to your statement when $p = 2$ (such as those $F$ with a real embedding) and the way it gets corrected confirm the useful advice that often in the $p$-adics, "if $p = 2$ then $p = 4$".
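A finite-level sanity check of the key group-theoretic fact used above (for odd $p$, the element $u = 1 + p$ topologically generates $1+p\mathbf Z_p$), phrased modulo $p^k$:

    # u = 1 + p has order p^(k-1) mod p^k, so its powers fill all of 1 + pZ/p^k
    p, k = 5, 6
    u, mod = 1 + p, p**k
    powers, x = set(), 1
    for _ in range(p**(k - 1)):
        x = x * u % mod
        powers.add(x)
    assert powers == {m for m in range(1, mod) if m % p == 1}
    print(len(powers))  # p^(k-1) = 3125 residues: every class in 1 + pZ/p^k is hit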
There is a similar question here: https://math.stackexchange.com/questions/3836805/proving-a-certain-implication-in-the-equivalent-formulations-of-hausdorff-spaces/4890244#4890244 but it does not ask for proof of this exact equivalence that I am asking. My book has the following equivalences for a Hausdorff topological space. (1) $X$ is Hausdorff (2) The diagonal relation set $Δ := \left\{ (x,x) | x \in X \right\}$ is a closed set in $X^2$. (3) Limits of nets in $X$ are unique I'm looking for a direct proof of (2) => (3). For (3) => (2) I have the following: To show Δ is closed, we show that Δ = $\overline{Δ}$. $\subset$ is always true. $\supset$: Let $(a, b) \in \overline{Δ}$. Then there is a net in Δ, {$x_j$}, that converges to $(a,b)$. Now since {$x_j$} is a net in Δ, for each index $j$, $x_j = (z_j, z_j)$. Observe that in both coordinates, the net in $X$ is exactly the same. So, for the first coordinate, $z_j$ converges to $a$, and for the second coordinate, $z_j$ converges to $b$. Now, by (3), the limit is unique, therefore $a = b$, thus $(a,b) \in Δ$. The book I use does (1) => (3) => (2) => (1) so I actually do have a roundabout proof for what I want. But is there a direct way?
Here is an image of the proposition: [![enter image description here][1]][1] And highlighted below is the part of the proof I'm having trouble with. [![enter image description here][2]][2] [1]: https://i.stack.imgur.com/DDSYD.png [2]: https://i.stack.imgur.com/3agfb.png Why is the equation true for every $n$? $f_n(T) = \int_{\sigma(T)} f_n \: dE$, but I don't see how, if you take some $x \in \ker(\lambda I-f(T))$, it then follows that $f_n(T)x=0$. I tried using the fact that: $$\lVert f_n(T)x \rVert^2 = \int_{\sigma(T)} f^2_n \: d\mu_{x,x}$$ but I'm failing to see how this integral is $0$, given that all I know about it is that $\mu_{x,x}(Y) = \langle E(Y)x,x \rangle$ on measurable subsets of the spectrum.
Pedersen's *Analysis Now*, Proposition 4.5.10: why is this operator equal to $0$?
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions: >1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$. >2) Are these two isomorphic as topological groups? **Some preliminaries:** Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22}...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology. The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, i.e. with $A^TA=I$ (which forces $det(A)=\pm{1}$). The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2. **My attempt** For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2, A \mapsto(det(A)\cdot A, det(A))$. We have the following facts about $f$: - **It is injective.** If $f(A)=f(B)$ then $(det(A)\cdot A, det(A))=(det(B)\cdot B, det(B))$. Therefore, $det(A)=det(B) \neq 0$ so $A=B$. - **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$, giving $f(dD)=(det(dD)\cdot dD, det(dD))=(d^n\cdot det(D) \cdot dD,d^n \cdot det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd. - **It is a homomorphism.** $f(AB)=(det(AB)\cdot AB, det(AB))=(det(A)det(B)\cdot AB, det(A)det(B))$ $=((det(A)\cdot A)(det(B)\cdot B), det(A)det(B))=f(A)f(B)$. - **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be open; it suffices to consider basic open sets $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$. Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$, and $-U=\{-A\mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(U\times V)$ is $\emptyset$, $U$, $-U$, or $U\cup -U$ according as $V$ is $\emptyset$, $\{1\}$, $\{-1\}$, or $Z_2$; in every case it is open. Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups. <hr> For even $n$, this mapping is not well-defined: if $A \in O(n)$ with $det(A)=-1$ then $det(det(A)\cdot A)=(det(A))^{n+1}=-1$, so $det(A)\cdot A \notin SO(n)$. My question then is **are they homeomorphic as topological spaces if $n$ is even?** From the related questions, it seems like for even $n$ the two groups cannot be isomorphic due to <s>one being abelian while the other is not and</s> them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces? <hr> Related questions: https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1 https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof https://math.stackexchange.com/questions/29279/why-is-the-orthogonal-group-operatornameo2n-mathbb-r-not-the-direct-prod
I'm reading [the paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8715594) (I can't find an arXiv version of this paper...) and suspect the correctness of one theorem inside. A [Hankel matrix](https://en.wikipedia.org/wiki/Hankel_matrix) $H$ is a square matrix in which each ascending skew-diagonal from left to right is constant. A Vandermonde decomposition expresses $H$ as $H=V^TDV$ with $D$ a diagonal matrix and $V$ a [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix). Theorem I.1 of the paper states that > Theorem I.1. For any positive semidefinite Hankel matrix $H \in \mathbb{R}^{n \times n}$ with rank $r, 1 \leq r<n$, there exists a Vandermonde matrix $V \in \mathbb{R}^{n \times r}$ and a diagonal matrix $D \in \mathbb{R}^{r \times r}$ with positive diagonal entries such that $H=$ $V^T D V$. My question is, is this statement really true? For example, if we set $n=3,r=1$ and choose $H=\left( \begin{matrix} 0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\\ \end{matrix} \right)$. Then $H$ is Hankel and does not have a Vandermonde decomposition if we require the elements of $D$ to be positive.
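A quick numerical sanity check of this counterexample (numpy; the only assumption is the usual convention that a rank-one real Vandermonde factor has the form $v = (1, t, t^2)^T$):

```python
import numpy as np

H = np.array([[0., 0., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])

# Hankel check: every ascending skew-diagonal (i + j constant) is constant.
print(all(H[i, j] == H[i + 1, j - 1] for i in range(2) for j in range(1, 3)))  # True

print(np.linalg.eigvalsh(H))        # [0, 0, 1]: H is positive semidefinite
print(np.linalg.matrix_rank(H))     # 1

# For r = 1, V is a single row (1, t, t^2), so (V^T D V)[0, 0] = d * 1 * 1 = d.
# Positive d would force H[0, 0] > 0, but H[0, 0] = 0: no such decomposition.
```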
Does every positive semidefinite Hankel matrix admit a Vandermonde decomposition?
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, consider the $M_{2 \times 2}$ case. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or when one of the diagonal entries is $0$). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is $0$ when the matrix is singular. Edit: I am so sorry if the comment causes any confusion. It was due to a typo in the title. Initially, the title was: Determinant when one of the diagonal entries of a diagonal matrix is 0. But, I actually meant to ask for the **Determinant of a triangular matrix when one of its diagonal entries is 0**. This means that the counterexample $\begin{bmatrix}1&1\\1&0\end{bmatrix}$ no longer works. But, the last comment solves the question.
There is a similar question here: https://math.stackexchange.com/questions/3836805/proving-a-certain-implication-in-the-equivalent-formulations-of-hausdorff-spaces/4890244#4890244 but it does not ask for a proof of the exact equivalence that I am asking about. My book has the following equivalences for a Hausdorff topological space. (1) $X$ is Hausdorff (2) The diagonal $Δ := \left\{ (x,x) \mid x \in X \right\}$ is a closed set in $X^2$. (3) Limits of nets in $X$ are unique. I'm looking for a direct proof of (2) $\Rightarrow$ (3). For (3) $\Rightarrow$ (2) I have the following: To show $Δ$ is closed, we show that $Δ = \overline{Δ}$. $\subset$ is always true. $\supset$: Let $(a, b) \in \overline{Δ}$. Then there is a net $\{x_j\}$ in $Δ$ that converges to $(a,b)$. Now since $\{x_j\}$ is a net in $Δ$, for each index $j$ we have $x_j = (z_j, z_j)$. Observe that in both coordinates the net in $X$ is exactly the same. So, in the first coordinate, $z_j$ converges to $a$, and in the second coordinate, $z_j$ converges to $b$. Now, by (3), the limit is unique, therefore $a = b$, thus $(a,b) \in Δ$. The book I use does (1) $\Rightarrow$ (3) $\Rightarrow$ (2) $\Rightarrow$ (1), so I actually do have a roundabout proof of what I want. But is there a direct way?
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, consider the $M_{2 \times 2}$ case. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or when one of the diagonal entries is $0$). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is $0$ when the matrix is singular. Edit: I am so sorry if the comment causes any confusion. It was due to a typo in the title. Initially, the title was: Determinant when one of the diagonal entries of a diagonal matrix is 0. But, I actually meant to ask for the **Determinant of a triangular matrix when one of its diagonal entries is 0**. This means that the counterexample $\begin{bmatrix}1&1\\1&0\end{bmatrix}$ no longer works.
I am studying stochastic processes using *An Introduction to Stochastic Modeling* by Pinsky and Karlin. I am stuck on question 3.4.18 in Chapter 3. I would really appreciate it if someone could help me with it! **Here is the question:** > A well-disciplined man, who smokes exactly one half of a cigar each day, buys a box containing $N$ cigars. He cuts a cigar in half, smokes half, and returns the other half to the box. In general, on a day in which his cigar box contains $w$ whole cigars and $h$ half cigars, he will pick one of the $w + h$ smokes at random, each whole and half cigar being equally likely, and if it is a half cigar, he smokes it. If it is a whole cigar, he cuts it in half, smokes one piece, and returns the other to the box. What is the expected value of $T$, the day on which the last whole cigar is selected from the box? The textbook provides a hint: > Let $X_n$ be the number of whole cigars in the box after the $n$th smoke. Then $X_n$ is a Markov chain whose transition probabilities vary with $n$. Define $v_n(w) = E[T | X_n = w]$. Use a first-step analysis to develop a recursion for $v_n(w)$ and show that the solution is \begin{equation} v_n(w) = \frac{2Nw + n + 2w}{w + 1} - \sum_{k = 1}^w \frac{1}{k}, \end{equation} whence \begin{equation} E[T] = v_0(N) = 2N - \sum_{k = 1}^N \frac{1}{k}. \end{equation} **So far, what I have done is the following:** > According to the hint, $X_n$ is the number of whole cigars in the box after the $n$th smoke. > > If $X_n = w$, then $X_{n + 1} = w$ or $w - 1$, and \begin{equation} Pr\{X_{n + 1} = w | X_n = w\} = \frac{h}{w + h}, \end{equation} \begin{equation} Pr\{X_{n + 1} = w - 1 | X_n = w\} = \frac{w}{w + h}. \end{equation} (Note that $h$ is determined by $n$ and $w$: counting half-cigar units, $2w + h = 2N - n$, so $h = 2N - n - 2w$.) Moreover, by first-step analysis, \begin{equation} v_n(w) = E[T | X_n = w] = E[T | X_{n + 1} = w] \cdot Pr\{X_{n + 1} = w | X_n = w\} + E[T | X_{n + 1} = w - 1] \cdot Pr\{X_{n + 1} = w - 1 | X_n = w\} = \frac{h}{w + h} v_{n + 1}(w) + \frac{w}{w + h} v_{n + 1}(w - 1). \end{equation} I am not 100% sure whether my steps so far are correct, though I feel they should be alright. But I don't know how to proceed next and how to derive the answer. Thanks a lot in advance!
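A quick Monte Carlo sanity check of the claimed answer (a sketch; `simulate_T` is an illustrative name, and the state update just mirrors the verbal description of the box):

```python
import random

def simulate_T(N, trials=100_000):
    """Estimate E[T], the day the last whole cigar is picked from the box."""
    total = 0
    for _ in range(trials):
        w, h = N, 0          # whole and half cigars currently in the box
        day = 0
        last_whole_day = 0
        while w + h > 0:
            day += 1
            if random.random() < w / (w + h):   # picked a whole cigar
                w -= 1
                h += 1
                last_whole_day = day
            else:                               # picked a half cigar
                h -= 1
        total += last_whole_day
    return total / trials

for N in (1, 2, 3, 5):
    exact = 2 * N - sum(1 / k for k in range(1, N + 1))
    print(N, round(simulate_T(N), 3), round(exact, 3))  # estimates track 2N - H_N
```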
In [this][1] question it is proved in the answers that If $f :(a , \infty ) \to \mathbb{R}$ and $f$ is bounded on every $(a,b)$ such that $a<b <\infty$, then $\lim\limits_{x \to \infty }(f(x+1) - f(x) )=l$ implies $\lim\limits_{x \to \infty }\frac{f(x)}{x}=l$. This condition looks suspiciously similar to the Stolz–Cesàro theorem; see [this][2] for a proof using the Stolz–Cesàro theorem. ---- My question is: if $\lim\limits_{x\to\infty}\frac{f(x+1)-f(x)}{g(x+1)-g(x)}=l$, what are sufficient conditions on $f,g$ to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$? Of course, when $f$ and $g$ are both differentiable functions on $\mathbb{R}$ and $\lim\limits_{x \to \infty } g(x)= \infty$, this condition is sufficient to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$, but this hypothesis is too strong, and I want to generalise this result. I conjectured this more general hypothesis: if $f$ is bounded on every $(a,b)$ such that $a<b <\infty$ and $g$ is a continuous increasing function on $\mathbb{R}$, then $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$. I couldn't prove this result but I think it is true. Is this conjecture true? If it is true, is there a weaker hypothesis? If it isn't true, is there a weaker hypothesis than $f$ and $g$ both being differentiable functions on $\mathbb{R}$ with $\lim\limits_{x \to \infty } g(x)= \infty$? [1]: https://math.stackexchange.com/questions/1642662/prove-that-lim-x-to-infty-fracfxx-l-if-lim-x-to-infty-f [2]: https://math.stackexchange.com/questions/4858423/can-stolz-cesaro-theorem-be-applied-to-this-problem-if-lim-limits-x-to-infty?noredirect=1&lq=1
If $\lim\limits_{x\to\infty}\frac{f(x+1)-f(x)}{g(x+1)-g(x)}=l$ what are sufficient conditions to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$?
In [this][1] question it is proved in the answers that If $f :(a , \infty ) \to \mathbb{R}$ and $f$ is bounded on every $(a,b)$ such that $a<b <\infty$, then $\lim\limits_{x \to \infty }(f(x+1) - f(x) )=l$ implies $\lim\limits_{x \to \infty }\frac{f(x)}{x}=l$. This condition looks suspiciously similar to the Stolz–Cesàro theorem; see [this][2] for a proof using the Stolz–Cesàro theorem. ---- My question is: if $\lim\limits_{x\to\infty}\frac{f(x+1)-f(x)}{g(x+1)-g(x)}=l$, what are sufficient conditions on $f,g$ to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$? Of course, when $f$ and $g$ are both differentiable functions on $\mathbb{R}$ and $\lim\limits_{x \to \infty } g(x)= \infty$, this condition is sufficient to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$, but this hypothesis is too strong, and I want to generalise this result. I conjectured this more general hypothesis: if $f$ is bounded on every $(a,b)$ such that $a<b <\infty$ and $g$ is a continuous increasing function on $\mathbb{R}$, then $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$. I couldn't prove this result but I think it is true. Is this conjecture true? If it is true, is there a weaker hypothesis? If it isn't true, is there a weaker hypothesis than $f$ and $g$ both being differentiable functions on $\mathbb{R}$ with $\lim\limits_{x \to \infty } g(x)= \infty$? [1]: https://math.stackexchange.com/questions/1642662/prove-that-lim-x-to-infty-fracfxx-l-if-lim-x-to-infty-f [2]: https://math.stackexchange.com/questions/4858423/can-stolz-cesaro-theorem-be-applied-to-this-problem-if-lim-limits-x-to-infty?noredirect=1&lq=1
I'm currently working on an assignment in my calculus course where we have to prove the formula for calculating the area $D$ given in exercise 1.2.14 of the book *Introduction to the Mathematics of Medical Imaging* by Charles L. Epstein. However hard I try to find a proof of this formula, whether by attempting it myself, asking people around me, or asking various AI assistants, I have yet to succeed at solving this problem. The assignment is due in 5 days, so any help on this is highly appreciated. Thank you!! Note: Depending on the edition it might be exercise 1.2.9 or 1.2.14. Link to the book: https://books.google.com.vn/books/about/Introduction_to_the_Mathematics_of_Medic.html?id=fErAEWU_sHUC&printsec=frontcover&newbks=1&newbks_redir=0&source=gb_mobile_entity&hl=en&gl=VN&redir_esc=y#v=onepage&q&f=false
Here is the binary operation $*$ on $\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}$ defined by $(a,b)(c,d)=(ac-bd,ad+bc)$. My idea is that to show $(\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}, *)$ is a group, I need to show that $*$ is well-defined and associative, and then show it has an identity and inverses. I am struggling to do the first part. How do I show $*$ is well-defined (and is the first part required)? Is showing that $ac-bd=0, ad+bc=0$ will only be true if $a=b=c=d=0$ sufficient?
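In case it helps with the closure step: one identity worth checking (my own suggestion, not part of the original exercise) is $(ac-bd)^2+(ad+bc)^2=(a^2+b^2)(c^2+d^2)$, which shows the product of two nonzero pairs is again nonzero. A quick symbolic verification with sympy:

```python
from sympy import symbols, expand

a, b, c, d = symbols('a b c d', real=True)
lhs = (a*c - b*d)**2 + (a*d + b*c)**2
rhs = (a**2 + b**2) * (c**2 + d**2)

print(expand(lhs - rhs))   # prints 0, so the identity holds for all a, b, c, d
```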
How did Artin discover this function?
In Artin's *Galois Theory*, p. 38, he says the function $$f(x) = \frac{(x^2 - x + 1)^3}{x^2(x-1)^2}$$ satisfies the properties $f(x)=f(1-x)=f(\frac{1}{x})$. > Is the function obtained by some systematic procedure, or just by a flash of insight? If $f(0)$ were a number, then $f(0) = f(\frac{1}{0})$ would have to make sense, so the domain of definition of $f(x)$ should not include $0$ (maybe; I know this is not rigorous). Then the domain of definition of $f(x)$ should not include $1$ either. Thus I think it is a function of the form $f(x)=\frac{g(x)}{x^a(x-1)^bh(x)}$ with $h(0)\cdot h(1) \neq 0$. I tried $a=1, b=1$ and failed, but $a = 2, b = 2$ succeeded. However, that seems a really roundabout way to find it. For a question like "given rational functions $g_1(x), \dots, g_n(x)$, find a rational function $f(x)$ satisfying $f(x) = f(g_1(x)) = \cdots = f(g_n(x))$", is there a systematic way to produce such an $f$?
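A quick symbolic check of the two invariances (sympy):

```python
from sympy import symbols, simplify

x = symbols('x')
f = (x**2 - x + 1)**3 / (x**2 * (x - 1)**2)

print(simplify(f - f.subs(x, 1 - x)))   # 0, so f(x) = f(1 - x)
print(simplify(f - f.subs(x, 1 / x)))   # 0, so f(x) = f(1/x)
```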
How did Artin discover the function $f(x)=\frac{(x^2-x+1)^3}{x^2(x-1)^2}$ with the properties $f(x)=f(1-x)=f(\frac{1}{x})$?
Let's state this question in terms of cokernels first, and hopefully duality will take care of the kernels. Let $0 \to P \to N \to M \to 0$ be a short exact sequence of modules over a ring $R$. Given free resolutions $0 \to L_n \to L_{n-1} \to \cdots \to L_0 \to N \to 0$ and $0 \to K_m \to K_{m-1} \to \cdots \to K_0 \to P \to 0$, how does one construct a free resolution of $M$? The idea is to fit the given information into an exact diagram, but I cannot prove that the canonical injections $K_n \to L_n$ (obtained from the projectivity of free modules) split. Would this work with some extra care, or do I need to follow a different approach?
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, consider the $M_{2 \times 2}$ case. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or when one of the diagonal entries is $0$). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is $0$ when the matrix is singular. Edit: I am so sorry if the comment causes any confusion. It was due to a typo in the title. Initially, the title was: Determinant when one of the diagonal entries of a diagonal matrix is 0. But, I actually meant to ask for the **Determinant of a triangular matrix when one of its diagonal entries is 0**. This means that the counterexample $\begin{bmatrix}1&1\\1&0\end{bmatrix}$ no longer works. The second last comment clears my concern :)
Definition 3.1.1 on page 25 of [this book][1] is the definition of a quasiperiod, and Proposition 3.1.3 shows that the gcd of two quasiperiods is a quasiperiod. The whole proof is clear except for the part about the CRT. I would appreciate a simple explanation of the following claim from the proof of Proposition 3.1.3: Now choose an integer $w_1$ such that it is from the prescribed residue class modulo $d_2/ \gcd(d_1, d_2)$, and that for any prime divisor $p$ of $q$ not dividing $d_1d_2$, we have $w_1 \not\equiv −m/d_1\pmod p$. The existence of such integers is guaranteed by the Chinese Remainder Theorem. Why can $w_1 \not\equiv −m/d_1\pmod p$ be arranged, and how does it follow from the CRT? P.S. This is an exercise in Apostol's book, Ch. 8, and also Montgomery's book, Ch. 9. In Apostol a "quasiperiod" is called an "induced modulus". [1]: https://users.renyi.hu/~magap/classes/ceu/19_fall_analytic_number_theory/classical_analytic_number_theory.pdf
Given the set of Pauli matrices $\{X, Y, Z, I\}$, let $n$ be the length of the tensor products of Pauli matrices I want to form. I construct a set of $2n-2$ such matrices, for example the set $S=\{Z_1Z_2Y_1,Z_3Y_2,Y_3,Y_2\}$ with $n=3$ and $Y_2 = I\otimes Y \otimes I$ (with the same notation for the other elements). I also have a matrix group equipped with the matrix product, denoted $(M,\cdot)$. This group has the elements $\{Z_2Y_1,Z_3Y_2,Y_2,Y_3\}$. My question is: is there a way to check whether the group $M$ generates $S$? Put another way, given an element of $S$, can I show that there is no product of elements of $M$ equal to that element? I have a feeling that there are some theorems or algorithms about this. I truly appreciate any feedback. Thank you in advance.
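One standard trick for Pauli strings (a sketch, under the assumption that you only care about membership *up to a global phase*, and with my own reading of the subscript notation: the string 'YZI' below means $Y\otimes Z\otimes I$) is the binary symplectic representation: each $n$-qubit Pauli string corresponds to a vector in $\mathbb{F}_2^{2n}$, and multiplication of Pauli strings becomes addition of vectors mod 2. Membership up to phase then reduces to GF(2) linear algebra:

```python
import numpy as np

def pauli_to_vec(s):
    """n-qubit Pauli string, e.g. 'YZI' -> its (x|z) vector over GF(2).
    Global phases are ignored by this representation."""
    n = len(s)
    v = np.zeros(2 * n, dtype=np.uint8)
    for i, p in enumerate(s):
        if p in 'XY':
            v[i] = 1          # x part
        if p in 'ZY':
            v[n + i] = 1      # z part
    return v

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 vectors (Gaussian elimination mod 2)."""
    rows = [r.copy() for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = (rows[i] + rows[rank]) % 2
        rank += 1
    return rank

def generated_up_to_phase(generators, candidate):
    """candidate is a product of generators (up to phase) iff its vector
    lies in the GF(2) span of the generators' vectors."""
    gens = [pauli_to_vec(g) for g in generators]
    t = pauli_to_vec(candidate)
    return gf2_rank(gens + [t]) == gf2_rank(gens)

# With M = {Z_2 Y_1, Z_3 Y_2, Y_2, Y_3} written as strings on sites 1..3:
M = ['YZI', 'IYZ', 'IYI', 'IIY']
# Z_1 Z_2 Y_1 equals X (x) Z (x) I up to phase, since Z*Y is X up to phase:
print(generated_up_to_phase(M, 'XZI'))   # False in this example
```

If you also need to track the exact phases, not just membership up to phase, the bookkeeping is heavier; the stabilizer-formalism literature covers that case.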
How to check if a group of matrices generates a set of matrices?
In [this][1] question it is proved in the answers that If $f :(a , \infty ) \to \mathbb{R}$ and $f$ is bounded on every $(a,b)$ such that $a<b <\infty$, then $\lim\limits_{x \to \infty }(f(x+1) - f(x) )=l$ implies $\lim\limits_{x \to \infty }\frac{f(x)}{x}=l$. This condition looks suspiciously similar to the Stolz–Cesàro theorem; see [this][2] for a proof using the Stolz–Cesàro theorem. ---- My question is: if $\lim\limits_{x\to\infty}\frac{f(x+1)-f(x)}{g(x+1)-g(x)}=l$, what are sufficient conditions on $f,g$ to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$? Of course, when $f$ and $g$ are both differentiable functions on $\mathbb{R}$ and $\lim\limits_{x \to \infty } g(x)= \infty$, this condition is sufficient to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$, but this hypothesis is too strong, and I want to generalise this result. I conjectured this more general hypothesis: if $f$ is bounded on every $(a,b)$ such that $a<b <\infty$ and $g$ is a continuous increasing function on $\mathbb{R}$, then $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$. I couldn't prove this result but I think it is true. Is this conjecture true? If it is true, is there a weaker hypothesis? If it isn't true, is there a weaker hypothesis than $f$ and $g$ both being differentiable functions on $\mathbb{R}$ with $\lim\limits_{x \to \infty } g(x)= \infty$? [1]: https://math.stackexchange.com/questions/1642662/prove-that-lim-x-to-infty-fracfxx-l-if-lim-x-to-infty-f [2]: https://math.stackexchange.com/questions/4858423/can-stolz-cesaro-theorem-be-applied-to-this-problem-if-lim-limits-x-to-infty?noredirect=1&lq=1
How to check if a group of matrices generates a set of matrices?
If $A$ and $B$ aren't disjoint and $A \cup B \neq \Omega$, then is $P(A \cup B) \geq P(A)P(B)$? My only idea is to use $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ but there's a minus in front of the intersection and the events don't have to be independent.
To derive the determinant of a square matrix, we can always use elimination to convert it into an upper triangular matrix. For example, consider the $M_{2 \times 2}$ case. There are two cases to consider: when $a \neq 0$ and when $a=0$. I understand the case where $a \neq 0$. If $a \neq 0$, then $\begin{bmatrix}a&b\\c&d\end{bmatrix} \rightarrow \begin{bmatrix}a&b\\0&d-\frac{c}{a}b\end{bmatrix}$ We can then multiply the pivots along the diagonal (i.e. $a \cdot (d-\frac{c}{a}b) = ad - bc$). But I am not sure about the case when $a = 0$ (or when one of the diagonal entries is $0$). Do I need to exchange rows? What properties of the determinant do I need to use? I know the determinant is $0$ when the matrix is singular. Edit: I am so sorry if the comment causes any confusion. It was due to a typo in the title. Initially, the title was: Determinant when one of the diagonal entries of a diagonal matrix is 0. But, I actually meant to ask for the **Determinant of a triangular matrix when one of its diagonal entries is 0**. This means that the counterexample $\begin{bmatrix}1&1\\1&0\end{bmatrix}$ no longer works. The second last comment clears my concern :)
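To see concretely what happens when $a = 0$, here is a quick symbolic check (sympy) that exchanging the two rows flips the sign of the determinant, so the product-of-pivots rule still recovers $ad - bc$:

```python
from sympy import symbols, Matrix

b, c, d = symbols('b c d')

M = Matrix([[0, b],
            [c, d]])
swapped = Matrix([[c, d],
                  [0, b]])        # exchange the two rows -> upper triangular

print(M.det())                    # -b*c, i.e. ad - bc with a = 0
print(swapped.det())              # b*c, the product of the diagonal pivots
print(M.det() == -swapped.det())  # True: one row exchange flips the sign
```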
Give an example of a sequence of functions in $C^1(\mathbb{R})$ to show that the Poincaré inequality does not hold on a general non-compact domain. I tried to find a function in $C^1(\mathbb{R})$ such that it does not belong to $L^2$ on an unbounded domain but its derivative is in $L^2$ on that unbounded domain; moreover, this function has to vanish on the boundary. Consider $f(x)= \begin{cases} x^{-1/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ and $f'(x)= \begin{cases} -\dfrac{1}{3}x^{-4/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ on the domain $(1, \infty).$ $$\int_1^{\infty}x^{-2/3}dx=+\infty$$ but $$\int_1^{\infty} \left[\dfrac{1}{3}x^{-4/3}\right]^2dx=\dfrac{1}{15}.$$ It is easy to find an example that is not square-integrable on an unbounded domain while its derivative is square-integrable there. But the problem is that in my example $f$ is not continuously differentiable at $0$ and does not vanish on the boundary. So I have trouble finding a function that belongs to $C^1(\mathbb{R})$; the counterexamples that I found are only continuously differentiable on restricted domains, not on all of $\mathbb{R}$, and the function should vanish on the boundary. I also saw a lot of examples with $u(x) \in H^1_0(\Omega)$ where $u$ is not even continuous. Could you provide me with any idea?
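A quick symbolic check of these two integrals (sympy):

```python
from sympy import symbols, integrate, oo, Rational

x = symbols('x', positive=True)
f = x**Rational(-1, 3)

print(integrate(f**2, (x, 1, oo)))           # oo: f is not in L^2(1, oo)
print(integrate(f.diff(x)**2, (x, 1, oo)))   # 1/15: f' is in L^2(1, oo)
```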
I have to show the following equivalence. Let $f(x) \in F[x]$, and let $K / F$ be an extension which contains $R_f$, the set of all roots of $f(x)$. Show the equivalence, for a subfield $D \leq K$, of: (a) $D$ is the least element of the set $\left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$. (b) $D$ is a minimal element of the set $\left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$. (c) $D=\bigcap\left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$. (d) $D=F\left(R_f\right)$. For (a) implies (b): I take $H \in \left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$ such that $H\subseteq D$; since $D$ is the least element, $D\subseteq H$, and then $D=H$, so $D$ is a minimal element. For (b) implies (c): we know that $\bigcap\left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}\subseteq H$ for all $H \in \left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$, in particular for $D$, and since $D$ is minimal, $D= \bigcap\left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$. For (c) implies (d): since $F \leq L$ for all $L \in \left\{E \leq K \mid F \leq E, R_f \subseteq E\right\}$, for all $\ell \in F(R_f)$ we have $\ell \in L$, so $\ell \in D$, and then $F(R_f)\subseteq D$. But I'm stuck on showing the other containment, and on (c) implies (a). Any help?
When the boat's speed was 45 km/h, its engine suddenly stopped. 7 minutes after the engine stopped, the boat's speed was 12 km/h. It is known that the frictional force of the water acts on the boat, in the direction opposite to its movement, and is proportional to the speed of the boat. Find the law of the boat's motion in a river with a constant current of 3 km/h in the direction of the boat's motion. After how long will the speed of the boat be 4 km/h? My thoughts: $$ \left\{ \begin{array}{c} f'(x)=45-3f(x) \\ f'(7)=45-3f(7)=12 \rightarrow f(7)=11\\ \end{array} \right. $$ $$f(x)=15-4e^{21-3x}$$ so $$ f'(x)=12e^{21-3x}$$ and we need to solve $$f'(x)=4$$ so $$12e^{21-3x}=4 \rightarrow x=7+\frac{\ln 3}{3} = 7.366$$
As I learned, for a differentiable manifold with an affine connection $(M,\nabla)$, the torsion is defined as $$\begin{aligned} T:\mathscr{X}(M)\times \mathscr{X}(M) &\rightarrow \mathscr{X}(M)\\ (X,Y) &\mapsto T(X,Y):= \nabla_XY-\nabla_Y X-[X,Y] \end{aligned} $$ and I can check the multi-linearity. My question is: is $T$ a $(1,2)$ tensor (or tensor field)? I have learned that an $(r,s)$ tensor field on $M$ is a smooth section of $T^{(r,s)}(M)$, mapping every $p\in M$ to an $(r,s)$ tensor over $T_p(M)$. But $T$ is defined globally on vector fields, i.e. we cannot talk about $\nabla_{X_p}Y_p-\nabla_{Y_p} X_p-[X_p,Y_p]$. So what does it exactly mean to say $T$ is a $(1,2)$ tensor? Are these equivalent definitions? Any comments or references are welcome. Thanks!
Why is torsion (curvature) a tensor field?
Orthogonal Matrices and Change of Basis. Let $B$ be an ordered basis for a vector space $V$. Recall that the coordinate matrix of a vector $\mathbf{x}$ in $V$ is the column vector $[\mathbf{x}]_B$. If $B'$ is another basis for $V$, then the transition matrix $P$ from $B'$ to $B$ changes a coordinate matrix relative to $B'$ into a coordinate matrix relative to $B$: $[\mathbf{x}]_B = P[\mathbf{x}]_{B'}$. The question you will explore now is whether there are transition matrices that preserve the length of the coordinate matrix; that is, given $[\mathbf{x}]_{B'}$, does $\lVert [\mathbf{x}]_B \rVert = \lVert [\mathbf{x}]_{B'} \rVert$? The project then computes a specific transition matrix from Example 5 in Section 4.7 and shows, using the Euclidean norm, that it does not preserve the norm (the specific bases and numbers did not survive transcription; see the linked image). You will see in this project that if the transition matrix is orthogonal, then the norm of the coordinate vector will remain unchanged. (You may recall working with orthogonal matrices in Section 3.3, Exercises 73-82, and Section 5.3, Exercise 65.) Definition of Orthogonal Matrix: a square matrix $P$ is orthogonal when it is invertible and $P^{-1} = P^T$. 1. Show that the matrix $P$ defined previously is not orthogonal. 2. Show that for any real number, the matrix shown in the image is orthogonal. 3. Show that a matrix is orthogonal if and only if its columns are pairwise orthonormal. 4. Prove that the inverse of an orthogonal matrix is orthogonal. 5. Is the sum of orthogonal matrices orthogonal? Is the product of orthogonal matrices orthogonal? Illustrate your answers with appropriate examples. 6. Prove that if $P$ is an orthogonal matrix, then $\lVert P\mathbf{x} \rVert = \lVert \mathbf{x} \rVert$ for all vectors $\mathbf{x}$. 7. Verify the result of part 6 using the bases given in the image. See image: [Orthogonal Matrices and Change of Basis][1] [1]: https://i.stack.imgur.com/lsa5Z.png
When the boat's speed was 45 km/h, its engine suddenly stopped. 7 minutes after the engine stopped, the boat's speed was 12 km/h. It is known that the frictional force of the water acts on the boat, in the direction opposite to its movement, and is proportional to the speed of the boat. Find the law of the boat's motion in a river with a constant current of 3 km/h in the direction of the boat's motion. After how long will the speed of the boat be 4 km/h? My thoughts: $$ \left\{ \begin{array}{c} f'(x)=45-3f(x) \\ f'(7)=45-3f(7)=12 \rightarrow f(7)=11\\ \end{array} \right. $$ $$f(x)=15-4e^{21-3x}$$ so $$ f'(x)=12e^{21-3x}$$ and we need to solve $$f'(x)=4$$ so $$12e^{21-3x}=4 \rightarrow x=7+\frac{\ln 3}{3} = 7.366$$
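As a quick consistency check of the algebra above (this only verifies the candidate solution against the ODE as written, not the physical modelling):

```python
from sympy import symbols, Function, Eq, dsolve, solve

x = symbols('x')
f = Function('f')

# Solve f'(x) = 45 - 3 f(x) with the condition f(7) = 11:
sol = dsolve(Eq(f(x).diff(x), 45 - 3*f(x)), f(x), ics={f(7): 11})
print(sol)                        # f(x) = 15 - 4*exp(21 - 3*x)

speed = sol.rhs.diff(x)           # 12*exp(21 - 3*x)
print(solve(Eq(speed, 4), x))     # [log(3)/3 + 7], i.e. about 7.366
```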
2 players are playing a game: each guesses a number between 1 and 100, and the player with the higher of the two guesses wins $100 - \max(\text{guess}_1, \text{guess}_2)$. What should be the ideal guess in this game? My thought is first to assume a uniform distribution for the opponent, as we don't know anything about how they might guess, and in turn compute an expected payout: $E[x_1] = a(100 - a)$ for a guess $a$ between 1 and 100, yielding the optimal guess 50. But I'm not sure how to continue from here, as I am not convinced this is the optimal answer.
The question is: > Prove that the following statement is true. > > For any rectangular $m\times n$ matrix A, $(\mathrm{Nul}( A ))^\perp = \mathrm{Row} (A^T A)$ Now, my understanding of the Row space is that it is simply $\mathrm{Col}\ A^T$ (since it is just the Span of all linear combinations of row vectors of A.) I also know that $(\mathrm{Row}\ A)^\perp = \mathrm{Nul}\ A$. Since we can take a double orthogonal complement, we can interchange it: $((\mathrm{Row}\ A)^\perp)^\perp = (\mathrm{Nul}\ A)^\perp \implies \mathrm{Row}\ A = (\mathrm{Nul}\ A)^\perp$. Hence, the actual proof is now simplified to proving that $\mathrm{Row}\ A = \mathrm{Row}\ A^TA$. Here is my (incomplete) proof for the above: Let the matrix A be $m \times n$, consisting of entries of the form of vectors: $A = \left[\begin{array}{cccc}\vec{v_1} & \vec{v_2} & \cdots & \vec{v_n}\end{array}\right]_{m\times n}$, where $\vec{v_i} \in \mathbb{R}^m$. Now, we take the transpose of A: $A^T = \left[\begin{array}{c}\vec{v_1}^T \cr \vec{v_2}^T \cr\vdots \cr\vec{v_n}^T\end{array}\right]_{n\times m}$ Multiplying the two: $$A^TA = \left[\begin{array}{c}\vec{v_1}^T \cr \vec{v_2}^T \cr\vdots \cr\vec{v_n}^T\end{array}\right]_{n\times m}\left[\begin{array}{cccc}\vec{v_1} & \vec{v_2} & \cdots & \vec{v_n}\end{array}\right]_{m\times n}$$ Since matrix multiplication is row-column, we simply multiply the transposes and the vectors: $$\left[\begin{array}{cccc} \vec{v_1}^T\vec{v_1} & \vec{v_1}^T\vec{v_2} & \cdots & \vec{v_1}^T\vec{v_n} \cr \vec{v_2}^T\vec{v_1} & \vec{v_2}^T\vec{v_2} & \cdots & \vec{v_2}^T\vec{v_n}\cr \vdots & \vdots & \ddots & \vdots\cr \vec{v_n}^T\vec{v_1} & \vec{v_n}^T\vec{v_2} & \cdots & \vec{v_n}^T\vec{v_n}\end{array}\right]$$ By definition, this is the dot product: $$\left[\begin{array}{cccc} \vec{v_1}\cdot\vec{v_1} & \vec{v_1}\cdot\vec{v_2} & \cdots & \vec{v_1}\cdot\vec{v_n} \cr \vec{v_2}\cdot\vec{v_1} & \vec{v_2}\cdot\vec{v_2} & \cdots & \vec{v_2}\cdot\vec{v_n}\cr \vdots & \vdots & \ddots & \vdots\cr \vec{v_n}\cdot\vec{v_1} & \vec{v_n}\cdot\vec{v_2} & \cdots & \vec{v_n}\cdot\vec{v_n}\end{array}\right]$$ However, this result fails to give me any insight. How do I proceed with this proof?
>Let $(H, \ast)$ be a group, where $H \subseteq (0, \infty)$, which has these properties: >- $x \in H \Rightarrow \frac{1}{x} \in H$ >- $2023 \in H$, and >- $x \ast y = \frac{1}{x} \ast \frac{1}{y}$ for any $x, y$. > >Prove that $H$ is not an interval. As far as my attempts go, I haven't been able to do much, apart from manipulating the "$\ast$" relation a bit and finding things such as $x = \frac{1}{x} \ast \frac{1}{e} = \frac{1}{e} \ast \frac{1}{x}$ and so on (where $e$ is the identity element). I can see an outline of what I'm supposed to prove: if I were to take values either higher or lower than $2023$ and place them on an axis $Ox$, they would create intervals inside $(0,1)$ and $(1,\infty)$, which would only join up if $1$ were part of $H$. Could that be where I'm supposed to be heading with my proof?
I tried it using the double angle identity $$\sin{2x}=2\sin x\cos x$$ The answer that I got is $$\frac{-\cos 2x}{4} +c$$ However, I've also tried it using $u$-substitution. I let $u=\sin x$, so that $du = \cos x\,dx$, and the $\cos x$ in the integrand cancels against the $\cos x$ coming from $du$. However, the answer that I then get is $0.25 - 0.25\cos 2x + c$. So as you can see there is the extra term $0.25$ there. Is the second answer deemed to be wrong? If so, why? My book tells me to use the double angle formula but does not explain why.
I tried it using the double angle identity $$\sin{2x}=2\sin x\cos x$$ The answer that I got is $$\frac{-\cos 2x}{4} +c$$ However, I've also tried it using $u$-substitution. I let $u=\sin x$, so that $du = \cos x\,dx$, and the $\cos x$ in the integrand cancels against the $\cos x$ coming from $du$. However, the answer that I then get is $0.25 - 0.25\cos 2x + c$. So as you can see there is the extra term $0.25$ there. Is the second answer deemed to be wrong? If so, why? My book tells me to use the double angle formula but does not explain why.
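A quick symbolic check that the two antiderivatives differ only by a constant (sympy):

```python
from sympy import symbols, sin, cos, simplify, Rational

x = symbols('x')
F1 = -cos(2*x) / 4                   # from the double angle identity
F2 = Rational(1, 4) - cos(2*x) / 4   # from the u-substitution u = sin x

print(simplify(F2 - F1))                      # 1/4: a constant, absorbed into c
print(simplify(F1.diff(x) - sin(x)*cos(x)))   # 0: both differentiate to sin x cos x
```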
Let's state this question in terms of cokernels first; the case of kernels is easily handled since free modules are projective. Let $0 \to P \to N \to M \to 0$ be a short exact sequence of modules over a ring $R$. Given free resolutions $0 \to L_n \to L_{n-1} \to \cdots \to L_0 \to N \to 0$ and $0 \to K_m \to K_{m-1} \to \cdots \to K_0 \to P \to 0$, how does one construct a free resolution of $M$? The idea is to fit the given information into an exact diagram, but I cannot prove that the canonical injections $K_n \to L_n$ (obtained from the projectivity of free modules) split. Would this work with some extra care, or do I need to follow a different approach?
I was recently reading about parallel lines, which by definition are lines in the same plane which never meet and are equidistant. I came across a piece of text where the concept of parallel lines was also extended to curves, which is all right. My question is: can two concentric circles also be considered parallel? Is there something known as "parallel circles"?
2 players are playing a game: each guesses a number between 1 and 100, and the player with the higher of the two guesses wins $100 - \max(guess_{1}, guess_{2})$. What should be the ideal guess in this game? My thought is first to assume a uniform distribution for the opponent, as we don't know anything about how they might guess, and in turn compute an expected payout: $E[x_{1}] = a(100 - a)$ for a guess $a$ between 1 and 100, yielding the optimal guess 50. But I'm not sure how to continue from here, as I am not convinced this is the optimal answer.
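Under the uniform-opponent model above, a brute-force check of the expected payoff (a sketch; it assumes you win exactly when your guess is strictly higher, and it ignores what happens on ties):

```python
def expected_payoff(a, n=100):
    """Expected payoff of guessing a against an opponent uniform on 1..n,
    receiving 100 - max(guesses) = 100 - a only when a is strictly higher."""
    return sum(100 - a for b in range(1, n + 1) if b < a) / n
    # closed form: (a - 1) * (100 - a) / 100

best = max(range(1, 101), key=expected_payoff)
print(best, expected_payoff(best))   # 50 (tied with 51): (a-1)(100-a) peaks at 50.5
```

Of course this only gives the best response to a uniform opponent; the game-theoretic (equilibrium) answer requires reasoning about how a rational opponent responds in turn.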
Do you know modular arithmetic? This problem is much clearer to understand in that setting (it becomes obvious why the exponent $m$ is being taken as odd), while it seems totally opaque and strange as a plain induction problem.
Consider $\displaystyle(\ell_2, \lVert\cdot\rVert_\star), \lVert x\rVert_\star = \sum\limits_{k=1}^{\infty}\frac{|x(k)|}{k}$. What is its dual space? Is this space reflexive? My idea is to consider $\displaystyle\varphi: (\ell_2, \lVert\cdot\rVert_\star) \rightarrow (\ell_1, \lVert\cdot\rVert_1), \varphi(\{x(k)\}_{k=1}^\infty) = \left\{ \frac{x(k)}{k} \right\}_{k=1}^\infty$. Then $\lVert\varphi(x)\rVert_1 = \lVert x\rVert_\star$ and $\varphi$ is an isometric isomorphism onto its image. But $\text{Im}\,\varphi \subsetneq \ell_1$ because, for example, $\displaystyle\left\{\frac{1}{k^{3/2}}\right\} \in \ell_1$ while $\left\{\frac{1}{k^{1/2}}\right\} \notin \ell_2$. So how does one find the dual space in this case?
A corollary on page 287 of Hungerford's Algebra is > Corollary 6.13. If $F$ is an extension field of $E$ and $E$ is an extension field of $K$, then $$[F:E]_s[E:K]_s=[F:K]_s\mbox{ and } [F:E]_i[E:K]_i=[F:K]_i.$$ Here $[F:K]_s$ means the separable degree of $F$ over $K$ and $[F:K]_i$ the inseparable degree of $F$ over $K$. This may be true, but the proof Hungerford gives seems to work only when $F$ is finite-dimensional over $K$. Am I missing something or does Hungerford drop an assumption?
For this question I am asked to find the *radius and centrepoint* of the circle of curvature for the following function: > −7.87e^(2.65x) I calculate the radius correctly with the formula $R = 1/\rho$, which gives $R = 0.9838228573$ [working out for radius][1] However, when it comes to finding *the centrepoint* I have been getting it wrong every time. I use this formula I found online to do so: > (x+R(dy/dx), y+R) Does anyone know another method I could use to find the centrepoint? I have seen vector variations which come to the right answer but I don't understand them very much :/ Thank you all for any help, it is appreciated! [1]: https://i.stack.imgur.com/647Mo.png
For this question I am asked to find the *radius and centrepoint* of the circle of curvature for the following function: > −7.87e^(2.65x) at the point where $x = -1.25$ I calculate the radius correctly with the formula $R = 1/\rho$, which gives $R = 0.9838228573$ [working out for radius][1] However, when it comes to finding *the centrepoint* I have been getting it wrong every time. I use this formula I found online to do so: > (x+R(dy/dx), y+R) Does anyone know another method I could use to find the centrepoint? I have seen vector variations which come to the right answer but I don't understand them very much :/ Thank you all for any help, it is appreciated! [1]: https://i.stack.imgur.com/647Mo.png
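For a numerical cross-check, here is a short sympy computation using the standard center-of-curvature formulas for a graph $y = f(x)$, namely $\Big(x - \frac{y'(1+y'^2)}{y''},\ y + \frac{1+y'^2}{y''}\Big)$ (these are textbook formulas, not taken from the original exercise):

```python
from sympy import symbols, exp, sqrt, Abs, N

x = symbols('x')
y = -7.87 * exp(2.65 * x)

y1 = y.diff(x)
y2 = y.diff(x, 2)

R  = sqrt((1 + y1**2)**3) / Abs(y2)   # radius of curvature
xc = x - y1 * (1 + y1**2) / y2        # center of curvature, x-coordinate
yc = y + (1 + y1**2) / y2             # center of curvature, y-coordinate

pt = {x: -1.25}
print(N(R.subs(pt)))                  # ~0.9838, matching the radius above
print(N(xc.subs(pt)), N(yc.subs(pt)))
```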
In a paper by Frink (1987), it was shown that if $(x,y,z)$ is a solution to the equation $x^2+y^2 = z^2 +1$ and $(a,b,c)$ is a Pythagorean triple, i.e. a solution to $a^2 + b^2 = c^2$, then: $$(\bar{x},\bar{y},\bar{z}) = (x+a, y+ b, z+c) \\ (\bar{x}', \bar{y}',\bar{z}') = (x'+ a, y'+b, z'+c) $$ where $x+ x' = a, y+y' = b, \mbox{and } z+z' =c$, are pairs of solutions to $x^2+y^2 = z^2 +1$. I tried to prove this algebraically by showing that: \begin{aligned} &(x+a)^2 + (y+b)^2 - (z+c)^2 = 1 \\ &(x^2 + y^2 -z^2) + (a^2+b^2-c^2) + 2(ax+by-cz) = 1 \\ &\implies 2(ax+by-cz) = 0 \end{aligned} I've tried to verify this for some $(x,y,z)$ and $(a,b,c)$ and the last equation does hold. However, I am not certain if this holds **in general**. My question is how do I show that $(ax+by-cz) = 0$ given the properties provided?
In [this][1] question it is proved in the answers that If $f :[0 , \infty ) \to \mathbb{R}$ and $f$ is bounded on every $(a,b)$ such that $a<b <\infty$, then $\lim\limits_{x \to \infty }(f(x+1) - f(x) )=l$ implies $\lim\limits_{x \to \infty }\frac{f(x)}{x}=l$. This condition looks suspiciously similar to the Stolz–Cesàro theorem; see [this][2] for a proof using the Stolz–Cesàro theorem. ---- My question is: if $\lim\limits_{x\to\infty}\frac{f(x+1)-f(x)}{g(x+1)-g(x)}=l$, what are sufficient conditions on $f,g$ to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$? Of course, when $f$ and $g$ are both differentiable functions on $\mathbb{R}$ and $\lim\limits_{x \to \infty } g(x)= \infty$, this condition is sufficient to make $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$, but this hypothesis is too strong, and I want to generalise this result. I conjectured this more general hypothesis: if $f$ is bounded on every $(a,b)$ such that $a<b <\infty$ and $g$ is a continuous increasing function on $\mathbb{R}$, then $\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=l$. I couldn't prove this result but I think it is true. Is this conjecture true? If it is true, is there a weaker hypothesis? If it isn't true, is there a weaker hypothesis than $f$ and $g$ both being differentiable functions on $\mathbb{R}$ with $\lim\limits_{x \to \infty } g(x)= \infty$? [1]: https://math.stackexchange.com/questions/1642662/prove-that-lim-x-to-infty-fracfxx-l-if-lim-x-to-infty-f [2]: https://math.stackexchange.com/questions/4858423/can-stolz-cesaro-theorem-be-applied-to-this-problem-if-lim-limits-x-to-infty?noredirect=1&lq=1
Suppose $n \geq p \geq m$. Let $A \in \mathbb{R}^{m \times n}$ have full row rank and $B \in \mathbb{R}^{n \times p}$ have full column rank, and suppose the $m$-by-$p$ matrix $AB$ has full row rank. Can we obtain an estimate of $\|(A B)^{\dagger}\|$ in terms of $\|A^{\dagger}\|$ and $\|B^{\dagger}\|$ (and maybe $\|A\|$ and $\|B\|$)? We can write \begin{equation*} (A B)^{\dagger} = B^{*} A^{*} (A B B^{*} A^{*})^{-1}. \end{equation*} But I have no idea how to bound the inverse term. Any advice is appreciated. Many thanks!
How to solve this limit with $e$ powers in both the numerator and denominator?
That is: if a one-parameter differentiable group $\Phi(x,t):\mathbb{R}^d\times \mathbb{R}\to\mathbb{R}^d$ satisfies $$\Phi(\Phi(x_0,t_1),t_2)=\Phi(x_0,t_1+t_2),\qquad \Phi(x_0,0)=x_0$$ for all $x_0\in\mathbb{R}^d$ and $t_1,t_2\in\mathbb{R}$, does there exist a suitable function $F$ on $\mathbb{R}^d$ such that $$\frac{d}{dt}\Phi(x,t)=F(\Phi(x,t)),\qquad \Phi(x,0)=x?$$
Is the group property of solutions a characteristic property of autonomous dynamical systems?
I'm working through a paper on Proximal Policy Optimization (PPO) and am trying to understand the derivation of the optimal policy probabilities for the off-policy case as expressed in Equation 16. "Off-Policy Proximal Policy Optimization" is the name of this paper, and it can be found easily online. The derivation uses the Karush-Kuhn-Tucker (KKT) conditions, and I'm struggling to follow the specific steps to arrive at the final formula. Given the following constraint optimization problem for the case $ A_t \leq 0 $: $$ \min_{\pi} \sum_a \pi(a|s_t) \log \frac{\pi(a|s_t)}{\pi_{\text{old}}(a|s_t)} \text{s.t.} \quad \pi(a|s_t) \leq \mu(a|s_t)\big|_{s_t,a_t}, \quad \sum_a \pi(a|s_t) = 1, \quad \pi(a|s_t) > 0 $$ The KKT conditions that I'm considering are: 1. Stationarity: $$ \frac{\partial \mathcal{L}}{\partial \pi(a|s_t)} = 0 $$ 2. Primal feasibility: $$ \pi(a|s_t) \leq \mu(a|s_t)\big|_{s_t,a_t} $$ 3. Dual feasibility: $$ \lambda_a \geq 0 $$ 4. Complementary slackness: $$ \lambda_a \cdot (\pi(a|s_t) - \mu(a|s_t)\big|_{s_t,a_t}) = 0 $$ However, when applying these conditions, I'm not reaching the expected formula given in the paper: $$ \pi^{\text{Off-Policy PPO}}_{\text{new}}(a|s_t) = \begin{cases} \frac{\pi_{\text{old}}(a|s_t)(1 - \mu(a_t|s_t))}{1 - \pi_{\text{old}}(a_t|s_t)}, & \text{if } a \neq a_t \\ \mu(a_t|s_t), & \text{if } a = a_t \end{cases} $$ Can someone guide me through the detailed steps of using the KKT conditions to arrive at this result?
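For concreteness, here is the Lagrangian I would write down for this problem (my own reconstruction; the paper may organize it differently, and the positivity constraint is left out since the relative-entropy objective keeps $\pi$ in the interior): $$\mathcal{L} = \sum_a \pi(a|s_t)\log\frac{\pi(a|s_t)}{\pi_{\text{old}}(a|s_t)} + \sum_a \lambda_a\big(\pi(a|s_t) - \mu(a|s_t)\big) + \nu\Big(\sum_a \pi(a|s_t) - 1\Big),$$ whose stationarity condition reads $$\log\frac{\pi(a|s_t)}{\pi_{\text{old}}(a|s_t)} + 1 + \lambda_a + \nu = 0, \quad\text{i.e.}\quad \pi(a|s_t) \propto \pi_{\text{old}}(a|s_t)\,e^{-\lambda_a}.$$ By complementary slackness, $\lambda_a = 0$ wherever the inequality constraint is inactive, so off the active set $\pi$ is proportional to $\pi_{\text{old}}$; if the constraint is active only at $a = a_t$, then setting $\pi(a_t|s_t) = \mu(a_t|s_t)$ and normalizing forces the proportionality constant $\frac{1-\mu(a_t|s_t)}{1-\pi_{\text{old}}(a_t|s_t)}$ for $a \neq a_t$, which matches the claimed form.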
In a paper by Frink (1987), it was shown that if $(x,y,z)$ is a solution to the equation $x^2+y^2 = z^2 +1$ and $(a,b,c)$ is a primitive Pythagorean triple, i.e. a solution to $a^2 + b^2 = c^2$ with $\gcd(a,b,c) = 1$ (pairwise relatively prime), then: $$(\bar{x},\bar{y},\bar{z}) = (x+a, y+ b, z+c) \\ (\bar{x}', \bar{y}',\bar{z}') = (x'+ a, y'+b, z'+c) $$ where $x+ x' = a, y+y' = b, \mbox{and } z+z' =c$, are pairs of solutions to $x^2+y^2 = z^2 +1$. I tried to prove this algebraically by showing that: \begin{aligned} &(x+a)^2 + (y+b)^2 - (z+c)^2 = 1 \\ &(x^2 + y^2 -z^2) + (a^2+b^2-c^2) + 2(ax+by-cz) = 1 \\ &\implies 2(ax+by-cz) = 0 \end{aligned} I've tried to verify this for some $(x,y,z)$ and $(a,b,c)$ and the last equation does hold. However, I am not certain if this holds **in general**. My question is how do I show that $(ax+by-cz) = 0$ given the properties provided?